

Top 5 AI Breakthroughs in 2018 Every Business Should Know


2018 marks another big year of advancement in artificial intelligence. We watched the research breakthroughs of 2017 become business differentiators in 2018, and this year's research will become next year's capability. Here are our top breakthroughs of 2018 that can bring value to business in 2019.

1. LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS, BY ANDREW BROCK, JEFF DONAHUE, AND KAREN SIMONYAN (2018)

Overview

In recent years, the AI community has made progress using Generative Adversarial Networks (GANs) to generate new content like art and music. In 2018, a team from DeepMind successfully generated high-resolution, diverse images from complex datasets. Their models, dubbed BigGANs, produce entirely new images that look photorealistic.
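To make the underlying mechanism concrete, here is a minimal GAN training loop in PyTorch. It is an illustrative sketch only: the architectures, sizes, and hyperparameters are assumptions for a toy setup, not the class-conditional BigGAN architecture from the paper.

```python
# Minimal GAN training sketch in PyTorch (illustrative only; BigGAN itself
# uses far larger class-conditional architectures and heavy regularization).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumption: flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator: push real logits toward 1, fake logits toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into calling fakes real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The two networks improve together: the discriminator learns to spot fakes, which forces the generator to produce ever more realistic images.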


Why it Matters

AI researchers around the world are using this technique to generate brand new images of animals, objects, clothing, and art. AI-generated imagery can reduce the cost and increase the performance of marketing campaigns, inspire designers with sample designs, or even train doctors to identify skin diseases.

2. DISTRIBUTED FEDERATED LEARNING TO DECENTRALIZE DATA ACQUISITION AND MODEL TRAINING

Overview

Large technology companies now centralize vast amounts of user data. Recently, users have been pushing back, demanding data privacy and decentralized data ownership. In certain industries where data privacy rules are already established, applying machine learning is difficult or impossible. OpenMined is an open-source community focused on researching, developing, and promoting tools for secure, privacy-preserving, value-aligned artificial intelligence. With OpenMined tools, an AI model can be governed by multiple owners and trained securely on an unseen, distributed dataset.
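The core idea behind this kind of tooling is federated learning: each data owner trains on their own data locally, and only model weights ever leave the premises. Below is a minimal federated averaging (FedAvg) sketch in PyTorch; it illustrates the concept only and is not OpenMined's actual API, and the encryption layers OpenMined adds on top are omitted.

```python
# Minimal federated averaging (FedAvg) sketch in PyTorch. Raw data never
# leaves each owner; only model weights are shared and averaged centrally.
import copy
import torch
import torch.nn as nn

def local_update(model, data, targets, epochs=1, lr=0.01):
    """Train a private copy of the model on one owner's local data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(local(data), targets).backward()
        opt.step()
    return local.state_dict()

def federated_average(global_model, owner_datasets):
    """One round: each owner trains locally, the server averages weights."""
    states = [local_update(global_model, x, y) for x, y in owner_datasets]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model

# Toy usage: two "hospitals" with private data and a shared linear model.
model = nn.Linear(3, 1)
owners = [(torch.randn(8, 3), torch.randn(8, 1)),
          (torch.randn(8, 3), torch.randn(8, 1))]
for round_num in range(5):
    model = federated_average(model, owners)
```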


Why it Matters

Many businesses aspire to leverage AI but don't have (or cannot use) the data due to privacy concerns. OpenMined's tools encrypt the AI model and keep sensitive data local, so businesses can apply machine learning to local data while maintaining privacy. Federated learning will have a major impact on the adoption of machine learning in insurance, healthcare, and government.

3. DEEP CONTEXTUALIZED WORD REPRESENTATIONS, BY MATTHEW E. PETERS, MARK NEUMANN, MOHIT IYYER, MATT GARDNER, CHRISTOPHER CLARK, KENTON LEE, LUKE ZETTLEMOYER (2018)

Overview

The Allen Institute for Artificial Intelligence introduced a technique for embeddings called Embeddings from Language Models (ELMo). In ELMo-enhanced models, each word is vectorized on the basis of the entire context in which it is used, enhancing existing NLP techniques. The net result is better performance on language tasks with less training data.
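The key idea is that the same word gets a different vector in different sentences. The toy sketch below shows the shape of that idea with a tiny untrained bidirectional LSTM in PyTorch; ELMo itself uses large bidirectional language models pretrained on massive corpora, and the vocabulary and sentences here are made up for illustration.

```python
# Toy illustration of contextual embeddings in PyTorch: the same word gets
# a different vector depending on its sentence. This tiny BiLSTM is
# untrained and only demonstrates the idea, not ELMo's actual model.
import torch
import torch.nn as nn

vocab = {w: i for i, w in enumerate(
    "the bank approved my loan we sat on river".split())}
embed = nn.Embedding(len(vocab), 16)          # static word vectors
bilstm = nn.LSTM(16, 16, bidirectional=True, batch_first=True)

def contextual_vectors(sentence):
    ids = torch.tensor([[vocab[w] for w in sentence.split()]])
    out, _ = bilstm(embed(ids))               # each position sees full context
    return out[0]                             # (seq_len, 32)

v1 = contextual_vectors("the bank approved my loan")[1]   # "bank" (finance)
v2 = contextual_vectors("we sat on the river bank")[5]    # "bank" (river)
print(torch.allclose(v1, v2))  # False: same word, different context vector
```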

Why it Matters

This breakthrough will further the capabilities of natural language processing, enabling more accurate chatbots and better understanding of customer reviews. We should also see major advancements in search, document retrieval, and content recommendations.

4. AN EMPIRICAL EVALUATION OF GENERIC CONVOLUTIONAL AND RECURRENT NETWORKS FOR SEQUENCE MODELING, BY SHAOJIE BAI, J. ZICO KOLTER, VLADLEN KOLTUN (2018)

Overview

For sequence modeling tasks, it was generally agreed that recurrent neural networks were the gold standard. This paper tests that conventional wisdom by pitting RNNs against convolutional architectures. The results indicate that Temporal Convolutional Networks (TCNs) outperform canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
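What gives a TCN its long memory is the stacking of dilated, causal 1-D convolutions: doubling the dilation at each layer grows the receptive field exponentially with depth. Here is a minimal sketch of that building block in PyTorch; the paper's full TCN also adds residual connections, weight normalization, and dropout, which are omitted here for brevity.

```python
# Minimal temporal convolutional layer sketch in PyTorch: a dilated, causal
# 1-D convolution. Stacking layers with dilations 1, 2, 4, ... grows the
# receptive field exponentially, which is how TCNs get their long memory.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so outputs never depend on future timesteps (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=self.pad, dilation=dilation)

    def forward(self, x):                 # x: (batch, channels, time)
        out = self.conv(x)
        return out[:, :, :-self.pad]      # chop the right-side overhang

# A small TCN: dilations double each layer -> receptive field ~ 2^depth.
tcn = nn.Sequential(*[
    nn.Sequential(CausalConv1d(8, dilation=2 ** i), nn.ReLU())
    for i in range(4)                     # dilations 1, 2, 4, 8
])
y = tcn(torch.randn(1, 8, 100))           # same length out: (1, 8, 100)
```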

Why it Matters

TCNs appear to have a much longer memory than traditional methods. This is great news for tasks that require more memory, like speech translation, voice generation, and speech recognition. We should see far greater accuracy in digital assistants, and conversational interfaces similar to Google Duplex should proliferate.

5. TASKONOMY: DISENTANGLING TASK TRANSFER LEARNING, BY AMIR R. ZAMIR, ALEXANDER SAX, WILLIAM SHEN, LEONIDAS J. GUIBAS, JITENDRA MALIK, AND SILVIO SAVARESE (2018)

Overview

Transfer learning is a big reason we are seeing a proliferation of machine learning. Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that the AI has already learned. It provides a principled way to identify redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up complexity. This new research presents a computational approach for modeling the structure of the space of visual tasks. This is done by finding (first- and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning.
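In its simplest form, visual transfer learning means reusing a network pretrained on one task as the starting point for another. The sketch below shows that basic mechanism with torchvision, freezing a pretrained backbone and training only a new head; the class count and data are toy assumptions, and Taskonomy goes much further by quantifying which source tasks transfer best to which targets.

```python
# Minimal transfer-learning sketch with torchvision: reuse an ImageNet-
# pretrained backbone and train only a new head for a downstream task.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():       # freeze the transferred features
    param.requires_grad = False

num_classes = 5                           # assumption: a small target task
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # new head

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)      # toy batch
labels = torch.randint(0, num_classes, (4,))
loss = loss_fn(backbone(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```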


Why it Matters

Transfer learning has accelerated AI advancement in statistical and categorical areas. This research demonstrates how transfer learning can be applied to visual systems, and we should see the same acceleration there: better visual systems that emerge with less training data and lower computational costs. This could result in more and better autonomous vehicles, robotics, and visual recognition systems like the Amazon Go store.