Top 10 trends in artificial intelligence (AI) technologies in 2018

Greetings!
Listeners of the first run of the "BigData Developer" course have reached the finish line: the final month has begun, in which the survivors will tackle a hands-on final project. Accordingly, we have also opened enrollment for a new run of this rather demanding course. Meanwhile, let's look at an interesting article on current trends in AI, which are closely tied to Big Data, ML, and related fields :)
Artificial intelligence has the close attention of heads of government and business leaders as a key tool for decision-making. But what is happening in the laboratories, where the discoveries of academic and corporate researchers will set the course of AI development for years to come? Our own team of researchers from PwC's AI Accelerator has surveyed the leading developments that both business leaders and technologists should watch closely. Here they are, and here is why they matter.
1. Deep learning theory: demystifying how neural networks work
What it is: deep neural networks, which mimic the human brain, have demonstrated the ability to "learn" from image, audio, and text data. Yet even though they have been in use for more than a decade, much about deep learning is still unknown, including how neural networks are trained and why they work so well. That may be changing thanks to a new theory that applies the information-bottleneck principle to deep learning. In essence, it holds that after an initial fitting phase, a deep neural network "forgets" and compresses noisy data (data sets containing a lot of extraneous, meaningless information) while preserving information about what the data represents.
Why it matters: a precise understanding of how deep learning works enables its wider development and use. For example, it can make optimal choices of network design and architecture more apparent, while providing greater transparency for safety-critical or regulatory applications. Expect more results from exploring this theory as it is applied to other types of deep neural networks and to network design in general.
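The quantity this theory tracks is mutual information between a layer's activations and the input (or the label). As a minimal, illustrative sketch (not the theory's full machinery), here is the plug-in estimator of mutual information for discretized activations, the kind of measurement used to observe the "compression" phase:

```python
import numpy as np

def mutual_information(x, t):
    """Estimate I(X; T) in bits for two discrete (integer-coded) arrays
    via the joint histogram -- the plug-in estimator used when tracing
    information-bottleneck dynamics of network layers."""
    x, t = np.asarray(x), np.asarray(t)
    xs, x_idx = np.unique(x, return_inverse=True)
    ts, t_idx = np.unique(t, return_inverse=True)
    joint = np.zeros((len(xs), len(ts)))
    for i, j in zip(x_idx, t_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal P(X)
    pt = joint.sum(axis=0, keepdims=True)   # marginal P(T)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])).sum())

# A deterministic "layer" T = f(X) that merges inputs loses information:
x = np.array([0, 1, 2, 3] * 100)
t_identity = x.copy()   # keeps everything: I(X;T) = H(X) = 2 bits
t_compress = x // 2     # merges {0,1} and {2,3}: I(X;T) = 1 bit
```

Compression in the theory's sense is exactly this drop in mutual information while the task-relevant bit survives.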
2. Capsule networks: mimicking the brain's processing of visual information
What it is: capsule networks, a new type of deep neural network, process visual information in much the same way as the brain, which means they can preserve hierarchical relationships. This contrasts sharply with convolutional neural networks, one of the most widely used types of neural network, which fail to take into account important spatial hierarchies between simple and complex objects, leading to misclassification and high error rates.
Why it matters: for typical identification tasks, capsule networks promise better accuracy by reducing errors, by as much as 50%. They also need far less data to train their models. Expect to see capsule networks used widely across many problem domains and deep neural network architectures.
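To make the idea concrete, here is a small NumPy sketch (values illustrative) of the "squash" nonlinearity that capsule networks use: a capsule outputs a vector whose direction encodes an entity's pose and whose length, squashed into [0, 1), encodes the probability that the entity is present:

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule 'squash' nonlinearity: shrinks a capsule's output vector
    so its length lies in [0, 1) (read as presence probability) while
    preserving its orientation (the entity's pose)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

short = squash(np.array([0.1, 0.0]))    # weak evidence: length near 0
long_ = squash(np.array([100.0, 0.0]))  # strong evidence: length near 1
```

The vector-valued output is what lets capsules carry the spatial-hierarchy information that scalar convolutional activations discard.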
3. Deep reinforcement learning: interacting with the environment to solve business problems
What it is: a type of neural network that learns by interacting with its environment through observations, actions, and rewards. Deep reinforcement learning (DRL) has been used to learn gaming strategies, for Atari games and Go, including the famous AlphaGo program that defeated a human champion.
Why it matters: DRL is the most general-purpose of all learning techniques, so it can be used in most business applications. It requires less data than other techniques to train its models. Even more notably, it can be trained via simulation, which eliminates the need for labeled data entirely. Given these advantages, expect to see more business applications combining DRL with agent-based simulation in the coming year.
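The observe-act-reward loop is easiest to see in tabular Q-learning, which DRL extends by replacing the table with a deep network. A minimal sketch on a hypothetical five-state corridor (all states, rewards, and hyperparameters are illustrative):

```python
import numpy as np

# Toy corridor: states 0..4, reward 1 only on reaching state 4.
# Actions: 0 = left, 1 = right. Tabular Q-learning; DRL swaps the
# table for a deep network but keeps the same update rule.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Environment transition: move left or right, reward on the goal."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1)

for _ in range(1000):
    s = int(rng.integers(n_states - 1))       # random non-goal start state
    for _ in range(50):                       # cap episode length
        # epsilon-greedy: mostly exploit, sometimes explore
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

policy = Q.argmax(axis=1)  # greedy policy: "always go right" once learned
```

Note that the agent never sees labeled examples, only rewards from the simulated environment, which is precisely why DRL can dispense with data labeling.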
4. Generative adversarial networks: pairing neural networks to spur learning and lighten the computing load
What it is: a generative adversarial network (GAN) is a type of unsupervised deep learning system implemented as two competing neural networks. One network, the generator, creates fake data that looks exactly like the real data set. The second network, the discriminator, ingests both authentic and generated data. Over time, each network improves, allowing the pair to learn the entire distribution of the given data set.
Why it matters: GANs open deep learning up to a wider range of unsupervised tasks, where labeled data does not exist or is too expensive to obtain. They also reduce the load required of a deep neural network, since the work is shared between the two networks. Expect to see more business applications of GANs, such as cyberattack detection.
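The two-network contest can be sketched in one dimension. Below, a toy generator (a single shift parameter) and a logistic discriminator play the adversarial game by alternating gradient steps; every name and number is illustrative, and this is a teaching sketch rather than a production GAN:

```python
import numpy as np

# Minimal 1-D GAN: real data ~ N(3, 1); the "generator" shifts noise by
# theta, the "discriminator" is a logistic classifier on scalars.
rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

theta = 0.0          # generator: G(z) = theta + z
w, b = 0.1, 0.0      # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for _ in range(2000):
    real = rng.normal(3.0, 1.0, batch)
    fake = theta + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) -> 1, D(fake) -> 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = ((d_real - 1) * real + d_fake * fake).mean()
    grad_b = ((d_real - 1) + d_fake).mean()
    w, b = w - lr * grad_w, b - lr * grad_b

    # Generator step (non-saturating loss): push D(fake) -> 1.
    d_fake = sigmoid(w * fake + b)
    theta -= lr * ((d_fake - 1) * w).mean()
```

At equilibrium the discriminator can no longer tell real from fake, which happens when theta has drifted toward the real mean (3): the generator has learned the data distribution's location without any labels.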
5. Lean and augmented data learning: tackling the labeled-data challenge

What it is: a major challenge in machine learning (and deep learning in particular) is the need for large amounts of labeled data to train the system. Two broad techniques can help solve this: (1) synthesizing new data and (2) transferring a model trained on one task or domain to another. Techniques such as transfer learning (transferring knowledge gained from one task or domain to another) and one-shot learning ("extreme" transfer learning with only one relevant example, or none at all) make up the family of "lean data" learning techniques. Similarly, synthesizing new data through simulation or interpolation yields more data, augmenting the existing data to improve learning.
Why it matters: these techniques let us tackle a wider variety of problems, especially those where labeled data is scarce. Expect to see more variants of lean and augmented data learning, and more types of learning applied to a broad range of business problems.
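As a concrete example of synthesis through interpolation, here is a minimal mixup-style augmentation sketch (the function name and parameters are illustrative):

```python
import numpy as np

def interpolate_augment(X, y, n_new, alpha=0.2, seed=0):
    """Synthesize new labeled examples by convexly interpolating random
    pairs of existing ones (a mixup-style scheme) -- one way to augment
    a small labeled set, per the 'lean data' techniques above."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(X), size=n_new)
    j = rng.integers(len(X), size=n_new)
    lam = rng.beta(alpha, alpha, size=n_new)[:, None]  # mixing weights
    X_new = lam * X[i] + (1 - lam) * X[j]
    y_new = lam * y[i] + (1 - lam) * y[j]              # soft labels
    return X_new, y_new

# Two labeled points become a hundred synthetic ones along the segment:
X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([[0.0], [1.0]])
X_aug, y_aug = interpolate_augment(X, y, n_new=100)
```

The synthetic examples are cheap but plausible, which is the whole point: the model sees far more of the input space than the original labels covered.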
6. Probabilistic programming: languages that ease model development
What it is: a high-level programming language that makes it easy to describe a probabilistic model and then "solve" that model automatically. Probabilistic programming languages make it possible to reuse model libraries, support interactive modeling and formal verification, and provide the abstraction layer needed for general, efficient inference across universal model classes.
Why it matters: probabilistic programming languages can accommodate the vague and incomplete information that is so common in business. We will see wider adoption of these languages and expect them to be applied to deep learning as well.
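What such a language automates can be done by hand for a tiny model. Here is a grid-approximation sketch of Bayesian inference for a coin's bias; the model and numbers are illustrative, and a probabilistic programming language would let you state just the model and get the posterior for free:

```python
import numpy as np

# Model: p ~ Uniform(0, 1); flips ~ Bernoulli(p).
# Inference by brute force: evaluate prior x likelihood on a grid of p.
grid = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(grid)            # uniform prior over the bias

flips = [1, 1, 1, 0, 1, 1, 0, 1]      # 6 heads, 2 tails (toy data)
heads, tails = sum(flips), len(flips) - sum(flips)
likelihood = grid**heads * (1 - grid)**tails

posterior = prior * likelihood
posterior /= posterior.sum()          # normalize to a distribution

posterior_mean = float((grid * posterior).sum())  # near the Beta(7,3) mean, 0.7
```

The posterior captures exactly the "vague and incomplete information" the text mentions: a full distribution over the unknown bias rather than a single point estimate.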
7. Hybrid learning models: combining approaches to model uncertainty
What it is: different types of deep neural networks, such as GANs and DRL, have shown great promise in their performance and broad applicability to different types of data. However, deep learning models do not model uncertainty the way Bayesian or probabilistic approaches do. Hybrid learning models combine the two approaches to leverage the strengths of each. Some examples of hybrid models are Bayesian deep learning, Bayesian GANs, and Bayesian conditional GANs.
Why it matters: hybrid learning models expand the variety of business problems that can be addressed to include deep learning with uncertainty. This can help us achieve better performance and model explainability, which in turn can encourage wider adoption. Expect deep learning methods to gain Bayesian equivalents, while probabilistic programming languages begin to incorporate deep learning.
8. Automated machine learning (AutoML): model creation without programming
What it is: developing machine learning models requires a laborious, expert-driven workflow that includes data preparation, feature selection, model or technique selection, training, and tuning. AutoML aims to automate this workflow using a range of statistical and deep learning techniques.
Why it matters: AutoML is part of what is seen as the democratization of AI tools, letting business users develop machine learning models without a deep programming background. It will also speed up the time it takes data scientists to create models. Expect more commercial AutoML packages and AutoML integration into larger machine learning platforms.
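The core loop of AutoML, in miniature, is an automated search over model configurations scored on held-out data. A toy sketch using polynomial degree as the only "hyperparameter" (all data and numbers are illustrative; real AutoML systems also search preprocessing steps and architectures):

```python
import numpy as np

# Quadratic ground truth plus noise; the "AutoML" loop below picks the
# model complexity automatically via a holdout validation score.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.1, 60)

idx = rng.permutation(60)            # shuffled train/validation split
train, val = idx[:40], idx[40:]

best = None
for degree in range(1, 8):           # the hyperparameter search space
    coefs = np.polyfit(x[train], y[train], degree)
    err = np.mean((np.polyval(coefs, x[val]) - y[val]) ** 2)
    if best is None or err < best[1]:
        best = (degree, err)         # keep the best-scoring configuration

best_degree, best_err = best
```

A degree-1 fit scores badly (it misses the curvature), so the search settles on an adequate complexity without anyone hand-tuning it, which is the AutoML promise in one loop.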
9. Digital twins: virtual replicas beyond industrial applications
What it is: a digital twin is a virtual model used to facilitate detailed analysis and monitoring of physical or psychological systems. The concept emerged in the industrial world, where it has been widely used to analyze and monitor things like wind farms and industrial systems. Now, using agent-based modeling (computational models that simulate the actions and interactions of autonomous agents) and system dynamics (a computational approach to analyzing and modeling behavior), digital twins are being applied to non-physical objects and processes, including customer behavior forecasting.
Why it matters: digital twins can drive the development and broader adoption of the Internet of Things (IoT), providing predictive diagnostics and support for IoT systems. Going forward, expect greater use of digital twins both in physical systems and in modeling consumer choice.
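A digital twin can be as simple as a discrete-time system-dynamics model run forward to predict maintenance needs. A toy sketch of a hypothetical gearbox-wear twin, with purely illustrative names and numbers:

```python
import numpy as np

def simulate_wear(load_profile, wear_rate=0.001, failure_level=1.0):
    """Advance the virtual model one step per load reading; wear grows
    with the square of load. Returns the wear history and the step at
    which the twin predicts failure (None if it survives the horizon)."""
    wear, history = 0.0, []
    for t, load in enumerate(load_profile):
        wear += wear_rate * load ** 2
        history.append(wear)
        if wear >= failure_level:
            return np.array(history), t
    return np.array(history), None

# Run the twin under two hypothetical operating regimes:
steady = simulate_wear([1.0] * 2000)   # nominal load
harsh = simulate_wear([2.0] * 2000)    # double load -> ~4x faster wear
```

Fed with live sensor readings instead of a fixed load profile, the same forward model becomes predictive maintenance: the twin flags the failure step before the physical asset reaches it.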
10. Explainable AI: understanding the black box
What it is: today there are many machine learning algorithms in use that sense, think, and act across a huge variety of applications. Yet many of these algorithms are considered "black boxes", shedding little light on how they reached their results. Explainable AI is a movement toward developing machine learning techniques that produce more explainable models while maintaining prediction accuracy.
Why it matters: explainable, provable, transparent AI will be critical for establishing trust in the technology and will encourage wider adoption of machine learning techniques. Enterprises will adopt explainable AI as a requirement or best practice before undertaking large-scale AI deployments, while governments may make explainable AI a regulatory standard in the future.
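One widely used model-agnostic technique in this direction is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. A minimal sketch with illustrative data (the "black box" here is just a linear model, but the procedure works for any predictor):

```python
import numpy as np

# Synthetic data: the target depends strongly on feature 0, weakly on
# feature 1, and not at all on feature 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 4.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the "black box"
base_err = np.mean((X @ w - y) ** 2)

importance = []
for f in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, f] = rng.permutation(Xp[:, f])    # break this feature's link to y
    importance.append(np.mean((Xp @ w - y) ** 2) - base_err)
```

The error increase per feature is the explanation: a large jump means the model leans on that feature, and near zero means it is ignored, regardless of what happens inside the box.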
As always, we welcome comments and questions here, or you can discuss the topic with Ksenia at an open lesson.