From the 1950s until very recently, the natural habitat of advanced Artificial Intelligence (AI) was the research laboratory and science fiction. With a handful of exceptions, practically all systems with human-like intelligence have appeared in futuristic films or in works such as those of Isaac Asimov. In recent years, however, this scenario has been changing radically.
The great technological wave that we usually refer to as Big Data has revolutionized the business environment. Organizations facing the need for digital transformation have become voracious consumers of vast amounts of data; and, for the first time in the history of AI, there is widespread demand for systems with advanced, human-equivalent intelligence capable of processing that data. This is happening in virtually every sector, as it is rare to find a business or public administration that cannot benefit from intelligent, automated analysis of its data.
We live in a historic moment, not because organizations want to incorporate something radically new, but because they are now aware that technology exists that can process all the data they hold, do so on timescales far shorter than any human could manage, and even provide the necessary intelligence.
We could say that Big Data was simply the first wave and that the great tsunami is about to arrive. The new Big Data architectures emerged from the big Internet companies: digitally native organizations, fully connected since their conception. Today we are seeing Big Data proliferate rapidly to encompass all organizations and all sectors, because in a digital, global ecosystem, companies that are not digital natives also need to become data eaters.
Machine Learning: Automatic Learning
One of the keys to advanced AI is learning. It is becoming increasingly common for us to ask machines to learn on their own. We cannot afford to pre-program rules to deal with the infinite combinations of input data and situations that appear in the real world.
Instead, we need machines that can program themselves; in other words, machines that learn from their own experience. The discipline of Machine Learning takes on this challenge, and thanks to the perfect storm we have just entered, all the Internet giants have moved fully into the world of machine learning, offering cloud services for building applications that learn from the data they ingest.
Today, machine learning is more accessible than ever to any programmer. To experiment with these services we have platforms such as IBM Watson Developer Cloud, Amazon Machine Learning, Azure Machine Learning, TensorFlow and BigML.
Understanding learning algorithms is easy if we look at how we ourselves learn as children. Reinforcement learning encompasses a group of machine learning techniques that we often use in artificial systems. In these systems, as in children, behaviors that are rewarded tend to increase in likelihood, whereas behaviors that are punished tend to disappear.
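The reward-and-punishment dynamic described above can be sketched in a few lines of Python. This is a minimal illustration under invented assumptions (the action names, learning rate and reward values are made up for the example), not the API of any real reinforcement learning library:

```python
# Minimal reinforcement-learning sketch: behaviors that are rewarded become
# more likely; behaviors that are punished fade away.

def update_preferences(prefs, action, reward, lr=0.5):
    """Raise the preference of a rewarded action, lower it if punished."""
    prefs = dict(prefs)
    prefs[action] = max(0.01, prefs[action] + lr * reward)
    return prefs

def action_probabilities(prefs):
    """Normalize preferences into a probability of each behavior occurring."""
    total = sum(prefs.values())
    return {action: p / total for action, p in prefs.items()}

prefs = {"greet": 1.0, "ignore": 1.0}
# Repeatedly reward "greet" and mildly punish "ignore".
for _ in range(5):
    prefs = update_preferences(prefs, "greet", reward=+1.0)
    prefs = update_preferences(prefs, "ignore", reward=-0.2)

probs = action_probabilities(prefs)
```

After training, the rewarded behavior dominates: `probs["greet"]` is far larger than `probs["ignore"]`, mirroring how a rewarded behavior increases its likelihood of occurrence.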
These types of approaches are called supervised learning, because they require human intervention to indicate what is right and what is wrong (that is, to provide the appropriate reinforcement). In many other applications of cognitive computing, humans provide not only reinforcement but also some of the semantics the algorithms need in order to learn. For example, in the case of software that must learn to distinguish the different types of documents an office receives, it is humans who initially have to tag a significant set of examples so that the machine can later learn.
That is, it is humans who initially know whether a document is a complaint, a petition, a claim, a registration request, a change request, and so on. Once the algorithms have a training set provided by humans, they are able to generalize and begin classifying documents automatically, without human intervention.
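The office-document example can be sketched as a toy supervised classifier: humans first tag a training set, then the machine generalizes to new documents. The labels and texts below are invented for illustration, and the word-overlap scoring is a deliberately naive stand-in for a real NLP pipeline:

```python
from collections import Counter, defaultdict

def train(labeled_docs):
    """Build per-label word frequencies from human-tagged examples."""
    word_counts = defaultdict(Counter)
    for text, label in labeled_docs:
        word_counts[label].update(text.lower().split())
    return word_counts

def classify(word_counts, text):
    """Pick the label whose training vocabulary best matches a new document."""
    words = text.lower().split()

    def score(label):
        counts = word_counts[label]
        total = sum(counts.values())
        return sum(counts[w] / total for w in words)

    return max(word_counts, key=score)

# The human-provided training set: each example carries a manual tag.
training_set = [
    ("i wish to complain about the delay", "complaint"),
    ("this service was terrible i complain", "complaint"),
    ("please register me for the service", "registration"),
    ("i request registration as a new user", "registration"),
]
model = train(training_set)
label = classify(model, "i want to complain about this")
```

Once trained on the tagged examples, the classifier assigns `"complaint"` to the new, unlabeled document with no further human intervention.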
At present, it is precisely these training requirements that largely limit the power of these algorithms, since good training data sets (often manually labeled by humans) are needed for them to learn effectively. In the field of artificial vision, for algorithms to learn to detect objects in images automatically, they first have to be trained with a good set of tagged images, such as Microsoft COCO.
Deep Learning: The Approach to Human Perception
Possibly the future of machine learning lies in a turn towards unsupervised learning. In this paradigm, algorithms are able to learn without prior human intervention, drawing their own conclusions about the semantics embedded in the data. There are already companies that focus entirely on unsupervised learning approaches, such as Loop AI Labs, whose cognitive platform is capable of processing millions of unstructured documents and autonomously building structured representations.
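A classic illustration of unsupervised learning is clustering: the algorithm groups unlabeled data points by similarity, with no human-provided tags at all. The sketch below is a plain k-means implementation on invented 2-D points (the data and starting centroids are assumptions for the example, not from any real system):

```python
# Unsupervised learning sketch: k-means groups unlabeled points into
# clusters, drawing its own conclusions about structure in the data.

def kmeans(points, centroids, iterations=10):
    """Plain k-means on 2-D points; returns final centroids and clusters."""
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in clusters.items()
        ]
    return centroids, clusters

# Two obvious groups of points, but no labels anywhere.
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centroids, clusters = kmeans(points, [(0.0, 0.0), (10.0, 10.0)])
```

The algorithm discovers the two groups on its own, which is the essence of the unsupervised paradigm: structure is inferred from the data, not supplied by a human labeler.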
The discipline of machine learning is booming thanks to its application in the worlds of Big Data and IoT. Advances and improvements on the more traditional algorithms, from ensemble learning to Deep Learning, are very much in vogue today because of their ability to approach human perceptual power.
The Deep Learning approach uses logical structures that more closely resemble the organization of the mammalian nervous system, with layers of processing units (artificial neurons) that specialize in detecting certain features of perceived objects. Artificial vision is one of the areas where Deep Learning provides a considerable improvement over more traditional algorithms. Several Deep Learning libraries and environments run on powerful modern CUDA GPUs, such as those built on NVIDIA's cuDNN library.
Deep Learning represents a closer approximation to the way the human nervous system works. Our brain has a microarchitecture of great complexity, in which we have discovered differentiated nuclei and areas whose networks of neurons are specialized in performing specific tasks.
Thanks to neuroscience, the study of clinical cases of brain damage and advances in diagnostic imaging, we know, for example, that there are specific language centers (such as Broca's and Wernicke's areas), and that specialized networks exist to detect different aspects of vision, such as edges, the inclination of lines and symmetry, as well as areas intimately involved in recognizing faces and their emotional expression (the fusiform gyrus, in collaboration with the amygdala).
Deep Learning's computational models mimic these architectural features of the nervous system, allowing the overall system to contain networks of processing units that specialize in detecting certain features hidden in the data. This approach has yielded better results in computational perception tasks than monolithic networks of artificial neurons.
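The layered idea can be sketched with a tiny hand-wired network: units in a hidden layer detect simple features, and a later layer combines them. The weights below are set by hand to solve XOR and are purely illustrative; a real Deep Learning system would learn such weights from data across many more layers:

```python
import math

def sigmoid(x):
    """Standard logistic activation for an artificial neuron."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer of artificial neurons."""
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def xor_net(a, b):
    # Hidden layer: one unit specializes in detecting "a OR b",
    # the other in detecting "a AND b" — each detects one feature.
    hidden = layer([a, b], weights=[[10, 10], [10, 10]], biases=[-5, -15])
    # Output layer combines the detected features: OR but NOT AND.
    out = layer(hidden, weights=[[10, -10]], biases=[-5])
    return round(out[0])
```

No single neuron computes XOR, yet the layered combination of feature detectors does, which is the core architectural intuition behind Deep Learning.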
So far we have seen that cognitive computing is based on the integration of typically human psychological processes such as learning or language. In the coming years we will see how artificial cognitive systems expand in multiple applications in the digital ecosystem.
In addition, we will see how learning and language begin to integrate with further psychological functions such as semantic memory, reasoning, attention, motivation and emotion, so that artificial systems come ever closer to human-level intelligence; or perhaps, as already suggested by ConsScale (a scale for measuring cognitive development), machines will reach levels higher than humans.