Big Data plus Artificial Intelligence: can analysis lead to synthesis? (Part 1)

One of the most successful entrepreneurs of modern times, Sergey Brin, uttered a rather provocative phrase at the recent World Economic Forum in Davos. Speaking about the progress of Artificial Intelligence, he said that despite the achievements the IT industry has demonstrated with AI, he could not see clear evidence of the coming Deep Learning era.

The cofounder of Google, where AI algorithms are embedded in virtually every product, does not see an obvious global preoccupation with Artificial Intelligence. Curious. The enormous investments, the thousands of talented engineers gathered in specialized labs, the fierce competition with Microsoft, Facebook, and Apple in the segment of AI-powered products such as self-improving voice assistants, and the growing wave of buzzword-laden media news about daily achievements in AI consumerization throughout the world: all of this together is not sufficient for Mr. Brin to see the deep, imminent transformation of the global business environment that many IT experts expect from the current AI evolution.

There is something wrong here. Either Mr. Brin means something different from what other experts mean when speaking about Deep Learning, or he knows some important facts that Silicon Valley insiders conceal from the wider audience. Let’s try to figure this out together.


Genesis of the issue

Describing the history of Artificial Intelligence, many observers start counting from ancient Greece and China, where people tried to build automata capable of working without human assistance. Probably it was in those days that the idea of an artificial agent acting independently first came to the minds of engineers. However, it was the Industrial Revolution of the 19th-20th centuries that clearly distinguished automation, the rather repetitive actions of equipment following a pre-set program, from artificial intelligence as such, with its decision-making abilities based on self-adjustment and learning. At the dawn of electronics, robotics enthusiasts realized that computers required explicit end-to-end programming, which made the independence of automata highly conditional.

However, in 1943, Warren S. McCulloch and Walter Pitts published a paper describing, in a predictive manner, artificial neural networks that mimic the human brain. This paper became influential, inspiring further research into computer-based “deep learning” with artificial neurons able to perform logical functions.

The notorious Turing Test, supposedly able to distinguish a machine from a human, appeared in “Computing Machinery and Intelligence”, published by Alan Turing in 1950. Incidentally, this “imitation game” was claimed to have been passed in 2014 by a chatbot named Eugene Goostman, when about a third of the human judges in a chat session failed to recognize the artificial interlocutor, enough, the organizers argued, to satisfy Turing’s criterion.

The term “Artificial Intelligence” itself was coined for a 1956 conference at Dartmouth College in New Hampshire. One of its participants, Marvin Minsky, later announced optimistically: “Within a generation [...] the problem of creating artificial intelligence will substantially be solved”.

However, after some modest progress, two long periods of the so-called “AI winter” followed. The first “winter” lasted from 1974 to 1980, and after a short revival, when the British government was trying to compete with the Japanese Fifth Generation project, the second “winter” of 1987-1993 arrived, coinciding with the collapse of the market for specialized AI hardware as general-purpose personal computers caught up.

A definite resurgence of AI development came after 1997, when IBM’s Deep Blue beat the reigning chess world champion, Garry Kasparov, for the first time in history. Impressive as this achievement was, the number of combinations in chess is limited enough that a computer with sufficient computation power can search the tree of possible moves by brute force, guided by hand-crafted evaluation rules.
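
The brute-force idea can be shown on a toy game far smaller than chess. The sketch below is purely illustrative (it is not Deep Blue’s actual method, which added heavy pruning and chess-specific evaluation): it exhaustively searches a simple Nim-like game in which players alternately take one or two stones, and whoever takes the last stone wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone
    # Try every legal move; a position is winning if some move leaves the
    # opponent in a losing position. This is exhaustive game-tree search.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

losing = [n for n in range(1, 10) if not wins(n)]
print(losing)  # the positions where the player to move is doomed
```

Exhaustive search discovers, without any game-specific knowledge, that the multiples of three are losing positions. Chess admits the same treatment in principle; its tree is merely astronomically larger, which is why engines cut it down with heuristics.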


AI is created. What’s next?

A real breakthrough toward genuine AI, able to make sound decisions without a brute-force, pre-set approach, happened in 2016, when AlphaGo, an AI-powered program created by Google’s DeepMind, beat one of the world’s best Go players, a holder of the “divine” 9th dan. This win differs from Deep Blue’s 1997 achievement because the number of positions in Go is practically unlimited: about 10^170 board positions, compared with roughly 10^80 atoms in the observable universe. It is therefore impossible to precompute all possible Go moves and load them into a computer’s memory. This time AI demonstrated something similar to human intuition. AlphaGo was first taught to play Go by analyzing millions of games by strong players; then its multilayer artificial neural networks refined their own strategy by playing against themselves millions of times.

This is the point. In brief, the modus operandi of modern AI combines a fairly simple algorithm describing a desirable behavior with tons of data fed in for analysis in order to achieve the expected output. The process is called Deep Learning. No explicit end-to-end programming is now needed to explain to AI what an apple is, for example. Just feed it millions of pictures of apples from the Internet, and after a while the AI will possess a general idea of what “apple” means. Thus, it seems that huge datasets are exactly what we need to make machines independent and capable of decision-making. A neural network is trained on the data, learning how to transform input into a correct output. It looks as though today we can build machines performing cognitive functions that cover almost the entire scope of human intelligence. So the formula of success appears straightforward: feed AI enough Big Data, and intelligent machines will solve all your problems by themselves!
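
The “feed it examples instead of programming it” idea can be boiled down to a minimal sketch. Below, a single artificial neuron (the smallest building block of the deep networks discussed above) learns to separate two synthetic clusters of feature vectors; all the names and numbers are illustrative assumptions, not any production system’s code.

```python
import math
import random

random.seed(0)

def sample(cx, cy, label, n=200):
    """Synthetic 'images' reduced to two features (say, redness and roundness)."""
    return [((random.gauss(cx, 0.1), random.gauss(cy, 0.1)), label)
            for _ in range(n)]

data = sample(0.8, 0.9, 1) + sample(0.2, 0.3, 0)  # "apples" vs "not apples"

w = [0.0, 0.0]  # weights, one per feature
b = 0.0         # bias
lr = 0.5        # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Gradient-descent training: nudge the weights to reduce prediction error.
# Nobody writes a rule for what an "apple" is; the rule emerges from data.
for _ in range(300):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Real Deep Learning stacks many such neurons into multilayer networks and trains them on millions of labelled images, but the principle is the same: the mapping from input to correct output is learned from the data, not hand-coded.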


When Big Data is not a silver bullet

According to modern estimates, the global population generates about 10 billion megabytes of data every second, and this volume doubles roughly every 1.5 years. Almost everything we do on the Internet becomes valuable data for those who can analyze it and decide how to engage all of us more effectively in … generating even more data. Our engagement, which in many cases borders on an obsession with being present in cyberspace, results in exponentially multiplying personal data. The ubiquitous sensorization inherent in the fast-growing IoT adds its own “two cents” to the process. The popular concept of smart cities requires new sources of spatiotemporal data about urban activities for more holistic computational modelling of such domains as land use, transport, and energy. Thousands of traffic cameras and dumpster fill-level sensors stream continuous data to municipal servers. Big Data engenders more Big Data, which generates yet more Big Data, and so on. It looks like a vicious circle.

Let’s step back and ponder whether lots of data always mean Big Data. Can everything collected from sensors, computers, devices, and gadgets be accepted as valuable information? Why, for example, did Yahoo decide to give away about 13 terabytes of actual consumer data? Does the collected data always match the business challenges of organizations trying to get actionable insights, better opportunities, and a feasible return on investment? And why, after all, is a shortage of 140,000 to 190,000 big data analysts predicted by 2018 in the USA alone?

These and other Big Data questions, further thoughts on the intelligence of AI algorithms, the ethical issues of a post-labour environment, and suggestions on how to turn AI into IA (!) will be discussed in the next part of the article.

Don’t miss it; to be continued...

Tags: Big Data, Artificial Intelligence, Robot, Hardware, Application, Technology, Web, Software, Development, Indeema
