
Past, Present and Future of Artificial Intelligence


Artificial intelligence is a commonly used term these days, but where does artificial intelligence actually come from? And what might it ultimately bring us if we look beyond its history toward the future?

What is AI?

When we talk about artificial intelligence (AI), it usually refers to ‘intelligent’ software, devices or machines that perform tasks independently. It is important to note that a distinction is made between two types: applied (weak) AI and general (strong) AI.

The first form is the most common and is programmed to perform a specific task in the same way every time, while the second form can learn and develop tasks by itself through machine learning. Both types use computer algorithms.

An algorithm is a kind of recipe: a sequence of preset instructions that turns an input into a desired output, just as following a recipe turns ingredients into a meal.
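To make that concrete, here is a minimal, purely illustrative sketch in Python (the function and the numbers are made up for this post): a fixed series of steps that turns inputs into an output.

```python
def bake_bread(flour_g, water_ml):
    """A toy 'recipe' algorithm: fixed steps that turn inputs into an output."""
    dough = flour_g + water_ml                  # step 1: mix the ingredients
    risen = dough * 1.5                         # step 2: let the dough rise
    return f"a loaf of roughly {risen:.0f} g"   # step 3: bake and serve

print(bake_bread(500, 300))  # -> a loaf of roughly 1200 g
```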

Automata

We know, of course, that software technology is a thing of the last century, but there have been self-moving creations called ‘automata’ for thousands of years. These are ‘machines’ that automatically perform a series of movements through built-in mechanics.

For example, there were self-moving sacred images in Egypt, China and Greece that were driven not by modern computer algorithms but by a series of mechanical movements. Some visitors therefore believed that these images had thoughts and emotions.

This indicates that, for the people of that time, these automata were in effect already a kind of artificial intelligence. Hero of Alexandria (circa 10-70 AD) is perhaps one of the best-known Greek builders of automata; for example, he made temple doors that opened and closed automatically.

In the Middle Ages

In the Middle Ages there were also many scientific developments in the Arab world, and it is known that scientists and engineers there worked on automata as well. For example, the Islamic scholar Al-Jazari (1136-1206) described complex humanoid automata and reflected on converting thoughts into matter.

It is also important to mention that the word 'algorithm' is derived from the name of the Persian mathematician Mohammed Al-Khwarizmi (ca. 780-845), whose systematic methods of calculation gave mathematics the concept.

Someone we should certainly not forget is Leonardo da Vinci (1452-1519). He designed a mechanical knight: a humanoid armored automaton that could stand, sit, raise its visor and move its arms, driven by a system of pulleys and cables. Da Vinci also designed much more, such as an automatic machine for testing the tensile strength of cables.

An important thinker was René Descartes (1596-1650). He described the bodies of animals as nothing but complex machines, made up of bones, muscles and organs.

This, according to him and the adherents of his ideas, meant that nature could be simplified to mechanical instructions.

Together with thinkers such as Gottfried Wilhelm Leibniz (1646-1716) and Thomas Hobbes (1588-1679), he tried to express all rational thought in mathematical symbols, suggesting that intelligence itself could be described symbolically. This line of thinking is still the basis for today's AI, in which 'and', 'or' and 'not' statements, algebraic manipulations, numbers and equations, together with a binary system, describe modern systems.

Statistics and probability

Another development in mathematics that is important for the functioning of AI is statistics and probability theory. Within this branch of mathematics, Thomas Bayes (1702-1761) developed the Bayesian framework in the 18th century, and it is still an important part of AI and machine learning today.
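The core of that framework is Bayes' rule, which updates a prior belief once new evidence arrives. A minimal sketch in Python, with made-up numbers purely for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Toy example: how likely is a rare condition given a positive test?
p_h = 0.01              # prior: 1% of people have the condition
p_e_given_h = 0.95      # probability of a positive test if they do
p_e_given_not_h = 0.05  # false-positive rate if they do not

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # total probability of a positive test
posterior = p_e_given_h * p_h / p_e

print(f"P(condition | positive test) = {posterior:.2f}")  # about 0.16
```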

The innovative Jacques de Vaucanson (1709-1782) brought Descartes' idea of animals to life by designing a duck that appeared to eat on its own and then digest and excrete the food. It was therefore also called the Canard Digérateur.

In addition, de Vaucanson made a self-playing flute player, which could even perform twelve different songs.

Later, in 1840, Innocenzo Manzetti (1826-1877) made an improved version of the self-playing flute player: a figure seated on a chair that played songs produced by levers, crankshafts and air tubes.

Plenty of automata were also designed in Asia. In Japan, for example, karakuri ningyo were popular from the 17th to the 19th century: mechanical dolls that could, among other things, serve tea by themselves.

The automata had their heyday from 1848 to 1914. This is also known as the Golden Age of automata. During this period, many mechanical songbirds and timepieces were produced in Paris and scattered around the world. In China, for example, these were very popular.

Although automata are not always included in the history of artificial intelligence, it is questionable whether that is justified, since these creations are certainly forerunners of applied (weak) AI.

20th century

In science fiction, the impossible became possible, such as truly intelligent 'robotic' creatures. These machines went a step further than automata: automata only perform specific tasks, while humanoid robots, like humans, can react to and learn from new situations.

At the beginning of this post, I described this type as general (strong) AI. This concept was fairly new in the early 20th century.

For example, Lyman Frank Baum (1856-1919) wrote the popular book 'The Wonderful Wizard of Oz', which features such robots. Incidentally, it was not until 1921 that the word 'robot' was first used, by the Czech writer Karel Capek (1890-1938) in his play Rossum's Universal Robots.

Science fiction has certainly played a big part in motivating scientists to actually start building these kinds of humanoid robots. It is therefore not surprising that mathematical logic, the language of modern AI, really broke through in this same period.

The Church-Turing thesis, first formulated by Alonzo Church (1903-1995) in the mid-1930s, concerns whether a simple formal system, such as a binary system of 0s and 1s, can carry out any mathematical deduction. This idea is closely tied to the Turing machine, with which every possible calculation can be performed by means of simple algorithms (given enough memory and time).

This implies that every possible (complex) computer can be simulated by a Turing machine. Alan Turing (1912-1954) – also famous for cracking the Nazi Enigma code – came up with this concept. Today, the Turing machine is still used to study the power of computer systems.
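To give a feel for how simple a Turing machine really is, here is a small, purely illustrative simulator in Python (the machine, its rule table and the function name are made up for this post); this particular machine just flips every bit on the tape and then halts:

```python
# A minimal Turing machine: a tape, a read/write head, and a rule table.
def run_turing_machine(tape, rules, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: (state, symbol read) -> (symbol to write, head move, next state)
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_bits))  # -> 01001_
```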

In addition, Alan Turing devised the well-known Turing test, which he described in 1950. This is a thought experiment that aims to determine whether a computer can be as intelligent as a human being in a conversation.

How seriously we should take this test remains to be seen, as Turing did not clearly define its limits. A computer may be able to pretend to be human without 'understanding' anything itself. In any case, these ideas of Alan Turing can be called an important impetus for the modern concept of AI.

During World War II, neurophysiologist Warren McCulloch (1898-1969) and mathematician Walter Pitts (1923-1969) were the first to bring together the workings of neurons in the brain and mathematics (logic and statistics). In doing so, they developed a model of such networks in the form of an electrical network.

The first artificial neural network machine was built by Marvin Minsky (1927-2016) and Dean Edmonds in 1951, using 3,000 vacuum tubes to simulate a network of 40 neurons.
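The underlying neuron model from McCulloch and Pitts is remarkably simple, and a hypothetical sketch (the function name and the numbers are my own, purely for illustration) fits in a few lines of Python: the neuron 'fires' when the weighted sum of its binary inputs reaches a threshold.

```python
# A McCulloch-Pitts style neuron: fires (outputs 1) when the weighted
# sum of its binary inputs reaches a threshold, otherwise stays silent (0).
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With equal weights and a threshold of 2, the neuron behaves like a logical AND.
print(neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(neuron([1, 0], [1, 1], threshold=2))  # -> 0

# Lowering the threshold to 1 turns the same neuron into a logical OR.
print(neuron([1, 0], [1, 1], threshold=1))  # -> 1
```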

To actually obtain AI, computers first had to improve: shortly after the war, the extremely expensive computers of the day could process information and execute commands, but could barely store data, let alone learn anything.

Fortunately, this soon changed, because Arthur Samuel (1901-1990) built the first self-learning program in the 1950s, in the form of the game of checkers. In 1959 he also coined the term 'machine learning'.

What followed was a huge development in the field of AI. A lot of research was done and there seemed to be a growing understanding of how to use AI in specific situations to solve problems.

This caused governments and universities to increasingly invest in AI. The main hope was that a machine would soon be available that could translate spoken language.

In the early 1970s, the first full-scale humanoid robot, WABOT-1, was built at Waseda University in Japan. The robot could move its limbs, see, listen and speak (Japanese). In addition, the robot was able to measure distances and directions to objects.

People were so optimistic about WABOT-1 and other recent inventions that in 1970 it was said that within eight years there would be a machine with the intelligence of an average human being. Unfortunately, this turned out to be technologically too complex to achieve in the short term. These disappointments led to a period when AI research and development was put on the back burner.

Meanwhile, computers continued to develop, and interest was rekindled in 1982 when John Hopfield devised a neural network that works much like real neurons in our brain, namely in two directions: the input is first processed forward through the network, and the result is then fed back through the network to train the neurons, so that the next input produces a better result.

This is now also known as the 'back-propagation algorithm' and it stimulated the development of deep neural networks. Once again, significant investments were made in AI. You would expect that lessons had been learned from earlier inflated expectations, but disappointment followed again because developments did not move fast enough.
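The core idea can be shown in a few lines. The sketch below is a hypothetical, simplified single-neuron version (the learning rate, iteration count and the OR target are arbitrary choices of mine): the forward pass makes a prediction, and the error is then propagated back to nudge the weights.

```python
import numpy as np

# Forward pass, then propagate the error back to adjust the weights:
# a single sigmoid neuron gradually learns the logical OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [1]])

rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 1))
bias = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    output = sigmoid(X @ weights + bias)          # forward pass: make a prediction
    grad = (output - y) * output * (1 - output)   # backward pass: error gradient
    weights -= 0.5 * (X.T @ grad)                 # nudge the weights...
    bias -= 0.5 * grad.sum()                      # ...and the bias

print(np.round(sigmoid(X @ weights + bias), 2).ravel())  # values approach [0, 1, 1, 1]
```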

As a result, most funding dried up from 1987 onwards. Nevertheless, researchers continued, because they certainly saw enough potential. This led to the AI high point of the 1990s, when chess grandmaster Garry Kasparov was defeated by IBM's Deep Blue in 1997.

This chess-playing computer showed that an age-old game, which at first could only be played by humans, was suddenly dominated by AI.

Deep Blue gained its knowledge of chess by studying old games of Kasparov and of his competitors. This allowed it to calculate what the best countermove would be for each move Kasparov made.

In 1997, speech recognition software was also implemented in Windows for the first time. Another highlight was the self-driving car that drove 211 km on its own in 2005. Incidentally, this was 80 years after Francis Houdina had already driven a radio-controlled car through New York.

Present

So in the end it seemed that the decades-old optimistic ideas could become reality. This had everything to do with computers gaining more computing power and memory.

You could say that in the world of AI it is actually mainly a matter of waiting for the further development of computer technology, while the underlying scientific theories are already ahead. 

So it was only 19 years after Deep Blue that we saw Google's AlphaGo beat the world champion Lee Sedol in a game of Go. This game is much more complex than chess, so it was mainly a matter of waiting for enough computing power to run such a complex – also called deep – neural network.

Today it has become quite easy to process huge amounts of data – known as 'big data' – into useful information using AI.

This can be done by using machine learning to recognize patterns. In this way, besides data mining, speech and image recognition are also possible. Many marketing companies, for example, use machine learning to learn from their customers.

As computers become more powerful, more and more becomes possible.

Moore's law tells us that the number of transistors on a chip doubles roughly every two years. In other words, computing power is progressing exponentially. Unfortunately, Moore's law will not hold forever, and we will have to look for alternatives, such as the quantum computer, to make AI ever more complex.
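What exponential growth means in practice is easy to compute. A tiny illustrative calculation, using the often-quoted figure of roughly 2,300 transistors for the Intel 4004 of 1971 as a starting point:

```python
# Moore's law as arithmetic: doubling every two years means growing
# by a factor of 2 ** (years_elapsed / 2).
transistors_1971 = 2_300  # roughly the Intel 4004
for year in (1981, 1991, 2001, 2011, 2021):
    factor = 2 ** ((year - 1971) / 2)
    print(year, f"~{transistors_1971 * factor:,.0f} transistors")
```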

For now, we will mainly have to make do with AI systems that are extremely good at one task, such as driving a car, recognizing objects, mowing the lawn, vacuuming, making predictions or searching the internet. In addition, there are even AI systems that can take over the work of a journalist.

Future

We are currently seeing a huge increase in applications of and research into AI. This creates a great deal of hype, with the result that many companies and governments are investing in these developments.

We have seen in the past that such periods ultimately produced expectations higher than computer technology could deliver, resulting in a downturn for AI.

Because we know that it is theoretically possible, we can assume that when the technology is ready, AI will eventually be able to transcend human intelligence and take over many tasks from us.

What all this will ultimately yield remains an open question: a doomsday scenario in which humans become subordinate to AI, or a world in which people and AI 'live' side by side, with AI supplementing us with new insights and forms of creativity.
