Introduction to Artificial Intelligence

The objective of this first article is to define the main concepts around artificial intelligence.

We specialise in artificial intelligence (AI) algorithms in the field of computer vision (algorithms that analyse images).

Does none of this mean anything to you yet? Don't worry: we are starting a series of articles to familiarise you with all of these concepts.


A little history

Artificial intelligence as a field of research dates back to 1956 and the famous Dartmouth conference. Some experts, however, consider that it began in 1958 with Frank Rosenblatt's first neural network, the perceptron.

AI was thus born out of the work of Alan Turing, the founder of modern computing, and of other mathematicians and programmers of the 1950s.

Alan Turing

AI then enjoyed its first golden age, lasting until 1974, during which researchers and investors poured millions into research. Many theories were formulated, but the limited power of the computers of the time soon ended this wave, causing a first slowdown in AI research that lasted until 1980.

In 1980, expert systems were born: programs that encode human expertise as rules to follow in given situations.

Their usefulness for companies allowed AI to grow again, until 1987: this was the second golden age.

Once again, the expert systems that had raised so many hopes quickly reached their limits and research suffered numerous budget cuts, causing a second slowdown in AI until 1993.

From then until 2011, AI experienced a new golden age during which it grew, diversified, and scored its first media successes (including Deep Blue's chess victory over Kasparov in 1997). It also rode the Big Data era, converging with Business Intelligence and benefiting from new computing capacity and larger data sets.

Since 2011, AI has been enjoying yet another golden age, driven by Deep Learning (DL), a family of algorithms more autonomous and versatile than anything that came before. Although data- and compute-intensive, DL can be applied to real-world cases and has won its first competitions in the field of image recognition.


Some definitions

An algorithm is a sequence of actions performed to achieve a result. For example:

- Choose a number
- Multiply it by 9
- If it is odd, subtract 8
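The three steps above can be written directly as a small function. This is just an illustrative sketch; the function name and the sample input are our own:

```python
def example_algorithm(n):
    """Run the three steps of the example algorithm on a chosen number."""
    n = n * 9          # multiply it by 9
    if n % 2 == 1:     # if it is odd, subtract 8
        n -= 8
    return n

print(example_algorithm(3))  # 3 * 9 = 27, which is odd, so 27 - 8 = 19
```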

AI for Artificial Intelligence: refers to an algorithm that imitates human behaviour, not in how it computes but in the result it produces. For example, an AI can predict the weather from the wind, the clouds, and so on, and output the probability of good weather.

ML for Machine Learning: a sub-branch of AI, machine learning is a set of statistical methods for learning from data. A distinction is generally made between supervised and unsupervised learning:

- Supervised learning consists of training a model on examples with known results. Training is done on a training data set: the model's predictions are compared with the known answers so that its accuracy can be improved, and performance is then measured on a separate test set.
- Unsupervised learning consists of letting the model learn autonomously: it is given data without any examples of the expected outputs.
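To make the supervised case concrete, here is a minimal sketch in plain Python (no ML library): a 1-nearest-neighbour classifier, where "training" simply means memorising labelled examples, and accuracy is then measured on a separate test set. The data and names are hypothetical:

```python
def nearest_neighbour(train, x):
    """Predict the label of x from the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Hypothetical supervised data: temperatures labelled with known results.
train_set = [(2, "cold"), (5, "cold"), (20, "warm"), (25, "warm")]
test_set = [(4, "cold"), (22, "warm")]

# Compare the model's predictions with reality on the test set.
accuracy = sum(nearest_neighbour(train_set, x) == y
               for x, y in test_set) / len(test_set)
print(accuracy)  # 1.0 on this toy data
```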

DL for Deep Learning: a sub-branch of ML, deep learning relies mainly on neural networks to solve problems. The principle remains the same: given example data, the model adapts its internal parameters so that it predicts the correct outputs. Unsupervised learning is also possible (as in ML): the AI can, for example, group data by similarity to form clusters. The most impressive applications of AI today (including facial recognition, image classification and text generation) are made possible by DL.

Data Science

Data Science and Machine Learning (Source: Eurodecision)

Data science is a cross-cutting discipline within AI, providing tools and methods for:

- structuring and organising data in databases;
- collecting, integrating and aggregating data from several, possibly heterogeneous, sources using ETL tools;
- validating, controlling and cleaning data;
- learning from data, using machine learning algorithms;
- visualising and disseminating data, in the form of dashboards and reports.

Generally speaking, data science is very often viewed as applied machine learning: finding the right algorithm, implementing and optimising it, and then putting it into production.

Data science differs from business intelligence (BI) in that it relies on ML: BI is geared towards explaining past data, whereas data science aims to predict the future.

Conclusion of this first part

In this first part, we have focused on the definition of the different concepts around AI. In the next part, we will look in more detail at what revolves around data, the different applications and the different algorithms.

If you have any comments or requests for further information, please do not hesitate to contact us. 🧐
