There is no hotter topic at the moment than AI. AI is everywhere. It will do everything. It will replace us in everything. It will cause us to lose our jobs...
If we asked 100 people at random on the pavement of a big city what AI is, the answers would mostly fall into two categories. The first: that it is a kind of chatbot, very smart, answering questions and helping kids with their homework. The second: that it is a generator of detailed, colourful, amazing, but also often slightly weird pictures. Meanwhile, almost every company is now boasting about the use of artificial intelligence in its day-to-day work, and in the shops you will come across a smart hoover or even a smart face cream...
The concept of artificial intelligence is therefore much broader than these popular images. To help sort out all the related concepts, we are starting a series of articles under the heading 'AI Alphabet'. In more than 20 articles, we will cover both the history of artificial intelligence research and the most important concepts in this field.

Want to learn more about artificial intelligence? First find out what the real thing is
We won't build artificial intelligence without first thinking about what we actually want to emulate. When do we recognise observed behaviour as intelligent? What exactly is intelligence?
We do not have a precise definition of this phenomenon. In simple terms, we can assume that intelligence is the ability to achieve goals. From this definition it follows that one can be intelligent to varying degrees, and that intelligence is demonstrated by both humans and animals. Under changing conditions, we - animals, humans and computer programs - are able to achieve our goals to varying degrees. And since we can establish some gradation here, we may also be tempted to measure intelligence.
Artificial intelligence - what were its beginnings?
It all started just after the Second World War, when computer science had developed enough for us to start thinking about building the first intelligent solutions. A key figure in the field of artificial intelligence research was the famous English mathematician Alan Turing. In a lecture as early as 1947, he pointed the way forward for artificial intelligence: according to him, it was necessary to write intelligent programmes rather than build intelligent machines. And this is what we still do today - artificial intelligence is, more or less, intelligent software. Turing's story is told in the film The Imitation Game, starring Benedict Cumberbatch.
What are the characteristics of intelligence (including the artificial kind)?
A computer program that learns from previous interactions with its environment can be called intelligent. A simple example is rote learning - memorising past inputs and the answers that went with them - used not only in online dictionaries, but also in other pattern recognition applications.
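To make this concrete, below is a minimal sketch of such memory-based learning in Python. It is purely illustrative - the class and method names are invented for this example, not taken from any library - and 'learning' here is nothing more than storing and reusing past interactions.

```python
# A minimal sketch of memory-based (rote) learning: the program remembers
# every interaction and reuses it later. All names are illustrative.

class RoteLearner:
    def __init__(self):
        self.memory = {}  # maps a previously seen input to the answer it was taught

    def answer(self, query):
        # Reuse a remembered answer if this exact query was seen before.
        return self.memory.get(query, "I don't know yet")

    def learn(self, query, correct_answer):
        # "Learning" is simply storing the interaction for next time.
        self.memory[query] = correct_answer

learner = RoteLearner()
print(learner.answer("cat"))   # -> I don't know yet
learner.learn("cat", "kot")    # teach it, as an online dictionary might
print(learner.answer("cat"))   # -> kot
```

Of course, such a program only handles inputs it has already seen - which is exactly why we need the higher levels of learning described next.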
A higher level of learning is deep learning, which is mainly based on neural networks that allow the transformation of input data, such as photos, into structured knowledge, such as information about the people depicted in them. A characteristic feature of deep learning is minimal human involvement in the learning process, which sometimes leads to difficulties in interpreting the results, known as the 'black box' problem.
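As a hedged illustration of that transformation, here is a toy forward pass through two layers, written with NumPy. The weights are random rather than learned, and the class labels are invented for the example; a real deep learning system would train millions of such weights on labelled data.

```python
# A toy sketch of how a neural network transforms raw input (pixels) into
# structured output (class probabilities). Weights are random here; a real
# network would learn them from data.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # One dense layer: a linear transformation followed by a non-linearity (ReLU).
    return np.maximum(0.0, weights @ x + bias)

# Raw input: e.g. 64 pixel intensities of a tiny image, flattened.
pixels = rng.random(64)

# Two layers squeeze 64 numbers down to 3 class scores.
w1, b1 = rng.normal(size=(16, 64)), np.zeros(16)
w2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

hidden = layer(pixels, w1, b1)
scores = w2 @ hidden + b2

# Softmax turns raw scores into probabilities over 3 hypothetical classes,
# e.g. "person", "animal", "object" in a photo-tagging task.
probabilities = np.exp(scores) / np.exp(scores).sum()
print(probabilities)
```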

Another area in which humans - like most mammals - continue to outperform machines is inference. This is the process of creating new knowledge from past experience, both by deduction (for example: "I left my wallet in the room or in the kitchen; since it's not in the room, it must be in the kitchen") and by induction ("Last time I was in this mall, I had trouble parking, so this time I'll opt for the metro"). These examples show how we create new knowledge; implementing such reasoning in the form of computer algorithms is a process that is still in development.
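The deduction example from the text is simple enough to express directly in code. The sketch below is just that - a toy encoding of elimination, with all names invented for the illustration:

```python
# A toy encoding of the wallet deduction: the wallet is in one of a known
# set of places, so ruling places out leaves the answer.

possible_places = {"room", "kitchen"}  # "I left my wallet in the room or in the kitchen"
ruled_out = {"room"}                   # "it's not in the room"

remaining = possible_places - ruled_out
if len(remaining) == 1:
    print(f"The wallet must be in the {remaining.pop()}.")  # -> kitchen
```

Induction is the harder part: generalising from past cases is statistical rather than strictly logical, which is one reason it remains an active area of research.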
Another feature we are trying to instil in artificial intelligence algorithms is pattern search. This is a form of knowledge synthesis, already used in medical diagnostics, for example. Artificial neural networks can learn to recognise common patterns, for example in X-ray images, and in this way support doctors in detecting abnormalities. Among other things, such technologies make it possible to speed up the diagnosis of cancer.
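To give a feel for what 'searching for a pattern' means at its simplest, here is a deliberately naive sketch: sliding a small template over a 1-D signal (think of a single row of pixels from a scan) and flagging exact matches. Real diagnostic systems use trained networks and far more robust matching; the data here is made up.

```python
# A naive pattern search: slide a small template over a signal and report
# where it matches exactly. Purely illustrative toy data.

import numpy as np

signal = np.array([0, 0, 1, 3, 1, 0, 0, 1, 3, 1, 0], dtype=float)
pattern = np.array([1, 3, 1], dtype=float)  # the shape we are looking for

for i in range(len(signal) - len(pattern) + 1):
    window = signal[i : i + len(pattern)]
    if np.allclose(window, pattern):
        print(f"Pattern found at position {i}")  # -> positions 2 and 7
```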
Humanity test
The first test of whether artificial intelligence has reached human-level intelligence was devised by the aforementioned Alan Turing - hence its name, 'the Turing test'. It assumes that if we are able to talk to a programme and cannot distinguish it from a human, then the application passes the test. It is interesting how history has come full circle: the father of artificial intelligence predicted that we would talk to AI, just as we now do with ChatGPT and other conversational neural network models.
One might be tempted to say that passing the Turing test is within reach - we have a growing problem determining whether we are talking to an AI or a human.
By the way, an interesting fact: the popular CAPTCHA tests that require us to 'mark traffic lights in pictures' are also a form of Turing test - a reversed one, designed to prove that the interlocutor is actually a human and not a machine.
AI winter vs AI boom: A cautionary tale about technological enthusiasm
The notion of an 'AI winter' offers a historical perspective on the cyclical nature of technological advances and societal expectations, particularly in the field of artificial intelligence. An AI winter is a period characterised by a significant decline in enthusiasm, funding and progress in AI research.
The term emerged during a debate at the 1984 meeting of the AAAI (the American Association for Artificial Intelligence), reflecting disillusionment after the exaggerated expectations of the 1970s. These winters set in when overly ambitious predictions about AI failed to materialise, leading to severe cuts in funding and interest. The periods 1974-1980 and 1987-2000 are significant examples, in which, among other things, the failure of early machine translation and criticism of perceptrons led to widespread scepticism about the feasibility of AI technology.
This phenomenon highlights the importance of setting realistic goals and managing public and investor expectations in emerging technologies.
In contrast, the AI landscape has seen a dramatic revival since around 2012, commonly referred to as the 'AI boom'. This renewed energy is largely driven by significant breakthroughs in machine learning and neural networks, which have found applications in a variety of sectors, including healthcare, automotive, finance and many others. Investment and research have increased, driven by successes in natural language processing, image recognition and autonomous systems.
Companies such as Google, Amazon, Facebook, Microsoft and Apple are leading the way, investing billions in AI development and acquiring startups - a signal of solid belief in the technology's potential. This boom is not only about technological advances, but also about a deeper understanding of the ethical implications of AI and an effort to ensure that its integration into society is handled with accountability and transparency. Even as we witness this boom, the lessons of previous AI winters remain relevant, reminding us of the need for sustainable and transparent growth in AI.