Artificial intelligence - Part 1 - Looking into the crystal ball
We encounter AI all the time. ChatGPT & co. are currently revolutionising the world, but how does AI actually work? In this article, we take a closer look at the topic of AI.

Enough Bubatz, now it's back to factual topics. I'm sure all of you have already tried ChatGPT. If not: ChatGPT and DALL-E can be used free of charge at https://www.bing.com/chat, and at https://chat.lmsys.org/ you can try out different neural networks and compare them directly. The results are amazing! Conversations with the AI often feel almost natural, and the answers are often precise and well formulated. In some cases, ChatGPT is even superior to us humans: papers written with AI sometimes manage to outperform their human competitors, and linguistically most humans can no longer hold a candle to it. And yet there is one thing AI is not: intelligent. As I wrote in my article "It has long since started", AI is above all a tool. A tool for making predictions.
Looking into a crystal ball or an intelligent tool?
Artificial intelligence does not (yet) exist. What we do have, and often refer to as such, are neural networks that we link together to generate an output (e.g. images or text) from an input (usually text). The input is the starting point for the neural network to make a prediction about what the result of the input could be. Neural networks are therefore actually tools for "looking into the future".
Roughly speaking, there are two major problem areas: how a neural network works, and how it is trained before use. In this series, I will take up these topics one by one and provide an explanation and classification of each area.
Firstly, a basic understanding of what a neural network is actually intended for is necessary.

Recognising patterns and making predictions
Neural networks fill a crucial technological gap in the world of "artificial intelligence". They enable machines to recognise and learn complex patterns and relationships in data that are too complex or subtle for traditional algorithms.
In contrast to conventional algorithms, which follow explicit instructions, neural networks learn independently from gigantic amounts of data. They adjust internal weights to make predictions and improve those predictions over time. This approach allows them to tackle tasks that were considered too difficult or impossible for traditional algorithms: image and speech recognition, natural language processing and even artistic creations like the cover image of this article shown above.
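To make "adjusting internal weights" less abstract, here is a minimal sketch: a single artificial neuron with one weight and one bias, trained by gradient descent. All names and data are invented for illustration; real networks like the ones behind ChatGPT have billions of weights, but the principle is the same.

```python
# A single neuron learns a pattern from examples by repeatedly
# nudging its internal weights to reduce its prediction error.

def train(samples, lr=0.1, epochs=100):
    w, b = 0.0, 0.0  # internal weights, initially uninformed
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b          # current prediction
            error = pred - target     # how wrong was it?
            w -= lr * error * x       # adjust weights to shrink the error
            b -= lr * error
    return w, b

# Learn the hidden pattern y = 2x + 1 from a handful of examples.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))   # weights end up close to 2 and 1
print(round(w * 10 + b, 1))       # prediction for the unseen input x = 10
```

Nobody told the neuron the rule "multiply by two and add one"; it recovered it purely from the data, which is exactly what distinguishes this approach from an explicitly programmed algorithm.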
Neural networks are therefore supposed to make predictions. I would like to explain this in more detail using the following examples.
A diagram is shown below. The diagram shows the fictitious number of visitors per weekday for the worst place in Nuremberg: the university library.
If we now look at the diagram, we can immediately recognise a trend: there are fewer people in the library on Friday than on the other working days, and on Saturday the students lie hungover in the corner (this is where Elotrans©, the magic fairy dust that effectively prevents hangovers, comes to the rescue!).
When asked on which day it is easiest to get a seat in the library, everyone can probably tell from this graphic that Friday to Sunday are ideal. This is a prediction: using data from the past, we are able to find patterns that allow us to make statements about the future, in other words, to look into a crystal ball. We can say with a certain degree of probability what will happen.
However, if we imagine that the underlying data is not an easily comprehensible set of weekdays and corresponding visitor numbers, but unimaginably large databases with millions of entries, then the task of finding patterns in it becomes almost impossible. Neural networks solve this problem and thus fill this technological gap.
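The library example can be sketched in a few lines of code. The visitor numbers below are invented (the article's diagram is fictitious anyway); the "prediction" is simply reading the pattern out of past data, which is trivial here but intractable by hand at the scale of millions of entries.

```python
# Fictitious weekly visitor counts for the university library.
visitors = {
    "Mon": 420, "Tue": 450, "Wed": 440, "Thu": 430,
    "Fri": 210, "Sat": 60, "Sun": 80,
}

# "Prediction": the easiest days to get a seat are those with the
# fewest past visitors -- a pattern a human spots at a glance here,
# but one that gets buried in a database with millions of entries.
quiet_days = sorted(visitors, key=visitors.get)[:3]
print(quiet_days)  # ['Sat', 'Sun', 'Fri']
```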

Areas of application
Neural networks can perform a number of tasks that were previously impossible or difficult:
- Image recognition: From facial recognition in smartphones to autonomous driving, neural networks are perfect tools for identifying and classifying objects and patterns in images.
- Speech processing: Whether speech recognition in voice assistants or machine translation, neural networks enable computers to interpret and generate human speech.
- Text analysis: Sentiment analysis, topic modelling and automatic text summarisation are just a few examples of how neural networks can extract valuable information from unstructured text data.
- Recommendation systems: Whether personalised product suggestions on online platforms or the creation of music playlists - neural networks learn from user behaviour and preferences to deliver relevant and appealing recommendations.
All of these applications require predictive modelling. As we have learned, neural networks can analyse historical data to predict future events and make informed decisions.
Problems
However, neural networks are by no means a miracle cure for all applications. They have significant disadvantages compared to conventional systems, which reduce the number of conceivable applications for neural networks.
- Data dependency: Neural networks require large amounts of training data in order to achieve reliable results. Collecting and processing this data can be expensive and time-consuming. This is often associated with copyright issues. In addition, errors in the training data have an impact on the finished neural network.
- Lack of explainability: The "black box" nature of neural networks can make it difficult to understand how they arrive at their decisions. This can lead to distrust of the results and may rule out their use in some applications.
- Computing power: The training and execution of complex neural networks requires considerable computing power. They are therefore not economical for every application and require greater investment than conventional systems.
- Generalisation problem: A neural network can achieve very good results after training when it is confronted with its training data again, but if it is confronted with data that it has never seen before, its performance can drop sharply. This can lead to serious errors in real-life applications.
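The generalisation problem can be illustrated with a deliberately extreme toy "model". The data and both models below are invented: one merely memorises its training examples, the other has actually learned the underlying rule.

```python
# Training data following the hidden rule y = 2x.
train_data = {1: 2, 2: 4, 3: 6}

def memoriser(x):
    # "Overfitted" model: perfect recall of training examples,
    # no idea what to do with anything it has never seen.
    return train_data.get(x, 0)

def general_model(x):
    # A model that learned the underlying pattern instead.
    return 2 * x

print([memoriser(x) for x in train_data])  # flawless on training data
print(memoriser(10), general_model(10))    # 0 vs. 20 on unseen input
```

Evaluated only on its training data, the memoriser looks perfect; confronted with new input, its performance collapses, which is exactly the failure mode described above.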
In the next part...
In this part, we have looked at the use cases and benefits of neural networks. We'll see why it's not magic in the next part.