The talk was about ChatGPT and the new GPT-4 that was announced last week, but before he talked about GPT he gave a brief summary of the history of AI and how it has changed over the years. In the beginning, AI consisted of nothing but if/then statements; it had no statistics or learned knowledge behind it. This is what he called GOFAI (good old-fashioned AI). One of the first chatbots, ELIZA from 1966, was built entirely out of if/then statements: it replied with certain messages if certain words appeared in the prompt. This created a sort of fake intelligence, because it looked like it knew what you were saying and how to respond, but it was all hand-coded. Deep Blue was created in this way too.
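As a toy illustration of GOFAI (not the actual ELIZA script, just a hypothetical sketch of the same if/then idea):

```python
# A minimal ELIZA-style chatbot: pure if/then rules, no statistics or
# learned knowledge. Keywords and replies are made up for illustration.
RULES = [
    ("mother", "Tell me more about your family."),
    ("sad", "Why do you feel sad?"),
    ("hello", "Hello! What is on your mind?"),
]

def reply(prompt: str) -> str:
    text = prompt.lower()
    for keyword, response in RULES:
        if keyword in text:       # fire the first matching rule
            return response
    return "Please go on."        # canned fallback when nothing matches

print(reply("I had an argument with my mother"))
# -> "Tell me more about your family."
```

It looks responsive, but every answer was written out by a programmer in advance.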
The way AI is made now is with a neural network. The idea for this network came from the brain itself, because (as far as we know) our brain works in a similar way. Of course, a neural network isn't exactly like our brain, but it resembles it. One of the first concepts for a neural network, called the perceptron, was created in 1958.
It was based on inputs and outputs; however, you can change the weight of each input to change the outcome and see which values generate a better result.
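A minimal sketch of that idea, assuming Rosenblatt's classic learning rule (the task, data, and learning rate below are made up for illustration):

```python
# A perceptron: weighted inputs, a threshold, and a rule that nudges the
# weights whenever the output is wrong.
def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Learn the logical AND function from four examples.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(10):                    # a few passes over the data
    for inputs, target in data:
        error = target - predict(weights, bias, inputs)
        # Shift each weight in the direction that reduces the error.
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print([predict(weights, bias, x) for x, _ in data])   # -> [0, 0, 0, 1]
```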
Around 1980 the neural network as we know it today was developed: a network with multiple nodes that can be tweaked to generate different outcomes. These nodes/parameters have to change automatically based on the outcome, because there can be millions upon millions of them.
Even though this was already developed in the 1980s, it didn't work well back then, and the neural network itself has not changed all that much since. The main reasons neural networks work a lot better now are multiple layers, more data, and more computational power. One way such a network improves is by changing its nodes based on the outcome, or by rewarding it if the outcome is good (reinforcement learning).
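To make the "change the nodes automatically based on the outcome" idea concrete, here is a sketch of a tiny two-layer network learning the XOR problem. Real systems use backpropagation; this toy uses a crude numeric gradient purely to keep the code short, and the network size, task, and constants are my own illustrative choices:

```python
import math, random

random.seed(1)
HIDDEN = 4                                        # hidden nodes
w = [random.uniform(-1, 1) for _ in range(3 * HIDDEN + HIDDEN + 1)]

def forward(w, x1, x2):
    out = w[-1]                                   # output bias
    for h in range(HIDDEN):                       # layer of tanh nodes
        act = math.tanh(w[3*h] * x1 + w[3*h+1] * x2 + w[3*h+2])
        out += w[3 * HIDDEN + h] * act
    return out

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR

def loss(w):                                      # how wrong the outcome is
    return sum((forward(w, x1, x2) - t) ** 2 for (x1, x2), t in data)

for _ in range(2000):
    for i in range(len(w)):                       # nudge each weight downhill
        bumped = list(w)
        bumped[i] += 1e-4
        w[i] -= 0.05 * (loss(bumped) - loss(w)) / 1e-4

# The loss shrinks and the predictions typically approach [0, 1, 1, 0];
# the exact result depends on the random starting weights.
print(round(loss(w), 4), [round(forward(w, x1, x2)) for (x1, x2), _ in data])
```

A single perceptron cannot learn XOR at all, which is exactly why the extra layer matters.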
GPT is a language model, which predicts what the next word in the sentence should be. Computers don't work in words but in numbers, so the model assigns a list of numbers to each word. When these numbers are close to each other for two different words, that probably means the words are similar in some way, not based on the word itself but on what it means (for example: water -> ocean). This alone is not enough, because words can mean different things in different contexts, so the AI keeps looking at the sentence and the full context to determine what each word means. One example he gave is the sentence "bank of the river": bank can mean multiple things, but in this case it is the terrain surrounding the river and not, for example, a bank with money.
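A small sketch of the "words as lists of numbers" idea. These three-number vectors are hand-made toys (real models learn vectors with hundreds of dimensions from data), but they show how nearby numbers capture related meaning:

```python
import math

# Toy word vectors: "water" and "ocean" point in similar directions,
# "money" points elsewhere. The values are invented for illustration.
embeddings = {
    "water": [0.9, 0.1, 0.0],
    "ocean": [0.8, 0.2, 0.1],
    "money": [0.0, 0.9, 0.3],
}

def similarity(a, b):
    """Cosine similarity: close to 1.0 means very similar meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

print(similarity(embeddings["water"], embeddings["ocean"]))   # high (~0.98)
print(similarity(embeddings["water"], embeddings["money"]))   # low  (~0.10)
```

Handling context (the "bank of the river" example) happens on top of this, by letting each word's numbers be influenced by the words around it.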
The way ChatGPT "learned" is by being given a lot of data and a lot of these nodes/parameters, but this can be dangerous if you give it access to the internet. Someone could ask, for example, how to build a bomb, and it would explain it in great detail, so there are some ethical concerns. This graph shows how they trained the AI with human help:
GPT-4 can do a lot of new things and do a lot of things better. It can now label images and give explanations of what is happening in an image. The key word here is "label", because a lot of people will say "recognize"; however, those are suitcase terms we use for humans, not for AI or machines. It doesn't understand or recognize what is happening, it just has a lot of data.
The future of AI is uncertain and potentially dangerous, as different people have different views on its potential implications; even experts are divided on what it will look like. There are three main levels of AI: narrow, general, and super. Most people say we are still in the narrow phase, which means AI is not as smart or capable as humans are. Geoffrey Hinton, an expert who has worked a lot on neural networks and AI, said in an interview that we don't know whether AI will take over the world, but that it could happen, and within a short time span. These are some scary statements he made, but no one knows if this could or will happen.
In the end we are still humans and AI is not. AI now knows a lot of things from data, but it can't "understand" them; it doesn't have the same real-world experiences that we have. We can use AI to further our knowledge, but we do have to be careful.
Author: Jens de Graaf