
Text generator GPT-3: If you think it thinks, then you only think it thinks



On September 8, 2020, an article in the Guardian assured people that artificial intelligence (AI) is not malevolent toward them, but rather sees its mission in serving them. The information is remarkable because it was given in the first person. It was artificial intelligence itself that wrote the article about itself, more precisely: the Generative Pre-trained Transformer, also called GPT-3. It is also noteworthy that the AI then qualified the aforementioned reassurance. The evil will not emanate from it, but possibly from its programmers: the mistakes made by people, according to the original quote from GPT-3, could "lead to me having to kill" ("humans make mistakes that may cause me to inflict casualties").

What a terrific sentence! The AI warns us that it could be misused by us. So it has a will (to warn us) and at the same time it is exposed, will-less, to its programming. Setting this paradox aside for a moment, let us ask what we would actually prefer: a mindless AI, or one that arrives at its own insights and decisions? Whom do we fear more: our fellow human beings or our product? Both variants appear in the debate on AI, although the assumed takeover of power by artificial intelligence is mostly imagined as directed not against evil programmers, but against humans as such.

Roberto Simanowski

is the author of the book "Interfictions. From Writing on the Net" (Suhrkamp 2002) and is currently researching aspects of digital authorship as a fellow at the Excellence Cluster Temporal Communities at the FU Berlin. His latest book "Death Algorithm. The Dilemma of Artificial Intelligence" (Passagen 2020) received the Tractatus Prize for Philosophical Essay Writing 2020.

As far as the Guardian article is concerned, it was soon pointed out that it was not really a computer-generated text. GPT-3, on this view, is merely an update to computer-assisted authorship; the program only supports the human author in writing. Is GPT-3 just an improved spell checker? Are we dealing only with a new quantity, not with the transition to a new quality? The objection has quite good arguments on its side: people provided the topic, the direction, and the first paragraph of the article, and compiled the text from eight different outputs that the AI produced, which was then published in the Guardian. Without knowing the eight original outputs, little can be said about how coherently GPT-3 can actually write and think.

But "think", that much seems clear, is not the right word anyway. GPT-3 is a kind of artificial neural network that, unlike our brain, processes information purely mathematically. From hundreds of gigabytes of text, GPT-3 knows which word appears most frequently after a preceding word, just as Google's translation algorithm knows how often people have used a certain German word for a certain English word. The algorithm takes what tops the list, without the slightest understanding beyond statistics. A well-written text from GPT-3 is thus never a well-thought-out text, but only ever a probabilistically well-made one.

Then, of course, the AI's ambivalent reassurance in the Guardian article that it poses no danger would be no message at all from the AI to humans, but rather a message from humans to themselves: statistically, this is simply the opinion the majority holds. If the successors GPT-4 or GPT-5 have access to all the text data in the world and write independently of human specifications, their statements on this matter should be even more accurate. In the present case there was not yet such extensive access, only the writing instruction to dispel people's fear of AI. With a different assignment, the AI's statement could have turned out the opposite, because, all things considered, it is not an independent author.

The fact that the AI does not write independently cannot necessarily be seen in its texts. The problem is often approached as a quiz. The task: "Decide whether this poem was written by a human or by a computer!" It is a variant of the Turing test, in which a computer passes if its human communication partner believes it is dealing with a person. But what is lost when the computer passes the test and we read its texts as if they came from a human hand?