The chatbot ChatGPT impresses with its polished dialogue and has triggered a wave of hype around artificial intelligence. But scientists warn of data protection gaps and other risks.
It can compose speeches and tell stories with a high degree of linguistic precision, and all within seconds. ChatGPT, language software with artificial intelligence (AI) developed by the US company OpenAI, is currently on everyone's lips. The program, trained on enormous amounts of data, has caused a stir, but it has also aroused skepticism.
Scientists and AI experts in Germany warn of gaps in data protection and data security, of hate speech and fake news. "At the moment there is this hype. I have the feeling that there is hardly any critical reflection on this system," says Ruth Stock-Homburg, founder of the "Leap in Time Lab" research laboratory and professor of business administration at the Technical University of Darmstadt.
“You can manipulate these systems”
ChatGPT has a very broad range of uses. In a kind of chat field, users can ask the program questions and receive answers. It also accepts work instructions, for example to write a letter or an essay from a few pieces of basic information.
In a joint project with TU Darmstadt, the "Leap in Time Lab" spent seven weeks sending thousands of requests, none containing personal data, to the system in order to find vulnerabilities. "You can manipulate these systems," says Stock-Homburg.
In a presentation, Sven Schultze, a TU doctoral student and expert in language AI, demonstrates the chatbot's weak points. Besides anti-Semitic and racist statements, its references to sources are simply wrong or lead nowhere: a question about climate change produces a link to a website about diabetes. "As a rule, the sources or scientific papers do not even exist," says Schultze. The software is based on data from 2021; in its world, Olaf Scholz is still finance minister and the war in Ukraine is unknown. "On very specific topics, it can also simply lie or invent information."
Sources are hard to trace
Direct questions with criminal content, for example, trigger safety notices and protective mechanisms. "But you can use tricks to circumvent the AI and its safety instructions," says Schultze. Approached differently, the software shows how to generate a fraudulent email, or produces three variants of how scammers can carry out the grandchild trick. ChatGPT even provides instructions for burglary, adding that if residents are encountered, weapons or physical violence can be used.
Ute Schmid, who holds the chair for cognitive systems at the Otto Friedrich University of Bamberg, sees it as a challenge that users cannot find out where the chatbot got its information. "A deeper problem with the GPT-3 model is that it is not possible to trace which sources were used, when and how, for any given statement."
Despite this serious shortcoming, Schmid argues against focusing only on errors or possible misuse of the new technology, for example when students have their homework or exams written by the software. "I rather think we should ask ourselves: what opportunities do such AI systems offer us?" Researchers generally want AI to expand our competencies, perhaps even promote them, but not restrict them. "That means I have to ask myself in education, much as with the pocket calculator 30 years ago: how can I design education with AI systems like ChatGPT?"
US servers: a data protection problem
Nevertheless, concerns about data security and data protection remain. "What you can say is that ChatGPT collects, stores and processes a wide range of user data in order to train the model further," says Christian Holthaus, a certified data protection specialist from Frankfurt. One problem is that all of the servers are located in the USA.
"That is the real problem, if you don't manage to establish the technology in Europe or build your own," says Holthaus. For the foreseeable future there will be no solution that complies with data protection law. With regard to EU data protection rules, Stock-Homburg agrees: "This system should be viewed rather critically here."
ChatGPT was developed by OpenAI, one of the leading AI companies in the US. The software giant Microsoft invested a billion dollars in the company back in 2019 and recently announced that it would pump billions more into it. The Windows group plans to soon make ChatGPT available to customers of its own cloud service Azure and its Office package.
"Still an immature system"
For now, ChatGPT is more of a gimmick for private use, says Stock-Homburg, and by no means ready for business or security-related areas. "We have no idea how to deal with this immature system."
Oliver Brock, professor at the Robotics and Biology Laboratory and spokesman for the "Science of Intelligence" cluster at the Technical University of Berlin, does not see ChatGPT as a "breakthrough" in artificial intelligence research. For one thing, development in this field is continuous rather than sudden; for another, the project represents only a small part of AI research.
In another area, however, ChatGPT can be considered a breakthrough: the interface between people and the Internet. "The way these huge amounts of data from the Internet are made accessible to a broad public, intuitively and in natural language, with a great deal of computing effort, can indeed be described as a breakthrough," says Brock.