The race for internet search seemed decided long ago, with a clear winner. Now artificial intelligence is supposed to reshuffle the cards. The first impression, however, is sobering. And yet, for Microsoft, it already counts as a complete success.
The word almost everyone uses for it shows just how dominant one company is in internet search: googling. Despite numerous attempts, the search engine giant has had no serious challenger for years. But now the cards are to be reshuffled. With artificial intelligence, Microsoft finally wants to take on Google’s dominance. There is, however, still a long way to go.
The idea sounds simple at first: instead of responding to keywords or simple sentences, chatbots powered by artificial intelligence should hold completely natural conversations with users and present them with the information they are looking for. Microsoft and Google presented their respective products last week. Microsoft’s search engine Bing now has a chat function built on the much-hyped AI ChatGPT, while Google’s Bard project relies on an AI developed in-house. Both, however, are still in the test phase, and for good reason.
Bumpy start
Even the unveiling was bumpy. Both Bard and Bing Chat made embarrassing factual errors during their presentations, and Google was anything but happy about it. According to several media reports, employees in internal forums found the seemingly rushed presentation of the project anything but professional. “It’s almost funny how short-sighted that was,” one employee complained, according to CNBC. The presentation went so badly that the company lost almost ten percent of its market value as a result.
But the reactions to Bing were not entirely positive either. Although only selected testers were allowed to try the new AI chat, the most absurd chat transcripts quickly made the rounds. Testers got the AI to misspeak, issue threats, or turn condescending. It refused to admit mistakes, reacting angrily instead. Others managed to convince the AI that it was addicted or sad.
A lot to do
Microsoft, too, has since admitted in a blog post that not everything went perfectly. Overall, the company stressed at the outset, the test phase has been a success. But there are problems as well. For one thing, only 71 percent of the answers given were rated as factually correct and helpful. For another, longer chats of 15 or more questions cause trouble, because the program becomes confused and strikes the wrong tone, the company concedes.
Very specific measures are therefore planned. To improve the quality of answers, Microsoft wants to quadruple the amount of data used. The main reason for the confusion, the company says, is that across multiple questions it becomes hard for the program to tell which answer is currently being asked for. For longer sessions, it should become easier to simply start the conversation over. A cap on the number of messages is also conceivable, a spokesperson told “Insider”: “So far, almost 90 percent of chats are shorter than 15 messages.” According to the blog post, the bot’s unwanted tone only arises in exceptional situations. The company is still looking for ways to give users more control over this, it says.
Writing for the AI themselves
Reports from Google’s headquarters show what this process can look like. Google’s search chief Prabhakar Raghavan is said to have asked employees in internal memos to write out answers for the AI. “Bard learns best from good examples,” the internal emails say, according to CNBC. Employees should therefore feed the program texts on topics they know well, setting aside two to four hours each for the task. “It’s going to be a long journey for all of us,” Raghavan said.
The company apparently does not want to rely entirely on volunteer work, however. According to the Los Angeles Times, Google pays so-called “raters” who do nothing but grade AI responses for tone and usefulness. One of the raters reports that he has also come across questions about the best length of rope to hang oneself with. The pay is not great, either. “I make three dollars less than my daughter does at her fast food job,” he says.
“We will make mistakes”
For the companies, training their AIs is likely to be one of the most important tasks of the coming years. Unlike classic computer programs, artificial intelligence cannot simply be assembled to order. “It’s more like training a dog,” writes OpenAI in a recent blog post about its much-hyped program ChatGPT, which also forms the basis for Microsoft’s Bing chat.
How an AI is trained determines not only whether its answers are correct, but also the tone in which they are given and the worldview behind them. Because Silicon Valley remains heavily male and employs few African Americans, there are numerous examples of AIs that are prejudiced against women and Black people. ChatGPT, too, has been coaxed several times in recent weeks into preferring men when in doubt.
This is something the company now wants to tackle, according to its blog post. In the next stages of development, OpenAI explains, it wants to reduce biases, define more clearly the moral values that guide the AI, and involve the public more. “We sometimes make mistakes. If that happens, we will learn from it and adapt the models.”
An open ending
How quickly the companies manage this is likely to decide the AI race, because the programs only improve in the long run if they are actually used. For that to happen, though, users must first be willing and able to rely on them. The race is far from over. Still, Microsoft has already booked one win: Bing has not been talked about as much in years as it was last week.
Sources: Microsoft, CNBC, Wired, Computerworld, The Atlantic, New York Times, Los Angeles Times