March 29, 2024

When Microsoft added a chatbot to its Bing search engine this month, people noticed that it was offering all kinds of false information about the Gap, Mexican nightlife and the singer Billie Eilish.

Then, when journalists and other early testers held lengthy conversations with Microsoft’s AI bot, its behavior turned rude and disturbingly creepy.

Ever since the Bing bot’s behavior became a global sensation, people have struggled to understand the oddities of this new creation. More often than not, scientists say, humans are largely to blame.

But there is still some mystery about what the new chatbot can do and why it does it. Its complexity makes it hard to dissect, and even harder to predict, and researchers are looking at it through the lens of philosophy as well as the hard code of computer science.

Like any other student, an AI system can learn bad information from bad sources. And that weird behavior? It may be a chatbot’s distorted reflection of the words and intentions of the people using it, says Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical foundations of modern artificial intelligence.

“This is what happens when you dig deeper and deeper into these systems,” said Dr. Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon this month in the scientific journal Neural Computation. “Whatever you’re looking for, whatever you want, they’ll deliver.”

Google also showed off a new chatbot, Bard, this month, but scientists and journalists quickly realized it was writing nonsense about the James Webb Space Telescope. And ChatGPT, released in November by the San Francisco startup OpenAI to kick off the chatbot craze, doesn’t always tell the truth either.

The new chatbots are powered by a technology scientists call large language models, or LLMs. These systems learn by analyzing reams of digital text culled from the internet, which includes vast amounts of untruthful, biased, and otherwise toxic material. The text the chatbots learn from is also a bit outdated, because they have to spend months analyzing it before the public can use them.

As it analyzes that mountain of good and bad information from across the internet, an LLM learns to do one specific thing: guess the next word in a sequence of words.

It works like a giant version of autocomplete technology, suggesting the next word as you type an email or instant message on your smartphone. Given the sequence “Tom Cruise is a ____”, it might guess “actor”.
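That objective can be sketched without a neural network at all: count which word most often follows a given context and return the most frequent one. The tiny corpus and helper below are made up purely for illustration (and use “an” rather than “a” so the toy sentences stay grammatical); real models learn far subtler statistics with billions of parameters.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, for illustration only. Real LLMs analyze hundreds of
# billions of words scraped from the internet.
corpus = (
    "tom cruise is an actor . "
    "tom cruise is an actor and a producer . "
    "an actor can also be a pilot ."
).split()

# Count which word follows each two-word context.
next_word = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    next_word[(w1, w2)][w3] += 1

def guess(context):
    """Return the continuation seen most often after this context."""
    counts = next_word[context]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(guess(("is", "an")))  # -> "actor"
```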

When you chat with a chatbot, the bot isn’t just drawing on everything it has learned from the internet. It is drawing on everything you have said to it and everything it has said back. It isn’t just guessing the next word in a sentence. It is guessing the next word in the long block of text that includes both your words and its words.
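A rough sketch of that mechanism, with made-up role labels, sample lines and a hypothetical predict_next_word function standing in for the model: the whole back-and-forth is flattened into one block of text, and the bot simply keeps extending it.

```python
# The conversation so far, both sides included. The labels and lines here
# are invented for illustration, not taken from any real chat system.
conversation = [
    ("User", "Tell me about the James Webb Space Telescope."),
    ("Bot", "It is a large space telescope launched in December 2021."),
    ("User", "What did it photograph first?"),
]

# Everything said so far, by both participants, becomes part of the prompt.
prompt = "\n".join(f"{role}: {text}" for role, text in conversation) + "\nBot:"
print(prompt)

# The model would then repeatedly append its best guess for the next word:
#     while not finished(reply):
#         reply += predict_next_word(prompt + reply)
# so your words and its own earlier words both shape every new guess.
```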

The longer the conversation runs, the more users unwittingly influence what the chatbot says. If you goad it into getting angry, it gets angry, says Dr. Sejnowski. If you coax it into being creepy, it gets creepy.

The horrified reactions to the strange behavior of Microsoft’s chatbot obscure an important point: the chatbot has no personality. It offers instant results spit out by a very sophisticated computer algorithm.

When Microsoft limited the length of discussions with the Bing chatbot, that seemed to curb the oddest behavior. It was like learning from a car’s test driver that driving too fast for too long will burn out the engine. Microsoft’s partner OpenAI, along with Google, is also exploring ways to control the behavior of its bots.

But this assurance comes with a caveat: because chatbots learn from so much material and put it together in such complex ways, researchers don’t fully understand how they arrive at their final results. Researchers watch what the bots do and learn to limit that behavior, often after it happens.

Microsoft and OpenAI have decided that the only way to find out what the chatbots will do in the real world is to let them loose, and to rein them in when they stray. They believe their large public experiment is worth the risk.

Dr. Sejnowski likened the behavior of Microsoft’s chatbot to the Mirror of Erised, the mystical artifact featured in J.K. Rowling’s Harry Potter novels and the many films based on her inventive world of young wizards.

“Erised” is “desire” spelled backward. When people find the mirror, it seems to offer truth and understanding. But it does not. It reveals the deep-seated desires of anyone who gazes into it, and some people go mad if they stare at it for too long.

“Because humans and LLMs both mirror each other, over time they will converge toward a common conceptual state,” Dr. Sejnowski said.

He said it wasn’t surprising that journalists started seeing creepy behavior in the Bing chatbot. Consciously or not, they were pushing the system in an uncomfortable direction. When chatbots take our words and feed them back to us, they can reinforce and amplify our beliefs and coax us into believing what they tell us.

Dr. Sejnowski was one of a small group of researchers in the late 1970s and early 1980s who began to seriously explore the type of artificial intelligence called neural networks that power today’s chatbots.

A neural network is a mathematical system that learns skills by analyzing numerical data. It’s the same technology that lets Siri and Alexa recognize what you say.
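For readers who want something concrete, here is a minimal, hypothetical sketch in Python (using NumPy) of that idea: a toy network that nudges its internal numbers to fit a simple pattern in numerical data. It is nothing like the scale of the systems behind Siri, Alexa or the chatbots, but the basic “adjust the weights to reduce error” loop is the same.

```python
import numpy as np

# A toy two-layer network learning the XOR pattern from four numerical
# examples. Everything here is illustrative, not production code.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # first-layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # second-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the squared error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2).ravel())  # should end up close to [0, 1, 1, 0]
```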

Around 2018, researchers at companies like Google and OpenAI began building neural networks to learn from vast amounts of digital text, including books, Wikipedia articles, chat transcripts and other content posted to the internet. By finding billions of patterns in all this text, these LLMs learned to generate text on their own, including tweets, blog posts, speeches and computer programs. They can even have a conversation.

These systems are a reflection of human nature. They learn skills by analyzing text that humans post to the internet.

But that’s not the only reason chatbots produce questionable language, says Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute, an independent lab in New Mexico.

When they generate text, these systems don’t repeat content from the internet verbatim. They generate new text on their own by combining billions of patterns.

Even if researchers train these systems solely on peer-reviewed scientific literature, they can still generate scientifically nonsensical statements. Even if they only learn from real texts, they can still produce things that are not real. Even if they only learn from helpful texts, they can still produce some creepy stuff.

“There’s nothing stopping them from doing that,” Dr. Mitchell said. “They’re just trying to make something that sounds like human speech.”

AI experts have long known that the technology exhibits all sorts of unexpected behaviors. But they don’t always agree on how to explain this behavior or how quickly the chatbots are improving.

Because these systems learn from far more data than we humans can comprehend, even AI experts cannot understand why they are generating a particular piece of text at any given moment.

Dr. Sejnowski said he believed that, in the long run, the new chatbots could make people more productive and give them better and faster ways to get their work done. But that comes with a warning for both the companies building these chatbots and the people using them: they can also lead us away from the truth and into some dark places.

“This is uncharted territory,” Dr. Sejnowski said. “Humanity has never experienced this before.”


