April 18, 2024

Microsoft released a new version of its Bing search engine last week. It differs from the conventional search engine by including a chatbot that can answer questions in clear, concise prose.

Since then, people have noticed that some of what the Bing chatbot generates is inaccurate, misleading and downright strange, raising fears that it is sentient, or aware of the world around it.

But in fact, it’s not. To understand why, it’s important to understand how chatbots really work.

No, let’s say it again: NO!

In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That claim was false. Chatbots are not conscious, and they are not intelligent, at least not in the way humans are intelligent.

Let’s take a step back. The Bing chatbot is powered by a kind of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading.

A neural network is simply a mathematical system that learns skills by analyzing large amounts of numerical data. For example, a neural network can learn to recognize cats when it examines thousands of photos of cats.
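To make that concrete, here is a minimal sketch of such a training loop in Python, using PyTorch purely for illustration; the made-up data, sizes and framework are assumptions, not anything Bing actually uses.

```python
# A toy "learn from examples" loop: the network sees many labeled examples
# and adjusts its internal numbers to reduce its mistakes.
import torch
import torch.nn as nn

# Pretend each image has been reduced to 64 numbers (pixel features).
images = torch.randn(1000, 64)          # 1,000 made-up example "photos"
labels = torch.randint(0, 2, (1000,))   # 1 = cat, 0 = not a cat (invented)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                    # repeatedly analyze the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong is the model?
    loss.backward()                        # work out how to adjust the numbers
    optimizer.step()                       # nudge the numbers to do better
```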

Most people use neural networks every day. This technology can identify people, pets and other objects in images posted to Internet services such as Google Photos. It allows Siri and Alexa (the talking voice assistants from Apple and Amazon) to recognize what you say. This is how services like Google Translate translate between English and Spanish.

Neural networks are very good at mimicking the way humans use language. This can mislead us into thinking the technology is more powerful than it actually is.

About five years ago, researchers at companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and all sorts of other content posted to the internet.

These neural networks are known as large language models. They were able to use these mountains of data to build what you might call a mathematical map of human language. Using this map, the neural network can perform many different tasks, such as writing its own tweets, composing speeches, generating computer programs, and yes, having conversations.
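As a rough illustration of what using that map looks like in practice, here is a tiny Python sketch with the Hugging Face transformers library and the small, openly available GPT-2 model; this choice is an assumption made for illustration, since the models behind Bing and ChatGPT are far larger and not public.

```python
# A toy text generator: the model continues a prompt with words that are
# statistically likely given the "map" of language it built during training.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The new search engine can", max_new_tokens=20)
print(result[0]["generated_text"])
```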

These large language models have proven useful. Microsoft offers a tool, Copilot, that builds on a large language model to suggest the next line of code as programmers build software applications, much the way autocomplete tools suggest the next word as you type a text message or an email.
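To get a feel for what autocomplete means here, below is a deliberately simplistic Python sketch that suggests the next word from simple word-pair counts; real tools like Copilot rely on a large language model rather than counts, so treat this only as an analogy.

```python
# A toy autocomplete: given the word just typed, suggest the word that most
# often followed it in some example text.
from collections import Counter, defaultdict

examples = "for i in range ( n ) : print ( i )".split()
next_word = defaultdict(Counter)
for a, b in zip(examples, examples[1:]):
    next_word[a][b] += 1          # count which word follows which

def suggest(word):
    counts = next_word.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("range"))  # -> "("
```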

Other companies offer similar technology that can generate marketing materials, emails and other text. This kind of technology is also known as generative artificial intelligence.

Exactly. In November, OpenAI released ChatGPT, which gave the general public its first real taste of this technology. People were amazed, and rightfully so.

These chatbots don’t chat exactly like a human, but they often sound a lot like one. They can also write term papers and poetry and riff on almost any topic that comes their way.

These systems get things wrong partly because they learn from the internet. Consider how much misinformation and other garbage is out there on the web.

These systems also don’t repeat word for word what’s on the internet. Drawing on what they’ve learned, they generate new text on their own, in what AI researchers call a “hallucination.”

This is why, if you ask the same question twice, a chatbot might give you different answers. And it will say anything, whether or not it’s grounded in reality.
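One reason the answers vary is that the text is typically generated by sampling from a probability distribution over possible next words, so two runs can diverge. Here is a minimal Python sketch of that idea; the words and probabilities are invented for illustration.

```python
# A toy example of sampling: the "model" assigns probabilities to possible
# next words and picks one at random, weighted by those probabilities.
import random

next_word_probs = {"blue": 0.6, "grey": 0.3, "green": 0.1}  # illustrative only

def answer():
    words, weights = zip(*next_word_probs.items())
    return "The sky is " + random.choices(words, weights=weights)[0] + "."

print(answer())  # might print "The sky is blue."
print(answer())  # ...or, on another run, "The sky is grey."
```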

AI researchers love to use terms that make these systems seem human. But “hallucination” is just a catchy term for “they make stuff up.”

That sounds creepy and dangerous, but it doesn’t mean the technology is somehow alive or aware of its surroundings. It just generates text using patterns it found on the internet. In many cases, it mixes and matches those patterns in surprising and disturbing ways. But it doesn’t understand what it’s doing. It can’t reason the way a human can.

Companies are working on reining this in.

With ChatGPT, OpenAI has tried to control the technology’s behavior. As a small group of people tested the system privately, OpenAI asked them to rate its responses. Were they useful? Were they truthful? OpenAI then used those ratings to hone the system and more carefully define what it would and wouldn’t do.
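As a loose sketch of that feedback idea, and not OpenAI’s actual pipeline, here is how collected ratings might be filtered into further training examples in Python; the field names and the threshold are assumptions.

```python
# A toy rating record: testers score each response, and only well-rated,
# truthful answers are kept as examples for further tuning.
from dataclasses import dataclass

@dataclass
class Rating:
    prompt: str
    response: str
    helpful: int     # e.g., a 1-5 score from a human tester (assumed scale)
    truthful: bool

ratings = [
    Rating("What is 2 + 2?", "4", helpful=5, truthful=True),
    Rating("What is 2 + 2?", "5, probably", helpful=1, truthful=False),
]

fine_tune_set = [r for r in ratings if r.helpful >= 4 and r.truthful]
print(len(fine_tune_set))  # -> 1
```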

But such techniques aren’t perfect. Scientists today don’t know how to build systems that are completely truthful. They can limit the inaccuracies and the weirdness, but they can’t prevent them. One way of reining in the odd behavior is to keep chats short.
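A guardrail like that can be as simple as capping how many turns a conversation may run before it is reset. The Python sketch below is a hypothetical illustration; the limit of five turns is an invented number, not Microsoft’s actual setting.

```python
# A toy conversation-length cap: after a fixed number of turns, the chat is
# reset rather than allowed to drift further.
MAX_TURNS = 5  # invented limit, for illustration only

def reply(turns):
    if len(turns) >= MAX_TURNS:
        return "Let's start a new topic."          # reset the conversation
    return "(a model-generated reply would go here)"
```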

But chatbots will still spit out things that aren’t true. And as other companies begin deploying these kinds of bots, not everyone will be good at controlling what they can and cannot do.

Bottom line: Don’t believe everything a chatbot tells you.


