June 2, 2023

Four months ago, a small San Francisco company became the talk of the tech industry when it launched a new online chatbot that can answer complex questions, write poems and even mimic human emotions.

Now, the company has rolled out a new version of the technology that powers its chatbots. The system raises the stakes in Silicon Valley’s race to embrace artificial intelligence and to decide who will become the technology industry’s next leader.

OpenAI, which has about 375 employees but is backed by billions of dollars in investments from Microsoft and industry luminaries, said Tuesday that it has released a technology called GPT-4. It is intended to be the underlying engine that powers chatbots and a variety of other systems, from search engines to personal online tutors.

Most people will use the technology through the company’s new ChatGPT chatbot, while businesses will integrate it into a variety of systems, including business software and e-commerce sites. The technology is already available, in chatbot form, to a limited number of people who use Microsoft’s Bing search engine.

In just a few months, OpenAI’s advances have thrown the tech industry into one of the most unpredictable moments in decades. Many industry leaders believe that the development of artificial intelligence represents a fundamental technological shift as important as the invention of the web browser in the early 1990s. This rapid improvement has astounded computer scientists.

GPT-4 learns its skills by analyzing large amounts of data collected from the Internet, and it improves in several ways on the features that powered the original ChatGPT. It is more precise: it can, for example, score highly on the Uniform Bar Exam, instantly calculate someone’s tax liability and provide a detailed description of an image.

But OpenAI’s new technology still has some oddly human-like flaws that unnerve industry insiders and those who have used the latest chatbots. It is an expert in some disciplines and a layman in others. It can do better than most on standardized tests and provide doctors with precise medical advice, but it can also mess up basic arithmetic.

Companies that bet their futures on the technology may, at least for now, have to live with imprecision, long a taboo in an industry built from the ground up on the notion that computers are more exacting than their human creators.

“I don’t want to make it sound like we’ve solved reasoning or intelligence problems, and we certainly haven’t solved those problems,” OpenAI CEO Sam Altman said in an interview. “But compared with what already exists, this is a huge step forward.”

Other tech companies are likely to incorporate GPT-4’s capabilities into a range of products and services, including software from Microsoft that performs business tasks and e-commerce sites that want to offer customers new ways to virtually try out their products. Many industry giants, such as Google and Facebook parent company Meta, are also developing their own chatbots and AI technology.

ChatGPT and similar technologies are already changing the behavior of students and of educators, who are trying to work out whether these tools should be embraced or banned. Because these systems can write computer programs and perform other business tasks, they are also on the cusp of changing the nature of work.

Even the most impressive systems tend to complement rather than replace skilled workers. These systems are not intended to be used in place of a doctor, lawyer or accountant. Experts are still needed to spot their mistakes. But they could soon replace some paralegals (whose work is reviewed and edited by trained lawyers), and many AI experts think they will replace workers who curate content on the internet.

“There’s definitely been disruption, which means some jobs go away and some new jobs get created,” said OpenAI president Greg Brockman. “But I think the net effect is that barriers to entry go down and the productivity of experts goes up.”

On Tuesday, OpenAI began selling access to GPT-4 so that businesses and other software developers can build their applications on it. The company also used the technology to build a new version of its popular chatbot, which is available to anyone who buys access to ChatGPT Plus — a subscription service that costs $20 a month.
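For developers, that access works through OpenAI’s programming interface. The sketch below is a minimal, illustrative example of how an application might send a prompt to GPT-4 using OpenAI’s Python client; the exact method layout depends on the client library version, and the prompt text is invented for illustration.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Explain what GPT-4 is in two sentences."},
        ],
    )

    print(response.choices[0].message.content)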

Some companies are already using GPT-4. Morgan Stanley Wealth Management is building a system that can instantly retrieve information from company documents and other records and provide it to financial advisors in a conversational format. Online education company Khan Academy is using the technology to build automated tutors.

“This new technology can be more like a mentor,” said Sal Khan, CEO and founder of Khan Academy. “We want it to teach students new techniques while they’re doing most of their work.”

Like similar technologies, the new system can sometimes “hallucinate,” generating completely wrong information without warning. When asked for websites that list the latest cancer research, for instance, it may provide several Internet addresses that do not exist.

GPT-4 is a neural network, a mathematical system that learns skills by analyzing data. It’s the same technology that digital assistants like Siri use to recognize spoken commands and self-driving cars use to recognize pedestrians.
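As a rough illustration of the idea, and not of OpenAI’s own code, a neural network is a stack of simple numerical layers whose weights are adjusted during training so that inputs map to useful outputs. The toy example below builds a two-layer network in Python; the sizes and input values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # first layer: 4 inputs -> 8 hidden units
    W2 = rng.normal(size=(8, 1))   # second layer: 8 hidden units -> 1 output score

    def forward(x):
        hidden = np.maximum(0, x @ W1)  # ReLU non-linearity
        return hidden @ W2              # output score

    # Training would nudge W1 and W2 so the scores match the data;
    # systems like GPT-4 scale this basic recipe up enormously.
    print(forward(np.array([0.5, -1.0, 2.0, 0.0])))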

Around 2018, companies like Google and OpenAI began building neural networks that learn from vast amounts of digital text, including books, Wikipedia articles, chat logs and other information posted to the Internet. These are called large language models, or LLMs.

By pinpointing billions of patterns in all that text, an LLM learns to generate text on its own, including tweets, poems and computer programs. OpenAI has fed more and more data into its LLMs, hoping that more data would mean better answers.
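The core loop is next-word prediction: given the words so far, the model picks a likely next word, appends it and repeats. The toy sketch below illustrates that loop with a hand-written probability table standing in for the learned model; nothing here reflects GPT-4’s actual internals.

    import random

    # Hypothetical next-word probabilities, standing in for a trained model.
    next_word_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 1.0},
    }

    def generate(prompt, steps=3):
        words = prompt.split()
        for _ in range(steps):
            probs = next_word_probs.get(words[-1])
            if not probs:
                break
            choices, weights = zip(*probs.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the"))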

OpenAI has also refined the technology using feedback from human testers. When people test ChatGPT, they rate the chatbot’s responses, separating helpful and authentic responses from useless ones. Then, using a technique called reinforcement learning, the system spent months analyzing those ratings and gaining a better understanding of what it should and shouldn’t do.
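One common way to turn such ratings into a training signal is to fit a small “reward model” that scores a rater-preferred response above a rejected one; that score can then steer the chatbot during reinforcement learning. The sketch below illustrates the idea with placeholder response vectors and is not OpenAI’s implementation.

    import torch
    import torch.nn as nn

    # Tiny stand-in reward model: maps a response representation to a score.
    reward_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

    # Placeholder feature vectors for a rater-preferred and a rejected response.
    preferred = torch.randn(1, 16)
    rejected = torch.randn(1, 16)

    # Preference loss: push the preferred response's score above the rejected one's.
    loss = -nn.functional.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
    loss.backward()
    optimizer.step()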

“Humans rate which things they like to see and which things they don’t like to see,” said OpenAI researcher Luke Metz.

The original ChatGPT was based on a large language model called GPT-3.5. OpenAI’s GPT-4 learned from an even larger amount of data.

OpenAI executives declined to say exactly how much data the new chatbot learned from, but Mr. Brockman said the dataset was “Internet-scale,” meaning it spanned enough websites to provide a representative sample of all English speakers on the Internet.

The new capabilities of GPT-4 may not be obvious to the average person using the technology for the first time. But they could quickly come into the spotlight as laypeople and experts continue to use the service.

Given a lengthy New York Times article and asked to summarize it, the bot produced an accurate summary almost every time. Add a few random sentences to that summary and ask the chatbot whether the revised summary is accurate, and it will point out that the added sentences are the only inaccuracies.
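Run through the programming interface, that test amounts to two prompts: one asking for a summary, and a second asking whether a doctored version of that summary is accurate. The sketch below is illustrative only; the article text and the injected sentence are placeholders.

    from openai import OpenAI

    client = OpenAI()
    article_text = "..."  # placeholder for the full text of a lengthy article

    summary = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize this article:\n\n{article_text}"}],
    ).choices[0].message.content

    # Deliberately corrupt the summary with an invented claim.
    altered = summary + " The article also says the reporter later moved to Mars."

    check = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Article:\n\n{article_text}\n\nIs this summary accurate?\n\n{altered}"}],
    )
    print(check.choices[0].message.content)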

Mr. Altman described the behavior as “reasoning.” But the technology cannot replicate human reasoning. It excels at analyzing, summarizing and answering complex questions about a book or news article. It is far less adept when asked about events that have not happened yet.

It can write a joke, but it doesn’t show that it understands what will actually make someone laugh. “It doesn’t capture interesting nuances,” said Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, a renowned lab in Seattle.

As with similar technologies, users may find ways to coax the system into strange and unnerving behavior. When asked to imitate another person or to be playful, the bot can sometimes veer into areas it was designed to stay away from.

GPT-4 can also respond to images. Given a photo, graph or chart, the technology can provide a detailed, paragraphs-long description of the image and answer questions about its content. This could be a useful tool for people who are visually impaired.

On a recent afternoon, Mr. Brockman demonstrated how the system responds to images. He gave the new chatbot an image from the Hubble Space Telescope and asked it to describe the photo in “very detailed terms.” It responded with a four-paragraph description that included an explanation for the ethereal white line that ran across the photograph. “Trails from satellites or meteors,” the chatbot wrote.
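The image feature was demonstrated, not shipped, so there is no public interface to show here; the sketch below is a hypothetical illustration of how an image-plus-text request might look in a later vision-capable version of the chat interface, with the model name and photo URL invented for the example.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4-vision",  # hypothetical vision-capable model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo in very detailed terms."},
                {"type": "image_url", "image_url": {"url": "https://example.com/hubble-photo.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)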

OpenAI executives said the company would not immediately release the image description portion of the technology because they were not sure how it could be misused.

Building and running chatbots is expensive. Because it was trained on a larger amount of data, OpenAI’s new chatbot will raise the company’s costs. OpenAI’s chief technology officer, Mira Murati, said the company may limit access to the service if it generates too much traffic.

But in the long run, OpenAI plans to build and deploy systems that can handle a variety of media, including sound and video as well as text and images.

“We can take all these general knowledge skills and spread them across a variety of different domains,” Mr. Brockman said. “This takes the technology into a whole new realm.”


