Anyone drawn to AI chatbots like ChatGPT and Bard (wow, they can write papers and cookbooks!) eventually runs into so-called hallucinations, the tendency of artificial intelligence to fabricate information.
Chatbots guess what to say based on information gathered from the internet, and they inevitably make mistakes. When they fail, for example by producing a cake recipe with wildly inaccurate flour measurements, the blunder can become a minor sensation.
However, as mainstream technology tools continue to integrate artificial intelligence, it is critical to understand how to put it to work for us. After testing dozens of AI products over the past two months, I've concluded that most of us use the technology in a suboptimal way, largely because of poor guidance from tech companies.
Chatbots are least useful when we ask them questions and expect whatever answers they come up with on their own to be true, which is how they were designed to be used. But when instructed to draw on information from trusted sources, such as reputable websites and research papers, AI can carry out useful tasks with a high degree of accuracy.
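The grounding pattern described above can be sketched in a few lines: instead of asking the model a bare question, you hand it excerpts from trusted sources and tell it to answer only from those. This is a minimal illustration; the function name and prompt wording are my own, not any product's API.

```python
# Minimal sketch of "ground the chatbot in trusted sources":
# bundle source excerpts into the prompt and restrict the model to them.
# The helper name and prompt wording are illustrative assumptions.

def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from
    the supplied source excerpts, not from its own memory."""
    source_block = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"{source_block}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "How much flour does the cake need?",
    ["The cake calls for 250 g of flour and two eggs."],
)
```

The resulting text would then be sent to whichever chatbot or API you use; the point is simply that the model's raw material comes from sources you chose, not from its training data.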
“If you give them the right information, they can do interesting things with it,” says Sam Heutmaker, founder of AI startup Context. “But on its own, 70 percent of the information you get is inaccurate.”
With a few simple tweaks that pointed the chatbots at specific data, they generated easy-to-understand answers and helpful suggestions. Over the past few months, this has transformed me from a grumpy AI skeptic into an enthusiastic power user. When I go on a trip using a travel itinerary planned by ChatGPT, it goes well because the recommendations come from my favorite travel sites.
Directing chatbots to specific high-quality sources, such as websites from reputable media and academic publications, can also help reduce the generation and spread of misinformation. Let me share some of the ways I’ve gotten help with cooking, research, and travel planning.
Chatbots like ChatGPT and Bard can write recipes that look good in theory but don't work in practice. In a November experiment by The New York Times Food desk, an early AI model created recipes for a Thanksgiving menu that included an extremely dry turkey and dense cakes.
I've had similarly unimpressive results with AI-generated seafood recipes. But that changed when I tried ChatGPT plugins, which are essentially third-party apps that work with the chatbot. (Plugins are available only to subscribers who pay $20 per month for access to GPT-4, the latest version of the chatbot, and can be activated in the settings menu.)
On ChatGPT's plugins menu, I selected Tasty Recipes, which pulls data from the Tasty website owned by the well-known media site BuzzFeed. I then asked the chatbot to come up with a meal plan using recipes from the website, including seafood dishes, ground pork, and vegetable side dishes. The bot suggested an inspiring meal plan, including a lemongrass pork sandwich, baked tofu tacos, and pasta made from fridge leftovers; each meal suggestion included a link to a recipe on Tasty.
For recipes from other publications, I used Link Reader, a plugin that let me paste in web links to generate meal plans using recipes from other trusted sites like Serious Eats. The chatbot pulled data from the websites to create a meal plan and told me to visit the websites to read the recipes. That took extra work, but it beat a meal plan invented by AI from scratch.
When I was doing research for an article on a popular video game series, I turned to ChatGPT and Bard to refresh my memory of past titles by summarizing their plots. They botched important details of the games' stories and characters.
After testing many other AI tools, I concluded that for research, it's critical to steer the technology toward trusted sources and to double-check the data for accuracy. I eventually found a tool that does just that: Humata.AI, a free web app popular among academic researchers and lawyers.
The app allows you to upload documents such as PDFs, and a chatbot will answer your questions about that material alongside a copy of the document, highlighting the relevant sections.
In one test, I uploaded a research paper I found on PubMed, the government-run search engine for scientific literature. The tool generated relevant summaries of lengthy documents in minutes that could have taken me hours, and I browsed through the highlights to double-check that the summaries were accurate.
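The core trick behind document Q&A tools like the one described above is retrieval: before answering, the system pulls out the passages of the uploaded document most relevant to the question, then answers from those and highlights them. Real products use embeddings and a language model; the toy sketch below substitutes simple keyword overlap just to illustrate the retrieval step, and the example document is invented.

```python
# Toy sketch of the retrieval step in document Q&A: rank a document's
# passages by how many of the question's words they contain.
# Real tools use embeddings; keyword overlap is a stand-in for illustration.
import re


def top_passages(question: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the question."""
    def words(text: str) -> set[str]:
        # Lowercase and strip punctuation so "Results:" matches "results".
        return set(re.findall(r"[a-z]+", text.lower()))

    q_words = words(question)
    ranked = sorted(
        passages,
        key=lambda p: len(q_words & words(p)),
        reverse=True,
    )
    return ranked[:k]


# A made-up research paper, split into passages.
doc = [
    "Methods: we enrolled 120 patients over six months.",
    "Results: the treatment reduced symptoms in 80 percent of patients.",
    "Funding: the study was supported by a university grant.",
]
hits = top_passages("What were the results for patients?", doc, k=1)
```

A chatbot layered on top would then summarize only the retrieved passages, which is why these tools can point you back to the exact highlighted sections for double-checking.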
Austin, Texas-based Humata founder Cyrus Khajvandi said he developed the app while a researcher at Stanford University, when he needed help reading dense scientific articles. The problem with chatbots like ChatGPT, he said, is that they draw on an outdated snapshot of the web, so their data may lack relevant context.
When a travel writer for The Times recently asked ChatGPT to create an itinerary for Milan, the bot guided her to a central part of town that was deserted because of an Italian holiday, among other missteps.
I had better luck when asking for a vacation itinerary in Mendocino County, Calif., for me, my wife, and our dog. As I did when planning meals, I asked ChatGPT to draw suggestions from some of my favorite travel sites, such as Thrillist, which is owned by Vox Media, and The Times's travel section.
Within minutes, the chatbot generated an itinerary that included dog-friendly restaurants and activities, including a farm offering wine and cheese pairings and train rides to a popular hiking trail. It saved me hours of planning time, and most important, the dog had a great time.
The bottom line
Google, and OpenAI, which works closely with Microsoft, say they are working to reduce hallucinations in their chatbots, but we can already reap the benefits of AI by controlling the data the bots rely on to arrive at their answers.
In other words, says Nathan Benaich, a venture capitalist who invests in artificial intelligence companies, the main benefit of training machines on massive data sets is that they can now use language to mimic human reasoning. The important step for us, he said, is to pair that capability with high-quality information.