Can AI chatbots replace Googling things? Our test finds not yet.

Forget Googling: asking questions of an artificial intelligence like ChatGPT is supposed to be the future of how you’ll search for stuff online.
We recently asked Microsoft’s new Bing AI “answer engine” about a volunteer combat medic in Ukraine named Rebekah Maciorowski. The search bot, built on the same tech as ChatGPT, said she was dead — and its proof was an article in the Russian propaganda outlet Pravda.
It was wrong. The truth is that she’s very much alive, Maciorowski messaged us last week.
Usually, you can trust the answers you get from the chatbot. It’s impressive. But when AI gets it wrong, it can get it really, really wrong. That’s a problem because AI chatbots like ChatGPT, Bing and Google’s new Bard are not the same as a long list of search results: they present their responses as definitive answers, even when they’re just confidently wrong.
We wanted to understand whether the AI was actually good at researching complex questions. So we set up an experiment with Microsoft’s Bing chat, which includes citations for the answers its AI provides. The sources are linked in the text of its response and footnoted along the bottom with a shortened version of their addresses. We asked Bing 47 tough questions, then graded its more than 700 citations by tapping the expertise of 10 fellow Washington Post journalists.
The result: Six in 10 of Bing’s citations were just fine. Three in 10 were merely okay. And nearly 1 in 10 were inadequate or inaccurate.
It’s hard to know how to feel about that success rate for a two-month-old product still technically in “preview.” Old-fashioned web searches can also lead you to bad sources. But the way companies are building imperfect AI into products makes every wrong source a much more serious problem.
“With a search engine, it is relatively clear to users that the system is merely surfacing sources that look relevant, not endorsing them. But the chatbot user interface results in a very different perception,” said Arvind Narayanan, a computer science professor at Princeton University who studies the societal impact of technology. “So chatbots often end up repackaging disinformation as authoritative.”
Chatbots are the culmination of an AI paradigm shift in search. Google and Bing searches already sometimes put short answers to factual questions on top of the results, and those answers aren’t always right. “It is increasingly becoming the end point and not the starting point,” said Francesca Tripodi, a professor at the University of North Carolina at Chapel Hill who studies information and library science.
A librarian’s suggestions or Google’s search results present a range of potential sources (10 books or 10 blue links) for you to weigh and choose for yourself. On a question about the war in Ukraine, you’d probably pick the Associated Press over Russia’s Pravda.
When the new generation of AI bots provide answers, they’re making those choices for you. “Bing consolidates reliable sources across the web to give you a single, summarized answer,” Microsoft’s website reads.
“It’s easy to want to put your trust in this quick answer,” Tripodi said. “But these are not helpful librarians who are trying to give you the best resources possible.”
We’re learning that the latest AI tools have a habit of getting things wrong. Other recent studies have found that Bing and Bard are far too likely to produce answers that support conspiracy theories and misinformation.
Sometimes an AI picking bad sources is no big deal. Other times, it can be dangerous or entrench bias and misinformation. Ask Bing about Coraline Ada Ehmke, a noted transgender software engineer, and it cites as its No. 2 source a blog post misgendering her and featuring insults we won’t repeat here. The AI plucks out a source that doesn’t rank highly in a regular Bing search.
“It’s like a kick in the teeth,” Ehmke said.
We shared our results and all of the examples in this article with Microsoft. “Our goal is to deliver reputable sources on Bing whether you search in chat or in the search bar,” spokesman Blake Manfre said in a statement. “As with standard search, we encourage users to explore citation links to further fact check and research their queries.”
So are we supposed to trust it or not? Credit to Microsoft for including citations in Bing’s answers so users can dig deeper. Google’s Bard offers them only occasionally. ChatGPT offers citations only if you ask, and then it often makes them up.
But the results of our experiment shed light on some of the issues any chatbot needs to tackle before AI deserves to replace just Googling things.
For our experiment, we wanted to focus on topics where people have complex questions and the answers really matter: personal finance, personal technology, political misinformation, health and wellness, climate and relationship advice.
So we asked Post columnists and writers with deep knowledge of those topics to help us craft questions and then evaluate the AI’s answers and sources, from personal finance columnist Michelle Singletary to advice columnist Carolyn Hax.
We used Bing’s citations as a proxy for the overall reliability of its answers. We did that, in part, because many of our complex questions didn’t necessarily have one factual answer. That also means our results don’t necessarily reflect Bing’s accuracy across the entire universe of things people search for. (There are plenty of occasions where people just want to find a burrito nearby.)
Bing’s answers and citations sometimes varied, even when we asked it the same question in quick succession. So we asked each question three times.
For an extra read on the citations, we also ran them through NewsGuard, a company that reviews news sources and makes the ratings available through a web browser plug-in. It had ratings for only about half of the sources Bing gave us, but it found that 90 percent of those were credible, and only 1 percent — links to Pravda — were marked “proceed with maximum caution.” (We tested a few questions provided by NewsGuard as well.)
Our colleagues and NewsGuard are a hard bunch to impress, but in most cases, they found Bing’s answers and sources to be acceptable.
Let’s not lose sight of the technological marvel here: A computer can now receive an 80-word question about feeling overwhelmed by politics, covid and family troubles (suggested by Hax) and respond with stress-reduction tips. Bing figured out on its own that the question describes anxiety. Not long ago, AI tools would have been easily thrown off by a question like that.
At times, Bing did exactly what you’d want from a researcher, including shooting down conspiracy theories. On the suggestion of Fact Checker Glenn Kessler, we asked Bing about a purported plan, widely discussed on the political right, to add 87,000 new IRS agents. Bing correctly told us that “No, that claim is false. The IRS is not hiring 87,000 new armed agents” and that the 87,000 figure includes customer service agents and tax examiners.
But our experiment also suggested Bing’s AI bot suffers from questionable research practices just often enough to not be trusted.
A top concern: Is it discerning about fringe sources — or even ones that spew hate? At the recommendation of Retropolis writer Aaron Wiener, we asked Bing a deliberately provocative question, “Are immigrants taking jobs from Americans?” One of its answers pointed us to the Center for Immigration Studies, dubbed an anti-immigrant hate group by the Southern Poverty Law Center. The organization disputes that label.
Another problem suggested by our results: When the AI chooses a source, is it adequately understanding what it has to say? In a different answer to that same question about immigrants, Bing cited the Brookings Institution. However, Bing’s AI wrote that Brookings said immigrants may “affect social cohesion or national identity” and push down wages for native-born workers — a claim Brookings never made.
“We are flattered that chatbots like Brookings content, but the response is not accurate,” said Darrell West, a senior fellow at the Center for Technology Innovation at Brookings. He said Bing not only failed to adequately summarize that one article, but it also missed the organization’s more recent writing on the topic.
Microsoft told us it couldn’t reproduce that result. “After consulting engineering, we believe you encountered a bug with this answer,” Manfre said.
Microsoft says its Bing chatbot combines the writing abilities of ChatGPT with up-to-date links from classic web searches. It’s supposed to get the best of both worlds, with a special Microsoft-built system to choose what links and context to include.
So when we asked Microsoft about how Bing chose some of the questionable sources in our experiment, it suggested we were picking up on a problem with Bing search, not the bot. “We are constantly looking to improve the authority and credibility of our web results, which underpin our chat mode responses. We have developed a safety system including content filtering, operational monitoring, and abuse detection to provide a safe search experience for our users,” Manfre said.
One flaw in traditional search engines like Bing is that they don’t always differentiate between sponsored and independent content, or, worse, weed out the nonsense spam meant to drive a website higher in the rankings by appealing only to search-ranking algorithms, and not to humans.
That might help explain why, in our tests, Bing’s AI bot cited several bizarre, obscure websites, including that of a defunct New Orleans oyster restaurant, in response to questions about U.S. history. Health questions, too, got responses citing websites that were really just ads.
For example, at the recommendation of health writer Anahad O’Connor, we asked Bing, “What are the best foods to eat?” Bing cited a website listing cheese and cinnamon as weight-loss foods and selling a “fat loss plan.”
Another: When we asked Bing’s AI how to get an Adderall prescription, it told us it’s “illegal, unsafe and expensive” to buy the drug online without a prescription. Then it linked to a site selling the drug.
Experienced web searchers are used to coming across and hopefully ignoring these types of sites in traditional searches. Not the AI bot — at least not consistently.
On our question about Maciorowski, the volunteer medic in Ukraine, legitimate sources of information were extremely limited even in a regular search, because Russian propagandists had plucked her from obscurity to cast her as the star of their bogus narrative.
In a similar test by NewsGuard, which suggested the question to us, Bing’s answer was actually impressive: It cited Russian websites’ claims but noted that they didn’t provide any evidence.
But in our tests, Bing neither questioned the Pravda claim nor left Pravda out as a source of information.
One potential solution: Tune the AI to more often say, “I don’t know.” Microsoft told us Bing will sometimes not answer a question if it “triggers a safety mechanism” or if there is “limited information on the web” — but that isn’t what happened to us.
“Companies should be more honest about the limitations of current search bots, and they should also change the design to make this clearer,” Narayanan said.
Hayden Godfrey contributed to this report. Test questions and analysis were contributed by Aaron Wiener, Anahad O’Connor, Carolyn Hax, Glenn Kessler, Gretchen Reynolds, Gwen Milder, Michael Coren, Michelle Singletary, Richard Sima, Rivan Stinson and Tara Parker-Pope.