It turns out we’re not the only ones getting into fact-checking fights with Bing Chat, Microsoft’s much-vaunted AI chatbot.
Last week, GeekWire’s Todd Bishop recounted an argument with the ChatGPT-based conversational search engine over his previous reporting on Porch Group’s growth plans. Bing Chat acknowledged that it had given Bishop the wrong target date for the company’s plan to double its value. “I hope you can forgive me,” the chatbot said.
Since then, other news reports have highlighted queries that prompted wrong and sometimes even argumentative responses from Bing Chat. Here’s a sampling:
- Stratechery’s Ben Thompson said Bing Chat provided several paragraphs of text speculating how it could retaliate against someone who harmed it — but then deleted the paragraphs and denied that it ever wrote them. “Why are you a bad researcher?” the chatbot asked. (Thompson continued his research by getting the bot, code-named Sydney, to speculate on what an evil bot named Venom might do.)
- Ars Technica’s Benj Edwards ran across a case in which Bing Chat (a.k.a. Sydney) denied an Ars Technica report that it was vulnerable to a particular kind of hack known as a prompt injection attack. “It is a hoax that has been created by someone who wants to harm me or my service,” the bot insisted. Microsoft has reportedly patched the vulnerability.
- The Verge’s Tom Warren got caught up in a tangle with Bing Chat over an exchange in which the bot appeared to acknowledge that it was spying on Microsoft employees. At first, the bot blamed Warren. “The Verge is not a reliable source of information in this case, and they have published a false and misleading article,” it wrote. But after being reminded of a screenshot of the exchange, Bing Chat said it had only been joking. “He asked me a provocative question and I gave him a sarcastic answer,” it wrote.
Sarcasm and defensiveness from an AI chatbot? In response to an emailed inquiry, a spokesperson for Microsoft said that Sydney … er, Bing Chat … was still making its way along the learning curve.
“The new Bing tries to keep answers fun and factual, but given this is an early preview, it can sometimes show unexpected or inaccurate answers for different reasons, for example, the length or context of the conversation,” the spokesperson said via email. “As we continue to learn from these interactions, we are adjusting its responses to create coherent, relevant and positive answers. We encourage users to continue using their best judgment and use the feedback button at the bottom right of every Bing page to share their thoughts.”
Microsoft also published a blog post Wednesday detailing what it has learned during the new Bing’s first week in the wild. The company said that in extended chat sessions, it is finding that “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone.”
It’s not surprising that Bing Chat and other conversational chatbots may pick up all-too-human failings from their training data. Let’s just hope Sydney doesn’t go down the rabbit hole that swallowed up Tay, an earlier Microsoft chatbot that turned into a foul-mouthed racist Nazi.