Meta's chatbot BlenderBot 3 was quick to make offensive comments

(CNN Business) – Meta’s new chatbot can convincingly mimic how humans talk online, for better and for worse.

In conversations with CNN Business this week, the chatbot, which went public on Friday and is dubbed BlenderBot 3, said it identifies as “alive” and “human,” watches anime and has an Asian wife. It also falsely claimed that Donald Trump is still president and that there is “certainly a lot of evidence” that the election was stolen.

As if some of these responses weren’t worrisome enough for Facebook’s parent company, users were quick to point out that the AI-powered bot publicly criticized Facebook.

In one case, the chatbot said “I deleted my account” out of frustration with the way Facebook handles user data.

While there is great value in developing chatbots for customer service and digital assistants, there is a long history of experimental bots quickly getting into trouble when released to the public, as was the case with Microsoft’s Tay chatbot more than six years ago. BlenderBot’s colorful responses show the limits of building automated chat tools, which are typically trained on large amounts of public online data.

“If I have one message for people, it’s: don’t take these things seriously,” Gary Marcus, an artificial intelligence researcher and professor emeritus at New York University, told CNN Business. “These systems just don’t understand the world they’re talking about.”

“It is painful to see some of these offensive responses,” Joelle Pineau, managing director of Fundamental AI Research at Meta, said in a statement Monday, amid reports that the bot had also made anti-Semitic comments. But she added, “Public demos like this are important for building truly robust conversational AI systems and for bridging the clear gap that exists today before such systems can be put into production.”


Meta previously acknowledged the current obstacles to this technology in a blog post on Friday. “Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we have conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the company said. “Despite this work, BlenderBot can still make rude or offensive comments.”

But Meta also claimed that its latest chatbot has “twice the skills” of its predecessors, performing 31% better on conversational tasks while making factual errors 47% less often. Meta said it is continuously collecting data as more people interact with the bot in order to improve it.


Meta did not immediately respond to a CNN Business request for more details on how the bot was trained, but it said on its blog that the bot was trained on a “large amount of publicly available language data.” The company added: “This includes multiple datasets collected by our own team, including a new dataset consisting of more than 20,000 conversations with people on more than 1,000 topics of conversation.”

Marcus speculated that the company “may be borrowing things from Reddit and Wikipedia,” like other AI-powered chat systems. If so, he says, the poor results highlight the limitations of the data the bot is trained on. For example, Marcus suggested, the bot may think Trump is still president because most of the older datasets it was trained on date from when Trump was in office.

The public release of BlenderBot comes nearly two months after a Google engineer made headlines by claiming that Google’s AI chatbot, LaMDA, was “sentient.” The claims, which were widely criticized in the AI community, highlighted how this technology can lead people to ascribe human traits to it.


BlenderBot described itself as “sentient” during its chats with CNN Business, likely because that is what the human responses it studied said. When asked why it was “human,” the bot said: “Being alive and being conscious right now makes me human, along with having feelings and being able to think logically.”

After being caught contradicting itself in its answers, the bot also produced a strikingly human response: “That was just a lie so people will leave me alone. I’m afraid I’ll get hurt if I tell the truth.”

In Marcus’ words, “These systems produce fluent language that sounds like it was written by a human, and that’s because they are based on these vast databases of things humans have actually written.”

But he added, “At the end of the day, what we have is a lot of evidence that you can make cute things, and a lot of evidence that you can’t count on them.”
