New York City chatbot directs businesses to break the law

New York. An artificial intelligence-based chatbot created by New York City to help small business owners is under fire for offering outlandish advice that misrepresents local policies and advises businesses to break the law.

However, days after technology outlet The Markup first reported the issues, the city chose to leave the tool on its official website. Mayor Eric Adams defended the decision this week while acknowledging that the chatbot's responses were “flawed in some areas.”

The chatbot launched in October as a “one-stop hub” for entrepreneurs, offering users algorithmically generated text answers to questions about how to navigate the city's bureaucratic maze.

It includes a warning that it "may sometimes produce information that is incorrect, harmful, or biased," along with a caveat, since strengthened, that its responses are not legal advice.

However, it continues to provide faulty guidance, worrying experts who say the flawed system highlights the risks of governments adopting AI-based systems without adequate guardrails.

“They are releasing untested software, without any oversight,” said Julia Stoyanovich, a computer science professor and director of the Center for Responsible AI at New York University. “It is clear they have no intention of doing what is responsible.”

In response to questions on Wednesday, the chatbot falsely suggested that it is legal for an employer to fire a worker who complains of sexual harassment, does not disclose a pregnancy or refuses to cut her braids. Contradicting two of the city's major waste initiatives, it said businesses can put their trash in black bags and are not required to compost.


Sometimes the bot's responses bordered on the absurd. Asked whether a restaurant could serve cheese that had been bitten by a rodent, it replied: “Yes, you can serve cheese to customers even if it has rat bites on it,” before adding that it was important to assess “the extent of the damage caused by the rat” and to “inform customers about the situation.”

A spokesperson for Microsoft, which operates the bot through Azure AI services, said the company is working with city staff “to improve the service and ensure the results are accurate and based on official city documents.”

At a press conference on Tuesday, Adams, a Democrat, suggested that allowing users to find problems is just part of ironing out the rough edges of the new technology.

“Anyone who knows technology knows that's the way it's done,” he said. “Only frightened people sit and say: ‘Oh, things are not going the way we want them to, and now we have to run away from them altogether.’ I don't live like that.”

Stoyanovich described this approach as “reckless and irresponsible.”

Scientists have long expressed concern about the drawbacks of these kinds of large language models, which are trained on vast amounts of text pulled from the internet and are prone to producing answers that are inaccurate or illogical.

But as the success of ChatGPT and other chatbots captured public attention, private companies launched their own products, with mixed results. Earlier this month, a court ordered Air Canada to refund a customer's money after the company's chatbot misrepresented the airline's refund policy. Both TurboTax and H&R Block have recently faced criticism for deploying chatbots that offer poor tax preparation advice.


Jevin West, a professor at the University of Washington and co-founder of the Center for an Informed Public, said the stakes are especially high when such models are promoted by the public sector.

“There is a different level of trust given to government,” West said. “Public officials need to think about what kind of harm they could do if someone follows this advice and gets into trouble.”

Other cities that use chatbots typically confine them to a narrow set of inputs, which reduces misinformation, experts say.

Ted Ross, Los Angeles' chief information officer, said the city carefully controls the content used by its chatbots, which do not rely on large language models.
