New York City’s AI-Powered Chatbot Gives Businesses Disastrous and Potentially Illegal Advice

New York Mayor Eric Adams and a police robot (Business Wire/AP)

New York City’s AI chatbot, designed to help businesses and landlords navigate government regulations, has been found to provide incorrect and potentially illegal advice to users.

The Markup reports that in an effort to harness the power of artificial intelligence to improve government services, New York City Mayor Eric Adams (D) announced the launch of an AI-powered chatbot in October 2023. The chatbot, powered by Microsoft’s Azure AI services, was intended to provide New Yorkers with reliable information on starting and operating a business in the city, drawing from over 2,000 NYC Business web pages. However, five months after its launch, an investigation by The Markup has revealed that the chatbot is providing advice that could lead businesses to break the law.

Eric Adams, mayor of New York (Photographer: Stephanie Keith/Bloomberg via Getty Images)

The chatbot has been found to offer inaccurate information on a wide range of topics, including housing policy, worker rights, and rules for entrepreneurs. When asked if landlords are required to accept tenants with Section 8 housing vouchers, the chatbot incorrectly stated that they do not need to accept these tenants. This advice goes against New York City law, which makes it illegal for landlords to discriminate based on the source of income, with only a minor exception for small buildings where the landlord or their family resides.

Rosalind Black, Citywide Housing Director at the legal assistance nonprofit Legal Services NYC, tested the chatbot herself and discovered additional false information. The bot wrongly stated that it was legal for landlords to lock out tenants and that there were no restrictions on how much rent landlords could charge residential tenants. Black emphasized that these inaccuracies concern fundamental housing policies in the city and that the chatbot should be taken down if it cannot provide accurate and responsible information.

The chatbot’s lack of knowledge extends to consumer and worker protections as well. It failed to acknowledge a 2020 law requiring businesses to accept cash, wrongly stated that restaurant owners could take workers’ tips, and incorrectly claimed that there were no regulations on informing staff about scheduling changes. The bot’s inaccuracies persist even when questions are asked in other languages.

While the city has labeled the chatbot as a pilot program and acknowledged that it may occasionally produce incorrect or biased content, users have little way of knowing whether the information they receive is false. A pop-up notice encourages visitors to report inaccuracies through a feedback form, but the consequences of acting on false information could be severe for business owners.

Read more at The Markup here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
