The Italian government’s data protection agency has announced that the artificial intelligence bot ChatGPT will be banned from the country until it complies with regulations on data privacy.
Italy’s Guarantor for the Protection of Personal Data announced on Friday that OpenAI, the Silicon Valley tech company behind the ChatGPT software application used to simulate and process human-like conversations, will be temporarily barred from accessing the Italian user data with which it develops its algorithms and machine learning until it complies with privacy regulations.
The regulator went on to say that it has opened up an investigation in response to an alleged data breach “regarding user conversations and information related to the payment of subscribers to the paid service.”
OpenAI was accused by the Privacy Guarantor of a lack of transparency towards its users and other interested parties regarding the data it collects, and the regulator claimed that there is no legal justification for the program to sweep up massive swaths of data from the internet in order for its algorithm to be trained to mimic human responses to prompts.
In addition, the Italian regulators said that the tech company has contravened its own terms of service for ChatGPT, in that while the service is directed towards those over the age of 13, there are no age verification filters to prevent young children from accessing the programme.
ChatGPT provides “absolutely unsuitable answers with respect to their degree of development and self-awareness,” the privacy watchdog said of children using the AI bot.
The privacy regulator went on to demand that OpenAI “communicate within 20 days the measures undertaken” in order to come into compliance with regulations or risk facing a fine of 20 million euros or 4 per cent of its global turnover.
The move from the Italian government comes just days after a report from the European Union Agency for Law Enforcement Cooperation (Europol), which warned of potential criminal applications of the artificial intelligence application.
Europol warned that ChatGPT could be a potential boon to fraudsters, saying: “ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.”
The European Union’s law enforcement agency went on to warn that the bot’s ability to produce “authentic sounding text at speed and scale” also makes it ideal for spreading propaganda and disinformation.
Finally, Europol said that because ChatGPT is adept at producing programming code in addition to replicating human speech, cybercriminals could exploit the programme to carry out hacking and other malicious cybercrimes even if the criminals themselves have limited technological know-how.
Follow Kurt Zindulka on Twitter here @KurtZindulka