A.I. ‘Risks Undermining the Fabric of Our Society’, Says Top British Spy

Ciaran Martin (UK National Cyber Security Centre) during CYBERUK (Getty Images)

The founding head of the UK National Cyber Security Centre, part of the government's signals intelligence agency, warns that A.I.'s aptitude for producing convincing fakes risks undermining the trust that underpins society.

Artificial intelligence (A.I.) has made producing convincing fakes so easy that it threatens society, says Ciaran Martin, the founding chief of the National Cyber Security Centre (NCSC), an arm of the Government Communications Headquarters (GCHQ), the UK's NSA equivalent.

Speaking to The Times, the former spymaster turned academic said governments were “struggling to keep up with this” because A.I. research is being done by private companies. Lamenting, somewhat euphemistically, that this means governments are unable to control the development of the technology to their own ends, Martin said:

AI is now making it much easier to fake things, much easier to spoof voices, much easier to look like genuine information, much easier to put that out at scale… So having a sense of what is true and reliable, it’s going to become much more difficult. And that’s something that risks undermining the fabric of our society.

Part of the answer, Martin said, is to validate information in a way that is “credible, and in sort of [an] economically efficient way”, but this is “a really, really difficult challenge”. The former top civil servant said the government was not doing enough to regulate information in this way, and continued: “Everybody was warning about the apocalyptic consequences of our dependence on cybersecurity in a way that could cause large-scale fatalities. We need to take a balanced and responsible approach to these risks.”

Martin’s comments come months after Europol warned that the recent spread of A.I. risked a surge in fraud, as “ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes… can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.”

Responses to A.I. vary. The United Kingdom has moved to investigate the technology with a view to regulating it, an approach Martin warns risks leaving governments “behind the curve” as development outpaces the laws meant to constrain it. Italy, by contrast, has moved to block the tech altogether, citing privacy concerns.

As previously reported, Italy instituted a temporary ban on ChatGPT in March, with the report noting that:

OpenAI was accused by the Privacy Guarantor of a lack of transparency to its users and other interested parties in terms of the data it collects, and the regulator claimed that there is no legal justification for the program to sweep up massive swaths of data from the internet in order for its algorithm to be trained to mimic human responses to prompts.

Romania, meanwhile, has gone strongly in the other direction, with the government appointing an A.I.-powered advisor to the Prime Minister.
