MIT Technology Review: Hundreds of AI Projects Completely Failed to Help in Coronavirus Detection

Photo caption: Nurse Canan Emcan shows a test kit for coronavirus samples at an isolation ward (INA FASSBENDER/AFP via Getty Images)

In a recent article, the MIT Technology Review notes that during the coronavirus pandemic a large number of AI-powered tools were developed in an effort to predict the effects of the virus, but none appear to have made any real difference.

In an article titled “Hundreds of AI Tools Have Been Built to Catch Covid. None of Them Helped,” the MIT Technology Review reports that when the coronavirus pandemic first reached Europe around March 2020, AI researchers began looking for ways to help.

With data just beginning to be released from China, where the virus appeared to originate, it was believed that machine learning algorithms could be trained on that data and that their predictions could help doctors better understand the virus and decide how to handle the ongoing situation.

However, despite the efforts of the AI community around the world, its output appears to have been of little use. Researchers developed software they believed would allow hospitals to diagnose or triage patients faster, but in the end the majority of the predictive tools made little difference, and some may even have been harmful.

The MIT Technology Review writes:

That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid.

This echoes the results of two major studies that assessed hundreds of predictive tools developed last year. Wynants is lead author of one of them, a review in the British Medical Journal that is still being updated as new tools are released and existing ones tested. She and her colleagues have looked at 232 algorithms for diagnosing patients or predicting how sick those with the disease might get. They found that none of them were fit for clinical use. Just two have been singled out as being promising enough for future testing.

Laure Wynants, an epidemiologist at Maastricht University in the Netherlands who studies predictive tools, commented: “It’s shocking. I went into it with some worries, but this exceeded my fears.”

The MIT Technology Review adds:

Wynants’s study is backed up by another large review carried out by Derek Driggs, a machine-learning researcher at the University of Cambridge, and his colleagues, and published in Nature Machine Intelligence. This team zoomed in on deep-learning models for diagnosing covid and predicting patient risk from medical images, such as chest x-rays and chest computer tomography (CT) scans. They looked at 415 published tools and, like Wynants and her colleagues, concluded that none were fit for clinical use.

Read more at the MIT Technology Review here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com
