The far-left New York Times has permanently cut ties with a freelance book reviewer over “a serious violation” involving the use of AI.
Journalist and author Alex Preston’s January 6 review of Jean-Baptiste Andrea’s Watching Over Her drew a complaint this month that it was uncomfortably similar to a review of the same novel the far-left Guardian had published months earlier, in August.
After an investigation, the Times spoke to Preston, who admitted that he used an “AI tool to help draft the piece and that he failed to catch the Guardian material before the paper published the review,” reports the far-left TheWrap.
“Preston told the Times he had not used AI to help draft any of his other stories,” the report adds. “The paper notified the Guardian about the similarities and, on Monday, added an editors’ note to the review that noted the use of AI and linked to the Guardian review.”
The March 30 editors’ note reads:
A reader recently alerted The Times that this review included language and details similar to those in a review of the same book published in The Guardian. We spoke to the author of this piece, a freelancer reviewer, who told us he used an A.I. tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on A.I. and his use of unattributed work by another writer are a clear violation of The Times’s standards. The reviewer said he had not used A.I. in his previous reviews for The Times, and we have found no issues in those pieces.
The Times closed with a link to the Guardian review.
In a statement to TheWrap, Preston said that he had used an “A.I. editing tool improperly on a draft I had written” and missed the “overlapping language” from the Guardian piece. “I took responsibility immediately and apologized to the New York Times. Beyond that, I have nothing more to add.”
Here’s a guy who’s written six books and as many book reviews for the Times going back to 2021, and now this will forever show up in his Google results. That seems pretty harsh for what looks like an honest mistake. I can’t imagine someone with Preston’s background trying to sneak something like this past the Times’ editors, or even believing he could. Those editors share some of the blame here. Don’t they have tools to check for possible plagiarism?
So far I haven’t found AI useful for much of anything other than producing a transcript of a video or podcast. The idea that this clumsy technology will someday become sentient and produce time-traveling robot-killers sounds preposterous to me.
It’s good to see the Times putting all this energy toward an AI mishap — you know, an investigation followed by an editors’ note followed by cutting the writer off at the knees. Why can’t the Times be half as diligent and willing to deal out consequences when its news “reporters” lie and publish lies?
The book section might be shipshape, but the news section is still a steaming pile of disinformation and outright garbage.

