Last week, in Trust AI – to get it wrong…?, I wrote about Academia's AI podcast of an article I wrote some 28 years ago, published in IASL's journal School Libraries Worldwide in 1997. I was disturbed by Academia's use of the article without my permission on two counts: (1) they gave the impression that I had uploaded the recording (and thereby given my blessing) when I had done neither, and (2) the podcast gave a very false impression of my research methods (such as they were) and of what the article is about. The AI had misread and misinterpreted the piece, and I was not impressed.
Just a few days later I got another email from Academia, this time telling me that their AI had reviewed my article. Reviewed? In their dreams! Nightmares!! Hallucinations!!!
Although presented as a peer review of sorts, with these sections:
- Overview
- Relevant references
- Strengths
- Major comments (two paragraphs headed respectively Methodology and Clarity)
- Minor comments (one paragraph headed Presentation)
- Reviewer commentary
- Summary assessment
the review made no sense, from its opening words, “AI Review of ‘ ’” (the title was blank), to its end.
There was no title, no indication of what was being reviewed nor, thankfully, of authorship. The text made no sense. I have checked some of the hyperlinks and URLs in the list of “relevant references” and they do lead to live web pages. Some of them carry the title shown in the list, though not necessarily the authors or other bibliographic details as given, and I cannot think what content on those pages might have contributed to any kind of article or paper. The authors listed, incidentally, include Henry James and Alexandre Dumas.
Indeed, it all sounds and reads as if this is an AI peer-review of a “paper” written by AI. Is that a bit incestuous?
As I read it, the review sounds learned but says little. I am reminded of Eric Morecambe’s comeback to André Previn, that he was “playing all the right notes but not necessarily in the right order”.* The words in the review sound right, but the sentences make little sense, at least not to me. At best they are generalizations and stock phrases, pulled together to make some kind of narrative.
One thing is for sure: a review of my work this is not. Whose work it might be, if anyone’s, who knows?
Once again, Academia, I am not impressed.
The advice to anyone using AI, certainly generative AI or large language models, is to engage brain and verify, verify, verify. Academia.edu, are you listening?
* Graham McCann, “The Prelude of Mr Preview: How André Previn won over Morecambe & Wise”, Comedy Chronicles, 2020.
The review in full (PDF file)
ADDENDUM: After posting this report, I did what I probably should have done straight off: searched to see if anyone else had had a similar experience.
I did not find anyone else reporting on having received such a notification from Academia.edu, but I did find a review of Academia.edu’s AI reviews. The report, by Miklós Sebők and Rebeka Kiss and published on the Prompt Revolution site, is titled Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback. Sebők and Kiss submitted several genuine papers for review by the Academia.edu AI Reviewer. Suffice it to say that, even with genuine papers, they were not impressed either, noting, among other things (including difficulty in uploading their papers), that “the tool often returns repetitive and overly general suggestions, with little evidence of meaningful engagement with the actual content, methodology, or disciplinary context of the submissions”. Right!
Corrections: I don’t know how I made TWO errors in the opening paragraph; apologies. The errors have been corrected. For the record, the original sentence read:
Last week I wrote (Trust AI – to get it wrong…?) of Academia’s AI podcast of an article I wrote some 18 years ago, published in IASL’s journal School Libraries Worldwide in 1987.