Last week I wrote (Trust AI – to get it wrong…?) about Academia’s AI podcast of an article I wrote some 28 years ago, published in IASL’s journal School Libraries Worldwide in 1997. I was disturbed by Academia’s use of the article without my permission on two counts: (1) because they gave the impression that I had uploaded the recording (and thereby given my blessing) when I had done neither, and (2) because the podcast gave a very false impression of my research methods and of what the article is about. The AI had misread and misinterpreted the piece, and I was not impressed.
Just a few days later I got another email from Academia, this time telling me that their AI had reviewed my article. Reviewed? In their dreams! Nightmares!! Hallucinations!!!
Although presented as a peer review of sorts, with these sections:
- Overview
- Relevant references
- Strengths
- Major comments (two paragraphs headed respectively Methodology and Clarity)
- Minor comments (one paragraph headed Presentation)
- Reviewer commentary
- Summary assessment
the review made no sense, from its beginning (AI Review of “ ”) to its end.
There was no title, no indication of what was being reviewed nor, thankfully, of authorship. The text makes no sense. I have checked some of the hyperlinks and URLs in the list of “relevant references” and they do lead to live web pages. Some of them carry the title shown in the list, though not necessarily the authors or other bibliographic details as shown, and I cannot think what content on the pages might have contributed to any kind of article or paper. Authors listed, incidentally, include Henry James and Alexandre Dumas.
Indeed, it all sounds and reads as if this is an AI peer-review of a “paper” written by AI. Is that a bit incestuous?
As I read it, the review sounds learned but says little. I am reminded of Eric Morecambe’s comeback to André Previn, that he was “playing all the right notes but not necessarily in the right order”* – the words in the review sound right but the sentences make little sense – at least, not to me. At best they are generalizations and stock phrases, pulled together to make some kind of narrative.
One thing is for sure, a review of my work this is not. Whose work it might be, if anyone’s, who knows?
Once again, Academia, I am not impressed.
The advice to anyone using AI, certainly generative AI or large-language models, is to engage brain and verify, verify, verify. Academia.edu, are you listening?
* Graham McCann, The Prelude of Mr Preview: How André Previn won over Morecambe & Wise, 2020, Comedy Chronicles.
The review in full (PDF file)
ADDENDUM: After posting this report, I did what I probably should have done straight off: searched to see if anyone else had had a similar experience.
I do not see anyone else having received a notification from Academia.edu and reported on it, but I did find a review of Academia.edu’s AI reviews. The report, by Miklós Sebők and Rebeka Kiss and published on the Prompt Revolution site, is titled Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback. Sebők and Kiss submitted several genuine papers for review by the Academia.edu AI Reviewer. Suffice it to say that even with genuine papers they were not impressed either, noting, among other things (including difficulty in uploading their papers), that “the tool often returns repetitive and overly general suggestions, with little evidence of meaningful engagement with the actual content, methodology, or disciplinary context of the submissions”. Right!
Corrections: I don’t know how I made TWO errors in the opening paragraph – apologies. The errors have been corrected; for the record, the original paragraph read:
Last week I wrote (Trust AI – to get it wrong…?) of Academia’s AI podcast of an article I wrote some 18 years ago, published in IASL’s journal School Libraries Worldwide in 1987.
I also received one of the notifications that there was a podcast / review – to be honest I ignored / deleted it and didn’t even bother to look – maybe I should have. I’m so singularly unimpressed by most of what AI is and does that ignoring and deleting is becoming a default mode – the question actually should be: what on earth can we do about it, if anything? Do authors have any rights, or did we give them up when joining these sites!? And, worst of all, will future researchers just read AI reviews or abstracts, or listen to AI podcasts, when doing their literature reviews, and get the wrong end of the stick without bothering to revert to the original?
Good questions, Nadine.
Is there anything we can do when Academia.edu provides misinformation or misrepresents our work? The T&Cs page <https://www.academia.edu/terms> states that although we retain ownership of and copyright in any material we upload to the platform, by uploading we give Academia.edu the right to use our content in any way…
This could be an example of what Cory Doctorow calls “enshittification”: the gradual downgrading, degrading and decay of once innovative, genuinely helpful services into services which are not so helpful, not so user-friendly, and sometimes far more expensive.
As for what we can do about it in education, curiously I think that much of the advice suggested in my 28-year-old paper remains just as valid – and possibly more urgent – most especially: read critically, think critically (Ira Winn once posited, I think in an article in Phi Delta Kappan, that “the opposite of ‘critical thinking’ is ‘uncritical thinking’”).
John