Yerwhat?

 

Last week I wrote (Trust AI – to get it wrong…?) about Academia’s AI podcast of an article I wrote some 28 years ago, published in IASL’s journal School Libraries Worldwide in 1997. I was disturbed by Academia’s use of the article without my permission, on two counts: (1) because they gave the impression that I had uploaded the recording (and thereby given it my blessing) when I had done neither, and (2) because the podcast gave a very false impression of my research methods (?) and also of what the article is about. The AI had misread and misinterpreted the piece, and I was not impressed.

Just a few days later I got another email from Academia, this time telling me that their AI had reviewed my article. Reviewed? In their dreams! Nightmares!! Hallucinations!!!

Although presented as a peer review of sorts, with these sections:

    • Overview
    • Relevant references
    • Strengths
    • Major comments (two paragraphs headed respectively Methodology and Clarity)
    • Minor comments (one paragraph headed Presentation)
    • Reviewer commentary
    • Summary assessment

the review made no sense, from its opening line – AI Review of ” “ – to the end.

There was no title, no indication of what was being reviewed nor, thankfully, of authorship. The text makes no sense. I have checked some of the hyperlinks and URLs in the list of “relevant references” and they do lead to live web-pages. Some of them carry the title shown in the list, though not necessarily the authors or other bibliographic details as shown, and I cannot think what content on the pages might have contributed to any kind of article or paper. The listed authors, incidentally, include Henry James and Alexandre Dumas.

Indeed, it all sounds and reads as if this is an AI peer-review of a “paper” written by AI. Is that a bit incestuous?

As I read it, the review sounds learned but says little. I am reminded of Eric Morecambe’s comeback to André Previn, that he was “playing all the right notes – but not necessarily in the right order”* – the words in the review sound right, but the sentences make little sense, at least not to me. At best they are generalizations and stock phrases, pulled together to make some kind of narrative.

One thing is for sure: a review of my work this is not. Whose work it might be, if anyone’s, who knows?

Once again, Academia, I am not impressed.

The advice to anyone using AI, certainly generative AI or large-language models, is to engage brain and verify, verify, verify. Academia.edu, are you listening?

* Graham McCann, The Prelude of Mr Preview: How André Previn won over Morecambe & Wise, Comedy Chronicles, 2020.

The review in full (PDF file)

ADDENDUM: After posting this report, I did what I probably should have done straight off: searched to see if anyone else had had similar experience.

I do not see anyone else having received notification from Academia.edu and reporting on it, but I did find a review of Academia.edu’s AI reviews. The report, by Miklós Sebők and Rebeka Kiss and published on the Prompt Revolution site, is titled Testing Academia.edu’s AI Reviewer: Technical Errors and Template-Based Feedback. Sebők and Kiss submitted several genuine papers for review by the Academia.edu AI Reviewer. Enough to say here that, even with genuine papers, they were not impressed either. Among other problems, including difficulty in uploading their papers, they note that “the tool often returns repetitive and overly general suggestions, with little evidence of meaningful engagement with the actual content, methodology, or disciplinary context of the submissions”. Right!

Corrections: I don’t know how I made TWO errors in the opening paragraph – apologies. The errors have been corrected; for the record, the original paragraph read:

Last week I wrote (Trust AI – to get it wrong…?) of Academia’s AI podcast of an article I wrote some 18 years ago, published in IASL’s journal School Libraries Worldwide in 1987.

Trust AI – to get it wrong…?

 

It has been some time since I put fingers to keyboard and posted anything here. There are many, many reasons for my lack of activity; I will spare you the list.

It has taken something quite personal to get me writing again – almost a personal attack, except that “attack” makes it sound deliberate. I do not think this is deliberate; I hope it isn’t.

Just a few days ago I received an email from Academia.edu with the subject line An AI created a podcast of your paper “Surviving Information Overload:…”. This was an article I wrote for School Libraries Worldwide, the peer-reviewed journal of the International Association of School Librarianship (IASL); the article was published in 1997. I probably/possibly uploaded the paper later as my “admission fee” to Academia.edu. (If it was not me who uploaded it, it must have been someone else, uploading it as a free pass for their registration; it is not unknown.) If you are an Academia user, you can find the paper here.

Intrigued, I clicked on the link to be taken to a page in Academia.edu headed by the advice “Private to you”. Below that in bold is the title of the podcast “AI Podcast of ‘Surviving Information Overload: Lessons From the Reading Reluctance Research'”. Below that is the note “Uploaded by John Royce”. Falsehood #1.

For the record: I did not upload this audio file (nor any other). It does not have my blessing nor even my permission.

I think this could be an attempt to have me subscribe to their Premium service and make the podcast available publicly; I have not clicked on “Add this AI Podcast to my Academia profile” to find out – at least, not without first listening to what AI had made of my article.


What I heard decided me: there is no way that I would want my name put to this. The recording is full of misinterpretations of my article, full of mistakes. It rang alarm bells from start to finish.

I have to say, it is very well done. Without that first word in the title, “AI”, there is no indication that this is produced by AI. The voice is warm, engaging and very natural (as in “They found that, well, um, our brain sort of goes on a defensive mode…”); you could not tell – at least, I could not tell – that it is an AI tool reading this.

The one giveaway might be in the opening lines, “Welcome to In Depth with Academia, I’m Richard Price, your host and the CEO of Academia.edu, here to take you through the wonderful world of academic research.”

The founder and CEO of Academia is indeed one Richard Price, according to his page on LinkedIn (https://www.linkedin.com/in/richardprice). So is this Richard Price voicing a piece written by AI to promote an article written by little me, or is this AI impersonating Richard Price? I rather suspect the latter, and I suspect that I am not the only writer whose work has been given the Academia-AI treatment.

Did Price give permission for his name to be used like this? I was not asked; it could well be that Price was not either. Is that impersonation? There is a podcast with the title “In Depth”, but it is produced by First Round Review.

I have not found a podcast with this name produced by Academia. More deception?

The voice of Price goes on, “In today’s episode, we’re diving into a paper by none other than John Royce, an insightful academic who has really tackled a problem, I think a lot of us can relate to”. Flag “a paper by none other than John Royce”. By none other than whom? Me? I may be known by a few hundred people, but that does not merit the “none other than…” treatment.

The podcast goes on to make several false claims about the article, such as “John Royce and his team conducted a series of studies to understand this” and “And you know, Royce used mixed methods research, blending quantitative surveys with qualitative interviews to gather comprehensive data”.

I worked alone. I did not conduct any studies. I did not use mixed methods research. I carried out no surveys, conducted no interviews. I did not collect data, comprehensive or otherwise.

Worse still, Price – or the AI acting in his name – has completely misinterpreted the article.  The voice declares, “At the heart of it, the paper asks, ‘How does the massive influx of information in today’s digital world lead to reading reluctance?'” and the whole podcast is based on this premise.

Just the opposite. My aim was to suggest that research into different forms of reading reluctance and its amelioration, including the learning of coping strategies and helping poor and reluctant readers develop the skills of good, critical readers, could be used to help us handle information overload.  I was suggesting that in many ways we already had the tools – and they were as applicable in the digital world as in the world of print.

It is, I think, well known that AI is prone to hallucinate, at times providing poor information, sometimes inventing information. Current advice, at least in the educational world, is to use AI as an assistive tool but to verify everything it tells you. Many do check and verify – and many more do not, and sometimes come a cropper. Is this another echo of the past – again, nothing new?

I am thinking of the arrival of Google and the opening of the floodgates of search results. Go below the fold, we advised our students; the results you seek may well be on page 2 or 3 of the hit lists. And again, while everyone knew the mantra and what they should be doing, most searchers still went straight to Google hit #1 and went no further.

Is it the same today, with AI, do students hear the advice, know what they ought to do – and then not do it?

Which is why I find this podcast, with its misuse and misinterpretation of my 28-year-old article, worrisome. If any listeners to this podcast accept it without checking against the actual paper, how will they know that the AI tool has got it so badly wrong? Could the wrong information become, in time, accepted wisdom, quoted in later papers and articles?

Nor does it matter whether the podcast itself is genuine – whether AI was used to voice an essay written by Richard Price, or whether AI was not used at all and this is a human writing the piece and a human speaking it. The AI is irrelevant. The podcast-article is completely wrong.

But without checking against the original article, how will anyone know? Not just my article, any LLM output, any AI generated summary, perhaps anything generated by AI – without checking and verifying, how will you know?

Added, 10 July 2025: the podcast / a transcript of the podcast

To be verified…

Half-listening to the news on BBC Radio 4 this morning, I was jerked to full attention during the regular quick look at the front pages of today’s UK newspapers. The Times has a front-page report declaring that the International Baccalaureate (IB) is allowing students to use artificial intelligence to help them write their essays, as long as they credit the AI used.

News to me!

Quick checks: the BBC News website includes front-page views of today’s newspapers (for a limited time only, possibly for copyright reasons). I took a screengrab.

The headline reads: Exams body lets pupils use AI chatbot to write essays.

The Times website carries the story as well – unfortunately behind a paywall, and The Times is not a newspaper to which I subscribe.

A quick Google check for [artificial intelligence international baccalaureate] – using the News feature and limiting the search to the last week – found just one mention of the story:


the story in today’s The Times. There are several stories of students being punished for using artificial intelligence, even in IB schools.

Checks on the open IB website and in the closed-access My IB find no mention of this. It looks as if The Times has a world exclusive! (The thought that the newspaper had fallen victim to a hoax crossed my mind.)

Having bought a print copy of the newspaper, I wonder about the accuracy of the headline Exams body lets pupils use AI chatbot to write essays – that “lets” may be a trifle misleading. It implies that the IB already allows students to use AI in their work for assessment. The second paragraph states

Continue reading

Back to basics, again

News that ChatGPT had “sprinted” to one million users in just five days, far faster than any other online service, has itself spread fast. The chart produced by Statista has been reproduced many, many times; it is big news.

Articles about ChatGPT and AI generally seem to be increasing almost as fast, and my last post here, Here we are again!, just added to the number. News that Google is about to launch its own chatbot, Bard, keeps the story very much alive. Those commenting on developments in the AI field must feel that it is sometimes hard to keep up.

Meanwhile, many in education and other fields fear that ChatGPT will make plagiarism and other forms of non-authentic work easier. On the other hand, there are many, even in education, who see great potential in ChatGPT and ways it can make their work easier. Some hold that it could lead to improved work and enhance critical thinking and student creativity. At the same time, Courtney Cullen, in a post on the International Center for Academic Integrity (ICAI) site, Artificial Intelligence: Friend, Foe, or Neither?, strikes a balance; she welcomes “the increased focus on academic integrity” in educational circles. We want our students to learn and show that they are learning, not simply to parrot, possibly unread, something generated by a machine.

Continue reading

Here we are again!

Since ChatGPT was first launched towards the end of 2022, there has been much alarm expressed in schools and colleges, in discussion forums, blogs and other social media platforms, in the educational press and in the general press too. There has also been calmer discussion; we shall come to that.

ChatGPT is an artificial intelligence (AI) text-generator, developed by OpenAI. Its appearance marks a huge step forward in the evolution of AI. Until now, text-based AI has been uninspiring and flawed: think of the chatbots used by many support centres Continue reading