Trust AI – to get it wrong…?


It has been some time since I put fingers to keyboard and posted anything here. There are many, many reasons for my lack of activity; I will spare you the list.

It has taken something quite personal to get me writing again: almost a personal attack, except that “attack” makes it sound deliberate. I do not think this is deliberate; I hope it isn’t.

Just a few days ago I received an email from Academia.edu with the subject line “An AI created a podcast of your paper ‘Surviving Information Overload:…'”.  This was an article I wrote for School Libraries Worldwide, the peer-reviewed journal of the International Association of School Librarianship (IASL); the article was published in 1997.  I probably/possibly uploaded the paper later as my “admission fee” to Academia.edu. (If it was not me who uploaded it, it must have been someone else, uploading it as a free pass for their registration; it is not unknown.)  If you are an Academia user, you can find the paper here.

Intrigued, I clicked on the link and was taken to a page in Academia.edu headed by the advice “Private to you”. Below that, in bold, is the title of the podcast “AI Podcast of ‘Surviving Information Overload: Lessons From the Reading Reluctance Research'”. Below that is the note “Uploaded by John Royce”. Falsehood #1.

For the record: I did not upload this audio file (nor any other). It does not have my blessing nor even my permission.

I think this could be an attempt to have me subscribe to their Premium service and have the podcast made publicly available; I have not clicked on “Add this AI Podcast to my Academia profile” to find out, at least not without first listening to what AI had made of my article.


What I heard decided me: there is no way that I would want my name put to this. The recording is full of misinterpretations of my article, full of mistakes. It rang alarm bells from start to finish.

I have to say, it is very well done. Without that first word in the title, “AI”, there is no indication that this is produced by AI. The voice is warm and engaging and very natural (as in “They found that, well, um, our brain sort of goes on a defensive mode…”); you could not tell, at least I could not tell, that it is an AI tool reading this.

The one giveaway might be in the opening lines, “Welcome to In Depth with Academia, I’m Richard Price, your host and the CEO of Academia.edu, here to take you through the wonderful world of academic research.”

The founder and CEO of Academia is indeed one Richard Price, according to his page on LinkedIn (https://www.linkedin.com/in/richardprice). So is this Richard Price, voicing a piece written by AI promoting an article written by little me, or is this AI impersonating Richard Price? I rather suspect the latter, and I suspect that I am not the only writer whose work has been given the Academia-AI treatment.

Did Price give permission for his name to be used like this? I was not asked; it could well be that Price was not either. Is that impersonation? There is a podcast with the title “In Depth”, but it is produced by First Round Review.

I have not found a podcast with this name produced by Academia. More deception?

The voice of Price goes on: “In today’s episode, we’re diving into a paper by none other than John Royce, an insightful academic who has really tackled a problem, I think a lot of us can relate to”.  Flag “a paper by none other than John Royce”. By none other than whom? Me? I may be known by a few hundred people, but that does not merit the “none other than…” treatment.

The podcast goes on to make several false claims about the article, such as “John Royce and his team conducted a series of studies to understand this” and “And you know, Royce used mixed methods research, blending quantitative surveys with qualitative interviews to gather comprehensive data”.

I worked alone. I did not conduct any studies. I did not use mixed methods research. I carried out no surveys, conducted no interviews. I did not collect data, comprehensive or otherwise.

Worse still, Price – or the AI acting in his name – has completely misinterpreted the article.  The voice declares, “At the heart of it, the paper asks, ‘How does the massive influx of information in today’s digital world lead to reading reluctance?'” and the whole podcast is based on this premise.

Just the opposite. My aim was to suggest that research into different forms of reading reluctance and its amelioration, including the learning of coping strategies and helping poor and reluctant readers develop the skills of good, critical readers, could be used to help us handle information overload.  I was suggesting that in many ways we already had the tools – and they were as applicable in the digital world as in the world of print.

It is, I think, well known that AI is prone to hallucinate, at times providing poor information, sometimes inventing information.  Current advice, at least in the educational world, is to use AI as an assistive tool but to verify everything it tells you. Many do check and verify – and many more do not, and sometimes come a cropper. Is this another echo of the past, again nothing new?

I am thinking of the arrival of Google and the opening of the floodgates of search results. Go below the fold, we advised our students, the results you seek may well be on page 2 or 3 of the hit lists.  And again, while everyone knew the mantra and what they should be doing, most searchers still went straight to Google hit #1 and went no further.

Is it the same today with AI? Do students hear the advice, know what they ought to do – and then not do it?

Which is why I find this podcast, the misuse and misinterpretation of my 28-year-old article, worrisome. If any listeners to this podcast accept it without checking against the actual paper, how will they know that the AI tool has got it so badly wrong? Could the wrong information become, in time, accepted wisdom, quoted in later papers and articles?

Nor does it matter whether the podcast itself is genuine, whether AI was used to voice an essay written by Richard Price, or whether AI was not used at all and this is a human writing the piece and a human speaking it. The AI is irrelevant. The podcast-article is completely wrong.

But without checking against the original article, how will anyone know? Not just with my article: any LLM output, any AI-generated summary, perhaps anything generated by AI – without checking and verifying, how will you know?

Added 10 July 2025: the podcast / a transcript of the podcast
