Trust AI – to get it wrong…?

 

It has been some time since I put fingers to keyboard and posted anything here. There are many, many reasons for my lack of activity; I will spare you the list.

It has taken something quite personal to get me writing again – almost a personal attack, except that “attack” makes it sound deliberate. I do not think it is deliberate; I hope it isn’t.

Just a few days ago I received an email from Academia.edu with the subject line “An AI created a podcast of your paper ‘Surviving Information Overload:…'”. This was an article I wrote for School Libraries Worldwide, the peer-reviewed journal of the International Association of School Librarianship (IASL); the article was published in 1997. I probably uploaded the paper later as my “admission fee” to Academia.edu. (If it was not me who uploaded it, it must have been someone else, uploading it as a free pass for their own registration; it is not unknown.) If you are an Academia user, you can find the paper here.

Intrigued, I clicked on the link to be taken to a page in Academia.edu headed by the advice “Private to you”. Below that, in bold, is the title of the podcast, “AI Podcast of ‘Surviving Information Overload: Lessons From the Reading Reluctance Research’”. Below that is the note “Uploaded by John Royce”. Falsehood #1.

For the record: I did not upload this audio file (nor any other). It does not have my blessing nor even my permission.

I think this could be an attempt to have me subscribe to their Premium service and make the podcast publicly available. I have not clicked on “Add this AI Podcast to my Academia profile” to find out – at least, not without first listening to what AI had made of my article.


What I heard decided me: there is no way that I would want my name put to this. The recording is full of misinterpretations of my article, full of mistakes. It rang alarm bells from start to finish.

I have to say, it is very well done. Without that first word in the title, “AI”, there is no indication that this is produced by AI. The voice is warm, engaging and very natural (as in “They found that, well, um, our brain sort of goes on a defensive mode…”); you could not tell – at least, I could not tell – that it is an AI tool reading this.

The one giveaway might be in the opening lines, “Welcome to In Depth with Academia, I’m Richard Price, your host and the CEO of Academia.edu, here to take you through the wonderful world of academic research.”

The founder and CEO of Academia is indeed one Richard Price, according to his page on LinkedIn: https://www.linkedin.com/in/richardprice. So is this Richard Price voicing a piece written by AI promoting an article written by little me, or is this AI impersonating Richard Price? I rather suspect the latter, and I suspect that I am not the only writer whose work has been given the Academia-AI treatment.

Did Price give permission for his name to be used like this? I was not asked; it could well be that Price was not either. Is that impersonation? There is a podcast with the title “In Depth”, but it is produced by First Round Review.

I have not found a podcast with this name produced by Academia. More deception?

The voice of Price goes on: “In today’s episode, we’re diving into a paper by none other than John Royce, an insightful academic who has really tackled a problem, I think a lot of us can relate to”. Flag “a paper by none other than John Royce”. By none other than whom? Me? I may be known by a few hundred people, but that does not merit the “none other than…” treatment.

The podcast goes on to make several false claims about the article, such as “John Royce and his team conducted a series of studies to understand this” and “And you know, Royce used mixed methods research, blending quantitative surveys with qualitative interviews to gather comprehensive data”.

I worked alone. I did not conduct any studies. I did not use mixed methods research. I carried out no surveys, conducted no interviews. I did not collect data, comprehensive or otherwise.

Worse still, Price – or the AI acting in his name – has completely misinterpreted the article.  The voice declares, “At the heart of it, the paper asks, ‘How does the massive influx of information in today’s digital world lead to reading reluctance?'” and the whole podcast is based on this premise.

Just the opposite. My aim was to suggest that research into different forms of reading reluctance and its amelioration, including the learning of coping strategies and helping poor and reluctant readers develop the skills of good, critical readers, could be used to help us handle information overload.  I was suggesting that in many ways we already had the tools – and they were as applicable in the digital world as in the world of print.

It is, I think, well known that AI is prone to hallucinate, at times providing poor information, sometimes inventing information. Current advice, at least in the educational world, is to use AI as an assistive tool but to verify everything it tells you. Many do check and verify – and many more do not, and sometimes come a cropper. Is this another echo of the past – again, nothing new?

I am thinking of the arrival of Google and the opening of the floodgates of search results. Go below the fold, we advised our students; the results you seek may well be on page 2 or 3 of the hit lists. And again, while everyone knew the mantra and what they should be doing, most searchers still went straight to Google hit #1 and went no further.

Is it the same today, with AI: do students hear the advice, know what they ought to do – and then not do it?

Which is why I find this podcast, the misuse and misinterpretation of my 28-year-old article, worrisome. If any listeners to this podcast accept it without checking against the actual paper, how will they know that the AI tool has got it so badly wrong? Could the wrong information become, in time, accepted wisdom, quoted in later papers and articles?

Nor does it matter if the podcast itself is genuine, if AI was used to voice an essay written by Richard Price. Or if AI was not used at all, if this is a human writing the piece and a human speaking it. The AI is irrelevant. The podcast-article is completely wrong.

But without checking against the original article, how will anyone know? Not just my article: any LLM output, any AI-generated summary, perhaps anything generated by AI – without checking and verifying, how will you know?

Added 10 July 2025: the podcast / a transcript of the podcast

Reader beware – different views of point

Do you use Reader View?  Do you recommend it to your students?  I often use Reader View when available, especially if I want to print out or save a PDF version of the page I am looking at and there is no ready-made PDF version already linked on the page.

Reader and Reader View are extensions or apps which enable “clean” views of the page you are looking at, keeping the textual matter but avoiding the advertisements, embedded videos, navigation and sidebar matter and other distractions.

Here, for instance, is a page on MacWorld, How to enable Reader View automatically for websites in mobile and desktop Safari:

The advertisements flicker and change, the video clip plays automatically and floats so that it is always on the screen, there are several more distractions as you scroll through the article.

These distractions disappear Continue reading

How many…?

It’s a fascinating and possibly pointless exercise, trying to work out how search engines work.  Although this article was inspired by a news story on beating (so-called) plagiarism detectors, I found myself more interested in what the story told us about Google and (presumably) other search engines.

The story starts with an article in Hoax-Alert: Forget Russian Bots: Fake Native Americans Are Using Russian Characters To Avoid Fake News and Plagiarism Detectors. The story relates how a number of websites which appear to be promoted by Native Americans are in fact sites originating in Kosovo and other countries. It seems that they are stealing content, disguising it (to escape similarity detectors) and getting away with it. The way they disguise the content is to substitute Cyrillic characters which look like Latin-alphabet characters in the text, in order to beat text-matching software. The Hoax-Alert story shows this illustration: Continue reading
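The substitution trick is simple to sketch. Here is a minimal, hypothetical Python illustration (the mapping, function names and sample text are my own, not from the Hoax-Alert story): a handful of Latin letters are swapped for visually identical Cyrillic ones, so the disguised text looks the same on screen but no longer matches the original byte-for-byte – which is all a naive text-matcher compares.

```python
# Hypothetical illustration of homoglyph disguise: Latin letters mapped to
# visually near-identical Cyrillic code points.
LATIN_TO_CYRILLIC = {
    "a": "\u0430",  # Cyrillic small letter a
    "e": "\u0435",  # Cyrillic small letter ie
    "o": "\u043e",  # Cyrillic small letter o
    "p": "\u0440",  # Cyrillic small letter er
    "c": "\u0441",  # Cyrillic small letter es
}

def disguise(text: str) -> str:
    """Replace mapped Latin letters with look-alike Cyrillic letters."""
    return "".join(LATIN_TO_CYRILLIC.get(ch, ch) for ch in text)

original = "copied passage"
disguised = disguise(original)

# On screen the two strings look the same, but a naive similarity check
# comparing code points sees two different strings.
print(original == disguised)  # False
print(original in disguised)  # False: a substring match also fails
```

The countermeasure, of course, is equally simple: normalise or map confusable characters back before comparing, which is why this only "beats" detectors that never thought to look.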

Guilty by association

A month or so ago, an incident at Ohio State University made headlines. One or more students had posted information on business course assignments in a GroupMe study group.  The type of information shared violated the University’s code of student conduct.  As a consequence, more than 80 students – all members of the GroupMe group – were charged with cheating.

GroupMe is a free group messaging app, widely used to send messages and documents simultaneously to all members of a group. Members of educational GroupMe groups often use it to share due dates, study tips and readings. When collaboration is permitted, this kind of app can be a great boon in assisting collaborative work. In this particular case, however, some users had overstepped the mark and had posted suggested answers to homework assignments. Legitimate collaboration had become illegitimate collusion.

By and large, the headlines (of which this is just a small selection) seemed to get more dramatic Continue reading

Hang on …

As noted in my last post, Kardinia International College library has disposed of 60% of its book collection. Manchester Central Library has recently disposed of 240,000 books, passed on to other institutions – or pulped.  Priceless and irreplaceable.  These are not isolated cases; it’s been happening for years, and the pace is increasing. Do we still need print? Is print dead?

A few days ago, I posted this on the librarians’ pages of iSkoodle, the ECIS listserv/ bulletin board, a discussion of print versus online resources, a plea to hold on to print:

I ended with a mention of Ken Vesey’s milking-stool analogy in an article for Teacher Librarian in 2005, “Eliminate “Wobbly” Research with the Information Resource Tripod.”  I invited iSkoodlers to track down Ken’s article. As yet, nobody has written to claim success.

Where would you go? Can you find it? Continue reading

What’s better than a book … ?

A LinkedIn alert this morning caught my eye.  The heading reads Do you have a ‘Learning Commons’ at your school? You should! and it’s been posted by Maxine Driscoll.

“Meeting the needs of 21st Century learners.
I had an amazing experience last week. I was invited to visit the new Learning Commons at Kardinia International College a K-12 school in Australia and was blown away by what I saw! 21st Century thinking, creativity, courage and conviction! Here is…”

I like the learning commons concept. It’s exciting, it enables a refreshingly different approach to teaching and to learning. It makes learning more enjoyable, and reports promise great things. It may well be too early to say if the benefits are real, but there are aspects of learning commons that any library can use to advantage.

The post to which Maxine Driscoll’s LinkedIn alert refers is, Continue reading

Safe in their hands?

Earlier this week, David Cameron announced a number of measures aimed at reducing exposure to internet pornography and images of child abuse.  By the end of 2014, internet service providers will be required to provide family-friendly filters for all households in England and Wales. Internet users who desire access to pornography will have to opt in.

Other measures announced include forcing search engines to provide no results for certain terms which are used by those seeking child pornography, Continue reading

Burnt offerings

Pearson and Xerox Ignite (software which enables a photocopier to grade student papers, including essays) do not exactly set me burning with enthusiasm, as I said a few weeks ago (Baby, you can’t light my fire). However, I was delighted to find that Pearson is looking for student essays for the Pearson Essay Scorer. I would have the chance to test an online essay grader for myself.

The notion is that submitted essays “will help us calibrate the evaluation engine that examines student work.” Continue reading

Baby, you can’t light my fire…

Automated test marking is not new. It has been around for years.  The grail is automated essay marking, and I fear we may be one step nearer with Xerox Ignite. A blog-post by Diane Ravitch Can Machines Grade Essays? Should They? drew my attention to this piece of software – the comments to her piece are well worth reading, and I’ve found out more thanks to Gizmodo Xerox’s New Grading Copiers Will Finally Make Scantron Obsolete and an article by Bob Yirka at Phys.Org Xerox to offer ‘Ignite’ software upgrade for copiers to let them grade school papers.

Yirka (unintentionally) sums up what’s wrong, sums up my fears, quite neatly: Continue reading

Do we need the middleman?

The robots are coming! The robots are coming!

I’m just reading a New York Times article,  Essay-Grading Software Offers Professors a Break.  It’s one of those the-machines-can-do-it-better-than-you pieces.  In this case, the machines can grade student essays.  And students can do better too, if they give the machines what they want.  Forget originality, creativity, accuracy, thought.  Just give the machines what they have been programmed to look for, just fill in the boxes – and don’t ever think outside one? Continue reading

Watch this space…

A few weeks ago, I discussed secret cameras, in particular “a camcorder disguised as a car key”.

Today, the British telecoms regulator Ofcom announced the results of its 4G telephony auction. And bang, straight away, in my mailbox, there’s a flyer from Amazon: “NEW Version Ultra-thin Quad-band….” Coincidence? Perhaps – this is from the US Amazon store, not the UK branch. But I can’t help thinking that this is clever marketing: Amazon had noticed Continue reading