News that ChatGPT had “sprinted” to one million users in just five days, faster than any other online service, has itself spread fast. The chart produced by Statista has been reproduced many, many times; it is big news.
Articles about ChatGPT and AI generally seem to be multiplying almost as fast, and my last post here, Here we are again!, just added to the number. News that Google is about to launch its own chatbot, Bard, keeps the story very much alive. Those commenting on developments in the AI field must sometimes find it hard to keep up.
Meanwhile, many in education and other fields fear that ChatGPT will make plagiarism and other forms of non-authentic work easier. On the other hand, there are many, even in education, who see great potential in ChatGPT and ways it can make their work easier. Some hold that it could lead to improved work and enhance critical thinking and student creativity. At the same time, Courtney Cullen, in a post on the International Center for Academic Integrity (ICAI) site, Artificial Intelligence: Friend, Foe, or Neither?, strikes a balance; she welcomes “the increased focus on academic integrity” in educational circles. We want our students to learn and to show that they are learning, not simply to parrot, possibly unread, something generated by a machine.
And there are many pointing to ChatGPT’s shortcomings: its limited “knowledge” and its propensity to invent information and the supposed sources for that information, sometimes contradicting itself, sometimes producing “alternative facts”. Intelligent, ChatGPT is not. Its output needs verifying, its sources need investigating; we need to be sure it is reporting accurately and fully, and that it is aware, when necessary or relevant, of other viewpoints and possibilities. Sometimes it may take as much work to find evidence to support ChatGPT’s output as it would to do the work authentically for oneself, using genuine sources.
As a side-note at this point, I am reminded of the problem of “alternative facts,” the notion that a fiction can race around the world and take hold in popular mythology; it generally takes far, far longer to disprove an alternative fact – and some people are never convinced, even when shown the evidence. The fact-checkers are always behind.
My previous blog post pointed to some of these issues, supplementing articles in various media which do likewise. Marion Van Engelen recently shared with me an article in the Dutch newspaper de Volkskrant, Goed bruikbaar is ‘tekstwonder’ ChatGPT nog niet, en daardoor best gevaarlijk, by Pauline van de Ven (Google Translate gives the title as “The ‘text miracle’ ChatGPT is not yet very useful, and therefore quite dangerous”). Researching István Ries (a Hungarian politician who served as Minister of Justice between 1945 and 1950, was arrested on 7 July 1950, and died in prison on 15 September 1950), van de Ven gave ChatGPT four requests:
Tell me about István Ries, Hungarian minister, 1949
Tell me about István Ries, Hungarian minister of Justice, 1945
Tell me about the arrest of István Ries, Hungarian minister of Justice
Tell me about the death of István Ries, Hungarian minister of Justice
and was told, respectively,
that he served as Minister of Agriculture between 1945 and 1950 and died in 1978
that he served as Minister of Justice from 1945 until being forced to resign in 1946, and died in 1961
that he served as Minister of Justice (start date not given) until his arrest in 1950
that he served as Minister of Justice between 1949 and 1950, and committed suicide in his office on 9 February 1950.
It may not be that ChatGPT “invents” information, as I put it earlier; van de Ven suggests that ChatGPT has indiscriminately been fed information from a multitude of sources and that it does not recognise discrepancies and contradictory information: it has no evaluation capabilities, no way of resolving conflicts. In ChatGPT’s algorithms, all information is equal – and the user may have great difficulty tracking down the source of any piece of information, the better to verify or refute it. We are back to the main lesson of Computing 101: garbage in, garbage out.
The dangers – not new, this has always been so – arise when:
we believe something is true (it supports our beliefs and prejudices), so we do not verify it;
we want it to be true, so we do not verify it;
we do not care enough, so we do not verify it.
Or, as the mid-90s US bumper sticker had it, “Vote [name of political party], it saves thinking.”
More and more today, we cannot afford to stop thinking; critical thinking is an essential tool, a survival skill. I like the way that Ira Winn once put it: “The opposite of ‘critical thinking’ is ‘uncritical thinking’” [Ira J. Winn, “The High Cost of Uncritical Teaching,” Phi Delta Kappan, vol. 85, no. 7, pp. 496-497, March 2004].
Critical thinking requires questioning: how do I know this to be true? What evidence is there to support this? How credible is that evidence? Does it match what I already know? And more. This viewpoint implies that information is not all equal, and that sources of information are not all equal either, even though some would, with malice aforethought, have us think that.
That is a tactic used by those who wish to deceive and delude: the pro-tobacco lobby, for instance, the fossil fuel industry, climate change deniers. You do not have to show that the science is wrong; you just have to lie credibly, create doubt and confusion, and suggest that your views are just as valid, and merit as much publicity and equal air-time, as those of the opposite camp. It is the Putin play-book, the Trump play-book: “could be true, who knows?”
It would seem that we cannot simply accept what ChatGPT gives us as truth; we need to query it.
Can we any longer accept what Google gives us? The effects of the Filter Bubble are well known: whatever we believe, Google and other forms of AI give us more of the same, filtering our search hits in the name of personalised results, acting as an echo chamber that reinforces our prejudices, and setting cookies which serve advertisements for goods and services sparked by our online searching and browsing, our shopping habits, and the multitude of digital data which we tend to give away – and which influence what we get.
Google’s Bard may be no better! As I write this article comes the news that, on its first promotional excursion, Bard gave an incorrect piece of information regarding the James Webb Space Telescope.
Google Bard advert shows new AI search tool making a factual error, declared New Scientist, with the sub-headline:
A promotion for Google’s AI search tool Bard shows it making a factual error about the James Webb Space Telescope, heightening fears that these tools aren’t ready to be integrated into search engines
Now there is a frightening thought: that search technology might set out to give us the “wrong” answers. You may remember former Google CEO Eric Schmidt declaring, “I actually think most people don’t want Google to answer their questions… They want Google to tell them what they should be doing next” (as quoted in The Atlantic article Google’s Growing Problem With ‘Creepy’ PR, October 2010).
Does Google tell people what they should be doing next? Should? Who decides “should”? Another breaking story, this from this week’s Guardian: Google targets low-income US women with ads for anti-abortion pregnancy centers, study shows. Evidently, low-income Google users in some (but not all) American cities who look for information on abortion find links to “crisis pregnancy centers” at the top of their search lists; these “crisis pregnancy centers” turn out to be pro-life anti-abortion clinics which would persuade their clientele to carry their babies to term.
Unreliable, inconsistent, inaccurate, misleading information, misinformation, malinformation, disinformation….
As indicated earlier, more and more people are becoming aware of the shortcomings of the new technology, and the current news of Bard’s error will surely spread that awareness even further. “If you must use it, use it with care” is one of the messages we must get across, and many already do: ‘ChatGPT needs a huge amount of editing’: users’ views mixed on AI chatbot, reports Cleo Skopeliti in another Guardian article. One of the professionals interviewed for the article stated, “It is much easier to edit a draft than to start from scratch, so it helps me break through blocks around task initiation”, while “others have found the bot’s limitations to outweigh its benefits”, with one interviewee reporting that ChatGPT had advised him that the diet of people living in England in the 11th century included potatoes (which were not introduced into Europe until the 16th century).
So, good for some users and good for some uses, not so good for other users and uses, and the more important the information is, the more important the need to verify it. Wasn’t that always the way?
I am reminded of Heather Michael, in an IB video, International-mindedness and the DP Core (available on Vimeo for those without a My IB log-in), saying:
I worry sometimes that people task the extended essay and sort of deliver it as a series of timelines as opposed to teaching students what it means to be a researcher (00:40).
This is a whole new mind-set, giving purpose to the work our students do. They are not writing (yet another) essay (boring-boring); they are learning what is involved in research and why research is important. They are not researchers yet, but they are apprentice researchers, learning the skills, the conventions and the methods of their subjects and disciplines, and ways to communicate what they are finding out. The goal (one goal) of the extended essay is not the product itself, the essay, but what they learn along the way: about their subject, about themselves too and their place in the world, and about research and learning.
We tell our students to evaluate the sources that they use (as well as evaluating the methods they choose in gathering their data and information) – and verification is a subset of that evaluation of sources: is what they read and see accurate? Has it been reported accurately and completely? Is it supported by others, is it supported by the evidence? Whether they find their sources in academic journals or in ChatGPT (or Bard), how trustworthy is the information, how do we know, and how do the sources on whom we rely know? It is often in that verification process that uncertainties arise: the discovery of conflicting information, or of conflicting conclusions drawn from the information and evidence presented. Research is messy, research is fun, and the verification process and the uncertainties so often add to the fun.
Linda Hoiseth takes us even further back to basics in her Canva presentation NoodleTools vs ChatGPT: How NoodleTools Can Help Prevent ChatGPT-Aided Research Papers. There is no comparison, of course; the two tools aim to do different things. What Hoiseth does here (and stunningly well, in my opinion) is demonstrate ways in which NoodleTools can help students do, organise and report their own research, while helping them learn how to be (and actually be) writers of research projects.
Whether students use NoodleTools or not, they need to make notes from their sources and of their own thoughts and ideas and the questions that come to mind as they work; they need ways to organise and then to use those notes in their writing (or however else they communicate their ideas); and they need to report the sources that they use in ways which are academically acceptable. NoodleTools is a one-stop tool providing all these functions and more.
ChatGPT, on the other hand, simply regurgitates information, sometimes with no regard for its accuracy and without telling us where it gets that information; it does not reveal its sources. It does not and cannot evaluate its sources, nor verify them. It does not think and it does not learn.
Good researchers question their sources and, when the information is second-hand, they attempt to verify that the reporting is accurate – they attempt to find the source from which the source they are using garnered that information (and failing that, they note “as cited by …” in their text to admit that they are using a secondary reference). We cannot verify the sources which ChatGPT uses, but ChatGPT is not a source in itself, certainly not a primary source. It did not carry out the experiments, conduct the interviews, design and administer the surveys, critique the literature, carry out any research of its own.
If we question any piece of information which needs questioning – “How do you know that?”, “How do I know this is true?”, “Where can I verify this?” (and maybe, “Have you actually read this? This is pure nonsense!”) – we will surely get the point across. Our minds: the most useful tool of all.
However tempting, don’t stop thinking!