My last article, To be verified…, centred on an item in The Times which claimed that the International Baccalaureate (IB) was or would be allowing students to use ChatGPT and other forms of artificial intelligence (AI) in essays and other work, as long as the use of such tools was acknowledged and attributed appropriately.
The news turned out to be true; an article by Matt Glanville, Head of Assessment Principles and Practice at the IB, was published on the same day, and may well have been the source of The Times’ reporter’s story. Titled Artificial intelligence in IB assessment and education: a crisis or an opportunity?, it provides deeper and more thoughtful consideration than the story in The Times, including a rationale for the decision to allow the use of AI and thoughts on how it might change learning and teaching and the purpose of assessment. For those with access to My IB, Appendix 6 of IB’s newly updated Academic Integrity Policy provides much for educationists to think about; it requires that use of AI be acknowledged and, when AI is used in assessments and coursework, that it be cited and referenced.
Elsewhere, on LinkedIn, IB has published a slide set Guidance for students on referencing AI, with the first slide reading “How IB students can correctly (sic) reference AI tools like ChatGPT”. (I am not so sure that “correctly” is the right word, mainly because it implies that there is just one “correct” way to reference AI tools, regardless of which style guide is being used for the rest of the work. I am not sure about the helpfulness of the examples used in the slides either, but that is very much another matter.)
Not everyone in education agrees, either on whether AI can be used or on the efficacy of software for detecting AI-generated text. The i newspaper reports Oxford and Cambridge ban ChatGPT over plagiarism fears but other universities embrace AI bot, while The Guardian declares Australian universities split on using new tool to detect AI plagiarism.
Publishers have different takes on AI as well, raising some interesting and paradoxical considerations: if we require tools such as ChatGPT to be cited and referenced, does this give them some form of authority? Authority implies responsibility and, dare I say it, authorship – but can ChatGPT be an author? If it can be regarded as an author, can it then be a co-author if it has significantly contributed to a study and/or its resulting article or paper?
The first sentence of that previous paragraph may need revision. Instead of starting “Publishers have different takes on AI as well” it might be more accurate to say “Publishers had different takes on AI as well”. When ChatGPT first became widely known, some publishers seemed very ready to accept ChatGPT as author or co-author; in January 2023, the journal Nature carried a News article, ChatGPT listed as author on research papers: many scientists disapprove: At least four articles credit the AI tool as a co-author, as publishers scramble to regulate its use.
One of those articles may well have been published in Nurse Education in Practice, part of the Elsevier stable, in January 2023: Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? originally listed two authors, Siobhan O’Connor and ChatGPT.
Elsevier has had second thoughts. In February 2023 a Corrigendum was published; it does not say what was corrected, but the paper now shows Siobhan O’Connor as sole author.
[As an aside, but for further consideration: I am concerned that there is no obvious indication on the original article that a correction has been made. Clicking on the Show more indicator reveals an invitation to check for updates, an Erratum notice and a link to the Corrigendum, but I do wonder why the correction is not more prominent.]
Academic publishers may be clearer now in their views on the inclusion of AI tools as authors.
For instance, journals in the Science stable (published by the American Association for the Advancement of Science, AAAS) do not accept AI as author or co-author. Holden Thorp, editor of Science, stated in ChatGPT is fun, but not an author that AAAS’s Editorial Policies require authors to have agency and to take responsibility for their contributions; since artificial intelligence lacks agency and cannot be held responsible for its output, it cannot be cited as an author.
In a position statement on Authorship and AI tools, the UK Committee on Publication Ethics (COPE) also advises that AI cannot take responsibility for its output and therefore cannot be named as an author or co-author of a paper; its use must instead be acknowledged in the Methods or other appropriate section of a paper.
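To illustrate – my own hypothetical wording, not an example taken from COPE’s statement – such an acknowledgement might read: “ChatGPT (OpenAI) was used to generate a first-draft summary of the literature reviewed in section 2; the author reviewed, verified and edited the output and takes full responsibility for the content of this paper.”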
The lines “A paradox / A most ingenious paradox!” (as Ruth sings in The Pirates of Penzance) come to mind. Scholarly publishers demand that we do not cite and reference AI tools as authors; the IB (and probably other educational bodies which allow the use of AI) requires citations and references.
So we come to the question: what do the major referencing style guides say?
Perhaps not surprisingly, the major style guides also give different advice. APA, for instance, explains how to cite and reference AI when instructors require this, while recognising that many instructors either forbid its use or strongly urge caution on those who do use it (How to cite ChatGPT). I think APA’s original advice was to treat ChatGPT output as a personal communication, cited in the text but not included in the reference list as it is a non-retrievable source – but the advice in this blog article is different; I wonder if I am thinking of advice given in libguides and by other gurus, based on how they thought APA might handle ChatGPT output.
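If such advice was ever given, it would presumably have followed APA’s standard pattern for personal communications – something like “(OpenAI’s ChatGPT, personal communication, March 14, 2023)” in the text, with no entry in the reference list. To be clear, that wording is my own reconstruction of what such advice might have looked like, not anything APA has published.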
APA’s current advice, as stated in How to cite ChatGPT, is to reference it as an algorithm:
Quoting ChatGPT’s text from a chat session is therefore more like sharing an algorithm’s output; thus, credit the author of the algorithm with a reference list entry and the corresponding in-text citation.
The example given is:
When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).
Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
The article also notes that the APA Style team is in discussion with the editors of the journals published by APA and will issue (more) definitive guidance later this year.
APA’s stance contrasts with that of the Chicago Manual of Style, which prefers to treat AI output as a form of personal communication. The CMOS page Citation, Documentation of Sources gives examples of both footnote and in-text citations of AI source material, but advises:
But don’t cite ChatGPT in a bibliography or reference list. Though OpenAI assigns unique URLs to conversations generated from your prompts, those can’t be used by others to access the same content (they require your login credentials), making a ChatGPT conversation like an email, phone, or text conversation—or any other type of personal communication (see CMOS 14.214 and 15.53).
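By way of illustration – my own sketch following the pattern shown on that CMOS page, not a verbatim quotation – a numbered footnote for AI-generated text might read:

1. Text generated by ChatGPT, OpenAI, March 7, 2023, https://chat.openai.com/chat.

CMOS also suggests that, if the prompt has not already been given in the text, it can be included in the note itself.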
MLA takes the stance that AI cannot be treated as an author, so a user of AI should treat its output as authorless, with the prompt (or a shortened form of the prompt) used in the citation in the text and the full prompt included in the list of Works Cited (How do I cite generative AI in MLA style?).
Works-Cited-List Entry
“In 200 words, describe the symbolism of the green light in The Great Gatsby” follow-up prompt to list sources. ChatGPT, 13 Feb. version, OpenAI, 9 Mar. 2023, chat.openai.com/chat.
APA and MLA (but not Chicago) caution that writers should verify whatever information they are given by an AI tool, whether AI gives a citation or not. This is good practice – should be standard practice – whether it is AI or an online source or a print source being used. It is especially so while ChatGPT (and possibly other AI tools) is so notoriously given to hallucination, sometimes “making up” the information it gives, sometimes inventing its sources of information – and sometimes giving very accurate information without citing its sources. “Go to the source – and then cite that” has always been good advice.
Clearly (and despite that IB slide set) there is more than one “correct” way to acknowledge, cite and/or reference use of artificial intelligence tools. Best advice might be to use any examples in the published style guides as templates for whatever AI is being used and, for IB assessment, to include a bibliographic reference even when the style in use suggests that a reference is unnecessary.
Panic!
And still there is panic in educational circles. Part of the concern is due to fears of plagiarism – it is all very well requiring students to document their use of AI tools, but what of students who use AI to produce their work in part or in full and who do not declare it at all?
In almost an echo of the “how much plagiarism is acceptable?” non-question commonly asked in educational forums, now that Turnitin is flagging AI-produced content some teachers are asking “how much AI-produced content is acceptable?” and “how much AI-produced content is acceptable if it is cited and referenced?”! (These are the gist of two questions raised in a post, Clarity on the IB Guidelines on the use of AI Tools, on My IB Programme Communities, so accessible only to those who can access My IB, I am afraid.)
Leaving aside the issue of how accurate AI-detectors are, and reports of both false positives (material flagged as AI-generated when it is genuinely the work of the writer) and false negatives (material flagged as genuine when it is AI-generated), there is the issue (already mentioned here) that AI tools do not always report where they obtained the information they output and, when they do, this may not be true or accurate. This raises the question: if a student uses and cites ChatGPT when the software has plagiarised or invented its information and/or its sources, is the student plagiarising or otherwise misleading the reader too? Can the student be accused of plagiarising if they have cited their source, or given a secondary citation for the source which ChatGPT claims to have used?
My own thought is probably not – not if the writer has cited the source, either attributing the words directly to ChatGPT or in the form “ChatGPT cites [named source] as saying ‘bla bla bla…’”. The student is being honest about the source of the information – but that student may well be guilty of a lack of academic integrity by not digging deeper, not checking the veracity and accuracy of what has been garnered from the AI. And again, this goes for the use of any material, be it AI or online or print or broadcast or whatever – the integrity of the research is at risk if we do not check and verify.
There is a lack of honesty – and of integrity too – if there is no attempt to cite AI as the source of information, just as there are these deficiencies if an uncited source is print or digital or online, and just as there are deficiencies if writers reuse their own earlier work without stating this: self-plagiarism. Plagiarism (and self-plagiarism) is two-sided. Not only do writers (or AI tools, etc.) whose work is used without attribution miss out on the credit which is their due, but those who read the plagiarised material lose out too – they are deceived into thinking that the current writer is responsible for the words, thoughts and information, and they give that writer more credit than is deserved.
Jonathan Bailey’s blog post One Way AI Has Changed Plagiarism takes this line of thought further. Commenting on the criticism that CNET received when it revealed earlier this year that articles which it had published as written by “CNET Money Staff” were in fact AI-generated content, he suggests:
The audience felt lied to, and for good reason. The fact no person was plagiarized from was unimportant, it was the lie (or the omission) that was the issue.
This cuts more to the fundamental issue of what plagiarism is. It is a lie. It is an author saying, either directly or implicitly, that the work is theirs and is original when, in fact, it is not.
This puts the focus on what the actual act of plagiarism is. It’s not a sneaky attempt to deprive attribution, but an attempt to lie and pass off the work to others.
With no direct victim, willing or not, the conversation can finally focus on that.
I think Bailey has captured and extended what I have tried to say in several of my own posts, most recently in Back to basics, again, where I quote Heather Michael saying, in an IB video, International-mindedness and the DP Core (also available on Vimeo):
I worry sometimes that people task the extended essay and sort of deliver it as a series of timelines as opposed to teaching students what it means to be a researcher (00.40).
As educators, we really should be concerned with process as well as product, helping students understand what it means to be a researcher – and thus why integrity is so important and why it is not just a matter of citing and referencing sources. And of course, many of us are so concerned, including the readers of this blog.
Being a researcher requires accuracy, transparency, thoughtfulness, honesty, integrity and more. Being an authentic researcher means going to the source, checking and verifying, weighing and evaluating – something which takes us beyond pondering questions of authorship and towards considering the author themselves.
It sounds like hard work and maybe it is – but research is rewarding, research is fun and the result should be something genuine, helpful, something of which to be proud.