Here we are again!


Since ChatGPT was first launched towards the end of 2022, there has been much alarm expressed in schools and colleges, in discussion forums, blogs and other social media platforms, in the educational press and in the general press too. There has also been calmer discussion; we shall come to that.

ChatGPT is an artificial intelligence (AI) text-generator developed by OpenAI. Its appearance marks a huge step forward in the evolution of AI. Until now, text-based AI has been uninspiring and flawed: think of the chatbots used by many support centres, which provide stock answers that never quite address your specific problem; think of translation software, synonymisers and rewriting tools which give word-for-word equivalents that are often nonsensical.

At first glance, ChatGPT is different.  ChatGPT responds to prompts to provide coherent and unique responses to often-complex questions; it is able to hold conversations, understand what is said and to make appropriate – if not always correct – responses.  Artificial it is and it may “know” a lot, but its intelligence and indeed what it “knows” may be limited.

AI is capable of basic writing tasks; The Washington Post, for instance, has been using AI to write reports for at least six years. The story The Washington Post leverages automated storytelling to cover high school football was published in the newspaper on 1 September 2017, and was supplemented a fortnight later by a Digiday article which declared The Washington Post’s robot reporter has published 850 articles in the past year. Text-generating AI has been with us for some time; ChatGPT is a natural development.

Much of the alarm has been voiced by teachers who fear that students will use AI, and ChatGPT in particular, to cheat: to submit work generated by ChatGPT instead of doing the work themselves. Because its output is fresh each time, its responses to the same prompt are never quite the same; what is more, it can be asked to regenerate its responses, and will churn out much the same information in different forms. Here, for instance, are three responses to the question:

ChatGPT – would you pass the Turing Test?

The Turing Test is a test used to determine whether a machine is capable of exhibiting intelligent behavior indistinguishable from a human. As an artificial intelligence, I am designed to mimic human conversation and behavior as closely as possible, but ultimately the decision of whether or not I pass the Turing Test would be up to the person evaluating me.

As an artificial intelligence, I am not capable of taking the Turing Test because I do not have feelings or consciousness. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It is intended to determine whether a machine is capable of thinking and behaving like a human. I am just a program designed to process and generate text based on the input that I receive.

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to have a wide range of capabilities and to be able to carry out conversations with people in a natural and human-like manner, I am still just a machine and my responses are generated based on the input that I receive. Whether or not I would be able to pass the Turing Test would depend on the specific circumstances of the test and the abilities of the human evaluator.

Each of these responses is different. Text-similarity software such as Turnitin will find no similarity to any other text. Any student submitting work produced by ChatGPT will not be “caught” by a “plagiarism detector”; it will be difficult to prove conclusively that the work is not the student’s own. The same is true, of course, for any plagiarism-free work produced by a friend, a tutor or a custom essay-writing service and submitted as the work of the student when it is not – and ChatGPT has the advantages of exceedingly quick response and, at least for the time being, of being free.

So it was that in the days immediately following ChatGPT’s debut, articles with alarmist headlines were quick to appear:

The College Essay Is Dead (Stephen Marche, The Atlantic, 6 December 2022)

ChatGPT Will End High-School English (Daniel Herman, The Atlantic, 9 December 2022)

Teachers Fear ChatGPT Will Make Cheating Easier Than Ever (Rashi Shrivastava, Forbes, 12 December 2022)

Schools and school districts are already blocking use of ChatGPT, a move which seems particularly futile, not least because the block can be enabled only on education department devices; students can still use ChatGPT on their cell-phones and other devices both inside and outside school.

NYC bans access to ChatGPT on school computers, networks (Michael Elsen-Rooney, Chalkbeat (New York), 3 January 2023).

It is almost as if there is an assumption that students are ever looking for unethical shortcuts and ways to cheat; I believe that this is questionable.

Some students will never cheat. They have personal integrity, encouraged and supported by home background and a school ethos and culture which promotes and celebrates honesty and authentic work.

I accept that there may be some students who, for whatever reason, will cheat any time they can.

And there is, I think, a group in the middle who might be tempted to take unethical short-cuts, especially if they are uninterested in the assignment set, or if poor time-management has led to a race against the clock to complete on time, and if they think they stand a decent chance of getting away with it. It is this middle group who need the most protection, both from a perception (justified or not) that “everyone else is cheating so why shouldn’t they?” and also from predators who provide shortcuts and temptations of various kinds, often dressing them up as acceptable practices when they are not.

It must also be said that content filters rarely work as intended, nor is ChatGPT the only text-generating AI available, just the best of its kind at the moment.  Better, I think, to teach responsible use of ChatGPT, awareness of its shortcomings (and there are many), and how to use it as a tool to enhance academic work and learning.

Not far removed from the alarmist stance are those who would fight technology with technology, in a never-ending game of catch-up.

Startup says it can reliably detect AI-generated content (Paul Gillin, SiliconANGLE, 16 December 2022),

Can Anti-Plagiarism Tools Detect When AI Chatbots Write Student Essays? (Daniel Mollenkamp, EdSurge, 22 December 2022).

A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-based plagiarism (Pete Syme, Business Insider, 4 January 2023).

Although test results suggest that these apps have a high degree of success in distinguishing genuine work from AI-produced text, there may be issues in “proving” that an assignment was completed by AI rather than by the student – especially if the student insists that the work is their own. With text-matching software, if the instructor can produce text written earlier which matches word-for-word text produced by a student, there may be suspicions of copy-paste plagiarism, and the more text there is which matches, the more certain the probability of plagiarism becomes. With original text generated by AI, there is nothing to match against. Suspicions cannot be proven.
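The matching logic at issue here is easy to picture in miniature. The sketch below is illustrative only – it is not any vendor’s actual algorithm, and the sample sentences are invented – but it shows why word-for-word copying is detectable while freshly generated text is not: matching works on shared runs of words, and AI output shares no such runs with any stored source.

```python
# Illustrative sketch of n-gram text matching (not any vendor's real algorithm):
# collect word n-grams from a known source and count how many reappear
# in a submission. AI-generated text shares no such runs with any stored
# source, so nothing is ever flagged.

def word_ngrams(text, n=5):
    """All runs of n consecutive words in the text, lower-cased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(source, submission, n=5):
    """Fraction of the submission's n-grams that also occur in the source."""
    sub_grams = word_ngrams(submission, n)
    if not sub_grams:
        return 0.0
    return len(sub_grams & word_ngrams(source, n)) / len(sub_grams)

source = "The Turing Test is a measure of a machine's ability to exhibit intelligent behavior"
copied = "As Turing proposed, the Turing Test is a measure of a machine's ability to exhibit intelligent behavior"
fresh = "Whether software can convincingly imitate people remains an open and contested question"

print(overlap_score(source, copied))  # high: a long word-for-word run is shared
print(overlap_score(source, fresh))   # zero: nothing to match against
```

The asymmetry is the whole point: the copied sentence is caught because its words echo a stored text, while the freshly worded one sails through even though it, too, might not be the student’s own work.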

And there is a growing body of opinion which holds that we cannot hold back the tide; education and educators must adapt.

New AI tools that can write student essays require educators to rethink teaching and assessment (Mike Sharples, Impact of Social Sciences, 17 May 2022).

What does writing AI mean for education? (Hanna Seariac, Deseret News, 11 December 2022).
(Later retitled: Does AI mean the death of the college essay?)

What Would Plato Say About ChatGPT? (Zeynep Tufekci, New York Times (Opinion), 15 December 2022) 

Teachers v ChatGPT: Schools face new challenge in fight against plagiarism (Osmond Chia, The Straits Times, 3 January 2023)

The notion here is that AI will develop ever further, but we can use AI, including ChatGPT, as learning tools to enhance our teaching and students’ learning. Our pedagogy, the ways we teach and the methods we use, may need to change, but this has always been the way: we have always had to adapt our teaching, both content and methods, as new technology has become available. Tufekci’s article is particularly apposite here, reminding us that Plato feared that the written word would affect our abilities to memorise. What goes around, comes around. Much more recently than Plato, Clive Thompson admitted

“I’ve almost given up making an effort to remember anything because I can instantly retrieve the information online” (“Your Outboard Brain Knows All,” Wired, October 2007), while around the same time David Brooks declared “I had thought the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less” (“The Outsourced Brain,” New York Times, 26 October 2007).

Change is always a threat, a challenge. We have learned to live with writing, books, paper, typewriters and computers; we have learned as educators to use Google and Wikipedia, Facebook and Twitter to our own and our students’ benefit.

A line in the NYC bans access to ChatGPT article is worth commenting on:

The chatbot’s ability to churn out pitch perfect essay responses to prompts spanning a wide range of subjects has sparked fears among some schools and educators that their writing assignments could soon become obsolete.

Yes, those writing assignments as they stand may become obsolete – or they can evolve and be adapted, made to rely less on stock responses and instead to require thinking, critical and personal thinking, responses which cannot be artificially generated. Several decades ago, the mantra was “If you can find the answer on Google, then you have not asked a (research) question”. The need for inquiry approaches is more important than ever.

There is also the questionable thought that AI can “churn out pitch perfect essay responses”. There is growing realisation that ChatGPT’s output is bland, repetitive, formulaic. The spelling and grammar are generally correct, but the output sounds robotic and is often wrong. ChatGPT’s abilities are shallow, limited, far from pitch-perfect when asked to “think” more deeply. This AI can work only with the information with which it has been provided, and while it has been provided with a lot of information, it cannot (as yet) add to it, and perhaps cannot even synthesise it.

When I asked ChatGPT to comment on recent events, it responded that it could not comment as these had happened after its “knowledge cutoff”, so naturally I asked about its knowledge cutoff and got the response:

My knowledge cutoff is the date after which I was not trained on any new information. My knowledge cutoff is 2021, so I do not have information about events or developments that have occurred after that time. I am only able to provide information and answer questions based on what was known at the time of my training. It is important to note that I am not able to browse the internet or access any new information, and I can only provide information and responses based on my programming and the information that I was trained on.

Particularly telling may be that ChatGPT does not always “know” from where its information comes, and will “happily” invent references to support its statements, complete with URLs to non-existent pages.

For instance, I asked ChatGPT several questions about Hamilton Naki’s role in pioneering heart transplant surgery. The responses showed no awareness of any controversy about his contributions, and when I asked for references to support ChatGPT’s responses, the AI gave me:

None of these pages exist, they all produce error messages.  Click on the links, search for the article titles, check these authors’ writings, trawl the archives of the sites (and the Internet Archive too); the facts might (or might not) be complete and accurate but the sources claimed are not. The day may well dawn when a truly intelligent, knowledgeable, creative and critically thinking computer or artificial intelligence is developed (or develops itself). For the moment, it seems that the tools of information literacy are beyond AI’s ken.

This weakness seems worth adding to our toolboxes. We already ask students to state the source of any information which they obtain elsewhere and which therefore needs a reference. Especially in the early stages of a piece of writing, it can be helpful to quickly check several of these references to be sure that they have given us the right source and the right information from that source. This helps us be more certain that the student knows how to cite and reference sources and is writing ethically. If some of those references turn out to be fictitious, it could indicate that AI might be behind much else of the work submitted.
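A first pass over a reference list can even be part-automated. The sketch below is illustrative (the reference list, the URLs and the simple URL pattern are all invented for the example): it pulls the URLs out of a block of reference text so that each can be tried in turn. A dead link does not by itself prove fabrication – pages do move – but a reference list full of them is exactly the red flag described above.

```python
import re
import urllib.request
import urllib.error

# Illustrative pattern: good enough for a quick pass, not a full URL parser.
URL_PATTERN = re.compile(r"https?://[^\s<>\"]+")

def extract_urls(reference_list):
    """Pull every http(s) URL out of a block of reference text."""
    return URL_PATTERN.findall(reference_list)

def url_resolves(url, timeout=10):
    """True if the URL answers with a successful HTTP status; False on any error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

# Invented reference list, standing in for what an AI might supply.
references = """
Smith, J. (2005). Heart surgery pioneers. https://example.com/real-page
Doe, A. (2003). Unsung heroes of medicine. https://example.com/invented-article
"""

for url in extract_urls(references):
    # To check each link over the network, call url_resolves(url) here.
    print(url)
```

The human step still matters: a link that loads may still point to a page that says nothing like what the reference claims, so the quick automated pass only tells us which references to examine by hand first.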

There is an uneasy sense of déjà vu here; we have been here before. Checking the accuracy of citations, both for content and source, is a tool in our plagiarism toolbox. Another tactic is to ask students to provide screenshots, printouts, photocopies or other evidence of their searches and finds, perhaps routinely as part of the assignment. As with any work which we suspect may not be that of the student, careful questioning regarding the research process, perhaps discussion of sources not used as well as those used and reasons for those decisions, and a check that the student understands what has been submitted – words, concepts and ideas – can help us decide on the balance of probabilities.

But, as with plagiarism detection, better still surely is discussion of the shortcomings of AI, what it can and cannot do, and how to use it ethically. When Wikipedia first appeared, the knee-jerk reaction of many teachers and schools was to ban it completely, to forbid its use for school assessments. Students were forbidden to use Wikipedia even while many of the teachers forbidding them to use it happily made use of it themselves. Gradually a more nuanced approach emerged: the notion of using Wikipedia for basic information, teaching the use of the references cited as a springboard to the finding and use of more authoritative sources, teaching how to use the padlock and star features to find locked and moderated (and therefore more accurate if not quite authoritative) articles, and so on.

This may be the way to go with ChatGPT in education: use it as a springboard – with the added twist that students themselves still need to find and cite the sources which provide the information it gives and which they use.  They need also to be sure that any AI output that they use is accurate, and that it addresses the question asked.  Teach them how to make best use of AI – and when to avoid it.  With regard to ChatGPT, teach them that they need to check responses, add to them, find the gaps and fill them, use their own voice.  Use ChatGPT as a tool, not as an enemy…

A footnote on bland, formulaic output

Early in this article, I presented three different responses, one original and the other two “regenerated”,  which ChatGPT gave to the question “ChatGPT – would you pass the Turing Test?”  Each response is unique, but I did wonder if the AI might give over-similar responses to someone else – so I set up an account on a different computer and using a different email and asked the same question.  The three original responses are listed first, the second set of three follow.

ChatGPT – would you pass the Turing Test?

Computer 1

The Turing Test is a test used to determine whether a machine is capable of exhibiting intelligent behavior indistinguishable from a human. As an artificial intelligence, I am designed to mimic human conversation and behavior as closely as possible, but ultimately the decision of whether or not I pass the Turing Test would be up to the person evaluating me.

As an artificial intelligence, I am not capable of taking the Turing Test because I do not have feelings or consciousness. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It is intended to determine whether a machine is capable of thinking and behaving like a human. I am just a program designed to process and generate text based on the input that I receive.

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to have a wide range of capabilities and to be able to carry out conversations with people in a natural and human-like manner, I am still just a machine and my responses are generated based on the input that I receive. Whether or not I would be able to pass the Turing Test would depend on the specific circumstances of the test and the abilities of the human evaluator.

Computer 2

As an AI language model, I am designed to generate human-like text and I can hold conversations on a wide range of topics, but the Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test is named after Alan Turing, who proposed it in 1950 as a way to determine whether a machine could think. To pass the Turing Test, a machine would need to be able to engage in a conversation with a human evaluator and convince the evaluator that it is a human, rather than a machine. While I am capable of generating human-like text, I am not a conscious being and do not have the ability to think or exhibit intelligent behavior in the same way that a human does. Therefore, I would not be able to pass the Turing Test.

As an AI language model, I was not designed to specifically pass the Turing Test. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to generate human-like text, my primary purpose is to assist users by providing accurate and reliable information, rather than to deceive or mislead them into thinking that I am a human.

As an AI language model, I was not designed to pass the Turing Test. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to generate human-like text, my ultimate goal is not to deceive people into thinking I am a human. Instead, my goal is to assist users by providing helpful and accurate information to the best of my ability.

It seems there may be a limited number of ways to say the same thing, at least in this short example, at least when asked to regenerate a response. Although each response is almost (?) unique (?), similarities are evident: the same phrases repeat within each paragraph, though not necessarily in the same order. This could alert teachers when the class is set the same assignment.
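This phrase-level repetition is something a teacher could check for quite mechanically. As a rough sketch (the sample responses below are abbreviated stand-ins, and this is a toy illustration rather than a tested detector): collect the word n-grams from each submission and flag any that recur across several independently written pieces. Long identical phrases in work produced independently are unusual, and worth a closer look.

```python
from collections import Counter

def shared_phrases(texts, n=6, min_texts=2):
    """Return word n-grams that appear in at least `min_texts` different texts."""
    counts = Counter()
    for text in texts:
        words = text.lower().replace(",", " ").replace(".", " ").split()
        grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)  # sets: each text counts a phrase at most once
    return {" ".join(gram) for gram, seen_in in counts.items() if seen_in >= min_texts}

# Abbreviated stand-ins for regenerated ChatGPT-style responses.
responses = [
    "The Turing Test is a measure of a machine's ability to exhibit intelligent behavior.",
    "As an AI language model, I note the Turing Test is a measure of a machine's ability.",
    "My goal is to assist users by providing helpful and accurate information.",
]

for phrase in sorted(shared_phrases(responses)):
    print(phrase)  # the phrases the first two responses have in common
```

A shared six-word phrase across two classmates’ essays is no proof of anything on its own, but a cluster of such phrases across a whole class set would be the kind of pattern worth investigating.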

That said, the two sets of responses are different; the first set declares that whether or not ChatGPT would pass the Turing Test depends on the evaluation of the human interacting with the AI, the second set suggests ChatGPT has not been designed to pass the Turing Test so this is a non-question.

The blandness of these responses, of much ChatGPT output, should raise red flags, suggest further investigation is necessary. We might not be able conclusively to “prove” that a student has used AI to respond to a prompt, but we can question closely to make sure that the student at least understands what has been presented, that some form of learning and understanding has taken place.  The danger is that some teachers accept ChatGPT output without question while others question it most closely, and may even require students to re-write their responses, in their own words and in real time, in front of the teacher.

This is another déjà vu scenario, a hark-back (I hope it is just a “hark-back”) to the days when some teachers accepted possibly plagiarised material without question while others in the same school would not accept work which was not the student’s own.  Education of teachers as well as students supported by adoption of and school-wide adherence to academic honesty/ integrity policies have been effective strategies in raising awareness of the problems of plagiarism. It may be necessary, it probably will be necessary, for schools to revise their honesty and integrity policies to include considerations of ethical and unethical use of AI – and of course to make sure that all members of the school community understand the rationales, and acceptable use of AI too.

For sure, we cannot ignore ChatGPT and banning it is not the way to go.

Who’s your friend?


One of the consequences of the death of Queen Elizabeth II last month is that over 800 individuals and companies who at the time of her death held a Royal Warrant for providing goods or services to senior members of the Royal Household need to re-apply for the warrant. Many may lose their warrant if King Charles III (and any other member of the royal family whom he appoints as a grantor) does not share the Queen’s tastes or needs. In addition, the warrant is not granted for the lifetime of the royal who grants the honour; every warrant holder needs to re-apply every five years to ensure that the Royal Household still uses the product or service.

When a royal warrant is cancelled or expires, the ex-warrant holder must remove the royal insignia from their labels, letter-heads and anywhere else they display the arms and the message “By appointment to Her Majesty the Queen” or “By appointment to His Royal Highness the Prince of Wales” – declarations which must now be updated.  (For more information on this, see the Royal Warrants page of the Royal Family website or the FAQs page on the Royal Warrant Holders Association website.)

The Royal Warrant is, of course, highly prized and not easily obtained. As well as being a proven supplier of named goods and services to the named royal over at least five years, warrant holders must hold to high standards – and not just directly in the products or services they supply. It is possible to lose the Royal Warrant: in recent years, the Rigby and Peller lingerie company lost its warrant after the Queen’s bra-fitter breached confidence and discussed her work for the Queen in her autobiography (The Royal Bra-Maker Has Been Stripped of a Royal Warrant Over New Book); more recently, it seemed that the Davidstow cheese company was about to lose its royal warrant after polluting the River Inny near its factory in Cornwall, England over many years (Queen could revoke Davidstow cheddar royal warrant over river pollution) – and perhaps only the Queen’s death might change the reason for its loss. Among the FAQs on the RWHA site is the note that between 20 and 40 royal warrants are cancelled every year (and a similar number of new warrants granted).

The Royal Warrant maintains its integrity. It has to be earned or deserved, it cannot be bought. It is withdrawn if standards are not maintained.

It is worth repeating this, it is a vital aspect of the warrant’s cachet, a major assurance of standards: the Royal Warrant maintains its integrity. It has to be earned or deserved, it cannot be bought. It is withdrawn if standards are not maintained.

My mind turns to thoughts of The OWL at Purdue and its association with Chegg. The OWL (Purdue University’s Online Writing Lab) is a much-respected guide to academic writing and its citation and reference guidance is highly regarded for its accuracy and helpfulness. Chegg is a not-well-respected “homework helper” whose practices have at times drawn criticism, blurring the lines between acceptable and unacceptable help for students (see, for instance, my posts Nothing but… and Not such a wise OWL).

The OWL’s association with Chegg is evidenced most directly on its style guides’ citation and referencing pages where there is an auto-reference generator, Citation Machine, part of the Chegg stable.  An auto-reference generator invites users to insert a URL, an ISBN, a title or other identifying feature of a work to be cited or referenced, and comes up with suggested citations and references. The example here is from the OWL’s MLA Style Introduction.

I am not fond of auto-reference generators, and Citation Machine is perhaps more inadequate and faulty than most. Since I last looked at Citation Machine, I see that as well as a reference which might or might not be accurate, the free version might also bring up a non-skippable 30-second advertisement for Chegg services, and when the reference is finally revealed, it also invites one to submit one’s work to check for “plagiarism errors”. Dangerous stuff, especially if you do not first read Chegg’s Terms and Conditions. I detailed some of the issues with Chegg’s T&Cs in Nothing but… – and nothing has changed. In that post, discussing Chegg’s User Content and Activities section on the T&C page, I said:

In this 8 paragraph section, you agree that with practically anything and everything you do on or with any Chegg site (including uploading or posting your own material, submitting questions or model answers, submitting text to their “plagiarism check” services,  using their citation/ reference generators and so on), you give Chegg a non-exclusive right to that material; Chegg can reuse it, pass it on or sell it on in any way they wish, without reference to you and without any form of payment for any further use of your material.

You also agree to let them use your personal details including your name, profile and photo for advertising and similar purposes.  You agree to let them use your material in any way they wish, even if you do not like the way they are using that material or any changes they have made to it.

It is worth noting that, in addition to critical articles cited in that blog post, more articles expressing reservations about the Purdue OWL-Chegg partnership have been published, including Emily Hamilton Haynes’s Thoughts on the OWL/Chegg partnership, Allison Hosier’s The Ballad of Purdue OWL, an unsigned article in the Spring 2020 SFCC Library Newsletter, Academic Integrity in a Digital Age, and Claire Warner’s Students cheat with online learning service, professors hope to identify users. I see that at least two university library online research guides include recommendations for the Purdue OWL as go-to guides for citation and referencing but warn explicitly against using the Chegg/Citation Machine auto-reference generator. There is, for instance, the graphic advice on the Wartburg College Vogel Library Citation Guide

CAUTION: In early 2019, Purdue OWL began a partnership with Chegg, which introduced a Citation Machine widget to their otherwise wonderful content. Do not be tempted by these citation generators. Scroll past them to the documentation below, while you will have to do the critical thinking to model after the examples, you are more likely to create a correct and complete citation this way.

and the perhaps even more explicit advice in the Research Guide for English & American Literature : 19th Century at the University of Vermont

MLA Style (OWL at Purdue)

(2020) Please Note: The written material on this university website is still a good resource, but please ignore the citation generator advertised on (seemingly) every page. In 2019, Purdue Writing Lab partnered with Chegg, a for-profit student services company. Chegg has licensed OWL’s writing tips and placed advertising on its website, helping OWL to monetize its free content.

[I have not explored; there may well be many other pages and guides for other styles on these sites which include similar warnings, as may other research guides in other libraries and education institutions around the world.]

So, the contrast: the Royal Warrant can be withdrawn if standards and integrity are not maintained; controversial as it often is, the royal family cares about its brand and is careful about those it associates with.  The Purdue OWL’s reputation is tarnished by its association with Chegg (but unfortunately the OWL seems less concerned about its brand and reputation).

And then there is Turnitin.

Regular readers will know that I do not have a lot of love for Turnitin. Over the years, the company has made misleading and even false claims about its efficacy, the rate of growth of its database, the sources that it indexes and the number of pages indexed. It has drawn false conclusions from its research and from its own frequently flawed originality reports (often finding false positives and false negatives even in the sample originality reports and other advertising matter it produces, where you might expect the company to make a special effort to get things right), and more, much more. Turnitin made little or no attempt to clarify misconceptions and misreporting, even in interviews with its own executives. From its earliest years, it claimed to be an educational tool rather than a “plagiarism checker”, but its reputation is founded on its (very fallible) ability to “catch” plagiarists and uncover possible plagiarism.

In recent years and under new management and ownership, Turnitin has become ever more invested in other aspects of education, including assessment and feedback, course management, authenticity checks and even academic writing. It makes ever more effort to downplay its use as a text-matching (“plagiarism checking”) tool and tries to emphasise its use in other aspects of education. My continuing lack of love for Turnitin is now based more on its technological solutionism approach to education, dehumanising classroom practice and assessment, teaching by algorithm.

Like it or not, Turnitin is a leader in the field and has expanded, perhaps aggressively.  It seems fair to say that those who like Turnitin generally hold it in high regard, any criticism they have tends to be more about the pricing than about the services provided.

One of the services which Turnitin provides is a partnership scheme.  Institutions and companies can apply to become Turnitin Partners, and apparently more than 200 have.  There are three partnership schemes. Technical Integration partners such as Blackboard and CollPoll have integrated Turnitin into their learning management systems, while Content partners are typically major academic publishers such as SpringerNature, Wiley, Elsevier and CrossRef, who have great interest in minimizing opportunities for plagiarism to appear in their publications.

The third group are Commercial partners who use Turnitin as part of the services they supply, and this is where the partnership scheme might become more problematic.  Some Commercial partners work with institutions to provide platforms and management systems into which Turnitin is integrated, companies such as Kira Talent and DreamApply (used in higher education to manage their applications processes), or Eummena, Edunao and Ellucian (also educational platforms and management systems).

Some other Commercial partners supply services not to institutions but instead to individuals, and these may be more problematic.  Enago and Editage are established companies which offer proof-reading and editing services, translation and assistance in getting papers published in quality journals.  Scribbr also offers proof-reading and editing services but would seem to be aimed at a lower academic level than Enago and Editage, undergraduates (and possibly secondary school students as well) rather than professional or advanced academic researchers.  I think it fair to say this because on the Scribbr site there is much emphasis on plagiarism checking and on citation assistance, the site offers a plagiarism checker and a citation generator.  On the home-page is the claim “Everything you need to write an A-grade paper“.  Professional researchers are looking for publication and knowledge sharing, not grades.  Scribbr is about grade enhancement.  Does that make it more problematic?

The issue of proof-reading is sometimes fraught and open to misunderstanding. In real life, it is a good idea to have someone else look over your work and point out possible errors.  Other people have distance; they see mistakes that you just cannot see, because you are too close to the piece.  Peer review is common classroom practice – sometimes going further than pure proof-reading: correcting spelling and grammar errors, making suggestions about structure and content, perhaps suggesting content which might be added, omitted or placed elsewhere, perhaps even suggesting new or different lines of research or development.  Academic peer reviewers for reputable journals are experts in their fields, helping the journals maintain their standards and their reputations.  In publishing, general as well as academic, proof-reading, peer review and editing are accepted techniques and often requirements.

The lines between proof-reading and correcting and critiquing and editing and rewriting can be grey, can become blurred.  Ethical issues may arise if the extent and the expectations of any review are not clear, and the temptation to make suggestions outside what is permitted may be strong – especially in academic assessment, where instructors, parents and fellow-students want the writers of the work they are reviewing to do well, improve, gain high grades.

For instance, at the secondary education level, the International Baccalaureate makes its expectations very clear, especially with regard to the extended essay: nobody may proof-read the essay apart from the student writing it.  The page Proofreading comprises the page heading and one short paragraph:


The whole essay needs to be proofread carefully by the student (computer spelling and grammar checkers are useful but will not do everything). They must not ask someone else to proofread their work as this is an important part of the learning experience.

While we might have reservations about the reason given for this requirement, the requirement itself is clear: “(students) must not ask someone else to proofread their work”.  This avoids the dilemmas and possible misunderstandings as to how much advice the student can receive from other people for this aspect of the writing: none. The extended essay is expected to be the work of the student, working with a supervisor, with any help received from others acknowledged in the course of the essay.

Integrity is integral

Who will know if a student receives outside help, whether simple proof-reading and pointing out possible errors or full critiquing, editing and rewriting services?  The danger is that students might be tempted to cross lines, to seek more assistance – from peers, from tutors, from parents and friends, from online services – than is permitted.  And because it is unlikely that anyone will find out, students may be tempted.

It comes down to awareness of the rules in the first place and then it is a matter of integrity, following the rules even if it seems unlikely that you will be caught if you break them.

I am not saying that Scribbr is unethical. There is a place for proof-reading and editing services.  I am not sure that secondary school or undergraduate education are among those places, though as noted, it may be acceptable if the boundaries are clearly stated and observed by all parties.

Which brings us back to Turnitin’s partnership with Scribbr – or rather, Scribbr’s partnership with Turnitin.  Scribbr claims, very loudly, to be an “authorised partner” of Turnitin:

Many Turnitin partners say on their websites that they are Turnitin partners, but Scribbr is the only partner I can find which uses the term “authorised partner” (and I wonder if the European spelling of “authorised” is significant).  The term suggests that Scribbr has Turnitin’s blessing and approval for its services – services which may go beyond what is permitted by educational institutions, whether generally or for individual assignments and assessments.  But because Scribbr might seem to be authorised by Turnitin to provide those services, perhaps (in the mind of the student) there can be nothing wrong in using them?  The temptation to make wrong choices becomes stronger.

Does Turnitin vet its commercial partners, or can they simply buy their way into Turnitin partnership? Turnitin’s technical partners must have their integration certified by Turnitin:

but content partners and commercial partners seem not to need certification.  Back to thoughts of the Royal Warrant and the statement I made earlier:

The Royal Warrant maintains its integrity. It has to be earned or deserved, it cannot be bought. It can be withdrawn if standards are not maintained.

Like the OWL at Purdue’s partnership with Chegg, the Turnitin-Scribbr partnership does not reflect well; there is an air of taint.

Footnote and afterthought

I have not run a test on Scribbr’s “plagiarism checker,” which I assume to be as accurate or inaccurate as Turnitin itself (since Scribbr uses Turnitin to run the checks).  Nor have I tried Scribbr’s editing and proofreading services, which, if Trustpilot is to be believed, have a 94% 5-star approval rating – a sharp contrast with most of the reviews on Reddit.

But I have conducted a limited test of their Citation Generator and was not impressed.

Those who use APA 7th edition will spot the use of sentence case in the first entry but title case in the second and third (APA uses sentence case consistently).  They will know that APA does not mark first editions at all, and uses (2nd ed.), (3rd ed.) and so on – not (03 ed.) – for later editions; that month and day are not used in the dates of references for books; and that dates of access are not required for URLs unless the page is likely to change over time (which these are not).  Regular readers of this blog will also know that the two web pages referenced here do include dates of publication and do have named authors, both missed by Scribbr’s algorithm – and there are other errors too.
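To make the edition rule concrete, here is a minimal sketch of how a generator might format editions correctly; the function name `apa_edition` is my own invention for illustration, not any real tool’s API:

```python
def apa_edition(n: int) -> str:
    """Return the APA-style edition marker for edition number n.

    APA 7 omits the edition for first editions and uses ordinal
    abbreviations -- "(2nd ed.)", "(3rd ed.)" -- for later ones,
    never zero-padded forms like "(03 ed.)".
    """
    if n <= 1:
        return ""  # first editions are not marked in APA references
    # ordinal suffix; 11-13 are special-cased ("11th", not "11st")
    if 10 <= n % 100 <= 13:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"({n}{suffix} ed.)"
```

A first edition yields nothing at all and later editions yield ordinal abbreviations – precisely where output such as (01 ed.) and (03 ed.) goes astray.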

If Scribbr’s Citation Generator is typical of its other services, then perhaps it is a service best avoided.

Once again, is Turnitin aware of Scribbr’s extra services? Does it approve? Does it authorise them? Does it warrant them?

As so often, gentle reader: be aware, beware!

Vanity, but not in vain


It has been a little while (okay, a long while) since I last posted here.  I am far from the only person who has had a difficult last few years, of course, but still.  I hope my personal situation is easing now and that I can fully get back into the swing of things.

I did start several blog posts during my long “sabbatical” and I may get round to completing them if they still seem relevant. What has sparked my interest now is, in a way, very personal, and, conceited fool that I am, I cannot resist sharing.

Many readers of this blog have accounts with platforms for sharing academic research and articles such as ResearchGate, which let them access academic papers, contribute informally to the body of knowledge and make themselves known to the community. Among other things, such services send notifications of articles within subscribers’ stated fields of interest, alert subscribers to articles which cite their own work, and so on. Subscribers can search for and read articles online; they can often download copies of these articles.  Sometimes authors may not wish their articles and papers to be openly available, but those interested may be able to send a personal message to such authors requesting copies of the sought-for paper.

Researchers at any level can open accounts through their institutions or companies and sometimes independently. Some platforms ask that new subscribers prove their interest or credentials by uploading at least one paper written by the applicant (though this is sometimes abused by some who choose to upload someone else’s work, in which case probably without permission and possibly breaching copyright).  As so often with social media, there is often free access with limited features and a pay-for membership with advanced features and tools.

As noted, one useful feature of these platforms is notification when articles are found which cite work written by the subscriber.  Knowing someone else has cited your work can enable you, the original writer, to read and possibly make contact with someone else working in the same field.

So it was puzzling to find this message in my inbox a few weeks ago:

John, we found a recent chapter that cited you:

Timely Telling Tweets: Using Social Media Data to Tell the Stories of Window Sex Workers in Amsterdam Facing Major Changes to Their Working Conditions.

Puzzlement indeed.  What work of mine might have been cited in this paper?  I soon found out; I just had to scroll down in the email.  It was the article “Has Turnitin got it all wrapped up?”  (The article I submitted was in fact called “Trust or Trussed?  Has Turnitin got it all wrapped up?” but Teacher Librarian decided that was too long a title and shortened it.)

What use, I wondered, did an article on sex workers in Amsterdam have for my article on the text-matching company Turnitin?

The article is not immediately available on the ResearchGate page, and those interested are invited to contact the author.

The ResearchGate page does note that the chapter is in the book Sex Work, Labour and Relations: New Directions and Reflections.  The page provides the abstract (but not the full text) and also includes a list of (12) references.

And yes indeed, there in the list of references is my article.

Reading the abstract more closely, I realised that this chapter focuses not on sex work per se but on the effectiveness and the challenges of using Twitter posts as a means of gathering data – it is about the research methodology as against (or is it as much as?) what was found out.  The abstract includes the sentence

Focussing on practical challenges of using tweets highlighting academic shortcomings in adherence to confidentiality and ethical pathways when submitting research to plagiarism software.

Consideration of “plagiarism software” might explain inclusion of my 2003 article, although it appears to be the only reference in the list of references to published work on plagiarism or plagiarism software.

The title line on the ResearchGate page includes a DOI link to the publisher’s page on SpringerLink – and this proved worth following up.  The chapter is still behind a pay-wall (UK£19.95 for the chapter or UK£79.50 for the eBook, UK£99.99 for the full hardback book), but the SpringerLink page did make me sit up.

The ResearchGate page lists only 12 references but the SpringerLink page lists more, 50 in all.  This is useful to know.  At a quick count, in the list on the SpringerLink page there are

    • 20 references to items discussing sex work
    • 26 references to items discussing social media and/or research methodologies
    • 2 references to online translation tools
    • 1 reference to a work on APA style and referencing
    • 1 reference to a work on plagiarism software (my article on Turnitin).

I am still no wiser as to why or how my work has been used; I wrote to the author using the ResearchGate request facility, asking for a copy of the text, but I have had no response.

But I am wiser in what I have learned from this: the difference between the ResearchGate list of references and that used in the actual article is worth reiterating.  I wonder if the ResearchGate list includes only writers whose work is included in the ResearchGate database?  As so often for the serious researcher, it pays to go to the original.

We are reminded that academic writers do not compile lists of references solely as a matter of academic honesty; indeed, academic honesty is a comparatively minor aspect of a reference list – we assume honesty.  More important is the assistance the writer gives the reader in understanding the sources used to inform the writing, and the assistance which enables the reader to find and follow up those sources – backward citation searching is the technical term.  When, as here, the list of references is available online, this last factor is enhanced by hyperlinked URLs or DOIs which lead to the webpages or repositories where the referenced articles may be accessed (although again, often password- or pay-wall-protected).

Even when stymied by a pay-wall, interested researchers may gain much from understanding and following up sources used which are openly available, especially those on the same topic.  In this case, anyone interested in the working conditions of sex workers will find works which could be of interest, as will anyone interested in the use of social media (and especially Twitter).  Without seeing the article, it has to be said that anyone interested in plagiarism and plagiarism software is less likely to find much of interest, apart from my somewhat dated article, written for a less scholarly audience – better perhaps to look elsewhere.

Just as an extension of this, when stymied by a pay-wall, aware researchers may gain much from reading what others have said about the inaccessible article, perhaps in reviews of the work (perhaps in professional magazines and journals), and also how the article has been used in later works in the same field. It can be worthwhile using Cited by features when they are available, as on Google Scholar. (Is this “forward citation searching”?)

My fit of vanity has not been in vain!  At the least, it has led to this discovery that the list of references available on ResearchGate (and perhaps on other, similar services) may be incomplete; if wanting to conduct a review of the literature or a backward citation search, it is worth (as always) checking the original publication for perhaps more complete information.

And, for better or for worse, it has got me writing and posting again. I hope to be writing more frequently than of late. We shall see.

Another brick in the wall


I have come across an interesting twist in the contract cheating industry, Ghost Grading: Part 1 – A New Twist on Contract Cheating.  I hope I do not steal any of Dr Sarah Elaine Eaton’s thunder, especially as she still has Part 2 of her investigations to come, but the story is of interest.

It seems that teaching assistants and other instructors in North America (maybe elsewhere too?) are being targeted to outsource their grading duties.  The contract grading company gets paid by the TA at a rate lower than the TA receives from their institution, so the TA has money-in-hand without doing the work and also, as Eaton puts it,  is “gaining back time to work on other, more interesting projects.”

It is a win-win situation.  Or is it?

My first thoughts considered the irony of the practice, especially with regard to the whole contract cheating industry.

Students can pay to have their work written by a contract essay company and their work might then be marked by a contract marking company.  Nice thought: what if the contract marker gives a poor grade to the contract written essay?  Could there be cases where the same company which sells the essay to the student then goes on to mark it?   Hmmm.  Would they (do they) have safeguards to make sure they mark the essay as promised by the sales side of the company, regardless of its quality?

Taken to an extreme, this could put universities and schools out of business altogether.  There would be no need for teachers and instructors and no need for students either.  Fake universities could offer fake qualifications to fake students who buy their essays from fake firms who mark the fake essays and give fake marks so they can gain fake qualifications.

Meanwhile, having some other body/s take over high-stakes assessment might just leave schools and colleges able to focus on their core purpose: education and learning?

Fantasy and irony aside, there are more serious considerations here.

WIIFT – What’s In It For Them?

It is not a win-win situation.

The instructor earns less money (the amount paid out to the company providing the “service”) – but does gain time for other projects and activities, academic or otherwise.  Presumably the time gained is worth it.  Eaton suggests that some approaches to instructors give the impression that the educational institution approves of the arrangement, which of course they do not.  Instructors and TAs who sign up could lose trust and ultimately their jobs. They will not gain experience in marking, they will not get to know the students and their abilities and capabilities, they will not gain awareness of misunderstandings and gaps in comprehension – feedback on the teaching.  They might gain a little, they will lose a lot.

Students will lose as well. They might not get the marks they deserve and they might not get the feedback they need.  They are not getting what they paid for, they are not getting the education they signed up for.

Contract grading companies, on the other hand, gain gain gain. Or to put it another way, win win win.  What’s In It For Them? Let us count the ways:

  • they gain student work and essays which, these being unscrupulous companies, could well be recycled and re-used, perhaps sold as exemplars on essay sites or even as original supposedly-contract-written essays;
  • they gain access to courses and to copyright materials which again could be sold on or repurposed;
  • they get a potential-blackmail hold over the TAs and instructors who sign up, an academic honey-trap;
  • they gain access to learning management systems, access to data, including confidential data, data which may go well beyond the course, which they can use and/or sell on;
  • they gain access to learning management systems, with opportunities to hack and corrupt the system altogether.

The potential is frightening, and I have lost count.

I look forward to part 2 of Dr Eaton’s article.


Takes your breath away…


News reports two days ago indicated that cigarette manufacturer Philip Morris’s takeover bid for Vectura, a UK manufacturer of lung health products, looks set to go through.  This  is not a matter of academic integrity and I am not sure about the integrity issues pure and simple either – but there are surely ethical considerations to ponder, and ponder I do.

Last month, discussing What’s not there, I wrote about e-smoking manufacturer Juul’s purchase of the May-June edition of the American Journal of Health Behavior (AJHB); the Special Open Access Issue on JUUL comprised eleven research studies and two editorial articles on JUUL, all attempting to provide academic studies to establish the public health benefits of electronic smoking products.   This is just ahead of a US Food and Drug Administration (FDA) ruling on whether to allow JUUL to continue making and marketing its products.  Although there seems to be strong evidence to support the notion that e-smoking devices are highly effective in weaning nicotine smokers off cigarettes on to far less harmful substitutes (and thus a public health benefit), there is also much evidence (not reported in the special issue of AJHB) that younger users become addicted to nicotine through using JUUL products and graduate to smoking cigarettes. Thus the FDA hearing and upcoming ruling, thus the lobbying and attempts to influence the outcome.

As well as the ethics of an academic journal selling the full issue as an advertising feature for a possibly unethical business, I also mentioned that Altria had bought a 35% stake in JUUL in 2018.  Altria is owner of several tobacco companies – including Philip Morris, makers of Marlboro cigarettes. Buying into JUUL thus helps Altria claim that it is helping tobacco addicts give up cigarettes while simultaneously selling a product which creates new addicts.

Which is why I have doubts about this week’s news, that Philip Morris’s UK£1bn takeover bid for Vectura is now recommended to shareholders (over a slightly smaller bid from a private equity company) and looks likely to go through.  It is as if Philip Morris’s business plan runs: create nicotine addicts and make money, help them give up their addiction and make money, help them with breathing and other lung problems caused by smoking and make money. This does not sit well.

There is also the danger, highlighted in press coverage of the takeover bid, that lung researchers at Vectura may be unwilling to work for a tobacco company and leave, and that research hospitals and universities may also block …

What’s not there


In How to make the world add up: Ten rules for thinking differently about numbers,* economist Tim Harford’s Rule Six reads

Ask who is missing.

It is sound advice.  Too often, we are so busy thinking about what IS there that we forget to look for what IS NOT there.  Looking at studies and surveys and pondering their conclusions and implications, it is important to know who and what were surveyed and studied, and where, when and how the investigations were carried out.  With surveys, we need to know the demographics of the sample investigated: factors such as age, gender, place, ethnicity, religion, class or wealth, job or employment – and many others, including the size of the sample and how participants were chosen – could affect conclusions about whatever is being studied, including whether those conclusions might (or might not) apply to those who were not studied, who did not take part.  Unless the sample includes everyone in the population, we cannot (at least, we should not) generalise and claim that whatever we have concluded applies universally.
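The generalisation trap can be made concrete with a tiny simulation (the numbers are invented purely for illustration): a population containing two groups, of which only one is ever reached by the survey.

```python
import random

random.seed(42)

# An invented population of 10,000: two equal groups with different
# average scores on whatever is being measured.
group_a = [random.gauss(50, 5) for _ in range(5000)]  # the group surveyed
group_b = [random.gauss(30, 5) for _ in range(5000)]  # the group missed

population_mean = sum(group_a + group_b) / 10000

# The survey only ever reaches group A, so the estimate is badly skewed.
sampled_mean = sum(random.sample(group_a, 500)) / 500
```

The sampled mean lands near 50 while the true population mean is near 40; the survey’s conclusion is sound only for the group that was actually asked.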

Caroline Criado Perez makes this point over and over in her book, Invisible Women: Exposing Data Bias in a World Designed for Men.  I think this paragraph …


Spinning it out


A few weeks ago, my eye was caught by an article in The Guardian, Overconfident of spotting fake news? If so, you may be more likely to fall victim.  Natalie Grover reported on a recent survey of 8285 Americans which suggests that 90% of participants thought their ability to distinguish between fake and accurate headlines was above average.  Those who had an over-high perception of their abilities were more likely to visit websites which tended to publish false or inaccurate news items, and were also more likely to share fake news; on the other hand, those who took a more thoughtful approach to their news reading were less likely to be misled by, or to share, inaccurate and false news reports.

There have been many studies of over-confidence in recent years.  It may be this misplaced self-confidence which leads students (and people generally) to go for, and to use without question, whatever comes up as Google hit number 1, regardless of anything they have been told and taught about website evaluation.  It may be a form of cognitive dissonance – knowing that they have to slow down and think about what they find online while at the same time accepting what they find online without thinking about it.

Who needs those CRAAP and WISER and CARRDS or other evaluation tools?  Why bother to laterally read and think, or use Four Moves?   We do not need to think, we cannot be taken in, we know best.

Think again.

I am not sure about my own general news-reading habits, but I do know I tend to be …

Feeling the draft


News reporters who plagiarize their stories occasionally make the news themselves – when they are found out.  I was alerted to just such a story a few days ago. My alert service pointed to two short online reports and I had a look.  There were a couple of statements in those reports which puzzled me, they were so intriguing they got me looking for more details and for clarification.

I am not sure that I found clarification.  I did find more reports on the same story, some published a day or two later but quite a few published much earlier. The core of the story remained the same but each succeeding report I looked at seemed to add a different detail.  Unhelpfully, some of those extra details did not quite match the details of other reports.

And while I do not want to comment on the case itself, not least because there is an active legal case going on (the reporter is suing for unfair dismissal), I think there are general points which can be made and general questions to ask which are of interest with regard to honesty and integrity in education and academia.

Let’s dive in!

There is agreement on the basic situation …

Reader beware – different views of point


Do you use Reader View?  Do you recommend it to your students?  I often use Reader View when available, especially if I want to print out or save a PDF version of the page I am looking at and there is no ready-made PDF version already linked on the page.

Reader and Reader View are extensions or apps which enable “clean” views of the page you are looking at, keeping the textual matter but avoiding the advertisements, embedded videos, navigation and sidebar matter and other distractions.

Here, for instance, is a page on MacWorld, How to enable Reader View automatically for websites in mobile and desktop Safari:

The advertisements flicker and change, the video clip plays automatically and floats so that it is always on the screen, there are several more distractions as you scroll through the article.

These distractions disappear …

Nothing but …


Last week, I received an email message from Chegg, telling me they had recently changed their Terms of Service.  It was very much an in-your-face message, in Helvetica 21.  That is big.

The body of the message reads:



We have updated our Terms of Use.

The updates are effective as of March 17, 2021. They apply to all accountholders, so we encourage you to read the Terms of Use fully. Some of the updates include changes to the Dispute Resolution section, the Arbitration Agreement, and to the procedures for filing a dispute against Chegg. The Terms of Use can be found here.

If you do not wish to have these Dispute Resolution updates apply to you, you must notify us as described in the Terms of Use within 30 days of their effective date.





It is a very carefully worded message. We are urged to “read the Terms of Use fully” and are told that “some of the updates include changes to” three specific areas of the Terms of Use, all three dealing with problems arising from using Chegg services and procedures in case of dispute.   Note that “some.”  The implication is that there may be other updates, other changes, but they are not mentioned in the email.

Nor are they listed on the Terms of Use page. There is no summary of changes made, no indication of what the previous terms were for comparison purposes.  Nor is there any indication of what, outside the dispute procedures, has also changed – just that note in the email suggesting that there have been changes elsewhere in the Terms of Use.  It is for the user to find them, “we encourage you to read the Terms of Use fully.”

There are 47 topics in the Terms of Use, more than 14,000 words on the page – …

Tempting snakes


It is some time since I last wrote about Viper, a free service which called itself a “plagiarism checker,” housed on a site called ScanMyEssay.  It is worth writing again, because there are a number of changes in Viper’s services and in the Viper business model.

In those earlier posts, Authentic Authenticity (published September 2013) and Snake (in the grass) (March 2016), I advised against Viper because, among other things: …

MLA9 already – and already mixed feelings


It does not seem long since the Modern Language Association published its 8th edition (MLA8) – but I see that it was released as much as five years ago, in April 2016. Now, next month sees publication of MLA9, the 9th edition of the MLA Handbook – and yesterday MLA hosted a webinar preview of the new edition.

I well remember my excitement and delight, as that edition seemed revolutionary (as I wrote in MLA8 – new edition of MLA Handbook and Back to basics – MLA8 revisited).  Instead of presenting lots of rules and variations from and exceptions to the rules in an attempt to include all types of known (and unknown) source, format, medium, platform and more, we were given a template to follow with which we could build the references which informed our lists of Works Cited, while still being faithful to the rationale and the principles of academic referencing and supporting our readers.  This was empowering, it was liberating.

The principles of MLA8 citation and referencing are …

The integrity of integrity


One of my neighbours was livid earlier this week. The council recycling collection team had not emptied his recycling box. We leave our recycling boxes at the roadside for collection; everyone else’s recycling had been collected, our boxes emptied, but not his.  A large tag tied to the handle explained why:  the recycling was contaminated.

Someone, presumably a passer-by, had deposited a polystyrene carton and the remains of a take-away meal in the recycling box. The whole box was deemed contaminated and could not be taken for processing.

Contamination of recycling is a problem. If not caught …

Cheap Shots


It is easy to take pot-shots at EasyBib. They make it too easy, as I have suggested many times over the years.  They have an imperfect citation generator which frequently churns out incorrectly-formatted citations (especially in auto-citation mode). They give wrong advice in their guides to citation styles. They have produced many flawed add-ons which attempt to enable “Smarter Research. Powered by You,” such as their Research and Essaycheck services (both of which were abandoned some years ago; the links here go to the Internet Archive records).  Their grammar and spelling checkers need to be used with great care – but that goes for many, probably most, possibly all grammar and spelling checkers.

(Among my various blog posts which mention EasyBib, Getting it wrong…, Not so easy does it, APA mythtakes and Not such a wise OWL are particularly pertinent here.)

As I say, EasyBib makes it easy to shoot ’em down.  I probably would not have bothered this time, except that, clearing my inboxes (long overdue), I came across an EasyBib blog post which …

Stylistically speaking


A pedant myself, I was naturally attracted to an article by Elizabeth Ribbans in the Guardian this week: the headline read COVID or Covid? The comfort of pedantry at a time of national crisis.

Ribbans is the newspaper’s readers’ editor; her team is responsible for fact-checking, correcting copy and dealing with readers’ questions, comments and complaints. The question which inspired the headline was from a medical specialist who asked why the Guardian insisted on using Covid-19 when the medical profession uses COVID-19.

Ribbans explains that it is the Guardian’s practice, in common with many if not most British newspapers,

to use uppercase for abbreviations that are written and spoken as a collection of letters, such as BBC, IMF and NHS, whereas acronyms pronounced as words go upper and lower, eg Nasa, Unicef and, now, Covid-19.
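The rule as quoted can be sketched in a few lines of code; since pronounceability cannot be detected mechanically, this illustration relies on a small hand-made lookup set (my invention, not the Guardian’s actual style tooling):

```python
# Abbreviations pronounced as words (per the quoted rule) -- illustrative only.
PRONOUNCED_AS_WORDS = {"NASA", "UNICEF", "COVID-19"}

def guardian_case(abbrev: str) -> str:
    """Apply the quoted Guardian rule: initialisms spoken letter by
    letter stay upper-case; acronyms said as words keep only an
    initial capital."""
    if abbrev.upper() in PRONOUNCED_AS_WORDS:
        return abbrev.capitalize()  # e.g. Nasa, Unicef, Covid-19
    return abbrev.upper()           # e.g. BBC, IMF, NHS
```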

(This is, incidentally, a practice I abhor. “Nasa” and “Unicef” are not words even if their abbreviations/acronyms can be pronounced; when I see them spelled as “NASA” and “UNICEF” I am aware of the full title of the body and its responsibilities, just as I am aware of who the BBC, IMF and NHS are and what they do. …

Avoid like the plague…

It’s an ill wind, they say, an ill wind which blows nobody any good.

Covid-19 / coronavirus is spreading, more people are affected, the global death toll keeps rising, and at exponential rates.  Businesses are closing, in some cases for good.  Parents are having to stay at home to look after children whose schools are closed. Stay indoors, do not go out unnecessarily, keep your distance, wash your hands.  The times are grim, the news is grim, we are all indirectly and directly affected (and if we aren’t affected yet, we will be).

The times are bringing out the worst in us, the times are bringing out the best in us.  While many selfishly rush to stockpile, emptying the shops, and more are happy to flout emergency regulations, we also see much that makes us proud: the selfless dedication of medical personnel and others in key services, new community awareness, measures of environmental recovery too.  These may be bad times but there is much that is good too, generosity and compassion.

Even cheat sites are playing their part. Well, one at least is. A special offer in the face of global catastrophe, …

None too advanced

In my last post, Guest what?, I described how I got intrigued by an article extolling the virtues of online essay writing services. It was posted on a website devoted to trashing the Royal Dutch Shell oil company. The article seemed so very out-of-place that I started investigating both the gripe site itself and the article.

Although the article, 10 Interesting Facts about Online Essay Writing Services, reads as if talking about essay writing services in general, it gives no names, no examples. There is, however, a single hyperlink to one of these services.  It links to a site well worth looking at more closely. It might even be worth sharing and discussing with students, the better to put them off any temptation to use such sites themselves.

The underlined text links to a site called …

Guest what?

Now here’s an oddity. My plagiarism news alert alerted me to 10 Interesting Facts about Online Essay Writing Services the other day. What I found interesting, even before I clicked on the link, was that the article was posted on the Royal Dutch Shell plc .com website. What interest did Shell, the multinational/global oil company, have in online essay writing services?

I just had to find out.

It turns out that Royal Dutch Shell plc .com is a gripe site, run by someone with a grudge against Royal Dutch Shell; it is not the company’s official website.

The site was founded by and is maintained by John Donovan.  On his disclaimer page, he openly proclaims the nature of his grudge against Shell.

Donovan might have good cause for his grievance; he certainly seems to have a grievance, be it justified or not.  His site is full of whistle-blowing articles pinpointing practices which may be of a dubious nature. The origins of his grievance are highlighted on his eponymous site (one of several he maintains):

And the puzzle: in among the many, many articles accusing Shell of misdemeanours of many kinds is the article, 10 Interesting Facts about Online Essay Writing Services. It seems out of place. What’s more, the “10 interesting facts” article extols the supposed virtues of a good essay writing service. Donovan appears to be very much in favour of them.  The article claims that “trustworthy and effective” services provide …

Here’s a how-de-do

In a recent post, APA7 – not so sure…, I said that one of the things I like about the latest edition of the APA Publication Manual is that it standardises the recording of a DOI to the form https://doi.org/xxxxx. Previously there were several different ways of recording a DOI, including


All three methods were accepted in APA style documents, with the caveat that the formats should not be mixed in any one reference list: authors should change the format of any DOIs as necessary to provide a consistent style in that paper.

The latest edition of APA advises a standard format, so this item would now be referenced only as

This standardisation is good: it reduces potential confusion.
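For illustration, the old DOI formats can be normalised to the new standard mechanically. Here is a minimal Python sketch (the function name and patterns are my own illustration, not anything from the APA manual):

```python
import re

def normalize_doi(doi: str) -> str:
    """Normalise the common historical DOI formats - a "doi:" label
    or an old resolver URL - to the single https://doi.org/ form
    recommended by APA7."""
    doi = doi.strip()
    # Strip a "doi:" prefix or an old (dx.)doi.org resolver prefix
    doi = re.sub(r'^(?:https?://(?:dx\.)?doi\.org/|doi:\s*)', '',
                 doi, flags=re.IGNORECASE)
    return f"https://doi.org/{doi}"
```

All three historical forms of the same DOI then resolve to one consistent reference string, which is exactly the consistency the caveat above was asking authors to impose by hand.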

But it’s not just online documents which have DOIs – print documents are often assigned DOIs as well. The APA-style reference for APA’s Publication manual is (according to my paperback edition of the style guide, p. iv):

American Psychological Association (2020). Publication manual of the American Psychological Association (7th ed.).

Which may make for complications (especially for students in IB schools).

In an earlier post, Just a matter of time, I pointed to confusion between online material and material obtained online. Students (and teachers and others) are often confused in this regard; the title of Katie Greer and Shawn McCann’s article says it all: Everything Online is a Website: Information Format Confusion in Student Citation Behaviors.

IB adds to the confusion by requiring students to provide dates of access for electronic sources.

Now APA7 adds to the pot by requiring that DOIs be provided, using the https:// format, for print materials as well as for online materials:

Include a DOI for all works that have a DOI, regardless of whether you used the online version or the print version (APA7, p. 299).

Putting it all together, I’ve got a little list – of incompatible requirements. * 

  • Many referencing style guides (including APA) advise that a date of access is needed only for online materials which are unstable: their contents or URL might change or be changed.
  • The guides advise that materials with a DOI are regarded as stable so do not need a date of access.
  • APA7 requires that if a source has a DOI then it should be included in the reference.
  • APA7 requires that the DOI use the https:// protocol, thus
  • (As noted,) materials with a DOI are regarded as stable so do not need a date of access (in major referencing guides).
  • IB requires that references for electronic sources include the date of access.
  • IB examiners have been known to comment “Date of access?” on reference lists which include DOIs which do not have dates of access – marks may have been deducted for the omission.
  • It is unlikely that IB examiners will check whether a work in a reference list which carries a DOI is available in print; the DOI will have the https:// protocol and therefore look just like an online source.
  • IB examiners might therefore deduct marks for not including the date of access of a print work because they think it is an online source and therefore should have a date of access.

It’s a fine how-de-do, isn’t it, a pretty mess AND a state of things? *

Here are two suggestions for resolving the conundrum:

1) If referencing print materials with a DOI for IB assessments, advise students not to give the DOI, despite any advice to the contrary in the referencing guide.


2) IB should instruct examiners that if a reference includes a DOI – including entries in the form – then no date of access is required; to dispel confusion in schools, this advice could (and should) be added to IB guidance such as the page Acknowledging the ideas or work of another person—minimum requirements.


*  I seem to have Gilbert and Sullivan’s Mikado playing as an earworm; both “I’ve got a little list” and “Here’s a how-de-do” feature in the comic opera – which leads to the thought: if we are trying to “make the punishment fit the crime,” we must first be sure that a crime has been committed.

No dumb questions

Some of the questions asked in forums to which I subscribe are basic and quickly answered, questions such as:

  1. I’ve heard that the abstract is no longer required in Extended Essays. Is this true?
  2. Can students write an Extended Essay in their ab initio language?
  3. Should a Language B student write the RPPF in their own language or in the language of the essay?

Sometimes the writer knows that these are basic questions, prefacing the question with something like “Apologies if this is a stupid question…”

Those who do apologise should understand: there are no dumb questions. If you don’t know the answer and you need to find it, it’s a valid question.  If you have made the effort to find out but cannot find (or do not understand) the answer to your questions, then it may be that your search powers need boosting, it may be that you are looking in the wrong place/s, or it could indicate a fault on the part of those who compile the guides or design the websites – but these questions are still valid and those who ask them still need answers.  Don’t apologise! (But see (4) below.)

I am very aware that, especially in the extended essay forums, supervisors may not have supervised a student under the current curriculum (which was introduced in 2016); their experience (if they have any) was gained some years ago, using an earlier and in some respects very different guide. There is no use saying they should know by now; they have not had the opportunity to find out. Their questions are still valid.

[As an aside, I would add that I am sometimes struck that many forum users only use the forums when they have questions; they do not visit (or receive notifications by email) as a matter of course. That’s sad – and a missed opportunity.  I find the forums an invaluable and free source of continuing professional development. I do not read every post, far from it, but I do read threads that interest me and I occasionally bookmark a thread because I don’t know or am unsure and I want to see what others have to say on the topic.]

What often surprises me (I am being very careful with my words here) is the nature of the responses they get. While the answers given are usually correct, they do not always give provenance: they do not say where the original questioner can verify the response, in which document the answer can be found. On what page, too, please; it is often not helpful enough simply to say (as one recent respondent to a question did), “on the EE website.”   Not pinpointing the source strikes me as unhelpful, certainly not as helpful as it might be – especially if the question has been asked because of disagreement in the school and the questioner needs support from documentation to settle the argument.

This could also be important when, instead of a single right answer to the question, there might be different and equally valid answers. That often happens when it is not a matter of policy but of local practice, with those responding stating what happens in their own subjects or schools as if this was the only way to do it (whatever “it” is), without appreciating that other subjects or schools may do it differently and also be right.  When the source is not documented, those following the thread cannot verify the accuracy of those responses and may be confused. Or worse.

And of course, if the respondent gets it wrong, gives a wrong answer and misleads the questioner (and is not corrected), the consequences may indeed be worse.

What surprises me most of all, concerns me most of all, is that we expect documentation from our students. When they make statements or claims in their work (and especially in their extended essays) that are not common knowledge, they are expected to state their source/s – and will probably lose marks if they do not, and in many cases may well be found to have committed plagiarism or another form of academic misconduct.

Please note, I am not suggesting that colleagues are committing plagiarism when they do not source their statements in the forums. These colleagues are not writing academic papers. But this just adds weight to one of my guiding principles, we do not just cite our sources in order to “avoid plagiarism” – we cite our sources to help our readers.  When we do not cite our sources, we are being less helpful than we might – we should – hope to be.

What’s more, we cite our sources to help ourselves. Even if we think we know the answer to a question, it is worth checking that we have it right – and having checked, to share the location in our response.

What source?

Not too far removed from these considerations is the nature of the source.  We teach our students CRAAP and other models for evaluating their sources, we promote lateral reading and other strategies for evaluation purposes, we demonstrate that Google hit #1 is often not to be relied on or may not provide a full answer, we implore them to go to the original source. We despair when our students ignore our advice and our warnings and fail to think critically about the information they find and they use.  Information is not all equal – but so often is treated as if it is.

And yet (here’s another gripe), on those occasions when sources are cited in the forums, whether by questioner or respondent, it is often not the guide or other official documentation which is cited. So many times the source is given as my colleague/s (or even my student), my coordinator, a workshop leader, a textbook, or “someone from IB” (who is more likely to be a workshop leader or field representative and not actually from IB) (not that everyone who works for IB is equally knowledgeable on all matters IB).

Occasionally, one even gets the impression that respondents know that the official guide and a textbook say different things – and they seem more inclined to believe the textbook than the official document.  But that’s a completely different matter. The fact remains: information is not all equal.

So, a plea: when responding to questions on forums, cite your source/s, cite authoritative source/s.   Our citations do not need to be perfect APA or Chicago or whatever. They need to be helpful. A direct link to the page will do, a path will do.  It’s helpful, it’s good practice. It gets to be a habit – which makes for good role-modelling as we work with our colleagues and with our students.

Let’s do it!



  1. Abstracts are no longer required in extended essays – and have not been since the introduction of the new curriculum in 2016 for first examination in May 2018. If included in an extended essay, they count towards the word count and – given that examiners stop reading after 4000 words – may mean that the examiner does not reach the conclusion of the essay, which could affect the marks awarded (What’s new in EE from 2016).
  2. It says specifically in the Language Ab Initio Guide (for first examination 2020, page 8) that students may NOT write an extended essay in their ab initio language.
  3. The RPPF must be written in the language of the essay. This is stated several times in the guide itself. It is also stated, in bold, on the RPPF itself. (Although the examiner will be fluent in the language of the essay, there is no guarantee that that examiner has any knowledge of the student’s own language, whatever that may be.)
  4. It would be good to think that those posing basic questions have made an effort to find an answer, in the guides and other documentation or in the forum/s. Given the frequency with which the same basic questions recur in the forums, one cannot help but wonder whether the questioner made any effort to see if the question had been asked before. In many cases, I doubt it.
    Nevertheless, there are no dumb questions.


APA7 – not so sure…

Remembering my excitement when MLA8 was announced (see MLA8 – new edition of MLA Handbook) and then when it was published (Back to basics – MLA8 revisited), I have to admit a degree of disappointment in the latest edition of the APA Publication Manual, APA7.

I am not a fan of MLA and its author style for in-text citation, I much prefer APA’s author-date approach.  Knowing the date of publication of a source is an important detail in the kind of reading and academic writing that I generally do.

But I do like MLA8’s approach, the principles of documentation that it announces right at the start.  There are just three principles and two of them resonate; they sum up my philosophy of and pedagogical approach to citation and referencing: …

Numbers count

Nadine Bailey’s Informative Flights blog is always worth reading. Her latest post, Resist the list, is as spot-on as ever.  Nadine doesn’t like compiling grade level recommended reading lists for a number of reasons and in this post, she lists some of those reasons.  Her stand is that already-hooked readers don’t need lists (they have other strategies for working out what to read next) and lists don’t work with those who aren’t hooked-on-books.  For them, other techniques and strategies are needed.

Perhaps the only suggestion I’d add to her post is …

Invisible women

I was intrigued when Caroline Criado Perez’s book, Invisible Women: Exposing Data Bias in a World Designed for Men, was published earlier this year.  It got a lot of exposure on radio (and no doubt in other media outlets too) – and deservedly so. Perez’s thesis is that it’s a man’s world: the world is unconsciously and often consciously designed and regulated by men for men, and women are too often seen as smaller and less powerful versions of men.  Even in countries and cultures in which men think they believe in equality in all spheres of life and that systems are designed and shaped to ensure equality as far as is possible, there are often huge gaps and inequalities, completely unintended because nobody thought from a woman’s point-of-view.

One of the hooks which grabbed me, listening to the reports and the interviews, was the built-in gender bias of technology. For instance, when a gender-neutral language such as Turkish is fed into Google Translate, “o bir hemşire” translates as “she is a nurse” while “o bir doktor” comes out as “he is a doctor.”

(Recordings of many of the interviews and other items are available …

Just a matter of time

A recent post in a closed Facebook group for IB Diploma Programme Extended Essay Coordinators asked, “Would this be a complete reference for a painting?”

There followed a curious discussion, some 20 comments long.  The discussion inspired this blog post – and also got me reviving a post I started earlier this year on the same theme but had not managed to finish. I have now. My earlier thoughts are woven in below, but let’s start with this recent, curious discussion.

The very first response declared,

The EE guide specifies that all online sources must have [Date accessed etc]

and thereafter the discussion focused on the date of access and its formatting and placement. After the person who posted the original question pointed out that the suggested reference did include the date of access (“Retrieved July 30, 2019”), that first responder came back with

(the Guide) requests a specific format for this and this point was reiterated in a workshop.

This same responder said in a later comment that the workshop leader had explained that having the date accessed in square brackets at the end of the reference enabled the examiner quickly to determine that the date of access had been included.

This raises a number of points – as it did in the discussion.  Yes, on the page headed Acknowledging the ideas or work of another person—minimum requirements, the Guide states that date of access must be included in any reference to an electronic source (whatever that means – the starting point for my original blog post, as taken up below):

Regardless of the reference style adopted by the school for a given subject, it is expected that the minimum information given includes:

        • name of author
        • date of publication
        • title of source
        • page numbers as applicable
        • date of access (electronic sources)
        • URL.

and goes on to state

Examiners are required to alert the IB when minimum requirements are not met by a student, and the work is investigated accordingly.

IB has its own requirements for referencing.  While the IB does not legislate which referencing style is used,  it does require that the style used is used consistently.  IB also advises that when its own requirements are different to those in a published style guide, then IB requirements must be followed.  This is acceptable.  Many if not most of the published style guides state explicitly that, if an instructor’s, school’s, institution’s or publisher’s requirements are different to the suggestions in the style guide, writers should meet the requirements of the instructor (etc).  Say it loud: even if a style guide recommends that date of access is not needed, for IB assessments the date of access is needed.

But, despite our workshop participant’s protestations, the IB does not prescribe …