Here we are again!

Since ChatGPT was first launched towards the end of 2022, there has been much alarm expressed in schools and colleges, in discussion forums, blogs and other social media platforms, in the educational press and in the general press too. There has also been calmer discussion; we shall come to that.

ChatGPT is an artificial intelligence (AI) text-generator, developed by OpenAI.  Its appearance marks a huge step forward in the evolution of AI.  Until now, text-based AI has been uninspiring and flawed: think of the chatbots used by many support centres, which provide stock answers that never quite address your specific problem; think of translation software, synonymisers and rewriting software, which give word-for-word equivalents that are often nonsensical.

At first glance, ChatGPT is different.  ChatGPT responds to prompts to provide coherent and unique responses to often-complex questions; it is able to hold conversations, understand what is said and to make appropriate – if not always correct – responses.  Artificial it is and it may “know” a lot, but its intelligence and indeed what it “knows” may be limited.

AI is capable of basic writing tasks; the Washington Post, for instance, has been using AI to write reports for at least six years: the story The Washington Post leverages automated storytelling to cover high school football was published in the newspaper on 1 September 2017, and was supplemented a fortnight later by a Digiday article which declared The Washington Post’s robot reporter has published 850 articles in the past year.  Text-generating AI has been with us for some time; ChatGPT is a natural development.

Much of the alarm has been voiced by teachers who fear that students will use AI, and ChatGPT in particular, to cheat, to submit work generated by ChatGPT instead of doing the work themselves.   Because its output is fresh each time, its responses to the same prompt are never quite the same; what is more, it can be asked to regenerate its responses, and will churn out much the same information in different forms.  Here, for instance, are three responses to the question:

ChatGPT – would you pass the Turing Test?

The Turing Test is a test used to determine whether a machine is capable of exhibiting intelligent behavior indistinguishable from a human. As an artificial intelligence, I am designed to mimic human conversation and behavior as closely as possible, but ultimately the decision of whether or not I pass the Turing Test would be up to the person evaluating me.

As an artificial intelligence, I am not capable of taking the Turing Test because I do not have feelings or consciousness. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It is intended to determine whether a machine is capable of thinking and behaving like a human. I am just a program designed to process and generate text based on the input that I receive.

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to have a wide range of capabilities and to be able to carry out conversations with people in a natural and human-like manner, I am still just a machine and my responses are generated based on the input that I receive. Whether or not I would be able to pass the Turing Test would depend on the specific circumstances of the test and the abilities of the human evaluator.

Each of these responses is different.  Text-similarity software such as Turnitin will find no similarity to any other text.  Any student submitting work produced by ChatGPT will not be “caught” by a “plagiarism detector”; it will be difficult to prove conclusively that the work is not the student’s own.  The same is true, of course, for any plagiarism-free work produced by a friend, a tutor or a custom-essay writing service and submitted as the work of the student when it is not – and ChatGPT has the advantages of an exceedingly quick response and, at least for the time being, of being free.

So it was that in the days immediately following ChatGPT’s debut, articles with alarmist headlines were quick to appear:

The College Essay Is Dead (Stephen Marche, The Atlantic, 6 December 2022)

ChatGPT Will End High-School English (Daniel Herman, The Atlantic, 9 December 2022)

Teachers Fear ChatGPT Will Make Cheating Easier Than Ever (Rashi Shrivastava, Forbes, 12 December 2022)

Schools and school districts are already blocking use of ChatGPT, a move which seems particularly futile, not least because the block can be enabled only on education department devices; students can still use ChatGPT on their cell-phones and other devices both inside and outside school.

NYC bans access to ChatGPT on school computers, networks (Michael Elsen-Rooney, Chalkbeat (New York), 3 January 2023).

It is almost as if there is an assumption that students are ever looking for unethical shortcuts and ways to cheat; I believe that this is questionable.

Some students will never cheat. They have personal integrity, encouraged and supported by home background and a school ethos and culture which promotes and celebrates honesty and authentic work.

I accept that there may be some students who, for whatever reason, will cheat any time they can.

And there is, I think, a group in the middle who might be tempted to take unethical short-cuts, especially if uninterested in the assignment set or whose poor time-management has led to a race against the clock to complete on time, and if they think they stand a decent chance of getting away with it.  It is this middle group who need the most protection, both from a perception (justified or not) that “everyone else is cheating so why shouldn’t they?” and also from predators who provide shortcuts and temptations of various kinds, often dressing them up as acceptable practices when they are not.

It must also be said that content filters rarely work as intended, nor is ChatGPT the only text-generating AI available, just the best of its kind at the moment.  Better, I think, to teach responsible use of ChatGPT, awareness of its shortcomings (and there are many), and how to use it as a tool to enhance academic work and learning.

Not far removed from the alarmist stance are those who would fight technology with technology, in a never-ending game of catch-up.

Startup says it can reliably detect AI-generated content (Paul Gillin, SiliconANGLE, 16 December 2022).

Can Anti-Plagiarism Tools Detect When AI Chatbots Write Student Essays? (Daniel Mollenkamp, EdSurge, 22 December 2022).

A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-based plagiarism (Pete Syme, Business Insider, 4 January 2023).

Although test results suggest that these apps have a high degree of success in distinguishing genuine work from AI-produced text, there may be issues in “proving” that an assignment was completed by AI rather than by the student themselves – which could well be the case if a student insists that the work is their own.  With text-matching software, if the instructor can produce text written earlier which matches word-for-word text produced by a student, there may be suspicions of copy-paste plagiarism, and the more text there is which matches, the more certain the probability of plagiarism becomes. With original text generated by AI, there is nothing to match against.  Suspicions cannot be proven.
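To make the contrast concrete, here is a minimal sketch in Python (my own illustration of the general idea, not any vendor’s actual algorithm) of the kind of n-gram overlap measure on which text-matching tools are built: two documents which share long word-for-word runs score high, while a freshly generated response shares almost nothing with any stored source.

    def ngrams(text, n=5):
        # break a text into the set of n-word sequences it contains
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_score(submission, source, n=5):
        # fraction of the submission's n-grams also found in the source;
        # the more word-for-word runs two texts share, the higher the score
        sub = ngrams(submission, n)
        return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

The higher the score, and the longer the matching runs, the stronger the suspicion of copy-paste; with AI-generated text there is no earlier source for the score to rise against.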

And there is a growing body of opinion which holds that we cannot hold back the tide; education and educators must adapt.

New AI tools that can write student essays require educators to rethink teaching and assessment (Mike Sharples, Impact of Social Sciences, 17 May 2022).

What does writing AI mean for education? (Hanna Seariac, Deseret News, 11 December 2022).
(Later retitled: Does AI mean the death of the college essay?)

What Would Plato Say About ChatGPT? (Zeynep Tufekci, New York Times (Opinion), 15 December 2022) 

Teachers v ChatGPT: Schools face new challenge in fight against plagiarism (Osmond Chia, The Straits Times, 3 January 2023)

The notion here is that AI will develop ever further, but we can use AI, including ChatGPT, as learning tools to enhance our teaching and students’ learning. Our pedagogy, the ways we teach and the methods we use, may need to change, but this has always been the way: we have always had to adapt our teaching, both content and methods, as new technology has become available. Tufekci’s article is particularly apposite here, reminding us that Plato feared that the written word would affect our abilities to memorise.  What goes around, comes around.  Much more recently than Plato, Clive Thompson admitted

“I’ve almost given up making an effort to remember anything because I can instantly retrieve the information online” (“Your Outboard Brain Knows All,” Wired, October 2007), while around the same time David Brooks declared, “I had thought the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less” (“The Outsourced Brain,” New York Times, 26 October 2007).

Change is always a threat, a challenge.  We have learned to live with writing, books, paper, typewriters and computers; as educators, we have learned to use Google and Wikipedia, Facebook and Twitter to our own and our students’ benefit.

A line in the NYC bans access to ChatGPT article is worth commenting on:

The chatbot’s ability to churn out pitch perfect essay responses to prompts spanning a wide range of subjects has sparked fears among some schools and educators that their writing assignments could soon become obsolete.

Yes, those writing assignments as they stand may become obsolete – or they can evolve and be adapted so that they rely less on stock responses and instead require thinking, critical and personal thinking, responses which cannot be artificially generated.  Several decades ago, the mantra was “If you can find the answer on Google, then you have not asked a (research) question”.  The need for inquiry approaches is more important than ever.

There is also the questionable thought that AI can “churn out pitch perfect essay responses”.  There is growing realisation that ChatGPT’s output is bland, repetitive, formulaic. The spelling and grammar are generally correct but the output sounds robotic and is often wrong. ChatGPT’s abilities are shallow, limited, far from pitch-perfect when asked to “think” more deeply.  This AI can work only with the information with which it has been provided, and while it has been provided with a lot of information, it cannot (as yet) add to it, and perhaps cannot even synthesise it.

When I asked ChatGPT to comment on recent events, it responded that it could not, as these had happened after its “knowledge cutoff”; naturally, I then asked about its knowledge cutoff and got the response:

My knowledge cutoff is the date after which I was not trained on any new information. My knowledge cutoff is 2021, so I do not have information about events or developments that have occurred after that time. I am only able to provide information and answer questions based on what was known at the time of my training. It is important to note that I am not able to browse the internet or access any new information, and I can only provide information and responses based on my programming and the information that I was trained on.

Particularly telling may be that ChatGPT does not always “know” from where its information comes, and will “happily” invent references to support its statements, complete with URLs to non-existent pages.

For instance, I asked ChatGPT several questions about Hamilton Naki’s role in pioneering heart transplant surgery.  The responses showed no awareness of any controversy about his contributions, and when I asked for references to support its responses, the AI gave me a list of plausible-looking citations, complete with URLs.

None of these pages exist, they all produce error messages.  Click on the links, search for the article titles, check these authors’ writings, trawl the archives of the sites (and the Internet Archive too); the facts might (or might not) be complete and accurate but the sources claimed are not. The day may well dawn when a truly intelligent, knowledgeable, creative and critically thinking computer or artificial intelligence is developed (or develops itself). For the moment, it seems that the tools of information literacy are beyond AI’s ken.

This weakness seems worth adding to our toolboxes. We already ask students to state the source of any information which they obtain elsewhere and which therefore needs a reference.  Especially in the early stages of a piece of writing, it can be helpful quickly to check several of these references to be sure that they have given us the right source and the right information from that source.  This helps us be more certain that the student knows how to cite and reference sources and is writing ethically. If some of those references turn out to be fictitious, it could indicate that AI might be behind much else of the work submitted.
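Part of that spot-check can even be automated. Here is a minimal sketch using only Python’s standard library which simply reports whether each cited URL resolves at all (the URLs in the example are placeholders, not real references); checking that a live page actually supports what the student claims must remain a human job.

    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError

    def check_reference_urls(urls, timeout=10):
        # report which cited URLs resolve and which return errors;
        # a dead link does not prove fabrication (pages move), but a
        # reference list which fails wholesale is worth a closer look
        for url in urls:
            request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
            try:
                with urlopen(request, timeout=timeout) as response:
                    print("OK  ", response.status, url)
            except HTTPError as error:
                print("FAIL", error.code, url)
            except URLError as error:
                print("FAIL ---", url, "(", error.reason, ")")

    check_reference_urls([
        "https://example.com/a-cited-page",        # placeholder URL
        "https://example.com/another-cited-page",  # placeholder URL
    ])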

There is an uneasy sense of déjà vu here; we have been here before. Checking the accuracy of citations, both for content and source, is a tool in our plagiarism toolbox.  Another tactic is to ask students to provide screenshots, printouts, photocopies or other evidence of their searches and finds, perhaps routinely as part of the assignment.  As with any work which we suspect may not be that of the student, careful questioning regarding the research process, perhaps discussion of sources not used as well as those used and reasons for those decisions, and a check that the student understands what has been submitted, words, concepts and ideas, can help us decide on the balance of probabilities.

But, as with plagiarism detection, better still surely is discussion of the shortcomings of AI, what it can and cannot do and how to use it ethically. When Wikipedia first appeared, the knee-jerk reaction of many teachers and schools was to ban it completely, to forbid its use for school assessments. Students were forbidden to use Wikipedia even while many of the teachers banning it happily made use of it themselves.  Gradually a more nuanced approach emerged: the notion of using Wikipedia for basic information, teaching the use of the references cited as a springboard to the finding and use of more authoritative sources, teaching how to use the padlock and star features to find locked and moderated (and therefore more accurate if not quite authoritative) articles, and so on.

This may be the way to go with ChatGPT in education: use it as a springboard – with the added twist that students themselves still need to find and cite the sources which provide the information it gives and which they use.  They need also to be sure that any AI output that they use is accurate, and that it addresses the question asked.  Teach them how to make best use of AI – and when to avoid it.  With regard to ChatGPT, teach them that they need to check responses, add to them, find the gaps and fill them, use their own voice.  Use ChatGPT as a tool, not as an enemy…

A footnote on bland, formulaic output

Early in this article, I presented three different responses, one original and the other two “regenerated”, which ChatGPT gave to the question “ChatGPT – would you pass the Turing Test?”  Each response is unique, but I did wonder if the AI might give over-similar responses to someone else – so I set up an account on a different computer, using a different email address, and asked the same question.  The three original responses are listed first; the second set of three follows.

ChatGPT – would you pass the Turing Test?

Computer 1

The Turing Test is a test used to determine whether a machine is capable of exhibiting intelligent behavior indistinguishable from a human. As an artificial intelligence, I am designed to mimic human conversation and behavior as closely as possible, but ultimately the decision of whether or not I pass the Turing Test would be up to the person evaluating me.

As an artificial intelligence, I am not capable of taking the Turing Test because I do not have feelings or consciousness. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. It is intended to determine whether a machine is capable of thinking and behaving like a human. I am just a program designed to process and generate text based on the input that I receive.

The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to have a wide range of capabilities and to be able to carry out conversations with people in a natural and human-like manner, I am still just a machine and my responses are generated based on the input that I receive. Whether or not I would be able to pass the Turing Test would depend on the specific circumstances of the test and the abilities of the human evaluator.

Computer 2

As an AI language model, I am designed to generate human-like text and I can hold conversations on a wide range of topics, but the Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. The test is named after Alan Turing, who proposed it in 1950 as a way to determine whether a machine could think. To pass the Turing Test, a machine would need to be able to engage in a conversation with a human evaluator and convince the evaluator that it is a human, rather than a machine. While I am capable of generating human-like text, I am not a conscious being and do not have the ability to think or exhibit intelligent behavior in the same way that a human does. Therefore, I would not be able to pass the Turing Test.

As an AI language model, I was not designed to specifically pass the Turing Test. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to generate human-like text, my primary purpose is to assist users by providing accurate and reliable information, rather than to deceive or mislead them into thinking that I am a human.

As an AI language model, I was not designed to pass the Turing Test. The Turing Test is a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. While I have been designed to generate human-like text, my ultimate goal is not to deceive people into thinking I am a human. Instead, my goal is to assist users by providing helpful and accurate information to the best of my ability.

It seems there may be a limited number of ways to say the same thing, at least in this short example, at least when asked to regenerate a response. Although each response is almost (?) unique (?), similarities are evident, the same phrases repeat, not necessarily in the same order, within the paragraph. This could alert teachers when the class is set the same assignment.
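That observation suggests a rough check a teacher could run across a set of submissions to the same prompt. Here is a minimal sketch in Python (my own heuristic, nothing more): collect the word sequences which recur in several texts; stock phrases such as “a measure of a machine’s ability to exhibit intelligent behavior” would surface immediately.

    from collections import Counter

    def shared_phrases(texts, n=6, min_texts=2):
        # n-word phrases appearing in at least min_texts of the texts,
        # counting each phrase once per text, not once per occurrence
        counts = Counter()
        for text in texts:
            words = text.lower().split()
            counts.update({" ".join(words[i:i + n])
                           for i in range(len(words) - n + 1)})
        return sorted(p for p, c in counts.items() if c >= min_texts)

Run over the six Turing Test responses above, a check like this would flag the repeated boilerplate at once; human writers answering the same prompt rarely reuse whole phrases in this way.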

That said, the two sets of responses are different; the first set declares that whether or not ChatGPT would pass the Turing Test depends on the evaluation of the human interacting with the AI, while the second set suggests that ChatGPT has not been designed to pass the Turing Test, so this is a non-question.

The blandness of these responses, of much ChatGPT output, should raise red flags, suggest further investigation is necessary. We might not be able conclusively to “prove” that a student has used AI to respond to a prompt, but we can question closely to make sure that the student at least understands what has been presented, that some form of learning and understanding has taken place.  The danger is that some teachers accept ChatGPT output without question while others question it most closely, and may even require students to re-write their responses, in their own words and in real time, in front of the teacher.

This is another déjà vu scenario, a hark-back (I hope it is just a “hark-back”) to the days when some teachers accepted possibly plagiarised material without question while others in the same school would not accept work which was not the student’s own.  Education of teachers as well as students, supported by the adoption of and school-wide adherence to academic honesty/integrity policies, has been an effective strategy in raising awareness of the problems of plagiarism. It may be necessary, it probably will be necessary, for schools to revise their honesty and integrity policies to include considerations of ethical and unethical use of AI – and of course to make sure that all members of the school community understand the rationales and what constitutes acceptable use of AI.

For sure, we cannot ignore ChatGPT and banning it is not the way to go.

Vanity, but not in vain

It has been a little while (okay, a long while) since I last posted here.  I am far from the only person who has had a difficult last few years, of course, but still.  I hope my personal situation is easing now and that I can fully get back into the swing of things.

I did start several blog posts during my long “sabbatical” and I may get round to completing them if they still seem relevant. What has sparked my interest now is, in a way, very personal, and conceited fool that I am, I cannot resist sharing.

Many readers of this blog have accounts with platforms for sharing academic research and articles, such as Academia.edu and ResearchGate, to access academic papers, to contribute informally to the body of knowledge and to make themselves known to the community. Among other things, such services send notifications of articles within subscribers’ stated fields of interest, alert subscribers to articles which cite their own work, and so on. Subscribers can search for and read articles online; they can often download copies of these articles.  Sometimes authors may not wish their articles and papers to be openly available, but those interested may be able to send a personal message to such authors requesting copies of the sought-for paper.

Researchers at any level can open accounts through their institutions or companies and sometimes independently. Some platforms ask that new subscribers prove their interest or credentials by uploading at least one paper written by the applicant (though this is sometimes abused by some who choose to upload someone else’s work, in which case probably without permission and possibly breaching copyright).  As so often with social media, there is often free access with limited features and a pay-for membership with advanced features and tools.

As noted, one useful feature of these platforms is notification when articles are found which cite work written by the subscriber.  Knowing someone else has cited your work can enable you, the original writer, to read and possibly make contact with someone else working in the same field.

So it was puzzling to find this message in my inbox a few weeks ago:

John, we found a recent chapter that cited you:

Timely Telling Tweets: Using Social Media Data to Tell the Stories of Window Sex Workers in Amsterdam Facing Major Changes to Their Working Conditions.

Puzzlement indeed.  What work of mine might have been cited in this paper?  I soon found out; I just had to scroll down in the email.  It was the article “Has Turnitin.com got it all wrapped up?”  (The article I submitted was in fact called “Trust or Trussed?  Has Turnitin.com got it all wrapped up?” but Teacher Librarian decided that was too long a title and shortened it.)

What use, I wondered, did an article on sex workers in Amsterdam have for my article on the text-matching company Turnitin.com?

The article is not immediately available on the ResearchGate page, and those interested are invited to contact the author.

The ResearchGate page does note that the chapter is in the book Sex Work, Labour and Relations: New Directions and Reflections.  The page provides the abstract (but not the full text) and also includes a list of (12) references.

And yes indeed, there in the list of references is my article.

Reading the abstract more closely, I realised that this chapter focuses not on sex work per se but on the effectiveness and the challenges of using Twitter posts as a means of gathering data – it is about the research methodology as against (or is it as much as?) what was found out.  The abstract includes the sentence

Focussing on practical challenges of using tweets highlighting academic shortcomings in adherence to confidentiality and ethical pathways when submitting research to plagiarism software.

Consideration of “plagiarism software” might explain inclusion of my 2003 article, although it appears to be the only reference in the list of references to published work on plagiarism or plagiarism software.

The title line on the ResearchGate page includes a DOI link to the publisher’s page on SpringerLink – and this proved worth following up.  The chapter is still behind a pay-wall (UK£19.95 for the chapter or UK£79.50 for the eBook, UK£99.99 for the full hardback book), but the SpringerLink page did make me sit up.

The ResearchGate page lists only 12 references but the SpringerLink page lists more, 50 in all.  This is useful to know.  At a quick count, in the list on the SpringerLink page there are

    • 20 references to items discussing sex work
    • 26 references to items discussing social media and/or research methodologies
    • 2 references to online translation tools
    • 1 reference to a work on APA style and referencing
    • 1 reference to a work on plagiarism software (my article on Turnitin).

I am still no wiser as to why or how my work has been used; indeed, I wrote to the author using the ResearchGate request facility asking for a copy of the text – but I have had no response.

But I am wiser in what I have learned from this: the difference between the ResearchGate list of references and the list used in the actual article is worth reiterating.  I wonder if the ResearchGate list includes only writers whose work is included in the ResearchGate database?  As so often for the serious researcher, it pays to go to the original.

We are reminded that academic writers do not compile lists of references solely as a matter of academic honesty; indeed, academic honesty is a comparatively minor aspect of a reference list – we assume honesty.  More important is the assistance the writer gives the reader in understanding the sources used to inform the writing, and the assistance which enables the reader to find and follow up the sources used – backward citation searching is the technical term.  When, as here, the list of references is available online, this last factor is enhanced by the hyperlinked URLs or DOIs which lead to the webpages or repository where the referenced article may be accessed (although again often password or pay-wall protected).

Even when stymied by a pay-wall, interested researchers may gain much from understanding and following up sources used which are openly available, especially those on the same topic.  In this case, anyone interested in the working conditions of sex workers will find works which could be of interest, as will anyone interested in the use of social media (and especially Twitter).  Without seeing the article, it has to be said that anyone interested in plagiarism and plagiarism software is less likely to find much of interest, apart from my somewhat dated article, written for a less scholarly audience – better perhaps to look elsewhere.

Just as an extension of this, when stymied by a pay-wall, aware researchers may gain much from reading what others have said about the inaccessible article, perhaps in reviews of the work (perhaps in professional magazines and journals), and also how the article has been used in later works in the same field. It can be worthwhile using Cited by features when they are available, as on Google Scholar. (Is this “forward citation searching”?)

My fit of vanity has not been in vain!  At the least, it has led to this discovery that the list of references available on ResearchGate (and perhaps on other, similar services) may be incomplete; if wanting to conduct a review of the literature or a backward citation search, it is worth (as always) checking the original publication for perhaps more complete information.

And, for better or for worse, it has got me writing and posting again. I hope to be writing more frequently than of late. We shall see.

MLA9 already – and already mixed feelings

It does not seem long since the Modern Language Association published its 8th edition (MLA8) – but I see that it was released as much as five years ago, in April 2016. Now, next month sees publication of MLA9, the 9th edition of the MLA Handbook – and yesterday MLA hosted a webinar preview of the new edition.

I well remember my excitement and delight, as that edition seemed revolutionary (as I wrote in MLA8 – new edition of MLA Handbook and Back to basics – MLA8 revisited).  Instead of presenting lots of rules and variations from and exceptions to the rules in an attempt to include all types of known (and unknown) source, format, medium, platform and more, we were given a template to follow with which we could build the references which informed our lists of Works Cited, while still being faithful to the rationale and the principles of academic referencing and supporting our readers.  This was empowering, it was liberating.

The principles of MLA8 citation and referencing are Continue reading

The integrity of integrity

One of my neighbours was livid earlier this week. The council recycling collection team had not emptied his recycling box. We leave our recycling boxes at the roadside for collection; everyone else’s recycling had been collected, our boxes emptied, but not his.  A large tag tied to the handle explained why:  the recycling was contaminated.

Someone, presumably a passer-by, had deposited a polystyrene carton and the remains of a take-away meal in the recycling box. The whole box was deemed contaminated and could not be taken for processing.

Contamination of recycling is a problem. If not caught Continue reading

Cheap Shots

It is easy to take pot-shots at EasyBib. They make it too easy, as I have suggested many times over the years.  They have an imperfect citation generator which frequently churns out incorrectly-formatted citations (especially in auto-citation mode). They give wrong advice in their guides to citation styles. They have produced many flawed add-ons which attempt to enable “Smarter Research. Powered by You,” such as their Research and Essaycheck services (both of which were abandoned some years ago; the links here go to the Internet Archive records).  Their grammar and spelling checkers need to be used with great care – but that goes for many, probably most, possibly all grammar and spelling checkers.

[Among my various blog posts which mention EasyBib, Getting it wrong…, Not so easy does it, APA mythtakes and Not such a wise OWL are particularly pertinent here.]

As I say, EasyBib makes it easy to shoot ’em down.  I probably would not have bothered this time, except that, clearing my inboxes (long overdue), I came across an EasyBib blog post which Continue reading

No dumb questions

Some of the questions asked in the forums to which I subscribe are basic and quickly answered, questions such as

  1. I’ve heard that the abstract is no longer required in Extended Essays. Is this true?
  2. Can students write an Extended Essay in their ab initio language?
  3. Should a Language B student write the RPPF in their own language or in the language of the essay?

Sometimes the writer knows that these are basic questions, prefacing the question with something like “Apologies if this is a stupid question…”

Those who do apologise should understand, there are no dumb questions. If you don’t know the answer and you need to find it, it’s a valid question.  If you have made the effort to find out but cannot find (or do not understand) the answer to your questions, then it may be that your search powers need boosting, it may be that you are looking in the wrong place/s, it could indicate a fault on the part of those who compile the guides or design the websites – but these questions are still valid and those who ask them still need answers.  Don’t apologise! (But see (4) below.)

I am very aware that, especially in the extended essay forums, supervisors may not have supervised a student under the current curriculum (which was introduced in 2016), their experience (if they have experience) was some years ago using an earlier and in some respects very different guide. There is no use saying, they should know by now; they have not had the opportunity to find out. Their questions are still valid.

[As an aside, I would add that I am sometimes struck that many forum users only use the forums when they have questions, they do not visit (or receive notifications by email) as a matter of course. That’s sad – and a missed opportunity.  I find the forums an invaluable and free source of continuing professional development. I do not read every post, far from it, but I do read threads that interest me and I occasionally bookmark a thread because I don’t know or am unsure and I want to see what others have to say on the topic.]

What often surprises me (I am being very careful with my words here) is the nature of the responses they get. While the answers given are most times correct, they do not always give provenance, they do not say where the original questioner can verify the response, in which document the answer can be found. On what page too, please, it’s often not helpful enough simply to say (as one recent respondent to a question did), “on the EE website.”   Not pinpointing the source strikes me as unhelpful, certainly not as helpful as it might be – especially if the question has been asked because of disagreement in the school and the questioner needs support from documentation to settle the argument.

This could also be important when, instead of a single right answer to the question, there might be different and equally valid answers. That often happens when it is not a matter of policy but of local practice, with those responding stating what happens in their own subjects or schools as if this was the only way to do it (whatever “it” is), without appreciating that other subjects or schools may do it differently and also be right.  When the source is not documented, those following the thread cannot verify the accuracy of those responses and may be confused. Or worse.

And of course, if the respondent gets it wrong, gives a wrong answer and misleads the questioner (and is not corrected), the consequences may indeed be worse.

What surprises me most of all, concerns me most of all, is that we expect documentation from our students. When they make statements or claims in their work (and especially in their extended essays) that are not common knowledge, they are expected to state their source/s – and will probably lose marks if they do not and in many cases may well be found to have committed plagiarism or other form of academic misconduct.

Please note, I am not suggesting that colleagues are committing plagiarism when they do not source their statements in the forums. These colleagues are not writing academic papers. But this just adds weight to one of my guiding principles, we do not just cite our sources in order to “avoid plagiarism” – we cite our sources to help our readers.  When we do not cite our sources, we are being less helpful than we might – we should – hope to be.

What’s more, we cite our sources to help ourselves. Even if we think we know the answer to a question, it is worth checking that we have it right – and having checked, to share the location in our response.

What source?

Not too far removed from these considerations is the nature of the source.  We teach our students CRAAP and other models for evaluating their sources, we promote lateral reading and other strategies for evaluation purposes, we demonstrate that Google hit #1 is often not to be relied on or may not provide a full answer, we implore them to go to the original source. We despair when our students ignore our advice and our warnings and fail to think critically about the information they find and they use.  Information is not all equal – but so often is treated as if it is.

And yet (here’s another gripe), on those occasions when sources are cited in the forums, whether by questioner or respondent, it is often not the guide or other official documentation which are cited. So many times the source is given as my colleague/s (or even my student), my coordinator, a workshop leader, a textbook, or “someone from IB” (who is more likely to be a workshop leader or field representative and not actually from IB) (not that everyone who works for IB is equally knowledgeable on all matters IB).

Occasionally, one even gets the impression that respondents know that the official guide and a textbook say different things – and they seem more inclined to believe the textbook than the official document.  But that’s a completely different matter. It remains, information is not all equal.

So, a plea: when responding to questions on forums, cite your source/s, cite authoritative source/s.   Our citations do not need to be perfect APA or Chicago or whatever. They need to be helpful. A direct link to the page will do, a path will do.  It’s helpful, it’s good practice. It gets to be a habit – which makes for good role-modelling as we work with our colleagues and with our students.

Let’s do it!

 

Footnotes

  1. Abstracts are no longer required in extended essays – and have not been since the introduction of the new curriculum in 2016 for first examination in May 2018. If included in an extended essay, they count towards the word count and – given that examiners stop reading after 4000 words – may mean that the examiner does not reach the conclusion of the essay, which could affect the marks awarded (What’s new in EE from 2016).
  2. It says specifically in the Language Ab Initio Guide (for first examination 2020, page 8) that students may NOT write an extended essay in their ab initio language.
  3. The RPPF must be written in the language of the essay. This is stated several times in the guide itself. It is also stated, in bold, on the RPPF itself. (Although the examiner will be fluent in the language of the essay, there is no guarantee that that examiner has any knowledge of the student’s own language, whatever that may be.)
  4. It would be good to think that those posing basic questions have made an effort to find an answer, in the guides and in other documentation or in the forum/s. Given the frequency with which the same basic questions recur in the forums, one cannot help but wonder if the questioner made any effort to see if the question had been asked before. In many cases, I doubt it.
    Nevertheless, there are no dumb questions.

 

Just a matter of time

A recent post in a closed Facebook group for IB Diploma Programme Extended Essay Coordinators asked, “Would this be a complete reference for a painting?”

There followed a curious discussion, some 20 comments long.  The discussion inspired this blog post – and also got me reviving a post I started earlier this year on the same theme but had not managed to finish. I have now. My earlier thoughts are woven in below, but let’s start with this recent, curious discussion.

The very first response declared,

The EE guide specifies that all online sources must have [Date accessed etc]

and thereafter the discussion focused on the date of access and its formatting and placement. After the person who posted the original question pointed out that the suggested reference did include the date of access (“Retrieved July 30, 2019”), that first responder came back with

(the Guide) requests a specific format for this and this point was reiterated in a workshop.

This same responder said in a later comment that the workshop leader had explained that having the date accessed in square brackets at the end of the reference enabled the examiner quickly to determine that the date of access had been included.

This raises a number of points – as it did in the discussion.  Yes, on the page headed Acknowledging the ideas or work of another person—minimum requirements, the Guide states that the date of access must be included in any reference to an electronic source (whatever that means – the starting point for my original blog post, as taken up below):

Regardless of the reference style adopted by the school for a given subject, it is expected that the minimum information given includes:

        • name of author
        • date of publication
        • title of source
        • page numbers as applicable
        • date of access (electronic sources)
        • URL.

and goes on to state

Examiners are required to alert the IB when minimum requirements are not met by a student, and the work is investigated accordingly.

IB has its own requirements for referencing.  While the IB does not legislate which referencing style is used, it does require that the chosen style is used consistently.  IB also advises that when its own requirements are different to those in a published style guide, then IB requirements must be followed.  This is acceptable.  Many if not most of the published style guides state explicitly that, if an instructor’s, school’s, institution’s or publisher’s requirements are different to the suggestions in the style guide, writers should meet the requirements of the instructor (etc).  Say it loud: even if a style guide recommends that the date of access is not needed, for IB assessments the date of access is needed.

But, despite our workshop participant’s protestation, the IB does not prescribe Continue reading

Finding my voice

A few years ago, I wrote (in Somewhere, over the spectrum …) of an AHA! moment, a realisation that understanding of academic citation practices may best be imaged, not just by a straight-line continuum from black to white with shades of grey between, but by a spectrum, all shades of the rainbow and anywhere in between.

It was Teddi Fishman, then director of the International Center for Academic Integrity, who gave me this insight.  In a case in which she was asked for her opinion on whether a published piece of work had been plagiarised, Fishman said

With regard to citation errors and plagiarism, there is a wide spectrum and certainly not all are created equal. The main defining characteristic in cases that we’d classify as citation errors is that there is an attempt to identify the source of the information rather than to make it appear as if the words or ideas are those of the person using them in the document.

(The full article from which this quotation is taken is no longer available on the Cambridge Chronicle site.  Fortunately, it can still be found in the Internet Archive;  the quotation of Fishman’s response as reported by journalist Sara Feijo is on page 3 of this article.)

Fig. 1 – Black and white and shades of grey

In the continuum imagery, the white end comprises writers who know the rules, know what is right, what is expected, what is needed – and follow them!  Ideally they observe the conventions of citation and referencing because they have integrity; they wouldn’t – couldn’t – do otherwise.

At the black end we have the writers who know the rules, who know what is right, what is expected, what is needed – and they knowingly break the rules! They copy, they paraphrase without acknowledgement, they use other people’s work and claim it as their own, they use their own work over and over and claim Continue reading

Consistently inconsistent?

I’ve got a bit behind in my reading lately. Although it was published in May 2018, I came across Jennifer Yao Weinraub’s Harder to Find than Nemo: The Elusive Image Citation Standard only recently.  In this paper, Weinraub discusses confusion and inconsistencies in the citation of images and the lack of good examples, with particular reference to MLA8 and Chicago. She also discusses other style guides and citation generators, and the recommendations of some specific image collections. She points to tutorials and libguides which also attempt to give guidance.

Coming across this article is timely.  Over the last few weeks I seem to have received a steady stream of image citation questions in my inbox. Some notifications originate in online groups and forums, some are emails sent to me directly. It’s a hot topic!  The images presented by questioners are rarely straight-forward, rarely textbook examples. I suppose if they were, there would be less doubt as to how to cite them, the questions would not be asked.  So it is good to find Weinraub’s article, if only to confirm the difficulties and the contradictory or missing advice.

Weinraub suggests confusion in the use of the terms caption and citation (which I would call “reference” – the location details which specify edition (etc) and enable retrieval). She also suggests differences, uncertainty and inconsistencies as to what might or should be included in these. She also notes Continue reading

None too sweet

I have remarked before on possible problems raised by conflicting definitions and usages of the terms “reference” and “citation.”

Some style guides use the term “reference” to mean the short form in the text which links to what they call a “citation”, the full details in the list at the end; some call that short form in the text “citation” and use “reference” for the full details in the list at the end; some use both terms interchangeably; some use reference to mean the quotation (or paraphrase or summary) from someone else’s work, acknowledged with a short-citation in the text which links to the full citation at the end.

It makes for confusion. In workshops, I often tell Lori’s story:  her teacher kept reminding her to check that she had citations for all her sources and she thought she had … except that the teacher meant Continue reading

A critical criterion

Over the last few weeks, the IB has been publishing Extended Essay reports for the May 2018 exams.  They are available for most subjects now.

I’ve been looking through them.  Some of them make sad reading, marks thrown away needlessly.  Most students should score in the top mark band for Criterion D, Presentation, at least for the elements of structural presentation.  And yet, and yet…  too many don’t.

Are the students who don’t get maximum points here careless?  Don’t they know what’s required? Are supervisors letting them down by not advising what to check?  Care here with that last thought, of course: supervisors are not permitted to tell students that the page number for (say) the Discussion section does not match the page number given on the Table of Contents page; they are permitted to advise students to check that numbers on the pages match those in the Table of Contents page.  The first situation is being specific and amounts to proof-reading and/or editing (neither of which is permitted); the second is general and generic, and advises the student to do the work of finding errors and correcting them.

Examiner comments regarding page-numbers bother me.  Not the comments themselves but Continue reading

Out of step footnotes – 2

In recent weeks, I’ve been indulging a footnote fetish – last week’s post was part 1 of a 2-post mini-critique of the Chicago/Turabian style. I am almost over my obsession, just this last blast to go.  It’s a particularly pertinent piece for readers in IB schools, in that it focuses on inconsistencies in Turabian.  While they do (or are supposed to) accept any referencing style, IB examiners are concerned that references and citations be recorded completely and consistently within each individual assessment.  Given that IB requirements are sometimes inconsistent with the guidance of particular style guides, confusion can be compounded when the chosen style guide is inconsistent within itself.

[All references and scans used in this piece are from Turabian, 9th edition – more properly Kate L. Turabian’s A manual for writers of research papers, theses, and dissertations: Chicago style for students and researchers, 9th ed., University of Chicago, 2017.]

First, a general note, not specific to Turabian.  Turabian advises that many items should be cited in the text but not in the bibliography, for instance:

personal interviews, correspondence, blog posts and other social media, newspaper articles, reviews (of books, performances), well-known reference works, the Bible and other sacred works etc. etc.
(Turabian, section 16.2.3, lists many more…)

Turabian is not alone in suggesting that writers give details of certain types of source in the text but not in the bibliography; many style guides list exceptions to the general rule.  In all instances, when writing for IB, IB requirements overrule the advice of any style guide: if you cite it in the text, be sure to give a full reference in the list at the end.

Similarly, Turabian advises that Continue reading

Out of step footnotes – 1

A couple of posts ago, I declared myself Not a friend of footnotes. I don’t like them as a reader, I don’t like them as a writer.

I appreciate that many, many people, readers and writers, do like footnotes and endnotes, and that’s fine with me. I’ll put up with them if what I read is interesting, I’ll use them as a writer if my editors demand them.  I’ll agree that they may well suit particular forms of writing and different media. But I do not like them.  In this post and the next, I’ll detail some of the reasons why I don’t like them, particularly as a writer.

[I’ve been told that my two-weeks-ago post was unfair. Here I described some of my problems as a reader, and I used some illustrations from Yuval Noah Harari’s Sapiens to make my point, illustrations I used in a workshop soon after. “But he’s not using endnotes properly!” I was told.  “He shouldn’t use several authors in one endnote, they should be distinct.”

[Far be it for me to suggest that Harari is using endnotes wrongly, especially as Turabian (9th ed.) states Continue reading

Transferable skills

If you were hoping for more thoughts on footnotes and endnotes this week, my apologies. The thoughts I had in mind are still to come.  This post is still about footnotes, but not quite what I thought I’d be saying.

The IB has begun posting the May 2018 DP subject reports in the Programme Resource Centre and I have spent some time this past week looking through them.

This is not something I do as a matter of course. I do look at the Extended Essay reports for all subjects – and eagerly await publication, they must surely be posted any day now. But I don’t follow the subject reports that carefully.

My look at the subject reports was impelled by a comment made in a workshop I led last week – a history teacher insistent that the subject guide for History says that students are required to use footnotes.  I was sure that the subject guide says no such thing; IB allows the use of any documentation system as long as Continue reading

To quote or not to quote

A couple of weeks ago, Nadine wrote a comment on my post Multiple confusion in which she noted, “Not that encouraging direct quotations is best writing form, but at that age it’s probably more common.”

That got me thinking. When teaching citation and referencing, we often start with quotations.  Is this because it is easy to demonstrate, based on something that most students can and do already do?  When you copy-and-paste, you are using someone else’s exact words, you are quoting someone. When you quote someone, you need quotation marks. You use quotation marks around the copy-pasted words to show you are quoting, and you also say who that someone is, whose words you have borrowed.

From there we go on to say that, when you use your own words to put over someone else’s thoughts and ideas and findings, you need also to cite them;  they may be your own words but they are NOT your own thoughts.  You still need to say whose thoughts or ideas or findings you are using.

It’s a common complaint, that although most students know how and when to quote someone else’s material, it is when they paraphrase or summarise someone else’s work that they often forget that they need also to cite the source of that work.  It might be because they confuse using their own words with their own original words and ideas… they are using their own words so a citation is not necessary?

I won’t go too far down that track today. What I do want to do is to go back to Nadine’s comment, that “encouraging direct quotations” is not “best writing form.”  My first thought was, why then do we teach how to quote and cite?  But a second thought quickly followed Continue reading

Cite check

I’ve just finished an online workshop for librarians. Good fun as usual and very worthwhile. The participants made really great strides over the four weeks and they knew it; they had so much new awareness by the end of the month.  It was very encouraging.

Many went beyond the bounds of the workshop readings to find information and opinion elsewhere, the spirit of inquiry was strong.  Many quoted from the articles they found – great!  Quite a few copied graphics and images from articles and other materials found – and most did not need to be reminded to cite the sources of those graphics as well as of the text.

But … perhaps because there were larger numbers of newcomers to librarianship on the course than usual, there seemed to be a rash of participants who would simply cite their images as “Google” or “Google Images” or present the Google image search URL.

That’s not helpful and it’s not right either. I would send a personal message Continue reading

Multiple confusion

A question came up in Programme Communities in My IB just recently:

My student is using a book and a website as her primary sources about the organisation she is researching for her extended essay.  When there are several quotations or summaries from the same book or article, it is easy to show in the in-text citation from which page the quotation/summary/paraphrase is taken.  What about the website – how does she indicate the different pages used from within the same website?  (This is a slightly edited version of the question as posed.)

I checked the manuals and was able to answer the question fairly quickly.  But it’s been bugging me, because the approaches taken by MLA and APA are very different.

APA style

Usually, I prefer APA to MLA. There are several reasons, one of which is that APA is nicely straightforward with its WHO-WHEN-WHAT-WHERE approach.  In this instance, though, I think the APA is confusing.

The answer is not spelt out in the Publication Manual so I checked Continue reading

Smile, please – it’s for real

I came across this news item in the i newspaper (page 13 of the 29 August 2018 edition, a short article by John von Radowitz). The article reports on a study in which “Scientists showed 20 goats unfamiliar photos of the same human face looking happy or angry;”  they found that “goats preferred to interact with the smiling face.”

It sounds fun, it sounds odd, it almost sounds improbable.

Two things struck me immediately.  The first was that phrase, “unfamiliar photos.”  When you’re a goat, who’s to say whether a photo is familiar or unfamiliar?

The second was a memory – a memory of the academic paper Feline Reactions to Bearded Men.  You might remember it: the researchers claimed to have held cats in front of photos of bearded men and observed their reactions.  The paper suggests that “Cats do not like men with long beards, especially long dark beards.”

The cats “paper” was first published in 1999, maybe earlier.  It is frequently used in website evaluation exercises to make students aware of web pages which look authentic but could be big hoaxes.

The name of the site – Improbable Research – is claimed as a warning signal (though as this is the site responsible for the annual Ig Nobel Prizes, a very real event, one might not be so sure). The biggest giveaway in the cats paper is probably the bibliography, which includes entries for Pat Boone, Madonna, Yul Brynner, Sinead O’Connor, Mary Quant, Arnold Schwarzenegger and the if-only Dr Seuss (responsible for the paper “Feline Responses to Hats”).  How much of a giveaway, 20 years on, might be questionable; many of the names are probably unknown Continue reading

Not just CRAAP – 3

[In part 1 of this three-part article we looked at Wineburg and McGrew's study, which suggests that a fresh look at the way we evaluate web pages and sites could be valuable.]
[In part 2, we looked at a rebuttal of Wineburg and McGrew’s study – and rebutted the rebuttal.]
[In this third part, we look at reasons why we may need a compromise between the “old” and the “new” ways of evaluating pages and sites online.]

In my last two posts, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students were different from those of the fact-checkers – and that these methods could be among the reasons why many of them made incomplete analyses of the sites they visited and drew flawed conclusions.

In one particular task, a comparison and evaluation of two articles which both dealt with bullying, the researchers found that the historians and students tended to spend much time considering the actual articles before moving elsewhere; some never left the target sites at all, while others did eventually look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; all of them quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Continue reading

Not just CRAAP – 2

In part 1 of this three-part article, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates.  The researchers found that the methods and thinking processes of the historians and the students were different from those of the fact-checkers – and that these methods could be among the reasons why many of them made incomplete analyses of the sites they visited and drew flawed conclusions.

The three groups were asked to complete six tasks in timed conditions. The findings and ensuing discussion are detailed in the paper Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.

In this earlier post (Not just CRAAP – 1), I invited readers to try one of the tasks for themselves. If you haven’t already done this, it might be a good idea to try before reading on here.

The task asked participants to imagine they were looking for information on bullying, and to describe their thought processes as they considered two particular articles on two different websites.  The articles were Bullying at School: Never Acceptable on the site of the American College of Pediatricians (ACPeds – the College) and Stigma: At the Root of Ostracism and Bullying on the site of the American Academy of Pediatrics (AAP – the Academy).

Participants were allowed to look elsewhere on the sites and anywhere else online that they wished.  They had to decide which website was the more reliable and trustworthy.

What the researchers found was that Continue reading

Not just CRAAP – 1

Over the weekend, a newsletter item in the Chronicle of Higher Education caught my attention, One way to fight fake news by Dan Berrett and Beckie Supiano.  It was originally published in November 2017;  I’ve got behind in my reading.

The item reports on a study by Sam Wineburg and Sarah McGrew.  Wineburg and McGrew compared the search habits and evaluation techniques of three different groups: professional historians, professional fact-checkers, and students at Stanford University.  They found that:

  • the historians and the students mostly used very different techniques of search and evaluation to the techniques of the fact-checkers;
  • the historians and the students could not always find the information they were asked to search for;
  • the historians and the students took longer to decide on the validity and reliability of the sites they were asked to look at;
  • most disturbingly, the historians and the students came, by and large, to diametrically opposite conclusions to those of the fact-checkers as to the validity and reliability of the various sites; the two camps could not both be right.

Before reading further, you might want to try an approximation of one of the tasks undertaken by the participants (there were six tasks in all, in timed conditions). Continue reading

Guilty by association

A month or so ago, an incident at Ohio State University made headlines. One or more students had posted information on business course assignments in a GroupMe study group.  The type of information shared violated the University’s code of student conduct.  As a consequence, more than 80 students – all members of the GroupMe group – were charged with cheating.

GroupMe is a free group-messaging app, widely used to send messages and documents simultaneously to all members of a group. Members of educational GroupMe groups often use it to share due dates, study tips and readings. When collaboration is permitted, this kind of app can be a great boon to collaborative work. In this particular case, however, some users had overstepped the mark and posted suggested answers to homework assignments. Legitimate collaboration had become illegitimate collusion.

By and large, the headlines (of which this is just a small selection) seemed to get more dramatic Continue reading

WHYs before the event

I have long suggested that students will more readily understand the conventions of citing and referencing if they understand WHY we do it, WHY they are asked, expected and required to do it.  HOW to do it is necessary, but knowing WHY we do it gives purpose, and can even make it fun.

When I “crowd-source” the reasons WHY we cite and reference, in classrooms and in workshops, the group usually comes up with the main reasons between them. That is good. But there is no guarantee that any one individual in the room appreciates all of those reasons – as evidenced perhaps by my questioner in Qatar, a story I relate in Lighten the load, “Is referencing taken as seriously at university as it is in this school?”

Trouble is, for many students, the notions of building on what has gone before, showing the trail which has led to our present thinking, or contributing to an academic conversation are just too abstract to appreciate. This is so even at university level, as suggested by Continue reading

It takes time

One of the basic tenets of this blog is that we do students a disservice when we give them the impression that the main purpose of citing and referencing is to “avoid plagiarism.”

The way I see it, "avoiding plagiarism" is at best a by-product of citation and referencing. It is a long way from being the main or the only reason for the practice. It makes for angst ("what if I get it wrong?") and it leads to confusion. Because of the nit-picking demands of getting one's references absolutely perfect, it can lead to boredom. It leads to taking short-cuts, to avoidance of using other people's work in support of one's own ideas and statements, and to a loss of the writer's own voice and ideas.

At the same time, as demonstrated by repeated uses of Jude Carroll's Where do you draw the line? exercise, there are wide differences in what different teachers class as plagiarism. This serves further to confuse, as when a student whose work has long been accepted finds that her standard practice is suddenly condemned Continue reading

APA mythtakes

We don’t take note of non-coincidences, do we? It’s different when two similar events happen close to each other. Wow! we say, what are the chances of that happening twice in the same day? Coincidences stick in the mind, single events do not stick so readily. (This one stuck so solidly that it pushed me into blogging again.)

A recent EasyBib blog post was one half of such a coincidence. Michele Kirschenbaum's blog post Video Lesson: In-Text Citations upset me on two counts. Although it was published on 29 September 2017, my Google Alert did not pick it up until last week.

Count #1: the video gives the impression that in-text citations and parenthetical citations are one and the same

This impression is confirmed in the text of the blog where we read “We think this video, which provides an introduction to in-text, or parenthetical citations, is a great addition to your classroom resources.”

Me, I don’t think it such a great addition, not least because parenthetical citations are one kind of in-text citation, but not the only kind.

Other kinds are available, not least when the citation starts the sentence Continue reading
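
A quick illustration of the difference, with an invented author and page number: the parenthetical citation sits in brackets, usually at the end of the sentence, while the narrative citation names the author as part of the sentence itself:

    Parenthetical: Citation practices vary widely between disciplines (Jones 42).
    Narrative: Jones notes that citation practices vary widely between disciplines (42).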