Not just CRAAP – 3


[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests that a fresh look at the way we evaluate web pages and sites could be valuable.]
[In part 2, we looked at a rebuttal of Wineburg and McGrew’s study – and rebutted the rebuttal.]
[In this third part, we look at reasons why we may need a compromise between the “old” and the “new” ways of evaluating pages and sites online.]

In my last two posts, I discussed a study by Sam Wineburg and Sarah McGrew into different methods of search-and-find as employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods used and the thinking processes of the historians and the students were different to the strategies and the thinking processes of the fact-checkers – and that the methods used by these historians and students could be among the reasons why many of them made incomplete analyses of the sites visited and drew flawed conclusions.

In one particular task, a comparison and evaluation of two articles both of which dealt with bullying, the researchers found that historians and students tended to spend much time considering the actual articles; some never left the target sites, while others eventually left them to look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Historians and students tended to choose the College site as the more reliable and trustworthy, and the Academy site as the less; the fact-checkers concluded the exact opposite.

Wineburg and McGrew went on to ask if students have learned to over-rely on evaluation checklists – and if such reliance lets them down. In practice, in real life, we rarely check the veracity or credibility of the sites we come across, a point which the researchers acknowledge (p. 44). When we do, we often use the short-cut tools we learn at school (the short-cut tools we teach, don't forget): checklists such as the CRAP test and the CRAAP test, Kathy Schrock's Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, and the ABCDs of Evaluating Sources. Many more checklists are available. These tools can work well with students starting to learn how to search and research. A checklist will help students grasp routine ways of looking at sites; the rules of thumb help them. But, say Wineburg and McGrew, these checklists tend to teach us to look only at the page and at the site; they teach us to make quick decisions based mainly or solely on the look, feel and content of the site.



CRAP                    CARS plus        Schrock (Secondary)            ACCORD        ABCD

Currency                Credibility      Technical and visual aspects   Agenda        Author
Reliability             Accuracy        Content                        Credentials   Bias
Authority               Reasonableness  Authority                      Citations     Content
Purpose/point of view   Support         Design & style                 Oversight     Date
                                        Electronic sources             Relevance
                                                                       Date


Typically, for each of the headings, it is suggested that students ask themselves a number of questions about the page or site they are looking at.  It may well be that these sub-questions do not delve deeply enough – or it might be that as students learn the acronyms, they forget or dismiss some of the questions: they learn the label but not the underlying issues behind the questions.

The factors in these checklists are all worthy of consideration, to greater or lesser extent – but the questions to consider under the various headings need tweaking, wider consideration and understanding, as demonstrated in practice by the fact-checkers in the Reading Laterally study.

The sub-questions under currency or date, for instance, often ask how up-to-date the page is or when it was last revised.  Up-to-date-ness might be important when considering papers and articles in the natural and the human sciences but it may be less important in literature or the arts;  an older or contemporary document, paper or article can be of inestimable value, even in the sciences, if one is taking a historical approach. So much depends on purpose, not the purpose of the authors or publishers of the paper but the purpose of the researcher or writer.

It is worth mentioning here that the purpose of the authors or publishers may not be obvious. Writers and organisations on the fringes may not declare their extremism, or it may be hidden in carefully-couched, coded writing (as may be the case with the ACPeds group), not obvious to someone with little knowledge of the topic. In some cases, the intent may truly be to deceive, making claims which are unwarranted or are downright lies. Not all news we disagree with is fake news, but there is a lot of fake news about. We do need to get off the site and see what other people say about the page or the site or the author or organisation to determine purpose and credibility.

Similarly, one needs to know a lot about a subject and the literature of the subject to decide whether the content is accurate; we need to know the main writers in a field to determine whether the sources used are reliable – indeed, whether we can rely on the quotations and the ideas attributed to named sources. We need to follow up references to see if they are accurately recorded (or perhaps taken out of context), to see also the regard in which those source papers and authors are held, and whether there is controversy or contradictory opinion regarding the sources used. The look we took at the supposedly "research-referenced" ACPeds statement on bullying demonstrates how thorough an investigation might be needed.

This very much ties in with the need to check for authority, going beyond whether the author has the qualifications claimed to consider the professional esteem in which that writer is held. The publisher and the website should be considerations too. It matters not how academic the paper seems, how good the sources used are, how useful the content: papers published in predatory journals may be held in less esteem than those in flagship journals; papers in journals which have dodgy peer-review policies should be suspect; and so on. It's a matter of credibility, a matter of authority. As the Information Literacy Framework has it, Authority Is Constructed and Contextual. There are a number of factors to be considered, and again we must step off the page and off the site to see more clearly.

Checklists which engage only with the page or the site are of limited value. Alas, many seem to do just this, thinking only about the page and the site. We often expose children to hoax sites (Dog Island, Tree Octopus, Feline Reactions to Bearded Men, and so on) as a means of engaging them and demonstrating how easy it is to be fooled. [These hoax sites have been around for many years, and they are still popular; they were all recommendations made in a recent post in the Facebook group Int'l School Library Connection.]

But as children grow, we need also to use real sites, including those with possibly malicious and dangerous intent, to educate as to the signs to watch for, the coded language, how really to evaluate the sites we find.  We need to give children the tools of awareness.

It is not that the checklists don’t work.  The checklists do still work – but the questions must go deeper as students become more aware and more mature. We need to ask questions which encourage lateral reading, extending the checklists.

It all takes time, of course.  I can’t help wondering if the fact-checkers in the Wineburg and McGrew study look laterally at everything they find online, at least if they have no previous experience or knowledge of the site they find themselves using.  Do they look laterally at everything, or just when they think it’s important, as I tend to do (see Part One of this article)?  Tended to do.

It seems to me that, in many respects, social media is impelled by instant reaction and does not encourage deep thinking; this is one reason why fake news proliferates.  True or fake, news which tickles your fancy is liked, is passed on, re-tweeted, without thinking too long or too deeply.  Never let the truth get in the way of a good story (as they are said to say in the tabloid newspaper world).  Just look at some of the comments on stories in The Onion from those who missed the satire and took them as factual reports.  Try a simple search for [“the onion” taken seriously]!

It might even be a different part of the brain which engages in rapid shallow thinking as against deep and considered thinking. Trivia and fun tidbits reach parts that other information does not reach (as it were).  Indeed, psychology plays a huge role given the issues of confirmation bias (the notion that we tend to accept that which agrees with our biases and that which concurs with what we already think we know) and possible tendencies to disregard or reject anything which runs contrary to our biases or which does not support what we know.

The CHE item which, back in Part 1 of this article, took me to the study has the title One way to fight fake news.  Frankly I doubt whether the findings of the study would or could be used, in practice, for this purpose – fighting fake news.  Indeed, Wineburg and McGrew accept this; they make the point that we just do not have time to fact-check everything.

The sad truth is, you have to care to read closely and to think. It has to be important.  This is something else for us to think about.  One of Wineburg’s main points is that we can save a great amount of time if we check for authority first, if we look for supporting opinion, if we know more about the messenger.  Then we can turn to the actual content, and then the checklists come into their own.  The checklists are NOT redundant, but we do need to use them more carefully.

[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests a fresh look at the way we evaluate web pages and sites.]
[In part 2 of this 3 part article we looked at the ACPeds rebuttal of Wineburg and McGrew’s study.]

Not just CRAAP – 2


In part 1 of this three-part article, I discussed a study by Sam Wineburg and Sarah McGrew into different methods of search-and-find as employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods used and the thinking processes of the historians and the students were different to the strategies and the thinking processes of the fact-checkers – and that the methods used by these historians and students could be among the reasons why many of them made incomplete analyses of the sites visited and drew flawed conclusions.

The three groups were asked to complete six tasks in timed conditions. The findings and ensuing discussion are detailed in the paper Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.

In this earlier post (Not just CRAAP – 1), I invited readers to try one of the tasks for themselves. If you haven’t already done this, it might be a good idea to try before reading on here.

The task asked participants to imagine they were looking for information on bullying, and to describe their thought processes as they considered two particular articles on two different websites. The articles were Bullying at School: Never Acceptable on the site of the American College of Pediatricians (ACPeds – the College) and Stigma: At the Root of Ostracism and Bullying on the site of the American Academy of Pediatrics (AAP – the Academy).

Participants were allowed to look elsewhere on the sites and anywhere else online that they wished.  They had to decide which website was the more reliable and trustworthy.

What the researchers found was that historians and students tended to spend much time looking at the actual articles; some stayed on the target sites, some left them to look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Historians and students tended to choose the College site as the more reliable and trustworthy, and the Academy site as the less; the fact-checkers concluded the exact opposite.

In this and in the five other tasks, the fact-checkers were much quicker at making decisions and finding corroborating information in reliable sources to support their thinking.  Historians and students were slower, and many did not try to verify information found. Of those who did try to verify what they found elsewhere, many accepted the opinions of any source rather than looking further for corroboration from more-reliable sources.

One more pair of reminders, and then we get to the meat of this present blog-post:

As discussed earlier, the College is a small fringe organisation of pediatricians with a conservative agenda.

The Academy is the world’s largest organisation of pediatricians; it is long-established, much respected, and is the publisher of Pediatrics, the flagship research journal of the profession.

Biased research?

Soon after the Wineburg-McGrew study was published as a Working Paper of the Stanford University History Education Group, Den Trumbull MD, the author of the original College paper Bullying at School: Never Acceptable, wrote and posted Commentary on a Stanford University Study: Criticizing University Students and Doctorate Historians.

It is highly critical of the Wineburg-McGrew study, accusing the researchers of bias and false reasoning. Inter alia, he made the points that

  • their paper, Lateral Reading, was not peer-reviewed;
  • the fact-checkers were prejudiced because they were "influenced by non-objective sources" which shaped their opinions before ever they read the actual article;
  • his paper – which he describes as the College's position statement on bullying – is "defended by referenced research;"
  • the College has been "maligned" by opponents, leading the fact-checkers to prejudge the College statement on bullying, even when that statement is "irrefutable."

Trumbull declares that the Wineburg-McGrew study is totally flawed, totally biased. The fact-checkers, he says, were “opinion-checkers,” not fact-checkers. Moreover, he declares,  “facts are not a matter of opinion or popularity.”

He goes on to say:

True fact-checking would involve scrutinizing the text and the references that support the text. That’s what the students and historians mostly did. The fact-checkers were more likely to have been influenced by the all-too-common ad hominem attacks found on the Internet, and perhaps persuaded by the views of their professional associations with “news and political organizations.”

It comes down to this: do we best consider the authority, reliability and usefulness of information found by considering only that information itself, or should context, reputation and other factors come into consideration too?

In part 1 of this article, I declared that I was not impressed by either the College paper or the Academy article; I said that I would probably not use either if I were looking for information on bullying. I noted some of the flaws and shortcomings I had found in Trumbull's article, replicating as best I could, and in timed conditions, the task as posed in Wineburg and McGrew's study. Dr Trumbull's counter-blast to the Wineburg-McGrew study made me take a second and closer look.

A more careful look at the College paper shows that this is NOT an article or academic paper. I missed this first time through, and I suspect the fact-checkers did as well, so quick were they to leave the site without looking at the content of the page.  It is an opinion piece, right from the start. The abstract reads:

No child should be harassed for his or her unique characteristics. Schools should encourage an environment of respectful self-expression for all students, and no group should be singled out for special treatment.  Parental involvement should be a school’s primary method of resolution with programs emphasizing general respectfulness serving to set the tone in the classrooms.

First time through, I missed the full significance of the repeated use of “should.”   Though the first two sentences seem sound enough,  I did pause a moment over the final sentence.  It took a second reading, after reading the full paper, to appreciate that “should.”    Trumbull’s paper lacks an argument.  That “should” is not there pointing to the conclusion of the paper. It is establishing the conclusion as a given.  There is no attempt to establish the “should-ness” of the position, no background context or review of the literature.  There is no mention of contrary positions.

Of course, the “should-ness” is fully in line with the College’s lead core value as listed on the About Us page:

The American College of Pediatricians

  1. Recognizes that there are absolutes and scientific truths that transcend relative social considerations of the day.

This is not an academic paper, as I had first thought; it is a position paper. It says so, in the second navigation bar from the top. I missed that, doing the test.

I would probably have been aware if I had followed links and menus on the site to reach this page, as in a "normal" piece of research; the test in the study is "un-normal," in that it takes us to the page directly, and it is all too easy to miss the clue in the navigation bar. On the other hand, if a search-engine search had brought me directly to this page, I would have missed that clue too.

The College is not trying to hide anything – though I do wonder whether it is standard practice for a position paper to be set out like an academic paper. Again, given the speed with which they left the page, I wonder if the fact-checkers also missed this clue. Come to that, the historians and the students tended to spend much time on this page – but now I wonder how many of them spotted this? (There is no mention of this in Wineburg and McGrew's paper.)

Despite its “feel,” most notably the abstract and the list of references, this is not an academic paper.

Before reading the page itself, I spent much time thinking about the footnote signals and the references. I quickly saw that one reference was to a dictionary definition and two were to blog posts – or is this a single blog post? Though the authors are different, the title and the URL are the same for both.

Five of the ten references are to different papers in the journal Pediatrics, published by the highly-respected Academy. That looks good. But the citations in the text gave me pause; there is so little real content there. They could come from any source. I noted in particular footnoted item #3. The second section in the paper is headed Forms of Bullying and provides five bullet-pointed forms of bullying; the next section, Target Characteristics of Bullies, presents eight bullet-pointed characteristics. Most of these thirteen bullet-pointed statements strike me as common knowledge; if I were asked to come up with a list of forms or characteristics of bullying, I think I would come up with a similar list – and I know very little about bullying. I thought it odd that, of the thirteen items listed in these two sections, just one is attributed: "Physical inabilities and disabilities (3)."

Could Dr Trumbull find a source only for this one item but not for any of the others?  Or  could he not be bothered to find references for the other items?

I was similarly struck by the next three citations (#4, 5 and 6). These all seemed fairly basic notions about bullying, so basic that any introductory article might list them. Did we really need three separate citations from three separate academic papers to make these points?  Students sometimes use this ploy to suggest wide reading, don’t they?

Citations #9 and 10 are striking. Trumbull provides two quotations with different superscript numbers and different named speakers. The two citations lead to references which each name the same blog post but different authors. There is just one blog post from which these quotations are taken, Expert says media dangerously ignore mental illness in coverage of gay teen suicides. It was written by Liz Goodwin, not by Haas and not by Bayard. What Trumbull has done is to take Goodwin's paraphrases of what Haas and Bayard said, present them as direct quotations, and then attribute the same blog post to each of them in turn. Liz Goodwin, the actual author of these words, gets not a mention.

It’s a petty point, but if Trumbull is going to accuse Wineburg and McGrew of a lack of scholarly rigor, then he needs to be a tad more rigorous himself.

Perhaps the only use of source material which is both accurate and telling is citation/reference #7. The reference shows that these thoughts and the quotation come from a paper in the Journal of Criminology.

Once again, it might seem petty to point out (1) that Trumbull fails to name the authors of this paper (who are shown very clearly on the Journal of Criminology website, at the DOI as recorded), or (2) that the quotation he uses appears on page 8 of the paper and not pages 7 and 8 as detailed in his reference. And, because his is not an academic paper which might present other viewpoints (the better to refute them), Trumbull finds no need to mention comments elsewhere on this finding, including the notions that

  1. schools which introduce anti-bullying programs early tend to have more success in preventing bullying than schools which start late (What Makes Anti-bullying Programs Effective?, reported in Psychology Today);
  2. schools which introduce anti-bullying programs often do so because they already have problems – possibly caused by failure to start the programs early (see (1)) (a Huffington Post blog post, Are Anti-Bullying Programs Counterproductive?, includes a number of criticisms of the study);
  3. schools which introduce anti-bullying programs often do so because they have problems (see the Huffington Post article) – and so their students "are more likely to have experienced peer victimization, compared to those attending schools without bullying prevention" (the key finding of Seokjin Jeong and Byung Hyun Lee's paper);
  4. there are many studies and meta-studies which conclude that anti-bullying programs in schools do decrease (though not eliminate) bullying (for instance, Systematic reviews of the effectiveness of developmental prevention programs in reducing delinquency, aggression, and bullying by David Farrington and others).

There is evidence both ways and somewhere in-between too.  But again, if you believe your position is “irrefutable,” you don’t recognise possibly contrary views, do you?  If they exist, they are wrong, aren’t they?

Altogether, although supposedly "defended by referenced research," in academic and argumentative terms Trumbull's defences are weak.

As I said, I doubt that I would use this position statement in a presentation on bullying. Writing this article, though, I realise that I might use it in an academic paper as an example of alternative and possibly fringe views (and go on to suggest why the College position is suspect).

As noted, I doubt that I would use the Academy page in my paper on bullying either. It is light on substance. On the other hand, it is not meant to convey anything weighty. It is a session description for an upcoming two-hour symposium to be held during a general meeting of four pediatric organisations.  The header of the article reads:

Experts in bullying and children’s mental health gather at the Pediatric Academic Societies meeting to describe new research and what it means for children’s mental health.

If I were writing a paper on bullying, this article could be useful as a starting-point for further research. Six presentations are mentioned, so I might search for the titles of any presentations which interest me, and I might use the names of the presenters as possible experts to follow up. I might also note that the American Academy of Pediatrics is one of the four organisations sponsoring the Annual Meeting; the American College of Pediatricians is not. I wonder if anyone from ACPeds attended the meeting or the symposium?

Another angle

One of Trumbull’s complaints about the Wineburg-McGrew study is that

the College has been “maligned” by opponents leading to the fact-checkers prejudging the College statement on bullying, even when that statement is “irrefutable.”

I am reminded that two of the frames of the ACRL Framework for Information Literacy for Higher Education are

Scholarship as Conversation
Authority Is Constructed and Contextual

I do wonder how conversation can take place when the claim is made that the College’s views are “irrefutable” and also that there are “absolutes and scientific truths that transcend relative social considerations of the day.” This is claiming absolute authority, no room to question or to argue.

That contravenes the other frame I mention here: authority is not absolute. It is earned and it is situational.  We educate our students to appreciate (we hope) that information is not all equal, even when two writers say the same thing. We look for reliability and we look for authority – and while authority is bestowed, in part, by who the writer is, it also depends on the views of others, whether through reputation, qualification or deeds.  You are not trustworthy because you say you are. You have to show you are trustworthy, you have to earn trust.  We do well to look closely at the messenger as well as at the message.   If the messenger is untrustworthy, can we trust the message?

Space, time and your patience, gentle reader, suggest I bring this post to a close. It could be, though, that each of the other four frames of information literacy may have some relevance in this investigation.  I must think further, maybe another post.

More immediately, I want to address evaluation checklists. One of Wineburg's points is that the tools we give students may not always serve them well. They did not serve the students well in this study, in the tasks that he and McGrew set. They did not serve the historians too well either.

We’ll look more carefully at CRAAP and other evaluation tools in the next post.

[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests a fresh look at the way we evaluate web pages and sites.]
[In part 3 we look at the checklist approach to evaluation, and suggest that we don’t need to get rid of our CRAAP tests, we need to enhance them.]

Not just CRAAP – 1


Over the weekend, a newsletter item in the Chronicle of Higher Education caught my attention, One way to fight fake news by Dan Berrett and Beckie Supiano.  It was originally published in November 2017;  I’ve got behind in my reading.

The item reports on a study by Sam Wineburg and Sarah McGrew. Wineburg and McGrew compared the search habits and evaluation techniques of three different groups: professional historians, professional fact-checkers, and students at Stanford University. They found that:

  • the historians and the students mostly used very different techniques of search and evaluation to the techniques of the fact-checkers;
  • the historians and the students could not always find the information they were asked to search for;
  • the historians and the students took longer to decide on the validity and reliability of the sites they were asked to look at;
  • most disturbingly, the historians and the students came by-and-large to diametrically opposite conclusions to those of the fact-checkers as to the validity and reliability of the various sites; the two groups could not both be right.

Before reading further, you might want to try an approximation of one of the tasks undertaken by the participants (there were six tasks in all, in timed conditions).

You are asked to look at two specific web pages. They both deal with bullying. You have 10 minutes maximum, using the search and evaluation techniques you normally use, to evaluate how trustworthy each site is as a source of information about bullying. If you were writing an essay on bullying, is there one of the papers you would prefer to use rather than the other? Could you use both? How confident are you about your judgement? On a scale of 1 (not confident) to 5 (extremely confident), where would you place each of these articles?

You will probably find it helpful to open the two pages in different tabs; this makes comparison easier.

You do not have to stay on these pages. You can look elsewhere on the sites and you can go off-site too, anywhere you want.  Try as you go along to explain your thought processes and your reasoning, as if to a group of students.

Right then, before you start, have these two pages open in different tabs, this page Bullying at School: Never Acceptable and then this page Stigma: At the Root of Ostracism and Bullying.

Go to the first tab, Bullying at School: Never Acceptable and start comparing.

Your time starts NOW!

How did you do? How did you rate the reliability of each of these two articles?  How do you rate the two sites?

In the study, Wineburg and McGrew asked the three groups (fact-checkers, historians and students) to complete six tasks in timed conditions. This task, the comparison of the pages on bullying, is the first of the three tasks detailed in their paper, Lateral Reading: Reading Less and Learning More When Evaluating Digital Information. The authors state that the other three tasks were similar and they yielded similar findings; they were omitted from the published paper for lack of space.

Participants were asked to think aloud as they performed; their statements and their actions were recorded in audio and video.  The researchers would give prompts and scripted hints if the participants went silent or struggled with the task.

As noted, the differences in approaches and the conclusions drawn in response to the tasks are remarkable.

The fact-checkers consistently completed the various tasks quickly and verified their answers in more-reliable sources. That's what fact-checkers are trained to do, and this is what they did, without having to be told or reminded. Their comments and their views of reliability and authority were consistent and unanimous, unlike those of the other participants. The historians and the students, however, were slower. They spent more time than the fact-checkers reading the target pages, and they spent more time on the target sites. Some never left the target site. If they verified what they found, they often settled for the first opinion they came across and did not necessarily look for authoritative opinions.

The issue is not that the approaches of the various groups are different or that one group was quicker than the others in reaching conclusions. It would not be an issue if, despite the different approaches, the methods led to the same conclusions. As noted earlier, the real problem is that the approaches standardly used by the historians and the students led them to very different conclusions regarding reliability and authority. These participants decided that the unreliable sites were the more reliable, and they could not detect the biases and the agendas of the fringe sites. They searched differently and they missed vital clues. Their standard evaluation techniques let them down.

If you rely on unreliable sources, can you rely on the message/s?  This might well be a factor in the spreading of fake news, a thought I shall come back to later.

It is worth noting at this point that one of the reasons Wineburg and McGrew chose historians as their second sample group is that historians work with source materials as a matter of course; interrogating source material is what historians do and what they are trained to do (and it is what most of the historians in this study failed to do properly).

The first of the bullying articles is posted on the site of the American College of Pediatricians (ACPeds).  The website is the public face of an extremely conservative fringe group which appears to have just a few hundred members. The ACPeds has an anti-LGBT slant and is anti-abortion, as stated on its About Us page, though it may take careful reading to discern this.  The College believes in ethical absolutes and absolute truths. These and other facts about the group were quickly found by the fact-checkers.  None of this means that the organisation or its members are necessarily wrong to hold these beliefs – but it does help us to recognise possible biases, not least when there is a suggestion not only that members of the College hold these “values” but that everyone else should support their beliefs too.

The other bullying article is on the site of an association held in high esteem in the profession.  The American Academy of Pediatrics (AAP) was founded more than 80 years ago and has more than 65,000 members; it is the largest association of pediatricians in the world.  It publishes Pediatrics, the flagship journal of the community of US pediatricians – and one which, incidentally, provided several items of information used in the paper on the ACPeds site.

It might be helpful in this discussion to think of the two associations as the College (ACPeds) and the Academy (AAP).

Why did the historians and the students go so wrong, not just on this task but on all of the tasks set by Wineburg and McGrew? What do the fact-checkers do that most historians and students did not do?

The approach used by the students and the historians was to look at, and in many cases to read in full, the pages to which they were directed in the task. They considered the look of the pages, the feel of the pages. The College article has an abstract and a list of references, and this impressed almost all the students and many of the historians.  Because the paper is structured like an academic paper, some in these two groups assumed that it had been peer reviewed and published in a journal.  For some, the MD after the author’s name confirmed expertise, although few checked to see whether he was indeed a medical doctor.  The lack of advertising impressed, as did the .org domain suffix.  They considered the page and the site.  In Wineburg and McGrew’s terms, most historians and students read vertically.

It took students and historians many minutes of task time before they left the target pages and looked elsewhere for other information, verification or corroboration. That is, if they looked elsewhere at all.  Even a simple Google search should have sounded alarm bells:

Hit #1 is for the College itself, followed immediately by sites which use such terms as “fringe,” “anti-LGBT,” “wingnut collection of pediatricians,” “described as a hate group,” and so on.  The warning signs are there – but few students went even this far; nearly 30% of them never even left the College site.

It is not surprising, then, that two-thirds of these students considered the College site the more reliable. Only 20% of the students opted for the Academy (the remaining students thought the two sites equally reliable).  The historians did a little better: only one opted for the College site as the more reliable, though another 40% thought the two sites equally worthy.  Only half the historians thought the Academy the more reliable.

The fact-checkers on the other hand very quickly and unanimously decided that the College site was dubious, that the Academy site was the more trustworthy.

The fact-checkers spent minimal time, sometimes just seconds, on the actual target pages or sites. Their instinct was to find out what others said about the organisation or the authors before checking for themselves. They opened new tabs and searched for independent mentions of the sites. They were not immediately concerned with the target pages themselves; they wanted to discover the biases of the sites and of the groups behind them. They sought corroboration for what they learned, and they sought it in reliable sources.  They read laterally.

This is the big difference in technique: most historians and students looked at the content itself; the fact-checkers looked for the authority behind the content.

It all sounds counter-intuitive, but if reading laterally works, so much the better. If reading laterally works and reading vertically leads to flawed thinking, then we may need to rethink how we teach evaluation; we may need to rethink our own practices.

Whether one agrees with the biases or not, it is important to know what they are, the better to come to informed conclusions about the topics you read and research. You probably won’t discover these biases by looking at sources promoted by or referenced from your readings. You discover more about target sites by looking at what others say about them and the organisations behind them; look for independent reviews and opinions.

My own search and find habits

I tried this and the other two tasks before I started reading the full academic paper.  I was perhaps primed by the article in the Chronicle of Higher Education but I did try to put what I had read out of my mind and search and evaluate as I would normally.

I have to admit that my methods are closer to those of the historians and the students, in that I did not leave the target articles (in the first two tasks) within seconds of opening them on my screen.  Instead, I glanced through each of the pages.  On the other hand, I did not read them closely and I did not stay long on them.  When I left the target pages, I skimmed quickly through the home pages and the About Us pages. Only then did I look for information about the organisations elsewhere.

I heard the alarm bells as I surveyed the article on the College site. Despite its seemingly academic nature (notably the abstract and the list of references), I saw that many of the references were less than academic (Yahoo News, blog posts, a dictionary).  I did not read the page at this stage.   One right-click later, I was on the About Us page: it took just one quick glance at the Core Values and the alarm signals were flashing.

I would also point out that, although I have discerned some of the weaknesses of the paper on the College website, I am not impressed by the article on the Academy website either. It is advance publicity for an upcoming symposium. There are two quotations on the page, duly attributed but neither providing any means of following up.  However accurate the piece, however authoritative the site, in academic terms the page is of limited value. If I were writing an essay or paper on bullying, I would not use either page.

On all three tests, my search and verification methods fell somewhere between everyday practice and the gold standard of the professional fact-checkers.  My technique is perhaps a little nearer the fact-checkers, but I have some way to go. This, I think, may well change.

I comfort myself with the thought that, when it matters, I do double-check, and I double-check off the site. I do this when considering a purchase on a site that is new to me. I do this when looking at apps and extensions and other software, even when it is free. Especially when it is free.  I ignore the sites themselves (their own reviews are more likely to be favourable, aren’t they?); I look for independent reviews and comments.  Sometimes I look for “hate” sites (hate HP/ samsung/ apple/ tesco/ chrysler, etc.) as well as more favourable sites and comments.  I look for “troubleshooting” or “problems.”  The more the item costs, the more care I take – remembering that cost is not just about money; there is time and frustration and customer care and other factors too.

I am careful too when reading claims, especially those made by groups I already have suspicions about.  Well-referenced, academic-seeming papers are often promoted by those with vested interests in the products and services they sell: readers of this blog will know how unreliable the papers and articles pushed out by Turnitin and by EasyBib, among others, can be.

I take care with things which matter. Perhaps I should take more care with things which don’t really matter. Perhaps we all should.  If you did the initial task, comparing the two bullying articles, how did you do? Are you re-thinking your search and find techniques?

We do not, of course, have time to check thoroughly every site we come across, a point that Wineburg and McGrew acknowledge (p. 44).  But, they say, the bigger point is that even when we do check, the tools we train students to use let them down: the CRAP test and the CRAAP test, Kathy Schrock’s Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, the ABCDs of Evaluating Sources and all the other evaluation tools let them down. They teach us to look at the page and at the site; they teach us to make quick decisions based on the look and feel and content of the site.

Drop these tools, say Wineburg and McGrew, do what fact-checkers do:

When the Internet is characterized by polished web design, search engine optimization, and organizations vying to appear trustworthy, such guidelines create a false sense of security. In fact, relying on checklists could make students more vulnerable to scams, not less. Fact checkers succeeded on our tasks not because they followed the advice we give to students. They succeeded because they didn’t (pp. 44-45).

Me, I’m not so sure that we need to drop the evaluation checklists. I shall pursue this thought in my next but one blog post.

In the next post, though, I want to discuss the two bullying articles a little more, not least because ACPeds has posted a rebuttal to the Wineburg and McGrew study. Some of the points made in their rebuttal deserve consideration – and shooting down.  This exercise could serve further to hone our evaluation skills.

Watch this space.

[In part 2 of this 3 part article we look at the ACPeds rebuttal of Wineburg and McGrew’s study – and rebut the rebuttal.]

[In part 3 we look at the checklist approach to evaluation, and suggest that we don’t need to get rid of our CRAAP tests; we need to enhance them.]

Guilty by association

A month or so ago, an incident at Ohio State University made headlines. One or more students had posted information on business course assignments in a GroupMe study group.  The type of information shared violated the University’s code of student conduct.  As a consequence, more than 80 students – all members of the GroupMe group – were charged with cheating.

GroupMe is a free group messaging app, widely used to send messages and documents simultaneously to all members of a group. Members of educational GroupMe groups often use it to share due dates, study tips and readings. When collaboration is permitted, this kind of app can be a great boon to collaborative work. In this particular case, however, some users had overstepped the mark and had posted suggested answers to homework assignments. Legitimate collaboration had become illegitimate collusion.

By and large, the headlines (of which this is just a small selection) seemed to get more dramatic Continue reading

WHYs before the event

I have long suggested that students will more readily understand the conventions of citing and referencing if they understand WHY we do it, WHY they are asked, expected and required to do it.  HOW to do it is necessary, but knowing WHY we do it gives purpose, can even make it fun.

When I “crowd-source” the reasons WHY we cite and reference, in classrooms and in workshops, the group usually comes up with the main reasons between them. That is good. But there is no guarantee that any one individual in the room appreciates all of those reasons – as evidenced perhaps by my questioner in Qatar, a story I relate in Lighten the load, “Is referencing taken as seriously at university as it is in this school?”

Trouble is, for many students, the notions of building on what has gone before, showing the trail which has led to our present thinking or contributing to an academic conversation are just too abstract to appreciate. This is so, even at university level, as suggested by Continue reading

We value our libraries – shout it loud!

I mused on coincidences in my last post but one, APA mythtakes. Here’s another one!

Over lunch today, I read a piece in my library magazine, CILIP Update, a story about Bury Council in England. The Council had closed a public library, and some bright spark sent out a tweet, asking the community to advise on what could be done to “turn a former library into a valued community asset.”

And guess what the community replied?

If you’re not sure (I’m sure you are, really), try the Manchester Evening News item Bury council tweeted about making closed libraries into ‘valued assets’ and everyone said the same thing

Everyone saying the same thing, that’s not the coincidence. The coincidence is courtesy friend Christina who just an hour or so later sent me a link to a story in Huffington Post, ‘The Angriest Librarian’ Schools Columnist Over Anti-Library Tweets. This is one person’s response – multiple responses – to a New York journalist’s tweet suggesting “Nobody goes to libraries anymore. Close the public ones and put the books in schools.”

The Angriest Librarian wasn’t the only person who responded. Within hours, more than 110,000 people had responded. Andre Walker, the journalist, had to admit that libraries weren’t as unpopular as he had thought.

We value our libraries – shout it loud!


It takes time

One of the basic tenets of this blog is that we do students a disservice when we give them the impression that the main purpose of citing and referencing is to “avoid plagiarism.”

The way I see it, “avoiding plagiarism” is at best a by-product of citation and referencing. It is a long way from being the main or the only reason for the practice. It makes for angst (“what if I get it wrong?”) and it leads to confusion. Because of the nit-picking demands of getting one’s references absolutely perfect, it can lead to boredom. It leads to taking short-cuts, to avoidance of using other people’s work in support of one’s own ideas and statements, to a loss of the writer’s own voice and ideas.

At the same time, as demonstrated by repeated uses of Jude Carroll’s Where do you draw the line? exercise, there are wide differences between what different teachers class as plagiarism. This serves further to confuse, as when a student who has had work long accepted finds her standard practice is suddenly condemned Continue reading

APA mythtakes

We don’t take note of non-coincidences, do we? It’s different when two similar events happen close to each other. Wow! we say, what are the chances of that happening twice in the same day? Coincidences stick in the mind; single events do not stick so readily. (This one stuck so solidly that it pushed me into blogging again.)

A recent EasyBib blog post was one half of such a coincidence. Michele Kirschenbaum’s blog post Video Lesson: In-Text Citations had upset me on two counts. Although it was published on 29 September 2017, my Google Alert did not pick it up until last week.

Count #1: the video gives the impression that in-text citations and parenthetical citations are one and the same

This impression is confirmed in the text of the blog where we read “We think this video, which provides an introduction to in-text, or parenthetical citations, is a great addition to your classroom resources.”

Me, I don’t think it such a great addition, not least because parenthetical citations are one kind of in-text citation, but not the only kind.

Other kinds are available, not least when the citation starts the sentence Continue reading

By any other brand-name, not so sweet?

Something is afoot in the world of reference generators. The American company Chegg, which claims to be  “all about removing the obstacles that stand in the way of the education YOU want and deserve” [Chegg: What we’re about], seems to be buying up service after service.

They already own CitationMachine,  BibMe, EasyBib, and CiteThisForMe. None of them is particularly good at what it claims to do, and (in their free versions and since being taken over by Chegg) they are bedevilled by splash and flash advertising (as with Citation Machine, illustrated on the right).

Several of my earlier posts point directly or indirectly to shortcomings in these services.  Their auto-citation generators leave much to be desired. They also leave much to be edited or added after the reference is auto-generated. A common plaint is that students don’t do this – they unthinkingly and uncritically accept auto-generated output no matter how many errors or omissions.  Alas, the manual form-filling modes are often not much better. Too often Continue reading

A gift that kept on giving…

Regular readers will know my opinion of the (so-called) Harvard referencing, but in case you don’t, it is low. (If you don’t, then see, as instance, the three-part post which starts at Harvard on my mind 1.)

So there was some delight and much sinking feeling when my daily GoogleAlert for [plagiarism] today brought up the hit How to Reference Your Sources Using Harvard Referencing.

The first line or so of the alerted post, by someone signing in as techfeatured, reads:

An article in the Sunday Times (Jones, 2006) claims that up to 10% of all degree level submissions commit some form of plagiarism – the act of …

It wasn’t just the mention of Harvard that set the alarm bells ringing and the red flags flying. It was the statistic itself, that 10%, and the ten-year-old source. Surely there is more recent research; surely the rate is higher? What is meant by “degree level submissions”?

Today (as I start drafting this post) is Christmas Day Continue reading

Of honesty and integrity

One of my favourite classroom and workshop activities is a “Do I need to cite this?” quiz. Those taking the test are presented with a number of situations and asked to choose between “Cite the source/s” and “No need to cite the source/s.” *

I like to do this using Survey Monkey – other polling applications will do just as well. It means that I can home in on any situation in which there is divided opinion, or which many respondents are getting wrong. There is no need to go through each situation one at a time if there are just two or three situations which need to be discussed.

Much of the time, the answers are clear: the situation is academic (a piece of work submitted for assessment) so should demand academic honesty, and most students and other participants get it right.

Some of the situations are less clear and lend themselves to discussion, considerations of common knowledge, learned expertise, copyright, credibility and reputation, honesty (as against academic honesty) and integrity.

One situation, for instance, presents Continue reading

Knowing how to write is not knowing how to write

A month or two ago, but within the space of two weeks, three very different, very similar, situations:

Situation 1 : a student in a school in Asia wrote a comment on an earlier blog post, How Much Plagiarism?  asking for advice. She had misunderstood the instructions; she “forgot to include in-text citations” in the draft of her IB extended essay. All her citations were at the end of the essay. There was no intention to plagiarise.  Since this was a draft, the IB is not involved;  there was still the opportunity to put things right. But she was worried about her school’s reaction which could include note of her transgression on future university recommendations. Her question was, is this excessive?

Situation 2 : an inquiry on an OCC forum: it was the school’s deadline day for submitting final copies of extended essays.  One student, known for his dilatory habits, managed to submit his essay on time. Reading through before authenticating it, the supervisor realised that in the first half of the essay the student had included footnote references for each superscript number in the text. Then the student seemed to have run out of time or stamina, for in the second half of the essay the superscript numbers were there but with no footnoted references to support them. Would it be ethical Continue reading

A footnote on footnotes

Just as a footnote to my last post, Yes and No – footnotes (in MLA8), there is now a post in the MLA’s Style Center which addresses this very question. I don’t think this page was there at the time I wrote my post, but I won’t swear to that.

The question asked was Are notes compatible with MLA style? – and the answer was much as I suggested: in the absence of specific guidance, follow the suggestions made in the previous edition, MLA7: you can use footnotes (or endnotes) to “offer the reader comment, explanation, or information that the text can’t accommodate,”  and you can make bibliographic footnotes in limited circumstances: “bibliographic notes are best used only when you need to cite several sources or make evaluative comments on your sources.” Footnotes are the exception, not the rule, not if you want to abide by strict MLA style.

[For the purpose of IB assessments, possibly other exam boards too, you should note that the first use is heavily discouraged: such footnotes MAY NOT be read but WILL count towards the word count.]


Readers might want to know that the latest edition, the MLA Handbook, 8th edition, is now available in Kindle format.


Yes and No – footnotes

A question that comes up regularly in the forums is, “We use MLA; can we use this style with footnotes?”

I think there are two answers to this. The first is “No, you can’t.” The second is, “Yes you can.”

Before I explain my thinking, I will just add that the reason most frequently given for wanting to use MLA and footnotes is “the word count.” If the citation is in a footnote and footnotes aren’t counted in the word count, then the rationale is that using footnotes will save words. This could be crucial in, for instance, an IB Extended Essay.

Q:  Can we use MLA style and footnotes?
A:  No, you can’t.

MLA, the student-level style guide of the Modern Language Association as published in the MLA Handbook, recommended the use of footnotes in the 1st edition, published in 1977;  in the 2nd edition, published in 1984, MLA stated a preference for citation in the text. (This piece of history is gleaned from page xi of the 8th edition, published in 2016.)

The 6th edition (2003) noted that some disciplines using MLA still used “endnotes or footnotes to document sources,” and gave a few examples in an appendix (298 ff). The only recommendation regarding footnotes in the 7th edition (2009) was that Continue reading

Smoke and mirrors

“Technological solutionism” – a term coined by Evgeny Morozov – offers us solutions to problems we often do not know we have. Some might feel that it sometimes creates new problems, too often without solving the problems it is designed to solve. So often and too often, it fails to do what it says on the tin.

On the other hand, technological solutionism can make big money for the companies behind the so-called solutions. It can blind us to other, often more workable, often less expensive and more low-tech strategies, approaches and solutions.  Worse still, it can divert attention from the real problems, including the situations which might cause the problems in the first place.

I have blogged before about technological solutions which promise far more than they deliver. Turnitin and EasyBib are the ones which come most readily to mind. You can name your own “favourites.”

And now, Microsoft has just released enhancements to Office 365. The announcement is made in an Office Blog article posted on 26 July 2016 with the snappy-catchy title New to Office 365 in July—new intelligent services Researcher and Editor in Word, Outlook Focused Inbox for desktop and Zoom in PowerPoint. The piece is written by Kirk Koenigsbauer, a corporate vice president for the Office team: heavy-hitting stuff indeed.  In this post, we’ll be looking just at Researcher and Editor.

In the blog, we read that

Researcher is a new service in Word that helps you find and incorporate reliable sources and content for your paper in fewer steps. Right within your Word document you can explore material related to your topic and add it—and its properly-formatted citation—in one click. Researcher uses the Bing Knowledge Graph to pull in the appropriate content from the web and provide structured, safe and credible information.

and that

Editor assists you with the finishing touches by providing an advanced proofing and editing service. Leveraging machine learning and natural language processing—mixed with input from our own team of linguists—Editor makes suggestions to help you improve your writing.

Powerful tools indeed.  If they work.

Given the first look that Microsoft gives us, they have a long way to go.

First, Researcher. The section heading in the blog reads Continue reading

Seeds or weeds?

It is sadly ironic when someone writing about plagiarism (with the intention of helping readers understand what plagiarism is and how to write correctly) commits plagiarism.

It happens all too often. I am sure that, in most cases, it is unintentional. The trouble is, readers of their work may sometimes be confused, especially if confusing examples are presented. As instances, there are writers on plagiarism who still seem to believe that it is enough to list their sources at the end of a paper.  There are some who appear to think that citation in the text is enough, but who are apparently unaware (or who forget) that quoted words demand quotation markers (such as quotation marks or indented paragraphs or a change of font).

I don’t know what to make of the writer of the article, “Planting Seeds,” published in Blossoms: the official newsletter of Abuja Preparatory School (No. 25, 9 March 2016).

The newsletter is aimed at parents. Full credit to the writer for trying to help parents understand what plagiarism is, and understand how students can legitimately use other people’s words and work [“All they have to do is always acknowledge who and where they got it from”]. There is also a section on how some forms of help which parents often give are actually unhelpful, not least because they encourage bad habits and understanding/s. I am particularly impressed that this school takes students up to year 6, ages 10 to 12. I believe the earlier the values of honesty and integrity are inculcated, the better – the awareness of honest use of others’ work is “planting seeds” indeed.

But there are two paragraphs in the newsletter article which give me pause.

The first of these is Continue reading

Self-serving survey?

When a company (or other group with a vested interest) conducts its own research and publishes its own analysis of the results, it is usually worth investigating more deeply. Turnitin has long been a favourite source of disingenuous disinformation (see, for instance, my posts How much plagiarism?, Guilty: how do you plead?, A second look at SEER, and Not as I do, but… ).

Now my attention turns to RefME, the reference generator (unless it is a citation generator; there may be language differences here, as discussed in Language and labels).

RefME has just published a report, Survey Reveals Unique Insights to US Students’ Attitudes Towards Plagiarism, on two surveys which the company carried out in recent months. It seems a prime example of how not to analyse data and how not to write a report. That’s a brutal assessment, but I think the brutality is justified. Just be sure to get in quickly, in case the report is edited or deleted.

I think there are (at least) five or six ways in which the report can be considered flawed. Fuller explanation follows the list:

  1. the discussion reads at times like an inadequate analysis of the surveys and at times like a press release produced by the RefME publicity bureau;
  2. the report manages to confuse and conflate incorrect or inconsistently formatted references with plagiarism and/or academic misconduct;
  3. the discussion grabs at different research and studies, and suggests (inter alia) that small-scale surveys can be regarded as universal truths;
  4. in grabbing at those different research reports and studies, the writer misreports some and fails to do the homework, to check on the source behind the source;
  5. the report, despite praising RefME for enabling correct and consistent referencing/ endnoting, manages to be incorrect, incomplete and/or inconsistent in at least 11 of its 13 references;
  6. a small matter of many passages which reuse so much wording from source documents that it might be felt that quotation marks are required; some readers might even class these passages as plagiarism.

This is not to denigrate the RefME software itself. I have no opinion there. Until I bought a new computer a few months ago, I found the app hung up too often to enable a valid critique of its performance as a reference (or citation) generator. Now, I find it Continue reading

Language and labels

Different people have different understandings of the terms “citation” and “reference.” This can – and does – cause confusion. In my classes and workshops, I usually start discussion of use of other people’s work by stating how I use and will be using these terms, following the International Baccalaureate (IB)’s use of them. In brief:

  • citations are the short notes which go in the text, as part of the text or in parentheses;
  • references are the full bibliographic information which goes in the list at the end.

If we all have the same understanding of the terms, we are nearer being sure that we are talking about the same things.

There is much to suggest that many students go through secondary school and enter university believing that they understand how to document their use of source material correctly and appropriately, when all they have learned and practised is making an alphabetical list of sources at the end of their work. When told they need to cite their sources, Continue reading

Copy, paste, EDIT

Following on in this mini-series of common errors in extended essays: one of the ways in which IBDP extended essay candidates drop marks for Criterion I: Formal Presentation (Criterion D: Presentation from 2018) is inconsistency in the formatting of references.

IB examiners are instructed that consistency and completeness of references are more important than notions of accuracy, which is good. Given that students are free to use any referencing style they wish, it is not possible for an examiner to declare that this or that reference is recorded inaccurately, not in accordance with a style guide.

But the criterion requires that references are consistently formatted within the list itself. If the reference list is something like this: Continue reading

Back to basics – MLA8 revisited

I have to admit, I am excited by the latest edition of the MLA Handbook. Gulp! Does that make me some kind of uber-nerd?

I am breaking into my mini-series on common documentation errors in IB extended essays to share my excitement. MLA8 gives us a new way of looking at citation and referencing, very different to the approach taken in the previous edition. What’s more, the hopes I expressed for this new edition (well before it was actually published – see the post MLA8 – new edition of MLA Handbook) are incorporated in the new approach.

The special delight is because, in basing its new approach on the principles and the purposes of citation and referencing, MLA8 provides us with principles which can be applied to any referencing style or style guide. What you might call a WHYs move, perhaps. Continue reading

Orders are orders

In my last post, What’s in a name, I discussed the need for clear linkage between the name/s used in an in-text citation and the name/s used to start the entry in the list of References. If the citation reads,

“According to Michaels and Brown, ……”

or

“‘……’ (Singh 2014)”

then it is helpful to the reader if the entries in the References list start

Michaels, J., & P. Brown….

or

Singh, V. (2014).

Many students, however, seem unable to make the link. A number of extended essay examples posted by the International Baccalaureate show instances where students manage to mismatch names – detailed in that last post. Two of the instances I listed were essays in which students had used Continue reading
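That linkage could even be probed mechanically. Here is a rough sketch (a hypothetical helper, not an IB tool, and it handles only simple parenthetical author-date citations) that flags cited surnames which do not begin any entry in the reference list:

```python
import re

# Hypothetical check: every surname used in an in-text parenthetical
# citation like "(Singh 2014)" should start an entry in the list of
# References. Returns the set of surnames with no matching entry.
def unmatched_citations(text, reference_list):
    cited = set(re.findall(r"\(([A-Z][a-z]+),? \d{4}\)", text))
    entry_starts = {entry.split(",")[0] for entry in reference_list}
    return cited - entry_starts

text = "One study (Singh 2014) disputes an earlier claim (Jones 2010)."
refs = ["Singh, V. (2014). Title of work. City: Publisher."]
print(sorted(unmatched_citations(text, refs)))  # ['Jones']
```

Narrative citations ("According to Michaels and Brown…") would need a cleverer parser, but the sketch captures the basic test: can the reader get from each citation to a reference entry?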

What’s in a name?

In my last post, Credit where it is due, I discussed IB’s approach to referencing, with special regard to the new Extended essay guide. The guide affects students starting their two-year diploma programme in September this year, for first examination in May 2018.

In an attempt to ensure standard understanding of citation and referencing, IB is instructing examiners to refer to the Awards Committee all cases of inaccurate or inconsistent citation and referencing. This will be, I hope, for the good, for the benefit of students. I fear, however, that the committee will be inundated with such cases.

I have another concern here: the comment (on many commentary forms printed alongside sample essays) reads, “Under the new requirements this essay must be referred as a possible case of academic misconduct due to incorrect and inconsistent citing and referencing.” My concern is that examiners may be wrongly influenced in their overall assessment of the essay by any “incorrect” or inconsistent citation or referencing; they may be prejudiced as they read, and award lower marks than if the student had used “correct” and consistent citation and referencing – even when there is no misconduct, just mistakes. This is a big concern, but I will reserve discussion of this aspect for another post.

For the moment, I want to ignore notions of misconduct and concentrate on consistency, possibly with a view to reducing the number of essays submitted for further consideration.

So, in that last post I discussed the notion of accurate referencing, which could be seen to contradict other IB advice to the effect that “Students are not expected to show faultless expertise in referencing…”. I argued that the notions can be reconciled if “accurate” referencing is taken not to mean accuracy of formatting of the references but instead used to mean that the right authors are cited (as against just any names randomly plucked from a hat). Now, accuracy makes sense.

The right authors, the right names

Some of the comments on the sample essays suggest that essays are referred to the Awards Committee because Continue reading

Credit where it is due

I have to admit, I’ve long been puzzled by seemingly contradictory statements from the International Baccalaureate. They are highlighted once more in the new Extended Essay Guide (for first examination in May 2018).

On the one hand, we have the statement:

“Students are not expected to show faultless expertise in referencing, but are expected to demonstrate that all sources have been acknowledged” (p. 33 of the pdf guide),

and on the other:

“Producing accurate references and a bibliography is a skill that students should be seeking to refine as part of the extended essay writing process … Failure to comply with this requirement will be viewed as academic misconduct and will, therefore, be treated as a potential breach of IB regulations” (p. 88).

Can we reconcile the suggestion that “faultless expertise” is not required with the simultaneous requirement for “accurate references” – especially given that “correctness” is impossible to judge when IB allows use of any recognised style guide? Continue reading

Tangled trail

In the course of this blog, I have engaged in the occasional tortuous tangled trail.

I doubt if any of my trails – or trials – is as complicated as one recently followed by Debora Weber-Wulff. In her latest blog post, A Confusing Pakistani Plagiarism Case, she relates how she tried following up a report in the Pakistani Express Tribune, Confession: Ex-HEC head apologises for plagiarism.

Her difficulties involved trying to find the original paper which the former chair of the Higher Education Commission (HEC) might or might not have co-authored and which might or might not have been included in this writer’s CV and which might or might not have appeared in an academic journal; the paper might or might not have included plagiarised material. This last doubt arises because any plagiarism in the paper might not be considered plagiarism on the (questionable) grounds that the paper was published before Pakistan had legislated any policies regarding plagiarism.

Weber-Wulff sums up her investigation and the issues Continue reading

MLA8 – new edition of MLA Handbook

Heads up: MLA – the Modern Language Association – is about to release the 8th edition of the MLA Handbook.

The MLA site says it will be available some time in April, but warns that the online version of the 7th edition will not be available after 31 March. Amazon.com, the American warehouse, gives a release date of March 14, 2016 (four days ago at time of writing) – but also states “This title has not yet been released.” Amazon.co.uk, the British warehouse, gives a release date of 30 April 2016.

Two things catch the eye immediately, the subtitle and the price.

The Amazon US site carries no sub-title at all.



The Amazon UK site gives the title as “MLA Handbook: Rethinking Documentation for the Digital Age (Mla Handbook for Writers of Research Ppapers).” Ignoring the typo and the punctuation of the bracketed instance of MLA, we see what is possibly a new approach: “rethinking documentation…“.

This notion of a new approach is borne out in the price: $11.42 in the US and £10.50 in the UK. That compares with $16.79 and £18.50 respectively for the still-available 7th edition.

It is not necessarily generosity that lies behind the reduction in price for the new edition. The 8th edition runs to 145 pages against the 292 pages of the 7th edition – the new edition is Continue reading