Not just CRAAP – 3


[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests that a fresh look at the way we evaluate web pages and sites could be valuable.]
[In part 2, we looked at a rebuttal of Wineburg and McGrew’s study – and rebutted the rebuttal.]
[In this third part, we look at reasons why we may need a compromise between the “old” and the “new” ways of evaluating pages and sites online.]

In my last two posts, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students were different to those of the fact-checkers – and that the methods used by the historians and the students could be among the reasons why many of them made incomplete analyses of the sites they visited and drew flawed conclusions.

In one particular task, a comparison and evaluation of two articles, both of which dealt with bullying, the researchers found that historians and students tended to spend much time considering the actual articles before they moved elsewhere; some never left the target sites, some left them to look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Historians and students tended to choose the College site as the more reliable and trustworthy, the Academy site as the less reliable and trustworthy. The fact-checkers concluded the exact opposite.

Wineburg and McGrew went on to ask if students have learned to over-rely on evaluation checklists – and if such reliance lets them down. In practice, in real life, we rarely check the veracity or credibility of the sites we come across, a point which the researchers acknowledge (p. 44). When we do, we often use the short-cut tools we learn at school (the short-cut tools we teach, don’t forget): checklists such as the CRAP test and the CRAAP test, Kathy Schrock’s Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, the ABCDs of Evaluating Sources. Many more checklists are available. These tools can work well with students starting to learn how to search and research. A checklist will help students to grasp routine ways of looking at sites; the rules of thumb help them. But, say Wineburg and McGrew, these tools tend to teach us to look at the page and at the site; they teach us to make quick decisions based mainly or solely on the look, feel and content of the site.



| CRAP | CARS | Schrock (Secondary) | ACCORD | ABCD |
|---|---|---|---|---|
| Currency | Credibility | Technical and visual aspects | Agenda | Author |
| Reliability | Accuracy | Content | Credentials | Bias |
| Authority | Reasonableness | Authority | Citations | Content |
| Purpose/point of view | Support | | Oversight | Date |
| | | Design & style | Relevance | |
| | | Electronic sources | Date | |


Typically, for each of the headings, it is suggested that students ask themselves a number of questions about the page or site they are looking at.  It may well be that these sub-questions do not delve deeply enough – or it might be that as students learn the acronyms, they forget or dismiss some of the questions: they learn the label but not the underlying issues behind the questions.

The factors in these checklists are all worthy of consideration, to a greater or lesser extent – but the questions to consider under the various headings need tweaking, wider consideration and understanding, as demonstrated in practice by the fact-checkers in the Lateral Reading study.

The sub-questions under currency or date, for instance, often ask how up-to-date the page is or when it was last revised. Currency might be important when considering papers and articles in the natural and the human sciences, but it may be less important in literature or the arts; an older or contemporary document, paper or article can be of inestimable value, even in the sciences, if one is taking a historical approach. So much depends on purpose – not the purpose of the authors or publishers of the paper, but the purpose of the researcher or writer.

It is worth mentioning here that the purpose of the authors or publishers may not be obvious. Writers and organisations on the fringes may not declare their extremism, or it may be hidden in carefully couched, coded writing (as may be the case with the ACPeds group), not obvious to someone with little knowledge of the topic. In some cases, the intent may truly be to deceive, making claims which are unwarranted or downright lies. Not all news we disagree with is fake news, but there is a lot of fake news about. We do need to get off the site and see what other people say about the page, the site, the author or the organisation to determine purpose and credibility.

Similarly, one needs to know a lot about a subject and its literature to decide whether the content is accurate; we need to know the main writers in a field to determine whether the sources used are reliable – indeed, whether we can rely on the quotations and the ideas attributed to named sources. We need to follow up references to see if they are accurately recorded (or perhaps taken out of context), to see the regard in which those source papers and authors are held, and whether there is controversy or contradictory opinion regarding the sources used. The look we took at the supposedly “research-referenced” ACPeds statement on bullying demonstrates how thorough an investigation might be needed.

This ties in very much with the need to check for authority, going beyond whether the author has the qualifications claimed to consider the professional esteem in which that writer is held. The publisher and the website should also be considered. It matters not how academic the paper seems, how good the sources used are, or how useful the content is: papers published in predatory journals may be held in less esteem than those in flagship journals; papers in journals with dodgy peer-review policies should be suspect; and so on. It’s a matter of credibility, a matter of authority. As the Information Literacy Framework has it, Authority Is Constructed and Contextual. There are a number of factors to be considered, and again we must step off the page and off the site to see more clearly.

Checklists which engage only with the page or the site are of limited value. Alas, many seem to do just this, thinking only about the page and the site. We often expose children to hoax sites (Dog Island, Tree Octopus, Feline Reactions to Bearded Men, and so on) as a means of engaging them and demonstrating how easy it is to be fooled. [These hoax sites have been around for many years, and they are still popular; they were all recommendations made in a recent post in the Facebook group Int’l School Library Connection.]

But as children grow, we need also to use real sites, including those with possibly malicious and dangerous intent, to educate as to the signs to watch for, the coded language, how really to evaluate the sites we find.  We need to give children the tools of awareness.

It is not that the checklists don’t work.  The checklists do still work – but the questions must go deeper as students become more aware and more mature. We need to ask questions which encourage lateral reading, extending the checklists.

It all takes time, of course.  I can’t help wondering if the fact-checkers in the Wineburg and McGrew study look laterally at everything they find online, at least if they have no previous experience or knowledge of the site they find themselves using.  Do they look laterally at everything, or just when they think it’s important, as I tend to do (see Part One of this article)?  Tended to do.

It seems to me that, in many respects, social media is impelled by instant reaction and does not encourage deep thinking; this is one reason why fake news proliferates.  True or fake, news which tickles your fancy is liked, is passed on, re-tweeted, without thinking too long or too deeply.  Never let the truth get in the way of a good story (as they are said to say in the tabloid newspaper world).  Just look at some of the comments on stories in The Onion from those who missed the satire and took them as factual reports.  Try a simple search for [“the onion” taken seriously]!

It might even be a different part of the brain which engages in rapid shallow thinking as against deep and considered thinking. Trivia and fun tidbits reach parts that other information does not reach (as it were).  Indeed, psychology plays a huge role given the issues of confirmation bias (the notion that we tend to accept that which agrees with our biases and that which concurs with what we already think we know) and possible tendencies to disregard or reject anything which runs contrary to our biases or which does not support what we know.

The Chronicle of Higher Education item which, back in Part 1 of this article, took me to the study has the title One way to fight fake news. Frankly, I doubt whether the findings of the study would or could be used, in practice, for this purpose – fighting fake news. Indeed, Wineburg and McGrew accept this; they make the point that we just do not have time to fact-check everything.

The sad truth is, you have to care to read closely and to think. It has to be important.  This is something else for us to think about.  One of Wineburg’s main points is that we can save a great amount of time if we check for authority first, if we look for supporting opinion, if we know more about the messenger.  Then we can turn to the actual content, and then the checklists come into their own.  The checklists are NOT redundant, but we do need to use them more carefully.

[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests a fresh look at the way we evaluate web pages and sites.]
[In part 2 of this 3 part article we looked at the ACPeds rebuttal of Wineburg and McGrew’s study.]

Not just CRAAP – 2


In part 1 of this three-part article, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students were different to those of the fact-checkers – and that the methods used by the historians and the students could be among the reasons why many of them made incomplete analyses of the sites they visited and drew flawed conclusions.

The three groups were asked to complete six tasks in timed conditions. The findings and ensuing discussion are detailed in the paper Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.

In this earlier post (Not just CRAAP – 1), I invited readers to try one of the tasks for themselves. If you haven’t already done this, it might be a good idea to try before reading on here.

The task asked participants to imagine they were looking for information on bullying, and to describe their thought processes as they considered two particular articles on two different websites. The articles were Bullying at School: Never Acceptable on the site of the American College of Pediatricians (ACPeds – the College) and Stigma: At the Root of Ostracism and Bullying on the site of the American Academy of Pediatrics (AAP – the Academy).

Participants were allowed to look elsewhere on the sites and anywhere else online that they wished.  They had to decide which website was the more reliable and trustworthy.

What the researchers found was that historians and students tended to spend much time looking at the actual articles before they moved elsewhere; some stayed on the target sites, some left them to look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Historians and students tended to choose the College site as the more reliable and trustworthy, the Academy site as the less reliable and trustworthy. The fact-checkers concluded the exact opposite.

In this and in the five other tasks, the fact-checkers were much quicker at making decisions and finding corroborating information in reliable sources to support their thinking.  Historians and students were slower, and many did not try to verify information found. Of those who did try to verify what they found elsewhere, many accepted the opinions of any source rather than looking further for corroboration from more-reliable sources.

One more pair of reminders, and then we get to the meat of this present blog-post:

As discussed earlier, the College is a small fringe organisation of pediatricians with a conservative agenda.

The Academy is the world’s largest organisation of pediatricians; it is long-established, much respected, and is the publisher of Pediatrics, the flagship research journal of the profession.

Biased research?

Soon after the Wineburg-McGrew study was published as a Working Paper of the Stanford University History Education Group, Den Trumbull MD, the author of the original College paper Bullying at School: Never Acceptable wrote and posted Commentary on a Stanford University Study: Criticizing University Students and Doctorate Historians.

It is highly critical of the Wineburg-McGrew study, accusing the researchers of bias and false reasoning. Inter alia, he made the points that

  • their paper, Lateral Reading, was not peer-reviewed;
  • the fact-checkers were prejudiced because they were “influenced by non-objective sources” which prejudiced their opinions before ever they read the actual article;
  • his paper – which he describes as the College’s position statement on bullying – is “defended by referenced research;”
  • the College has been “maligned” by opponents, leading to the fact-checkers prejudging the College statement on bullying, even when that statement is “irrefutable.”

Trumbull declares that the Wineburg-McGrew study is totally flawed, totally biased. The fact-checkers, he says, were “opinion-checkers,” not fact-checkers. Moreover, he declares,  “facts are not a matter of opinion or popularity.”

He goes on to say:

True fact-checking would involve scrutinizing the text and the references that support the text. That’s what the students and historians mostly did. The fact-checkers were more likely to have been influenced by the all-too-common ad hominem attacks found on the Internet, and perhaps persuaded by the views of their professional associations with “news and political organizations.”

It comes down to this: do we best consider the authority, reliability and usefulness of information found by considering only that information, or should context, reputation and other factors come into consideration too?

In part 1 of this article, I declared that I was not impressed by either the College paper or the Academy article; I said that I would probably not use either if I were looking for information on bullying. I noted some of the flaws and shortcomings I had found in Trumbull’s article, replicating as best I could, and in timed conditions, the task as posed in Wineburg and McGrew’s study. Dr Trumbull’s counter-blast to the Wineburg-McGrew study made me take a second and closer look.

A more careful look at the College paper shows that this is NOT an article or academic paper. I missed this first time through, and I suspect the fact-checkers did as well, so quick were they to leave the site without looking at the content of the page.  It is an opinion piece, right from the start. The abstract reads:

No child should be harassed for his or her unique characteristics. Schools should encourage an environment of respectful self-expression for all students, and no group should be singled out for special treatment.  Parental involvement should be a school’s primary method of resolution with programs emphasizing general respectfulness serving to set the tone in the classrooms.

First time through, I missed the full significance of the repeated use of “should.”   Though the first two sentences seem sound enough,  I did pause a moment over the final sentence.  It took a second reading, after reading the full paper, to appreciate that “should.”    Trumbull’s paper lacks an argument.  That “should” is not there pointing to the conclusion of the paper. It is establishing the conclusion as a given.  There is no attempt to establish the “should-ness” of the position, no background context or review of the literature.  There is no mention of contrary positions.

Of course, the “should-ness” is fully in line with the College’s lead core value as listed on the About Us page:

The American College of Pediatricians

  1. Recognizes that there are absolutes and scientific truths that transcend relative social considerations of the day.

This is not an academic paper as I had first thought; it is a position paper. It says so, in the navigation bar second from the top. I missed that, doing the test.

I would probably have noticed if I had followed links and menus on the site to reach this page, as in a “normal” piece of research; the test in the study is “un-normal,” in that it takes us to the page directly, and it is all too easy to miss the clue in the navigation bar. On the other hand, if a search-engine search had brought me directly to this page, I would have missed that clue too.

The College is not trying to hide anything – though I do wonder whether it is standard practice for a position paper to be set out like an academic paper. Again, given the speed with which they left the page, I wonder if the fact-checkers also missed this clue. Come to that, the historians and the students tended to spend much time on this page – but now I wonder how many of them spotted this? (There is no mention of this in Wineburg and McGrew’s paper.)

Despite its “feel,” most notably the abstract and the list of references, this is not an academic paper.

Before reading the page itself, I spent much time thinking about the footnote signals and the references. I quickly saw that one reference was to a dictionary definition, two were to blog posts – or is this a single blog post? Though the authors are different, the title and the URL are the same for both.

Five of the ten references are to different papers in the journal Pediatrics, published by the highly-respected Academy. That looks good. But the citations in the text gave me pause: there is so little real content there. They could come from any source. I noted in particular footnoted item #3. The second section in the paper is headed Forms of Bullying and provides five bullet-pointed forms of bullying; the next section, Target Characteristics of Bullies, presents eight bullet-pointed characteristics. Most of these thirteen bullet-pointed statements strike me as common knowledge; if I were asked to come up with a list of forms or characteristics of bullying, I think I would come up with a similar list – and I know very little about bullying. I thought it odd that, of the thirteen items listed in these two sections, just one is attributed: “Physical inabilities and disabilities (3)”.

Could Dr Trumbull find a source only for this one item but not for any of the others? Or could he not be bothered to find references for the other items?

I was similarly struck by the next three citations (#4, 5 and 6). These all seemed fairly basic notions about bullying, so basic that any introductory article might list them. Did we really need three separate citations from three separate academic papers to make these points?  Students sometimes use this ploy to suggest wide reading, don’t they?

Citations #9 and 10 are striking. Trumbull provides two quotations, with different superscript numbers and different speakers.

These two citations lead to references which each name the same blog post, but name different authors.

There is just one blog post from which these quotations are taken, Expert says media dangerously ignore mental illness in coverage of gay teen suicides.

It was written by Liz Goodwin, not by Haas and not by Bayard. What Trumbull has done is to take Goodwin’s paraphrases of what Haas and Bayard said, present them as direct quotations, and then attribute the same blog post to each of them in turn. Liz Goodwin, the actual author of these words, gets not a mention.

It’s a petty point, but if Trumbull is going to accuse Wineburg and McGrew of a lack of scholarly rigor, then he needs to be a tad more rigorous himself.

Perhaps the only use of source material which is both accurate and telling is citation/reference #7.

The reference shows that these thoughts and the quotation come from a paper in the Journal of Criminology.

Once again, it might seem petty to point out (1) that Trumbull fails to name the authors of this paper (whose names are shown very clearly on the Journal of Criminology website, at the DOI as recorded), or (2) that the quotation he uses appears on page 8 of the paper and not pages 7 and 8 as detailed in his reference. And, because his is not an academic paper which might present other viewpoints (the better to refute them), Trumbull finds no need to mention comments elsewhere on this finding, including the notions that

  1. schools which introduce anti-bullying programs early tend to have more success in preventing bullying than schools which start late, (What Makes Anti-bullying Programs Effective? reported in Psychology Today)
  2. schools which introduce anti-bullying programs often do so because they have problems – possibly caused by failure to start the programs early (see (1)) (a Huffington Post blog post, Are Anti-Bullying Programs Counterproductive?, includes a number of criticisms of the study),
  3. schools which introduce anti-bullying programs often do so because they have problems (see the Huffington Post article) – and so “are more likely to have experienced peer victimization, compared to those attending schools without bullying prevention” (the key finding of Seokjin Jeong and Byung Hyun Lee’s paper),
  4. there are many studies and meta-studies which conclude that anti-bullying programs in schools do decrease (though not eliminate) bullying (for instance, Systematic reviews of the effectiveness of developmental prevention programs in reducing delinquency, aggression, and bullying by David Farrington and others).

There is evidence both ways and somewhere in-between too.  But again, if you believe your position is “irrefutable,” you don’t recognise possibly contrary views, do you?  If they exist, they are wrong, aren’t they?

Altogether, although supposedly “defended by referenced research,” in academic and argumentative terms Trumbull’s defences are weak.

As I said, I doubt that I would use this position statement in a presentation on bullying. Writing this article, though, I realise that I might use it in an academic paper as an example of alternative and possibly fringe views (and go on to suggest why the College position is suspect).

As noted, I doubt that I would use the Academy page in my paper on bullying either. It is light on substance. On the other hand, it is not meant to convey anything weighty. It is a session description for an upcoming two-hour symposium to be held during a general meeting of four pediatric organisations.  The header of the article reads:

Experts in bullying and children’s mental health gather at the Pediatric Academic Societies meeting to describe new research and what it means for children’s mental health.

If I were writing a paper on bullying, this article could be useful as a starting-point for further research. Six presentations are mentioned, so I might search for any of the titles of these presentations which interest me, and I might use the names of the presenters as possible experts to follow up. I might also note that the American Academy of Pediatrics is one of the four organisations sponsoring the Annual Meeting. The American College of Pediatricians is not one of the four sponsors. I wonder if anyone from ACPeds attended the meeting or the symposium?

Another angle

One of Trumbull’s complaints about the Wineburg-McGrew study is that

the College has been “maligned” by opponents leading to the fact-checkers prejudging the College statement on bullying, even when that statement is “irrefutable.”

I am reminded that two of the frames of the ACRL Framework for Information Literacy for Higher Education are

Scholarship as Conversation
Authority Is Constructed and Contextual

I do wonder how conversation can take place when the claim is made that the College’s views are “irrefutable” and also that there are “absolutes and scientific truths that transcend relative social considerations of the day.” This is claiming absolute authority, no room to question or to argue.

That contravenes the other frame I mention here: authority is not absolute. It is earned and it is situational.  We educate our students to appreciate (we hope) that information is not all equal, even when two writers say the same thing. We look for reliability and we look for authority – and while authority is bestowed, in part, by who the writer is, it also depends on the views of others, whether through reputation, qualification or deeds.  You are not trustworthy because you say you are. You have to show you are trustworthy, you have to earn trust.  We do well to look closely at the messenger as well as at the message.   If the messenger is untrustworthy, can we trust the message?

Space, time and your patience, gentle reader, suggest I bring this post to a close. It could be, though, that each of the other four frames of information literacy may have some relevance in this investigation.  I must think further, maybe another post.

More immediately, I want to address evaluation checklists. One of Wineburg’s points is that the tools we give students may not always serve them well. They did not serve the students well in this study, in the tasks that he and McGrew set; they did not serve the historians too well either.

We’ll look more carefully at CRAAP and other evaluation tools in the next post.

[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests a fresh look at the way we evaluate web pages and sites.]
[In part 3 we look at the checklist approach to evaluation, and suggest that we don’t need to get rid of our CRAAP tests, we need to enhance them.]

Not just CRAAP – 1


Over the weekend, a newsletter item in the Chronicle of Higher Education caught my attention, One way to fight fake news by Dan Berrett and Beckie Supiano.  It was originally published in November 2017;  I’ve got behind in my reading.

The item reports on a study by Sam Wineburg and Sarah McGrew. Wineburg and McGrew compared the search habits and evaluation techniques of three different groups: professional historians, professional fact-checkers, and students at Stanford University. They found that:

  • the historians and the students mostly used very different techniques of search and evaluation to the techniques of the fact-checkers;
  • the historians and the students could not always find the information they were asked to search for;
  • the historians and the students took longer to decide on the validity and reliability of the sites they were asked to look at;
  • most disturbingly, the historians and the students came by-and-large to diametrically opposite conclusions to those of the fact-checkers as to the validity and reliability of the various sites; the two groups could not both be right.

Before reading further, you might want to try an approximation of one of the tasks undertaken by the participants (there were six tasks in all, in timed conditions).

You are asked to look at two specific web pages. They both deal with bullying. You have 10 minutes maximum, using the search and evaluation techniques you normally use, to evaluate how trustworthy each site is as a source of information about bullying. If you were writing an essay on bullying, is there one of the papers you would prefer to use rather than the other? Could you use both? How confident are you about your judgement? On a scale of 1 (not confident) to 5 (extremely confident), where would you place each of these articles?

You will probably find it best to open the two pages in different tabs; this will make comparison easier.

You do not have to stay on these pages. You can look elsewhere on the sites and you can go off-site too, anywhere you want.  Try as you go along to explain your thought processes and your reasoning, as if to a group of students.

Right then: before you start, have these two pages open in different tabs – this page, Bullying at School: Never Acceptable, and then this page, Stigma: At the Root of Ostracism and Bullying.

Go to the first tab, Bullying at School: Never Acceptable and start comparing.

Your time starts NOW!

How did you do? How did you rate the reliability of each of these two articles?  How do you rate the two sites?

In the study, Wineburg and McGrew asked the three groups (fact-checkers, historians and students) to complete six tasks in timed conditions. This task, the comparison of the pages on bullying, is the first of the three tasks detailed in their paper, Lateral Reading: Reading Less and Learning More When Evaluating Digital Information. The authors state that the other three tasks were similar and they yielded similar findings; they were omitted from the published paper for lack of space.

Participants were asked to think aloud as they performed; their statements and their actions were recorded in audio and video.  The researchers would give prompts and scripted hints if the participants went silent or struggled with the task.

As noted, the differences in approaches and the conclusions drawn in response to the tasks are remarkable.

The fact-checkers consistently completed the various tasks quickly and verified their answers in more-reliable sources. That’s what fact-checkers are trained to do, and this is what they did, without having to be told or reminded. Their comments and their views of reliability and authority were consistent and unanimous, unlike those of the other participants. The historians and the students, however, were slower. They spent more time than the fact-checkers reading the target pages, and they spent more time on the target sites. Some never left the target site. If they verified what they found, they often settled for the first opinion they came across and did not necessarily look for authoritative opinions.

The issue is not that the approaches of the various groups are different or that one group was quicker than the others in reaching conclusions. It would not be an issue if, despite different approaches, the different methods led to the same conclusions. As noted earlier, the real problem is that the approaches standardly used by the historians and the students led them to very different conclusions regarding reliability and authority. These participants decided that the unreliable sites were the more reliable, and they could not detect the biases and the agendas of the fringe sites. They searched differently and they missed vital clues. Their standard evaluation techniques let them down.

If you rely on unreliable sources, can you rely on the messages? This might well be a factor in the spreading of fake news, a thought I shall come back to later.

It is worth noting at this point that one of the reasons Wineburg and McGrew chose historians as their second sample group is that historians work with source materials as a matter of course; interrogating source material is what historians do and what they are trained to do (and it is what most of the historians in this study failed to do properly).

The first of the bullying articles is posted on the site of the American College of Pediatricians (ACPeds). The website is the public face of an extremely conservative fringe group which appears to have just a few hundred members. The ACPeds is anti-LGBT and anti-abortion; these positions are stated on its About Us page, though it may take careful reading to discern them. The College believes in ethical absolutes and absolute truths. These and other facts about the group were quickly found by the fact-checkers. None of this means that the organisation or its members are necessarily wrong to hold these beliefs – but it does help us to recognise possible biases, not least when there is a suggestion not only that members of the College hold these "values" but that everyone else should support them too.

The other bullying article is on the site of an association held in high esteem in the profession.  The American Academy of Pediatrics (AAP) was founded more than 80 years ago and has more than 65,000 members; it is the largest association of pediatricians in the world.  It publishes Pediatrics, the flagship journal of the community of US pediatricians – and one which, incidentally, provided several items of information used in the paper on the ACPeds site.

It might be helpful in this discussion to think of the two associations as the College (ACPeds) and the Academy (AAP).

Why did the historians and the students go so wrong, not just on this task but on all of the tasks set by Wineburg and McGrew? What do the fact-checkers do that most historians and students did not do?

The approach used by the students and the historians was to look at, and in many cases to read in full, the pages to which they were directed in the task. They considered the look of the pages and the feel of the pages. The College article has an abstract and a list of references, and this impressed almost all the students and many of the historians. Because the paper is structured like an academic paper, some in these two groups assumed that it had been peer reviewed and published in a journal. For some, the MD after the author's name confirmed expertise, although few checked to see if he was indeed a Medical Doctor. The lack of advertising impressed, as did the .org domain suffix. They considered the page and the site. In Wineburg and McGrew's terms, most historians and students read vertically.

It took students and historians many minutes of task-time before they left the target pages and looked elsewhere for other information, verification or corroboration. That is, if they looked elsewhere at all. Even a simple Google search should have sounded alarm bells:

Hit #1 is for the College itself, followed immediately by sites which use such terms as “fringe,” “anti-LGBT,” “wingnut collection of pediatricians,” “described as a hate group,” and so on.  The warning signs are there – but few students went even this far; nearly 30% of them never even left the College site.

It is not surprising, then, that two-thirds of these students considered the College site the more reliable. Only 20% of the students opted for the Academy (the remaining students thought the two sites equally reliable). The historians did a little better: only one opted for the College site as the more reliable, though another 40% thought the two sites equally worthy. Only half the historians thought the Academy the more reliable.

The fact-checkers on the other hand very quickly and unanimously decided that the College site was dubious, that the Academy site was the more trustworthy.

The fact-checkers spent minimal time, sometimes just seconds, on the actual target pages or sites. Their instinct was to find out what others say about the organisation or the authors before checking for themselves. They opened new tabs and searched for independent mentions of the sites. They were not immediately concerned with the target pages themselves; they wanted to discover the biases of the sites and of the groups behind the sites. They sought corroboration for what they learned, and they sought that corroboration in reliable sources. They read laterally.

This is the big difference in technique: most historians and students looked at the content itself; the fact-checkers looked for the authority behind the content.

It all sounds counter-intuitive, but if reading laterally works, then it is good. If reading laterally works and reading vertically leads to flawed thinking, then we may need to rethink how we teach evaluation, and we may need to rethink our own practices.

Whether you agree with the biases or not, it is important to know them, the better to come to informed conclusions about the topics you read and research. You probably won't discover these biases by looking at sources promoted by or referenced from your readings. You discover more about the target sites by looking for what others say about them and their organisations; you look for independent reviews and opinions.

My own search and find habits

I tried this and the other two tasks before I started reading the full academic paper.  I was perhaps primed by the article in the Chronicle of Higher Education but I did try to put what I had read out of my mind and search and evaluate as I would normally.

I have to admit that my methods are closer to those of the historians and the students in that I did not leave the target articles (in the first two tasks) within seconds of raising them on my screen. Instead, I glanced through each of the pages. On the other hand, I did not read them closely, and I did not stay long on them. When I left the target pages, I skimmed quickly through the home pages and the About Us pages. Only then did I look for information about the organisations elsewhere.

I heard the alarm bells as I surveyed the article on the College site. Despite its seemingly academic nature (notably the abstract and the list of references), I saw that many of the references were less than academic (Yahoo News, blog posts, a dictionary). I did not read the page at this stage. One right-click later, I was on the About Us page: one quick glance at the Core Values and the alarm signals were flashing.

I would also point out that, although I have discerned some of the weaknesses of the paper on the College website, I am not impressed by the article on the Academy website either. It is advance publicity for an upcoming symposium. There are two quotations on the page, duly attributed but neither providing any means of following up. However accurate this piece is, however authoritative the site, in academic terms the page is of limited value. If I were writing an essay or paper on bullying, I would not be using either page.

On all three tests, my search and verification methods fell somewhere between everyday practice and the gold standard of the professional fact-checkers.  My technique is perhaps a little nearer the fact-checkers, but I have some way to go. This, I think, may well change.

I comfort myself with the thought that, when it matters, I do double-check, and I do double-check off the site. I do this when considering a purchase on sites that are new to me. I do this when looking at apps and extensions and other software, even if it is free. Especially when it is free. I ignore the sites themselves (their own reviews are more likely to be favourable, aren't they?); I look for independent reviews and comments. Sometimes I look for "hate" sites (hate HP/ samsung/ apple/ tesco/ chrysler etc.) as well as more favourable sites and comments. I look for "troubleshooting" or "problems." The more the item costs, the more care I take – remembering that cost is not just about money; there is time and frustration and customer care and other factors too.
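For what it is worth, those habits boil down to a handful of query templates. This little sketch simply assembles them for a given product or organisation; the templates are my own shorthand for the searches described above, not any standard list:

```python
# Build "lateral" search queries for checking out a product, site or
# organisation before trusting it: independent reviews, complaints,
# even "hate" pages - not the site's own marketing.

TEMPLATES = [
    "{name} review",
    "{name} independent review",
    "{name} problems",
    "{name} troubleshooting",
    "hate {name}",
    "{name} complaints",
]

def lateral_queries(name):
    """Return the search strings to paste into a search engine."""
    return [t.format(name=name) for t in TEMPLATES]

for query in lateral_queries("ExampleWidget"):
    print(query)
```

The point, of course, is not the code but the habit: the queries deliberately leave the target site out of the conversation and ask what everyone else has to say.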

I am careful too when reading claims, especially those made by groups I already have suspicions about. Well-referenced, academic-seeming papers are often promoted by those with vested interests in promoting their products and services: readers of this blog will know how unreliable the papers and articles pushed out by Turnitin and by EasyBib, among others, can be.

I take care with things which matter. Perhaps I should take more care with things which don’t really matter. Perhaps we all should.  If you did the initial task, comparing the two bullying articles, how did you do? Are you re-thinking your search and find techniques?

We do not, of course, have time to check thoroughly every site we come across, a point that Wineburg and McGrew acknowledge (p. 44). But, they say, the bigger point is that even when we do check, the tools we train students to use – the CRAP test and the CRAAP test, Kathy Schrock's Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, the ABCDs of Evaluating Sources and all the other evaluation tools – let them down. They teach us to look at the page and at the site; they teach us to make quick decisions based on the look and feel and content of the site.

Drop these tools, say Wineburg and McGrew, do what fact-checkers do:

When the Internet is characterized by polished web design, search engine optimization, and organizations vying to appear trustworthy, such guidelines create a false sense of security. In fact, relying on checklists could make students more vulnerable to scams, not less. Fact checkers succeeded on our tasks not because they followed the advice we give to students. They succeeded because they didn’t (pp. 44-45).

Me, I’m not so sure that we need to drop the evaluation checklists. I shall pursue this thought in my next but one blog post.

In the next post, though, I want to discuss the two bullying articles a little more, not least because ACPeds has posted a rebuttal to the Wineburg and McGrew study. Some of the points made in their rebuttal deserve consideration – and shooting down.  This exercise could serve further to hone our evaluation skills.

Watch this space.

[In part 2 of this 3 part article we look at the ACPeds rebuttal of Wineburg and McGrew’s study – and rebut the rebuttal.]

[In part 3 we look at the checklist approach to evaluation, and suggest that we don't need to get rid of our CRAAP tests, we need to enhance them.]
