Not just CRAAP – 1


Over the weekend, a newsletter item in the Chronicle of Higher Education caught my attention: One way to fight fake news, by Dan Berrett and Beckie Supiano.  It was originally published in November 2017; I have fallen behind in my reading.

The item reports on a study by Sam Wineburg and Sarah McGrew.  Wineburg and McGrew compared the search habits and evaluation techniques of three different groups: professional historians, professional fact-checkers, and students at Stanford University.  They found that:

  • the historians and the students mostly used search and evaluation techniques very different from those of the fact-checkers;
  • the historians and the students could not always find the information they were asked to search for;
  • the historians and the students took longer to decide on the validity and reliability of the sites they were asked to look at;
  • most disturbingly, the historians and the students came, by and large, to conclusions about the validity and reliability of the various sites diametrically opposed to those of the fact-checkers; the two groups could not both be right.

Before reading further, you might want to try an approximation of one of the tasks undertaken by the participants (there were six tasks in all, in timed conditions).

You are asked to look at two specific web pages. They both deal with bullying.  You have a maximum of 10 minutes, using the search and evaluation techniques you normally use, to evaluate how trustworthy each site is as a source of information about bullying.  If you were writing an essay on bullying, would you prefer to use one of the articles rather than the other?  Could you use both?  How confident are you in your judgement?  On a scale of 1 (not confident) to 5 (extremely confident), where would you place each of these articles?

You will probably find it easier to compare the two pages if you open them in different tabs.

You do not have to stay on these pages: you can look elsewhere on the sites, and you can go off-site too, anywhere you want.  As you go along, try to explain your thought processes and your reasoning, as if to a group of students.

Right then. Before you start, have these two pages open in different tabs: Bullying at School: Never Acceptable and Stigma: At the Root of Ostracism and Bullying.

Go to the first tab, Bullying at School: Never Acceptable, and start comparing.

Your time starts NOW!

How did you do? How did you rate the reliability of each of the two articles?  How did you rate the two sites?

In the study, Wineburg and McGrew asked the three groups (fact-checkers, historians and students) to complete six tasks in timed conditions. This task, the comparison of the pages on bullying, is the first of the three tasks detailed in their paper, Lateral Reading: Reading Less and Learning More When Evaluating Digital Information. The authors state that the other three tasks were similar and yielded similar findings; those tasks were omitted from the published paper for lack of space.

Participants were asked to think aloud as they worked; their statements and their actions were recorded in audio and video.  The researchers gave prompts and scripted hints if participants went silent or struggled with a task.

As noted, the differences in approaches and the conclusions drawn in response to the tasks are remarkable.

The fact-checkers consistently completed the various tasks quickly and verified their answers in more reliable sources.  That is what fact-checkers are trained to do, and it is what they did, without having to be told or reminded.  Their comments and their views of reliability and authority were consistent and unanimous, unlike those of the other two groups.  The historians and the students, however, were slower. They spent more time than the fact-checkers reading the target pages, and more time on the target sites; some never left the target site at all. When they did verify what they found, they often settled for the first opinion they came across and did not necessarily look for authoritative opinions.

The issue is not that the groups' approaches differ, or that one group reached its conclusions more quickly than the others.  Different approaches would not matter if they led to the same conclusions.  As noted earlier, the real problem is that the approaches standardly used by the historians and the students led them to very different conclusions regarding reliability and authority.  These participants decided that the unreliable sites were the more reliable, and they could not detect the biases and the agendas of the fringe sites. They searched differently, they missed vital clues, and their standard evaluation techniques let them down.

If you rely on unreliable sources, can you rely on their messages?  This might well be a factor in the spreading of fake news, a thought I shall come back to later.

It is worth noting at this point that one reason Wineburg and McGrew chose historians as their second sample group is that historians work with source materials as a matter of course; interrogating source material is what historians do and what they are trained to do (and it is what most of the historians in this study failed to do properly).

The first of the bullying articles is posted on the site of the American College of Pediatricians (ACPeds).  The website is the public face of an extremely conservative fringe group which appears to have just a few hundred members. ACPeds has an anti-LGBT slant and is anti-abortion, as stated on its About Us page, though it may take careful reading to discern this.  The College believes in ethical absolutes and absolute truths. These and other facts about the group were quickly found by the fact-checkers.  None of this means that the organisation or its members are necessarily wrong to hold these beliefs – but it does help us to recognise possible biases, not least when there is a suggestion not only that members of the College hold these “values” but that everyone else should support them too.

The other bullying article is on the site of an association held in high esteem in the profession.  The American Academy of Pediatrics (AAP) was founded more than 80 years ago and has more than 65,000 members; it is the largest association of pediatricians in the world.  It publishes Pediatrics, the flagship journal of the community of US pediatricians – and one which, incidentally, provided several items of information used in the paper on the ACPeds site.

It might be helpful in this discussion to think of the two associations as the College (ACPeds) and the Academy (AAP).

Why did the historians and the students go so wrong, not just on this task but on all of the tasks set by Wineburg and McGrew? What do the fact-checkers do that most historians and students did not do?

The approach used by the students and the historians was to look at, and in many cases to read in full, the pages to which the task directed them. They considered the look and feel of the pages.  The College article has an abstract and a list of references, and this impressed almost all the students and many of the historians.  Because the paper is structured like an academic paper, some in these two groups assumed that it had been peer reviewed and published in a journal.  For some, the MD after the author’s name confirmed expertise, although few checked to see whether he was indeed a medical doctor.  The lack of advertising impressed, as did the .org domain suffix.  They considered the page and the site, and little more.  In Wineburg and McGrew’s terms, most historians and students read vertically.

It took students and historians many minutes of task time before they left the target pages and looked elsewhere for other information, verification or corroboration – that is, if they looked elsewhere at all.  Even a simple Google search should have sounded alarm bells:

Hit #1 is for the College itself, followed immediately by sites which use such terms as “fringe,” “anti-LGBT,” “wingnut collection of pediatricians,” “described as a hate group,” and so on.  The warning signs are there – but few students went even this far; nearly 30% of them never left the College site at all.

It is not surprising, then, that two-thirds of the students considered the College site the more reliable. Only 20% of the students opted for the Academy (the remaining students thought the two sites equally reliable).  The historians did a little better: only one opted for the College site as the more reliable, but another 40% thought the two sites equally worthy.  Only half the historians thought the Academy the more reliable.

The fact-checkers, on the other hand, very quickly and unanimously decided that the College site was dubious and that the Academy site was the more trustworthy.

The fact-checkers spent minimal time, sometimes just seconds, on the actual target pages or sites. Their instinct was to find out what others say about an organisation or an author before weighing the content for themselves. They opened new tabs and searched for independent mentions of the sites. They were not immediately concerned with the target pages themselves; they wanted to discover the biases of the sites and of the groups behind them. They sought corroboration for what they learned, and they sought it in reliable sources.  They read laterally.

This is the big difference in technique: most historians and students looked at the content itself; the fact-checkers looked for the authority behind the content.
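For the technically minded, the lateral-reading habit can be caricatured in a few lines of Python. This is only an illustrative sketch of the idea, not anything the study participants used; the query patterns (independent mentions, criticism, funding, each excluding the organisation’s own domain) are my own assumptions about what a fact-checker might type into a search box.

    # A toy sketch of lateral reading: rather than inspecting the target page,
    # ask what OTHERS say about the organisation behind it.
    # The query patterns are illustrative assumptions, not taken from the study.
    from urllib.parse import quote_plus

    def lateral_queries(organisation: str, home_domain: str) -> list[str]:
        """Build searches about a source, excluding the source's own domain."""
        templates = [
            '"{org}" -site:{dom}',            # independent mentions only
            '"{org}" criticism -site:{dom}',  # what do the critics say?
            '"{org}" funding -site:{dom}',    # who is behind it, and who pays?
        ]
        return [t.format(org=organisation, dom=home_domain) for t in templates]

    def search_urls(queries: list[str]) -> list[str]:
        """Turn each query into a search URL, one per new browser tab."""
        return ["https://www.google.com/search?q=" + quote_plus(q) for q in queries]

    if __name__ == "__main__":
        for url in search_urls(
                lateral_queries("American College of Pediatricians", "acpeds.org")):
            print(url)

Opening each of those results in a new tab before ever reading the target page is, in effect, the habit the fact-checkers displayed.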

It all sounds counter-intuitive, but if reading laterally works and reading vertically leads to flawed thinking, then we may need to rethink how we teach evaluation – and perhaps our own practices as well.

Whether or not you agree with a site's biases, it is important to know what they are, the better to come to informed conclusions about the topics you read and research. You probably won’t discover those biases by looking at sources promoted by or referenced in your readings. You discover more about target sites by looking at what others say about them and the organisations behind them; look for independent reviews and opinions.

My own search and find habits

I tried this and the other two tasks before I started reading the full academic paper.  I was perhaps primed by the article in the Chronicle of Higher Education, but I did try to put what I had read out of my mind and to search and evaluate as I normally would.

I have to admit that my methods are closer to those of the historians and the students, in that I did not leave the target articles (in the first two tasks) within seconds of opening them.  Instead, I glanced through each of the pages.  On the other hand, I did not read them closely and I did not stay long on them.  When I left the target pages, I skimmed quickly through the home pages and the About Us pages. Only then did I look elsewhere for information about the organisations.

I heard the alarm bells as I surveyed the article on the College site. Despite its seemingly academic apparatus (notably the abstract and the list of references), I saw that many of the references were less than academic (Yahoo News, blog posts, a dictionary).  I did not read the page at this stage.  One right-click later, I was on the About Us page: one quick glance at the Core Values and the alarm signals were flashing.

I would also point out that, although I had discerned some of the weaknesses of the paper on the College website, I am not impressed by the article on the Academy website either. It is advance publicity for an upcoming symposium. There are two quotations on the page, duly attributed, but neither provides any means of following up.  However accurate the piece, however authoritative the site, in academic terms the page is of limited value. If I were writing an essay or paper on bullying, I would not use either page.

On all three tasks, my search and verification methods fell somewhere between everyday practice and the gold standard of the professional fact-checkers.  My technique is perhaps a little nearer the fact-checkers’, but I have some way to go. This, I think, may well change.

I comfort myself with the thought that, when it matters, I do double-check, and I double-check off the site. I do this when considering a purchase on a site that is new to me. I do this when looking at apps, extensions and other software, even when it is free – especially when it is free.  I ignore the sites themselves (their own reviews are likely to be favourable, aren’t they?) and look for independent reviews and comments.  Sometimes I look for “hate” sites (hate HP / Samsung / Apple / Tesco / Chrysler, and so on) as well as more favourable sites and comments.  I look for “troubleshooting” or “problems.”  The more an item costs, the more care I take – remembering that cost is not just about money; there is time, frustration, customer care and other factors too.

I am careful too when reading claims, especially those made by groups I already have suspicions about.  Well-referenced, academic-seeming papers are often promoted by those with vested interests in promoting their products and services: readers of this blog will know how unreliable the papers and articles pushed out by Turnitin and by EasyBib, among others, can be.

I take care with things which matter. Perhaps I should take more care with things which don’t really matter. Perhaps we all should.  If you did the initial task, comparing the two bullying articles, how did you do? Are you re-thinking your search and find techniques?

We do not, of course, have time to check every site we come across thoroughly, a point that Wineburg and McGrew acknowledge (p. 44).  But, they say, the bigger point is that even when we do check, the tools we train students to use let them down: the CRAP test and the CRAAP test, Kathy Schrock’s Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, the ABCDs of Evaluating Sources, and all the other evaluation tools. These teach us to look at the page and at the site; they teach us to make quick decisions based on look, feel and on-site content.

Drop these tools, say Wineburg and McGrew, and do what fact-checkers do:

When the Internet is characterized by polished web design, search engine optimization, and organizations vying to appear trustworthy, such guidelines create a false sense of security. In fact, relying on checklists could make students more vulnerable to scams, not less. Fact checkers succeeded on our tasks not because they followed the advice we give to students. They succeeded because they didn’t (pp. 44-45).

Me, I’m not so sure that we need to drop the evaluation checklists. I shall pursue this thought in my next but one blog post.

In the next post, though, I want to discuss the two bullying articles a little more, not least because ACPeds has posted a rebuttal to the Wineburg and McGrew study. Some of the points made in their rebuttal deserve consideration – and shooting down.  This exercise could serve further to hone our evaluation skills.

Watch this space.

[In part 2 of this three-part article we look at the ACPeds rebuttal of Wineburg and McGrew’s study – and rebut the rebuttal.]

[In part 3 we look at the checklist approach to evaluation, and suggest that we don’t need to get rid of our CRAAP tests; we need to enhance them.]

