Not just CRAAP – 3


[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests that a fresh look at the way we evaluate web pages and sites could be valuable.]
[In part 2, we looked at a rebuttal of Wineburg and McGrew’s study – and rebutted the rebuttal.]
[In this third part, we look at reasons why we may need a compromise between the “old” and the “new” ways of evaluating pages and sites online.]

In my last two posts, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students differed from those of the fact-checkers – and that the methods used by the historians and the students could be among the reasons why many of them made incomplete analyses of the sites they visited and drew flawed conclusions.

In one particular task, a comparison and evaluation of two articles, both dealing with bullying, the researchers found that historians and students tended to spend much time considering the actual articles before they moved elsewhere; some never left the target sites, while others eventually left them to look elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Historians and students tended to choose the College site as the more reliable and trustworthy, the Academy site as the less reliable and trustworthy. [Inaccurately expressed: see the correction below.] The fact-checkers were the exact opposite.

Wineburg and McGrew went on to ask whether students have learned to over-rely on evaluation checklists – and whether such reliance lets them down. In practice, in real life, we rarely check the veracity or credibility of the sites we come across, a point which the researchers acknowledge (p. 44). When we do, we often use the short-cut tools we learn at school (the short-cut tools we teach, don’t forget): checklists such as the CRAP test and the CRAAP test, Kathy Schrock’s Critical Evaluation Strategies, the CARS Checklist, the ACCORD Model, the ABCDs of Evaluating Sources. Many more checklists are available. These tools can work well with students starting to learn how to search and research. A checklist helps students to grasp routine ways of looking at sites; the rules of thumb help them. But, say Wineburg and McGrew, such checklists tend to teach us to look only at the page and at the site; they teach us to make quick decisions based mainly or solely on the look, feel and content of the site itself.

| CRAAP | CRAP | CARS | Schrock (Secondary) | ACCORD | ABCD |
| --- | --- | --- | --- | --- | --- |
| Currency | Currency | Credibility | Technical and visual aspects | Agenda | Author |
| Relevance | Reliability | Accuracy | Content | Credentials | Bias |
| Accuracy | Authority | Reasonableness | Authority | Citations | Content |
| Authority | Purpose / point of view | Support | | Oversight | Date |
| Purpose | | | Design & style | Relevance | |
| | | | Electronic sources | Date | |
Typically, for each of the headings, it is suggested that students ask themselves a number of questions about the page or site they are looking at.  It may well be that these sub-questions do not delve deeply enough – or it might be that as students learn the acronyms, they forget or dismiss some of the questions: they learn the label but not the underlying issues behind the questions.

The factors in these checklists are all worthy of consideration, to a greater or lesser extent – but the questions to consider under the various headings need tweaking, wider consideration and deeper understanding, as demonstrated in practice by the fact-checkers in the Reading Laterally study.

The sub-questions under currency or date, for instance, often ask how up-to-date the page is or when it was last revised. Currency might be important when considering papers and articles in the natural and the human sciences, but it may be less important in literature or the arts; an older document, or one contemporary with the events studied, can be of inestimable value, even in the sciences, if one is taking a historical approach. So much depends on purpose: not the purpose of the authors or publishers of the paper, but the purpose of the researcher or writer.

It is worth mentioning here that the purpose of the authors or publishers may not be obvious. Writers and organisations on the fringes may not declare their extremism, or it may be hidden in carefully couched, coded writing (as may be the case with the ACPeds group), not obvious to someone with little knowledge of the topic. In some cases, the intent may truly be to deceive, making claims which are unwarranted or downright lies. Not all news we disagree with is fake news, but there is a lot of fake news about. We do need to get off the site and see what other people say about the page, the site, the author or the organisation in order to determine purpose and credibility.

Similarly, one needs to know a lot about a subject and its literature to decide whether the content is accurate; we need to know the main writers in a field to determine whether the sources used are reliable – indeed, whether we can rely on the quotations and the ideas attributed to named sources. We need to follow up references to see if they are accurately recorded (or perhaps taken out of context), to see the regard in which those source papers and authors are held, and to see whether there is controversy or contradictory opinion regarding the sources used. The look we took at the supposedly “research-referenced” ACPeds statement on bullying demonstrates how thorough an investigation might be needed.

This ties in very much with the need to check for authority, going beyond whether the author has the qualifications claimed to a consideration of the professional esteem in which that writer is held. The publisher and the website should also be considerations. It matters not how academic the paper seems, how good the sources used, how useful the content: papers published in predatory journals may be held in less esteem than those in flagship journals; papers in journals which have dodgy peer-review policies should be suspect; and so on. It is a matter of credibility, a matter of authority. As the ACRL Framework for Information Literacy has it, Authority Is Constructed and Contextual. There are a number of factors to be considered, and again we must step off the page and off the site to see more clearly.

Checklists which engage only with the page or the site are of limited value. Alas, many seem to do just this: they consider only the page and the site. We often expose children to hoax sites (Dog Island, Tree Octopus, Feline Reactions to Bearded Men, and so on) as a means of engaging them and demonstrating how easy it is to be fooled. [These hoax sites have been around for many years, and they are still popular; they were all recommendations made in a recent post in the Facebook group Int’l School Library Connection.]

But as children grow, we need also to use real sites, including those with possibly malicious and dangerous intent, to teach them the signs to watch for and the coded language used, and how really to evaluate the sites we find. We need to give children the tools of awareness.

It is not that the checklists don’t work. They do still work – but the questions must go deeper as students become more aware and more mature. We need to ask questions which encourage lateral reading, extending the checklists.

It all takes time, of course.  I can’t help wondering if the fact-checkers in the Wineburg and McGrew study look laterally at everything they find online, at least if they have no previous experience or knowledge of the site they find themselves using.  Do they look laterally at everything, or just when they think it’s important, as I tend to do (see Part One of this article)?  Tended to do.

It seems to me that, in many respects, social media is driven by instant reaction and does not encourage deep thinking; this is one reason why fake news proliferates. True or fake, news which tickles your fancy is liked, passed on and re-tweeted without thinking too long or too deeply. Never let the truth get in the way of a good story (as they are said to say in the tabloid newspaper world). Just look at some of the comments on stories in The Onion from those who missed the satire and took them as factual reports. Try a simple search for [“the onion” taken seriously]!

It might even be a different part of the brain which engages in rapid, shallow thinking as against deep and considered thinking. Trivia and fun tidbits reach parts that other information does not reach (as it were). Indeed, psychology plays a huge role, given confirmation bias – the tendency to accept that which agrees with our biases and concurs with what we already think we know – and the corresponding tendency to disregard or reject anything which runs contrary to our biases or does not support what we know.

The CHE item which took me to the study, back in Part 1 of this article, has the title One way to fight fake news. Frankly, I doubt whether the findings of the study would or could be used, in practice, for this purpose – fighting fake news. Indeed, Wineburg and McGrew accept this; they make the point that we simply do not have time to fact-check everything.

The sad truth is, you have to care to read closely and to think. It has to be important.  This is something else for us to think about.  One of Wineburg’s main points is that we can save a great amount of time if we check for authority first, if we look for supporting opinion, if we know more about the messenger.  Then we can turn to the actual content, and then the checklists come into their own.  The checklists are NOT redundant, but we do need to use them more carefully.

[In part 1 of this 3 part article we looked at Wineburg and McGrew’s study which suggests a fresh look at the way we evaluate web pages and sites.]
[In part 2 of this 3 part article we looked at the ACPeds rebuttal of Wineburg and McGrew’s study.]

Correction 31 March 2018

My summary of my summary is inaccurate and misleading: I misrepresent the historians. Here I write:

Historians and students tended to choose the College site as the more reliable and trustworthy, the Academy site as the less reliable and trustworthy.  The fact-checkers were the exact opposite.

I expressed this better in part 1 of this investigation, where I wrote:

It is not surprising then that two-thirds of these students considered the College site as being the more reliable. Only 20% of the students opted for the Academy (the remaining students thought the two sites were equally reliable). The historians did a little better: while only one opted for the College site as the more reliable, another 40% thought the two sites were equally worthy. Only half the historians thought the Academy the more reliable.

My apologies.
