Smile, please – it’s for real

I came across this news item in the i newspaper (page 13 of the 29 August 2018 edition, a short article by John von Radowitz). The article reports on a study in which “Scientists showed 20 goats unfamiliar photos of the same human face looking happy or angry”; they found that “goats preferred to interact with the smiling face.”

It sounds fun, it sounds odd, it almost sounds improbable.

Two things struck me immediately. The first was that phrase, “unfamiliar photos.” When you’re a goat, who’s to say whether a photo is familiar or unfamiliar?

The second was a memory – a memory of the academic paper Feline Reactions to Bearded Men. You might remember it: the researchers claimed to have held cats in front of photos of bearded men and observed their reactions. The paper suggests that “Cats do not like men with long beards, especially long dark beards.”

The cats “paper” was first published in 1999, maybe earlier. It is frequently used in website evaluation exercises to make students aware of web pages which look authentic but are, in fact, hoaxes.

The name of the site – Improbable Research – is often cited as a warning signal (though as this is the site responsible for the annual Ig Nobel Prizes, a very real event, one might not be so sure). The biggest giveaway in the cats paper is probably the bibliography, which includes entries for Pat Boone, Madonna, Yul Brynner, Sinead O’Connor, Mary Quant, Arnold Schwarzenegger and the if-only Dr Seuss (responsible for the paper “Feline Responses to Hats”). How much of a giveaway that is, 20 years on, is questionable; many of the names are probably unknown Continue reading

Not just CRAAP – 3

[In part 1 of this three-part article, we looked at Wineburg and McGrew’s study, which suggests that a fresh look at the way we evaluate web pages and sites could be valuable.]
[In part 2, we looked at a rebuttal of Wineburg and McGrew’s study – and rebutted the rebuttal.]
[In this third part, we look at reasons why we may need a compromise between the “old” and the “new” ways of evaluating pages and sites online.]

In my last two posts, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students differed from those of the fact-checkers – and that the methods used by the historians and the students could be among the reasons why many of them made incomplete analyses of the sites visited and drew flawed conclusions.

In one particular task, a comparison and evaluation of two articles which both dealt with bullying, the researchers found that the historians and the students tended to spend a long time considering the articles themselves; some never left the target sites, while others eventually looked elsewhere. By contrast, the fact-checkers spent very little time on the target pages – sometimes just seconds; they all quickly looked elsewhere, often outside the publishing sites. That is not necessarily (at least in my eyes) a concern. What does concern me is that the evaluations made by the two groups were very different. Continue reading

Not just CRAAP – 2

In part 1 of this three-part article, I discussed a study by Sam Wineburg and Sarah McGrew into the different methods of search-and-find employed by three distinct groups: professional fact-checkers, professional historians and first-year undergraduates. The researchers found that the methods and thinking processes of the historians and the students differed from those of the fact-checkers – and that the methods used by the historians and the students could be among the reasons why many of them made incomplete analyses of the sites visited and drew flawed conclusions.

The three groups were asked to complete six tasks in timed conditions. The findings and ensuing discussion are detailed in the paper Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.

In that earlier post (Not just CRAAP – 1), I invited readers to try one of the tasks for themselves. If you haven’t already done so, it might be a good idea to try it before reading on here.

The task asked participants to imagine they were looking for information on bullying, and to describe their thought processes as they considered two particular articles on two different websites. The articles were Bullying at School: Never Acceptable, on the site of the American College of Pediatricians (ACPeds – the College), and Stigma: At the Root of Ostracism and Bullying, on the site of the American Academy of Pediatrics (AAP – the Academy).

Participants were allowed to look elsewhere on the sites, and anywhere else online that they wished. They had to decide which website was the more reliable and trustworthy.

What the researchers found was that Continue reading

Not just CRAAP – 1

Over the weekend, a newsletter item in the Chronicle of Higher Education caught my attention: One way to fight fake news, by Dan Berrett and Beckie Supiano. It was originally published in November 2017; I’ve got behind in my reading.

The item reports on a study by Sam Wineburg and Sarah McGrew. Wineburg and McGrew compared the search habits and evaluation techniques of three different groups: professional historians, professional fact-checkers, and students at Stanford University. They found that:

  • the historians and the students mostly used techniques of search and evaluation very different from those of the fact-checkers;
  • the historians and the students could not always find the information they were asked to search for;
  • the historians and the students took longer to decide on the validity and reliability of the sites they were asked to look at;
  • most disturbingly, the historians and the students came, by and large, to diametrically opposite conclusions from those of the fact-checkers as to the validity and reliability of the various sites; the two groups could not both be right.

Before reading further, you might want to try an approximation of one of the tasks undertaken by the participants (there were six tasks in all, in timed conditions). Continue reading

A second look at SEER

Last week, a friend asked if I had come across a source evaluation tool which interacted with Turnitin’s text-matching software. Attached to the email was a copy of Turnitin’s Source Educational Evaluation Rubric (SEER).

That was news to me! Interactive with Turnitin? Trying to work out why my friend thought SEER was interactive, and with Turnitin, took me down some strange paths. And the search got me taking a second look at SEER – a second look and a closer look. A strange journey.

I had in fact been alerted to the release of the rubric back in January 2013, Continue reading