The websites make great claims for online plagiarism detectors, for auto-cite reference generators, for automatic essay graders, and spelling and grammar checkers. Of course they do. But just how reliable are these services?
In earlier posts, I have critiqued and often – usually – criticised such services as Pearson Essay Scorer, Xerox Ignite, WorksCited4U, EasyBib, Turnitin, WriteCheck, PaperAssistance, ScanMyEssay, and more.
It is easy to criticise, but are some better than others? Which is best of its kind? Are they worth using? In my last blog post, I suggested a comparison test of free citation generators. I am still working on that, and intend to go on to testing grammar checkers and plagiarism detectors.
So I am very grateful to Debbie for pointing me to Nick Carbone’s “An Experiment with Grammar Checkers”.
Nick Carbone took a look at two online grammar checkers, Grammarly and WriteCheck, and also at the grammar checker built into Microsoft Word. (Nick Carbone currently works in publishing, and before that he taught writing at some highly prestigious U.S. universities and colleges.)
Nick submitted the same student essay to each of the three checkers: a first draft, for which students had been asked to concentrate on the discussion and the argument. Spelling, grammar and other issues could be corrected in later drafts.
I won’t detail the results here, apart from a brief breakdown below. Do go to Nick’s account yourself for the details of the experiment, the discussion and the conclusions.
Suffice it to say that while all three checkers found some genuine errors and explained them correctly, each also misdiagnosed some of the errors it found, gave confusing and unhelpful explanations of others, and flagged as wrong some phrases and structures which Nick thought perfectly acceptable. He investigated only the issues the checkers flagged, not those they failed to spot.
The results aren’t pretty. None performed well.
[Table: for each checker (Grammarly, WriteCheck, Microsoft Word) – the number of issues found, and the number of issues misdiagnosed or poorly explained – per Nick Carbone’s “Experiment with Grammar Checkers”]
This is, of course, just one experiment with one essay. Further investigation is needed.
It is also vital that users of any of these services do not blindly accept the reports; each suggested “correction” needs to be considered by the writer and by the teacher. Mistakes missed are also a problem.
The greatest problem is that those who rely on these automated services may be lulled into a false sense of security. Just because the checker flagged – or failed to flag – an issue, it ain’t necessarily so.
Bottom line, and it is for each user to decide: are these services more trouble than they are worth?