In my last post, I wondered “how do you plagiarize on a standardized test?”
The answer to that question depends on your definition of plagiarism. Or, in that particular case, on how the Louisiana Department of Education defines “plagiarism.” It seems that, to that august body, plagiarism includes taking unauthorised materials into an examination, or a teacher giving unauthorised direction to the students. Not plagiarism as we know it (Jim).
Now I have another answer. In Oklahoma, one of CTB/McGraw-Hill’s state-wide Grade 5 writing tests asks students to read a passage and respond to open-ended questions using evidence from the passage.
When the results were published, the children’s teachers were concerned on at least two counts. The first was the breakdown of the scores. The tests are supposed to measure writing ability across five categories: ideas and development; organization, unity and coherence; word choice; sentences and paragraphs; and grammar, usage and mechanics. It seems that in many schools, students received exactly the same score in each of the five categories: one student might score 2.0 in every category, another might score 2.5 in every category, and so on. This struck teachers as highly unusual – and highly unlikely.
The second concern was that many students were penalised for plagiarism.
In one online news report, Andrea Eger declared:
(School officials) also question widespread reductions in scores for “plagiarism” for students they say simply followed instructions to cite directly from reading passages on the test about which they were to take a position in an essay.
It seems that students lost marks if they quoted too much from the text in support of their responses to the questions. They lost marks, even when they put quotation marks around the text and added a citation. The quoted passages were ignored in the grading, and students were graded on the remainder of their answers.
One segment of KOCO’s online television news report illustrates this: the student’s answer includes a citation, followed by words in quotation marks which have been marked with a highlighter. Not plagiarism – but evidently not marked.
This seems confirmed by another television news report, which has a Testing Coordinator saying she was told by the testing companies that the similar scores were
” ‘… indicative of plagiarism.’ And we’re like ‘Okay.’ Plagiarism though not in the traditional sense that they copied something from somewhere else, but plagiarism in the sense that they over-cited,” said Reynolds.
That is NOT plagiarism, traditional or otherwise.
It may be a failure to follow test instructions, which is what CTB/McGraw-Hill claim was the root problem. Contrary to the statement quoted above from Andrea Eger’s report (“… followed instructions to cite directly from reading passages on the test …”), it seems that CTB/McGraw-Hill had changed the amount of material students were allowed to quote, but had told only the test-scorers. They had not, it seems, advised teachers of the change to previous practice.
None of the four reports or the two television clips that I’ve found answers these questions:
- What exactly were the instructions given, to teachers in their training, and to students on the test paper itself?
- Furthermore, did students who used evidence from the original passage but put it into their own words also include a citation?
- Good for them if they did, but if they didn’t, was this classed as plagiarism?
- If not, why not?
Meanwhile, I am full of admiration for those grade 5 students who can use quotation marks and who cite their sources. Well taught, teachers; well learned, students! Don’t be discouraged – you’re doing it right.
CTB/McGraw-Hill have offered to recheck test papers, but will charge $125 per paper if the mark is not changed. That’s a huge gamble and a high cost for the education department. It could also be a pointless exercise: if the marking rubric is not changed and the tests are re-marked according to the same criteria, especially as regards “plagiarism,” then the scores won’t change.
The test is already expensive, and much of that expense is wasted because the results are probably useless. With the same mark given across the board in so many cases, and so much of the students’ writing left unmarked, the tests show neither what students are capable of nor where they need more help.
What seems to be needed is either a new rubric – the rubric used in previous years – or else a completely new test. Either way, teachers, students and graders all need to be aware of the instructions, with time allowed to teach students new approaches if direct quotations are likely to be penalised.
Defining plagiarism is problem enough. We don’t need the added confusion of students being penalised for “plagiarism” when that is exactly what they are not doing.
Sources:
Andrea Eger, “Schools across Oklahoma say writing test results deeply flawed,” Tulsa World, 3 June 2014. http://www.tulsaworld.com/news/education/schools-across-state-say-writing-test-results-deeply-flawed/article_0d3b4f2a-b462-52b2-8a2e-5b3900936578.html
Brian Shlonsky, “School districts say test scores inaccurate, asking for rescore,” KOCO.com, 3 June 2014. http://www.koco.com/news/school-districts-say-test-scores-inaccurate-asking-for-rescore/26314828
Dana Hertneky, “Oklahoma Schools Report Issues With Writing Test Grades,” World Now, 4 June 2014. http://wnow.worldnow.com/story/25685825/oklahoma-schools-report-issues-with-writing-test-grades
“School districts question writing test scores,” KSWO, 9 June 2014. http://www.kswo.com/story/25732947/school-districts-question-writing-test-scores