'Unscientific American' article retrieves aged nonsense about how students should learn and why corporate 'reform' testing is OK based on another one of those semi-fictional fairy tales about a classroom that 'works' (to raise test scores)...

Three weeks ago, when the latest issue of "Scientific American" arrived in our mailbox (we have a subscription to the print edition, as well as getting it online), we noted but didn't read carefully the article on testing. Later, after several friends wrote to us asking about the article, we read it and wondered whether anything had changed in the 20 years or so since many of us were criticizing the nonsense preached on behalf of high-stakes testing and what we called the "testocracy" (Substance) or the "Standardistos" (Susan Ohanian).

The August 2015 edition of Scientific American features an article entitled "Building the 21st Century Learner" on its cover. But the article, a deft combination of fact and fiction, simply continues the propaganda that has been utilized to try to legitimize corporate -- test-based -- "school reform" since the formulas for the narrative were launched in Chicago by the Marva Collins hoax (exposed in 1984 by Substance) and put into practice slowly after the passage of the "Amendatory Act" gave Chicago's mayor dictatorial power over the schools in 1995. During the late 1990s, then-President Bill Clinton was praising the lies coming out of Chicago on behalf of corporate "school reform." Clinton put into practice the three "reforms" that were promoted by neo-liberalism: "school reform," "welfare reform," and "housing reform." Pioneers of "school reform" in Chicago included then Mayor Richard M. Daley and the education "reform" team he assembled after he took power over the schools in 1995. Paul G. Vallas and Gery Chico, both political hacks working in 1995 at City Hall (Vallas was budget chief; Chico was chief of staff), were chosen to lead the new "Chicago School Reform Board of Trustees." The "Board of Trustees" replaced the school board for a few years, and instead of a superintendent of schools, the new "reform" law said that Chicago's schools were to be led by a corporate leader, a "Chief Executive Officer."

Key to the putsch against public education in Chicago (and slowly, elsewhere) were two main realities. "Reform" would first be utilized against school systems, most of them urban, serving large numbers of poor (usually minority) children. And the success or failure of "reform" would be measured by so-called "standardized tests."

By the second decade of the 21st Century, all of Clinton's "reforms" had been exposed as reactionary attacks on the democratic rights of poor people and the working class. The struggle forced a focus on the tests that were utilized to measure school and student success. But no matter what the tests, they were always a lousy measure of success at anything except -- taking similar tests. And because the "business model" required cost efficiency, the tests had to be largely multiple choice tests, quickly graded.

By then, Gerald Bracey (now deceased) and others among us (including this reporter) had destroyed the credibility of the testing programs and of the claims -- sometimes characterized as "No Excuses" -- that better schools, at least for poor children, did not require smaller class sizes and more wraparound services, but simply better "teaching," which meant teaching to the testing.

By 2014, hundreds of thousands of parents had organized their children to opt out of the testing programs. These families, most recently under the Obama and Duncan administrations, fully understood that good schools are more complicated than those that could cheaply raise test scores.

Generally, during the more than 20 years following the onset of the reign of testing and corporate "school reform," corporate propaganda in most of the mainstream media had stopped being so silly in its praise of "reform." Hundreds of articles in The New York Times, The Washington Post, The New Yorker, and The Atlantic, among many others, had once praised each new iteration of corporate reform. Central to each dodge was the program of testing. Always testing. But a national, oversimplified program of usually multiple-choice testing as a measure of "reform" was easily debunked. The propagandists continue writing "news" stories about the latest lie, but over time, at the grass-roots level, the families of those facing the sanctions of "reform" refused to go along with it, and understood how ridiculous the program was (and had been all along).

No surprise, then, that in the summer of 2015 the testing crowd is making one last attempt to re-establish the credibility of the testing nonsense, and of all the so-called "teaching" practices of those whose sole job is to raise test scores.

Here is the latest debate:

STEPHEN KRASHEN'S JULY 27, 2015 LETTER TO SCIENTIFIC AMERICAN...

On 7/27/2015 10:48 PM, Stephen Krashen wrote:

Sent to Scientific American, July 27, 2015

When one exposure is enough

The idea that we remember things better when we retrieve them more frequently from memory, as claimed in "Building the 21st-Century Learner" (July 15), applies only to facts and concepts that are irrelevant to us. When a fact or concept solves a problem that is of genuine interest, one exposure is enough. That's why this poem is nonsense:

Do you love me? Or do you not?
You told me once. But I forgot.

Let's stop worrying about better ways of getting students to master material that is irrelevant to them. Let's make school more intellectually compelling.

Stephen Krashen

Original article: http://www.nature.com.ezproxy.uvm.edu/scientificamerican/journal/v313/n2/full/scientificamerican0815-54.html

For full text of the article, plus Susan Ohanian commentary: http://susanohanian.org/show_research.php?id=583

From: Susan Ohanian
To: Stephen Krashen; Peter Farruggio
Cc: Csubstance@aol.com
Sent: Monday, July 27, 2015 12:30 PM

Subject: Re: [elladvocates] Sci Amer's take on testing

My letter is too negative but I could not contain my ire. http://susanohanian.org/show_letter.php?id=1804

On 7/27/2015 11:02 AM, Stephen Krashen wrote:

I responded in detail to the Scientific American years and years ago when they defended heavy phonics. Not published. One reason: I might have been the only one who responded. I posted my response, but didn't see any others.

We have a MUCH better chance if more people write in, saying similar things ... raises the odds that at least one letter will be published. They need to be short and to the point. Short letters. What Kate wrote is the basis for a strong letter. Thanks Kate for being willing to share the article.

Scientific American needs to get at least ten letters about this.

From: "Menken, Kate" kmenken@gc.cuny.edu [elladvocates]
To: elladvocates@yahoogroups.com
Cc: Csubstance@aol.com; Susan Ohanian
Sent: Monday, July 27, 2015 7:37 AM

Subject: Re: [elladvocates] Sci Amer's take on testing

Hi Everyone,

Thank you for sharing this, Sal. I think it's more that the author misses the point, which more than anything has to do with the high-stakes consequences attached to standardized tests today and the accountability systems we have in place. Most educators would agree that assessment can be useful and beneficial for both teaching and learning, and will use a range of classroom-based assessments, from informal to more formal measures. But the main premise of this article -- that the development of better tests will resolve all of the problems associated with NCLB and the CCSS -- is totally misguided because it fails to address the problems associated with high-stakes testing and accountability. I will paste the text of the article below so all can access it. Thanks again for sharing, and to you and Pete for your comments.

Best,

Kate

__________________
Kate Menken
Associate Professor of Linguistics, Queens College
e-mail: kmenken@qc.cuny.edu
Research Fellow, Research Institute for the Study of Language in Urban Society, CUNY Graduate Center
e-mail: kmenken@gc.cuny.edu
Website: katemenken.org
Co-Principal Investigator, CUNY-New York State Initiative for Emergent Bilinguals (cuny-nysieb.org)

BUILDING THE 21st-CENTURY LEARNER

Too often school assessments heighten anxiety and hinder learning. New research shows how to reverse the trend.

In schools across the U.S., multiple-choice questions such as this one provoke anxiety, even dread. Their appearance means it is testing time, and tests are big, important, excruciatingly unpleasant events.

But not at Columbia Middle School in Illinois, in the classroom of eighth grade history teacher Patrice Bain. Bain has lively blue eyes, a quick smile, and spiky platinum hair that looks punkish and pixieish at the same time. After displaying the question on a smartboard, she pauses as her students enter their responses on numbered devices known as clickers.

"Okay, has everyone put in their answers?" she asks. "Number 19, we're waiting on you!" Hurriedly, 19 punches in a selection, and together Bain and her students look over the class's responses, now displayed at the bottom of the smartboard screen. "Most of you got it -- John Glenn -- very nice." She chuckles and shakes her head at the answer three of her students have submitted. "Oh, my darlings," says Bain in playful reproach. "Khrushchev was not an astronaut!"

Bain moves on to the next question, briskly repeating the process of asking, answering and explaining as she and her students work through the decade of the 1960s.

When every student gives the correct answer, the class members raise their hands and wiggle their fingers in unison, an exuberant gesture they call "spirit fingers." This is the case with the Bay of Pigs question: every student nails it.

"All right!" Bain enthuses. "That's our fifth spirit fingers today!"

The banter in Bain's classroom is a world away from the tense standoffs at public schools around the country. Since the enactment of No Child Left Behind in 2002, parents' and teachers' opposition to the law's mandate to test "every child, every year" in grades three through eight has been intensifying. A growing number of parents are withdrawing their children from the annual state tests; the epicenter of the "opt-out" movement may be New York State, where as many as 90 percent of students in some districts reportedly refused to take the year-end examination last spring. Critics of U.S. schools' heavy emphasis on testing charge that the high-stakes assessments inflict anxiety on students and teachers, turning classrooms into test-preparation factories instead of laboratories of genuine, meaningful learning.

In the always polarizing debate over how American students should be educated, testing has become the most controversial issue of all. Yet a crucial piece has been largely missing from the discussion so far. Research in cognitive science and psychology shows that testing, done right, can be an exceptionally effective way to learn. Taking tests, as well as engaging in well-designed activities before and after tests, can produce better recall of facts -- and deeper and more complex understanding -- than an education without exams. But a testing regime that actively supports learning, in addition to simply assessing, would look very different from the way American schools "do" testing today.

What Bain is doing in her classroom is called retrieval practice. The practice has a well-established base of empirical support in the academic literature, going back almost 100 years -- but Bain, unaware of this research, worked out something very similar on her own over the course of a 21-year career in the classroom.

"I've been told I'm a wonderful teacher, which is nice to hear, but at the same time I feel the need to tell people: 'No, it's not me -- it's the method,' " says Bain in an interview after her class has ended. "I felt my way into this approach, and I've seen it work such wonders that I want to get up on a mountaintop and shout so everyone can hear me: You should be doing this, too!' But it's been hard to persuade other teachers to try it."

Then, eight years ago, she met Mark McDaniel through a mutual acquaintance. McDaniel is a psychology professor at Washington University in St. Louis, half an hour's drive from Bain's school. McDaniel had started to describe to Bain his research on retrieval practice when she broke in with an exclamation. "Patrice said, 'I do that in my classroom! It works!'" McDaniel recalls. He went on to explain to Bain that what he and his colleagues refer to as retrieval practice is, essentially, testing. "We used to call it 'the testing effect' until we got smart and realized that no teacher or parent would want to touch a technique that had the word 'test' in it," McDaniel notes now.

Retrieval practice does not use testing as a tool of assessment. Rather it treats tests as occasions for learning, which makes sense only once we recognize that we have misunderstood the nature of testing. We think of tests as a kind of dipstick that we insert into a student's head, an indicator that tells us how high the level of knowledge has risen in there -- when in fact, every time a student calls up knowledge from memory, that memory changes. Its mental representation becomes stronger, more stable and more accessible.

Why would this be? It makes sense considering that we could not possibly remember everything we encounter, says Jeffrey Karpicke, a professor of cognitive psychology at Purdue University. Given that our memory is necessarily selective, the usefulness of a fact or idea -- as demonstrated by how often we have had reason to recall it -- makes a sound basis for selection. "Our minds are sensitive to the likelihood that we'll need knowledge at a future time, and if we retrieve a piece of information now, there's a good chance we'll need it again," Karpicke explains. "The process of retrieving a memory alters that memory in anticipation of demands we may encounter in the future."

Studies employing functional magnetic resonance imaging of the brain are beginning to reveal the neural mechanisms behind the testing effect. In the handful of studies that have been conducted so far, scientists have found that calling up information from memory, as compared with simply restudying it, produces higher levels of activity in particular areas of the brain. These brain regions are associated with the so-called consolidation, or stabilization, of memories and with the generation of cues that make memories readily accessible later on. Across several studies, researchers have demonstrated that the more active these regions are during an initial learning session, the more successful is study participants' recall weeks or months later.

According to Karpicke, retrieving is the principal way learning happens. "Recalling information we've already stored in memory is a more powerful learning event than storing that information in the first place," he says. "Retrieval is ultimately the process that makes new memories stick." Not only does retrieval practice help students remember the specific information they retrieved, it also improves retention for related information that was not directly tested. Researchers theorize that while sifting through our mind for the particular piece of information we are trying to recollect, we call up associated memories and in so doing strengthen them as well. Retrieval practice also helps to prevent students from confusing the material they are currently learning with material they learned previously and even appears to prepare students' minds to absorb the material still more thoroughly when they encounter it again after testing (a phenomenon researchers call "test-potentiated learning").

Hundreds of studies have demonstrated that retrieval practice is better at improving retention than just about any other method learners could use. To cite one example: in a study published in 2008 by Karpicke and his mentor, Henry Roediger III of Washington University, the authors reported that students who quizzed themselves on vocabulary terms remembered 80 percent of the words later on, whereas students who studied the words by repeatedly reading them over remembered only about a third of the words. Retrieval practice is especially powerful compared with students' most favored study strategies: highlighting and rereading their notes and textbooks, practices that a recent review found to be among the least effective.

And testing does not merely enhance the recall of isolated facts. The process of pulling up information from memory also fosters what researchers call deep learning. Students engaging in deep learning are able to draw inferences from, and make connections among, the facts they know and are able to apply their knowledge in varied contexts (a process learning scientists refer to as transfer). In an article published in 2011 in the journal Science, Karpicke and his Purdue colleague Janell Blunt explicitly compared retrieval practice with a study technique known as concept mapping. An activity favored by many teachers as a way to promote deep learning, concept mapping asks students to draw a diagram that depicts the body of knowledge they are learning, with the relations among concepts represented by links among nodes, like roads linking cities on a map.

In their study, Karpicke and Blunt directed groups of undergraduate volunteers -- 200 in all -- to read a passage taken from a science textbook. One group was then asked to create a concept map while referring to the text; another group was asked to recall, from memory, as much information as they could from the text they had just read. On a test given to all the students a week later, the retrieval-practice group was better able to recall the concepts presented in the text than the concept-mapping group. More striking, the former group was also better able to draw inferences and make connections among multiple concepts contained in the text. Overall, Karpicke and Blunt concluded, retrieval practice was about 50 percent more effective at promoting both factual and deep learning.

Transfer -- the ability to take knowledge learned in one context and apply it to another -- is the ultimate goal of deep learning. In an article published in 2010 University of Texas at Austin psychologist Andrew Butler demonstrated that retrieval practice promotes transfer better than the conventional approach of studying by rereading. In Butler's experiment, students engaged either in rereading or in retrieval practice after reading a text that pertained to one "knowledge domain" -- in this case, bats' use of sound waves to find their way around. A week later the students were asked to transfer what they had learned about bats to a second knowledge domain: the navigational use of sound waves by submarines. Students who had quizzed themselves on the original text about bats were better able to transfer their bat learning to submarines.

Robust though such findings are, they were until recently almost exclusively made in the laboratory, with college students as subjects. McDaniel had long wanted to apply retrieval practice in real-world schools, but gaining access to K-12 classrooms was a challenge. With Bain's help, McDaniel and two of his Washington University colleagues, Roediger and Kathleen McDermott, set up a randomized controlled trial at Columbia Middle School that ultimately involved nine teachers and more than 1,400 students. During the course of the experiment, sixth, seventh and eighth graders learned about science and social studies in one of two ways: 1) material was presented once, then teachers reviewed it with students three times; 2) material was presented once, and students were quizzed on it three times (using clickers like the ones in Bain's current classroom).

When the results of students' regular unit tests were calculated, the difference between the two approaches was clear: students earned an average grade of C+ on material that had been reviewed and A- on material that had been quizzed. On a follow-up test administered eight months later, students still remembered the information they had been quizzed on much better than the information they had reviewed.

"I had always thought of tests as a way to assess -- not as a way to learn -- so initially I was skeptical," says Andria Matzenbacher, a former teacher at Columbia who now works as an instructional designer. "But I was blown away by the difference retrieval practice made in the students' performance." Bain, for one, was not surprised. "I knew that this method works, but it was good to see it proven scientifically," she says. McDaniel, Roediger and McDermott eventually extended the study to nearby Columbia High School, where quizzing generated similarly impressive results. In an effort to make retrieval practice a common strategy in classrooms across the country, the Washington University team (with the help of research associate Pooja K. Agarwal, now at Harvard University) developed a manual for teachers, How to Use Retrieval Practice to Improve Learning.

Even with the weight of evidence behind them, however, advocates of retrieval practice must still contend with a reflexively negative reaction to testing among many teachers and parents. They also encounter a more thoughtful objection, which goes something like this: American students are tested so much already -- far more often than students in other countries, such as Finland and Singapore, which regularly place well ahead of the U.S. in international evaluations. If testing is such a great way to learn, why aren't our students doing better?

Marsha Lovett has a ready answer to that question. Lovett, director of the Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University, is an expert on "metacognition" -- the capacity to think about our own learning, to be aware of what we know and do not know, and to use that awareness to effectively manage the learning process.

Yes, Lovett says, American students take a lot of tests. It is what happens afterward -- or more precisely, what does not happen -- that causes these tests to fail to function as learning opportunities. Students often receive little information about what they got right and what they got wrong. "That kind of item-by-item feedback is essential to learning, and we're throwing that learning opportunity away," she says. In addition, students are rarely prompted to reflect in a big-picture way on their preparation for, and performance on, the test. "Often students just glance at the grade and then stuff the test away somewhere and never look at it again," Lovett says. "Again, that's a really important learning opportunity that we're letting go to waste."
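Lovett's point about item-by-item feedback is concrete enough to sketch in code. The following is a minimal, hypothetical illustration -- not any real classroom software -- of a quiz loop that grades each response immediately, shows the correct answer on a miss, and collects the missed items for restudy. The questions are the two from the article's sidebar; the grading logic is an assumption for illustration:

```python
# A minimal sketch of a retrieval-practice quiz with immediate
# item-by-item feedback. The grading logic is an illustrative
# assumption, not any real classroom system.

QUIZ = [
    ("Who was the first American to orbit Earth?", "John Glenn"),
    ("The failed Bay of Pigs invasion involved the U.S. and which country?", "Cuba"),
]

def run_quiz(questions, responses):
    """Grade each response immediately and return the items to restudy."""
    missed = []
    for (question, correct), response in zip(questions, responses):
        if response.strip().lower() == correct.lower():
            print(f"Correct: {correct}")
        else:
            # Immediate feedback: the student sees the right answer now,
            # not weeks later as a bare numerical score.
            print(f"Incorrect. The answer is {correct}.")
            missed.append(question)
    return missed

# Example: one right, one wrong -> one item flagged for restudy.
to_restudy = run_quiz(QUIZ, ["John Glenn", "Guatemala"])
print(f"{len(to_restudy)} item(s) to restudy")
```

The point of the sketch is the loop structure: feedback arrives per item, at the moment of retrieval, and the misses feed directly into the next round of study -- exactly the loop Lovett says standardized testing throws away.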

A few years ago Lovett came up with a way to get students to engage in reflection after a test. She calls it an "exam wrapper." When the instructor hands back a graded test to a student, along with it comes a piece of paper literally wrapped around the test itself. On this paper is a list of questions: a short exercise that students are expected to complete and hand in. The wrapper that Lovett designed for a math exam includes such questions as:

Based on the estimates above, what will you do differently in preparing for the next test? For example, will you change your study habits or try to sharpen specific skills? Please be specific. Also, what can we do to help?

The idea, Lovett says, is to get students thinking about what they did not know or did not understand, why they failed to grasp this information and how they could prepare more effectively in advance of the next test. Lovett has been promoting the use of exam wrappers to the Carnegie Mellon faculty for several years now, and a number of professors, especially in the sciences, have incorporated the technique into their courses. They hand out exam wrappers with graded exams, collect the wrappers once they are completed, and -- cleverest of all -- they hand back the wrappers at the time when students are preparing for the next test.
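The exam-wrapper cycle Lovett describes -- collect each student's reflection with the graded exam, then hand it back when the next test approaches -- can be sketched as simple record-keeping. Everything here (the class name, methods, and sample reflection) is an illustrative assumption, not software Lovett or Carnegie Mellon actually uses:

```python
# Illustrative sketch of the exam-wrapper cycle: store each student's
# completed reflection, then return it before the next test so the
# student revisits their own study plan.

class ExamWrappers:
    def __init__(self):
        self._responses = {}  # student -> list of wrapper reflections

    def collect(self, student, reflection):
        """Store the completed wrapper turned in with the graded exam."""
        self._responses.setdefault(student, []).append(reflection)

    def return_before_next_test(self, student):
        """Hand back past wrappers as the next test approaches."""
        return self._responses.get(student, [])

wrappers = ExamWrappers()
wrappers.collect("A. Student", "Study earlier; redo the practice problems.")
print(wrappers.return_before_next_test("A. Student"))
```

The "cleverest" step the article highlights is the second method: the reflection comes back at exactly the moment it can change behavior, rather than being filed away with the old exam.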

Does this practice make a difference? In 2013 Lovett published a study of exam wrappers as a chapter in the edited volume Using Reflection and Metacognition to Improve Student Learning. It reported that the metacognitive skills of students in classes that used exam wrappers increased more across the semester than those of students in courses that did not employ exam wrappers. In addition, an end-of-semester survey found that among students who were given exam wrappers, more than half cited specific changes they had made in their approach to learning and studying as a result of filling out the wrapper.

The practice of using exam wrappers is beginning to spread to other universities and to K-12 schools. Lorie Xikes teaches at Riverdale High School in Fort Myers, Fla., and has used exam wrappers in her AP Biology class. When she hands back graded tests, the exam wrapper includes such questions as:

Based on your responses to the questions above, name at least three things you will do differently in preparing for the next test. BE SPECIFIC.

"Students usually just want to know their grade, and that's it," Xikes says. "Having them fill out the exam wrapper makes them stop and think about how they go about getting ready for a test and whether their approach is working for them or not."

In addition to distributing exam wrappers, Xikes also devotes class time to going over the graded exam, question by question -- feedback that helps students develop the crucial capacity of "metacognitive monitoring," that is, keeping tabs on what they know and what they still need to learn. Research on retrieval practice shows that testing can identify specific gaps in students' knowledge, as well as puncture the general overconfidence to which students are susceptible -- but only if prompt feedback is provided as a corrective.

Over time, repeated exposure to this testing-feedback loop can motivate students to develop the ability to monitor their own mental processes. Affluent students who receive a top-notch education may acquire this skill as a matter of course, but this capacity is often lacking among low-income students who attend struggling schools -- holding out the hopeful possibility that retrieval practice could actually begin to close achievement gaps between the advantaged and the underprivileged.

This is just what James Pennebaker and Samuel Gosling, professors at the University of Texas at Austin, found when they instituted daily quizzes in the large psychology course they teach together. The quizzes were given online, using software that informed students whether they had responded correctly to a question immediately after they submitted an answer. The grades earned by the 901 students in the course featuring daily quizzes were, on average, about half a letter grade higher than those earned by a comparison group of 935 of Pennebaker and Gosling's previous students, who had experienced a more traditionally designed course covering the same material.

Astonishingly, students who took the daily quizzes in their psychology class also performed better in their other courses, during the semester they were enrolled in Pennebaker and Gosling's class and in the semesters that followed -- suggesting that the frequent tests accompanied by feedback worked to improve their general skills of self-regulation. Most exciting to the professors, the daily quizzes led to a 50 percent reduction in the achievement gap, as measured by grades, among students of different social classes. "Repeated testing is a powerful practice that directly enhances learning and thinking skills, and it can be especially helpful to students who start off with a weaker academic background," Gosling says.

Gosling and Pennebaker, who (along with U.T. graduate student Jason Ferrell) published their findings on the effects of daily quizzes in 2013 in the journal PLOS ONE, credited the "rapid, targeted, and structured feedback" that students received with boosting the effectiveness of repeated testing. And therein lies a dilemma for American public school students, who take an average of 10 standardized tests a year in grades three through eight, according to a recent study conducted by the Center for American Progress. Unlike the instructor-written tests given by the teachers and professors profiled here, standardized tests are usually sold to schools by commercial publishing companies. Scores on these tests often arrive weeks or even months after the test is taken. And to maintain the security of test items -- and to use the items again on future tests -- testing firms do not offer item-by-item feedback, only a rather uninformative numerical score.

There is yet another feature of standardized state tests that prevents them from being used more effectively as occasions for learning. The questions they ask are overwhelmingly of a superficial nature -- which leads, almost inevitably, to superficial learning.

If the state tests currently in use in the U.S. were themselves assessed on the difficulty and depth of the questions they ask, almost all of them would flunk. That is the conclusion reached by Kun Yuan and Vi-Nhuan Le, both then behavioral scientists at RAND Corporation, a nonprofit think tank. In a report published in 2012, Yuan and Le evaluated the mathematics and English language arts tests offered by 17 states, rating each question on the tests on the cognitive challenge it poses to the test taker. The researchers used a tool called Webb's Depth of Knowledge -- created by Norman Webb, a senior scientist at the Wisconsin Center for Education Research -- which identifies four levels of mental rigor, from DOK1 (simple recall) and DOK2 (application of skills and concepts) through DOK3 (reasoning and inference) and DOK4 (extended planning and investigation).

Most questions on the state tests Yuan and Le examined were at level DOK1 or DOK2. The authors used level DOK4 as their benchmark for questions that measure deeper learning, and by this standard the tests are failing utterly. Only 1 to 6 percent of students were assessed on deeper learning in reading through state tests, Yuan and Le report; 2 to 3 percent were assessed on deeper learning in writing; and 0 percent were assessed on deeper learning in mathematics. "What tests measure matters because what's on the tests tends to drive instruction," observes Linda Darling-Hammond, emeritus professor at the Stanford Graduate School of Education and a national authority on learning and assessment. That is especially true, she notes, when rewards and punishments are attached to the outcomes of the tests, as is the case under the No Child Left Behind law and states' own "accountability" measures.

According to Darling-Hammond, the provisions of No Child Left Behind effectively forced states to employ inexpensive, multiple-choice tests that could be scored by machine -- and it is all but impossible, she contends, for such tests to measure deep learning. But other kinds of tests could do so. Darling-Hammond wrote, with her Stanford colleague Frank Adamson, the 2014 book Beyond the Bubble Test, which describes a very different vision of assessment: tests that pose open-ended questions (the answers to which are evaluated by teachers, not machines); that call on students to develop and defend an argument; and that ask test takers to conduct a scientific experiment or construct a research report.

In the 1990s, Darling-Hammond points out, some American states had begun to administer such tests; that effort ended with the passage of No Child Left Behind. She acknowledges that the movement toward more sophisticated tests also stalled because of concerns about logistics and cost. Still, assessing students in this way is not a pie-in-the-sky fantasy: other nations, such as England and Australia, are doing so already. "Their students are performing the work of real scientists and historians, while our students are filling in bubbles," Darling-Hammond says. "It's pitiful."

She does see some cause for optimism: a new generation of tests is being developed in the U.S. to assess how well students have met the Common Core State Standards, the set of academic benchmarks in literacy and math that have been adopted by 43 states. Two of these tests -- Smarter Balanced and the Partnership for Assessment of Readiness for College and Careers (PARCC) -- show promise as tests of deep learning, says Darling-Hammond, pointing to a recent evaluation conducted by Joan Herman and Robert Linn, researchers at U.C.L.A.'s National Center for Research on Evaluation, Standards, and Student Testing (CRESST). Herman notes that both tests intend to emphasize questions at and above level 2 on Webb's Depth of Knowledge, with at least a third of a student's total possible score coming from questions at DOK3 and DOK4. "PARCC and Smarter Balanced may not go as far as we would have liked," Herman conceded in a blog post last year, but "they are likely to produce a big step forward."

IN BRIEF

Since the enactment of No Child Left Behind in 2002, parents' and teachers' opposition to the law's mandate to test "every child, every year" in grades three through eight has been intensifying.

Critics charge that the high-stakes assessments inflict anxiety on students and teachers, turning classrooms into test-preparation factories instead of laboratories of meaningful learning.

Research in cognitive science and psychology shows that testing, done right, can be an effective way to learn. Taking tests can produce better recall of facts and a deeper understanding than an education devoid of exams. Tests being developed to assess how well students have met the Common Core State Standards show promise as evaluations of deep learning.

Who was the first American to orbit Earth?

A NEIL ARMSTRONG

B YURI GAGARIN

C JOHN GLENN

D NIKITA KHRUSHCHEV

The failed Bay of Pigs invasion involved the United States and which country?

A HONDURAS

B HAITI

C CUBA

D GUATEMALA

TESTING THE TEAM PLAYER

The world's most watched test, the PISA, ventures into a new domain: instant messaging By Peg Tyre

When tens of thousands of 15-year-olds worldwide sit down at computers to take the Program for International Student Assessment (PISA) examination this fall, they will be tested on reading, math and science. They will also tackle a new and controversial series of questions designed to measure "collaborative problem solving skills." Instead of short-answer questions or lengthier explanations, the test taker will record outcomes of games, solve jigsaw puzzles and perform experiments with the help of a virtual partner that the test taker can communicate with by typing in a chat box. Although the new test domain is still experimental, PISA officials believe the results from these novel problems will push governments to better equip their young people to thrive in the global economy.

Critics of the unit say that PISA has stepped backward into an old and acrimonious debate about whether skills such as critical thinking and collaboration are teachable skills and whether they can be taught independent of content.

Given the pace of technological innovation, schools must adapt, and the new domain gives schools a road map to do that, says Jenny Bradshaw, senior PISA project manager, who oversees the test: "Working with unseen partners, especially online, will become a bedrock skill for career success. Increasingly, this is the way the workplace and the world will function."

It is a departure for the 15-year-old exam, which is coordinated by the Organization for Economic Co-operation and Development (OECD), a coalition of 34 member countries guided by industry. Since it was rolled out in 2000, the PISA exam has measured a student's ability to use reading, math and science in real-life settings. The PISA rankings and the headlines they generate quickly became a flashpoint for policy makers concerned about international competitiveness. The PISA score ranking has fueled, at least in part, a patchwork of efforts at school reform in the U.S. and Europe. America's mediocre performance on the PISA helped to prompt President Barack Obama to vow in 2009 that U.S. students must "move from the middle to the top of the pack in science and math" within a decade.

In 2008 tech industry giants Cisco, Intel and Microsoft, concerned that the job applicants they were seeing were poorly prepared for crucial tasks, began funding their own research through a group called Assessment & Teaching of 21st Century Skills (ATC21S) to identify and promote so-called 21st-century skills -- roughly the ability to think critically and creatively, to work cooperatively, and to adapt to the evolving use of technology in business and society. Over several years ATC21S persuaded PISA to begin testing students across the globe for some of these abilities -- and found academics to provide a research framework for how this might be done.

Three years ago the PISA exam added questions that were supposed to ferret out the problem-solving abilities of 15-year-olds around the globe. (PISA says Chinese students are good problem solvers. Israelis, not so much. Americans fall somewhere in the middle.) A wired, global economy, the test framers decided, requires an even more specific set of skills -- group problem solving mediated by the Internet. This year PISA will have students in 51 countries put collaborative problem solving under the microscope.

The test questions themselves are alternately fun and frustrating. Although researchers at ATC21S believe it is best to test collaborative problems through actual collaboration, PISA test takers will be paired with a virtual partner dubbed "Abby." Together the test taker and Abby will be expected, for example, to determine the prime conditions for fish living in an aquarium when the tester controls water, scenery and lighting and Abby controls food, fish population and temperature. To solve the task, the student must build consensus around how to solve the problem, respond to concerns, clear up misunderstandings, share information from trials and synthesize the results to come up with the correct answer.

Plenty of critics say the new domains are a blunder. "Is there an independent set of skills -- in this case, collaborative problem solving -- that is transferable across domains of knowledge?" asks Tom Loveless, an education researcher at the Brookings Institution. "Is problem solving between two biologists the same as problem solving between two historians? Or is it different? Progressive educators since John Dewey have insisted it is the same, but we just don't know that."

School systems that want to prepare students for the future should help them achieve mastery of complex math, science and literacy instead of putting resources into promoting nebulous concepts.

PISA's Bradshaw acknowledges that questions remain about the innovative domains but says that she and her team believe it is an experiment worth trying. While PISA researchers conduct validation studies and focus groups on collaborative problem solving, others are already working on PISA's next frontier. By 2018, she says, her team will have come up with a valid way to measure "global competence."

Because it is true in education that what gets tested gets taught, ATC21S is preparing for the international hand-wringing from low-ranked countries by offering videos of classrooms where the researchers say teachers and students are getting it right. It has also rolled out a MOOC (massive open online course) to train teachers how to bring collaborative problem solving into their classrooms; 30,000 teachers have enrolled in the course, and a quarter of them have completed it.

Peg Tyre is a longtime education journalist and author of The Good School and the best-selling book The Trouble with Boys. She is also director of strategy for the Edwin Gould Foundation, which invests in organizations that send low-income students to and through college.

How much time did you spend reviewing with each of the following?

• Reading class notes? _______ MINUTES

• Reworking old homework problems? _______ MINUTES

• Working additional problems? _______ MINUTES

• Reading the book? _______ MINUTES

Now that you have looked over your exam, estimate the percentage of points you lost due to each of the following:

• _______ % FROM NOT UNDERSTANDING A CONCEPT

• _______ % FROM NOT BEING CAREFUL (I.E., CARELESS MISTAKES)

• _______ % FROM NOT BEING ABLE TO FORMULATE AN APPROACH TO A PROBLEM

• _______ % FROM OTHER REASONS (PLEASE SPECIFY)

Approximately how much time did you spend preparing for the test? (BE HONEST)

Was the TV/radio/computer on? Were you on any social media site while studying? Were you playing video games? (BE HONEST)

Now that you have looked over the test, check the following areas that you had a hard time with:

• APPLYING DEFINITIONS _______

• LACK OF UNDERSTANDING CONCEPTS _______

• CARELESS MISTAKES _______

• READING A CHART OR GRAPH _______

RECALL

Tests That Teach

Quizzes can do more than assess learning -- they can boost it. In a study designed to compare studying versus testing, published in 2008 in the journal Science, psychologists asked four groups of college students to learn 40 Swahili vocabulary words. The first group studied the words and was repeatedly tested on them. Other groups dropped the words they had memorized from subsequent study or testing, or both. One week later students who were repeatedly quizzed on all the words remembered 80 percent, whereas students who only studied the words remembered about a third.

GRAPH: Clear Benefits from Repeated Testing


~~~~~~~~

By Annie Murphy Paul

Annie Murphy Paul is a frequent contributor to the New York Times, Time magazine and Slate. Paul is author of The Cult of Personality Testing and Origins, which was included in the New York Times' list of 100 Notable Books of 2010. Her next book, forthcoming from Crown, is entitled Brilliant: The Science of How We Get Smarter.

Scientific American is a registered trademark of Nature America, Inc. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.

On Jul 26, 2015, at 10:31 PM, Peter Farruggio pfarruggio@utpa.edu [elladvocates] wrote:

Can't log in; but a caveat: I wouldn't put much stock in Scientific American's expertise on education. Remember the infamous (and wrong) article of 16 years ago in which they bought the false claims of the Foorman pro-fonix horse race study done in Alejo, TX?

They say that testing promotes learning. Do they mean the mediated assessment of Vygotsky and Feuerstein? Or are they buying the hogwash of the ed-deformers about the deep intellectualism of the common core?

Pete Farruggio, PhD

Associate Professor

Bilingual Education

University of Texas Rio Grande Valley

Here's to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They're not fond of rules, and they have no respect for the status quo. You can quote them, disagree with them, glorify, or vilify them. About the only thing you can't do is ignore them because they change things. They push the human race forward. And while some may see them as crazy, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do. - Jack Kerouac, letter to Ed White, 1950 "The war is not meant to be won, it is meant to be continuous. Hierarchical society is only possible on the basis of poverty and ignorance." ~ George Orwell

When I was a boy of fourteen my father was so ignorant I could hardly stand to have the old man around. But when I got to be twenty one, I was astonished at how much the old man had learned in seven years -- Mark Twain


