If project-based learning were to form the core of curricula in American schools, our problems with large-scale standardized testing would become even more pronounced than they are now. This is not a reason to forgo project-based learning, of course; rather, it’s a reason to find a better way to test.
We do need, and will continue to need, large-scale assessments, despite the many dissatisfactions we may have with them at present. Local assessment by itself doesn’t tell us what we need to know about student performance at the state or national level. Without large-scale assessment, we’re blind to differences among subgroups and regions, and thus cannot make fully informed decisions about who needs more help, where best to put resources, which efforts are working and which aren’t.
Many of the underlying aims of large-scale assessment are laudable: equity, improvement, good stewardship. The problem lies instead in the limitations of the tests’ form. They are severely restricted in the kinds of academic work they can elicit and measure. Thus, to the degree that they demand classroom focus or drive instruction, they actively discourage the authentic academic work that is the aim of project-based learning.
This is so because efficiency and scalability, rather than authenticity, govern their form. The tests are composed of artificial tasks — mostly multiple-choice questions — so that student performances can be recorded and evaluated quickly and cheaply. They are administered under highly contrived conditions because the artificiality of the tasks creates particular kinds of problems with security and reliability, problems that can only be addressed by further compounding the artificiality with rigid time strictures, centralized testing locations, and snapshot performances by students isolated from all real-world aids and resources, including one another.
Because of their format restrictions, conventional standardized tests simply cannot provide opportunities for students to demonstrate the array of skills that constitute authentic intellectual work: generating ideas, planning, collaborating, experimenting and revising, spit-shining the finished product. The tests can’t elicit these skills because, in their existing incarnation, they can’t accommodate the time it takes to do the work, and because the cost of evaluating this kind of student work at scale would doom the whole enterprise from the start.
In other words, the real academic work that is the aim of project-based learning cannot be captured by conventional large-scale standardized tests. If PBL formed the core of curricula, then, the existing testing paradigm would utterly fail at generating the student performance information that justifies testing in the first place.
Of course, one might argue, standardized tests never claim to be more than indirect measures. They’re proxies designed to indicate larger sets of skills, not exhaustively evaluate everything a student knows and can do. The partialness and indirectness of the measurement is precisely the concession we make to time and cost constraints. And anyway, some information is better than none; the assessments just need to be good enough to sample content domains and show whether kids are mastering basic skills.
If that’s the case, then justification for the tests comes down to whether basic skills are a good enough proxy for the higher order skills PBL would place at the center of education. Would we be OK with making funding and accountability decisions based on such a limited slice of what we’re actually teaching? Or does there come a point at which the disparity between what the tests can measure and what we believe students need to know becomes so great as to render the proxy argument altogether implausible?
The problem with standardized tests is their form, not necessarily their function. Fundamentally, it’s a technological problem. The multiple-choice question is a 20th-century technology that made possible the economies of scale that have driven the format of standardized testing for nearly a century. Today’s tests still rely predominantly on multiple-choice questions, even as they migrate from paper to computer. New “technology-enhanced” item types appearing on computer-based tests are still mostly elaborated forms of the multiple-choice format. They still fall short of eliciting from students the skills and abilities that lie at the heart of authentic academic work.
The tests have remained rooted in mid-twentieth-century technology, even as immensely powerful networked digital technologies have arisen around them. Today we could use existing technologies to capture all of the skills and abilities students display as they engage in extended academic projects, from planning through completion. We could facilitate and record collaborative interactions within work groups, whether localized in a single classroom, or assembled from across the nation or world. We could even generate assessable information about personal qualities such as persistence and resilience, capturing the effort students put into generating solutions, for example, or revising their work in response to feedback, or contributing ideas and assistance to others.
Existing technologies can elicit and capture the critical skills and abilities, both cognitive and non-cognitive, cultivated by project-based learning. With some creativity in modes of assessment, and a willingness to re-think hidebound approaches to test reliability and validity, we can replace a model of testing that runs counter to our most ambitious education goals.
For project-based learning to someday take a primary place in education, we will need a form of large-scale assessment that can validate its efficacy and equity across groups and regions. That assessment will need to be embedded within instruction and learning activities, an integral dimension of learning itself, rather than an interruption of or intrusion into the authentic, challenging and satisfying experience of practicing and trying, making and doing, building and creating. It will need to be an authentic assessment capable of capturing the full array of knowledge and skills required for the successful completion of real academic work.
We invite you to contribute your ideas and comments below.
© 2016 BetterRhetor Resources LLC
LevelUp is a blog by William Bryant, examining Assessments, College Admissions, and the Readiness Gap. William is Founder and CEO of BetterRhetor, a company developing new ideas and technologies to address challenges in assessment and instruction. He can be reached at firstname.lastname@example.org. Join the BetterRhetor email list HERE.