At BetterRhetor we’re concerned with helping students succeed by closing the gap between what they learn in high school and what is expected of them in college. That’s why we’re building a new approach to college preparation, college admissions testing, and college recruitment.
The college-readiness gap is well documented and deeply concerning: the majority of students entering college are not ready for college-level work (Briggs 2009; Conley, Aspengren, Stout, & Veach 2006; Conley 2007; Achieve 2014). The ramifications of this gap are severe: not enough kids go to college; too many need remediation once they get there; too few graduate; too few graduate within four years (NCES 2016; NCPPHE 2010; NCHEMS 2016; Attewell, Heil, & Reisel 2012).
Despite all the emphasis on “college readiness” in education circles, something is obviously not working properly.
We believe that conventional standardized tests, including — perhaps especially — the ACT and SAT, are part of the problem. These two tests have enjoyed a duopoly in college admissions for more than 50 years. Though each has undergone revisions over the years — most recently, the SAT was overhauled to look much more like the ACT — neither has taken much advantage of the technological changes that have grown up around them. They remain essentially what they have always been: multiple-choice tests that measure basic skills, contribute little to authentic learning, and depend for reliability on hyper-contrived testing conditions.
There is a large body of research on the tests, investigating their predictive validity, their fairness, and their impact on education. Conclusions vary, but the predominant view is that 1) high school grade point average is a better predictor of college success than the tests alone; 2) the tests add only a modest measure of incremental predictive validity when considered together with high school GPA; 3) concerted efforts to improve the fairness of the tests have made gains, but the tests still yield score differences for minority groups and women compared with the overall test-taking population (Zwick 2007; Atkinson & Geiser 2009).
Unsurprisingly, students’ scores correlate strongly with the socio-economic status of their families. Wealthier students can afford better test preparation, can afford to take the tests multiple times, and attend high schools with more resources for guiding them through the college admissions process.
Studies tend to agree that test preparation can disadvantage low-income students as well (Briggs 2009). A few large studies have focused on test-optional post-secondary schools, comparing applicants who submitted ACT and/or SAT results with those who didn’t. These studies confirm that GPA is a better predictor of success than the ACT or SAT. They also reveal that students who chose not to submit ACT and/or SAT scores as part of their college applications were more likely to be first-generation-to-college enrollees, minority students, women, Pell Grant recipients, and students with learning differences. The requirement to take the ACT and/or SAT discourages many potentially successful students from even applying to college (Bowen et al. 2009; Hiss & Franks 2014).
In short, students who are economically and socially equipped to participate in the college application and admissions process apply to and are admitted by preferred schools in greater numbers than academically capable but underserved students who don’t have the same advantages. Thus, differences in opportunity widen for underserved subgroups, and economic and social inequality deepens throughout U.S. society (Hiss & Franks 2014).
In significant ways, all students, privileged and underprivileged, are ill-served by the current paradigm. Since the effort and expense that go toward test preparation and test-taking are of little academic value in themselves, they create significant opportunity costs for every student.
These problems are due, in large part, to the fact that the ACT and SAT are not assessments of authentic academic ability. The timed, multiple-choice-focused test formats do not elicit academic knowledge and skills as students will be required to employ them in actual academic settings, or in their lives outside of academics (Cumming & Maxwell 1999; Keating 2014).
The tests measure only what is measurable given the constraints of their formats and their means of delivery and scoring. They do not necessarily provide the information that is most telling of a student’s abilities in authentic academic contexts, or most valuable for colleges and universities in evaluating an applicant’s academic strengths, weaknesses, and potential for success. (See my blog entry on this topic HERE.)
In fact, given that the tests claim to be both “curriculum-based” and measures of college readiness, a strong argument can be made that they are not actually valid tests at all. (I’ve made that argument HERE.)
We at BetterRhetor are working to develop an alternative to this outmoded and detrimental college admissions testing duopoly, so that we can begin to close the college-readiness gap for more students. As part of that effort, we want to encourage the community of interested folks to join us in exploring the problems and finding solutions.
Why Rhetor? Because we believe that finding solutions in education entails thinking and communicating better with one another, encouraging the collaborative problem-solving abilities of a community of passionate rhetors, who are prepared and eager to reason, speak, argue, decide, and do. That’s the kind of community we want to be part of.
Achieve (2014). Rising to the Challenge: Are High School Students Prepared for College and Work? Key findings from surveys among recent high school graduates.
Atkinson, R. C., & Geiser, S. (2009). “Reflections on a century of college admissions tests.” Educational Researcher, 38(9), 665-676.
Attewell, P., Heil, S., & Reisel, L. (2012). “What is academic momentum? And does it matter?” Educational Evaluation and Policy Analysis, 34(1), 27-44.
Bowen, W. C., Chingos, M. M., & McPherson, M. S. (2009). Crossing the Finish Line: Completing College at America’s Public Universities. Princeton: Princeton University Press.
Briggs, D. C. (2009). “Preparation for college admission exams.” Report of the National Association for College Admission Counseling.
Conley, D. (2007). “The challenge of college readiness.” Educational Leadership, 64(7), 23-29.
Conley, D. T., Aspengren, K., Stout, O., & Veach, D. (2006). College Board Advanced Placement best practices course study report. Eugene, OR: Educational Policy Improvement Center.
Cumming, J. J., & Maxwell, G. S. (1999). “Contextualising authentic assessment.” Assessment in Education: Principles, Policy & Practice, 6(2), 177-194.
Hiss, W. C., & Franks, V. W. (2014). “Defining promise: Optional standardized testing policies in American college and university admission.” Report of the National Association for College Admission Counseling.
Keating, E. B. (2014). “How we talk about testing and college writing: the revealing rhetoric of SAT prep books.” Plaza: Dialogues in Language and Literature, 5.1. Winter: 29-38.
National Center for Education Statistics (NCES) (2016). Digest of Education Statistics, Table 326.10. https://nces.ed.gov/programs/digest/d15/tables/dt15_326.10.asp?current=yes
NCHEMS Information Center for Higher Education Policymaking and Analysis (2016). “Graduation Rates.”
National Center for Public Policy and Higher Education (NCPPHE) (2010). Beyond the Rhetoric: Improving College Readiness Through Coherent State Policy.
Zwick, R. (2007). “College admission testing.” Report of the National Association for College Admission Counseling.