It seriously looks like SAT/ACT testing is going away
A little over a year ago, I wrote about the accelerating rate at which colleges and universities were going test optional. I explained that test-optional isn’t going to last, simply because there’s no good reason for it to remain. I’m going to re-post the entire thing here, but read all the way to the bottom for really important updates and recommendations. Or at least skip down to the bottom for the updates. But really, just keep reading.
Colleges becoming test-optional, which means they do not require SAT or ACT scores for applicants but will consider them, has become a major trend in the past decade. The movement got a big boost when the University of Chicago—a very prestigious and selective university—announced last year that they’d be going test-optional. I understand that this is the trend, but I’m predicting that test-optional isn’t going to last for long. It’s not going away in the next year or two, but it’s not going to become normal. Maybe required testing will make a comeback, maybe some new test will come along to displace the SAT and ACT, or maybe (but less likely) standardized testing will disappear. But the middle ground of “send us scores if you want to” won’t be around for too long, because there’s no good reason for it to exist.
To understand why, it helps to consider the nature of the Test Paradox itself, starting with a small example: a single teacher.
Say you’re a high school teacher, and you want to get an idea of how your students are progressing. You want to know how your students are doing overall, and you want to know who has really mastered the content and who is struggling. So you come up with an objective assessment—a test. Everybody gets the same questions, or at least really similar questions, so that you can make valid comparisons among classes and individuals. You tell your classes “next week I’m going to take a class period and do a standardized assessment of everyone to see how we’re progressing and to identify who has really mastered the content and who needs more help.” The first two questions you’ll hear are probably “do we have to?” and “will it be for a grade?” But those are really the same question. Because if you say “no, it’s not for a grade. It’s just an assessment to help me plan,” you’ve lost a lot of people. They have no buy-in, no ownership, no skin in the game, and they see no need to bother with your assessment. They know they don’t have to. Few will give it their best attention, and some will completely blow it off. In other words, your assessment to get data won’t get accurate data. That’s a problem.
Instead, you say “yes, of course it’s for a grade. This test is supposed to show how much you’ve mastered, and your grades are supposed to reflect how much you’ve mastered. This test is a major grade.” Now it counts, and now the students care. The problem, though, is that many now care too much. They’re motivated to cram the night before to get questions right even if they haven’t really mastered the content and will forget it by the next day. They’re motivated to cheat. They’re motivated to scrutinize each question looking for problems. They’re motivated to find the ways you’re possibly being unfair and complain and get the questions thrown out. They’re motivated—and to be clear, I don’t blame them—to get a higher score, not give you more accurate data. And so making the test count won’t get you accurate data. That’s a problem. This is the Test Paradox. There is no winning when it comes to standardized tests.
In a single classroom, or even in a single school, there are plenty of ways to work around the Test Paradox to minimize the problem. You give more than one assessment, and you give different types of assessments, the more “authentic” the better. You may have an objective, multiple-choice test, but you’ll also have some combination of projects and essays and labs and shorter quizzes and homework and class participation. These other things, on top of their own good qualities, also take some of the pressure off the objective test. Students can’t blow off the test, but they may be less motivated to artificially skew it in their favor.
Another tempting thing to try is making the test optional. When students ask “does it count as a grade?” you can say something like “I want you to try your best, and I’ll make it a grade. But if it lowers your overall grade, I won’t count it. So it can’t hurt you, only help you.” I’ve seen plenty of teachers use this compromise, and I’ve used it plenty of times myself. But here’s the problem: making it optional doesn’t take away either side of the Test Paradox; it combines them. Some students are now motivated to blow it off, and some are now motivated to skew it in their favor. When it comes to accurate data, it’s the worst of both worlds.
You see where I’m going with this. But first, let me tell you another story about optional grades.
One year as an AP Literature teacher I tried an experiment with optional homework. For several of the novels we were reading, I had weekly reading guides to help students through them. Some of the guides I bought from a publishing company, and some I wrote myself. They had questions at different levels of complexity that followed the chapters. The guides could be useful for keeping students focused on long, difficult reading, and they could help students think about the reading as they went. But not all my students needed that extra help, and I was looking for ways to lower the amount of homework I gave. So I used the optional compromise: I told them that they didn’t have to do the homework, but if it was helpful to them and they made an effort, I’d give them some extra credit.
To my surprise, almost everyone did the homework, every time. I was proud of my students for doing this extra step that wasn’t even required, and I was proud of myself for finding a way to take some homework pressure off but still make meaningful work for those who chose it.
And then one day I saw an answer so incredibly wrong and stupid that it made me laugh. That happens sometimes—even smart people sometimes write bad answers. But then I saw the same stupid answer on the next person’s homework. And the next. Once I noticed they all had the same bad answer, I noticed how alike all the other answers were. Literally 90% of my students turned in the same answers—they all cheated!
Nobody wanted to say anything in class, but over the next few weeks I had enough private conversations to understand. They knew it was optional, but they didn’t feel they could afford to let other people get that little bit of extra credit and pass it up themselves. They were too competitive for grades, and nobody wanted to see their class rank go down just because they chose not to do some extra homework. I knew that the credit wouldn’t be enough to have that big an effect, but they didn’t know that. They also took the homework’s optional status as an indicator that it wasn’t very important, so they felt little guilt about taking shortcuts or copying. Plus they guessed, correctly, that because it was optional and basic, I wouldn’t scrutinize it too much. So it was easy and tempting to do it as a large group and copy everyone else’s answers. I was really angry and demoralized, but a number of my students felt like I had set them up in a situation where they either had to cheat with everybody else or do a lot of extra homework. In a closed, competitive system, something can be optional or valuable, but not both.
Like I said, if you’re a teacher or a school, then there are ways around the Test Paradox problem. But if your population gets too big, then that doesn’t work so well. If entire school districts, or states, or the nation want to have data to compare individuals and groups, they need standardized data. They’re stuck with the Test Paradox.
If you have no standardized testing, then it’s much more difficult to know which schools/districts/states are making more progress. If you do have standardized testing, then there are complications. Teachers get stuck “teaching to the test” rather than providing a robust education. Students are motivated to cram, prep, and cheat. On a national level, with the SAT and ACT, there’s an entire industry around helping students do better on the tests that has little to do with actually educating the students. People with more access to this industry—people with more money to afford test prep or in school districts with extra resources to provide test prep—get an advantage. Many people who don’t perform well on one or two standardized tests assume they’re not ready for college and don’t bother applying, even though they might succeed there.
That’s where we are right now. SAT and/or ACT scores have traditionally been a very important part of college applications. They’re the only standardized measure for comparing students from different geographic areas with different school curricula and different grading scales. In that sense those test scores, imperfect as they may be, are really useful. On the other hand, we know those test scores are skewed by people who put a lot of effort into boosting their scores. We know the scores are skewed to favor students from wealthier backgrounds. We know that those scores are sometimes skewed by cheating.
For these reasons very few colleges make admissions decisions based only on test scores. They also consider your transcript, essays and personal statements, activities and organizations you’re a part of, recommendation letters, and interviews. Just like individual teachers, they’re trying to work around the Test Paradox.
And increasingly, they’re also making the tests optional. Why do they decide not to require test scores? To a large degree they do it to increase the diversity of their applicants. They don’t want students with lower test scores to be afraid to apply. Especially considering some of the well-documented advantages that wealthier students often—but not always—have on these tests, these colleges are trying to give themselves a bigger pool in which to search for talent. They’re trying to combat undermatching, which is a noble goal. So far the results are mixed. They’re also making the tests optional because they recognize that test scores aren’t a very good indicator of college success. Your high school grades and your mindset are much better predictors of college readiness and success than SAT or ACT scores.
There are also some less-noble reasons why going test-optional is tempting for colleges. If they get more applicants, they necessarily end up denying more applicants, which makes them more selective—the primary way we judge how “elite” a school is. Plus, if people with lower test scores never report their scores, then the university’s average test scores can go up and make the school look even more “elite.” But the data becomes unreliable. When I see the SAT or ACT mid-range scores for a test-optional college, I have to also look to see how many people submitted scores and try to guess what the actual mid-range might be if all the scores were there. When I try to compare two different test-optional colleges with two different proportions of students who submitted scores, then the comparisons get shaky. In other words: the objective, standardized data is no longer objective and standardized. It’s suddenly a lot less useful.
Test-optional colleges are willing to ignore test scores and look at the whole package—grades, essays, recommendations, demonstrated interest, and interviews. They also provide testing data that is incomplete and therefore not useful. So why don’t they just get rid of testing altogether? Why not move from test-optional to test-blind? There’s no reason for the tests!
Why not? Ask Sarah Lawrence College. They went test-blind, neither requiring nor even accepting SAT or ACT scores. But this put them in conflict with U.S. News & World Report’s ranking system—the other major way we judge how “elite” a college is. In 2007, Sarah Lawrence’s president wrote in the Washington Post about the tension with U.S. News. By 2012 they had gone back to being test-optional, and back to being ranked. Right now the only test-blind college is Hampshire, and they’re on the edge of closing down.
There’s another thing that keeps schools holding on to test scores even if they want to let them go—a huge number of Americans still see standardized test scores as objective, unbiased, and useful. A large part of Harvard’s discrimination trial rests on SAT scores. When Americans talk about “merit” and “meritocracy” in higher education, they often rely on standardized test scores to show merit. They’re not going to let go of that easily.
Let’s put all these together. Our nation’s colleges and universities are stuck in the Test Paradox. With no testing, students can ignore national standards and focus on boosting “soft” factors like recommendation letters and padded resumes. With no national standards, schools can inflate grades to make their students look better than they really are. But with national testing, there’s immense scrutiny, pressure, and complication. There are good reasons to get rid of testing: test scores aren’t a good predictor of college success, and the system is (unintentionally) rigged to favor wealthier students. There are good reasons to keep testing: it provides consistent data across a wide range of students, schools, regions, and even nations; it keeps you in the valuable ranking system; there’s a lot of public faith in it. With standardized testing, there’s no winning.
But a test-optional compromise doesn’t solve any of the problems. It just muddles them. Keeping the national testing system alive just so colleges can keep trying to manipulate their rankings isn’t a good use of time or resources. And as more schools become test-optional and their data becomes less reliable, the rankings based on that data will become less useful. Sooner or later we’re going to have to do something. Maybe a more robust and fair standardized test will replace the SAT and ACT and become mandatory. Maybe the ranking systems will keep getting more outcome-oriented (how much the schools improve their students) and less input-oriented (how good the students are when they’re accepted), so they’ll stop requiring test scores for rankings. Maybe artificial intelligence will get so good at analyzing student data, including writing samples, that the tests will become obsolete. But I’m predicting that the test-optional compromise we have now won’t last for long, because there’s no good reason for it to last.
Ok, now it’s time for updates.
First, this January, Northern Illinois University took the step of becoming test-blind in admissions. They opted out of using tests altogether.
And now so, so many colleges have gone test-optional, at least for the next year or two, because of COVID-19 and the inability to even administer the ACT and SAT. Everyone wants to know if those schools will return to testing once the pandemic passes. But right now the schools are focused on figuring out what the heck to do this fall.
Then there was the big bombshell: in May the entire University of California system announced that they’re going test-optional for two years, and then test-blind after that. For such a big and reputable system to ditch the tests is huge.
This month, at least one other school—Catholic University of America—has said they’re going test-blind. So there’s still momentum.
And now there’s an even bigger bombshell. Remember that one reason some universities feel they have to keep considering test scores is that they can’t be in the U.S. News rankings without doing so? This week, U.S. News announced that they will rank test-blind schools. “We need them for the rankings” is no longer going to be a reason to keep taking test scores.
So, yeah. Things are changing fast.
Here’s what I recommended for high school students last month concerning the SAT and/or ACT:
What I’m very comfortable telling everyone is to take your testing plans down at least one notch. Whatever your plan had been at the beginning of this school year, dial it back.
If you were really aggressive about testing and planned to take both the ACT and the SAT at least once: choose just one and save the time you would have spent preparing for the other. If you planned to take a test prep course and take the exam multiple times so you can “superscore”: do the prep or the multiple tries, but not both. If you planned to take a test and then decide if you want to take it again, possibly with a test prep course: decide that you’re going to take it only once, and do your best. If you were thinking that you may or may not take a test, because you’re not sure you really need to take it: don’t.
I don’t know if that’s going to sound like good advice for long. Soon-to-be 12th graders, especially, might already have some test scores to report. And even with as many temporarily test-optional schools as there are, there’s a fairly good chance you’ll want to apply to a college that still wants test scores. On the other hand, it’s looking less likely every day that there will be any more testing dates this year. Our COVID curve keeps rising, and online versions of the SAT or ACT probably aren’t happening. We’ll see where we are in September.
I know that test scores are—or at least have been—really important to college-bound high school students, so I’ll keep updating as the year goes on. Stay safe out there, and help keep others safe.
Thanks for reading! If you enjoyed this post, here are a few easy things you can do:
Share it on your social media feeds so your friends and colleagues can see it too.
Read these related posts: “The Glossary: test optional,” “SAT scores should look a lot more like AP scores,” “Are your test scores good?”, “Good news for eliminating test optional,” and “Should you bother to take the SAT or ACT?”
Ask a question—or share other resources—in the comments section.
Apply with Sanity doesn’t have ads or annoying pop-ups. It doesn’t share user data, sell user data, or even track personal data. It doesn’t do anything to “monetize” you. You’re nothing but a reader to me, and that means everything to me.
Photo by Zoe Herring.
Apply with Sanity is a registered trademark of Apply with Sanity, LLC. All rights reserved.