Matt's Past SAT/ACT News Update

Matt O'Connor

Jan 18, 2024

 
David Leonhardt of The New York Times has written a lengthy article questioning the widespread criticism of the SAT and ACT and their recent de-emphasis in college admissions. The article argues that the exams provide more information than high school grades about which students will succeed in college and beyond.

[Excerpts]

After the Covid pandemic made it difficult for high school students to take the SAT and ACT, dozens of selective colleges dropped their requirement that applicants do so. Colleges described the move as temporary, but nearly all have since stuck to a test-optional policy. It reflects a backlash against standardized tests that began long before the pandemic, and many people have hailed the change as a victory for equity in higher education.

Now, though, a growing number of experts and university administrators wonder whether the switch has been a mistake. Research has increasingly shown that standardized test scores contain real information, helping to predict college grades, chances of graduation and post-college success. Test scores are more reliable than high school grades, partly because of grade inflation in recent years.

Without test scores, admissions officers sometimes have a hard time distinguishing between applicants who are likely to do well at elite colleges and those who are likely to struggle. Researchers who have studied the issue say that test scores can be particularly helpful in identifying lower-income students and underrepresented minorities who will thrive. These students do not score as high on average as students from affluent communities or white and Asian students. But a solid score for a student from a less privileged background is often a sign of enormous potential.

“Standardized test scores are a much better predictor of academic success than high school grades,” Christina Paxson, the president of Brown University, recently wrote. Stuart Schmill — the dean of admissions at M.I.T., one of the few schools to have reinstated its test requirement — told me, “Just getting straight A’s is not enough information for us to know whether the students are going to succeed or not.”

An academic study released last summer by the group Opportunity Insights, covering the so-called Ivy Plus colleges (the eight in the Ivy League, along with Duke, M.I.T., Stanford and the University of Chicago), showed little relationship between high school grade point average and success in college. The researchers found a strong relationship between test scores and later success.

Likewise, a faculty committee at the University of California system — led by Dr. Henry Sánchez, a pathologist, and Eddie Comeaux, a professor of education — concluded in 2020 that test scores were better than high school grades at predicting student success in the system’s nine colleges, where more than 230,000 undergraduates are enrolled. The relative advantage of test scores has grown over time, the committee found.

“Test scores have vastly more predictive power than is commonly understood in the popular debate,” said John Friedman, an economics professor at Brown and one of the authors of the Ivy Plus admissions study.

With the Supreme Court’s restriction of affirmative action last year, emotions around college admissions are running high. The debate over standardized testing has become caught up in deeper questions about inequality in America and what purpose, ultimately, the nation’s universities should serve.

But the data suggests that testing critics have drawn the wrong battle lines. If test scores are used as one factor among others — and if colleges give applicants credit for having overcome adversity — the SAT and ACT can help create diverse classes of highly talented students. Restoring the tests might also help address a different frustration that many Americans have with the admissions process at elite universities: that it has become too opaque and unconnected to merit.

Last week, three scholars — Bruce Sacerdote and Michele Tine of Dartmouth, along with Friedman — released additional research about some unnamed Ivy Plus colleges. It showed only a modest relationship between high school grades and college grades, partly because so many high school students now receive A’s. The relationship between test scores and college grades, by contrast, was strong. Students who did not submit a test score tended to struggle as much as those who had lower scores.

When I have asked university administrators whether they were aware of the research showing the value of test scores, they have generally said they were. But several told me, not for quotation, that they feared the political reaction on their campuses and in the media if they reinstated tests. “It’s not politically correct,” Charles Deacon, the longtime admissions dean at Georgetown University, which does require test scores, has told the journalist Jeffrey Selingo.

“When you don’t have test scores, the students who suffer most are those with high grades at relatively unknown high schools, the kind that rarely send kids to the Ivy League,” [David] Deming, a Harvard economist, said. “The SAT is their lifeline.”

 
The Leonhardt New York Times piece has received considerable coverage, as well as support, as in this Forbes article and this one at Dartmouth:

[Excerpts from the Forbes article]

David Leonhardt at the New York Times recently wrote an exceptionally important piece about the SAT, highlighting why it’s been foolish and shortsighted for colleges and universities to remove the standardized test from the admissions process. Over the past handful of years, many schools have made the assessment optional, or disallowed it entirely, over bias and inequity concerns.

Among those that have moved away from the SAT is the University of California system, home to some of the best, best known, and most respected schools in the country. In 2020, the Cal system banned consideration of test scores outright, a move that I dryly described at the time as “a bad decision.”

Which it was and is.

What makes the Leonhardt offering so important is not just that it supports my contention about the value of SAT scores in admissions. In addition, the NYT kicks the legs out from under a key admissions premise that we’ve long been told and taken as gospel: that high school grades are a good indicator of college success. It’s not that grades and GPA don’t correlate with success in higher education; it’s that, according to Leonhardt’s reporting, standardized test scores predict college success better than grades do.

[Leonhardt] wrote also that, “Researchers who have studied the issue say that test scores can be particularly helpful in identifying lower-income students and underrepresented minorities who will thrive. These students do not score as high on average as students from affluent communities or white and Asian students. But a solid score for a student from a less privileged background is often a sign of enormous potential.”

It’s this last point that moved me to oppose pulling standardized test scores from the admissions mosaic. Students who, for whatever reason, lacked spotless grades still deserved a way to show schools that they could achieve, that they could prosper in college and beyond, and that failure to post a 4.0 did not mean they were a failure. As I wrote in 2020, “Denying students an opportunity to show their ability in a way other than grades will shut students out. Not maybe, definitely.”

 
There has been critical response to the Leonhardt article, including that by Jon Boeckenstedt:

[Excerpts]

The idea of predicting future human potential and behavior using a standardized test has been around for a while. It’s been a popular idea among the less equity-minded members of human society for some time: usually the people who write the tests write them for people like themselves, and then are surprised when people not like themselves don’t do well on them. The idea of a single, multiple-choice test to measure academic achievement sounds great, until you realize that the students being tested come from 38,000 high schools, following a non-standardized curriculum, taught by (how many? 500,000?) teachers, and maybe then you realize what a ridiculous proposition that is on its face. But it is what we have.

University of Washington professor Jake Vigdor does a great job of taking apart the NYT article in this thread.

The Leonhardt article uses the University of California faculty senate report as a proof point (the one that mentions College Board over 50 times) but fails to cite the rebuttal by the person who has researched the topic more than anyone, and who found serious problems with the study.

But let’s not argue the statistics; they never seem to persuade. Let me just make some random points instead, which will probably also fail:

First, ask yourself how well any admissions office can predict college performance at the individual student level. It’s not hard in aggregate, of course (conflating aggregate patterns with individual predictions is one of the mistakes amateur statisticians make all the time when deciding whether two things are related). But at the individual level? When you look at the data you’ll be shocked. Slam dunks who flunk out and big risks who end up being superstars are more common than you might think. On average, all these anomalies even themselves out.
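To make the aggregate-versus-individual point concrete, here is a minimal simulation; it is not from Boeckenstedt’s post, and the assumed 0.5 predictor/outcome correlation is chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
r = 0.5  # assumed predictor/outcome correlation; real values vary by study

score = rng.standard_normal(n)  # standardized admissions predictor
gpa = r * score + np.sqrt(1 - r**2) * rng.standard_normal(n)  # standardized college GPA

# Aggregate: decile-average GPA climbs smoothly with the predictor.
deciles = np.digitize(score, np.quantile(score, np.linspace(0.1, 0.9, 9)))
print([round(gpa[deciles == d].mean(), 2) for d in range(10)])

# Individual: plenty of "slam dunks" underperform and "big risks" excel.
top = score > np.quantile(score, 0.9)
bottom = score < np.quantile(score, 0.1)
print(f"top-decile scorers below median GPA:    {(gpa[top] < 0).mean():.0%}")
print(f"bottom-decile scorers above median GPA: {(gpa[bottom] > 0).mean():.0%}")
```

With ten thousand simulated students, the group averages line up almost perfectly with the predictor, yet roughly one in six top-decile scorers still lands below the median GPA. That individual-level noise is exactly the anomaly Boeckenstedt describes.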

The reason for the anomalies is simple: way more inputs go into college success than we could ever hope to measure. And many things colleges never consider (how many hours a student will work during the school year; how much private tutoring the parents can afford; how many hours a week the student plans to play Xbox; how far away, or how close, the romantic interest is; how much the student has to help at home with elder or child care, to name a few) weigh heavily on academic performance in college.

Second, if it is really about predicting performance, ask your friendly admissions or institutional research (IR) staff member how well the essay, letters of recommendation, supplemental questions, interviews, or demonstrated interest predict it. These things are all puzzling additions to the selection process. Many institutions used to require a photograph. Why, do you think? The answer is not a nice one.

The Leonhardt article mentions that these factors may introduce greater bias than the SAT into the process (I wrote about it in 2016), yet curiously, not a single one of the institutions listed in the article is, as far as I know, contemplating abandoning those.

Fourth: Ask why any admission requirement exists. Hint: It’s for the good of the university. The SAT probably does help the Highly Rejective Colleges (hat tip to Akil Bello), but probably not in the way you think. Yale has memos in its archive from the mid-60s that recommended the SAT as the best way to reduce expenditure on institutional financial aid, for instance, even while admitting it didn’t do much for predicting GPA.

Fifth, if the SAT does add incremental value to the prediction equation (and it almost always does), ask whether the juice is worth the squeeze. When you account for cash costs, stress, and opportunity costs, and weigh them against the incremental value, I’m guessing you’ll say “no.” The first time I took a university test-optional, the results were interesting: a one-tenth-of-a-point GPA difference after freshman year between test submitters and non-submitters. It was statistically significant, but practically meaningless, as both groups were above 3.0.
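The “significant but meaningless” distinction is easy to reproduce. The sketch below uses invented cohort sizes and GPAs (the institution’s actual data are not given in the excerpt): with a few thousand students per group, a one-tenth-point gap yields a vanishing p-value while the standardized effect size stays small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical cohorts: submitters vs. non-submitters, both averaging above 3.0,
# separated by one-tenth of a GPA point (illustrative numbers, not real data).
submitters = rng.normal(loc=3.30, scale=0.50, size=4000)
non_submitters = rng.normal(loc=3.20, scale=0.50, size=1000)

t, p = stats.ttest_ind(submitters, non_submitters, equal_var=False)
d = (submitters.mean() - non_submitters.mean()) / 0.50  # Cohen's d, known sd

print(f"p-value: {p:.1e}")    # tiny: "statistically significant"
print(f"Cohen's d: {d:.2f}")  # ~0.2: small in practical terms
```

The p-value shrinks as the sample grows, but the effect size does not, which is why a 3.3-versus-3.2 difference can be both statistically significant and practically trivial.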

Finally, ask Leonhardt what the genesis of this article was. I can’t prove it, and of course no reporter would ever reveal their sources, but I’ve got $100 that says this was planted by the College Board, who saw a few cracks, looked at their floundering finances, and decided to take one swing for the fence. I’m sure it’s just coincidence that they found someone who went to Yale for undergrad, and that he included very little input from test critics (other than to label them as “liberals who try to wish away inconvenient facts,” which, if you don’t mind a little editorializing, is pretty lazy and pretty shitty journalism). Spend three minutes with this video if you think such activity would be above College Board.

Not only is it not out of the question that they planted the whole package; it’s highly probable, I think. Not that I can prove it, but if you believe a magic test created by someone who’s probably never taught a high school class is predictive, I can believe what I want.

That’s it. I don’t care, but people asked. And the river of bull**** will start flowing again because of this article. I’m glad I’m not in a flood zone.

 
For emphasis, here is the statistics-related pushback from Jake Vigdor, Professor of Public Policy and Governance at the University of Washington, as mentioned in Jon Boeckenstedt's piece.
 
Another New York Times article examines the questionable practice of colleges paying U.S. News for the right to use the publication's logo and tout their rankings in marketing materials targeted at prospective applicants and their families.

[Excerpts]

Jonathan Henry, a vice president at the University of Maine at Augusta, is hoping that an email will arrive this month. He is also sort of dreading it.

The message, if it comes, will tell him that U.S. News & World Report has again ranked his university’s online programs among the nation’s best. History suggests the email will also prod the university toward paying U.S. News, through a licensing agent, thousands of dollars for the right to advertise its rankings.

For more than a year, U.S. News has been embroiled in another caustic dispute about the worthiness of college rankings — this time with dozens of law and medical schools vowing not to supply data to the publisher, saying that rankings sometimes unduly influence the priorities of universities.

But school records and interviews show that colleges nevertheless feed the rankings industry, collectively pouring millions of dollars into it.

Many lower-profile colleges are straining to curb enrollment declines and counter shrinking budgets. And any endorsement that might attract students, administrators say, is enticing.

Maine at Augusta spent $15,225 last year for the right to market U.S. News “badges” — handsome seals with U.S. News’s logo — commemorating three honors: the 61st-ranked online bachelor’s program for veterans, the 79th-ranked online bachelor’s in business and the 104th-ranked online bachelor’s.

Mr. Henry, who oversees the school’s enrollment management and marketing, said there was just too much of a risk of being outshined and out-marketed by competing schools that pay to flash their shiny badges.

“If we could ignore them, wouldn’t that be grand?” Mr. Henry said of U.S. News. “But you can’t ignore the leviathan that they are.”

Nor can colleges ignore how families evaluate schools. “The Amazonification of how we judge a product’s quality,” he said, has infiltrated higher education, as consumers and prospective students alike seek order from chaos.

The University of Nebraska at Kearney, which has about 6,000 students, bought a U.S. News “digital marketing license” for $8,500 in September. The Citadel, South Carolina’s military college, moved in August to spend $50,000 for the right to use its rankings online, in print and on television, among other places. In 2022, the University of Alabama shelled out $32,525 to promote its rankings in programs like engineering and nursing.

 
With the rise of AI and machine learning, it is inevitable that these technologies will be brought into the college admissions process. Phys.org offers a detailed article examining this issue:

[Excerpts]

A growing number of colleges and universities are using AI to assist admissions offices as they evaluate applicants. Texas A&M University–Commerce and Case Western Reserve University use AI tools like Sia to quickly process college transcripts, extracting information such as student coursework and transfer credits.

Georgia Tech has been experimenting with AI to replicate admissions decisions using machine learning techniques. The technology allows schools to sift through large data sets, evaluating thousands of applications more efficiently. In theory, this frees admissions staff to spend more time thoughtfully considering other aspects of applicants' submitted materials. But what's at stake when AI is incorporated into the review process?

"It's a complicated matter, and it's not the first time that admissions has considered how to use algorithms or formulas in its processes," says Jerome Lucido, founder of USC Rossier's Center for Enrollment Research, Policy and Practice (CERPP) and former chair of and national presenter for the College Board's Task Force on Admissions in the 21st Century.

According to Lucido, there are two distinct, though related, tools in the college admissions process: algorithms and machine learning. A college admissions algorithm is a set of rules or instructions an educational institution uses to evaluate and select applicants. Colleges and universities often have their own unique admissions processes and evaluate applicants against their own criteria. Many institutions use a holistic approach that considers a combination of factors, including academic records, standardized test scores, extracurricular activities, recommendation letters and interviews.
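For illustration, here is a minimal sketch of what an admissions "algorithm" in the rule-based sense Lucido describes might look like; every field, weight, and scale below is invented, not any institution's actual rubric:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    gpa: float                 # high school GPA on a 4.0 scale
    test_score: Optional[int]  # SAT-style score, 400-1600, or None if not submitted
    activities: int            # count of sustained extracurriculars
    essay_rating: int          # 1-5, assigned by a human reader

def rule_based_score(app: Applicant) -> float:
    """Weighted-sum screen; all weights and cutoffs here are invented."""
    score = 40 * (app.gpa / 4.0)
    if app.test_score is not None:
        score += 30 * (app.test_score - 400) / 1200
    score += 15 * min(app.activities, 5) / 5
    score += 15 * (app.essay_rating / 5)
    return score  # applicants above some cutoff advance to full human review

print(rule_based_score(Applicant(gpa=3.8, test_score=1350, activities=4, essay_rating=4)))
```

Machine learning, as described next, replaces these hand-set weights with weights learned from historical outcomes.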

Machine learning, a subset of AI, is a specific technology that can be used to improve data analysis and decision-making. According to researchers at the USC Viterbi School of Engineering's Information Sciences Institute, machines are taught to behave, react and respond the way humans do, using collected data.

Applied to college admissions, machine learning combined with admissions algorithms could streamline the process, identify patterns and generate predictions from historical data. This data-driven approach could potentially help universities identify candidates who possess the characteristics the institution associates with academic success.
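As a toy version of that pattern, one could fit a classifier to historical records and score new applicants. The records and the "success" rule below are synthetic, not any university's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic "historical" records: [gpa, test_percentile, activity_count]
n = 5000
X = np.column_stack([
    rng.uniform(2.0, 4.0, n),  # high school GPA
    rng.uniform(0, 100, n),    # test percentile
    rng.integers(0, 6, n),     # activity count
])
# Invented ground truth: success loosely driven by GPA and test percentile.
logit = -6 + 1.2 * X[:, 0] + 0.03 * X[:, 1] + 0.1 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Score a hypothetical new applicant: GPA 3.6, 85th percentile, 3 activities.
print(f"predicted P(success): {model.predict_proba([[3.6, 85, 3]])[0, 1]:.2f}")
```

The concern raised below follows directly from this setup: such a model can only reward patterns that appear in the historical data it was trained on.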

In a joint statement, the Association for Institutional Research (AIR), EDUCAUSE and the National Association of College and University Business Officers (NACUBO) supported and reinforced the use of data to better understand students. Data also lays the groundwork for innovative approaches to student recruiting. There is a risk, however, in relying too heavily on quantitative data.

AI is efficient for processing data, yes, but it may not capture a student's complete life story, full potential or unique qualities. For instance, factors like personal challenges, resilience and growth might not be reflected in the data, which could lead to missed opportunities for students who have overcome obstacles.

"Many large public flagships and certainly selective privates were already well down a path that wasn't being called AI," says Don Hossler, senior scholar at CERPP. "They were building in algorithms that help them screen students." The use of AI in the screening process, Hossler says, is really the next natural extension.