Matt's Past SAT/ACT News Update:

Matt O'Connor

Sep 25, 2017

James S. Murphy has published an article in Inside Higher Ed about the College Board's citation of grade inflation as a reason to insist on SAT scores in admissions, and the possibility that the College Board itself has contributed to grade inflation through its wildly successful AP program:

[Excerpts]:

...grade reporting might in fact have become less accurate in the past two decades, because grades have become fuzzier, thanks in large part to the College Board’s AP program.

GPAs are conventionally calculated on a 4.0 scale (4.0 equals A, 3.7 equals A-minus, 3.3 equals B-plus, etc.). Since the 1960s, however, some high schools have given honors and AP course grades an extra bump through weighting, so that a B in AP psychology would be calculated as a 4.0 rather than a 3.0. One study has suggested that, because practices are inconsistent across the country, weighted GPAs are less predictive of college success than traditional GPAs. Many colleges unweight grades when they consider applications.
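
[Illustrative aside: the weighting arithmetic above can be made concrete with a short sketch. This assumes one common scheme, a flat +1.0 bump for AP/honors grades; actual weighting practices vary widely from school to school.]

```python
# A minimal sketch of GPA weighting, assuming a flat +1.0 bump for
# AP/honors grades. Real weighting schemes vary across high schools.
LETTER_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7}

def gpa(courses, weighted=False):
    """courses: list of (letter_grade, is_ap_or_honors) tuples."""
    total = 0.0
    for grade, is_advanced in courses:
        points = LETTER_POINTS[grade]
        if weighted and is_advanced:
            points += 1.0  # the "extra bump": a B in AP counts as a 4.0
        total += points
    return total / len(courses)

transcript = [("B", True), ("B+", True), ("A", False), ("A-", False)]
print(gpa(transcript))                 # unweighted: 3.5
print(gpa(transcript, weighted=True))  # weighted: 4.0
```

[The same transcript thus reads as a 3.5 or a 4.0 depending on which convention is reported, which is exactly the ambiguity Murphy describes below.]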

Weighting AP grades isn’t new. The practice goes back to the 1960s. What has changed is that the number of students enrolled in Advanced Placement classes has exploded in the last two decades. In 1998, the year with which Hurwitz and Lee began, more than 600,000 students took at least one AP exam. In 2015, around 2.6 million students did. With four times as many students getting weighted grades, is it any wonder that the average GPA has risen? This might not be the gradual change that has driven GPAs and A’s up, but it is a gradual change that could have the power to do so, especially among a population of students limited to those who take the SAT. According to the study, both the weighted and the unweighted GPAs have increased over time.

Weighted grades could create a false inflation effect because the College Board student questionnaire does not provide clear direction on how to report weighted grades. It is quite possible, even likely, that a significant number of students who received a weighted B in an AP class might report that grade as an A or use their weighted GPA in the College Board survey, so that even though they have an unweighted B average, they will report it as an A.

What this means is that, like those pharmaceutical companies selling laxatives to people addicted to the drugs they make, the College Board is holding out the SAT as an answer to a problem it helped create.

Jon Boeckenstedt (Assoc. VP of Enrollment at DePaul) has written a detailed and rather scathing article about the recent writings of the College Board and ACT, Inc. that have been critical of test-optional colleges.

Surprise: College Board and ACT don’t like Test-optional admissions

[Excerpts]:

Almost every institution that goes test optional has a faculty full of researchers who need to be convinced before changing a policy on admissions, and most places I know that have done this (going back to the California study in 2002) suggest that tests uniquely explain about 2-4% of the variance in freshman grades, and not much of anything beyond that. In short, we don’t need tests to make good admissions decisions. Period. A lot of colleges and universities believe the same thing.

Uniquely is an important word. Tests, by themselves, explain much more of the variance than that. But tests and high school GPA (the best predictor) are strongly correlated. Once you account for that correlation, you discover the tests are mostly of very low value. This makes sense to most researchers, of course, and the people at the testing agencies are aware of this (Wayne Camara of ACT even agreed with me on the 2% point when we talked face-to-face), so they present the data differently, by talking about things like “chances for a grade of B or better.” Everything, it seems, is linear and continuous until it isn’t.
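
[Illustrative aside: "uniquely explain" refers to incremental variance, i.e. the R² a test score adds once high school GPA is already in the model. The simulation below uses invented numbers, not data from the California study, to show how a predictor that looks strong on its own can add only a few points of R² once a strongly correlated predictor is included.]

```python
# Hypothetical simulation of unique (incremental) variance explained.
# All coefficients here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ability = rng.normal(size=n)
# HS GPA and test score are noisy, strongly correlated proxies for ability.
hs_gpa = 0.9 * ability + 0.44 * rng.normal(size=n)
test = 0.9 * ability + 0.44 * rng.normal(size=n)
frosh_gpa = 0.7 * ability + 0.7 * rng.normal(size=n)

def r2(predictors, y):
    """R^2 of an OLS fit of y on the given predictors plus an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

print(f"test alone:     R^2  = {r2([test], frosh_gpa):.3f}")    # ~0.40
print(f"HS GPA alone:   R^2  = {r2([hs_gpa], frosh_gpa):.3f}")  # ~0.40
delta = r2([hs_gpa, test], frosh_gpa) - r2([hs_gpa], frosh_gpa)
print(f"unique to test: dR^2 = {delta:.3f}")                    # ~0.04
```

[Each proxy looks strong alone, explaining roughly 40% of the variance in freshman GPA; but because the two carry largely the same information, the second one adds only a few percentage points, which is the sense in which tests "uniquely explain about 2-4%."]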

For a while, test-optional was seen as a fringe movement. I don’t think the big testing agencies, The College Board and ACT, thought too much about it. It was that little pimple that no one saw because it was covered up by your underwear.

[Regarding] the continued pronouncement that test prep doesn’t work: Really? This contention is something people at the conference literally laugh at: the absurdity of the notion that the only test prep that works is our test prep. It’s the hill some people want to die on, I guess, and they have that right.

Here’s the big thing: I do not give a damn if a college wants to use tests or not; if they want to do video interviews in place of high school transcripts; if they want to require students to do backflips and handstands for the admissions committee; if they want to measure shoe size and research how it affects academic performance, or if they want to admit every applicant with ability to benefit, the way community colleges do. It’s their decision.

I do care, and I get annoyed, when testing agencies lie and distort in order to save their bacon. And when they do, I’ll write about it.

The next article, published on the Inside Higher Ed website in May, assesses the College Board's trumpeting of score improvements on the SAT among students who engaged in test prep through Khan Academy:

[Excerpts]:

Recently the College Board issued a statement trumpeting the success of its collaboration with Khan Academy to provide free online practice for the new SAT. According to the statement, students who spend at least 20 hours on Khan Academy’s “Official SAT Practice” have an average score increase of 115 points, nearly double the increase of those who don’t use the Khan Academy program.

Define irony.

The College Board affirming the value of test preparation is akin to Greenpeace suddenly denying the existence of global warming. For nearly 60 years the College Board’s position has been that test prep provides minimal benefits. A College Board document, “Effects of Coaching on SAT Scores,” argues that the estimates of the benefits of test prep reported by test prep vendors are much too high, and that the typical gain from coaching is 8 points on the verbal side and 18 on the math side.

Is that document, which involves a study of coaching done back in 1996, no longer valid? What’s changed? According to Zach Goldberg, senior director of media relations at the College Board, both the SAT and the approach to test preparation used by Khan Academy are different.

Regarding the test itself, Goldberg responded to an inquiry from Ethical College Admissions, “The new SAT is a different test. It is an achievement test that measures what students are already learning in high school and what they need to know to succeed in college and career. With the new SAT there is no penalty for guessing. Students no longer lose points for wrong answers. Gone are ‘SAT words’ -- words no one has seen before or will likely see again. Only relevant math concepts are tested. The SAT makes it easier for students to show their best work.”

That statement raises as many questions as it answers. So the SAT is now an “achievement” test? SAT was originally an acronym for Scholastic Aptitude Test. It then became the Scholastic Assessment Test, then just the SAT, standing for nothing in particular. If the new test is an achievement test, then why still have the SAT Subject Tests, which are much more closely linked to advanced high school work in specific academic subjects and were once upon a time known as “Achievement Tests”?

Wayne Camara has written a piece, published on the ACT website in July, about the same College Board press release regarding score gains from Khan Academy practice. [Mr. Camara is no longer Senior VP of Research at ACT; since May he has been the "Horace Mann Research Chair" at ACT, with a job description that details his duties: "Provide technical expertise and research leadership for internal projects, collaborations, and consultation. Represent ACT with key stakeholders in measurement and assessment work."]

[Excerpts]:

Claims that test preparation, short-term interventions, or new curricular or other innovations can result in large score gains on standardized assessments are tempting to believe. These activities require so much less effort than enrolling in more rigorous high school courses or other endeavors that require years of work and learning.

If we find an intervention that increases only a test score without a similar effect on actual achievement, then we need to be concerned about the test score. And when we hear claims about score increases that appear to be too good to be true, we need to conduct research based on the same professional standards to which other scientific research adheres. Because if it sounds too good to be true, it very likely is.