Holistic evaluation of applicants = noise? (Or worse?)
Why no evidence-based admissions?
Is there any serious study of whether the subjective criteria used to judge applicants actually predict success? In psychometrics this is referred to as test validity. In the article below, it is not even clear that the evaluation method satisfies the weaker criteria of consistency or stability: applicants passed through the system a second time might receive significantly different scores. "Expert" evaluation often reduces predictive power relative to simple algorithms.
See Data mining the university and Nonlinear psychometric thresholds for physics and mathematics for plenty of evidence of validity, consistency and stability of traditional measures of intellectual ability.
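As a toy illustration of both failure modes, here's a minimal Python simulation. To be clear, this is a sketch under assumed parameters (all noise scales are made up for illustration), not a model of any actual admissions process: an "expert" score that adds rater noise to the same objective inputs predicts the outcome worse than a simple unit-weighted algorithm, and two independent reads of the same files disagree with each other.

```python
# Minimal sketch: rater noise degrades both predictive validity and
# consistency. All distributions and noise scales below are assumptions
# for illustration, not estimates from any real admissions data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent applicant ability and two objective measures that track it.
ability = rng.normal(size=n)
test = ability + rng.normal(scale=0.6, size=n)   # standardized test score
gpa = ability + rng.normal(scale=0.8, size=n)    # a noisier measure

# Later "success" driven by ability plus unexplained variance.
outcome = ability + rng.normal(scale=1.0, size=n)

# Simple algorithm: unit-weighted sum of the objective measures.
algo = test + gpa

# "Holistic" score: the same information plus idiosyncratic rater noise;
# two independent readers score each file.
reader1 = algo + rng.normal(scale=1.5, size=n)
reader2 = algo + rng.normal(scale=1.5, size=n)

def r(a, b):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(a, b)[0, 1]

print(f"algorithm vs outcome:  r = {r(algo, outcome):.2f}")     # ~0.63
print(f"reader 1  vs outcome:  r = {r(reader1, outcome):.2f}")  # ~0.53
print(f"reader 1  vs reader 2: r = {r(reader1, reader2):.2f}  (consistency)")  # ~0.69
```

Under these (assumed) numbers the holistic reader loses predictive validity relative to the unit-weighted sum, and a second read of the same files produces visibly different rankings; the same applicant can be a 2 one day and a 3 the next.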
NYTimes: A highly qualified student, with a 3.95 unweighted grade point average and 2300 on the SAT, was not among the top-ranked engineering applicants to the University of California, Berkeley. He had perfect 800s on his subject tests in math and chemistry, a score of 5 on five Advanced Placement exams, musical talent and, in one of two personal statements, had written a loving tribute to his parents, who had emigrated from India.
Why was he not top-ranked by the “world’s premier public university,” as Berkeley calls itself? Perhaps others had perfect grades and scores? They did indeed. Were they ranked higher? Not necessarily. What kind of student was ranked higher? Every case is different.
The reason our budding engineer was a 2 on a 1-to-5 scale (1 being highest) has to do with Berkeley’s holistic, or comprehensive, review, an admissions policy adopted by most selective colleges and universities. In holistic review, institutions look beyond grades and scores to determine academic potential, drive and leadership abilities. Apparently, our Indian-American student needed more extracurricular activities and engineering awards to be ranked a 1.
Now consider a second engineering applicant, a Mexican-American student with a moving, well-written essay but a 3.4 G.P.A. and SATs below 1800. His school offered no A.P. He competed in track when not at his after-school job, working the fields with his parents. His score? 2.5.
Both students were among “typical” applicants used as norms to train application readers like myself. And their different credentials yet remarkably close rankings illustrate the challenges, the ambiguities and the agenda of admissions at a major public research university in a post-affirmative-action world.
[ Despite Prop. 209, the nearly equal scores of these "typical" training cases suggest outcomes very similar to those produced by explicit affirmative action. ]
... I could see the fundamental unevenness in this process both in the norming Webinars and when alone in a dark room at home with my Berkeley-issued netbook, reading assigned applications away from enormously curious family members. First and foremost, the process is confusingly subjective, despite all the objective criteria I was trained to examine.
In norming sessions, I remember how lead readers would raise a candidate’s ranking because he or she “helped build the class.”
... After the next training session, when I asked about an Asian student who I thought was a 2 but had only received a 3, the officer noted: “Oh, you’ll get a lot of them.” She said the same when I asked why a low-income student with top grades and scores, and who had served in the Israeli army, was a 3. ...
11 comments:
"Holistic evaluation of applicants = noise? (Or worse?)"
Holistic evaluation of applicants = the worst and most vile kind of racism and stereotyping, perpetuated by the Nazis who populate the admissions offices of our elite universities, and basically a modern form of Jim Crow whereby some people are held to be more equal than others
UCB isn't special. Yet Steve believed a paper which claimed that admits to elite unis who chose to attend less elite unis did just as well as those who chose to attend the elite ones.
I'm not sure about UG admissions, but for med school the issue has been studied.
http://www.ncbi.nlm.nih.gov/pubmed/17089077
http://peds.stanford.edu/faculty-resources/documents/MedSchoolAdmissions.whatpredictssuccess.pdf
Summary: grades, test scores, and structured interviews are good predictors; unstructured interviews, essays, etc. generally are not. Of course, most schools realize that interviews are not even consistent for the same applicant and have built-in permission to re-interview when an applicant's scores diverge significantly. Utterly ridiculous.
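(A minimal sketch of the kind of divergence check described above; the scoring scale and the 2-point threshold are assumptions for illustration, not any school's actual policy.)

```python
# Hypothetical re-interview trigger: flag applicants whose two interview
# scores diverge by more than some threshold. The scale and threshold
# are assumptions, not any school's documented procedure.
def needs_reinterview(score_a: float, score_b: float, threshold: float = 2.0) -> bool:
    """Return True when two interview scores disagree enough to warrant a re-interview."""
    return abs(score_a - score_b) > threshold

print(needs_reinterview(8.0, 4.5))  # True: readings diverge by 3.5
print(needs_reinterview(6.0, 5.5))  # False: readings roughly agree
```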
Fat chance of seeing a sane, test-based system, though. Only public outrage could bring about such a thing, and the average person, being average, probably believes (wrongly) that "holistic" admissions work in his favor.
"Despite Prop. 209, the nearly equal scores of these "typical" training cases suggests outcomes very similar to those produced by explicit affirmative action. "
Then it sounds like they are in fact using stable testing criteria. The more the outcome resembles the goals of the school (including affirmative action), the better. Apparently their noisy system still approximates that.
The American ideology is not consistent.
Test-based admissions would, it is supposed, result in under-representation of some minorities and of the poor, but only (what?) 2% of HYPS admits are from households with income below the median, and the US has the most rigid class structure in the developed world.
The US is like 1984's Oceania. One could laugh at it if it weren't so satanic.
"to judge applicants actually predict success?"
And obviously, if, like the rest of the world, success were measured by cumulative exams rather than by very short-term and often very subjective "grades," the validity of tests as admissions criteria would be so far above that of any other criterion that considering anything else would be indefensible, even for the morons in admissions.
The perfect is made the enemy of the good.
It is true that a great many high scorers will be boring, unambitious, antisocial, etc. And motivated by this fear, the admissions people suppose they can do better. But they can't. Give me a pool of high scorers over a pool selected by admissions for any competition any day.
"when I asked why a low-income student with top grades and scores, and who had served in the Israeli army, was a 3"
If you want the sham of "self-government", you can't escape the hypocrisies it breeds.
Great point, ben_g: their results probably are stable once we identify what factor they are really using. And this makes it easy to test validity as well. As long as they are not destroying the data ...
I think the missing word in all of these conversations is "grit". I believe it's difficult to quantify, but it's probably highly correlated with somebody's chance of being successful. And admissions officers at top universities seek students who have high intellectual capacity and, more importantly, the drive to pursue lifelong journeys of contributing to society in significant ways.