Working Paper: NBER ID: w10803
Authors: Christopher Avery; Mark Glickman; Caroline Hoxby; Andrew Metrick
Abstract: We show how to construct a ranking of U.S. undergraduate programs based on students' revealed preferences. We construct examples of national and regional rankings, using hand-collected data on 3,240 high-achieving students. Our statistical model extends models used for ranking players in tournaments, such as chess or tennis. When a student makes his matriculation decision among colleges that have admitted him, he chooses which college "wins" in head-to-head competition. The model exploits the information contained in thousands of these wins and losses. Our method produces a ranking that would be difficult for a college to manipulate. In contrast, it is easy to manipulate the matriculation rate and the admission rate, which are the common measures of preference that receive substantial weight in highly publicized college rating systems. If our ranking were used in place of these measures, the pressure on colleges to practice strategic admissions would be relieved.
Keywords: No keywords provided
JEL Codes: I2; C11; C25
Cause | Effect
---|---
revealed preference ranking of colleges (D79) | reliability of desirability measure (C52) |
traditional metrics (admission and matriculation rates) (I23) | susceptibility to manipulation (D91) |
colleges engaging in strategic admissions practices (I23) | misrepresentation of desirability (D91) |
actual student preferences (A22) | reflection of true desirability (D84) |
consistent patterns in student choices (C92) | indication of true desirability (L15) |
revealed preferences (D11) | accurate reflection of relative desirability of colleges (D29) |
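The tournament-style model the abstract describes belongs to the paired-comparison family. As a minimal sketch (not the authors' exact latent-variable extension), a basic Bradley-Terry model can be fit to head-to-head matriculation "wins" with Hunter's MM iteration; the college names and win counts below are hypothetical.

```python
# Minimal Bradley-Terry sketch of tournament-style ranking from
# head-to-head matriculation choices. Data are hypothetical.

# wins[a][b] = admitted students who chose college a over college b
wins = {
    "A": {"B": 30, "C": 40},
    "B": {"A": 20, "C": 35},
    "C": {"A": 10, "B": 15},
}
colleges = list(wins)

# Start every college at equal strength, then apply the MM update:
# pi_i <- W_i / sum_j N_ij / (pi_i + pi_j), normalizing each round.
strength = {c: 1.0 for c in colleges}
for _ in range(200):
    new = {}
    for i in colleges:
        total_wins = sum(wins[i].values())
        denom = sum(
            (wins[i][j] + wins[j][i]) / (strength[i] + strength[j])
            for j in colleges if j != i
        )
        new[i] = total_wins / denom
    norm = sum(new.values())
    strength = {c: v / norm for c, v in new.items()}

# Rank colleges by fitted strength, strongest first.
ranking = sorted(colleges, key=strength.get, reverse=True)
```

Because the fitted strengths depend on every matchup jointly, a college cannot raise its rank by rejecting strong applicants or courting weak ones, which is the manipulation-resistance property the abstract emphasizes.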