Thursday, May 14, 2009

Identifying Comparison Institutions

Higher education institutions are often concerned with how they compare to their counterparts. In the case of my institution, we have often compared ourselves to other Catholic institutions. The last comparison group was developed nearly ten years ago, and senior leadership has begun discussing whether it is still the appropriate one. To assist in this discussion, my office attempted to use an empirical method to determine a comparison group based on quantitative similarities.

The National Center for Education Statistics (NCES) collects information annually from all Title IV higher education institutions. The collection takes place during the academic year and covers enrollment, financial aid, finance, graduation, and human resources. The institutionally identifiable data are subsequently made available to the public. One important use of these data is the development of peer institution groups. Although the definition of a peer institution varies from college to college, the primary purpose is to develop measurable outcomes for benchmarking. In addition to the data, NCES provides the public with a Data Cutting Tool (DCT) for use in its Data Analysis System (DAS).

As a starting point, all US degree-granting master's- and doctoral-level institutions were included in the original dataset. Institutions classified as for-profit were then excluded, leaving 884 institutions for this project. As much as possible, the data points used within the study are similar (if not identical) to the data points collected by US News. Overall, there were 26 variables included in the analysis, referencing areas such as retention, faculty resources, selectivity, and graduation. Once the variables had been identified, the next step was to develop a measure useful for identifying institutions similar to our institution. To accomplish this, each institution was assigned a standard score for each of the 26 variables. A standard score allows for the comparison of different scales by transforming each raw score onto a common scale. Standard scores are a common measure of individual test performance, as they enable admissions officers to determine how far above or below the mean (average) an applicant scored on a particular exam such as the LSAT or GMAT.
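The standardization step described above can be sketched as follows. This is a minimal illustration, not the office's actual code; the variable shown (first-year retention) and the values are hypothetical.

```python
def standard_scores(values):
    """Convert raw values to standard (z) scores: (x - mean) / standard deviation."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n
    sd = variance ** 0.5
    return [(x - mean) / sd for x in values]

# Hypothetical first-year retention rates (percent) for five institutions.
retention = [78, 85, 91, 67, 88]
z = standard_scores(retention)
# The scores now share a common scale centered on zero, so variables
# measured in dollars, percentages, or headcounts can be compared directly.
```

Because every variable ends up on the same zero-centered scale, a school's position on retention can be set side by side with its position on, say, instructional spending.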

For the purposes of this project, we are not concerned with how far above or below the mean our institution sits; rather, we are concerned with our ability to take our standard score and identify those institutions that are closest to it. In other words, we needed a measure that would let us create an artificial mean in which our institution was the center of each universe (or variable).

The first step of this two-step process was to identify our institution's standard score for each of the 26 variables. The second step required us to separate each frequency distribution into ten equal groups, or deciles. Once this was accomplished, the standard score for our institution was isolated with the intent of identifying all institutions within two deciles above or below it. This gave us a practical way to identify similar institutions for each of the 26 variables.
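The decile step can be sketched roughly as below, under the assumption that "within two deciles" means within two decile ranks of our school's rank on that variable. The scores and the choice of index 0 as "our institution" are illustrative only.

```python
def decile_ranks(scores):
    """Assign each score a decile rank (1-10) based on its position
    in the sorted frequency distribution."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    ranks = [0] * n
    for position, i in enumerate(order):
        ranks[i] = position * 10 // n + 1  # ten roughly equal groups
    return ranks

# Hypothetical standard scores on one variable for ten institutions;
# suppose index 0 is our institution.
scores = [0.3, -1.2, 0.1, 2.0, -0.4, 0.9, 1.1, -0.8, 0.0, 0.5]
ranks = decile_ranks(scores)
our_rank = ranks[0]
similar = [i for i, r in enumerate(ranks) if abs(r - our_rank) <= 2 and i != 0]
# 'similar' holds the institutions within two deciles of ours on this variable.
```

With 884 institutions each decile holds roughly 88 schools, so a two-decile window on either side captures the middle slice of the distribution surrounding our score.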

Next, each quantitative variable was recoded into a categorical variable with two discrete values. An institution with a standard score within two deciles of our institution's score was assigned a value of one; an institution beyond the two-decile limit was assigned a zero. This produced a series of zeros and ones for each institution, with zero (0) indicating the institution was not similar to ours on the variable in question and one (1) indicating that it was. Finally, these variables were aggregated to develop a peer comparison rank.
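The recode-and-aggregate step amounts to summing the similarity indicators. A minimal sketch, using made-up institutions and only four of the 26 variables for brevity:

```python
# flags[inst] lists the 0/1 similarity indicators across the variables:
# 1 = within two deciles of our institution on that variable, 0 = not.
flags = {
    "College A": [1, 1, 0, 1],
    "College B": [0, 0, 1, 0],
    "College C": [1, 1, 1, 1],
}

# Summing the indicators gives each institution a peer-comparison score:
# the number of variables on which it resembles ours.
peer_score = {inst: sum(f) for inst, f in flags.items()}
ranking = sorted(peer_score, key=peer_score.get, reverse=True)
```

Institutions at the top of the ranking resemble ours on the most variables; a cutoff (say, matching on 20 of 26 variables) would then define the candidate peer group.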

1 comment:

  1. The peer identification has been beneficial to faculty members at schools that have lagged behind in salaries. For example, consider a sleepy Midwestern Catholic men's college that in 30 years has grown into a co-educational university of more than 11,000 graduate and undergraduate students -- but still pays small-liberal-arts-college salaries. The peer process would put this school on a footing with the major Catholic schools such as DePaul and Marquette, and further study would reveal the disparity in pay. Few universities can afford to be too far behind in pay scale lest they lose important faculty members to greener pastures.
