As 2013 draws to a close, there has been much debate about the value higher education offers, its costs, and the promise of new federal metrics for evaluating institutions of higher education. Not surprisingly, the Yahoo! Finance blog featured an interview with former Secretary of Education William Bennett on the value of higher education as one of its “favorite” stories of 2013.
The piece notes that former Secretary Bennett evaluated colleges and universities based on their return on investment (ROI). In that exercise, he concluded that the ROI was positive for 150 institutions. His assessment was based on PayScale’s College Education ROI Rankings.
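PayScale’s actual methodology is survey-based and more involved, but the underlying ROI idea can be sketched simply. The figures and the function below are hypothetical illustrations, not PayScale’s formula:

```python
# Hypothetical illustration of an ROI-style calculation; PayScale's
# actual methodology uses survey-based earnings data and differs.

def simple_roi(grad_earnings_20yr, hs_earnings_20yr, cost_of_degree):
    """20-year net ROI: the graduate's earnings premium over a
    high-school baseline, minus the cost of the degree, divided
    by that cost."""
    gain = grad_earnings_20yr - hs_earnings_20yr - cost_of_degree
    return gain / cost_of_degree

# Illustrative figures: $1.4M vs. $1.0M in 20-year earnings,
# $120k total cost of attendance.
roi = simple_roi(1_400_000, 1_000_000, 120_000)
print(f"20-year ROI: {roi:.1%}")
```

A negative result is possible under this framing: if the earnings premium does not exceed the cost of the degree, the ROI falls below zero, which is how an institution can rank as a “negative ROI” school.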
Before one celebrates a positive ROI or mourns a negative one, some large caveats are in order. First, the ROI rankings are based on employees who completed PayScale’s employee survey. Second, the survey excludes those who went on to earn degrees beyond a bachelor’s degree, which limits the assessment of the value of a bachelor’s degree, as that degree serves both students who are interested in launching professional careers and those who seek advanced degrees. The data also exclude self-employed, project-based, and contract employees. PayScale’s methodology describes these limitations and notes that they result in wider confidence intervals for select schools.
Not mentioned in the methodology, but also important, is the mix of professional fields that students enter. Higher education is not solely about career preparation, and there is wide diversity in the degrees institutions offer. Hence, it is plausible that a college with superior results in graduating and placing its majors could perform badly in the rankings simply because it is primarily a liberal arts school, while a technical school focused on the narrower area of health sciences could fare much better. When making comparisons, one needs to evaluate similar entities, just as it would make little sense to compare a technology company’s ROI with that of a manufacturing firm. In general, firms in rapidly growing industries have a much higher ROI than those in mature industries.
Also not mentioned is the geographic area in which the college is situated and in which a majority of its graduates launch their careers. As the cost of living is higher in certain parts of the country than in others, incomes for college graduates are also higher in those areas. For example, according to recent Census data, the cost of living in San Francisco is more than 90% above that in Springfield, IL. At the same time, earnings relationships are not static. For example, the Bureau of Labor Statistics reported that total compensation in Atlanta rose 4.0% in 2013 while it rose 1.3% in Chicago. On a long-term present-value basis, if that kind of divergence in wage growth between two areas were to persist, the ROI would change even if the institutions of higher education made no changes whatsoever.
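The present-value point can be made concrete with a rough sketch. Using the BLS-reported 2013 growth rates (4.0% for Atlanta, 1.3% for Chicago) as if they persisted, identical starting salaries diverge substantially; the starting salary, discount rate, and horizon below are purely illustrative assumptions:

```python
# Sketch of the present-value argument: sustained differences in
# compensation growth change lifetime-earnings ROI even when the
# institution itself changes nothing. The 4.0% and 1.3% rates are
# the BLS figures cited above; the $50k starting salary, 3%
# discount rate, and 30-year horizon are illustrative assumptions.

def pv_of_earnings(start_salary, growth, years, discount=0.03):
    """Present value of a salary stream growing at `growth` per
    year, discounted at `discount`, over `years` years."""
    return sum(
        start_salary * (1 + growth) ** t / (1 + discount) ** t
        for t in range(years)
    )

atlanta = pv_of_earnings(50_000, 0.040, 30)
chicago = pv_of_earnings(50_000, 0.013, 30)
print(f"Atlanta PV: ${atlanta:,.0f}")
print(f"Chicago PV: ${chicago:,.0f}")
```

Even this toy model shows that if growth-rate differences held up over a career, the gap in discounted lifetime earnings would be material, which is the sense in which ROI rankings are hostage to regional labor-market trends.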
In sum, as is the case when using any rankings system—and I, for one, believe that efforts to measure higher education’s effectiveness in a wide variety of ways can be helpful to students, parents, faculty, and administrators—one should know what the rankings seek to measure and be cognizant of the limitations of the methodology involved. Only with that context can one be in a position to extract real value from studying such data.
There is no magic metric that captures most or all of the true value added by each college or university. The pursuit of such a perfect measure may be admirable, but the measure itself is unattainable. A range of useful metrics can, however, provide important insight.