Measuring Up: What Does It Mean for Me?
The National Center for Higher Education Management Systems (NCHEMS)
Measuring Up 2004, the third state-by-state “report card” for higher education issued by the National Center for Public Policy and Higher Education, was released with considerable press attention on September 15th. The report grades state performance in five areas—preparation, participation, affordability, completion, and benefits. The grading process aggregates statistical performance measures in each area to the state level, then compares each state’s totals with those of the best-performing states. Partly because of the hoopla surrounding its release, institutional leaders may be tempted to dismiss Measuring Up quickly as just one more simple-minded approach to providing “public information” that only ends up making higher education look bad. After all, the report says nothing about my institution’s performance, and it excludes much of what higher education is all about, including graduate education and research. Can’t it just be ignored so we can move on? Here are a couple of reasons why I don’t think so.
First of all, the headlines that attracted the hoopla were pretty compelling. The bulk of the story was, of course, about affordability, and it’s hard to argue with the central contention that students are worse off now than they were a decade ago with respect to college costs. To make this case, the Center needed a lot of low grades. It got them by setting the “best performance” benchmark in 1992, a significant departure from its established grading procedures in the other areas. While this last-minute rule change has drawn justified flak from states that really have made good-faith efforts to keep college costs down for low-income families, Measuring Up’s grim and relentless affordability message continues to resonate politically. College costs remain a prime candidate in Congress’ “search for a number” against which to hold colleges and universities accountable as debates about the Reauthorization of the Higher Education Act unfold next year.
Measuring Up’s second big message was that the nation’s performance in preparing high school students for college improved noticeably over the past ten years, while college access and completion rates remained flat. On the surface, this reinforces a second potential number against which colleges might be judged when the Reauthorization dust settles—graduation rates. But behind the substance of this message is a more subtle but important signal: policy matters. States—and more recently, the federal government in the form of No Child Left Behind—have paid a lot of attention to “fixing” K-12 education over the past decade, and two aspects of this interest are important for those of us in higher education to heed. First, policy action in this arena has been remarkably consistent and proactive, and has proceeded without much regard for the opinions of the institutions affected. Indeed, institutions are seen as part of the problem. Second, policymakers for the most part believe this approach is showing results, and Measuring Up suggests that they’re right. Despite legitimate cautions about important differences between K-12 and college, their first cut at addressing apparent performance shortfalls in higher education will probably be to reach for the same medicine.
Beyond its immediate messages, moreover, Measuring Up signals an emerging “new look” to accountability that we had better get used to. Two other recent documents also exhibit this flavor—the report of SHEEO’s Accountability Commission to be issued in December and that of the Business-Higher Education Forum issued last summer. Several aspects of this “new look” are worth noting explicitly. First, the notion of accountability at the core of these reports is about results, and the principal result in question is the quality of student learning outcomes benchmarked against comparable national standards. Measuring Up 2004 is only 24 pages long this year because the details of state performance are reported on the Web. But four of these scant published pages are devoted to reporting results of a pilot project to assess student learning in five states. This initiative compiled and analyzed existing data on such examinations as licensure and graduate school admissions tests, and administered comparable examinations to samples of students at 48 community colleges and 49 four-year colleges and universities. Results were combined to obtain a first-ever state-level index of student learning. In parallel, the SHEEO report will recommend that the federal government develop a national assessment of college-educated citizens, to be administered periodically in a fashion similar to the gold-standard National Assessment of Educational Progress (NAEP) currently used to track progress in K-12 learning. Similarly, the Business-Higher Education Forum’s report called on the nation’s higher education institutions to become much more visible and consistent about assessing student learning and about making their results public. We have talked about “assessment” for years, of course. But what we meant, for the most part, was a locally developed process of evidence-gathering run by individual faculties at individual institutions.
The new conversation, if it continues, will be about common results and it will be directed by others.
Another thing that’s different about the accountability conversation marked by Measuring Up is that its performance dimension is constructed in “public interest” terms. Traditional approaches to accountability in higher education, like regional accreditation, have always been able to keep the “quality” conversation confined to matters important to the academy—things like resources, faculty qualifications, entering student competitiveness, curricular structure, or research activities. Such an approach will be increasingly insufficient in the emerging accountability environment. Looked at one way, this is a good and important development. Public policymakers and stakeholders in the business community are finally recognizing higher education’s central economic and societal role for the 21st century. The bad news is the flip side of that same recognition: with this central role comes renewed and active interest in ensuring competitive performance.
So Measuring Up is important not so much for what it says as for what it signals. Institutions need to be ready to answer new questions about accountability with solid evidence about learning outcomes, benchmarked where possible against appropriate national standards, and clear in their contribution to the citizens and employers of their respective communities.