25). On the other hand, there is often an assumption on the part of the users that evaluations are "an ivory tower process…too late to be useful, too full of jargon to be understood, too lengthy [to read]…, and too likely to be answering a question quite different from the policy question originally posed" (Ibid).
The last user complaint set forth by Chelimsky -- that the question answered is often not the question posed -- points to the problem of what role, if any, policy makers themselves should have in forming the evaluation criteria. This problem was a source of pointed debate after the publication of the Equality of Educational Opportunity study (also known as "the Coleman Report") in 1966. The study was commissioned by the United States Department of Health, Education, and Welfare to determine the effectiveness of the Civil Rights Act in ensuring equal educational opportunities for people of all races, colors, religions, and national origins. It found that disparities in educational opportunity remained high, not as a function of race or religion, but of socio-economic conditions. While it highlighted the need for a war on poverty, it also gave segregationists statistical fuel for the argument that school integration would have no effect on equalizing educational opportunities.
This led Glen Cain and Harold Watts to question Coleman's methodology in their 1970 paper "Problems in Making Policy Inferences from the Coleman Report." In this critique, they argued that Coleman chose his variables based on "broad and disinterested scientific concerns" (Rothbart 1975, p. 23). While this seems an entirely appropriate way to select variables from a social scientist's standpoint, Cain and Watts argued that the social scientist must also play the role of social engineer, claiming that variables must be chosen "for their potential role in policy manipulation" (Ibid). In his reply, Coleman retorted that his job was one of scientific discovery, not outcome manipulation.
If the evaluation process must proceed as a joint effort between evaluators and policy makers, and if those two parties are often so opposed in their objectives, what is the best course of action to ensure evaluations that are both scientifically accurate and politically relevant? Chelimsky suggests that "evaluation can no longer be seen as...