
Politics and Program Evaluation

25). On the other hand, there is often an assumption on the part of the users that evaluations are "an ivory tower process…too late to be useful, too full of jargon to be understood, too lengthy [to read]…, and too likely to be answering a question quite different from the policy question originally posed" (Ibid). The last user complaint set forth by Chelimsky -- that the question answered is often not the question posed -- points to the problem of what role, if any, the policy makers themselves should have in forming the evaluation criteria. This problem was a source of pointed debate after the publication of the Equality of Educational Opportunity Study (also known as "the Coleman Report") in 1966. The study was commissioned by the United States Department of Health, Education, and Welfare to determine the effectiveness of the Civil Rights Act in ensuring equal educational opportunities for people of all races, colors, religions, and national origins. It found that disparities in educational opportunity remained high, not as a function of race or religion but of socio-economic conditions. While it highlighted the need for a war on poverty, it also gave segregationists statistical fuel for the argument that school integration would have no effect on equalizing educational opportunities.

This led Glen Cain and Harold Watts to question Coleman's methodology in their 1970 paper "Problems in Making Policy Inferences from the Coleman Report." In this critique, they argued that Coleman chose his variables based on "broad and disinterested scientific concerns" (Rothbart 1975, p. 23). While this seems an entirely appropriate approach to selecting variables from a social scientist's standpoint, Cain and Watts argued that the social scientist must also play the role of social engineer, claiming that variables must be chosen "for their potential role in policy manipulation" (Ibid). In his reply, Coleman retorted that his job was one of scientific discovery, not outcome manipulation.

If the evaluation process must proceed as a joint effort between evaluators and decision makers, and if those two parties are often so opposed in their objectives, what is the best course of action to ensure evaluations that are both scientifically accurate and politically relevant? Chelimsky suggests that "evaluation can no longer be seen as..." Instead, she argues, the evaluative questions must be determined by the decision makers themselves. While this seems to leave open the possibility of biased evaluations skewed to serve political purposes, one must keep in mind that the evaluation itself exists to serve political purposes. If it does not serve those purposes, it is useless. Weiss (1973) seemed to concur with this stance, pointing out that "only with sensitivity to the politics of evaluation research can the evaluator be as strategically useful as he should be" (qtd. in Chelimsky 1987, p. 24).

This focus on utility, both of the evaluation and, by extension, of the evaluator, may seem to violate the rules of precision governing the social scientist. However, a program evaluation cannot be viewed as a scientific artifact designed to provide knowledge for knowledge's sake. It is from beginning to end a tool for policy determination, and as such it cannot be divorced from its political implications. This does not mean, however, that the evaluator exists only to serve the agenda of the political operative, or that the decision to shape evaluations according to their political utility more than their scientific utility should be seen as a corruption of the evaluative process. On the contrary, Chelimsky (1987) explains, the establishment of a legacy of useful program evaluations constitutes "a contribution of systematic, scholarly, independent, critical thinking to the decision making process" (p. 26). Such a contribution can only serve to fulfill the program evaluator's desire to play an integral role in the improvement of society through public policy.

Works Cited

Berk, Richard A., and Peter Rossi. (1999). Thinking about Program Evaluation, 2nd ed. Thousand Oaks, CA: Sage Publications.

Besharov, Douglas J., and Terry W. Hartle. (1985, December 28). Put Politics Aside and Help the Head Start Program. The New York Times. Retrieved June 5, 2010 from http://www.welfareacademy.org.

Chelimsky, Eleanor. (1987, November). The Politics of Program Evaluation. Society, Vol. 25, No. 1, pp. 24-32.

Chen, Huey-tsyh. (2005). Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness. Thousand Oaks, CA: Sage Publications.

Rothbart, George S. (1975,…
