
Reliability and Validity Measurements Research Paper

I. Reliability of Test

Reliability is defined by Joppe (2000, p. 1) as the level of consistency of the obtained results over a period of time as well as an accurate representation of the population under study. If the outcome of the study can be reproduced using a similar methodology, then the instruments used in the research are said to be reliable.

It is worth noting that there is an element of replicability as well as repeatability of the observations or results. The work of Kirk and Miller (1986, pp. 41-42) indicated that there exist three different types of reliability in any given quantitative research. These all relate to the degree to which a given measure, if repeated, remains the same; the stability of the given measure over a period of time; and the similarity of measurements within a given time period. The work of Charles (1995) focuses on the idea of the consistency with which a given test item is answered.

The test-retest method is one type of reliability test. The attribute of an instrument that is examined in this way is called stability. A stable measure produces similar results on repeated administration, and a high level of stability indicates a very high level of instrument reliability; that is, the results are repeatable. There is, however, a problem with the test-retest method, as pointed out by Joppe (2000), that makes the test unreliable to a certain degree: the test-retest technique may sensitize the respondents to the specific subject matter and thereby influence their responses.

Reliability therefore refers to the level of consistency of a given measure. In psychology, for example, if a test is aimed at measuring a trait such as introversion, then every time it is administered to a given subject the results obtained should be approximately similar. The downside is that it is never easy to calculate reliability precisely, although ways of approximating it do exist (Cherry, n.d.).

Types of reliability

There are several types of reliability (PTI, 2006; Cherry, n.d.). They are as follows:

Test-Retest reliability

In this type of reliability test, the test is administered twice, at two distinct points in time (Cherry, n.d.). This reliability test assumes that the construct or quality being measured will not change between administrations. It is generally employed for attributes that are stable over time, such as intelligence.
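As a rough illustration (a sketch added here, not drawn from the cited sources), the test-retest reliability coefficient is commonly estimated as the correlation between the scores obtained at the two points in time. All numbers below are hypothetical.

import numpy as np

# Hypothetical scores for the same ten subjects on two administrations
# of the same test, several weeks apart.
time_1 = np.array([24, 31, 28, 19, 35, 27, 22, 30, 26, 33])
time_2 = np.array([26, 30, 27, 21, 34, 25, 23, 31, 27, 32])

# Test-retest reliability is commonly summarized as the Pearson
# correlation between the two sets of scores; values near 1 indicate
# a stable, repeatable measure.
r = np.corrcoef(time_1, time_2)[0, 1]
print(f"Test-retest reliability estimate: r = {r:.2f}")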

Inter-rater Reliability

This form of reliability is assessed by having two independent judges score the same test. The scores that are obtained are then compared critically in order to determine the level of consistency of the raters' estimates. One technique for testing inter-rater reliability is to have each judge score the items on a 1-10 scale. The correlation between the two sets of scores is then calculated to determine the degree of inter-rater reliability.
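A minimal sketch of the procedure just described, using hypothetical ratings: two independent judges score the same items on a 1-10 scale, and the correlation between their scores serves as the index of inter-rater reliability.

import numpy as np

# Hypothetical 1-10 ratings given by two independent judges to the
# same eight test items.
judge_a = np.array([7, 4, 9, 6, 8, 3, 5, 7])
judge_b = np.array([6, 5, 9, 7, 8, 4, 5, 6])

# The correlation between the two sets of ratings indicates how
# consistent the raters' estimates are with one another.
inter_rater_r = np.corrcoef(judge_a, judge_b)[0, 1]
print(f"Inter-rater reliability estimate: r = {inter_rater_r:.2f}")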

Parallel-Forms Reliability

This form of reliability is determined by comparing different tests that were created from the same content. This is achieved by creating a large set of test items aimed at measuring the same quality and then randomly dividing these items into two separate tests.
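The sketch below illustrates the splitting step under simple, assumed conditions: a pool of items measuring the same quality is divided at random into two separate forms, each subject is scored on both forms, and the correlation between the two form scores estimates parallel-forms reliability. The simulated responses are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Simulate a hypothetical item pool: 20 subjects answer 12 items that
# all tap the same latent quality (subject ability plus item noise).
ability = rng.normal(0.0, 1.0, size=20)
items = ability[:, None] + rng.normal(0.0, 1.0, size=(20, 12))

# Randomly divide the item pool into two separate forms of equal length.
order = rng.permutation(items.shape[1])
form_a_items, form_b_items = order[:6], order[6:]

# Total score for each subject on each form.
scores_a = items[:, form_a_items].sum(axis=1)
scores_b = items[:, form_b_items].sum(axis=1)

# The correlation between scores on the two forms estimates
# parallel-forms reliability.
r = np.corrcoef(scores_a, scores_b)[0, 1]
print(f"Parallel-forms reliability estimate: r = {r:.2f}")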

Internal Consistency Reliability

This type of reliability is employed in judging the consistency of results obtained across items on the same test. It involves comparing the test items that measure the same construct in order to determine the internal consistency of the test.
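One common way to quantify internal consistency, not named explicitly in the text above but widely used, is Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum of the item variances / variance of the total scores), where k is the number of items. A minimal sketch with hypothetical responses:

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a subjects-by-items matrix of scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses: six subjects answering four items intended
# to measure the same construct on a 1-5 scale.
scores = np.array([
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")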

II. Validity (Test Validity)

The work of Joppe (2000) explained validity as a determination of whether the given research really measures whatever it is intended to measure, as well as the level of truthfulness of the results. Wainer and Braun (1998), on the other hand, referred to this validity as "construct validity."

Validity in the context of psychology has been extensively discussed by Tebes (2000). The work of Cook and Campbell (1979) identified four major types of validity: internal validity, statistical conclusion validity, external validity and construct validity. Internal validity concerns inferences about causal relationships between two variables. Statistical conclusion validity concerns inferences about the covariation that exists between two variables. External validity involves generalization to other settings, times and persons. Construct validity involves generalization about the theoretical relationship between cause and effect.

Forms of validity

Face validity

Cherry (n.d.) defined...

In this form of validity, researchers take the validity of the test at 'face value'. This is done by examining whether the test actually appears to measure the intended variable. As an example, a researcher may be interested in measuring happiness; the test would be said to possess face validity if it appears to measure the level of happiness. The disadvantage of this form is that it is not accurate, since it captures only the superficial signs of a variable. There is therefore a need for researchers to carry out further investigation into the matter.
Content validity

Content validity refers to the degree to which the items in a given instrument reflect the content universe to which the instrument will appropriately be generalized (Straub et al., 2004). Generally, the concept of content validity entails the evaluation of a new instrument so as to ensure that it includes all of the items that are considered essential while eliminating the items that are undesirable for a given construct domain, as pointed out by Lewis (1995). When a given test possesses content validity, the items on that test represent the entire range of possible items that the test should include. Content validity has the disadvantage of being biased, since it depends on the opinions of the judges who rate the items.

Criterion validity

A given test is said to possess criterion-related validity if it has been demonstrated beyond reasonable doubt that it is highly effective in predicting the criterion, or indicators, of a given construct (Cherry, n.d.). Miller et al. (2003) pointed out that criterion-related validity is established by determining the relationship between the scores on a test and a specific criterion. For example, scores on an admission test may be related and relevant to a criterion such as grade point average. There are two types of criterion validity (a numerical sketch follows the list below):

1. Concurrent validity, which occurs whenever the criterion measures are obtained at the same time as the test scores. This indicates the extent to which the test scores accurately estimate the current state of an individual or situation with respect to the criterion. As an example, in measuring depression, the administered test may be described as having concurrent validity if it succeeds in measuring the current depression levels experienced by a subject.

2. Predictive validity, which occurs whenever the criterion measures are obtained some time after the test has been completed. Aptitude tests are an example.

The weakness of predictive validity is that it never tests all of the available data, and the selected items therefore cannot, by definition, fully predict scores on a given criterion.
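As a hedged numerical sketch of the admission-test example mentioned above (all figures hypothetical), criterion-related validity is often reported as a validity coefficient: the correlation between the test scores and the criterion measure, here a grade point average obtained after the test.

import numpy as np

# Hypothetical admission-test scores and the grade point averages the
# same eight students later earned (the criterion).
test_scores = np.array([52, 61, 47, 70, 58, 65, 43, 55])
gpa = np.array([2.8, 3.2, 2.5, 3.7, 3.0, 3.4, 2.4, 2.9])

# The validity coefficient: correlation between the test scores and a
# criterion obtained after testing (predictive validity).
validity_coefficient = np.corrcoef(test_scores, gpa)[0, 1]
print(f"Predictive validity coefficient: r = {validity_coefficient:.2f}")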

Construct validity

Construct validity is demonstrated if, in a given test, there is an association between the test scores and the predictions of a given theoretical trait. Intelligence tests are an example. Construct validity is therefore the extent to which a given instrument measures the trait or theoretical construct that it is meant to measure (Miller et al., 2003).

III. What must a psychologist do before they use a test to assure that the test has adequate levels of reliability and validity for the client who is being tested?

For the psychologist to ensure that the test has adequate levels of reliability and validity for the client who is being tested, they must do a number of things.

These actions for ascertaining reliability and validity are dictated by the racial, cultural and educational background of the client. Certain tests, such as IQ tests, are culturally sensitive and should never be administered to people of cultural minorities, such as the black community. The complexity of the questions should also be guided by the academic level of the client. Groth-Marnat (2003) pointed out that prior to conducting any test, the first thing to do is determine the competence of the subject, where competence is defined as the subject's ability to meaningfully cooperate with the psychologist. It is also necessary to reassure the person being assessed of their right to privacy so that they feel comfortable and more confident. Their educational level would dictate the complexity of the evaluation techniques, while their cultural inclination would determine the appropriateness of the…

References

American Psychological Association (2003). Ethical principles of psychologists and code of conduct. Washington, DC: Author.

Charles, C. M. (1995). Introduction to educational research (2nd ed.). San Diego: Longman.

Cherry, K. (n.d.). What is reliability? Retrieved from http://psychology.about.com/od/researchmethods/f/reliabilitydef.htm

Joppe, M. (2000). The research process. Retrieved from http://www.ryerson.ca/~mjoppe/rp.htm

PTI (2006). Retrieved from http://www.proftesting.com/test_topics/pdfs/test_quality_reliability.pdf