Answer the following questions:
- Is it possible to develop a test that will be totally free of error variance? Why or why not?
- What rationale would you use to justify the use of newly developed instruments that may not have existed long enough to accumulate evidence?
- Describe situations in which the use of newly developed instruments would be appropriate.
- What precautions do the practitioners need to take if they want to use new instruments?
- Is it possible to have a test that is reliable but not valid? Why or why not?
- What is the difference between construct, content, and face validity?
- Post your response to the Discussion Area by the due date assigned. Respond to at least two posts by the end of the week.
- Use an APA style reference list with in-text citations in your initial response.
- Use an APA style reference list with in-text citations in at least one of your two responses to classmates.
It is important for a psychological test to have good psychometric properties that help ensure that the test consistently measures what it is purported to measure.
The two most important psychometric properties of psychological tests are reliability and validity. In order for the results of a test to be applied and understood legitimately, the results must be both reliable and valid. Let’s examine reliability.
Reliability means that the same methods produce consistent results over repeated measurements. There are different forms of reliability that have to be considered.
For example, test-retest reliability looks at the stability of scores when the test is given more than once to the same group of people. The closer the scores are between both administrations, the more reliable the test is.
Interrater reliability measures whether different people scoring the same test get the same results. This is especially important for subjective measures such as projective tests.
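Interrater agreement is often summarized with Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance. A sketch with hypothetical pass/fail ratings from two raters (the data are invented for illustration):

```python
# Hypothetical pass/fail ratings from two raters scoring the same ten responses
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)

# Observed agreement: proportion of responses both raters scored the same way
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement: probability both raters pick the same category at random,
# given each rater's own base rates
categories = set(rater_a) | set(rater_b)
expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)

# Cohen's kappa: agreement above chance, scaled by the maximum possible above-chance agreement
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # → 0.58: moderate agreement once chance is accounted for
```

Note that the raters agree on 80% of responses, yet kappa is only 0.58, because much of that raw agreement could occur by chance alone.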
The goal is for a test to be as reliable as possible.
As with all types of experimental and evaluative measurement in psychological testing, error is always a possibility. While certain types of error are impossible to predict before looking at data, there are some kinds of error that can be prevented through paying careful attention to the way in which tests are being administered, and how information is collected and interpreted.
There are two main types of error that should be accounted for in psychological assessment, and those are measurement error and systematic error.
Measurement error results from misinterpreting data, or from drawing conclusions based on misread data. This type of error is distinguished from systematic error, in which the setup and foundations of the data collection were faulty, causing participants' responses to differ from what they would have been had the items been reliable.
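The practical difference between the two error types can be illustrated with a small simulation: random error fluctuates around zero and averages out over many observations, while systematic error is a constant bias that no amount of averaging removes. A sketch using invented numbers (a true score of 100 and a hypothetical +3 bias, as if an item were miskeyed):

```python
import random

random.seed(0)
true_score = 100

# Random error: noise centered on zero, so scores scatter around the true score
random_error_scores = [true_score + random.gauss(0, 5) for _ in range(1000)]

# Systematic error: a constant +3 bias (e.g., a miskeyed item inflating every score)
# on top of the same random noise
systematic_error_scores = [true_score + 3 + random.gauss(0, 5) for _ in range(1000)]

mean_random = sum(random_error_scores) / len(random_error_scores)
mean_biased = sum(systematic_error_scores) / len(systematic_error_scores)

print(round(mean_random, 1))  # close to 100: random error cancels out on average
print(round(mean_biased, 1))  # close to 103: the systematic bias persists
```

This is why careful administration procedures matter: averaging over many test-takers protects against random error but does nothing to correct a flaw built into the test itself.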
Test validity refers to how accurately a test measures the construct of interest. For example, if you want to measure the length of a board, a scale would not be a valid instrument. A ruler would.
In addition to determining that a test is measuring what you want to measure, test validity also ensures that a test is appropriate for what you want to use it for.
For example, you want to test the validity of an employment test designed to measure cognitive ability. Once you determine that the test does measure cognitive ability, you then need to determine whether the test is appropriate to be used as a predictor in your particular employment setting.
Earlier we talked about reliability, or whether a test gives consistent results each time. How does validity relate to reliability? A test that is valid will always be reliable, because if the test accurately measures a construct, it will give the same measurement of that construct each time it is administered to the same group. However, a test that is reliable is not always valid. For example, if I intend to measure your speed on a bicycle, but I do so by measuring only the size of the bicycle, I will get the same results each time, yet I still haven't measured what I intended to measure.
It is important to know about different types of test validity so that you employ the most suitable items in your test.
Types of Test Validities
Several types of validity are taken into account when examining a psychological test. The types of interest here are face validity, construct validity, criterion-related validity, content validity, and external validity.
Let’s look at each of them individually:
- Face validity is a measure of whether or not the test appears, on its surface, to measure what it is supposed to measure. In other words, someone taking the test would not be confused about what it is measuring.
- Construct validity means that the scores on the test are an accurate measure of the construct being measured. For example, do the scores on a new IQ test give an accurate measure of IQ?
- Criterion-related validity is observed when a test can effectively predict indicators of a construct. Within the umbrella of criterion-related validity, there are two subtypes: concurrent validity and predictive validity.
- Concurrent validity can be measured when you have another established test of the same criterion to compare scores against at the time the test is administered. If both tests give the same measure of the criterion, then there is concurrent validity.
- Predictive validity is used to determine whether test scores accurately predict performance on a criterion at a later time. For example, if scores on a college admissions test predict students' grade point averages a year later, then the test has predictive validity.
- Content validity measures how well your test measures all aspects of the construct you are trying to measure.
- External validity is an indicator of whether or not your measurement of a construct in one sample group is similar to the same measurement in a different sample group.