Tuesday 6 December 2011

LECTURE 14(b) - RELIABILITY

What is reliability?
Reliability is synonymous with consistency. It is the degree to which test scores for an individual test taker or group of test takers are consistent over repeated applications.
Methods to Determine Reliability of Instrument

·         Equivalency
The extent to which two forms of a test measure identical concepts at an identical level of difficulty. Equivalency reliability is determined by correlating the two sets of test scores to show the degree of relationship or association (see the correlation sketch after this list).
·         Internal
Internal consistency is the extent to which the items within a test or procedure assess the same characteristic, skill or quality. It is a measure of how consistently the individual items of the instrument behave with one another; Cronbach's alpha, sketched after this list, is the most common index.
·         Interrater
Interrater reliability is the extent to which two or more individuals (coders or raters) agree. Interrater reliability addresses the consistency of the implementation of a rating system; Cohen's kappa, sketched after this list, is one common index for two raters.
·         Test-retest
The same test is repeated on the same group of test takers on two different occasions. Results are compared and correlated with the initial test to give a measure of stability. This method examines performance over time; the same correlation computation shown in the first sketch below applies.
·         Split half method
The test is given once and divided into two halves (for example, odd-numbered versus even-numbered items), which are scored separately; the scores on one half are then correlated with the scores on the other half to estimate reliability. Because this correlation describes only a half-length test, it is usually stepped up with the Spearman-Brown formula (see the split-half sketch after this list).
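
As a minimal sketch of how the equivalency and test-retest coefficients are computed, the Python snippet below correlates two sets of scores with NumPy. The scores are hypothetical, invented only for illustration; both designs reduce to the same Pearson correlation.

```python
import numpy as np

# Hypothetical scores for the same ten test takers on two parallel
# forms of a test (equivalency) or on two occasions (test-retest).
form_a = np.array([12, 15, 9, 20, 14, 18, 11, 16, 13, 17])
form_b = np.array([13, 14, 10, 19, 15, 17, 12, 18, 12, 16])

# The Pearson correlation between the two score sets serves as the
# reliability coefficient in both designs.
r = np.corrcoef(form_a, form_b)[0, 1]
print(f"reliability coefficient r = {r:.3f}")
```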
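
Internal consistency is most often summarized with Cronbach's alpha. Below is a minimal sketch, assuming a score matrix with one row per test taker and one column per item; the function name and the data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (test takers x items) score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five test takers answering four items (hypothetical scores).
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```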
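
For interrater reliability with two raters, Cohen's kappa corrects raw percent agreement for the agreement expected by chance. A minimal sketch with hypothetical ratings follows; the function name and data are invented for illustration.

```python
import numpy as np

def cohens_kappa(rater1, rater2) -> float:
    """Cohen's kappa for two raters assigning categorical codes."""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(rater1, rater2)
    # Observed agreement: proportion of cases both raters coded alike.
    p_o = np.mean(rater1 == rater2)
    # Expected agreement: product of each rater's marginal proportions,
    # summed over all categories.
    p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Two raters coding ten responses as "pass" or "fail" (hypothetical).
r1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
r2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
print(f"kappa = {cohens_kappa(r1, r2):.3f}")
```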
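
Finally, a sketch of the split-half method: the items are split into odd and even halves, the half scores are correlated, and the Spearman-Brown formula steps the coefficient up to full test length. The item matrix is invented for illustration.

```python
import numpy as np

# Hypothetical (test takers x items) matrix of 0/1 item scores.
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
])

# Split into odd- and even-numbered items and score each half.
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

# Correlate the two half scores.
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown step-up: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.3f}, full-test reliability = {r_full:.3f}")
```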


 Factors Affecting Reliability

·         Administration factor
Instructions that accompany the test may contain errors, which creates a type of systematic error. These errors can exist in either the instructions provided to the test taker or those given to the psychologist who is conducting the test.
·         Question construction
If test questions are difficult, confusing or ambiguous, reliability is negatively affected. Some people read the question to mean one thing, whereas others read the same question to mean something else.
·         Scoring errors
Reliable tests have an accurate method of scoring and interpreting the results, and all tests come with a set of instructions on scoring. Errors in these instructions, such as those that lead scorers to unsupported conclusions, reduce the reliability of the test.
·         Test-Taker Factors
Factors related to the test taker, such as poor sleep, feeling ill, or being anxious or "stressed out", can introduce inconsistency into the test results themselves.
·         Heterogeneity of the items
The greater the heterogeneity of the items, that is, the more the questions differ in kind or in difficulty, the lower the internal consistency and therefore the lower the chance for high reliability.
·         Heterogeneity of the group members
The greater the heterogeneity of the group members in the preferences, skills or behaviors being tested, the wider the spread of scores and the greater the chance for high reliability.

Relationship between validity and reliability
·         A test cannot be considered valid unless the measurements resulting from it are reliable.
·         Results from a test can be reliable without necessarily being valid.
