Short answer: you'll find it very difficult to estimate these things.

Long answer: to measure validity, you need an objective measure of success. That's because validity essentially asks the question: "Does this test measure what it is supposed to measure?" A simple, imperfect estimate would be to plot test scores on the y axis and objective measures on the x axis and check the correlation. A high correlation between test score and objective measure would indicate that the test is doing the job you need it to do. Note that you might run into the classic issue of "correlation vs. causation" here, though, so don't put too much stock in this.

Reliability is just a measure of how consistent a test is. You can roughly estimate reliability by checking whether a person's scores stay close to the same value when they retake the same test, when they are evaluated by different people, when they take slightly different versions of the test, and so on.

Specificity and sensitivity rely on a "yes/no" outcome. They're frequently used in medicine, where you can say that a person either has a condition or doesn't. Sensitivity is the percentage of people who have the disease and test positive. So, if 100 people have the disease and 97 of them test positive, the sensitivity is 97%. Specificity is similar, but for negative results: it's the percentage of people who don't have the disease and test negative. If 100 people don't have the disease and 95 of them test negative, the specificity is 95%.
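The validity check described above (correlating test scores with an objective measure) can be sketched in a few lines of Python. The score lists below are made-up illustrative numbers, not real data:

```python
# Rough validity check: correlate test scores with an objective measure.
# A Pearson correlation near 1.0 suggests the test tracks the measure;
# remember this is evidence of association, not causation.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

test_scores = [55, 60, 72, 80, 91]   # hypothetical scores on the test (y axis)
objective   = [50, 58, 70, 85, 88]   # hypothetical objective outcomes (x axis)

r = pearson_r(test_scores, objective)
print(f"correlation: {r:.2f}")
```

In practice you would use a library routine (e.g. `scipy.stats.pearsonr`) rather than hand-rolling the formula, but the calculation is the same.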
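Because sensitivity and specificity reduce to counting outcomes in a yes/no confusion table, they are easy to compute directly. A minimal sketch, using made-up label pairs (each pair is a patient's true status and their test result):

```python
# Sensitivity and specificity from paired (actual, predicted) yes/no labels.
# The label lists below are made-up illustrative data.

def sensitivity_specificity(actual, predicted):
    """Return (sensitivity, specificity) for boolean label sequences."""
    tp = sum(a and p for a, p in zip(actual, predicted))          # sick, tested positive
    fn = sum(a and not p for a, p in zip(actual, predicted))      # sick, tested negative
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # healthy, tested negative
    fp = sum(not a and p for a, p in zip(actual, predicted))      # healthy, tested positive
    return tp / (tp + fn), tn / (tn + fp)

# Four sick patients and four healthy ones; the test misses one of each.
actual    = [True, True, True, True,  False, False, False, False]
predicted = [True, True, True, False, False, False, False, True]

sens, spec = sensitivity_specificity(actual, predicted)
print(f"sensitivity: {sens:.2f}, specificity: {spec:.2f}")  # 0.75 and 0.75
```

With 3 of 4 sick patients testing positive and 3 of 4 healthy patients testing negative, both numbers come out to 75%.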