We transform raw scores to make different variables comparable and to make scores within the same distribution easier to interpret. The “z-transformation” is the Rolls Royce of transformations because with it we can compare and interpret scores from virtually any normal distribution of interval or ratio scores. With the help of z-scores, we can easily determine the underlying raw score’s location in a distribution. The z transformation equates or standardizes different distributions, so z-scores are often referred to as standard scores.
Formula Used
The terms "standard score" and "z-score" are usually defined for normal populations; the terms "Z score" and "normal deviate" should only be used in reference to the normal distribution. The transformation from a raw score Y to a Z score can be done using the following formula:
Z = (Y - μ)/σ
Transforming a variable in this way is called "standardizing" the variable. It should be kept in mind that if Y is not normally distributed, then the transformed variable will not be normally distributed either. A Z-score represents the number of standard deviations an element is from the mean.
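As a minimal sketch (using hypothetical values for the raw score, mean, and standard deviation), the formula above can be computed directly:

```python
def z_score(y, mu, sigma):
    """Standardize a raw score y using the formula Z = (Y - mu) / sigma."""
    return (y - mu) / sigma

# Hypothetical example: raw score 75 from a distribution with mean 65, sd 5
z = z_score(75, 65, 5)
print(z)  # 2.0 -- the score lies two standard deviations above the mean
```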
Interpretation of Z-Score
A Z-score can be interpreted as follows:
- If the Z-score is less than zero, the element's value is less than the mean.
- If the Z-score is greater than zero, the element's value is greater than the mean.
- If the Z-score is equal to zero, the element's value is equal to the mean.
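The three rules above amount to a simple sign check, sketched here as a small helper function (illustrative only):

```python
def interpret_z(z):
    """Relate the sign of a Z-score to the element's position relative to the mean."""
    if z < 0:
        return "below the mean"
    if z > 0:
        return "above the mean"
    return "equal to the mean"

print(interpret_z(-1.3))  # below the mean
print(interpret_z(0.0))   # equal to the mean
print(interpret_z(2.1))   # above the mean
```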
The reason that z-scores are so useful is that they directly communicate the relative standing of a raw score. The way to see this is to first envision any sample or population as a z-distribution. A z-distribution is the distribution produced by transforming all raw scores in the data into z-scores. All normal z-distributions are similar, so a particular z-score conveys the same information in every distribution.
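To see a z-distribution concretely, here is a short sketch (with a made-up data set) that transforms every raw score into a z-score; the resulting distribution always has mean 0 and standard deviation 1:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical raw scores

mu = sum(data) / len(data)  # population mean (5.0)
sigma = math.sqrt(sum((y - mu) ** 2 for y in data) / len(data))  # population sd (2.0)

# Transform every raw score into a z-score
z_scores = [(y - mu) / sigma for y in data]
print(z_scores)  # [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
```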
Calculating Z-Score Values from the Table
For instance, assume Z is a normally distributed random variable and you need to compute
P(0.46 < Z < 2.09). First you find the values of M(2.09) and M(0.46) from the table, then you subtract the two values to get the probability.
Every row of the Z-score table gives the Z-score up to the tenths digit, and every column refines the Z-score to the hundredths digit. For example, to find the M(Z) value for Z = 0.46, first locate the row for 0.4. Then find the 0.06 column. Where the row and column intersect is the value for 0.46. From the table below, you can see that M(0.46) = 0.6772. Likewise, M(2.09) = 0.9817.
(These values are taken from the standard normal table.)
You may have noticed that the table does not contain values for -Z. Since the standard normal distribution is symmetric about the mean, you can compute M(-Z) with the relation
M(-Z) = 1 - M(Z)
For instance, to compute P(-0.7 < Z < 0.8), you find M(0.8) = 0.7881 and M(0.7) = 0.758. Consequently, M(-0.7) = 1 - 0.758 = 0.242. Thus,
P(-0.7 < Z < 0.8) = M(0.8) - M(-0.7) = 0.7881 - 0.242 = 0.5461
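Instead of a printed table, the M(Z) values (the standard normal cumulative distribution function) can be computed with the error function from Python's standard library; this sketch reproduces both worked examples above:

```python
import math

def M(z):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# P(0.46 < Z < 2.09) = M(2.09) - M(0.46)
p1 = M(2.09) - M(0.46)
print(round(p1, 4))  # 0.3045

# P(-0.7 < Z < 0.8) = M(0.8) - (1 - M(0.7)), using the symmetry relation
p2 = M(0.8) - (1 - M(0.7))
print(round(p2, 4))  # 0.5462 (the worked value 0.5461 reflects rounded table entries)
```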
Applications of Z-Scores
A common use of z-scores is with diagnostic psychological tests such as intelligence or personality tests. The normal curve and z-scores are also used when researchers create a "statistical definition" of a psychological or sociological attribute. A third important use of z-scores is computing the relative frequency of raw scores. Finally, z-scores can be used to compare scores from different variables.
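As an illustration of that last use (with hypothetical exam parameters), z-scores put scores from different variables on a common scale:

```python
def z_score(y, mu, sigma):
    """Standardize a raw score y: Z = (Y - mu) / sigma."""
    return (y - mu) / sigma

# Hypothetical exams: statistics (mean 80, sd 5) and history (mean 70, sd 10)
stats_z = z_score(89, 80, 5)     # 1.8
history_z = z_score(82, 70, 10)  # 1.2

# The raw scores are on different scales, but the z-scores are directly
# comparable: the statistics score stands higher within its own distribution.
print(stats_z > history_z)  # True
```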