In the early years of the 20th century, when Alfred Binet first devised and administered intelligence tests in Paris, he observed that although most children functioned at or near their age level, the smarter minority could understand and perform work at levels beyond their ages, while the slower minority could not keep up with children their own age. It became convenient to assign each child a mental age based upon the average age level at which she or he could function. Dividing a child's mental age by his or her chronological age and multiplying by 100 yields an "intelligence quotient", or IQ. This measure of intelligence was found to remain remarkably constant throughout life. Scores determined in early childhood tend to be less reliable, but reliability increases with age.
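The ratio definition above amounts to a one-line formula; here is a minimal sketch in Python (the ages used are illustrative, not from the text):

```python
def ratio_iq(mental_age_years, chronological_age_years):
    """Classical ratio IQ: (mental age / chronological age) x 100."""
    return 100 * mental_age_years / chronological_age_years

# A 10-year-old functioning at the level of an average 13-year-old:
print(round(ratio_iq(13, 10)))  # -> 130
```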
This definition worked well enough for children, but it broke down past the middle teens. Mental age plateaus in the middle teens, along with height and other growth parameters. The next step was to use an age of 15 or 16 as the divisor for adult scores. This provided a means for estimating adult intelligence.
It was originally taught that ratio IQ's fell strictly upon a bell-shaped, or Gaussian, error curve. However, this is not the case. Although IQ's follow a bell curve fairly well up to an IQ of about 130, there is a much higher incidence of very high and very low scores than a strict Gaussian distribution would predict. This must have been evident as early as 1921, when Drs. Louis M. Terman and Catherine Miles Cox screened a population of 250,000 California schoolchildren to identify the smartest 1,523 children for a lifetime longitudinal study of giftedness. The study found 77 children with IQ's of 170 or above, where only one or two would have been expected on the basis of a Gaussian prediction--about 50 times the expected frequency. It found 28 children with IQ's of 180 or above, where a Gaussian prediction calls for only one in 3,000,000--336 times the expected frequency. It turned up one child with an IQ of 201, an IQ that would be expected to occur only once in every 7,000,000,000 children--28,000 times the expected frequency! A similar situation exists among the seriously retarded, although the numbers aren't a mirror image of those above the average IQ of 100.
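The Gaussian expectations quoted above can be reproduced from the normal upper tail. A minimal sketch, assuming (my assumption, consistent with the Stanford-Binet convention) a mean of 100 and a standard deviation of 16:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=16)  # assumed Stanford-Binet-style scale

def expected_count(threshold, population):
    """How many people a pure Gaussian predicts at or above a given IQ."""
    tail = 1.0 - iq.cdf(threshold)  # upper-tail probability
    return population * tail

# Among 250,000 schoolchildren, a pure Gaussian predicts:
print(expected_count(170, 250_000))  # roughly 1 to 2 children at 170+
print(expected_count(180, 250_000))  # well under 1 child at 180+
```

The observed counts of 77 and 28 are what make the fat tails so striking.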
It was also difficult to convert scores made on tests like the SAT, the GRE, and the ACT to ratio-IQ scores. Their developers expressed their results in percentiles, but not in mental ages.
In the 1960's (?), in order to correct for this over-representation of people with extreme IQ's, psychometricians began quoting IQ scores on the basis of their rarity rather than using the ratio scores themselves. Psychologists called this new, deflated term for IQ the "deviation IQ", since it is a measure of how far someone deviates from the average IQ of 100. Deviation IQ's exactly fit a Gaussian distribution by definition: knowing someone's percentile score on a test, we derive his or her deviation IQ by reading it off a chart that matches IQ's with percentile rankings. For example, if someone scores in the 96th percentile on an IQ test, the chart assigns him or her a deviation IQ of 128. Deviation IQ's are lower than ratio IQ's, with the disparity rising as the IQ rises. For example, people with ratio IQ's of 130 or above would be expected to comprise about 3% of the population. In practice they constitute about 4% of the population, and their corresponding deviation IQ is 128 or above. About one person with a ratio IQ of 150 or above would be expected in every 1,000 members of the general population. In practice, about one person in every 160 has a ratio IQ of 150 or above. The deviation IQ corresponding to a childhood ratio IQ of 150 is 144. Similarly, one would expect to find someone with a ratio IQ of 160 or above once in every 10,000 members of the general public. In practice, one in every 1,000 people has a ratio IQ of 160 or above, and the deviation IQ assigned to these individuals is 150 or above.
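The chart lookup described above is just the inverse of the normal distribution. A minimal sketch, again assuming a mean of 100 and a standard deviation of 16:

```python
from statistics import NormalDist

def deviation_iq(percentile):
    """Deviation IQ for a percentile rank (0-100), on an assumed 16-point scale."""
    return round(NormalDist(mu=100, sigma=16).inv_cdf(percentile / 100))

print(deviation_iq(96))  # -> 128, matching the example in the text
print(deviation_iq(50))  # -> 100, the population average
```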
Summing it up:
a ratio IQ of 120+ becomes a deviation IQ of 119+;
a ratio IQ of 130+ becomes a deviation IQ of 128+;
a ratio IQ of 140+ becomes a deviation IQ of 136+;
a ratio IQ of 150+ becomes a deviation IQ of 143+;
a ratio IQ of 160+ becomes a deviation IQ of 150+;
a ratio IQ of 170+ becomes a deviation IQ of 156.5+;
a ratio IQ of 180+ becomes a deviation IQ of 162.5+;
a ratio IQ of 190+ becomes a deviation IQ of 168+;
a ratio IQ of 200+ becomes a deviation IQ of 174+;
a ratio IQ of 210+ becomes a deviation IQ of 179+;
a ratio IQ of 220+ becomes a deviation IQ of 184+;
a ratio IQ of 230+ becomes a deviation IQ of 189+;
a ratio IQ of 240+ becomes a deviation IQ of 193+;
a ratio IQ of 250+ becomes a deviation IQ of 198+; and
a ratio IQ of 260+ becomes a deviation IQ of 203+.
A complete conversion chart relating ratio IQ's to deviation IQ's may be found in John Scoville's excellent paper, accessible through Darryl Miyaguchi's indispensable "Uncommonly Difficult IQ Tests" website.
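Short of the full chart, the summary table above can be encoded directly. Here is a minimal sketch that linearly interpolates between its rows (linear interpolation is my assumption; the true relation between the two scales isn't linear):

```python
# Ratio-IQ -> deviation-IQ pairs taken from the summary table above.
TABLE = [(120, 119), (130, 128), (140, 136), (150, 143), (160, 150),
         (170, 156.5), (180, 162.5), (190, 168), (200, 174), (210, 179),
         (220, 184), (230, 189), (240, 193), (250, 198), (260, 203)]

def ratio_to_deviation(ratio_iq):
    """Interpolate a deviation IQ from the table (ratio IQ must lie in 120-260)."""
    for (r0, d0), (r1, d1) in zip(TABLE, TABLE[1:]):
        if r0 <= ratio_iq <= r1:
            frac = (ratio_iq - r0) / (r1 - r0)
            return d0 + frac * (d1 - d0)
    raise ValueError("ratio IQ outside the table's range")

print(ratio_to_deviation(228))  # -> 188.0, matching the vos Savant example below
```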
As an example of the relationship between ratio IQ's and deviation IQ's, we might consider the case of Marilyn vos Savant. When she was 10, Marilyn received a score of 168 on an IQ test (the ceiling of the test). However, by extrapolation (presumably, her measured mental age was 22 years, 10 months), she was assigned a ratio IQ of 228. This corresponds to a deviation IQ of 188. Her adult deviation IQ score on the Mega Test was 186. (Note that this reflected a raw score of 46 out of 48, and was very close to the ceiling of the Mega Test.)
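The extrapolated figure checks out against the ratio formula (the mental age of 22 years, 10 months is, as noted, a presumption):

```python
mental_age = 22 + 10 / 12        # 22 years, 10 months, expressed in years
chronological_age = 10
print(round(100 * mental_age / chronological_age))  # -> 228
```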
Although deviation IQ's deflate ratio IQ's as needed to fit a Gaussian error curve, they don't, in my opinion, replace the meaningful quantitative measure that a ratio IQ affords. I find the idea that Marilyn vos Savant has a mental age 2.28 times that of the average adult to be complementary to the idea that she is at the 1:30,000,000 level of mental ability. I commonly convert deviation IQ's to ratio IQ's using the table above, in order to avail myself of both pieces of information.