How Valid Is It to Judge Intelligence by IQ Scores?

    This is a very good question.
    The idea that a half-hour or one-hour test (or, in the case of the SAT, the GRE, and the ACT, a one-day test) can measure our lifelong ability, or our lifelong lack of ability, ever to reach the highest eyries of insight or invention is a terrifying thought. It is probably particularly intimidating to those of us who judge our "edge" in life to be our intelligence.
    Before we go further, we need to discuss the various flavors of IQ scores subsumed under the rubric "IQ".
    The original definition of IQ for the U.S.'s first popular IQ test, the 1916 revision of the Stanford-Binet, was a 'ratio IQ', and the test's scores had a standard deviation of 16 points of IQ. Later, however, other test designers introduced IQ tests with standard deviations of 15 (no doubt for good reasons).
(1) The Stanford-Binet IQ test and others, such as the Slosson tests, still use a standard deviation of 16.
(2) The Wechsler family of tests uses a standard deviation of 15.

    It's important, in interpreting an IQ number, to know whether the quoted score is based upon a standard deviation of 15 or of 16.
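Since both scales share a mean of 100, converting between them is just a matter of passing through the common z-score. A minimal sketch in Python (the function name is my own):

```python
def convert_iq(score, sd_from=16, sd_to=15):
    """Convert an IQ score between scales with different standard deviations.

    Both scales share a mean of 100; the z-score is the common currency.
    """
    z = (score - 100) / sd_from
    return 100 + z * sd_to

# A Stanford-Binet (SD 16) score of 132 sits two standard deviations
# above the mean, which corresponds to a Wechsler (SD 15) score of 130.
print(convert_iq(132))            # SD-16 -> SD-15: 130.0
print(convert_iq(130, 15, 16))    # SD-15 -> SD-16: 132.0
```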
    A second possible source of confusion concerns ratio IQ's versus deviation IQ's. Like heights, IQ's don't exactly fit a Gaussian distribution: ultra-high IQ's appear much more frequently than a Gaussian distribution would predict. For example, we would expect to find only one IQ of 160 or above in every 11,000 people; instead, we find one in every 1,100. We would expect to find only one IQ of 180+ in every 3,500,000 people; instead, we find one in every 20,000. We would expect to find only one IQ of 200+ in every 5,000,000,000 people; instead, we find one in every 500,000. An IQ of 220 would be so improbable that we could consider it virtually impossible. However, psychometrists have found several individuals with childhood IQ's this high. (The highest IQ officially on record... 228... belongs to Marilyn vos Savant, although I have been privately informed of the existence of someone with a childhood IQ of 242.)
Bottom line: IQ's only approximately fit a bell curve, diverging from it rapidly as IQ's climb more than two standard deviations above the mean.
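The "expected" frequencies above are simply Gaussian upper-tail probabilities, computed on the SD-16 scale. A short sketch (the function name is my own):

```python
from math import erfc, sqrt

def gaussian_rarity(iq, mean=100, sd=16):
    """Expected rarity (one person in N) of scoring at or above `iq`,
    if IQ's followed a Gaussian distribution exactly."""
    z = (iq - mean) / sd
    tail = 0.5 * erfc(z / sqrt(2))  # upper-tail probability P(Z >= z)
    return 1 / tail

# With SD 16, the Gaussian prediction matches the figures quoted above:
print(round(gaussian_rarity(160)))  # roughly 1 in 11,000
print(round(gaussian_rarity(180)))  # roughly 1 in 3,500,000
print(round(gaussian_rarity(200)))  # roughly 1 in 5,000,000,000
```

The observed frequencies (1 in 1,100; 1 in 20,000; 1 in 500,000) are what make the divergence from the bell curve so striking.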
    Actually, insofar as I'm aware, there's nothing that says that IQ's have to follow any particular kind of distribution curve at all.
    If heights were distributed in accordance with a Gaussian distribution, virtually no one would be more than seven feet tall. In reality, there is a pituitary condition, gigantism (closely related to acromegaly), in which men have been known to grow to be almost nine feet tall... a height that would be virtually impossible within a strictly Gaussian descriptive framework.
    Conventional (ratio) IQ's fit children well enough. However, just as children's physical growth slows during adolescence, the growth of mental age also slows, and then plateaus in the mid- to late 20's (although it should be noted that certain aspects of intelligence, such as vocabulary and, perhaps, general information, may continue to rise into one's 60's). From 1916 through 1960, psychometrists employed the concept of adult mental age, dividing the adult mental age by 16 (their compromise as an age at which mental growth could be considered complete) to arrive at an adult IQ. However, adult mental ages are a synthetic construct, and it may not be that easy to calibrate an adult IQ test.
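The classic ratio-IQ arithmetic, with the age-16 cap described above, can be sketched as follows (the function name is my own; the cap-at-16 convention is as the text describes it):

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: mental age divided by chronological age, times 100.

    Per the pre-1960 convention, chronological age is capped at 16,
    the age at which mental growth was considered complete.
    """
    effective_age = min(chronological_age, 16)
    return 100 * mental_age / effective_age

print(ratio_iq(12, 10))   # a 10-year-old performing like a 12-year-old: 120.0
print(ratio_iq(20, 35))   # an adult "mental age" of 20 yields 125.0
```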
    In any case, around 1960, psychometrists decided to switch to percentile rankings on adult intelligence tests. They could directly and unambiguously measure percentile rankings, but they couldn't directly measure adult mental ages. Universities were already using percentile rankings to assess new applicants. Psychometrists could then assign IQ's to adults by assuming that IQ's strictly fit a bell curve: this is the 'deviation IQ'.
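Mapping a measured percentile ranking onto an assumed bell curve is just the inverse of the normal cumulative distribution. A minimal sketch using Python's standard library (the function name is my own):

```python
from statistics import NormalDist

def deviation_iq(percentile, mean=100, sd=15):
    """Deviation IQ: map a measured percentile ranking onto a bell curve.

    `percentile` is a fraction in (0, 1), e.g. 0.98 for the 98th percentile.
    """
    return NormalDist(mean, sd).inv_cdf(percentile)

# The 98th percentile corresponds to a deviation IQ of about 131 (SD 15);
# the 50th percentile is, by construction, exactly 100.
print(round(deviation_iq(0.98)))
print(round(deviation_iq(0.50)))
```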

    First, IQ tests do a remarkably good job of measuring IQ reliably: huge jumps in IQ aren't common. It is certainly possible to do poorly on an IQ test, but usually, subsequent testing with an eye to any special problems the test-taker might be facing (such as poor vision) will lead to higher scores once those problems are addressed and corrected. In the early years of the 20th century, psychologists still subscribed to the 19th-century notion of independent mental faculties. Faculties such as perception, conception, judgement, reason, recollection, memory, imagination, intuition, wisdom, discernment, discrimination, and aesthetic sensitivity were considered to be independent of one another. However, no one had educed a quantitative method to put this notion to the test--until Charles Spearman. Dr. Spearman invented factor analysis and put these ideas to the test: he found that scores on these supposedly independent faculties were positively correlated, pointing to a single general factor of intelligence, which he called g.
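The idea behind Spearman's finding can be illustrated with a toy calculation: if several subtests are all positively correlated, the leading eigenvector of their correlation matrix loads positively on every subtest, acting as a stand-in for g. The correlation matrix below is invented purely for illustration, not real data:

```python
# Hypothetical correlations among four subtests (e.g. vocabulary,
# reasoning, memory, spatial); all positive, as Spearman observed.
R = [
    [1.00, 0.60, 0.50, 0.40],
    [0.60, 1.00, 0.55, 0.45],
    [0.50, 0.55, 1.00, 0.50],
    [0.40, 0.45, 0.50, 1.00],
]

def leading_factor(matrix, iterations=100):
    """Power iteration: returns the leading eigenvector, normalized to unit length."""
    v = [1.0] * len(matrix)
    for _ in range(iterations):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

g_loadings = leading_factor(R)
print([round(x, 2) for x in g_loadings])  # every subtest loads positively
```

This is only a sketch of the intuition; Spearman's actual method of tetrad differences predates modern eigenvector computations.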

    In the end, I can only say that I have personally found IQ scores to be a very faithful measure of what I would deem intelligence. People with high IQ's have larger vocabularies, a greater range of knowledge, and quicker understanding.

    Beyond that, consider the distance between someone with an IQ of 125 and someone with an IQ of 75: one may be the school principal and the other the janitor. It's sobering to ponder whether we might seem equally obtuse to someone with a ratio IQ 50 points above our own (e.g., a ratio IQ of 175 and a deviation IQ of 160, or a ratio IQ of 190 and a deviation IQ of 168).