Intelligence quotient

[Figure: An example of one kind of IQ test item, modeled after items in the Raven's Progressive Matrices test.]

Diagnostics: ICD-10-PCS Z01.8; ICD-9-CM 94.01
An intelligence quotient (IQ) is a score derived from one of several standardized tests designed to assess human intelligence. The abbreviation "IQ" was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests he advocated in a 1912 book.[1] When current IQ tests are developed, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below the median is defined as 15 IQ points higher or lower,[2] although this was not always so historically. By this definition, approximately two-thirds of the population scores between IQ 85 and IQ 115, and about 5 percent of the population scores above 125.[3][4]
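These population proportions follow directly from treating IQ as normally distributed with mean 100 and SD 15. The short sketch below (Python with scipy assumed available; purely illustrative, not part of any test's scoring procedure) reproduces the figures quoted above.

```python
# Illustrative only: population shares implied by an IQ distribution
# defined as normal with mean 100 and standard deviation 15.
from scipy.stats import norm

iq = norm(loc=100, scale=15)

within_one_sd = iq.cdf(115) - iq.cdf(85)   # IQ 85-115
above_125 = 1 - iq.cdf(125)                # IQ above 125

print(f"IQ 85-115: {within_one_sd:.1%}")   # ~68%, roughly two-thirds
print(f"IQ > 125:  {above_125:.1%}")       # ~4.8%, about 5 percent
```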
IQ scores have been shown to be associated with such factors as morbidity and mortality,[5][6] parental social status,[7] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[8][9] and the mechanisms of inheritance.[10]
IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always show corresponding improvements in cognitive abilities such as memory, attention, and speed.[11] In research contexts, IQ scores have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of around three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.
Historically, even before IQ tests were invented, there were attempts to classify people into intelligence categories by observing their behavior in daily life.[12][13] Those other forms of behavioral observation are still important for validating classifications based primarily on IQ test scores. Both intelligence classification by observation of behavior outside the testing room and classification by IQ testing depend on the definition of "intelligence" used in a particular case and on the reliability and error of estimation in the classification procedure.
The English statistician Francis Galton made the first attempt at creating a standardized test for rating a person's intelligence. A pioneer of psychometrics and the application of statistical methods to the study of human diversity and the study of inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance).[14][15][16] He hypothesized that there should exist a correlation between intelligence and other observable traits such as reflexes, muscle grip, and head size.[17] He set up the first mental testing centre in the world in 1882 and he published "Inquiries into Human Faculty and Its Development" in 1883, in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation, and he eventually abandoned this research.[18][19]
French psychologist Alfred Binet, together with Victor Henri and Théodore Simon, had more success in 1905, when they published the Binet-Simon test, which focused on verbal abilities. It was intended to identify mental retardation in school children,[20] in specific contradistinction to claims made by psychiatrists that these children were "sick" (not "slow") and should therefore be removed from school and cared for in asylums.[21] The score on the Binet-Simon scale would reveal the child's mental age. For example, a six-year-old child who passed all the tasks usually passed by six-year-olds, but nothing beyond, would have a mental age that matched his chronological age, 6.0 (Fancher, 1985). Binet thought that intelligence was multifaceted, but came under the control of practical judgement.
In Binet's view, there were limitations with the scale and he stressed what he saw as the remarkable diversity of intelligence and the subsequent need to study it using qualitative, as opposed to quantitative, measures (White, 2000). American psychologist Henry H. Goddard published a translation of it in 1910. American psychologist Lewis Terman at Stanford University revised the Binet-Simon scale, which resulted in the Stanford-Binet Intelligence Scales (1916). It became the most popular test in the United States for decades.[20][22][23][24]
The many different kinds of IQ tests include a wide variety of item content. Some test items are visual, while many are verbal. Test items vary from being based on abstract-reasoning problems to concentrating on arithmetic, vocabulary, or general knowledge.
The British psychologist Charles Spearman in 1904 made the first formal factor analysis of correlations between the tests. He observed that children's school grades across seemingly unrelated school subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests. He suggested that all mental performance could be conceptualized in terms of a single general ability factor and a large number of narrow task-specific ability factors. Spearman named it g for "general factor" and labeled the specific factors or abilities for specific tasks s. In any collection of test items that make up an IQ test, the score that best measures g is the composite score that has the highest correlations with all the item scores. Typically, the "g-loaded" composite score of an IQ test battery appears to involve a common strength in abstract reasoning across the test's item content. Therefore, Spearman and others have regarded g as closely related to the essence of human intelligence.
Spearman's argument proposing a general factor of human intelligence is still accepted in principle by many psychometricians. Today's factor models of intelligence typically represent cognitive abilities as a three-level hierarchy, where there are a large number of narrow factors at the bottom of the hierarchy, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, referred to as the g factor, which represents the variance common to all cognitive tasks. However, this view is not universally accepted; other factor analyses of the data, with different results, are possible. Some psychometricians regard g as a statistical artifact.
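The "positive manifold" that motivated Spearman's g can be illustrated with a small simulation. The sketch below uses made-up data and Python/numpy (an assumed toolchain, not anything from the cited literature): every simulated subtest loads on one latent factor, all pairwise correlations come out positive, and the first principal component serves as a g-loaded composite.

```python
# Minimal sketch of a one-factor ("g") structure: each simulated subtest
# score is a shared latent factor plus test-specific noise, so all
# subtests correlate positively and the first principal component acts
# as a g-loaded composite. Hypothetical data, not any real test battery.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 1000, 6

g = rng.standard_normal(n_people)                    # latent general factor
loadings = np.linspace(0.5, 0.8, n_subtests)         # assumed g-loadings
noise = rng.standard_normal((n_people, n_subtests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

corr = np.corrcoef(scores, rowvar=False)             # all-positive correlations
eigvals, eigvecs = np.linalg.eigh(corr)
composite = scores @ eigvecs[:, -1]                  # g-like composite score

print(np.round(corr, 2))
print("correlation of composite with latent g:",
      round(abs(np.corrcoef(composite, g)[0, 1]), 2))
```

Real factor analyses of test batteries are considerably more elaborate, including the hierarchical models described above; the sketch only shows why positively correlated subtests admit a single dominant factor.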
During World War I, a way was needed to evaluate and assign Army recruits to appropriate tasks. This led to the rapid development of several mental tests. The testing generated controversy and much public debate in the United States. Nonverbal or "performance" tests were developed for those who could not speak English or were suspected of malingering.[20] After the war, positive publicity promoted by army psychologists helped to make psychology a respected field.[25] Subsequently, there was an increase in jobs and funding in psychology in the United States.[26] Group intelligence tests were developed and became widely used in schools and industry.[27]
L.L. Thurstone argued for a model of intelligence comprising seven largely unrelated factors (verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and inductive reasoning). While not widely used, Thurstone's model influenced later theories.[20]
David Wechsler produced the first version of his test in 1939. It gradually became more popular and overtook the Stanford-Binet in the 1960s. It has been revised several times, as is common for IQ tests, to incorporate new research. One explanation is that psychologists and educators wanted more information than the single score from the Binet. Wechsler's ten or more subtests provided this. Another is that Stanford-Binet test reflected mostly verbal abilities, while the Wechsler test also reflected nonverbal abilities. The Stanford-Binet has also been revised several times and is now similar to the Wechsler in several aspects, but the Wechsler continues to be the most popular test in the United States.[20]
Raymond Cattell (1941) proposed two types of cognitive abilities in a revision of Spearman's concept of general intelligence. Fluid intelligence (Gf) was hypothesized as the ability to solve novel problems by using reasoning, and crystallized intelligence (Gc) was hypothesized as a knowledge-based ability that was very dependent on education and experience. In addition, fluid intelligence was hypothesized to decline with age, while crystallized intelligence was largely resistant to the effects of aging. The theory was almost forgotten, but was revived by his student John L. Horn (1966) who later argued Gf and Gc were only two among several factors, and who eventually identified nine or ten broad abilities. The theory continued to be called Gf-Gc theory.[20]
John B. Carroll (1993), after a comprehensive reanalysis of earlier data, proposed the three stratum theory, which is a hierarchical model with three levels. The bottom stratum consists of narrow abilities that are highly specialized (e.g., induction, spelling ability). The second stratum consists of broad abilities. Carroll identified eight second-stratum abilities. Carroll accepted Spearman's concept of general intelligence, for the most part, as a representation of the uppermost, third stratum.[28][29]
In 1999, a merging of the Gf-Gc theory of Cattell and Horn with Carroll's three-stratum theory led to the Cattell–Horn–Carroll theory (CHC theory). It has greatly influenced many of the current broad IQ tests.[20]
In CHC theory, a hierarchy of factors is used; g is at the top. Under it are ten broad abilities that in turn are subdivided into seventy narrow abilities. The broad abilities are:[20]

- Fluid intelligence (Gf)
- Crystallized intelligence (Gc)
- Quantitative reasoning (Gq)
- Reading and writing ability (Grw)
- Short-term memory (Gsm)
- Long-term storage and retrieval (Glr)
- Visual processing (Gv)
- Auditory processing (Ga)
- Processing speed (Gs)
- Decision/reaction time/speed (Gt)
Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement and not IQ.[20] Gt may be difficult to measure without special equipment. g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and the verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.[20] Modern comprehensive IQ tests do not stop at reporting a single IQ score. Although they still give an overall score, they now also give scores for many of these more restricted abilities, identifying particular strengths and weaknesses of an individual.[20]
J.P. Guilford's Structure of Intellect (1967) model used three dimensions which when combined yielded a total of 120 types of intelligence. It was popular in the 1970s and early 1980s, but faded owing to both practical problems and theoretical criticisms.[20]
Alexander Luria's earlier work on neuropsychological processes led to the PASS theory (1997). It argued that only looking at one general factor was inadequate for researchers and clinicians who worked with learning disabilities, attention disorders, intellectual disability, and interventions for such disabilities. The PASS model covers four kinds of processes (planning, attention/arousal, simultaneous processing, and successive processing). The planning processes involve decision making, problem solving, and performing activities, and require goal setting and self-monitoring. The attention/arousal process involves selectively attending to a particular stimulus, ignoring distractions, and maintaining vigilance. Simultaneous processing involves the integration of stimuli into a group and requires the observation of relationships. Successive processing involves the integration of stimuli into serial order. The planning and attention/arousal components come from structures located in the frontal lobe, and the simultaneous and successive processes come from structures located in the posterior region of the cortex.[30][31][32] The theory has influenced some recent IQ tests and has been seen as a complement to the Cattell-Horn-Carroll theory described above.[20]
There are a variety of individually administered IQ tests in use in the English-speaking world.[33][34] The most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale for adults and the Wechsler Intelligence Scale for Children for school-age test-takers. Other commonly used individual IQ tests (some of which do not label their standard scores as "IQ" scores) include the current versions of the Stanford-Binet, Woodcock-Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scales.
IQ scales are ordinally scaled.[35][36][37][38][39] While one standard deviation is 15 points, and two SDs are 30 points, and so on, this does not imply that mental ability is linearly related to IQ, such that IQ 50 means half the cognitive ability of IQ 100. In particular, IQ points are not percentage points.
On a related note, this fixed standard deviation means that the proportion of the population who have IQs in a particular range is theoretically fixed, and current Wechsler tests only give Full Scale IQs between 40 and 160. This should be borne in mind when considering reports of people with much higher IQs.[40][41]
Psychometricians generally regard IQ tests as having high statistical reliability.[7][42] A high reliability implies that – although test-takers may have varying scores when taking the same test on differing occasions, and although they may have varying scores when taking different IQ tests at the same age – the scores generally agree with one another and across time. Like all statistical quantities, any particular estimate of IQ has an associated standard error that measures uncertainty about the estimate. For modern tests, the standard error of measurement is about three points. Clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes.[20][43][44]
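As a concrete illustration of what a roughly three-point standard error of measurement means in practice, the small sketch below (illustrative only; actual test manuals specify their own SEMs and reporting conventions) computes an approximate 95% confidence band around an observed score.

```python
# Sketch: approximate 95% confidence band around an observed IQ score,
# using the "about three points" standard error of measurement noted
# above. Illustrative only; not a clinical reporting rule.
def iq_confidence_band(observed_iq: float, sem: float = 3.0, z: float = 1.96):
    half_width = z * sem
    return observed_iq - half_width, observed_iq + half_width

low, high = iq_confidence_band(110)
print(f"Observed 110 -> roughly {low:.0f} to {high:.0f}")  # about 104 to 116
```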
The same pupils can obtain somewhat different scores on different IQ test batteries, as in the following example:

Pupil | KABC-II | WISC-III | WJ-III
---|---|---|---
Asher | 90 | 95 | 111 |
Brianna | 125 | 110 | 105 |
Colin | 100 | 93 | 101 |
Danica | 116 | 127 | 118 |
Elpha | 93 | 105 | 93 |
Fritz | 106 | 105 | 105 |
Georgi | 95 | 100 | 90 |
Hector | 112 | 113 | 103 |
Imelda | 104 | 96 | 97 |
Jose | 101 | 99 | 86 |
Keoku | 81 | 78 | 75 |
Leo | 116 | 124 | 102 |
Since the early 20th century, raw scores on IQ tests have increased in most parts of the world.[47][48][49] When a new version of an IQ test is normed, the standard scoring is set so performance at the population median results in a score of IQ 100. The phenomenon of rising raw score performance means if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. This phenomenon was named the Flynn effect in the book The Bell Curve after James R. Flynn, the author who did the most to bring this phenomenon to the attention of psychologists.[50][51]
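As a back-of-the-envelope illustration of the size of this effect (the years below are hypothetical, chosen only for round numbers):

```latex
3\ \tfrac{\text{IQ points}}{\text{decade}} \times 5\ \text{decades} \approx 15\ \text{IQ points}
```

so a raw performance at the median of a 1950 norming sample, if scored against norms set five decades later, would correspond to roughly IQ 85 rather than 100.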
Researchers have been exploring the issue of whether the Flynn effect is equally strong on performance of all kinds of IQ test items, whether the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what possible causes of the effect might be.[52] A 1998 textbook, IQ and Human Intelligence, by N. J. Mackintosh, noted that before Flynn published his major papers, many psychologists mistakenly believed that there were dysgenic trends gradually reducing the level of intelligence in the general population. They also believed that no environmental factor could possibly have a strong effect on IQ. Mackintosh noted that Flynn's observations have prompted much new research in psychology and "demolish some long-cherished beliefs, and raise a number of other interesting issues along the way."[48]
IQ can change to some degree over the course of childhood.[53] However, in one longitudinal study, the mean IQ scores of tests at ages 17 and 18 were correlated at r=0.86 with the mean scores of tests at ages five, six, and seven and at r=0.96 with the mean scores of tests at ages 11, 12, and 13.[7]
For decades practitioners' handbooks and textbooks on IQ testing have reported IQ declines with age after the beginning of adulthood. However, later researchers pointed out this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect. A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences in different age groups of adults. Current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to gain accurate data. It is unclear whether any lifestyle intervention can preserve fluid intelligence into older ages.[54]
The exact peak age of fluid intelligence or crystallized intelligence remains elusive. Cross-sectional studies usually show that fluid intelligence in particular peaks at a relatively young age (often in early adulthood), while longitudinal data mostly show that intelligence is stable until mid-adulthood or later. Subsequently, intelligence seems to decline slowly.[55]
Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate.
Heritability is defined as the proportion of variance in a trait which is attributable to genotype within a defined population in a specific environment. A number of points must be considered when interpreting heritability.[56] Heritability measures the proportion of 'variation' in a trait that can be attributed to genes, and not the proportion of a trait caused by genes. The value of heritability can change if the impact of environment (or of genes) in the population is substantially altered. A high heritability of a trait does not mean environmental effects, such as learning, are not involved. Since heritability increases during childhood and adolescence, one should be cautious drawing conclusions regarding the role of genetics and environment from studies where the participants are not followed until they are adults.
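In symbols, heritability is the ratio of genetic to total phenotypic variance. One classical (and here purely illustrative) way such estimates are obtained is Falconer's twin-study approximation, which compares identical (MZ) and fraternal (DZ) twin correlations; the studies reviewed in this article use more sophisticated model-fitting methods.

```latex
h^2 = \frac{\sigma^2_{\text{genetic}}}{\sigma^2_{\text{phenotypic}}},
\qquad
\hat{h}^2 \approx 2\,\left(r_{\text{MZ}} - r_{\text{DZ}}\right)
```

For example, hypothetical twin correlations of 0.75 (identical twins) and 0.50 (fraternal twins) would give an estimate of about 0.5, in line with the overall figure cited in the next paragraph.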
The general figure for heritability of IQ is about 0.5 across multiple studies in varying populations.[57] It may seem reasonable to expect genetic influences on traits like IQ to become less important as one gains experience with age. However, the opposite occurs. Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.8 in adulthood.[58][59] One proposed explanation is that people with different genes tend to reinforce the effects of those genes, for example by seeking out different environments.[7] Debate is ongoing about whether these heritability estimates are too high, owing to inadequate consideration of various factors, such as the environment being relatively more important in families with low socioeconomic status, or the effect of the maternal (fetal) environment.
Family members have aspects of environments in common (for example, characteristics of the home). This shared family environment accounts for 0.25–0.35 of the variation in IQ in childhood. By late adolescence, it is quite low (zero in some studies). The effect for several other psychological traits is similar. These studies have not looked at the effects of extreme environments, such as in abusive families.[7][60][61][62]
Although parents treat their children differently, such differential treatment explains only a small amount of nonshared environmental influence. One suggestion is that children react differently to the same environment because of different genes. More likely influences may be the impact of peers and other experiences outside the family.[7][61]
A very large proportion of the over 17,000 human genes are thought to have an effect on the development and functionality of the brain.[63] While a number of individual genes have been reported to be associated with IQ, none have a strong effect. Deary and colleagues (2009) reported that no finding of a strong gene effect on IQ has been replicated.[64] Most reported associations of genes with intelligence are false positive results.[65] Recent findings of gene associations with normally varying intelligence differences in adults continue to show weak effects for any one gene;[66] likewise in children.[67]
David Rowe reported an interaction of genetic effects with socioeconomic status, such that heritability was high in high-SES families but much lower in low-SES families.[68] This has been replicated in infants,[69] children,[70] and adolescents[71] in the US, though not outside the US; for instance, a reverse result was reported in the UK.[72]
Dickens and Flynn (2001) have argued that genes for high IQ initiate environment-shaping feedback, as genetic effects cause bright children to seek out more stimulating environments that further increase IQ. In their model, environment effects decay over time (the model could be adapted to include possible factors, like nutrition in early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they caused children to persist in seeking out cognitively demanding experiences.[73][74]
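The following is a toy sketch, not Dickens and Flynn's actual model, of the kind of feedback they describe: a small genetic edge nudges a child toward slightly more stimulating environments, the environmental effect partly decays each period, and the resulting steady-state advantage ends up larger than the direct genetic effect alone. All parameter values below are invented purely for illustration.

```python
# Toy illustration of genotype-environment feedback with decaying
# environmental effects (NOT the published Dickens-Flynn equations;
# all parameter values are made up for illustration).
def steady_state_advantage(genetic_edge: float,
                           env_gain: float = 0.3,  # how strongly ability shapes environment
                           decay: float = 0.5,     # how much of past environment persists
                           periods: int = 50) -> float:
    env_effect = 0.0
    ability = genetic_edge
    for _ in range(periods):
        env_effect = decay * env_effect + env_gain * ability
        ability = genetic_edge + env_effect  # environment feeds back into ability
    return ability

print(round(steady_state_advantage(1.0), 2))  # ~2.5, amplified well beyond the direct edge of 1.0
```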
In general, educational interventions, such as those described below, have shown short-term effects on IQ, but long-term follow-up is often missing. For example, in the US very large intervention programs such as the Head Start Program have not produced lasting gains in IQ scores. More intensive but much smaller projects, such as the Abecedarian Project, have reported lasting effects, often on socioeconomic status variables rather than on IQ.[7]
Recent studies have shown that training in using one's working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of the transfer of fluid intelligence from specifically designed working memory training.[75] Further research will be needed to determine the nature, extent, and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable over extended periods of time.[76]
Musical training in childhood has been found to correlate with higher-than-average IQ.[77][78] However, multiple attempted replications (e.g.[79]) have shown that this is at best a short-term effect (lasting no longer than 10 to 15 minutes), and is not related to a lasting increase in IQ.[80]
In 2004, Schellenberg devised an experiment to test his hypothesis that music lessons can enhance the IQ of children. A sample of 144 six-year-old children was divided into four groups, which received keyboard lessons, vocal lessons, drama lessons, or no lessons at all, for 36 weeks. The children's IQ was measured both before and after the lessons using the Wechsler Intelligence Scale for Children–Third Edition, the Kaufman Test of Educational Achievement, and the Parent Rating Scale of the Behavioral Assessment System for Children. All four groups showed increases in IQ, most likely resulting from the children's entry into grade school. The notable difference was that the two music groups showed a slightly higher increase in IQ than the two control groups: the control groups gained on average 4.3 IQ points, while the music groups gained 7.0 points. Though the increases in IQ were not dramatic, the results suggest that music lessons taken at a young age can have a positive effect for children. It is hypothesized that the improvement occurs because music lessons provide multiple experiences that generate progress across a wide range of abilities. Testing this hypothesis, however, has proven difficult.[81]
Several neurophysiological factors have been correlated with intelligence in humans, including the ratio of brain weight to body weight and the size, shape and activity level of different parts of the brain. Specific features that may affect IQ include the size and shape of the frontal lobes, the amount of blood and chemical activity in the frontal lobes, the total amount of gray matter in the brain, the overall thickness of the cortex and the glucose metabolic rate.[82]
Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, or may be partially or wholly compensated for by later growth. A cohort study has also found a relationship between familial inbreeding and modest cognitive impairment among children, providing evidence for inbreeding depression on intellectual performance after environmental and socioeconomic variables were taken into account.[83]
Since about 2010, researchers such as Eppig, Hassel, and MacKenzie have found a close and consistent link between IQ scores and infectious diseases, especially in infant and preschool populations and in the mothers of these children.[84] They have postulated that fighting infectious diseases strains the child's metabolism and prevents full brain development. Hassel postulated that this is by far the most important factor in determining population IQ. However, they also found that subsequent factors such as good nutrition and regular quality schooling can offset early negative effects to some extent.
Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Improvements in nutrition, and in public policy in general, have been implicated in worldwide IQ increases.
Cognitive epidemiology is a field of research that examines the associations between intelligence test scores and health. Researchers in the field argue that intelligence measured at an early age is an important predictor of later health and mortality differences.
The American Psychological Association's report "Intelligence: Knowns and Unknowns" states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. This means that the explained variance is 25%. Achieving good grades depends on many factors other than IQ, such as "persistence, interest in school, and willingness to study" (p. 81).[7]
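The step from the correlation to "explained variance" is simply the square of the correlation coefficient:

```latex
\text{explained variance} = r^2 = (0.50)^2 = 0.25 = 25\%
```

The same rule underlies the other variance figures in this article; for example, the correlation of about −0.20 between IQ and crime reported later corresponds to (−0.20)² = 4% of the variance.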
It has been found that the correlation of IQ scores with school performance depends on the IQ measurement used. For undergraduate students, the Verbal IQ as measured by WAIS-R has been found to correlate significantly (0.53) with the GPA of the last 60 hours. In contrast, Performance IQ correlation with the same GPA was only 0.22 in the same study.[85]
Some measures of educational aptitude correlate highly with IQ tests. For instance, Frey and Detterman (2004) reported a correlation of 0.82 between g (the general intelligence factor) and SAT scores;[86] another study found a correlation of 0.81 between g and GCSE scores, with the explained variance ranging "from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design".[87]
According to Schmidt and Hunter, "for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability."[88] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[89] The correlations were higher when the unreliability of measurement methods was controlled for.[7] While IQ is more strongly correlated with reasoning and less so with motor function,[90] IQ-test scores predict performance ratings in all occupations.[88] That said, for highly qualified activities (research, management) low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally skilled activities, athletic strength (manual strength, speed, stamina, and coordination) is more likely to influence performance.[88] It is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance.
In establishing a causal direction to the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[91] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predict academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math beyond the effect of general cognitive ability.[92]
The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs.[93]
While it has been suggested that "in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much",[94][95] large-scale longitudinal studies indicate an increase in IQ translates into an increase in performance at all levels of IQ: i.e. ability and job performance are monotonically linked at all IQ levels.[96][97] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independently of family background.[98]
The link from IQ to wealth is much less strong than that from IQ to job performance. Some studies indicate that IQ is unrelated to net worth.[99][100]
The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that IQ scores accounted for (explained variance) about a quarter of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[7]
In a meta-analysis Strenze (2006) reviewed much of the literature and estimated the correlation between IQ and income to be about 0.23.[101]
Some studies claim that IQ only accounts for (explains) a sixth of the variation in income because many studies are based on young adults, many of whom have not yet reached their peak earning capacity, or even completed their education. On p. 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (one sixth, or 16%, of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance).
A 2002 study[102] further examined the impact of non-IQ factors on income and concluded that an individual's location, inherited wealth, race, and schooling are more important as factors in determining income than IQ.
The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that the correlation between IQ and crime was −0.2. It was −0.19 between IQ scores and number of juvenile offenses in a large Danish sample; with social class controlled, the correlation dropped to −0.17. A correlation of 0.20 means that the explained variance is 4%. It is important to realize that the causal links between psychometric ability and social outcomes may be indirect. Children with poor scholastic performance may feel alienated. Consequently, they may be more likely to engage in delinquent behavior, compared to other children who do well.[7]
In his book The g Factor (1998), Arthur Jensen cited data which showed that, regardless of race, people with IQs between 70 and 90 have higher crime rates than people with IQs below or above this range, with the peak range being between 80 and 90.
The 2009 Handbook of Crime Correlates stated that reviews have found that around eight IQ points, or 0.5 SD, separate criminals from the general population, especially for persistent serious offenders. It has been suggested that this simply reflects that "only dumb ones get caught" but there is similarly a negative relation between IQ and self-reported offending. That children with conduct disorder have lower IQ than their peers "strongly argues" for the theory.[103]
A study of the relationship between US county-level IQ and US county-level crime rates found that higher average IQs were associated with lower levels of property crime, burglary, larceny, motor vehicle theft, violent crime, robbery, and aggravated assault. These results were not "confounded by a measure of concentrated disadvantage that captures the effects of race, poverty, and other social disadvantages of the county."[104][105]
The American Psychological Association's 1995 report Intelligence: Knowns and Unknowns stated that the correlations for most "negative outcome" variables are typically smaller than 0.20, which means that the explained variance is less than 4%.[7]
Tambs et al.[106][better source needed] found that occupational status, educational attainment, and IQ are individually heritable; and further found that "genetic variance influencing educational attainment ... contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ." In a sample of U.S. siblings, Rowe et al.[107] report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.
IQ associated with various educational and occupational accomplishments, as reported by different tests and studies:

Accomplishment | IQ | Test/study | Year
---|---|---|---
MDs, JDs, and PhDs | 125 | WAIS-R | 1987
College graduates | 112 | KAIT | 2000
College graduates | 112 | K-BIT | 1992
College graduates | 115 | WAIS-R |
1–3 years of college | 104 | KAIT |
1–3 years of college | 104 | K-BIT |
1–3 years of college | 105–110 | WAIS-R |
Clerical and sales workers | 100–105 | |
High school graduates, skilled workers (e.g., electricians, cabinetmakers) | 100 | KAIT |
High school graduates, skilled workers (e.g., electricians, cabinetmakers) | 100 | WAIS-R |
High school graduates, skilled workers (e.g., electricians, cabinetmakers) | 97 | K-BIT |
1–3 years of high school (completed 9–11 years of school) | 94 | KAIT |
1–3 years of high school (completed 9–11 years of school) | 90 | K-BIT |
1–3 years of high school (completed 9–11 years of school) | 95 | WAIS-R |
Semi-skilled workers (e.g., truck drivers, factory workers) | 90–95 | |
Elementary school graduates (completed eighth grade) | 90 | |
Elementary school dropouts (completed 0–7 years of school) | 80–85 | |
Have 50/50 chance of reaching high school | 75 | |
Average IQ of various occupational groups:

Occupational group | Average IQ
---|---
Professional and technical | 112
Managers and administrators | 104
Clerical workers, sales workers, skilled workers, craftsmen, and foremen | 101
Semi-skilled workers (operatives, service workers, including private household) | 92
Unskilled workers | 87
Type of work that can be accomplished at various IQ levels:

Type of work | IQ
---|---
Adults can harvest vegetables, repair furniture | 60
Adults can do domestic work | 50
There is considerable variation within and overlap among these categories. People with high IQs are found at all levels of education and occupational categories. The biggest difference occurs for low IQs with only an occasional college graduate or professional scoring below 90.[20]
Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups and sexes. While there is little scholarly debate about the existence of some of these differences, their causes remain highly controversial both within academia and in the public sphere.
Most IQ tests are constructed so that there are no overall score differences between females and males.[7][111] Popular IQ batteries such as the WAIS and the WISC-R are also constructed in order to eliminate sex differences.[112] In a paper presented at the International Society for Intelligence Research in 2002, it was pointed out that because test constructors and the Educational Testing Service (which developed the SAT) often eliminate items showing marked sex differences in order to reduce the perception of bias, the "true sex" difference is masked. Items like the MRT and RT tests, which show a male advantage in IQ, are often removed.[113]
The 1996 Task Force investigation on Intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across races.[7] The problem of determining the causes underlying this variation relates to the question of the contributions of "nature and nurture" to IQ. Psychologists such as Alan S. Kaufman[114] and Nathan Brody[115] and statisticians such as Bernie Devlin[116] argue that there are insufficient data to conclude that this is because of genetic influences. A review article published in 2012 by leading scholars on human intelligence concluded, after reviewing the prior research literature, that group differences in IQ are best understood as environmental in origin.[117]
In considering disparities between test results of different ethnic groups, it is crucial to investigate the effects of Stereotype Threat (a situational predicament in which a person feels at risk of confirming negative stereotypes about the group(s) s/he identifies with),[118] as well as culture and acculturation.[119]
In the United States, certain public policies and laws regarding military service,[120] [121] education, public benefits,[122] capital punishment,[123] and employment incorporate an individual's IQ into their decisions. However, in the case of Griggs v. Duke Power Co. in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except when linked to job performance via a job analysis. Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence.
A diagnosis of intellectual disability is in part based on the results of IQ testing. Borderline intellectual functioning is a categorization where a person has below average cognitive ability (an IQ of 71–85), but the deficit is not as severe as intellectual disability (70 or below).
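Purely as an illustration of the numeric cutoffs named in this paragraph (real diagnosis also weighs adaptive functioning, measurement error, and clinical judgement, and labels and thresholds vary across manuals and tests), a minimal sketch might look like:

```python
# Illustrative mapping of the IQ ranges mentioned in the text above.
# Not a diagnostic tool: actual classification also considers adaptive
# behaviour and the standard error of measurement.
def iq_band(score: int) -> str:
    if score <= 70:
        return "range associated with intellectual disability"
    elif score <= 85:
        return "borderline intellectual functioning"
    return "above the borderline range"

print(iq_band(68))  # range associated with intellectual disability
print(iq_band(78))  # borderline intellectual functioning
```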
In the United Kingdom, the eleven-plus exam, which incorporates an intelligence test, has been used since 1945 to decide, at eleven years of age, which type of school a child should attend. The exam has been much less used since the widespread introduction of comprehensive schools.
IQ is the most researched attempt at measuring intelligence and by far the most widely used in practical settings. However, although IQ attempts to measure some notion of intelligence, it may fail to act as an accurate measure of "intelligence" in its broadest sense. IQ tests examine only some of the areas embodied in the broad notion of "intelligence", and fail to account for other areas associated with it, such as creativity or emotional intelligence.
There are critics such as Keith Stanovich who do not dispute the stability of IQ test scores or the fact that they predict certain forms of achievement rather effectively. They do argue, however, that to base a concept of intelligence on IQ test scores alone is to ignore many important aspects of mental ability.[7][124]
Some scientists dispute IQ entirely. In The Mismeasure of Man (1996), paleontologist Stephen Jay Gould criticized IQ tests and argued that they were used for scientific racism. He argued that g was a mathematical artifact and criticized:
...the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.(pp. 24–25)
Arthur Jensen responded:
...what Gould has mistaken for "reification" is neither more nor less than the common practice in every science of hypothesizing explanatory models to account for the observed relationships within a given domain. Well known examples include the heliocentric theory of planetary motion, the Bohr atom, the electromagnetic field, the kinetic theory of gases, gravitation, quarks, Mendelian genes, mass, velocity, etc. None of these constructs exists as a palpable entity occupying physical space.[125]
Jensen also argued that even if g were replaced by a model with several intelligences this would change the situation less than expected. He argues that all tests of cognitive ability would continue to be highly correlated with one another and there would still be a black-white gap on cognitive tests.[126]
Psychologist Peter Schönemann persistently criticized IQ, calling it "the IQ myth". He argued that g is a flawed theory and that the high heritability estimates of IQ are based on false assumptions.[127][128]
Robert Sternberg, another significant critic of g as the main measure of human cognitive abilities, argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society.[129]
The American Psychological Association's report Intelligence: Knowns and Unknowns stated that in the United States IQ tests as predictors of social achievement are not biased against African Americans, since they predict future performance, such as school achievement, similarly to the way they predict future performance for Caucasians.[7] While agreeing that IQ tests predict performance equally well for all racial groups (except Asian Americans), Nicholas Mackintosh also points out that there may still be a bias inherent in IQ testing if the education system is also systematically biased against African Americans, in which case educational performance may in fact also be an underestimation of African American children's cognitive abilities.[130] Earl Hunt points out that, while this may be the case, it would not be a bias of the test but of society.[131]
However, IQ tests may well be biased when used in other situations. A 2005 study stated that "differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students,"[132] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa.[133][134] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for autistic children; the alternative developmental or adaptive skills measures are relatively poor measures of intelligence in autistic children and may have resulted in incorrect claims that a majority of autistic children are mentally retarded.[135]
According to a 2006 article by the National Center for Biotechnology Information, contemporary psychological research often did not reflect substantial recent developments in psychometrics and "bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s."[136]
In response to the controversy surrounding The Bell Curve, the American Psychological Association's Board of Scientific Affairs established a task force in 1995 to write a report on the state of intelligence research which could be used by all sides as a basis for discussion, "Intelligence: Knowns and Unknowns". The full text of the report is available through several websites.[7]
In this paper the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: "research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications".
The task force concluded that IQ scores do have high predictive validity for individual differences in school achievement. They confirm the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled. They stated that individual differences in intelligence are substantially influenced by both genetics and environment.
The report stated that a number of biological factors, including malnutrition, exposure to toxic substances, and various prenatal and perinatal stressors, result in lowered psychometric intelligence under at least some conditions. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, saying:
The cause of that differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves. The Flynn effect shows that environmental factors can produce differences of at least this magnitude, but that effect is mysterious in its own right. Several culturally based explanations of the Black/White IQ differential have been proposed; some are plausible, but so far none has been conclusively supported. There is even less empirical support for a genetic interpretation. In short, no adequate explanation of the differential between the IQ means of Blacks and Whites is presently available.
The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly genetic explanations.
A notable and increasingly influential[137][138] alternative to the wide range of standard IQ tests originated in the writings of psychologist Lev Vygotsky (1896–1934) during his most mature and highly productive period, 1932–1934. The notion of the zone of proximal development, which he introduced in 1933, roughly a year before his death, served as the banner for his proposal to diagnose development as both the level of actual development, which can be measured by the child's independent problem solving, and the level of proximal, or potential, development, which is measured in situations of moderately assisted problem solving.[139] The maximum level of complexity and difficulty of a problem that the child is capable of solving under some guidance indicates the level of potential development. The difference between this higher level of potential development and the lower level of actual development indicates the zone of proximal development. The combination of the two indices (the level of actual development and the zone of proximal development) provides, according to Vygotsky, a significantly more informative indicator of psychological development than the assessment of the level of actual development alone.[140][141]
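In compact form, and purely as a restatement of the relationship described above:

```latex
\text{ZPD} = L_{\text{potential}} - L_{\text{actual}}
```

where L_potential is the level of problem solving the child reaches with moderate assistance and L_actual is the level reached independently.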
The ideas on the zone of development were later developed in a number of psychological and educational theories and practices. Most notably, they were developed under the banner of dynamic assessment, which focuses on the testing of learning and developmental potential[142][143][144] (for instance, in the work of Reuven Feuerstein and his associates,[145] who have criticized standard IQ testing for its putative assumption or acceptance of "fixed and immutable" characteristics of intelligence or cognitive functioning). Grounded in the developmental theories of Vygotsky and Feuerstein, who maintained that human beings are not static entities but are always in states of transition and transactional relationships with the world, dynamic assessment has also received considerable support in the recent revisions of cognitive developmental theory by Joseph Campione, Ann Brown, and John D. Bransford and in the theories of multiple intelligences by Howard Gardner and Robert Sternberg.[146]
IQ classification is the practice by IQ test publishers of designating IQ score ranges as various categories with labels such as "superior" or "average."[147] IQ classification was preceded historically by attempts to classify human beings by general ability based on other forms of behavioral observation. Those other forms of behavioral observation are still important for validating classifications based on IQ tests.
There are social organizations, some international, which limit membership to people who have scores as high as or higher than the 98th percentile on some IQ test or equivalent. Mensa International is perhaps the best known of these. There are other groups requiring a score above the 99th percentile.