
The Intelligence Delusion: Why the existence of a single intelligence factor is an unsustainable myth.

  • 06-Dec-2022

The idea that intelligence, or g, is a fixed internal trait is disappearing in a puff of logic … and data.

By Professor Bryan Roche

Across many empirical papers that I and others have published over the past decade, it has been shown repeatedly that scores on a wide variety of standardized measures of intelligence and domain-specific cognitive abilities can be raised significantly using a Relational Frame Theory-based intervention known as SMART (Strengthening Mental Abilities with Relational Training), which does not involve training of skills directly relevant to the test (e.g., McLoughlin, Tyndall, & Pereira, 2020a). We have argued for many years that this is clear evidence for "far transfer". That is, we consistently get very large gains in IQ for most or all study participants, on all measures employed to assess intelligence or domain-specific competence (e.g., arithmetic, vocabulary, reading), despite the fact that our intervention is in no way "training to the test". Our online training uses nonsense words only (no recognisable or nameable words or shapes of any kind), and trains only a very small number of syllogistic-style reasoning tasks that we have discovered are relevant to almost every aspect of intellectual functioning (e.g., if A is the same as B, and B is the same as C, then A is the same as C; or if A is opposite to B, and B is opposite to C, then A is the same as C).
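To make the structure of these tasks concrete, here is a minimal sketch, in Python, of the relational logic they involve. This is purely illustrative: it is not the actual SMART software, and the nonsense words are invented for the example.

```python
# Composing "same" and "opposite" relations between arbitrary nonsense stimuli,
# as in the syllogistic-style tasks described above (illustrative sketch only).

def derive(rel_ab: str, rel_bc: str) -> str:
    """Derive the A-C relation from a trained A-B relation and B-C relation."""
    if rel_ab == rel_bc:
        # same + same -> same; opposite + opposite -> same
        return "same"
    # same + opposite (in either order) -> opposite
    return "opposite"

# "CUG is the same as VEK, and VEK is opposite to MAU. Is CUG the same as MAU?"
print(derive("same", "opposite"))       # -> "opposite" (so: no)
print(derive("opposite", "opposite"))   # -> "same" (so: yes)
```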

But We All Know that Brain Training Doesn't Work!

I would not blame the reader for being sceptical about the effects of the SMART method. Some recent high-profile and high-quality meta-analyses of the effects of "brain training" on general cognitive ability have found ambiguous results (see Sala & Gobet, 2019). But the reason for that is simple: the methods assessed in those studies do not increase general intelligence! Despite now being reported in a growing handful of studies, the SMART method has not yet been subjected to meta-analysis, although one is finally in progress. Even with enormously impressive results obtained across several studies from different laboratories, the SMART method is producing results that people find hard to believe. This appears not to be due to the quality of the studies so much as to the more general assumption that intelligence is supposed to be fixed for life. The argument goes that if we manage to raise any index of intelligence using SMART training, then the measure itself must be unreliable. In other words, the conceptualization of intelligence as a stable trait is so entrenched (groundlessly, as I will argue) that it is preventing researchers from seeing and believing the data before their very eyes. This idea is stuck firmly in the public psyche, despite the fact that the evidence for this outdated position is (a) inferred and circumstantial and (b) contradicted by copious evidence which the theory has to accommodate with increasingly sophisticated "excuses" (e.g., the test employed must not be a good IQ test, your training must have involved practice with the IQ test, all IQ tests are somewhat inaccurate, etc.).

The latter "excuse" is the most untenable of them all, as it reveals a remarkably low level of scientific and philosophical sophistication. That is, it shows a lack of understanding of the fact that a variable cannot be said to exist on a continuum for which there are only inaccurate measures. That sentence is worth restating another way. A variable, such as a person's height, cannot be said to vary along an infinitely fine continuum following a perfect normal distribution if all you have to check a person's height with is an indirect and inaccurate measure. The problem is that we do not have a measure of the "real" level of intelligence of individuals against which to assess the accuracy of our intelligence measures. As I will argue below, the normal distribution of IQ is a scientific construction, not a scientific finding, and the existence of a stable relationship between IQ and various other measures of intelligence is similarly constructed to fit the theory rather than the other way around.

Smoke and Mirrors in the History of Intelligence Research

The idea that IQ was likely to be normally distributed was decided upon by Sir Francis Galton before his first intelligence test was ever developed. This assumption can no longer be disproven as an assumption, because IQ tests have been adjusted and altered so that they now produce results across the population that are normally distributed. As has been joked about by scientists more prestigious than myself, Galton's assumption was based on the finding that the weights of cabbages, the heights of fir trees, and the heights of soldiers in the British army were all normally distributed.

Given this, it only stood to reason to Galton that his fictitious inner trait called intelligence must also "exist" along a normal distribution. That is, he guessed, in advance of his first rudimentary intelligence test (which involved tasks nothing like what we see on IQ tests today), that exactly 68% of the population would have an IQ score between 85 and 115 (one standard deviation either side of the average IQ of 100). That is pretty impressive … because he was correct! Well, not exactly. He was totally wrong, but he was made correct by later adjustments to the tests and the use of normalization methods applied to the raw test scores, which forced a normal distribution onto the raw data. IQ test takers never receive these raw scores, only their transformed and normalized score relative to other test takers, and researchers rarely report raw scores in studies.
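For readers who want to see the mechanics, here is a minimal sketch of a typical normalization step. The raw scores are made up, and real test publishers use more elaborate norming procedures, but the principle is the same: whatever shape the raw distribution has, the reported scores are normal by construction.

```python
import numpy as np
from scipy import stats

# Hypothetical raw test scores from a norming sample (any shape at all)
raw_scores = np.array([3, 7, 7, 9, 12, 15, 15, 16, 21, 40])

# Step 1: convert each raw score to a percentile rank within the sample
percentiles = stats.rankdata(raw_scores) / (len(raw_scores) + 1)

# Step 2: map those percentiles onto a normal curve with mean 100 and SD 15
iq_scores = stats.norm.ppf(percentiles, loc=100, scale=15)

print(np.round(iq_scores, 1))
# The "IQ" scores are normally distributed because we built them that way,
# not because the underlying raw performance was.
```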

Psychologists now "know" that we are measuring intelligence accurately because the scores are normally distributed! The scores "must" be correct because we "know" that IQ scores "would be" normally distributed … if only we had the real measure! Which, I suppose, we must now have … because the scores are normally distributed. See the circular logic? This is not how science should work.

Not only is the normal distribution of scores a scientific achievement rather than a discovery, but the concept of general intelligence as a constant standing in fixed relation to a whole host of other suspected measures of mental ability is a statistical artefact. It is an inference based on the contrived inter-correlation between various suspected measures of "intelligence", an effect known as the positive manifold. Allow me to explain.

IQ test indices for various broad domains, such as arithmetic or verbal ability, do indeed correlate well with each other. Naively, this does suggest that they must be drawing on the same underlying ability. This "positive manifold" has been studied extensively and apparently supports the idea of a stable underlying intelligence, or g as Spearman called it. However, what many leading scientists fail to appreciate is the two-way process by which tests are first nominated as tests of intelligence before they can be considered in the examination of the positive manifold. The mistake is subtle but critical, and is illustrated with the example below.

Let's say that a test is first developed to measure some aspect of intellectual functioning, such as spatial ability. The test items are selected largely on the basis of theory, but also partly because they are similar to already widely used test items, and because they are suspected theoretically to draw on a skill set that is relevant in education, problem solving or everyday life. The means by which to check the new test's relevance to intelligence is to measure the correlation between scores on the new test (and its sub-tests, if they exist) and those of more widely used, established tests of intelligence. If there is a poor correlation, the new test is ipso facto deemed irrelevant as a measure of intelligence, because intelligence, as conceived by Spearman's concept of g, is theoretically understood as a singular ability that underlies a wide range of intellectual skills. If the new test is of any relevance at all, it should correlate with scores on traditional tests, at least to some extent. The more highly it correlates, the more directly it is thought to index g.

Tests that display high inter-correlations with other tests of intelligence are accepted as tests of intelligence, while those that do not are not accepted as such. But herein lies the subtle yet enormous mistake: further analyses of the positive manifold, which supposedly prove the existence of g, will now include only those tests that already inter-correlate well with traditional tests, because only those tests are considered to be real tests of intelligence. In effect, the observed positive manifold is not disprovable, because a test has to already correlate with other tests to be included in the battery examined in the inter-correlational analysis. It has therefore become an undisprovable "fact" that IQ is a constant and unifying trait, because this conclusion is supported by the positive manifold effect.
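The selection effect is easy to demonstrate with a toy simulation. The numbers and the admission threshold below are invented purely for illustration: candidate tests are admitted to the battery only if they already correlate with the established tests, so the surviving battery shows a strong positive manifold by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000

# An "established" battery: three tests that share a common component
base = rng.normal(size=n_people)
established = [0.7 * base + 0.7 * rng.normal(size=n_people) for _ in range(3)]

# Candidate new tests with varying overlap with that shared component
candidates = [w * base + np.sqrt(1 - w**2) * rng.normal(size=n_people)
              for w in (0.0, 0.2, 0.4, 0.6, 0.8)]

def mean_corr(test, battery):
    """Average correlation of a candidate test with the established battery."""
    return np.mean([np.corrcoef(test, b)[0, 1] for b in battery])

# Admission rule: count as an "intelligence test" only if it correlates
# at least .30 on average with the tests already in the club
admitted = [c for c in candidates if mean_corr(c, established) >= 0.30]

# The battery that survives selection inter-correlates strongly,
# whatever was true of the full set of candidates we started with.
battery = established + admitted
print(np.round(np.corrcoef(battery), 2))
```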

Mistaking Shaky Assumptions for Empirical Conclusions

You can now see how this problem replicates across time: the inter-correlation among intelligence tests, and among sub-tests within tests, is the result of how the tests were selected in the first place, based on a standard of inter-correlation that supports the very theoretical position it is supposed to prove.

The statistical method known as factor analysis was itself invented by Spearman, the intelligence theorist, a century ago, precisely to quantify the degree of inter-correlation between various measures of IQ. Spearman was a brilliant man in many ways, but he fell victim to a rookie mistake of circular reasoning that would earn an undergraduate philosophy of science major an F on their exam paper. The method is used both to prove that there must be a singular intelligence (g) and to guarantee that this can never be disproven (by setting the standard for what qualifies as a test of intelligence, or as a sub-test to be included in one). In this way, the positive manifold, and therefore g itself, are products of a statistical illusion, contrived from a groundless theoretical starting point that is misunderstood as an empirical conclusion.
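To see why a dominant "general factor" is a near-inevitable product of such a battery, consider this deliberately simplified illustration. It uses a first-principal-component extraction on a made-up correlation matrix, not Spearman's original method: once every admitted test correlates positively with every other, a large first factor falls out mechanically.

```python
import numpy as np

# Made-up correlation matrix for a battery of five admitted tests
R = np.array([
    [1.00, 0.55, 0.50, 0.45, 0.40],
    [0.55, 1.00, 0.52, 0.48, 0.42],
    [0.50, 0.52, 1.00, 0.46, 0.44],
    [0.45, 0.48, 0.46, 1.00, 0.41],
    [0.40, 0.42, 0.44, 0.41, 1.00],
])

# Eigendecomposition of the correlation matrix; the largest eigenvalue
# corresponds to the "general factor" such a battery will always yield
eigenvalues, eigenvectors = np.linalg.eigh(R)
loadings = np.abs(eigenvectors[:, -1]) * np.sqrt(eigenvalues[-1])

print("Loadings on the first factor:", np.round(loadings, 2))
print("Proportion of variance explained:", round(eigenvalues[-1] / R.shape[0], 2))
```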

Some New Insights 

In a recent paper, Kovacs and Conway (2019) suggested that arguing that one performs well on mental tests because of one's real, extant, and singular high g "is not any more valid than claiming that one has high income, high social status, and a college degree because of one's high socioeconomic status; the direction of causation is the opposite" (p. 192).

Kovacs and Conway go on to challenge the positive manifold on empirical grounds as well. For instance, they point out that there are already degrees of tolerance for varying levels of correlation between various assumed indices of g. As an example, vocabulary and reading comprehension correlate more highly with one another than with mental rotation. In addition, they remind us that correlations among tests increase as overall intelligence falls, so that different tests are not so well correlated at the upper end of the IQ scale. It is also known that the test-retest reliability of IQ tests varies more as IQ rises, with tests becoming less and less reliable at above-average IQ levels. So the various sub-components of intelligence established through inter-correlation do not hang together so perfectly after all, and the inter-correlations are not constant across conditions; they merely vary within limits that the theory finds tolerable.

Carefully predetermined standards for accepting a new IQ test as valid artificially enhance the seductive nature of the positive manifold and make it unfalsifiable as a concept. This is bad science at its best (or worst?). The invocation of hypothetical constructs inferred only from correlational studies hampers science by moving research too quickly onto the question of how to measure the construct, what it predicts, what it explains, and so on. Instead of being distracted by the smoke and mirrors that characterize the logical errors made by many intelligence theorists, we should instead be asking why these factors inter-correlate so highly when they do. And remember, invoking g will not help, because the inter-correlations that define g are precisely what we are trying to explain!

Relational Framing: The Foundational Skill on Which Intelligence is Built

A more practical reason why several different measures of intellectual ability inter-correlate so well is more mundane than Spearman may have hoped. It turns out that the tasks on IQ tests simply draw on the same narrow set of skills, however broad and diverse the tests appear to be. For example, my own research has identified a narrow set of foundational skills, known as Derived Relational Responding skills, or Relational Framing skills, as an acquired foundational skill set upon which more complex intellectual skill sets are built (see Colbert et al., 2017, 2019; McLoughlin, Tyndall, & Pereira, 2020b). These skills appear to be acquired before a range of crucial intellectual skills emerge, such as spoken or otherwise expressed language. We have written much on the fact that, looked at the other way, intelligence tests seem to be measures of these skills. That is, almost all tasks on all tests of intelligence are indexing the fluency of the test taker's relational skills in various ways. These relational skills, in essence, are the small family of skills that involve responding to one word, object or feeling in terms of how it is related to another. So, for example, if I tell a child that coin A is worth more than coin B, and that coin B buys one candy bar, I can expect the child to choose coin A over coin B if given a choice. This is an example of a relational skill in which the child responds to A in terms of its comparative relation to B. And it appears that almost every response we make is relational, whether in terms of sameness, opposition, difference, temporal order (before/after), hierarchy, spatial relations, and so on. The more complex the relational response required in a given situation (who is your father's mother-in-law's daughter?), the more intelligent we assume the correct response to be.
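As a companion to the same/opposite sketch earlier, here is an equally minimal sketch of the derived comparison in the coin example. Again, this is purely illustrative; the coin names and the relation store are invented for the example.

```python
# The child is taught only that A is worth more than B; the choice between
# A and B is never trained directly, yet it can be derived from the relation.

taught = {("A", "B"): "more_than"}  # directly taught relation

def prefer(x: str, y: str) -> str:
    """Choose between two items using the taught comparative relation."""
    if taught.get((x, y)) == "more_than":
        return x
    if taught.get((y, x)) == "more_than":
        return y
    raise ValueError("no derivable relation between these items")

# Derived choice: pick the coin that stands in the "more than" relation to the other
print(prefer("A", "B"))  # -> "A"
print(prefer("B", "A"))  # -> "A"
```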

Other researchers, such as Graeme Halford and colleagues, have arrived at an almost identical conclusion from an entirely different starting point. They call the foundational skill set relational reasoning, and their research explores how just about every form of behavior we consider "intelligent" draws on this more fundamental skill.

These sorts of conclusions bring the behaving, living individual back under the microscope of the psychologist and start to free us from the futile attempt to understand intelligent behavior entirely in statistical terms and through brain mapping. No brain map will ever identify the nature of the hands-on skills children acquire at a young age, how they are effectively taught under the instruction of teachers and parents, and how these in turn underlie the acquisition of language, reasoning, problem solving, planning, remembering, and so on. Brain mapping may give us an idea of what the brain is doing while we are performing these tasks, and it may tell us, quite unsurprisingly, that the brains of those who differ in their skill levels in these domains can be distinguished from each other. But it cannot tell us how to teach children so that they acquire language or mathematical concepts, and so on, more quickly. If we can understand that, we have cracked intelligence itself, and we need not resort to indirect measures and ideological assumptions.

The take-home point is that we have not explained why person A is more adept at a range of intellectual tasks than person B by simply saying that person A is "more intelligent". Explaining what "more intelligent" means is our task as psychologists, not merely to describe what we can already see before us. As Kovacs and Conway (2019) put it, "Positing a general factor [g] gives the false impression that there is a psychological explanation, whereas the actual explanation is purely statistical" (p. 190).

It is not at all surprising that there turned out to be some small set of fundamental skills, such as relational skills, that are drawn upon in the development of language, basic arithmetic, and other "intellectual" skills. What else would drive the development of a complex skill but the mastery of a lower-level skill? Our task as psychologists is to identify what those lower-level skills are in increasing detail and to link them functionally (not merely statistically) to the development of higher-order skills. That work has already started. Several studies have now shown that gains in relational skills lead to impressive gains in IQ scores and a range of other test scores (e.g., fluid intelligence, educational aptitude). The gains are not driven by working memory changes, because working memory is not trained in these studies, and the gains in IQ are too large to be accounted for by gains in working memory alone. (See Amd & Roche, 2018; Cassidy et al., 2016; Colbert et al., 2018; McLoughlin et al., 2020a, 2020b for examples of IQ score and other far-transfer gains resulting from SMART interventions.)

More recent analyses also suggest that the positive manifold may not be as robust as we previously thought. Kovacs and Conway (2019) offered what they call Process Overlap Theory to help us dispense with the burdensome concept of a psychological g. Specifically, they suggest that many related and overlapping skills are involved in performance on a range of intellectual tasks, and that these do not hang together as neatly as we formerly thought in terms of inter-correlating with each other and with IQ scores (and, by inference, g). These researchers have identified executive function in particular as a more basic unifying cognitive process that can explain a wide variety of intellectual performances. The theory maps out how executive function is drawn upon differentially in different tasks, and shows how it is not linearly related to intelligence test scores. More specifically, increasing executive functioning does not necessarily lead to proportionate gains in intellectual performance in any given domain.

 

Kovacs and Conway are not alone in their view that g is not the unifying concept we have thought it to be. Another account, called mutualism, offers a developmental explanation of the positive manifold that does not invoke an underlying causal latent variable (see van der Maas et al., 2006).

 

The Social Responsibility of Psychology

We have much to learn about how executive function, relational reasoning, or relational framing skills relate upstream to the development of specific cognitive competencies. It is a daunting task, but it is nevertheless the one before us if we are to empower psychology as a helping profession to equip itself with real tools to make a tangible difference to the real lives of real people. It will be no easy task to map out the ways in which a plethora of cognitive skill tasks draw upon various aspects of acquired relational framing skill sets (or other fundamental skill sets), but at least when we embark on this ambitious endeavour we are moving the science forward pragmatically, not just conceptually. This is a more exciting and socially responsible way to approach the understanding of "intelligence". It contrasts sharply with the efforts of many psychologists to further reify the by now petrified notion of g.

An explanation of behavior that invokes g explains nothing. It does not lead to educational or therapeutic interventions, because g cannot be altered according to those who invoke it. An a priori belief in g leads only to research questions that identify and clarify the limits of the effects of educational and therapeutic intervention, rather than helping us to find ways to surpass those limits. A belief in g is philosophically naïve, and any attempt to further reinforce it rather than deconstruct it is the wrong agenda for psychology.

 

References

Amd, M., & Roche, B. (2018). Assessing the Effects of a Relational Skills Training Intervention on Fluid Intelligence Among a Sample of Socially-disadvantaged Children in Bangladesh. The Psychological Record, 68, 141-149.

Cassidy, S., Roche, B., Colbert, D., Stewart, I., & Grey, I. (2016). A Relational Frame Skills Training Intervention to Increase General Intelligence and Scholastic Aptitude. Learning & Individual Differences, 47, 222-235.

Colbert, D., Dobutowitsch, M., Roche, B., & Brophy, C.  (2017).  The Proxy-measurement of Intelligence Quotients using a Relational Skills Abilities Index.  Learning & Individual Differences, 57, 114–122.   

Colbert, D., Malone, A., Barrett, S. & Roche, B.  (2019).  The Relational Abilities Index+: Initial Validation of a Functionally Understood Proxy Measure for Intelligence.  Perspectives on Behavioral Science, 43, 189–213.

Colbert, D., Tyndall, I., Roche, B., & Cassidy, S.  (2018).  Can SMART training really increase Intelligence? A Replication Study. Journal of Behavioral Education, 27, 509-531.

Halford, G. A., Andrews, G., Wilson, W. H., & Phillips, S.  (2012). Computational models of relational processes in cognitive development.  Cognitive Development, 27, 481-499.

Kovacs, K., & Conway, A. R. A. (2019). What Is IQ? Life Beyond "General Intelligence". Current Directions in Psychological Science, 28, 189–194.

McLoughlin, S., Tyndall, I., Pereira, A., & Mulhern, T. (2020). Non-verbal IQ Gains from Relational Operant Training Explain Variance in Educational Attainment: An Active-Controlled Feasibility Study. Journal of Cognitive Enhancement. https://link.springer.com/article/10.1007%2Fs41465-020-00187-z

McLoughlin, S., Tyndall, I., & Pereira, A. (2020a). Relational Operant Skills Training Increases Standardized Matrices Scores in Adolescents: A Stratified Active-Controlled Trial. Journal of Behavioral Education. https://doi.org/10.1007/s10864-020-09399-x

McLoughlin, S., Tyndall, I., & Pereira, A. (2020b). Convergence of Multiple Fields on a Relational Reasoning Approach to Cognition. Intelligence, 83, 101491.

Sala, G., & Gobet, F. (2019). Cognitive Training Does Not Enhance General Cognition. Trends in Cognitive Sciences, 23, 9-20.

Van Der Maas, H. L. J., Dolan, C. V., Grasman, R. P. P. P., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. E. J. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113(4), 842–861. https://doi.org/10.1037/0033-295X.113.4.842