Intelligence Quotient or IQ: History and Evolution

Last update: 09 November, 2020

The Intelligence Quotient (IQ) is a score that represents an individual's intelligence, as measured by a standardized intelligence test.

IQ scores are widely used to compare an individual's intellectual capacity with that of others. The average score is obtained from a sample of "similar" people, generally within the same age group.

For instance, using an IQ test, it's possible to determine whether someone's score is higher (or lower) than the average score of the others in their group (1).

How the Intelligence Quotient (IQ) started

In 1884, the researcher Francis Galton evaluated a large number of people in an attempt to develop an intelligence test. For this evaluation, he measured a variety of characteristics, including head size and reaction time (1).

Through this investigation, Galton introduced methods for the numerical classification of physical, physiological, and mental attributes. He proposed that a wide range of human traits could be measured, and meaningfully described and summarized, using just two numbers:

  • Firstly, the average value of the distribution.
  • Secondly, the dispersion of the scores surrounding this average value (standard deviation) (1).
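
To make Galton's idea concrete with purely hypothetical figures: if five people score 90, 95, 100, 105, and 110 on some measure, the average value is 100, and the standard deviation (about 7 in this case) captures how widely the scores spread around that average. These two numbers alone give a reasonable summary of the whole distribution.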

Another prominent researcher who made a significant contribution to measuring intelligence was Charles Spearman. This British psychologist introduced the idea that all aspects of intelligence are related. This point of view is very important: it provided the foundation for today's concept of IQ (1).


The first Intelligence Quotient tests and the introduction of IQ scoring

The modern era of intelligence testing started just after the turn of the twentieth century, when psychologists Alfred Binet and Theodore Simon worked to develop a method to identify substantial differences in children's intelligence levels.

The idea was to differentiate between two groups: children considered intellectually capable of benefiting from standard education, and children with learning problems who should take part in special educational programs (1).

Their methodology was to express a child's development as a quotient between the score the child obtained and the average score of children of the same age. Their widely published work (Binet, 1908), together with that of the German psychologist William Stern, contributed to forming the concept of mental age.

Mental age and IQ

What does a mental age of eight mean, for example? It means that a child, regardless of their actual age, performs tasks as an average eight-year-old would.

This eventually led to the creation of the IQ score as the ratio of mental age to actual (chronological) age: the mental age is divided by the person's actual age and multiplied by 100. The resulting number is the intelligence quotient, or IQ score.
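
As a quick illustration, using hypothetical ages, the classic ratio formula is IQ = (mental age ÷ chronological age) × 100. A child with a mental age of ten and a chronological age of eight would therefore score (10 ÷ 8) × 100 = 125, while the same mental age at a chronological age of twelve would give (10 ÷ 12) × 100 ≈ 83.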

Around 1910, Henry Goddard, the director of a school in New Jersey for people with intellectual disabilities, was a pioneer in the concept of IQ tests in the United States.

However, the first time that IQ scoring formed part of an intelligence test in the United States was in 1916. In that year, Lewis Terman translated and adapted the Binet-Simon test, creating the Stanford-Binet Intelligence Scale.

During World War I, the United States Army developed the Army Alpha and Army Beta tests. The objective of these tests was to assign soldiers to different jobs based on their intellectual capacity, and to exclude from military service those whom the Army considered intellectually unfit.

During this time, David Wechsler worked as a military psychologist. He began testing individuals who had failed the Army's performance tests (3).


Wechsler’s tests and IQ scoring

In 1932, Wechsler became the chief psychologist at Bellevue Psychiatric Hospital in New York. From then on, he set out to change how IQ was scored.

As a result of his work, IQ scoring was redefined as a standard score: one that expresses a person's performance relative to the average score obtained by a sample of healthy age peers (3). This new concept allows the IQ score to apply throughout a person's life (4).
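
For example, modern Wechsler scales set the average at 100 with a standard deviation of 15. Under this scheme, a score of 115 means the same thing at any age: one standard deviation above the mean of one's age peers.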

The Wechsler-Bellevue Intelligence Scale, Form 1, was published in 1939. It measured intelligence by summarizing scores from various subtests. In addition to this composite summary, which he called the Full Scale IQ, Wechsler argued that intelligence could be measured more precisely by dividing the subtests into categories. But how?

Wechsler divided the subtests into two categories: those that mainly reflect verbal abilities, and those that reflect nonverbal intelligence, or "performance" skills. Thus, he produced the Verbal IQ score and the Performance IQ score.

Wechsler’s tests and recommendations for describing intelligence were successful. Later, he developed a test for adults, the WAIS (Wechsler 1955). This was a direct derivation from the Wechsler-Bellevue test and also from the children’s version (WISC) published in 1949.

Intelligence scoring today: does it truly define us?

We’ve seen a brief history of IQ scoring and the principles it’s based on. Psychologists often still use these tests today to measure intelligence, in somewhat the same way as you’d use a tape measure to measure the length of a table.

However, does an IQ score truly and accurately represent someone’s intelligence? It would seem that, little by little, the concepts of both IQ and of intelligence are evolving. Thus, the way we evaluate intelligence must also evolve.


All cited sources were thoroughly reviewed by our team to ensure their quality, reliability, currency, and validity. The bibliography of this article was considered reliable and of academic or scientific accuracy.


  • Saklofske, D. H., Schoenberg, M. R., Nordstokke, D., & Nelson, R. L. (2017). Intelligence quotient. Encyclopedia of Clinical Neuropsychology, 1-5.
  • Gould, S. J. (1981). The mismeasure of man. New York: Norton.
  • Boake, C. (2002). From the Binet-Simon to the Wechsler-Bellevue: Tracing the history of intelligence testing. Journal of Clinical and Experimental Neuropsychology, 24(3), 383–405.
  • Bartholomew, D. J. (2004). Measuring intelligence: Facts and fallacies. New York: Cambridge University Press.
  • Wechsler, D. (1955). The Wechsler Adult Intelligence Scale. New York: Psychological Corporation.

This text is provided for informational purposes only and does not replace consultation with a professional. If in doubt, consult your specialist.