History of the IQ Test
The Problem of Assessing Intelligence
The German psychologist William Stern coined the term Intelligenzquotient, abbreviated IQ, which we know today as a measure of intellectual ability.
Although interest in assessing human capabilities has existed for thousands of years, the scientific measurement of intelligence is a relatively recent development.
In 1904, at the request of the French government, psychologist Alfred Binet and his colleague Théodore Simon created the first system to identify students who might struggle in school. Their Binet–Simon scale became the first standardized test of intellectual performance.
From France to America
In 1916, psychologist Lewis Terman at Stanford University adapted the French test to fit the American school system. His Stanford–Binet Intelligence Scale became the standard intelligence test in the United States for decades.
At that time, IQ was calculated by dividing a child’s mental age by their chronological age and multiplying the result by 100. For example:
If a child’s mental age was 14.5 years and their actual age was 11, the IQ calculation would be:
14.5 ÷ 11 × 100 = 131.8
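For readers who want the arithmetic spelled out, here is a minimal sketch of the original ratio-IQ formula in Python; the function name and the ages are purely illustrative, not part of any historical test.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Original 'ratio IQ': mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The example from the text: a mental age of 14.5 and an actual age of 11.
print(round(ratio_iq(14.5, 11), 1))  # 131.8
```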
This method, however, was only suitable for children and quickly showed limitations when applied to adults.
The First Adult IQ Tests
The usefulness of IQ testing expanded during World War I, when tests such as the Army Alpha and Beta were used to assess soldiers’ intellectual capabilities. Around the same time, IQ scores were also applied to evaluate immigrants entering the United States.
The first major breakthrough in adult intelligence testing came from psychologist David Wechsler, who published the Wechsler–Bellevue scale in 1939 and its successor, the Wechsler Adult Intelligence Scale (WAIS), in 1955. Rather than relying on the “mental age” formula, Wechsler’s tests compared test-takers to others within the same age group, a score known as the deviation IQ. This shift allowed for a more accurate and fair assessment of adult intelligence.
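As a rough illustration of the deviation-IQ idea, the sketch below compares a raw score with the mean and standard deviation of the test-taker’s age group, then rescales the result to the conventional mean of 100 and standard deviation of 15 used by modern Wechsler scales. The age-group statistics in the example are invented for illustration and are not real test norms.

```python
def deviation_iq(raw_score: float, group_mean: float, group_sd: float) -> float:
    """Deviation IQ: express a raw score relative to the test-taker's age group,
    rescaled to a distribution with mean 100 and standard deviation 15."""
    z = (raw_score - group_mean) / group_sd   # standard score within the age group
    return 100 + 15 * z

# Hypothetical example: a raw score of 62 in an age group with mean 50 and SD 10.
print(round(deviation_iq(62, 50, 10)))  # 118
```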
Wechsler also emphasized that intelligence is multi-dimensional. His tests included not only verbal and mathematical tasks but also problem-solving, spatial reasoning, memory, and classification challenges. This broader view of intelligence influenced all future IQ testing.
Evolution Through the 20th Century
Throughout the mid-20th century, IQ tests spread widely across schools, the military, and workplaces. They were used for educational placement, job recruitment, and even to support controversial policies on immigration and social programs. While the tests provided valuable insights, they also became a source of debate about cultural bias, fairness, and the ethics of labeling people by a single number.
In the 1980s and 1990s, new theories of intelligence—such as Howard Gardner’s Multiple Intelligences and Robert Sternberg’s Triarchic Theory—challenged the idea that IQ alone could capture the full range of human intellectual ability. Nevertheless, standardized IQ tests remained central tools in psychology and education.
IQ Testing Today
Modern IQ tests, including updated versions of the Wechsler and Stanford–Binet scales, continue to play a vital role in psychological assessment. They are now carefully designed to minimize cultural bias, incorporate diverse problem types, and provide a more comprehensive picture of cognitive abilities.
Today, IQ testing is used in many contexts:
- Education – to identify gifted students or diagnose learning difficulties.
- Clinical psychology – to assess cognitive impairments, memory, or neurological conditions.
- Research – to study human intelligence and its relationship to genetics, environment, and achievement.
While IQ is not the only measure of human potential, it remains one of the most researched and widely applied tools in psychology. Its history reflects both the progress of science and the challenges of defining something as complex as human intelligence.