WEIRD Psychology, a fractured framework
How reliable is Experimental Psychology?
Personal Opinion:
In this article, the authors criticize the field of experimental psychology for being formed and maintained almost exclusively by Western institutions, people, and theories. They question the underlying assumption that human behavior is universal. The authors seem to suggest that experimental psychology didn’t just get the analysis wrong; the act of analysis itself contains wrongdoing.
The authors argue that differentiating human thoughts or behaviors based on geography and culture is flawed: it is like studying different types of liquid using the liter as the only measurement. Liquids differ in density and surface tension, among other things, so measuring and comparing their volume isn’t the right way to study their differences.
Experimental psychology measures human behavior by conducting experiments, mostly in laboratories, and analyzing the data by comparison. How closely people’s behavior in simulated laboratory experiments matches real-life behavior is questionable. Behavior is context dependent: when people are told that they are being observed, it often biases their behavior. Even when laboratory behavior does match real life, measuring a person based only on actions or on what they say isn’t reliable, because reasoning often plays a key role. With limited access to a person’s life, constructing their character from what they say may not be accurate.
For example: you are allowed to kill someone (action) in self-defense (reasoning); killing will put you in jail if it isn’t in self-defense. Conversely, a politician may make big promises and provide great reasoning, but if they don’t turn into action, there isn’t much value.
Let’s focus on the assumption that human behavior is universal. My observation is that there are many similarities in human thought, independent of culture or geography. Some argue that human nature is the result of the environment in which a person grew up and the genes they carry. Yet we have all been in situations where a classmate seems to share more of our thoughts than siblings who grew up in the same household. Tracing human nature to genetics or environment may not suffice.
Let’s take a brief look at the viewpoints of some prominent thinkers.
“It's neither East nor West. It's all the same. Human nature is exactly the same. There's no difference.” - U.G. Krishnamurthi
This reminds me of a clash between Jung and Ramana Maharshi. Western thinkers like Jung consider the Self a representation of the psyche as a whole, whereas Ramana Maharshi says the Self is an illusion. When Jung was on a trip to India, he was advised to meet Ramana Maharshi; however, Jung skipped the meeting and later wrote a letter about it. According to Jungian psychology, individuation is equivalent to self-realization, where the Ego meets the Self. For Ramana Maharshi, there is no Self. Their theories appear to collide with each other.
Peter Kingsley talks about finding the roots of Western civilization by describing the works of Parmenides. He is often critical of Western people relying on Eastern practices and suggests that Western civilization should find its own roots.
“If we want to experience the benefits of eastern wisdom, any wisdom, first we have to do the impossible work of coming to terms with the Western truth about ourselves.” - Peter Kingsley, Catafalque.
Below is a summary of the article “Psychology’s WEIRD Problems”.
Introduction:
In 2010, Joseph Henrich (Author of The WEIRDest People in the World) argued that experimental psychology has a problem: it is too reliant on “WEIRD” participants. The acronym denotes that the typical participant in a psychology experiment comes from a background that is Western, Educated, Industrialized, Rich, and Democratic.
Psychologists had already been joking among themselves for decades about how experimental psychology is really the study of the behavior of college students. The authors claimed that Western college students are unusual and tend to be outliers in terms of their performance on various behavioral tasks.
This is a problem because psychologists often assume that what they are really studying when they run experiments in their labs is the behavior and thinking processes of humans in general. However, if, in running these experiments, psychologists are relying on college student participants, and if college students are not representative of humans in general, then a great deal of the published research output of experimental psychology – and that of other behavioral sciences such as behavioral economics – suddenly comes under question.
This is not to say that psychologists have been doing nothing, however. In fact, in the past decade or so, psychologists have been quite busy dealing with crises in their field. Coincidentally, Henrich and colleagues’ paper came along at about the same time as another crisis was breaking: the crisis of replicability.
The 2010s have been described as a “decade of crisis” for psychology. There have been multiple, overlapping crises, but the big one has been a crisis of replicability. The replicability crisis materialized after psychologists began to worry about the possibility, first identified in the biomedical context, that many of the published findings in the scientific literature may simply not be true.
The key to seeing why the WEIRD issue is so central is to notice that what is often presented as a single problem – the “WEIRD problem” – is in fact a collection of related theoretical, methodological, and practical assumptions and limitations. We suggest that there is not one single WEIRD problem; there are at least three, as follows.
The problem of WEIRD participants. This is the classic problem, as presented by Henrich and colleagues, of a lack of diversity among research participants. It is a real problem with a long history of being discussed by psychologists.
The problem of WEIRD methods. The next problem concerns the methods that are employed to generate experimental psychology’s empirical data. Much of existing experimental psychology, as carried out on Western college campuses, functions as a form of school testing: the researcher sets problems for the participant, who dutifully solves those problems. Studying real behavior in the wild requires a more diverse and open methodological approach than is offered by laboratory-bound experimental psychology.
The problem of WEIRD institutions. This is the most fundamental problem. Psychology’s institutions are Western-centric and may need to be adjusted if psychology is to become capable of supporting a broader and more diverse set of research practices than it has traditionally made use of.
2. WEIRD Participants
In the life sciences, it is common for researchers to make use of model organisms. A model organism is a convenient, readily available animal that scientists can handle in the laboratory, and that can be used in day-to-day research. In medical research, a commonly used model organism is the laboratory mouse. In experimental psychology, the most convenient model organism has traditionally been the college psychology major.
Relying on college students as the main source of research participants does, however, have potential disadvantages. One worry is that the student population is likely to be unrepresentative of humanity as a whole. College students come from a narrow slice of the overall population: they tend to be in their late teens or early twenties, and they are successful enough academically to have enrolled in a university. They also tend to come from a particular social background; in North America, this means they are relatively affluent and disproportionately middle class and white.
As an experimental psychologist, how can you know that your experiments are telling you anything useful about humans in general? How can you be sure that you are not inadvertently obtaining data that are only really meaningful or informative about college students?
2.1 The Rise of the Online Research Participant
There has, however, been one notable change in the profile of research participants in recent years. This change has been caused by the rise of online microwork platforms, in particular by Amazon’s launch of its Mechanical Turk platform. In principle, the use of online platforms could allow experimental psychologists to diversify their participant sample well beyond the traditional pool of college students. In practice, however, these platforms have replaced one narrow pool of college student participants with a different, but equally narrow, group of people engaged in online gig work.
In the rest of the article, we’ll see how merely sampling a more diverse population will not be enough to give a comprehensive science of humanity. This is because the WEIRD participants problem is only one of psychology’s WEIRD problems. The other problems go deeper, and concern psychology’s underlying theories and methods, as well as the institutions within which psychology is conducted.
3. WEIRD Methods
3.1 The Stimulus-Response Crisis
The standard format of the psychology laboratory, ever since the origin of experimental psychology in the latter half of the 19th century, has been based on the stimulus-response method: you present the participant with a task and record the behavior, and you repeat this until adequate statistical data are collected.
The stimulus-response paradigm allows psychologists to generate data that are easily comparable across different individual people. All the participants perform the same task, and their behavior is measured in the same way.
The stimulus-response paradigm also comes with limitations. A stimulus-response experiment relies on strict experimental control of the behavioral situation: in order to quantify and compare the behavior of the different participants you have sampled, you need those participants all to have performed the same task in the same way.
Once you have collected lots of data from different people all performing the same set of tasks, you can be fairly confident in claiming that people respond in such-and-such manner, given this particular set of stimulus materials and tasks.
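The paradigm described above can be illustrated with a minimal sketch. This is a hypothetical simulation, not a real experiment: the stimuli, the 1–7 rating scale, and the random responses are all invented for illustration. It shows why identical tasks make responses directly comparable across participants, and also why the resulting averages say nothing beyond this exact task.

```python
import random
import statistics

# Identical stimulus materials for every participant (hypothetical names).
STIMULI = ["video_A", "video_B", "video_C"]

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def run_trial(stimulus: str) -> int:
    """Record one response on a fixed 1-7 rating scale (simulated here)."""
    return rng.randint(1, 7)

def run_experiment(n_participants: int) -> dict:
    """Collect the same measurements from every participant, then aggregate."""
    responses = {s: [] for s in STIMULI}
    for _ in range(n_participants):
        for stimulus in STIMULI:  # same task, same order, same scale
            responses[stimulus].append(run_trial(stimulus))
    # Because the design is identical across people, a per-stimulus mean is
    # directly comparable -- but it only generalizes to this exact task.
    return {s: statistics.mean(r) for s, r in responses.items()}

means = run_experiment(n_participants=50)
for stimulus, mean_rating in means.items():
    print(f"{stimulus}: mean rating {mean_rating:.2f}")
```

Change the stimuli or the question asked of participants, and the collected data no longer answer the new question — which is exactly the generalizability worry raised next.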
Will people behave in the same way if you switch to slightly different stimulus materials? How about if you give the participants different instructions: say, after each video, instead of asking the participants how happy they feel, you ask them how confident they feel about the direction of national politics right now?
In order to answer these new questions, you will have to run a whole new set of experiments.
But how much have you learned about human psychology? How can you generalize beyond the narrow experimental paradigm you repeatedly used? How can you leverage all the data you collected in order to draw conclusions about people’s behavior in the real world? Realistically, the kinds of statements you will be justified in making about human nature, or human behavior in general, will be quite limited. This issue has led to the suggestion that psychology is in the midst of a generalizability crisis.
The areas of psychology that have had the most severe troubles with replicability are precisely those areas that most heavily rely on the stimulus-response paradigm. Rather than a replication crisis or a generalizability crisis, it may be more accurate to say that psychology has for the past decade or so been undergoing a stimulus-response crisis.
4. WEIRD Institutions
In the previous two sections, we considered different reasons why solving the problem of WEIRD participants would not automatically make psychological science less WEIRD: merely sampling more widely cannot in and of itself change the fact that psychological science relies on theoretical assumptions and research methods that are not culturally neutral, but fundamentally Western-centric. This means that the problem of WEIRD participants is just one of psychology’s WEIRD problems, and a relatively superficial one at that.
Psychological science is deeply WEIRD at the institutional level. Experimental psychology is largely carried out within WEIRD organizations that rely on WEIRD resources to generate WEIRD products.
4.1 A Lack of Diversity among Researchers?
In his analysis of APA journals, Arnett (2008) found that, overall, 73% of first authors worked at American universities (this ranged between 65% and 85% for different journals), with an additional 14% of first authors coming from other English-speaking countries and 11% from Europe, leaving only 2% for the rest of the world.
4.2 Publishing
Arnett’s analysis of top APA journals showed that 100% of editors in chief were from American universities, while 82% of associate editors and editorial board members were based in the United States, the remainder coming almost entirely from other rich Western countries.
The Western-centeredness of academic publishing also goes beyond demographics at the individual level, amounting to a much larger structural issue. Academic publishing has been described as an oligopoly with just five publishers – Elsevier, Wiley, Springer, Taylor & Francis, and SAGE – responsible for more than 71% of all articles in psychology alone. Unsurprisingly, these five for-profit publishers are all rich companies based in rich Western countries: Wiley and SAGE were founded and are still headquartered in the United States; Elsevier was originally Dutch but is now owned by RELX, a publicly traded British multinational; Taylor & Francis is British and currently operates as a division of Informa, a publicly traded British group that now also owns former rival Routledge; and Springer was originally founded in Germany, but currently operates as a German–British privately held company that also owns the Nature Group as well as former Springer competitor Palgrave Macmillan.
4.3 Funding
Sources of funding can sometimes exert a non-negligible influence on scientific research and reporting. Funding can also play a role in shaping the outcomes of the investigation.
This phenomenon has been described, especially in the biomedical and pharmacological context, as the “funding effect”: namely, the fact that some “study outcomes were significantly different in privately funded versus publicly funded” research.
According to data from the National Science Foundation (NSF), the majority of financial support for academic research in the United States comes from the federal government, through agencies such as the National Institutes of Health (NIH), the Department of Defense (DOD), the Department of Energy (DOE), and the NSF itself, among others, which together account for more than 50% of academic research funding across fields.
Data from other countries are not as easily accessible, but given the United States’ role and influence in contemporary psychology, the US context already provides a window into the discipline’s culture-bound relationship with research funding.
This is a concern even in basic research in psychology, because there is always the chance that funding sources impose biases, even if subtle ones, on the what and how of research: “whoever pays the piper calls the tune”.
4.4 Beyond Helicopters and Parachutes
Over the past decade, and across fields and disciplines, there has been growing criticism of what is variously described as “helicopter research” or “parachute science”: these are brief research trips, often in Africa or Asia, in which researchers from rich Western countries fly in, collect the data they need, and then fly out to conduct the analysis and interpretation back at home, all with minimal (if any) involvement from and direct benefit to local researchers or the local population.
Helicopter research is a bad way to solve psychology’s WEIRD problems because it targets only the sampling problem. What is more, depending on who the collaborators are, even careful cross-cultural projects might not be enough to fully circumvent the WEIRD theoretical, methodological, and institutional limitations characteristic of contemporary psychology.
Western research typically aims to find out the extent to which Western theories apply to non-Western groups.
Reference:
“Psychology’s WEIRD Problems,” Cambridge Elements.


