Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 2 of 5
Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.
Jump to an individual topic:
- The Magic Number 7 (or, 7 +/- 2)
- When The Facts Don’t Matter
- Confirmation Bias
- Conformity to the Norm
- Primacy and Recency Effects
- The Halo Effect
- The Bystander Effect (or “Diffusion of Responsibility”)
- Selective Attention
- False Consensus
- Homogeneity of the Outgroup
- The Hawthorne Effect
Confirmation Bias
We now know that “the facts” may not persuade us, even when brought to our attention. Confirmation Bias goes a step further: we actively seek out information that reinforces our existing beliefs, rather than gathering all the evidence and fully evaluating the possible explanations.
Wason (1960) conducted a study in which participants were presented with a simple problem: find the pattern in a series of numbers, such as “2-4-6.” Participants could create three subsequent sets of numbers to “test” their theory, and the researcher would confirm whether each set followed the pattern or not. Rather than collecting a list of possible patterns, and using their three “guesses” to prove or disprove each one, Wason found that participants would come up with a single hypothesis, then seek only to prove it. For example, they might hypothesize that “the pattern is even numbers” and check whether “8-10-12”, “6-8-10” and “20-30-40” matched the pattern. When the researcher confirmed that all of their guesses matched, they simply stopped. However, the actual pattern was “increasing numbers” – their hypothesis was not correct at all!
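To make the trap concrete, here is a minimal Python sketch of the task. The rule check, the example triples, and the function name are illustrative assumptions for this post, not taken from Wason’s materials:

```python
# Minimal sketch of Wason's 2-4-6 task (names and triples are hypothetical).

def secret_rule(triple):
    """The experimenter's actual rule: any strictly increasing numbers."""
    a, b, c = triple
    return a < b < c

# A participant who hypothesizes "even numbers" and only generates
# confirming tests -- the positive-test strategy Wason observed.
confirming_tests = [(8, 10, 12), (6, 8, 10), (20, 30, 40)]

# A falsifying strategy also tries triples the hypothesis says should FAIL.
disconfirming_tests = [(1, 2, 3), (5, 4, 3), (2, 2, 2)]

for triple in confirming_tests:
    print(triple, "fits the rule?", secret_rule(triple))
# All three print True, so the participant "confirms" the wrong hypothesis.

for triple in disconfirming_tests:
    print(triple, "fits the rule?", secret_rule(triple))
# (1, 2, 3) contains odd numbers yet still fits the rule -- evidence that
# falsifies "even numbers", which the confirming tests could never reveal.
```

Only the disconfirming tests expose the error; the confirming tests would have let the participant stop early, exactly as Wason’s subjects did.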
Why this matters for analysts: When you analyze data, where do you start? With a hunch that you seek to prove, then stop your analysis there? (For example, “I think our website traffic is down because our paid search spend decreased.”) Or with multiple hypotheses, which you seek to disprove one by one? A great approach, used in government and outlined by Moe Kiss for its applicability to digital analytics, is the Analysis of Competing Hypotheses, sketched below.
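To illustrate the core idea, here is a minimal sketch of an ACH-style evidence matrix. The hypotheses, evidence items, and scores are hypothetical examples riffing on the traffic-drop hunch above; see Heuer’s full ACH procedure for the real method:

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) matrix.
# All hypotheses, evidence, and scores below are invented for illustration.

# Score each piece of evidence against each hypothesis:
# "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
evidence_matrix = {
    "Traffic drop is site-wide, not just paid landing pages": {
        "Paid search spend decreased":  "I",
        "Site was down for some users": "C",
        "Seasonal dip in demand":       "C",
    },
    "Paid search spend was flat month over month": {
        "Paid search spend decreased":  "I",
        "Site was down for some users": "N",
        "Seasonal dip in demand":       "N",
    },
    "Uptime monitoring shows no outages": {
        "Paid search spend decreased":  "N",
        "Site was down for some users": "I",
        "Seasonal dip in demand":       "N",
    },
}

hypotheses = [
    "Paid search spend decreased",
    "Site was down for some users",
    "Seasonal dip in demand",
]

# ACH favours the hypothesis with the LEAST inconsistent evidence,
# rather than the one with the most confirmation.
for h in hypotheses:
    inconsistencies = sum(
        1 for scores in evidence_matrix.values() if scores[h] == "I")
    print(f"{h}: {inconsistencies} inconsistent item(s)")
# Here "Seasonal dip in demand" survives with zero inconsistencies.
```

The key design choice is that hypotheses are eliminated by disconfirming evidence rather than ranked by confirming evidence, which is precisely the discipline Wason’s participants lacked.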
Conformity to the Norm
In 1951, Asch found that we conform to the views of others surprisingly often, even when they are flat-out wrong. He conducted an experiment in which each participant was seated in a group of eight others who were “in” on the experiment (“confederates”). Participants were asked to judge which of three lines was most similar in length to a target line. The task was not particularly “grey area” – there was an obvious right and wrong answer.
Each person in the group gave their answer verbally, in turn. The confederates were instructed to give the incorrect answer, and the participant was the sixth of the group to answer.
Asch was surprised to find that 75% of people conformed to others’ (incorrect) conclusions at least once, and 5% conformed every single time. Only 25% never once agreed with the group’s incorrect answers. (Across all trials, the overall conformity rate was 33%.)
In follow-up experiments, Asch found that if participants wrote down their answers instead of saying them aloud, the conformity rate fell to 12.5%. However, Deutsch and Gerard (1955) found a 23% conformity rate even under conditions of anonymity.
Why this matters for analysts: As mentioned previously, if new findings contradict existing beliefs, it may take more than simply presenting new data. These conformity studies suggest your efforts may be further hampered if you present to a group, where people are less likely to stand up for your new findings against the group norm. In this case, you may be better off discussing your findings with individuals one-on-one, and avoiding putting people on the spot to agree or disagree in a group setting. Similarly, this argues against jumping straight to a “group brainstorming” session. Asch demonstrated that, once in a group, 75% of us will agree with the group at least once (even when it’s wrong!), so we stand the best chance of gathering varied ideas and minimizing “group think” by allowing for individual, uninhibited brainstorming and collecting all ideas first.
Stay tuned!
More to come next week.
What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?