Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 4 of 5
Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.
Jump to an individual topic:
- The Magic Number 7 (or, 7 +/- 2)
- When The Facts Don’t Matter
- Confirmation Bias
- Conformity to the Norm
- Primacy and Recency Effects
- The Halo Effect
- The Bystander Effect (or “Diffusion of Responsibility”)
- Selective Attention
- False Consensus
- Homogeneity of the Outgroup
- The Hawthorne Effect
In 1964 in New York City, a woman named Kitty Genovese was murdered. A newspaper report at the time claimed that 38 people had witnessed the attack (which lasted an hour), yet no one called the police. (Later reports suggested this was an exaggeration – that there had been fewer witnesses, and that some had, in fact, called the police.)
However, this event fascinated psychologists and triggered several experiments. Darley & Latané (1968) staged a medical emergency in which one participant appeared to be having an epileptic seizure, and measured how long it took the other participants to help. They found that the more participants were present, the longer it took for anyone to respond to the emergency.
This became known as the “Bystander Effect”: the more bystanders present, the less likely it is that any individual will step in and help. (Based on this research, CPR training started instructing people to single out a specific individual – “You! Go call 911” – because if you tell the group in general to call 911, there’s a good chance no one will do it.)
Why this matters for analysts: Think about how you present your analyses and recommendations. If you offer them to a large group without assigning responsibility for acting on them to any specific individual, you decrease the likelihood of any action being taken at all. So when you make a recommendation, be specific: who should be taking action on it? A generic “we should do X” is far less likely to happen.
Before you read the next part, watch this video and follow the instructions. Go ahead – I’ll wait here.
In 1999, Simons and Chabris conducted an experiment on awareness at Harvard University. Participants watched a video of basketball players, one team wearing white shirts and the other wearing black shirts, with each team passing a ball among its own members. Participants were asked to count the number of passes between players on the white team. During the video, a man dressed as a gorilla walked into the middle of the court, faced the camera, thumped his chest, and then left (spending a total of 9 seconds on screen). Amazingly, half of the participants missed the gorilla entirely! This has since become known as the “Invisible Gorilla” experiment.
Why this matters for analysts: As you analyze data, there can be huge, gaping issues that you don’t even notice. When we focus on a particular task (for example, counting passes by the white-shirt players only, or analyzing one subset of our customers), we may overlook something significant. Before you finalize or present your analysis, take time to consider what other explanations or variables there could be (what could you be missing?), or invite a colleague to poke holes in your work.
More to come!
What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?