
Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 5 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

False Consensus

Experiments have revealed that we tend to believe in a false consensus: that others would respond similarly to the way we would. For example, Ross, Greene & House (1977) presented participants with a scenario and two possible ways of responding. Participants were asked which option they would choose, and to guess what other people would choose. Regardless of which option they actually chose, participants believed that most other people would choose the same one.

Why this matters for analysts: As you analyze data, you are looking at the behaviour of real people. It’s easy to make assumptions about how they will react, or why they did what they did, based on what you would do. Our analysis will be far more valuable if we are aware of those assumptions and actively seek to understand why our actual customers behaved the way they did, rather than relying on what we would have done ourselves.

Homogeneity of the Outgroup

There is a related effect here: the Homogeneity of the Outgroup (Quattrone & Jones, 1980). In short, we tend to view those who are different to us (the “outgroup”) as all being very similar, while seeing those who are like us (the “ingroup”) as more diverse. For example: all women are chatty, but some men are talkative, some are quiet, some are stoic, some are more emotional, some are cautious, and others take more risks… etc.

Why this matters for analysts: Similar to the False Consensus Effect, where we may analyse user behaviour assuming everyone thinks as we do, the Homogeneity of the Outgroup suggests that we may oversimplify the behaviour of customers who are different to us, and fail to fully appreciate the nuance of their varied behaviour. This may seriously bias our analyses! For example, if we are a large global company, an analysis of customers in another region may be seriously flawed if we assume they are “all the same.” To overcome this tendency, we might leverage local teams or local analysts to conduct or vet such analyses, and check whether our segments really are as uniform as we assume (see the sketch below).
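As a rough illustration, a quick within-segment breakdown can reveal whether a region we are tempted to treat as uniform actually contains very different behaviours. This is only a minimal sketch: the pandas DataFrame, the region/device/order_value columns, and the numbers are all hypothetical, not from any real dataset.

```python
import pandas as pd

# Hypothetical order-level data; column names and values are illustrative only.
orders = pd.DataFrame({
    "region":      ["APAC", "APAC", "APAC", "APAC", "EMEA", "EMEA"],
    "device":      ["mobile", "mobile", "desktop", "tablet", "desktop", "mobile"],
    "order_value": [12.0, 95.0, 210.0, 33.0, 80.0, 41.0],
})

# A single regional average hides the spread within the region...
print(orders.groupby("region")["order_value"].mean())

# ...while a within-region breakdown (here by device) surfaces the variety
# that the homogeneity-of-the-outgroup bias tempts us to ignore.
print(orders.groupby(["region", "device"])["order_value"].agg(["count", "mean", "std"]))
```

The specific breakdown dimension matters less than the habit: before summarising an unfamiliar group with one number, look at how much it varies internally.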

The Hawthorne Effect

In 1955, Henry Landsberger analyzed several studies conducted between 1924 and 1932 at the Hawthorne Works factory. These studies examined factors related to worker productivity, including whether the level of light within a building changed how productive workers were. They found that, while changes in the light level appeared to be related to increased productivity, it was actually the fact that something changed that mattered. (For example, productivity increased even in low-light conditions, which should have made the work more difficult.)

However, this study has been the source of much criticism, and was referred to by Dr. Richard Nisbett as a “glorified anecdote.” Alternative explanations include that Orne’s “Demand Characteristics” were in fact at work (that the changes were due to the workers knowing they were part of the experiment), or the fact that the changes were always made on a Sunday, and Mondays normally show increased productivity because employees have just had a day off (Levitt & List, 2011).

Why this matters for analysts: “Demand Characteristics” could mean that your data is subject to influence if people know they are being observed. For example, in user testing, participants are very aware they are being studied, and may act differently. Your digital analytics data, however, may be less affected. (While people may technically know their website activity is being tracked, it may not be “top of mind” enough during the browsing experience to trigger this effect.) The Sunday vs. Monday explanation reminds us to consider other explanations or variables that may be at play, and to be aware of when we are not fully in control of all the variables influencing our data or our A/B test. However, the Hawthorne studies are also a good example of how interpretations of the same data may vary! There may be multiple explanations for what you’re seeing in the data, so it’s important to vet your findings with others.
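To make the “other variables may be at play” point concrete, here is a minimal sketch of one such sanity check: comparing the day-of-week mix across A/B test variants before trusting a lift figure. The DataFrame and the variant/timestamp/converted columns are hypothetical, standing in for whatever your analytics export actually contains.

```python
import pandas as pd

# Hypothetical A/B test log; column names and values are illustrative only.
events = pd.DataFrame({
    "variant":   ["A", "A", "B", "B", "A", "B"],
    "timestamp": pd.to_datetime([
        "2024-03-04", "2024-03-05", "2024-03-04",
        "2024-03-10", "2024-03-11", "2024-03-11",
    ]),
    "converted": [1, 0, 1, 1, 0, 1],
})

events["day_of_week"] = events["timestamp"].dt.day_name()

# If one variant's traffic skews toward different days of the week,
# a day-of-week effect (like the Monday bump above) could masquerade as a lift.
print(pd.crosstab(events["variant"], events["day_of_week"], normalize="index"))

# Conversion rate by variant, to be read alongside the mix above.
print(events.groupby("variant")["converted"].mean())
```

The same idea applies to any variable you did not control: check whether it is distributed evenly across the groups you are comparing before attributing the difference to your change.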

Conclusion

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face when analyzing and presenting data? Are there any interesting studies you have heard of that hold important lessons for analysts? Please share them in the comments!
