
Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 5 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.


False Consensus

Experiments have revealed that we tend to believe in a false consensus: that others would respond much the same way we would. For example, Ross, Greene & House (1977) presented participants with a scenario that had two possible ways of responding. Participants were asked to say which option they would choose, and to guess what other people would choose. Regardless of which option they actually chose, participants believed that other people would choose the same one.

Why this matters for analysts: As you analyze data, you are looking at the behaviour of real people. It’s easy to make assumptions about how they will react, or why they did what they did, based on what you would do. But our analysis will be far more valuable if we are aware of those assumptions and actively seek to understand why our actual customers behaved the way they did, rather than relying on what we would have done.

Homogeneity of the Outgroup

There is a related effect here: the Homogeneity of the Outgroup (Quattrone & Jones, 1980). In short, we tend to view those who are different to us (the “outgroup”) as all being very similar, while seeing those who are like us (the “ingroup”) as more diverse. For example, a man might believe that “all women are chatty,” while recognising that some men are talkative, some are quiet, some are stoic, some are more emotional, some are cautious, others take more risks… and so on.

Why this matters for analysts: Similar to the False Consensus Effect, where we may analyse user behaviour assuming everyone thinks as we do, the Homogeneity of the Outgroup suggests that we may oversimplify the behaviour of customers who are different to us, and fail to fully appreciate the nuance of varied behaviour. This may seriously bias our analyses! For example, if we are a large global company, an analysis of customers in another region may be seriously flawed if we are assuming customers in the region are “all the same.” To overcome this tendency, we might consider leveraging local teams or local analysts to conduct or vet such analyses.
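
As a concrete illustration, here is a minimal sketch in Python (with entirely hypothetical column names) of one habit that helps counter this bias: before summarising a segment you don’t belong to with a single number, check how much variation that segment actually contains.

```python
# A minimal sketch (hypothetical columns: "region", "order_value").
# Before summarising a market with one average, check how much the
# customers within it actually vary -- "the outgroup" is rarely as
# uniform as a pooled number suggests.
import pandas as pd

orders = pd.DataFrame({
    "region": ["APAC", "APAC", "APAC", "EMEA", "EMEA", "EMEA"],
    "order_value": [12.0, 95.0, 40.0, 55.0, 57.0, 54.0],
})

summary = orders.groupby("region")["order_value"].agg(["mean", "std", "count"])
summary["coef_of_variation"] = summary["std"] / summary["mean"]
print(summary)
# A high coefficient of variation hints that "the average customer" in a
# region is hiding very different behaviours.
```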

The Hawthorne Effect

In 1955, Henry Landsberger analyzed several studies conducted between 1924 and 1932 at the Hawthorne Works factory. These studies examined factors related to worker productivity, including whether the level of lighting in the building changed workers’ productivity. The finding was that, while changes to the lighting appeared to be related to increased productivity, it was actually the fact that something changed that mattered. (For example, productivity increased even in low-light conditions, which should have made work more difficult…)

However, this study has been the source of much criticism, and was referred to by Dr. Richard Nisbett as a “glorified anecdote.” Alternative explanations include that Orne’s “Demand Characteristics” were in fact at work (that is, the changes were due to the workers knowing they were part of an experiment), or that the changes were always made on a Sunday, and Mondays normally show increased productivity because employees have just had a day off. (Levitt & List, 2011.)

Why this matters for analysts: “Demand Characteristics” could mean that your data is subject to influence if people know they are being observed. For example, in user testing, participants are very aware they are being studied, and may act differently. Your digital analytics data, however, may be less affected. (While people may technically know their website activity is being tracked, it may not be “top of mind” enough during the browsing experience to trigger this effect.) The Sunday vs. Monday explanation reminds us to consider other explanations or variables that may be at play, and to be aware of when we are not fully in control of all the variables influencing our data or our A/B test. However, the Hawthorne studies are also a good example of how interpretations of the data may vary! There may be multiple explanations for what you’re seeing in the data, so it’s important to vet your findings with others.
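
To make the “other variables” point concrete, here is a minimal sketch (hypothetical table and column names) of one simple check: comparing a daily metric by weekday, so that a day-of-week pattern like the Sunday/Monday explanation isn’t mistaken for the effect of your change.

```python
# A minimal sketch, assuming a daily metrics table with hypothetical
# columns "date" and "conversions". Like the Sunday-vs-Monday critique of
# the Hawthorne studies, an apparent lift after a change may just be a
# day-of-week pattern, so compare weekdays before crediting the change.
import pandas as pd

daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=28, freq="D"),
    "conversions": [120, 115, 118, 117, 122, 90, 85] * 4,
})

daily["weekday"] = daily["date"].dt.day_name()
by_weekday = daily.groupby("weekday", sort=False)["conversions"].mean()
print(by_weekday)
# If "the lift" is concentrated on particular weekdays, day-of-week is a
# confound to rule out before concluding your change caused it.
```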

Conclusion

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data? Are there any interesting studies you have heard of, that hold important lessons for analysts? Please share them in the comments!


Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 4 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.


The Bystander Effect (or “Diffusion of Responsibility”)

In 1964 in New York City, a woman named Kitty Genovese was murdered. A newspaper report at the time claimed that 38 people had witnessed the attack (which lasted an hour) yet no one called the police. (Later reports suggested this was an exaggeration – that there had been fewer witnesses, and that some had, in fact, called the police.)

However, this event fascinated psychologists, and triggered several experiments. Darley & Latane (1968) staged a medical emergency, in which one participant appeared to be having an epileptic seizure, and measured how long it took for other participants to help. They found that the more participants were present, the longer it took for anyone to respond to the emergency.

This became known as the “Bystander Effect”, which proposes that the more bystanders are present, the less likely it is that any individual will step in and help. (Based on this research, CPR training started instructing trainees to tell a specific individual, “You! Go call 911” – because if they simply tell the group to call 911, there’s a good chance no one will do it.)

Why this matters for analysts: Think about how you present your analyses and recommendations. If you offer them to a large group, without specific responsibility to any individual to act upon them, you decrease the likelihood of any action being taken at all. So when you make a recommendation, be specific. Who should be taking action on this? If your recommendation is a generic “we should do X”, it’s far less likely to happen.

Selective Attention

Before you read the next part, watch this video and follow the instructions. Go ahead – I’ll wait here.

In 1999, Simons and Chabris conducted an experiment on awareness at Harvard University. Participants were asked to watch a video of basketball players, in which one team wore white shirts, the other team wore black shirts, and each team passed the ball among its own players. Participants were asked to count the number of passes between players of the white team. During the video, a man dressed as a gorilla walked into the middle of the court, faced the camera, thumped his chest, and then left (spending a total of nine seconds on screen). Amazingly, half of the participants missed the gorilla entirely! This has since been termed “the Invisible Gorilla” experiment.

Why this matters for analysts: As you are analyzing data, there can be huge, gaping issues that you may not even notice. When we focus on a particular task (for example, counting passes by the white-shirt players only, or analyzing one subset of our customers) we may overlook something significant. Take time before you finalize or present your analysis to think of what other possible explanations or variables there could be (what could you be missing?) or invite a colleague to poke holes in your work.
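
One practical way to give yourself a chance of spotting the gorilla is a quick, general profile of the data before you finalise anything. Below is a minimal sketch in Python (the DataFrame and columns are purely illustrative); it is a habit, not a substitute for a colleague poking holes in your work.

```python
# A minimal sketch (hypothetical DataFrame "df"): a quick "what am I not
# looking at?" profile to run before finalising an analysis, so that
# obvious gorillas -- missing values, unexpected categories -- get at
# least one glance.
import pandas as pd

def quick_profile(df: pd.DataFrame) -> None:
    print("Rows:", len(df))
    print("\nMissing values per column:")
    print(df.isna().sum())
    print("\nValue counts for low-cardinality columns:")
    for col in df.columns:
        if df[col].nunique() <= 10:
            print(f"\n{col}:")
            print(df[col].value_counts(dropna=False))

# Example usage with toy data:
df = pd.DataFrame({
    "device": ["mobile", "desktop", "mobile", None, "tablet"],
    "revenue": [10.0, 25.0, None, 5.0, 8.0],
})
quick_profile(df)
```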

Stay tuned

More to come!

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?


Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 3 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.

Primacy and Recency Effects

The serial position effect (so named by Ebbinghaus in 1913) finds that we are most likely to recall the first and last items in a list, and least likely to recall those in the middle. For example, let’s say you are asked to recall apple, orange, banana, watermelon and pear. The serial position effect suggests that individuals are more likely to remember apple (the first item; primacy effect) and pear (the final item; recency effect) and less likely to remember orange, banana and watermelon.

The explanation cited is that the first item/s in a list are the most likely to have made it to long-term memory, and benefit from being repeated multiple times. (For example, we may think to ourselves, “Okay, remember apple. Now, apple and orange. Now, apple, orange and banana.”) The primacy effect is reduced when items are presented in quick succession (probably because we don’t have time to do that rehearsal!) and is more prominent when items are presented more slowly. Longer lists tend to see a decrease in the primacy effect (Murdock, 1962.)

The recency effect (that we’re more likely to remember the last items) occurs because the most recent items are still held in short-term memory (remember, 7 +/- 2!). The items in the middle of the list, however, benefit from neither long-term nor short-term memory, and are therefore forgotten.

This doesn’t just affect your recall of random lists of items. When participants are given a list of attributes of a person, the order appears to matter. For example, Asch (1946) found that participants told “Steve is smart, diligent, critical, impulsive, and jealous” had a positive evaluation of Steve, whereas participants told “Steve is jealous, impulsive, critical, diligent, and smart” had a negative evaluation of Steve. The adjectives are exactly the same – only the order is different!

Why this matters for analysts: When you present information, your audience is unlikely to remember everything you tell them. So choose wisely. What do you lead with? What do you end with? And what do you prioritize lower, and save for the middle?

These findings may also affect the amount of information you provide at one time, and the cadence with which you do so. If you want more retained, you may wish to present smaller amounts of data more slowly, rather than rapid-firing with constant information. For example, rather than presenting twelve different “optimisation opportunities” at once, focusing on one may increase the likelihood that action is taken.

This is also an excellent argument against a 50-slide PowerPoint presentation – you may have mentioned something in it, but if it was 22 slides ago, the chances of your audience remembering it are slim.

The Halo Effect

Psychologists have found that our positive impressions in one area (for example, looks) can “bleed over” to our perceptions in another, unrelated area (for example, intelligence.) This has been termed the “halo effect.”

In 1977, Nisbett and Wilson conducted an experiment with university students. Two groups of students watched a video of the same lecturer delivering the same material, but one group saw a warm and friendly “version” of the lecturer, while the other saw the lecturer present in a cold and distant way. The group who saw the friendly version rated the lecturer as more attractive and likeable.

There are plenty of other examples of this. For example, “physically attractive” students have been found to receive higher grades and/or test scores than “unattractive” students at a variety of ages, including elementary school (Salvia, Algozzine, & Sheare, 1977; Zahr, 1985), high school (Felson, 1980) and college (Singer, 1964.) Thorndike (1920) found similar effects within the military, where a perception of a subordinate’s intelligence tended to lead to a perception of other positive characteristics such as loyalty or bravery.

Why this matters for analysts: The appearance of your reports/dashboards/analyses, the way you present to a group, your presentation style, even your appearance may affect how others judge your credibility and intelligence.

The Halo Effect can also influence the data you are analysing! It is common with surveys (especially in the case of lengthy surveys) that happy customers will simply respond “10/10” for everything, and unhappy customers will rate “1/10” for everything – even if parts of the experience differed from their overall perception. For example, if a customer had a poor shipping experience, they may extend that negative feeling about the interaction with the brand to all aspects of the interaction – even if only the last part was bad! (And note here: There’s a definite interplay between the Halo Effect and the Recency Effect!)
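
If you suspect halo-style responding in your own survey data, one simple check is to flag “straight-lined” responses. The sketch below (with hypothetical question columns q1 to q4) is one way to do that; it is an illustration, not a validated survey-quality method.

```python
# A minimal sketch (hypothetical survey with rating columns "q1".."q4"):
# flag respondents who gave an identical score to every question, a
# pattern consistent with halo-style "10/10 for everything" responding.
import pandas as pd

responses = pd.DataFrame({
    "respondent": ["a", "b", "c", "d"],
    "q1": [10, 7, 1, 9],
    "q2": [10, 8, 1, 6],
    "q3": [10, 6, 1, 8],
    "q4": [10, 9, 1, 7],
})

rating_cols = ["q1", "q2", "q3", "q4"]
responses["straight_lined"] = responses[rating_cols].nunique(axis=1) == 1
print(responses[["respondent", "straight_lined"]])
# Straight-lined responses aren't necessarily invalid, but they deserve a
# closer look before you treat each question's score as independent signal.
```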

Stay tuned

More to come soon!

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?


Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 2 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This post continues exploring classic studies from social psychology, and what we analysts can learn from them.


Confirmation Bias

We know now that “the facts” may not persuade us, even when brought to our attention. Confirmation Bias goes a step further: we actively seek out information that reinforces our existing beliefs, rather than searching for all the evidence and fully evaluating the possible explanations.

Wason (1960) conducted a study in which participants were given a puzzle: find the rule behind a series of numbers, such as “2-4-6.” Participants could create three subsequent sets of numbers to “test” their theory, and the researcher would confirm whether each set followed the rule or not. Rather than collecting a list of possible rules, and using their three “guesses” to prove or disprove each one, Wason found that participants would come up with a single hypothesis and then seek to prove it. (For example, they might hypothesize that “the rule is even numbers” and check whether “8-10-12”, “6-8-10” and “20-30-40” correctly matched the rule. When it was confirmed that their guesses matched, they simply stopped. However, the actual rule was “increasing numbers” – their hypothesis was not correct at all!)
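
To see why the confirming strategy fails, here is a toy Python illustration of the 2-4-6 task: the confirming guesses above fit both the “even numbers” hypothesis and the true “increasing numbers” rule, so only a potentially disconfirming guess can tell the two apart.

```python
# A toy illustration of the 2-4-6 task: confirming guesses are consistent
# with both "even numbers" and the true rule "any increasing sequence",
# so they can never separate the hypotheses. A potentially disconfirming
# guess like (1, 3, 5) can.
def is_even(triple):
    return all(n % 2 == 0 for n in triple)

def is_increasing(triple):
    return triple[0] < triple[1] < triple[2]

confirming_guesses = [(8, 10, 12), (6, 8, 10), (20, 30, 40)]
disconfirming_guess = (1, 3, 5)  # violates "even", fits "increasing"

for guess in confirming_guesses + [disconfirming_guess]:
    print(guess, "even:", is_even(guess), "increasing:", is_increasing(guess))
# The confirming guesses return True for both rules; only the odd,
# increasing triple separates the hypotheses.
```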

Why this matters for analysts: When you start analyzing data, where do you start? With a hunch, that you seek to prove, then stop your analysis there? (For example, “I think our website traffic is down because our paid search spend decreased.”) Or with multiple hypotheses, which you seek to disprove one by one? A great approach used in government, and outlined by Moe Kiss for its applicability to digital analytics, is the Analysis of Competing Hypotheses.
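
As a rough sketch of what an Analysis of Competing Hypotheses can look like in practice (the hypotheses, evidence and scores below are entirely made up), the idea is to score each piece of evidence against every hypothesis, then pay most attention to the hypotheses with the least evidence against them.

```python
# A minimal sketch of an Analysis of Competing Hypotheses-style matrix
# (hypothetical hypotheses and evidence): score each piece of evidence as
# consistent (+1), inconsistent (-1), or neutral (0) with each hypothesis.
evidence = {
    "Paid search spend fell 30%": {"Paid spend cut": 1, "Site outage": 0, "Seasonality": 0},
    "Organic traffic also fell": {"Paid spend cut": -1, "Site outage": 1, "Seasonality": 1},
    "Uptime monitoring shows no outage": {"Paid spend cut": 0, "Site outage": -1, "Seasonality": 0},
}

hypotheses = ["Paid spend cut", "Site outage", "Seasonality"]
inconsistencies = {h: sum(1 for scores in evidence.values() if scores[h] < 0)
                   for h in hypotheses}

for h, n in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
    print(f"{h}: {n} piece(s) of inconsistent evidence")
# ACH emphasises disproving: the hypothesis with the least evidence against
# it survives, rather than the first one you managed to "confirm".
```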

Conformity to the Norm

In 1951, Asch found that we conform to the views of others surprisingly often, even when they are flat-out wrong! He conducted an experiment in which each participant was seated in a group of eight others who were “in” on the experiment (“confederates”). Participants were asked to judge which of three comparison lines was the same length as a target line. The task was not particularly “grey area” – there was an obvious right and wrong answer.


Each person in the group gave their answer verbally, in turn. The confederates were instructed to give the incorrect answer, and the participant was the sixth of the group to answer.

Asch was surprised to find that 76% of people conformed to others’ (incorrect) conclusions at least once. 5% always conformed to the incorrect answer. Only 25% never once agreed with the group’s incorrect answers. (The overall conformity rate was 33%.)

In follow-up experiments, Asch found that if participants wrote down their answers instead of saying them aloud, the conformity rate was only 12.5%. However, Deutsch and Gerard (1955) found a 23% conformity rate even in situations of anonymity.

Why this matters for analysts: As mentioned previously, if new findings contradict existing beliefs, it may take more than just presenting new data. These conformity studies suggest that your efforts may be further hampered if you are presenting the information to a group: it is less likely that people will stand up for your new findings against the norm of the group. In this case, you may be better off discussing your findings with individuals first, and avoiding putting people on the spot to agree or disagree within a group setting. Similarly, this argues against jumping straight to a “group brainstorming” session. Once in a group, Asch demonstrated, 76% of us will agree with the group (even if they’re wrong!), so we stand the best chance of gathering varied ideas and minimising “groupthink” by allowing for individual, uninhibited brainstorming and collection of all ideas first.

Stay tuned!

More to come next week. 

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?


Foundational Social Psychology Experiments (And Why Analysts Should Know Them) – Part 1 of 5

Digital Analytics is a relatively new field, and as such, we can learn a lot from other disciplines. This series of posts looks at some classic studies from social psychology, and what we analysts can learn from them.


The Magic Number 7 (or, 7 +/- 2)

In 1956, George A. Miller published his famous paper arguing that the number of items a person can hold in working memory is about seven, plus or minus two. However, all “items” are not created equal – our brain is able to “chunk” information to retain more. For example, if asked to remember seven words or even seven quotes, we can do so (we’re not limited to seven letters), because each word or quote is an individual item, or “chunk,” of information. Similarly, we may be able to remember seven two-digit numbers, because each individual digit is not treated as its own item.

Why this matters for analysts: This is critical to keep in mind as we present data. Stephen Few argues that a dashboard must be confined to one page or screen, precisely because of this limitation of working memory. You can’t expect people to look at a dashboard and draw conclusions about relationships between separate charts, tables, or numbers while flipping back and forth between pages, because this requires them to retain too much information in working memory. Similarly, expecting stakeholders to recall and connect the dots between what you presented eleven slides ago puts too great a strain on working memory. We must work with people’s natural capabilities, not against them.

When The Facts Don’t Matter

In the mid-1950s, Leon Festinger and colleagues studied a doomsday cult who believed that aliens would rescue them from a coming flood. Unsurprisingly, neither the flood nor the aliens eventuated. In their book, When Prophecy Fails, Festinger et al. commented, “A man with a conviction is a hard man to change. Tell him you disagree and he turns away. Show him facts or figures and he questions your sources. Appeal to logic and he fails to see your point … Suppose that he is presented with evidence, unequivocal and undeniable evidence, that his belief is wrong: what will happen? The individual will frequently emerge, not only unshaken, but even more convinced of the truth of his beliefs than ever before.”

In a 1967 study by Brock & Balloun, subjects listened to several recorded messages that were obscured by static, but they could press a button to clear the static up. Brock & Balloun found that people selectively chose to tune in to the messages that affirmed their existing beliefs; for example, smokers listened more closely when the content disputed a link between smoking and cancer.

However, Chanel, Luchini, Massoni and Vergnaud (2010) found that if we are given an opportunity to discuss the evidence and exchange arguments with someone (rather than just reading the evidence and pondering it alone), we are more likely to change our minds in the face of opposing facts.

Why this matters for analysts: Even if your data seems self-evident, if it goes against what the business has known, thought, or believed for some time, you may need more data to support your contrary viewpoint. You may also want to allow for plenty of time for discussion, rather than simply sending out your findings, as those discussions are critical to getting buy-in for this new viewpoint.

Stay tuned!

More to come tomorrow.

What are your thoughts? Do these pivotal social psychology experiments help to explain some of the challenges you face with analyzing and presenting data?