Cognitive Bias - Problem Definition and Hypothesis - You think as much as you know and remember

As one data scientist aptly noted, “Data and data analytics can be used to support almost any assumption. However, the quality of analytics and the insights they generate depend heavily on the clarity of the problem definition and the assumptions made. Great business leaders distinguish themselves by asking the right questions to uncover new perspectives and solutions, while great analysts translate those business questions into rigorous analytical ones. Without a clear problem definition and well-founded assumptions, truly effective analytics is impossible.”



Availability Bias — You Think as Much as You Remember


A leadership team prides itself on being consumer-centric. To stay connected with customers, they regularly visit consumers’ homes and conduct in-depth interviews. During one such visit, they speak to two or three long-time, loyal customers who express dissatisfaction with a recently changed product. Their reasons for discontent vary, but they all share a common sentiment: the product no longer feels like the one they trusted for years. Some even mention considering switching to competitors.

Among the visitors are the company president and the marketing officer, and they quickly form a hypothesis: the stagnant performance of the newly upgraded product is likely due to the dissatisfaction of loyal users. The analytics and R&D teams are brought in to explore this hypothesis. They analyze customer data and find a higher volume of complaints from existing users who are unhappy with the change. Alarmed by the findings, the team decides to revert the product to its original formula, fearing that continuing with the new version might lead to further decline. However, after switching back to the original formula, the business continues to stagnate and even slightly declines, erasing any previous positive results.


What went wrong?


The problem lies in the team’s initial assumptions, shaped by availability bias. The leadership team’s vivid, personal experience led them to overestimate the broader impact of that feedback. This overestimation pushed the analytics team toward a single, incorrect hypothesis, ‘losing loyal users is the key reason for the stagnant business’, and caused them to miss the alternative hypothesis, ‘are we sufficiently delighting new customers?’ That narrow framing led the team to look only for patterns confirming the loyal-user story and to conclude that their dissatisfaction was the key cause of slow business growth.


Availability bias also shapes how problems are prioritized. Leaders often ask, “What’s the biggest issue right now?” The answer is usually the most recent fire, the loudest complaint, or the problem that caused the most embarrassment. Chronic issues—technical debt, cultural decay, gradual customer dissatisfaction—rarely surface because they lack drama, even when their cumulative impact is far greater.

In analytics, this bias quietly influences what gets analyzed at all. Metrics are created around recent failures. Dashboards expand to track the last incident. Meanwhile, risks that unfold slowly or rarely trigger alerts remain unmeasured—and therefore unmanaged.


Availability bias does not distort numbers. It distorts attention. What feels common is often just memorable. What feels important is often just recent. When analysis is driven by what comes easily to mind, analytics becomes reactive rather than diagnostic. Instead of asking “What problems matter most?” we end up asking “What problem happened most recently?”

This shift turns data analytics into an echo of recent experiences rather than a tool for understanding underlying patterns.


What factors affect the strength of availability bias?

  1. Vividness: Bright, clear, or unusual experiences are more memorable. Talking to a couple of consumers who complain will always feel more important than research data gathered from hundreds of consumers.
  2. Emotional Impact: Fear, anger, happiness, or sadness can amplify availability bias. People tend to overvalue events that evoke strong feelings because they feel more real or urgent.
  3. Personal Relevance: Does it matter to me? Natural disasters that cause thousands of deaths on the other side of the world often feel less significant than a power outage in your own district.
  4. Recency: Recent events impact our thinking more than past ones, leading us to overestimate their prevalence. New information tends to overshadow older data, causing people to misjudge patterns based on what's most current, rather than what's statistically accurate.
  5. Frequency of Exposure: The more often you hear something, the stronger the memory becomes. Even a small number of incidents can feel more important if you hear about them frequently. Consumer relations teams who receive customer complaints may start to believe their product or service has a recurring problem.
  6. Salience: Events that stand out because they are novel, unexpected, or atypical tend to be more memorable. When a "black swan" event occurs, it feels like such events are more frequent due to their high impact on us.


So, how do you overcome this at the problem definition and hypothesis generation stage?

  1. Always challenge your assumptions. Availability bias is always at work, so ask questions such as ‘Is this assumption based only on what I happen to know and remember?’, ‘What other explanations are possible?’, and ‘How would someone else define the problem?’
  2. Separate ‘anecdotes’ from ‘evidence’. Humans are storytelling machines, building narratives from memory and experience, which is exactly where availability bias hits hardest. Explicitly label whether each input is anecdotal (stories, experience) or empirical (data, metrics), and keep ‘what we remember’ separate from ‘what the data shows’.
  3. Use structured frameworks to generate hypotheses. Customer Journey Mapping covers the end-to-end experience from the customer’s perspective and is useful for marketing and R&D questions. For strategic problems, MECE (Mutually Exclusive, Collectively Exhaustive) organizes causes so they do not overlap yet cover the entire problem space: for a sales drop, the key buckets to examine are volume decline, price decline, and mix shift, rather than jumping to only one (a minimal sketch of this decomposition follows this list). For operational issues, IPO (Input-Process-Output) generates hypotheses for failures at each stage.
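
To make the MECE example concrete, here is a minimal sketch in Python, with purely illustrative numbers and hypothetical products, of decomposing a year-over-year revenue change into volume, price, and mix effects so that every bucket gets examined, not just the most memorable one.

```python
# Minimal sketch: MECE-style decomposition of a revenue change into
# volume, price, and mix effects. All numbers are illustrative, not real data.

last_year = {"Product A": {"units": 1000, "price": 10.0},
             "Product B": {"units": 500,  "price": 20.0}}
this_year = {"Product A": {"units": 1100, "price": 9.5},
             "Product B": {"units": 350,  "price": 20.0}}

def revenue(portfolio):
    return sum(p["units"] * p["price"] for p in portfolio.values())

total_change = revenue(this_year) - revenue(last_year)

# Volume effect: change in total units, valued at last year's average price (mix held constant).
units_ly = sum(p["units"] for p in last_year.values())
units_ty = sum(p["units"] for p in this_year.values())
avg_price_ly = revenue(last_year) / units_ly
volume_effect = (units_ty - units_ly) * avg_price_ly

# Price effect: this year's units, valued at the change in each product's price.
price_effect = sum(this_year[k]["units"] * (this_year[k]["price"] - last_year[k]["price"])
                   for k in last_year)

# Mix effect: the remainder, i.e. revenue shift from selling a different blend of products.
mix_effect = total_change - volume_effect - price_effect

print(f"Total change : {total_change:+.0f}")
print(f"  Volume     : {volume_effect:+.0f}")
print(f"  Price      : {price_effect:+.0f}")
print(f"  Mix        : {mix_effect:+.0f}")
```

Run against real sales data, the bucket with the largest contribution becomes the first hypothesis to probe, instead of whichever story was told most recently.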


Remember that availability bias affects what comes to mind first while frameworks decide what must be considered at all.






Confirmation Bias — You Think as Much as You Know


Let’s start with a simple exercise. Don’t overthink your answers — just notice what explanation comes to mind first.

1. Your child’s test score dropped significantly, even though the previous test was good. What do you think is the reason?

2. Sales declined this quarter. What is the most likely cause?

3. A customer complains aggressively. What do you think about them?

4. New data contradicts a long-held belief. What do you do?


Now, let’s examine those answers.


Think about the first question. Many people immediately assume: “They didn’t study hard enough.” This is especially true in cultures that emphasize effort and discipline. What happens next? Parents start looking for signs of laziness, distraction, or lack of discipline, and then try to correct their child’s study behavior. But what if the lower score was caused by something completely different — like a much harder test, a sudden shift in curriculum, or teaching quality that does not match the child’s learning style? Your immediate assumption shaped how you interpreted the situation — and that is confirmation bias at work.


Now consider the second question. Once leadership believes “sales execution is weak,” all ensuing analysis will focus on execution quality. Training programs are launched. Monitoring increases. Incentives are changed. But what if the original cause was something else — like market contraction, competitive pricing, or lower product quality? When the problem is defined based on a familiar story, analysis searches for evidence to confirm that story, rather than discovering what has actually changed.


For the third question, a store clerk may think: “Another difficult and unreasonable customer.”

The quickest resolution may be to give a refund and move on. But what if the complaint reflects repeated unresolved issues with systemic causes? Ignoring that possibility means repeating the same problems over and over.


And the fourth question? If you have worked in analytics in any organization, I bet you have heard this:

“The data must be wrong. Let’s rerun the test!” This is confirmation bias at its purest. Rather than asking what the surprising data might be telling us, time and resources are wasted trying to make the evidence fit the existing belief.



Confirmation bias doesn’t start with the data. It starts much earlier — when we define the problem and decide what we want to prove. Instead of asking “What could explain this outcome?” we ask “How can I prove what I already think is true?” or "Based on my experience, I think the problem is this. Can we check if this is still true?" Once that happens, analysis becomes a mirror — reflecting what we already believe, not what has changed. If the purpose of analysis is to generate new insight and learning, problem definition and hypothesis generation driven by confirmation bias is not a productive use of your organization's resources.



A Better Way: Elon Musk’s First-Principles Thinking


Elon Musk is famous not just for his companies, but for how he approaches problems. One core part of his approach is First-Principles Thinking — breaking problems down to their most fundamental truths and building solutions from there.  


Unlike typical thinking, which often relies on past experience or analogy — “That’s how things have always been done” — Musk asks fundamental questions like:


“What are the basic elements of this situation?”

“What assumptions am I making — and are they really true?”

“If I remove all assumptions, what remains?”


This method forces us to identify and challenge assumptions that often lie beneath our first explanations.


For example, when Musk founded SpaceX, he questioned the assumption that “rockets are inherently expensive.” Instead of accepting that cost as a given, he broke the problem down to raw materials, manufacturing constraints, and physics principles. In doing so, he drastically reduced cost by designing reusable rockets, where the rockets that came before had been treated as single-use.


The key difference between this approach and confirmation bias is subtle but powerful:

Confirmation bias starts with a comfortable assumption and looks for evidence to support it.

First-principles thinking starts with no assumptions and builds understanding from fundamental truths.


How to Use This in Business Analytics


When you define a problem using first principles, you force yourself to ask better questions before you even look at data. For example, 


instead of asking “Why did sales execution decline? Are there any issues in the sales team’s training program?”


Ask:


“What must be true for sales to have declined?”

“What assumptions are baked into our current explanation?”

“What fundamental elements (customer demand, price, product relevance, channel changes) could explain this?”


By breaking the problem down into its basic components, you are far less likely to latch onto the first familiar story that feels right. 
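
As one illustrative way to put this into practice (hypothetical numbers and driver names, not a prescribed method), you can express the outcome as a product of fundamental drivers, for example sales = buyers × purchase frequency × spend per purchase, and measure how much each driver actually moved before committing to any explanation.

```python
# Minimal sketch: express sales as a product of fundamental drivers and
# see which driver actually moved. Numbers and driver names are illustrative.
import math

drivers_last_year = {"buyers": 200_000, "purchase_frequency": 4.0, "spend_per_purchase": 12.0}
drivers_this_year = {"buyers": 170_000, "purchase_frequency": 4.1, "spend_per_purchase": 12.2}

sales_ly = math.prod(drivers_last_year.values())
sales_ty = math.prod(drivers_this_year.values())
print(f"Sales change: {sales_ty / sales_ly - 1:+.1%}")

# Contribution of each driver to the change (log decomposition: shares sum to 100%,
# and a single share can exceed 100% when other drivers move in the opposite direction).
total_log_change = math.log(sales_ty / sales_ly)
for name in drivers_last_year:
    driver_log_change = math.log(drivers_this_year[name] / drivers_last_year[name])
    print(f"  {name:20s} {driver_log_change / total_log_change:6.0%} of the change")
```

In this made-up run, the decline traces almost entirely to fewer buyers, which would point the next round of questions toward demand and product relevance rather than the familiar execution story.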


This shifts thinking from confirming beliefs to discovering truths.



Anchoring Bias — You Think as Much as You Are Primed


Imagine two organizations:

  1. Organization A has been experiencing a sales decline, with an index versus year ago (IYA) of 98, while the broader market is declining even faster, with a market IYA of 95. Despite its own decline, Organization A has been gaining market share during this period. Now, as they set their targets for the next year, their natural instinct will be to lean on their recent performance and the broader market trends.
    In this scenario, the target-setting process is anchored by the recent past and the challenging market conditions. Given that Organization A has seen a modest decline (but with some success in share growth), it might set a target for next year that is modestly optimistic, such as 100 IYA—slightly better than the current year, but acknowledging that the market remains tough. This approach is heavily anchored on the idea that growth is constrained by external conditions like a declining market, and their performance—while positive relative to the market—still falls within a declining trend.
  2. Organization B, on the other hand, is in a different situation. This company currently has a small market share—only 5%—but has slightly increased its share by +0.5% over the past year. With a much smaller market share, the natural mindset in setting next year’s target is driven by the potential for growth. Their instinct might be to look at their low market share as an opportunity—after all, there’s “plenty of room to grow.” Given this starting point, Organization B might set an aggressive growth target, for example, 110 IYA, expecting their market share to increase significantly as they gain ground on competitors (a small worked example of this arithmetic follows this list).
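
To see how different the two anchors really are, here is a minimal sketch with purely illustrative numbers, treating IYA as sales indexed to the prior year (100 = flat) and assuming the market keeps declining at 95. Organization A's current share is an assumed figure for illustration only.

```python
# Minimal sketch: what the IYA figures imply for market share.
# IYA is treated as sales indexed to the prior year (100 = flat). Numbers are illustrative.

def next_share(current_share: float, org_iya: float, market_iya: float) -> float:
    """Share evolves by the ratio of the organization's growth index to the market's."""
    return current_share * (org_iya / market_iya)

# Organization A: declining at 98 IYA in a market declining at 95 IYA -> share still grows.
share_a = 20.0  # assumed current share, in %, for illustration
print(f"Org A share: {share_a:.1f}% -> {next_share(share_a, 98, 95):.1f}%")

# Organization B: a 110 IYA target in a 95 IYA market implies a large share jump.
share_b = 5.0
print(f"Org B share: {share_b:.1f}% -> {next_share(share_b, 110, 95):.1f}%")

# The anchors differ: A anchors on its recent trend (a target near 100 IYA),
# while B anchors on "room to grow" (110 IYA), even though B's implied share
# gain (~0.8 points) exceeds the +0.5 points it actually added last year.
```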

In both cases, anchoring bias plays a critical role in how each organization approaches the future. For Organization A, the anchor is their recent performance in a declining market. The thought process might be: "We’ve seen how tough things are, so modest growth is the best we can expect." Because they’re anchored to their current performance and the external decline, they focus on small improvements.

For Organization B, however, the anchor is their small market share and the potential for growth. They’re anchored to the belief that being small means there's much more to capture, so their target is set aggressively high. The result? While Organization B may be starting from a weaker position, their anchor leads them to expect dramatic gains, pushing them to set a target that assumes a higher-than-realistic growth rate.


How Leadership Leverages Anchoring Bias to Drive Results

In both organizations, leadership plays a pivotal role in leveraging anchoring bias—not only unconsciously, but intentionally. Many leaders are aware that setting high targets can drive more aggressive thinking, creativity, and ultimately better performance. They consciously anchor high to push their teams beyond their comfort zones. By aiming for ambitious targets—like 110 IYA for Organization B—leaders create a mindset where the team is motivated to look beyond incremental growth and consider innovative solutions and untapped opportunities. The high anchor forces them to think differently, potentially opening new channels for growth that wouldn’t have been considered with more modest goals.

This strategy is particularly effective when an organization needs to break free from stagnation or a negative trend. Even though Organization A might be facing a tough market, a leader could intentionally anchor the team to aggressive growth, such as 110 IYA, which challenges the team to reframe the problem. Instead of accepting the constraints of the declining market, they focus on how they can outperform expectations—looking for new efficiencies, innovating products, or exploring new channels. The goal isn’t just to hit the target, but to use the high anchor to drive creativity and stretch the team’s capabilities.

While anchoring high can sometimes result in over-ambitious targets, studies show that teams who aim higher are more likely to overachieve relative to conservative targets. Leaders who set higher targets foster a culture of excellence and continuous improvement, as the team feels they must go beyond the “safe” boundaries of what’s been done before. The key is that the high anchor doesn’t just set the bar—it also shapes the mindset and behaviors of the team. It encourages them to think outside the box, push boundaries, and identify opportunities they might have missed otherwise.



