
Three questions every retailer needs to answer – but almost none can
Fredrik Hammargården • March 2026 • 6 min read
Physical retail has always measured the wrong things. Footfall counts entrances. Sales counts transactions. Neither tells you who your visitors are, whether they are the right ones, or whether your store is designed for the customer you are actually serving. A new measurement framework changes that.
"We count entrances. That's all we have."
– VP, Store Experience, European fashion retailer
When the VP of Store Experience at a major European fashion retailer was asked how many of her visitors were women between 25 and 40 — the brand's stated target customer — she paused for a long moment. "We count entrances," she said. "That's all we have." The brand had 400 stores, a precisely defined brand strategy, and no way of knowing whether its physical spaces were serving the people that strategy was designed for.
This is not an unusual situation. It is the norm. Across apparel, home, and specialty retail, the overwhelming majority of store performance measurement stops at footfall and sales conversion. Both are lagging indicators. Both measure outcomes that have already happened, and neither can tell you why they happened or what to change.
The result is a structural blind spot. Retailers invest in display windows, zone layouts, assortment decisions, and staffing models based on intuition or historical sales data — without ever knowing whether the people in the store match the people the strategy is designed for. As Peter Fader, Professor of Marketing at the Wharton School, has argued across two decades of customer centricity research: companies that treat all customers as equal systematically misallocate resources, because the distribution of commercial value across any customer base is not equal at all.
New camera-based analytics infrastructure is closing this gap. By applying computer vision to existing security camera feeds — without capturing identifiable data — retailers can now measure visitor quality, zone-level engagement, and strategic alignment in real time, at scale, and across their entire store estate. The question this creates is not whether the data is available. It is whether retailers know what to measure.
The problem with counting entrances
Footfall has dominated physical retail measurement for decades because it is easy to count. But footfall without context is close to meaningless. Two stores with identical visitor numbers can have dramatically different commercial outcomes depending on how long visitors stay, which zones they reach, and whether they match the store's target customer profile.
Academic research supports this directly. Hui, Bradlow, and Fader's landmark study of grocery store shopping paths, published in the Journal of Consumer Research in 2009, established that shoppers cover on average only 37 percent of store zones during a visit, with the most commercially active behavior concentrated among those who stay longer and engage more deeply. This finding — that time in store is a predictor of purchase behavior, not merely a byproduct of it — is the empirical foundation for the first distinction any rigorous measurement framework must draw: between visitors and qualified visitors.
RESEARCH GROUNDING
Hui, Bradlow, and Fader (2009) found that as consumers spend more time in store, they become more purposeful: less likely to explore and more likely to buy. Their RFID-tracked dataset showed shoppers visit on average 37% of store zones, with purchase behavior concentrated in zones visited during longer trips. Source: Journal of Consumer Research, vol. 36(3), pp. 478-493.
A visitor who enters and leaves within seconds has not engaged with the store. Including them in footfall counts inflates traffic figures and dilutes conversion calculations. The qualified visitor — the person who stays long enough to meaningfully interact with product — is the actual addressable audience for every commercial decision made on the floor.
This distinction maps directly to what ecommerce teams have understood for years. In digital retail, traffic quality is the primary driver of conversion performance. Email traffic converts at roughly five to six times the rate of social media traffic, not because more people arrive, but because the people who arrive are pre-selected for relevance to the offer. Physical retail has lacked the equivalent infrastructure to make this distinction. It now has it.
Layer one: Traffic quality, not volume
A rigorous view of store performance begins with three traffic-quality metrics that replace the single footfall number most retailers currently rely on.
Metric | What it measures | What it diagnoses
Drop-in rate | Share of passers-by who enter the store | Storefront signal strength — meaningful only when paired with ICP share of entrants
Bounce rate | Share of entrants who leave without engaging | Mismatch between exterior signal and interior experience
Qualified visitors | Entrants who stay long enough to engage with product | True addressable audience for all floor decisions
A critical correction to how drop-in rate is typically used: the metric does not measure success by volume alone. A display window that maximizes drop-in rate by attracting everyone is not doing its job. A window that attracts a higher share of the store's ideal customer profile — even if total drop-in volume is lower — is performing better commercially. The strategic measure of window effectiveness is the quality of who enters, not the count of who enters.
A store with high footfall, high bounce, and low qualified visitors is not a successful store. It is a corridor.
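To make the layer-one definitions concrete, the three traffic-quality metrics can be computed from anonymized visit records. This is a minimal sketch: the `Visit` fields, the 60-second qualification threshold, and the sample numbers are illustrative assumptions, not any vendor's actual schema or recommended cut-off.

```python
from dataclasses import dataclass

# Hypothetical anonymized record: one row per person detected outside the store.
# Field names and the 60-second threshold are illustrative assumptions.
@dataclass
class Visit:
    entered: bool         # passed from the street into the store
    dwell_seconds: float  # time spent inside (0 if they never entered)

QUALIFIED_DWELL_S = 60    # assumed minimum dwell to count as "engaged"

def traffic_quality(passers_by: list[Visit]) -> dict:
    entrants = [v for v in passers_by if v.entered]
    qualified = [v for v in entrants if v.dwell_seconds >= QUALIFIED_DWELL_S]
    bounced = len(entrants) - len(qualified)
    return {
        "drop_in_rate": len(entrants) / len(passers_by) if passers_by else 0.0,
        "bounce_rate": bounced / len(entrants) if entrants else 0.0,
        "qualified_visitors": len(qualified),
    }

# Example: 10 passers-by, 4 enter, 2 stay past the threshold.
sample = [Visit(False, 0)] * 6 + [Visit(True, 15), Visit(True, 30),
                                  Visit(True, 90), Visit(True, 240)]
print(traffic_quality(sample))
# → {'drop_in_rate': 0.4, 'bounce_rate': 0.5, 'qualified_visitors': 2}
```

Note that the conversion denominator shifts once qualified visitors are isolated: a store converting 20% of entrants may in fact be converting 40% of its addressable audience, which is a very different diagnosis.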
Layer two: Zone-level conversion
Once traffic quality is established, the question becomes: are qualified visitors converting, and where? This is where zone-level measurement creates its most immediate commercial value.
The primary metric here is cross conversion to tills: for each zone, what share of visitors end up at the tills in the same visit? This is the physical retail equivalent of the add-to-cart-to-purchase rate in ecommerce — the most diagnostic stage in the funnel, and the one that reveals where commercial friction is concentrated.
A zone with high dwell time and low cross conversion to tills is engaging visitors but not driving purchase intent. The causes are locatable: product placement, price communication, staffing levels, zone crowding. Hui, Bradlow, and Fader's research found that crowded store zones attract visitors but simultaneously reduce their likelihood of purchase — a finding directly relevant to interpreting cross conversion anomalies in high-traffic zones.
RESEARCH GROUNDING
Hui, Bradlow, and Fader (2009) showed that consumers are attracted to crowded store zones but less likely to make a purchase once they arrive. This creates a systematic bias in raw zone traffic figures: high zone visits do not imply high purchase likelihood. Source: Journal of Consumer Research, vol. 36(3), pp. 478-493.
The second zone-level metric is ICP share: the proportion of qualified visitors in a given zone who match the store's ideal customer profile, measured hourly. These two metrics, taken together and compared across zones, produce a quadrant classification: Core zones (high ICP share, high value impact), Broad zones (high traffic, lower profile alignment), Potential zones (high ICP share, currently underperforming), and Weak zones (low on both dimensions). The classification changes by hour and by day of week, making it a genuine operational tool rather than a strategic exercise conducted once a year.
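The quadrant classification described above can be sketched as a simple rule over per-zone figures. The zone names and numbers below are invented for illustration, and the median split is an assumed threshold; in practice the cut-offs would be tuned per store and recomputed by hour and day of week.

```python
from statistics import median

# Illustrative per-zone figures: (ICP share, cross conversion to tills).
# Zone names and values are invented for the sketch.
zones = {
    "denim":       (0.62, 0.31),
    "entrance":    (0.25, 0.28),
    "accessories": (0.58, 0.09),
    "clearance":   (0.18, 0.07),
}

def classify(zones: dict) -> dict:
    # Median splits as an assumed threshold for "high" vs "low".
    icp_cut = median(icp for icp, _ in zones.values())
    conv_cut = median(conv for _, conv in zones.values())
    labels = {}
    for name, (icp, conv) in zones.items():
        if icp >= icp_cut and conv >= conv_cut:
            labels[name] = "Core"       # right audience, converting
        elif icp >= icp_cut:
            labels[name] = "Potential"  # right audience, underperforming
        elif conv >= conv_cut:
            labels[name] = "Broad"      # converting traffic, profile misaligned
        else:
            labels[name] = "Weak"       # low on both dimensions
    return labels

print(classify(zones))
# → {'denim': 'Core', 'entrance': 'Broad', 'accessories': 'Potential', 'clearance': 'Weak'}
```

Re-running the classification on hourly slices, rather than once, is what turns the quadrants into an operational tool: a zone that is Core at 11:00 can be Weak at 17:00.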
Layer three: Strategic alignment
The third and most consequential layer addresses the question that most retailers cannot currently answer: are the visitors the store attracts actually the visitors the strategy is designed for?
This is the question Peter Fader's customer centricity framework has been asking of digital businesses for two decades. His core argument, supported by extensive CLV modeling research published across Marketing Science and the Journal of Marketing, is that customer heterogeneity is large and commercially significant: the top segment of any customer base generates value disproportionate to its size, while a meaningful share of customers generate negligible or negative profit contribution.
The gap between who a retailer thinks is visiting and who is actually visiting is not a philosophical question. It is a data problem — and it now has a data answer.
Metric | What it reveals
Demographic profile: visitors vs tills visitors | Whether the people who purchase match the stated brand target, or whether conversion is concentrated in a different demographic than strategy assumes
ICP share at entry vs ICP share at tills | Funnel alignment: whether the right visitors are being attracted and retained through to purchase, or lost between entry and tills
Zone-level product-market fit | Whether individual zones are serving the audience they are designed for, or have drifted from their intended commercial purpose
The Reinartz and Kumar research on customer profitability, published in the Journal of Marketing in 2000 and 2003, provides the closest peer-reviewed grounding for why this layer matters. Their empirical work established that the match between customer desires and firm offerings — not customer tenure or volume — is the primary driver of commercial value. A visitor who matches the store's ICP is, by definition, in a higher-value relationship with the product.
The ICP as a testable hypothesis, not a declaration
The most important methodological principle in this framework is one that distinguishes it from most retail analytics implementations: the ideal customer profile must be treated as a hypothesis, not a fact.
In B2B marketing — where ICP thinking originated — the concept is well established: you define the characteristics of your most valuable customer, build strategy around attracting them, and then validate the definition against commercial outcomes. If high-ICP accounts do not convert at higher rates than others, the definition needs updating. The ICP is a living document, not a brand manifesto.
The validation mechanism in the three-layer framework is cross conversion to tills. If zones with high ICP share consistently show high cross conversion, the ICP definition is holding. If high ICP share consistently fails to predict tills conversion, the ICP is miscalibrated — either the definition does not match the actual buyer profile, or there is a structural friction in the journey from zone to purchase.
THE VALIDATION LOOP
ICP share at entry → ICP share at zone level → Cross conversion to tills. If all three align, the store is attracting and converting the right visitors. If ICP share is high at entry but low at tills, execution is failing strategy. If ICP share is high in zones but fails to predict tills conversion, the ICP definition requires review.
What this means for how retailers operate
The three-layer framework is not primarily a reporting exercise. It is an operational system. Each layer generates a specific type of action.
Traffic quality metrics drive display window and entrance zone decisions. If drop-in rate is high but the ICP share of entrants is low, the window is attracting the wrong audience. The visual merchandising brief needs to change — not to maximize entry volume, but to attract the right entrant profile.
Zone-level conversion metrics drive daily floor decisions. Which zones are underperforming on cross conversion relative to their ICP share? Where should staffing be increased or reduced? Which product placements are engaging high-ICP visitors without driving them to tills? These are decisions that can be made every morning based on the previous day's data.
Strategic alignment metrics drive quarterly and annual planning. Is the store portfolio attracting the visitors the brand strategy is designed for? Are there systematic gaps between the brand's stated target customer and the actual visitor profile across markets or formats? Where is the strategy misaligned with reality?
Physical retail has been counting the wrong things for too long. The infrastructure to count the right things now exists. The retailers who move first to define, measure, and act on visitor quality rather than visitor volume will gain an analytical advantage that is structural rather than temporary, because understanding who your customers are, and whether your store is designed for them, does not become less important as the competitive environment intensifies. It becomes more important.
Academic sources
Hui, Bradlow, & Fader (2009). Journal of Consumer Research, 36(3), 478-493.
Hui, Fader, & Bradlow (2009). Marketing Science, 28(2), 320-335.
Reinartz & Kumar (2000). Journal of Marketing, 64(4), 17-35.
Reinartz & Kumar (2003). Journal of Marketing, 67(1), 77-99.
Grewal, Levy, & Kumar (2009). Journal of Retailing, 85(1), 1-14.
Fader, P.S. (2011). Customer Centricity. Wharton Executive Essentials.
Hui, Inman, Huang, & Suher (2013). Journal of Marketing, 77, 1-16.