Indivd Insights

Beyond footfall: A new framework for measuring store performance

By: Fredrik Amréus Hammargården •  March 2026  •  6 min read

Physical retail has always measured the wrong things. Footfall counts entrances. Sales counts transactions. Neither tells you who your visitors are, whether they are the right ones, or whether your store is designed for the customer you are actually serving. A new measurement framework changes that.

"We count entrances. That's all we have."

– VP, Store Experience, European fashion retailer

When the VP of Store Experience at a major European fashion retailer was asked how many of her visitors were women between 25 and 40, the brand's stated target customer, she paused for a long moment. "We count entrances," she said. "That's all we have." The brand had 400 stores, a precisely defined brand strategy, and no way of knowing whether its physical spaces were serving the people that strategy was designed for.

This is not an unusual situation. It is the norm. Across apparel, home, and specialty retail, the overwhelming majority of store performance measurement stops at footfall and sales conversion. Both are lagging indicators. Both measure outcomes that have already happened, and neither can tell you why they happened or what to change.

The result is a structural blind spot. Retailers invest in display windows, zone layouts, assortment decisions, and staffing models based on intuition or historical sales data, without ever knowing whether the people in the store match the people the strategy is designed for. As Peter Fader, Professor of Marketing at the Wharton School, has argued across two decades of customer centricity research, companies that treat all customers as interchangeable systematically misallocate resources, because the distribution of customer lifetime value across any customer base is highly skewed. A small number of customers generate value vastly disproportionate to their share of the base, while a significant share generate negligible or negative value (Fader, 2011; Fader and Hardie, 2010). Grewal, Levy, and Kumar's organizing framework for customer experience management in retailing, published in the Journal of Retailing in 2009, made a complementary case: that retailers who measure the full customer experience rather than isolated transaction metrics gain a structurally different understanding of store performance (Grewal, Levy, and Kumar, 2009).

New camera-based analytics infrastructure is beginning to close this gap. By applying computer vision to existing security camera feeds, without capturing identifiable data, retailers can now measure visitor quality, zone-level engagement, and strategic alignment in real time, at scale, and across their entire store estate. The question this creates is not whether the data is available. It is whether retailers know what to measure.

The problem with counting entrances

Footfall has dominated physical retail measurement for decades because it is easy to count. But footfall without context is close to meaningless. Two stores with identical visitor numbers can have dramatically different commercial outcomes depending on how long visitors stay, which zones they reach, and whether they match the store's target customer profile.

Academic research supports this directly. Hui, Bradlow, and Fader's study of grocery store shopping paths, published in the Journal of Consumer Research in 2009, used RFID-based PathTracker technology to record approximately 1,000 shopping trips across 122 store zones in a large supermarket. They found that shoppers covered on average only 37 percent of store zones during a visit, and that as consumers spent more time in store, they became more purposeful: less likely to explore and more likely to buy (Hui, Bradlow, and Fader, 2009). The same research group subsequently proposed a formal integrative framework for modeling path data across marketing contexts, establishing the theoretical foundation for treating in-store movement as a measurable, structured phenomenon rather than noise (Hui, Fader, and Bradlow, 2009). This behavioral pattern, that extended dwell time correlates with increased purchasing intent rather than merely increased browsing, is the empirical foundation for the first distinction any rigorous measurement framework must draw: between visitors and qualified visitors.

RESEARCH GROUNDING

Hui, Bradlow, and Fader (2009) found that as consumers spend more time in store, they become more purposeful: less likely to explore and more likely to buy. Their RFID-tracked dataset of approximately 1,000 shopping trips across 122 zones showed shoppers visit on average 37 percent of store zones, with purchasing behavior concentrated during longer trips. Source: Journal of Consumer Research, vol. 36(3), pp. 478-493.

A visitor who enters and leaves within seconds has not engaged with the store. Including them in footfall counts inflates traffic figures and dilutes conversion calculations. The qualified visitor, the person who stays long enough to meaningfully interact with product, is the actual addressable audience for every commercial decision made on the floor.

This distinction maps directly to what ecommerce teams have understood for years. In digital retail, traffic quality is widely recognized as the primary driver of conversion performance. Industry benchmarks consistently show that owned channels convert at several multiples of paid social traffic, not because more people arrive, but because the people who arrive are pre-selected for relevance to the offer. Physical retail has lacked the equivalent infrastructure to make this distinction. It now has it.

Traffic quality, not volume

| Metric | What it measures | What it diagnoses |
| --- | --- | --- |
| Drop-in rate | Share of passers-by who enter the store | Whether the storefront is attracting the right profile, not just the most people |
| Bounce rate | Share of entrants who leave without engaging | Mismatch between what the exterior promises and what the interior delivers |
| Qualified visitors | Entrants who stay long enough to engage with product | The size of the store's actual commercial audience |
| Qualified Reach Rate | Qualified ICP visitors as a share of passers-by | The true commercial yield of the storefront, combining the three prior metrics in one number |

A rigorous view of store performance begins by replacing the single footfall number most retailers currently rely on with three traffic-quality metrics, and then combining them into a fourth that makes the others actionable at the storefront level.

The first is drop-in rate: the share of passers-by who enter the store. It measures storefront signal strength, but becomes meaningful only when paired with a second question: who is entering. A display window that maximizes drop-in rate by attracting everyone is not doing its job. A window that attracts a higher share of the store's ideal customer profile, even if total drop-in volume is lower, is performing better commercially. The strategic measure of window effectiveness is the quality of who enters, not the count of who enters.

The second is bounce rate: the share of entrants who leave without engaging. A high bounce rate signals a mismatch between exterior promise and interior experience. The window drew them in; the store lost them immediately.

The third, and most consequential at this layer, is qualified visitors: entrants who stay long enough to engage with product. This is the true addressable audience for all floor decisions, from staffing to merchandise placement to zone layout. A store with high footfall, high bounce, and low qualified visitors is not a successful store. It is a corridor.

These three metrics combine into a single storefront number: the Qualified Reach Rate, defined as the share of passers-by who become a qualified ICP visitor inside the store. Where drop-in rate counts who enters and bounce rate measures who leaves, Qualified Reach Rate collapses the full entrance-to-qualification funnel into one figure that reflects both storefront performance and in-store execution simultaneously. A store with high drop-in rate but low Qualified Reach Rate is attracting people but losing them before they qualify: either the wrong profile is entering, or bounce is too high, or both. A store that improves its Qualified Reach Rate has done something commercially meaningful regardless of what happened to raw footfall.

The practical implication is a simple diagnostic format that any senior executive can use without reading a data table. Of every 100 people who walk past a store, a specific number enter, a smaller number stay, a smaller number still qualify, and a final number matches the target customer profile. At a European fashion retailer measured over six weeks, that sequence ran: 10 entered, 6 stayed, 3 qualified, 2 were the right customer. The gap between each stage identifies exactly where the storefront is losing ground, and which type of intervention (visual merchandising, entrance design, or floor layout) is likely to close it.
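The four traffic-quality metrics reduce to simple ratios over the funnel counts. A minimal sketch, using the illustrative six-week figures above (100 passers-by, 10 entered, 6 stayed, 3 qualified, 2 matched the ICP); the variable names are ours, not part of any product:

```python
# Funnel counts from the illustrative example: of 100 passers-by,
# 10 entered, 6 stayed (did not bounce), 3 qualified, 2 matched the ICP.
passers_by = 100
entered = 10
stayed = 6       # entrants who did not bounce
qualified = 3    # stayed long enough to engage with product
icp = 2          # qualified visitors matching the ideal customer profile

drop_in_rate = entered / passers_by           # storefront signal strength
bounce_rate = (entered - stayed) / entered    # lost immediately after entry
qualified_rate = qualified / entered          # entrants who become audience
qualified_reach_rate = icp / passers_by       # passers-by -> qualified ICP visitor

print(f"Drop-in rate:         {drop_in_rate:.0%}")   # 10%
print(f"Bounce rate:          {bounce_rate:.0%}")    # 40%
print(f"Qualified reach rate: {qualified_reach_rate:.0%}")  # 2%
```

The point of the arithmetic is the gap analysis: each ratio isolates one stage of the funnel, so a change in Qualified Reach Rate can be traced to the specific stage that moved.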

Zone-level conversion

Once traffic quality is established, the question becomes: are qualified visitors converting, and where? This is where zone-level measurement creates its most immediate commercial value.

The primary metric here is cross conversion to tills. For each zone, what share of visitors end up at the tills in the same visit? This is the physical retail equivalent of the add-to-cart-to-purchase rate in ecommerce, the most diagnostic stage in the funnel, and the one that reveals where commercial friction is concentrated.
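Computed from visit-level data, cross conversion to tills is a per-zone ratio: of the visitors who entered a zone, the share who reached the tills in the same visit. A minimal sketch with hypothetical visit records and zone names (none of this is real data):

```python
# Each record is one visit: the set of zones reached, and whether the
# visitor ended up at the tills in that same visit. Illustrative data.
visits = [
    {"zones": {"denim", "accessories"}, "tills": True},
    {"zones": {"denim"}, "tills": False},
    {"zones": {"accessories"}, "tills": True},
    {"zones": {"denim", "footwear"}, "tills": False},
]

def cross_conversion(visits, zone):
    """Share of a zone's visitors who reached the tills in the same visit."""
    in_zone = [v for v in visits if zone in v["zones"]]
    if not in_zone:
        return 0.0
    return sum(v["tills"] for v in in_zone) / len(in_zone)

for zone in ("denim", "accessories", "footwear"):
    print(f"{zone}: {cross_conversion(visits, zone):.0%}")
```

Because the denominator is zone visitors rather than store entrants, a zone can have heavy traffic and still score poorly, which is exactly the friction signal the metric is meant to surface.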

A zone with high dwell time and low cross conversion to tills is engaging visitors but not driving purchase intent. The causes are locatable: product placement, price communication, staffing levels, zone crowding. Hui, Bradlow, and Fader's research found that the presence of other shoppers attracts consumers toward a store zone but simultaneously reduces their tendency to make a purchase in that zone (Hui, Bradlow, and Fader, 2009). While these findings originate in grocery, more recent research by Zhang, Li, and Burke, using video tracking and transaction data from specialty apparel stores, confirmed that the pattern holds across retail formats: group dynamics in zones significantly affect both zone penetration and purchase conversion, providing further evidence that raw zone traffic figures systematically overstate commercial performance in crowded areas (Zhang, Li, and Burke, 2018). Additional path-level research has reinforced the operational significance of these dynamics. Hui, Inman, Huang, and Suher demonstrated that in-store travel distance directly affects unplanned spending, establishing that the physical path a shopper takes through a store is itself a driver of commercial outcomes (Hui, Inman, Huang, and Suher, 2013). A companion study by the same research group used in-store video tracking to deconstruct unplanned purchase conversion at the point of purchase, showing that consideration and conversion are distinct, measurable stages that can be influenced by zone-level design decisions (Hui, Huang, Suher, and Inman, 2013). For retailers interpreting zone performance, the collective implication is direct: a crowded zone is not necessarily a productive one, and the physical structure of the shopping path shapes outcomes in ways that aggregate metrics cannot capture.

RESEARCH GROUNDING

Hui, Bradlow, and Fader (2009) showed that consumers are attracted to crowded store zones but less likely to make a purchase once they arrive. Zhang, Li, and Burke (2018) extended this finding using video tracking in specialty apparel stores, demonstrating that group size, composition, and cohesiveness affect both zone choice and purchase conversion. Sources: Journal of Consumer Research, vol. 36(3), pp. 478-493; Journal of the Academy of Marketing Science, vol. 46(4), pp. 532-555.

The second zone-level metric is ICP share: the proportion of qualified visitors in a given zone who match the store's ideal customer profile, measured hourly. These two metrics, taken together and compared across zones, produce a quadrant classification. Core zones show high ICP share and high value impact. Broad zones show high traffic but lower profile alignment. Potential zones show high ICP share but are currently underperforming. Weak zones are low on both dimensions. The classification changes by hour and by day of week, making it a genuine operational tool rather than a strategic exercise conducted once a year.
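The quadrant classification is a two-threshold rule over ICP share and value impact. A minimal sketch; the threshold values are illustrative assumptions, not figures from the framework:

```python
# Quadrant classification of a zone by ICP share and value impact.
# Thresholds are illustrative assumptions; in practice they would be
# calibrated per store format and recomputed by hour and day of week.
ICP_THRESHOLD = 0.40
VALUE_THRESHOLD = 0.50

def classify_zone(icp_share, value_impact):
    high_icp = icp_share >= ICP_THRESHOLD
    high_value = value_impact >= VALUE_THRESHOLD
    if high_icp and high_value:
        return "Core"        # right audience, performing
    if not high_icp and high_value:
        return "Broad"       # high traffic, lower profile alignment
    if high_icp and not high_value:
        return "Potential"   # right audience, underperforming
    return "Weak"            # low on both dimensions

print(classify_zone(0.55, 0.70))  # Core
print(classify_zone(0.55, 0.20))  # Potential
```

Because the inputs are measured hourly, the same zone can legitimately move between quadrants across the day, which is what makes the classification operational rather than a once-a-year exercise.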

Strategic alignment

The third and most consequential layer addresses the question that most retailers cannot currently answer: are the visitors the store attracts actually the visitors the strategy is designed for?

This is the question Peter Fader's customer centricity framework has been asking of digital businesses for two decades. His core argument, developed across peer-reviewed research in Marketing Science and the Journal of Marketing Research, and distilled in his 2011 book Customer Centricity, is that customer heterogeneity is large and commercially significant. Probability models of customer-base behavior consistently show that aggregate metrics mask extreme variation in individual customer value. Firms that ignore this heterogeneity produce downward-biased estimates of their customer base value and systematically misallocate resources across segments (Fader and Hardie, 2010).

The gap between who a retailer thinks is visiting and who is actually visiting is not a philosophical question. It is a data problem, and it now has a data answer.

| Metric | What it reveals |
| --- | --- |
| Demographic profile: visitors vs tills visitors | Whether the people who buy are the people the brand targets, or whether a different demographic is actually converting |
| ICP share at entry vs ICP share at tills | Where in the journey the right visitors are being lost, from entrance to purchase |
| Zone-level product-market fit | Whether each zone is serving the audience it was designed for, or has drifted from its commercial purpose |

Three metrics make the misalignment visible. The first is demographic profile comparison: do the people who purchase match the stated brand target, or is conversion concentrated in a different demographic than strategy assumes? The second is ICP share at entry versus ICP share at tills, which measures funnel alignment: whether the right visitors are being attracted and retained through to purchase, or lost between entry and tills. The third is zone-level product-market fit: whether individual zones are serving the audience they are designed for, or have drifted from their intended commercial purpose.

The Reinartz and Kumar research on customer profitability, published in the Journal of Marketing in 2000 and 2003, provides the closest peer-reviewed grounding for why this layer matters. Their empirical work established that customer tenure and loyalty do not reliably predict profitability. Across noncontractual settings, both short-life and long-life customers can be profitable or unprofitable, and three of four customer segments in their dataset showed decreasing profits over time (Reinartz and Kumar, 2000; Reinartz and Kumar, 2003). The implication is that a visitor who looks loyal, who returns frequently, may not be commercially valuable, while an infrequent visitor who matches the store's target profile may be worth considerably more. What determines value is not how often someone visits, but whether that visitor is someone the business is designed to serve profitably.

The ICP as a testable hypothesis

The most important methodological principle in this framework is one that distinguishes it from most retail analytics implementations: the ideal customer profile must be treated as a hypothesis, not a fact.

In B2B sales, where ICP thinking originated as a practitioner framework for identifying high-value accounts, the concept is well established. You define the characteristics of your most valuable customer, build strategy around attracting them, and then validate the definition against commercial outcomes. If high-ICP accounts do not convert at higher rates than others, the definition needs updating. The ICP is a living document, not a brand manifesto.

The validation mechanism in the three-layer framework is cross conversion to tills. The logic runs as follows: ICP share at entry leads to ICP share at zone level, which leads to cross conversion to tills. If all three align, the store is attracting and converting the right visitors. If ICP share is high at entry but drops significantly by tills, execution is failing strategy. If ICP share is high in zones but fails to predict tills conversion, the ICP definition itself requires review. This creates a closed feedback loop that either confirms or challenges the retailer's assumptions about who their best customer is, measured not in surveys or brand workshops but in observed behavior.

THE VALIDATION LOOP

ICP share at entry leads to ICP share at zone level, which leads to cross conversion to tills. If all three align, the store is attracting and converting the right visitors. If ICP share is high at entry but low at tills, execution is failing strategy. If ICP share is high in zones but fails to predict tills conversion, the ICP definition requires review.
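The validation loop can be sketched as a three-way check. This is our illustrative rendering, not a published algorithm: the drop threshold and the use of high-ICP versus low-ICP tills conversion as a proxy for "ICP predicts conversion" are assumptions:

```python
def validate_icp(icp_entry, icp_tills, high_icp_conv, low_icp_conv,
                 drop_threshold=0.15):
    """Diagnose the ICP hypothesis from observed funnel behavior.

    icp_entry / icp_tills: ICP share at entry and at tills (0..1).
    high_icp_conv / low_icp_conv: tills conversion of high-ICP vs
    low-ICP visitors, used as a proxy for predictive power.
    drop_threshold is an illustrative assumption.
    """
    if icp_entry - icp_tills > drop_threshold:
        # Right visitors arrive but are lost before purchase.
        return "execution failing strategy"
    if high_icp_conv <= low_icp_conv:
        # ICP membership does not predict conversion.
        return "ICP definition requires review"
    return "aligned"

print(validate_icp(0.50, 0.20, 0.30, 0.10))  # execution failing strategy
print(validate_icp(0.50, 0.45, 0.30, 0.10))  # aligned
```

The design choice worth noting is the ordering: an execution failure is checked first, because a large entry-to-tills drop contaminates any conclusion about whether the ICP definition itself is sound.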

What this means for how retailers operate

The three-layer framework is not primarily a reporting exercise. It is an operational system. Each layer generates a specific type of action, and each operates on a different time horizon.

Traffic quality metrics drive display window and entrance zone decisions. If drop-in rate is high but the ICP share of entrants is low, the window is attracting the wrong audience. The visual merchandising brief needs to change, not to maximize entry volume, but to attract the right entrant profile. Qualified Reach Rate is the single number that tells you whether those decisions are working: a window intervention that improves drop-in rate without improving Qualified Reach Rate has not improved commercial performance.

Zone-level conversion metrics drive daily floor decisions. Which zones are underperforming on cross conversion relative to their ICP share? Where should staffing be increased or reduced? Which product placements are engaging high-ICP visitors without driving them to tills? These are decisions that can be made every morning based on the previous day's data.

Strategic alignment metrics drive quarterly and annual planning. Is the store portfolio attracting the visitors the brand strategy is designed for? Are there systematic gaps between the brand's stated target customer and the actual visitor profile across markets or formats? Where is the strategy misaligned with reality?

Physical retail has been counting the wrong things for too long. The infrastructure to count the right things now exists. The retailers who move first to define, measure, and act on visitor quality rather than visitor volume will have an analytical advantage that is structural rather than temporary, because understanding who your customers are, and whether your store is designed for them, does not become less important as the competitive environment intensifies. It becomes more important.

ACADEMIC SOURCES

Fader, P.S. (2011). Customer Centricity: Focus on the Right Customers for Strategic Advantage. Wharton Digital Press.

Fader, P.S., Hardie, B.G.S. (2010). "Customer-Base Valuation in a Contractual Setting: The Perils of Ignoring Heterogeneity." Marketing Science, 29(1), 85-93.

Grewal, D., Levy, M., Kumar, V. (2009). "Customer Experience Management in Retailing: An Organizing Framework." Journal of Retailing, 85(1), 1-14.

Hui, S.K., Bradlow, E.T., Fader, P.S. (2009). "Testing Behavioral Hypotheses Using an Integrated Model of Grocery Store Shopping Path and Purchase Behavior." Journal of Consumer Research, 36(3), 478-493.

Hui, S.K., Fader, P.S., Bradlow, E.T. (2009). "Path Data in Marketing: An Integrative Framework and Prospectus for Model Building." Marketing Science, 28(2), 320-335.

Hui, S.K., Huang, Y., Suher, J., Inman, J.J. (2013). "Deconstructing the 'First Moment of Truth': Understanding Unplanned Consideration and Purchase Conversion Using In-Store Video Tracking." Journal of Marketing Research, 50(4), 445-462.

Hui, S.K., Inman, J.J., Huang, Y., Suher, J. (2013). "The Effect of In-Store Travel Distance on Unplanned Spending: Applications to Mobile Promotion Strategies." Journal of Marketing, 77(2), 1-16.

Reinartz, W., Kumar, V. (2000). "On the Profitability of Long-Life Customers in a Noncontractual Setting: An Empirical Investigation and Implications for Marketing." Journal of Marketing, 64(4), 17-35.

Reinartz, W., Kumar, V. (2003). "The Impact of Customer Relationship Characteristics on Profitable Lifetime Duration." Journal of Marketing, 67(1), 77-99.

Zhang, H., Li, L., Burke, R.R. (2018). "Modeling the Effects of Dynamic Group Influence on Shopper Zone Choice, Purchase Conversion, and Spending." Journal of the Academy of Marketing Science, 46(4), 532-555.