
Quantum RFM: the customer as a state system

The limits of traditional RFM segmentation and an interpretive framework that treats the customer as a dynamic system, not a static object to be classified.

The limits of traditional RFM segmentation

The traditional RFM model originates in contexts where behavioural repetition is a reasonable assumption. Recency is treated as an absolute measure, frequency as a relatively stable property, value as an economic summary. The customer is described as an entity to which scores are assigned, based on common thresholds that should apply to the entire customer base.

In many real-world contexts, especially B2B, this approach quickly shows its limits. Customers with very different behaviours end up in the same class, while similar customers are separated only because they are passing through different temporal phases. The result is a segmentation that looks tidy but actually loses touch with the concrete meaning of the observed behaviours.

Two customers inactive for thirty days are not necessarily in the same situation. For one it may be a natural pause, consistent with their history; for the other, a significant signal of discontinuity. Treating these cases as equivalent means giving up on interpreting behaviour, replacing it with a convenient but uninformative classification.

The limit of traditional RFM is therefore conceptual before it is operational. Time is treated as an absolute quantity and the customer as a static object, while real behaviour is dynamic, situated and strongly dependent on individual history.

The customer as a state system

In a transactional system the customer is not directly observable. What we observe is a discrete sequence of events: orders, interactions, periods of absence. The customer emerges as a structure reconstructed from this sequence, not as an immediately accessible entity.

Thinking of the customer as a state system means recognising that their current state only makes sense in relation to previous states. Every observation is always situated within a dynamic: it does not describe “who” the customer is, but where they are in relation to their own history.

From this perspective, the customer does not possess behaviour as a stable attribute. The customer coincides with the behaviour that emerges from the observed sequence, and each new event modifies the system’s configuration. It is not a matter of updating a profile, but of observing a state transition.

This approach does not introduce superfluous theoretical complexity. On the contrary, it adheres to the way data exists and is produced, avoiding the projection onto the customer of a stability that has never been directly observed.

The problem of absolute recency

Recency is often used as an objective and intuitive measure. The time elapsed since the last order seems to provide an immediate indication of the relationship’s state, which is why it is adopted as the primary signal in many models.

In reality, without a reference point, this measure is ambiguous. Time does not have the same meaning for all customers, nor within the same customer at different moments. Thirty days can represent a normal pause in one context and a significant deviation in another.

Without an individual reference, absolute recency mixes profoundly different situations and produces signals that are difficult to interpret. The same temporal observation can indicate stability, risk or simple noise, depending on the customer’s history.

This problem is not solved by refining thresholds or multiplying classes. As long as time remains absolute, the measure remains disconnected from the structure of the observed behaviour. It is necessary to change the way time is interpreted, not just how it is segmented.

The individual cycle as an empirical reference

Over time, each customer manifests a certain regularity or irregularity in reordering. This regularity is not an assumption, but an empirical fact that can be observed with varying intensity depending on the case.

The individual cycle is not a parameter defined a priori, but an emergent property of the distribution of intervals between events. Estimating it through the median and percentiles means accepting that real behaviour is often asymmetric, discontinuous and influenced by contingent factors.

The choice of robust measures is not a statistical refinement, but a consequence of the nature of the observed data. The mean, in these contexts, tends to be unstable and unrepresentative, while percentiles offer a more coherent reference.

The individual cycle is not a prediction of the next order. It is a historical equilibrium point that allows us to make sense of the time elapsed since the last event, without turning the observation into a predictive promise.
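As a sketch, the cycle described above can be estimated directly from the order history. The use of the median and percentiles follows the text; the helper name, the quartile band and the minimum-history guard are illustrative assumptions:

```python
from datetime import date
from statistics import quantiles

def individual_cycle(order_dates):
    """Estimate a customer's reorder cycle from their own history.

    The cycle is an emergent property of the intervals between
    consecutive orders, summarised with the median and the
    interquartile band rather than the mean.
    """
    dates = sorted(order_dates)
    intervals = [(b - a).days for a, b in zip(dates, dates[1:])]
    if len(intervals) < 2:
        return None  # too little history for any empirical reference
    p25, med, p75 = quantiles(intervals, n=4)  # robust summary of the distribution
    return {"median": med, "p25": p25, "p75": p75}

orders = [date(2024, 1, 5), date(2024, 2, 4), date(2024, 3, 8),
          date(2024, 4, 2), date(2024, 5, 10)]
print(individual_cycle(orders))  # {'median': 31.5, 'p25': 26.25, 'p75': 36.75}
```

The `None` return makes the model's honesty explicit: with one or two orders there is no distribution of intervals to summarise, and pretending otherwise would manufacture a cycle that was never observed.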

A continuous measure of deviation

Once an individual reference is defined, it becomes possible to read the customer’s current state in terms of deviation from their own historical equilibrium. Quantum RFM introduces a continuous measure that describes this relationship directly.

This measure is not meant to compare different customers with each other. It is a local measure, which allows us to observe the same customer over time, maintaining the context of their history. It says where the system is in relation to what has been normal for it, not whether it is “doing well” or “doing badly” in an absolute sense.

The continuity of this measure is central. Discrete classifications are an operational necessity, but not a property of the phenomenon. Reducing a continuous dynamic to a class too early means losing information and making the interpretation rigid.
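A minimal way to express such a local, continuous measure is the ratio between the time elapsed since the last order and the customer's own median cycle. This exact formulation is an assumption made for illustration, not the model's canonical definition:

```python
def deviation(days_since_last_order, median_cycle_days):
    """Continuous deviation from the customer's own historical rhythm.

    Values near 1.0 mean the customer is where their history says
    they usually are; larger values mean a growing departure.
    Illustrative ratio, not a prescribed formula.
    """
    return days_since_last_order / median_cycle_days

# The same thirty days of silence reads differently per customer:
print(deviation(30, 28))  # ~1.07: consistent with a monthly rhythm
print(deviation(30, 10))  # 3.0: a strong departure for a weekly buyer
```

Note that the measure only compares a customer with themselves: the two values above are not comparable with each other, exactly as the text requires.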

Real distributions and robustness choices

Observed behaviours rarely follow regular distributions. They are often concentrated in specific intervals, interrupted by periods of inactivity or marked by exceptional events that temporarily alter the rhythm.

In this context, the use of the mean or models that assume normality introduces systematic distortions. Outliers are not errors to be eliminated, but an integral part of the observed phenomenon.

Quantum RFM adopts robust measures because it seeks a stable representation of real behaviour, not an abstractly correct value. Robustness is a methodological choice that privileges interpretive coherence over formal precision.

The simplicity of the metrics is not a concession, but a necessary condition for making the model readable and usable in a conscious way.
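The instability of the mean is easy to see on a small example with one exceptional gap, say a seasonal shutdown. The numbers are invented for illustration:

```python
from statistics import mean, median

# Inter-order intervals in days, with one exceptional gap.
# The outlier is part of the phenomenon, not an error to discard.
intervals = [28, 30, 27, 31, 29, 120]

print(mean(intervals))    # ~44.2: pulled far from the typical rhythm
print(median(intervals))  # 29.5: stays close to the observed cycle
```

A single exceptional event moves the mean by half the cycle length, while the median barely registers it: that is the interpretive coherence the robustness choice buys.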

When the concept of cycle ceases to be informative

Not all customers exhibit sufficiently regular behaviour to be summarised in a cycle. Some act episodically, others concentrate orders in narrow windows, still others alternate phases with completely different logics.

Quantum RFM explicitly recognises this limit. It introduces a consistency measure that does not modify the estimated cycle, but qualifies the confidence we can place in it. The cycle remains unchanged; what changes is the weight we assign to it.

Consistency does not serve to further segment, nor to “correct” behaviour. It serves to signal when the model is operating on a fragile basis and when, instead, the historical reference is sufficiently stable.

In these cases, the model does not fail. On the contrary, it makes explicit an epistemic limit that is often ignored in traditional scoring systems.
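One way to sketch such a consistency measure is the relative dispersion of the intervals (interquartile range over median), mapped to a confidence weight in [0, 1]. The formula, the clipping and the minimum-history guard are assumptions for illustration; the article defines the role of consistency, not its computation:

```python
from statistics import quantiles

def consistency(intervals):
    """Confidence in the estimated cycle, in [0, 1].

    Low relative dispersion of the inter-order intervals means the
    historical reference is stable; high dispersion means the cycle
    exists but deserves little weight. Illustrative formulation.
    """
    if len(intervals) < 4:
        return 0.0  # fragile basis: too few intervals to trust a cycle
    p25, med, p75 = quantiles(intervals, n=4)
    relative_spread = (p75 - p25) / med
    return max(0.0, 1.0 - relative_spread)

print(consistency([27, 28, 29, 30, 31, 32]))  # regular buyer: ~0.88
print(consistency([5, 90, 12, 200, 40, 7]))   # episodic buyer: 0.0
```

The key point survives any choice of formula: the cycle estimate itself is never altered, only the weight it is given downstream.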

States as an operational reading tool

To be usable, a continuous measure must be translated into an operational language. Quantum RFM’s states meet this need without becoming identity categories.

States are not statistical clusters nor labels that define the customer. They are reading tools that describe the moment at which the system finds itself in relation to its own historical behaviour.

A state does not exhaust the available information, but makes it usable. It serves to direct attention and make action possible, without replacing the complexity of the phenomenon with a rigid simplification.
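As an illustration, such states can be read directly off the continuous deviation. The labels and thresholds below are invented for the sketch, not the model's canonical bands:

```python
def state(dev):
    """Translate the continuous deviation into an operational reading.

    The state directs attention; it does not define the customer.
    Labels and thresholds are illustrative assumptions.
    """
    if dev < 0.8:
        return "in rhythm"
    if dev < 1.3:
        return "on watch"
    if dev < 2.0:
        return "deviating"
    return "discontinuity"

print(state(0.9))  # on watch: close to the historical equilibrium
print(state(3.0))  # discontinuity: well beyond the customer's own rhythm
```

Because the classification happens only at the last step, the underlying continuous value remains available whenever the discrete label proves too coarse.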

Operational use of the model

Quantum RFM is not designed to predict the next order or to indiscriminately automate actions. Its purpose is to provide a coherent reading of behaviour over time, reducing the risk of out-of-context interventions.

The introduction of consistency makes explicit a principle that is often implicit: not all situations are equally automatable. When information is fragile, the model signals the need for caution, not a calculation error.

In this sense, Quantum RFM supports the decision without replacing it. It offers an interpretive structure that helps decide when to act, when to wait and when it is appropriate to put human judgement back at the centre.

Concluding considerations

Quantum RFM is an interpretive framework before it is an operational model. It arises from the need to read customer behaviour without imposing artificial regularities or arbitrary thresholds.

Treating the customer as a state system means accepting that behaviour is dynamic, history-dependent and often irregular. The model does not seek to eliminate this complexity, but to make it readable and usable.

It does not promise certainties or infallible predictions. It offers a conceptual structure that allows us to observe better, decide with greater coherence and recognise, when necessary, the limits of automation itself.
