Vinita Bhatia

When first-party data fragments, person-based marketing takes centre stage: Lakshmana Gnanapragasam

Epsilon India’s SVP-Analytics explains how person-based marketing confronts duplication, blind spots and false scale, turning identity resolution into a commercial, not technical, decision.

Lakshmana Gnanapragasam, senior vice president of analytics at Epsilon India.

The next time you sign up for a loyalty programme, consider the email address or phone number you provide. Most consumers juggle multiple identities and devices to maximise rewards or maintain privacy, creating a deceptive reality for marketers.

As marketers lean harder on first-party data to replace disappearing third-party signals, an inconvenient truth keeps surfacing: most customer databases are messier than dashboards suggest. Multiple email IDs, shared devices and overlapping loyalty profiles often inflate reach figures and quietly erode media efficiency. What looks like scale is frequently duplication.

Lakshmana Gnanapragasam, senior vice president of analytics at Epsilon India, sits at the intersection of this problem. With a career spanning Quantum India, IBM and more than eight years at Epsilon, he currently leads a 200-member global data science team tasked with turning fragmented data into something commercially usable.

Since Publicis Groupe acquired Epsilon for $4.4 billion in 2019, his remit has expanded to embedding high-scale analytics into agency-of-record relationships, as brands attempt to shift from device-led targeting to person-based marketing. In this interview with Campaign India, Gnanapragasam explains why identity resolution has become a commercial necessity, how weak data foundations undermine AI investments, and why learning the wrong thing can be more damaging than learning nothing at all.

Publicis acquired Epsilon in 2019 to place data at the centre of its media propositions. As a lead for professional services in data science, how do you structure the scale and complexity of data flowing through global AOR relationships?

You could split my role at Epsilon into two broad buckets. The first bucket is professional services, specifically around data, consulting and analytics that we deliver to Epsilon’s clients. The second scope of work is internal—to bring this expertise into our product teams, so they can continue to improve capabilities, features and functionality.

We are often the first teams called in to work with clients to solve their problems, which gives us visibility into what they are grappling with. That visibility can potentially provide Epsilon with an opportunity to build products around those problems. The work is therefore not limited to delivery; it also informs how platforms evolve based on real client challenges.

Many CMOs view first-party data as a definitive competitive advantage. At scale, however, gaps emerge. What are the blind spots when brands model growth purely on internal data?

Every client we speak with has a degree of endowment bias. They believe their first-party data is rich, and in many cases they are correct. If you have invested early in a CRM or loyalty programme, you will have longitudinal data: what products customers buy, which categories they transact in, which stores they visit, their online behaviour, and their response to promotions, offers and redemptions.

In B2C retail, brands that invested years ago in these foundations are sitting on rich data. But even the largest retailer may not be selling to every addressable individual in a market. You might have 20–40 million transacting customers, but the potential universe could be 200–500 million.

Second, brands only know customers who transact with them. They have limited understanding of the white space, comprising individuals who have never engaged. While the data may be deep for known customers, there is still a lot about those individuals that the brand does not know, and nothing at all about those outside the ecosystem.

Since that gap often leads to overestimation, how do you reconcile the number of customers brands think they have with actual unique individuals?

We believe every individual is unique, with their own tastes and preferences, and that a message resonating with one individual is unlikely to resonate with another. The way we operationalise that belief is through identity resolution.

At Epsilon, we have an identifier called Core ID. We take client data and onboard it through Epsilon’s data onboarding mechanism, the PeopleCloud platform. When we do this, we often discover that a brand doesn’t have 20 million unique customers; it may have 15 or 16 million.

In-store, people are frequently asked to sign up for loyalty programmes using an email ID or phone number. Most people have multiple email addresses and devices. As a result, what brands perceive as unique customers often includes duplication. Core ID becomes critical because clients don’t want to waste media budgets or send duplicate offers to the same individual under the assumption they are different people.
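The duplication problem described above is, at its core, a record-linkage question: sign-ups that share an identifier should collapse into one person. Core ID itself is proprietary, so the sketch below is only a toy illustration of the principle, using a deliberately naive rule (records sharing an email or a phone number are assumed to be the same person) and a union-find structure to make the linkage transitive:

```python
# Toy sketch of identity resolution via transitive linkage (union-find).
# Epsilon's Core ID is proprietary; this only illustrates why raw record
# counts overstate unique customers. The match rule here (shared email OR
# shared phone means same person) is a deliberate simplification.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def count_unique_people(records):
    """records: list of dicts with optional 'email' and 'phone' keys."""
    uf = UnionFind()
    seen = {}  # identifier value -> index of the record it first appeared in
    for i, rec in enumerate(records):
        uf.find(i)  # register the record itself
        for key in ("email", "phone"):
            value = rec.get(key)
            if not value:
                continue
            if value in seen:
                uf.union(seen[value], i)  # shared identifier links records
            else:
                seen[value] = i
    return len({uf.find(i) for i in range(len(records))})

# Three loyalty sign-ups, one actual person: the phone links records 0 and 1,
# the email links records 1 and 2, so all three collapse transitively.
records = [
    {"email": "a@example.com", "phone": "555-0101"},
    {"email": "b@example.com", "phone": "555-0101"},
    {"email": "b@example.com", "phone": None},
]
print(count_unique_people(records))  # 1 unique individual, not 3
```

Real identity graphs add probabilistic matching, confidence scores and offline signals, but the transitive-collapse behaviour shown here is exactly what shrinks a claimed 20 million customers to 15 or 16 million.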

Data clean rooms are often discussed in terms of media efficiency. How do they protect measurement integrity compared to device-led approaches?

A person-based marketing philosophy, rooted in a strong identity solution like Core ID, gives marketers confidence that they are working with high-quality data. That means the individuals included in campaigns are genuinely unique, and the data is not corrupted, especially when setting up control groups.

Without a robust identity layer, the same person can appear in both test and control groups. That pollutes analysis and undermines learning. The worst outcome is not learning nothing, but learning something incorrect. You then double down on the wrong insights and chase outcomes that will not materialise over time.
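One practical consequence of this point is that experiment splits should be validated at the person level, after identity resolution, rather than at the record level. A minimal sketch, assuming each record has already been mapped to a resolved person ID (the `person_id` values below are hypothetical stand-ins for something like Core ID):

```python
# Minimal sketch: check that test and control cells are disjoint at the
# person level. The person IDs are hypothetical stand-ins for a resolved
# identifier such as Core ID.

def split_is_clean(test_group, control_group):
    """Each argument is an iterable of resolved person IDs.
    Returns (is_clean, overlapping_ids)."""
    overlap = set(test_group) & set(control_group)
    return len(overlap) == 0, overlap

# Record-level emails might look disjoint, but the resolved IDs reveal the
# same individual in both cells, which would corrupt any lift measurement.
test = ["p1", "p2", "p3"]
control = ["p4", "p2"]
clean, leaked = split_is_clean(test, control)
print(clean, leaked)  # False {'p2'}
```

Without the identity layer, the same check run on raw emails or device IDs would pass while the underlying person still sits in both cells, which is precisely the "learning something incorrect" failure mode described above.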

A strong identity foundation significantly improves measurement integrity. It ensures experiments are valid and learning agendas are reliable, which is essential when marketing decisions involve large budgets.

Industry reports suggest 93% of marketers plan to invest in AI, yet only 21% see expected ROI. Where is the value being lost?

Many of the reasons you mentioned explain why AI pilots fail to scale. At Epsilon, we focus on a Value Realisation roadmap. We do not pursue every opportunity. Data quality almost always emerges as a concern because brands often lack a 360-degree customer view.

There are three pillars: data, planning and activation, and measurement. If you plan properly, activate effectively, and maintain a strong learning agenda through closed-loop measurement, you begin to see certain campaigns delivering dividends repeatedly.

At that point, the priority shifts to speed. Before competitors catch up, you need to exploit the opportunity and capture market share. AI value is lost when organisations treat pilots as isolated experiments rather than building systems that learn, iterate and scale.

Epsilon has completed a decade in India. Which sectors or technologies are seeing the fastest adoption locally?

We see significant potential for Epsilon Digital Media Solutions, particularly on the DSP and programmatic side. In India, there is growing opportunity around improving returns on paid media investments. We work with Publicis Groupe agencies to bring these solutions to brands, focusing on optimisation and measurable outcomes.

That is where we see the strongest momentum. Brands are increasingly scrutinising paid media performance, and there is demand for more accountable, data-led optimisation.

How is AI being applied within Epsilon India beyond headline use cases?

AI automation and generative AI act primarily as productivity enablers. We use them across product and services, where Epsilon India plays a pivotal role. Tasks that were traditionally handled manually by consultants or agency teams can now be accelerated.

This is not about replacing people entirely. Instead, individuals can do four to five times more work in the same amount of time. The impact is less visible externally but significant operationally: faster analysis, quicker iterations and more scalable delivery.

As identity becomes central to marketing, what should brands prioritise over the next few years?

Brands need to focus less on surface-level metrics and more on the integrity of the underlying data. Without a reliable understanding of who the customer actually is, even the most advanced AI or clean room strategy will underperform.

The industry’s shift away from third-party cookies has made identity resolution a commercial requirement, not a technical nice-to-have. Those who invest in clean, person-based data foundations will make better decisions. Those who don’t risk optimising noise.

In a market where marketing effectiveness is increasingly judged by outcomes rather than activity, getting identity right is not about sophistication—it is about accuracy.

Source:
Campaign India
