Vodafone Idea · 2023–2025 · Pune, India · Chatbot Platform Owner

Designing self-service
for India's next million
digital users

Owning the experience design of chatbot, voicebot, and email-bot self-service across one of India's largest telecom operators — for a customer base that is mass-market, mobile-first, vernacular, and often on patchy 2G/3G connectivity.

01 — Context

The customer Vodafone Idea actually serves is not the customer most service teams design for.

Indian telecom customer service is not a Western customer service problem with different language settings. The median Vodafone Idea customer is a working adult in a Tier-2 or Tier-3 city, paying ₹150 to ₹400 a month for prepaid recharges, on a mid-tier Android phone, often on a connection that drops to 2G outside their home circle.

They speak one of nine major Indian languages, sometimes mixing Hindi-English in a single sentence. They have low trust in self-service flows because they have been burned by promotional opt-ins, surprise charges, and bots that loop them back to "talk to an agent" after asking five questions.

The team I joined owned every text and voice self-service surface for these customers — chatbot on the website and Vi app, voicebot for retention and collections, and inbound email bot routing customer queries. We reported into the CTO office and worked with two BOT vendor teams, the Genesys contact-centre platform, back-office agent operations, the CRM team, and BI / customer experience analytics.

Why this work mattered
For a mass-market telecom customer, every minute waiting on hold or every wrong bot answer is the difference between continuing the relationship and porting their number to a competitor. Self-service quality was directly upstream of retention.
Vernacular by default · Mobile-first design · Low-trust until earned · Accessibility as ground floor · 7 telecom circles
02 — Project A

Reframing how we asked customers how they felt.

Vodafone Idea collected post-interaction CSAT and TNPS on a 10-point scale — industry standard, used everywhere, decades of historical data. The problem: the data was unreadable. Response rates were low, the distribution was bimodal (clustered at the extremes, very low or 9–10, with little middle), and the BI team could not act on the qualitative signal because comment volume on detractor responses was thin.

My hypothesis was that the scale itself was the problem, not the customers. On a mobile-first vernacular interface, a 10-point scale is functionally a 3-point scale — users mentally collapse it to "bad / okay / great." Forcing them to choose between 7 and 8 adds friction without adding signal.

Before — 10-point scale
1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 · 9 · 10

Treated as effectively a 3-point scale (low / mid / 9–10). High abandonment after the rating step. BI could not extract actionable signal.

After — 5-point emoji scale
😞 😐 🙂 😊 🤩

Mobile-optimised. Follow-up question only triggers on scores ≤3 — faster form for happy customers, deeper signal for detractors.

I redesigned the survey as a 5-point emoji-and-label scale with the qualitative follow-up triggered only on scores ≤3. I ran A/B tests across multiple circles before national rollout, and worked with the BI team early to design a 3-month dual-scale collection window so historical comparability was preserved.
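The conditional follow-up is a simple branch. A minimal sketch, assuming hypothetical names (`build_survey_steps`, the step labels) rather than the production survey's actual schema:

```python
# Sketch of the 5-point survey logic: the qualitative follow-up
# question is shown only for detractor scores (<= 3), keeping the
# form short for happy customers and deepening signal on detractors.

EMOJI_SCALE = {1: "😞", 2: "😐", 3: "🙂", 4: "😊", 5: "🤩"}
DETRACTOR_THRESHOLD = 3  # scores at or below this trigger the follow-up

def build_survey_steps(score: int) -> list[str]:
    """Return the sequence of survey steps a customer sees after rating."""
    if score not in EMOJI_SCALE:
        raise ValueError(f"score must be 1-5, got {score}")
    steps = [f"rating_recorded:{EMOJI_SCALE[score]}"]
    if score <= DETRACTOR_THRESHOLD:
        steps.append("ask_followup_comment")  # only detractors see this
    steps.append("thank_you")
    return steps
```

So `build_survey_steps(2)` includes the follow-up question, while `build_survey_steps(5)` goes straight to the thank-you screen.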

The hard part was not the design — it was getting CX and CRM stakeholders to agree to break a decade of historical trend lines. I presented the proposal three times before sign-off, with documented mapping logic and a quality-of-signal argument that landed only after the third revision.
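One way to preserve historical comparability through a dual-scale window is an explicit, versioned bucket mapping from the 10-point scale to the 5-point one. The boundaries below are an illustrative assumption, not the mapping that was actually signed off:

```python
def map_10pt_to_5pt(score: int) -> int:
    """Collapse a legacy 10-point CSAT score into a 5-point bucket.

    Illustrative boundaries only: 1-2 -> 1, 3-4 -> 2, 5-6 -> 3,
    7-8 -> 4, 9-10 -> 5. Keeping the mapping explicit and documented
    is what lets BI compare cohorts across the scale change.
    """
    if not 1 <= score <= 10:
        raise ValueError(f"score must be 1-10, got {score}")
    return (score + 1) // 2
```

Running both scales side by side for a quarter, with this mapping applied to the legacy responses, gives a bridge period where old and new trend lines can be reconciled.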

Outcome
Response rate increased materially. Detractor-comment volume increased even more. The BI team got cleaner cohort signals within one quarter. The mapping framework I wrote is still in use.
03 — Project B

Recovering the email channel one intent at a time.

When I took ownership of the Email BOT, it was containing about 43% of inbound queries — the rest were being handed off to back-office agents who restarted the conversation from scratch because the bot's transcript wasn't attached. Both customer and agent experience were poor.

I categorised several thousand failed-containment emails by intent. Three patterns covered the majority of failures: bill disputes, plan changes, and service activations. Each needed a different design intervention, not a single "improve the bot" effort.

↘ Email-bot intent routing — three flows, three interventions

Bill disputes: "Why am I being charged this much?"
Bot intervention: clarifying-question chain to surface the actual concern
Handoff design: CRM-attached transcript with itemised charges
Outcome: resolved first-touch

Plan changes: "I want to change my plan"
Bot intervention: CRM lookup, propose adjacent plans only
Handoff design: self-serve checkout in email
Outcome: contained

Activations: "Activate my new SIM"
Bot intervention: status check + activation steps
Handoff design: full transcript to agent if escalated
Outcome: reduced AHT

I worked directly with the back-office team to define what "good handoff" meant from their side. They had come to dread handoffs because every escalated case was a fresh start. A transcript-attached handoff with structured intent metadata reduced their average handle time and changed how they thought about the bot — from threat to colleague.
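A transcript-attached handoff with structured intent metadata can be sketched as a small payload. All field names here are hypothetical illustrations, not the CRM schema actually used:

```python
from dataclasses import dataclass, field

@dataclass
class EscalationHandoff:
    """Structured handoff so the agent never restarts from scratch.

    Hypothetical sketch: intent labels, field names, and the summary
    format are illustrative, not the production CRM contract.
    """
    intent: str                # e.g. "bill_dispute", "plan_change", "activation"
    customer_id: str
    transcript: list[str]      # full bot conversation, attached to the CRM case
    metadata: dict = field(default_factory=dict)  # itemised charges, plan IDs, etc.

    def summary(self) -> str:
        """One-line case summary for the agent's queue view."""
        return f"{self.intent} | {len(self.transcript)} turns | {sorted(self.metadata)}"
```

The design point is that the agent receives the intent, the full transcript, and the structured context in one object, instead of an empty case and a frustrated customer repeating themselves.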

The three flows shipped in sequence over several months, each A/B tested before wider rollout. I tracked progress weekly on a Power BI dashboard I built, and personally reviewed a sample of failure cases each week to keep the model honest.

Containment: 43% at start → 57% after redesign (+14 points, against a 17-point target).
The miss as instructive as the hit
We did not hit 60%, and I owned that publicly — the bill-dispute intent had a regulatory ambiguity I had under-scoped at the start.
04 — Project C

Voice retention across seven circles.

Phase 1 of the MNP retention voicebot rollout went live across seven Indian telecom circles. The job: when a customer initiated a port-out request, the voicebot would call them, understand what was driving the move, offer relevant retention propositions, and either save the customer or hand to a retention agent with full context.

The design challenge was conversational, not technical. A voicebot trying to retain a customer who is mid-port-out has roughly 90 seconds before the customer hangs up. The opening line determines everything.

"This is Vi calling about your port request" loses them. "I noticed you're considering a switch — can I understand why before you decide?" keeps them on the call.

I worked across two BOT vendors, the Genesys voice platform, contact-centre operations, and circle-level retention teams. None of them reported to me. The unlock was a shared weekly readout where every vendor's metrics were visible to every other vendor — comparison drove cooperation more reliably than any process I could have designed.

For agents, I framed the voicebot as taking the repetitive opening calls so agents could handle the harder retentions — and tracked agent NPS as a goal alongside deflection. This mattered because agents who thought the bot was replacing them quietly sabotaged it; agents who thought it was upgrading their work helped tune it.

Outcome
All seven circles delivered in Phase 1 on the committed date. Vendor relationships survived a difficult mid-project moment and both contracts were extended.
05 — Looking forward

An exploratory POC: could a digital human handle Indian telecom self-service?

Alongside the production work, I led an exploratory POC asking a different question: what if the next generation of self-service for Indian telecom customers wasn't text or voice — but a photorealistic digital human who could hold a face-to-face video conversation in multiple Indian languages?

The hypothesis: for low-trust, first-time-online users, the cognitive overhead of a chat interface or voicebot may be higher than we assumed. A face — even a synthetic one — might lower the barrier to engaging with self-service for users who default to "talk to a human at the store" because that's what feels safe.

Digital human for Indian telecom self-service
POC · 2024
Explored a face-to-face conversational interface for vernacular customer service — testing whether a synthetic human presence reduces friction for low-trust, first-time-online users who normally avoid digital self-service.

The POC was scoped, prototyped, and presented. It surfaced as many questions as answers — uncanny-valley discomfort for some user segments, latency tradeoffs, vendor-platform dependencies — and the work served as an input to broader strategy conversations about where conversational AI should and shouldn't go for this customer base.

Detailed walk-through available on request, under appropriate confidentiality.

06 — Reflection

What I'd do differently.

First, my Tier-2 city user research leaned more on existing operational data than on primary interviews. If I had the time again, I would have run five real customer ride-alongs in two languages before designing the email-bot flows. The intent categorisation would have been better, and the bill-dispute regulatory ambiguity might have surfaced earlier.

Second, I treated the three projects as independent work streams. They weren't — TNPS data fed the email-bot redesign, the voicebot relied on the email-bot's transcript handoff pattern, and the agent-handoff dashboard pulled from all three. I should have built a shared service blueprint up front that named the dependencies, instead of discovering them through cross-project debugging.

Third, the BI team was a partner from day two on Project A but only from month four on Project B. That delay cost weeks of measurement work. Bring data partners in at scoping, not at launch.

07 — Why this matters next

Vodafone Idea's customers are not a demographic I researched — they are users I designed for.

Week after week, through three product launches, across seven Indian states. The same instincts — mobile-first, vernacular by default, low-trust until earned, accessibility as ground floor not ceiling, scale-first design system thinking — translate directly to any consumer experience aimed at India's mass market.