Owning the experience design of chatbot, voicebot, and email-bot self-service across one of India's largest telecom operators — for a customer base that is mass-market, mobile-first, vernacular, and often on patchy 2G/3G connectivity.
Indian telecom customer service is not a Western customer service problem with different language settings. The median Vodafone Idea customer is a working adult in a Tier-2 or Tier-3 city, paying ₹150 to ₹400 a month for prepaid recharges, on a mid-tier Android phone, often on a connection that drops to 2G outside their home circle.
They speak one of nine major Indian languages, sometimes mixing Hindi-English in a single sentence. They have low trust in self-service flows because they have been burned by promotional opt-ins, surprise charges, and bots that loop them back to "talk to an agent" after asking five questions.
The team I joined owned every text and voice self-service surface for these customers — chatbot on the website and Vi app, voicebot for retention and collections, and inbound email bot routing customer queries. We reported into the CTO office and worked with two BOT vendor teams, the Genesys contact-centre platform, back-office agent operations, the CRM team, and BI / customer experience analytics.
Vodafone Idea collected post-interaction CSAT and TNPS on a 10-point scale: industry standard, used everywhere, decades of historical data. The problem was that the data was hard to act on. Response rates were well below where they should have been, the distribution was bimodal (very low or 9–10, with little in the middle), and the BI team could not act on the qualitative signal because comment volume on detractor responses was thin.
My hypothesis was that the scale itself was the problem, not the customers. On a mobile-first vernacular interface, a 10-point scale is functionally a 3-point scale — users mentally collapse it to "bad / okay / great." Forcing them to choose between 7 and 8 adds friction without adding signal.
Before: the 10-point scale was treated as effectively a 3-point scale (low / mid / 9–10), abandonment after the rating step was high, and BI could not extract actionable signal.
After: a mobile-optimised scale where the follow-up question triggers only on scores ≤3, giving a faster form for happy customers and deeper signal from detractors.
I redesigned the survey as a 5-point emoji-and-label scale with the qualitative follow-up triggered only on scores ≤3. I ran A/B tests across multiple circles before national rollout, and worked with the BI team early to design a 3-month dual-scale collection window so historical comparability was preserved.
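In logic terms, the change is small. Here is a minimal sketch of the trigger rule and the dual-scale mapping; the band boundaries and function names are illustrative assumptions, not the documented production mapping:

```python
# Illustrative sketch only: band boundaries and names are assumptions,
# not the documented production mapping.

def needs_followup(score_5pt: int) -> bool:
    """Trigger the qualitative follow-up only for scores of 3 or below."""
    return score_5pt <= 3

def map_10pt_to_5pt(score_10pt: int) -> int:
    """Collapse a legacy 10-point rating (assumed 1-10 here) into a
    5-point band so the dual-scale collection window stays comparable."""
    bands = {(1, 2): 1, (3, 4): 2, (5, 6): 3, (7, 8): 4, (9, 10): 5}
    for (lo, hi), band in bands.items():
        if lo <= score_10pt <= hi:
            return band
    raise ValueError(f"score out of range: {score_10pt}")
```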
The hard part was not the design; it was getting CX and CRM stakeholders to agree to break a decade of historical trend lines. I presented the proposal three times before sign-off, with documented mapping logic and a quality-of-signal argument that only landed after the third revision.
When I took ownership of the Email BOT, it was containing about 43% of inbound queries — the rest were being handed off to back-office agents who restarted the conversation from scratch because the bot's transcript wasn't attached. Both customer and agent experience were poor.
I categorised several thousand failed-containment emails by intent. Three patterns covered the majority of failures: bill disputes, plan changes, and service activations. Each needed a different design intervention, not a single "improve the bot" effort.
I worked directly with the back-office team to define what "good handoff" meant from their side. They had come to dread handoffs because every escalated case was a fresh start. A transcript-attached handoff with structured intent metadata reduced their average handle time and changed how they thought about the bot: from threat to colleague.
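As a rough sketch, a transcript-attached handoff could carry something like the structure below; the field names and intent labels are illustrative assumptions, not the production schema:

```python
# Sketch of a transcript-attached handoff payload; field names and the
# intent taxonomy are illustrative, not the production schema.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class BotHandoff:
    case_id: str
    intent: str                 # e.g. "bill_dispute", "plan_change", "service_activation"
    confidence: float           # classifier confidence at the point of escalation
    customer_msisdn: str
    transcript: list[dict] = field(default_factory=list)  # full bot conversation, turn by turn
    slots: dict = field(default_factory=dict)              # structured fields already collected
    escalation_reason: str = "low_confidence"

# An agent opening this case sees what the bot already asked,
# so nothing has to be repeated with the customer.
example = BotHandoff(
    case_id="VI-000000",
    intent="bill_dispute",
    confidence=0.42,
    customer_msisdn="****1234",
    transcript=[{"role": "customer", "text": "My bill shows a charge I never agreed to"}],
    slots={"billing_month": "2021-07", "disputed_amount_inr": 99},
)
```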
The three flows shipped in sequence over several months, each A/B tested before wider rollout. I tracked progress weekly on a Power BI dashboard I built, and personally reviewed a sample of failure cases each week to keep the model honest.
Phase 1 of the MNP (mobile number portability) retention voicebot rollout went live across seven Indian telecom circles. The job: when a customer initiated a port-out request, the voicebot would call them, understand what was driving the move, offer relevant retention propositions, and either save the customer or hand to a retention agent with full context.
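Reduced to pseudocode, the decision the bot has to make in that window looks roughly like this; the driver categories and offer names are assumptions for illustration, not the production retention matrix:

```python
# Sketch of the call's decision flow; driver categories and offer names
# are illustrative assumptions, not the production retention config.

RETENTION_OFFERS = {
    "price": "discounted_recharge_pack",
    "network_quality": "priority_network_callback",
    "service_issue": "fast_track_resolution",
}

def next_step(port_out_driver: str, offer_accepted: bool) -> str:
    """Given why the customer is leaving and whether the pitched offer
    landed, decide whether the bot closes the save itself or hands the
    call to a retention agent with full context."""
    offer = RETENTION_OFFERS.get(port_out_driver)
    if offer is None or not offer_accepted:
        return "handoff_to_retention_agent_with_context"
    return f"apply_{offer}_and_confirm_save"
```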
The design challenge was conversational, not technical. A voicebot trying to retain a customer who is mid-port-out has roughly 90 seconds before the customer hangs up. The opening line determines everything.
"This is Vi calling about your port request" loses them. "I noticed you're considering a switch — can I understand why before you decide?" keeps them on the call.
I worked across two BOT vendors, the Genesys voice platform, contact-centre operations, and circle-level retention teams. None of them reported to me. The unlock was a shared weekly readout where every vendor's metrics were visible to every other vendor — comparison drove cooperation more reliably than any process I could have designed.
For agents, I framed the voicebot as taking the repetitive opening calls so agents could handle the harder retentions — and tracked agent NPS as a goal alongside deflection. This mattered because agents who thought the bot was replacing them quietly sabotaged it; agents who thought it was upgrading their work helped tune it.
Alongside the production work, I led an exploratory POC asking a different question: what if the next generation of self-service for Indian telecom customers wasn't text or voice — but a photorealistic digital human who could hold a face-to-face video conversation in multiple Indian languages?
The hypothesis: for low-trust, first-time-online users, the cognitive overhead of a chat interface or voicebot may be higher than we assumed. A face — even a synthetic one — might lower the barrier to engaging with self-service for users who default to "talk to a human at the store" because that's what feels safe.
The POC was scoped, prototyped, and presented. It surfaced as many questions as answers — uncanny-valley discomfort for some user segments, latency tradeoffs, vendor-platform dependencies — and the work served as an input to broader strategy conversations about where conversational AI should and shouldn't go for this customer base.
Detailed walk-through available on request, under appropriate confidentiality.
First, my Tier-2 city user research leaned on existing operational data more than on primary interviews. Given the time again, I would have run five real customer ride-alongs in two languages before designing the email-bot flows. The intent categorisation would have been better, and the bill-dispute regulatory ambiguity might have surfaced earlier.
Second, I treated the three projects as independent work streams. They weren't — TNPS data fed the email-bot redesign, the voicebot relied on the email-bot's transcript handoff pattern, and the agent-handoff dashboard pulled from all three. I should have built a shared service blueprint up front that named the dependencies, instead of discovering them through cross-project debugging.
Third, the BI team was a partner from day two on Project A but only from month four on Project B. That delay cost weeks of measurement work. Bring data partners in at scoping, not at launch.
Week after week, through three product launches, across seven Indian telecom circles. The same instincts translate directly to any consumer experience aimed at India's mass market: mobile-first, vernacular by default, low-trust until earned, accessibility as ground floor not ceiling, scale-first design-system thinking.