Four years inside Amazon Alexa — starting at Alexa Data Services in Chennai supporting the India launch, then three years on the Alexa Skills team in Bangalore designing, building, and certifying voice user interfaces. This case focuses on one personal-time project that grew to 18,000+ active users in six months and earned an Amazon Developer Reward I declined to accept.
Users invoke a skill with the wake word and an invocation name ("Alexa, open Voice Recognition"), and the skill has roughly two seconds to confirm understanding before users churn. There are no analytics dashboards showing where users got confused, no heatmaps of taps. The only signals you get are aggregate usage logs and the reviews customers leave, which they leave only when they're delighted or furious, never the soft middle that web designers rely on.
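That request-response loop can be sketched as a plain-Python router. The JSON envelope below matches Alexa's documented response format, but the handler names and speech text are illustrative placeholders, not the shipped skill (a production skill would use the Alexa Skills Kit SDK):

```python
def build_response(speech: str, should_end: bool = False) -> dict:
    """Wrap speech text in the JSON envelope Alexa expects back."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": should_end,
        },
    }

def handle_request(event: dict) -> dict:
    """Route an incoming request; the first reply has to confirm intent fast."""
    request_type = event["request"]["type"]
    if request_type == "LaunchRequest":
        # The opening line is the whole first impression: say what the
        # skill does and what to say next, in one short sentence.
        return build_response("Welcome back. Say 'enrol me' to add a voice.")
    if request_type == "IntentRequest":
        intent = event["request"]["intent"]["name"]
        return build_response(f"Handling {intent}.")
    return build_response("Goodbye.", should_end=True)
```

The design constraint lives in that LaunchRequest branch: whatever string goes there is the two-second window.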
During my three years on the Alexa Skills team I shipped 175+ skills across ten marketplaces (US, India, Canada, UK, Australia, Germany, France, Italy, Spain, Brazil-Portuguese), validated and certified roughly 5,000 third-party skills as a global Policy POC, and acted as point of contact for the Alexa Skills Certification Policy Guidelines.
But the most instructive design work I did during that period was on a personal-time project I built outside my day job.
The skill, Voice Recognition, enrolled each member of a household by having them say their name three times; from then on it could identify which person was speaking and respond differently: different reminders for different family members, different content preferences, different greeting styles. Small, focused, shipped in English.
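The enrolment flow can be sketched as the state the skill keeps per household. Speaker identification itself was a platform capability; this sketch models only the three-utterance enrolment threshold and the per-member personalization lookup, and every name in it is illustrative rather than the shipped code:

```python
from dataclasses import dataclass, field

ENROLMENT_UTTERANCES = 3  # each member says their name three times

@dataclass
class Member:
    name: str
    samples: int = 0              # enrolment utterances heard so far
    greeting: str = "Hello"       # per-member greeting style
    reminders: list = field(default_factory=list)

    @property
    def enrolled(self) -> bool:
        return self.samples >= ENROLMENT_UTTERANCES

class Household:
    def __init__(self):
        self.members: dict[str, Member] = {}

    def record_sample(self, name: str) -> Member:
        """One enrolment utterance: create the member on first hearing."""
        member = self.members.setdefault(name, Member(name))
        member.samples += 1
        return member

    def greet(self, speaker_id: str) -> str:
        """Personalized reply once enrolled; enrolment prompt otherwise."""
        member = self.members.get(speaker_id)
        if member and member.enrolled:
            return f"{member.greeting}, {member.name}!"
        return "Hi there! Say your name three times to enrol."
```

The point of the sketch is the fallback branch: an unrecognized speaker always lands on the enrolment prompt, so the skill never pretends to know who is talking.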
I shipped the skill, instrumented it, and went on with my day job. Within six months it had 18,265 enablements. Within eighteen months it was at roughly 45,000 users worldwide. Amazon's marketing team reached out about featuring it on the India storefront.
None of that was a plan. It was a consequence of taking customer reviews seriously when no one was watching.
The Alexa store in 2019 was crowded: thousands of voice skills, most of them listed with a static text description, no preview, no demo. A user browsing for "voice recognition" had no way to see how a skill worked before enabling it.
So I made one. I scripted, storyboarded, and produced a 16-second animated promo video that walked a viewer through the skill in plain language — enrolment, recognition, the household-mode personality. Linked it from the skill description. Posted it to my yashochitraka Instagram, where it picked up organic shares.
This is the part of the story I find most honest about how I work. The job description said "design and certify Alexa Skills." The actual work — if you cared whether anyone used what you built — included scripting, storyboarding, character animation direction, branding, and distribution. I didn't wait for marketing to do it. I shipped the video the same week the skill went live.
The video changed enablement velocity in a way the description text alone never did. Users who watched the promo enabled at a noticeably higher rate than those who saw only the description — a pattern visible in the store analytics within weeks of the video going live. The discipline of designing the *promotion* of voice products, not just the products themselves, is one I carried forward to every skill I shipped after.
I read every review the skill received — including the angry ones — and tagged them weekly into a sheet by failure mode. Two patterns dominated.
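The weekly triage can be sketched as a small tagging script. The failure-mode keywords below are illustrative placeholders, not the categories from my actual sheet (the real tagging was done by hand):

```python
from collections import Counter

# Hypothetical keyword map: failure mode -> phrases that suggest it.
FAILURE_MODES = {
    "misrecognition": ("wrong name", "didn't recognise", "mixed up"),
    "enrolment": ("enrol", "set up", "add my voice"),
    "discoverability": ("how do i", "can't find", "household mode"),
}

def tag_review(text: str) -> str:
    """Assign the first failure mode whose keywords appear in the review."""
    lowered = text.lower()
    for mode, keywords in FAILURE_MODES.items():
        if any(keyword in lowered for keyword in keywords):
            return mode
    return "untagged"

def dominant_modes(reviews: list[str], top: int = 2) -> list[tuple[str, int]]:
    """Count tags across a week's reviews and surface the leaders."""
    return Counter(tag_review(review) for review in reviews).most_common(top)
```

The useful output isn't any single tag; it's the weekly ranking, which is what made the dominant patterns visible at all.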
In March 2020, the skill qualified for Amazon's Alexa Developer Rewards program, a cash payout for highly-engaged skills in the India Alexa Skills Store. Because I was an Amazon employee on the Alexa Skills Policy team, accepting it would have been a conflict of interest under Amazon's employee policies.
I escalated to my leadership before any payout cycle, asked for my name to be removed from the program, kept the skill live, and continued maintaining it. I also wrote up the precedent so any future Amazon employee in the same situation would have a clear path.
Someone reading this is probably wondering why I'm including it in a portfolio at all. The answer is: it's the most accurate single signal I can give you about how I think about the work.
Outside the personal project, my day job was certifying Alexa Skills at global scale. I validated approximately 5,000 third-party skills, including major brands (Coca-Cola, Spotify, BBC, Disney, PVR, Fox, Ola, Kohler Konnect). I drafted contributions to the Alexa Skills Certification Policy Guidelines, escalated several HIPAA-related policy gaps later codified into SOP, and was selected to represent Amazon at VOXCON 2019, India's Voice First Conference in New Delhi.
Where I had unique impact was in noticing failure modes the platform itself had blind spots for. In April 2019 I flagged to BI and leadership the risk of an endpoint failure in a third-party VUI platform, based on signals I was seeing on developer forums rather than on internal monitoring data.
BI confirmed the scope: 96,543 skills live on the store, of which 11.2% were built using the platform in question. In June 2019, the failure occurred. The mitigation playbook was already in place because the warning had been raised two months earlier.
This is the kind of work voice products need: someone watching the surface where engineering monitoring doesn't reach, willing to write the unsolicited memo.
First, I designed Voice Recognition without a single user research session. Reviews were my research channel because they were free and immediate, but reviews are biased toward extremes; I never heard from the soft middle of users who quietly disengaged. If I had it to do again, I would have run five interviews in the first month with users who enabled the skill once and never returned. That would likely have surfaced a third complaint pattern I never saw.
Second, I treated the household-mode toggle as a discoverability problem ("how do I name it well?") when it was also an architecture problem. Adding modes downstream is harder than designing for them upstream. If I had designed the architecture to assume household mode by default, the skill would have been simpler, and the toggle would have been an option to disable it, not enable it.
The discipline of reading every review, of writing release notes that named what changed, of declining a reward that wasn't mine to accept — that is the same discipline I bring to any product where users I will never meet are using something I designed.