AI CodeFix

Strategic launch of AI-powered code fixes at enterprise scale

2024–2025 · Product Launch · Activation Design · Design Systems

Overview

I led the strategic design for launching AI CodeFix to General Availability at SonarSource, bringing an AI-assisted code fix feature to thousands of engineering teams worldwide. The users are software developers who live in their IDE and code review tools, with high sensitivity to anything that interrupts their flow or introduces unreliable automation. There were no established enterprise patterns for launching AI features to skeptical developer audiences at this scale. The work required building trust incrementally, in the exact context where developers already operate.

Challenge

Developers spend an estimated 3 hours per week on manual debugging and repetitive issue resolution. How do you drive adoption of an AI feature among users who are professionally trained to distrust automated changes to their code? Traditional enterprise activation patterns (banners, modals, onboarding flows) would fail with this audience. I had to design something that respected developer flow, introduced value at the right moment, and never made users feel the product was trying to take over. At the same time, I was running this launch in parallel with the Remediation Agent, which required both products to share a coherent AI design language.

Process

Understanding the developer mental model

I ran generative research with developers across different seniority levels and tech stacks to understand how they evaluate new tooling, what triggers skepticism toward AI suggestions, and at what point they feel comfortable accepting an automated fix. This surfaced a key insight: developers don't reject AI; they reject AI they can't reason about.

Mapping activation patterns

I audited how enterprise B2B products typically drive feature adoption, then stress-tested each pattern against developer behavior. Most failed: too interruptive, too abstract, or demanding commitment before delivering value. This led me to design around problem-moments rather than onboarding moments, surfacing the feature exactly when the user has a relevant issue in front of them.


Designing progressive disclosure

The activation flow let developers try CodeFix on a single issue with one click before any broader commitment. Each step asked for only as much trust as the product had already earned. I prototyped multiple variations and tested them with developers, iterating on wording, placement, and visual hierarchy to minimize cognitive overhead while keeping AI actions transparent.


Building shared design foundations

Running this alongside the Remediation Agent meant two teams could easily diverge. I created shared design guidelines and reusable components covering AI suggestion states, confidence indicators, and opt-out affordances. This gave both products a consistent language and prevented months of duplicated design work.


Solution

I started by researching how developers discover and evaluate new tooling: not through marketing moments, but through problem-moments in their workflow. This shaped a contextual awareness system that surfaced CodeFix only when an issue was flagged, in the exact place developers were already looking. The activation flow used progressive disclosure: try it on one issue first, then enable it broadly only after experiencing value. This reversed the typical enterprise pattern of commitment before value. To keep both AI products consistent, I created shared design guidelines and reusable components that aligned multiple teams working in parallel.

Impact

Over 4,000 organizations activated AI CodeFix, generating more than 1,000 code fixes daily. The 32% weekly retention rate showed developers integrating it into regular workflows, not just trying it once. The progressive disclosure and contextual awareness patterns were adopted by other product domains at SonarSource, and the design foundations extended directly to the Remediation Agent, saving months of duplicated work. The trust-first design direction was later validated by Sonar's own IDE team: when they shipped the Side-by-Side Code Diff Panel in VS Code v4.44, they described the original experience as feeling like a leap of faith, naming trust and transparency as the key focus of that release.
