SonarQube Remediation Agent
Designing SonarSource's first autonomous AI agent for code quality

Overview
As the sole designer at SonarSource, I led end-to-end UX design for the company's first autonomous AI agent, a product with no established design patterns to draw from. The users are senior software engineers and DevSecOps teams who depend on SonarQube to enforce quality gates across their CI/CD pipelines. For this audience, trust is non-negotiable: a single unexplained autonomous code change could erode months of credibility. The agent automatically fixes issues when pull requests fail quality gates, operating independently while preserving developer trust through transparent execution visibility.
Challenge
There were no existing design patterns for autonomous agents in enterprise developer tools. The core problem wasn't just technical; it was psychological: would engineers trust an AI to autonomously modify their production code? The concern was grounded in industry data: AI-generated pull requests contain 1.7 times more issues than human-written ones, and technical debt increased 30 to 41% after teams adopted AI coding tools. I had to design non-deterministic AI workflows for users who think in systems, value control, and distrust black boxes, all within a 3-6 month MVP timeline spanning three milestones: M1 (internal dogfooding), M2 (configuration and monitoring), and M3 (beta release).
Process
Mapping the unknowns
Before any wireframe, I ran an assumption mapping exercise with the team. We listed every belief we were building on (technical, behavioral, and trust-related) and ranked each by risk and verifiability. This created our research roadmap and prevented us from designing on top of unvalidated foundations.

Research with engineers
I designed and ran moderated sessions with senior engineers and DevSecOps leads: users who scrutinize every automated action. Sessions focused on mental models around autonomous code changes, thresholds for trust, and what visibility they'd need to feel confident delegating fixes to an agent.

Designing the agentic workflow
With no existing patterns for autonomous agents in enterprise tools, I created the interaction model from scratch. Key decisions: how to surface agent triggers within the existing PR flow, how to represent non-deterministic behavior in a linear UI, and how to design opt-in controls that gave engineers authority without making the agent feel unreliable.

Building trust through transparency
The agent job monitoring interface became the most critical design artifact. I designed a step-by-step execution trace showing what the agent read, what it changed, why, and how it verified the result. This gave technical users the audit trail they needed to trust autonomous execution, and validated the riskiest assumption that had threatened the project from the start: that engineers would delegate fixes to an agent they could audit.
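The trace described above can be thought of as a simple data model: an ordered list of read, change, and verify steps, each carrying a rationale. The sketch below is purely illustrative; the type names, fields, and example values are my assumptions for this write-up, not SonarQube's actual schema.

```typescript
// Hypothetical data model for an agent execution trace.
// All names and fields are illustrative, not SonarQube's actual schema.

type StepKind = "read" | "change" | "verify";

interface TraceStep {
  kind: StepKind;
  target: string;     // file or resource the agent touched
  summary: string;    // what the agent did
  rationale?: string; // why it did it; most meaningful for changes
}

interface AgentTrace {
  jobId: string;
  steps: TraceStep[];
}

// Render the trace as the step-by-step audit trail an engineer would review.
function renderTrace(trace: AgentTrace): string[] {
  return trace.steps.map((step, i) => {
    const why = step.rationale ? ` (why: ${step.rationale})` : "";
    return `${i + 1}. [${step.kind}] ${step.target}: ${step.summary}${why}`;
  });
}

const example: AgentTrace = {
  jobId: "job-001",
  steps: [
    { kind: "read", target: "src/auth.ts", summary: "located failing quality-gate issue" },
    { kind: "change", target: "src/auth.ts", summary: "replaced string concatenation with a parameterized query", rationale: "fixes SQL injection finding" },
    { kind: "verify", target: "ci/pipeline", summary: "re-ran quality gate; issue resolved" },
  ],
};
```

The design point the model captures is that every change step pairs an action with a rationale, so the rendered trail answers "what" and "why" in the same line an engineer scans.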
Solution
With no blueprint to follow, I proposed a validated assumptions framework to de-risk every design decision before building. I mapped all critical unknowns across milestones, from whether developers would configure an agent they couldn't predict, to whether they'd trust its output without reviewing every line. I designed studies to test these assumptions early, preventing costly failures downstream. The central design challenge was the agent job monitoring interface: I created a step-by-step execution trace that showed engineers exactly what the agent analyzed, why it made each decision, and how it validated changes, giving technical users the visibility and control they require. This transformed a black-box AI into an auditable, trustworthy collaborator.
Impact
A key enterprise customer reported high confidence in the experience, reducing their technical debt by 1,000 hours in 45 days. The assumptions framework became a model for high-risk initiatives across the org. I introduced Opportunity Solution Tree workshops that shifted the team from feature-first to problem-first thinking, a practice other squads across the design org later adopted. The agentic workflow patterns and trust architecture I designed became the foundation for all future agent capabilities at SonarSource. The Remediation Agent, which built directly on this work, went on to rank #1 on SWE-bench Verified with a 79.2% success rate, resolving issues in an average of 10.5 minutes. Sonar now describes the core design principle as an architecture of trust, the same framing that drove every decision in this project.