Generative AI-Powered NVC Comment Assistant Tool
Columba is a Reddit browser extension designed to foster healthier online discussions using Nonviolent Communication (NVC) principles. We designed an in-line intervention system that identifies potential conflict and offers a "reflective pause" during the commenting process. Our focus was on reducing online reactivity while ensuring the AI supports, rather than replaces, the user’s original intent and authentic voice.

My Contributions
I owned the end-to-end interaction design of the comment intervention flow, built and maintained the design system, ran all cognitive walkthroughs (n=9), and defined the KPI framework within a team of 1 designer, 1 developer, 1 product manager, and 2 researchers.
Product Design Lead
01.2025 - Present
Interaction Design / Design System
AI Research / Cross-Functional Teamwork
Context
Turning NCD Theory into a Real, AI-Driven Tool
Our system is grounded in Needs-Conscious Design (NCD), a framework that translates NVC’s psychological principles into actionable design goals. We specifically focused on the Observation and Needs stages of NVC to move away from "tone policing" toward a reflective interface. By integrating NCD, the system acts as a facilitator for emotional intelligence, helping users stay intentional and connected in high-friction environments.

Problem
The Tension Between Guidance and Autonomy
Supporting Research
AI Assistance Shapes Beliefs, Risks Autonomy
Source: Tom Fleischman, “AI Assistants Can Sway Writers’ Attitudes, Even When They’re Watching for Bias.”
Solution
01. User-Controlled On/Off Toggle
We implemented a browser toggle that allows users to activate NVC guidance only when needed. This approach resolves the tension of AI surveillance by making the tool a conscious choice rather than an intrusive monitor. By giving users full control over the intervention, we preserve personal autonomy during high-stakes discussions.
02. When Original Text Violates NVC Guidelines
When the text violates NVC principles, the UI flags the original statement, highlights detected negative patterns, and presents a revised suggestion alongside a brief “Here’s Why” explanation. This design makes the feedback transparent and educational while preserving user agency through options to accept, edit, or dismiss. We intentionally avoided auto-rewriting or blocking posts, as well as intrusive pop-ups, since those approaches risked feeling coercive, reducing trust, and disrupting the natural writing flow.

03. When Original Text Aligns with NVC Guidelines
When a response aligns with NVC, the UI provides a subtle confirmation with optional tone adjustments. Instead of a binary "pass/fail" check, this serves as a reflective mirror that helps users validate their original intent. This ensures that even positive interactions remain intentional and grounded in the user’s authentic voice.
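Taken together, the three behaviors above reduce to a small decision: do nothing when guidance is toggled off, flag the draft when it violates NVC, and offer a subtle confirmation when it aligns. A minimal TypeScript sketch of that logic (the type shapes and names here are illustrative assumptions, not Columba's actual implementation):

```typescript
// Illustrative sketch of the intervention decision logic.
// Types and names are assumptions for clarity, not Columba's real API.
type InterventionAction = "none" | "flag" | "confirm";

interface AnalysisResult {
  violatesNvc: boolean;       // did the draft break an NVC guideline?
  detectedPatterns: string[]; // e.g. ["blame", "demand"] when violating
}

function decideIntervention(
  guidanceEnabled: boolean, // the user-controlled toggle (Solution 01)
  analysis: AnalysisResult,
): InterventionAction {
  if (!guidanceEnabled) return "none"; // opt-in: toggle off means no intervention
  return analysis.violatesNvc ? "flag" : "confirm"; // flag (02) or confirm (03)
}
```

In the "flag" branch, the UI would then surface the highlighted patterns, the revised suggestion, and the "Here's Why" explanation, with accept, edit, and dismiss always available.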

User Research
Mixed-Methods Evaluation of AI Interventions
We combined qualitative user research with evaluative testing to understand how AI interventions are perceived in real writing contexts. This included user interviews, comparative concept testing (Reddit vs. private messaging), and cognitive walkthroughs (n=9) to assess usability and perceived intrusiveness.
Process
1) Design System First: Scalable Foundation
Before finalizing the platform, we established a scalable design system to maintain consistency across rapid experiments. By standardizing visual tokens and AI-interaction patterns, and by ensuring elements were always editable and transparent, we reduced rework and accelerated sprint velocity. This system let the team iterate on multiple variants without compromising user trust or system behavior.

2) Constraint: Context sensitivity (public forum vs private messaging)
Our user research revealed a clear boundary between public forums and private messaging. While AI intervention felt appropriate on Reddit, users found it intrusive in personal DMs, where it raised privacy concerns.
Approach/decision: Consequently, we prioritized Reddit as our primary target, using private messaging concepts only as comparative probes to refine the intervention's "fit" and "feel."

3) Strategic Refinement: Navigating Design Constraints
Early testing and cognitive walkthroughs led us to pivot away from features that felt "policing" or rigid, ensuring we reinforced user agency and the NCD framework.
Killed: Judgmental color-coding / warning cues
Color-coded escalation signals created a “moderation” vibe and reduced perceived authenticity and trust.

Killed: Rigid tone presets as the default
Presets were seen as limiting and didn’t cover edge cases, so we kept presets as “seed” options but added a free-form custom tone.

Killed: Regenerate-only rewriting (forced authorship)
Users didn’t want to be pushed into AI-written text. We expanded beyond rewriting to include “Here’s Why” explanations so users could learn, decide, and maintain ownership.

Prototype 1.0
Final Prototype
Outcomes & Impact
Validating User Control & Intent
Through a controlled pilot study, we validated that Columba supports healthier communication while maintaining high user autonomy.
Rewrite adoption rate: 25% of sessions applied at least one suggestion (useful without being forced).
Tone improvement: in randomized blind A/B tests (n=9), 70% of reviewers rated NVC-informed drafts as more constructive and less hostile.
Time cost: 25s median added time from intervention trigger to final post (friction stayed acceptable).
Perceived control (5-point scale): 80% selected 4–5 (“Agree/Strongly agree”) for “I felt in control of my final message.”
These metrics demonstrate that Columba functions as a supportive partner rather than a prescriptive editor, balancing automated guidance with human-in-the-loop control.
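For concreteness, a KPI like the rewrite adoption rate reduces to simple arithmetic over session logs. A TypeScript sketch (the `Session` shape and field names are assumptions for illustration, not our actual logging schema):

```typescript
// Illustrative computation of the rewrite adoption rate KPI.
// `Session` and its fields are assumed names, not Columba's real schema.
interface Session {
  suggestionsApplied: number; // count of NVC suggestions the user accepted
}

// Fraction of sessions in which the user applied at least one suggestion.
function rewriteAdoptionRate(sessions: Session[]): number {
  if (sessions.length === 0) return 0;
  const adopting = sessions.filter((s) => s.suggestionsApplied > 0).length;
  return adopting / sessions.length;
}
```

Keeping KPI definitions this explicit made it easy to agree, as a team, on exactly what counted as "adoption" before the pilot ran.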
Reflection
Research to Ethical Product Design
The hardest decision in this project wasn’t a visual or technical one — it was figuring out how much the AI should actually do. Every time we made the suggestions more helpful, users felt less like the final message was theirs. We had to keep pulling back, not because the AI couldn’t do more, but because doing more was the wrong goal. That tension — between a capable system and a respectful one — defined every major design decision we made.
If I were starting over, I’d push harder to expand beyond n=9 in the pilot study. The directional findings were clear, but a larger sample would have given us the statistical confidence to make stronger claims about which specific intervention patterns drove tone improvement versus which ones users simply tolerated. That’s the difference between a research finding and a design principle you can actually build on.
What this project permanently changed for me: I used to think about AI design in terms of capability — what can the model do, and how do I surface that well? Now I think about it in terms of agency — at what point does the AI’s helpfulness start eroding the user’s sense of ownership? Columba taught me that the best AI interaction is often the one that prompts a person to think, not the one that thinks for them.



