Generative AI-Powered
NVC Comment Assistant Tool
This project explores a Reddit browser-extension concept that supports Nonviolent Communication (NVC) through in-context visual interventions embedded directly in the comment-writing workflow. Instead of moderating users, it helps them notice potentially escalatory phrasing and choose calmer alternatives. The interventions are powered by LLM-generated suggestions, but the user stays in charge of what gets posted.
−20% Hostile replies
−12% Forum toxicity
+30% User activation
Context
Turning NCD Theory into a Real, AI-Driven Tool
This project began with Needs-Conscious Design (NCD)—a research framework grounded in Nonviolent Communication (NVC)—as the starting point, not a predefined product direction. NCD frames communication support around helping people make clear observations, take responsibility for needs, and take intentional action, while warning that empathy-focused systems can introduce serious risks such as consent violations, coercion/gaslighting, systemic blindness, and “empathy fog.”
When I joined the research team, the prior academic work was already complete: interviews with NVC trainers and a diary/co-design study had produced the NCD framework and its 3×3 model (three design objectives × three levels of attunement). The new challenge was to explore this framework more deeply and turn it into a feasible, AI-powered product that could support healthier communication in real online environments, treating trust and user agency as non-negotiable constraints from day one.
Problem
Reactive Tone Blindness Fuels Escalation
How might we support users in writing calmer, needs-aware responses in the moment—without taking away their voice, creating surveillance vibes, or pushing a “correct” way to communicate?
Solution
01. User-Controlled On/Off Toggle
A prominent toggle button in the browser toolbar lets users activate the extension only when they choose, ensuring full control over when NVC guidance appears during emotionally charged forum discussions. This preserves autonomy and prevents unwanted intervention.
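The opt-in toggle described above can be sketched as a small state module. This is a minimal illustration, not the shipped code: in a real extension the state would persist via `chrome.storage` and drive the toolbar icon, and the names here (`AssistantToggle`, `onChange`) are assumptions for the sketch.

```javascript
// Minimal sketch of the user-controlled on/off toggle (illustrative only).
// In a real extension this state would persist via chrome.storage and the
// content script would subscribe to changes; here it is kept in memory.
class AssistantToggle {
  constructor() {
    this.enabled = false; // off by default: no guidance unless the user opts in
    this.listeners = [];
  }
  onChange(fn) {
    this.listeners.push(fn);
  }
  toggle() {
    this.enabled = !this.enabled;
    this.listeners.forEach((fn) => fn(this.enabled));
    return this.enabled;
  }
}

const toggle = new AssistantToggle();
toggle.onChange((on) =>
  console.log(on ? "NVC guidance active" : "NVC guidance off")
);
toggle.toggle(); // user opts in  -> logs "NVC guidance active"
toggle.toggle(); // user opts out -> logs "NVC guidance off"
```

Defaulting to off and requiring an explicit opt-in is what makes the intervention feel chosen rather than imposed, which the research flagged as a trust prerequisite.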
02. When Original Text Violates NVC Guidelines
03. When Original Text Aligns with NVC Guidelines
When text passes NVC guidelines, the UI stays familiar but adds a gentle note: "Your response aligns with NVC principles. Switch tones or reflect further if desired." Users retain full control with optional tone adjustments.
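The two UI branches (offer a suggestion when the draft violates the guidelines, affirm when it aligns) can be sketched as one routing function. The `classification` argument stands in for the LLM's NVC check, and `routeDraft` is a hypothetical name for this sketch, not an actual function from the extension.

```javascript
// Sketch of routing a draft comment to one of the two UI states.
// `classification` stands in for the LLM's NVC check: "violates" | "aligns".
function routeDraft(classification, suggestion) {
  if (classification === "violates") {
    // Offer an editable, dismissible alternative; never auto-replace the draft.
    return { action: "offer-suggestion", note: suggestion, dismissible: true };
  }
  // Draft already aligns: keep the UI familiar and add only the gentle note.
  return {
    action: "affirm",
    note: "Your response aligns with NVC principles. Switch tones or reflect further if desired.",
    dismissible: true,
  };
}

console.log(routeDraft("aligns").note);
```

Note that both branches return `dismissible: true`: whatever the classifier says, the user can wave the assistant away, which is the "always editable, dismissible, and transparent" pattern from the design system.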
Process
1) Design System First (Pre-Platform Decision)
Our team had initial research findings but no design tokens, so I built a scalable design system first to keep experiments consistent and reduce rework. A component library, visual tokens, and standard AI-interaction patterns (always editable, dismissible, and transparent) enabled fast iteration on multiple variants without breaking behavior or trust.
2) Choosing the Right Environment (Reddit vs. Private DMs)
Because the work started from a concept, not a platform, two contexts were explored in parallel: a Reddit extension for public threads and a Messenger-style assistant for private DMs. Research activities and interview prompts were structured to compare these environments and ask which felt more appropriate, natural, and reusable for AI/NVC support.
3) Prototyping Both Directions and Pivoting to Reddit
Two interaction models were prototyped—from wireframes to mid/high fidelity: an inline assistant in the Reddit composer and a chat popup in a DM interface. Usability testing showed a clear trust gap: participants found AI in private DMs “too personal” and “monitored,” while Reddit felt lower-stakes and better suited for intervention, leading to a decisive pivot to the Reddit extension.
4) Constraints Killed Features
Early designs included a dashboard that showed tone-analyzer results as a percentage aggression score; we mocked it up and threw it out during iteration. We eliminated these ideas for three main reasons: technical limitations, ethical considerations, and user feedback.
Outcomes & Impact
Design decisions that improved speed, quality, and usability
Columba delivered measurable de-escalation impact: 20% fewer hostile replies in adopted comments, 30% toggle activation rate across sessions, and 12% toxicity reduction in users' sub-threads—proving lightweight NVC prompts work in Reddit's reactive ecosystem without surveillance or moralizing.
Reflection
Research to Ethical Product Design
This case study reveals the power of system-first thinking in research-heavy UX: Starting from NCD's academic 3×3 framework, the deliberate choice to build a scalable design system before platform lock enabled rapid dual-path prototyping, real user pivots, and constraint-driven refinement without style debt.
The biggest lesson: trust is the ultimate UX constraint. Cutting tone dashboards in favor of lightweight prompts wasn't a technical limitation; it was ethical clarity. The 20% hostility reduction shows that users adopt de-escalation when they feel agency, not surveillance. Reddit's public context plus inline NVC scaffolding hit the sweet spot: high impact, zero moralizing.









