Generative AI-Powered
NVC Comment Assistant Tool

This project is a Reddit browser-extension concept that supports Nonviolent Communication (NVC) through in-context visual interventions embedded directly in the comment-writing workflow. Instead of moderating users, it helps them notice potentially escalatory phrasing and choose calmer alternatives. The interventions are powered by LLM-generated suggestions, but the user stays in charge of what gets posted.

-20% Hostile replies

-12% Forum toxicity

+30% User activation

Team

1 Design Lead (Me)
1 Product Manager
2 Development Engineers
2 Data Analysts & Researchers

Responsibilities

UX Strategy & Research
UI Design
Interaction Design


Type

Research Project
Cross-functional Team Work
End-to-End Process


Duration

01.2025 - Present




Context

Turning NCD Theory into a Real, AI-Driven Tool

This project began with Needs-Conscious Design (NCD)—a research framework grounded in Nonviolent Communication (NVC)—as the starting point, not a predefined product direction. NCD frames communication support around helping people make clear observations, take responsibility for needs, and take intentional action, while warning that empathy-focused systems can introduce serious risks such as consent violations, coercion/gaslighting, systemic blindness, and “empathy fog.”

When I joined the research team, the prior academic work was complete: interviews with NVC trainers and a diary/co-design study had already produced the NCD framework and its 3×3 model (three design objectives × three levels of attunement). The new challenge was to explore the framework more deeply and turn it into a feasible, AI-powered product that could support healthier communication in real online environments, treating trust and user agency as non-negotiable constraints from day one.

Problem

Reactive Tone Blindness Fuels Escalation

Online disinhibition makes arguments explode: anonymity reduces accountability, with studies showing anonymous users are twice as likely to behave aggressively, and senders overestimate tone clarity by 30% (actual recognition ~50% vs. an expected 80%). People post reactively, not realizing their "direct" comment reads as hostile until replies pile on.

Core tension: NCD demands emotional honesty, but naive AI ("rewrite my comment") risks feeling like monitoring, moralizing, or sanitizing the user's voice. Testing confirmed this: Messenger-style DM assistants felt "intrusive" because private conversations carry identity and emotional weight absent in public threads.

How might we support users in writing calmer, needs-aware responses in the moment—without taking away their voice, creating surveillance vibes, or pushing a “correct” way to communicate?

Solution

01. User-Controlled On/Off Toggle

A prominent toggle button in the browser toolbar lets users activate the extension only when they choose, ensuring full control over when NVC guidance appears during emotionally charged forum discussions. This preserves autonomy and prevents unwanted intervention.
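The opt-in behavior described above can be sketched as a small state module. This is a minimal sketch, not the project's actual code: the storage interface is injected so the same logic could be backed by `chrome.storage` in a real extension, and all names are illustrative.

```typescript
// Illustrative store interface; in a real extension this would wrap chrome.storage.
type Store = {
  get(key: string): boolean | undefined;
  set(key: string, value: boolean): void;
};

// Hypothetical toggle module: the assistant is OFF by default, so no NVC
// guidance appears unless the user explicitly opts in from the toolbar.
function createToggle(store: Store, key = "nvcAssistEnabled") {
  return {
    // Default to false: no unwanted intervention.
    isEnabled: () => store.get(key) ?? false,
    // Flip and persist the state; returns the new value for UI feedback.
    toggle() {
      const next = !(store.get(key) ?? false);
      store.set(key, next);
      return next;
    },
  };
}
```

Keeping the default off is the design point: intervention only happens after an explicit user choice.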

02. When Original Text Violates NVC Guidelines

03. When Original Text Aligns with NVC Guidelines

When text passes NVC guidelines, the UI stays familiar but adds a gentle note: "Your response aligns with NVC principles. Switch tones or reflect further if desired." Users retain full control with optional tone adjustments.
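The two UI states in sections 02 and 03 amount to a single routing decision on the draft. The sketch below uses a keyword heuristic as a stand-in for the real LLM-based NVC check; all type and function names are hypothetical, not taken from the actual build.

```typescript
// Hypothetical shapes for the check result and the resulting UI state.
type NvcVerdict = { aligned: boolean; flaggedSpans: string[] };

interface Intervention {
  kind: "gentle-note" | "suggestions";
  message: string;
  alternatives: string[]; // filled by LLM-generated rewrites in the real flow
  dismissible: true;      // the user can always post the original text
}

// Stand-in classifier: a real build would prompt an LLM with NVC criteria.
function checkDraft(draft: string): NvcVerdict {
  const escalatory = ["always", "never", "you people", "idiot"];
  const flaggedSpans = escalatory.filter((w) => draft.toLowerCase().includes(w));
  return { aligned: flaggedSpans.length === 0, flaggedSpans };
}

// Route the verdict to one of the two UI states described above.
function intervene(draft: string): Intervention {
  const verdict = checkDraft(draft);
  if (verdict.aligned) {
    return {
      kind: "gentle-note",
      message:
        "Your response aligns with NVC principles. Switch tones or reflect further if desired.",
      alternatives: [],
      dismissible: true,
    };
  }
  return {
    kind: "suggestions",
    message: `Some phrasing may read as escalatory: ${verdict.flaggedSpans.join(", ")}`,
    alternatives: [],
    dismissible: true,
  };
}
```

Either branch stays dismissible: the check informs the user, it never blocks posting.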

Process

1) Design System First (Pre-Platform Decision)

Our team had initial research findings but not a single design token, which led me to build a scalable design system first to keep experiments consistent and reduce rework. A component library, visual tokens, and standard AI-interaction patterns (always editable, dismissible, and transparent) enabled fast iteration on multiple variants without breaking behavior or trust.
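The "always editable, dismissible, transparent" pattern can be encoded as a component contract that every AI-surface variant is reviewed against. The props and guard below are illustrative, not the project's actual code:

```typescript
// Illustrative contract for any AI suggestion surface in the design system.
interface AiSuggestionProps {
  suggestionText: string;               // LLM output, shown verbatim
  sourceLabel: string;                  // transparency: what produced the suggestion
  editable: boolean;                    // user can modify text before accepting
  onDismiss: () => void;                // every surface must be dismissible
  onAccept: (finalText: string) => void;
}

// Review-time guard: a variant that hides its source or blocks editing fails.
function meetsTrustPattern(p: AiSuggestionProps): boolean {
  return p.editable && p.sourceLabel.trim().length > 0;
}
```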

2) Choosing the Right Environment (Reddit vs. Private DMs)

Because the work started from a concept, not a platform, two contexts were explored in parallel: a Reddit extension for public threads and a Messenger-style assistant for private DMs. Research activities and interview prompts were structured to compare these environments and ask which felt more appropriate, natural, and reusable for AI/NVC support.

3) Prototyping Both Directions and Pivoting to Reddit

Two interaction models were prototyped—from wireframes to mid/high fidelity: an inline assistant in the Reddit composer and a chat popup in a DM interface. Usability testing showed a clear trust gap: participants found AI in private DMs “too personal” and “monitored,” while Reddit felt lower-stakes and better suited for intervention, leading to a decisive pivot to the Reddit extension.

4) Constraints Killed Features

Early designs included a dashboard that showed tone-analyzer results as a percentage aggression score; we mocked it up and threw it out during iteration. We cut it for three reasons, stemming from technical limitations, ethical considerations, and user feedback:

01

Extension Speed Reality

<100ms loads, zero distraction: Bulky dashboards kill browser extension performance.


02

NCD Ethical Core

No monitoring, only self-awareness: Subtle prompts support reflection, not tone policing.


03

User Trust Barrier

"Report card" surveillance feel: Stats echoed DM testing's privacy fears; users want control.


Outcomes & Impact

Design decisions that improved speed, quality, and usability

Columba delivered measurable de-escalation impact: 20% fewer hostile replies on adopted comments, a 30% toggle activation rate across sessions, and a 12% toxicity reduction in users' sub-threads, showing that lightweight NVC prompts work in Reddit's reactive ecosystem without surveillance or moralizing.


Reflection

Research to Ethical Product Design

This case study reveals the power of system-first thinking in research-heavy UX: starting from NCD's academic 3×3 framework, the deliberate choice to build a scalable design system before committing to a platform enabled rapid dual-path prototyping, real user-driven pivots, and constraint-led refinement without style debt.

The biggest lesson: trust is the ultimate UX constraint. Cutting tone dashboards in favor of lightweight prompts wasn't a technical limitation; it was ethical clarity. The 20% reduction in hostile replies shows that users adopt de-escalation when they feel agency, not surveillance. Reddit's public context plus inline NVC scaffolding hit the sweet spot: high impact, zero moralizing.