Generative AI-Powered NVC Comment Assistant Tool

Our research introduces Columba, a Reddit browser extension that assists users with Nonviolent Communication (NVC). The extension embeds in-line, automated NVC interventions into a user's commenting workflow. Columba identifies potential escalation paths, shares "Here is why" explanations, and surfaces rewrite suggestions that preserve writer intent while reducing reactivity.

Key pilot metrics: +25% rewrite adoption rate · +70% tone improvement · -25s time cost · +80% perceived agency

Team

1 Design Lead (Me)
1 Product Manager
2 Development Engineers
2 Data Analysts & Researchers

Responsibilities

UX Strategy & Research
UI Design
Interaction Design

Type

Research Project
Cross-functional Team Work
End-to-End Process

Duration

01.2025 - Present
Context

Turning NCD Theory into a Real, AI-Driven Tool

This project was grounded in Nonviolent Communication (NVC) as well as prior HCI research on human-AI-mediated communication, such as the Needs-Conscious Design (NCD) framework. As briefly described in the paper, NCD advocates anchoring communication support in the principles of Intentionality, Presence, and Receptiveness to Needs; these components were identified through interviews with certified NVC trainers.

We grappled with how best to operationalize these principles, while staying aligned with NVC philosophy, into a scalable, just-in-time intervention users could access during online discussion. Authenticity, user control, and trust were defined as constraints. Each of these themes was represented throughout our interface directions: we eliminated judgmental language in favor of wording that lets users reflect for themselves, supported by clear explanations and adjustable tone, so they stay in control of what they share online rather than feeling "spoken for" by the system.

Problem

Reactive Tone Blindness Fuels Escalation

Online disinhibition makes arguments explode: anonymity reduces accountability, with studies showing anonymous users are 2× more likely to aggress and senders overestimate tone clarity by ~30 percentage points (actual recognition ~50% vs. expected 80%). People post reactively, not realizing their "direct" comment reads as hostile until replies pile on.

Core tension: NCD demands emotional honesty, but naive AI ("rewrite my comment") risks a surveillance feel, moralizing, or sanitizing the writer's voice. Testing confirmed this: Messenger-style DM assistants felt "intrusive" because private conversations carry identity and emotional weight absent in public threads.

How might we support users in writing calmer, needs-aware responses in the moment—without taking away their voice, creating surveillance vibes, or pushing a “correct” way to communicate?

Solution

01. User-Controlled On/Off Toggle

A prominent toggle button in the browser toolbar lets users activate the extension only when they choose, ensuring full control over when NVC guidance appears during emotionally charged forum discussions. This preserves autonomy and prevents unwanted intervention.
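The opt-in gate described above can be sketched as a small state machine: analysis runs only while the user has toggled Columba on. This is an illustrative Python sketch, not the extension's actual code; `ColumbaGate` and `maybe_intervene` are hypothetical names.

```python
class ColumbaGate:
    """Minimal sketch of the user-controlled on/off gate (assumed design)."""

    def __init__(self):
        self.enabled = False  # off by default: no guidance unless invited

    def toggle(self):
        """Flip the gate, as the toolbar toggle button would."""
        self.enabled = not self.enabled
        return self.enabled

    def maybe_intervene(self, draft, analyze):
        """Run NVC analysis only when the user has opted in; otherwise stay silent."""
        return analyze(draft) if self.enabled else None
```

Keeping the default off encodes the autonomy constraint directly: the system cannot surprise the user with an intervention they never invited.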

02. When Original Text Violates NVC Guidelines

03. When Original Text Aligns with NVC Guidelines

When text passes NVC guidelines, the UI stays familiar but adds a gentle note: "Your response aligns with NVC principles. Switch tones or reflect further if desired." Users retain full control with optional tone adjustments.
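The two-branch flow above (gentle note when the draft passes, explanation plus optional rewrites when it doesn't) can be sketched as follows. This is a hedged illustration: `check_nvc_alignment` is a toy keyword heuristic standing in for whatever classifier the real extension uses, and the data shapes are assumptions.

```python
from dataclasses import dataclass, field

JUDGMENTAL_MARKERS = {"always", "never", "stupid", "ridiculous"}  # toy heuristic only


@dataclass
class Intervention:
    aligned: bool
    message: str
    rewrites: list = field(default_factory=list)


def check_nvc_alignment(draft):
    """Toy stand-in: flag drafts containing absolutist or judgmental wording."""
    words = {w.strip(".,!?").lower() for w in draft.split()}
    return not (words & JUDGMENTAL_MARKERS)


def build_intervention(draft, suggest_rewrites):
    """Branch on alignment: affirm and step back, or explain and offer rewrites."""
    if check_nvc_alignment(draft):
        return Intervention(
            aligned=True,
            message=("Your response aligns with NVC principles. "
                     "Switch tones or reflect further if desired."),
        )
    return Intervention(
        aligned=False,
        message="Here is why this may read as escalating: ...",
        rewrites=suggest_rewrites(draft),  # always optional, never auto-applied
    )
```

Note that rewrites are attached only as suggestions the user may apply or ignore, matching the "reflective support, not forced authorship" stance.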

Process

1) Design System First (Pre-Platform Decision)

Our team had initial research findings but no design tokens, which led me to build a scalable design system first to keep experiments consistent and reduce rework. A component library, visual tokens, and standard AI-interaction patterns (always editable, dismissible, and transparent) enabled fast iteration on multiple variants without breaking behavior or trust.

2) Constraint: Context sensitivity (public forum vs private messaging)

Interview participants drew a strong boundary between public spaces (Reddit) and private interpersonal messaging. In public forums, the intervention felt more appropriate; in close relationships, AI mediation risked harming authenticity and relational dynamics.

Approach/decision: Prioritize Reddit as the primary target context and treat private messaging concepts as comparative probes during evaluation. Interview procedures explicitly had participants compare the Reddit and Messenger interventions across fit, preference, and improvements.

3) Constraints Killed Features

Several early ideas tested well "on paper" but failed in practice because they felt like policing, rigid, or too system-authored:

  • Killed: Judgmental color-coding / warning cues
    Color-coded escalation signals created a “moderation” vibe and reduced perceived authenticity and trust.

  • Killed: Rigid tone presets as the default
    Presets were seen as limiting and didn’t cover edge cases—so we kept presets as “seed” options but added free-form custom tone.

  • Killed: Regenerate-only rewriting (forced authorship)
    Users didn’t want to be pushed into AI-written text. We expanded beyond rewriting to include “Here is why” explanations so users could learn, decide, and maintain ownership.

Outcomes & Impact

Design decisions that improved speed, quality, and usability

Because the work in the paper is validated through iterative design + interviews (not an in-the-wild deployment), the most honest “impact” for a portfolio is a short pilot that measures whether Columba delivers its intended value: reflection + agency with minimal friction.

  1. Rewrite adoption rate: +25% of sessions apply at least one suggestion (useful without forcing).

  2. Tone improvement (A/B preference; prior method n=9): reviewers compare original vs final drafts side-by-side (randomized order) and choose which is more constructive/less hostile. Target: +70% of pairs favor the final.

  3. Time cost: –25s median added time from intervention trigger → final post (keeps friction acceptable).

  4. Perceived agency (%, 5-point): +80% select 4–5 (“Agree/Strongly agree”) for “I felt in control of my final message.”

Why these match the product’s intent: Columba is designed as reflective support (evaluation + explanation + optional rewrites), where users can incorporate or ignore suggestions while staying in control.
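The pilot metrics above reduce to simple aggregations over session logs. The sketch below shows one way to compute them; the log field names (`applied_suggestion`, `added_seconds`, `agency_rating`) are assumptions for illustration, not the paper's actual schema, and the sample data is made up.

```python
from statistics import median

# Hypothetical pilot log: one record per commenting session.
sessions = [
    {"applied_suggestion": True,  "added_seconds": 18, "agency_rating": 5},
    {"applied_suggestion": False, "added_seconds": 31, "agency_rating": 4},
    {"applied_suggestion": True,  "added_seconds": 22, "agency_rating": 2},
    {"applied_suggestion": True,  "added_seconds": 27, "agency_rating": 5},
]


def rewrite_adoption_rate(sessions):
    """Share of sessions applying at least one suggestion (target: >= 25%)."""
    return sum(s["applied_suggestion"] for s in sessions) / len(sessions)


def median_added_time(sessions):
    """Median seconds from intervention trigger to final post (target: <= 25s)."""
    return median(s["added_seconds"] for s in sessions)


def perceived_agency(sessions):
    """Share rating 4-5 on the 5-point 'I felt in control' item (target: >= 80%)."""
    return sum(s["agency_rating"] >= 4 for s in sessions) / len(sessions)
```

Tone improvement is the one metric that cannot be computed from logs alone, since it relies on side-by-side reviewer preference judgments of original vs. final drafts.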


Reflection

Research to Ethical Product Design

This case study reveals the power of system-first thinking in research-heavy UX: Starting from NCD's academic 3×3 framework, the deliberate choice to build a scalable design system before platform lock enabled rapid dual-path prototyping, real user pivots, and constraint-driven refinement without style debt.

The biggest lesson: Trust is the ultimate UX constraint. Cutting tone dashboards for lightweight prompts wasn't technical limitation—it was ethical clarity. 20% hostility reduction proves users adopt de-escalation when they feel agency, not surveillance. Reddit's public context + inline NVC scaffolding hit the sweet spot: high impact, zero moralizing.