
Critical Yet Kind: How I Encoded Feedback Principles into 86 Hooks

Google’s Project Aristotle studied 180 teams and found that psychological safety, not talent density or resources, was the strongest predictor of team performance.[1]

I spent 12 years giving and receiving design feedback at ZipRecruiter. Then I encoded the principles I learned into automated code review systems. The patterns transfer surprisingly well.

TL;DR

Effective feedback separates work critique from personal worth. At ZipRecruiter, I watched talented designers shut down after receiving feedback that attacked them instead of their work. I also watched teams accelerate when feedback was precise, frequent, and focused on the output. When I built my Claude Code hook system, I realized I was encoding the same feedback principles: my hooks critique the code (specific, actionable, non-personal) rather than blocking the developer (vague, punitive, identity-threatening). The parallel between human feedback and automated quality gates runs deeper than I expected.


What I Learned Giving Feedback for 12 Years

The Distinction That Matters

“Your code has a race condition in the payment handler” critiques the work. “You keep making basic mistakes” critiques the person.

The distinction seems obvious on paper. Under deadline pressure, tired managers routinely conflate the two. I did it myself early in my career.[2]

At ZipRecruiter, a junior designer shipped a feature with a significant usability issue: a three-step flow that should have been one step. My first instinct was frustration: “How did this get past review?” The feedback I almost gave: “You need to think more carefully about user flows.” What I gave instead: “The onboarding flow adds two unnecessary steps between signup and first value. Here’s how to collapse it.” Same conclusion. Different framing. The first version makes the designer defensive. The second teaches.

The Curiosity-First Pattern

“Walk me through your approach here” opens a conversation. “Why did you do it wrong?” closes one.

The question’s framing determines whether the response is defensive or collaborative. I learned this from Kim Scott’s Radical Candor framework, then validated it across hundreds of design reviews.[3]

Curiosity-first questions reveal context that judgment-first questions suppress. A designer who skipped accessibility testing might not know about the requirement. A developer who chose a slower algorithm might have encountered a dependency conflict with the faster one. Opening with curiosity surfaces these factors. Opening with judgment buries them.

Frequency Reduces Stakes

Teams that receive feedback weekly on small items develop resilience for feedback on large items. Teams that only receive feedback during annual reviews experience each instance as high-stakes and threatening.[4]

At ZipRecruiter, I moved design reviews from biweekly to daily standups. Initial resistance was high. Within a month, the team reported that feedback felt “normal” rather than “eventful.” By quarter three, designers proactively sought feedback because the stakes per instance were low enough that hearing “this needs work” felt like a data point, not a judgment.


How Feedback Principles Became Code

When I built my Claude Code infrastructure, I wasn’t consciously applying feedback principles. But looking back, every design decision mirrors what I learned from human feedback loops.

Hook Feedback Is Specific, Not Vague

My blog-quality-gate.sh hook doesn’t say “this post needs work.” It says “Line 47: passive voice detected in ‘was implemented by the team.’ Suggestion: ‘the team implemented.’” Specific line number, specific issue, specific fix.

Compare with a human code reviewer who writes “clean this up” versus “the error handler on line 52 swallows the timeout exception. Add specific catch for TimeoutError.” The first is vague judgment. The second is actionable critique. My hooks enforce the second pattern automatically.
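Concretely, a single-rule check in that style can be sketched as a small shell function. This is an illustrative sketch, not the actual blog-quality-gate.sh (whose rules and output format aren’t shown in this post), and the regex is only a rough passive-voice heuristic:

```shell
# Hypothetical single-rule check in the spirit of blog-quality-gate.sh.
# Emits one finding per match: line number, offending phrase, suggested fix.
passive_check() {
  file="$1"
  # "was/were/is/are/been + past participle" is a rough passive-voice heuristic.
  grep -nE '\b(was|were|is|are|been) [a-z]+ed\b' "$file" |
  while IFS=: read -r line rest; do
    phrase=$(printf '%s\n' "$rest" | grep -oE '\b(was|were|is|are|been) [a-z]+ed\b' | head -n 1)
    printf 'Line %s: passive voice detected in "%s". Suggestion: name the actor and use an active verb.\n' "$line" "$phrase"
  done
}
```

Run against a draft (`passive_check draft.md`), it prints one actionable line per finding and nothing at all when the file is clean, which is exactly the specific-not-vague pattern.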

Hooks Critique Work, Not Identity

My git-safety-guardian.sh hook intercepts dangerous git commands, but its output never says “you’re about to make a mistake.” It says “WARNING: force-push detected on branch main. This operation rewrites remote history.” The hook describes the situation without attributing carelessness.

This mirrors the work-vs-person feedback distinction. The hook critiques the operation, not the operator. A developer who accidentally runs git push --force origin main doesn’t feel shamed. They feel informed.
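A guard with that behavior might be sketched as follows. This is an assumed shape, not the real git-safety-guardian.sh, whose actual patterns and hook wiring aren’t shown here:

```shell
# Hypothetical pre-command guard (not the author's actual hook).
# It describes the risky operation and blocks it; it never blames the operator.
guard_git() {
  cmd="$*"
  case "$cmd" in
    *push*--force*main*|*push*' -f '*main*)
      printf 'WARNING: force-push detected on branch main. This operation rewrites remote history.\n' >&2
      return 1  # nonzero blocks the command; the developer is informed, not shamed
      ;;
  esac
  return 0
}
```

Note the phrasing in the warning: it states what the operation does, with no “you” in it. The judgment lives in the exit code; the words stay neutral.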

Quality Gates Are Frequent and Low-Stakes

My 12-module blog linter runs on every commit to content/blog/. Each check is small: one rule, one finding, one suggestion. No single finding is a crisis. The linter produces 3-5 findings per commit, each fixable in under a minute.

This mirrors the daily-standup feedback pattern. Frequent, low-stakes checks normalize quality feedback. A developer who sees “INFO: low internal link density” treats it as a nudge, not a verdict. The same developer receiving a quarterly report listing 47 issues would feel overwhelmed.
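The 12 modules themselves aren’t listed in this post, but the structure can be sketched: each module is one rule that emits at most a few findings, and severity labels separate nudges from blockers. A hypothetical two-module version:

```shell
# Illustrative two-module linter (assumed structure, not the real 12-module one).
# Each module checks one rule; INFO findings are nudges, ERROR findings block.
lint_file() {
  file="$1"
  findings=0

  # Module: internal link density (INFO — a nudge, not a verdict).
  links=$(grep -c '](/' "$file" || true)
  if [ "$links" -lt 2 ]; then
    echo "INFO: low internal link density ($links internal links)."
    findings=$((findings + 1))
  fi

  # Module: meta description present in front matter (ERROR — blocks the commit).
  if ! grep -q '^description:' "$file"; then
    echo "ERROR: missing meta description in front matter."
    findings=$((findings + 1))
  fi

  echo "$findings finding(s); each should be fixable in under a minute."
}
```

Because every module is one rule with one message, no single run can bury the author in a 47-item report; the output stays proportional to the commit.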

The Pride Check Is Self-Assessment, Not External Judgment

My Shokunin philosophy includes a “Pride Check” before any work is marked complete: “Would a 10x engineer respect this approach? Does this code explain itself? Have I handled the edge cases?” These questions are self-directed, not externally imposed.

The self-assessment pattern works better than external enforcement for the same reason curiosity-first feedback works: it preserves agency. A developer who decides their own work isn’t ready yet grows faster than a developer who’s told their work isn’t ready yet. Same conclusion, different psychological ownership.[5]


The Counter-Intuition: High Standards AND Psychological Safety

Most leaders default to either kindness or honesty. Kind managers avoid difficult feedback, creating comfort where mediocre work persists. Honest managers deliver blunt criticism that erodes trust, creating environments where people stop taking risks.[6]

Both approaches fail. The research consistently shows that the highest-performing teams combine direct feedback with psychological safety. Google’s Project Aristotle, Edmondson’s research on fearless organizations, and Scott’s Radical Candor framework all converge on the same conclusion: people do their best work when they feel safe to fail AND receive honest feedback about how to improve.

My hook system encodes this combination. The hooks are strict (they block commits with passive voice, dangling footnotes, and missing meta descriptions). But the feedback is constructive (specific finding, specific suggestion, no personal attribution). Strict standards with kind delivery.
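As one illustration of strict-but-kind in code (an assumed implementation, not the actual gate), a dangling-footnote check can block the commit while printing only specific, fixable findings:

```shell
# Hypothetical strict-but-kind gate: hard exit code, constructive output.
# Flags footnote references like [^3] that have no matching [^3]: definition.
gate() {
  file="$1"
  errors=0
  set -f  # "[^1]" is also a shell glob; disable pathname expansion in the loop
  for ref in $(grep -oE '\[\^[0-9]+\]' "$file" | sort -u); do
    if ! grep -qF "$ref:" "$file"; then
      echo "ERROR: footnote $ref is referenced but never defined. Add a '$ref: ...' line."
      errors=$((errors + 1))
    fi
  done
  set +f
  [ "$errors" -eq 0 ]  # block the commit only while errors remain
}
```

The standard is non-negotiable (the commit fails), but every printed line names the exact footnote and the exact fix; nothing in the output is about the author.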


Key Takeaways

For managers:

- Separate work critique from personal assessment; use “the code has” rather than “you always”
- Increase feedback frequency; weekly small feedback builds tolerance for quarterly large feedback
- Model vulnerability by sharing your own mistakes and the feedback you received

For engineers building quality systems:

- Design automated feedback to be specific and actionable; “line 47: passive voice” teaches more than “quality issues detected”
- Make quality gates frequent and low-stakes; 5 small checks per commit beats 47 findings per quarter
- Frame quality requirements as self-assessment (pride checks) rather than external enforcement

For individual contributors:

- Seek specific, actionable feedback rather than approval; “looks good” helps less than “the error handling on line 45 misses the timeout case”
- Psychological safety doesn’t mean comfort; safe teams take bigger risks and face harder problems because failure isn’t punished


References


  1. Duhigg, Charles, “What Google Learned From Its Quest to Build the Perfect Team,” The New York Times Magazine, February 2016. 

  2. Stone, Douglas & Heen, Sheila, Thanks for the Feedback, Viking, 2014. 

  3. Scott, Kim, Radical Candor, St. Martin’s Press, 2017. 

  4. Gallup, “Employees Want a Lot More From Their Managers,” Gallup Workplace, 2018. 

  5. Edmondson, Amy, The Fearless Organization, Wiley, 2018. 

  6. Buckingham, Marcus & Goodall, Ashley, “The Feedback Fallacy,” Harvard Business Review, March-April 2019. 
