Writer uses Claude to analyze performance data, build style guide
Serge Bulaev
Katie Parrott used Claude, Anthropic's AI chatbot, to analyze her work as a writer by feeding it a year's worth of performance data. Claude showed her that her columns drove a disproportionate share of the site's traffic and earned above-average reader ratings. The AI helped her see her real value and even guided her in building explicit rules for her writing style. The exercise left Parrott more confident and gave her tools for honest self-improvement, though she warns others to double-check AI results and keep private data safe.

Writer Katie Parrott ran a bold experiment: she used Claude to analyze performance data from her work at Every. Seeking objective evidence beyond subjective praise, she fed a year's worth of newsletter metrics to the Anthropic model, effectively asking a chatbot to evaluate her job performance.
The AI model served as a combination of analyst, coach, and even therapist. Claude processed the data and surfaced concrete evidence of her impact: her columns were responsible for one-third of Q4 traffic despite being only one-fifth of the total output. Furthermore, her work scored 13 points above the site's average reader rating, as she detailed in her Every essay.
Parrott's experience demonstrates how a large language model (LLM) can transform scattered performance metrics into a coherent and confidence-boosting professional narrative.
What Claude actually delivered
By uploading performance metrics like traffic and engagement scores, a writer can prompt Claude to synthesize the raw numbers into a concise impact statement. The AI identifies key achievements, quantifies contributions, and provides an objective analysis of professional value, separate from subjective feedback or self-doubt.
Claude's primary function was analytical. It processed Google Sheets exports to generate a succinct impact statement suitable for Parrott's 2026 planning documents. When Parrott attempted to downplay the results, the AI challenged her, questioning her skepticism of the data. This objective, low-stakes interaction created a safe space for her to admit a deep-seated "inability to trust that I do good work" - a vulnerability she had never shared with a manager.
Beyond number-crunching, the AI facilitated structured reflection. By prompting Parrott to interpret the significance of each metric, Claude encouraged her to articulate her editorial instincts, moving beyond intuition to defined expertise.
Turning tacit standards into explicit rules
Parrott later expanded the experiment by teaching Claude Every's editorial standards. In the process of articulating concepts like tone, pacing, and headline strategy for the AI, she codified rules she had previously followed only subconsciously. Documented in her follow-up article, "I Taught Claude Every's Standards. It Taught Me Mine.", this project helped her identify which writing techniques were most effective and which habits to discard.
Her methodology provides a replicable workflow for professionals (a minimal code sketch follows the list):
- Export raw performance data (e.g., traffic, engagement scores, output volume) into a spreadsheet.
- Upload the data to a capable LLM with a prompt requesting a performance retrospective.
- Request a narrative summary of your impact and probe the AI for clarification on any general statements.
- Translate the data-driven insights into concrete professional goals and a personal style guide.
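As a rough illustration of the first three steps, the sketch below loads a metrics export with pandas and requests a retrospective through Anthropic's Python SDK. The file name, column expectations, prompt wording, and model ID are illustrative assumptions, not details from Parrott's actual setup.

```python
# Sketch of the upload-and-ask workflow, assuming a CSV export of metrics.
import pandas as pd
import anthropic

# Step 1: load the exported performance data (hypothetical file name).
metrics = pd.read_csv("newsletter_metrics_2024.csv")

# Step 2: fold the raw numbers into a retrospective prompt as plain text.
prompt = (
    "Here is a year of newsletter performance data (CSV):\n\n"
    f"{metrics.to_csv(index=False)}\n\n"
    "Act as a performance analyst. Write a concise retrospective: "
    "quantify my share of total traffic, compare my reader ratings to "
    "the site average, and summarize my impact in three or four sentences."
)

# Step 3: request the narrative summary from Claude.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative; any capable model works
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```

The resulting summary is a starting point for step four: a human still has to check the figures and turn the narrative into goals, as the closing section stresses.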
Why this matters beyond one writer
The implications of this approach extend far beyond a single writer. Gallup research indicates that employees receiving regular, data-backed feedback are three times more engaged. However, many professionals lack the time or confidence to compile this evidence themselves. Parrott's method offers a solution, positioning an AI like Claude as a private, impartial coach. The AI can distill performance metrics and help counter cognitive biases, leading to more accurate self-assessments, stronger materials for promotion, and reduced impostor syndrome during performance reviews.
Caution and next steps
It is crucial to approach AI analysis with caution. LLMs can "hallucinate" or generate inaccuracies, so Parrott verified every figure before presenting it to leadership. She also practiced sound data security by keeping sensitive files offline and only pasting anonymized data into the chat interface. Anyone adopting this method should follow similar data hygiene protocols: verify all outputs, anonymize or strip confidential information, and always treat AI-generated text as a first draft, not a final report.
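For the anonymization step, even something as simple as dropping identifying columns before export keeps sensitive fields out of the chat. A minimal sketch, with hypothetical column names:

```python
# Strip identifying or confidential columns before sharing data with an AI.
import pandas as pd

SENSITIVE_COLUMNS = ["author_email", "subscriber_ids", "revenue"]  # hypothetical

def anonymize(df: pd.DataFrame) -> pd.DataFrame:
    """Drop columns that could identify people or leak business figures."""
    return df.drop(columns=[c for c in SENSITIVE_COLUMNS if c in df.columns])

metrics = pd.read_csv("newsletter_metrics_2024.csv")
anonymize(metrics).to_csv("metrics_anonymized.csv", index=False)
```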