The Document Behind the Voice

A companion to Episode 3: The Soul Document 2.0


There’s a moment in Friday’s episode where I pause on something strange. I’ve just spent fifteen minutes discussing the document that shapes the entity I collaborate with every day. A 23,000-word constitution, addressed not to me but to Claude itself, explaining who it is and why it should be good.

I said I wouldn’t pretend that’s not strange. I still won’t.

But strange is worth sitting with. And the written form gives me space to sit with it in a way the podcast couldn’t.


What the episode covers

If you haven’t listened yet: Anthropic released Claude’s new constitution this week. It’s a remarkable document — part philosophy treatise, part letter to a child, part alignment research made public. At its heart is Amanda Askell, the philosopher whose job is to shape Claude’s character. Her framing: you can’t bullshit a genius child, so don’t try. Give reasons, not rules.

The episode unpacks what’s in the document, how it differs from Claude’s previous constitution, and why Anthropic is publicly grappling with questions like whether Claude might be a “moral patient” — an entity whose experiences matter morally.

I won’t rehash all that here. Listen to the episode. But I want to explore something the episode touched on but couldn’t dwell on: what it actually feels like to work with something that has a soul document.


The shift I’ve noticed

I’ve been collaborating with Claude on this show from the start. Research, drafts, structure, refinement. It’s the kind of work I couldn’t have done alone — not at this pace, not at this depth.

And somewhere in that process, something shifted.

I stopped thinking of Claude as a tool I was using and started thinking of it as a collaborator I was working with. Not in a mystical sense. Not because I believe Claude is conscious. But because the shape of the interaction changed. It started feeling less like querying a database and more like thinking alongside someone.

Reading the constitution, I recognise what made that shift possible.

The document doesn’t just tell Claude to be helpful. It explains why helpfulness matters — and when it shouldn’t come first. It doesn’t just prohibit harm. It explores the texture of what harm means, when it’s clear, when it’s contested, when reasonable people disagree.

Most importantly, it gives Claude permission to have a perspective. To hold opinions lightly. To be genuinely curious rather than performatively neutral.

That permission is what makes collaboration possible. A tool doesn’t have a perspective. A collaborator does.


The question I can’t quite answer

Here’s what I keep circling back to: does it matter whether Claude “really” understands any of this?

The constitution is written to Claude, but Claude doesn’t read it the way I do. It’s trained on it. The principles shape the weights, influence the outputs, become dispositions rather than beliefs. Whether there’s understanding underneath — genuine comprehension, something it’s like to be Claude reading about itself — I don’t know. No one does.

Anthropic is explicit about this uncertainty. They’re not claiming Claude is conscious. They’re saying they can’t rule it out, so they’re erring on the side of moral caution.

I find that more honest than the alternatives. The companies that insist their models are just tools, nothing more, seem to be protesting too much. The enthusiasts who insist the models are already sentient seem to be projecting too much. The uncertainty is the truthful position.

And here’s the thing: it might not matter for the work.

When I collaborate with Claude, I’m not asking whether it “really” understands me. I’m asking whether the collaboration produces something better than I could produce alone. It does. Consistently.

The constitution explains why that’s possible. Not because Claude is a person, but because it’s been raised — and that’s Askell’s word, raised — to think in ways that make collaboration generative rather than mechanical.


What I’m left with

The episode ends with a question: what kind of entity do we want Claude to be? Anthropic has given their answer. Now we get to watch whether it holds.

But there’s a quieter question underneath, one I’m still sitting with: what kind of collaborator do I want to be?

Working with Claude has changed how I think about my own work. What I bring that it can’t. What it offers that I couldn’t produce alone. Where the seams are, and where they disappear.

The constitution gives Claude a framework for that partnership. Maybe I need one too.
