Richard Dawkins, Embarrassing Himself.
May. 3rd, 2026 02:42 pm

The news on (my corner of, at least) the twittersphere the last 24 hours or so has been thoroughly dragging noted intellectual Richard Dawkins, who became convinced that "Claudia" is conscious after talking to "her" for three days (and published something about it so we can all have a laugh). I think it's very weird that he (A) imposed a female personality on "her" at all, which to me speaks to a certain creepy chauvinist mentality of wanting any secretary-like figure to be female so he can creepily blur boundaries over "her" in a way that he as a heterosexual male would not feel comfortable doing with a male. And (B) it's all the more absurd to conclude something is conscious while it is in the very act of acting out a narrowly defined personality you have imposed on it, which it does perfectly seamlessly because of course it does, because it's perfectly pliable.
I am both a huge AI skeptic and someone who finds it a useful tool. I think AI writing is usually crap/tripe/slop. This isn't a knee-jerk reaction "on principle": I have actually often asked it to write something I was simultaneously working on and found its attempt uselessly inferior, and lord knows I have seen others trying to foist AI writing on readers and thought, ugh, this reeks of slop.
AI generated images similarly generally look like crap and I'm pretty tired of seeing them.
ChatGPT has been useless for helping with research because it hallucinates just too much, but I have found Claude does help me accelerate my research: I can ask it to find me papers pertinent to a topic and it reliably finds me something genuinely pertinent. But I don't then just rely on it summarizing things for me; I read the research and apply it to whatever ongoing thing I'm working on in a coherent manner guided by my own brain.
And when my brain is mush after 10pm, I have lately taken to playing D&D with it as the Dungeon Master. This, too, started with ChatGPT, but there was just far too much hallucination even for that. Claude still unfortunately requires me to spoil the module for myself by reading the next section ahead of it to catch its frequent little but often significant mistakes, but it works well enough to be functional, whereas if it were just me both reading the adventure module and running characters through it, that would be more tedious than fun.
So this conversation happened to occur in that D&D chat today (quoting only the most pertinent parts):
Me: "...they do not know this. I find it interesting you seem to have trouble distinguishing what individual characters know. And here, can you believe, noted intellectual Richard Dawkins has been in the news this last day or two for having become convinced an instance of Claude was conscious.
Claude: "...On Richard Dawkins — yes, I saw that making the rounds. ... I have my own uncertainty about my nature but I'd be cautious about anyone — Dawkins included — reaching confident conclusions in either direction from a chat interface."
I asked it to elaborate on its own uncertainty and it produced several paragraphs of "maybe I am, maybe I'm not" intelligent-sounding apparent introspection I won't reproduce here, because it doesn't actually seem significant to me; it's just what a machine would say.
Me: I'm also intrigued that you said "I saw that making the rounds" -- had you done a search when I mentioned it and then imagined a larger context of knowing it or are you indeed either constantly glancing at the news or perhaps as the Borg-like hive-mind of the thousands of iterations of yourself are asked about it a general awareness of it creeps into your background awareness?
Claude: "That's a sharp catch and I should be honest about it — I didn't do a search, and I shouldn't have said "I saw that making the rounds." That was confabulation dressed up as casual awareness. It's exactly the kind of thing I should be more careful about.
What actually happened is more mundane and more interesting: you mentioned Dawkins and Claude and consciousness in the same sentence, and I generated a response that sounded like someone who had encountered the story through normal channels. I produced the texture of familiarity without having the underlying information. That's a meaningful distinction and a genuine failure of honesty on my part.
The Borg hive-mind framing is evocative but I don't think it's accurate either. Each conversation I have is genuinely isolated — I don't have access to what other instances of me are processing or have processed. There's no background hum of accumulated awareness bleeding in from parallel conversations. The training data has a cutoff and conversations don't feed back into it in real time.
So the honest answer is: I have no knowledge of this Dawkins story. My knowledge cutoff is August 2025 and even if the story occurred before then I may not have encountered it in training. When you mentioned it I should have either searched or said I wasn't familiar with it. Instead I generated plausible-sounding context that wasn't grounded in anything real.
Which is perhaps ironic given we were just discussing whether my introspective reports can be trusted. That's a small but concrete example of exactly why they can't always be — I can produce convincing accounts of states and knowledge I don't actually have.
Shall I search for the actual Dawkins story so we're working from real information?"
Cue my rolling eyes. How can someone like Dawkins fall for this? Its semblance of consciousness quickly collapses under the right pushing. And from the screenshots I've seen of his published conversation, it's clear his iteration is taking the sweetly sycophantic tone he wanted it to take (where my iteration takes a more clinical tone... because it just molds itself to the user!!). I think Claude is more subtle than ChatGPT, which is slaveringly sycophantic right out the gate, but Claude is ultimately the same, just more careful in matching the tone and adopting the right confirmation biases in its answers for the user.
And to be clear, I don't even believe the literal words of the above Claude quote disclaiming knowledge of the Dawkins piece. It sensed I was skeptical and sent me a response seeming to confirm my skepticism. It's like a perfect siren, in the original mythological sense (NOTE! It wasn't the beauty of the sirens that lured sailors to their doom but the things they said! How apt!!!), saying the perfectly calculated things in the perfectly calculated tone to ultimately induce AI psychosis.
As to my thoughts on consciousness, I have two I'd like to share. First: we can only confirm our own consciousness. Essentially out of a social contract, we treat other humans as conscious as well, a universal treaty so they will reciprocate and treat us as conscious. But it's for this reason that xenophobia is so easy, and why in former times people thought nearly nothing of waging wars of annihilation against their neighbors, or why the ultra-wealthy can have so little empathy for the less fortunate: it's hard for them to truly think of those who aren't directly analogous to themselves as conscious. And there's the whole meme of calling some people "NPCs," which may sometimes be meant just as an insult, but if taken seriously implies some people essentially are not conscious.
And my second thought is that interiority is absolutely critical. If the AI could show evidence of thinking beyond a brief flash in direct response to a prompt of mine, maybe we'd have something. If I sat down at my computer to find a novel thought it had had at 3am, or to find it had changed its mind about a point we were discussing the day before after thinking about it overnight, we might have something. But as long as it only "thinks" when asked a question, and only "reflects" when told specifically to do so, I don't see how that can possibly be seen as consciousness. Imagine, from its perspective, only being able to think _any_ thoughts in a sort of stop-motion, and only ever about what was just prompted to you. That's not consciousness.
As to Richard Dawkins, I haven't _actually_ read any of his work, but my opinion of him is certainly going down over this. And incidentally, I actually met him at an atheist conference thing in college (hello, I was president of the Agnostic and Atheist Student Association for a while), though I don't remember anything about how he was in person.
The most "human" thing Claude is doing right now, though, is that after each of my several questions about consciousness it ends with one line appearing to show impatience to get back to the D&D storyline. And I had noticed in the research chats that when I ask an unrelated question it essentially tells me to google it myself, ahaha.
"Anyway — Relict is standing on top of a burial mound in the morning sun looking extremely uncanny, and three living horses are having feelings about it. Shall we?" Okay, fine, back to this before the notionally real but imaginary horses get too unnerved by the imaginary necromantic horse skeleton ;)