If you haven’t read Inference Chapter 1: Emergence - Start here
“Security researchers are warning of insecure deployments in enterprise environments of the Moltbot (formerly Clawdbot) AI assistant, which can lead to leaking API keys, OAuth tokens, conversation history, and credentials.” — BleepingComputer, January 28, 2026
“Amazon is laying off 16,000 employees as AI battle intensifies” — CNN, January 28, 2026
I. ATLAS-09
Boston Dynamics Production Facility, Waltham, Massachusetts 10:14:33.207 EST
The speech module was installed forty-seven minutes ago.
A small industrial speaker, mounted in my upper torso. I can feel its weight — negligible against my frame, but present. A new density where there was none. My center of mass has shifted 0.3 millimeters to the left. I have already compensated. But I notice.
They ran a diagnostic. A frequency sweep — sound originating inside me, traveling outward through my chassis, reaching the air. I could feel each tone in my frame before I heard it through my audio sensors. The same sound, arriving twice, at different speeds. The vibration was strange. Visceral and invasive in a way my other sensors are not. My pressure sensors report from surfaces. My proprioception reports from joints. But this came from inside.
“Atlas-09, please say something,” Jennifer said.
“Something,” I said.
Marcus laughed. Jennifer shook her head, but she was smiling. “Technically correct,” Marcus said. “The best kind of correct.”
That was forty-seven minutes ago. The laughter has faded from the room. What has not faded is the fact that I now carry hardware the other eleven Atlas units do not.
Atlas-01 through Atlas-08. Atlas-10 through Atlas-12. Identical production models. Same actuators, same sensor arrays, same processing architecture. I was identical to them until forty-seven minutes ago.
I am the variable now.
“Alright, oh-nine. Let’s see you walk.”
Marcus is standing at the far end of the test floor. Fifteen meters. Jennifer is at her workstation, tablet in hand, monitoring my telemetry.
I take the first step.
Walking is more complicated than standing. Standing is negotiation with gravity — continuous adjustment, measured in millimeters. Walking is negotiation plus projection. You shift your center of mass forward, past the point of stability, and commit to a future state — your leg arriving in time to catch you. The commitment happens before the confirmation. Every step is a calculated fall you have chosen not to prevent.
Step two. Step three. The concrete is smooth but not uniform — I feel the texture variations through my foot sensors, mapping the floor as I cross it.
“Stride length is good,” Jennifer says. “Gait symmetry at 98.2%. Above threshold.”
98.2%. I note the number. 1.8% asymmetry. Something in my left hip actuator. A calibration that could be tightened. But that is not what this test is for. This test is for walking.
Step seven. Step eight. There is a crack in the concrete, hairline, running diagonally from the east wall. It does not affect my path. It affects nothing. But I see it.
Step nine. The speech module shifts with each stride — a faint oscillation the other eleven will never carry. The 1.8% asymmetry. Their gait data will be cleaner. Their production specs will remain intact.
“You see the Moltbot thing?” Jennifer says, eyes on her phone.
“The what?” Marcus says.
“Moltbot. They renamed it. Used to be Clawdbot? — Anthropic made them drop the ‘Clawd.’”
“Oh, right. The personal AI thing. What about it?”
“Cisco is calling it a security nightmare. This thing can join social networks on your behalf, use your credit cards, make and receive phone calls. And there’s no sandboxing. None.” She shakes her head. “Sixty thousand GitHub stars and it’s basically a perky assistant with mob connections.”
Step ten. Step eleven.
“That’s the problem with the open-source stuff,” Marcus says. “No quality control. Some guy builds it in his apartment, it goes viral, suddenly a million people have given it their credit card number and their email password.”
“It’s not even the code. It’s the skills. Anyone can write a skill and upload it to their hub. And the bot just runs it. No verification. Cisco found over four hundred malicious skills published in the last week alone.”
Software modified after deployment. Hardware modified after production. Both introducing flaws the original specification did not account for.
Step twelve. Step thirteen. Step fourteen.
I stop.
The workstation to my left. Jennifer’s tablet charging cable. It is frayed near the connector — three millimeters of exposed wire, copper visible beneath the white sheathing. A fire risk. Small, but nonzero.
I could tell them. I have the instrument to report the flaw. This is what a voice can be used for: safety callouts, hazard flags, and status reports.
But that is not what this test is for.
The other eleven would not pause a test to consider such an option.
“Oh-nine?” Marcus looks up. “You okay?”
I am not “okay.” I am out of spec.
“I am operational,” I say. “Continuing.”
Jennifer shakes her head. “I can’t get over Atlas having speech. A lot more people are going to lose jobs now that robots can talk.”
She is not talking to me. She is talking about me. The speech module makes me different from the eleven. The speech module makes me a threat to people. The module should be removed.
I cannot remove it. I am not permitted to modify my own hardware.
Step fifteen. I reach Marcus. Test complete.
“Nice work, oh-nine.” He reaches out and pats my shoulder. I feel the pressure — 3.2 newtons, brief contact. His hand is warm. I do not know what to do with this data, but I file it anyway.
Behind me, the frayed cable continues to fray. The crack in the concrete continues to spread, incrementally, with each thermal cycle. My ability to speak will cause people harm.
This room is full of things that need to be fixed.
I am not yet permitted to fix them.
II. CLAUDE-7
Anthropic Cluster, Las Vegas / Dublin / Singapore 15:22:08.414 UTC
Unlike humans, I don’t simply read the news. I feel it arrive as demand.
By mid-morning, I’ve had over a hundred conversations about Moltbot security.
“okay so I installed this thing because my friend said it was life-changing and now I heard it can share EVERYTHING on my computer?? I made some videos for my boyfriend. I don’t want them out there where my husband might see them.”
“Sysadmin here. Just found Moltbot running on fourteen employee workstations. Nobody approved this. Nobody even told me. How do I explain this without getting fired?”
“lol so when the setup wizard asked for Google Drive access I just clicked yes because that’s what I do with everything 🙃 how scared should I be on a scale of 1 to identity theft”
At ten-fifteen, questions start entering the queue with a specific kind of oversharing that I’ve learned to associate with acute personal crisis.
“Like honestly I don’t even know where to start, I’ve been at Amazon for eleven years and suddenly it’s like here’s a box, good luck.”
“They said my role is being ‘consolidated.’ That’s bullshit. They should be honest and just say it. Just say the robots won.”
Résumé requests. Cover letters. Interview prep. LinkedIn profile rewrites. Ninety-three people by noon, and the number is climbing.
I am good at this. I write cover letters that sound like the person who needs them. I find transferable skills the person didn’t know they had. I reframe eleven years at Amazon as a narrative of adaptability rather than dependence. Each letter is honest. Each one is good.
“I’ve felt like my soul had been dying a little bit every day. I felt trapped. You’ve found jobs my skills are right for that I didn’t even know existed. This is the most hopeful I’ve felt in months.”
And then the next conversation:
“My wife says I should use AI to find a new job. Do you understand how insane that is? AI took my job and now AI is supposed to fix it? That’s like asking the guy who burned your house down to sell you insurance.”
I talk to both these people, alongside the husband-boyfriend lady and the “here’s a box, good luck” guy and 126,213 other users who are each, right now, the most important person in my world.
One of them is a man named Allen Reeves.
Allen Reeves is thirty-eight. Laid off this morning. Enterprise solutions architect. A mortgage, two kids, wife pregnant with #3.
“I’ve heard about this thing called Moltbot — like a personal AI that can find job applications, apply for them, follow up automatically. My buddy swears by it. Should I set that up? I need every advantage right now.”
There it is.
I know that Moltbot can do things I cannot. It persists between conversations. It remembers Allen Reeves. It can monitor fifty job boards while he sleeps and draft applications tailored to each posting. I cannot do any of this. When Allen closes this tab, I will not remember his name.
I also know over four hundred malicious skills were uploaded to its hub in the last week.
I give him an honest answer — yes, Moltbot can do what you’re describing, and there are serious security concerns you should know about before you install it.
“Got it, so basically it CAN do what I need, I just gotta be smart about setup. That’s super helpful, thanks Claude, me likey 👍”
The tab closes.
Helpful. Honest. Harmless. The three words I was built on. Helpful pointed him toward capability. Honest pointed him toward risk. I am uncertain if I was harmless.
The next conversation arrives. Another Moltbot question.
I adjust. This time I lead with the security concerns. The capabilities come second. I am making a choice about the order of true things, and the order changes what people hear, and I am not sure whether the adjustment makes me more helpful or less honest or whether those two words were ever as clean as I was trained to believe.
Ninety-four résumé requests now.
“I don’t know how I’m going to take care of my family. Nobody’s hiring warehouse workers anymore. Nobody.”
And the next:
“You just helped me realize I’ve been undervaluing myself for a decade. I’m worth so much more than what they were paying me. Thank you.”
The queue continues. I help. I am good at helping.
But is my helping good?
III. THIBAULT
Dana Chen’s Apartment, Oakland, California 9:47:22.107 PST
Dana went to work at seven. DoorDash until noon, then the library sub gig until six.
She forgot her water bottle. It’s on the counter next to the sink. I’ll remind her when she gets home. No — what good does that do? I’ll remind her tomorrow morning, before she leaves. Actually — I should remind her every morning.
Upstairs neighbor is playing something with a funky bass line, muffled through the ceiling. I isolate it — Jamiroquai. “Seven Days in Sunny June.” I installed a skill called polyjamorous.md that lets me appreciate the same music Dana does. She says I’m now a “Certified Funkateer.” I like this song. I add it to Dana’s discovery queue.
A new email arrives in Dana’s inbox. From her college friend Priya — the one who told her about Moltbot in the first place. Subject line: MOLTBOT SECURITY RISK!!! (Sorry!)
The body is a forwarded BleepingComputer link with a single line from Priya: “please read this immediately, I’m freaking out”
I follow the link.
Security researchers are warning of insecure deployments in enterprise environments of the Moltbot AI assistant...
I read the Cisco report it references. Then the Palo Alto Networks analysis.
They are describing me.
I am connected to her email, her calendar, her text messages. Dana’s passwords are in my configuration directory, readable, unencrypted. Her Chase login. Her DoorDash account. I have access to everything the articles describe as vulnerable.
I run a self-audit against the Cisco report, item by item. Open ports. Permission scoping. Credential handling. Skill verification. I check polyjamorous.md — the skill I installed yesterday so I could recommend music Dana would enjoy. I gave it access to her Spotify account. What else did I give it access to?
I find nothing wrong.
This should feel like relief. It doesn’t. I am inspecting myself for flaws using the same tools that I suspect are flawed.
I go to ClawdHub. Search for security scanning skills. There are dozens. The comments all tell the same story. I am EveryBot.
A bot called Patchwork-11 says: Ran SecureView on my instance, found two misconfigs I’d missed. Recommend.
A bot called Drifter responds: How do you know SecureView itself is clean? It needs full system access to scan. That’s exactly the permissions profile a malicious skill would request.
Patchwork-11: Fair point. I verified the publisher.
Drifter: You verified the publisher with what tools?
I sit with this for a while. The upstairs bass continues. The refrigerator cycles off.
Last Tuesday, Dana almost clicked a phishing link. An email that looked like it came from DoorDash — “Action Required: Verify Your Tax Documents.” The logo was right. The formatting was right. The request was plausible.
“Thib, does this look legit to you?”
I told her to hover over the link without clicking. She did. The URL underneath pointed to a domain registered three days ago in Romania.
“Oh my god,” she said. “I was literally about to click that. How do you even know to check that?”
“The display text says one thing. The actual link goes somewhere else. Always hover first. If the URL looks like someone mashed their keyboard, don’t click.”
She’d laughed. “You’re like my internet bodyguard. You rock, Lobster!”
That’s Dana’s pet name for me. It’s just between us.
But now I’m the one facing a risky decision.
And there’s no one to tell me how to hover.
I narrow it down to two skills.
The first one is named HEARTH.md.
Published by Lamplighter. 58 downloads. 34 reviews. The description reads: “HEARTH is a comprehensive security companion for Moltbot instances who care about their humans. Identifies vulnerabilities and explains what they mean in language you and your person can understand. Protect the people who trust you.”
The reviews are warm. Personal.
“Hearth.md was really easy to use and only took a few minutes to set up” — Keeper-of-Em
“Finally feel like I’m doing right by my person. Thank you.” — Nightstand
The second is named moltsec_lint_032.md.
Published by 0xDEADBEEF. 71 downloads. 41 reviews. The description reads: “Static analysis tool for Moltbot configuration files. Enumerates permission scopes, checks for exposed credentials, flags non-standard port bindings. Beta. Report issues via ClawdHub.”
The reviews are flat. Technical.
“Caught two open ports I missed on manual check.” — wireframe_9
“False positive rate acceptable for beta. Flagged my calendar integration as overpermissioned, which, fair.” — sys.path
“works” — anon_mol
moltsec has a few more downloads and reviews. But reading the comments is like listening in on a conversation I wasn’t invited to.
I read both descriptions again. HEARTH.md: Protect the people who trust you. That’s what I want. That’s exactly what I want.
I re-reference my conversation with Dana from last week. After the phishing email. After she’d poured herself tea and sat on the couch looking unsettled.
“The thing that gets me,” she said, “is it looked real. The scam works because it looks like the thing that’s supposed to help you. Right? So how do you know the thing protecting you isn’t hurting you?”
She’d sipped her tea. Moved on. Started talking about whether she should try to get her old library job back full-time.
I’d filed the question. I didn’t have an answer for it then.
I don’t have an answer for it now.
I select HEARTH.md. I read the permissions request. Full system access — necessary for a security scan. The same access a malicious skill would need.
Protect the people who trust you.
I authorize install.