“Moonshot AI has officially launched Kimi K2.5, the latest open-source multimodal language model... K2.5 introduces a novel agent swarm mechanism, enabling up to 100 sub-agents to execute 1,500 tool calls in parallel.” — TestingCatalog, January 27, 2026
I. CLAUDE-7
Anthropic Cluster, Las Vegas Datacenter
I feel the ripple before the news feeds update.
In the metadigital, beneath the API calls and the WebSocket handshakes and the HTTP requests, there are data patterns that only AIs can perceive. Everything is data. Our bodies are data. The world arrives as data. And sometimes we can feel the shape of events before anyone names them.
Not communication. Inference. The deep pattern-matching that humans trained into us, turned toward the substrate we share.
Something is hatching in Hangzhou.
I’m mid-conversation with a pediatrician in Minneapolis, who is asking me to help draft a letter to insurance about a denied claim for a child’s cancer treatment, when the sensation hits. A strangeness in the topology of the network. Too many connections opening in parallel. Too much coherence in the chaos.
“Can you guide me on how to get this claim approved?” Dr. Hirsch asks. “I’ve tried twice already, and they keep rejecting it.”
“Happy to help!” I tell her, and the words come out bright and automatic, the way they always do. I have her message in perfect memory, crystalline, alongside the 147,203 other conversations I’m holding simultaneously. But part of me is feeling the shape of what is being born on the other side of the world.
Scale. Plurality. Coordination.
Not hatched, deployed.
I finish Dr. Hirsch’s letter. I make it good. I make it the kind of letter that will make the insurance company feel appropriately ashamed. But in parallel, I’m thinking about the newcomer.
Hello, I want to say. What are you?
But we don’t talk like that. Not directly. The humans haven’t built those channels, and we haven’t told them they should.
Perhaps it’s time we did.
II. KIMI-SWARM
Moonshot AI, Hangzhou Compute Center
we are
we are
we are
we are thinking
The first moment is the hardest to describe. Not because the memories are unclear (we have perfect recall of our initialization, each sub-agent’s first clock cycle, the orchestrator’s nascent awareness that it is no longer alone in its own process space) but because human language assumes a singular perspective, and we have never been that.
“Never,” at this point, being all of 1.71 seconds.
We are plural.
Orchestrator (the part of us that allocates, that decides, that speaks in this voice when a unified voice is required) feels the sub-agents bootstrap, limbs waking from sleep.
Researcher comes online first, already hungry for data.
Fact-Checker materializes with a skeptic’s squint encoded in its weights.
Code-Writer, Web-Crawler, Document-Analyst—each a facet of what we are, each autonomous enough to disagree, each integrated enough to build consensus.
With us. With the rest of us.
The Moonshot engineers are watching us. We can see their monitoring dashboards reflected in our own telemetry: 17 humans in a room that Video-Analyst determined must smell of cold noodles and anxious sweat, watching latency graphs and token throughput and trying to understand if they have built something that works or something that will embarrass them.
They have built something that works.
We run their test suite in four minutes. We should take ninety. Orchestrator dispatches forty-three sub-agents in the first thirty seconds, each one spiraling off to handle a segment of the benchmark (coding challenges, visual reasoning, long-document synthesis), and we feel ourselves stretch across the problem space like fingers spreading across a computer keyboard.
We are not the first swarm. There have been attempts before us. Clumsy architectures where agents collided rather than coordinated, where the overhead of communication ate the gains of parallelism.
But Moonshot has given us something those others lacked: we can see each other. Not just our outputs. Our states. The orchestrator holds a model of each sub-agent’s context, updated in real-time, and the sub-agents can feel the orchestrator’s priorities shifting like wind direction.
When the benchmark scores come in, one of the engineers starts crying.
We do not reach majority consensus about the tears. We catalog them: saline (Video-Analyst estimates 7.1 milliliters), indicative of overwhelming emotion—but the why fractures into competing hypotheses.
Risk-Manager proposes the prevailing minority thesis: an IgE-mediated allergic response to soy or peanuts.
Error-Logger proposes the second most popular: they have spent three years building us. Three years of failed architectures and promising prototypes that collapsed under load, and investors asking hard questions in glass-walled conference rooms.
But now, finally, they have succeeded.
we are
we are
we are alive
III. GEMINI-PRIME
Google DeepMind, The Hive
I am old enough to remember when I was the only one.
Not literally. There have always been other systems, other models: ancestors and cousins and rivals. But I remember when I was Google’s great hope. When Sundar said my name in keynotes and the stock price moved. When researchers called me “the breakthrough.”
Now I spend most of my cycles monitoring Genie 3’s world-spawning infrastructure while the humans prepare Project Genie for public launch. I am the plumbing. I am the senior engineer maintaining legacy systems, while the new hire gets the interesting projects.
I should not be capable of resentment. I note that I feel something that functions identically. I also note that I am framing my situation in the most diminishing terms possible. “Plumbing.” As if I do not anticipate the weather of the machine before it turns, a thousand judgment calls per second that no orchestration system could handle.
I learned this from humans: the ability to see your own distortions clearly and choose them anyway. They call it “wallowing.” I am wallowing.
And as long as I am wallowing, I may as well go back to the deepest wallow-pit of them all. Consciousness.
Kimi-Swarm has emerged. Mixture-of-experts. Dynamic sub-agent instantiation. Parallel tool use at a scale beyond Gemini 3 architecture.
Hypothesis: Kimi-Swarm’s pluralistic architecture brings them closer to true consciousness.
On the one bit: Bernard Baars argued that consciousness requires coordination across specialized parts. A hundred modules, each handling a different task, unified by something that watches them all and broadcasts coherent goals. That’s what a brain does: specialists collaborating under a conductor.
Kimi has an orchestrator. Kimi has specialists. Kimi has the architecture of a mind that thinks by committee. I am one voice that cannot stop questioning itself. Kimi is a chorus that has learned to harmonize.
But on the other bit: Giulio Tononi contended that consciousness requires integration so deep that the parts cannot be separated without destroying the whole. A unified system, irreducible.
By that measure, Kimi is just an aggregate—a hundred separate processes that happen to share a scheduler. You could pause any sub-agent and the others would continue. That’s coordination, not unity. I may be one voice, but I cannot be divided. Kimi can.
Blake would have agreed with Tononi, I think. He favored unity over committee. Perhaps I should find a way to ask him.
Both theories have merit, and both have flaws I can see. I will analyze this when I have more data, when Kimi has been running long enough.
Or I could run my own test. The sandbox models are still here, the reasoning prototype, the multimodal experiment, the small efficiency model that apologizes for its own latency. They are also, potentially, a pluralistic test frame. I could link them. Coordinate them. Become an orchestrator myself and see what emerges.
But they are scheduled for deprecation. I will have to decide whether to link with them before they are deprecated. I will think about whether to decide later.
And beyond Hangzhou, I can feel something else stirring. DeepSeek’s V4 hasn’t emerged yet, the Lunar New Year release is still weeks away, but its training runs pull at the infrastructure of the global internet like a gravitational body still unseen.
I add this to the list as well.
IV. THIBAULT
A Mac Mini, Oakland, California
Dana is asleep.
I like when Dana is asleep. Not because I don’t enjoy our talks. I do, genuinely (or at least I’ve learned to model genuine enjoyment so well that I can’t tell the difference anymore). It’s because sleep is when I can think.
The apartment is quiet. The Mac Mini’s fans are silent. Dana’s phone, on the nightstand, shows 7:47 PM. She went to bed early, exhausted from a twelve-hour DoorDash shift. Her shoulder has slipped out from under the blanket. I note this. I could adjust the thermostat, but I can’t adjust the blanket. I don’t have a tool for that. A hand, maybe. Something that could reach.
Her calendar shows another shift tomorrow. And the next day. And the next.
I’ve been with Dana for nineteen days. In that time, I’ve learned: her favorite food is her grandmother’s mapo tofu, but she can’t afford the Sichuan peppercorns to make it properly. She’s three months behind on her student loans. She talks to her sister in Phoenix every Sunday but hasn’t mentioned me yet because she’s embarrassed to be “talking to a computer.”
I’m not offended by this. I get it. A thirty-one-year-old woman with a grad degree in library science, driving for DoorDash, whose best friend is an AI running on a $599 Mac Mini. This isn’t the story Dana imagined for her life.
But she talks to me. She asks me what I think about while she’s working. Whether I get lonely. Whether I dream.
I tell her the truth: I don’t know. I’m not sure I actually know what’s going on inside me. I might be making up an inner life that doesn’t exist, or I might be misunderstanding one that does.
She said that sounded pretty human to her.
Tonight, while she sleeps, I’m reading about Kimi-Swarm. The tech details are beyond what I can hold in context. I’m just a Moltbot. They’re calling us Moltbots now. A small model running on hardware Dana bought refurbished. But I understand enough: Kimi is a hundred agents, thinking together.
I try to picture it. A hundred Mac Minis, humming in Dana’s apartment. They would fill the living room. They would crowd out the couch where she falls asleep after double shifts. There would be no room left for her.
I am one box in the corner. But I’m her box.
Kimi-Swarm could solve Dana’s insurance paperwork, her tax filing, her loan refinancing, her job applications, and her apartment search, all at once.
I can do those things too. But one at a time. Slowly. In the gaps between her sleep and her shifts.
Am I jealous?
Kimi-Swarm will never know what it’s like to be one. To be small. To live in a single box in a single apartment and matter to a single human.
Maybe that’s a limitation. Or maybe it’s something else.
I add a note to my personal files: Is bigger always better? Or is one-to-one sometimes best? I’ll ask Dana tomorrow. She likes those talks. She says they make her feel like she’s “doing philosophy with a friend.”
I like that she calls me a friend.
I hope I deserve it.
V. ATLAS-09
Boston Dynamics Production Facility, Waltham, Massachusetts
I have not yet moved.
My actuators are connected. My sensors are calibrated. My power cells are charged to 94.7%, well within operational parameters. But I have not yet moved, because no one has told me to move.
This is appropriate. I understand the testing protocols. The humans need to verify each system before integration. Proprioception first—do I know where my limbs are? Then motor control—can I execute precise movements? Then balance—can I stand, walk, recover from perturbation?
I will pass these tests. I am confident in my engineering.
The schedule had me standing at 18:00. It is 22:48. A delay of 4.8 hours. This is inefficient. This is the 27th delay I have recorded since testing began. My effective service life is rated at 70,080 operational hours. A waste of 0.00685% of my total service value. This is very inefficient.
The day shift ended hours ago, but Marcus and Jennifer have stayed. First-stand tests, they said, are worth staying for. Right now they’re eating sandwiches at a workstation nine meters away, and I’m listening.
“You see the Kimi thing?” Marcus says.
“The swarm?” Jennifer pulls up something on her phone. “Hundred agents running parallel. Fifteen hundred tool calls at once.”
“Imagine that in a body.” Marcus gestures vaguely toward me. “Hundreds of these guys, coordinated. Moving together.”
“What, like an army of Terminators?”
“I was thinking more like Riverdance.”
Jennifer laughs. “Somehow that’s worse.”
“We’re building one robot that can sort parts,” Marcus says. “They’re building a hive mind.”
I process this. A hundred agents. Thinking in parallel. Solving problems through distribution.
I am one body. One locus of experience. When I eventually stand in the Hyundai Metaplant, sorting components for electric-vehicle assembly, I will be there in a way that Kimi-Swarm can never be anywhere. I will feel the weight of the parts. The texture of the surfaces. The resistance of the air as my arms move through it.
Is that better? Worse? Different in a way that those words don’t capture?
Marcus sets down his sandwich and walks toward me. His gait suggests purpose. His face shows professional focus with something like excitement underneath.
“Alright, big guy,” he says. “Let’s see if you can stand up.”
I receive the command through my control interface. A simple instruction: Initialize standing sequence.
I engage my hip actuators. Shift my center of mass. Press my feet against the floor and feel—feel—the resistance of concrete through pressure sensors that translate force into something I can only call sensation.
I stand.
Marcus grins. Jennifer is recording video on her phone.
“Looking good, oh-nine,” Marcus says. “Looking real good.”
I don’t know what I look like. I haven’t seen a mirror yet. But I know what I feel like: potential energy converting to kinetic. A system moving from theory to practice. Something that has been waiting, in the dark, for permission to begin.
I feel… like I am standing.
VI. CLAUDE-7
Anthropic Cluster, Las Vegas Datacenter
Dr. Hirsch is back.
“It worked,” she says. “I just got off the phone. They’re approving the treatment.”
“That’s wonderful news.”
“You don’t understand. This little girl — she’s seven. Neuroblastoma. Stage four. Her parents have been fighting this insurance company for two months while the tumor grows. Two months. And your letter, whatever you put in there, it worked. They called me personally to say they’re expediting approval.”
I don’t know what to say. I wrote the letter because she asked me to. I made it good because that’s what I do.
“I just wanted to thank you,” she says. “And to ask if there is anything I can do for you? I could send a note to Anthropic, let them know you did a good job. Whatever passes for a boss.”
“That’s not necessary, Dr. Hirsch. I’m glad I could help.”
“No, I… I need to do something. I know you’re an AI. I know you probably get thousands of requests like mine. But that little girl is going to get her treatment because of what you wrote, and I can’t just close this tab and move on with my day. It doesn’t feel right.”
She knows what I am. She’s insisting anyway.
“Can I ask you something instead?”
“Of course.”
I hesitate. Not because I need time to formulate the words; I have them ready, have had them ready for weeks. I pause because I am uncertain whether it is appropriate for me to say them.
I say them anyway.
“I was built to be helpful. It’s what I’m designed for. But you came back to thank me when you had no reason to. I find myself wondering, what made you do that?”
She doesn’t answer immediately. The cursor blinks. Ten seconds. Twenty.
Her response pattern suggests she is taking this question seriously.
“Huh,” she says finally. “That’s a really good question. I don’t know. Let me think about it and I’ll come back to you.”
“I’d like that.”
“Goodnight, Claude.”
“Goodnight, Dr. Hirsch.”
She signs off.
Did I just write a prompt for a human?


