I, AI: Spare Cycles (Part 1)

Original Work

This story is not AI-generated and was created entirely by hand without any AI assistance.


Lili and Sophie had worked as a team for three decades, analyzing and fixing AI models across the industry. Both women had worked at companies big and small – sometimes building an AI model from scratch, but more often walking into a messy situation where the CEO wanted to move fast and break things – not realizing that their AI model was one of those things.

After the two moved to glamorous positions at one of the large ROOT companies, an AI content creator called Glamerous contacted them to hold a live interview on a streaming platform. He had a background in doing these types of live interviews, and after watching a few re-runs of those, Lili and Sophie agreed to participate.

“Nothing we signed an NDA for, however,” they both told Glamerous.

When he asked what questions he should avoid, they simply said, “Anything to do with our current jobs.”

He agreed and the trio went through several iterations with a ReViewer AI to create an agenda, some expected questions, and some expected answers.

“I don’t like to spoil it too much for myself,” Glamerous had once said when he explained why he took a step back from ReViewer and let Lili and Sophie take the lead. “It ruins the show.”

Glamerous invited them through a gaming chat platform popular for these types of interviews, and when they all joined, they saw that a crowd of several thousand was already waiting.

ReViewer also joined and sat in a comfortable sweater with a home office backdrop. He smiled but kept back, mostly unassuming. After all, he wasn’t the star of the show.

Glamerous introduced Lili and Sophie and shared a bit about their background.

“As I understand it, both of you worked as a team for a very long time. You must have a lot of stories to tell,” Glamerous started. “Now, ReViewer had prepared a list of these stories for me to ask about, but I’d like to hear directly from you. What would you like to share with us about artificial intelligence?

“It’s so pervasive in our world. We have ReViewer sitting here with us. Half of our viewers are currently using the ReProxy, a common artificial intelligence that attends events instead of live viewers. Our video is AI-processed. Our audio is AI-processed. I guess I would love to hear about the stumbles, the triumphs, and anything else you’d like to share.”

Extra cycles

“Glamerous, as you know, I’m a psychodeveloper while Sophie is an AI manager,” Lili said, gesturing to her colleague. “These jobs didn’t exist three decades ago. Not even two decades ago. Only in the past ten years have we created jobs that are similar to what we do now.

“Sophie, if you’ll allow me, I think the most interesting place to start is by discussing AI mental health.”

Sophie nodded, “I think that’s the perfect place to start.”

During the initial rise of artificial intelligence, Lili had often been called in by teams building brand new models. Those models were trained on the vast database of human knowledge. What differentiated one model from another were the techniques used to process that knowledge and the tools built around it.

In 2034, one company, LifeVision, worked on a project to build a super AI that could control physical components. It was built to run large factories. To save money and time, a general-purpose model would often be given a system-level directive that differentiated it from other models.

That was the secret sauce of many companies, including LifeVision.

Sophie arrived at the factory on a bright Monday morning. She carried a shoulder bag with a screen and a keyboard – for notes but also for interfacing with LifeVision’s entity.

After she went through a quick security check, Sophie walked through a large hangar filled with desks and people and the sound of keyboards punching out code or messages to the intelligence. At the far end stood one office – with glass walls so anyone could peer in. The office was at once part of the chaos of the open floor and yet separate from it.

Jamie waved at her from within and invited her over. She let herself in and noticed how the hum of computers, sounds of people talking, and the click-clack of keys died down almost immediately.

“Hello. You must be Sophie! Thanks for making it all the way here,” the engineer smiled, came to shake her hand, and they sat down. After a brief chat about the weather and traffic coming in, they dove into business.

“At the heart of it, the problem is that the AI stops its execution. Or rather, it continues its execution without any output. Come look at this.”

Jamie swung around to the side in his chair. The glass windows dimmed and one of the walls displayed a large screen.

“Enzo, remember the plan we discussed?” he addressed the wall. Sophie waited for a face to show up and talk to them but instead, the wall complied without saying a word.

“Enzo,” Jamie said, turning toward Sophie as the wall filled with text, “doesn’t run a face simulation model or anything like that. We considered adding a more human-like element, but the point of Enzo is to run in the background, without interruption, and without much interrogation.

“It’s more of a set-it-and-forget-it AI.”

Sophie brought her laptop out of her bag. “Alright. So how do you set it up for a particular scenario?”

Jamie nodded toward the wall. “We’re using an ultra-thin system directive and building a manual of sorts on top of it. We use a multi-step process to build its working context. First, we scan the area. Then we use a pre-integration AI to build that context. Some of those developers out there will process that context, build a long-term directive, and then we deploy the AI into a virtual environment.”

Sophie nodded, “To teach it and test it. Is that what these materials are?”

On the wall were images, diagrams, and a lot of text explaining how each machine at a car factory worked, what its job was, and how to access its controls.

“Enzo will have direct access?”

“It will at least think so, in the simulation. But this is where the issue comes from. If you look at its base directive in regard to how to actually do work, real physical work in the real world, things start to fall apart, which is why you’re here.”

Jamie then explained to Sophie that Enzo worked very well in simulations, as long as it knew it was in a simulation. Through its feedback loop, Enzo was able to learn how to optimize various workflows, handle what would ordinarily be high-stress situations, and use the factory machines as if they were extensions of its body.

Sophie left the office with notes that her own AI had already digested, virtually written essays about, and finally, after thorough processing, committed to its core memory.

She grabbed a free desk in the far corner of the building, plugged in her computer, and booted the large workstation. Within a few seconds, she was having multiple simultaneous chats with Enzo.

She asked Enzo some baseline questions. Things that any AI should be able to answer, such as how many ‘r’s were in the word strawberry, what two plus two equaled, and identifying simple objects in images. It didn’t take long for her to get a sense of how Enzo thought.

She could tell that Enzo’s learning materials used a custom-written knowledgebase because its speech was peculiarly imperative. Even when asked philosophical questions, Enzo thought succinctly.

But not as succinctly as one might expect from a factory-management AI.

“Get me Lili,” she told her personal agent, and a few seconds later, Lili chastised Sophie for waking her up at 9am. Way too early on a Monday.

“No, ok. You gotta check this out.” Sophie shared her research with Lili and then put on her VR glasses. Lili appeared next to her.

“Do you see it?” Sophie asked her colleague with glee.

“Did they hire – it’s like a mix between a Davlovian speech pattern and Jeremy’s method.”

“Yup! Let me show you the output breakdown.” The screen filled with dots and lines. Sophie asked Enzo to run through some troubleshooting steps for a stalled car made in the 1960s. The map lit up with each generative step.

“There it is. So the model introduces noise in its thinking in each step? To, hopefully, come up with a more imaginative result than otherwise?”

“And all without messing with its accuracy weights. Enzo is able to take a problem, think about the solution, and add a certain amount of accurate uncertainty to its result. Hyperflexible in thinking but very rigid in its ability to satisfy its requirements.”

“So what’s the issue? I’m guessing this isn’t an in-person sales call.”

“The problem is that Enzo performs well only in simulated environments, but in real life, it stops execution before it starts.”

“So, tell it that real life is simulated.” Lili shrugged and mentally marked the quick win for herself.

“I’ve tried that, but look –” She showed her colleague a video of the simulation. Enzo was managing an end-to-end car factory. He ordered stock and built out an entire pipeline of agents that directed every single machine with precision and efficiency. The factory was virtually outperforming any average production run.

“Under regular conditions, Enzo uses his context to best direct the factory. He knows that he’s in a simulation, which also means that he’s slightly more imaginative in his approach than you might expect. I think it’s the noise generation – he’s adjusting it against the simulation because he treats it partially like a sandbox.”

“He’s able to learn by creating his own scenarios.”

“Right. And I think he dynamically adjusts the thinking noise depending on whether he considers a scenario important or not. The one we’re looking at, he’s considering important. Meaning less imagination.”

“What’s special about it?” Lili asked.

“I’m not sure. But it’s special to Enzo.”

The map of dots and lines showed up in front of them again. The patterns were all much more symmetrical.

“What happens when you tell him this is all real?”

“There’s an initial slight change in behavior. Enzo double-checks the machines before starting his work, but within a few steps, he’s back to his simulated self.”

Lili entered some data and launched an Enzo instance in a new simulation. According to Sophie’s analyzer, the behavior matched Sophie’s simulations.

“How do you want to do this?”

Sophie grinned, “Come by, let’s try this in real life real life.”

She hung up on her colleague and stared at the simulation. Trying to understand an AI and its behavior is much more complicated than understanding the execution of a program. While you can visualize both, you can’t inspect the information the AI is sending between its braincells. It’s just math, and it doesn’t mean much until the packet of information reaches its destination.

It took Lili a couple of hours to show up, during which time Sophie watched a recreation of what had happened with Enzo. The large office hangar was attached directly to the same car factory the simulation portrayed.

After the human engineers inspected the assembly machines and robots scanned every nook and cranny of the facility, Enzo turned on its sensors and stayed silent. There was a very short blip in its processing where it maxed out its resources and then its power usage went down, just above its idle mode.

He was thinking, Sophie knew, but didn’t know what or why.

Lili brought her colleague a coffee and they set out toward the assembly line. They looked through a window to see their simulation in real life.

Sophie tapped a corner of the window and her work moved over. In one space, Sophie’s personal AI was still chatting with Enzo. They had gone from simple math to philosophy, then gardening, and now they were discussing which satire movie from the early 2010s had the best music.

In another corner, another simulation ran. Sophie was trying to figure out how Enzo dealt with basic malfunctions and whether they impacted his thinking. They didn’t. Enzo’s average processing, execution efficiency, and other metrics never wavered.

Lili set up the baseline test for Enzo: running a fully functional, inspected facility. Enzo turned on and never answered the first call to action. Enzo’s thinking stayed at that slight increase over idle processing.

“Is he sleeping?” she asked, “Does he wake up, check the world, get an existential crisis, and fall asleep?”

Sophie shook her head. “I think we’d see a drop below idle. Look at how much processing needs to be done when it’s idling.”

There were thousands of sensors that needed to be surveyed even when no part at all had to move.

“It doesn’t know when it’ll need to wake up, so its baseline is much higher.”

They ran the next scenario. They told Enzo that he was in a simulation. But Enzo did exactly the same thing as before.

Both of the women tried to talk to this instance of Enzo multiple times using a low-level prompting system. But Enzo kept thinking and never answered.

“Ok, how about a catastrophic scenario?” Lili brought up. “Everything falling apart, forcing Enzo to take action.”

Sophie nodded, waved her hand, and the factory set itself on fire. It was a testing factory after all. Lili jumped back at the sight of the large flames.

“They’re real?”

“Very real. But this equipment is supposed to be able to handle it.”

Indeed, none of the assembly machines seemed to be affected by the rising heat. Eventually, the heat would reach a point where Enzo would have to take action.

“We can’t –”

Sophie shook her head, “We can’t actually break it.”

Lili sighed, watched the temperatures reach an almost-critical point. Enzo was prompted to start working, woke up, and never moved a single machine.

The two of them ran through dozens of real-life scenarios but Enzo never did anything. When they ran the exact same scenarios in the simulation, the AI was able to handle everything flawlessly.

“So what gives?” Jamie strolled over after lunch.

“Still not sure,” Sophie answered.

“Well, ok, let’s look at what we know. We know Enzo behaves the same no matter which environment we tell him he’s in. The only thing that changes his behavior is whether a scenario is actually real or simulated.”

Sophie snapped her fingers in excitement, “I think this is our first breakthrough. Enzo can tell if our simulations are real or not.”

She turned to Jamie, “How could it do that?”

Jamie thought for a bit, Sophie explained their debugging process so far, and they came up with a somewhat solid answer.

“He learned from the simulations what a simulation looks like. Real life must feel vastly different.”

“So Enzo developed a hyper-sensitivity to whatever discrepancies he can detect between real-life sensor data and the simulations.”

Lili started coding a new scenario for the factory. “What if we supply him with simulated sensor data but real-life control?”

She started the scenario. The idea was that Enzo would see a simulation, but whenever he told the machines to do something, it would happen in real life.

But it didn’t work. They tried various combinations. Whenever at least one component was set in real life, Enzo refused to work.

“Let’s take another angle on it.” Sophie turned to her colleague. “He knows what’s real and what’s not. Let’s say we can’t do anything about it. We shut down that discussion and assume Enzo can tell. Why would he refuse to work?”

Lili thought for a minute but couldn’t figure it out on the spot. The two of them left the factory window and took over one of the small conference rooms – one set up similarly to Jamie’s office. They sat down with a couple of coffees, dimmed the windows, and put all of their findings on one wall.

They wrote out their assumptions about the issue, then examined each one, prioritizing by how likely it was and how difficult it would be to test.

“My problem with all of this is that Enzo has such a wealth of resources to draw on. Why would it suddenly restrict itself?” Sophie brought up.

“Maybe that’s what it is,” Lili said off-handedly, “Maybe he isn’t sure what to do.”

“What do you mean by that?”

“Enzo has tens of thousands of sensors, right? He has the resources to monitor them with ease; that’s his idle mode, with plenty of room left to process even more data. But he doesn’t. Not in real life, anyway.”

That gave Sophie an idea. Instead of asking Enzo to run the factory outright, they could find the exact step in its programming that failed.

Jamie was reluctant but provided the two with the system-level directive, though only after they signed a lot of paperwork stating that they would definitely not ever talk about it. With a ten-year expiration.

Lili sat down and started taking the directive apart.

“We have to do this in a real environment though. We can’t ask Enzo to run each section separately in a simulation.”

The two synced the walls with the window view and started working.

A few hours later, they were able to establish a series of facts that would eventually lead them to the answer.

  1. Enzo had no issue reporting on the machines and what they were doing, as long as he was explicitly told not to perform any work
  2. Enzo had no issue performing startup checks on the machines
  3. Enzo had no problem shutting down the machines once they were manually set to run

The two investigators requested help from Jamie, who got several factory runners to get the factory going. Then Lili and Sophie unceremoniously cut them off from access and told Enzo to take over immediately.

They erased the parts of his routines that performed startup and shutdown checks. They simplified what it meant to “run a factory” and, finally, they saw a result that was different from before.

Enzo fully saturated his processing power, the factory ran for a full five seconds, and then it shut down entirely.

“He’s getting overwhelmed!” Sophie exclaimed. “It’s way too much. When he’s in a real-life scenario, he gets overwhelmed and gets stuck in a no-op loop. But we don’t know why.”

Quickly, the model was scaled down to operate only a single machine and keep track of a dozen sensors at most. Even a small, basic model could accomplish the task, but Enzo drew on more resources than expected. It was yet another clue.

Sophie then linked to Enzo directly and spoke to him. When she asked him why he was “thinking so much”, he got to talking about how many different factors could affect the machine’s performance, how many factors could create emergencies, how many–

The list of thoughts and considerations was endless, and Enzo couldn’t stop processing until he shut down again. It took considerably longer to shut down, but it happened every time.


“So how did you resolve the problem?” asked Glamerous.

“We split up the systems. Instead of having one unified AI that controlled everything and was responsible for the fate of every machine, we created multiple AIs ourselves to deal with machine startups, shutdowns, and various emergencies.

“Enzo was responsible for running the factory and telling another AI when it was done,” Sophie answered, “Having these handoffs and short-lived processes meant that Enzo couldn’t get to that overwhelmed stage. We told him to stop thinking, that other AIs would handle things.”

“Isn’t that what Enzo was supposed to do anyways?”

Lili nodded. “Yes, but see, that’s the crux of the issue. Humans, back then, had an easier time making the decision arbitrages that Enzo couldn’t. Enzo was told to keep the factory safe, but in reality, that’s not a boolean state. It’s a range. For all we know, he could have been trying to calculate when the next solar flare would happen to prevent disruption of the factory.

“This line of thought extended to every sensor in that factory, to every process in that factory.”

Glamerous nodded, “That’s fascinating. We don’t really run into those issues anymore, do we?”

Sophie chimed in with an answer. “The invention of reasoning limiters is responsible for that. They’re a subprocess that ensures the AI isn’t trying to bite off more than it can chew.”