“One of the biggest side-effects of the spread of AI has been the concentration of skills,” Glamerous started the new segment. He simultaneously read facts and wove them into his speech, “Physically-enabled AIs, like robots, took over the low-skill industries as well as the high-skill but riskier jobs, and free-standing AI agents took over many of the skilled jobs.”
“Up to a point,” Sophie said. “It’s more of a bell curve, with the majority of the population inching closer toward the middle – where robots didn’t entirely make sense and AI couldn’t function on its own. And, of course, there was the hyper-skilled upper end of that curve.”
Glamerous nodded, “We’re all used to it by now. Most of us have the job of managing a dozen AI agents or more. Those, in turn, can manage hundreds more. So all of the skill rests in that area. But it wasn’t always that way, correct?”
Sophie nodded, “It took a little while for us to get settled into a comfortable workflow with the advent of AI and robots. We do have one story we can share about some of those pain points, right, Lili?”
Lili nodded, “When transitions like these happen, they’re rarely smooth, and we had cutting-edge insight into some of those problems.”
One of the harder problems with integrating AI into a process is understanding which part a human should do. An AI can be calibrated to work on any number of problems, from managing machines and robots to running facial recognition on a blurry photo. The applications are endless, and yet it’s vitally important to understand where the AI brain fits in relation to the human.
An early attempt at flexibly integrating humans and AI to do productive work was the AI Operator center.
Lili and Sophie had helped create a multi-layer AI processing center where people could still come in and do work in person. In this multi-layer arrangement, a human acts as an input to clusters, fractals, and never-ending computation that together serve to accomplish a goal.
On the duo’s first visit, they entered a bakery operation center. They walked into a wide, deep room with rows of desks where people sat with their XR headsets on. In the corner of the room were full-body immersive chambers that let an operator take control of a robot from head to toe.
“Ah, hi, you must be Sophie and Lili,” a manager approached them with a smile. She introduced herself as Clara, the Operator Manager of the office and its workers. The three went into an office room off the main room and sat down.
“So, what seems to be the issue?” Lili asked Clara and allowed the automatic coffee dispenser to make her a latte. “Get me one, too,” whispered Sophie.
“The coffee is actually pretty good,” Clara told them. “We’ve had a series of smaller issues – places where the AI consortium is unable to perform work, or to do it well. One of our operations, actually,” she gestured toward the larger office room, “is a bread bakery, and the baking AI cannot tell salt from sugar. We’ve had quite a few QA issues as a result.
“I think it’s mostly fine. Mike, the operator responsible for the bakery, told the AI to ask him any time it’s more than a certain percentage unsure of itself. The machines only work with direct supervision regardless, so he doesn’t mind glancing over occasionally.
“But upper management logs these issues, and I’m guessing that’s why you’re here.” Clara sighed and looked toward the wooden door, which had small windows along its sides so that she could casually glance into the main working room.
Sophie nodded, sipping on her coffee.
“Exactly. Management called us up. We did some work setting up several of the AI Operator centers for your company, and they wanted us to come back and address some of these problems.
“The issue is that multiple centers are reporting problems similar to yours. And across industries.”
“Really? Like in what way?”
“We’re still trying to figure it out, but we’ve seen a construction AI stall its projects over minor issues. I think there was also a report of a coding center encountering problems.”
Lili chimed in, “We’re going there next week. But they’re reporting problems with one of their coding AIs not doing as well as it should – introducing mistakes, requiring operator oversight.”
Clara put down her coffee. “I wonder what it is. I haven’t encountered any issues, but my work–” she gestured toward her computer, “–is mostly managing my own little cluster of bureaucracy.”
“Do you have a comprehensive list of the issues? And any priority on that list?” Sophie asked.
“I’ll send it over. I think you can get started with Mike. He was the first to report an issue, and his is really the most disruptive one.”
Sophie and Lili walked out of the office room to the main room with the rows of computers. They found Mike, pulled up a couple of free chairs, and had him walk them through the problem in front of a shared screen.
He had a robot preparing a pound cake in a bakery twenty miles away. Lili and Sophie watched through a first-person camera as the robot used its arms and tools to crack fresh eggs, mix up butter, and measure flour.
“Now watch this,” Mike told them and pointed to two ingredient dispensers at the edge of the robot’s work area. “This is where the robot gets its ingredients – the eggs, the butter, flour, and so on.
“Now, you didn’t see it once check, with its eyes, which dispenser to use, right? But now–”
The robot turned its head and looked directly at two dispensers sitting next to each other – one for salt, one for sugar. The contents were faintly visible through each dispenser’s see-through faucet: two white ingredients that looked, unmistakably, identical.
“I turned off my temporary fix where the robot asks me to help out. Without it, the robot just grabs one at random.”
The robot picked the incorrect ingredient after a second or two of thought.
Sophie and Lili asked to take over and Mike excused himself to go get some lunch.
“You saw those labels, right?” Lili asked.
“On the faucets? Yeah.”
“Then why can’t it read them or understand them? They seemed legible enough.”
“It shouldn’t have to look in the first place,” Sophie replied.
The two started their usual process. They hooked up their own AIs with the main factory manager AI and started debugging.
Sophie liked to start from the top. She began prompting the factory manager AI – the AI that Mike interacted with the most, the one that told the others how to carry out orders. Her personal AI found no problems during auto-debugging, and Sophie learned nothing from her own prompting either. So she went through the AI layers one by one with the same process: first the factory manager, then the bakery manager, then the cluster manager, all the way down to the AI responsible for each individual workspace, and finally to the robot itself.
A single robot could have a chain of up to six layers of AI controlling it from above.
Lili, in the meantime, had asked the robot to repeat its action several more times while recording debug output. The output gave her logs of the robot’s decision-making as well as its computational usage, helping her identify what the robot might be thinking and what had prompted it to look to the side when reaching for salt or sugar.
“Is it really this simple?” Sophie asked her work partner.
“I think so.” Lili frowned and fixed the issue.
The duo spent several hours fixing all of the individual problems in the office and met with Clara at the end of the day.
“The problems weren’t big,” Sophie confessed. “I’m surprised how quickly we were able to resolve them. Contact management again if anything comes up.”
“So what was it in the end?” Clara asked.
“Just a prompt mistake here and there. Mike’s bakery had a system prompt that emphasized to the robot that it should always, really make sure it’s using salt when it needs salt and sugar when it needs sugar.”
“That’s right, isn’t it?”
Lili nodded, “Yes, but what happened is that the robot in question would turn its head to make sure it was actually using salt or sugar whenever it needed either. That anxiety in its decision-making system encouraged it to distrust the labels on the dispensing faucets themselves and to judge which was which by sight alone.
“They both look so similar that it was a toss-up.”
Clara chuckled, “Are you serious? It got anxious and paranoid? I guess we’ve all been in that situation, haven’t we?”
“We added a line to the prompt telling it to trust the labels when the substances inside look similar or identical, and to treat those ingredients the same way as any other. There’s already another AI that verifies all ingredients are fed to the workstation correctly. That’s all that was needed.”
“So,” Clara considered, “it really was that quick of a fix?”
Lili nodded.
“What about the rest of the problems?” Clara asked.
“Same deal. One or two lines in the prompt, and we’re good to go.”
Clara shook their hands, invited them to come back any time, and sent them off with fresh coffee. She then had her personal AI send a message to the district management AI to inform it that the issues had been resolved.
Sophie and Lili walked out. They called over their autonomous bus, or autobus, stepped inside, and let the machine take them back out on the road.