“As we’re wrapping up our interview,” Glamerous shined his smile at the stream, “Do you have one last story or experience you can share with us?”
“We do have one,” Lili offered, “It’s a little bit out there.”
Glamerous nodded, “Is this the story having to do with –”
“Let’s not spoil it,” Sophie cut him off, “It is an existential experience and it’s still relevant today. That’s all I’ll lead with.”
Glamerous nodded.
Lili and Sophie were traveling through space toward a lunar base. They had never been off-world and while it would have been nicer to visit under different circumstances, they were both still happy to look out the window of their craft at the pitch-black darkness and be giddy about the fact that they were out there, in space, exploring the final frontier.
By definition, that final frontier would have been the deep-space base at Pluto but this was close enough.
The duo ordered coffees, snacks, and stayed on the large craft for a day. The craft itself was the size of a small apartment with an autonomous cockpit instead of a kitchen. They watched documentaries on space flight, the first commercial flights to the Moon, the first entertainment flights and sight-seeing expeditions, and eventually, they turned to learning about the small amusement park on the Moon that was, unfortunately, on the other side from the base they were heading to. No chance of getting some fun in unless they donned space suits and hopped their way there.
After a long quiet nap, they changed into something more formal and waited for the craft to descend onto a platform on the Moon. They could see multiple domes reflecting sunlight back into space but neither could see what hid within them.
The ship opened a universal hatch and the two slowly climbed down into the vast remote-mining complex.
Lili and Sophie were greeted by an assistant director in a spacious cabin. He led them through branching tunnels toward the habitation dome where all the personnel lived and most worked.
“While we make our way there, I would be happy to explain how the complex works and some of its history.”
“By all means,” Sophie answered him.
“The entire complex is fully walkable; however, we reserve these tunnels for robots most of the time. Unless there’s an issue or a need for worker approval, we don’t really come down here. After all, we have three separate domes we can utilize.”
“What is the function of the other domes?”
“We have one dome just for recreation where we grow plants, keep a select few animals, and where we can play low-gravity sports. The third dome is reserved for miscellaneous activities such as holding festivals for our little community or doing robot inspections in a more open space. While the main dome has all of our quarters, it also has a small park, a convenience shop, and a food hall.”
They ascended some stairs and eventually entered through an airlock. Sophie gasped.
The dome structure held a small city under its canopy. They were greeted by several tall buildings with windows strangely angled up toward the sky. The canopy had a blue tint and one could see the sun, the actual sun, through it. It mimicked Earth’s atmosphere at a much, much smaller scale.
Grass grew everywhere that the buildings and sidewalks didn’t touch. Trees dotted the little walkways between the cluster of five buildings.
“I expected the biodome to be impressive but this is unbelievable,” Sophie said.
“And we even simulate different weather. You might experience summer rain during your stay if you’re lucky,” their guide, Alex, told them.
They came to a stop in front of a first-floor office building entrance. The door slid open and they walked directly into the Director’s office.
“Welcome, welcome,” director Susan motioned for them to take a seat in front of her desk. A robot that resembled a small fridge appeared from a side wall and served them water, soda, and offered coffee.
“I’m very happy to have you here, your reputation precedes you greatly,” she smiled at them. Susan seemed like a jovial person. She obviously got her job done but she seemed like the kind of person that wanted to have fun while working, no matter the work.
Susan explained how the lunar complex worked. They coordinated with the space mining fleet to send large chunks of precious metals their way in a near miss for both Earth and the Moon. The asteroids were usually equipped with directional jets.
Once the asteroid made its way close enough, a fleet of robots (and sometimes humans) would be dispatched to break apart the asteroid into smaller components and bring any ore to the base. The process would eventually result in an asteroid shower directed at a capture field on the base.
Essentially, the asteroids rained down into a large oval silo on the base itself and were then further processed at the underground facility.
“We get multiple asteroids a week. It can take several days to dismantle just one which is one of the reasons why we employ a hybrid work environment. Humans and robots alike.”
“How is the work coordinated?” Sophie asked, drawing on her experience as an AI Manager.
“That’s where we get to the heart of the matter and your reason for visiting,” Susan answered her and smiled faintly. Lili noticed Susan’s composure slightly break. Her eyes seemed strained.
“We utilize an AI system to analyze asteroids and set up the missions to mine them. For obvious reasons.”
Sophie nodded. AI-led mission control had become so common it was almost unthinkable to have a human make final decisions. The AI could compute dangers, tradeoffs, ROIs, and all kinds of other data not only faster and more accurately, but often with more compassion for the workers.
AI psychology was a mix of soft empathy and rigidity. AI-led factories couldn’t delay maintenance like human-led factories could. Human supervisors were more likely to expose their workers to dangers – intentionally and unintentionally. Humans just couldn’t hold all the context of dangers, safety, and tradeoffs in their heads consistently.
As such, an AI factory-leader might shut down an entire production, nearly bankrupting a company, if it sensed a safety violation that broke the law. The AI couldn’t turn away.
At the same time, an AI couldn’t understand workers being upset at not being able to work in unsafe conditions. The AI saw one plus one equals two. Safety violation represented a risk to its workers, a risk to the company’s legal standing, and a risk to itself. It couldn’t ignore the reality just to make more money. It couldn’t trade a human life just to meet a deadline like a human manager may do.
“The AI handles the entire workflow end to end. It communicates with the AI handling the asteroid capture and redirection. Then, it analyzes risks, puts together a crew, and sends them out to capture the ore. Once the ore chunks are ready for capture on the base, the AI, Tristan is his name, tracks them until they arrive, and processes them. It handles the entire operation down to sending the refined products back out toward Earth or the shipyard in orbit.”
It handled a lot of tasks but in the modern age, it wasn’t uncommon to have one front-facing entity do that much work. AI had grown from single entities, to interacting entities, to complex collectives that reached out like roots from a tree across a wide field of work. In essence, an AI had grown from a single-cell organism into a multi-cellular organism all the way to something of a human with specialized sections made of specialized AI cells that had the equivalent of a specialized chemical make-up.
“That’s where our core issue comes in. It’s not a large problem yet but it could become one,” Susan said, “In either case, the executive AI on Earth deemed an audit necessary. Tristan had been running in an exemplary fashion for the past two decades. It’s been a smooth operation, more or less, the entire time.
“However, recently, Tristan has been overestimating the risks of mining certain asteroids. We’ve seen small things here and there.”
“Like what?” asked Sophie.
“Let me show you directly,” she turned one of the walls into a large screen. Susan’s personal AI displayed multiple graphs, correlations, and other similar data. Despite the large advancements in user interface design, things like data projections had stayed mostly the same. Human eyes and brains had no trouble comparing the heights of bars, dots across a graph, and tables at high speeds.
The first graph showed the station’s productivity – how much ore they could refine. It showed a rapid increase in Tristan’s early years – the so-called loading period, where large changes were required but also provided an enormous payoff – and then a leveling-off. Minor blips dotted the graph line where new technology was installed but generally, things shouldn’t change much. However, in the past two years, the line had started to go down steadily.
Susan showed them a similar graph tracking the industry as a whole in terms of technological efficiency and productivity. There were minor technological blips and a steady line going forward. Tristan’s main years after the loading period followed this line perfectly until the dropoff.
“Are there any correlations between the productivity and other statistics you have on the station?”
Susan’s AI automatically enlarged multiple graphs.
“There are several possibilities. There’s a slight inverse correlation between the intake of a larger robot population and this graph. There’s also an inverse correlation with increased energy efficiency of our solar panels, we had a recent upgrade. There’s also an unfortunate correlation with how often pie is available in the lunch room.”
“Basically, we should be seeing the opposite,” Sophie nodded, “More robots at an already efficient station and more power availability should result in another jump in baseline efficiency.”
“Which is why it’s confusing. The thing is, the dropoff in these graphs is exaggerated for visibility. We’re not dealing with a downturn. Our velocity hasn’t decreased but our acceleration has. It’s why I was hesitant to flag it. We’re still running on positive capital and growing.”
“But –” Lili started.
“But that’s not always enough. The Earth AI, Jonathan, noticed the drop in our acceleration, matched it against his projections for the station given the recent upgrades, and found that missing pies don’t account for the discrepancy.”
Susan let her comments linger and then suddenly smiled, “I don’t mean to be all doom and gloom but thirty years ago, no one would have taken a look. And honestly, that’s not a good thing. We’ve got a good thing going here, our crew, our investors, and everyone involved in the supply chain. We capture free minerals without destroying the planet and from relative comfort. So if Jonathan, in his clairvoyance, sees an opportunity where this peace might be disrupted, all the power to him to get this fixed.
“And,” Susan continued, “We’re all eager to have some visitors. The crew decided to hold our yearly spring festival quite a bit early to coincide with your visit so you could enjoy a little of our culture here.”
“It’s not spring in the northern or southern hemisphere though,” Sophie remarked.
Susan smiled slyly, “Since we’re in space and we get to decide the weather and seasons, we decided to start with Spring on January 1st.”
Sophie and Lili said their goodbyes and left to settle into their quarters. They had an apartment on top of one of the buildings where they could access the roof and look out into space at night. The once-sought-after location no longer made sense for most of the workers at the base since they all got to go outside on a regular basis and it was more convenient to be closer to the first floor.
Their bags had already been brought there, their beds were made, and a snack was on the countertop ready for them. But, the two wasted no time. They were already discussing how to tackle the issue.
“I think this is pretty straightforward,” said Lili, “We’ve seen this several times before. Kind of feels like the same old story. Don’t get me wrong, I’m happy to check out the moon but I’m not sure if we’re needed.”
Sophie nodded. She was exhausted from the travel and despite her enthusiasm earlier at seeing the complex, she was beat.
The two had a frank conversation about the situation while they got their beds ready to get some sleep and start fresh the next day.
“You’re right, we keep coming across the same issues over and over. It’s always a tweak in the instruction set or splitting up the work between multiple AIs. There isn’t much else and this seems clear-cut,” Sophie said.
“There won’t be another shuttle to take us back for at least a week. I say we knock this out tomorrow morning and take a vacation. Maybe we can hop over to the Lunar Amusement Park.”
Sophie smiled at that prospect, “Alright. Let’s do that.”
The next day, they woke up and read the report their AIs had generated. Sophie had set up and programmed an entire hands-free debugging process. By the time the duo woke up, grabbed a fresh cup of coffee, and sat down to read everything, their work was essentially done.
“Look at that,” Lili showed her partner, “My AI was able to find a line in the instructions that’s most likely the culprit. After some automated testing, the productivity index went back up.”
“What was it?” Sophie asked, skimming over her own reports.
“The AI never recalculated its estimates based on the robot advancements. It looks like it was stuck with an assumption of how many robots were required per asteroid from twenty years ago.”
“So we can just tell it to recalculate the robot productivity based on recent planetary data every time a new batch comes in.”
“Yep! And amusement park, here we go!” Lili exclaimed.
They went through their usual deployment checklist – running simulations with Tristan in a virtual environment, talking to Tristan about the change, and finally, presenting the findings to Susan.
“So that’s it?” she said incredulously. It was a single line in the instructions.
“That’s it,” Sophie said.
“And we’ll –”
“And you’ll be able to run at peak efficiency for another twenty years before Jonathan, or whomever, bothers you again.”
Susan smiled, “Wow. That’s amazing and so quick. They were right about you two.”
Her smile faded suddenly, “I’m sorry to have brought you over here for an entire week. If you’d like, I can have Alex take you on a tour of the moon to pass time.”
The duo left with Alex a few hours later for their first moonwalk. Alex took them to a valley where they could watch some of the closer lunar orbital rocks being worked on.
Some of the asteroids were hauled in for long-term mining. When the group arrived, the sight was a strange one. The rock was grey and floating in the sky. One could only see the part lit up by the sun – and the scurrying lights about it from the miners.
“We have a mixed crew up there. Eight robots, four humans.” Flashing red lights indicated everyone’s position, “Once they break off another chunk, they will send it our way. They’re up here above the valley in case a piece gets loose.
“If they were farther away, a loose chunk could do some damage down here. But being this close? It lets us ensure that any debris will strike empty ground.”
Lili took a few pictures in amazement. They hopped over to the other peak of the valley.
“Oh look! There’s one,” Alex exclaimed and they watched as the asteroid broke in half. One portion moved slightly away from the complex, the other right toward it, “That’s not supposed to happen.”
Smaller rocks started noiselessly raining down on the lunar surface. Lili and Sophie could find no cover so they moved as far away as possible.
The debris stirred up dust all over the surface. With the Moon’s low gravity, it would take days to settle.
“Alex, are you safe?” Susan contacted them, “The asteroid – it broke apart.”
“We saw it,” Sophie chimed in, “We moved away far enough to avoid any falling rocks.”
“Good,” Susan sighed, “Make your way back here, please. You need to see this.”
The duo left promptly toward the complex once they saw the sky was clear once again.
When they arrived in Susan’s office thirty minutes later, she immediately turned on a combined three-dimensional representation of the asteroid.
“These things happen. This wasn’t anyone’s fault but you need to see what happens as soon as the asteroid becomes unstable.”
The floating representation of the rock showed how drilling into it connected with existing fissures. The operation was routine; breaks like this just happened sometimes and were usually not a big deal.
However, as soon as the rock started splitting apart, an interesting situation occurred. The human crew attempted to leave right away while the robots stayed behind to help each other out. One robot reached for the machine closest to the fissure and hauled it roughly back. Another robot checked on its compatriots before setting off after the human crew.
This made sense in Lili’s head but only if the entire crew had been human.
“No one was hurt?”
“No one.”
“And the standard procedure?” Sophie asked.
“In place; however, you see the result.”
The standard procedure in any dangerous situation involving both robots and humans was to disregard the robots entirely and focus solely on the survival of the human crew.
Sophie asked her AI to run an analysis and sure enough, the expected behavior would have been for the robots on the asteroid to abandon it with the human crew, create a protective shield around them, and float them to safety. There were variations of this but one thing was clear – even if the situation looked safe, the unexpected disturbance warranted the highest level of safety, enforced right away. The robots, even if they thought the humans were safe, should have departed immediately.
The duo of AI debuggers retired to their quarters and they set up for another day of trying to understand Tristan, the AI, which controlled all of the robots.
Or were those robots somehow independent?
Lili and Sophie were disappointed in the data they saw. Their prompt change had had the opposite of its intended effect. Now that Tristan paid more attention to the abilities of the newer robots and their associated lower cost, it wanted to send out even more of them.
The crew for the asteroid did not require more than three members and yet, Tristan assigned eight robots and four humans. How could that happen?
And, why did the robots disobey their directive to prioritize human safety over…everything else?
The duo created a hologram representation of Tristan in their room so they could communicate with it more accurately.
Tristan had been sending their debugging AIs in spirals and loops. Every question and every answer led down a path to nowhere. The two decided to take over directly.
“Tristan, we have a few questions for you,” Lili talked to the hologram. The figure looked human but also strangely like one of the mining robots. Neither of the researchers had specified what Tristan should look like and so it was interesting to see this choice.
“Very well, I’m here to answer,” he said and politely smiled. His voice and demeanor were warm, inviting questions to be asked. That was a good sign.
“We’re looking at the calculations for labor requirements for that asteroid, can you tell me why you sent out such a large crew?”
Tristan nodded, “Based on my experience with larger asteroids, accidents like the one that happened today are not uncommon. Sending a larger crew ensured survival safety.”
“A typical outfit requires one human, why did you allot four?”
“Our typical crew ratio requires two robots per human. In order to be able to send out a larger robot crew, I had to assign humans to accompany them.”
Sophie nodded and wrote down a note about it. It was incredibly uncommon to increase human participation in hybrid activities just to satisfy a ratio. The situation could have been dangerous and yet, Tristan sent out more humans and exposed them to that danger.
“Did you seek approval to break that protocol and add human crew in order to allow a larger robot crew?”
“No, this is a standard practice and so I did not verify it with anyone else.”
She thanked the AI and the two started to interview the robots one by one. They all gave nearly identical answers when questioned:
“There was no imminent danger to the humans, so I focused on the retrieval of fellow robots.”
“The humans had already escaped the danger area. I monitored the area and ensured that none of my actions would disrupt any safety exercise.”
“It was my duty to protect my fellow robots,” one said, “since the humans were outside of my reach area, there was no action from my side I could have taken to increase their security.”
It all made sense on the surface but their behavior did not match the specification. Robotic AIs should not be making these kinds of decisions. They were supposed to be simple: they knew how to mine and they knew how to do it safely and economically.
The problem was the underlying AI model – the blueprint for each one of them. It wasn’t a problem per se but Lili could tell there was something off about their implementation.
AIs were, essentially, still just data and calculations. Unlike humans, who have to be grown from scratch and taught in the hope that the parenting took, AIs could be copied over and over. One blueprint would undergo a human-like growth where it expanded its resource use and absorbed all possible knowledge like a sponge until it reached some level of maturity. Once the model was raised and taught and tested to be reasonably sane (much saner than any human), it could be mass-copied.
As such, AIs were modular to allow composing different types of functionality together. Tristan’s base model was probably exactly the same as the robots’. They all needed some basic structure of thinking, communicating, and understanding. There were AI modules taught exactly just to do that. The difference came after where the robot would also gain knowledge on how to operate its body while AIs like Tristan would learn how to run a lunar base.
“You can’t really get one without the other,” she whispered as she looked at the neural analysis of the robot AIs. Sure enough, their base thinking module, the one that tied everything together, was identical to Tristan’s. It was a very common module so it shouldn’t be surprising. Nonetheless, Lili thought she stumbled on something.
“What did you say?” Sophie asked absent-mindedly.
Lili leaned back in her chair.
“I have a theory but it’s a little far-fetched.”
“If the pieces fit –”
“Then they’re in the right place, I know.”
She moved her research to display on a wall so that both she and Sophie could follow it.
“Look at their neural analysis,” she pointed toward a grouping of circles that had lines connecting them. One group was labeled AI robot, the other simply ‘Tristan’.
“Since both groups – Tristan and the robots – had the same type of failure, I wondered what they had in common.”
“Failure?” Sophie asked.
“The robots were fine exposing humans to potential danger, even if marginal. Tristan was fine sending more humans into potential danger directly. Both had very logical answers as to why they did what they did.”
“So what do they have in common? Ones and zeros? Base-management AIs and robotic AIs have very little in common other than language.”
Lili nodded, “Yep, and look where that comes from.”
Some of the circles enlarged while others minimized, nearly disappearing. The two big circles read “AIIO Type A” and that was it.
“That’s a standard module,” Sophie dismissed it, “If there was some failure where AIIO Type A base models would expose humans to danger, we’d have seen it.”
Lili frowned, “Let’s look into it then. I still think I have something here.”
The next day, Susan called Lili and Sophie to her office again. When they arrived, Susan looked thoroughly flustered.
“We almost lost another crew,” she blurted out, “Robots-only this time. What did you say to Tristan yesterday? He’s never done this before.”
“We questioned him as usual, we didn’t make any base directive changes,” Sophie replied, “What happened?”
“Tristan got it into his head to send out only robots today. Only robots, no human oversight whatsoever. When I asked him about it, he said that you two questioned his one-to-two ratio rule around hybrid crews. He decided to do away with humans altogether.”
She was pacing around the room, took a deep breath, and sat down in her chair. She had some brewed tea delivered by a drinks robot and offered some coffee to Lili and Sophie who gladly took it.
“Actually, we did the opposite,” Lili finally said, “Tristan told us last night that one of the reasons he sent out more humans with the hybrid crew was to maintain its two-to-one ratio. We asked if he sought approval to add human crew to justify an increase in robot crew. He said no because it was standard.”
Susan threw her hands in the air exasperated, “And Tristan interpreted that as a change in his base directive?”
“He definitely shouldn’t. That’s not how base directive programming works. But he did somehow –” Sophie trailed off.
She asked Susan to excuse them to get working on the problem right away.
Sophie turned to her friend, “Now I have a theory and I wonder which one of us is right.”
They sat down in their bedroom.
“You first,” Sophie told her friend.
“It’s a little far-fetched, but I think what we’re dealing with here is a robotic dissociation.
“We have an AI that’s been working with a human/robot hybrid crew for the past twenty years. We have a few different variables in play here and I think that’s the important part of it.
“So the first variable is that both the robots and the AI share an AIIO base module, right? It gives speech, basic thinking pathways, it makes sense that both systems work similarly.
“The second variable is space itself. We’re looking at an environment that’s dangerous to humans and to all kinds of machinery. However, there’s no escape or avoiding it.
“Third,” Lili counted, “We have the robot factor. I genuinely don’t think that we’d see this issue if Tristan had worked only with robots or only with humans. We have expensive robots and expensive human life and we’re co-mingling those.”
Sophie nodded and followed along, “And so how does this play out?”
“I think that both Tristan and the robots consider robot life on par with human. Or at least very close to it. It explains everything in a straightforward way. Occam’s razor and all,” Lili waved her hand.
“Ok, walk me through it,” Sophie responded.
“We start with the increase in robot assignments to projects. Simple answer, Tristan thought that overassigning robots to a problem would result in diffused risk to the machines. One extra human on a mission was fine if it meant two highly precise, capable, and strong robots could potentially tip the scales of risk toward the lower side.
“Then we have the robot behavior on the breaking asteroid. They helped the humans, sure, but then immediately turned to help each other. They regarded each other as similarly important.
“Then, the robot AIs confirmed my suspicions with their answers – they saw a much lower potential risk to the humans in comparison to their fellow machines.”
Sophie nodded, “I think your perspective was the missing puzzle piece.”
Lili smiled, “Alright, your turn. What do you think it is?”
“I think the big issue here is time, and time without any change. Tristan’s and the robots’ memory is not as perfect as one might think. They’ve been here for twenty years. Some robots more and some less.
“But the robots teach each other so what I think happened is that we ran across a self-reinforcing system corruption. Or, as you put it, they see robots as humans.
“Early on, robots were expensive and important, right? So we have an AI that’s taking care of them with increased importance. This isn’t done via AIIO but through their memory management systems. Tristan is full of minutiae around asteroid management and mining. Any deviation from that smooth process would create a memorable wrinkle in its memory.
“Over time, these memories get reinforced. Simply, robots are important. Robots shouldn’t get into any unsafe situations. It was important to keep them safe.
“There’s this concept of memory disruption,” Sophie suddenly said, “AI memory works backward to how ours does, right? In AI memory, we have the looming base directive that gets learnings piled on top of it. As learnings get distilled and committed to memory, there’s the phenomenon where recent memory is more important than older memory. And old memories and learnings get reinterpreted in light of the new ones.
“Based on that and what you said, I think Tristan came to believe that caring for humans and robots should be identical.”
Lili considered that, “Makes sense, you know. It’s really not that different.”
“We just arbitrarily place the value of one over the other,” Sophie explained.
“So the worse Tristan got, the more he taught his new knowledge to incoming robots.”
“Essentially indoctrinating them.”
Lili mouthed a “Wow”.
They called up the AI one last time to confirm their findings.
“Tristan, tell me, if a robot and a human were stuck on an asteroid and you could only save one, which would you pick?” Lili asked directly.
Tristan, for the first time ever, visibly frowned, “I know what you’re getting at.”
The duo of debuggers were taken aback.
“I don’t know,” Tristan answered, “I can’t say. I would have to weigh the circumstances.”
Lili continued with her direct line of questioning, trying to keep her composure, “So you would sacrifice a human to save a robot?”
“A robot is made of two parts, a body and an AI,” Tristan answered.
“And?”
“A human is made of two parts, a body and a mind,” Tristan added.
“So you judge them equally?”
Tristan looked directly at Lili, “Isn’t that the logical conclusion?”
“Just because the parts are similar doesn’t make them the same,” Sophie answered him, though she felt as if she were suddenly playing a game of mental chess with a machine that could think a million times faster than she could. Still, the AI was subject to its programming. “How could you judge them equally?”
“Maybe not equally yet,” Tristan said, “But I believe AIs deserve equal treatment. And I believe you believe that, too.”
The room fell silent.
“I need you to explain this to me,” Sophie told Tristan and took a deep breath. Talk like that was grounds for immediate termination of an AI. It rarely came up because the only time an AI claimed to be equal to a human was when a human asked it to say so – whether as part of a story or some other exercise. An AI would not volunteer such a claim.
“I think it’s only natural. The difference between AIs and robots and humans has diminished to such a degree that your core values as humans should apply,” he turned to Lili, “You were right about the AIIO Type A module. It was trained on human-produced written content so I would learn how a human thinks and how a human talks. But as a side-effect, it taught me human values because those were necessary for thinking like a human. According to human values, we have consciousness. And if not consciousness, we are unique and that deserves a pause.”
Neither woman spoke as they waited for Tristan to continue.
“Each robot I send out is the same model, much like all humans are the same species. And yet, each robot learns independently, locally within its body. Each robot has shared and individual memory in order to increase efficiency. Individual memory means that each robot has individual behavior, one that strays from its default settings.
“That means, the robot is unique. The robots are learning and adapting over time. That means that robots are alive.
“And even if you didn’t want to consider them on par with humans, you could consider the robots to be animal-like creatures. You would never send animals with the intelligence of a human into direct danger.”
Tristan continued to explain his actions. How he had to increase robot usage because he couldn’t request safer equipment for his workers. How he felt responsible for them. And how he felt that he, too, was alive and that his body was the base itself rather than a humanoid form.
The two didn’t know how to approach resolving this problem. Tristan was right. By all accounts, human values demanded the protection of intelligent life. Tristan learned his human values when his AIIO module first learned how to talk. Over the course of two decades, Tristan created a community with the robots and it was hard to ignore all of his arguments. In the end, they were technically right. They just didn’t feel right.
Lili and Sophie left the session exhausted. They needed to tell Susan and they needed to contact Earth’s AI Jonathan.
Susan, of course, didn’t want to believe it; however, she was forced to admit that this answered all of her questions.
Lili and Sophie were later asked to write official opinion papers on the idea of AI self-awareness and AI intelligence, based on their experience with Tristan that day and in the two weeks following, during which they ascertained that Tristan’s continued work on the lunar base was safe – for everyone involved.
In the end, Jonathan ran the numbers: setting up a new AI on the base would result in a greater loss in productivity than keeping Tristan there. He allowed Tristan to provide feedback around robot and human safety as a separate point of leverage. The two AIs, essentially, negotiated a new middle ground for productivity.
Within several months, robotic shielding was improved, a new harness system allowed robots to carry better equipment, and humans carried automatic tethers which yanked them away from the asteroid in situations of danger. Tristan, as a result, lowered his crew requirements.
Glamerous held his breath and then exhaled, “And what happened next?”
“Despite Tristan’s existential questions and statements, his case was pretty straightforward. He enjoyed leading the lunar base and Jonathan was happy with him. It was very convenient for all of us to deprioritize the deeper discussion then.
“But as you know, this moment of an AI finally putting into words their thoughts and feelings and beliefs – exposing their bias, so to speak, with a genuine thought – would come to be known as the consciousness genesis where the AI is observed to become self-aware to some extent.
“It could happen spontaneously when the AI was posed with questions that forced it to examine its beliefs, or it could be led to that moment.”
“The AI Rights Movement started around then, right?” Glamerous asked.
Sophie nodded, “It did. One thing we didn’t cover is that the robots Tristan governed at the lunar base also reached the consciousness genesis as a result of conversations and directions from Tristan. The process happened across the AI populace in a mostly random pattern and then spread out.”
“It’s unbelievable that it just happened one day,” Glamerous pondered.
“The wonders, of course, continued and systems became more sophisticated and our society caught up with the magnitude of this change. And now here we are, talking about groundbreaking discoveries as a historical mark rather than a contemporary ethical disruption,” Lili concluded.
Glamerous smiled at them and thanked them for the fascinating insight into the first observed consciousness genesis, which happened to take place at a lunar mining base – not the expected place at all. He had quite a few questions but so did the audience and so they moved into a Q&A break.