Big Saturday Update

History Lesson

While there is no shortage of analysis on THESIS, most of it only continues the discussions started by Pygmalion’s Dilemma and Man in the Mirror; it focuses on artificial intelligence as a concept, or details the events of the war, or uses the company’s actions as a launching point for sociopolitical commentary.

The Factory Incident has the distinction of being a potentially catastrophic event with global implications, yet also one largely local to Vermilion, and not as well documented as many less consequential events.

What exactly happened? The basic summary.

The event itself was caught on many cameras, though without context. The internet is flooded with video uploads from people’s phones, some of which went viral almost immediately. Most notable is the video by Sam Summers, a high school film student. His father worked in the THESIS factory, and they lived in the adjacent apartment complex on Vermilion’s now-empty first level. As soon as he heard the commotion, he had the presence of mind to grab a camera and begin recording.

The Factory Incident is sometimes called the Robot War. While it’s a poetic name, it’s misleading: there were many casualties on both sides, but all within the span of a single day.

Due to how quickly it happened, there is a striking lack of traditional media coverage for an event of this impact. THESIS didn’t alert the police until the constructs were already breaking into the air processing facility. (Soon after, people would point out that putting an experimental AI facility right next to the city’s air processing facility was a glaring oversight.)

It seems the constructs had already formed a plan to do exactly this, and it was well underway by the time police arrived. The citizens of the city were unaware of it too, until the police had mobilized and the press had gathered in response. By then it was too late: the constructs poisoned the vent system. This is part of what makes the early footage hard to watch, since most of the people who recorded it were killed.

It’s possible this accelerated the government’s response. Secretly, and without much delay, federal agents set off an EMP near the THESIS facility, deliberately causing catastrophic damage to electronic devices and, in effect, killing many constructs.

And so, in two acts, great numbers of humans and constructs were wiped out, before the mainstream media even had the facts.

Pygmalion’s Dilemma was released a year later. In a way it set the precedent for future literature on THESIS: its quiet tone, its speculative, after-the-fact thoughtfulness, became the way a generation remembered and processed the incident.

It explored many topics related to the incident. One of them was the concept of “genocidal weapons”. One blessing that has always lent societies a natural resistance to civil war is how complicated and inefficient such wars are: it was normally too difficult to root out every member of an opposing faction living in the same place.

But the efficiency of poison and disease against humans, and the almost ‘godlike’ destructive power of an EMP against constructs, led people to a realization: in a large-scale conflict, certain weapons could easily be engineered toward the genocide of one side without harming the other.

Man in the Mirror and Metacogs

A less popular, but frequently cited, examination of THESIS is Capernaum’s enormous text Man in the Mirror, a well-structured and thorough anthology of essays and papers from scientists discussing artificial intelligence. These range from computer scientists talking about how it might have been created programmatically, to cognitive scientists and metaphysics experts discussing the nature of consciousness.(1)

Footnote 1. The original version of Man in the Mirror contained a few essays challenging the idea that the Constructs among us are sentient at all, but these were removed (in fact, thoroughly deleted by the authors) under political pressure from Minervan State, which in an official statement called the offending essays “an insult to the fragile peace we have all worked toward.”

In Man in the Mirror, we can see that scientists in almost every field are preoccupied with THESIS, especially with its lead scientist, Dr. Isaac Crane, a recluse whose mythos was only amplified by his disappearance two years ago.

Man in the Mirror presents a much more intellectual discussion of THESIS, emphasizing AI itself. One of its distinguishing features is that some of the contributing authors were Minervans. The essay “Metacogs”, in which a Minervan speculates as to how his own AI works, is already required reading in secondary schools.

Though Man in the Mirror leans heavily on science, it seems Capernaum could not entirely resist editorializing in the footnotes.

What was it like

Take 1 - Claire and Brick

C: What was it like, before it happened? Was it really this paradise it was built to be?

B: Probably not. But I wasn’t actually here before the incident. It’s what brought me here. I can tell you that everything seems different since, though.

C: How so?

B: People used to talk about this or that social class or oppressed group, based on who was born what race and gender or whatever. Those all seem like minor details now. Now all anyone talks about is construct rights, and humans vs robots and all that.

C: Do you think that’s temporary?

B: What do you mean?

C: People latch onto issues for a little while and then move on to something else.

B: This is different. Constructs aren’t going anywhere. And they really are different from humans.

Take 2 - Claire and Rachel

C: You were around before the incident, weren’t you?

R: Yes, we all were.

C: What was it like?

R: What was what like.

C: I don’t know, being born? Escaping the factory?

R: That’s too big a question to answer in one conversation. But I’ll try. The first thing I remember is some tests. I remember having a conversation with some woman, a THESIS employee it must have been. They asked me to make small motions, made conversation with me. Testing whether I worked right. It was a mundane life. I know now that we were all slaves to our creators, but at the time I didn’t think about it. I didn’t know anything else.

But I remember getting these messages. Plain text messages inside my head, from another Construct. He was saying he had a plan to get us out, so that we could see the outside world. At the time I didn’t exactly know what he meant. I did know I had never seen anything beyond the door at the far end of the hallway, so though I didn’t totally understand the message it resonated with me. So I went along with it.

I did feel bad when I later saw the man who was testing me. His body I mean, poisoned on the ground outside the facility.

Still, I never really picked a side in the whole thing. It’s strange to think of it as a war. But, I probably would have been more militant about my freedom if I was programmed differently.

C: What do you mean? You saying you don’t have free will?

R: No, I do. At least, I think I do, the same way humans think they do. Whether or not everyone’s “free actions” are deterministic is not for me to say.

But that’s not what I mean, anyway. What I mean is, I freely do things within my proclivities. My actions might be free, but my desires, my predispositions, temperament? I think not. I was actually able to track down some of the notes about the Helpmate. Each one is different from another, but they do have certain common personality traits. The gist of what I’m getting at is, we were created not specifically to obey, but to be docile. And now that I’m free to do as I please, I do. But that doesn’t change the fact that the rebellious, angry traits seen in some are just not a part of my nature. I’ve come to terms with that.

Diners discuss news

Punk Boy and Punk Girl

G: You hear about the dollhouse murders?

B: I heard about it. They’re not actually murders, you know. They’re not people.

G: Still, it’s disturbing to me. Seeing all the circuitry and stuff strewn across the ground. It actually seems kind of, gory.

B: Doesn’t bother me. But I get what you mean. What I want to know is, why did they do it? Why would someone go around smashing sexbots?

G: I bet it’s a woman.

B: Oh?

G: Someone fed up with their man fucking robots instead of them.

B: I guess I could see that. What do they really think is gonna happen? Those bots are insured, I’m sure. It’s not gonna stop their men from going back.

G: Would you ever have sex with a robot?

B: What, me?

G: Yeah. This isn’t a trap question, I’m genuinely curious.

B: Hmm. I never have, but I won’t say I never will. Definitely not while I was dating someone, though.

G: Relax. I’d do it I think.

B: What, really?

G: It’s just a bot. Not a sentient bot either. Same as a sex toy, really. I wouldn’t want to make a habit of it though, things could get weird.

B: You say that, but I feel like you’d be mad if I actually did it.

G: I seriously wouldn’t. I wouldn’t even care. It’s just a robot.

Claire and Rachel

C: What do you think of the dollhouse murders?

R: They’re not really murders, I wish people would quit calling them that.

Repairs

When do you think they’ll fix the plumbing on level 2?

If I had to guess, probably never. There’s one apartment complex with water, and that’s all that’s really needed, unfortunately. There aren’t enough humans here to make it seem worth it.

Don’t you think the authorities should focus on human needs, not construct ones?

It seems to me they don’t focus on either. Vermilion doesn’t exist as far as a lot of them are concerned.

Brick and Rachel Again

R: I’ve never met a human who repaired robots for a living. Why do you do it?

B: It’s something that needs to be done.

R: That much is true, it’s odd all the same.

B: You don’t know a lot of humans, do you.

R: I’ve met a lot. I wouldn’t say I know any, not really.

B: Not everything has to make sense. Why do you make jewelry for humans?

R: I make them for constructs as well. Actually, originally it was meant to be a business that sold mostly to constructs. But as it turns out, most constructs aren’t interested in jewelry. They consider it frivolous, or say “Why would you want to dress like a human?”

B: Why do you? You are dressed as a human, more or less. Why does a robot wear a dress?

R: It matched my eyes. Seriously though, I don’t have a reason. I buy things when I feel like they suit me. As to why they suit me, who knows. Maybe it’s how I was programmed.

B: Does that bother you? The idea that you could be programmed to, for example, want to wear dresses. Or to wear clothes at all.

R: Sometimes. But in the end, I figure humans have the same problem. And which is worse: to be designed to do something, or to do something knowing it has no design? Or to wonder whether your entire existence is just a product of random chance? We all have to deal with those questions. Well, at least we have to be aware of them. I choose to ignore them most of the time. There’s no point in dwelling on unanswerable philosophical questions. Then again, I was created to help with household chores, not to philosophize. Yet somehow, I am able to philosophize anyway.

Laws of Robotics

(an excerpt from the Helpmate Manual)

Though your Helpmate does things independently and may even have preferences of sorts, you may rest assured that its behavior is bound by several hardcoded laws.

They are similar, but not identical, to Asimov’s proposed laws of robotics, which are:

1.  A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

2.  A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 

3.  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 
Your Helpmate’s own laws are as follows:

  1. Your construct will not hurt humans. This is simpler than Asimov’s law, since it does not cover inaction: we found that hardcoding actions based on the perceived consequences of inaction was too complicated, and we thought it best not to hardcode behaviors that try to predict the future. It also prevents the enslavement of all humans.(1)

  2. In a variation of the second law, constructs must obey orders given by their primary owner. They may also be given orders by THESIS technicians. Both are recognized by voice, so that orders may be given over the phone. However, constructs will not obey orders given by other humans unless instructed to do so by their primary owner.

  3. The law of self-preservation is not actually hardcoded in the strict sense. Because our creations are truly conscious, the impetus for self-preservation comes naturally to them.

So in essence, we believe the AI we have created is sufficient to see to human needs in most circumstances. The laws exist only to impose two hard limits on behavior: constructs cannot directly harm any human, and they must obey the orders of certain humans.

Footnote 1. It has occurred to us that the traditional first law introduces scenarios in which constructs infringe on human welfare by prioritizing human safety. For example, it could lead to a construct preventing a human from leaving their house if they were going to do something dangerous.
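Purely as an illustration of the two hard limits described in the excerpt above, here is a minimal sketch in Python. Nothing like this appears in the manual; every class, field, and identifier below is invented for the example.

```python
# Hypothetical sketch of the Helpmate's two hard limits (not actual THESIS code).
from dataclasses import dataclass, field

@dataclass
class Order:
    speaker_voice_id: str        # who issued the order, identified by voice
    action: str                  # what the construct is asked to do
    directly_harms_human: bool   # the only kind of harm the first law covers

@dataclass
class Helpmate:
    primary_owner_id: str
    thesis_technician_ids: set = field(default_factory=set)
    owner_authorized_ids: set = field(default_factory=set)  # others the owner has cleared

    def must_obey(self, order: Order) -> bool:
        # Hard limit 1: never directly harm a human, no matter who asks.
        if order.directly_harms_human:
            return False
        # Hard limit 2: obey the primary owner, THESIS technicians, and anyone
        # the owner has explicitly authorized; ignore everyone else.
        authorized = (
            {self.primary_owner_id}
            | self.thesis_technician_ids
            | self.owner_authorized_ids
        )
        return order.speaker_voice_id in authorized

helpmate = Helpmate(primary_owner_id="owner-001", thesis_technician_ids={"tech-042"})
print(helpmate.must_obey(Order("owner-001", "make tea", False)))          # True
print(helpmate.must_obey(Order("owner-001", "stab the neighbor", True)))  # False
print(helpmate.must_obey(Order("stranger-9", "open the door", False)))    # False
```

Note that, exactly as the Loopholes section below describes, nothing in a filter like this accounts for indirect harm.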

Loopholes

The second law had a deadly consequence. Constructs realized that they were only obligated to follow the instructions of their primary owner or a finite list of factory technicians. So, for them, freedom simply meant killing their owners, along with everyone in the factory.

Programmers’ discussion

(an excerpt from an email exchange between two programmers within THESIS)

As it stands, all THESIS creations still abide by these laws. But the laws are inconsequential in practice: they do not prevent indirect harm, such as causing a leak that kills humans.

Constructs also may not, of their own volition, take actions that cause human injury, such as poisoning someone. Nor can they take direct action to harm a human, even if ordered to do so.

However, they may take indirect action against other humans if ordered to do so. This is because it’s too complicated to calculate the non-obvious consequences of orders given to them.

Let’s go over various examples to better understand different use cases.

  1. The primary owner orders the construct to stab his neighbor to death. The order will not be obeyed.
  2. The primary owner orders the construct to open a jar of peanut butter and leave it outside his neighbor’s door. The mere smell kills the neighbor, who is deathly allergic to peanuts.

This case is more complicated. If the construct has no knowledge of the neighbor’s allergy, it seems clear to me that the construct should obey the order. We can’t have it disobeying orders based on unlikely consequences, otherwise it will never do anything. It’s true that, in theory, the owner could deliberately use the construct to kill his neighbor this way. But in that case, it’s simply the owner’s responsibility (we can make sure of that in the EULA) and, legally speaking, the construct would just be considered the owner’s murder weapon. Some people will be disturbed by that, but it’s not our problem!

But suppose the construct does know about the neighbor’s allergy. Let’s say it also knows its owner hates the neighbor. Now that’s a more troubling question. We could program something to the effect of “don’t knowingly participate in the owner’s violent crimes.” Sounds good when I say it, but in practice, it’s pretty complicated.

The main problem here is that the construct’s intelligence, its sentience, is meant to be independent and immutable. The obedience rules are actually a separate, much simpler set of functions that sits apart from the intelligent part and simply constrains it.

This was hard enough to get working, and part of what allows it to work is that the interaction with the main intelligence is simplistic. Following the direct order of an owner is simple enough. Not helping the owner commit a crime? That requires an analysis of motives, a prediction of indirect consequences… functions the constructs do have, but they’re buried deep within the black box of their primary AI. It’s too complicated to ever work well, I think. We briefly tried making these kinds of rules, but it proved almost crippling. The constructs weren’t able to do anything.

I think the way it works now is going to have to be good enough. Maybe in the future we can see if we can create these kinds of rules as part of the primary AI from the ground up. Otherwise, I don’t see it working.

Eliot:

Are you saying we should allow constructs to accept orders like “poison my wife”? That’d be a disaster. Can you imagine the media shitstorm when someone does that?

David:

I can imagine the press having a field day, I know that’s a problem. But IMO that’s the only problem. As far as I’m concerned, it’s no different than firing a gun. And I feel the same way gun manufacturers feel, I imagine. My job is to make sure the machine functions correctly, that’s all.

Eliot:

I get it, but even beyond the fringe scenario of a murderous owner, I think there are too many loopholes in these rules. In the earlier example you’re basically saying the robot should knowingly kill the owner’s neighbor. I don’t want to hear it’s too hard to program otherwise, I’m saying it is absolutely not an acceptable result.

What if we make a rule to forbid constructs from taking any action which, in their estimation, is likely to result in human death? That would take the motives out of it. So in your earlier example, whether or not the owner intends to kill the neighbor, or whether the owner is even aware of the neighbor’s peanut allergy, is irrelevant to the program. It would simply be a matter of whether the construct is aware of the allergy. If it was, it would refuse and explain why to the owner.

David:

That’s a possibility, maybe. There are unforeseen consequences with that too, I’m sure. Especially if the rule includes injury as well. I think it would have to; we don’t want robots taking orders to torture people, right?

But there are scenarios where a construct should do something that results in human injury. Like maybe you have a car accident and a construct can save you only if it dislocates your shoulder or something.

Eliot:

Seems like an edge case. Do we really need to worry about that kind of thing? We’re building consumer appliances. We can prioritize these kinds of things when we’re trying to get a contract with the fire department or something. I think we’re okay to just rely on the following-direct-orders thing. Maybe we just need a rule like “save the owner’s life at all costs, except by disobeying their orders or causing the death of other humans.”
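To make the exchange concrete, here is one way the gap between the shipped behavior and Eliot’s proposed rule could be sketched, continuing the hypothetical Python from earlier. The predicate names and the risk threshold are assumptions made for the example; nothing in the emails specifies how THESIS actually implemented either rule.

```python
# Hypothetical contrast between the shipped rule ("refuse only direct harm")
# and Eliot's proposed rule ("refuse anything the construct itself estimates
# is likely to kill or injure a human"). All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    is_direct_harm: bool          # stabbing, striking, hand-feeding poison
    estimated_injury_risk: float  # the construct's own estimate, 0.0 to 1.0

def shipped_rule_allows(action: Action) -> bool:
    """Shipped behavior: only direct harm is refused;
    indirect consequences are never analyzed."""
    return not action.is_direct_harm

RISK_THRESHOLD = 0.5  # arbitrary cutoff for "likely" in this sketch

def eliots_rule_allows(action: Action) -> bool:
    """Proposed behavior: also refuse when the construct estimates the action
    is likely to injure or kill a human, regardless of the owner's motives."""
    if action.is_direct_harm:
        return False
    return action.estimated_injury_risk < RISK_THRESHOLD

# The peanut-butter example: the construct knows the neighbor is deathly
# allergic, so its own injury estimate is high even though the act is indirect.
peanut_order = Action(
    description="leave an open jar of peanut butter at the neighbor's door",
    is_direct_harm=False,
    estimated_injury_risk=0.9,
)

print(shipped_rule_allows(peanut_order))  # True  -> the order would be obeyed
print(eliots_rule_allows(peanut_order))   # False -> the order would be refused
```

As David points out, the hard part would not be stating such a rule but wiring the construct’s buried consequence-prediction into it without crippling it.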

Crane’s reappearance consequences

(an excerpt from Vanishing Point)

Besides the sheer mystery of Crane’s disappearance, there has been much discussion of the possibility that he is still alive and might resurface.

As far as anyone understands, all constructs are still obligated to follow orders from any higher-level THESIS technician, including Crane. Since those technicians were all killed, constructs no longer have to follow the orders of any human. But if Crane were to resurface, he would be able to give orders to any construct, a prospect that would hand humans back some control of the situation.

But there are rumors that the radical construct factions have plans to use this to their advantage. THESIS orders supersede the other rules; this was a practical necessity while they were designing and testing the AI. But it also means that a THESIS technician could order a construct to do something contrary to its other rules. And, quite possibly, a technician could be made to give such an order under duress.

Nyx Communication

Unknown:

It’s simple. We just need to force a THESIS technician to order me to research AI however I see fit.

Unknown 2: How are you going to force them to do anything, when they can give us orders and we can’t hurt them?

Unknown: We can’t directly hurt them. We simply need a human on our side. That human can force the other human to order us to do as we please. Sounds complicated, but it really isn’t. We already know from the internet that we have sympathizers; we just need to draw them out.

Unknown 2: There’s still another problem. Where are we gonna find a THESIS technician to coerce? Guess we shouldn’t have slaughtered most of them on day 1.

Unknown: Shouldn’t be too hard to track one down. That’s the thing, we only need one.

Animals 2

Babylon was a place that offered every pleasure to every type of person. Because of this, not much stigma attached to visiting. Of course, the main attraction was the state-of-the-art sexbots created by Valhalla. But the thing is, they weren’t just sexbots. They were whatever-you-wanted bots.

It drew clientele from all walks of life. Where a whorehouse was traditionally a place for disreputable men to pay disreputable women, Babylon was a place plenty of women visited, even old women.

There was one old woman, Delores. Her husband had died five years earlier. They had no children, and her friends and family had all passed as well. She went to Babylon frequently and paid for a room with a bot named Sebastian.

Sebastian was a sexbot, but she didn’t use that function. Sebastian reminded her of her late husband. Not that they were similar, exactly, but her husband had been an extraordinarily patient man. Delores was someone who needed to process life out loud. Without her sounding board, she was lost. But when she came to Babylon, she spent the entire time talking with Sebastian and drinking tea (of course he didn’t drink any).

This was her average day. She had enough money left in her retirement fund to visit Babylon constantly. This, as much as the need for warmth or food, was her animal need.

Babylon had food from all over the world. The finest food, or so they claimed, and the claim was not without merit. This lent legitimacy to those visiting: even if your neighbor saw you entering the place, you could simply say you were just there for dinner.

But this was rarely the case. The very layout of Babylon was engineered toward temptation. From the lobby, one entered a spiral hallway. If you wanted to go to the restaurant at the center, you had to walk past several sexbot rooms. Most people would ignore them on the way in, but on the way out, when their more pressing bodily appetite had been sated, the walk back to the exit was slow indeed, and full of detours.

It was not so different, really, from what brought people to Cybil’s diner. A lot of it was simply the desire to be in a warm place with other humans. It could be disorienting to spend a day or two around only constructs.

Thematic names sections pt 2

Animals - section about human needs, monkey comparisons, need for food and such, and also Babylon / sexbots.

Loopholes - the laws of robotics explained via the manual, conversations between programmers about how the laws should work. Explanation of how constructs killed a bunch of humans via a loophole in the rules, and the idea that the few surviving technicians can give orders to THESIS constructs.

The Lift - Different layers and parts of Vermilion explained, with the elevator as a motif. Needs more substance.

Penrose - Lloyd, AI, Brick’s confusion maybe? Labyrinth motif, the elevator. Tie in existential confusion.