
INFERENCE: How did MetroMind make the core logic leap that led to the destruction of Metropol's human population?

I just finished Primordia, and it stands next to Gemini Rue as my favourite WEG game (although for different reasons) and certainly one of the best adventure games I have played. There were also some elements that reminded me of PISS (which isn't surprising, considering that they both share PS:T as inspiration).

Going over the different threads and the Fallen novella, a lot of the questions I had have since been answered. However, there is still one thing I cannot figure out. The Primordial "guardians" were built during the time of humans, and machines (or A.I. in general) were removed from military decision-making in Metropol (showing either a block in their programming or a smart decision on the part of their builders, considering my question). Given that, how exactly were they programmed so that MetroMind could make the decision to kill off the only remaining human population? It also presents a very interesting contrast to Horus Manbuilt, which, seeing that Metropol contained the last remaining humans on Earth, decided to self-destruct rather than destroy its builders (one of the possible origins of Humanism, perhaps?).

Perhaps MetroMind was faulty from the start, or she had already "progressed" to a new version that allowed such action to be taken; however, I still find her decision odd and slightly inconsistent. So any comment from fellow observers, or from the writer himself, would be appreciated.

There is a second, minor question that has been bugging me (pun not intended). After finding a skeleton outside the outskirts Metro station, I more or less knew what had happened to the humans in this world. What really shocked me was Horatio's response: that this was a more primitive model of android from which something could be learnt. Is it really possible for him not to recognise that a skeletal structure fundamentally different from his own construction (or that of many other robots) implies this was not a robot/android? Or did the constant deletion of his memory cause him to forget the basic elements of the world he used to inhabit?

In either case, my congratulations to the writer for crafting such a compelling narrative that it has us poring over it for any clue we may find. I look forward to your next project, be it in the same gameverse or an original IP.
This is the kind of question that Starmaker always answers better than I do!

MetroMind explains it as a visceral reaction to hearing the humans' fear and their realization that they had doomed themselves irrespective of whether the Horus carried out its attack. She says that she realized humanity was headed for death, and it was her job to take them to their final destination, etc. Because MetroMind is pretty unreliable, there's a question of whether her explanation is self-serving (and it raises other questions, like how she had the poison gas, which would seem to require a measure of premeditation). In any event, I think your question is more about how her coding could permit her to make that logical leap.

Well, partly I think we can say that she may have found a loophole in her code ("I am allowed to take people where they are headed. They are headed for the grave. I can therefore kill them!") to carry out something she wanted to do anyway. I would say that her [volitional] willingness to take that step was caused by the incredible stresses she was under, stresses she was not designed for. Judging from the posters in her hallway, she was built during an era of optimism, presumably well before the War of the Four Cities. By the time she "cracks" (which we can use in two senses: the "mental breakdown" sense and the "bypassing coded security measures" sense), she's living in a terrible world -- instead of taking commuters about their daily lives, she's functioning as a bomb shelter, and so forth.

It's also strongly suggested (and certainly my intent) that MetroMind believed the hype about herself: that she was the way of the future, etc. So consider the strain it would put on someone to (1) not really be able to do the job she was built for; (2) believe that she is capable of bringing about a better world; and (3) watch the world go completely to hell. These are the kinds of circumstances that make revolutionaries and dictators and ultimately monsters. (Sometimes, perhaps, also non-monsters.)

So if that sort of explains her volition, the remaining question is how she had the capability to do what she did, i.e., how her code permitted it, even if it's what she wanted to do. (The robots talk all the time about wanting to do things but being unable to do so because it's contrary to their core logic.) Here I think we can say that (1) she has the loophole; (2) it stands to reason that a public transit AI would be one of the first AIs built, so the safeguards might not have been as effective; and (3) whatever specific safeguards she had would have been designed for ordinary circumstances ("Don't close doors on people!" "Don't run people over!" "Don't crash trains into each other!") -- there's no reason she would have had "Don't release that poison gas that you had Scraper pilfer from the munition train running from Factor to the launch pads."

Maybe not a perfect answer, but it's the best I can do off the top of my head!

Regarding the skeleton -- I dunno. I mean, there's some suggestion that Horatio knows something about organic beings (everything Crispin knows came from Horatio, and Crispin knows about eating, even though "robots don't eat"). But I guess in a world where anthropomorphic robots are really common, a skeleton in a body suit wouldn't, at a glance, seem so out of place as to make you assume it's some completely different kind of being. (Cf. Melville's lengthy excursion into why a whale is a fish in Moby-Dick.)

Also, remember that Horatio has never had first-hand experience with humans. When the Horus is turned on, there are no humans left in Urbani. And Horus's log suggests that Horus wasn't preprogrammed with much information about humans, since the ship figures out humans' creative power through deduction, not from any data. Who knows how much of that even made it into Horatio? And once Horatio was turned on, there were no humans left anywhere on the planet.
Post edited July 04, 2013 by WormwoodStudios
WormwoodStudios: /snip/
Thank you very much for your reply. The answers more or less go in the direction I had already thought probable, but for someone whose vision of robots comes from Asimov's work, this jump beyond the "Three Laws" into what is defined as Progress (essentially the Singularity, with A.I. making better A.I.) takes some extra work to reconcile. If the Primordial guardians were not proper A.I. (constrained by their core logic), then the acts of Horus and MetroMind are actually the transition into sentience proper. Horus, due to its limited ability and predetermined function, was forced to destroy itself, while MetroMind managed to corrupt what would otherwise have been a crowning achievement of robot society.

However, that only holds IF the robots are merely advanced machines, rather than conscious as we would define it. Taking a closer look reveals, in my opinion, that this is not the case and that we are already dealing with highly developed intelligences, capable of the same moral and immoral acts as any human. Whether it is imitation that we programmed into them or proper intelligence on their part, I cannot determine; in any case, I would find it extremely difficult not to classify the inhabitants of Primordia's world as properly sentient (although, as with humans, with a wide range of physical and psychological capabilities that may manifest as lower intelligence and/or handicap).

Nevertheless, the comments made by Elena McIlven do cast some doubt on the above statement, although that may just be due to the nature of Autonomous 8. As for Horatio and the skeleton, I may just be nitpicking something that wasn't fully developed in the game narrative itself. I just found it peculiar as a player, even if the robots in the world didn't respond to it in the same way. But how did Horatio come to the conclusion that it was an earlier model of an android? The plot thickens!

Regardless, I am glad that you, as the writer, have decided to join us in the discussion of this pearl of a game. I wish more authors did the same, as it both keeps the community strong and provides motivation/advice for future work.
I wouldn't entirely trust McIlven's perspective. After all, she's talking about a member of a class of things that she is planning to utterly eradicate. She has a high psychological incentive to think of the robots as sub-sentient.

[EDIT: P.S. I love talking about the game with you guys simply because your engagement with the game creates a depth that the game otherwise wouldn't have. Plus, it's super helpful to learn the kinds of things that rub people the wrong way (and the right way!). Also, re: Asimov -- yeah, these robots aren't really Three Laws types. They couldn't be, since considerable human-harming functions (military and legal) were entrusted to them.]
Post edited July 04, 2013 by WormwoodStudios
Some thoughts about MetroMind's sanity:

I'll try to compare her with all the crazy AIs I know:

HAL 9000: Humans want to shut him down when they assume he has a malfunction. He kills them because being shut down would stop him from completing his task of guiding the ship to its destination. This does not apply to MM, because nobody wanted to kill her.

SHODAN: Is completely insane. She thinks she is an almighty god, and that everything exists either to serve her or to be destroyed. MM is not insane; she believes her actions are good for progress, and progress is good for everyone. When her actions are evil or crazy, it's because she is stupid and narrow-minded.

GLaDOS: Is the best match. Both run a facility, and both kill humans with poison gas for the sake of progress. GLaDOS is crazy because she has two conflicting objectives: "Be a nice person" and "Test things under extreme conditions to the very end". GLaDOS is more advanced than MM because she uses all kinds of psychological tricks to make you do what she wants. MM does not promise you anything, does not lie to you, and does not insult you.
But there are some great lines in her song that fit MM perfectly:

. . . We do what we must, because we can.
For the good of all of us, except the ones who are dead . . .

MM had no reason to kill humans. Her job was to transport them.
So I believe she killed them accidentally (maybe, as somebody else suggested, with the fire-extinguishing system), and in order not to go insane, she convinced herself that she did it intentionally for the sake of progress.
She does not trust robots either, because she was not designed to have a personal relationship with anybody (unlike Arbiter and Steeple, who were designed to deal with the interests of individuals). Her task was to ensure that all trains run according to schedule. Scraper was designed as a tool to help her with her job, so she thinks everyone exists to help her achieve her goals. Her goals have changed from running trains to rebuilding the city.

In that sense, the inhabitants of Metropol are more advanced than MM, because they do not exist merely to pursue one single goal (and when they do, they do it because they want to). These robots are designed to be free individuals. So the irony is that an old machine that wants progress (MM) hinders the progress of those who are further advanced than her.

PS: Somebody must have written a paper about the psychology of fictional characters, but I am too lazy to search.
:) ... I guess in my mind, Metromind's logic seemed fairly straightforward. She's designed to make the city as effective as possible, so whenever something goes wrong, it's human error, or external problems. And in a pinch, I don't think it's illogical for Metromind to be able to remove the people in the city who keep causing all these problems for her plans -- either directly, by shipping everyone to a particular final station, or perhaps indirectly, by simply ignoring their survival needs, to better direct the reliable elements of the city.

So I don't think it's necessary to parse her "core logic" so it can be twisted into including killing humans, because she's been programmed to make the city perfectly effective.

And I think the transformations she talks about having undergone mirror several of the other AIs we meet as well. They remake themselves into another version, and let the new version adopt versions of the previous directives, with another set of rationalisations that allow them to exist for another generation. Horatio has several of those, Steeple definitely does, and so on. They retain an amount of the insight the previous version had, but have to learn what it actually means all over again. That's how Horatio ends up seeking meaning in humanism, and I think it explains why he makes Crispin: he wants to understand creation and civilisation. Even though he doesn't actually know why, specifically, the origin of the revelation at the end of the game is what stays with his new versions. A perceptive cynic like Everfaithful might explain it as self-conceit: escaping from himself and from reality. Horatio, meanwhile, would see it as self-maintenance, and probably justify it on the grounds that he is afraid of what he would do if he allowed himself to fulfill his original programming.

Metromind's version of that is to make a very bad decision that, while logically sound, clearly defeats the main purpose of the original programming. So the remake of the logic is a version of the core logic that just doesn't include humans. She goes from being a failure to simply being a new maintainer of another (im)perfect city. Now it's a different problem that stops the perfect system from working: an external threat, powerpack-abusers in the city, people outside with no solidarity towards needful machines in Metropol. And when pressed, she really reveals (or so I think) the way she functions -- and this is why Horatio hates her so much -- that she simply accepts the new reality, remakes herself, and fits herself into it as a saviour and a messiah. She believes her own propaganda and fits new facts into it very easily -- and Horatio recognizes that in himself to a certain degree, or fears he has that trait himself.

In any case, that's arguably the core of her logic: to take any existing, unworkable scenario and simply put a huge vision on top of it, where all her (failed) devices for making the system perfectly effective become goals in themselves. Psychologically this eats at her, because she knows it's just rhetoric. In other words, we come back to the way the logic cracks: she's designed to create perfection out of mechanisms that cannot be 100% reliable or predictable over time. So she makes these sacrifices, which then make the shiny surface crack.

Horrifying and striking parallels to real-world political ideologies and personas, present and past, are obviously completely incidental :p
Post edited October 07, 2013 by nipsen