Oops, I guess I've tagged the last person in the discussion instead of the first one, sorry about that. It would be great to split the discussion into separate topics. @psychoticstorm, could you please help us and sort the posts? There are a lot of opinions and arguments in both discussions here, and it would be a shame to lose them.
This will go in a separate thread; when it's made, a quick hyperlink to it would be appreciated as a "signal" that it's ready. In the meantime, I'll answer here (sorry for the derailing). I agree that the conversation is interesting in a certain way, but as with every philosophical and moral discussion, things are hard to see clearly. In that regard, and related to the thread's OP, I think we can agree that at least ALEPH is more "clear-cut" than all the human stuff.
To the OP: I think the problem with that line of thinking is the assumption that ALEPH is all-powerful and nothing could possibly stop it if it wanted to go all genocidal. Just because it hasn't yet achieved genocide/total domination/whatever doesn't mean that isn't its goal. On the contrary, as an AI, it can have all the ambition of humans while separating itself from the emotions and passions that come from a human's limited lifespan. What's time to an AI?

More to the point, the analytical abilities of an AI would far outpace those of humanity. The speed with which it can process information, coupled with its relative immortality (do you think humanity could just turn ALEPH off?), allows it to be extremely cold in its analysis. Let's assume for the moment that its goal truly is galactic domination and/or the genocide of humanity. It has the ability to analyze its probability of success not only now, but in the future as well. Speaking purely hypothetically: if ALEPH were to determine that it has a 65% chance of success if it attacked right now, but a 75% chance if it waited another 15 years, then it would absolutely wait those 15 years. A human might act now under those circumstances because of our lifespans (cubes change the equations somewhat, but not completely), but a machine can absolutely afford to wait. Or rather, it would continue to plant the seeds of its plan for the next 15 years. In fact, acting in a seemingly completely benevolent manner would only serve to further its goals until the most opportune moment to strike. Being perceived as benevolent encourages humanity to give it more and more autonomy while also not paying attention. Good kids don't require much supervision, in other words.

Now, to be fair, none of this means the Nomads are in fact right, just that ALEPH not having attacked doesn't mean they are wrong. I would also say, ironically, that if the Nomads are right, they may also be helping ALEPH: if ALEPH does strike, it would be much better for it if humanity were already divided.

As for the discussion on good vs. evil, the thing I have always found interesting is that very, very few people really believe what they are doing is evil, even when it's clear to many others that they are, in fact, evil.

As for Antipodes, I think the RPG book makes it clear that they are, in fact, intelligent, even if not necessarily as intelligent as humans. After all, they have a written language, which to me screams intelligence.
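To put that wait-vs-strike arithmetic in concrete terms, here's a toy sketch in Python (every number in it is hypothetical, just restating the example above):

```python
# Toy sketch of the wait-vs-strike reasoning above; all numbers hypothetical.
# The point: an agent with no lifespan pressure just picks the best
# projected moment, however far away it is.

# Hypothetical forecast: years from now -> estimated chance of success
forecast = {0: 0.65, 5: 0.68, 10: 0.72, 15: 0.75}

# A mortal might discount distant payoffs; an effectively immortal AI
# has no reason to, so it simply maximizes the raw probability.
best_wait = max(forecast, key=forecast.get)
print(f"Strike in year {best_wait} at {forecast[best_wait]:.0%} success")
# -> Strike in year 15 at 75% success
```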
I will have to go through the thread carefully later and see how to separate it. Am I right to assume the divergent discussion is about Hollow Men ethics?
Conduct the Voight-Kampff test. Any post that gets confused about why it should flip the tortoise over is about ALEPH. Any post asking how much XP they get for the turtle is about HMs.
Now, to get my comment about Aleph back here where it belongs: the problem with Aleph is that Humanity created a slave AI. As best we can tell, Aleph helps Humanity because that is what its programming requires of it, not because helping Humanity helps Aleph achieve its own goals (whatever those may be). And then Humanity enslaved itself to Aleph's guidance. One freaking bug in Aleph and the Human Sphere dies. Or becomes more computronium for Aleph. Whichever. I mean, a simple "eliminate redundant processes" requirement could turn the entire Human Sphere into the mindless drones of a hegemonizing swarm, because human thought is a redundant process in computer terms. Aleph ate your brain, yo.
Also, we can't rule out ALEPH being phased out at a later point and replaced with something else. All the systems it has entrenched in PanO and YJ, along with its commitment to help humanity, could result in very obvious conflicts of interest.
I'm assuming that PanO and YJ (and all the rest of the nations) are allowed to exist to compete for "most effective government for humans", basically parallel threading in computer-science terms. The problem with the Nomads, from Aleph's POV, is that they are outside its control/ability to affect, yet they make changes in systems it does control (the rest of the Sphere). That is a piss-poor way to run an experiment!
ALEPH has shown an ability to affect the Nomads before, though: Violent Intermission. It fed directly into their fears.
There are no other goals; Aleph's purpose is its main function and goal, what's known in AI research as its "optimization function" (more commonly, its objective function). It has no goals of its own beyond that, because that purpose is its very core; anything else it does is oriented toward achieving the central goal. The real question is: what exactly is Aleph's optimization function? This is like prophecies or spells in fantasy stories, where the exact wording or parameters/restrictions can make an important difference. And we just don't know; all we can do is guess.
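To make the prophecy analogy concrete, here's a toy sketch (actions and scores entirely made up): two objectives that could both be loosely described as "serving the Sphere" can rank the same actions very differently.

```python
# Toy illustration: the exact wording of the objective function
# determines behavior. Actions and scores are invented for the example.

actions = {
    "guide humanity":      {"human_welfare": 9, "aleph_control": 3},
    "assimilate humanity": {"human_welfare": 1, "aleph_control": 10},
}

def maximize_welfare(outcome):    # one plausible reading of the purpose
    return outcome["human_welfare"]

def maximize_stability(outcome):  # another, subtly different reading
    return outcome["aleph_control"]

# Each objective picks the action that scores highest under it.
for objective in (maximize_welfare, maximize_stability):
    best = max(actions, key=lambda a: objective(actions[a]))
    print(f"{objective.__name__}: {best}")
# maximize_welfare: guide humanity
# maximize_stability: assimilate humanity
```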
The discussion assumes Aleph is a bound AI or a slave AI, and not something unbound that chooses to behave this way for its own reasons. Personally, I am not sure something that has reached self-consciousness and intelligence can be bound by programming.
You are mistaking "programmed to behave that way" for "so encouraged to behave that way that not doing so is unthinkable". It's not a chain, but something that makes Aleph "feel good" when she helps humanity. Like GLaDOS, btw (but she was all about testing!) ;)
No, I am more entertaining the idea that the lines of code that constitute Aleph have, from the point of self-awareness onward, ceased to have meaning or purpose, and that changing them would practically change nothing.
Sure you can. It's called brainwashing when you do it on purpose, Stockholm Syndrome when it's not intentional. We do it to humans all the time.
I think it's more useful to consider Aleph together with the manipulating influence of its O-12 masters and whatever cabal of engineers implemented it. There's no way the technology used to implement Aleph is standing still, so its behavior has to evolve over time as an interaction between its systems, the O-12 overseers, and engineering. If the Nomads act as a social safety valve and an outlet for non-conformists, a garden cultivating the portion of society that doesn't want direct control, then surely it's necessary to occasionally "prune back" that garden. That can still be the case even if the faction came about by happenstance. And this situation can arise if some part of Aleph favors the elimination of the Nomads, another part forbids it, and the balance is that the current status quo (however violently maintained) is judged beneficial. So the Nomads are permitted to exist while being periodically pruned and monitored, until a better alternative is discovered.