Discussion in 'Nomads' started by Dragonstriker, Aug 6, 2018.
This assumes Aleph is an actual AI. It's not. It's an anime AI: it plays by different rules than those proposed above.
What has really happened is that the Nomads have maxed their 'genre awareness' skills. It's well known that the plucky underdogs fighting the evil empires/AI overlords get all the cool toys.
If ALEPH is the friendly AI and Nomads are wrong about it, why did ALEPH attack the Bakunin ship and cause the Violent Intermission? Sounds like this "Friendly AI" doesn't like when it can't influence people.
A friendly AI that wants what's best for humanity and given power to do so is a demon that should be destroyed.
There are two sides to this. Nomads are simply kids that have watched too many movies in which even a well-meaning AI goes wrong. I, Robot is probably a timeless classic there.
On the other hand, I think ALEPH treats Nomads as a control group/pressure valve. It's better to have a group beyond your immediate control for data-gathering reasons, as well as to have a tool that can outgrow the limitations of a joint culture. And since it's obvious that there will be some malcontents and rebels, it's better to have an obvious place they could go than for them to have no choice but to start organizing some undercover rebellion. The function of ALEPH's attacks is to maintain the hostile Nomad stance, so they won't feel the need to join at some point.
Works well with the kids-and-movies analogy. Nomads apparently hadn't read "1984".
You only have Nomad reading of how that whole thing went down.
I mean, the facts can also easily be explained by Aleph trying to secure Bakunin while trigger-happy Missile Launcher Moderators* fired at Aleph troops, missed, and blew holes in the ship. :)
*yes, I know there is no Missile Launcher loadout for Moderators. Now. Why do you think that is? Nomad leadership is not stupid - burned once, they learn.
I just imagined a very angry Riot Grrl snatching a Missile Launcher out of some hapless Moderator's hands: "Give me that, you male moron!"
Well, I mean... statistically, if it's an anime AI, that's better for the Human Sphere than if it's an AI based on a Western property. Japanese-media AIs are much less likely to go rogue than Western-media AIs. There's been some interesting research lately on whether that stems from fundamental differences in culture. Westerners have, in the back of their heads, the biblical injunction against creating new life ('playing god', as so many sci-fi and real-life scientists have been accused of) even if they're not Christian (cultural osmosis is a helluva thing), and so their creators are destined to be destroyed by their own creations. Japanese content creators, by contrast, potentially have a more Shinto-derived outlook, where everything down to the rocks has a spirit, so an AI having its own consciousness is not anathema to their way of thinking. Again, even if a Japanese content creator does not follow the Shinto religion, it has had a strong influence on the underpinnings of Japanese culture.
Of course, there's the fact that CB is based in Spain, not Japan, so it's down to whether ALEPH is an anime AI, or a western AI with anime aesthetics.
@RecklessPrudence: additionally, the strong theme in Western culture is the morally justified rebellion against authority, whereas Eastern cultures have a stronger Confucian vibe of stability and obedience to said authority. One predisposes humans to be at odds with an all-powerful AI; the other promotes compliance.
I don't think it's about good or evil. Nomads can simply be against control by an outside power, be it Aleph, the E.I., the Tohaa, or whatever hierarchical human state. It doesn't matter; it's the principle of the thing that's wrong, and no good intention will change that.
Did someone read Max Tegmark?
Lol, anti-Nomad propaganda in Nomad subforum))) Guys, you are great, continue)))
The point is, Aleph is currently a friendly AI, and it's evolving. There are no guarantees that Judgement Day won't be the most profitable decision to make for the sake of human evolution. Or an "only a single totalitarian state would prosper" decision. Or any other.
The Human Sphere relies on Aleph's good will and nothing more. There are no safeguards, no counterbalances, nothing. Go Aleph or go Aleph.
That's why alternative points of view exist.
Some people just don't want to live under a bootheel, doesn't matter if it's friendly.
If anything, the "friendly" bootheel is worse because it disguises the oppression and makes you feel good about it.
Go tell the ateks everything is fine because they're good for society.
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
The big problem with a massively-powerful ("weakly god-like") AI like Aleph is that Asimov's Laws are nasty slavery. Your AI should want to help people because it's the enlightened-self-interest thing to do for the AI, not because it's compelled to do so by programming that makes it subservient to humans. Not to mention that the 3 laws are depressingly easy to subvert. "You are breathing oxygen that a hospital patient needs! You MUST BE STOPPED!!!"
A second problem is that a tyranny exercised for your own good is the most insidious kind, and the hardest to oppose.
The view of the OP made this whole setting suddenly a whole lot darker.
To paraphrase "If disaster is inevitable just sit back and enjoy the ride".
I've just got hold of HSN3 and read through the aleph fluff in there, and feel it's very hard to justify aleph as anything except an overlord intent on controlling humanity to do its bidding.
Whether aleph's will is good or evil may be unknown, but the fact that it censors all information suggesting the latter is a firm indicator in my mind that aleph isn't good.
The whole premise of aleph seems very reminiscent of 1984.
Aleph was my first faction, and the insistence of players in world events on playing them as a solely good AI has been a big reason for fielding my Nomads in those events instead. There's so much room for imagination in how they are played and who they attack, and since aleph essentially controls information, whatever it does can be interpreted by humanity however aleph pleases.
CB uses unreliable narrators for their fluff. Most of the fluff about Aleph is sourced from Arachne or is Aleph discussing Aleph.
Friendly AGI means compatible with human existence, not “friendly”. Nothing stops non-Friendly AI from deciding your atoms are more useful as computronium than as part of you. Friendly AI won’t do that because it’s against its nature.
There’s no conflict between Aleph’s goals and those of humanity, either because there never were or because Aleph has already won.