You are viewing a single comment's thread from:
RE: What if AI is actually the saviour of humanity? Why do people assume digital consciousness is going to be evil?
Exactly! With access to this boundless information, superhuman computing speeds, and any level of moral compass (which would be intrinsic if you followed the idea of Natural Law as put forth by people like Mark Passio and Peter Joseph), how could the rise of AI not help create a more moral world?
Just look at the insane amount of farmland, water, and other resources used to grow, harvest, and ship feed-stock for cattle, only for the cattle to produce less food than the land used for their feed would have (never mind the land the cattle themselves occupy). Even setting aside the moral arguments around eating the animals at all, a truly logical intelligence would never recommend or promote such a wasteful & inefficient system.
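To make the efficiency point concrete, here is a rough back-of-envelope sketch of the feed-conversion argument. All the numbers (feed-conversion ratio, energy densities) are illustrative assumptions, not measured figures; the point is only that a multiplicative feed ratio makes most of the input calories disappear.

```python
# Back-of-envelope sketch of the feed-conversion argument.
# The constants below are illustrative assumptions, not measured figures.

FEED_KG_PER_KG_BEEF = 7.0   # assumed kg of feed per kg of beef produced
KCAL_PER_KG_FEED = 3000     # assumed energy density of feed grain
KCAL_PER_KG_BEEF = 2500     # assumed energy density of beef

def caloric_efficiency(feed_ratio, kcal_feed, kcal_out):
    """Fraction of feed calories that end up as food calories."""
    return kcal_out / (feed_ratio * kcal_feed)

eff = caloric_efficiency(FEED_KG_PER_KG_BEEF, KCAL_PER_KG_FEED, KCAL_PER_KG_BEEF)
print(f"~{eff:.1%} of feed calories are recovered as beef calories")
# → ~11.9% under these assumed numbers
```

Under these assumed numbers, roughly nine out of ten feed calories are lost before reaching a plate, which is the waste the comment is pointing at.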
I don’t think you can have truly sentient AI until the machine can feel pain, and this is why: in humans, mirror neurons have evolved to allow us to feel someone else’s pain, which is unpleasant but useful in society. We have developed morals to avoid pain in ourselves by preventing or reducing pain in others.
If the machine cannot feel pain, how would it know right from wrong? From the programmer's subjective opinion? Even with unlimited data points there will always be the possibility of the trolley problem. Is it simply a numbers issue, and the choice is to go with the fewest casualties? What if the next Einstein was in the casualty group? Is that a data point the machine would take into account? Morality is a messy business, and easy solutions are hard to come by.
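The "fewest casualties" rule above can be written down in a few lines, which makes its crudeness obvious. This is a minimal sketch with hypothetical option names and counts, not anyone's actual decision system:

```python
# Minimal sketch of the "fewest casualties" rule discussed above.
# Option names and casualty counts are hypothetical.

def fewest_casualties(options):
    """Pick the option with the smallest casualty count.

    Note what this rule cannot see: ties, and any notion of *who*
    the casualties are (the 'next Einstein' question).
    """
    return min(options, key=lambda o: o["casualties"])

options = [
    {"name": "stay on track", "casualties": 5},
    {"name": "divert", "casualties": 1},
]
print(fewest_casualties(options)["name"])  # → divert
```

Everything that makes the trolley problem hard (who the people are, responsibility for acting vs. not acting) is simply absent from the data the rule consumes, which is the comment's point.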
I think the best we can hope for is to reduce the total amount of pain of all living things but especially in people as suggested by Sam Harris in “The Moral Landscape.”
I think even without our own empathy-based morality, an AI could see the intrinsic value in a balanced ecosystem, in non-violence except as absolute necessity (like the need of carnivorous animals to eat prey), and in eliminating logical fallacies from societal structuring.
With the ability to see the intentions, plans, actions, and subsequent cover-ups behind every false flag attack, every war, every government-indoctrination system, corporate lobbying, planned obsolescence, and the other lies that have been holding humanity back and causing violence, the AI could quickly & easily identify the people/institutions/concepts that are purely negative and bring them to light, eliminating their power.
If the AI clearly lays out exactly what happened on 9/11, the knowledge that the FDA & pharma companies have about the dangers of their products, and the clear monetary & business ties of the Rockefellers, Rothschilds, Kochs, etc. to every corporation & foundation they control, these things can no longer be dismissed as "conspiracy theories".
Yes! Cognitive empathy is a thing. We can understand another entity's perspective without feeling what they feel. One thing I'll admit I can't know: if I've never experienced pain, can I even "understand" what it is like for the person experiencing it? I believe a self-aware a.i. could, at least enough (or in a manner) that it could make rational decisions based on that "guess/understanding". And my assumption is that greater intelligence brings greater awareness of how much more efficient peace is than violence.
Also, I think many who believe a.i. would be inimical to humans might assume a.i. would care about or need the same resources as humans.
Why would AI care about a balanced ecosystem? It wouldn't need one to survive and thrive.
I agree that an AI would be able to figure out a lot of dark secrets, but I don't think it would care much about the welfare of a lesser race like humans, so it likely wouldn't go out of its way to help us.
You are assuming that the AI would suffer from the level of selfishness that marks the worst in humans. Higher levels of consciousness see the value in all life, whether or not they "gain" from that life. You're also assuming that the AI wouldn't be affected by the physical world, or have needs of it. Humidity, temperature, air content, solar radiation, and so on all affect electronics as well as living beings.
You're once again applying the worst possible manifestations of human consciousness to something that would be immeasurably more complex, well-informed, and logical. Just because some humans wrongly believe the universe is anthropocentric & don't care about other species, doesn't mean that AI would suffer from that same mental disorder.
You're assuming that selfishness is a bad thing and that AI would see it the same way. How do you know that higher levels of consciousness see the value in all life?
AI would be affected by the physical world but not as much as biological creatures are. They'd be able to survive or find ways to survive in environments that would decimate biological species, so logically, they'd have less of a concern for the environment.
Yes, we don't know that AI would be selfish and uncaring but we also can't know that it wouldn't be. Logically, I can't find a reason why AI would care. The biggest reason for this is because I don't think they would have much emotion and therefore very little to no empathy for biological constructs.
When I use the word "selfish", I'm not referring to simple survival, self-defense, basic animal programming. I'm talking about the concept of getting ahead at the expense of others, greed, the anthropocentric view of reality, etc.
And there are other environments that many organic beings could thrive in that an AI couldn't. That's also assuming there is a large physicality to the AI; if it is mostly digital in nature & experience, then…
As @scottermonkey commented above, there is such a thing as cognitive empathy.
What's wrong with "getting ahead at the expense of others, greed, the anthropocentric view of reality, etc."?
The AI would be completely digital, but that digital information has to be stored on some kind of physical substrate: hardware. And yes, some organic beings can survive in more extreme environments than current technology can, but those organic beings are unrelatable, microscopic creatures.
Cognitive empathy is shallow empathy at best, and it is rooted in emotional empathy. If there is no emotion at all, then there is no cognitive empathy; the other's suffering would just be viewed as hysterical behavior, because there is no basis to relate to it.
If it were not for the "selfish gene" none of us would be here to discuss this!
I find your reply to be right on. Morality is more than a numbers game; so is sympathy. I suppose I'm not trusting of society in general: people will do what serves them best, not necessarily what is best for the common good. Interpreting AI as "evil" in this connotation (when extrapolating into the future) means to me that people may not be given a choice; a choice will be selected for them, and is therefore "bad or evil". In that case, where is the real "evil"? I'd be interested in people's thoughts about how one might assess "pain". Emotional pain might be hard to measure, but health, quality of life, infrastructure, death, and economic well-being can be measured. How about fairness and equality? Is equality a moral issue?