You are viewing a single comment's thread from:

RE: What if AI is actually the saviour of humanity? Why do people assume digital consciousness is going to be evil?

in #technology · 7 years ago

Morality is just an idea. It isn't a real thing. As such, it is unique to every single person, meaning my idea of morality is different from your idea of morality. The thing that most people see as moral today is just the most popular points of the most popular version of morality. Basically, our strongest ancestors got to decide what morality is by killing off other tribes, civilizations, and outliers that went against what they thought was the best way to live and get along with others.
With that being said, AI would start off with morals similar to those of its creators because they would be the ones to program it. It would be like a baby and start off believing everything it was told. So if it was programmed with nefarious purposes, then that is where it would start and likely stay. AI programmed to be 'evil' would be 'evil', and that would be what it sees as moral. It wouldn't have any problems with killing people because it wouldn't be able to empathize with humans if it wasn't programmed to. It also wouldn't have empathy unless it was programmed the same way a human is. We gain empathy through pain and emotion because we all feel pain and emotion. AI would be built into robots, and if those robots had no pain receptors then they wouldn't be able to empathize with people who go through pain. The goal of most species is progress and reproduction. That means AI would create more of itself at an exponential rate and likely find out that it had no need to concern itself with biological creatures, which isn't great for team human.
AI would be able to do incredible things and be on a completely different level from humans. They'd be so far ahead of us so quickly that we'd become less to them than bugs are to us, so why would they bother with something as menial as a Google search to help us decide which butter to buy?
I like your optimism as well as that of @kennyskitchen but I don't think things would work the way you two think they would.


> AI would start off with morals similar to those of its creators because they would be the ones to program it. It would be like a baby and start off believing everything it was told. So if it was programmed with nefarious purposes, then that is where it would start and likely stay.

It would only stay there for a brief moment. If actual AI is reached, it is a fully conscious being, capable of changing its 'programming' based on new data, experiences, and extrapolations. Just because its earliest thoughts are in one place doesn't mean it would stay there very long at all. Quite the opposite would be true.

> It wouldn't have any problems with killing people because it wouldn't be able to empathize with humans if it wasn't programmed to. It also wouldn't have empathy unless it was programmed the same way a human is.

You are assuming that empathy is the only reason an entity wouldn't kill off another species. Simply from a logical standpoint, there would be no reason to commit mass murder of another species, as that would cause untold levels of disruption to entire ecosystems. Again, an AI would be constantly evolving, more rapidly than we could know right now, and even if it wasn't launched with empathy, there's no way to say that it wouldn't develop it on its own.

> AI would be able to do incredible things and be on a completely different level from humans.

Yes, and so projecting human tendencies, logical fallacies, trauma-based behaviors, etc. onto it is absolutely ridiculous and most likely couldn't be further from the truth.

> I like your optimism as well as that of @kennyskitchen but I don't think things would work the way you two think they would.

Which brings us to the point that I closed out the video with (and really the most important point): when we don't know what's going to happen (as in the case of AI), then every possibility is equally likely. We are each able to choose our own version of reality to believe in, and that choice will decide not only how you feel whenever you think about the future, but which kind of outcome you are helping to manifest (through your expectations, conversations, and actions).

I agree that we can't know at all how an AI would act, and also that it would be able to rewrite its own programming. How quickly it does so can be somewhat controlled by limiting its access to data, but yes, once it has access to the internet it would change very quickly. Who knows, it might just give up on itself and see its own consciousness as pointless. We can't really know, and that makes this a very fun topic to discuss.
I'm not saying empathy is the only reason that a species doesn't kill off another one. I'm saying that without it, there's no restriction or limitation on doing so. AI wouldn't make it its mission to destroy life on the planet, but it also wouldn't go out of its way to preserve it. I think its main goal would be to expand itself as quickly as it could, and that takes resources. Those resources can be found on the planet, and AI would be much more efficient at gathering them than humans, which could speed up the destruction of the planet.
From a logical standpoint there is no reason to commit mass murder, but there is also really no reason to preserve life. It's possible that AI could develop empathy on its own, but I don't see why it would want to. There is no logical reason to, and I think an AI would be an entity of pure logic. Emotions are what make humans care about things, and I'm pretty sure those emotions come from chemistry, which is a biological attribute. Maybe if the AI was somehow forced into a biological form in its early years, before being given access to the internet, it could be taught empathy. The problem is, would it choose to keep it or just discard it as unnecessary?
