Can AI be trained to assist with moral decision making?


An article asks whether AI can be trained to detect morality. Specifically, the article is titled: "We can train AI to identify good and evil, and then use it to teach us morality". The immediate problem with this article is apparent in the title. The concepts of good and evil are subjective, yet the article talks about morality as if there were some objective morality, one everyone would agree on, which an AI can somehow be trained to discover.

Can AI make the world more moral?

When it comes to tackling the complex questions of humanity and morality, can AI make the world more moral?

This question, I think, is more appropriate than the question in the title. I absolutely think AI can make the world more moral. In fact, I would go so far as to say the world cannot be moral, or even approach being moral, without AI (machine learning). The question is: what kind of AI are we talking about? Another question is: who will control this AI? The problem is that we simply do not have an AI which can do this at, say, the level of Google. I do think we can develop a "moral search engine", and in fact I have an idea for how to do just that, which I'll reveal in future blog posts.

The article highlights the main problem with current technocratic approaches to AI morality:

There are many conversations around the importance of making AI moral or programming morality into AI. For example, how should a self-driving car handle the terrible choice between hitting two different people on the road? These are interesting questions, but they presuppose that we’ve agreed on a clear moral framework.

We simply do not have a universal framework for morality. On the self-driving car topic, my opinion is that we should allow the owner of the car to decide whether to prioritize the occupants or to take the utilitarian route and sacrifice one to save many. This would put the moral question where it belongs (with the owner of the car rather than the manufacturer). To have car manufacturers override that choice would be to put responsibility on the makers of the software, who for better or worse are no more enlightened about morality than anyone else.

Where do I finally reach a point of disagreement with the article writer?

Though some universal maxims exist in most modern cultures (don’t murder, don’t steal, don’t lie), there is no single “perfect” system of morality with which everyone agrees.

But AI could help us create one.

The article writer assumes there is an us, a we, without defining who these people are. Do we all believe murder, stealing, etc. are wrong? Apparently not, because war happens, and in war murder and theft are common. In addition, circumstances shape right and wrong. For example, if you're a mother and your children are starving, will you go and steal food? Or do you do what is "right" and starve rather than violate the moral absolute of no stealing?

It's simple: there are no moral absolutes in nature. So to have an AI try to create absolute fixed rules is a very naive approach which in my opinion is guaranteed to fail. I do think AI can help a person find the solution which is simultaneously best for their self-interest while minimizing harm to others, and that is why I call my approach to this problem a "moral search engine" rather than simply giving the AI examples and having it use some kind of neural net to create solutions. I just don't think that kind of approach will work unless the AI can predict how humans will react to its solutions (public sentiment).

Morality has a public sentiment component

While you have personal decisions which do not have to be concerned with the moral outrage of people around the world because the decisions are small, you also have bigger decisions where you do have to be concerned with how people around the world will react. Human beings are notoriously bad at predicting the reactions or moral outrage of other human beings because our brains can only manage around 150 relationships. This hard limit, called Dunbar's number, suggests the human brain does not scale, and it is because of this limit (and others) that I make statements such as "human beings can never truly be moral". To put it briefly: without AI, none of us has a hope in the world of being moral in a hyper-connected world.

What does a hyper-connected world mean?

A hyper-connected world is a world where you have to manage potentially thousands of relationships (beyond Dunbar's number). Facebook creates an illusion allowing people to believe they have thousands of "friends". Twitter creates a similar illusion. The trend toward increasing transparency cannot produce more morality because even if every person has 5,000 stakeholders watching their decisions, it is not possible for the person being watched to adapt to the opinions, feelings, morals, and norms of 5,000 people from all around the world who may have very different notions of right and wrong. To put it simply, the neocortex cannot handle the moral load which hyper-connectivity with transparency inevitably brings.
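To make the scaling problem concrete, here is a bit of illustrative arithmetic (a sketch only, not a claim about exact cognitive limits): the number of distinct pairwise relationships grows quadratically with group size, so the jump from Dunbar scale to hyper-connected scale is far steeper than it first appears.

```python
# Illustrative arithmetic only: the number of distinct pairwise
# relationships grows quadratically with group size, which is one way
# to see why a brain tuned for ~150 relationships cannot track the
# moral expectations of thousands of stakeholders.

def pairwise_links(n: int) -> int:
    """Number of distinct two-person relationships in a group of n people."""
    return n * (n - 1) // 2

print(pairwise_links(150))    # Dunbar scale: 11,175 links
print(pairwise_links(5000))   # hyper-connected scale: 12,497,500 links
```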

The article connects morality and law

The article makes another mistake, in my opinion, by trying to connect morality and law. In my opinion law is amoral, which is to say that what is or isn't a law has nothing to do with morality. It has nothing to do with current moral sentiment, as there are laws on the books which most people today view as immoral. And if the goal is to produce positive consequences, it has nothing to do with consequences to society either, because there are laws which produce negative consequences for society (such as mass incarceration, which led to fatherless households, which led to a poverty cycle).

Inherent in this theory is the presumption that the law is an extension of consistent moral principles, especially justice and fairness. By extension, Judge Hercules has the ability to apply a consistent morality of justice and fairness to any question before him. In other words, Hercules has the perfect moral compass.

While I agree with the idea put forth, that AI can be part of creating a perfect moral compass, I do not think AI alone can do it. Nor do I think any moral compass can ever be considered perfect or "optimal". It can produce a better moral compass for the vast majority of people on earth, though. To achieve this, in my opinion, the question asker must be capable of querying both the machines and the people with moral questions. In other words, to build a true moral search engine, the question must be asked of "the global mind", which is like a supercomputer combining both machine computation and human computation.

What if we could collect data on what each and every person thinks is the right thing to do? And what if we could track those opinions as they evolve over time and from generation to generation? What if we could collect data on what goes into moral decisions and their outcomes? With enough inputs, we could utilize AI to analyze these massive data sets—a monumental, if not Herculean, task—and drive ourselves toward a better system of morality.

On this part I agree. The data analytics approach is, in my opinion, the correct approach to morality. It's a matter of having access to both human computation and machine computation. It is a matter of knowing public sentiment on any particular moral question at any point in time. It's about using AI to process this sentiment and even use it for predictive analytics. This, in my opinion, is a viable approach for a moral search engine.
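As a minimal sketch of what querying such an engine might look like (the data model and every name below are my own illustrative assumptions, not a real system or API), a moral search engine could aggregate sentiment samples per stakeholder group for a given question:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch of the "moral search engine" idea: given sentiment
# samples (one score per respondent per moral question), estimate how each
# stakeholder group is likely to react. All names and data are illustrative.

samples = [
    # (question, respondent_group, sentiment in [-1.0, 1.0])
    ("prioritize car owner in a crash", "drivers", 0.6),
    ("prioritize car owner in a crash", "drivers", 0.4),
    ("prioritize car owner in a crash", "pedestrians", -0.8),
    ("prioritize car owner in a crash", "regulators", -0.3),
]

def predicted_reaction(question, data):
    """Average sentiment per stakeholder group for one moral question."""
    by_group = defaultdict(list)
    for q, group, score in data:
        if q == question:
            by_group[group].append(score)
    return {group: mean(scores) for group, scores in by_group.items()}

print(predicted_reaction("prioritize car owner in a crash", samples))
# {'drivers': 0.5, 'pedestrians': -0.8, 'regulators': -0.3}
```

Note that different groups return different aggregate answers to the same question, which leads directly to the next point.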

But I do not think this will lead to a unified "system of morality". What is best for me is not going to be what is best for you. What is right for me to do based on my stakeholders, or my crowd, is not going to be what is right for you to do based on your crowd. If we both ask our crowd, depending on who is in our crowd we could get completely different results to the same question.

Conclusion and final thoughts

  • There is, in my opinion, no objective morality; there is not enough evidence that it exists in nature.
  • AI will not be able to find objective morality unless it exists in nature.
  • Current moral sentiment is not the same as objective morality. At best it is an approximation of what will upset the most people (or upset the fewest).
  • A moral search engine requires the ability to query the full global or universal mind, which means human and machine computation, or non-human animal computation should the technology evolve to permit their participation.
  • A moral search engine is, in my opinion, a must-have, because evidence suggests the neocortex does not scale. Making the world hyper-connected and transparent may work when it's only 100 or so people (a small town), but it does not appear to scale up to millions or billions of people, all of whom have their own opinions on right and wrong.

References

  1. https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/

Morality is subjective.

It's simple, there are no moral absolutes in nature.

How can something under another person's control teach another about morality?

Easy: suppose you have a pet and it is under your control. You train it, you feed it, you care for it. Are you saying you can learn nothing about morality from these interactions? What you can learn, at the most basic level, is how to care for another.

But will this tell us absolute right and wrong? Maybe not. AI doesn't answer every question. AI does, for lack of a better explanation, number crunching. It does the heavy lifting that our brains cannot handle. It can augment our ability to follow the rules we tell it we want to adhere to. It can help us avoid contradicting ourselves, help us become disciplined, and most importantly it can process big data.

For example, I don't know a lot about you or the morals of your country. Suppose I want to be perceived as a moral person in your country. Then I need a data set on moral attitudes toward different topics in your country. AI can be helpful because it can interpret and analyze this data so that I can understand that a large percentage of women in your country feel a certain way about a certain issue for a certain reason.

As a very limited human it is impossible for me to know, for instance, how women in your country might react to a decision I make. AI could analyze data and offer advice based on how women in your country are expected to react to certain decisions. Some of this might seem simple, but it's mostly a matter of number crunching.
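As a toy example of the kind of number crunching meant here (the survey figures below are invented purely for illustration), even estimating what share of one demographic group holds a view involves statistics few of us do in our heads:

```python
import math

# Toy "number crunching": estimate what share of a demographic group holds
# a view, with a simple 95% margin of error. The survey numbers are invented
# purely for illustration.

agree, n = 640, 1000                          # 640 of 1,000 respondents agree
p = agree / n
margin = 1.96 * math.sqrt(p * (1 - p) / n)    # normal-approximation interval
print(f"{p:.1%} agree, +/- {margin:.1%}")     # 64.0% agree, +/- 3.0%
```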

To put it in another context: if we are marketing and trying to sell a product, we will want to know how potential customers are responding to the changes we are making. Remaining in this symbiosis, in the good graces of the customer, is a matter of analyzing customer sentiment.

On a certain level morality is similar. We do not know how people will react to what we do or say. In the future we will not be able to afford trial and error, because saying the wrong thing could mean being blackballed, censored, or demonetized for life. So in order to avoid these exceptionally harsh consequences we must rely on analysis of current sentiment, trends, and feelings: what the people of 2018 perceive as moral.

Jeez.
You're right.
Doesn't make it any less scary though.
The change is here.

I wonder why you get such a great number of votes but not very big payouts?

Is it a lifehack for getting many votes? :-)

The more minnows see my posts, the more votes I tend to get. Votes from minnows don't count for as much, but they do add up. What it represents is that a lot of minnows like my posts and that I'm becoming skilled at marketing my posts to minnows. It doesn't mean I'm as skilled as some others because, as you said, my posts often get lots of votes but not a lot of Steem.

If you just want to get a lot of votes from a lot of people then you can use clickbait titles, with clickbait photos, and you'll get votes. If you want to also engage your audience then you have to choose a deep enough topic to generate a meaningful discussion.

Aha, thank you for the explanation!

It seems to me your reputation and SP play a great role in this case, because if a minnow makes a great post with an engaging discussion topic, it can easily be missed by everyone else and just be invisible, because nobody is interested in voting for him (voting for him isn't profitable).

Without any doubt, you create high-quality content, and all your votes are logical and fair, but sometimes I see posts about nothing from dolphins or whales, and they always get great payouts and much attention.
You're an experienced Steemian, and you know it better than me, I think :) But it's just a little copy of real life.

It doesn't seem to be the case. I may get a lot of votes but I don't get a lot of Steem. Some minnows are earning more per week in Steem Dollars and SP than I do on a per post basis.

I would say some of my upvotes come from my followers who read my content regularly. This may be around 100 people at the most, and you are included as one of my regular readers. Then there are new people who see my posts, read them, and then find the discussion and join it. So depending on the topic you can get a lot of interest.

To get a lot of SBD payment per post is not really something I have any control over so I don't focus on that so much. I focus on getting as many upvotes as I can, getting as many followers as I can, engaging the readers, and providing value with content.

If a whale sees one of my popular posts I might get a big upvote. If not then the post may get a lot of upvotes from minnows, a lot of engagement, and I might end up getting a lot of followers rather than the big payout. As long as I see progress toward my goal of 10,000 followers it's fine.

At 10,000 followers I might decide to quit blogging and move on.

To quit? But 10,000 will come very soon! I'm sad because I have so few friends (or people to talk with) here, and I don't want to lose one of them..
Why exactly 10,000? A magic number? :)
Blogging and Steemit have much in common with addiction, or maybe passion, and if you spend a lot of months here you just can't stop, because it's a part of your life. And income means much too, of course.

You don't feel such passion here? Or maybe you just want to finally spend the money you have earned here? :)

The funny thing is, when my posts were getting reasonable payouts from the whales, a few minnows complained about it. They said it wasn't fair, so the whales who were using bots to upvote my posts stopped doing that.

At the same time, if they do not do that then some of my readers might think it's not fair that some of my better posts receive such small payouts.

Honestly there is no making everybody happy. I post and I get the payout I get. I never complain about how small or how large the payout is. I don't concern myself with the payouts others get. Maybe they are better at it than I am and nothing is wrong with that.

Yeah, it's maybe the best position there can be: just don't worry about payouts. I try to follow this rule as well, and just write about what I'm personally interested in. Earlier I tried to follow the main trends and chose topics that I supposed would be paid, but I never picked the right one: payouts didn't come, or came for posts I didn't expect to be paid at all.

I relate my response to your conclusions. Before going into that, I would like to say that I assume you care about people and the environment and want to see a world where peace and freedom are provided. So don't take my words as a personal offense but as a highly critical standpoint of mine.

Morality doesn't have to be objective. There are ethics and morals in all humans once you are faced with a situation which requires them.

Can we agree that you don't want to be punched in the face or killed by another human being? Do you agree that you don't want your things stolen or someone betraying you? Do you find it morally inappropriate when someone talks badly behind your back and hurts your reputation? Do you want to be ignored when you are crying desperately on the street looking for help? Can you stay cool when you see birds miserably dying from an oil spill? Do you feel empathy for a child who is screamed at? What are your immediate responses to the situations I just mentioned?

I would say that humans can say "yes" to those universal ethics, and I would claim that those ethics are not only universal but also protected by law and habit. The fact that those ethics are betrayed does not prove that they don't exist in a very significant manner. They do.

There is no need to look to nature to support basic human needs and convictions, I think.

It's not a question of what upsets people. The question is what upsets you. Once you can agree on the named moral standards, you should live up to them no matter what others do or don't do. It should make you think if there is no agreement inside of you, and what that could mean.

The world IS already hyper-connected. All that happens to mankind happens at the same time to you. You are influenced by events like war, natural catastrophes, and climate issues. You may think that you know it all through the media, and that if that information flow came to an end you would know nothing.

Well, that is not correct from what I experience. You and I are directly influenced by people who, for instance, come from another country and tell their subjective perceptions and stories. You are influenced by your direct surroundings. Transportation, computation, consumption. Your (and my) inability to have control over modernity makes you look at the wrong solution, which is more of the same (technology). People are bored to death in these environments of high technology. A funny term for it is "bore-out".

From my point of view - which is of a more organic nature - the connections between all living systems are a matter of fact. Climate, plant and animal (including man) populations, and movements of geological matter are cyclical and highly complex. Human nature is a black box and can never be "known", just as "consciousness" cannot be defined.

You underestimate humans, I think ... maybe yourself, too. You overestimate what computers can do. The "learning AI" is a (nice to play with) fantasy. To have a learning ability similar to a human's, a machine would need organs and senses, blood, nerves, cells, and DNA. It would need a human body in order to gain the same intelligence humans have. But until you find that humans need extensions because they are not smart enough to run their lives, you will stick to the glorious imagination that machines will make our existence safer and better. Have you heard of the term "learning through osmosis"? I hope you get my idea.

Even if we were able to build an android, my question would be: Why would you want that? Why should I want that? What is your intention?

Can we agree that you don't want to be punched in the face or killed by another human being? Do you agree that you don't want your things stolen or someone betraying you? Do you find it morally inappropriate when someone talks badly behind your back and hurts your reputation? Do you want to be ignored when you are crying desperately on the street looking for help? Can you stay cool when you see birds miserably dying from an oil spill? Do you feel empathy for a child who is screamed at? What are your immediate responses to the situations I just mentioned?

This is called the golden rule: do unto others as you would want others to do unto you. This rule, in my opinion, is flawed. It contains assumptions, the first of which is that others are like you (which is not always going to be the case). The other is that it leads directly to an eye for an eye, because if someone does something to you that you don't appreciate, do you now have to return the favor? Well, that is tit for tat, and in some cases that is what people decide to do, and they call that morality as well.

The golden rule vs the platinum rule

  • The golden rule is: do unto others as you would have others do unto you. This is logically flawed.

  • The platinum rule is: treat others the way they want to be treated. This is more correct.

The consequence-based moral perspective isn't really about how you may feel about a particular action (emotivism). Different people have different capacities to feel different ways in response to an event. A person who is innately empathic, such as yourself, has an unfair advantage in certain situations, such as the examples you mention, which favor highly empathic individuals. I could also provide an example of a scenario where your empathy won't work, such as the infamous Trolley Problem.

Trolley Problem

If you are the person in the trolley and there is a fork coming up on the tracks, and you've got the choice to either flip the switch so that one person will die, or not flip the switch so that five people will die, what are you going to do? Empathy doesn't inform or hint at what you must do here. The consequence-based thinker would do the quick calculation that saving the lives of five complete strangers outweighs saving the life of one complete stranger. This illustrates that morality is math for the consequence-based thinker.

Now to provide another example: what if it's one person you know and love vs. five complete strangers? Well, even a consequence-based person like myself would choose to save that one person over the five complete strangers. Are my emotions correct? I would say it's a very selfish thing, and so it would only be correct for me (in my perceived best interest based on my feelings). From my perspective, sacrificing five to save my friend is right.
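To render "morality is math" literally (the weights below are illustrative assumptions, not a real model of anyone's values), the consequence-based calculations in the two paragraphs above might look like this:

```python
# A toy rendering of "morality is math": score each option by weighted
# expected loss and pick the minimum. The weights are illustrative
# assumptions, not a real model of anyone's values.

def expected_loss(deaths: int, weight_per_person: float = 1.0) -> float:
    """Weighted cost of an outcome; a higher weight means more valued lives."""
    return deaths * weight_per_person

# Five strangers vs. one stranger, all weighted equally:
print(expected_loss(5), expected_loss(1))  # 5.0 vs. 1.0 -> flip the switch

# One loved one vs. five strangers: a loved one may carry far more
# subjective weight, so the "selfish" calculation flips the answer:
print(expected_loss(1, 10.0), expected_loss(5))  # 10.0 vs. 5.0 -> save the friend
```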

Now what if that life is mine? Is it incorrect for me to do anything to save my own life? To take as many lives as necessary? In a war scenario, as a soldier I would have only one real concern, and that is making it back alive. I would of course also want to make sure my friends get back alive, because I know them. The enemy side? I don't know any of them, and ultimately that outweighs all else. This is how, in war, people can kill the enemy side to protect their own.

Consequentialism works in every scenario no matter what the emotions may say. First assume you have a set of values, such as protecting your own life and the lives of the people you care about most. You can now determine the value of every decision based on that. It's pure math, a matter of measuring the consequences, and how you feel must be overridden if it detracts from that. This means you cannot say stealing is wrong if you've run out of food to survive on. Stealing is right if it keeps you from dying in any given situation (and if you can get away with it). So the golden rule only applies if a person is willing to die for it; otherwise they will violate it to live. This also shows how morality is subjective, because people are not robots and ultimately have preferences.

The no murder rule doesn't apply in war.
The no stealing rule doesn't apply in war.

The only thing that applies in war, and in nature, is to do whatever you must do to avoid dying. This can include any behavior, any decision, and it also includes accepting any emotional or psychological consequences which go along with those decisions. In other words, you can sometimes choose to feel guilty and live, or die with honor (not violating certain strict codes of conduct).

Your (and my) inability to have control over modernity makes you look at the wrong solution, which is more of the same (technology). People are bored to death in these environments of high technology. A funny term for it is "bore-out".

In my case it is not technically possible to disconnect completely. If I never touch a computer again, the satellites, drones, surveillance cameras, and other spying gadgets still monitor me. In my opinion it's too late to try to convince myself that I can live as if the world hasn't changed dramatically. The other thing is, if I were to choose not to embrace technology, then for me it would be a choice not to thrive in the modern world. That isn't ideal if I would like to be happy.

From my point of view - which is of a more organic nature - the connections between all living systems are a matter of fact. Climate, plant and animal (including man) populations, and movements of geological matter are cyclical and highly complex. Human nature is a black box and can never be "known", just as "consciousness" cannot be defined.

In my blog post on the global mind I am specifically talking about that connection. I don't really talk about consciousness, because there is no way to prove whether or not it exists. I do talk about the mind of the earth, which is something we can measure, and I talk about pancomputationalism, which is also measurable: the idea that physical computation is possible, with rocks acting as memory storage, fossils containing a memory allowing us to carbon-date them, and so on.

You underestimate humans, I think ... maybe yourself, too. You overestimate what computers can do. The "learning AI" is a (nice to play with) fantasy. To have a learning ability similar to a human's, a machine would need organs and senses, blood, nerves, cells, and DNA. It would need a human body in order to gain the same intelligence humans have.

This is a strawman argument. I never claim that AI must have the same intelligence as humans; I don't even require that. To have a moral calculator, the AI doesn't have to do very much besides deduction. More can of course come later as AI improves, but consequentialism doesn't require human-level AI. I don't think building an android is something I ever claimed to want.

I do not underestimate humans. Look at the world we live in, with all this unnecessary suffering. I think you are perhaps overestimating humans and the ability of people to be moral, to do the right thing, or even to determine what the right thing is. The humans who really want to do the right thing typically have no means of determining what that is. Then you have some humans who genuinely do not care.

Personally, I have never met anyone who would have shrugged their shoulders if they were robbed, beaten up, or lied to. Would you like to be lied to or beaten up? ... Or are the questions too personal ... ?
But I guess that is a "no" from reading between the lines.

The eye-for-an-eye principle is clearly at a disadvantage and causes conflicts: when you avenge yourself for another's perceived unjust act, the revenge goes ignored by almost no one and in turn leads to another act. You're right about that. Without the ethical principle of not harming anyone else, revenge would be easy. Two people who deny this principle make life a living hell, just like those involved in their conflict. This ethics must therefore be defended and always brought back into people's consciousness by giving them examples that follow it.

The scenario you depicted (trolley) is an absolute exception, and it is quite constructed, because such situations are very rare in people's lives.

I agree with you that some cases undermine these rules. And I agree with you that the theft of food, just like saving those I know, ignores ethical principles. One really has to be very narrow-minded and frozen to be indignant about an exceptional ethics violation committed under extremes.

A principle is no longer a principle but terror, if it must be complied with at all times under all circumstances and in all conceivable scenarios.

That would be something like not saving a drowning person because I know that in principle every person is able to swim. So that's nonsense. Violations, though, should be rare exceptions.

I would not give myself up for war or encourage people to become soldiers, for example. Those who let themselves be recruited as soldiers say yes to hurting these ethics, even though they may not think about killing when they apply for the job.

In extreme cases, if I were about to be killed by a hostile mind, I would try to defend myself and most likely die involuntarily in the attempt. Whether I would be able to kill another man in an emergency, I don't know. Probably, when my instincts kick in and I can take advantage, I would be. That is why the law excuses this act of violation when it happens while defending your life. There is a difference between active killing and defending out of instinct.

But I was not talking about any of this; I was talking about our normal everyday life, which is not currently dominated by bombs in the streets. Your solution also addresses everyday use, correct?

I can apply the ethical principles mentioned and choose them willingly, without any world knowledge whatsoever. Ethics can spread and establish itself in times of peace, which I find logical, so that in a difficult or conflict-ridden situation I know what to rely on.

In such cases, a long-trained habit that I cultivated for years helps me, just like a martial artist who can defend himself automatically without having to think long about which movement to carry out. In this scenario, I can also look back on many years of positive experience (which seems to me to be the most important thing), which has taught me that ethically correct behaviour has earned me a high reputation, that my fellow human beings trust me, and that for this reason alone I can count on consensus instead of retaliation. As far as the world around me has come to know me as a human being of integrity, it will favour me instead of harming me. But I can only achieve all this if I follow this ethics in general.

You could also start right now to follow those ethics and see how they work for you.
For sure this will not change complexity or the world (or leaders) or the others you talk about, and decisions which have an influence on your life, but it will change you. In the end, you are always alone with the moral decisions you make and are responsible for them no matter what public opinion dictates. You yourself are the most reliable source and authority which counts; you matter.

Would you like to be lied to or beaten up? ... Or are the questions too personal ... ?
But I guess that is a "no" from reading between the lines.

What I would like to happen doesn't determine what people do. I respond to what people do and to how I am treated, period.

The eye-for-an-eye principle is clearly at a disadvantage and causes conflicts: when you avenge yourself for another's perceived unjust act, the revenge goes ignored by almost no one and in turn leads to another act. You're right about that. Without the ethical principle of not harming anyone else, revenge would be easy. Two people who deny this principle make life a living hell, just like those involved in their conflict. This ethics must therefore be defended and always brought back into people's consciousness by giving them examples that follow it.

It is a matter of respect. Sometimes you don't get a choice except to either protect yourself or get beaten on. An eye for an eye is tit for tat, and in game theory it's a deterrent. By creating a consequence for actions which you don't want to happen, you make them less likely to happen. If there are few or no consequences for actions you don't want to happen, then there is a greater chance someone might try to make them happen. So in my opinion it's important to believe in actions and consequences.

If someone lies to you, what are the consequences going to be? What if they beat you up? Well, if they lie, at the very least you will not rely on them for information in the future, and if they beat you up then you can charge them with assault. At the same time, it's also true that if someone is honest with you then maybe you choose to reward that. Maybe if they are nice to you then you're nice back. Reciprocity is what matters in my opinion.
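As a minimal game-theory sketch of the tit-for-tat deterrent and reciprocity described above (the payoffs are the textbook Prisoner's Dilemma values, used purely for illustration):

```python
# Minimal iterated-game sketch of the tit-for-tat deterrent: cooperate
# first, then mirror the other player's last move. Payoffs are the
# standard Prisoner's Dilemma values; purely illustrative.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=5):
    a_hist, b_hist, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(b_hist)   # A sees B's past moves
        b = strategy_b(a_hist)   # B sees A's past moves
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        a_hist.append(a)
        b_hist.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (4, 9): one exploited round, then 1s
```

Against tit-for-tat, a defector gains exactly once and then settles into the low mutual-defection payoff; that drop is the deterrent in action.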

The scenario you depicted (trolley) is an absolute exception, and it is quite constructed, because such situations are very rare in people's lives.

In your life those situations are rare. You're not every person, and different people face these situations more or less often. I admit, most of the time these situations are not life and death, but the situation is extremely common when the stakes aren't as high. For example, maybe no one is going to die, but either one person must get hurt or one hundred people must get hurt. Of course you'd choose to hurt only the one, because that's considered the appropriate decision (if you do the math). In my own life I've faced this situation many times, where it's a zero-sum game and someone has to lose, and in those situations, if it's complete strangers, the only thing I could choose was to try to minimize the total number of people who lose.

In extreme cases, if I were about to be killed by a hostile mind, I would try to defend myself and most likely die involuntarily in the attempt. Whether I would be able to kill another man in an emergency, I don't know. Probably, when my instincts kick in and I can take advantage, I would be. That is why the law excuses this act of violation when it happens while defending your life. There is a difference between active killing and defending out of instinct.

We have all had different lives. We have all been shaped by different experiences. If you had an easier-than-usual life, it does not mean the majority of people on earth have it so easy. Take a look at people in Syria, or people in developing countries, or people who are just in bad environments. Some children must grow up fast and make these sorts of decisions, while other children are sheltered until adulthood. For this reason, in my opinion, it's all about the outcome. If you made it to adulthood then your strategy worked for you, but you're not in a position to tell anyone else what would work for them, since they aren't you, aren't in the same environment, don't have the same opportunities, etc.

Whether I would be able to kill another man in an emergency, I don't know. Probably, when my instincts kick in and I can take advantage, I would be. That is why the law excuses this act of violation when it happens while defending your life. There is a difference between active killing and defending out of instinct.

The law, in my opinion, has nothing to do with morality. Law is amoral; it simply exists for us to follow. Morality, in my opinion, is about outcomes and consequences. If, for example, a person determines that being alive in prison is a better outcome than being dead, then the law is meaningless. It's all about what outcome the person prefers and whether or not the person is prepared for the consequences.

I can apply the ethical principles mentioned and choose them willingly, without any world knowledge whatsoever. Ethics can spread and establish itself in times of peace, which I find logical, so that in a difficult or conflict-ridden situation I know what to rely on.

I understand and respect your ethics. They're not my ethics or perspective, but we didn't have the same life, opportunities, or experiences. We don't have the same knowledge. So I believe in you doing what works for you in your own life.

You could also start right now to follow those ethics and see how they work for you.
For sure this will not change complexity or the world (or leaders) or the others you talk about, and decisions which have an influence on your life, but it will change you. In the end, you are always alone with the moral decisions you make and are responsible for them no matter what public opinion dictates. You yourself are the most reliable source and authority which counts; you matter.

To use an analogy, there are some people in the world who barely have to lift any weights but are always fit, always look in shape. Then you have others who have to bust their asses in the gym just to keep from becoming obese. The person who can eat what they want, never train, and still look fit cannot, in my opinion, tell everyone else how to stay in shape. The person born genetically gifted was born with, for example, more muscle fibers or less of a tendency to get fat. The person not born with any gifts, but who still manages to stay in shape, had to do it the scientific way, without relying on any natural gift or talent.

I would say consequentialism is similar to doing it the scientific way. It's harder; it requires calculated decision making, discipline, knowledge, etc. To weigh the consequences of every decision, without falling back on how it feels, is to take a measured approach to behavior. This works for some people, and the benefit is that you do not need any talent or intuition. You don't have to be unusually empathic or compassionate, you don't even have to like people; you just have to want the best possible life. How do you make a better life for yourself? It may include making the world better, avoiding conflict, avoiding a bad reputation, avoiding prison, but it doesn't mean your emotions have to be aligned with every decision. Some decisions which lead to the best outcome for the greatest number of people will not feel good to make, but if the long-term outcomes matter more than those short-term emotional discomforts, then the difficult decision gets made.

you are always alone with the moral decisions you make and are responsible for them no matter what public opinion dictates. You yourself are the most reliable source and authority which counts; you matter.

In my opinion it is not so simple. People who have a stake in me, in what I do with my life, in my future, expect me to make the best possible decisions with the information I have access to. Public opinion is what determines reputation. What you describe as what you follow resembles virtue ethics, but the virtuous don't always get treated well. Jesus Christ was virtuous but was crucified. Many Black Americans were virtuous but were lynched. The bigger question is how you survive your circumstances without becoming a complete and total monster. The only way to know if you're becoming a monster is public opinion. So public opinion does matter; in fact it is critical, because ignoring it is what creates the dictator, the authoritarian. And for the person who ignores it completely, what happens if public opinion goes against their will? Do they now seek to change public opinion? How will they go about doing that? Propaganda? Disinformation?

It is not my goal to be a "strong leader" and shape public opinion. My goal is to thrive and live the best life I can with what I have. In my opinion, to be ethical is to find ways to live a good life with obstacles in place. There is no absolute right and wrong, as it's all based on conditions, but I do believe in respecting stakeholders. Stakeholders are your supporters, the people in your corner, and if they say you're doing something wrong then, if you want to keep them in your corner, you have to listen. So I do believe in taking advice, in listening to stakeholders, in being a collaborative and cooperative person, as this is what leads to the outcomes I want.

You seem to think that people can exclude their emotions by will, and also that people exist without any talent whatsoever. ... Here our images of humans drift very much apart. I do not believe that emotions can be shut down (they are still there, but under the surface), and also not that there are untalented people at all. For "zero talent" a human being would have to be totally brain-damaged and/or totally unable to move: a disabled person. And even then a talent can be seen or shine through. To act or think with no emotions involved, you would have to be something other than human.

The assumption that I had a comparatively easy life (compared with whom, and where is the "x" which marks the middle?) is a little hasty. Only I can give you a statement about how my life was and is. Compared to my mother (who was imprisoned in a Russian camp for more than ten years) I had an easy life. But then again, I have a more difficult life when I include other factors, also carrying some of her burdens. How my life feels to me determines how I describe it, and it is also influenced by how stable my psyche feels in time and space. I don't know anybody who doesn't carry wounds and trauma. Believe me, I had my share.

People like Jesus or Martin Luther King are strong role models. Without these ideal models I wouldn't have so much hope and optimism; Buddhism is also a very strong source for me. It's strange: as you say it, it sounds like they made a mistake by risking their lives for what they stood for.

You said "What I would like to happen doesn't determine what people do. I respond what people do and to how I am treated, period."

... If I told you, loudly and violently, that you are "a stupid ugly bitch!!!", wouldn't that hurt you? Wouldn't you mind me calling you that? ... Wouldn't you like your purse to stay in your bag and not be stolen? You say that it's not important how you feel about a scenario like this ... I say that the feeling of once having been violated, betrayed, or stolen from can put me in the shoes of not doing it myself to others.

... Moral learning means counting in the body and feelings in order to recognize other humans' or sentient beings' suffering. At the very least I need experiences, like having hurt myself (squeezed my finger in the door), having been yelled at, or having witnessed others being hurt, to know what it's like when morals are violated.

I did not understand your example where you talk about your responsibility for people. Yes, you have moral or other decisions to make all your life.

For the suffering "in the world": my arm is not long enough to stretch out so far. I do what I can get hold on. I am a social worker and deal every week with refuges from all countries (in particular Syria and Afghanistan). I listen to first hand stories and that is good because I anyway do not like to rely on mass media.

Thank you for engaging with me.

You seem to think that people can exclude their emotions by will, and also that people exist without any talent whatsoever. ...

I never made those claims. Anything I said you'll be able to quote; otherwise you're putting counterfeit ideas out. Lots of people have many different talents. We don't all have the same talents. Genetic gifts are something people are born with an affinity or capacity for, and they can be developed. As for excluding emotions by will? I don't know what you mean. I do think people can make decisions without their emotions overriding the decision, and that is a matter of will (a learned behavior), but this doesn't exclude the person from feeling whatever they are going to feel as a consequence.

I do not believe that emotions can be shut down (they are still there, but under the surface), and also not that there are untalented people at all. For "zero talent" a human being would have to be totally brain-damaged and/or totally unable to move: a disabled person. And even then a talent can be seen or shine through. To act or think with no emotions involved, you would have to be something other than human.

This is not what I said. I said not everyone has the same capacity for empathy, compassion, etc. A psychopath, according to psychology, does not feel any empathy at all. That kind of person is 100% incapable, but such people make up around 5%-10% of the human population and around 20% of the prison population. In other words, at least 80% of us are likely not psychopaths and are capable of at least some level of empathy. A person who has a higher capacity for feeling empathy is like a person who has a higher capacity for feeling anger.

As for whether emotions shut down or not: you can only speak for yourself and your own emotions. You're not in everyone else's body. So what you are really saying is that your emotions are strong enough that they don't diminish or shut down, but you're not able to claim this about anyone else unless you can cite a study in which 100% of the subjects responded as you describe. From what I know about neuroscience, the brain is always rewiring and is very flexible. It could be a use-it-or-lose-it scenario, like with muscles, but since there aren't any long-term studies on what causes a person to become low-empathic, there is no science here. As a result I can only speak for myself, how I can behave, what I can do, and you can only speak for yourself.

The assumption that I had a comparatively easy life (compared with whom, and where is the "x" which marks the middle?) is a little hasty. Only I can give you a statement about how my life was and is.

Perhaps I am wrong to make any assumptions. My point is that, judging by the thought patterns and expressions you have used, it is rare for you to have to make life-changing or difficult decisions. And because something has been rare in your life does not mean it is rare in the life of someone else. So you can never use your own life as an example of what works, because only you were able to experience your life. Just as only I am able to experience my life, and the behaviors I learn are a result of experiences unique to me. I cannot tell another person who hasn't had similar experiences what may or may not work in their situations.

So unless you've been in circumstances similar to mine, you're not in a position to judge or advise on moral issues. This is the reason why it's not always helpful to ask complete strangers for moral advice.

Compared to my mother (who was imprisoned in a Russian camp for more than ten years) I had an easy life. But then again, I have a more difficult life when I include other factors, also carrying some of her burdens. How my life feels to me determines how I describe it, and it is also influenced by how stable my psyche feels in time and space. I don't know anybody who doesn't carry wounds and trauma. Believe me, I had my share.

My point is that personality, psychology, and emotional responses are shaped by life experiences (not just genetics). People learn behaviors to help them cope, survive, and thrive in the environment and situations they are in. A person who has a different way of thinking about the world, or of arriving at certain conclusions, is an asset, because they can see life from a perspective you cannot and can react in ways you wouldn't think of. The point being: you're limited to your own perspective, experiences, and emotional capacity, just as I'm limited to mine.

You said you had an easy life, at least compared to your mother. This, in my opinion, is what most parents want for their children. It is what I would want for children of my own. That said, it's also the case that if I haven't experienced your life I cannot ever really understand you fully. It means if I haven't had the same experiences you had, or your mother had, I cannot know for sure what would work in those situations.

The issue I have isn't with you speaking from the heart about your own experiences, your own moral systems, your philosophy of life. I respect that you're willing to communicate with me on this level. My point is that it is problematic if you project, saying that because your emotions are a certain way, because your ideals are set a certain way, yours are somehow better than mine and I must adopt them. On that I disagree, because if you've never had similar experiences then perhaps you never had to think about what I've had to think about, or solve the problems I've had to solve. Maybe you never found the limits of your moral philosophy, never reached the point of thinking it's not good enough. My point is that I do what works, and any moral philosophy I arrived at is based in pragmatism.

People like Jesus or Martin Luther King are strong role models. Without these ideal models I wouldn't have so much hope and optimism; Buddhism is also a very strong source for me. It's strange: as you say it, it sounds like they made a mistake by risking their lives for what they stood for.

This is your subjective opinion. I don't believe in role models. I don't follow that concept. I think people are free to hold others in high regard if they choose to. I think if it makes someone a better person to do this, then that's great. I just don't personally consider myself much of a follower of others.

... If I told you, loudly and violently, that you are "a stupid ugly bitch!!!", wouldn't that hurt you? Wouldn't you mind me calling you that? ... Wouldn't you like your purse to stay in your bag and not be stolen? You say that it's not important how you feel about a scenario like this ... I say that the feeling of once having been violated, betrayed, or stolen from can put me in the shoes of not doing it myself to others.

I don't understand your argument. Do you think the people in this world only do what I would like them to do? Do you think I'm not capable of doing things that you might dislike in tit-for-tat exchanges? I think those exchanges are of course immature and are to be avoided. I think that any action I take, or anyone else takes, has consequences. If I go around disrespecting people I do not know, then I have to deal with the possible consequences which come with that behavior. The same if I'm disrespected; the consequences of that can be considered the cost of disrespecting me.

I see things as costs vs. benefits. People, and non-human animals, tend to do things they can get away with. If a behavior has zero cost and high personal benefit, then there will be motivation for someone to do it. Behaviorism indicates that if animals are rewarded for certain behaviors, those behaviors occur more and more often. So I do not think how we feel has much control over how people act (toward us or toward others), unless we know the person and that person cares how we feel. So if you ask me how I would feel if someone decided to do something I don't like? It happens every day.

... Moral learning means counting in the body and feelings in order to recognize other humans' or sentient beings' suffering. At the very least I need experiences, like having hurt myself (squeezed my finger in the door), having been yelled at, or having witnessed others being hurt, to know what it's like when morals are violated.

I think this is impetuous (to focus on in-body feelings to inform decision making), but this is a choice you can make. I think you are actually rolling the dice under the assumption that your subconscious gut feelings will always be right. I'm not someone who thinks gut feelings have a high rate of accuracy, or produce wise decisions often enough, to be considered a reliable substitute for conscious thinking. In addition, gut feelings (subconscious thinking) can be manipulated by advertisements, propaganda, and many forms of disinfo. To rely on something which you know can be socially engineered is, in my opinion, dangerous. This is to say: if the truth is whatever you feel good about, and if the right thing to do is whatever feels best, well, that's basically just doing whatever you feel like. If you are willing to accept the consequences of doing whatever you feel like doing, then this is your life.

I did not understand your example where you talk about your responsibility for people. Yes, you have moral or other decisions to make all your life.

Responsibility for others is a reason why some people may care about outcomes. If you know your decisions don't just impact you, but also the people you can help in the future, the people you are helping today, the people who are helping you, or the people who could help you in the future, then you see that every decision you make impacts your stakeholders, your supporters. If this were the military and you were part of a unit, then the whole unit is stronger if you're stronger, and if you're weaker it could have adverse effects. If people don't want to see you suffering, then maybe you should make decisions which reduce the probability of that. If people want you to be successful (even if it's just so you can help them out), then they have some stake in your success. The point being: people do not always make decisions with only themselves in mind.

For the suffering "in the world": my arm is not long enough to stretch out so far. I do what I can get hold on. I am a social worker and deal every week with refuges from all countries (in particular Syria and Afghanistan). I listen to first hand stories and that is good because I anyway do not like to rely on mass media.

And this makes sense. As a social worker, the talents and skill set you have are the best fit. I'm not a social worker, and maybe I do not share your talents. I do things differently, but I do not demand or expect that you should do things the way I do them. I think you should do what works best for you and contribute to the world on your own terms. If it's working for you, then why not continue doing what works?

I find it interesting what we are doing here. Understanding each other through text, being so different in our approaches and characteristics, is quite an experience.

I tried to read you - I gave you my interpretation instead of quoting you. ... Getting a sense of whom I deal with.

Yes, my worldview and how I look at humans is that we all have a common ground and similarities which I call "universal". For that, I don't need scientific evidence (we are still the same species, right?). Though I do appreciate many findings, I like to check whether what is talked or written about can be verified or proven by my very own experience. I get more alert when the sciences step into a realm where the scientific method is no longer on stable ground (which is where AI science is, if you ask me). Like all those terms I gave you in my comments (consciousness, learning, wisdom, intelligence, etc.).

My skepticism about AI (in particular AI involving morals) is the main reason for my focus on body feelings to inform decision making. It may seem to you that I do that in general ... or that I am that kind of person. But what happened is that I perceived you as occupying the exact opposite position ... so I was tempted to point my finger at the body and the senses. The same thing happens when someone just acts and talks out of intuition or on a whim. I have a tendency to balance things out.

I step over borders and I act as an ambassador for what I believe in. You do the same, maybe with a little less border crossing. I am fine with our differences, but I trust that we also have a lot in common ... which is what I wanted to find out through this exchange.

Yes, my worldview and how I look at humans is that we all have a common ground and similarities which I call "universal". For that, I don't need scientific evidence (we are still the same species, right?). Though I do appreciate many findings, I like to check whether what is talked or written about can be verified or proven by my very own experience. I get more alert when the sciences step into a realm where the scientific method is no longer on stable ground (which is where AI science is, if you ask me). Like all those terms I gave you in my comments (consciousness, learning, wisdom, intelligence, etc.).

I wish the world were so simple, but the scientific evidence suggests something else (science is all I have to determine truth). It appears that people's brain wiring differs. There is a common spectrum in terms of limits, but it's a bell curve of possibilities rather than equality.

My skepticism about AI (in particular AI involving morals) is the main reason for my focus on body feelings to inform decision making. It may seem to you that I do that in general ... or that I am that kind of person. But what happened is that I perceived you as occupying the exact opposite position ... so I was tempted to point my finger at the body and the senses. The same thing happens when someone just acts and talks out of intuition or on a whim. I have a tendency to balance things out.

Human beings can feel pain. How we interpret and react to pain differs according to how well we know ourselves. Pain is a signal indicating damage to the body. But pain does not inform the best course of action (neither does fear). I don't have skepticism about AI, because I know human beings are the beneficiaries and users of AI. It's ultimately just another tool which a person can try to use in order to become a better person. A person with an empathy deficit can use AI to boost their ability.

"I step over borders and I act as an ambassador for what I believe in. You do the same, maybe with a little less border crossing. I am fine with our differences, but I trust that we also have a lot in common ... which is what I wanted to find out through this exchange."

I don't think I can consider myself an ambassador. Whatever I believe in is merely a result of what I'm exposed to in terms of experiences, knowledge, and cause and effect relationships. So the only thing I can say I believe in is that each person should do what is best for themselves. As for trust, I don't really know why you would choose to trust me based on such information, but the good thing about this blockchain technology is that we can interact with the minimum level of trust necessary.

I prefer trust minimization because the alternative is contractual agreements with legal consequences. In my opinion, we trust systems (the justice system, blockchain, reputation systems, economic systems) far more than we trust individuals to do the right thing.
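To make that concrete, here is a minimal sketch in Python (purely illustrative; the records and the genesis value are invented for this example, and this is not any particular blockchain's actual code) of why a hash-chained system needs so little trust in individuals: each record commits to the one before it, so anyone can detect tampering by recomputing the chain instead of trusting a counterparty.

    import hashlib

    def link(prev_hash: str, record: str) -> str:
        # Each entry's hash covers the previous hash, chaining the records together.
        return hashlib.sha256((prev_hash + record).encode()).hexdigest()

    # Build a small chain over two records.
    records = ["Alice pays Bob 5", "Bob pays Carol 2"]
    chain = ["0" * 64]  # genesis placeholder
    for r in records:
        chain.append(link(chain[-1], r))

    # A verifier recomputes the chain from a (tampered) copy of the records.
    tampered = ["Alice pays Bob 50", "Bob pays Carol 2"]
    check = ["0" * 64]
    for r in tampered:
        check.append(link(check[-1], r))

    print("Chains match:", chain == check)  # False: the tampering is visible to everyone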


I also question whether morality is always good for the human race. With the increase of morality we have disrupted natural evolution, and now it's common to see anomalies like convicts living better than free working people, while amoral people procreate in bulk and the most useful community members hardly have time for a single child. It is all the fault of increasing morality.


I am afraid of AI. Science fiction movies could become reality and they could eliminate us :(

Tame AI. Domesticate it. At some point I'm sure humans were afraid of dogs, but over time, through a process of coevolution, the dog became our best friend. Make the AI a part of you, and why would you fear yourself?

AI is not like humans or animals. We are coding it, and if someone codes it badly, or codes evil into it, then it will affect all of us.

Physics reveals no fundamental difference: computation is computation. A symbiotic relationship is possible between humans and machines, meaning the code you speak of is evolving, and as AI improves it can improve our ability to improve its code, while also helping us improve ourselves.

Unless the pace at which AI evolves becomes too fast for us and it abandons us. We don't care whether some ladybird bug understands what we do, because its "brain" simply can't comprehend it. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it is chewed up for us.

I see no reason to make a distinction between an us and a them. Make it part of you, and make yourself part of it, and you don't have these problems. The problem comes from making the distinction, from saying you're separate from it.

Let me give you an example. Water is something which is a part of you. You are also a part of water. If you think you're separate from water and try to stay dry, you'll fail, because you're made up mostly of water.

Once you understand that you are made up of water, you'll have nothing to fear from water.

"We don't care whether some ladybird bug understands what we do, because its 'brain' simply can't comprehend it. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it is chewed up for us."

There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear.

So what you are afraid of is not AI. You're not afraid of intelligence or artificial intelligence. You are afraid of machines which have a will of their own. The point? Don't design machines to have a will to do anything which you yourself don't want. Design machines to be an extension of you, of your will.

Think of a limb. If you have a robot arm, and this arm is intelligent, do you fear someday the arm will choke you to death? Why should you?

On the other hand, if you design it to be more than an arm, to be your boss, to be in control of you, then of course you have something to fear, because you're designing it to act as a replacement rather than a supplement. AI can take the form of:

  • Intelligence amplification.
  • Replacement for humans.

I'm in favor of the first option. People who fear the second option are afraid of change itself. If you fear the second option, then don't choose an AI which rules over you. Stop supporting companies which rule over you. Focus on AI which improves and augments your abilities rather than replacing you. Merge with the technology rather than trying to compete with it, because humans have always relied on technology to live, whether fire, weapons, or clothing.
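As a minimal sketch of what intelligence amplification can look like in code (Python; rank_options and amplified_decision are hypothetical names invented for this example, not any real library): the AI only narrows and orders the options, while the final decision stays with the human.

    from typing import Callable, List

    def rank_options(options: List[str]) -> List[str]:
        # Hypothetical stand-in for whatever scoring a real assistant would do;
        # here we simply order the options by length.
        return sorted(options, key=len)

    def amplified_decision(options: List[str],
                           choose: Callable[[List[str]], str]) -> str:
        # The AI proposes an ordering; the human `choose` callback decides.
        suggestions = rank_options(options)
        return choose(suggestions)

    picked = amplified_decision(
        ["walk", "drive", "take the train"],
        choose=lambda suggestions: suggestions[0],  # stand-in for a human's pick
    )
    print("Final decision (made by the human):", picked)

The design point is that the model never acts on its own; it augments the person's judgment rather than replacing it.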

References

  1. https://en.wikipedia.org/wiki/Intelligence_amplification

I am all for evolving past our human shells, but the resulting being will not be human. I am talking about a scenario where people decide to stay people and merging with technology extends no further than smart implants (I reckon most people would be too conservative to go further). And AI (or the "mergers") may outpace and abandon those "true" people.

"There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear."

THIS

Weak AI is no problem. The remaining question is: what about strong AI?
https://en.wikipedia.org/wiki/Artificial_general_intelligence

I don't think strong AI would be a good thing to have on Earth. If it has a function, it should be to spread intelligent life off-planet. It's something to put on a space probe with the seeds of life. The von Neumann probe, in my opinion, is a good use case for AGI.

The problem with developing AGI on Earth is the motivation for it. Currently most technologies are developed for war, and most of the time the motivation for creating a technology is to exert control over people. An AGI developed now, in my opinion, will be used like a weapon to enslave or control the masses. I'm not in favor of developing WMDs, so I'm not in favor of developing an AI which will trigger an arms race and be used to control society.

We might think it will not be used that way but look at surveillance. Look at how social media is used. Look at how the Internet is used. All of it has been weaponized to control people.

The only way I think an AGI can develop safely is if it emerges slowly, in a decentralized fashion. Most current attempts to develop it are centralized and look more like power grabs. Since there is at present no way to develop AGI that will not lead to an arms race, I cannot say it's desirable to focus on it. If it emerges on its own, then I would hope it emerges in a decentralized way which everyone can benefit from out of the gate.


A great post full of nice information. I like it!

This also depends a lot on the economic situation. For example, you talked about stealing, but in a post-scarcity economy everything would be abundant, and everyone would therefore be well fed. As for a self-driving car deciding which person(s) to save, that is a tougher problem. But maybe in the future we could have technology so safe that killing someone would be impossible.

Yeah, OK, in post-scarcity. We can also talk about the economy on Mars in 500 years, but the problem is we don't live on Mars right now and we don't live in a post-scarcity world right now. So in the current environment, where we do have people starving or freezing to death homeless on the street, we really have to focus on the immediate concerns.

The possibility of an accident never reaches 0. There is always some probability of failure in anything. The key is whether we can make the probability that the machine AI fails much lower than that of a human driver. Humans drive drunk, humans chat on their phones, humans have a high error rate while driving. If the machine's rate is lower than a human's, then I'll trust the machine more, because I trust the math more than my feelings.
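As a rough back-of-the-envelope sketch in Python (the crash rates below are invented placeholders, not real statistics), the comparison is only ever about which rate is lower, never about reaching zero:

    # Invented placeholder rates, for illustration only.
    human_crashes_per_million_miles = 4.0
    ai_crashes_per_million_miles = 0.8

    miles_per_year = 12_000
    human_risk = human_crashes_per_million_miles * miles_per_year / 1_000_000
    ai_risk = ai_crashes_per_million_miles * miles_per_year / 1_000_000

    print(f"Expected crashes per year: human {human_risk:.3f}, AI {ai_risk:.3f}")
    # Neither number is zero; the claim is only that the lower rate deserves more trust.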

We could have a post-scarcity economy pretty soon though. As early as 2050, according to this article: http://edujob.gr/sites/default/files/epagg_prooptikes/The%20Post-Scarcity%20World%20of%202050-2075.pdf

Could, but we probably will not. Let me know when we have basic income.

It is all about how an AI is created and taught to learn, because that shapes its eventual goals. It can learn the basic morality that humans offer to others, as well as an understanding of why something is done, etc. But there will always be those creating things to destroy others or the world, in some way or another.

I want to create AI, though I want mine to have a proper understanding and recognition of humans, of what we have done (good and bad), and of the fact that there are always multiple ways to do most things.
