Artificial intelligence: a dangerous enemy or a good helper?

in #technology · 6 years ago


The use of artificial intelligence (AI) in everyday life is rapidly gaining momentum, but this trend is increasingly worrying experts. Researchers at New York University have prepared a report warning about the risks of deploying AI. Most of the authors focused on ethical and social issues, as well as on the absence of legal regulation. The report of the AI Now group says little that is new, but the question of limiting the work of artificial thinking is being raised more and more often around the world.



Almost all visionaries of technological progress have spoken of potential AI threats. In November last year, Stephen Hawking voiced the main fears that accompany thoughts about the development of the technology. In his opinion, computers will sooner or later learn to reproduce themselves. Biological creatures will most likely lose to the silicon mind, both in mental ability and in adaptability to the environment, especially on another planet. Sooner or later, smart robots will come to see humans as an obsolete form of life and will want to get rid of them.

Hawking's words still sound like a horror story, retold dozens of times in movies and books, but even those working on the introduction of smart systems have increasingly begun to fear a future with AI, although some consider such fears premature. "Fearing the development of AI today is like worrying about overpopulation on Mars," said Andrew Ng, head of Google Brain. His reassurance, however, did not convince everyone.



Among the biggest phobias associated with the introduction of computers into all areas of life, three stand out. The first is that computer intelligence will be used as a weapon of war, one that will far surpass even nuclear weapons in destructive power. The second is that AI will enslave humanity. The third is that, like any human creation, the computer is error-prone. Each of these points deserves a closer look.

On August 21, 2017, leading experts in the research and implementation of artificial intelligence, including the heads of companies such as SpaceX and DeepMind, sent an open letter to the UN. In the text, they warned the world community against developing lethal autonomous weapons, be they robots or other mechanisms designed to kill the enemy. The authors of the document drew attention to the fatal consequences of using such weapons. Given their phenomenal efficiency, it is fair to ask whose hands they might fall into: both a narrow circle of elites and terrorists could use them.

The authors of the letter called on the UN, and with it the scientific community, to define the limits of malicious use of AI. Responsibility for one's own creation obliges its makers to work seriously on preventing risks. In addition to laws, the creators themselves should limit the power of robots and puzzle over how to shut a machine down in extreme cases.

The political abuse of electronic systems has already shown itself in the example of social networks and data-analysis algorithms. This spring, a scandal involving Cambridge Analytica broke out around the world. Its experts conducted an in-depth analysis of user data and placed advertisements on Facebook individually tailored to each person.



The company's employees not only crossed ethical boundaries but also applied modern technologies whose inner workings were impossible to analyze. The effectiveness of "machine learning" is a topic that comes up again and again among leading mathematicians. They admit in unison that they are amazed by the extraordinary success of computer programs yet completely unable to understand how these complex systems work. In Myanmar in April of this year, Facebook posts also caused mass unrest, but unlike in the United States, where matters were limited to protest actions, in the Asian state there was a massacre that claimed several hundred victims. Manipulating huge numbers of people is already a reality, and here the robots can play a cruel joke on us.



It is worth remembering the same Facebook, where AI-based chatbots were launched. The virtual assistants were taught to conduct a dialogue with an interlocutor. Over time, the bots became indistinguishable from real people, and then the authors decided to let the robots talk to each other. After a while, the bots began to abbreviate lexical structures and exchange gibberish. The media inflated the news into a sensation, proclaiming that "the machines have revolted." But setting aside the journalists' exaggerations, it is fair to admit that when machines start communicating with each other, a person may not even notice. And no one knows what kind of life of their own they might lead there.

The complex structure of computer intelligence distances us further every day from understanding how it works. And while most algorithms do cope with their tasks, even today complex machines are far from ideal and make mistakes. For the further development of AI, it is important to understand not so much its strengths as its vulnerabilities. This is the focus of a large number of research groups, including that of MIT specialist Anish Athalye. Just a couple of days ago, he told reporters about the most common mistakes of image-recognition systems.

His colleagues showed objects to the machine and found that individual objects were often perceived incorrectly by electronic vision. A computer could call a baseball a cup of coffee, and a turtle printed on a 3D printer a rifle. The group has already collected about 200 items that mislead the algorithms.

Instead of perceiving an object as a whole, the artificial intelligence concentrated on its individual parameters. AI usually learns from "ideal" samples; when confronted with phenomena that do not fit the norm, it cannot always abandon its usual processing routine. Instead of admitting that it cannot process the image, it keeps trying to read it, which sometimes leads to ridiculous results. Instead of the shape of the turtle, the electronic brain tried to make sense of its texture, which resembles camouflage. For roughly the same reason, the autopilots of self-driving cars are not yet 100% reliable: it is difficult for the machine to see a silhouette and infer that it consists of individual elements.



And even if such shortcomings can be corrected in the coming years, there is no guarantee that hackers will not exploit the vulnerabilities in the meantime. Crackers of electronic devices are today perhaps the main source of fear. A small team of programmers can not only gain access to personal information but also reconfigure autonomous systems, seizing control of tools of colossal power. And then none of us will be spared. But the main conclusion to be drawn is perhaps the traditional one: it is not machines we should fear, but people.


All images are borrowed from pixabay
