
RE: Response to Dan Larimer's "The Limits of Crypto-economic Governance"

in #news 7 years ago

I know how dangerous A.I. can be, because it can be programmed to specifications.
I don't know how dangerous humans can be, because human emotion is unpredictable.

One can be controlled. The other is irrational.

Guess which one is more dangerous.

A.I. can be tamed to run like an autistic brain, not like something that evolves on its own. There can be many kinds of A.I.


The problems begin with deep learning, when A.I. can evolve (auto-program itself). Yes, humans are irrational, but with thirty centuries of hindsight one can say that they are irrational in an almost bounded way. The boundary is a bit fuzzy, but not completely non-deterministic: there is a large envelope within which you can be pretty sure the whole of human irrationality fits (with 8-9 sigma probability).
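For a sense of scale, here is a minimal sketch of what that claim means if taken literally. It assumes the "sigma" above refers to a normal distribution, which the comment doesn't actually specify:

```python
import math

# Two-sided tail of a standard normal: P(|Z| > k) = erfc(k / sqrt(2)).
# Assumes the "8-9 sigma" envelope above refers to a normal distribution.
def two_sided_tail(k: float) -> float:
    """Probability that a standard normal variable lands more than k sigma out."""
    return math.erfc(k / math.sqrt(2))

for k in (3, 6, 8, 9):
    print(f"{k} sigma: P(outside envelope) = {two_sided_tail(k):.2e}")
# 8 sigma leaves roughly 1e-15 of the mass outside; 9 sigma roughly 2e-19.
```

So read literally, the claim is that essentially no human behavior ever falls outside the envelope.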

During the Cold War, a nuclear missile failed to launch due to one person's insubordination. It is easy to speak in hindsight when everything rolls out well.

An A.I. can be programmed with levels of function. Humans cannot.

A.I. can be just as unpredictable as humans. It is very hard to comprehensively imagine how dangerous A.I. could be; for the most part, it is not the obvious things that A.I. are programmed to do that are dangerous, it is the unintended consequences of directives that seem normal to us humans but result in the classic paper-clip-maximizer problem (sketched below). You claim that "Human emotion is far more dangerous than artificial intelligence." This seems like a statement you should have less confidence in. Human emotion is dangerous, and A.I. is dangerous. And we know a hell of a lot more about how and why human emotion is dangerous than about the dangers of A.I.
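Since the paper-clip maximizer comes up, here is a toy sketch of the idea. Everything in it (the function name, the resources, the conversion rate) is invented for illustration; it is not a model of any real system. The point is only that an objective which counts paperclips, and nothing else, preserves nothing else:

```python
# Toy illustration of the paper-clip-maximizer idea: an agent told only to
# "maximize paperclips" happily consumes every other resource, because nothing
# in its objective says those resources matter. All names here are invented.

def greedy_paperclip_agent(steel: float, forests: float, cities: float) -> dict:
    """Convert every available resource into paperclips; the objective sees
    nothing else, so nothing else is preserved."""
    world = {"steel": steel, "forests": forests, "cities": cities}
    paperclips = 0.0
    for resource in list(world):
        paperclips += world[resource] * 1000  # any matter -> more clips
        world[resource] = 0.0                 # side effect the objective ignores
    world["paperclips"] = paperclips
    return world

print(greedy_paperclip_agent(steel=10.0, forests=5.0, cities=3.0))
# {'steel': 0.0, 'forests': 0.0, 'cities': 0.0, 'paperclips': 18000.0}
```

The danger isn't that the code misbehaves; it does exactly what it was specified to do, which is the point above.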

You say "The AI. can be tamed to run like an autistic brain. Not like something that it evolves. There can be many kinds of AI."
This is confusing, especially as you almost contradict yourself by saying "There can be many kinds of AI" at the end. Yea, one of those kinds, is A.I's that evolve, and another is those you can't "tame". And yes, people are building both kinds.

This issue is very complicated, and clearly we want some balance of human influence/goals and A.I. in our governance systems, as I'm sure both Dan and Vitalik would agree (yes, the balance would differ based on their approaches). But reading your simplistic interpretation of this issue, I can't help but agree with you: whatever emotions guided you to write this post are dangerous, much more dangerous than whatever A.I. you'll probably never make.

Yet you don't talk about the off chance that the programming fails and/or glitches out. How likely is it that something doesn't act according to its programming, or does so only in a very autistic and obtuse way?