RE: Dan Larimer's "The Limits of Crypto-economic Governance"
AI can be just as unpredictable as humans. It is very hard to comprehensively imagine how dangerous AI could be. For the most part, it is not the obvious things an AI is programmed to do that are dangerous; it is the unintended consequences of directives that seem reasonable to us humans but result in the classic paperclip-maximizer problem: an AI told only to maximize paperclip production converts everything it can reach into paperclips. You claim that "Human emotion is far more dangerous than artificial intelligence." That is a statement you should hold with less confidence. Human emotion is dangerous, and AI is dangerous, and we know a hell of a lot more about how and why human emotion is dangerous than we do about the dangers of AI.
You say "The AI. can be tamed to run like an autistic brain. Not like something that it evolves. There can be many kinds of AI."
This is confusing, especially as you almost contradict yourself by saying "There can be many kinds of AI" at the end. Yeah, one of those kinds is AIs that evolve, and another is AIs you can't "tame". And yes, people are building both kinds.
This issue is very complicated, and clearly we want some balance of human influence and goals with AI in our governance systems, as I'm sure both Dan and Vitalik would agree (though the balance each would strike differs based on his approach). But reading your simplistic interpretation of this issue, I can't help but agree with you on one point: whatever emotions guided you to write this post are dangerous, much more dangerous than whatever AI you'll probably never make.