Ideas from Edge about artificial intelligence and human culture
Summary and commentary with some ideas about artificial intelligence and human culture from The Human Strategy: A Conversation With Alex "Sandy" Pentland [10.30.17] via Edge.
Introduction
I think I've mentioned before that I'm a long-time fan of the Edge website. In this week's e-mail newsletter, they linked to the article, The Human Strategy: A Conversation With Alex "Sandy" Pentland [10.30.17]. Pentland is a professor of computer science at MIT, director of MIT's Connection Science and Human Dynamics labs, and author of "Social Physics." The articles and videos in their newsletters typically require a substantial time investment, so I can't always take the time to watch or read them, but I'm glad I clicked through to this one.
In this wide-ranging article and video, Pentland weaves together some of his ideas about artificial intelligence (AI), culture, human behavior, and government in a fascinating way.
On AI, Pentland discusses the current state of AI as a network of "dumb" nodes that yields a brute force paradigm that's dependent on massive amounts of data. He discusses the possibility of improving it by creating a mesh of smarter nodes that have contextual awareness.
On culture and human behavior, he talks about how evolution can be a process of exploration and exploitation. Individuals explore by searching for popular ideas, and innovate by copying and adapting those ideas for their own needs. He also brings to light the concept of "Social Physics," or the use of big data to build a computational and predictive theory of human behavior.
In his discussion of government, Pentland talks about the need for transparency and more granular oversight, and he makes the insightful observation that - functionally - government regulators and other bureaucracies really aren't very different from AI implementations.
The article is available in both text and video formats, but I didn't find a way to embed the video here, so you can click through to read, watch, or both.
I will offer summaries and commentary on each of the above topics throughout the remainder of this article.
[Image Source: Pixabay.com, License: CC0, Public Domain]
Section 1: Artificial Intelligence
AI is a recurring point of commentary throughout the article. Pentland notes that today's AIs make use of dumb neurons to survey massive amounts of data and learn useful patterns. He also points out that because the neurons use linear processing, training requires millions of examples, and the resulting models do not generalize well to new uses. He describes the mechanism that accomplishes this as a "credit assignment function." According to him, the credit assignment function takes "stupid neurons, these little linear functions, and figure out, in a big network, which ones are doing the work and encourage them more."
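To make the credit assignment idea concrete, here is a minimal sketch (my own illustration, not code from the article): gradient descent on a single linear unit acts as a credit assignment function, measuring how much each input weight contributed to the error and encouraging the weights that are doing useful work.

```python
def train_linear(xs, ys, lr=0.01, epochs=500):
    """Tiny linear unit trained by stochastic gradient descent.

    The gradient plays the role of the credit assignment function: for each
    weight it measures how much that input contributed to the error, and the
    update encourages the weights that are doing the work.
    """
    w = [0.0] * len(xs[0])
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # Each weight's share of the blame is err * its own input.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Target function: y = 2*x0 + 0*x1; the second input deserves no credit.
data_x = [[1, 1], [2, 1], [3, 1], [4, 1], [0, 1]]
data_y = [2, 4, 6, 8, 0]
weights = train_linear(data_x, data_y)
print([round(wi, 2) for wi in weights])  # weights converge close to [2, 0]
```

Note how the "useless" second input is gradually discouraged toward a zero weight; that, scaled up to millions of parameters, is roughly the brute-force learning Pentland is describing.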
This model suffers from several shortcomings: it doesn't generalize well, it isn't contextualized, and it behaves as a sort of "idiot savant."
Contrasting with that, he offers the idea of better AIs whose neurons use smarter algorithms to make better decisions with less data.
In the first example of this, Pentland talks about physics. He claims that if neurons are programmed with specialized knowledge of physics, AIs can "take a couple of noisy data points and get something that's a beautiful description of a phenomenon."
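As a toy illustration of what building physics into the model buys you (my own sketch, with made-up numbers, not from the article): if the functional form is known, a couple of noisy data points really are enough to recover the underlying parameter.

```python
def fit_initial_velocity(times, heights, g=9.8):
    """Least-squares fit of h(t) = v*t - (g/2)*t^2 with one free parameter v.

    Because the physics (the functional form) is built in, a couple of noisy
    samples suffice to recover a good estimate of the initial velocity v.
    """
    # Move the known gravity term to the other side: h + (g/2)*t^2 = v*t,
    # then solve the one-parameter least-squares problem for v.
    num = sum(t * (h + 0.5 * g * t * t) for t, h in zip(times, heights))
    den = sum(t * t for t in times)
    return num / den

# Two slightly noisy observations of a throw with true v = 20 m/s.
t_obs = [1.0, 2.0]
h_obs = [15.2, 20.1]  # exact values would be 15.1 and 20.4
v_est = fit_initial_velocity(t_obs, h_obs)
print(round(v_est, 1))  # → 19.9 (true value: 20.0)
```

A generic learner with no knowledge of the parabolic form would need far more than two points to pin down this curve; the physics prior does most of the work.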
In a second example, Pentland talks about neurons that use the newly emerging science of social physics to model human decisions. Pentland describes it as something like Skynet, but one "that's really about the human fabric."
Eventually, Pentland extends this to the idea of even using humans as the neurons, and asks, "...what's the right way to do that? Is it a safe idea? Is it completely crazy?" On one hand, this is a radical idea that's reminiscent of The Borg or The Matrix, but on the other hand it's not. Maybe it all depends on implementation. Recently, I offered this in a comment here on Steemit:
Of course, Kurzweil's vision has always been of the dominant human-machine hybrid, which I still think is plausible. Maybe a machine can out-think one person, or ten or twenty, but can a machine out-think an entire society with brains linked together via high speed interconnects?
I imagine a future global supercomputer which is a conglomeration of human and machine intelligence supplemented by mostly mechanical labor. Maybe "going to work" means plugging in a head-set and renting our intellects to the Matrix for the day.
It's worth noting that teams of humans aided by computers have beaten the best AIs in freestyle chess tournaments.
Two notable points that caught my attention while reading and listening were:
- His mention of distributed Thompson sampling, which he said is "combining evidence, of exploring and exploiting at the same time," and "has a unique property in that it's the best strategy both for the individual and for the group." He went on to note that when you use this technique for selection, selecting the best groups also selects the best individuals; and when individuals act in their own best interest, they're also acting in the group's best interest.
- It occurred to me that, when applied to social networks, his credit assignment function is superficially similar to Steemit's reputation system and voting aggregations.
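For readers unfamiliar with it, here's a minimal sketch of classic (non-distributed) Thompson sampling on a Bernoulli bandit, my own illustration of the explore-and-exploit-at-the-same-time property rather than code from the article:

```python
import random

def thompson_sampling(true_rates, n_rounds=5000, seed=42):
    """Beta-Bernoulli Thompson sampling for a multi-armed bandit.

    Each arm keeps a Beta(successes+1, failures+1) posterior. Every round we
    sample a plausible win rate from each posterior and pull the arm with the
    highest sample, so exploration and exploitation happen in a single step.
    """
    rng = random.Random(seed)
    successes = [0] * len(true_rates)
    failures = [0] * len(true_rates)
    for _ in range(n_rounds):
        # Sample a candidate win rate from each arm's posterior.
        samples = [rng.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        arm = samples.index(max(samples))
        # Pull the chosen arm and update its posterior with the outcome.
        if rng.random() < true_rates[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return successes, failures

wins, losses = thompson_sampling([0.2, 0.5, 0.8])
pulls = [w + l for w, l in zip(wins, losses)]
print(pulls)  # the 0.8 arm ends up with the overwhelming majority of pulls
```

Early on, the wide posteriors make the sampler try everything; as evidence accumulates, the posteriors sharpen and pulls concentrate on the best arm, with no separate explore/exploit phases to tune.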
Which leads into another one of his topics, culture.
[Image Source: pixabay.com, License: CC0, Public Domain]
Section 2: Culture
A simple explanation of Thompson sampling, when applied to culture is this:
- Look at what other people do.
- If it looks useful, copy it.
It's simple, but remarkably effective for innovation and progress. Good ideas spread and bad ones get lost in the crowd.
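Those two rules are easy to simulate. In this toy model (my own sketch, with made-up parameters), each agent looks at a few peers and copies whatever strategy pays best, and the population's average payoff quickly climbs toward the best idea in circulation:

```python
import random

def copy_the_successful(n_agents=200, n_rounds=30, seed=7):
    """Toy model of 'look at what others do; if it looks useful, copy it.'

    Each agent holds a strategy represented by its payoff. Every round, each
    agent samples a few peers and adopts the best-paying strategy it sees.
    """
    rng = random.Random(seed)
    # Start with a random spread of ideas of varying quality.
    strategies = [rng.random() for _ in range(n_agents)]
    for _ in range(n_rounds):
        new = []
        for mine in strategies:
            peers = rng.sample(strategies, 3)  # look at what other people do
            new.append(max(peers + [mine]))    # copy whatever pays best
        strategies = new
    return strategies

final = copy_the_successful()
print(round(sum(final) / len(final), 3))  # mean payoff climbs toward the best idea
```

The same dynamic also shows the failure mode Pentland worries about: if the observed payoffs are distorted (fake news, propaganda, advertising), the population converges on whatever merely looks best.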
The limitation is that three things are required: trusted data, known and monitored algorithms, and fair assessment of behavior. According to Pentland, fair assessment of behavior doesn't exist yet; fake news, propaganda, and advertising all work against it. Accordingly, one of Pentland's objectives is to identify facts that everyone can agree on, like the US census, for example. This paragraph gives some insight into how he's thinking about it:
A common test I have for people that I run into is this: Do you know anybody who owns a pickup truck? It's the number-one selling vehicle in America, and if you don't know people like that, that tells me you are out of touch with more than fifty percent of America. Segregation is what we're talking about here, physical segregation that drives conceptual segregation. Most of America thinks of justice, and access, and fairness as being very different than the typical, say, Manhattanite.
He extends this point by saying that economic segregation is on the rise around the world, and that in almost all places, the top quintile and bottom quintile of society never see each other.
He has more to say about extreme wealth, noting that Europe has deeply entrenched wealth and power, whereas America has so far resisted that entrenchment. In America, he says, "If you win the lottery, you make your billion dollars, but your grandkids have to work for a living," which brings to mind the old axiom about there being three generations from shirt sleeves to shirt sleeves. As I recall, de Tocqueville also remarked on this difference in his Democracy in America.
He didn't mention it, but it occurs to me that, by preventing copying, intellectual property rights may also impede Thompson sampling from operating effectively in human societies.
In order to promote human cultural intelligence, Pentland is working on open algorithms and open data. He reports interest and support from governments in surprising places, including Europe, China, Latin America, and Africa, with projects in fields like health care and economics.
Which segues into his discussion of government.
[Image Source: pixabay.com, License: CC0, Public Domain]
Section 3: Government
To me, one of the most surprising aspects of this article was the observation that regulators, bureaucracies, and other parts of government operate very much like instances of artificial intelligence. Information goes in, where it is sent through a variety of rules, bureaucratic hierarchies, and processes. After this processing, decisions come out. These decisions have real-world impact, and in practical terms, there is little oversight. We get to cast a vote every year or two or four.
From this observation, Pentland argues that more transparency is needed and that controls need to be far more granular. How can we know if our court system works, he asks, if we have no reliable data?
He further notes that the digitization of media has led to a media that is failing us, and that when society doesn't have a trusted institution providing accurate information, its citizens are subject to manipulation. Additionally, he notes that notions of justice have changed from informal and normative to formal, and that legal systems are failing as a result of that shift.
This is the area of Pentland's commentary that I thought was the most speculative. It's hard to know if things are really as different as all that in the digital age, or if we are just more aware of it, because all of the blemishes have become more transparent when information flows at the speed of light.
At any rate, it's hard to disagree with his desire for transparency and more granular control of the bureaucracies.
Conclusion
There's far more in the original article than I could adequately cover in this summary, so I emphatically recommend that you click through and read the original or listen to the video.
In my opinion, Pentland gave a fascinating, if wandering, discussion of AI, culture, and governance, and I especially intend to learn more about distributed Thompson sampling and Social Physics.