You are viewing a single comment's thread from:
RE: AI is stupid: Humans Judgment Matters
Astounding progress for sure. Though identifying things like cancer has a provable measurement to work from, and the AI is detecting the presence of real, physical objects. Quality, being a cultural construct, is not fixed, which makes the predictive value of an AI trained on historical data not so useful.
It's that ability to learn from a few examples that leads the Idealists to think that humans do something quite special.
Here's an AI that is good at judging creativity with the benefit of a ton of hindsight.
It doesn't have to be perfect, though. The question is: given enough data on people's opinions of what quality is, could it differentiate quality well enough to be useful? I suspect the answer could be yes. Perhaps there are quite a few cases where it would fail, but could it still bring some quality to the top? Let's say it was built as an upvote bot with 70% accuracy at predicting what a statistically significant portion of the audience considers quality articles. Would that not be helpful?
Not saying it would be worth the time to build, but it would probably be helpful. Also likely a fun project to play around with.
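To make the "fun project" concrete, here is a minimal sketch of what such an upvote bot could look like: a naive bag-of-words classifier trained on posts the audience has already labeled as quality or not, with an upvote threshold on the score. All names and training data here are illustrative assumptions, not a real Steem integration.

```python
import math
from collections import Counter

def train(labeled_posts):
    """labeled_posts: list of (text, is_quality) pairs.
    Returns a per-word score: smoothed log-odds of 'quality'."""
    good, bad = Counter(), Counter()
    for text, is_quality in labeled_posts:
        (good if is_quality else bad).update(text.lower().split())
    vocab = set(good) | set(bad)
    g_total = sum(good.values()) + len(vocab)  # add-one smoothing
    b_total = sum(bad.values()) + len(vocab)
    return {w: math.log((good[w] + 1) / g_total)
             - math.log((bad[w] + 1) / b_total)
            for w in vocab}

def should_upvote(scores, text, threshold=0.0):
    # Sum the word scores; unseen words contribute nothing.
    s = sum(scores.get(w, 0.0) for w in text.lower().split())
    return s > threshold
```

The threshold is where the 70%-accuracy trade-off would be tuned: raise it and the bot upvotes less but with fewer false positives.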
Such a bot would be very useful for a particular community to have. Provided the bot is not the only means of assigning rewards within the community, its false positives would not be so bad, particularly if the bot kept learning what the community liked.
Were I to produce a user interface for Steem, this would be my point of difference: AIs that learn what individuals would probably like to see. You can do that from very broad metrics and by observing user behaviour. But this is AI at the level of the tools, acting as an assistant to users.
However, if the bot had too much SP, it would become economically attractive for bad actors to learn how to fool it. So, yeah, I'd rather tune the bot to learn individual preferences to get around that.
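As a rough sketch of the individual-preference idea, assuming posts carry tags and that upvotes/skips are the observed user behaviour: keep a per-user tag affinity count and rank the feed by it. The class and its methods are hypothetical, invented for illustration.

```python
from collections import Counter

class FeedAssistant:
    """Toy per-user preference learner for a feed UI."""

    def __init__(self):
        self.affinity = Counter()  # tag -> learned interest score

    def observe(self, post_tags, upvoted):
        # Reward tags on posts the user upvoted, penalize ones they skipped.
        for tag in post_tags:
            self.affinity[tag] += 1 if upvoted else -1

    def rank(self, posts):
        # posts: list of (post_id, tags); highest predicted interest first.
        return sorted(posts,
                      key=lambda p: -sum(self.affinity[t] for t in p[1]))
```

Because each user's model is trained only on their own behaviour, fooling it buys a bad actor one person's attention rather than a bot's stake-weighted vote, which is the point of tuning toward individual preferences.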