Towards a decentralized abuse-resistance framework for the Steem blockchain

in Suggestions Club · 9 months ago (edited)

This post sketches out a possible framework for decentralized abuse resistance on the Steem blockchain.


Background


Image by Gemini; prompt: "Draw a techno-utopian pyramid structure with three layers, shaded from dark at the bottom to light at the top with a very bright light above it."

Back when I was copying forward my long-term goals in my programming diary posts, one implementation goal that I kept writing, week after week, was this:

A protocol and framework that I'm bouncing around in my head for decentralized abuse measurement and resistance; (note to self, I'd better write this down before I forget what I have in mind...)

Fortunately, 4 months later, I haven't forgotten it yet 😉. So, today (instead of working on programming 😞), I thought I'd finally write it down.

Introduction

During my almost 8 years on the Steem blockchain, one topic that never goes away is abuse. Although I am not a fan of downvotes, I have reluctantly agreed with the apparent majority that downvoting is the best available mechanism for curtailing abuse under the current rewards distribution rules.

However, we have also seen that downvoting can, itself, become a form of abuse, and this has generally led to a light touch for moderators who want to retain users and investors. Unfortunately, this light touch implies a correspondingly high tolerance level for abuse on the blockchain.

Despite its presence in the title, however, I'm going to try to avoid the word "abuse" for the rest of this article. For the purposes of this framework, an article has exactly two states: "overvalued" and "undervalued". At the end of the day, the goal is to get posts as close as possible to the correct valuation, whether that valuation is $0 or $50K. For purposes of rewards, it almost doesn't matter whether the post is overvalued because it's "abusive" or just because it's badly written. (We may want to take additional actions, such as muting, for truly abusive posting, but that's outside the scope of a rewards discussion.)

So, the hard question is how to get the right level of moderation that reduces plagiarism, spam, etc. but doesn't drive away users or investors. Over the years, I have imagined a number of general guidelines that I think need to be considered:

  1. Undervaluing of posts is just as harmful as overvaluing of posts. The goal should not be "eliminating abuse". The goal should be "valuing every post correctly".
  2. Moderation of overvalued posts cannot rely on altruism. It must be rewarded.
  3. Measurement should be separated from enforcement.
  4. Measurement should be used to track improvement (or lack thereof) over time.
  5. Participants in the measurement system should be protected from retaliation.
  6. Enforcement starts with the top-tier stakeholders.
  7. It should be as fun as possible for participants.
  8. It should be competitive.

So, those are some of the ideas that stand behind the framework that I've been imagining. In this framework, there are three leading roles: the surveyor, the analyst, and the enforcer. Read on to hear more about each.

The surveyor


Image by Gemini; prompt: "Draw a picture of a video game character for a blockchain surveyor"

In one of my previous articles, I described this role as a sentinel, but for today let's call it a surveyor, instead.

The surveyor's job is simple. They review posts on the blockchain, and categorize them as "undervalued" or "overvalued". This categorization is saved in the form of custom_json transactions. I'm imagining multiple possibilities for how this might work:

A solitaire game: form 1

An app is developed that shows random posts to a surveyor, one post at a time, and the surveyor decides whether the post is overvalued or undervalued. Then, a custom_json transaction is saved to record the current value (at time of decision-making) and the surveyor's assessment.

After post payout, the app can compare the surveyor's opinion against the actual payout and the opinions of other surveyors, and game points can be awarded based on how well their opinion matched the consensus.
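
To make the scoring idea concrete, here is a minimal Python sketch of how a game might award points after payout. Everything here is illustrative: the point values, the tie-breaking rule, and the assumption that a drop from the value-at-review to the final payout counts as confirmation of "overvalued" are all design choices still to be made.

```python
# Hypothetical scoring for "form 1"; names, thresholds, and point values are illustrative.
from dataclasses import dataclass

@dataclass
class Assessment:
    surveyor: str
    value_at_review: float   # pending payout when the surveyor judged the post
    assessment: str          # "overvalued" or "undervalued"

def consensus(assessments):
    """Majority opinion among the surveyors who reviewed the same post (ties -> undervalued)."""
    over = sum(1 for a in assessments if a.assessment == "overvalued")
    return "overvalued" if over * 2 > len(assessments) else "undervalued"

def score(a, final_payout, group_consensus):
    """Award points for agreeing with the eventual payout and with the other surveyors."""
    points = 0
    # Treat a falling payout as evidence that the post really was overvalued at review time.
    payout_direction = "overvalued" if final_payout < a.value_at_review else "undervalued"
    if payout_direction == a.assessment:
        points += 2
    if group_consensus == a.assessment:
        points += 1
    return points

reviews = [Assessment("alice", 9.43, "overvalued"),
           Assessment("bob", 9.51, "overvalued"),
           Assessment("carol", 9.60, "undervalued")]
c = consensus(reviews)
for r in reviews:
    print(r.surveyor, score(r, final_payout=3.10, group_consensus=c))
# alice 3, bob 3, carol 0
```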

A solitaire game: form 2

This is the same as "form 1", with one exception. The surveyor is not making their opinion public. Instead, they are working in partnership with a single analyst (see the next section), and their opinion is encrypted using the analyst's Memo key so that no one else can see it. It's possible for the public to see that the surveyor is playing the game, but it's not possible to know what posts they evaluated or how they decided.
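
Here's a rough sketch of how a "form 2" app might package the private evaluation. The encrypt_for_memo_key helper is hypothetical; it stands in for whatever memo-encryption routine the surveyor's tooling actually provides, and the operation name and field layout are only illustrative.

```python
# Sketch of a private ("form 2") evaluation: only the partnered analyst can read it.
import json

def build_private_evaluation(analyst_memo_pubkey, evaluation, encrypt_for_memo_key):
    """Wrap an encrypted evaluation in the same custom_json envelope used by form 1."""
    ciphertext = encrypt_for_memo_key(analyst_memo_pubkey, json.dumps(evaluation))
    return {
        "id": "mod_eval_post",
        "json": json.dumps(["overunder-private", {"payload": ciphertext}]),
    }

# The public can see that the surveyor broadcast *something*, but not which post
# was evaluated or what the verdict was. A stand-in "encryptor" is used for the demo.
demo = build_private_evaluation(
    "analyst-memo-public-key",
    {"author": "example-author", "permlink": "example-article",
     "value": "9.43", "assessment": "overvalued"},
    encrypt_for_memo_key=lambda key, text: "<ciphertext for " + key + ">")
print(demo)
```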

A two player game

Think of a dating app. Random posts are shown to two random players at the same time, and the results are only saved as custom_json transactions if both players agree about whether the post is overvalued or undervalued (swipe-left, swipe-right). Again, the saved values would include the post's value at the time when the decision was made, and the over/under-valued assessment of the two game players. In this case, the players could earn scores in terms of their ability to match with random partners, and also with the post's eventual payout value.
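
A small sketch of the matching rule just described: nothing is written to the blockchain unless the two randomly paired players agree. The operation name and field layout are illustrative, not a settled protocol.

```python
# Two-player ("swipe") flow: record the evaluation only when both players agree.
import json

def record_if_matched(post, player_a, vote_a, player_b, vote_b):
    """Return a custom_json payload if the two players agree, otherwise None."""
    if vote_a != vote_b:
        return None  # no match: nothing is saved
    return {
        "id": "mod_eval_post",
        "json": json.dumps(["overunder-duo", {
            "players": [player_a, player_b],
            "author": post["author"],
            "permlink": post["permlink"],
            "value": post["pending_payout"],   # value at decision time
            "assessment": vote_a,
        }]),
    }

tx = record_if_matched(
    {"author": "example-author", "permlink": "example-article", "pending_payout": "9.43"},
    "player1", "overvalued", "player2", "overvalued")
print(tx)  # both agreed, so a payload is produced
```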

In all cases

Players could opt for additional privacy in order to avoid retaliation by using alternate, pseudonymous accounts.

As we'll see in coming sections, these players would, ultimately, be rewarded in the form of beneficiary rewards, and the amount would be based on their contribution towards returning rewards from overvalued posts into the rewards pool.

A custom_json transaction might look something like this (copied and edited from @moecki, here):

{
  "required_auths": [],
  "required_posting_auths": [
    "example"
  ],
  "id": "mod_eval_post",
  "json": "[\"overunder\", {\"account\": \"social\", \"author\": \"example-author\", \"permlink\": \"example-article\", \"value\": \"9.43\", \"assessment\": \"overvalued\"}]"
}
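
For what it's worth, broadcasting such a transaction from a surveyor app could look roughly like this, assuming the beem Python library (other Steem libraries would work similarly); the account name, key, and node are placeholders.

```python
# Broadcasting the evaluation above as a custom_json operation, using beem.
from beem import Steem

stm = Steem(node=["https://api.steemit.com"],
            keys=["<surveyor posting key>"])   # posting authority is sufficient

evaluation = ["overunder", {
    "account": "social",
    "author": "example-author",
    "permlink": "example-article",
    "value": "9.43",
    "assessment": "overvalued",
}]

stm.custom_json("mod_eval_post",
                evaluation,
                required_posting_auths=["example"])
```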

The analyst


Image by Gemini; prompt: "Draw a picture of a video game character for a blockchain analyst."

The analyst would be responsible for reviewing the custom_json transactions from the surveyors and consolidating them into actionable posts that enforcers could later use to apply downward adjustments to posts identified as (probably) overvalued.

These posts would have beneficiary settings that redirected some of the rewards to the surveyors who provided the raw data, in proportion to the amount that was returned to the rewards pool for each surveyor. The remaining portion of the post's reward would stay with the analyst.

Clearly, this is all too complicated for manual execution, so the analyst would need to be supported by the development of programs for collecting the data and determining the proper beneficiary reward sizes. Some portion of the beneficiary rewards could also be distributed to support development.
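
As one hedged sketch of what that tooling might compute: given an estimate of how much each surveyor's flags helped return to the rewards pool, the analyst's post could set Steem beneficiary weights (expressed in basis points, where 10000 = 100%) proportionally. The 50% analyst share and the attribution method are placeholders.

```python
# Hypothetical analyst tool: size beneficiary weights in proportion to the
# rewards each surveyor helped return to the pool. All numbers are illustrative.
def beneficiary_weights(returned_by_surveyor, analyst_share=0.5):
    """returned_by_surveyor maps account -> value (in STEEM) attributed to that surveyor's flags."""
    total = sum(returned_by_surveyor.values())
    if total == 0:
        return []
    pool_for_surveyors = int((1 - analyst_share) * 10000)  # weights are in basis points
    weights = []
    for account, amount in sorted(returned_by_surveyor.items()):
        weight = int(pool_for_surveyors * amount / total)
        if weight > 0:
            weights.append({"account": account, "weight": weight})
    return weights  # whatever is not assigned here stays with the analyst (and/or developers)

returned = {"surveyor-a": 12.0, "surveyor-b": 6.0, "surveyor-c": 2.0}
print(beneficiary_weights(returned))
# [{'account': 'surveyor-a', 'weight': 3000},
#  {'account': 'surveyor-b', 'weight': 1500},
#  {'account': 'surveyor-c', 'weight': 500}]
```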

As with surveyors, the analyst could also opt for additional privacy in order to avoid retaliation by using alternate, pseudonymous accounts.

The enforcers


Image by Gemini; prompt: "Draw a picture of a video game character for a blockchain enforcer."

One thing that the Steem whitepaper clearly got wrong was the role of crowd downvoting in support of abuse resistance. What we have found over the course of many dysfunctional "downvote wars" is that retaliation wins the day, and the largest stakeholders have the final say.

So, the framework I'm proposing depends mostly on the top-tier stakeholders for enforcement.

This is where we have to hope for something that resembles altruism, but we're not really hoping for altruism. Instead, we're hoping that the largest stakeholders will favor their long-term self-interest over their short-term self-interest because, if (nearly) all posts are valued correctly, the value of their stake should go up.

The enforcer's role, then, is to review the posts by the analysts, upvote the analyst posts that are useful, and downvote to adjust the values of the overvalued posts that the analysts highlight. By doing this, rewards get returned to the rewards pool, and those can be used by others to raise the valuation of undervalued posts. Even without redistributing those rewards, the simple act of downvoting raises the value of undervalued posts. This was described by one of Steemit's founders, back near the end of 2016:

I view down vote as up voting everyone else, but the downvoted item, just more efficient. Every upvote implicitly reduces rewards of everyone else.
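
A toy model makes the quoted point concrete. Assuming the simplified case of a linear rewards curve, where each post's payout is its share of total rshares times the pool (the real mechanics add curation splits and other details), removing rshares from one post automatically raises every other post's payout:

```python
# Simplified rewards-pool model: payout is proportional to a post's share of rshares.
def payouts(rshares_by_post, reward_pool):
    total = sum(rshares_by_post.values())
    return {post: reward_pool * r / total for post, r in rshares_by_post.items()}

pool = 1000.0  # illustrative pool for this set of posts
before = {"overvalued-post": 600, "post-a": 200, "post-b": 200}
after = dict(before, **{"overvalued-post": 100})  # downvotes remove rshares

print(payouts(before, pool))  # {'overvalued-post': 600.0, 'post-a': 200.0, 'post-b': 200.0}
print(payouts(after, pool))   # {'overvalued-post': 200.0, 'post-a': 400.0, 'post-b': 400.0}
```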

Conclusion

Back in 2019 (Wow, this doesn't seem like 5 years ago...) I suggested, "If you want Steem's minnows to use their downvotes, protect them with quorum sensing." This proposed framework is another step in the same direction.

Is it complicated? Yes. That's why it took me more than 4 months from the time I thought of it just to write it up. Clearly, this isn't something that can happen overnight. Is it perfect? Certainly not.

However, I think it is feasible, and I think it meets the guidelines that I described in the introduction. Further, it is amenable to decentralized and modular development.

Finally, once the protocol for storing the over/under-valued ratings in custom_json transactions is agreed, developers are free to build on it however they want, so better solutions that I have not yet imagined could emerge on top of that protocol after implementation.

(For the record, not a word about 2nd price auctions in the whole article. 😉)


Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".




Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.


Image: Pixabay license, source

Reminder


Visit the /promoted page and #burnsteem25 to support the inflation-fighters who are helping to enable decentralized regulation of Steem token supply growth.


Interesting ideas. If I understand this correctly, it's basically a kind of further instance or consortium of users who can influence the amount of a post's payout. Hmm, difficult.

The longer I've been involved with Steem, the more difficult it seems to me to develop a fair system. For a long time I thought that the success of Steem would essentially depend on the way in which rewards were distributed fairly.

One of my ideas for achieving this goal was that votes could only be given manually. Of course, the voting services would not be enthusiastic about this idea, as it would take away the basis of their business model. Today I am no longer so convinced of my own idea, although I still think that posts with only one picture and a few words that are valued at several hundred dollars do not convey a good image.

It would certainly be interesting to see what effects the implementation of your proposed ideas would have.

 9 months ago 

Thanks for the reply!

If I understand this correctly, it's basically a kind of further instance or consortium of users who can influence the amount of a post's payout. Hmm, difficult.

Right, it's intended to crowdsource abuse resistance while protecting the participants from retaliation. It would be ideal if the rewards algorithm handled this automatically, but it's clear that it doesn't (in its current implementation). So, unless someone is going to design and implement a new rewards algorithm, we need to make adjustments at the next layer.

One of my ideas for achieving this goal was that votes could only be given manually.

This has been proposed many times over the years, but I honestly don't think it's technically possible with the blockchain's design. Personally, I don't see automatic voting as being much different from Google's automatic search indexing. It could never scale if Google asked people to do it manually. IMO, if it's designed right, it's a good thing, but we're currently saddled with suboptimal designs.

I still think that posts with only one picture and a few words that are valued at several hundred dollars do not convey a good image.

Yeah, I definitely agree on this point. (possibly with very rare exceptions)

 9 months ago 

I think this idea's brilliant and, in many ways, builds upon the work that the STEEM Watchers Team is already doing.

I've got a few additional thoughts which you may have covered so apologies if I repeat any:

  1. Anonymity - I think that this is important. Whilst the game needs to know who's playing, the community doesn't. This aligns with the quorum sensing (I'm surprised you waited until so late in your post to mention this 😉) which tries to protect smaller users.
  2. Player Scores - To avoid "abuse" of the game, if a user's opinion consistently differs from the majority, then they should have a "Rating" where their opinion carries less "weight" than a "sheep". Working along the lines of Steemit's "Reputation".
  3. Let everybody play - It will be a couple of years ago now that I suggested writing an "Anti-plagiarism" game, where one player would submit a post for review (with evidence) and then other players would simply pick "plagiarised" or "not plagiarised" - the idea being that "Watchers" could then flag the most "plagiarised". And to gamify it (with an anonymous reward system) would attract players. I wrote my alternative front-end instead 🙂

I'll stop my thoughts for now... my brain's consumed with the plethora of stylesheets that I'm discovering in GitHub repositories that aren't Condenser!

 9 months ago 

This aligns with the quorum sensing (I'm surprised you waited until so late in your post to mention this 😉) which tries to protect smaller users.

Yeah, it didn't occur to me until that section of the article, but this really is just another proposal to implement quorum sensing. The surveyors are sending the signals, but nothing gets acted upon unless their signal is strong enough to find its way through the analyst and enforcer filters. I definitely agree that anonymity (or at least strong pseudonymity) is important, at least for the surveyors. Pseudonymous or not, protecting the analysts from retaliation would probably fall to the enforcers. That's true with the current STEEM Watchers team, too.

You're right that STEEM Watchers has a similar model already, where the detectives are basically acting as surveyor + analyst. The concern I have with their model is that it creates an incentive for a Steem Detective to create abuse with a hidden account and "discover" it with their detective account. I'm not saying that anyone is currently doing that, but the incentive is there.

It will be a couple of years ago now that I suggested writing an "Anti-plagiarism" game, where one player would submit a post for review (with evidence) and then other players would simply pick "plagiarised" or "not plagiarised"

Yep. I remember that. I think that conversation is probably what started my thinking about the swipe-left/swipe-right model. I just think it needs to extend beyond plagiarism.

Player Scores - To avoid "abuse" of the game, if a user's opinion consistently differs from the majority, then they should have a "Rating" where their opinion carries less "weight" than a "sheep". Working along the lines of Steemit's "Reputation".

Yeah, a lot of thought needs to go into this, and also into the beneficiary reward distribution. I'm also not sure if someone could skew the results by running multiple surveyor accounts. Hopefully, the random assignment of posts to surveyors would prevent that.
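
For what it's worth, a very simple version of that rating could just weight each surveyor by how often they have matched the post-payout consensus, with a neutral weight until they have enough history. Purely illustrative; the floor, the prior, and the minimum sample size would all need tuning.

```python
# Illustrative surveyor rating: down-weight opinions that consistently miss the consensus.
def surveyor_weight(history, prior=0.5, min_reviews=10):
    """history is a list of booleans: True where the surveyor matched the consensus."""
    if len(history) < min_reviews:
        return prior  # not enough data yet: neutral weight
    agreement_rate = sum(history) / len(history)
    return max(0.1, min(1.0, agreement_rate))  # keep every voice, but with less weight

print(surveyor_weight([True] * 9 + [False]))        # 0.9
print(surveyor_weight([True, False, False, False])) # 0.5 (too little history)
```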

I'll stop my thoughts for now... my brain's consumed with the plethora of stylesheets that I'm discovering in GitHub repositories that aren't Condenser!

Glad to see that your proposal got funded. Congratulations! I look forward to reading about your progress (and seeing it implemented)!

 9 months ago 

I think you have invented something that can revolutionize the Steem blockchain and save it from a lot of overrated content. Gamification can be a great solution. There would be no shortage of participants in the project if, say, a league table were compiled at the end of the week and the first 30 participants received significant rewards (it is not known where the funding would come from, but it could even be the DAO). Not all Steemit users are good authors, but they have to post something because it's the only way for them to "mine" Steem. I think such a category of users would be happy to compete for rewards by evaluating other people's content.

Undoubtedly, the tournament table should be compiled not by indicators of activity, but by indicators of high-quality work. How to assess quality? It should probably be the deviation of the ratings given to a post by a particular user from the average rating of the post, or something similar.

The weak link of the idea is the implementation of the final decision, that is, there must be some powerful account that will give the downvote.

Where can we find such an account? Does this not provoke a senseless war of downvotes?

I think we would begin to get answers to these questions if someone took the initiative to implement this idea.

 9 months ago 

Thanks for the feedback!

The thing that I really like about this framework is that it can be implemented in increments and by multiple independent developers. Basically, I see an implementation roadmap that looks something like this:

  1. Define/publish the custom_json protocol. (A minimal validation sketch follows this list.)
  2. Developers start creating games for the surveyors - and multiple models are already available for this: solitaire vs. dual player from my post, or tournaments as you suggest here. I'm sure there are many other possibilities, too. Your tournament idea might be the best way to "bootstrap" it, since beneficiary rewards from analysts aren't available yet at this point. Or maybe just a daily post with a summary/analysis of a single surveyor's findings (top-5 overvalued, top-5 undervalued, something like that...)
  3. As the various games achieve adoption, developers can create additional tools for analysts to aggregate, summarize, and report the findings from multiple surveyors, and to share beneficiary rewards with the surveyors who they team with.
  4. As the analyst reporting quality improves, top tier stakeholders can begin using that to inform their voting. (again, possibly with the aid of new tools from developers)
  5. Once the whole system is in place (and even before then), developers can continuously tune the reward structures that were put in place during the earlier phases. As stated above, the goal here is to align rewards with the players' contributions towards returning rewards from overvalued posts back to the rewards pool.
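
On step 1, "define/publish the protocol" could be as small as a shared validator that every game and analyst tool agrees to use. A minimal sketch, using the field names from the example payload earlier in the post (everything else is illustrative):

```python
# Minimal protocol check for an "overunder" evaluation stored in a custom_json's "json" field.
import json

REQUIRED_FIELDS = {"account", "author", "permlink", "value", "assessment"}
VALID_ASSESSMENTS = {"overvalued", "undervalued"}

def validate_evaluation(custom_json_str):
    """Return True if the string matches the (hypothetical) overunder protocol."""
    try:
        op_name, payload = json.loads(custom_json_str)
    except (ValueError, TypeError):
        return False
    return (op_name == "overunder"
            and isinstance(payload, dict)
            and REQUIRED_FIELDS <= payload.keys()
            and payload["assessment"] in VALID_ASSESSMENTS)

good = '["overunder", {"account": "social", "author": "example-author", ' \
       '"permlink": "example-article", "value": "9.43", "assessment": "overvalued"}]'
print(validate_evaluation(good))         # True
print(validate_evaluation('["other"]'))  # False
```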

It might take years to fully implement, but it's fairly easy to get started and to make frequent incremental improvements.

The weak link of the idea is the implementation of the final decision, that is, there must be some powerful account that will give the downvote.

This is true, but hopefully not insurmountable. And even without the downvoting accounts, at least we can get a better understanding of the scope and scale of the problem. This understanding might lead to other possible solutions that we can't be aware of yet.

Not all Steemit users are good authors, but they have to post something because it's the only way for them to "mine" Steem. I think such a category of users would be happy to compete for rewards by evaluating other people's content.

This is a really good point that hadn't occurred to me, but I definitely think that you're correct. It's in line with curation rewards, but you don't have to start with a large stake in order to collect rewards.

Undoubtedly, the tournament table should be compiled not by indicators of activity, but by indicators of high-quality work. How to assess quality? It should probably be the deviation of the ratings given to a post by a particular user from the average rating of the post, or something similar.

This is also a good point, and your suggestion is in line with my thoughts on the topic. Also, even if we don't know an answer right now, with continuous experimentation and improvement we can discover a variety of good solutions over time.

TEAM 1

Congratulations! This comment has been upvoted through steemcurator04. We support quality posts, good comments anywhere and any tags.
Curated by: @o1eh



I love the idea in general. It's no doubt a complex model which I'd love to see implemented and functioning.

There are a few questions and nonsensical thoughts running in my mind which I'm unable to contain...

  • A "surveyor" can be any user, right? What about the "analyst"? A group of anonymous credible users?

  • What if not enough users "play the game" or "survey the posts"? What would be the consensus then?

  • Would Enforcers wait for the payout day (before closing of curation window) to adjust post rewards?

  • If this is implemented then each and every post should be reviewed. Otherwise, it wouldn't be fair - some authors getting away with undeserved rewards and some getting their rewards adjusted.

  • How about simply adding a rating slider to each post? The rater should be kept anonymous and the author shouldn't be notified about every rating activity. Rewards can later on automatically adjust based on the rating value. For this, a suitable algorithm needs to be designed. Although I don't think automatic reward adjustment is even possible without voting and getting posting key authorization.

 9 months ago 

Good questions. Thanks for the feedback!

A "surveyor" can be any user, right? What about the "analyst"? A group of anonymous credible users?

Yes, the analyst can also be anyone. The idea is that the analysts can pick and choose the surveyors that provide the best information, and the enforcers can do the same with the analysts. Either role can be anonymous or not, but I'd imagine that most people would prefer to use alt accounts to avoid retaliation on their primary accounts.

What if not enough users "play the game" or "survey the posts"? What would be the consensus then?

Yeah, the whole framework would only be effective if it drew enough participation. But, fortunately, we wouldn't have to build the whole thing at once. We could start with a game for surveyors, just for fun, then when there's enough data out there analysts could start by reporting on the data. Finally, when there's enough information from the analysts, the enforcers could start participating, too. In this fashion, the development work and adoption could be phased in over time.

Would Enforcers wait for the payout day (before closing of curation window) to adjust post rewards?

Probably. I'd imagine that it would make sense to do their downvoting after 6–6½ days.

If this is implemented then each and every post should be reviewed. Otherwise, it wouldn't be fair - some authors getting away with undeserved rewards and some getting their rewards adjusted.

That would be ideal, but I don't think it's really necessary. Maybe they could root out the worst abusers first and then that would free up time for other content. We're never going to score every post exactly right, but the important part is to gradually get better and better at it. If almost no abusive content gets downvoted (as is the case now), that's not fair to the authors who are producing attractive content. IMO, ignoring all abusive content is even more unfair than ignoring just some of it.

How about simply adding a rating slider to each post?

This is definitely something that could be tried. The nice thing about it is that once we have the framework and protocol definition, developers can experiment with all sorts of different possibilities.

The rater should be kept anonymous and the author shouldn't be notified about every rating activity.

I agree on these points. Of course, with a public blockchain, it's impossible to keep authors from discovering their ratings if they want to (unless encrypted). But I don't see a reason to design that sort of notification into any of the applications.

It seems to me that if someone's going to go through all the trouble of reading and honestly evaluating a post, having the only result be a thin "over or under" judgment seems like it's leaving something on the table. (They also presumably are using their experience with the post to inform their own voting on it, but "curation rewards" don't tend to be worth much for voters with small or medium amounts of SP).

 9 months ago 

It seems to me that if someone's going to go through all the trouble of reading and honestly evaluating a post, having the only result be a thin "over or under" judgment seems like it's leaving something on the table

I don't think that people take the time to completely read a lot of the posts they vote for now, so I wouldn't expect that to be different in the games. To be honest, I also don't think it's always necessary. In many/most cases, you can get a reasonably good sense of over/under valued in just a few moments. And, of course, the players can also upvote or downvote the posts they see directly, which gives them the full range of percentages to express themselves. Hopefully, the game's scoring would be implemented in a way that motivates people to spend enough time on evaluation... whatever that turns out to mean.

"curation rewards" don't tend to be worth much for voters with small or medium amounts of SP

Yeah, this is why I imagined that the posts by the analysts would direct rewards to both analysts (via author rewards) and surveyors (via beneficiary rewards). I'm imagining that beneficiary settings for the surveyors would use a distribution scheme that resembles Bitcoin's mining pools.

This post has been featured in the latest edition of Steem News...

Upvoted. Thank You for sending some of your rewards to @null. It will make Steem stronger.

TEAM 1

Congratulations! This post has been upvoted through steemcurator04. We support quality posts, good comments anywhere and any tags.
Curated by: @o1eh



 9 months ago 

Thank you, @o1eh!