RE: Steemit Update: HF21 Testnet, SPS, EIP, Rewards API, SMTs!
The "dividing account abuse" explanation has never really made much sense to me. A large stakeholder who is willing to go to such lengths to obtain something close to 100% of upvote will most likely just find another avenue if that particular one closes. Circular voting or vote selling (off-chain if necessary) will still be possible.
However, the impact on genuine small accounts, new accounts, and engagement through comments will be substantial. I would argue that the pros of the CLRC change do not outweigh the cons, and that it is a part of the EIP that could be dropped without dramatically altering the main pillars of the proposed change.
Circular vs. self-voting isn't the point at all. The point is solely that everything is visible, so it can be downvoted if the content itself (or at least the story surrounding it) doesn't justify the reward. Who makes the votes, using which account or which third-party voting service, literally doesn't matter here.
At which point the abuser will most likely stop doing it. Which is good. And then they will do something else to obtain a similar level of rewards (circular voting and vote selling are just examples of methods that are still likely to be employable under HF21).
As such I think that the benefit brought by the CLRC change is very small. The cons of that part of the EIP outweigh the pros.
Vote selling and circular voting themselves aren't bad and can't be stopped in any case, so we posit that as a given. Vote selling, etc. that results in payouts divorced from content value is the problem, and it can be countered with downvotes as long as it is highly visible, which is why some form of superlinear curve is needed (I'm not arguing for this specific curve or parameters; at this point I'm not even sure I correctly understand what they are).
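To make "some form of superlinear" concrete, here is a minimal sketch of a curve that is superlinear for small payouts and converges to linear for large ones. Both the functional form f(r) = r^2 / (r + s) and the constant s are illustrative assumptions, not the actual HF21 curve or its parameters.

```python
# Illustrative only: one possible "convergent linear" reward curve.
# The form f(r) = r^2 / (r + s) and the constant s are assumptions for
# illustration, not the actual HF21 curve or its parameters.

def convergent_linear(rshares: float, s: float = 1_000_000.0) -> float:
    # Superlinear for rshares << s, asymptotically linear for rshares >> s.
    return rshares * rshares / (rshares + s)

for r in (1e3, 1e5, 1e7, 1e9):
    # f(r)/r shows what fraction of "full value" a payout of size r receives.
    print(f"r={r:>13,.0f}  f(r)/r = {convergent_linear(r) / r:.4f}")
```

Small payouts receive only a fraction of full value, while large, highly visible payouts approach it, which is the property being relied on here.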
The benefits of the CLRC just seem to me to be small compared to the costs.
Under the above example the abuser continues to extract the same value from the system, just through a different method. There is no increase in altruism or in manual voting, and no redistribution of value from the abuser to content creators. It may look slightly better on the surface, but the underlying economics are unchanged.
The costs, on the other hand, seem high: reductions in payouts for smaller earners, which likely includes new accounts; reductions in payouts on comments, harming engagement and reducing Steem's effectiveness as a social network; and reductions in the power of minnows and dolphins to reward content without significant consensus or the support of whales.
Given that the blockchain data is transparent, some form of automated data analytics could be used to find such abusers. Combined with automated free downvotes, this could go a long way toward resolving the issue without all of the above costs.
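As a very rough sketch of the kind of analysis meant here, flagging account pairs whose votes for each other are roughly reciprocal (the account names, vote weights, and the 0.8 reciprocity threshold are all hypothetical; real vote data would come from the public chain via an API node):

```python
# Hypothetical mutual-voting detector: flag account pairs whose votes for
# each other are roughly balanced. Votes are (voter, author, rshares) tuples;
# all names and numbers below are invented for illustration.

from collections import defaultdict
from itertools import combinations

votes = [
    ("alice", "bob", 900), ("bob", "alice", 850),  # reciprocal pair
    ("carol", "dave", 500),                        # one-way vote
]

given = defaultdict(float)  # total rshares sent from voter to author
for voter, author, rshares in votes:
    given[(voter, author)] += rshares

accounts = {acct for pair in given for acct in pair}
suspects = []
for a, b in combinations(accounts, 2):
    ab, ba = given.get((a, b), 0.0), given.get((b, a), 0.0)
    # Flag pairs where both directions are nonzero and roughly balanced.
    if ab and ba and min(ab, ba) / max(ab, ba) > 0.8:
        suspects.append((a, b))

print(suspects)  # e.g. [('alice', 'bob')] -- candidates for downvotes
```

A real system would need to handle rings longer than two accounts and legitimate mutual appreciation, but the underlying data is all public.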
I'm not sure which example you are talking about, but as long as it is highly visible we need to take into account the effect of downvotes (and, BTW, the deterrent effect of downvotes even when there aren't actual downvotes in a particular case). A superlinear curve (not necessarily this exact one) ensures that anything that isn't highly visible can't get full value, so it eliminates the possible loophole here.
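A quick worked example of that effect, reusing the same illustrative curve f(r) = r^2 / (r + s) with made-up numbers: splitting one large, visible payout into k small ones sharply reduces the total reward.

```python
# Sketch of why a superlinear region penalizes "atomized" payouts. The curve
# and the constants are illustrative, not actual chain parameters.

def f(r: float, s: float = 1_000_000.0) -> float:
    return r * r / (r + s)

whole = 10_000_000.0                # one big, visible payout
for k in (1, 10, 100, 1000):
    split = k * f(whole / k)        # the same stake spread over k small posts
    print(f"k={k:>4}: {split / f(whole):.3f} of the single-payout reward")
```

So milking only pays at full value when concentrated into payouts large enough to be seen, and therefore downvoted.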
That sounds very difficult and complicated, and to the extent it requires ongoing effort, it relies largely on altruism and volunteer work, as all anti-abuse efforts do.
The objective benefit of completely preventing atomized milking with one fairly simple measure is pretty clear to me, but I might not agree that the parameters are set ideally in the proposed fork.
EDIT:
BTW, I'm not sold on the claim that reducing small payouts will substantially harm engagement.
Most social networks pay out literally nothing. If there are small payouts on new accounts, comments, etc. (and there are still likely to be, even with the new curve, just as there were some under the old and far more extreme n^2 curve), I might argue that they have roughly the same effect as slightly larger payouts, the appeal being to earn something from your interaction instead of nothing. We sort of ran the experiment on whether flattening the curve would dramatically increase retention and engagement, and the result was mostly negative (no dramatic improvement).
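Some rough, invented numbers to illustrate how curve shape scales a small post's share of the pool relative to a big one (all constants arbitrary):

```python
# A small post's share of the pool next to one big post, under three curve
# shapes. All rshares values and the constant in the convergent curve are
# invented for illustration.

small, big = 1_000.0, 1_000_000.0

for name, f in [("linear", lambda r: r),
                ("n^2", lambda r: r * r),
                ("convergent", lambda r: r * r / (r + 10_000.0))]:
    share = f(small) / (f(small) + f(big))
    print(f"{name:>10}: small post gets {share:.2e} of the pool")
```

Small payouts shrink under any superlinear shape but don't vanish, and the convergent curve sits well between linear and the old n^2.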
That's also ignoring that the EIP is very much intended to offset the reduction by increasing these payouts via less milking (even though it may increase other payouts by more).
I honestly doubt that the specific size of these small payouts has a meaningful effect on engagement either way (and if it does, it is probably fake engagement manufactured for the purpose of generating payouts), though I can certainly understand that people receiving these payouts would prefer more rather than less, all else being equal. All else isn't equal, though; there are other considerations here.