Making SPAM fighting on Steem into a game of skill


Can abuse prevention on the Steem blockchain be gamified? I have thoughts.


Introduction

[Cover image: Pixabay License: source]

I've been hanging around the Steem blockchain for almost 6 years now, since July 2016. In that entire time, there have been frequent efforts to fight abuse (plagiarism, spam, etc.), and in my personal opinion, they have all failed to a large degree. This is something that I have spent a great deal of time thinking about, without much in the way of results. One idea I had was to model abuse prevention after quorum sensing, a technique that single-celled bacteria use to coordinate multicellular action. Another was to model the rewards algorithm after a second price auction, a type of auction that better motivates participants to bid their "true value" when competing (here's another Steem article on second price auctions, which I just discovered a moment ago via PreSearch).

Obviously, neither of those ideas got traction, and the community is starting to talk about abuse again. Frankly, I had given it up as more or less hopeless. Here are some problems that I see with past attempts at abuse prevention:

  • They depend on altruism to avoid the "tragedy of the commons"
  • Identifying abuse is hard work (REALLY HARD)
  • Small stakeholders can't take the risk of getting into a flag war with large stakeholders

In the time I've been here, two efforts come to mind as the closest things I've seen to successful abuse prevention. The first was @cheetah, an automated bot coded by someone who has since left the blockchain. Personally, I thought that Cheetah was better than nothing, but also that it left a lot to be desired, and as time went on, it seemed to get worse and worse at accomplishing its goal. The second almost-successful effort was @steemflagrewards, which also disbanded. I never learned the details of their operation, but as I understand it, they somehow managed to eliminate part of the need for altruism.

So I've stayed silent as the topic has come up again recently. As they say, "There's nothing new under the Sun." But then today, I had an idea: what if we could make abuse prevention into a game of skill - one that rewards people for participating, and rewards the best participants the most? In the following sections, I'll propose a method for doing just that. It is beyond my skills and means to implement, but maybe it will stimulate some other ideas.

Origins

Ten or fifteen years ago, there was an Internet game in a genre that was called something like "Gaming for Good". When I wrote this, I couldn't remember what it was called or who created it, but it appears to have been the ESP Game, created by Luis von Ahn at Carnegie Mellon as part of his "Games with a Purpose" research. If I remember right, Google licensed the idea as Google Image Labeler after he completed his PhD - so he did take it to Google, in a sense. I'm glad to be able to give him credit for the idea.

Anyway, in this game, it went like this:

  • Connect to the web site
  • Get paired randomly with an unknown partner
  • Get shown a photo
  • Type potential keywords for a period of time - maybe a minute(?)
  • If you and your partner typed the same keywords, then you won, otherwise you lost

And here's where the "gaming for good" part came in. When keywords were matched by two independent partners (actually multiple pairs of partners, I assume), they were used to train an AI system on image recognition.

I spent quite a few late nights playing this game. It was one of those games where you think you're just going to be there for a few minutes, but then... one round at a time, the minutes turn into hours.

Connecting this concept to abuse prevention on Steem

So now, let's imagine how we could apply similar concepts to abuse prevention on Steem. Because of upvote-based incentivization and the need for downvotes, it would have to be run by one or more parties with a sizable stake, but here's what I imagined today:

  1. An automated crawler looks for potentially abusive posts. This could be done by looking for characteristics on the blockchain (e.g. a high reputation author with a low follower count; a high value post with a low follower count; a high value post with no comments; posts with a large ratio between the max vote and the median vote; etc.). It could also be done by passing a small portion of the post through the API of a search engine or plagiarism service. (A rough sketch of this step, and of the answer-logging in step 3, follows the list.)
  2. Someone puts up a web site that feeds candidate posts from the crawler to anonymous/random pairs (or triplets, etc.?) of players and asks them whether the content is abusive (we need a better term for "abusive content" ;-). For each post, repeat steps 3-6 (outer loop):
  3. Log answers on the blockchain - but, importantly, without any link to the post they're evaluating. That way, there's no fear of flag wars. The web site would know the link, but no one else. The logged answers would need some sort of anonymized player/game ID and would also need to be delayed by a random amount of time, in order to prevent players from identifying their partners.
  4. If the two players match their answers, then the operator upvotes the answers, rewarding them for their time & effort
  5. If the two players don't match their answers, then no upvote is given (and maybe it wouldn't even need to be recorded on the blockchain)
  6. Repeat steps 3-5 with as many pairs as desired (inner loop)
  7. If some threshold is met, it signals the large stakeholder to further evaluate the post (using automation or manual inspection)
  8. If the large stakeholder(s) agree(s) that the post is abusive, then a downvote is issued.
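To make steps 1 and 3 concrete, here's a minimal sketch in Python. Everything in it is an assumption for illustration: the field names (author_reputation, pending_payout, and so on), the thresholds, and the helper names are all hypothetical, and a real implementation would pull this data from a Steem API node and broadcast through whatever library the operator prefers.

```python
import hashlib
import random
import time

def score_post(post):
    """Step 1 heuristic: combine simple on-chain signals into one
    'suspicion' score. All field names and thresholds are placeholders."""
    score = 0.0
    if post["author_reputation"] > 60 and post["follower_count"] < 50:
        score += 1.0  # high-reputation author with few followers
    if post["pending_payout"] > 20.0 and post["comment_count"] == 0:
        score += 1.0  # high-value post with no comments
    vote_values = sorted(v["value"] for v in post["votes"])
    if vote_values:
        median = vote_values[len(vote_values) // 2]
        if median > 0 and max(vote_values) / median > 50:
            score += 1.0  # one vote dwarfs the median vote
    return score

def anonymized_answer(site_secret, player, post_url, answer):
    """Step 3: produce an on-chain record that doesn't reveal the post.
    Only the web site, which knows site_secret, can reconstruct the link."""
    game_id = hashlib.sha256(
        f"{site_secret}:{player}:{post_url}".encode()
    ).hexdigest()[:16]
    # Random delay so partners can't identify each other by timing.
    broadcast_at = time.time() + random.uniform(0, 3600)
    return {"game_id": game_id, "answer": answer, "broadcast_at": broadcast_at}

def pair_matched(answer_a, answer_b):
    """Steps 4-5: the operator upvotes both answers only when they agree."""
    return answer_a["answer"] == answer_b["answer"]
```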

Now, let's revisit the three challenges I identified above, with previous solutions to abuse prevention:

  • They depend on altruism to avoid the "tragedy of the commons"

If abuse prevention is turned into a game of skill like this, there is no need for altruism. The people who are best at identifying abuse will receive the most rewards, so the effort becomes self-sustaining.

Further, the large stakeholder would be rewarded with curation rewards for their votes, and also (hopefully, but not certainly) by a rising price of STEEM as content quality improves. Of course, the web site owner could also generate revenue from advertising and beneficiary rewards, and the whole thing could be further incentivized with TRC-20 tokens.

  • Identifying abuse is hard work (REALLY HARD)

Players can play as much or as little as they'd like. "Many hands make light work", as they say. Also, if my experience with the game described above is any guide, it might actually be fun.

  • Small stakeholders can't take the risk of getting into a flag war with large stakeholders

Since the link is only shared between the large stakeholder and two players who don't know each other, there's no way they could be drawn into flag wars.

The only risk that occurs to me is that a malicious actor could seek to punish everyone who participates, but the operator could provide protection and it should become prohibitively expensive for a malicious actor.

Afterthoughts

For abuse prevention, it's not necessary to identify every abusive post. We just need to find enough of them to create the friction that will lead abusers to self-limit. This means that a whole post wouldn't have to be fed through a plagiarism detector, just some random excerpts. It also means that the large stakeholder could limit their downvote size to just a portion of the post's value, perhaps by canceling out the largest-value vote on the abusive post (which is reminiscent of the second price auction that I mentioned above). Certainly, these sorts of things could be tuned and adjusted as time passes.
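As a rough illustration of that last knob, here's how an operator might size a downvote to cancel only the largest vote on a post rather than its whole payout. This is just a sketch under the assumption that per-vote rshares are available from the post data; it isn't the only way to do it.

```python
def downvote_rshares(post):
    """Neutralize only the largest single vote on the post, echoing the
    second price auction: the abuser keeps only what the second-largest
    supporter was willing to give."""
    if not post["votes"]:
        return 0
    return -max(v["rshares"] for v in post["votes"])
```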

Conclusion

So there's the idea. Certainly, there's nothing wrong with rekindling the abuse prevention methods that have been tried in the past, but it seems to me that we should think about new ideas, too.

I wish I had the skills and means to implement something like this in a reasonable time frame, but since I don't, all I can do is kick it out to the community and ask for feedback. Thoughts?


Thank you for your time and attention.

As a general rule, I up-vote comments that demonstrate "proof of reading".




Steve Palmer is an IT professional with three decades of professional experience in data communications and information systems. He holds a bachelor's degree in mathematics, a master's degree in computer science, and a master's degree in information systems and technology management. He has been awarded 3 US patents.


I love the idea and it's sparked a lot of thoughts on how it could be achieved on a simple and then more complex basis.

e.g. v1 allows people to anonymously report content as plagiarised, which (registered) users can then agree or disagree with. This will give the post a "plagiarism/abuse" score which can be shared in a daily report for action by @ac-cheetah. Then subsequent rewards, registered user reputation, etc. can also be administered, perhaps via one of the steemcurator03 - steemcurator08 posting keys so that even the rewards remain anonymous.

A future version could include automated highlighting of potentially abusive posts and it could slowly become more complex.

This will maintain anonymity, use the gamification idea and reward users for highlighting abuse and penalise those abusing.

And to prevent people arbitrarily saying everything is plagiarised, include some known-to-be-OK posts within the game so that those trying to abuse the reward part of the initiative fail. Perhaps even a minimum reputation threshold within the game before somebody playing gets rewarded. e.g. You start at 25 rep, every "fail" from highlighting an OK post (it'll be really obvious) loses you 5 points, every "approved as abusive" post gains you 1 point. Once you hit 50 points, you share a proportion of a reward pool (which could be a weekly steemcurator upvote). (A quick sketch of this scoring scheme follows.)
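That scoring scheme is easy to prototype. A minimal sketch, assuming the numbers above (start at 25, lose 5 per flagged-but-OK post, gain 1 per approved report, rewards from 50 points); all names here are hypothetical:

```python
class PlayerRep:
    START, FAIL_PENALTY, APPROVE_GAIN, PAYOUT_AT = 25, 5, 1, 50

    def __init__(self):
        self.score = self.START

    def record_result(self, flagged_ok_post):
        # Flagging a known-OK honeypot post costs 5 points; a report
        # that is later approved as abusive earns 1 point.
        if flagged_ok_post:
            self.score -= self.FAIL_PENALTY
        else:
            self.score += self.APPROVE_GAIN

    def eligible_for_rewards(self):
        return self.score >= self.PAYOUT_AT
```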

I need to go out now but will mull this over whilst I'm out and come back with some ideas. This feels like it's well within my skillset - I just need to decide whether it should be a priority over the reskin.

I've also been thinking about it some more since I posted, and I landed on the concept of publishing regular anonymized reports (daily, or more or less frequently, depending on participation levels) with the identified posts. It improves transparency and it's another income stream for the operator.

There are a lot of possibilities when you get to thinking about the details.

Another thing I was thinking was to use beneficiary settings to distribute rewards, so the content doesn't show up on the players' blog pages, comments, or reply histories. i.e. the account running the game posts the game reports with appropriate beneficiary settings to direct rewards to the players (see the sketch below). I suppose that could be an optional setting, but I would think most players would prefer not to have that content be visible on their own profiles.
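For what it's worth, here's roughly what that report post could look like using the beem library. The account names and report text are made up, and (as discussed just below) the beneficiary list is visible in the post's metadata, so this routes rewards without hiding who is playing. Weights are in basis points, so 2500 is 25% of the author rewards.

```python
from beem import Steem

steem = Steem(keys=["<operator posting key>"])

# Hypothetical daily game report; the beneficiaries setting routes a
# share of the post's author rewards directly to each player.
steem.post(
    title="Abuse-hunting game: daily report",
    body="Anonymized results for today's rounds...",
    author="game-operator",
    tags=["abuse-fighting", "steemexclusive"],
    beneficiaries=[
        {"account": "player-one", "weight": 2500},
        {"account": "player-two", "weight": 2500},
    ],
)
```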

I was thinking about the beneficiary route too and (correct me if I'm wrong) I thought that a malicious actor could still identify supporters via the post itself (which says who the beneficiaries are) or via the wallet transactions (I think the rewards get paid to the poster and then transferred but I could be wrong about this).

That's why I ended up piggybacking on the new steemcurator teams so that the support blends in with another activity.

I've produced a strawman wireframe (buzzword bingo alert) for how it could work but I'm too tired to post it tonight. I also thought that something like this could work really well with the site redesign I'm working on - where a user can report a post as plagiarised within the post itself.

I also thought that if a post reaches a certain "plagiarism rating", then a comment could be auto-posted tagging community admins and mods (assuming it's in a community) so that they can take action too. Similarly, if a user is consistently flagged as an abuser, a comment could be automatically added to every new post of theirs - I think sentinels did or still do something similar.

I was thinking about the beneficiary route too and (correct me if I'm wrong) I thought that a malicious actor could still identify supporters via the post itself (which says who the beneficiaries are) or via the wallet transactions (I think the rewards get paid to the poster and then transferred but I could be wrong about this).

Yeah, this is a concern. The posts distributing rewards to players would have to be totally disconnected from the potentially abusive posts (PAPs ;-). This way, the malicious actor would be able to tell who is playing the game, but they wouldn't be able to tell who ID'd their particular post as spam/plagiarism.

Earlier, I didn't see what you were getting at with the steemcurator## accounts, but now I think I do. If the account is voting for other stuff, then there's no way to tell if the vote was for game participation or for other reasons.

The drawback to that, though, is that rewards could only be distributed if the player was also posting for other reasons. I think we'd want to include players who don't want to post other stuff all that frequently (if at all).

I like the idea of putting abuse reporting and thresholds into the web site. I guess it would be easy enough to integrate abuse reporting, but rewarding the successful abuse-hunters would be trickier.

the malicious actor would be able to tell who is playing the game

This is potentially enough to put some people off. I know a few users who are still knocking about, with quite a lot of power, who downvote for their own (often insane) whimsical reasons.

If the account is voting for other stuff, then there's no way to tell if the vote was for game participation or for other reasons.

Exactly 👍🏼 It's the ultimate cover and I expect the man behind ac-cheetah to also be in one of the chosen steemcurator groups so I'm confident he'd support the idea.

The drawback to that, though, is that rewards could only be distributed if the player was also posting for other reasons. I think we'd want to include players who don't want to post other stuff all that frequently (if at all).

Good point - it would be fairly easy to keep a record of who's owed what, and if/when they do finally post something, their reward could be automated. If they don't post though, there's little fear of retribution from that PAPper (😉), so the idea of anonymity being optional could work well here - the user choosing to have their rewards credited via a beneficiary or a wallet transfer. Or they could even create a "dummy" account to collect rewards.

I like the idea of putting abuse reporting and thresholds into the web site. I guess it would be easy enough to integrate abuse reporting, but rewarding the successful abuse-hunters would be trickier.

I think so. A standalone website would be fairly straightforward to do (and can be launched first on the same server where the reskin will live), and then the new website could easily post data automatically into the existing database. The rewards pose a challenge. The other big challenge is people highlighting a post as plagiarised, unaware that the author has a blog, website or photography portfolio elsewhere.

I know that HiveWatchers is heavily criticised on Hive for its heavy-handedness and this feels like it's heading in that direction... https://hivewatchers.com/

I know that HiveWatchers is heavily criticised on Hive for its heavy-handedness and this feels like it's heading in that direction... https://hivewatchers.com/

There were a lot of complaints when they were here, too. Not just the heavy-handedness, but also arbitrariness and alleged conflicts of interest. I could never decide if I thought they were a net-positive or net-negative. I think I delegated, undelegated, and redelegated to them a number of times.

The other big challenge is people highlighting a post as plagiarised, unaware that the author has a blog, website or photography portfolio elsewhere.

One of the big lessons from past efforts is the need for an appeal process when there is a central authority who is making the decisions. Of course, on the other side, that just makes abuse-fighting harder.

Those sorts of challenges are the reason why I've been trying to think about decentralized techniques like this game idea or quorum sensing and adjustments to the rewards algorithm to make abuse less profitable.

The nice thing about your idea to publish a daily report out of the game play is that multiple projects could develop competing lists in their own ways, and maybe the best would bubble up to the top. Maybe a part of the project should be to publish a standardized reporting format.

I think that I'm going to park the reskin for a while and work on this instead. I think it's got real potential for good if it's done well. Once a certain threshold's reached, a reply to the post could be triggered. Any reply from the author could be added to the game so that future players can see it - although the original author could decide to edit the content or suchlike so perhaps the original content could be saved.

Since the downvote trail is managed by somebody else, me writing the game and reporting the results could work well. I think that an initial version could be written in a couple of days so it's just a question of finding a couple of days 🤔

Your quorum sensing post offers an interesting idea. I think that to some degree ac-cheetah's downvote trail achieves this - especially in the knowledge that there are users with far higher power backing you up. Like the gobshite kid who has the MMA world backing him up. I'll let my subconscious work on this one too.

I think that to some degree ac-cheetah's downvote trail achieves this - especially in the knowledge that there are users with far higher power backing you up.

I agree, the main difference is that it doesn't happen on a post-by-post basis. That might not matter much, anyway, though.

I wonder if there might be a base-rate issue -- e.g. if the common case is "not abusive" then wouldn't an easy high-rewards strategy be to just vote "not abusive" as often as you can, since you're likely to match?

Yeah, I have been thinking it through a little more. With a 50/50 chance, matching on a single post would probably be much too easy. I guess it would be better to make the groups that have to agree bigger than just two people.

There might also be a way to throw in some definite/known abuse and use that as a calibration tool.
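To put rough numbers on both points: a back-of-the-envelope sketch, assuming honest players judge correctly 95% of the time and wrongly answering on a known-abusive "honeypot" costs five times the match reward. All of those figures are made up for illustration.

```python
p_correct = 0.95                               # assumed honest-player accuracy
p_match = p_correct**2 + (1 - p_correct)**2    # both right or both wrong
print(f"honest pair match rate: {p_match:.1%}")          # ~90.5%

# A lazy "always say clean" pair matches 100% of the time on real posts,
# so honesty loses unless honeypots flip the payoff. With honeypot
# fraction h and penalty k (in units of the match reward), lazy play's
# expected value per round is (1 - h) - h*k; it drops below the honest
# match rate once h > (1 - p_match) / (1 + k).
k = 5.0
h_needed = (1 - p_match) / (1 + k)
print(f"honeypot fraction needed at {k:.0f}x penalty: {h_needed:.1%}")  # ~1.6%
```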

There are definitely still some details that need to be worked out.

I am a newbie on Steemit, but I have completed all the achievement tasks that the Steemit team requires beginners to complete. That means I have really understood all the rules that apply on Steemit, including the strict prohibition on plagiarism, abuse and the rest, and I will gladly and very professionally comply with all of those rules. Regarding this discussion: the plagiarism that occurs on Steemit - along with the monopolies, the abuse, and the very excessive bidbots - is really very unfortunate and very disturbing. I'd like to say that it's very demoralizing for those of us beginners who stick closely to the rules. Very small fish like us cannot downvote plagiarized content; we are very afraid of backlash. But we are ready to fight it, together with the seniors on Steemit, in ways that carry no risk of retaliation. That aside, one thing we can do as minnows is to always post quality content, free of plagiarism.

As a red fish, I don't yet have the power to support the fight against plagiarism through downvotes. Even fighting with other red fish feels risky to me, let alone downvoting the big accounts; their revenge is what I'm worried about. I think even the red fish have alliances. That's why I very rarely downvote an act of plagiarism. It's not that I approve of plagiarism; I'm just a red fish avoiding retaliation.

I very much agree with your idea of fighting plagiarism with a vote-matching game. It's a good solution, and it's almost risk-free.

One thing I am concerned about here is that whether a post is abusive or not is not a black and white issue. For instance, you and I have both discussed someone on this blockchain committing borderline abuse for months, and milking the system of thousands of SBD in rewards, but that actor's posts are so realistic (likely written by AI) that whether or not it's abuse is not 100 percent clear. So my question is: what will the rules of this game be for judging whether something is abusive? If a post is worth a lot more than a user thinks it should be, is that abuse? I get that by having input from multiple users, you will have almost a check-and-balance system for detecting abuse, but that is not useful unless the definition of abuse is widely agreed upon.

For instance, you and I have both discussed someone on this blockchain committing borderline abuse for months, and milking the system of thousands of SBD in rewards, but that actor's posts are so realistic (likely written by AI) that whether or not it's abuse is not 100 percent clear.

I'd be interested in knowing which user you're referring to - it sounds a lot like a few I've caught in the past that were translating content from Chinese or Russian sources. Their content was perfect - consistently something deep and insightful every single day, the kind of quality that an individual would not be able to produce daily, and devoid of any personal touches at all.

I asked @remlaps to respond, but we've noticed a number of accounts that have grossed nearly 100,000 SBD with content that is posted daily but is not interactive whatsoever, and makes sense but is not incredibly meaningful. The same large stakeholder votes for each of these accounts, and we suspect that they are using AI to generate the articles and milking the rewards with their stake.

I don't want to type the users in a way that's searchable, but some of them are listed in screen shots here: Using TrufflePig for potential abuse detection - Human curation needed - Multiple $1,000s in rewards at stake. Others are easily identifiable in the recurring "Today's Truffle Picks" reports from @trufflepig (before it stopped running).

Using data from steemdb.io, in December or so, I built a PowerBI report on 14 of the accounts that were identified by trufflepig, and it appears that the accounts collected about 91k SBD in rewards from March 2021 through January 2022. (No idea how many other similar accounts might be out there)

[screenshot]

I was thinking the posts were AI-generated (more recently I've been seeing ads for something called Jarvis), but I guess they could also be language translations.

[screenshot]

This is the most interesting post. If you find it, put the URL into https://www.steemcryptic.me/ and see what the original version was. This is the only post that I came across that was significantly different and I've never spent the time working out why. Maybe I'll spend that time now - not that much can be done against this kind of upvoting power.

That is interesting. I searched Google for a deleted series of words inside quotes and didn't find any matches. The reason for the changes isn't obvious to me.

The timing of their first post and the facts that the recovery account is blocktrades and they were originally posting from SteemPeak reinforces my suspicion about where the funds might be getting directed, although it's mainly just speculation. (I was posting from Steempeak around that time, too, so these are very weak lines of evidence. ;-)

On a separate note, we also have the brute force method... roughly 950 SP and 110 SBD in the last week.

[screenshot]

And another interesting post...

[screenshot]

Especially this paragraph that was deleted from the end of the post:

You might have to let go of things that you love and that you have been attracted to and you might have to let go of things that you know you will never be able to experience again. You might have to let go of things that you have been used to and you might have to let go of things that you will never be able to experience again. You might have to let go of a lot of things that you have been used to and you might have to let go of many things that you have been attracted to. You might have to let go of a lot of things and you might have to let go of a lot of people and you might have to let go of all the people and things that you know that you are supposed to have

Another rare mistake. What do you make of the same sentence being repeated 4 times? It definitely suggests some kind of automation.

Yeah, the sentences are not quite exactly the same, but it's like a "Mad Lib" where someone/something just inserted new clauses into the same sentence structure four times in a row.

My gut feeling is still that it's being crafted by GPT-2 or GPT-3. Especially because of the timing of the early posts. I seem to remember a number of Steem posts in the 2019/2020 time frame where people were experimenting with those tools. If I recall correctly, some of the people involved in those conversations later joined the Witness War hostilities on the Hive side.

Maybe near the beginning a human was proof-reading and fixing the worst parts, but as time went on and they continued to go unnoticed, perhaps they didn't need to bother with edits any more?

This might be useful, but I'm not sure how to interpret it yet. I found it linked from here and tried it out with a couple of paragraphs from the top post on that same account. To my eye, the sample below looks similar in coloring to "machine*: unicorn text (GPT2 large)." Also, I note that there's no purple at all.

[screenshot]

Here are a couple paragraphs from my own post, so I know they were written by a human.

[screenshot]

I guess the more green and yellow you see in comparison to red and purple, the more likely it is that an AI wrote it. That article was published in 2019, so maybe there's a better tool available by now.
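For anyone curious, the tool described sounds like GLTR, which colors each token by how highly the language model ranked it: top 10 predictions = green, top 100 = yellow, top 1,000 = red, everything else = purple. Here's a minimal sketch of the same idea using the Hugging Face transformers library and the small GPT-2 model (assuming both are installed); it's an illustration of the technique, not the tool's actual code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_colors(text):
    """Rank each token against the model's prediction at that position;
    low ranks (green/yellow) mean the model found the token unsurprising,
    which is typical of machine-generated text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    out = []
    for pos in range(1, ids.size(1)):
        scores = logits[0, pos - 1]
        rank = int((scores > scores[ids[0, pos]]).sum()) + 1
        color = ("green" if rank <= 10 else
                 "yellow" if rank <= 100 else
                 "red" if rank <= 1000 else "purple")
        out.append((tok.decode([int(ids[0, pos])]), color))
    return out
```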

This is fascinating - I didn't know this existed and it's incredibly interesting. I'm probably going to get sucked in to it now and lose my day.

That would definitely explain why the articles appear to make sense but at the same time, they don't. One of the ones I read seemed to have a random thread that was totally unrelated to the main thread. In a coherent but totally illogical sense. I think you're right in that their posts are being generated in this way - it almost appears obvious now that my eyes have been opened!

Ah, I know the lot. I investigated them some time ago and couldn't find the original source. There are often clues within edited posts (i.e. why did they edit it) and there were very few posts that had been edited (which is suspicious in itself). What I did notice is that one of the edits was correcting some grammar - so somebody who (I believe to be) fluent in English had read it after it had been published.

The other edits often removed a quotation mark (") as the first and last character of the post. So whatever tool had been used to translate or create the post wraps it in quotes.

Looks like I looked into them on 12th April last year - 12 accounts at the time.

I have no doubt though that it's not authentic content and believe that this account holds the key -

[screenshot]

The content was genuine at the start, written by a guy from Pakistan and then it changed completely - like it was being run by somebody completely different.

Ultimately, the final decision lands with the person or team who casts the downvote. But that is similar to the problem that the original game was intended to solve. Say it showed you a picture with a cat and a chair. Should the keyword be "cat" or "chair", or both, or neither? There's no clear rule, but the game relied on player input to inform its decision. The rules for identifying abuse are always going to be fuzzy, but (hopefully) if enough eyes look at a post, you can reach a consensus.

the final decision lands with the person or team who casts the downvote.

This doesn't always have to be the case right? We could have a DAO kind of system since this is a downvote trail. Maybe a simple DApp where users can see the flagged posts and based on the common sentiment, the person who initiates the downvote can decide whether to vote or not. This would kind of be similar to the SPS but on a separate DApp.

Your thoughts?

Yeah, that's getting more complicated, but I think you're right. This is connecting it back to the idea about quorum sensing that I mentioned in the opening paragraph.

Is that even possible? To have someone just mark a post, and once a quorum is reached, a downvote is initiated? If I'm not wrong, every transaction on Steem has a 1-hour expiry; if a transaction is not broadcast within that window, it fails, right? So this is where it becomes very tricky.

I don't think it would be possible on-chain. The "vote broadcasting" would have to happen off-chain, through a web site or some other protocol. The only part that would be on-chain would be issuing the downvotes.

Even if it were possible, I think you'd want to keep it off-chain anyway, in order to prevent retaliation.
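A minimal sketch of that split, with hypothetical names: the quorum bookkeeping lives off-chain, and only the final downvote callback touches the chain, signed at the moment quorum is reached (so the 1-hour transaction expiry never comes into play).

```python
from collections import defaultdict

class OffChainQuorum:
    """Collect abuse flags off-chain; only the final downvote is ever
    broadcast, so individual flaggers never appear on the blockchain."""

    def __init__(self, threshold, broadcast_downvote):
        self.threshold = threshold
        self.broadcast_downvote = broadcast_downvote  # the one on-chain action
        self.flags = defaultdict(set)

    def flag(self, post_url, player_id):
        self.flags[post_url].add(player_id)
        if len(self.flags[post_url]) >= self.threshold:
            # Quorum reached: sign and broadcast a single downvote now,
            # from the stakeholder account, well inside the expiry window.
            self.broadcast_downvote(post_url)
            del self.flags[post_url]

# Usage sketch:
# quorum = OffChainQuorum(5, lambda url: print("downvoting", url))
# quorum.flag("https://steemit.com/@someone/some-post", "player-123")
```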

Oh, and I forgot. This is a good example:

For instance, you and I have both discussed someone on this blockchain committing borderline abuse for months, and milking the system of thousands of SBD in rewards, but that actor's posts are so realistic (likely written by AI) that whether or not it's abuse is not 100 percent clear.

From what I can see, it's actually somewhere around one hundred thousand SBD over the course of the last 10 months (and maybe more that I'm not aware of). It would be interesting to see if a game like this would find that actor, like @trufflepig did (accidentally, it seems).

If we are going to talk about abuse, I will tell you: sometimes I look for posts at random, as if I'm going hunting. I go to the Korean community, and at minute 2 a post has a value of 3 STEEM, and at minute 5 its value is 127 STEEM. If this is not abuse, it is a monopoly, and it's like this in several communities - the monopoly is constant. I have seen some photos that are plagiarism, and I say nothing; I think that this should be solved by the powerful panel (I don't know what to call them). It is not up to us to do their job, but if it were, I would start by eliminating the monopoly. That is a great abuse.

Thanks for the reply. I definitely agree with you that some of it is pretty easy to find, but most of us don't have enough stake to address it.

I agree with you @lupega. Of course, as a minnow, I worry about retaliation.

This is a shocking issue on the Internet. Abuse, plagiarism, and spamming are hateful practices - a kind of theft. At the beginning, there was some effort from the Steemit team to prevent it, but we don't see that now. I am basically hopeless about the development of Steemit. Sometimes I think there is no development at all, and I don't know why the development of Steem is so slow.

I have included this post in the 31st issue of Steem News Magazine For Steemit Platform | January 22, 2022.

The Steem blockchain contains plagiarism and spam. However, the level is very low, and it is difficult to get rid of it completely. Abuse needs to be prevented here.

Very interesting suggestion. Yes, I agree that the game would be good and could help in fighting abuse. Even if we aren't able to do it all, I remain optimistic.

The abusers may be grouped into several categories:

  • for plagiarism, we have @alex-move, who is working on a tool that will find it automatically and tag @ac-cheetah to double-check it. The project is now being extended from two groups to four; making it a solution for the whole platform is a matter of time (and money, unfortunately). Downvoting with our downvote trail, launched by @symbionts, gives some protection from flag wars, but the current 400,000 SP is still not enough to solve all the problems.
  • for the account farms, @the-gorilla has written a script that helps to detect them, so it should be easier to fight that sort of abuser now
  • for those who overuse bidbots by sharing spam, this can be reviewed by checking the list of the 1,000 users with the highest SP. Those who are not in the top 1,000 usually do not delegate to upvu or tipu, because it doesn't give them a satisfying income.

Sooo... we have the tools, but the game is still a great solution. The more people who are actively doing something, the fewer things will be accidentally skipped and the more abusers will self-limit. The biggest question is what we will do with these abusers once they are detected. Thanks to the trail and delegations, I now have more downvote power than @endingplagiarism had before, and yet it's only enough to fight 2-3 big fish at the same time, or one account farm. I may know about more people who are breaking the rules, but they have to wait in line for their turn. That's why, in my opinion, the emphasis on the trail (as an alternative to direct intervention by Steemit Inc.) is crucial.

Thank you for this useful and relevant comment.

#lucky10
