RE: Introducing FlagTrail - Steemauto for Flagging Abuse
I very much agree that the tone and choice of words from @imacryptorick were very much out of line. But knowing him, he's very passionate about Steem and, as an investor, of course also interested in its success. And since utopian-io is such an important part of the Steem ecosystem, I'm pretty sure that's why he got so emotional.
Now with that said, I do think this problem is not just about this specific post and review, but a general one. When reviewers receive between 1/4 and 3/4 of the rewards that the reviewed post itself received, i.e. of what the actual written code gets, then depending on the amount of code written, this can be quite unfair.
How long does it take to review the code of this post? 5 minutes? 10 minutes? 30 minutes? Probably 5 or 10, and with only 2 lines of commentary there is no proof the moderator actually looked at it even for that long. Either way, I'd say the time spent writing the code and creating the post is much more than that.
Now, I do want to say that I'm grateful for the system in place and that the current rewards are better than nothing, but when the reviews themselves are so overvalued and based on so many soft factors, I can understand when people get frustrated, myself included.
Please take this comment as constructive feedback, because I love what you're doing, @elear, and I'm sure Utopian 2.0 will be amazing. However, many developers, including myself, don't feel valued enough, and I simply want to shine a light on that.
Possible solutions would be to reduce the weight of soft factors and to add another factor for how relevant/valuable the contribution is to the Steem ecosystem.
The reward given for a review is not in their control, so it's not really fair to criticise them for that. Also, what do you mean by soft factors? I'm assuming you are talking about the questions on comments and commit messages? If so, those already have a really low weight, and the most emphasis is put on the amount of work, the significance/impact of the update on the project, and the quality of the code.
As for the quality of the review comment: we used to justify every single decision we made, e.g. about the quality of the code, and give examples of how it could be improved, but some people got really offended by this and complained. Because of that we have been trying to find a middle ground, which we are obviously still searching for, as you can see. While I agree that @codingdefined could maybe have added a bit more justification for the given score, the contributor could also simply have asked him to expand on it. Instead we got this shitstorm, which really doesn't benefit anyone, to be honest.