RE: Dev Portal Update: Tutorials, Recipes, and Tweaks, Oh My!
Wanting to run something like this on any hardware is a common request. That's why kong exists. It's OpenResty (Lua on nginx). It's ridiculously flexible and powerful. Best of all, it already exists and is supported by a broader community.
IMHO, it simplifies things because now I can reduce my footprint to just a reverse proxy and/or a load balancer. That is, I can funnel requests through my reverse proxy before they hit the load balancer, or I could build the multiplexing into my load balancer cluster.
Regarding the tinkering use case, I think kong was created specifically for that because of its OpenResty foundation. It's meant for fail-fast development. Or rather, it's meant for you to be able to develop your plugins on a running application, in just the kind of environment that a raspberry pi will create for you.
That said, if I were going to tinker, I would use docker unless it were an IoT project. A raspberry pi really only makes sense when you have hardware you want to extend with software but no means of running that software on the hardware itself, so you forcibly add a software solution to your hardware (ironically, with more hardware). For example, if I wanted to connect my home security system to steemit and multiplex with kong/jussi on my raspberry pi, that would make sense. If I just wanted to try out jussi/kong to see how many apps I could connect before resources became constrained, I would use docker.
Check it out: https://konghq.com/kong-community-edition/
If I wanted to add the ability to cache a json-rpc batch request coming from the client, that is, write a plugin for kong that accepts the request and caches the upstream response, how much time would you estimate someone would have to spend writing that functionality in lua, generalized and production-ready so that anyone in the community could use the same plugin for their own deployment?
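Just to scope the question concretely, I'm picturing something along these lines for the access phase. This is only a sketch, not working code: the plugin name, the shared dict, and the cache key scheme are all made up, it follows kong's BasePlugin handler style, and a real version would also need a body_filter or log phase (not shown) to actually capture and store the upstream response, and would probably split the batch and cache each call individually rather than keying on the whole body.

```lua
-- handler.lua for a hypothetical "jsonrpc-batch-cache" plugin (sketch only)
local BasePlugin = require "kong.plugins.base_plugin"
local cjson = require "cjson.safe"

local JsonRpcCacheHandler = BasePlugin:extend()
JsonRpcCacheHandler.PRIORITY = 1000

function JsonRpcCacheHandler:new()
  JsonRpcCacheHandler.super.new(self, "jsonrpc-batch-cache")
end

function JsonRpcCacheHandler:access(conf)
  JsonRpcCacheHandler.super.access(self)

  -- read the incoming json-rpc batch from the client
  ngx.req.read_body()
  local body = ngx.req.get_body_data()
  local batch = body and cjson.decode(body)
  if not batch then
    return -- not json we understand; just proxy it through untouched
  end

  -- assumes a lua_shared_dict named "jsonrpc_cache" is declared in the
  -- nginx config; kong's own cache could be used instead
  local cache = ngx.shared.jsonrpc_cache
  local key = ngx.md5(body)

  local cached = cache:get(key)
  if cached then
    -- serve the stored upstream response without proxying at all
    ngx.header["Content-Type"] = "application/json"
    ngx.say(cached)
    return ngx.exit(200)
  end

  -- remember the key so a later phase can store the upstream response
  ngx.ctx.jsonrpc_cache_key = key
end

return JsonRpcCacheHandler
```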
Keep in mind, this hypothetical kong plugin would need to be blockchain aware. It would need to know things like whether each block being requested in the batch falls before or after the last irreversible block, and set each entry's TTL accordingly. Stuff like that.
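To put a finer point on "blockchain aware", I mean per-call logic roughly like this. Again just a sketch: the method name, the param position, and the TTL values are assumptions about a steemd-style upstream (get_block taking the block number as its first param, last_irreversible_block_num coming from get_dynamic_global_properties), and getting that policy exactly right is where the real time would go.

```lua
-- ttl.lua, a hypothetical helper for the same plugin (sketch only)
-- Decides how long to cache one call out of the batch, given the chain's
-- last irreversible block number, which the plugin would have to refresh
-- periodically (e.g. via get_dynamic_global_properties).

local IRREVERSIBLE_TTL = 0 -- 0 means "never expires" for an ngx shared dict
local REVERSIBLE_TTL   = 3 -- seconds; a reversible block can still change
local DEFAULT_TTL      = 3 -- anything we can't reason about

local function block_num_from_call(call)
  -- assumes get_block carries the block number as its first positional param
  if call.method == "get_block" and type(call.params) == "table" then
    return tonumber(call.params[1])
  end
  return nil
end

local function ttl_for_call(call, last_irreversible_block)
  local block_num = block_num_from_call(call)
  if not block_num then
    return DEFAULT_TTL
  end
  if block_num <= last_irreversible_block then
    -- the block can never change again, so the response is safe to keep
    return IRREVERSIBLE_TTL
  end
  return REVERSIBLE_TTL
end

return {
  ttl_for_call = ttl_for_call,
}
```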
If you want to do a real cost-benefit analysis here, you'll have to compare the actual work on both sides, but then also weigh what you're getting.
I'm basically saying that the amount of work to build the same functionality currently in jussi, but in kong, would be less, simply because most of the machinery (the proxying, the routing, the plugin framework) already exists.
Also, it's not just better because the work would be less; the benefits are greater.
Whenever the question is "Should we reinvent the wheel?" the answer is almost always, "no".
You should totally write that plugin.
Actually, I had started a kong plugin, but it's not for this. It's for pricing out app access to private nodes. Another reason why I would really like "full-blown" kong support.
It's on my list. Just after "super suit."
I estimate it would take less time than it has taken to build/design/maintain jussi. A lot less.
So, challenge accepted? :D
LOL. You're right. It's a lot easier to complain, and hindsight is always 20/20. You want to actually see it. All I can say is, I've been working on a refined steemit infrastructure that provides turnkey solutions for app developers. I may be all talk, though, because I have so many ideas and I like to go in all directions simultaneously. It's a character flaw.
Oh my god, it's me. Only at the business-dev level, not hardcore software development.