The issue is that LN trying to be a mesh network means it cannot scale. The creators admit the whole thing could become dysfunctional somewhere between 10,000 and 1 million LN users.
That's because the path changes from each hop have to be relayed to everyone on the network. Eventually there is so much useless information being relayed that nodes can't process it.
This is the crux of the unsolved problem of mesh network topology. It likely can't be solved in any sense for LN until the tech is more advanced and we have better internet and cheaper server costs, and even then probably something less than 1 million users could use it at once without bogging down the system.
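To make the relay problem above concrete, here's a toy model (my own illustration, not from the LN creators) assuming every channel/path update is gossiped to every node in the network. Per-node load grows linearly with node count, and total network traffic grows roughly quadratically:

```python
# Toy model of gossip overhead in a naive mesh/broadcast network.
# Assumption (mine): every node's updates are relayed to all other nodes.

def per_node_load(num_nodes, updates_per_node):
    """Messages each node must process per interval:
    it receives every other node's updates."""
    return (num_nodes - 1) * updates_per_node

def total_network_load(num_nodes, updates_per_node):
    """Total messages handled network-wide: grows roughly as N^2."""
    return num_nodes * per_node_load(num_nodes, updates_per_node)

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} nodes: {per_node_load(n, 1):>9} msgs/node, "
          f"{total_network_load(n, 1):>14} msgs total")
```

Even at one update per node per interval, a million-node network is shuffling ~10^12 messages around, which is the "bogging down" the comment describes.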
What does a scalable solution actually mean? Does this mean that the solution is scalable in the number of users, in the number of transactions, or in the size of the network? If a P2P network is capable of processing thousands of transactions, can we call the solution scalable? If so, what happens when the network doubles in size: can the throughput be maintained? In fact, a solution that is scalable in a single dimension may not be well-suited for a use case that requires scaling in a different dimension. Hashgraph currently scales only in the number of transactions processed but does not scale with the number of nodes in the network. Zilliqa, for instance, scales with the number of nodes in the network.
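One way to see the two dimensions the quote separates (transactions vs. nodes) is a quick sketch. The numbers below are illustrative assumptions of mine, not benchmarks of either project:

```python
# Contrasting two scaling dimensions: throughput vs. node count.
# PER_NODE_TX_CAPACITY is an assumed figure for illustration only.

PER_NODE_TX_CAPACITY = 100  # tx/sec a single node can validate (assumed)

def replicated_throughput(num_nodes):
    # Model where every node processes every transaction:
    # adding nodes adds resilience, but not throughput.
    return PER_NODE_TX_CAPACITY

def sharded_throughput(num_nodes, shard_size=600):
    # Zilliqa-style model: nodes split into shards that process
    # disjoint transactions, so throughput grows with node count.
    return PER_NODE_TX_CAPACITY * max(1, num_nodes // shard_size)

for n in (600, 1200, 2400):
    print(f"{n} nodes -> replicated: {replicated_throughput(n)} tx/s, "
          f"sharded: {sharded_throughput(n)} tx/s")
```

Doubling the node count doubles throughput in the sharded model but leaves it flat in the fully replicated one, which is the distinction the article is drawing.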
I find the Hashgraph guys to be selling it very hard and understating the downsides, almost defensive about them. This is a good example of some confusion. So the answer is, it depends on the throughput, and that depends on the use case. If there aren't that many packages (messages) but there are tons of people (nodes) then it might be applicable.
"Hashgraph currently scales only in the number of transactions processed but does not scale with the number of nodes in the network."
From the article you quoted.
" If there aren't that many packages (messages) but there are tons of people (nodes) then it might be applicable."
From your reply.
Unless I am even more confused and stupid than I think I am, I find those two statements contradictory. That being said, I was unaware of the scalability issue with Hashgraph, and don't understand it. I'm not that surprised really, as I am not a coder, and don't expect to be able to follow deep into the nuts and bolts.
If Hashgraph doesn't scale node-wise, then I'm saddened, but glad you pointed it out. Checking out Zilliqa now, in the hope that scalability in all three dimensions either turns up or can be cobbled together soon.
In the article, the author is talking about taking the dimensions of use independently: number of messages and payload size (which together combine to throughput), and number of nodes. I thought that for the same number of nodes, Hashgraph can scale in throughput, and for the same amount of throughput, it can scale in number of nodes (that is, holding each variable fixed and scaling the other), but perhaps my interpretation was incorrect. Thanks for that; now I'm not sure.
I'll need to do more reading actually because there's not quite enough here. I'll get back to you on that.
Hashgraph definitely looks like the future. I can't wait to see things begin to appear using it. Not sure how close we are to that yet, though 💯🐒
Is this perhaps related to hash chains, such as what Holochain is using? https://github.com/holochain/holochain-proto/blob/whitepaper/holochain.pdf
Sorry, just kind of a n00b, but I'm trying to figure all of this out.
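For anyone else following along: the general hash-chain idea mentioned above can be sketched in a few lines. This is the generic concept (each entry commits to the hash of the previous one), not Holochain's actual implementation:

```python
# Minimal hash-chain sketch: each entry stores the previous entry's
# hash, so any tampering breaks verification from that point on.
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def _digest(data, prev):
    payload = json.dumps({"data": data, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_entry(chain, data):
    """Append an entry linked to the current chain tip."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"data": data, "prev": prev, "hash": _digest(data, prev)})
    return chain

def verify(chain):
    """Recompute every link; False if any entry was altered."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev or entry["hash"] != _digest(entry["data"], prev):
            return False
        prev = entry["hash"]
    return True

chain = []
add_entry(chain, "first")
add_entry(chain, "second")
print(verify(chain))  # True
```

Changing any earlier entry changes its hash, which no longer matches the `prev` recorded by the next entry, so `verify` fails. That tamper-evidence is the property hash chains (and Holochain's per-agent source chains) rely on.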