I'll try again...
Would one solution be to have a sequence of nodes, where the next 'n' forging nodes are always known, not necessarily in parallel but in close sequence? Node 1 identifies node 2, node 2 identifies node 3 (based on 1 and 2), node 3 identifies node 4 (based on 1, 2 and 3), and so on. This would create a forging mesh within the network, and clients could choose which node to send to based on latency and forging window. (Mobile networks and phones do something like this all the time; yes, I know about dropped calls, it's not perfect!)
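A rough sketch of what "node k identifies node k+1 from all previous forgers" could mean. Everything here is hypothetical: the node names, the candidate set, and the idea of hashing the identities of the forgers so far to pick the next one deterministically.

```python
import hashlib

def next_forger(previous_forgers, candidates):
    # Hypothetical rule: hash the identities of all forgers so far,
    # then use the hash to pick the next forger from the candidate set.
    # Every node can compute this, so the next 'n' forgers are known.
    seed = hashlib.sha256("|".join(previous_forgers).encode()).digest()
    index = int.from_bytes(seed, "big") % len(candidates)
    return candidates[index]

candidates = ["node-A", "node-B", "node-C", "node-D"]
sequence = ["node-A"]  # node 1 is assumed known
for _ in range(3):
    sequence.append(next_forger(sequence, candidates))
print(sequence)
```

Because the rule is a pure function of the sequence so far, any client can precompute the same forging sequence and plan where to send its transaction.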
Choosing a node based not just on the current parameters but also on the node's awareness of the network topology would mean a continual distribution of nodes, so that statistically there is always a node close, in latency and forging window, to any client that wants to transact...
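One way a client could weigh "latency and forging window" together, as a sketch only; the node records, the scoring rule, and the numbers are all made up for illustration.

```python
def pick_node(nodes, now):
    # Hypothetical client-side choice: estimate when the transaction
    # would arrive at each node, then prefer the node with the least
    # total delay before its forging window can actually process it.
    def total_delay(n):
        arrival = now + n["latency_ms"] / 1000.0
        wait = max(0.0, n["window_start"] - arrival)  # idle time before the window opens
        return wait + n["latency_ms"] / 1000.0
    return min(nodes, key=total_delay)

nodes = [
    {"name": "near-late", "latency_ms": 20,  "window_start": 12.0},
    {"name": "far-soon",  "latency_ms": 200, "window_start": 1.0},
]
print(pick_node(nodes, now=0.0)["name"])  # picks "far-soon"
```

The point of the example: the lowest-latency node is not always the best choice if its forging window is far away, which is exactly why the client needs both pieces of information.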
Depending on the capacity required, nodes would seek optimum connections with other nodes to achieve that capacity. The network could also advertise its current processing capacity against its current demand, and ask for more nodes, ones that might have been sleeping because they weren't needed before.
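The "wake sleeping nodes on demand" part could be as simple as this sketch; the capacity units and node names are hypothetical.

```python
def nodes_to_wake(active_capacity, demand, sleeping, per_node_capacity):
    # Hypothetical: if advertised demand exceeds what the active nodes
    # can process, wake just enough sleeping nodes to cover the shortfall.
    shortfall = demand - active_capacity
    if shortfall <= 0:
        return []  # current capacity is enough; nobody gets woken
    needed = -(-shortfall // per_node_capacity)  # ceiling division
    return sleeping[:needed]

print(nodes_to_wake(active_capacity=800, demand=1000,
                    sleeping=["s1", "s2", "s3"], per_node_capacity=150))
# shortfall of 200 at 150/node -> wakes two nodes
```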
You don't have to deploy static network models like hub/spoke or regional or whatever...
Normally people think a system is stronger than any single part, but what I read here is how, for a moment, one part has to stand out from all the rest, and that makes it vulnerable.
This is why we need groups of forging accounts.
ARRRGHHH NO!!! THIS IS NOT ABOUT POOLS
This is about nodes having a more intelligent relationship with other nodes rather than the primitive one we have now...
See pages back on self optimising networks.
Even in the ether, or whatever the internet is, there are physical relationships defining the connections between nodes. Define rules for which nodes establish which connections, based on physics and node behaviour, and you create a strong network or mesh. Add a bit more, where this mesh tells the clients how to work, and they know where to send their transaction to get it processed fastest. There will be some collisions, but as long as there are no hotspots (which the network will adjust for anyway, if it can)...
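A toy version of "rules for which nodes establish which connections": each node links to its k lowest-latency peers, so the mesh grows from physics rather than a fixed hub/spoke plan. The latency table and the rule itself are illustrative assumptions.

```python
def build_mesh(latency, k=2):
    # Hypothetical self-optimising rule: every node connects to its
    # k lowest-latency peers. No node is a designated hub; the shape
    # of the mesh falls out of the measured latencies.
    mesh = {}
    for node, peers in latency.items():
        mesh[node] = sorted(peers, key=peers.get)[:k]
    return mesh

latency = {
    "A": {"B": 10, "C": 50, "D": 90},
    "B": {"A": 10, "C": 40, "D": 80},
    "C": {"A": 50, "B": 40, "D": 30},
    "D": {"A": 90, "B": 80, "C": 30},
}
print(build_mesh(latency))
```

Re-running the rule as latencies change is what would let the network redistribute itself and smooth out hotspots over time.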
You need NXT in a node to make it a strong node (that's what pools are about), but this is about creating a strong and responsive network of nodes.