"On-block" is obviously not a solution.
Yes, that is correct in the case of Bitcoin, due to the way it has been implemented: all that happens is that 20,000 nodes each keep 200 GB of data in sync, so it's one big set of files being replicated. That is not what you would call distributed processing, which would look more like 50 small teams of nodes, with each team being responsible for 1/50th of the data.
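To make the "each team owns 1/50th" idea concrete, here is a minimal sketch of deterministic sharding. The shard count, function names, and the transaction id are all my own illustration, not anything from Bitcoin's actual code: the point is only that every node can compute the same key-to-team mapping, so only one team needs to store and validate a given record.

```python
import hashlib

NUM_SHARDS = 50  # hypothetical: one "team" of nodes per shard

def shard_for(tx_id: str) -> int:
    """Deterministically map a transaction id to one of NUM_SHARDS teams."""
    digest = hashlib.sha256(tx_id.encode()).digest()
    # Take the first 8 bytes as an integer and reduce modulo the shard count.
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Every node computes the same answer, so there is no need to broadcast
# the transaction to all 20,000 nodes -- only to the responsible team.
team = shard_for("example-txid-0001")
```

Because the mapping is pure and deterministic, no coordinator is needed to decide which team holds what.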
First-year computer science students could have worked out that one big fat data structure would reach a limit, and roughly when it would be reached. But I think what has happened with Bitcoin is that cryptographic experts (yes, I grant they are good at cryptography) have been allowed to take over from computer programmers, and academia has been allowed to run wild, so each stage has been over-engineered while the basics have been forgotten.
Ripple presents its data as if it were "on-block" and can do 50,000 transactions per second, not just Bitcoin's seven. I would guess it uses a tree-type structure, much like the way DNS servers work, but that is far from the only configuration; you can also have a star structure. And you need to understand that centralization is not as black and white as they are making out; Lightning relies on it to some degree anyway.
I mean, mining to see who's got the fastest processor: how did us true professionals ever manage without it all these years? Believe me, I have a lot more to say about the architecture than I can discuss here, and it's not me that's cocked up and spent two years unable to decide whether we should be using SegWit or not.