Bitcoin Forum
Topic: Call for compute, let's break records together!
Fuserleer (OP)
Legendary

Activity: 1064
Merit: 1016



May 09, 2024, 01:40:06 PM
 #1

Over the past couple of years, I've been working away on a research network named Cassie, which will lay the groundwork for the Radix network upgrade, Xian.

Cassie exhibits a number of novel and interesting properties, but the core goal was to implement a linearly scalable consensus protocol that also retains high decentralization and security.

Linearly scalable in this context means that if the compute (validators) available to the network doubles, then the maximum throughput of the network also doubles.
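To make that scaling concrete, here's a toy model of it (nothing Cassie-specific; the per-validator figure below is a made-up illustration, not a measured number):

Code:
# Toy linear-scaling model: maximum throughput grows in direct
# proportion to the number of validators.
TPS_PER_VALIDATOR = 1_200  # hypothetical per-validator contribution, for illustration only

def max_throughput(validators: int) -> int:
    """Expected sustainable TPS under a perfectly linear model."""
    return validators * TPS_PER_VALIDATOR

print(max_throughput(100))   # 120,000 TPS
print(max_throughput(200))   # 240,000 TPS -- double the validators, double the throughput
print(max_throughput(1000))  # 1,200,000 TPS -- the scale of a 1M+ TPS goal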

This has been tested extensively, both in the "lab" and with members of the Radix community participating in the tests. The results so far have been great: we have sustained 120,000 transactions per second (about 50% of them complex smart contract calls such as swaps) and consumed bursts of 160,000+ without issue.

Our plan over the next few months is to run a series of tests with the goal of exceeding 1,000,000 transactions per second for sustained periods of time.  This will require significant compute, hence my call across crypto in general for participation.

We could of course simply rent compute from the various cloud providers and do the test ourselves, but my desire here is for these tests to be as representative of main-net performance as possible.  

That requires that we (Radix) run only a minimal number of validators to bootstrap the network, with the rest provided by 3rd-parties.  The validators would then be globally distributed, on different hardware configurations & ISPs (we've had some guys use Starlink successfully at high load!), and behave akin to a main-net in the wild (minus the value, of course).

Too often these "tests" are performed in a "lab" environment totally under the control of the project stakeholders: run for short durations (typically minutes), with very simple transactions such as A->B transfers, high-specification hardware, super-fast connections & low numbers of validators.

In some cases, critical elements such as signature generation & validation have been disabled in order to push the numbers.

These results are then paraded as if they are some kind of achievement, but upon main-net launch the performance capability is a fraction of what the tests achieved.  It is disingenuous, dishonest & unhealthy, and it distracts from legitimate projects that are working hard on real scalability solutions.

We want to do it right!

If you'd like to participate, please send me a DM.  You will need a machine with a minimum specification of 4 cores, 8 GB RAM, 200 GB SATA SSD & a 20 Mbps/50 Mbps connection. If you have better-specification hardware, you could run multiple validators on the same instance.
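If you want to sanity-check a box against the CPU/RAM/disk part of that spec before DMing, something like this works (a sketch, assuming Python 3 with the third-party psutil package; it checks the total size of the root filesystem, and bandwidth you'd still have to test separately with a speed-test tool):

Code:
import os
import shutil

import psutil  # third-party: pip install psutil

GIB = 1024 ** 3

cores = os.cpu_count()                          # logical CPU count
ram_gib = psutil.virtual_memory().total / GIB   # total installed RAM
disk_gib = shutil.disk_usage("/").total / GIB   # total size of the root filesystem

print(f"cores: {cores} (need >= 4)")
print(f"RAM:   {ram_gib:.1f} GiB (need >= 8)")
print(f"disk:  {disk_gib:.1f} GiB (need >= 200)")

if cores >= 4 and ram_gib >= 8 and disk_gib >= 200:
    print("meets the minimum spec (still check your connection speed)")
else:
    print("below the minimum spec")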

I'm also interested in any suggestions to ensure these tests are as representative of the real world as they can be.

Thanks in advance, and I look forward to busting some records with you all!

