Author Topic: btc  (Read 5652 times)
testz
Legendary
Activity: 1764
Merit: 1018
June 08, 2015, 08:44:57 AM
#81

Quote
The idea that one can push an infinite number of transactions through the Monero network is utter nonsense. Monero uses adaptive limits that adjust the block size dynamically, as explained in section 6.2 of the CryptoNote whitepaper: https://cryptonote.org/whitepaper.pdf. This means there is no fixed maximum TPS that holds regardless of market conditions. That is the critical difference with not just Bitcoin, but with Litecoin, Dogecoin, Dash and many other altcoins.
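
For reference, here is a minimal sketch of the adaptive limit being described (the 2x-median hard cap and the quadratic reward penalty from the CryptoNote whitepaper); the function names and numbers are illustrative, not Monero's actual implementation:

Code:
from statistics import median

def block_size_limit(recent_block_sizes):
    """Hard cap: twice the median size of the last N blocks."""
    return 2 * median(recent_block_sizes)

def penalized_reward(base_reward, block_size, recent_block_sizes):
    """Blocks above the rolling median pay a quadratic penalty out of the reward."""
    m = median(recent_block_sizes)
    if block_size > 2 * m:
        raise ValueError("block exceeds the 2*median hard cap and is rejected")
    if block_size <= m:
        return base_reward                       # no penalty at or below the median
    penalty = base_reward * ((block_size / m) - 1) ** 2
    return base_reward - penalty                 # reaches zero at exactly 2*median

# Illustrative example: with a 300 kB rolling median, a 450 kB block
# keeps 75% of the base reward, and a 600 kB block earns nothing.
sizes = [300_000] * 100
print(penalized_reward(10.0, 450_000, sizes))    # 7.5
print(penalized_reward(10.0, 600_000, sizes))    # 0.0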

Thanks for sharing, but even if Monero has unlimited TPS, yesterday when I sent Monero to poloniex.com I waited for 16 confirmations and it showed up in my balance after 30+ minutes, while withdrawing BitShares from poloniex.com took me 3 minutes until the BTS were confirmed in my wallet. So theoretically Monero can be very fast; in practice it's even slower than Litecoin.
Of course I will be very happy to see a fast Monero network in the future.

jonald_fyookball
Legendary
Activity: 1302
Merit: 1004
June 08, 2015, 08:54:38 PM
#82

You're barking up the wrong tree, trying to make Bitcoin like Ripple, NXT, or BitShares... what kind of nonsense is this?

We already know the solution: short term, simply increase the block size, and longer term, let's see if we can get sidechains working.

Fuserleer
Legendary
Activity: 1064
Merit: 1016
June 08, 2015, 10:23:53 PM
Last edit: June 08, 2015, 10:36:28 PM by Fuserleer
#83

I'm still not convinced; a lot of assumptions are made, and some of the information is incorrect or not possible and needs further explanation...

Also, bear in mind the following two quotes as I present the points that don't seem to add up, conflict, or simply don't make sense.

Quote
It should be noted that we are talking about the capability of an individual computer which is the ultimate bottleneck
Quote
BitShares 2.0 will be capable of handling over 100,000 (100k) transactions per second on commodity hardware with parallel architectural optimizations in mind.

So, let's proceed...

Quote
...we make the assumption that the network is capable of streaming all of the transaction data and that disks are capable of recording this stream...

That is a bad assumption to make if you intend BitShares to run on commodity hardware, as stated in various texts relating to BitShares 2.0... if you wish to achieve that, then you should really be assuming that the network and disks are NOT capable of such a thing.

Quote
Todays high-end servers with 36 cores (72 with hyper-threading) could easily validate 100,000 transactions per second.

Erm... what happened to commodity hardware already?  Who has a 36-core CPU lying around?

Quote
The average size of a transaction is just 100 bytes

I just don't see how this is possible while maintaining the minimum amount of data required to ensure a verifiable transaction.  A 256-bit EC signature is ~70 bytes; 30 bytes sure doesn't seem like enough to specify two addresses, a transaction value, and anything else that is required.
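
To put rough numbers on that, here is a back-of-envelope byte budget assuming a DER-encoded ECDSA signature; the field sizes are purely illustrative and not the actual BitShares 2.0 wire format:

Code:
# Back-of-envelope byte budget for a "minimal" transaction. Assumes a
# DER-encoded ECDSA signature (~70-72 bytes); other field sizes are
# illustrative, not the actual BitShares 2.0 serialization.
fields = {
    "ecdsa_signature_der": 71,   # ~70-72 bytes in practice
    "sender_address":      20,   # e.g. a 160-bit account/address hash
    "recipient_address":   20,
    "amount":               8,   # 64-bit integer
    "fee":                  8,
    "nonce_or_expiration":  4,
}
print(sum(fields.values()))      # 131 bytes, already above the claimed 100-byte average

Even if the non-signature fields were compressed down to small numeric account IDs, the signature alone would consume most of a 100-byte budget.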

It's worth noting that the quoted figures for average BTC transactions are also incorrect, per this page: http://bitshares.github.io/technology/high-performance-and-scalability/

Quote
The average size of a transaction on competing networks, such as Ripple and Bitcoin, is about 250 bytes.

I recall a few respected members here doing research into this, and the average BTC transaction was at least 2x that, usually 3x or greater.

Quote
This is well within the capacity of most data centers and therefore not considered part of the bottleneck.

Huh? So we have gone from commodity hardware to data centers?  What about keeping things decentralized or on commodity hardware?

Quote
After creating the block chain we timed how long it took to “reindex” or “replay” without signature verification. On a two year old 3.4 Ghz Intel i5 CPU this could be performed at over 180,000 operations per second.

That is a statistic I can swallow, but my question is: WHO is doing the signature validation?  Only people with 36-core machines and 1TB of memory?  If so, how are the rest of the nodes in the network ensuring that this now-centralized task is done properly?  How can I, with my lowly 8-core machine, be sure that the transactions are indeed valid and signed correctly without also having to verify 100k transactions per second?

Quote
We set up a benchmark to test real-time performance and found that we could easily process over 2000 transactions per second with signature verification on a single machine.

Ahh, so 100k per second really is only available to people who own 36-core CPUs in a spare data center?  If this single machine consisted of commodity hardware, and most users of the network will have something similar, it's not 100k per second, is it; it's 2k.
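
A quick back-of-envelope calculation makes the gap explicit, assuming (optimistically) that signature verification scales near-linearly with core count; the benchmark core count below is hypothetical, since the post doesn't state it:

Code:
# Rough scaling arithmetic for the quoted claims. Assumes near-linear
# scaling of signature verification with core count, which is optimistic.
commodity_rate = 2_000           # verified tx/s on the benchmarked single machine
target_rate    = 100_000         # claimed tx/s for BitShares 2.0

scale_factor = target_rate / commodity_rate
print(scale_factor)              # 50x more verification throughput needed

benchmark_cores = 4              # hypothetical; the benchmark's core count isn't given
print(benchmark_cores * scale_factor)   # ~200 cores under linear scaling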

Remember this from up top: "It should be noted that we are talking about the capability of an individual computer which is the ultimate bottleneck", which is in turn confirmed by this next statement:

Quote
On release, the transaction throughput will be artificially limited to just 1000 transactions-per-second.

If 100k tx/s is really achievable on commodity hardware, why limit it to 1000 transactions per second on release?  Could it be that 100k/s on commodity hardware actually is not possible, and this 1k limit is actually there to accommodate machines slower than the test-bed machine that could achieve 2k/s?

If that is not the case then I am totally confused. Is it limited by an individual computer to 2k tx/s, or is it not?  Do you need the suggested 36 cores to be able to process 100k tx/s, and if so, what about my question regarding machines that are slower?  If the majority of machines are indeed only able to process 2k/s, what purpose do they serve in the network?  Are they redundant in ANY transaction processing?

On the surface it all seems like conflicting statements and contradictory numbers.  If the system can process 100k/s without centralized nodes packing 36 cores and 1TB of RAM, then I'll take my hat off, but all this information is so confusing that I don't even know what to take away from it.

StanLarimer
Hero Member
Activity: 504
Merit: 500
June 09, 2015, 12:28:26 PM
Last edit: June 09, 2015, 03:06:31 PM by StanLarimer
#84

Quote from: Fuserleer on June 08, 2015, 10:23:53 PM
(Fuserleer's post #83 above, quoted in full)



YOUR THOUGHTFUL RESPONSE IS MUCH APPRECIATED


Since you took the trouble to read some of our BitShares 2.0 documentation and have prepared a polite, professional response, I am pleased to join you in a serious exchange of ideas. :)

I'll limit my first response to just one of your lines of questioning, lest too many trees hide the forest.

You can think of the transaction rate setting in BitShares 2.0 as a form of "dial a yield".  It can be dynamically adjusted by the stakeholders to provide all the throughput they want to pay for.  Since BitShares is designed to be a profitable business, it only makes sense to pay for just enough processing capacity to handle peak loads.    

The BitShares 2.0 blockchain has a number of "knobs" that can be set by elected delegates.  One of them is throughput.  The initial setting of that knob is 1000 transactions per second because right now that is plenty and allows the maximum number of people to provide low-cost nodes.  A second knob is the witness node pay rate.  If doubling the throughput requires slightly more expensive nodes, the stakeholders just dial up the pay rate until they get enough bidders to provide the number of witness nodes they decide they want (another dial).  Pay rate scales with throughput, which scales with revenue, which scales with transaction volume.
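
As a toy model of the relationship being described (all names and numbers are illustrative, not actual BitShares 2.0 parameters):

Code:
# Toy model of the "knobs" described above: throughput and witness pay are
# consensus parameters set by elected delegates, and pay scales with the
# throughput the stakeholders choose to fund. All names and values here are
# illustrative, not actual BitShares 2.0 parameters.
from dataclasses import dataclass

@dataclass
class ChainParameters:
    max_tps: int                  # throughput knob
    witness_pay_per_month: float  # pay-rate knob
    witness_count: int            # how many block signers the stakeholders want

def propose_upgrade(current: ChainParameters, demand_tps: int) -> ChainParameters:
    """Delegates propose new knob settings when transaction demand grows."""
    if demand_tps <= current.max_tps:
        return current
    # Assume (illustratively) that pay rises in proportion to throughput,
    # so enough bidders can afford the next-bigger commodity server.
    scale = demand_tps / current.max_tps
    return ChainParameters(
        max_tps=demand_tps,
        witness_pay_per_month=current.witness_pay_per_month * scale,
        witness_count=current.witness_count,
    )

# Example: an illustrative 1,000 tps launch setting, doubled to meet new demand.
launch = ChainParameters(max_tps=1_000, witness_pay_per_month=50.0, witness_count=101)
print(propose_upgrade(launch, demand_tps=2_000))

In this sketch, doubling the throughput knob to 2,000 tps also doubles the (illustrative) witness pay to $100/month, which is the scenario described in the next paragraph.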

Now, suppose that a few big applications were to decide to bring all their transactions to the neutral BitShares platform one summer.  If we needed to double the throughput, here's what would happen.

The elected delegates would discuss it in public and then publish their recommended new knob settings.  Perhaps they pick 2000 transactions per second and $100/month pay for each witness node provider.  Everyone who wants to compete for that job then has the funds to upgrade their servers to the next bigger off-the-shelf commodity processor.

As soon as they change those knob settings, the blockchain begins a two-week countdown during which the stakeholders are given a chance to vote out the delegates from their wallets if they don't like the change.  If they are voted out, the blockchain automatically aborts the adoption of the new settings.  If not, the settings are changed and the BitShares network shifts gears to run faster.
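
The countdown amounts to a delayed parameter change that can be vetoed by voting out the proposing delegates. A minimal sketch of that mechanism (only the two-week delay comes from the post; everything else is illustrative, not BitShares code):

Code:
# Minimal sketch of the review window described above: a proposed parameter
# change only activates after a delay, and is aborted if the proposing
# delegates are voted out in the meantime.
from datetime import datetime, timedelta

REVIEW_PERIOD = timedelta(weeks=2)

class PendingChange:
    def __init__(self, new_params: dict, proposed_at: datetime):
        self.new_params = new_params
        self.activates_at = proposed_at + REVIEW_PERIOD
        self.aborted = False

    def on_delegates_voted_out(self):
        """Stakeholders rejected the proposing delegates during the countdown."""
        self.aborted = True

    def effective_params(self, current_params: dict, now: datetime) -> dict:
        if self.aborted or now < self.activates_at:
            return current_params        # still running on the old settings
        return {**current_params, **self.new_params}

# Example: a throughput bump proposed today takes effect in two weeks,
# unless the delegates are voted out first.
now = datetime(2015, 6, 9)
change = PendingChange({"max_tps": 2_000}, proposed_at=now)
print(change.effective_params({"max_tps": 1_000}, now))                       # old settings
print(change.effective_params({"max_tps": 1_000}, now + timedelta(weeks=3)))  # new settings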

There is enough reserve capacity in the software to double our throughput about 8 times over (2^8 ≈ 256x), scaling by more than two orders of magnitude with a simple parameter adjustment.

The current knob setting gives us plenty of reserve capacity at the lowest possible witness node price.  It could absorb all of Bitcoin's transaction workload without needing to even touch those dials.  But if we ever need to take on the workload of something like NASDAQ, VISA, or MasterCard, we can dial up the bandwidth to whatever level the stakeholders vote to support.
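
For scale, here are the commonly cited ballpark figures behind that comparison (approximate, circa 2015, and not taken from the post itself):

Code:
# Commonly cited ballpark throughput figures, circa 2015 (approximate).
bitcoin_tps_max  = 7         # ~3-7 tx/s with 1 MB blocks
visa_tps_average = 2_000     # often-quoted average load
visa_tps_peak    = 24_000    # often-quoted peak-capacity figure

bitshares_launch_tps = 1_000
print(bitcoin_tps_max <= bitshares_launch_tps)    # True: fits within the launch setting
print(visa_tps_peak / bitshares_launch_tps)       # 24.0: would need the dials turned up

Covering the quoted VISA peak figure would take about five doublings of the throughput knob (32x), which is within the roughly eight doublings of reserve capacity mentioned above.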

So, the BitShares 2.0 platform has plenty of spare bandwidth to handle the ledger bookkeeping functions of just about every blockchain currently in existence.  You are all welcome to join us and start using your mining expenses to pay your own developers and marketers instead of electric companies. Nothing else about your business models or token distributions would change.  Simply outsource your block signing tasks and get on with your more interesting, earth-shaking ideas.  Think of the industry growth we would all experience if most of the funds spent on block signers were used to grow our common ecosystem instead.  We could all share one common, neutral global ledger platform, where cross-chain transactions, smart contracts, and other such innovations were all interoperable!

Or will we waste our industry's development capital on a never-ending mining arms race?  Carpe diem!

Stan Larimer, President
Cryptonomex.com

