Bitcoin Forum
Author Topic: New Trading Engine reaches 300k TPS in first tests  (Read 2819 times)
Saman (OP)
Jr. Member
Activity: 32
Merit: 1
May 02, 2013, 08:41:05 PM
#1

Hi

Just wanted to draw your attention to the video we released today. We are working on a new trading platform (http://btc.uy), and an early-stage test shows that we reach over 300k transactions per second in our order matching engine. The matching logic itself is capable of many millions of TPS; the bottleneck at the moment is shuffling more than 300k messages per second in and out of the engine. This test was done on fairly modest hardware, so we are confident we can increase this as we go.

For the technically interested, here is the video:

http://www.youtube.com/watch?v=liN_ipXcMHw

or on Reddit:

http://www.reddit.com/r/BTCglobal

AMA.

-SM
Omen1855
Member
Activity: 111
Merit: 10
Bitcoin Entrepreneur
May 02, 2013, 08:46:47 PM
#2

Nice work (based on your English explanation). Any plans to expand this outside of Latin America? Or am I missing something?

Affiliation code wTksie2mDj for a 10% discount on your trading fees on BitFinex
Saman (OP)
Jr. Member
Activity: 32
Merit: 1
May 02, 2013, 08:59:16 PM
#3

Quote from: Omen1855
Nice work (based on your English explanation). Any plans to expand this outside of Latin America? Or am I missing something?

Very likely. We started this as a Latin American platform, but it's a pretty sure bet that we will go global with it. The software and hardware of an exchange are only half the battle: there are legal, organizational and banking matters to consider, and we are currently working on those. At the moment I cannot comment on legal and banking issues, as we are in the process of obtaining licenses and negotiating with possible partners from the financial industry. But it looks very promising.
Come-from-Beyond
Legendary
Activity: 2142
Merit: 1009
Newbie
May 02, 2013, 09:27:16 PM
#4


Russian accent :)
btcmind
Member
Activity: 98
Merit: 10
May 02, 2013, 09:27:42 PM
#5

Without more context about the test setup, this is not very meaningful. I assume you are running on one local machine, so you're not dealing with network latency, protocol overhead, etc. For instance, if you go through a web browser, how much latency does that add? Perhaps 10x? Or perhaps 10^4x? 300k presumably means all of this is in RAM?

Why not instead have a bunch of distributed agents submitting messages and then measure performance with some more useful metrics? Much more interesting than average latency is the latency distribution of the worst cases.
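
As a rough illustration of the kind of harness being suggested here (a sketch only; the agent count, order rate and the submit_order stub are invented for the example), each agent records per-order round-trip times and the results are summarised as percentiles rather than an average:

Code:
# Sketch of a distributed-load benchmark (illustrative only).
# submit_order() is a stand-in for a real network round trip to the exchange.
import random
import threading
import time

def submit_order(order):
    # Placeholder for an HTTP/WebSocket/FIX round trip; a random sleep
    # stands in for network plus engine latency.
    time.sleep(random.uniform(0.0005, 0.01))

def agent(n_orders, samples, lock):
    local = []
    for _ in range(n_orders):
        start = time.perf_counter()
        submit_order({"side": "buy", "price": 100.0, "qty": 1})
        local.append(time.perf_counter() - start)
    with lock:
        samples.extend(local)

def percentile(sorted_samples, p):
    # Nearest-rank percentile on an already-sorted list.
    idx = min(len(sorted_samples) - 1, int(p * len(sorted_samples)))
    return sorted_samples[idx]

if __name__ == "__main__":
    samples, lock = [], threading.Lock()
    agents = [threading.Thread(target=agent, args=(200, samples, lock))
              for _ in range(8)]            # 8 concurrent "agents"
    for t in agents:
        t.start()
    for t in agents:
        t.join()
    samples.sort()
    print("p50 %.1f ms" % (percentile(samples, 0.50) * 1000))
    print("p99 %.1f ms" % (percentile(samples, 0.99) * 1000))
    print("max %.1f ms" % (samples[-1] * 1000))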
runeks
Legendary
Activity: 980
Merit: 1008
May 03, 2013, 02:26:34 AM
#6

Looks very cool!

You mention it's written in Factor. Is it entirely in Factor, or do you use other languages?

Also, how many lines of code are we talking about?

Last question: the benchmarking you do in the video, is it essentially creating random limit orders and submitting them to the matching engine? If so, have you tried simulating with a sequence of orders from a real exchange, so it's closer to real life? Generating random orders will put a lot of orders on the book instead of executing them, whereas I imagine a real order stream contains more orders at or around the bid/ask price, causing more trades to take place and possibly slowing it down below 300k TPS. I don't know much about matching engines, though; it's just a hunch.
netrikare
Newbie
Activity: 11
Merit: 0
June 10, 2013, 10:39:10 PM
#7

Vladimir: how many TPS can you handle with an ACID-persistent database? A benchmark of the order matching algorithm alone tells us nothing about the overall time to complete a buy/sell order.
monsterer
Legendary
Activity: 1008
Merit: 1000
June 11, 2013, 08:06:22 AM
#8

IMO performance should be a secondary consideration. Your first priority should be consistency, since you are dealing with money; balances, transactions and orders (in the DB) should either all be processed or none of them, even in the case of a power failure. This is what ACID gives you, as I'm sure you know.
runeks
Legendary
Activity: 980
Merit: 1008
June 11, 2013, 09:57:32 AM
#9

Quote from: monsterer
IMO performance should be a secondary consideration. Your first priority should be consistency, since you are dealing with money; balances, transactions and orders (in the DB) should either all be processed or none of them, even in the case of a power failure. This is what ACID gives you, as I'm sure you know.
I might be a bit inexperienced in the field, but this is how I envision a complete order matching engine:

The first part consists of processes on a CPU core that convert various trading API formats to a standard format. Say from a Websocket API, a REST API and a FIX API to some common format that the matching engine can understand. This "standard format"-stream is simply dumped to the disk, one trade after another, in sequential order while the matching engine is simultaneously fed the same stream in the same order.

You would have three (or more for added resilience) matching engines running on three separate cores all being fed the incoming order stream. These matching engines would always be in the same state, since they are the same program and they are being fed the same stream of incoming orders.

Once every x hours you halt one of the processes, and dump the current state to the disk. This is a "recovery snapshot", that enables you to recreate the current state simply by restoring the state from this snapshot, and replaying the orders (which have been dumped to the disk) from the time stamp of the snapshot into the matching engine.

So two cores are always running with the same matching engine and order stream, never interrupted. If one core fails, we switch to the other core. The third core is for creating snapshots at regular intervals, from which the current state can be restored by restoring "state at time x" from the snapshot and replaying orders "later than x" into the running matching engine instance.

But of course, we would need some sort of database on the other side of this trading engine, to keep track of account balances, which are directly affected by executed trades. Is that the issue?
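
A bare-bones sketch of that journal-plus-snapshot idea (illustrative only; the engine here just tracks per-user totals as a stand-in for a real order book, and the file names are invented): every incoming order is appended to a journal in arrival sequence, the engine state is dumped periodically, and recovery replays only the journal entries newer than the snapshot.

Code:
# Sketch of journal + snapshot recovery for a deterministic engine.
# The "engine" here is a stand-in that just tracks per-user totals;
# file names and the order format are invented for the example.
import json
import os

JOURNAL = "orders.journal"     # every normalised order, in arrival sequence
SNAPSHOT = "engine.snapshot"   # periodic dump of (last applied seq, state)

class Engine:
    def __init__(self):
        self.balances = {}
    def apply(self, order):
        # Deterministic state transition: same input stream => same state.
        user, delta = order["user"], order["qty"]
        self.balances[user] = self.balances.get(user, 0) + delta

def journal_append(seq, order):
    with open(JOURNAL, "a") as f:
        f.write(json.dumps({"seq": seq, "order": order}) + "\n")

def snapshot(seq, engine):
    with open(SNAPSHOT, "w") as f:
        json.dump({"seq": seq, "balances": engine.balances}, f)

def recover():
    engine, last_seq = Engine(), -1
    if os.path.exists(SNAPSHOT):
        with open(SNAPSHOT) as f:
            snap = json.load(f)
        engine.balances, last_seq = snap["balances"], snap["seq"]
    with open(JOURNAL) as f:
        for line in f:
            rec = json.loads(line)
            if rec["seq"] > last_seq:      # replay only events newer than the snapshot
                engine.apply(rec["order"])
    return engine

if __name__ == "__main__":
    for path in (JOURNAL, SNAPSHOT):
        if os.path.exists(path):
            os.remove(path)                # start clean so the demo is repeatable
    live = Engine()
    stream = [{"user": "a", "qty": 5}, {"user": "b", "qty": 3}, {"user": "a", "qty": -2}]
    for seq, order in enumerate(stream):
        journal_append(seq, order)
        live.apply(order)
        if seq == 1:
            snapshot(seq, live)            # the periodic "recovery snapshot"
    print("live     :", live.balances)
    print("recovered:", recover().balances)   # identical state after a restart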
monsterer
Legendary
Activity: 1008
Merit: 1000
June 11, 2013, 10:33:51 AM
#10

Quote from: runeks
But of course, we would need some sort of database on the other side of this trading engine, to keep track of account balances, which are directly affected by executed trades. Is that the issue?

No.

The issue is that there are several separate database entities which all need to be updated simultaneously (i.e. at least):

* orders
* user balances
* transactions

When the matching engine begins a run, it will pull a sorted list of bids and asks from the orders. During this operation the orders must not be updated, or you will start with inconsistent data. Next the matching engine will start matching the best bids and asks together. If the best bid and ask can be matched, an atomic transaction must be begun in which:

* The balances of both users are locked and checked for enough money
* Both bid/ask orders are locked
* The trade is actually performed
* One order is deleted, the other is updated to reflect any partial fill
* Balances of both users are updated to reflect the trade
* An order-transaction is created which exactly describes the trade

If this all goes well and no errors have occurred (which would cause a rollback), this atomic transaction is committed to the database. As you can see, there are a lot of things which must all happen, or all not happen, for the database to remain consistent during just one order match.

Without this, you'll be in a world of hurt as soon as anything goes wrong, since your database will be inconsistent and it will be hard to figure out what went wrong.
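
Purely as an illustration of that all-or-nothing requirement, here is a minimal sketch using SQLite; the table layout, column names and amounts are invented, and SQLite's single-writer model stands in for the row-level locking a production database would use:

Code:
# Minimal illustration of one all-or-nothing order match, using SQLite.
# Table layout, column names and amounts are invented for the example;
# SQLite serialises writers, so explicit row locking is glossed over here.
import sqlite3

def settle_match(db, bid_id, ask_id, price, qty):
    """Apply one match atomically: orders, balances and the trade record
    are all updated together, or (on any exception) not at all."""
    with db:  # opens a transaction; commits on success, rolls back on error
        (buyer,) = db.execute("SELECT user FROM orders WHERE id=?", (bid_id,)).fetchone()
        (seller,) = db.execute("SELECT user FROM orders WHERE id=?", (ask_id,)).fetchone()
        (usd,) = db.execute("SELECT usd FROM balances WHERE user=?", (buyer,)).fetchone()
        (btc,) = db.execute("SELECT btc FROM balances WHERE user=?", (seller,)).fetchone()
        if usd < price * qty or btc < qty:
            raise ValueError("insufficient funds")       # triggers rollback
        for order_id in (bid_id, ask_id):                # partial fills stay, full fills go
            db.execute("UPDATE orders SET qty = qty - ? WHERE id=?", (qty, order_id))
        db.execute("DELETE FROM orders WHERE qty <= 0")
        db.execute("UPDATE balances SET usd = usd - ?, btc = btc + ? WHERE user=?",
                   (price * qty, qty, buyer))
        db.execute("UPDATE balances SET usd = usd + ?, btc = btc - ? WHERE user=?",
                   (price * qty, qty, seller))
        db.execute("INSERT INTO trades (bid_id, ask_id, price, qty) VALUES (?,?,?,?)",
                   (bid_id, ask_id, price, qty))

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE balances (user TEXT PRIMARY KEY, usd REAL, btc REAL);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, user TEXT, side TEXT, price REAL, qty REAL);
        CREATE TABLE trades (bid_id INT, ask_id INT, price REAL, qty REAL);
        INSERT INTO balances VALUES ('alice', 1000, 0), ('bob', 0, 10);
        INSERT INTO orders VALUES (1, 'alice', 'buy', 110, 2), (2, 'bob', 'sell', 110, 2);
    """)
    settle_match(db, bid_id=1, ask_id=2, price=110, qty=2)
    print(db.execute("SELECT * FROM balances ORDER BY user").fetchall())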

Cheers, Paul.
lexxus
Sr. Member
Activity: 309
Merit: 250
June 11, 2013, 11:15:22 AM
#11

Quote from: runeks
But of course, we would need some sort of database on the other side of this trading engine, to keep track of account balances, which are directly affected by executed trades. Is that the issue?

Quote from: monsterer
No.

No. You don't need to involve the DB here, as persistence can be asynchronous.

Read here: http://martinfowler.com/articles/lmax.html
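
A toy sketch of what asynchronous persistence can look like (illustrative only; the file name and event fields are invented): the matching path only appends events to an in-memory queue, and a separate writer thread drains the queue to disk, so matching never waits on I/O.

Code:
# Toy sketch of asynchronous persistence: the matching path only enqueues
# events; a background thread writes them to disk, so matching never blocks
# on I/O. File name and event fields are invented for the example.
import json
import queue
import threading

events = queue.Queue()

def journal_writer(path):
    with open(path, "a") as f:
        while True:
            event = events.get()
            if event is None:        # shutdown sentinel
                break
            f.write(json.dumps(event) + "\n")
            f.flush()

def on_trade(trade):
    # Called from the matching engine's hot path: constant-time, no disk access.
    events.put(trade)

if __name__ == "__main__":
    writer = threading.Thread(target=journal_writer, args=("trades.journal",))
    writer.start()
    on_trade({"price": 110.0, "qty": 0.01, "bid": 1, "ask": 2})
    on_trade({"price": 110.0, "qty": 3.3287, "bid": 3, "ask": 4})
    events.put(None)
    writer.join()

The catch is that anything still sitting in the queue is lost if the process dies, which is why the LMAX article journals the input event stream and treats the engine state as reproducible from it.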

Maciek
Full Member
Activity: 254
Merit: 100
June 11, 2013, 01:14:56 PM
#12

@Vladimir
What is the expected time for BTCGlobal to start trading BTC? :)
bytemaster
Hero Member
Activity: 770
Merit: 566
fractally
June 11, 2013, 02:13:33 PM
#13

You want to follow the pattern used by the LMAX Disruptor, which will give you what you want: low latency, redundancy, backups, and fast fail-over.
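
For readers who haven't seen it: the heart of the Disruptor is a pre-allocated ring buffer coordinated by monotonically increasing sequence numbers rather than locks. A very rough single-producer, single-consumer sketch follows (Python, purely illustrative; the real Disruptor is a Java library and relies on memory-barrier tricks that don't carry over):

Code:
# Very rough single-producer / single-consumer ring buffer in the spirit of
# the LMAX Disruptor. Illustrative only: the real Disruptor is a lock-free
# Java library; here the coordination is just two shared counters.
import threading

SIZE = 1024                 # power of two, so slot index = seq & (SIZE - 1)
ring = [None] * SIZE
published = -1              # highest sequence number written by the producer
consumed = -1               # highest sequence number read by the consumer

def produce(events):
    global published
    for seq, event in enumerate(events):
        while seq - consumed > SIZE:      # don't overwrite a slot not yet read
            pass
        ring[seq & (SIZE - 1)] = event
        published = seq                   # publish only after the slot is written

def consume(n):
    global consumed
    out = []
    for seq in range(n):
        while seq > published:            # spin until the producer has published
            pass
        out.append(ring[seq & (SIZE - 1)])
        consumed = seq
    return out

if __name__ == "__main__":
    producer = threading.Thread(target=produce,
                                args=([{"order": i} for i in range(5000)],))
    producer.start()
    result = consume(5000)
    producer.join()
    print(len(result), result[-1])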

https://fractally.com - the next generation of decentralized autonomous organizations (DAOs).
monsterer
Legendary
Activity: 1008
Merit: 1000
June 12, 2013, 08:49:26 AM
#14

Quote from: bytemaster
You want to follow the pattern used by the LMAX Disruptor, which will give you what you want: low latency, redundancy, backups, and fast fail-over.

I really wouldn't want to do that myself. Moving the problem of database consistency into the program seems like a very dangerous thing to me; there is already enough to go wrong without making life so much more complex by trying to implement LMAX.

Cheers, Paul.
runeks
Legendary
Activity: 980
Merit: 1008
June 12, 2013, 10:27:27 AM
#15

Quote from: bytemaster
You want to follow the pattern used by the LMAX Disruptor, which will give you what you want: low latency, redundancy, backups, and fast fail-over.

Quote from: monsterer
I really wouldn't want to do that myself. Moving the problem of database consistency into the program seems like a very dangerous thing to me; there is already enough to go wrong without making life so much more complex by trying to implement LMAX.

Cheers, Paul.
On the other hand, it will enable more-than-lackluster performance for your exchange. I imagine the 40 TPS Mt. Gox can sustain is related to them using a database for order storage. This really isn't sufficient for an exchange of their size. And it results in hours of lag when things really heat up.
monsterer
Legendary
Activity: 1008
Merit: 1000
June 12, 2013, 11:07:03 AM
#16

Quote from: runeks
On the other hand, it will enable more-than-lackluster performance for your exchange. I imagine the 40 TPS Mt. Gox can sustain is related to them using a database for order storage. This really isn't sufficient for an exchange of their size. And it results in hours of lag when things really heat up.

Actually, I find this theory that MtGox can only sustain 40 TPS very speculative. I submit that it is not the matching engine which is the bottleneck at MtGox, but rather the constant DDoS they experience tying up their trading interface front-end.

Cheers, Paul.
runeks
Legendary
Activity: 980
Merit: 1008
June 12, 2013, 11:31:52 AM
#17

Quote from: runeks
On the other hand, it will enable more-than-lackluster performance for your exchange. I imagine the 40 TPS Mt. Gox can sustain is related to them using a database for order storage. This really isn't sufficient for an exchange of their size. And it results in hours of lag when things really heat up.

Quote from: monsterer
Actually, I find this theory that MtGox can only sustain 40 TPS very speculative. I submit that it is not the matching engine which is the bottleneck at MtGox, but rather the constant DDoS they experience tying up their trading interface front-end.

Cheers, Paul.
I haven't heard Mt. Gox claim they experience constant DDoS attacks. They've been quite intermittent. And I haven't heard anything recently.

The fact of the matter is that no more than 40 trades were processed per second during the huge sell-off. I made a script that analyzes trade data and this was the result: https://github.com/runeksvendsen/mtgoxtps/blob/master/final.png

But you're right that this might have been caused by DDoS attacks. Although I find it striking that not one second passed with more than 40 trades being executed. It kind of looks like a hard limit.

Additionally, Mt. Gox went down for maintenance after the huge sell-off, and proclaimed they were implementing improvements to their trading engine (one was disallowing unfunded orders). And it's clear that around April 14, their maximum TPS increased from around 20 to 40 TPS.

I just ran my script again to analyze data from June. Again, it shows no more than 42 trades in a single second: https://raw.github.com/runeksvendsen/mtgoxtps/master/June1-12.png

My opinion is that it's highly unlikely not to see more than ~40 trades executed in a single second over the course of 12 days, unless that is a hard limit. You only have to put in a large buy order a dollar or more above the current price for the matching engine to have to traverse several hundred sell orders.

Looking at the order book on bitcoin.clarkmoody.com now, with the lowest ask at 109.55899, it would only take a single buy order of 840 BTC @ $110 to fill 70 sell orders. I feel confident saying that this has definitely happened in the past twelve days, and the fact that no second has passed with more than 42 trades points strongly to an upper limit of that TPS figure in their engine design.
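
For anyone who wants to reproduce this kind of check, the idea is simply to bucket trades by second and look at the largest bucket. A minimal sketch (the file name and column layout are assumptions, not the actual script behind the graphs above):

Code:
# Count trades per second from a dump of timestamped trades and report the
# busiest second. The file name and column layout are assumptions, not the
# actual script behind the graphs linked above.
import csv
from collections import Counter

def max_trades_per_second(path):
    per_second = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            unix_ts = int(row[0])        # assume column 0 is a unix timestamp
            per_second[unix_ts] += 1
    return per_second.most_common(1)[0]  # (second, trade count)

if __name__ == "__main__":
    ts, n = max_trades_per_second("mtgox_trades.csv")
    print(f"busiest second: {ts} with {n} trades")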

Come-from-Beyond
Legendary
Activity: 2142
Merit: 1009
Newbie
June 12, 2013, 12:17:18 PM
#18

Quote
Modules that require ACID and traditional databases are easily scaled horizontally as needed.

You can't scale horizontally for free; you have to sacrifice something. For an exchange it can't be Consistency, so is it Availability or Partition tolerance?
monsterer
Legendary
Activity: 1008
Merit: 1000
June 12, 2013, 12:45:37 PM
#19

Quote from: runeks
I just ran my script again to analyze data from June. Again, it shows no more than 42 trades in a single second: https://raw.github.com/runeksvendsen/mtgoxtps/master/June1-12.png

My opinion is that it's highly unlikely not to see more than ~40 trades executed in a single second over the course of 12 days, unless that is a hard limit. You only have to put in a large buy order a dollar or more above the current price for the matching engine to have to traverse several hundred sell orders.

I find it hard to argue with that :)
monsterer
Legendary
Activity: 1008
Merit: 1000
June 12, 2013, 01:05:35 PM
#20

Although I just saw this:

Code:
14:04:53	110.00000	3.3287
14:04:53 110.02067 0.0201
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.00000 3.3287
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.00000 3.3287
14:04:53 110.02570 0.0100
14:04:53 110.02067 0.0201
14:04:53 110.02067 0.0201
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100
14:04:53 110.02570 0.0100

Which is 67 trades / sec...