Bitcoin Forum
Author Topic: HEAT Discussion and Technical info  (Read 61473 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic. (19 posts by 1+ user deleted.)
OrgiOrg_2
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
December 14, 2017, 09:33:55 AM
 #381

Is it possible to earn interest by holding HEAT (aka staking)?
If yes, is it done in the web wallet or only in the downloaded client?
You need to set up a forging node, hope this can help you...
https://heatnodes.org/heatnodes.pdf


Or you lease your HEAT to a forging pool ...

1. Log into your account in the HEAT wallet (web/desktop does not matter)
2. Use the top-right "hamburger" menu to select "lease balance"
3. As recipient use 1842674054425679153
4. Choose a period in blocks, with a minimum of 1440 and a maximum of 300000. This defines the leasing period, with 1440 blocks being approximately half a day. Note that a 1440-block period might almost go unnoticed, so I would suggest setting it a bit higher
5. Click on "send" and you're all set!
6. You can repeat this action to define a follow-up leasing period, so that your new lease starts automatically when the current one ends
7. For a withdrawal, send a message to the heat forging pool account in the HEAT wallet.



Relaxedsense
Hero Member
*****
Offline Offline

Activity: 809
Merit: 1002


View Profile
December 14, 2017, 11:31:41 AM
 #382

I would say HEAT could be $5 within the next 5 months*

they do need to have it open source for adoption


Absolutely. Combined with some strong marketing there is no telling how high it can go. But I have a good feeling!

My guess, I would start countdown to 5 months once they make heat open source.

One can have an opinion, I guess.

I think they shouldn't make it open source at all until everything is developed 100%, in order to keep a first-mover advantage.

Why give this advantage away? One can only wonder.
verymuchso
Sr. Member
****
Offline Offline

Activity: 421
Merit: 250


HEAT Ledger


View Profile
December 14, 2017, 06:17:28 PM
 #383

Hi,

I'd like to address some questions about the specifics of the `2000 tps` test.

Let's start with the specs of the PC mentioned before, the one on which the test was performed.

This is my daily workhorse PC:

   OS: Ubuntu 16.x
   RAM: 16GB
   CPU: Intel Core i5
   HD: 500GB SSD

Now about the test setup.

The HEAT server under test actually runs inside my Eclipse Java IDE, as opposed to running the HEAT server from the command line (in its own JVM).
Running inside the Eclipse IDE, and in our case in DEBUG mode, is quite a limiting factor in my experience.
Running HEAT from the command line does not carry the burden of also running the full Eclipse Java IDE, nor whatever overhead Eclipse adds for breakpoints and the ability to pause execution.

We have not yet tested this with HEAT running from the command line; I expect it to be considerably faster in that case.

Now about the websocket client app.

With the help of our newly beloved AVRO binary encoding stack we have been able to generate, sign and store 500,000 transactions to a file. This process takes a while, a few minutes at least. But I don't think this matters much, since in a real-life situation with possibly hundreds of thousands of users the cost of creating and signing the transactions is spread over all those users.
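The pre-generation step described above can be sketched as follows. This is an illustration only, not HEAT's actual format: the real stack uses AVRO encoding and proper cryptographic signatures, whereas this sketch uses a hypothetical fixed-width record layout and an HMAC as a stand-in "signature".

```python
import hmac, hashlib, io, struct

# Hypothetical record layout: sender id (u64), recipient id (u64),
# amount (i64), followed by a 32-byte "signature".
RECORD_FMT = "<QQq32s"
RECORD_SIZE = struct.calcsize(RECORD_FMT)  # 8 + 8 + 8 + 32 = 56 bytes

def sign(key: bytes, payload: bytes) -> bytes:
    # Stand-in for a real signature scheme; HMAC-SHA256 used only for illustration.
    return hmac.new(key, payload, hashlib.sha256).digest()

def pregenerate(key: bytes, n: int) -> bytes:
    """Build n signed transactions up front, as one binary blob,
    so the signing cost is paid before the throughput test starts."""
    buf = io.BytesIO()
    for i in range(n):
        payload = struct.pack("<QQq", i, i + 1, 100)
        buf.write(payload + sign(key, payload))
    return buf.getvalue()

blob = pregenerate(b"demo-key", 1000)
```

During the test, the client only has to stream this pre-built blob to the server, so transaction creation and signing are excluded from the measured throughput.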

The client app was written in Java and opens a WebSocket connection to the HEAT server; since both run on the same machine, it connects to the localhost address.

Now you might say: "Hey! Wait a minute. Isn't localhost really fast? Isn't that cheating?"

The short answer: "No! And you might be missing what's important here."

While it's absolutely true that localhost is much faster than your average external network, what matters here is the level of bandwidth we are talking about. Anyone reading this has probably downloaded a movie before, whether over torrent or streamed from the network. That alone is your proof that using localhost makes no meaningful difference here.

Your PC while downloading a movie, or the YouTube server streaming the latest PewDiePie video to you and probably thousands of others, processes MUCH more data than our little test here.

One transaction in binary form is about 120 bytes in size; times 2000, you need a network that can sustain 240 KB of data transferred per second. I'm not sure what internet connections are normal in your country, but here in Holland we can get 400 Mbit connections straight to our homes, standard consumer speeds (I looked it up just now).

To put that in perspective, 240 KB per second is about 1/200th of the bandwidth you get from a 400 Mbit (50 MB/s) connection. You should understand by now that the network is not the bottleneck; the cryptocurrency server is.
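The arithmetic above can be checked in a few lines:

```python
# Back-of-the-envelope check of the bandwidth claim.
TX_BYTES = 120                 # one transaction in binary form
TPS = 2_000                    # target throughput
needed = TX_BYTES * TPS        # bytes/s the link must sustain: 240,000 (240 KB/s)
link = 400_000_000 // 8        # 400 Mbit/s consumer line = 50,000,000 bytes/s

headroom = link / needed       # ≈ 208, i.e. the test uses roughly 1/200th of the link
```

So even a mid-range consumer connection has two orders of magnitude more bandwidth than a 2000 tps stream of 120-byte transactions requires, which is the point of the localhost argument.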



So what's so special about HEAT, you might ask: why can HEAT do what it does?

Well, for that you'd have to dive into the source code of our competitors, be it Waves, Lisk, NEM, IOTA, NXT, etc. Just this afternoon I took a look at the IOTA source code, which is always interesting (I did the same with the others mentioned).

But I can tell you right now that none of the other currencies (Waves, Lisk, NEM, IOTA, NXT, etc.) will be able to reach speeds similar to what HEAT has now shown it can.

Why I can claim this is pretty simple.

Cryptocurrencies, basically all of them (blockchain or tangle makes no difference here), follow a similar internal design. All of them need to store their balances, transactions, signatures, and so on, and they all use different databases to do so.

Some, like NXT, use the slowest solution of all, a full-fledged SQL database; others have models optimized for higher speed in the form of key/value datastores. IOTA, I learned today, uses RocksDB; Waves is on H2's key/value store; Bitcoin is on LevelDB; etc.

As far as I know, HEAT is the only one that does not use a database. Instead we've modeled our data in such a way that it can be written to a memory-mapped flat file, which is how we store blocks and transactions. Our balances are all kept in memory, and for on-disk persistence we use https://github.com/OpenHFT/Chronicle-Map as our memory/on-disk hybrid indexing solution.

If you look at Chronicle Map's website you'll see: "Chronicle Map provides in-memory access speeds, and supports ultra-low garbage collection. Chronicle Map can support the most demanding of applications." Oh, and did I mention it grew out of the need for HFT trading systems to be faster than anything else available?
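The memory-mapped flat-file idea can be sketched with nothing but the standard library. This is not HEAT's actual code (that is Java, with Chronicle-Map on top); it only illustrates the principle: fixed-width records in a pre-sized file, mapped into memory, give O(1) random access by offset with no database in the way.

```python
import mmap, os, struct, tempfile

# Hypothetical fixed-width record: sender id (u64), recipient id (u64), amount (i64).
REC = struct.Struct("<QQq")   # 24 bytes per record
N = 10_000

path = os.path.join(tempfile.mkdtemp(), "txs.dat")
with open(path, "wb") as f:
    f.truncate(N * REC.size)          # pre-size the flat file

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)     # map the whole file into memory
    for i in range(N):
        # Writing is just a memory store at a computed offset.
        REC.pack_into(mm, i * REC.size, i, i + 1, 100)
    # Reading back record 1234 is a single offset calculation, no index lookup.
    record = REC.unpack_from(mm, 1234 * REC.size)
    mm.flush()                        # persist dirty pages to disk
    mm.close()
```

The OS page cache does the heavy lifting: reads and writes happen at memory speed, while the kernel persists pages to disk in the background, which is roughly the trade-off the post attributes to avoiding a database.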



Anyways..

The next test is going to be even cooler. We'll host the single HEAT instance that forges blocks on a nice and powerful machine, much faster than my PC, probably something with 64 GB of RAM and perhaps 32 or 64 cores. My estimate is that we can push the TPS to much higher levels on such a server.

Right now we are adding binary AVRO encoding support to HEAT-SDK, and after that we'll release one of those samples we used to do a month or so ago, with which you can fire transactions from your browser at our test setup. I bet that'll be fun.

- Dennis

GTTIGER
Sr. Member
****
Offline Offline

Activity: 527
Merit: 250


View Profile
December 14, 2017, 07:45:50 PM
 #384

Quote from: verymuchso on December 14, 2017, 06:17:28 PM
[post #383 on the `2000 tps` test, quoted in full above]
Lol, I knew there was something to this project. Thanks for solidifying my beliefs.

jukKas
Full Member
***
Offline Offline

Activity: 364
Merit: 100


View Profile
December 14, 2017, 08:20:42 PM
 #385

This is starting to get very interesting. Thanks Dennis for your explanations and development updates.

I've heard HEAT is going to be fully open source at some point. Any estimates when source will be released?

Mjbmonetarymetals
Legendary
*
Offline Offline

Activity: 1096
Merit: 1067



View Profile
December 14, 2017, 10:10:29 PM
 #386

Quote from: verymuchso on December 14, 2017, 06:17:28 PM
[post #383 on the `2000 tps` test, quoted in full above]

Bullish

Bitrated user: Mick.
GTTIGER
Sr. Member
****
Offline Offline

Activity: 527
Merit: 250


View Profile
December 14, 2017, 11:00:57 PM
 #387

We should contact coinmarketcap and let them know about supply changes and announcement page change.
https://bitcointalk.org/index.php?topic=199685.5020

GTTIGER
Sr. Member
****
Offline Offline

Activity: 527
Merit: 250


View Profile
December 14, 2017, 11:58:08 PM
 #388

Lease your HEAT to:  1842674054425679153

***** OFFER TILL THE END OF 2017 *****


! 0% fee ... payouts will be done automatically in the first week of 2018 !

*** **** *** **** *** **** *** ****

Just 1% fee for keeping server online and manage funds

To claim your earnings, simply send a message via the HEAT wallet to the pool account 9634482525377050118. After this message has 1440 confirmations, your confirmed earnings will be sent to you, minus the 1% management fee plus the HEAT standard fee.

Happy Earnings!
Anyone have an automatic payout pool available, or tried this one?

OrgiOrg_2
Newbie
*
Offline Offline

Activity: 5
Merit: 0


View Profile
December 15, 2017, 07:29:14 AM
 #389

it's my forging pool ... no lessors so far  Embarrassed

but the 0% offer with automatic payout was extended until the end of March 2018 (also see here: http://heatledger.net/index.php?topic=137.msg726#msg726)
VonGraff
Newbie
*
Offline Offline

Activity: 21
Merit: 0


View Profile
December 15, 2017, 02:11:34 PM
 #390

Anyone have an automatic payout pool available or try this one?

I am only aware of the OrgiOrg pool and the one I am offering. There are some accounts in heatwallet whose names contain "pool", but none of them actually contain multiple lessors.

So far heatpool.org has been operating very stably, but using manual calculation, awaiting the release of microservices to gradually move over to automation. You can keep an overview of the open balance of all lessors on the heatpool.org website, and I have so far processed all payments within a reasonable period of time, most often 24 hours. If a lease period has ended without extension I will also process the payment as soon as possible. Any open balance is also included in your effective stake in the pool, so even with a later payment none of your rewards are "lost".

I am still surprised (and delighted, of course!) to see heatpool as the #2 forger in the total list of forgers at heatnodes.org. The current effective balance is around 1.5M HEAT, usually with around 35 lessors.
memberberry
Full Member
***
Offline Offline

Activity: 237
Merit: 100


View Profile WWW
December 15, 2017, 06:18:12 PM
 #391

Quote from: verymuchso on December 14, 2017, 06:17:28 PM
[post #383 on the `2000 tps` test, quoted in full above]

I think you fail to see that testing all of this on a single server is absolutely useless. Yes, you can improve transaction throughput to 2000 tps by improving the software, and yes, you can do it by improving the hardware. But all of this only affects a single node's speed. It will not improve the consensus algorithm, and you will not end up with a decentralized BLOCKCHAIN that can do 2000 tps; all you get is a centralized database that can do 2000 database operations per second. Basically what you guys are doing is useless, as you don't get a faster network, and others don't even need your faster software. xD Soon Ethereum will release sharding and probably have more tps over a REAL NETWORK, and you guys will still be testing your 2000 tps on a single node.

verymuchso
Sr. Member
****
Offline Offline

Activity: 421
Merit: 250


HEAT Ledger


View Profile
December 15, 2017, 10:34:51 PM
 #392

Quote from: memberberry on December 15, 2017, 06:18:12 PM (itself quoting post #383)
[reply #391 arguing that a single-server test is useless and only yields a centralized database doing 2000 operations per second, quoted in full above]

The one who is failing to see things here is I you I'm afraid.

Before you can do any number of transactions over a peer 2 peer network you first need to feed those transactions (over a network) to a single peer and process them there... The single peer actually does the "consensus algorithm" internally already, also in our test, it works the same if you generate a block or if you receive one from the network. Receiving from the network actually being cheaper to process. So thats the first part where you are wrong. 

I fully admit in my first post that work has to be done to the p2p  code before peers can share blocks and transactions over the p2p network at those speeds.
But unlike what we achieved now, that second step is a rather simple problem to solve.

As for this combination: ETHEREUM + SHARDING + SOON.

This depends on your definition of SOON, sharding is a really hard problem and to apply that to a live blockchain worth billions. Basically doing the hard fork of all hard forks, rewriting the entire blockchain in the process. Also they have not agreed yet on what sharding on Ethereum should look like.

So its safe to say you are wrong there to.

Miner2525
Full Member
***
Offline Offline

Activity: 204
Merit: 100


View Profile
December 15, 2017, 11:36:15 PM
 #393

I am still surprised (and delighted, of course!) to see heatpool as the #2 forger in the total list of forgers at heatnodes.org. The current effective balance is around 1.5M HEAT, with around 35 lessors most of the time.

$64K question: Would I earn more HEAT leasing it to your mega-pool than running my own node?
memberberry
Full Member
***
Offline Offline

Activity: 237
Merit: 100


View Profile WWW
December 15, 2017, 11:58:09 PM
Last edit: December 16, 2017, 12:09:05 AM by memberberry
 #394

Hi,

I'd like to address some questions about the specifics of the `2000 tps` test.

Let's start with the specs of the PC mentioned before, the one the test was performed on.

This is my daily workhorse PC:

   OS: Ubuntu 16.x
   RAM: 16GB
   CPU: Intel Core i5
   HD: 500GB SSD

Now about the test setup.

The HEAT server under test actually runs inside my Eclipse Java IDE, as opposed to running from the command line in its own JVM.
Running inside the Eclipse IDE, and in our case in DEBUG mode, is quite a limiting factor in my experience:
the same machine also has to run the full Eclipse coding environment, plus whatever overhead Eclipse adds for breakpoints and the ability to pause execution.

We have not yet tested HEAT running from the command line; I expect it to be quite a bit faster in that case.

Now about the websocket client app.

With the help of our newly beloved AVRO binary encoding stack we were able to generate, sign and store our 500,000 transactions to a file. This process takes a while, a few minutes at least, but I don't think that matters much: in a real-life situation with possibly hundreds of thousands of users, the cost of creating and signing the transactions is spread over all those users.
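To give a feel for this pre-generation step, here's a minimal sketch of writing fixed-size signed transaction records to a flat binary file. The record layout, field names and the SHA-256 stand-in "signature" are illustrative assumptions only; our real transactions use AVRO encoding and actual cryptographic signatures:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Illustrative sketch: pre-generate "signed" transactions into a flat file.
// Layout and the SHA-256 digest used as a signature stand-in are assumptions.
public class TxPregen {
    static final int RECORD_SIZE = 8 + 8 + 8 + 32; // sender, recipient, amount, digest

    public static void generate(Path file, int count) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(Files.newOutputStream(file)))) {
            for (int i = 0; i < count; i++) {
                long sender = 1000 + i, recipient = 2000 + i, amount = 1;
                out.writeLong(sender);
                out.writeLong(recipient);
                out.writeLong(amount);
                // stand-in for a real signature: digest over the record fields
                sha.reset();
                sha.update(pack(sender, recipient, amount));
                out.write(sha.digest()); // 32 bytes
            }
        }
    }

    static byte[] pack(long... vs) {
        ByteBuffer b = ByteBuffer.allocate(vs.length * 8);
        for (long v : vs) b.putLong(v);
        return b.array();
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("txs", ".bin");
        generate(file, 10_000);
        System.out.println(Files.size(file)); // 10_000 records * 56 bytes = 560000
        Files.delete(file);
    }
}
```

Once such a file exists, the client can replay it at full speed without paying the signing cost during the measurement, which is the point of the setup.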

The client app was written in Java and opens a WebSocket connection to the HEAT server; since both run on the same machine, we used the localhost address.

Now you might say: "Hey! Wait a minute. Isn't localhost really fast? Isn't that cheating?"

The short answer: "No! And you might be missing what's important here."

While it's absolutely true that localhost is much faster than your average external network, what matters is the level of bandwidth we are actually talking about. I suppose anyone reading this has downloaded a movie before, whether over torrent or streamed from the network. That alone shows why testing over localhost changes little here.

Your PC while downloading a movie, or the YouTube server streaming the latest PewDiePie video to you and probably thousands of others, moves MUCH MUCH more data than our little test here.

One transaction in binary form is about 120 bytes; multiply by 2000 and you need a network that can carry 240KB of data per second. I'm not sure what internet connections are normal in your country, but here in Holland 400Mbit connections straight to the home are standard consumer speeds (I looked it up just now).

To put that in perspective, 240KB per second is about 1/200th of the bandwidth of a 400Mbit (50MB/s) connection. By now it should be clear that the network is not the bottleneck; the cryptocurrency server is.
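The arithmetic above, spelled out (the 120-byte transaction size and 400Mbit link are the figures from this post):

```java
// Back-of-the-envelope check of the bandwidth claim: 2000 tps at ~120 bytes
// per transaction versus a 400 Mbit/s consumer connection.
public class BandwidthCheck {
    static long requiredBytesPerSec(int txBytes, int tps) {
        return (long) txBytes * tps;
    }

    static long linkBytesPerSec(long mbit) {
        return mbit * 1_000_000L / 8; // megabits/s to bytes/s
    }

    public static void main(String[] args) {
        long needed = requiredBytesPerSec(120, 2000); // 240000 bytes/s = 240KB/s
        long link = linkBytesPerSec(400);             // 50,000,000 bytes/s = 50MB/s
        System.out.println(needed);        // 240000
        System.out.println(link / needed); // 208, i.e. roughly "1/200th"
    }
}
```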



So what's so special about HEAT, you might ask? Why can HEAT do what it does?

Well, for that you'd have to dive into the source code of our competitors, be it Waves, Lisk, NEM, IOTA, NXT, etc. Just this afternoon I took a look at the IOTA source code, which is always interesting (I did the same with the others mentioned).

But I can tell you right now that none of the other currencies (Waves, Lisk, NEM, IOTA, NXT, etc.) will be able to reach speeds similar to what HEAT has now shown.

Why I can claim this is pretty simple.

Cryptocurrencies, basically all of them (blockchain or tangle makes no difference here), follow a similar internal design. All of them need to store their balances, transactions, signatures and so on, and they all use different databases to do so.

Some, like NXT, use the slowest solution of all, a full-fledged SQL database; others have models optimized for higher speed in the form of key/value datastores. IOTA, I learned today, uses RocksDB; Waves is on H2's key/value store; Bitcoin is on LevelDB; and so on.

As far as I know, HEAT is the only one that does not use a database. Instead, we've modeled our data so that it can be written to a memory-mapped flat file, which is how we store blocks and transactions. Our balances are kept entirely in memory, and for on-disk persistence we use https://github.com/OpenHFT/Chronicle-Map as our memory/on-disk hybrid indexing solution.

If you look at ChronicleMap's website you'll see the claim that it "provides in-memory access speeds, and supports ultra-low garbage collection. Chronicle Map can support the most demanding of applications." And did I mention it grew out of the need of HFT trading systems to be faster than anything else available?
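For the curious, the memory-mapped flat file idea can be sketched with plain java.nio. This is only an illustration of the underlying mechanism, not our actual storage code (Chronicle Map adds the off-heap indexing layer on top), and the toy record layout is an assumption:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Concept sketch of the "no database" approach: append fixed-size records
// to a memory-mapped flat file and read them back by byte offset.
public class MappedStore {
    static final int RECORD_SIZE = 16; // toy record: two longs

    public static void demo(Path file, int records) throws Exception {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE,
                    0, (long) records * RECORD_SIZE);
            for (int i = 0; i < records; i++) { // sequential append
                buf.putLong(i);                 // e.g. block height
                buf.putLong(i * 100L);          // e.g. payload offset
            }
            // random access by offset, no database lookup involved
            long height = buf.getLong(7 * RECORD_SIZE);
            System.out.println(height); // 7
        }
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("blocks", ".dat");
        demo(f, 100);
        Files.delete(f);
    }
}
```

Because the OS pages the file in and out behind the scenes, writes are plain memory stores rather than database calls, which is where the speed advantage comes from.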



Anyways..

The next test is going to be even cooler. We'll host the single block-forging HEAT instance on a nice and powerful machine, much faster than my PC: probably something with 64GB RAM and perhaps 32 or 64 cores. My estimate is that we can push the TPS much higher on such a server.

Right now we are adding binary AVRO encoding support to HEAT-SDK, and after that we'll release one of those samples we used to do a month or so ago, with which you can fire transactions from your browser at our test setup. I bet that'll be fun.

- Dennis

I think you fail to see that testing all of this on a single server is absolutely useless. Yes, you can improve transaction throughput to 2000tps by improving the software, and yes, you can do it by improving the hardware. But all of this only affects a single node's speed; it does not improve the consensus algorithm, and you will not end up with a decentralized BLOCKCHAIN that can do 2000tps. All you get is a centralized database that can do 2000 database operations per second. Basically what you guys are doing is useless, as you don't get a faster network and others don't even need your faster software. xD Soon Ethereum will release sharding and probably have more tps over a REAL NETWORK, and you guys will still be testing your 2000tps on a single node.

The one who is failing to see things here is you, I'm afraid.

Before you can do any number of transactions over a peer-to-peer network, you first need to feed those transactions (over a network) to a single peer and process them there. The single peer already runs the "consensus algorithm" internally, also in our test; it works the same whether you generate a block or receive one from the network, and receiving one from the network is actually cheaper to process. So that's the first part where you are wrong.

I fully admit in my first post that work has to be done on the p2p code before peers can share blocks and transactions over the p2p network at those speeds.
But unlike what we have achieved now, that second step is a rather simple problem to solve.

As for this combination: ETHEREUM + SHARDING + SOON.

That depends on your definition of SOON. Sharding is a really hard problem, and applying it to a live blockchain worth billions is basically the hard fork of all hard forks, rewriting the entire blockchain in the process. Also, they have not yet agreed on what sharding on Ethereum should look like.

So it's safe to say you are wrong there too.

Wrong again, and sad to see this reaction from someone who claims to develop actual blockchain software. The trick with sharding is that NOT EVERY SINGLE NODE needs to verify all transactions: shards separate the blockchain into multiple blockchains, with a consensus layer on top of those chains in which proof of the state is saved and agreed on. This gives you the possibility of exponential growth, growth over THE ACTUAL NETWORK and not over some local network.

Additionally, being able to verify 2000tps on a local node, even if every node on the network can do that, does not mean having 2000tps over the internet. The way you do things gives you only linear growth. Improving the software is at its limits and will not improve things by the magnitudes necessary. Bigger hardware leads to more centralization and a higher barrier of entry to running a full node or a miner, neither of which is a desirable feature. And then there is Moore's law... Not sure if you know what linear means, but if you do, you must admit that you can only achieve linear growth the way you do things.

The real question is whether you already know all this and are just deceiving your audience again, as you guys did before, or, maybe even worse, don't know it and are too ignorant to see it. SOON means next year or the year after that.
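To make the single-chain versus sharded scaling argument concrete, here's a toy model. All numbers and the cross-shard overhead factor are illustrative assumptions, not measurements of any real network:

```java
// Toy model of the two scaling arguments: a single chain is capped by
// per-node throughput (every node verifies everything), while an idealized
// sharded design multiplies per-shard throughput by the number of shards,
// minus some fraction lost to cross-shard coordination.
public class ScalingModel {
    static long singleChainTps(long perNodeTps) {
        return perNodeTps; // adding nodes does not add throughput
    }

    static long shardedTps(long perShardTps, int shards, double overhead) {
        // overhead in [0,1): fraction of capacity lost to cross-shard traffic
        return (long) (perShardTps * shards * (1.0 - overhead));
    }

    public static void main(String[] args) {
        System.out.println(singleChainTps(2000));       // 2000
        System.out.println(shardedTps(2000, 64, 0.25)); // 96000
    }
}
```

The model shows why faster single nodes still help a sharded design (they raise the per-shard term), while shard count is the multiplier, which is the crux of the disagreement in this exchange.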

GTTIGER
Sr. Member
****
Offline Offline

Activity: 527
Merit: 250


View Profile
December 16, 2017, 01:52:18 AM
 #395


Low volume but a good sign  Grin

tetra
Sr. Member
****
Offline Offline

Activity: 471
Merit: 250


View Profile
December 16, 2017, 08:21:11 AM
Last edit: December 16, 2017, 08:41:24 AM by tetra
 #396

I'm also curious why you think this is special in any way?
Visa does centralized processing and is much faster.

And this is not what crypto is about.
So the question is: Will the tps in the decentralized network be improved at all by this? And if so, by how much?

Flomess
Hero Member
*****
Offline Offline

Activity: 705
Merit: 500


View Profile
December 16, 2017, 10:06:42 AM
 #397

I'm having difficulties with the server app. I tried syncing it a couple of times but I always get this:

Any suggestion on what to do?
rail2007
Member
**
Offline Offline

Activity: 74
Merit: 10


View Profile
December 16, 2017, 01:42:28 PM
 #398

heatwallet.com is under maintenance during Sat 17 Dec 2017.
Please check back later?

foggy1421
Newbie
*
Offline Offline

Activity: 16
Merit: 0


View Profile
December 16, 2017, 02:49:53 PM
 #399

Yes, heatwallet is currently under maintenance. We will update as soon as possible.
Neversettle
Member
**
Offline Offline

Activity: 308
Merit: 13

ZetoChain - ACCELERATING BLOCKCHAIN FOR THE SUPPLY


View Profile
December 16, 2017, 04:29:11 PM
 #400

Lol, I knew there was something to this project. Thanks for solidifying my beliefs.

Really cool. This is not getting the attention it deserves!
