Author Topic: Mining protocol bandwidth comparison: GBT, Stratum, and getwork  (Read 28316 times)
#1 by Luke-Jr (OP), December 16, 2012, 03:46:07 AM (last edit: January 10, 2013, 02:01:38 PM by Luke-Jr)

I took a few hours to do some mining protocol bandwidth usage comparison/tests on my 2.5 Gh/s development mining rig. For each protocol, I gave it 10 minutes and measured its bandwidth usage in terms of (difficulty-1) shares per 2 KB (in both directions). I chose 2 KB because this is roughly the amount of bandwidth used to mine 1 share using the original no-rollntime getwork protocol, and therefore this metric gives approximately 1.00 "bandwidth-based efficiency" per the classic 100% "work-based efficiency" (what BFGMiner displays now).

Stratum ( blind ): 7.44
getwork (rolling): 2.19
getwork (no roll): 1.08
getblocktemplate : 0.33
Stratum (secured): 0.17
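
For anyone who wants to recompute this metric from their own logs, here's a minimal sketch (the byte counts in the example are made up for illustration, not the test data):

Code:
# difficulty-1 shares mined per 2 KB of traffic, counting both directions
def bandwidth_efficiency(shares, bytes_sent, bytes_received):
    two_kb_units = (bytes_sent + bytes_received) / 2048.0
    return shares / two_kb_units

# example: 349 shares in 10 minutes over ~93 KB of traffic
print(round(bandwidth_efficiency(349, 40000, 55000), 2))  # -> 7.52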


Stratum is updating every 55 seconds; GBT every 80 seconds. If secured stratum updated only every 80 seconds, its efficiency would be 0.25. Most likely the difference between this 0.25 and GBT's 0.33 is accounted for by the gzip compression employed by GBT.
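
To see why gzip matters here, a rough illustration using a synthetic template (an assumption for demonstration; real GBT responses differ, but hex-encoded transaction data compresses similarly well):

Code:
import gzip, json

# synthetic block template, NOT real pool data
template = {"result": {"previousblockhash": "00" * 32,
                       "transactions": [{"data": "ab" * 300} for _ in range(150)]}}
raw = json.dumps(template).encode()
print(len(raw), len(gzip.compress(raw)))  # compressed size is a small fraction of raw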

Note that these numbers do not take into account scalability. The main benefit (in terms of bandwidth usage) for both GBT and stratum is that they scale without using more bandwidth. I will probably do another run of bandwidth testing when ASICs ship.

They also don't account for Bitcoin security: getwork and blind stratum, though far more bandwidth-efficient at 2.5 Gh/s, give the pool the ability to do double-spends and other kinds of attacks on Bitcoin. Even if you decide to trust your pool operator, this makes the pool a huge target for crackers since it enables them to abuse it to attack Bitcoin. With blind mining protocols, all it takes is someone cracking the top 3 biggest pools to 51% the network.

To try to avoid pool differences affecting the results, all the measurements were made on the Eligius mining pool within the same timeframe.

#2 by crazyates, December 16, 2012, 06:38:33 AM

For each protocol, I gave it 10 minutes and measured its bandwidth usage in terms of (difficulty-1) shares per 2 KB (in both directions).

Stratum ( blind ): 7.44
getwork (rolling): 2.19
getwork (no roll): 1.08
getblocktemplate : 0.33
Stratum (secured): 0.17


I understand scaling will be a big issue, but your own findings show the old GW is 3-6 times more efficient than your own protocol? Moo point (you know, like a cow's opinion? +1 if you get the reference) as this is all gearing up for ASICs, but I just think it's funny.

I'm assuming this was done on BFGMiner, but I thought there have been reports of higher-than-normal network usage with stratum as compared to CGMiner. Ya, I went there. Sorry, I don't want this to turn into a flame-war, as I'm very interested in the outcome, but isn't there a difference? I'm looking for answers from people other than LJR, as you seem to present the hard evidence in this thread, but you are quite a bit biased.

And as of right now, stratum is 22x more efficient than GBT? Both were designed for scaling, so this won't change (much), correct?

And lastly, anyone besides LJR care to comment on the issue of "network security" by using GBT or "secured" stratum? Correct me if I'm wrong, but the only difference is downloading all of the transactions and independently approving them, rather than just hashing what the pool gave you? A) Is this correct? B) How likely is an attack on a pool to the point where the pool's hashrate could be controlled and used maliciously? C) How likely is it that the top 3 pools could have this happen at the same time? I do not mine at one of the top 3, as I'd rather see 10-20 medium sized pools than 3-5 big ones.

#3 by Luke-Jr (OP), December 16, 2012, 06:55:05 AM

I understand scaling will be a big issue, but your own findings show the old GW is 3-6 times more efficient than your own protocol? Moo point (you know, like a cow's opinion? +1 if you get the reference) as this is all gearing up for ASICs, but I just think it's funny.
At 2.5 Gh/s, yes. Due to scalability, GBT will probably surpass getwork somewhere around 20 Gh/s.
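
A back-of-envelope sketch of that crossover, derived from the measurements in the OP (the key assumption being that GBT's bandwidth is roughly fixed per refresh while getwork's grows linearly with hashrate):

Code:
HASHES_PER_SHARE = 2**32  # one difficulty-1 share

def shares_per_min(ghps):
    return ghps * 1e9 * 60 / HASHES_PER_SHARE

# GBT measured 0.33 shares per 2 KB at 2.5 GH/s -> a roughly fixed KB/min budget
gbt_kb_per_min = shares_per_min(2.5) / 0.33 * 2

# rolling getwork measured 2.19 shares per 2 KB, and scales with hashrate
def getwork_kb_per_min(ghps):
    return shares_per_min(ghps) / 2.19 * 2

for g in (2.5, 10, 17, 25):
    print("%5.1f GH/s: getwork %6.1f KB/min, GBT %6.1f KB/min"
          % (g, getwork_kb_per_min(g), gbt_kb_per_min))
# with these numbers the lines cross somewhere between 10 and 25 GH/s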

I'm assuming this was done on BFGminer, but I thought there have been reports of higher-than-normal network usage with stratum as compared to CGMiner.
BFGMiner implements secure stratum, and CGMiner implements blind stratum. (Edit: The testing was all done on BFGMiner, however; I commented out the secure stratum code for the blind stratum test)

And as of right now, stratum is 22x more efficient than GBT? Both were designed for scaling, so this won't change (much), correct?
It shouldn't change much, correct. But that's just blind stratum, which is harmful to the network. Secure stratum is less efficient. And all stratum is less flexible (currently).

And lastly, anyone besides LJR care to comment on the issue of "network security" by using GBT or "secured" stratum? Correct me if I'm wrong, but the only difference is downloading all of the transactions and independently approving them, rather than just hashing what the pool gave you? A) Is this correct? B) How likely is an attack on a pool to the point where the pool's hashrate could be controlled and used maliciously?
That's the main issue, yes. By making pools a high-value target (due to the ability to do these things), I'd say the attack risk is a lot greater than it would be using a secure mining protocol. Perhaps it isn't economically useful today, but it will be if Bitcoin ever takes off.

C) How likely is it that the top 3 pools could have this happen at the same time? I do not mine at one of the top 3, as I'd rather see 10-20 medium sized pools than 3-5 big ones.
Why is that? Probably because 3-5 big ones centralize more mining than 10-20 medium ones? That's exactly the point: 3-5 big ones using GBT isn't any worse than 10-20 medium ones, since the mining is decentralized again.

#4 by gmaxwell (Moderator), December 16, 2012, 07:05:10 AM (last edit: December 16, 2012, 07:26:09 AM by gmaxwell)

I understand scaling will be a big issue, but your own findings show the old GW is 3-6 times more efficient than your own protocol? Moo point (you know, like a cow's opinion? +1 if you get the reference) as this is all gearing up for ASICs, but I just think it's funny.

Not "will be" — it is the only issue. The reference there is 2 KB of data... a very tiny amount. The only reason to worry about data amounts for normal users is higher-rate miners. And the other protocols don't increase their inbound bandwidth much at all at higher rates... while GETWORK basically becomes linear with hashrate.

Quote
A) Is this correct? B) How likely is an attack on a pool to the point where the pool's hashrate could be controlled and used maliciously? C) How likely is it that the top 3 pools could have this happen at the same time? I do not mine at one of the top 3, as I'd rather see 10-20 medium sized pools than 3-5 big ones.

If you really wanted to assume these attacks couldn't happen, you could just have a few centralized parties handle all the transactions and we could skip this whole costly mining thing.  Perhaps we could call the resulting system "paypal", if the name isn't already taken.

But more seriously, top pool infrastructure has been compromised before— fortunately there was more to be made from robbing the pool's hot wallet than by maliciously redirecting the mining, though that's not as likely to be true in the future.  Compromising any major pool— even one that isn't a majority by itself— is still bad because it enables a variety of nasty attacks.

And certainly, more smaller pools would be better.  But regardless of the pool size— bitcoin loses the decentralization that makes it valuable if the people buying hardware aren't doing any due diligence to make sure their hash power isn't being redirected to attack the network.
#5 by kano, December 16, 2012, 07:42:18 AM

So firstly, anyone wanting to see the figures, simply look at the cgminer API and see for yourself.

Yes not much point trusting what Luke-Jr has to say - look at the numbers yourself ...

cgminer API stats command includes the number of times data is sent and received and the amount of data sent and received.
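
If you want to pull those counters yourself, something like this works against a local cgminer with the API enabled (--api-listen; the default port 4028 and the exact response fields are assumptions that vary by version, so treat this as a sketch):

Code:
import socket, json

def cgminer_cmd(command, host="127.0.0.1", port=4028):
    # the cgminer API accepts JSON-formatted commands on its listen port
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(json.dumps({"command": command}).encode())
        return s.recv(65536).decode(errors="replace").rstrip("\x00")

print(cgminer_cmd("stats"))  # includes the sent/received byte counters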

As Luke-Jr has stated above (though the numbers are false and he knows it), he has GBT increasing transaction confirm time by 4 times what Stratum does with his clone miner.

In cgminer, we keep the transaction confirm times roughly the same for both Stratum and GBT - which Luke-Jr complained about, saying that it is better to have GBT increase transaction confirm times by 4 times what Stratum does.

Now, I still have yet to see explained anywhere how getting the list of transactions stops a pool from attempting a double spend or gains any other security?
It may be some wet dream that Luke-Jr likes to fantasize about, but no such security exists, anywhere.
As for decentralised?
How exactly is having a centralised pool using GBT decentralised? It isn't.

The few times I bothered to look at the cgminer statistics I got something on the order of GBT using 50x Stratum's bandwidth, but there was another factor to take into consideration (share difficulty), so it's probably not quite as high as 50x.
But again, anyone can do it themselves and see the numbers - no point taking much notice of Luke-Jr's rigged results and lies.

Stratum (on the pools that invented it) sends work roughly every 30s as per design.
GBT on Luke-Jr's clone miner gets work every 120s - go check the code - that's what it says - not 80s as he lied above.
You won't find that 80s anywhere in the clone miner code.
He rigged the results.

Again, on his miner clone you are increasing transaction confirm times by 4 times with GBT versus Stratum; on cgminer they are the same, thus they have the same effect on BTC transaction confirm times.

#6 by crazyates, December 16, 2012, 03:50:32 PM

C) How likely is it that the top 3 pools could have this happen at the same time? I do not mine at one of the top 3, as I'd rather see 10-20 medium sized pools than 3-5 big ones.
Why is that? Probably because 3-5 big ones centralize more mining than 10-20 medium ones? That's exactly the point: 3-5 big ones using GBT isn't any worse than 10-20 medium ones, since the mining is decentralized again.
Now, I still have yet to see explained anywhere how getting the list of transactions stops a pool from attempting a double spend or gains any other security?
It may be some wet dream that Luke-Jr likes to fantasize about, but no such security exists, anywhere.
As for decentralised?
How exactly is having a centralised pool using GBT decentralised? It isn't.

So let me get this straight, this whole debate over GBT vs stratum boils down to the issue of "network security"? I'm all for network security, but I usually leave the details for people smarter than me. However, I have to ask the same questions as Kano: Does having every miner on a pool verify the transactions actually increase security? And also how does mining at a centralized pool using GBT = decentralized?

It seems to me that if these "security" issues are actually a non-issue, then stratum + cgminer = the best solution. I say cgminer because LJR admitted to having to edit his program just to use the same stratum implementation as cgminer. We get the same amount of "security" as the old GW, but with far less network usage and far greater scalability. On the other hand, if these issues are actually legitimate, then it would appear that GBT is the way to go, as it does use a little less bandwidth than secured stratum.

However, I have to take into account 3 things: LJR is the only person I've seen advocating these changes. LJR is also the author of GBT and BFGminer, the competing solution to a problem he seems to be the only person who cares about. And lastly, kano suggests that LJR jimmied the test to make his protocol look better.

#7 by wizkid057, December 16, 2012, 04:10:17 PM

.... you are increasing transaction confirm times by 4 times with GBT versus Stratum.....

I don't exactly see how this makes any sense.  As we know, transactions are confirmed initially when they are mined in a block, and receive a bump in confirmation every time a new block is built on top of the block in which the transaction appears.

Now, the bitcoin network is tailored for blocks every 10 minutes.  I'm no rocket scientist, but, as far as I know, 30 seconds, 80 seconds, and 120 seconds are still < 10 minutes.  So, at most, even if any protocol shifts mining the transaction by any of those times, the confirmation time gets pushed to the next block.  So, instead of 10 minutes, it's 20 minutes to get a confirmation, in a worst case scenario.  And, again, no rocket scientist here, but 20 divided by 10 != 4.

And this is a moot point anyway.  The polling interval of getting new work containing new transactions (GBT, getwork, stratum, custom, whatever) shouldn't adversely affect confirmation times on average as long as the work is requested at intervals of less than half the bitcoin block mining target of 10 minutes, since every time a block is found, the new transactions will be used for the new work anyway.  So, at most, again, even in a worst case scenario we're talking about pushing the transaction back one block IF the transaction is given to the pool that just happens to mine the next block AFTER *every* miner of that pool has already requested their work, but before it mines the block.  Seems like a pretty unlikely scenario even at 120 seconds (20% of the block target time).

I really want to know where this 4x longer to confirm number comes from...

To summarize, let's look at every protocol as pseudo code:

(X being 1/2 the protocol's new work interval)
1. Miner requests work from pool
2. X seconds later, a network user submits a transaction
3. Miner mines a block using work from line 1
4. Miner requests new work from pool due to the new block found (this happens < X seconds from line 1)
5. Miner's new work works on the transaction referenced on line 2, < X seconds from the time it was submitted to the network
6. Some miner (maybe this one, maybe somewhere else) mines a block that contains the transaction from line 2 sometime between now and 10 minutes from now, on average

Transaction confirmation time: X seconds at a minimum, to 10 minutes, on average.

So, if the network contains only one miner, polling at X second intervals, then transactions take at least X seconds to be put into a block, and 10 minutes on average.  In practice, regardless of the value of X, miners are not requesting work from their pools or solo bitcoind at the exact same interval, so, again, as long as the value of X is less than 5 minutes, the value of X has no real effect, on average, on transaction confirmation time, because blocks happen on average every 10 minutes.
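
A quick Monte Carlo sketch of that argument (my toy model, not wizkid057's numbers: a tx arrives mid-interval, gets picked up at the next template refresh, and blocks arrive as a memoryless process with a 600 s mean):

Code:
import random

def avg_inclusion_time(refresh_s, trials=200000):
    total = 0.0
    for _ in range(trials):
        # the tx lands somewhere inside the refresh interval
        wait_for_refresh = random.uniform(0, refresh_s)
        # block arrivals are memoryless: expected 600 s from pickup
        total += wait_for_refresh + random.expovariate(1.0 / 600)
    return total / trials

for r in (30, 80, 120):
    print("refresh %3ds -> avg inclusion ~%.0fs" % (r, avg_inclusion_time(r)))
# ~615 s, ~640 s, ~660 s: the refresh interval only adds about half its own length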

But, as this forum is full of trolls anyway, I don't think anyone should just trust me, kano, luke, or anyone else.  Do the math yourself, have someone you know do the math for you, or in some other way verify the numbers yourself.

-wk

P.S. - The OP was actually about bandwidth usage, not confirmation times... FWIW, I'm at 0.49 shares per 2KB with stock-settings bfgminer on Eligius using GBT with my four x6500s (~1.5GH share acceptance rate), after about 10 hours... which is actually better than Luke-Jr's result, for whatever reason.  Seems the stock setting is a scantime of 60 seconds and expiry of 120 seconds... which IIRC is the same as cgminer.

#8 by gmaxwell (Moderator), December 16, 2012, 04:12:14 PM

he has GBT increasing transaction confirm time by 4 times what Stratum does with his clone miner.
In cgminer, we keep the transaction confirm times roughly the same for both Stratum and GBT - which Luke-Jr complained about saying that it is better to have GBT increase transaction confirm times by 4 times what Stratum does.
None of this has anything to do with transaction confirm times.  It might impress the less technical people when you throw nonsense around like this, but it's really just everyone with a clue embarrassed on your behalf.
#9 by wizkid057, December 16, 2012, 04:15:13 PM

However, I have to take into account 3 things: LJR is the only person I've seen advocating these changes. LJR is also the author of GBT and BFGminer, the competing solution to the problem he seems to be the only person caring about. And lastly, kano suggests that LJR jimmied the test to make his protocol look better.

Actually, many pools have GBT implemented.  I take that as advocating.

Also, kano suggests a lot of things.  See my post above (edit: and gmaxwell's) for an example why we shouldn't just listen to kano.

And I don't see how Luke-Jr could possibly have rigged his test, when per his own results GBT is not number one on his list.  Plus he also provides the result for stratum-secure using the same request-time values as GBT (0.25).

Please read and think before jumping to conclusions, folks.

-wk

#10 by gmaxwell (Moderator), December 16, 2012, 04:47:55 PM (last edit: December 16, 2012, 04:59:55 PM by gmaxwell)

So let me get this straight, this whole debate over GBT vs stratum boils down to the issue of "network security"? I'm all for network security, but I usually leave the details for people smarter than me. However, I have to ask the same questions as Kano: Does having every miner on a pool verify the transactions actually increase security? And also how does mining at a centralized pool using GBT = decentralized?

I suspect you've already reached a number of conclusions and that I'm just wasting my time responding—  since you're already ignoring the points upthread, but in case I'm incorrect:

GBT was originally created by ForrestV (under the original name, getmempool) so that p2pool could exist.  Today _all_ current pool software uses it to talk to bitcoind.   Luke took up the task of writing the formal specification for it and made a number of important improvements to close some denial-of-service risks and to make it more useful for general pool software; because some of the updates weren't completely compatible and the old name was confusing, it was renamed.  GBT is part of the bitcoin ecosystem, developed cooperatively in public, reviewed and used by many people, part of the reference software, with a clear publicly agreed specification.  It's not just "luke's thing".

Quote
It seems to me that if these "security" issues are actually a non-issue, then stratum + cgminer = the best solution. I say cgminer because LJR admitted to having to edit his program just to use the same stratum implementation as cgminer. We get the same amount of "security" as the old GW, but with far less network usage and far greater scalability. On the other hand, if these issues are actually legitimate, then it would appear that GBT is the way to go, as it does use a little less bandwidth than secured stratum.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.  If you were to call yourself a miner in that model we might as well say that AMD is a miner with 90% of the hashpower, declare bitcoin a failure, and go home.

The security of bitcoin really depends on the assumption that mining— real mining, not just people selling cpu cycles— will be _well_ distributed, and that the people running it will have a long-term investment in bitcoin (e.g. mining hardware) that would be damaged by doing anything hostile like helping someone double-spend. If it's not well distributed there is a lot of risk that attackers can take over big chunks by hacking systems or holding guns to people's heads. If the miners are not invested in Bitcoin then they could get short-term returns by maliciously mining. These risks exist even if they don't give the attacker a majority of the hash power; "a lot" is enough to steal from people, especially if combined with network isolation attacks (see the bitcoin paper on attack success probability as a function of hashrate). People hype "51%" but >50% is just when the attack success probability becomes 100% (assuming infinite attack duration). Especially with the subsidy down to 25 BTC, a high-value attack doesn't have to have high odds of success to be very profitable.
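
For reference, this is the attack success probability the whitepaper derives (section 11): the chance that an attacker with fraction q of the hashrate ever catches up from z blocks behind.

Code:
from math import exp, factorial

def attacker_success(q, z):
    # q: attacker's fraction of hashrate, z: confirmations to overcome
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    return 1 - sum((lam**k * exp(-lam) / factorial(k)) * (1 - (q / p) ** (z - k))
                   for k in range(z + 1))

for q in (0.1, 0.3, 0.45):
    print(q, [round(attacker_success(q, z), 4) for z in (1, 3, 6)])
# well below 50%, "a lot" of hashrate still succeeds often enough to be dangerous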

If these assumptions— that actual mining (validation) is distributed and the miners are economically motivated to be honest— are incorrect, then bitcoins— and mining hardware— should be worthless.

So, big centralized pools are a terrible thing for bitcoin. They violate the assumptions, and even if the poolops themselves are honest people (well, lots of people thought pirate was honest...) they are still an easy target for attack electronically or at gunpoint. But pools are easy and convenient and people are short-sighted and hope that someone else does the work to keep bitcoin secure while they just do the easiest thing.

For a while the core developers have been strongly encouraging people to mine using p2pool, which solves these (and other) issues.  Luke's pool was already popular with the more-geeky crowd and it lost a number of big miners to P2pool.   Rather than denying the reality of the above serious concerns, Luke looked for a way of combining the advantages— make mining that is as easy to use and as flexible in payout schemes as fully centralized pools, but not a danger to the Bitcoin ecosystem. He realized that the getmempool call that p2pool and other poolservers were already using could be used by miners directly, allowing the miners to "trust but verify" and substantially limiting the attacks a malicious party controlling a pool could perform.

(Luke has also done some clever work in BFGminer to decrease the exposure even with getwork, for example it watches the previous block in the block header and prevents pools from asking you to mine a fork against blocks you previously mined. ... but getwork simply doesn't have enough information to do much better validity checks than that)
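
(A toy reconstruction of that check, not BFGMiner's actual code: the previous-block hash sits at bytes 4..36 of the 80-byte header getwork hands out, so a miner can notice the pool rewinding to an older tip.)

Code:
seen_prevhashes = []  # prevhashes in the order the pool sent them

def prevhash_from_header(header_hex):
    header = bytes.fromhex(header_hex)
    return header[4:36][::-1].hex()  # stored little-endian in the header

def looks_like_fork(header_hex):
    """True if this work rewinds to a tip we already built past."""
    prev = prevhash_from_header(header_hex)
    if prev in seen_prevhashes[:-1]:
        return True
    if not seen_prevhashes or seen_prevhashes[-1] != prev:
        seen_prevhashes.append(prev)
    return False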

Con is a brilliant coder, by everyone's acknowledgement, but he's also a little bit coin-operated. Luke can be difficult to deal with at times— but he's thinking long term and he's concerned with the future viability of Bitcoin for everyone.  When you delegate _Bitcoin's_ security to either Con or Luke, you might be delegating it to people smarter than you either way— but out of the two, only in the case of Luke are you delegating it to someone who's actually demonstrated that they've thought much about it or _care_ much about it.

People could debate whether a centralized pool using GBT and users with BFGminer is "decentralized" or not, or GBT-mining's merit relative to p2pool. But debates over terms are boring, and a comparison to p2pool is somewhat apples to oranges (I say p2pool is more decentralized, especially since the poolops can still cheat miners out of payments in the non-p2pool options, and Luke argues with me).  What matters is that properly used GBT-based mining can close many of the risks to Bitcoin of centralized pools. And fortunately slush was convinced to add extensions to stratum that pick up many of the same advantages.

Yes, getting extra data to validate the work takes some more bandwidth but all the bandwidths involved in mining (or bitcoin for that matter) are small. Because GBT style mining allowed massive bandwidth reductions for high speed miners many people had been just expecting people to switch to it and never notice the increases because they were paid ten times over by the improvement, but then slush showed up with stratum which had the efficiency but not (initially) the security improvements of GBT... In any case, the bandwidths involved are so small that they shouldn't be relevant to almost any miners (esp since they don't increase much with increased hashrate). It may be relevant to mining pools— but miners are paying for their services and ought to demand the best behavior for their long term future.
#11 by 2112, December 16, 2012, 06:13:13 PM

The whole post above is well argued. But I wanted to highlight just this fragment.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.
This is a standard dialectic argument used when a software vendor tries to disparage a software-as-a-service (SaaS) vendor.

In this case we have Luke-Jr & gmaxwell as the conventional software vendor. They hawk their complex software and the updates to it. Although the software is open-sourced, it is so complex and obfuscated that only very few will be able to sensibly maintain it. The service component (PoW pooling) is pushed behind.

slush, ckolivas and kano together form a SaaS coalition. They hawk first of all their service, and the software part is the simplest possible required to relay the work that needs to be performed.

The "who's better" decision shouldn't be made on the bandwidth cost alone. The most rational discriminator is each miner's administrative overhead costs and skills. Simply speaking, the question you want to ask yourself is: do I want to delve into programming, compiling and reinstalling every time I need to change rules in my mining operation?

#12 by Luke-Jr (OP), December 16, 2012, 07:43:03 PM (last edit: December 16, 2012, 07:58:10 PM by Luke-Jr)

So let me get this straight, this whole debate over GBT vs stratum boils down to the issue of "network security"?
Not the entire issue, that's just one important difference. GBT is also more flexible in terms of things like miners being able to manipulate the blocks they mine and pools being able to set up their own failover/load balancing. There's also GBT Full Node Optimization, which I wasn't able to test since no miner implements it yet, which would enable miners to use their local bitcoind to cut down even more on bandwidth use without sacrificing network security.

I'm all for network security, but I usually leave the details for people smarter than me. However, I have to ask the same questions as Kano: Does having every miner on a pool verify the transactions actually increase security?
It decentralizes the block creation. With every miner checking their blocks, a pool would be noticed trying to perform any attacks and miners could switch to another pool or (with BIP 23 Level 2 support) make blocks that aren't performing the attack.
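
The basic building block of that checking is that the miner assembles the merkle root itself from transactions it has seen, so the pool can't slip in anything the miner didn't get a chance to verify. A simplified sketch (real validation would also check each transaction against a local node):

Code:
import hashlib

def dsha(b):
    # bitcoin uses double-SHA256 throughout
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    layer = list(txids)  # txids as raw bytes, internal byte order
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # bitcoin duplicates the last hash
        layer = [dsha(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]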

And also how does mining at a centralized pool using GBT = decentralized?
Centralized vs decentralized lies in where the blocks are being created/controlled. With GBT, blocks are always ultimately controlled by the miner (there are some rules on what the miner can do, but this is necessarily part of any pool, centralized or not).

However, I have to take into account 3 things: LJR is the only person I've seen advocating these changes. LJR is also the author of GBT and BFGminer, the competing solution to the problem he seems to be the only person caring about.
While I am the "BIP champion" of BIPs 22 and 23, the GBT protocol itself is the combined work of many authors over the course of 6 months in 2012. Adoption isn't that bad, either.

And lastly, kano suggests that LJR jimmied the test to make his protocol look better.
Kano is, as usual, a liar. Ironically, he added a "Bytes Received" counter to cgminer git that significantly misreports bandwidth usage. I reported this to him, but apparently all he cares about is making GBT look bad.

Edit: When I posted the OP, I was actually expecting a "haha, I told you so" response from Kano. But seriously, he is extremely agreement-resistant.

#13 by gmaxwell (Moderator), December 16, 2012, 08:25:01 PM (last edit: December 16, 2012, 08:51:19 PM by gmaxwell)

The whole post above is well argued. But I wanted to highlight just this fragment.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.
This is a standard dialectic argument used when software vendor tries to disparage software as a service (SaaS) vendor.

Luke runs a service, though it's not much of a money-making service (as his pool doesn't take a cut of the subsidy).  I don't run one— and I don't earn a cent from working on Bitcoin (except for a few donations here and there, of which I've had none in months).  I don't think this has squat to do with motivations, but if you're looking to disparage _someone_ based on motivations you're looking at the wrong parties.

Bitcoin is a distributed and decentralized system.  You can make any distributed system non-distributed by just having some centralized parties run it.   If that is what you want— Paypal is a _ton_ more efficient (all this trustless distributed support has costs, you know), and more carefully audited and trustworthy than basically _any_ centralized Bitcoin service.  I happen to think that Bitcoin is worth having, but only worth having if it offers something that more efficient options can't; without decentralization bitcoin is just a scheme for digicoin pump-and-dumpers to make a buck. So, looking out for Bitcoin's security is my only motivation on this subject. People using GBT or verified stratum for mining against a centralized pool aren't even running any software I wrote, and they're certainly not paying me for it.

I'm quite happy that people provide services for Bitcoin, even though services have centralization risk. But in the community we should demand that things be run in a way which minimally undermines Bitcoin and we should reject the race to the bottom that would eventually leave us with something no better than a really byzantine implementation of paypal.

Quote
Do I want to delve into programming, compiling and reinstalling every time I need to change rules in my mining operation?
And yet this isn't a question anyone is forced to ask themselves, even P2Pool users. And at the same time, people on centralized mining setups still need to maintain their software (e.g. it was pretty hilarious to watch big pools lose half their hashrate when phoenix just stopped submitting blocks at some difficulties).
#14 by -ck, December 16, 2012, 09:28:58 PM

Firstly, the example data is contrived and demonstrates nicely the confusion between technology and application. The test data does not demonstrate the intrinsic bandwidth difference between stratum, getwork and GBT, but the application of said technologies within Luke-Jr's software within the constraints of his particular test. The intrinsic bandwidth requirements of each protocol can easily be calculated, and I implore users who care about bandwidth to test it for themselves - with cgminer. Either using traditional tools or using the cgminer API to get the results, as Kano suggested.

Second, as to what Kano is talking about when he says "transaction times": I believe he is referring to how often a block template for GBT, or a set of merkle branches for stratum, based on currently queued transactions, is received by the mining software. If said template is sent once every 30 seconds on average by stratum, and received every 120 seconds on average by GBT, there are potentially 90 seconds more worth of transactions that never make it into the next block solve, in the way it's implemented in Luke's software. In cgminer I receive the template every 30 seconds with GBT to match that of stratum. Only when the protocols are "equal" in terms of their likelihood of perpetuating transactions (since this is what bitcoin is about) should the bandwidth be compared. Pretending the bandwidth doesn't matter when one implementation can use over 100 MB per hour is just naive.
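
The arithmetic behind that bandwidth complaint is straightforward (the template sizes here are assumed for illustration; the 100 MB/h figure is -ck's observation, not derived here):

Code:
def mb_per_hour(template_kb, refresh_s):
    # one full template fetched every refresh_s seconds
    return template_kb * (3600.0 / refresh_s) / 1024

for kb in (100, 300, 900):
    print("%4d KB template: 30s -> %6.1f MB/h, 120s -> %5.1f MB/h"
          % (kb, mb_per_hour(kb, 30), mb_per_hour(kb, 120)))
# fetching the same template every 30 s rather than 120 s quadruples the cost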

Third, there has still not been any valid explanation for how sending the miner all the transactions in their entirety actually improves the security of the network. Unless someone is running mining software and a local bitcoin node and is comparing the values from each, and then decides what transactions overlap, and what are valid transactions the pool has chosen to filter out, and then determines that the data is enough of the transaction list to be a true set of transactions, having the transactions does not serve any purpose. Pool-security-validity software should be developed that does this, and people should use said approach if they care to confirm the pool is behaving. Luke's software does not remotely do this to verify the transactions. It simply grabs them, and then claims the pool is doing something untoward if they don't match the template. The transactions could be anything. It also completely ignores the fact that the vast majority of miners run their mining software on machines that aren't actually running bitcoin nodes with which to check the transactions. While the main bitcoin devs might wish this to be the case, it's not remotely the reality and is not going to become it.

We have a very suspicious community that will continually check the pools' actions. Yes, history has shown pools may do something untoward - though it's usually about scamming miners with get-rich-quick schemes and not about harming the bitcoin network. If security were to be forced upon the miners by bitcoin itself, then a valid non-pooled mining solution should exist within the client itself that does not require people to have at least 1% of the network to be feasible. Solo mining is pointless for any sole entity anymore. Unless bitcoin devs decide that something resembling p2pool makes its way into bitcoind, allowing miners of any size to mine, then pooled mining is here to stay. If or until that is the case, pooled mining is a proven solution that the miners have embraced. P2pool is great in principle but the reality is it is far from the simple plug-in-the-values-and-mine scenario that miners use pools for. Pretending mining should be dissociated from bitcoin development just makes it even more the job of the pools to find solutions to mining problems as they arise. GBT failed to provide a solution early enough and efficient enough in the aspects that miners care about, and stratum came around that was much more miner-focussed. You can pretend that GBT is somehow superior but there isn't a single aspect that makes it attractive to miners or pool ops, and people are voting with their feet as predicted.

#15 by 2112, December 16, 2012, 09:48:30 PM

Quote from: gmaxwell
Luke runs a service, though it's not much of a money-making service (as his pool doesn't take a cut of the subsidy). I don't run one [...] we should reject the race to the bottom that would eventually leave us with something no better than a really byzantine implementation of paypal.
I understand your arguments, but I'm unable to reconcile them with your actions.

With your writing in English you are advocating distributed systems and heterogeneous implementations.

With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.

The marginalization of Windows users and developers (like Casascius) is a very glaring example: lots more people would contribute to the project if the core developers would actually download Microsoft Visual Studio Express, purge the code base of GNU-isms, and make sure that all future changes developed on Linux don't break Visual Studio builds.

I don't disagree with you: a decentralized project can be surreptitiously centralized by malicious (or incompetent) software-as-a-service vendors.

But there is another avenue to such surreptitious centralization: a core development team that is full of "not invented here" and "our way or highway".

#16 by gmaxwell (Moderator), December 16, 2012, 10:00:01 PM

We have a very suspicious community that will continually check the pools' actions.
No disrespect intended for your thoughtful response, but this made me chuckle.  Deepbit was (is?) frequently attempting to mine forks of its own blocks. It's only Luke's paranoia code that caused anyone to actually notice it.  Look at what happened with the BIP16 enforcement— discussed and announced months in advance, there were popular pools that didn't update for it— some (e.g. 50BTC) for as long as a month, and they kept hundreds of GH/s of mining during that time. A lot of malicious activity is indistinguishable from the normal sloppiness of Bitcoin services, and what does get caught takes hours (or weeks, months...) for people to widely notice, much less respond to... an attack event can be over and done with far faster than that.

The only way to have any confidence that the centralized pools aren't single points of failure is making them so they aren't— so that most malicious activity causes miners to fail over to other pools that aren't misbehaving... but most of those checks can't be implemented without actually telling miners what work they're contributing to.   I'll leave it to Luke to point to the checks he currently implements and sees being implemented in the future.

Quote from: 2112
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.
So many possible responses…
1. You must have me confused with someone else— I write portable software.
2. Show me this person who "can't" use GCC.
3. Not officially supporting VC is more about being conservative and making the software testable, especially since we have a significant shortage of testing resources, especially on windows (and Microsoft's standards conformance is lackluster to say the least)— that a compiler bug caused a miscompilation that ate someone's coin, but it was fine on the compiler the tester used, is little consolation.
4. You do seem to have me confused for your employee: while I'm happy for you to use things I write, I'm certainly not writing it to your specifications without a sizable paycheck.
5. If you'd like to submit some clean portability fixes to the reference client I'll gladly review them.
6. Having heterogeneous software is the direct opposite of people blindly copying code with so little attention that they can't be bothered to do a little work in their environment, so changes that have risk or complexity may not be worth it if they're only for the sake of helping forks and blind copying.
7. Finally, why have you made this entirely offtopic reply here? The thread is about protocols between pools and miners, and has basically nothing to do with the reference client not providing you with MSVC project files.
#17 by kano, December 16, 2012, 10:29:38 PM

...
The thread is about protocols between pools and miners,
...
Pity that seems not to have been of importance to either you or your boss Luke-Jr.

#18 by Luke-Jr (OP), December 16, 2012, 10:43:55 PM

With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.

The marginalization of Windows users and developers (like Casascius) is a very glaring example: lots more people would contribute to the project if the core developers would actually download Microsoft Visual Studio Express, purge the code base of GNU-isms, and make sure that all future changes developed on Linux don't break Visual Studio builds.
Um, why should GCC users be responsible for MSVS compatibility? The reason the codebase(s) aren't compatible is that there are no developers who want to deal with MSVS. Also, it's not like MSVS is free software - so you're asking free individuals to become less free (by agreeing to MSVS's terms) so that you can avoid using free software? I don't get it.

But there is another avenue to such surreptitious centralization: a core development team that is full of "not invented here" and "our way or highway".
This is why I, and most other Bitcoin developers, work to improve standardization and make Bitcoin more friendly to multiple competing implementations maintained by different people. Remember that GBT is the standard developed by free cooperation of many different people with different projects, whereas stratum was the protocol developed behind closed doors by a single individual.

#19 by kano, December 16, 2012, 10:59:50 PM

...
And lastly, kano suggests that LJR jimmied the test to make his protocol look better.
Kano is, as usual, a liar. Ironically, he added a "Bytes Received" counter to cgminer git that significantly misreports bandwidth usage. I reported this to him, but apparently all he cares about is making GBT look bad.

Edit: When I posted the OP, I was actually expecting a "haha, I told you so" response from Kano. But seriously, he is extremely agreement-resistant.
Of course I am agreement-resistant when you fudge the results.
I don't care if the results make you look good or bad.
Anyone who posts results that are rigged should expect a negative response from all but their lapdogs.

The "Bytes Received" counter in cgminer is simply what cURL tells us it has (sent and) received.

If you think there is some problem with that - then feel free to point out what it is - or go bug the CURL developers and tell them their code is wrong.

Simply saying it is wrong, oddly enough, doesn't mean much to me, coz I have had to deal with your blatant lies so often that I give no credence to anything you say without clear proof.

Even though I regularly provide details of my arguments to you, you usually provide none at all, so I certainly have no interest in your statements without any proof, since most of your arguments are clearly FUD.

#20 by Luke-Jr (OP), December 16, 2012, 11:13:32 PM

The "Bytes Received" counter in cgminer is simply what cURL tells us it has (sent and) received.

If you think there is some problem with that - then feel free to point out what it is - or go bug the CURL developers and tell them their code is wrong.
No, you failed to read cURL's documentation, which describes what it actually does. cURL does not have a counter for bytes sent/received on the network.
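
For the curious, these are counters libcurl does expose; note that all of them count HTTP-level bytes, not raw bytes on the wire once TLS or content-encoding is involved (a sketch using pycurl; exact accounting of compressed bodies depends on the libcurl version):

Code:
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "http://example.com/")
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.setopt(pycurl.ENCODING, "gzip")  # GBT responses are gzip-compressed
c.perform()
print("body bytes:   ", c.getinfo(pycurl.SIZE_DOWNLOAD))
print("header bytes: ", c.getinfo(pycurl.HEADER_SIZE))
print("request bytes:", c.getinfo(pycurl.REQUEST_SIZE))
c.close()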

Simply saying it is wrong, oddly enough, doesn't mean much to me, coz I have had to deal with your blatant lies so often that I give no credence to anything you say without clear proof.

Even though I regularly provide details of my arguments to you, you usually provide none at all, so I certainly have no interest in your statements without any proof, since most of your arguments are clearly FUD.
You're projecting.
