Bitcoin Forum
December 04, 2016, 06:34:57 PM
Author Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.9.2  (Read 4817137 times)
Luke-Jr
Legendary
*
Offline

Activity: 2086



View Profile
October 29, 2011, 03:01:38 AM
 #1881

If they're not merged mining the longpolls should match block changes and whatever other reasons the pools actually use - It's my impression that they're all theoretical arguments and non merged-mining pools ONLY use longpoll for a block change.
This is not correct. Eligius does not longpoll for NMC block changes, but it does for other unrelated purposes. That being said, ignoring those will not hurt the miner (at least right now) who is only interested in obtaining bitcoins (though it can harm the bitcoin network).
Therefore, throwing out work blindly with a longpoll that doesn't have a corresponding detected block change, without adding any more command line options, is the most unobtrusive change that I can think of and I will do so in the next version.
There is no benefit to ignoring longpoll results.

-ck
Moderator
Legendary
*
Offline

Activity: 1988


Ruu \o/


View Profile WWW
October 29, 2011, 03:03:37 AM
 #1882

There is no benefit to ignoring longpoll results.
That's not true. cgminer can detect block changes before it receives a longpoll, especially when mining on multiple pools at once. Thus it will have already thrown out the work. Getting a longpoll again after that will make cgminer throw out yet more work. Why do you think I've been resisting dealing with this fucking issue?

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
-ck
Luke-Jr
Legendary
*
Offline

Activity: 2086



View Profile
October 29, 2011, 03:06:39 AM
 #1883

There is no benefit to ignoring longpoll results.
That's not true. cgminer can detect block changes before it receives a longpoll, especially when mining on multiple pools at once. Thus it will have already thrown out the work. Getting a longpoll again after that will make cgminer throw out yet more work. Why do you think I've been resisting dealing with this fucking issue?
Throwing out work doesn't/shouldn't hurt miners. Issuing more work than necessary hurts the pool (CPU time), but in this case the pool has already taken the hit.

-ck
Moderator
Legendary
*
Offline

Activity: 1988


Ruu \o/


View Profile WWW
October 29, 2011, 03:09:54 AM
 #1884

Waiting for work DOES hurt the miner. That's why cgminer tries to tell if some of the work shouldn't actually be discarded by determining if it's from the valid next block and should only throw out work that's from the previous block. However you dress it up for me to support your pool better, there is likely to be more downtime waiting for work by blindly supporting LP for this shit. Admittedly since every pool has taken a hashrate hit lately, none are close to capacity any more, but one day they will be again.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
-ck
Luke-Jr
Legendary
*
Offline

Activity: 2086



View Profile
October 29, 2011, 03:11:46 AM
 #1885

Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.

-ck
Moderator
Legendary
*
Offline

Activity: 1988


Ruu \o/


View Profile WWW
October 29, 2011, 03:14:49 AM
 #1886

Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

Primary developer/maintainer for cgminer and ckpool/ckproxy.
Pooled mine at kano.is, solo mine at solo.ckpool.org
-ck
gmaxwell
Moderator
Legendary
*
Offline

Activity: 2016



View Profile
October 29, 2011, 03:15:57 AM
 #1887

The code commit to add merged mining also allows for EXACTLY this you are saying is bad. EXACTLY this.

Negative, Ghost Rider.

Merged mining adds a _single hash_ to the coinbase transaction and that single hash binds an infinite amount of whatever completely external stuff the miner wants.  This is not the same as people shoving unbounded amounts of crap in the blockchain.   Go look at the blockchain— bitcoin miners have been adding various garbage in the same location for a long time now.

Meanwhile any user, not just miners, could and have been adding crap to the blockchain already.  Where was your bleeding heart when people added several megabytes of transactions to the blockchain in order to 'register' a bunch of firstbits 'names'? 

Even if you don't buy all the arguments I gave about how merged mining is seriously beneficial to the long term stability and security of bitcoin, you should at least realize that the mechanism channels a bunch of different bad uses into a least harmful form.

And really— asking for the feature to be removed? That's— kinda nuts.  Anyone running a pool is already carrying a patched bitcoind for various reasons so it wouldn't stop them. It's already a problem that pool operators are slow to upgrade bitcoin.  Making them carry patches for this just means more they would have to port forward and more reason for them to stay on old bitcoin code.

gmaxwell
Moderator
Legendary
*
Offline

Activity: 2016



View Profile
October 29, 2011, 03:19:17 AM
 #1888

Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

"Dr! Dr! It hurts when I do _this_."
"Then don't do that!"

Don't throw out work until you have replacement work.

Maintain a priority queue, work against 'new' blocks is highest priority.... If there isn't any of that, take what you have.  As has been mentioned, new prev is not the only reason for new work to be issued.
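A minimal sketch of the priority-queue idea described here. The names and priority scheme are illustrative, not cgminer's actual code: lower priority number means fresher work (0 = built on the newest prevhash), and old work is only handed out when nothing fresher exists, rather than idling the GPU.

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker so heapq never compares dicts

class WorkQueue:
    def __init__(self):
        self._heap = []

    def push(self, work, priority):
        # Lower priority value = fresher work (0 for the current block).
        heapq.heappush(self._heap, (priority, next(_counter), work))

    def take(self):
        # Prefer the freshest work; if no new work has arrived yet,
        # hand out older work rather than letting hardware go idle.
        return heapq.heappop(self._heap)[2] if self._heap else None

q = WorkQueue()
q.push({"prevhash": "old"}, priority=1)
q.push({"prevhash": "new"}, priority=0)
assert q.take()["prevhash"] == "new"  # freshest first
assert q.take()["prevhash"] == "old"  # then fall back instead of stalling
```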
DiabloD3
Legendary
*
Offline

Activity: 1162


DiabloMiner author


View Profile WWW
October 29, 2011, 03:21:39 AM
 #1889

Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

"Dr! Dr! It hurts when I do _this_."
"Then don't do that!"

Don't throw out work until you have replacement work.

Maintain a priority queue, work against 'new' blocks is highest priority.... If there isn't any of that, take what you have.  As has been mentioned, new prev is not the only reason for new work to be issued.

Wrong. On LP synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

DeathAndTaxes
Donator
Legendary
*
Offline

Activity: 1218


Gerald Davis


View Profile
October 29, 2011, 03:22:19 AM
 #1890

Waiting for work DOES hurt the miner.
I can't think of a scenario where a longpoll should make a miner wait for work.
Are you telling me a pool can universally throw out enough work for every single miner working for it in the microsecond after sending out a longpoll at a rate fast enough to guarantee their GPUs don't go idle? Let's close the discussion before I get more annoyed.

Couldn't the workflow be something like this:

1) cgminer detects LP
2) cgminer continues to work on existing block header if it determines block hasn't changed (you already have this) *
3) cgminer issues new getwork()
4) cgminer continues to work on existing block header until pool provides new work. *
5) cgminer begins working on new work.

In steps 2 & 4, if a share is found prior to step 5 then submit it anyway.  It may result in a stale, but stales don't hurt the miner.  They don't reduce the # of good shares.

i.e. say miner has 500 shares and finds a share in step 2 or 4
if share is good miner now has 501 shares = 1 share gained.
if share is bad miner still has 500 shares and 1 stale = nothing lost
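The arithmetic in this post as a sketch (names are illustrative): submitting a speculative share while waiting for post-longpoll work can only help the good-share count, never hurt it.

```python
def good_shares_after_submit(good_shares, share_accepted):
    # Accepted: one more good share. Rejected as stale: count unchanged.
    return good_shares + (1 if share_accepted else 0)

assert good_shares_after_submit(500, True) == 501   # 1 share gained
assert good_shares_after_submit(500, False) == 500  # nothing lost
```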

DeathAndTaxes
Donator
Legendary
*
Offline

Activity: 1218


Gerald Davis


View Profile
October 29, 2011, 03:29:00 AM
 #1891

Wrong. On LP synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

That is a bad assumption:
If LP issued on new NMC block then the BTC block is still valid.  Miner should continue until it has new work.
Potentially in future if LP is issued because pool has detected new high fee transaction then miner should continue until it has new work.
There may be unnamed reasons why pool could now or in future wish miner to change to new block header that doesn't require flushing queue.

The goal should be to maximize good shares, not reduce stale shares.

As an example
501 good shares & 1 stale share is worth more than 500 good shares & 0 stale shares.
All that really matters is the number of good shares.
DiabloD3
Legendary
*
Offline

Activity: 1162


DiabloMiner author


View Profile WWW
October 29, 2011, 03:32:46 AM
 #1892

Wrong. On LP synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

That is a bad assumption:
If LP issued on new NMC block then the BTC block is still valid.  Miner should continue until it has new work.
Potentially in future if LP is issued because pool has detected new high fee transaction then miner should continue until it has new work.
There may be unnamed reasons why pool could now or in future wish miner to change to new block header that doesn't require flushing queue.

The goal should be to maximize good shares, not reduce stale shares.

As an example
501 good shares & 1 stale share is worth more than 500 good shares & 0 stale shares.
All that really matters is the number of good shares.

No, you're still approaching it wrong. The merged mining shouldn't generate an LP, as they are implicitly synchronous. What we need is an asynchronous LP of some kind. Trying to hackjob it in the miner isn't correct.

DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218


Gerald Davis


View Profile
October 29, 2011, 03:35:47 AM
 #1893

The merged mining shouldn't generate an LP, as they are implicitly synchronous.

Can you elaborate?  I am not sure what you are trying to say.  The optimum solution when a pool changes NMC blocks is to change the block the miners are working on as quickly as possible.  That doesn't mean existing work will be rejected, just that new work is potentially more profitable.  The same situation can exist with a high-fee transaction.  Existing work is valid but new work has a higher EV.  LP advises miners to get new work; once they get it they start working on it.  No reason to idle.
gmaxwell
Moderator
Legendary
*
Offline

Activity: 2016



View Profile
October 29, 2011, 03:36:52 AM
 #1894

Wrong. On LP synchronously flush all work used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

As I pointed out upthread there are many reasons to LP other than a new prev. LP does not mean that the pool _will_ reject work from before the LP. It might... it might not. It probably will _soon_ but that isn't a reason to stall.

Even in the case of a new prev,  I don't know what the pools do— but on my solo mining if I get a solution from my miners I reorganize and try to extend that, even if it comes late.   I'll only switch to mining an externally headed chain when it's at least one longer.

(This provides the highest expected return:  If you _do_ find a solution you'll get two blocks and probably win even though you were late to the game.  If you don't find a solution before the network gets further then it doesn't matter).

Though we're really debating the angels dancing on the head of a pin.  Even if you stall for a half second every single LP, with LP per block you're only talking about losing 0.08% hashpower / energy at most.
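Checking that back-of-the-envelope figure (the half-second stall is the post's assumed worst case): one longpoll per block against Bitcoin's ~600-second average block interval costs well under a tenth of a percent of hashing time.

```python
stall_seconds_per_lp = 0.5    # assumed worst-case stall per longpoll
mean_block_interval_s = 600.0  # Bitcoin targets ~10 minutes per block
lost_fraction = stall_seconds_per_lp / mean_block_interval_s
assert round(lost_fraction * 100, 2) == 0.08  # ~0.08%, as stated
```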
DiabloD3
Legendary
*
Offline

Activity: 1162


DiabloMiner author


View Profile WWW
October 29, 2011, 03:39:24 AM
 #1895

Wrong. On LP synchronously flush all works used on that pool. If you maintain a queue of work, it should be emptied and discarded.

Yes, that leads to stalled threads. Better than wasting time and electricity generating stales.

As I pointed out upthread there are many reasons to LP other than a new prev. LP does not mean that the pool _will_ reject work from before the LP. It might... it might not. It probably will _soon_ but that isn't a reason to stall.

Even in the case of a new prev,  I don't know what the pools do— but on my solo mining if I get a solution from my miners I reorganize and try to extend that, even if it comes late.   I'll only switch to mining an externally headed chain when it's at least one longer.

(This provides the highest expected return:  If you _do_ find a solution you'll get two blocks and probably win even though you were late to the game.  If you don't find a solution before the network gets further then it doesn't matter).

Though we're really debating the angels dancing on the head of a pin.  Even if you stall for a half second every single LP, with LP per block you're only talking about losing 0.08% hashpower / energy at most.

See what I wrote above. We need asynchronous LPs to solve the issue correctly.

kano
Legendary
*
Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
October 29, 2011, 03:42:42 AM
 #1896

The code commit to add merged mining also allows for EXACTLY this you are saying is bad. EXACTLY this.

Negative, Ghost Rider.

Merged mining adds a _single hash_ to the coinbase transaction and that single hash binds an infinite amount of whatever completely external stuff the miner wants.  This is not the same as people shoving unbounded amounts of crap in the blockchain.   Go look at the blockchain— bitcoin miners have been adding various garbage in the same location for a long time now.

Meanwhile any user, not just miners, could and have been adding crap to the blockchain already.  Where was your bleeding heart when people added several megabytes of transactions to the blockchain in order to 'register' a bunch of firstbits 'names'? 

Even if you don't buy all the arguments I gave about how merged mining is seriously beneficial to the long term stability and security of bitcoin, you should at least realize that the mechanism channels a bunch of different bad uses into a least harmful form.

And really— asking for the feature to be removed? That's— kinda nuts.  Anyone running a pool is already carrying a patched bitcoind for various reasons so it wouldn't stop them. It's already a problem that pool operators are slow to upgrade bitcoin.  Making them carry patches for this just means more they would have to port forward and more reason for them to stay on old bitcoin code.


Sigh
Reread what I said.
I'm talking about the change to bitcoind that was to allow for merged mining.
However, it allows the complete block to be generated, not just to specify the coinbase.
It DIDN'T just allow for a change to the coinbase, it is way more expansive than that.

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
shads
Full Member
***
Offline

Activity: 224


View Profile WWW
October 29, 2011, 05:25:11 AM
 #1897

Therefore, throwing out work blindly with a longpoll that doesn't have a corresponding detected block change, without adding any more command line options, is the most unobtrusive change that I can think of and I will do so in the next version.

For those who are concerned this will create some impact, think of it this way.  You're throwing out one work or the other anyway.  The current implementation is to throw out the new one; the suggested change is to throw out the old one.  Unless you believe for some odd reason that the new one is somehow worse than the old one, it's a no brainer.  Fresh is best.

PoolServerJ Home Page - High performance java mining pool engine

1LezqRatQz7MeNoCVziYwcdwtqeEbvrdAq - http://payb.tc/shads

Quote from: Matthew N. Wright
Stop wasting the internet.
shads
Full Member
***
Offline

Activity: 224


View Profile WWW
October 29, 2011, 05:32:54 AM
 #1898

Couldn't the workflow be something like this:

1) cgminer detects LP
2) cgminer continues to work on existing block header if it determines block hasn't changed (you already have this) *
3) cgminer issues new getwork()
4) cgminer continues to work on existing block header until pool provides new work. *
5) cgminer begins working on new work.


maybe all other pools do it differently or something coz I can't see why this is so hard.  When PSJ sends a longpoll it includes work.  I thought all LP implementations did this.  There's no delay before you can start on the new work after a longpoll because you've already been given it.  The only reason you'd need to issue a getwork is to refill yr queue.

If you get a LP response and you've detected a new block from another pool then behave as you normally would and mine the other pool until the current pool catches up.
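A sketch of the point about longpoll responses carrying work: in the classic getwork JSON-RPC shape, the LP reply's `result` already contains a `data` field, so the miner can start on it immediately instead of issuing a separate getwork. Field names follow the getwork convention, but exact fields vary by pool, so treat this as illustrative.

```python
import json

def handle_longpoll(raw_response, queue):
    # Parse the LP reply; if it delivered work, start on it at once and
    # only use extra getwork calls to refill the rest of the queue.
    reply = json.loads(raw_response)
    work = reply.get("result")
    if work and "data" in work:
        queue.insert(0, work)
    return queue

lp = json.dumps({"result": {"data": "00" * 76, "target": "ff" * 32}})
queue = handle_longpoll(lp, [])
assert queue and queue[0]["data"].startswith("00")
```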

edit:
p.s.  I see that conman's mood hasn't improved any since last I spoke to him so I'm just gonna leave this quote here again:

yes it will... I would personally suggest that miners who want this fixed pool together and put up a bounty.  I've looked through cgminer code and I'd guess the work involved in building it is in the same order as poolserverj (i.e. months).  I enjoyed writing poolserverj but one of the least fun things about it is supporting new systems like merged-mining which you don't have any interest in.  I'm fairly sure I wouldn't have bothered supporting MM if it weren't for tempting bounties waved in front of me.

PoolServerJ Home Page - High performance java mining pool engine

1LezqRatQz7MeNoCVziYwcdwtqeEbvrdAq - http://payb.tc/shads

Quote from: Matthew N. Wright
Stop wasting the internet.
kano
Legendary
*
Offline

Activity: 1918


Linux since 1997 RedHat 4


View Profile
October 29, 2011, 07:47:16 AM
 #1899

Therefore, throwing out work blindly with a longpoll that doesn't have a corresponding detected block change, without adding any more command line options, is the most unobtrusive change that I can think of and I will do so in the next version.

For those who are concerned this will create some impact, think of it this way.  You're throwing out one work or the other anyway.  The current implementation is to throw out the new one; the suggested change is to throw out the old one.  Unless you believe for some odd reason that the new one is somehow worse than the old one, it's a no brainer.  Fresh is best.
Except if there was no merged mining, the old one would be OK except on a Bitcoin new block LP.

However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?
Discarding it is actually discarding work that is valid in all cases except on a Bitcoin new block LP.
Submitting it and then getting a 'stale' response is just as bad.
This is work you have done that would be valid if not for merged mining.
i.e. back to what I said about it earlier on ...

Pool: https://kano.is BTC: 1KanoiBupPiZfkwqB7rfLXAzPnoTshAVmb
CKPool and CGMiner developer, IRC FreeNode #ckpool and #cgminer kanoi
Help keep Bitcoin secure by mining on pools with Stratum, the best protocol to mine Bitcoins with ASIC hardware
shads
Full Member
***
Offline

Activity: 224


View Profile WWW
October 29, 2011, 09:19:18 AM
 #1900

*scratches head*

you're not discarding anything you've done.  If you've found a solution previously with the same work you've already submitted it.  You have an exactly equal chance of finding a new solution with either the new or the old work, but with the new work there's less chance it's stale.

Simply replacing one work with another is a nanosecond operation.  Nothing is being discarded except some data which has less inherent value than what it's being replaced with.

Quote
However, the actual issue is what happens to any incomplete (but started) work occurring at the time of the LP?

You do understand there's no such thing as 'partially complete' work?  One work is good for about 4 billion hashes, but just because you've done 2 billion doesn't mean you're any closer to finding a share than if you're starting fresh.  Any given piece of work may have 0 solutions or it may have many.  There's not a fixed number that you get when you've finished running through the nonce range.
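A sketch of why the searched portion doesn't matter: each hash is an independent Bernoulli trial against the target, so the per-hash odds are identical whether the work is fresh or half-exhausted; only the count of remaining nonces differs. Numbers assume difficulty-1 shares, for illustration.

```python
p_share_per_hash = 1 / 2**32   # chance one hash meets a diff-1 share
nonce_range = 2**32            # nonces in one getwork ("about 4 billion")
already_tried = 2**31          # suppose half the range has been searched

expected_from_fresh = p_share_per_hash * nonce_range
expected_from_rest = p_share_per_hash * (nonce_range - already_tried)
assert expected_from_fresh == 1.0  # one share expected per full range
assert expected_from_rest == 0.5   # half the hashes left, half the EV
```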

PoolServerJ Home Page - High performance java mining pool engine

1LezqRatQz7MeNoCVziYwcdwtqeEbvrdAq - http://payb.tc/shads

Quote from: Matthew N. Wright
Stop wasting the internet.