FairUser
Sr. Member
Offline
Activity: 1344
Merit: 264
|
|
January 18, 2011, 01:35:10 AM Last edit: January 18, 2011, 02:23:41 AM by FairUser |
|
Quote:
Why must it "getwork" every 5 seconds?

Quote from: slush
I think technohog described it well. I'm thinking about a push-based protocol instead of the current getwork, where the server would send a job to a client only when the merkle hash changes (so only a few times per minute, in the worst case). But it will take a long time before it is stable enough to switch the pool over to it.

There could be a couple of problems with this.
1) If by "push" you mean "send to client", what happens to clients/miners who are behind a firewall and can't/don't want to set up port forwarding? Or those who use Tor?
2) My 3 miners all come from the same IP, so I'm doing a new getwork request every 3-6 seconds. Limiting that would suck because my miners would start to idle... and I want them running 24/7.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 18, 2011, 02:30:31 AM |
|
Quote from: FairUser
There could be a couple of problems with this.
1) If by "push" you mean "send to client", what happens to clients/miners who are behind a firewall and can't/don't want to set up port forwarding? Or those who use Tor?

There is existing stuff like Comet for this. I will probably prefer something TCP-based to avoid the HTTP overhead (which is significant in our case).

Quote from: FairUser
2) My 3 miners all come from the same IP, so I'm doing a new getwork request every 3-6 seconds. Limiting that would suck because my miners would start to idle... and I want them running 24/7.

I'm not talking about limiting requests per second. A push-based protocol simply doesn't need polling every 3-6 seconds. Workers will receive a new job immediately when the merkle hash changes...
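A minimal sketch of what such a push channel might look like on the worker side, assuming a line-delimited JSON protocol over a raw TCP connection; the host, port, framing, and field names are all hypothetical, since no wire format has been specified here.

Code:
import json
import socket

# Hypothetical push-based work channel: the pool keeps one TCP connection per
# worker open and writes a JSON "job" line whenever the merkle hash changes,
# instead of the worker polling getwork every few seconds.
POOL_HOST = "pool.example.org"   # placeholder, not the real pool address
POOL_PORT = 3333                 # placeholder port

def listen_for_jobs(on_new_job):
    """Block on the socket and hand every pushed job to the miner."""
    with socket.create_connection((POOL_HOST, POOL_PORT)) as sock:
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break                        # pool closed the connection
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                job = json.loads(line)       # e.g. {"data": ..., "target": ...}
                on_new_job(job)              # miner drops old work, starts on this

def handle_job(job):
    print("new work pushed by pool:", str(job.get("data", ""))[:32], "...")

if __name__ == "__main__":
    listen_for_jobs(handle_job)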
|
|
|
|
FairUser
Sr. Member
Offline
Activity: 1344
Merit: 264
|
|
January 18, 2011, 02:50:56 AM |
|
Quote from: FairUser
There could be a couple of problems with this. 1) If by "push" you mean "send to client", what happens to clients/miners who are behind a firewall and can't/don't want to set up port forwarding? Or those who use Tor?

Quote from: slush
There is existing stuff like Comet for this. I will probably prefer something TCP-based to avoid the HTTP overhead (which is significant in our case).

Quote from: FairUser
2) My 3 miners all come from the same IP, so I'm doing a new getwork request every 3-6 seconds. Limiting that would suck because my miners would start to idle... and I want them running 24/7.

Quote from: slush
I'm not talking about limiting requests per second. A push-based protocol simply doesn't need polling every 3-6 seconds. Workers will receive a new job immediately when the merkle hash changes...

Interesting. I had never read about Comet until now. Does that mean my clients get updates within seconds of a new block? What would happen to the "work" my miner is currently working on when a "push" is received? Would my miner stop working on its current getwork and start working on the new getwork right away? :O Would my current work be discarded or counted? Bigger still, what would the miner clients have to update to get this all working correctly? I've gotten 7 blocks so far, and anything that helps increase those chances would be cool.
|
|
|
|
BitterTea
|
|
January 18, 2011, 04:19:54 PM |
|
Quote from: FairUser
Does that mean my clients get updates within seconds of a new block? What would happen to the "work" my miner is currently working on when a "push" is received? Would my miner stop working on its current getwork and start working on the new getwork right away? :O Would my current work be discarded or counted? Bigger still, what would the miner clients have to update to get this all working correctly? I've gotten 7 blocks so far, and anything that helps increase those chances would be cool.

The notion of "wasted" work is not applicable to mining. You are constantly (300,000,000 times per second on my 5850) trying to find a valid hash that is between difficulty 1 and the current difficulty. Basically, you're playing the lottery many times per second, and you immediately know whether or not you won. When a new block is found, you are actually "wasting" work only until the next time you getwork, or have new work pushed to you.
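A rough sketch of that per-hash lottery check, assuming the usual getwork convention of double-SHA256 over an 80-byte header compared against a target. The header bytes and the difficulty figure below are made-up placeholders, not real work from the pool.

Code:
import hashlib

# Difficulty-1 target (the pool's "share" threshold) and an example of a harder
# network target. A hash "wins" a share if it is below the share target, and
# wins the block if it is also below the network target. There is no partial
# progress: every nonce attempt is an independent lottery ticket.
SHARE_TARGET   = 0x00000000FFFF0000000000000000000000000000000000000000000000000000
NETWORK_TARGET = SHARE_TARGET // 25000   # made-up difficulty of 25,000 for illustration

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def check(header_76: bytes, nonce: int) -> str:
    """Hash one nonce attempt and classify it immediately."""
    header = header_76 + nonce.to_bytes(4, "little")
    h = int.from_bytes(double_sha256(header)[::-1], "big")  # hash as a big integer
    if h <= NETWORK_TARGET:
        return "block"   # solves the real block
    if h <= SHARE_TARGET:
        return "share"   # counts toward the pool round
    return "miss"

fake_header = bytes(76)              # placeholder header, not real work
print(check(fake_header, 12345))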
|
|
|
|
FairUser
Sr. Member
Offline
Activity: 1344
Merit: 264
|
|
January 19, 2011, 03:22:05 AM |
|
Quote from: FairUser
Does that mean my clients get updates within seconds of a new block? What would happen to the "work" my miner is currently working on when a "push" is received? Would my miner stop working on its current getwork and start working on the new getwork right away? :O Would my current work be discarded or counted? Bigger still, what would the miner clients have to update to get this all working correctly? I've gotten 7 blocks so far, and anything that helps increase those chances would be cool.

Quote from: BitterTea
The notion of "wasted" work is not applicable to mining. You are constantly (300,000,000 times per second on my 5850) trying to find a valid hash that is between difficulty 1 and the current difficulty. Basically, you're playing the lottery many times per second, and you immediately know whether or not you won. When a new block is found, you are actually "wasting" work only until the next time you getwork, or have new work pushed to you.

Yeah, I get that, and I understand how mining works; I've been doing it since the server came online. I never said anything about "wasted" work either. Please read with care. I'm asking lots of questions because a week or so ago I saw the block count increase by two on my modified Linux client, but not on the server or my Windows client... and this lasted for 6 minutes. Kind of strange.

If a new and better method for distributing getwork requests can be achieved, that'd be great. The existing system seems fairly decent: I've only noticed a very few hash submissions arriving after the block had already rolled over to the next in line, and those didn't need to be submitted because the block count had increased by 1 (or by 2 on quickly found blocks). I'm just wondering if it could be improved.

BTW, my modified Linux clients can maintain 1024 active connections (yes, I modified the code... 452 connections at the moment) to other users' bitcoind processes, so when a new block is announced it should be propagated to most of the other clients within a second or two (fiber-optic internet connection... very low latency and lots of bandwidth). Hence why I'm still trying to figure out exactly how the mining pool server AND my Windows client didn't get updated when my Linux client did. Very, very strange.
|
|
|
|
bitcool
Legendary
Offline
Activity: 1441
Merit: 1000
|
|
January 19, 2011, 03:37:29 AM |
|
Quote from: FairUser
BTW, my modified Linux clients can maintain 1024 active connections (yes, I modified the code... 452 connections at the moment) to other users' bitcoind processes, so when a new block is announced it should be propagated to most of the other clients within a second or two (fiber-optic internet connection... very low latency and lots of bandwidth). Hence why I'm still trying to figure out exactly how the mining pool server AND my Windows client didn't get updated when my Linux client did. Very, very strange.

With a 6-minute difference, it's not only strange, it smells fishy.
|
|
|
|
FairUser
Sr. Member
Offline
Activity: 1344
Merit: 264
|
|
January 19, 2011, 04:42:12 AM Last edit: January 19, 2011, 05:00:32 AM by FairUser |
|
Quote from: FairUser
BTW, my modified Linux clients can maintain 1024 active connections (yes, I modified the code... 452 connections at the moment) to other users' bitcoind processes, so when a new block is announced it should be propagated to most of the other clients within a second or two (fiber-optic internet connection... very low latency and lots of bandwidth). Hence why I'm still trying to figure out exactly how the mining pool server AND my Windows client didn't get updated when my Linux client did. Very, very strange.

Quote from: bitcool
With a 6-minute difference, it's not only strange, it smells fishy.

6 minutes was the longest I've seen with my own eyes. There were a couple of days when it was about a minute or two. However, it doesn't appear to be happening now. I'll post next time I see it, with more stats/timestamps.
|
|
|
|
ElectricGoat
Newbie
Offline
Activity: 42
Merit: 0
|
|
January 19, 2011, 10:00:02 AM |
|
Quote from: ElectricGoat
What about creating a small proxy server between your server and all the workers for a single user? I have three workers, and they all call getwork every 5 seconds. What if I had a proxy that they would each call, but that would only relay a single getwork call to your server? It seems pretty easy to write, and could lower your server load by quite a bit.

Quote:
A single getwork? If you use a "normal" mining client, everybody would be working on exactly the same hash -- a complete waste of time. Of course, we can change each miner to start with a different nonce... or to start with a random nonce. That needs changing only a couple of lines of code.

Installing the proxy would be voluntary at first, so having to install a fixed miner as well wouldn't really be a problem. And then slush could prevent people with more than X workers from connecting, which would push people with a lot of workers to adopt the proxy.
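A rough sketch of the proxy idea under discussion, assuming the proxy fetches one getwork result and hands each local worker a disjoint starting nonce range. The field names, the splitting scheme, and the payload below are illustrative assumptions, not how slush's pool or any particular miner actually behaves.

Code:
# Sketch of splitting one getwork result between local workers by giving each a
# disjoint slice of the 32-bit nonce space. Assumes the workers' mining code is
# modified to honour a starting nonce and an upper bound; standard miners of the
# time always started at nonce 0, which is why a shared getwork would otherwise
# duplicate effort.
NONCE_SPACE = 2 ** 32

def split_work(getwork_result: dict, n_workers: int):
    """Yield (work, nonce_start, nonce_end) tuples, one per local worker."""
    step = NONCE_SPACE // n_workers
    for i in range(n_workers):
        start = i * step
        end = NONCE_SPACE if i == n_workers - 1 else (i + 1) * step
        yield getwork_result, start, end

# Example: one upstream getwork shared by three local miners.
work = {"data": "00" * 128, "target": "ff" * 32}   # placeholder getwork payload
for w, lo, hi in split_work(work, 3):
    print(f"worker scans nonces [{lo:#010x}, {hi:#010x})")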
|
|
|
|
BitterTea
|
|
January 19, 2011, 04:20:43 PM |
|
Quote from: FairUser
Yeah, I get that, and I understand how mining works; I've been doing it since the server came online. I never said anything about "wasted" work either. Please read with care.

OK, I see that you never mentioned wasted work, but you did talk about discarding work, and with my understanding of the way mining works, that makes no sense.

Quote from: FairUser
What would happen to the "work" my miner is currently working on when a "push" is received?

My understanding is that once a new block has been found, the data you're trying to hash is no longer valid.

Quote from: FairUser
Would my miner stop working on its current getwork and start working on the new getwork right away? :O

Based on my understanding, this seems like desirable behavior, so that you are working on stale data as little as possible.

Quote from: FairUser
Would my current work be discarded or counted?

What is there to be discarded or counted? You've either found a solution with a difficulty greater than one or you haven't. There's no progress, just chance. Am I misunderstanding something?
|
|
|
|
dooglus
Legendary
Offline
Activity: 2940
Merit: 1333
|
|
January 19, 2011, 08:23:27 PM |
|
Approx. cluster performance: 20349.286848 Mhash/s
|
|
|
|
tehlaser
Newbie
Offline
Activity: 3
Merit: 0
|
|
January 20, 2011, 03:21:15 PM |
|
Quote from: BitterTea
My understanding is that once a new block has been found, the data you're trying to hash is no longer valid.

Actually, if I'm not mistaken, the hash changes every time any of these things happens:
a) a new block is created
b) a new transaction is added to the block currently being worked on
c) the nonce field (the thing you increment to try to get a "winning" hash) overflows
d) a few seconds go by (the blocks have a relatively low-resolution timestamp)
For any of these except a), if you find a hash, the cooperative mining server could just go ahead and mint the old block. So what if a few transactions spill over into the next block, or the timestamp is a few seconds old? It's hardly worth throwing away 50 bitcoins for. And I think slush's miner already plays games with the extraNonce field to make sure the clients aren't all trying the same hashes. If a miner wanted to be particularly antisocial, it could simply never accept transactions at all (or only accept transactions with a fee attached). Such a miner would just mint "empty" blocks containing only the 50-bitcoin generation transaction.
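For reference, a rough sketch of the 80-byte block header that getwork hands a miner, assuming the standard field layout: items a) through d) above map onto the previous-block hash, the merkle root (transactions and the coinbase extraNonce feed into it), the timestamp, and the nonce. The concrete values below are placeholders.

Code:
import struct

# The 80-byte header a miner hashes, in the standard layout. Which field a
# given event touches:
#   a) new block found        -> prev_block changes (the old work is truly stale)
#   b) new transaction added  -> merkle_root changes
#   c) nonce overflows        -> extraNonce in the coinbase changes -> merkle_root changes
#   d) time passes            -> timestamp changes
def build_header(version, prev_block, merkle_root, timestamp, bits, nonce):
    return (
        struct.pack("<I", version)
        + prev_block[::-1]        # 32 bytes, byte-reversed relative to the usual hex display
        + merkle_root[::-1]       # 32 bytes
        + struct.pack("<I", timestamp)
        + struct.pack("<I", bits) # compact difficulty target
        + struct.pack("<I", nonce)
    )

header = build_header(1, bytes(32), bytes(32), 1295538000, 0x1B00F339, 0)
print(len(header), "bytes")       # 80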
|
|
|
|
tehlaser
Newbie
Offline
Activity: 3
Merit: 0
|
|
January 20, 2011, 03:27:35 PM |
|
Quote:
But what exactly is happening with all of the trailing "leftovers"? For example, if I have 27 shares in a round solved with 5016 total shares, that would be 0.0053827751196172248803827751196172 (this goes on and on...) of the block, translating to 0.26913875598086124401913875598086 coins earned by me for that block. So what happens?

First, I think it gets rounded off to nanocoins. After that, any amount of less than 0.01 BTC gets collected in your "account" with the cooperative miner until it accumulates to the point where it can be paid out. This is because transactions with outputs (either direct or "change") of less than 0.01 BTC will not be processed by the default code without at least a 0.01 BTC transaction fee. That rule is there to prevent someone from trying to DoS the network by making tons of tiny transactions. I don't think there is anything preventing someone from running a modified bitcoin client that processes nanocoin transactions for free, however.
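As a worked check of the numbers in the quoted example, computed in integer satoshis. The floor-to-8-decimals behaviour here is an assumption that matches slush's description further down in the thread, not the pool's actual code.

Code:
from fractions import Fraction

COIN = 10 ** 8                    # 1 BTC in satoshis (8 decimal places)
reward = 50 * COIN                # block reward at the time
my_shares, total_shares = 27, 5016

exact = Fraction(my_shares, total_shares) * reward   # exact rational payout in satoshis
floored = int(exact)                                  # floor to whole satoshis
print(floored / COIN)             # 0.26913875 BTC credited
print(float(exact - floored))     # the fractional satoshi that flooring drops (~0.598)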
|
|
|
|
ColdHardMetal
|
|
January 21, 2011, 02:29:30 AM |
|
Whoa, I just noticed that I found a block! No longer merely a leech on the system, lol.
|
|
|
|
fabianhjr
Sr. Member
Offline
Activity: 322
Merit: 250
|
|
January 21, 2011, 04:31:06 AM |
|
Hey, we just passed the 20 GHash/s barrier! We're growing exponentially!
Also, the whole network is now near 150 GHash/s. An attack on the network would now cost about half a million to a million of today's dollars.
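A rough sanity check of that cost estimate, using the ~300 Mhash/s per Radeon 5850 figure BitterTea mentions earlier in the thread. The card price and the overhead multiplier for hosts, power supplies, and electricity are assumptions.

Code:
network_hashrate = 150e9          # hashes per second, from the post
card_hashrate = 300e6             # ~300 Mhash/s per 5850 (BitterTea's figure above)
card_cost = 300                   # assumed USD per card in early 2011
overhead = 2.5                    # assumed multiplier for hosts, PSUs, power, etc.

cards_needed = network_hashrate / card_hashrate   # to match the whole network
print(cards_needed)                               # 500 cards
print(cards_needed * card_cost * overhead)        # ~375,000 USD, same rough order as the post's estimate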
|
|
|
|
bitcool
Legendary
Offline
Activity: 1441
Merit: 1000
|
|
January 21, 2011, 05:04:42 AM |
|
Quote from: fabianhjr
Hey, we just passed the 20 GHash/s barrier! We're growing exponentially!
Also, the whole network is now near 150 GHash/s. An attack on the network would now cost about half a million to a million of today's dollars.

Hi, is there a graph that shows the history of the bitcoin network's growth over the past few months, in terms of hashes/sec?
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 21, 2011, 09:59:34 AM |
|
Quote:
You are either using round and block in a vague manner

You are right. I have replaced some occurrences of 'block' with 'round' where that makes more sense. It will be on the site soon.
|
|
|
|
slush (OP)
Legendary
Offline
Activity: 1386
Merit: 1097
|
|
January 21, 2011, 10:05:24 AM |
|
Quote from: tehlaser
First, I think it gets rounded off to nanocoins.

I'm not rounding, but flooring, with 8 decimal places of precision, to avoid dividing more than 50 BTC between the workers. The remainder from the flooring (50 BTC minus the calculated rewards for all workers) is added to a random user who participated in the block (it is usually something like 0.000000x BTC).
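A minimal sketch of that scheme in integer satoshis, assuming a simple proportional (share-count) payout. It illustrates the flooring-plus-remainder idea slush describes; it is not his actual pool code, and the worker names and share counts are made up.

Code:
import random

COIN = 10 ** 8                          # 8 decimal places, i.e. 1 BTC in satoshis

def split_reward(shares_by_worker: dict, reward_btc: float = 50.0):
    """Floor each worker's proportional cut to 8 decimals, then give the leftover
    dust to one random participant so payouts never sum to more than the reward."""
    reward = int(round(reward_btc * COIN))
    total_shares = sum(shares_by_worker.values())
    payouts = {
        worker: reward * shares // total_shares     # integer floor division
        for worker, shares in shares_by_worker.items()
    }
    remainder = reward - sum(payouts.values())      # less than one satoshi per worker
    lucky = random.choice(list(shares_by_worker))   # a random participant gets the rest
    payouts[lucky] += remainder
    return {w: p / COIN for w, p in payouts.items()}

print(split_reward({"fairuser": 27, "bittertea": 1500, "others": 3489}))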
|
|
|
|
tehlaser
Newbie
Offline
Activity: 3
Merit: 0
|
|
January 22, 2011, 05:37:42 PM |
|
Quote from: slush
I'm not rounding, but flooring, with 8 decimal places of precision, to avoid dividing more than 50 BTC between the workers. The remainder from the flooring (50 BTC minus the calculated rewards for all workers) is added to a random user who participated in the block (it is usually something like 0.000000x BTC).

Ah. Do transaction fees get assigned to a random user too, or when you say "50 BTC" do you mean the 50 plus any fees?
|
|
|
|
BitLex
|
|
January 22, 2011, 06:05:25 PM |
|
Transaction fees tend toward zero anyway, don't they? I have generated a couple of hundred blocks since around May 2010 and have only received transaction fees (of around 0.02 BTC, as far as I remember) twice. That's not even close to covering anyone's server costs; it would be more than OK if slush just kept them for himself.
|
|
|
|
|
|