Use the -rescan command line argument (if you're using a shortcut in Windows, you can add it there). And you do know that your backups need to be relatively recent, and that an old backup might not have any of your coins, right?
|
|
|
In the end though, solo mining is most profitable due to 0% fees (you actually get paid transaction fees) and significantly less downtime.
How many transactions actually have collectable fees on them? I know this is supposed to supplant the reward, so I'm wondering if you are seeing any signs of this. You can investigate this at http://blockexplorer.com/.
|
|
|
Let me ask this: If this is going to work, can we say "Cheerio, Pool Lords!" and forget all about the mining pools, pool-hopping etc?
No, p2pool will probably be more suitable for large miners and normal pools than for at-home miners. But it will definitely help (e.g., allow mining for a small pool without much variance).

There can be many p2pools of larger or smaller size, so if "the main p2pool" gets too big for small miners to see reliable payouts, smaller ones can bud off. So yes, I can indeed foresee the p2pool framework overtaking centralized pools.

At the very least you'll need someone to run a full network node for you, since eventually that will probably be impossible at home. So he may as well act as a pool with monitoring features etc.
|
|
|
What do you mean by total block reward at the time? Isn't that unknown until you solve a block?
When you calculate hashes of a header, you know what transactions are to be included in it, so you know what the reward of the block would be if the hash you find is valid (i.e., satisfies the difficulty requirement). By extension, when you find a share, you know what the reward for it would have been had it been a valid block. So, if you're working on a header for which coinbase + tx fees is B, and you mine solo, your average payout is B/difficulty per share. This is also your average contribution if you mine for a pool, so in PPS you're supposed to get B/difficulty for that share (minus fees).

So essentially, you should get lower relative rewards just after the network finds a block, and higher rewards as the block gets older?

Yes, just as if you mined solo.

So basically, if you were intent on pool hopping, and had a really efficient system, you might mine at a pool that pays out tx fees right after the network finds a block, but if it has been a while (tx fees are higher), you would solo mine instead?
This is a likely scenario, yes, but the details will depend on how the pool calculates payments.
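In code, the expected PPS payout described above looks like this (the function name and the difficulty figure are illustrative, not from the thread):

```python
def pps_payout_per_share(coinbase_btc, tx_fees_btc, difficulty, pool_fee=0.0):
    """Expected payout for one share under Pay-Per-Share:
    B/difficulty, where B = coinbase + transaction fees in the
    current block template, minus any pool fee."""
    B = coinbase_btc + tx_fees_btc
    return (1.0 - pool_fee) * B / difficulty

# Just after a block is found the template has few fees, so shares pay less;
# as fees accumulate, the same share pays more.
early = pps_payout_per_share(50.0, 0.05, 1_690_906)
late = pps_payout_per_share(50.0, 0.60, 1_690_906)
```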
|
|
|
(24 x25 x9)^34 = about 7.97 x (10^126)?
If anything it should be (24+25+9)^34 = 58^34 ~ 9.05 * 10^59. But as Maged says the correct calculation is 2^160. And we're not ever going to run out of addresses.
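As a quick sanity check on those magnitudes (a throwaway snippet; Python's arbitrary-precision integers make this trivial):

```python
# 58 usable characters (24 + 25 + 9) in a ~34-character address string,
# versus the 2^160 space of the underlying hash.
combos = 58 ** 34
addresses = 2 ** 160

assert len(str(combos)) == 60      # ~9.05 * 10^59
assert len(str(addresses)) == 49   # ~1.46 * 10^48
```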
|
|
|
As an aside, I want to thank you for trying to make mining better, even though there is still a preponderance of proportional pools.

You're welcome. It's an uphill battle, but you must never lose hope.
|
|
|
Why is it on the bitcoin-otc wiki instead of the Bitcoin wiki?
|
|
|
I don't think your version will work. Because of free-riding, incentives for holders of bitcoin to monitor transactions are extremely weak. For example, large transactions are difficult to distinguish from large numbers of small transactions. The mechanism through which stake holders identify cheaters is unclear. Would identification be accurate? costly?
They don't need to identify anything; they just need to sign a block hash once a day. It's the duty of receivers of large transactions to wait until the block receives a proof-of-stake signature before considering the money safe. Because the cost of providing proof-of-stake is close to zero, it doesn't matter much that the incentive is weak. And it should be straightforward to add mechanisms that make it easier, and to deal with the contingency of a large number of coins not voting, anyway.
|
|
|
Short term fairest paying = PPS pools (Eligius/Arsbitcoin)
These are not PPS! You're misleading people by implying that they are. And it's funny you say these are good short-term, given that they only promise you'll get your due payout eventually (if the pool lives that long).
|
|
|
No one running PPS will want to hand out these fees, as they are taking tremendous risk already by running such a pool. And I would not mine at a pool where there was a chance that I would be underpaid when there are perfectly fair pools out there.
On the contrary, PPS is the only kind of pool which can easily pay transaction fees. For every share submitted you just pay B/difficulty, where B is the total block reward at the time. If the operator worries about his risk, he takes a pool fee. For any other scoring method, I don't know of a way to guarantee that the expected payout for every share is always exactly the solo average when the block reward is unknown a priori. This is something for which a solution will have to be found as it becomes more relevant.
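A sketch of how a PPS operator could credit fee-inclusive rewards share by share (names and numbers are illustrative):

```python
def credit_shares(shares, pool_fee=0.02):
    """shares: list of (B, difficulty) pairs captured at submission time,
    where B is the full block reward (coinbase + tx fees) of the header
    the miner was working on. Returns the operator's total PPS liability."""
    return sum((1.0 - pool_fee) * B / d for B, d in shares)

# B can differ per share as fees accumulate; each share is still paid
# exactly its own B/difficulty, so fees are passed through automatically.
owed = credit_shares([(50.05, 1_690_906), (50.35, 1_690_906)])
```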
|
|
|
How does giveaways increase the variance? I wish I had a better math/stats background.
Depends on the kind of giveaway. For the case of 0.5 BTC reward to the block solver, it means that 99% of your expected payout is according to the scoring method (which has, say, 0.1% of solo variance), and 1% of it is effectively solo with 100% of solo variance. So your total variance is 1.1% of solo variance which is a X11 increase.
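The arithmetic, weighted exactly as in the paragraph above:

```python
# The pooled 99% of the expected payout carries 0.1% of solo variance;
# the 1% block-finder giveaway is effectively solo mining (100% of solo
# variance). Weighting the two parts as in the post:
pooled_part = 0.99 * 0.001   # 99% of payout at 0.1% of solo variance
solo_part = 0.01 * 1.0       # 1% of payout at full solo variance

total = pooled_part + solo_part   # ~0.011, i.e. 1.1% of solo variance
increase = total / 0.001          # ~11x the variance of the plain scheme
```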
|
|
|
Just trying to bring the fee down to zero without destabilizing the variance too much.
Then just use f=-c/(1-c). All I said is that it might be problematic for you to have high variance and no average fee to show for it. If you're up to it then go for it. The giveaway ideas just cause the variance to be higher with no benefit. FWIW, having zero fee is one of the last things I care about in a pool. Things like stability, website features, and low variance are much more important, and I'd be happy to pay 1%-2% fee if I know it incentivizes the operator to make the pool as good as possible. Just look at Tycho, he made the largest pool (hence lowest variance) by, for example, taking fees and using them for Google ads (not that I support this particular practice...).
|
|
|
Maybe a 5 BTC jackpot randomly selecting from the top share producers or perhaps block finders for the past 10 blocks?
What are you trying to solve again?
|
|
|
*long confused post deleted*
Hey guys, I just want to make sure I understand what Meni is saying:
Shares to use from this round: s1 = min(sum of all shares this difficulty, N * current difficulty)

Shares to use from last round: s2 = max((1 - sum of all scores this round based on the last s1 shares) * N * old difficulty, 0)

All further calculations are based on only including the shares that you have submitted in the last s1 + s2 shares.

A miner's score: (your valid shares this round / this difficulty + your valid shares last round / that difficulty)

A miner's payout: 50 * score
Looks ok, if "round" means "difficulty adjustment period". Note that for very small pools or long windows, it is conceivable that the window will include shares from two adjustment periods back, which you'll also have to include in the calculation.
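Under the interpretation Meni confirms (one window spanning at most two adjustment periods, with scores normalized so the full window sums to 1), the window split can be sketched as:

```python
def pplns_window(shares_this_period, cur_difficulty, old_difficulty, N):
    """Split the pay window between the current and previous difficulty
    periods, following the formulas above."""
    target = N * cur_difficulty            # window size in current-period shares
    s1 = min(shares_this_period, target)
    covered = s1 / target                  # fraction of window already filled
    s2 = max((1 - covered) * N * old_difficulty, 0)
    return s1, s2

# Early in a new period, part of the window still reaches into the old one:
# pplns_window(500, 1000, 800, 1) -> (500, 400.0)
```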
|
|
|
Just put the 1% as bonus for the block finder.
No, this just makes the variance higher. The parameters to play with are f and c.
|
|
|
Meni - what do you think about bitp.it's ESMPPS? http://forum.bitcoin.org/index.php?topic=12181.msg378851#msg378851

Same fundamental problem as SMPPS. The balance will eventually be very negative, causing the collapse of the pool. Shuffling the payout scheme around to favor recent shares doesn't change that. It should be clear that this kind of method is a lose-lose situation:
- In PPS, you get 100% (minus fee) whether the pool is lucky or not.
- In score-based, you get >100% if the pool is lucky, <100% if not.
- In ?MPPS, you get 100% if the pool is lucky, <100% if not, but with a promise: "don't worry, it will get to 100% eventually". Except that "eventually" could be a long time in the future, and even that only assuming the pool won't collapse due to miners being fed up with the low payments, or shut down for any other reason.
|
|
|
Of course, the probability that the balance will be negative at some point during the year is higher, stay tuned...
Ok, according to my model, the probability is about 66%. This is reaffirmed by simulations I've run.

What is the probability if there is a "withhold good proofs-of-work" attack with 1% of the pool's hashing power? What if the attacker has 50% of the hashing power?

For the 1% case I get about 74% (I didn't triple-check it, so I'm not certain that's correct). For 50% it will be virtually 100%.

What is the magnitude of the effect?

Not too high, probably, but why take the risk when there is a purely advantageous alternative?
Still, there is the question of picking N. You could choose N/2, N, N*2, or perhaps a fixed value, say 1 million? I'm still thinking about how the choice of N will influence payouts for 24/7 miners, "casual" miners, and pool hoppers.

The way I see it, increasing N has the following effects:
1. No effect on the average payout per share, no matter the mining pattern.
2. Less variance, again for all patterns (perhaps felt the most by intermittent miners).
3. More time in the beginning of the pool where there aren't N last shares and you need to decide what to do with the leftovers (give them to the operator? Charity? Distribute them among miners proportionally?).
4. More time between mining a share, knowing what your due payment is, and receiving it.
5. If difficulty changes are not handled properly, more disruption caused by them.

How is PPLNS affected at or near the time the difficulty changes? Can the difficulty change benefit pool hoppers in some way?
I think the way to handle difficulty changes while remaining 100% hopping-proof is:
1. Express N in terms of blocks. For example, you can choose N to be 1 block, and it will remain 1 block even after a difficulty change.
2. Assign to each submitted share a score of 1/difficulty (the difficulty at the time of submitting the share).
3. When a block is found, distribute the reward among the last shares with a total score of N, proportionally to their scores.

Note also that for the geometric method I have studied this matter more thoroughly and can confirm that it remains correct in the face of difficulty changes.

Thanks Meni Rosenfeld!
You're welcome .
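A sketch of steps 2-3 in code (the data shapes and names are my own; this assumes shares are stored newest first):

```python
def pplns_payout(shares, N, block_reward):
    """shares: list of (miner, difficulty_at_submission), newest first.
    Each share scores 1/difficulty; walk back until the scores sum to N,
    then split the reward proportionally within that window."""
    window, total = [], 0.0
    for miner, difficulty in shares:
        if total >= N:
            break
        score = min(1.0 / difficulty, N - total)  # clip the oldest share
        window.append((miner, score))
        total += score
    return {m: sum(s for mm, s in window if mm == m) * block_reward / total
            for m, _ in window}

# Shares at difficulty 2 score 0.5 each, so with N = 1 block only the
# two newest fit the window and they split the reward evenly.
```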
|
|
|
Arg... Is there any way to stabilize the calculations without charging a fee?
c=0.01, f = -c/(1-c) ~ -0.0101. This will be 0% fee on average, but remember that this will increase your own variance (that is, you'll pay from your own pocket for some rounds, and receive payment for others, so it will average out to 0).
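A quick numeric check of that relation. I'm reading the geometric method's average fee as combining multiplicatively, 1 - (1-f)(1-c); treat that form as an assumption here:

```python
c = 0.01
f = -c / (1 - c)          # ~ -0.010101...

# With f chosen this way, (1-f)(1-c) = 1, so the average fee
# 1 - (1-f)(1-c) vanishes (up to floating-point rounding).
avg_fee = 1 - (1 - f) * (1 - c)
```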
|
|
|
About the 5 second rate... If a block is solved in an average of 10 minutes, at the rate of 5 seconds per share, that yields 120 shares per block. Counting 3*n -> 360 shares to be rewarded on average. Too low for those who are pooling to avoid uncertainty.

No. The pool finds 120 pool-shares in the time it takes the entire network to find 1 block. But if p2pool is 1% of the network, then the total difficulty of these shares is just 1% of the global difficulty, so 3*difficulty means 36000 shares. With the constant-ratio method, a p2pool that is too large means share intervals are too short. With the constant-interval method, a p2pool that is too large means less variance reduction. It's a tradeoff but it should balance out - if p2pool is large, it will cater mostly to larger miners/pools for which a variance reduction north of X360 is sufficient.
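The 36000 figure follows directly from the share rate and the pool's share of the network (parameter names are mine):

```python
def shares_in_window(block_interval_s=600, share_interval_s=5,
                     pool_fraction=0.01, window_blocks=3):
    """p2pool shares covered by a window of window_blocks * global
    difficulty, assuming a constant share interval. Each share carries
    pool_fraction / shares_per_block_time of a block's difficulty."""
    shares_per_block_time = block_interval_s / share_interval_s   # 120
    return round(window_blocks * shares_per_block_time / pool_fraction)

# With the thread's numbers: shares_in_window() -> 36000
# If p2pool were 100% of the network, the window would be only 360 shares.
```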
|
|
|
|