361  Bitcoin / Pools / auditable pool mining (Re: [8500 GH/s] Slush's Pool (mining.bitcoin.cz)) on: May 05, 2013, 01:06:38 PM
The pool has been hacked. Fortunately I noticed it fast enough, so I made database snapshot seconds before attackers overtake the database machine. I lost some amount of bitcoins, but I'll be able to recover it from my pocket.

Don't you keep the slush reward address private key offline & airgapped?

Then payouts can be batch-calculated from a USB key transfer of a share work tally db report, and a USB key transfer of the payout transactions to the pool miner addresses, say once per week or whatever.

Then the worst that the attacker can do is delete some of the share work tally db records, or change the reward addresses in the db to themselves.  And if you notice an attack, even the miners could resubmit the shares.

And in fact the pooled miner reward addresses should be included in an additional merkle tree in the coinbase itself, and the pooled miners should be presented a verifiable log2 path showing their presence and number of contributions within the coinbase, so that if they see their contribution is missing, whether due to pool skimming or pool share work db compromise, they can switch to another pool. In this way reward cannot be reassigned without redoing the work, and other than the pre-mining attack, you could basically operate with zero trust (give out the ssh root private key to the server without loss of security).

The proof of contribution merkle tree could even be published to the full network, and included as part of the reward verification; then the pool wouldn't need to be trusted at all in terms of provable no-skimming. Of course the pool is still trusted with validation (by those pool miners who don't build their own blocks nor independently validate the pool-constructed blocks).
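
A minimal sketch of what I mean (hypothetical helper names; assumes a double-SHA-256 merkle tree over (address, share count) leaves, with the root committed in the coinbase):

import hashlib

def h(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node on odd levels
        level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    # the log2(#miners) proof the pool hands each miner
    path, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])        # sibling hash at this level
        level = [h(level[i] + level[i+1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    # miner checks his (address, shares) leaf really is under the coinbase root
    node = leaf
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

leaves = [h(addr.encode() + shares.to_bytes(8, 'big'))
          for addr, shares in [("miner1", 420), ("miner2", 97),
                               ("miner3", 1003), ("miner4", 12)]]
root = merkle_root(leaves)                   # this is what goes in the coinbase
assert verify(leaves[2], 2, merkle_path(leaves, 2), root)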

Adam
362  Bitcoin / Development & Technical Discussion / pooled mining luck theft attack? on: May 05, 2013, 01:08:53 AM
Someone with better knowledge of the pooled mining code could check my potential attack idea.

The way bitcoin tweaks hashcash (I guess bitcoin-hashcash?) the challenges are potentially not random enough any more, because the reward-collecting public key is overloaded to serve the function of the hashcash self-chosen challenge. And up to this point I presume this is considered not an attack, because all you do by mining on someone else's address is mine coins for them.

You see artefacts of this in the way that some of the pool protocols share out work: as the reward public key is not self-chosen (being chosen by the pool, not the miner), there is a non-negligible risk that pool miners would statistically redo work, or be starved of work.

The pooled miners seem to be short of search space: great lengths are gone to to stretch what work space there is beyond the 32-bit counter, for example increasing the 32-bit time field somewhat (it can't be increased too far or the network rejects the block), and there are concerns about flooding the pool with too many small requests. Obviously the pool needs to send the client an updated work string, as it will include new transaction fees, but the mining client should be able to choose its own challenge.

I am not sure to what extent the respective mining protocols are in relative use currently, but DoS pre-mining could actually be a mining security problem in the case of bitcoin pooled mining, depending on some details.  It seems that in some cases a bigger extranonce is used to increase search space, eg as noted here https://en.bitcoin.it/wiki/Transactions 
Quote
The extranonce contributes to enlarge the domain for the proof of work function. Miners can easily modify nonce (4byte), timestamp and extranonce (2 to 100bytes).
And I saw the Stratum mining proposal http://mining.bitcoin.cz/stratum-mining does use a second variable-sized extraNonce2.

But if the challenges handed out are unencrypted (and sniffable), too small, or predictable, an attack could arise based on the attacker pre-mining other pool miners' shares and assigning the work to himself (which is a separate question).

Say an attacker has a large amount of mining power, eg enough to slightly exceed a small pool that hands out challenges that are unencrypted, too small, or predictable. Now as the work done is known to the attacker, he can increase his pooled reward, because the work of the other miners on the pool could be negated if done first by the attacker. (Whether that would work depends on whether the pool checks if a challenge was submitted by the person it was issued to; as some pools are account-less it seems plausible that this may not always be the case.) Presumably a pool won't accept the same pool share solution twice. Beyond making the other pool miners unexpectedly unlucky, this helps the attacker (and other direct miners and pooled miners using different pools) because if he adds say 10% to the network, he simultaneously removes 10% from the network, so over time the difficulty will decrease by 10% from what it would have been had the 10% attacker played fair.

If there are pools that are giving out predictable work, and allowing miners to claim reward for solving other users' work shares, the same attack can scale up all the way to the entire proportion of pools that are vulnerable, provided the attacker has the CPU power to match. The attacker could not actually tamper with transactions because the pool is validating them.


Hashcash was designed to defensively avoid this risk by the user including a big enough self-chosen challenge to avoid accidental mining collision. The hashcash paper recommends 128 bits for general use. The hashcash library implementation uses 96 bits for email (16 base64 chars). In bitcoin it probably should be defensively changed also, even if the mining pools do enough checks to avoid the attack above; if nothing else it would be more network efficient for pooled miners to choose their own challenges, and leave them less open to work starvation. There should be a 128-bit challenge field (possibly even 256 bits, to be defensively conservative given the scale and to balance other defensive features like double SHA-256). In bitcoin I suppose this could be done by increasing the size of extraNonce to 256 bits and having the miner self-choose a random extraNonce. (Hashcash defines challenge and counter separately, which is slightly preferable I consider, because otherwise your challenge security margin is eroded: as CPUs get faster the number of possible non-overlapping search spaces is reduced - that is basically what happened to bitcoin in the wiki pages about pooled miners scavenging extra search space by changing time.)
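
To sketch the defensive change I mean (hypothetical serialization, not any existing pool protocol): the miner draws his own random 128-bit challenge and hashes it into the work, so a sniffing attacker cannot pre-mine his search space, and honest miners cannot collide:

import hashlib, secrets

def mine_share(pool_work: bytes, target: int):
    # the challenge is self-chosen, not pool-assigned: a sniffer who sees
    # pool_work still cannot predict or pre-mine this miner's search space,
    # and collision between honest miners is ~2^-64 by the birthday bound
    challenge = secrets.token_bytes(16)      # 128 bits; use 32 bytes for 256
    for nonce in range(2**32):
        digest = hashlib.sha256(hashlib.sha256(
            pool_work + challenge + nonce.to_bytes(4, 'big')).digest()).digest()
        if int.from_bytes(digest, 'big') < target:
            return challenge, nonce, digest
    return None                              # redraw the challenge and continue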

Adam
363  Bitcoin / Development & Technical Discussion / Re: adopting block chain orphans on: May 04, 2013, 10:59:34 PM
But anyway some more thoughts: because it's no longer a first past the post race

Mining is [...] NOT a "first past the post race". There is no upper bound on the number of blocks solved per unit time. When a new block is found on the network you simply switch to extending the new chain.

While the effect is the same, I disagree: the race to claim transaction fees and reward is a first past the post race, because orphan blocks do not get to keep any of the fees nor reward (in the single winning chain approach). The fact that miners will start a new race as soon as they learn that a past race is won doesn't mean they are not engaging in a first past the post race (it just means they enjoy racing and immediately try the next race ;)

The reason bitcoin mining is fair, despite the first past the post race, is that hashcash based proof-of-work is power-fair.

Hashcash proof of work is power-fair because, as you alluded to, it has no memory (it's like a coin toss, with no progress within the work, and all sequences of choices of nonces taking the same amount of work). Most of the other proof of work functions do not have this power-fairness property (eg client-puzzles, amortizable hashcash, time-lock, Dwork-Naor pricing functions (maybe)). Scrypt is power-fair I think. If scrypt turned out not to have the power-fair property it's a security bug, and people with fast processors will be able to get a disproportionate advantage.

However the need for power-fairness in the proof-of-work function is just because of the first past the post race choice.  For other cooperative race types it is not needed.

A way to see why power-fairness is needed in first past the post (and that bitcoin is first past the post) is to imagine the bitcoin proof of work was tweaked to use a simple non-power-fair proof like amortizable hashcash, with eg 256 smaller proofs of work with the same expected 10 mins total... 2.34 seconds per challenge. (Amortizable here just means the challenge is to collect 256 sub-challenges.) This achieves 16x lower standard deviation, which is potentially desirable because it is achieved without incurring network traffic, neither on the main chain nor on a p2pool chain. With this approach you can see there is work-progress, so it is no longer power-fair. Ie a fast node is going to win races disproportionately, even accounting for its power.
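
A toy simulation of that thought experiment (illustrative only, not bitcoin code): with one memoryless proof a 2x-faster miner wins 2/3 of the races, exactly its power share; with 256 amortized sub-proofs the completion times concentrate and it wins essentially every race:

import random

def race(fast_power, slow_power, subproofs, trials=20000):
    fast_wins = 0
    for _ in range(trials):
        # completion time = sum of exponential solve times per sub-proof;
        # with subproofs=1 this is memoryless (power-fair), with 256 the
        # sum concentrates around its mean and work-progress matters
        t_fast = sum(random.expovariate(fast_power) for _ in range(subproofs))
        t_slow = sum(random.expovariate(slow_power) for _ in range(subproofs))
        fast_wins += t_fast < t_slow
    return fast_wins / trials

print(race(2, 1, 1))     # ~0.667: wins in proportion to power
print(race(2, 1, 256))   # ~1.000: the fast node wins nearly always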

I made a racing car analogy for reduced variance in https://bitcointalk.org/index.php?topic=182252.msg1911750#msg1911750

Quote from: adam3us
A loose analogy: imagine currently bitcoin miners are race cars. Some are fast (Ferrari) and some are slow (Citroen 2CV) but they are all very very unreliable. So who wins the race? The Ferrari mostly, but the 2CV still has a fair chance relative to its speed because the Ferrari is really likely to break down. With low variance coins, you have well-maintained cars, and they very rarely break down. So the Ferrari wins almost always. Now if you have a line of 20 cars of varying speeds, well maintained (low variance), the first 5 that are going to get past the post are almost certainly going to be the 5 fastest. No one else stands a chance, hardly.

You make some more points:

Controlling the time between blocks is also important for minimizing bandwidth and computation, especially for SPV nodes.  Amiller had made a nice suggestion regarding merging orphans for the purpose of making the block time dynamically adapt to the diameter, though that doesn't itself address keeping the network usable by SPV nodes.

FWIW, "P2pool" does solve the variance nicely— including allowing miners variable difficulty work (though confined to not result in shares faster than six per minute, to control the cost and prevent convergence problems)— without burdening the perpetually stored Bitcoin network with frequent tiny blocks.

Your points about increasing number of packets and slight bandwidth increase are valid downsides.
(I think the bandwidth increase would not have to be too large, as nodes could refer to other variable-cost blocks by block hash; they only need to add any additional transactions they have seen that are missing.)

I think I need to re-read p2pool a 2nd time to comment on the other bit.

Quote
If the time between blocks becomes small relative to diameter then the network will start having convergence failures and large reorgs (even absent an attacker).

Btw that sounds like a separate argument against alt-coins that shorten the block time interval.

Adam
364  Bitcoin / Development & Technical Discussion / Re: adopting block chain orphans on: May 04, 2013, 12:32:11 AM
With this approach also faster, smaller transaction blocks could perhaps be used, even blocks with variable difficulty, opening possibility for direct pool free mining, and combating mining variance.

Reward is claimed incrementally in proportion to the difficulty of the block relative to the network difficulty.  When a block is used up no more reward can be claimed.  A small proportion of reward may need to be carried forward to incentivize later blocks to include the block in their predecessor block list.
...

I'm sure someone's going to find some issue with the above. But anyway some more thoughts: because it's no longer a first past the post race (anyone can post a mined block of any difficulty at any time), the elusive variance reducing techniques become safely possible (I think). Ie you can amortize your mining offline, and post it when you're ready to cash it in. Subject to sensible message sizes (you just need multiple nonces, one per challenge) you could reduce variance until it's quite smooth. It's clearly safe because whether you post them immediately, or post them in a batch later for a combined reward, it's the same thing - just batching network packets. Your only risk is to post them when the reward is all used up.

Maybe there's some way to adapt reward to be more continuous and adapted to ongoing unpooled mining.

Also in the interests of network traffic (re parent post) you probably don't want to retransmit the transactions already published in other blocks, so you could refer to them by block hash, and add more transactions. In that way a block could even be quite small (adding no transactions) and yet claim high or even most of the reward.

Adam
365  Bitcoin / Development & Technical Discussion / adopting block chain orphans on: May 03, 2013, 11:54:28 PM
It seems to me that discarding orphan blocks outright loses their potential utility in hardening the byzantine voting on the transaction log. Ie work went into them, but currently they are given no weighting in the chain length (AFAIK). Therefore to the extent they happen they weaken security, because a 50% attacker won't accidentally create self-orphans on his hostile private chain. Also maybe a 50% attacker will try to disrupt the network to induce network splits that increase chances of orphans (ie not slowing the network down, nor over-powering it, just fragmenting its power so that he ends up with as much power as the largest fragment, to foist a 6-length chain over a sudden flurry of fragmented 5-length chains from a significantly net-split network as he drops the net-split attack).

Therefore, for both reasons, how about this as an enhancement to make 50% attacks harder, and to make the network less vulnerable to net splits: blocks have a list of predecessor block hashes, rather than the current single predecessor. A node on a slow network may reveal its block late (or equivalently may have just recovered from a net split attack), but it can be included in the next round. To validate a block for inclusion into the predecessor list of a block, all that is required is that the node agrees that all included blocks pass validation (no double spending etc) and don't contain mutually conflicting transactions. Usual arbitration for two conflicting blocks as now (though potentially augmented with higher difficulty block wins - see variable difficulty below).

With this approach faster, smaller transaction blocks could perhaps also be used, even blocks with variable difficulty, opening the possibility for direct pool-free mining, and combating mining variance.

Reward is claimed incrementally in proportion to the difficulty of the block relative to the network difficulty.  When a block is used up no more reward can be claimed.  A small proportion of reward may need to be carried forward to incentivize later blocks to include the block in their predecessor block list.
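
As a toy sketch of the incremental claim (all parameters hypothetical, eg the carry-forward fraction and the 25-coin pot):

class RewardPot:
    CARRY = 0.05    # hypothetical fraction carried forward to incentivize
                    # later blocks to include this block as a predecessor

    def __init__(self, total=25.0):
        self.total = total
        self.remaining = total

    def claim(self, block_diff, network_diff):
        # a block of 1/8 the network difficulty claims ~1/8 of the pot;
        # when the pot is used up no more reward can be claimed
        share = (block_diff / network_diff) * self.total * (1 - self.CARRY)
        paid = min(share, self.remaining)
        self.remaining -= paid
        return paid

pot = RewardPot()
print(pot.claim(2**20, 2**23))   # 1/8-difficulty block: ~2.97 coins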

(This idea for discussion is vaguely related to my post about the 2002 amortizable hashcash paper - you could view the list of blocks as the same as the amortization list.)

Some general concerns: more block packets creates a network-scale-limiting traffic increase? (Are blocks getting too big anyway?) Is the modified incremental block reward too complicated? Is there a way to simplify it? Eg place limits on block sizes, and/or a transaction fee maximum per block? Maybe there is an alt-coin that already experimented in this direction? Slightly related to p2pool (p2p pool implementation) but I think different in objective.

Adam
366  Economy / Service Discussion / Re: CoinLab suing MtGox for $75 million? on: May 03, 2013, 03:00:10 PM
There's a link to the court documents in the Gawker article.
http://www.scribd.com/doc/139160091/Coinlab-v-Mt-Gox
Trouble with that document is it's hard to verify its authenticity.
Do you think it is credible to cite a $75MM loss on a $500k investment?   hmmmm.
what a joke these people are.   need some grown ups.

I am not sure, but if you look at the complaint on one of the websites, it says that the contract itself included a $50M penalty clause for breach, which MtGox had their lawyers review and elected to sign. If they willfully breached the contract in those circumstances the damage seems more than a bit self-inflicted, no? (I see someone posted a link to the now-public contract, so presumably that $50M and the terms around it can be verified.)

And it's not like MtGox have seemingly demonstrated a lot of competence in the internet-facing aspects that we can see (various HTTP response codes indicating overloaded systems from the web server, massive lag in processing AML, bewildering array of odd-ball indirect payment methods).

When they finally processed my AML after several weeks, they declared it "temporarily rejected", claiming it was scanned below 300dpi. Not sure about that - it looked ok to me in the previewer, and was the default scanner setting, but worse, now I have to rescan (paying careful attention to dpi advanced options!) and send it back, and that'll probably take another few weeks.

Oh yeah, and my fiat might just be jammed up now. Maybe that fact is propping up the price, as someone else commented - people taking out via BTC as better than having fiat jammed in mtgox for who knows how long. Or people potentially trading jammed fiat for potentially less tradeable BTC (in both directions). That's not exactly a great market environment.

I wonder actually if the fiat deposits (and even bitcoins) are firewalled from mtgox liability if they don't settle or lose, in terms of banking-style separation of client money. What I put in there to buy my first BTC* is not going to bankrupt me but it's still a nuisance. (* Except for $6 a redditor tipped me a few days back.)

Adam
367  Economy / Service Discussion / Re: CoinLab suing MtGox for $75 million? on: May 03, 2013, 02:42:41 PM
When they sue, why not sue them for bitcoins?

+ 1

No no, that's not how it works. A smart contract is written whose execution is evaluated by all bitcoin miners, and an arbitrator adjudicates and signs the coin multisig releasing assets to the wronged party. Smart-contracts all the way :)

Adam
368  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 02, 2013, 05:37:19 PM
the difficulty precision is not that critical anyway.

btw what I was thinking there is that the difficulty precision ideally needs to be a few bits more than log2( interval = 2016 ) ≈ 11. So 8 bits might be a bit low; 16 bits (+ 8-bit exponent) would be ample.

If that was not the case (eg 8 bits of precision), consider a pool holding 50%: when it can see that difficulty is getting close to rolling over another lsb digit of difficulty, it may back off (stop mining) to prolong the time to the block being found, preventing the roll-over. That makes the difficulty fraction f easier, 1/256 < f < 1/128 for the next two weeks, or specifically f = 1/m for difficulty mantissa m, 128 < m < 256. By holding off it loses 1 block, and it stands to make reward r = 2016/m * 50% for that action. (Or r = 1008/(m*b) for holding off for b blocks, which makes sense so long as r > b. With 50% that remains the case for b between 1 and somewhere in 3 < b < 7, depending on the mantissa.) Now of course as the 50% pool mines faster than the difficulty predicted, the 2-week period goes past in slightly under 2 weeks, but it is still 2016 coins by definition, and the pool actually gets slightly more than 50% of the coins in addition, because now it is working at full power while it slacked off briefly before.

So minimally 8 + log2(7) bits ≈ 11 bits kills the weak attack. And the bitcoin minimum is 16 bits. Coincidence? Probably not.

Perhaps something for a slow altcoin to think about (everyone seems to go for faster blocks for some reason). Though the unfair advantage gain is slim even then.

Adam
369  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 02, 2013, 03:26:35 PM

I was thinking about how the block treats it.

In summary

OK yes, that's what I was referring to with the "human huffman encoder" comment, so I see what you mean. I just meant that there would be (my guess) no floating point in the sense of calls to CPU floating point instructions, and that seems to be the case :)

It's simple as such things go (I've seen worse) and just a way of encoding between 16 and 23 bits of precision plus an 8-bit exponent. (Kind of odd that the precision depends on how close the difficulty is to an 8-bit boundary, but there you go.) It could have made better use of the 8-bit exponent, eg by treating it as bits instead of bytes, as the number is anyway definitionally a 256-bit number.

If optimized it could probably have been a 16-bit (8-bit exponent + 8-bit mantissa) encoding if the bits in the exponent were used as bits rather than bytes. Or certainly a 24-bit. But maybe that's my turn to over-optimize - the difficulty precision is not that critical anyway.
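
For reference, a sketch of how the encoding being discussed decodes (the example nbits value is chosen to reproduce the 1AA3D target from the neighboring posts):

def decode_bits(nbits):
    # high byte: base-256 exponent (a byte count); low 3 bytes: mantissa.
    # the mantissa is kept below 0x800000 (it is treated as signed), which
    # is why the precision wobbles between 16 and 23 bits
    exponent = nbits >> 24
    mantissa = nbits & 0x007fffff
    return mantissa * 256 ** (exponent - 3)

print('%064x' % decode_bits(0x1a01aa3d))
# 00000000000001aa3d0000000000000000000000000000000000000000000000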
370  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 02, 2013, 01:01:25 PM
It actually uses a floating point system.

It depends how you treat it - you could consider it a 256-bit big int also. However I was thinking of it as fixed point because I found the fractional comparison easier to think about. (Fixed point means only the mantissa is used, with the exponent fixed to 0, or at least a fixed value. People used to abuse integer CPU instructions to do fixed point arithmetic in the days before floating point units were included in CPUs, or when integer arithmetic was much faster than the FP coprocessor.)

Quote
Btw, when doing the difficulty update, do they use the floating point, or the full difficulty?  Is the value in the block the actual difficulty or just a summary?

I don't know, but let me guess based on the difficulty algorithm: my guess is it uses integer math only.

Adam
371  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 02, 2013, 12:46:13 PM
The difficulty is actually in bits, the field in the block that contains it is called bits, which is decoded into the target.

Here's the current target, a hash has to be smaller than this number:
00000000000001AA3D0000000000000000000000000000000000000000000000

Isn't it interesting that the hextarget isn't the same as what I calculated; maybe not so simple as deepceleron declares ;) Starting from difficulty 10,076,293 http://bitcoindifficulty.com/ I get .00000000000001AA3EA9EBE... so there is a .0015% discrepancy. Clearly the target is the correct value, as it's the used value. Looking around, it seems that difficulty is actually the multiple of hardness relative to the minimum difficulty, which is actually not 32 0s (expected 2^32 tries) but rather 0.FFFFh/2^32 (ie x < .00000000FFFF0000h), expected tries 2^32/0.FFFFh = 4295032833 (100010001h instead of 100000000h).

So converting from target to difficulty, and difficulty to bits, is even messier:


scale=80
define pow(x,p) { return e(p*l(x)); }
define log(b,x) { return l(x)/l(b); }
define log2(x) { return log(2,x); }

# http://blockexplorer.com/q/hextarget
ibase=16
target=1AA3D0000000000000000000000000000000000000000000000/2^100  # target as a fraction of 2^256 (2^100h = 2^256)
mindiff=FFFF/2^10  # 0.FFFFh, the source of the .0015% discrepancy (2^10h = 2^16)
ibase=A

tries=2^32/mindiff  # expected hashes at difficulty 1 = 4295032833

diff=1/target/tries  # difficulty as reported (10076293.xx)
bits=log2(diff*tries)  # log2 of expected hashes = 55.26
cbits=-log2(target)  # same thing, computed directly from the target

gdiff=diff*4/mindiff  # expected work in gigahashes (2^32 hashes = 4 GH)
nhash=70.48*1024  # network hash rate in GH/s (70.48 TH/s)
time=gdiff/nhash  # expected seconds per block, ~558


I think my unnecessary-complexity issue with this page https://en.bitcoin.it/wiki/Difficulty (and the measure chosen for difficulty) is not so much that it is log2 scale or not. I can handle that. But that it is not even the expected number of hashes (or gigahashes etc). To a first approximation it is number of hashes / 2^32. Now 2^32 is not a nice number in both bases (log2 scale and log10 scale); 2^30 is a nice number. That would be a nicer way to report difficulty IMO, as that's a GH, and you'll notice ALL of the miners are reporting power in GH or MH, and the network hash rate is in TH. (Not difficulty chunks, which are the former divided by 2^32.) But on top of that, for proper accuracy it is not even hashes/2^32 but difficulty = hashes * 0.FFFFh / 2^32. And that is harder to test at discrete difficulties (whole numbers). Which is why pool shares are not an exact multiple of difficulty but rather trailing FFF difficulty, to counteract this issue.

You know I once knew a crypto math/hacker guy who used to think human huffman encoding was fun. Satoshi? Hmmm :)

Quote
>>> math.log(2**256/int('00000000000001AA3D0000000000000000000000000000000000000000000000',16),2)
55.26448364017038

What  "bit" difficulty would be 10% harder?

Well that wasn't exactly my point (my point was that you can get a ball-park approximate order of magnitude with your eyes and mental arithmetic with bits). But about your question: log2(1.1) = .1375 (call it .14, remember that), so 10% harder is 55.26 + .14 = 55.40.

Quote
Use a base difficulty, where 1 = 1 block find per ~4295032833.0 hashes on average, and higher difficulties are multipliers of that.

I don't find 2^32/.FFFFh a particularly meaningful number. I know the discrepancy is small, but why even bother... just simplify and use trailing FFF difficulty.

Sorry but simplicity does matter.

Anyway, untangling and ignoring the .0015% discrepancy, you could convert difficulty into approx gigahashes by multiplying by 4: difficulty * 4 = 40305172 GH. And network hash rate = 70.48 TH/s, so expected time = 40305172/(70.48*1024) = 558s. Close enough - the network hash rate has grown since that difficulty was calculated. (Or in log2 scale: difficulty is 55.26 bits and network hash rate is 46.14 bits/sec, so > 2^9 seconds > 500 seconds.)
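
The same untangling as a quick sketch (numbers from this post):

from math import log2

def expected_seconds(difficulty, network_ghps, exact=True):
    # expected hashes = difficulty * 2^32 / 0.FFFFh; the 0.FFFFh factor is
    # the ~.0015% discrepancy, ignorable for mental arithmetic
    fudge = (2**16 - 1) / 2**16 if exact else 1.0
    hashes = difficulty * 2**32 / fudge
    return hashes / (network_ghps * 2**30)

diff = 10076293
print(log2(diff) + 32)                        # ~55.26 bits
print(expected_seconds(diff, 70.48 * 1024))   # ~558 s at 70.48 TH/s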

Adam
372  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 02, 2013, 12:08:12 AM
Why is bitcoin difficulty not expressed in bits?

I don't have any problem with either notation, but there are pros and cons. The advantage of expressing the difficulty as an exponent is that the number is smaller and more digestible. The disadvantages are that it can't be represented exactly, and people generally will have a difficult time relating to the values.

OK here's another version for you.  What does difficulty even mean specifically? 

Read this and tell me if you can figure it out: https://en.bitcoin.it/wiki/Difficulty

I tried, and it was pretty confusing. Snippets of C code, definitions in terms of other undefined things. Mixing in 600 seconds in places and not in others. Does difficulty include the 2^32 or not? Adjusted for 600 seconds or not? That whole page is extra confusing.

Whereas I know what 55 bits means, as with hashes and ciphers: it means you had to try 2^55 times to get this (on average). And I can convert: bits = log2(diff) + 32, not so hard on any scientific calculator.

And therefore if I can search 1 GH/sec then I know that is ~30 bits/sec, so I'm going to need about 2^25 seconds.
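
Or the same mental arithmetic in code:

from math import log2
bits = log2(10076293) + 32              # ~55.26 bits of difficulty
rate_bits = log2(1e9)                   # 1 GH/s is ~29.9 "bits per second"
print(2 ** (bits - rate_bits) / 86400)  # ~2^25.4 seconds = ~500 days expected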

I think part of the problem is that difficulty is actually divided by 2^32. So it's not really the number of expected tries. And 2^32 isn't 1G, it's 4G.

Adam
373  Bitcoin / Development & Technical Discussion / Re: why not measure difficulty in bits? on: May 01, 2013, 11:29:44 PM
It would mean that the steps in difficulty would have to be factors of 2.

If the current period takes 20 days, you would have to adjust down to 10 days (or leave it at 20).

No, I mean fractional bits. Hashcash worked on whole bits only, and there it was possible only to double or halve the work, like you said. Bitcoin needed more fine-grained control, and so extended hashcash with fractional bits: the challenge is not technically to find a hash with 55 leading 0 bits, but to find a hash less than 1/2^55.26, where the number of bits is fractional and the hash is viewed as a 256-bit-precision fixed point fraction. (Those are the same thing when the bits are whole.)

So 55.26 bits means numbers < 1/2^55.26; viewed in hex that is any hash < .00000000000001AA3E...

(I use bc -l: bits=l(10076293)/l(2)+32; obase=16; e(-bits*l(2)) - note bc's ^ operator truncates fractional exponents, hence e().)

Anyway my point is that for human understanding you can mostly ignore or estimate the fractional bits and be within 10% of right. And it's easy and meaningful (in terms of cipher & hash security, which are measured in bits) and visually checkable: you can see that it's around 55 bits = 13 or 14 leading 0s. You can even approximate the fractional bits as I was saying.

For comparison, EFF's 1998 $250,000 DES crack machine broke a 56-bit DES key in 112 hours expected. The bitcoin network does something approximately analogous every 20 mins :)

Now if we could persuade the EFF to make another miner, but for bitcoin, they could fund their own donations. I did suggest it to the people on the DES crack team...

Adam
374  Other / Politics & Society / iceland switches to bitcoin (Press Releases We’d Like to See) on: May 01, 2013, 10:34:54 PM
http://www.platformonomics.com/2013/05/press-releases-wed-like-to-see-iceland-embraces-the-bitcoin-economy/

"While some may put their confidence in the resolve of policymakers, we put our confidence in the cryptographic assurance arising from the second preimage resistance of the SHA-256 hashing algorithm." Wink

Adam
375  Bitcoin / Development & Technical Discussion / why not measure difficulty in bits? on: May 01, 2013, 10:16:54 PM
Why is bitcoin difficulty not expressed in bits?

With hashcash I always used whole bits, eg 20 bits. I think bitcoin is currently at 55.26, as bitcoin mining extends hashcash to allow fractional bits (rather than finding k leading 0 bits, you find a hash < 1/2^k where k can be fractional - that is the same thing when k is a whole number).

You can convert difficulty into bits with log2(difficulty)+32.  (log(difficulty)/log(2)+32).

(+32 because 2^32 is the original or minimal difficulty in bitcoin and is excluded from the difficulty number).

I find this page is unnecessarily complex for a very simple actual problem: https://en.bitcoin.it/wiki/Difficulty
(Current difficulty 10,076,293 from http://bitcoindifficulty.com/).

By comparison, bits are very easy to read, even by hand. If one looks at the hash output in hex, just multiply the leading 0s by 4, and for the next nibble figure out if it is >7 = +0 bits, >3 = +1 bits, >1 = +2 bits, and =1 = +3 bits (and obviously 0 would be another leading 0). QED: trivial, human-comprehensible difficulty that can be hand-checked. That was part of the design aim for hashcash: to simplify the computation, programming and human verification.

And when you see a bitcoin block hash in hex you can visually see those 55 bits. This is the latest hash from the block explorer:

http://blockexplorer.com/block/00000000000000e3d3268e05a9901759c1452590d0838a80aeb8abaea59f1e9f

and bingo, I can count the 0s (14 of them), multiply by 4 (bits per hex nibble), and I know that is a 56-bit hash collision. (You get lucky with an extra 1 bit half the time, 2 bits 1/4 of the time, etc.)
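
The eyeball rule in code (the hash from the block above; nibble thresholds as described):

def hash_bits(hexhash):
    # leading hex 0s x 4 bits, plus extra bits from the first nonzero nibble
    zeros = len(hexhash) - len(hexhash.lstrip('0'))
    bits = zeros * 4
    nibble = int(hexhash[zeros], 16)
    if nibble == 1:   bits += 3
    elif nibble <= 3: bits += 2
    elif nibble <= 7: bits += 1
    return bits

print(hash_bits('00000000000000e3d3268e05a9901759c1452590d0838a80aeb8abaea59f1e9f'))  # 56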

Adam
376  Bitcoin / Development & Technical Discussion / freshly created address argument (50% attack) on: April 27, 2013, 11:16:24 PM
So reading the bitcoin paper, it is claimed that the recipient generating his address at the last minute before accepting the payment makes him less vulnerable to a 50% double spend attack. This argument doesn't seem correct to me, though creating new addresses serves a secondary purpose as a mild privacy feature.

Let's consider two attack approaches: a) where all users generate fresh addresses to receive each payment, and b) using prior knowledge of the victim's address.

a) is the attack described in the paper: the attacker tries to create a block chain fork of longer length than the rest of the network by working on a chain that he does not publish yet, spending a coin to himself on this for-now private chain. Now and then, with probability determined by his ratio of network power, he gets ahead of the network by 2 chain links, so he starts the double spend attempt, paying it to the fresh address of a victim. Once the rest of the miners publish a block containing the victim's confirmation, and once the victim sees the confirmation, the attacker publishes his up-to-now private chain, which contains a different spend. Now there is a network fork, and the network will believe the 2-chain-link branch over the 1-link branch, for any coins that are spent in both. The network has no way to distinguish which spend to reject other than the CPU voting, and that is indicated by the chain length. The network abandons the short fork of the chain, and the victim's received payment is considered a double spend by all nodes. If the victim accepts with 0 or 1 confirmations, he loses; if the would-be victim waits for 2 confirmations the attack fails, as he would not yet consider it valid. (Analogous for n confirmations with a correspondingly longer private chain.)
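
For reference, the catch-up probability behind this (the gambler's-ruin calculation from the paper, plus a Monte Carlo sanity check; toy code):

import random

def catch_up(q, z):
    # probability an attacker with hash power share q ever erases a
    # z-block deficit: 1 if q >= 1/2, else (q/p)^z, as in the paper
    p = 1 - q
    return 1.0 if q >= p else (q / p) ** z

def simulate(q, z, trials=10000):
    wins = 0
    for _ in range(trials):
        deficit = z
        while 0 < deficit <= 30:   # past 30 behind, catching up is hopeless
            deficit += -1 if random.random() < q else 1
        wins += deficit == 0
    return wins / trials

print(catch_up(0.3, 2), simulate(0.3, 2))   # ~0.18 either way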

b) the attacker tries to gain some additional advantage from prior knowledge of the victim's address. If the attacker accelerates the confirmation by also computing the confirmation, rather than letting the network do it, he does work the network would otherwise do for him, reducing the power he has available to amass a sufficient-length private block chain (which he must build privately), and so reduces his chance of success in constructing an n-confirmation-defeating private chain. And yet he gains no success advantage. What he does do is avoid making speculative payments to the victim. However there will exist payments that result in resellable virtual goods, or online gambling that is approximately zero-sum, so those payments do not have to be considered a loss or penalty; only the transaction fee & resale (virtual goods) or house cut (online gambling). Eg Satoshi dice apparently is popular.

Maybe I am misunderstanding what Nakamoto meant in the paper, but I don't see any extra 50% attack defense coming from choosing the address just before receiving payment.

Adam
377  Bitcoin / Bitcoin Discussion / Re: Roger Ver and Jon Matonis pushed aside now that Bitcoin is becoming mainstream on: April 22, 2013, 10:40:06 PM
Maybe you should have someone like Adam Back who developed hashcash be a contact, since he talked with Satoshi, understands what Satoshi was trying to do, and has both understanding in the technical topics and an ability to speak with other humans without making everything offensive.

Ha, coincidentally I found this thread when googling my name (not something I am normally in the habit of doing), because I talked to a journalist a few weeks ago and wanted to check if he mangled my technical explanation or worse; btw he didn't mention my name, even better, win!

My exchange with Satoshi was early but very brief. I understand the tech ok, and much of the precursor tech with the various ecash technologies. There's a lot that has happened since Satoshi's paper, in altcoins too, so I am in catch-up mode for a bit.

But I am not a good public speaker - I am allowed that luxury because I'm a crypto geek not an ex-CEO. 

There are people who are masters at sounding cool, moderate, responsive and informative when faced with Bill O'Reilly type verbal rough-ups, and while covering controversial topics - ie politicians and professional PR people & spokespeople. Rick Falkvinge is very impressive. Or for example watch Kim Dotcom in this interview http://www.youtube.com/watch?v=pF48PjCtW4k Awesome: "Well you have to understand blah blah..." sounds so reasonable. (Yeah, ok, it's a friendly interview, but there are a few talented people who are amazing at sounding more reasonable than the presenter under fire.)
Kristinn Hrafnsson holds his cool really well - and given the wikileaks controversies he gets to face up to the worst of it.

I always find Matonis fun, and his mix of ex-Hushmail CEO and ex-VISA exec background seems hard to match in terms of bridging credentials. He does like to push the libertarian angle, which is amusing to crypto-libertarian types but might not always look so amusing or bitcoin-credibility-inspiring to the business people and regulators, but he's still really good.

The mainstream media do seem to enjoy sensationalizing about the fringe users doing naughty and titillating things with bitcoin that they could just as well use paper notes in the snail mail for. Bitcoin isn't even anonymous, for example, as Shamir et al showed with their statistical analysis paper on the bitcoin public ledger - it's less anonymous than paper cash; you don't get that kind of transparency and flow analysis with paper cash or physical banks' handling of paper cash. And as far as that goes, HSBC were found guilty of laundering getting on for a trillion dollars ($880 bil) and accepted paying a $1.2 billion fine. That's probably a slap on the wrist at their scale. No one went to jail, no one had banking licences revoked, etc. Barclays did something similar. Maybe the regulators should start with the real problems; they say HSBC's laundering covered Mexican drug cartels and even terror funding.

I always thought Ian Brown does pretty well for a tech guy - you see him on Al Jazeera sometimes for tech commentary.

Also I gotta write code, man, and stop getting sucked into blathering about politics, fun though it is.

Adam
378  Bitcoin / Development & Technical Discussion / Re: amortizable hashcash & zero-trust poolfree on: April 22, 2013, 03:00:19 PM
So, the pool sends a header with a random miner id embedded, which cannot be changed.  The miner tries to find the nonce that gives the lowest result from the hash function.

Sorry, that part was unclear. What I had in mind was that the pool would send a header with a random number embedded, the miner himself would append his bitcoin address to it, and then mine that. There would be a new (alt) bitcoin coin format which would include multiple hashcash outputs, eg say 100 outputs. That means that the first 100 or so miners of the pool (not an exact number, mind, as the part-bitcoins have different values) to hit the minimum share difficulty get their part-bitcoins added up by the pool, and the pool publishes the bitcoin.

I think my idea was a bit half-baked. Apart from that lack of clarity, there are two aspects to the amortizable hashcash concept - being able to add coins together (very approximately) and a metering function. It's probably the case that the metering function, which requires under/over-contribution prevention, is irrelevant for pool-related use: everyone wants to over-contribute, and that's encouraged. So let's say we remove the contribution protection (ie the blinding value and u part). Then what's left? Just an alt bitcoin formed of a list of part-bitcoins, which has lower variance, and the owners of the parts can be different owners. The pool can't benefit from its miners' work without revealing their coin addresses, so the pool can't skim. The downside is the coin gets bigger. However I do not think that initial mining events form a big part of the network traffic - isn't the transaction log the big deal, with all the fractional bitcoin change and combining?


Quote
The difficulty can be estimated as 1/(min result)?  This can be done in parallel easily.

Correct, the pool would set some minimum work factor to limit the network traffic from miners sending it part-bitcoins. I work in log2 of difficulty because that's the way hashcash was expressed; I think it's clearer to think about really. The log difficulty right now is 55.1 bits (logdiff = log2(difficulty) + 32 is the bitcoin formula; it's easy to see the difficulty visually in the hashes, eg http://blockexplorer.com/block/00000000000000bf11ad375a87a5670571ee432fbf629ba0e69e33860461bf84 by counting leading 0s and multiplying by 4 bits per nibble - yes, it's 56 bits - you get lucky with an extra bit 1/2 the time and two extra 1/4 of the time etc.)

In this idea the pool is mainly saving the miners the network overhead of keeping up with the transaction log traffic; otherwise they could just post their part-coins to the p2p network directly. Alternatively miners could broadcast their coins if they preferred. Eg the whole network, in a p2p sense, could grab the first set of broadcast part-coins that added up to the current difficulty and hash them into the transaction log. In that way your part-bitcoins could go straight to the network, bypassing the pool. Because the part-bitcoins are smaller and released faster that may create some micro-forks, but perhaps the p2p voting can handle that.

Quote
If you have the miner submit back the n best nonces instead of the best, then variance is even lower. 

You got it - you could have the part-bitcoins themselves be composed of even smaller subpart-bitcoins (eg 32 of them), and then the miner has lower variance, and can actually measure progress, even print a progress bar that means something. (With single-hash bitcoin mining there is no progress, as it is like trying to toss 55 tails in a row with a coin - the coin has no memory.)

Then while eg 1/128 of the difficulty is massive for most miners, the variance for mining is reduced, which is part of the miners' problem. Eg say 128-part coins = 7 bits, which would make a mining share 48 bits (that's huge even for a 1500 MH/s gpu - it would only have a 1/436 chance of creating a valid share in 10 mins - and that's not good, because no share = no direct payout).
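
Eg the progress measurement that becomes possible (illustrative numbers only):

from math import sqrt

def share_progress(solved, parts=128):
    # a 48-bit share split into 128 subparts of ~41 bits each: solved
    # subparts measure real progress, where single-hash mining has none
    frac = solved / parts
    noise = sqrt(max(solved, 1)) / parts    # statistical wobble on the bar
    return frac, noise

print(share_progress(96))   # (0.75, ~0.08): about 75% of a share done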

Quote from: TierNolan
Quote from: adam3us
(To elaborate for clarity, the serialization and definition changes I mean each microcoin would hash its owners coin address as part its self-chosen challenge.  If the pool uses the clients hash - and it has an incentive to if it wants to win the pending 10 minute full-sized coin strip, and collect the bounty - then the pool contributor unavoidably gets the microcoin. 

This doesn't help with variance, which is the whole point of the pool.  It just shows a list of winners, right?

Correct. However you could use it recursively to have the miner create subpart-coins, but each time you increase the number of parts the coins grow.

But I think there may be a potential problem with the multi-part coin low variance concept: imagine the extreme case where there are 1 million part-coins; now there is practically NO variance - it's almost completely deterministic and 100% related to your CPU power. Now the guy with the biggest GPU/ASIC farm is going to get the coin 100% of the time. For hashcash stamp anti-DoS that determinism is good, but for bitcoin, with its 10-min lottery, that's very bad - winner takes all with almost complete certainty. Even with modest numbers of part-coins the effect exists and stacks the reward in favor of the biggest CPU players, arguably the opposite of what you need if anything (in terms of centralization resistance). If it's recursive, with the first 100 part-bitcoins past the post sharing the 25 bitcoins, and low-variance part-bitcoins in the race (themselves made of subpart-bitcoins), you still have the same issue: fastest CPUs win.

A loose analogy: imagine currently bitcoin miners are race cars. Some are fast (Ferrari) and some are slow (Citroen 2CV) but they are all very very unreliable. So who wins the race? The Ferrari mostly, but the 2CV still has a fair chance relative to its speed because the Ferrari is really likely to break down. With low variance coins, you have well-maintained cars, and they very rarely break down. So the Ferrari wins almost always. Now if you have a line of 20 cars of varying speeds, well maintained (low variance), the first 5 that are going to get past the post are almost certainly going to be the 5 fastest. No one else stands a chance, hardly.

So I think the takeaway is you can't use low variance techniques for the underlying coins in any first (or top-10 etc) past the post race, which is what the bitcoin 10-min CPU lottery is in effect, because it is inherently unfairly stacked in favor of the fastest CPUs.

That's kind of inconvenient, and as you noted the only other variance reduction method discussed (that I saw) has been to reduce the difficulty (unpooled) or the share size (pooled). But that can increase bandwidth requirements, because lots of small coins flow up to the pool, or directly to the whole network.

Quote
I made a proposal to allow proving work.  A node submits a claim and then a few blocks later submits proof.  A number of hashes are pseudo-randomly selected based on block chain hashes for the next few block.  The node submits the nonces for those hashes.  The node must submit the proof in order to unlock their id token.  In fact, now that I think about it, they could just include the proof with their next claim.  The id token would just be a proof of work and is reused if they are honest.  The value of the id token must be greater than the probability of being caught times the value of the hash claim. 

I think I have to read about p2pool before I can understand what you wrote on that thread.  It sounds like you plan a bit commitment to be later revealed.

This might be the same as what you meant, but I was thinking about coin compactness, and maybe it works for pools too: you could demand from the pool a hash including the main bitcoin transaction log hash, plus the merkle hash tree of the miner coin addresses using the pool, plus a log(#shares) hash chain proof to the miner that his address is in the tree. That would seem to allow proof of contribution; however the generated bitcoin would be quite big, as it would need to include ALL of the shares, but spends of the bitcoin would be compact, just referencing the offset of their address in the generation coin. Alternatively the generated bitcoin could be compact, and the miner could be responsible for disclosing the claim to the bitcoin at time of first use, which would bloat spends, and I believe that's worse because coins get created once but spent many times.

(And now I need to go read p2pool and then your other post. So much to catch up on!)

Adam
379  Bitcoin / Development & Technical Discussion / Re: amortizable hashcash & zero-trust poolfree on: April 21, 2013, 10:45:38 PM
I know that Hashcash is your baby and you're rightly proud of it, but Bitcoin isn't Hashcash and your insistence on crowbarring its terminology into your posts make them all the more difficult to read.

"amortizable hashcash" is the title of the 2002 paper I wrote so some of the terminology comes from that. 
If you read what I just wrote (and I just re-read it to be sure) I did not mix the terminology - except in one place as a slip.   I called hypothetically directly claimable micro (low denominatoin) bitcoins microcoins.  The rest of the terminology is specific to the paper.

I know someone prattling on about historic stuff can be irritating; there is one such fellow on the crypto list and it irritates the heck out of me; I am not trying to be obnoxious, really. My main interest is to help improve bitcoin itself (eg scalability, security, pool security etc), or to have altcoins test or innovate on any of the old things I point out they might not have been aware of. Sometimes a new person can see a new innovation that everyone else missed. (Eg an altcoin might see some trick with amortizing coins to achieve a scalability jump that no one, including me, has thought of - that's how innovations happen.)

But yes, I know bitcoin isn't hashcash any more than SHA-256 is bitcoin.

You also have to understand there is some history to ecash and cryptocurrencies, stretching back to David Chaum's 1982 paper on blind signatures for untraceable payments. Some of us got pretty excited about this stuff on the cypherpunks list for a period of years on and off - maybe 1992-2005 or something in that range. Of the aspirations or dreams for what privacy technology could do to improve society, ecash was pretty much the holy grail, one that was tantalizingly out of reach. There were even books written about this hunt, eg Neal Stephenson's Cryptonomicon. People were talking about ecash for a few decades and were excited about the social implications, so that is not new to bitcoin. So I am not trying to falsely claim any satoshi glory, nor rename anything, but you can't stop me joining the party :) Alright. Otherwise flame on.

Believe me, the fact that Satoshi invented the key missing parts that tantalizingly eluded everyone else for about 20 years is pretty damn cool to me. It wasn't that we weren't trying to figure out how to do this; some of the smartest applied and theoretical crypto people tried their damnedest and failed. So yes, bitcoin is the biggest news for a couple of decades in my favorite technical area of interest.

There were prior attempts to control inflation in hashcash. Otherwise it inflates away at the rate of Moore's law (once mining caught up to GPUs, then ASICs). One was to broadcast an increased difficulty periodically, resetting the number of bits back in line with Moore's law (plus a beacon to prevent anticipatory epoch skipping). Hashcash of this type would have included the epoch beacon and the difficulty in the hashcash service string. But it would have had no re-spendability, so it would be of use only for anti-DoS and metering applications. I never implemented it, but the above was discussed on mailing lists as I recall. Also it would have had no supply limitations other than computational resources, but as it was not respendable and was intended as a cost-break on DoS, that didn't matter.

However there was no mathematical enforcement; some "trusted" authority would have to estimate and increase the difficulty (eg the rate achievable on equipment costing $1000, every few months). Wei Dai's b-money and Nick Szabo's bit gold extended those ideas with respendability and distribution, but still based on human markets. Maybe Szabo envisaged computer-mediated markets; it was unclear to me. They all failed to find the elusive deployable pure mathematics/crypto solution without human intervention. Hal Finney actually built his idea, RPOW, and that provided re-spendability and proper blind cryptographic privacy (which bitcoin does not), but it was centralized, and relied on hardware security, which has to trust the manufacturer, as they are the CAs for the keys involved and could bypass the HW assurance.

I tried to find ways to design an offline ecash system over quite a few years and I failed; most of the discussion is on old crypto lists. The best I came up with was a way to create offline multiply-transferable Brands cash. But it turned out someone had already invented it; there was a tiny obscure footnote in his book he pointed me to.

An experiment with controlled scarcity was the digicash (chaum cash) betabucks server. They issued some number of coins and I think you could just go claim them. As they were in limited supply, people started trading them. I sold a few perl-rsa t-shirts for beta-bucks. Blind ecash, fixed supply, but central; and it went under when digicash did!

Adam
380  Bitcoin / Development & Technical Discussion / amortizable hashcash & zero-trust poolfree on: April 21, 2013, 01:51:43 PM
Hi Bitcoiners

A few topics:

1. amortizable hashcash and zero-trust bitcoin pooling

It occurs to me that my 2002 paper on "amortizable hashcash" could be used to make zero-trust bitcoin pooling possible.

http://hashcash.org/papers/amortizable.pdf

While pools are held honest by reputation, smart contracts and cryptography and end-to-end self-determinable security are always preferable to gameable reputation, particularly where the cheating is only detectable in the long run, and the pool could skim based on non-transparent statistics. How do you know what the real pool statistics are - from their stats page? Etc; I'm sure others have worked through the possibilities. Maybe some pools are skimming right now above their advertised commissions, and could perhaps get away with that for months. Possibly even the largest pools. Eventually the stats add up, but who is doing the manual statistical auditing back to the blockchain? No one, probably.

Amortizable hashcash (my later 2002 variation following on from hashcash) means that you could mine hashcash microcoins (corresponding approximately to the pool "share", but directly ownable by the individual miner without possibility of cheating, skimming etc by the pool), and the amortization function is that you can add them together offline by yourself, with the resulting coin having a fixed modest size regardless of how many coins you added together, and have anyone publicly audit and accept that resulting coin. The pool has no trap door, no computational advantage arising, so you don't have to trust the pool at all.


No change to the core mining function is necessary, as amortizable hashcash is still based on the same mining function (which I used to call a hashcash cost function - others later called them proof-of-work functions), so it works on ASICs and GPUs etc, though some minor changes would be needed in the serialization of what gets hashed and the definition of the value of coins. Well, that's the idea; the crypto is sitting there and a good hacker could put it together in a few days IMO.

(To elaborate for clarity, by the serialization and definition changes I mean each microcoin would hash its owner's coin address as part of its self-chosen challenge. If the pool uses the client's hash - and it has an incentive to if it wants to win the pending 10-minute full-sized coin strip, and collect the bounty - then the pool contributor unavoidably gets the microcoin. The pool's reward can also be encoded as a smart contract into the format, so that if the pool is due the tx fees but not a coin share (like the eligius fee structure), or a percentage, that can be in the smart contract.)
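
A sketch of that serialization point (hypothetical field layout): the miner's address is inside the hashed string, so the pool can aggregate and verify shares but cannot reassign them without redoing the work:

import hashlib

def h(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_microcoin(pool_header, address, challenge, nonce, share_target):
    # the owner's address is committed inside the proof-of-work string:
    # anyone can verify the share, no one can re-own it without re-mining
    digest = h(pool_header + address + challenge + nonce.to_bytes(8, 'big'))
    return int.from_bytes(digest, 'big') < share_target

def assemble_coin(pool_header, microcoins, share_target):
    # the pool adds up the valid part-coins; the published coin carries each
    # contributor's address, so the payout split is publicly auditable
    return [(a, c, n) for (a, c, n) in microcoins
            if verify_microcoin(pool_header, a, c, n, share_target)]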

So there's the brain dump; do with the suggestion as you wish. Alt-coiners might like it. Bitcoin itself maybe has more energy right now for scalability engineering. But that's part of the value of alt-coins -- innovation jumping ahead in different directions.


2. pool auditing?

Maybe an alt-coiner, or a pool contributor with enough at stake, might like to audit the main pools and publish their findings. If pools have been skimming, it could lend credibility to an alt-coin that uses amortizable hashcash and fair micro-smart-contract based pooled mining.

It's not an accusation btw; from what I saw the pools seem to be friendly, and some of the published % fees are fairly steep, so maybe they can be profitable enough without entertaining it. More to lose than gain? Etc.


3. reducing coin mining time variance

Oh yes, something else, though I am not sure it is necessary with microcoins (as I commented in another thread, they have inherently smoothed mining randomness due to their size), but it is also easy to make a much more deterministic mining definition with hashcash. Just collect more smaller hashes in a list and define the value to be the logsum of the coin values in bits. The expected work is the same but the variance falls fast. Juels et al proposed that in their client puzzles paper. Microsoft also did it in their hashcash mail stamp fork.

You can optimize the storage size, though obviously they will be bigger than single-mining-event coins. In the case of bitcoin the size is mostly dominated by the transaction log, so maybe that is a negligible effect.


4. even flatter network (maybe.. proto-thought in progress)

I haven't thought about it enough yet, but there might be a way to use amortizable hashcash at the whole-network level (I mean the amortizable hashcash protocol itself is 100% scalable) and use it to somehow benefit network scalability, and/or reduce the need for pools to exist, and yet without spamming the network with microcoin commits. For example the pool (symmetric) blinding value (you'll have to read the paper to understand what that is) could alternatively be chosen after the fact via a fair beacon based on the network aggregate mining events. Then all the micro clients can look through candidate microcoins and see if they scored with any of them. Kind of like lots of little lottery tickets rather than one big lottery ticket. I do appreciate that all the miners need to validate the anti-double-spend properties of the block chain, either directly or indirectly, and that the miner effectively acts as a super node doing that for users, and charging a fee. (If the pool fails or cheats on the block chain validation and is well below 50%, or exceptionally lucky, it will lose the bounty anyway, as everyone else will declare its coin invalid.) Anyway I suppose the main point is the miner doesn't have to wait for a delayed payout nor trust the pool; he has his own coin and can validate it and claim it himself. But at that stage the pool would be doing so little - just sending you the current claimed hash - that perhaps there is a trick that could remove the need for them. That's where I haven't thought enough yet :)

Adam