Bitcoin Forum
May 21, 2024, 03:28:49 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
Show Posts
5701  Bitcoin / Development & Technical Discussion / Re: Pool shutdown attack on: May 31, 2011, 08:34:54 PM
The only thing that you can tell from a single sample is how unlikely it would be for you to repeat it in the same amount of time.  You can tell absolutely nothing at all about the amount of work actually done.

Say I get a block, and returned the result to you in a nanosecond.  What is more likely?

Option A) I got really lucky.
Option B) I have more hashing power at my disposal than the theoretical hashing power of the entire solar system, should it ever be converted into a computer.

Obviously the former. Really, really, really, really, really lucky. In fact, I might instead think it more likely that you've found a weakness in SHA-256 and are able to generate preimages quickly.

And by the same token, if the network goes a day without a result, what's more likely... that an astronomically unlikely run of bad luck happened— p=2.8946403116483e-63, a one in a wtf-do-they-have-a-word-for-that event— or that it lost hashrate or had some other kind of outage?
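That p-value is just the exponential tail: at one expected block per ten minutes there are 144 expected blocks in a day, so the chance of a blockless day is e^-144. A quick sanity check in Python:

```python
import math

# Block finding is a Poisson process with one expected block per 10 minutes,
# so the probability of a gap longer than t minutes is exp(-t / 10).
expected_per_day = 24 * 60 / 10          # 144 expected blocks in one day
p_blockless_day = math.exp(-expected_per_day)

print(p_blockless_day)                   # on the order of 2.9e-63, as quoted
```

The tiny discrepancy from the post's exact digits just reflects rounding in the assumed 10-minute target.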

We had block gaps whose combined probability was one in a thousand given the expected nominal rate, and at the same time big pools were down and their operators were saying they were suffering an outage.  But you say we know nothing about the hash rate?

*ploink*

5702  Bitcoin / Development & Technical Discussion / Re: Pool shutdown attack on: May 31, 2011, 05:55:26 PM
Better example.  You have a billion sided die, and you throw it and it comes up showing "1".  Should I infer from that event that you had actually thrown the die 500 million times?

We're not measuring any outcome.  We're measuring a small set of outcomes.

If I send you a block header and you come back with a solution at the current difficulty, how many hashes did you do before you found it?   The most likely answer is 1,867,835,568,419,371.  (I changed this number after my initial post because I'd accidentally done a nice precise calculation with the wrong value for the current difficulty Wink)

Maybe you did one hash, maybe you did 1000 times that.  But those cases are VERY unlikely— far more unlikely than the error found in virtually any measurement of a physical quantity.   So we can easily specify whatever error bounds we like, and say with high confidence that the work was almost certainly within that range and that the chance of the result arising from luck alone is small— just as if we were comparing the weights of things on a scale (though the difference in process gives a different error distribution, of course).  Because bitcoin solutions arise from such a fantastically large number of fantastically unlikely events, giving a fairly high overall rate, the confidence bounds are fairly tight even for a single sample.

E.g. at the normal expectation of 10 minutes, we'd expect 99% of all gaps to be under 46.05 minutes and our average rate has been faster than that.  So you're asking us to accept that there were multiple p<1% events but not a loss of hashrate when at the same time the biggest pool operator is obviously taking an outage which is visible to all.  OOooookkkaaay...
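The 46.05-minute figure comes from inverting the exponential tail: solving e^(-x/10) = 0.01 for x. As a sketch:

```python
import math

mean_gap_min = 10.0          # expected minutes per block
tail_prob = 0.01             # we want the gap exceeded only 1% of the time

# Solve exp(-x / mean) = tail_prob for x.
x = -mean_gap_min * math.log(tail_prob)
print(round(x, 2))           # 46.05 minutes
```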

5703  Bitcoin / Development & Technical Discussion / Re: Pool shutdown attack on: May 31, 2011, 02:27:05 PM
No, they are not measurements.  There is no way to measure how much work went into finding any given hash, unless you are actually monitoring each and every miner involved.

I think you don't actually know what a measurement is.   If you ask everyone coming into the emergency room complaining of stomach pains "Did you eat cucumber in the last 24 hours?", is that not a measurement?  Even though the results will be contaminated by noise and biases?   If the census randomly selects 100,000 houses out of a million to get demographics from, is this not a measurement?

In this case we know the hashrate by virtue of the network reporting when it finds a block.  This is a measurement of a _well_ defined process with explicable behavior, and we can speak with confidence about it.  Unlike my silly examples it's not subject to much in the way of surprising biases or unknown sources of noise. (E.g. if broken clients diminish our hash rate— that's not a random effect we'd wish to exclude.) About all there is to worry about is how you time the blocks: if you time from the block timestamps you're subject to node timestamp stupidity; if you measure locally you're subject to a small amount of network propagation noise. But bitcoin propagation time is minuscule compared to the average time between blocks.

Regardless, if the expected solution rate is r then the proportion of blocks which take longer than x to find is e^(-r*x).

Moreover, for the exponential process at play here the maximum-likelihood estimate of the mean is simply the sample average. So you don't have to do any math fancier than taking an average to reach the most accurate measurement possible given the available data.
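A small simulation (hypothetical numbers, standard-library Python only) showing that the plain average of observed gaps recovers the true mean, as the paragraph above claims:

```python
import random

random.seed(1)
true_mean_gap = 600.0                          # seconds; the 10-minute target

# Draw 20,000 exponentially distributed block gaps at the true rate.
gaps = [random.expovariate(1 / true_mean_gap) for _ in range(20000)]

estimate = sum(gaps) / len(gaps)               # the maximum-likelihood estimate
print(estimate)                                # lands close to 600
```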

So, e.g. if we see long gaps then we can say with very high confidence that the _measurement_ being performed by the network is telling us that the hashrate is almost certainly low.

As far as the timestamps go— yes, bitcoin network time can be wonky. So don't pay any attention to it. If you have a node with good visibility you'll observe the blocks directly.


5704  Bitcoin / Development & Technical Discussion / Re: Pool shutdown attack on: May 31, 2011, 01:28:08 PM
No one has any idea where the error bars on the hash rate graphs should be.  On the 1 day line, they will be astronomically huge (not to mention the 8 hour window, lol).  Like several times the width of the plotted channel huge.

DO NOT MAKE ASSUMPTIONS BASED ON THOSE GRAPHS.  They are estimates, not measurements.

I'm not making assumptions based on "those graphs". I'm looking at the highly improbable gaps between blocks, which can only really be explained by a large loss of hashrate. E.g. a gap of 50 minutes or longer has a p-value of .0067 if the expectation is 10 minutes— lower still considering that we'd been running at a rate faster than one block per ten minutes.

Separately, it's not that "no one has any idea"— those graphs _are_ a measurement, but of a noisy process. The process by which blocks are found is well understood, and you can easily draw confidence intervals based on the known distribution and the number of points in the average. I'll nag sipa to do this. It would be reasonable.
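Those confidence intervals are easy to get by simulation rather than fancy math. A sketch (the 144-block window is a hypothetical choice, roughly one nominal day of graph averaging): simulate many windows at the true rate and look at the spread of the naive rate estimate.

```python
import random

random.seed(7)
window = 144                  # blocks averaged per graph point (~1 nominal day)
trials = 10000

# For each simulated window, the ratio of the estimated rate to the true rate.
ratios = []
for _ in range(trials):
    total_time = sum(random.expovariate(1.0) for _ in range(window))
    ratios.append(window / total_time)

ratios.sort()
lo = ratios[int(0.005 * trials)]      # lower edge of a 99% band
hi = ratios[int(0.995 * trials)]      # upper edge of a 99% band
print(lo, hi)                         # roughly 0.8x to 1.25x the true rate
```

So even a one-day average is only good to about ±20%, which is exactly why the short-window graphs need error bars drawn on them.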

5705  Bitcoin / Development & Technical Discussion / Re: The pools are getting far too powerful. on: May 31, 2011, 01:10:32 PM
"Pool's fee is lowered to 0%-Proportional, 7%-PPS for a few days due to the outage caused by DDoS" - deebit.net

Yup.

And 24 hours after beginning that policy they were at >40% again according to their claimed hashrate.  Though their figures are no longer displayed on bitcoinwatch.

Based on comments I've seen by newbies on IRC it seems that a lot of people erroneously believe that mining is a race and that they enjoy some significant advantage by being on the largest pool…
5706  Bitcoin / Development & Technical Discussion / Re: Is the ECDSA public key hashed as a extra level of protection? on: May 31, 2011, 01:24:41 AM
RSA-512 is horribly weak; many people (including myself) have cracked it on their own at home.

.... and while I can trivially crack RSA-512 at home ....

That's a very interesting claim considering that a 512-bit factorization would come in at number 5 on the GNFS records page at
http://xyyxf.at.tut.by/records.html#gnfs

If you can really factor 512 bit numbers trivially, then the project would greatly benefit from your ability.
Please let me know the size and weight of an average matrix for the linear algebra step for your 512-bit factoring program.

I think you may be misunderstanding what I meant by 'trivially'.  I can't do it in under a day, for example.  Using EC2 prior to spot pricing I cracked RSA-512 for about $160 for the sieving step and then did the linear algebra at home, in a total of about 40 hours.

It only takes about 4 GB of memory or so for the linear algebra. On some random _single core_ machine, i.e. not using the MPI block Wiedemann, you can extract a candidate solution in an hour or two once you've done enough sieving.

I can do the whole process at home in about three days now— though I'm perhaps a bit more overpowered compared to most homes. Wink

I'll gladly prove this to you in private (e.g. by signing a message using a key or two I previously compromised from the well-connected PGP set), if you'll share confirmation of this with the forum.

The point being— RSA-512 is too weak for any non-ephemeral usage, and yet even if bitcoin were using it right now it wouldn't be completely fatal to the system, but it sure as hell would be if the public keys were disclosed. E.g. the coin at 1PZjfkLZBT7Q3UFmyEWH8QtuzTMi3MUiBj would be mine by now if the system used RSA-512 and the public key were disclosed. But without the public key I'd still be stuck, even though I can compromise RSA-512.

Quote
I mean— we already know how to compromise ECDSA in about 4 billion operations. It's "Just an engineering problem".
Even using the British "billion" = million million, 4 billion operations is less than a 42 bit keyspace. Please outline the attack you have in mind.

Modified Shor's algorithm on the EC discrete logarithm problem for n=256 bits takes a system of around 1500 error-free qubits (god knows how many for an error-correcting system) and something like 6e9 operations (http://cdsweb.cern.ch/record/602816/files/0301141.pdf). I was _abundantly clear_ that I am not claiming that there _currently exists_ an effective attack, and that the benefit is purely a theoretical reduction in brittleness. At the same time, no one of any repute is arguing that large-scale quantum systems are physically impossible; we just don't know how to build one yet.

Publickeyhash raises a point I hadn't considered: I'd thought of the collision attack as being both a full collision across the big composite hash _and_ an ECC compromise; I hadn't considered the possibility of colliding to a weak key.  That's interesting. I'm clueless about how common weak keys would be on the curve bitcoin uses.

5707  Bitcoin / Development & Technical Discussion / Re: Pool shutdown attack on: May 30, 2011, 10:54:42 PM
The network will self-heal and route around the damage... there are very strong incentives for that in place.

Apparently not.  It hasn't been healing when these big pools keep going down— we lose hashrate, and most of it doesn't come back until the pool does. In spite of the concerns for the stability and security of the bitcoin network— and in spite of actually losing money due to downtime and higher fees— people continue to use deepbit.

As I write this it's back to ~40% even after the outages a day ago.

5708  Bitcoin / Development & Technical Discussion / Re: Is the ECDSA public key hashed as a extra level of protection? on: May 30, 2011, 10:52:17 PM
suppose NSA can easily generate SHA256 collisions,

There is a lot less reason to suppose that the NSA, or anyone else, can currently generate collisions on RIPEMD160(SHA256()) than that they can shortcut ECDSA.

I mean— we already know how to compromise ECDSA in about 4 billion operations. It's "Just an engineering problem".

5709  Bitcoin / Development & Technical Discussion / Re: Is the ECDSA public key hashed as a extra level of protection? on: May 30, 2011, 10:37:23 PM
If I understand correctly the bitcoin addresses are Base58(Version+RIPEMD160(SHA256(PublicKey))+FirstFourBytes(SHA256(SHA256(Version+RIPEMD160(SHA256(PublicKey))))))

Since Base58 is easily invertible (just an encoding, not encryption) we can ignore the outermost layer.

Now if ECDSA is considered trusted, why hide public key with a hash?

As others have answered, it's a more secure way of making addresses shorter.

However— it does have a theoretical security benefit:  It makes bitcoin a little less brittle against weak but semi-practical attacks on ECDSA.

E.g. imagine if bitcoin used RSA-512 for signatures today.  RSA-512 is horribly weak; many people (including myself) have cracked it on their own at home.  Even so, if this weakness were known, bitcoin users could change practices to only ever send once from an address (transferring all value to a new address), and so an attack would have to be performed in the narrow window between the transmission of the spending TXN and the mining of the block (or somewhat longer if also coupled with a high-hash-power attack).   This is _much_ harder, and while I can trivially crack RSA-512 at home or using EC2, I don't believe I could pull off that attack.

It's likely that when there is some practical attack on ECC it will creep up slowly— initially taking a very long time and a lot of resources. When this happens, concerned bitcoin users could move to more conservative practices (especially for large stores of wealth) while the long process of upgrading bitcoin happens.
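As an aside on the mechanics being discussed, here is a minimal sketch of the Base58Check step (a sketch, not the client's actual code): the checksum is the first four bytes of SHA256(SHA256(version + hash)), and a hypothetical all-zero HASH160 is used so the example needs no ECDSA or RIPEMD160 library.

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: bytes, payload: bytes) -> str:
    """Append a 4-byte double-SHA256 checksum and encode in base 58."""
    data = version + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    full = data + checksum
    n = int.from_bytes(full, "big")
    digits = ""
    while n:
        n, r = divmod(n, 58)
        digits = B58[r] + digits
    # Each leading zero byte is encoded as a literal '1'.
    pad = len(full) - len(full.lstrip(b"\x00"))
    return "1" * pad + digits

# Hypothetical all-zero HASH160 (in reality RIPEMD160(SHA256(pubkey))).
addr = base58check(b"\x00", bytes(20))
print(addr)   # 1111111111111111111114oLvT2, the well-known all-zeros address
```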
5710  Bitcoin / Development & Technical Discussion / Re: proposal to reduce potential effects of majority miner attack on: May 30, 2011, 11:17:27 AM
The idea is not to kick them off the network.  It is to make it harder to win block races.

If there is an honest disagreement about the block chain, the version that is believed by more nodes is likely to win eventually.  That means that there is an advantage to being well connected.

If you are intending to attack the chain, you will want to have as many connections as you can, to get as much of the network on your side as possible.  This proposal makes it more difficult for an attacker to remain well connected.  Not impossible, of course, just more difficult.

This isn't what an actual attack needs to look like.

Let's imagine that I am evilMChacker with my 400,000 strong playstation 4 botnet or whatever, enough to have more hash power than the honest network. You are joeblow, interested in selling a shiny new Lamborghini.

At block 300000, I transmit a transaction (X) for our agreed on price of 2 BTC for the car.

Simultaneously, I spin up my 400,000 playstations to start mining block 300001— they include transactions like normal, but instead of transaction X they include transaction Y, which is a payment to myself using the same coins X spends.   Once the playstations find 300001 they do not announce it; they continue working on 300002... If the rest of the network found 300001 in the meantime they just ignore it (as well as any other blocks the network found).  They keep merging in new tx from the network (though not any that conflict with Y, of course), but they do not announce, and they never pay attention to blocks found on the main network. They just mine quietly in secret.

In the meantime, you count up the confirmations— 1, 2 ... 6  ... 100.  Okay, you're pretty confident that your transaction is guarded against reversal, so you hand over the keys. As I drive off cackling evilly into the sunset I send the final command to my botnet:  "As soon as you're three blocks ahead of the main network, announce all your blocks and self destruct"

Maybe moments later... hours later ... days ... weeks ... whatever.  _Eventually_, because it has more hashpower, the evilMChacker chain will be three (or any amount) longer than the main chain and the blocks will be released causing a single sudden reorganization.

The ever so slight sucking sound you hear is those precious two bitcoins leaving your wallet (and those of any dependent transactions) and ending back up in mine.

No amount of flap dampening solves this.   And besides, see RIPE-378: in the context of routing, flap dampening is now considered harmful, and I think the exact same thing would apply to dampening chain splits.  When you dampen a split you'll often end up on the wrong side, in which case you'll end up flapping towards all your peers whenever you do finally re-converge, thus causing them to dampen you, and so on.
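The "eventually, because it has more hashpower" step is just a biased random walk: each new block goes to the attacker with probability equal to his share of the hashrate, and any share over 50% gives an upward drift that crosses any fixed lead with probability 1. A toy simulation (all numbers made up):

```python
import random

def blocks_until_lead(attacker_share: float, lead_target: int,
                      rng: random.Random) -> int:
    """Total blocks found (by anyone) until the secret chain leads by lead_target."""
    lead = 0        # attacker blocks minus honest blocks
    blocks = 0
    while lead < lead_target:
        # Each block goes to the attacker in proportion to his hashrate.
        lead += 1 if rng.random() < attacker_share else -1
        blocks += 1
    return blocks

# Hypothetical 60% attacker waiting to be 3 blocks ahead before announcing.
print(blocks_until_lead(0.60, 3, random.Random(42)))
```

With a minority share the walk drifts the other way and the wait becomes exponentially long, which is why the attack narrative requires majority hashpower.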
5711  Bitcoin / Development & Technical Discussion / Re: Vanity bitcoin addresses: a new way to keep your CPU busy on: May 29, 2011, 06:19:28 PM
If there is a demand for it, I might be tempted to start a webservice like the faucet where people can buy vanity addresses for a small bitcoin fee. I have a simple handshake scheme which allows me to generate a new address for you without me finding out your private key. My method sounds like it's faster than Gavin's and mathematically it's non-trivial. It can find addresses containing a short string like "gavin" in a fraction of a second for example.
ByteCoin

I think the claim that you can do this search without knowing the private key is surprising and dubious.

I'd be interested in hearing more about how you propose to do this.
 

5712  Bitcoin / Bitcoin Technical Support / Re: Lost large number of bitcoins on: May 29, 2011, 08:26:28 AM
That is, walk you through it step by step asking for confirmation of each step ("wizard") and possibly even (a la "guru") actually attempting to explain what it is all really about despite those who prefer the "wizard" approach wanting very very much not to ever ever have to understand anything about anything and most certainly not anything even remotely related to computers or computing.

Some highly technical thing that most regular people will probably answer wrong (meaning, in a way that won't get them the behavior they will later wish they'd gotten) and which must be answered on every single send?  MEH.

People can't be expected to think through the consequence of every transaction. Sometimes you won't realize until long after the fact that you really wish some payment or another had been more anonymous. The system should try to avoid creating situations the user will regret if it can.

How about just keeping track of when backups were taken via the formal backup function and whining at the user once they get low on keys which have been backed up?

Once the encrypted wallet support is in— how about integrated online backup support?… if there was a feature integrated in the client I'm sure there would be no shortage of people selling backup services that accept a few bitcents in exchange for keeping a copy of your encrypted wallet in some web accessible place.

These things would solve more problems and be much more user friendly than some complicated decision with every TXN.




5713  Bitcoin / Bitcoin Discussion / Re: Vote to get bitcoin to the front page of reddit again today! [but not too much] on: May 25, 2011, 10:31:38 PM

Promoting bitcoin == MAKE MONEY FAST  does us no service.  It makes bitcoin sound like a scam— in fact, promoting it that way makes bitcoin into a scam.

I downvoted. Shame on you.
5714  Bitcoin / Bitcoin Discussion / Satoshi's spare change? on: May 25, 2011, 01:38:03 AM
http://blockexplorer.com/tx/3387418aaddb4927209c5032f515aa442a6587d6e54677f08a03b8fa7789e688#o1

(This looks like change from a send from an active address being sent to the address which was paid by the genesis block)

Did older client software not generate random addresses to receive change and instead pick a random address in the wallet?

5715  Bitcoin / Mining / Re: Think I just solved the pool problem. on: May 21, 2011, 10:49:20 PM
Btw it's not only a transfer problem; calculating a complete block for every share is pretty hard too. Don't forget that a pool can process tens to hundreds of shares per second...

So you're saying that a small set of pool systems scales better than the idle CPUs of thousands of miners? That's silly. As the tx load rises, miners can simply prioritize transactions based on their hash distance from some random value, allowing TX validation to scale far beyond what the pool could support.
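The hash-distance prioritization mentioned above might look something like this sketch (the salt, txids, and fraction are all hypothetical): each miner validates only the transactions whose salted hash lands nearest its own value, spreading validation work across the pool's miners.

```python
import hashlib

def my_share_of_txs(txids, my_salt: bytes, fraction: float):
    """Keep the fraction of transactions whose salted hash is smallest."""
    def distance(txid: bytes) -> int:
        return int.from_bytes(hashlib.sha256(my_salt + txid).digest(), "big")
    ranked = sorted(txids, key=distance)
    return ranked[: max(1, int(len(ranked) * fraction))]

# Hypothetical 8 txids; this miner validates its closest quarter of them.
txids = [bytes([i]) * 32 for i in range(8)]
mine = my_share_of_txs(txids, b"miner-7-salt", 0.25)
print(len(mine))   # 2
```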

Today the responses take about 181 bytes on the wire.  Blocks are frequently about 4k, so at the moment difficulty would need to be 22 to send the whole block and use the same amount of traffic.  If it were compressed by only sending the TX ids, it would be 354 bytes/share for 10-tx shares, or less than double now.

Someday in the future when blocks are 1MB (the largest size clients will accept today) the 'compressed' size will be 128032 bytes/share. Share difficulty would need to be ~750 to get to the _same_ traffic levels we have now.

This could all be further reduced by miners only sending incremental updates. So basically, in that case it would only take resending each TX, along with one extra per new block (~6/hour) to set up the root. Done this way it should be no more than 2x the current bandwidth, though it would take more software to track the incremental work per miner.

But even ignoring all the things that can be done to make it more efficient: at current bulk internet transit prices ($2/mbit/sec/month) full 1MB shares would each cost the pool $0.0000064 each.

Assuming 2 MH/J, $0.05/kWh, and an exchange rate of $6/BTC, GPU mining won't be power-profitable past difficulty 10,000,000, even for people with cheap (but not stolen) power.  At a share difficulty of only 12 this would be bandwidth costs of only about $5.33 per block solved while sending full 1MB shares without any efficiency measures.  If you can't figure out how to make that work then I'll welcome your more efficient replacements.

(the formula for breakeven profitability is
diff = (719989013671875*exc*mhj)/(17179869184*kwh)
where diff is difficulty, exc is the exchange rate in $/BTC, MHJ is the number of MH per joule, and kwh is $/KWH)
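The formula above follows from setting block revenue per hash equal to electricity cost per hash. A sanity check against the numbers in this post (a sketch; the post's exact constant differs slightly from this derivation due to rounding):

```python
def breakeven_difficulty(exc: float, mhj: float, kwh: float) -> float:
    """Difficulty at which the 50 BTC block reward just covers electricity.

    exc: exchange rate in $/BTC; mhj: miner efficiency in MH/J;
    kwh: electricity price in $/kWh (1 kWh = 3.6e6 J).
    """
    revenue_per_hash = 50 * exc / 2**32            # $ per hash, at difficulty 1
    cost_per_hash = kwh / (3.6e6 * mhj * 1e6)      # $ per hash of electricity
    return revenue_per_hash / cost_per_hash

diff = breakeven_difficulty(exc=6, mhj=2, kwh=0.05)
print(round(diff))                                  # ~10.06 million, as above

# Bandwidth sanity check: at $2/Mbit/s/month, one 1 MB (8 Mbit) share costs:
cost_per_share = 8 * 2 / (30 * 86400)               # ~ $6.2e-6 per share
shares_per_block = diff / 12                        # share difficulty 12
print(round(cost_per_share * shares_per_block, 2))  # ~ $5 per block solved
```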

As I write this deepbit is down and the network has gone 30 minutes without confirming a transaction.  This is nuts. I don't think the bitcoin community should continue to tolerate the reliability problems created by large pools.  You're free not to participate in an operating practice, but the network is also free to ignore blocks mined by pools which actively sabotage the stability and security of the network.
5716  Bitcoin / Mining / Re: Think I just solved the pool problem. on: May 20, 2011, 08:13:34 PM
2. Get Bitcoin address for the pool.

The pool should give you N addresses:

One for D=1 work, one for D=6 work, one for D=12 work. etc.

D=6 work pays 6 shares. Etc.  The reason for this is that your scheme uses a lot more bandwidth to transmit shares; this is trivially corrected by increasing difficulty, but that would increase variance to unacceptable levels for slow miners.  By changing the address, miners pre-commit to a target difficulty and the shares will be credited accordingly.

The miner software can then be set up so that it picks the difficulty that gets it close to 1 share per minute... which should end up being less bandwidth than currently used.
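Picking the per-address difficulty is simple arithmetic: a difficulty-D share takes D·2^32 hashes on average, so for one share a minute you'd target D ≈ hashrate·60 / 2^32. A sketch with a hypothetical 400 MH/s miner:

```python
def share_difficulty_for_rate(hashrate_hps: float,
                              target_seconds: float = 60.0) -> int:
    """Integer share difficulty giving roughly one share per target interval."""
    # A difficulty-D share takes D * 2**32 hashes on average.
    d = hashrate_hps * target_seconds / 2**32
    return max(1, round(d))

# Hypothetical 400 MH/s GPU miner aiming for one share per minute.
print(share_difficulty_for_rate(400e6))   # 6
```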

Also, while it would be possible to buffer shares while the pool was down and the pool could choose to pay for stales, I think that's actually a bad idea: it would allow you to get shares on network-offline hosts that aren't really contributing to the success of the pool... plus it would require more coding and storage for the shares.  Instead, when the pool isn't reachable to accept shares you simply switch to using your own address for mining until it comes back.

Annnd... as we mentioned on IRC, pools could still enforce sane behavior on miners (e.g. don't send 1MB blocks of crap transactions) by simply refusing to pay for shares that look like that. So the pool acts as a check on miner behavior, but it can't enforce secret rules, since the miners will need to know them in order to conform.  This somewhat invalidates nanotube's #2 re: fee rules, but I think that's a good thing.  Pools don't get unilateral control but they get some influence, and I think that's good.


Also from IRC, the logical place to put all this would be in bitcoind, not in a miner client. A simple modification to bitcoind would let you change the payout address... then you use normal miners against it.

All work is done locally ...
1) If you're doing all the work local then why bother sending it to a pool?
2) Why should the pool trust the number of shares you worked on? Hack the bitcoin program so that your Pentium III reports the same shares processed as the guy with 5 GH/s.

(1) in order to get pooled payments, silly.

(2) Because you still submit 'shares' (though now with more data) in order to prove that you're working for the pool, same as now.

5717  Bitcoin / Mining / Re: Todays difficulty increase to 156000, where from here? on: May 19, 2011, 04:09:31 AM
If the rate of hashing growth continues at the current ~2.5%/day then my first number becomes 243254

Okay—  so I was off by 0.36%, so sue me.

5718  Bitcoin / Mining / Re: You are threatening Bitcoin’s security on: May 17, 2011, 10:34:53 PM
i meant. if you had the raw processing power to take 51%.. you could make it 50 different pools of 1.02% so it looked like 50 people held 51%... when in truth it was a single person pulling all the strings.
If you had that much processing power, odds are high that you have many other more profitable uses for it than attacking the blockchain.  Like, for example, mining with the blockchain to capture 50% of the coins produced.

You misunderstood the argument.  Let's say that DB is 40%, slush is 30%, eligius is 20%.   You get 11% of the network hashing power, way less than half. Then you DoS attack those three pools. You now have 51% of the _remaining_ hash power.

This attack is much harder with less pool concentration.



I've been hearing this for a while, but since my miner will only compute proofs of work and send those to deepbit, can deepbit really be used to attack the network using only that?

Yes. Your computer can't even see the work it's doing for the pool because the transactions are hashed; it only works on the block header.  But the pool could not be evil completely undetectably. There is an argument that it's much easier for a pool to undetectably skim from its users, so if they were evil they'd do that instead... but lots of arguments against big pools have been given here which don't require evil.



5719  Bitcoin / Mining / Re: FPGA mining for fun and profit on: May 17, 2011, 05:43:25 PM
The high end Spartan-6 has ~150K gates.

Is this type of thing cost effective for miners to buy just for mining? Nope you'd likely never pay off cost of the cluster from your mining.
Is it cost effective for someone who already owns units used for other work? Very, considering each chip only pulls ~5 watts and they're sitting idle.

Yes. 150K LU is not enough to fit an unrolled engine with internally pipelined adders. It's enough to fit _one_ plain unrolled engine, which you'd probably be lucky to get running at 100MHz (=100MH/s).   Maybe you could do some awesome stunts, depending on the platform, and somehow get two in, though I don't see it.

If it's otherwise idle capacity, then fine— it would be profitable. But you're not talking about a huge competitive advantage for anyone yet, certainly not a huge short term competitive advantage.

Quote
The bounty is laughable... A person keeping the code to themselves could profit a lot more than that and keep the competitive advantage.  You may not like it but it's capitalism.

Much more and it simply becomes easier to write it myself.  It's quite simple to write a SHA2-256 engine in verilog, though harder to get it going fast.

To someone who knows the tools and has the development kit handy, it's probably two days of work to get something basic going, though the sky's the limit on optimizations.  I'd offer the use of hardware, but the largest programmable FPGA I own is only 27K LU, which is too small to be interesting for this.

The reality is that someone will eventually do it for love or money and reality trumps capitalism.

Moreover, shaking confidence in the security of bitcoin by securing a large private advantage wouldn't be economically sensible for anyone developing this stuff in any case.
5720  Bitcoin / Mining / Re: You are threatening Bitcoin’s security on: May 17, 2011, 05:32:58 PM
I think we need Tycho to voluntarily block new people until he is below 33% (and that is a high amount already) of the hash rate. And convince the slush guy to do the same too.

Free market man, Tycho should not FORCE people out of his pool...

No, he should just increase the fees further.  He's already the most expensive and still the biggest— even if you make some downtime argument— 2% fee pays for 28 minutes/day in downtime— and deepbit has not been perfectly reliable either. I bet he could take 15% and still be at a third. 0_o

Win win.


The security concerns don't require the operators of the pool to be evil. They could be compromised, for example. And there have already been pool security compromises.

Adding to the security-paranoia arguments the system-reliability argument: if a pool has 50% of the hashrate and it goes down, then we lose 50% of our transaction-processing capacity.  This would be especially bad in difficulty cycles where hashrate had shrunk so that we were already slower than 10 minutes/block.

