Bitcoin Forum
  Show Posts
2341  Bitcoin / Pools / Re: [230GH/s] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: February 12, 2012, 07:01:47 PM
I've been working on a self-contained network-provisioned bitcoind + namecoind + p2pool USB stick.  I had horrible problems with bitcoind getting stuck for minutes at a time.

I came up with a solution, but it isn't pretty.  I run bitcoind entirely out of RAM now.

I have precisely this setup working here...
BTW, I also have Devcoin and Litecoin working... My USB stick has 8 GB...
My machine has 1 GB of RAM...

Cheers!
Thiago

No, you are doing something else.  The bitcoin blk00001.dat file itself is over a gigabyte now, and the index is a couple hundred megs more.
2342  Bitcoin / Pools / Re: [230GH/s] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: February 12, 2012, 05:29:00 PM
I've been working on a self-contained network-provisioned bitcoind + namecoind + p2pool USB stick.  I had horrible problems with bitcoind getting stuck for minutes at a time.

I came up with a solution, but it isn't pretty.  I run bitcoind entirely out of RAM now.
2343  Bitcoin / Bitcoin Discussion / Re: Bitcoin in danger!? Received <Sent!?! on: February 12, 2012, 07:10:24 AM
My guess would be never at all. Testnet is where new code is tested out before it is released.
It has happened on the main network. The following transaction is present in both block 91812 and 91842: http://blockexplorer.com/tx/d5d27987d2a3dfc724e359870c6644b40e497bdc0589a033220fe15429d88599

From Gavin's post, it sounds like they are changing the default client to actually check the database to make sure that a newly created coinbase doesn't collide with a prior transaction. 

I at least hope it will not slow down the bitcoin client, since downloading all the blocks (and verifying them?) already takes too long.

No, those are in the past; it is too late for them.  This would apply only when your node creates a new candidate block in response to a getwork request, and then later when a brand new block comes in over the network, fresh from some dude's GPU.
In order to check whether a generation transaction with the same transaction ID already exists in the block chain, we will obviously have to check past generation transactions. Luckily, only one is created roughly every 10 minutes, so even in 1000 years we will only have to check around 90 million transactions for duplicates (which can probably be done in a fraction of a picosecond in the year 3012).

The absolute worst case for a binary tree search over 256 bits is 256 comparisons.  BDB uses something better, probably a btree (I'm too lazy to look it up).  After 90 million coinbases, the typical search will probably take less than 30 comparisons.  Both of these numbers are trivial and won't add more than a couple of microseconds to block processing time (not counting I/O if the index isn't in memory for some bizarre reason).
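
For the curious, the back-of-the-envelope version in python (my own illustration, nothing from the actual client):

Code:
import math

# Illustrative only: comparison counts for a balanced search over coinbase txids.
coinbases = 90_000_000                     # the "1000 years" figure from above
print(math.ceil(math.log2(coinbases)))     # ~27 comparisons in a balanced tree
print(256)                                 # worst case for a bit-by-bit walk of a 256-bit key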
2344  Bitcoin / Bitcoin Discussion / Re: Bitcoin in danger!? Received <Sent!?! on: February 11, 2012, 01:10:37 AM
More serious issues might be possible when there is a reorg. For example, when one coinbase is disconnected, all of its duplicates will also be disconnected.
Yeah, which in theory means that all the nodes that had the block containing that duplicate coinbase in their main chain would wrongly treat an attempt to spend it as invalid and all those that hadn't would correctly consider it valid. This may be usable to create a persistent fork in Bitcoin - that is to say, one that can never actually resolve itself without a huge amount of manual intervention by everyone involved.

That would take quite a bit of planning and effort, and the attacker would need huge amounts of hashing power plus incredibly good timing.  Oh, and they would only get one shot at getting it right, since that fork is only possible during the moment when the network changes rules.  Plus, I'm not sure what anyone could possibly hope to gain by doing it.

And if it even looked like it was possible that someone might do it, we could randomize the rule change time.  Instead of coding it so that the rules change starting with block # X, we could say that the new rules take effect on the first block following # X where the last 8 bits of the header hash are all zero.  That would be totally unambiguous on the network, but still impossible to predict.
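
Something like this, sketched in python (the height and the byte I test are my own placeholders, not proposed code):

Code:
import hashlib

ACTIVATION_HEIGHT = 200_000   # hypothetical "block # X"

def is_trigger_block(height, header_bytes):
    # A block at or past the target height whose double-SHA256 header hash has
    # its last 8 bits all zero; the new rules would kick in at the first such
    # block (roughly 1 block in 256 qualifies).
    if height < ACTIVATION_HEIGHT:
        return False
    digest = hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()
    return digest[-1] == 0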

Note that under the current rules, if you generate two coinbases with the same transaction ID and wish to spend both of them, you must spend the first one before you generate the second one.  The duplicates in the past can't be fixed.  All in all, this is a very strange quirk of the system that needs to be fixed, but it is not a security issue.
2345  Bitcoin / Bitcoin Discussion / Re: Bitcoin in danger!? Received <Sent!?! on: February 11, 2012, 12:47:17 AM
From Gavin's post, it sounds like they are changing the default client to actually check the database to make sure that a newly created coinbase doesn't collide with a prior transaction. 

I at least hope it will not slow down the bitcoin client, since downloading all the blocks (and verifying them?) already takes too long.

No, those are in the past; it is too late for them.  This would apply only when your node creates a new candidate block in response to a getwork request, and then later when a brand new block comes in over the network, fresh from some dude's GPU.
2346  Bitcoin / Bitcoin Discussion / Re: Bitcoin in danger!? Received <Sent!?! on: February 10, 2012, 04:36:52 PM
My guess would be never at all. Testnet is where new code is tested out before it is released.

Oh, did satoshi test out his very first attempt on the test network before creating the main genesis block?

I thought we were talking about a bug in satoshi's creation, as old as bitcoin itself.


My guess is that there were several genesis blocks while he was working on it, and once everything was ready he released it and the last one he made became The Genesis Block.

As to the overwrite issue, it might be better to think of this as a hash collision.  Think hash function rather than cryptographic hash.

Cryptographic hashes are intentionally very hard to collide, and the default client uses a new address for each generation, so in a way the default client had solved the problem.  But, it didn't expect that someone else might use different software to reuse the same inputs to the hash and end up with the same transaction id.

From Gavin's post, it sounds like they are changing the default client to actually check the database to make sure that a newly created coinbase doesn't collide with a prior transaction.  And in the future, they will set it to reject blocks that include duplicates, which would be a de facto protocol change, and a good one.
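
To make the collision concrete, here is a toy python illustration (the "serialization" is fake, and I'm ignoring the byte-order reversal used when displaying ids): two coinbases built from identical inputs necessarily hash to the same transaction id.

Code:
import hashlib

def txid(raw_tx):
    # Transaction id = double SHA-256 of the serialized transaction.
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).hexdigest()

# Same script, same address, same value => same bytes => same id.
coinbase_a = b"version|coinbase script|pay 50 BTC to 1SameAddressReusedTwice"
coinbase_b = b"version|coinbase script|pay 50 BTC to 1SameAddressReusedTwice"
assert txid(coinbase_a) == txid(coinbase_b)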
2347  Bitcoin / Pools / Re: P2POOL vs. Pooled Mining - something stinks here on: February 10, 2012, 02:22:24 PM
Withholding winning shares is pointless; they can't be used to make blocks or anything else.

Disruption? Nothing would change. Each p2pool user runs his own p2pool client; there aren't servers that can be overloaded or anything like that.

Well, not pointless.  It is a potential economic attack.  Say a major pool saw p2pool as a threat.  They could dump significant hashpower at the pool and withhold blocks.  This would make p2pool seem "continually unlucky", and if the unlucky streak continued long enough, less savvy users would start to doubt the protocol and possibly return to conventional pools.

I doubt any pool is doing this, and p2pool isn't special; they could do the same thing to any small conventional pool.  The good news is that as p2pool gets larger, the "cost" of having any meaningful effect on reward also grows.

Of course, this would make the attacking pool look pretty unlucky too...
2348  Bitcoin / Pools / Re: [200GH/s] p2pool: Decentralized, DoS-resistant, Hop-Proof pool on: February 09, 2012, 03:52:17 PM
BTW,

Code:
2012-02-09 07:02:39.638083 GOT BLOCK FROM PEER! Passing to bitcoind! b8370800 bitcoin: 39ccdd2529b5af67320a57a8c5a68116e14f1fe1dddc9f459b6
2012-02-09 12:34:11.499075 GOT BLOCK FROM PEER! Passing to bitcoind! 929038fc bitcoin: 27aafb3b399eb60be707ceb402259cd4b018c91b4ff41ac5a9
2012-02-09 13:02:31.446763 GOT BLOCK FROM PEER! Passing to bitcoind! 7172afee bitcoin: a445a69dd4c3a3d036804df25331cb015f3170b775320d205f4

I've received payment for the 13:02 GMT+1 block but not for the one at 12:34; how is that possible?

spiccioli.


The 12:34 block payment has still not arrived...

spiccioli

You can search for the block hash on blockexplorer.com.  Here it is.

2349  Bitcoin / Pools / Re: P2POOL vs. Pooled Mining - something stinks here on: February 08, 2012, 05:41:12 PM
I've been working on converting my miners to p2pool.  After seeing today's good luck, I wish I had more than just my prototype rig running.

Also, the whole thing is written in python.  If there is something wrong with it, either from a bug or as a scam, it is just sitting there in the source for everyone to see.
2350  Bitcoin / Bitcoin Discussion / Re: BIP 16 / 17 in layman's terms on: February 03, 2012, 08:34:04 PM
The safety analogy breaks down because people are dying right now while we debate over which of two airbag systems to install.  And the designers of both systems are in agreement that their own system is better and that the other guy's system is going to maybe cause a disaster in the future.  And everyone else is standing around offering their own opinions of highly variable quality about the good and bad points of both systems, most of which are valid to some extent because neither system is really perfect except maybe in the eyes of the designer.  Meanwhile, people are still dying...
2351  Bitcoin / Development & Technical Discussion / Re: Bitcoin protocol - need explanation of couple of things. on: February 03, 2012, 08:22:04 PM
Right, but you don't need to start at the current block and step backwards until you find it.  You can just ask "Does this transaction exist in my database?" and the database will tell you.

The database is probably using a btree to index things so that it doesn't need to walk from block to block either, but what happens in the database is unimportant.
2352  Bitcoin / Development & Technical Discussion / Re: Bitcoin protocol - need explanation of couple of things. on: February 03, 2012, 07:44:48 PM
Hm, after thinking a bit and taking your explanation into account, I came up with this:
When a transaction reaches a miner, he:
1. checks where the prev_out hashes are located in the block chain (starting from the newest block in the chain and going backward) - i.e. the chain is at block 150000 and the prev_out hash for our new TxIn is in block 130000, so the app starts at block 150000 and goes backwards until it hits block 130000 and the proper hash in it. After that he checks the TxOuts in that transaction referenced by our new TxIn
2. checks whether there are any double spends by looking for any TxIns (starting from the TxOuts' locations and going forwards to the newest block in the chain) that have already spent the coins in those TxOuts - i.e. the TxOut is in block 130000 and the transaction is already spent in block 140000, so it goes from block 130000 until it finds the proper TxIn in block 140000.

Am I right in these assumptions? Or is prev_out a hash of the block (+ additional necessary stuff) where the transaction is located, to find it really fast?

It isn't necessary to walk the chain.  There are indexes in the database so you can just search by ID.  And you never need to check the chain for double spends because you check the blocks as they come in and you don't store them if they are invalid (like if they include a transaction that spends an output that you already have seen used).
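
Roughly like this, in python (a toy index of my own, not the client's actual BDB schema):

Code:
# txindex maps txid -> (block height, transaction); it is filled in as blocks arrive.
txindex = {}

def add_block(height, block_txs):
    for tx_id, tx in block_txs:
        txindex[tx_id] = (height, tx)

def find_tx(tx_id):
    # One keyed lookup instead of walking backwards block by block.
    return txindex.get(tx_id)   # None if the prev_out points at nothing we know about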
2353  Bitcoin / Development & Technical Discussion / Re: Bitcoin protocol - need explanation of couple of things. on: February 03, 2012, 05:57:20 PM
Quote
Example:
Other nodes observe that Tx1/output 1 has already been spent, and they reject this transaction.
I've put the word "observe" in bold because I don't know what it really means here. Let's say:
I spend cash from Tx1/output 1 (which is, for example, in block 5 000) in block 10 000 (Tx2), and then I try to spend it again in block 15 000 (Tx3) - what do miners really do? Do they check ALL blocks (and the transactions in them), starting from block 5 000 where it was spent from, to find any inputs for that output? That would consume a LOT of HDD for every transaction verification, and I can't imagine how it would work in the future with a lot of transactions per second and millions of blocks.

They search their local database for transactions that include Tx1.out1 as an input.  If they find a match, that transaction has already been redeemed, and they reject the new one.

Currently the entire block chain, including each and every transaction since the beginning of time, is under a gigabyte.  If it ever grows to the point where it becomes unwieldy, there are things we can do.  But scaling up to the "all commerce in the world" level isn't yet a solved problem.
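
In python terms, the search amounts to something like this (my own simplification, not the real database layout):

Code:
spent_outputs = set()   # (txid, output index) pairs that have already been redeemed

def accept_input(prev_txid, prev_index):
    # Reject any transaction whose input redeems an output we have already seen spent.
    outpoint = (prev_txid, prev_index)
    if outpoint in spent_outputs:
        return False            # Tx1.out1 was already used: a double spend
    spent_outputs.add(outpoint)
    return True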
2354  Bitcoin / Development & Technical Discussion / Re: BIP 16 analysis from a miner's point of view on: February 03, 2012, 05:38:49 PM
During the discussion that was to become BIP16, I suggested that we satisfy Luke by simply sticking a completely pointless OP_NOP at the end... an OP_MAKEITSO, if you will:

09:35 < gmaxwell> sipa: okay, so lets end this argument and add a freeking op_code for this for luke.
[...]
09:36 < gavinandresen> I don't like adding an extra byte to make luke-jr feel better.

But doing this would perpetually bloat the blockchain a bit just so that Luke (and now a few others) would feel better about the cleanliness of the solution.  It's a real cost that will cost bitcoin users real money now and in the future and will contribute to the loss of decentralization as bitcoin grows.

Making an OP_P2SH really seems like the best option.  The objection that BIP16 creates a special case isn't confined to just Luke; plenty of other people also see that as a valid concern.  One name that stands out in my memory because I specifically asked him about it is Tycho.

Of course, Gavin also has objections to OP_P2SH, and while I don't think those objections are fatal (see my reply), I'm not the one doing the work so my opinion doesn't mean shit.

In particular, I think that we could define OP_P2SH in a way that has very strict limits today, but the ability to be used in a more flexible way tomorrow if we determine that it is safe to do so.

For example, we could require that it appear no more than once and, if present, that it be the first (or last) opcode in a script, and reject any transaction that includes it in any other position - or even reject any transaction whose script doesn't exactly match the existing BIP16 template.  This would be pretty simple to add to the existing BIP16 code.
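
For illustration only (OP_P2SH itself is hypothetical, and I'm treating opcodes as plain names), the strict version of the rule is a few lines of python:

Code:
OP_P2SH = "OP_P2SH"   # placeholder marker for the hypothetical opcode

def script_allowed(opcodes):
    # Strict rule: OP_P2SH may appear at most once, and only as the first opcode.
    count = opcodes.count(OP_P2SH)
    if count == 0:
        return True                              # ordinary script, nothing new applies
    return count == 1 and opcodes[0] == OP_P2SH

# script_allowed(["OP_P2SH", "OP_HASH160"])  -> True
# script_allowed(["OP_DUP", "OP_P2SH"])      -> False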

But then down the road, we could gradually relax things as needed.

Say you want to make a transaction that can be redeemed by either of two different P2SH payloads.  You rewrite the first stage script to include two hashes and verify that the script matches at least one of them.  OP_P2SH still must be first, but now there are two possible templates.

Down the road, someone wants to write a conditional script that can be redeemed either with a key or a P2SH payload.  You allow OP_P2SH to be found elsewhere in the script, maybe even inside of a OP_IF block.

What I'm trying to get at is that OP_P2SH could be added today, which would ease some valid concerns, and for relatively low cost, and in a way that allows us room to change in the future.

And for what it's worth, I'm a strong supporter of P2SH, even without OP_P2SH.  My miners are currently set to only use pools that either support or intend to support P2SH very soon, and my experimental p2pool deployment tool defaults to supporting it.  I trust Gavin's judgment on this; my only real disagreement is over how to sell it to the rest of the community.
2355  Economy / Economics / Re: India to pay for Iran's oil in gold on: February 03, 2012, 01:49:10 PM
Gold would add a new wrinkle, but only if physical gold was changing hands at the same time that the physical oil was being (un)loaded.  But most of the "gold" in the world is electronic, in the form of Comex contracts, which are likewise nearly instantly convertible to/from other currencies.

Do you think Iran would accept Comex gold? I don't.

If Iran had wanted to hold gold before, they would have bought gold.  If they want to hold gold now, that would represent a shift in their desire to hold dollars vs. their desire to hold gold, and that would be the news, not how the gold is getting there.
2356  Bitcoin / Bitcoin Discussion / Re: What does Quantum Computing mean for Bitcoin? on: February 03, 2012, 07:15:36 AM
If you want to really get into it, the search space is actually much higher than 2^256.  It is either 2^640 or 2^768, depending on your point of view.  It is the output of the SHA256(SHA256(header)) operation that is 256 bits long.  Oh, and most of the search space is actually invalid.
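
For reference, the thing being searched over is the 80-byte header, hashed twice down to 256 bits (python sketch; prev_hash and merkle_root are assumed to be 32-byte strings already in internal byte order):

Code:
import hashlib, struct

def block_hash(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # 4 + 32 + 32 + 4 + 4 + 4 bytes = 80 bytes = 640 bits in, 256 bits out.
    header = (struct.pack("<L", version) + prev_hash + merkle_root +
              struct.pack("<LLL", timestamp, bits, nonce))
    assert len(header) == 80
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()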

I follow the argument about defining f(x) as the function that gives the answer you want, but in the real world that means that you need a quantum circuit that actually implements not only SHA256(SHA256(x)), but also evaluates it (against variable conditions, no less), and the circuit needs to do it in one clock, with no loops, no registers, etc.  Note that we can't even do this in classical circuits yet, even after like 60 to 80 years worth of progress.  Meanwhile, I think that quantum computers are now capable of double digit addition.  Granted, they can solve all double digit addition problems at once, which is pretty cool.  And for the record, yes, I do know that quantum logic is very different from classical logic.  That doesn't save you from having to build a reversible device that maps 640 inputs to 256 outputs.

You can argue that the search space is really only the nonce, the extraNonce, and a few bits of the timestamp, which greatly reduces the search space.  But that has problems, because it means that you need to build a bigger circuit for f(), and it also means that f() doesn't necessarily have any solutions that return success.  And extraNonce isn't a fixed size, but you can get around that by searching out a (nonce, Merkle root, timestamp) tuple that satisfies the hash criteria, then turning around and solving that hash to find a valid coinbase that matches your generated Merkle hash (N=2^256, M=1), and then if you want the coins, you have to solve that hash (again N=2^256, M=1) to recover a public key, and then you have to invert the elliptic curve to find a private key that can generate that public key.

In the end, I'm pretty confident that we will shift from SHA256 to something else for aesthetic reasons long before (like decades before) an actual real live quantum computer gets turned into a miner.

The (cryptography) literature on this is a bit hard to follow, because most attacks are developed under the assumption that the goal is to find a collision, which is not even remotely what we are concerned with.  For example, the SHA-3 contest explicitly included criteria for resistance to selected preimage attacks by adversaries with quantum computers whose capabilities don't even remotely come close to existing yet.  Attacks like that might cause problems for other parts of bitcoin, but not for mining.

I guess I remain unconvinced that quantum computing means that we can't keep hashing like we have been.  And even if we do have to change, we will have plenty of warning.
2357  Bitcoin / Bitcoin Discussion / Re: Fun with Bitcoin, or how an exploit can hide in plain sight [seclists.org] on: February 02, 2012, 07:04:32 PM
Still, looks like another case where my exponential difficulty requirement for blockchain reorganization idea would come in handy.  It would kill any and all offline mining attacks.
Is there any reason why this wouldn't be implemented?

Ahh.  The fine print...

Turns out that no one is exactly sure what would happen in a few cases.  I have a rough understanding of what would happen most of the time, but nothing formal.  It is possible for sections of the network to become permanently diverged from the rest of the world, but the amount of time needed for that to happen depends very much on the size of the section that gets disconnected.  For example, if the network was split into two halves (by mining power) they could become irreconcilable in a relatively short time, like a few hours.  But who wouldn't notice the hashing power dropping by 50%?  A smaller split is less likely to be noticed, but will take longer to get screwed.  A solo miner that finds a block every month would need a year or more to get hopelessly wedged, and if he doesn't notice that he is really solo for that long, it is his problem.  (The exact times depend on the parameters of the exponential system.)
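
For concreteness, one possible parameterization of the idea (the numbers here are entirely my own and not something I've formally analyzed):

Code:
def work_needed_to_reorg(depth, base=2.0):
    # A competing chain that replaces `depth` of my blocks must carry
    # base**depth times the work those blocks had.  `base` and the exact
    # shape of the curve are the tunable parameters mentioned above.
    return base ** depth

# e.g. with base = 2, undoing 6 blocks would need 64x their original work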

And it would complicate things if we ever needed to fix a bug like the signed/unsigned thing from a while back.  Right now, if a major bug is found, and it takes a day or two to get a fix made, as long as most of the mining pools get together, they can overtake the buggy chain and restore order to the entire network.  Under the exponential plan, each node would require human intervention unless the fix was found rather quickly.  I think that this will gradually become less of an argument as alternate clients gain popularity.  Once the monoculture gives way to the ecosystem, it simply won't be possible to do fixes in that way anyway.

Oh, and then there is the issue of starting up.  An attacker that can control which blocks a new node gets during the download can prevent it from ever meeting up with the rest of the network.  But there is no reason for a new node to be using the exponential system during the initial download anyway, so that isn't much of an issue in reality.
2358  Economy / Economics / Re: India to pay for Iran's oil in gold on: February 02, 2012, 06:28:02 PM
Heh.  Roughly 4 trillion dollars (equivalent) trade on Forex, per day. ...
While the Fed has handed out roughly a bit less than 1 trillion USD.
Compare this to 50 gigabarrels at loosely $100 each.

p.s.
Keep in mind how fiat money works, or rather how it does not work!

Are you suggesting that oil is only purchased using actual physical paper and metal currency?  The reason I ask is that the New York Fed reported $829 billion as the amount of physical currency in circulation.  This is suspiciously close to your "less than 1 trillion USD" figure.

You should look into the different definitions of "money supply", and the different estimates of them.  That is, unless you think that there is a dude driving a forklift with a pallet of cash waiting to meet each tanker as it pulls up to the terminal.
2359  Bitcoin / Bitcoin Discussion / Re: Fun with Bitcoin, or how an exploit can hide in plain sight [seclists.org] on: February 02, 2012, 05:46:17 PM
Quote
A powerful attacker could definitely exploit this; timestamps in the future are rejected and Bitcoin won't generally accept a version of history in which time goes backwards, but otherwise a 51% attacker can choose whatever timestamps they like and can delay releasing their version of the chain to meet the "less than 10 seconds" requirement. It's a very expensive attack but far from impossible.

Yeah.  Like this wouldn't be a huge arrow pointing directly at the attacker.  No one is going to notice a 144+ block reversion of the chain.  Right.

Still, looks like another case where my exponential difficulty requirement for blockchain reorganization idea would come in handy.  It would kill any and all offline mining attacks.
2360  Economy / Economics / Re: India to pay for Iran's oil in gold on: February 02, 2012, 06:38:56 AM
The part of my post that you elided describes exactly why you do not need to hold dollars ...
Yes, I did so on purpose. It does not explain what you think it does.
Consider the amount of USD available; it is anything but unlimited! Thus your "proxy" approach does not work.
Since USD is relatively rare compared to the amount of oil traded (somewhere around 50 gigabarrels or so per year), it is getting expensive.
If you don't get it by now, replace USD with your nose pearls. They will suddenly get very interesting if you could buy oil with them, but they are still very rare (I hope), thus very expensive.

Heh.  Roughly 4 trillion dollars (equivalent) trade on Forex, per day.  50 billion barrels at $100 per barrel is about 5 trillion dollars per year, or less than half of a percent of the Forex volume.
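
Spelled out:

Code:
forex_per_day = 4e12                # ~4 trillion USD-equivalent traded on Forex daily
oil_per_year = 50e9 * 100           # 50 billion barrels per year at $100 each = $5 trillion
print(oil_per_year / (forex_per_day * 365))   # ~0.0034, about a third of a percent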