Bitcoin Forum
Show Posts
5521  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 19, 2013, 02:54:31 AM
but I did put up a 50,000 NUGs bounty last night for anyone to solve the VGB Conjecture.

Well, seeing as you haven't paid your prior promises, I'm not sure why anyone would believe you. Then again, with NUGs going as low as 0.000002 BTC per NUG, a 50,000 NUG reward would be worth ~0.1 BTC or $9 USD.  I am sure the "pro programmers" will be fighting over a chance at those high stakes.

Quote
Does anyone know how it's possible for a 3 day coin to be on an exchange in under 3 days?

Many scamcoins hit an exchange within 24 hours; some have hit an exchange at the time of the public release.  The only exchange carrying NUG is a self-described "exchange for shitcoins".  At this point there are no bids at any price.

http://iceycrypt.com/index.php?page=trade&market=4


Quote
That's a lot of miners, eyeballs and ears listening to nuggets and NUGs rather than their coins.  So they're out there bashing in full force trying to shut this thread down, but so far the harder they try the more people come here.

More delusions of grandeur (and you wonder why your programmer partner deserted you).  Hashrate has fallen about 60% since the first day.  More people aren't coming here; the same group of people are checking in periodically, just like people stop and stare at the person dressed like an idiot at Walmart or slow down to get a good look at an accident on the highway.
5522  Bitcoin / Bitcoin Discussion / Re: Why Bitcoin will never reach mainstream on: July 19, 2013, 12:52:30 AM
People aren't going to like what I'm going to say but it needs to be said regardless.
The only way bitcoin or any other alternative currency will become mainstream is if enough infrastructure is built around it.

Person A sending bitcoins to person B (irreversibly) is simply not going to cut it. There is no inherent trust between both parties. We need 3rd-party businesses (banks, insurers, lenders, exchanges, etc.) to build trust.

I doubt it.  I bought a $1,600 domain using namecheap and sent them irreversible funds by Bitcoin.  OH NOES, was I worried? Did I use a trusted bank as a third party?  Nope.  namecheap is a solid, reputable business and they stand to lose a lot by ripping me off.  I had not a second of doubt/fear sending them the BTC.

Imagine your local power company, amazon.com, newegg, namecheap (insert company you already trust here) asked you to pay with Bitcoins.  Would you have a problem?  I don't think most people would.

Now for the fly-by-night "company" (which isn't even a real company) that nobody heard of until they got out of noob jail and started asking for tens of thousands of bitcoins in "pre-orders": well, yeah, you probably want to escrow that. Then again, if they asked for cash you probably would want to escrow it just the same.
5523  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 19, 2013, 12:47:04 AM
ya i already knew all that. ;D i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

My guess is a lot depends on how much Bitcoin grows and how quickly.  Also bandwidth is less of an issue unless the developers decide to go to an unlimited block size in the near future.  Even a 5MB block cap would be fairly manageable. 

With the protocol as-is, let's assume a well-connected miner needs to transfer a block to peers in 3 seconds to remain competitive.  Say the average miner (with a node on a hosted server) has 100 Mbps upload bandwidth and needs to send the block to 20 peers: (100 * 3) / (8 * 20) = 1.875 MB, so we probably are fine "as is" up to a 2MB block.  With the average tx being 250 bytes, that carries us through to 10 to 15 tps (2*1024^2 / 250 ≈ 8,400 tx per 600-second block, ~14 tps).
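For anyone who wants to check or tweak that math, here is a quick back-of-envelope script.  The numbers are the assumptions above, not protocol constants:

Code:
# Block-propagation budget: how big a block can a miner push to all
# peers inside the competitive window, and what tps does that imply?
upload_mbps  = 100   # assumed upload bandwidth of a hosted node
window_s     = 3     # assumed seconds to reach peers without excess orphans
peers        = 20    # assumed peers the block is sent to directly
avg_tx_bytes = 250   # assumed average transaction size

max_block_mb = (upload_mbps * window_s) / (8.0 * peers)   # 1.875 MB
tx_per_block = max_block_mb * 1024**2 / avg_tx_bytes      # ~7,900 tx
tps          = tx_per_block / 600                         # ~13 tps

print("max block: {:.3f} MB, ~{:.0f} tps".format(max_block_mb, tps))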

PayPal is roughly 100 tps, and using bandwidth in the current inefficient manner at that rate would require an excessive amount of it.  Currently miners broadcast the transactions as part of the block, but that isn't necessary, as it is likely peers already have the transactions.  Miners can increase the hit rate by broadcasting the txs in a block to peers while the block is being worked on.  If a peer already knows of the txs, then for a block they just need the header (trivial bandwidth) and the list of transaction hashes.  A soft fork to the protocol could be made which allows broadcasting just the header and tx hash list. If we assume the average tx is 250 bytes and a hash is 32 bytes, this means a >80% reduction in the bandwidth required during the block transmission window (assumed to be 3 seconds to remain competitive without excessive orphans).

Note this doesn't eliminate the bandwidth necessary to relay txs, but it makes more efficient use of bandwidth.  Rather than a giant spike in required bandwidth for 3-5 seconds every 600 seconds and underutilized bandwidth the other 595 seconds, it would even out the spikes, getting more accomplished without higher latency.  At 100 tps a block would on average have 60,000 txs.  At 32 bytes each, broadcast over 3 seconds to 20 peers, that would require ~100 Mbps: an almost 8x improvement in miner throughput without increasing latency or peak bandwidth.
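Same assumptions in script form (250-byte average tx, 32-byte hashes, 20 peers, 3-second window):

Code:
# Relay savings at block time if only tx hashes are broadcast instead
# of full transactions the peers (most likely) already have.
avg_tx_bytes = 250.0
hash_bytes   = 32.0
tx_per_block = 100 * 600            # 100 tps * 600 s = 60,000 tx

savings   = 1 - hash_bytes / avg_tx_bytes                  # ~87% per tx
peak_mbps = tx_per_block * hash_bytes * 8 * 20 / 3 / 1e6   # ~102 Mbps

print("savings per tx: {:.0%}, peak at 100 tps: {:.0f} Mbps".format(savings, peak_mbps))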

For existing non-mining nodes it would be trivial to keep up.  Let's assume the average node relays a tx to 4 of its 8 peers. Nodes could use improved relay logic to check if a peer needs a block before relaying.   To keep up, a node just needs to handle the tps plus the overhead of blocks without falling behind (i.e. one 60,000-tx block in 600 seconds).  Even with only 1 Mbps upload it should be possible to keep up [ (100)*(250+32)*(8)*(4) / 1024^2 ≈ 0.86 < 1.0 ].

Now bootstrapping new nodes is a greater challenge.  The block headers are trivial (~4 MB per year), but it all depends on how big blocks are and how far back non-archive nodes will want/need to go.  The higher the tps relative to the average node's upload bandwidth, the longer it will take to bootstrap a node to a given depth.
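Rough numbers on bootstrapping (assumed figures, not measurements):

Code:
# Header-only sync is tiny; full-chain sync scales with block size.
header_bytes    = 80.0
blocks_per_year = 365.25 * 24 * 6      # one block per ~10 min, ~52,600/yr

headers_mb = blocks_per_year * header_bytes / 1024**2   # ~4 MB per year

# Time for a new node to download one year of full 1 MB blocks at 10 Mbps:
chain_bytes = blocks_per_year * 1024**2                 # ~51 GB
hours       = chain_bytes * 8 / 10e6 / 3600             # ~12 hours

print("headers: {:.1f} MB/yr; 1 yr of 1MB blocks @ 10 Mbps: ~{:.0f} h".format(headers_mb, hours))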



5524  Bitcoin / Development & Technical Discussion / Re: "watching wallet" workaround in bitcoind (fixed keypool, unknown decrypt key) on: July 19, 2013, 12:12:02 AM
It looks like pywallet has an option to import a watching address.  The public key is entered into the wallet and, as a placeholder, the encrypted private key is just random data.

Based on that, it should be fairly straightforward to have an option where, given an existing wallet.dat, pywallet updates it to a "watching wallet" by replacing all private keys with random data.  Optionally, to prevent accidental unlocking (which may confuse the crap out of bitcoind), the passphrase could be changed to a random value at the same time as well.

Quote
def render_GET(self, request):
    global addrtype
    try:
        pub = request.args['pub'][0]
        try:
            wdir = request.args['dir'][0]
            wname = request.args['name'][0]
            label = request.args['label'][0]

            db_env = create_env(wdir)
            db = open_wallet(db_env, wname, writable=True)
            # Real public key, random bytes as the placeholder "encrypted private key"
            update_wallet(db, 'ckey', { 'public_key' : pub.decode('hex'), 'encrypted_private_key' : random_string(96).decode('hex') })
            update_wallet(db, 'name', { 'hash' : public_key_to_bc_address(pub.decode('hex')), 'name' : "Read-only: "+label })
            db.close()
            return "Read-only address "+public_key_to_bc_address(pub.decode('hex'))+" imported"
        except:
            return "Read-only address "+public_key_to_bc_address(pub.decode('hex'))+" not imported"

https://github.com/jackjack-jj/pywallet/blob/master/pywallet.py#L4176

I have sent jackjack a PM to clarify if this is possible and to discuss the possibility of setting up a bounty.


5525  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 18, 2013, 11:29:53 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
Scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast drives are growing; right now the blockchain keeps all the old information, which is not even needed, and grows indefinitely. How hard is it to understand that this is a non-scalable, non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it, and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue, and the scalability page points that out.  Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory.  Thankfully dust rules will constrain the growth of the UTXO set; however, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk.

The idea that today's transaction volume is held back because of the "massive" blockchain isn't supported by the facts.  Even the 1MB block limit provides for 7 tps, and the current network isn't even at 0.5 tps sustained.  We could see a 1,300% increase in transaction volume before even the 1MB limit became an issue.  At 1 MB per block the blockchain would grow by ~50 GB per year.  It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.
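The math, for anyone who wants to check it (maxed-out 1MB blocks, one every ~10 minutes):

Code:
# Worst-case disk growth under the 1MB block limit.
block_mb        = 1.0
blocks_per_year = 365.25 * 24 * 6            # ~52,600 blocks/yr

gb_per_year  = blocks_per_year * block_mb / 1024    # ~51 GB/yr
years_to_1tb = 1024 / gb_per_year                   # ~20 years

print("~{:.0f} GB/yr, ~{:.0f} years to fill 1 TB".format(gb_per_year, years_to_1tb))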

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive.  Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes".  Instead they will just retain the block headers (~4MB per year) and a deep enough section of the recent blockchain.

so as far as addressing the bandwidth bottleneck problem you are in the off chain transaction camp correct?

No, although I believe off-chain txs will happen regardless.  They happen right now.  Some people leave their BTC on MtGox, and when they pay someone who also has a MtGox account it happens instantly, without fees, and off the blockchain.  Now imagine MtGox partners with an eWallet provider and both companies hold funds in reserve to cover transfers to each other's private books.  Suddenly you can transfer funds between the two services off the chain as well.

So off chain tx are going to happen regardless.

I was just pointing out that among the four critical resources:
bandwidth
memory
processing power
storage

storage is so far behind the other ones that worrying about it is kinda silly.  We will hit walls in memory and bandwidth at a much lower tps than it would take for disk space to become critical.  The good news is last-mile bandwidth is still increasing (doubling every 18-24 months); however, there is a risk of centralization if tx volume grows beyond what the "average" node can handle.  If tx volume grows so fast that 99% of nodes simply can't maintain a full node because they lack sufficient bandwidth to keep up with the blockchain, then you will see a lot of full nodes go offline, and there is a risk that the network ends up in the hands of a much smaller number of nodes (likely in datacenters with extremely high-bandwidth links).

Bandwidth is both the tightest bottleneck AND the one many users have the least control over.  As an example, I recently paid $80 and doubled my workstation's RAM to 16GB.  Let's say my workstation is viable for another 3 years: $80/36 ≈ $2-3 per month.  Even if bitcoind today were memory-constrained on 8GB systems, I could bypass that bottleneck for a mere $3 a month.  I like Bitcoin, I want to see it work, and I will gladly pay $3 to make sure it happens.  However, I can't pay an extra $3 a month to double my upstream bandwidth (and for residential connections upstream is the killer).  So hypothetically, if Bitcoin were not memory- or storage-constrained but bandwidth-constrained today, I would be "stuck": looking at either a much higher cost or more exotic solutions (like running my node on a server).

Yeah that was longer than I intended. 

TL/DR: Yes, scalability will ALWAYS be an issue as long as tx volume is growing; however, storage is the least of our worries.  The point is also somewhat moot because eventually most nodes won't maintain full blocks back to the genesis block; that will be reserved for "archive" nodes.  There likely will be fewer of them, but as long as there are a sufficient number to maintain a decentralized consensus, the network can be just as secure, and users have a choice (full node, full headers & recent blocks, lite client) depending on their needs and risk.


5526  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 11:21:32 PM
I'm not sure that you can do what you described above, i.e. you can't just add a lot of private keys, sign a message and then add up the corresponding public keys and use this to check the signature (if that is what you meant), even if you are using EC point addition. I'll have to have a look when I'm more awake (sober)  :)

No problem. I am sure it can be done.  It is used for deterministic wallets, for example, and for verifiable secure vanity address generation.  It is an interesting property of ECC keys.  I just wanted to know if any crypto experts saw any potential reduction in security, as I have limited knowledge in the field of ECC.  Unless I was drunk, I don't recall it even being covered in college.

You may be right about signing and verifying in the method you described.  I will try some experiments with OpenSSL.  My assumption would be that, if both are possible, then given n keys, n key additions plus one signature (or verification) would be faster than n signings (or verifications).

5527  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 18, 2013, 11:09:29 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
Scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast drives are growing; right now the blockchain keeps all the old information, which is not even needed, and grows indefinitely. How hard is it to understand that this is a non-scalable, non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it, and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue, and the scalability page points that out.  Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory.  Thankfully dust rules will constrain the growth of the UTXO set; however, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk.

The idea that today's transaction volume is held back because of the "massive" blockchain isn't supported by the facts.  Even the 1MB block limit provides for 7 tps, and the current network isn't even at 0.5 tps sustained.  We could see a 1,300% increase in transaction volume before even the 1MB limit became an issue.  At 1 MB per block the blockchain would grow by ~50 GB per year.  It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive.  Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes".  Instead they will just retain the block headers (~4MB per year) and a deep enough section of the recent blockchain.


5528  Bitcoin / Development & Technical Discussion / Re: Any documentation (other than the code) on the format of the wallet.dat file? on: July 18, 2013, 10:21:54 PM
I don't think so.
It's a berkeley db file.

Look at the pywallet code if you want more than one point of view.
kds is the key and vds is the value.

For example, the (key, value) pair for an unencrypted private key would be:
('\x03key\x21\x03\x01\x01\x01...\x01', '\x20\x54\xfd...\x31')
If the public key is '03010101...01' and the private key is '54fd...31'

Thanks, I can follow the Python code a lot more easily than the reference client.

Still, the lack of documentation is kinda sad.  It just means countless hours wasted "relearning" the same things by each developer.
5529  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 18, 2013, 10:08:12 PM
Block 250 has to return a 0 subsidy. Superblocks cannot kick in until some future block number that gives people time to update the client.
Edit: or maybe 250 has to be 0 or a superblock chance; I'd have to find the original code to remember for sure.

You were right the first time.  Since block 250 already exists and it is 0 coins, any change to that will cause a retroactive fork back to block 250.  So block 250 MUST be zero and superblocks must NOT be enabled until some block in the future.

However, at the time jackjack published the fix it was prior to block 250, so if implemented then it would have been fine.  That ship has now sailed, so the "fix" which causes the minimal collateral damage (to existing coin holders and exchange accounts) is one that preserves the existing chain (flaws and all) and "enhances" future blocks only.
5530  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 18, 2013, 10:02:47 PM
Block 250 has to return a 0 subsidy. Superblocks cannot kick in until some future block number that gives people time to update the client.

Edit: or maybe 250 has to be 0 or a superblock chance; I'd have to find the original code to remember for sure.

Block 250 is not supposed to be 0; I quoted r3wt stating this a few pages ago.
And yeah, I wasn't sure if anybody was really using the client, so I just followed what Vlad asked. Also, block 250 would create a hard fork anyway, as the current code makes it 0 and my fix makes it 49.

It doesn't really matter here (given this coin is dead anyway), but from an academic point of view, if a hard fork is necessary it should be a future hard fork.

It doesn't matter what block 250 "should have been"; block 250 on the current longest chain has a value of 0 coins.  If you change that you will cause a catastrophic re-org back in time to block 250.  This is an absolute worst-case scenario.  Once again this is academic because this coin is dead anyway, but it is a good learning lesson.

Once a mistake is made and the chain has moved beyond it, the "fix" is to keep it the same.   As an example, Satoshi had intended the genesis block to be spendable, but a bug in the code prevents it from being spent.  The "fix" is to keep that block unspendable forever.  Trying to correct it to what "should have been" would cause an irrecoverable hard fork if/when someone tried to spend the genesis block.

The mistakes were:
a) block 250 is 0 coins
b) super blocks are not possible through the current block due to a bug.
c) no subsidy decline, just a drop to 0 in 7 years (we will ignore this one because it isn't pressing, although it does ensure this coin is DOA).

The fix is:
a) keep block 250 as 0 coins.
b) keep superblocks disabled through some future block (estimate the time necessary to get a super-majority of nodes to upgrade)
c) implement the superblock fix on blocks AFTER the block in "b"

This will keep the current longest chain valid and allow migration at some future block height to enable the superblocks.  Otherwise you introduce the ability to double spend.  Some users have already received coins in blocks 251+.  Those users have sold them on exchanges (in theory).  Changing the "correct" value for block 250 would cause all nodes running the corrected code to see the entire existing chain from block 250 onward as invalid.  The coins held by exchanges would disappear in the reorg.

Obviously any hard fork is bad, and testing and proper deployment should be done to minimize the need for hard forks; however, if you must hard fork it should only be done in a forward fashion.  Old nodes and new nodes should fork at some block XXXX in the future, not at block yyyy way in the past, which would erase the majority of the existing chain.
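To make the forward-vs-retroactive point concrete, here is a toy sketch; the heights and subsidy values are hypothetical, not NUG's actual code:

Code:
# Toy chain validator: a retroactive rule change invalidates history,
# while a forward fork at a future activation height keeps it intact.
FORK_HEIGHT = 10000   # hypothetical future activation height

def subsidy_deployed(h):        # the buggy-but-deployed rule
    return 0 if h == 250 else 50

def subsidy_retroactive(h):     # "what block 250 should have been"
    return 49 if h == 250 else 50

def subsidy_forward(h):         # keep history; new rule only after the fork
    return subsidy_deployed(h) if h < FORK_HEIGHT else 49

chain = [(h, subsidy_deployed(h)) for h in range(1, 300)]   # existing chain

def valid(rule):
    return all(reward == rule(h) for h, reward in chain)

print(valid(subsidy_deployed))     # True:  old nodes accept the chain
print(valid(subsidy_retroactive))  # False: upgraded nodes reject back to 250
print(valid(subsidy_forward))      # True:  history preserved, fork later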
5531  Bitcoin / Development & Technical Discussion / Docs on the structure and format of the wallet database on: July 18, 2013, 09:38:49 PM
Title says it all: is there any documentation (other than the code) on the structure and format of the wallet database?
5532  Other / Beginners & Help / Re: Can someone explain membership requirements? on: July 18, 2013, 09:28:15 PM
10 posts and 4 hours: ability to post outside the newbie section

This is 1 post and 4 hours.

Didn't it change with the new activity thing?

Yes, it was 5 posts and 4 hours, and now it is just 1 post and 4 hours.
5533  Other / Beginners & Help / Re: Can someone explain membership requirements? on: July 18, 2013, 09:21:30 PM
10 posts and 4 hours: ability to post outside the newbie section

This is 1 post and 4 hours.
5534  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 09:20:28 PM


Given:
private keys a & b
Public keys A & B
Data to be signed d

Is it possible to create a signature S such that it can be verified given only A, B, and d?

Why not sign the data d with private key a and then sign the result with private key b to give you S.
Use public key B then public key A on S to result in data d.

Would this solve the original problem?
Even though, as pointed out, there are distinct items of data so it wouldn't work in practice anyway.

This would be for a new (incompatible) transaction format.  There would be no distinct items; transactions would simply be signed at the tx level.  At this point it is merely academic: I just want to know if it CAN be done and if doing so results in a reduction of security.

I don't believe it is possible to verify a double signature the way you described.  Remember, in verification the entity with the public key isn't recreating the signature and comparing it to the original (if they could do that, they could counterfeit a signature on any data).  The entity doing the verification can only determine whether the signature is valid or not (i.e. true or false).

I may be wrong on this one.

5535  Bitcoin / Development & Technical Discussion / Re: Exhausting the keypool (workaround for "watching wallet" in bitcoind) on: July 18, 2013, 09:16:27 PM
It would be more elegant (and also safer) to literally erase (overwrite with random data) the private keys, instead of encrypting them with an "unbreakable" password.
Maybe jackjack could add such an option to his very useful pywallet tool.

Agreed.  That would be a useful option: "overwrite private keys".  If the overwritten wallet is ever unlocked it will cause issues, but if the wallet remains locked the private keys are inaccessible and bitcoind doesn't know they are overwritten or missing.

An even better solution would be to create and use a watching wallet in bitcoind itself.  The core devs seem reluctant to make changes/improvements to the wallet, since it will be made obsolete by deterministic wallets, but it would be a useful option.  The wallet header could contain a flag to indicate it is a watching-only wallet and contains only public keys.  To avoid significant code changes, the fact that there are no private keys could be hidden by simply encrypting the wallet (not necessary from a security standpoint, but it would make all private-key functions inaccessible without a lot of refactoring).

Quote
The watching wallet cannot create new keys, but the spending wallet can, so in theory you still need to repeat the process once in a while.
Though in practice, if you told it to use a -keypool of thousands, it should take you a while to consume it all.

Agreed; we have already used 5,000-key keypools in the past.  That should be fine for most use cases.
5536  Bitcoin / Development & Technical Discussion / Re: Exhausting the keypool (workaround for "watching wallet" in bitcoind) on: July 18, 2013, 09:06:14 PM
Your logic seems solid, but I do not see any question in the OP.

Of course bitcoind will not fill the key pool if you don't unlock the wallet - that's kind of obvious.
Unless you have just found a critical bug, but the theory is that it cannot even if it wanted to.
 

Sorry, I had it phrased as a sentence.  :-[

Is it correct that bitcoind will always exhaust the keypool and never refill it under any circumstances when it has an encrypted and locked wallet?
Is it correct that bitcoind will always return an error when requesting a new address once the keypool is exhausted and can't be refilled?

Essentially the security (against loss) of funds depends on those two conditions always being true.
5537  Bitcoin / Development & Technical Discussion / Re: C# Node on: July 18, 2013, 08:58:57 PM
Blockchain size is not finished growing and it's pretty large to be storing in a relational data store.  I'd be tempted to just store it in the file system.  Most people are running a logging file system these days so making a backup when doing any work might be sufficient.

For a standard node you are right: there likely is very little use in storing the full blocks in a database.  For efficiency, full nodes generally just validate the block, store the header, and use the block to update the UTXO set; in essence, using full blocks just to build the UTXO set.  Full nodes normally never need the historical blockchain except to build the UTXO set in a trustless manner.  For most nodes a flat file is more than sufficient, and this is why the reference client does just that.

However I think it IS useful as a development platform to parse and store the blockchain in a database.  This is useful for building higher-level tools for analysis.  I imagine that is how sites like blockexplorer and blockchain.info work.


5538  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 08:48:24 PM
Some ECC signing systems can do grouping operations where you can compose all the keys and all the data being signed and validate the whole thing with negligible probability of accepting if any were invalid and zero possibility of rejecting if all were valid.  But AFAIK ECDSA is not such a scheme, and the ones I know of have larger signatures. (uh, though if you were to use it to always merge all the signatures in a block— that might be interesting).

Maybe we are speaking of different things, but doesn't ECDSA allow creating a composite key by simply adding the two private keys together?  The signature (which would be the same size as a signature from an individual key) can be verified by performing the same ECC addition on the public keys.  For example, say a hypothetical transaction (not on the Bitcoin network and not necessarily in the same format) has the following information in it.  At this point the format/structure isn't really important.

Transaction Body:
Version
NumInputs
ArrayOfInputs[]
NumOutputs
ArrayOfOutputs[]

Each Input is in the following format:
PrevTxHash
PrevTxIndex
PubKey

So far this is very similar to Bitcoin; however, there are no scripts in the tx inputs.  Each input simply consists of the information necessary to identify and authenticate it (tx id, index, and public key).  For the moment let's ignore the format of the outputs, as it isn't particularly important for this discussion.  Now, unlike Bitcoin, where each input has a signature (the simplified tx is signed with the private key corresponding to the pubkey listed for the input), could we instead have a single signature for the entire transaction?  Let's also ignore coinbase/generation txs for the moment.

So for a transaction with more than one public key in the inputs, we first create a composite key by performing ECC arithmetic.  Let's call the composite key the tx key.  We would take the list of inputs, remove the duplicate private keys, and create the tx key by adding the remaining private keys together:

tx.privkey = privkey[0] + privkey[1] + ... + privkey[n]

Then we sign the entire tx once with the tx key.    The single tx signature would be appended to the tx.

Version
NumInputs
ArrayOfInputs[]
NumOutputs
ArrayOfOutputs[]
Signature


To validate, we would take the list of inputs, remove duplicate pubkeys, and create a composite key in the same manner as the privkeys:

tx.pubkey = pubkey[0] + pubkey[1] + ... + pubkey[n]

The tx (composite) pubkey will validate the signature created by the tx (composite) privkey.
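Here is a minimal pure-Python sketch of that property on secp256k1 (toy keys, no libraries, absolutely not production code).  Note it only checks the key relationship (a+b)*G == a*G + b*G, not a full ECDSA sign/verify round trip:

Code:
# secp256k1 parameters
P = 2**256 - 2**32 - 977    # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p1, p2):
    if p1 is None: return p2          # None represents the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                   # p1 == -p2
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, P - 2, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P      # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point):            # double-and-add
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

a, b = 123456789, 987654321           # toy private keys
A, B = scalar_mult(a, G), scalar_mult(b, G)   # corresponding public keys

# Composite privkey (a+b mod N) produces the point-sum of the pubkeys:
assert scalar_mult((a + b) % N, G) == point_add(A, B)
print("(a+b)*G == A + B holds")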

This method would be more limiting than the Bitcoin script system; however, it would (with a few other changes) result in up to a 50% savings in bandwidth and storage requirements.  The computing power requirements should be roughly the same.  Given that 99%+ of the transactions to date on the Bitcoin network are "standard" pay-to-pubkeyhash, those are sizable savings.  With the use of versioning, other more advanced txs would still be possible, so an alternative crypto-currency could have the best of both worlds: for "simple" transactions a significant reduction in resource requirements, while still keeping the ability to create more complex transactions.

Is there any security risk to a format like this?  Any reduction in key strength?  I can't seem to find anything that would indicate it is the case but this really isn't my area of expertise.

Quote
... its certainly possible to— in a single transaction— expose a bunch of public keys, compose them, and sign with the composition. But the space reduction is reduced by the need to expose the public keys... and it would make taint analysis more potent because multiple parties cannot securely sign in that model. It's also incompatible with sighash single. If you wanted an incompatible ECC-specific change— you could instead add public key recovery. This would get similar space savings, but also save on transactions with a single input while not breaking the ability to confuse taint analysis with joint transactions, or breaking alternative sighashes.

I assume by "compose" you mean ECC addition of the private keys (as well as a separate ECC addition of the pubkeys), right?  I agree there is a need for a public key for each input to validate the signature; however, the signature is significantly larger.  For Bitcoin the pubkey is 34 bytes compressed (66 bytes uncompressed); an alternative crypto-currency could simply mandate the use of compressed public keys in the protocol.  The signature is 72 bytes.

Bitcoin Tx Input Format
TxOutHash - 32 bytes
TxOutIndex - 4 bytes
ScriptLength - variable (1-9 bytes, 1 most common)
Signature - 72 bytes (including encoding)
PubKey - 34 bytes (compressed keys)
Sequence - 4 bytes
Total: 147 bytes (Signature is 48% of Input size)
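Using those byte counts, a rough sketch of the savings from one composite signature versus per-input signatures (inputs only, ignoring the rest of the tx):

Code:
# Input bytes: TxOutHash 32 + TxOutIndex 4 + ScriptLength 1 + PubKey 34
# + Sequence 4 = 75, plus 72 for a per-input signature.
def input_bytes(with_sig):
    return 75 + (72 if with_sig else 0)

for n in (1, 2, 5, 10):
    standard  = n * input_bytes(True)          # 147 bytes per input
    composite = n * input_bytes(False) + 72    # one 72-byte signature total
    saved = 1 - float(composite) / standard
    print("{:2d} inputs: {} vs {} bytes ({:.0%} saved)".format(n, standard, composite, saved))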

Quote
and it would make taint analysis more potent because multiple parties cannot securely sign in that model. It's also incompatible with sighash single.

Agreed.

Quote
If you wanted an incompatible ECC specific change— you could instead add public key recovery. This would get similar space savings, but also save on transactions with a single input ...

Interesting.  Can you provide information or a reference on public key recovery?

Quote
One thing that I do sometimes wish is that transactions were themselves hash trees internally.  It would be very nice to be able to give someone all the data they need to build a current UTXO securely (and thus also verify that there is no permitted inflation in the chain) without sending them a bunch of deeply buried signature data which isn't personally interesting to them and which they believe has adequate hashpower security to only do randomized spot checks.

I read this a couple of times and still couldn't conceptualize how a hash tree inside a transaction would add security.  I bookmarked it though.
5539  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 18, 2013, 08:09:46 PM
Leaving it offline too long?

Aye, I'm a non-hardcore casual bitcoiner. But that was an example of an issue related to slow downloading/uploading speed. Freshly mined blocks can't be pruned.

If you are a casual user unable to keep the client online, why not just use an SPV client?  You aren't contributing to the decentralization of the network if your node has an uptime of ~3%.
5540  Bitcoin / Development & Technical Discussion / Re: Exhausting the keypool (workaround for "watching wallet" in bitcoind) on: July 18, 2013, 03:47:03 PM
Well, since we have covered everything except the question in the OP, I am going to assume there is no flaw in the workaround logic.