Bitcoin Forum
May 25, 2024, 03:32:56 PM
  Show Posts
5501  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 19, 2013, 04:55:29 AM
He doesn't know how, that was the problem, this lottery system I came up with for the golden blocks nobody has done before, he thought he could do it but it turned out he can't and he can't fix it cause he doesn't know how.  Therefore even if I had given him a month, the outcome would have been the same.

A complete lie.  The golden block is simply a renamed superblock.  The code is almost bit-for-bit identical to LuckyCoin's.  The developer had the code correct, just in the wrong place; literally a cut and paste would have fixed it prior to launch.  A fix now is slightly more difficult because the updated code will need to preserve the errors for existing blocks, otherwise the blockchain will restart.


 I pointed out the exact code from LuckyCoin and NUG. There is absolutely nothing new or innovative about GoldenBlocks.

LuckyCoin

Code:
std::string cseed_str = prevHash.ToString().substr(8,7);
const char* cseed = cseed_str.c_str();
long seed = hex2long(cseed);

int rand = generateMTRandom(seed, 100000);

....

if(rand > 30000 && rand < 35001)
          nSubsidy = 188 * COIN;

https://github.com/LuckyCoinProject/Luckycoin/blob/master/src/main.cpp#L842

Nuggets

Code:
std::string cseed_str = prevHash.ToString().substr(8,7);
const char* cseed = cseed_str.c_str();
long seed = hex2long(cseed);

int rand = generateMTRandom(seed, 100000);

...

if(rand > 50000 && rand < 50011)
          nSubsidy = 10045 * COIN;  //The super block Protocol Random 250x Block Award

https://github.com/nuggets-project/nuggets/blob/master/src/main.cpp#L842

Look familiar?
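For anyone following along, the "golden block" lottery both coins use boils down to a deterministic roll seeded from the previous block hash. A rough Python sketch of the idea (illustration only: Python's Mersenne Twister draw sequence differs from the C++ generateMTRandom helper, and the 49-coin base subsidy is taken from later in this thread, so treat both as assumptions):

```python
import random

def superblock_roll(prev_hash_hex):
    # substr(8,7): seven hex digits of the previous block hash become the seed
    seed = int(prev_hash_hex[8:15], 16)
    return random.Random(seed).randint(1, 100000)

def nug_subsidy(roll):
    # Nuggets: rolls 50001..50010 (10 chances in 100,000) pay a 10045-coin superblock
    if 50000 < roll < 50011:
        return 10045
    return 49  # assumed ordinary subsidy

roll = superblock_roll("00000000a9c3f1b2" + "d" * 48)
assert 1 <= roll <= 100000
```

Note the only real differences between the two snippets are the window and payout: LuckyCoin's 30001-35000 window fires on ~5% of blocks, Nuggets' 50001-50010 window on ~0.01%.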



The developer likely broke off contact because ... you are an ass.  Same reason another developer willing to work for free to keep the coin afloat gave up.  BTW someone pushed an updated fix to the repo 20 hours ago.  Likely whoever it was also just gave up, because you are your coin's own worst enemy.
5502  Other / Beginners & Help / Re: Mining Both SHA-256 & Scrypt on a single GPU on: July 19, 2013, 03:47:41 AM
Hi, very first post.

I understand that SHA-256 uses the core clock of a card and Scrypt uses the cards memory.

So I was wondering if it's possible to mine both scrypt and SHA-256 algorithms on a single graphics card at the same time without worrying about frying the hardware.

Appreciate any feedback.

Cheers.
 

That is incorrect.  Both use the "compute" portion of the GPU; it just happens that scrypt has higher memory requirements.  The memory of a GPU alone will compute exactly nothing.  You can't mine both at the same time on the same hardware.  Scrypt doesn't even use the GPU's "main memory"; it uses about 128 KB (that's right, less than 1 MB) of cache found on the GPU chip itself.
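To put a number on that: scrypt's big working set is a table of roughly 128 * r * N bytes. With the parameters most scrypt coins copied from Litecoin (N=1024, r=1, p=1; an assumption here, not stated in the post), that is 128 KB, small enough to live in on-chip cache. A quick check using Python's OpenSSL-backed hashlib.scrypt:

```python
import hashlib

N, r, p = 1024, 1, 1              # common scrypt-coin parameters (assumed)
scratchpad = 128 * r * N          # size of scrypt's main lookup table in bytes
assert scratchpad == 131072       # = 128 KB, i.e. well under 1 MB

# The same parameters fed to a real scrypt implementation:
digest = hashlib.scrypt(b"header", salt=b"header", n=N, r=r, p=p, dklen=32)
assert len(digest) == 32
```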
5503  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 19, 2013, 03:44:35 AM
Heh.  I'm still waiting for the bitcoin project to get honest and state that not all 'peers' are 'equal peers' in the 'p2p' network.  Somehow it seems not to be a priority.  Funny that.

I think it is simpler than that.  If you aren't a full node you aren't a peer. Period.  All peers are equal, but not all users are peers.  One way to look at it: inter-bank networks are a form of peer-to-peer networking (where access to the network is limited to a select few trusted peers).  If you send an ACH or bank wire you are using a peer-to-peer network, but YOU aren't one of the peers.  The sending and receiving banks (and any intermediary banks) are the peers. 

I think a similar thing will happen with Bitcoin, with one exception.  It doesn't matter what computing power you have available or are willing to acquire: the banking p2p network is a good ole boys club, peons not invited.  With Bitcoin you at least have the CHOICE of being a peer.  In the long run (and this would apply to other crypto-currencies as well) a large number, possibly a supermajority, of users will not be peers.  They are willing to accept the tradeoff of reduced security for convenience and become a user, not a peer, of the network.

TL/DR:
No such thing as less-than-equal peers; you are either a peer or you aren't.  In Bitcoin v0.1, 100% of nodes were peers; today some large x% are, and in time that x% will shrink.  Peers are still peers, but not everyone will want or need to be one.  There is a real cost to being a peer, and that cost (regardless of scalability improvements) is likely to rise over time.

Quote
and the fluff about blockchain pruning was either marketing BS or is de-prioritized (one suspects in order to assist in the formation of 'server-variety-peers' and shifting of non-commercial entities into the 'client-variety-peer' category).

I don't see any support for that claim.  On the contrary ...
https://bitcointalk.org/index.php?topic=252937.0

It is a non-trivial issue.  For complete security we want a large number of independent nodes maintaining a full historical copy of the blockchain.  It doesn't need to be every node, but enough that there remains a decentralized, hard-to-corrupt consensus on the canonical history of transactions.  There is a real risk in a jump to a pruned db model that information is lost or overly centralized.  That doesn't mean the problem is unsolvable; however, it is better to err on the side of caution. 
5504  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 19, 2013, 03:30:42 AM
Quote
Is there any security risk to a format like this?  Any reduction in key strength?  I can't seem to find anything that would indicate it is the case but this really isn't my area of expertise.
I'm not aware of any security harm from doing it.

Thank you.
Quote
Quote
Interesting.  Can you provide information or reference on public key recovery?
If you have a signature and the message it signed and two disambiguation bits you can recover public key used. This saves you from having to transmit the public keys.  We use this today for bitcoin signmessage.

Interesting.  I need to look into that.   So when the original message & signature are known, one can reconstruct the public key.  So why does Bitcoin include the public key in the tx input?  Is there a good reason, or was it simply due to a lack of understanding of how key recovery could be used?  If I understand you correctly, then like in current Bitcoin txs each input would need to be signed, but the public key could be recovered.  This would mean the same 72 bytes per input but the elimination of 34 pubkey bytes per input.  That makes for smaller 1-input txs but larger multi-input txs compared to the composite key, though it is still always smaller than "Bitcoin today".

Code:
Comparison of the size of required pubkeys & signature(s) in bytes, where n = number of inputs.
Bitcoin Today = (72 + 34)*n
Tx Key = 34*n + 72
PubKey Recovery (bytes) = 72*n
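The comparison above can be tabulated in runnable form (72 bytes is the approximate DER signature size and 34 bytes the serialized compressed pubkey, per the figures in the code block):

```python
def bitcoin_today(n):    return (72 + 34) * n   # sig + pubkey per input
def tx_key(n):           return 34 * n + 72     # pubkey per input, one composite sig
def pubkey_recovery(n):  return 72 * n          # sig per input, pubkey recovered

for n in (1, 2, 5):
    print(n, bitcoin_today(n), tx_key(n), pubkey_recovery(n))
# recovery is smallest for n=1 (72 vs 106 bytes); the composite key pulls
# ahead from n=2 onward (140 vs 144), and both always beat today's format
```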

One question: What do you mean by "two disambiguation bits"?


Quote
It would not _add_ security, what it would permit is more flexible security / computation / bandwidth tradeoffs without compromising security. A simple example:  Say you are an SPV node.  ... If the transaction was internally tree structured you could request only the data of interest and still receive proof that that data was committed in the block in question.

Interesting.  Thanks again for your detailed knowledge.  Got me thinking of a lot of alternatives.
5505  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 19, 2013, 03:12:23 AM
the miners will also have to pitch in.

With what?  To date 1704 blocks have been mined.  The first 250 were 0 coins, so that is 71,246 coins owned by miners.   At the current exchange rate that is worth ~$12.82.  That is everything owned by all miners collectively since this scamcoin launched.  So if you rounded up every miner and convinced them to give up everything they mined for the last three days, it would be worth $12, or maybe 9 minutes of a programmer's time.

Once again have fun with your delusions of grandeur.  


Quote
So don't expect $50 per hour unless you show me a PhD.

Most programmers don't have a PhD, and most code written by the largest and best-funded corporations isn't written by programmers with PhDs.  A PhD is mostly theoretical work; what matters in the real world is experience, the tens of thousands of hours of rolling up your sleeves and writing code.  I will be the first to admit (and I am far from the only one) that college didn't prepare me for real-world projects in the slightest.   It was a piece of paper which got me my first job.  Every job after that was based on the work experience.  BTW I have no PhD, so that disqualifies me, but there wasn't a chance in hell I was offering my services.  I would point out I mapped out how a hardfork will need to happen to avoid a historical re-org back to block 250, and I was the first person to notice the superblocks were non-functional (within hours of launch).   The money is irrelevant (as it is to most programmers): I spend countless hours coding boring, tedious portions of projects, and I enjoy writing code I find interesting for free.  Shitcoins aren't what any experienced programmer would call "interesting".


Quote
For a guy with over 10,000 posts I'm surprised you haven't read that anywhere.  Or do you also have a coin and would love to see this one go broke and dead?
I have launched no coin, so stop with the conspiratorial diarrhea.  That is your rebuttal (and I use the word loosely) to any statement by anyone.  I would love to see this chain halt if only to shut you up.  The never-ending delusions of grandeur, the false claims, the self-praise, the accolades about your spiritual, physiological, and intellectual superiority.    It is ... sad, and it likely won't stop until your coin is dead.  At the rate the hashrate is dropping, that shouldn't be long.
5506  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 19, 2013, 02:54:31 AM
but I did out up a 50,000 NUGs bounty last night for anyone to solve the VGB Conjecture.

Well, seeing as you haven't paid out your prior promises, I'm not sure why anyone would believe you. Then again, with NUGs going as low as 0.000002 BTC per NUG, a 50,000 NUG bounty would be worth ~0.1 BTC or ~$9 USD.  I am sure the "pro programmers" will be fighting over a chance at those high stakes.

Quote
Does anyone know how its possible for a 3 day coin to be on an. Exchange in under 3 days?

Many scamcoins hit an exchange within 24 hours; some have hit an exchange at the time of the public release.   The only exchange carrying NUG is a self-described "exchange for shitcoins".  At this point there are no bids at any price.

http://iceycrypt.com/index.php?page=trade&market=4


Quote
That's a lot of miners, eyeballs and ears listening to nuggets and NUGs rather than their coins.  So they're out where bashing in full force trying to shut this thread down but so far the harder they try the more people come here.

More delusions of grandeur (and you wonder why your programmer partner deserted you).  Hashrate has fallen about 60% since the first day.  More people aren't coming here; the same group of people are checking in periodically, just like people stop and stare at the person dressed like an idiot at Walmart, or slow down for a good look at an accident on the highway.
5507  Bitcoin / Bitcoin Discussion / Re: Why Bitcoin will never reach mainstream on: July 19, 2013, 12:52:30 AM
People aren't going to like what I'm going to say but it needs to be said regardless.
The only way bitcoin or any other alternative currencies will become mainstream is if enough infrastructure is built around it.

Person A sending bitcoins to person B (irreversibly) is simply not going to cut it. There is no inherent trust between both parties. We need 3rd party business (banks, insurers,lenders, exchanges etc) to build trust.

I doubt it.  I bought a $1,600 domain using Namecheap and sent them irreversible funds by Bitcoin.  OH NOES, was I worried? Did I use a trusted bank as a third party?  Nope.  Namecheap is a solid, reputable business and they stand to lose a lot by ripping me off.  I had not a second of doubt/fear sending them the BTC.

Imagine your local power company, amazon.com, newegg, namecheap, (insert company you already trust here) asked you to pay with Bitcoins would you have a problem?  I don't think most people would.

Now for a fly-by-night "company" (which isn't even a real company) nobody had heard of until they got out of noob jail and started asking for tens of thousands of bitcoins in "pre-orders": well yeah, you probably want to escrow that. Then again, if they asked for cash you would probably want to escrow it just the same.
5508  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 19, 2013, 12:47:04 AM
ya i already knew all that Grin. i was just wondering how you thought the bandwidth bottleneck problem would be dealt with.

My guess is a lot depends on how much Bitcoin grows and how quickly.  Also bandwidth is less of an issue unless the developers decide to go to an unlimited block size in the near future.  Even a 5MB block cap would be fairly manageable. 

With the protocol as it is now, let's assume a well-connected miner needs to transfer a block to peers in 3 seconds to remain competitive.  Say the average miner (with a node on a hosted server) has 100 Mbps upload bandwidth and needs to send the block to 20 peers: (100 * 3) / (8 * 20) = 1.875 MB, so we are probably fine "as is" up to a 2MB cap.  With the avg tx being 250 bytes, that carries us through to 10-15 tps (2*1024^2 / 250 tx per block). 

PayPal is roughly 100 tps, and handling that while using bandwidth in the current inefficient manner would require an excessive amount of it.  Currently miners broadcast the transactions as part of the block, but that isn't necessary, as peers likely already have the transactions.  Miners can increase the hit rate by broadcasting the tx in a block to peers while the block is being worked on.  If a peer already knows of a tx, then for a block they just need the header (trivial bandwidth) and the list of transaction hashes.  A soft fork to the protocol could be made which allows the broadcasting of just the header and tx hash list. If we assume the average tx is 250 bytes and the hash is 32 bytes, this means a >80% reduction in the bandwidth required during the block transmission window (assumed 3 seconds to remain competitive without excessive orphans).  

Note this doesn't eliminate the bandwidth necessary to relay txs, but it makes more efficient use of bandwidth.  Rather than a giant spike in required bandwidth for 3-5 seconds every 600 seconds and underutilized bandwidth the other 595 seconds, it would even out the spikes, getting more accomplished without higher latency.  At 100 tps a block would on average have 60,000 tx.  At 32 bytes each, broadcast to 20 peers over 3 seconds, that would require ~100 Mbps.  An almost 8x improvement in miner throughput without increasing latency or peak bandwidth.
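Running the numbers from the last few paragraphs (same assumptions as above: 100 Mbps upload, a 3-second transmission window, 20 peers, 250-byte transactions, 32-byte hashes):

```python
upload_mbps, window_s, peers = 100, 3, 20

# Full-block relay: the largest block transferable to all peers in the window.
max_block_mb = (upload_mbps * window_s) / (8 * peers)             # 1.875 MB

# Hash-list relay at 100 tps: 60,000 txids of 32 bytes each per block.
txs_per_block = 100 * 600
hashlist_mbps = txs_per_block * 32 * 8 * peers / window_s / 1e6   # ~102 Mbps

# Relative saving from sending 32-byte hashes instead of 250-byte transactions.
saving = 1 - 32 / 250                                             # ~87% less block data
```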

For existing non-mining nodes it would be trivial to keep up.  Let's assume the average node relays a tx to 4 of its 8 peers. Nodes could use improved relay logic to check if a peer needs a block before relaying.   To keep up, a node just needs to handle the tps plus the overhead of blocks without falling behind (i.e. one 60,000-tx block in 600 seconds).  Even with only 1 Mbps upload it should be possible to keep up [ (100)*(250+32)*(8)*(4) / 1024^2 < 1.0 ].

Now bootstrapping new nodes is a greater challenge.  The block headers are trivial (~4 MB per year), but the rest depends on how big blocks are and how far back non-archive nodes will want/need to go.  The higher the tps relative to the average node's upload bandwidth, the longer it will take to bootstrap a node to a given depth.



5509  Bitcoin / Development & Technical Discussion / Re: "watching wallet" workaround in bitcoind (fixed keypool, unknown decrypt key) on: July 19, 2013, 12:12:02 AM
It looks like pywallet has an option to import a watching address.  The public key is entered into the wallet and as a placeholder the encrypted private key is just random data.

Based on that, it should be fairly straightforward to have an option where, given an existing wallet.dat, it updates the wallet.dat to a "watching wallet" by replacing all private keys with random data.  Optionally, to prevent accidental unlocking (which may confuse the crap out of bitcoind), the passphrase could be changed to a random value at the same time.

Quote
def render_GET(self, request):
    global addrtype
    try:
        pub = request.args['pub'][0]
        try:
            wdir = request.args['dir'][0]
            wname = request.args['name'][0]
            label = request.args['label'][0]

            db_env = create_env(wdir)
            db = open_wallet(db_env, wname, writable=True)
            update_wallet(db, 'ckey', { 'public_key' : pub.decode('hex'), 'encrypted_private_key' : random_string(96).decode('hex') })
            update_wallet(db, 'name', { 'hash' : public_key_to_bc_address(pub.decode('hex')), 'name' : "Read-only: "+label })
            db.close()
            return "Read-only address "+public_key_to_bc_address(pub.decode('hex'))+" imported"
        except:
            return "Read-only address "+public_key_to_bc_address(pub.decode('hex'))+" not imported"

https://github.com/jackjack-jj/pywallet/blob/master/pywallet.py#L4176

I have sent jackjack a PM to clarify if this is possible and to discuss the possibility of setting up a bounty.
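Independent of pywallet, the neutering idea reduces to overwriting each stored private-key field with random bytes of the same length. A toy sketch over a plain dict (all names here are hypothetical; a real wallet.dat is a Berkeley DB of keyed records, as in the excerpt above):

```python
import os

def neuter(wallet):
    """Return a watch-only copy of a {pubkey: encrypted_privkey} map: every
    private key is replaced with 48 random bytes, the same length that
    pywallet's random_string(96) hex string decodes to."""
    return {pub: os.urandom(48) for pub in wallet}

hot = {b"\x03" + bytes(32): b"real-encrypted-privkey-bytes"}
watch = neuter(hot)
assert watch.keys() == hot.keys()                 # public keys preserved
assert all(len(v) == 48 for v in watch.values())  # privkeys now random junk
```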


5510  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 18, 2013, 11:29:53 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
The scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast the drives are growing, right now the blockchain keeps all the old information which is not even needed, and grows indefinitely, how hard is it to understand that it is a non-scalable non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue and the scalability page points that out.  Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory.  Thankfully dust rules will constrain the growth of the UTXO set; however, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk.  

The idea that today's transaction volume is held back because of the "massive" blockchain isn't supported by the facts.  Even the 1MB block limit provides for 7 tps, and the current network isn't even at 0.5 tps sustained.  We could see a 1,300% increase in transaction volume before even the 1MB limit became an issue.  At 1 MB per block the blockchain would grow by ~50 GB per year.  It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.  

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive.  Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes".  Other nodes will just retain the block headers (~4MB per year) and a deep enough section of the recent blockchain.

so as far as addressing the bandwidth bottleneck problem you are in the off chain transaction camp correct?

No, although I believe off-chain tx will happen regardless.  They happen right now.  Some people leave their BTC on MtGox, and when they pay someone who also has a MtGox address it happens instantly, without fees, and off the blockchain.  Now imagine MtGox partners with an eWallet provider and both companies hold funds in reserve to cover transfers to each other's private books.  Suddenly you can transfer funds between customers of either service instantly and off-chain.

So off chain tx are going to happen regardless.

I was just pointing out between the four critical resources:
bandwidth
memory
processing power
storage

storage is so far behind the other ones that worrying about it is kinda silly.  We will hit walls in memory and bandwidth at much lower tps than it would take before disk space became critical.  The good news is last-mile bandwidth is still increasing (doubling every 18-24 months); however, there is a risk of centralization if tx volume grows beyond what the "average" node can handle.  If tx volume grows so fast that 99% of nodes simply can't maintain a full node because they lack sufficient bandwidth to keep up with the blockchain, then you will see a lot of full nodes go offline, and there is a risk that the network ends up in the hands of a much smaller number of nodes (likely in datacenters with extremely high bandwidth links).

Bandwidth is both the tightest bottleneck AND the one many users have the least control over.  As an example, I recently paid $80 and doubled my workstation's RAM to 16GB.  Let's say my workstation is viable for another 3 years: $80/36 = ~$2 per month.  Even if bitcoind today were memory-constrained on 8GB systems, I could bypass that bottleneck for a couple of dollars a month.  I like Bitcoin, I want to see it work, and I will gladly pay that to make sure it happens.  However, I can't pay an extra few dollars a month and double my upstream bandwidth (and for residential connections upstream is the killer).  So hypothetically, if Bitcoin were not memory or storage constrained but bandwidth constrained today, I would be "stuck": I would be looking at either much higher cost or more exotic solutions (like running my node on a server).

Yeah that was longer than I intended. 

TL/DR: Yes, scalability will ALWAYS be an issue as long as tx volume is growing; however, storage is the least of our worries.  The point is also somewhat moot because eventually most nodes won't maintain full blocks back to the genesis block.  That will be reserved for "archive" nodes.  There likely will be fewer of them, but as long as there is a sufficient number to maintain a decentralized consensus, the network can be just as secure, and users have a choice (full node; full headers & recent blocks; lite client) depending on their needs and risk.


5511  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 11:21:32 PM
I'm not sure that you can do what you described above. i.e. you can't just add a lot of private keys, sign a message and then add up the corresponding public keys and use this to check the signature (if that is what you meant) even if you are using EC point addition. I'll have to have a look when I'm more awake (sober)  Smiley

No problem. I am sure it can be done.  It is used for deterministic wallets, for example, and for verifiable secure vanity address generation.  It is an interesting property of ECC keys.  I just wanted to know if any crypto experts saw any potential reduction in security, as I have limited knowledge in the field of ECC.  Unless I was drunk, I don't recall it even being covered in college.

You may be right about signing and verifying in the method you described.  I will try some experiments with OpenSSL.  My assumption would be that if both are possible, then given n keys, n key additions plus one signature (or verification) would be faster than n signings (or verifications).
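The property in question, for the record: for EC keys the public key of (a + b) mod n is the point sum A + B, which is what makes composite keys and deterministic wallets work. A self-contained check with minimal secp256k1 arithmetic (toy code for illustration only; use a vetted library for anything touching real money):

```python
# secp256k1 parameters: field prime, group order, generator point.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(p1, p2):
    # Affine point addition; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point=G):
    # Double-and-add: compute k*point.
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result

a, b = 12345, 67890                      # toy private keys
# pubkey of (a+b) equals pointwise sum of the individual pubkeys
assert scalar_mult((a + b) % N) == point_add(scalar_mult(a), scalar_mult(b))
```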

5512  Bitcoin / Bitcoin Discussion / Re: Once again, what about the scalability issue? on: July 18, 2013, 11:09:29 PM
People, stop saying that scalability is not a problem and writing about how cheap hard drives are.
The scalability is the number one problem stopping Bitcoin from becoming mainstream.
It doesn't matter how fast the drives are growing, right now the blockchain keeps all the old information which is not even needed, and grows indefinitely, how hard is it to understand that it is a non-scalable non-future-friendly scheme?
I am sure the devs know this and are doing their best to address it and I am grateful for that. But saying that it's not a problem is just ignorant and stupid.

Quote
We won't get some real big transaction volume because of this issue.
I can't see how anybody is even arguing against this. I mean, it's even in the wiki: https://en.bitcoin.it/wiki/Scalability

The historical storage is a non-issue and the scalability page points that out.  Bandwidth (for CURRENT blocks) presents a much harder bottleneck at extreme transaction levels, and after bandwidth comes memory, as fast validation requires the UTXO set to be cached in memory.  Thankfully dust rules will constrain the growth of the UTXO set; however, both bandwidth and memory will become an issue much sooner than storing the blockchain on disk. 

The idea that today's transaction volume is held back because of the "massive" blockchain isn't supported by the facts.  Even the 1MB block limit provides for 7 tps, and the current network isn't even at 0.5 tps sustained.  We could see a 1,300% increase in transaction volume before even the 1MB limit became an issue.  At 1 MB per block the blockchain would grow by ~50 GB per year.  It would take 20 years of maxed-out 1MB blocks before the blockchain couldn't fit on an "ancient" (in the year 2033) 1TB drive.  

Beyond 1MB the storage requirements will grow, but they will run up against memory and bandwidth long before disk space becomes too expensive.  Still, as pointed out, eventually most nodes will not maintain a copy of the full blockchain; that will be a task reserved for "archive nodes".  Other nodes will just retain the block headers (~4MB per year) and a deep enough section of the recent blockchain.
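The storage figures quoted above are easy to verify (assuming one 1 MB block every 10 minutes, 80-byte headers, and 250-byte average transactions):

```python
blocks_per_year = 6 * 24 * 365                  # 52,560 blocks at one per 10 minutes
full_gb = blocks_per_year * 1 / 1024            # ~51 GB/year of maxed-out 1 MB blocks
header_mb = blocks_per_year * 80 / 1024**2      # ~4 MB/year of headers
tps_at_1mb = 1024**2 / 250 / 600                # ~7 tps at the 1 MB cap
```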


5513  Bitcoin / Development & Technical Discussion / Re: Any documentation (other than the code) on the format of the wallet.dat file? on: July 18, 2013, 10:21:54 PM
I don't think so.
It's a berkeley db file.

Look at the pywallet code if you want more than one point of view.
kds is the key and vds is the value.

For example, the (key, value) pair for an unencrypted private key would be:
('\x03key\x21\x03\x01\x01\x01...\x01', '\x20\x54\xfd...\x31')
If the public key is '03010101...01' and the private key is '54fd...31'

Thanks I can follow the python code a lot easier than the reference client.

Still the lack of documentation is kinda sad.  It just means countless hours wasted "relearning" the same thing by each developer.
5514  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 18, 2013, 10:08:12 PM
Block 250 has to return a 0 subsidy. Superblocks can not kick in to some future block number that gives people time to update the client.
Edit:  or maybe 250  has to be a 0 or a superblock chance,  have to find the original code to remember for sure.

You were right the first time.  Since block 250 already exists and is 0 coins, any change other than that will cause a retroactive fork back to block 250.  So block 250 MUST be zero and superblocks must NOT be enabled until some block in the future.

However, at the time jackjack published the fix it was prior to block 250, so if implemented then it would have been fine.  That ship has now sailed, so the "fix" which causes the minimal collateral damage (to existing coin holders and exchange accounts) is one that preserves the existing chain (flaws and all) and "enhances" future blocks only.
5515  Alternate cryptocurrencies / Altcoin Discussion / Re: Miner's Official Coin LAUNCH - NUGGETS (NUGs) on: July 18, 2013, 10:02:47 PM
Block 250 has to return a 0 subsidy. Superblocks can not kick in to some future block number that gives people time to update the client.


Edit:  or maybe 250  has to be a 0 or a superblock chance,  have to find the original code to remember for sure.

Block 250 is not supposed to be 0, I quoted r3wt stating this a few pages ago.
And yeah, I wasn't sure if anybody was really using the client so I just followed what Vlad asked. Also block 250 would create a hard fork anyway as the current code makes it 0 and my fix makes it 49.

It doesn't really matter here (given this coin is dead anyway), but from an academic point of view, if a hard fork is necessary it should be a future hard fork.

It doesn't matter what block 250 "should have been"; block 250 on the current longest chain has a value of 0 coins.  If you change that you will cause a catastrophic re-org back in time to block 250.  This is an absolute worst-case scenario.  Once again this is academic because this coin is dead anyway, but it is a good learning lesson.

Once a mistake is made and the chain has moved beyond it, the "fix" is to keep it the same.   As an example, Satoshi had intended the genesis block to be spendable, but a bug in the code prevents it from being spent.  The "fix" is to keep that block unspendable forever.  Trying to correct it to what "should have been" would cause an irrecoverable hard fork if/when someone tried to spend the genesis block.

The mistakes were:
a) block 250 is 0 coins
b) super blocks are not possible through the current block due to a bug.
c) no subsidy decline, just a drop to 0 in 7 years (we will ignore this one because it isn't pressing, although it does ensure this coin is DOA).

The fix is:
a) keep block 250 as 0 coins.
b) keep no superblocks through some future block (estimate the time necessary to get super majority of nodes to upgrade)
c) implement superblock fix on blocks AFTER the block in "b"

This will keep the current longest chain valid and allow migration at some future block height to enable the superblocks.  Otherwise you introduce the ability to double spend.  Some users have already received coins in blocks 251+.  Those users have sold them on exchanges (in theory).  Changing the "correct" value for block 250 would cause all nodes running the corrected code to see the entire existing chain from block 250 onward as invalid.  The coins held by exchanges would disappear in the reorg.

Obviously any hard fork is bad, and testing and proper deployment should be done to minimize the need for hard forks.  However, if you must use a hard fork it should only be done in a forward fashion: at block XXXX in the future, old nodes and new nodes will fork; not at block yyyy way in the past, which would erase the majority of the existing chain.
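The three-part fix above can be sketched as subsidy logic (the fork height and the 49-coin base subsidy are assumptions for illustration, not the actual patch):

```python
FORK_HEIGHT = 10000   # hypothetical: far enough out for a supermajority to upgrade

def fixed_subsidy(height, roll):
    # roll is the deterministic 1..100000 lottery draw for this block.
    if height == 250:
        return 0                       # (a) the existing zero block stays zero
    if height >= FORK_HEIGHT and 50000 < roll < 50011:
        return 10045                   # (c) superblocks enabled only post-fork
    return 49                          # assumed ordinary subsidy

assert fixed_subsidy(250, 50005) == 0         # history preserved
assert fixed_subsidy(9999, 50005) == 49       # (b) bug kept until the fork point
assert fixed_subsidy(10001, 50005) == 10045   # new rules apply forward only
```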
5516  Bitcoin / Development & Technical Discussion / Docs on the structure and format of the wallet database on: July 18, 2013, 09:38:49 PM
Title says it all, any documentation (other than the code) on the structure and format of the wallet database?
5517  Other / Beginners & Help / Re: Can someone explain membership requirements? on: July 18, 2013, 09:28:15 PM
10 posts and 4 hours: ability to post outside the newbie section

This is 1 post and 4 hours.

Didn't it change with the new activity thing?

Yes it was 5 post and 4 hours and now it is just 1 post and 4 hours.
5518  Other / Beginners & Help / Re: Can someone explain membership requirements? on: July 18, 2013, 09:21:30 PM
10 posts and 4 hours: ability to post outside the newbie section

This is 1 post and 4 hours.
5519  Bitcoin / Development & Technical Discussion / Re: A question on ECDSA signing (more efficient tx signing)? on: July 18, 2013, 09:20:28 PM


Given:
private keys a & b
Public keys A & B
Data to be signed d

Is it possible to create a signature S such that it can be verified given only A, B, and d?

Why not sign the data d with private key a and then sign the result with private key b to give you S.
Use public key B then public key A on S to result in data d.

Would this solve the original problem?
Even though, as pointed out, there are distinct items of data so it wouldn't work in practice anyway.

This would be for a new (incompatible) transaction format.  There would be no distinct items.  Transactions would simply be signed at the tx level.  At this point it is merely academic, I just want to know if it CAN be done and if doing so results in a reduction of security.

I don't believe it is possible to verify a double signature the way you described.  Remember, in verification the entity with the public key isn't recreating the signature and comparing it to the original (if they could do that, they could counterfeit a signature on any data).  The entity doing the verification can only check whether the signature is valid (i.e. true or false).  

I may be wrong on this one.

5520  Bitcoin / Development & Technical Discussion / Re: Exhausting the keypool (workaround for "watching wallet" in bitcoind) on: July 18, 2013, 09:16:27 PM
It would be more elegant (and also safer) to literally erase (overwrite with random data) the private keys, instead of encrypting them with the "unbreakable" password.
Maybe jackjack could add such an option to his so useful pywallet tool.

Agreed.  That would be a useful option: "overwrite private keys".  If the overwritten wallet is ever unlocked it will cause issues, but as long as the wallet remains locked the private keys are inaccessible and bitcoind doesn't know they have been overwritten.

An even better solution would be to create and use a watching wallet in bitcoind itself.  The core devs seem reluctant to make changes/improvements to the wallet since it will be made obsolete by deterministic wallets, but it would be a useful option.  The wallet header could contain a flag to indicate it is a watching wallet and only contains public keys.  To avoid significant code changes, the fact that there are no private keys could be hidden by simply encrypting the wallet (not necessary from a security standpoint, but it would make all private-key functions inaccessible without a lot of refactoring).

Quote
The watching wallet cannot create new keys, but the spending wallet can, so in theory you still need to repeat the process once for awhile.
Though in practice, if you told it to used -keypool of thousands, it should take you awhile to consume it all.

Agreed, we have already used 5,000-key keypools in the past.  That should be fine for most use cases.