Bitcoin Forum
  Show Posts
5541  Bitcoin / Pools / Re: [460 GH/s] Eligius pool: ~0Fee SMPPS, no reg, RollNtime, hop OK, BTC+NMC merged! on: October 16, 2011, 08:35:06 PM
First time using this type of pool, so excuse my question

My BTC client says ".. generated 1.166xxx BTC.. matures in XXX blocks ". What does it mean? My balance is still 0.00

When I clicked on the transaction log of the address, there is one transaction to my address with the above amount.

It means what it says... it matures in XXX blocks (starts at 120).  Once that countdown is done your balance will go up. The whole network produces 144 blocks per day on average, so it's about a 20 hour wait.
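
As a quick sanity check on that estimate (Python; assumes the full 120-block maturity is still ahead of you):
# Generated coins mature after 120 blocks; the network averages 144 blocks/day.
blocks_remaining = 120
blocks_per_day = 144
print(blocks_remaining / blocks_per_day * 24)   # 20.0 hours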

5542  Bitcoin / Development & Technical Discussion / Re: Questions about Transaction Protocol Rules on: October 16, 2011, 06:53:32 PM
Thanks log0s!

So for that rule #17, I think just to prevent DOS I will make a simple rule: if the sum of inputs is < 0.01 and transaction fee is < 0.0005 then reject it.  Hopefully something like that is reasonable as a start.

The anti-DOS rule is:

A 0.0005 fee is required if any output is smaller than 0.01 (not the sum of the inputs)  or if the priority is less than 51,000,000.
You calculate the priority as sum(input value * confirmations)/tx_data_size

If your change would be smaller than 0.0005, the wallet is smart enough to give it up as fee to avoid triggering the 0.0005 fee— but it's not generally smart enough to always pick the mixture of inputs required to avoid a fee, and could be improved.  What it currently does is try to find the smallest set of inputs that sums to at least your transaction value, first using only input txns which have 6+ confirms, then 1+ confirms (and your own zero-confirm txns), then using unconfirmed inputs.
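
Very roughly, that selection order looks like the Python sketch below. This is illustrative only — the names are mine, and the actual wallet's coin selection does a more careful subset-sum style search to minimize the excess over the target rather than this simple greedy loop:
def select_inputs(unspent, target):
    # unspent: list of dicts with 'value' (satoshis), 'confirms', 'from_me'
    passes = [
        lambda c: c['confirms'] >= 6,                                   # 6+ confirms first
        lambda c: c['confirms'] >= 1 or (c['from_me'] and c['confirms'] == 0),
        lambda c: True,                                                 # finally, anything
    ]
    for eligible in passes:
        coins = sorted((c for c in unspent if eligible(c)),
                       key=lambda c: c['value'], reverse=True)
        picked, total = [], 0
        for c in coins:
            picked.append(c)
            total += c['value']
            if total >= target:
                return picked, total
    return None, 0   # insufficient funds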

5543  Bitcoin / Development & Technical Discussion / Re: Request to remove getmemorypool before it gets beyond being a release candidate on: October 16, 2011, 03:19:35 AM
kano: fork the client, remove stuff you don't like, problem solved Tongue
I don't agree with his concerns here— merged mining is, in fact, the _cure_ to the problem he's worried about.  But your response is unreasonable and unfair.  Forking the client doesn't solve the problems with _other_ people throwing garbage in the blockchain which you are forced to store forever in order to participate in bitcoin.
Neither does preventing mainline from doing things already allowed.

Enh, I dunno. Having to maintain a separately compiled fork is a hurdle.

Look at things like the deterministic wallet generator here that uses some crack ass and probably insecure novel cryptographic crap— most bitcoin users are insulated from insecure key generators, for the moment, by the fact that they can't import the keys that crap generates.    Not the most solid protection— but don't underestimate the power of the default.

But it's moot because the complaint here is misguided...
5544  Bitcoin / Development & Technical Discussion / Re: Request to remove getmemorypool before it gets beyond being a release candidate on: October 16, 2011, 02:31:19 AM
kano: fork the client, remove stuff you don't like, problem solved Tongue

I don't agree with his concerns here— merged mining is, in fact, the _cure_ to the problem he's worried about.  But your response is unreasonable and unfair.  Forking the client doesn't solve the problems with _other_ people throwing garbage in the blockchain which you are forced to store forever in order to participate in bitcoin.
5545  Alternate cryptocurrencies / Altcoin Discussion / Re: SC Releases his 'white paper', hilarity ensues on: October 12, 2011, 10:00:10 AM
Well, getting to a million SC coins would give you the privilege of mining at a super low difficulty. Over time, more people will become part of that club.
No, you only generate the blocks with no coins in it for you. The real blocks with 32 SC2 will use the normal difficulty.

Not quite—  You generate a normal block, then use your trust account to mine a trusted block right after it (especially easy since it's a minimal difficulty computation).  If someone else had beat you to the punch on the normal block it doesn't matter— the chain with the trusted block is the longest one.  So while the trusted block itself costs you coins it gives you a veto over the identity of the generator of the prior block.

It's even super extra special if you're the operator of this system— because the fee in the trusted blocks goes to the operator, so once he's extracted the few hundred thousand in surplus coin those pre-mined accounts were equipped with, he can simply recycle their fees and maintain them (and their ability to control the winning chain, and thus who the normal block coin goes to) indefinitely.
5546  Bitcoin / Development & Technical Discussion / Re: Protection against keyloggers on: October 12, 2011, 09:51:25 AM
Problem with encrypting the wallet is that an attacker can take millions of guesses a second if they have stolen a copy of it off your computer.
The encryption mechanism in the bitcoin client uses key strengthening to make sure an attempt costs around 0.1s (on your own system). It's possible that the attacker has thousands of units of specialized hardware for cracking passwords, but in general he won't be able to take a million guesses a second.

Still— the point remains, you can't get away with a six-digit numeric PIN here... Smiley
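
Rough arithmetic behind that, assuming the ~0.1 s per attempt figure above and a single ordinary machine (specialized hardware would of course be far faster):
seconds_per_guess = 0.1
pin_space = 10 ** 6                            # six-digit numeric PIN
print(pin_space * seconds_per_guess / 3600)    # ~28 hours to exhaust the whole space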
5547  Bitcoin / Development & Technical Discussion / Re: What Guarantees Are There That The Supply of Bitcoin Will Be Limited on: October 09, 2011, 08:33:00 AM
Now... people said the 51% attack can only double-spend, but I thought that the whole chain and rules were based on consensus, so if 51% said the new rules were that there was more money, even though the client was hardcoded, it would only fork at the point that something hardcoded got ignored (i.e., once we got above the hardcoded limit of 21m). Even then, wouldn't the client just think something was wrong and stop getting a connection? I mean, it's consensus, so if 50% agreed to new rules wouldn't that be the new rules?

The phrase "double spend" is highly misleading, I wish it weren't the common phrase for that attack.  It would be better if we called that attack "reverse and respend":  What someone with a super majority hashpower can do is spend some coin with you, then after its deep enough in the chain that you trust it to be permanent (e.g. at least 6 deep) they release a new chain which starts before your txn was confirmed but spends the same coins someplace else.  The new chain must be longer than the current real one... so to do this with any reliability they need >>50% of the total hashpower.  So effectively they reverse one of their own payments to you by redoing the bitcoin transaction history so the funds when somewhere else instead.  The total amount of coins is conserved.

There isn't any majority voting on the rules. The rules of the bitcoin system are fixed in every copy of the software. When you change them you don't change bitcoin, you create an alternative chain, effectively... because the two systems won't talk unless their rules agree.

Of course, if ~everyone changed their software then things could change— but it's silly to claim that this constitutes a lack of immutability of the rules. After all, if I made an interface to paypal with a bitcoin logo on it, and convinced everyone to change to that, you could hardly say that I changed the bitcoin rules or hold bitcoin accountable for it.
5548  Bitcoin / Development & Technical Discussion / Signature schemes on: October 06, 2011, 07:11:32 AM
Another improvement would be the introduction of the Bernstein signature scheme with a similar security parameter to the existing ECDSA but a much faster verification. Transactions using the cheaper signatures could get a discount on the fees or be allowed more sigOps. To be clear, the scheme I'm thinking of is "A secure public-key signature system with extremely fast verification" from about 2000.
ByteCoin

If you wanted to introduce a signature scheme which was much faster but which required more storage, and then offset that storage with improved pruning, why not hash-based signatures (Lamport or similar tree analogs)?

They are quite fast, have fewer security assumptions than other signature schemes (other signatures also become insecure if H() has the same weaknesses which would break Lamport), have an intuitive security proof, and are strong against proposed QC models (regardless of the real merits of QC attacks, marketing garbage is making the public think that QCs are already a real thing, and the true but misleading assertion that bitcoin would fall to some highly hypothetical very large QC is harmful to public confidence; I don't have a real feel for how harmful it is generally, but "OMG QCs break bitcoin" shows up in IRC every other week or so).

Lamport signatures also allow distributed storage of signature data. You can forget parts of signatures over time at random but still use them for lower-confidence validation of the signature, and you can also partially validate them for a big speedup.
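
For reference, a toy Lamport one-time signature in Python over SHA-256 — just a sketch of the basic scheme mentioned above, not anything from the bitcoin codebase, and a key pair must never sign more than one message:
import os, hashlib

def keygen():
    # secret key: two random 32-byte preimages per message-digest bit
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    # public key: the hashes of those preimages
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def _bits(message):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message, sk):
    # reveal one preimage per bit of H(message)
    return [sk[i][b] for i, b in enumerate(_bits(message))]

def verify(message, sig, pk):
    return all(hashlib.sha256(sig[i]).digest() == pk[i][b]
               for i, b in enumerate(_bits(message)))

sk, pk = keygen()
msg = b"example transaction"
assert verify(msg, sign(msg, sk), pk)

(Merkle-tree constructions are the "tree analogs": they turn many such one-time keys into a single reusable public key.)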




5549  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 03, 2011, 08:16:57 PM
Sliding window is an interesting idea; you'd obviously have to reduce the per-block adjustment factor and the limits to the 2016-th root to get the same overall adjustment speed as bitcoin, but at first look this should work without introducing new vulnerabilities and result in "smoother" adjustments (obv. not really *faster* if you use factors to end up with bitcoin-equiv adjustment rate, but less... well... blocky).

Consider this attack:   I have the ability to isolate the network connectivity of many different nodes and I can pretend to be all of their peers (this isn't an especially hard attack— e.g. it's one people at large ISPs can pull)

I make a one-time investment to fork the chain someplace after the highest of the currently used checkpoints, and mine enough blocks to get it down to a difficulty I can sustain on my own.  Once I've done this I can isolate fresh nodes and get them onto my fantasy chain (the fact that it's the sum-difficulty used for comparison is irrelevant because I've isolated them), and I can satisfy their prudent must-have-n-confirms requirement before they consider a txn safe.

Say I need to reduce it by 1024x to get it to where I can mine it on my own (which is about right, 1/1024 puts 10GH at ~11 minutes/block).  This would currently cost 134,268.75 BTC (the simple forgone income from the same amount of computation: 2016*(50/4^0) + 2016*(50/4^1) + 2016*(50/4^2) + 2016*(50/4^3) + 2016*(50/4^4)).

If you switch to a sliding window with the same overall behavior, your change clamp at each block will need to be 0.25^(1/2016).  You would still need to mine ~10080 blocks, but the total cost would be about 72,615 BTC because far fewer blocks are mined at the 'too high' difficulty.

So it would ~halve the cost of this attack.
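
Both figures can be checked with a few lines of Python (assuming a 50 BTC subsidy and forgone income proportional to the difficulty being mined):
SUBSIDY = 50          # BTC per block

# Current rules: difficulty drops at most 4x per 2016-block retarget,
# so reaching 1/1024 of the starting difficulty takes five full retargets.
stepped = sum(2016 * SUBSIDY / 4**n for n in range(5))
print(stepped)                    # 134268.75 BTC

# Sliding window with the same overall rate: per-block clamp of 0.25**(1/2016),
# so the difficulty (and hence the per-block cost) decays geometrically.
r = 0.25 ** (1 / 2016)
sliding = sum(SUBSIDY * r**i for i in range(1, 5 * 2016 + 1))
print(round(sliding))             # ~72,616 BTC, i.e. roughly the 72,615 figure above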

The clamps in bitcoin are what make these attacks costly,  but the clamps also represent exciting non-linearities in the payout of the system which miners could exploit for increased profits.  The fact that the clamps are hard to reach currently makes it a non-issue, but with a sliding window the clamps would have to be very near a factor of 1.0 to preserve the resistance to these forged chain attacks, so the system would almost always be operating in the non-linear region.

Tightening the clamps to keep the attack cost the same would only worsen the non-linearity.

Moreover, (ignoring the screwed up calculation) the window plus node timestamp enforcement (the limitation against blocks from the future) limits the maximum gain miners can get from lying about the time to a couple percent.  A sliding window would make this worse because it would provide an incentive to lie for every block, and not just the final ones in a cycle.

People need to stop fixating on weirdness in other chains which are more or less irrelevant for bitcoin and realize that the design of bitcoin isn't an accident. Every one of the features of the distributed algorithm has a complicated relationship with all the others.
5550  Bitcoin / Development & Technical Discussion / Re: Proposal to modify OP_CHECKSIG on: October 03, 2011, 03:20:06 PM
Threshold cryptography allows this 'n of m' functionality to be implemented purely in the clients - no script changes needed, no address format changes needed.

No, not really.   You still have to have a trusted party to hold the resultant derived key.   Meaning you can't use 2 of 3 to address "one of my machines is compromised but I don't know which", since if you recover the complete key on the compromised machine you're screwed.
5551  Bitcoin / Development & Technical Discussion / Re: OP_EVAL response on: October 03, 2011, 03:13:14 PM
Delaying fees will not really make a real economic difference.   It's just a transaction cost and in an efficient market the price will adjust so that the outcome to all parties is the same.

One thing I don't like about this but can't solve is that it changes the incentive structure for non-standard transactions.

Today if you try to issue a transaction that no one will forward or mine it just doesn't work and then you say "oh well" and spend your coin another way.

In the brave new OP_EVAL future, you spend coin to a script hash and then only later learn that mining the transaction spending it is difficult.  Instead of saying "oh well" you're out the funds until you can convince someone to mine the transaction... and you only discover this after your funds are tied up.

Presumably we'll be smart with the software and not allow the UI to issue transactions that won't work but I'm still concerned that this may increase the number of large burdensome transactions in the blockchain.   On the other hand, because only the input script will be large there may be some increased helpful pruning potential.

It might help a little if we made public key recovery (which we now have code for as part of Sipa's signing support) something that OP_EVAL scripts could use; at least then scripts spending "must provide four signatures" outputs wouldn't be quite so large.
5552  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANNOUNCE] New alternate cryptocurrency - Geist Geld on: September 13, 2011, 07:08:17 PM
I've always felt that there's no reason to have a limit when adjusting downwards. If the current hashing rate is less than what's required to adjust exactly by /4 then you'll need 2 long retargets to get back to normal speed, but if it can adjust freely then only 1 is needed. Namecoin is currently in this predicament. The next adjustment will be by /4 to 23,500 but the "instant" difficulty (average of the last 120 blocks) is 7,000, so after this very long adjustment cycle there will be yet another long cycle, if the current hash rate remains constant.

Er!!! the downward side clamp is essential for security.

E.g. say the difficulty is a zillion and I want to mine a fantasy fork in order to trick clients that I've isolated into accepting txns that will never be confirmed on the main network.  I wait until near the end of a cycle at the current difficulty, pushing the timestamps into the future... when the adjustment happens on my fork, if there is no limit the new difficulty ends up pretty low and I can then maintain that fork in realtime for as long as I like.

With the limit it would require me mining tens of thousands of blocks at high difficulty to produce a valid chain at a difficulty I could maintain.
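
A simplified sketch of the retarget rule showing what the clamp buys (Python; the real code works in target space and has some well-known quirks, so treat this as illustrative):
TARGET_TIMESPAN = 14 * 24 * 60 * 60       # two weeks, in seconds

def retarget(old_difficulty, claimed_timespan, clamp=True):
    if clamp:
        # each adjustment is limited to a factor of 4 in either direction
        claimed_timespan = max(TARGET_TIMESPAN // 4,
                               min(claimed_timespan, TARGET_TIMESPAN * 4))
    return old_difficulty * TARGET_TIMESPAN / claimed_timespan

# An attacker pushing timestamps far into the future on an isolated fork:
print(retarget(1_000_000, TARGET_TIMESPAN * 1000, clamp=False))   # 1000.0
print(retarget(1_000_000, TARGET_TIMESPAN * 1000, clamp=True))    # 250000.0

With the clamp, each retarget can only cut the difficulty by 4x, which is why the attacker has to grind out tens of thousands of high-difficulty blocks instead of crashing it in a single cycle.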
5553  Bitcoin / Development & Technical Discussion / Re: Fake Bitcoins? on: August 22, 2011, 07:54:49 PM
I really like vector76's hypothesis, and I recognize it's only speculation.  But it sounds like a feasible, targeted attack.   It would also explain why there was no evidence on any others' systems of a forked blockchain.  MyBitcoin was the only node that directly received this invalid block, and most other nodes probably had the soon-to-be-real block before MyBitcoin could forward it to peers.

This still leaves evidence: MyBitcoin could provide a copy of their blocks file, which would still contain the orphan block even if no one else had it.

It's evidence which could be faked, but only at non-trivial cost.
5554  Bitcoin / Development & Technical Discussion / Re: Negative Account Balances and Static Addresses on: August 17, 2011, 01:03:59 AM
RE Static Addresses:  Customers have two different sets of needs.  They can choose to use disposable addresses for transfers / trades, and opt for a "static" address to plug into mining services or merchant solutions, where deposits can be made over extended periods of time.  As Bitcoin matures like other financial systems, this deposit period may extend well into years.

This is all well and good, and also completely meaningless.  If that's what you want to do, then just do it. You don't need any support from the software.

Perhaps you're under the impression that bitcoin will forget about older addresses that it has given you after it gives you a new one? That isn't the case. (I'm just guessing wildly, because I can't figure out what you're thinking.) All addresses your client generates will be remembered forever; any other behavior would lose money.

5555  Bitcoin / Development & Technical Discussion / Re: Most transaction relaying is currently pointless and wastefull on: August 10, 2011, 04:10:16 PM
The original bitcoin design has transactions propagating across the network of peers essentially at random. This was a reasonable design as all peers were approximately indistinguishable.

Because of the inv process, relaying takes very little bandwidth. We're talking about a grand total of a few kbit/sec. Even with enough transactions to fill maximum-size blocks it isn't much bandwidth.
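
Rough numbers behind that claim — all of these are assumptions for illustration: ~250-byte average transactions, a full 1 MB block every ten minutes, and a 36-byte inv entry (type + hash) per transaction announced to a peer:
avg_tx_size = 250                          # bytes, assumed
txs_per_block = 1_000_000 // avg_tx_size   # ~4000 txns in a full block
block_interval = 600                       # seconds

inv_entry = 36                             # bytes per announced txn
inv_rate = txs_per_block * inv_entry / block_interval * 8 / 1000
tx_rate = txs_per_block * avg_tx_size / block_interval * 8 / 1000
print(f"~{inv_rate:.1f} kbit/s of inv traffic per peer, ~{tx_rate:.1f} kbit/s for the txn data itself")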

The relaying also improves security by more broadly propagating evidence of transaction activity (even if no one is currently doing much monitoring).

Most importantly, perhaps, is that it allows nodes to see incoming transactions before they are mined and it allows them to do so without constantly disclosing the identity of the keys they hold.

This is an area where lightweight nodes could probably be more lightweight, but I don't see any advantage in changing this for full validating nodes.
5556  Bitcoin / Development & Technical Discussion / Re: Mining, date / block reward table till 2100 (is this correct ?) on: August 09, 2011, 02:41:41 PM
I had no idea of the specific date when Bitcoin began so I approximated from today's block #.

Is this table correct?  If so, could we add it to the wiki?

No, it's not correct: you've used floating point and suffered from numerical precision problems, and you also started at the wrong point, as you noted. The first block is trivially found; it's coded into every copy of bitcoin. http://blockexplorer.com/b/0

Assuming the precision of bitcoin is not increased:

python one-liner (note: Python 2 integer division, which truncates the subsidy just as bitcoin itself does)
print "".join(["%0.8f %d %0.8f\n"%(5000000000/2**x/float(1e8),(x+1)*210000,sum([5000000000/2**y*210000 for y in range(x+1)])/float(1e8)) for x in range(34)])

50.00000000 0210000 10500000.00000000
25.00000000 0420000 15750000.00000000
12.50000000 0630000 18375000.00000000
06.25000000 0840000 19687500.00000000
03.12500000 1050000 20343750.00000000
01.56250000 1260000 20671875.00000000
00.78125000 1470000 20835937.50000000
00.39062500 1680000 20917968.75000000
00.19531250 1890000 20958984.37500000
00.09765625 2100000 20979492.18750000
00.04882812 2310000 20989746.09270000
00.02441406 2520000 20994873.04530000
00.01220703 2730000 20997436.52160000
00.00610351 2940000 20998718.25870000
00.00305175 3150000 20999359.12620000
00.00152587 3360000 20999679.55890000
00.00076293 3570000 20999839.77420000
00.00038146 3780000 20999919.88080000
00.00019073 3990000 20999959.93410000
00.00009536 4200000 20999979.95970000
00.00004768 4410000 20999989.97250000
00.00002384 4620000 20999994.97890000
00.00001192 4830000 20999997.48210000
00.00000596 5040000 20999998.73370000
00.00000298 5250000 20999999.35950000
00.00000149 5460000 20999999.67240000
00.00000074 5670000 20999999.82780000
00.00000037 5880000 20999999.90550000
00.00000018 6090000 20999999.94330000
00.00000009 6300000 20999999.96220000
00.00000004 6510000 20999999.97060000
00.00000002 6720000 20999999.97480000
00.00000001 6930000 20999999.97690000
00.00000000 7140000 20999999.97690000

(make up your own dates; any dates are going to be somewhat approximate)
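
For readability, here is the same computation as a short Python 3 script, keeping everything in integer satoshis (a sketch equivalent to the one-liner above, not taken from anywhere official):
subsidy = 50 * 10**8          # initial block subsidy, in satoshis
interval = 210_000            # blocks between halvings
total = 0
for halving in range(34):
    total += subsidy * interval
    print(f"{subsidy / 1e8:.8f} {(halving + 1) * interval:7d} {total / 1e8:.8f}")
    subsidy //= 2             # integer halving truncates, as bitcoin's code does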
5557  Bitcoin / Development & Technical Discussion / Re: Where do transaction fees go? on: August 07, 2011, 12:10:39 AM
Now my turn to ask a question:  I notice from looking at the blockexplorer that most transactions don't actually have any fees.  I thought fees were generally required by the miners in order to prevent DoS/penny-flood attacks.  If the zero-fee transactions do actually go through, what is stopping someone from executing such an attack?

I know that the client has a built-in node-disconnect if it starts getting spammed from that node.  But what is stopping the node from connecting to each of 5000 peers and giving them each 20 zero-fee transactions?  Seems like that would be a pretty big inconvenience for the entire network...

You're misunderstanding a bit how it works.  Most transactions don't need to pay a fee at all— but transactions that appear to be indicative of a DOS attack, based on a simple objective criterion, need to pay a fee or most nodes will simply drop them (they won't disconnect; the anti-flooding stuff is totally separate).

It works like this: take the value of each input and multiply it by the number of confirmations it has. Sum that and divide by the size of the txn in bytes.  This is the transaction's priority.  If the priority is less than what a typical 1 BTC txn aged one day would have, then the transaction is low priority and needs a fee. Additionally, transactions with outputs smaller than 0.01 need to pay a fee, just to make open-transaction bloating attacks more costly.

This way, holding the coins itself is the "proof of work" that shows you are probably not attacking.  It's a simple objective measure that can't really be gamed (except by simply using enormous amounts of bitcoin in your attack) and which doesn't require any additional communication or coordination between nodes and doesn't care if you connect to one node or 100.
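
In rough Python, with the threshold written out for a typical ~250-byte transaction aged one day — these constants are my reading of the rule described above, not a copy of the client source:
COIN = 100_000_000                         # satoshis per BTC
PRIORITY_THRESHOLD = COIN * 144 / 250      # 1 BTC * 144 confirms / ~250-byte txn

def needs_fee(inputs, outputs, tx_size_bytes):
    # inputs: list of (value_in_satoshis, confirmations); outputs: values in satoshis
    priority = sum(value * confs for value, confs in inputs) / tx_size_bytes
    tiny_output = any(value < COIN // 100 for value in outputs)   # any output < 0.01 BTC
    return priority < PRIORITY_THRESHOLD or tiny_output

# A 1 BTC input with a day of confirmations in a 250-byte txn sits right at the line:
print(needs_fee([(COIN, 144)], [COIN // 2, COIN // 2], 250))      # False: no fee needed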


5558  Bitcoin / Development & Technical Discussion / Re: 0.01 BTC fee on 0.01 BTC transaction? on: July 31, 2011, 09:02:16 PM
Also, if the coins you're trying to transfer are too fresh in your wallet there is a tax imposed regardless of the amount (they don't want people spending money that hasn't been confirmed as theirs yet)

It's not really about "spending money that hasn't been confirmed as theirs yet"; it's that people trying to DOS attack bitcoin produce transactions which look exactly like that:  They take some large amount of bitcoin and rapidly ping-pong it between a series of addresses, or they take tiny amounts of bitcoin and mass-broadcast it to and from many addresses, or something in between.

The 0.0005 fee (currently about half a cent USD in value) is required in cases where the system can't automatically distinguish your transaction from a DOS attack and so wouldn't otherwise forward it.  You can avoid it by simply letting the inputs in your wallet age, and also, when you can, try to get payments in bigger chunks rather than lots of little tiny ones (like avoiding pennies).




5559  Bitcoin / Development & Technical Discussion / Re: Deterministic wallets on: July 30, 2011, 11:05:16 AM
If this mechanism is being seriously considered, someone needs to make sure it's not encumbered by a patent. I remember when I first heard the notion of a "family of public keys" such that a single private key would allow deriving the corresponding family of private keys, I seem to recall it being patented. That was at least three years ago, I think, so it may not even be covered any more.

FWIW, I may be filing a patent on this, so I'll report back if I find anything problematic.

If I do so, I will be offering it under terms no more restrictive than the Xiph.Org patent license: https://datatracker.ietf.org/ipr/1524/  (e.g. a free license to anyone who doesn't conduct patent litigation against bitcoin users)
 
5560  Bitcoin / Development & Technical Discussion / Re: Why not 10 coins per block and a block every 2 minutes? on: July 30, 2011, 10:58:42 AM
Not to me or the rest of the network.  Such harm is limited to you, the seller who didn't take prudent steps.  Have you ever bought a car from a dealership wherein you were not in the dealership for at least 30 minutes?  This does not qualify as lasting harm in the context of bitcoin itself or the network.

By making the block time faster, the risk from shorter confirmations (or the number of confirmations needed to reduce that risk) is increased. This is a cost to all bitcoin users, especially since users can suffer from a loss of confidence in addition to the loss itself.