Bitcoin Forum
May 25, 2024, 06:39:42 PM
  Show Posts
Pages: « 1 ... 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 [136] 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 »
2701  Bitcoin / Bitcoin Discussion / Re: Dear Agents - you are TOO obvious! $120K sale on SUNDAY? (largest in 9 days!) on: July 17, 2011, 06:13:31 PM
EDIT: Sorry about the spelling, as you can probably guess from my username - English is my 2nd language. Please let me know about any other speller-proof errors. Thanks. (and feel free to make fun of me as long as you still enjoy it Smiley)
That's no excuse.

Not sure if you were serious in your request, but here goes...

Quote
Naturally, the group of those who really understand Bitcoin and it's importance is constantly growing.
its.

Quote
but you have to do better then that.
than.

Quote
Yours trully,
truly.
2702  Other / Beginners & Help / Re: Isn't this a massive vulnerability built into the system? on: July 16, 2011, 06:45:20 PM
oh, so if the overall odds are against you, you want lower variance or you're more likely to have an overall loss.  Like if your goal is 60% heads in coin flips, only flip 3 coins cuz if you flip 100, it's not gonna happen.
Depends... But generally in a situation like you describe you want higher variance. 3 flips is higher variance than 100 flips.
2703  Economy / Economics / Re: Bitcoin loan payment formula (WARNING: MATHS AHEAD!) [FORMULAS FIXED] on: July 16, 2011, 06:05:47 PM
...
Something about the x*(1+d)^(-k) formula didn't seem right, either. After a while and a lot of wrangling and testing the algebra, I figured out that, due to this step calculating deflation, it should be -d, and to push the x value into the future instead of the present, the k should be positive. Final formula for that is
...
In copyable plaintext format, the formula is
((1-d)^k P(d+i))/((d-1) (-1+(1+i)^(-n) (1-d)^n))
...
The sad thing is, that is NOT a very pretty formula. But at least it works.
My derivation, as I explained, is based on the assumption that $1 at year 1 is equivalent to $(1+d) at year 0. It looks like you wanted $1 at year 0 to be equivalent to $(1-d) at year 1. So, let R = (1+i)/(1-d), and use the formula P_0*[(R-1)/(1-R^(-n))]*(1-d)^k.

It shouldn't matter too much, because 1/(1-d) = 1 + d + O(d^2).
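As a quick numerical sanity check of that last claim, here's a Python sketch comparing the two conventions using the example numbers from earlier in the thread (P=$10K, i=5%, d=3%, n=10):

```python
# Compare the two deflation conventions:
#   mine:  R = (1+i)(1+d), payments x*(1+d)^(-k)
#   yours: R = (1+i)/(1-d), payments x*(1-d)^k
P0, i, d, n = 10000.0, 0.05, 0.03, 10

R1 = (1 + i) * (1 + d)
x1 = P0 * (R1 - 1) / (1 - R1 ** -n)
pay1 = [x1 * (1 + d) ** -k for k in range(1, n + 1)]

R2 = (1 + i) / (1 - d)
x2 = P0 * (R2 - 1) / (1 - R2 ** -n)
pay2 = [x2 * (1 - d) ** k for k in range(1, n + 1)]

# Since 1/(1-d) = 1 + d + O(d^2), the two schedules agree up to ~d^2 effects:
max_rel_diff = max(abs(a - b) / a for a, b in zip(pay1, pay2))
assert max_rel_diff < 0.01
```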
2704  Other / Beginners & Help / Re: Isn't this a massive vulnerability built into the system? on: July 16, 2011, 06:05:07 PM
Well since we're on the topic... well, off the topic Tongue ummm, I have a theoretical question that's been bugging me because I suck at remotely complicated probability calculations.

If you were to win, let's say, $51 on a $1 scratch-off for a profit of $50, and because you're dumb, you decide to take the payout in 100% more lottery tickets  Cheesy Grin

Is it more beneficial to get 5 $10 tickets or 25 $2 tickets?  I know in the real world the payout-vs-odds ratio scales up slightly unevenly on more expensive tickets, but it's at least sort of close, so ignore that.  In fact, let's say the average overall win on the $10 one is 500x the card's value, with overall average odds of 1 in 1000.  The $2 tickets have an average overall win of 50x the card's value and average overall odds of 1 in 100.  So they look identical, cuz the $2 one has 10x better odds of winning 10x less money.  BUT the $2 choice gives you more tries, each for less money, and all the tries are going after the same high-value jackpots, of which there are a finite number; so they're not independent tries, they're sequential ones, and the odds get better the more you have, making them appear superior to buying fewer cards of a higher value.  So would the $2 one be a significantly better choice, or the $10 ones?
If you assume that:
1. There are few tickets in total, so buying many tickets has a noticeable effect on your odds, and
2. You stop buying tickets if you find a winning one,
Then there's an advantage to the $2 tickets.

Regardless, you also need to consider the variance. With the $10 tickets you have more variance; whether that's good or bad depends on your perspective. If you don't want variance, you're better off not buying any tickets at all.
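For concreteness, here's a Python sketch of the expectation and variance of the two options, using the numbers from the question and treating tickets as independent (i.e., ignoring the finite-ticket effect from point 1):

```python
# $10 tickets: win 500x the price with odds 1 in 1000.
# $2 tickets:  win 50x the price with odds 1 in 100.
def ev_var(n_tickets, price, mult, p):
    win = mult * price
    ev_one = p * win
    var_one = p * win ** 2 - ev_one ** 2
    return n_tickets * ev_one, n_tickets * var_one  # independent tickets

ev_a, var_a = ev_var(5, 10, 500, 1 / 1000)   # 5 x $10
ev_b, var_b = ev_var(25, 2, 50, 1 / 100)     # 25 x $2

assert abs(ev_a - ev_b) < 1e-9  # identical expectation: $25 either way
assert var_a > var_b            # but the $10 tickets carry far more variance
```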
2705  Other / Beginners & Help / Re: Isn't this a massive vulnerability built into the system? on: July 15, 2011, 03:57:52 PM
But I would think everyone would use an incrementing function instead of a random one cuz there's got to be many times more math involved in generating a random number than just adding 1 to the last number.  I mean, it like reads a value off the clock and does some whole big equation thing and then spits it out based on the range you're looking for and then hashes it.  That would throw my hash rate out the window, not to mention I doubt the clock's value interval would be small enough to actually change between hashes at, let's say, 250 million hash calculations per second.  I dunno, maybe it's based on 1 billionth of a second.
That's true, generating a new pseudorandom number for every hash wouldn't be very efficient. But the clock granularity has little to do with it, it's just used as a seed. Once you have the seed you can generate as many pseudorandom numbers in a sequence as you want.
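A sketch of what I mean, in Python (the particular PRNG doesn't matter; this is just to illustrate that the clock is read once, as a seed):

```python
import random
import time

# The clock is consulted exactly once, to seed the generator...
seed = time.time_ns()
rng = random.Random(seed)

# ...after which an arbitrarily long deterministic sequence is produced
# with no further clock reads:
stream = [rng.getrandbits(32) for _ in range(10)]

# Re-seeding with the same value reproduces the exact same sequence.
rng2 = random.Random(seed)
assert stream == [rng2.getrandbits(32) for _ in range(10)]
```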
2706  Bitcoin / Development & Technical Discussion / Re: Why is the block nonce only 32 bits? on: July 15, 2011, 01:37:44 PM
It would be nice if pool operators could tune the share difficulty to be higher with more variance and lower server load, vs lower share difficulty for less variance and higher server load.
I'd say we're nowhere near the point where this is worth the additional complexity and confusion, in particular with regards to scoring. People have a hard enough time understanding how shares should be scored as it is.
2707  Bitcoin / Development & Technical Discussion / Re: Why is the block nonce only 32 bits? on: July 15, 2011, 01:30:55 PM
... it would make more sense to have it longer, it could decrease the overhead ...
Surely a longer nonce would increase the overhead, because more bytes must be processed on every hashing attempt.
That's not overhead, that's just a uniform increase in hashing difficulty which has no effect.

I'm thinking mostly about the communication complexity between mining pools and participants. Bigger nonce = fewer getwork requests.

Also, as it is a proof of work, there is no real reason to make it easier / reduce the overhead per try...
There is a reason to make it not harder for honest miners than for attackers.

Anything which improves the efficiency in practice, without changing the theoretical maximum an attacker can achieve, is welcome. Attackers will probably run large datacenters and not be affected much by the things I refer to as "overhead".
2708  Bitcoin / Development & Technical Discussion / Re: Why is the block nonce only 32 bits? on: July 15, 2011, 11:32:22 AM
Doesn't matter much, after the nonce is exhausted the merkle root is changed (via the extranonce). But I agree it would make more sense to have it longer, it could decrease the overhead and doesn't seem to have serious disadvantages.
2709  Other / Beginners & Help / Re: Isn't this a massive vulnerability built into the system? on: July 15, 2011, 09:38:32 AM
Your mistake is vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But that's implementation details.

Thanks dude with more technical bitcoin knowledge than me!

Helped me understand the inner workings of bitcoin more.
Oh, I don't know anything myself, I just quote what I read here Smiley.

Wait wait wait a minute.  If they're using non-repeating values of any sort, then....how are clients not "making progress" towards a low enough hash value? If they're leaving behind non-matching hashes that they're not going to try again, obviously they are making progress then because there are less total possible hashes left to try.
Implementation details. If the nonce was longer (say 128 bits), you could just pick nonces randomly. If the Merkle root was easy to change, you could just use a different random Merkle root and nonce for every hash. But a compromise was reached with a short nonce which is checked sequentially, followed by a random change in the Merkle root. There's "no progress" in the sense that if you hashed for X minutes and didn't find a block, you aren't any closer to finding one than you were in the beginning. (That you don't try the same header twice only means that your progress isn't negative.)

But also, how is everyone working with different block contents?  You mean previous block contents or current block contents?  It has to be based on the block before it so does everyone just grab a random partial piece of the last block and not all of it then when it claims to have a legit block, it tells the verification system what specific chunk it used?  Or is the difference the transactions they're processing and the transactions are included in the about-to-be-hashed chunk of data and each transaction only gets grabbed by one pool so each pool's attempted block is different?
You say you read the documentation, yet you keep asking these very basic questions.
This page details the contents of a block header. One of the fields is the hash of the previous block; this much doesn't change between different miners at the same time. The main thing that changes is the Merkle root, a hash of a data structure of all of the transactions to be included in the block. Assuming everyone knows about and is willing to include all transactions, most of the data doesn't change between different miners. What does change is the generation transaction: everyone uses an address of his own in it. Also, there's an "extra nonce" in this transaction which can be chosen freely. Hash functions being what they are, every such change alters the Merkle root beyond recognition. Thus, the Merkle root can be for all purposes chosen randomly.

This will all become so much clearer to you if you hang around http://blockexplorer.com/ for a while.
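If you want to see the "alters the Merkle root" part in action, here's a toy Python sketch. The leaf format is made up for illustration; real Bitcoin serializes transactions quite differently.

```python
import hashlib

def h(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def toy_merkle_root(leaves):
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # Bitcoin duplicates an odd last node
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

txs = [b"tx-alice-pays-bob", b"tx-carol-pays-dave", b"tx-erin-pays-frank"]
# Varying only the extra nonce in the (toy) generation transaction:
roots = {toy_merkle_root([b"generation extranonce=%d" % en] + txs)
         for en in range(256)}
assert len(roots) == 256  # every extra nonce yields a completely different root
```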
2710  Other / Beginners & Help / Re: question about solo mining mechanics on: July 15, 2011, 09:37:22 AM
Someone could just write a rigged client that would not generate random values and instead use sequential values so none are ever tried twice for any given block, giving them a ridiculously large advantage over everyone else. If you're wondering how I got to that conclusion, see my other post.
I replied there with an explanation of your mistake.

EDIT: oh, apparently they all do that despite lots of reports that it's trying "random" values.
Correct. I wondered whether I should have clarified that, maybe I could have spared you writing that huge post. Like I mentioned, these are just implementation details and it didn't seem to matter.
2711  Other / Beginners & Help / Re: Isn't this a massive vulnerability built into the system? on: July 15, 2011, 09:34:56 AM
Your mistake is vast underestimation of the space of possible hashes. The block header is 640 bits long, of which 288 bits can be chosen more or less freely (Merkle root and nonce), meaning there are 2^288 different headers to try. If someone chooses these completely randomly and calculates a quintillion hashes (10^18), the chance of a collision (trying the same header twice) is less than 10^(-50). So, for all purposes, the hash space is infinite, there is no chance of collision, and random hashing is just as good as sequential.

Now, more technically, the nonce is the easiest part to change, and since it's only 32 bits, it does get checked sequentially (if you tried millions of nonces randomly you would get collisions). But that's implementation details.
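The 10^(-50) figure follows from the standard birthday bound; here's the arithmetic as a Python sketch:

```python
from fractions import Fraction

N = 10 ** 18      # number of randomly chosen headers hashed
space = 2 ** 288  # freely choosable header space (Merkle root + nonce)

# Union (birthday) bound: P(any two of N draws coincide) <= N(N-1)/2 / space
p_collision_bound = Fraction(N * (N - 1), 2) / space
assert p_collision_bound < Fraction(1, 10 ** 50)
```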
2712  Other / Beginners & Help / Re: question about solo mining mechanics on: July 15, 2011, 08:26:02 AM
I think the question was: since you need the previous block to find the hash for the next one, isn't everyone trying to find the same block at each moment?

That and a whole lot of others now.  This is certainly different than I imagined after I read awfully close to 100% of the documentation in various places and I'm a programmer with a math tutoring background Tongue Can anyone explain the exact mining process in overly-simplified, almost cartoonish objects? Tongue

The chain is a straight line of blocks, right?

You say if the last block is #136352, everyone is trying to find block #136353.  And "find" means take data from the last block and mix it with a different random value every attempt and then hash it. So they all start chugging away and if the hash is below [insert low hash value based on the current difficulty rating here] then you made block 136353.  Other people verify it and tada, 50 bitcoins + transaction commissions for everyone involved proportionate to the work they did.  But then the current block is incremented to #136353 which was just created and every pool has to hash based on data from that new block, which means dumping the old block's calculations but it's not detrimental because the probability of finishing first remains the same as it was for the last block.
Sounds about right. Don't forget everyone also chooses what transactions to include, puts them in a Merkle tree and uses its root as part of the hashed block header.
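The whole loop, in cartoonish Python (toy difficulty and made-up header fields; a real header has more fields and a vastly harder target):

```python
import hashlib
import os

# Toy target: a valid hash must be below 2^240, i.e. roughly one success
# per 2^16 tries. The real network target is astronomically lower.
target = 2 ** 240
prev_block_hash = os.urandom(32)  # stand-in for the previous block's hash
merkle_root = os.urandom(32)      # stand-in for this miner's Merkle root

def block_hash(nonce):
    header = prev_block_hash + merkle_root + nonce.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

nonce = 0
while int.from_bytes(block_hash(nonce), "big") >= target:
    nonce += 1  # sequential nonce search, as in the real client

assert int.from_bytes(block_hash(nonce), "big") < target  # "block found"
```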

Which sort of suggests something unbelievably unwise about how this system works, not to mention a massive vulnerability but that's more for another post Tongue
What's unwise about it?
2713  Economy / Economics / Re: Bitcoin loan payment formula (WARNING: MATHS AHEAD!) [FORMULAS FIXED] on: July 15, 2011, 08:09:52 AM
I don't know the standard terminology for this, so I'll specify my modeling assumptions explicitly.

Let's say you borrow a principal of P=$10K at the end of year 0 at interest i=5%. You are expected to return it at the end of year n=10. Your first payment is at the end of year 1. You pay an interest of 5%*$10K=$500 and some principal, say $1000, for a total of $1500. However, because of d=3% deflation, paying it feels like paying $1545 would have felt at the end of year 0. You now have $9000 principal left, so at the end of year 2 you pay 5%*$9K=$450 interest, and say $1000 principal again, for a total of $1450, which feels like $1450*1.03^2=$1538.3. And so on.

What we are looking for is a payment scheme that ensures every payment feels the same. So let's call this feeling-equivalent x, we are looking for the x which makes the principal owed 0 after n years.

For an equivalent of x at the end of year k, the actual payment is x*(1+d)^(-k). So for the principal at the end of year k you have P_k = P_{k-1}(1+i) - x*(1+d)^(-k). Denoting R = (1+i)(1+d), this has general solution P_k = A(1+i)^k + [x(1+d)^(-k)]/(R-1). Because P_n=0 we have A = (-xR^(-n))/(R-1) and hence P_k = x * [(1+d)^(-k)-(1+i)^kR^(-n)]/(R-1). Letting k=0 gives us x = P_0*(R-1)/(1-R^(-n)), and so the actual payment at end of year k is P_0*[(R-1)/(1-R^(-n))]*(1+d)^(-k).

I don't currently have access to my CAS so you'll have to verify these calculations. But the fact that for d=0 you have R=1+i and this reduces to the original formula is encouraging.
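In lieu of a CAS, here's a quick numerical verification in Python, using the example numbers from above:

```python
P0, i, d, n = 10000.0, 0.05, 0.03, 10
R = (1 + i) * (1 + d)

# Feeling-equivalent payment from the derivation:
x = P0 * (R - 1) / (1 - R ** -n)

# Run the recurrence P_k = P_{k-1}*(1+i) - x*(1+d)^(-k) forward:
P = P0
for k in range(1, n + 1):
    P = P * (1 + i) - x * (1 + d) ** -k
assert abs(P) < 1e-6  # the principal is exactly paid off after n years

# With d=0, R=1+i, so x reduces to the ordinary annuity payment:
R0 = 1 + i
x0 = P0 * (R0 - 1) / (1 - R0 ** -n)
assert abs(x0 - P0 * i / (1 - (1 + i) ** -n)) < 1e-9
```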

Now for your questions:

1) 5% + 3% = 8% is probably the wrong intuition. If you have 50% interest per year, then the interest in two years will be 125% = (1+50%)(1+50%)-1, not 100%=50%+50%. The same probably happens when you combine the interest with deflation. They are combined multiplicatively, not additively, and you'll notice that R=(1+i)(1+d) plays a key role in my above calculation.

2) Check.

3) My formula gives an equal feeling-equivalent for every payment, and the derivation should be easy to modify for any desired increase or decrease in the equivalents.


About the philosophical issues: how heavy a payment feels depends on the person's salary, not on the payment's purchasing power. I expect that in a stable Bitcoin economy, salaries will remain more or less fixed in numerical value while their purchasing power increases. So d would be 0 for Bitcoin.
2714  Other / Beginners & Help / Re: question about solo mining mechanics on: July 15, 2011, 06:55:39 AM
I think the question was: since you need the previous block to find the hash for the next one, isn't everyone trying to find the same block at each moment?
If the last block is #136352, everyone is trying to find block #136353. But everyone tries to find a different block #136353.

The network doesn't "assign" jobs, everyone composes his candidate himself as long as it follows the rules. The most important variable is the receiving address of the generation transaction, everyone uses one of his own for this.
2715  Other / Beginners & Help / Re: question about solo mining mechanics on: July 15, 2011, 06:21:01 AM
So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again?
There's no "start over". There's no progress towards finding a block. It's random. A pool has, say, 0.01% chance of finding a block every second whether it's working on a new block or on the same block for hours.

That sucks lol.  Just when I thought it couldn't get less power efficient lol.
Power efficiency has nothing to do with it. The system is designed to be stable as long as no single entity controls >50% of the hashing capacity. Whatever amount of hashing needs to be done (and hence power consumption) to ensure this is the amount that will be required. Everything else is implementation details.

So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again?  That sucks lol.  Just when I thought it couldn't get less power efficient lol.

No... Or else smaller pools wouldn't survive. Deepbit solves blocks in a few minutes for example... But I don't know the exact details of why it doesn't happen Tongue Maybe the network assigns different jobs to different pools/users?
It's not a race. In a race, whether you win or lose depends on what the others do. But here, for a given difficulty, the chance of finding blocks is completely independent of how many blocks others find.
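The "no progress" property is just the memorylessness of a sequence of independent tries. A Python sketch with a made-up per-second probability:

```python
p = 0.0001  # hypothetical chance the pool finds a block in any given second

def prob_block_within(seconds):
    return 1 - (1 - p) ** seconds

# Chance of finding a block in the next hour, starting fresh:
fresh = prob_block_within(3600)

# The same chance, conditioned on a full day of fruitless hashing beforehand:
day = 86400
conditional = (prob_block_within(day + 3600) - prob_block_within(day)) \
              / (1 - prob_block_within(day))

assert abs(fresh - conditional) < 1e-9  # history doesn't matter
```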
2716  Other / Beginners & Help / Re: question about solo mining mechanics on: July 14, 2011, 05:29:38 PM
Aha, I see.  So the also horribly oversimplified version is basically the network broadcasts a difficulty in the form of a range of hashes that would work.  Like find a hash lower than 00000000000004a5601c621798d1da9d48b203c87a31f2fb0bd53af8e6ca312b and your block wins.

Everyone starts working on a block whenever they hit go in their miner and whether they stop or start, it's the same theoretical block they're working on.  Once a random value gets turned into a hash that meets the requirements, they win 50 bitcoins + any transaction fees and I assume the transactions are officially added to the block once it's established that it's a completed block.
More or less.

So then the 10 minute interval is just probability based and accurate due to volume of hash tries on the network as a whole?  But theoretically, two people could come up with a completed block inside 10 minutes or everyone could take longer than 10 minutes, right?
Right. But every block references the last block. Once a block is found and broadcast, people will start referencing it in their new blocks.

Assuming that's all correct, I'm gonna go out on a limb and assume nobody created a rainbow table for this size of base nonces, right?
Rainbow tables would be useless because you're hashing different data each time.

But what's to stop someone from generating random hashes like everyone else and then when they find a hash that's REALLY low, they'll tell their client to sit on that value and not "turn it in" yet.  Then years from now, when the hash range gets lower, they'll turn in a bunch of super low ones in a row and throw off the coin generation timing.
Every block references the previous block, and the longest chain is considered the valid one. If a new block is broadcast which references a very old block it will not be part of the longest chain and have no influence.
2717  Other / Beginners & Help / Re: question about solo mining mechanics on: July 14, 2011, 08:46:13 AM
I think your confusion is that you think that first the next block is decided, then people work on it. It's the other way around: every miner decides for himself what "block candidate" to work on. He decides which transactions, among the floating transactions he knows, will be included in it. Since everyone should know all transactions, the sender doesn't rely on any single miner for his transaction to be included. The miner constructs the header to work on based on the Merkle root of the transactions, the hash of the previous block and so on, and starts hashing with different nonces until he finds a hash satisfying the difficulty requirement. Only when he finds it does he broadcast it as the next valid block. If someone else beat him to finding a valid hash, the next block would be different (but would still include more or less the same transactions).

Since it's random, there's no "progress" towards finding a block which the miners need to synchronize about.

Have a look at http://blockexplorer.com/block/00000000000003910e6bce91f8fcbfabac2273123174a2a3fb128c0f7f2619f3 , all the data about the last block at the time of this writing.
2718  Bitcoin / Bitcoin Discussion / Re: Two points about the mining algorithm on: July 13, 2011, 05:43:34 PM
Re 1: First we need to ask why difficulty adjustment is based on a naive calculation instead of a more sophisticated control system. And the answer is probably that Satoshi either didn't have the foresight to include one, or he feared that it would be less understood and harder to implement.
Because "a more sophisticated control system" inevitably means non-linear, which leads to various attacks and perverse incentives, like mining in bursts being more profitable than mining continually.  Under the current system the only non-linear behavior is the clamps, which don't happen unless someone is bursting a significant multiple of the network average rate, and with that much hash power they could have been mining more profitably by just mining continually.
I was thinking along the lines of a PI controller. I don't see how it can cause the problems you describe.
2719  Bitcoin / Bitcoin Discussion / Re: Two points about the mining algorithm on: July 13, 2011, 03:19:06 PM
Re 1: First we need to ask why difficulty adjustment is based on a naive calculation instead of a more sophisticated control system. And the answer is probably that Satoshi either didn't have the foresight to include one, or he feared that it would be less understood and harder to implement.

Given that we're using a naive approach, evaluating the generation rate over too short an interval will create too much variance, and difficulty will fluctuate with every update.

The attack you've mentioned has been discussed before, and the current consensus is that it will be dealt with manually if it ever happens. So, while in theory it could be possible to create a consensus to change the adjustment algorithm, it doesn't seem to matter enough.

Re 2: The reason block reward diminishes at all is that Satoshi subscribes to the economic theory which says that the total amount of monetary units should have a fixed limit (I think this is called the Austrian school).

Why is the drop so staggered? It probably also has something to do with simplicity of understanding and implementation. I don't think the drop will be too disruptive: the amount generated is small compared to the total in circulation, and the drop will be anticipated well in advance.

Making transaction fees high enough to support the required mining is an important challenge for Bitcoin going forward, which I think can be alleviated by augmenting proof-of-work with other synchronization methods.

I'd support a new blockchain where block reward is constant.

2720  Bitcoin / Mining / Re: Pool Hopping: The SIMPLE Solution! on: July 13, 2011, 02:46:41 PM
In a nutshell, if an SMPPS pool with no fees runs long enough, with probability 1 it will eventually reach a point of such unluckiness that its payouts will be minuscule. Miners will leave and the pool will never recover. At that point, miners with pending payments will never receive them.
I don't understand why you say this. With SMPPS, every submitted share has precisely the same expected payout regardless of the past performance of the pool.
Only if you assume people are willing to wait arbitrarily long for their rewards.
They don't have to wait arbitrarily long. They only have to wait until the SMPPS pool accumulates whatever the number of shares chosen for N is. At that point, they will get whatever payment their share is going to generate.

Quote
The lower the pool's current balance, the longer it will take to get the full payout, and people will lose patience. Add to this the fact that people will fear the collapse of the pool, a self-fulfilling prophecy which will prevent ever getting the payment. When the pool is in the red, massive abandonment is a Schelling focal point.
I don't follow you. What do you mean by the "pool's current balance"?

The way a SMPPS pool works is this: People submit shares to the pool. The pool tries to find blocks. When it finds a block, it pays out on each of the N shares received prior to finding that block, paying 50/N bitcoins. N can be chosen large enough so that most shares pay out.

You do have to wait until the pool finds a block to get paid though. So this won't work very well for very small pools. But once a pool is large enough to find a block at least once a day, there's no reason to think it would shrink.
You're talking about PPLNS. I was talking about SMPPS. PPLNS is a great method as I've mentioned here and elsewhere.

No, it will make the pool vulnerable to hopping based on pool hashrate fluctuations. It is more profitable to mine for the pool when the current hashrate is higher than the average over the current window.
No it won't. When the hashrate is higher, the number of shares per window will be higher, resulting in lower payouts per block found. Of course more blocks will be found, evening things out perfectly.
Read carefully. I said "current hashrate > average hash rate over the window". The number of shares per window depends on the average hashrate over the window, not the current hashrate. Meanwhile, the chance your share will be included in a payout does depend on the current hashrate.