We never heard back from jl2012.
I guess he agrees that my argument wasn't fallacious. I withdraw most claims, but I stand by the claim that seeing heads 144 times in 144 coin tosses is very unlikely.
I will run some statistics on nonces. I am now very curious about their distribution. Has anyone studied that before?
I have no obligation to sit here and teach you basic statistics. That is the job of your stats teacher. Anyway, do the following homework; I'm not going to respond before you finish it.

Sorry, I don't understand your homework. Can you be more precise? On the other hand, can we agree that the probability of 144 independent random variables all taking values below their means is 1/2^144? Pretty small, huh? Did I do a good job on that?

This is from you:

We assume that nonces are uniformly distributed (not exactly true, since if we start increasing from nonce 0 they follow a Poisson law, but taking into account that the nonce cycles many times before finding a solution, it is well approximated by the uniform distribution). We look at the distance mod 2^32.
|nonce(354641) - nonce(354640)| = 19,452,599; probability 19,452,599/(2^32-1)*2 = 1.8%
|nonce(354642) - nonce(354641)| = 5,394,922; probability 5,394,922/(2^32-1)*2 = 0.12%
|nonce(354643) - nonce(354642)| = 313,864,936; probability 313,864,936/(2^32-1)*2 = 7.2%
Combined probability 0.000155%, that is 1 in 645,161.
Now you are asked to do this: |nonce(200001) - nonce(200000)| = |2,860,276,919 - 4,158,183,488| = 1,297,906,569; probability = 1,297,906,569/(2^32-1)*2 = 60.4%. Repeat until block 200049 and multiply all the probabilities as you did.
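The circular-distance calculation used in these posts can be sketched in a few lines of Python (the function names are mine, and the nonce values are the ones quoted in the homework):

```python
# Sketch of the "closeness probability" used in the thread: treat the
# 32-bit nonce space as a circle and ask how likely two independent
# uniform nonces are to land at least this close together.

N = 2**32  # size of the nonce space

def circular_distance(a: int, b: int) -> int:
    """Length of the shorter arc between two nonces on the 2^32 circle."""
    d = abs(a - b) % N
    return min(d, N - d)

def closeness_probability(a: int, b: int) -> float:
    """P(two independent uniform nonces are at least this close)."""
    return 2 * circular_distance(a, b) / (N - 1)

# The homework example (blocks 200000 and 200001):
print(f"{closeness_probability(4158183488, 2860276919):.1%}")  # 60.4%
```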
|
|
|
All 4 nonces very close together.
Not close at all. A difference of 300,000 is about one thirteenth of the maximum range, which means consecutive nonces will be this close together over 10 times a day.

The 4-byte nonce is between 1 and 2^32-1 = 4,294,967,295, right? Where is your 300,000 being 1/13th coming from?

I meant 300,000,000 (that's the closeness we're talking about, right?), but I misplaced a few zeros somewhere around the second glass of absinthe. This is why you shouldn't drink and derive.

Be careful with absinthe... Let's look closer at the nonces. We assume that nonces are uniformly distributed (not exactly true, since if we start increasing from nonce 0 they follow a Poisson law, but taking into account that the nonce cycles many times before finding a solution, it is well approximated by the uniform distribution). We look at the distance mod 2^32:

|nonce(354641) - nonce(354640)| = 19,452,599; probability 19,452,599/(2^32-1)*2 = 1.8%
|nonce(354642) - nonce(354641)| = 5,394,922; probability 5,394,922/(2^32-1)*2 = 0.12%
|nonce(354643) - nonce(354642)| = 313,864,936; probability 313,864,936/(2^32-1)*2 = 7.2%
Combined probability 0.000155%, that is 1 in 64.5 million times.

Are you trolling? 0.000155% is 1 in 645,161. And this is nonsense. Just some made-up data:

|nonce(1) - nonce(0)| = 5%
|nonce(2) - nonce(1)| = 20%
|nonce(3) - nonce(2)| = 10%
|nonce(4) - nonce(3)| = 1%
|nonce(5) - nonce(4)| = 5%
|nonce(6) - nonce(5)| = 10%
Combined probability 0.000005%, that is 1 in 20 million times. Bitcoin is broken!!!

I just did a rough approximation, only valid for small probabilities and few events. You are welcome to do the exact computation.

You calculate in a wrong way. You should define the meaning of "close" a priori. That could be 20%, 10%, or 1%. Let's say you choose 10%: then P(1.8%, 0.12%, 7.2%) should be 1/1000, not 1/645,161. And let's say you choose 2%: then P(1.8%, 0.12%, 7.2%) should be 1/2,551 (0.02*0.02*0.98). Therefore, one event of this kind is expected about every 2 weeks.
Please stop here (and edit your misleading topic) unless you find something really statistically significantly deviated from the theoretical distribution.

I don't understand what you mean. OK, let me do the computation and explain things carefully. You can tell me on which point you disagree.

(0) Put your 2^32-1 integer values on a circle of perimeter 2. This geometrical representation will help you.
(1) We assume uniform distribution of nonces. This is correct as a first approximation, but not totally accurate, as pointed out before by several people. We may extract the historical distribution and use it.
(2) The probability that two consecutive nonces are as close as nonce(354641) and nonce(354640) is 1.8%. It is the minor arc length between the two nonces on the circle. Same for nonce(354642) and nonce(354641), and for nonce(354643) and nonce(354642). Please correct me if you disagree.
(3) We assume independence of nonces with respect to previous nonces, i.e. we treat nonces as independent random variables. This implies that the distance between nonce(n+2) and nonce(n+1) is independent of the distance between nonce(n+1) and nonce(n).
(4) Thus, the probability of having three consecutive events of the sort described is just the product of the probabilities: 1 in 645,161. The probability of seeing this is on average once every 12.27 years, at an average production of one block (nonce) every 10 minutes.

If you can't see why you are committing an elementary statistics fallacy, just consider this:
1. P is a uniformly distributed variable from 0 to 1, with mean 0.5.
2. There are 144 blocks per day.
3. The probability calculated the way you suggest is about 0.5^144 ≈ 4.5*10^-44, which should NEVER happen.

As for the consecutive 731 kB blocks, it just shows there were too many unconfirmed transactions and miners had to use the maximum size.
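The "define close a priori" correction can be checked numerically. A rough Monte Carlo sketch (the sample count and seed are my choices; only the 10% threshold and the 1/1000 figure come from the post above):

```python
# Monte Carlo check: with a 10% closeness threshold fixed in advance,
# three consecutive nonce gaps all under the threshold should occur
# about once per 1000 triples (0.1^3), matching the 1/1000 figure.
import random

N = 2**32
THRESHOLD = 0.10  # "close" defined a priori

def gap_probability(a: int, b: int) -> float:
    d = abs(a - b)
    d = min(d, N - d)
    return 2 * d / (N - 1)

random.seed(1)
trials = 500_000
nonces = [random.randrange(N) for _ in range(trials + 3)]
hits = sum(
    1 for i in range(trials)
    if all(gap_probability(nonces[i + j], nonces[i + j + 1]) < THRESHOLD
           for j in range(3))
)
print(hits / trials)  # close to 0.001
```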
|
|
|
It would be useful to know what percentage of ETF applications are denied by the SEC. Their approval is not a mere formality, and COIN surely is not the only wannabe ETF with heavyweight attorneys. I gather that it is not a good sign that COIN has been sitting on the SEC's desk for ~2 years and has already gone through 5 amended re-filings.
I doubt that loading up in bitcoin would improve the image of any investment fund. Wall Street is not very fond of instruments that lose value every quarter for five consecutive quarters, and whose Captains of Industry cannot even explain why the price is now 220, much less why it should rise again in the future. "Because Wall Street will buy lots of it" is not a convincing argument to Wall Street, I suspect.
What evidence do you have? Like, for example, actually reading the 5th amended COIN filing? What is the purpose of using the phrase "Captains of Industry" other than as a pejorative to advance nothing but innuendo?
https://www.youtube.com/watch?v=VzmzQja7a2M

Stop spreading unfounded FUD. You can see the early history of the GLD ETF here: http://www.sec.gov/cgi-bin/browse-edgar?action=getcompany&CIK=0001222333&type=&dateb=20050101&owner=include&count=100

The application was submitted on 2003-05-13 and, after 5 amendments, was approved on 2004-11-17. I'm not sure if COIN will be approved, but for something as innovative as bitcoin, I don't think it is significantly slower than GLD.
|
|
|
We'll be moving on to 62 once 66 is actually deployed (one flaw in the legacy softfork deployment mechanism is that only one change can be in flight at a time).
Is the plan BIP-66, then BIP-62, and then OP_CHECKLOCKTIMEVERIFY? Is there any consensus that CHECKLOCKTIMEVERIFY will be implemented?
|
|
|
A bid for 5000 btc worth of shares at 350
That's really interesting. I suppose bidders are required to have 100% USD on OTC Markets?
|
|
|
There are a few more edge cases I would like to clarify:
1. The use of a non-standard push in the scriptPubKey (based on the BIP62 description I think this is allowed)
2. The use of a non-standard push in a P2SH serialized script
3. The use of a zero-padded number in the scriptPubKey or P2SH serialized script (I think this is not allowed)
4. The use of a non-standard push of a non-zero-padded number in the scriptPubKey (the original question)
5. The use of a non-standard push of a non-zero-padded number in a P2SH serialized script
6. The use of a non-DER or high-S signature in the scriptPubKey or P2SH serialized script (I think this is not allowed)
7. The use of something other than an empty byte array as the extra stack element for CHECKMULTISIG in the scriptPubKey or P2SH serialized script (I think this is not allowed)
8. The use of an empty byte array that is not a direct result of OP_0 (e.g. "OP_0 OP_ABS" or "0100") as the extra stack element for CHECKMULTISIG in the scriptPubKey or P2SH serialized script
9. Having more than one item left on the stack due to the design of the scriptPubKey (e.g. scriptSig = empty; scriptPubKey = OP_1 OP_1) (I think this is not allowed)
I think all these cases are currently allowed and would not cause any malleability.
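Two of the edge cases above ("OP_0 OP_ABS" leaving an empty byte array that is not a direct OP_0 result, and a scriptPubKey that leaves two items on the stack) can be illustrated with a toy evaluator. This is a sketch of my own, not a real Script interpreter; only the opcodes needed here are implemented:

```python
# Toy stack evaluator for the two edge cases named in the lead-in.
OP_0, OP_1, OP_ABS = 0x00, 0x51, 0x90

def eval_toy_script(ops):
    stack = []
    for op in ops:
        if op == OP_0:
            stack.append(b"")    # zero is pushed as the empty byte array
        elif op == OP_1:
            stack.append(b"\x01")
        elif op == OP_ABS:
            v = stack.pop()
            # toy: only zero inputs handled; |0| re-encodes to the empty array
            assert v in (b"", b"\x80"), "toy evaluator only handles zero"
            stack.append(b"")
    return stack

# "OP_0 OP_ABS": the empty array on the stack is not a direct OP_0 result
print(eval_toy_script([OP_0, OP_ABS]))      # [b'']
# scriptSig = empty; scriptPubKey = OP_1 OP_1: two items left on the stack
print(len(eval_toy_script([OP_1, OP_1])))   # 2
```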
|
|
|
In BIP62, it says:

Zero-padded number pushes: any time a script opcode consumes a stack value that is interpreted as a number, it must be encoded in its shortest possible form. 'Negative zero' is not allowed. See reference: Numbers.

The native data type of stack elements is byte arrays, but some operations interpret arguments as integers. The encoding used is little endian with an explicit sign bit (the highest bit of the last byte). The shortest encodings for numbers are (with the encodings of the range boundaries given in hex in parentheses):

0: OP_0 (00)
1..16: OP_1..OP_16; (51)..(60)
-1: OP_1NEGATE; (4f)
-127..-2 and 17..127: normal 1-byte data push; (01 FF)..(01 82) and (01 11)..(01 7F)
-32767..-128 and 128..32767: normal 2-byte data push; (02 FF FF)..(02 80 80) and (02 80 00)..(02 FF 7F)
-8388607..-32768 and 32768..8388607: normal 3-byte data push; (03 FF FF FF)..(03 00 80 80) and (03 00 80 00)..(03 FF FF 7F)
-2147483647..-8388608 and 8388608..2147483647: normal 4-byte data push; (04 FF FF FF FF)..(04 00 00 80 80) and (04 00 00 80 00)..(04 FF FF FF 7F)

Any other numbers cannot be encoded. In particular, note that zero could be encoded as (01 80) (negative zero) if the non-shortest form were allowed.

If I try to use a non-standard push in the scriptPubKey, without zero-padding, would that be allowed? For example, is the scriptPubKey "5101019C" spendable under BIP62? That is: OP_1 OP_PUSHDATA(01) OP_NUMEQUAL. Currently, I think this is spendable with an empty scriptSig.
|
|
|
Implementing this "trust" layer in wallets should be a decision of the wallet developers. If they decide to add a BIP like that, they will be responsible for verifying their customers' information; otherwise their service will not be considered 'trusted'. (I didn't mention it before, but it is the wallet service provider's responsibility.)

So this is basically to establish the bitcoin version of PayPal or VISA. This is application level, not protocol level.

The direction here is to create some standard for verifying identities, because I see every new bitcoin "company" trying to create its own standard, and what I think is that we need to create some consensus at a much lower level than everyone's apps/services.

No such consensus is possible, because different countries (and different parts of a country) will always have different standards and requirements.
|
|
|
There is no reason to store this kind of private encrypted data on the blockchain. The blockchain should be used for storing:
- Public data, or
- Private data for which timestamping is needed

For the case you suggest, users and merchants should digitally sign those receipts and KYC data (with a mutually agreed timestamp if needed) and keep them themselves.

I've actually had this discussion with regulatory/banking types... There's pressure to have the data on the blockchain itself precisely because it forces it to be "sufficiently" public for governments to be happy; many of them want it to be fundamentally impossible to use the blockchain without giving up that information. The first step is to be able to know for sure that the data exists. Obviously pay-to-contract-hash techniques are the sane way to do this stuff... but we don't necessarily have the same goals as regulators do. One of the hard political challenges for people engaging with regulators is convincing them that privacy matters, and that any KYC system has to be an add-on, not an inherent part of the system.

Bitcoin is a consensus protocol to determine the validity of some binary data. Fundamentally, it is just a language. It cannot be regulated the way a company is regulated, because there is no one to send to jail and nothing to confiscate. The only way to regulate bitcoin is to control >50% of the hashing power (or to regulate bitcoin users, but bitcoin users != bitcoin). Anyway, if they have some good idea in mind, they can always build it (since they have unlimited resources) and convince people to adopt it.
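The "sign it and keep it yourselves" approach amounts to publishing only a commitment. A minimal sketch, assuming salted SHA-256 commitments over a JSON receipt (the receipt fields and salt handling are illustrative, not any standard):

```python
# Keep the receipt private; publish (or timestamp) only a salted hash.
import hashlib
import json

def commitment(receipt: dict, salt: bytes) -> str:
    """Salted SHA-256 commitment; the salt stops brute-forcing
    low-entropy receipts from the public digest."""
    payload = json.dumps(receipt, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest()

receipt = {"payer": "alice", "payee": "bob", "amount_btc": "0.5"}
salt = b"\x00" * 16   # in practice: random, stored alongside the receipt
digest = commitment(receipt, salt)

# Later, either party reveals (receipt, salt); anyone can re-derive the
# digest and compare it to the timestamped value.
assert commitment(receipt, salt) == digest
print(len(digest))  # 64 hex characters
```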
|
|
|
It failed to break -1.75 for the second time, so the first target of the recovery is a break of -1.75, which is about $300 now.
|
|
|
I proposed this last year:
(1) The txid will be the hash of the tx with all scriptSigs removed. (Currently, the txid is the hash of the whole tx, including all scriptSigs.)

An alternative is to have both hashes be valid ways to refer to previous transactions. This means that legacy transactions still work. It is needed to ensure that you don't invalidate transactions that were created in the past but have a locktime after the hard fork happens. If you are creating a refund transaction, you can use hash(transaction without scriptSigs) to refer to the previous transaction. It doubles the size of the transaction index, though, since it means 2 keys for each transaction.

You only need legacy support for the UTXOs already in the blockchain. For those created after the hardfork, only the new txid format will be supported.

(2) The first level of the merkle root will be the hash of (txid-a|size-a|txid-b|size-b), where txid-a and size-a are the txid and size of tx-a respectively. (Currently, the first level of the merkle root is the hash of (txid-a|txid-b).)

What is the benefit of this? It only protects against length-changing malleability attacks (which might cover a lot of them).

We need this to provide an upper bound on the block size. Otherwise it is impossible to determine whether the block size is under 1 MB (or another limit).
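The two changes can be sketched with a simplified transaction structure. This is a stand-in serialization of my own, not the real Bitcoin wire format:

```python
# (1) txid over the tx with scriptSigs stripped; (2) merkle leaves that
# commit to each transaction's serialized size.
import hashlib
from dataclasses import dataclass

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

@dataclass
class TxIn:
    prev_txid: bytes
    script_sig: bytes

@dataclass
class Tx:
    inputs: list
    outputs: bytes  # simplified: outputs kept as an opaque blob

    def serialize(self, with_script_sigs: bool = True) -> bytes:
        data = b""
        for txin in self.inputs:
            data += txin.prev_txid
            if with_script_sigs:
                data += txin.script_sig
        return data + self.outputs

def new_txid(tx: Tx) -> bytes:
    # (1): scriptSigs are excluded, so signature malleation can't change the txid
    return dsha256(tx.serialize(with_script_sigs=False))

def merkle_level_one(tx_a: Tx, tx_b: Tx) -> bytes:
    # (2): committing sizes lets a verifier bound the total block size
    size_a = len(tx_a.serialize()).to_bytes(4, "little")
    size_b = len(tx_b.serialize()).to_bytes(4, "little")
    return dsha256(new_txid(tx_a) + size_a + new_txid(tx_b) + size_b)

# A same-length scriptSig change alters neither the txid nor the size
# commitment -- the residual malleability noted above:
tx = Tx([TxIn(b"\x00" * 32, b"\x01\x51")], b"outputs")
malleated = Tx([TxIn(b"\x00" * 32, b"\x01\x52")], b"outputs")
assert new_txid(tx) == new_txid(malleated)
assert len(tx.serialize()) == len(malleated.serialize())
```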
|
|
|
|