Bitcoin Forum
  Show Posts
Pages: « 1 2 [3] 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 »
41  Bitcoin / Bitcoin Technical Support / Re: Help: Unserendipitous Multisig Transaction on: April 09, 2016, 04:03:57 AM
Easiest way is to keep doing what you did. Every attempt has roughly a 50% chance of success, as long as you re-sign.
42  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: April 09, 2016, 02:51:05 AM
These test scripts would have exposed all the problems you mentioned. They focus on block validation and reorganization.  You don't have to be a Java expert because the tool acts as a peer to which your node connects.
I have briefly looked at your code and noticed a few odd things, but I didn't do a thorough review.
43  Bitcoin / Bitcoin Technical Support / Re: Help: Unserendipitous Multisig Transaction on: April 09, 2016, 02:41:36 AM
Problem solved?
44  Bitcoin / Bitcoin Technical Support / Re: Help: Unserendipitous Multisig Transaction on: April 08, 2016, 11:57:51 PM
First of all, entering your private key into a website (ms-brainwallet) is a bad idea, and a market that recommends doing so is suspicious to say the least.

Secondly, the error message you get is usually associated with non-standard signatures: Bitcoin Core wants low-S signatures. If you don't know what that means, you should not be using raw transactions.
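For reference, "low S" means the s value of the signature must sit in the lower half of the curve order. A minimal sketch of the normalization (the constant below is the standard secp256k1 group order):

```python
# secp256k1 group order (standard constant)
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def low_s(s: int) -> int:
    """Normalize an ECDSA s value to the low-S form Bitcoin Core expects.
    (r, s) and (r, N - s) are both valid for the same message, so the
    standardness rule picks the smaller s to limit malleability."""
    return s if s <= N // 2 else N - s

assert low_s(N - 1) == 1   # a high-S value gets folded down
```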
45  Bitcoin / Development & Technical Discussion / Re: protocol vulnerability? on: April 08, 2016, 02:16:55 PM
If you had read the paragraph after the picture, you would have known that more than just the scriptPubKey gets signed...

Quote
As illustrated above, the data that gets signed includes the txid and vout from the previous transaction. That information is included in the createrawtransaction raw transaction. But the data that gets signed also includes the pubkey script from the previous transaction, even though it doesn’t appear in either the unsigned or signed transaction.


He's concerned that the signature doesn't cover the outputs of the current transaction - which it does for all signature types besides SIGHASH_NONE (and SIGHASH_SINGLE, which only covers the matching output).

To be honest, I don't understand this drawing either. This explanation works better for me.

https://en.bitcoin.it/wiki/OP_CHECKSIG
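To illustrate why the previous scriptPubKey shows up in what gets signed, here is a toy sketch of building a legacy SIGHASH_ALL preimage. The serializer and the transaction layout are simplified stand-ins for illustration, not Bitcoin's real wire format:

```python
import copy, hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def serialize(tx: dict) -> bytes:
    """Toy serializer for illustration only - not Bitcoin's real format."""
    out = b""
    for txin in tx["inputs"]:
        out += txin["prev_txid"] + txin["prev_vout"].to_bytes(4, "little")
        out += len(txin["script"]).to_bytes(1, "little") + txin["script"]
    for txout in tx["outputs"]:
        out += txout["value"].to_bytes(8, "little") + txout["script"]
    return out

SIGHASH_ALL = 1

def sighash_all(tx: dict, i: int, prev_script_pubkey: bytes) -> bytes:
    """What gets signed: the tx with all input scripts blanked, except
    that input i temporarily carries the scriptPubKey of the output it
    spends. Note the outputs are covered too."""
    tmp = copy.deepcopy(tx)
    for txin in tmp["inputs"]:
        txin["script"] = b""
    tmp["inputs"][i]["script"] = prev_script_pubkey
    return sha256d(serialize(tmp) + SIGHASH_ALL.to_bytes(4, "little"))

tx = {
    "inputs": [{"prev_txid": bytes(32), "prev_vout": 0, "script": b"sig"}],
    "outputs": [{"value": 5000, "script": b"\x76\xa9"}],
}
spk = b"\x76\xa9\x14" + bytes(20) + b"\x88\xac"
d1 = sighash_all(tx, 0, spk)
tx["outputs"][0]["value"] = 4999       # tamper with an output...
d2 = sighash_all(tx, 0, spk)
assert d1 != d2                        # ...and the digest changes
```

Changing any output changes the digest, which is why an attacker can't redirect the funds without invalidating the signature.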


46  Bitcoin / Wallet software / Re: Gocoin - totally different bitcoin client with deterministic cold wallet on: April 08, 2016, 12:44:58 PM
Hi, out of curiosity, did you check your node against the bitcoin regression tests?

https://github.com/TheBlueMatt/test-scripts

--h
47  Bitcoin / Development & Technical Discussion / Re: Segwit details? segwit wastes precious blockchain space permanently on: March 17, 2016, 12:42:53 AM
Segwit is not about saving space for plain full nodes; the space is already saved in Core (if the user chooses to save it). As you note, local space savings can be done purely locally.  Segwit increases flexibility; fixes design flaws; saves space for nodes acting as SPV servers; and saves _bandwidth_; and none of these can be done as purely local changes.
Again, I apologize for not being smart enough to instantly understand all the changes segwit makes; I was misled by errant internet posts claiming that segwit saved HDD space for the blockchain.

Thank you for clarifying that it won't save space for full nodes.

Also, my understanding now is that iguana can just treat a segwit tx as standard p2sh, with the caveat that until it fully processes the witness data, it would need to trust that any such txs that are mined are valid.

I would debate the many claims you make that I don't agree with, but I see no point in debating with words. I will make an iguana release that demonstrates my claims. Fair enough?

James

The problem is that you lost a lot of credibility by making your earlier claims, and now it'll be hard to take your software seriously. Basically, you are asking us to check out your rocket after you argued against the laws of gravity.
48  Bitcoin / Development & Technical Discussion / Re: An optimal engine for UTXO db on: March 13, 2016, 06:05:26 AM
To answer the OP, it is difficult to decide what makes an optimal engine for the UTXO set because as usual one has to make design trade offs.

1. If you are interested in super-fast import, i.e. processing the block chain in the shortest time, you are dealing with a large number of inserts/deletes of key-value pairs where the key contains a crypto hash. The keys are essentially random values and will fall all over the place. The net amount of data fits in an average development machine, so you could work entirely in RAM - but Bitcoin Core can't assume that you have several GB of RAM available!

Here, the main problem with leveldb is that its engine uses skip lists, whose average search complexity is O(log N). You'd be better off with a hash table, which is O(1). For the record, skip lists and B-trees are used when you need to retrieve keys in order, which is not the case here.
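To make the point concrete, a toy UTXO store keyed on (txid, vout) with a plain hash map - O(1) average lookup, insert, and delete, with no ordered traversal needed:

```python
# Toy UTXO set: keys are (txid, vout) pairs, values are amounts.
# The keys are effectively random hashes, so ordering buys nothing and
# a hash table beats skip lists / B-trees for this access pattern.
utxo: dict[tuple[bytes, int], int] = {}

def add_output(txid: bytes, vout: int, value: int) -> None:
    utxo[(txid, vout)] = value

def spend(txid: bytes, vout: int) -> int:
    # KeyError here means the output is unknown or already spent.
    return utxo.pop((txid, vout))

add_output(b"\x01" * 32, 0, 50_000)
assert spend(b"\x01" * 32, 0) == 50_000
assert (b"\x01" * 32, 0) not in utxo     # can't spend it twice
```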

Because blocks depend on previous blocks, processing them in parallel isn't easy. I'm not sure it helps even. (Block validation excluded)

2. If you are building a block browser, then you can precalculate the hell out of your data set. The usual database tricks apply: linearize the key space, compress values, use bloom filters, partition your data, etc. You are working with data that changes only every 10 minutes, so it is quite static. If you need the unconfirmed txs, add them on top. This case isn't what a general-purpose db is really designed for anyway: a lot of its effort goes into concurrency and durability, and when those aren't crucial, faster implementations exist - especially if you can choose the hardware.

3. If it's for checking transactions in a full node, there are relatively few of them. Once you have extracted the UTXO set, we are talking about a few million records, which is peanuts by today's standards.

In conclusion, leveldb is fine for its usage in Bitcoin Core. The choice of a skip list over a hash table engine is questionable, but I don't think it makes a big difference for most users.

Maybe, you could describe your usage?
49  Bitcoin / Development & Technical Discussion / Re: LevelDB reliability? on: March 12, 2016, 02:30:27 AM
LevelDB being stupid is one of the major reasons that people have to reindex on Bitcoin Core crashes. There have been proposals to replace it but so far there are no plans on doing so. However people are working on using different databases in Bitcoin Core and those are being implemented and tested.

Maybe the most reliable DB is no DB at all? Use efficiently encoded read only files that can be directly memory mapped.

https://bitcointalk.org/index.php?topic=1387119.0
https://bitcointalk.org/index.php?topic=1377459.0
https://bitco.in/forum/forums/iguana.23/

James

LevelDb is used to store the UTXO set. How is that read only?

The UTXO set falls into the write-once category. Once an output is spent, you can't spend it again. The difference with the UTXO set is explained here: https://bitco.in/forum/threads/30mb-utxo-bitmap-uncompressed.941/

So you can calculate the OR'able bitmap for each bundle in parallel (as soon as all its prior bundles are there). Then to create the current utxo set, OR the bitmaps together.

What remains volatile in practice is the combined bitmap, but the overlay bitmap for each bundle is read-only. This makes a UTXO check a matter of finding the index of the vout and checking a bit.

James

Not sure if we are talking about the same thing. Following your link, it seems you are describing the internal data structures used by a block explorer, which aren't necessarily optimal for a bitcoin node.
In particular, you use a 6-byte locator. Given a new incoming transaction that can spend any utxo (hash+vout), do you need to map it to a locator? And if so, how is that done?
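If I follow, the bitmap part of the description could be sketched like this (a toy version; the bundle sizes and global output indexing are made up for illustration):

```python
def or_bitmaps(bitmaps):
    """Combine the read-only per-bundle spent bitmaps into the current one."""
    acc = bytearray(len(bitmaps[0]))
    for bm in bitmaps:
        for i, b in enumerate(bm):
            acc[i] |= b
    return acc

def is_spent(bitmap, index: int) -> bool:
    """A UTXO check: find the output's global index and test one bit."""
    return bool(bitmap[index // 8] & (1 << (index % 8)))

bundle1 = bytes([0b00000001])   # this bundle spent global output 0
bundle2 = bytes([0b00000100])   # this one spent global output 2
spent = or_bitmaps([bundle1, bundle2])
assert is_spent(spent, 0) and is_spent(spent, 2)
assert not is_spent(spent, 1)   # output 1 is still unspent
```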

50  Bitcoin / Development & Technical Discussion / Re: LevelDB reliability? on: March 11, 2016, 10:58:31 AM
LevelDB being stupid is one of the major reasons that people have to reindex on Bitcoin Core crashes. There have been proposals to replace it but so far there are no plans on doing so. However people are working on using different databases in Bitcoin Core and those are being implemented and tested.

Maybe the most reliable DB is no DB at all? Use efficiently encoded read only files that can be directly memory mapped.

https://bitcointalk.org/index.php?topic=1387119.0
https://bitcointalk.org/index.php?topic=1377459.0
https://bitco.in/forum/forums/iguana.23/

James

LevelDb is used to store the UTXO set. How is that read only?
51  Bitcoin / Development & Technical Discussion / Re: Luke Jr's HARDFORK proposal debunked on: March 09, 2016, 09:27:38 AM
I think the calculations are slightly incorrect though.

1. The probability of rolling 1/1/1/1 on 4 dice is 1/6^4 = 1/1296. OK.
2. If 20 people are rolling at the same time, the probability of having a winner in a round isn't 20/1296.
It's 1 - (1 - 1/1296)^20 ≈ 0.0153195
instead of 20/1296 ≈ 0.0154321
3. If each round has a probability p of finishing the game and every round is independent, the expected number of rounds of the game is 1/p.
This is true, but not obvious. I put a proof below.

The average duration of the game is 65.28 s (vs 64.8 s).

In the case of bitcoin, the difference is negligible since it's on the order of p^2.

--h

PS
Call E(i) the expected number of rounds remaining, given that we reached round i (not counting the previous rounds).
At round i, we stop with probability p and continue with probability 1-p.
So there are two possibilities:
  - the game lasts 1 more round, with probability p, because we got 1/1/1/1;
  - or it continues for 1 + E(i+1) rounds, with probability 1-p, because we didn't.

E(i) = p x 1 + (1-p) x (1 + E(i+1))

Now the important thing is that the game is 'memoryless', so E(i) doesn't depend on i: E(i) = E(i+1) = E.

E = p + (1-p)(1+E)
which leads to E = 1/p.
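The expectation E = 1/p can also be checked numerically with a quick simulation (round counts only; assuming, as above, one round per second):

```python
import random

def play_game(rng, num_players=20, num_dice=4):
    """Roll until some player gets all ones; return the number of rounds."""
    rounds = 0
    while True:
        rounds += 1
        for _ in range(num_players):
            if all(rng.randint(1, 6) == 1 for _ in range(num_dice)):
                return rounds

p = 1 - (1 - 1 / 6**4) ** 20        # chance that a round ends the game
rng = random.Random(42)             # fixed seed for reproducibility
trials = 2000
mean = sum(play_game(rng) for _ in range(trials)) / trials
print(f"theory {1/p:.2f}  simulated {mean:.2f}")   # theory ≈ 65.28
```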
52  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 12:52:22 AM
But actually, the sig not validating still doesn't mean the vin is always invalid, right? The vout script could just pop all the data off and push a true. So a vin without a valid sig could still be spent...

If you are doing it as part of a protocol, you would insist that the output scripts all match the expected template.

Either use CHECKSIGVERIFY or make sure CHECKSIG is the last operation to execute.
Certainly when I am on the side generating the scripts, but I am working on the core side now, so I have to understand the proper handling of arbitrary vins/vouts.

Not sure if you are still wondering about this, but simply put, the transaction is atomic: all the vins get transferred to the vouts if and only if every vin script passes, regardless of its content. So yeah, you can create interesting combinations if some of the scripts have different SIGHASH values.
53  Bitcoin / Development & Technical Discussion / Re: Question: SegWit Second Merkel Tree? on: January 23, 2016, 03:35:17 AM
Technically speaking, the witness data is not in the blockchain.
If the witness data is not in the blockchain, where is it? I thought you said it is in the OP_RETURN.

The OP_RETURN contains the hash of the witness data not the witness data itself.

It is sent separately, alongside the blocks.
Do you mean alongside the txs, prior to being in a block?

If you are familiar with alt-coins that use the bitcoin blockchain, it is a similar principle. You have data
(here, the witness data) that is transmitted apart from the blockchain. However, in order to prove that
data is authentic, its hash is included in the blockchain.

However, the blocks must have some way to reference the witness block, and they do that by including the hash of the witnesses in the coinbase transaction.
Why do blocks need to reference the witness block? When a block is formed, aren't all txs within that block proved to be signed properly?
What do you mean by coinbase transaction? I thought that was the block reward (currently 25 BTC).

This one I'm unclear on myself. It seems to me that it's a nice-to-have rather than a requirement. Every transaction is either signed directly, with its signature included
in the blockchain (the normal case), or in the witness data. If you are interested in verifying signatures, you have to download the witness data. Once you have it,
I believe you could verify every transaction in two steps:
1. the signature is correct;
2. the hash of the witness data is equal to the value in the OP_RETURN.

Before:
Tx in blockchain:
- Inputs: [Pubkey, Signature]

After
Tx in blockchain:
- Inputs: [Hash of [Pubkey] ]
Tx in segwit data: [PubKey, Signature]

There are other cases too but at first approximation, I hope this helps.

The coinbase tx is the block reward, so technically it doesn't need any input. However, the bitcoin protocol uses its input in a special way. It is like an
extension of the block header, but putting the data there keeps the header short and can be done by a soft-fork.
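For reference, the commitment placed in the coinbase can be sketched following BIP141: a double-SHA256 of the witness merkle root concatenated with a 32-byte reserved value, embedded in an OP_RETURN output. The 32-byte inputs below are placeholders, not real block data:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Placeholders - in a real block these come from the witness merkle
# tree and the coinbase's witness field respectively.
witness_root = bytes(32)
witness_reserved_value = bytes(32)

commitment = sha256d(witness_root + witness_reserved_value)

# BIP141 commitment output script:
# OP_RETURN (0x6a), push 36 bytes (0x24), magic 0xaa21a9ed, commitment.
script = bytes.fromhex("6a24aa21a9ed") + commitment
assert len(script) == 38
```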
54  Bitcoin / Development & Technical Discussion / Re: Announcing BlockCypher's Transaction API: create&manage bitcoin transactions on: January 17, 2016, 07:44:49 AM
If you look closer, you will see that the `sign` method is running on the client.
55  Bitcoin / Development & Technical Discussion / Re: Why can't I use a negative value for OP_CHECKLOCKTIMEVERIFY on: January 04, 2016, 02:43:54 PM
The problem with adding the zero byte is that there is another check when pushing bytes that you have pushed the "minimum number possible".

So it currently won't accept 0x02ff00 which is how it apparently should accept 255.

The problem won't affect mainnet as its block numbers will appear as non-negative numbers (and it probably also doesn't affect *nix timestamp values).


02 FF 00 is the shortest representation of 255 - so I'm guessing the error is somewhere else.
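For what it's worth, the minimal signed-magnitude encoding Script uses can be sketched like this; 255 needs the trailing 0x00 because 0xff alone would read as a negative number:

```python
def scriptnum_encode(n: int) -> bytes:
    """Minimal little-endian signed-magnitude encoding used by Script."""
    if n == 0:
        return b""
    neg, n = n < 0, abs(n)
    out = bytearray()
    while n:
        out.append(n & 0xFF)
        n >>= 8
    if out[-1] & 0x80:
        # A set top bit would flip the sign, so add an explicit sign byte.
        out.append(0x80 if neg else 0x00)
    elif neg:
        out[-1] |= 0x80
    return bytes(out)

assert scriptnum_encode(255) == b"\xff\x00"   # pushed as 02 ff 00
assert scriptnum_encode(127) == b"\x7f"
assert scriptnum_encode(-255) == b"\xff\x80"
```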
56  Bitcoin / Development & Technical Discussion / Re: Timestamping on the Bitcoin Blockchain: limits on: December 10, 2015, 08:40:23 AM
They don't need to publish all the hashes to the blockchain. They can keep them on their systems and push a merkle root from time to time. Whenever someone needs to prove his claim, he shows the document hash and the service provides the merkle branch that leads to the root. Technically, there are no limits imposed by Bitcoin.
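The scheme can be sketched with a toy merkle tree (helper names are made up; the service keeps the leaves, publishes only the root, and hands out a branch on demand):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_branch(leaves, index):
    """Root of the tree plus the sibling hashes proving leaves[index]."""
    branch, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])      # bitcoin-style: duplicate the last
        sib = index ^ 1
        branch.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], branch

def verify(leaf, branch, root):
    acc = leaf
    for sibling, sibling_is_left in branch:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc == root

docs = [h(s.encode()) for s in ["doc-a", "doc-b", "doc-c", "doc-d"]]
root, branch = merkle_root_and_branch(docs, 2)
assert verify(docs[2], branch, root)     # the claim checks out
```

Only the 32-byte root goes on-chain, however many documents are batched.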
57  Bitcoin / Development & Technical Discussion / Re: Keeping the addresses generated from deterministic wallet PubKey seed secret. on: December 09, 2015, 01:59:04 PM
You don't need hardened keys for this scenario. If you only publish the addresses, no-one can figure out the next in sequence.

So you can never spend the coins, because this would expose the public key, which would allow deriving further addresses?

Well, you can also expose the public key, as long as you don't show the chain code - deriving the next keys requires both.
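To make that concrete: a non-hardened BIP32 child is derived from an HMAC-SHA512 over the parent public key *and* the chain code, so the public key alone gives an observer nothing to iterate on. A sketch of the tweak computation, with placeholder inputs:

```python
import hashlib, hmac

def bip32_child_tweak(parent_pubkey33: bytes, chain_code: bytes,
                      index: int) -> bytes:
    """Left half of HMAC-SHA512(chain_code, pubkey || index): the scalar
    that (times G, mod n) is added to the parent public key. Without the
    chain code this HMAC cannot be computed."""
    assert index < 0x80000000           # non-hardened range only
    data = parent_pubkey33 + index.to_bytes(4, "big")
    return hmac.new(chain_code, data, hashlib.sha512).digest()[:32]

# Placeholder 33-byte compressed pubkey and 32-byte chain code.
tweak = bip32_child_tweak(b"\x02" + bytes(32), bytes(32), 0)
assert len(tweak) == 32
```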
58  Bitcoin / Development & Technical Discussion / Re: Keeping the addresses generated from deterministic wallet PubKey seed secret. on: December 09, 2015, 06:40:14 AM
You don't need hardened keys for this scenario. If you only publish the addresses, no-one can figure out the next in sequence.
59  Economy / Services / Re: Node.js decrypt file and verify gpg file on: December 08, 2015, 10:25:29 AM
You may want to take a look at a pure javascript implementation of open pgp, the standard behind gpg. https://github.com/openpgpjs/openpgpjs

An RSA key is usually packaged using PKCS #1, which defines the serialization of both public and secret keys.

60  Bitcoin / Development & Technical Discussion / Re: script_invalid.json and MINIMALDATA test vectors on: November 09, 2015, 01:39:49 AM
They pass because the test driver sends the minimaldata flag. It's the last field of the test data.
Your implementation could do the same thing.