Bitcoin Forum
  Show Posts
961  Alternate cryptocurrencies / Announcements (Altcoins) / Re: Declaration of Independence - Atomic Cross Chain Asset Standard on: March 04, 2016, 11:29:26 AM
Of course, if the external chain is some arbitrary ledger that can be updated at will by some corporate entity, then these security assumptions do not hold up. What is required is another rule in the external chain that after R permanent Ti blocks have passed, it is permanent.

Aha, so we just need to have a consensus to have consensus, got it.

You're begging the question. The whole point of this exercise is to make sure that transaction history cannot be rewritten. If you just assume that it cannot be rewritten, then you don't need all this Ti mumbo-jumbo.

Inclusion of Ti does not make it any harder to rewrite the history, as an attacker has access to all Ti and thus will be able to create a seemingly valid chain which references all Ti blocks but is completely different.

I think you've got things backwards: to improve security you need to reference alt-coin blocks from Bitcoin blocks, not the other way around. This is known as anchoring; it's already used in several projects (for example, Factom) and it can make alt-coin consensus as strong as Bitcoin's. (See here: https://bitcointalk.org/index.php?topic=313347.0)

I have not specified anything about how the non-bitcoin chains achieve consensus within themselves yet. For now, just assume they are doing what chains do currently. Presumably they are working fairly well.

By having consensus rules in the chains that provide both an "after" and "before" time relationship, you can narrow down when something happened, even across chains.

As I said, there are more details to work out and this is just the synchronization part. An analogy is a train system: we can ignore how the trains work internally when we are trying to make a travel plan; just assume the train will arrive when it says it will, i.e. 9am.

But what is 9am? To be able to go from trainA to trainB, both need a common time reference. So it is good you noticed that I did not deal with internal consensus; I haven't yet, as I assumed such things are possible, since we see so many blockchains around.

James
962  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] BitcoinDark (BTCD)--Financial_Privacy/SuperNET_Core/InstantDEX/PAX/Divs on: March 03, 2016, 08:57:05 PM
Friends, is it possible to make my own paper wallet for BTCD and other altcoins?
I don't mean an online wallet generator. Just a few wallets for my own usage where paper wallets are not available.

Thank you


https://walletgenerator.net/ now supports BTCD and you can create paper wallets there.

Thank you for providing the above link.
My question was whether it's possible to somehow create a paper wallet for any altcoin by myself,
e.g. to generate the public address and private key without using paper wallet generators.

There are altcoins without paper wallets, so I want to store my coins on paper.
iguana has an API that supports rosetta addresses. Internally, all the bitcoin and curve25519 coins use 256-bit privkeys.

This is mapped to a pubkey, and usually with an SHA256 front end you can create the high-entropy 256 bits to use for the privkey.

Each coin just has a different address type that is combined with the rmd160(sha256(pubkey)) to create the address. There are dozens of hash and hmac APIs to help with creating high-entropy privkeys, along with all the required converters from one format to another.

Not sure what functions you need, but whatever it is, there is already an iguana API for it, or it won't take long at all to make one.
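
Roughly, the address construction described above looks like the following (a minimal Python sketch, not the iguana API itself; it assumes hashlib exposes ripemd160 and that the pubkey has already been derived from the privkey with an EC library):

# address = base58check( address_type_byte || rmd160(sha256(pubkey)) )
import hashlib

def base58check(payload: bytes) -> str:
    alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    s = ""
    while n > 0:
        n, r = divmod(n, 58)
        s = alphabet[r] + s
    pad = len(data) - len(data.lstrip(b"\x00"))   # leading zero bytes encode as '1'
    return "1" * pad + s

def coin_address(pubkey: bytes, addrtype: int) -> str:
    h = hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()
    return base58check(bytes([addrtype]) + h)

# e.g. Bitcoin mainnet P2PKH uses address type 0x00; each altcoin uses its own byte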

docs.supernet.org
963  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 08:15:30 PM
If I use the block 290000 checkpoint to skip verifying sigs prior to that, could I still claim to be a fully validating node? (without being inaccurate)
What is a strong definition of "fully validating node"? (without referring to Satoshi client)
What would you do in case of a reorganization of the mainchain from block #289999? From block #200000? From block #1?
I am assuming I don't have to worry about Bitcoin reorganizing 100,000+ blocks. Do I?
964  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 07:47:24 PM
I am assuming only sigs after block 290000 need to be verified as long as the checkpoints match, or should all sigs be verified?
Up to you. Bitcoin is a voluntary system.
You have a right not to check signatures at all.
If I use the block 290000 checkpoint to skip verifying sigs prior to that, could I still claim to be a fully validating node? (without being inaccurate)
965  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 07:09:13 PM
You can sign a vin properly by multiple signers and get multiple signatures, each with a different sighash. What I assumed incorrectly was that all signers of a specific vin have to use the same sighash. Can you show me where it says that the same vin can be signed by different signers using different sighashes?
Generally the "says" is that you can signal it. Not that the bitcoin protocol is always a paragon of perfect design, but it should generally not be possible to signal something that shouldn't work-- when thats possible it reliably leads to bugs. Smiley so it shouldn't be done without good reason. Being able to control sighash per signature is pretty useful.

But fair enough; I just wanted to make sure you weren't confused in more profound ways, but I follow your thinking now.

Quote
I am making iguana able to act as a lossless codec that uses half the space with the sigs, and able to purge the sigs to get a 75% reduction in size; I estimate about 15GB for the dataset without sigs. Do you see any reason for a non-relaying node to keep around sigs that have already been verified?
There isn't-- though Bitcoin Core can already do this. It can run a full node, with transaction and block relaying (but not serving historical blocks) with less than 2GB of space; this isn't a default behavior yet because we have not yet implemented mechanisms for nodes to retain and locate random chunks of the history so that nodes can collectively support other nodes bootstrapping.
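
(For reference, that pruned mode is turned on with the prune option; a minimal bitcoin.conf sketch, with the target given in MiB and 550 the minimum allowed:)

# bitcoin.conf: keep only roughly the most recent 550 MiB of block files;
# the node still validates everything and relays new blocks and transactions,
# it just cannot serve historical blocks to bootstrapping peers
prune=550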
For a while I was definitely profoundly confused, but debugging the code has a way of clarifying things.

And the whole sequenceid thing is not really doing much, but the docs do say it doesn't really do anything.

Anyway, onward and upward, I am glad the pace of bitcoin improvements is much faster now.

I plan to implement parallel sync support to other iguana nodes based on each bundle (2000 blocks). That would need to include the sigs so they can be verified, but after that each non-relaying node can just purge them and save half the space. I am assuming only sigs after block 290000 need to be verified as long as the checkpoints match, or should all sigs be verified?

James
966  Bitcoin / Development & Technical Discussion / Re: Using compact indexes instead of hashes as identifiers. on: March 03, 2016, 05:14:15 PM
The problem is that a lot of the data is high entropy and not compressible. I believe I have identified and extracted most of the redundancy. Compressing the dataset is a one-time operation and it doesn't really take much time to decompress.

What seems incompressible can generally be compressed more if one shifts the order of characters around a file. For example: https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform (that's what bzip does, btw). In theory a compression system could use multiple such "tables" and try various configurations to find new redundancies, folding and re-folding the data again and again, but obviously I'm not going to ask you to invent such a system.

In any case, whatever degree of compression is achieved is good. If a compressed block, for example, can be reduced from 1 MB to 600 KB, it's a huge improvement for someone with a slow connection.
If you can compress rmd160(sha256()) and sha256(sha256()) output by any more than a few percent, it probably proves they are not of cryptographic strength and are open to plaintext attacks, among others. SHA256 is supposed to be high entropy, right? If so, it isn't compressible.

Given this, what remains is removing the duplication, i.e. the numvouts references to an identical txid as they all get spent, and the identical rmd160[20] referenced by reused addresses, along with the pubkeys when they are signed.

So what I do is create a structured dataset without the high-entropy parts and then compress just that part to achieve 75% compression. Using parallel sync, the time to sync that on a 20 mbps home connection would be less than 2 hours, but that leaves out the sigs, which double the size to about 30GB total. However, if we can skip all the sigs before block 290000, then probably 10GB+ of that is not needed.
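
A rough way to see the difference (an illustrative Python sketch, not iguana code): compressing cryptographic hash output gains essentially nothing, while the structured, de-duplicated part compresses heavily:

import hashlib, os, zlib

# 10,000 fake "hashes" vs 10,000 highly repetitive 32-byte records
hashes = b"".join(hashlib.sha256(os.urandom(32)).digest() for _ in range(10000))
structured = b"".join((i % 100).to_bytes(32, "big") for i in range(10000))

print(len(zlib.compress(hashes, 9)) / len(hashes))          # ~1.0, no gain
print(len(zlib.compress(structured, 9)) / len(structured))  # a small fraction of 1.0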

On the /r/Bitcoin front page there is a mention of this with a lot of details on the internals.

James
967  Bitcoin / Development & Technical Discussion / Re: Using compact indexes instead of hashes as identifiers. on: March 03, 2016, 04:07:13 PM
These are not just thoughts, but a description of a working system that syncs the entire blockchain and creates the above data structures in about half an hour if you have enough bandwidth and 8 cores. It is bandwidth limited, so on a slow home connection of 20 mbps it takes 6 hours for a full sync.

Bandwidth limited = I/O and CPU limited then?

We had a similar discussion in another thread, but anyway I want to write it more concisely here: I really believe scaling can be achieved through tradeoffs of what a user has and what he hasn't. Every machine will hit a bottleneck somewhere, the issue from that point onward is how to juggle the tradeoffs to bypass your bottleneck.

Examples:

-A user has a slow internet connection but plenty of CPU => He should have the option of downloading (and uploading - if he is running a node) very compressed blocks with other cooperating nodes that are willing to use compressed blocks.
-If the same user has slow CPU as well => He should be able to tap into GPU resources for parallel compression/decompression (it's feasible - I've seen PDFs with research on the subject of parallel compression on GPUs).
-If a user has slow I/O but fast CPU => Compress the blockchain for more local throughput
-If a user has low RAM but fast CPU => Compress the RAM (for things like compressed cache)
-If a user has slow I/O or low RAM but will bottleneck on CPU if he goes to a compressed scheme => tap into GPU
-If a user wants to increase cache size to increase performance, but lacks ram => Use compressed RAM, tap into more CPU, to reduce I/O.

Essentially you try to trade what you have for what you want to achieve. One-size-fits-all solutions won't work particularly well because one user has a 1 gbps connection, another has 4 mbps. One is running on a Xeon, the other is running on a laptop. One has 16 GB of memory, another has 2. One can tap into a GPU for parallel processing, another can't because he doesn't have one.

IMO the best scaling solutions will be adjustable so that bottlenecks can be identified in a particular system and make the necessary tradeoffs to compensate for deficiencies. Compression is definitely one of the tools that can balance these needs, whether one lacks RAM, disk space on their SSD, bandwidth on their network etc. And GPUs can definitely help in the processing task - whether the load is directly related to btc operations or compression/decompression.
The problem is that a lot of the data is high entropy and not compressible. I believe I have identified and extracted most of the redundancy. Compressing the dataset is a one-time operation and it doesn't really take much time to decompress.

The CPU is a bottleneck only if your connection is faster than 500 megabits/sec, probably half that since that figure is without verifying the sigs yet; but there is plenty of idle time during the sync.

With the bandwidth varying, we have 30 minutes to 6 hours of time to sync. The slower connection provides plenty of time for the CPU.

I eliminated the HDD as a bottleneck by eliminating most of the seeks and using an append-only set of files written directly as memory-mappable data structures.
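
(A toy sketch of the idea in Python - fixed-size records appended sequentially, then read straight out of a memory mapping by offset; this is not iguana's actual on-disk layout:)

import mmap, struct

RECSIZE = 40  # e.g. 32-byte txid + 8-byte value, purely illustrative

def append_records(path, records):
    with open(path, "ab") as f:               # append-only: sequential writes, no seeks
        for txid, value in records:
            f.write(struct.pack("<32sQ", txid, value))

def read_record(path, i):
    with open(path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        return struct.unpack_from("<32sQ", mm, i * RECSIZE)   # record i lives at a fixed offset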

Each bundle can then be verified independently, and a chain of the bundle summaries can then be used for a quick start, with the validity checked over the next 6 hours. You just need to not allow tx until it is fully verified, or limit tx to ones that are known to be valid, i.e. fresh addresses getting new funds sent to them.

I use adaptive methods so it goes as fast as the entire system is able to. After all, if you only have one core, then it will be the bottleneck in validating 100,000 blocks of sigs. The way to have different "sizes" is to have full nodes, validating nodes, truncated nodes (dump sigs after they are verified), and lite nodes.

James
968  Bitcoin / Development & Technical Discussion / Re: Using compact indexes instead of hashes as identifiers. on: March 03, 2016, 11:14:05 AM
Quote
What number of blocks is beyond the common-sense reorg danger? I know theoretically very long reorgs are possible, but from what I have seen it is not so common for any big reorg. I think by setting a delay of N blocks before creating the most recent bundle, the odds of having to regenerate that bundle would be very low. Since it takes about a minute to regenerate it, it isn't a giant disaster if it has to, but it is best to default things so it almost never has to.

For reference, there is a protocol rule which prohibits generated BTC (coinbase tx) from being spent until it has 100 confirmations. The older Satoshi client had an additional safety margin: it wouldn't let you spend generated BTC until it had 120 confirmations. Discussion surrounding locking coins for sidechains was in the 144-288 block depth range.
Thanks. Maybe 128? That is close to a full day and a power of 2.

I don't know the formula to calculate the odds of a reorg that big, but short of some sort of attack or a buggy mainchain release, it would seem to be quite a small chance if 10 blocks already gets to six sigma.
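
(For reference, the attacker catch-up probability from section 11 of the bitcoin whitepaper gives a rough feel for the odds; a small Python sketch, with q the attacker's share of hashpower and z the number of blocks that would have to be reorged:)

from math import exp, factorial

def attacker_success(q, z):
    p = 1.0 - q
    lam = z * (q / p)                      # expected attacker progress while z blocks are found
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

# e.g. with q = 0.10: already ~1e-6 at z = 10, astronomically small by z = 128
print(attacker_success(0.10, 10), attacker_success(0.10, 128))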
969  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 03, 2016, 10:34:01 AM
I noticed it as I found my code was assuming only a single SIGHASH type per vin, but clearly there can be multiple signers.
That is a concerning misunderstanding. I'm not sure how you'd think that-- the sighash in use is part of the ecdsa signature (input to checksig).
You can sign a vin properly by multiple signers and get multiple signatures, each with a different sighash. What I assumed incorrectly was that all signers of a specific vin have to use the same sighash. Can you show me where it says that the same vin can be signed by different signers using different sighashes?

I always used bitcoind for tx signing before, but now I am doing it all from scratch. The context is that I was preparing the data for the signing and noticed I had made the wrong assumption that all sighashes would be the same, which is true in 99%+ of cases.

Anyway, please don't be alarmed, it was just an informational field meant to show what sighash mode a vin used, but it wasn't being used by anything else, and it appears it would need to be a vector, which is probably not worth doing as I can just iterate through all the sigs and grab the last byte.
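
(For reference, a minimal sketch of that last-byte extraction - the sighash type is the single byte appended to each DER signature push, with ANYONECANPAY OR'ed in as a flag; the function name here is illustrative, not an iguana API:)

SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 1, 2, 3
SIGHASH_ANYONECANPAY = 0x80

def sighash_of(sig_push: bytes):
    hashtype = sig_push[-1]                  # last byte of the signature push in the scriptSig
    base = hashtype & 0x1f                   # ALL / NONE / SINGLE
    anyonecanpay = bool(hashtype & SIGHASH_ANYONECANPAY)
    return base, anyonecanpay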

I am making iguana able to act as a lossless codec that uses half the space with the sigs, and able to purge the sigs to get a 75% reduction in size; I estimate about 15GB for the dataset without sigs. Do you see any reason for a non-relaying node to keep around sigs that have already been verified?

James
970  Bitcoin / Development & Technical Discussion / Re: Unconfirmed transactions on: March 03, 2016, 02:44:56 AM
Unconfirmed bitcoins are worthless.
I'd never take them as a payment, no matter the sequence number.

People who think they can build a profitable business with unconfirmed bitcoin transactions are lunatics,
unless they aim to profit from stealing the money by fooling others into accepting zero-conf transactions.
So even an unconfirmed tx with a 0xffffffff sequenceid is too risky in your view.

Am I correct in concluding that an unconfirmed tx with a non-permanent sequenceid is even riskier, so any usage combining zeroconf and RBF is insane?

James
971  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 11:15:21 PM
But actually, the sig not validating still doesn't mean the vin is always invalid, right? The vout script could just pop all data off and push a true. So a vin without a valid sig could still be spent...

If you are doing it as part of a protocol, you would insist that the output scripts all match the expected template.

Either use CHECKSIGVERIFY or make sure CHECKSIG is the last operation to execute.
Certainly, when I am on the side of generating the scripts; but I am working on the core side now, so I have to understand the proper handling of arbitrary vins/vouts.
972  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 11:00:29 PM
OK, so I am not crazy in thinking this is a possible case of undefined behavior.

Well, I would be 90% sure that the sighash applies on a per signature basis, but when in doubt, it is worth testing.
I can't see a way that sighash isn't on a per-sig basis, as it is an intimate part of the signing.

I think I was thinking about it a bit backwards, as the sigs are the endpoint of the process and not an intermediate that affects anything, other than whether the vin is valid or not.

But actually, the sig not validating still doesn't mean the vin is always invalid, right? The vout script could just pop all data off and push a true. So a vin without a valid sig could still be spent...

Just the murkiness about crossing all possible vout spend scripts with all possible vins gave me pause.

James
973  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 08:22:46 PM
So unlike RBF, where enabling it in one vin propagates to the entire tx, the signatures are always self-contained regardless of how they are mixed.

I think so, but it is probably worth checking a few transactions with mixes of signatures on testnet.
OK, so I am not crazy in thinking this is a possible case of undefined behavior.

I noticed it as I found my code was assuming only a single SIGHASH type per vin, but clearly there can be multiple signers.

974  Bitcoin / Development & Technical Discussion / Re: Why blocktime decrease instead of blocksize increase is not discussed? on: March 02, 2016, 07:45:54 PM
I would suggest you all convert your BTC to nanosecond block time cryptocoins.

Oh wait...


The data we were given shows that the error rate for 5 minutes is very close to that for 10 minutes, and I think it was several-years-old data? If so, the safe blocktime would be even faster, probably in the 3-minute range. Does LTC have massive amounts of orphans?

James
975  Bitcoin / Development & Technical Discussion / Re: Why blocktime decrease instead of blocksize increase is not discussed? on: March 02, 2016, 07:44:18 PM
Supposedly, changing the block time is a more complex and bigger change than changing the block size.

I still think it's worth it to have a faster block time. 10 minutes is a cruel joke; just read about the many complaints that are happening in the real world:
https://www.reddit.com/r/Bitcoin/comments/48m9xq/average_confirmation_times/
Anything is more complex than changing the blocksize, since that's just changing a #define, right?

If anybody uses code complexity to reject a faster blocktime, they are not using a valid technical objection. Now, maybe it is possible to be more backward/forward compatible with a larger blocksize, but we are talking about a hardfork change, and in that context changing the blocktime falls into the really-easy-to-do category.
976  Bitcoin / Development & Technical Discussion / Re: Unconfirmed transactions on: March 02, 2016, 07:41:27 PM
RBF suddenly starts making sense, doesn't it?
Funny how many people were resisting the deployment of this feature, while now they are going to be the first ones to use it.
The anti-RBF people say it will kill zeroconf, but I can't figure out when you would ever want to combine zeroconf and a non-permanent sequenceid. It seems those two are not a good idea at all to mix.

The RBF property of making the entire tx RBF-enabled if just one of the inputs enables it, combined with the entire bitspace of sequenceid (other than the 2 reserved values), makes it seem like we changed sequenceid to mean RBF. I can't think of when you would use non-RBF for a CSV/CLTV, but I am not sure it would never happen.

James
977  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 05:01:31 PM
My question is AFTER it is validated, what does it mean?

Signatures are pass/fail, they don't have meaning other than that.  SIGHASH_SINGLE means that you just sign the output that you are interested in.  You are saying "This signature is valid for any transaction which pays to this output".

Assume you had 2 inputs and 2 outputs and you are checking input zero.

The first signature is sighash_single & anyone_can_pay.  That means that you only include output zero.  The anyone can pay means that you also delete all the inputs, except the one you are signing (input zero).  Those inputs and outputs are the only ones that are involved in the signature.

The person who signed that signature signed any transaction with that input and that output.

That means you can add inputs and outputs and the transaction will remain valid (as long as that input and that output are in the same position).

The second signature is sighash_all.  That person signed the entire transaction.  If you change anything about the transaction, then it becomes invalid.

In theory, the first person could create a one-input, one-output transaction and sign it with (sighash_single | anyone_can_pay). The second person can then take the transaction, modify it (adding the 2nd input and 2nd output), and then sign with sighash_all. That 2nd signature is also valid, but only if there are no more changes.
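
(A simplified sketch of which parts of the tx each mode commits to, following the 2-in/2-out example above - this is a model only, not the exact consensus serialization, which also substitutes the scriptCode and blanks fields rather than deleting them:)

SIGHASH_ALL, SIGHASH_NONE, SIGHASH_SINGLE = 1, 2, 3

def covered_parts(tx, vin_index, base, anyonecanpay):
    # inputs the signature commits to
    ins = [tx["vins"][vin_index]] if anyonecanpay else tx["vins"]
    # outputs the signature commits to
    if base == SIGHASH_ALL:
        outs = tx["vouts"]
    elif base == SIGHASH_SINGLE:
        outs = [tx["vouts"][vin_index]]    # only the output at the same index as the input
    else:                                   # SIGHASH_NONE
        outs = []
    return ins, outs

# Signer 1: SIGHASH_SINGLE | ANYONECANPAY on input 0 -> commits only to (input 0, output 0).
# Signer 2: SIGHASH_ALL -> commits to everything, so any later change invalidates only sig 2.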
So unlike RBF, where enabling it in one vin propagates to the entire tx, the signatures are always self-contained regardless of how they are mixed. In an M-of-N multisig it is possible that changing the tx will change the validity of some sigs, but as long as there are still M or more valid sigs, it is valid.

As long as there is no propagation beyond the current vin, I think it is ok. RBF in any vin affecting all vins just spooked me a bit, and I wasn't sure about the local scope of SIGHASH. I guess I overcomplicated things.

Thanks

James
978  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 04:47:32 PM
Thank you, but this is not the question. Assume the sigs verify; see the above response.
OK, the verify(signature, digest, pubkey) method returns either 1 or 0.
Note that this method doesn't depend on any other variables; it does not know anything about the satoshi sighash types.
The m-of-n msig script is valid if there are at least m positive verifies.
And how do you treat the tx if the M sigs all use different sighash types?

My confusion is not about the actual signing or verifying step (of course I was quite confused about that till I got it working), but rather how to interpret the resulting signed tx.

A tx has multiple inputs and multiple outputs. But the inputs and outputs range from being totally independent to totally dependent, and multiple sighash types within a single input seem to create a large number of possible implementations that all follow the rules yet all end up with a different result.

It is the semantics of the different SIGHASH types that still confuse me. SIGHASH_ALL is thankfully used for the majority and that is simple: the signature assumes everything stays the same, no changes allowed. But the others do allow the tx to be changed. So what happens when we have M of N sigs, some that allow the tx to change and others that don't?

Maybe there is some actual documentation that describes things clearly?

Is the rule simply that if all the sigs verify for an input, we treat that input as fully verified? Is there no possibility that a different SIGHASH type can invalidate things when it interacts with other SIGHASH types?

Take SIGHASH_SINGLE: it appears to require a 1:1 correspondence of vins/vouts, but the different vins are independent. So if one msig signer uses SIGHASH_SINGLE and another uses SIGHASH_ALL, then the SIGHASH_SINGLE sig would be valid for more txs than the SIGHASH_ALL sig, so it makes sense that if the tx changes in a way that invalidates the SIGHASH_ALL sig, the M of N is now M-1.

I think that makes sense, but all combinations of SIGHASH modes need to be well defined. I haven't done the full matrix of possibilities yet; that is why I posted. I was hoping that such a thing was already thought through and documented.

James
979  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 04:08:19 PM
I can verify it is a valid signature, but what happens if there is SIGHASH_ALL for one sig, SIGHASH_SINGLE for another, and SIGHASH_ANYONE_CAN_PAY for the third sig for the same input?
It is not a problem.
Let me give you some pseudocode:

byte[] pubkey = getPubkey ( );
byte[] param = getSignaturePlusHashtype ( );      // DER signature concatenated with 1-byte hash type
byte[] signature, int hashtype = param.split ( ); // get bare asn1 signature and the 1-byte hashtype
byte[] digest;
switch ( hashtype )                               // each signature gets its own digest, built for its own hashtype
{
  case SIGHASH_ALL   : digest = tx.createDigest ( all ); break;
  case SIGHASH_NONE  : digest = tx.createDigest ( none ); break;
  case SIGHASH_SINGLE: digest = tx.createDigest ( single ); break;
  // SIGHASH_ANYONECANPAY is a flag OR'ed on top of the above, handled the same way
}
verify ( signature, digest, pubkey )

Thank you, but this is not the question. Assume the sigs verify; see the above response.
980  Bitcoin / Development & Technical Discussion / Re: SIGHASH precedence for multisig? on: March 02, 2016, 04:07:27 PM
I can verify it is a valid signature, but what happens if there is SIGHASH_ALL for one sig, SIGHASH_SINGLE for another, and SIGHASH_ANYONE_CAN_PAY for the third sig for the same input?

You compute the transaction digest for each signature separately. You can't just work out the hash once; you have to do it for each of the signers (if they use different sighashes).
I understand this.

My question is AFTER it is validated, what does it mean?

Assume all the sigs verify. My understanding is that SIGHASH_SINGLE would affect things differently than SIGHASH_ALL, so consider a valid SIGHASH_SINGLE and a valid SIGHASH_ALL mixed together in the same vin (from different multisig signers). Do I treat that input as SIGHASH_SINGLE or SIGHASH_ALL as it pertains to the side effects that go beyond the scope of the vin itself? Or do I allocate half the funds as SIGHASH_ALL and half the funds as SIGHASH_SINGLE?

Do you see my problem?

For things like RBF, where it has dominant precedence, the presence of it in any input makes the entire tx RBF-enabled.

Similarly, if 1 of the N sigs is SIGHASH_xxx, does SIGHASH_xxx dominate SIGHASH_yyy? And which is the xxx and which the yyy? Maybe it's just a theoretical question and there aren't any actual multisig txs with different SIGHASH types for the same vin, but unless the behavior is explicitly defined, wouldn't this create a split network unless all the bitcoin cores follow the same rules?

If the rules are so ambiguous and confusing (or missing), then it seems this could even be a current attack vector, unless all current nodes are following identical rules. What are these rules?

James