Bitcoin Forum
  Show Posts
1  Bitcoin / Development & Technical Discussion / Re: Maleability testing? on: August 29, 2017, 12:56:24 PM
You could look at the malleability integration test in the Core repo:

https://github.com/bitcoin/bitcoin/blob/master/test/functional/txn_clone.py

2  Bitcoin / Development & Technical Discussion / Re: Sidechains and malleability on: July 18, 2017, 08:51:33 PM
Are there any sidechain ideas or proposals that depend on the ability to create non-malleable transactions?
Nope. Malleability isn't an issue for well-confirmed transactions, which is all that any sidechain proposals work with-- often 100 blocks or more. But that hasn't stopped people you associate with from dishonestly claiming that segwit exists to enable sidechains. I doubt having this further confirmed will suddenly spur you on to calling out their incorrect claims, since you don't for anything else.

That is also what I thought. I am sorry if I am not calling people out. That is not my purpose.

I asked because I was curious whether any connection between the two was defensible, since that seemed unlikely to me.

I am not going to "call people out" and I do not understand why you expect me to or why you seem to have problems with me not doing so.
3  Bitcoin / Development & Technical Discussion / Re: SPV client vs full node on Selfish Mining attack on: July 18, 2017, 07:51:55 PM
All a full node is "in charge" of is which blocks it accepts. All it can do is reject a block and not relay it. But the network is extremely well connected. Blocks will find their way anyway.
The blocks HAVE to go through a rogue node which has modified rules. Imagine if there are only 1000 nodes; one can easily set up 200 nodes and use modified rules on them. The higher the number of rogue nodes, the higher the chance an SPV client would connect to one and be disconnected from the actual network.

The higher the number of honest nodes, the harder it will be and the network will be more decentralised as a result.


An honest node doesn't help an SPV client except in the case where that client isn't connected to any other honest node. Otherwise the honest node can do no more than "not participate".

The notion that full nodes are helping seems to stem from a misunderstanding of the network. Everyone is connected to everyone in a few hops. An attacker needs only one peer in order for all the others to be pointless. The marginal gain for the network, in terms of shielding SPV clients from being sybilled by adding more verifying nodes, is completely negligible.

There are 1000 mining nodes. As I asked before, why would a full node improve the security? By preventing connections from these mining nodes to light clients? That is not possible.
4  Bitcoin / Development & Technical Discussion / Re: SPV client vs full node on Selfish Mining attack on: July 18, 2017, 01:56:15 PM
Specifically, consider a network with 1000 mining full nodes and 1000 wallets. Now add a non-mining full node. Is the network somehow more secure?
Yes. Assuming that you are talking about 1000 SPV wallets, full nodes will be important. SPV wallets rely on full nodes to deliver accurate information and have to trust them. This exposes them to a sybil attack, where a lot of nodes are created by a single entity and modified to follow a different chain or different rules. The more nodes there are, the more expensive such an attack will be.

SPV nodes verify the PoW. They don't have to trust their peers apart from needing at least one peer that doesn't withhold blocks or branches. The only thing these full nodes can do is not pass on a block they consider invalid, but the SPV node would then still receive it from a miner.
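
(As a rough illustration of what "verify the PoW" means here: a header-only client can check on its own that each 80-byte header meets its stated target. A minimal sketch, with helper names of my own, not taken from any particular wallet:)

Code:
import hashlib

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes block headers with SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def bits_to_target(bits: int) -> int:
    # Decode the compact "nBits" field into the full 256-bit target.
    # (Simplified: assumes exponent >= 3, which holds for real headers.)
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF
    return mantissa * (1 << (8 * (exponent - 3)))

def header_meets_pow(header80: bytes) -> bool:
    # 80-byte header: version(4) prev(32) merkle(32) time(4) bits(4) nonce(4),
    # all little-endian. The header hash, read as a little-endian integer,
    # must not exceed the target encoded in the bits field.
    assert len(header80) == 80
    bits = int.from_bytes(header80[72:76], "little")
    header_hash = int.from_bytes(double_sha256(header80), "little")
    return header_hash <= bits_to_target(bits)

That check only proves work was done on the header chain; it says nothing about whether the block's contents follow the rules, which is why withholding is the interesting attack rather than invalid blocks.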

Quote
The full node is a checkpoint on your own driveway. It doesn't secure the rest of the network in any way.
It does. They are in charge of verifying and enforcing the consensus rules. They help both the network and the light wallets in a way.

All a full node is "in charge" of is which blocks it accepts. All it can do is reject a block and not relay it. But the network is extremely well connected. Blocks will find their way anyway.

5  Bitcoin / Development & Technical Discussion / Re: Sidechains and malleability on: July 18, 2017, 10:13:31 AM

Because to peg a sidechain to the mainchain, someone, centralised or in a decentralised manner, must verify that a transaction met some conditions (e.g., that it locks the coins on the mainchain and is a certain number of blocks deep). If the transaction is malleable, it is difficult to achieve this programmatically since the txid could change. I think that depending on the design of the sidechain this can be taken care of, BUT although the solution might be programmable, it would most probably be centralised.

But the transaction is no longer malleable once it is on chain. Are you referring to a sidechain design which relies on maintaining off-chain pegging transactions? And if so, who maintains them? Do you have any references for such a design?
6  Bitcoin / Development & Technical Discussion / Re: SPV client vs full node on Selfish Mining attack on: July 18, 2017, 10:07:46 AM
Of course I understand running full nodes secures the network as a whole.
Why would this be the case?

Specifically, consider a network with 1000 mining full nodes and 1000 wallets. Now add a non-mining full node. Is the network somehow more secure?

The full node is a checkpoint on your own driveway. It doesn't secure the rest of the network in any way. And as you explain, it doesn't really make you more secure than SPV due to withholding/releasing attacks being much more effective than any invalid block attack.
7  Bitcoin / Development & Technical Discussion / Sidechains and malleability on: July 18, 2017, 09:32:29 AM
Are there any sidechain ideas or proposals that depend on the ability to create non-malleable transactions?

If so, why?
8  Bitcoin / Development & Technical Discussion / Re: If ECDSA is ever cracked/exploited/quantum computed ? on: May 18, 2017, 02:26:10 PM

However, let's imagine for a moment that ECDSA is broken in such a way that the time to crack a private key from a public key is reduced to 6 months.

If I always use a new address for every transaction, then all of my bitcoins are protected by SHA256 and RIPEMD160.

If you have an address that you've re-used, then you might have bitcoins sitting out there on the blockchain with their public key exposed.  An attacker can spend the next 6 months working out your private key and then steal your bitcoins.

If I send a transaction, the attacker has (on average) 10 minutes to figure out the private key, craft a replacement transaction that pays the bitcoins to him, and then convince a miner to mine his transaction instead of mine.

Which is safer?  Your bitcoins sitting on the blockchain with an exposed public key allowing the attacker to continuously try to craft a transaction that takes your bitcoins until you get around to sending them to a new address?  Or my bitcoins that have a window of 10 minutes on average to try to both crack the key AND convince a miner to accept a double-spend transaction in place of the existing one?

The increase in security from using a new address for every transaction is quite small, but it is still better than re-using addresses.

Using a new address for every transaction can also increase your privacy a bit.

I am not arguing that it isn't harder to steal, or that it doesn't increase privacy; that is obviously true.

But the value of Bitcoin depends on being able to transact securely. If there is a 6-month attack with independent trials, and there are 6 miners attacking, then every month some transactions will get stolen.

What would the value of Bitcoin be? Would anybody still give a dime for a Bitcoin in such scenario? What would be the use of being the "more secure" owner of a worthless coin?
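
(To put rough numbers on that, under my own back-of-the-envelope assumptions: a key takes 6 months of expected work per attacker to crack, trials are independent, a transaction is exposed for roughly one 10-minute block interval, and blocks carry about 2000 transactions:)

Code:
MINUTES_PER_MONTH = 30 * 24 * 60
expected_crack_minutes = 6 * MINUTES_PER_MONTH   # assumed 6-month break
exposure_minutes = 10                            # one average block interval
attackers = 6                                    # assumed attacking miners

# With independent trials, the chance one attacker cracks a given key
# within the exposure window is roughly exposure / expected time.
p_single = exposure_minutes / expected_crack_minutes
p_any = 1 - (1 - p_single) ** attackers

blocks_per_month = 6 * 24 * 30
txs_per_month = 2000 * blocks_per_month          # assumed ~2000 txs per block
print(f"p per transaction: {p_any:.2e}")
print(f"expected thefts per month: {p_any * txs_per_month:.0f}")
# With these assumed numbers that comes out to roughly 2000 stolen
# transactions per month -- the point being that the coin would be unusable.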
9  Bitcoin / Development & Technical Discussion / Re: SPV with simple Fraud Hints on: May 18, 2017, 08:45:34 AM
Can anybody help me with this?

I think we could even make it standard for SPV nodes to require the ancestors up to N blocks in the past, to further diminish the cost of false flags.

I don't understand how either false hinting or invalid blocks with withheld transactions could do harm that way.

Doesn't this make SPV fully resilient to any miner attack?
10  Bitcoin / Development & Technical Discussion / Re: If ECDSA is ever cracked/exploited/quantum computed ? on: May 16, 2017, 10:39:18 AM
I don't quite understand why hiding the public key behind a hash really helps.

If ECDSA is broken, that is, if a private key can be found from a public key in a limited amount of time, can't we assume that the time taken to find the private key consists of independent trials?

And if so, can't any node simply keep attempting to crack incoming transactions, stealing one every N days, making every transaction a gamble?
11  Bitcoin / Development & Technical Discussion / SPV with simple Fraud Hints on: May 16, 2017, 10:17:31 AM
I would like to better understand the problem of fraud proofs and false-flagging fraud hints with normal SPV (headers only + merkle branch).

I understand the two difficult cases:

* A transaction is included that references a non-existing output. The absence of the referenced transaction cannot be proven.
* The TXID of a non-existent transaction is included, and a transaction is included that references it. This also cannot be proven.

Now both these cases can only be *hinted* at, and it is said that to verify such (cheaply faked) hints, the SPV client has to fall back to being a full node.

But what if the SPV client simply registers the hint for a transaction in block N, and uses this hint to ensure that every received transaction in a block >= N triggers a request for all its ancestors up to block N-1 for verification?

Requesting ancestors seems like reasonable practice anyway, so this would make both false-flagging attacks and attacks using invalid blocks infeasible, while remaining protected by normal anti-DoS measures.

What am I missing here? Why does the SPV client need to fall back to being a full node?
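
(A toy sketch of the bookkeeping I have in mind; the names and the fetch callback are just illustrative, not a worked-out protocol:)

Code:
# Toy illustration of the hint bookkeeping described above.  `fetch` is a
# caller-supplied function (hypothetical) returning a transaction object
# with `.height` (confirmation height) and `.input_txids`, or None if no
# peer can produce it.
class HintingSPV:
    def __init__(self, fetch):
        self.fetch = fetch
        self.earliest_hint = None                # lowest hinted block height

    def register_hint(self, height):
        if self.earliest_hint is None or height < self.earliest_hint:
            self.earliest_hint = height

    def accept(self, tx):
        # No active hint, or the tx predates the hinted block: the normal
        # SPV proof (header chain + merkle branch, checked elsewhere) suffices.
        if self.earliest_hint is None or tx.height < self.earliest_hint:
            return True
        # Otherwise walk the inputs, fetching ancestors until every path
        # reaches a transaction confirmed before the hinted block.
        pending = [tx]
        while pending:
            for txid in pending.pop().input_txids:
                parent = self.fetch(txid)
                if parent is None:
                    return False                 # an ancestor cannot be produced
                if parent.height >= self.earliest_hint:
                    pending.append(parent)
        return True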
12  Bitcoin / Development & Technical Discussion / Use cases for SegWit script versioning on: May 11, 2017, 08:00:07 AM
I would like to understand the advantages of the script versioning introduced in SegWit.

As I understand it, script versioning allows you to change the meaning of existing opcodes, whereas currently we can only add opcodes.

Why is this needed? Isn't the power of this simple stack-based language that you can add *any* functionality simply by adding new opcodes (and can do so indefinitely with an OP_EXT), without the need for versioning?

Isn't that a very clean mechanism to update the language while retaining backwards compatibility? Especially since, due to its immutability, you can never deprecate old functionality.
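
(For reference, the versioning in question is carried directly in the output script: per BIP141 a witness program is one version opcode, OP_0 or OP_1 through OP_16, followed by a single 2-to-40-byte push. A rough sketch of picking that apart, with a helper name of my own:)

Code:
def parse_witness_program(script_pubkey: bytes):
    """Return (version, program) if the script is a witness program, else None.

    Per BIP141: one version opcode (OP_0 = 0x00, or OP_1..OP_16 = 0x51..0x60)
    followed by a single direct push of 2 to 40 bytes.  The version number
    selects how the witness is interpreted, which is what lets later soft
    forks assign new meaning to scripts carrying a higher version.
    """
    if len(script_pubkey) < 4 or len(script_pubkey) > 42:
        return None
    op_version, push_len = script_pubkey[0], script_pubkey[1]
    if op_version == 0x00:
        version = 0
    elif 0x51 <= op_version <= 0x60:
        version = op_version - 0x50
    else:
        return None
    if push_len < 2 or push_len > 40 or len(script_pubkey) != push_len + 2:
        return None
    return version, script_pubkey[2:]

# Example: a P2WPKH output is version 0 with a 20-byte program.
assert parse_witness_program(bytes([0x00, 0x14]) + b"\x11" * 20) == (0, b"\x11" * 20)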



13  Bitcoin / Development & Technical Discussion / Re: The case for moving from a 160 bit to a 256 bit Bitcoin address on: May 02, 2017, 03:26:12 PM

I did, but didn't get it; maybe I do now on rereading: we're talking about a collision of two P2SH addresses. That makes sense.
14  Bitcoin / Development & Technical Discussion / Re: The case for moving from a 160 bit to a 256 bit Bitcoin address on: May 02, 2017, 01:28:31 PM

So here is the interesting attack:  You give me your pubkey, and then I create my pubkey for a 2-of-2 (or some other more elaborate contract), and then we pay to the resulting address.

Oops.  In the background I did ~2^80 work and found a colliding address which didn't have the same policy, and I use it to steal the funds.

2^80 is a lot of work, but it isn't enough to be considered secure by current standards.

Forgive me, but I still don't quite get that. Where does the transaction with a different policy come from? If you only find two colliding addresses yourself, how can you use them for a contract that steals someone else's funds?

Could someone elaborate on this attack?

Besides, doesn't that require you to create 2^80 "proper" addresses, i.e. 2^80 keypair creations plus double hashings?
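(As a side note on the scale: the 2^80 is the birthday bound for a 160-bit hash; you only need any two inputs that collide, not a preimage of one specific value. A toy version with a deliberately truncated hash, so it actually finishes, shows the effect; in the real attack the inputs would be candidate scripts rather than counters:)

Code:
import hashlib

def toy_hash(data: bytes, bits: int) -> int:
    # Stand-in for a 160-bit address hash, truncated to `bits` bits so the
    # birthday search below finishes quickly on ordinary hardware.
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(bits: int):
    # Expected cost is about 2**(bits/2) trials (the birthday bound), not
    # 2**bits; for a 160-bit hash that is the quoted 2**80.
    seen = {}
    i = 0
    while True:
        h = toy_hash(i.to_bytes(8, "big"), bits)
        if h in seen:
            return seen[h], i, i + 1     # colliding inputs and trials used
        seen[h] = i
        i += 1

a, b, trials = birthday_collision(32)    # 32-bit toy: ~2**16 trials expected
print(f"inputs {a} and {b} collide after {trials} trials")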

15  Bitcoin / Development & Technical Discussion / Re: Fraud Proofs for SPV using the Spend Tree on: May 01, 2017, 11:14:04 AM
This is very exciting, or seems so... I'm surprised there's not more discussion here. Maybe people are studying your work.

It seems that this is key in decentralization and relevant to the scaling debate. 

I think it could be very beneficial if you had time to write more about
the security model of Bitcoin and how the various strengths of fraud proofs
can help ensure SPV clients are able to validate the network...and reduce
the need for most users to run full nodes.

Thank you Jonald,

There has been some discussion here: https://www.reddit.com/r/btc/comments/684l7w/fraud_proofs_for_spv_using_spend_tree/

I will definitely write more, although I am hoping to also have time to code :)

16  Bitcoin / Development & Technical Discussion / Fraud Proofs for SPV using the Spend Tree on: April 28, 2017, 07:51:09 PM
I have shown earlier how Bitcrust uses a Spend Tree to track the transaction graph and verify order.

I believe this same structure can also be used to tackle the difficult Fraud Proof problem by making the processing of hints indicating absent transactions virtually free.

I propose creating a "Fraud Proof SPV" node (FSPV) by syncing this Spend Tree. It is explained here: https://bitcrust.org/blog-fraud-proofs

Please let me know what you think,

Tomas van der Wansem
Bitcrust
17  Bitcoin / Development & Technical Discussion / Re: [bitcoin-dev] I do not support the BIP 148 UASF on: April 18, 2017, 08:35:58 AM

And unmodified non-segwit miners will not initiate a split under any condition.


I was under the impression that the concept of standard vs non-standard transactions was a matter of local policy, intended - among other things - to simplify softfork upgrades.

Every miner is still free to accept non-standard transactions, and can safely accept and include them as a service, for example to collect higher fees.

Do I understand correctly that you consider rejecting non-standard transactions a requirement for compliance?

Tomas
Bitcrust
18  Bitcoin / Development & Technical Discussion / Re: Fast parallel block validation without UTXO-index on: April 12, 2017, 11:22:37 AM
I don't understand the specifics of the code, so can you enlighten me? What you're saying sounds like semantics: "It's after in separate batch steps" sounds like it is still done when a block validates,

It's not semantics. When validating blocks, the vast majority of the time the UTXO set is not used by Bitcoin Core, because the UTXOs are already in the cache (due to the mempool); nor are the results of the block's changes written out-- they're buffered until the node flushes-- and very often they never get written at all, because the newly created outputs are spent by a later block before the system flushes them.

OP suggests that the structure he proposes reduces the delay at the time a block is processed compared to Bitcoin Core's UTXO database, but it shouldn't-- at least not for that reason-- because the database is only used at block processing times in the exceptional case where a transaction shows up by surprise.

This is semantics. I am aware that Core uses a cache to make most UTXO reads and writes faster, but I would still call them UTXO reads and writes (or CoinView reads/writes?), and as I understand it, they still need to be handled sequentially.

Any reasonable database will attempt to use RAM for frequently used items, regardless of whether this is managed by the application, by a dedicated database component or the OS.
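
(To make the caching mechanics we are both describing concrete: a toy write-back cache of my own, not Core's actual code. Entries created and then spent before a flush never touch the backing store at all, which is the behaviour described above.)

Code:
class WriteBackCoinCache:
    """Toy write-back cache over a backing store (a plain dict here).

    Outputs that are created and then spent before the next flush never
    reach the backing store at all, so at block-processing time most work
    happens purely in memory.
    """
    def __init__(self, backing: dict):
        self.backing = backing
        self.added = {}            # outputs created since the last flush
        self.spent = set()         # outputs spent since the last flush

    def add(self, outpoint, coin):
        self.added[outpoint] = coin

    def spend(self, outpoint):
        if outpoint in self.added:
            del self.added[outpoint]       # born and died inside the cache
        else:
            self.spent.add(outpoint)       # deleted from backing on flush

    def flush(self):
        for outpoint in self.spent:
            self.backing.pop(outpoint, None)
        self.backing.update(self.added)
        self.added.clear()
        self.spent.clear()

# An output created in one block and spent in the next before a flush
# never appears in the backing store:
db = {}
cache = WriteBackCoinCache(db)
cache.add(("txid_a", 0), "coin")
cache.spend(("txid_a", 0))
cache.flush()
assert db == {}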

19  Bitcoin / Development & Technical Discussion / Re: Fast parallel block validation without UTXO-index on: April 11, 2017, 09:09:35 AM
This is in contrast to Core's model, which needs sequential reads and writes to the UTXO index, even if all scripts are already verified.
This is not how Bitcoin Core works. Normally no disk access at all happens when a block is verified, reading or writing, except for transactions that have not yet been seen.

I am not talking about disk reads and writes, I am talking about UTXO index reads and writes.
20  Bitcoin / Development & Technical Discussion / Re: Fast parallel block validation without UTXO-index on: April 11, 2017, 07:51:53 AM
So you basically extract all the transactions from the blocks and store them in separate files (transactions1 + transactions2), giving each transaction a unique sequential "numeric index".

Then you have the index file (tx-index) where each different TXID points to the "numeric index" - that's actually your equivalent of the UTXO-index, except that it also includes spent transactions.

And then you have the spend-index (addressed by the "numeric index") which basically tells you whether a specific output of a specific transaction has been spent.

Is that correct?

Yes. Though you leave out the spend-tree.


First of all, you did not get rid of the UTXO-index.
You still have it.
You just extended it into a TXO-index, meaning that you index not only unspent transactions but also the spent ones.

Now, how is it possible that your code performs better than the leveldb solution used by Core...?

It has nothing to do with any "fast concurrent spend tree".
It has all to do with the (U)TXO index itself.
You use a hashmap as the index, which is much faster than the index used by core's leveldb engine.
But it also takes much more system memory.

I know, because I also use a hashmap-based UTXO index in my gocoin software.
So you don't have to tell me that it's much faster.
But it comes at the cost of memory - you can't run this with e.g. 2GB of RAM.
From my personal experience I'd say you need at least 8GB.
And because you index both spent and unspent transactions, you need even more memory than the UTXO-index solution you are trying to beat.

The bottom line is: there is no way this can perform better than a simple hashmap-based UTXO-index.

The important part is to split base-load transaction validation (when a transaction comes in) from peak-load order validation (when a block comes in).

For the first, this design is in itself not faster, as it indeed uses an index which includes spent outputs (though it can be pruned to be even more similar to the UTXO set). This is also much less relevant because, when no block is coming in, neither Core nor Bitcrust needs a lot of resources.

For the latter, when a block comes in, the output scripts are not needed as they have already been verified. The only things that need to be accessed are reads of the transaction index, which only maps hashes to file offsets, and reads and writes of the spend tree and spend index, which are very compact and concurrently accessible.

This is in contrast to Core's model, which needs sequential reads and writes to the UTXO index, even if all scripts are already verified.

Essentially Bitcrust simply ensures that the work and resources needed when a block comes in are minimized.
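
(To tie the pieces from the quoted summary together, a toy sketch of my own -- not Bitcrust's actual storage code, and like the summary it leaves out the spend tree. The point it illustrates is that at block time the work is hash-to-number lookups and compact flag updates, never script access:)

Code:
# Toy illustration of the tx store, tx-index and spend-index summarised
# earlier in the thread; not the actual Bitcrust implementation.
class ToyStore:
    def __init__(self):
        self.transactions = []   # append-only store; position = numeric index
        self.tx_index = {}       # txid -> numeric index (spent and unspent alike)
        self.spend_index = {}    # (numeric index, output #) -> spent?

    def add_transaction(self, txid, raw_tx, output_count):
        # Base load: done when the transaction comes in, scripts verified then.
        num = len(self.transactions)
        self.transactions.append(raw_tx)
        self.tx_index[txid] = num
        for vout in range(output_count):
            self.spend_index[(num, vout)] = False
        return num

    def spend(self, txid, vout):
        # Peak load: when a block comes in, only these compact lookups and
        # flag updates are needed; the scripts themselves are not touched.
        num = self.tx_index[txid]
        assert not self.spend_index[(num, vout)]   # double-spend / order check
        self.spend_index[(num, vout)] = True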