Show Posts
1  Bitcoin / Armory / Is the --satoshi-port command line argument disabled? on: November 13, 2018, 01:50:30 AM
I have a node running on a non-standard port and am trying to get Armory to connect to it.  It connects if I use the standard port, but not if I use a non-standard port.

I think there may be some issue with passing the command line argument over to the database process, but that is just a guess.
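
For concreteness, the invocation I expect to work is along these lines (the port number and script name here are just examples, not my exact setup):

Code:
bitcoind -port=18444
ArmoryQt.py --satoshi-port=18444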
2  Bitcoin / Armory / Safe file transfer via Ethernet on: January 04, 2018, 06:47:08 PM
I wrote an Ethernet library for transferring files from a Raspberry Pi over Ethernet.

It uses an "Ethernet Module ENC28J60" add-on board for the Ethernet connection.  These cost around $2.50 - $5.00.

This board connects to the Raspberry Pi via the GPIO (general purpose IO) pins.

The software handles the entire IP stack (Repo).

- GPIO driver (handles raw read and write to the Ethernet board)
- Ethernet driver (handles ethernet packets)
- IP Stack (handles ping and 1 TCP connection at a time)
- Web-server (handles uploading and downloading)
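
Roughly, the layers fit together like this (the module and function names below are purely illustrative placeholders, not the actual repo API):

Code:
# Illustrative sketch of the polling loop; all names are placeholders.
from enc28j60 import Enc28j60          # hypothetical GPIO/SPI driver for the board
from ipstack import IpStack            # hypothetical minimal IP stack
from webserver import WebServer        # hypothetical upload/download handler

nic = Enc28j60(ip="192.168.0.50")      # talks to the board directly over GPIO
stack = IpStack(nic, handler=WebServer(root="."))

while True:
    frame = nic.read_frame()           # raw Ethernet frame, no OS involvement
    if frame:
        reply = stack.handle(frame)    # ping plus one TCP connection at a time
        if reply:
            nic.write_frame(reply)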

The critical point is that the Ethernet board is not registered with the operating system in any way.

The board is controlled purely by the software.  You don't need to worry about the OS auto-running anything, since the OS is not involved.

This means that it is safe to connect the Ethernet board to your computer via Ethernet.

You can run it using:

Listen for file upload

./etransfer -i 192.168.0.50

Host my-file.txt

./etransfer -i 192.168.0.50 my-file.txt

If you don't give an IP address it uses one of the 192.254.*.* addresses.

The software is functional, but probably needs another pass at least (and maybe a re-write) to verify that it is actually secure.
3  Bitcoin / Armory / Closing ArmoryDB cleanly on: December 08, 2017, 11:53:30 PM
Is it possible to issue an instruction to ArmoryDB to do a clean shutdown?  Sometimes it doesn't close when Armory-Qt closes.

I am thinking something similar to

bitcoin-cli ... stop

Logs are here
4  Bitcoin / Armory / Ensuring valid OS image for offline computer on: December 05, 2017, 11:49:03 PM
When using a micro-computer for offline signing, the first step is to burn an SD card with the required image.

If your computer is compromised, then the OS for your offline computer could be compromised too.  You might have lost before you even start.

I was reading about a solution to the Trusting Trust problem called "Diverse Double-Compiling".

The key insight was that even suspect computers can perform cross checks.  If even 1 of the suspect computers is not compromised, then it can detect a problem.  In many cases, even if all the computers are compromised but with different malware, they may still detect the problem.

That tells you you have a problem even if it can't tell you where it is.

I was thinking that something similar could be done for OS images.

A computer is "secure" if it can read USB drives without auto-running anything and has no malware trying to corrupt the image.  It purely does an image check.

The process would be to have one computer designated the writer and all the remaining computers the checkers.

You have the writer write the image to multiple SD cards.  You pick one and have the remaining cards checked by randomly chosen checker computers.  If the writer writes a bad image, it will be detected as long as at least one checker is secure.

You can do this serially too.  You can write an image to the SD card and then have a random checker check it.  You can do as many loops as you want for security.

Malware would have to guess which round you are going to stop at.  If you do 100 loops, then it only has a 1% chance of guessing right.
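
A minimal sketch of the check a checker computer would run (the file and device names below are just examples):

Code:
import hashlib
import os

def sha256_of(path, length=None, chunk=1 << 20):
    """SHA-256 of a file or block device, optionally only the first `length` bytes."""
    h = hashlib.sha256()
    remaining = length
    with open(path, 'rb') as f:
        while remaining is None or remaining > 0:
            n = chunk if remaining is None else min(chunk, remaining)
            data = f.read(n)
            if not data:
                break
            h.update(data)
            if remaining is not None:
                remaining -= len(data)
    return h.hexdigest()

image = "offline-os.img"   # reference image (example name)
card = "/dev/sdb"          # SD card under test (example device)
# Only compare the first len(image) bytes: the card is bigger than the image.
print(sha256_of(image) == sha256_of(card, length=os.path.getsize(image)))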

This doesn't help with SD card firmware viruses though.
5  Bitcoin / Armory / Some feature requests on: November 30, 2017, 09:48:04 AM
I had a fun weekend last week trying to manage multiple coins with all the airdrops going on.

I wonder if the risk profile has changed.  As the price rises, the security level needs to be increased.

It occurs to me that a few features would be really useful and enhance security of the paper backups.

- Child wallets

This would involve two levels of deterministic wallet (might I say hierarchical).

When creating a new wallet, you could click one and say "create child wallet".

It would compute hash(parent.seed | index) and use that to create the new wallet.
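
In code the derivation is tiny; a sketch (the 4-byte big-endian index serialization is just an assumption, any fixed encoding would do):

Code:
import hashlib

def child_seed(parent_seed: bytes, index: int) -> bytes:
    # hash(parent.seed | index); compromise of a child reveals nothing about
    # the parent seed or sibling seeds, since SHA-256 is one-way.
    return hashlib.sha256(parent_seed + index.to_bytes(4, 'big')).digest()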

If a child wallet is compromised, it wouldn't affect other child wallets or the parent wallet.

The use case for this is to have multiple wallets for different coins all sourced from the same root.

The weakness is ensuring that the same child wallet index isn't created twice.  The offline computer could store the highest index used for each parent without much risk (or even the highest index used for any parent). 

Alternatively, the offline computer could pick a random index from 1 to 1 million.  By the birthday bound, collisions only become likely once around sqrt(1,000,000) ≈ 1,000 child wallets exist, so unless someone has that many, a collision isn't very likely.

It can also check any visible wallets for a collision.

The big advantage is that it allows export of all the private keys from an "expired" wallet without compromising the root paper backup.

The recommendation would be:

Move Bitcoin to new (child) wallet

Once that is confirmed, export all private keys for expired wallet.

- Sign from backup

This means that there is no local storage of private keys for high-value actual cold storage.

The wallet shouldn't be listed on any computer (offline or online).

The unsigned transaction would be read in and then you would be prompted for the data stored on the backup.

It would give the usual warnings and then sign the transaction.  The private key would be erased from memory.

- Signing session keys

This is again for high value signing.

I am thinking of something like

Create random "session key" and print/write to paper.

Get first paper backup

Encrypt the data on the first paper backup with the session key

Return first paper backup

Get second paper backup

Encrypt the data on the second paper backup with the session key

(and so on)

Once you have enough fragments

Enter session key

All fragments are loaded and decrypted

You can then do "sign from backup" for the transaction.

Destroy session key

Delete encrypted files (kind of optional at this point)
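
A rough sketch of the encrypt/decrypt steps above, assuming a standard authenticated-encryption primitive (Fernet from the Python cryptography package is used here purely as an example):

Code:
from cryptography.fernet import Fernet

session_key = Fernet.generate_key()        # print/write this to paper
box = Fernet(session_key)

# One at a time, as each paper backup is fetched and returned:
fragment1 = box.encrypt(b"<data from first paper backup>")
fragment2 = box.encrypt(b"<data from second paper backup>")

# Later, once enough fragments are collected, re-enter the session key:
box = Fernet(session_key)
secrets = [box.decrypt(f) for f in (fragment1, fragment2)]
# ...sign from backup, then destroy the session key and wipe `secrets`.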
6  Bitcoin / Mining / What restrictions are placed on the header by ASICs? on: August 10, 2017, 05:22:53 PM
What restrictions does mining hardware place on the Bitcoin block header?

Can you give mining hardware any 80 byte header and it will search it (updating the 4 nonce bytes and maybe the timestamp)?

Does the version field have to be positive?

Does the difficulty target have to be a reasonable value?

Does the previous hash have to have at least 32 zeros?

Does the merkle root have to encode at least a coinbase for the extraNonce?
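
For reference, this is the 80-byte header layout the questions are about (the values below are placeholders):

Code:
import struct

header = struct.pack(
    "<i32s32sIII",
    0x20000000,      # version (4 bytes, signed)
    b"\x00" * 32,    # previous block hash
    b"\x00" * 32,    # merkle root
    1502383373,      # timestamp (4 bytes)
    0x18014735,      # difficulty target in compact "bits" form
    0,               # nonce: the 4 bytes the hardware iterates over
)
assert len(header) == 80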
7  Bitcoin / Development & Technical Discussion / Release of the secp256k1 library? on: July 24, 2017, 03:39:44 PM
Is there a formal release of the secp256k1 library?  The github repo just has a master branch.

I assume the version included with Bitcoin Core 0.14.2 is stable/release quality, but they don't appear to have tagged it back on their repo?
8  Bitcoin / Development & Technical Discussion / Does OP_CODESEPARATOR serve any purpose? on: December 13, 2016, 10:19:37 AM
I was looking at the new signature method for segregated witness and it preserves the OP_CODESEPARATOR opcode.

Is there any actual use case for this opcode?
9  Bitcoin / Development & Technical Discussion / Permanent lightning channel without malleability fix on: December 11, 2016, 11:58:37 AM
This is a protocol for opening a permanent lightning channel even if transaction malleability is not fixed.  It only requires OP_CHECKLOCKTIMEVERIFY and OP_CHECKSEQUENCEVERIFY.

Both sides put down a deposit to eliminate the hold-out risk.  If either party refuses to proceed, either the other party is compensated or both get a full refund.

Step 1

Alice picks A1 and sends HA1, Hash(A1), to Bob.
Bob picks B1 and sends HB1, Hash(B1), to Alice.
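
A minimal sketch of the hash commitments (SHA-256 here is just an example of a suitable hash):

Code:
import hashlib, os

A1 = os.urandom(32)                    # Alice's secret
HA1 = hashlib.sha256(A1).digest()      # sent to Bob
B1 = os.urandom(32)                    # Bob's secret
HB1 = hashlib.sha256(B1).digest()      # sent to Alice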

They can both create this transaction.

Code:
TX1:
0) Pays 3x: (Alice + HA1 after T1) or (Bob after T1 + T2) or (2 of 2 Alice + Bob)
1) Pays x: (HB1 + Alice) or (2 of 2 Alice + Bob)
2) Pays x: (HA1 + Bob) or (2 of 2 Alice + Bob)
3) Pays 3x: (Bob + HB1 after T1) or (Alice after T1 + T2) or (2 of 2 Alice + Bob)
4) Pays Alice's change: (Alice)
5) Pays Bob's change: (Bob)

Alice and Bob create the transaction and sign and broadcast it.

Abort during step 1

If one party signs the transaction and then the other refuses, then the signer should spend at least one of their inputs to another address.  This invalidates the transaction.

Abort after step 1

Alice can wait T1 and then reclaim output 0.  Doing so reveals A1 (the preimage of HA1) to Bob, which allows him to claim output 2.

Likewise, Bob can wait T1 and then reclaim output 3.  Doing so reveals B1 to Alice, which allows her to claim output 1.

This is a full refund for all parties.

If Alice refuses to claim output 0, then Bob can claim it after [T1 + T2] (and output 3), which gives him 6x and Alice nothing. 

Likewise, if Bob refuses to claim output 3, then Alice gets 6x and Bob nothing.

Both parties are incentivized to comply with the abort at this stage.

Step 2

Alice and Bob create the initial channel state transaction.  This supports a channel close that gives x to Alice and x to Bob and spends outputs 1 and 2.

The initial state transaction is different for Bob and Alice due to the way Lightning works.  Only Alice has both signatures for her version and only Bob has both signatures for his version.

Step 2a

Alice signs Bob's version of the transaction and sends the signature to Bob.

Abort after step 2a

If Bob broadcasts his initial state transaction, then that counts as him closing the channel.  Alice gets her x refund and then can reclaim output 0 (after T1).

If he doesn't broadcast it, then she can just follow the same abort procedure from step 1.

Bob can abort using the procedure from step 1 or by broadcasting the initial state transaction.

In all cases, Alice and Bob get their money back.

Step 2b

Bob signs Alice's version of the transaction and sends the signature to Alice.

Abort after step 2b

Alice (or Bob) can simply close the channel so that both get x each from outputs 1 and 2.

Once the channel is closed, she can then claim output 0 safely (after 1 week).

Likewise, Bob gets x when the channel is closed (he can close it himself too) and then claims output 3 safely.

This gives both parties a full refund.

Step 3

Alice and Bob create a transaction which sends outputs 0 and 3 back to them.  This requires both outputs to be 2 of 2 signed (so 2 signatures each).

This transaction is broadcast.  If either party refuses to sign, then it counts as an abort after step 2b.

Once step 3 is finished, they now have established an initial lightning channel state with unlimited duration.

It is important that neither reveals their hash-lock preimage, as that would allow the other party to spend the funding output.
10  Bitcoin / Development & Technical Discussion / Using the confidential transaction sum for proof of reserves on: August 09, 2016, 01:57:28 PM
It is possible for an exchange to prove their total reserves using a Merkle tree approach, see here for the thread and here for a description.

With the Merkle tree system, it is possible to prove that the total of all the account balances is equal to the sum in the root of the tree.  This rests on the assumption that all users check their balances.

If they don't, then the tree is checked on a random sampling basis.

Confidential transactions enable proving that a list of numbers adds up to a given amount without actually revealing what the numbers are.  The only additional information it gives about the numbers is a range proof.

It says "all these numbers add up to X" and for each number "this number is between 0 and N inclusive".  This gives all the benefits of the tree sum. 

Exchanges could do something like the following.

At close of business on Fridays, the exchange emails all customers an individual message signed with their proof of reserve public key.

Code:
As of close of business on XX/YY/20ZZ, you have 22.01234567 coins.

Your customer unique id is 654321.

The blinding factor for your account this week is 4a3...23c715f.

The exchange then publishes a list of ids, balances (in confidential format) and range proofs.  It also has to publish the total of the balances and the sum of the blinding factors.

Code:
As of close of business on XX/YY/20ZZ, our customers balances are as follows:

000001, <balance as compressed point>, <range proof>
000002, <balance as compressed point>, <range proof>
....
071234, <balance as compressed point>, <range proof>

The total balance is 96532.87654321.

The blinding factor sum is <32 byte big endian integer>.

This combined message should also be signed by the exchange's proof of reserve public key.

This weekly document can be verified by anyone.  Elliptic curve maths is slower than just checking hashes, so it would be slower than the tree system.  On the plus side, the entire sum is checked, rather than relying on a random sample of people who actually check the tree.  At 10ms per entry, an exchange with 50,000 customers could be verified in under 10 minutes.  At least one of the 50k customers would probably check it weekly.
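
A toy sketch of the homomorphic sum check (using Pedersen commitments in a multiplicative group modulo a prime rather than the elliptic curve form that confidential transactions actually use, and omitting the range proofs):

Code:
import random

p = 2**127 - 1      # toy prime modulus
g, h = 3, 5         # in a real system h must have an unknown discrete log w.r.t. g

def commit(value, blind):
    return (pow(g, value, p) * pow(h, blind, p)) % p

balances = [2201234567, 100000, 7500000000]        # example balances in satoshis
blinds = [random.randrange(p - 1) for _ in balances]
commitments = [commit(v, r) for v, r in zip(balances, blinds)]

# The exchange publishes the per-customer commitments, the total, and the blind sum.
total, blind_sum = sum(balances), sum(blinds)

# Anyone can check that the published commitments really add up to the claimed total.
product = 1
for c in commitments:
    product = (product * c) % p
assert product == commit(total, blind_sum)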

This has two advantages over the Merkle approach

  • negative balances are impossible [*]
  • doesn't leak balance info to neighbors in the tree

By emailing all customers weekly, customers can prove what their balances should be.  Without that, a customer who detects fraud might be accused of falsely accusing the exchange.

It makes it easier for customers to go back and check their historical records.

A customer might be dormant on the exchange, but still vigilant in checking that their email was properly signed.

This makes it much harder for the exchange to pick out which customer balances they can safely tamper with.  Even if they find a "real" dormant account, there is always the risk that the customer might check their emails.

[*]  With the sum-tree, they can be hidden by collusion with customers.
11  Bitcoin / Development & Technical Discussion / Pruning and automatic checkpointing on: February 19, 2016, 07:50:34 PM
One of the disadvantages of pruning is that you have to re-download everything for re-indexing.

When downloading the blockchain, the reference client skips signature validation until it reaches the last checkpoint.  This greatly speeds up processing up to that point; after it, things slow down.

There is no loss in security from self-checkpointing.  When a block is validated, a checkpoint could be stored in the database.  This would be a "soft" checkpoint.  It would mean that signature validation doesn't have to be repeated up to that point.

In 0.12, the last checkpointed block has a height of 295000.  That block is over 18 months old.  Core has to fully verify all blocks received in the last 18 months.

When downloading, Core will download blocks 0 to 295000 without performing signature validation and then fully validate everything from 295000 to 399000.  If re-indexing is requested, it has to validate those ~100k blocks a second time.  There is no security value in doing that.

Instead, the client could record the hash for blocks that have already been validated.  If a block is more than 2016 blocks deep and has a height that is divisible by 5000, then soft checkpoint the block.
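
The rule is small enough to state directly in code:

Code:
def is_soft_checkpoint(block_height: int, tip_height: int) -> bool:
    # At least 2016 blocks deep and on a 5000-block boundary.
    return tip_height - block_height >= 2016 and block_height % 5000 == 0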

It is possible that the new signature validation library resolves this problem, which would make the whole issue moot.
12  Bitcoin / Development & Technical Discussion / Consensus supported sequence numbers on: February 17, 2016, 05:28:01 PM
Sequence numbers are currently not enforceable by Bitcoin.  If two transactions spend the same outputs, then the miner is supposed to pick the transaction (input) with the higher sequence number.  This cannot be enforced, so the miner will probably just pick the one with the highest fees.

They could be enforced if transaction replacement were possible in blocks.  With a hard fork, the rule could change so that transactions with locktimes in the future are allowed into blocks.  The locktime would prevent the outputs from being spent.  Afterwards, if someone broadcasts a transaction which double-spends some of the inputs, and all of those inputs have a higher sequence number, then it would effectively cancel the original transaction.  Once the locktime is reached, the outputs of the highest-sequence transaction could be spent.

It is possible to do this (less efficiently) with a soft fork.

The anchor/multisig transaction would have N outputs of the following form:

Code:
OP_IF
    <now + 40 days> OP_CHECKLOCKTIMEVERIFY OP_DROP <refund public key> OP_CHECKSIG
OP_ELSE
    <N> <N public keys> <N> OP_CHECKMULTISIGVERIFY
OP_ENDIF

The state update transactions would be N-input, N-output transactions with the following outputs:

Code:
OP_IF
    <now + 30 days> OP_CHECKLOCKTIMEVERIFY OP_DROP <new owner's public key> OP_CHECKSIG
OP_ELSE
    <sequence_number> OP_CHECKSEQUENCEVERIFY
OP_ENDIF

They would spend the first N outputs of the anchor transaction to N new outputs.  Each state update transaction acts as a link.  As long as it spends a link with a lower sequence number, it is a valid transaction.  Once the 30-day CLTV has passed, the resulting output can be spent.

There needs to be an allowance for some extra inputs into the transaction, in order to pay fees.  This could work similarly to SIGHASH_ANYONECANPAY.

OP_CHECKSEQUENCEVERIFY means that you can effectively use the parent transaction's scriptPubKeys instead of this transaction's.  Each lower sequence number transaction uses the public key from its parent in a chain back to the root/anchor transaction.

In pseudo-code, it does the following:

Note: max_sequence starts at 0xFFFFFFFF

if (sequence_number >= max_sequence)
  return FAIL;

if (txids of the first N inputs aren't all equal)
  return FAIL;

parent = getTransaction(TxIn[n].getPrevTransaction())

if (txids of the first N inputs of parent aren't all equal)
  return FAIL;

stack.pop(); // remove <sequence_number>
stack.pop(); // remove <1> for the IF

subScript.max_sequence = sequence_number
subScript.scriptSig = stack.copy()
subScript.scriptPubKey = parent.getPubKey()

transactionCopy = transaction with the first N txids replaced with the grandparent's txid

if (subScript.execute(transactionCopy) == FAIL)  // use transactionCopy for all scriptSig operations
  return FAIL;

proceed as if it was a NOP
13  Bitcoin / Development & Technical Discussion / Soft fork to implement SIGHASH_WITHINPUTVALUE on: August 30, 2015, 10:57:46 PM
SIGHASH_WITHINPUTVALUE is a hard fork proposal that lets a signature commit to the value of the output being spent, so the signature is only valid if that output has a particular value.

This helps with offline signing.  The offline signing device/software can sign a transaction and be sure of the total value of the coins being spent.

Under the current system, the full previous transaction for every input must be provided in order to verify the input values.  In some cases (very low memory), this may even be O(N^2).

The risk is that a signer might sign a transaction that pays a large amount to fees.  A transaction might spend a 100BTC output but only send 1BTC of it.  The other 99BTC would go to fees, and the offline signing tool would have no way of knowing this (without checking all the input transactions).

I suggest converting one of the OP_NOPs into an OP_INCREASEMAXFEE opcode.

The maximum fee for a transaction is the sum of all such values across its inputs.  If none of the inputs into the transaction have this opcode, then the maximum fee is infinite.

The opcode is only allowed in the scriptPubKey.
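
A sketch of the consensus check this implies (the function shape is illustrative, not a real interface):

Code:
def fee_within_limit(input_values, output_values, declared_max_fee_increments):
    """OP_INCREASEMAXFEE rule: if any spent output declared a value, the fee
    may not exceed the sum of the declared values; otherwise it is unlimited."""
    fee = sum(input_values) - sum(output_values)
    if fee < 0:
        return False                       # outputs exceed inputs: invalid anyway
    if not declared_max_fee_increments:
        return True                        # no OP_INCREASEMAXFEE inputs
    return fee <= sum(declared_max_fee_increments)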

A P2SH address could be of the following form

Code:
[<some value> OP_INCREASEMAXFEE OP_DROP <public key> OP_CHECKSIG]

Most wallets could send to such addresses (assuming they can handle a P2SH address as a destination).

This requires that users estimate what a reasonable fee would be when they create the transaction, but that shouldn't be a big problem.

A less complex version suggested in the original thread is to use an OP_CHECKZEROFEEVERIFY opcode to force the transaction fee to zero and pay the actual fee to an OP_TRUE output.

The fee estimate could overestimate the actual fee, since the risk of corruption is reasonably small.  Worst case, you end up paying 5x-10x the fee you were expecting.

With OP_INCREASEMAXFEE, that can still be used.  Do mining pools credit transactions that pay to OP_TRUE?
14  Bitcoin / Development & Technical Discussion / Economic majority voting on: August 20, 2015, 04:18:09 PM
In the block size fork, the proposal is to have the miners vote.  However, the users of the system (merchants/exchanges/clients) are the ones who have control in the end.  The choice made by them is the one that has the support of the economic majority.

One way to measure that choice is to see which fork has the coin with the highest value.  If a fork is not very popular, then the coins on that fork will be less valuable.

Assuming that the XT fork activates, then the XT fork will have the support of 75% of the hashing power and the core fork will have at most 25%.  If the core fork still has reasonable hashing support, then it would continue, though with around 45 mins block time.

If any of the inputs into a transaction spend a coinbase output from one of the forks (or any of its descendants), then it can only be included on that fork.  This allows trading coins between the two forks.

Exchanges could add BTC-XT and BTC-core coins to allow trading between the two forks.  This would clearly show which fork had the support of the economic majority.  If the loser doesn't go to zero, then at least some people want to keep it going.

This trading cannot happen until the fork itself happens, so can't be used to tell in advance which side has the support of the economic majority.

This could be rectified with a soft fork.  It works similarly to colored coins.  Outputs carry information about who can spend them on each fork.  Each fork has an id based on the fork deadline.

Fork-id = 0 means the core chain
Fork-id = deadline means fork chain

The deadline is the unix timestamp of when the hard fork is going to happen.  Each fork would have to pick its own switch moment, but that isn't much of a restriction.  It is not likely we are going to have more "serious" hard fork proposals than one per second.

The soft fork allows the user to specify who owns the output in each of the potential forks.

A user can spend their money to the following output script.  This is a template match like P2SH.

Code:
OP_IF
    <fork_id1> <fork_id2> OP_DROP OP_DROP OP_HASH160 <hash script 1> OP_EQUAL
OP_ENDIF
OP_IF
     <fork_id3> OP_DROP OP_HASH160 <hash script 2> OP_EQUAL
OP_ENDIF

This assigns the output as follows: in forks 1 and 2, the owner of script 1 owns the output; in fork 3, the owner of script 2 owns the output.

To spend an output, you include

Code:
<signature for script N> <script N> OP_1

The OP_IF activates and the script is checked like with P2SH.

You can spend multiple rows. 

To skip a row, you just include

Code:
OP_0

This causes the OP_IF to skip that row.

For each fork id, the sum of the inputs must be greater than or equal to the sum of the outputs for that fork id.

All signed inputs can be consolidated as desired (and committed to fees).  Unsigned inputs must have outputs equal to their inputs, since it isn't your money to commit to fees.  If two inputs pay to the same script hash, they can be consolidated.
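
A sketch of the per-fork-id accounting rule (the input/output representation here is illustrative):

Code:
from collections import defaultdict

def fork_amounts_conserved(inputs, outputs):
    """`inputs` and `outputs` are (fork_id, amount) pairs.
    For every fork id, the inputs must cover the outputs."""
    in_sum = defaultdict(int)
    out_sum = defaultdict(int)
    for fork_id, amount in inputs:
        in_sum[fork_id] += amount
    for fork_id, amount in outputs:
        out_sum[fork_id] += amount
    return all(in_sum[fid] >= amount for fid, amount in out_sum.items())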

Once the deadline has passed, it is possible to convert encumbered coins back into normal outputs. 

In the core chain, once the median timestamp is greater than the deadline, only the fork id = 0 row matters and nobody else is allowed to spend the output.  On the fork chain, only the row that pays to the deadline fork id is spendable.

This allows users of legacy BTC to convert x BTC into x BTC-Core and x BTC-XT in advance of the fork.  These can be traded on exchanges to determine the view of the economic majority.

If most users think large blocks will destroy Bitcoin, they can convert their BTC to BTC-Core.  On the other hand, if they feel that a block size increase is necessary, they can convert their BTC to BTC-XT.  Both sides would be putting their money where their mouth is. 

The most valuable coin isn't necessarily the winner.  If both sides maintain some value, then the fork will split the coin.  On the other hand, if one coin is trading at a few percent of the price of the other, then the economic majority would have given a clear verdict.
15  Bitcoin / Development & Technical Discussion / Securing the network against miner fraud on: July 17, 2015, 11:57:12 AM
Encouraging more people to run full nodes is often suggested as a good way to improve the security of the network.

This is intended to protect the network from miners creating invalid blocks and violating the network rules (keeping miners honest).

In the extreme case, if all merchants and their customers used SPV clients, then a majority of miners could set any rules that they wish for the network.

Everyone would just find the longest chain and not care about validity of the blocks.

Full Nodes

Running a full node means that you fully check all transactions and blocks before forwarding them.

Adding another full node doesn't actually help that much in protecting against miners.

Your node will just fall behind the rest of the network, since it won't follow the longest (invalid) chain.  None of the SPV clients will bother with your node, since you can't give them info on the latest blocks.

On the other hand, if merchants refuse to accept transactions on the longest (but invalid) chain, then that creates the incentive for miners to properly follow the protocol rules.

For full nodes, the encouragement should be for merchants to run their own full nodes.

Full nodes also help SPV clients for transaction lookup, so they aren't worthless, but it doesn't add to security against miners.

SPV Clients

SPV clients are users of the network, so there is an incentive to make sure that blocks are acceptable to them.  But, by their nature, they don't verify blocks themselves.

Fraud Proofs

A fraud proof is a short (hopefully less than 100kB) proof that a block is invalid.  Even for 1GB blocks, the fraud proof would be about the same size.  Fraud proofs generally scale with the log of the maximum block size, O(log(block_size)).

In the worst case, they scale with the square of the maximum transaction size (a maximum-size transaction with all of its inputs coming from maximum-size transactions).  In practice, fraud proofs would likely scale linearly with the maximum transaction size.

The advantage of fraud proofs is that you can prove to SPV clients that a block is invalid.

If 75% of the miners want to violate the protocol rules, anyone running a full node can broadcast a fraud proof.

When the miner cartel broadcasts their invalid block, full nodes broadcast the block header and fraud proof.  SPV-nodes can quickly check the header and fraud proof.  This means the fraud proof could propagate faster than the bad block.  Full nodes which receive the fraud proof before the block wouldn't have to validate the block, since they have proof that it is invalid.
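
Much of a fraud proof boils down to Merkle-branch checks of the kind SPV clients already perform; a sketch (standard double-SHA256 over the internal byte order):

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(tx_hash, branch, index, merkle_root):
    """Check that tx_hash sits at position `index` in the tree committed to by
    merkle_root; `branch` is the list of sibling hashes from leaf to root."""
    h = tx_hash
    for sibling in branch:
        h = dsha256(sibling + h) if index & 1 else dsha256(h + sibling)
        index >>= 1
    return h == merkle_root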

Block Withholding Attack

If the miners' cartel just broadcasts the block headers, then it isn't possible to generate a fraud proof. 

Since the full nodes haven't received the full blocks, they can't check them.  This means that they can't produce fraud proofs.

The mining cartel must send the merkle path for any transactions it wants to show to SPV clients.  This means the block would slowly be broadcast.  A fraud proof broadcast 3 months later would damage trust in the network, since all SPV clients would reverse all transactions since then.

Publication Verification Nodes

These are nodes which verify that blocks have actually been published.

They do minimal verification.  They just check the POW and the merkle tree.  They don't even need to check whether the transactions are properly formatted.

If they don't delete old blocks, then they would be (non-validating) archive nodes.

In practice, they need to store some of the blocks so that they can forward them.  This is necessary to prove that they have actually been published.

DOS protection is provided by the block POW.  They would only download blocks that are on the longest chain and that they haven't seen before.

The relay system in use for bitcoin works this way. With fast relaying, it decreases the need for miners to build empty blocks on block headers.  SPV-mining timeouts can be set lower.

Full System

Merchants running publication verification nodes would create an incentive for miners to actually publish their blocks.

These nodes are pretty cheap for a merchant to run.  Even only storing the 10 most recent blocks would give pretty good proof that the block was actually published, in full, to the world.

Ideally, some merchants would run full nodes too.

Once block publication is proven, fraud proofs can be used to keep all the miners honest.

With fraud proofs, SPV clients are almost as secure as full nodes.  You only need a small number of honest peers to keep the whole system honest.  There could be a delay before the fraud proof is broadcast, so low confirms should be considered less safe.

Risks

Fraud proofs are potentially risky for the network.  They are, in effect, a re-implementation of the network rules.

If there was a bug in the fraud proof system, someone might be able to produce a proof of fraud for a valid block.  This would allow an attack on the network where months of transactions could be reversed by waiting and then submitting the proof of fraud. 

The opposite problem is also possible.  The miners' cartel could create an invalid block without it being possible to notify SPV clients.

Ideally, every invalid block should have a fraud proof and no fraud proof should be possible for a valid block.  Achieving that, and proving it is achieved, are the hard parts.

SPV clients should probably go into emergency mode if they receive a block revert for a block that is more than 10-20 blocks deep and be directed to find out what has actually happened.  This covers fraud proofs for valid blocks.

Invalid blocks without fraud proofs could be handled by an alert.

Summary

  • Running full nodes alone doesn't protect against miner collusion
  • Full nodes run by merchants do help
  • SPV Clients inherently trust miners
  • Fraud proofs allow SPV clients to reject invalid chains
  • Fraud proofs are vulnerable to miners withholding full block data
  • Publication verification nodes protect against withholding attacks
16  Bitcoin / Development & Technical Discussion / Safe(r) Deterministic Tie Breaking on: July 16, 2015, 03:12:36 PM
When selecting a block to build on, miners are supposed to use these rules.

  • Mine on the longest valid chain
  • Break ties in favour of the earliest received full block

The problem is that the 2nd rule requires information on when each block was received.

If two blocks are found at around the same time and broadcast, roughly half of the miners will see one block first and half the other.  The network wouldn't agree on which block is the tip of the longest chain.

Some nodes would see one confirm for transactions in one block and some would see one confirm for transactions in the other block.  

When the next block is found, the tie will likely be broken.

A deterministic way to break ties means that the entire network will instantly agree on which block to build on.

The problem with this approach is that it can lead to miners withholding some blocks.

If the target was 10 million, then each valid block would have a hash ranging from 0 to 10 million.  If a miner hits a block and gets a low hash (say 1 million), then they can have high confidence that they will be able to win any tie break.

They could hold back their block and release it when someone else sends a new block.  The other person will probably have a hash above 1 million (90% chance), so they will probably win the tie.  

This means that their competitors end up wasting time searching for a block just to lose the tie immediately.

The problem is that miners can tell if the block that they just found has a good chance of winning the tie break.

This can be avoided by using both hashes to determine the winner.

The strength of a block can be defined as

Sn = [2*Hn - sum(Hk)] mod T

T is the target and Hk is the hash of the kth block involved in the tie.

Each time a new block is added to the set, a random number is effectively added to sum(Hk).  This means that all strengths are effectively re-randomized and all of the blocks involved in the tie have an equal chance of winning.

For the two block case, the strengths are

S1 = [H1 - H2] mod T
S2 = [H2 - H1] mod T

The block with the highest strength wins.
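
A quick sketch of the strength calculation:

Code:
def strengths(hashes, target):
    """S_n = (2*H_n - sum(H_k)) mod T for every block involved in the tie."""
    total = sum(hashes)
    return [(2 * h - total) % target for h in hashes]

# Two-block tie with T = 10 million: the low-hash block does not automatically win.
print(strengths([1_000_000, 7_500_000], 10_000_000))   # [3500000, 6500000]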

This means that a miner has no way to know if the block that they just found is going to win the tie break.  No matter what the hash of the block, there is always a 50% chance of winning any resulting tie.
17  Bitcoin / Development & Technical Discussion / Random block size and the fee market on: July 11, 2015, 09:36:47 AM
With a constant block size, your transaction is either above or below the BTC per byte threshold.  

If it's above, then it will likely get included in the next block.  If not, it could end up stuck for ages.

Simply increasing the block size removes the incentive to pay fees.

A random block size would keep the fee incentive while increasing the average block size.  Higher fees mean greater chance of the next block being large enough to include your transaction.

Since a block size rule change means a hard fork anyway, hard forking changes are possible.

As an example, the merkle root could be changed to be a merkle root of the merkle roots of 16 sub-blocks.  Only the first sub-block would have a coinbase.

The bottom 4 bits of the block hash would be used to decide how many of the blocks are actually valid.  

This is compatible with SPV for the branches.  SPV clients would need to be updated anyway to make sure they look at the nibble to determine which paths to actually accept.

If the bits were 0111, then 7 of the blocks would be valid.  The miner would send the transactions in all 7 sub-blocks and only the merkle roots of the remaining 9.

Each of the 16 sub-blocks would have a different fee threshold.  If you only pay enough to get into sub-block 15, then it would take around 16 blocks before your transaction is included.

A low probability of a very large block would clear the memory pool every so often.

nibble) total block size

0) 1MB
1) 1MB
...
6) 1MB
7) 1MB
8) 2MB
...
11) 2MB
12) 4MB
13) 4MB
14) 8MB
15) 16MB

This gives an average of 3MB per block, but still allows a 16MB block every so often.
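
A sketch of the size lookup (which byte of the hash carries the "bottom" nibble is an assumption here; the average falls straight out of the table):

Code:
SIZE_BY_NIBBLE_MB = [1]*8 + [2]*4 + [4]*2 + [8] + [16]   # the table above

def allowed_block_size_mb(block_hash: bytes) -> int:
    nibble = block_hash[-1] & 0x0F        # bottom 4 bits of the block hash
    return SIZE_BY_NIBBLE_MB[nibble]

print(sum(SIZE_BY_NIBBLE_MB) / 16)        # -> 3.0 MB average block size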

It could be extended past 16MB, with 32MB having a 1/32 chance of being allowed.

If the large blocks had a probability higher than the inverse of their size, then it can push up the average size even more, while not having much effect on the fast transaction fee market.

This helps with having stable relay rules.  The relay rule could be that your transaction must pay enough to get into the 16MB block to be relayed (maybe 150% of the fee to get into the 16MB block), but users would still have an incentive to have higher fees.

This protects against DOS, since any transaction sent with 150% of the 16MB block fee threshold will eventually get included.  You can't spam with fees that are high enough to be relayed but too low to be included in blocks.
18  Bitcoin / Development & Technical Discussion / Double spend protection with replace by fee on: July 10, 2015, 02:40:11 PM
If a significant number of miners run full replace-by-fee, then it becomes easier to carry out a double spend against zero-confirm acceptance.

The attacker pays for and then immediately receives the item.  The attacker can then create a new transaction that spends the same outputs but with a higher fee and sends the money back to an address that he controls.

Miners with replace-by-fee will mine that new transaction instead of the original, and thus the attacker gets the item and also (most of) their money back.

A double spend protection service could be offered by a transaction aggregator.

The merchant would send the original transaction to the aggregator and get a YES/NO reply.  If the aggregator has already received a transaction which spends one of the inputs, then it will refuse to accept the transaction.

Otherwise, it will accept the transaction and add it to its set of pending transactions.  The aggregator would charge a fee for the service.

The service creates a passthrough transaction for the pending transaction.  Each output has a matched input.

A -> A'
B -> B'
...
F -> F'
X -> X'

Fee: 1% of total value

Each passthrough transaction's X input spends the X' output of the previous passthrough transaction (value = 0).  This means that a block cannot include a passthrough transaction unless all the previous ones are included.

When a new block is found, the chain is started again.

This has the effect of providing a fee for miners who are willing to include all the transactions in the set.  If they leave any out, then they get none of the extra fees.

If the aggregator combines 100 transactions per block and pays 1% fees, then the cost of not using the aggregator's transaction is about the same as the total value of one transaction.

Even if the attacker sends 100% of the transaction to fees, he only matches the "bid" of the aggregator's transaction.

As more people use the aggregator's system, it is even harder for a double spender.

If multiple aggregators exist, they could work together.  By referencing each other's transactions, they increase the effective number of transactions in the combined set.

This has the nice feature of eliminating the need for making contracts with individual pools.  All miners would see the aggregation transactions and could include them.

This may need child-pays-for-parent depending on exactly what full replace by fee actually does.

On the negative side, it means extra transaction space is used up.  It is only needed for zero-confirm transactions.
19  Bitcoin / Development & Technical Discussion / Multi-pass block validation on: June 25, 2015, 05:07:37 PM
Proof of publication systems require much less complex validation by miners.  Miners just need to check the merkle tree and the POW.  Invalid transactions don't invalidate the block.

This allows stream validation of blocks.  You can start forwarding the block to your peers even before receiving the complete block.  Miners should not build on a block until it is fully received though.

Even SPV nodes could help with proof of publication block forwarding (if the smartphone was charging overnight and had a wireless link).

Proof of publication assumes that >50% of miners are 'honest'.  In this context, it means that they will not build on a block that they have not seen in its entirety and also that they will make the blocks available publicly.

This gives an ordering of transactions (including invalid ones).  A full node would scan all the transactions, in order, and simply discard any invalid ones.

With this system, SPV nodes can't assume that the transactions in the block are valid.  Proving a transaction exists doesn't mean that the transaction is valid.

With a multi-pass system, SPV clients can get some of the benefits restored and historical blocks wouldn't need to store invalid transactions.

This could be implemented as a soft fork.

Each bitcoin block would store the hash of the proof of publication block that is 16 blocks ahead of it.

The bitcoin block can be pre-computed and the hash added to its coinbase.  If you know the publication chain, you can work out the bitcoin block.

Miners just mine the proof of publication chain and naturally produce standard bitcoin blocks that are consistent with the proof of publication chain.  Miners have over 2.5 hours to work out what transactions are in the proof of publication block.

The tail of the bitcoin chain would be 2.5 hours behind the publication chain.  If your transaction gets into the publication chain, then you know it will end up in the bitcoin chain.  Legacy clients would have to wait 2.5 hours though.

The proof of publication blocks would only need to be held for a short time, so they could have a higher size limit.  This would allow miners to include more transactions than are necessary into the block and repeats would be automatically discarded as part of generating the bitcoin block.

For transactions that are more than 2.5 hours old, SPV nodes can check the bitcoin chain.  For newer transactions, they have to trust their peers not to give them invalid information.

If they checked with 8 peers for an outpoint, then at least one of them would probably report the first time that output was spent.  This gives them some double spend protection.

The proof of publication chain could be arranged to run faster than the bitcoin chain.  Hitting the lower difficulty target advances the proof of publication chain, while hitting the standard bitcoin difficulty advances both the bitcoin height and the proof of publication chain height.  Something like the Nimblecoin system could be used to drop the block rate to a few seconds.

If miner fees are paid out of channel (like via a payment channel), miners don't even need to check whether the transaction is valid.  Block space is auctioned, which protects against a DoS attack.

Normal fees still work though, and even with mini-blocks, the calculated bitcoin block could share the mint fee between all miners on the publication chain.
20  Bitcoin / Development & Technical Discussion / Block relay when pruning on: May 27, 2015, 09:30:39 PM
I was looking at the block relay code in master.

Code:
            // Don't relay blocks if pruning -- could cause a peer to try to download, resulting
            // in a stalled download if the block file is pruned before the request.
            if (nLocalServices & NODE_NETWORK) {
                LOCK(cs_vNodes);
                BOOST_FOREACH(CNode* pnode, vNodes)
                    if (chainActive.Height() > (pnode->nStartingHeight != -1 ? pnode->nStartingHeight - 2000 : nBlockEstimate))
                        pnode->PushInventory(CInv(MSG_BLOCK, hashNewTip));
            }

When a new block is added/connected, each node informs its peers about the new block.

With this code, pruning nodes won't do that.  I don't think that is a good idea.  During the initial download, the node shouldn't inform peers about new blocks, but once synced, there is no reason not to.

The rule could be: if the block header extends the main chain, the main chain has passed the final checkpoint (or has POW > some_threshold), and the (median-of-11) timestamp for the header is less than 6 hours old, then a pruning node should inform its peers about the block.  Leaving them in the dark about a new block seems worse.