Bitcoin Forum
  Show Posts
1  Bitcoin / Press / Re: 2021-01-04 Blockstream Debuts Open-source Hardware Bitcoin Wallet on: January 05, 2021, 03:24:55 AM
What's the point of a hardware wallet if you still need permission from some third party to be allowed to use your coins?

Exactly this. I will never trust a device that someone else holds the key for - ever. This is a fail in all respects & I urge everyone to avoid it until they change this anti-feature.

Trezor for the win.

We also have a single-sig option coming soon.

Note that it's considered (e.g. by Coin Center) that blockstream.com/green is non-custodial, because you can spend by yourself after a timelock (with the default 2-of-2 + timelock config).

There is also a 2-of-3 config (create a sub-wallet, pick 2-of-3) which allows you to spend immediately.

The function of the server key is to provide multisig-enforced 2FA to protect your funds.
2  Bitcoin / Press / Re: 2021-01-04 Blockstream Debuts Open-source Hardware Bitcoin Wallet on: January 05, 2021, 03:22:01 AM
I was going to say it's just another hardware wallet until I read that it has both a camera and a screen, which allow a truly air-gapped wallet. They have already released the source code for the firmware (https://github.com/Blockstream/jade/), but I couldn't find any detail of the hardware components they use.

So far it's more promising than most hardware wallets; I hope it can be a competitor to Ledger and Trezor.

See the Reddit AMA from earlier today; it has some more hardware and feature Q&A: https://www.reddit.com/r/Bitcoin/comments/kqgehd/were_the_blockstream_team_and_we_just_announced/
3  Bitcoin / Press / Re: [10/10/2018] Blockstream Liquid Sidechain Solution for Bitcoin Network Goes Live on: November 04, 2018, 11:50:48 AM
Can someone post links to more in-depth articles on this project? So far the biggest criticisms I've read have been about centralization, but honestly, I do not understand why this would be bad. The idea of the project is simply to provide liquidity to large players and thereby help make the price the right one. This could even decrease the volatility that occurs on some exchanges with lower volume. The most interesting thing is that by charging a fee, the project will have to prove itself much better than the current arbitrage solutions.

This podcast has more detail: https://letstalkbitcoin.com/blog/post/the-bitcoin-game-60-dr-adam-back-part-2-liquid
4  Other / Meta / Re: Timeline of Bitcointalk and Important Events of Bitcoin Journey on: June 02, 2018, 05:38:04 PM

I'm confused regarding Hashcash: Satoshi mentions it in the white paper with a date reference to 2002, but some sources suggest Adam Back published it in 1997.


That is because the paper was written 5 years after hashcash was released. See the first sentence of the abstract, or citation [1] in the paper: "Hashcash was originally proposed as a mechanism to throttle systematic abuse of un-metered internet resources such as email, and anonymous remailers in May 1997."

[1] http://hashcash.org/papers/announce.txt
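The mechanism itself is simple to sketch: a stamp is valid if its hash has enough leading zero bits, so minting costs many hash attempts while checking costs one. A toy illustration (using SHA-256 and a simplified stamp format, rather than the SHA-1 and full stamp spec of the original):

```python
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        # count leading zeros within the first nonzero byte
        while byte & 0x80 == 0:
            bits += 1
            byte <<= 1
        break
    return bits

def mint(resource: str, difficulty: int) -> str:
    """Find a counter such that sha256(resource:counter) has
    `difficulty` leading zero bits -- the costly step (~2^difficulty
    hash attempts expected)."""
    for counter in count():
        stamp = f"{resource}:{counter}"
        if leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= difficulty:
            return stamp

def check(stamp: str, difficulty: int) -> bool:
    """Verification is a single hash -- cheap and asymmetric."""
    return leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= difficulty

stamp = mint("adam@example.com", 16)
assert check(stamp, 16)
```

The asymmetry between minting and checking is the whole point: the recipient's cost stays constant while the sender's cost scales exponentially with the difficulty parameter.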
5  Bitcoin / Development & Technical Discussion / Re: Using the confidential transaction sum for proof of reserves on: August 10, 2016, 11:15:37 AM
[confidential transactions] would require a hard fork. The output value would need to be replaced with EC points. You also need range proof support.


Actually you can soft-fork CT.  https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-January/012194.html

The main challenge is size. I believe I have a way to reduce the range proof from 2.5kB per proof to 2kB per proof, but it is still large.

MimbleWimble is also interesting, in making an aggregatable CT which allows the bloat to be more than reclaimed, at least as far as catch-up goes. (More total bandwidth is used, but less bandwidth to catch up, as catching up becomes proportional to the UTXO set size plus a smaller overhead per historic transaction.) Maybe the historic transaction overhead can be removed.
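To make "the output value would need to be replaced with EC points" concrete: in CT each amount is hidden in a Pedersen commitment, and commitments add homomorphically, which is what makes sums (and hence proof-of-reserves style totals) publicly checkable without revealing individual amounts. A toy sketch in a multiplicative group rather than an elliptic curve; here `h` is derived with a known discrete log, fine for illustration but insecure in practice:

```python
import hashlib
import secrets

# Toy group: RFC 2409 Oakley group 2, a 1024-bit safe prime.
# Real CT uses secp256k1 points; this is an illustration only.
p = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF", 16)
g = 4
# Second generator; in real CT its discrete log w.r.t. g must be unknown.
h = pow(g, int.from_bytes(hashlib.sha256(b"nothing up my sleeve").digest(), "big"), p)

def commit(value: int, blind: int) -> int:
    """Pedersen commitment C = g^value * h^blind: hiding (blind is
    random) and additively homomorphic in the committed values."""
    return pow(g, value, p) * pow(h, blind, p) % p

# Two confidential outputs of 30 and 12...
r1 = secrets.randbelow(2**128)
r2 = secrets.randbelow(2**128)
c1, c2 = commit(30, r1), commit(12, r2)

# ...and a verifier checks the sum relation without learning 30 or 12:
assert c1 * c2 % p == commit(42, r1 + r2)
```

The range proofs discussed above are needed precisely because this homomorphism alone would let someone commit to a negative value and inflate the supply.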

6  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 04, 2016, 12:00:19 AM
I wouldn't say they are 'broken'; there is a lot of misunderstanding about it, and due to the level of censure taking place on bitcointalk it's difficult to get a clear picture of what it is about on this forum.

We might need to think about moving forum, with some of the people proposing the ideas apparently being "moderator incompatible". Technically I don't think it is accurate to say the clarity has been hurt by deletions directly; as far as I saw and remember the contents of now-deleted posts, no technical comments were deleted. What has possibly hurt clarity is the refusal of people to participate because of the potential for moderation, or clashes of egos and moderators.

I do sympathize and dislike moderation myself, though there is a little irony in Peter R's deleted comment containing a proud link to his own heavy trolling of /r/bitcoin, which is not exactly the way to encourage people to divert their time to review your proposal (moderator or not!).

I started a thread on /r/btc

https://www.reddit.com/r/btc/comments/3zc6qg/review_of_shelling_point_protocol_selection_ideas/

and typed up a summary of the ideas explained so far (including the one below, which I had already read before starting the thread we're in).

Quote
So may I respectfully point interested parties to the BU 'white paper' (without having this post moderated/deleted as a result) here, where you can digest the finer details.

I had read that one before starting the thread. It seems to be a different idea again, not mentioned in this thread. Maybe we should be analysing the set of features that BU proposes to combine.

An immediate observation on empty-block ratios is that this appears not to work in the face of four existing network behaviours: SPV mining, the relay network, big pools, and selfish mining.

Quote
Additionally, the FAQ provides a higher level description of key topics. This should remove some of the guesswork which has characterised  this thread.

Don't recall if I have read that one yet.

Quote
Then present your thoughts or arguments on the unmoderated  /r/btc reddit or directly to bitco.in where they can be discussed.

Ha, after some thought I figured /r/btc is the next best option given the egos and moderators clashing; yes, see above.

Adam
7  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 03, 2016, 04:39:47 PM
The idea of enabling nodes & miners to set a market block size is quite reasonable, so there is no criticism of the idea. Don't take review of the mechanism as a critique of the idea: for ideas to be deployed we need game-theory-reviewed protocols and rigorously tested implementations. Dynamic block size is actually on the core roadmap, and the best proposal I've seen for it is flexcap by GMaxwell & Maaku, with some ideas from Jeff. You can watch a video about flexcap presented at Scaling Bitcoin HK. Maaku has code for the core parts of it; I believe he was going to publish it, probably online by now.

If we want nodes to dynamically set a blocksize limit, in a way determined by the market, we should use a proposal like BIP100.  BIP100 actually allows miners to dynamically set a blocksize limit and agree with each other on a new limit, BU has no system to enable nodes to agree on the limit.

The precursor idea was BIP "100", which Jeff retracted. The BIP "100" proposal is similar, but only miners vote; in flexcap both users and miners vote.

I would suggest that people interested in the idea of dynamic blocks learn about BIP "100" and flexcap and see if they can improve them. There are design considerations that have been refined between BIP "100" and the improvement on it, flexcap.

The bitcoin unlimited project has presented some ideas which do try to automate things. Unfortunately, all of the ones so far seem to be defective: they suffer sybil attacks and constant centralisation pressure, and take no account of SPV mining, shared pools, existing relay-network practice, nor selfish-mining attacks.

I have not really analysed the idea of validating two chains, but it seems likely to have problems based on intuition, particularly in the areas of race conditions, chain sharding and divergence risk, and in an adversarial environment.

Bear in mind that the consensus mechanism is extremely fragile; it only just works, in the sense that there are many close design variants that completely fail. Most variants I tried I self-broke fairly immediately. But some of these things take a long time to realise, or require review from GMaxwell or others to disprove; for example, selfish mining was not noticed for years. I did spend about 3-4 months cooking up and analysing mining variants to improve bitcoin mining centralisation (e.g. I invented GHOST, but rejected it as over-complex for what it achieved, before the academic paper proposed it, along with a bunch of other variants), before getting into side-chains.

The idea for users to vote by delaying block relay won't work, because most miners are already using the relay network or SPV mining. Over 50% of the network was SPV mining during the 4th of July fork, and a large portion of miners use the relay network.

Users voting by advertisement won't work because of sybil attacks, as others have explained.

You can read flexcap to see how it combines miner and user voting in a secure, game-theory-stable way that defends against all these attacks.

In summary:

1. The use case: dynamic, market-set block sizes are interesting.

2. The bitcoin unlimited proposals so far seem broken, as discussed by multiple people for a whole range of reasons. We didn't have a crisp definition, and it seems that some things may be undecided. That's OK; just keep working on it, make a concrete proposal later, and people can analyse it from there.

3. BIP "100" seemed plausible, but was only miner meta-incentive secure, meaning we would be trusting miners to do the right thing, limited only by their commitment not to do anything too selfish for fear of hurting bitcoin's long-term value.

4. Flexcap adds user voting (in transactions) and an economic quadratic feedback mechanism to create an incentive to right-size blocks (to deter miner zero-sum attacks against other miners and to curtail the continuous centralisation pressure). Flexcap also ensures miner fee profit in conditions where mining fees could otherwise be driven to zero by excess capacity, as in non-dynamic block-size growth proposals like BIP-103.
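To illustrate the quadratic feedback idea in point 4, here is a toy model only; the constants and function names are invented for illustration and this is not the actual flexcap specification or Maaku's code:

```python
# Toy model of flexcap-style quadratic feedback (NOT the real spec):
# a miner may exceed the baseline size, but forfeits reward
# quadratically in the excess, so oversizing only pays when fee
# demand is genuinely high; zero-fee spam blocks stay unprofitable.

BASELINE = 1_000_000       # baseline block size in bytes (toy value)
PENALTY_COEFF = 2e-9       # fee units forfeited per excess-byte^2 (invented)

def penalty(size: int) -> float:
    """Reward forfeited for exceeding the baseline."""
    excess = max(0, size - BASELINE)
    return PENALTY_COEFF * excess * excess

def best_size(fee_per_byte: float, max_size: int, step: int = 50_000) -> int:
    """Block size maximizing extra fees collected minus the penalty."""
    candidates = range(BASELINE, max_size + 1, step)
    return max(candidates, key=lambda s: fee_per_byte * (s - BASELINE) - penalty(s))

# Low fee demand: the penalty dominates, blocks stay at the baseline.
assert best_size(fee_per_byte=1e-9, max_size=4_000_000) == BASELINE
# High fee demand: it becomes rational to grow the block somewhat.
assert best_size(fee_per_byte=8e-3, max_size=4_000_000) > BASELINE
```

The convexity is what creates the feedback: the marginal cost of each extra byte rises with block size, so block growth tracks genuine fee demand instead of growing without bound.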

[EDIT: I suppose the other thing is it might be better to run experiments on testnet rather than bitcoin, or to put clear warnings for users if you have not. People could lose bitcoin running partial implementations of incomplete ideas. Encouraging users who don't understand that this is a research project to run experimental code with real bitcoin under its control, or even on the same machine, would be inadvisable.]

Adam
8  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 03, 2016, 12:56:07 AM
Here are two posts from testing1567 on reddit (he says he doesn't have a bitcointalk account):

Quote from: testing1567
I prefer to respond to you on here because I don't have an account on either forum. I am not a believer in Bitcoin Unlimited myself, but I do feel that I have a fair understanding of the concepts and intent behind it and they do have some good ideas.

Quote from: adam3us
So what happens if I left my node at 1MB +10% user threshold and a 1.2MB block comes - does my node reject it? How will the network not split into a myriad little shards which diverge following accidental and/or intentional double-spends without manual human coordination?

In your example your node is set to accept 1mb + 10% (1.1mb?). If you were to receive a 2mb block, your node would accept it, but it wouldn't relay it. It would continue to follow the <1.1mb chain, but it would also monitor the 2mb fork. BU has a secondary user-adjusted parameter for determining max block size: you can set a maximum fork length. Your BU client will continue to accept blocks for both forks, but will only relay transactions and blocks for your <1.1mb chain, so it is not blind to the existence of the fork and will warn you of discrepancies between the two. However, if the 2mb fork gets more than your maximum fork length ahead of your preferred chain, your client will abandon the 1.1mb chain in favor of the longer one. So if your max fork length was set to 24, then you would stick to the 1.1mb fork until another fork becomes more than 24 blocks longer. This ensures that any overriding of your max settings can only come with a majority of the hashing power behind the move.

Miners, in theory, don't have complete control either. A miner would need to consider the orphan risk before creating a large block. This orphan risk is intended to be an emergent property of the network created by individual node operators setting their preferred max block size. Maybe creating a 1.3mb block is fairly safe if the included fees are high enough to risk the orphan, but risking it on an 8mb block could be an almost guaranteed orphan. Every time a miner creates a larger block, it is a calculated risk. We may even see varying mining pools emerge based on people's risk/reward tolerance levels, particularly when the block reward is minimal and the miners are relying on cramming in as many transaction fees as possible to get paid.

It essentially turns the hard blocksize limit into a soft limit that can be overruled with enough sustained hashing power. The idea is rather than fighting to prevent fragmentation and forking by setting a hard limit, it embraces forking and attempts to manage it in an automated fashion while fragmentation exists and eventually converges on a single fork if it has sustained miner support. In theory, a wallet that is aware of multiple forks can ensure that you are not cheated.

As I said before, I don't completely agree with BU. I have some issues with the logic behind it, but it does have its merits. I'm going to reply to my own post here and talk about what I consider the negatives to BU, but I want to list the positives here. I love the concept of monitoring alternate forks and converging on one if it gets to a certain length ahead of the rest. I personally think that this feature could be very useful even in Bitcoin Core. Imagine using this method but with the variables hard coded to 1mb and a hard coded max fork length rather than being user adjustable. You would essentially be turning any future blocksize increase fork into a much less scary thing. In reality, a blocksize fork needs to happen eventually, regardless of whether it is forced through by BIP101 or is planned on and agreed to years from now. If these features could be refined and implemented into Core, it would allow for a smoother transition without all the emergency upgrades and damage control.

and another one:

Quote from: testing1567
My main issue with Bitcoin Unlimited is how it will handle merging into a new fork. Let's say that I'm at 1mb max and a 1.01mb block is made and remains the largest block in the new fork. What does my client set its max blocksize to? Is it 1.01mb? What if a 1.02mb block is created right after I merge into the new longest fork? Will my client be out of sync again until the fork grows longer? I'll probably be out of sync a lot unless I manually go into my client and raise the limit to give it some buffer area. I feel like it would be too easy for the miners to basically bully the node operators to push the blocksize higher, especially with a majority of the miners in one physical region. The only thing holding miners back would be the orphan risk, and I'm not even sure that can affect them; it would be trivial for mining pools to build their own block relay network (which I think they have already). My other issue with BU is it lacks a way to move the blocksize down, only up.

I personally think that they should be supported in their efforts, because their attempts at automated fork management could eventually benefit everyone even if it never succeeds as a method of setting the blocksize limit.
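The excessive-block / acceptance-depth behaviour described in the first quote can be sketched as follows. This is my reconstruction of that description, not Bitcoin Unlimited's actual code, and parameter names like `accept_depth` are invented:

```python
class BUNode:
    """Toy model of the described policy: follow the size-compliant
    chain, track (but don't relay) an oversized fork, and capitulate
    once that fork pulls more than `accept_depth` blocks ahead."""

    def __init__(self, size_limit: int, accept_depth: int):
        self.size_limit = size_limit      # e.g. 1.1 MB: 1 MB + 10%
        self.accept_depth = accept_depth  # e.g. 24 blocks
        self.my_height = 0                # tip of the compliant chain
        self.fork_height = 0              # tip of the oversized fork

    def on_block(self, height: int, size: int) -> str:
        if size <= self.size_limit:
            self.my_height = max(self.my_height, height)
        else:
            # track, but do not relay, the oversized chain
            self.fork_height = max(self.fork_height, height)
        if self.fork_height - self.my_height > self.accept_depth:
            # majority hashpower has sustained the bigger blocks: switch
            self.my_height = self.fork_height
            return "switched"
        return "following"

node = BUNode(size_limit=1_100_000, accept_depth=24)
node.on_block(100, 900_000)                            # normal block
assert node.on_block(101, 2_000_000) == "following"    # oversized: tracked only
for h in range(102, 126):                              # fork extends 24 more
    status = node.on_block(h, 2_000_000)
assert status == "switched"
```

The second quote's open question is visible in the sketch: after "switched", what should `size_limit` become? The model leaves it unchanged, which is exactly the re-desync problem testing1567 raises.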

Mod note: fixed quote links
9  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 02, 2016, 07:41:01 PM
From what I understand, BU moves the block size limit from consensus rules to a node policy rule. Instead of having the limit hard coded in, the user chooses their own block size limit. Also if a BU node detects a blockchain that has a higher block size (up to a certain user configurable threshold), after that chain is a number of blocks deep (user configurable), then it will switch to use that blockchain and set its block size limit higher.

So what happens if I left my node at 1MB +10% user threshold and a 1.2MB block comes - does my node reject it?

How will the network not split into a myriad little shards which diverge following accidental and/or intentional double-spends without manual human coordination?

Adam
10  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 02, 2016, 07:27:16 PM
If you can stay on topic as I suggested ("To make progress on review it would be helpful to separate technical from political opinions"), I don't see a problem. People have been discussing bitcoin NG ideas on here for years.

Are you able to explain what BU is and how you think it works?  I gave some reviewer questions in the OP.

Adam
11  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 02, 2016, 07:18:17 PM
More review of both core and BU is recommended and encouraged.

Agreed; I was kind of hoping Aquentys would be able to explain what it is and how they think it works. It saves reviewing time when people take the time to explain their assumptions.

Adam


Here are some links from kanzure on IRC (Aquentys started a discussion but wanted to continue on a forum for persistence):

Quote
here is where peter rizun admitted that his assumptions in his "fee market" paper were totally broken: http://pastebin.com/jFgkk8M3
20:05 this pastebin paste was made by the same person too
20:06 here is some basic argumentation about big blocks and how increasing resource requirements kick off low-resource participants https://www.reddit.com/r/Bitcoin/comments/3yvkep/devs_are_strongly_against_increasing_the/cyhv7ev
here is why it doesn't matter if transaction fees can pay for big block orphan risk: https://www.reddit.com/r/Bitcoin/comments/3yod27/greg_maxwell_was_wrong_transaction_fees_can_pay/cyfluso
20:07 here is why peter rizun's unhealthy fee market doesn't actually control block size https://np.reddit.com/r/btc/comments/3xkok3/reduce_orphaning_risk_and_improve/cy60r4y
20:08 here is why it is uninteresting to have a bunch of high-bandwidth miners having consensus just among themselves https://www.reddit.com/r/Bitcoin/comments/3ycizh/decentralizing_development_can_we_make_bitcoins/cycex9t
20:09 here is a roadmap for bitcoin core scalability increases that bitcoin core developers have been working on http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
20:09 in particular for frequently asked questions see https://bitcoin.org/en/bitcoin-core/capacity-increases-faq

Adam
12  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 02, 2016, 07:13:58 PM
More review of both core and BU is recommended and encouraged.

Agreed; I was kind of hoping Aquentys would be able to explain what it is and how they think it works. It saves reviewing time when people take the time to explain their assumptions.

Adam
13  Bitcoin / Development & Technical Discussion / Re: bitcoin "unlimited" seeks review on: January 02, 2016, 06:30:32 PM
If you want info about Bitcoin Unlimited you should check out their forum (https://bitco.in/forum/forums/bitcoin-unlimited.15/). Most of their discussion is there, both technical and not.

The idea was to have some technically-focused, constructive discourse, and this is a more neutral forum, and also where more Bitcoin experts hang out.

Adam
14  Bitcoin / Development & Technical Discussion / bitcoin "unlimited" seeks review on: January 02, 2016, 05:58:17 PM
The proposers of bitcoin unlimited said they would like to get some review, which seems reasonable, if others would like to help.

The proposal seems at first skim to be a copy of a few existing technologies from Bitcoin's roadmap, which were first proposed by Greg Maxwell and others*: weak blocks & network compression/IBLT to reduce orphan risk, and flexcap (or a variant of it, perhaps).

Perhaps they could start by explaining what it is & how it works. This might include unimplemented ideas, and a summary of what the code currently available for download on the manifesto page does.

To review, it will be clearer if you state your assumptions, your claimed benefits, and why you think those benefits hold. (Bear in mind that if input assumptions are theoretical and known not to hold in practice, while that can be fine for theoretical results, it will be difficult to use the resulting conclusions in a real system.) In particular, claimed compatibilities with Bitcoin, and how the dynamic block-size game theory is expected to work and remain secure with SPV mining, selfish mining, block withholding and fair (progress-free) mining, could also use explaining.

I suggest the sensible thing, if there is something new or insightful, is that Bitcoin consider adopting the technology and the BU proponents get behind that.

Maintaining a new coin is a rather complex undertaking, and screwing up, as something like 40% of projects that have tried it have done, is very expensive with other people's money.

To make progress on review it would be helpful to separate technical from political opinions.

Adam

* some citations seem to be notably missing, I trust this is unintentional.
15  Bitcoin / Development & Technical Discussion / Re: ring signature efficiency on: March 21, 2015, 03:29:14 PM
I found the paper "1-out-of-n Signatures from a Variety of Keys" by Abe, Ohkubo and Suzuki (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.3431&rep=rep1&type=pdf); section 5.1 shows a way to do it. I show here how to add traceability to it in a way that makes it compatible with CryptoNote:

KEYGEN: P_i=x_i*G, I_i=x_i*H(P_i)

SIGN: as signer j; \alpha = random, \forall_{i!=j} s_i = random

c_{j+1} = h(P_1,...,P_n,\alpha*G,\alpha*H(P_j))
c_{j+2} = h(P_1,...,P_n,s_{j+1}*G+c_{j+1}*P_{j+1},s_{j+1}*H(P_{j+1})+c_{j+1}*I_j)
...
c_j = h(P_1,...,P_n,s_{j-1}*G+c_{j-1}*P_{j-1},s_{j-1}*H(P_{j-1})+c_{j-1}*I_j)

so that defines c_1,...,c_n, with indices taken modulo the number of ring members. Next find the s_j value:

Now \alpha*G = s_j*G+c_j*P_j so \alpha = s_j+c_j*x_j so s_j = \alpha - c_j*x_j mod n.

Similarly \alpha*H(P_j) = s_j*H(P_j)+c_j*I_j so \alpha works there too.

\sigma = (m,I_j,c_1,s_1,...,s_n)

VERIFY:

\forall_{i=1..n} compute e_i=s_i*G+c_i*P_i and E_i=s_i*H(P_i)+c_i*I_j and c_{i+1}=h(P_1,...,P_n,e_i,E_i)

check c_{n+1}=c_1

LINK: reject duplicate I_j values.

It looks like Joseph Liu, Victor Wei and Duncan Wong made the same observation in "Linkable Spontaneous Anonymous Group Signature for Ad Hoc Groups", 2004: https://eprint.iacr.org/2004/027.pdf

The proposed scheme is basically the same as what I propose above, and the Liu, Wei & Wong 2004 publication seems to predate the 2007 Fujisaki & Suzuki "Traceable Ring Signature" (https://eprint.iacr.org/2006/389.pdf) cited by CryptoNote.

Adam
16  Bitcoin / Development & Technical Discussion / ring signature efficiency on: March 01, 2015, 12:19:30 PM
The traceable ring signature used in CryptoNote (https://cryptonote.org/whitepaper.pdf) looks like:

KEYGEN: P_i=x_i*G, I_i=x_i*H(P_i)

SIGN: as signer j; random s_i, w_i

(I relabeled q_i as s_i to be more standard, and relabeled the signer s as signer j)

IF i=j THEN L_i=s_i*G ELSE L_i=s_i*G+w_i*P_i
IF i=j THEN R_i=s_i*H(P_i) ELSE R_i=s_i*H(P_i)+w_i*I_j

c=h(m,L_1,...,L_n,R_1,...,R_n)

IF i=j THEN c_i=c-sum_{i!=j}(c_i) ELSE c_i=w_i
IF i=j THEN r_i=s_i-c_i*x_i ELSE r_i=s_i

\sigma = (m,I_j,c_1,...,c_n,r_1,...,r_n)

VERIFY:

L_i'=r_i*G+c_i*P_i
R_i'=r_i*H(P_i)+c_i*I_j
sum_{i=1..n}( c_i ) =? h(m,L_1',...,L_n',R_1',...,R_n')

LINK: reject duplicate I_j values.

where H(.) is a hash-to-curve function (deterministically mapping its input to a curve point), and h(.) is a hash function with an output size very close to n, the order of the curve, i.e. h(.)=SHA256(.) mod n.

Towards finding a more compact ring signature, I'd been trying to find a way to make the c_i into a CPRNG-generated sequence, as they are basically arbitrary, though they must be bound to the rest of the signature (non-malleable) so that you can compute at most n-1 existential signature forgeries without knowing any private keys.

I found the paper "1-out-of-n Signatures from a Variety of Keys" by Abe, Ohkubo and Suzuki (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.3431&rep=rep1&type=pdf); section 5.1 shows a way to do it. I show here how to add traceability to it in a way that makes it compatible with CryptoNote:

KEYGEN: P_i=x_i*G, I_i=x_i*H(P_i)

SIGN: as signer j; \alpha = random, \forall_{i!=j} s_i = random

c_{j+1} = h(P_1,...,P_n,\alpha*G,\alpha*H(P_j))
c_{j+2} = h(P_1,...,P_n,s_{j+1}*G+c_{j+1}*P_{j+1},s_{j+1}*H(P_{j+1})+c_{j+1}*I_j)
...
c_j = h(P_1,...,P_n,s_{j-1}*G+c_{j-1}*P_{j-1},s_{j-1}*H(P_{j-1})+c_{j-1}*I_j)

so that defines c_1,...,c_n, with indices taken modulo the number of ring members. Next find the s_j value:

Now \alpha*G = s_j*G+c_j*P_j so \alpha = s_j+c_j*x_j so s_j = \alpha - c_j*x_j mod n.

Similarly \alpha*H(P_j) = s_j*H(P_j)+c_j*I_j so \alpha works there too.

\sigma = (m,I_j,c_1,s_1,...,s_n)

VERIFY:

\forall_{i=1..n} compute e_i=s_i*G+c_i*P_i and E_i=s_i*H(P_i)+c_i*I_j and c_{i+1}=h(P_1,...,P_n,e_i,E_i)

check c_{n+1}=c_1

LINK: reject duplicate I_j values.

This alternate linkable ring signature tends to 1/2 the size of the CryptoNote ring signature, as the signature is 3+n values vs 2+2n values.
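A toy transcription of the scheme above into code, using a multiplicative group in place of an elliptic curve and a hash-to-group with a known discrete log. This illustrates the control flow only and is not a secure implementation (real code needs curve arithmetic, a proper hash-to-curve, and reduction modulo the group order):

```python
import hashlib
import secrets

# Toy group standing in for the curve (RFC 2409 1024-bit safe prime);
# g plays the role of G.  Illustration only: Hp below has a known
# discrete log, which breaks the actual security.
p = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF", 16)
q = (p - 1) // 2
g = 4

def h_scalar(*args) -> int:
    """h(.): hash to a scalar."""
    data = "||".join(str(a) for a in args).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def Hp(P: int) -> int:
    """Toy H(.): hash-to-group (insecure: discrete log is known)."""
    return pow(g, h_scalar("Hp", P), p)

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)                  # x_i, P_i = x_i*G

def sign(msg, ring, j, x):
    n = len(ring)
    I = pow(Hp(ring[j]), x, p)              # key image I_j = x_j*H(P_j)
    s = [secrets.randbelow(q) for _ in range(n)]
    c = [0] * n
    alpha = secrets.randbelow(q)
    # c_{j+1} = h(ring, alpha*G, alpha*H(P_j))
    c[(j + 1) % n] = h_scalar(msg, ring, pow(g, alpha, p),
                              pow(Hp(ring[j]), alpha, p))
    i = (j + 1) % n
    while i != j:                           # walk the ring around to j
        e = pow(g, s[i], p) * pow(ring[i], c[i], p) % p
        E = pow(Hp(ring[i]), s[i], p) * pow(I, c[i], p) % p
        c[(i + 1) % n] = h_scalar(msg, ring, e, E)
        i = (i + 1) % n
    # close the ring: s_j = alpha - c_j*x_j (a real implementation
    # reduces this modulo the group order)
    s[j] = alpha - c[j] * x
    return I, c[0], s                       # sigma = (I_j, c_1, s_1..s_n)

def verify(msg, ring, sig):
    I, c0, s = sig
    c = c0
    for i in range(len(ring)):
        e = pow(g, s[i], p) * pow(ring[i], c, p) % p
        E = pow(Hp(ring[i]), s[i], p) * pow(I, c, p) % p
        c = h_scalar(msg, ring, e, E)
    return c == c0                          # check c_{n+1} = c_1

keys = [keygen() for _ in range(4)]
ring = [P for _, P in keys]
sig = sign("spend output 7", ring, 2, keys[2][0])
assert verify("spend output 7", ring, sig)
# LINK: a second signature by the same key exposes the same key image
assert sign("elsewhere", ring, 2, keys[2][0])[0] == sig[0]
```

Note the size claim is visible in the return value: the signature carries one challenge c_1 plus n responses, versus n challenges plus n responses in the CryptoNote form.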

Adam
17  Bitcoin / Armory / Re: [ANN] Armory 0.93 Official Release on: February 23, 2015, 08:31:02 AM

What do you mean by deterministic signing?

It takes the random number generator out of the process for generating a signed transaction. Somehow (I do not know the details). It makes it safer, as the signatures can't leak any information (i.e. something to help calculate the private key...) when using weak RNG implementations, plus some other benefits I expect. Also, it's last on the changelog list for 0.93

Oh so instead of random it uses a rolling nonce

Idk, personally I think a bad write can make you reuse an increment and boom you're done. But what do I know.

If it adds a random number, it sounds very good.

No, that's not how it works. Deterministic DSA uses k=H(d,m) as the nonce. That way, if you sign the same message m=H(transaction), you'll get the same signature, so it's also stateless.

And this is important, because if you reuse k with different messages you reveal a simultaneous equation allowing the private key to be computed. The private key is d, the public key is Q=dG, the address is a=H(Q), and the signature is (s,r) where s=(h(m)+rd)/k, r=[kG].x, and n is the order of the curve.

s=(h(m)+rd)/k mod n
s2=(h(m2)+rd)/k mod n

=> sk = h(m)+rd, s2k = h(m2)+rd
=> (s-s2)k = h(m)-h(m2)
=> k=(h(m)-h(m2))/(s-s2).

now we know k and substituting:

sk=h(m)+rd
=> d=(sk-h(m))/r

There are worse attacks, where even knowing a bias of a few bits (e.g. http://www.irisa.fr/celtique/zapalowicz/papers/asiacrypt2014.pdf) can result in d being recovered over a modest number of signatures; the original NIST DSA standard was also partly broken by Bleichenbacher due to a small bias in the k generation algorithm, see section 2.2 of http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.3190&rep=rep1&type=pdf

Avoiding reuse of k is also tricky, because it implies transactional storage of RNG state across signatures. What if the RNG is in a VM, and the VM is snapshotted and rolled back? What if the RNG is poorly seeded (e.g. in a server environment)?

The lesson for bitcoin is: don't reuse addresses; but as there are usability difficulties with that, also don't have biases in k, and don't rely on transactional, non-rollbackable storage: hence deterministic DSA.

Adam
18  Bitcoin / Armory / Re: [ANN] Armory 0.93 Official Release on: February 23, 2015, 08:12:31 AM

What do you mean by deterministic signing?

It takes the random number generator out of the process for generating a signed transaction. Somehow (I do not know the details). It makes it safer, as the signatures can't leak any information (i.e. something to help calculate the private key...) when using weak RNG implementations, plus some other benefits I expect. Also, it's last on the changelog list for 0.93

Oh so instead of random it uses a rolling nonce

Idk, personally I think a bad write can make you reuse an increment and boom you're done. But what do I know.

If it adds a random number, it sounds very good.

No, that's not how it works. Deterministic DSA uses k=H(d,m) as the nonce. That way, if you sign the same message m=H(transaction), you'll get the same signature.

Adam
19  Bitcoin / Development & Technical Discussion / Re: ECDSA 2 of 2 signing on: January 23, 2015, 11:07:50 AM
Sometimes people want the ability to tell which k of n signed (for accountability purposes, if k < n), and there a multisig has additional functionality that Schnorr (or other) multiparty sigs don't.

I did also mention on twitter another idea, to use Schnorr to make a compact multisig. The idea is to have enough different keys per signer to make it unambiguous which k signed, and to commit the public-key sums for the needed permutations in a merkle tree. Then to sign, reveal the path to the public-key sum used, plus a multiparty sig with that key (via the usual Schnorr method).

It seems that Micali et al thought of this idea before; see http://www.cs.bu.edu/~reyzin/multisig.html. The numbers sound better for them, though, as they're assuming prime fields rather than EC, where a hash of a public key is a smaller size ratio than in EC.

I expect it wouldn't be too useful to use the bootstrap method I mentioned earlier, because the subset doing the bootstrapping would know everyone else's keys and so could impersonate them for accountability purposes. So you do need an enrollment process which does not involve third parties knowing parties' private keys!
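A toy sketch of the enrollment and reveal steps (in a multiplicative group, so the "public key sum" becomes a product; helper names like `enroll` and `merkle_path` are invented for this sketch, and rogue-key defences are ignored):

```python
import hashlib
from itertools import combinations

# Toy group standing in for EC points (RFC 2409 1024-bit safe prime):
# "summing" public keys becomes a modular product.
p = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381FFFFFFFFFFFFFFFF", 16)
g = 4

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def pad(layer):
    return layer + [layer[-1]] if len(layer) % 2 else layer

def merkle_root(leaves):
    layer = leaves
    while len(layer) > 1:
        layer = pad(layer)
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(leaves, idx):
    """Sibling hashes from leaf `idx` up to the root."""
    path, layer = [], leaves
    while len(layer) > 1:
        layer = pad(layer)
        path.append((layer[idx ^ 1], idx & 1))   # (sibling, leaf-is-right?)
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify_path(leaf, path, root):
    node = leaf
    for sibling, is_right in path:
        node = H(sibling + node) if is_right else H(node + sibling)
    return node == root

def enroll(pubkeys, k):
    """Commit to the aggregate ("summed") key of every k-of-n subset."""
    subsets = list(combinations(sorted(pubkeys), k))
    leaves = []
    for subset in subsets:
        agg = 1
        for P in subset:
            agg = agg * P % p                    # EC point-addition analogue
        leaves.append(H(str(agg).encode()))
    return merkle_root(leaves), subsets, leaves

# Enrollment: 4 signers, any 2 may sign; publish only the root.
pubkeys = [pow(g, x, p) for x in (11, 22, 33, 44)]   # toy private keys
root, subsets, leaves = enroll(pubkeys, 2)

# Signing: the chosen pair reveals its aggregate key and Merkle path,
# then produces an ordinary multiparty Schnorr signature under that key.
idx = 3
agg = subsets[idx][0] * subsets[idx][1] % p
leaf = H(str(agg).encode())
path = merkle_path(leaves, idx)
assert verify_path(leaf, path, root)
```

The revealed path identifies exactly which subset signed, which is the accountability property the plain sum-of-keys Schnorr multisig lacks.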

Adam
20  Bitcoin / Hardware / Re: [ANN] Spondoolies-Tech - carrier grade, data center ready mining rigs on: January 20, 2015, 09:56:05 AM
Did someone tune up the spondoolies hosted miners?

http://eligius.st/~wizkid057/newstats/userstats.php/1Adam3usQMbQWScA5AXnnDsRMeZeCh6ovu

my 4x SP10 hashrate jumped from ~6TH to ~7.5TH? (Which would be a very nice hashrate, at 1.85TH per SP10 vs the advertised 1.4-1.5TH.)

Maybe someone accidentally configured another one for my payout address during a reshuffle or config change?

Adam