41  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 02, 2018, 02:10:37 AM
Quote
The number has definitely increased a lot,
Only after falling a lot.  E.g. on Jan 3rd 2012 there were over 16500 listening nodes tracked by sipa's seeder.

42  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 01, 2018, 11:41:53 PM
There are many more full nodes running; that page only lists the ones that accept inbound connections.

Estimates put the number at about 84,000, although a significant number of nodes are spy nodes run only for the purpose of tracing transactions, though we don't know how many.  In the past there was a higher rate of node-running relative to the user base, but the resources required to run a node have increased substantially, among many other factors.

Quote
I know that probably the number is increasing over the years,
Listening node count was higher in the past, but UPnP being disabled by default (due to repeated security problems with it), increased listening-specific resource usage, and other factors have decreased the count even in absolute terms.
43  Bitcoin / Development & Technical Discussion / Re: Anonymous Atomic Swaps Using Homomorphic Hashing on: September 01, 2018, 01:38:07 AM
Relevant related things:

CoinSwap: https://bitcointalk.org/index.php?topic=321228.0  (now that the network has CSV and/or fixed malleability a somewhat simpler protocol can be used; see also https://github.com/AdamISZ/CoinSwapCS)

Swapping with adaptor signatures: https://github.com/apoelstra/scriptless-scripts/blob/master/md/atomic-swap.md


The simple power sums look like deanonymizing them is a solvable modular lattice problem, but I haven't looked carefully.  I'd be interested to know how you think your approach compares to the coinswap and adaptor signature approaches.
44  Bitcoin / Development & Technical Discussion / Re: Threshold Multisignatures for Bitcoin on: August 28, 2018, 11:14:11 PM
Quote
Each party interprets the public keys as shares in a Shamir secret sharing scheme. Put simply, Lagrange coefficients are calculated using Xi = H(G*xi). The coefficients are ordered based on the sort order of X.


# Lagrange coefficients at zero over the field of order P; Xis holds the
# x-coordinates Xi = H(G*xi). modinv(a, P) = pow(a, -1, P) in Python 3.8+.
coeff = {}
for i1 in sorted(Xis):
    coeff[i1] = 1
    for i2 in Xis:
        if i2 == i1:
            continue  # (i2 - i1) would be zero and have no inverse
        diff = (i2 - i1) % P
        inv = modinv(diff, P)
        val = i2 * inv % P
        coeff[i1] = coeff[i1] * val % P


Each point is multiplied by its coefficient; the results are summed, and that is the "bitcoin public key", the key that these participants are signing for.

As has been explained to you previously: this is insecure for large enough M.  You have key P1; the attacker claims to be persons P2 ... Pn and selects them adaptively so that sum(P1 .. Pn) = P1 + xG - P1 = xG, where x is some attacker-controlled secret.  They do this by computing a collection of dummy keys Dq = q*P1, setting Fq = 1/q * H(q*P1), then using Wagner's algorithm to find a subset of Fq values that sum to -H(P1).  P2 ... Pn-1 are the Dq from the solution and Pn is just some attacker-controlled key.  As n grows to some small constant in the number of bits of the key size, finding the subset sum becomes computationally cheap.
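For intuition, here is a toy sketch of the cancellation idea (my illustration, not from the post above: integers mod p stand in for curve points, with G = 1). It shows the plain rogue-key attack that the coefficient hashing is meant to block; the point above is that Wagner's algorithm re-enables a variant of it against the hashed scheme once n is large.

import secrets

p = 2**61 - 1                 # toy group order, illustration only
x  = secrets.randbelow(p)     # attacker-chosen secret
P1 = secrets.randbelow(p)     # victim's key (discrete log unknown to attacker)
P2 = (x - P1) % p             # rogue key, chosen after seeing P1
assert (P1 + P2) % p == x     # aggregate key has an attacker-known dlog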

Quote
Currently, the leading proposal for a multisignature scheme is an M of M scheme
No, it is not.  The BIP for it explains how verification and simple signing work, but the scheme works fine for arbitrary monotone functions of keys (including arbitrary thresholds).

Quote
that provides both a noninteractive M of M scheme

This is inconsistent with your above claim:

Quote
The value of e must be the same for all participants.

Quote
When signing, each participant “i” rolls a random nonce “ki”. ... R  = G*k; e = H(R, M, X)

So either R in the H() above is the sum of participants Rk_i, or e is different for each participant.

If it's the first case, the signature requires a round of interaction to learn the combined R before they can compute s. If it's the latter the signature is not additive homomorphic and your verification will fail.

Also, if it's the former-- R = sum(Rk_i),  then an attacker can set their Ri to be kG minus the sum of all other Rs, resulting in a signature where they know the discrete log of R with respect to G (k), and can recover the group private key  x as  x = (s - k)/e.
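A toy check of that algebra (my sketch: integers mod p stand in for points, G = 1, and e is the challenge hash treated as a scalar):

import secrets

p = 2**61 - 1                     # toy group order (prime)
x = secrets.randbelow(p)          # group private key
k = secrets.randbelow(p)          # attacker-known dlog of the combined R
e = secrets.randbelow(p - 1) + 1  # nonzero challenge e = H(R, M, X)
s = (k + e * x) % p               # the combined signature scalar
e_inv = pow(e, p - 2, p)          # Fermat inverse, p is prime
assert (s - k) * e_inv % p == x   # attacker recovers the group key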
45  Bitcoin / Development & Technical Discussion / Re: 10 min (on average) for block time? is it a rule? on: August 26, 2018, 07:32:22 AM
Quote
I think the actual rate is what we get from a single point in the network.

The rate of orphan blocks is the number of blocks created in some window which are not in the chain, divided by the total number of blocks created. More modern relay has fewer node-specific delays, and as a result split propagation is now much more strongly geographically limited. A single node cannot be everywhere in the world at once... The harm that orphans cause the network in terms of centralization pressure is related to how many are created, not whether any particular node's peers offered them to it.  Consider: if you turn your node off for a day, it won't see any orphans created during that day at all... are all orphaning problems gone?  Obviously not.

In our case, nodes now see far fewer of the orphans that exist because they now forward along blocks to consenting peers before they've validated them, and as a result get to the point where they're unwilling to propagate an alternative much faster. This speedup lowered the actual orphaning rate but it also largely blinded nodes to orphans.
46  Bitcoin / Development & Technical Discussion / Re: 10 min (on average) for block time? is it a rule? on: August 26, 2018, 01:43:19 AM
Quote
that made it possible to have the orphan rate drop from 0.41% (2016) to the current 0.02% in 2018.
We actually have little idea what the orphan rate is now: HB mode compact blocks radically reduced the _propagation_ of orphans, such that they're not well propagated anymore.  Whatever it actually is, I know it's higher than 0.02% because just collecting from a half dozen nodes I see more than 4 times that many in the last 38k blocks, but on any single node you only see a fraction of that.
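Rough arithmetic behind that claim (blockchain.info's figure versus what collecting from several nodes implies):

window = 38_000                # recent blocks examined
claimed = 0.0002 * window      # 0.02% would imply ~7.6 orphans
observed = 4 * claimed         # "more than 4 times that many" -> ~30+
print(observed / window)       # ~0.0008, i.e. at least ~0.08%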

Also, blockchain.info's orphaning figures have historically been rubbish, unrelated to recent behavior changes.

Quote
unsolicited block push
Sending whole blocks unsolicited severely hurts block propagation. We'd probably get a measurable improvement if nodes started automatically banning peers that violate the protocol that way. Fortunately it's not very common.
47  Bitcoin / Development & Technical Discussion / Re: ”Argument list too long“ for decoderawtransaction on: August 19, 2018, 05:39:51 PM
bitcoin-cli has the -stdin argument for this reason (both for overly long inputs and for keeping passwords off command lines...).
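For example (rawtx.hex is a hypothetical file holding the transaction hex on a single line):

# the hex is read from stdin, bypassing the kernel's argument-length limit
bitcoin-cli -stdin decoderawtransaction < rawtx.hex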
48  Bitcoin / Development & Technical Discussion / Re: how ThreadOpenConnections work? on: August 14, 2018, 10:23:23 PM
You are looking at very old and unsupported code; no one should be running anything that old anymore.

In any case, the infinite loop is wrapped in an if.  The lower code doesn't run if -connect is in use, by design.
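In pseudocode, the structure described is roughly this (a hedged sketch of the control flow, not the actual source):

# if -connect is supplied, this loop never exits...
if connect_targets:
    while True:
        for addr in connect_targets:
            open_connection(addr)
        wait_a_bit()
# ...so the automatic peer-selection code below is never reached
while True:
    open_connection(pick_address_from_addrman())
    wait_a_bit()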
49  Bitcoin / Development & Technical Discussion / Re: Bitcoin Block time on: August 10, 2018, 02:40:56 PM
Quote
@OP. A few days back I checked the hash rate graph and found that difficulty always increased with each passing month and never dipped, so I do not think we ever encountered a situation where a retarget might have happened after more than 2 weeks.

This simply isn't true: hashrate (and thus difficulty) has decreased many times in the past.
50  Bitcoin / Development & Technical Discussion / Re: Creating private key from 2 different RNG:s? on: August 06, 2018, 02:21:59 AM
Personally I would not use XOR as an RNG combiner.  If one of your functions is correlated with the other, you risk canceling it out. This can happen by error (e.g. the second RNG fails and the first one's output gets reused) or if the second RNG is malicious code that can observe the output of the first. Instead, I would prefer to use a regular cryptographic hash function as the combiner.

(and, indeed, Bitcoin Core uses a hash function as the combiner)
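A minimal sketch of the idea (my illustration, not Bitcoin Core's actual code):

import hashlib, os

def combine_entropy(a: bytes, b: bytes) -> bytes:
    # Hashing both inputs together means a correlated or malicious second
    # source cannot cancel the first, as it could with XOR.
    h = hashlib.sha256()
    h.update(a)
    h.update(b)
    return h.digest()

seed = combine_entropy(os.urandom(32), os.urandom(32))  # stand-ins for two RNGs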
51  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 05, 2018, 01:58:35 AM
Quote
According to my suggested threshold of 60480, it doesn't fail.
No, but it would fail the _existing_ chain with a threshold a bit more than half that one. So your ad hoc rule doesn't even have a safety margin of 2 against the existing chain, much less against what is reasonably possible.  If hashrate grows quickly this threshold will be exceeded, and there is no constant threshold that large hashrate growth cannot exceed.

Quote
No, it is not a lazy approach, whatever.

I don't find 'whatever' to be much of an argument. It still sounds to me like you think the whole network should adopt potentially dangerous changes to the consensus rules in order to avoid having to correctly process a megabyte of network data without undue load. How isn't that lazy?
52  Alternate cryptocurrencies / Altcoin Discussion / Re: CommitTransaction(): Transaction cannot be broadcast immediately, no-witness-yet on: August 04, 2018, 10:31:20 PM
Quote
Can anyone help me with this error: CommitTransaction(): Transaction cannot be broadcast immediately, no-witness-yet

Seems to happen sometimes when sending transactions from client to client

How are you creating the transactions? Have you allowed your node to sync? no-witness-yet was a message produced by older software when you try to send a segwit spend but the blockchain hasn't activated segwit yet.
53  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 04, 2018, 10:11:31 PM
Quote
Obviously, the criteria is:
Any fork, being the longest chain or not, is not allowed to exceed a computable threshold in length such that it would compromise the block generation rate imposed by other consensus rules.
Taken strictly, that criterion is violated by the existing chain. In other words, if it had been imposed previously, the network would have already faulted. Block 0 has timestamp 2009-01-03 18:15:05, which was 3501 days ago, implying ~504144 blocks. Yet we are at height 535244.
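The arithmetic, for anyone checking:

days = 3501                 # age of the chain at the time of this post
expected = days * 144       # one block per 10 minutes = 144/day -> 504144
actual = 535244             # actual height
print(actual - expected)    # 31100 blocks ahead of schedule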

Any increase in hashrate will result in that condition, implemented strictly, being violated because the difficulty adjustment is retrospective and only proportional (it includes no integral component).  If the condition is violated it can cause spontaneous consensus failure by forcing blocktimes against the maximum time skew and causing the network to break into islands based on small differences in their local clocks. You can add margin so that it hasn't failed _yet_ on the current chain but what reason do you have to expect that it wouldn't fail at some point in the future for any given selection of margin?  There is no amount which the chain is guaranteed by construction to not get ahead by. Quite the opposite, under the simple assumption that hashrate will increase over time the chain is guaranteed to get ahead.
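A toy simulation of that point (my sketch, not consensus code): with a purely proportional retarget, steady hashrate growth leaves the chain permanently ahead of schedule, and the accumulated lead is never clawed back.

TARGET = 600.0                        # seconds per block
diff, hashrate = 1.0, 1.0
t, blocks = 0.0, 0
for period in range(100):
    block_time = TARGET * diff / hashrate
    t += 2016 * block_time            # wall time for one retarget period
    blocks += 2016
    diff *= TARGET / block_time       # proportional adjustment only
    hashrate *= 1.05                  # steady 5% growth per period
print(blocks - t / TARGET)            # positive and growing: blocks ahead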

Quote
So we have a solid definition, provably safe
As far as I can see your posts contain no such proof. In fact, the existing chain being well ahead of the metric now is concrete proof that some arbitrary safety margin values are not safe. What reason do you have to believe another arbitrary value is safe given that some are provably not, beyond "this setting hasn't failed with the historic chain yet"?

Had this same reasoning been followed in 2012, setting a margin of 1000 blocks (or whatever was twice the adequate value at that point in time), then sometime in the following year, after the hashrate grew tremendously, the network would have spontaneously broken into multiple chains.

Quote
and helpful in mitigating the DoS vulnerability under consideration here
What DoS vulnerability? There isn't one, as far as any measurements have thus far determined.  Changing unrelated consensus rules in ways which only satisfy a "doesn't fail yet on the existing chain" level of proof, in order to fix a "maybe service could be denied here" node behavior, seems frankly absurd.  Additionally, it seems lazy to the point of reckless indifference:  "I can't be bothered to implement some P2P handling within the performance envelope I desire, though it clearly can be done through purely local changes, so I'm going to insist that the core consensus algorithm be changed."

If your concern is that this might hold a lock too long (though measurement shows that a node keeps working fine even slamming 200k lookups on every network message, with the code previously posted on the thread), then just change the message handling to drop the blinking lock between groups of entries!   In what universe does it make sense to address a simple implementation question through speculative consensus rules with arbitrary parameters that would provably have failed already for some choices of those parameters?

Quote
I'm trying to understand why should we hesitate or use a more conservative approach like what you suggest
Because it's trivial to prove that it's safe, it doesn't require changes to the Bitcoin consensus rules which might open the system up to attack or spontaneous failure, it can't create new node isolation vulnerabilities, etc.  Why? Because some of us don't think of Bitcoin as a pet science project, understand that it's difficult to reason about the full extent of changes, and actually care if it survives.  If you share these positions you should reconsider your arguments, because I don't think they make sense in light of them.
54  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 03, 2018, 10:55:52 PM
Quote
My intuitive assumption is that no legit node will suggest/accept forks with a chain much longer than what 10 minutes block time imposes.
No existing software that I'm aware of meets your assumption, they absolutely will do so. It's not at all clear to me how one could go about constructing a rule to not do so that would be guaranteed to not create convergence failures at least in some maliciously created situations.

Quote
Although the longest chain rule is to be used when nodes are deciding between forks, there should be an upper bound for circumstances in which this policy is applicable, and should be dropped when a suggested fork length exceeds that upper bound.
I think you might have buried the headline a bit here: your conclusive solution as suggested is a reworking of the whole consensus algorithm to ignore its primary convergence criterion and replace it with some kind of "ignore the longest chain under some circumstances" rule, but you aren't even specifying which circumstances or the exact nature of the change... making it impossible to review in any detail.

I don't doubt that _some_ reasonable ones could be cooked up...  but why would it be worth the effort to review such a potentially risky change when the thing being fixed is merely "someone can use a lot of bandwidth to temporarily drive your CPU usage higher"?   -- especially when there are a lot of ways to do that, e.g. set a bloom filter to match almost nothing and scan a lot of blocks... that has very low bandwidth per unit of CPU burned.

But I also think your suggestion is overkill:  if instead there were a BIP saying that locators should never have more than 10+10*ceil(log2(expected blocks)) entries, then nodes creating locators could determine whether their locators would be too big and instead make their points further apart to stay within the limit (and presumably one less than it, to deal with time skew).  No change in consensus required; nodes can still accept all the blocks they want. If you are on some many-block crazy fork, you'll send a sparser locator, and as a result just waste a bit more bandwidth getting additional headers that you already know.  --- And that's what I was attempting to suggest upthread: the safe change is to first get honest requesters to limit themselves; then later you can ignore requests that are too big.
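A rough sketch of what that could look like (my illustration; the cap formula is from the paragraph above, the construction is the usual dense-then-doubling locator):

import math

def locator_cap(expected_blocks):
    # hypothetical cap: 10 + 10*ceil(log2(expected blocks))
    return 10 + 10 * math.ceil(math.log2(max(expected_blocks, 2)))

def build_locator(tip_height, cap):
    # dense for the most recent 10 blocks, then double the stride; if the
    # result would exceed the cap, restart with a wider initial stride
    stride = 1
    while True:
        heights, step, h = [], stride, tip_height
        while h > 0:
            heights.append(h)
            if len(heights) >= 10:
                step *= 2
            h -= step
        heights.append(0)          # the genesis block anchors the locator
        if len(heights) <= cap:
            return heights
        stride *= 2

locator = build_locator(535244, locator_cap(535244))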
55  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (rewievers wanted) on: August 02, 2018, 09:25:26 PM
Quote
Full nodes use getblock AFAIK to synch, as of getheaders
Getblock has, for many years now, only been used on known, header-connected candidate best chains. In other words, nodes never speculatively request blocks anymore; they only request blocks that would be in the best chain per the headers, assuming the blocks are valid. (Moreover, the behavior being discussed is the same for both messages.)
56  Bitcoin / Development & Technical Discussion / Re: Why can Core not Scale at a 0.X% for blocksize in the S-Curve f(X) on: August 02, 2018, 03:20:41 AM
Quote
we should take care not to have bitcoin frozen like in its ice age,
You state this as if the capacity wasn't recently roughly _doubled_, overshooting demand and knocking fees down to low levels...
57  Bitcoin / Development & Technical Discussion / Re: Why can Core not Scale at a 0.X% for blocksize in the S-Curve f(X) on: August 01, 2018, 08:16:14 PM
Quote
I believe you are over-emphasizing on hardware and less on bandwidth and network latency. Bandwidth's "growth" is going slower and slower over the years, and that slow growth will compound more on network latency because the effects of higher bandwidth does not translate immediately on the network according to Nielsen's Law.

Another factor is that it's generally dangerous to set bars for participation based on any fraction of current participants.

Imagine, say we all decide 90% of nodes can handle capacity X. So then we run at X, and the weakest 10% drop out.  Then, we look again, and apply the same logic (... after all, it was a good enough reason before) and move to Y, knocking out 10%...  and so on. The end result of that particular process is loss of decentralization.

Some months back someone was crowing about the mean bandwidth of listening nodes having gone up. But if you broke it down into nodes on big VPS providers (Amazon and Digital Ocean) and everyone else, what you found was that each group's bandwidth didn't change, but the share of nodes on centralized 'cloud' providers went way up. Sad  (Probably for a dozen different reasons: loss of UPnP, increased resource usage, more spy nodes, which tend to be on VPSes...)

Then we have the fact that technology improvements are not necessarily being applied where we need them most, e.g. a lot of effort is spent making things more portable, lower power consuming, and less costly rather than making things faster or higher bandwidth.  Similarly, lots of network capacity growth happens in dense, easily covered city areas rather than for everyone.  In the US in a major city you can often get bidirectional gigabit internet at personally affordable prices, but 30 miles out you can spend even more money and get ADSL that barely does 750 kbit/sec up.  The most common broadband provider in the US usually has plenty of speed but has monthly usage caps that a listening node can use most of... Bitcoin's bandwidth usage doesn't sound like much, but when you add in overheads and new peers syncing, and multiply that usage out 24/7, it adds up to more bandwidth than people typically use... and once Bitcoin is using most of a user's resources, the costs of using it become a real consideration for some people.  This isn't good for the goal of decentralization.
58  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (rewievers wanted) on: August 01, 2018, 06:39:11 PM
Quote
As of your 200K loop, and 20% increase in cpu usage: It is huge, imo. With just a few malicious requests this node will be congested.
The patch I posted turns _every_ message into a "malicious message" and it only had that modest CPU impact, and didn't keep the node from working.  This doesn't prove that there is no way to use more resources, of course, but it indicates that this isn't as big an issue as was thought above. Without evidence otherwise, this still looks like it's just not that interesting compared to the hundreds of other ways to make nodes waste resources.

Quote
I think it will be helpful to force the hacker to work more on its request rather than randomly supply a nonsense stream of bits.
It does not require "work" to begin requested numbers with 64 bits of zeros.

Quote
Yet, I believe lock is not hold when a block is to be retrieved as a result of getblock, once it has been located, Right?
Sure it is, otherwise it could be pruned out from under the request.

Quote
So a getheaders request with more than 200-250  hashes as its payload is obviously malicious for the current height.
Who is missing something here?
If you come up and connect to malicious nodes, you can get fed a bogus low-difficulty chain with a lot more height than the honest chain, and as a result produce larger locators without being malicious at all. If peers ban you for that, you'll never converge back from the dummy chain.   Similarly, if you are offline a long time and come back, you'll expect a given number of items in the locator, but your peers, far ahead on the real chain, will have more than you expected.   In both cases the simple "fix" creates a vulnerability. Not the gravest of vulnerabilities, but the issue being fixed doesn't appear, given testing so far, especially interesting, so the result would be making things worse.

Quote
I suppose once a spv client has been fed by like 2000 block headers it should continue
This function doesn't have anything to do with SPV clients, in particular. It's how ordinary nodes reconcile their chains with each other. If locators were indeed a SPV only thing, then I agree that it would be easier to just stick arbitrary limits on them without worrying too much about creating other attacks.
59  Bitcoin / Development & Technical Discussion / Re: Which bitcoin core version is best for Merchant website on: August 01, 2018, 02:06:59 AM
It is almost completely certain that the accounts functionality does not do what you want and never has, in any case.

Accounts were created to be the backend of a long-defunct webwallet service (one that lost or ran off with everyone's funds). People often think they do things that they don't, such as acting the way multiwallet does (or even other things that don't make much sense in Bitcoin, like "from addresses").
60  Bitcoin / Development & Technical Discussion / Re: Need some clarification on usage of the nonce in version message on: July 26, 2018, 03:11:43 PM
Quote
Because it saves unnecessary code.
If you are worried about one line of code in exchange for doing something right, you probably have no business creating a Bitcoin node. Smiley (In fact, the difference in practice should be zero lines of code; it's just a question of where the nonce for comparison is stored: globally or per-peer.)

In any case, using a consistent value would be bad for privacy allowing the correlation of a host across networks and time.
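A hedged sketch of per-peer nonce handling (names hypothetical, not Bitcoin Core's actual code):

import os

outbound_nonces = set()          # nonces we've sent in "version" messages

def new_version_nonce() -> int:
    nonce = int.from_bytes(os.urandom(8), "little")  # fresh 8-byte nonce
    outbound_nonces.add(nonce)
    return nonce

def is_self_connection(received_nonce: int) -> bool:
    # If an inbound "version" carries a nonce we generated, the "peer" is
    # ourselves and we should disconnect. A single global nonce also works,
    # but reusing one value across time and networks aids fingerprinting.
    return received_nonce in outbound_nonces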