Bitcoin Forum
  Show Posts
1021  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 08, 2018, 11:08:41 PM
If I help to fund someone to run a full node on my behalf, it still adds value to the community.
What value are you referring to, specifically? I'm especially interested in what value you see being provided beyond the one or more nodes that they're already running (perhaps for someone else)?
1022  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 08, 2018, 07:36:24 AM
We cannot always make citations on the white paper because Bitcoin has already developed further beyond "Satoshi's vision".
On this point you can: in particular, the white paper pointed out that nodes should be run for independent security (last sentence of section 8) and that mining would be done by specialized hardware...  Moreover, there have been a number of altcoins that made radical design changes along the lines of what aliashraf has been promoting, and yet they do not see an increase in nodes.

Mining centralization is certainly a problem, but that doesn't make it the origin of every problem nor does it mean that any particular proposed solution would improve it or anything else for that matter.

Quote
It will make miners run their own nodes,
No it won't. If it did, it wouldn't have a snowball's chance in hell of getting significant usage. It lets pooling for payments be operated separately from the consensus -- which does mean that you could still pool payments in a fairly traditional way while running off your own node, but it doesn't require that you run your own (nor could it, really).  After talking to multiple large miners (tens of MW of mining) who have _never_ run a Bitcoin wallet (they just have their pools pay directly to an exchange account), I think the potential of this is easily overstated, but it's still the right (better, even) thing to do.  Combined with other efforts to lower node operational costs we might see some more independence there, but I wouldn't hold my breath for a rapid change. :)
1023  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 07, 2018, 04:01:25 PM
Aliashraf,

We all understand that you are angry that other people don't want to spend their time radically redesigning Bitcoin in accordance with the "fixes" you demand, which others have responded to and concluded that your technical arguments are wrong and confused. We all also understand that you disagree with these analyses.

In this thread, posts about these complaints are off-topic; they have little to do with the subject of the thread beyond your broad conjecture that if there were less cause for pooling there would be a lot more nodes, a weak argument but one that could be made without all the vitriol. You're turning the thread into a series of abuses that will discourage productive on-topic discussion. Cut it out.
1024  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 06, 2018, 03:02:27 PM
Paying someone to run nodes (or running one yourself on a third party controlled VPS service like amazon or digital ocean, for that matter) wouldn't serve much of a purpose.

I beg to differ. You will still have those people with enough resources and bandwidth to keep it decentralized. I want to add to that group by giving people with local bandwidth and resource issues a platform to contribute to this important service.

These people would not necessarily be running a full node, because of these problems, but they can now contribute financially to running a full node by simply funding the people who can do it.

I have some friends in rural areas with very bad internet and they desperately want to contribute, but local infrastructure issues stop them from doing that.

This does not mean that we would have only 1 organization doing this in 1 location. <It might be Bitcoin merchants in several different locations, with better bandwidth> 

Your position assumes there is a contribution created by running more nodes on Amazon. There isn't, for the most part.  Effectively, paying Amazon to run more nodes is just funding a benign sybil attack by Amazon on the network.

I'm well aware that there are parts of the world where it is prohibitively expensive to run a node -- that's part of the reason why efforts like the satellite broadcast of Bitcoin exist.  But just because there is a problem doesn't mean that any particular alternative is a solution.

Just because someone takes on a cost does not mean they are making a useful contribution.  The first node on Amazon was a useful contribution, and perhaps you could argue that one in each availability zone is a contribution, but we're vastly beyond that, and adding more nodes on Amazon mostly just improves the ability of Amazon, and of sybil attackers who purchase its services, to monitor traffic without being noticed.
1025  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 05, 2018, 07:35:04 AM
Why can we not have services where people make donations to a global company with enough resources to host these full nodes for them. <Not virtual servers in the cloud>, like we have with Cloud mining.
This completely misses the point.  If one doesn't care about decentralization, the whole of the Bitcoin system can run on a _single_ node; there is no need for multiple nodes (much less many) except for decentralization purposes.

Paying someone to run nodes (or running one yourself on a third party controlled VPS service like amazon or digital ocean, for that matter) wouldn't serve much of a purpose.
1026  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 02, 2018, 07:29:41 PM
We need more companies using Bitcoin and to increase confidence in the project it would be natural for them to opt for a full node as well.
That is something I believed in, say, 2011 -- even the whitepaper says that merchants should run a node even with lite clients available...

But the norm for companies these days, especially small companies and startups, is aggressive outsourcing of all technical infrastructure, even in their core domain.  For example, there have been many Bitcoin exchanges that don't run their own nodes but outsource their transaction handling to third parties, and those third parties don't even operate their own equipment, instead renting VPS service by the hour from companies like Amazon.  As a result, it turns out that companies, even specialist "bitcoin companies", can't be counted on to operate nodes even where a simple risk analysis would say it was in their best interest to do so.

That was a period in time before mining centralisation and ASICs so there would've been thousands of people with small mining rigs.
Sorry, that is simply untrue. By mid 2011 almost everyone mining was using pools and not running nodes to mine.

Quote
That's why ETH has more nodes of that nature than BTC these days.
Ethereum has _vastly_ fewer "nodes" than Bitcoin. You've been fed misinformation that comes from comparing the total number of Ethereum nodes (listening or not) to just the listening Bitcoin nodes.  Right now the numbers are 15,277 total Ethereum nodes vs 83,096 Bitcoin nodes.  Moreover, the common Ethereum configuration has security properties a lot more like Bitcoin SPV: those nodes don't validate the history when they join, they just blindly trust the hash power for it.
1027  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 02, 2018, 02:10:37 AM
The number has definitely increased a lot,
Only after falling a lot.  E.g. on Jan 3rd 2012 there were over 16500 listening nodes tracked by sipa's seeder.

1028  Bitcoin / Development & Technical Discussion / Re: Total Number of full nodes operating. Less than 10k. on: September 01, 2018, 11:41:53 PM
There are many more full nodes running, that page only lists ones that accept inbound connections.

Estimates put the number at about 84,000, although a significant number of nodes are spy nodes run only for the purpose of tracing transactions; we don't know how many.  In the past there was a higher rate of node running relative to the user base, but the resources required to run a node have increased substantially, along with many other factors.

Quote
I know that probably the number is increasing over the years,
The listening node count was higher in the past, but UPnP being disabled by default (due to repeated security problems with it), increased listening-specific resource usage, and other factors have decreased the count even in absolute terms.
1029  Bitcoin / Development & Technical Discussion / Re: Anonymous Atomic Swaps Using Homomorphic Hashing on: September 01, 2018, 01:38:07 AM
Relevant related things:

CoinSwap: https://bitcointalk.org/index.php?topic=321228.0  (now that the network has CSV and/or fixed malleability a somewhat simpler protocol can be used; see also https://github.com/AdamISZ/CoinSwapCS)

Swapping with adaptor signatures: https://github.com/apoelstra/scriptless-scripts/blob/master/md/atomic-swap.md


The simple power sums look like deanonymizing them is a solvable modular lattice problem, though I haven't looked carefully.  I'd be interested in knowing how you think your approach compares to the CoinSwap and adaptor-signature approaches.
1030  Bitcoin / Development & Technical Discussion / Re: Threshold Multisignatures for Bitcoin on: August 28, 2018, 11:14:11 PM
Each party interprets the public keys as shares in a Shamir sharing scheme. Put simply, lagrange coefficients are calculated using Xi = H(G*xi). The coefficients are ordered, based on the sort order of X.


# Lagrange coefficients evaluated at x = 0, working modulo the group order P.
# (modinv(a, P) is a modular inverse, e.g. pow(a, -1, P) in Python 3.8+.)
for i1 in sorted(Xis):
    coeff[i1] = 1
    for i2 in Xis:
        if i2 == i1:
            continue              # the term for the share's own index is skipped
        diff = (i2 - i1) % P      # denominator of the Lagrange term
        inv = modinv(diff, P)
        val = i2 * inv % P        # i2 / (i2 - i1) mod P
        coeff[i1] = coeff[i1] * val % P


These coefficients are used to multiply each point by. The results are summed, and that is the “bitcoin public key” — the key that these participants are signing for.

As has been explained to you previously: this is insecure for large enough M.  You have key P1; the attacker claims to be persons P2 ... Pn and selects them adaptively so that sum(P1 .. Pn) = P1 + xG - P1 = xG, where x is some attacker-controlled secret.  They do this by computing a collection of dummy keys Dq = q*P1, setting Fq = 1/q * H(q*P1), and then using Wagner's algorithm to find a subset of the Fq values that sums to -H(P1).  P2 ... Pn-1 are the Dq from the solution and Pn is just some attacker-controlled key.  As n grows to a small multiple of the number of bits in the key size, finding the subset sum becomes computationally cheap.

Quote
Currently, the leading proposal for a multisignature scheme is an M of M scheme
No, it is not.  The BIP for it explains how verification and simple signing work, but the scheme works fine for arbitrary monotone functions of keys (including arbitrary thresholds).

Quote
that provides both a noninteractive M of M scheme

This is inconsistent with your above claim:

Quote
The value of e must be the same for all participants.

Quote
When signing, each participant “i” rolls a random nonce “ki”. ... R  = G*k; e = H(R, M, X)

So either the R in the H() above is the sum of the participants' individual R_i = G*k_i values, or e is different for each participant.

If it's the first case, the signature requires a round of interaction to learn the combined R before anyone can compute their s. If it's the latter, the signature is not additively homomorphic and your verification will fail.

Also, if it's the former -- R = sum(R_i) -- then an attacker can set their R_i to kG minus the sum of all the other R values, resulting in a signature where they know the discrete log of R with respect to G (namely k), and can recover the group private key x as x = (s - k)/e.
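
For illustration, here is a scalar-level simulation of that key recovery (a sketch of my own with made-up variable names, assuming partial signatures of the form s_i = k_i + e*x_i so the aggregate is s = k_total + e*x; since every relation is linear in the exponents, plain arithmetic modulo the group order is enough and no curve library is needed):

import secrets

n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

x = secrets.randbelow(n)                              # group private key, unknown to the attacker
honest_nonces = [secrets.randbelow(n) for _ in range(3)]
k = secrets.randbelow(n)                              # a scalar the attacker knows
attacker_nonce = (k - sum(honest_nonces)) % n         # attacker announces R_i = kG - sum(other R_i)

e = secrets.randbelow(n)                              # stands in for H(R, M, X), identical for everyone
k_total = (sum(honest_nonces) + attacker_nonce) % n   # equals k by construction
s = (k_total + e * x) % n                             # the published aggregate signature scalar

recovered = (s - k) * pow(e, -1, n) % n               # x = (s - k)/e
assert recovered == x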
1031  Bitcoin / Development & Technical Discussion / Re: 10 min (on average) for block time? is it a rule? on: August 26, 2018, 07:32:22 AM
I think the actual rate is what we get from a single point in the network.

The rate of orphan blocks is the number of blocks created in some window which are not in the chain, divided by the total number of blocks created. More modern relay has fewer node-specific delays, and as a result split propagation is now much more strongly geographically limited. A single node cannot be everywhere in the world at once... The harm that orphans cause the network in terms of centralization pressure is related to how many are created, not to whether any particular node's peers offered them to it.  Consider: if you turn your node off for a day it won't see any orphans created during that day at all... are all orphaning problems gone?  Obviously not.

In our case, nodes now see far fewer of the orphans that exist because they forward blocks along to consenting peers before they've validated them, and as a result they reach the point where they're unwilling to propagate an alternative much faster. This speedup lowered the actual orphaning rate, but it also largely blinded nodes to orphans.
1032  Bitcoin / Development & Technical Discussion / Re: 10 min (on average) for block time? is it a rule? on: August 26, 2018, 01:43:19 AM
that made it possible for the orphan rate to drop from 0.41% (2016) to the current 0.02% in 2018.
We actually have little idea what the orphan rate is now: HB-mode compact blocks radically reduced the _propagation_ of orphans, such that they're not well propagated anymore.  Whatever it actually is, I know it's higher than 0.02%, because just collecting from a half dozen nodes I see more than 4 times that many in the last 38k blocks, and any single node only sees a fraction of that.
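
For a rough sense of scale, the back-of-the-envelope arithmetic behind that claim (figures taken from this post, not a new measurement):

window = 38_000                    # blocks in the observation window
claimed_rate = 0.0002              # the 0.02% figure under discussion
expected = claimed_rate * window   # ~7.6 orphans if 0.02% were accurate
observed = 4 * expected            # "more than 4 times that many" collected from a half dozen nodes
print(expected, observed)          # ~7.6 vs ~30+, with each single node seeing only a fraction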

Also, blockchain.info's orphaning figures have historically been rubbish, unrelated to recent behavior changes.

Quote
unsolicited block push
Sending whole blocks unsolicited severely hurts block propagation. We'd probably get a measurable improvement if nodes started automatically banning peers that violate the protocol that way. Fortunately it's not very common.
1033  Bitcoin / Development & Technical Discussion / Re: ”Argument list too long“ for decoderawtransaction on: August 19, 2018, 05:39:51 PM
bitcoin-cli has the -stdin argument for this reason (both for too-long inputs and for keeping passwords off command lines...).
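
Something along these lines, for example (a sketch; it assumes bitcoin-cli is on your PATH, a node is running, and the oversized hex sits in a hypothetical file bigtx.hex):

import subprocess

raw_hex = open("bigtx.hex").read().strip()   # the raw transaction hex, too long for the command line

# -stdin makes bitcoin-cli read the remaining arguments from standard input, one per line,
# so the huge hex string never hits the shell's argument-length limit.
result = subprocess.run(
    ["bitcoin-cli", "-stdin", "decoderawtransaction"],
    input=raw_hex + "\n",
    capture_output=True, text=True, check=True,
)
print(result.stdout)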
1034  Bitcoin / Development & Technical Discussion / Re: how ThreadOpenConnections work? on: August 14, 2018, 10:23:23 PM
You are looking at very old and unsupported code; no one should be running anything that old anymore.

In any case, the infinite loop is wrapped in an if.  The lower code doesn't run if -connect is in use, by design.
1035  Bitcoin / Development & Technical Discussion / Re: Bitcoin Block time on: August 10, 2018, 02:40:56 PM
@OP.
A few days back I checked the hash rate graph and found that difficulty always increased with each passing month and never dipped, so I do not think we ever encountered a situation where a retarget happened after more than 2 weeks.

This simply isn't true; hashrate (and thus difficulty) has decreased many times in the past.
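
For reference, a minimal sketch of the retarget rule (simplified: the real code works on the compact "bits" encoding and the names here are mine). When the 2016 blocks take longer than two weeks the target rises, i.e. difficulty falls:

TARGET_TIMESPAN = 14 * 24 * 60 * 60     # two weeks of seconds; retarget every 2016 blocks

def next_target(old_target, actual_timespan):
    # Clamp so the target moves by at most a factor of 4 in either direction.
    actual_timespan = max(TARGET_TIMESPAN // 4, min(actual_timespan, TARGET_TIMESPAN * 4))
    return old_target * actual_timespan // TARGET_TIMESPAN

slow = next_target(1 << 200, TARGET_TIMESPAN * 2)   # blocks came too slowly -> larger target, lower difficulty
assert slow > (1 << 200)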
1036  Bitcoin / Development & Technical Discussion / Re: Creating private key from 2 different RNG:s? on: August 06, 2018, 02:21:59 AM
Personally I would not use xor as an RNG combiner.  If one of your functions is correlated with the other you risk canceling it out. This can happen due to error (e.g. the second RNG fails and the first one's output is reused) or if the second RNG is malicious code that can observe the output of the first. Instead, I would prefer to use a regular cryptographic hash function as the combiner.

(and, indeed, Bitcoin Core uses a hash function as the combiner)
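
A minimal sketch of what such a combiner can look like (my own illustration, not Bitcoin Core's actual code; the two entropy sources here are stand-ins for whatever RNGs you are combining):

import hashlib, os, secrets

def combine(seed_a: bytes, seed_b: bytes) -> bytes:
    # Length-prefix the first input so different (a, b) splits can't collide,
    # then hash; the output is as unpredictable as the stronger of the two inputs.
    return hashlib.sha256(len(seed_a).to_bytes(8, "big") + seed_a + seed_b).digest()

key_material = combine(os.urandom(32), secrets.token_bytes(32))   # stand-in sources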
1037  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 05, 2018, 01:58:35 AM
According to my suggested threshold of 60480, it doesn't fail.
No, but the _existing_ chain would already fail a threshold of a bit more than half that one. So your ad hoc rule doesn't even have a safety margin of 2 against the existing chain, much less against what is reasonably possible.  If hashrate grows quickly this threshold will be exceeded.  There is no constant threshold that large hashrate growth cannot exceed.
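
A quick sanity check of the margin, using the figures from the Aug 4 post in this thread (nothing new, just the arithmetic):

threshold = 60_480                  # the suggested constant
excess = 535_244 - 504_144          # how far the chain already runs ahead of the 10-minute schedule
print(excess, threshold / excess)   # ~31,100 blocks; threshold/excess is about 1.9, i.e. less than 2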

Quote
No, it is not a lazy approach, whatever.

I don't find 'whatever' to be much of an argument. It still sounds to me like you think the whole network should accept potentially dangerous changes to the consensus rules in order to avoid having to correctly process a megabyte of network data without undue load. How isn't that lazy?
1038  Alternate cryptocurrencies / Altcoin Discussion / Re: CommitTransaction(): Transaction cannot be broadcast immediately, no-witness-yet on: August 04, 2018, 10:31:20 PM
Can anyone help me with this error CommitTransaction(): Transaction cannot be broadcast immediately, no-witness-yet

Seems to happen sometimes when sending transactions from client to client

How are you creating the transactions? Have you allowed your node to sync? no-witness-yet was a message produced by older software when you tried to send a segwit spend but the blockchain hadn't activated segwit yet.
1039  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 04, 2018, 10:11:31 PM
Obviously, the criteria is:
Any fork, being the longest chain or not, is not allowed to exceed a computable threshold in length such that it would compromise the block generation rate imposed by other consensus rules.
Taken strictly, that criterion is violated by the existing chain. In other words, if it had been imposed previously the network would already have faulted. Block 0 has timestamp 2009-01-03 18:15:05, which was 3501 days ago, implying ~504,144 blocks. Yet we are at height 535,244.
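
The arithmetic, spelled out (numbers from this post):

days_since_genesis = 3501                    # from the 2009-01-03 18:15:05 genesis timestamp
expected_height = days_since_genesis * 144   # 144 blocks/day at one block per 10 minutes -> 504,144
actual_height = 535_244
print(actual_height - expected_height)       # the chain is ~31,100 blocks "ahead of schedule"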

Any increase in hashrate will result in that condition, implemented strictly, being violated, because the difficulty adjustment is retrospective and only proportional (it includes no integral component).  If the condition is violated it can cause spontaneous consensus failure, by forcing block times up against the maximum time skew and causing the network to break into islands based on small differences in their local clocks. You can add margin so that it hasn't failed _yet_ on the current chain, but what reason do you have to expect that it wouldn't fail at some point in the future for any given selection of margin?  There is no amount by which the chain is guaranteed by construction not to get ahead. Quite the opposite: under the simple assumption that hashrate will increase over time, the chain is guaranteed to get ahead.

Quote
So we have a solid definition, provably safe
As far as I can see your posts contain no such proof. In fact, the existing chain being well ahead of the metric now is concrete proof that some arbitrary safety margin values are not safe. What reason do you have to believe another arbitrary value is safe given that some are provably not, beyond "this setting hasn't failed with the historic chain yet"?

Had your same line of reasoning been followed in 2012 -- setting a margin of 1000 blocks (or whatever was twice the adequate value at that point in time) -- then sometime in the following year, after the hashrate grew tremendously, the network would have spontaneously broken into multiple chains.

Quote
and helpful in mitigating the DoS vulnerability under consideration here
What DoS vulnerability? There isn't one, as far as any measurements have thus far determined.  Changing unrelated consensus rules, in ways which only satisfy a "doesn't fail yet on the existing chain" level of proof, to fix a "maybe service could be denied here" node behavior seems, frankly, absurd.  Additionally, it seems just kind of lazy to the point of reckless indifference:  "I can't be bothered to implement some P2P handling within the performance envelope I desire, though it clearly can be done through purely local changes, so I'm going to insist that the core consensus algorithm be changed."

If your concern is that this might hold a lock too long (though measurement shows that a node keeps working fine even when slamming 200k lookups on every network message, with the code previously posted in the thread) -- then just change the message handling to drop the blinking lock between groups of entries!   In what universe does it make sense to address a simple implementation programming question through speculative consensus rules with arbitrary parameters that would provably have failed already for some choices of those parameters?
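
For illustration only, a sketch of the "drop the lock between groups of entries" idea (the lock, the block set, and the chunk size are placeholders of mine, not Bitcoin Core's actual structures):

import threading

cs_main = threading.Lock()                 # stand-in for the node's main lock
known_blocks = set(range(100_000))         # stand-in for the block index

def lookup(h):
    return h in known_blocks               # placeholder per-entry work

def handle_locator(entries, chunk=1000):
    for i in range(0, len(entries), chunk):
        with cs_main:                      # hold the lock only while processing one chunk
            for h in entries[i:i + chunk]:
                lookup(h)
        # the lock is released here, so other message handling can interleave between chunks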

Quote
I'm trying to understand why should we hesitate or use a more conservative approach like what you suggest
Because it's trivial to prove that it's safe, it doesn't require changes to the Bitcoin consensus rules which might open the system up to attack or spontaneous failure, it can't create new node-isolation vulnerabilities, etc.  Why? Because some of us don't think of Bitcoin as a pet science project, understand that it's difficult to reason about the full extent of changes, and actually care whether it survives.  If you share these positions you should reconsider your arguments, because I don't think they make sense in light of them.
1040  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (test data wanted) on: August 03, 2018, 10:55:52 PM
My intuitive assumption is that no legit node will suggest/accept forks with a chain much longer than what 10 minutes block time imposes.
No existing software that I'm aware of meets your assumption; they absolutely will do so. It's not at all clear to me how one could construct a rule not to do so that would be guaranteed not to create convergence failures, at least in some maliciously created situations.

Quote
Although the longest chain rule is to be used when nodes are deciding between forks, there should be an upper bound for circumstances in which this policy is applicable, and should be dropped when a suggested fork length exceeds that upper bound.
I think you might have buried the headline a bit here: your proposed solution amounts to reworking the whole consensus algorithm to ignore its primary convergence criterion and replace it with some kind of "ignore the longest chain under some circumstances" rule, but you aren't even specifying which circumstances or the exact nature of the change... making it impossible to review in any detail.

I don't at all doubt that _some_ reasonable ones could be cooked up...  but why would it be worth the effort to review such a potentially risky change, when the thing being fixed is merely "someone can use a lot of bandwidth to temporarily drive your CPU usage higher"?   -- especially when there are a lot of ways to do that, e.g. set a bloom filter to match almost nothing and scan a lot of blocks... which has very low bandwidth per unit of CPU burned.

But I also think your suggestion is overkill:  if instead there were a BIP saying that locators should never have more than 10+10*ceil(log2(expected blocks)) entries, then the nodes creating locators could determine whether their locators would be too big and instead make their points further apart to stay within the limit (and presumably one less than it, to deal with time skew).  No change in consensus is required, and nodes can still accept all the blocks they want. If you are on some many-block crazy fork, you'll send a sparser locator, and as a result just waste a bit more bandwidth getting additional headers that you already know.  --- And that's what I was attempting to suggest upthread: the safe change is to first get honest requesters to limit themselves; then later you can ignore requests that are too big.
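
Roughly what I have in mind for the requester side, as a sketch under my own assumptions (the exact spacing policy once the cap is near doesn't matter much, as long as honest nodes stay under it):

import math

def build_locator(tip_height, expected_blocks):
    cap = 10 + 10 * math.ceil(math.log2(expected_blocks)) - 1   # keep one entry of slack for time skew
    heights, step, h = [], 1, tip_height
    while h > 0 and len(heights) < cap - 1:
        heights.append(h)
        if len(heights) >= 10:     # the usual shape: ten recent blocks, then exponentially wider spacing
            step *= 2
        h -= step
    heights.append(0)              # always end with the genesis block
    return heights

print(len(build_locator(535_244, 504_144)))   # well under the cap for the real chain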