Bitcoin Forum
December 13, 2018, 12:33:35 PM *
  Show Posts
61  Bitcoin / Development & Technical Discussion / Re: Why can Core not Scale at a 0.X% for blocksize in the S-Curve f(X) on: August 02, 2018, 03:20:41 AM
we should take care not to have bitcoin frozen like in its ice age, 
You state this as if the capacity wasn't recently roughly _doubled_, overshooting demand and knocking fees down to low levels...
62  Bitcoin / Development & Technical Discussion / Re: Why can Core not Scale at a 0.X% for blocksize in the S-Curve f(X) on: August 01, 2018, 08:16:14 PM
I believe you are over-emphasizing on hardware and less on bandwidth and network latency. Bandwidth's "growth" is going slower and slower over the years, and that slow growth will compound more on network latency because the effects of higher bandwidth does not translate immediately on the network according to Nielsen's Law.

Another factor is that it's generally dangerous to set bars for participation based on any fraction of current participants.

Imagine, say we all decide 90% of nodes can handle capacity X. So then we run at X, and the weakest 10% drop out.  Then, we look again, and apply the same logic (... after all, it was a good enough reason before) and move to Y, knocking out 10%...  and so on. The end result of that particular process is loss of decentralization.

Some months back someone was crowing about the mean bandwidth of listening nodes having gone up. But if you broke it down into nodes on big VPS providers (amazon and digital ocean) and everyone else, what you found is that each group's bandwidth didn't change, but the share of nodes on centralized 'cloud' providers went way up. Sad  (probably for a dozen different reasons, -- loss of UPNP, increased resource usage, more spy nodes which tend to be on VPSes...)

Then we have the fact that technology improvements are not necessarily being applied where we need them most-- e.g. a lot of effort is spent making things more portable, lower power consuming, and less costly rather than making things faster or higher bandwidth.  Similarly, lots of network capacity growth happens in dense, easily covered city areas rather than for everyone.  In the US, in a major city you can often get bidirectional gigabit internet at personally affordable prices, but 30 miles out you can spend even more money and get ADSL that barely does 750kbit/sec up. The most common broadband provider in the US usually has plenty of speed but has monthly usage caps that a listening node can use most of... Bitcoin's bandwidth usage doesn't sound like much, but when you add in overheads and new peers syncing, and multiply that usage out 24/7, it adds up to more bandwidth than people typically use... and once Bitcoin is using most of a user's resources, the costs of using it become a real consideration for some people.  This isn't good for the goal of decentralization.
63  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (rewievers wanted) on: August 01, 2018, 06:39:11 PM
As of your 200K loop, and 20% increase in cpu usage: It is huge, imo. With just a few malicious requests this node will be congested.
The patch I posted turns _every_ message into a "malicious message" and it only had that modest cpu impact, and didn't keep the node from working.  This doesn't prove that there is no way to use more resources, of course, but it indicates that it isn't the big issue it was thought to be above. Without evidence otherwise this still looks like it's just not that interesting compared to the hundreds of other ways to make nodes waste resources.

I think it will be helpful to force the hacker to work more on its request rather than randomly supply a nonsense stream of bits.
It does not require "work" to make the requested numbers begin with 64 bits of zeros.

Yet, I believe lock is not hold when a block is to be retrieved as a result of getblock, once it has been located, Right?
Sure it is, otherwise it could be pruned out from under the request.

So a getheaders request with more than 200-250  hashes as its payload is obviously malicious for the current height.
Who is missing something here?
If you come up and connect to malicious nodes, you can get fed a bogus low-difficulty chain with a lot more height than the honest chain, and as a result produce larger locators without being malicious at all. If peers ban you for that, you'll never converge back from the dummy chain.   Similarly, if you are offline a long time and come back, you'll expect a given number of items in the locator, but your peers-- far ahead on the real chain-- will have more than you expected.   In both cases the simple "fix" creates a vulnerability. Not the gravest of vulnerabilities, but the issue being fixed doesn't appear-- given testing so far-- especially interesting, so the result would be making things worse.

I suppose once a spv client has been fed by like 2000 block headers it should continue
This function doesn't have anything to do with SPV clients, in particular. It's how ordinary nodes reconcile their chains with each other. If locators were indeed an SPV-only thing, then I agree that it would be easier to just stick arbitrary limits on them without worrying too much about creating other attacks.
64  Bitcoin / Development & Technical Discussion / Re: Which bitcoin core version is best for Merchant website on: August 01, 2018, 02:06:59 AM
It is almost completely certain that the accounts functionality does not do what you want and never has, in any case.

Accounts were created to be the backend of a long-defunct webwallet service (which lost or ran off with everyone's funds); people often think they do things that they don't, such as acting the way multiwallet does (or even other things that don't make much sense in Bitcoin, like "from addresses").
65  Bitcoin / Development & Technical Discussion / Re: Need some clarification on usage of the nonce in version message on: July 26, 2018, 03:11:43 PM
Because it saves unnecessary code.
If you are worried about one line of code in exchange for doing something right, you probably have no business creating a Bitcoin node. Smiley (In fact, the difference in practice should be zero lines of code-- it's just a question of where the nonce for comparison is stored: globally or per-peer.)

In any case, using a consistent value would be bad for privacy allowing the correlation of a host across networks and time.
66  Bitcoin / Development & Technical Discussion / Re: Just a few question about transactions on: July 23, 2018, 09:46:29 AM
Nodes already know the prior transactions were valid, so they don't need to check them again.  They just need to track the currently unspent outputs, just as you are thinking.

The Bitcoin software can run with pruning, which makes it forget old transactions.  With pruning the security and behavior are generally indistinguishable from that of other nodes-- but when running in that mode your node just can't help new nodes come up, because they need to see the history and you don't have it.

See also Section 7 of the Bitcoin whitepaper.
67  Other / Meta / Re: Anunymint ban on: July 22, 2018, 11:55:43 PM
You are busy as you say and simply don't have the time to engage with the board on things that are not important or directly for advancement of this technology. This is understandable and for sure works best like that.

It's strange in someways because I just clicked on your post history to see if you were still a regular poster...which you are (most of what you say 99% is over my head of course ) but in a way your posting style has some small similarity to Anonymint... I mean this one I think the most recent ... that tone is a tone I often see that is exasperation and frustration at people not seeing things as you do (or as they really are) even after a long discussion. I did not read any other part of that thread but just from that I really could have believed anonymint could have be the author.

I have almost 5000 posts on the forum, in many different subforums and subjects.  But only something like 13 posts in 2018, most in February. Like me, many other technical parties have stopped using the forum entirely or almost entirely. I don't think it's reasonable to say that I am active.  Usually I only post now when a journalist sees something on BCT and asks me to comment; instead of commenting to the journalist I prefer to just go reply to the thread.

Your example is one of those, in fact: the poster in question was running around with incorrect claims of vulnerabilities in Bitcoin.  I got asked.  I'm ashamed that my post looked anything like anonymint's to you, but I'm also not surprised: as you note, you don't currently have the background to evaluate the technical content, so you're reading for tone.   If I write in a less than kind tone even when addressing someone who is themselves unkind, it's a mistake on my part which I regret.  But I hope-- and have reason to believe from the results-- that the good I contribute eclipses the crime of having a bit of humanity here and there. Smiley  Unfortunately, to you-- and you are not alone-- someone who does interesting technical work that makes a real difference and someone who strings together terms and disrupts discussions can look pretty similar.  It seems that many draw an equivalence among all people who say things that they don't understand, and in doing so they do everyone including themselves a great disservice; you can probably understand more than you give yourself credit for, and when you don't understand, at least some of it is a failure on the speaker's part to make themselves understandable. Sometimes that failure is because they don't understand what they're saying themselves.

To some in shoes like yours, the constant and unrelenting anger in anonymint's posts makes him seem even more credible.  Arguably, other more competent posters could win those people over by matching tone.  But most of us don't want to live like that; we don't want to be king of the crapped-up pool. We'd rather just go away, and-- by and large-- we have.

Sometimes it's a question of venue-- if I'm writing in the technical subforum I'm usually not trying to address a particularly general audience... but if I'm not comprehensible to the audience I'm addressing it's because I'm making a mistake or because I don't fully understand what I'm talking about myself and so I can't (yet) explain it clearly (it happens from time to time...).  Please feel free to ask me to expand on any of my posts if one interests you but sounds like opaque jargon.

but I see some mods, lauda and now you are here so a lot of big players who make the decisions

Just as a point of order, sub-forum mods on BCT don't really have much in the way of authority.  Mostly we have the technical ability to zot spammers and move around threads, but forum norms and policies generally frown on using those tools in an especially editorial way. (Moreover, even if a subforum moderator can get away with it, it doesn't help much without the support of global mods and theymos to do things like ban users.)  Generally, subforum mods have about the same clout they'd have as a similar non-mod long time community member.   I wouldn't be surprised if a respected technical contributor like me standing up and saying that anonymint's posting drives him off the forum had some impact-- otherwise I wouldn't have commented-- but that's about it; after all, I've been telling people to hit [ignore] on anonymint for years, and he's still been here all this time. We don't, for example, have the ability to ban accounts from particular subforums.  If that had been up to me I would have done that with the tech subforum and anonymint years ago-- the people who find him disruptive are mostly in there and the people who don't are mostly elsewhere...
68  Other / Meta / Re: Anunymint ban on: July 22, 2018, 01:22:55 AM
I just noticed that AnonyMint was banned again, sadly as a result of him posting under a new sock account.

I believe that, as much as any single person could possibly be, AnonyMint (and the forum's historical failure to get him under control) is responsible for a significant fraction of the technically competent people becoming largely inactive.

AnonyMint's posts are almost exclusively jargon-laden techno-babble.  His posts are angry and abusive while at the same time they often fail to even make syntactic sense when it comes to the technical content-- at least to anyone who knows what the words mean.  He relentlessly floods threads with his trademark nonsense and switches to slanderous personal attacks whenever someone disagrees with him.   If that were all there was to it the ignore button would be sufficient, but his multi-posting derails basically any thread he targets because if even a few people fail to ignore him they'll respond (usually disagreeing, sometimes just trying to figure out what the heck he means) and make it nearly impossible for productive discussion to continue. Worse, AnonyMint's abusive but "technical sounding" approach is moderately effective at mobilizing throngs of well meaning but ignorant people to his defense (especially ones who are interested in pumping altcoins and find AnonyMint to be sufficient 'proof' for whatever they already wanted to believe). When mobilizing an ignorant mob fails he resorts to the use of copious alt accounts.

People who are really savvy with the technology have valuable time (as is the case for anyone with valuable skills).  It's a waste of that time to spend it in a place where there are decent odds of their efforts being buried under a mountain of abusive nonsense.   Even those few who don't find his dishonest practices extremely annoying are forced to admit that it's just a waste of time to be in the same venue as someone like that.

AnonyMint is not the only example of this sort of abusive ignorance that shows up on the forum -- it's not uncommon for newbies who are used to being the smartest guy in whatever little pond they came from to show up and say they're going to "fix bitcoin" while calling everyone else an idiot for the couple months it takes for them to realize how little they actually know...  but most of these people are just ignorant and can be educated, and they aren't especially relentless.  By comparison, AnonyMint's consistent conduct year after year is especially demoralizing. With some angry newbie there is at least the hope that you'll get through to them or that ignoring them will be sufficient.  With AnonyMint, from the moment he takes interest in a thread the outcome is clear in advance-- he's going to post and rant until everyone gives up or flames out, and it's never going to change.

I think the people concerned about AnonyMint's "free speech" in this thread are being duped into being pawns in AnonyMint's efforts to shut down the freedom of others to communicate and associate. AnonyMint is clearly free to post whatever he wants on his own site (and any other site that can stand him). You're free to discuss his "ideas" with him there, if they interest you.   But when the forum invites AnonyMint to post without restriction, other people aren't practically able to have the discussions they want to have-- he drowns them out and buries them under toxic stink. If a community can't choose topics and participants, then anyone who wants can shut down a community's ability to communicate.

If this isn't obvious to you yet, consider a silly analogy:  I think we can all mostly agree that people generally ought to be able to operate their own bodies as they see fit, without other people restricting how they use them. But then we have a public pool that the community likes to use, and since it's a public pool we all agree everyone ought to have equal access to it.  But then comes AnonyMint, and for whatever reason he insists on using his autonomy over his bodily functions to defecate in the pool and refuses to cut it out.  Some people can't smell it and aren't worried about pathogens and don't mind. But a lot of people do mind and won't get in the crapped up water. So his "freedom" to use the pool without restriction on his conduct ultimately denies others the free use of the pool that they ought to be able to use. If the pool operator won't keep the crapper out, then people will go off to use other shit-free pools... and leave the original one for people who like shitting in the pool and the few who don't mind it.

Reasonable people can usually disagree about exactly _where_ the line should be drawn. But the principle that sometimes you've got to set and enforce limits to create a space that people can actually enjoy should be something we all agree on.  In AnonyMint's case, I think almost everyone would agree his conduct has been consistently far over the line, but I think his abusive conspiracy theorizing rants strike a resonance in some people and blind them to how intolerable the guy actually is...

69  Bitcoin / Development & Technical Discussion / Re: An analysis of Mining Variance and Proximity Premium flaws in Bitcoin on: July 21, 2018, 05:56:07 AM
What you are describing here as "Proximity Premium" is normally called "progress". Normally we think of mining as working as a lottery: Your chance of finding the next block is just simply your proportion of the hashrate out of the total.  But in a system with progress things work more like a race: the fastest party wins more often (or even always, if there is a lot of progress). Lottery vs race is a near perfect analogy-- in a lottery your chance of winning on a ticket doesn't change because of your prior ticket, but in a race your chance of winning in this step depends critically on all the steps before it.

Progress can be introduced into mining in many different ways, -- including from things like bad POW designs that try to reduce variance by making mining incremental, or blocks simply taking non-zero time to spread and validate across the network -- but the end result is the same: Progress creates an insidious centralization pressure where the bigger miner actually gets a better return on investment.  Due to re-investment even small amounts of progress could conceivably completely centralize a system even absent other centralization pressures.

Progress is a lot harder hitting than most people expect. You might look at 6 seconds of delay and wonder how that could matter against a 600 second block interval.  But because of the nature of poisson processes, most blocks are found much closer to each other than 600 seconds.  We saw the direct effects of this in the network: when blocks started getting larger than 200k, propagation times started getting upwards of 2 seconds, and miners saw higher orphaning rates and consolidated onto fewer larger pools, even giving a single pool a super-majority share of the hash power for a little while.  After we invented and deployed new technology that made blocks propagate faster, the consolidation reversed.  Unfortunately, there is probably no amount of progress which is "safe" -- at best it's only small enough to be too small a centralization pressure to worry about compared to other issues, and the improved propagation requires cooperation -- a large miner who intentionally produces blocks with unrelayed transactions will immediately have the old slow propagation, which will improve their own bottom line.  When it comes to security you have to build for the worst case, not just hope people will cooperate instead of doing whatever is best for themselves. So long as the difference between the worst case and typical is small there isn't much incentive to exploit it; for the moment it seems to be working.
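The lottery-vs-race dynamic is easy to see in a toy Monte Carlo simulation (my own sketch, not from the original discussion; the delay is set to an exaggerated 30% of the mean block interval so the effect is clearly visible over a modest number of trials):

```python
import random

def simulate(big_share, delay, rounds=200000, seed=42):
    """Two miners race for blocks.  `delay` is the propagation delay as a
    fraction of the mean block interval; the miner who did NOT find the
    previous block starts the next round that much behind."""
    rng = random.Random(seed)
    last_winner = None
    big_wins = 0
    for _ in range(rounds):
        # Time to find the next block: exponential with rate = hashrate
        # (a pure memoryless lottery when there is no delay).
        t_big = rng.expovariate(big_share)
        t_small = rng.expovariate(1.0 - big_share)
        if last_winner == 'big':
            t_small += delay      # small miner hears about the block late
        elif last_winner == 'small':
            t_big += delay        # big miner hears about the block late
        if t_big < t_small:
            big_wins += 1
            last_winner = 'big'
        else:
            last_winner = 'small'
    return big_wins / rounds

fair = simulate(0.7, delay=0.0)   # no progress: win share ~= hashrate share
raced = simulate(0.7, delay=0.3)  # with delay: a race, not a lottery
print(fair, raced)
```

With zero delay the 70% miner wins about 70% of blocks; with delay its win share creeps above its hashrate share -- exactly the centralization pressure described above, since the bigger miner is the one most often racing with a head start.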

Concern about progress isn't new-- It's been part of the regular understanding of the engineers working on Bitcoin for a long time-- and bitcoin's creator probably understood it too considering that he chose a 10 minute block interval (smaller numbers _look_ fine until you start trying to work out the effects of progress).

That is why I find it disappointing that a day ago you were claiming that there were no reasons not to radically increase block sizes, while slandering all the engineers who've maintained Bitcoin its whole life. You have been more or less calling the people that built so much of what everyone uses malicious criminals, in post after post. Sad

Today you've miraculously discovered that larger propagation delays result in progress, which turns mining from a lottery into a race favoring larger miners-- welcome to the state of competent Bitcoin development circa 2011. And thanks for the proof that you haven't even bothered to read any of the relevant discussions over which you felt so free to smear people...

What will you realize tomorrow?  Maybe that progress and variance are not one and the same, and in fact almost any variance reduction proposal increases the potential for progress once you realize that participants will behave strategically instead of honestly. For a simple example of where making variance lower makes progress worse, say that you require that a block present 10000 POWs of 1/10000th the difficulty instead of one (your proposal basically does that, in fact): That is a system with very low variance (1/10000th) and almost perfect progress: If you had two miners with 30% hashpower that didn't share or only shared work with each other and four miners with 10%, the 10% miners would almost _never_ find a block. Again, not a new result: Low variance hashcash using multiple POW was described in the hashcash paper (section 6), and people have re-proposed it once or twice a year on BCT since the start. Maybe that if you don't bother researching you'll just rehash old and broken ideas, and if you smear people you'll largely get ignored instead of getting responses that help you improve your own understanding...  Or maybe not; your posting history is littered with pumping your belief in scammer wright, after all.  Sad Sad Sad
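To make the 10000-sub-POW example concrete, here's a quick simulation (my own construction): the time for a miner to accumulate many independent exponential sub-shares is Gamma-distributed with a relative spread of only ~1%, so the low-variance scheme shuts the 10% miners out almost completely.

```python
import random

# Miners do not share work: each must accumulate 10000 sub-POWs alone.
# Time to collect k independent sub-shares at rate h is Gamma(k, 1/h),
# whose relative std-dev is 1/sqrt(k) ~= 1% -- almost no variance at all.
HASHRATES = [0.3, 0.3, 0.1, 0.1, 0.1, 0.1]  # two 30% miners, four 10% miners

def winner(rng, shares_needed=10000):
    times = [rng.gammavariate(shares_needed, 1.0 / h) for h in HASHRATES]
    return times.index(min(times))            # first to finish wins the block

rng = random.Random(1)
wins = [0] * len(HASHRATES)
for _ in range(2000):
    wins[winner(rng)] += 1
print(wins)
```

In 2000 simulated blocks the four 10% miners never win once: a 10% miner's finish time is centered around 3x that of a 30% miner with only ~1% spread, so it can never get lucky enough to close the gap -- near-perfect progress, exactly as described above.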

I think that is just kind of sad and very frustrating.  Even the idea of "fix"ing variance is somewhat confused on its face. Variance is utterly essential to the operation of the system; it _cannot_ converge without variance significantly greater than the communications diameter of the network: Imagine a toy example with a world wide network of nodes and two miners with equal hashrate. In a system with no variance they'll keep producing blocks at exactly the same time, so once the network forks once it'll never heal-- nodes will just follow the closest miner. The same holds when variance is too low rather than zero: you get long reorgs because far away nodes will only switch chain heads when a nearby miner in a tie gets unlucky enough to fall behind more than enough to overcome the communications delays, which happens only rarely if the variance is low.  This isn't speculative: e.g. there was an altcoin called "liquid bitcoin" (IIRC) with a fixed difficulty; once there was enough hashpower to make blocks fast it fragmented into hundreds of separate chains that were unable to heal and form a single consensus. Coupled with strategic behavior by miners (such as large miners delaying sharing their work), your proposed change would also create a tremendous amount of progress (IOW centralization pressure)-- just like the example I gave above.

In spite of pages and pages of discussion no one has until now pointed any of this out to you, because almost no one technically competent will bother even reading your posts because you haven't bothered learning what you don't know and you spend a lot of energy making really nasty and unprofessional insults-- I only read any of it because your seemingly incorrect claims of "vulnerabilities" caught the attention of journalists and I was asked if you were full of it or not.
70  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (rewievers wanted) on: July 20, 2018, 10:32:59 PM
Being "interesting" or not, this IS a DoS vulnerability

No information has been presented so far which supports this claim.   It was a reasonable question to ask whether looking up an entry was expensive; if it were, then it would be an issue. But it is not expensive; it is exceptionally cheap.

In fact, sticking a loop that takes cs_main and does 200k lookups each time a network message is received seems to only increase CPU usage of my bitcoin node from 3% to 10%.  Maybe, just maybe, there is some crazy pathological request pattern that makes it meaningfully worse and which somehow doesn't also impact virtually every other message. Maybe. It's always possible. But that is just conjecture.  Most conjectures turn out to be untrue; talk is cheap. Needlessly insulting, sanctimonious talk, doubly so.  Some people wonder why few technically competent people frequent these forums anymore, but it isn't hard to see why-- especially when the abuse seems to come so often from parties whose posting history reveals that they're primarily interested in pumping some altcoin or another.

diff --git a/src/net_processing.cpp b/src/net_processing.cpp
index 2f3a60406..6aff91a48 100644
--- a/src/net_processing.cpp
+++ b/src/net_processing.cpp
@@ -1558,6 +1558,15 @@ bool static ProcessMessage(CNode* pfrom, const std::string& strCommand, CDataStr
+    {
+        LOCK(cs_main);
+        arith_uint256 hash = 0;
+        for(int i=0;i<200000;i++){
+          BlockMap::iterator mi = mapBlockIndex.find(ArithToUint256(hash));
+          hash++;
+        }
+    }
     if (strCommand == NetMsgType::REJECT)
         if (LogAcceptCategory(BCLog::NET)) {

1- Check the length of the supplied block locator not to be greater than 10*Log2(max_height) + a_safe_not-too_large_threshold
And then nodes that have very few blocks will get stuck. Congratulations, your "fix" for an almost-certain non-issue broke peers. Safely size-limiting it without the risk of disruption probably requires changing the protocol so the requesting side knows not to make too large a request.

2- Check the difficulty of the supplied hashes to be higher than or equal to some_heuristic_nontrivial_safe_value
3- Check first/every 10 hashes to be reasonably close in terms of difficulty(they are supposed to be).
Why do you expect your hacker to be honest enough to use actual hashes? He can just use arbitrary low numbers.

4- Black list spv clients who send malicious getheaders requests in a row.
If you had any idea which peers were sending "malicious messages" why would you not just block them completely?  ... Any kind of "block a peer when it does X, which it could reasonably think was a fine thing to do" risks creating a network-wide partitioning attack by potentially creating ways for attackers to trick nodes into getting themselves banned.

of very short period of time that lock is hold.
You might not be aware, but reading a single block from disk and decoding it into memory should take longer than a hundred thousand memory accesses take.

I don't agree. Any algorithm/code can correctly be analysed and optimized/secured accordingly. No magics.
Yes, and it was analyzed here, and the analysis says that it would be surprising if it were actually slow, so it isn't worth any further discussion unless someone finds a reason to think otherwise, such as a test result.
71  Bitcoin / Development & Technical Discussion / Re: Bogus locator in getheaders (rewievers wanted) on: July 18, 2018, 11:36:47 PM
Looking up an entry is O(1) -- just a trivial hashing operation and one or two pointer chases.

So basically what you're saying is that you can make the node do a memory access per 32 bytes sent,  but virtually any network message also does that.  E.g. getblock <random number>.

Locators have no reason to be larger than O(log(blocks)), so indeed it's silly that you can send a big one... but I wouldn't expect it to be interesting. Alternatively, you could consider what the difference is between sending 100k at once vs 20 (a totally reasonable number) many times: only a trivial amount of network overhead and perhaps a couple milliseconds of blocking other peers whose message handling would otherwise be interleaved.  If you benchmark it and find out that it's significantly slower per byte sent than other arbitrary messages I'd be interested to hear... but without a benchmark I don't find this interesting enough to check myself.
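For reference, locator heights are chosen with exponentially growing steps back from the tip; the sketch below is written from memory to mirror the shape of Bitcoin Core's CChain::GetLocator (details of the real implementation may differ slightly), and shows why a locator only needs O(log(blocks)) entries:

```python
def locator_heights(tip_height):
    """Block heights included in a locator: the ~10 most recent blocks one
    by one, then exponentially larger steps back, ending at genesis."""
    heights = []
    step = 1
    h = tip_height
    while h > 0:
        heights.append(h)
        if len(heights) >= 10:
            step *= 2            # back off exponentially after the first 10
        h -= step
    heights.append(0)            # always include genesis
    return heights

loc = locator_heights(534000)    # roughly the chain height in mid-2018
print(len(loc))                  # ~30 entries, not hundreds of thousands
```

So for a half-million-block chain an honest locator is around 30 hashes, which is why a 100k-entry one is silly even if it isn't dangerous.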

There are a billion and one things that are conjecturally slow, but most of them are not actually slow (and more than a few things that seem fast are actually slow).  Anyone can speculate; testing is what is actually valuable.
72  Bitcoin / Development & Technical Discussion / Re: Why to write down your seed? regular InfoSec policies say never write passwords on: March 06, 2018, 09:11:55 PM
"infosec" password advice is given for contexts where the account can be cheaply recovered if the password is lost.  Not for cases where there will be very large monetary losses if it's lost.  Infosec advice is also overly focused on physically proximal threats.  This is outmoded advice: anyone who has physical access to your computer can easily compromise you 1000 ways without the password, and there are a thousand times more attacks from attackers that have no physical access.

Your goal at the end of the day is to keep access to your bitcoins. This means you must balance risks. If you only care about the risk of theft, destroy your private keys now and no one will ever steal them...

Someone who can break into your home can hold you at gunpoint and get you to type in basically any password you know... if the attacker is in your home you probably have bigger problems than them finding a hidden seed.
73  Bitcoin / Development & Technical Discussion / Re: looked at bitcoind source and looks like a shitcode on: February 24, 2018, 12:46:44 AM
Throwing completely substanceless insults at quality work in order to fool people who can't tell for themselves into thinking that you're brilliant seems to be a favorite pastime for folks who feel insecure about lacking the competence to accomplish anything themselves.
74  Bitcoin / Development & Technical Discussion / Re: segvan: Segwit vanity address & bulk address generator on: February 14, 2018, 04:23:45 AM
I’ve also been seriously mulling ideas for an online service which finds “vanity tweaks” for a private key held by a user—essentially, convenient results from rented time on powerful CPUs in the “cloud” (much though I loathe that word).  I’m curious as to how popular such a service could be.  Anybody interested?

Allow me to knock your socks off:

Say you have N people who each want to find a vanity tweak of their pubkeys which will roughly take M million tries to find.

You can find all N of them with just ~M million tries, instead of the N*M million tries if they were to do them themselves alone.

Here is how.  Each person has a pubkey P_i, and they all come up with uniformly random tweaks T_i.  They tweak their keys and send the resulting public keys to the hashing server, keeping the tweak and original pubkey private.   They also send the string(s) they want to match. They stay connected.

The server takes all the strings and compiles them into a single match expression (which can be matched in log() operations at worst, probably better).

Then the server sums all the tweaked pubkeys and grinds on it comparing the output with the omnibus matcher.

When it gets a hit, it demands that all clients except the one with the match reveal the private keys for their tweaked keys (this reveals nothing about the original private keys, since they've been tweaked).    It then sums up the tweak it found and everyone else's private keys and gives the result to the lucky user.

Everyone remaining sends a new tweaked pubkey (probably in the same message as their prior private key).  These get summed and the process continues with the new basepoint.

If someone fails to send their private key, you kick them off and ban them, and you lose that result, because you cannot reconstruct it without everyone else's keys.

Implemented correctly this is completely secure.

You could even have the individual users perform their own grinding.  So if they all had computers of the same speed, they would effectively get an N-fold speedup in how fast they find solutions.

To discourage abuse you could require a new participant to grind without submitting their own keys and patterns for a while.  Their found tweaks prove the work they did; once someone has done enough, you give them a token they can use to submit a pubkey and pattern(s) for matching.  If that user fails to reveal, you ban the token.  They can rejoin... but they have to do free work to get a new one.

I haven't previously implemented it because the protocol minutiae of tracking and banning and whatnot are a PITA, and only the mathematical part is interesting to me. Smiley
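The arithmetic that makes this work can be sketched in a few lines.  This is a toy model, not the forum's code: it uses a minimal, slow pure-Python secp256k1 (needs Python 3.8+ for `pow(x, -1, p)`), and it simply pretends some offset m produced a match for user 0 rather than actually grinding patterns.

```python
# Toy model of the pooled vanity-search math: the server only ever sees
# tweaked pubkeys, yet the winner can reconstruct a working private key.
import secrets

P = 2**256 - 2**32 - 977                      # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0: return None        # point at infinity
    if p1 == p2:
        l = 3 * x1 * x1 * pow(2 * y1, -1, P) % P           # tangent slope
    else:
        l = (y2 - y1) * pow(x2 - x1, -1, P) % P            # chord slope
    x3 = (l * l - x1 - x2) % P
    return (x3, (l * (x1 - x3) - y1) % P)

def mul(k, pt=G):                                          # double-and-add
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

users  = [secrets.randbelow(N - 1) + 1 for _ in range(3)]  # private keys p_i
tweaks = [secrets.randbelow(N - 1) + 1 for _ in range(3)]  # private tweaks T_i
# Each user sends the server only the tweaked pubkey (p_i + T_i)*G:
sent = [mul((p + t) % N) for p, t in zip(users, tweaks)]
base = None
for q in sent:
    base = add(base, q)                  # server's combined basepoint

m = secrets.randbelow(N)                 # pretend this offset matched user 0's pattern
match_point = add(base, mul(m))

# Users 1 and 2 reveal their tweaked private keys; user 0 adds their own:
others = sum(users[i] + tweaks[i] for i in (1, 2)) % N
final_priv = (users[0] + tweaks[0] + others + m) % N
assert mul(final_priv) == match_point    # the key really controls the matched point
```

Note that revealing (p_i + T_i) tells the others nothing about p_i, since T_i is uniformly random and never leaves user i's machine until the round is over.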
75  Bitcoin / Development & Technical Discussion / Re: segvan: Segwit vanity address & bulk address generator on: February 13, 2018, 01:44:38 AM
You want this code:  it will be astronomically faster than your current code.

I believe when I previously implemented the techniques in this code my result was faster than vanitygen on a GPU.

It could also be made faster still with some improvements.  E.g. it doesn't actually need to compute the y coordinate of the points, so several field multiplications could be avoided in the gej_to_ge batch conversion.   It could also avoid computing the scalar for any given point unless you found a match. (E.g. by splitting the scalar construction part into another function which you don't bother calling unless there is a match).

Another advantage of this code is that it is set up to allow an arbitrary base point.  This means you could use untrusted computers to search for you.

Sipa also has AVX2 8-way sha2 and ripemd160 that he might post somewhere if you asked.  An 8-way bech32 checksum generator should be really easy to do, though if your expression doesn't match on the final 6 characters you should avoid even running the checksum.
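The batched gej_to_ge conversion mentioned above gets its speed from amortizing a single field inversion across many points (Montgomery's trick).  A standalone sketch of that trick, assuming nothing about libsecp256k1's internals:

```python
# Montgomery's trick: invert many field elements at the cost of one modular
# inversion plus a few multiplications each.  Batch Jacobian-to-affine
# conversion applies the same idea to the points' z-coordinates.
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def batch_inverse(xs, p=P):
    # prefix products: pre[i] = x0 * x1 * ... * x(i-1)
    pre = [1]
    for x in xs:
        pre.append(pre[-1] * x % p)
    inv_all = pow(pre[-1], -1, p)        # the single expensive inversion
    out = [0] * len(xs)
    for i in reversed(range(len(xs))):
        out[i] = inv_all * pre[i] % p    # 1/xi = (1/(x0..xi)) * (x0..x(i-1))
        inv_all = inv_all * xs[i] % p    # strip xi from the running inverse
    return out

xs = [3, 7, 12345, 2**200 + 1]
invs = batch_inverse(xs)
assert all(x * ix % P == 1 for x, ix in zip(xs, invs))
```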
76  Bitcoin / Development & Technical Discussion / Re: Latest bitcoin core? on: February 09, 2018, 08:04:32 PM
I was going to run two nodes and had set up the addrindex-patched node on a VM. Due to some disk constraints (speed, capacity) I ended up deciding I would just run the patched addrindex node and use whitebind and whitelist in my bitcoin.conf so nobody but I can connect. You raised a point about vulnerabilities. Do you think the addrindex node is protected if I use whitebind and whitelist?
You have to connect to the outside world somehow... you could run your gateway node with pruning, then it would only use about 3GB space or so.
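A gateway-plus-patched-node split could look roughly like this in the two bitcoin.conf files.  The option names are real Bitcoin Core options, but the port for the patched node and the prune size are made up for illustration:

```ini
# Gateway node (current Bitcoin Core, pruned) -- illustrative values
prune=3000               # keep roughly 3 GB of block data instead of the full chain
listen=1

# Patched addrindex node -- talks only to the local gateway above
connect=127.0.0.1:8333   # never make other outbound connections
whitebind=127.0.0.1:8340 # hypothetical local port for the patched node
whitelist=127.0.0.1      # accept/trust only local connections
```

With connect= pointing at the gateway, the patched node never faces the open network, so whatever abuse the old code can't handle only ever reaches the up-to-date node.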

Also, what about the incorrect results you saw? What did you see and was it from this version: bitcoin-0.13.2-addrindex?
Querying it on an address which had funds returned no results.  The addrindex code there was written by Pieter as a quick lark, before he realized it was a bad idea and abandoned it.  Other people picked it up and patched it forward, but made no effort to improve it or investigate the issues I encountered with it.

Generally it's my expectation that anyone who uses something like addrindex will eventually be forced to use a centralized service provider once the resource costs of an unpruned, address-indexed full node are beyond what they can support.  (The fact that you struggled with running two nodes suggests you're within a factor of two of that already.)
77  Bitcoin / Development & Technical Discussion / Re: Lightning Network & bigger amounts? on: February 09, 2018, 09:07:12 AM
In theory you can transfer the max-flow in a single go, but software needs to support making payments over separate channels in order to do that atomically, and people are working on that.  But usually atomicity isn't required: usually you can just make a couple of separate payments if you run into limits.
78  Bitcoin / Development & Technical Discussion / Re: Random Number On Blockchain on: February 09, 2018, 08:57:28 AM
The standard protocol is commit-and-reveal with hashes, as mentioned above.  The problem commit-and-reveal protocols have is that the last party to reveal can jam the process if he doesn't like the result.  In some contexts that's harmless; in others it's fatal.
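A minimal sketch of such a commit-and-reveal round (illustrative only; the names and structure here are my own, not from any particular implementation):

```python
# Commit-and-reveal: everyone publishes a hash commitment first, then reveals.
# The hold-up problem: the last party to reveal can compute the outcome
# before deciding whether to reveal at all.
import hashlib
import secrets

def commit(value: bytes):
    nonce = secrets.token_bytes(32)        # blinds the value against guessing
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == commitment

# Three parties commit, then all reveal; the shared random value is the hash
# of all revealed secrets.
party_secrets = [secrets.token_bytes(32) for _ in range(3)]
commits = [commit(s) for s in party_secrets]
assert all(verify(c, n, s) for (c, n), s in zip(commits, party_secrets))
shared = hashlib.sha256(b"".join(party_secrets)).hexdigest()
```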

The hash-the-last-block's-ID approach can be biased by miners.  Without knowing what the result would be used for, you can't argue that they wouldn't do it; if they could make themselves win a 100 BTC lottery for sure, it would be totally reasonable to orphan and throw out blocks to pull it off.  The earlier proposal to use "the last 64 blocks" doesn't help; the last block is sufficient, since it already commits to all prior blocks anyway.

You can attempt to reduce the holdup issue in a commit-and-reveal protocol by using verifiable secret sharing.  For example: Alice, Bob, and Charley generate secret values a, b, c and, using ECC, compute and publish the points aG, bG, cG; these are their commitments.  Then each party generates a new random value r, sends r to one of the other two parties and (secret - r) to the other, and those parties publish rG and (secret - r)G.  So for example, Alice sends r to Bob and (a - r) to Charley.  Bob and Charley publish rG and (a - r)G, and anyone can add those values and check that they equal Alice's commitment, because aG = rG + (a - r)G; but the players still know nothing about each other's secrets.  Once the sharing is done, people can start revealing their secrets.  Alice reveals a, Bob reveals b... Charley decides he doesn't like the result and falls offline, but now Alice and Bob can reveal the shares of Charley's secret, and whoever remains can compute H(a||b||c).  Of course, if Bob and Charley are co-conspirators, they can abort as soon as they get Alice's shares if they don't like the result.  This approach generalizes to any threshold of participants.
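The share-and-check step can be sketched with modular exponentiation standing in for EC point multiplication (g^x mod p plays the role of xG; the homomorphic property being used is identical):

```python
# Verifiable additive secret sharing, with g^x mod p standing in for x*G.
# Anyone can check the published shares against the commitment, and the
# other two parties can reconstruct a secret if its owner drops out.
import secrets

p = 2**127 - 1          # illustrative prime (2^127 - 1 is a Mersenne prime)
g = 3
q = p - 1               # exponents live mod p - 1

a = secrets.randbelow(q)            # Alice's secret
commit_a = pow(g, a, p)             # published commitment, playing the role of aG

r = secrets.randbelow(q)            # share sent privately to Bob
s = (a - r) % q                     # share sent privately to Charley
# Bob and Charley publish "rG" and "(a - r)G":
pub_r, pub_s = pow(g, r, p), pow(g, s, p)

# Anyone can check the shares are consistent with Alice's commitment:
assert pub_r * pub_s % p == commit_a
# If Alice later refuses to reveal, Bob and Charley reconstruct her secret:
assert (r + s) % q == a
```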

These sorts of ideas can be combined.

But which combinations are interesting depends on the application. For example,  one case I've thought a lot about is making random values that people in the future can be pretty confident weren't rigged.

Alice and Bob agree on their terms, each generates a private key as a hash of the agreement terms and a random value, and each sends a bitcoin to their own key (A and B respectively) in block 1000.   Then after block 1000 they sweep their coins and reveal their keys, and the random number becomes H(blockhash || a_private || b_private).   This makes it difficult for a miner to bias the result without knowing A's and B's keys, which would let him steal the coins.  A and B can't easily restate their random values because they're committed in the chain, etc.

Another idea which I've not seen implemented is to use a slow sequential function on the block hash.  So e.g. your randomness involves H(blockhash), but you make H some function that takes 20 minutes to compute.   You could argue that a miner might throw out a block solution to bias a result, but if he can't even tell the result for 20 minutes after finding the block, that is much harder.  I understand that Ben Fisch has been working on finding functions for H() in this example which are cheap to verify, so others can check the result of that 20 minutes of computation without doing 20 minutes of computation themselves.
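A naive stand-in for such a function is plain chained hashing.  Unlike the cheap-to-verify constructions mentioned, checking it costs as much as computing it, but it shows the sequential structure:

```python
# Naive "slow sequential function": chained SHA-256.  It cannot be
# parallelized, because each step depends on the previous output.  The
# iteration count here is tiny; in practice you'd tune it so the chain
# takes on the order of 20 minutes.
import hashlib

def slow_hash(seed: bytes, iterations: int) -> bytes:
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

blockhash = bytes.fromhex("00" * 32)         # stand-in for a real block hash
out = slow_hash(blockhash, 100_000)
assert out == slow_hash(blockhash, 100_000)  # deterministic: same input, same output
```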
79  Bitcoin / Development & Technical Discussion / Re: How would it be know if a segwit thieft actually happened? on: February 09, 2018, 08:23:12 AM
Nullius and DannyHamilton are spot on.

It's sad that people are getting bamboozled by malicious disinformation on this subject.

The _exact_ same thing protects segwit outputs from being stolen by malicious miners as any other coin: Following the rules is part of what _defines_ mining.  A miner that steals an output hasn't mined as far as nodes are concerned, their blocks will simply be ignored (and the peers relaying them eventually banned).

Segwit is no different from any other consensus rule in this respect-- other than some were introduced later than others, but many have been introduced over time.

We didn't see this same sort of malicious FUD with P2SH, even though it was exactly the same; I guess because back then felons hadn't figured out how to monetize that sort of confusion.
80  Bitcoin / Development & Technical Discussion / Re: Latest bitcoin core? on: February 09, 2018, 08:09:33 AM
Old versions are old: they have known reliability and performance issues. 0.13.2 is vulnerable to DoS attacks (plus potentially other security issues, but I don't recall for sure), and it isn't getting updated for other changes, so it will fall further behind over time.   I would recommend at a minimum that you set up two nodes, one on current software and one running your special code, and make the one with the custom code connect only to the current software.  This way it's shielded from abuse that it might not be able to handle, and it's easy for you to upgrade the external node.

As an aside, that address index patch that was floating around gave rare false results for me.  I suspect that it could lose entries when there were reorgs, but I'm not sure if that was the cause or something else.