Bitcoin Forum
July 24, 2017, 10:47:03 PM *
News: BIP91 seems stable: there's probably only slightly increased risk of confirmations disappearing. You should still prepare for Aug 1.
 
  Show Posts
1  Bitcoin / Development & Technical Discussion / Re: So far Voting % result shows that there is no bitcoin split. on: Today at 10:34:41 PM
Anyone can create a spinoff altcoin at any time, and people have in the past, e.g. https://bitcointalk.org/index.php?topic=1883902.0

So the possibility that someone might create another one isn't very interesting...
2  Bitcoin / Development & Technical Discussion / Re: Release of the secp256k1 library? on: Today at 06:16:28 PM
Is there a formal release of the secp256k1 library?  The github repo just has a master branch.

No, not yet-- there are still remaining todos before that.

Quote
I assume the version that they included with bitcoin 14.2 is stable/release quality, but they don't appear to have tagged that back on their repo?
It's suitable for Bitcoin's purposes; and probably many others.
3  Bitcoin / Development & Technical Discussion / Re: Segwit tx format? Need a point in the right direction... on: July 23, 2017, 06:59:06 AM
The Bitcoin Core integrated wallet doesn't yet use segwit (except with minimal shims for testing); expect a release that does shortly after segwit is active.  Based on your comments it may be possible for you to use the minimal functionality, but it'll involve extra steps that you'll be able to avoid later.
4  Bitcoin / Development & Technical Discussion / Re: Sidechains and malleability on: July 19, 2017, 03:56:30 PM
Why is malleability still a problem, in anyone's opinion? Through it we can change only the TXID of a transaction, not its outputs or inputs. After some time the wallet will show the transaction with the new TXID. So, what's the problem?
There are many problems, because people do more with unconfirmed transactions than send them and wait for them to confirm before doing anything else.

For example, you may want to spend the change resulting from an unconfirmed transaction-- so that the rest of your coins are not held hostage until the next block-- but if the transaction ID is changed the child transactions are invalidated. You couldn't even reissue the replacement until you come back online and learn the new ID; now imagine that the private keys are kept offline in a safe.

If you make only the most boring kinds of transactions, never deal with anything unconfirmed (including making more payments until confirmation, or replacing transactions) and use carefully and competently written wallet software then malleability is at worst a minor nuisance.  If you are trying to do anything fancy with smart contracts, unconfirmed transactions (including just chaining them), or are _writing_ wallet software that has to handle malleation happening, then it's a burden and footgun.  

CLTV and CSV were introduced to recover some of the worst of the damage done by malleability-- without them you couldn't safely do multiparty escrows with timeouts... but even with them the fact that malleation invalidates subsequent spends makes life hard in many ways.

You can also think of it as changing the outputs: an output  is a txid, index, scriptPubkey, and an amount.  The malleation doesn't change the index, pubkey, or amount... but it does change the txid of the output; and this is precisely the problem that segwit solves.
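To make that concrete, here's a minimal sketch (plain Python, with made-up placeholder txids, not Bitcoin Core code) of why malleating a parent's txid breaks a pre-signed child:

Code:
# A coin is identified by (txid, output_index); these txids are hypothetical.
confirmed = {
    ("c0ffee...original-parent-txid", 1): {"amount": 0.5},
}

# A child transaction, signed offline, spends the parent's change output:
child_input = ("c0ffee...original-parent-txid", 1)

# Someone malleates the parent's signature encoding before it confirms, so the
# same payment lands in the chain under a different txid:
confirmed_after_malleation = {
    ("deadbeef...malleated-parent-txid", 1): {"amount": 0.5},
}

print(child_input in confirmed)                   # True: child spends the original
print(child_input in confirmed_after_malleation)  # False: child is now invalid and must be re-signed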
5  Bitcoin / Development & Technical Discussion / Re: Sidechains and malleability on: July 18, 2017, 06:28:25 PM
Are there any sidechain ideas or proposals that depend on the ability to create non-malleable transactions?
Nope. Malleability isn't an issue for well-confirmed transactions, which is all that any sidechain proposal works with-- often 100 blocks or more. But that hasn't stopped people you associate with from dishonestly claiming that segwit exists to enable sidechains. I doubt having this further confirmed will suddenly spur you on to calling out their incorrect claims, since you don't do so for anything else.
6  Bitcoin / Bitcoin Discussion / Re: The Barry Silbert segwit2x agreement with >80% miner support. on: July 17, 2017, 10:46:21 AM
Roger Ver on the other hand would just take a boat load of bitcoins with him (technically he would still have the same on both chains) and may potentially dump them on one chain to crash the price? (would be a big gamble and stupid move but hey he could theoretically do it if he wanted to be spiteful).
In many ways this would be the opposite of spite... it would remove him from the picture, which would be an ongoing benefit because he sure seems to hate the system as it is now, and would put less expensive coins in the hands of more people that believe in it.  I think that would be a pretty good return for a brief period of market volatility.
7  Bitcoin / Development & Technical Discussion / Re: Core secondary password on: July 16, 2017, 08:00:33 PM
Hi,

I really miss a secondary password in Core. Right now you can start up the app and it just shows all your addresses and balances. You can't spend any (assuming you encrypted your wallet), but you can still see them.

This is pretty bad IMHO. It opens up an avenue for a "3rd party" to extort you, knowing how much you have, by simply firing up the exe (I know you can encrypt the drive, etc., but this should be in the client).

You should use an encrypted disk.  If you do not, then there are a myriad of other leaks that will expose what you were doing.  Having a second password would very likely increase the amount of funds lost through forgotten passwords.

Quote
This layer should also give us plausible deniability. Basically encrypting X wallets and showing only that which matches the password entered. This could also be used to separate your coins (and avoid mistakes), but still keep them in one place.
And how would you explain the extra data in the wallet that doesn't decrypt?  It isn't so simple... plus with this comment you've gone from just an outer level of encryption to implementing multiple wallets in one file with a myriad of UI complications.
8  Bitcoin / Development & Technical Discussion / Re: Do any active/reputable devs support no scaling pressure as a direction for btc? on: July 15, 2017, 08:34:57 AM
an obviously ignorant person that intends only to disrupt.
Maybe, but many readers may not realize that.  Humoring trolls by pretending they are merely ignorant denies them their power (because they want to make us upset) and also is more enjoyable for other readers... it is also a safer option since sometimes someone who looks like a troll really is just misguided. Thanks for the links.
9  Bitcoin / Development & Technical Discussion / Re: Downloading pruned blockchain inefficient? on: July 15, 2017, 02:46:22 AM
If you want to run a pruned node but have the storage capacity to download the whole blockchain, it is indeed much faster to download the whole blockchain and then restart bitcoind in pruned mode to do the pruning afterwards, since otherwise it is constantly being pruned during the download process.

Have you actually measured that? If it's true-- it's a bug. The pruning should delete a whole blockfile at a time (128MB) and shouldn't take much time at all.  It's totally plausible to me that there is a bug here, but I've not noticed it myself or heard it reported before.
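For illustration only, here's a rough sketch of what "delete a whole blockfile at a time" means (this is not Bitcoin Core's pruning code; the file naming just follows the blk*.dat convention, and the real node also protects recent files needed for reorgs):

Code:
import glob
import os

def prune_block_files(blocks_dir, prune_target_bytes):
    """Delete whole ~128MB blk*.dat files, oldest first, until usage is under the target."""
    files = sorted(glob.glob(os.path.join(blocks_dir, "blk*.dat")))
    total = sum(os.path.getsize(f) for f in files)
    for f in files[:-1]:              # never touch the newest file
        if total <= prune_target_bytes:
            break
        total -= os.path.getsize(f)
        os.remove(f)                  # one whole file at a time: cheap, no rewriting

# e.g. prune_block_files(os.path.expanduser("~/.bitcoin/blocks"), 2 * 1024**3)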
10  Bitcoin / Development & Technical Discussion / Re: Do any active/reputable devs support no scaling pressure as a direction for btc? on: July 15, 2017, 02:43:58 AM
traincarswreck, I think you can make yourself be understood without the insult.

I dunno wtf people think settlement means.   Settlement is a payment of final recourse, not an IOU or other promise to pay but the final transfer of value itself.  Bitcoin is settlement because it's a true electronic cash not a system of IOUs like most electronic payment systems.  It's also a batch settlement system in that you get blocks as periodic events that arrive minutes to an hour apart.
11  Bitcoin / Development & Technical Discussion / Re: Do any active/reputable devs support no scaling pressure as a direction for btc? on: July 12, 2017, 08:50:00 AM
Some people do provide pressure to not change things. Fortunately, these days they don't need to provide much, because most developers have already been won over to that way of thinking (or started there)-- because it's pretty sensible once you really understand how subtle the trade-offs are in the system.

I can't speak to bringing global finance into order, beyond saying that I think Bitcoin is much more useful to the world as lower capacity but immutable and decentralized than it would be as paypal 2.0.

All the attacks and insults and pressure won't change that; what it does do is slow down other improvements a bit. For some people-- ones who don't want to see Bitcoin succeed-- that is probably victory enough.  So I think it's on every user to fight against that effect.
12  Bitcoin / Bitcoin Discussion / Re: The Barry Silbert segwit2x agreement with >80% miner support. on: July 11, 2017, 08:14:09 AM
This is utter bullshit as over the past week I've seen my mempool easily get down below 1MB of transactions now that the transaction spam has ended on mainnet.
If you think it has been low recently, wait until after segwit is active and starting to get used by major transactors... :-/

The best part of it was hours of going on about the attacker while making it clear that almost no one on that repo is even running their own testnet, and letting it go ~29 hours for a block.  If it takes 29 hours to fix an issue that takes a simple modification, a restart, and potentially running a shell one-liner to generate txn if there aren't enough-- how many people can possibly be working on this thing for real?  0.5?
13  Alternate cryptocurrencies / Altcoin Discussion / Re: ETHEREUM BLOCKCHAIN SIZE IS NOW 40% BIGGER THAN BITCOIN BLOCKCHAIN ! on: July 08, 2017, 11:25:24 PM
http://www.altcointoday.com/ethereums-blockchain-size-surpasses-bitcoins-by-40/

So is ETH going to sink to the same levels as once-upon-a-time giants like QuarkCoin, FastCoin and the like?

I mean who wants to wait a week for the block to sync? Who wants to carry 180GB+ of blockchain for a coin that is NOT #1 and will never be #1

 Shocked
Can any of the commonly used ethereum node software still sync the whole chain in a plausible amount of time?  I heard it was taking weeks and now all the software is defaulting to "fast sync" which is SPV-like security.

The really interesting point will be when their state gets as large as Bitcoin's blockchain.  They seem to be working to hold that back by having miners artificially limit it... and have tx fees not that dissimilar to Bitcoin's as a result, even though a few months ago they were mocking fees being over half a cent.

I saw a post on reddit that the ethereum admins had pledged to spend $1m/yr to keep 10,000 reliable nodes running themselves; if so, then they may disguise the collapse of their network pretty well... but it didn't cite any sources.

14  Bitcoin / Bitcoin Discussion / Re: The Barry Silbert segwit2x agreement with >80% miner support. on: July 08, 2017, 01:15:04 AM
The contention is "what comes after segwit?" On the Core side is "nothing" (i.e., just segwit)
lol

In fact, Core has many things planned after segwit-- some already done (compact blocks)-- such as signature aggregation, weakblocks, flexcaps, etc.

All anyone else has is MOAR BLOCKSIZE REGARDLESS OF THE CONSEQUENCES and some heads on spikes.
15  Bitcoin / Development & Technical Discussion / Re: What is in the pipelines for fixing SPV mining incentives introduced in Segwit? on: July 07, 2017, 07:15:41 PM
I'm sure it's already been dealt with or is being dealt with, and I am hoping somebody can link me to the most recent advances in that regard, so I can get all caught up.
Yes, to the extent it was any real concern at all, it was already dealt with by moving compact blocks up ahead of segwit, and they're now ubiquitously deployed.

To reiterate the concern:  Peter Todd expressed the thought that segwit might make it a greater gain to SPY mine by fetching the witness data separately, thus reducing the amount of data that needed to be sent in order to figure out what transactions were in a block. E.g. instead of fetching 2MB of data they would fetch 750K of non-witness data. The concern is that a miner could produce a block with invalid spends, SPV clients would accept it, miners that aren't validating the chain would extend that invalid chain, and the SPV wallet would see two confirmations; and that this optimization would make it somewhat more attractive for miners to mine without validating.

To avoid even this narrow concern: We pulled the development of compact blocks ahead of segwit.  With compact blocks the block is represented by a 6 byte witness-tx-id hash (equivalent to the old txids in that they hash everything including the witness) per transaction in the block. So the optimization that PT suggested above turns into a pessimization: Instead of sending a 30kb compact block you'd need to send a 750kb witness stripped block, which is 25 times larger (thus would take much more time to transfer instead of less).
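To put rough numbers on that (the transaction count and average size here are assumptions, roughly what a full block held at the time):

Code:
txs = 2500                       # assumed transaction count for a full block
short_ids = txs * 6              # BIP152 short transaction ids: ~15 KB
compact_block = 30_000           # ~30 KB once prefilled txs and overhead are included
stripped_block = 750_000         # ~300 bytes of non-witness data per tx -> ~750 KB

print(short_ids)                         # 15000
print(stripped_block / compact_block)    # 25.0 -- "25 times larger"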

I think it was always a pretty fringe argument-- if miners want to SPY mine and include transactions they could just communicate bloom filters of the txins they are spending between their spy mining buddies (and even commit them to blocks and validate them, if they like)-- and that works generically, segwit or not, and much much better than sending the witness data would have done (80 byte header + 3k filter, rather than a 750k witness stripped block).
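As a sketch of that alternative (a toy bloom filter, just to illustrate that the set of outpoints a block spends fits in a few KB no matter how big the block is; this is not any deployed protocol):

Code:
import hashlib

class TinyBloom:
    def __init__(self, size_bytes=3000, hashes=5):
        self.bits = bytearray(size_bytes)
        self.hashes = hashes

    def _positions(self, item):
        for i in range(self.hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "little") % (len(self.bits) * 8)

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# A miner shares the 80-byte header plus a ~3 KB filter of the outpoints its
# block spends; peers check their own mempool/UTXO view for conflicts without
# fetching any block data at all.
spends = TinyBloom()
spends.add(b"c0ffee...:1")              # hypothetical spent outpoint "txid:vout"
print(b"c0ffee...:1" in spends)         # True
print(b"deadbeef...:0" in spends)       # almost certainly False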

But there was some merit in the point that one thing was already built and running on all nodes and the other not;  thus pulling CB ahead of segwit instead of after seemed like a good way of making sure that the default protocol choices weren't ones that favored spy mining. And as you note, nodes already drop invalid blocks harmlessly...  the only exposed parties are SPV wallets getting ripped off by malicious miners, but they're already thoroughly exposed to that by the existing SPY mining today, segwit doesn't make a difference there.  Most of the hashpower is already validationless mining so they'll already extend an invalid block. Worse, most (all?) of the spv wallets just show a binary "confirmed" at one confirmation so they're already maximally vulnerable and the validationless mining hardly exacerbates their existing insecurity.

I think it would be great to get more general protection against validationless mining in the protocol. Unfortunately, miners and the developers of forks are aggressively against doing so. Short of a UASF for it, which I doubt there is political will for, especially with fork developers arguing that validationless mining is _good_, we won't likely get any unless/until it turns out to be an actual issue. Regardless, segwit no longer makes it any worse.

I also find much of the recent scaremongering on this to be highly disingenuous; Bitcoin Classic implemented blindly mining off just relayed headers and relaying headers without validating them, and only dropped it because their implementation was crashy.  BU has implemented having nodes skip signature validation _entirely_ if the timestamp in the block header added by the miner of the block is too old.  Yet it is the same people who created these validation bypasses who are suddenly so concerned about an obscure corner case that was fixed a long time ago. Similarly, the BU people support the Bitcoin.com mining pool, which engages in spy mining, and I believe they wrote its pool software. Doubly so because many of the same parties who are "so concerned" are both paid by and aggressively support the same miners that _today_ engage in spy mining. (I don't say this to criticize you, I get that you're just picking up on claims people are circulating; I'm specifically talking about the executives of the Bitcoin "Unlimited" corporation and the developers of Bitcoin Classic.) They don't just fail to complain about the widely deployed validationless mining but actively participate in and facilitate it themselves. But suddenly they're oh so concerned about a really obscure and outdated argument about segwit.
16  Bitcoin / Development & Technical Discussion / Re: Some 'technical commentary' about Core code esp. hardware utilisation on: July 07, 2017, 05:26:03 PM
It is very rare these days that someone outside the core team is so helpful to them as in this post.  Most outside core would be just as happy to see core fade away instead of being so very helpful as TB is being here, albeit TB was masterfully trolled by GM into doing GM's job for him, it is just as likely that GM's ego or outside incentives will prevent him from taking any of this helpful advice to heart.
Thanks for demonstrating your lack of either clue or integrity for the record-- some people might have been mistaking your endorsement of Wright as a moment of bamboozle rather than a deeper character flaw.

But just in case you missed the response to his claims there:

There was an incomplete PR for that, it was something like a 5% performance difference for initial sync at the time; it would be somewhat more now due to other optimizations. Instead we spent more time eliminating redundant sha256 operations in the codebase, which got a lot more speedup than this final bit of optimization will. It's used in the fibre codebase without autodetection. Please feel free to finish up the autodetection for it.  It's a perfect project for a new contributor.  We also have a new AMD host so that x86_64 sha2 extensions can be tested on it.

So, dumping some output of a google search citing code that we already had-- isn't exactly "showing us the wound", would you say?

Not to mention your highlight of the use of sha256^2 which is part of the protocol definition from day one and not something we could change without invalidating every transaction and block. (Nor is it entirely pointless...)  But I guess as Craig Wright's partner in crime you already know all about that because he's totally Satoshi. (lol)
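For anyone unfamiliar, sha256^2 just means SHA-256 applied twice; it's what produces txids and block hashes. A quick sanity check in Python against the well-known genesis block hash (the hex below is the standard serialized 80-byte genesis header):

Code:
import hashlib

def sha256d(data):
    """Double SHA-256, as used for txids and block hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

genesis_header = bytes.fromhex(
    "0100000000000000000000000000000000000000000000000000000000000000"
    "000000003ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa"
    "4b1e5e4a29ab5f49ffff001d1dac2b7c"
)

# Hashes are conventionally displayed byte-reversed:
print(sha256d(genesis_header)[::-1].hex())
# 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f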

Quote
Most interesting here is that TB has found where the Core team added the Big-O quadratic sighash bug, which is their big issue of why they need SegWit and can't scale.

So, you're telling us that "TB" is Craig Wright?  Because that easily debunked claim is Wright's as far as I know.
17  Bitcoin / Development & Technical Discussion / Re: Some 'technical commentary' about Core code esp. hardware utilisation on: July 07, 2017, 03:24:09 AM
XT team for starters:
Fun fact: Mike Hearn contributed a grand total of something like 6 relatively minor pull requests-- most just changing strings.  It's popular disinformation that he was some kind of major contributor to the project. Several of his changes that weren't string changes introduced remote vulnerabilities (but fortunately we caught them with review.)

Quote
Right, if the logic doesn't work, just fall back to using registration date and post counts to establish authority.
Yes, I've been using Bitcoin pretty much its entire life and I can easily demonstrate it. My expertise is well established; why is it that you won't show us yours, though you claim to be so vastly more skilled than everyone here?

Quote
At the time I didn't even know you guys were stupid enough to not compress the 150G of blocks, until someone reminded me in that thread. Seriously what is the point leaving blocks from 2009 uncompressed? SSD is cheap these days but not that cheap.
From 2009? ... you know that the blocks are not accessed at all, except by new peers that read all of them, right?  They're not really accessed any less than blocks from 6 months ago. (They're also pretty much completely incompressible with lz4, since unlike modern blocks they're not full of reused addresses.)

As to why? Because a 10% decrease in size isn't all that interesting, especially at the cost of making fetching blocks for bloom-filtered lite nodes much more CPU intensive, as that's already a DoS vector.


[Edit: dooglus points out the very earliest blocks are actually fairly compressible presumably because they consist of nothing but coinbase transactions which have a huge wad of zeros in them.]
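If you want to see the "hashes don't compress" point for yourself, here's a quick test (zlib stands in for LZ4 purely as an illustration; pseudo-random bytes stand in for the hashes, keys, and signatures that make up most block data):

Code:
import os
import zlib

hash_like = os.urandom(1_000_000)                                     # incompressible, like hash/sig data
text_like = b"the quick brown fox jumps over the lazy dog " * 22_000  # repetitive, like ordinary documents

print(len(zlib.compress(hash_like)) / len(hash_like))   # ~1.0 -- no gain at all (slightly larger)
print(len(zlib.compress(text_like)) / len(text_like))   # well under 0.01 -- ordinary text compresses fine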

Quote
So after all the talk about your l33t porn codec skills, your solution to save space is to just prune the blocks? LOL. You might as well say "Just run a thin wallet".
Uh, sounds like you're misinformed on this too:  Pruning makes absolutely no change in the security, privacy, or behavior of your node other than that you no longer help new nodes do their initial sync/scanning. Outside of those narrow things a pruned node is completely indistinguishable.  And instead of only reducing the storage 10%, it reduces it 99%.

Quote
Why do you think compression experts around the world invented algorithms like Lz4? Why do you think it's part of ZFS? Because it is fast enough and it works, it is simple proven tech used by millions of low power NAS around the world for years.

Here, there are over 100 compression algorithms, all invented and benchmarked for you.
You'll easily find one that has a size/speed/mem profile that just happen to work great on bitcoin block files and is better than LZ4.
Lz4 is fine stuff, but it isn't the right tool for Bitcoin: almost all the data in Bitcoin is cryptographic hashes, which are entirely incompressible.  This is why a simple change to more efficient serialization can get over a 28% reduction while your LZ4 only gets 10%.  As far as other things go-- no we won't: block data is not like ordinary documents and traditional compressors don't do very much with it.

(And as an aside, every one of the items in your list is exceptionally slow. lol, for example I believe the top item in it takes about 12 hours to decompress its 15MB enwik8 file. Heh, way to show off your ninja recommendation skills.)

If you'd like to work on compression, I can point you to the compacted serialization spec that gets close to 30%... but if you think you're going to use one of the paq/ppm compressors ... well,  hope you've got a fast computer.
 
Quote
I would have made patches a long time ago if the whole project wasn't already rotten to the core.
Can you show us a non-trivial patch you made to any other project anywhere?
18  Bitcoin / Development & Technical Discussion / Re: Some 'technical commentary' about Core code esp. hardware utilisation on: July 06, 2017, 10:58:09 PM
Quote
And many people on the project quit because they didn't like working with you, what's your point?

Name one.

Quote
People have been laughing at your choices for years and here you are defending it because you wrote some codec to watch porn with higher fps some years ago.
Says the few days old account...


Quote
Inefficient data storage Oh please. Cargo cult bullshit at its worst.  Do you even know what leveldb is used for in Bitcoin?  What reason do you have to believe that $BUZZWORD_PACKAGE_DEJURE is any better for that?  Did it occur to you that perhaps people have already benchmarked other options?   Rocks has a lot of feature set which is completely irrelevant for our very narrow use of leveldb-- I see in your other posts that you're going on about superior compression in rocksdb: Guess what: we disable compression and rip it out of leveldb, because it HURTS PERFORMANCE for our use case.  It turns out that cryptographic hashes are not very compressible.

Everyone knows compression costs performance, it's for space efficiency, wtf are you even on about.

Most people's CPU is running idle most of the time, and SSD is still expensive.

So just use RocksDB, or just toss in an lz4 lib, add an option in the config and let people with a decent CPU enable compression and save 20+G.

Reading failure on your part. The blocks are not in a database. Doing so would be very bad for performance.  The chainstate is not meaningfully compressible beyond key sharing (and if it were, who would care, it's 2GBish). The chainstate is small and entirely about performance. In fact we just made it 10% larger or so in order to create a 25%-ish initial sync speedup.

If you care about how much space the blocks are using, turn on pruning and you'll save 140GB. LZ4 is a really inefficient way to compress blocks-- it mostly just exploits repeated pubkeys from address reuse Sad   The compact serialization we have does better (28% reduction) but it's not clear if it's worth the slowdown, especially since you can just prune and save a lot more.

Especially since if what you want is generic compression of block files you can simply use a filesystem that implements it...  and it will helpfully compress all your other data, logs, etc.

Quote
So what's your excuse for not making use of SSE/AVX/AVX2 and the Intel SHA extension? Aesthetics? Portability? Pfft.

There was an incomplete PR for that, it was something like a 5% performance difference for initial sync at the time; it would be somewhat more now due to other optimizations. Instead we spent more time eliminating redundant sha256 operations in the codebase, which got a lot more speedup than this final bit of optimization will. It's used in the fibre codebase without autodetection. Please feel free to finish up the autodetection for it.  It's a perfect project for a new contributor.  We also have a new AMD host so that x86_64 sha2 extensions can be tested on it.

19  Bitcoin / Development & Technical Discussion / Re: Some 'technical commentary' about Core code esp. hardware utilisation on: July 06, 2017, 11:28:18 AM
What you're seeing here is someone trying to pump his ego by crapping on the work of others and trying to show off to impress you with how uber technical he is-- not the first or the last one of those we'll see.

A quarter of the items in the list, like "Lack of inline assembly in critical loops", are both untrue and also show up in other abusive folks' lists as things Bitcoin Core is doing and is awful for doing, because it's antithetical to portability, reliability, or the poster's idea of code aesthetics (or because MSVC stopped supporting inline assembly, thus anyone who uses it is a "moron").

Here is the straight dope:  If the comments had merit and the author were qualified to apply them-- where is the patch?   Oh look at that, no patches.

Many of the people working on the project have long-term experience with low-level programming (for example, I spent many years building multimedia codecs; wladimir does things like video drivers and IIRC used to work in the semiconductor industry), and the codebase reflects many points of optimization with micro-architectural features in mind.  But _most_ of the codebase is not a hot path and _all_ of the codebase must be optimized for reliability and reviewability above pretty much all else.

Some of these pieces of advice are just a bit outdated as well-- it makes little sense to bake in an optimization that a compiler will reliably perform on its own at the expense of code clarity and maintainability; especially in the 99% of code that isn't hot or on a latency critical path. (Examples being loop invariant code motion and use of conditional moves instead of branching).
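As a small illustration of one of those: loop-invariant code motion is just hoisting a computation that doesn't change across iterations out of the loop (shown in Python only for readability; an optimizing C++ compiler typically does this on its own, which is exactly why hand-writing it rarely pays for the lost clarity outside hot paths):

Code:
# Straightforward version: the factor is recomputed on every iteration.
def scale_all(values, base, exponent):
    return [v * (base ** exponent) for v in values]

# Hand-hoisted version: the loop-invariant expression is computed once.
def scale_all_hoisted(values, base, exponent):
    factor = base ** exponent
    return [v * factor for v in values]

print(scale_all([1, 2, 3], 2, 10) == scale_all_hoisted([1, 2, 3], 2, 10))  # True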

Similarly, some are true for generic non-hot-path code: e.g. it's pretty challenging in idiomatic, safe C++ to avoid some amount of superfluous memory copying (especially prior to C++11, which we were only able to upgrade to in the last year due to laggards in the userbase), but in the critical path for validation there is virtually none (though there is an excess of small allocations, and help improving that would be very welcome).   Though, you're not likely to know that if you're just tossing around insults on the internet instead of starting up a profiler.

And of course, we're all quite busy keeping things running reliably and improving-- and pulling out the big tens-of-percent performance improvements that come from high-level algorithmic improvements.  Eking out the last percent in micro-optimizations isn't always something that we have the resources to do even where they make sense from a maintainability perspective.  But, instead we're off building the fastest ECC validation code that exists out there bar none; because that's simply more important.

Could there be more micro-optimizations? Absolutely.  So step on up and get your hands dirty, because there is 10x as much work needed as there are resources. There is almost no funding (unlike the millions poured into BU just to crank out crashware); and we can't have basically any failures-- at least not in the consensus critical parts.  Oh yeah, anonymous people will be abusive to you on the internet too.  It's great fun.

Quote
Inefficient data storage
Oh please. Cargo cult bullshit at its worst.  Do you even know what leveldb is used for in Bitcoin?  What reason do you have to believe that $BUZZWORD_PACKAGE_DEJURE is any better for that?  Did it occur to you that perhaps people have already benchmarked other options?   Rocks has a lot of feature set which is completely irrelevant for our very narrow use of leveldb-- I see in your other posts that you're going on about superior compression in rocksdb: Guess what: we disable compression and rip it out of leveldb, because it HURTS PERFORMANCE and actually makes the database larger-- for our use case.  It turns out that cryptographic hashes are not very compressible.  (And as CK pointed out, no, the blockchain isn't stored in it-- that would be pretty stupid.)

Pretty sad that you feel qualified to throw out that long list of insults without having much of an idea about the architecture of the software.

Quote
Since inception, Core was written by amateurs or semi-professionals, picked up by other amateurs or semi-professionals
The regular contributors who have written most of the code are the same people pretty much through the entire life of the project; and they're professionals with many years of experience.   Perhaps you'd care to share with us your lovely and impressive works?

Quote
run two to four times faster without even trying.
Which wouldn't even hold a candle to the multiple orders of magnitude speedup we've produced so far cumulatively through the life of the project-- exactly my point about micro-optimizations.  Of course, contributions are welcome.  But it's a heck of a lot easier to wave your arms and insult people who've produced hundred fold improvements, because you think a laundry list of magic moves is going to get another couple times (and they might-- but at what cost?)

If you'd like to help out it's open and ready-- though you'll be held to the same high standard of review and validation and not just given a pass because a micro-benchmark got 1% faster-- reliability is the first concern... but 2x-level improvements in latency or throughput critical paths would be very very welcome even if they were a bit painful to review.

If you're not interested or able-- well then maybe you're just another drunken sports fan throwing concessions from the stands, convinced that you could do so much better than the team, though you won't ever take to the field yourself. Tongue  It doesn't impress, quite the opposite: because you're effectively exploiting the fact that we don't self-promote much, and so you can get away with slinging some rubbish about how terrible we are just to try to make yourself look impressive.  It's a low blow against some very hard working people who owe nothing to you.

If you do a really outstanding job perhaps you'll be able to overcome the embarrassment of:

Quote
2) Say what you will about Craig, he's still a mathematician, the math checks out.

(Hint: Wright's output is almost all pure gibberish; though perhaps you were too busy having fuck screamed at you to notice little details like his code examples for quadratic signature hashing being code from a testing harness that has nothing to do with validation, his fix being a total no-op, his false claims that quadratic sighashing is an implementation issue, false claims about the altstack having anything to do with Turing completeness, false claims that segwit makes the system quadratically slower, false claims that Bitcoin Core removed opcodes, yadda yadda.)
I for one am not impressed. Show us some contributions if you want to show that you know something useful, not hot air.
20  Bitcoin / Bitcoin Discussion / Re: The Barry Silbert segwit2x agreement with >80% miner support. on: June 28, 2017, 10:10:23 PM
Love the professionalism of:
Quote
LOL

More professional than Barry Silbert and his unethical closed door agreement. ... and also pretty spot on,  segwit2x and the process used to create it isn't just bad, but absurdly so...  as highlighted by jtimon's recent post: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-June/014661.html