Bitcoin Forum
May 26, 2024, 07:11:57 AM *
 
  Show Posts
841  Bitcoin / Bitcoin Discussion / Re: ~80 MB transactions with 1 s/b fee were just injected into the mempool on: November 15, 2019, 12:18:43 PM
The transactions look like perfectly ordinary consolidation transactions. They're at a really low feerate-- they'll basically just be a supply of ready-to-include transactions whenever there is capacity, until they all get included.
842  Bitcoin / Development & Technical Discussion / Re: What happens to "unselected" transactions? on: November 14, 2019, 11:59:10 PM
It gets picked up in some subsequent block or it doesn't.  Unless/until one of its inputs is spent some other way, the transaction remains valid and can be included once it makes sense to include it.


I'm not sure how approximate you intended your example to be. The actual behaviour is that miners collect up all the transactions they receive (above some very low threshold minimum fee to prevent DoS attacks) and queue them. Then they try to construct blocks that make the most fee they can, leaving out whatever doesn't fit-- which gets included later when there are fewer higher-paying things available.

E.g. This graph from a couple years ago shows the fees available for a block immediately before a new block was found (red), and the fees available in a block created instantly after a block was found (green).   So all those fees in green were already available but wouldn't fit.



As the subsidy goes away having a backlog is important to keeping hashrate up ... otherwise miners would have to turn off and stop mining as soon as a block is found and wait for there to be enough transactions to keep mining.
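The collect-queue-and-pack behaviour described above can be sketched as a toy in Python. This is only an illustration under my own assumptions (the function name and tuple layout are made up), and real miners like Bitcoin Core select by *ancestor-package* feerate so child-pays-for-parent works; this sketch ignores transaction dependencies entirely.

```python
def build_block_template(mempool, max_weight=4_000_000, min_feerate=1.0):
    """Toy block assembly: sort queued txs by feerate and greedily pack.

    mempool: list of (txid, fee_in_sats, weight) tuples.
    Real nodes select by *ancestor* feerate packages; this toy ignores
    tx dependencies and just takes the best-paying txs that fit.
    """
    # Drop anything under the anti-DoS minimum feerate.
    candidates = [t for t in mempool if t[1] / t[2] >= min_feerate]
    # Highest fee-per-weight first.
    candidates.sort(key=lambda t: t[1] / t[2], reverse=True)
    block, used = [], 0
    for txid, fee, weight in candidates:
        if used + weight <= max_weight:
            block.append(txid)
            used += weight
    # Everything not selected simply waits for a later, emptier block.
    return block
```

In this model the low-feerate consolidations from the earlier post just sit in `mempool` indefinitely, getting picked up whenever a block has weight to spare.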

843  Bitcoin / Bitcoin Discussion / Re: Who is the single most influential person in Bitcoin? on: November 11, 2019, 10:40:31 AM
In my opinion, Gavin Andresen, he's one of the most prominent and vocal members of the Bitcoin community,
Weird post, I would have sworn it was copied from somewhere else long ago, but I couldn't find it.

Gavin has been absent from Bitcoin for many years, almost completely absent since he backed an obvious scammer and claimed he was Satoshi.

As far as the repositories and the alert key go, many people were given access-- just most had the good sense not to brag about it where it might get them targeted. I even sent some of the alerts when the alert key was still in use, and also signed the final alert.

Here you go-- this is the alert private key:

30820113020101042053cdc1e0cfac07f7e1c312768886f4635f6bceebec0887f63a9d37a26a92e6b6a081a53081a2020101302c06072a8648ce3d0101022100fffffffffffffffffffffffffffffffffffffffffffffffffffffffefffffc2f300604010004010704410479be667ef9dcbbac55a06295ce870b07029bfcdb2dce28d959f2815b16f81798483ada7726a3c4655da4fbfc0e1108a8fd17b448a68554199c47d08ffb10d4b8022100fffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141020101a14403420004fc9702847840aaf195de8442ebecedf5b095cdbb9bc716bda9110971b28a49e0ead8564ff0db22209e0374782c093bb899692d524e9d6a6956e7c5ecbcd68284

Now you are the most influential person in Bitcoin.  Good luck.
844  Bitcoin / Development & Technical Discussion / Re: Does core have any SHA256 SIMD parallelization code for "ONE" message? on: November 10, 2019, 10:43:13 PM
I'm just going to ask it, since I can't easily find the list, maybe it's just in front of me and I can't see it. Anyone know how to get the list from Intel which chips have SHA? (and from AMD too.)

https://en.wikipedia.org/wiki/Intel_SHA_extensions

Intel Goldmont chips (server-market Atom) and Ice Lake.  (I haven't used it on Ice Lake, but it's finally reported there.) Intel has been pre-announcing it on arches going back to Skylake and then failing to deliver.

Anything AMD Zen, Zen+, or Zen 2 (so all the Threadripper and Epyc parts), which is what all of Bitcoin's development using SHA-NI has been done on.

Instruction latency of SHA-NI is such that you're still better off interleaving independent processing of several messages... but even without that it's much faster than anything else, except maybe a super-wide many-message AVX512 version.
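If you just want to check whether a given machine has the extensions, one low-tech approach on Linux (a sketch of mine, not anything from Core) is to look for the `sha_ni` flag the kernel reports in /proc/cpuinfo:

```python
def has_sha_extensions(cpuinfo_text):
    """Return True if any 'flags' line in /proc/cpuinfo text lists sha_ni.

    On Linux the kernel exposes the x86 SHA extensions (SHA-NI) as the
    'sha_ni' CPU flag. This just string-matches the text; it doesn't
    execute CPUID itself.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if "sha_ni" in line.split(":", 1)[1].split():
                return True
    return False

# Typical use (Linux only):
# with open("/proc/cpuinfo") as f:
#     print(has_sha_extensions(f.read()))
```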
845  Bitcoin / Development & Technical Discussion / Re: Does core have any SHA256 SIMD parallelization code for "ONE" message? on: November 09, 2019, 02:30:44 PM
I am currently exploring parallelization of SHA256 algorithm using SIMD based on a paper I've found which is basically parallelization of the "message scheduling" step that according to the authors takes up 26% of the computation time.

If I understand bitcoin core's code (e.g. AVX2), it seems like it doesn't support computing SHA256 of one large piece of data using SIMD (e.g. SHA256 of a single 512+ byte message), but only has code for computing SHA256 of multiple messages in parallel (i.e. SHA256 of m1, m2, ..., m8), returning multiple hashes (i.e. h1, h2, ..., h8).

If I am reading the code wrong, please explain how it does that.
And if I am right, then is there any reason why they didn't add this feature? It seems to be useful for computing the message digest of a big transaction, especially the legacy ones, which could easily be bigger than 512 bytes.

P.S. If you have any scientific paper about this topic that is newer than 2012 please let me know.

You're looking in the wrong place.

https://github.com/bitcoin/bitcoin/commit/c1ccb15b0e847eb95623f9d25dc522aa02dbdbe8#diff-58b88805302ed488ea34900368aab920

Most of the hashing in bitcoin is of small messages (e.g. 64 bytes), and the N-message parallelization is much faster when it's available.

But for big messages there is SIMD too, it's just in different files.

State of the art is... get a CPU that doesn't suck. Smiley SHA-NI is much faster than any of these SIMD techniques, especially in the one-message case.
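To make the "small messages" point concrete: a merkle inner node in Bitcoin hashes exactly 64 bytes (two concatenated 32-byte child hashes) through double SHA-256, and a block header is 80 bytes. A minimal sketch using Python's hashlib (the helper name is mine):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256: SHA256(SHA256(data))."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# A merkle inner node hashes exactly 64 bytes:
# two concatenated 32-byte child hashes.
left = bytes(32)
right = bytes(32)
parent = sha256d(left + right)
assert len(parent) == 32
```

It's these fixed-size 64-byte compressions, done in bulk during merkle tree construction and validation, that the N-way parallel SIMD code targets.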
846  Bitcoin / Bitcoin Discussion / Re: Another day, another Faketoshi on: November 04, 2019, 05:11:01 AM
People will applaud and support virtually anyone on a stage,  no matter how awful or absurd they are...

e.g. https://www.youtube.com/watch?v=3lxlLEb-_WM (Political activist/comedians impersonating Dow in front of the WTO (iirc); talking about how to turn disasters into profit with absurd visual aids like a gold plated human skeleton).
 
847  Bitcoin / Development & Technical Discussion / Re: Segmentation? on: November 01, 2019, 02:53:52 AM
The main problem with pruning is that you can't import a pre-funded address anymore.
Check out "importprunedfunds".
848  Bitcoin / Development & Technical Discussion / Re: jeeq: ECDSA encryption on: October 30, 2019, 07:58:44 AM
Just leave here my implementation of Elliptic-Curve-Cryptography (ECC):
The above poster's archive implements a similar, totally cryptographically busted technique to the one originally discussed in this thread.

No one should ever use it, unless like.. you're trying to trick your enemies into using something insecure. Smiley

[See https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2014-March/004720.html and following posts.]
849  Bitcoin / Bitcoin Discussion / Re: He's thankful he found "Bitcoin" on: October 26, 2019, 09:23:33 AM
There is an awful lot of craig wright footage out there,

It really needs this treatment: https://www.youtube.com/watch?v=e5nyQmaq4k4

The ignorant people he's hitting now aren't going to be convinced by reason...
850  Bitcoin / Bitcoin Discussion / Re: UPDATE CW at conference in London WTF? SCANDAL!!! What really happened? on: October 26, 2019, 09:20:40 AM
Indeed, the more chances and venues we are giving to this man, the more he will be making small and big mistakes that can show more that he is just another fake and not the real thing.
I don't think that's true. Everyone who was going to be convinced of the truth based on evidence is already convinced.
I am not so sure about this and see a slightly different picture. In my opinion, the fan base of CW keeps growing with every new event or announcement.
I don't think your observation conflicts with my understanding.

There are something like 4 billion people who use the internet.  If 1% of them are vulnerable to being conned by wright and immune to sanity then he has a total addressable market of 40 million potential victims--- he's a long way from that now.

But also be careful with your growth estimates, paid shills and fans can look pretty similar.
851  Bitcoin / Bitcoin Discussion / Re: UPDATE CW at conference in London WTF? SCANDAL!!! What really happened? on: October 23, 2019, 10:53:55 PM
Indeed, the more chances and venues we are giving to this man, the more he will be making small and big mistakes that can show more that he is just another fake and not the real thing.
I don't think that's true. Everyone who was going to be convinced of the truth based on evidence is already convinced.

Did you know that email advance-fee fraud scammers whose messages sound like obvious "Nigerian" scams are believed to be more successful than scammers who use a more plausible and less well-known story?

Most people won't fall for an email scam: you'll make it a dozen messages in with them, then they'll realize something is wrong and abort before they pay you, and you'll have wasted a lot of time.  If, instead, you spend your time focusing only on the prospective victims who are ignorant enough to have never heard of a Nigerian-scam email and foolish enough to fall for one-- you'll manage to scam more people.  A scammer doesn't want to maximize the number of people they could scam in infinite time, they want to maximize the rate of successful new victims, and every person they talk to burns up their time.  They want to maximize the number of people who fall for it HARD and hand over a lot of money; a person who is saying "hm, maybe.... he might be satoshi, just maybe" is of fairly little use to scammers like wright, except as a useful idiot to increase his credibility with others.

Wright actively exploits the fact that people with both integrity AND competence (like, say, Bitcoin developers) all know that he is an obvious scammer.  He's never going to win that audience over, and so he acts in ways to make himself more obvious and more offensive to that audience.  Then, when all of those people are saying "this dude is obviously a scammer", he's able to exploit that while pandering to ignorant people: "see how sure they are? obviously they're biased and covering up the truth!".

This works especially well because the cryptocurrency space has attracted  many people with a reflexive distrust of "authority".  That audience is a prime scam target because you can just manipulate them to see whatever is in their own interest as 'authority'. They have problems cooperating to identify and protect themselves from scammers because anyone who starts successfully getting the message out is 'authority', etc.
852  Bitcoin / Bitcoin Technical Support / Re: [~1 BTC Bounty] on: October 23, 2019, 09:00:35 AM
It said that it's not against the consensus rules, but the transaction was indeed non-standard, which is why nodes are rejecting it.

that is really messed up! every single piece of documentation that i have ever seen has always said "it must be compressed or it will not be mined". now i went back and checked them out, they are all ambiguous about it!
take BIP143 for example:
https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki#restrictions-on-public-key-type
Quote
As a default policy, only compressed public keys are accepted in P2WPKH and P2WSH. Each public key passed to a sigop inside version 0 witness program must be a compressed key: the first byte MUST be either 0x02 or 0x03, and the size MUST be 33 bytes. Transactions that break this rule will not be relayed or mined by default.

from bitcoincore.com: https://bitcoincore.org/en/segwit_wallet_dev/#creation-of-p2sh-p2wpkh-address
Quote
P2SH-P2WPKH uses the same public key format as P2PKH, with a very important exception: the public key used in P2SH-P2WPKH MUST be compressed, i.e. 33 bytes in size, and starting with a 0x02 or 0x03. Using any other format such as uncompressed public key may lead to irrevocable fund loss.

the orange parts are the ambiguity! why did they do this?!

It's not at all ambiguous, it states it exactly how it is.

It was the wish with segwit to prohibit the use of uncompressed keys. But there was a concern that problems like the OP's would arise from incompetent, buggy software-- potentially involving really large fund losses.  In an abundance of caution the rule was initially made standardness-only.  It has turned out to be less of an issue than had been feared (the OP's is the only sizable case I've heard of, at least).
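The rule quoted from BIP143 is mechanical enough to show in a few lines of Python (a sketch; the helper name is made up):

```python
def is_compressed_pubkey(pubkey: bytes) -> bool:
    """The standardness rule quoted from BIP143: a public key passed to a
    sigop inside a v0 witness program must be exactly 33 bytes and start
    with 0x02 or 0x03 to be relayed/mined by default."""
    return len(pubkey) == 33 and pubkey[0] in (0x02, 0x03)

# An uncompressed key is 65 bytes starting with 0x04: still a valid curve
# point, so spends with it aren't consensus-invalid, just non-standard.
```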

853  Other / Politics & Society / Re: 2020 Democrats on: October 23, 2019, 07:38:57 AM
I'm surprised to see some people here named Kamala Harris as a preference to win.

Harris is a bad egg-- with a history full of overly aggressive prosecution of victimless crimes. Her naked ambition drove her to engage in the most absurd prosecutions just to make a name for herself.  For example, the prosecution of backpage is obviously outrageous even if you view the entire thing through the lens of law enforcement's own claims of what happened.

In her role as prosecutor she abused the state's power to the maximum extent possible and treated the rule of law as just a PR game.  And she was effective at it.  There are other candidates who might aspire to such abuses, but for the most part they haven't demonstrated the competence to pull them off.

With Google manipulating the public in her favour her odds of winning might well be pretty good. But I think she has enough of an unsavoury history that enough people on the left will not be hard to convince to stay home and let trump take re-election. In spite of the sound and fury in the media, many Americans feel that they are better off in recent years, at least financially, than they were during most of obama's second term-- and it isn't hard to make a case for better the devil you know.


As far as Biden's odds go-- I think there is a lot to be said about the strategic value of him vs trump. But he is impressively old-- several years older than Trump, who would himself be the oldest person elected president if he is re-elected.  When McCain/Palin ran there were many people who voted against them who would otherwise have supported McCain, because of the considerable odds of Palin becoming president.   I think all the candidates who are older than trump (Biden, Sanders, who else?) will have their presidential odds heavily influenced by who they choose as a running mate.

Interestingly, I believe my views are opposite Theymos' relative to how the odds change with an economic downturn.

I believe that if there is no downturn (or esp. with an upswing) then the only candidates that have a chance are ones like Biden who offend few, seem non-threatening to most interests, and will pick up the ANYONE BUT TRUMP voting block. If you're happy with how things are, you can trust that someone like Biden is not going to upset the apple cart too much-- maybe even less than trump, as many people still do worry that trump will accidentally escalate a twitter fight into a shooting war (though since the media hasn't been tirelessly over-hyping that risk for the past two years, I think people are less wary of it than of the random trump scandal du jour).

If there is a major economic event, however, the status quo will not be what people want, and candidates like Yang or Sanders (and sadly, Warren) would pick up the greatest boost-- much more than a more boring player.

Maybe my position is different from Theymos because I've seen literally none of the recent campaigning or debates? Is that the source of the discrepancy?

This sort of analysis should keep in mind that normally an ordinary recession takes a rather long time to really upset the public... as time goes on, the downturn probabilities will start to require a catastrophic event in the economy and not just an ordinary recession.
854  Bitcoin / Bitcoin Discussion / Re: Craig Wright at conference in London UPDATE!!! SCANDAL and what really happened? on: October 23, 2019, 07:07:28 AM
Some of the devs claimed some of those would not work with older code at all.

I see your quotes:'Sort of a mixed bag there, you can actually take a pre BIP-50 node and fully sync the blockchain, I last did this with 0.3.24 a few months ago. It just will not reliably handle reorgs involving large blocks unless you change the BDB config too. So it’s debatable if this is a hard fork either, since it’s quasi-non-deterministic. There were prior bugs fixed where older versions would get stuck and stop syncing the chain before that too… So I think by a really strong definition of creating a blockchain which violates the rules mandated by prior versions we have never had a hardfork.' There is one for example.

Are you really trying?

I'm having a difficult time figuring out what you are attempting to ask.

The text you are quoting has nothing to do with address types, it is about the old versions having problems "involving large blocks" (see the text you quoted).  In my post two messages up, I state "If you take the very first release of Bitcoin by Satoshi and fix the BDB database problem with blocks >500KB then" -- referring to the same thing.

Versions prior to Bitcoin 0.8 would get stuck when blocks were over about 500KB, in a manner which is random and differs between nodes (even ones running the same software).  If you fix that bug, then they process everything else fine.
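For reference, the "fix the BDB database problem" mentioned above is just a BDB configuration change, not a code change: BIP-50 (the March 2013 chain fork post-mortem) describes raising the lock limit by placing a DB_CONFIG file in the node's database directory, along these lines:

```
set_lk_max_locks 537000
```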
855  Bitcoin / Bitcoin Discussion / Re: Craig Wright at conference in London UPDATE!!! SCANDAL and what really happened? on: October 23, 2019, 05:35:29 AM
You certain it would accept all the new address formats???
Address format things are just UI, they're invisible to the blockchain itself.  It's like asking if a photocopier would accept italic type or something. Smiley
856  Bitcoin / Bitcoin Discussion / Re: Craig Wright at conference in London UPDATE!!! SCANDAL and what really happened? on: October 23, 2019, 05:01:26 AM
1. btc 2018 is not the same rules as bitcoin 2016.. neither is bitcoin cash. hens 2 different directions from the 2016 version. hense why gmax called it a bilateral split. again for emphasis. gmax named it such. not me. again lets get it right. the word bilateral split begun by gmax's utility of the word. it is not a word i invented or started using. the reason i laugh so much is that you want to deny it occuring yet it was the devs that caused it, named it, mandated it.. not me. i simply informed people of the devs actions. if you have an issue of the use of bilateral splits then take that up with those you follow.

Franky1 is a consistent shill and over-the-top liar who abusively exploits many people's lack of experience with technical matters to make claims which are flatly untrue on their face.

If you take the very first release of Bitcoin by Satoshi and fix the BDB database problem with blocks >500KB, then it will (very slowly) sync and accept the current Bitcoin blockchain. It will reject all those fraudulent fake bitcoin chains, such as "bitcoin cash".  It's been a couple of years since I conducted the experiment, but I'm aware of no reason why it would be different now-- it's possible that there were other bugs in the original code which have since been triggered. It's so slow, however, that it's really a pain to test.

If you take Bitcoin 0.8.0, released in Feb 2013, as is with no fixes at all, it will (very slowly) sync and accept the whole current Bitcoin chain. There are extant 0.8.x nodes running that are happily in sync with the current network, so there isn't any ambiguity there.

The word "bilateral" there refers to _hardforks_ that won't just allow themselves to be reorged out if they're a loser in terms of hashrate.  Many of the early insane blocksize-cranking hardforks didn't have that property: if the original Bitcoin had, or subsequently ever achieved, more hashes then the hardfork would simply be erased, potentially unconfirming days/weeks/months of transactions.  It comes from this post.  The word "bilateral" there means two directions: the chain forks off in a different direction *AND* cannot go back.  Bitcoin rejects Bcash blocks (because, among other reasons, the first bcash block warped down the difficulty for their forktime 'instamine'), which is what makes it a hardfork, and Bcash rejects Bitcoin blocks (because they didn't contain the instamine), which makes the hardfork bilateral.

Franky1 somehow takes that and fraudulently claims that Bitcoin was somehow changed to accomplish this, but it wasn't: Bitcoin was written from day one to reject bcash blocks. Suddenly changing the block difficulty is a violation of Bitcoin's consensus rules and always has been.

Anyone who tells you Bitcoin's consensus rules were changed in incompatible ways is misleading you, if they claim they were changed in incompatible ways since 2013 they are trying to tell you an over the top absurd lie... probably with the intention of defrauding you into buying some shitty altcoin they are shilling.

Fixing a bug that made blocks get randomly rejected in a way that prevents consensus even between copies of the same software could be argued to be a consensus change by the overly pedantic (like Luke-Jr)-- but even if you accept that pedantic definition there has still been no incompatible consensus change made since 2013... and changing the system to not spontaneously burst into flames is not remotely similar to the kinds of changes altcoins like bcash have made (e.g. significantly abandoning POW consensus, handing a massive windfall to early miners, etc.).

Franky1's consistent lying is part of why he's banned from the technical subforum.  I don't know why the rest of the community on BCT tolerates him shitting all over so many discussions.
857  Bitcoin / Development & Technical Discussion / Re: Compatibility of Legacy w Native Segwit w Nested Segwit on: October 21, 2019, 08:53:52 AM
Quote
https://en.bitcoin.it/wiki/Bech32_adoption#Web_Wallets
Bitcoin wiki says they can send to bc1 addresses.

But not receive? That is weird. I thought it would be the other way.

Sending to bc1 addresses is pretty simple to implement.  Receiving at a BC1 address means you need to implement most of segwit.

If you were thinking 'receiving from', then of course they can: there is no "from address" in a bitcoin transaction. And if any service ever fails to accept a confirmed payment from you because of the form of scriptpubkey you got paid at... that is a serious red flag of epic technical incompetence on their part. As far as I know, bc.i has no such issue.
858  Bitcoin / Development & Technical Discussion / Re: Compatibility of Legacy w Native Segwit w Nested Segwit on: October 19, 2019, 06:04:58 PM
Blockchain.info, no Segwit, but started accepting Ethereum, Bitcoin Cash, Ripple, and Stellar. Really clear what side its people are.
https://en.bitcoin.it/wiki/Bech32_adoption#Web_Wallets

Bitcoin wiki says they can send to bc1 addresses.
859  Bitcoin / Development & Technical Discussion / Note: franky1 is banned from this subforum on: October 15, 2019, 12:07:51 AM
After years of continual harassment and misinformation from franky1 and repeated efforts to reach out and encourage polite and honest behaviour from him that have resulted in no improvements, franky1 is banned from the technical subforum.  The failure to remove people who are continually abusive and multipost repeated misinformation has done a lot of damage to the willingness of anyone technically competent to use this forum.

Any posts by franky1 which are made in or moved into this subforum will be summarily deleted.

Cheers,
860  Bitcoin / Development & Technical Discussion / Re: [meta] Rust in Bitcoin reference implementation on: October 13, 2019, 06:39:28 PM
I feel like the decision at blockstream to use rust for some things was, on the balance, an error or maybe it only broke even. (And I say this as being the person who was most personally responsible for it.) On one hand it did appear to allow building things that were not RCE vectors faster with fewer people and with less review, on the other hand it apparently resulted in there being significantly less review and development resources put into these efforts.  Lots of time was lost due to managing toolchain insanity (which has gotten better in some respects, but worse in others).  Lots of soft-testing got missed simply because toolchain friction meant that people weren't setup to build rust programs so they just didn't try them out unless their job required them to do so, when they otherwise would have.

The elements sidechain lost its one external signer because they couldn't be bothered to set up rust, and we had repeated problems with employee-operated signers whose upgrades were tied to rust.  Now, it's certainly possible that engineers at blockstream were particularly bad at handling this... and certain specific actions could have been taken to correct things-- e.g. mandatory two weeks of rust training for every engineer in the company, even if they weren't expected to use rust in their job. But they were all drawn from the bitcoin community, so I don't think it speaks that well for how the bitcoin community would handle it, particularly because some of the corrective actions blockstream could have taken but didn't, like mandating exposure, aren't available for open development.

Even as I write that-- Matt has been asking me for the last week to try to receive his bitcoin header LoRa radio broadcast-- but the tool he wrote for it is written in rust, and the friction of installing a rust toolchain on the laptop that I can easily carry over to my antenna has kept me from trying it out, although I have the hardware and am interested.

Part of what that thread is proposing includes working around some of the worst insanity in the Rust ecosystem, e.g. the rust package manager, cargo (and the package ecosystem), is very much built around invisibly downloading and running huge graphs of dubious mystery-meat code, just like the situation in javascript and ruby, which has of late resulted in many security problems.  The proposal there is to just not use cargo, which strikes me as a pretty much necessary idea.  The downside, of course, is more local effort needed to 'swim against the stream' because the rest of the rust universe uses cargo pretty heavily.

I think there is a strong argument that the inherent safety of rust lets you spend more time on avoiding logic errors instead of making sure your program doesn't crash, but since it also reduces the population of developers and testers it can still be a loss overall.  The kind of bugs we've experienced in Bitcoin are not the kind that rust structurally prevents, but instead are the kind that are still possible in rust.  In fact, I can't recall a single instance of a bug shipped in Bitcoin core that would have been structurally prevented in rust. The closest I can think of is the openssl heartbleed bug which was in a third-party library, written in another language, which would have still existed if Bitcoin used rust and just called that library.  I wouldn't be too surprised if I were mistaken if there were one or two I'm forgetting ... but in any case, it would be an extreme minority.   There is still an argument that there is time being spent avoiding those bugs that could be redirected and development could occur faster when people didn't need to worry about that class of bug, but those benefits still have to overcome the baseline reduction in development.

Using it for things like auxiliary tools-- things which are essentially freestanding, developed independently, and likely would not share a lot of collective review to begin with-- might be a win on the balance. There have long been parts of the software that are effectively isolated from some populations of developers-- e.g. a number of people who work on Bitcoin hardly review anything in the QT GUI, so components can effectively have their own developers. From the thread it sounds like Wladimir favours this sort of direction.

Unfortunately, there is a bit of a chicken-and-egg where primary usage depends on a level of ambient competence which is hard to get without primary usage. It may be that auxiliary usage helps build that.

The other point of contention is whether rust will actually reduce the number of major bugs in Core. C++ already does things that let us not have to worry about some memory things, so it isn't as bad as C, where it is very easy to forget to free a pointer. But we still can and do get segfaults due to null pointer dereferences, so rust would certainly help there. But if you look at a lot of the other bugs that have been in Core, most of them have been logic errors. Rust would not help with those, and it could potentially make them worse as fewer people know rust.

At the end of the day, I'm personally +0 on rust. I mostly don't care, but would not be opposed to having rust in Core. It would be nice to have better compile time memory protection, but I don't think that's a super big issue that really needs to be fixed.
Pretty much my view, leaning to -0 on mandatory consensus parts.  It would be much more negative if I didn't like rust so much conceptually, and more positive if more of the developers whose contributions I consider critical were rust experts.

So, why not alter the C++ and Rust implementations to allow them to share a block database? Either one could fall over, and we would hope that the
other wouldn't fail in the same way (or for a different reason at the same time Grin). Isn't that a more sensible way to approach this?

I don't believe a second, compatible implementation of Bitcoin will ever be a good idea.  So much of the design depends on all nodes getting exactly identical results in lockstep that a second implementation would be a menace to the network.

Quote
it reminds me slightly of the memory managed concept in general; the people that promoted that stuff very quietly concede (or haughtily change the subject...) that it's not the magic it was sold as. The reality with Java and C# was that you actually did eventually need to understand the computer science behind memory allocation/deallocation, as the byte code compiler would make mistakes, or the "garbage collection" module in the runtime would destroy variables before they've even been used etc. And so the unhelpful the response was "hey guys, but Java can still do pointers though!", which naturally gave cause to the more sensible people to wonder why they were going through all the trouble of using Java to begin with.

I don't know the details of how Rust handles memory allocation, and clearly there are accomplished developers (who seemingly know Rust well) who find the overall proposition convincing.

Rust is essentially the opposite of the managed memory in Java and C#.   In those languages the idea is that the programmer is expected to ignore (and perhaps have no understanding of) memory management.  It turns out that memory management is critical to the computer's operation and ignoring (and especially not understanding) it is toxic to writing software with finite resource usage or decent runtime latency and performance.

Instead, Rust treats memory essentially just like C++ does but then enforces with compile time code analysis, runtime boundary checking, and stylistic norms that you've actually handled the memory safely. You're also forced to write code that this analysis works successfully on, even when something else might be okay but it confuses the analysis.  So instead of ignoring memory, you're required to pay the same attention to it you do in C++ and the correctness of your handling is enforced by the compiler.

Rust also makes a number of other morally similar decisions that make it very different from Java. For example, in Java exception handling is a common source of bugs because every function you call has this unknowable implicit return type... which maybe you could handle, but usually don't handle because you never even knew it was there. Many C++ codebases largely eschew exceptions, or only use them in very specific confined ways, for this reason, but it's hard to do so because the various libraries expect you to use them.  Rust eschews exceptions entirely except for the special case of panic, which kills a whole thread.
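The point about the "implicit return type" being made explicit can be sketched with Rust's Result type (hypothetical example; `parse_port` is just an illustrative name): the failure case is part of the signature, so a caller cannot be unaware of it the way they can with an unchecked exception.

```rust
use std::num::ParseIntError;

// Hypothetical example: the possible failure is part of the function's
// signature, so callers are forced to acknowledge it.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    // The type checker forces the error arm to be written out; there is
    // no invisible exception path that can propagate unnoticed.
    match parse_port("8333") {
        Ok(port) => assert_eq!(port, 8333),
        Err(e) => panic!("unexpected parse failure: {}", e),
    }
    assert!(parse_port("not a port").is_err());
    assert!(parse_port("70000").is_err()); // out of range for u16
}
```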

You could even imagine getting the same benefits of rust in C++ with some sufficiently smart static analysis tools plus a set of replacement types that add the boundary checking.  But the problem is that you'd have to constrain yourself to the subset of the language that the static analysis supports, only use the boundary checked types, etc.  You'd effectively be using another language.  Rust is morally pretty close to what that language would be, though the syntax has stylistic differences that probably result in more learning curve than an imaginary safe-C++-subset would have.

It's perhaps worth mentioning that some elements of Rust's safety-- e.g. runtime bounds checking-- still result in effective program crashes ('panics'; technically your program could handle one by restarting the thread, if it makes sense to do so), but it guarantees that the program actually crashes rather than running on in some corrupted zombie state, potentially executing hostile code. Obviously crashing is also no good, but this is less of a problem than it might sound, because idiomatic Rust code uses things like iterators and encoding error states with sum types (where the type checker forces you to handle all the cases), which make it so you are seldom doing anything that could produce a panic.

The thing is, the same is true in modern C++, which is why in Bitcoin we've seen relatively few memory safety errors. In Rust these norms are a lot stronger and the compiler helps assure that you don't screw it up.  E.g. iterators prevent the most common causes of out-of-bounds accesses, but in C++ you have the problem of iterator invalidation when you write complicated code that mutates via iterators.  In Rust the compiler just guarantees you don't do that, including sometimes not letting you write perfectly valid code simply because it cannot prove it safe.  Fortunately the cases where Rust won't let you do a reasonable thing are rare, so the cost of having to work around the language isn't very great on average.
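Both halves of that paragraph can be illustrated in a few lines (hypothetical example; the values are arbitrary): indexing can panic but iterator code cannot go out of bounds, and the C++ iterator-invalidation bug is a compile error rather than undefined behaviour.

```rust
fn main() {
    let fees: Vec<u64> = vec![10, 25, 7];

    // Indexing out of range panics: a guaranteed, clean crash rather
    // than silent memory corruption:
    //   let x = fees[3]; // thread panics: index out of bounds

    // Idiomatic iterator code can't go out of bounds in the first place.
    let total: u64 = fees.iter().sum();
    assert_eq!(total, 42);

    // Mutating a collection while iterating over it, which invalidates
    // iterators and is undefined behaviour in C++, simply doesn't
    // compile in Rust:
    //   let mut fees = fees;
    //   for f in fees.iter() {
    //       fees.push(*f); // error[E0502]: cannot borrow `fees` as
    //   }                  // mutable while it is iterated immutably
}
```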

Unfortunately, the art of computer language design and programming best practice is still essentially pre-scientific.  Everywhere in programming there are taboos and rituals that people recommend for making better software.  There has been very little rigorous study characterizing the benefits of different approaches, so much of what people do when they try to make better software is pretty much superstition, and the advocacy of it essentially religion-- which does a lot to explain the fervor that goes into it.  That said, even primitive man knew that water was wet.  It's pretty much unthinkable to me that the future would conclude that the areas Rust improves weren't worth improving... although I do suspect that much of the Rust advocacy overplays the benefits.

It's conceivable to me that the effort competent developers spend avoiding and dealing with memory safety in C++ is, on a long-term average, the same as the amount of effort consumed by satisfying the Rust type system and borrow checker and working around cases Rust won't allow. If so, the difference in a hypothetical world that invented and used Rust instead of C++ would largely be reducing memory safety issues from rare to exceptionally rare (not zero, due to unsafe blocks and compiler errors).  This would be a worthy difference, but we're not in that world, and it's less clear how to value that difference against the transition costs. Rust advocates would have you believe that it's significantly less effort and that they can be more productive as a result, and that might be true too, but I haven't seen much evidence of it, and as Carlton notes-- that sort of claim is common, made by Java, C#, Go... Lisp... Haskell.  And experience hasn't really supported those sorts of claims.  There clearly are things some languages do better than others, but it seems that no efforts so far have really lived up to their advocates' claims of revolutionizing software development-- at least not to a level where they aren't dwarfed by the differences in productivity among individual developers. I think I'd rather take Pieter writing in Perl or assembly than I would take the vast majority of Rust developers in Rust. Smiley