281  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: April 04, 2016, 05:16:22 PM
OK, so somebody posted and then quickly deleted a follow-up to my messages above. I only took a glance before I was interrupted, but the main take-away was that I should indeed clarify what I meant.

So let's roll back to Satoshi's original transaction design.

There are basically 3 main goals that the transaction format has to fulfill:

1) reference the source and destination of funds, as well as the amounts
2) cryptographically sign the source references to prove that one has control over them
3) detect (and possibly correct) errors in the transmitted transaction: both intentional (tampering) and unintentional (channel errors)

Satoshi's original design used a single SHA256 hash to cover all three goals. It was a neat idea: kill 3 birds with one stone. But it turned out that only 2 birds got killed; the middle one was merely injured, and it has about two lives: low-S and high-S.
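
For the unfamiliar: for any valid ECDSA signature (r, s) over a group of order n, the twin (r, n - s) verifies equally well, so a third party can flip s and change the transaction hash. A toy illustration (the modulus below is a toy number; the real secp256k1 order is a 256-bit prime):

Code:
#include <cstdint>
#include <cstdio>

int main() {
    // Toy group order; the real secp256k1 order is a 256-bit prime.
    const uint64_t n = 101;
    const uint64_t s = 30;         // the "low-S" form of the signature
    const uint64_t s_twin = n - s; // the "high-S" twin verifies equally well
    // Two encodings of one signature => two different transaction hashes.
    printf("low-S = %llu, high-S = %llu\n",
           (unsigned long long)s, (unsigned long long)s_twin);
    return 0;
}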

So then we started trying to address those 3 main goals using separate fields in the new transaction format. I'm not really prepared to discuss all the possibilities.

Let's just discuss a possible encoding for a single UTxO reference. The current design is an ordered pair (256-bit transaction id, short integer index of the output within that transaction). Let's also assume that for some reason it becomes extremely important to shorten that reference (e.g. transferring transactions over a QR code or some other ultra-low-power-and-bandwidth radio technology).

It may turn out that a better globally unique encoding is an ordered pair (short integer block number in the blockchain, short integer index into the preorder traversal of the Merkle tree of transactions and their outputs). It may be acceptable for this format to refer only to confirmed transactions.
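
To make the size trade-off concrete, here is a minimal sketch; the field widths are my own illustrative assumptions, not a proposal:

Code:
#include <cstdint>
#include <cstdio>

// Current reference: (txid, output index). Globally unique, self-contained.
struct OutPointCurrent {
    uint8_t  txid[32];  // 256-bit transaction id
    uint32_t vout;      // output index within that transaction
};                      // 36 bytes

// Hypothetical compact reference: (block number, preorder index into the
// block's Merkle tree of transactions and their outputs). Valid only for
// confirmed outputs; the widths are illustrative assumptions.
struct OutPointCompact {
    uint32_t height;    // block number in the blockchain
    uint32_t preorder;  // preorder-traversal index within that block
};                      // 8 bytes

int main() {
    printf("current: %zu bytes, compact: %zu bytes\n",
           sizeof(OutPointCurrent), sizeof(OutPointCompact));
    return 0;
}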

I'm not trying to advocate changing the current UTxO reference format. All I'm trying to convey is that there are various ways to achieve the required goals, with various trade-offs in their implementation.

Both Satoshi's original design and the current SegWit design suffer from a "just-in-time design" syndrome. The choices were made quickly, without properly discussing and comparing the alternatives. The presumed target environment is only modern high-power, high-speed, high-temperature 32-bit and 64-bit processors with high-bandwidth communication channels.

Around the turn of the century there was a cryptographic protocol called Secure Electronic Transaction: https://en.wikipedia.org/wiki/Secure_Electronic_Transaction . It was deservedly an unmitigated failure. But they did one thing right in their design: the original SET "Theory of Operations" document did a thorough analysis of the design variants:

1) exact bit counts of various representations and encodings
2) estimated clock counts of the operations on the then-current mainstream 32-bit CPUs
3) estimated clock counts of the operations on the then-current 8-bit micro CPUs like GSM SIM cards
4) estimated line and byte counts of the implementation source and object codes
5) range of achievable gains possible by implementing special-purpose hardware cryptographic instructions with various target gate counts.

Again, I'm definitely not advocating anything like SET and its dual signatures. I'm just suggesting spending more time on balancing the various trade-offs and the possible goals of the completed application.
282  Bitcoin / Development & Technical Discussion / Re: Segwit details? SEGWIT WASTES PRECIOUS BLOCKCHAIN SPACE PERMANENTLY on: April 04, 2016, 03:17:19 AM
* Malleability could be fixed by just skipping the signatures (the same data that SegWit would segregate) when computing the txid.  
That _is_ segregation of the signatures, up to a completely non-normative ordering of the transferred data. Segwit could just as well order the data into the same place in the serialized transactions when sending them, but it's cleaner not to do so.
The "cleaner" part is true only to subset of people: those that were actually considering the original Satoshi's design as "ideal" or "perfect".

I personally think of the original design, where the "transaction hash" is both a "transaction identifier" and a "transaction checksum", as a sort of "neat hack".

Edit:
What about fixing those "other problems" (I don't want to say "hard", because IMO they aren't "hard" by themselves) without the segregation? Impossible or just not worth it?
A strong malleability fix _requires_ segregation of signatures.

A less strong fix could be achieved without it if generality is abandoned (e.g. it only works for a subset of script types, rather than for all without question) and a new cryptographic signature system (something that provides unique signatures, not ECC signatures) was deployed.

And even giving up on fixing malleability for most smart contracts, it's very challenging to be absolutely sure that a specific instance is actually non-malleable. This can be seen in the history of BIP62, where at several points it was believed that it addressed all forms of malleability for the subset of transactions it attempted to fix, only to later discover that there were additional forms. If a design is inherently subject to malleability but you hope to fix it by disallowing all but one possible representation, there is a near-endless source of ways to get it wrong.

Segregation removes that problem. Segwit-using scripts achieve a strong base level of non-malleability without doubt or risk of getting it wrong, both in the design and by script authors. And only segregation applies to all scripts, not just a careful subset of "inherently non-malleable" rules.

Getting signatures out from under txids is the natural design to prevent problems from malleability, and engineers were lamenting that Bitcoin didn't work that way as far back as 2011/late-2012.
The requirement for segregation is really only for "logical" segregation, not "physical" segregation.

My opinion is that the main point of contention is this: most programmers agree that "logical" (or algebraic) segregation is OK. Only a much smaller subset of programmers agree that "physical" segregation (placing the data far away in the serialized bytestream, on the wire or on the disk) is the correct way to implement the algebraic segregation.

Edit2:

In addition to the above, there is the issue of the optimal lengths of the "transaction id" and the "witness id". Transaction identifiers have to be globally unique, whereas witness identifiers only have to be unique within the block they refer to. So the optimal length of the witness id could be much lower than 256 bits.
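
Back-of-the-envelope, with my own illustrative numbers: a block carrying at most 2^14 witnesses could reference any of them by a 14-bit position, versus 256 bits for a globally unique hash:

Code:
#include <cmath>
#include <cstdio>

int main() {
    // Assumption: at most 2^14 = 16384 witnesses per block.
    const unsigned max_witnesses_per_block = 1u << 14;
    const unsigned bits =
        (unsigned)std::ceil(std::log2((double)max_witnesses_per_block));
    printf("per-block witness index: %u bits (vs. a 256-bit global id)\n",
           bits);
    return 0;
}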

283  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: April 04, 2016, 02:55:50 AM
You started writing really weird conflated stuff. What do fees have to do with transaction syntax? ... The amount of fees doesn't change the syntax, so it doesn't require a change of the version.

Sorry, I don't understand your objections.  

There are no "meta-rules" that specify what the validity rules can be.  They are not limited to "syntax", whatever that means.   Any computable predicate on bit strings could in principle be a validity rule, as long as it does not completely break the system.

Right now there are no validity rules that refer to fees.  The minimum fee, like the Pirate Code, "is more what you'd call a 'guideline' than an actual rule"; each miner decides whether to require it (or even to require more than it).  But the minimum could be made into a validity rule.  The difference would be that each miner would not only impose it on his own blocks, but also reject blocks solved by other miners that contain transactions paying less than that fee.

Quote
The version field should be used to clearly describe the syntax rules governing the transaction format.

As I wrote, this cannot be guaranteed.  If a fork (rule change) was executed to fix a bug or prevent an attack, the miners cannot continue to use the old rules for transactions that carry the old version tag; that would negate the purpose of the fork.  They must reject such transactions.

So, it is not safe to retain signed but unconfirmed transactions without broadcasting them.
I'm still unsure why we started talking about fees in this thread. Fees enter the consensus validity rules only via the check that they aren't negative: the fee has to be positive or zero. The actual value of the fee is only used when priority-sorting the already-verified transactions.
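
In other words, the only consensus check touching fees is that the inputs cover the outputs. A minimal sketch of that single rule:

Code:
#include <cstdint>
#include <numeric>
#include <vector>

// The sole consensus rule about fees: the implied fee (inputs - outputs)
// must not be negative; its actual value only matters for priority-sorting.
bool FeeIsNonNegative(const std::vector<int64_t> &in_values,
                      const std::vector<int64_t> &out_values) {
    const int64_t in  = std::accumulate(in_values.begin(), in_values.end(),
                                        int64_t{0});
    const int64_t out = std::accumulate(out_values.begin(), out_values.end(),
                                        int64_t{0});
    return in >= out;
}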

Also, I don't believe in the existence of non-fixable bugs in the old rules, the kind for which "a fork (rule change) was executed to fix a bug or prevent an attack, [and] the miners cannot continue to use the old rules for transactions that have the old version tag".

Edit: Getting back to the original argument:
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in 32-bit Windows (and the Itanium editions of 64-bit Windows). It is more a problem of deciding how much energy to spend on scoping the required area of backward compatibility and on preparing/verifying the test cases.
Perhaps the DOS/Windows argument wasn't the best. A better, but less well known, example would be mainframe disk device drivers. They easily cover old-style devices with interfaces designed in the late 1960s. The hardware implementations are "frozen" in the sense that nobody changes the relevant hardware logic anymore. It is just a small sub-area of a modern VLSI chip that implements exactly the same logic as the old TTL-style disk interface controller.

Nobody designs or writes an interface that is sprinkled with conditional logic to handle the old protocols (if () then {} else {}). There's a one-time inquiry to determine the protocol version in use, and from then on all operations are handled through indirection (e.g. (*handle_read[versn])(...)).

The same idea could be applied to Bitcoin if the version field were appropriately changed, both in blocks and in transactions.
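
A minimal sketch of that indirection idiom; the handler names are mine, purely illustrative:

Code:
#include <cstddef>
#include <cstdio>

// One handler per frozen rule-set version; a frozen handler never changes.
static int handle_read_v1(const unsigned char *buf, size_t len) {
    (void)buf;
    printf("parsing %zu bytes under the v1 (legacy) rules\n", len);
    return 0;
}
static int handle_read_v2(const unsigned char *buf, size_t len) {
    (void)buf;
    printf("parsing %zu bytes under the v2 rules\n", len);
    return 0;
}

// The version is inspected once, up front; every later operation goes
// through indirection instead of per-call "if (old) ... else ..." tests.
static int (*const handle_read[])(const unsigned char *, size_t) = {
    nullptr,         // version 0 is unused
    handle_read_v1,
    handle_read_v2,
};

static int dispatch(unsigned versn, const unsigned char *buf, size_t len) {
    const size_t nvers = sizeof handle_read / sizeof handle_read[0];
    if (versn == 0 || versn >= nvers) return -1; // unknown version: reject
    return (*handle_read[versn])(buf, len);
}

int main() {
    unsigned char raw[16] = {0};
    dispatch(1, raw, sizeof raw);
    dispatch(2, raw, sizeof raw);
    return 0;
}
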
284  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: April 04, 2016, 02:45:34 AM
I am not sure if I understood your comment.  Miners cannot apply the old semantics when the transaction has an old version field, because that field can be faked by clients to sabotage the change.  E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database.  An attacker could frustrate that measure by issuing transactions with the pre-fork version tag.   Does that answer your comment?
I don't buy the argument about "frustrating that measure". It is very easy to verify that "old style" transactions use only "old coins": coins that were confirmed no later than the effective time of the new transaction format.
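
A minimal sketch of that rule; the field names and the fork-height cut-off are hypothetical:

Code:
#include <cstdint>
#include <vector>

// Hypothetical field: the height at which each spent output was confirmed.
struct TxIn { uint32_t prevout_height; };
struct OldStyleTx { std::vector<TxIn> vin; };

// An "old style" transaction is acceptable only if every coin it spends
// was confirmed before the new transaction format took effect.
bool SpendsOnlyOldCoins(const OldStyleTx &tx, uint32_t fork_height) {
    for (const TxIn &in : tx.vin)
        if (in.prevout_height >= fork_height) return false;
    return true;
}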

Theoretically someone could try to launch the attack using only "old coins", pretending to have a pre-signed transaction with a rather large n-lock-time. I think that type of attack would be self-extinguishing: it could be launched only once for each "old" UTxO entry.
285  Bitcoin / Development & Technical Discussion / Re: LevelDB reliability? on: April 04, 2016, 02:21:08 AM
That's precisely what we did with Monero. We abstracted our blockchain access subsystem out into a generic blockchainDB class,
That's exactly how Core has been done for years.

Though we don't consider it acceptable to have 32-bit and 64-bit hosts fork with respect to each other, and so prefer to not take risks there!
This cannot be right. The Satoshi Bitcoin client has always stored the blockchain as plain flat files. The database engines were used only for indexing into those files. The recent LevelDB-based backend worsened the situation by explicitly storing (also in plain files) some precomputed data for undoing transaction confirmations in case of a reorganization.

The proper way to modularize the storage layers is to completely encapsulate all data and functions that access the blockchain, without crossing the abstraction layers inside the upper-level code.

I stress that "storage layers" needs to be plural. The mempool is also a storage layer and also needs to be properly abstracted. In a proper, modular implementation the migration of transactions between those storage layers (for unconfirmed and confirmed transactions) would ideally require setting one field in the transaction attributes, e.g. bool confirmed;.
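
A hedged sketch of what such an encapsulation could look like; the interface below is my own illustration, not Core's or Monero's actual classes:

Code:
#include <cstdint>
#include <vector>

// Illustrative transaction record; 'confirmed' is the single attribute
// whose flip migrates a transaction between the storage layers.
struct TxRecord {
    std::vector<uint8_t> bytes;
    bool confirmed = false;
};

// Abstract storage layer: both the mempool and the confirmed-block store
// implement this, so upper-level code never crosses the abstraction.
class TxStorage {
public:
    virtual ~TxStorage() = default;
    virtual bool Put(const TxRecord &tx) = 0;
    virtual bool Get(const uint8_t txid[32], TxRecord &out) const = 0;
    virtual bool Erase(const uint8_t txid[32]) = 0;
};
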
286  Bitcoin / Hardware / Re: Bitfury: "16nm... sales to public start shortly" on: March 31, 2016, 08:39:49 PM
Exiting times, again!
I wonder if that was a typo or a subliminal way to suggest that we sell.   Wink
287  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 31, 2016, 05:16:38 PM
Of course, I understand the difference between Turing-complete and non-Turing-complete structures.
Good. For those unfamiliar with that area of science, here are some good links to start their research:

https://en.wikipedia.org/wiki/High-level_synthesis
https://en.wikipedia.org/wiki/High-level_verification

288  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 31, 2016, 04:09:54 PM
It is not possible to create an algorithm for verifying Turing-complete structures.

In other words.
Imagine that you have a tool which produces '1' if the realization matches consensus and '0' if the realization is wrong and can produce hard-forks and earthquakes.
Who is responsible for bugs in this tool?  Grin How would you check it? With another tool?

And the second objection.
We do not need to 'stone' the consensus code. The majority always has a right to change anything in consensus.
From past experience talking with you, I think you are just pretending to be a dumbass. But it is also possible that you don't understand the difference between the old halting problem and automated logical equivalence verification, like that used by ARM to verify implementations of their namesake architectures.

Every CAD/EDA tool vendor has tools for automated verification and automated test-vector generation. The obvious problems are:

1) those tools are closed source
2) those tools are very expensive
3) the input languages are hardware oriented: Verilog & VHDL mostly, with only recent additions of SystemC or similar high-level-synthesis tools.

That isn't even anything new like zk-SNARKs: those tools have been in use and on the market for more than 10 years. I used some early prototypes of them (based on LISP) in school back in the 20th century.
289  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 31, 2016, 03:31:47 PM
Anyone with sufficient knowledge of algorithms and networking should be able to understand how it works without reading the code.
And there should not be "the" code, since the maintainers of that code would be a central authority.

Is there a reason to explain C++ code in any other [human or computer] language?

Of course: machine verification. It could even be a subset of C++, like SystemC. But the proper definition of the consensus-critical portion should be machine-verifiable.

I'm pretty sure that at least gmaxwell understands the importance of that. He's an advocate of zk-SNARKs, and those have, as one of their intermediate steps, the synthesis of a logic circuit equivalent to the given program.

The zk-SNARK people in Israel designed and implemented some subset of C (not C++) to facilitate logic synthesis. It is a key step towards machine verification of the code.
290  Bitcoin / Hardware / Re: Bitfury: "16nm... sales to public start shortly" on: March 31, 2016, 03:06:04 AM
Were the first-gen chips outputting half speed because half the cores were shot, or was it a power limitation or similar issue?

If the chips with half the cores shot are still drawing full power for half the hashrate, efficiency sucks. If the chips with half the cores shot are drawing half power for half hashrate, efficiency is preserved but not power, so without extensive grading for balance they're useless for a string design (which Bitfury favors heavily). In either case, they blow the designed specs. Bitfury may find a use for chips with half the cores shot, but one way or another it's going to suck. Unless the design specs are assuming half the cores are shot, in which case they'd want to bin chips anyway and keep the fully-functional ones for themselves to build some legendary machines out of.
Apparently for the original Bitfury chips the numerical value of "suckage" was such that it was still worthwhile to buy/sell them.

IIRC even the original Bitfury chip had some sort of bit vector that enabled/disabled the individual hashing cores. I don't think they would've removed it from their newest design. It is fairly simple, design-wise, to take care of clock-disabling and powering-off the "shot" cores.
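
A sketch of such a core-enable mask; the register layout is purely my guess:

Code:
#include <cstdint>

// Hypothetical per-chip mask: bit i set => hashing core i is enabled.
// Clock-gating or power-gating a "shot" core is then a single bit clear.
struct ChipCoreMask {
    uint64_t enabled = ~0ull;  // up to 64 cores per chip in this sketch

    void DisableCore(unsigned i)     { enabled &= ~(1ull << i); }
    bool IsEnabled(unsigned i) const { return (enabled >> i) & 1ull; }
    unsigned LiveCores() const {
        unsigned n = 0;
        for (unsigned i = 0; i < 64; ++i) n += IsEnabled(i);
        return n;
    }
};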

I think many people in this thread are, in their minds, designing a miner according to some ideal "specification-in-the-sky" or the "spec-sheet-of-their-dreams". I'm all for idle speculation as a mind exercise when there's not much else to do. But if somebody wants to do non-idle stuff, then it is better to learn some basics of engineering instead of incessantly placing bets on various pies-in-the-sky.

 
291  Bitcoin / Hardware / Re: Bitfury: "16nm... sales to public start shortly" on: March 31, 2016, 02:29:06 AM
By now I probably should have my standard debunking of the "low yield" argument assigned to a single-key shortcut.
Disclaimer: The following is purely an educated guess by me.
I have no 'in' with information on the hold-up of BitFury's chip. My business is not directly involved in the actual production of physical silicon wafers/dies, but is directly involved in what happens to wafers full of dies (not BitFury's) needing to be tested and packaged into functional chips. Since the current node size directly impacts what our end has to deal with, I DO closely follow what is happening at the foundries, be they TSMC, Samsung, GloFo or others.
-End disclaimer

That said, I probably have a good idea what is up.
As I have been saying since, oh, around Nov. of last year, when noise began popping up about 16/14nm mining ASICs coming out: Great. Yes, there are huge advantages to be had, once the production technology becomes viable enough for boutique chips vs. the micro-processors for Apple, AMD, Samsung, HTC and Cisco (network 'fabric' switches/buffers).

Guess what? To meet a reasonable final price-per-chip needed for 16/14nm ASICs, the yields per wafer just ain't there yet. Period.

Per statements by TSMC earlier this year, Apple, AMD, and Cisco will take >80% of their capacity at the 16nm node until around June, when 30% (my note: hopefully) more overall capacity there comes on line. Per a few articles I've read, the yields at the 16/14nm node from all foundries are just barely over 50% viable dies per run. That's acceptable for Apple, AMD, et al., as they can write off the scrap costs as part of their dev costs for 16/14nm, because they and the other companies I mentioned are literally funding all development at that node and have been for years. Yields like that are devastating to our mining-chip needs.

That means that only 20% of capacity is available to other companies, e.g. BitFury, with no certainty of how many good dies per wafer they will get. They, along with others, have to wait their turn in line, probably set once a month or at best every 2 weeks, to use that capacity to its fullest.

If, based on the development runs, there are tweaks that need to be made to the silicon, the delays snowball: engineering-sample wafers make their way through initial testing at the foundry, then on to a packaging house to be probed again before the wafers are diced into individual dies and finally packaged into actual chips, followed by final (hopefully at full speed/power) testing. Only then are the engineering samples sent to BitFury, and from them to integrators for design testing.

The one shortcut there is that I suspect BitFury has their own packaging house (for what they need it ain't rocket science), so no scheduling conflicts with other customers there.

Anywho, I'd venture, given Punin's acknowledgement of Kilo17 winning their bet, that some design issues have arisen requiring respins to address. BFL (all their ASICs), Bitmine.ch/Innosilicon (A1), and others all come to mind on what can happen to dramatically delay full production of chips.

Will these lower-node chips from Bitfury (and no doubt Bitmain) eventually reach full production mode, and will yields get better? I see no reason to say no. As for when: anyone's guess on that.

Much to the credit of BitFury, at least they have not taken the "Promise the moon for specs and pre-order now for delivery in <insert totally unrealistic time frame>!" route with the public!

Time to get off of the stump.
Bitcoin mining chips are too repetitive for the industry-standard measures of yield to apply.

The big names mentioned, like Intel, Apple, etc., order extremely complex digital chips with very little redundancy. In particular, the industry-standard testing framework (JTAG) requires that all (or nearly all) flip-flops on the whole chip are threaded onto a single JTAG chain for testing. Any break or short in the JTAG chain will make the chip faulty even if all the non-testing logic is correct.

On the other hand, a Bitcoin mining chip consists of a ridiculous amount of redundancy, and SHA256D is self-testing; therefore the standard JTAG chain would be a complete waste of space.
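
Self-testing here just means handing a core a header whose winning nonce is already known and checking the answer. A host-side sketch (the test-vector contents and the driver callback are placeholders, not real data):

Code:
#include <cstdint>

// A known (header, nonce) pair; placeholder values, not a real block.
struct SelfTestVector {
    uint8_t  header[76];  // block header minus the 4-byte nonce field
    uint32_t good_nonce;  // nonce known to satisfy the test target
};

// The core under test is abstracted as a callback returning the nonce it
// found; in a real driver this would poll the chip's result register.
using CoreSearch = uint32_t (*)(const uint8_t header[76]);

// SHA256D is self-testing: a core that returns the expected nonce for a
// known input is, with overwhelming probability, fully functional.
bool CoreIsAlive(CoreSearch run_core, const SelfTestVector &v) {
    return run_core(v.header) == v.good_nonce;
}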

In the case of a Bitcoin mining chip, a "50% yield" would mean that on the average chip about half of the hashing engines in the sea-of-hashers work correctly. Such chips would still be commercially viable and sellable.

In addition to the above, a Bitcoin mining chip is nearly 100% self-contained: it doesn't have to interface or be compatible with any external standard like DRAM or WiFi. Nearly all functional and timing violations can be worked around at the driver level.

To refresh the history: the original Bitfury chip had a yield of 0%: it was supposed to produce 5GH/s but only achieved 2-3GH/s. Additionally, there was some scramble/permutation in the output logic that required an inverse permutation in the mining software. Yet those chips sold quite well. Not only did they sell, they also made money for the buyers.

In summary: if you want to draw non-absurdist conclusions about Bitcoin mining chips, you'll need to use the yield measures from the analog & mixed-signal fields, not the ones used in digital logic.

292  Bitcoin / Bitcoin Technical Support / Re: Threaded or asynchronous sockets? on: March 30, 2016, 08:17:11 PM
Hello, I am a P2P noob. Can you please explain to me how Bitcoin allows connectivity between several nodes at once? Is it handled by some asynchronous sockets framework or in some other way, like threading? Thanks.
Multithreading.
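
For the noob's benefit, a minimal sketch of the thread-per-connection pattern (illustrative only; the actual client's networking code is organized differently):

Code:
#include <thread>
#include <vector>

using Socket = int;  // placeholder for an accepted peer socket descriptor

void HandlePeer(Socket s) {
    (void)s;  // read, validate and relay messages for this one peer
}

int main() {
    std::vector<std::thread> peers;
    for (Socket s : {3, 4, 5})             // pretend these came from accept()
        peers.emplace_back(HandlePeer, s); // one thread per connection
    for (auto &t : peers) t.join();
    return 0;
}
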
293  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 30, 2016, 08:15:19 PM
It's obvious to anyone with half a brain jstolfi has a deep understanding of Bitcoin, and he is making some valid points. (Plus I like his clear un-emotional posts.. )  Grin
Deep understanding of Bitcoin? That was disproved a few pages ago by knightdk. JorgeStolfi clearly has no elementary comprehension of the Bitcoin source code.

Yet he continues to post trivialities like:
And, in that case, bitcoin is done for anyway: nothing will save it if malicious miners have a majority of the hashpower.
which was already stated in Satoshi's original whitepaper. This isn't deep. This is as shallow as it gets.

CIYAM tends to think that JorgeStolfi is some sort of paid disinformation operative. From my personal experience with peer review, I would venture to guess that he may be heavily medicated or have some sort of amnesia. Such people tend to have good recall of recent facts and recently acquired knowledge, but start to have problems with the recall and application of facts and knowledge acquired years ago.

294  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 27, 2016, 05:22:25 AM
I haven't looked at the code itself, but I do understand a few things about programming.  For example, a few months ago I found a rounding error in the table of block rewards on the bitcoin wiki.  (And integers are math too, you know.)

On the other hand, I wonder if you really understand how the protocol is supposed to work.  Can you see why the original design did not have non-mining relay nodes?
OK, now you've shifted your position to open crackpottery.

The original implementation certainly had non-mining relay nodes: in the original client, mining (then CPU-only) was explicitly optional. The shift between then and now is just that the probability that a randomly connected relay node is also mining is now much lower.
295  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 25, 2016, 07:56:33 PM
As for the non-mining relay nodes, they are aberrations that have no place in the protocol and break all its (already weak) security guarantees.   They should not exist, and clients should not use them.
Non-mining relay nodes serve several useful purposes; probably the most important one is as a first line of defense against denial-of-service attacks, especially when such nodes are run at a cloud service provider that charges $0/GB for incoming traffic (like Amazon EC2): that nearly completely defangs the most common DDoS, the UDP flood.

I have to observe that for somebody with an actual scientific degree you are making questionable statements too fast and too often.

296  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 21, 2016, 05:42:54 AM
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't, and isn't, this field changed as the rules get changed?

I am not sure if I understood your comment.  Miners cannot apply the old semantics when the transaction has an old version field, because that field can be faked by clients to sabotage the change.  E.g., suppose that the change imposed a minimum output amount of 0.0001 BTC as a way to reduce spam attacks on the UTXO database.  An attacker could frustrate that measure by issuing transactions with the pre-fork version tag.   Does that answer your comment?
You started writing really weird conflated stuff. What do fees have to do with transaction syntax?

The version field should be used to clearly describe the syntax rules governing the transaction format.

The amount of fees doesn't change the syntax, so it doesn't require a change of the version.

The existing client already has a "misbehavior" score to disconnect itself from peers that try to abuse it in various ways. There's no point in inventing new mechanisms for this. All that could possibly be required is to tune the specific values of the various misbehavior demerits.
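
Conceptually the mechanism is just a per-peer demerit counter with a disconnect threshold; a simplified sketch (the real client's scoring values and plumbing differ):

Code:
#include <map>

// Per-peer misbehavior score: when it crosses the threshold, the peer is
// disconnected/banned. Tuning the demerit values per offense is all the
// "new mechanism" that would ever be needed.
class MisbehaviorTracker {
    std::map<int, int> score_;  // peer id -> accumulated demerits
    static const int kBanThreshold = 100;
public:
    // Returns true if the peer should now be disconnected.
    bool Misbehaving(int peer, int demerits) {
        score_[peer] += demerits;
        return score_[peer] >= kBanThreshold;
    }
};
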
297  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 20, 2016, 03:43:42 PM
Similar difficulties exist in handling an old transaction that was created before a soft fork but was broadcast only after it, and became invalid under the new rules.  The rules must have changed for a reason, so the transaction cannot simply be included in the blockchain as-is.   For example, suppose that the change consisted in imposing a strict limit on the complexity of signatures, to prevent "costly transaction" attacks.  The miners cannot continue to accept old transactions according to the old rules, because that would frustrate the goal of the fork.
(Note that there is no way for a miner to determine when a transaction T1 was signed.  Even if it spends an UTXO in a transaction T2 that was confirmed only yesterday, it is possible that both T1 and T2 were signed a long time ago.)
Your argument is technically specious. Transactions in Bitcoin have a 4-byte version field, which gives us the potential for billions of rule-sets to apply to old transactions. The correct question to ask is: why wasn't, and isn't, this field changed as the rules get changed?

298  Bitcoin / Development & Technical Discussion / Re: Segwit details? N + 2*numtxids + numvins > N, segwit uses more space than 2MB HF on: March 19, 2016, 11:39:45 PM
Pre-signed but unbroadcast or unconfirmed transactions seem to be a tough problem. 
I disagree on the "tough" part. In my opinion this is less difficult than DOSbox/Wine on Linux or the DOS subsystem in 32-bit Windows (and the Itanium editions of 64-bit Windows). It is more a problem of deciding how much energy to spend on scoping the required area of backward compatibility and on preparing/verifying the test cases.

The initial step is already done, in the form of libconsensus. It is a matter of slightly broadening the libconsensus interface to allow full processing of compatibility-mode transactions off the wire and of old-style blocks out of the disk archive.

Then it is just a matter of keeping track of the versions of libconsensus.
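
For illustration only, the broadened interface could take an explicit rule-set version. The entry point below is my own assumption, not the real libconsensus API (which today exposes only script verification):

Code:
#include <cstddef>
#include <cstdint>

// Hypothetical broadened libconsensus entry point: the caller names the
// rule-set version under which the raw transaction bytes are judged.
enum ConsensusVersion : uint32_t { RULES_V1_LEGACY = 1, RULES_V2_SEGWIT = 2 };

static int verify_v1(const unsigned char *, size_t) { return 1; } // stub
static int verify_v2(const unsigned char *, size_t) { return 1; } // stub

int consensus_verify_tx(ConsensusVersion ver,
                        const unsigned char *tx, size_t len) {
    switch (ver) {
    case RULES_V1_LEGACY: return verify_v1(tx, len); // compatibility mode
    case RULES_V2_SEGWIT: return verify_v2(tx, len);
    }
    return 0; // unknown rule-set: reject
}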

To my nose this whole "segregated witness as a soft fork" business has a strong whiff of the "This program cannot be run in DOS mode" from Redmond, WA. Initially, paeans were written about how great it was that one could start Aldus PageMaker both by typing PAGEMKR at the C> prompt (to start Windows) and by clicking the PageMaker icon in the Program Manager (if you already had Windows running). Only years later did the designers admit this was one of the worst choices in the history of backward compatibility.

299  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 19, 2016, 10:16:47 PM
Thanks! This really helps me find the math error in my analysis. I was missing the storks.

so we just need to wait a stork number of blocks and that will be optimal for mountain climbing.

I guess you think satoshi was an idiot too, as he said something about 10 blocks. Or was he committing fraud, or whatever this crackpottery is.

ad hominem attacks vs. historical results with some actual math

And what exactly is this fraud you imply I am conducting? That using a hard-coded lookup table (generated from the p2p network and validated) instead of a DB is more efficient? Is that my supposed fraud?

James
I don't know. What I do know is that fraudsters and gangsters (more generally, common criminals) have an exceptional sensitivity to being shown "no respect" (cue Marlon Brando in The Godfather). Of course, correlation is not causation. But see how DeathAndTaxes reacted to the mention of money orders in 2012. I think that even he didn't know at the time what scam he was going to pull off years later.

https://bitcointalk.org/index.php?topic=93655.msg1036760#msg1036760

Your reaction to the mention of a high-school-level science curriculum is very similar.

Again, time will tell.
300  Bitcoin / Development & Technical Discussion / Re: are there any actual stats on chain reorgs, by depth? on: March 19, 2016, 09:26:25 PM
I dropped out of kindergarten, I couldn't handle being shamed and put in the corner when I asked questions I wasn't supposed to.
Take up mountain climbing as a hobby. Your main adversary will then be impersonal gravity. But you'll learn to deal with reality and how to overcome obstacles. Or you'll get yourself killed.

Or do you not mind if people just categorize you as some type of troll? I would think a smart person would want his point of view appreciated and not just dismissed as troll blather.
I'm actually proud of getting called a troll by now-known fraudsters like DeathAndTaxes or shtylman. For you the question is still open: are you leaning more towards harmless crackpottery or towards willful fraud? We'll see.

I appreciated your post about the getchaintips RPC, that was useful. Maybe you can stick to making useful posts? Like math-based analysis, which is what I tried to do. Did I make any math errors?
This is a public forum. I make posts useful to the global audience of readers, even if they aren't useful to you or to any particular poster in any thread.

Your math error is called GIGO: https://en.wikipedia.org/wiki/Garbage_in%2C_garbage_out .

The mistake you are making is common enough that there are lots of educational texts on the "correlation vs. causation" issue. When I was in school, the profs used to refer to an excellent joke paper by some Scandinavian scientists on the correlation between the presence of storks and childbirth rates in Scandinavia. I'm not sure it was ever translated into English.

Edit: More seriously: many law schools offer "personal development" classes on how to deal with adversity in a public courtroom. Apparently it is a common problem even for excellent students who were home-schooled or went to religious schools.