Bitcoin Forum
  Show Posts
Pages: « 1 2 3 4 [5] 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 »
81  Bitcoin / Development & Technical Discussion / Re: Block size increase on: August 12, 2015, 12:25:00 PM
This poll is really... Problematic. ... And hence the protracted and frustrating debate.
Good point.  I added "Let them stay full until we have a better solution than simply doubling it."  Only I suspect you might not want to choose that either because you view the 1MB limit as a problem, and I imagine that you can see the possibility that during the time it takes to find an acceptable solution, the pain of the 1MB limit can get bad enough to justify a change to it.
This just goes to show how sticky language is. I would like for Bitcoin to handle unlimited transactions, so anything less than that isn't ideal, but at the same time, full blocks don't represent a failure or bug in Bitcoin.

I would be happy to see a change in block sizes, even if it doesn't permanently resolve the issue, when we are seeing an uptick in node count and transaction fees heading in a healthy direction.
Whatever that change is will depend on the data. "Doubling" doesn't seem very well considered to me. Luke-Jr mentioned an increase of 15% to match increasing bandwidth availability. Maybe that's more realistic.

Quote from: Greg Maxwell
Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target.  This is frustrating; from a clean slate analysis of network health I think my conclusion would be to _decrease_ the limit below the current 300k/txn/day level.

http://sourceforge.net/p/bitcoin/mailman/message/34090559/
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009466.html
82  Bitcoin / Development & Technical Discussion / Re: Block size increase on: August 11, 2015, 05:41:55 PM
You can, sort of, force the change with 51% of the miners' hashpower, but it would make a big mess.
Yes. But 51% is a majority. A majority can do whatever it wants to do.
51% of the miners' hashpower can vote today and switch to a fork tomorrow.
I wouldn't recommend it.

Quote from: DannyHamilton
...a hard fork occurs if a change is made to the protocol such that blocks that are considered valid by one version of the software are not considered valid by another version.  In particular a hard fork occurs when something that is accepted as valid in the new software, wasn't previously accepted as valid in the old software.  This means that blocks from the new software will not be recognized as valid by the old software.  The chain will split if anyone is still running the old software. Depending on how long it takes people to notice that their old software isn't matching the rest of the network, it can create quite a mess for individuals who upgrade and suddenly find that transactions they thought were valid and completed suddenly don't exist.
https://bitcointalk.org/index.php?topic=621996.msg6897607#msg6897607

Quote
Wladimir J. van der Laan refuses to do any hard forks that don't have broad consensus.
Here's his opinion on that subject;
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/009137.html
Who is this man?
OK, I know that he is a Bitcoin Core developer, Bitcoin Foundation member, etc., etc., etc...
I mean that he cannot speak for the majority and I cannot rely on his opinion.
He's the Bitcoin Core maintainer; he's the one who ultimately decides when to accept pull requests, but you don't have to agree with him. My impression from the mailing list is that all core developers approach hard forks with extreme caution.
83  Bitcoin / Development & Technical Discussion / Re: Block size increase on: August 11, 2015, 04:17:13 PM
You can, sort of, force the change with 51% of the miners' hashpower, but it would make a big mess. Wladimir J. van der Laan refuses to do any hard forks that don't have broad consensus.

Here's his opinion on that subject;
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/009137.html


84  Bitcoin / Development & Technical Discussion / Re: Block size increase on: August 11, 2015, 01:49:37 PM
This poll is really... Problematic. The problem with the block size is very nuanced. The people, like myself, that are concerned about it don't have some religious conviction about the 1MB block limit specifically. I can't remember reading anyone saying that they dogmatically want to keep the block size at one spot for all eternity. The other problem with the poll is that it presumes that "full blocks" encapsulates the entire problem: as long as the blocks aren't full, there isn't any problem. This ignores the fact that there is a real, significant tradeoff in proportion to block size increases.

I could put the problem like this;
What kind of algorithm could be put in place to ensure that the Bitcoin block size limit doesn't need to be hard forked periodically in response to changing technology, market conditions, and node decentralization while simultaneously sustaining funding as inflation diminishes?
Before that question can be answered, one has to come to grips with two other subjective questions; "How decentralized should Bitcoin be?" and "What is a reasonable fee that can allow that to happen?"
These questions all seem to be answered with vague approximations and subjective value judgments, which seems to preclude any possibility of creating some monolithic algorithm; and hence the protracted and frustrating debate.

Gavin Andresen and Mike Hearn both seem to believe that blocks should never be full. I speculate they don't argue for a simple removal of the block size limit almost entirely for political reasons. I'm sure they would cite "spam" (Whatever that means.) as a mitigating factor.
Regardless, following their solution or some compromise is not simply an argument on technical merits, it's also buying into a particular ideological path for Bitcoin.

Asking, basically, when we should increase the block size after we know they're full is begging the question. Quite a few of them actually.
85  Bitcoin / Development & Technical Discussion / Re: Deleting abandoned UTXO on: July 21, 2015, 05:41:16 PM
I don't see why proposals like Amaclin's are particularly bad. Sure, there's an incentive for miners to reject transactions in order to confiscate the property, but it's not likely, because it would damage the fungibility of the very currency they're trying to acquire, because a collusion of 50% of the hash-power represents a broken system anyway, and finally because the user can always pay a fee that outweighs the potential benefit of the colluding miners.

Expanding on that last point, if you had 1000 btc and you wanted to move it to another address, 51 out of 100 miners (each with 1% of total hashpower, for simplicity) could reject all of your transactions to get potentially 19.6 btc apiece. The sender could provide a 19.61 btc fee, and suddenly the incentive to collude is gone. So even in the unlikely event that half the hashpower colludes against one address, the conspiracy could be foiled by just paying a high enough fee to at least compete with the least significant reward of one of the colluding participants.
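The break-even arithmetic above can be sketched in a few lines (my own illustration of the numbers in this post, with the simplifying assumption that each colluder would split the confiscated output evenly):

```python
# 100 miners with 1% of the hashpower each; 51 of them collude to
# confiscate a 1000 BTC output by refusing to mine its transactions.
def breakeven_fee(amount_btc, colluders):
    """Smallest fee that beats each conspirator's share of the loot.

    Any fee above this makes defecting (mining the honest transaction
    and pocketing the fee) more profitable than staying in the cartel.
    """
    return amount_btc / colluders

share = breakeven_fee(1000, 51)
print(round(share, 2))  # 19.61 -> a ~19.61 BTC fee kills the incentive
```

So the defense scales with the amount under attack: the victim only ever needs to outbid one colluder's cut, not the whole conspiracy.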

If something like Confidential Transactions is implemented, then it wouldn't be possible to award the contents of the outputs to the miners. There would be no reward, or maybe an inflationary reward.
86  Bitcoin / Development & Technical Discussion / Re: Safe(r) Deterministic Tie Breaking on: July 16, 2015, 04:05:40 PM
A problem I see with this is that if a miner is missing one of the tie blocks then it's going to come to a different conclusion, whereas if it only picks the one with the hardest difficulty then the network is only depending on all of the miners getting that particular block. It's easier to get one difficult block than it is to make sure you have all of the potentially conflicting forks before running the algorithm.

It also looks like it could be gamed retroactively, by adding blocks to the list of "ties" that "happened" in order to get the result the miner wanted. So instead of getting an advantage by holding a single more-difficult-than-usual block, the miner would look for more than one block, and if he happened to find them faster than usual, release just the amount that would give a result that picks his fork. Though that would generally be really unlikely.

It seems like this would actually make it more difficult to come to a consensus since it can be gamed by picking and choosing, holding and releasing blocks.

Maybe none of that is a problem because blocks are found fairly slowly... I'm not sure.
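For concreteness, here's a sketch of the kind of deterministic tie-break being discussed (my own toy rendering, not the thread's exact proposal): among competing same-height blocks, prefer the "hardest", i.e. numerically lowest, hash instead of the first one seen.

```python
import hashlib

def block_hash(header_bytes: bytes) -> bytes:
    """Bitcoin-style double SHA-256 of a block header."""
    return hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()

def pick_tip(competing_headers):
    """Deterministic tie-break: choose the contender with the lowest hash."""
    return min(competing_headers,
               key=lambda h: int.from_bytes(block_hash(h), "big"))

a, b = b"header-A", b"header-B"  # two same-height contenders
# Any two nodes that have seen BOTH contenders agree on the winner;
# a node missing one of them reaches a different answer -- which is
# exactly the objection raised above.
assert pick_tip([a, b]) == pick_tip([b, a])
```

Note the result is order-independent only among nodes with the same view, which is why "have I seen every tie?" becomes the new consensus problem.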
87  Bitcoin / Development & Technical Discussion / Re: Replace by fee cannot solve the double spend issue on: July 16, 2015, 01:01:21 PM
There is no acceptable length of time a merchant can wait to see if a double spend occurs in order to implement this counter measure - the only acceptable length of time is the time it takes a transaction to be included in a block.
You're sneaking in a definition here. "Acceptable" is a subjective term. A double spend is less likely to succeed the longer the attacker waits to broadcast the second transaction. I'm not completely sure what the odds are the longer one waits, but even if the attacker waits just 20 seconds (to try to sneak his double-spend in after the physical transaction takes place), the original transaction would have propagated to more than 95% of the network, so I'm pretty sure that would mean their chance of success would be a corresponding sub 5%.[1] (Assuming transaction propagation is at least as fast as block propagation, and that the outliers have less than or equal to a proportional amount of hashpower.)
Then even if he waits, he doesn't have any guarantee that the merchant won't see the double spend before it gets accepted into a block and dump his sweet vending machine loot into transaction fees.
jdillon does claim that Replace by Fee can make double spends "safe", which is again undefined. So maybe he was being overly optimistic, I don't know.

[1] http://bitcoin.stackexchange.com/questions/10821/how-long-does-it-take-to-propagate-a-newly-created-block-to-the-entire-bitcoin-n
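The propagation argument can be put as back-of-the-envelope code (my own sketch, under the same assumptions stated above: miners mine whichever conflicting transaction they saw first, and propagation is roughly proportional to hashpower):

```python
def doublespend_success(propagated_fraction):
    """Upper bound on the attacker's chance for the next block, assuming
    a fraction `propagated_fraction` of the hashpower saw the honest
    transaction first and will therefore mine it."""
    return 1.0 - propagated_fraction

# After ~20 seconds the original transaction has reached >95% of the
# network, so the attacker's chance is correspondingly sub-5%:
print(round(doublespend_success(0.95), 2))  # 0.05
```

The bound only gets tighter the longer the attacker delays, which is the point being made against the "no acceptable waiting time" claim.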
88  Bitcoin / Development & Technical Discussion / Re: Replace by fee cannot solve the double spend issue on: July 15, 2015, 01:34:21 PM
The point is, you cannot solve the double spending problem by using replace by fee against the attacker. It does not work if you take irreversible action at the time you accept the transaction.

If it solves the "double spending problem", then Proof of Work in general could be discarded, because the genius of Bitcoin was creating a decentralized payment system that solves the double spending problem.

Quote from: Satoshi Nakamoto [1]
We started with the usual framework of coins made from digital signatures, which provides strong control of ownership, but is incomplete without a way to prevent double-spending. To solve this, we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.

Also, Replace by Fee wasn't introduced in order to solve double spending problems. Replace by Fee is a way to swap out transactions in the mempool with higher fees in order to avoid transactions getting stuck in there, but when you permit that, it's trivial for a user to change the output of an unconfirmed transaction, making double spends extremely easy.[2]

What you're referring to is a method to mitigate double spending problems of zero confirmation transactions, while permitting Replace by Fee. What was proposed was having the merchant dump the entire transaction into fees if a double spend is detected. Miners would of course accept this transaction over the double spend because the reward is far greater.[3] This doesn't guarantee that all zero confirmation transactions are completely reliable, it just heavily reduces a double spend's chances of success. Meaning the primary concern about Replace by Fee is unwarranted.

While you're correct that immediate transactions with a merchant using zero confirmations is not risk free while Replace by Fee is implemented, that wasn't the point.
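The counter-measure described above (often nicknamed "scorched earth") boils down to a fee comparison. A minimal sketch, with made-up amounts purely for illustration:

```python
def miner_prefers(candidates):
    """A replace-by-fee miner keeps whichever conflicting tx pays the most fees."""
    return max(candidates, key=lambda tx: tx["fee"])

payment  = {"name": "payment-to-merchant",        "fee": 0.0001}
fraud    = {"name": "double-spend-back-to-self",  "fee": 0.001}
scorched = {"name": "merchant-burns-all-to-fees", "fee": 1.0}  # the full payment

# The fraudster can outbid a normal fee, but cannot outbid the entire
# payment amount without making the "theft" pointless:
assert miner_prefers([payment, fraud]) is fraud
assert miner_prefers([payment, fraud, scorched]) is scorched
```

Which is why the merchant's response dominates: the attacker would have to pay more than the whole amount he's trying to steal back.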

Quote
Quote from: jedunnigan on July 06, 2013, 11:39:57 PM
....However we can make zero-confirmation transactions safe without complex trusted identity systems, ironically by making it easier to double-spend. If we implement replace-by-fee nodes will always forward the transaction with the highest overall fee (including parents) even if it would double-spend a previous transaction. At first glance this appears to make double-spending trivial and zero-confirmation transactions useless, but in fact it enables a powerful counter-measure to an attempted double-spend: the merchant who was ripped off creates a subsequent transaction sending 100% of the funds to mining fees. All replace-by-fee miners will mine that transaction, rather than the one sending the funds back to the fraudster, and there is nothing the fraudster can do about it other than hope they get lucky and some one mines their double-spend before they hear about the counter spend. The transaction can also be constructed such that the payee pays slightly more in advance, with the merchant refunding the extra amount once the transaction confirms, to ensure that a double-spend will result in a net loss for the fraudster.

Miners will love this idea. This is a huge difference to what they receive today as mining fees. Perhaps miners would secretly initiate such double spends to provoke users to send the entire transaction amount to them.
While there is a moral hazard there, it seems rather insignificant to me. A cabal of miners getting together to secretly rip off vending machines and waitresses seems both unlikely and unsubstantial.
Bitcoin Core can't scale to that number of transactions anyway with today's technology.

Quote from: GMaxwell [4]
I don't think 1MB is magic; it always exists relative to widely-deployed technology, sociology, and economics. But these factors aren't a simple function; the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero ...  Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target.  This is frustrating; from a clean slate analysis of network health I think my conclusion would be to _decrease_ the limit below the current 300k/txn/day level.

[1] https://bitcoin.org/bitcoin.pdf
[2] https://bitcointalk.org/index.php?topic=179612.0
[3] https://bitcointalk.org/index.php?topic=251233.msg2669189#msg2669189
[4] http://sourceforge.net/p/bitcoin/mailman/message/34090559/
89  Bitcoin / Development & Technical Discussion / Re: Is there a universal Bitcoin exchange API ? on: July 15, 2015, 12:41:03 PM
This looks like the sort of thing you're looking for;
https://github.com/timmolter/XChange
90  Bitcoin / Development & Technical Discussion / Re: Dynamic Blocksize on: July 10, 2015, 09:03:24 PM
Why not remove the block size entirely was discussed somewhat recently here;
https://bitcointalk.org/index.php?topic=1065491.msg11439896#msg11439896

This is a succinct way to put it;
If the block space is infinite then transaction costs, through competition, would fall to roughly the rate needed to operate [a] single node (when inflation stops), because this is the most competitive configuration in a free market.

This is a refined way to put it;
There is zero incentive for miners to not fill the blocks entirely; almost any non-zero fee would be sufficient.
There are physical limits and costs that would prevent this.  Each additional transaction increases the size of the block.  There are costs associated with increasing the size of a block.  At a minimum, there is a (very small) increase in the chance that the block will be orphaned.
The only _fundamental_ cost is communicating the discrepancy between the transactions included and the assumed included transactions.  This can be arbitrarily low, e.g. if miners delay a little to include only somewhat older well propagated transactions-- the cost then is not a question of "size" but in breaking rank with what other miners are doing (and, in fact, producing a smaller block would be more costly).

Even without optimal differential transmission, and only looking at techniques which are nearly _universally_ deployed by large miners today: with the relay network protocol, the marginal cost of including an already relayed transaction is two bytes per transaction. I can no longer measure a correlation between block size and orphaning rate; though there was a substantial one a few years ago, before newer technology mostly eliminated the size-related impact on orphaning.

Importantly, to whatever extent residual marginal cost exists, these costs can be completely eliminated by consolidating the control of mining into larger pools. We saw people intentionally centralizing pooling as a response to orphaning already (two years ago), which prompted the creation of the block-relay-network/protocol to try to remove some of that centralization pressure by reducing the cost of block relay, so there was less gain to lowering the cost by centralizing. Moreover, any funds being spent coping with these costs (e.g. paying for faster connectivity to the majority of the hash-power) cannot be funds spent on POW security.

So I would refine DumbFruit's argument to point out that it isn't that "fees would naturally be priced at zero" but that the equilibrium is one where there is only a single full node in the network (whose bandwidth costs the fees pay for) and no POW security, because that is the most efficient configuration and there is no in-system control or pressure against it, and no ability to empower the users to choose another outcome except via the definition of the system. I believe this is essentially the point that he's making with "the most competitive configuration in a free market": even to the extent those costs exist at all, they are minimized through maximal centralization. This is why it is my belief that it's essential that the cost of running a node be absolutely low and relatively insignificant compared to POW security, or otherwise centralizing is a dominant strategy for miners.
91  Bitcoin / Development & Technical Discussion / Re: Double spend protection with replace by fee on: July 10, 2015, 05:24:51 PM
I thought "replace by fee" could already be hardened against double spending by having the merchant dump the whole transaction into fees if a double spend is detected.

However we can make zero-confirmation transactions safe without complex trusted identity systems, ironically by making it easier to double-spend. If we implement replace-by-fee nodes will always forward the transaction with the highest overall fee (including parents) even if it would double-spend a previous transaction. At first glance this appears to make double-spending trivial and zero-confirmation transactions useless, but in fact it enables a powerful counter-measure to an attempted double-spend: the merchant who was ripped off creates a subsequent transaction sending 100% of the funds to mining fees. All replace-by-fee miners will mine that transaction, rather than the one sending the funds back to the fraudster, and there is nothing the fraudster can do about it other than hope they get lucky and some one mines their double-spend before they hear about the counter spend. The transaction can also be constructed such that the payee pays slightly more in advance, with the merchant refunding the extra amount once the transaction confirms, to ensure that a double-spend will result in a net loss for the fraudster.
92  Bitcoin / Development & Technical Discussion / Re: Chances of a collision on: July 10, 2015, 02:35:38 PM
Quote from: RealBitcoin
around 1e-40 is probable,

At conception, you could have been any one of more than a million different sperm cells trying to make its way to the egg. There is about a one percent chance that your father would have died before he even met your mother. Your mother happened to release the one egg, out of two million possible, that made you. There was more than a 5% chance that the fertilized egg would have missed implantation.
Just looking at those statistics...
0.000001 * 0.99 *  0.0000005 * 0.95
So there was about a 0.000000000047025% chance that you would exist, looking at one generation. However! This was true of both your mother and father and every pairing going back for the last several hundred thousand years. So if we just look back some 100 generations the likelihood that such a combination would come together throughout the years to produce you is
0.00000000000047025^201

That's 1.37*10^-2478! It's actually significantly less likely than that.

1 in 2^80 is about 8.27*10^-25

So by your logic, the fact that you exist means that the same exact address being found 100 times in a row is "probable"! According to you, I could say 1.37*10^-2478 is probable, because it happened, and therefore finding a particular address 100 times in a row is also probable.
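The arithmetic above checks out if you run it (same inputs as the post; the conclusion survives any rounding):

```python
import math

# sperm * father-survives * egg * implantation, one generation:
one_generation = 1e-6 * 0.99 * 5e-7 * 0.95
print(one_generation)                        # ~4.7025e-13

# 201 such pairings over ~100 generations of ancestors:
log10_total = 201 * math.log10(one_generation)
print(round(log10_total))                    # -2478, i.e. ~1.37e-2478

# versus one chance in 2^80 for an 80-bit collision:
print(2 ** -80)                              # ~8.27e-25
```

So "you existing" really is astronomically less likely than a single 2^-80 event, which is exactly why "improbable things happen" proves nothing about collisions.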

What you're doing is a subtle form of selection bias. Out of the whole world, and given a long enough time period, matter is going to arrange itself in ways  that are infinitesimally unlikely. Sure it's unlikely that the guy would get hit by lightning that many times, but why select for lightning? At the creation of the universe, what are the odds that a particular electron would end up in a molecule in a cell in his spleen?

It's also a form of gross generalization. You could come up with any number of things that have happened that are much less likely than finding a collision, but so what? That doesn't mean finding a collision is any more likely to happen or less costly to achieve over a given period.

http://www.cdc.gov/nchs/data/nvsr/nvsr53/nvsr53_06.pdf
93  Bitcoin / Development & Technical Discussion / Re: Bitcoin Core noob question - how are CTransaction objects created? on: July 09, 2015, 07:23:27 PM
I don't do C++ programming, so I apologize if this is totally wrong, but I think it's created here starting on line 1783;
https://github.com/bitcoin/bitcoin/blob/master/src/wallet/wallet.cpp

It gets pushed out as a CWalletTx and gets parsed into CTransaction/CMutableTransaction on the miner side.
94  Bitcoin / Development & Technical Discussion / Re: Suggestion: Remove 1 MB block limit, but add "overage tax" on large blocks on: July 09, 2015, 06:42:29 PM
This reminds me of the rollover penalties mentioned here;
https://bitcointalk.org/index.php?topic=1078521.0

In any of these kinds of proposals there can be two effects: either you penalize the larger blocks so much that it's never economically viable to actually make larger blocks (as is the case with the aforementioned rollover penalties), or you don't, in which case miners can make larger blocks to gain an advantage in the marketplace (which is hardly distinguishable from raising the blocksize limit in some fashion).

So if your protocol does the former, then it doesn't actually do anything in practice; if it does the latter, then we can ask, "What's the advantage of this beyond the block size increase?"

1.) It destroys currency.
2.) It changes the block size cap to a block size target.

I don't see the advantages of either of those things, and at the end of the day we would still be periodically complaining about where the "target" should be, so down the road we'd still be dealing with pretty much the same problem except that it's been obfuscated by an additional layer.
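The two-horned dilemma can be made concrete with a toy penalty function (entirely hypothetical; the quadratic curve and numbers are my own, not the OP's proposal):

```python
def effective_reward(subsidy, fees, size_mb, target_mb=1.0, tax=25.0):
    """Miner's net reward under an 'overage tax' on bytes past a target.
    The penalty is destroyed currency, not redistributed."""
    overage = max(0.0, size_mb - target_mb)
    penalty = tax * overage ** 2
    return subsidy + fees - penalty

# Steep tax: exceeding the target is never worth realistic fee levels
# (first horn -- the tax is a hard cap in disguise). Mild tax: miners
# just pay it and grow blocks (second horn -- a de facto size increase).
print(effective_reward(25.0, 0.5, 1.0))  # 25.5 (at the target)
print(round(effective_reward(25.0, 0.5, 1.2), 2))  # 24.5 (0.2 MB over)
```

Either way the contentious question just moves from "where is the cap?" to "where is the target and how steep is the tax?", which is the point above.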
95  Bitcoin / Development & Technical Discussion / Re: 28 000 unconfirmed TXs on: July 09, 2015, 06:04:52 PM
It does become more expensive to raise the transaction fee to a certain minimum price though.

If the mempools start struggling to keep up, all you need to do is configure them to ignore transactions below some price. The current minimum price is clearly too cheap, as these attacks are being effective at filling mempools.
What's wrong with the mempool filling up and increasing your own transaction fee? The minimum bitcoin fee is zero. The default minimum is 0.0001 bitcoins, but that is entirely optional.
The developers so far have been reluctant to implement any kind of price fixing measures, knowing the negative consequences of it. This includes not only the fact that changing any of those parameters would require a hard fork, but also all of the other inherent economic problems.

https://mises.ca/posts/articles/false-remedy-price-fixing/
96  Bitcoin / Development & Technical Discussion / Re: 28 000 unconfirmed TXs on: July 09, 2015, 05:00:29 PM
And the worst thing about all this is, it only took $400 dollars to do.
If the block was 8MB, it would have costed $3200 USD dollars moneys.
No.

Imagine that you were in a shipping company that ships boxes of air to other people, and your manager is complaining because the boxes are always full. He insists that you need to start using bigger boxes so that the boxes won't be full of air.

The point is that it is always trivial to send enough transactions to fill a block. Big blocks don't prevent spam, or DOS, nor do they necessarily make it more expensive. The bigger the blocks, the smaller the fees for inclusion.
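The box-shipping point is just multiplication. A rough sketch with assumed round numbers (250-byte transactions, the 0.0001 BTC default minimum fee, $250/BTC; these are illustrative, not the actual attack's figures):

```python
def cost_to_fill(block_bytes, tx_bytes=250, fee_per_tx_btc=0.0001, btc_usd=250):
    """Approximate attacker's cost to stuff a block with min-fee transactions."""
    txs = block_bytes // tx_bytes
    return txs * fee_per_tx_btc * btc_usd

print(cost_to_fill(1_000_000))  # ~$100 for a 1 MB block with these assumptions
print(cost_to_fill(8_000_000))  # ~$800 for 8 MB -- linearly more, not "safe"
```

And since larger blocks reduce fee pressure, the per-transaction cost of inclusion tends to fall with the block size, partly offsetting even that linear increase.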
97  Bitcoin / Development & Technical Discussion / Re: 28 000 unconfirmed TXs on: July 09, 2015, 04:52:44 AM
Quote from: SoundMike
"The 1 MB blocksize limit was only put in as a tx spam limiting measure"
and
"The tx spam that is backing up in the memepool is proof we need to remove the blocksize limit"
Only one of these statements can be true.
Pick one.

I can't put it any better than that. People saying that we need to raise the blocksize limit to prevent DOS is utterly and obviously senseless, illogical, and untrue; contrary to all reason or common sense; laughably foolish and false*.

https://www.reddit.com/r/Bitcoin/comments/3cfext/bitcoin_under_attack/csvoxvp
*http://dictionary.reference.com/browse/absurd

This framing of the 1MB debate above is totally irrelevant to the historical and current context of the situation for Bitcoin.
Framing it in what way? I was saying that lifting the blocksize limit is not a way to prevent DOS or spam, which is true. Pedantically, larger blocks are negligibly harder to fill.
Redditors abound with this notion, and the OP thought that maybe this was the reason for the attack.

Trying to leverage the 1MB to force a viable fee-market earlier is a dangerous experiment in "Blockchain Economics", as the market will simply turn to other cryptocurrency solutions instead. Markets can't be forced to do anything, as a long list of central bankers are finding out time and time again.
So in the name of not messing with the market, we should mess with the market? In the name of not acting like central bankers, who arbitrarily modify the constraints of a currency, we should act like central bankers by modifying the constraints of a currency?
98  Bitcoin / Development & Technical Discussion / Re: 28 000 unconfirmed TXs on: July 08, 2015, 07:13:24 PM
Quote from: SoundMike
"The 1 MB blocksize limit was only put in as a tx spam limiting measure"
and
"The tx spam that is backing up in the memepool is proof we need to remove the blocksize limit"
Only one of these statements can be true.
Pick one.

I can't put it any better than that. People saying that we need to raise the blocksize limit to prevent DOS is utterly and obviously senseless, illogical, and untrue; contrary to all reason or common sense; laughably foolish and false*.

https://www.reddit.com/r/Bitcoin/comments/3cfext/bitcoin_under_attack/csvoxvp
*http://dictionary.reference.com/browse/absurd
99  Bitcoin / Development & Technical Discussion / Re: Dynamic Blocksize on: July 08, 2015, 04:02:58 PM
Of course, there's pretty close to an infinite number of schemes that can produce a dynamic blocksize. The issues are that they all so far have relied on future predictions, and altering the utility of Bitcoin, all in favor of some particular ideology.
That's if the scheme manages to avoid the minefield of attacks and sustainability issues.

I don't think there's any way for a single blockchain  to objectively get this perfectly right.

An idea I've been thinking about is tiered blockchains, resembling a merge mining architecture. The top tier would be very small, maybe able to handle a megabyte per year. This chain would also verify a subchain via arbitrary hashes submitted to it (separate from its own currency transactions). The second tier would verify itself by pulling those hashes off of the top tier and following the hashes with the highest work. This could be tiered arbitrarily for larger and larger chains.
In that way, you could have as much block space as you'd like without segregating hashpower, and people could decide which tier they want to transact in depending on how decentralized they want their transaction to be. Nodes would also have the ability to choose how much resources they want to contribute. They could mine for only tier 1, or tier 1 and 2, or tier 1, 2, and 3, etc.
Each "tier" would essentially be a different currency on the open market, and exchanges would find the market clearing price for each tier. Each tier would provide its own currency as fees to miners, which is obviously dependent on its value in the market.
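A very rough sketch of the anchoring mechanism in that idea (entirely hypothetical; names and structure are my own, and this elides all the hard parts like work comparison and reorgs):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

top_tier = []  # the tiny, most-decentralized tier-1 chain

def commit_subchain(tier2_tip: bytes):
    """Anchor a tier-2 tip hash into tier 1, alongside tier 1's own txs."""
    top_tier.append(h(tier2_tip))

def verify_subchain(tier2_tip: bytes) -> bool:
    """A tier-2 node checks that its tip is anchored in the tier above."""
    return h(tier2_tip) in top_tier

tip = h(b"tier2-block-1000")
commit_subchain(tip)
assert verify_subchain(tip)
assert not verify_subchain(h(b"forged-tip"))
```

The open question is the one conceded below: whether following "the anchored subchain with the most work" can be made attack-resistant across tiers.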

Intuitively, it seems like there is a solution in that direction, but I haven't been able to rigorously show it.
100  Other / Off-topic / OT crap from Compact Confidential Transactions for Bitcoin on: July 08, 2015, 02:58:50 PM
This egotistical shit isn't going to help you.

Here we go again... When I realized who you were I said to myself, "Oh, that explains a lot.", but I didn't mention anything hoping you'd just stop. Enough time and space was wasted. Here we are though, several pages later... You often follow this pattern;

1.) You vaguely complain about something, padding it out with technobabble.
2.) Get rebutted.
3.) Complain that the rebuttal was an ad-hominem.
4.) Repeat 1 through 3, often revolving around the same argument and rebuttal.
5.) Something else is fixed that sort of resembles your vague complaint.
6.) You claim great success because of this thing that was fixed.
7.) Nobody else sees this, because it didn't happen.
8.) You get exasperated and leave in a melodramatic fit.

Can we just skip to 8 and call it a day?

Andrew Poelstra and Gregory Maxwell don't need any defense by me; their records stand on their own. But I'm thinking pointing this out may be helpful to those that aren't familiar with your antics. I'll also point out that most people, especially GMaxwell, have been overwhelmingly patient with you.

https://bitcointalk.org/index.php?topic=279249.msg5640949#msg5640949
https://bitcointalk.org/index.php?topic=679743.0
https://bitcointalk.org/index.php?topic=600436.msg7872805#msg7872805

Edit: Relevant;
http://www.catb.org/esr/faqs/smart-questions.html

Edit2: Thanks for the move, totally appropriate.