Bitcoin Forum
Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016 then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 2032233 times)
solex
Legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 29, 2015, 08:04:19 AM
#29421

This may be true in general, but for the particular hosting company where I run my VPS, disk is the limiting factor in the decision when I have to upgrade to a more expensive plan.  With pruning, you cannot fully support the network and you have to be careful with your wallet handling, as rescanning is not possible.  That's actually what I do on one of the VPS exactly because the blockchain is already too big for the plan I use there, but I would have preferred to run an unpruned full node if I could.  Increasing the block size makes this issue larger by orders of magnitude.

Furthermore, as stated above already, block relay optimisations also help only "little" with bandwidth.  (At best a factor of two, compared to the much bigger block size increases being discussed.)

Yes. Your situation re disk is different, and pruning has known improvements to come.
Block relay optimization is more than "just" a factor-of-two gain, because unconfirmed tx arrive relatively uniformly over a 10-minute window (so the network has an impressive capacity to handle them, as seen by how fast the mempool can grow), whereas a block needs to be sent all at once: roughly 600x the data rate per second. Also, up-link is invariably slower than down-link, which is a big concern for miners publishing blocks.
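To put rough numbers on that burst (a back-of-the-envelope sketch in Python; the 1 MB block and the 1-second relay target are assumptions, not measurements):

Code:
# Transactions trickle in over the ~600 s between blocks, but a freshly
# found block must be pushed out at once. Illustrative figures only.
BLOCK_SIZE_MB = 1.0        # assumed block size
BLOCK_INTERVAL_S = 600     # average spacing between blocks
RELAY_TARGET_S = 1.0       # assumed time a miner wants the block out in

steady_rate = BLOCK_SIZE_MB / BLOCK_INTERVAL_S   # same bytes spread over 10 min
burst_rate = BLOCK_SIZE_MB / RELAY_TARGET_S      # whole block in ~1 s

print(f"steady: {steady_rate*1000:.1f} kB/s, burst: {burst_rate*1000:.0f} kB/s, "
      f"ratio: {burst_rate/steady_rate:.0f}x")   # -> ratio: 600x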

sgbett
Legendary
Activity: 2576
Merit: 1087
July 29, 2015, 08:51:54 AM
#29422

Misrepresenting my position then arguing against that isn't going to cut it.

I said the problem is that the block size is too small, and that the solution is to make the block size bigger. That doing this requires no additional functionality. I contrasted this to the sidechain solution which does require additional functionality. As you rightly say nobody is claiming that is entirely the solution, but that is irrelevant, the point is that this or other solution(s) that require extra functionality are the very thing that your Tanenbaum quote warns against.

All that stuff you said about how we need to change TX size, that's some other thing. Either you are intentionally conflating the two, which is disingenuous, or you really can't tell the difference, which I doubt is the case.

It looks to me like your emotional attachment to your position is causing your reasoning to become irrational. I don't think anyone's argument is absurd. I can see how enforcing higher fees benefits some parties, I think that misses the big picture which is that it *requires* additional functionality.

I'm not frightened of pissing contests, I think they are childish. You don't need to "fight" anything. As smart human beings we all need to listen, think and reason, not inject hyperbole and inflammatory language into posts to try and bully the opposition. Your argument should stand on its own merit and not the vehemence with which it is delivered.

We do need to "fight" features.  Tanenbaum's maxim is a restatement of the KISS principle.  I don't care if you can't see or won't accept that because of your economic illiteracy and a priori attachment to the absurd Red Queen interpretation.  The fight is happening (with your participation) whether you like it or not, mooting your objections.

I'm not "misrepresenting" your position.  The problem is that you don't understand your own position.   Cheesy

"Contrived" 1MB tx bog down the network and present an attack vector.  8MB tx would 8^2 times worse, and thus the added complexity/functionality of larger blocks is demonstrated (your feeble speculative whining about my "emotional attachment" notwithstanding).

This is Sergio's conclusion; and Gavin's (present) ad hoc workaround is to marry (i.e. "conflate") a 100k tx size limit to any blocksize increase.

Let me be clear because you are a slow learner: The intentional conflation of 100k tx size limits and larger blocks is Gavin's quasi-solution to the additional functionality/complexity required by larger blocks, not my "disingenuous" personal interpretation.  You were completely wrong about that (among other things).

What frightens me is that the whole thing seems to have turned into a pissing contest.
I'm not frightened of pissing contests

If you could stop contradicting yourself, that would be great.  Wink

The introduction of sidechains, LN and whatever other solutions is a complicated solution. It's also a solution motivated by a desire to fix a perceived economic issue, rather than sticking to the very simple issue at hand. It is the very opposite of what you are claiming to be important, that software should be kept simple.

That is a contradiction.

The simple solution is to remove the artificial cap. A cap that was put in place to prevent DDoS.

Your reference of CVE-2013-2292 is just distraction. It is a separate issue, one that exists now and would continue to exist with a larger block size.

Great job on affirming your emotional attachment by posting inflammatory content, irrational argument and personal insults.

LN isn't really about fixing a perceived economic issue, it is about secure instant payments, something Bitcoin by itself can't do at all. The observation may be that if many routine/small transactions move to such a system, the need for such transactions on the main chain is reduced, but that is secondary. Even if the main chain had infinite capacity Lightning would still be extremely useful.



Yes I see your point. As a solution to the block size issue I think it's fair to say it's still complex.

That is not to say it shouldn't still be done - as you say it may be very useful for instant transactions. (I still think those can happen on chain though but time will tell)
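On the 8^2 claim quoted above: a rough model (a sketch under stated assumptions, not the actual consensus code) of why oversized transactions validate quadratically worse, and how a 100k tx cap bounds it:

Code:
# Rough model, not the actual consensus code: under the legacy signature-
# hashing scheme each input re-hashes roughly the whole transaction, so
# bytes hashed ~ n_inputs * tx_size, i.e. ~quadratic in transaction size.

def hashed_bytes(tx_size, avg_input_size=100):
    """Approximate bytes hashed to validate one contrived transaction."""
    n_inputs = tx_size // avg_input_size    # input count scales with tx size
    return n_inputs * tx_size               # each input hashes ~the whole tx

one_mb = hashed_bytes(1_000_000)            # worst case fitting a 1 MB block
eight_mb = hashed_bytes(8_000_000)          # worst case fitting an 8 MB block
print(eight_mb / one_mb)                    # 64.0 -> the "8^2" factor

# Gavin's proposed 100k tx cap bounds the quadratic term: an 8 MB block then
# holds at most 80 separate 100 kB transactions.
print(80 * hashed_bytes(100_000) / one_mb)  # 0.8 -> below today's 1 MB worst case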

"A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution" - Satoshi Nakamoto
*my posts are not investment advice*
lebing
Legendary
Activity: 1288
Merit: 1000
Enabling the maximal migration
July 29, 2015, 10:31:55 AM
#29423

Mike H just wrecked Eric on bitcoindev.

http://bitcoin-development.narkive.com/jsfcbpPz/why-satoshi-s-temporary-anti-spam-measure-isn-t-temporary#post10
Mike Hearn via bitcoin-dev 31 minutes ago

I do love history lessons from people who weren't actually there.

Let me correct your misconceptions.


Post by Eric Lombrozo via bitcoin-dev
> Initially there was no block size limit - it was thought that the fee market would naturally develop and would impose economic constraints on growth.

The term "fee market" was never used back then, and Satoshi did not ever postulate economic constraints on growth. Back then the talk was (quite sensibly) about how to grow faster, not how to slow things down!

Post by Eric Lombrozo via bitcoin-dev
> But this hypothesis failed after a sudden influx of new uses. It was still too easy to attack the network. This idea had to wait until the network was more mature to handle things.

No such event happened, and the hypothesis of which you talk never existed.

Post by Eric Lombrozo via bitcoin-dev
> Enter a "temporary" anti-spam measure - a one megabyte block size limit.
The one megabyte limit was nothing to do with anti spam. It was a quick
kludge to try and avoid the user experience degrading significantly in the
event of a "DoS block", back when everyone used Bitcoin-Qt. The fear was
that some malicious miner would generate massive blocks and make the wallet
too painful to use, before there were any alternatives.

The plan was to remove it once SPV wallets were widespread. But Satoshi
left before that happened.


Now on to your claims:

Post by Eric Lombrozo via bitcoin-dev
> 1) We never really got to test things out; a fee market never really got created, we never got to see how fees would really work in practice.

The limit had nothing to do with fees. Satoshi explicitly wanted free transactions to last as long as possible.

Post by Eric Lombrozo via bitcoin-dev
> 2) Turns out the vast majority of validation nodes have little if anything to do with mining - validators do not get compensated; validation cost is externalized to the entire network.
Satoshi explicitly envisioned a future where only miners ran nodes, so it
had nothing to do with this either.

Validators validate for themselves. Calculating a local UTXO set and then
not using it for anything doesn't help anyone. SPV wallets need filtering
and serving capability, but a computer can filter and serve the chain
without validating it.

The only purposes non-mining, non-rpc-serving, non-Qt-wallet-sustaining
full nodes are needed for with today's network are:

1. Filtering the chain for bandwidth constrained SPV wallets (nb: you
can run an SPV wallet that downloads all transactions if you want). But
this could be handled by specialised nodes, just as we always imagined: in
future not every node will serve the entire chain, only special
"archival nodes".

2. Relaying validated transactions so SPV wallets can stick a thumb into
the wind and heuristically guess whether a transaction is valid or not.
This is useful for a better user interface.

3. Storing the mempool and filtering/serving it so SPV wallets can find
transactions that were broadcast before they started, but not yet included
in a block. This is useful for a better user interface.

Outside of serving lightweight P2P wallets there's no purpose in running a
P2P node if you aren't mining, or using it as a trusted node for your own
operations.

And if one day there aren't enough network nodes being run by volunteers to
service all the lightweight wallets, then we can easily create an incentive
scheme to fix that.


Post by Eric Lombrozo via bitcoin-dev
> 3) Miners don't even properly validate blocks. And the bigger the blocks get, the greater the propensity to skip this step. Oops!

Miners who don't validate have a habit of bleeding money: that's the system working as designed.

Post by Eric Lombrozo via bitcoin-dev
> 4) A satisfactory mechanism for thin clients to be able to securely obtain reasonably secure, short proofs for their transactions never materialized.
It did. I designed it. The proofs are short and "reasonably secure" in that
it would be a difficult and expensive attack to mount.

But as is so often the case with Bitcoin Core these days, someone who came
along much later has retroactively decided that the work done so far fails
to meet some arbitrary and undefined level of perfection. "Satisfactory"
and "reasonably secure" don't mean anything, especially not coming from
someone who hasn't done the work, so why should anyone care about that
opinion of yours?
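For scale, the proof Hearn refers to is a Merkle branch plus block headers; a rough size estimate (illustrative numbers, not bitcoinj's actual figures):

Code:
import math

# An SPV inclusion proof is one 32-byte sibling hash per Merkle tree level,
# so it grows only logarithmically with the number of transactions per block.
def merkle_proof_bytes(txs_per_block):
    levels = math.ceil(math.log2(txs_per_block))
    return levels * 32

for n in (4_000, 32_000):   # roughly 1 MB and 8 MB blocks at ~250 B per tx
    print(n, merkle_proof_bytes(n), "bytes")
# 4000 txs -> 384 bytes, 32000 txs -> 480 bytes: the block can grow 8x while
# the proof gains just one 32-byte hash per doubling.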

Bro, do you even blockchain?
-E Voorhees
sickpig
Legendary
Activity: 1260
Merit: 1008
July 29, 2015, 11:48:43 AM
#29424


The exchange is still going; if you have five mins just read it. Mike made a few other valid points imho in subsequent messages.
And those from Thomas Zander deserve a quote.

Quote from: Thomas Zander
> The only way to see how fees would work in practice is to have scarcity.

This skips over the question why you need a fees market. There really is no
reason that for the next 10 to 20 years there is a need for a fees market to
incentive miners to mine.  Planning that far ahead is doomed to failure.

Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
NewLiberty
Legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 12:35:24 PM
#29425

Lightning network and sidechains are not magical things, they're orthogonal and not alternatives to relay improvements. But since you bring them up: They're both at a _further_ level of development than IBLT at the moment; though given the orthogonality it's irrelevant except to note how misdirected your argument is...

I appreciate the further detail that you describe regarding the relay situation. So it seems that you are not optimistic that block propagation efficiency can have a major benefit for all full-nodes in the near future. Yet block propagation overhead is probably the single biggest argument for retaining the 1MB as long as possible, to cap volume growth to a certain extent.

I'm not sure about that.  My opinion is that the biggest argument for retaining the cap (or, at least, not introducing a 20x plus exponential increase or something like that) is the burden on full nodes due to (cumulative) bandwidth, processing power and, last but not least, disk space required.  Of course, for miners the situation may be different - but from my own point of view (running multiple full nodes for various things but not mining) the block relay issue is not so important.  As gmaxwell pointed out, except for reducing latency when a block is found it does only "little" by, at best, halving the total bandwidth required.  Compare this to the proposed 20x or 8x increase in the block size.

Bandwidth is always a much bigger concern than blockchain usage on disk. TB disks are very cheap, v0.11 has pruning.

Block (1) validation time as well as (2) propagation time are both issues for mining security and avoiding excessive forks/orphans which prolong effective confirmations.
They are separate issues however, and you can have either problem without the other in different blocks, or both together.

Both of these are exacerbated by block size.

As eager as I am for a block size increase for scalability, I'm not convinced that it is yet warranted given the risks...  The developers are aware of the issues.  They aren't goofing off.  It is just better to get it right than to get it done.
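Rough numbers behind the bandwidth-versus-disk point above (every figure here is an illustrative assumption, including the relay-overhead multiplier):

Code:
# Monthly cost of running a full node at 1 MB blocks, with relay overhead
# (serving blocks/txs to several peers) assumed at ~5x the raw chain bytes.
BLOCK_MB = 1.0
BLOCKS_PER_DAY = 144
RELAY_OVERHEAD = 5          # assumed multiplier for a well-connected node

disk_gb_per_month = BLOCK_MB * BLOCKS_PER_DAY * 30 / 1000       # ~4.3 GB
bandwidth_gb_per_month = disk_gb_per_month * RELAY_OVERHEAD     # ~21.6 GB
print(disk_gb_per_month, bandwidth_gb_per_month)

# Disk grows cumulatively but is cheap per GB and cappable via v0.11 pruning;
# bandwidth recurs every month and scales linearly with any size increase.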

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
thezerg
Legendary
Activity: 1246
Merit: 1010
July 29, 2015, 03:06:36 PM
#29426

decision to force a fee market is a centralized solution

On its face this is a nonsense argument since any development decisions are centralized in the same manner.

Increase the blocksize, decrease the blocksize, or leave it alone, they are all (centralized) development decisions.

It's also false that anything is really centralized about it because if there were truly a consensus for change (over the objections of the 'centralized' developers) there would be a successful fork.


Yes all dev decisions are essentially centralized, including the decision to NOT do something.  Since that is trivially true, I am talking about the effect of the decision.  And in one case miners can optimize their profitability by choosing to include transactions while in another case they are artificially limited.

Brg444 asked if I was for completely lifting the limit.  I would be from a theoretical perspective, but from practical engineering experience it makes sense to have what is called a "sanity check".  This is a limit that you expect to never be reached.  If it is, something is very wrong with the system.  Therefore if it were up to me, I would choose a bump to (say) 4 MB and then a periodic increase that mirrors txn adoption curves.

If we were creating a more complex solution in a less constrained environment, I would let miners expand the block with high-fee txns (which I described previously) and I would implement something that puts the limit at (say) 10x the average of last month's fee-paying txns.  The problem with the latter though is that a miner could artificially expand the block size by including a bunch of fake txns in his own blocks.  This can be solved with a payout pool (half the txn value goes into the pool) rather than a direct payout.
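A minimal sketch of that dynamic limit (the names, cap and multiplier are made up for illustration):

Code:
# Hard "sanity check" cap plus a soft limit tracking recent fee-paying
# traffic, per the scheme described above. All parameters are hypothetical.
SANITY_CAP_BYTES = 4_000_000     # a bump you expect never to be hit

def next_block_limit(fee_paying_bytes_per_block, multiple=10):
    """fee_paying_bytes_per_block: samples over roughly the last month."""
    avg = sum(fee_paying_bytes_per_block) / len(fee_paying_bytes_per_block)
    return min(int(avg * multiple), SANITY_CAP_BYTES)

# e.g. recent blocks averaging 300 kB of fee-paying transactions:
print(next_block_limit([250_000, 300_000, 350_000]))   # -> 3000000

# The gaming problem noted above: a miner can stuff his own fee-paying txns
# to inflate the average, hence the suggested payout pool rather than paying
# fees directly to the block's miner.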



Quote
The only way to make software secure, reliable, and fast is to make it small. Fight Features. - Andy Tanenbaum 2004

IDK about this out-of-context quote, but Tanenbaum's approach is to put every feature in isolation so if one has an issue the whole system does not go down.  Linux, Windows, etc instead take the philosophy that essential features should be placed together to increase performance and decrease overall complexity.  It does not matter if the whole fails when one does, because every feature is essential.  Ofc, things have gotten pretty lenient WRT "essential" features in Linux/Windows lately...

Gold is unique and was the most efficient soln for thousands of years, cementing its social perception of value.  Bitcoin at 1MB is more like the iPhone: it will be outcompeted in price (efficiency), just as the iPhone was before the majority of the world was even introduced to smartphones, with the obvious result that the majority of phones are Android.

Gold is not unique.  Silver.  QED.

Why do you speak of "fee market" in the singular?

By unique I was referring to people's inability to create new elements.  You can't just brew up gold 2.0 in the lab.  In that context silver is unique also.

Quote
Do you not understand  on- and off-chain fee markets will exist at Layers 1 and 2+, competing to be more efficient at bundling tx for eventual reconciliation with and inclusion into the Mother Blockchain?

You seem to, with the reference to the fact that "real markets evolve spontaneously and in a P2P manner to address real issues."

How does simply staying at 1MB (and rejecting the Red Queen interpretation) preclude such real markets' spontaneous evolution?

By what logic do you conflate a dearth of consensus for increased blocksize with "a centralized solution?"

Bitcoin and FinTech are invading the bankers' space because of inefficiency.  Technology in all domains replaces low-tech solutions due to inefficiency.  Deliberately introducing inefficiency into the system (forcing a fee where one is not needed -- from the user's perspective efficiency is fee/amount transferred) is basically asking for new technology to replace bitcoin.

The first apps to go will be colored coins (smart contracts) and immutable ledgers.  If these become useful, the money function of the chain that hosts them will have a much better operational efficiency (you don't have to move bitcoins into these coins, trade a stock, and then move them back).  If this coin/chain delivers similarly to Bitcoin on basic premises (scarce, decentralized, no premine etc) Bitcoin will be replaced.  In that case the BEST outcome for Bitcoin will be a gold analogy (store of value only).  However I have serious doubts about that because Bitcoin does not have gold's history, intrinsic value, etc.

Another outcome would be if a SC takes on the colored coins and immutable ledger function.  This would probably preserve Bitcoin's value since the SC is denominated in BTC.  Yet moving from SC to bitcoin chain is awkward, so as Cypherdoc likes to argue all the coins might drain out of the bitcoin chain over to the SC.  I differ from him in my analysis because in this case at least BTC's value is preserved.  However, I find it hard to believe that a startup company would produce a SC with no advantage to themselves... they'll either produce a non-SC chain and out-mine other people in the early days, or there'll be a per-txn fee going to the startup.





inca
Legendary
Activity: 1176
Merit: 1000
July 29, 2015, 03:56:09 PM
#29427

Quote from: thezerg on July 29, 2015, 03:06:36 PM
(full post quoted above)
Oh not the sidechains debate again! Smiley
NewLiberty
Legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 04:20:25 PM
#29428

Most people do not understand the block size problem(s), and so discount the complexity. 

Fundamentally, the block chain presents a classic Garrett Hardin "tragedy of the commons" problem.  Each miner wins by getting their block accepted into the block chain; the bigger the block and the more fees it can collect, the better.

Everyone else in "the commons" has costs commensurate with the block size except the winning miner.  Thus ultimately only miners are incentivised to maintain nodes and to do validation against the full chain.  This is all fine in the end game, where most are on board with Bitcoin.

We are nowhere near this today.  Those hoping we fail vastly outnumber those hoping we succeed, and have far greater resources.
So some carefulness is not wasted.  We will succeed simply by not failing.  The seeds of success are sown into the fabric of the Bitcoin code and architecture.

brg444
Hero Member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks
July 29, 2015, 05:22:43 PM
#29429

Mike H just wrecked Eric on bitcoindev.

Quote
Gregory Maxwell gmaxwell at gmail.com
Wed Jul 29 16:53:54 UTC 2015
On Wed, Jul 29, 2015 at 9:59 AM, Mike Hearn via bitcoin-dev
<bitcoin-dev at lists.linuxfoundation.org> wrote:
> I do love history lessons from people who weren't actually there.

I doubt the rest of us really enjoy hearing these "lessons" from you
where you wildly distort history to reflect your views.

> Satoshi explicitly envisioned a future where only miners ran nodes, so it
> had nothing to do with this either.

As others have pointed out -- even if this were true, so what?
Many errors were made early on in Bitcoin.

But in this case it's not actually true and I'm really getting fed up
with this continued self-appointment of all that the creator of the
system thought. Your position and knowledge is not special or
privileged compared to many of the people that you are arguing with.

It was _well_ understood while the creator of the system was around
that putting every consensus decision in the world into one system
would not scale; and also understood that the users of Bitcoin would
wish to protect its decentralization by limiting the size of the
chain to keep it verifiable on small devices.

Don't think you can claim otherwise, because doing so is flat out wrong.

In the above statement you're outright backwards -- there was a clear
expectation that all who ran nodes would mine. The delegation of
consensus to third parties was unforeseen. Presumably Bitcoin Core
making mining inaccessible to users in software was also unforeseen.

> Validators validate for themselves. Calculating a local UTXO set and then
> not using it for anything doesn't help anyone. SPV wallets need filtering
> and serving capability, but a computer can filter and serve the chain
> without validating it.
>
> The only purposes non-mining, non-rpc-serving, non-Qt-wallet-sustaining full
> nodes are needed for with today's network are:
[...]
> Outside of serving lightweight P2P wallets there's no purpose in running a
> P2P node if you aren't mining, or using it as a **trusted node for your own
> operations**.

You wrote a long list of activities that are actually irrelevant to
many node users, with the result of burying the main reason any party
should be running a node (emphasis mine).

The incentives of the system as it exists today demand that many other
economically significant parties run nodes in order to keep the half
dozen miners from having a blank check to do whatever they want
(including supporting their operations through inflation) -- do not
think they wouldn't, as we've seen they're happy to skip verification
entirely.

(Which, incidentally, is insanely toxic to any security argument for
SPV; and now we see the market failure that results from your and
Gavin's years-long campaign to ignore problems in the mining ecosystem:
the SPV model which you've fixated on as the true nature of bitcoin
has been demonstrated in practice to have a potentially empty security
claim.)

> Miners who don't validate have a habit of bleeding money:   that's the
> system working as designed.

The information I have currently is that the parties engaging in that
activity found it to be tremendously profitable, even including losses
from issues.

#rekt

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 06:08:37 PM
#29430

#rekt

talk about getting rekt.  Gregcoin my ass.

from Mike Hearn:

   >It was _well_ .... understood that the users of Bitcoin would wish to protect its decentralization by limiting the size of the chain to keep it verifiable on small devices.


No it wasn't. That is something you invented yourself much later. "Small devices" isn't even defined anywhere, so there can't have been any such understanding.

The actual understanding was the opposite. Satoshi's words:

"At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware."

That is from 2008:
 
   http://satoshi.nakamotoinstitute.org/emails/cryptography/2/#selection-75.16-83.14

Then he went on to talk about Moore's law and streaming HD videos and the like. At no point did he ever talk about limiting the system for "small devices".

I have been both working on and using Bitcoin for longer than you have been around, Gregory. Please don't attempt to bullshit me about what the plan was. And stop obscuring what this is about. It's not some personality cult - the reason I keep beating you over the head with Satoshi's words is because it's that founding vision of the project that brought everyone together, and gave us all a shared goal.

If Satoshi had said from the start,

   "Bitcoin cannot ever scale. So I intend it to be heavily limited and used only by a handful of people for rare transactions. I picked 1mb as an arbitrary limit to ensure it never gets popular."

... then I'd have not bothered getting involved. I'd have said, huh, I don't really feel like putting effort into a system that is intended to NOT be popular. And so would many other people.


   >Don't think you can claim otherwise, because doing so is flat out wrong.


I just did claim otherwise and no, I am not wrong at all.

    >(Which, incidentally, is insanely toxic to any security argument for
    SPV; and now we see the market failure that results from your and
    Gavin's years-long campaign to ignore problems in the mining ecosystem:



Since when have we "campaigned" to "ignore problems" in the mining ecosystem? What does that even mean? Was it not I who wrote this blog post?

    http://blog.bitcoinfoundation.org/mining-decentralisation-the-low-hanging-fruit/

Gregory, you are getting really crazy now. Stop it. The trend towards mining centralisation is not the fault of Gavin or myself, or anyone else. And SPV is exactly what was always intended to be used. It's not something I "fixated" on, it's right there in the white paper. Satoshi even encouraged me to keep working on bitcoinj before he left!


Look, it's clear you have decided that the way Bitcoin was meant to evolve isn't to your personal liking. That's fine. Go make an alt coin where your founding documents state that it's intended to always run on a 2015 Raspberry Pi, or whatever it is you mean by "small device". Remove SPV capability from the protocol so everyone has to fully validate. Make sure that's the understanding that everyone has from day one about what your alt coin is for. Then when someone says, gee, it'd be nice if we had some more capacity, you or someone else can go point at the announcement emails and say "no, GregCoin is meant to always be verifiable on small devices, that's our social contract and it's written into the consensus rules for that reason".

But your attempt to convert Bitcoin into that altcoin by exploiting a temporary hack is desperate, and deeply upsetting to many people. Not many quit their jobs and created companies to build products only for today's tiny user base.


My list of "things a full node is useful for" wasn't ordered by importance, by the way.



justusranvier
Legendary
Activity: 1400
Merit: 1013
July 29, 2015, 06:21:15 PM
#29431

This is a disaster.

On the one hand, we have a person who's actually telling the truth and also is the same guy who tried to break Tor, add blacklists, and other assorted nasty features.

On the other hand we have someone who tends to invent legitimately useful things, and also lie when it suits his purposes.



There's nothing worse than having to agree with the turd sandwich, because he's the only one telling the truth.
cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 06:59:39 PM
#29432

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:

[chart image]
cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 07:00:39 PM
#29433

[chart image]
cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 07:02:02 PM
#29434

clearing out the mempool quickly would solidify the attacker's losses and gratify the previous tx validation work already done by all full nodes in the network.
cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 07:05:32 PM
#29435

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:

[chart image]
NewLiberty
Legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 07:09:03 PM
#29436

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

NewLiberty
Legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 07:12:00 PM
#29437

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



Yes Smiley

More "attacks" please.

Just not really big blocks yet.

1-3MB are ok.  8MB is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10 mins to validate is not a good thing.

Fortunately with better code optimization, we may get that validation time down even more, as well as the other advances to make this safer.

We'll get there.

cypherdoc (OP)
Legendary
Activity: 1764
Merit: 1002
July 29, 2015, 07:32:48 PM
#29438

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real txs only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232,000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?
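the same arithmetic, spelled out (all inputs are the assumed figures above, not measurements):

Code:
# Back-of-the-envelope cost to keep 20 MB blocks full of spam, using the
# same assumptions as the paragraph above.
FEES_PER_FULL_MB_PER_DAY = 40   # BTC/day of fees seen at ~1 MB of spam
BLOCK_SIZE_MB = 20              # hypothetical new limit to fill
BTC_USD = 290

btc_per_day = FEES_PER_FULL_MB_PER_DAY * BLOCK_SIZE_MB   # 800 BTC/day
usd_per_day = btc_per_day * BTC_USD                      # 232,000 USD/day
usd_per_month = usd_per_day * 30                         # ~7,000,000 USD
print(btc_per_day, usd_per_day, usd_per_month)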

NewLiberty
Legendary
Activity: 1204
Merit: 1002
Gresham's Lawyer
July 29, 2015, 07:36:30 PM
#29439

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta, it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real txs only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232,000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

awemany
Newbie
Activity: 28
Merit: 0
July 29, 2015, 07:36:56 PM
#29440


1-3MB are ok.  8MB is too much right now (yes the miners are wrong, there's bad stuff they haven't seen yet).
Getting blocks that take >10 mins to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100kiB at the same time as the bigger-block hard fork comes into effect. Shouldn't this very much remove your worries about an increase in blocksize?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.