Bitcoin Forum
November 22, 2017, 06:19:51 AM *
News: Latest stable version of Bitcoin Core: 0.15.1  [Torrent].
 
Poll
Question: Will you support Gavin's new block size limit hard fork of 8MB by January 1, 2016 then doubling every 2 years?
1.  yes
2.  no

Author Topic: Gold collapsing. Bitcoin UP.  (Read 2011148 times)
lebing
Legendary
*
Offline

Activity: 1288

Enabling the maximal migration


View Profile
July 29, 2015, 10:31:55 AM
 #29441

Mike H just wrecked Eric on bitcoin-dev.

http://bitcoin-development.narkive.com/jsfcbpPz/why-satoshi-s-temporary-anti-spam-measure-isn-t-temporary#post10
Mike Hearn via bitcoin-dev 31 minutes ago

I do love history lessons from people who weren't actually there.

Let me correct your misconceptions.


Post by Eric Lombrozo via bitcoin-dev
Initially there was no block size limit - it was thought that the fee
market would naturally develop and would impose economic constraints on
growth.
The term "fee market" was never used back then, and Satoshi did not ever
postulate economic constraints on growth. Back then the talk was (quite
sensibly) how to grow faster, not how to slow things down!
Post by Eric Lombrozo via bitcoin-dev
But this hypothesis failed after a sudden influx of new uses. It was still
too easy to attack the network. This idea had to wait until the network was
more mature to handle things.
No such event happened, and the hypothesis of which you talk never existed.
Post by Eric Lombrozo via bitcoin-dev
Enter a “temporary” anti-spam measure - a one megabyte block size limit.
The one megabyte limit was nothing to do with anti spam. It was a quick
kludge to try and avoid the user experience degrading significantly in the
event of a "DoS block", back when everyone used Bitcoin-Qt. The fear was
that some malicious miner would generate massive blocks and make the wallet
too painful to use, before there were any alternatives.

The plan was to remove it once SPV wallets were widespread. But Satoshi
left before that happened.


Now on to your claims:

Post by Eric Lombrozo via bitcoin-dev
1) We never really got to test things out - a fee market never really got
created, we never got to see how fees would really work in practice.
The limit had nothing to do with fees. Satoshi explicitly wanted free
transactions to last as long as possible.
Post by Eric Lombrozo via bitcoin-dev
2) Turns out the vast majority of validation nodes have little if anything
to do with mining - validators do not get compensated validation cost is
externalized to the entire network.
Satoshi explicitly envisioned a future where only miners ran nodes, so it
had nothing to do with this either.

Validators validate for themselves. Calculating a local UTXO set and then
not using it for anything doesn't help anyone. SPV wallets need filtering
and serving capability, but a computer can filter and serve the chain
without validating it.

The only purposes non-mining, non-rpc-serving, non-Qt-wallet-sustaining
full nodes are needed for with today's network are:

1. Filtering the chain for bandwidth constrained SPV wallets (nb: you
can run an SPV wallet that downloads all transactions if you want). But
this could be handled by specialised nodes, just like we always imagined in
future not every node will serve the entire chain but only special
"archival nodes"

2. Relaying validated transactions so SPV wallets can stick a thumb into
the wind and heuristically guess whether a transaction is valid or not.
This is useful for a better user interface.

3. Storing the mempool and filtering/serving it so SPV wallets can find
transactions that were broadcast before they started, but not yet included
in a block. This is useful for a better user interface.

Outside of serving lightweight P2P wallets there's no purpose in running a
P2P node if you aren't mining, or using it as a trusted node for your own
operations.

And if one day there aren't enough network nodes being run by volunteers to
service all the lightweight wallets, then we can easily create an incentive
scheme to fix that.


Post by Eric Lombrozo via bitcoin-dev
3) Miners don't even properly validate blocks. And the bigger the blocks
get, the greater the propensity to skip this step. Oops!
Miners who don't validate have a habit of bleeding money: that's the
system working as designed.
Post by Eric Lombrozo via bitcoin-dev
4) A satisfactory mechanism for thin clients to be able to securely obtain
reasonably secure, short proofs for their transactions never materialized.
It did. I designed it. The proofs are short and "reasonably secure" in that
it would be a difficult and expensive attack to mount.
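The proof Hearn is describing is a Merkle branch: an SPV client checks that a transaction is committed to by a block header without downloading the block. A minimal sketch in Python (the helper names are my own illustration, not from any wallet codebase):

```python
import hashlib

def dhash(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_merkle_branch(txid: bytes, branch: list, index: int,
                         merkle_root: bytes) -> bool:
    """Check that `txid` is committed to by `merkle_root`.

    `branch` lists the sibling hashes from leaf to root; `index` is the
    transaction's position in the block, used to pick left/right order
    at each level.
    """
    h = txid
    for sibling in branch:
        if index & 1:                 # odd index: we are the right child
            h = dhash(sibling + h)
        else:                         # even index: we are the left child
            h = dhash(h + sibling)
        index >>= 1
    return h == merkle_root
```

The branch is only log2(n) hashes deep, which is why the proofs stay short even for very large blocks.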

But as is so often the case with Bitcoin Core these days, someone who came
along much later has retroactively decided that the work done so far fails
to meet some arbitrary and undefined level of perfection. "Satisfactory"
and "reasonably secure" don't mean anything, especially not coming from
someone who hasn't done the work, so why should anyone care about that
opinion of yours?

Bro, do you even blockchain?
-E Voorhees
sickpig
Legendary
*
Offline

Activity: 1218


View Profile
July 29, 2015, 11:48:43 AM
 #29442


The exchange on the mailing list is still going; if you have five mins just read it. Mike made a few other valid points imho in subsequent messages.
And those from Thomas Zander deserve a quote.

Quote from: Thomas Zander
> The only way to see how fees would work in practice is to have scarcity.

This skips over the question of why you need a fee market. There really is no
reason that for the next 10 to 20 years there is a need for a fee market to
incentivize miners to mine.  Planning that far ahead is doomed to failure.

Bitcoin is a participatory system which ought to respect the right of self determinism of all of its users - Gregory Maxwell.
NewLiberty
Legendary
*
Offline

Activity: 1162


Gresham's Lawyer


View Profile WWW
July 29, 2015, 12:35:24 PM
 #29443

Lightning network and sidechains are not magical things, they're orthogonal and not alternatives to relay improvements. But since you bring them up: They're both at a _further_ level of development than IBLT at the moment; though given the orthogonality it's irrelevant except to note how misdirected your argument is...

I appreciate the further detail that you describe regarding the relay situation. So it seems that you are not optimistic that block propagation efficiency can have a major benefit for all full-nodes in the near future. Yet block propagation overhead is probably the single biggest argument for retaining the 1MB limit as long as possible, to cap volume growth to a certain extent.

I'm not sure about that.  My opinion is that the biggest argument for retaining the cap (or, at least, not introducing a 20x plus exponential increase or something like that) is the burden on full nodes due to (cumulated) bandwidth, processing power and, last but not least, disk space required.  Of course, for miners the situation may be different - but from my own point of view (running multiple full nodes for various things but not mining) the block relay issue is not so important.  As gmaxwell pointed out, except for reducing latency when a block is found it does only "little" by, at best, halving the total bandwidth required.  Compare this to the proposed 20x or 8x increase in the block size.

Bandwidth is always a much bigger concern than blockchain usage on disk. TB disks are very cheap, v0.11 has pruning.

Block (1) validation time as well as (2) propagation time are both issues for mining security and avoiding excessive forks/orphans which prolong effective confirmations.
They are separate issues however, and you can have either problem without the other in different blocks, or both together.

Both of these are exacerbated by block size.
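The link between propagation time and mining risk can be made concrete. Treating block discovery as a Poisson process with a 600-second mean, the chance that a competitor finds a block while yours is still propagating is roughly 1 - e^(-t/600). A back-of-the-envelope sketch (my own illustration, not from the thread):

```python
import math

def orphan_probability(propagation_delay_s: float,
                       block_interval_s: float = 600.0) -> float:
    """Chance a competing block appears during the propagation delay,
    assuming Poisson block arrivals with a 10-minute mean interval."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

# A block that relays in ~2 s risks roughly 0.3% orphans;
# a 30 s propagation delay pushes that to roughly 4.9%.
```

This is why relay latency matters to miners out of proportion to raw bandwidth: the cost of slow propagation is paid in orphaned block rewards.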

As eager as I am for a block size increase for scalability, I'm not convinced that it is yet warranted given the risks...  The developers are aware of the issues.  They aren't goofing off.  It is just better to get it right than to get it done.

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
thezerg
Legendary
*
Offline

Activity: 1246


View Profile
July 29, 2015, 03:06:36 PM
 #29444

decision to force a fee market is a centralized solution

On its face this is a nonsense argument since any development decisions are centralized in the same manner.

Increase the blocksize, decrease the blocksize, or leave it alone, they are all (centralized) development decisions.

It's also false that anything is really centralized about it because if there were truly a consensus for change (over the objections of the 'centralized' developers) there would be a successful fork.


Yes all dev decisions are essentially centralized, including the decision to NOT do something.  Since that is trivially true, I am talking about the effect of the decision.  And in one case miners can optimize their profitability by choosing to include transactions while in another case they are artificially limited.

Brg444 asked if I was for completely lifting the limit.  I would be from a theoretical perspective, but from practical engineering experience it makes sense to have what is called a "sanity check".  This is a limit that you expect to never be reached.  If it is, something is very wrong with the system.  Therefore if it were up to me, I would choose a bump to (say) 4 MB and then a periodic increase that mirrors txn adoption curves.

If we were creating a more complex solution in a less constrained environment, I would let miners expand the block with high fee txns (which I described previously) and I would implement something that puts the limit at (say) 10x the average of last month's fee paying txns.  The problem with the latter though is that a miner could artificially expand the block size by including a bunch of fake txns in his own blocks.  This can be solved with a payout pool (half the txn value goes into the pool) rather than a direct payout.
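The two-part proposal above -- a rarely-hit sanity cap plus a limit tied to recent fee-paying volume -- can be sketched as follows (the function and its parameters are my illustration of the idea, not a concrete spec):

```python
def dynamic_block_limit(recent_fee_paying_bytes, multiple=10,
                        floor_bytes=4_000_000):
    """Hypothetical limit: `multiple` times the trailing average of
    fee-paying transaction bytes per block, never below a fixed floor
    (the ~4 MB 'sanity check' bump suggested above)."""
    avg = sum(recent_fee_paying_bytes) / len(recent_fee_paying_bytes)
    return max(floor_bytes, int(multiple * avg))
```

As the post notes, a miner could inflate the trailing average with fake fee-paying transactions in his own blocks, which is what the payout-pool idea is meant to blunt.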



Quote
The only way to make software secure, reliable, and fast is to make it small. Fight Features. - Andy Tanenbaum 2004

IDK about this out of context quote but Tanenbaum's approach is to put every feature in isolation so if one has an issue the whole system does not go down.  Linux, Windows, etc instead take the philosophy that essential features should be placed together to increase performance and decrease overall complexity.  It does not matter if the whole fails when one does because every feature is essential. Ofc, things have gotten pretty lenient WRT "essential" features in Linux/Windows lately...

Gold is unique and was the most efficient soln for thousands of years cementing its social perception of value.  Bitcoin at 1mb is more like the iphone.  It will be outcompeted in price (efficiency) before the majority of the world was even introduced to smartphones with the obvious result that the majority of phones are android.

Gold is not unique.  Silver.  QED.

Why do you speak of "fee market" in the singular?

By unique I was referring to people's inability to create new elements.  You can't just brew up gold 2.0 in the lab.  In that context silver is unique also.

Quote
Do you not understand  on- and off-chain fee markets will exist at Layers 1 and 2+, competing to be more efficient at bundling tx for eventual reconciliation with and inclusion into the Mother Blockchain?

You seem to, with the reference to the fact that "real markets evolve spontaneously and in a P2P manner to address real issues."

How does simply staying at 1MB (and rejecting the Red Queen interpretation) preclude such real markets' spontaneous evolution?

By what logic do you conflate a dearth of consensus for increased blocksize with "a centralized solution?"

Bitcoin and FinTech are invading the bankers' space because of its inefficiency.  Technology in all domains replaces low-tech solutions due to their inefficiency.  Deliberately introducing inefficiency into the system (forcing a fee where one is not needed -- from the user's perspective efficiency is fee/amount transferred) is basically asking for new technology to replace bitcoin.

The first apps to go will be colored coins (smart contracts) and immutable ledgers.  If these become useful, the money function of the chain that hosts them will have a much better operational efficiency (you don't have to move bitcoins into these coins, trade a stock, and then move them back).  If this coin/chain delivers similarly to Bitcoin on basic premises (scarce, decentralized, no premine etc) Bitcoin will be replaced.  In that case the BEST outcome for Bitcoin will be a gold analogy (store of value only).  However I have serious doubts about that because Bitcoin does not have gold's history, intrinsic value, etc.

Another outcome would be if a SC takes on the colored coins and immutable ledger function.  This would probably preserve Bitcoin's value since the SC is denominated in BTC.  Yet moving from SC to bitcoin chain is awkward so as Cypherdoc likes to argue all the coins might drain out of the bitcoin chain over to the SC.  I differ from him in my analysis because in this case at least BTC's value is preserved.  However, I find it hard to believe that a startup company would produce a SC with no advantage to themselves... they'll either produce a non-SC chain and out-mine other people in the early days or they'll be a per txn fee going to the startup.





inca
Legendary
*
Offline

Activity: 1162


View Profile
July 29, 2015, 03:56:09 PM
 #29445

Quote from: thezerg on July 29, 2015, 03:06:36 PM
[thezerg's post #29444, quoted in full above]

Oh not the sidechains debate again! Smiley
NewLiberty
Legendary
*
Offline

Activity: 1162


Gresham's Lawyer


View Profile WWW
July 29, 2015, 04:20:25 PM
 #29446

Most people do not understand the block size problem(s), and so discount the complexity. 

Fundamentally, the block chain presents a classic Garrett Hardin "tragedy of the commons" problem.  Each miner wins by getting their block accepted into the block chain: the bigger the block and the more fees it can carry, the better.

Everyone else in "the commons" has costs commensurate with the block size except the winning miner.  Thus ultimately only miners are incentivised to maintain nodes and to do validation against the full chain.  This is all fine in the end game, where most are on board with Bitcoin.

We are nowhere near this today.  Those hoping we fail vastly outnumber those hoping we succeed, and they have far greater resources.
So some carefulness is not wasted.  We will succeed simply by not failing.  The seeds of success are sown into the fabric of the Bitcoin code and architecture.

brg444
Hero Member
*****
Offline

Activity: 644

Bitcoin replaces central, not commercial, banks


View Profile
July 29, 2015, 05:22:43 PM
 #29447

Mike H just wrecked Eric on bitcoin-dev.

Quote
Gregory Maxwell gmaxwell at gmail.com
Wed Jul 29 16:53:54 UTC 2015
On Wed, Jul 29, 2015 at 9:59 AM, Mike Hearn via bitcoin-dev
<bitcoin-dev at lists.linuxfoundation.org> wrote:
> I do love history lessons from people who weren't actually there.

I doubt the rest of us really enjoy hearing these "lessons" from
you where you wildly distort history to reflect your views.

> Satoshi explicitly envisioned a future where only miners ran nodes, so it
> had nothing to do with this either.

As others have pointed out -- even if this were true, so what?
Many errors were made early on in Bitcoin.

But in this case it's not actually true and I'm really getting fed up
with this continued self-appointment of all that the creator of the
system thought. Your position and knowledge is not special or
privileged compared to many of the people that you are arguing with.

It was _well_ understood while the creator of the system was around
that putting every consensus decision in the world into one system
would not scale; and also understood that the users of Bitcoin would
wish to protect its decentralization by limiting the size of the
chain to keep it verifiable on small devices.

Don't think you can claim otherwise, because doing so is flat out wrong.

In the above statement you're outright backwards -- there was a clear
expectation that all who ran nodes would mine. The delegation of
consensus to third parties was unforeseen. Presumably Bitcoin Core
making mining inaccessible to users in software was also unforeseen.

> Validators validate for themselves. Calculating a local UTXO set and then
> not using it for anything doesn't help anyone. SPV wallets need filtering
> and serving capability, but a computer can filter and serve the chain
> without validating it.
>
> The only purposes non-mining, non-rpc-serving, non-Qt-wallet-sustaining full
> nodes are needed for with today's network are:
[...]
> Outside of serving lightweight P2P wallets there's no purpose in running a
> P2P node if you aren't mining, or using it as a **trusted node for your own
> operations**.

You wrote a long list of activities that are actually irrelevant to
many node users, with the result of burying the main reason any party
should be running a node (emphasis mine).

The incentives of the system as it exists today demand that many other
economically significant parties run nodes in order to keep the half
dozen miners from having a blank check to do whatever they want
(including supporting their operations through inflation) -- do not
think they wouldn't, as we've seen they're happy to skip verification
entirely.

(Which, incidentally, is insanely toxic to any security argument for
SPV -- and now we see the market failure that results from your and
Gavin's years-long campaign to ignore problems in the mining ecosystem:
the SPV model which you've fixated on as the true nature of bitcoin
has been demonstrated in practice to have a potentially empty security
claim.)

> Miners who don't validate have a habit of bleeding money:   that's the
> system working as designed.

The information I have currently is that the parties engaging in that
activity found it to be tremendously profitable, even including losses
from issues.

#rekt

"I believe this will be the ultimate fate of Bitcoin, to be the "high-powered money" that serves as a reserve currency for banks that issue their own digital cash." Hal Finney, Dec. 2010
cypherdoc
Legendary
*
Offline

Activity: 1764



View Profile
July 29, 2015, 06:08:37 PM
 #29448

#rekt

talk about getting rekt.  Gregcoin my ass.

from Mike Hearn:

   >It was _well_ .... understood that the users of Bitcoin would wish to protect its decenteralization by limiting the size of the chain to keep it verifyable on small devices.


No it wasn't. That is something you invented yourself much later. "Small devices" isn't even defined anywhere, so there can't have been any such understanding.

The actual understanding was the opposite. Satoshi's words:

"At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware."

That is from 2008:
 
   http://satoshi.nakamotoinstitute.org/emails/cryptography/2/#selection-75.16-83.14

Then he went on to talk about Moore's law and streaming HD videos and the like. At no point did he ever talk about limiting the system for "small devices".

I have been both working on and using Bitcoin for longer than you have been around, Gregory. Please don't attempt to bullshit me about what the plan was. And stop obscuring what this is about. It's not some personality cult - the reason I keep beating you over the head with Satoshi's words is because it's that founding vision of the project that brought everyone together, and gave us all a shared goal.

If Satoshi had said from the start,

   "Bitcoin cannot ever scale. So I intend it to be heavily limited and used only by a handful of people for rare transactions. I picked 1mb as an arbitrary limit to ensure it never gets popular."

... then I'd have not bothered getting involved. I'd have said, huh, I don't really feel like putting effort into a system that is intended to NOT be popular. And so would many other people.


   >Don't think you can claim otherwise, because doing so is flat out wrong.


I just did claim otherwise and no, I am not wrong at all.

    >(Which, incidentally, is insanely toxic to any security argument for
    SPV -- and now we see the market failure that results from your and
    Gavin's years-long campaign to ignore problems in the mining ecosystem:



Since when have we "campaigned" to "ignore problems" in the mining ecosystem? What does that even mean? Was it not I who wrote this blog post?

    http://blog.bitcoinfoundation.org/mining-decentralisation-the-low-hanging-fruit/

Gregory, you are getting really crazy now. Stop it. The trend towards mining centralisation is not the fault of Gavin or myself, or anyone else. And SPV is exactly what was always intended to be used. It's not something I "fixated" on, it's right there in the white paper. Satoshi even encouraged me to keep working on bitcoinj before he left!


Look, it's clear you have decided that the way Bitcoin was meant to evolve isn't to your personal liking. That's fine. Go make an alt coin where your founding documents state that it's intended to always run on a 2015 Raspberry Pi, or whatever it is you mean by "small device". Remove SPV capability from the protocol so everyone has to fully validate. Make sure that's the understanding that everyone has from day one about what your alt coin is for. Then when someone says, gee, it'd be nice if we had some more capacity, you or someone else can go point at the announcement emails and say "no, GregCoin is meant to always be verifiable on small devices, that's our social contract and it's written into the consensus rules for that reason".

But your attempt to convert Bitcoin into that altcoin by exploiting a temporary hack is desperate, and deeply upsetting to many people. People didn't quit their jobs and create companies to build products only for today's tiny user base.


My list of "things a full node is useful for" wasn't ordered by importance, by the way.



justusranvier
Legendary
*
Offline

Activity: 1400



View Profile WWW
July 29, 2015, 06:21:15 PM
 #29449

This is a disaster.

On the one hand, we have a person who's actually telling the truth and also is the same guy who tried to break Tor, add blacklists, and other assorted nasty features.

On the other hand we have someone who tends to invent legitimately useful things, and also lie when it suits his purposes.



There's nothing worse than having to agree with the turd sandwich, because he's the only one telling the truth.
cypherdoc
Legendary
*
Offline

Activity: 1764



View Profile
July 29, 2015, 06:59:39 PM
 #29450

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant.
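How fast bigger blocks would drain a backlog is simple arithmetic. A rough sketch (the backlog and traffic figures are assumed for illustration, not taken from the attack data):

```python
def blocks_to_clear(backlog_bytes: int, block_size_bytes: int,
                    organic_bytes: int = 0) -> int:
    """Blocks needed to clear a mempool backlog, assuming each block
    spends its spare capacity (size minus organic demand) on backlog."""
    spare = block_size_bytes - organic_bytes
    if spare <= 0:
        raise ValueError("no spare capacity: the backlog never clears")
    return -(-backlog_bytes // spare)  # ceiling division

# A 30 MB spam backlog with 1 MB blocks and ~400 kB of organic traffic
# takes ~50 blocks (~8 hours); with 8 MB blocks it clears in 4 (~40 min).
```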

cypherdoc
Legendary
*
Offline

Activity: 1764



View Profile
July 29, 2015, 07:00:39 PM
 #29451

cypherdoc
Legendary
*
Offline

Activity: 1764



View Profile
July 29, 2015, 07:02:02 PM
 #29452

clearing out the mempool quickly would solidify the attacker's losses and reward the previous tx validation work already done by all full nodes in the network.
cypherdoc
Legendary
*
Offline

Activity: 1764



View Profile
July 29, 2015, 07:05:32 PM
 #29453

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools.

NewLiberty
Legendary
*
Offline

Activity: 1162


Gresham's Lawyer


View Profile WWW
July 29, 2015, 07:09:03 PM
 #29454

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant.



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta; it's so young.  Let's give it the chance to grow up without breaking it along the way.

FREE MONEY1 Bitcoin for Silver and Gold NewLibertyDollar.com and now BITCOIN SPECIE (silver 1 ozt) shows value by QR
Bulk premiums as low as .0012 BTC "BETTER, MORE COLLECTIBLE, AND CHEAPER THAN SILVER EAGLES" 1Free of Government
NewLiberty
Legendary
*
Offline Offline

Activity: 1162


Gresham's Lawyer


View Profile WWW
July 29, 2015, 07:12:00 PM
 #29455

notice how the extra fees paid by full blocks actually strengthened the mining hashrate during the last attacks.  probably the extra revenue from tx fees encouraged miners to bring on more hashrate.  that's a good thing and could actually be even better if they were allowed to harvest/clear all the additional fees in the bloated mempools:



Yes Smiley

More "attacks" please.

Just not really big blocks yet.

1-3 MB is OK.  8 MB is too much right now (yes, the miners are wrong; there's bad stuff they haven't seen yet).
Getting blocks that take >10 minutes to validate is not a good thing.

Fortunately, with better code optimization we may get that validation time down even further, along with the other advances that will make this safer.

We'll get there.

cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 29, 2015, 07:32:48 PM
 #29456

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta; it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?
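cypherdoc's arithmetic above can be checked directly. Here is the same back-of-the-envelope estimate, taking his figures (~40 BTC/day of spam fees at current block sizes, ~500 kB/block of genuine traffic so the attacker must fill essentially the whole 20 MB himself, $290/BTC) as given:

```python
# Reproducing the back-of-the-envelope attack cost estimate from the post above.
# All input figures are cypherdoc's assumptions, not measured values.
block_size_mb = 20        # hypothetical block size; real traffic is only ~500 kB/block
fees_per_day_btc = 40     # spam fees observed per day at ~1 MB blocks
btc_usd = 290             # assumed exchange rate

# Linear scaling of spam fees up to filling 20 MB blocks:
daily_cost_btc = block_size_mb * fees_per_day_btc    # 800 BTC/day
daily_cost_usd = daily_cost_btc * btc_usd            # $232,000/day
monthly_cost_usd = daily_cost_usd * 30               # ~$7 million for one month
```

The linear scaling (fees grow in proportion to the spam volume needed to fill the block) is cypherdoc's premise, not an established fee model.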

NewLiberty
Legendary
*
Offline Offline

Activity: 1162


Gresham's Lawyer


View Profile WWW
July 29, 2015, 07:36:30 PM
 #29457

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta; it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20MB*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

awemany
Newbie
*
Offline Offline

Activity: 28


View Profile
July 29, 2015, 07:36:56 PM
 #29458


1-3 MB is OK.  8 MB is too much right now (yes, the miners are wrong; there's bad stuff they haven't seen yet).
Getting blocks that take >10 minutes to validate is not a good thing.


I believe Gavin wants to limit TXN size to 100 kiB at the same time the bigger-block hard fork comes into effect. Shouldn't that largely remove your worries about an increase in block size?

And, yes, I agree, faster validation is even better than putting another artificial limit into the code base.
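The rule awemany describes (a per-transaction size cap alongside a bigger block cap) can be sketched as a simple validity check. The function name and the 8 MB block figure are illustrative assumptions, not Bitcoin Core's actual code or Gavin's exact proposal:

```python
# Hypothetical sketch of a per-transaction size cap combined with a larger
# block size limit. Constants and names are illustrative only.
MAX_TX_SIZE = 100 * 1024           # 100 kiB per-transaction cap (proposed)
MAX_BLOCK_SIZE = 8 * 1000 * 1000   # e.g. 8 MB block cap after a hard fork

def check_block(tx_sizes):
    """Reject a block that exceeds the block cap or contains an oversized tx."""
    if sum(tx_sizes) > MAX_BLOCK_SIZE:
        return False
    return all(size <= MAX_TX_SIZE for size in tx_sizes)
```

The point of the per-tx cap is that it bounds the worst-case validation time of any single transaction even as the block itself grows.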
iCEBREAKER
Legendary
*
Offline Offline

Activity: 1834


[LOL2X]


View Profile WWW
July 29, 2015, 07:37:10 PM
 #29459

The introduction of sidechains, LN, and whatever other solutions is a complicated solution. It's also a solution motivated by a desire to fix a perceived economic issue, rather than sticking to the very simple issue at hand. It is the very opposite of what you are claiming to be important: that software should be kept simple.

That is a contradiction.

The simple solution is to remove the artificial cap. A cap that was put in place to prevent DDOS.

Your reference to CVE-2013-2292 is just distraction. It is a separate issue, one that exists now and would continue to exist with a larger block size.

Bloating Layer 1 is a complicated solution; scaling at Layer 2+ is an elegant one.

You still don't understand Tannenbaum's maxim.  Its point isn't 'keep software simple FOREVER NO MATTER WHAT.'  That is your flawed simpleton's interpretation.

"Fighting features" means ensuring a positive trade-off in terms of security and reliability, instead of carelessly and recklessly heaping on additional functionality without the benefit of an adversarial process which tests their quality and overall impact.

One does not simply "remove the artificial cap."  You may have noticed some degree of controversy in regard to that proposal.  Bitcoin is designed to strenuously resist (i.e. fight) hard forks.  Perhaps you were thinking of WishGrantingUnicornCoin, which leaps into action the moment anyone has an idea and complies with their ingenious plan for whatever feature or change they desire.

Like DoS, CVE-2013-2292, as an issue that exists now, is fairly successfully mitigated by the 1MB cap.  It is not a separate concern because larger blocks exacerbate the problem in a superlinear manner.  You don't get to advocate 8MB blocks, but then wave your hands around eschewing responsibility when confronted with the immediate entailment of purposefully constructed 8MB tx taking 64 times longer to process than a 1MB one.  The issue is intrinsic to larger blocks, which is why Gavin proposed a 100k max tx size be married to any block size increase.
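The 64× figure follows from the quadratic scaling of legacy signature hashing: each of a transaction's inputs re-hashes roughly the whole transaction, so a maliciously constructed tx validates in time proportional to the square of its size. A toy model of that scaling (an assumption-level sketch, not Bitcoin Core's validator):

```python
# Toy model of quadratic-hashing validation cost. With legacy signature
# hashing, each of n inputs re-hashes ~the whole transaction, so total work
# grows with (number of inputs) * (tx size), i.e. ~size^2 for a tx whose
# size is dominated by its inputs.
def relative_validation_cost(tx_size_mb):
    return tx_size_mb ** 2

# Under this model an 8 MB tx costs 64x as much to validate as a 1 MB one:
ratio = relative_validation_cost(8) / relative_validation_cost(1)
```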

Fully parsed, what you are claiming is

Quote
The simple solution is to [s]remove the artificial cap[/s] hard fork Bitcoin.

Do you realize how naive that makes you look?



Monero
"The difference between bad and well-developed digital cash will determine
whether we have a dictatorship or a real democracy." 
David Chaum 1996
"Fungibility provides privacy as a side effect."  Adam Back 2014
Buy and sell XMR near you
P2P Exchange Network
Buy XMR with fiat
cypherdoc
Legendary
*
Offline Offline

Activity: 1764



View Profile
July 29, 2015, 07:50:22 PM
 #29460

i think this proves that miners are perfectly capable of handling bigger blocks as evidenced by the "no delays" in confirmation intervals.  if anything i find it interesting that they get sped up to <10 min during these attacks.  with bigger blocks, we'd clear out that mempool in an instant:



Well, no.
Bigger blocks just make the "attacks" marginally more expensive.  
Individually, I could also swamp the mempool at any of the proposed sizes.

The "attacks" are providing some funding, so I don't really mind them all that much.  This tragedy of the commons is not quite so tragic as the UTXO set growth and chain size would be with a similar attack on larger blocks, especially if combined with a validation time attack (TX with many inputs and outputs in a tangled array).
Very large blocks would be a much more vulnerable environment in which to conduct such an attack.  We'll get to the point of ameliorating these vulnerabilities, and the right people are working on it.

Patience...  Bitcoin is still in beta; it's so young.  Let's give it the chance to grow up without breaking it along the way.

so looking at the peak of the last attack ~40BTC per day worth of tx fees were harvested.  let's say we go to 20MB blocks tomorrow.  since real tx's only amount to ~500kB per block worth of data, you'd have to essentially fill the 20MB all by yourself and spend 20MB*40BTC=800BTC per day give or take.  at $290/BTC that would equal $232000 per day.  and then you'd have to sustain that to even dent the network; say for a month.  that would cost you $7,000,000 for one month.  you sure you can do that all by yourself?



Why would you doubt it?

b/c i'm assuming it would be cost prohibitive for you?  and pretty much anyone else, i might add, although i'm sure you'd argue not for a gvt; which actually might be true, and is why i advocate NO LIMIT: it would throw a huge amount of uncertainty over just how successfully one could jack the mempool in that scenario, since it removes user disruption and thus greatly increases the financial risk ANY attacker would be subject to.

Quote
But in your example of 20MB, there are much easier and cheaper ways to DoS the network, as mentioned.

i know; your normal-sized, convoluted multi-input block would have to, btw, be built from non-standard tx's and self-mined by a miner, which makes it unlikely b/c you'd have to believe a miner would be willing to risk his reputation and financial viability by attacking the network in a way that has repercussions. that's also good news in that it rules out a regular spammer or gvt attacker (non-miner) doing this. and finally, that type of normal-sized but slow-to-validate block will propagate VERY slowly, which risks orphaning, which makes that attack also unlikely.