cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 06, 2015, 10:10:48 PM |
|
no, memory is not just used for 1MB blocks. it's also used to store the mempools plus the UTXO set. large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

have the potential to collapse a full node by overloading the memory. at least, that's what they've been arguing.

"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger. Super weird that you're arguing that the Bitcoin network is overloaded at the average level of space usage in blocks, while you're calling your own system "under utilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap.

There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes. The mempools of nodes with identical fee and filtering policies which are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
well, that was precisely Peter's mathematical point the other day that you summarily dismissed. f2pool and Antminer are NOT in a similar position on the network, as they are behind the Great Firewall of China. they have in fact changed their verification policies, as a defensive measure, in response to what they deem are large, full blocks. that's why their average validation times are 16-37 sec long and NOT the 80 ms you claim. thus, their validation times of large blocks will go up, and so will their number of 0 tx SPV defensive blocks. and that's why they've stated that they will continue to mine SPV blocks. thanks for making his point.

PeterR wasn't saying anything about mempools, and, in fact, he responded expressing doubt about your claim that mempool size had anything to do with this. Moreover, I gave instructions that allow _anyone_ to measure verification times for themselves. Your argument was that miners would be burned by unconfirmed transactions; I responded that this isn't true -- in part because they can keep whatever mempool size they want.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:

$ ~/bitcoin/src/bitcoin-cli getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}
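Incidentally, the getmempoolinfo figures quoted above imply an average transaction size of roughly 900 bytes; a quick sanity check:

```python
# Implied average transaction size from the getmempoolinfo output above.
mempool = {"size": 301, "bytes": 271464}
avg_tx_bytes = mempool["bytes"] / mempool["size"]
print(round(avg_tx_bytes))  # ~902 bytes per transaction
```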
it also is a clear sign that miners do have the ability and financial self-interest to restrict block sizes and prevent bloat in the absence of a block limit.

Their response was not to use smaller blocks; their response was to stop validating entirely. (And, as I pointed out, other miners are apparently mining without validating and still including transactions.)

these SPV-related forks have only occurred, for the first time ever, now, during this time period where spammers are filling up blocks and jacking up the mempool. full blocks have been recognizable as 950+ and 720+ kB. this is undeniable.
If we're going to accept that every correlation means causation, what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject? In this case, these forks are only visible by someone mining an invalid block, which no one had previously done for over a year.

if they are seeing increased orphans, why haven't they retracted their support of Gavin's proposal?

They are no longer seeing any orphans at all; they "solved" them by skipping validation entirely. They opposed that initial proposal, in fact, and suggested they could at most handle 8MB, which brought about a new proposal which used 8MB instead of 20MB, though only for a limited time. Even there the 8MB was predicated on their ability to do verification-free mining, which they may be rethinking now.

i don't believe that.

I am glad to explain things to people who don't understand, but you've been so dogmatically grinding your view that it's clear that every piece of data you see will only "confirm" things for you; in light of that I don't really have unbounded time to waste trying. Perhaps someone else will.

On my phone now, so this is going to be hard to respond to. First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread, so you are just being contrary. Second, my reference to Peter's argument above said nothing about mempools; I was talking about block verification times. You're obfuscating again. Third, SPV mining of 0 tx blocks like now doesn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level if other, larger miners were allowed to clear out the unconfirmed TX set. Fourth, you have no shame, do you, with the ad hominems? No, I'm not endorsing any company, like I told everyone ahead of time I was doing for HF.
|
|
|
|
tvbcof
Legendary
Offline
Activity: 4746
Merit: 1282
|
|
July 06, 2015, 10:11:55 PM |
|
... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?
I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard. A bit too hard, I'd say.

He's losing his support base. The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well. If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
traderCJ
|
|
July 06, 2015, 10:19:35 PM |
|
... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?
I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard. A bit too hard, I'd say. He's losing his support base. The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well. If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good.

Rather amazing that he is still posting. For starters, his counsel should have advised him to stop posting here.
|
|
|
|
tvbcof
Legendary
Offline
Activity: 4746
Merit: 1282
|
|
July 06, 2015, 10:23:20 PM |
|
... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?
I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard. A bit too hard, I'd say. He's losing his support base. The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well. If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good. Rather amazing that he is still posting. For starters, his counsel should have advised him to stop posting here.

Yup. He had an opportunity to save face and bow out that way, but he's blown that one. Now he'll have to think of a different way, or hopefully stick around and continue to show the world the Gavinistas' true colors.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
Odalv
Legendary
Offline
Activity: 1414
Merit: 1000
|
|
July 06, 2015, 10:27:20 PM |
|
cypherdoc, who is paying you now ? (KNC ?)
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 06, 2015, 10:29:47 PM |
|
cypherdoc, who is paying you now ? (KNC ?)
LOL, no one.
|
|
|
|
Odalv
Legendary
Offline
Activity: 1414
Merit: 1000
|
|
July 06, 2015, 10:33:29 PM |
|
cypherdoc, who is paying you now ? (KNC ?)
LOL, no one. Then you are losing money. :-)
|
|
|
|
BlindMayorBitcorn
Legendary
Offline
Activity: 1260
Merit: 1116
|
|
July 06, 2015, 10:37:54 PM |
|
cypherdoc, who is paying you now ? (KNC ?)
Here bro. I heard you liked tactical pitchforks
|
Forgive my petulance and oft-times, I fear, ill-founded criticisms, and forgive me that I have, by this time, made your eyes and head ache with my long letter. But I cannot forgo hastily the pleasure and pride of thus conversing with you.
|
|
|
tvbcof
Legendary
Offline
Activity: 4746
Merit: 1282
|
|
July 06, 2015, 10:38:34 PM |
|
no, memory is not just used for 1MB blocks. it's also used to store the mempools plus the UTXO set. large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.) have the potential to collapse a full node by overloading the memory. at least, that's what they've been arguing.
"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger. Super weird that you're arguing that the Bitcoin network is overloaded with average of space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap. ...

Thanks for this tid-bit about the UTXO database. This is the kind of info that someone who is mildly familiar with database technology, but doesn't really want to make a life's work of studying the technicals, finds cumbersome to pick out -- especially since modern Bitcoin is already past what is realistic to run behind my ($80/mo) connectivity, so unless/until I set up a VM somewhere it's kind of a textbook exercise.

Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered whether people have made projections about the performance of the validation process under different scenarios and whether they can/will become problematic. One thing I've always wondered is whether it would be possible to structure transactions such that they would load the validation process too heavily on cue, particularly if it is the common case to push more and more data out of the dbcache. Any thoughts on this that can be quickly conveyed?
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
TPTB_need_war
|
|
July 06, 2015, 11:14:42 PM |
|
I favor Adam Backamoto's
stop equating Adam to Satoshi. no contest. you have a serious Daddy problem.

Nowhere near as serious as those who consider cypherdoc to be some sort of daddy figure. There are probably vastly fewer who consider you to be 'the LeBron James of Bitcoin' than you and your attorney might imagine. Probably there are a handful, though, which is pretty sad. The LeBron assertion is hilariously funny, though, one way or another. Whether it was you or your attorney who came up with that one, kudos for the comic relief.

It behoves him to continue posting and proving his thread has the largest readership on bitcointalk by far, because it assures he will win his case... ...the more you guys fight him and post here, the more you help him retain 3000 BTC (perhaps in collusion with HF if they aren't just derelict, and who knows, perhaps even the judge via his well-connected Obama legal counsel -- that being wild speculation, not an accusation...). Don't you realize he is either making the technical errors on purpose or it is a strategy he inherited by dumbdorc luck! If they wanted to win, they wouldn't argue that DorkyDoc didn't do adequate promotion (because there is an entire thread on this forum showing he did, and now we have a Core dev admitting he invested 100 BTC based on Dorc's thread, which adds validity to his promotional value). Rather they would...
P.S. Gmax, you committed a category error. It doesn't matter to his case if he slobbers on the technology (because so many people can't understand the technology, including some of the readers here, the attorneys, and the judge); it only matters that he has a huge following.
|
|
|
|
tvbcof
Legendary
Offline
Activity: 4746
Merit: 1282
|
|
July 06, 2015, 11:16:23 PM |
|
It behoves him to continue posting and proving his thread has the largest readership on bitcointalk by far because it assures he will win his case...
...the more you guys fight him and post here, the more you help him retain 3000 BTC.
Don't you realize he is making the technical errors on purpose!
Good catch. I had not thought of that.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
TPTB_need_war
|
|
July 06, 2015, 11:22:42 PM |
|
The coin needs to be the first legitimate instance of its kind, had a fair start/emission, and a market niche
-----------------------------------------------------------------------------------------------------------------
Litecoin FAIL (not the first of its kind)
Peercoin FAIL (no market niche)
Bytecoin FAIL (not fair start)
Boolberry FAIL (not the first of its kind)
Ethereum FAIL (questionable start)
All shitcoins FAIL (2-3 counts)
Only BTC and XMR fulfill all conditions, so it makes sense to invest into them (and them alone). To be fully hedged, you can keep 99.8% in BTC and set 0.2% aside in XMR. Going over this ratio, is overinvesting in XMR.
It is not hard to come to these understandings after a generous overview of the top 50 altcoins, which is why I'm as unimpressed with LTC's market as with its innovative features (none).

+1 on rpietila's logic. I would only add that it needs a reasonable shot at attaining critical mass, so the niche needs to be evaluated for that probability. I am thinking Trapqoin has a potentially large market
|
|
|
|
laurentmt
|
|
July 06, 2015, 11:29:54 PM |
|
for each block in the Blockchain, which will help answer Q1. Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
statoshi.info might help! EDIT: the export feature is in the "wheel" entry of the menu
|
|
|
|
TPTB_need_war
|
|
July 06, 2015, 11:50:13 PM Last edit: July 07, 2015, 12:19:24 AM by TPTB_need_war |
|
Without reading every page in this thread, I'll add my two cents worth here.
I can't see a reason why Gold can't rise along with Bitcoin at the moment, just at different rates. Whereas Bitcoin can approach $1000 again by the end of the year (nearly 4x the current price), similarly Gold can approach $2000 by the end of the year (nearly 2x the current price). Neither Bitcoin nor Gold is undermined by debt, compared to all the trillions of dollars in stocks and bonds which are leveraged to general confidence in elite lending strategies.
my my, you are severely out of context considering the last 1000 or so pages you should have read here. anyway, do not forget how the gold market is rigged, rotten from its heart by the FED Masters, who nonetheless deem it worth accumulating/stealing shit tons of it @FortKnox. bitcorn and popcoin are cheap now too tho

I believe both of you are so incredibly removed from reality that it boggles my mind. Let me try to help you, and I mean that sincerely.

We are coming into a low for private assets[1] because for the moment the contagion in Europe is driving international capital flows (capital follows capital due to the wealth effect, where Δ flow != Δ mcap) into the short end of the bond curve in the core EU economies, in particular Germany (and away from the long end and the peripheral EU bond markets). October will be the bottom for private assets[1], after which they will begin to rise again as they did after their 2008 implosion (the dollar and US stocks were making a phase transition from public to private alignment over this period). So you will see a radical bottom in gold and BTC roughly this Sept or Oct, probably south of $850 and $150. I am thinking possibly double digits for BTC, with $100 as a psychological barrier that is necessary to shake out all the fools who bought at $600. New all-time highs for private assets will come in 2016 or 2017.

By the end of 2017, the dollar and US stocks will fall away from private assets as the influx of safe-haven capital will have peaked and the strong dollar will have choked off the US economy. Then the USA will go over the cliff in 2018, being the last economy still standing in the world, taking us into a global economic abyss of epic deflation. Private assets will skyrocket after 2017, but they will also be hunted by the governments like mad-dog Rottweilers munching on your arm (a former US Treasury official: "we will burn the fingertips of goldbugs up to their armpits").

As for the manipulation thesis, it is utter nonsense.
For example, Armstrong totally annihilated Fekete's backwardation mumbo-jumbo, especially since Armstrong is the one who taught the Arabs to lease their gold to earn income, to work around the anti-usury provision of the Islamic religion. Numerous other essays from Armstrong have explained why the nutter tinfoil hats are delusional about the manipulation argument. I don't have time to repeat all that. Do yourself a research favor.

[1] which includes the dollar, US stocks, gold, and bitcoin, because these are all aligned now as safe havens juxtaposed against the European contagion, i.e. the dollar and the USA will receive a stampede of capital fleeing Europe after the October 2015 BIG BANG explosion of the sovereign debt (loaned from EU banks) contagion in Europe.
|
|
|
|
gmaxwell
Staff
Legendary
Offline
Activity: 4270
Merit: 8805
|
|
July 07, 2015, 12:09:19 AM |
|
for each block in the Blockchain, which will help answer Q1. Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
No idea, I'm not aware of anything that tracks that -- also, what does "typical" mean? Do you mean stock unmodified Bitcoin Core?

I expect correlation between empty blocks and mempool size -- though not for the reason you were expecting here: CreateNewBlock takes a long time, easily as much as 100ms, as it sorts the mempool multiple times -- and no one has bothered optimizing this at all because the standard mining software will mine empty blocks while it waits for the new transaction list. So work generated in the first hundred milliseconds or so after a new block will usually be empty. (Of course miners stay on the initial work they got for a much longer time than 100ms.) This is, however, unrelated to SPV mining -- in that case everything is still verified. As many people have pointed out (even in this thread) the interesting thing here isn't empty blocks, it's the mining on an invalid chain.

And before someone runs off with an argument that this aspect of the behavior somehow defines an upper limit -- optimizing the mempool behavior would be trivial if anyone cared to; presumably people will care to when the fees they lose are non-negligible. Beyond eliminating the inefficient copying and such, the simple expedient of running a two-stage pool, where block creation is done against a smaller pool that contains only enough transactions for 2 blocks (and which is refilled from a bigger one), would eliminate virtually all the cost. Likewise, as I pointed out up-thread, incrementing your minfee can make your mempool as small as you like (the data I captured before was at a time when nodes with a default fee policy had 2.5 MB mempools).

First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread so you are just being contrary.
Uh. I don't care what the consensus of the "Gold collapsing" thread is; the UTXO set is not stored in memory. It's stored on disk, in the .bitcoin/chainstate directory. (And as you may note, a full node at initial startup uses much less memory than the current size of the UTXO set.) Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of RAM as you claim.

Second, my reference to Peter's argument above said nothing about mempools; I was talking about block verification times. You're obfuscating again.

In your message to me you argued that f2pool was SPV mining because "the" mempool was big. I retorted that their mempool has nothing to do with it, and besides, they can make their mempool as small as they want. You argued that the mempools were the same; I pointed out that they were not. You responded claiming my responses were inconsistent with the points about verification delay; and I then responded that no -- those comments were about verification delay, not the mempool. The two are unrelated. You seem to have taken as axiomatic that mempool == verification delay, a position which is technically unjustified but supports your preordained conclusions; then you claim I'm being inconsistent when I simply point out that these things are very different and not generally related.

Third, SPV mining of 0 tx blocks like now doesn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level if other, larger miners were allowed to clear out the unconfirmed TX set.
I think your phone made your response too short here; I'm not sure where you're going with that. When you're back on a real computer, I'd also like to hear your response to my thought that it is "Super weird that you're arguing that the Bitcoin network is overloaded with average of space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap."

Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered whether people have made projections about the performance of the validation process under different scenarios and whether they can/will become problematic. One thing I've always wondered is whether it would be possible to structure transactions such that they would load the validation process too heavily on cue, particularly if it is the common case to push more and more data out of the dbcache. Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "the UTXO set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term. The database itself has n log n behavior, though if the working set is too large the performance falls off -- and the fall-off is only enormous for non-SSD drives. Maybe the working-set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you talk about "would it be possible", do you mean an attack? It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.
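The two-stage pool idea described above (build block templates from a small pool holding roughly two blocks' worth of transactions, refilled from a large backing pool) can be sketched in a few lines. This is an illustrative sketch only, not Bitcoin Core code: the class and method names are invented, and transactions are selected purely by feerate.

```python
import heapq

class TwoStageMempool:
    """Two-stage mempool sketch: block templates are built from a small
    'hot' pool of the best-paying transactions, which is refilled from a
    large backing pool. All names here are invented for illustration."""

    def __init__(self, hot_target=8000):
        self.backing = []            # heap of (-feerate, txid)
        self.hot = []                # small list of (txid, feerate)
        self.hot_target = hot_target # ~2 blocks' worth of transactions

    def add_tx(self, txid, feerate):
        heapq.heappush(self.backing, (-feerate, txid))
        self._refill()

    def _refill(self):
        # Top up the hot pool with the best-paying backing transactions.
        while len(self.hot) < self.hot_target and self.backing:
            neg_feerate, txid = heapq.heappop(self.backing)
            self.hot.append((txid, -neg_feerate))

    def create_block_template(self, max_txs):
        # Sorting only the bounded hot pool is cheap no matter how large
        # the backing pool grows.
        self.hot.sort(key=lambda t: t[1], reverse=True)
        block, self.hot = self.hot[:max_txs], self.hot[max_txs:]
        self._refill()
        return block
```

The point of the design is that the per-block sorting cost depends on the bounded hot-pool size, not on however large the full mempool has grown.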
|
|
|
|
TPTB_need_war
|
|
July 07, 2015, 12:22:55 AM |
|
Without reading every page in this thread, I'll add my two cents worth here.
I can't see a reason why Gold can't rise along with Bitcoin at the moment, just at different rates. Whereas Bitcoin can approach $1000 again by the end of the year (nearly 4x the current price), similarly Gold can approach $2000 by the end of the year (nearly 2x the current price). Neither Bitcoin nor Gold is undermined by debt, compared to all the trillions of dollars in stocks and bonds which are leveraged to general confidence in elite lending strategies.
Yeah, I don't think it makes sense to come up with the idea that Bitcoin and precious metals are mutually exclusive. I'm pretty sure that both will rise. Even if gold might ultimately be replaced by Bitcoin, I doubt that this process will be fast enough to obstruct the general upward momentum of gold in a collapsing world economy. After all, Bitcoin's concept is like virtual gold: the supply is limited, it's very difficult to counterfeit, and you have to put in substantial effort to obtain it.

The key phase shift between gold and cryptocoin will likely come after 2017, when gold will be much easier for the Rottweilers to expropriate, steal, plunder, declare as Civil Asset Forfeiture, etc. See my upthread discussion with OROBTC (or just go to his profile and read his posts, as he posts infrequently).
|
|
|
|
tvbcof
Legendary
Offline
Activity: 4746
Merit: 1282
|
|
July 07, 2015, 12:28:49 AM |
|
... Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered whether people have made projections about the performance of the validation process under different scenarios and whether they can/will become problematic. One thing I've always wondered is whether it would be possible to structure transactions such that they would load the validation process too heavily on cue, particularly if it is the common case to push more and more data out of the dbcache. Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "the UTXO set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term. The database itself has n log n behavior, though if the working set is too large the performance falls off -- and the fall-off is only enormous for non-SSD drives. Maybe the working-set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue. When you talk about "would it be possible", do you mean an attack? It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.

Thanks for the input. Yes, as an attack. Say, for instance, one primed the blockchain with a lot of customized high-overhead transactions over a period of time. Then, when one wished to create a disruption, take action on all of them at once, thereby upsetting those who were doing real validation. The nature of the blockchain being what it is, I see an attack being most productive at creating a period of unusability of Bitcoin rather than a full-scale failure (excepting a scenario where secret keys could be compromised through a flaw in the generation process, which would, of course, be highly devastating.)

I was unaware that even today it would be possible to formulate transactions of the verification complexity that you mention. It would be interesting to know if anyone is watching the blockchain for transactions which seem to be deliberately designed this way.
|
sig spam anywhere and self-moderated threads on the pol&soc board are for losers.
|
|
|
solex
Legendary
Offline
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
|
|
July 07, 2015, 12:49:01 AM Last edit: July 07, 2015, 01:45:23 AM by solex |
|
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes. The mempools of nodes with identical fee and filtering policies which are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
Clean and synched mempools make for a cleaner blockchain, else garbage in, garbage out. Most mempools are synched because node owners don't usually mess with tx policy; they accept the defaults. Pools like Eligius with very different policies are the outliers. IBLT will help by incentivising node owners to converge to the same policies.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.
IBLT does exist, as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request. Since this block propagation efficiency was identified, there could have been a lot of work done in Core Dev to advance it further (though I fully accept that other major advances like headers-first were in train and draw down finite resources). I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement. Of course this is hugely dismissive, because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data which earlier took 600 seconds is a bottleneck in the critical path. It is LN which doesn't exist yet, and it will arrive far too late to help with scaling when blocks are (nearer to) averaging 1MB.

the 1MB was either forward-looking, set too high, or only concerned about the peak (and assuming the average would be much lower) ... or a mixture of these cases.
So, in 2010 Satoshi was forward-looking, when the 1MB was several orders of magnitude larger than block sizes. Yet today we are no longer forward-looking, nor do we care about peak volumes, and we get ready to fiddle while Rome burns. The 1MB is proving a magnet for spammers, as every day the average block size creeps up and makes their job easier. A lot of people have a vested interest in seeing Bitcoin crippled. We should not provide them an ever-widening attack vector.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:
$ ~/bitcoin/src/bitcoin-cli getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}
That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees, and 55% of users don't want to pay more than 1 cent, while 80% of users think 5 cents or less is enough of a fee. https://bitcointalk.org/index.php?topic=827209.0

Maybe this is naive and unrealistic long-term, and a viable fee market (once the reward is lower) could push this up a little. Or is this another case where the majority of users are wrong yet again?

Peter made the point that Bitcoin is at a convergence of numerous disciplines, of which no one is an expert in all. I suggest that while your technical knowledge is absolutely phenomenal, your grasp of the economic incentives in the global marketplace is much weaker. While Cypherdoc might have had errors in judgment in the Hashfast matter (I know zero about this, and have zero interest in it), his knowledge of the financial marketplace is also phenomenal, and he correctly assesses how Bitcoin can be an economic force for good, empowering people trapped in dysfunctional 3rd-world economies. He is right about how Bitcoin has to scale, and cheaply, for users to maintain a virtuous feedback cycle of ecosystem growth, hashing-power growth and SoV.

Lots of people will not pay fees of 14c per tx when cheaper alternatives like LTC are out there. I see the recent spike in it (disclaimer: I don't have any) as the market "pricing in" that BTC tx throughput is going to be artificially capped. While BTC tx throughput will always be capped by technology, we should not be capping it at some lower level in the misguided belief that this "helps".
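For readers unfamiliar with IBLTs, the data structure being debated above can be shown in miniature. This is a toy illustration of the general IBLT set-reconciliation idea (not Kalle and Rusty's prototype; every name below is invented): two nodes each summarize their mempool's tx ids into a small fixed-size table, and subtracting one table from the other lets each side recover just the differing ids, regardless of how many ids they share.

```python
import hashlib

def _h(data, salt):
    """Hash `data` with a one-byte salt down to a 64-bit integer."""
    return int.from_bytes(
        hashlib.sha256(bytes([salt]) + data).digest()[:8], 'big')

class IBLT:
    """Toy invertible Bloom lookup table over 64-bit integer keys
    (think: short transaction ids)."""

    def __init__(self, m=60, k=3):
        assert m % k == 0            # k disjoint subtables of m/k cells
        self.m, self.k = m, k
        self.count = [0] * m
        self.keysum = [0] * m
        self.checksum = [0] * m

    def _cells(self, key):
        sub = self.m // self.k
        kb = key.to_bytes(8, 'big')
        return [i * sub + _h(kb, i) % sub for i in range(self.k)]

    def _update(self, key, sign):
        chk = _h(key.to_bytes(8, 'big'), 0xFF)
        for idx in self._cells(key):
            self.count[idx] += sign
            self.keysum[idx] ^= key
            self.checksum[idx] ^= chk

    def insert(self, key):
        self._update(key, +1)

    def subtract(self, other):
        """Cell-wise difference; keys present in both tables cancel."""
        d = IBLT(self.m, self.k)
        for i in range(self.m):
            d.count[i] = self.count[i] - other.count[i]
            d.keysum[i] = self.keysum[i] ^ other.keysum[i]
            d.checksum[i] = self.checksum[i] ^ other.checksum[i]
        return d

    def decode(self):
        """Peel 'pure' cells; returns (keys only in self, keys only in other)."""
        ours, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if self.count[i] in (1, -1) and self.checksum[i] == \
                        _h(self.keysum[i].to_bytes(8, 'big'), 0xFF):
                    key, sign = self.keysum[i], self.count[i]
                    (ours if sign == 1 else theirs).add(key)
                    self._update(key, -sign)   # peel it out of the table
                    progress = True
        return ours, theirs
```

Because shared keys cancel entirely in the subtraction, the table only needs a few cells per *differing* transaction, which is why the transmitted sketch can be small even when mempools are large -- and why converging on similar mempool policies makes the scheme cheaper.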
|
|
|
|
TPTB_need_war
|
|
July 07, 2015, 01:09:08 AM Last edit: July 07, 2015, 01:28:05 AM by TPTB_need_war |
|
furthermore, you ignore the obvious fact that hashers are independently minded and will leave any pool that abuses its power via all the shenanigans you dream up to scare everyone about how bad Bitcoin is.
From Meni Rosenfeld's paper, the probability that a pool (or any solo miner) will receive any payout for a day of mining is:

1 - e^(-(fraction of network hashrate) × 144), where there are 144 blocks per day

Thus a pool which has only 1% of the network hashrate has only a 76% chance of winning any blocks for the day. And that probability is reset the next day, so a pool with only 1% of the network hashrate could go days without winning a block.

This makes it very difficult to design a payout scheme (c.f. the schemes Meni details) for ephemeral SPV pool miners (which can come and go as often as they like) that is equitable and yet doesn't also place the pool at risk of bankruptcy, while also allowing for the fact that running a pool is an extremely high-competition, substitutable-good, low-profit-margin business model (unless you economy-of-scale up, and especially if you use monopolistic tactics). In short, it is nearly implausible economically to run a pool that has only 1% of the network hashrate. Thus you can pretty well be damn sure that the pools are Sybil attacked and are lying about their controlling stakes, such that multiple 1% pools must be sharing the same pot of income from blocks in order to make the economics and math work. QED.

Edit: note that with 2.5-minute blocks (i.e. Litecoin), it improves considerably:

1 - e^(-(fraction of network hashrate) × 576), where there are 576 blocks per day

Thus a pool which has only 1% of the network hashrate has a 99.7% chance of winning any blocks for the day. However, one must then factor in that latency (and thus the orphan rate) becomes worse, and higher hashrate profits more than lower hashrate given any significant amount of latency in the network, as gmaxwell pointed out upthread. So it is not clear that Litecoin gained any advantage in terms of decentralization with the faster block period.
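The percentages above follow from treating block finds as a Poisson process; a quick check of the arithmetic (illustrative code, function name invented):

```python
import math

def p_any_block(hashrate_share, blocks_per_day):
    """Probability that a miner with the given fraction of network
    hashrate wins at least one block in a day (Poisson model)."""
    return 1 - math.exp(-hashrate_share * blocks_per_day)

# 1% pool, 144 blocks/day (Bitcoin): ~76% chance of any reward that day
print(round(p_any_block(0.01, 144), 2))   # 0.76
# 1% pool, 576 blocks/day (2.5-minute blocks): ~99.7%
print(round(p_any_block(0.01, 576), 3))   # 0.997
```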
|
|
|
|
cypherdoc (OP)
Legendary
Offline
Activity: 1764
Merit: 1002
|
|
July 07, 2015, 01:43:31 AM |
|
no, memory is not just used for 1MB blocks. it's also used to store the mempools plus the UTXO set. large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

as you know, even Gavin talks about this memory problem from UTXO. and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to needs. http://gavinandresen.ninja/utxo-uhoh

have the potential to collapse a full node by overloading the memory. at least, that's what they've been arguing.

"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger. Super weird that you're arguing that the Bitcoin network is overloaded with average of space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap.

i didn't say this full block spam attack we're undergoing wasn't affecting my node at all. sure, i'm in swap, b/c of the huge # of unconf tx's, but it hasn't shut down or stressed my nodes to any degree. one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization, resulting in centralization. i'm not seeing that.
|
|
|
|
|