  Show Posts
481  Bitcoin / Bitcoin Discussion / Re: Gadget claims to steal encrypted keys from 19" distance. Time for Paper wallet ? on: July 11, 2015, 02:39:34 AM
If what they claim is true, no electronic storage for private keys is safe anymore. Paper wallet unaffected Smiley
None of these things affect _storage_; they potentially affect key generation and signing. When the key is at rest, no issue.  All of the "paperwallet" utilities I've seen are _highly_ vulnerable to sidechannel attacks. Worse, many are just webpages, which are vulnerable to a litany of additional attacks.

Meanwhile, Bitcoin core is already hardened against this sort of thing.


It often seems to be the case that people spread FUD around fringe concerns, with recommended actions that would actually make people less safe. One of the great mysteries of Bitcoin.
482  Bitcoin / Legal / Re: California Bill AB 1326 on: July 11, 2015, 01:04:16 AM
https://blockstream.com/DigitalCurrencyCASenateLetter.pdf may be of interest.
483  Bitcoin / Bitcoin Discussion / Re: The biggest single tx in the world? on: July 11, 2015, 12:45:57 AM
This is F2Pool;  see #bitcoin-dev earlier today where I gave wangchun a patch to produce unusually small, regular, and fast to verify signatures for the special case of spending a non-private private-key.
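(For the curious: one plausible reading of that trick -- my own illustrative Python sketch, not the actual patch -- is that since the key isn't private anyway there is nothing to protect, so the signer can grind the ECDSA nonce until r is unusually small, shaving bytes off every DER-encoded signature and making them all the same size.)

# Minimal pure-Python secp256k1 arithmetic; slow, illustration only.
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    # affine point addition / doubling on y^2 = x^3 + 7 over F_P
    (x1, y1), (x2, y2) = p1, p2
    if p1 == p2:
        l = 3 * x1 * x1 * pow(2 * y1, P - 2, P) % P
    else:
        l = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (l * l - x1 - x2) % P
    return x3, (l * (x1 - x3) - y1) % P

def grind_small_r(priv, z, max_r_bytes=31):
    """Try nonces k = 1, 2, ... until r fits in max_r_bytes as a DER integer."""
    k, R = 1, G
    while True:
        r = R[0] % N
        if 0 < r < 1 << (8 * max_r_bytes - 1):      # no sign/padding byte needed
            s = pow(k, N - 2, N) * (z + r * priv) % N
            return k, r, min(s, N - s)              # canonical low-S form
        k, R = k + 1, add(R, G)                     # R_k+1 = R_k + G, cheap

k, r, s = grind_small_r(priv=1, z=0xDEADBEEF)       # a known, non-private key
print(r.bit_length(), "bit r")                      # <= 247 bits: shorter DER encoding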
484  Bitcoin / Development & Technical Discussion / Re: Blockchain forks and proof of their age? on: July 11, 2015, 12:08:56 AM
This is exactly why it is good to verify a transaction on multiple block explorers, especially if you are only waiting for a relatively small number of confirmations.
I think too many people solely rely on 1 block explorer and still think 1 confirmation is okay
You could have checked more than four of them and would have gotten the same incorrect data.
485  Bitcoin / Development & Technical Discussion / Re: Blockchain forks and proof of their age? on: July 10, 2015, 11:57:28 PM
That kind of profound confusion is happening much more often because there are many altcoins that use centralized block signing to pin the chain and prevent reorgs. Unfortunately they call this mechanism "checkpoints", though it has basically no relationship to the really narrow thing in Bitcoin by the same name.


The irony in suggesting "external sources of validation" like block explorers is that in the recent chain fork most of them were wrong.
486  Bitcoin / Bitcoin Discussion / Re: We ARE under attack.. we NEED to act... on: July 10, 2015, 12:24:37 AM
scaling blocks would be the perfect solution. but that's an infinite discussion. Undecided
One cannot address a crapflood attack by permanently accepting more crap for all time.

This is where the "Bitcoin Purists" need to admit they were wrong that "there can only be one crypto" and give credit where credit is due. Charlie (coblee) was right from 3 years ago when this spam attack happened. I remember the spam attack when it happened and the fee structure to discourage attacks has worked very nicely to date.
Are you talking about when he stopped ignoring my advice that his slavish copying of Bitcoin's code had broken the existing anti-attack mechanisms and rendered them completely and totally ineffectual, and applied a patch I provided?  I'm surprised you'd forget that-- because

Go actually look at the litecoin repository. You'll see almost nothing but miles of them copying code from Bitcoin Core.

As mentioned above... We have almost the same protection in Bitcoin being mentioned here; but the attacker is just paying enough to avoid it (partially because it was subsequently turned too low in anticipation of higher Bitcoin prices); and the latest volley of attack transactions don't even involve very small payments.

I realize that you're a long time litecoin advocate; but seriously-- pick an argument that actually makes sense.  Otherwise it's just embarrassing.

I traced the attacks on my alt-coin mining operation last year to Panama and Switzerland; they went to a lot of trouble to screw me out of $5 worth of doge.

Any idea where these attacks are originating from?
A big chunk of them originate from this transaction: 3bad15167c60de483cd32cb990d1e46f0a0d8ab380e3fc1cace01afc9c1bb5af -- if you can figure out whose exchange withdrawal this was (the key immediately began making the attack txns itself), you may have some very concrete evidence about who's attacking here.
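A rough way to follow that taint yourself (an illustrative Python sketch, not a polished tool -- the RPC credentials and block range are placeholders): walk a block range on a local Bitcoin Core node and collect every transaction spending an output descending from the root:

import base64, json, urllib.request

RPC = "http://127.0.0.1:8332"
AUTH = base64.b64encode(b"user:pass").decode()      # placeholder rpcuser:rpcpassword

def rpc(method, *params):
    req = urllib.request.Request(
        RPC,
        json.dumps({"id": 0, "method": method, "params": list(params)}).encode(),
        {"Authorization": "Basic " + AUTH, "Content-Type": "application/json"})
    return json.loads(urllib.request.urlopen(req).read())["result"]

ROOT = "3bad15167c60de483cd32cb990d1e46f0a0d8ab380e3fc1cace01afc9c1bb5af"
tainted = {ROOT}
for height in range(363200, 364000):                # example range, roughly early July 2015
    block = rpc("getblock", rpc("getblockhash", height), 2)   # 2 = fully decoded txs
    for tx in block["tx"]:
        # coinbase inputs have no "txid" key, so .get() skips them
        if any(vin.get("txid") in tainted for vin in tx["vin"]):
            tainted.add(tx["txid"])
print(len(tainted) - 1, "descendant transactions found")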
487  Bitcoin / Bitcoin Discussion / Re: We ARE under attack.. we NEED to act... on: July 09, 2015, 11:57:34 PM
It would be interesting to hear the reason why bitcoin developers turned down Lee's pull request to include litecoin's solution to the spam problem three years ago...
Bitcoin Core implemented something quite similar but better-- the dust limit, though it was later reduced in effectiveness by turning the limit ten times lower.  Of course, any kind of fee based discouragement for spamming isn't going to work if the fees are too low.

I say better because it actually addresses the root problem that both were intended to address-- the creation of utxo which cost the receiver more than they are worth to spend-- while the litecoin scheme leaves that attack open but makes it somewhat more costly.

Beyond seemingly forgetting the protections from Bitcoin copied into his own codebase (and disabled), Coblee seems to have forgotten history-- Litecoin's fee antispam was originally whacked; I pointed it out and posted a patch to fix it, and encouraged miners to apply it after none of the litecoin tech people seemed to care.  It was ignored until some jackass DOS attacked their network; then they blamed me for it, and applied the fix I suggested (though with lower fees).

The current attacks don't really have that much to do with low value outputs; the attacker doesn't seem to be trying to bloat the UTXO set-- actually, last night the pattern changed after miners' anti-attack filters became effective at deprioritizing them, and they've been using larger amounts since.
488  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 09, 2015, 09:36:30 AM
I just want to highlight the comments here from bitcoin-dev today, which seem to indicate that these spamming attacks would have been 5.5x more expensive if a dust threshold software change had not been made. Perhaps there are deeper insights someone else has on this...?
Quote
16:53   wangchun   why can those spam get confirmed. 0.00001 BTC vout below dust threshold right?
16:54   phantomcircuit   wangchun, iirc the dust threshold is 546 satoshis
16:54   wangchun   not 5460 satoshis? changed?
16:54   aschildbach   wangchun: Yes it was cut by 10 a few months ago.
i don't think it really matters, does it?
since most of the mined blocks are filled with real demand trying to get into blocks with or without spamming,
Unfortunately, the majority in the blocks I checked earlier today have been the DOS attack-- e.g. transactions traceable from outputs of this transaction https://blockchain.info/tx/3bad15167c60de483cd32cb990d1e46f0a0d8ab380e3fc1cace01afc9c1bb5af  and a few others. ... though the attack style has been shifting to evade filtering by miners.

The change mentioned in the chat above is https://github.com/bitcoin/bitcoin/pull/3305 (you might find the comments there interesting); it's one of Mike's couple of contributions to Bitcoin Core.
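For reference, the arithmetic behind the numbers in that chat, as I understand Core's IsDust() rule (sketched in Python): an output is dust if spending it would cost more than a third of its value at the min relay feerate. For P2PKH, that's a 34-byte output plus the 148-byte input needed to spend it later:

def dust_threshold(min_relay_fee_per_kb):
    spend_size = 34 + 148                  # output now + the input to spend it later, bytes
    return 3 * spend_size * min_relay_fee_per_kb // 1000

print(dust_threshold(10000))               # 5460 satoshis, before PR #3305
print(dust_threshold(1000))                # 546 satoshis, after the 10x cut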
489  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 08, 2015, 11:26:59 PM
OK.  Provide me with your estimates for the following (and explain how you arrived at your numbers) and I'll update my table using your numbers:
1.  The cost per node to store 1 GB of additional blockchain data for 5 years, assume the outputs are spent.
2.  The cost per node to store 1 GB of additional blockchain data for 5 years, assuming the outputs are unspent.
I may be missing the context as this thread is high volume and I've not read any of the backlog...

But for a full verifying node, the ongoing cost of 1GB of additional transactions with all outputs spent is 0; all the cost related to that 1GB of data is the bandwidth to get it to you, the verification cost, and short term storage until it's buried; after that it need not be stored.
The cost for unspent outputs is some non-zero number which depends on your estimate of storage costs.


Does CreateNewBlock currently take longer to execute if there are more TXs in a miner's mempool to pick from?  If so, this would add credence to Cypherdoc's hunch that miners are producing more empty blocks when the mempool swells.
Yep, I already pointed that out to you specifically! It's superlinear in the mempool size (well, ignoring caching).  But that's unrelated to f2pool/antpool and the other SPV miners, as they're not ever calling CreateNewBlock in that case-- they're mining without even validating.   One can mine on a validated chain with no transactions while waiting for CreateNewBlock (which is what eligius does, for example).  I also pointed out that this is trivially optimizable, but no one has bothered previously.

490  Other / Meta / Re: HashFast cypherdoc bankruptcy scandal : Time to clean up bitcoin on: July 08, 2015, 05:07:20 AM
Well, I found it damning because he misled me previously-- claiming he was just another customer who got some discounts and lost money with hashfast like many others did-- in order to get me to pull the negative rating for his part in promoting something that caused large losses for a lot of people.  So finding out that he actually made an enormous profit from it was quite eye opening, no pun intended.
491  Bitcoin / Development & Technical Discussion / Re: [Crypto] Compact Confidential Transactions for Bitcoin on: July 07, 2015, 10:01:23 PM
I've been toying around with a 384 bit quadratic extension field curve that supports a four dimensional endomorphism (GLV-GLS).  The use of the quadratic extension avoids much of the bad scaling, but I'm not to a point where I can benchmark anything yet.
492  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 09:12:51 PM
On the topic of block verification times, people on Reddit are saying this block (filled with one huge TX) took up to 25 seconds to verify:
yes, they're actually quoting Pieter and me from #bitcoin-dev (telling the miner in advance that the transaction he was creating would take a _LONG_ time to verify). They created a huge non-standard 1MB transaction, and part of the verification time is quadratic (in the number of inputs).

It's actually possible to create a block that would take many minutes to verify, though not with standard transactions-- only something contrived.
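Back-of-envelope on the quadratic part (illustrative numbers, not measurements of that block): with the legacy SIGHASH scheme each input's signature check rehashes nearly the whole transaction, so the bytes hashed grow with size times inputs:

tx_size  = 1_000_000                       # a ~1 MB transaction
n_inputs = 5_500                           # a plausible input count at that size
print(tx_size * n_inputs / 1e9, "GB of SHA256 for one transaction")   # ~5.5 GB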
493  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:33:50 AM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.
494  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:23:56 AM
Interesting!  
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (That's why I asked if you tried other regression approaches.)

I suppose there may be some dependence that is introduced by virtue of what percentage of the miners got the dummy work.  It would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is now-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today: you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley
495  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:09:22 AM
as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the resulting Reddit thread in which you participated and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a reddit thread full of people calling gavin a fool ( Sad ) for saying "memory" when he should have been saying fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, let's take a bet. Since you're so confident, surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is "Is the entire UTXO set kept in RAM in any version of Bitcoin Core ever released?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the hashfast liquidators, to return it to the forum members it was taken from; which will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The highest number of unconfirmed transactions I've seen ever is about 8MB. Even if we assume the real max was 3x that, this does not explain your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

Let me chime in hear quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.  

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure the time from input to minable on an actual node under your control, and will observe the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).

I noted you posted a result of a classification, did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.
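
Something like this, sketched with pandas/statsmodels (the file and column names are made up): a large, significant intercept alongside a small slope would point at a size-independent latency component:

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("f2pool_blocks.csv")      # hypothetical: prev_size_kb, empty (0/1)
X = sm.add_constant(df[["prev_size_kb"]])  # intercept = size-independent latency term
print(sm.Logit(df["empty"], X).fit().summary())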

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that, since they're all relevant.

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempool, have you?
I have, they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network-- which was therefore a refutation of his hypothesis of increasing block verification times (16-37sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions---- though I point out that's not fundamental: e.g. no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long CreateNewBlock takes, due to the ability to produce empty blocks without skipping transactions).
496  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 02:41:27 AM
Clean and synched mempools makes for a cleaner blockchain, else garbage in - garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k is running with changed settings, as are a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

Quote
IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason---- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common fate aspect of it-- as it can make censorship much worse. (OTOH, rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely.)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a x2 improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry all the data in 1 second which earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput, it's at best a 2x improvement; if you're talking about latency, it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already known transaction to _two bytes_.
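
To put a number on that (illustrative figures only, assuming every transaction in the block is already known to the peer):

txs, block_bytes = 2000, 1_000_000
wire = 80 + 2 * txs                        # header plus ~2 bytes per known tx
print(round(block_bytes / wire), "x smaller on the wire")   # ~245x on these numbers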

Quote
That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees and 55% of users don't want to pay more than 1 cent, 80% of users think 5 cents or less is enough of a fee.
https://bitcointalk.org/index.php?topic=827209.0
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley

497  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 12:09:19 AM
for each block in the Blockchain, which will help answering Q1.  Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
No idea, I'm not aware of anything that tracks that-- also, what does "typical" mean? Do you mean stock unmodified Bitcoin Core?

I expect correlation between empty blocks and mempool size-- though not for the reason you were expecting here: CreateNewBlock takes a long time, easily as much as 100ms, as it sorts the mempool multiple times-- and no one has bothered optimizing this at all because the standard mining software will mine empty blocks while it waits for the new transaction list. So work generated in the first hundred milliseconds or so after a new block will usually be empty. (Of course miners stay on the initial work they got for much longer than 100ms.)

This is, however, unrelated to SPV mining-- in that case everything is still verified. As many people have pointed out (even in this thread), the interesting thing here isn't empty blocks, it's the mining on an invalid chain.

And before someone runs off with an argument that this aspect of the behavior instead defines some kind of upper limit-- optimizing the mempool behavior would be trivial if anyone cared to; presumably people will care to when the fees they lose are non-negligible.  Beyond eliminating the inefficient copying and such, the simple expedient of running a two stage pool, where block creation is done against a smaller pool that contains only enough transactions for 2 blocks (and is refilled from a bigger one), would eliminate virtually all the cost (a rough sketch follows below). Likewise, as I pointed out up-thread, incrementing your minfee can make your mempool as small as you like (the data I captured before was at a time when nodes with a default fee policy had 2.5 MB mempools).
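
A minimal sketch of that two stage idea (my own illustration in Python -- nothing like this exists in any codebase; transaction dependencies, promotion on feerate, and eviction on block arrival are all omitted):

import heapq

MAX_BLOCK = 1_000_000
HOT_CAP   = 2 * MAX_BLOCK                  # keep only ~2 blocks' worth sorted

class TieredMempool:
    """Template construction touches only the small hot tier, so its cost
    is bounded regardless of how large the total backlog grows."""
    def __init__(self):
        self.cold = []                     # heap of (-feerate, txid, size): the big backlog
        self.hot = []                      # small working set of (feerate, txid, size)
        self.hot_bytes = 0

    def add(self, txid, size, fee):
        heapq.heappush(self.cold, (-fee / size, txid, size))
        while self.cold and self.hot_bytes < HOT_CAP:
            neg_rate, t, sz = heapq.heappop(self.cold)   # best feerate first
            self.hot.append((-neg_rate, t, sz))
            self.hot_bytes += sz

    def template(self):
        self.hot.sort(reverse=True)        # cost bounded by HOT_CAP, not the backlog
        chosen, used = [], 0
        for rate, t, sz in self.hot:
            if used + sz <= MAX_BLOCK:
                chosen.append(t)
                used += sz
        return chosen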

First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread so you are just being contrary.
Uh. I don't care what the consensus of the "Gold collapsing" thread is; the UTXO set is not stored in memory. It's stored on disk, in the .bitcoin/chainstate directory.  (And as you may note, a full node at initial startup uses much less memory than the current size of the UTXO set.) Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of ram as you claim.

Quote
Second, my reference to Peter's argument above said nothing about mempool; I was talking about block verification times. You're obfuscating again.
In your message to me you argued that f2pool was SPV mining because "the" mempool was big. I retorted that their mempool has nothing to do with it, and besides, they can make their mempool as small as they want. You argued that the mempools were the same; I pointed out that they were not. You responded claiming my responses were inconsistent with the points about verification delay; I then responded that no-- those comments were about verification delay, not mempool. The two are unrelated.  You seem to have taken as axiomatic that mempool == verification delay, a position which is technically unjustified but supports your preordained conclusions; then you claim I'm being inconsistent when I simply point out that these things are very different and not generally related.

Quote
Third, unlike SPV mining of 0 tx blocks like now, it didn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level if other larger miners were allowed to clear out the unconfirmed TX set.
I think your phone made your response too short here, I'm not sure where you're going with that.

When you're back on a real computer, I'd also like to hear your response to my thought that it is "Super weird that you're arguing that the Bitcoin network is overloaded at the average space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap."

Just from knowing a little about database tuning and ram vs. disk-backed memory, I have always wondered whether people have made projections about performance of the validation process under different scenarios and whether they can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load validation processes too heavily on cue, particularly if it is the common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "the utxo set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off-- and the fall-off is only enormous for non-SSD drives.  Maybe the working set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you talk about "would it be possible" do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves, unless they had some arrangement with most of the hashpower to accept their block.
498  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 09:38:13 PM
no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in ram. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

Quote
have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.
"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger.

Super weird that you're arguing that the Bitcoin network is overloaded at the average space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your ram to push you deeply into swap.

Quote
There is no requirement that mempools be in sync-- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and which are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
well, that was precisely Peter's mathematical point the other day that you summarily dismissed.  f2pool and Antminer are NOT in a similar position on the network as they are behind the GFC.  they have in fact changed their verification policies in response to what they deem are large, full blocks as a defensive measure.  that's why their average validation times are 16-37sec long and NOT the 80ms you claim.  thus, their validation times of large blocks will go up and so will their number of 0 tx SPV defensive blocks. and that's why they've stated that they will continue to mine SPV blocks.  thanks for making his point.
PeterR wasn't saying anything about mempools, and-- in fact-- he responded expressing doubt about your claim that mempool size had anything to do with this.  Moreover, I gave instructions that allow _anyone_ to measure verification times for themselves.  Your argument was that miners would be burned by unconfirmed transactions, I responded that this isn't true-- in part because they can keep whatever mempool size they want.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}


Quote
it also is a clear sign that miners do have the ability and financial self interest to restrict block sizes and prevent bloat in the absence of a block limit.
Their response was not to use smaller blocks, their response was to stop validating entirely.  (And, as I pointed out-- other miners are apparently mining without validating and still including transactions).

Quote
these SPV related forks have only occurred, for the first time ever, now during this time period where spammers are filling up blocks and jacking up the mempool.  full blocks have been recognizable as 950+ and 720+kB.  this is undeniable.
If we're going to accept that every correlation means causation, what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and how loud and opinionated you are on this blocksize subject?

In this case, these forks are only visible by someone mining an invalid block, which no one had previously done for over a year.

Quote
if they are seeing inc orphans, why haven't they retracted their support of Gavin's proposal
They are no longer seeing any orphans at all; they "solved" them by skipping validation entirely. They opposed that initial proposal, in fact, and suggested they could at most handle 8MB, which brought about a new proposal that used 8MB instead of 20MB, though only for a limited time. Even there, the 8MB was predicated on their ability to do verification-free mining, which they may be rethinking now.

Quote
i don't believe that.
I am glad to explain things to people who don't understand, but you've been so dogmatically grinding your view that it's clear that every piece of data you see will only "confirm" things for you; in light of that I don't really have unbounded time to waste trying. Perhaps someone else will.
499  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 06:49:46 PM
since small pools can also connect to the relay network, and i assume they do, there is no reason to believe that large miners can attack small miners with large blocks.  in fact, we've seen the top 5 chinese miners deprecated due to the GFC making it clear they CANNOT perform this attack despite what several guys have FUD'd.
Basic misunderstanding there--- being a larger miner has two effects. One is throughput, not latency, related: being larger creates a greater revenue stream which can be used to pay for better resources.   E.g. if the income levels support one i7 CPU per 10TH/s of mining, then a 10x larger pool can afford 10x more cpus to keep up with the overall throughput of the network, which they share perfectly (the relay network is about latency, not so much about throughput-- it's at best a 2x throughput improvement, assuming you were bandwidth limited).

The other is latency related: imagine you have a small amount of hashpower-- say 0.01% of the network-- and are a lightsecond away on the moon.  Any time there is a block race, you will lose, because all of the earth is mining against you, having heard your block 1+ seconds later.  Now imagine you have 60% of the hashpower on the moon; in that case you will usually win, because even though the earth will be mining another chain, you have more hashpower. For latency, the size of the miner matters a lot, and the size of the block only matters to the extent that it adds delay.

When it comes to orphaning races, miner size matters, in some amount related to the product of the size of the miner and the time it takes to validate a block.
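
Rough arithmetic on that exposure, using the standard Poisson block-arrival model (my numbers, purely illustrative):

import math
for tau in (2, 15, 30):                    # extra propagation/validation delay, seconds
    risk = 1 - math.exp(-tau / 600)        # chance a competing block lands in that window
    print(tau, "s ->", round(100 * risk, 1), "% orphan-race exposure")
# 30s works out to ~4.9%, in the same ballpark as the ~4% F2Pool reported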

Quote
how can that be?  mining pools all use a full node around which they coordinate their mining.  all full nodes are relatively in sync with their mempools  
There is no requirement that mempools be in sync-- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and which are similarly positioned on the network will be similar, but any change in their policies will make them quite different.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

Quote
pt being, it's statistically unlikely that full blocks today represent the magical level of "large" blocks that Satoshi set 6 yrs ago.  the problems we are having with the forks are a result of the defensive tactics being taken from those full blocks.

Almost none of the blocks have been 1MB; the issues arise before then. _Consistent_ 1MB blocks wouldn't have been supportable on the network at the time that limit was put in place-- back in the 0.5.x-ish days we were seeing up to 2 minutes for a 100k block to reach the whole network; the 1MB was either forward-looking, set too high, or only concerned with the peak (assuming the average would be much lower)... or a mixture of these cases.

Quote
have the Chinese miners given you a technical reason why they're SPV'ing?

F2Pool reported that as block sizes grew they saw increased orphaning rates-- they were seeing an orphan rate of 4%, though this was at a time before the relay network, when GHash in europe had ~50% of the hashpower.  Excluding the recent issues they've had almost no orphans since, they report.

Then why don't we decrease the blocktime from 10 min down to let's say 2 min. This way we can also have more transactions/second without touching the blocksize.
Ouch-- the latency related issues are made much worse by smaller interblock gaps, once they are 'too small' relative to the network radius. When another block shows up on the network faster than you can communicate about your last, you get orphaned.  And for throughput related bottlenecks it doesn't matter if X transactions come in the form of one 10MB block or ten 1MB blocks.



500  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 05:40:03 PM
1.  Why do larger mining pools have less orphans, assuming most miners even small ones are connected to the relay network?
Because it greatly reduces the time it takes to transmit blocks, but does not completely eliminate it-- nothing can (due to the speed of light).  In particular, something I didn't know until my conversation with them on July 4th: the nearest relay network hub to F2Pool is still 200ms away, due to insane routing that sends traffic between some networks in china and singapore via the US (thanks NSA?).

Quote
2. Even if mining pools set higher fees, aren't the unconfirmed TX's still added to their mempools?
No.

Quote
3. How is it that 1MB just "happened" to be the magic number at which blocks are deemed to be "large" ?
I don't know what you're talking about there.  AFAICT F2Pool would also consider e.g. 750k "large".

Do you mean why was 1MB selected as the particular hard limit in the protocol?   ::shrugs:: It happens to be the highest value you could sync over a modem and stay up with the network (though not for mining, due to latency), though that could be by chance.
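
Sanity-checking that claim with trivial arithmetic:

print(1_000_000 * 8 / 600 / 1000, "kbit/s sustained")   # 1MB per 10 min = ~13.3 kbit/s,
# comfortably within a 33.6k/56k dial-up link (though with nothing to spare for latency)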