Bitcoin Forum
  Show Posts
1781  Bitcoin / Development & Technical Discussion / Re: [Crypto] Compact Confidential Transactions for Bitcoin on: July 07, 2015, 10:01:23 PM
I've been toying around with a 384-bit quadratic extension field curve that supports a four-dimensional endomorphism (GLV-GLS).  The use of the quadratic extension avoids much of the bad scaling, but I'm not at a point where I can benchmark anything yet.
1782  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 09:12:51 PM
On the topic of block verification times, people on Reddit are saying this block (filled with one huge TX) took up to 25 seconds to verify:
yes, they're actually quoting Pieter and me from #bitcoin-dev (telling the miner in advance that the transaction he was creating would take a _LONG_ time to verify). They created a huge non-standard 1MB transaction and part of the verification time is quadratic (in the number of inputs).

It's actually possible to create a block that would take many minutes to verify, though not with standard transactions-- only something contrived.
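(A rough back-of-the-envelope sketch of why that cost is quadratic, with purely illustrative sizes: under the original signature-hashing scheme each of a transaction's n inputs hashes roughly a copy of the whole transaction, so the total bytes hashed grow with n squared.)

# Illustrative sketch only: sizes are nominal, not exact serializations.
def bytes_hashed(num_inputs, bytes_per_input=180):
    """Each input's signature check hashes (roughly) the whole transaction,
    so total hashing work grows with num_inputs squared."""
    tx_size = num_inputs * bytes_per_input
    return num_inputs * tx_size

for n in (100, 1000, 5500):          # ~5500 such inputs approaches a 1MB tx
    print("%5d inputs -> ~%.0f MB hashed" % (n, bytes_hashed(n) / 1e6))
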
1783  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:33:50 AM
ok, i'm not getting the bolded part.  this graph shows 37 MB worth of unconf tx's, no?:
No clue, no node I have access to is seeing that much-- they may have turned off the minfee rules (not unreasonable for a metrics thing)...

Even given that, again, 37MB doesn't explain your swap.
1784  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:23:56 AM
Interesting!  
And this is why I like the empirical "black box" approach.  I don't care initially what the mechanism is.  I try to find a simple model that explains the effect, and then, later, ask what that mechanism might be.
But now why would the "latency of the mining process" depend on the size of the previous block?  That doesn't make sense to me, but we just showed empirically that F2Pool is indeed more likely to produce an empty block when the previous block is large.
I wouldn't expect the miner latency part to be size dependent: the miner can't even tell how big the prior block was.  I expect your function relating them to have a big constant term in it! (that's why I asked if you tried other regression approaches.)

I suppose there may be some dependence that is introduced by virtue of what percentage of the miners got the dummy work.  Would be pretty interesting to try to separate that.

Another trap of empirical analysis in this kind of discussion is that we can only measure how the system is-- but then we use that to project the future;  e.g.  say we didn't have ECDSA caching today, you might then measure that it was taking >2 minutes to verify a maximum size block... and yet 100 lines of code and that cost vanishes; which is bad news if you were counting on it to maintain incentives. Smiley
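(For illustration, a minimal sketch of the kind of signature cache being alluded to -- hypothetical names, not Bitcoin Core's actual code: once a transaction's signatures have been checked at relay time, validating the block that later includes it only needs a set lookup.)

# Hypothetical sketch of an ECDSA verification cache (not Bitcoin Core's code).
class SigCache:
    def __init__(self):
        self._seen = set()

    def verify(self, sighash, pubkey, sig, ecdsa_verify):
        """Skip the expensive EC math for (sighash, pubkey, sig) triples that
        were already verified when the transaction was first relayed."""
        key = (sighash, pubkey, sig)
        if key in self._seen:
            return True
        ok = ecdsa_verify(sighash, pubkey, sig)   # the expensive part
        if ok:
            self._seen.add(key)
        return ok
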
1785  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 03:09:22 AM
as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted in which you participated and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. The reddit thread is full of people calling Gavin a fool ( Sad ) for saying "memory" when he should have said fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, lets take a bet. Since you're so confident; surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is "Is the entire UTXO set kept in RAM in any released version of Bitcoin Core?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the HashFast liquidators, to return it to the forum members it was taken from; that will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Quote
i didn't say this full block spam attack we're undergoing wasn't affecting my node at_all.  sure, i'm in swap, b/c of the huge #unconf tx's but it hasn't shut down or stressed my nodes to any degree.  one of the arguments by Cripplecoiners was that these large block attacks would shut full nodes down from destabilization resulting in centralization.  i'm not seeing that.
The largest backlog of unconfirmed transactions I've ever seen is about 8MB. Even if we assume the real max was 3x that, this does not explain your hundreds of megabytes of swap.   We just had half the hashpower of the network mining without validating, creating multiple large forks and large reorganizations, but you don't see any destabilization. Okay.

Let me chime in hear quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.  

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.   You can directly measure the time from input to minable on an actual node under your control and will observe that the time is hundreds of times faster than your estimate. Why?   Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it for some time before returning work.  Even if the pool long polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process...  which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).
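(A crude way to make that measurement yourself -- this sketch assumes a running, synced bitcoind reachable by bitcoin-cli, and getblocktemplate's argument requirements vary by version -- is to time how long the node takes to hand back a fresh block template:)

import subprocess
import time

# Time how long the node takes to produce a new block template.
start = time.time()
subprocess.run(["bitcoin-cli", "getblocktemplate"],
               check=True, stdout=subprocess.DEVNULL)
print("template ready in ~%.0f ms" % ((time.time() - start) * 1000))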

I noted you posted a result of a classification, did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

but you haven't verified that f2pool or Antpool has increased their minrelaytxfee have you to minimize their mempool?
I have, they'd previously cranked it down, and were producing small blocks and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day where you argued that the block verification times were only 80ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network and was therefore a refutation of his hypothesis of increasing block verification times (16-37sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions-- though I point out that's not fundamental: e.g. no matter how big the backlog is you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size  (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
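(Since, as noted, nobody has implemented that, here is only a sketch of the idea with invented names: keep a small "hot" tier holding roughly two blocks' worth of the highest-feerate transactions, refill it from the full backlog, and build templates only from the hot tier.)

MAX_BLOCK_BYTES = 1_000_000

def refill_hot_tier(backlog, hot_tier, target_bytes=2 * MAX_BLOCK_BYTES):
    """Sketch of a two-tier mempool: template construction only ever walks
    hot_tier, so its cost stays bounded however large the backlog grows.
    Entries are (feerate, size, txid) tuples; all names are invented."""
    total = sum(size for _, size, _ in hot_tier)
    for feerate, size, txid in sorted(backlog, key=lambda t: t[0], reverse=True):
        if total >= target_bytes:
            break
        hot_tier.append((feerate, size, txid))
        total += size
    return hot_tier
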
1786  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 02:41:27 AM
Clean and synched mempools makes for a cleaner blockchain, else garbage in - garbage out. Most mempools are synched because node owners don't usually mess with tx policy. They accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k is running with changed settings; as are a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

Quote
IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason-- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- as it can make censorship much worse. (OTOH, rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely)

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a x2 improvement.  Of course this is hugely dismissive because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry all the data in 1 second which earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement, if you're talking about latency it's more.  But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.

Quote
That min fee at 0.0005 is 14 cents, and most users consider this to be way too high, especially if BTC goes back to $1000 and this becomes 50 cents. I kicked off a poll about tx fees and 55% of users don't want to pay more than 1 cent, 80% of users think 5 cents or less is enough of a fee.
https://bitcointalk.org/index.php?topic=827209.0
GAH! I'm not saying it's a good setting-- I'm just giving a concrete example that nodes (and miners) can control their mempool sizes, as this was at odds with cypherdoc's expectations-- instead he thought miners might be suffering because of large mempools-- and I pointed out that if their mempool was too big they could simply reduce it, and he said he didn't believe me. I don't know how I could have made it more clear, but I hope it's clear now. Smiley

1787  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 07, 2015, 12:09:19 AM
for each block in the Blockchain, which will help answering Q1.  Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
No idea, I'm not aware of anything that tracks that-- also what does "typical" mean, do you mean stock unmodified Bitcoin Core?

I expect correlation between empty blocks and mempool size-- though not for the reason you were expecting here: CreateNewBlock takes a long time, easily as much as 100ms, as it sorts the mempool multiple times-- and no one has bothered optimizing this at all because the standard mining software will mine empty blocks while it waits for the new transaction list. So work generated in the first hundred milliseconds or so after a new block will usually be empty. (Of course miners stay on the initial work they got for a much longer time than 100ms.)

This is, however, unrelated to SPV mining-- in that case everything is still verified. As many people have pointed out (even in this thread) the interesting thing here isn't empty blocks, it's the mining on an invalid chain.

And before someone runs off with an argument that this aspect of the behavior instead defines some kind of upper limit-- optimizing the mempool behavior would be trivial if anyone cared to; presumably people will care to when the fees they lose are non-negligible.  Beyond eliminating the inefficient copying and such, the simple expedient of running a two-stage pool, where block creation is done against a smaller pool that contains only enough transactions for 2 blocks (which is refilled from a bigger one), would eliminate virtually all the cost. Likewise, as I pointed out up-thread, increasing your minfee can make your mempool as small as you like (the data I captured before was at a time when nodes with a default fee policy had 2.5 MB mempools).

First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread so you are just being contrary.
Uh. I don't care what the consensus of the "Gold collapsing" thread is, the UTXO set is not stored in memory. It's stored on disk, in the .bitcoin/chainstate directory.  (And as you may note, a full node at initial startup uses much less memory than the current size of the UTXO.) Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of RAM as you claim.
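(A quick way to see this for yourself -- a sketch assuming a default Linux datadir at ~/.bitcoin: compare the on-disk size of the chainstate directory with the bitcoind process's resident memory.)

import os

def dir_size_mb(path):
    """Total size of all files under `path`, in megabytes."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1e6

chainstate = os.path.expanduser("~/.bitcoin/chainstate")
print("UTXO database on disk: ~%.0f MB" % dir_size_mb(chainstate))
# Compare with bitcoind's resident memory (e.g. `ps -o rss= -C bitcoind`):
# at startup it is far smaller than the chainstate, because only the
# dbcache-sized working set is ever held in RAM.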

Quote
Second, my reference to Peters argument above aid nothing about mempool; I was talking  about block verification times. You're obfuscation again.
In your message to me you argued that f2pool was SPV mining because "the" mempool was big. I retorted that their mempool has nothing to do with it, and besides they can make their mempool as small as they want. You argued that the mempools were the same, I pointed out that they were not. You responded claiming my responses were inconsistent with the points about verification delay; and I then responded that no-- those comments were about verification delay, not mempool. The two are unrelated.  You seem to have taken as axiomatic that mempool == verification delay, a position which is technically unjustified but supports your preordained conclusions; then you claim I'm being inconsistent when I simply point out that these things are very different and not generally related.

Quote
Third, unlike SPV mining if 0 tx blocks like now, didn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level of other larger miners were allowed to clear out the unconfirmed TX set.
I think your phone made your response too short here, I'm not sure where you're going with that.

When you're back on a real computer, I'd also like to hear your response to my thought, that it is "Super weird that you're arguing that the Bitcoin network is overloaded at the average space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap."

Just from knowing a little about database tuning and ram vs. disk-backed memory, I have always wondered if people have make projections about performance of the validation process under different scenarios and whether they can/will become problematic.  One think I've always wondered if it would be possible to structure transactions such that they would load validation processes to heavily on queue, and particularly if it is common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "The utxo set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off-- and the fall-off is only enormous for non-SSD drives.  Maybe the working set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.

When you talk about "would it be possible" do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves unless they had some arrangement with most of the hashpower to accept their block.
1788  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 09:38:13 PM
no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

Quote
have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.
"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger.

Super weird that you're arguing that the Bitcoin network is overloaded at the average space usage in blocks, while you're calling your system "under utilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap.

Quote
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and who are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
well, that was precisely Peter's mathematical point the other day that you summarily dismissed.  f2pool and Antminer are NOT in a similar position on the network as they are behind the GFC.  they have in fact changed their verification policies in response to what they deem are large, full blocks as a defensive measure.  that's why their average validation times are 16-37sec long and NOT the 80ms you claim.  thus, their k validation times of large blocks will go up and so will their number of 0 tx SPV defensive blocks. and that's why they've stated that they will continue to mine SPV blocks.  thanks for making his point.
PeterR wasn't saying anything about mempools, and-- in fact-- he responded expressing doubt about your claim that mempool size had anything to do with this.  Moreover, I gave instructions that allow _anyone_ to measure verification times for themselves.  Your argument was that miners would be burned by unconfirmed transactions, I responded that this isn't true-- in part because they can keep whatever mempool size they want.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}


Quote
it also is a clear sign that miners do have the ability and financial self interest to restrict block sizes and prevent bloat in the absence of a block limit.
Their response was not to use smaller blocks, their response was to stop validating entirely.  (And, as I pointed out-- other miners are apparently mining without validating and still including transactions).

Quote
these SPV related forks have only occurred, for the first time ever, now during this time period where spammers are filling up blocks and jacking up the mempool.  full blocks have been recognizable as 950+ and 720+kB.  this is undeniable.
If we're going to accept that every correlation means causation, what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and how loud and opinionated you are on this blocksize subject?

In this case, these forks are only visible by someone mining an invalid block, which no one had previously done for over a year.

Quote
if they are seeing inc orphans, why haven't they retracted their support of Gavin's proposal
They are no longer seeing any orphans at all; they "solved" them by skipping validation entirely. They opposed that initial proposal, in fact, and suggested they could at most handle 8MB, which brought about a new proposal which used 8MB instead of 20MB, though only for a limited time. Even there the 8MB was predicated on their ability to do verification-free mining, which they may be rethinking now.

Quote
i don't believe that.
I am glad to explain things to people who don't understand, but you've been so dogmatically grinding your view that it's clear that every piece of data you see will only "confirm" things for you; in light of that I don't really have unbounded time to waste trying. Perhaps someone else will.
1789  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 06:49:46 PM
since small pools can also connect to the relay network, and i assume they do, there is no reason to believe that large miners can attack small miners with large blocks.  in fact, we've seen the top 5 chinese miners deprecated due to the GFC making it clear they CANNOT perform this attack despite what several guys have FUD'd.
Basic misunderstanding there-- being a larger miner has two effects. One is throughput related, not latency related: being larger creates a greater revenue stream which can be used to pay for better resources.   E.g. if the income levels support one i7 CPU per 10TH/s of mining, then a 10x larger pool can afford 10x more CPUs to keep up with the overall throughput of the network, which they share perfectly (the relay network is about latency, not so much about throughput-- it's at best a 2x throughput improvement, assuming you were bandwidth limited).  The other is latency related: imagine you have a small amount of hashpower-- say 0.01% of the network-- and are a light-second away on the moon.  Any time there is a block race, you will lose, because all of the earth is mining against you and they all heard your block 1+ seconds later.  Now imagine you have 60% of the hashpower on the moon; in that case you will usually win, because even though the earth will be mining another chain, you have more hashpower. For latency, the size of the miner matters a lot, and the size of the block only matters to the extent that it adds delay.

When it comes to orphaning races, miner size matters, by an amount related to the product of the size of the miner and the time it takes to validate a block.
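(A simplified model of that relationship -- a sketch only, assuming Poisson block arrivals and nominal numbers: the chance that a competing block appears during your extra validation/propagation delay grows with that delay and with the fraction of hashpower that isn't yours.)

import math

def orphan_probability(delay_seconds, my_hashrate_share, block_interval=600):
    """Chance the rest of the network (share 1 - yours) finds a competing
    block while you spend `delay_seconds` validating and propagating.
    Simplified Poisson model; for small delays this is approximately
    delay * (1 - share) / interval."""
    competing_rate = (1 - my_hashrate_share) / block_interval
    return 1 - math.exp(-competing_rate * delay_seconds)

# A 25-second handicap hurts a 1% miner more than a 40% miner:
print(orphan_probability(25, 0.01))   # ~4.0%
print(orphan_probability(25, 0.40))   # ~2.5%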

Quote
how can that be?  mining pools all use a full node around which they coordinate their mining.  all full nodes are relatively in sync with their mempools  
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and who are similarly positioned on the network will be similar, but any change in their policies will make them quite different.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

Quote
pt being, it's statistically unlikely that full blocks today represent the magical level of "large" blocks that Satoshi set 6 yrs ago.  the problems we are having with the forks are a result of the defensive tactics being taken from those full blocks.

Almost none of the blocks have been 1MB; the issues arise before then. _Consistent_ 1MB blocks wouldn't have been supportable on the network at the time that limit was put in place-- back in the 0.5.x-ish days we were getting up to 2 minutes for a 100k block to reach the whole network; the 1MB was either forward-looking, set too high, or only concerned about the peak (and assuming the average would be much lower) ... or a mixture of these cases.

Quote
have the Chinese miners given you a technical reason why they're SPV'ing?

F2Pool reported that as block sizes grew they saw increased orphaning rates, and that they were seeing an orphan rate of 4%, though this was at a time before the relay network and when GHash in Europe had ~50% of the hashpower under them.  Excluding the recent issues they've had almost no orphans since, they report.

Then why don't we decrease the blocktime from 10 min down to let's say 2 min. This way we can also have more transactions/second without touching the blocksize.
Ouch,  the latency-related issues are made much worse by smaller interblock gaps once they are 'too small' relative to the network radius. When another block shows up on the network faster than you can communicate about your last one, you get orphaned.  And for throughput-related bottlenecks it doesn't matter if X transactions come in the form of one 10MB block or ten 1MB blocks.
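(A rough worked comparison -- the 15-second propagation/validation delay and the block sizes are purely illustrative: shrinking the interval raises the orphan rate while the throughput, in bytes per second, stays the same.)

import math

delay = 15  # seconds for a block to reach and be validated by the network

for interval, block_mb in ((600, 10), (120, 2)):   # same total throughput
    orphan_rate = 1 - math.exp(-delay / interval)
    throughput_kb_s = block_mb * 1000 / interval
    print("%4ds blocks: ~%.1f%% orphaned, %.1f kB/s throughput"
          % (interval, orphan_rate * 100, throughput_kb_s))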



1790  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 05:40:03 PM
1.  Why do larger mining pools have less orphans, assuming most miners even small ones are connected to the relay network?
Because it greatly reduces the time it takes to transmit blocks but does not completely eliminate it-- nothing can (due to the speed of light).  In particular, something I didn't know until my conversation with them on July 4th: the nearest relay network hub to F2Pool is still 200ms away due to insane routing that sends traffic between some networks in China and Singapore via the US (thanks NSA?).

Quote
2. Even if mining pools set higher fees, aren't the unconfirmed TX's still added to their mempools?
No.

Quote
3. How is it that 1MB just "happened" to be the magic number at which blocks are deemed to be "large" ?
I don't know what you're talking about there.  AFAICT F2Pool would also consider e.g. 750k "large".

Do you mean why was 1MB selected as the particular hard limit in the protocol?   ::shrugs:: It happens to be the highest value you could sync over a modem and stay up with the network (though not for mining, due to latency), though that could be by chance.
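(The arithmetic behind that, with nominal figures -- a 56 kbit/s modem and 1MB blocks every ten minutes:)

block_bytes = 1_000_000
block_interval_s = 600
modem_kbit_s = 56          # nominal dial-up downstream

required_kbit_s = block_bytes * 8 / block_interval_s / 1000   # ~13.3 kbit/s
seconds_per_block = block_bytes * 8 / (modem_kbit_s * 1000)   # ~143 s

print("needed: ~%.1f kbit/s of %d kbit/s available" % (required_kbit_s, modem_kbit_s))
print("each 1MB block takes ~%.0f s to arrive: fine for syncing, far too slow for mining"
      % seconds_per_block)
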
1791  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 06:32:13 AM
Bitcoin sits at a strange intersection of computer science, mathematics, economics and sociology, and we can all probably learn a bit from each other.  Communication is hard, especially across disciplines.  
That's quite fair and true.

As an aside, today's 3-block invalid chain reorg included a 'v3' block on the invalid fork which contained a lot of transactions, which may suggest someone is SPV mining while including transactions (something I'd pointed out was possible previously).
1792  Economy / Scam Accusations / Re: PSA: cypherdoc is a paid shill, liar and probably epic scammer: HashFast affair on: July 06, 2015, 06:15:17 AM
The original court document has him representing himself. So cypherdoc thinks that he is the "LeBron James" of Bitcoin world? ... does that make HashFast the Cavaliers?
He seems to have representation now (it's in the legal docs, but PM me if you need contact information for his attorneys for some reason).

Makes sense, after all-- 3000 BTC pay for a lot of representation.  I assume part of the point of getting the attachment is so that he can't spend it all on his legal defense and then claim to be insolvent after losing. Sad


1793  Economy / Scam Accusations / Re: PSA: cypherdoc is a paid shill, liar and probably epic scammer: HashFast affair on: July 06, 2015, 05:03:55 AM
Hmmm, where does the 10% figure come from, did HashFast pull in 30,000 BTC?
From the complaint, the agreement was 10%, and yes-- more than 30,000 BTC it appears.

The audio recording in the response is hilarious, Cypherdoc's attorney describes him as "the LeBron James of the Bitcoin world"-- presumably that was the least ridiculous claim they could deploy to provide any other rationale for the amounts paid, in an effort to avoid the conclusions people actually in the Bitcoin world are drawing in this thread.
1794  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: July 06, 2015, 04:40:08 AM
ok, we've all been lead to believe up to now that validation of tx's had to occur twice by full nodes.  first, upon receipt.  second, upon receipt of the block.  this was crucial to the FUD scare tactic of decrementing
[...]
what am i missing?
Where I explicitly pointed out to you in many places, in excruciating detail, that this was not at all the case??  https://www.reddit.com/r/Bitcoin/comments/39tgno/letting_miners_vote_on_the_maximum_block_size_is/cs6rek5?context=3   You seemed so happy to argue with it before, has your memory vanished now that you don't think it would be convenient for you?

what's interesting is that we've never seen it done to the degree it is now.  we had the Mystery Miner a few years ago but he stopped it pretty quick.  also, despite many upgrades added to the protocol previously, we've never had a fork as a result of SPV mining before either.  what's different this time is the consistently full blocks and the fact that Wang Chun told us they create SPV blocks in response to large blocks as a defense.  it seems they consider full blocks large blocks so the excessive SPV mining created last nights fork in light of BIP66 and the upgrade to 0.10.x.  so in that sense, the 1MB cap is the direct cause of what is happening.  

The incoherence in some of these posts is foaming so thick that it's oozing out and making the floor slick; careful-- you might slip and mess up your future as "the LeBron James of the Bitcoin world" (as your attorney described you (18:30), under oath, to a federal judge as part of litigation related to your possession of 3000 BTC taken primarily from members of this forum).

As miners have created larger blocks F2Pool experienced high orphaning (>4% according to them); they responded by adding software to mine without transferring or verifying blocks, to avoid delays related to transferring and processing block data. Contrary to your claim-- the blocksize limit stems the bleeding here. Their issue is that large blocks take more time to transfer/handle and that they're falling behind as a result. Making blocks _bigger_ would not help this problem, it would do the _opposite_. If a miner wanted to avoid any processing of the transaction backlog they'd simply set their minimum fee high and they'd never even mempool the large backlog.

Reasonable minds can differ on the relative importance of different considerations, but when you're falling all over yourself to describe evidence against your position as support of it-- redefining F2Pool's crystal clear and plain description of "large blocks" as their source of problems with the technically inexplicable "full" that you think supports your position-- it really burns up whatever credibility you had left. That you can get away with it in this thread without a loud wall of "WTF" just shows what a strange echo chamber it has become.
1795  Bitcoin / Bitcoin Discussion / Re: How bitcoin dev's are helping to kill bitcoin on: July 06, 2015, 04:08:41 AM
The guy sounds rude but he is right in one point: to be forced to reindex the blockchain because the computer had a problem or because there was a power outage is not nice at all. I run bitcoin-qt --testnet for development reasons and everytime that my laptop's battery is empty, or a windows update restart the machine, I have to reindex again. Sure, reindexing the test chain is very fast but not fun in any way.
The database is intended to handle unclean shutdowns and I tested it with _thousands_ of unclean shutdowns previously.  There may be system-specific reasons or hardware problems that are causing you to see corruption on power loss; they're certainly not expected or intended (or experienced by everyone else). If it is the case that your system is outright corrupting recently written blocks on unclean shutdown then there may be no real remedy possible.
1796  Bitcoin / Bitcoin Discussion / Re: How bitcoin dev's are helping to kill bitcoin on: July 06, 2015, 03:44:20 AM
Also why downloading blocks requires so much time unless you  either open ports , download the chain ,
Neither of these two things improve the time it takes to download blocks.
1797  Bitcoin / Development & Technical Discussion / Re: Transaction v3 (BIP 62) on: July 05, 2015, 09:48:41 PM
Means that if the version of the tx is different from 1 then the transaction is not standard.
So, at the end of the day, contrary to what BIP62 announce, there is no tx version 3 right ?
There is no BIP62 in the Bitcoin system today. It's an incomplete proposal; so there is no support for BIP62 at all.
1798  Bitcoin / Bitcoin Discussion / Re: How bitcoin dev's are helping to kill bitcoin on: July 05, 2015, 09:47:10 PM
in other hand...you are still considering Bitcoin Core as Desktop application? Maybe something changed recently, but I still remember, that after each reboot was my 4 years old laptop ~30 minutes almost unusable, because core start syncing blocks from previous day and I didn't have SSD back in the time:)
Well you called out the reason there-- not having an SSD.  I run Bitcoin Core on a battery-life-optimized ultralight laptop and never notice it running.  That it's a fine desktop application doesn't mean that it's super awesome on a 4-year-old non-SSD laptop. The much larger blocks these days do require more work to catch up on, but the software is also quite a bit faster.
1799  Bitcoin / Development & Technical Discussion / Re: Numerically finding an optimal block size using decentralisation-utility on: July 05, 2015, 07:55:16 AM
Assuming a user will use all of their bandwidth is probably not good. You probably need to assign a marginal utility to running a node and then scale their tolerance based on their marginal utility and the available bandwidth.

I haven't personally chased this kind of approach because there will be so many judgements that the result won't really be interesting to anyone but the person who created it... but perhaps I'm wrong.

1800  Bitcoin / Bitcoin Discussion / Re: How bitcoin dev's are helping to kill bitcoin on: July 05, 2015, 07:47:06 AM
dude, you should finally realize, that bitcoin core is not desktop application anymore. it consumes lot of disk space, bandwidth or cpu and you can simply avoid all this using electrum or multibit.
next time just educate yourself little bit before verbally attacking guys, which are obviously smarter than you..
This isn't fair or correct.

Bitcoin Core should work fine for Decksperiment, it shouldn't even mind his poor attitude.

But it's not-- even though it works fine for many, many other people-- unfortunately Decksperiment has not provided enough useful information to begin troubleshooting, and his approach doesn't exactly encourage anyone competent to try to help.

If you're on Windows and running the latest version, then I suggest running memtest86 and some disk check-- defective hardware is a common cause of issues. If you have antivirus software, some of it is known to corrupt Bitcoin; you should disable it or exclude the Bitcoin directories.
