Bitcoin Forum
3021  Bitcoin / Armory / Re: Negative balance after restore on: September 04, 2015, 08:44:49 PM
Start Armory. Pick User -> Expert Mode

Go to your wallet's properties dialog, click on the figure next to Addresses Used.

Pick an amount to extend the address chain by, then click Compute. It will rescan the wallet, after which you should see your balance.
3022  Bitcoin / Armory / Re: Negative balance after restore on: September 04, 2015, 06:22:19 PM
Try Help -> Rebuild and Rescan
3023  Bitcoin / Development & Technical Discussion / Re: Really not understanding the Bitcoin XT thing... on: September 03, 2015, 11:11:30 AM
This doesn't happen.

May well in some future scenario, but certainly not now.

I am arguing it would happen with large propagation times, so we are not contradicting each other.

It's incendiary one way or another.

To get a second opinion, find someone who has worked in dynamic display ads and ask them how much they love data.  High value data like precise transaction logs probably has a broad range of opportunities for monetization which go far beyond simply spamming people with ads, but I suspect that just that alone would be enough to support a very robust monetary framework infrastructure and reason to work hard on gaining the largest footprint possible.

I am not doubting the value of data mining financial transaction logs, but you would have to prove Hearn is trying to modify the network in a way that makes this feasible on the blockchain, and that such a method can remain private to a few select people, in order to show he could turn a first-hand profit from it. After all, this dataset is public and there is no intuitive link between addresses and individuals.

Possibly open a book on what their sfII character of preference is  Cheesy Unless they're one of those "wildcard" assholes  Angry  ( Grin)

I like Ken
3024  Bitcoin / Development & Technical Discussion / Re: Why I support Jeff's BIP100 on: September 03, 2015, 10:58:32 AM
Even with no block subsidy and assuming all other miners always sweep the mempool I expect F will be greater than zero.  I think Alice from the example above could still find a viable mining strategy in only accepting 10-satoshi transactions and waiting for the mempool to grow sufficiently "plump" before beginning to hash.

That's assuming block space demand exceeds block space supply. Otherwise the mempool would essentially be empty after every block. Clearly there is a debate on the projected supply vs demand, but in the absence of a block size limit, I'm expecting the technological gain from fast relay networks will keep the supply way ahead of the demand for pretty much ever.

My expectation is that as long as there is a realistic block size limit in place, the Nash equilibrium will put upward pressure on fees. In the absence of a realistic limit, the Nash equilibrium will induce the opposite effect and fees will not be sufficiently high to support proper difficulty.

So my response to this:

Quote
It may be profitable for a miner to increase the minimum fee they will accept

would be "only if the Nash equilibrium supports it". Which is the same as saying that demand will outgrow supply, which implies there is not enough block space to wipe the mempool of fee paying transactions.

The corollary to this statement would be that if a miner can wipe the mempool, then competing miners cannot afford to do any less.
3025  Bitcoin / Development & Technical Discussion / Re: Proposal: dynamic max blocksize according to difficulty on: September 03, 2015, 10:38:28 AM
When the hash rate increases because of new ASIC, the difficulty rises, this will allow for the max block size to increase. This increases supply so transaction fees go down, which will cause an increase in block space demand. All of which if you think it over, it makes sense. The bitcoin network has gotten stronger because of a massive increase of hash rate due to new ASIC, therefore it stands to reason it should support more transaction throughput.

Again there is no link between increased hash rate and block space demand. If a new ASIC tech is unleashed on the market tomorrow, why would transaction volume go up? Your local bank gets a new paint job, a larger parking lot, a McD right next to it and a larger safe, and your reaction is "ima purchase more goods and services"? Where is your extra purchasing power coming from? Isn't the fact that your bank is upgrading the result of the fees it's been charging its customers? What leads you to believe said customers are somehow wealthier as a result of this? If anything, shouldn't it be the opposite?

You have demonstrated yourself that if your algorithm was applied from day one, the current block size would be completely bloated and unrealistic. Your argument for supporting your method is that ASIC technology has caught up to its technological debt and is now dependent on Moore's law.

The implication is simple: difficulty growth is only a valid metric within certain bounds (that's your proposition, not mine, I'm just deducing). So again, how does your algorithm deal with situations where difficulty grows outside of its "healthy" bounds?

Quote
Intel has been around for decades, bitcoin only 6 years, of which 3 has shown dramatic growth.

That's not my point. My point is Intel is not building Bitcoin ASIC miners right now. If Bitcoin's market cap grows big enough for Intel to start building mining hardware, chances are difficulty will be growing much faster than Moore's law dictates for a few extra years.

Quote
It doesn't matter really at which rate block space demand grows. If it grows slowly, then transaction fees stay low and investment in mining will be low.

The fee market is not an end in and of itself. It is a means to support certain network mechanics, one of which is paying miners enough to achieve acceptable blockchain security. Not just acceptable, but high enough that it is more profitable for a brute-force attacker to just mine blocks than to try and rewrite block history.

You should design your proposal with the purpose of the fee market as your goal, not to simply sustain any fee market.

Quote
Unfortunately we don't have any other metric to determine the max blocksize.

There are plenty of other metrics in the blockchain to represent block space demand. Over a difficulty period, consider total fees paid, average fee density, UTXO set size, total value transferred, average value per UTXO, or straight-up average block size. Plenty of stuff to get creative with.
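To make the idea concrete, here is a minimal sketch of aggregating such demand metrics over one difficulty period (2016 blocks). The `Block` fields are invented for illustration, not any actual Core or Armory API:

```python
# Hypothetical sketch: block-space-demand metrics over a difficulty period.
from dataclasses import dataclass

@dataclass
class Block:
    size: int        # serialized size in bytes
    total_fees: int  # satoshis paid in fees
    value_out: int   # satoshis moved (assumed known for illustration)

def demand_metrics(blocks):
    """Summarize demand signals over a difficulty period."""
    n = len(blocks)
    total_fees = sum(b.total_fees for b in blocks)
    total_size = sum(b.size for b in blocks)
    return {
        "avg_block_size": total_size / n,
        "total_fees": total_fees,
        # fee density: satoshis paid per byte of block space consumed
        "avg_fee_density": total_fees / total_size,
        "total_value_transferred": sum(b.value_out for b in blocks),
    }

# one difficulty period of identical illustrative blocks
period = [Block(size=900_000, total_fees=25_000_000, value_out=10**12)] * 2016
m = demand_metrics(period)
```

Any of these aggregates (or a combination) could serve as the demand trigger the surrounding posts argue for.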
3026  Bitcoin / Development & Technical Discussion / Re: Synchronizing wallet balance from a pruned node on: September 03, 2015, 10:09:07 AM
Yeah, the P2P layer could be used, with those bloom filters. The BitcoinJ implementation is bugged and doesn't provide privacy, and reading through the documents it's not clear to me how that could be fixed.

I'm suggesting using the P2P layer with your local node (instead of RPC only) for the added functionality. Unless pruned nodes won't accept bloom filters (I have no idea whether they do or not), this is the easiest path to achieve your functionality. And since the node is local, the privacy issue goes out of the way.

I don't think there is a technical limitation preventing pruned nodes from fulfilling a bloom filter request. After all, they store the UTXO set, and the payment address can be extracted from each entry. There is no real difference in that regard compared to a full node, besides the set of TxOuts being smaller.
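As a rough illustration of why only the UTXO set is needed, here is a toy sketch: a simplistic bloom filter (two SHA256-derived hash functions over a bit array, nothing like Core's actual BIP 37 implementation) matched against a mock UTXO set keyed by script:

```python
# Toy bloom filter matched against a pruned node's UTXO set.
import hashlib

class ToyBloom:
    def __init__(self, n_bits=1024):
        self.n_bits = n_bits
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item: bytes):
        # two cheap hash functions derived from salted SHA256
        for seed in (b"a", b"b"):
            h = int.from_bytes(hashlib.sha256(seed + item).digest()[:4], "big")
            yield h % self.n_bits

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def matches(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# The wallet registers its script; the node scans only its UTXO set.
wallet_filter = ToyBloom()
wallet_filter.add(b"script_pubkey_1")
utxo_set = {b"script_pubkey_1": 50_000, b"script_pubkey_2": 10_000}  # script -> value
balance = sum(v for script, v in utxo_set.items() if wallet_filter.matches(script))
```

The point is only that nothing here requires historical blocks: the scan is over scripts the pruned node already keeps (bloom filters admit false positives, so a real node would return candidate outputs for the client to verify).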

Quote
Presumably I should read through the bitcoin dev mailing list to figure out how the developers imagined using a pruned node would work.
I don't see a way around requiring a pruning node to redownload the entire blockchain whenever a wallet is imported. Presumably this might end up as the accepted way of doing things, users should recover or import new wallets rarely.

Depends on the operating mode. If they do away with the wallet history feature and only stick to balances, it should be pretty straightforward. Or they could bootstrap a wallet à la SPV, once they replace these useless bloom filters with committed maps (I think that's the name).
3027  Bitcoin / Development & Technical Discussion / Re: Why I support Jeff's BIP100 on: September 02, 2015, 11:45:30 PM
Imagine a world where basically all fees are <= 1 satoshi.  Suppose Alice is a miner with 5% of the network's total hashrate.  Alice could advertise that she will no longer be processing all transactions but only those that pay at least 10 satoshis.  Each bitcoin user now has the option of paying 9 extra satoshi to reduce the expected first transaction waiting time by about 30 seconds.  Supposing this extra utility is worth the 9 satoshi in enough cases, Alice would increase her revenue.

Not necessarily. You should look at the problem the other way around. If all miners will only mine fee-F transactions, and suddenly one of them decides to just indiscriminately wipe the mempool with every block it finds, then the average fee will go down.

You should also consider the tapering of inflation. Currently the coinbase reward composes the grand majority of miner revenue, so they can afford to mine small blocks as a result of refusing to integrate any transaction with fee < F. That will certainly push the average fee up. However, as the coinbase reward keeps diminishing, we will eventually reach an equilibrium where a miner cannot afford to mine too small a block (based on the fee density he expects) and will either have to take on these transactions paying below F, or not mine blocks until the mempool is "plump" enough (which is not viable).
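The equilibrium argument can be sketched with toy numbers (all values illustrative, not a model of real fee levels): while the subsidy dominates, refusing sub-F transactions costs almost nothing, but at zero subsidy the same policy forfeits a real share of revenue.

```python
# Toy revenue comparison: minimum-fee policy vs sweeping the mempool,
# before and after the coinbase subsidy tapers off. Illustrative numbers.

def block_revenue(subsidy, mempool_fees, min_fee=0):
    """Revenue of one block when only fees >= min_fee are included."""
    return subsidy + sum(f for f in mempool_fees if f >= min_fee)

mempool = [1, 1, 2, 5, 10, 10, 20]  # satoshi fees of waiting transactions

# With a large subsidy, skipping sub-10-satoshi transactions barely matters:
early = block_revenue(subsidy=25 * 10**8, mempool_fees=mempool, min_fee=10)
# With no subsidy, that same policy forfeits measurable revenue:
late_picky = block_revenue(subsidy=0, mempool_fees=mempool, min_fee=10)
late_sweep = block_revenue(subsidy=0, mempool_fees=mempool)
```

The gap between `late_sweep` and `late_picky` is exactly the sub-F fees left on the table, which is what pushes the miner toward accepting them or waiting for a "plump" mempool.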
3028  Bitcoin / Development & Technical Discussion / Re: Synchronizing wallet balance from a pruned node on: September 02, 2015, 10:40:42 PM
One issue I've just realised is the gettxout call also requires a numeric vout value. That's the number that goes with the txid.
There's no way to tell how many outputs a transaction has, so best you could do is try all numbers from zero to about 30 or 40 (?) And then you're wasting a lot of time and still might miss outputs.

As long as you have a Tx size you can guesstimate the upper bound on TxOut count per Tx. Short of that, block size could give you a broader range, but then you would have to resolve tx hashes to block height.

Not sure how much of that data is available through the RPC in pruned mode. I'm very familiar with block chain analysis but I work directly with raw block data, never through the RPC. Maybe you are better off using the P2P layer in an attempt to query more relevant data.
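For illustration, a rough way to bound the vout range from the transaction size alone. The constants are minimum serialized sizes assuming P2PKH outputs and a one-input transaction (a real spending input carries a nonempty scriptSig, so this is a loose upper bound):

```python
# Rough upper bound on a transaction's output count given only its size.

MIN_OUTPUT_BYTES = 8 + 1 + 25   # value + script-length varint + P2PKH script
MIN_INPUT_BYTES = 36 + 1 + 4    # outpoint + (empty) scriptSig varint + sequence
TX_OVERHEAD = 4 + 1 + 1 + 4     # version + in-count + out-count + locktime

def max_vout_guess(tx_size: int) -> int:
    """Upper bound on the vout indices worth probing via gettxout."""
    usable = tx_size - TX_OVERHEAD - MIN_INPUT_BYTES
    return max(usable // MIN_OUTPUT_BYTES, 1)
```

For a typical ~250-byte transaction this caps the probe range at a handful of indices instead of a blind 30-40.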
3029  Bitcoin / Development & Technical Discussion / Re: Synchronizing wallet balance from a pruned node on: September 02, 2015, 06:41:48 PM
3) is a bad idea since you should expect a DB engine to lock access to a single process by default. I would not write code that assumes otherwise, which implies you would need to bootstrap your own history DB without Core running, then use a different code path to maintain your own DB straight from block data.

2) is tedious and what are the chances that would be merged into Core?

1) is how I would do it, pull all blocks and check each for relevant UTXOs. This process can be very fast if you parallelize it, but you don't necessarily need to since it's just the original bootstrapping that will be resource intensive. Maintenance won't be nearly as costly and can use the exact same code path with bounds on block height.
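A minimal sketch of option 1, with blocks modeled as plain tuples rather than raw block data (a real implementation would parse the block files, and the same code path can then run maintenance over a bounded height range):

```python
# Bootstrap a wallet by scanning blocks in parallel for outputs paying
# the wallet's scripts. Blocks are modeled as (txid, vout, script, value)
# tuples for illustration.
from concurrent.futures import ThreadPoolExecutor

def scan_block(block, wallet_scripts):
    """Return the outputs in `block` that pay one of wallet_scripts."""
    return [(txid, vout, value)
            for (txid, vout, script, value) in block
            if script in wallet_scripts]

def bootstrap(blocks, wallet_scripts, workers=4):
    """Scan all blocks concurrently; map() preserves block order."""
    hits = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for found in pool.map(lambda b: scan_block(b, wallet_scripts), blocks):
            hits.extend(found)
    return hits

blocks = [
    [("tx1", 0, "scriptA", 100), ("tx1", 1, "scriptB", 50)],
    [("tx2", 0, "scriptA", 25)],
]
utxos = bootstrap(blocks, wallet_scripts={"scriptA"})
```

Maintenance after bootstrap is the same call over just the new height range, which is why the parallelism only matters the first time.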
3030  Bitcoin / Development & Technical Discussion / Re: Proposal: dynamic max blocksize according to difficulty on: September 02, 2015, 11:18:21 AM
Generally, this kind of proposal cannot function without a second factor, which is meant to define a resizing threshold.

How is your code supposed to distinguish between the release of a new ASIC and actual growth in block space demand?

Quote
However I believe this to be nearing its maximum efficiency and closing in with Moore's Law.

Not necessarily. Consider that Intel is a $55B business whereas Bitcoin as a whole has a $3~4B market capitalization. We're not yet at a stage where professional chip manufacturers have an incentive to build ASICs.

And generally, what makes you believe block space demand will grow at least at the same pace as Moore's Law?

Your proposal has no thresholds. It simply links block size evolution to difficulty. I believe difficulty changes are a good metric to define by how much the block size should be changed, but difficulty is not a good enough metric to reflect block size demand.

You need another factor to track block size demand, and use that as a trigger for resizing. On top of that, a decay function would be preferable for when the resizing condition doesn't trigger. That's a safety harness that will correct the effect of spam attacks, large miners trying to game the system, and too large a jump in block ceiling.
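A toy version of such a two-factor rule might look like this (all thresholds and factors are invented for illustration): difficulty change sets the step size, a separate demand signal decides whether to grow at all, and a decay factor acts as the safety harness when the trigger doesn't fire.

```python
# Sketch: difficulty-driven step, demand-gated trigger, decay otherwise.

def next_max_size(cur_max, difficulty_ratio, avg_fullness,
                  grow_threshold=0.9, decay=0.95, max_step=1.1):
    """
    cur_max          -- current max block size (bytes)
    difficulty_ratio -- new_difficulty / old_difficulty
    avg_fullness     -- avg block size / cur_max over the period (0..1)
    """
    if avg_fullness >= grow_threshold:
        # grow, but cap the jump regardless of how fast difficulty moved
        return round(cur_max * min(difficulty_ratio, max_step))
    # demand didn't trigger: decay back toward actual usage
    return round(cur_max * decay)

# Full blocks and rising difficulty -> grow by the capped step:
grown = next_max_size(1_000_000, difficulty_ratio=1.3, avg_fullness=0.95)
# Half-empty blocks -> decay even if difficulty exploded (new ASICs):
shrunk = next_max_size(1_000_000, difficulty_ratio=2.0, avg_fullness=0.5)
```

The second case is the whole point: an ASIC-driven difficulty spike with no matching demand does not move the cap up, and the decay unwinds spam-driven growth.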
3031  Bitcoin / Development & Technical Discussion / Re: Really not understanding the Bitcoin XT thing... on: September 01, 2015, 10:27:25 PM
BUT ...
Now does that matter which fork you mine on? No. Since all that matters is who finds the next block to decide the fork.
Bitcoin core's rule is ludicrously simple and works.
If you are building on a block at height X, and another valid block appears at height greater than X, start building on the new block. Anything else, ignore it.
It doesn't matter which (valid) fork that new block is on, just switch to it.
So yes you could choose to help some other pool confirm its 1s, 10s, 100s late block, if you choose to, but you won't gain anything on the bitcoin network by doing that ... without a cartel setup where they say they will do that for you also.

You affirm this because you are not considering the compound effect of the significant network "friction" the example assumes, as well as the new incentives the orphan race creates for M and m.

Let's go back to the example in my previous post: M as the large miner, m as the small miner, N as the neutral miner, and significant propagation time. What you are stating is that in the case of a race between M and m, it doesn't matter to N which block it mines off of, it only matters that both blocks extend the last known top. After all, you are only "wasting" hash rate if you mine on top of a block below the max height.

Or are you? Clearly there is no incentive to ignore a valid solution if the only thing you are going after is the coinbase reward + average fees. But what if there was an extra incentive? Say someone emits a ZC paying 1000 BTC to address A with a small fee, and as soon as this transaction is mined, emits a second transaction that spends the same TxOuts to address B, this time paying a 50 BTC fee. Wouldn't it be in every miner's interest (except the one which mined the block paying A) to orphan that last block?

Not saying this is what's happening in the orphan race per se, or that it is good practice to accept a 1000 BTC payment after a single confirmation, but the point stands that there can be incentives to actively orphan a block. And in the case of an orphan race, it is a coinbase reward + fees hanging in the balance.
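The incentive in the 1000 BTC example can be checked with back-of-the-envelope numbers (the probability and fee values are placeholders, and losing the race is simplified to a zero payoff):

```python
# Expected value of extending the chain vs attempting the orphan,
# when a 50 BTC replacement fee rides on orphaning the last block.

SUBSIDY = 25.0   # BTC, 2015-era coinbase reward
AVG_FEES = 0.5   # BTC, ordinary fees per block (illustrative)
BOUNTY = 50.0    # BTC, fee on the replacement transaction

def ev_extend():
    # mine on top as usual: one ordinary block reward
    return SUBSIDY + AVG_FEES

def ev_orphan(p_win):
    # win the one-block race: ordinary reward plus the bounty; lose: nothing
    return p_win * (SUBSIDY + AVG_FEES + BOUNTY)

# Orphaning pays whenever p_win * (subsidy + fees + bounty) > subsidy + fees:
break_even = ev_extend() / (SUBSIDY + AVG_FEES + BOUNTY)
```

With these numbers the break-even win probability is about 0.34, i.e. well below a 51% requirement, which is the point of the example.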

The proposition goes as follows: in the case of a race between M and m, M is expected to recruit over 51% of the network hash rate behind its block simply because M > m. Now we have 2 miner groups, where G mines off of M's block and g mines off of m's block. There are 2 outcomes:

1) G finds a solution first for height H+1, validating M's block at height H. At this point m has lost the race and its block reward, but this result is irrelevant to the other miners in g.
2) g finds a solution first for height H+1. However, M knows it can propagate its solutions faster than m, so there is a window in which M can actively try to orphan g's block and still propagate its own solution for H+1 to 51% of the network faster than the g miner can. M has an incentive to do so, which is to save its reward for H.

What does that mean for N?

a) If N is part of G and finds a solution to block H+1 first, there is virtually no chance this solution will get orphaned. After all, N is guaranteed M will back this solution, so there is at least that much extra hash rate guaranteed, which reduces the propagation time of N's solution. If m can't compete against M's propagation time, there is no reason to believe it can compete against M + N's.

b) If N is part of g and finds a solution to block H+1, it knows M will attempt to orphan this block for a short period of time, and may succeed. M will try to orphan N's block simply because a window exists in which doing so is more profitable for M than starting work right away on top of N's block.

So N has no chance of getting orphaned if it is part of G, and a nonzero chance of getting actively orphaned if it's part of g. What motive does N have to be part of g? What motive does N have not to be part of G? If one solution has a quantifiable risk and the other doesn't, all of this for the same reward, why pick the risky one? Keep in mind that for N, there is no cost to switching from one solution to the other. N can start mining on top of m and switch to M later at no loss.
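N's choice reduces to a trivial expected-value comparison (numbers are placeholders): identical reward either way, but joining g carries a nonzero orphan probability, so any positive value of that probability makes G strictly better.

```python
# N's expected reward per block race: join G (safe) vs join g (risky).

REWARD = 25.5  # BTC: subsidy + fees, identical on either fork

def ev_join_G(p_find):
    # N's solutions on top of M's block are effectively never orphaned
    return p_find * REWARD

def ev_join_g(p_find, p_orphaned_by_M):
    # same chance of finding a block, but M may actively orphan it
    return p_find * REWARD * (1.0 - p_orphaned_by_M)

p = 0.05  # N's share of the network hash rate (illustrative)
safe, risky = ev_join_G(p), ev_join_g(p, p_orphaned_by_M=0.1)
```

Since switching costs N nothing, the dominant strategy is to back M whenever `p_orphaned_by_M > 0`, which is exactly the "why pick the risky one?" argument above.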

Point being, this situation can take place without the need of a 51% cartel.
You could argue that M is a cartel. But it need not be 51% to present that active orphaning threat to smaller miners.
You could argue that over 51% of the network using the same relay network is a cartel, but then again their motivation to join the relay network is not to actively orphan smaller pools but to reduce their own orphan risk. There is no agreement that other members of the relay network will support your block in a race, but there is a strong indication that they will get your solution faster than they will get one from miners outside of the relay network.
You could argue that a mining pool is in fact a cartel in its own right, but this is only evidence that any barrier to entry will promote the formation of cartels (after all a solo miner is always guaranteed to reduce his propagation time by joining a pool, and gets other benefits on top of it).

"But I've never seen blocks emitted so close one another"

Far be it from me to doubt your figures, but the reality is that the 1MB block cap prevents propagation time from exploding upwards. The point isn't so much that concurrent block solutions are propagated within mere seconds of each other, but rather what the average propagation time is based on position in the network topology and hash rate. If 20MB blocks were to bump your propagation time to 30 sec, you would clearly be a victim of this mechanic.

"But it would never reach 30 sec"

30 sec is a bloated value for the sake of this example but is it really all that unrealistic? I think that currently, ~60% of the network hash rate is in China. They naturally propagate fast to each other, and slowly to the rest of the world. How big does the average block have to be in the absence of fast relay networks for Chinese miners to receive it in an average 30 sec? Do we really want to try and find out?

"But no miners submit themselves to this logic atm"

That's mostly because the low block size hard cap prevents network propagation from getting out of hand. If the hard cap goes away with the current network topology, eventually some miners will start exploiting this aggressive orphaning scheme and the rest of the network will have to go along. And there already is a group in a prime setup for this purpose: Chinese miners.

"But cartels!"

While I am trying to demonstrate this particular scheme does not require a 51% miner/cartel, the scheme on its own is only enabled by long propagation times. This characteristic of the mining network creates imbalance and modifies the rules of the game in a way that promotes the formation of cartels, the same way the heavily subsidized cost of electricity in China is, in my opinion, the main factor for the current state of the mining market (where Chinese miners dwarf everybody else).

This is not how Bitcoin propagation is supposed to work.  If this is part of the XT proposal, it would destroy bitcoin, by centralizing more than can be mitigated with a fee market anyway.  Such a change would render nodes no longer peers, as larger pools would be superior.  This would break the 51% rule.

This ties back to the original topic, and the question that birthed this entire argument:

Quote
But we still need a working transaction market in the long term. How is this intention maintained in XT?

I am not suggesting this particular predatory mining practice is part of what XT is proposing. What I am suggesting is that this is one of the many imbalances that will emerge from removing the block cap in the current state of the mining network. My analysis clearly doesn't reach unanimity, and Kano (or someone else) may very well prove me wrong in the end (I still intend to defend my argument fiercely =P), but that doesn't nullify the other cases where large miners' profitability increases compared to small miners as blocks get bigger.

XT developers will either leave the network as is and in consequence promote a whole new class of incentives that will result in extreme mining centralization, or they will render propagation time insignificant through relay networks and have no mechanic to promote any nature of fee market whatsoever.

At this point I keep arguing the orphaning scenario with Kano for the sake of arguing (I hope he doesn't mind!), but that point is separate from the original topic, and my original contribution to the discussion, which so far still stands:

The XT developers don't want a fee market. Their statements and their decisions in code lead me to believe they do not see a fee market as useful to the network, and I expect they will take the necessary steps to get rid of any remnants of such a market, should it arise through unforeseen conditions (that I purport they see as limitations).

Hearn's mining insurance contract idea largely predates the XT fork debate, so at least he has been contemplating that narrative for longer than this contentious fork. I do not remember when or where I've read about it the first time nor whether it came with an actual proposal. Maybe someone has some links to contribute.

-----------------------------------------

As for Peter R's point, I don't really understand how it is relevant to the original topic. Whether a fee market can or cannot exist without a block size cap is irrelevant to both parties. XT wants no fee market to begin with, and Core wants a tight cap.

To proceed further, whether such a fee market could be healthy is an interesting discussion (although still irrelevant to the Core vs XT dilemma), but clearly Peter R's paper does not confirm or refute that property. The main reason is that his definition of a healthy fee market is one in which there is constantly more demand for block space than the network topology can support, as long as there is a coinbase reward.

I don't want to sound demeaning, but to purport that network friction creates block size scarcity, sure. To then deduce that a scarce resource in demand will naturally have a price... well, duh. To finally conclude that this resource having a price is the necessary and sufficient condition for the market to be healthy, now that's quite the stretch. Some could even see the definition as insidious.

Moreover, you establish that in the absence of a coinbase reward, the entire model crumbles as the price of bytes in the blockchain becomes 0. This can only lead to a combination of the following conclusions:

1) Your model indicates that the cost of orphans cannot even sustain a fee market (let alone a healthy one) in the absence of inflation.
2) If indeed the cost of orphans can sustain a fee market without inflation, then your mathematical model is wrong, as it claims otherwise. Or
3) Something is wrong with your premises. At least one of them is wrong or you are omitting a defining parameter of the network.

And before you object and link your paper again, I have read the entire thing thoroughly and maintain all the criticisms I have made so far. Among these criticisms, I will reiterate the main one:

Network propagation, like any property of the network and the market that affects miner profitability through externalities, is a bad metric. We need a strong parameter to govern fees specifically because, among other things, it needs to dwarf the effect of externalities.

The business intelligence value of knowing who made what transactions alone is easily enough to pay for operating the network.  Especially when one has access to 'big data' processing facilities and a big network footprint already.  Much higher value than knowing something about people's search terms or scanning their e-mail and cloud storage contents.  I would expect to see users (to use a more kind descriptor) get 'cash back' when things are ramped up.

I have little doubt that Mike is keenly aware of this principle, and I suppose that he didn't really want to use it as a sales pitch so he cobbled together some fairly questionable 'mining assurance contract' scheme or whatever it was.

That's outright disturbing if this presentation stands true. Otherwise it's kind of incendiary. I don't want to judge before I see the actual proposal. Surely someone knows about it.
3032  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 31, 2015, 05:43:28 PM
What's the total size of your DB? The headers DB alone?

https://github.com/etotheipi/BitcoinArmory/blob/ffreeze/cppForSwig/mdb/mdb.c#L588

Reduce this value to 16MB and rebuild a new DB with that, see if it suffers from the bad_alloc.

I reduced MAX_MAPSIZE_INCEREMENT from 256MB to 16MB, re-compiled, and --rebuild.
The databases are now:
  20819968 2015-08-31 00:57 blocks
107724800 2015-08-31 00:57 headers
After a couple of days of uptime I got the std::bad_alloc crash.

I'm running an identical copy of bitcoin-qt/Armory on a newer desktop that has 16GB dram.
I got the std::bad_alloc crash on that machine too, just like the laptop.

I also got the std::alloc crash when sending to a "normal" wallet.

In one instance so far, I started Armory at 23:50 and sent BTC at 00:10 and got std::alloc.
Is there any code in Armory that uses "current time - previous time" and doesn't handle
a negative value?


Comment out this line:

https://github.com/etotheipi/BitcoinArmory/blob/ffreeze/ArmoryQt.py#L2619

It will prevent the ZC container from parsing any new tx. This is the most likely culprit atm.
3033  Bitcoin / Armory / Re: Let's get this USB stick malware risk straightened out once and for all on: August 30, 2015, 09:19:39 PM
Off topic, but I think an updated roadmap should be forthcoming seeing as your target userbase is now different to your actual userbase. I'm spending my own free time trying to help you guys with this, I need to know whether that time is being well spent. I expected to see more end user features developed, but you're telling me this is not necessarily the intention at all.

I'm sorry but I can't communicate on this by myself. etotheipi is preparing a public statement on that matter. I have no ETA to provide either.

I'm not entirely sure why this is a big deal either. There already exists software out there to turn text into QR codes and vice versa. And Armory shows you all the text data you need for an unsigned & signed transaction. So do we really need it in Armory? Doesn't seem that urgent to me. With Armory combined with other QR <-> Text programs we got all we need, don't we?

It's like asking why we need power plugs for passengers on a commercial flight when they're already getting individual multimedia stations.

The answer is "it's not necessary but it's nice to have and benefits our image".
3034  Bitcoin / Armory / Re: Let's get this USB stick malware risk straightened out once and for all on: August 30, 2015, 08:25:51 PM
None of this is at all complicated or difficult for you to understand.

It isn't, but not for the reasons you are insinuating. What you are describing is user behavior, not software design. Armory is specifically coded to provide a fresh address when you request a payment and never to send coins to the same change address twice. Splitting UTXOs is a good practice to help with privacy but we do not support this in code atm, so I don't get where your point comes from. Yes, some people reuse addresses. We do not encourage this, but we can't prevent it either.

Using QR codes to pass transaction data around is definitely an advanced feature, and we would expect anyone using it to be familiar with the limitations of QR codes and to avoid fragmenting their holdings across several UTXOs. However that solution does not cover 100% of our users, and a vocal minority that does things completely wrong will end up clogging our support channel and badmouthing us publicly at every corner. This is a consideration we have to deal with as well, which is why we prefer audio modems to QR codes when it comes to analog data carriers. They simply are more efficient.

Then again there is a matter of priority and cost of development. I started working on Armory by claiming the 2 BTC bounty to port LevelDB to Windows, and I intended to go after the 25 BTC for the audio data library afterwards. Then I was hired as a full time developer and spent the next couple of years on much higher priority work, like reworking the C++ backend.

A year and a half later, we got a half time developer to whom etotheipi gave the task of finalizing the audio lib a user submitted to claim the 25BTC reward. He got pretty far and... was turned into a full time employee with the burden of several responsibilities, none of which being the audio modem lib.

I think you can see what I am getting at by now. We shifted our focus to enterprise products a while ago, and the demand in that market is for HSM integration, not exotic enthusiast features like QR codes and audio modems. The reality is there are a lot of things we'd like to put in the public version, but the business reality is that they don't make sense in the enterprise version, so we are not developing them at all atm.
3035  Bitcoin / Armory / Re: Let's get this USB stick malware risk straightened out once and for all on: August 30, 2015, 04:14:30 PM
The problem with Armory is that you always have wanted to treat the online and offline wallets as the same - I simply don't do that so I don't have the problem of UTXOs.

I don't understand what you are referring to, care to elaborate?
3036  Bitcoin / Armory / Re: Let's get this USB stick malware risk straightened out once and for all on: August 30, 2015, 03:54:40 PM
Seriously I have been using QR codes to do BTC txs for *years* without a problem so I think you are exaggerating any problems other than the problems of trying to use a "normal wallet" (that ends up with lots of UTXOs).

It is amazing that so many people like you post negative things about using QR codes for BTC when I have done literally hundreds of BTC txs this way without a single problem (my guess is that you have never actually tried to use QR codes yourself).

What is the byte density of your QR codes? How many QR codes are you using per transaction? Are you attaching the relevant UTXOs to verify the spend value and change on the offline signer?
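For reference, here is the capacity math behind those questions, using the 2,953-byte binary capacity of a version 40 QR code at error correction level L (real setups often use smaller, easier-to-scan versions, and all payload sizes below are illustrative):

```python
# How many QR codes does one offline-signing round trip need?
from math import ceil

QR_V40_L_BYTES = 2953   # max binary payload: version 40, EC level L

def qr_codes_needed(unsigned_tx_size, utxo_sizes, capacity=QR_V40_L_BYTES):
    """QR codes required to carry a tx plus its supporting UTXOs."""
    payload = unsigned_tx_size + sum(utxo_sizes)
    return ceil(payload / capacity)

# One supporting UTXO is cheap; holdings fragmented across 30 are not:
few = qr_codes_needed(300, [300])
many = qr_codes_needed(5_000, [300] * 30)
```

This is why UTXO fragmentation matters for QR workflows: the supporting transactions, not the unsigned transaction itself, dominate the payload.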
3037  Bitcoin / Development & Technical Discussion / Re: Really not understanding the Bitcoin XT thing... on: August 30, 2015, 09:45:49 AM
As I said, you would need a 51% cartel to do that - your competitors are the rest of the entire network on the earlier block.

No you don't. If 2 solutions propagate simultaneously, as a miner you should choose the solution emitted by the larger miner. I'll make the example extreme, but it holds at any values:

Miner M has 30% of the hash rate, Miner m has 1%.

Assuming the network propagation speed is equivalent at all points of the network, M will always get 51% hash rate support behind its solution faster than m will, for M only needs to propagate to an arbitrary group of miners controlling together 21%+ of the hash rate, whereas m needs to propagate to 50%+. The time to propagate to 51% of the total network hash rate is quantifiable, and in any scenario where propagation speed is non negligible, the difference in time to recruit 51% of the network between a large and a small miner will also be non negligible.

Now let's say m's average block propagation time to 51% of the network hash rate is t, and M's is T. We know t > T.

Now say miner N receives m's block first, then M's block next, for the same height. We have 3 situations (all timers starting when N receives m's block):

1) N receives M's block within T. N should switch to M's block simply because t > T. At this point there is a higher probability M will recruit 51% of the hash rate faster than m would.
2) N receives M's block after T but within t. N should switch to M's block, because there is a high probability M has already recruited 51% of the hash rate.
3) N receives M's block after t. N should stick to m, as there is a higher probability m has already recruited 51% of the hash rate.

The reality is a bit different, however. Propagation speed isn't balanced across the network, and while T and t are quantifiable, a lot of data would have to be gathered to even get valid estimates. What really matters is that for m < M you should always assume t > T, and that is enough information for any sane miner to switch to M's block if it is received within a few seconds of m's.

In a propagation race, the larger miner will always have an advantage, and it doesn't need a 51% cartel: it only needs to get a portion of the network behind its solution, and that portion shrinks in proportion to its hash rate advantage over its competitor.

TL;DR: You don't need to control 51% of the hash rate, you only need to recruit 51% of the hash rate faster than your competitor. If you are larger than your competitor, you are essentially guaranteed to always recruit faster.
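The race described above can be sketched with a small Monte-Carlo simulation. The topology is entirely my assumption (the rest of the network as 70 miners of 1% each, and every peer hearing a new block after an independent uniform 0–2 second delay); real networks differ, but only the ordering t > T matters here.

```python
import random

MAJORITY = 0.51

def time_to_majority(own_share, peer_shares, rng):
    """Seconds until cumulative hash-rate support for a block reaches 51%.

    The originating miner backs its own block at t=0; each peer adopts it
    after an independent uniform 0-2 s propagation delay (an assumption).
    """
    support = own_share
    if support >= MAJORITY:
        return 0.0
    for delay, share in sorted((rng.uniform(0.0, 2.0), s) for s in peer_shares):
        support += share
        if support >= MAJORITY:
            return delay
    return float("inf")

def average_recruit_time(own_share, peer_shares, trials=2000, seed=42):
    rng = random.Random(seed)
    return sum(time_to_majority(own_share, peer_shares, rng)
               for _ in range(trials)) / trials

# Network: M = 30%, m = 1%, plus 69 other miners of 1% each (sums to 100%).
others = [0.01] * 69
T_big = average_recruit_time(0.30, [0.01] + others)   # M's peers: m + the rest
t_small = average_recruit_time(0.01, [0.30] + others) # m's peers: M + the rest
```

Under these assumptions `t_small` comes out well above `T_big`: M only needs the first arrivals covering 21% of the hash rate, while m has to wait for arrivals covering 50%, matching the claim that t > T.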

Obviously this all goes down the toilet once miners start using fast relay networks. But the orphan cost gets flushed out along with it, so the assumption that a fee market can exist when propagation time is negligible, let alone that this fee market would be healthy, is completely undermined.
3038  Bitcoin / Development & Technical Discussion / Re: Really not understanding the Bitcoin XT thing... on: August 29, 2015, 11:04:05 PM
Take a look at the full paper if you like: link

I've skimmed it already several times, but since you insist on the validity of your claims I will give it a thorough read.

Regardless, something can be said about appending that line to the quote when it relies on far more context (16 pages, apparently) than the conclusion you state right before it.
3039  Bitcoin / Development & Technical Discussion / Re: Really not understanding the Bitcoin XT thing... on: August 29, 2015, 09:34:46 PM
If nodes relay block solutions between miners, then yes their connection matters.

If a node is presented 2 blocks for the same height, it will always prefer the one it received first. It will not relay the second one, even though it is just as valid, until the next block orphans one of them. Therefore nodes are poor relay points for miners, and any miner not trying to connect directly to the majority of his competitors is doing it wrong. So I'll insist: nodes aren't all that relevant in block propagation between miners.
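The first-seen behavior described above boils down to a few lines. This is a hypothetical sketch of the rule, not Bitcoin Core's actual data structures:

```python
class BlockTracker:
    """Minimal sketch of the first-seen relay rule: a node keeps the first
    block it saw at each height and relays nothing else for that height
    until the tie is broken by the next block."""

    def __init__(self):
        self.first_seen = {}  # height -> block hash of the first block received

    def on_block(self, height, block_hash):
        """Return True if the block should be relayed, False if it is a
        competing block for an already-occupied height."""
        if height in self.first_seen:
            return False  # second block at this height: hold, do not relay
        self.first_seen[height] = block_hash
        return True
```

This is exactly why a miner that relies on ordinary nodes for propagation can have a perfectly valid block silently held back everywhere its competitor's block arrived first.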

Quote
Nonetheless, the orphan cost still exists (it is just smaller), which creates a fee market even in the absence of a block size limit.  

The bold part is the issue. A weak fee market is the same as no fee market: it can't sustain mining and it doesn't filter transactions enough. The absence of a limit (or an unrealistically large limit, for that matter) puts it all on Moore's law.

Orphan cost can essentially be equated to friction. For that friction to stay significant, the adoption rate has to at least constantly match Moore's law. Any engineering breakthrough or any better software propagation scheme will gravely undermine that friction. Also, adoption is finite. Moore's law may or may not be, that's up for debate, but it certainly has more growth potential than Bitcoin adoption does.
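The friction can be quantified with the usual back-of-envelope model (my sketch, using the standard Poisson block-arrival assumption, not figures from this thread): a block that takes τ seconds to propagate risks being orphaned with probability roughly 1 − e^(−τ/600).

```python
import math

BLOCK_INTERVAL = 600.0  # seconds; mean of the Poisson block-arrival process
REWARD_BTC = 25.0       # block subsidy at the time of this thread

def orphan_risk(prop_seconds):
    """Probability a competing block is found while ours propagates,
    under the Poisson block-arrival assumption."""
    return 1.0 - math.exp(-prop_seconds / BLOCK_INTERVAL)

def expected_orphan_cost(prop_seconds):
    """Expected revenue lost to orphaning, in BTC."""
    return REWARD_BTC * orphan_risk(prop_seconds)

# Friction collapses with faster relay: a 10x propagation improvement
# cuts the orphan cost roughly 10x (delays are hypothetical round numbers).
slow = expected_orphan_cost(10.0)  # ~10 s to reach the network
fast = expected_orphan_cost(1.0)   # ~1 s over a fast relay network
```

This is what makes the friction so fragile: it scales with propagation time, so any step-change in relay technology wipes most of it out overnight.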

Again, the existence of a fee market is pointless if it isn't healthy.

Orphan cost is about as bad a metric for mining profitability as the cost of electricity. Like electricity, connectivity is subject to heavy government intervention (just look at China). Let's not make the network rely on yet another manipulated metric.

Lastly, technological improvement happens in jumps; it doesn't propagate evenly or at a continuous pace. One day you have 20Mb DSL, the next you have 250Mb fiber. One day you have 3G, the next you have LTE. Adoption, on the other hand, is a lot more continuous. What do you expect that will do to the fee market if suddenly the cost of orphans is so low that every miner can afford to deplete the mempool on every block for the next 6 months? Year? 2 years?

But let's admit adoption isn't continuous, let's admit it is as bumpy as technological leaps. What happens if adoption booms and no technological leap occurs concurrently?

The cost of orphans is not a good metric to establish a fee market because it is very inconsistent: it is poorly distributed and rather unpredictable. It acts as an extra barrier to entry into the mining market, which is meant to be defined by hash rate. Why do you think Chinese miners blind mine?

Also, Moore's law doesn't regress. How do you accommodate the fee market during a recession or a long-term stagnation on hardware friction alone?

We need another metric to underpin the fee market precisely because the cost of orphans is a bad one. We shouldn't look at orphan cost as a feature; we should try to reduce it as much as possible.

Quote
If most miners use the Corallo Relay Network, I agree that average node connection becomes less relevant.

And this is what we are getting to. No miner has cause to stay out of the relay network, and XT can't avoid it either, because it reduces the cost of orphans: any miner that doesn't use it is at a disadvantage. However, it will finish off the fee market in XT before Moore's law even gets to kick in.

Quote
... and you don't "agree not to orphan" ... either your wording is incorrect or your understanding is incorrect.

Quote
What I have highlighted in bold doesn't make sense to me.  How can they agree not to orphan?  What if the block contained an invalid transaction?

Imagine Corallo's relay network doesn't exist. Imagine a couple of large miners set up a private version of that network for their own sake. Suddenly their orphan cost has dropped globally.

I will not explain why this is bad for the network and why it will give an edge to these collaborating miners. I invite you to reread the previous paragraph if you want more details, it is pretty much all there.

Bottom line: either XT does not implement the relay network and a couple of miners will use that edge to wear down their competition, or XT implements it and friction becomes so insignificant compared to demand that fees won't be able to sustain decent hash power.

TL;DR what jonny1000 said.

Quote
This can only be done by the equivalent of a 51% cartel not accepting blocks from the rest of the network

Not really. I am not talking about rejecting blocks outright, only about how the network tells 2 blocks apart in the case of a simultaneous solution.

Assume miner A has 10% hash rate and miner B has 30% hash rate. Assume both A and B find a solution for a block at about the same time. Let's say miner C receives these 2 blocks within a few seconds of each other. Which block do you think miner C should work on? A's or B's? Obviously B's.

Bottom line is, in a propagation race you don't need 51% hash power. You only need more hash power than your competitor, and the rest of the network will prefer your block to his.
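As a back-of-envelope for the example above, here is the additional hash-rate support each block must attract before a 51% majority is behind it (my arithmetic, using the figures from the example):

```python
MAJORITY = 0.51
a_share, b_share = 0.10, 0.30  # miner A and miner B from the example

# A's block starts with only A's own 10% behind it, B's with 30%.
a_needed = MAJORITY - a_share  # 41% of the network must still adopt A's block
b_needed = MAJORITY - b_share  # only 21% must adopt B's
```

A has to recruit nearly twice as much of the network as B, so in any race where propagation time matters, B's block reaches majority support first.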

Quote
"We conclude by noting that the analysis presented in this paper breaks down when the block reward falls to zero. It suggests that the cost of block space is zero; however, this would suggest zero hash power, which in turn would suggest that transactions would never be mined and, paradoxically, that no block space would be produced. Happily, questions about the post-block reward future can be explored at a leisurely pace, as we have a quarter-century before it begins to become a reality.

Into the distant future then, a healthy transaction fee market is expected to exist without a block size limit."

How can you conclude this from the previous paragraph? If anything, it means there needs to be a primary factor determining fees that is not the cost of orphans. If you don't want a block size limit, then your prime candidate is enforcing minimum fees, and that's a whole new can of worms.

And it would not suggest zero hash power. It would suggest blocks will be mined by those who emit the transactions themselves. There is always an incentive to mine blocks, with or without a coinbase reward. The reality, however, is that if that incentive is proportionally too low compared to the network's market capitalization, the network will lack security and malevolent mining will become profitable.
3040  Bitcoin / Development & Technical Discussion / Re: Dynamically Controlled Bitcoin Block Size Max Cap on: August 29, 2015, 04:07:40 PM
Be fair on yourself. You have a more authoritative view than most others, seeing as your work (designing & implementing the block handling + storage for a major Bitcoin wallet) deals with both the subtle details and the overarching dynamics of this very topic (at least in respect of how Bitcoin works now).

I have no experience with network design, so I can't come up on my own with a proposal that accounts for the entire engineering scope this metric affects. Maybe I would be more motivated to run with my own proposal and implementation if I did.