3221  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: June 07, 2015, 09:31:57 PM
New version is out in ffreeze; please test away. There's a wealth of robustness changes and it should also be a bit faster. However, no one at Armory has experienced the issues Carlton Banks and btcchris ran into, so most of these changes were speculative. It's possible these bugs will still be there, partially or in full.

As usual, looking forward to your bug reports and logs, and again thank you for the time and effort you put in helping us to test things out =)
3222  Bitcoin / Development & Technical Discussion / Re: Elastic block cap with rollover penalties on: June 05, 2015, 02:31:20 PM
Firstly, what we're talking about here is not, as DumbFruit generalizes, a "rollover fee". It's a disproportionate penalty on mining large blocks. I'm not sure whether this changes his argument or its validity.

Unless I missed something huge, the proposal is not only to penalize large blocks, but to redistribute the penalties collected from these blocks back to other miners. In that sense there is a rollover of mining rewards (since penalized miners stand to earn their own penalties back), just not a "fee rollover" per se.

Quote
Do the 2 25% miners (or 2 of the 10% miners) have a higher-than-in-current-system incentive to collude?

I don't think they would collude; rather, I expect a large miner can deplete the mempool of high-fee transactions (and only those) while staying under the soft cap (thus paying no penalties), and leave other miners to pick up the slack. This behavior is incentivized by the fact that the other pools, if they do pick up the slack (and try to deplete the mempool with less regard for fees), stand to pay penalties for the extra service rendered, while the large miner stands to gain from it.

This results in one of two scenarios:

1) Other miners decide to pick up the slack from the large miner; they naturally lose profitability as the large miner vampirizes their rewards by collecting penalties. As a result, many users from the other pools will point their hardware at the large miner, which gets an ever-increasing share of the network hash rate, which enables his scheme even further, and so on.

2) As a reaction to the large miner's behavior, every other miner adopts his policy, which is to at least avoid the penalties by always emitting blocks below the soft cap, and the change never fixes what it was meant to fix.

Quote
Is Meni's proposal making it easier for the 2 25% miners to try to drive out small (as in bandwidth) miners by mining disproportionately large blocks?

Other pools need to verify a block before they start to mine on top of it, so very large blocks, which propagate slower and take longer to validate, could supposedly give a head start to the miner who emitted them and be a disadvantage to smaller pools, which risk having their block orphaned the longer it takes for other miners to receive and validate their work.

I expect any centralized pool, however small they are, can afford the bandwidth and processing power to deal with much larger blocks than we have at the moment. This could hurt p2pool miners though.

Here are example scenarios, with made up values for the penalty function. I assume for simplicity (not a necessary assumption) that there is endless demand for transactions paying 1mBTC fee,  that typical blocks are around 2K txs, and that there are no minted coins. The pool clears at 1% per block.

Scenario 1: The network has 100 1% miners.
Every 1% miner knows he's not going to claim much of any penalty he pays, so he includes a number of transactions that maximize fees-penalty for the block. This happens to be 2K txs, with a total fee of 2 BTC and penalty of 1 BTC.

The equilibrium for the pool size is 100 BTC.
Miners get 2 BTC per block (2 fees + 1 pool collection - 1 penalty).
There are 2K txs per block.

Scenario 2: The network has 1 90% miner, and 10 1% miners.
The 1% miners build blocks with 2K txs, fee 2 BTC, penalty 1 BTC, like before.
The 90% miner knows that if he includes more txs, he'll be able to reclaim most of the penalty, so the marginal effective penalty exceeds the marginal fee only with larger blocks - say, blocks with 4K txs, 4 BTC fees, 4 BTC penalty.

The average penalty per block is 3.7 BTC. The equilibrium pool size is 370 BTC.
There are on average 3.8K txs per block.
A 1% miner gets, per block he finds, (2 + 3.7 - 1) = 4.7 BTC - more than in scenario 1!
The 90% miner gets, per block he finds, (4 + 3.7 - 4) = 3.7 BTC - less than small miners get in this scenario, but more than miners get in scenario 1!

Those examples do not stand. They hinge on the premise that there is endless demand for transactions paying a 1mBTC fee. I understand the need to simplify these demonstrations, but that defeats the underlying premise of this entire discussion. Your example assumes there is no competition over fees, which is the premise of a "no block limit + hard fees" system. Your system sets both soft and hard caps on the block size, so there is no reason to believe people will sit at a 1mBTC fee when there is endless demand for transactions.

Model your demonstration with a fee structure following the Pareto principle, i.e. 20% of the transactions pay 80% of the total fees available in the mempool (which is a lot closer to the current network's fee distribution than your examples), and the system falls apart. Anyone building blocks large enough to get penalized is just giving his rewards away to miners that prioritize high-fee transactions and make a point of staying under the soft cap.
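To make that concrete, here is a rough simulation sketch (Python, with made-up numbers: a toy Pareto-like fee distribution, an arbitrary penalty shape above the soft cap T, and uniform tx sizes). It only illustrates the incentive, it is not a model of your actual proposal:

Code:
# Toy illustration: under a heavy-tailed fee distribution, staying under the
# soft cap T with only the best-paying txs beats filling a block past T.
# All numbers and the penalty shape are made up for the example.
import random

random.seed(1)

T = 1_000_000           # soft cap in bytes (made up)
HARD_CAP = 2 * T
TX_SIZE = 500           # assume uniform tx size for simplicity

# Pareto-ish fees: a small fraction of txs carries most of the total fees
mempool = sorted((random.paretovariate(1.2) * 0.00001 for _ in range(10_000)),
                 reverse=True)   # fees in BTC, highest first

def penalty(block_size):
    """Made-up penalty: zero below T, growing quadratically toward the hard cap."""
    if block_size <= T:
        return 0.0
    x = (block_size - T) / T
    return 2.0 * x * x          # BTC

def build(max_size):
    """Greedy: take the highest-fee txs until max_size is reached."""
    n = max_size // TX_SIZE
    return sum(mempool[:n]), penalty(n * TX_SIZE)

fees_small, pen_small = build(T)          # stays under the soft cap
fees_big,   pen_big   = build(HARD_CAP)   # fills the block to 2T

print(f"under soft cap : fees={fees_small:.4f}  penalty={pen_small:.4f}  net={fees_small - pen_small:.4f}")
print(f"filled to 2T   : fees={fees_big:.4f}  penalty={pen_big:.4f}  net={fees_big - pen_big:.4f}")
# With a heavy-tailed fee distribution the extra txs past T add little in fees,
# so the extra penalty is mostly a transfer to the miners who stayed under T.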

The issue with your proposal is not the penalty per se, it's the reward: there is a point where it is more profitable to let others get penalized. The existence of this point creates a threat that keeps all miners operating below the soft cap. The threat is that they lose profitability compared to other pools, and those pools start siphoning away their hash rate as miners migrate.

If you were to take away the reward from the system a few things would be smoother:

1) No opportunities to game the system anymore. It all comes down to where the acceptable margin of fee vs penalty stands for the given mempool.
2) Very simple to implement.

The drawback is that since there is no reward, the penalties are obviously just destroyed. I'm not sure that's a drawback per se, for the following reasons:

1) It's trivial to blackhole bitcoins and it's been agreed that this is not damaging to the system, so this method isn't introducing some new DoS attack on the system.
2) By destroying the penalty, the value of every other BTC just went up. As opposed to your system, where you want to reward other miners from the penalties, this time everyone is getting rewarded, albeit to a much smaller degree. This means not just miners, but everyone else holding coins is rewarded when a miner builds a block above the soft cap. Incidentally, that also includes people running nodes (as long as they hold BTC, which is expectable).

This is perhaps the only proposition so far that has some sort of reward mechanism for node maintainers (granted, it's tiny), who take an equal part in the cost of block propagation and validation as miners do.
3223  Bitcoin / Development & Technical Discussion / Re: Elastic block cap with rollover penalties on: June 04, 2015, 10:10:15 PM
If I were in his position I too would ask to see some code or at least some data analysis supporting the design. You can't just propose stuff and expect the people reviewing it to do all the leg work. An implementation at least proves your design is conceptually sound. It's easy to forget certain aspects when you theorycraft, and having to implement at least the PoC certainly motivates you to keep it as simple as possible.
Not sure to what extent this is criticism of me. But I believe everyone has a part to play in this world, and should be doing what he's best at. My comparative advantage is in coming up with ideas and discussing them; and in unrelated work (to those who don't know me, my day "job" is in promoting Bitcoin in Israel). It's not in coding and empirical analysis - I'll leave that to others. This methodology worked quite well at the time I helped mining pool operators with implementing DGM. Perhaps the discussion I've started will result in this or a similar idea being implemented and accepted. But if not, so be it.

I'll clarify that I think Gavin's request is perfectly legitimate. I didn't exactly expect him to be so dazzled by the idea that he'd drop everything he was doing and start working on it.

It's not criticism directed towards anyone per se. In the course of my work on Armory, I get suggestions to implement this and that, but an idea that can be summarized in a single sentence can often demand 10k LoC. I'm much more inclined to look at a pull request than at some formulated concepts. As I said, having a PoC to support the idea has several advantages, one of which is to make the task of reviewers simpler, another of which is to go through the most obvious optimizations right away. An idea without a PoC is not diminished, but an idea with a PoC is certainly improved. I felt like I should share that. It wasn't even an attempt to defend Gavin.

Obviously, if you can find someone to work on a PoC for your proposal, that would be fantastic.

You're an idea man, I'm a nuts and bolts guy, and I can't help but look at this from my perspective. Your natural stance towards people with my skill set is "you're not sophisticated enough". My natural stance towards people with your skill set is "you complicate too much". This isn't about to change anytime soon, yet that doesn't make it a personal attack. Present a patient with some general syndrome to N different medical specialists, all in different fields, and they will come up with N different diagnoses. They're not all necessarily wrong.

If you think there is some underlying ad hominem in my criticism of your proposal, that is not my intent. There are plenty of other sections in this forum which are ripe for that kind of rhetoric. I'm going to defend my point of view at every opportunity I get, and I don't expect less from others. The intensity of the criticism may come across as unwarranted, but that's only because I'm genuinely interested in this discussion. That should vouch on its own for the importance I attach to theoretical research.
3224  Bitcoin / Development & Technical Discussion / Re: Elastic block cap with rollover penalties on: June 04, 2015, 08:39:04 PM
Monero avoids this problem, but most of the rest of this proposal has been implemented and running for quite a long time. It's not new, or novel, except in ways that it is not as good.

When Gavin says "I need to see working code", he probably means code he can directly deploy to his test setup within the framework of bitcoin-core. I can relate to this demand and find it reasonable.

It's a polite way to say: "I am not interested in your design, I don't want to analyse it, f**k off."

If I were in his position I too would ask to see some code or at least some data analysis supporting the design. You can't just propose stuff and expect the people reviewing it to do all the leg work. An implementation at least proves your design is conceptually sound. It's easy to forget certain aspects when you theorycraft, and having to implement at least the PoC certainly motivates you to keep it as simple as possible.
3225  Bitcoin / Development & Technical Discussion / Re: Elastic block cap with rollover penalties on: June 03, 2015, 08:22:10 PM
This is similar to the idea of eschewing a block limit and simply hardcoding a required fee per tx size.

I assume you are referring to the debate on "hard block size limit + organic fees" versus "no block size limit + hard fees", the third option (no block limit and organic fees) being a non-solution. Obviously an "organic block size limit + organic fees" is the ideal solution, but I think the issue is non-trivial, and I have no propositions to achieve it. I don't even know if it's philosophically possible.

In this light, a "pseudo-elastic block size limit + organic fees" is the better and most accessible solution at the moment, and I will argue that my proposal cannot be reduced to "no block size limit + hard fees", and that it actually falls under the same category as yours. Indeed, like your proposal, mine relies on an exponential function to establish the ratio of fees expended to block size. Essentially the T-2T range remains, where any block below T needs no fees to be valid, and the total fee required grows exponentially from T to 2T.

In this regard, my proposal uses the same soft-hard cap range mechanics as yours. As I said, ideally I'd prefer a fully scalable solution (without any artificial hard cap), but for now this kind of elastic soft-hard cap mechanic is better than what we got and simple enough to review and implement. The fact that my solution has caps implies there will be competition for fees as long as the seeding constants of the capping function are tuned correctly. On this front it behaves neither worse nor better than your idea.

Since I believe fees should be pegged to difficulty, fees wouldn't be hard-coded either. Rather, the baseline would progress inversely to the network hash rate, while leaving room for competition over scarce block space.
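For what it's worth, here is a minimal sketch of the kind of function I have in mind. The exponential shape, the constants and the difficulty scaling are all placeholders, not a worked-out spec:

Code:
# Sketch of a fee-driven block size cap: blocks up to T need no extra fees,
# and the required total fee grows steeply between T and 2T. The baseline fee
# scales inversely with difficulty, so as the network (and presumably coin
# value) grows, the nominal fee requirement shrinks. All constants and the
# exact functional form are placeholders.
import math

T = 1_000_000                     # soft cap, bytes (placeholder)
HARD_CAP = 2 * T
REF_DIFFICULTY = 47_600_000_000   # placeholder reference difficulty
BASE_FEE = 1.0                    # BTC to fill the whole T..2T range at REF_DIFFICULTY

def required_fee(block_size, difficulty):
    """Total fee a block must carry to be valid at the given size."""
    if block_size <= T:
        return 0.0
    if block_size > HARD_CAP:
        return float("inf")       # never valid past the hard cap
    x = (block_size - T) / T      # 0..1 over the elastic range
    baseline = BASE_FEE * (REF_DIFFICULTY / difficulty)
    return baseline * (math.exp(4 * x) - 1) / (math.exp(4) - 1)

def max_block_size(total_fees, difficulty):
    """Inverse view: the largest block the carried fees can pay for."""
    size = T
    while size < HARD_CAP and required_fee(size + 1000, difficulty) <= total_fees:
        size += 1000
    return size

print(required_fee(1_500_000, REF_DIFFICULTY))   # fee needed for a 1.5 MB block
print(max_block_size(0.25, REF_DIFFICULTY))      # size that 0.25 BTC in fees allows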

Quote
The main issue I have with this kind of ideas is that it doesn't give the market enough opportunity to make smart decisions

I will again argue to the contrary. As a matter of fact, I believe your solution offers no room for such adaptive market behavior, while mine does. To take both your examples in order:

Quote
such as preferring to send txs when traffic is low

With T being the soft cap and 2T the hard cap, your solution proposes to penalize all miners creating blocks larger than T. This pegs blockchain space to fees, the same as my proposal: the more txs waiting in the mempool, the higher the fee you need to get included in the next block. Inversely, the fewer items in the mempool, the more likely you are to have your low/zero-fee tx mined right away, which creates an incentive to emit transactions during low-traffic periods.

While your approach supports emitting txs during low traffic by imposing extra costs on high traffic, mine simply does without the extra cost. That doesn't mean emitting transactions during low traffic is NOT cheaper. As a matter of fact, it is, but the difference between low and high traffic isn't as significant.

The true difference between my solution and yours is that while mine allows miners to exceed the T soft cap as long as there are enough fees to go by, yours penalizes all blocks outgrowing T, which effectively locks all blocks at size T.

Indeed, a selfish miner would have no incentive to build blocks beyond T, and they would also benefit from not including 0/low-fee transactions. By leaving all 0/low-fee transactions in the mempool and only creating small blocks, where the block size is defined as min(total size of high-fee txs, T), a large selfish miner can deplete the mempool of all high-fee transactions, leaving the rest of the network to pick up the slack.

Other miners are left with the choice to fill blocks up to T, but not further. Due to the selfish miners' actions (pumping out high-fee, small blocks), there are not enough fees to be redeemed from the mempool, and the penalties for including transactions past T would come out of the good-willed miners' coinbase rewards. You may have "benevolent" miners who would rather empty the mempool than follow game theory, but they only stand to earn less money than everybody else. On the other hand, selfish miners still qualify to get a cut of the fee pool, to which they make a point of not contributing, effectively siphoning revenue from good-willed and "benevolent" miners.

The true effect on the network is that no one will bother creating blocks larger than T, and we will still have a de facto hard-coded block size cap.

My proposal offers to allow miners to take in extra fees as long as they remain below the curve defined by the capping function. Selfish miners no longer have an opportunity to vampirize good-willed miners, so while the behavior itself (high-fee, small blocks) is not deterred, it is at least not encouraged.

Please keep in mind that this analysis of your system relies on my current understanding of it. I'm still not 100% clear on how the "fee pool" functions. I'm assuming it is either funded only by penalties, with all fees paid to miners directly, or that all fees are pooled and distributed equally per block. The latter assumption seems pointless since it can be easily bypassed, so I'm using the former one as the basis of my critique of your proposal.

Quote
or to upgrade hardware to match demand for txs

Again I will argue that my proposal supports hardware improvement while yours doesn't. In your case, T will act as a de facto hard cap on the block size, so there is no incentive for miners to be able to handle more traffic and create blocks beyond that value. As long as miners won't output blocks larger than T, there is no reason for the rest of the network to upgrade either.

With my solution, as long as there are enough fees to go by, up until 2T blocksize (or whatever the hard cap ends up being), miners are motivated to include transactions paying fees beyond the soft cap, which justifies hardware improvement to handle the extra load, with the consequences this has on the rest of the network.

Quote
Another issue with this is miner spam - A miner can create huge blocks with his own fee-paying txs, which he can easily do since he collects the fees.

Both in the current state of things and in the solution you propose, malevolent miners can disturb the network by mining either empty blocks or blocks full of "useless transactions", sending their own coins back to themselves. In my solution, malevolent miners also have the added opportunity to mine large blocks by paying fees to themselves. Let's analyze what counters there are to these 3 disturbance attacks.

1) In case of empty blocks, all good-willed and selfish miners should simply ignore them. It increases their revenue and defeats the attack.

2) In case of "useless transactions", either the transactions were never made public, in which case the blocks are easy to identify (full of txs that never hit the mempool) and can be ignored for the same reason as above, or the transactions have been published publicly and the attacker is making a point of mining only these. At that point you can't really distinguish this miner from good-willed ones with either solution.

3) With my solution, malevolent miners can pay themselves fees and enlarge blocks. However, that only holds true if they keep the transactions private. In that case other miners can identify such blocks as malevolent and ignore them entirely (2T-wide blocks full of large-fee txs that never hit the mempool). If the attacking miner makes the transactions public, he can't expect to rake in all the fees, so the solution to this attack is no different from 1 & 2.

4) With your solution, what is to stop a malevolent miner from maxing out blocks with 0-fee transactions? Sure, he would give up the coinbase reward in the form of penalties, but now these are available for the taking by other miners. Good-willed miners may ignore such blocks, but selfish miners probably won't.

With my solution, there is an incentive for both good-willed and selfish miners to ignore blocks outright constructed to bloat the network. With yours, there is a built-in cost to such disturbance, but that cost becomes a reward for others, so selfish miners can choose to tolerate the disturbance for the benefit of that reward. In my system, the only viable economic option is to ignore the blocks.
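To be clear about the "never hit the mempool" test I keep referring to in cases 2 and 3, this is the kind of heuristic I have in mind - a sketch of a local relay/mining policy with an arbitrary threshold, not a consensus rule:

Code:
# Sketch of a local policy heuristic: flag blocks stuffed with transactions
# that this node never saw in its mempool. The threshold is arbitrary.
def looks_self_stuffed(block_txids, mempool_txids, max_unseen_ratio=0.5):
    """Return True if a suspiciously large share of the block's txs are unknown.

    block_txids   : txids in the candidate block (excluding the coinbase)
    mempool_txids : set of txids this node has seen via normal relay
    """
    if not block_txids:
        return False
    unseen = sum(1 for txid in block_txids if txid not in mempool_txids)
    return unseen / len(block_txids) > max_unseen_ratio

# A miner/node could deprioritize building on (or delay relaying) such blocks,
# which is the "ignore" response described above; a lone unseen tx is normal,
# a near-full block of them is not.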

Quote
Using difficulty to determine the size/fee ratio is interesting. I wanted to say you have the problem that difficulty is affected not only by BTC rate, but also by hardware technology. But then I realized that the marginal resource costs of transactions also scales down with hardware. The two effects partially cancel out, so we can have a wide operational range without much need for tweaking parameters.

2 years ago I would have opposed pegging fees and block size to difficulty, because ASIC technology was catching up to current manufacturing processes and as such was improving much faster than the other hardware supporting the network. That would have required too many manual adjustments of the pegging function to be acceptable. As time passes, that criticism loses ground, and now is not a bad time to consider it.

I would be interested to see if you have room to factor difficulty in your current function.

I expect a fee pool alone will increase block verification cost.
It would not, in any meaningful way.

I try not to be so quick to draw such conclusions. I'm not savvy with the Core codebase, but my experience with blockchain analysis has taught me that the less complicated a design is, the more room for optimization it has. You can't argue that adding a verification mechanic will simplify code or reduce verification cost, although the magnitude of the impact is obviously relevant. I'm not in a position to evaluate that, but I would rather remain cautious.

The point still remains, you don't need a fee pool to establish a relationship between fee, block size, and possibly difficulty.

Don't get me wrong, I believe the idea has merits. What I don't believe is that these merits apply directly to the issue at hand. It can fix other issues, but other issues aren't threatening to split the network. I also don't think this idea is mature enough.

As Gavin says, without an implementation and some tests it is hard to see how the system will perform. If we are going to “theorycraft”, I will attempt to keep it as lean as possible.

It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Obviously increasing the block size requires a hard fork, but the fee pool part could be accomplished purely with a soft fork.  

The coinbase transaction must pay <size penalty> BTC to OP_TRUE as its first output.  Even if there is no size penalty, the output needs to exist but pay zero.

The second transaction must be the fee pool transaction.

The fee pool transaction must have two inputs; the coinbase OP_TRUE output from 100 blocks previously and the OP_TRUE output from the fee pool transaction in the previous block.  

The transaction must have a single output that is 99% (or some other value) of the sum of the inputs paid to OP_TRUE.


By ignoring fees paid in the block, it protects against miners using alternative channels for fees.

It seems your implementation pays the fee pool out in full to the next block. That partly defeats the pool's purpose. The implementation becomes more complicated when you have to gradually distribute pool rewards to "good" miners while you keep raking in penalties from larger blocks.

Otherwise, a large block paying high penalties could be followed right away by another large block, which will offset its penalties with the fee pool reward. The idea here is to penalize miners going over and reward those staying under the soft cap. If you let the miners going over the cap get a cut of the rewards, they can offset their penalties and never care for the whole system.

As a result you need a rolling fee pool, not just a 1 block lifetime pool, and that complicates the implementation, because you need to keep track of the pool size across a range of blocks.
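For reference, this is the kind of rolling-pool bookkeeping I mean, assuming the 1%-per-block clearing rate from your earlier example; the numbers and the payout rule are only illustrative, not a spec:

Code:
# Sketch of a rolling fee pool: penalties are added to the pool, and every
# block only drains a small fraction of it, so a miner who just paid a large
# penalty cannot immediately win most of it back in the next block.
# The 1% clearing rate matches the example quoted earlier; everything else
# is illustrative bookkeeping, not a concrete consensus rule.
CLEAR_RATE = 0.01

def apply_block(pool_balance, block_fees, block_penalty):
    """Return (miner_reward, new_pool_balance) for one block."""
    payout = pool_balance * CLEAR_RATE          # slow drip to the block finder
    miner_reward = block_fees - block_penalty + payout
    new_balance = pool_balance - payout + block_penalty
    return miner_reward, new_balance

# Example: a big block paying a 4 BTC penalty, followed by a small one.
pool = 100.0
reward, pool = apply_block(pool, block_fees=4.0, block_penalty=4.0)
print(reward, pool)   # big-block miner nets only the 1 BTC drip (fees offset by penalty); pool grows to 103
reward, pool = apply_block(pool, block_fees=2.0, block_penalty=0.0)
print(reward, pool)   # small-block miner collects its fees plus a ~1.03 BTC drip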
3226  Bitcoin / Development & Technical Discussion / Re: Elastic block cap with rollover penalties on: June 03, 2015, 05:17:58 AM
I personally think a P2P network cannot rely on magic numbers, in this case 1MB or 20MB blocks. The bar is either set too low, which creates an artificial choke point, or set too high, which opens the door to both DoS attacks and more centralization. As such, a solution that pins block size to fees is, in my point of view, the most sensible alternative to explore. Fees are the de facto index to determine transaction size and priority, so they are also the best candidate to determine valid block size.

However, I have a couple of divergences from Meni's proposal:

First, while I find the fee pool idea intriguing, and I certainly see benefits to it (like countering "selfish" mining), I don't believe it is a necessary device to pin block size limits to fees. Simply put, I think it's a large implementation effort, or if anything a much larger one than is needed to achieve the task at hand. It also requires modifying, or at least amending consensus rules, something the majority of the Core team has been trying to keep to a minimum. I believe there is wisdom in that position.

Second, I expect a fee pool alone will increase block verification cost. If it is tied to block size validity as well, it will increase that cost even further. The people opposing block size growth base their position on the rationale that an increase in the resources needed to propagate and verify blocks effectively raises the barrier to entry of the network, resulting in more centralization. This point has merit, and thus I think any solution to the issue needs to keep the impact on validation cost as low as possible.

Meni's growth function still depends on predetermined constants (i.e. magic numbers), but it is largely preferable to static limits. Meni wants to use it to define revenue penalties to miners for blocks larger than T.

I would scrap the fee pool and use the function the opposite way: the total sum of fees paid in the block defines the maximum block size. The seeding constant for this function could itself be inversely tied to the block difficulty target, which is an acceptable measure of coin value: i.e. the stronger the network, the higher the BTC value, and reciprocally the lower the nominal fee to block size balance point.

With an exponential function in the fashion of Meni's own, we can keep a healthy cap on the cost of larger blocks, which impedes spam by design, while allowing miners to include transactions with larger fees without outright kicking lower fee transactions out of their blocks.

Like Meni's proposal, this isn't perfect, but I believe it comes with the advantage of lower implementation cost and less disturbance to the current model, while keeping the mechanics behind block size elasticity straightforward. Whichever the community favors, I would personally support a solution that ties the block size limit to fees over any of the current proposals.


3227  Bitcoin / Armory / Re: Hard fork procedures on: June 01, 2015, 06:22:26 PM
Yeah, now the six year old is confused.

I guess I'll have to keep a close eye on things, and if anything looks slightly off make the necessary contingencies to ensure my BTC don't end up on the wrong chain.

Armory is an advanced wallet, not meant for 6-year-olds or users otherwise uneducated in the basic functions of the Core implementation, therefore I cannot provide an ELI5-level response.

Now, consider the following: Armory only runs on top of a binary implementing the bitcoin consensus and networking layer. It ultimately trusts that binary which means it doesn't check blocks against its own set of consensus rules.

Considering the difference between Core and XT (i.e. minimal), Armory won't know any better whether it is run against Core or XT. Now, if you understand how Core stores its block data on disk and know that Armory reads the block data from disk, you will know not to run XT and Core on the same home folder if you want to guarantee Armory will see only the fork of your choosing.
3228  Bitcoin / Armory / Re: Hard fork procedures on: June 01, 2015, 12:56:38 PM
OK, but just to make it absolutely clear (assume I'm a six year old who stumbled into bitcoins and computers), if the time comes, and I download and run XT, Armory will have no problem? Also, Armory can run QT by itself, if memory serves. Will we get the option of running XT if there is a fork and we choose to do that?

Depends on the binary you make available to Armory.
3229  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 29, 2015, 11:22:40 PM
The new wallet format supporting BIP32 will be backwards compatible with every previous Armory wallet format.
When you say "BIP32", do you mean BIP44?

I guess that's the proper name for the BIP specified HD wallets. Confusing me >_<"
3230  Bitcoin / Armory / Re: Hard fork procedures on: May 29, 2015, 11:19:03 PM
As long as your Core implementation only records the relevant fork on disk, Armory will only see that.

Armory trusts the local Core instance for block validity, so it doesn't check that transactions are valid (according to this or that set of consensus rules). If somehow your Bitcoin node maintains both forks on disk (why would it, if the divergence in consensus caused a fork?), then Armory will only care about the longest chain.
3231  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 29, 2015, 11:14:34 PM
i saw this in the Bitcoin wiki for deterministic wallets:

"Armory deterministic wallet
Armory has its own Type-2 deterministic wallet format based on a "root key" and a "chain code." Earlier versions of Armory required backing up both the "root key" and "chaincode," while newer versions start deriving the chaincode from the private key in a non-reversible way. These newer Armory wallets (0.89+) only require the single, 256-bit root key. This older format is intended to be phased out in favor of the standard BIP0032 format. "

is this going to be a problem for those who keep and maintain old style wallets with both the chain code and root key?

The new wallet format supporting BIP32 will be backwards compatible with every previous Armory wallet format.
3232  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 29, 2015, 12:41:27 PM
Yes 32 bit. I have 140 Gb (61.3 / 206 Gb)

0.93.x does not work on 32-bit (x86) OSes. Wait for 0.94.
3233  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 29, 2015, 12:22:03 PM
Are you using a 32bit OS? How much free disk space do you have?
3234  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 28, 2015, 11:01:16 PM
force your block file location with --satoshi-datadir
3235  Bitcoin / Armory / Re: Transaction not accepted on: May 28, 2015, 03:04:05 PM
first check box in File -> Settings
3236  Bitcoin / Armory / Re: Transaction not accepted on: May 28, 2015, 11:54:14 AM
btw, i'm not sure if anyone knows or have tried it but yes you can run your p2pool with armory running.

Careful when doing this: Armory has a habit of deciding that bitcoin.conf needs to be overwritten completely, replacing all your bitcoind parameters with a brand new RPC password and nothing else. This may screw with your p2pool-ing, depending on how much you rely on those settings, particularly having a (known) RPC password (also, a default value for maxconnections affects stale rate significantly).

Turn off auto bitcoind management and Armory will ignore bitcoin.conf entirely
3237  Bitcoin / Armory / Re: Armory - Discussion Thread on: May 25, 2015, 06:03:38 PM
I don't think this is exactly true in Armory's case. We have attempted to jail Armory using cgroups but during database rebuilds it fails if the memory restrictions are too stringent (<2GB in my testing). Additionally, if there is swap on the system, it appears the system will use the swap before asking Armory to relinquish any of its used memory, causing other processes on the system to hang waiting for I/O.

Both are obvious symptoms. If Armory is aggressively asking for memory it will take priority over processes in the background or that aren't requesting as much I/O. That's basic OS optimization.

As for memory requirements, Armory expects to have about 2~4GB of RAM at its disposal for its own scanning needs. The rest is mmap'ed files requested by LMDB. That's controlled by a hardcoded value and will probably be dynamically assigned in the upcoming version.
3238  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: May 21, 2015, 10:03:48 PM

As for progress, I'll improve the log message with the upcoming fixes.

Thanks. I hope it could be logged by fixed time interval, not by block interval. It would be very helpful for slow machine like mine.

That's a good idea, I'll do that.
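Something along these lines, roughly (an illustrative Python sketch; the actual scanning code is C++, this just shows logging on a fixed time interval instead of a block interval):

Code:
# Illustrative sketch of time-based progress logging: report every N seconds
# rather than every N blocks, so slow machines still get regular feedback.
import time

LOG_INTERVAL_SEC = 60
last_log = time.monotonic()

def maybe_log_progress(current_height, target_height):
    global last_log
    now = time.monotonic()
    if now - last_log >= LOG_INTERVAL_SEC:
        print(f"scanned up to block {current_height} of {target_height}")
        last_log = now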
3239  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: May 21, 2015, 06:14:48 PM
It became very slow again and I restarted the VM. However, it is redoing ~4000 completed blocks


-WARN  - 1432179524: (BlockUtils.cpp:1071) Scanning from 272867 to 357353
-WARN  - 1432183241: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 275000
-WARN  - 1432190441: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 277500
-WARN  - 1432212681: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 280000

(I killed it here with Control-C)

Log file opened at 1432217549: /home/xxx/.armory/armorycpplog.txt
-INFO  - 1432217554: (BlockUtils.cpp:850) blkfile dir: /home/xxx/.bitcoin/blocks
-INFO  - 1432217554: (BlockUtils.cpp:851) lmdb dir: /home/xxx/.armory/databases
-INFO  - 1432217554: (lmdb_wrapper.cpp:439) Opening databases...
-INFO  - 1432217554: (BlockUtils.cpp:1181) Executing: doInitialSyncOnLoad
-INFO  - 1432217554: (BlockUtils.cpp:1253) Total number of blk*.dat files: 272
-INFO  - 1432217554: (BlockUtils.cpp:1254) Total blockchain bytes: 36,422,961,275
-INFO  - 1432217554: (BlockUtils.cpp:1628) Reading headers from db
-INFO  - 1432217567: (BlockUtils.cpp:1654) Found 357398 headers in db
-DEBUG - 1432217567: (Blockchain.cpp:214) Organizing chain w/ rebuild
-INFO  - 1432217574: (BlockUtils.cpp:1295) Left off at file 271, offset 65665089
-INFO  - 1432217574: (BlockUtils.cpp:1298) Reading headers and building chain...
-INFO  - 1432217574: (BlockUtils.cpp:1299) Starting at block file 271 offset 65665089
-INFO  - 1432217574: (BlockUtils.cpp:1301) Block height 357353
-DEBUG - 1432217574: (Blockchain.cpp:214) Organizing chain w/ rebuild
-INFO  - 1432217575: (BlockUtils.cpp:1337) Looking for first unrecognized block
-INFO  - 1432217575: (BlockUtils.cpp:1489) Loading block data... file 271 offset 65665081
-ERROR - 1432217575: (BlockUtils.cpp:516) Next block header found at offset 65665089
-INFO  - 1432217888: (BlockUtils.cpp:544) Reading raw blocks finished at file 271 offset 87382506
-INFO  - 1432217888: (BlockUtils.cpp:1354) Wrote blocks to DB in 7.70157s
-INFO  - 1432217888: (BlockUtils.cpp:1371) Checking dupIDs from 276324 onward
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1556) Starting with:
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1557) 1 workers
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1558) 1 writers
-WARN  - 1432217900: (BlockUtils.cpp:1071) Scanning from 276325 to 357427


The wording is off. It signals the highest buffered block, not the highest processed and written block (cause I was lazy and didn't change this kinda stuff)

So it took 8 hours to buffer ~5000 blocks without processing? How could I know the actual progress?

Buffering, processing and writing data are all different tasks that take place in parallel. A batch of blocks is buffered, processed and written, then the batch is cleaned up. Only after a batch has been written will the DB resume past that point. The buffering threads have a bottleneck (only 3 buffers in the waiting queue at most). The processing threads have no bottleneck (and usually outpace the writing threads in supernode), which is why the scanning gets really slow (RAM fills with too much data for the writers to run properly). As for why this happens, it's because I forgot to turn the processing threads' bottleneck back on when I was doing some tests o.o"!
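For illustration, the shape of the pipeline looks roughly like this (a Python sketch with stand-in work; the real code is C++, and the point is the bounded queues between stages - the bug was effectively an unbounded queue between the processing and writing stages):

Code:
# Rough shape of the batch pipeline described above (illustrative only). Each
# stage hands batches to the next through a bounded queue; a stage blocks when
# the downstream queue is full, which is what keeps memory usage in check.
import queue, threading

raw_batches       = queue.Queue(maxsize=3)  # buffered block batches (bounded)
processed_batches = queue.Queue(maxsize=3)  # should be bounded too

def buffer_stage(batches):
    for batch in batches:
        raw_batches.put(batch)          # blocks if 3 batches are already waiting
    raw_batches.put(None)               # signal end of input

def process_stage():
    while (batch := raw_batches.get()) is not None:
        processed_batches.put([b * 2 for b in batch])   # stand-in for real work
    processed_batches.put(None)

def write_stage():
    while (batch := processed_batches.get()) is not None:
        pass                            # stand-in for the (slow) DB write

batches = [[i] * 4 for i in range(10)]
threads = [threading.Thread(target=buffer_stage, args=(batches,)),
           threading.Thread(target=process_stage),
           threading.Thread(target=write_stage)]
for t in threads: t.start()
for t in threads: t.join()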

As for progress, I'll improve the log message with the upcoming fixes.
3240  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: May 21, 2015, 05:35:32 PM
It became very slow again and I restarted the VM. However, it is redoing ~4000 completed blocks


-WARN  - 1432179524: (BlockUtils.cpp:1071) Scanning from 272867 to 357353
-WARN  - 1432183241: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 275000
-WARN  - 1432190441: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 277500
-WARN  - 1432212681: (BlockWriteBatcher.cpp:355) Finished applying blocks up to 280000

(I killed it here with Control-C)

Log file opened at 1432217549: /home/xxx/.armory/armorycpplog.txt
-INFO  - 1432217554: (BlockUtils.cpp:850) blkfile dir: /home/xxx/.bitcoin/blocks
-INFO  - 1432217554: (BlockUtils.cpp:851) lmdb dir: /home/xxx/.armory/databases
-INFO  - 1432217554: (lmdb_wrapper.cpp:439) Opening databases...
-INFO  - 1432217554: (BlockUtils.cpp:1181) Executing: doInitialSyncOnLoad
-INFO  - 1432217554: (BlockUtils.cpp:1253) Total number of blk*.dat files: 272
-INFO  - 1432217554: (BlockUtils.cpp:1254) Total blockchain bytes: 36,422,961,275
-INFO  - 1432217554: (BlockUtils.cpp:1628) Reading headers from db
-INFO  - 1432217567: (BlockUtils.cpp:1654) Found 357398 headers in db
-DEBUG - 1432217567: (Blockchain.cpp:214) Organizing chain w/ rebuild
-INFO  - 1432217574: (BlockUtils.cpp:1295) Left off at file 271, offset 65665089
-INFO  - 1432217574: (BlockUtils.cpp:1298) Reading headers and building chain...
-INFO  - 1432217574: (BlockUtils.cpp:1299) Starting at block file 271 offset 65665089
-INFO  - 1432217574: (BlockUtils.cpp:1301) Block height 357353
-DEBUG - 1432217574: (Blockchain.cpp:214) Organizing chain w/ rebuild
-INFO  - 1432217575: (BlockUtils.cpp:1337) Looking for first unrecognized block
-INFO  - 1432217575: (BlockUtils.cpp:1489) Loading block data... file 271 offset 65665081
-ERROR - 1432217575: (BlockUtils.cpp:516) Next block header found at offset 65665089
-INFO  - 1432217888: (BlockUtils.cpp:544) Reading raw blocks finished at file 271 offset 87382506
-INFO  - 1432217888: (BlockUtils.cpp:1354) Wrote blocks to DB in 7.70157s
-INFO  - 1432217888: (BlockUtils.cpp:1371) Checking dupIDs from 276324 onward
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1556) Starting with:
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1557) 1 workers
-WARN  - 1432217900: (BlockWriteBatcher.cpp:1558) 1 writers
-WARN  - 1432217900: (BlockUtils.cpp:1071) Scanning from 276325 to 357427


The wording is off. It signals the highest buffered block, not the highest processed and written block (cause I was lazy and didn't change this kinda stuff)