3081  Bitcoin / Armory / Re: Tutorial: Compiling Armory and getting it onto an offline computer on: August 21, 2015, 10:26:59 PM
Quote
Yes, that's better, and putting the SD card in read-only mode while on the offline machine is another assurance of safety... Is an SD card exploitable through hardware this way?

My understanding of SD cards is that they don't have to negotiate a device class the way USB does: the OS identifies them only as storage, and their drivers only allow for that anyway.

On the other hand, a USB device can negotiate several classes for the same VID & PID, most notably the infamous HID class. The vast majority of USB attacks come from that unrestricted class negotiation allowed by the standard, and most of them rely on the power of the HID class.
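
To illustrate the difference, a USB interface declares its class in a single byte of its interface descriptor, independently of the device's VID/PID. A toy Python parse (descriptor bytes made up for illustration):

Code:
import struct

# Standard USB-IF class codes
CLASS_HID = 0x03           # keyboards, mice -- the attack surface
CLASS_MASS_STORAGE = 0x08  # plain storage, like an SD card reader

def interface_class(descriptor):
    """Return bInterfaceClass from a 9-byte USB interface descriptor."""
    # Layout: bLength, bDescriptorType, bInterfaceNumber, bAlternateSetting,
    # bNumEndpoints, bInterfaceClass, bInterfaceSubClass, bInterfaceProtocol,
    # iInterface
    return struct.unpack('9B', descriptor)[5]

# A single device (one VID/PID) can expose a storage interface *and* an HID
# interface; the OS will happily bind a keyboard driver to the latter.
evil_iface = bytes([9, 4, 0, 0, 1, CLASS_HID, 0x01, 0x01, 0])
assert interface_class(evil_iface) == CLASS_HID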
3082  Bitcoin / Development & Technical Discussion / Re: Dynamically Controlled Bitcoin Block Size Max Cap on: August 21, 2015, 12:19:47 PM
Quote
How does a difficulty change affect the size of blocks found today? Is there any correlation between difficulty and block size? If not, then IMO it won't be wise to make difficulty a parameter to change the max block size cap.

It does not, but I would return the question to you: how is a doubling/halving of the block cap representative of the actual market growth/contraction that triggered the change? Difficulty variations are built into the blockchain and provide a very realistic perspective on the economic progression of the network, as they are a marker of profitability.

Keep in mind that my proposal also evaluates total fee progression over difficulty periods, so if a new chip is released that largely outperforms previous generations and the market quickly invests in it, that event on its own would not be enough to trigger a block size increase, as there is no indication fees would also climb in the same fashion.

The idea is to keep the block size limit high enough to support organic market growth, while progressing in increments small enough not to undermine the fee market. I think difficulty progression is an appropriate metric to achieve that goal.

I've always thought fees should be somehow inversely pegged to difficulty to define the baseline of a healthy fee market. This is a way to achieve it.

Quote
People, the problem is not 'what' the limit should be and 'how' to reach it. The problem is that large blocks will kill Bitcoin, so large blocks are not an option. What to do, then, is the question: how do we make Bitcoin scalable?

The problem is that blocks larger than the network's baseline resources will magnify centralization. That is a bad thing, but the question is not "how to make Bitcoin scalable". My understanding of scalability (please share yours if it differs from mine) is a piece of software that attempts to consume as many resources as are made available. An example of a scalable system is Amazon's EC2: the more physical machines support it, the more powerful it gets. Another is BitTorrent, where the more leechers show up, the more bandwidth the torrent totals (i.e. bandwidth is not defined by seedboxes alone).

I would say the current issue with Bitcoin and big blocks isn't scalability but rather efficiency. We don't want to use more resources; we want to use the same amount of resources in a more efficient manner. Block size is like a barrier to entry: the bigger the blocks, the higher the barrier. Increasing the efficiency of block propagation and verification would in return reduce that barrier, allowing for an increase in size while keeping the network healthy. I am not familiar with the Core source, but I believe there is some low-hanging fruit we can go after when it comes to block propagation.

Also, I believe the issue isn't truly efficiency, but rather centralization. Reducing the barrier to entry increases participation and thus decentralization, but the real issue is that there are no true incentives to run nodes or to spread mining across smaller clusters. I understand these are non-trivial problems, but that's what the September workshop should be about, rather than scalability.

If there is an incentive to run full nodes and an incentive to spread mining, then block size will no longer be a metric that affects centralization on its own. Keep in mind that it currently is one partly because it is one of the last few metrics set to a magic number. If it were controlled by a dynamic algorithm tracking economic factors, we wouldn't be wasting sweat and blood on this issue today, and we would instead be looking at how to make the system more robust and decentralized.
3083  Bitcoin / Armory / Re: Transferring exactly 100% of wallet when transaction fee is variable on: August 20, 2015, 05:15:59 PM
Which version are you using and do you have dust in your wallet?
3084  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 20, 2015, 05:13:57 PM

Quote
However, I keep on getting popups about Armory losing its connection to bitcoind.
(ERROR) Networking.py:359 - ***Connection to Satoshi client LOST!  Attempting to reconnect...

I found that "maxconnections=6" in bitcoin.conf was causing bitcoind to reject connections from Armory on the localhost interface. When I changed it to "maxconnections=30", Armory started working as expected, and bitcoind says it has 9 open connections.


Oh right, there's that too. I personally do it the other way and add "addnode=127.0.0.1" to bitcoin.conf, to guarantee Core tries to keep the localhost connection with Armory alive.
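
For reference, the two approaches in bitcoin.conf look like this (values illustrative; both maxconnections and addnode are standard Core options):

Code:
# bitcoin.conf
# Option 1: raise the connection budget so P2P peers don't crowd out Armory
maxconnections=30
# Option 2: explicitly ask Core to maintain the localhost connection to Armory
addnode=127.0.0.1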
3085  Bitcoin / Development & Technical Discussion / Re: Dynamically Controlled Bitcoin Block Size Max Cap on: August 20, 2015, 05:02:54 PM
I like this initiative; it is by far the best I've seen, for the following reasons: it allows for both increase and reduction of the block size (this is critical), it doesn't require complicated context, and mainly, it doesn't rely on a hardcoded magic number to rule it all. However, I'm not comfortable with the doubling nor the thresholds, and I would propose to refine them as follows:

1) Roughly translating your metrics gives something like (correct me if I misinterpreted):

- If the network is operating above half capacity, double the ceiling.
- If the network is operating below half capacity, halve the ceiling.
- If the network is operating around half capacity, leave it as is.

While the last two make sense, the first one is out of proportion, IMO. The increment step could be debated over and over, but I think a more straightforward solution is to peg it to difficulty, i.e. if an increase is triggered, the block size limit should be readjusted in the same proportion that the difficulty changed (see the sketch after this list):

- If the difficulty increased 20% and a block size limit increase is triggered, the limit would be increased by 20%.
- If the difficulty only increased by 5%, so would the block size limit.
- If the difficulty increased but the block limit increase was not triggered, stay as is.
- If the difficulty was reduced, in every case reduce the block limit by that same proportion.
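
A minimal sketch of that pegging rule in Python (names and the trigger flag are hypothetical, not from the actual proposal):

Code:
def adjust_block_limit(prev_limit, prev_difficulty, new_difficulty, increase_triggered):
    """Peg block size limit changes to the difficulty change of the last period."""
    ratio = new_difficulty / float(prev_difficulty)
    if ratio < 1.0:
        # Difficulty dropped: in every case, shrink the limit by the same proportion.
        return prev_limit * ratio
    if increase_triggered:
        # Difficulty rose and the usage threshold fired: grow by the same proportion.
        return prev_limit * ratio
    # Difficulty rose but no increase was triggered: leave the limit as is.
    return prev_limit

So a 20% difficulty increase with the trigger set yields a 20% larger limit, a 5% increase yields 5%, and so on.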

As for the increase threshold, I don't think your condition covers the most common use case. A situation where 100% of blocks are filled at 85% would not trigger an increase, but a network where 50% of blocks are filled at 10% and the other 50% are full would trigger one, and that pattern is more representative of a spam attack than of organic growth in transaction demand.

I would suggest evaluating the total size used by the last 2016 blocks (a difficulty period) as a whole: if it exceeds 2/3 or 3/4 (or whatever value is most sensible) of the maximum capacity, then trigger an increase.

Maybe that is your intended condition, but from the wording, I can't help thinking that you evaluate size consumption per block, rather than as a whole over the difficulty period.
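
To make the distinction concrete, here is a sketch of the aggregate check I have in mind (the threshold is a placeholder):

Code:
def aggregate_increase_triggered(block_sizes, max_block_size, threshold=0.75):
    """Evaluate usage over the whole difficulty period rather than per block.

    block_sizes: sizes (in bytes) of every block in the last difficulty period
    """
    used = sum(block_sizes)
    capacity = len(block_sizes) * max_block_size
    return used >= threshold * capacity

Under this check, a period of blocks all 85% full does trigger an increase, while the half-spam pattern above (half the blocks at 10%, half full) sits at roughly 55% aggregate usage and does not.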

2) The current situation on the Bitcoin network is that it is trivial and relatively cheap to spam transactions, and thus to trigger a block ceiling increase. At the same time, the conditions for a block size decrease are rather hard to sustain: an attacker needs to fill half the blocks for a difficulty period to trigger an increase, but only needs to keep 11% of blocks half full to prevent a decrease.

Quoting from your proposal:

Quote
Those who want to stop a decrease need to have more than 10% hash power, but must mine more than 50% of MaxBlockSize in all blocks.

I don't see how that would stop anyone with that much hashing power from preventing a block size decrease. As you said, there is an economic incentive for a miner to include fee-paying transactions, which reduces the possibility that a large pool could prevent a block size increase by mining empty blocks, as it would bleed hash power pretty quickly.

However, this also implies there is no incentive to mine empty blocks. While a large miner can attempt to prevent a block size increase (at his own cost), a large group of large miners would be hard pressed to trigger a block size reduction, as a single large pool could send transactions to itself, paying fees to its own miners, to keep 11% of blocks half full.

I would advocate that the block size decrease should also be triggered by used block space vs. maximum available space as a whole over the difficulty period. I would also advocate for a second condition on any block size change: the total fees paid over the difficulty period:

- If blocks are filling and the total sum of paid fees has increased by at least a given fraction of the difficulty increase (say 1/10th, again up for discussion) over a single period, then a block size increase is triggered.
- The same goes for the decrease mechanism: if block size usage and fees have both decreased accordingly, trigger a block size decrease.

Either condition on its own is not enough. Simply filling blocks without an increase in fees paid is not sufficient to justify increasing the network's capacity; as blocks keep on filling, fees go up and eventually both conditions are met. On the other hand, if block space usage goes down but fees remain high, or fees go down but block space usage goes up (say, after a block size increase), there is no reason to reduce the block size either.

3) Lastly, I believe that in case of a stalemate, a decay function should take over. Something simple, say a 0.5~1% decay every difficulty period that didn't trigger an increase or a decrease. A block size increase is not hard to achieve, as it relies on difficulty increasing, blocks filling up, and fees climbing, which all take place concurrently during organic growth. If the block limit naturally decays in a stable market, it will in turn put pressure on fees and naturally increase the block fill rate. The increase in fees will in turn increase miner profitability, creating opportunities. Fees are high, blocks are filling up, difficulty is going up, and the ceiling will be bumped up once more, to slowly decay again until organic growth resumes.

In case of a spam attack, however, this forces the attacker to keep up with the climbing cost of triggering the next increase, rather than simply maintaining the size increase he triggered at low cost.

I believe that with these changes to your proposal, it would become exponentially expensive for an attacker to push the ceiling up, while allowing an organic fee market to form and preventing fees from climbing sky-high, as higher fees would eventually bump up the size cap.
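Putting points 1) through 3) together, one possible reading of the whole per-period rule (all names and thresholds are placeholders, up for discussion):

Code:
def next_block_limit(limit, fill_ratio, fee_growth, diff_growth,
                     fill_threshold=0.75, fee_fraction=0.10, decay=0.995):
    """Compute the next difficulty period's block size limit.

    fill_ratio:  used bytes / available bytes over the whole difficulty period
    fee_growth:  relative change in total fees paid vs. the previous period
    diff_growth: relative change in difficulty vs. the previous period
    """
    if diff_growth < 0:
        # Difficulty fell: always shrink the limit in the same proportion.
        return limit * (1.0 + diff_growth)
    if fill_ratio >= fill_threshold and fee_growth >= fee_fraction * diff_growth:
        # Blocks are filling AND total fees kept pace with difficulty: grow.
        return limit * (1.0 + diff_growth)
    # Stalemate: decay slowly (0.5~1% per period) to keep pressure on the fee market.
    return limit * decay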

3086  Bitcoin / Development & Technical Discussion / Re: Not Bitcoin XT on: August 20, 2015, 02:20:21 PM
Quote
Right now 1 company has control over bitcoin's future, and has been spreading baseless FUD and ad hominem because it does not intend to relinquish it. If that is not bad enough already...

And how exactly is this not FUD against the Core devs? Even if you concede the accusations, how is this not an ad hominem argument? There is a very clear reason this debate should remain purely technical: this is an engineering problem, and it should be treated only on that basis.
3087  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 19, 2015, 10:28:33 PM
Quote
I'm sorry if I have come across as pushy. That was never the intention; I know the time of any developer is precious, so I would not want to hog it. While of course wanting to reach my personal goal of running the newest version of Armory on my old computer, I genuinely hoped that if this turned out to be a bug, I could be of help in solving it.

It was not my intent to sound exasperated. By our standards, this is a bug, but most likely one in CryptoPP, which has been unmaintained for a while now.

The reality of this bug is that, first of all, to fix it I'd probably need to observe the code's behavior on your CPU directly. The kind of remote debugging practiced in this thread is only good for identifying issues. Unless I have a very good hunch as to the underlying cause (usually I do if it's my own code), I can't fix this stuff without experiencing it first hand. It turns out this is probably a CryptoPP issue, so short of having a machine like yours, I'll have a terrible time finding a fix.

The other part is that CryptoPP is a library we piggyback on. Seeing as it's crypto, we are very cautious about modifying anything in there. Usually we would narrow down the issue and report it to the official maintainer, but that's not an option anymore. We'd have to weigh the cost of finding the issue (which possibly requires someone to get very familiar with this library) and coming up with as small a fix as possible against the reward, and against the risk of modifying our crypto library. Simply put, even if we had a fix, we might not implement it.

The future isn't so dark, however. As Core is planning to drop OpenSSL, we too plan on ditching CryptoPP and relying on the libraries Core is replacing OpenSSL with, one of which is Sipa's secp256k1 library. So it's possible this bug will entirely go away in the future.

There is also a plan to come up with lighter builds for some targeted versions, such as a purely offline Armory and Litenode. Getting rid of the need for C++11 in the offline bundle may very well be one of our goals, as maintaining compatibility with older OSes is painful otherwise, and that compatibility is desirable for our offline package.
3088  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 19, 2015, 03:55:17 PM
Then it's possibly GCC's C++11 implementation that has some forward compatibility issues. Either try Clang, or take it easy and just rely on 0.92.
3089  Bitcoin / Armory / Re: Armory i18n (internationalisation) on: August 19, 2015, 10:04:51 AM
Ask njaard
3090  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 19, 2015, 10:03:02 AM
The version of gcc you use does not support C++11.

Try clang or build an earlier version, like 0.92. You can pull the tag v0.92.3 and build that.

To make the .deb, try this script:

https://github.com/etotheipi/BitcoinArmory/blob/v0.92.3/dpkgfiles/make_deb_package.py

I'm not all that familiar with it but this is what etotheipi used to build 0.92 at least.
3091  Bitcoin / Armory / Re: Armory - Discussion Thread on: August 18, 2015, 10:39:38 PM
Build ffreeze and try that instead.
3092  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 18, 2015, 10:38:26 PM
I'm thinking the CPU is old enough that it could be missing some extended instruction sets. The easiest way to verify this would be to build from source straight on the faulty machine. If you don't want to build directly on your soon-to-be offline signer, build from source on your other machine in this fashion:

Code:
make DEBUG=1

The debug build should turn off all compiler and linker optimization, which may be just what you need to fix this.
3093  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 18, 2015, 05:06:45 PM
That's the first time I've seen it choke like this. There is little I can do to help without a backtrace. Do you have another machine you can create a mock wallet on? Same software, different hardware kind of test.
3094  Bitcoin / Armory / Re: Offline bundle for newer Ubuntus? on: August 18, 2015, 03:58:24 PM
Quote
OSError: [Errno 2] No such file or directory: '/home/thomas/.bitcoin/'

It failed here and halted the entire process of discovering system specs. Start in offline mode or create that folder.
3095  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 18, 2015, 09:22:37 AM
Quote
Is there an option to increase the polling interval to, let's say, 10 second?

There is no command line argument for this purpose, AFAIK. The current stance on this piece of code is that it works well enough that we've completely ignored it for the past few version updates. As such, I am completely unfamiliar with it.

There is also no plan to fix or overhaul it, as getting blocks over P2P would require a C++ implementation of the Bitcoin P2P protocol, in which case all communications with Core would be handled straight from the backend and the Python code dropped entirely, which is another reason I've made a point of ignoring it for so long.


At any rate, if you feel adventurous, you can modify the hardcoded value in the Python code. I think this line should do it:

https://github.com/etotheipi/BitcoinArmory/blob/ffreeze/ArmoryQt.py#L6391

Change nextBeatSec=1 to whatever you see fit. You can use decimal values for finer-grained intervals (e.g. 1.2 translates to 1200 ms).
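
For context, the polling loop follows the usual Twisted heartbeat pattern, roughly like this (a simplified sketch, not Armory's actual code):

Code:
from twisted.internet import reactor

def heartbeat(nextBeatSec=10):
    # Poll bitcoind here (new blocks, connection health, etc.), then
    # reschedule ourselves nextBeatSec seconds from now. callLater accepts
    # floats, which is why 1.2 means a 1200 ms interval.
    reactor.callLater(nextBeatSec, heartbeat, nextBeatSec)

heartbeat()
reactor.run()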

Quote
Or better still, could Armory open a single tcp connection to bitcoind and keep it open for
the duration of the session?

I don't think that's the default reactor behavior. Dropping the socket after every iteration is indeed expensive compared to a keep-alive model, but still marginal in the scope of a localhost connection. The issue isn't so much the number of sockets opened as the less-than-stellar Python GC, and the fact that no one in Python bothers with explicit clean-ups.
3096  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 17, 2015, 09:26:04 PM
bad_alloc means the process failed to allocate new memory. Most likely the code is trying to allocate a ridiculous amount of RAM (possibly from a badly deserialized varInt).
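
For context, Bitcoin serializes lengths as CompactSize varints, so a few corrupted bytes can decode into an absurd length that the reader then tries to allocate. A Python sketch of the failure mode (the sanity cap is illustrative, not Armory's actual guard):

Code:
import struct

MAX_SANE_SIZE = 32 * 1024 * 1024  # illustrative sanity cap, not Armory's value

def read_varint(data, offset=0):
    """Parse a Bitcoin CompactSize varint; returns (value, next_offset)."""
    prefix = data[offset]
    if prefix < 0xfd:
        return prefix, offset + 1
    if prefix == 0xfd:
        return struct.unpack_from('<H', data, offset + 1)[0], offset + 3
    if prefix == 0xfe:
        return struct.unpack_from('<I', data, offset + 1)[0], offset + 5
    return struct.unpack_from('<Q', data, offset + 1)[0], offset + 9

# A corrupt 0xff prefix reads the next 8 bytes as a length: garbage decodes
# to a gigantic count, and allocating a buffer that big throws bad_alloc in C++.
count, _ = read_varint(b'\xff' + b'\xee' * 8)
assert count > MAX_SANE_SIZE  # a defensive parser would reject this outright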

Download the testnet chain and try to build that. Also, could you give me your first block file? (blk00000.dat)
3097  Bitcoin / Armory / Re: Armory not recognizing that bitcoind is still catching up on: August 17, 2015, 12:53:29 PM
The bootstrap does not cover all of the blockchain. It is usually updated along with Core releases, as milestone hashes are hardcoded in. Downloading the bootstrap will only give you history up to a certain point in the past; the rest has to be downloaded by Core itself, and Armory can only display your up-to-date balance once you have an up-to-date copy of the blockchain locally.

Start BitcoinQt and let it finish downloading blocks. Once it is fully synced, start Armory; it should be able to figure things out on its own. If it fails, do a rebuild and rescan.
3098  Bitcoin / Armory / Re: Armory - Discussion Thread on: August 16, 2015, 11:31:12 PM
Possibly some bad blocks. Try to sync the testnet chain and see if you get the same symptoms.
3099  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 16, 2015, 09:35:46 PM
This looks severe. Do a Help -> Rescan and see if you get the same output.
3100  Bitcoin / Armory / Re: Starting preliminary 0.94 testing - "Headless fullnode" on: August 16, 2015, 09:15:54 AM
Quote
Although I am running 0.93.2, I can report as much. My desktop has been running Armory for more than a year now and I've had no issues or crashes at all. Zero. My laptop, however, is a different story. Whenever I wake it up from days of inactivity, Armory quickly throws a "missing headers" error and quits, only to return with full functionality on the next launch. At first I didn't know if it was an issue with sleep or just it not being able to keep up with bitcoind, but today, for the first time, it happened on the desktop.

We had a power failure yesterday, and after restoration there was no internet until just about now. During all this time Armory had been running uninterrupted. When the internet came back, bitcoind began catching up with a day's worth of blocks, and Armory threw the "missing headers" error.

Could you please investigate whether there are issues with Armory keeping up with bitcoind when it's behind by, say, 2-3 days and suddenly gets a chance to catch up?

I am re-reporting this in the ongoing bigger Armory thread to keep a 0.93 error off the 0.94 post, but since you are investigating this now, I figured I'd mention it here as well.

That's a 0.93-specific issue that has been fixed in 0.94.

0.93 can't tell the difference between a mangled block and a block in the process of being written to disk (which appears mangled when Armory reads it). 0.94 sidesteps the issue by rereading partial blocks at each iteration, until the partials either fill up or a valid block further in the file extends the chain.
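
Conceptually, the 0.94 scan pass looks something like this (a heavily simplified sketch with hypothetical names, not the actual backend code):

Code:
def scan_block_file(f, offset, pending_partials, process_block):
    """One pass over a blk*.dat file; partial blocks are retried on the next pass."""
    f.seek(offset)
    while True:
        header = f.read(8)  # 4 magic bytes + 4-byte little-endian block length
        if len(header) < 8:
            break
        length = int.from_bytes(header[4:], 'little')
        payload = f.read(length)
        if len(payload) < length:
            # Core is still writing this block: it looks mangled right now, so
            # remember the offset and reread it on the next iteration.
            pending_partials.add(offset)
            break
        pending_partials.discard(offset)  # the partial filled up; process it
        process_block(payload)
        offset += 8 + length
    return offset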

The true solution would be to listen on the new block P2P notification and grab data over the socket, but that's a whole new effort.