Because you have to redeem full UTXOs to spend any amount of coins. Read up on transaction mechanics.
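To illustrate the point, here's a minimal Python sketch of why a payment consumes whole UTXOs and returns the surplus as change. The coin-selection logic is hypothetical (a simple greedy pick), not Armory's actual algorithm:

```python
def build_outputs(utxos, amount, fee):
    """Select UTXOs (amounts in satoshis) until they cover amount + fee.

    Outputs can only be spent whole, so the surplus comes back as a
    change output rather than leaving a partial UTXO behind.
    """
    selected, total = [], 0
    for u in utxos:
        selected.append(u)
        total += u
        if total >= amount + fee:
            break
    if total < amount + fee:
        raise ValueError("insufficient funds")
    change = total - amount - fee
    return selected, change

# Spending 0.3 BTC from two 0.25 BTC UTXOs: both are consumed in full,
# and 0.1999 BTC (minus nothing else) returns to the wallet as change.
ins, change = build_outputs([25_000_000, 25_000_000], 30_000_000, 10_000)
print(ins, change)  # [25000000, 25000000] 19990000
```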
|
|
|
You can't address a database over 2GB with LMDB in 32 bit. Fullnode DB in 0.93 is 30+ GB. Something is off in what you are saying.
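A back-of-the-envelope calculation of the limit (sizes are approximate; LMDB memory-maps the whole database file, so the file has to fit in the process address space):

```python
# A 32-bit process can address at most 2**32 bytes = 4 GiB of virtual
# memory, and the OS typically reserves a chunk of that for itself.
address_space = 2**32        # 4 GiB total 32-bit address space
db_size = 30 * 2**30         # ~30 GiB fullnode DB (approximate)

# The mmap can never succeed: the file is several times larger than
# the entire address space, before the OS even takes its share.
print(db_size > address_space)          # True
print(db_size / address_space)          # 7.5
```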
|
|
|
As I said, it was working perfectly fine on 32 bit right up to now. The --satoshi-datadir flag doesn't alter the failure.
You got online and fully sync'ed with 0.93 on a 32 bit OS?
|
|
|
32 bit, Armory 0.93.1
Yeah, no... wait, 0.93.1 does run on Ubuntu 32 bit. It was working with 12.04, 32 bit... The issue isn't running, it's the address space limitation of 32-bit operating systems. You are wasting your time trying to get 0.93.x set up on your system; it will never pass the DB build stage. If you insist on running Armory online on a 32-bit OS, build the 0.94 alpha from the ffreeze branch, although it's unstable currently. Also, your issue is most likely related to auto path discovery. Force the blockchain dir with --satoshi-datadir.
|
|
|
32 bit, Armory 0.93.1
Yeah, no...
|
|
|
EDIT: It passed 272500. Can I assume my database is not corrupted?
Yes.

Would it help if I try building a supernode DB? I'm retiring the 0.9.x bitcoin blockchain from my system disk, so I'll have the space and the patience to let it build.
I've got some changes coming in soon; I'd rather you wait on those.
|
|
|
It died around block 270000 with "segmentation fault". Nothing special in log
EDIT: I restarted and it shows "Scanning from 270396 to 357270"
Well then I guess both fullnode and supernode suffer from the same symptoms. As for the resume height, that looks correct to me.
|
|
|
I'm building a supernode database with 0.93.99.1. Do I need to do it again when 0.94 is finalized and officially released?
Unless something goes stupid, I don't intend to modify the format of the DB ever again. I don't expect I'll have to change it to fix the current bug Carlton Banks and btchris are experiencing. On the other hand, I'd like some feedback on stability, since a lot of the changes were meant to improve supernode stability.
|
|
|
Help -> Rescan Databases.
Then try to spend the same coins again.
|
|
|
That's fine; as long as I perceive no risk to me or my effects, I'm as happy doing that as I've ever been (you're publishing to github after all).
My understanding of the AGPL license is that if you were to modify Armory source and distribute binaries (or run a commercial activity) based on those changes without publishing the altered code publicly, you would be in infringement. I don't think this affects changes for personal and non-commercial use. Regardless, if you were to fork Armory publicly (say, on a public github repo), you could make as many changes as you like. That's just my interpretation, however. From my perspective there is no license infringement in making those changes under these conditions, and you have my guarantee I won't be coming after you over this. Again, I'm no lawyer and these words only engage me. I don't have the credentials to speak for the business on this matter. If anything, the responsibility is mine, since I asked you to make these changes.
|
|
|
I don't know whether you're asking me to break the terms of the license by doing that. Can I get some assurance that I'm protected from legal action by your employers? I have zero interest in reading the license terms myself, it is, in my opinion, abhorrent to use litigation to enforce so-called freedom.
I don't know how the legal stuff goes. I can give you my personal guarantee but I don't know what that is worth either, legally speaking (I doubt I'm in a position to do that). I'm going to push some stuff today and I'll disable the prefetch thread along the way, how about you wait for that?
|
|
|
Carlton Banks, btcchris: something new to try, disable the block data prefetch thread. In BlockWriteBatcher.h, around line 416, comment out the 2 lines shown commented here:

    LoadedBlockData(const LoadedBlockData&) = delete;

    BFA_PREFETCH getPrefetchMode(void)
    {
       /*if (startBlock_ < endBlock_ && endBlock_ - startBlock_ > 100)
          return PREFETCH_FORWARD;*/

       return PREFETCH_NONE;
    }
Regarding speed, in fullnode the bottleneck is your CPU. On my Debian VM with 2 workers I scan fullnode in 40min. On my host with 9 workers I scan it in 5 (i7 5820). Once the thread toggler is ready it should run a lot better than going with some estimated value.
|
|
|
Welcome to my world =P Threading issues are tricky. Consider that no one has run into trouble in-house, yet our first 2 testers hit something right away. You could try to force the thread count per task manually and see how it reacts. In BlockWriteBatcher.cpp, around line 1540, add the two forced assignments (workers = 1; writers = 1;) shown below:

    else
    {
       //otherwise, give workers 90% of the processing power
       int32_t writers = (totalThreadCount_ * 10) / 100;
       if (writers < 1)
          writers = 1;

       int32_t workers = totalThreadCount_ - writers;
       if (workers < 1)
          workers = 1;

       workers = 1;
       writers = 1;

       nWorkers_.store(workers);
       nWriters_.store(writers);
    }

This controls the amount of workers (reading through transactions and extracting history) and writers (here you would force them both to 1 to test low-performance stability). Also, around line 385, add the forced assignment (nThreads = 1;) shown below:

    {
       //Grab the SSHheader static mutex. This makes sure only this scan is
       //creating new SSH keys. If several scanning threads were to take place,
       //it could possibly result in key collision, as scan threads are not aware
       //of each others' state
       unique_lock<mutex> addressingLock(SSHheaders::keyAddressingMutex_);

       uint32_t nThreads = 3;
       if (int(endBlock) - int(startBlock) < 100)
          nThreads = 1;

       nThreads = 1;

       prepareSshToModify(scf);

This controls the amount of "grab" threads, which pull raw data from the blockchain and serialize it into PulledBlock objects used by the workers. Be nice and don't comment on my naming sense, unlike some of my colleagues T__T. This stuff needs consolidating, but I'll get to that once the thread toggler is done. As you can see, it runs on static thread counts currently, but it's meant to adjust resources per task dynamically (to squeeze every last drop of processing power regardless of the machine, and eventually give the user live control over how many CPU cores it can use). For max performance in fullnode, you want a grab thread per worker and only 1 writer.
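To make the 90/10 split concrete, here is the same arithmetic as the BlockWriteBatcher.cpp snippet translated to a Python sketch (the function name is illustrative, not Armory code):

```python
def split_threads(total):
    """Mirror the worker/writer split: writers get ~10% of cores,
    workers get the rest, and each count is clamped to at least 1."""
    writers = (total * 10) // 100
    writers = max(writers, 1)
    workers = max(total - writers, 1)
    return workers, writers

# On a 12-thread machine: 11 workers, 1 writer.
print(split_threads(12))  # (11, 1)
# On a 2-thread machine both clamp to 1, matching the forced
# low-performance test described above.
print(split_threads(2))   # (1, 1)
```

Note the clamping order matters: with very few cores the 10% share rounds to zero, so writers are bumped to 1 before workers are computed.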
|
|
|
btchris: unless you consistently get the same error, this is probably the aftermath of a buffer overrun in another thread.
Carlton Banks: That's usually symptomatic of a threading issue. Throw off the timing a little and the issue doesn't appear anymore.
|
|
|
Carlton Banks: would you mind building ffreeze in debug mode, running it in gdb and posting the backtrace when it crashes?
I don't totally know what you mean, but yes in principle. What's gdb?

https://www.gnu.org/software/gdb/

Essentially, the Linux debugger. First install gdb. On Debian-based systems, sudo apt-get install gdb will do it. I'm not familiar with rpm but I expect there are gdb builds available. With gdb installed, type these commands in the top code folder:

    make clean
    make DEBUG=1        <- build with debug symbols
    gdb python          <- start python in debug mode
    run ./ArmoryQt.py   <- the command line argument goes after run
    ...                 <- code is now running, wait till it segfaults
    backtrace           <- after it segfaults, you receive control back; type in backtrace and paste the result back in this thread
|
|
|
It's also a heck of a lot faster to start up Armory the first time once Core is sync'd, since we're not building a 30+ GB database anymore. It's called "headless fullnode" because it's the same as the previous "fullnode" version of Armory, but without maintaining its own copy of the blockchain (of course, if you run supernode it will maintain your 2x-3x sized DB, but that's not the default Armory mode).

Will it ever be possible to run Armory without requiring filesystem-level access to the block files? It seems like a change such as this makes such an operating mode more difficult. I run bitcoind and Armory in separate virtual machines, and finding a way to share those files in a way that gets all the permissions right and satisfies LevelDB's locking requirements is non-trivial. It would be great to have an option for Armory to maintain its own database using blocks it obtains via its peer connection to the node, rather than via direct filesystem access, without requiring a supernode.

We do plan on implementing blocks over p2p.
|
|
|
Also, SDM mode produces this:

    Traceback (most recent call last):
      File "/usr/lib/armory-testing/ArmoryQt.py", line 7129, in <module>
        form = ArmoryMainWindow(splashScreen=SPLASH)
      File "/usr/lib/armory-testing/ArmoryQt.py", line 259, in __init__
        self.startBitcoindIfNecessary()
      File "/usr/lib/armory-testing/ArmoryQt.py", line 2441, in startBitcoindIfNecessary
        self.setSatoshiPaths()
      File "/usr/lib/armory-testing/ArmoryQt.py", line 2499, in setSatoshiPaths
        TheTDM.setSatoshiDir(self.satoshiHomePath)
    AttributeError: 'FakeTDM' object has no attribute 'setSatoshiDir'
and then crashes. That's the torrent manager failing to find your /blocks folder. That suggests it's looking at an empty folder.
|
|
|
Whole thing crashes when scanning txs. CPU/memory usage goes vertical-ish. Possibly the source of offence:

    (python:3377): Gtk-CRITICAL **: IA__gtk_progress_configure: assertion `value >= min && value <= max' failed
    Killed

Logs are otherwise uneventful.

How far did it get in block height?
|
|
|
We build the longest chain based on headers and verify that the block data hashes to what the respective header carries. We ultimately trust that Core is serving us valid blocks. While Core does write invalid blocks from time to time, the only way this could throw off Armory is if Core were to maintain an invalid fork that is stronger than the main valid branch.
I think at that point the network as a whole would have bigger issues to deal with than Armory displaying data from a huge invalid fork.
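The header check described above boils down to double-SHA256: a block's identity is the hash of its 80-byte header, so data served by Core can be verified against the header chain. A minimal Python sketch (the all-zero header is a placeholder, not real block data):

```python
import hashlib

def block_hash(header_bytes):
    """Double-SHA256 of an 80-byte block header, reversed to the
    big-endian form block hashes are usually displayed in."""
    return hashlib.sha256(hashlib.sha256(header_bytes).digest()).digest()[::-1]

header = bytes(80)          # placeholder 80-byte header
h = block_hash(header)
print(len(h))               # 32-byte hash, deterministic for the same header
```

If the hash of the block data's header doesn't match the header already in the chain, the block Core served is rejected.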
|
|
|
|