ShadesOfMarble
Donator
Hero Member
Offline
Activity: 543
Merit: 500
|
|
March 25, 2014, 08:23:23 PM |
|
What is, in your experience, the fastest db to use with Abe?
|
|
|
|
John Tobey (OP)
|
|
March 25, 2014, 10:33:43 PM |
|
What is, in your experience, the fastest db to use with Abe?
SQLite with connect-args=":memory:" or (according to K1773R) something entirely in RAM (tmpfs). Honestly, Abe could be much better optimized for both speed and space. Data could be further denormalized. Unindexed data such as scripts, and even the non-initial bytes of hashes, could be read from the blockfile or RPC. If you want speed, I suggest you compare (the now open source) BlockExplorer or other such projects.
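For reference, a minimal Abe configuration along those lines might look like the following (the exact option spelling should be checked against Abe's own README; the tmpfs path is just an example, not from this thread):

```
# In-memory SQLite: fastest, but the data is lost when the process exits.
dbtype sqlite3
connect-args :memory:

# Alternative per K1773R: keep a normal SQLite file on a RAM-backed
# filesystem (tmpfs), so it survives process restarts but not reboots.
# dbtype sqlite3
# connect-args /mnt/tmpfs/abe.sqlite
```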
|
|
|
|
publicjud
Legendary
Offline
Activity: 1120
Merit: 1003
twet.ch/inv/62d7ae96
|
|
March 26, 2014, 09:29:19 PM |
|
anyone know the fix to the decimal point being off by 100? Everything is correct except it says 2301 instead of 23.01, for example.
|
Join Twetch twet.ch/inv/62d7ae96
|
|
|
K1773R
Legendary
Offline
Activity: 1792
Merit: 1008
/dev/null
|
|
March 26, 2014, 10:10:32 PM |
|
anyone know the fix to the decimal point being off by 100? Everything is correct except it says 2301 instead of 23.01, for example.
see https://bitcointalk.org/index.php?topic=22785.msg4601816#msg4601816
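The linked post concerns the chain's coin denomination: if a chain uses a different number of decimal places than Abe's display code assumes, every amount comes out off by a constant power of ten. A small self-contained illustration of the arithmetic (plain Python, not Abe's actual code; the function name is made up here):

```python
from decimal import Decimal

def format_amount(base_units, decimals):
    """Convert an integer amount in base units to a display value."""
    return Decimal(base_units) / Decimal(10 ** decimals)

# The same raw amount, 2301000000 base units, renders as 23.01 with
# 8 decimal places but as 2301 with only 6 -- exactly the
# factor-of-100 discrepancy described above.
```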
|
[GPG Public Key] BTC/DVC/TRC/FRC: 1K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM: AK1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: NK1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: LKi773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: EK1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: bK1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
|
|
|
publicjud
Legendary
Offline
Activity: 1120
Merit: 1003
twet.ch/inv/62d7ae96
|
|
March 26, 2014, 10:59:02 PM |
|
Thank you, fixed problem.
|
Join Twetch twet.ch/inv/62d7ae96
|
|
|
|
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
|
|
March 27, 2014, 06:26:22 PM |
|
I have a question about mempool transactions regarding performance: I'm running http://blockexplorer.auroracoin.eu and, because I have allocated quite a machine to the task, everything was snappy and fine this morning. However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", and the nginx I have in front reporting gateway timeouts. I'm hypothesizing the db queries are the bottleneck. I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away.

There are loads of mempool transactions in Auroracoin because we're being pool-hopping-attacked to the point where there hasn't been a block for 6 hours or so. Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes. What fixed it was to re-initialize BOTH the blockchain of the AuroraCoind AND the db (just the db didn't help).

Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance? I'm a bit confused... does anyone have an idea what could've been causing this? It could be transactions being left open or suboptimal SQL. If you have a collection of Python stack traces or database process lists from during the timeouts, they might point to the offender(s).
This is the error:

Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
(I think nginx (load-balancing here) or the client times out because it's taking too long.) In AuroraCoind's debug.log I see this:

22478 ThreadRPCServer method=getrawtransaction
22479 ThreadRPCServer method=getrawtransaction
22480 ThreadRPCServer method=getrawtransaction
[... identical getrawtransaction lines continue through 22504 ...]
about 500 per second.

psql> select * from pg_stat_activity; reports only one IDLE connection.
|
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
|
|
|
John Tobey (OP)
|
|
March 27, 2014, 06:37:51 PM |
|
This is the error:

Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
(I think nginx (load-balancing here) or the client times out because it's taking too long.)
No Abe stack frames here, so I don't have much to go by.
On AuroraCoind's debug.log I see this:

22478 ThreadRPCServer method=getrawtransaction
22479 ThreadRPCServer method=getrawtransaction
22480 ThreadRPCServer method=getrawtransaction
[... identical getrawtransaction lines continue through 22504 ...]
about 500 per second. psql> select * from pg_stat_activity; reports only one IDLE connection.
Perhaps the HTTP request is triggering a catch-up which takes too long. Have you tried separating the loader from the server? One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]). Then the web requests will not wait for data to load.
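The split described here can be sketched as a pair of shell wrappers. The module invocation and the sleep interval are assumptions (adjust to however you normally start Abe); only the --no-serve/--no-load flags come from the post above:

```shell
# Hypothetical wrappers around the loader/server split.

run_loader() {
  # Loader: catch up with the chain in an infinite loop, never serve HTTP.
  while true; do
    python -m Abe.abe --config abe.conf --no-serve
    sleep 30   # pause between catch-up passes
  done
}

run_server() {
  # Web server: answer requests only, never block on loading.
  python -m Abe.abe --config abe.conf --no-load
}
```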
|
|
|
|
youngwebs
Legendary
Offline
Activity: 1080
Merit: 1055
DEV of DeepOnion community pool
|
|
March 27, 2014, 09:04:59 PM |
|
I successfully made a working explorer for Trollcoin. One question remains, however:
1: I have a process for loading the blockchain
2: I have a process for serving the html pages
Process 1 is now handled by a cron job; process 2 only runs as long as I keep an SSH session with the server open.
Anyone with some tips on how to daemonize the Abe webserver process? I tried to find some information with Google, but this seems to be a hard part!
|
|
|
|
John Tobey (OP)
|
|
March 28, 2014, 01:13:46 AM |
|
I successfully made a working explorer for Trollcoin. One question remains, however:
1: I have a process for loading the blockchain
2: I have a process for serving the html pages
Process 1 is now handled by a cron job; process 2 only runs as long as I keep an SSH session with the server open.
Anyone with some tips on how to daemonize the Abe webserver process? I tried to find some information with Google, but this seems to be a hard part!
Search for "upstart" or "daemontools", or you could follow Abe's FastCGI instructions and use a regular web server. Edit: For keeping an SSH tunnel open, I used to use daemontools, but I think upstart is more usable and standard nowadays, at least on Linux.
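For the upstart route, a job file along these lines would keep the web process alive across logouts and respawn it on crashes. The paths, job name, and exec line are assumptions for illustration, not from this thread; check your distribution's upstart documentation:

```
# /etc/init/abe.conf -- hypothetical upstart job for the Abe web process
description "Abe block explorer web server"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec python -m Abe.abe --config /etc/abe/abe.conf --no-load
```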
|
|
|
|
John Tobey (OP)
|
|
March 28, 2014, 03:46:58 AM |
|
New Abe feature: Standard Bitcoin multisig and pay-to-script-hash (P2SH) support is in the master branch, thanks to Jouke's generous sponsorship. This old post describes what it means. The upgrade could take a few minutes to over an hour on a fully loaded Bitcoin database as Abe scans for output scripts not yet assigned an address. Always back up your important data prior to upgrading. Master also has the beginning of a test suite covering SQLite, MySQL, and PostgreSQL, which you can run by installing pytest and running py.test in the bitcoin-abe directory. Testing with MySQL and PostgreSQL requires those databases' respective instance-creation tools. Specify ABE_TEST=quick or ABE_TEST_DB=sqlite in the process environment to test only with a (much faster) SQLite in-memory database. The tests cover block, tx, and address pages, prior to HTML rendering.
|
|
|
|
Nite69
|
|
March 28, 2014, 06:17:08 AM Last edit: March 28, 2014, 06:28:37 AM by Nite69 |
|
I have a question about mempool transactions regarding performance: I'm running http://blockexplorer.auroracoin.eu and, because I have allocated quite a machine to the task, everything was snappy and fine this morning. However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", and the nginx I have in front reporting gateway timeouts. I'm hypothesizing the db queries are the bottleneck. I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away. There are loads of mempool transactions in Auroracoin because we're being pool-hopping-attacked to the point where there hasn't been a block for 6 hours or so. Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes. What fixed it was to re-initialize BOTH the blockchain of the AuroraCoind AND the db (just the db didn't help). Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance? I'm a bit confused... does anyone have an idea what could've been causing this? It could be transactions being left open or suboptimal SQL. If you have a collection of Python stack traces or database process lists from during the timeouts, they might point to the offender(s).
This is the error:

Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe
(I think nginx (load-balancing here) or the client times out because it's taking too long.) In AuroraCoind's debug.log I see this:

22478 ThreadRPCServer method=getrawtransaction
22479 ThreadRPCServer method=getrawtransaction
22480 ThreadRPCServer method=getrawtransaction
[... identical getrawtransaction lines continue through 22504 ...]
about 500 per second. psql> select * from pg_stat_activity; reports only one IDLE connection.
This is what I get after a block with a lot of transactions:

/usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
   486    seconds = int(seconds)
   487    satoshis = int(satoshis)
=> 488    ss = int(ss)
   489    total_ss = int(total_ss)
   490
ss = None, builtin int = <type 'int'>
|
Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
|
|
|
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
|
|
March 28, 2014, 07:00:46 AM |
|
Thanks John,
Perhaps the HTTP request is triggering a catch-up which takes too long. Have you tried separating the loader from the server? One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]). Then the web requests will not wait for data to load.
What happens if there is a catch-up triggered by request A, then request B comes in? That stack-trace happens quite often in a row, not just once. I'm trying your suggestion now, sounds promising to me.
|
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
|
|
|
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
|
|
March 28, 2014, 07:12:08 AM |
|
Anyone with some tips on how to daemonize the Abe webserver process??
You could just use a tool called "screen" (GNU screen):
- #> screen
- #> python abe
- close the ssh session; abe will keep running in the detached screen
- log back in and use "screen -x" to reconnect to the detached screen
|
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
|
|
|
Nite69
|
|
March 28, 2014, 08:12:36 AM |
|
Anyone with some tips on how to daemonize the Abe webserver process??
You could just use a tool called "screen" (GNU screen):
- #> screen
- #> python abe
- close the ssh session; abe will keep running in the detached screen
- log back in and use "screen -x" to reconnect to the detached screen
Or ctrl-d instead of closing the ssh session
|
Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
|
|
|
youngwebs
Legendary
Offline
Activity: 1080
Merit: 1055
DEV of DeepOnion community pool
|
|
March 28, 2014, 08:26:57 AM |
|
I successfully made a working explorer for Trollcoin. One question remains, however:
1: I have a process for loading the blockchain
2: I have a process for serving the html pages
Process 1 is now handled by a cron job; process 2 only runs as long as I keep an SSH session with the server open.
Anyone with some tips on how to daemonize the Abe webserver process? I tried to find some information with Google, but this seems to be a hard part!
Search for "upstart" or "daemontools", or you could follow Abe's FastCGI instructions and use a regular web server. Edit: For keeping an SSH tunnel open, I used to use daemontools, but I think upstart is more usable and standard nowadays, at least on Linux.
Thanks for the fast reply, and also all the other suggestions that came hereafter. I will look into the best option for my Linux server!
|
|
|
|
John Tobey (OP)
|
|
March 28, 2014, 01:38:09 PM |
|
This is what I get after a block with a lot of transactions:

/usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
   486    seconds = int(seconds)
   487    satoshis = int(satoshis)
=> 488    ss = int(ss)
   489    total_ss = int(total_ss)
   490
ss = None, builtin int = <type 'int'>

You can edit Abe/abe.py and replace "int(ss)" with "None if ss is None else int(ss)" and similarly for "int(total_ss)". I hesitate to apply this change in the master branch, since the error indicates a bug elsewhere; "ss" is not supposed to be None there. I suspect database corruption resulting from parallel loading processes. The root cause is Abe's failure to specify "transaction isolation level serializable" when loading. I would like to fix it, but it would take some effort; meanwhile, my advice is to have all but one process use --no-load at any given time.
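The suggested guard can be wrapped in a tiny helper so the conditional isn't repeated for each statistic. This is plain Python for illustration (the helper name is made up; it is not a patch from the Abe repository):

```python
def safe_int(value):
    """int(value), but let a missing (None) statistic pass through."""
    return None if value is None else int(value)

# handle_chain would then read:
#   ss = safe_int(ss)
#   total_ss = safe_int(total_ss)
```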
|
|
|
|
John Tobey (OP)
|
|
March 28, 2014, 01:41:06 PM |
|
What happens if there is a catch-up triggered by request A, then request B comes in?
B tries to "help" A catch up. Which would be okay if the loader code were free of bugs. Probably the easiest fix (when I--or someone--has time) is to enforce single-threaded loading with a database lock.
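Pending a proper database-level lock, one common interim approach is an advisory file lock around the load step, so a second process simply skips loading instead of interfering. A sketch under stated assumptions (the lock-file path and where this would hook into Abe are hypothetical; fcntl.flock is Unix-only):

```python
import fcntl

def try_acquire_load_lock(path):
    """Return an open file handle holding an exclusive advisory lock,
    or None if another process is already loading. Keep the handle
    open for the duration of the load; closing it releases the lock."""
    handle = open(path, "w")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        handle.close()
        return None
    return handle
```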
|
|
|
|
Nite69
|
|
March 28, 2014, 02:44:43 PM |
|
This is what I get after a block with a lot of transactions:

/usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
   486    seconds = int(seconds)
   487    satoshis = int(satoshis)
=> 488    ss = int(ss)
   489    total_ss = int(total_ss)
   490
ss = None, builtin int = <type 'int'>

You can edit Abe/abe.py and replace "int(ss)" with "None if ss is None else int(ss)" and similarly for "int(total_ss)". I hesitate to apply this change in the master branch, since the error indicates a bug elsewhere; "ss" is not supposed to be None there. I suspect database corruption resulting from parallel loading processes. The root cause is Abe's failure to specify "transaction isolation level serializable" when loading. I would like to fix it, but it would take some effort; meanwhile, my advice is to have all but one process use --no-load at any given time.
I guess this is a result of the same problem mole sees. I have solved this by dropping the database and re-building it. I guess it would be enough to delete the latest block from the database, but I have not had time to investigate this further. The Auroracoin blockchain is still small enough.
|
Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
|
|
|
John Tobey (OP)
|
|
March 28, 2014, 05:51:10 PM |
|
I guess this is a result of the same problem mole sees. I have solved this by dropping the database and re-building it. I guess it would be enough to delete the latest block from the database, but I have not had time to investigate this further. The Auroracoin blockchain is still small enough.
Yes, unfortunately I think molecular is on the big chain. I started a module (Abe.admin) to delete problem data but didn't get to the "delete from block X onwards" function.
|
|
|
|
|