Bitcoin Forum
Author Topic: [ANNOUNCE] Abe 0.7: Open Source Block Explorer Knockoff  (Read 220734 times)
ShadesOfMarble
Donator
Hero Member
*
Offline Offline

Activity: 543
Merit: 500



View Profile
March 25, 2014, 08:23:23 PM
 #721

What is, in your experience, the fastest db to use with Abe?

Review of the Spondoolies-Tech SP10 „Dawson“ Bitcoin miner (1.4 TH/s)

[22:35] <Vinnie_win> Did anyone get paid yet? | [22:36] <Isokivi> pirate did!
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 25, 2014, 10:33:43 PM
 #722

What is, in your experience, the fastest db to use with Abe?

SQLite with connect-args=":memory:" or (according to K1773R) something entirely in RAM (tmpfs).

Honestly, Abe could be much better optimized for both speed and space.  Data could be further denormalized.  Unindexed data such as scripts, and even the non-initial bytes of hashes, could be read from the blockfile or RPC.  If you want speed, I suggest you look at (the now open-source) BlockExplorer and other such projects for comparison.
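For illustration, a minimal abe.conf along those lines (option spelling as in Abe's README; the tmpfs path is an assumption, and remember that an in-memory database vanishes when the process exits):
Code:
# Illustrative config for an in-memory SQLite database (fast but volatile).
dbtype = sqlite3
connect-args = ":memory:"
# For K1773R's approach, point connect-args at a file on a RAM-backed
# filesystem (tmpfs) instead, e.g.:
# connect-args = "/mnt/ramdisk/abe.sqlite"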

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
publicjud
Legendary
*
Offline Offline

Activity: 1120
Merit: 1003


twet.ch/inv/62d7ae96


View Profile
March 26, 2014, 09:29:19 PM
 #723

Anyone know the fix for the decimal point being off by a factor of 100?  Everything is correct except it says 2301 instead of 23.01, for example.

Join Twetch twet.ch/inv/62d7ae96
K1773R
Legendary
*
Offline Offline

Activity: 1792
Merit: 1008


/dev/null


View Profile
March 26, 2014, 10:10:32 PM
 #724

Anyone know the fix for the decimal point being off by a factor of 100?  Everything is correct except it says 2301 instead of 23.01, for example.
see https://bitcointalk.org/index.php?topic=22785.msg4601816#msg4601816

[GPG Public Key]
BTC/DVC/TRC/FRC: 1K1773RbXRZVRQSSXe9N6N2MUFERvrdu6y ANC/XPM AK1773RTmRKtvbKBCrUu95UQg5iegrqyeA NMC: NK1773Rzv8b4ugmCgX789PbjewA9fL9Dy1 LTC: LKi773RBuPepQH8E6Zb1ponoCvgbU7hHmd EMC: EK1773RxUes1HX1YAGMZ1xVYBBRUCqfDoF BQC: bK1773R1APJz4yTgRkmdKQhjhiMyQpJgfN
publicjud
Legendary
*
Offline Offline

Activity: 1120
Merit: 1003


twet.ch/inv/62d7ae96


View Profile
March 26, 2014, 10:59:02 PM
 #725

Anyone know the fix for the decimal point being off by a factor of 100?  Everything is correct except it says 2301 instead of 23.01, for example.
see https://bitcointalk.org/index.php?topic=22785.msg4601816#msg4601816

Thank you, that fixed the problem.

Join Twetch twet.ch/inv/62d7ae96
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 27, 2014, 03:11:16 PM
 #726

Thanks to Sebastian, Abe has a new public demo featuring Bitcoin and Litecoin: http://bcv.coinwallet.pl/

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
molecular
Donator
Legendary
*
Offline Offline

Activity: 2772
Merit: 1019



View Profile
March 27, 2014, 06:26:22 PM
 #727

I have a question about mempool transactions regarding performance:

I'm running http://blockexplorer.auroracoin.eu, and because I have allocated quite a capable machine to the task, everything was snappy and fine this morning.

However, when I checked back from work an hour later, I saw loads of exceptions saying "error: [Errno 32] Broken pipe", and the nginx I have in front was reporting gateway timeouts. I'm hypothesizing that the db queries are the bottleneck.

I tried rebuilding the database (dropping it completely and rebuilding)... that didn't help; it started again right away.

There are loads of mempool transactions in Auroracoin because we're being pool-hopping-attacked to the point where there hasn't been a block for 6 hours or so.

Another instance I run with the same setup, but on quite a weak machine (a VM), was able to cope with the load quite well and didn't suffer broken pipes.

What fixed it was re-initializing BOTH the AuroraCoind blockchain AND the db (re-initializing just the db didn't help).

Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance?

I'm a bit confused... does anyone have an idea what could've been causing this?

It could be transactions being left open, suboptimal SQL, or who knows what.  If you have a collection of Python stack traces or database process lists from during the timeouts, they might point to the offender(s).


This is the error


Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe


(I think nginx (load-balancing here) or the client times out because it's taking too long)

In AuroraCoind's debug.log I see this:


 22478  ThreadRPCServer method=getrawtransaction
 22479  ThreadRPCServer method=getrawtransaction
 22480  ThreadRPCServer method=getrawtransaction
 22481  ThreadRPCServer method=getrawtransaction
 22482  ThreadRPCServer method=getrawtransaction
 22483  ThreadRPCServer method=getrawtransaction
 22484  ThreadRPCServer method=getrawtransaction
 22485  ThreadRPCServer method=getrawtransaction
 22486  ThreadRPCServer method=getrawtransaction
 22487  ThreadRPCServer method=getrawtransaction
 22488  ThreadRPCServer method=getrawtransaction
 22489  ThreadRPCServer method=getrawtransaction
 22490  ThreadRPCServer method=getrawtransaction
 22491  ThreadRPCServer method=getrawtransaction
 22492  ThreadRPCServer method=getrawtransaction
 22493  ThreadRPCServer method=getrawtransaction
 22494  ThreadRPCServer method=getrawtransaction
 22495  ThreadRPCServer method=getrawtransaction
 22496  ThreadRPCServer method=getrawtransaction
 22497  ThreadRPCServer method=getrawtransaction
 22498  ThreadRPCServer method=getrawtransaction
 22499  ThreadRPCServer method=getrawtransaction
 22500  ThreadRPCServer method=getrawtransaction
 22501  ThreadRPCServer method=getrawtransaction
 22502  ThreadRPCServer method=getrawtransaction
 22503  ThreadRPCServer method=getrawtransaction
 22504  ThreadRPCServer method=getrawtransaction


about 500 per second

psql> select * from pg_stat_activity;

reports only one IDLE connection

PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 27, 2014, 06:37:51 PM
 #728

This is the error


Exception happened during processing of request from ('178.63.69.203', 49828)
Traceback (most recent call last):
  File "/usr/lib/python2.7/SocketServer.py", line 295, in _handle_request_noblock
    self.process_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 321, in process_request
    self.finish_request(request, client_address)
  File "/usr/lib/python2.7/SocketServer.py", line 334, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "/usr/lib/python2.7/SocketServer.py", line 651, in __init__
    self.finish()
  File "/usr/lib/python2.7/SocketServer.py", line 704, in finish
    self.wfile.flush()
  File "/usr/lib/python2.7/socket.py", line 303, in flush
    self._sock.sendall(view[write_offset:write_offset+buffer_size])
error: [Errno 32] Broken pipe


(I think nginx (load-balancing here) or the client times out because it's taking too long)

No Abe stack frames here, so I don't have much to go by.

In AuroraCoind's debug.log I see this:


 22478  ThreadRPCServer method=getrawtransaction
 22479  ThreadRPCServer method=getrawtransaction
 22480  ThreadRPCServer method=getrawtransaction
 [... identical lines repeated through 22504 ...]


about 500 per second

psql> select * from pg_stat_activity;

reports only one IDLE connection


Perhaps the HTTP request is triggering a catch-up which takes too long.  Have you tried separating the loader from the server?  One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]).  Then the web requests will not wait for data to load.
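A rough sketch of that split, using the flags named above (the config path and retry interval are assumptions):
Code:
# Loader process: catch up in an endless loop, never serve HTTP.
while true; do
    python -m Abe.abe --config abe.conf --no-serve
    sleep 10
done

# Web process: serve pages only, never load new data.
python -m Abe.abe --config abe.conf --no-load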

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
youngwebs
Legendary
*
Offline Offline

Activity: 1080
Merit: 1055


DEV of DeepOnion community pool


View Profile WWW
March 27, 2014, 09:04:59 PM
 #729

I successfully made a working explorer for Trollcoin. One question remains, however:

1: I have a process for loading the blockchain
2: I have a process for serving the HTML pages

Process 1 is now handled by a cron job.
Process 2 only runs as long as I keep an SSH session with the server open.

Does anyone have tips on how to daemonize the Abe web server process?
I tried to find information with Google, but this seems to be the hard part!

John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 28, 2014, 01:13:46 AM
 #730

I successfully made a working explorer for Trollcoin. One question remains, however:

1: I have a process for loading the blockchain
2: I have a process for serving the HTML pages

Process 1 is now handled by a cron job.
Process 2 only runs as long as I keep an SSH session with the server open.

Does anyone have tips on how to daemonize the Abe web server process?
I tried to find information with Google, but this seems to be the hard part!

Search for "upstart" or "daemontools", or you could follow Abe's FastCGI instructions and use a regular web server.

Edit: For keeping an SSH tunnel open, I used to use daemontools, but I think upstart is more usable and standard nowadays, at least on Linux.
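As an illustration of the upstart route, a bare-bones job file might look roughly like this (the file name, paths, and invocation are assumptions, not Abe documentation):
Code:
# /etc/init/abe-web.conf -- minimal upstart job sketch for the Abe web process
description "Abe block explorer (web only)"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec python -m Abe.abe --config /etc/abe/abe.conf --no-load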

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 28, 2014, 03:46:58 AM
 #731

New Abe feature: Standard Bitcoin multisig and pay-to-script-hash (P2SH) support is in the master branch, thanks to Jouke's generous sponsorship.  This old post describes what it means.  The upgrade could take anywhere from a few minutes to over an hour on a fully loaded Bitcoin database, as Abe scans for output scripts not yet assigned an address.  Always back up your important data prior to upgrading.

Master also has the beginning of a test suite covering SQLite, MySQL, and PostgreSQL, which you can run by installing pytest and running py.test in the bitcoin-abe directory.  Testing with MySQL and PostgreSQL requires those databases' respective instance-creation tools.  Specify ABE_TEST=quick or ABE_TEST_DB=sqlite in the process environment to test only with a (much faster) SQLite in-memory database.  The tests cover block, tx, and address pages, prior to HTML rendering.
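For instance, the invocations look roughly like this (assumes pytest is already installed; the environment variables are the ones named above):
Code:
cd bitcoin-abe
py.test                       # full suite; MySQL/PostgreSQL runs need their instance-creation tools
ABE_TEST=quick py.test        # quick run: in-memory SQLite only
ABE_TEST_DB=sqlite py.test    # equivalent restriction by database name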

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
Nite69
Sr. Member
****
Offline Offline

Activity: 477
Merit: 500


View Profile
March 28, 2014, 06:17:08 AM
Last edit: March 28, 2014, 06:28:37 AM by Nite69
 #732

I have a question about mempool transactions regarding performance:
[...]
Now the question: how are mempool transactions handled, and could the existence of many mempool transactions have a considerable impact on db (or abe.py) performance?

This is what I get after a block with a lot of transactions:
Code:
 /usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
    486             seconds = int(seconds)
    487             satoshis = int(satoshis)
=>  488             ss = int(ss)
    489             total_ss = int(total_ss)
    490
ss = None, builtin int = <type 'int'>

Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx
Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp
Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR
AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
molecular
Donator
Legendary
*
Offline Offline

Activity: 2772
Merit: 1019



View Profile
March 28, 2014, 07:00:46 AM
 #733

Thanks John,

Perhaps the HTTP request is triggering a catch-up which takes too long.  Have you tried separating the loader from the server?  One process runs Abe in an infinite loop passing --no-serve, and the web process uses --no-load (or datadir=[]).  Then the web requests will not wait for data to load.

What happens if there is a catch-up triggered by request A, then request B comes in?

That stack trace appears many times in a row, not just once.

I'm trying your suggestion now; it sounds promising to me.

PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
molecular
Donator
Legendary
*
Offline Offline

Activity: 2772
Merit: 1019



View Profile
March 28, 2014, 07:12:08 AM
 #734

Does anyone have tips on how to daemonize the Abe web server process?

You could just use a tool called "screen" (GNU Screen).

  • #> screen
  • #> python abe
  • Close the SSH session; Abe will keep running in the detached screen.
  • Log back in and use "screen -x" to reconnect to the detached screen.


PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0  3F39 FC49 2362 F9B7 0769
Nite69
Sr. Member
****
Offline Offline

Activity: 477
Merit: 500


View Profile
March 28, 2014, 08:12:36 AM
 #735

Does anyone have tips on how to daemonize the Abe web server process?

You could just use a tool called "screen" (GNU Screen).

  • #> screen
  • #> python abe
  • Close the SSH session; Abe will keep running in the detached screen.
  • Log back in and use "screen -x" to reconnect to the detached screen.



Or press Ctrl-D instead of closing the SSH session.

Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx
Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp
Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR
AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
youngwebs
Legendary
*
Offline Offline

Activity: 1080
Merit: 1055


DEV of DeepOnion community pool


View Profile WWW
March 28, 2014, 08:26:57 AM
 #736

I successfully made a working explorer for Trollcoin. One question remains, however:

1: I have a process for loading the blockchain
2: I have a process for serving the HTML pages

Process 1 is now handled by a cron job.
Process 2 only runs as long as I keep an SSH session with the server open.

Does anyone have tips on how to daemonize the Abe web server process?
I tried to find information with Google, but this seems to be the hard part!

Search for "upstart" or "daemontools", or you could follow Abe's FastCGI instructions and use a regular web server.

Edit: For keeping an SSH tunnel open, I used to use daemontools, but I think upstart is more usable and standard nowadays, at least on Linux.

Thanks for the fast reply, and for all the other suggestions that came after. I will look into the best option for my Linux server!

John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 28, 2014, 01:38:09 PM
 #737

This is what I get after a block with a lot of transactions:
Code:
 /usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
    486             seconds = int(seconds)
    487             satoshis = int(satoshis)
=>  488             ss = int(ss)
    489             total_ss = int(total_ss)
    490
ss = None, builtin int = <type 'int'>

You can edit Abe/abe.py and replace "int(ss)" with "None if ss is None else int(ss)" and similarly for "int(total_ss)".
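In other words, around line 488 of Abe/abe.py (per the traceback above), the guarded version would read roughly:
Code:
            seconds = int(seconds)
            satoshis = int(satoshis)
            ss = None if ss is None else int(ss)                    # guard missing stats
            total_ss = None if total_ss is None else int(total_ss)  # likewise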

I hesitate to apply this change in the master branch, since the error indicates a bug elsewhere.  "ss" is not supposed to be None there.  I suspect database corruption resulting from parallel loading processes.  The root cause is Abe's failure to specify "transaction isolation level serializable" when loading.  I would like to fix it, but it would take some effort, and meanwhile, my advice is to have all but one process use --no-load at any given time.

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 28, 2014, 01:41:06 PM
 #738

What happens if there is a catch-up triggered by request A, then request B comes in?

B tries to "help" A catch up, which would be okay if the loader code were free of bugs.  Probably the easiest fix (when I, or someone, has time) is to enforce single-threaded loading with a database lock.
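A minimal sketch of that idea, assuming PostgreSQL and psycopg2 (this is not Abe's actual loader code; the lock key and run_catch_up() are placeholders):
Code:
import psycopg2

conn = psycopg2.connect("dbname=abe")
cur = conn.cursor()
# Advisory lock: only one process at a time may run catch-up.
cur.execute("SELECT pg_try_advisory_lock(%s)", (1773,))
(got_lock,) = cur.fetchone()
if got_lock:
    try:
        run_catch_up()   # placeholder for the code that loads new blocks
    finally:
        cur.execute("SELECT pg_advisory_unlock(%s)", (1773,))
else:
    pass  # another process is already loading; answer the request without catching up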

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?
Nite69
Sr. Member
****
Offline Offline

Activity: 477
Merit: 500


View Profile
March 28, 2014, 02:44:43 PM
 #739

This is what I get after a block with a lot of transactions:
Code:
 /usr/local/lib/python2.7/dist-packages/Abe/abe.py in handle_chain(abe=<__main__.Abe instance>, page={'body': ['<p>Search by address, block .........}, 'title': u'AuroraCoin'})
    486             seconds = int(seconds)
    487             satoshis = int(satoshis)
=>  488             ss = int(ss)
    489             total_ss = int(total_ss)
    490
ss = None, builtin int = <type 'int'>

You can edit Abe/abe.py and replace "int(ss)" with "None if ss is None else int(ss)" and similarly for "int(total_ss)".

I hesitate to apply this change in the master branch, since the error indicates a bug elsewhere.  "ss" is not supposed to be None there.  I suspect database corruption resulting from parallel loading processes.  The root cause is Abe's failure to specify "transaction isolation level serializable" when loading.  I would like to fix it, but it would take some effort, and meanwhile, my advice is to have all but one process use --no-load at any given time.


I guess this is a result of the same problem mole sees. I solved it by dropping the database and re-building it. I guess it would be enough to delete just the latest block from the database, but I have not had time to investigate this further. The Auroracoin blockchain is still small enough.

Sync: ShiSKnx4W6zrp69YEFQyWk5TkpnfKLA8wx
Bitcoin: 17gNvfoD2FDqTfESUxNEmTukGbGVAiJhXp
Litecoin: LhbDew4s9wbV8xeNkrdFcLK5u78APSGLrR
AuroraCoin: AXVoGgYtSVkPv96JLL7CiwcyVvPxXHXRK9
John Tobey (OP)
Hero Member
*****
Offline Offline

Activity: 481
Merit: 529



View Profile WWW
March 28, 2014, 05:51:10 PM
 #740

I guess this is a result of the same problem mole sees. I solved it by dropping the database and re-building it. I guess it would be enough to delete just the latest block from the database, but I have not had time to investigate this further. The Auroracoin blockchain is still small enough.

Yes, unfortunately I think molecular is on the big chain.  I started a module (Abe.admin) to delete problem data but didn't get to the "delete from block X onwards" function.

Can a change to the best-chain criteria protect against 51% to 90+% attacks without a hard fork?