XIU
|
|
July 05, 2011, 09:59:46 PM |
|
People actually send 15000 BTC without a transaction fee? What if the transaction isn't included?
|
|
|
|
John Tobey (OP)
|
|
July 06, 2011, 02:57:52 AM |
|
People actually send 15000 BTC without a transaction fee? What if the transaction isn't included?
They could try again with a fee. Either the same coin (from a wallet backup or custom software) or the change. If the second transaction depends on the first but has a big fee, a miner will gain by including the first so they can include the second.
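The miner incentive described here can be sketched as a simple combined fee-rate check (all names, sizes, and the threshold below are illustrative assumptions, not Bitcoin's actual mining policy):

```python
def worth_including_pair(parent_fee, child_fee, parent_size, child_size,
                         min_fee_per_byte):
    """Return True if mining the fee-less parent plus its high-fee
    child pays at least the miner's minimum fee rate.  The child
    spends the parent's output, so the miner must include both
    transactions to collect the child's fee."""
    combined_fee = parent_fee + child_fee
    combined_size = parent_size + child_size
    return combined_fee / float(combined_size) >= min_fee_per_byte
```

So a fee-free send could still get confirmed by re-spending its change with a generous fee attached.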
|
|
|
|
John Tobey (OP)
|
|
July 06, 2011, 03:00:54 AM |
|
I wonder what the problem was. It seems fixed now. But I can almost guarantee Abe will pick up bugs on its way to being as fast as BlockExplorer.
|
|
|
|
|
John Tobey (OP)
|
|
July 15, 2011, 06:14:52 AM |
|
Changes since July 4:
* I've created the Version 0.4 branch, where I intend to backport fixes.
* The chain summary page (the one listing several blocks in the same chain) loads much faster than before at high counts per page.
* Address search accepts an initial substring, still without storing addresses in the database.
* FastCGI support has matured. See README-FASTCGI.txt for setup.
* Abe supports Weeds currency natively.
* The datadir configuration directive can add a new currency without changes to Python code.
* auto-agpl provides a link to download the source directory: a license compliance aid for those not wishing to use a Github fork.
* /chain/Bitcoin/q/getblockcount: first of (I hope) many BBE-compatible APIs.
* Several small fixes and speedups.
|
|
|
|
theymos
Administrator
Legendary
Offline
Activity: 5348
Merit: 13336
|
|
July 15, 2011, 09:33:03 AM |
|
This is a bug unique to the BBE mirror: pages sometimes appear before all of the necessary data is fully committed to the database. It fixes itself in ~30 seconds.
|
1NXYoJ5xU91Jp83XfVMHwwTUyZFK64BoAD
|
|
|
John Tobey (OP)
|
|
July 15, 2011, 04:21:02 PM |
|
This is a bug unique to the BBE mirror: pages sometimes appear before all of the necessary data is fully committed to the database. It fixes itself in ~30 seconds.
Ah, well, Abe tends to return "500 Server Error" rather than an incomplete page. This, too, fixes itself in a bit if you reload. Not sure which failure mode I prefer.
|
|
|
|
dserrano5
Legendary
Offline
Activity: 1974
Merit: 1029
|
|
July 15, 2011, 08:46:12 PM |
|
Ah, well, Abe tends to return "500 Server Error" rather than an incomplete page. This, too, fixes itself in a bit if you reload. Not sure which failure mode I prefer.
500 + handler explaining the most probable cause?
|
|
|
|
John Tobey (OP)
|
|
July 16, 2011, 01:44:10 PM |
|
Ah, well, Abe tends to return "500 Server Error" rather than an incomplete page. This, too, fixes itself in a bit if you reload. Not sure which failure mode I prefer.
500 + handler explaining the most probable cause?
Getting better. Throw in an automatic retry or two?
|
|
|
|
Sedo
Newbie
Offline
Activity: 23
Merit: 0
|
|
August 01, 2011, 02:05:03 PM |
|
I am getting this error after having installed PostgreSQL 8.4, python-crypto, python-psycopg2 and setting up the DB:

Traceback (most recent call last):
  File "abe.py", line 1485, in <module>
    sys.exit(main(sys.argv[1:]))
  File "abe.py", line 1479, in main
    store = make_store(args)
  File "abe.py", line 87, in make_store
    store = DataStore.new(args)
  File "/home/max/workspace/abe-jtobey-bitcoin-9d671ec/DataStore.py", line 1662, in new
    return DataStore(args)
  File "/home/max/workspace/abe-jtobey-bitcoin-9d671ec/DataStore.py", line 102, in __init__
    store.initialize()
  File "/home/max/workspace/abe-jtobey-bitcoin-9d671ec/DataStore.py", line 538, in initialize
    store.configure()
  File "/home/max/workspace/abe-jtobey-bitcoin-9d671ec/DataStore.py", line 801, in configure
    "No known sequence type works")
Exception: No known sequence type works

Any hints?
|
|
|
|
John Tobey (OP)
|
|
August 04, 2011, 03:37:51 AM Last edit: August 04, 2011, 04:58:16 AM by John Tobey |
|
Hi, thanks for the report. Update: Fixed in latest.

I've reproduced the error using the latest code. I don't have a good fix yet, but adding "store.commit()" after line 886 in DataStore.py gets me past this.

diff --git a/DataStore.py b/DataStore.py
index 2bb250e..12eb096 100644
--- a/DataStore.py
+++ b/DataStore.py
@@ -884,6 +884,7 @@ store._ddl['txout_approx'],
             "CREATE TABLE abe_test_1 ("
             " abe_test_1_id NUMERIC(12) PRIMARY KEY,"
             " foo VARCHAR(10))")
+        store.commit() # XXX
         id1 = store.new_id('abe_test_1')
         id2 = store.new_id('abe_test_1')
         if int(id1) != int(id2):
|
|
|
|
John Tobey (OP)
|
|
August 09, 2011, 08:48:16 PM |
|
I've added the network hash rate estimates. It's pretty compatible with Block Explorer and adds optional start and stop arguments for fetching less than all history. http://abe.john-edwin-tobey.org/q/nethash
Hoping someone converts this into graphs like these for alt chains...
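The estimate behind a nethash endpoint can be sketched in a few lines: cumulative chain work is the expected number of hashes, so the rate over a range is the work difference divided by the timestamp difference. A minimal sketch; the dict field names are illustrative, not Abe's actual schema.

```python
def estimate_nethash(start_block, end_block):
    """Estimate the network hash rate (hashes/second) between two
    blocks.  Each block is a dict carrying its cumulative chain
    work and Unix timestamp (illustrative field names)."""
    d_work = end_block['chain_work'] - start_block['chain_work']
    d_time = end_block['time'] - start_block['time']
    return d_work / float(d_time)
```

This is why storing cumulative work per block lets the endpoint answer with just two fetches instead of scanning the whole range.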
|
|
|
|
Yanz
|
|
August 12, 2011, 10:09:18 PM |
|
Hi John, I can't get nethash to show. It locks up the server and I see Python take up 99% of the CPU. I thought it was the database being slow since I'm using SQLite. I installed the MySQL handler for Python and now it says:

Traceback (most recent call last):
  File "abe.py", line 1741, in <module>
    sys.exit(main(sys.argv[1:]))
  File "abe.py", line 1735, in main
    store = make_store(args)
  File "abe.py", line 109, in make_store
    store = DataStore.new(args)
  File "/home/henry_root/abe/DataStore.py", line 1766, in new
    return DataStore(args)
  File "/home/henry_root/abe/DataStore.py", line 123, in __init__
    store._init_datadirs()
  File "/home/henry_root/abe/DataStore.py", line 392, in _init_datadirs
    store.binin(addr_vers)))
  File "/home/henry_root/abe/DataStore.py", line 195, in to_hex
    return None if x is None else binascii.hexlify(x)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf3' in position 0: ordinal not in range(128)
I check the database via phpmyadmin and I see that it made all the tables and stuff.
|
With great video cards comes great power consumption.
|
|
|
John Tobey (OP)
|
|
August 13, 2011, 03:07:18 AM |
|
Hi Yanz, this error:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xf3' in position 0: ordinal not in range(128)

is fixed in the latest commit. Thanks! But I don't think it explains the "lockup". Are you sure Abe isn't just loading data? For SQLite, the full block chain can take a week or longer. SQLite is okay for small, experimental block chains, though even there you could wait hours. (Optimization of the initial data load should be a high-priority enhancement. Currently it's slow but simple, since it uses the same mechanism as the run-time new-block import, which cannot be optimized the way I have in mind.) Let me know how it goes.
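The failure mode behind that traceback is that binascii.hexlify only accepts raw bytes, so a text value (here a datadir path containing u'\xf3') must be encoded first. A sketch of the idea behind the fix, written in Python 3 terms; the UTF-8 encoding choice is an assumption, not necessarily what Abe's actual fix does:

```python
import binascii

def to_hex(x):
    """Hex-encode x, accepting text as well as raw bytes.
    Text is encoded to bytes (UTF-8 assumed here) before
    hexlify, avoiding the implicit ASCII encode that raised
    UnicodeEncodeError on non-ASCII paths."""
    if x is None:
        return None
    if isinstance(x, str):
        x = x.encode('utf-8')
    return binascii.hexlify(x).decode('ascii')
```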
|
|
|
|
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
|
|
August 13, 2011, 11:03:53 AM |
|
First of all, John, thanks a lot for Abe, it's awesome as far as I can tell. I'm currently importing blocks into a MySQL db (at block 16000 after about 1.5 hours); it'll take ages (as in: days), but that's OK for me. One problem I found was with the datadir table:

"""CREATE TABLE datadir (
    dirname VARCHAR(500) PRIMARY KEY,
    blkfile_number NUMERIC(4),
    blkfile_offset NUMERIC(20),
    chain_id NUMERIC(10) NULL
)""",

MySQL didn't like the VARCHAR(500) primary key. Says it's too long, something about 768 (?) bytes. Reducing it to 128 helped. Maybe you should consider using an INT PK here and an index on dirname if you need it? Something within me tells me that having a VARCHAR as primary key is somehow bad. Can't substantiate that, but in my own projects I always use an INT as primary key (I always name it "id", too, but that's another matter). You wouldn't want a VARCHAR(500) as a foreign key, would you? That's just wasteful. This is, of course, just a suggestion; I know Abe is currently not optimized at all and a change like this will probably be a bitch to upgrade. Other than that, keep going! I sent my small donation
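The suggested change can be sketched as DDL: a synthetic integer key with a secondary index on dirname. The datadir_id column name and index name are assumptions, not Abe's actual schema; sqlite3 stands in for MySQL here just to show the shape.

```python
import sqlite3

# Sketch: integer primary key plus a secondary index on dirname.
# (On MySQL/InnoDB, even a secondary index on a long VARCHAR can
# hit the key-length limit, so a prefix index may still be needed.)
DDL = [
    """CREATE TABLE datadir (
        datadir_id     NUMERIC(10) PRIMARY KEY,
        dirname        VARCHAR(500) NOT NULL,
        blkfile_number NUMERIC(4),
        blkfile_offset NUMERIC(20),
        chain_id       NUMERIC(10) NULL
    )""",
    "CREATE INDEX x_datadir_dirname ON datadir (dirname)",
]

conn = sqlite3.connect(":memory:")
for stmt in DDL:
    conn.execute(stmt)
```

With this layout, foreign keys reference the small datadir_id instead of repeating a 500-byte string in every child row.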
|
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
|
|
|
John Tobey (OP)
|
|
August 14, 2011, 02:46:41 AM |
|
First of all, John, thanks a lot for Abe, it's awesome as far as I can tell.
Thanks!
I'm currently importing blocks into a MySQL db (at block 16000 after about 1.5 hours); it'll take ages (as in: days), but that's OK for me.
Thanks for the report. I'm considering a totally new initial import design with table indexes replaced by Python variables.
One problem I found was with the datadir table: [...] Maybe you should consider using an INT PK here and an index on dirname if you need it?
Good idea. Done. The upgrade is not so bad, since Abe has machinery for it. This is actually the seventeenth schema change handled by --upgrade.
Other than that, keep going! I sent my small donation
Thanks, I hope it works for you!
|
|
|
|
|
John Tobey (OP)
|
|
August 14, 2011, 03:34:04 PM |
|
Ooh! Lots of yumminess coming down the pike. I look forward to using your library to help display transactions before they get into a block. It'd be nice to harmonize our database schemata somewhat. While I wrap my head around libbitcoin's span_left/span_right reorg strategy, I suggest storing total_difficulty in blocks (where it is immutable) rather than chains. Abe relies on this (block_chain_work) to estimate network hash rate over a range of blocks by fetching just the start and end blocks. Thanks!
|
|
|
|
genjix
Legendary
Offline
Activity: 1232
Merit: 1076
|
|
August 14, 2011, 03:54:40 PM |
|
Ooh! Lots of yumminess coming down the pike. I look forward to using your library to help display transactions before they get into a block. It'd be nice to harmonize our database schemata somewhat. While I wrap my head around libbitcoin's span_left/span_right reorg strategy, I suggest storing total_difficulty in blocks (where it is immutable) rather than chains. Abe relies on this (block_chain_work) to estimate network hash rate over a range of blocks by fetching just the start and end blocks. Thanks!
Are you on IRC? You should hit me up on #bitcoinconsultancy. You can quickly query the total difficulty for a chain using a single SQL command:

SELECT SUM(to_difficulty(bits_head, bits_body))
  FROM blocks
 WHERE span_left = X AND span_right = X;

This will give you the total difficulty in chain X (you have to have to_difficulty() defined as bits_body * 2^[8*(bits_head - 3)]).
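The to_difficulty() definition quoted above translates directly into Python. Only the genesis-bits example is an addition: Bitcoin's well-known genesis nBits value 0x1d00ffff splits into head 0x1d and body 0x00ffff.

```python
def to_difficulty(bits_head, bits_body):
    # Expand the compact target exactly as defined in the post:
    # bits_body * 2^(8 * (bits_head - 3)).
    return bits_body * 2 ** (8 * (bits_head - 3))

# Example: the genesis block's compact bits 0x1d00ffff.
genesis_target = to_difficulty(0x1d, 0x00ffff)
```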
|
|
|
|
molecular
Donator
Legendary
Offline
Activity: 2772
Merit: 1019
|
|
August 14, 2011, 06:11:47 PM |
|
Thanks for the report. I'm considering a totally new initial import design with table indexes replaced by Python variables.
Import status: now at block 62,000 (after 1.5 days). Note that I have a pretty slow machine that only uses 30W :-), not exactly made for db loads. While waiting for the import, I reverse-engineered the schema using mysql-workbench. Maybe this helps some people. I can also provide .pdf or .mwb files if someone wants them. It seems some relations are "missing" (like txout, pubkey,...?). I didn't want to put them in the diagram, because I wanted it to exactly reflect the db schema.
|
PGP key molecular F9B70769 fingerprint 9CDD C0D3 20F8 279F 6BE0 3F39 FC49 2362 F9B7 0769
|
|
|
|