Nearing completeness ... as of rev 1383, generation and sharing of shares between nodes works. TODO:
- expiring of old shares - currently they build up in the client's memory
- various limits - maximum block size, nonce size, merkle branch length
- more review of timestamping, chain selection
Nice! But some questions: how will rewards and share-tracking finally work? Will there be a score of the best N shares? Will the reward go only to shares in the solved block? I think N=600 is too low. A single share is worth 0.083 BTC, which is a pretty high reward for a single match; so large variance is expected for those whose processing power is below 100 MH/s. Ideally, N=5000 would make 1 share = 0.01 BTC, which is low enough to yield smooth income. Also, if a list of the best shares is kept by the nodes, it doesn't matter whether the shares belong to the round in which the block was solved or not. Can I get a list of hostnames (or IP addresses if they don't have a hostname) of people interested in supporting this by running a node?
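As a rough check of the reward numbers above (assuming the 50 BTC block reward of the time is split evenly over the last N shares; the function name is illustrative):

```python
# Back-of-the-envelope check: splitting the 50 BTC block reward
# evenly over a sliding window of the best N shares.
BLOCK_REWARD = 50.0  # BTC per solved block (at the time of this thread)

def btc_per_share(n_shares):
    """Value of a single share if the reward is split over n_shares."""
    return BLOCK_REWARD / n_shares

print(round(btc_per_share(600), 3))   # 0.083 BTC -- lumpy income for small miners
print(btc_per_share(5000))            # 0.01 BTC -- much smoother income
```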
I can run a node. I'll send the IP in a PM.
|
|
|
New version 0.11 posted.
- Allow the RNG to be seeded from a file, suggested by Shevek
- Tweak the synchronization on the pattern list
Thanks for the seed option! I've tested the code. A "break;" is missing after "seedfile = optarg;" in the option-handling switch. With that added, the program works perfectly!
|
|
|
Until it's added: check what Joric made: https://github.com/joric/pywallet It makes exporting/importing keys easy without recompiling Bitcoin. It writes directly to the wallet using a Python script! Still new but pretty cool and easy! Nice tool! But it only works on freshly created wallet.dat files. If you try to dump from, or add a private key to, a long-used wallet, the program yields an error message.
|
|
|
New version 0.10 is up.
This version is approx. 6X (!!) faster at prefix matching, thanks to an OpenSSL optimization for quickly computing batches of modular inverses. This optimization also makes the cost of regular expressions much more acute. The search rate for matching a single regular expression only improved by about 3X, and overall is approx. 1/3 the speed of a prefix match.
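The batching trick alluded to here is most likely Montgomery's simultaneous-inversion method: n field inversions are replaced by one inversion plus roughly 3n multiplications. A minimal sketch in Python integers (the function name is mine; vanitygen does this through OpenSSL):

```python
# Montgomery's trick: invert many numbers mod p with a single modular
# inversion plus a few multiplications. Illustrative sketch only.
def batch_inverse(values, p):
    """Return [v^-1 mod p for v in values] using one modular inversion."""
    n = len(values)
    # prefix[i] = values[0] * ... * values[i-1] mod p
    prefix = [1] * (n + 1)
    for i, v in enumerate(values):
        prefix[i + 1] = prefix[i] * v % p
    # The single expensive inversion, of the total product
    # (Fermat's little theorem, valid since p is prime).
    inv = pow(prefix[n], p - 2, p)
    result = [0] * n
    # Walk backwards, peeling off one factor at a time.
    for i in range(n - 1, -1, -1):
        result[i] = inv * prefix[i] % p
        inv = inv * values[i] % p
    return result

p = 2**256 - 2**32 - 977  # the secp256k1 field prime
vals = [3, 7, 12345, 2**200 + 1]
invs = batch_inverse(vals, p)
assert all(v * iv % p == 1 for v, iv in zip(vals, invs))
```

This is why regular expressions became relatively more expensive: the batched inversion speeds up the point arithmetic shared by all patterns, so the per-candidate matching cost dominates.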
Congratz! But... any news about entropy import?
|
|
|
Does somebody see a difference between spam and the content of this forum?
|
|
|
I'm back and I've resumed full time work on this. Watch SVN if you're interested... Is the last revision usable? I've tried it but it crashes:
Fatal error: Traceback (most recent call last):
  File "/home/shevek/bitcoin/p2pool/p2pool/main.py", line 188, in main
    tracker = p2pool.Tracker()
TypeError: __init__() takes exactly 2 arguments (1 given)
|
|
|
From your code, I think you pick a random number and an EC-product in each iteration.
I sure hope not! Where are you seeing this in the code? Any specific function/line? Uhmm.... I think I saw it in the first versions, in the first lines of the loop, when you posted the program between [code] tags.
Going to GitHub to re-read the code.
|
|
|
So, the biggest improvement is that the heavy EC product is computed only twice!!! Each new test address comes from a single EC-add operation, which is relatively cheap in compute resources.
Indeed, performing one EC_POINT_add() instead of an EC_POINT_mul() per iteration saves a lot of time, and improves the search rate by about a factor of four. It would be a lot more than four if there were a faster way to convert the public key to hashable form. Anyway, this is exactly how vanitygen works now. Really!? From your code, I thought you picked a random number and computed an EC product in each iteration. Network random-number seeding is an extreme security feature. I'm not against adding it, but it wouldn't be trivial to implement, and I question whether any actual user will care enough to turn it on. So prove me wrong.
You mean: if a guy at random.org knows what I'm picking and why, then he can clone my random data and reproduce the process to obtain my private key. Yes, you are right. This is why I propose TWO different network sources, one of them fetched over a secure connection. So, if someone sniffs the line, she won't be able to make sense of the data. Moreover, local entropy is added in my proposal. Anyway, you can simply implement an option for adding an external entropy source. Say, "-E file", where "file" is some file the user provides. The program then reads up to, say, 64 bytes from the file, concatenates that data with your local random data and hashes it all together. If the option is not used, only local entropy is gathered and used. The user is responsible for what she provides as "file". Shouldn't that be s = SHA256(E1 ^ E2 ^ E3) mod p, where ^ is bitwise xor? I think Shevek was using the cryptographer's |, which means concatenation. Not the C/C++ |. Sure!
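Concatenation (the cryptographer's |) is indeed the safer reading: with xor, a malicious source that can predict another could cancel it out bit-for-bit. A sketch of the mixing step under that interpretation (the helper name is mine; /dev/urandom stands in for the network sources):

```python
import hashlib
import os

# Order of the secp256k1 group: the "p" in s = SHA256(E1 | E2 | E3) mod p.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def mix_entropy(*sources):
    """Hash the concatenation of all entropy sources into a scalar mod N.
    With xor, a malicious source could cancel another; concatenation can't."""
    h = hashlib.sha256()
    for chunk in sources:
        h.update(chunk)
    return int.from_bytes(h.digest(), 'big') % N

# E1/E2 would come from random.org / HotBits; urandom stands in here.
e1, e2, e3 = os.urandom(32), os.urandom(32), os.urandom(32)
s = mix_entropy(e1, e2, e3)
assert 0 <= s < N
```

Note that as long as any one of the three sources is truly random and secret, the output scalar is unpredictable, which is the point of mixing multiple sources.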
|
|
|
samr7, please read the following. I have an honest proposal for improving your code with more security and... speed! First, I suppose you are familiar with the OpenSSL API. I'm not (otherwise I'd rewrite the code myself), but after some revision of your program I've found points of improvement. On the other hand, elliptic-curve mathematics is familiar to me ;-) So, this is my proposal.
1) Get a good 256-bit random number. It will act as a seed. You can obtain it from three sources.
E1 = 32 bytes from random.org. Here is sample code that dumps the result to a file; I suppose you can adapt it:
wget "http://www.random.org/cgi-bin/randbyte?nbytes=32&format=file" -O - > E1
E2 = 32 random bytes from HotBits. Sample code:
wget "https://www.fourmilab.ch/cgi-bin/Hotbits?nbytes=32&fmt=%20bin" -O E2
E3 = 32 random bytes from local sources. For example, /dev/urandom, but you can use other, lower-quality sources.
Finally, obtain the seed number "s":
s = SHA256(E1 | E2 | E3) mod p
where "p" is the prime order of the curve (I suppose you can obtain it, and perform this operation, through the OpenSSL API).
2) Get the first public address. The public key is calculated with an EC product:
PubKey = s*Q
where "Q" is the fixed point of the curve. Now the address is:
PubAdd = base58(RIPEMD160(SHA256(PubKey))+checksum)
or similar, isn't it?
3) Start the loop.
Loop start:
If pattern matches "PubAdd", then go to end
Else do:
  PubKey <- PubKey + Q (EC-add operation)
  s <- (s + 1) mod p
  PubAdd = base58(RIPEMD160(SHA256(PubKey))+checksum)
Endif
Loop end
P' = s*Q (EC product)
if (P' == PubKey)
  PrivKey = base58(s+checksum)
  print PubAdd, PubKey, PrivKey
else
  something went bad
endif
So, the biggest improvement is that the heavy EC product is computed only twice!!! Each new test address comes from a single EC-add operation, which is relatively cheap in compute resources. But you must be sure that the 256-bit initial seed is really random. What's your opinion?
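The core of the proposal, paying for a full scalar multiplication only at the start and at the end while stepping through candidate keys with cheap point additions, can be sketched in pure Python over secp256k1. The affine arithmetic and helper names below are mine, for illustration only; real code would use OpenSSL's EC_POINT_add/EC_POINT_mul, and the toy "pattern" stands in for address matching:

```python
# Pure-Python secp256k1 affine arithmetic, illustration only.
P = 2**256 - 2**32 - 977  # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    """Affine point addition (no point-at-infinity handling needed here)."""
    (x1, y1), (x2, y2) = a, b
    if a == b:  # doubling
        lam = (3 * x1 * x1) * pow(2 * y1, P - 2, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication: the expensive EC product."""
    acc = None
    while k:
        if k & 1:
            acc = pt if acc is None else ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

def search(s, matches, limit=1000):
    """Step 3 of the proposal: one EC product up front, then only EC adds."""
    pub = ec_mul(s, G)                  # first (expensive) EC product
    for _ in range(limit):
        if matches(pub):
            assert ec_mul(s, G) == pub  # second EC product: sanity check
            return s, pub
        pub = ec_add(pub, G)            # cheap incremental step
        s = (s + 1) % N
    return None

# Toy pattern: find a key whose public point has an even x-coordinate.
res = search(123456789, lambda pt: pt[0] % 2 == 0)
assert res is not None and ec_mul(res[0], G) == res[1]
```

The invariant is that `pub == s*G` at every step, so the final verification catches any arithmetic slip; hashing `pub` to an address is the part that still dominates once the EC work is this cheap.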
|
|
|
That's not entirely true: as addresses are only 160 bits, it suffices to try 2^160 private keys to have a reasonable chance of finding one that matches a given address. Thus, a private key only needs 160 bits of randomness.
The 160-bit reduction of public keys into addresses is a way to keep chains shorter while still having good collision resistance. But underneath, there is a true 256-bit public key. Thus, the effort needed to break a private key (that is: the number of EC-product tries needed to find a private key that matches the targeted public key) is ~2^256, because the EC multiplier is a 256-bit number (a 78-decimal-digit integer, if you want). If you give up 96 bits of good entropy, you give an attacker a chance to recover your private key after 2^160 tries. That is still huge but... why give up the full strength of the elliptic curve?
|
|
|
Btw, using just time() was quite secure anyway. Assuming vanitygen runs at least several minutes, the number of keys to brute-force is >2^64:
- 15-bit PID
To be used once (for one key). If used twice or more, the full chain of keys can be compromised.
- 21-bit timeframe (1 month)
Again, to be used once. Consecutive keys will have a time variation of perhaps 8~10 bits.
- 7-8 bits of CPU speed, plus a few bits of timing uncertainty in each EC_KEY_generate_key()
To be used for one key. The entropy between keys drops drastically.
- 2^20+ keys to generate
I don't understand this claim: each key needs 256 bits to be built, and those 256 bits should have high entropy (ideally, 256 bits of entropy). Picking 256 really random bits is a hard task. Guys: speed and good entropy cannot be pushed together. Use these vanity addresses only if you like strong emotions.
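For what it's worth, summing the bits claimed above does land at about 2^64 for the first key of a run (illustrative arithmetic only; the per-source figures are the poster's estimates, not measurements):

```python
# The poster's own entropy estimates for the old time()+PID seeding,
# summed to get the brute-force work for the *first* generated key.
first_key_bits = {
    'pid': 15,             # 15-bit process ID
    'timeframe': 21,       # ~1 month of seconds: log2(30*86400) ~ 21.3
    'cpu_speed': 8,        # 7-8 bits of CPU-speed / timing uncertainty
    'keys_generated': 20,  # 2^20+ keys produced during the run
}
total = sum(first_key_bits.values())
print(total)  # 64 -- hence ">2^64" for an attacker, but far below 256
# Shevek's objection: later keys in the same run share the PID and most
# of the timestamp, so only a few bits of jitter separate consecutive keys.
```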
|
|
|
It is on the way, in pre-alpha state. Its name: p2pool
|
|
|
What entropy sources does it use?
It uses the Linux time() (measured in seconds) and the process PID when EC_KEY_generate_key() is invoked. This method sucks! Attacking a weak entropy source is the favourite side channel of a half-skilled hacker. Use at least /dev/random, and consider picking random bits from HotBits and random.org.
|
|
|
Did anyone try this? I want to know whether placing wallet.dat in a TrueCrypt container can work. I symlink wallet.dat from an encrypted partition (encfs, FUSE, on Debian). It works.
|
|
|
What entropy sources does it use?
|
|
|
Working on it - I've pretty much completely formulated the 'plan', and I'm now beginning updating the code.
I've been on vacation since the day after the initial release, so I haven't had much time to code (though I've had lots of time to think!).
Nice news! Recently I checked out the code from SVN and some errors appear when the miner is on. I'll wait until you say it is ready for re-testing.
|
|
|
What miner are you using? It seems to be submitting shares with a fixed difficulty of 1 rather than paying attention to the 'target' in the getwork message, in which case frequent messages like this would be normal.
cpuminer. I'm using a toy miner on p2pool, only for testing purposes. I'm working on a successor to p2pool that is completely decentralized. I'd recommend not trying to use p2pool as it is now; technically it's fine - the network is up and I'm mining on it - but it probably won't generate a block before I finish the successor. Also, accordingly, the SVN repo is in flux; if you still want to use p2pool, use the released version.
p2pool is a great idea! I bet it will some day be implemented natively in the Bitcoin client, because it is the p2p way to mine, as the whole Bitcoin project is. Go on! So I'll follow your advice and get fresh versions from SVN for my tests.
|
|
|
There just aren't many users currently (only me at the moment). Can you post any Python error messages you see?
GOT SHARE! afe24d1fd5f252cceb1b781f683db4bdddc45da174456f5f0d2d16ef
Error processing data received from worker:
Traceback (most recent call last):
  File "main.py", line 399, in got_response
    p2p_share(share)
  File "main.py", line 245, in p2p_share
    res = chain.accept(share, net)
  File "main.py", line 55, in accept
    share2 = share.check(self, height, previous_share2, net) # raises exceptions
  File "/home/shevek/bitcoin/p2pool_2/p2pool.py", line 103, in check
    raise ValueError('not enough work!')
ValueError: not enough work!
|
|
|
Is the program actually used on the real net, or only on testnet? I've used it for a while and all shares I found were rejected as stale, with error messages on the terminal related to Python code. There were also messages about connected peers... but only one IP address appeared (I don't remember: 71. ?).
|
|
|
|