The new merger I committed splits the work and performs the merge block by block, so it does not need the full hashtable in RAM. I'm still working on it.
Separate tame and wild arrays are not yet supported, but that can be done. At the moment I'm working on having a usable and stable program.
|
|
|
@MrFreeDragon Thanks for the link, it was funny. @patatasfritas About the merger, yes, I know; I'm currently working on a low-RAM-usage version and on integrating your features. Concerning subgroups, I haven't had a look yet, but splitting the range is not a good way to parallelize: you gain only sqrt(n), where n is the number of subranges.
|
|
|
The plot does not take the DP overhead into consideration. In his first try, zielar had a setup which created a very large overhead (too many kangaroos), an overhead more than 2 times larger than the expected number of operations, so he didn't manage to get anything. I helped him a bit yesterday to get a clean setup. If all goes well, he will reach 50% probability by Saturday evening (UTC+2). Unfortunately, no news from him since yesterday.
|
|
|
@JeanLuc please tell me about shifting. In order to reuse the DPs from previous work, do we still need to shift the range and subtract the range start from the public key?
You can do without shifting; the shift is done only once at the beginning. Of course, if in one case you shift and in the other you don't, the two work files will be incompatible. Anyway, I'm working on the merger to reduce memory consumption. I added the plot of probability of success:
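The shift mentioned above can be sketched mathematically: searching for k in [a, b] such that P = k*G is equivalent to searching for k' = k - a in [0, b-a] for the shifted key P' = P - a*G. The snippet below is a minimal pure-Python illustration of that identity on secp256k1; it is not the program's C++ code, just a toy model of the transformation.

```python
# Toy illustration of the range shift: search k in [a,b] for P
# is equivalent to search k-a in [0,b-a] for P - a*G.
P_FIELD = 2**256 - 2**32 - 977
N_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G = (Gx, Gy)

def ec_add(A, B):
    # Affine point addition on secp256k1 (None is the point at infinity).
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P_FIELD == 0: return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, P_FIELD) % P_FIELD
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, P_FIELD) % P_FIELD
    x = (lam * lam - A[0] - B[0]) % P_FIELD
    return (x, (lam * (A[0] - x) - A[1]) % P_FIELD)

def ec_mul(k, A):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1: R = ec_add(R, A)
        A = ec_add(A, A)
        k >>= 1
    return R

def shift_problem(pubkey, range_start):
    # P' = P - a*G : the key of P' lies in [0, b-a] when the key of P is in [a, b].
    neg_aG = ec_mul((-range_start) % N_ORDER, G)
    return ec_add(pubkey, neg_aG)
```

Because the shift is a fixed point addition done once, all DPs collected on the shifted problem remain compatible as long as every work file used the same shift.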
|
|
|
ok thanks. Note that the counts in the work files are inaccurate due to counting granularity, especially when using GPUs. The right value to take into consideration is the DP count: the total number of iterations is ~DPcount*2^dpbit, provided all the merged work files have the same dpbit.
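The estimate above is simple arithmetic: each DP represents on average 2^dpbit walk steps, so the total operation count can be recovered from the DP count alone. A one-line sketch:

```python
# Estimating total iterations from the DP count, as described above.
# Valid when all merged work files share the same dpbit.
def estimated_ops(dp_count, dpbit):
    return dp_count * (1 << dpbit)

# Example: 1,500,000 DPs collected at dpbit = 22
print(estimated_ops(1_500_000, 22))  # 6291456000000
```

If the merged files had different dpbit values, this no longer holds and each file's contribution would have to be estimated separately.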
|
|
|
In "wsplit" mode, should the time and count be reset after every save? When I merged all the split files, the total time was incorrect.
Yes, you're right, I will fix this. Thanks for the useful work; I will take your mods (do they work on Windows?) and add them to the official release.
|
|
|
Hi,
I did a small program to calculate the chance of finding the key (without taking the DP overhead into consideration) after a certain number of group operations. Each value is the result of 100,000 trials, so we can expect a precision of about 0.3%.
0.5*sqrt(N)  P =  4.606 %
1.0*sqrt(N)  P = 17.162 %
1.5*sqrt(N)  P = 34.138 %
2.0*sqrt(N)  P = 52.314 %
2.5*sqrt(N)  P = 68.049 %
3.0*sqrt(N)  P = 80.350 %
3.5*sqrt(N)  P = 88.846 %
4.0*sqrt(N)  P = 94.103 %
4.5*sqrt(N)  P = 97.164 %
5.0*sqrt(N)  P = 98.746 %
5.5*sqrt(N)  P = 99.424 %
6.0*sqrt(N)  P = 99.752 %
I will increase the accuracy and the number of points and add a nice plot to the README of the Kangaroo program.
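The idea behind such a simulation can be sketched with a much simpler toy model: tame and wild points drawn as independent uniform values in [0, N), success when the two sets collide. This is an assumption for illustration only, not the program's actual simulation (which models real kangaroo walks), so its figures will not exactly match the table above, but it shows the same qualitative shape.

```python
import random

# Toy Monte Carlo of kangaroo-style collision probability.
# Simplified model: ops/2 tame and ops/2 wild points, independent and
# uniform in [0, N); success if any wild point equals a tame point.
def success_prob(c, N=1 << 20, trials=300):
    hits = 0
    ops = int(c * N ** 0.5)          # total group operations = c * sqrt(N)
    for _ in range(trials):
        tame = {random.randrange(N) for _ in range(ops // 2)}
        if any(random.randrange(N) in tame for _ in range(ops // 2)):
            hits += 1
    return hits / trials
```

As in the table, the probability grows steeply between 1*sqrt(N) and 3*sqrt(N) and then saturates.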
|
|
|
This is clearly not optimized for speed. This option of VanitySearch computes 3 addresses when you need only one, and the script launches the program once per line. For Mb/s rates you need a dedicated program.
|
|
|
I really like this race. HardwareCollector or Zielar?
|
|
|
Published a new release with a faster merger. Please test it.
|
|
|
Simply add echo $i; before or after the command:

pons@linpons:~/VanitySearch$ for i in `more tst.txt`; do echo $i; ./VanitySearch -cp $i | grep Addr | grep P2PKH | awk '{ print $3 }'; done
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB
12csaq6uhAtyhzUN8N4akVpat7GP6wB3ea
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EC
1PGVt2mWHgbULrx9pjWDPc2EKp2LWLT4fd
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72ED
1DVjd5oZqCCYywtgL7FLbT3ZX4FFJK5nwc

or:

pons@linpons:~/VanitySearch$ for i in `more tst.txt`; do ./VanitySearch -cp $i | grep Addr | grep P2PKH | awk '{ print $3 }'; echo $i; done
12csaq6uhAtyhzUN8N4akVpat7GP6wB3ea
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB
1PGVt2mWHgbULrx9pjWDPc2EKp2LWLT4fd
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EC
1DVjd5oZqCCYywtgL7FLbT3ZX4FFJK5nwc
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72ED
|
|
|
I tested -wsplit in the 89-bit range. Works perfectly! Thanks JeanLuc. A grabber takes files from the server and sends them to the local PC, where the merger merges these files with the master file. The key was solved during a merge after 32 minutes on a 2080 Ti. I did not wait until the key was solved on the server; it was enough that it was found through the merge.
Many thanks for the test. I'm currently working on the merger, trying to speed it up and decrease memory consumption.
|
|
|
If you want a pubkey whose X coordinate has more than 90 leading zero bits, you will need to perform ~2^90 group operations to find it. The 2 points you mentioned are the privilege of the guy who chose G.
|
|
|
If you use -wsplit in classic mode (no client/server), yes, the kangaroos will be stored in the file, and in case of a crash you can restart from the last saved file, which will contain the kangaroos. If you change the server configuration, yes, you need to restart all clients.
To solve a key, kangaroos are not needed; only the DPs in the HashTable and the work file header are needed. Saving kangaroos (-ws) avoids breaking paths and creating new kangaroos. I forgot to enable -ws for client mode; I'll add this ASAP.
|
|
|
Ok, I could read the whole work file successfully. My mistake was that I read 8 bytes for nbItem and 8 bytes for maxItem, but it should be 4 bytes each. nbItem is the number of entries within the block. Why do we need the maxItem value? What does it represent?
maxItem is used by the HashTable class for allocation. It is the same thing as the capacity of a standard C++ vector. It is not useful information to pass to the HashTable.
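The exchange above pins down the header layout of one hashtable block: a 4-byte nbItem, a 4-byte maxItem, then nbItem entries. The sketch below parses such a block; the 32-byte entry size is an assumption for illustration, not taken from the file format specification.

```python
import struct
import io

# Sketch of reading one hashtable block, per the discussion above:
# nbItem and maxItem are 4-byte little-endian integers (not 8 bytes),
# followed by nbItem entries. ENTRY_SIZE here is a hypothetical value.
ENTRY_SIZE = 32

def read_block(f):
    nb_item, max_item = struct.unpack('<II', f.read(8))
    # maxItem is only the allocation capacity (like a C++ vector's
    # capacity); only nbItem entries actually follow in the file.
    entries = [f.read(ENTRY_SIZE) for _ in range(nb_item)]
    return nb_item, max_item, entries

# Build a synthetic block in memory and parse it back.
buf = io.BytesIO(struct.pack('<II', 2, 4) + b'\x00' * (2 * ENTRY_SIZE))
nb, mx, entries = read_block(buf)
print(nb, mx, len(entries))  # 2 4 2
```

Reading maxItem and discarding it is enough, since it only reflects the in-memory capacity at save time.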
|
|
|
Yes, I think it should be possible to run this script in a Cygwin bash terminal. I don't think the Windows cmd (or a .bat script) can do this, but I'm not an expert in .bat scripting.
|
|
|
But when you merge files offline, you still need a lot of memory (the same amount of RAM as without -wsplit), no? Because the app has to load the huge work file and add the new small work file to it.
That's right, but if that system swaps, it will be less of a disaster than swapping on the server host. zielar said that he has enough RAM for DP22, so it should be OK with DP23.
|
|
|
I think some forks do this. Ask zielar.
|
|
|
Yes try -wsplit. This will also block clients during backup and hashtable cleanup. It should solve the overload.
|
|
|
The -cp option expects a private key in hex format:

pons@linpons:~/VanitySearch$ ./VanitySearch -cp 5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB
PubKey: 03BB113592002132E6EF387C3AEBC04667670D4CD40B2103C7D0EE4969E9FF56E4
Addr (P2PKH): 12csaq6uhAtyhzUN8N4akVpat7GP6wB3ea
Addr (P2SH): 34rLuX6QStr8qJCbmWouBuVQGDrFT4h453
Addr (BECH32): bc1qz8qw3se7c8rr7vs6sgnp29xv2vsutamxyd839p

There is no trivial way to do what you want with VanitySearch, but you can write a small script:

pons@linpons:~/VanitySearch$ cat tst.txt
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EB
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72EC
5B3F38AF935A3640D158E871CE6E9666DB862636383386EE510F18CCC3BD72ED

pons@linpons:~/VanitySearch$ for i in `more tst.txt`; do ./VanitySearch -cp $i | grep Addr | grep P2PKH | awk '{ print $3 }'; done
12csaq6uhAtyhzUN8N4akVpat7GP6wB3ea
1PGVt2mWHgbULrx9pjWDPc2EKp2LWLT4fd
1DVjd5oZqCCYywtgL7FLbT3ZX4FFJK5nwc
|
|
|