GoVanza
Newbie
Offline
Activity: 149
Merit: 0
|
|
June 20, 2020, 05:22:08 PM |
|
Are you planning to make the OpenGL version?
|
|
|
|
Konstanting2
Newbie
Offline
Activity: 27
Merit: 0
|
|
June 21, 2020, 07:46:39 AM |
|
Hello all! My questions still have no answer — please help if you can. I tried to compile the file myself, but it does not start. What did I do wrong?
To output information to a separate file:
Kangaroo.exe -t 0 -d 14 -gpu -gpuId 0 -w 65save.txt -wi 30 65.txt -o 64save.txt
Saving the kangaroos in a work file also did not start:
Kangaroo.exe -t 0 -d 14 -gpu -gpuId 0 -w 65save.txt -wi 30 65.txt -ws:64save.txt
Kangaroo v1.9
Kangaroo [-v] [-t nbThread] [-d dpBit] [-gpu] [-check] [-gpuId gpuId1[,gpuId2,...]] [-g g1x,g1y[,g2x,g2y,...]] inFile
 -v: Print version
 -gpu: Enable GPU calculation
 -gpuId gpuId1,gpuId2,...: List of GPU(s) to use, default is 0
 -g g1x,g1y,g2x,g2y,...: Specify GPU(s) kernel gridsize, default is 2*(MP),2*(Core/MP)
 -d: Specify number of leading zeros for the DP method (default is auto)
 -t nbThread: Specify number of threads
 -w workfile: Specify file to save work into (current processed key only)
 -i workfile: Specify file to load work from (current processed key only)
 -wi workInterval: Periodic interval (in seconds) for saving work
 -ws: Save kangaroos in the work file
 -wsplit: Split work file of server and reset hashtable
 -wm file1 file2 destfile: Merge work files
 -wmdir dir destfile: Merge a directory of work files
 -wt timeout: Save work timeout in millisec (default is 3000ms)
 -winfo file1: Work file info
 -m maxStep: Number of operations before giving up the search (maxStep * expected operations)
 -s: Start in server mode
 -c server_ip: Start in client mode and connect to server server_ip
 -sp port: Server port, default is 17403
 -nt timeout: Network timeout in millisec (default is 3000ms)
 -o fileName: Output result to fileName
 -l: List CUDA-enabled devices
 -check: Check GPU kernel vs CPU
 inFile: Input configuration file
Please give examples of usage if possible. HELP! I asked for help a few days ago; after all, it's not difficult for you. Thank you very much in advance!
|
|
|
|
radd66
Newbie
Offline
Activity: 5
Merit: 1
|
|
June 21, 2020, 09:08:05 AM Last edit: June 21, 2020, 09:52:07 AM by radd66 |
|
@Konstanting2
First try this:
Kangaroo.exe -l
If it says: GPU #0
Then try this command:
Kangaroo.exe -t 0 -d 14 -gpu -ws -w 65save.txt -wi 30 -o 64save.txt 65.txt
If it says: GPU #1
Then:
Kangaroo.exe -t 0 -d 14 -gpu -gpuId 1 -ws -w 65save.txt -wi 30 -o 64save.txt 65.txt
This thread is for VanitySearch; you may get more help if you post in the Jean Luc Kangaroo thread: https://bitcointalk.org/index.php?topic=5244940.0
|
|
|
|
Konstanting2
Newbie
Offline
Activity: 27
Merit: 0
|
|
June 21, 2020, 11:20:06 AM |
|
If anyone has put together a *.bat* file, since the program above does not start for me, please help me correct and fix the ERROR.
|
|
|
|
Konstanting2
Newbie
Offline
Activity: 27
Merit: 0
|
|
June 21, 2020, 01:49:34 PM |
|
Priv: 0x1A838B13505B26867 — how do I convert this into an understandable form?
Private Key WIF (51 characters base58, starts with a '5'):
5HscGG8iqA7YnZPN9ZEPdJSHVsb5zuUVST9XFB4M1tQUWUbFsh3
Private Key WIF Compressed (52 characters base58, starts with a 'K' or 'L'):
KwUNeMo5dZDuJ5wiLmG6sDxbpSHT4acLXLQcVZPmLutcR3MHSHN3
Private Key Hexadecimal Format (64 characters [0-9A-F]):
078B13055AB016486AF996FCD19DBD8AD528BE1D4F014845BC1D1E06E7BE38A8
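For reference, a raw hex private key can be converted to WIF by hand: prefix the 32-byte key with 0x80, append 0x01 for the compressed form, append a double-SHA256 checksum, then Base58-encode. A minimal Python sketch of that procedure (standalone; the helper names are mine, not from any tool in this thread):

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Plain Base58 encoding with the Bitcoin alphabet."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, rem = divmod(n, 58)
        out = B58_ALPHABET[rem] + out
    # each leading zero byte encodes as '1'
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def hex_to_wif(priv_hex: str, compressed: bool = False) -> str:
    """Convert a raw hex private key to mainnet WIF (version byte 0x80)."""
    payload = b"\x80" + bytes.fromhex(priv_hex.zfill(64))
    if compressed:
        payload += b"\x01"  # marks the key as paired with a compressed pubkey
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    return b58encode(payload + checksum)

# The well-known test key 0x01:
print(hex_to_wif("01"))        # uncompressed WIF, starts with '5'
print(hex_to_wif("01", True))  # compressed WIF, starts with 'K' or 'L'
```

The 51/52-character lengths and the '5'/'K'/'L' leading characters quoted above fall out of this encoding automatically.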
|
|
|
|
Jean_Luc (OP)
|
|
June 22, 2020, 09:09:21 AM |
|
If you can compile the last release, you can do:
pons@linpons:~/VanitySearch$ ./VanitySearch -cp 1A838B13505B26867
PrivAddr: p2pkh:KwDiBf89QgGbjEhKnhXJuH7LrciVrZi3qZM21gaY8WN2CdwnTG57
PubKey: 0230210C23B1A047BC9BDBB13448E67DEDDC108946DE6DE639BCC75D47C0216B1B
Addr (P2PKH): 18ZMbwUFLMHoZBbfpCjUJQTCMCbktshgpe
Addr (P2SH): 32xNiwhBsssQikVNaRPzxpp4UZKc7F789h
Addr (BECH32): bc1q2tnk8f7acx4yl2q327xyj8qmcl74wqfhz3w3zh
|
|
|
|
|
student4ever
Newbie
Offline
Activity: 12
Merit: 0
|
|
June 22, 2020, 03:55:59 PM Last edit: June 22, 2020, 11:03:30 PM by student4ever |
|
Hi,
why does the software slow down when I am looking for more than 50 prefixes? It loses speed dramatically. With my rig I get 4 Bk/s, and the speed drops as I increase the number of prefixes. Can you please fix it, so that it doesn't matter whether you are looking for 50 prefixes or 5,000,000? Otherwise oclvanitygen would be better for a large number of prefixes.
|
|
|
|
bangbumbang
Jr. Member
Offline
Activity: 41
Merit: 1
|
|
June 23, 2020, 11:33:07 AM Last edit: June 23, 2020, 11:44:05 AM by bangbumbang |
|
thx 4 .18 !
though I'm having the same problem I had with CUDA 10.2 when self-compiled, but this time with your released 1.18 exe (CUDA 11 is installed and working fine)
-check is all fine, prefix search is all fine (though not tested over 50 prefixes), and searching very few addresses runs a while, but even below 100 addresses:
VanitySearch v1.18
Search: 73 addresses (Lookup size 37,[1,7]) [Compressed or Uncompressed]
Start Tue Jun 23 13:24:08 2020
Base Key: Randomly changed every 44400 Mkeys
Number of CPU thread: 0
GPU: GPU #0 GeForce RTX 2080 Ti (68x64 cores) Grid(544x512)
[852.29 Mkey/s][GPU 852.29 Mkey/s][Total 2^31.67][Prob 0.0%][50% in 3.76903e+31y][Found 0]
GPUEngine: Launch: an illegal memory access was encountered
is there maybe some CUDA environment setting I'm missing or haven't set?
thx up front...
|
|
|
|
shlomogold
Jr. Member
Offline
Activity: 75
Merit: 2
|
|
June 23, 2020, 12:34:24 PM |
|
Hi,
Quote from: student4ever: why does the software slow down when I am looking for more than 50 prefixes? It loses speed dramatically. With my rig I get 4 Bk/s, and the speed drops as I increase the number of prefixes. [...]
4 Bk/s as in billions? That's quite impressive. What does your rig consist of?
|
|
|
|
student4ever
Newbie
Offline
Activity: 12
Merit: 0
|
|
June 23, 2020, 12:53:50 PM Last edit: June 23, 2020, 01:10:36 PM by student4ever |
|
Quote from: shlomogold: 4 Bk/s as in billions? That's quite impressive. What does your rig consist of?
It is a rig of 8 GTX 1060 6GB, and yes, B stands for billion. I have 2 of that type, and right now I am building one with 2080 Supers. Since we are now in the billions, we should also use Bk/s, because 4000 Mk/s is shit; I'd rather write 4 Bk/s. With my new rig I expect about 10 Bk/s.
However, I see a significant speed loss when searching for a large number of prefixes. With about 10k prefixes I am down to 700 Mk/s, and I just parsed some data and want to search for 10M prefixes; I don't know where the speed will end up with that many. In oclvanitygen there was no impact on speed, no matter whether you were looking for 10 or 10M prefixes, but VanitySearch is still 5-6 times faster even with that many prefixes. I was thinking of building an FPGA, but with this much speed I don't need one; I'll probably just put more rigs on this task.
|
|
|
|
Jean_Luc (OP)
|
|
June 23, 2020, 01:08:28 PM |
|
Quote from: bangbumbang: is there maybe some CUDA environment setting I'm missing or haven't set?
Do you have the same issue with 1.17 and CUDA 10.0?
Quote from: student4ever: However, I see a significant speed loss when searching for a large number of prefixes.
When searching, are you using only compressed, both, or only uncompressed?
|
|
|
|
bangbumbang
Jr. Member
Offline
Activity: 41
Merit: 1
|
|
June 23, 2020, 02:28:22 PM |
|
Do you have the same issue with the 1.17 and CUDA 10.0 ?
self-compiled 1.17 with CUDA 10.2 -> NOT working
your exe 1.17 with CUDA 10.2 -> working
your exe 1.17 with CUDA 11 -> NOT working
your exe 1.18 with CUDA 10.2 -> NOT working, but no surprise
your exe 1.18 with CUDA 11 -> NOT working, bit of a surprise
self-compiled 1.18 with CUDA 11 -> not tested as of now
I was a little frustrated because I couldn't figure out why my compiles didn't work on 10.2 with 1.17 (compilation went OK, the exe just didn't work, same error as above), also because a few weeks ago someone posted here that they didn't have any problems compiling 1.17 to a running state after a small issue. And now with 1.18 and CUDA 11 I'm totally confused as to why.
PS: 1.17 with CUDA 10.0 self-compiled on Linux was working fine; if I want to use it, though, I have to run it on a Windows system.
|
|
|
|
COBRAS
Member
Offline
Activity: 1008
Merit: 23
|
|
June 23, 2020, 03:57:53 PM |
|
Does VanitySearch work with ANY addresses or not?
Is there a method to convert a found prefix to a hex private key?
Br.
|
|
|
|
|
Jean_Luc (OP)
|
|
June 23, 2020, 04:05:07 PM |
|
VanitySearch can search for full addresses, but it is unlikely that it finds something. I will try tomorrow with a set of addresses and see if I manage to reproduce the issue.
|
|
|
|
COBRAS
Member
Offline
Activity: 1008
Merit: 23
|
|
June 23, 2020, 06:05:50 PM Last edit: June 24, 2020, 12:35:53 AM by COBRAS |
|
Quote from: Jean_Luc: VanitySearch can search for full addresses, but it is unlikely that it finds something. [...]
Maybe this will be helpful for your development: https://github.com/ryancdotorg/brainflayer — Brainflayer was a popular product some years ago. Jean_Luc, I read that Brainflayer on CPU gets 40 billion keys/hour. Is this faster than VanitySearch? Could you look at Brainflayer and, if you are interested, take the function from it and write CUDA code?
Br
|
|
|
|
|
shlomogold
Jr. Member
Offline
Activity: 75
Merit: 2
|
|
June 23, 2020, 08:50:28 PM |
|
Quote from: shlomogold: 4 Bk/s as in billions? That's quite impressive. What does your rig consist of?
Quote from: student4ever: It is a rig of 8 GTX 1060 6GB, and yes, B stands for billion. [...]
I assume you target a very specific set of addresses (as we all do). Not trying to be a spoilsport here, but I did some math, and here is what we have at 10 Bk/s:
100B in 10 s
1T in 100 s
10T in 1,000 s
100T in 10,000 s
1 quadrillion in 100,000 s
10 quadrillion in 1,000,000 s
100 quadrillion in 10,000,000 s
1 quintillion in 100,000,000 s, i.e. 27,777 h or about 3.2 years
10 quintillion in 1,000,000,000 s, i.e. >30 years
100 quintillion in 10,000,000,000 s, i.e. >300 years
One grain of sand contains roughly 40 quintillion atoms, or 4.33×10^19; the whole universe contains about 10^80 atoms, which is almost equal to the number of all bitcoin addresses. So in 30 years you're only able to check about a quarter of a grain of sand, when you would need to check the whole universe in order to find those addresses. So basically, me using my laptop, which produces 100 Mk/s, and you using your 14-times-more-powerful equipment, we still have almost the same chance, because the numbers are so huge. Sorry if this is off topic, but I think it gives people a good and clear view of what we are dealing with here.
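The back-of-the-envelope figures above are easy to check; a small Python sketch (the 10 Bkey/s rate is the number quoted in this thread, not a measurement):

```python
# Rough wall-clock estimates for exhaustive key search at a fixed rate.
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_scan(n_keys: float, rate_keys_per_s: float) -> float:
    """Years needed to try n_keys sequentially at the given rate."""
    return n_keys / rate_keys_per_s / SECONDS_PER_YEAR

rate = 10e9                          # 10 Bkey/s, as claimed above
print(years_to_scan(1e18, rate))     # 1 quintillion keys -> ~3.2 years
print(years_to_scan(2**160, rate))   # full hash-160 space -> ~4.6e30 years
```

The second figure is why adding a handful of rigs changes nothing against the full address space; only a drastically smaller target keyspace makes a search feasible.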
|
|
|
|
COBRAS
Member
Offline
Activity: 1008
Merit: 23
|
|
June 23, 2020, 09:59:54 PM |
|
Quote from: shlomogold: 4 Bk/s as in billions? That's quite impressive. What does your rig consist of?
Quote from: student4ever: It is a rig of 8 GTX 1060 6GB, and yes, B stands for billion. [...]
Quote from: shlomogold: I assume you target a very specific set of addresses (as we all do). Not trying to be a spoilsport here, but I did some math [...]
How long is this in Brainflayer hours?
P.S. Does Brainflayer calculate 40 billion keys/h?
|
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
June 24, 2020, 02:51:38 AM |
|
Quote from: shlomogold: I assume you target a very specific set of addresses (as we all do). Not trying to be a spoilsport here, but I did some math [...]
Not to disagree that these are large numbers, but I think the math is a little off...
If the universe contains 10^80 atoms, then there are roughly 6.8×10^31 times more atoms in the universe than there are RIPEMD-160 bitcoin addresses, 2^160. Even if you think there is a unique address for each possible hex private key, 16^64, 10^80 is still larger. What I do agree with is that one CPU has just as good a chance of finding a specific bitcoin address as a single GPU rig does. However, if the GPU count is high enough, I'd put my money on the GPUs; the odds increase as the number of GPUs increases.
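The atoms-to-addresses ratio is quick to verify in Python; exact integer arithmetic avoids floating-point surprises:

```python
atoms_in_universe = 10**80   # common order-of-magnitude estimate
ripemd160_space = 2**160     # number of possible hash-160 values

ratio = atoms_in_universe // ripemd160_space
print(ratio)                 # ~6.84e31: atoms outnumber addresses ~10^31-fold

# 16**64 == 2**256 (the full hex private-key space) is still below 10**80:
print(10**80 > 16**64)       # prints True
```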
|
|
|
|
student4ever
Newbie
Offline
Activity: 12
Merit: 0
|
|
June 24, 2020, 11:40:23 AM Last edit: June 24, 2020, 07:03:49 PM by student4ever |
|
First of all: there may be 2^160 possible addresses, but not every base number leads to an actual address. There are about 2^96 possible private keys so far, which can lead to different addresses depending on whether you hash them compressed or uncompressed. So I don't need to hash 2^160 keys, just 2^96 keys to uncompressed.
With 10 Bk/s and the 10M addresses I am looking for, I have the probability of finding an address by luck once every 300 years. However, I will increase the number of rigs and GPUs involved to reach a speed of 10 Tk/s = 10,000 Bk/s, so then I will find an address by luck in less than a year. Maybe VanitySearch will also increase speed in the future, so that I won't need that much GPU hardware. Or we could make a deep-learning process out of it, where the system learns which binary input change leads to a prefixed output?
Maybe my maths is not correct, but who cares... I do it for fun, and the chance is bigger than 0 and can be increased by speed and the number of prefixes. And the software in this field has increased speed dramatically over the last years: for example, LBC was happy to have 150 Mk/s with 50 people in a pool, and vanitygen was fast if you had 400 kK/s. So yes, it becomes more likely that someone will get lucky. I have 2 rigs at work and 1 coming next week. The more rigs I put to work, the more likely it is that I will get lucky.
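For anyone wanting to redo this estimate, the standard back-of-the-envelope model is expected time ≈ keyspace / (targets × rate). A small Python sketch; the keyspace, target count, and rate below are illustrative inputs taken from the discussion, not an endorsement of the figures above:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def expected_years(keyspace: int, targets: int, rate_keys_per_s: float) -> float:
    """Expected years until a uniform random search hits one of `targets`
    distinct keys inside `keyspace`, scanning at the given rate."""
    return keyspace / targets / rate_keys_per_s / SECONDS_PER_YEAR

# Example: a 2**96 keyspace, 10 million target addresses, 10 Tkey/s
print(expected_years(2**96, 10_000_000, 10e12))
```

Plugging in your own numbers makes it easy to see how each extra rig (rate) or extra prefix (targets) moves the estimate.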
|
|
|
|
|