Ovixx
Newbie
Offline
Activity: 30
Merit: 0
July 10, 2024, 07:23:27 AM |
Because it's always possible (with enough hash rate) to mine a block with your private transaction in it, either yourself or a miner.
Yeah, you just need tons of hash power, ideally some exahashes per second, and a lot of patience.
how much hash can I get with an E5-2699 v4 and 256 GB of RAM?
It's like asking on the forum what the weather is like outside when you could just open the door, step outside, and see for yourself.
thunderbolt1978
Newbie
Offline
Activity: 7
Merit: 0
July 10, 2024, 11:05:27 AM Last edit: July 10, 2024, 11:23:13 AM by thunderbolt1978 |
how much hash can I get with an E5-2699 v4 and 256 GB of RAM?
Who knows??? 🤷 I have a 2697 v4 with 128 GB of RAM; for yours I'd guess maybe double, around 3 Ekeys/s. Here is my output:
- Bloom filter for 34359738368 elements : 117781.20 MB
- Bloom filter for 1073741824 elements : 3680.66 MB
- Bloom filter for 33554432 elements : 115.02 MB
- Allocating 512.00 MB for 33554432 bP Points
- Reading bloom filter from file keyhunt_bsgs_4_34359738368.blm .... Done!
- Reading bloom filter from file keyhunt_bsgs_6_1073741824.blm .... Done!
- Reading bP Table from file keyhunt_bsgs_2_33554432.tbl .... Done!
- Reading bloom filter from file keyhunt_bsgs_7_33554432.blm .... Done!
- Thread 0x3cdf67df043da02a45f3290d7b3eecb9c conds: ~1 Ekeys/s (1945754752221815642 keys/s)
3970X, 128 GB RAM:
[+] Making checkums .. ... done
- Sorting 33554432 elements... Done!
- Writing bloom filter to file keyhunt_bsgs_4_34359738368.blm .... Done!
- Writing bloom filter to file keyhunt_bsgs_6_1073741824.blm .... Done!
- Writing bP Table to file keyhunt_bsgs_2_33554432.tbl .. Done!
- Writing bloom filter to file keyhunt_bsgs_7_33554432.blm .... Done!
- Thread 0x38c93b54973c7fbb68df07b81d88b50ee econds: ~1 Ekeys/s (1529332042058244140 keys/s)
Testing right now with the 2697 v4 (512 GB RAM) and the 3970X (256 GB RAM). Is my calculation correct? A 130-bit key means 2^129 keys = 680,564,733,841,876,926,926,749,214,863,536,422,912 keys. Dividing by 1,529,332,042,058,244,140 keys/s gives about 445,007,830,298,214,485,823 s, i.e. roughly 14,111,105,729,902 years?
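To double-check that arithmetic, here is a quick Python sketch (both inputs are just the numbers from the output above):

```python
# Rough brute-force time estimate for a 130-bit puzzle range.
# Both inputs are taken from the post above.
range_size = 2 ** 129                      # keys in the 130-bit range
speed = 1_529_332_042_058_244_140          # measured keys/s from the keyhunt output

seconds = range_size / speed               # ~4.45e20 seconds
years = seconds / (365 * 24 * 3600)        # ~1.41e13 years

print(f"{seconds:.3e} s  =  {years:.3e} years")
# Roughly 4.450e+20 s = 1.411e+13 years, i.e. about 14 trillion years.
```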
farshadbbb
Newbie
Offline
Activity: 12
Merit: 0
July 13, 2024, 01:25:50 PM |
how much hash can I get with an E5-2699 v4 and 256 GB of RAM?
The thread count and the CPU's single-core performance are what matter.
wilspen
Newbie
Offline
Activity: 23
Merit: 0
July 13, 2024, 08:36:09 PM |
Friend, I wanted to understand a little about the -R mode and the -B random mode. Is the -R mode completely random, or does it generate a random initial value and then continue sequentially? I ask because from time to time it prints several random points (one per thread), but I don't know whether those points are just informational or whether they are starting points.
Another question: what is the difference between -R and the -B random mode?
albert0bsd (OP)
July 14, 2024, 04:28:59 PM |
Friend, I wanted to understand a little about the -R mode and the -B random mode. Is the -R mode completely random, or does it generate a random initial value and then continue sequentially?
It is an initial random value plus N keys scanned sequentially from that point, where the default N is a 32-bit value; it is set with -n (e.g. a 24-bit value). Each thread starts from a different random point.
And another question: what is the difference between -R and the -B random mode?
Both are the same, but the -B options are only for BSGS mode.
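Roughly, each thread does something like this (an illustrative Python sketch only, not keyhunt's actual code; the range bounds are made up):

```python
import secrets

# Illustration of -R behavior: pick a random start, then scan N keys sequentially.
# The range below is made up; N of 2**24 corresponds to a 24-bit -n value.
RANGE_START = 0x200000000000000000
RANGE_END   = 0x3FFFFFFFFFFFFFFFFF
N = 1 << 24

def random_block():
    """Return a block of N sequential keys starting at a random point in the range."""
    start = RANGE_START + secrets.randbelow(RANGE_END - RANGE_START - N)
    return range(start, start + N)

# Each thread repeatedly takes a fresh random block and checks every key in it.
for key in random_block():
    pass  # derive the corresponding public key / address here and compare to the targets
```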
albert0bsd (OP)
July 18, 2024, 06:38:57 PM |
That depends on your physical memory and also on the operating system. By the way, don't use -k above 4096 without setting the N value; any -k above 4096 without the correct N will lead to sub-optimal behavior, just like in your example. This is in the documentation: https://github.com/albertobsd/keyhunt?tab=readme-ov-file#what-values-use-according-to-my-current-ram
2 GB: -k 128
4 GB: -k 256
8 GB: -k 512
16 GB: -k 1024
32 GB: -k 2048
64 GB: -n 0x100000000000 -k 4096
128 GB: -n 0x400000000000 -k 4096
256 GB: -n 0x400000000000 -k 8192
512 GB: -n 0x1000000000000 -k 8192
1 TB: -n 0x1000000000000 -k 16384
2 TB: -n 0x4000000000000 -k 16384
4 TB: -n 0x4000000000000 -k 32768
8 TB: -n 0x10000000000000 -k 32768
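If you want to look those values up programmatically, here is a small helper (a Python sketch; the entries are just the rows above, and sizes between two rows fall back to the nearest smaller row, so hand-tuned values for in-between sizes may differ):

```python
# Recommended keyhunt BSGS flags by available RAM, per the README table above.
RECOMMENDED = [
    (2,    "-k 128"),
    (4,    "-k 256"),
    (8,    "-k 512"),
    (16,   "-k 1024"),
    (32,   "-k 2048"),
    (64,   "-n 0x100000000000 -k 4096"),
    (128,  "-n 0x400000000000 -k 4096"),
    (256,  "-n 0x400000000000 -k 8192"),
    (512,  "-n 0x1000000000000 -k 8192"),
    (1024, "-n 0x1000000000000 -k 16384"),
    (2048, "-n 0x4000000000000 -k 16384"),
    (4096, "-n 0x4000000000000 -k 32768"),
    (8192, "-n 0x10000000000000 -k 32768"),
]

def flags_for_ram(ram_gb: float) -> str:
    """Return the flags of the largest table row that still fits in ram_gb."""
    best = RECOMMENDED[0][1]
    for size_gb, flags in RECOMMENDED:
        if size_gb <= ram_gb:
            best = flags
    return best

print(flags_for_ram(96))   # falls back to the 64 GB row: -n 0x100000000000 -k 4096
```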
benjaniah
Newbie
Offline
Activity: 23
Merit: 2
July 19, 2024, 12:26:15 AM |
That depends on your physical memory and also on the operating system. [...] This is in the documentation: https://github.com/albertobsd/keyhunt?tab=readme-ov-file#what-values-use-according-to-my-current-ram
Could you recommend a -k and -n setting for 96 GB?
albert0bsd (OP)
July 19, 2024, 12:54:12 AM |
Could you recommend a -k and -n setting for 96 GB?
Maybe -n 0x400000000000 -k 3072.
maseratti007
Newbie
Offline
Activity: 2
Merit: 0
July 23, 2024, 05:52:11 AM |
AlbertoBSD, in BSGS mode what is the difference between "Random" and "Dance"? Are both good for long ranges? Thanks for any clarification!
powerusa
Newbie
Offline
Activity: 9
Merit: 0
July 23, 2024, 07:13:19 PM |
AlbertoBSD, in BSGS mode what is the difference between "Random" and "Dance"? Are both good for long ranges? Thanks for any clarification!
Sequential mode: searches for the private key in a sequential manner, starting from a point and incrementing step by step. It is straightforward but can be slow if the target key is far from the starting point.
Backward mode: the search starts from a specified point and moves backward. This can be useful if you have reason to believe the key might be located before a certain point.
Both mode: combines forward and backward searching. The algorithm searches in both directions simultaneously, which can increase the chances of finding the key faster compared to moving in only one direction.
Random mode: instead of following a linear path, the search jumps to random positions. This can help in scenarios where the key is not expected to be near the starting point and could be located anywhere in the search space.
Dance mode: a more advanced and less predictable search pattern, potentially combining aspects of the other modes in a dynamic way. The exact implementation details can vary, but the goal is to maximize coverage of the search space and increase the likelihood of finding the key efficiently.
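If it helps, here is a toy Python sketch of those scan orders over a small range (purely illustrative, not keyhunt's actual implementation):

```python
import random

# Toy illustration of the scan orders described above (not keyhunt's real code).
# "dance" is omitted here because its exact pattern is implementation-specific.
def scan(start, end, mode):
    if mode == "sequential":
        yield from range(start, end)
    elif mode == "backward":
        yield from range(end - 1, start - 1, -1)
    elif mode == "both":                      # interleave a forward and a backward scan
        lo, hi = start, end - 1
        while lo <= hi:
            yield lo
            lo += 1
            if lo <= hi:
                yield hi
                hi -= 1
    elif mode == "random":                    # jump to random positions in the range
        while True:
            yield random.randrange(start, end)

print(list(scan(0, 8, "both")))               # [0, 7, 1, 6, 2, 5, 3, 4]
```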
maseratti007
Newbie
Offline
Activity: 2
Merit: 0
July 23, 2024, 07:33:59 PM |
Thank you very much!!!
Tofee
Jr. Member
Offline
Activity: 45
Merit: 14
August 04, 2024, 03:51:27 PM |
That depends on your physical memory and also on the operating system. [...] This is in the documentation: https://github.com/albertobsd/keyhunt?tab=readme-ov-file#what-values-use-according-to-my-current-ram
Hi all, I am a newbie to keyhunt. I have 32 GB of RAM and an RTX 4070. Running the command ./keyhunt -t 8 -m address -f abc.txt -r (from hexadecimal to hexadecimal) -q -s 10 -k 2048 -R, the speed is 5564110 keys/s. I want to scan backwards because I think the target address is nearest the back end of the range. Please suggest the most efficient command line I should use. Thanks.
Veliquant
Newbie
Offline
Activity: 3
Merit: 0
August 05, 2024, 06:26:05 PM |
Good afternoon Alberto:
I have been studying the BSGS and Pollard's kangaroo methods for a while now. I have made an observation, from trying to mix BSGS with the kangaroo method, that I believe could be helpful.
I have figured out that you can use the concept of distinguished points from the kangaroo method to improve BSGS.
Here is a simple example of what I have observed:
One of the big constraints in BSGS is the memory available to store the database: each X coordinate you calculate (every jump or every step you make) is checked against the database. The other big constraint is that each point has a lookup cost against the database (1 point = 1 search). I understand you use a bloom filter in your program to make the database search much faster and to keep the database as small as possible so it fits in the available RAM.
Here is my idea: what if you store only distinguished points in the database, meaning only X coordinates with N leading zero bits? This has three big benefits: first, you can discard on the fly, at very small cost, all the points that do not have N leading zeroes; second, only the small X coordinates are compared against the database; third, you store fewer bits per X coordinate.
This improves the speed of the algorithm because you make the same number of comparisons, but most of them only discard the larger X's, and then only the small X coordinates that survive are compared against the database.
The other big benefit is that your database is no longer constrained by RAM; instead you use RAM to store only the distinguished points found and, in a later step, compare them against a big database stored on a server.
The drawback is that you spend more point computations building the database, but that is done only once.
For example, a very simple case:
Case 1: You set a database of 1024 X coordinates (baby steps), so you can jump a distance of 2048 (because of symmetry), and each jump has to be compared using the bloom filter. Cost: 1 jump of distance 2048 = 1 database search.
Case 2: You set N to 8 bits, so you store only the X coordinates beginning with 8 leading zero bits. You search for the first 1024 distinguished points, with a mean search size of 256 points per distinguished point. To build the database, you have to compute 1024 * 256 = 262,144 points to get the 1024 distinguished points needed. Now you can make jumps of 262,144 * 2 = 524,288 (2^19) and search for the next distinguished point after landing (one very big jump, then many small steps). You make the first jump of 2^19, then find the next distinguished point (with a step of one), then jump 2^19 again. On each step you discard the big X coordinates and keep only the small X's; these small ones are the only ones you need to compare. So the mean effective jump length is the same (2^19 / 256 = 2048), but you only made 256 comparisons on average against the first byte of the coordinate (a very small lookup cost) and stored temporarily only one small X coordinate that will later be compared against the database.
Cost: To build the database, compute 2^18 points; then 1 jump of average distance 2048 = 1 leading-zeroes comparison + 1/256 of a database search.
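Here is a rough sketch of the filtering step I mean (illustrative Python, not keyhunt's or BitCrack's actual code):

```python
# Distinguished-point filter for the idea above: keep only X coordinates whose
# top N bits are zero, and look up only those in the stored baby-step table.
N_ZERO_BITS = 8
COORD_BITS = 256
THRESHOLD = 1 << (COORD_BITS - N_ZERO_BITS)   # x < THRESHOLD <=> N leading zero bits

def is_distinguished(x: int) -> bool:
    return x < THRESHOLD

def check_point(x: int, baby_steps: dict):
    """baby_steps maps stored distinguished X coordinates to their step index."""
    if not is_distinguished(x):               # ~255 of every 256 points stop here
        return None
    return baby_steps.get(x)                  # only ~1/256 points reach the real lookup
```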
I'm trying this idea using a slightly modified version of BitCrack, but it has been difficult for me because I'm new to CUDA programming.
Does your implementation of the bloom filter take this observation into consideration?
Do you think this is a good idea?
Thanks
albert0bsd (OP)
August 07, 2024, 02:10:18 AM |
For wallet #120, when generating the Bitcoin address from the private keys found by keyhunt, it does not match the Bitcoin address presented for that wallet on the challenge website.
Shouldn't it be the same address?
The #120 keys/addresses in the keyhunt documentation are examples to test the speed. Please read and understand the documentation instead of running the tool without knowing what you are doing.
I have been studying the BSGS and Pollard's kangaroo methods for a while now. I have made an observation, from trying to mix BSGS with the kangaroo method, that I believe could be helpful.
I need time to understand what you are doing, because right now Kangaroo is still a mystery to me, so I can't say anything about what you wrote.
I am a newbie to keyhunt. I have 32 GB of RAM and an RTX 4070. [...] I want to scan backwards because I think the target address is nearest the back end of the range. Please suggest the most efficient command line I should use. Thanks.
Keyhunt is CPU-only, so your GPU model doesn't matter here. For address mode there is no backwards mode; if you believe your target is near the end, then select a sub-range near the desired area instead of the whole range....
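For example, to target only the last part of a range, you can compute the sub-range bounds yourself (an illustrative Python sketch; the bounds below are made up):

```python
# Compute a sub-range covering only the last 10% of a search range
# and print it in the start:end form that keyhunt's -r option expects.
range_start = 0x2000000000000000   # made-up example bounds
range_end   = 0x3FFFFFFFFFFFFFFF

fraction  = 0.10
sub_start = range_end - int((range_end - range_start) * fraction)

print(f"-r {sub_start:X}:{range_end:X}")
```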
Tofee
Jr. Member
Offline
Activity: 45
Merit: 14
August 07, 2024, 02:29:03 PM |
Thanks for the reply.
Tofee
Jr. Member
Offline
Activity: 45
Merit: 14
August 08, 2024, 02:28:30 PM |
Can anyone tell me how to read keyhunt's output and the .dat file (binary format) that builds up inside the keyhunt folder? Thanks.
Tofee
Jr. Member
Offline
Activity: 45
Merit: 14
August 08, 2024, 03:45:35 PM |
You can use bPfile.c to generate your .bin file (this is the baby-step table):
./bPfile 1048576000 Pfile.bin
[+] precalculating 1048576000 bP elements in file Pfile.bin
This process can take some time, please be patient, maybe some hours depending on your speed. Once the file is created, execute:
albertobsd $ ./keyhunt -m bsgs -f 120.txt -r 800000000000000000000000000000:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF -t 4 -k 250 -R -p ./bPfile.bin
[+] Version 0.1.20210306 K*BSGS
[+] Setting mode BSGS
[+] Setting 4 threads
[+] Setting k factor to 250
[+] Setting random mode.
[+] Opening file 120.txt
[+] Added 1 points from file
[+] Setting N up to 17593008128000.
[+] Init bloom filter for 1048576000 elements : 1797.00 MB
[+] Allocating 0.00 MB for aMP Points
[+] Precalculating 16778 aMP points
[+] Allocating 16000.00 MB for bP Points
[+] Reading 1048576000 bP points from file ./bPfile.bin
-k 250 is a new speed factor; 250 uses a bit more than 17 GB of RAM. But the speed will be huge:
Total 155574970875904000 keys in 180 seconds: 864305393755022 keys/s
864 Terakeys/s
Best regards!
How does one calculate the number 1048576000 shown above? Also, I have 32 GB of RAM and I am getting only 16 Mkeys/s, whereas in the post above you mention that with -k 250 and 17 GB of RAM you get 864 Terakeys/s. Can you please explain the Linux commands for ./bPfile and how to calculate that number? Thanks.
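For reference, the Terakeys/s figure in that quoted run is just the total keys divided by the elapsed seconds; a quick check with the numbers from the quote:

```python
# Sanity check of the speed figure in the quoted keyhunt output.
total_keys = 155_574_970_875_904_000   # "Total ... keys" from the quoted run
elapsed_s  = 180

keys_per_s = total_keys // elapsed_s
print(f"{keys_per_s:,} keys/s")                # 864,305,393,755,022 keys/s
print(f"~{keys_per_s / 1e12:.0f} Terakeys/s")  # ~864 Terakeys/s, as quoted
```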