arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 03:30:12 PM |
|
4) my script finds the key every time? So far, yes. Ok, when I ran your original script, it was less than 20% of the time the key was found. Keep working on the database size. I know you will get it down! If you can get it down to 1 bit per key like the OP's, then you're onto something. His DB is unmatched so far.

Less than 20%? Did you try with the last version? Maybe the first version was not correct; I fixed some stuff.

I don't understand why: 1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys. Less size, less search time, 100% of keys found.

To speed up the search, I need to store more information per byte, a higher density of information/work per byte stored. For example, if I generated all 2^44 keys and stored only the keys with the first 20 bits = 0, I would get a database with the same size as a database created with 1 key per million. But the search would be faster, because I would need to check only 1 key against the database instead of 1 million.
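The density argument above can be sketched in miniature. This is a toy, not either poster's actual script: the key space is scaled down from 2^44 to 2^16 and the 20-bit zero prefix down to 8 bits so it runs instantly, and f() is a hypothetical stand-in for the real pubkey derivation.

```python
SPACE_BITS = 16   # toy key space 2^16 (stands in for 2^44)
DP_BITS = 8       # keep values whose top 8 bits are 0 (stands in for 20)

def f(k):
    # Stand-in for the expensive key -> value map (e.g. pubkey derivation).
    # An odd multiplier makes this a bijection mod 2^SPACE_BITS.
    return (k * 2654435761) % (1 << SPACE_BITS)

# Database: only the values whose top DP_BITS bits are zero -- the same
# size as sampling 1 key in every 2^DP_BITS, but denser in work per entry.
db = {f(k): k for k in range(1 << SPACE_BITS)
      if f(k) >> (SPACE_BITS - DP_BITS) == 0}

def search(start):
    # Walk forward from `start` until a distinguished value appears;
    # only that single value is checked against the database.
    for step in range(1 << SPACE_BITS):
        v = f(start + step)
        if v >> (SPACE_BITS - DP_BITS) == 0 and v in db:
            return (db[v] - step) % (1 << SPACE_BITS)
    return None
```

With a stride-sampled database of the same size, every one of the ~2^DP_BITS candidates would need a lookup; here only the distinguished one does.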
|
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 205
Merit: 1
|
|
December 05, 2023, 03:34:21 PM |
|
4) my script finds the key every time? So far, yes. Ok, when I ran your original script, it was less than 20% of the time the key was found. Keep working on the database size. I know you will get it down! If you can get it down to 1 bit per key like the OP's, then you're onto something. His DB is unmatched so far. Less than 20%? Did you try with the last version? Maybe the first version was not correct; I fixed some stuff. I don't understand why: 1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys. Less size, less search time, 100% of keys found.

The first version of the script worked; the second does not find matches. There may be an error in the code.
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 03:39:15 PM |
|
The first version of the script worked, the second does not find matches. There may be an error in the code.
On my PC, 100 keys found out of 100... Did you set the same parameters in both scripts?
|
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 205
Merit: 1
|
|
December 05, 2023, 03:56:48 PM |
|
The first version of the script worked, the second does not find matches. There may be an error in the code.
On my PC, 100 keys found out of 100... Did you set the same parameters in both scripts?

About an hour and a half ago, I copied both scripts, created the database and launched the search script. I didn't change anything in the parameters.
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 04:36:54 PM |
|
The first version of the script worked, the second does not find matches. There may be an error in the code.
On my PC, 100 keys found out of 100... Did you set the same parameters in both scripts? About an hour and a half ago, I copied both scripts, created a database and launched the search script. I didn't change anything in the parameters.

Ah, I uploaded a version of create_database that doesn't match that version of the search_pk script. Now it should work.

To perform multiple searches, on Linux:

time for i in {1..100}; do python3 search_pk_arulbero.py; done
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 04:41:09 PM |
|
1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys.
Less size, less search time, 100% of keys found.
To speed up the search, I need to store more information per byte, a higher density of information/work per byte stored.
For example, if I generated all 2^44 keys and stored only the keys with the first 20 bits = 0, I would get a database with the same size as a database created with 1 key per million.
But the search would be faster, because I would need to check only 1 key against the database instead of 1 million.

I think you are looking at it from a small interval size. I am looking at it from a larger standpoint. For #115, with a DP size of 25, I had 2^33.5 points; a file size of over 350GB. I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference? We are looking at using a smaller DB size in different ways. If you ran a DP of 20 in a 2^44 range, you would wind up with roughly 2^24 keys stored.

As far as finding DPs, I have the fastest script, not on the market lol. A GPU script. I can find all DPs (of any size) in a 2^44 range in a little less than 16 minutes.
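The two storage figures quoted here can be sanity-checked with back-of-the-envelope arithmetic. The 32 bytes per stored DP record is a guess on my part (a work file stores some fraction of an x-coordinate plus a kangaroo distance per point), not a figure from the thread:

```python
# Rough storage comparison: DP work file vs 1-bit-per-key database.
dp_points = 2 ** 33.5              # points stored for #115 at DP 25
dp_bytes = dp_points * 32          # assumed ~32 bytes per DP record
bit_keys = 2 ** 36                 # keys stored at 1 bit each
bit_bytes = bit_keys / 8           # 2^33 bytes = 8 GiB exactly
print(f"{dp_bytes / 2**30:.0f} GiB vs {bit_bytes / 2**30:.0f} GiB")
```

Under that assumption the DP file comes out around 360 GiB, consistent with the "over 350GB" quoted, against exactly 8 GiB for the bit database.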
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 04:53:14 PM |
|
1 bit for 2^32 keys is better than 64 bits for 2^32/2^20 = 2^12 keys.
Less size, less search time, 100% of keys found.
To speed up the search, I need to store more information per byte, a higher density of information/work per byte stored.
For example, if I generated all 2^44 keys and stored only the keys with the first 20 bits = 0, I would get a database with the same size as a database created with 1 key per million.
But the search would be faster, because I would need to check only 1 key against the database instead of 1 million.
I think you are looking at it from a small interval size. I am looking at it from a larger standpoint. For #115, with a DP size of 25, I had 2^33.5 points; a file size of over 350GB. I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference? We are looking at using a smaller DB size in different ways. If you ran a DP of 20 in a 2^44 range, you would wind up with roughly 2^24 keys stored. As far as finding DPs, I have the fastest script, not on the market lol. GPU script. I can find all DPs (of any size) in a 2^44 range in a little less than 16 minutes.

Yes, if you are trying to reduce the size of a DP database, the task is different. You want to know whether a certain DP key is in the DP database as fast as possible, right? Then you need a small size and a fast search algorithm that works with a group of non-consecutive keys.

Your 2^33.5 keys are spread across a 2^115 space, not a 2^34 space. The task is very different. You don't need the private key; you only need to know whether you have already hit a DP in the database.
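For the "fast search over a group of non-consecutive keys" requirement, one standard structure (not something either poster says they use) is a Bloom filter: it never misses a stored key, very rarely reports a key it never saw, and costs a few bits per key. A minimal sketch:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunably rare false
    positives; hits should still be verified against the real file."""

    def __init__(self, n_bits, n_hashes=5):
        self.n_bits = n_bits
        self.n_hashes = n_hashes
        self.bits = bytearray((n_bits + 7) // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(i.to_bytes(1, 'big') + item).digest()
            yield int.from_bytes(h[:8], 'big') % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Storing DP x-coordinates as bytes in such a filter keeps the membership test O(1) regardless of how the points scatter across the 2^115 space, at the cost of occasional false positives that a disk lookup must confirm.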
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 05:00:09 PM |
|
Yes, if you are trying to reduce the size of a DP database, the task is different.
You want to know if a certain DP key is in DP database as fast as possible, right? Then you need a small size and fast search algorithm that works with a group of non consecutive keys.
Your 2^33.5 keys are spread in a 2^115 space, not in a 2^34 space. The task is very different.

Who said my keys, or this script's keys, are limited to a 2^34 space? I can spread them out as little or as much as I want. Example: I can generate 2^34 keys spread out every 1,000,000,000 keys, or I can mimic Kangaroo jumps; spread the keys out every (for #130) 130 / 2 + 1 = 66, so I could spread the keys out every 2^66 keys.

For the DB that has 2^36 keys stored in it, the search script generates 64 subs and checks the entire DB in 1 second. I'm not concerned about speed when a DB has over 68 billion keys in it. It's a lot faster than the merging of files to check for a collision (JLP's Kangaroo version).
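The equal-spacing point is what makes the 1-bit-per-key trick cheap: for keys forming an arithmetic progression, membership needs no stored data at all, and the index doubles as the bit position a 1-bit-per-key database would use. A minimal sketch (hypothetical helper names):

```python
def in_progression(k, start, d, count):
    """True iff k is one of the `count` keys start, start+d, start+2d, ..."""
    if k < start or (k - start) % d != 0:
        return False
    return (k - start) // d < count

def index_of(k, start, d):
    """Bit index that key k would occupy in a 1-bit-per-key database."""
    return (k - start) // d
```

DP keys from a Kangaroo run have no such formula relating key to position, which is exactly the distinction being argued here.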
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 05:04:04 PM |
|
Yes, if you are trying to reduce the size of a DP database, the task is different.
You want to know if a certain DP key is in DP database as fast as possible, right? Then you need a small size and fast search algorithm that works with a group of non consecutive keys.
Your 2^33.5 keys are spread in a 2^115 space, not in a 2^34 space. The task is very different. Who said my keys, or this script's keys, are limited to a 2^34 space? I can spread them out as little or as much as I want. Example: I can generate 2^34 keys spread out every 1,000,000,000 keys, or I can mimic Kangaroo jumps; spread the keys out every (for #130) 130 / 2 + 1 = 66, so I could spread the keys out every 2^66 keys. For the DB that has 2^36 keys stored in it, the search script generates 64 subs and checks the entire DB in 1 second. I'm not concerned about speed when a DB has over 68 billion keys in it. It's a lot faster than the merging of files to check for a collision (JLP's Kangaroo version).

But the distance between 2 keys must be the same, otherwise I don't understand how you can know whether a key is in your database. And the DP keys in a DP database are not equally spaced.
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 05:30:04 PM |
|
But the distance between 2 keys must be the same, otherwise I don't understand how you can know whether a key is in your database. And the DP keys in a DP database are not equally spaced.

I don't understand. The keys in the DB are/can be equally spaced. DP keys aren't equally spaced because points with DPs are not equally spread. Imagine running Kangaroo with DP 0, where space wasn't an issue.
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 05:40:08 PM |
|
But the distance between 2 keys must be the same, otherwise I don't understand how you can know whether a key is in your database. And the DP keys in a DP database are not equally spaced.
I don't understand. The keys in the DB are/can be equally spaced. DP keys aren't equally spaced because points with DPs are not equally spread.

Ok, we agree on these facts. But you said:

For #115, DP size of 25, had 2^33.5 points; a file size of over 350GB.
I have a file with 2^36 keys stored, but at only 8GB. Do you see the difference?

You (not me) are comparing 2 different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced.
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 05:49:00 PM |
|
You (not me) are comparing 2 different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced.

Lol, ok. With Kangaroo, it does not matter if they are or are not equally spaced. However, if you think the keys stored in a Kangaroo DB must NOT be equally spaced, then that is also achievable with this script. You just mimic different starting keys, like Kangaroo does.

To me, I have a DB full of DP 0 bits, wilds. Now I run only tames and check for collisions. In my mind, it does not matter whether the keys in the DB are equally spaced or not, but that is easily achieved by offsetting the target pubkey before running the DB creation.
|
|
|
|
arulbero
Legendary
Offline
Activity: 1940
Merit: 2094
|
|
December 05, 2023, 06:00:54 PM |
|
You (not me) are comparing 2 different types of sets, but you can't apply the idea of this thread to the DP keys, because they are not equally spaced. Lol, ok. With Kangaroo, it does not matter if they are or are not equally spaced. However, if you think the keys stored in a Kangaroo DB must NOT be equally spaced, then that is also achievable with this script. You just mimic different starting keys, like Kangaroo does. To me, I have a DB full of DP 0 bits, wilds. Now I run only tames and check for collisions. In my mind, it does not matter whether the keys in the DB are equally spaced or not, but that is easily achieved by offsetting the target pubkey before running the DB creation.

Then:
you have a DB full of DP 0 bits (no DP), wilds, equally spaced
when you run tames, you want to check for collisions at every step using the idea of this thread?

And if the wild DPs (DP bits > 0) are not equally spaced, how can you perform a check against the wild DB using 1 bit per key?
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 06:09:49 PM |
|
Then:
you have a DB full of DP 0 bits (no DP) wilds equally spaced
when you run tames, you want to check for collisions at every step using the idea of this thread? No, not necessarily. I have other ideas for how to check for collisions that don't require a check at every step; but it would work.
And if the wilds DP (DB bits > 0) are not equally spaced, how can you perform a check against the wild DB using 1 bit per key?
During creation of the DBs: pubkey - create; pubkey - some random number - create; pubkey - some random number again - create; .......... pubkey - some random number again - create.
While each creation is evenly spaced, we have several creations (could be hundreds or thousands; it's unlimited).
I am trying to think of ways to lessen the space trade-off of traditional Kangaroo, that's all. I'm sure you have ideas.
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 06:16:40 PM |
|
@WanderingPhilospher:
I think it is time to publish your modified scripts for creating the database and for seeking in the database.
I think that to optimise it better you need to use the mmap library.
In 2 GB files I get up to 2 seconds for "finding the str".

Mine is only different from the OP's in that I don't use the BitArray; I stick to bytes. Trade-off: my DB is 4 times as large, but the search is faster. mmap increases search time? With a 2GB DB file, I could find a key in less than a second (in the smaller ranges we've been discussing here).

I have a DB that is 488MB and contains 1,000,000,000 keys; that means I can check an entire 2^29.89 range in less than a second. So at 2GB (roughly 4 times that size), a 2^31.89 range checked in less than a second.
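For the mmap suggestion quoted above, the usual pattern is to map the database file and let `mmap.find()` scan it without loading the whole file into RAM. A minimal sketch (the function name and file layout are illustrative, not from either script):

```python
import mmap

def find_offset(path, pattern):
    """Return the byte offset of `pattern` in the file at `path`,
    or -1 if absent. mmap lets the OS page the file in on demand,
    so multi-GB databases don't need to fit in memory."""
    with open(path, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm.find(pattern)
```

The returned offset is what a byte-per-key layout converts directly into a key index; a bit-packed layout would additionally need the eight bit-shifted variants of the pattern.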
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 329
Merit: 90
New ideas will be criticized and then admired.
|
|
December 05, 2023, 07:03:14 PM |
|
There are many options when we work with bits; we could use this with bits:
PUBKEY - RANDOM NUMBER = CREATE 1024 pubs, as in my publication (1 bit per key).
Repeat 1 million times.
We save a database that we will call "A", where all pubs and random numbers from the random subtractions are saved.
We save a database that we will call "B", where only the bits obtained are saved.
We scan with a 64-bit collision margin.
If there is a match, we identify the pub found by counting the bits in the DB, using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).
We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 × 1024-bit database, with the difference that they would not be in sequence and could cover the target range randomly.
Then we look up that match (pub) in database "A" and add its RANDOM NUMBER to obtain the pk of the main target.
|
BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1204
Merit: 237
Shooters Shoot...
|
|
December 05, 2023, 07:35:23 PM |
|
There are many options when we work with bits; we could use this with bits:
PUBKEY - RANDOM NUMBER = CREATE 1024 pubs, as in my publication (1 bit per key).
Repeat 1 million times.
We save a database that we will call "A", where all pubs and random numbers from the random subtractions are saved.
We save a database that we will call "B", where only the bits obtained are saved.
We scan with a 64-bit collision margin.
If there is a match, we identify the pub found by counting the bits in the DB, using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).
We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 × 1024-bit database, with the difference that they would not be in sequence and could cover the target range randomly.
Then we look up that match (pub) in database "A" and add its RANDOM NUMBER to obtain the pk of the main target.

Did you ever tinker or mess with your multi-pub binary and search scripts? That would work as well. Generate pubs and offsets (the random number subtracted from the original pubkey) and save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.
|
|
|
|
whanau
Member
Offline
Activity: 121
Merit: 36
|
|
December 06, 2023, 12:18:46 AM |
|
Ah, I uploaded a version of create_database that doesn't match that version of the search_pk script. Now it should work:

It works for me up to 0x81, and after that it does not seem to find any keys. I tried deleting and recreating the database from 0x80, but still nothing is found.
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 329
Merit: 90
New ideas will be criticized and then admired.
|
|
December 06, 2023, 12:25:09 AM |
|
There are many options when we work with bits; we could use this with bits:
PUBKEY - RANDOM NUMBER = CREATE 1024 pubs, as in my publication (1 bit per key).
Repeat 1 million times.
We save a database that we will call "A", where all pubs and random numbers from the random subtractions are saved.
We save a database that we will call "B", where only the bits obtained are saved.
We scan with a 64-bit collision margin.
If there is a match, we identify the pub found by counting the bits in the DB, using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).
We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 × 1024-bit database, with the difference that they would not be in sequence and could cover the target range randomly.
Then we look up that match (pub) in database "A" and add its RANDOM NUMBER to obtain the pk of the main target.
Did you ever tinker or mess with your multi-pub binary and search scripts? That would work as well. Generate pubs and offsets (the random number subtracted from the original pubkey) and save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.

Added a binary version.
|
BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
COBRAS
Member
Offline
Activity: 1018
Merit: 23
|
|
December 06, 2023, 03:13:49 AM |
|
There are many options when we work with bits; we could use this with bits:
PUBKEY - RANDOM NUMBER = CREATE 1024 pubs, as in my publication (1 bit per key).
Repeat 1 million times.
We save a database that we will call "A", where all pubs and random numbers from the random subtractions are saved.
We save a database that we will call "B", where only the bits obtained are saved.
We scan with a 64-bit collision margin.
If there is a match, we identify the pub found by counting the bits in the DB, using % 1024 to calculate the PK found (because each operation is different and 1024 was the number of sequential keys for each pub).
We would basically have 1 million mini-DBs of 1024 keys stored in one 1,000,000 × 1024-bit database, with the difference that they would not be in sequence and could cover the target range randomly.
Then we look up that match (pub) in database "A" and add its RANDOM NUMBER to obtain the pk of the main target.
Did you ever tinker or mess with your multi-pub binary and search scripts? That would work as well. Generate pubs and offsets (the random number subtracted from the original pubkey) and save them in a file, load the newly generated pubs into another, and run the multi-pub script against the pub-only file.
Added a binary version.

Thank you very much, Mister! ;)
|
|
|
|
|