Author Topic: lightweight database, for brute force using publickeys-32Mk =3.81MB(secp256k1)  (Read 2278 times)
arulbero
Legendary
Activity: 1915
Merit: 2074
December 03, 2023, 04:54:35 PM
#81

Can you give me a concrete example of how or what keys you would store (making the filesize smaller) and how you could use it in a search?

My example to you is:

If I generate 2^25 keys in the database, its size will be 16 MB.


2^25 keys -> 16 MB means:

you store 4 bits for each key?

2^25 * 4 bits = 2^27 bits = 2^24 bytes = 16MB. 

Or you store only some keys?
WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 04:56:00 PM
#82

This script generates a database of 32 million keys (3.9 MB) in 3.5 seconds;

it stores only 1 key out of every 64, and only 8 bytes per key:


Code:
#!/usr/bin/env python3
# 2023/Dec/03, arulbero_pub_key.py
import secp256k1 as ice

#############################################################################
# Set the number of public keys to generate and other parameters
#############################################################################
start_key = 1
num_public_keys = 32000000
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 64
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, num_public_keys+1, rate_of_key_to_store)

interval_to_store = range(start_key, num_public_keys+1, rate_of_key_to_store)

binary_mode = 1

#private_keys = list(interval_to_generate)

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb")  # binary mode

    ###########generation#################
    public_keys = []

    for i in interval_to_generate:              # generate 1 key every 64
        P = ice.scalar_multiplication(i)        # 65-byte uncompressed pubkey
        public_keys.append(P[33-bytes_to_store:33].hex())  # last 8 bytes of x

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = int(public_keys[i], 16)
        f.write(h.to_bytes(bytes_to_store, byteorder='big'))

    f.close()

else:

    f = open("public_keys_database.txt", "w")   # text mode

    ###########generation#################
    public_keys = []

    for i in interval_to_generate:
        P = ice.scalar_multiplication(i)
        public_keys.append(P[33-bytes_to_store:33].hex())

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = public_keys[i]
        f.write(h)
    f.close()

If you want to be able to read the data, set binary_mode = 0 (the keys are then written as hex text).

With 2^32 keys (about 4 billion), it takes about 4 minutes to generate a db of 257 MB (1 key every 128 keys, 64 bits per key).
But how does this help you do a search for a key?
What search would you run with this script?

The OP's script (and my modified script) basically stores "wilds": offset keys of a known pubkey.

And the search script can land in any key space, do 64 key computations, and find the key if the key is in that range.
WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 04:58:09 PM
#83

Can you give me a concrete example of how or what keys you would store (making the filesize smaller) and how you could use it in a search?

My example to you is:

If I generate 2^25 keys in the database, its size will be 16 MB.


2^25 keys -> 16 MB means:

you store 4 bits for each key?

2^25 * 4 bits = 2^27 bits = 2^24 bytes = 16MB. 

Or you store only some keys?
I store every key, 2^25 keys, represented with a “0” or a “1”.
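
For illustration, here is a minimal sketch of such a 1-bit-per-key table. It is not WanderingPhilospher's actual code; it assumes the bit for key i is the parity of the y coordinate of i*G (the convention the OP's search script at the end of this thread relies on), and it uses the same ice library as the other scripts here.

Code:
#!/usr/bin/env python3
# Sketch of a 1-bit-per-key database (assumption: bit i-1 = parity of the
# y coordinate of i*G; not WanderingPhilospher's actual code).
import secp256k1 as ice
from bitstring import BitArray

num_keys = 2**20               # 2^25 in the post; smaller here for a quick test

bits = BitArray(num_keys)      # all-zero bit array, one bit per key
for i in range(1, num_keys + 1):
    P = ice.scalar_multiplication(i)   # 65 bytes: 04 | x (32) | y (32)
    bits.set(P[64] & 1, i - 1)         # low bit of y's last byte = parity

with open("bit_database.bin", "wb") as f:
    bits.tofile(f)             # num_keys/8 bytes on disk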
arulbero
Legendary
Activity: 1915
Merit: 2074
December 03, 2023, 05:12:32 PM
Last edit: December 03, 2023, 05:28:53 PM by arulbero
#84


I store every key, 2^25 keys, represented with a “0” or a “1”.


Ok, then you are using the idea of this thread.




But how does this help you do a search for a key?
What search would you run with this script?

The OP's script (and my modified script) basically stores "wilds": offset keys of a known pubkey.

And the search script can land in any key space, do 64 key computations, and find the key if the key is in that range.

The idea is:

if you know that a public key P is in your db, of course you need only 64 key computations. The same holds for my script.
(I didn't write a 'search' script so far; I only made a script to build the database as small as possible.)

Your idea: store one bit for every key.

My idea: store several bits of some keys.



The general case is:

1) I have a public key P. I don't know the private key, but I know it is in the range [1, 2^64] (for example). This range is too big; I can't build or store such a huge database.

There are 2 main possibilities:

a) I generate a database of 2^32 public keys [from 1*G to 2^32*G] (baby steps) and I store all of them, then

I compute P1 = P - 2^32*G and check whether it is in my db; otherwise I compute P2 = P - 2*(2^32*G), and so on (giant steps).

This is the baby-step giant-step algorithm: about 2^32 computations, but a huge database.

b) I generate a database of 2^32 public keys [from 1*G to 2^32*G], but I store only 1 key out of every 64 (to save space).

Then I compute P1 = P - 2^32*G and check whether it is in my db.
Now I have to perform 64 subtractions from P1: P1 - G, P1 - 2*G, ..., P1 - 64*G, because P1 could be in the interval [1*G -> 2^32*G] but not in my db (I don't store all the keys). These 64 subtractions are the price we pay for a smaller database.

This way I can be sure whether P1 is or is not in the interval [1 -> 2^32].

If neither P1 nor the other 64 keys are in my db, I compute P2 = P - 2*(2^32*G), and so on.
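
As an illustration of option (b), here is a minimal sketch of that search loop (not arulbero's posted code; his actual scripts appear later in the thread). It reuses the table from the earlier script (low 8 bytes of every 64th key) and assumes the ice library's point_subtraction(P, Q) follows the same conventions as the point_loop_subtraction used elsewhere in this thread.

Code:
#!/usr/bin/env python3
# Sketch of search (b): giant steps of 2^32*G, then up to 64 baby
# subtractions per landing point, each probed against the sparse table.
# Assumption: ice.point_subtraction(P, Q) returns a 65-byte pubkey.
import secp256k1 as ice

RANGE_BITS = 32            # the table covers private keys in [1, 2^32]
RATE = 64                  # 1 key stored out of every 64
BYTES_STORED = 8

with open("public_keys_database.bin", "rb") as f:
    db = f.read()
# one-time pass: a set of 8-byte records gives O(1) membership tests
table = {db[i:i + BYTES_STORED] for i in range(0, len(db), BYTES_STORED)}

def low_bytes(P):
    return P[33 - BYTES_STORED:33]          # last 8 bytes of the x coordinate

def search(P, max_giant_steps):
    """P: 65-byte uncompressed pubkey whose private key lies in [1, 2^64]."""
    giant = ice.scalar_multiplication(2**RANGE_BITS)    # 2^32 * G
    G1 = ice.scalar_multiplication(1)
    Pj = P
    for j in range(max_giant_steps):
        Q = Pj
        # baby steps Pj, Pj-G, ..., Pj-63*G: only 1 key in 64 is stored,
        # so a stored key can sit up to 63 subtractions away
        for t in range(RATE):
            if low_bytes(Q) in table:
                print("hit after", j, "giant steps and", t, "subtractions")
                return j, t
            Q = ice.point_subtraction(Q, G1)
        Pj = ice.point_subtraction(Pj, giant)           # next giant step
    return None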
WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 05:22:37 PM
#85


I store every key, 2^25 keys, represented with a “0” or a “1”.


Ok, then you are using the idea of this thread.




But how does this help you do a search for a key?
What search would you run with this script?

The OP's script (and my modified script) basically stores "wilds": offset keys of a known pubkey.

And the search script can land in any key space, do 64 key computations, and find the key if the key is in that range.

The idea is:

if you know that a public key P is in your db, of course you need only 64 key computations. The same holds for my script.
(I didn't write a 'search' script so far; I only made a script to build the database as small as possible.)


The general case is:

1) I have a public key P. I don't know the private key, but I know it is in the range [1, 2^64] (for example). This range is too big; I can't build or store such a huge database.

There are 2 main possibilities:

a) I generate a database of 2^32 public keys [from 1*G to 2^32*G] (baby steps) and I store all of them, then

I compute P1 = P - 2^32*G and check whether it is in my db; otherwise I compute P2 = P - 2*(2^32*G), and so on (giant steps).

This is the baby-step giant-step algorithm: about 2^32 computations, but a huge database.

b) I generate a database of 2^32 public keys [from 1*G to 2^32*G], but I store only 1 key out of every 64 (to save space).

Then I compute P1 = P - 2^32*G and check whether it is in my db.
I have to perform 64 subtractions from P1: P1 - G, P1 - 2*G, ..., P1 - 64*G,

to be sure whether P1 is or is not in the interval [1 -> 2^32].

If neither P1 nor the other 64 keys are in my db, I compute P2 = P - 2*(2^32*G), and so on.
Understood.

I have a database with 63,570,000,000 keys stored in it, with a size of only 7.7 GB. This one was created using the BitArray function, but it has flaws when doing a search, so I am not using it at the moment.

But when I run the search script, it uses the equivalent in RAM: roughly 7.7 GB of RAM during the search.

But I can make jumps of size 63,570,000,000 (almost 2^36), do 64 computations, and know within a second whether the key is in that range or not. So in less than a second it jumps 2^36 keys and checks for a match. This is in Python; it could be much faster in C or C++, and a lot faster with GPU support.
arulbero
Legendary
Activity: 1915
Merit: 2074
December 03, 2023, 05:35:23 PM
#86

Understood.

I have a database with 63,570,000,000 keys stored in it, with a size of only 7.7 GB. This one was created using the BitArray function, but it has flaws when doing a search, so I am not using it at the moment.

But when I run the search script, it uses the equivalent in RAM: roughly 7.7 GB of RAM during the search.

But I can make jumps of size 63,570,000,000 (almost 2^36), do 64 computations, and know within a second whether the key is in that range or not. So in less than a second it jumps 2^36 keys and checks for a match. This is in Python; it could be much faster in C or C++, and a lot faster with GPU support.

Then you can reduce the space and increase the cost of the search computations.

With my script, a db of 2^36 keys takes about 4 GB. In that case I store 1 key every 128. It takes about 1 hour to build.
And 128 computations are very fast for the 'search' part.

And searching 64 bits -> shift by 64 bits -> next 64 bits -> shift by 64 bits should be faster than
64 bits -> shift by 1 bit -> next 64 bits (if I understand your script).
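
A rough illustration of that difference, with a hypothetical 8-byte record as the probe: byte-aligned 64-bit records can be probed with one set lookup each, while a 1-bit-per-key database has to be searched at every bit offset.

Code:
# Aligned records vs bit-level scan (the probe value is hypothetical).
from bitstring import BitArray

record = bytes.fromhex("0123456789abcdef")

with open("public_keys_database.bin", "rb") as f:
    db = f.read()

# aligned 64-bit records: one-time pass, then one set lookup per probe
table = {db[i:i + 8] for i in range(0, len(db), 8)}
aligned_hit = record in table

# 1-bit-per-key database: the pattern may start at any bit offset, so the
# search must test all 8 offsets of every byte (what BitArray.find does)
bit_hit = BitArray(bytes=db).find(BitArray(bytes=record))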
sssergy2705
Copper Member
Newbie
Activity: 188
Merit: 0
December 03, 2023, 05:52:15 PM
#87

This script generates a database of 32 million keys (3.9 MB) in 3.5 seconds;

it stores only 1 key out of every 64, and only 8 bytes per key:


Code:
#!/usr/bin/env python3
# 2023/Dec/03, arulbero_pub_key.py
import secp256k1 as ice

#############################################################################
# Set the number of public keys to generate and other parameters
#############################################################################
start_key = 1
num_public_keys = 32000000
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 64
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, num_public_keys+1, rate_of_key_to_store)

interval_to_store = range(start_key, num_public_keys+1, rate_of_key_to_store)

binary_mode = 1

#private_keys = list(interval_to_generate)

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb")  # binary mode

    ###########generation#################
    public_keys = []

    for i in interval_to_generate:              # generate 1 key every 64
        P = ice.scalar_multiplication(i)        # 65-byte uncompressed pubkey
        public_keys.append(P[33-bytes_to_store:33].hex())  # last 8 bytes of x

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = int(public_keys[i], 16)
        f.write(h.to_bytes(bytes_to_store, byteorder='big'))

    f.close()

else:

    f = open("public_keys_database.txt", "w")   # text mode

    ###########generation#################
    public_keys = []

    for i in interval_to_generate:
        P = ice.scalar_multiplication(i)
        public_keys.append(P[33-bytes_to_store:33].hex())

    ###########writing the db###############
    for i in range(0, len(interval_to_store)):
        h = public_keys[i]
        f.write(h)
    f.close()

If you want to be able to read the data, set binary_mode = 0 (the keys are then written as hex text).

With 2^32 keys (about 4 billion), it takes about 4 minutes to generate a db of 257 MB (1 key every 128 keys, 64 bits per key).

This is quite interesting, but how can we use it, for example, to find puzzle 65?
digaran
Copper Member
Hero Member
Activity: 1330
Merit: 899
December 03, 2023, 05:57:02 PM
#88

@arulbero, I get an "int too big to convert" error; can your script work on mobile?

WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 06:18:14 PM
#89

Understood.

I have a database with 63,570,000,000 keys stored in it, with a size of only 7.7 GB. This one was created using the BitArray function, but it has flaws when doing a search, so I am not using it at the moment.

But when I run the search script, it uses the equivalent in RAM: roughly 7.7 GB of RAM during the search.

But I can make jumps of size 63,570,000,000 (almost 2^36), do 64 computations, and know within a second whether the key is in that range or not. So in less than a second it jumps 2^36 keys and checks for a match. This is in Python; it could be much faster in C or C++, and a lot faster with GPU support.

Then you can reduce the space and increase the cost of the search computations.

With my script, a db of 2^36 keys takes about 4 GB. In that case I store 1 key every 128. It takes about 1 hour to build.
And 128 computations are very fast for the 'search' part.

And searching 64 bits -> shift by 64 bits -> next 64 bits -> shift by 64 bits should be faster than
64 bits -> shift by 1 bit -> next 64 bits (if I understand your script).

What would be the false-positive rate for the way you store keys? Mine is almost 0%. I've run hundreds of tests now with zero false positives.

Also, why can't you store keys as a 0 or a 1, to make your DB even smaller?

My search script is like the OP's original, but I added an incremental mode.

In random mode, it lands on a random point, does 64 computations, goes to the next random point, does 64 computations, etc.

Incremental mode works the same way, except that if you start at 0 and your stride/jump size is 2^20, it lands on point 2^20, does 64 computations, jumps to point 2^20 + 2^20, does 64 computations, etc. (see the sketch below).

The script can also spread the keys out like you are saying: save only 1 key every 128; the keys do not have to be sequential.
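
In outline, the two drivers look like this (a sketch, assuming an OP-style bit database and a hypothetical check_64() helper that performs the 64 computations and the database lookup):

Code:
# Sketch of the random and incremental drivers (check_64 is hypothetical).
import random
import secp256k1 as ice

stride = 2**20
start, end = 0, 2**36

def check_64(point):
    # 64 subtractions + database probe, as in the scripts in this thread
    return False  # placeholder

# incremental: land on stride, 2*stride, 3*stride, ..., 64 comps per landing
i = stride
while i <= end:
    if check_64(ice.scalar_multiplication(i)):
        break
    i += stride

# random: same check at random landing points (bounded number of tries)
for _ in range(10**6):
    i = random.randrange(start + 1, end)
    if check_64(ice.scalar_multiplication(i)):
        break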

I'm curious about your false positives and any search script you would create.
arulbero
Legendary
Activity: 1915
Merit: 2074
December 03, 2023, 06:42:36 PM
#90


What would be the false-positive rate for the way you store keys? Mine is almost 0%. I've run hundreds of tests now with zero false positives.

Also, why can't you store keys as a 0 or a 1, to make your DB even smaller?

My search script is like the OP's original, but I added an incremental mode.

In random mode, it lands on a random point, does 64 computations, goes to the next random point, does 64 computations, etc.

Incremental mode works the same way, except that if you start at 0 and your stride/jump size is 2^20, it lands on point 2^20, does 64 computations, jumps to point 2^20 + 2^20, does 64 computations, etc.

The script can also spread the keys out like you are saying: save only 1 key every 128; the keys do not have to be sequential.


I store 64 bits for each key; you store 64 bits for 64 keys. In my database there are fewer 64-bit strings to compare against, so I think my probability is almost zero too.

For smaller databases (like 2^30 keys) I could use even fewer than 64 bits per key (like 48 bits) without problems. If I implemented the "first x bits = 0" trick, then the probability of collisions (at the same database size) would be even lower (but it would take much longer to build the database).
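
A back-of-the-envelope model supports this: with n stored records of b bits each and m probe strings per lookup, the expected number of accidental matches per lookup is roughly n * m / 2^b (treating the records as uniform random strings).

Code:
# Rough false-positive estimate (uniform model, numbers from this thread).
def expected_false_positives(n_records, bits_per_record, probes):
    return n_records * probes / 2**bits_per_record

# 2^25 stored records of 64 bits, 128 probes per landing point:
print(expected_false_positives(2**25, 64, 128))   # ~2.3e-10
# 48-bit records, a 2^30 range stored 1 in 128 (2^23 records):
print(expected_false_positives(2**23, 48, 128))   # ~3.8e-6, still tiny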


I'm curious about your false positives and any search script you would create.
Me too.
ecdsa123
Full Member
Activity: 211
Merit: 105
December 03, 2023, 06:46:16 PM
#91


What would be the false-positive rate for the way you store keys? Mine is almost 0%. I've run hundreds of tests now with zero false positives.

Also, why can't you store keys as a 0 or a 1, to make your DB even smaller?

My search script is like the OP's original, but I added an incremental mode.

In random mode, it lands on a random point, does 64 computations, goes to the next random point, does 64 computations, etc.

Incremental mode works the same way, except that if you start at 0 and your stride/jump size is 2^20, it lands on point 2^20, does 64 computations, jumps to point 2^20 + 2^20, does 64 computations, etc.

The script can also spread the keys out like you are saying: save only 1 key every 128; the keys do not have to be sequential.


I store 64 bits for each key; you store 64 bits for 64 keys. In my database there are fewer 64-bit strings to compare against, so I think my probability is almost zero too.

For smaller databases (like 2^30 keys) I could use even fewer than 64 bits per key (like 48 bits) without problems. If I implemented the "first x bits = 0" trick, then the probability of collisions (at the same database size) would be even lower (but it would take much longer to build the database).


I'm curious about your false positives and any search script you would create.
Me too.




And if the database gets larger and larger, it then creates too many false positives.

arulbero
Legendary
Activity: 1915
Merit: 2074
December 03, 2023, 07:27:28 PM
#92

And if the database gets larger and larger, it then creates too many false positives.

Then you simply have to store more bits per key;

if you want 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.
WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 07:32:17 PM
#93

And if the database gets larger and larger, it then creates too many false positives.

Then you simply have to store more bits per key;

if you want 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.
I don't agree.

The DB, when checked, is checked against 64 or 128 or 256 keys, or however high you want to go.

If you were checking the entire 2^256 key space, then yes, but when checking smaller ranges (up to 160 bits) I don't think a larger DB creates more false positives.
ecdsa123
Full Member
Activity: 211
Merit: 105
December 03, 2023, 07:57:55 PM
#94

I have one idea: write the database directly to memory and seek in it as an object, of course as a custom bytearray,

seeking by dwords.
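
One possible realization of this idea, as a sketch: map the database file into memory and scan it as 4-byte dwords without copying (the file name and probe value are placeholders).

Code:
# Memory-mapped dword scan of the database (probe value is hypothetical).
import mmap
import struct

PROBE = 0xDEADBEEF

with open("public_keys_database.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # iterate the mapping as little-endian dwords without slicing copies
    for idx, (dword,) in enumerate(struct.iter_unpack("<I", mm)):
        if dword == PROBE:
            print("hit at byte offset", idx * 4)
            break
    mm.close()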

COBRAS
Member
Activity: 846
Merit: 22
December 03, 2023, 08:15:33 PM
#95

And if the database gets larger and larger, it then creates too many false positives.

Then you simply have to store more bits per key;

if you want 0 collisions, store 256 bits per key. That always works.

And if you want a smaller database, you can store 1 key every 1024 keys.

I agree with you, Alberto; you are more experienced than the others in this discussion.

WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 03, 2023, 10:05:53 PM
#96

Quote
Then you can reduce the space and increase the cost of the search computations.

With my script, a db of 2^36 keys takes about 4 GB. In that case I store 1 key every 128. It takes about 1 hour to build.
And 128 computations are very fast for the 'search' part.

And searching 64 bits -> shift by 64 bits -> next 64 bits -> shift by 64 bits should be faster than
64 bits -> shift by 1 bit -> next 64 bits (if I understand your script).

You say 4 GB, but you aren't comparing apples to apples.

In your scenario of saving only 1 key out of every 128 across 2^36 keys, your DB does not contain 2^36 keys, but 2^36 / 2^7 = 2^29 keys.

If I spread out my keys to 1 every 128 inside a 2^36 range and only store 2^29 keys, my DB size would be about 63 MB. Big difference, huge.
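
A quick check of the sizes being compared, for the same 2^29 stored keys:

Code:
keys_stored = 2**29                      # 2^36 range, 1 key kept per 128
print(keys_stored / 8 / 2**20, "MiB")    # 1 bit per key   -> 64.0 MiB
print(keys_stored * 8 / 2**30, "GiB")    # 8 bytes per key -> 4.0 GiB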
arulbero
Legendary
Activity: 1915
Merit: 2074
December 04, 2023, 12:39:09 AM
Last edit: December 05, 2023, 04:40:30 PM by arulbero
Merited by whanau (2)
#97

I wrote both scripts:

create_database_arulbero.py
search_key_arulbero.py


parameters:

private key interval: from 1 to 2^32
keys stored: 2^32 / 2^7 = 2^25 (1 every 128)
bits for each key = 64
size of database: 257 MB

time to create the database: 3 min 55 s

time to find the private key of a single random public key in the db: 0.1 - 0.3 s




create_database_arulbero.py

Code:
#!/usr/bin/env python3
# 2023/Dec/03, create_database_arulbero.py
import secp256k1 as ice
import sys

#############################################################################
# Set the number of public keys to generate
#############################################################################
start_key = 1
num_public_keys = 2**32  # about 4 billion keys
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20
rate_of_key_to_store = rate_of_key_to_generate

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)
interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1

#############################################################################


if (binary_mode == 1):

    f = open("public_keys_database.bin", "wb")  # binary mode

    ########################################generation#############################################
    public_keys_complete = list(map(ice.scalar_multiplication, interval_to_generate))  # generates the complete public keys
    public_keys = list(map(lambda w: int(w[33-bytes_to_store:33].hex(), 16), public_keys_complete))  # extracts only the last bytes_to_store bytes

    ########################################writing the db##########################################
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i].to_bytes(bytes_to_store, sys.byteorder))  # writes each key

    f.close()

else:

    f = open("public_keys_database.txt", "w")

    # generation
    public_keys = []

    for i in interval_to_generate:
        P4 = ice.scalar_multiplication(i)
        public_keys.append(P4[33-bytes_to_store:33].hex())

    # writing the db
    for i in range(0, len(interval_to_store)):
        f.write(public_keys[i])

    f.close()

search_key_arulbero.py
Code:
# 2023/Dec/03, arulbero_search_key.py
import secp256k1 as ice
import random
import sys

#############################################################################
# Set the number of public keys to generate
#############################################################################

start_key = 1
num_public_keys = 2**32
bits_to_store = 64
bytes_to_store = bits_to_store//8
rate_of_key_to_generate = 2**20
rate_of_key_to_store = rate_of_key_to_generate

split_database = 256  # read only a fraction of the database at a time to speed up finding the string

interval_to_generate = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

interval_to_store = range(start_key, start_key + num_public_keys, rate_of_key_to_store)

binary_mode = 1

#########################################################################################

#pk = 0x3243 = 12867
#P = ice.scalar_multiplication(12867)
#P="0x6800b#b8a9dffe1709ceac95d7d06646188c2cb656c09cd2e717ec67487ce1be3"


#############generates random private key and public key#################################
pk = random.randint(start_key, start_key + num_public_keys)
#pk = start_key + num_public_keys - 1
P = ice.scalar_multiplication(pk)
print()
print("This is the public key: " + P[1:33].hex())
print("I need to find this private key: " + str(pk))


###################subtraction of : P - 1G,  P - 2G, ..., P - rate_of_key*G##############
substract_pub = ice.scalar_multiplication(1)
complete_pub = ice.point_loop_subtraction(rate_of_key_to_generate, P, substract_pub)


partial_pub = []  # need only the last bytes_to_store bytes of each key
P2 = int(P[33-bytes_to_store:33].hex(), 16).to_bytes(bytes_to_store, sys.byteorder)
partial_pub.append(P2)

for i in range(1, rate_of_key_to_store+1):
    partial_pub.append(int(complete_pub[(i-1)*65+33-bytes_to_store:(i-1)*65+33].hex(), 16).to_bytes(bytes_to_store, sys.byteorder))


################search in database########################################################
num_bytes = num_public_keys*bytes_to_store//rate_of_key_to_store
size = num_bytes//split_database
s_partial_pub = set(partial_pub)


with open("public_keys_database.bin", 'r+b') as f:

    #s = f.read()

    for k in range(0, num_bytes, num_bytes//split_database):

        f.seek(0, 1)  # no-op seek: keep reading from the current position
        partial_db = f.read(num_bytes//split_database)

        l_partial_db = [partial_db[i:i + bytes_to_store] for i in range(0, size, bytes_to_store)]
        s_partial_db = set(l_partial_db)

        a = list(s_partial_db & s_partial_pub)  # intersection = candidate matches
        if (len(a) > 0):

            n = partial_db.find(a[0])

            if n > -1:
                print()
                print("Private key found!!!")
                private_key = (n+k)//bytes_to_store*rate_of_key_to_generate + partial_pub.index(a[0]) + 1
                if (pk == private_key):
                    print("It is correct!!!")
                else:
                    print("Collision!")
                    print(private_key)
                P3 = ice.scalar_multiplication(private_key)
                pub_key = P3[1:33].hex()
                print((pub_key)[:])
                f.close()
                sys.exit(0)


print("string not found")
 
arulbero
Legendary
Activity: 1915
Merit: 2074
December 04, 2023, 12:46:01 AM
#98

You say 4 GB, but you aren't comparing apples to apples.

In your scenario of saving only 1 key out of every 128 across 2^36 keys, your DB does not contain 2^36 keys, but 2^36 / 2^7 = 2^29 keys.

If I spread out my keys to 1 every 128 inside a 2^36 range and only store 2^29 keys, my DB size would be about 63 MB. Big difference, huge.

If the goal is to find the private key of a public key that lies in an interval for which you have precomputed the public keys, AND at the same time to keep the size of the db as small as possible, why don't you reduce your db size to 63 MB then?
WanderingPhilospher
Full Member
Activity: 1050
Merit: 219
December 04, 2023, 01:37:50 AM
#99

Quote
I wrote both scripts:

create_database_arulbero.py
search_key_arulbero.py

parameters:

private key interval: from 1 to 2^32
keys stored: 2^32 / 2^7 = 2^25 (1 every 128)
bits for each key = 64
size of database: 257 MB

time to create the database: 3 min 55 s

time to find the private key of a single random public key in the db: 0.1 - 0.3 s

That is for a known public key; it could be unknown like in your script (I would just need to generate the random pk/pub like you did, but the time results would be the same). Here are my script results for a key within the 2^32 range:

Generated 2,000,000 keys in less than 4 seconds.
Size of database = 977 KB (less than 1 MB)
Time to find the private key in the db: less than 2 seconds

Total time = less than 6 seconds.

I could adjust the number of keys generated, but there's no need in such a small range. With more keys generated, the search time would go down significantly.

I will run it with 2^25 keys and report back generation and search times.

UPDATE:
I generated 2^25 keys in 65 seconds.
The search took less than a second.
Total time = 66 seconds.
mcdouglasx (OP)
Member
Activity: 237
Merit: 53
December 04, 2023, 02:04:28 AM
Last edit: December 04, 2023, 05:43:09 PM by mcdouglasx
#100

Quote
You are right, no match.
It must be a problem with the reading of the bytes.
You definitely found a bug, because the key is in the DB but the script does not get it.
I'll fix it.

Edit:
The curious part is that if we start at 6, we get 3092000006 and it matches.
Any update? I've been trying tweaks to the search script, but I have not been successful.


Sorry, I've been busy. Basically the problem arises when we search at the byte level; if we stop using bytes, we find it.


Edit: here is the incremental version:
Code:
#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray


print("Scanning Binary Sequence")


start = 0
end = 4000000000

#1:4000000000
for i in range(start, end, 4000000):

    target = ice.scalar_multiplication(i)

    num = 64  # collision margin.

    sustract = 1  # amount to subtract each time.

    sustract_pub = ice.scalar_multiplication(sustract)

    res = ice.point_loop_subtraction(num, target, sustract_pub)

    binary = ''

    for t in range(num):

        h = (res[t*65:t*65+65]).hex()  # one 65-byte uncompressed pubkey
        hc = int(h[2:], 16)            # integer value of x||y

        # even -> "0", odd -> "1" (parity of the y coordinate)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"

        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"

    my_str = binary

    b = BitArray(bin=my_str)

    with open("data-base.bin", "rb") as file:
        dat = BitArray(file.read())

    if b in dat:

        print("found")
        inx = dat.find(b)[0]  # bit offset of the 64-bit signature in the db

        Pk = int(i) + int(inx)

        data = open("win.txt", "a")
        data.write("Pk:" + " " + str(Pk) + "\n")
        data.close()
        break
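
For illustration, a toy demonstration of the alignment issue fixed above: a signature that starts at a bit offset not divisible by 8 is invisible to a byte-level search, while BitArray.find scans every bit offset.

Code:
from bitstring import BitArray

db  = BitArray(bin="1110101100000000")   # toy database
sig = BitArray(bin="10101100")           # present at bit offset 2

print(bytes(db).find(bytes(sig)))        # -1: not found at the byte level
print(db.find(sig))                      # (2,): found at the bit level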
