Author Topic: Ultra-Lightweight Database with Public Keys (for puzzle btc)  (Read 605 times)
mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 21, 2024, 03:57:27 PM
Last edit: September 21, 2024, 04:22:45 PM by mcdouglasx
Merited by ABCbits (6), albert0bsd (2)
#1

https://github.com/Mcdouglas-X/Private-Key-Search-and-Ultra-Lightweight-Database-with-Public-Keys


This project implements a database designed to store interleaved bit patterns (010101...) of 15 bits or more in length. These patterns (Pp) are stored along with the number of public keys between patterns (Bi) and the total number of bits traversed up to the end of each pattern (Tb).
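
The scripts below locate these runs with the regular expression ((10)+1|(01)+0), keeping only matches of 15 bits or more. A quick standalone demo (my sketch, not part of the repository):

Code:
# Find alternating runs of 15+ bits in a parity stream, using the same
# expression as the scripts below (the `regex` module behaves like the
# standard `re` module for this pattern).
import re

patternx = re.compile(r'((10)+1|(01)+0)')
stream = "0011" + "101010101010101" + "110" + "0101010101010100"
for m in patternx.finditer(stream):
    if len(m.group()) >= 15:
        print(f"Pp={m.group()} at bits {m.start()}..{m.end() - 1}")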


Requirements:

secp256k1

https://github.com/iceland2k14/secp256k1

Database Structure

The database stores data in the following format:

Code:
Bi: 13806, Pp: 010101010101010, Tb: 13821

Bi: 10889, Pp: 101010101010101, Tb: 24725

Bi: 10637, Pp: 101010101010101, Tb: 35377

Bi: 186843, Pp: 010101010101010, Tb: 222235

This format allows thousands of public keys to be represented in just a few lines, making the database lightweight and easy to manage.

With my previous implementation, 120 million keys required a 14 MB file; now the same keys are represented in a 165 KB file, a reduction of approximately 98.82%.
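
A minimal sketch (mine, not from the repository) of reading this format back into (Bi, Pp, Tb) tuples for reuse:

Code:
# Parse each line "Bi: <int>, Pp: <bits>, Tb: <int>" of patt_db.txt back
# into a (Bi, Pp, Tb) tuple so the stored data can be reused.
entries = []
with open("patt_db.txt") as db:
    for line in db:
        line = line.strip()
        if line:
            bi, pp, tb = line.split(", ")
            entries.append((int(bi.split(": ")[1]),
                            pp.split(": ")[1],
                            int(tb.split(": ")[1])))

print(entries[0])  # e.g. (13806, '010101010101010', 13821)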

Code:
#@mcdouglasx
import secp256k1 as ice
import regex as re
from bitarray import bitarray
import sys

target_public_key = "0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"

num_keys = 120000000           # total keys to encode into the database
subtract = 1                   # step between consecutive keys
low_m = num_keys // 1000000    # number of batches (120)
lm = num_keys // low_m         # keys per batch (1,000,000)

db_name = "patt_db.txt"

# Alternating-bit runs: "10...1" or "01...0"
patternx = re.compile(r'((10)+1|(01)+0)')

def process_res(res, lm, prev_bits=None):
    """Turn a batch of uncompressed pubkeys (65 bytes each) into parity bits,
    prepending any bits carried over from the previous batch."""
    binary = bitarray()
    if prev_bits:
        binary.extend(prev_bits)

    for t in range(lm):
        segment = res[t*65:t*65+65]
        # Parity of the y-coordinate: odd -> 1, even -> 0
        # (hex()[2:] drops the 0x04 prefix of the uncompressed key).
        binary.append(int(segment.hex()[2:], 16) % 2 == 1)

    return binary

def count_patterns(binary_bits, total_bits):
    """Write every alternating run of 15+ bits as a DB line; return the
    updated bit count and the unconsumed tail bits for the next batch."""
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    with open(db_name, 'a') as f:
        for match in matches:
            pattern = match.group()
            if len(pattern) >= 15:
                bits_between = match.start() - last_end
                total_bits += bits_between + len(pattern)
                last_end = match.end()
                f.write(f"Bi: {bits_between}, Pp: {pattern}, Tb: {total_bits}\n")

    # Carry the unconsumed tail into the next batch. Slicing from last_end
    # also fixes a corner case in the original negative slice: when a
    # pattern ended exactly at the array end, the whole array was carried.
    next_prev_bits = binary_bits[last_end:]

    return total_bits, next_prev_bits

print("Making DataBase")

target = ice.pub2upub(target_public_key)
subtract_pub = ice.scalar_multiplication(subtract)
prev_bits = None
total_bits = 0

for i in range(low_m):
    sys.stdout.write(f"\rprogress: {i + 1}/{low_m}")
    sys.stdout.flush()
    lm_i = lm * i
    lm_upub = ice.scalar_multiplication(lm_i)
    A1 = ice.point_subtraction(target, lm_upub)             # batch start point
    res = ice.point_loop_subtraction(lm, A1, subtract_pub)  # lm keys, step -1
    binary_bits = process_res(res, lm, prev_bits)
    total_bits, prev_bits = count_patterns(binary_bits, total_bits)

print("\nDone!")

Search Functionality

To find matches, the search script processes between 10,000 and 250,000 public keys per cycle (low_m). You can configure this value at your discretion; 100,000 is recommended, as it is the average number of keys between patterns.

For example, if there are 50,000 public keys between pattern A and pattern B, starting at an intermediate point between the two will easily lead you to the next pattern, and the script will then calculate your private key.
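
As a worked example of the recovery step (the numbers below are illustrative; Tb: 24725 is taken from the sample database lines above, and 10056435896 is the example key noted in the search script):

Code:
# Database bit i is the parity of (k_target - i)*G; scan bit s is the
# parity of (Rand - s)*G. When a pattern that ends at DB index Tb (so it
# starts at Tb - len(pattern)) is found at scan offset s, the streams
# align: k_target - (Tb - len(pattern)) = Rand - s, which gives the
# formula pk = (Rand - s + Tb) - len(pattern) used in the script below.
Rand = 10056415186             # hypothetical random start inside the range
s = 4000                       # hypothetical pattern offset in the scan window
Tb_in_t = 24725                # Tb of the matching DB line
pattern = "101010101010101"

pk = (Rand - s + Tb_in_t) - len(pattern)
print(pk)  # 10056435896, then verified against the target pubkey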

Code:
#@mcdouglasx
import secp256k1 as ice
import random
import regex as re
from bitarray import bitarray
import time
import sys

print("Searching Binary Patterns")

#Pk: 10056435896
#0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21

Target_pub = "0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"
#range
start = 10000000000
end = 12000000000

with open('patt_db.txt', 'r') as Db:
    target = Db.readlines()

patternx = re.compile(r'((10)+1|(01)+0)')

def process_res(res, low_m):
    """Turn low_m uncompressed pubkeys (65 bytes each) into parity bits."""
    binary = bitarray()

    for t in range(low_m):
        segment = res[t*65:t*65+65]
        # Parity of the y-coordinate: odd -> 1, even -> 0
        binary.append(int(segment.hex()[2:], 16) % 2 == 1)

    return binary

def count_patterns(binary_bits, Rand, start_time):
    matches = patternx.finditer(binary_bits.to01())
    last_end = 0
    found = None  # (pk, cpub) of the last verified match

    for match in matches:
        pattern = match.group()
        if len(pattern) >= 15:
            bits_between = match.start() - last_end
            offset = match.start()  # pattern start within this scan window
            last_end = match.end()

            X = f"Bi: {bits_between}, Pp: {pattern}"
            for t in target:
                if X in t:
                    Tb_in_t = int(t.split(", Tb: ")[1].split(",")[0])
                    # Align the scan stream with the database stream:
                    # k_target - (Tb - len(pattern)) = Rand - offset
                    pk = (Rand - offset + Tb_in_t) - len(pattern)
                    cpub = ice.to_cpub(ice.scalar_multiplication(pk).hex())
                    # Verify against the target to discard false positives.
                    if cpub == Target_pub:
                        found = (pk, cpub)

    if found:
        pk, cpub = found  # use the verified key (the original code could
                          # report a pk overwritten by a later non-match)
        elapsed_time = time.time() - start_time
        print("pk:", pk)
        print("cpub:", cpub)
        print("Elapsed time:", elapsed_time, "seconds")

        with open('found.txt', 'a') as f:
            f.write(f"pk: {pk}\n")
            f.write(f"cpub: {cpub}\n")
            f.write(f"Elapsed time: {elapsed_time} seconds\n")
        sys.exit()

low_m = 100000
subtract = 1
subtract_pub = ice.scalar_multiplication(subtract)
start_time = time.time()

while True:
    Rand = random.randint(start, end)
    pub = ice.scalar_multiplication(Rand)
    res = ice.point_loop_subtraction(low_m, pub, subtract_pub)
    binary_bits = process_res(res, low_m)
    count_patterns(binary_bits, Rand, start_time)
    

Performance

The search speed scales with the size of your database: each cycle computes only low_m keys, but the resulting bit stream is checked against every key the database represents. For instance, if you have a database of 100 million keys and your computer physically processes 1 million keys per second, you would effectively be scanning around 1 billion keys per second.
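
One way to read that claim, with the script's defaults assumed (my sketch):

Code:
# Each cycle computes only low_m subtractions, but the resulting bit
# stream is compared against every key the database represents, so the
# effective rate is the physical rate scaled by db_keys / low_m.
physical_rate = 1_000_000     # keys/s your machine actually computes
db_keys = 100_000_000         # keys represented in the database
low_m = 100_000               # keys computed per search cycle
print(f"{physical_rate * db_keys // low_m:,} keys/s")  # 1,000,000,000 keys/s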

Implementation Notes

This project is an implementation of an idea and is not optimized for speed or efficiency. You can create your own implementation in C.

You can reduce the minimum pattern length in the database, which in turn lets you reduce low_m, resulting in faster search cycles at the cost of a larger database.

I have left a test database on GitHub in case you just want to try the script.

At the moment, I don't have the computational resources to move into the world of GPUs and CPUs in C. :(

BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
GTX1060x2
Newbie
Offline

Activity: 9
Merit: 3

September 21, 2024, 04:25:43 PM
#2

Are you sure it's a lightweight database? It seems more like a Bloom filter, but with secp256k1 used instead of xxhash/murmurhash/etc.
mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 21, 2024, 04:39:40 PM
#3

Are you sure it's a lightweight database? It seems more like a Bloom filter, but with secp256k1 used instead of xxhash/murmurhash/etc.

Yes. A Bloom filter is a probabilistic data structure used to test whether an element is present in a set. However, it does not store the elements themselves; instead, it uses a series of hash functions to determine the presence of an element. Because of this, it is not possible to extract specific data, such as numbers, directly from a Bloom filter.
That's why I call this a database: you can access and reuse the data.
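
For illustration (my sketch, not code from this project), a minimal Bloom filter shows why nothing can be read back out of one:

Code:
# Membership is answered by k hash probes into a bit array; the elements
# themselves are never stored, so the structure cannot return them.
from hashlib import sha256

SIZE, HASHES = 10_000, 3
bits = bytearray(SIZE // 8 + 1)

def probes(item):
    for i in range(HASHES):
        yield int.from_bytes(sha256(f"{i}:{item}".encode()).digest(), "big") % SIZE

def add(item):
    for p in probes(item):
        bits[p // 8] |= 1 << (p % 8)

def maybe_contains(item):
    # False = definitely absent; True = present or (rarely) a false positive.
    return all(bits[p // 8] & (1 << (p % 8)) for p in probes(item))

add("0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21")
print(maybe_contains("0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"))  # True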

BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
COBRAS
Member
Offline

Activity: 972
Merit: 22

September 21, 2024, 05:16:43 PM
#4

Great, I'm also trying to use bit patterns.

Why only these patterns?

Bi: 13806, Pp: 010101010101010, Tb: 13821

Bi: 10889, Pp: 101010101010101, Tb: 24725

Bi: 10637, Pp: 101010101010101, Tb: 35377

Bi: 186843, Pp: 010101010101010, Tb: 222235

Where are

011011011
1001001

?

Bro, how do you calculate the full privkey range from so few example bits (only 4 patterns)?

Thanks

bitcurve
Newbie
Offline

Activity: 28
Merit: 6

September 21, 2024, 05:24:56 PM
#5

Have you considered using XOR filters?
mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 21, 2024, 05:32:32 PM
#6

Quote from: COBRAS
Why only these patterns? Where are 011011011, 1001001? Bro, how do you calculate the full privkey range from so few example bits (only 4 patterns)?

I chose the patterns this way to guarantee a balance between the size of the DB and the search speed, but you can experiment with it. As long as the same changes are made in both scripts, there will be no problems with the PK calculations.

Here: pk = (Rand - total_bits + Tb_in_t) - len(pattern)
Then a check is made to avoid false positives.

Have you considered using XOR filters?

I publish my ideas in their most basic form, so they are better understood.


BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
COBRAS
Member
Offline

Activity: 972
Merit: 22

September 21, 2024, 05:36:34 PM
#7

Quote from: mcdouglasx
I chose the patterns this way to guarantee a balance between the size of the DB and the search speed, but you can experiment with it. As long as the same changes are made in both scripts, there will be no problems with the PK calculations.



Bro, are you talking about the first bits of the pubkeys in your patterns, or about patterns in the privkeys?

mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 21, 2024, 06:14:21 PM
#8

Bro, are you talking about the first bits of the pubkeys in your patterns, or about patterns in the privkeys?

It is one bit per public key, according to whether the key is even or odd; that is, the patterns are sequences of pubkey parities. This can be seen in the code.
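
For a compressed public key that parity is visible directly in the prefix byte: 02 means an even y-coordinate, 03 an odd one. A tiny illustration (my sketch, separate from the scripts above):

Code:
# One bit per public key: the parity of the point's y-coordinate.
# For a compressed pubkey this is simply the prefix byte (02 even, 03 odd).
def parity_bit(cpub_hex):
    return '1' if cpub_hex.startswith('03') else '0'

print(parity_bit("0339d69444e47df9bd7bb7df2d234185293635a41f0b0c7a4c37da8db5a74e9f21"))  # '1'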

BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
albert0bsd
Hero Member
Offline

Activity: 957
Merit: 683

September 21, 2024, 07:42:55 PM
Merited by ABCbits (2), mcdouglasx (2)
#9

Have you considered using XOR filters?

I am implementing this database (1 bit per public key). I am grouping the bits into 64-bit items and storing those 64-bit integers in a Bloom filter for fast search; that needs about 29 bits per 64 keys. I think this will give more speed to the BSGS algorithm.

If the speed is boosted by this method, I am going to change the Bloom filter for a XOR filter, because I believe it may need only 16 bits per 64-bit item (meaning each group of 64 public keys can be stored in 16 bits).

The idea behind this 1-bit-per-key database may be somewhat dubious, because you need extra processing to perform a search. In my case, before this approach you only needed to do a single public-key subtraction and compare it to the database; now you need to do the original subtraction and then, from the resulting keys, construct an item to compare against your database (Bloom filter / XOR filter / list / whatever). So you need to calculate another set of keys just to be able to compare against the current database, which means more processing. And not only 64 times more: you need more items to compare, because you don't know whether your target public key is aligned with your current groups of keys.

So I am testing this just to validate what speed I will get. I am working at the very low bit level with bit-shift operations, and I think my current approach is the most efficient from the CPU point of view. I am not using any string conversion, so I think there should be a point where this database is useful.
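
A sketch of the grouping step described above (my reading; the actual low-level code is not shown in this thread):

Code:
# Pack 64 one-bit-per-key parities into a single 64-bit integer using bit
# shifts; the resulting integer is the item inserted into the Bloom/XOR
# filter. `parity_bits` is a hypothetical list of 0/1 values.
def pack64(parity_bits):
    assert len(parity_bits) == 64
    word = 0
    for i, b in enumerate(parity_bits):
        word |= (b & 1) << i
    return word

print(hex(pack64([i % 2 for i in range(64)])))  # 0xaaaaaaaaaaaaaaaa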
mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 21, 2024, 09:02:42 PM
#10

Quote from: albert0bsd
I am implementing this database (1 bit per public key), grouping the bits into 64-bit integers stored in a Bloom filter (~29 bits per 64 keys), and may later switch to a XOR filter (~16 bits per item). [...]
I hope for good results! In the case of this specific database, I would create a custom XOR filter. Since I only need the total-bits (Tb) value of each line in the database, I would configure the XOR filter to return the necessary value according to the line found. However, since I don't think this PC will ever handle even a 10 MB database, I haven't given it much thought.

BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
albert0bsd
Hero Member
Offline

Activity: 957
Merit: 683

September 21, 2024, 10:14:40 PM
#11

However, since I don’t think this PC will ever handle a 10MB database, I haven’t given it much thought.

I am storing up to 1 TB of binary data: 8.8 trillion keys (8796093022208, i.e. 2^43, at one bit per key).

That amount of bits, grouped and stored in a Bloom filter, may need only about 480 GB.

Stored in a XOR filter it would use only around 256 GB, so I am trying everything to compact it further and to find better methods for collision search and for locating the offset of the keys.
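
The arithmetic behind those figures, as a rough check (my sketch):

Code:
# 1 bit per key: 2^43 keys -> 2^43 bits -> 2^40 bytes = 1 TiB of raw data.
keys = 2 ** 43                    # 8796093022208
groups = keys // 64               # one stored item per 64 keys
bloom_bytes = groups * 29 // 8    # ~29 bits per item in a Bloom filter
xor_bytes = groups * 16 // 8      # ~16 bits per item in a XOR filter
print(keys // 8 // 2**40, "TiB raw")         # 1
print(round(bloom_bytes / 1e9), "GB bloom")  # ~498, close to the ~480 above
print(round(xor_bytes / 1e9), "GB xor")      # ~275 (256 GiB)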
COBRAS
Member
Offline

Activity: 972
Merit: 22

September 22, 2024, 12:20:23 AM
#12

@albertobsd

How many pubkeys have you stored?

Quote
I am implementing this database (1 bit per public key), grouping the bits into 64-bit items and storing those 64-bit integers in a Bloom filter for fast search (~29 bits per 64 keys); I think this will give more speed to the BSGS algorithm.

2^64? Is 2^64 enough to find a 2^130-bit privkey?

I found the answer: you store 2^43 pubkeys. That is not enough to get the privkey; you would still need to brute-force a 2^87 range. I think you know this, yes?
albert0bsd
Hero Member
Offline

Activity: 957
Merit: 683

September 22, 2024, 12:58:14 AM
#13

Well, I do what I can with what I have at my disposal.
COBRAS
Member
Offline

Activity: 972
Merit: 22

September 22, 2024, 02:02:29 AM
#14

Well, I do what I can with what I have at my disposal.

Then we are on the same "search path": we do work that doesn't get results ASAP, but maybe something gets found along the way. For example, my result is 2^30 in 12191606 operations, but this is much slower than BSGS; you probably found something similar too. Have a good day.


solimansleeks
Newbie
Offline

Activity: 3
Merit: 0

September 23, 2024, 09:57:03 AM
#15

Well, I do what I can with what I have at my disposal.



Tell me that you're the one who got the 130 puzzle. Tell me, huh?
albert0bsd
Hero Member
Offline

Activity: 957
Merit: 683

September 23, 2024, 03:36:44 PM
#16

Tell me that you're the one who got the 130 puzzle. Tell me, huh?

Sadly no, I wasn't that lucky person.

By the way, guys, please keep this topic focused on the "Ultra-Lightweight Database with Public Keys".

I am finishing my code for this database with my own approach (64 keys grouped into a 64-bit integer, and each 64-bit integer stored in a Bloom filter using only 29 bits per item). Since BSGS speed depends on the total number of items stored in the database, the first tests look promising.

Since I am not implementing the same approach/code as mcdouglasx, I think I am going to open a new topic for this, but all the credit for the bit database (1 bit per public key) goes to him. Also, I am donating to him.
COBRAS
Member
Offline

Activity: 972
Merit: 22

September 23, 2024, 03:41:02 PM
#17

Quote from: albert0bsd
I am finishing my code for this database with my own approach; since it is not the same approach/code as mcdouglasx's, I am going to open a new topic for it.

Great

mcdouglasx (OP)
Member
Offline

Activity: 289
Merit: 84

New ideas will be criticized and then admired.

September 23, 2024, 04:17:12 PM
#18

Quote from: albert0bsd
I am finishing my code for this database with my own approach; since it is not the same approach/code as mcdouglasx's, I am going to open a new topic for it.

I truly believe that you are on the right track; managing more keys in BSGS (keyhunt) is, without a doubt, the path to important progress on this topic.

BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
solimansleeks
Newbie
Offline

Activity: 3
Merit: 0

September 23, 2024, 04:46:13 PM
#19

Quote from: albert0bsd
I am finishing my code for this database with my own approach; since it is not the same approach/code as mcdouglasx's, I am going to open a new topic for it.


I'm really sad that you're not the one. :'(
I'll keep my eye on this topic, "Ultra-Lightweight Database with Public Keys", and wait for the new implementation.
AbadomRSZ
Newbie
Online

Activity: 5
Merit: 0

September 24, 2024, 03:38:34 PM
#20

Can this script be used on any puzzle whose pubkey has been revealed, or does it only work on a specific-bit wallet?