mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 03:40:06 AM Last edit: December 06, 2023, 09:55:37 PM by mcdouglasx Merited by digaran (2), citb0in (1) |
|
creating the lightweight database, billions of keys. last edit: 12/06/2023

1 - We generate a bin file with the public keys represented as 1s and 0s. 0 = even, 1 = odd.

single publickey

#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray

print("Making Binary Data-Base")

target_public_key = "030d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b"
target = ice.pub2upub(target_public_key)

num = 10000  # number of keys
sustract = 1  # amount to subtract each time; if you modify this, use the same amount to scan
Low_m = 10
lm = num // Low_m
print(lm)

sustract_pub = ice.scalar_multiplication(sustract)
res = ice.point_loop_subtraction(lm, target, sustract_pub)
binary = ''
for t in range(lm):
    h = res[t*65:t*65+65].hex()
    hc = int(h[2:], 16)
    if str(hc).endswith(('0', '2', '4', '6', '8')):
        binary += "0"
    if str(hc).endswith(('1', '3', '5', '7', '9')):
        binary += "1"
my_str = bytes(BitArray(bin=binary))
binary_file = open('data-base.bin', 'ab')
binary_file.write(my_str)
binary_file.close()

for i in range(1, Low_m):
    lm_upub = sustract_pub = ice.scalar_multiplication((lm*i)*sustract)
    A1 = ice.point_subtraction(target, lm_upub)
    sustract_pub = ice.scalar_multiplication(sustract)
    res = ice.point_loop_subtraction(lm, A1, sustract_pub)
    binary = ''
    for t in range(lm):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    my_str = bytes(BitArray(bin=binary))
    binary_file = open('data-base.bin', 'ab')
    binary_file.write(my_str)
    binary_file.close()
If you want to create a longer database than your memory supports, for example 1000 million keys with a memory limit of 100 million, divide by 10 using these variables.

You should use numbers like this:

num = 10000000000000
Low_m = 10000

num = 20000000000000
Low_m = 2000

Avoid values that do not divide evenly, or you will find private keys with the last numbers changed:

num = 14678976447
Low_m = 23

We did it! We have a huge database with little disk space. Because there is no sequence between even and odd pubkeys, we can set a collision margin of 64, 128, 256... By this I mean that as you increase the collision margin, the probability of a spurious identical binary sequence decreases.

What's the use of this? We just need to randomly generate binary sequences and check if they exist in the database.

searching single pubkey

#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray
print("Scanning Binary Sequence")
#Pk: 1033162084
#cPub: 030d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b

start = 1033100000
end = 1033200000

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    num = 64  # collision margin
    sustract = 1  # amount to subtract each time
    sustract_pub = ice.scalar_multiplication(sustract)
    res = ice.point_loop_subtraction(num, target, sustract_pub)
    binary = ''
    for t in range(num):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    b = bytes(BitArray(bin=binary))
    file = open("data-base.bin", "rb")
    dat = bytes(file.read())
    file.close()
    if b in dat:
        inx = dat.find(b) * sustract
        # one byte of the file holds 8 stored keys, hence inx + inx*7
        Pk = int(pk) + int(inx) + int(inx) * 7
        data = open("win.txt", "a")
        data.write("Pk:" + " " + str(Pk) + "\n")
        data.close()
        break
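For reference, the even/odd test in the script above reduces to the parity of the point's final byte: the decimal last digit of int(x||y) is even exactly when the integer is even. A minimal helper showing this, assuming the same 65-byte uncompressed-point buffer layout that ice returns (the function name is illustrative):

```python
def parity_bits(res: bytes, count: int) -> str:
    """Extract one parity bit per 65-byte uncompressed point:
    '0' if the point's integer value is even, '1' if odd."""
    bits = []
    for t in range(count):
        point = res[t*65:t*65+65]
        # parity of the big integer x||y equals parity of its last byte
        bits.append(str(point[-1] & 1))
    return ''.join(bits)

# two fake 65-byte points, one ending even, one ending odd
fake = (b"\x04" + b"\x00"*63 + b"\x02") + (b"\x04" + b"\x00"*63 + b"\x03")
print(parity_bits(fake, 2))  # 01
```

This avoids converting each point to a decimal string just to inspect its last digit.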
for multiple pubkeys

First we create the database by making random subtractions in a range specified by us; you can create your own list at your discretion.

#@mcdouglasx
import secp256k1 as ice
import random

print("Making random sustract Data-Base")

target_public_key = "0209c58240e50e3ba3f833c82655e8725c037a2294e14cf5d73a5df8d56159de69"
target = ice.pub2upub(target_public_key)

targets_num = 1000
start = 0
end = 1000000

for i in range(targets_num):
    A0 = random.randint(start, end)
    A1 = ice.scalar_multiplication(A0)
    A2 = ice.point_subtraction(target, A1).hex()
    A3 = ice.to_cpub(A2)
    data = open("rand_subtract.txt", "a")
    data.write(str(A3) + " " + "#" + str(A0) + "\n")
    data.close()

Db multi-pubkeys

We create our database for multiple keys.

#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray
print("Making Binary Data-Base")
target__multi_public_keys = "rand_subtract.txt"

with open(target__multi_public_keys, 'r') as f:
    lines = f.readlines()

X = len(lines)
for line in lines:
    mk = ice.pub2upub(str(line.strip()[:66]))
    num = 1024  # number of keys per pub
    subtract = 1  # amount to subtract each time
    subtract_pub = ice.scalar_multiplication(subtract)
    res = ice.point_loop_subtraction(num, mk, subtract_pub)
    binary = ''
    for t in range(num):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    my_str = BitArray(bin=binary)
    binary_file = open('multi-data-base.bin', 'ab')
    my_str.tofile(binary_file)
    binary_file.close()

scan multiple publickeys, bit version

#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray
print("Scanning Binary Sequence")
start = 3090000000
end = 3093472814

# total number of sequential pubkeys per target in the database
X = 1024

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    num = 64  # collision margin
    subtract = 1  # amount to subtract each time
    subtract_pub = ice.scalar_multiplication(subtract)
    res = ice.point_loop_subtraction(num, target, subtract_pub)
    binary = ''
    for t in range(num):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    b = BitArray(bin=binary)
    file = open("multi-data-base.bin", "rb")
    dat = BitArray(file.read())
    file.close()
    if b in dat:
        inx = dat.find(b)[0]  # bit offset of the match
        Pk = int(pk) + (inx % X) * subtract
        cpub = ice.to_cpub(ice.scalar_multiplication(Pk).hex())
        with open("rand_subtract.txt", 'r') as Mk:
            lines = Mk.readlines()
        for line in lines:
            mk_0 = str(line.strip())
            mk = int(mk_0[68:])
            mk2 = mk_0[:66]
            if mk2 in cpub:
                print("found")
                cpub2 = ice.to_cpub(ice.scalar_multiplication(Pk + mk).hex())
                data = open("win.txt", "a")
                data.write("Pk:" + " " + str(Pk + mk) + "\n")
                data.write("cpub:" + " " + str(cpub2) + "\n")
                data.close()
                break
        break

scan multiple publickeys, bytes version (faster)

#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray
import numpy as np
print("Scanning Binary Sequence")
start = 3000000000
end = 3095000000

X = 1024  # number of sequential pubkeys for each target

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    num = 64  # collision margin
    subtract = 1  # amount to subtract each time
    subtract_pub = ice.scalar_multiplication(subtract)
    res = ice.point_loop_subtraction(num, target, subtract_pub)
    binary = ''
    for t in range(num):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    b = BitArray(bin=binary)
    c = bytes(b)
    dat = bytes(np.fromfile("multi-data-base.bin", dtype=np.uint8))
    if c in dat:
        f = BitArray(dat)
        inx = f.find(c)[0]  # bit offset of the match
        Pk = int(pk) + (inx % X) * subtract
        cpub = ice.to_cpub(ice.scalar_multiplication(Pk).hex())
        with open("rand_subtract.txt", 'r') as Mk:
            lines = Mk.readlines()
        for line in lines:
            mk_0 = str(line.strip())
            mk = int(mk_0[68:])
            mk2 = mk_0[:66]
            if mk2 in cpub:
                print("found")
                cpub2 = ice.to_cpub(ice.scalar_multiplication(Pk + mk).hex())
                data = open("win.txt", "a")
                data.write("Pk:" + " " + str(Pk + mk) + "\n")
                data.write("cpub:" + " " + str(cpub2) + "\n")
                data.close()
                break
        break

The difference between the bit version and the byte version is that with bytes some matches are masked inside byte boundaries, so some collisions are not detected. This does not happen in the bit version.

This code is just a demonstration and may not be optimized correctly; preferably make your own version in C.

Don't forget to say thank you, to motivate me to contribute other ideas.
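The masking effect can be seen without any EC code at all: a pattern sitting at a bit offset that is not a multiple of 8 is visible to a bit-level search but invisible to a byte-level one. A minimal sketch over '0'/'1' strings (the stream and pattern are made-up values):

```python
# packed database bit stream; the needle sits at bit offset 3,
# which is not byte-aligned
stream = "000" + "1101101001" + "0" * 13
needle = "1101101001"

def find_bits(haystack, needle):
    """Bit-level search: a match at any offset is found."""
    return haystack.find(needle)

def find_bytes(haystack, needle):
    """Byte-level search: like bytes.find on the packed file, a match
    is only visible when it starts on a multiple-of-8 bit offset."""
    for i in range(0, len(haystack) - len(needle) + 1, 8):
        if haystack[i:i + len(needle)] == needle:
            return i
    return -1

print(find_bits(stream, needle))   # 3  -> the bit version finds it
print(find_bytes(stream, needle))  # -1 -> the byte version misses it
```

This is why the bytes version is faster but drops some collisions.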
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
ymgve2
|
|
November 27, 2023, 04:01:40 AM |
|
What's the advantage of this over storing full public keys at specific intervals?
Also as you target a larger number of keys, your time spent scanning for the bit sequence increases, unless you use some smarter data structures (which would not be as small as 3.81mb anymore)
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 04:12:00 AM |
|
What's the advantage of this over storing full public keys at specific intervals?
Also as you target a larger number of keys, your time spent scanning for the bit sequence increases, unless you use some smarter data structures (which would not be as small as 3.81mb anymore)
Generating sequences 01001..., the possibility of finding an identical sequence is very low. The traditional way limits you in space, and this works the same. If you choose 64 as the collision margin you divide your computing power by 64, but with a huge database you would still find the key faster than using xpoint.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
ymgve2
|
|
November 27, 2023, 04:19:37 AM |
|
If you sort and store partial keys you can search by using binary search, cutting search time from linear to log. For larger number of keys, this becomes much faster than your single bit storage. To reduce space requirements, only store public keys that fit a criteria (like ending with the six lower bits zero) when building. When searching, subtract until you hit the bit criteria, then look up with binary search.
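The sorted-partial-key lookup described above can be sketched with Python's bisect; the stored fingerprints here are made-up values (each ends in six zero bits, the example criteria), and the point is only the log-time membership test versus a linear scan:

```python
import bisect

# hypothetical stored key fingerprints, each with the six low bits zero
stored = sorted([0x1a2b00, 0xdeadc0, 0x99f340, 0x0042c0])

def contains(fingerprint: int) -> bool:
    """Binary search over the sorted list: O(log n) per lookup
    instead of scanning the whole database."""
    i = bisect.bisect_left(stored, fingerprint)
    return i < len(stored) and stored[i] == fingerprint

print(contains(0x99f340))  # True
print(contains(0x99f341))  # False
```

On the search side one would keep subtracting until the candidate meets the same low-bits criteria, then do a single lookup.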
Also, the fact that the public keys you "store" have to be sequential makes these methods much less efficient than kangaroo/BSGS etc.
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 04:58:24 AM |
|
If you sort and store partial keys you can search by using binary search, cutting search time from linear to log. For larger number of keys, this becomes much faster than your single bit storage. To reduce space requirements, only store public keys that fit a criteria (like ending with the six lower bits zero) when building. When searching, subtract until you hit the bit criteria, then look up with binary search.
Also, the fact that the public keys you "store" have to be sequential makes these methods much less efficient than kangaroo/BSGS etc.
You only store binary sequences. If you want to spread your database over the entire range rather than one consecutive range, use jumps of addition and subtraction from your target.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 197
Merit: 1
|
|
November 27, 2023, 03:04:08 PM |
|
generating sequences 01001.... The possibility of finding an identical sequence is very low.
The traditional way limits you in space, and it works the same. If you choose 64 as the collision margin you will divide your computing power/64, but with a huge database, you would still find the key faster than using xpoint.
Tested for 35 bit range, 50 million keys. You write that the probability of false collisions is small, but I see something else:

Pk: 0x4a6af41c8 Pk: 0x4ab282db0 Pk: 0x4aa361888 Pk: 0x4995442d8 Pk: 0x4ac360410
Pk: 0x4aae069e0 Pk: 0x4990d2b70 Pk: 0x4a8293880 Pk: 0x49f888a98 Pk: 0x4a561b330
Pk: 0x4941be9a8 Pk: 0x49f676c08 Pk: 0x4ac3f4a18 Pk: 0x4abf25b48 Pk: 0x49cd58448
Pk: 0x4a3d172c0 Pk: 0x4a61e9c68 Pk: 0x497caefb0 Pk: 0x49ad43d20 Pk: 0x49b2be970
Pk: 0x4a2da0e08 Pk: 0x49daf14c8 Pk: 0x4a703a6d8 Pk: 0x4ac0ab410 Pk: 0x4ad48cd78
Pk: 0x49e381bf8 Pk: 0x4a8ce39b8 Pk: 0x4a2dbdcb0 Pk: 0x4a4b02ed0 Pk: 0x496075af8
Pk: 0x49df04bc8 Pk: 0x4968e4c58 Pk: 0x49b84c1d8 Pk: 0x4a7274c00 Pk: 0x4a103c310
Pk: 0x49dbc9168 Pk: 0x4980b4718 Pk: 0x4ac09a4a8 Pk: 0x496513eb8 Pk: 0x4a7fde400
Pk: 0x49e0a30d8 Pk: 0x4a0ca7000 Pk: 0x4a8ce5788 Pk: 0x4a29828a8 Pk: 0x4aa33ffe8
Pk: 0x49a7d8070 Pk: 0x4a7761668 Pk: 0x4ab04fb48 Pk: 0x49d3b4878 Pk: 0x49f91b288
Pk: 0x495b36470 Pk: 0x4a5b73060 Pk: 0x4a73533c8 Pk: 0x4a069d880 Pk: 0x495857b48
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 03:15:03 PM |
|
generating sequences 01001.... The possibility of finding an identical sequence is very low.
The traditional way limits you in space, and it works the same. If you choose 64 as the collision margin you will divide your computing power/64, but with a huge database, you would still find the key faster than using xpoint.
Tested for 35 bit range, 50 million keys. You write that the probability of false collisions is small, but I see something else. [list of false positives quoted above]

What is that? I don't understand; are they false positives? If so, you must increase the collision margin from 64 to 128 or higher and the problem is solved. It is still random + scalar, so you can increase the margin of precision thanks to scalar multiplication.

Edit: Although I tested it with a collision margin of 64 and I did not get false positives. Maybe you are doing something wrong; if you modified the code, share details and we will find a solution.
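As a rough model: if the parity bits were uniform random, an m-bit pattern would appear by chance about N/2^m times in an N-bit database. A quick back-of-the-envelope check (pure arithmetic, treating the stream as random, which is only an approximation):

```python
def expected_false_positives(db_bits: int, margin_bits: int) -> float:
    """Expected number of chance matches of one random pattern of
    `margin_bits` bits inside a database of `db_bits` parity bits."""
    return db_bits / 2.0 ** margin_bits

# 50 million stored keys = 50 million parity bits
print(expected_false_positives(50_000_000, 64))  # roughly 2.7e-12 per query
print(expected_false_positives(50_000_000, 32))  # roughly 0.012 per query
```

So at a margin of 64 a chance collision should be essentially impossible per query, which is why repeated false positives pointed at a bug (the subtraction step) rather than at the margin.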
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 197
Merit: 1
|
|
November 27, 2023, 03:20:51 PM |
|
What is that? I don't understand, are they false positives? If so, you must increase the collision margin from 64 to 128 or higher and problem solved. It is still random+ scalar so you can increase the margin of precision thanks to scalar multiplication.
import secp256k1 as ice

print("Making Binary Data-Base")
target_public_key = "02f6a8148a62320e149cb15c544fe8a25ab483a0095d2280d03b8a00a7feada13d"
target = ice.pub2upub(target_public_key)
num = 50000000 # number of times.
sustract= 10 #amount to subtract each time.
sustract_pub= ice.scalar_multiplication(sustract)
res= ice.point_loop_subtraction(num, target, sustract_pub)
for t in range(num+1):
h = (res[t*65:t*65+65]).hex()
if h:
hc = int(h[2:], 16)
if str(hc).endswith(('0','2','4','6','8')):
A = "0"
elif str(hc).endswith(('1','3','5','7','9')):
A = "1"
with open("dat-bin35.txt", "a") as data:
data.write(A)
else:
break
#@mcdouglasx
import secp256k1 as ice
import random
from bitstring import BitArray

print("Scanning Binary Sequence")

start = 1
end = 34359738366

while True:
    pk = random.randint(start, end)
    target = ice.scalar_multiplication(pk)
    num = 32768  # number of times
    sustract = 10  # amount to subtract each time
    sustract_pub = ice.scalar_multiplication(sustract)
    res = ice.point_loop_subtraction(num, target, sustract_pub)
    binary = ''
    for t in range(num):
        h = res[t*65:t*65+65].hex()
        hc = int(h[2:], 16)
        if str(hc).endswith(('0', '2', '4', '6', '8')):
            binary += "0"
        if str(hc).endswith(('1', '3', '5', '7', '9')):
            binary += "1"
    b = bytes(BitArray(bin=binary))
    file = open("data-base35.bin", "rb")
    dat = bytes(file.read())
    file.close()
    if b in dat:
        inx = dat.find(b)
        Pk = int(pk) + int(inx) + int(inx) * 7
        data = open("win.txt", "a")
        data.write("Pk:" + " " + hex(Pk) + "\n")
        data.close()
|
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 197
Merit: 1
|
|
November 27, 2023, 03:27:17 PM |
|
By the way, for the 40-bit range there are no collisions at all. What database size is required for higher ranges? How can one calculate the size for a specific range? Is it possible to parallelize work across physical processor cores?
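On the size question: each stored key costs exactly one bit, so the file size is simply the key count divided by 8, independent of the bit range being searched. A quick sketch (the key counts are the ones mentioned in this thread):

```python
def db_size_bytes(num_keys: int) -> int:
    """One parity bit per key, packed 8 per byte."""
    return num_keys // 8

print(db_size_bytes(2**26))        # 8388608 bytes = 8192 KiB
print(db_size_bytes(50_000_000))   # 6250000 bytes, about 6 MB
```

What grows with the range is not the file but the number of random queries needed to land inside the stored window, so higher ranges call for more stored keys or more targets, not a different encoding.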
|
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 197
Merit: 1
|
|
November 27, 2023, 03:38:07 PM Last edit: November 27, 2023, 03:48:19 PM by sssergy2705 |
|
You already deleted the message, but I saw it in the mail. Here are other options. But I don't see any difference.

num = 1024
Pk: 0x49552b0d0 Pk: 0x497fddd50 Pk: 0x499a7d5d0 Pk: 0x4a000b210 Pk: 0x4a5f30078 Pk: 0x4ae3ea090 Pk: 0x49ad6b998

num = 4096
Pk: 0x4a090c6e0 Pk: 0x49a1b7a78 Pk: 0x49fd6a730 Pk: 0x4ab095b80 Pk: 0x4a6b69d50 Pk: 0x498660e38 Pk: 0x4a4416ed8

num = 8192
Pk: 0x4a2fc3bb8 Pk: 0x499b576b8 Pk: 0x4a125b9a0 Pk: 0x4a9f3a5b0 Pk: 0x4985b0678

Although knowing the first digit and 50% of the second greatly facilitates further work. Thanks for the work done, I will continue testing.
|
|
|
|
digaran
Copper Member
Hero Member
Offline
Activity: 1330
Merit: 899
🖤😏
|
|
November 27, 2023, 04:13:38 PM |
|
Told you not to share anything public, no actually working script. ☠
So it only takes 1 bit to store a key? I can't imagine what that would be like with 100TB of space to store, and with enough fire power + BSGS or keyhunt it should be easy to solve large keys.
Can you also change n to secp256k1 n - leading f's? To check if you can figure something out of it. I'd say there will be no floats if you do that.
|
🖤😏
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 07:40:01 PM Last edit: November 28, 2023, 08:46:59 PM by Mr. Big |
|
You already deleted the message, but I saw it in the mail. Here are other options. But I don't see any difference. [results quoted above]

I'm sorry, I deleted the message because I'm not very sure of the solution. Try this: replace

inx = f.find(s)

with

inx = f.find(s)*sustract

Let me know if it gives you the correct pk at collision margin 64. My apologies.

Edit: apparently it has to do with the jump of 10 in the subtraction, not with the collision margin.
Edit2: fixed.
Told you not to share anything public, no actually working script. ☠
Can you also change n to secp256k1 n - leading f's? To check if you can figure something out of it. I'd say there will be no floats if you do that.
To change N directly in secp256k1.dll you can do it with a hex editor. The script works; the problem arises when it makes subtraction jumps other than 1. I hope it is solved.

So it only takes 1 bit to store a key? I can't imagine what that would be like with 100TB space to store, and with enough fire power + bsgs or keyhunt it should be easy to solve large keys.

This is a good option. The FUDs say that it is of no use; they do not see further, nor do they deign to try, they only limit themselves to criticizing.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
sssergy2705
Copper Member
Jr. Member
Offline
Activity: 197
Merit: 1
|
|
November 27, 2023, 08:45:26 PM |
|
The script works, the problem lies when it makes subtraction jumps other than 1, I hope it is solved.
I checked that the script is working correctly now. But in the high ranges of 40 bits and above, there are no matches yet. In principle, up to 40 bits a plain random search copes with the same success.
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 27, 2023, 09:13:47 PM |
|
The script works, the problem lies when it makes subtraction jumps other than 1, I hope it is solved.
I checked that the script is working correctly now. But in the high ranges of 40 and above, there are no coincidences yet. In principle, up to 40 bits and simple random can cope with the same success. you mean this range? 549755813887 :1099511627775 take into account: pubkey is in range? if you make jumps in database subtract= 10, in scan set it the same. Use 64 collision margin as long as you do not receive false positives, otherwise increasing it too much decreases your search speed. At high ranges use multiple objectives to cover more space in the range.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
NotATether
Legendary
Offline
Activity: 1722
Merit: 7249
In memory of o_e_l_e_o
|
|
November 28, 2023, 07:37:53 AM |
|
I think you are writing the ones and zeros as bytes, when you should be writing them out as bits.
Here's what you should do to make your program faster. set a counter like i to 0, and then each time you perform a subtraction, do byte_value |= 0 [or 1] << i; i = (i + 1) % 8. Then only do a write after every 8 iterations. Although, you can make the writing process even faster by waiting until you fill thousands of bytes like this, and then just write them all at once in one batch.
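The counter/shift scheme described above, sketched in Python: bits are accumulated into a byte LSB-first and flushed in one batched write at the end (the function and file names are illustrative). Note the bit order must match whatever the search side expects; the thread's BitArray-based scripts pack MSB-first, so the two sides have to agree.

```python
def pack_bits(bits, out_path):
    """Pack an iterable of 0/1 ints into bytes, LSB-first as in the
    `byte_value |= bit << i` scheme, writing the whole buffer at once."""
    buffer = bytearray()
    byte_value, i = 0, 0
    for bit in bits:
        byte_value |= bit << i
        i = (i + 1) % 8
        if i == 0:               # every 8 bits, emit one byte
            buffer.append(byte_value)
            byte_value = 0
    if i:                        # flush a final partial byte, zero-padded
        buffer.append(byte_value)
    with open(out_path, "wb") as f:
        f.write(buffer)          # single batched write
    return bytes(buffer)

import os, tempfile
demo_path = os.path.join(tempfile.gettempdir(), "demo-bits.bin")
print(pack_bits([1, 0, 1, 0, 0, 0, 0, 0], demo_path))  # b'\x05'
```

This keeps the file at one bit per key without ever holding the whole bit string in memory as text.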
|
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1148
Merit: 236
Shooters Shoot...
|
|
November 28, 2023, 02:04:24 PM Last edit: November 28, 2023, 02:34:37 PM by WanderingPhilospher |
|
I think you are writing the ones and zeros as bytes, when you should be writing them out as bits.
Here's what you should do to make your program faster. set a counter like i to 0, and then each time you perform a subtraction, do byte_value |= 0 [or 1] << i; i = (i + 1) % 8. Then only do a write after every 8 iterations. Although, you can make the writing process even faster by waiting until you fill thousands of bytes like this, and then just write them all at once in one batch.
This will depend on the amount of RAM you have, of course. However, that's what I did: I wrote 2^26 keys at once, which ate up, at its highest point, 14GB of RAM. With limited RAM you could do the counter as you suggested.

I have found a key in the 44 bit range in about 3-4 minutes, 5 or 6 times. It's an interesting script, in the way it can store massive amounts of "keys" in such a little file. For 2^26 keys, it only uses a file size of 8,192 kb. That is impressive. The searching method is not great in terms of speed.

Also, you do not/should not use a start range of 1 (unless you are subtracting from the original key and shrinking the key). When I generated 2^26 keys with a num = 64 option, I set the start range to 8796093022208-(2^26-64) = 8796025913408. I do not know where the key is, but I know its max (17592186044415) and its minimum (8796093022208), and since my subtraction was set at 1 (sequential, with no steps such as a subtraction of 7 or 10, etc.), I know the target priv/pubkey will only be from the target down to 2^26 (keys generated) - 64 (num option).

start (min) = 8796025913408
end (max) = 17592186044415

That's how I approached this script when running tests. You could also speed the script up using an upfront pub subtraction to cut the range in half. For example, if we know the key is in the 44 bit range, we can do an upfront subtraction of 0x80000000000, then use that newly generated pub in this script and search in a max 43 bit range.

Now to ponder how to add this script's storage size function to an existing GPU script...
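The range arithmetic above checks out; a tiny sanity script with the constants copied from this post (reading the start as the range bottom pulled back by the stored window, as I understand the description):

```python
RANGE_MIN = 8796093022208    # 2^43, bottom of the 44 bit range
RANGE_MAX = 17592186044415   # 2^44 - 1, top of the 44 bit range
DB_KEYS = 2**26              # keys generated into the database
MARGIN = 64                  # the num option used when generating

# pull the scan start back by the stored window so a random pk plus
# the found offset can still land on the stored sequence
start = RANGE_MIN - (DB_KEYS - MARGIN)

print(start)                                    # 8796025913408
print(RANGE_MIN == 2**43, RANGE_MAX == 2**44 - 1)
```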
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 28, 2023, 03:23:37 PM |
|
I think you are writing the ones and zeros as bytes, when you should be writing them out as bits.
Here's what you should do to make your program faster. set a counter like i to 0, and then each time you perform a subtraction, do byte_value |= 0 [or 1] << i; i = (i + 1) % 8. Then only do a write after every 8 iterations. Although, you can make the writing process even faster by waiting until you fill thousands of bytes like this, and then just write them all at once in one batch.
This script speeds up the creation of the database.

#@mcdouglasx
import secp256k1 as ice
from bitstring import BitArray

print("Making Binary Data-Base")

target_public_key = "030d282cf2ff536d2c42f105d0b8588821a915dc3f9a05bd98bb23af67a2e92a5b"
target = ice.pub2upub(target_public_key)

num = 16000000  # number of keys
sustract = 1  # amount to subtract each time
sustract_pub = ice.scalar_multiplication(sustract)
res = ice.point_loop_subtraction(num, target, sustract_pub)
binary = ''
for t in range(num):
    h = res[t*65:t*65+65].hex()
    hc = int(h[2:], 16)
    if str(hc).endswith(('0', '2', '4', '6', '8')):
        binary += "0"
    if str(hc).endswith(('1', '3', '5', '7', '9')):
        binary += "1"
my_str = bytes(BitArray(bin=binary))
binary_file = open('data-base.bin', 'wb')
binary_file.write(my_str)
binary_file.close()

But as @WanderingPhilospher says, this will depend on the amount of RAM you have.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
WanderingPhilospher
Full Member
Offline
Activity: 1148
Merit: 236
Shooters Shoot...
|
|
November 28, 2023, 09:15:01 PM |
|
I think you are writing the ones and zeros as bytes, when you should be writing them out as bits.
Here's what you should do to make your program faster. set a counter like i to 0, and then each time you perform a subtraction, do byte_value |= 0 [or 1] << i; i = (i + 1) % 8. Then only do a write after every 8 iterations. Although, you can make the writing process even faster by waiting until you fill thousands of bytes like this, and then just write them all at once in one batch.
This script speeds up the creation of the database. [script quoted above] But as @WanderingPhilospher says, this will depend on the amount of RAM you have.
How would you break it up, as NotA was saying ("|= 0 [or 1] << i; i = (i + 1) % 8"), for those with smaller RAM?
|
|
|
|
Kpot87
Jr. Member
Offline
Activity: 40
Merit: 1
|
|
November 28, 2023, 09:40:14 PM Last edit: November 28, 2023, 09:52:41 PM by Kpot87 |
|
I think you are writing the ones and zeros as bytes, when you should be writing them out as bits.
Here's what you should do to make your program faster. set a counter like i to 0, and then each time you perform a subtraction, do byte_value |= 0 [or 1] << i; i = (i + 1) % 8. Then only do a write after every 8 iterations. Although, you can make the writing process even faster by waiting until you fill thousands of bytes like this, and then just write them all at once in one batch.
This script speeds up the creation of the database. [script quoted above] But as @WanderingPhilospher says, this will depend on the amount of RAM you have.
Hi! I get this error with num = 128000000:

Traceback (most recent call last):
  File "D:\BTC\lightweight-database\lightweight-database\binary_Db.py", line 20, in <module>
    hc= int(h[2:], 16)
        ^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 16: ''
|
|
|
|
mcdouglasx (OP)
Member
Offline
Activity: 258
Merit: 67
New ideas will be criticized and then admired.
|
|
November 28, 2023, 10:12:18 PM Last edit: November 28, 2023, 11:22:52 PM by mcdouglasx |
|
Hi! get this error Traceback (most recent call last): File "D:\BTC\lightweight-database\lightweight-database\binary_Db.py", line 20, in <module> hc= int(h[2:], 16) ^^^^^^^^^^^^^^ ValueError: invalid literal for int() with base 16: ''
I just tested it and it works; apparently that mistake occurs when you try to bolt an additional line onto a previous script. Update.

How would you break it up as NotA was saying: "|= 0 [or 1] << i; i = (i + 1) % 8", for those with smaller RAM?

The script works like this. You have a database like this:

10101000101101010001011010100010101010001011010001011010101010100010110101000101101010001010101000101101010001011011010100010000

target is = 1101010001011010100010101010001

The target is in:

101010001011010100010 11010100010101010001011010001011010101010100010110101000101101010001010101000101101010001011011010100010000

If we separate into bytes:

10101000 1011010 100010 11 01010001 01010100 01011010 00101101 01010101 00010110 10100010 11010100 01010101 00010110 10100010 11011010 10001000

No coincidences are found at byte boundaries, which is why the bit-level search is needed.

Edit: I updated the post with a script addition that solves the memory limit.
|
I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
|
|
|
|