Bitcoin Forum
September 12, 2024, 10:26:25 PM *
News: Latest Bitcoin Core release: 27.1 [Torrent]
 
Pages: « 1 ... 290 [291]
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 205453 times)
COBRAS
Member
**
Offline Offline

Activity: 910
Merit: 22


View Profile
September 11, 2024, 06:08:42 PM
 #5801

Quote
Yes, all forward, same jump rules for all. But jump function is function of Y, so they don't jump forward with same size even if they have the same X. So the ones on the left are going towards the ones to the right, but the two don't go forward with equal jumps. This makes the covering more uniform.
Can you give an example of what you mean by the jump function being a function of Y? Thanks.

Code:
jump_index = current_element.y % jump_table_length

Let's say we have two points with the same X, they are opposite points.

P = (xp, +yp)
-P = (xp, -yp)

If they use xp as the jump function, they jump forward the same distance, hence a less random walk. Since yp = ± sqrt(xp**3 + 7), jumping by Y spreads out the randomness.

Another way to view this: in a [-N, N] interval we have N unique X values but 2N unique Y values. A larger pool means better pseudo-randomness in an interval that is half the length.

But even jumping by X alone has very good results, it was just that jumping by Y had even better ones.
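A toy numeric sketch of the quoted explanation (illustrative stand-in values, not real secp256k1 points): two opposite points share X, so selecting the jump by X keeps their walks correlated, while selecting by Y generally does not.

```python
# Toy sketch (illustrative numbers, not real secp256k1 points): a point P
# and its opposite -P share the same X but have mirrored Y (mod p).
p = 67                      # tiny prime standing in for the field modulus
jump_table_length = 4       # hypothetical number of jump distances

P    = (12, 19)             # (x, y)
negP = (12, p - 19)         # same X, mirrored Y

# Selecting the jump by X: both points always pick the same jump,
# so their walks stay correlated.
jx_P    = P[0]    % jump_table_length
jx_negP = negP[0] % jump_table_length
print(jx_P == jx_negP)      # True

# Selecting the jump by Y: the two points generally pick different jumps,
# which spreads the walks out.
jy_P    = P[1]    % jump_table_length
jy_negP = negP[1] % jump_table_length
print(jy_P, jy_negP)        # 3 0
```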

Why use N if the range of the private key is different?

kTimesG
Member
**
Offline Offline

Activity: 135
Merit: 25


View Profile
September 11, 2024, 11:18:15 PM
 #5802

Why use N if the range of the private key is different?
Why use whatever continuous range of size 2N, if you can shift the public key into a self-symmetric interval and solve that instead?
COBRAS
Member
**
Offline Offline

Activity: 910
Merit: 22


View Profile
September 11, 2024, 11:43:38 PM
Last edit: Today at 08:34:33 PM by Mr. Big
 #5803

Why use N if the range of the private key is different?
Why use whatever continuous range of size 2N, if you can shift the public key into a self-symmetric interval and solve that instead?


Sorry, I don't know what you are talking about. I saw that you use N in your calculations. In my experiments with a 60-bit private key, for example, you don't need the full N; 60 bits is enough to find a 60-bit key. If the software uses a big N it performs N operations; a smaller N may take fewer operations, but the private key does not change. Of course, then only N changes....



Why use N if the range of the private key is different?
Why use whatever continuous range of size 2N, if you can shift the public key into a self-symmetric interval and solve that instead?


What shift do you mean? Divide to shift right, multiply to shift left, shift the decimal point...
is that the shift you are talking about?


Shifting:

10 / 5 = 2.0

10 / 50 = 0.20

10 / 500 = 0.020


....

No gain there.



A little off-topic, but

x = 9901 + 100 - 1   # start value: 10000

i = 1
while x >= -9901:
    x = x - 1000     # step down by 1000
    print(x, i)
    i = i + 1


output:

9000 1
8000 2
7000 3
6000 4
5000 5
4000 6
3000 7
2000 8
1000 9
0 10
-1000 11
-2000 12
-3000 13
-4000 14
-5000 15
-6000 16
-7000 17
-8000 18
-9000 19
-10000 20

[Program finished]

If you run the script, one of the results will be 0; this could be the public key of private key 0 if you replace the number x with a public key.

Also, at the point 0 it becomes known that the key is divisible into 10 whole steps, without a fractional part.

It would take 2**65 subtractions to find a public key divisible by 2**65: the puzzle-130 public key that is divisible by 2**65 is identified when the result is the public key 0.

So, again, too many operations: 2**65 subtractions and 2**65 searches for the 0 public key....


Another example:


x = 9901

i = 1
while x >= -9901:
    x = x - 1000
    print(x, i)
    i = i + 1

output:

8901 1
7901 2
6901 3
5901 4
4901 5
3901 6
2901 7
1901 8
901 9
-99 10
-1099 11
-2099 12
-3099 13
-4099 14
-5099 15
-6099 16
-7099 17
-8099 18
-9099 19
-10099 20

[Program finished]

Let's subtract one of the results from x. Subtracting -5099:

x = 9901 - -5099   # = 15000

i = 1
while x >= -9901:
    x = x - 1000
    print(x, i)
    i = i + 1

output:

14000 1
13000 2
12000 3
11000 4
10000 5
9000 6
8000 7
7000 8
6000 9
5000 10
4000 11
3000 12
2000 13
1000 14
0 15
-1000 16
-2000 17
-3000 18
-4000 19
-5000 20
-6000 21
-7000 22
-8000 23
-9000 24
-10000 25

[Program finished]

Now we reach the point 0, and x divides evenly into 15 steps.

Can someone help and show how to find the exact x "going this way"?


kTimesG
Member
**
Offline Offline

Activity: 135
Merit: 25


View Profile
Today at 01:09:06 AM
 #5804

Sorry, I don't know what you are talking about. I saw that you use N in your calculations. In my experiments with a 60-bit private key, for example, you don't need the full N; 60 bits is enough to find a 60-bit key. If the software uses a big N it performs N operations; a smaller N may take fewer operations, but the private key does not change. Of course, then only N changes....
Why are you spamming 3 posts in a row, IDK. And it's clear you didn't understand what I meant, you're also somehow mixing scalar indices with point coordinates and so on... your code also makes no sense.

My post was about reducing ECDLP complexity in an interval of size N from 2*sqrt(N) group operations to ~ 1.05 * sqrt(N) group operations by taking advantage of the group's fast inversion. I suggest maybe reading the exact definitions for every word in that sentence... and what "shifting a problem" between equivalent domains means in math / logic in general.
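For reference, the "fast inversion" leveraged here is just point negation, which on secp256k1 costs a single field subtraction; a minimal sketch using the standard curve constants:

```python
# Minimal sketch of the "fast inversion" mentioned above: on secp256k1,
# negating a point costs one field subtraction, -P = (x, p - y), so every
# computed point yields a second candidate essentially for free.
p  = 2**256 - 2**32 - 977   # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

neg_Gy = p - Gy             # y-coordinate of -G: a single subtraction

# Both G and -G satisfy the curve equation y^2 = x^3 + 7 (mod p).
assert Gy**2 % p == (Gx**3 + 7) % p
assert neg_Gy**2 % p == (Gx**3 + 7) % p
```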
COBRAS
Member
**
Offline Offline

Activity: 910
Merit: 22


View Profile
Today at 01:28:28 AM
Last edit: Today at 01:40:40 AM by COBRAS
 #5805

Sorry, I don't know what you are talking about. I saw that you use N in your calculations. In my experiments with a 60-bit private key, for example, you don't need the full N; 60 bits is enough to find a 60-bit key. If the software uses a big N it performs N operations; a smaller N may take fewer operations, but the private key does not change. Of course, then only N changes....
Why are you spamming 3 posts in a row, IDK. And it's clear you didn't understand what I meant, you're also somehow mixing scalar indices with point coordinates and so on... your code also makes no sense.

My post was about reducing ECDLP complexity in an interval of size N from 2*sqrt(N) group operations to ~ 1.05 * sqrt(N) group operations by taking advantage of the group's fast inversion. I suggest maybe reading the exact definitions for every word in that sentence... and what "shifting a problem" between equivalent domains means in math / logic in general.

I made only one reply to you.

WanderingPhilospher
Full Member
***
Offline Offline

Activity: 1148
Merit: 236

Shooters Shoot...


View Profile
Today at 04:35:24 AM
 #5806

Why use N if the range of the private key is different?
Why use whatever continuous range of size 2N, if you can shift the public key into a self-symmetric interval and solve that instead?


Sorry, I don't know what you are talking about. I saw that you use N in your calculations. In my experiments with a 60-bit private key, for example, you don't need the full N; 60 bits is enough to find a 60-bit key. If the software uses a big N it performs N operations; a smaller N may take fewer operations, but the private key does not change. Of course, then only N changes....

He's not making a smaller N or a smaller range size; he is using curve symmetry. There is no "fooling" N.

If you are searching for a public key that is originally in a 60-bit range, here is a manual but easy-to-explain way: take secp256k1's N and subtract 2^59 from it. This is your lower bound / start of range. Your upper bound / end of range will be 2^60 - 1. Same size range, but now it uses the curve's symmetry properties.
Now you can start 1 tame and 1 wild kangaroo on the negative side (lower-bound side) and 1 tame and 1 wild on the positive side (upper-bound side).
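A minimal sketch of those bounds (the group order n is the standard secp256k1 constant; the offset k is made up):

```python
# Sketch of the bounds described above, using the standard secp256k1
# group order n; the offset k is a made-up example.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

lower = n - 2**59           # lower bound / start of the negative side
upper = 2**60 - 1           # upper bound / end of the positive side

# Keys k and n - k are mirror images around 0 mod n, so for any offset
# k up to 2^59 both members of the pair land inside the window, and
# kangaroos can be started from both sides.
k = 0xABCDEF                # hypothetical offset, well below 2^59
assert k <= upper           # k sits on the positive side
assert lower <= n - k < n   # n - k sits on the negative side
```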
Akito S. M. Hosana
Jr. Member
*
Offline Offline

Activity: 50
Merit: 1


View Profile
Today at 08:53:30 AM
 #5807

Can we Beat the Square Root Bound for ECDLP over Fp2 via Representations?

https://eprint.iacr.org/2019/800.pdf
nomachine
Member
**
Offline Offline

Activity: 405
Merit: 23


View Profile
Today at 09:12:07 AM
 #5808

Too much math going on here.   Grin

mcdouglasx
Member
**
Offline Offline

Activity: 258
Merit: 67

New ideas will be criticized and then admired.


View Profile WWW
Today at 04:21:30 PM
 #5809

My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalk.org/index.php?topic=5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

I'm not dead, long story... BTC bc1qxs47ttydl8tmdv8vtygp7dy76lvayz3r6rdahu
citb0in
Hero Member
*****
Offline Offline

Activity: 784
Merit: 725


Bitcoin g33k


View Profile
Today at 05:48:57 PM
 #5810

My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalk.org/index.php?topic=5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

wish you best of luck and happy hunting Smiley

kTimesG
Member
**
Offline Offline

Activity: 135
Merit: 25


View Profile
Today at 05:49:17 PM
 #5811

My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalk.org/index.php?topic=5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

Any day, I would prefer a database that is as fast as possible at returning or writing results over a CPU-hungry tiny one. And all databases are binary... It sounds like you're just compressing some bitmap of ranges; what is the actual worst-case update/query/insert efficiency of your database? This is a problem that has already been solved in much faster and smarter ways, e.g. GZIP, LZMA, deflate, etc.
mcdouglasx
Member
**
Offline Offline

Activity: 258
Merit: 67

New ideas will be criticized and then admired.


View Profile WWW
Today at 06:16:02 PM
 #5812

My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalk.org/index.php?topic=5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

Any day, I would prefer a database that is as fast as possible at returning or writing results over a CPU-hungry tiny one. And all databases are binary... It sounds like you're just compressing some bitmap of ranges; what is the actual worst-case update/query/insert efficiency of your database? This is a problem that has already been solved in much faster and smarter ways, e.g. GZIP, LZMA, deflate, etc.
The speed depends on how fast your implementation of secp256k1 is; it does not require a demanding amount of resources for the write/read phase of the DB, which was the big problem with DBs in the puzzles.

kTimesG
Member
**
Offline Offline

Activity: 135
Merit: 25


View Profile
Today at 06:51:24 PM
 #5813

The speed depends on how fast your implementation of secp256k1 is; it does not require a demanding amount of resources for the write/read phase of the DB, which was the big problem with DBs in the puzzles.
I can do 200 million secp256k1 pubkey additions/s, i.e. generate 200 million new public keys per second, using 20 CPU cores, each running at 10 Mop/s. Can your DB handle this multi-threaded mode? Even a mature DBMS will have serious issues storing the first batch of results. It would also need a disk that can handle writing at least 32 bytes * 200 M = 6.4 GB/s ~= 50 Gbps, which doesn't even exist yet AFAIK. So, is it really about how fast secp256k1 can get, or about the limits of your DB?
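The bandwidth figure above checks out; a quick sketch of the arithmetic:

```python
# Quick check of the bandwidth arithmetic above: 200 M keys/s at
# 32 bytes per key.
keys_per_sec = 200_000_000
bytes_per_key = 32

gb_per_sec = keys_per_sec * bytes_per_key / 1e9   # decimal GB/s
gbps = gb_per_sec * 8                             # gigabits per second
print(gb_per_sec, gbps)                           # 6.4 51.2
```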
nomachine
Member
**
Offline Offline

Activity: 405
Merit: 23


View Profile
Today at 07:30:08 PM
Last edit: Today at 07:53:00 PM by nomachine
 #5814

It would also need a disk that can handle writing at least 32 bytes * 200 M = 6.4 GB/s ~= 50 Gbps, which doesn't even exist yet AFAIK.


They say the Crucial T705 NVMe speed is 12/14 GB/s

But you also need a (high-end) motherboard that supports this transfer rate.
mcdouglasx
Member
**
Offline Offline

Activity: 258
Merit: 67

New ideas will be criticized and then admired.


View Profile WWW
Today at 09:27:49 PM
 #5815

The speed depends on how fast your implementation of secp256k1 is; it does not require a demanding amount of resources for the write/read phase of the DB, which was the big problem with DBs in the puzzles.
I can do 200 million secp256k1 pubkey additions/s, i.e. generate 200 million new public keys per second, using 20 CPU cores, each running at 10 Mop/s. Can your DB handle this multi-threaded mode? Even a mature DBMS will have serious issues storing the first batch of results. It would also need a disk that can handle writing at least 32 bytes * 200 M = 6.4 GB/s ~= 50 Gbps, which doesn't even exist yet AFAIK. So, is it really about how fast secp256k1 can get, or about the limits of your DB?

With the way the data is stored, it does not require a complex decompression system (it is not a compression algorithm); it is a system designed exclusively for searching for public keys, so the search script handles it as if you had put 10 public keys in a text file, to give a basic example. As for how many public keys it could store before being affected by the size of the DB: that will depend on read performance on large files. In a 1 GB file you would have approximately 2,621,400,000,000 pubkeys; if you need more GB without losing read speed, we could adapt a Bloom filter... but this is only an initial version of an idea that can be improved over time.

COBRAS
Member
**
Offline Offline

Activity: 910
Merit: 22


View Profile
Today at 10:19:43 PM
 #5816

My new public key search system is almost ready. I had to reinvent my binary database system because, although the database was lightweight (https://bitcointalk.org/index.php?topic=5475626), I had efficiency issues with binary search. That is now a thing of the past. I have designed a system that stores 100 million public keys in an 80 KB file (yes, you read that right: 80 KB, and in the future it will be smaller) with maximum efficiency. We would only be limited by the current speed of secp256k1 when generating the 100 million or more public keys while creating the database. I am finishing the search script; after months of being stuck due to personal issues, I am finally back on track.

To get a key down from 2^27 you need

2^27 = 134217728 pubkeys. This downgrades a 2^27 pubkey to the pubkey with private key 1.

The same situation applies when downgrading 2^130 to 2^103, etc.


If you subtract 1 bit at a time, starting from 2^30 you need 2^27 initial pubkeys to end up with one of them at private key 1, because every step has a 50/50 probability of going into the negative area or staying in the positive area.
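A toy sketch of that counting argument, with plain integers standing in for points (illustrative only, not real EC math):

```python
# Toy sketch of the counting argument above, with plain integers standing
# in for points (illustrative only, not real EC math). To "downgrade" an
# unknown n-bit key to key 1 by subtraction, you must try every offset:
# 2^n candidates, one of which lands on 1.
n_bits = 4                      # tiny stand-in for 27
k = 11                          # hidden key somewhere in [1, 2^4)
candidates = [k - s for s in range(2**n_bits)]
print(len(candidates))          # 16, i.e. 2^4 offsets tried
print(1 in candidates)          # True: the offset s = k - 1 hits key 1
assert 2**27 == 134217728       # the count quoted in the post
```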
