Bitcoin Forum
Show Posts
1  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: May 25, 2024, 03:15:22 PM
keyhunt is not good

You should learn to configure it. The default thread subrange is 32 bits; for small ranges with multiple threads you should lower the N value with "-n number". If the range is less than 1 million keys, you should use -n 0x10000,
where 0x10000 is a 16-bit subrange per thread.
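
For example, something like the following (a hypothetical invocation; the mode, target file and range flags here are assumptions, so adjust them to your build):

Code:
./keyhunt -m address -f target.txt -r 1:fffff -t 4 -n 0x10000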

Look



Found in less than a second.

Are you the keyhunt creator?
Nice to meet you.
It was my mistake, I apologize, dear friend.
OK, thank you.
I meant https://github.com/WanderingPhilosopher/KeyHuntCudaClient
I used the first version of KeyHunt-Cuda.

I have zero clue how or what you ran, but with a single CPU core, using KeyHunt-Cuda, it is found pretty much as soon as the program starts:

Code:
KeyHunt-Cuda v1.08

COMP MODE    : COMPRESSED
COIN TYPE    : BITCOIN
SEARCH MODE  : Single Address
DEVICE       : CPU
CPU THREAD   : 1
SSE          : YES
RKEY         : 0 Mkeys
MAX FOUND    : 65536
BTC ADDRESS  : 1E5V4LbVrTbFrfj7VN876DamzkaNiGAvFo
OUTPUT FILE  : Found.txt

Start Time   : Sat May 25 10:11:49 2024
Global start : 20000000000000000 (66 bit)
Global end   : 21000000000000000 (66 bit)
Global range : 1000000000000000 (61 bit)


[00:00:02] [CPU+GPU: 5.28 Mk/s] [GPU: 0.00 Mk/s] [C: 0.000000 %] [R: 0] [T: 10,706,944 (24 bit)] [F: 1]

BYE

So maybe you entered the wrong flags when trying to run the program.
2  Bitcoin / Bitcoin Discussion / Re: == Bitcoin challenge transaction: ~1000 BTC total bounty to solvers! ==UPDATED== on: May 05, 2024, 03:16:44 AM
I'm currently checking apps that I haven't checked before... and that's how I found PubHunt. I entered the 29 closest unsolved addresses without an exposed pubkey as input... This way I achieve a scan rate of 6400 GKeys/s. What are the odds that a pubkey search for 29 addresses with this method, this program, and this speed will meet expectations better than a traditional key search? What are the real chances of success, and how effective is this method?
Hi Zielar
Wow, that speed is impressive! If you could choose the start and end of the search range, you could find pubkey #66 within 2 to 4 months. On the other hand, the search is carried out randomly: it hashes random candidates against the public keys of #64, #66, #67, #68, #69, #71 and #72, so it can be faster or much slower depending on luck. Too bad, because this program could be optimized considerably, e.g. choosing the #66 range and a random or sequential mode; with your speed you could come across #66 in a month or two, depending on luck.

Edit
Looking more closely at how this utility operates, and at your speed, the probabilities are roughly these:
in 10 days across all the ranges, inserting the 6 pubkeys (I calculated for the first 6 puzzles, not 29), you have a 1 in 148 chance of finding one of the keys
in 20 days, 1/74 (1.35%)
in 40 days, 1/37 (2.7%)
in 80 days, 1/18 (5.5%)
in 160 days, 1/9 (11%)
in 320 days, 1/4 (25%)
It remains arbitrary, because luck can enormously speed up the process Grin

Is there any way to specify the bit range in this program? I am a newbie, so any help would be appreciated.
Thanks

I was trying to figure out a way to see if PubHunt even works. It is not easy to test on lower-complexity keys. It would also be nice to see the current key being worked on. So far I have:

https://github.com/Chail35/PubHuntRange

PubHunt.cpp
Line 330

         printf("\r[%s] [GPU: %.2f MK/s] [T: %s (%d bit)] [F: %d] %02hhx %016llx %llu  ",

%016llx should be the starting key. However, it looks like this, and only updates every few trillion keys:

[00:00:06] [GPU: 4316.00 MK/s] [T: 25,904,021,504 (35 bit)] [F: 0] a4 00007fffb46dd420 17179869184

Many thanks to Chail35 for recently updating this fork!


I am confused about the fork you linked above. Confused as in: if it is working in a range, then there is no difference between this fork and keyhunt-cuda.

The original PubHunt did not work in a range; it generated random public keys and tried to link them to an H160/address. Which, if you do the math, means there could be close to 2^96 possibilities: a 256-bit key space collapsing onto 160-bit hashes leaves about 2^256 / 2^160 = 2^96 private keys per address. But shrinking the range down to 66 bits or 67 bits, etc., makes that number more than likely just 1. One match compared to 2^96 possible matches is a big difference.
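
A quick way to sanity-check that counting argument (a Python sketch):

Code:
# ~2^96 private keys map to any one hash160 across the whole 256-bit key space
keys_per_h160 = 2**256 // 2**160
print(keys_per_h160 == 2**96)   # True

# Expected number of *extra* matching keys for one address inside a 66-bit window:
print(2**65 / 2**160)           # ~2.5e-29, i.e. the puzzle key is almost surely unique there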
3  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: April 24, 2024, 03:29:44 AM
I removed the space between the private keys.
See how the first 4 digits of the private key are known?
Now guess 66!



What is this sorcery?!
4  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: April 22, 2024, 03:57:07 PM
Has anyone seen, or had experience with, BitCrack or a fork that can load starting points from a file?

Not sure exactly what you want, but the easiest way is to create a Python script that feeds the program starting ranges and keeps track of said starting ranges. Not sure if starting ranges = starting points, but that is what I would do.
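
A rough sketch of such a feeder script (Python; the binary name, flags, target address and chunk size below are assumptions, so adjust them to whatever fork you actually run):

Code:
import subprocess

START = 0x20000000000000000      # example: puzzle 66 range start
END   = 0x3ffffffffffffffff
STEP  = 0x10000000000            # keys per chunk (pick to taste)

TARGET   = "1TargetAddressHere"  # placeholder address
DONE_LOG = "done_ranges.txt"     # record of covered starting points

cur = START
while cur <= END:
    stop = min(cur + STEP - 1, END)
    # cuBitCrack and its flags vary by fork; this mirrors the common --keyspace usage
    subprocess.run(["cuBitCrack", "--keyspace", f"{cur:x}:{stop:x}",
                    "-o", "found.txt", TARGET], check=True)
    with open(DONE_LOG, "a") as f:
        f.write(f"{cur:x}:{stop:x}\n")   # never redo a finished chunk
    cur = stop + 1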
5  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 11, 2024, 04:13:05 AM
Even if let's say #130 is barely possible by stretching limit and a lot of hashpower, but certainly we should start thinking of some ideas of improving further if we really have to attempt #135 plus.
I agree with you. I am currently searching for #130, but how can we improve any of the existing tools? Are we talking about creating mathematical shortcuts such as division/subtraction, making current programs faster, or both, lol?!

I am definitely interested in all possibilities.

Also, I am a fan of your python coding, 10000%; really great job(s) on your github repos!
6  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: April 08, 2024, 01:32:14 PM
I’m seeing so many variations of software throughout this thread and the other one. If I wanted to randomly let my pc waste electricity on weekends doing a random search for #66, what would be my best bet?

Use the program as you wish, but from puzzle 79, 81, 82, 83 and above.

Or, ask someone with knowledge about timings which is the lowest feasible puzzle that could be solved while still leaving enough time to withdraw without a bot snatching it up. I don't know the answer; it could be even puzzle #70.

If someone can answer that, that would be great.


Puzzle 81 is quite safe. I solved Puzzle 80 in about 26 hours (with extremely large memory in BSGS). The bot does not work on that number.
Nothing below 100 is really safe.

Especially with all this time in between the solving of puzzles.

Kangaroo would still be the program to use, IMO; that's what I would use anyway. Not BSGS, although one could if they had everything prebuilt and ready to go.
7  Bitcoin / Project Development / Re: Large Bitcoin Collider Thread 2.0 on: April 08, 2024, 02:05:27 AM
Hi guys,

Nice work!

I also developed my own tool, "Colisionador" (Spanish for "collider"), as I do not trust that other tools aren't sending finds to the cloud.

I programmed it in C, trying to make it super efficient and fast.
I keep the source code closed at the moment, but just ask if you would like to collaborate.
You can run the released binaries if you want: https://github.com/japeral/colisionador_releases

I like to run it in Debian on WSL under Windows 10. There are also binaries for ARMv7 and Windows x86.

It collides against a sorted list of RIPEMD-160 hashes from public addresses with balance.
On 2024-03-22 I scraped a fully synced BTC node database with 53,994,718 addresses (53.9 million).

The list also includes the 256 puzzle pieces, both those whose balance is now 0 and those that still have a balance.

You can download the full sorted list of r160's updated up to 2024-03-22 here:
https://drive.usercontent.google.com/download?id=1mAX4kdXaYSBYYGtgQf30Sw_Zkhl2GC3r&export=download&confirm=yes

The tool loads the list into RAM and, with a binary search on the sorted list, checks each key's RIPEMD-160 against the list in RAM. You can edit the list to target different public addresses with the "-list" parameter.
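
The membership test described above boils down to something like this (a Python sketch; the real tool is closed-source C, and the file name is an assumption):

Code:
import bisect

RECORD = 20  # bytes per ripemd160 digest in the sorted binary list

with open("r160_sorted.bin", "rb") as f:          # hypothetical file name
    data = f.read()                               # load the whole list into RAM
digests = [data[i:i + RECORD] for i in range(0, len(data), RECORD)]

def hit(h160: bytes) -> bool:
    """Binary search the sorted list for a candidate hash."""
    i = bisect.bisect_left(digests, h160)
    return i < len(digests) and digests[i] == h160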

Speed improvements are marginal if the list size is reduced, so I always load the entire list.

The search speed is 15-25K keys/s per CPU thread, depending on the CPU clock.

On an AMD Ryzen 7 7730U with 16 threads, the tool searches at 15K keys/s per thread, making a total of ~215K keys/s.
On a Raspberry Pi 4, with 4 threads, it makes a total of ~31K keys/s.

It currently supports CPU only.
It spreads the load across all the threads available in the system. I use OpenMP for the parallelization.

It computes the: compressed_02, compressed_03, uncompressed_04, compressed_06 and compressed_07 Id's.

I am targeting the search space of puzzle 66 (all zeros from the left, bit 66 set to 1, and random bits below it). For that, just run the tool with the parameter "-puzzle 66". The lower bits are randomized every 16,777,215 keys (000000 - FFFFFF), about 15 minutes of work. Each thread is independent and jumps to a different location inside the puzzle search space.

You can run as many instances of the tool, on as many machines, as you want.

Because of the random nature of the jumps, coverage of the entire search space is even.

You could also target all the addresses with balance, with the parameter "-puzzle 255", against the odds of ever finding anything. You could use it as a lottery cracker.

If a find happens, the tool saves the result in a found.txt file. It also converts the private key into WIF format so it can easily be imported into a wallet with the importprivkey command.

I have a question:

How the hell are people finding puzzle pieces 70, 75, 80, 85, etc. before puzzle 66, if the search space of those puzzles is much bigger than the puzzle 66 search space?

It is also interesting that puzzle pieces 161-256 are also spent.

Is there any other clue to delimit the search space of each puzzle piece?

As far as I know:
For puzzle 66, the search space is:

Code:
from:
0000000000000000000000000000000000000000000000020000000000000000
to:
000000000000000000000000000000000000000000000003FFFFFFFFFFFFFFFF

That is 2^65 = 36,893,488,147,419,103,232 keys to check!

Thanks!
I'm pretty sure this program is dead.

Every 5th key has its public key exposed, so one can find those keys within minutes using Kangaroo or BSGS.

You may want to read up more on the puzzles to answer your questions, and get a better understanding of what people are using these days to search for keys such as #66. A 4090 gets at least 5,000 MKey/s with GPU programs.
8  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: April 06, 2024, 03:28:03 AM
Total number of addresses that start with the same 11 characters = 2^66 / 58^(11 * log2(58))

First, calculate 58^(11 * log2(58)):

58^(11 * log2(58)) ≈ 58^42.614 ≈ 1.2107 × 10^74 (approximately)

Then, divide 2^66 by this result:

2^66 / 1.2107 × 10^74 ≈ 460708775.52

So, the approximate number of addresses that start with the same 11 characters from a key length of 2^66 bits is about 460,708,775.
This made me laugh out loud.

Where did 10^74 come from?

You realize 10^74 is larger than 2^66, correct? Not by a little but a lot.

Thanks for the laugh though! I appreciate it.
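
For scale, a quick check in Python:

Code:
>>> 2**66
73786976294838206464
>>> 1.2107e74 > 2**66   # the divisor is ~10^54 times the whole space being divided
True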
9  Bitcoin / Bitcoin Discussion / Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it on: April 05, 2024, 01:54:43 AM
Code:
●  12-char prefix
                                       20d45a6a7625355df5e5fc41b358e6f8581991e2
13zb1hQbWVsc1111111111111111111111  => 20d45a6a76253570014f1a30288bd88dc3ff6ffb
13zb1hQbWVsczzzzzzzzzzzzzzzzzzzzzz  => 20d45a6a76253571d7012502fabee39e6bbaf2b4
                                                      1d5b20ad2d2330b10a7bb82b9

FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF / 1D5B20AD2D2330B10A7BB82B9       =   0x8B8758F7C8AD0286
 (every  0x8B8758F7C8AD0286  will find 1 address 12-char prefix)

Did you realize that 0x8B8758F7C8AD0286   is near 1/3 of the 66 bit space?

Code:
>>> 2**65 // 0x8B8758F7C8AD0286
3
>>> 2**65 / 0x8B8758F7C8AD0286
3.6694959191703664

It's just interesting.

It seems to me that these calculations are very abstract; we are missing several steps for RIPEMD-160. If this were really the case, then any address could be taken without problems. I know you, Alberto, we corresponded several times on Telegram. I am grateful to you in many ways; most likely it is thanks to you that I started this activity. I have no goal of emptying other people's wallets, but if one is found ownerless, or from a puzzle, I will definitely send you a significant reward, really. But over time I realized that this is more of a study, and most likely this will never happen. Once again I apologize, I'm using a translator.

If you are talking about addresses, or better yet address prefixes (both are a reflection of their H160), you don't really need to know all of the steps, really any of them: the H160, the SHA256, etc. You just have to know what an address is composed of, in this case base58. From there you can get a rough idea of how many specific prefixes will appear in a certain-sized range.

First, the leading 1 isn't part of the prefix; it is a gimme, because all addresses of this type start with 1.

Someone correct me if I am wrong, but couldn't we just take range size / 58^(number of leading prefix characters), minus the starting "1"?
If everything is uniformly spread out over the range/curve (as the experts like to say; it's not proven, but it is a good "guesstimator" of sorts), then for an 11-char prefix (1 + 11 characters = 12 total) in a 2^65-key range there should be around 1.4 of them. 2^65 / 58^11
2^65 / 58^11 = 1.4
2^65 / 58^10 = 85
2^65 / 58^9 = 4,967
2^65 / 58^8 = 288,088

And I guess you could apply similar logic to H160s as well, but using 16^(number of leading hex characters) instead of 58^(number of prefix characters, minus the starting "1"); see the quick check below.

Anyone?
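
A quick check of those numbers (a Python sketch):

Code:
for k in (11, 10, 9, 8):
    print(k, 2**65 / 58**k)   # ~1.48, ~85.6, ~4967, ~288089

# For raw H160 (hex) prefixes, swap base 58 for base 16:
for k in (11, 10, 9, 8):
    print(k, 2**65 / 16**k)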
10  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 03, 2024, 07:17:13 PM
The DPs are first loaded into memory. You control how often the DPs are written to the workfile: once every minute, hour, 2 hours, 12 hours, however often. What is sensible depends on how much RAM you have and/or the DP size.
You still didn't get my point. Let's revisit your example:
Keyspace - 129 bits
DP = 32

Let's pretend we do like 500 Mkeys/second. So we find a DP once every 8 seconds or so (one in 4 billion jumps).
Let's also go small and allocate 512 MB of RAM for all the bit gymnastics.

Will you wait 4 years and 3 months to write the DPs in memory to a file? Because that is how long it will take before your allocated RAM fills up.

No, you'll want to dump them sooner than that. This gets messy really quickly, no matter how you look at it. Oh, you want to use 8 GB of RAM? Maybe 16? Oh shit, you may not live long enough to even start that first dump. On the other hand, you also have almost zero chance of a collision inside such a tiny amount of memory (I'm still talking about 16 GB, if not more), so what do we do? Well, not hard to guess: dump data more often, I guess. And merge and merge and merge... maybe now you get my point.
You really do not understand what you are saying, really, at all.
Are you using a CPU and DDR1 RAM in your current PC?

Please, just use the program in a medium range so you can actually know how it treats DPs and files.

It is nothing for the server (or a single CPU) to write to file, even with a very low DP. But let's keep with the ongoing DP 32 example.
Let's keep this simple and say each hashtable entry is a full 256 bits: 128 for the point and 128 for the distance. Now let's calculate 10,000,000 entries:
32 bytes/entry × 10,000,000 entries = 320,000,000 bytes
This is equivalent to 320,000 kilobytes (KB), 320 megabytes (MB), or 0.32 gigabytes (GB).

However, I do not want to split the hashtable at 0.32 GB; I want a full GB before storing. So we will say I only want to save to file every 30,000,000 DPs found.

With your example of 500 MKey/s, which is really slow for modern GPUs, a DP is found and stored in RAM every 8 seconds. There are 86,400 seconds in a day, so every day 10,800 DPs would be found and stored in RAM. Now we wait until we have 30,000,000 DPs to write from RAM to the workfile; how long will that take? 2,777 days? With only 1 card it would take decades and decades to solve anyway, so we run 256 GPUs getting 500 MKey/s each. Now every day we find 2,764,800 DPs, which means we write to file every 11 days.

However, like I said, 500 MKey/s is slow, so let's say our GPUs get 1,500 MKey/s, a DP found every 3 seconds. Now each GPU finds 28,800 DPs a day, and 256 x 28,800 = 7,372,800 DPs a day, so we write to file every 4 days. You can adjust to whatever time frame or GB you want with the program; maybe you want to save every 12 hours, or every hour, it doesn't matter.

When you merge the smaller files, the result is obviously a larger file, so when you go to merge the next batch of smaller files, you merge them into that larger file. You do not have to keep re-merging every file on every merge.
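
The cadence arithmetic above is easy to reproduce (a Python sketch; note the text rounds 8.6 s to 8 s and 2.9 s to 3 s):

Code:
ENTRY = 32                        # bytes per entry: 128-bit point + 128-bit distance
print(ENTRY * 10_000_000)         # 320,000,000 bytes = 0.32 GB for 10M DPs

DP = 32
for rate in (500e6, 1500e6):      # keys/s per GPU
    secs_per_dp = 2**DP / rate    # one DP per 2^32 keys on average
    per_day = 86_400 / secs_per_dp
    print(f"{rate/1e6:.0f} MK/s: one DP per {secs_per_dp:.1f}s, {per_day:,.0f} DPs/day/GPU")

gpus, batch = 256, 30_000_000     # write to file every 30M DPs
per_day_all = gpus * 86_400 / (2**DP / 1500e6)
print(f"days between file writes: {batch / per_day_all:.1f}")   # ~4 days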

You act like dumping the data more often is a problem; maybe it is if you have the first PC known to man trying to act as the server. Again, dump as much as your RAM holds into one file, or dump every hour; it does not matter, and it does not get messy. It can be as clean as you want it to be, or as "messy" as you want it to be.

You can merge once a day, or as often as you like. It's not hard, you just start the program with the merge commands and that's it.

115 was solved with this program/algo in 13 days, using 256-384 GPUs each getting 1,500-1,600 MKey/s, with a DP of 25, which created a master workfile in excess of 300 GB.

So no, I still do not get your point.
11  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 02, 2024, 01:13:37 PM
Quote
1. Why is the file loaded into memory? You almost state you can just read/write the file to do DP storage/lookup operations, so why is it read and operated on in RAM? Smiley
2. Would you say the solution (collision) was found during a merge of two different X-GB files, or after a GPU loop? If the latter, that would mean the hashmap file was scanned and written to for each DP, which by all means sounds unfeasible, at least for the simple structure it uses.
The DPs are first loaded into memory. You control how often the DPs are written to the workfile: once every minute, hour, 2 hours, 12 hours, however often. What is sensible depends on how much RAM you have and/or the DP size. With a higher DP size, fewer DPs will be found and entered into the hash table, so you can write as often or as seldom as you wish. With a lower DP, many will be found, so you would need to write to file based on RAM constraints. If you are trying to solve a smaller bit range, or use a very large DP, then you do not have to create workfiles at all and the solution will be found entirely in RAM, i.e. all points and distances are kept in RAM until the solution is found.

If you are using a server, then all clients send their DPs to it and it stores/writes to file all DPs found by each client, again as often or as seldom as you wish. This is how Zielar did it, I believe: a server with many clients. And yes, he was merging the workfiles every so often on the server side (I am not sure how often) to check for a collision/solution.

The only I/O operations should be on the server side. But again, it isn't writing to file every time a DP is found; you control how often or seldom. And with the wsplit option, you can tell the program to create a new workfile every x minutes/hours/days, so you are only writing to disk once every x amount of time. Sometimes a solution is found within a single workfile, but most likely the solution will be found via the merging of these individual workfiles.

It's true that JLP's unmodded program has an upper bound, the whole 125-bit max. But there are other programs out there that basically mimic this program and have limits of 192-256 bits.

No, I do not get your point. I was trying to show you that one does not need exabytes of RAM or storage to solve 130. One can solve it with GBs or TBs (depending on DP), and there isn't a massive I/O operation constantly going on. Of course it will take many GPUs to solve within a faster time frame, fewer if one is patient, but the storage requirement is nowhere near 1 exabyte.


Quote
Considering those calculations and the comparison with 115#, what would be the neccessary requirements to at least "try" to solve 130# with let's say DP 27 in matter of storage and GPU power ?
At best case, going by the algorithm's averages, you would need to store 2^38.55 DPs to solve 130 using DP 27, which would roughly equate to storage requirements in the neighborhood of 300+ GB multiplied by 36.5, so almost 11 TB if my math is correct... a lot, lol. There are plenty of hard drives out there that could store all of that; the problem would be merging all of those workfiles. GPU power can be as little or as much as you like; it gives you a rough time frame to find a solution and plays into overall DPs (overhead, which does affect storage size), as JLP discusses in his GitHub README.
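
The comparison is easy to check (a Python sketch using the thread's own figures):

Code:
from math import log2

ops_130 = 2**65.55                 # ~expected group ops for a 129-bit interval
dps_130 = ops_130 / 2**27          # stored DPs at DP 27
print(log2(dps_130))               # ~38.55, as stated

dps_115 = 2**58.36 / 2**25         # #115: 2^58.36 jumps at DP 25
print(log2(dps_115))               # ~33.36

factor = dps_130 / dps_115
print(factor, 0.3 * factor)        # ~36.5x the ~300 GB workfile -> ~11 TB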
12  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 02, 2024, 03:32:52 AM
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
Cool, but can you also tell me how many jumps were actually stored in those 300 GB?

Was it completed before the expected number of kangaroos of each type had filled each set, before even reaching a probabilistic 50%? Without such details, the best I had were estimates, and the estimates derive from the number of operations.
So if you tell me the data storage was 100 kB after 2**58 operations, anything is possible, but that would just mean not a lot of DPs were found, not that the estimates were wrong.

Does the required storage per entry scale linearly when you increase the DP? Because logically, a higher DP in a higher keyspace requires far more jumps to find each DP, which means the average distance travelled by any kangaroo increases, which means your storage also increases because you have to store longer distances.


It was close to JLP's program estimates, I want to say within 2.5%.

Also, the jumps (each jump) aren't stored. All the kangaroos are jumping, but only those points that land on a DP (leading 0s with this program) are stored.

I think for 115 it took a total of 2^58.36 jumps, and the workfile(s) contained 2^33.36 DPs (DP 25); and it took right around 13 days to solve. 130 at DP 32 would contain a little more than 115 at DP 25; but even if you doubled it, you are looking at 600+ GB; triple it, 1 TB; quadruple it, 1.4 TB, etc. It is still nowhere close to 1 exabyte of storage required.

Distances do not really increase in size within the same range/group. The kangaroos are spread out randomly at program start and then, for say 115, they have an average jump size of somewhere around 2^57; so each jump is about 2^57, but inside a 2^114 range, so for the stored distances to really grow, each kangaroo would have to make more than 2^57 jumps (which would take a lifetime).

But yes, a larger range means more hex characters stored per distance (which takes up more storage space) compared to a smaller range, while the points (DPs) are all the same size/length regardless of range. So 130 compared to 115 works out to around 3-4 more hex characters per distance.
13  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 01, 2024, 02:17:14 PM
You talk about crashing SSDs, more evidence that you don't know this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one, lol. Yes, with this program you control what is saved to your hard drive, and how often.

Now you claim 130 with DP 32 is orders of magnitude more than 115 with DP 25, which further shows you don't know how Kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
I believe you are the one who should look into the source code before making any other statements about it. Maybe check the hash lookup part; you will be surprised...

Let's say you wait 1 year to fill up your DP cache (in RAM) and decide to write it to disk. Do you think you will have to write just the same amount of data you have in RAM? Before answering, maybe create a simple C program that maps a 2 TB file to memory and writes a single byte at offset 10 GB, for example. No seeks, no nothing. I don't think you get the actual difference between offline storage and accessing random addresses of volatile memory, which is O(1). This is why it's called "random access" indexing: any byte you want to read, the read completes in constant time.
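
That experiment, in Python rather than C (a sketch; it assumes a 64-bit OS and a filesystem with sparse-file support):

Code:
import mmap

SIZE = 2 * 1024**4                    # 2 TB logical file size
with open("bigfile.bin", "wb") as f:
    f.truncate(SIZE)                  # sparse: nothing actually written yet

with open("bigfile.bin", "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    mm[10 * 1024**3] = 0x42           # single-byte write at offset 10 GB
    mm.flush()                        # the OS still pages whole 4 KB pages in and out
    mm.close()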

You are right though about the storage size difference, it's just 2**(0.5) between #115 with DP 25 and #130 with DP 40.
I cannot tell you how many actual storage bytes were required, because I don't care how the program stored them, just their count.

Quote
This program also solved #115 in 13 days (114 bit key on the Secp256K1 field). It required 2**58.36 group operations using DP25 to complete.
2**58.38 operations = 57.37 bits of jumps for each kangaroo set = 2**(57.37 - 25) stored jumps each, so storage space was around 2*2**32.37 items for the complete merged "central table" / hashmap. Which is almost perfectly equal to my estimates.
I fully understand the code and how this program works. All aspects.

115 was solved with a little more than 300GB of, wait, checks notes, “files”.

That was your first rant, exabytes lol.

I hope no SSDs were lost solving 110 and 115 with this program. RIP SSDs and all of your exabytes that you stored. 😁
14  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 01, 2024, 01:42:42 PM
Just answer the question: how much storage or RAM do you think one would need to solve a 115-bit range key using a DP of 25? According to your previous calculations...
What exactly is wrong with my calculations, to be precise?... You were specifically asking about #130 with a specific DP. Now you want to know why #115 is a manageable effort? OK...

Keyspace: 114 bits.
T size: 52 bits
W size: 52 bits
DP = 25, so Tdp size = 27 bits
Wdp size = 27 bits

Required memory for hashmap: 2*2**27 entries = 268 million entries * sizeof(jump entry)
Required steps: 2*2**52 + numKang * 2**25
Probability(success) = 1 - (1 - 2**-52) ** (2**52) = 63%

I do not believe you understand how a hash map needs to work in order to check for a collision.
In simplest terms, you need some sort of sorted list. Optimally, a balanced binary search tree, which has O(log n) lookup and insert complexity. So for DP 25, filled with 2**28 entries, a single lookup requires 28 steps IN THE AVERAGE CASE, when your search tree is balanced.

The memory requirement difference between #115 with DP 25 and #130 with DP 32 is orders of magnitude higher!

It doesn't matter if you use a single CPU with a few TB of RAM or many CPUs with less RAM each. You will end up with the same problem of having to merge/query the DP "central table" / search tree / sorted list, call it whatever you want.

Good luck with your "files".
You are in this thread spouting nonsense. That's the issue. You have no clue how this program works.

You say solving 130 will require exabytes of RAM/storage, which is false. So I asked how much it would take for 115 with DP 25, how many exabytes or terabytes? You couldn't answer in terms of bytes.

You talk about crashing SSDs, more evidence that you don't know this program or how it works. If an SSD can't handle being written to 12 times a day, then what's the point of even owning one, lol. Yes, with this program you control what is saved to your hard drive, and how often.

Now you claim 130 with DP 32 is orders of magnitude more than 115 with DP 25, which further shows you don't know how Kangaroo or this program works. Magnitudes? lol.

Maybe look at the code for this program that you are commenting on, before commenting on it.
15  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: April 01, 2024, 10:32:15 AM
I don't understand so deeply how Kangaroo works, but @Etar has created a distributed kangaroo.
Here are the links:
- https://github.com/Etayson/Server-Client-apps-for-Etarkangaroo
- https://github.com/Etayson/Etarkangaroo

If someone has the resources needed to run a #130 puzzle operation, we can try to gather some GPU power.

Regarding what @WanderingPhilosopher is asking @kTimesG: we can try to run this tool for #115 and see how long it takes and how many resources we need/use.
If so, just PM me here.


No need to test #115, I already know the answer; I just wanted kTimesG to continue putting his foot in his mouth lol.

He really has no clue what he is saying, nor does he understand this program (the program this thread is about). I like to have a good laugh and was looking forward to his follow-up answer, but maybe he now realizes he is wrong and won't comment again lol.

As for Etar's program, it is good and works up to 192 bits. The only drawback is that it is written in PureBasic, and many people who use Linux struggle to compile it.
16  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: March 31, 2024, 05:56:55 PM
Can someone show at least 2 DPs? I'd like to know more about them and what makes them useful in finding a collision.

They can be in many forms, here is trailing, with 0s:
824EDCFE76F16D85BB77626BE00000000

Or could be leading, with 0s:
00000000824EDCFE76F16D85BB77626BE

Those are both at least DP 32 (8 zero hex characters = 32 zero bits).

With those DPs, you would also have a distance/private key.

So if you get a tame DP that matches a wild DP, then you have a collision and a possible solution for whatever public key you are looking for.

Tame:
824EDCFE76F16D85BB77626BE00000000  1000
Wild:
824EDCFE76F16D85BB77626BE00000000  0001

Then subtract the wild distance from the tame distance, and that is the private key of the public key being searched for. Priv key = 999.

This is just an example; those values do not match up on the real curve, I used them for an easy explanation. The same tame/wild bookkeeping can be demonstrated end to end on a toy group, as sketched below.
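
A toy demo of that bookkeeping in Python, over a small multiplicative group mod a prime instead of secp256k1 (the prime, generator, jump table and DP mask are all arbitrary toy choices):

Code:
p = 1000003                 # small prime for the toy group
g = 5                       # element of large order mod p (assumed)
B = 1 << 16                 # toy range: the secret lies in [0, 2^16)
secret = 47119              # pretend-unknown private key
target = pow(g, secret, p)  # the "public key" we are given

JUMPS = [1 << i for i in range(9)]   # jump sizes, mean ~57
DP_MASK = (1 << 6) - 1               # distinguished point: low 6 bits zero

tames, wilds = {}, {}                # DP point -> distance
tx, td = pow(g, B, p), B             # tame starts at the top of the range
wx, wd = target, 0                   # wild starts at the public key

for _ in range(1_000_000):           # safety cap for the demo
    j = JUMPS[tx % len(JUMPS)]       # jump size chosen by current point
    tx, td = tx * pow(g, j, p) % p, td + j
    if tx & DP_MASK == 0:            # tame landed on a DP: store it
        tames[tx] = td
        if tx in wilds:              # matches a stored wild DP -> collision
            print("key =", td - wilds[tx]); break
    j = JUMPS[wx % len(JUMPS)]
    wx, wd = wx * pow(g, j, p) % p, wd + j
    if wx & DP_MASK == 0:            # wild landed on a DP: store it
        wilds[wx] = wd
        if wx in tames:
            print("key =", tames[wx] - wd); break

Exactly as described above, only the DPs are kept, and a tame/wild match gives the key as tame distance minus wild distance (this toy run should print key = 47119).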
17  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: March 31, 2024, 04:02:38 AM
First thing, I would not be using RAM. All DP info stored in files.

You still think it would require exabytes? To store all of those DPs?
Glad to see some progress. Did you factor in the entire overhead of using non-O(1) lookup and item storage for the DPs? Or are you speaking in the abstract?

You have only 2 options:
a. Insist on a constant time factor to solve the puzzle, with all the assumptions factored in. One of those requirements is O(1) DP lookup/storage. You will need a few TB of random-access memory at the physical level, not in "files".
b. Actually understand how real-life limitations kick in.

Since the DP points are impossible to predict or compute in "ranges", actual overheads like disk page reads/writes kick in.
Reading from an SSD is thousands of times slower than reading from physical RAM.
Writing to files involves reading entire pages of data, combining them, and writing them back.

Since DP values are uniformly spread out across the entire range, say goodbye to "fast" SSD speeds when reading or writing even a single DP entry: the access is not sequential, and you have almost 0% chance of two DP points even being in a close range.

But sure, if you wait around 5 seconds to compute a DP and store it to a file, and use that as your baseline, then it's 2**65.5 + DP overhead total time. But you'll end up with a broken SSD whose write-failure count has been exceeded.

Just answer the question: how much storage or RAM do you think one would need to solve a 115-bit range key using a DP of 25? According to your previous calculations...
18  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: March 30, 2024, 03:36:38 PM
I had zero overflow during tests.

And for 130, I am using the average-case scenario and numbers. No exabytes needed. And it's obvious you don't understand the difference between a kangaroo and a stored DP.

You do the math yourself: take a DP, we will say DP 32, and tell me, in your expert opinion, roughly how much storage space is needed to solve 130.

I would refer you to the OP's GitHub to read about the time/memory tradeoff, but you've already stated you don't agree with much of what he has said or programmed.

Anyway, let me know the storage space required, average run case, for 130, using DP 32.
I think you are forgetting something about real-life constraints. Let's go by your example.
I will use log2 (bits) as a base to simplify calculations.

Puzzle #130 - keyspace = N = 129 bits.

We have two sets: T (tame) and W (wild), each having sqrt(N) elements (64.5 bits).

At this point probability of a collision (T and W intersection is not empty) is:
P(success) = 1 - P(failure) = 1 - (1 - 2**(-64.5)) ** (2 ** 64.5) = 63%

Total operations so far: 2 * 2**64.5 = 2 ** 65.5

Adding DP = 32 into the mix: we store on average one T point out of every 2**32, and one W point out of every 2**32.

So size of T = 2 ** (64.5 - 32) = 2 ** 32.5
Size of W = 2 ** (64.5 - 32) = 2 ** 32.5

Remember, we didn't change anything about the probability, so these numbers are still for a 63% success probability.

Now, since DP only reduces storage by a factor of 2**DP, then the number of operations until T and W collide increases by:
2 * 2**DP / 2 on average (operations between real collision and reaching the DP point, on average for T and W) = 2 ** DP

So total ops for a 63% chance of success = 2 ** 65.5 + 2 ** DP

Now, you may say: oh, so I should apologize because I stated we need much more storage. Well, let's go back to real-life:

- set of size T with DP 32 = 2**32.5 elements
- set of size W with DP 32 = 2**32.5 elements
- P(success) = 63%

Now, how many kangaroos do we use? The naive answer is, it doesn't matter, because we are counting OPERATIONS total.

But it does matter when having to think how to store the traveled distances.
Let's see what distance a single kangaroo would travel, on average.

Jump distances = [2**0, 2**1, 2**2, ... 2**128]
Avg(jump dist) = (2**129 - 1) / 129 which almost equals 2**(129 - 7) = 2 ** 122

The number of jumps performed by a kangaroo to fill the T or W set is NOT 2**32.5, but 2**64.5 (we still jump even when not landing on a DP).

So total travelled distance = 2**64.5 * avgJumpSize = 2**(122 + 64.5), a 186.5-bit number = 24 bytes.

So storing a jump requires:
- jump position / key, let's say 33 bytes (X + Y sign)
- distance traveled (24 bytes) + kang type

Just by doing some simple estimations this would require a lot of TB (terabytes) of RAM.
You will need to increase the number of kangaroos by a factor of 256 to get rid of one byte in the stored DP.
65536 kangaroos to get rid of two bytes. Etc...

So to conclude:

- you need 2**32.5 tames DP, each around 60+ bytes
- you need 2**32.5 wilds DP, each around 60+ bytes
- your chances after 2**65.5 operations are around 63%
- the more kangaroos you have, the more DP overhead increases: 2**32 * numKangaroos
- the kangaroo jumps and the lookup for stored jumps needs to be in the complexity range of O(1) - e.g. RAM, not some swap file

If you can prove to me that you can fit the T and W sets in real life without having to rely on memory swapped to a storage device, then yes, you were right.

So, it goes without saying that a real-life approach might not even get anywhere near storing all the DP points in our lifetime, simply due to resource constraints.

Why did I say we need exabytes of data? Well, sir, I will leave that as an exercise for the reader.
A lot to unpack here.

First thing, I would not be using RAM. All DP info stored in files.

You still think it would require exabytes? To store all of those DPs?

Before more deep diving, riddle me this: based on your calculations for the 130-bit range and DP 32 (which you say would require exabytes), how much would a 115-bit range at DP 25 need, according to your calculations?
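
For anyone following along, the raw arithmetic in the quoted estimate is easy to reproduce (a Python sketch of the quoted figures; the disputed part is the lookup overhead and kangaroo count, not this arithmetic):

Code:
from math import log2

N = 129                        # puzzle #130 keyspace bits
set_bits = N / 2               # 64.5: elements per tame/wild set
DP = 32
stored_bits = set_bits - DP    # 32.5: stored DPs per set
print(stored_bits)

entry = 60                     # bytes/DP claimed above (33B point + 24B distance + type)
total = 2 * 2**stored_bits * entry
print(total / 1e12, "TB")      # ~0.73 TB for the two sets at these entry sizes

avg_jump_bits = N - log2(N)    # ~122: mean of the jump table [2^0 .. 2^128]
print((avg_jump_bits + set_bits) / 8)   # ~23.3 bytes to store one distance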
19  Bitcoin / Bitcoin Discussion / Re: == Bitcoin challenge transaction: ~1000 BTC total bounty to solvers! ==UPDATED== on: March 28, 2024, 12:22:51 AM
The challenge creator already stated the reasons for the challenge.

Nothing else to say.

One should not spend beyond their means trying to find one of the private keys.

You can't do that and then say all of this is BS or that the creator is laughing. If anything, they would laugh in comfort, knowing BTC wallets are safe, for now.
20  Bitcoin / Development & Technical Discussion / Re: Pollard's kangaroo ECDLP solver on: March 27, 2024, 01:22:32 AM
I was able to solve a 128 bit key using an unmodded version, but I obviously knew where the key was and could place the kangaroos in optimal positions.

128-1 = 127, so really a 127-bit range, because the program subtracts the start range from the key, when the start range is greater than 0.
What good does it do if you will most likely overflow the 128 bits after just a few jumps, no matter what start distance you begin with?


So I advise everyone to do their own DD and take what kTimesG says with a grain of salt. I believe he is the one who was going to solve #66 with pencil and paper. He'll always spout this and that, and everyone but him is dumb, but he hasn't provided any insight into anything other than his own perceived genius.
I never stated I'm solving 66 with pen and paper, only that 66 is a hashing-rate contest that has nothing to do with the ECDLP at all.
I believe you were the one thinking we only need something like 2 * 2**33 kangaroos or whatever to solve #130 in something like 2**66 steps... when the reality is we need many exabytes of stored data to have a 50% chance of a collision in that many steps.
I had zero overflow during tests.

And for 130, I am using the average-case scenario and numbers. No exabytes needed. And it's obvious you don't understand the difference between a kangaroo and a stored DP.

You do the math yourself: take a DP, we will say DP 32, and tell me, in your expert opinion, roughly how much storage space is needed to solve 130.

I would refer you to the OP's GitHub to read about the time/memory tradeoff, but you've already stated you don't agree with much of what he has said or programmed.

Anyway, let me know the storage space required, average run case, for 130, using DP 32.