Bitcoin Forum
Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 377292 times)
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
January 31, 2025, 09:08:46 AM
Last edit: January 31, 2025, 09:21:25 AM by kTimesG
 #7261

The probabilities of finding close prefixes are not the same as finding distant ones.

Why not? Did you empirically test this (if you do that, you will learn that you are wrong). And please don't say it's because of compound probability. I'm still waiting for you to give the references where compound probability refers to events occurring "next to each other" and not "both, but anywhere in a bunch of events".

If you visualize your range / set as a circle, and arrange the keys (in any way you want), how the hell is it less likely to have two keys with the same hash prefix next to each other than spaced by whatever delta? The chances are the same once you count all of the combinations. You don't want to use a circle? Then you are in a fallacy again, because you forget about the ranges before and after the one you're analyzing / browsing.

Now, the same applies when you do the calculations with "3 hashes next to each other" and so on, but in this context we are talking about the deltas between two keys, so let's stick to this.

There is an average (not exact, but it exists), and the more keys you advance, the more likely you are to find another prefix. It is common sense.

You are suggesting that the chances to find another prefix increase WHILE you are iterating through the hashes. However, those chances are written in stone long before you ever begin to browse through the set, in whatever order you wish (sequential, reverse, random, scan jumping, flipping key bits, etc.)

I don't believe you can distinguish between a Set and a List, or else it would be obvious why it never matters whether you find the "successes" next to each other or not.

Advancing or jumping by some average just because you found a hash is simply a good way of making scanned-range management a living hell. This is why @WP simply uses the found prefixes (the longest ones, I assume) of each well-defined subrange (maybe 40 bits each) as proof of work that he scanned the respective range (I'm 100% certain there were subranges where he found more than the estimated average number of prefixes of whatever length, and ranges where he found fewer).

And this is also why I am confident he DID find a closer prefix than bibilgin, simply because somewhere, in some subrange, there was one prefix a little LONGER than some other that was "close by", against the "average chances" of them being so close. So I wouldn't take that bet.

Let me ask you a genuine question: if you change the generator of the curve (pick some other G), and you pick some 66-bit range (let's say, the same range as Puzzle 67), and I give you a target address in that range, do the chances of finding hashes close by increase, decrease, or stay the same?
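Whether the match probabilities really are position-independent can be checked with a toy model. This is only a sketch under assumptions of my own (SHA-256 of the key index standing in for the real address hash, a 12-bit "prefix"); the point is that the set of matching keys is fixed before any scan order is chosen.

```python
import hashlib
import random

def has_prefix(key: int, nbits: int = 12) -> bool:
    # Toy stand-in for the real address hash: SHA-256 the key index
    # and check whether the top `nbits` bits are all zero.
    h = hashlib.sha256(key.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:2], "big") >> (16 - nbits) == 0

N = 200_000
keys = list(range(N))

# The matches are fixed ("written in stone") before any scan begins.
matches = {k for k in keys if has_prefix(k)}

# Scan sequentially, then in a shuffled order: the hits are identical,
# because the scan order cannot change which keys match.
sequential_hits = [k for k in keys if k in matches]
random.shuffle(keys)
shuffled_hits = sorted(k for k in keys if k in matches)
assert sequential_hits == shuffled_hits

print(len(matches))  # roughly N / 2**12, about 49 expected
```

The scan order only changes *when* you encounter each hit, never *which* keys are hits.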

Off the grid, training pigeons to broadcast signed messages.
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
January 31, 2025, 11:44:37 AM
 #7262

Let me ask you a genuine question: if you change the generator of the curve (pick some other G), and you pick some 66-bit range (let's say, the same range as Puzzle 67), and I give you a target address in that range, do the chances of finding hashes close by increase, decrease, or stay the same?

In the system I apply, it stays the same.
Because, I repeat: think of it like a P=NP problem.
In fact, there is no single RANGE or PROBABILITY.

If you treat the different outputs of the mixing process as a single point at the center of a RANGE or PROBABILITY, you will make a mistake.
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
January 31, 2025, 01:23:57 PM
 #7263

snip~

You are right, there is no such thing as compound probability; I made that term up. It is always the same in all cases. That is why WP has found many very close prefixes, almost one next to the other, and bibilgin found 4 prefixes "1BY8GQbnueY" in a range of 1,000,000 keys (but he hides it), because it is all equally probable, right?
Those of you reading this who have a hard time finding a long prefix are probably doing it wrong, because finding them nearby is just as probable as far away, and that is the absolute truth. Happy?
Gerbie
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
January 31, 2025, 01:50:22 PM
 #7264

A very interesting discussion!

Perhaps someone with the hardware to test it can help.

The prefix "1BY8GQbnueY" or even "1BY8GQbnue" is too long though. We should start with a shorter one to test the principle and look for the distribution.
Depending on your hardware, "1BY8GQbn" or "1BY8GQbnu" might work.

Search for these prefixes. If you find several of them, take two very close together and scan in the area between them. If you find another one, narrow the range again to the smallest range in between. This way we can find an expected distance value for that prefix.

Then test this on other prefixes you found, and you can see if the value you found is also close to the distances between the other prefixes. However, there will be a deviation, so allow some margin when setting the ranges. It might lead to some kind of average, but of course there is no guarantee. Some prefixes will be an "exception" and be much closer or further away.
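Under the assumption that each address independently matches the chosen prefix with some probability p (a toy value below, far larger than any real prefix's), the gaps between hits follow a geometric distribution with mean 1/p; that would be the "expected distance" the procedure above converges on, deviations included. A quick sketch:

```python
import random

random.seed(42)
p = 1 / 500          # assumed per-key match probability (toy value)

# Walk a long stretch of keys and record the distances between hits.
gaps, last, pos = [], None, 0
while len(gaps) < 2000:
    pos += 1
    if random.random() < p:
        if last is not None:
            gaps.append(pos - last)
        last = pos

mean_gap = sum(gaps) / len(gaps)
print(round(mean_gap))   # near 1/p = 500, with a lot of spread per gap
```

The individual gaps scatter widely (the geometric distribution's standard deviation is about as large as its mean), which is why some prefixes will look like "exceptions", much closer or much further apart.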

On the other hand, what is missing from the discussion are the ECC calculations. Two consecutive private keys can generate two public keys very close to each other, or very far apart. These values are then hashed and so on to generate the address... I would say that the chance of two successive inputs generating hashes and addresses close to each other is very unlikely, but thanks to ECC we have no clue about the input of the hash (based on the private key)...

For my part, I would appreciate a more factual and respectful discussion, let's get emotional (and maybe even drunk) when you find a private key  Smiley
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
January 31, 2025, 01:59:36 PM
Last edit: January 31, 2025, 02:41:08 PM by bibilgin
 #7265

A very interesting discussion!

Perhaps someone with the hardware to test it can help.

The prefix "1BY8GQbnueY" or even "1BY8GQbnue" is too long though. We should start with a shorter one to test the principle and look for the distribution.
Depending on your hardware, "1BY8GQbn" or "1BY8GQbnu" might work.

Search for these prefixes. If you find several of them, take two very close together and scan in the area between them. If you find another one, narrow the range again to the smallest range in between. This way we can find an expected distance value for that prefix.

Then test this on other prefixes you found, and you can see if the value you found is also close to the distances between the other prefixes. However, there will be a deviation, so allow some margin when setting the ranges. It might lead to some kind of average, but of course there is no guarantee. Some prefixes will be an "exception" and be much closer or further away.

On the other hand, what is missing from the discussion are the ECC calculations. Two consecutive private keys can generate two public keys very close to each other, or very far apart. These values are then hashed and so on to generate the address... I would say that the chance of two successive inputs generating hashes and addresses close to each other is very unlikely, but thanks to ECC we have no clue about the input of the hash (based on the private key)...

For my part, I would appreciate a more factual and respectful discussion, let's get emotional (and maybe even drunk) when you find a private key  Smiley

If people like mcdouglasx, alberto and zielar approve of this topic, I can show you a small presentation example.

Or I can just write to them privately.

Edit;

I shared a small example with Gerbie. Because of his good-willed approach, a small presentation was sent to him. It would be right for Gerbie to explain the example himself.
Gerbie
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
January 31, 2025, 02:13:54 PM
 #7266

Another point of discussion for which someone here might have expertise: random search.

I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!

Here again we have maths (statistics).

Imagine a big bag full of green beans and only one red bean (the private key we're looking for). If I pick a random bean blindly from the bag and throw it back if it is not the red one, every bean I pick will have the same incredibly low probability of being the red one. And for each pick, the chance of getting the red one will be the same as taking the one I threw back before... So I might end up scanning a lot more keys than the size of the range before I find the private key.
So even if I count the number of beans I test, I have no guarantee that after testing as many beans as there are in the bag, there will be no red bean. Maybe I was just unlucky and didn't pick it.
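A quick simulation of the bag (a toy N of my choosing, drawing with replacement) backs this up: the average number of draws until the red bean is about N, and after a whole bag's worth of draws the red bean is still unfound roughly 1/e ≈ 37% of the time.

```python
import random

random.seed(0)
N, trials = 1000, 2000   # N beans, one of which (index 0) is "red"

draws_needed, missed_after_N = [], 0
for _ in range(trials):
    count = 0
    while True:
        count += 1
        if random.randrange(N) == 0:   # picked the red bean
            break
    draws_needed.append(count)
    if count > N:                      # N draws made, red bean still missed
        missed_after_N += 1

print(sum(draws_needed) / trials)  # near N = 1000
print(missed_after_N / trials)     # near (1 - 1/N)**N, about 1/e = 0.368
```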

Any constructive thoughts/opinions on this?
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
January 31, 2025, 02:23:45 PM
 #7267

Another point of discussion for which someone here might have expertise: random search.

I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!

Here again we have maths (statistics).

Imagine a big bag full of green beans and only one red bean (the private key we're looking for). If I pick a random bean blindly from the bag and throw it back if it is not the red one, every bean I pick will have the same incredibly low probability of being the red one. And for each pick, the chance of getting the red one will be the same as taking the one I threw back before... So I might end up scanning a lot more keys than the size of the range before I find the private key.
So even if I count the number of beans I test, I have no guarantee that after testing as many beans as there are in the bag, there will be no red bean. Maybe I was just unlucky and didn't pick it.

Any constructive thoughts/opinions on this?

My opinion on Random Search.

"There is only a PROBABILITY based on LUCK."

Since you put the bean back in the bag, the PROBABILITY is the same and the LUCK is the same in a system where no memory/database is kept.

But here there are 2 types of PROBABILITY:

independent probability and dependent probability.

Independent probability: the result of a random selection after putting the bean back in the bag.

Dependent probability: like shuffling the bag, making the first selection from the left of the bag, the next selection from the right or the bottom. There are many variations of the dependent-probability effect, triggered by the options.
Etar
Sr. Member
****
Offline Offline

Activity: 654
Merit: 316


View Profile
January 31, 2025, 02:38:36 PM
 #7268

-snip-
Imagine a big bag full of green beans and only one red bean (the private key we're looking for). If I pick a random bean blindly from the bag and throw it back if it is not the red one, every bean I pick will have the same incredibly low probability of being the red one. And for each pick, the chance of getting the red one will be the same as taking the one I threw back before... So I might end up scanning a lot more keys than the size of the range before I find the private key.
So even if I count the number of beans I test, I have no guarantee that after testing as many beans as there are in the bag, there will be no red bean. Maybe I was just unlucky and didn't pick it.
Any constructive thoughts/opinions on this?
I don't think anyone searches for a key this way. Usually, if you want to search randomly, the entire range is divided into small subranges, and these subranges are entered into a database. Each time a subrange is randomly selected from the remaining ones, it is removed from the queue and never used again. Thus, green beans do not return to the bag.
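A minimal sketch of that bookkeeping (toy range and chunk size of my choosing, with a placeholder where the actual scan would go):

```python
import random

random.seed(5)
START, END, CHUNK = 0x400, 0x800, 0x40   # toy range, not a real puzzle

# Split the range into subranges and shuffle the queue once.
subranges = [(lo, min(lo + CHUNK, END)) for lo in range(START, END, CHUNK)]
random.shuffle(subranges)

scanned = []
while subranges:
    lo, hi = subranges.pop()   # removed permanently: no bean goes back
    scanned.append((lo, hi))   # a real scanner would search [lo, hi) here

# Every key is covered exactly once, in random chunk order.
covered = sorted(k for lo, hi in scanned for k in range(lo, hi))
assert covered == list(range(START, END))
```

For real puzzle ranges the subrange list would not fit in memory as Python tuples, hence the database Etar mentions; the logic is the same.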

Hi,
Lately I was wondering if it is possible to modify JLPKangaroo_OW_OT to write DPs to an SSD (M.2 NVMe) instead of RAM? Do you have any experience or theoretical knowledge about such an issue?
I know that RAM is something different from SSD, but considering that the main tasks in Kangaroo are to calculate points, find DPs, and search for a collision, is there a chance to store DPs on an NVMe SSD without losing performance?
Best Regards
Damian
For JLP you can try using client/server. The DPs will be saved and sent to the server, where they will be merged into one file.
Or, if you don't care what software to use, you can try Etarkangaroo, which can save DPs after any period of time without losing much performance. When saving, the DPs are merged.
WanderingPhilospher
Sr. Member
****
Offline Offline

Activity: 1484
Merit: 285

Shooters Shoot...


View Profile
January 31, 2025, 02:52:14 PM
 #7269

Quote
I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!
What size search range were you using? It must have been a somewhat small one relative to GPU speed.

Quote
Imagine a big bag full of green beans and only one red bean

By putting each bean back in the bag after grabbing it, on average, you double your attempts. Meaning, with a bag of 100 beans: if you place every bean back, you will find the red one, on average, every 100 attempts. By leaving the drawn green beans out of the bag, you would find the red one, on average, every 50 attempts.
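That factor of two is easy to verify with a small simulation (a 100-bean bag assumed, the red bean at index 0):

```python
import random

random.seed(7)
N, trials = 100, 20_000

def with_replacement() -> int:
    # Draw until the red bean shows up, putting every bean back.
    tries = 1
    while random.randrange(N) != 0:
        tries += 1
    return tries

def without_replacement() -> int:
    # Equivalent to shuffling the bag and noting the red bean's position.
    order = list(range(N))
    random.shuffle(order)
    return order.index(0) + 1

avg_with = sum(with_replacement() for _ in range(trials)) / trials
avg_without = sum(without_replacement() for _ in range(trials)) / trials
print(round(avg_with), round(avg_without))  # near 100 and 50 (exactly 50.5)
```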

Quote
If people like mcdouglasx, alberto and zielar approve of this topic, I can show you a small presentation example.

Why do you need the approval of these people, lol?! It is a community forum; post whatever you feel like, as you already have. Why not include kTimesG's name as well? Because he doesn't agree with you 100%? The ones you mentioned would not agree with you 100% either.
Gerbie
Newbie
*
Offline Offline

Activity: 6
Merit: 0


View Profile
January 31, 2025, 03:12:03 PM
 #7270

Quote
I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!
What size search range were you using? It must have been a somewhat small one relative to GPU speed.

I don't know the exact range, but yes, it was relatively small; I knew I could scan the full range within 30 minutes on a Google Colab T4. That made me wonder: is that specific to a small range, or will it happen with a large range as well, only that I don't notice it (so easily)?
I tend to test in small steps, so I can see what happens. The numbers are so huge that my small brain has to split them into smaller bits... and I don't want to run a test for several weeks only to find out my idea is not working at all and I just wasted resources.
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
January 31, 2025, 03:33:33 PM
Last edit: January 31, 2025, 03:51:50 PM by kTimesG
 #7271

Quote
I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!
What size search range were you using? It must have been a somewhat small one relative to GPU speed.

I don't know the exact range, but yes it was relatively small, so I knew I could scan the full range within 30 minutes on a Google Colab T4. That made me wonder, is that specific for a small range, or will it happen with a large range as well, only that I don't notice it (so easily)...?
I tend to test in small steps, so I can see what happens. The numbers are so huge, that my small brain has to split it in smaller bits... and I don't want to run a test for several weeks to find out my idea is not working at all and I just wasted resources.

You would have to be really lucky to pick N distinct random keys in an N-sized set, where the events are all independent and the starting conditions are identical at every draw.

In fact, there is an exact formula for this, and you can even know in advance, with a very high degree of confidence, how many times you'll get repeats, after how many draws you'll start to get 3 repeats, and so on.

There is also the possibility that you'll never pick some of the numbers, even after very many draws (much more than N). This is also something you can calculate in advance.

The only problem is that you can never know in advance which keys you'll miss, which keys will start to repeat, and things like that. But it is actually normal to have some keys picked more often and others less often, because these cases are the 99.(9)% majority of the combinatorial possibilities.

Your draws are just one of those gazillions of possibilities. The outcome where you pick every random number perfectly, without a single repeat, is just one out of those gazillions.

If you don't trust this information, just go ahead and do a full CDF and StdDev analysis on the history of any lottery you want (or just simulate one in Python). You'll see some numbers get repeated much more often while others remain behind in the histogram. However, this is normal; they just landed in the universe where some specific combination occurred, there is no conspiracy. Oh, and it might also help you pick the next lucky numbers. I'm kidding. Not really, but maybe a couple of them.
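The "exact formula" alluded to above is easy to state: after d uniform draws from an N-sized set, the expected number of distinct values is N*(1-(1-1/N)^d). A sketch with a toy N of my choosing, drawing exactly N times:

```python
import random

random.seed(3)
N = d = 10_000   # draw as many times as there are elements

seen = {random.randrange(N) for _ in range(d)}
expected_distinct = N * (1 - (1 - 1 / N) ** d)

print(len(seen))                 # close to N*(1 - 1/e), about 6321
print(round(expected_distinct))  # 6321
repeats = d - len(seen)
print(repeats)                   # thousands of draws landed on repeats
```

So after N draws you expect to have seen only about 63% of the set: repeats long before full coverage, and a large fraction of keys never picked at all.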

you are right, there is no such thing as compound probability, I made that up, there is no such term, there is no such thing as compound probability, it is always the same in all cases, that is why wp has found many very close prefixes, almost one next to the other, and bibilgin found 4 prefixes "1BY8GQbnueY" in a range of 1000000 keys (but it hides it), because it is equally probable, right.
those of you reading this have a hard time finding a long prefix are probably doing it wrong, because finding them nearby is just as probable as far away and that is the absolute truth. Happy?

Are you OK? You are putting words in my mouth. I never said compound probability does not exist, but I have asked you twice already, and a final time now:

What reference do you have stating that compound probability refers to two independent events occurring one after the other, and not anywhere among the number of trials?

I can prove to you that what I said makes sense in practice, unlike you, who seems to read nonexistent words into theories. Yes, finding them nearby or at distance D = whatever (pick any number you want for D) has the exact same chances.

Off the grid, training pigeons to broadcast signed messages.
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
January 31, 2025, 07:48:44 PM
 #7272


I don't know the exact range, but yes it was relatively small, so I knew I could scan the full range within 30 minutes on a Google Colab T4. That made me wonder, is that specific for a small range, or will it happen with a large range as well, only that I don't notice it (so easily)...?
I tend to test in small steps, so I can see what happens. The numbers are so huge, that my small brain has to split it in smaller bits... and I don't want to run a test for several weeks to find out my idea is not working at all and I just wasted resources.

Still, I would be happy if you could write your thoughts and opinions after reviewing the explanation (the small presentation) I sent you privately.
benjaniah
Jr. Member
*
Offline Offline

Activity: 54
Merit: 3


View Profile
January 31, 2025, 11:14:04 PM
 #7273

Imagine a big bag full of green beans and only one red bean (the private key we're looking for).

Interesting thought experiment.
I did some calculations to help visualize the size of puzzle 67's keyspace. There are 73786976294838206464 possible keys in puzzle 67. Some may look at the number "73786976294838206464" or at the hex range of 40000000000000000:7FFFFFFFFFFFFFFFF and think, well that number doesn't really look that big.

Now let's take a look at the Great Pyramid of Giza, in Egypt. It is the largest pyramid in Egypt, with an original height of around 146.6 meters (480.6 feet) and a base length of about 230.4 meters (756 feet) on each side. Next, imagine you have 73786976294838206463 grains of just average, white, sand, and just 1 grain of black sand (representing all of puzzle 67's possible keys, and the 1 private key). You would fill the volume of the Great Pyramid of Giza with sand ~370 times. Try to picture 370 of these pyramids, all of white sand, and inside one of them somewhere is 1 individual grain of black sand.

If you took a typical US 5 Gallon bucket, and could check 1 bucket per second, looking for that 1 black grain of sand (and hopefully not miss it), that's about 1.5 billion grains of sand per second. It would take you 1560 years to check all of the sand, assuming you were checking nonstop, 24/7/365. Maybe a little bit longer, if you took a few breaks to eat some yogurt.
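For what it's worth, the arithmetic above checks out (taking the stated 1.5 billion grains per second at face value):

```python
# Puzzle 67's keyspace and the bucket-per-second checking rate above.
keys = 2**66
assert keys == 73786976294838206464

grains_per_second = 1.5e9              # one 5-gallon bucket per second
seconds_per_year = 365 * 24 * 3600
years = keys / (grains_per_second * seconds_per_year)
print(round(years))  # 1560
```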
mcdouglasx
Hero Member
*****
Offline Offline

Activity: 952
Merit: 532



View Profile WWW
January 31, 2025, 11:19:57 PM
 #7274

What reference do you have that states compound probability refers to two independent events occurring one after the other and not anywhere in the number of trials.

According to your criteria, each prefix independently has the same probability of being in one place or another, and you are right (independently, the probabilities are the same). However, you take that as the single truth, which is where I disagree. If you debated between joint probability and compound probability, your argument would have a bit more logic.

Because joint probability refers to the probability that two or more events occur simultaneously, it could be interpreted as all the hashes being "written in stone"; therefore you could argue that all independent prefixes were generated simultaneously for Bitcoin.

Compound probability, on the other hand, refers to the probability that two or more independent events occur in sequence; you could also apply it to Bitcoin by calculating the probability that a prefix repeats as you move from one prefix to another.

But in any case, it cannot be taken as a single independent event; it is not like you are looking for a different prefix in each event. This book explains it with examples:

"A First Course in Probability" by Sheldon Ross
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
February 01, 2025, 10:40:45 AM
 #7275

What reference do you have that states compound probability refers to two independent events occurring one after the other and not anywhere in the number of trials.
In the case of compound probability, on the other hand, which refers to the probability that two or more independent events occur in sequence, you could also apply it to Bitcoin by calculating the probability that a prefix repeats as you move from one prefix to another.

But in any case, it cannot be taken as a single independent event, it is not like you are looking for a different prefix in each event. In the same way, this book explains it with examples.

Again, you are using the words "occur in sequence", which is not the same as "occur in a sequence". There's nothing in that book about this.

You're still assuming the probability changes just because you move from one key to the next. But there's no external entity that would do that (except fate, maybe).

Example: some sequence of 100 events, and you know somehow that 2 of them are successful ones. It doesn't matter what we mean by "successful" (hash prefix match, or whatever you'd like to use).

Total possible sequence combinations: 4950
Total combinations where successes are next to each other: 99

So, the probability to have the successes next to each other is 99 in 4950.

Total combinations where successes are separated by a distance of 2: 98
Total combinations where successes are separated by a distance of 3: 97
Total combinations where successes are separated by a distance of 4: 96
...
Total combinations where successes are separated by a distance of 50: 50

So, the probability to have the successes separated by a distance of 50 is: 50 in 4950.

At this point, you would be thinking something like: OK, this sounds like it's actually more likely to have the successes next to each other rather than evenly spaced! So WTF is happening here?

Let's continue:

Total combinations where successes are separated by a distance of 51: 49
Total combinations where successes are separated by a distance of 52: 48
...
Total combinations where successes are separated by a distance of 98: 2
Total combinations where successes are separated by a distance of 99: 1

Maybe now something starts to become obvious: wait, so we only have a 1 in 4950 probability of having the successes spaced by a distance of 99? That doesn't sound right.

Well, friends, what we did is we forgot about what happened before our sequence and after our sequence. If we wrap the sequence around, head to tail, we'll see that if the successes are at #1 and #100, they are "next to each other".

So let's fix our probabilities:

Total combinations where successes are next to each other (or at dist 99): 99 + 1 = 100

So, the probability to have the successes separated by a distance of (1 or 99) is: 100 in 4950.

The same goes for the other ones, let's see what happens:

Total combinations where successes are separated by a distance of X (or 100 - X): (100 - X) + X = 100

So, same chances whatever delta we pick.
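The counting above can be brute-forced. A short enumeration of all C(100, 2) = 4950 placements, measuring the circular distance (so distance 1 and distance 99 fold together), assuming nothing beyond the post's own setup:

```python
from itertools import combinations

n = 100
counts = {}
for a, b in combinations(range(n), 2):
    d = min(b - a, n - (b - a))     # circular distance, 1..50
    counts[d] = counts.get(d, 0) + 1

print(sum(counts.values()))         # 4950 placements in total
print(counts[1], counts[25])        # 100 100: distances 1..49 all tie
print(counts[50])                   # 50: distance 50 is its own mirror
```

The only wrinkle is the antipodal distance 50, which pairs with itself and so appears half as often; every other delta comes out exactly equal, as argued.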

Off the grid, training pigeons to broadcast signed messages.
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
February 01, 2025, 12:05:02 PM
 #7276

--Again, you are...

Let me ask you a question.

There are 2 wallets whose addresses share a 12-character prefix.
Among these wallets, there will be no wallet with a similar 9- or 10-character prefix.

You say that this is also possible. Is that right?
kTimesG
Full Member
***
Offline Offline

Activity: 784
Merit: 242


View Profile
February 01, 2025, 01:23:44 PM
Last edit: February 01, 2025, 01:51:27 PM by kTimesG
 #7277

--Again, you are...

Let me ask you a question.

There are 2 wallets with 12 prefixes similar to each other.
Among these wallets, there will be no wallet with 9 or 10 prefixes similar.

You say that this is also possible. Is that right?

I don't understand what you asked, but anyway, I think you didn't even read the post. If you're looking for addresses that match a 12-char prefix, that's one thing. If you're looking for addresses that match 1-bit prefixes, that's a separate thing.

If that first bit is also the first bit of all possible hashes that have the 12-char prefix, then obviously there are more hashes that have the bit than hashes that have the prefix. However, there is the same amount of hashes that start off with the OTHER bit. So yes, it is possible (i.e., yes, there are existing combinations) where everything else (except your 2 addresses with the same 12-char prefix) starts off with a different bit, hence not the same prefix or any sub-prefix of it (1, 2, ... 11 chars).

However, that doesn't mean this possibility is actually found in some sequential subrange, but it is also not excluded (for secp256k1, you'd have to check all of the existing 2**256 ranges that consist of 2**66 sequential keys). And you can obviously simply create one: pick 2**66 keys with hashes starting with a 0, except two of them that have hashes starting with a 1. Oh, wait, ECC math, G is fixed, keys have an order, "it can't happen"... etc... no comment.

Off the grid, training pigeons to broadcast signed messages.
bibilgin
Newbie
*
Offline Offline

Activity: 279
Merit: 0


View Profile
February 01, 2025, 02:04:43 PM
 #7278

I don't understand what you asked, but anyway, I think you didn't even read the post. If you're looking for addresses that match a 12 char prefix, that's one thing. If you're looking for addresses that match 1-bit prefixes, that's a separate thing. If that first bit is also the first bit for all possible hashes that have the 12 char prefix, then obviously there are more hashes that have the bit, then hashes that have the prefix. However, there are the same amount of hashes that start off with the OTHER bit, so, yes, it is possible (e.g., yes, there are existing combinations) where everything else (except your 2 addresses with the same 12-char prefix), start off with a different bit, hence not the same prefix or any sub-prefix of it (1,2... 11 chars). However, that doesn't mean that this possibility is actually found in some sequential subrange, but it is also not excluded (for secp256k1, you'd have to check all the existing 2**256 ranges that comprise of 2**66 sequential keys). And you can obviously simply create one: pick 2**66 keys with hashes starting with a 0, except two of them that have hashes starting with a 1. Oh, wait, ECC math, G is fixed, keys have an order, "it can't happen"... etc... no comment.

The question is very clear and concise.
This discussion started with similar prefixes.
Question: there are 2 wallets sharing a 12-character prefix.
Their hex codes can come one after another. It doesn't matter how many bits there are (according to your opinion).

Without any 9- or 10-character prefix similarity between them. Is that your opinion?

Example:
1BY8GQbnubbH1UeVuGYAdVXSMHiBLd7iBZ
1BY8GQbnubbHgnpysMTFdfbrnPcw9oRU6i

Among the wallets with the 12-character prefix similarity, a 1BY8GQbnu (9-char prefix) or 1BY8GQbnub (10-char prefix) wallet can come one after another without a similar wallet in between.
stenaku
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
February 01, 2025, 02:44:24 PM
 #7279

Quote
I found that random searches sometimes gave me the same results several times. That means I'm scanning the same keys more than once!
What size search range were you using? It must have been a somewhat small one relative to GPU speed.

I don't know the exact range, but yes it was relatively small, so I knew I could scan the full range within 30 minutes on a Google Colab T4. That made me wonder, is that specific for a small range, or will it happen with a large range as well, only that I don't notice it (so easily)...?
I tend to test in small steps, so I can see what happens. The numbers are so huge, that my small brain has to split it in smaller bits... and I don't want to run a test for several weeks to find out my idea is not working at all and I just wasted resources.

Which script are you using for Google Colab T4?
karrask
Newbie
*
Offline Offline

Activity: 38
Merit: 0


View Profile
February 03, 2025, 05:32:10 AM
 #7280

Hello. Can I ask a question about kangaroos, using Kangaroo RC as an example?
When the program starts, the total number of kangaroos is displayed: 1507328 kangaroos, Speed: 2271 MKeys/s. Do I understand correctly that the speed is indicated for each kangaroo?