kTimesG
December 24, 2025, 02:55:18 PM
OK cool, so zero answers to any of my questions, just an ongoing "snip" followed by chaotic statements about contaminated ranges, location biases, and freshness of events.
Are you even, ever, reading what people write to you? In general, I mean. Frankly, the only thing I can ever say that will make you happy is that, obviously, you are simply right, and we can hopefully put the topic to rest.
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
December 24, 2025, 04:06:25 PM
Quote from: kTimesG
OK cool, so zero answers to any of my questions, just an ongoing "snip" followed by chaotic statements about contaminated ranges, location biases, and freshness of events.
Are you even, ever, reading what people write to you? In general, I mean. Frankly, the only thing I can ever say that will make you happy is that, obviously, you are simply right, and we can hopefully put the topic to rest.
I can't reply in a thread where you're "off topic." You're taking the statistics in a purist way, whereas the central debate of the thread is optimizing the search for people with limited resources. In other words, you're looking at the most purist aspect of the statistics, while I'm looking for a way to search better in the vast puzzle space, since I don't have CPU farms to do an exhaustive search. I'm inventing a smart search method, even though it doesn't guarantee 100% success.
WanderingPhilospher (OP)
Sr. Member
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
December 24, 2025, 04:17:54 PM (Last edit: December 26, 2025, 06:12:03 PM by achow101)
I may indeed do a test with what you say:
Quote from: kTimesG
If they are independent H160, then scanning the first 65% or the last 65% or whatever 65% will yield, on average, the exact same results, the same stats, the same "wins", the same whatever statistics you throw at them. Making them equivalent.
It will be interesting to see how close the number of prefixes found comes out if we only run the first 65% of the previously run ranges. Data to follow...
So I re-ran this test: Run 3 = 107 x 2^40 search-space size, searching for the leading 40 bits (a prefix, but expressed in bits). This time I only checked the first 65% of each range, then skipped to the next, as the quote above states. In the original run we found 74 prefixes; in this run we found 76 prefixes. More thoughts later...
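For reference, a quick expectation check (a rough sketch, and it assumes one particular reading of the two runs: that the original run stopped each 2^40 block at its first 40-bit match, while the re-run counted every match inside the first 65% of each block; that reading may be off). Under uniform, independent leading bits, both setups land in the high 60s on average, so counts of 74 and 76 sit within ordinary statistical noise:

Code:
BLOCKS = 107
P = 2.0 ** -40        # chance that a given key matches a fixed 40-bit prefix
BLOCK = 2.0 ** 40     # keys per block

# Reading A of the original run: scan each block until the first 40-bit match,
# then jump to the next block. Expected number of blocks that yield a match:
exp_stop_at_first = BLOCKS * (1.0 - (1.0 - P) ** BLOCK)   # ~107 * (1 - 1/e) ~ 67.6

# Reading B of the re-run: scan only the first 65% of each block and count
# every match found there:
exp_first_65 = BLOCKS * 0.65 * BLOCK * P                  # ~107 * 0.65 ~ 69.6

print(f"expected, stop at first match per block : {exp_stop_at_first:.1f}")
print(f"expected, first 65% of each block       : {exp_first_65:.1f}")
# Poisson-style noise is on the order of sqrt(expectation), i.e. about 8,
# so 74 and 76 are both unremarkable draws around these means.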
To summarize thus far: I have run multiple tests, in puzzle 71's range on the curve, looking for the leading 40 bits of 71's address. I chose to do this to get real-world data and stray away from the python hypotheticals. Not saying the python was wrong, but I wanted to see real data. It looks like both were close to each other, but I still do not understand how the 100% sequential did not log any wins during the python process.

So with the tests, we found that yes, looking for x bits/prefixes gets through the ranges faster than a 100% sequential search, simply because it skips keys: on average it only checks about 65% of them before moving on. We also knew that it would, and did, miss some prefixes. McD knows this and acknowledges that.

I then ran another test tailored to what kTimesG was saying:

Quote from: kTimesG
If they are independent H160, then scanning the first 65% or the last 65% or whatever 65% will yield, on average, the exact same results, the same stats, the same "wins"...

and that test concluded that, on average, the statement is correct.

So really, if a person is limited in resources (CPUs/GPUs) and wants to try to gain some advantage by not checking all keys, then either method (prefix: skip to the next block after finding an x-bit prefix; or just picking 65% of each block, front, middle, or back, and skipping to the next block when completed) will yield, on average, the same number of found prefixes.

McD believes in a "smarter search":

Quote from: mcdouglasx
If I find my prefix in any portion of N, it's better to sacrifice the rest of that range because it's statistically less profitable to stay there.
I prefer to move to a block where the 1/N probability is "fresh" and not contaminated by a previous find.
Any self-respecting programmer chooses the dynamic prefix option. We're not looking for the solution 100% of the time; we're looking to scan statistically consistent parts.
Your 65% method is a blind guillotine that forbids you from looking at 35% of the space by design. Mine is an intelligent search. I'd rather fail by chance than fail because my own code prevents me from finding the solution.

and thinks that kTimesG's is more of a blind search. Both know and agree that, without checking 100% of the keys, the target could be missed with either method. Between the two methods there is no right or wrong or better, merely different styles/philosophies. But if you want to shave some keys/time, maybe run one of the methods and try your luck. Six of one, half a dozen of the other. Missing anything?

Mod note: Consecutive posts merged
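To put a number on the "on average the same" point, here is a toy Monte Carlo (a sketch, not the code used for the tests above): one target hidden uniformly inside a block, with the prefix-skip rule compared against a fixed 65% cut of roughly equal cost. Each strategy's hit rate simply tracks the fraction of keys it ends up scanning:

Code:
import random

BLOCK = 100_000
PREFIX_P = 1.0 / BLOCK          # chance that any single key matches the prefix
CUT = int(0.65 * BLOCK)
TRIALS = 200_000
rng = random.Random(7)

hits_a = hits_b = 0
scanned_a = scanned_b = 0
for _ in range(TRIALS):
    target = rng.randrange(BLOCK)

    # A) prefix-skip: scan until the first prefix match, then leave the block.
    # The exponential draw approximates the geometric position of that match.
    first_match = min(int(rng.expovariate(PREFIX_P)), BLOCK - 1)
    scanned_a += first_match + 1
    if target <= first_match:          # target reached before abandoning the block
        hits_a += 1

    # B) fixed cut: scan only the first 65% of the block.
    scanned_b += CUT
    if target < CUT:
        hits_b += 1

print(f"A prefix-skip: hit rate {hits_a/TRIALS:.3f}, fraction scanned {scanned_a/TRIALS/BLOCK:.3f}")
print(f"B fixed 65%  : hit rate {hits_b/TRIALS:.3f}, fraction scanned {scanned_b/TRIALS/BLOCK:.3f}")
# Expected output: both hit rates track the fraction scanned (~0.63 vs 0.65).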
mcdouglasx
December 24, 2025, 06:24:11 PM
You've hit the nail on the head, @WP. As a programmer, the logical choice will always be the prefix version for one fundamental reason: we're looking for a unique target.
With the prefix method, every time we enter a block, we maintain the chance that the find is the actual target. In the "blind cut," if the target is in the final 35%, the probability of success is ZERO by design. It doesn't matter if the overall statistics seem the same; in a "unique target" search, you can't afford an algorithm that's forbidden from finding it if it falls in an arbitrary area.
The prefix method offers a logical margin of error. While kTimesG ignores a part of the world without looking, the prefix method scans the search space with an intelligent criterion that gives us a statistical chance of finding the key anywhere in the block.
kTimesG
December 24, 2025, 09:06:34 PM
If it looks like a duck, sings like a duck, walks like a duck, then it's a duck.
McD: "no, the duck is not a real duck, because this and that. Here's a duck that's a better duck, but I can't tell you why it's better. But you should really use this duck, because even if it seems to be a duck, it's actually not the same duck whatsoever."
It's obvious you don't actually read what people write; you just go on and on about why any real programmer (where is he?) will always use the superior, faster, better prefix method, because this and that, no matter that it does exactly (and I mean exactly) the same thing as a clean start-to-X% hashing process, without any conditions or anything.
Considering that:
- sequentially scanning X% has 0% overhead: it's fast, it's clean, it's optimized to the bone;
- the prefix method is:
  - impossible to parallelize (yes, I stand 100% by my words: it is impossible to parallelize),
  - impossible to scale,
  - requiring a complex system of arbitrary stops (branch divergence),
  - throwing away all the optimizations required to scan keys in sequence (like batch inversion, which is the #1 reason why sequential scans are extremely fast; see the sketch at the end of this post),
then the conclusion is clear:
Yes, any good coder will obviously pick the prefix method, because it has a better statistical edge than the lame method that has the same exact statistical edge. Any good coder will always prefer to throw away the results of batch inversion and add complex logic that deals with arbitrary skips and block-range management, especially when there are 50,000 threads running in parallel.
Good luck finding such a coder. I am looking forward to an actual implementation that uses the prefix method and runs at the same speed as the lame methods. That would be a great day for achievements in proving that coding impossible things is indeed possible, basically breaking the information theory field itself.
Until that day, I think you would be very good at sales pitch, but I wouldn't really trust you to do my taxes, even if you do it for free.
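For readers who have not met it, this is the batch inversion being referred to: Montgomery's trick, shown here as a minimal Python sketch over the secp256k1 field prime (an illustration only, not anyone's scanner code). N modular inverses cost one real inversion plus a few multiplications per element, but only if the whole batch is processed together; abandoning a block mid-batch forfeits the shared work.

Code:
P = 2**256 - 2**32 - 977   # secp256k1 field prime

def batch_inverse(values):
    """Return [v^-1 mod P for v in values] using a single modular inversion
    (Montgomery's trick). Assumes every value is nonzero mod P."""
    n = len(values)
    prefix = [1] * (n + 1)
    for i, v in enumerate(values):          # prefix[i+1] = v0*v1*...*vi
        prefix[i + 1] = (prefix[i] * v) % P
    inv_total = pow(prefix[n], -1, P)       # the one expensive inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):          # peel each inverse off the running product
        out[i] = (prefix[i] * inv_total) % P
        inv_total = (inv_total * values[i]) % P
    return out

# quick self-check
vals = [123456789, 987654321, 2**200 + 3]
assert all((v * inv) % P == 1 for v, inv in zip(vals, batch_inverse(vals)))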
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
December 24, 2025, 09:32:40 PM
snip
Well, I see that since you can't refute my previous comment about the ZERO probability of your method, you've decided to shift the debate from "strategy" to "implementation." It's the classic tactic you use when logic corners you... you start sarcastically talking about ducks and how difficult programming is. What you call "impossible complexity," I call engineering.

I'd much rather face a code optimization challenge than use a method that, by design, prevents me from finding the key if it's located in the final 35% of the block. Your method is "clean and fast" at failing, where mine at least has a real chance of succeeding. Keep your ducks and your theoretical averages; I'll stick with the search that doesn't blind me to the target. And that's without adding that your probability of failure per block will be 35%, while the prefix method has a dynamic one of less than 1/N.

Is it possible for you to keep the technical area free of sarcasm and unprofessional remarks, for once?
kTimesG
December 25, 2025, 09:03:28 AM
Quote from: mcdouglasx
Well, I see that since you can't refute my previous comment about the ZERO probability of your method...
I did that already way too many times, most recently just a couple of posts ago. You somehow also understood from WP's experiments that the prefix method is the way to go, basically missing his point altogether (e.g. that it doesn't matter which way you choose). But you never read anything, and it's just like it never happened: you just continue pushing forward more strongly than ever, and you totally ignore (with 100% probability) any actual on-point questions. Maybe if I try it with a seasonal theme it will work and you'll move your eyes over the words.

You get a prefix in the first block. You want to jump. Answer this:
Why, instead of jumping, can't you just continue? Why is the rest of the "block", in your view, not identically equivalent to the same amount of keys at the end of the range? Why can't you simply scan the rest of the block (and trim the difference off the end of the last block), and continue this way, ending up with a perfectly continuous X% scan?
Quote from: mcdouglasx
It's the classic tactic you use when logic corners you... you start sarcastically talking about ducks and how difficult programming is. What you call "impossible complexity," I call engineering.

Quote from: mcdouglasx
Is it possible for you to keep the technical area free of sarcasm and unprofessional remarks, for once?
Honestly, sarcasm helps a lot, especially once it's clear that the other person doesn't read, listen, or understand basic things in either math or programming. All the H160s in some range are, statistically speaking, extractions out of an urn of balls.
You take one ball out; that's an H160. You put the ball back in. Repeat N times (for example 2**70 times).
See? There is no concept of "skips". Skip what? Extracting balls? How do you even count skips?
If someone can still believe in "location bias", "jumps", and "blocks with entropy" when the problem is as simple as extracting balls and putting them back in, then they have some much bigger problem.
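A tiny numeric illustration of the urn point (a sketch, using SHA-256 of an index as a stand-in for HASH160 of a public key): the front 65%, the back 65%, or a scattered 65% of a block all contain, on average, the same number of prefix matches, because each output is an independent draw.

Code:
import hashlib, random

BLOCK = 200_000
PREFIX_BITS = 12                     # small prefix so matches are frequent
TARGET = 0
CUT = int(0.65 * BLOCK)

def leading_bits(i: int) -> int:
    """First PREFIX_BITS bits of a hash of the index (stand-in for HASH160(i*G))."""
    d = hashlib.sha256(i.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:2], "big") >> (16 - PREFIX_BITS)

matches = [i for i in range(BLOCK) if leading_bits(i) == TARGET]
front = sum(1 for i in matches if i < CUT)
back = sum(1 for i in matches if i >= BLOCK - CUT)
chosen = set(random.Random(3).sample(range(BLOCK), CUT))
scattered = sum(1 for i in matches if i in chosen)

print(f"expected per 65% slice : {CUT / 2**PREFIX_BITS:.1f}")
print(f"first 65%   : {front}")
print(f"last 65%    : {back}")
print(f"random 65%  : {scattered}")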
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
December 25, 2025, 03:15:42 PM
You keep confusing laboratory statistics with hunting strategy. Just because two methods add up to the same result on paper doesn't mean they're equally mediocre in practice.
WanderingPhilospher (OP)
Sr. Member
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
December 26, 2025, 05:04:35 PM
Quote from: kTimesG
Considering that:
- sequentially scanning X% has 0% overhead: it's fast, it's clean, it's optimized to the bone;
- the prefix method is:
  - impossible to parallelize (yes, I stand 100% by my words: it is impossible to parallelize),
  - impossible to scale,
  - requiring a complex system of arbitrary stops (branch divergence),
  - throwing away all the optimizations required to scan keys in sequence (like batch inversion, which is the #1 reason why sequential scans are extremely fast)
Hmmm, maybe I did not overcomplicate it, or I ran my tests a little differently. I kept all the optimizations, such as batch inversion, and just ran the ranges like I normally would: yes, the sub range is divided up into x ranges, each thread has a different starting point, and then each goes sequentially. There was one line of code to change for matching x bits, and the old -stop (if found > 0) flag was kept/used.

Correct me if I am wrong, but with the prefix method, it shouldn't matter where the prefix is found within the sub, sub range, just that one is found, and then skip to the next sub range. It's how I did it, and using it this way, it is just as easy to:

parallelize
scale

and it keeps all the code the same with optimizations and does not require any complex system of arbitrary stops. Maybe it did not mimic the python script 100%, but the overall "intent" is the same: find one prefix in a range, then stop and start looking in a new range.
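For concreteness, here is a rough sketch of that structure in plain Python (a guess at the layout with made-up names and a toy hash standing in for HASH160(k*G); it is not the actual GPU code). Each worker owns a fixed stride, scans it sequentially, and the only extra logic is the leading-bits comparison plus a stop once something is found:

Code:
import hashlib

SUB_RANGE_START = 1 << 70          # hypothetical sub range inside puzzle 71
SUB_RANGE_SIZE = 1 << 20           # tiny size so the sketch actually runs
THREADS = 256
PREFIX_BITS = 16
TARGET_PREFIX = 0x1234             # hypothetical leading bits being hunted

def leading_bits(k: int) -> int:
    # stand-in for the leading bits of HASH160(k*G)
    d = hashlib.sha256(k.to_bytes(32, "big")).digest()
    return int.from_bytes(d[:4], "big") >> (32 - PREFIX_BITS)

def scan_sub_range():
    keys_per_thread = SUB_RANGE_SIZE // THREADS
    found = None
    for t in range(THREADS):                      # emulating the thread grid serially
        start = SUB_RANGE_START + t * keys_per_thread
        for k in range(start, start + keys_per_thread):
            if leading_bits(k) == TARGET_PREFIX:
                found = k                         # the old -stop behaviour:
                break                             # flag it and wind the sub range down
        if found is not None:
            break
    return found

hit = scan_sub_range()
print("first prefix hit in this sub range:", hex(hit) if hit is not None else None)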
kTimesG
December 26, 2025, 08:07:52 PM
Quote from: WanderingPhilospher
Correct me if I am wrong, but with the prefix method, it shouldn't matter where the prefix is found within the sub, sub range, just that one is found, and then skip to the next sub range. It's how I did it, and using it this way, it is just as easy to:
parallelize
scale
What do you define as "skip to the next sub range"? I cannot possibly see how such an operation can run in parallel, since it's serial by definition. In other words, how can two parallel threads do this: one of them scans the next key, and the second one skips it? Aren't these, uhm, two separate operations? How can they both run at the same time? How can 10,000 parallel threads scan their next key, and skip it, at the same time?

Maybe what you are describing is concurrency-based (CPU) machinery, not a parallel high-scale model, in which all the execution units do the exact same thing at every clock cycle.
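To make the lockstep point concrete, a toy model in plain Python (an illustration, not a real kernel): every lane of a warp executes the same instruction each step, so a lane that has already found its prefix and "wants to jump" can only be masked off; its execution slots are wasted until the whole batch ends, unless per-lane bookkeeping is added.

Code:
LANES = 8
BATCH = 16                     # keys each lane is supposed to cover per launch
done_at = {2: 3, 5: 7}         # pretend lanes 2 and 5 hit their prefix early

useful = wasted = 0
for step in range(BATCH):                  # one shared instruction stream
    for lane in range(LANES):
        active = step < done_at.get(lane, BATCH)
        if active:
            useful += 1                    # lane hashes its next key
        else:
            wasted += 1                    # lane is masked: the slot does nothing

print(f"useful lane-steps: {useful}, wasted lane-steps: {wasted}")
# Letting masked lanes "jump to a new block" mid-batch would need per-lane
# bookkeeping and divergent control flow, which is the objection being raised.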
Off the grid, training pigeons to broadcast signed messages.
WanderingPhilospher (OP)
Sr. Member
Offline
Activity: 1456
Merit: 275
Shooters Shoot...
Today at 05:28:34 AM
Quote from: WanderingPhilospher
Correct me if I am wrong, but with the prefix method, it shouldn't matter where the prefix is found within the sub, sub range, just that one is found, and then skip to the next sub range. It's how I did it, and using it this way, it is just as easy to:
parallelize
scale

Quote from: kTimesG
What do you define as "skip to the next sub range"? I cannot possibly see how such an operation can run in parallel, since it's serial by definition. In other words, how can two parallel threads do this: one of them scans the next key, and the second one skips it? Aren't these, uhm, two separate operations? How can they both run at the same time? How can 10,000 parallel threads scan their next key, and skip it, at the same time? Maybe what you are describing is concurrency-based (CPU) machinery, not a parallel high-scale model, in which all the execution units do the exact same thing at every clock cycle.

Ahhhhh, I think I mistakenly mistook your words, lol. When I saw "parallel", I was thinking more of multiple GPUs running together, like a pool. But maybe that is more in line with "scalable"...

What I was doing, and what I mean by sub range: if we are searching in, say, the 71-bit range and we break it into 40-bit subranges, 2^30 subranges would exist. So when I ran my tests, I ran the GPU program "normal": each thread was spread out over the entire subrange, equally, and then each thread searched sequentially from its starting point; versus starting at key 1 of the subrange and going entirely sequential, which is probably what mcd's python script does/did.

However, do you not think you could create a program to run "parallel"? Let's say each range is 1/128th of N (our entire searchable range size/width), and we have x amount of GPU threads equally divided over a subrange (1/128th of N). Each thread would be responsible for z amount of keys in each subrange, and a total of z keys * 128 subranges. So if a thread finds a matching prefix/bits match, it goes to its next assigned set of keys in the next subrange... something like that (a rough sketch follows below). I may need to digest this more to really see if it is doable and at what cost. Good convo/thoughts though.

EDIT: I know in one of the GPU kangaroo programs, I was able to reset/restart any thread that found a point that met the DP criteria. It would reset to a new random starting point and then jump along that new path. Would need to change the random to some fixed value... anywho, spitballing this out there before I forget about it.
kTimesG
Today at 09:38:25 AM
Quote from: WanderingPhilospher
However, do you not think you could create a program to run "parallel"?
No. I'm pretty sure the universe will collapse into itself before anyone can actually run in parallel something that is obviously a serial algorithm.

Quote from: WanderingPhilospher
Each thread would be responsible for z amount of keys in each subrange

Exactly. Not an arbitrary amount of keys (e.g. until some prefix match is found). Otherwise, the thread needs to keep track of various things. Which means ALL threads need to keep track of those various things, which means lower throughput.

Quote from: WanderingPhilospher
EDIT: I know in one of the GPU kangaroo programs, I was able to reset/restart any thread that found a point that met the DP criteria. It would reset to a new random starting point and then jump along that new path. Would need to change the random to some fixed value... anywho, spitballing this out there before I forget about it.

In kang, every point jumps to its next point, so if you don't care about other stats, nobody minds that you are replacing some point in GPU memory before the next launch. Zero kernel changes, in fact. But when scanning the H160 of sequential scalars, you have batch inversion loops and a strategy that ensures exact coverage of exact areas. "Skipping" means branch divergence, breaking the loop, figuring out where to "land", probably computing some EC point, having all of that managed somehow, and many other immediate issues.

For example: how will a thread know when to stop the kernel, since some threads have started working on new subranges? Or do we now wait for all threads to finish a subrange? The kernel will never exit in this case. If you want the kernel to exit, then you add in the "arbitrary point match" bookkeeping. That is what I mean by lack of parallelism and impossibility to scale.
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Today at 01:29:20 PM
snip
You talk about "collapse of the universe" and "branch divergence" to avoid admitting that your method is, strategically, inferior. What WP describes isn't "impossible," it's simply dynamic task management, and you don't need threads "jumping" within the kernel, breaking batch inversion, if you know how to structure a scheduler.

But let's return to the reality you're trying to avoid with technical jargon: your method is the "fastest" in the world at finding nothing in 35% of each block. You have a ZERO probability of success if the key is at the end of the range. Your parallelism is perfect, but your strategy is a blind guillotine.
kTimesG
Today at 07:49:26 PM
Quote from: mcdouglasx
snip
You talk about "collapse of the universe" and "branch divergence" to avoid admitting that your method is, strategically, inferior. What WP describes isn't "impossible," it's simply dynamic task management, and you don't need threads "jumping" within the kernel, breaking batch inversion, if you know how to structure a scheduler. But let's return to the reality you're trying to avoid with technical jargon: your method is the "fastest" in the world at finding nothing in 35% of each block. You have a ZERO probability of success if the key is at the end of the range. Your parallelism is perfect, but your strategy is a blind guillotine.

We live in different realities. For the billionth time: those 35% you talk about are the same 35% that the prefix method skips. So you have a ZERO probability of success if the key is in some of those 35% of skipped keys. Do I need to repeat the obvious question again? I'll add it at the end.

The difference is that over here, in the real reality, things like "dynamic task management" and "structure a scheduler" are at the same level of nonsense as "contaminated ranges" and "location biases". If you had even the slightest clue about computer science, you'd quickly realize that it's a very idiotic idea to end up with code that runs ten times slower, only to end up with the same exact results as a plain old sequential scan. Not on paper. Here, in the real world. I can also formally prove why your "dynamic scheduler" nonsense makes, well, no sense over a finite range. But why bother; it is something that common sense dictates, which is not the case when trying to bring you down to normality.

You get a prefix in the first block. You want to jump. Answer this:

Why, instead of jumping, can't you just continue? Why is the rest of the "block", in your view, not identically equivalent to the same amount of keys at the end of the range? Why can't you simply scan the rest of the block (and trim the difference off the end of the last block), and continue this way, ending up with a perfectly continuous X% scan?
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
Today at 08:16:50 PM
Quote from: kTimesG
snip
You talk about "collapse of the universe" and "branch divergence" to avoid admitting that your method is, strategically, inferior. What WP describes isn't "impossible," it's simply dynamic task management, and you don't need threads "jumping" within the kernel, breaking batch inversion, if you know how to structure a scheduler. But let's return to the reality you're trying to avoid with technical jargon: your method is the "fastest" in the world at finding nothing in 35% of each block. You have a ZERO probability of success if the key is at the end of the range. Your parallelism is perfect, but your strategy is a blind guillotine.
We live in different realities. For the billionth time: those 35% you talk about are the same 35% that the prefix method skips. So you have a ZERO probability of success if the key is in some of those 35% of skipped keys. Do I need to repeat the obvious question again? I'll add it at the end. The difference is that over here, in the real reality, things like "dynamic task management" and "structure a scheduler" are at the same level of nonsense as "contaminated ranges" and "location biases". If you had even the slightest clue about computer science, you'd quickly realize that it's a very idiotic idea to end up with code that runs ten times slower, only to end up with the same exact results as a plain old sequential scan. Not on paper. Here, in the real world. I can also formally prove why your "dynamic scheduler" nonsense makes, well, no sense over a finite range. But why bother; it is something that common sense dictates, which is not the case when trying to bring you down to normality.
You get a prefix in the first block. You want to jump. Answer this:
Why, instead of jumping, can't you just continue? Why is the rest of the "block", in your view, not identically equivalent to the same amount of keys at the end of the range? Why can't you simply scan the rest of the block (and trim the difference off the end of the last block), and continue this way, ending up with a perfectly continuous X% scan?
You're still stuck on the fallacy that "jumping is the same as cutting." No, we don't live in different realities; you live in a simulation of averages, and I design strategies for a single target. To answer your questions:

1. Why jump instead of continuing? Because computation time is finite. If the prefix method tells me that the probability of the key being in the remaining 95% of the block is negligible, my best decision is to move to virgin territory. Staying to "clean" a block that already gave a negative signal is a loss in terms of opportunity cost.

2. Why isn't the rest of the block equivalent to keys at the end of the range? Statistically, in an infinite universe, they are. But in a real search for a puzzle with a finite range, my goal is to maximize horizontal coverage. By jumping, my method touches more areas of the puzzle in the same amount of time. Your "guillotine" simply ignores 35% of each area it touches. I prefer to have looked at and discarded 100% of the range (even if it means skipping) than to have perfectly examined only 65%.

3. Regarding your "code 10 times slower": that's the favorite exaggeration of those who don't know how to optimize a scheduler. Re-initializing a point on the curve (k times G) takes microseconds (see the rough cost sketch below). If your code is 10 times slower because of handling dynamic ranges, the problem isn't the method, it's your implementation.

We're not looking for averages. We're looking for the target. I can't risk the key being at the end of the block if, with prefixes, I have a dynamic probability of less than 1/N, while you're taking a blind risk. I'm relying on a dynamic probability per block, and even though the overall statistics might seem the same, prefixes are more reliable. I don't know why you're having trouble accepting that.
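On the cost of re-initializing k*G versus continuing sequentially, a rough operation-count sketch (not a benchmark, and it ignores windowed/precomputed multiplication tricks as well as batch-inversion details): a sequential step is one point addition, while a fresh double-and-add scalar multiplication is a few hundred group operations, so the overhead of jumping depends entirely on how often jumps happen relative to sequential steps.

Code:
SCALAR_BITS = 256          # secp256k1 scalar size

def sequential_step_ops() -> int:
    # next key = previous key + 1  =>  next point = previous point + G
    return 1                                   # one point addition

def jump_ops(scalar_bits: int = SCALAR_BITS) -> int:
    # plain double-and-add for a fresh k*G: ~scalar_bits doublings plus
    # ~scalar_bits/2 additions on average
    return scalar_bits + scalar_bits // 2

print("ops per sequential step  :", sequential_step_ops())
print("ops per jump (fresh k*G) :", jump_ops())
# A jump is therefore worth a few hundred sequential steps; whether that
# overhead matters depends on how rarely the jumps occur.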