Akito S. M. Hosana
Jr. Member
Offline
Activity: 420
Merit: 8
March 18, 2025, 06:23:17 PM
Will you add more features to your Cyclone version? N keys to be scanned at once? Writing progress into a file (it isn't working yet)? Stride? (I have made some changes in your version, but it is a fixed stride, that is, I always need to rebuild via the makefile to change it.) Reading the range list to be scanned from a file...
I don't think so. I barely convinced him to do what we have now.
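For what it's worth, the "write progress into file" feature is mostly a matter of atomic checkpointing. Here is a minimal sketch (hypothetical, in Python for brevity rather than Cyclone's own C++; the filename and range values are made up): write to a temp file and rename over the checkpoint, so a crash mid-write can never leave a corrupt progress file.

```python
import json, os, tempfile

def save_progress(path, last_key):
    # write to a temp file first, then atomically rename over the checkpoint,
    # so a crash mid-write never leaves a corrupt or empty progress file
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    with os.fdopen(fd, "w") as f:
        json.dump({"last_key": hex(last_key)}, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)

def load_progress(path, default_start):
    # resume from the checkpoint if it exists and parses, else start fresh
    try:
        with open(path) as f:
            return int(json.load(f)["last_key"], 16)
    except (FileNotFoundError, ValueError, KeyError):
        return default_start

# usage sketch: checkpoint every N keys inside the scan loop
ckpt = os.path.join(tempfile.gettempdir(), "cyclone_progress.json")  # hypothetical path
start = load_progress(ckpt, 1 << 66)   # e.g. a range starting at 2^66
save_progress(ckpt, start + 1_000_000)
resumed = load_progress(ckpt, 1 << 66)
os.remove(ckpt)
```

The same rename-over trick works in C++ via `rename()`, which is why it fits a scanner that may be killed at any moment.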
kTimesG
March 18, 2025, 06:49:25 PM
~ offtopic ~
No one said prefixes aren't rare, but it seems the ignorance is on your side, because:
- no one is gonna waste a ton of power to find the exceptional case where two sequential keys result in a very long common prefix; that's a waste of time. Having infinitely small chances of a collision is the main reason why you don't see two near keys colliding, not because you've just found some prefix and that would somehow mean it has any sort of relevance for the very next hash, be it the next one or 500 billion keys later.
- all of your science is bogus techno-babble; we're still waiting for an explanation of why waiting 10 minutes to continue flipping a coin changes the probabilities of the future outcomes. Ouch, excuse me, I meant skipping some keys here and there, on the basis of a totally refutable religion that it's in any way different from continuing to scan the next key. If I'm going to the grocery store today, then this somehow changes the chances that Satoshi ate oranges today. This is pretty much the whole basis of your theories.
I wrote some time ago a post that you deleted because you didn't like that I can understand what your code does without having to execute it. Man - you're the one that needs to come up with the validity of your theories; it's not on us to get on our knees and be called "you didn't understand it" every time. We did understand it.
Off the grid, training pigeons to broadcast signed messages.
mcdouglasx
March 18, 2025, 07:13:08 PM
snip
Didn't you understand that refuting a theory without evidence and generalizing is easy? In fact, it's the same reasoning that flat-earthers use to defend their arguments: theorizing. And believe me, there are flat-earthers who know how to argue to the point of leaving others speechless, but that says nothing about the truth. That's what you do: to support your ideas, you generalize too much, which casts doubt on your intelligence. You just go off on tangents and don't address the main topic. You are the perfect example of what a person who thinks they're intelligent would never do.
kTimesG
March 18, 2025, 07:45:23 PM
snip
Nah, I'm just a flat-earther robot AI chat-gpt agent who doesn't understand what you are even trying to prove, or what exactly anything of what you're rambling about every other day has to do with the topic, except the constant pushing of some nonsense that has no practical usage whatsoever... backed by some books whose authors, I doubt, would be proud to know you named their work in such a context. Also, I couldn't care less about any of your opinions of me, if this is all you're able to prove; they have zero relevance to the topic as well.
VinIVaderr
Newbie
Offline
Activity: 4
Merit: 0
March 18, 2025, 07:53:10 PM
snip
I believe you are correct. We know 10k GPUs can find one of these keys. But that's for other people, not me, for sure. This also isn't, in my opinion, testing the strength of ECDSA or Bitcoin. I look at this from a technicality perspective: is there a weakness in the math, the randomness, or a code exploit? For a long time I've chatted with the GPTs, and there is a consensus that ECDSA and elliptic-curve algorithms are elegant and no vulnerabilities are expected. The only error is human: poor entropy or coding. I've pressed Ctrl+C so many times. But I enjoy the "think tank" collaboration in this thread.
mcdouglasx
March 18, 2025, 07:55:44 PM
snip
- all of your science is bogus techno-babble; we're still waiting for an explanation of why waiting 10 minutes to continue flipping a coin changes the probabilities of the future outcomes. Ouch, excuse me, I meant skipping some keys here and there, on the basis of a totally refutable religion that it's in any way different from continuing to scan the next key.
This is easily debunked: two events can be independent and, together, exhibit compound probability without interfering with each other's independence. We can use several independent events, for example, flipping 3 consecutive heads in 10 attempts, where the probability increases with each attempt. Getting 3 consecutive heads on flips 1, 2, and 3 is not the same as on flips 8, 9, and 10. Therefore, my argument for exploring the most probable spaces first is valid, all of this without contradicting the fundamental principles of probability or the independence of events.
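The coin-flip example above can actually be simulated. A quick Monte Carlo check (assuming a fair coin; trial count and seed are arbitrary) shows that each *specific* three-flip window, whether flips 1-3 or flips 8-10, comes up all heads with the same probability of 1/8; what grows with more attempts is only the chance that a run of three heads appears *somewhere* within the 10 flips (exactly 520/1024):

```python
import random

random.seed(42)
TRIALS = 200_000

win_1_3 = 0    # flips 1-3 all heads
win_8_10 = 0   # flips 8-10 all heads
anywhere = 0   # any run of 3 consecutive heads within the 10 flips

for _ in range(TRIALS):
    flips = [random.random() < 0.5 for _ in range(10)]
    if all(flips[0:3]):
        win_1_3 += 1
    if all(flips[7:10]):
        win_8_10 += 1
    if any(all(flips[i:i + 3]) for i in range(8)):
        anywhere += 1

p13, p810, pany = win_1_3 / TRIALS, win_8_10 / TRIALS, anywhere / TRIALS
# p13 and p810 both sit near the theoretical 1/8 = 0.125;
# pany is near 520/1024 ~ 0.508 (counted via the no-HHH recurrence)
```

So the "where in the sequence" part carries no weight on its own; only the number of attempts does.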
kTimesG
March 18, 2025, 08:32:31 PM
This is easily debunked: ~ start of gibberish ~
Well, that explains it. I'm not even going to attempt going into why you have it so very wrong at the most fundamental levels. It's clear you haven't understood a thing of what various people have already tried to explain to you. Maybe ask yourself who the flat-earther is here. Keep going at it though. But please don't try to convince people that, because you believe circles are not really round, they should prove that to you, not the other way around. Or make claims about things you clearly have zero knowledge of, like stating that your methods are efficient, when they are actually slowing everything down because of core limitations. Or write things like "this can be made faster using CUDA" when it's clear you don't even understand the core principles of high-level parallel computing, or what SIMD implies, and why it can never be made to work with your methods. Did you even understand what WanderingPhilospher tried to show you with his experiments and conclusions? Or do you need all of us to travel around the globe so we can show you that Earth isn't flat after all, just to try to bring some sense back into you? It's OK, I know nothing will work anyway.
mcdouglasx
March 18, 2025, 09:04:51 PM
snip
You wrote 7+ lines, and none of them were about compound probability, so who reasons better? Me, with logical examples, or you, talking about yourself as if you were a god (who knows everything), with poor and generalized reasoning? You think you're intelligent but end up in absolute embarrassment. I repeat my idea: it's not about skipping keys randomly and that's it. It's simple: I search the most probable places first, understand well, "first." It's not that the keys I skip (momentarily) cannot be explored later; all of this is based on compound probability. I see no fault in using probabilities as tools, especially when the search space is enormous. Without wanting to underestimate anything: sequential+random, BSGS, kangaroo, and prefixes, at the current difficulty of the puzzles, are no longer useful unless you have a GPU farm, but that does not discredit the existing algorithms. Or are we now going to talk only about GPU farms for 10 more years? Or at least, since you are the genius, throw us a new idea; maybe it will be you who breaks the paradigms.
bibilgin
Newbie
Offline
Activity: 280
Merit: 0
March 18, 2025, 09:15:26 PM
snip
Friends, I was out of town for a few weeks. I can't follow along because I have some more work to do. Thanks, NoMachine, for your contributions regarding the Cyclone work; I won't be able to be here for a few more weeks, everyone take care. But as far as I can see, kTimesG continues to act like a defender. Okay, my friend, we understand you too. Isn't the reason you are here to get information anyway? The information we have is wrong; you know best. lol
1. If you have a lot of money (1k-2k GPUs), you will find the wallet that fast. The one with a lot of money gets the wallet.
2. For Karma, prefixes, H160, or many similar approaches, no one can provide definite 100% proof. (Anyone who had such proof would not be here.) (We are thinking about probability and working on it.)
First of all, my advice: research how many ciphers have been broken. 95% of encryption algorithms are broken with probability theories. Now, without getting into a discussion, why don't you answer the questions that you could learn from or that concern you? According to us, what you think is wrong. According to you, what we think is wrong. There is no need to write pages about something that doesn't even have any proof. I wish everyone success. Take care. I'll be away for 2-3 weeks.
kTimesG
March 18, 2025, 09:33:56 PM
snip
You wrote 7+ lines, and none of them were about compound probability, so who reasons better?
I told you I'm not going to even attempt to reason about what you wrote, because it is irrelevant. Team douglas & bibilgin (I have no idea what he wrote, probably something in your defense and most likely more insults towards me): you two are the only guys here who take me for an idiot; don't you ever find this weird? Yes, you need a GPU farm to solve problems that require a GPU farm to be solved. Not fantasies. Yes, this might mean you'll hear about that for the next 10 years, if that is what we have as a solution. Maybe FPGA and ASIC as well. Not fantasies. Yes, Puzzle 67 was solved via a massive amount of computing. Not fantasies. I am not arguing with you because it is totally useless and serves no purpose other than to get both of us annoyed. Maybe someone else has more patience with you; I don't. The solver of Puzzle 67 definitely didn't. And his remarks on your methods were basically one-liner conclusions of everything I ever tried to explain to you but you fail to accept. So if ONE-LINERS are too much to argue about, what the fuck more can I do with pages over pages of trying to say the same things in 1000 different ways?
- probabilities are consequences of a uniform distribution
- independent events have no history
- there is no difference between a range and a set of random indices
- YOU CANNOT ALTER events based on observations
You are viewing all of this in exactly the most opposite way possible, and I can't really help with this. So just follow my advice: keep going at it with all you've got! Meanwhile, the stupid guys, whom you call "geniuses" that "do not understand you," are full-on scanning ranges in the fastest way that technology allows. Because the ideas of yours that you consider smart are actually totally unrealistic to manage or implement at scale.
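The "independent events have no history" point is easy to check numerically. A toy simulation (a fair coin as a stand-in for uniformly distributed hashes; the streak length of five is arbitrary) measures how often heads comes up immediately after a streak of five heads:

```python
import random

random.seed(7)
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# collect every flip that is immediately preceded by five heads in a row,
# then measure how often that next flip is itself heads
after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]
p_next = sum(after_streak) / len(after_streak)
# p_next stays near 0.5: the five observed heads carry no information forward
```

Swapping in any observed "prefix" for the streak changes nothing: conditioning on a past observation does not move the next independent outcome.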
mcdouglasx
March 18, 2025, 10:09:36 PM
snip
In summary, I'm not competing for the puzzle; I'm just validating that a statistical search is the most reasonable approach for ordinary people who want to try their luck. I have never claimed that my proposal is miraculous, only that it is more logical when your resources are limited.
kTimesG
March 18, 2025, 11:40:03 PM
snip
The issue with this is that it doesn't validate as a reasonable approach, nor as a logical one. This is why. Since you agree with the laws of the uniform distribution, some logical conclusions immediately follow from them. Theorem: the probability of any range is exactly identical to the probability of any set of keys you can think of, of the same length. Hence, it is exactly the same thing whether you scan a range, or scan a set, or wait it out 10 minutes and continue from some other key, or if you jump around, or if you give up and ask a friend to continue scanning on his computer. All of these events have the same amount of extra success, because they're all independent. BUT - scanning sequentially is the fastest and most efficient way this can practically be done. Why? Because:
1. It's a bitch to keep track of what was randomly scanned: inefficiency, resource waste, time consumption, avoiding the birthday paradox, needing some sort of database.
2. Starting a job from some index X requires computing the public key from the private key: inefficiency, resource consumption, more processing time.
3. It's simply the most efficient and easy-to-scale method.
Now, since the probabilities are exactly identical in all cases, where is the logic and reasonableness in complicating things? You are spreading this fake sentiment that playing the lottery with the puzzle increases the chances. It does not. If one scans keys 5327542 and 3528294356, the chances of finding the key are identical to simply scanning the first and second keys of the puzzle. Or the last two. Or the middle two. Or keys 42 and 99999. This is probably what doesn't make sense to you, unfortunately.
So, scanning some range in sequence has the exact same probability of success as if you randomly pick any other keys, check whether they were scanned or not, compute their public keys from scratch, and then check them. Which of the two options is the more logical and reasonable?
mcdouglasx
March 19, 2025, 12:41:46 AM
snip
I already explained it here https://bitcointalk.org/index.php?topic=1306983.msg65161690#msg65161690, but you keep preferring to ignore the existence of compound probability. This is something real! According to your logic, each key is the same because each prefix has the same probability on its own, as an independent event. I agree with you: each prefix has the same individual probability of occurring. But I don't rely on a single event, so you can't use that as a counterargument. My logic is this: if I am going to scan 1,000,000 sequential keys, since the hashes are distributed uniformly, and being aware that they are not related to each other, compound probability tells me that if I find a hash with a 10-character prefix, it is highly unlikely to find the same matching prefix again in so few attempts. When you generalize by saying that longer prefixes always appear, you are considering all possibilities and not a specific target. In this case, if you have already found a hash with a specific prefix, the probability of finding that same prefix several times in a limited number of attempts is extremely low.
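The arithmetic behind this can be written out, assuming a uniform hash and reading "a 10-character prefix" as a fixed 10-hex-character (40-bit) pattern: the chance of hitting it at least once in a million attempts is about 9e-7, and, because attempts are independent, that number is identical whether or not the prefix was just observed.

```python
import math

p_one = 2.0 ** -40           # one uniform attempt matches a fixed 40-bit prefix
n = 1_000_000
# probability of at least one match in n attempts: 1 - (1 - p)^n,
# computed with log1p/expm1 for numerical stability
p_hit = -math.expm1(n * math.log1p(-p_one))
approx = n * p_one           # first-order approximation, valid since n*p << 1
# p_hit is tiny either way, and the next million attempts carry the exact
# same p_hit regardless of what was already seen
```

The small size of `p_hit` is a statement about *any* fixed 40-bit target over a million attempts; it says nothing extra about the window that follows an observed match.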
Bram24732
Member

Offline
Activity: 322
Merit: 28
March 19, 2025, 12:52:21 AM
snip
This, again, goes against basic math. Do you have a formal math education, by any chance? Because we're talking Stats 101 here. I don't mean to be disrespectful, but you seem so sure about something so evidently wrong that I have to ask.
I solved 67 and 68 using custom software distributing the load across ~25k GPUs. 4090 stocks speeds : ~8.1Bkeys/sec. Don’t challenge me technically if you know shit about fuck, I’ll ignore you. Same goes if all you can do is LLM reply.
mcdouglasx
March 19, 2025, 01:13:48 AM Last edit: March 19, 2025, 01:54:20 AM by mcdouglasx
snip
You know how I easily prove my point? By looking at the distribution of your proof of work here: https://github.com/Kowala24731/btc67/blob/main/src/main/resources/com/btc67/proofs.txt So, what are you going to say?
edit: What I see here is simple: FUD, FUD, and more FUD, to make the search lose interest and clear the path for yourself. Since you don't counter-argue properly, I present and use your work as a basis to clarify my point (that the probabilities are as they should be), and you provide no counterargument when claiming I'm wrong, without any foundation. I don't see math in your argument, only FUD.
If I'm wrong, you just need to tell me why I can't use compound probabilities in my approach, instead of going around in circles on the matter I intentionally bring up in every comment. But for you and kTimesG, it seems like part of probability doesn't exist.
VinIVaderr
Newbie
Offline
Activity: 4
Merit: 0
March 19, 2025, 02:18:12 AM
Here are the two sides to your argument.
Probability builds over repeated attempts vs. exponential decay towards certainty. Because keys are deterministic, 0-1,000,000 are the same. Ranges reduce time complexity. It only appears that each key is independent over a distributed field, but they are repeatable. Every key maps to a public key and is used to sign messages or transactions, and only that key will move those bitcoins. By the way, when did it become "OK" to post private keys online? FUD.
VinIVaderr
Newbie
Offline
Activity: 4
Merit: 0
March 19, 2025, 02:34:18 AM
Bram24732
Member

Offline
Activity: 322
Merit: 28
March 19, 2025, 02:41:45 AM
snip
I'm well aware of what compound probabilities are. You're just using them backwards. Also, I don't need to discourage you from doing anything, because you're statistically irrelevant.
|
mcdouglasx
March 19, 2025, 03:36:35 AM
snip
Argue, then, and give me a lesson. Back up your words with math. In any case, explain why my approach misuses them. Even if I were ignorant and you were right, I can't take your words seriously, especially if they're not supported by solid reasoning. If you can't justify why I'm misusing compound probabilities, why bother writing at all?
nomachine
March 19, 2025, 03:43:02 AM
I used to, when I was 13... But the Atlantic is quite dirty. Now I sit in front of a PC, tripping with Python and C code, trying to patch them up as best as I can.
Quote: Will you add more features to your Cyclone version? N keys to be scanned at once? Write progress into file (it isn't working yet)? Stride? Read range list to be scanned from file...
You'll feel like a child again when you retire, trust me.
Quote: can you quote what you are referring to for zahid888's work on it?
Just look at his profile...
Quote: I don't think so. I barely convinced him to do what we have now.
You're lucky, I caught a big fish, so I'm in a good mood. I did all of that. https://github.com/NoMachine1/Cyclone
Quote: Thanks, NoMachine, for your contributions regarding the Cyclone work. I won't be able to be here for a few more weeks, everyone take care.
You're welcome. No problem. Enjoy your time!
BTC: bc1qdwnxr7s08xwelpjy3cc52rrxg63xsmagv50fa8