bibilgin
Newbie
Offline
Activity: 275
Merit: 0
December 10, 2025, 07:14:04 PM
....
Did the topic I discussed with someone else upset you? I have no intention of offering any support, information, or help to anyone here. (Just like you. I'm not writing with AI.) I communicate with the people I support, share with, and exchange information with via Telegram, and I can say that I am a little more SKILLED than you in terms of TRUST, INFORMATION, and EXCHANGE. Now, keep counting your bits with AI. As you said, this is NOT YOUR FORUM page. Don't read or see the topics I write. Because I don't even read the EMPTY (AI) topics you write.
kTimesG
December 10, 2025, 07:32:53 PM
I have no intention of offering any support, information, or help to anyone here.
My goal is to provide more meaningful information to help newcomers or those with missing information, rather than having them reread 609 pages.
Listen, you and a few others are the main reason I'm not gonna publish actual software for you and others to fool around with, any time soon. Be happy with FixedPaul's app and don't ever ask about better software, expecting anything in return, with your embarrassing insults and attitude towards everyone; your contribution to global warming is welcome. Using other people's code while being a crypto expert skilled in having no idea how to look at a H160's binary value is a very good indicator of your actual skills and how much "help" you can ever give newcomers. My ignore list highly welcomes you again. Now, do your usual thing and throw in a few nice insults or whatever nonsense.
Off the grid, training pigeons to broadcast signed messages.
bibilgin
Newbie
Offline
Activity: 275
Merit: 0
December 10, 2025, 07:52:40 PM
.....
You've already added me to the BLOCK list. Is this the 6th or 7th time? We've argued before. You showed that you did nothing, just watched, and randomly wrote messages to people via AI. I use Telegram to avoid helping or giving information to anyone here. I recommend you do the same. Because you made a MISTAKE at the beginning, admit it. (YOU APOLOGIZED.) You hid behind nonsense like "I can't find it with low-end hardware," "luck," "probability," "low probability," etc. But you didn't accept the TRUTH. You think we can only find it with HIGH-END hardware (by INVESTING). As I said before, I have no expenses. 3-4 RTX 3070 ti graphics cards (my BUSINESS PARTNER pays the electricity bill) and renting 4x 4090 GPUs for $10-20 a week (for the period I want to check quickly). With this hardware, I found the closest prefix in the world. That's why you might be jealous. Because you might be good at Artificial Intelligence, but I'm good at Cryptography. (Mathematics)
Torin Keepler
Newbie
Offline
Activity: 22
Merit: 0
December 10, 2025, 07:52:50 PM
I've come to the conclusion that knowledge brings composure. A person who understands a topic well is less likely to be driven by emotions and less inclined to resort to insults, belittling, or attempts to show superiority. That’s why I suggest we treat each other with respect, stay calm in discussions, and aim for constructive dialogue. I also wanted to say a few words about the $200 challenge I announced earlier. The task is to find a private key within a 115-bit range based on a public key. The challenge started 6 days ago. At the moment, the speed is 70,000 MKeys/s, and the number of participants has reached 17. Based on rough estimates, to complete it within 15 days, the average speed would need to be around 240,000 MKeys/s. If you're interested in joining the challenge, I’ll be glad to see you in the group. And whether you choose to trust me regarding the $200 prize is entirely up to you. P.S. This isn’t the first challenge of this kind. If you’d like to learn how the previous one went and who the winners were, all the information is available in the group. https://t.me/puzzle135/14976
E36cat
Newbie
Offline
Activity: 53
Merit: 0
December 11, 2025, 02:43:48 PM
I have no intention of offering any support, information, or help to anyone here.
My goal is to provide more meaningful information to help newcomers or those with missing information, rather than having them reread 609 pages.
Listen, you and a few others are the main reason I'm not gonna publish actual software for you and others to fool around with, any time soon. Be happy with FixedPaul's app and don't ever ask about better software, expecting anything in return, with your embarrassing insults and attitude towards everyone; your contribution to global warming is welcome. Using other people's code while being a crypto expert skilled in having no idea how to look at a H160's binary value is a very good indicator of your actual skills and how much "help" you can ever give newcomers. My ignore list highly welcomes you again. Now, do your usual thing and throw in a few nice insults or whatever nonsense.

All I see, for years, is you mumbling the same AI things: you did not publish anything, you did not find any key with all your AI knowledge, and you will not publish or help others. Why do you keep writing in the forum? To argue with people all the time? My opinion is that nobody needs you here.
kTimesG
December 11, 2025, 03:48:19 PM
All I see, for years, is you mumbling the same AI things: you did not publish anything, you did not find any key with all your AI knowledge, and you will not publish or help others. Why do you keep writing in the forum? To argue with people all the time? My opinion is that nobody needs you here.
You are 100% correct, nobody needs me here. Regarding the other things: if you only look in a trash bin, you're unlikely to find healthy food. In short, all your affirmations go against provable reality, if you actually care to look in serious places. But I know you won't, which raises the obvious question of why you bothered to post this, and how it helps anyone with anything.

Now, my real question to you and your boring friends here is: what in the world makes you think I am writing using AI, and what even minuscule evidence do you have to support this (already stupid) claim? Everything I ever said was backed up by actual research papers and actual common sense, with actual real-life evidence and proofs, not AI-supported fantasies, like the rest of the 99% of what's written in this thread. In my view, the only thing AI managed to do when it comes to this thread is bring some people into a mental idiocracy, and the funny thing is: they actually believe in the shit AI makes them believe in, when reality (the rational one, not the one inside a mental institution) is in total opposition to that bullshit.
cctv5go
Newbie
Offline
Activity: 50
Merit: 0
December 11, 2025, 04:11:13 PM
P.S. Hello, everyone. I really can't take this anymore; we quarrel every day. I recently developed and improved a set of algorithms, and it will take only 15 days and $10,000 to solve puzzle #71, believe me. Donate: 13gLHZJYcCJVSCoSuWbAFiYPU3X7kv47iH
mjojo
Newbie
Offline
Activity: 81
Merit: 0
December 11, 2025, 04:56:51 PM
Does anyone have a prefix that starts with 1PWo3JeB9jrG?
I'd like to buy it.
1PWo3JeB9jrGLDTmsp45h1pDXXtb7zisQH
1PWo3JeB9jrGFsveoYdzAukjcwp645X7Zx
Hi @bib... just make sure: the above one leads 79b (like the owner said in this forum), and the below one I just guess is also in 7 too.
analyticnomad
Newbie
Online
Activity: 78
Merit: 0
December 11, 2025, 05:06:32 PM Last edit: December 11, 2025, 05:39:12 PM by analyticnomad
All I see, for years, is you mumbling the same AI things: you did not publish anything, you did not find any key with all your AI knowledge, and you will not publish or help others. Why do you keep writing in the forum? To argue with people all the time? My opinion is that nobody needs you here.
You are 100% correct, nobody needs me here. Regarding the other things: if you only look in a trash bin, you're unlikely to find healthy food. In short, all your affirmations go against provable reality, if you actually care to look in serious places. But I know you won't, which raises the obvious question of why you bothered to post this, and how it helps anyone with anything. Now, my real question to you and your boring friends here is: what in the world makes you think I am writing using AI, and what even minuscule evidence do you have to support this (already stupid) claim? Everything I ever said was backed up by actual research papers and actual common sense, with actual real-life evidence and proofs, not AI-supported fantasies, like the rest of the 99% of what's written in this thread. In my view, the only thing AI managed to do when it comes to this thread is bring some people into a mental idiocracy, and the funny thing is: they actually believe in the shit AI makes them believe in, when reality (the rational one, not the one inside a mental institution) is in total opposition to that bullshit.

Why is it so difficult for people to understand that some people have a higher-level knowledge of these subjects? They don't like the way you say things even though you're right, so they respond emotionally when they reply to you. Typical.
Bram24732
Member
Offline
Activity: 224
Merit: 22
December 11, 2025, 06:11:10 PM
But you’re performing for the audience because your ego is hurt after I pointed out your complete incompetence.
Listen, dude, first of all, you don't know the difference between Rho and Kangaroo. You made your account yesterday, and today you're asking GPT for your first readings on basic things. It's OK, we all had a day zero for learning.

Me? Well, I had an actual ECDLP competition one year ago; some guy got $500 for breaking an 80-bit key in 15 minutes. I also broke a 56-bit ECDSA signature challenge just a few months ago, for which no software even exists publicly. It took two weeks and actual hundreds of GPUs running around the clock to solve the problem. Should I mention that I didn't need to convince stupid idiots to run my code on their machines, but rather implemented a distributed, low-latency, interruptible, fully working system?

As for Kangaroo & co., I have around 20 or so versions of it, written from scratch, ranging from Python to C to CUDA, using all sorts of shit (symmetry, multiple types of walks, endomorphism, etc.), and I made literally tens of thousands of simulations to see how each one works and what parameters work better than others. So I know my shit pretty well, and guess what? It's also running as we speak. Never mind that there was once a bot competition in the mempool for breaking a 70-or-so-bit key. My bot cracked it in 3 seconds. Just as it can crack any 80-bit key in a couple of seconds. Any key.

You? You have no clue what the difference is between three totally different categories of algorithms, and you are hiding weird Python code in EXE wrappers for unknown reasons. We should really compare our respective interruptible systems. Might be some nice corner cases we mutually benefit from sharing with each other.
kTimesG
December 11, 2025, 09:08:31 PM
We should really compare our respective interruptible systems. Might be some nice corner cases we mutually benefit from sharing with each other.
What do you mean by corner cases? If it's coded tightly, suddenly having some GPU disappearing for good in the middle of some processing can simply be offset to the next free available worker. I explained some time ago that dividing a range of whatever size into N (threads x jumpers) work partitions, of M sub-partitions each (number of required launches) can be done with only one or two EC scalar multiplications to compute the initial starting points before whatever starting launch number is desired. Because at any given time, the points of all threads from all blocks are all separated by the exact same distance, and they all have the same offset from their original start points. This scales up to millions of initial starting points being ready very, very fast. This allows quickly resuming an interrupted job somewhere else by an identical GPU, as long as the progress (step S < M finished) is recorded incrementally, which is just a few bytes over the wire for each worker, every few seconds.
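The partition arithmetic described above can be sketched in a few lines (a minimal illustration with plain integers, not kTimesG's actual code; all names and parameters are hypothetical). A real implementation would map each scalar offset to an EC point with one or two scalar multiplications, as described, but the bookkeeping itself is just this:

```python
# Assumed layout (a sketch, not the actual software): a range is split
# among N workers, each worker covering M sequential launches of K keys.
# Every worker is separated from its neighbor by the same fixed span,
# so resuming worker W at launch S needs only this arithmetic.

def worker_start(range_start: int, keys_per_launch: int,
                 launches_per_worker: int, worker_id: int,
                 launch_no: int) -> int:
    """Scalar offset where `worker_id` begins launch `launch_no` (0-based)."""
    worker_span = keys_per_launch * launches_per_worker
    return range_start + worker_id * worker_span + launch_no * keys_per_launch

# Resume example: worker 7 died after finishing launch 41; a replacement
# GPU picks up at launch 42 from a single computed starting offset.
resume_point = worker_start(0x2000000000, 1 << 20, 1 << 10, 7, 42)
```

Recording only the last finished launch number per worker (a few bytes over the wire) is enough to restart anywhere, since the union of all (worker, launch) spans covers the range contiguously.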
whistle307194
Copper Member
Newbie
Offline
Activity: 19
Merit: 0
December 12, 2025, 12:58:00 AM
Hello, I have a question regarding BitCrack from this repository: https://github.com/brichard19/BitCrack. I'm currently testing a rather unusual approach that some of you may find hard to believe, but due to my limited understanding, I'm having trouble explaining or modifying it properly, so I'm hoping to get some help here.

When the program starts, it initializes the GPU based on the parameters -threads, -blocks, and -points. This GPU initialization phase takes anywhere from half a second (when testing low-bit ranges) to several seconds when using large ranges, high bit values, or full GPU utilization. After this initialization, the actual computation begins.

My issue is that whenever the program reaches the end of a range and stops, I have to start it again manually. But each time I restart it, the GPU has to reinitialize from scratch, which wastes time. My question is: is it possible to keep the GPU running at 100% continuously, and when the program reaches the end of a range, have it automatically switch to the next range without reinitializing the GPU?

For example, let's say I run the program starting at 200000000F00000000 with some stride and range; each full pass from the current start to the range end takes about one minute. When I restart the program, the GPU preparation phase repeats again... I know it could be wrapped in a Python script that relaunches BitCrack with the desired parameters, but that still does not solve the main problem I am trying to avoid.
SovereignScott
Newbie
Offline
Activity: 6
Merit: 0
December 12, 2025, 03:18:25 AM
Greetings. I don't know if it's of much help, but I think what you're referring to is part of the process, assuming you would be scanning a certain range dimension.
Maybe one has to consider the range dimensions it was used to scan back then...
Why not try to adapt your unusual method to find a sweet spot...?
Anyhow, I've also been testing my own unusual method (I guess like many), and for that purpose, due to my modest setup, I found a way to counter FixedPaul's version of VS, considering what some of the people I read say about it missing a few keys. BitCrack seems to be missing some updates. I don't know.
fixedpaul
December 12, 2025, 09:08:22 AM
Hello, I have a question regarding BitCrack from this repository: https://github.com/brichard19/BitCrack. I'm currently testing a rather unusual approach that some of you may find hard to believe, but due to my limited understanding, I'm having trouble explaining or modifying it properly, so I'm hoping to get some help here. When the program starts, it initializes the GPU based on the parameters -threads, -blocks, and -points. This GPU initialization phase takes anywhere from half a second (when testing low-bit ranges) to several seconds when using large ranges, high bit values, or full GPU utilization. After this initialization, the actual computation begins. My issue is that whenever the program reaches the end of a range and stops, I have to start it again manually. But each time I restart it, the GPU has to reinitialize from scratch, which wastes time.

I believe what you are referring to is the initialization of the starting points for each GPU thread. Basically, each thread analyzes a subrange and must have a starting point, i.e. a public key. This initialization phase is performed by the CPU, and the time it takes depends linearly on the degree of parallelism: the number of threads, which in BitCrack you define with -threads (per block) and -blocks. You can also use a total number of threads that exceeds the GPU's capabilities, which will then obviously be serialized, but this can still have beneficial effects on the overall speed of the GPU kernel.

The problem with that BitCrack repository is that the initialization phase is very inefficient: each starting point is computed starting from its private key, making it practically unusable with a high degree of parallelism. What I did in my repository was rewrite this initialization phase of the starting points from scratch, making it much more efficient (using the same principles used in the GPU kernel, nothing new). It should take only a few tenths of a second even with powerful GPUs and weak CPUs.

In my repo, -threads and -blocks are hardcoded for simplicity; the preset values are the ones that, on average, I've seen make the computation most "efficient". Remember that this initialization time depends only on the number of threads, not on the size of the range. On the contrary, the larger the range, the less the initialization phase weighs, as a percentage of the total time. You can try it and see whether this somehow solves your problem, or at least makes it more negligible.

I found a way to counter FixedPaul's version of VS, considering what some of the people I read say about it missing a few keys. BitCrack seems to be missing some updates. I don't know.
Which keys would it miss?
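The speedup being described can be sketched as follows (an illustrative pure-Python sketch, assuming this is roughly the principle behind the rewrite, not fixedpaul's actual code): compute the first starting point with a single scalar multiplication, then derive each subsequent thread's starting point by adding one precomputed stride point, instead of doing a full scalar multiplication per thread.

```python
# Toy secp256k1 arithmetic to show the incremental starting-point idea.
P  = 2**256 - 2**32 - 977                      # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def add(p, q):
    """Affine point addition on y^2 = x^3 + 7 (None = point at infinity)."""
    if p is None: return q
    if q is None: return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0: return None
    if p == q:
        lam = 3 * p[0] * p[0] * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)

def mul(k, p=G):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1: r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

def starting_points(range_start, stride, count):
    """One scalar mult for the first point, then count-1 cheap additions."""
    step = mul(stride)        # constant stride point, computed once
    pt = mul(range_start)     # the only per-range scalar multiplication
    out = [pt]
    for _ in range(count - 1):
        pt = add(pt, step)    # next thread's start = previous + step
        out.append(pt)
    return out
```

Each element i of the result equals mul(range_start + i * stride), but at the cost of one point addition instead of one ~256-double-and-add scalar multiplication, which is why initialization time then scales gently with the thread count.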
SovereignScott
Newbie
Offline
Activity: 6
Merit: 0
December 12, 2025, 09:47:34 AM
I found a way to counter FixedPaul's version of VS, considering what some of the people I read say about it missing a few keys. BitCrack seems to be missing some updates. I don't know.
Which keys would it miss?

I was curious about it too, as some users mention, but from my experience, I believe so far none. Though I got a PM saying it would miss some keys at the end. I'm no expert in programming, far from it. I noticed it's the quickest to initialize, so even when I set up three GPUs to work on the same range, if I slice it into smaller chunks it goes by smoothly. Compared to BC, with the same size chunks (like 2^38, for example), what BC finds, yours finds too. Honestly, I wasn't able to understand why some users say that about it.
whistle307194
Copper Member
Newbie
Offline
Activity: 19
Merit: 0
December 12, 2025, 11:05:09 AM Last edit: December 12, 2025, 11:56:29 AM by whistle307194
Hello, I have a question regarding BitCrack from this repository: https://github.com/brichard19/BitCrack. I'm currently testing a rather unusual approach that some of you may find hard to believe, but due to my limited understanding, I'm having trouble explaining or modifying it properly, so I'm hoping to get some help here. When the program starts, it initializes the GPU based on the parameters -threads, -blocks, and -points. This GPU initialization phase takes anywhere from half a second (when testing low-bit ranges) to several seconds when using large ranges, high bit values, or full GPU utilization. After this initialization, the actual computation begins. My issue is that whenever the program reaches the end of a range and stops, I have to start it again manually. But each time I restart it, the GPU has to reinitialize from scratch, which wastes time.
I believe what you are referring to is the initialization of the starting points for each GPU thread. Basically, each thread analyzes a subrange and must have a starting point, i.e. a public key. This initialization phase is performed by the CPU, and the time it takes depends linearly on the degree of parallelism: the number of threads, which in BitCrack you define with -threads (per block) and -blocks. You can also use a total number of threads that exceeds the GPU's capabilities, which will then obviously be serialized, but this can still have beneficial effects on the overall speed of the GPU kernel. The problem with that BitCrack repository is that the initialization phase is very inefficient: each starting point is computed starting from its private key, making it practically unusable with a high degree of parallelism. What I did in my repository was rewrite this initialization phase of the starting points from scratch, making it much more efficient (using the same principles used in the GPU kernel, nothing new). It should take only a few tenths of a second even with powerful GPUs and weak CPUs.
In my repo, -threads and -blocks are hardcoded for simplicity; the preset values are the ones that, on average, I've seen make the computation most "efficient". Remember that this initialization time depends only on the number of threads, not on the size of the range. On the contrary, the larger the range, the less the initialization phase weighs, as a percentage of the total time. You can try it and see whether this somehow solves your problem, or at least makes it more negligible.
I found a way to counter FixedPaul's version of VS, considering what some of the people I read say about it missing a few keys. BitCrack seems to be missing some updates. I don't know.
Which keys would it miss?

I like the way you've explained it. Yeah, I admit I'm not a technician, but I'm strongly addicted to ChatGPT at the moment. I modified the kernel and some other functions, and I managed to get an overview of the created starting points; the program even wrote a txt file with all of them, confirming my expectation back then. So here we go: I used an older GPU that pulls ~1000 MKeys/s, and the program created exactly 130k starting points from the given initial range start. As I understand it, it should create starting points sequentially this way: 200000000000000000, 200000000000000001, 200000000000000002, ... 20000000000000000f, 2000000000000000a1, 2000000000000000a2, ... 20000000000000034a, all the way up to the total number of starting points created initially. Indeed, exactly the way I expected it to work (well, ChatGPT did, not me...). So the last starting point should look like this: 20000000000001fbd0, the initial start value plus 130k in decimal. I chose a target (an address far from the start point) whose key falls in between these starting points, and I thought that after N additions it would have to be hit. Yeah, in my dreams... No matter what I tried, the program always misses. When I set the initial range start to a value that is required to hit my target, then of course it hits after a while, but I think that is not the point. If the GPU creates so many starting points that they should basically cover "that one", then why would it miss anyway?
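One possible explanation, sketched below under an assumed layout (this is a guess about the modified code, not a statement about BitCrack itself): if thread i starts at start + i*stride and each step advances by num_points*stride, the visited keys are exactly start + j*stride, so a target whose offset from the start is not an exact multiple of the stride is never touched, no matter how many starting points exist.

```python
# Hedged sanity check for the miss described above, assuming the common
# strided layout: visited keys are start, start + stride, start + 2*stride, ...

def is_covered(start: int, stride: int, num_keys: int, target: int) -> bool:
    """True iff `target` is among the `num_keys` visited keys."""
    if target < start:
        return False
    offset = target - start
    return offset % stride == 0 and offset // stride < num_keys

start = 0x200000000000000000
# With stride 1, every key in [start, start + 130_000) is visited...
assert is_covered(start, 1, 130_000, start + 0x1FBD0 - 1)
# ...but with any stride > 1, keys between the visited ones are skipped.
assert not is_covered(start, 2, 130_000, start + 3)
```

If the test run used a stride larger than 1, a target key sitting "between" two starting points is skipped by design, which would match the observed behavior.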
Bram24732
Member
Offline
Activity: 224
Merit: 22
December 12, 2025, 01:35:29 PM
We should really compare our respective interruptible systems. Might be some nice corner cases we mutually benefit from sharing with each other.
What do you mean by corner cases? If it's coded tightly, suddenly having some GPU disappearing for good in the middle of some processing can simply be offset to the next free available worker. I explained some time ago that dividing a range of whatever size into N (threads x jumpers) work partitions, of M sub-partitions each (number of required launches) can be done with only one or two EC scalar multiplications to compute the initial starting points before whatever starting launch number is desired. Because at any given time, the points of all threads from all blocks are all separated by the exact same distance, and they all have the same offset from their original start points. This scales up to millions of initial starting points being ready very, very fast. This allows quickly resuming an interrupted job somewhere else by an identical GPU, as long as the progress (step S < M finished) is recorded incrementally, which is just a few bytes over the wire for each worker, every few seconds.

I'm quite sure we're both more than capable of managing the interruptible distribution of the workload. I was more thinking about how to manage renting on various platforms and their API specificities, the renting latency, etc. I'm quite sure you've experienced that their API design / rate limiting gets annoying quite fast? That's what I was referring to.
kTimesG
December 12, 2025, 02:09:29 PM
So as I understand correctly it should create starting points sequentially this way: 200000000000000000 200000000000000001 200000000000000002 ...
For a range [A, B] the fastest way to compute it entirely is starting from the middle of it and adding all the constant points (half of the range size), using the shared inverse to get each (left & right) points, and finally moving off to the next middle point of a new range (for example, but not necessarily, the immediate next range). Rinse and repeat as many times as needed. So those starting points make no sense, but I'm not surprised that it's something ChatGPT would gladly suggest. They should all be middle points of distinct partitions inside some larger range (your target range). There are various valid options to make this choice, in such a way that the entire target range is covered efficiently, eventually.

I was more thinking about how to manage renting on various platforms and their API specificities, the renting latency, etc. I'm quite sure you've experienced that their API design / rate limiting gets annoying quite fast? That's what I was referring to.
Never needed more than 50 or so actual instances at once, so my only interaction with such APIs was a script that destroyed all instances automatically once they were no longer needed. I kinda hunted the cheapest bid rates manually, so yeah, I had to do some clicks once in a while. I'm sure the bidding can be automated as well, but it seems to be a big problem to find a lot of cheap computing power... and at the end of the day it all comes down to what budget is allocated.
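The "shared inverse" mentioned above is commonly implemented with Montgomery batch inversion: invert n field elements at the price of a single modular inversion plus about 3n multiplications. A hedged sketch (illustrative Python, not the actual GPU code):

```python
# Montgomery batch inversion over the secp256k1 field: all the (x2 - x1)
# denominators of a batched left/right point-addition pass can be
# inverted together with one expensive inversion.

P = 2**256 - 2**32 - 977   # secp256k1 field prime

def batch_inverse(values, p=P):
    """Return [v^-1 mod p for v in values] using a single inversion."""
    n = len(values)
    prefix = [1] * (n + 1)
    for i, v in enumerate(values):      # prefix[i+1] = v0 * v1 * ... * vi
        prefix[i + 1] = prefix[i] * v % p
    inv = pow(prefix[n], -1, p)         # the one expensive inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):      # peel off one factor at a time
        out[i] = prefix[i] * inv % p    # = (v0..v(i-1)) / (v0..vi) = vi^-1
        inv = inv * values[i] % p
    return out
```

This is what makes computing thousands of symmetric neighbors of a midpoint cheap: the slope of every addition in the batch reuses the same shared inversion.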
Bram24732
Member
Offline
Activity: 224
Merit: 22
December 12, 2025, 05:10:53 PM
So as I understand correctly it should create starting points sequentially this way: 200000000000000000 200000000000000001 200000000000000002 ...
For a range [A, B] the fastest way to compute it entirely is starting from the middle of it and adding all the constant points (half of the range size), using the shared inverse to get each (left & right) points, and finally moving off to the next middle point of a new range (for example, but not necessarily, the immediate next range). Rinse and repeat as many times as needed. So those starting points make no sense, but I'm not surprised that it's something ChatGPT would gladly suggest. They should all be middle points of distinct partitions inside some larger range (your target range). There are various valid options to make this choice, in such a way that the entire target range is covered efficiently, eventually. I was more thinking about how to manage renting on various platforms and their API specificities, the renting latency, etc… I’m quite sure you’ve experienced that their API design / rate limiting gets annoying quite fast ? That’s what I was referring to.
Never needed more than 50 or so actual instances at once, so my only interaction with such APIs was a script that destroyed all instances automatically once they were no longer needed. I kinda hunted the cheapest bid rates manually, so yeah, I had to do some clicks once in a while. I'm sure the bidding can be automated as well, but it seems to be a big problem to find a lot of cheap computing power... and at the end of the day it all comes down to what budget is allocated.

Oh ok, I might have overestimated your automation then. I was running thousands for 67 and 68, so automation and finding the cheapest programmatically made a lot of sense. If you ever need to do that, hit me up and I'll share what I learned the long (and mostly annoying) way.
uvindele
Newbie
Offline
Activity: 5
Merit: 0
December 12, 2025, 05:32:47 PM
..for 135.. If we increase the distance between the jumps, could that make it possible to obtain a collision that is “close” to the target key? We can get a collision that doesn’t match the key we’re looking for, but it can be near it — and maybe that could allow us to “narrow down” the range in which the actual key might be located?