|
|
BlackAKAAngel
Newbie

Activity: 26
Merit: 0
|
 |
September 21, 2025, 03:24:18 PM |
|
If anybody wants the VanitySearch build with a key-range option, I can send it by email; I don't want to publish it officially on GitHub. It works great, with a great speed ratio. "Trusting random Newbies with private key generation is how you risk your funds." Now I see how many people here are hackers. I'm sorry, but how stupid can you be? And you, Loyce, you are really very dumb. Today I'll make the code official on GitHub. It is a rewrite of https://github.com/JeanLucPons/VanitySearch; I just added the key range.
|
|
|
|
|
BlackAKAAngel
Newbie

Activity: 26
Merit: 0
|
 |
September 28, 2025, 10:31:21 PM |
|
|
|
|
|
|
WanderingPhilospher
Sr. Member
  

Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
September 30, 2025, 03:42:06 AM |
|
This is not a good one to use for the puzzles (not sure if that is what you are trying to use it for). It is a good one for actual vanity addresses, because it also finds keys outside the range you are searching in (endomorphism and symmetry/negation). So while you can technically use it for the puzzles, it will be much slower than FP's VS-Bitcrack that Ice mentioned above: it checks more than one key at a time, so the displayed speed looks faster, but most of those keys lie in other ranges. If you search compressed addresses only, divide the displayed speed by 6 to get the speed you are actually achieving within the range you are searching.
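For readers unfamiliar with the endomorphism/symmetry point above, here is a rough sketch (my own illustration, not VanitySearch's actual code) of the six related secp256k1 private keys that one candidate key effectively covers, using the standard group order and the well-known endomorphism scalar lambda:

```python
# Sketch (assumed behavior): the secp256k1 endomorphism plus point
# negation means one candidate key k effectively covers six scalars:
# +/-k, +/-lambda*k, +/-lambda^2*k (mod n). Constants below are the
# standard secp256k1 group order n and endomorphism scalar lambda
# (lambda^3 == 1 mod n).

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
LAM = 0x5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72

def related_keys(k):
    """The six scalars whose addresses one candidate key covers."""
    keys = []
    for m in (k, LAM * k % N, LAM * LAM * k % N):
        keys.append(m)
        keys.append(N - m)  # point negation maps private key k to n - k
    return keys

def in_range(k, lo, hi):
    return lo <= k <= hi

# Example: a puzzle-style range. Typically only 1 of the 6 scalars lands
# inside it, which is where the divide-by-6 rule of thumb comes from.
lo, hi = 1 << 65, (1 << 66) - 1
k = (1 << 65) + 12345
hits = sum(in_range(x, lo, hi) for x in related_keys(k))
```

Only the keys that land back inside [lo, hi] count for a puzzle search, so the in-range rate is roughly the displayed speed divided by six.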
|
|
|
|
|
BlackAKAAngel
Newbie

Activity: 26
Merit: 0
|
 |
September 30, 2025, 07:50:24 PM |
|
This is not a good one to use for the puzzles (not sure if that is what you are trying to use it for). It is a good one for actual vanity addresses, because it also finds keys outside the range you are searching in (endomorphism and symmetry/negation). So while you can technically use it for the puzzles, it will be much slower than FP's VS-Bitcrack that Ice mentioned above: it checks more than one key at a time, so the displayed speed looks faster, but most of those keys lie in other ranges. If you search compressed addresses only, divide the displayed speed by 6 to get the speed you are actually achieving within the range you are searching.
Did you check the code, or the changes that were made to it? I think it generates keys purely from the given key range.
|
|
|
|
|
ethanhunt2023
Newbie

Activity: 59
Merit: 0
|
 |
November 19, 2025, 08:02:47 AM |
|
If anybody wants the VanitySearch build with a key-range option, I can send it by email; I don't want to publish it officially on GitHub. It works great, with a great speed ratio.
When there are already so many versions on GitHub with such features (for example, https://github.com/FixedPaul/VanitySearch-Bitcrack), why would someone go through an email attachment? I wanted to know whether "FixedPaul/VanitySearch-Bitcrack" needs these settings for Linux: edit the Makefile and set the appropriate CUDA SDK and compiler paths for nvcc (or pass them as variables to make), then install libgmp:
sudo apt install -y libgmp-dev
CUDA = /usr/local/cuda-11.0
CXXCUDA = /usr/bin/g++
make gpu=1 CCAP=2.0 all
|
|
|
|
|
|
WhyFhy
|
 |
November 23, 2025, 06:05:42 AM Last edit: December 15, 2025, 03:40:38 PM by WhyFhy |
|
I've stumbled across what appears to be character clustering in P2SH Base58 addresses that I can't find documented anywhere. I'm calling them "compression basins." Ten specific characters (N, R, S, T, U, Y, b, e, g, m) exhibit higher co-occurrence rates than the other 48 Base58 characters when searching P2SH prefixes with VanitySearch. Using -c 3*turn*me*beg* on a 4090, I hit 3FtURnMeBegsZLGKQrxggR9BoZxmGNKMyR in 10-20 minutes; that's 12 basin characters forming 4 readable words. The natural clustering (including the unintended 's' in "begs" and the trailing "My") suggests this may not be just luck.
Theory: this appears to be a probabilistic advantage arising from Hash160 and Base58 encoding properties. Substring searches using these basin characters are noticeably more likely to converge.
Challenge: beat 12 characters / 4 words (intentional or not). The first person to post proof here gets $5 in BTC. Case-insensitive matches accepted; wildcards allowed; entries must use the basin character subset N, R, S, T, U, Y, b, e, g, m.
Has anyone encountered research on non-uniform character distribution in Base58 post-Hash160?
*Looking into it deeper, I believe these may be clusters resulting from searching the 31h1-to-3R2c range.
**Clarification: by "beat" I mean the baseline of 12 basin characters AND 4 readable words, with at least one margin exceeded: 5 words / 12 chars, or 4 words / 13 chars.
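The distribution question is easy to test empirically. Here is a small sketch (my own, using plain standard Base58Check, not anyone's thread code) that tallies character frequencies over randomly generated P2SH-style addresses:

```python
# Hedged empirical check: are Base58 characters in P2SH addresses
# non-uniform? We Base58Check-encode random 20-byte hash160 values with
# the mainnet P2SH version byte 0x05 and count character frequencies.
import hashlib, os
from collections import Counter

ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    num = int.from_bytes(data, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    # leading zero bytes encode as '1' (none here, version byte is 0x05)
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

counts = Counter()
for _ in range(2000):
    addr = base58check(b"\x05" + os.urandom(20))  # always starts with '3'
    counts.update(addr[1:])  # skip the fixed leading '3'

# With a uniform distribution, each of the 58 characters should appear
# roughly total/58 times; large persistent deviations would support the
# "compression basin" idea, small noise would not.
total = sum(counts.values())
```

If the frequencies come out near total/58 per character, the observed clustering is just selection bias from the patterns being searched; persistent large deviations would be the interesting result.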
|
|
|
|
WanderingPhilospher
Sr. Member
  

Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
December 15, 2025, 03:00:28 PM |
|
I've stumbled across what appears to be character clustering in P2SH Base58 addresses that I can't find documented anywhere. I'm calling them "compression basins." Ten specific characters (N, R, S, T, U, Y, b, e, g, m) exhibit higher co-occurrence rates than the other 48 Base58 characters when searching P2SH prefixes with VanitySearch. Using -c 3*turn*me*beg* on a 4090, I hit 3FtURnMeBegsZLGKQrxggR9BoZxmGNKMyR in 10-20 minutes; that's 12 basin characters forming 4 readable words. The natural clustering (including the unintended 's' in "begs" and the trailing "My") suggests this may not be just luck.
Theory: this appears to be a probabilistic advantage arising from Hash160 and Base58 encoding properties. Substring searches using these basin characters are noticeably more likely to converge.
Challenge: beat 12 characters / 4 words (intentional or not). The first person to post proof here gets $5 in BTC. Case-insensitive matches accepted; wildcards allowed; entries must use the basin character subset N, R, S, T, U, Y, b, e, g, m.
Has anyone encountered research on non-uniform character distribution in Base58 post-Hash160?
*Looking into it deeper, I believe these may be clusters resulting from searching the 31h1-to-3R2c range.
Would this count? 3BeMeByMyW2muK4VeQLUSV8dxVaypnRNi4 You said "12 characters / 4 words"; the "/" could mean either/or. The above is at least 5 words; that's why I ask.
|
|
|
|
|
|
WhyFhy
|
 |
December 15, 2025, 03:39:51 PM |
|
I've stumbled across what appears to be character clustering in P2SH Base58 addresses that I can't find documented anywhere. I'm calling them "compression basins." Ten specific characters (N, R, S, T, U, Y, b, e, g, m) exhibit higher co-occurrence rates than the other 48 Base58 characters when searching P2SH prefixes with VanitySearch. Using -c 3*turn*me*beg* on a 4090, I hit 3FtURnMeBegsZLGKQrxggR9BoZxmGNKMyR in 10-20 minutes; that's 12 basin characters forming 4 readable words. The natural clustering (including the unintended 's' in "begs" and the trailing "My") suggests this may not be just luck.
Theory: this appears to be a probabilistic advantage arising from Hash160 and Base58 encoding properties. Substring searches using these basin characters are noticeably more likely to converge.
Challenge: beat 12 characters / 4 words (intentional or not). The first person to post proof here gets $5 in BTC. Case-insensitive matches accepted; wildcards allowed; entries must use the basin character subset N, R, S, T, U, Y, b, e, g, m.
Has anyone encountered research on non-uniform character distribution in Base58 post-Hash160?
*Looking into it deeper, I believe these may be clusters resulting from searching the 31h1-to-3R2c range.
Would this count? 3BeMeByMyW2muK4VeQLUSV8dxVaypnRNi4 You said "12 characters / 4 words"; the "/" could mean either/or. The above is at least 5 words; that's why I ask.
Clarification: by "beat" I mean the baseline of 12 basin characters AND 4 readable words, with at least one margin exceeded: 5 words / 12 chars, or 4 words / 13 chars. But this entry is close and lightly supports the idea.
|
|
|
|
WanderingPhilospher
Sr. Member
  

Activity: 1498
Merit: 286
Shooters Shoot...
|
 |
December 16, 2025, 03:02:30 AM |
|
Clarification: by "beat" I mean the baseline of 12 basin characters AND 4 readable words, with at least one margin exceeded: 5 words / 12 chars, or 4 words / 13 chars. But this entry is close and lightly supports the idea.
So then this would count? 3NeQ be8hLFNwYx betbeTwC5 beS meTFji5G
|
|
|
|
|
|
WhyFhy
|
 |
December 16, 2025, 05:10:39 AM |
|
Clarification: by "beat" I mean the baseline of 12 basin characters AND 4 readable words, with at least one margin exceeded: 5 words / 12 chars, or 4 words / 13 chars. But this entry is close and lightly supports the idea.
So then this would count? 3NeQ be8hLFNwYx betbeTwC5 beS meTFji5G
This qualifies, and it looks like it took hardly any time, either. PM me a wallet and I'll shoot you the $5.
|
|
|
|
pliego
Full Member
 

Activity: 168
Merit: 108
Rainbet #1 non-kyc crypto casino & sportsbook
|
 |
December 16, 2025, 04:00:10 PM |
|
This whole theory about character clustering in Base58 is actually pretty wild. I've always felt like certain patterns in P2SH prefixes hit way faster than they should, and seeing it laid out as "compression basins" explains a lot of the weird streaks in my own logs. I spent a few hours messing with the settings on my rig to see if I could force a longer string of those specific characters, and it didn't feel like pure luck. It's one of those deep-dive technical details that makes you realize how much we still don't fully document about address generation. Definitely more interesting than just watching hashes fly by.
|
|
|
|
Morexl
Newbie

Activity: 11
Merit: 1
|
 |
December 19, 2025, 04:17:53 AM |
|
How do I use the option "-rp privkey partialkeyfile: Reconstruct final private key(s) from partial key(s) info"?
And what happens if it finds too many keys with the same prefix?
|
|
|
|
|
|
kTimesG
|
 |
December 19, 2025, 03:14:55 PM |
|
How do I use the option "-rp privkey partialkeyfile: Reconstruct final private key(s) from partial key(s) info"?
And what happens if it finds too many keys with the same prefix?
Have you bothered to read the README? Literally two scrolls down from where your quote comes from, it explains how to "Generate a vanity address for a third party using split-key", which shows that "-rp" is used by Alice to reconstruct the final private key from the previously generated split-key info and Bob's found key. Who would have thought!
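For anyone skimming, the split-key scheme the README describes boils down to simple modular arithmetic. A minimal sketch of the principle (my own illustration under that assumption; the actual -rp file format and any extra details of VanitySearch's implementation are not shown):

```python
# Sketch of the split-key principle behind "-rp" (assumed math, not the
# exact VanitySearch file format). Alice keeps a secret scalar a and
# shares only the public point A = a*G. Bob searches offsets b until the
# address of A + b*G matches the vanity pattern, then sends b back.
# Alice reconstructs the full private key as (a + b) mod n; Bob cannot
# compute it without knowing a.
import secrets

# secp256k1 group order
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

a = secrets.randbelow(N - 1) + 1   # Alice's partial key (kept private)
b = secrets.randbelow(N - 1) + 1   # Bob's found key (from the search)

final_priv = (a + b) % N           # what "-rp" reconstructs for Alice

# Point addition mirrors scalar addition: (a+b)*G == a*G + b*G, so the
# reconstructed key controls exactly the vanity address Bob found.
```

This is why the search can be outsourced safely: Bob only ever sees A and his own offset b, never the combined secret.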
|
Off the grid, training pigeons to broadcast signed messages.
|
|
|
Garys27
Newbie

Activity: 12
Merit: 0
|
 |
December 25, 2025, 02:26:48 AM |
|
I've stumbled across what appears to be character clustering in P2SH Base58 addresses that I can't find documented anywhere. I'm calling them "compression basins." Ten specific characters (N, R, S, T, U, Y, b, e, g, m) exhibit higher co-occurrence rates than the other 48 Base58 characters when searching P2SH prefixes with VanitySearch. Using -c 3*turn*me*beg* on a 4090, I hit 3FtURnMeBegsZLGKQrxggR9BoZxmGNKMyR in 10-20 minutes; that's 12 basin characters forming 4 readable words. The natural clustering (including the unintended 's' in "begs" and the trailing "My") suggests this may not be just luck.
Theory: this appears to be a probabilistic advantage arising from Hash160 and Base58 encoding properties. Substring searches using these basin characters are noticeably more likely to converge.
Challenge: beat 12 characters / 4 words (intentional or not). The first person to post proof here gets $5 in BTC. Case-insensitive matches accepted; wildcards allowed; entries must use the basin character subset N, R, S, T, U, Y, b, e, g, m.
Has anyone encountered research on non-uniform character distribution in Base58 post-Hash160?
*Looking into it deeper, I believe these may be clusters resulting from searching the 31h1-to-3R2c range.
**Clarification: by "beat" I mean the baseline of 12 basin characters AND 4 readable words, with at least one margin exceeded: 5 words / 12 chars, or 4 words / 13 chars.
When using wildcards I get this error; is it going to affect the search, or is it normal? 3*Be*MY*
GPUEngine: Wrong totalPrefix 0!=1249031367! (repeated four times)
|
|
|
|
|
|