Author Topic: Bitcoin puzzle transaction ~32 BTC prize to who solves it  (Read 358706 times)
k2laci (Member, Activity: 183, Merit: 10)
December 19, 2025, 07:14:05 AM  #12261

Alright guys, I hear you and I understand the skepticism. Thank you for your remarks. I am well aware that these systems can be insecure and that a malicious actor could sabotage the operation. However, what is your advice for regular participants like myself who don't own a GPU farm? Solo searching equates to thousands of years of computation. A pool offers completely different odds, as long as there isn't a bad actor to spoil it. We are simply lottery participants.


Unfortunately, there is no perfect solution. Even now, when I look at btcpuzzle and honestpool, both have their advantages and disadvantages. But if a new pool appears every year with its own separate database, it makes no sense. Moreover, those who are scanning now have a lower chance than those who join later, for example when 50% of the range has already been scanned.
Bram24732 (Member, Activity: 224, Merit: 22)
December 19, 2025, 07:55:30 AM (last edit: 08:11:06 AM)  #12262

Quote:
Worse yet: somebody knows all the required proof keys beforehand, since they were chosen. This is like trying to prove to someone that you know the answer to a problem when the problem was created from the known answer.

Quote from: HoMLoL
Please note that verification addresses are generated on-the-fly when a user clicks the "Check" button or sends an API request. This is done to avoid putting a sudden heavy load on the database, allowing it to handle the load gradually.

If a check has already been initiated (via the button or by selecting a range via the API), the addresses generated during that initial instance are displayed.

For security and fairness reasons, I do not store the private keys for verification addresses until the range completion is confirmed. Initially, only the addresses are created and saved for future verification.

When private keys for the verification addresses are submitted, the server generates compressed addresses from them and compares these against the database. If all verification addresses match, the corresponding range is marked as verified.

Therefore, no one knows in advance which private keys correspond to the verification addresses.  Wink

The screenshots show the database structure and how it is populated.

As you can see, private keys are only recorded for those ranges that have been submitted for verification and confirmed.

[screenshots of the database tables omitted]

Geez, just require every submitter to send the private keys of all addresses whose hash starts with n zero bits.
Adjust n so that spoofing a range is statistically impossible, but also so that verifying server-side is easy enough. Something like n = 40, maybe.

Not only can the system then not be meaningfully manipulated, but you'll also be able to prove the work after the fact, should you want to merge efforts with another pool or group.

If n is small enough, you could even produce a ZK proof of your pool's progress.
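For concreteness, here is a minimal sketch of the verification step this scheme needs, in pure Python (the helper names and parameters are illustrative, not any pool's actual code; note that hashlib's "ripemd160" is missing from some OpenSSL 3.x builds): derive the compressed public key from each submitted private key, hash it, and check the leading-zero-bit condition plus range membership.

Code:
# Sketch of server-side zero-bit proof checking (illustrative only).
import hashlib

P  = 2**256 - 2**32 - 977  # secp256k1 field prime
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(a, b):
    # Affine addition/doubling on y^2 = x^3 + 7; None is the point at infinity.
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def pubkey_compressed(k: int) -> bytes:
    # Double-and-add scalar multiplication k*G, then SEC1 compressed encoding.
    point, addend = None, (Gx, Gy)
    while k:
        if k & 1:
            point = ec_add(point, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    x, y = point
    return bytes([2 + (y & 1)]) + x.to_bytes(32, "big")

def hash160(data: bytes) -> bytes:
    # "ripemd160" may be unavailable in some OpenSSL 3.x builds of hashlib.
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def leading_zero_bits(h: bytes) -> int:
    return 160 - int.from_bytes(h, "big").bit_length()

def is_valid_proof(k: int, range_start: int, range_size: int, n: int) -> bool:
    # A proof key must lie inside the claimed range and hash to >= n zero bits.
    if not (range_start <= k < range_start + range_size):
        return False
    return leading_zero_bits(hash160(pubkey_compressed(k))) >= n

A production server would use libsecp256k1 instead of pure Python, but the check itself really is this simple.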
HoMLoL (Newbie, Activity: 12, Merit: 0)
December 19, 2025, 11:25:08 AM  #12263

Quote from: HoMLoL
Please note that verification addresses are generated on-the-fly [...] Therefore, no one knows in advance which private keys correspond to the verification addresses.  Wink

Quote:
If they are generated on the fly, that means they are known in advance at some point in time, which contradicts "no one knows in advance".

Even with the best intentions, this is simply a ticking bomb waiting to go kaboom one way or another, because it doesn't really secure anything: it lacks a trust basis.

If you're passing on to users the responsibility of trusting your server's key-generation process, the secure wiping of secret material from RAM, and, in general, whether any keys are stored anywhere at all, with or without anyone (including you) being aware, or whether some bug or feature makes them deterministic in nature, then what can I say, except that I'm not the one who needs any convincing.

Quote:
Honestpool handles this much better: their ranges are half the size and each is split into 5 segments, with a random check-address picked from each segment.

If this is true, it's even worse: only 50% of any claimed range can be trusted to have been scanned, on average. What's to stop anyone from sending the "scanned" report right after finding the check key in every segment (which, on average, happens once 50% of each segment is done)?
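A quick Monte Carlo makes that 50% claim concrete (an illustrative sketch: one uniformly placed check key per segment, and a cheater who stops each segment as soon as its key is found):

Code:
# Average fraction of a range a cheater scans if they stop each of the
# 5 segments at its single check key. Segment count is illustrative.
import random

def cheater_coverage(segments: int = 5, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        # The check key's position in each segment is uniform; the cheater
        # scans each segment only up to that position.
        total += sum(random.random() for _ in range(segments)) / segments
    return total / trials

print(f"average fraction scanned: {cheater_coverage():.3f}")  # ~0.500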

It seems you don't fully grasp how the pool operates, and that the server here merely serves as a database... https://github.com/homlol/honestpool

No one can give 100% guarantees that a range has been fully checked—not in my pool, nor in any other. Proof addresses are just minor evidence of work done; everything else relies on the integrity of each participant. Only a fool who actually uses a pool would confirm a range as fully checked when all the proof addresses have been found but the program's actual progress is at 5% or 90%.

I am a person of action, not words, and live by the principle "less talk, more action." Instead of just writing and criticizing, implement your own pool that provides 100% trust and 100% verifies every range. If you manage to do that, I'll tip my hat to you.

Quote:
Some will just never get it lol.

Everyone wants or demands "open source" yada yada, but all open-source pools have weaknesses. You try to tell them, but they all just say "trust". I am not trusting anything that can be manipulated by others, solo or shared. Imagine a pool spends years trying to find 71, only to realize the range it was in was searched within the first 4 months, but a bad actor just wanted to spoof the range and get more credit for the shared prize.

Even on my solo pools, the ones I have built and set up for others, everything is encrypted: the ranges sent to the clients, the PoW addresses, everything. And when the PoW addresses and private keys are sent back to the server, they are encrypted too. So no lone bad actor, or the owner of a rented GPU, can sabotage the entire pool. Yes, someone could hack into the server and work out the encryption key, so you have to have fail-safes for that as well.

Bottom line: if a pool has open-source code, I skip it. I could spoof any of the ones currently running, and there are a lot of people out there much smarter than me, and malicious ones too.

To date, I have not encountered a more open and honest pool. Others have their own closed software, or, upon a find, they split the reward between the pool and the participants. I have provided a tool that ensures that, in case of success, all coins belong to the lucky finder. The script I wrote does not send a confirmation request for a completed range when the target address is found: the program simply stops and outputs a message about the primary find, as sketched below. The private key therefore cannot be transmitted to malicious actors, since the search runs on the local machine and a confirmation request is sent only if the target address is not found.
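The wrap-up rule described above might look like the following sketch (the function names are hypothetical, not the actual script):

Code:
# Hedged sketch of the client-side rule: on a primary find, stop locally
# and never contact the server; otherwise submit the completion proofs.
def finish_range(range_id: int, proof_keys: list, winning_key: int | None) -> None:
    if winning_key is not None:
        # The winning key stays strictly on the local machine.
        print(f"Target found in range {range_id}! Key kept locally; nothing sent.")
        return
    submit_confirmation(range_id, proof_keys)

def submit_confirmation(range_id: int, proof_keys: list) -> None:
    # Placeholder for the real HTTP request to the pool server.
    print(f"range {range_id}: submitting {len(proof_keys)} proof keys")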

Quote from: Bram24732
Something like n = 40 maybe.

Why were exactly five proof addresses chosen? To allow for easy verification via a web interface. If there were 40 proof addresses, that would be far too many.
Bram24732 (Member, Activity: 224, Merit: 22)
December 19, 2025, 12:23:40 PM  #12264 (merited by Cricktor)

Quote from: HoMLoL
I am a person of action, not words, and live by the principle "less talk, more action." Instead of just writing and criticizing, implement your own pool that provides 100% trust and 100% verifies every range. If you manage to do that, I'll tip my hat to you.

I did, twice.

Quote from: HoMLoL
Why were exactly five proof addresses chosen? To allow for easy verification via a web interface. If there were 40 proof addresses, it would be far too many.

Please tell me the check is not made in the UI...
Also, n = 40 means that if your range is, let's say, 48 bits, you get an average of 2^8 = 256 proofs, not 40, and that should be trivial to verify, even in a browser...

If I were to be as trustless as possible, here is what I would do (napkin math, but you'll get the idea):

- 2^30 = 1 073 741 824 ranges of size 40 bits.
- The client collects every private key for addresses starting with 24 zero bits. There are on average 2^16 = 65536 of them. Those are your proofs.
- Those proofs weigh 2 MB on average and can probably be safely compressed to ~256 KB. When a client validates a block, it sends them to the server, which has to check those 65536 private keys; that's super quick to do (see the sketch below).
- You can store those private keys or not, depending on whether you want to prove the work to someone else later on (keeping all of them for P71 would take ~128 TB on average).
- With 65536 private keys uniformly distributed in a range, the ability to cheat is virtually zero. Even if a cheat exists, its deviation from the honest version is not worth the hassle.
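The napkin numbers are easy to reproduce (a sketch; the compression figure and the "half the ranges on average" assumption are the estimates given above):

Code:
# Reproducing the napkin math: puzzle-71 keyspace of 2^70 keys, 32 bytes
# per raw key, winning key found after half the ranges on average.
RANGE_BITS = 40
ZERO_BITS  = 24
num_ranges = 2 ** (70 - RANGE_BITS)         # 2^30 = 1,073,741,824 ranges
proofs_avg = 2 ** (RANGE_BITS - ZERO_BITS)  # 2^16 = 65,536 proofs per range

raw_bytes = proofs_avg * 32                 # ~2 MB of raw 32-byte keys
# Inside a known 40-bit range a key is just a 40-bit offset; sorted and
# delta-coded that's roughly 65536 * (40-16) bits ~ 192 KB, hence ~256 KB.
compressed_bytes = 256 * 1024

# Expect to scan half the ranges before the winning key shows up:
total_tb = (num_ranges // 2) * compressed_bytes / 2**40
print(raw_bytes / 2**20, "MB raw;", total_tb, "TB for half of P71")  # 2.0, 128.0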



HoMLoL (Newbie, Activity: 12, Merit: 0)
December 19, 2025, 12:47:54 PM  #12265

Quote from: Bram24732
Please tell me the check is not made in the UI...

What does the user interface have to do with it? I even provided a step-by-step, detailed explanation on GitHub of how the verification works. Private keys are sent to the server specifically so that it can generate addresses from them and check for matches with the proof addresses.

Quote from: Bram24732
There is on average 2^16 = 65536 of them. Those are your proofs.

You think that 2^16 (65,536) is a small number and that the verification will be fast. I might agree in theory, but there's a crucial point... Once at least 10,000 parts are verified, the database will contain 655,360,000 records, and this will significantly slow down its performance. You're only thinking theoretically, but in practice, from a technical standpoint, it turns out to be quite different...

UPD:

In theory, it's indeed possible not to store (or to delete) records after confirmation. However, in practice: the database selected 4,784 parts for verification, and of these, 2,068 have been verified. This means the database will be filled with unverified proof addresses.

P.S.
Hmm... I'm actually thinking now — maybe we should indeed delete the verification addresses after confirmation... Why keep them in the database...?
Bram24732 (Member, Activity: 224, Merit: 22)
December 19, 2025, 01:03:01 PM  #12266

Quote from: HoMLoL
What does the user interface have to do with it? [...] Once at least 10,000 parts are verified, the database will contain 655,360,000 records, and this will significantly slow down its performance. [...] Maybe we should indeed delete the verification addresses after confirmation... Why keep them in the database...?

You don’t need a database for that. Just store the proofs as flat files on disk for each range.
Also, you don’t need anything from the DB to validate an incoming share, so I don’t see why the volume would cause any issue.
kTimesG (Full Member, Activity: 700, Merit: 220)
December 19, 2025, 02:31:38 PM  #12267

Quote from: HoMLoL
It seems you don't fully grasp how the pool operates and that the server here merely serves as a database...

No one can give 100% guarantees that a range has been fully checked—not in my pool, nor in any other. Proof addresses are just minor evidence of work done; everything else relies on the integrity of each participant.

Why were exactly five proof addresses chosen? To allow for easy verification via a web interface. If there were 40 proof addresses, it would be far too many.

LOL. Three people so far have shown you evidence of why your architecture is broken. Obviously, no one likes it when their baby toy is proven defective.

PoW keys (the ones that neither the server nor a participant can know in advance) are 100% proof that work has been performed. This is just like in Bitcoin mining.

Your strategy is defective because if client A scans a range, you cannot prove to client B that client A really scanned anything at all. The proof is lacking.

So it's not even a matter of "integrity" - it's simple math that can't be fucked with.

Bonus tip: searching for a single pattern instead of 2 or 5 or 50 patterns is faster. Clients simply scan for a pattern (for example, zero bits, or prefixes, or suffixes, or whatever) and the hits are proof of work, for anyone and for everyone. The verification is identical, because on the server you simply check whether H160(k) matches. This is simple evidence that clients reached the actual hashing step. That's all.

Another bonus tip: the server never needs to send verification addresses to anyone. Simply receive proof keys. End of story. Less traffic overall.

And yes, I am speaking from the perspective of someone who actually did this, though of course it wasn't a public pool at all, because I wanted actual real results, not arbitrary unsafe cooperation (which no pool can ever offer). It worked basically identically to what WP and Bram described, but with 40-bit ranges, collecting stats on how many keys were found (from 32 bits up) plus the actual keys for everything above 36 bits. The stats showed that the operation was proceeding as expected, while the actual keys were the PoW, which makes it possible to prove after the fact that work was done - not that some arbitrary keys were encountered, which no one cares about and which can't prove anyone stumbled upon them. Uninteresting.

Off the grid, training pigeons to broadcast signed messages.
WanderingPhilospher (Sr. Member, Activity: 1442, Merit: 275)
December 19, 2025, 02:39:37 PM  #12268

Quote from: HoMLoL
What does the user interface have to do with it? [...] Once at least 10,000 parts are verified, the database will contain 655,360,000 records, and this will significantly slow down its performance. [...] Maybe we should indeed delete the verification addresses after confirmation... Why keep them in the database...?
As Bram said, it was napkin math... and a suggestion. You can modify how many keys you want to collect as proof. You could adjust:

Code:
- 2^30 = 1 073 741 824 ranges of size 40 bits.
- Client collects every private key for addresses starting with 24 zero bits. There is on average 2^16 = 65536 of them. Those are your proofs.

to require 32 leading zero bits instead; this would on average yield 2^8 = 256 PoW keys per 2^40 range. Or adjust to whatever you are comfy with and whatever your server or HD can handle.


Quote from: k2laci
Alright guys, I hear you and I understand the skepticism. [...] However, what is your advice for regular participants like myself who don't own a GPU farm? [...] We are simply lottery participants.

As stated, there is no right or wrong; it comes down to what you are comfortable with... Are you willing to put in X number of years of work with the possibility of not being rewarded? Which do you feel more comfy with? Those are the questions you have to answer for yourself. The main point is, all pools have some flaws. You also mentioned lottery participants, so you could do your own solo work, and then you would know everything is 100% accurate; unless, of course, you spoof yourself Smiley
fixedpaul (Member, Activity: 83, Merit: 26)
December 19, 2025, 03:03:04 PM (last edit: 04:34:38 PM)  #12269

What I would do is use as PoW all private keys whose hash160's second 32 bits (that is, bits 32 to 63)* match those of the winning address.
The advantage of using these bits is that I can stop a bit earlier when computing RIPEMD-160 and skip the last few rounds and the final additions, getting a small efficiency gain. I also don't need to try different prefixes or different keys: a single check on those 32 bits is enough both to collect the PoWs and to look for the winning key.

From a mathematical point of view, this is the same as using PoW keys whose hashes start with 32 leading zero bits. However, there is still a problem: I need to choose a threshold above which I can say, "OK, I'm reasonably confident that the range has been fully scanned."

For example, if the range is 2^44 keys, on average I expect about 4096 PoWs. But how do I choose the minimum number of PoW keys needed to declare the range scanned?

If I choose 3996, I can say with 99% confidence that at least 94% of the range was scanned. In other words, if a malicious actor always scans only 94% of the range, only 1 time out of 100 would they end up with more than 3996 PoWs.
On the other hand, there are false positives: if someone really scans 100% of the range, the probability of still getting fewer than 3996 PoW keys is about 5.7%. That means around 1 range out of 20 scanned by honest users would be wrongly considered not scanned.

If I set the threshold to 3954, I can say with 95% confidence that at least 94% of the range was scanned. In this case, the false-positive rate drops to about 1.26%. In my opinion, false positives should be kept as low as possible (around 0.01%), so the threshold should be lowered further. The math here is pretty simple: it's a binomial distribution, which can be safely approximated by a Poisson.

I think this PoW approach is the best one, but the threshold needs to be chosen carefully, since it's a trade-off between making sure malicious actors really scan the range and not punishing honest users who actually scan 100%.

*EDIT: third 32 bits of the hash160 (that is, bits 64 to 95)
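These thresholds are easy to reproduce under the Poisson approximation; a quick sketch (assuming scipy is available):

Code:
# Reproducing the threshold numbers above: range of 2^44 keys, 32-bit tag,
# so lambda = 4096 proofs at full coverage and 0.94 * 4096 at 94% coverage.
from scipy.stats import poisson

lam_full  = 2**44 / 2**32        # 4096 expected PoW keys, 100% scanned
lam_cheat = 0.94 * lam_full      # cheater who always stops at 94%

for threshold in (3996, 3954):
    cheat_pass = poisson.sf(threshold - 1, lam_cheat)   # P(X >= t | 94%)
    false_pos  = poisson.cdf(threshold - 1, lam_full)   # P(X <  t | 100%)
    print(f"threshold {threshold}: cheater passes {cheat_pass:.2%}, "
          f"honest flagged {false_pos:.2%}")
# threshold 3996: cheater passes ~1%, honest flagged ~5.8%
# threshold 3954: cheater passes ~5%, honest flagged ~1.3%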
kTimesG (Full Member, Activity: 700, Merit: 220)
December 19, 2025, 03:30:24 PM (last edit: 03:43:06 PM)  #12270

Quote from: fixedpaul
What I would do is use as PoW all private keys whose hash160's second 32 bits (that is, bits 32 to 63) match those of the winning address. The advantage of using these bits is that I can stop a bit earlier when computing RIPEMD-160 and skip the last few rounds and the final additions, getting a small efficiency gain.

This is exactly what my kernel does; it works even better when the 32 bits are hardcoded (the SASS merges them into whatever XOR'ed value needs to match).

However, are you sure you didn't mean the middle 32 bits (bits 65 to 96), instead of the second ones?

Code:
    R52(d2, e2, a2, b2, c2, w[3], 13);
    R51(c1, d1, e1, a1, b1, X15, 5);

    // can compute s[2]
    // everything below doesn't affect it

    R52(c2, d2, e2, a2, b2, X9, 11);
    R51(b1, c1, d1, e1, a1, X13, 6);

    // can compute s[1] and s[4]

    R52(b2, c2, d2, e2, a2, X11, 11);

    // can compute s[0]

    s[0] = R160_IV_1 + c1 + d2;
    s[1] = R160_IV_2 + d1 + e2;
    s[2] = R160_IV_3 + e1 + a2;
    s[3] = R160_IV_4 + a1 + b2;
    s[4] = R160_IV_0 + b1 + c2;

Quote from: fixedpaul
I think this PoW approach is the best one, but the threshold needs to be chosen carefully, since it's a trade-off between making sure malicious actors really scan the range and not punishing honest users who actually scan 100%.

If a user hits a "special" range with too few PoW keys, it should not be an issue, because statistically they will eventually hit ranges with more PoW keys than expected. So, long-term, it can be determined with really good accuracy whether a user is malicious or not, by gathering their combined PoW stats. And then there are only these realities:

1. The PoW is within expected bounds.
2. Either the user is a bad actor (too few PoW) or H160 is broken (too many PoW).
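A sketch of that long-term check, under the same Poisson model as the thresholds above (the numbers in the example are made up for illustration):

Code:
# Sum the expected PoW counts over all ranges a user submitted and compare
# with what they actually sent, via a normal approximation of the total.
import math

def user_z_score(observed_total: int, expected_per_range: float,
                 ranges: int) -> float:
    lam = expected_per_range * ranges  # total expectation across all ranges
    return (observed_total - lam) / math.sqrt(lam)

# e.g. 1000 ranges of 2^44 keys with a 32-bit tag: expect 4096 proofs each.
z = user_z_score(observed_total=4_050_000, expected_per_range=4096, ranges=1000)
print(f"z = {z:.2f}")
# z ~ -22.7: consistently too few proofs -> almost certainly skipping work;
# a strongly positive z would mean too many (per the post: H160 "broken").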

fixedpaul (Member, Activity: 83, Merit: 26)
December 19, 2025, 04:00:01 PM  #12271


Quote from: kTimesG
However, are you sure you didn't mean the middle 32 bits (bits 65 to 96), instead of the second ones?


Yes, you're right! I had s[2] in mind, which is the third 32-bit word, not the second one. Lol, I can't count bits.


Quote from: kTimesG
If a user hits a "special" range with too few PoW keys, it should not be an issue [...] long-term, it can be determined with really good accuracy whether a user is malicious or not, by gathering their combined PoW stats.

That's also true: if you reason in terms of users rather than single ranges, then over many ranges, long-term, you can extract more information. Hopefully a malicious user wouldn't scan each range under a different username and IP every time, although at that point it might be bordering on paranoia.


Quote from: kTimesG
H160 is broken.

That rang a bell  Cheesy
kTimesG (Full Member, Activity: 700, Merit: 220)
December 19, 2025, 05:44:28 PM  #12272

Quote from: fixedpaul
Hopefully a malicious user wouldn't scan each range under a different username and IP every time, although at that point it might be bordering on paranoia.

It's not paranoia at all. People will go to great lengths (and depths) just to show that it's possible to break something, or simply to annoy the hell out of someone.

A serious pool that's trying to find the 7.1 BTC reward would use hardened TLS, Ed448 certificates, client authentication, and a secure user-enrollment system that signs clients' certificate requests, to deal with this use case. And these are just the tip of the iceberg in the overall architecture. A bad actor won't be able to create a working socket once his certificate gets revoked, and by that point, re-enrolling might be too much of a headache, with possible costs (creating new identities, etc.).
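For illustration, the server side of such a mutual-TLS setup might look like this with Python's ssl module (file names are placeholders; this is a sketch, not any pool's actual configuration):

Code:
# Mutual-TLS server context: TLS 1.3 only, clients must present a cert
# signed by the pool's enrollment CA, revoked certs rejected via a CRL.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.load_cert_chain("server-ed448-cert.pem", "server-ed448-key.pem")

# Only clients whose certs were signed by the enrollment CA may connect.
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.load_verify_locations("enrollment-ca.pem")

# Spoofers get their certificate revoked; the CRL shuts their socket out.
ctx.load_verify_locations("revoked.crl.pem")
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF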

Bram24732 (Member, Activity: 224, Merit: 22)
December 19, 2025, 06:08:16 PM  #12273

Quote from: kTimesG
It's not paranoia at all. [...] A serious pool that's trying to find the 7.1 BTC reward would use hardened TLS, Ed448 certificates, client authentication, and a secure user-enrollment system [...]

As long as economic incentives are sound, I think you can simply rely on proofs if you have enough of them.
Setting the cheating edge to sub-0.01% is probably good enough.
The alternative is too cumbersome, and you won't get any users because it's too technical.
HoMLoL (Newbie, Activity: 12, Merit: 0)
December 19, 2025, 06:40:35 PM  #12274

Quote from: kTimesG
LOL. Three people so far have shown you evidence of why your architecture is broken. [...] PoW keys (the ones that neither the server nor a participant can know in advance) are 100% proof that work has been performed. [...] And yes, I am speaking from the perspective of someone who actually did this [...]

Okay, that's your opinion, which is equally uninteresting to me. I won't prove anything to anyone anymore...
My pool is working and helping, and I'm not interested in other people's opinions. Good luck to everyone!
Bram24732 (Member, Activity: 224, Merit: 22)
December 19, 2025, 07:39:13 PM  #12275

Quote from: HoMLoL
Okay, that's your opinion, which is equally uninteresting to me. I won't prove anything to anyone anymore...
My pool is working and helping, and I'm not interested in other people's opinions. Good luck to everyone!

To be clear, it's not an opinion. Those are facts.
You don't care that your pool is poorly designed, and you have a bit too much pride to take the time to understand what we're saying and fix it. That's fine. But don't expect us not to call you out on it.
HoMLoL (Newbie, Activity: 12, Merit: 0)
December 19, 2025, 10:54:13 PM  #12276

Quote from: Bram24732
To be clear, it's not an opinion. Those are facts. [...] But don't expect us not to call you out on it.


What facts? This is nonsense. Everything you've stated as "facts" is nonsense. On this forum, all you like to do is write, not contribute to the development and actual search for the puzzles. And there are no facts here...

I don't even understand what vanity addresses have to do with it, since verification requires private keys for specific addresses, not their vanity versions.

It's laughable to receive criticism from people who have done nothing themselves to search for addresses like 71, 72, 73, etc., which have no known public keys.

Actually, no — all you do is whine and complain that life is unfair.

Good luck to you.
kTimesG (Full Member, Activity: 700, Merit: 220)
December 19, 2025, 11:42:36 PM  #12277

The guy using MySQL (2005 called, they want their phpMyAdmin back) and thinking 600 million records is technically a problem is trying to roast the guy who single-handedly solved 67 and 68 literally just a few months ago.


Terrific. The best part is that you don't understand what "Vanity addresses" have to do with anything.

I think this alone speaks volumes on where your pool will end up really soon, which is in a historical trash bin. There is absolutely no reason for anyone to trust that even a single key has ever been scanned by anybody, so there is zero incentive for anyone to join your pool.

Oh, and BTW, you can easily store many billions of entries in an SQLite file on disk, with instant lookups. I think you don't understand how a database works; have you heard of indices?
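For what it's worth, a sketch of such an SQLite layout (schema and file name are illustrative): the composite primary key is the index, so per-range lookups never scan the table.

Code:
# Indexed proof storage in a single SQLite file; billions of rows are fine
# as long as every query goes through the (range_id, privkey) primary key.
import sqlite3

db = sqlite3.connect("proofs.db")
db.execute("PRAGMA journal_mode=WAL")  # sane concurrent reads/writes
db.execute("""
    CREATE TABLE IF NOT EXISTS proofs (
        range_id INTEGER NOT NULL,
        privkey  BLOB    NOT NULL,
        PRIMARY KEY (range_id, privkey)
    ) WITHOUT ROWID
""")

def insert_proofs(range_id: int, keys: list[bytes]) -> None:
    db.executemany("INSERT OR IGNORE INTO proofs VALUES (?, ?)",
                   [(range_id, k) for k in keys])
    db.commit()

def range_proof_count(range_id: int) -> int:
    # Index-only lookup on the primary key; no full-table work.
    return db.execute("SELECT COUNT(*) FROM proofs WHERE range_id = ?",
                      (range_id,)).fetchone()[0]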

HoMLoL (Newbie, Activity: 12, Merit: 0)
December 20, 2025, 12:18:17 AM  #12278

Quote from: kTimesG
The guy using MySQL (2005 called, they want their phpMyAdmin back) and thinking 600 million records is technically a problem is trying to roast the guy who single-handedly solved 67 and 68 literally just a few months ago. [...] you can easily store many billions of entries in an SQLite file on disk, with instant lookups. [...] have you heard of indices?

Oh, I completely forgot that there's a generation here for whom MySQL is no longer considered a real database... Good thing you don't even know what punch cards are — you probably couldn't handle that either.

If memory usage were not a problem for you, you wouldn't limit the DP in Kangaroo.

Are there proofs of how 67 and 68 were solved?

I and many others would be grateful to know: what role do vanity addresses actually play here?
After all, addresses are generated from a public key, and two identical vanity addresses could be located at the beginning and end of the range!

Facts, facts, facts... We only need facts!
kTimesG (Full Member, Activity: 700, Merit: 220)
December 20, 2025, 12:40:07 AM  #12279

Quote from: HoMLoL
Oh, I completely forgot that there's a generation here for whom MySQL is no longer considered a real database... [...] Are there proofs of how 67 and 68 were solved? [...] Facts, facts, facts... We only need facts!

LMFAO (squared, cubed). Memory usage? Are you for real? Now you're just proving that you indeed have no idea how a database engine actually works, and why MySQL is total overkill for your purposes (unless of course you're still running a GoDaddy website or something, not a Bitcoin puzzle pool).

I think that "I and many others" simply means "you". If I'm the generation that used floppy disks and (maybe) punch cards, you're pretty much the generation that has an attention span of 2 seconds and doesn't bother reading more than the first 3 words of a sentence before getting bored. You know, human history didn't start 3 days ago, so maybe you should fact-check for yourself before asking for evidence (facts) that already exists.

Just for kicks, I will spoof your pool occasionally. You will never know when, or what ranges, or what user did it. How's that for fun? How does everyone participating in your pool feel about this concept? I can afford to lose a few hours automating this stuff, and neither you nor anyone else who ever joined your pool will ever, ever know which ranges are actually scanned and which are not.

fixedpaul (Member, Activity: 83, Merit: 26)
December 20, 2025, 12:55:08 AM  #12280 (merited by Cricktor)


Quote from: HoMLoL
Are there proofs of how 67 and 68 were solved?

I and many others would be grateful to know: what role do vanity addresses actually play here?
After all, addresses are generated from a public key, and two identical vanity addresses could be located at the beginning and end of the range!

For 67, Bram had publicly posted the PoWs on GitHub (the same kind we are talking about), in order to statistically demonstrate how much work had been done and which ranges had been scanned.
I believe you may not have fully understood what they are trying to explain to you, because no one has ever talked about "vanity addresses." And I think these are constructive suggestions, not just criticism.