LoyceV
Legendary
Offline
Activity: 4004
Merit: 21611
Thick-Skinned Gang Leader and Golden Feather 2021
|
 |
April 13, 2026, 07:59:04 AM |
|
The only problem I can think of is that, since the bits are the same, the old nodes' chain will not get immediately reorged by the new chain, as CPU miners can generate min-diff blocks more often than miners can generate real-diff blocks. The old chain will only be reorged when an ASIC miner mines a block with a timestamp less than 20 minutes after the previous one, since it must use real-diff bits in the old field. ASIC miners can do that already if they want. So it comes down to the same problem: convincing some ASIC miners to wipe out CPU blocks. And if you have to convince them to make changes anyway, isn't it much easier to go with a hard-fork?
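For context, the testnet "20-minute rule" being exploited can be sketched like this (a simplified model of Bitcoin Core's GetNextWorkRequired; the names and signature here are illustrative, not the real API):

```cpp
#include <cstdint>

// If a new block's timestamp is more than 2 * 10 minutes after the previous
// block's, a minimum-difficulty (0x1d00ffff) block is allowed. Old nodes
// accept such blocks, which is exactly what the CPU miners exploit.
const uint32_t MIN_DIFF_BITS = 0x1d00ffffu;   // lowest allowed difficulty
const int64_t POW_TARGET_SPACING = 10 * 60;   // 10-minute block target

uint32_t NextRequiredBits(int64_t prev_block_time,
                          int64_t new_block_time,
                          uint32_t real_diff_bits)
{
    // Min-difficulty exception: kicks in 20 minutes after the last block.
    if (new_block_time > prev_block_time + 2 * POW_TARGET_SPACING)
        return MIN_DIFF_BITS;
    return real_diff_bits;  // otherwise the real difficulty is required
}
```

This is why min-diff blocks keep their timestamps more than 20 minutes ahead of the previous block, while a real-diff block with a closer timestamp must carry the real bits.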
|
¡uʍop ǝpᴉsdn pɐǝɥ ɹnoʎ ɥʇᴉʍ ʎuunɟ ʞool no⅄
|
|
|
hmbdofficial
Member

Offline
Activity: 154
Merit: 34
|
 |
April 13, 2026, 08:05:15 AM |
|
Technically doable, and I tried similar ideas in test cases where SHA-256 would be broken. Basically, even if the block header says 0x1d00ffff, you can still require meeting the real difficulty, and reject the block if that is not the case.
In your testing scenarios, I mean when imagining SHA-256 being broken, how did you handle the chainwork? Did you adjust it manually, or did you keep the legacy calculation and rely on the new validation rule?
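The rule quoted above could be sketched like this (hypothetical helper, not Bitcoin Core's API; 64-bit integers stand in for 256-bit hashes and targets):

```cpp
#include <cstdint>

// Even if the header's nBits field claims 0x1d00ffff, a new node can
// additionally require that the block hash meets the *real* network
// difficulty, and reject the block otherwise. Old nodes only run the
// first check; upgraded nodes run both.
bool CheckRealDifficulty(uint64_t block_hash,
                         uint64_t claimed_target,  // decoded from header nBits
                         uint64_t real_target)     // required by the new rule
{
    if (block_hash > claimed_target) return false;  // legacy PoW check
    if (block_hash > real_target)    return false;  // new, stricter rule
    return true;
}
```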
|
|
|
|
|
BlackHatCoiner (OP)
Legendary
Offline
Activity: 1988
Merit: 9678
Bitcoin is ontological repair
|
 |
April 13, 2026, 08:16:36 AM |
|
But then, the contribution to the chainwork will still be calculated in the old way.
Why not configure the new nodes to calculate chainwork using the new field after the fork height? It will only appear outdated on the old nodes; new nodes' chainwork will be rendered correctly.
ASIC miners can do that already if they want. So it comes down to the same problem: convince some ASIC miners to wipe out CPU blocks.
The problem is the timestamp. If they consider the CPU blocks invalid and try to mine a block with real difficulty, but with a timestamp more than 20 minutes after the previous block, old nodes will reject it because of bad-diff-bits. What we want is for the old nodes to follow the new chain without having to update. Wiping out CPU blocks works if they replace the CPU blocks with their min-diff blocks, but this can be a very complicated and inconsistent solution. Invalidating blocks with >20 min timestamps (road C) and introducing a new bits field (road D) are the only consistent softfork proposals that have a chance of actually working.
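The "use the new field after the fork height" idea could be sketched as follows (a scaled-down 64-bit analogue of Bitcoin Core's 256-bit GetBlockProof; field and function names are illustrative):

```cpp
#include <cstdint>

// Core's GetBlockProof computes roughly 2^256 / (target + 1) via
// ~target / (target + 1) + 1, without overflowing. Same trick in 64 bits:
uint64_t BlockProof(uint64_t target)
{
    return (~target) / (target + 1) + 1;  // lower target => more work
}

// After the fork height, new nodes would read the target from the new
// bits field; before it, from the legacy nBits. Old nodes would keep
// using the legacy target forever, which is why their chainwork "appears
// outdated" relative to new nodes.
uint64_t BlockProofForHeight(int height, int fork_height,
                             uint64_t legacy_target, uint64_t new_target)
{
    return BlockProof(height >= fork_height ? new_target : legacy_target);
}
```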
|
|
|
|
|
stwenhao
|
how did you handle the chainwork?
By expanding it to 512 bits. Because if you can produce any SHA-256 result, then all following blocks could hash into zero. Which means you would then have the current chainwork as it is, and the new chainwork somewhere else. And then a new hash function could be executed on the same data, so it would be difficult to produce again.
did you keep the legacy calculation and rely on the new validation rule?
The legacy part would just contain the low 256 bits, and the new part would contain only the upper 256 bits of the 512-bit chainwork.
Why not configure the new nodes to calculate chainwork using the new field after the fork height?
Well, you can do that.
It will only appear outdated in the old nodes.
Which is tricky, because then one ASIC block, valid in the old network, could potentially reorg a lot of new ASIC blocks from the new network, because their chainwork would be weaker. Which means that technically it could be a soft-fork, but it has a high chance of leading to chain splits, even if you have the hashrate majority on your side, just because blocks perceived as "strong" in the new network won't be counted as such in the old network. It is similar to switching from the longest chain to the heaviest chain: new nodes follow the new rules, but old nodes can still be tricked into accepting a weaker chain. The difference is that this time, having the hashrate majority is not enough to get all nodes on the same chain, if they count things differently.
Invalidating blocks with >20 min timestamp (road C) and introducing a new bits field (road D) are the only consistent softfork proposals that have a chance of actually working.
Honestly, I think the longer you think about possible solutions, the lower your chances of actually deploying anything. If you have a plan like "just hard-fork, and don't care", then you know what to do and how to do it.
If you start making a lot of different solutions, then you won't know which solution is better, and you will endlessly invent new ones, instead of deploying anything and moving forward.
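The 512-bit split described above could be sketched in miniature like this (a 128-bit value stands in for 512 bits, and each 64-bit half for 256 bits; purely illustrative):

```cpp
#include <cstdint>

// A widened chainwork where old nodes keep computing and seeing only the
// legacy low half, while the new field carries the upper half. Old nodes
// therefore compare chains by legacy_low alone, which is where the
// "weaker chain can win on old nodes" problem comes from.
struct SplitWork {
    uint64_t legacy_low;  // what old nodes continue to compute
    uint64_t new_high;    // extra precision, visible only to new nodes
};

SplitWork SplitChainwork(unsigned __int128 total_work)
{
    SplitWork s;
    s.legacy_low = static_cast<uint64_t>(total_work);       // low 64 bits
    s.new_high   = static_cast<uint64_t>(total_work >> 64); // high 64 bits
    return s;
}
```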
|
|
|
|
BayAreaCoins
Legendary
Offline
Activity: 4494
Merit: 1390
AltQuick.com Owner
|
Honestly, I think the longer you think about possible solutions, the lower your chances of actually deploying anything.
If you have a plan like "just hard-fork, and don't care", then you know what to do and how to do it.
If you start making a lot of different solutions, then you won't know which solution is better, and you will endlessly invent new ones, instead of deploying anything and moving forward.
I agree with this.
I spoke to someone more technical than me... He explained the softfork at a low level that I understand. I support the soft fork as proposed. Though I still think a hardfork is cleaner, and I do think its difficulty is overestimated. We can start with a soft fork IMO and the hard fork remains an option... perhaps not being so heavy-handed is the best route. ACK Soft fork
|
|
|
|
BlackHatCoiner (OP)
Legendary
Offline
Activity: 1988
Merit: 9678
Bitcoin is ontological repair
|
 |
April 13, 2026, 06:23:09 PM |
|
Honestly, I think the longer you think about possible solutions, the lower your chances of actually deploying anything.
Agreed. We will start with road C, which is disabling >20 min timestamp blocks. Do you think I should create an entirely new PR, or edit the current one? I'm thinking that having all this history in git might be confusing now that I'll rebase it and make completely new changes. Another good reason behind the softfork is to see the willingness of the mining pools: if they don't care to upgrade, there's no need to discuss anything. I highly doubt Bitcoin Core will ever merge this if there's no support from miners for an extended period.
|
|
|
|
|
stwenhao
|
 |
April 14, 2026, 06:08:56 AM |
|
I think the easiest route is to go with the hard fork and see if it succeeds, and try soft-forking later if the hard fork fails or is rejected by the majority. Because in that case the code is ready, so it can be done here and now. Then it is a matter of getting support for what you already have, instead of making yet another solution.
We will start with road C, which is disabling >20 min timestamp blocks.
You can do it on a separate branch, to see if it would be better than the current implementation, because you don't want to break the working code that you have now. And if the hard fork succeeds, then it won't be needed.
Do you think I should create an entirely new PR, or edit the current one?
I think the code should be pushed to some branch first, and then you can think about editing the Pull Request and picking the right branch. A Pull Request is just attached to some branch, and that's all; changing it from one branch to another is easy. The hard part is writing the code. So if you have each version on a separate branch, then later it is just a matter of picking the right branch and making a Pull Request.
I highly doubt Bitcoin Core will ever merge this
I agree. They wanted to start testnet5 completely from scratch. And now they are testing things on signet, and don't care about testnet3 or testnet4. Note that testnet4 was first created, and then merged into Bitcoin Core once it was up and running and had already produced around 40k blocks. Which means that if you want to fix testnet4, you should first deploy the fix, and then try to merge it. Because from Core's perspective, testnet4 is no longer worthless, so they are no longer interested in using it for testing. It is now yet another altcoin, and all fixes are just for the AltQuick exchange and some remaining users; it could even be deprecated in some next version, just like testnet3 is scheduled to be dropped.
|
|
|
|
BayAreaCoins
Legendary
Offline
Activity: 4494
Merit: 1390
AltQuick.com Owner
|
 |
April 14, 2026, 02:17:13 PM Last edit: April 14, 2026, 03:32:09 PM by BayAreaCoins |
|
I think the easiest route is to go with the hard fork and see if it succeeds.
I feel like you're just dead set on being a pain in the ass.
and all fixes are just for the AltQuick exchange
Fixing what for AltQuick? Last I checked we are doing just fine... it's the rest of you poor bastards. There are 1400 TBTC4 in the faucet too, while giving 0.01 every ten minutes to users (often 0.02 due to affiliates)... which is the point of Testnet, also. How many coins do you have in your public-facing faucet?
|
|
|
|
BlackHatCoiner (OP)
Legendary
Offline
Activity: 1988
Merit: 9678
Bitcoin is ontological repair
|
No, the softfork is the easiest route, because it requires the least disruption in the network. It is only a handful of miners that need to point their hashrate to their new nodes. Everything else can stay the same. The hardfork should be the last resort, and must be initiated when we know there is at least some significant support from miners as well.
I'm almost done with the code. Super simple. I will create another PR soon.
|
|
|
|
|
stwenhao
|
 |
Today at 07:48:34 AM |
|
Fixing what for AltQuick?
Well, if all blocks were produced by ASICs, then the coins could be worth more. And because that could move the price upwards, you are interested in getting it fixed, because then you could earn more BTC.
it's the rest of you poor bastards
I think "the rest of the poor bastards" are sitting on signet, while they still can. But I guess if you list it, they will think about going to yet another test network. Signet has already produced around 300k blocks; it has been running since 2020. It could be a good time to do yet another reset, and have, for example, OP_CAT or other things deployed from the very beginning. Also, because it is only a matter of switching the signet challenge, it can be done quite easily.
How many coins do you have in your public-facing faucet?
You can see them in my signature for mainnet, testnet4, and signet. On mainnet, people claimed 56-bit Proof of Work from bc1qts0jh2d2nesmketmw3thedwrx939k2tqu04gy90x9hd4049g4uhs83ltjx, and still, the next one in the queue is bc1qn9vp8l5rs7huyl237s4q9lhrzcs0mzaajt528ysq3wgnzvlkay5sdfz6am, where 64-bit Proof of Work is needed. Which means that to make new faucets, you no longer need to make a centralized website. You can just send the coins to the proper address, and then users can get them by providing some Proof of Work. And, as you can see, it is possible even on mainnet, and of course on all testnets.
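The "56-bit" and "64-bit" figures above refer to a leading-zero-bits threshold on a hash. The actual claiming mechanism is script-based and more involved; as a loose sketch of just the threshold check (64-bit value standing in for the real hash, all names hypothetical):

```cpp
#include <cstdint>

// Count how many leading zero bits a hash has; a claimer proves work by
// producing data whose hash meets at least the required number.
int LeadingZeroBits(uint64_t hash)
{
    if (hash == 0) return 64;
    int n = 0;
    while (!(hash & (1ull << 63))) { hash <<= 1; ++n; }
    return n;
}

bool MeetsPowThreshold(uint64_t hash, int required_bits)
{
    return LeadingZeroBits(hash) >= required_bits;
}
```

So a "56-bit Proof of Work" coin would require a hash with at least 56 leading zero bits, and each successive coin in the queue can demand a harder threshold.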
|
|
|
|
LoyceV
Legendary
Offline
Activity: 4004
Merit: 21611
Thick-Skinned Gang Leader and Golden Feather 2021
|
Which means that to make new faucets, you no longer need to make a centralized website. You can just send the coins to the proper address, and then users can get them by providing some Proof of Work.
I've seen your topics about it, and I think it's safe to say this is far above the understanding of the average user who wants to do some testing. And it's annoying having to learn how to get coins just so you can test the project you're actually working on.
|
¡uʍop ǝpᴉsdn pɐǝɥ ɹnoʎ ɥʇᴉʍ ʎuunɟ ʞool no⅄
|
|
|
hmbdofficial
Member

Offline
Activity: 154
Merit: 34
|
 |
Today at 08:04:55 AM |
|
I think "the rest of the poor bastards" are sitting on signet, while they still can. But I guess if you list it, they will think about going to yet another test network. Signet has already produced around 300k blocks; it has been running since 2020. It could be a good time to do yet another reset, and have, for example, OP_CAT or other things deployed from the very beginning. Also, because it is only a matter of switching the signet challenge, it can be done quite easily.
Although I wonder: how would deploying OP_CAT or similar covenants from the genesis block on a new Signet improve testing compared to activating them later through a soft fork? Who would realistically decide whether to reset Signet or create a new one? And would it require coordination from Bitcoin Core developers, the Signet maintainers, or a broader community discussion?
|
|
|
|
|
LoyceV
Legendary
Offline
Activity: 4004
Merit: 21611
Thick-Skinned Gang Leader and Golden Feather 2021
|
 |
Today at 08:21:45 AM |
|
All I can think of is expanding this project:
The QR-codes I created are for testnet4 (and all addresses on the first page have 1 testnet coin each).
PDF: loyce.club/other/print_two-sided.pdf (anyone who needs testnet4 coins: take some, but leave some for someone else)
Address list: loyce.club/other/addresses.txt (note: the last 200 addresses are not in the PDF, and the keys are lost)
There are currently still 13 testnet4 coins inside that PDF. I was disappointed when someone took almost all of them at once. Before that, it was used honorably for several months. And that's the problem of adding value: someone will want all of it.
Off-topic: should I fund pages 3,5,...,11 on the above PDF too? I didn't keep the private keys, only the QR-codes, but I do have the address list as watch-only in Electrum to keep track of balances.
Done: I sent a total of 300 Testnet4 coins, spread out over all 1800 addresses in that PDF. I'm hoping this will slow down the farmer and keep this available for people who actually need just a bit.
|
¡uʍop ǝpᴉsdn pɐǝɥ ɹnoʎ ɥʇᴉʍ ʎuunɟ ʞool no⅄
|
|
|
|
stwenhao
|
 |
Today at 08:29:55 AM Last edit: Today at 08:58:59 AM by stwenhao |
|
this is far above the understanding of the average user who wants to do some testing
Only because it is not yet popular. Technically, if you had a client that did this, it could be done automatically. I haven't implemented a CPU miner for that yet, but it is doable. And one of the users who solved it even wrote some code for GPUs, to claim the more difficult puzzles. Also, I think this is how mining could be decentralized in the future: you don't have the power to mine 3.125 BTC in the coinbase? Well, then you could get, for example, 0.03125 BTC by meeting a 100x easier target. Maybe in the short term, making a website with a faucet is better. But as a long-term solution, I think some effort should be put into expanding ideas like that.
how would deploying OP_CAT or similar covenants from the genesis block on a new Signet improve testing compared to activating them later through a soft fork?
It is easier if you can apply things from block zero than if there are some exceptions and you have pre-activation coins.
Who would realistically decide whether to reset Signet or create a new one?
The developers, of course. Historically, testnets were reset when they became traded, so if that happens on the current signet, they may decide to do so. And because it is centralized, they can, for example, just stop producing blocks for the old network and leave it as it is.
And would it require coordination from Bitcoin Core developers, the Signet maintainers, or a broader community discussion?
If you have networks like mainnet, testnet3, or testnet4, then you need discussion, agreements, signalling, and things like that. If you don't, then old users, miners, and developers can stay where they are and keep using what they use now, and your change may fail, because you may be left alone with your own code, which nobody else uses. On signet, it is centralized: developers just decide "we want OP_CAT", and they can deploy it without asking anyone, because they fully control the network.
Edit: By the way, other altcoins must be really bad if exchanges no longer want to list a new, shiny altcoin and instead go for something as centralized as signet. Does it mean that they are even worse, and more centralized? It is funny to see that test networks for BTC can reach higher prices than serious altcoins intended to deal with real money. Sounds very bullish for BTC.
|
|
|
|
|
ertil
|
 |
Today at 09:38:33 AM |
|
And it's annoying having to learn how to get coins just so you can test the project you're actually working on.
Sometimes I feel like developers implement things in a more complex way than they could, only to get rid of some unwanted competition. Because the 20-minute rule was working fine in testnets before it was abused. Back then, just some of them sniped a block or two on CPUs from the Windows 98 era, while everyone else used ASICs. It became a problem only when everyone started doing it: when ASICs could no longer produce a new block within 20 minutes, all of them automatically took part in the attack by also making blocks with minimal difficulty, just because that is the default behavior of the code. So maybe a "Proof of Developer" could be used instead of "Proof of Work". Then only real coders could join, and everyone else wouldn't even know how to start. Looking at signet blocks and how exactly they are signed, it seems to be done in a much more complex way than it could be. Also, if you look at how Satoshi implemented FindAndDelete, then again: even cryptographers like Nadia Heninger were surprised that the crypto people made it so complex to compute sighashes from messages. And here, in Saint Wenhao's challenge, you also have to deal with all the Segwit complexity to handle it correctly.
|
|
|
|
|
BlackHatCoiner (OP)
Legendary
Offline
Activity: 1988
Merit: 9678
Bitcoin is ontological repair
|
New pull request: consensus: soft fork on testnet4 that fixes the min difficulty blocks exploit. I closed the old one. I have tested the softfork with an earlier fork height, and I can confirm that it works. Specifically, when the next block is at the fork height, getblocktemplate returns a block template whose timestamp is the minimum of (current clock time, previous block timestamp + 1200). There is only one little problem that might be a non-issue but is worth noting: when the fork height comes and upgraded clients reject non-upgraded clients' blocks, the upgraded clients' outgoing connections are likely to drop to 0. I don't think this will be a problem, because propagation of the upgraded chain to non-upgraded nodes does not rely on upgraded nodes keeping outbound connections to them. Non-upgraded nodes continuously initiate random outbound connections from their addrman, so some of them will land on upgraded nodes as peers. I'll leave some time for anyone to review it, in case I've forgotten something, before I create the testnet4-fix page.
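The timestamp behavior described could be sketched as follows (simplified; the real getblocktemplate code also enforces a lower bound via median-time-past, which is omitted here, and the function name is illustrative):

```cpp
#include <algorithm>
#include <cstdint>

// After the fork height, the template's timestamp must not be more than
// 20 minutes past the previous block's, or upgraded nodes would reject
// the resulting block under the new rule. So the template time is clamped.
int64_t TemplateTime(int64_t now, int64_t prev_block_time)
{
    const int64_t MAX_GAP = 20 * 60;  // the softfork's 20-minute cap
    return std::min(now, prev_block_time + MAX_GAP);
}
```

In other words, a miner running behind schedule still gets a valid template: the time is simply pinned to prev + 1200 instead of the wall clock.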
|
|
|
|
LoyceV
Legendary
Offline
Activity: 4004
Merit: 21611
Thick-Skinned Gang Leader and Golden Feather 2021
|
 |
Today at 10:01:00 AM |
|
I haven't implemented a CPU miner for that yet, but it is doable. And one of the users who solved it even wrote some code for GPUs, to claim the more difficult puzzles.
Even if it's all automated in a wallet, I can't imagine this is going to work: either poor slow CPU-owners will take weeks to find some test coins, or powerful fast GPU-owners can take everything.
|
¡uʍop ǝpᴉsdn pɐǝɥ ɹnoʎ ɥʇᴉʍ ʎuunɟ ʞool no⅄
|
|
|
|
stwenhao
|
 |
Today at 10:44:22 AM Last edit: Today at 11:43:04 AM by stwenhao |
|
I can't imagine this is going to work
Well, there is a similar challenge on signet with OP_CAT, which becomes easier over time depending on the number of confirmations, and so far it works for CPUs.
either poor slow CPU-owners will take weeks to find some test coins, or powerful fast GPU-owners can take everything
You can have different difficulties for different miners. If you can mine single satoshis on a CPU, or much more than that on a GPU, then you don't have to take the easiest coins. Also, accumulating coins costs you transaction fees, so if you have a more powerful machine, it is more profitable to take a bigger amount in one shot than to accumulate dust-like amounts and lose more on fees because of the transaction size. Also note why mining pools are needed: because nobody could directly get fewer coins for solving some easier challenge. Basically, these addresses allow you to "pay to share on-chain" in a decentralized way. And because amounts and difficulties can be adjusted as needed, it is possible to pay someone some coins if that person provides a given amount of Proof of Work.
when the fork height comes, and upgraded clients reject non-upgraded clients' blocks, the upgraded clients' outgoing connections are likely to drop to 0
Well, this is why you need the hashrate majority on your side. Otherwise, if you are in the minority, some people may produce min-difficulty blocks and get some ASIC blocks on top of that, reaching a bigger chainwork. But if people start mining ASIC blocks with different timestamps now, they could stop the CPU attack, if the difficulty started dropping and an average ASIC block were mined every 10 minutes, like it should be.
Edit: getblocktemplate returns a block template whose timestamp is the minimum of (current clock time, previous block timestamp + 1200).
It should not be done during difficulty adjustments. In the current testnet4, an ASIC block is required then, regardless of its time. Which means that every 2016 blocks, ASICs could bring back the proper time, to let the difficulty adjust correctly. I guess the easiest way to do that is to check where "fPowAllowMinDifficultyBlocks" is applied, and fix it only there.
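The caveat above can be sketched like this (constants from testnet consensus; the function shape is illustrative, not Bitcoin Core's API): the min-difficulty exception never applies on a retarget block, so the new timestamp rule should only be enforced where fPowAllowMinDifficultyBlocks would otherwise kick in.

```cpp
#include <cstdint>

const int RETARGET_INTERVAL = 2016;  // blocks per difficulty adjustment

// Returns whether the min-difficulty exception would apply to this block.
// On a retarget boundary, real difficulty is required regardless of time,
// so the softfork's 20-minute clamp should not be enforced there either.
bool MinDiffExceptionApplies(int height, int64_t block_time,
                             int64_t prev_block_time)
{
    if (height % RETARGET_INTERVAL == 0) return false;  // retarget: real diff
    return block_time > prev_block_time + 20 * 60;
}
```

This way, every 2016th block can carry an honest timestamp, letting the adjustment see the real elapsed time.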
|
|
|
|
|