I still don't understand how they get their difficulty, though. I guess they just call "getblocktemplate".

1. If block_time > last_block_time + 1200, then difficulty=1.
2. Otherwise, difficulty=network_diff.

Note that every miner controls "block_time", because it is just the timestamp put in the block header. By default, Bitcoin Core gives you the earliest allowed time (based on the current time, the MTP rule, and the 600-second timewarp rule), but you can put a later timestamp there, to trigger the "difficulty=1" condition.

but how do they know which value to put in their blocks, beforehand?

There are only two options: the network difficulty, and the minimal difficulty. If you want to get the network difficulty, then just check the latest block where it was adjusted:

block_number=60942
60942/2016=30+...
30*2016=60480 //latest adjustment
bits_in_block(60480)=0x1913e3b7
bits_to_target(0x1913e3b7)=0000000000000013e3b700000000000000000000000000000000000000000000
bits_to_target(0x1d00ffff)=00000000ffff0000000000000000000000000000000000000000000000000000
difficulty=00000000ffff0000000000000000000000000000000000000000000000000000/0000000000000013e3b700000000000000000000000000000000000000000000
difficulty=26959535291011309493156476344723991336010898738574164086137773096960/124848484694520496450254989037678616522135653265297473798144
difficulty=215938025+81833825167668903265531020299488556990381026556677309071360/124848484694520496450254989037678616522135653265297473798144
difficulty=215938025+(2^176*3*5*7*79*103)/(2^176*3^3*23*2099)
difficulty=215938025+284795/434493
difficulty=215938025.6554651053066447560720195722370671104022389313521736828901731443...

But miners are not calculating "0x1913e3b7" for every block. They can do it just once per 2016 blocks.

https://github.com/bitcoin/bitcoin/blob/master/src/pow.cpp#L14

unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{
    assert(pindexLast != nullptr);
    unsigned int nProofOfWorkLimit = UintToArith256(params.powLimit).GetCompact();

    // Only change once per difficulty adjustment interval
    if ((pindexLast->nHeight+1) % params.DifficultyAdjustmentInterval() != 0)
    {
        if (params.fPowAllowMinDifficultyBlocks)
        {
            // Special difficulty rule for testnet:
            // If the new block's timestamp is more than 2* 10 minutes
            // then allow mining of a min-difficulty block.
            if (pblock->GetBlockTime() > pindexLast->GetBlockTime() + params.nPowTargetSpacing*2)
                return nProofOfWorkLimit;
            else
            {
                // Return the last non-special-min-difficulty-rules-block
                const CBlockIndex* pindex = pindexLast;
                while (pindex->pprev && pindex->nHeight % params.DifficultyAdjustmentInterval() != 0 && pindex->nBits == nProofOfWorkLimit)
                    pindex = pindex->pprev;
                return pindex->nBits;
            }
        }
        return pindexLast->nBits;
    }

    // Go back by what we want to be 14 days worth of blocks
    int nHeightFirst = pindexLast->nHeight - (params.DifficultyAdjustmentInterval()-1);
    assert(nHeightFirst >= 0);
    const CBlockIndex* pindexFirst = pindexLast->GetAncestor(nHeightFirst);
    assert(pindexFirst);

    return CalculateNextWorkRequired(pindexLast, pindexFirst->GetBlockTime(), params);
}

See? This "CalculateNextWorkRequired" function is called only once every 2016 blocks. In all other cases, the code just copy-pastes "nBits" from the nearest block header which contained the real network difficulty.

Does that mean that if it's shown as 210M, it's 6 times easier to mine it?

Well, if you have for example 336 ASIC-mined blocks and 1680 CPU-mined blocks among the latest 2016 blocks, then those 336 blocks at 216M give you the same chainwork as 2016 blocks mined at 36M. During difficulty adjustments, all that is counted is "how fast the last 2016 blocks were mined", not "how many hashes were needed to do that".

if we took these ASIC miners and started a new Bitcoin network, with mainnet rules, would difficulty reach 210M / 6 = 35M?

Exactly. Initially, you have a network with all ASICs, where 100% of blocks are honestly mined. Then, when CPU miners abuse the 20 minutes rule, 2016 blocks are produced faster than they would be in a 100% ASIC world. So the difficulty is increased, and in the next two weeks, ASICs can produce even fewer blocks. That situation keeps repeating, as long as other limits are not reached. And then, the difficulty put in ASIC blocks no longer reflects what they can actually mine during 10 minutes.
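If you want to reproduce that arithmetic yourself, here is a minimal Python sketch (the nBits values are the ones quoted above; the compact encoding is decoded as mantissa * 256^(exponent - 3), which is how Bitcoin interprets it):

# Decode a compact "nBits" value into a target, and compute the difficulty
# as max_target / target, where max_target corresponds to nBits = 0x1d00ffff.

def bits_to_target(bits: int) -> int:
    exponent = bits >> 24
    mantissa = bits & 0x00ffffff
    return mantissa << (8 * (exponent - 3))  # fine here, because exponent > 3 for these values

MAX_TARGET = bits_to_target(0x1d00ffff)

def difficulty(bits: int) -> float:
    return MAX_TARGET / bits_to_target(bits)

print(hex(bits_to_target(0x1913e3b7)))  # 0x13e3b7 followed by 44 zero hex digits
print(difficulty(0x1913e3b7))           # ~215938025.65, matching the hand calculation above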
|
|
|
How can I configure it such that I mine on ASIC difficulty?

Just use "getblocktemplate", and check which value is displayed there. I don't use "getdifficulty", in the same way as I don't use Bitcoin Core's fee estimations.

Is the displayed difficulty determined only by the frequency of the ASIC mined blocks?

Well, the network difficulty was supposed to change once per 2016 blocks (two weeks). Test networks broke that assumption, so if some website relies on it, it can treat "difficulty=1" blocks incorrectly. The best way is to check "chainwork" instead.

the difficulty seems way higher than that

If more than 80% of blocks are mined with the minimal difficulty, then the consequence is that the difficulty declared in block headers is measured not as "the difficulty per 10 minutes", but rather as "the difficulty per hour". So, if you see "testnet4_diff=6000", then you should read it as something like "testnet4_diff_per_10_minutes=1000" instead.

Think about it this way: you start from an honest network, where 100% of blocks are mined by ASICs. You see 2016 blocks with real difficulty per two weeks. If CPU miners can join and mine half of the blocks, then you have 1008 CPU blocks and 1008 ASIC blocks per week. Then, the difficulty is doubled. And then, you have 504 ASIC blocks (because of the unfairly raised difficulty) and 1008 CPU blocks in the next week, so the next time, the difficulty is raised again. And so on, and so on, until reaching an equilibrium, which seems to be somewhere around 1/6th of blocks mined by ASICs. So, instead of having "the difficulty per 10 minutes", you have "the difficulty per hour" instead. And the main rules which stop the difficulty from rising infinitely are just the Median Time Past of the last 11 blocks (a strong consensus rule), and the 2 hours rule (a de-facto standard, non-consensus rule).

Or are those hashrates displayed not accurate at all?

If you want accurate values, then look at chainwork. It can tell you how many hashes were needed to build the chain.
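For reference, a minimal sketch of what "look at chainwork" means: each block contributes roughly 2^256 / (target + 1) expected hashes, and that is the quantity chainwork accumulates block by block (nBits values as in the examples above):

# Expected number of hashes represented by one block, derived from its nBits.

def bits_to_target(bits: int) -> int:
    exponent = bits >> 24
    mantissa = bits & 0x00ffffff
    return mantissa << (8 * (exponent - 3))

def block_work(bits: int) -> int:
    return 2**256 // (bits_to_target(bits) + 1)

# A minimal-difficulty block counts as ~2^32 hashes, while a "real difficulty" block
# counts as roughly difficulty * 2^32 hashes, so CPU-mined blocks barely move chainwork,
# even when they make up most of the block count.
print(block_work(0x1d00ffff))  # ~4.3e9, about 2^32
print(block_work(0x1913e3b7))  # ~9.3e17, about 215938025 * 2^32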
|
|
|
if BTC doesn't increase the supply cap

It will not, because that would be a hard-fork, leading to just another altcoin. However, you don't need changes in supply. All you need is to change proportions. So, you can have a subnetwork which uses different amounts, and it can work on top of Bitcoin, as long as you can make it backward-compatible and represent on-chain amounts correctly. For example: Lightning Network introduced millisatoshis. Every on-chain satoshi is converted into 1000 millisatoshis, and then it is converted back from 1000 millisatoshis into a single satoshi. However, there are no technical barriers which would stop people from introducing non-linear dependencies, where you could have a 21 billion coins limit, and where it could be possible to create more coins out of thin air, which would exist only in a particular L2. So, for now, we have a constant 1:1000 peg. But: it is possible to make "a different tail supply L2 network", which will have different rules.

Miners not getting paid enough rewards in transaction fees

The minimal on-chain fee is set to 1 sat/vB as a de-facto standard. It was not changed for a long time, and it seems that it will not change in the future. Assuming fully filled blocks (4 MWU of weight, which is at most 1 MvB of virtual size), it means getting at least 0.01 BTC per block in fees. So, after future halvings, the basic block reward will go below 0.01 BTC, but fee rules may be left unchanged. Then, is 0.01 BTC per block not enough, if the basic block reward will be lower than that?

We know that Bitcoin L1 is not going to scale up any time soon.

Some people underestimate changes like full-RBF. If you have two on-chain transactions, where one is spending coins from the other, and you have a chain of unconfirmed transactions, then you can batch them, and increase the feerate, without adding new coins. If one transaction takes 1 kvB, and another transaction takes another 1 kvB, then the batched version could take for example 1.5 kvB, which would also bump fees, so miners would have an incentive to include the batched version instead. I think sooner or later, we will need something like a "batched mempool explorer", where instead of seeing that "we have 650 MB of transactions waiting", people could see for example "we have 400 MB of transactions waiting, if all of them are batched".

Bitcoin L1 stays at the same fee rates as they are today, but does not achieve mass adoption.

Why do you think that "fee rates" are the barrier? For example, I sold almost all of my BTCs because of new KYC/AML rules, enforced since 30 December 2024. I used BTC in 2019-2024; sometimes fees were higher, sometimes lower, but usually it was not a big issue to use Bitcoin. Also, I tried LTC for a while, when fees were higher on BTC, but finally it turned out that if I had just stuck with BTC instead, it would have been much more profitable for me, even if I had paid higher fees because of that.

by using some project on L1 like Ordinals, Runes

It is good that fees are stopping some users from pushing Ordinals on-chain. It is by design. And more batching should also be implemented, to let regular transactions compete with Ordinals, by using optimizations like cut-through.

Notice how "increase the supply cap and print more bitcoins" is not an option.

People will try it anyway. However, fortunately, burning coins is easier than making them out of thin air. Which means that people should be ready to burn coins, if tail supply supporters somehow produce too much.
And fortunately, making it mandatory to burn everything which will be overprinted is a valid soft-fork.

Nobody who owns Bitcoin will agree to let it become a money printer.

You don't need 100% network support to locally increase the supply.

Block size limit (on Bitcoin L1) got increased.

This will not happen, because then it would be possible to push more on-chain spam, without making it easier to push more regular transactions. Also, Ordinals blocked any block size increase proposal for a while. And "quantum resistant addresses" will block it even further.
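Coming back to the fee-floor arithmetic above, a quick back-of-the-envelope sketch of when the block subsidy drops below the minimum fees of one fully filled block, assuming the standard 210,000-block halving interval and the 1 sat/vB floor:

# When does the block subsidy fall below the minimum fees of one full block?
# A full block is 4 MWU = at most 1 MvB, so the 1 sat/vB floor gives at least
# 1,000,000 sats (0.01 BTC) of fees when the block is completely filled.

INITIAL_SUBSIDY = 50 * 100_000_000   # in satoshis
HALVING_INTERVAL = 210_000
FULL_BLOCK_MIN_FEES = 1_000_000      # 1 MvB * 1 sat/vB

halving = 0
while (INITIAL_SUBSIDY >> halving) >= FULL_BLOCK_MIN_FEES:
    halving += 1

print(halving, halving * HALVING_INTERVAL)
# 13 2730000: from that height on, even the fee floor on a full block
# pays miners more than the subsidy does.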
|
|
|
How does the library "know" when to do regular inverse double 16G-->8G and when to do 7G---->SomethingG...

That "something" is just "3.5". The choice of base point doesn't matter: if you can halve the point G, then you can halve every other point, in exactly the same way. For example:

03 E60FCE93B59E9EC53011AABC21C23E97B2A31369B87A5AE9C44EE89E2A6DEC0A 16*G
02 2F01E5E15CCA351DAFF3843FB70F3C2F0A1BDD05E5AF888A67784EF3E10A2A01 8*G
02 5CBDF0646E5DB4EAA398F365F2EA7A0E3D419B7E0330E39CE92BDDEDCAC4F9BC 7*G
03 592152C398D6C719636A03A6DAD64246A5A6814AA62C156B0CE5332F6759B031 3.5*G

If you take "3.5*G" and double it, then you will get "7*G". If you want to halve a point, you can just multiply it by "1/2", which is equal to 0x7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a1 in secp256k1. More than that: not only can you halve a point, but you can also get "one third", "one fifth", and "1/x" for any x-value in the range from 1 to 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364140. For example:

1/2=7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a1
1/3=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa9d1c9e899ca306ad27fe1945de0242b81
1/5=66666666666666666666666666666665e445f1f5dfb6a67e4cba8c385348e6e7

02 00000000000000000000003B78CE563F89A0ED9414F5AA28AD0D96D6795F9C63 (1/2)*G
03 4C7FF4F2BA8603998339C8E42675CEAC23EF2E9623FDB260B24B1C944A2EA1A9 (1/3)*G
03 A3C9D9DE2BA89D61C63AF260BE9759D752B8BFEF56EE41B2DAB2B99871AF38A8 (1/5)*G

And then, if you multiply 034C7FF4F2BA8603998339C8E42675CEAC23EF2E9623FDB260B24B1C944A2EA1A9 by three, you will get G as a result.

To better understand it, you can try some smaller elliptic curve, for example one where "p=79, n=67, base=(1,18)". Then, it is easier to see that "1/2=34" in the case of this smaller curve, in exactly the same way as "1/2=7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a1" in the case of secp256k1, because "34*2=68=67+1". And that's how you can calculate the inversion: if you multiply "1/x" by "x", then you should get exactly "1", when you compute it modulo "n". It is that simple.

You do not know if multiplier*G is odd or even

Exactly. And that's the reason why elliptic curves are safe. Otherwise, if it were possible to know whether a given distance between two points is odd or even, then it would be trivial to break every possible private key.
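A minimal Python sketch of that inversion, using only the group order n of secp256k1 (no point arithmetic is needed to compute the scalar "1/2"):

# Scalar inversion modulo the secp256k1 group order n.
# Multiplying a point by this scalar is what "halving" it means.

n = 0xfffffffffffffffffffffffffffffffebaaedce6af48a03bbfd25e8cd0364141

half = pow(2, -1, n)      # modular inverse (Python 3.8+)
print(hex(half))          # 0x7fffffffffffffffffffffffffffffff5d576e7357a4501ddfe92f46681b20a1
print(half * 2 % n)       # 1, so doubling "half * P" gives back the original point P
print(hex(pow(3, -1, n))) # should reproduce the 1/3 value listed above
print(hex(pow(5, -1, n))) # should reproduce the 1/5 value listed above

# The same idea on the small curve mentioned above, whose group order is 67:
print(pow(2, -1, 67))     # 34, because 34*2 = 68 = 67 + 1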
|
|
|
the size always exceeds 1MB

Not always, but it is usually the case. Some exceptions exist, for example: https://mempool.space/block/0000000000000000000169070f8f0650ed43a83e1b22b99542c216c5933682ce

Or does the 1MB ranges from 1 - 1.99MB ?

No, it goes from a few hundred bytes up to 4 MB. Another example: https://mempool.space/block/0000000000000000000515e202c8ae73c8155fc472422d7593af87aa74f2cf3d

Does the weight of a block (which doesn't exceed 4 MWU) determine the number of tx that could fit into it?

No. The legacy 1 MB part determines it. The number of UTXOs that can be created or consumed remains unchanged, no matter if Segwit is enabled or not. And even if Segwit allowed a 1 GB witness, UTXO creation and consumption would still be limited by this 1 MB legacy part. However, if a single UTXO handles more than a single user, then yes, you will have more "transactions", even if they are represented on-chain as a single "transaction".

Is it possible to have an unmined block, even after the block weight has reached its limit of 4 MWU, maybe due to mining difficulty or other reasons?

Well, anyone can create a block template. So, even if you have no mining equipment, then by having just a CPU and some full node, you can create a block template, and check which transactions can go into which blocks. And this is what sites like mempool.space do: they have their own mempool, with all waiting transactions, and then they guess which block will contain which transaction, based on transaction fee and transaction size.
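A minimal sketch of how weight relates to the legacy (non-witness) size, which is why the witness discount does not let you create or consume more UTXOs; the numbers in the comment come from the transaction decoded later in this thread:

# weight = 3 * stripped_size + total_size, where stripped_size is the serialization
# without any witness data (the "legacy 1 MB part"); vsize = ceil(weight / 4).

import math

def tx_weight(stripped_size: int, total_size: int) -> int:
    return 3 * stripped_size + total_size

def tx_vsize(stripped_size: int, total_size: int) -> int:
    return math.ceil(tx_weight(stripped_size, total_size) / 4)

# The decoded transaction further down reports size=222, vsize=141, weight=561;
# its non-witness serialization works out to 113 bytes, which reproduces those numbers:
print(tx_weight(113, 222), tx_vsize(113, 222))  # 561 141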
|
|
|
I thought those could be made out of thin air(?)

No, they are also mined. However, all signet blocks are signed, which means that only developers can mine them. Of course, you can make your own signet, and then it depends on what challenge you put as the "signet challenge", which will then be required in every coinbase transaction. So, signet is just "testnet with a restricted coinbase", where you have to sign your block before you start mining. Edit: https://en.bitcoin.it/wiki/Signet
|
|
|
Does it still mean that if I save that "listdescriptors true" output, I can always get all my coins back from it, change or not?

Yes.

I think my descriptors are precisely "hardened", ie: they have single quotes in them.

But: if you use default settings, then probably not all keys are hardened, and the last derivation steps are probably non-hardened. You can compare it with my example. You should get the same results, even if no wallet is loaded:

deriveaddresses "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/0/*)#l34fcp03" "[0,1]"
[
  "bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a",
  "bcrt1q79gdtgxpz577kdhfdhvveyfg2fff6h5ptdzuyt"
]

listdescriptors
No wallet is loaded. Load a wallet using loadwallet or create a new one with createwallet. (Note: A default wallet is no longer automatically created) (code -18)

Which means that if you put your results from "listdescriptors" here, then you can check if it works for your keys. For deriving keys, you need only "tpub" (testnet master public keys), because "tprv" (testnet master private keys) are needed only for spending.

Do I understand correctly that the best is to use "fundrawtransaction" in between, in order to let the wallet decide on the best UTXO set to use?

Well, I always pick all inputs and outputs manually, also because the Core wallet usually gives too high fee estimates, or picks UTXOs differently than I want. But: in the case of automating things, I would probably batch as many things as possible, which means collecting all withdrawal requests and handling them periodically (once per 6 hours, once per day, or something similar). It is also possible to offer faster processing, but then fees will naturally be higher, because transactions are not batched if you send coins to a single user. I remember, as a customer, paying for example only 80 satoshis, when my withdrawal was batched in a group of hundreds of users, and when mempools were below 4vMB and accepted 1 sat/vB fees. As a single user, if I had handled it manually, I would probably have paid at least 110 satoshis, if not more. So: batching is the way to go, if customers are not in a hurry.
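For the batching part, a minimal sketch of merging pending withdrawal requests into a single outputs map; the requests list is hypothetical, the addresses are just the regtest ones from the example above, and the resulting map has the shape that "createrawtransaction" expects for its outputs argument:

# Collect pending withdrawal requests and merge them into one outputs map,
# summing amounts when two requests pay the same address. Inputs and change
# are handled separately when the transaction is actually built.

from collections import defaultdict
from decimal import Decimal

# Hypothetical pending requests: (address, amount in BTC).
requests = [
    ("bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a", Decimal("0.10")),
    ("bcrt1q79gdtgxpz577kdhfdhvveyfg2fff6h5ptdzuyt", Decimal("0.25")),
    ("bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a", Decimal("0.05")),
]

outputs = defaultdict(Decimal)
for address, amount in requests:
    outputs[address] += amount

# Amounts are kept as Decimal to avoid float rounding; convert to str before JSON-encoding.
print(dict(outputs))
# One transaction paying all users shares a single set of inputs and one change output,
# so each user pays far less than they would in a standalone transaction.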
|
|
|
Does it look reasonable? Yes. What about change addresses? They are similar, but they just use a little bit different descriptors. You import them in a similar way, as regular ones, just put a different path here (as far as I remember, replacing "0" with "1" in derivation path, should do the trick). This is regular address: getnewaddress bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a getaddressinfo bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a { "address": "bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a", "scriptPubKey": "00145c51a45df278c8a91e3e96dee433cad747833337", "ismine": true, "solvable": true, "desc": "wpkh([cb4cc245/84h/1h/0h/0/0]0252b21103c4875db871378a804c32b28432e7aa258b6c9faf947f4ac562257f72)#9kjzpwfl", "parent_desc": "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/0/*)#l34fcp03", "iswatchonly": false, "isscript": false, "iswitness": true, "witness_version": 0, "witness_program": "5c51a45df278c8a91e3e96dee433cad747833337", "pubkey": "0252b21103c4875db871378a804c32b28432e7aa258b6c9faf947f4ac562257f72", "ischange": false, "timestamp": 1733986112, "hdkeypath": "m/84h/1h/0h/0/0", "hdseedid": "0000000000000000000000000000000000000000", "hdmasterfingerprint": "cb4cc245", "labels": [ "" ] } This is change address: decoderawtransaction 02000000000101977f70fe8d23e93b3c864a2e21f97962c7c2ac38aeefb374cf17a18d7e3ddf560000000000fdffffff0200ca9a3b00000000160014f150d5a0c1153deb36e96dd8cc912852529d5e8173276bee00000000160014d7ba14dab01391b5b10d22ccff306e3995a5caaa02473044022052d8131d43c7131b8430e28a14d79d2626357f1754187ab4b19b4097f739648e022026bb5b9d5c4541541922b57186f076a495e72c67e22acf974b54f8853b0be77801210252b21103c4875db871378a804c32b28432e7aa258b6c9faf947f4ac562257f7265000000 { "txid": "2e7ad6ee50db4d4505097e4815e50abd9fc12614388bd1ec90748b779e3902b6", "hash": "0189c8f460d3731f0fc47758517863f1591450d590867eba9b6511d88452342d", "version": 2, "size": 222, "vsize": 141, "weight": 561, "locktime": 101, "vin": [ { "txid": "56df3d7e8da117cf74b3efae38acc2c76279f9212e4a863c3be9238dfe707f97", "vout": 0, "scriptSig": { "asm": "", "hex": "" }, "txinwitness": [ "3044022052d8131d43c7131b8430e28a14d79d2626357f1754187ab4b19b4097f739648e022026bb5b9d5c4541541922b57186f076a495e72c67e22acf974b54f8853b0be77801", "0252b21103c4875db871378a804c32b28432e7aa258b6c9faf947f4ac562257f72" ], "sequence": 4294967293 } ], "vout": [ { "value": 10.00000000, "n": 0, "scriptPubKey": { "asm": "0 f150d5a0c1153deb36e96dd8cc912852529d5e81", "desc": "addr(bcrt1q79gdtgxpz577kdhfdhvveyfg2fff6h5ptdzuyt)#9yhcs8hc", "hex": "0014f150d5a0c1153deb36e96dd8cc912852529d5e81", "address": "bcrt1q79gdtgxpz577kdhfdhvveyfg2fff6h5ptdzuyt", "type": "witness_v0_keyhash" } }, { "value": 39.99999859, "n": 1, "scriptPubKey": { "asm": "0 d7ba14dab01391b5b10d22ccff306e3995a5caaa", "desc": "addr(bcrt1q67apfk4szwgmtvgdytx07vrw8x26tj4260jefq)#5jcwue40", "hex": "0014d7ba14dab01391b5b10d22ccff306e3995a5caaa", "address": "bcrt1q67apfk4szwgmtvgdytx07vrw8x26tj4260jefq", "type": "witness_v0_keyhash" } } ] } getaddressinfo bcrt1q67apfk4szwgmtvgdytx07vrw8x26tj4260jefq { "address": "bcrt1q67apfk4szwgmtvgdytx07vrw8x26tj4260jefq", "scriptPubKey": "0014d7ba14dab01391b5b10d22ccff306e3995a5caaa", "ismine": true, "solvable": true, "desc": "wpkh([cb4cc245/84h/1h/0h/1/0]032f34c09c0d3d8f0e5ddc1291d26d4f4678d2204d97c1fe5bfc2d1b9d8e06af86)#qwg0al2l", "parent_desc": 
"wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/1/*)#w9sg95lf", "iswatchonly": false, "isscript": false, "iswitness": true, "witness_version": 0, "witness_program": "d7ba14dab01391b5b10d22ccff306e3995a5caaa", "pubkey": "032f34c09c0d3d8f0e5ddc1291d26d4f4678d2204d97c1fe5bfc2d1b9d8e06af86", "ischange": true, "timestamp": 1733986112, "hdkeypath": "m/84h/1h/0h/1/0", "hdseedid": "0000000000000000000000000000000000000000", "hdmasterfingerprint": "cb4cc245", "labels": [ ] } As you can see, they are similar: "parent_desc": "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/0/*)#l34fcp03" regular address "parent_desc": "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/1/*)#w9sg95lf" change address Should I also "pre-generate" change addresses? In case of descriptor wallets, you don't have to "pre-generate" anything. As long as you have non-hardened derivation, and you know the master public key, you can derive public keys from that. deriveaddresses "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/0/*)#l34fcp03" "[0,1]" [ "bcrt1qt3g6gh0j0ry2j837jm0wgv726arcxvehulkk6a", "bcrt1q79gdtgxpz577kdhfdhvveyfg2fff6h5ptdzuyt" ] deriveaddresses "wpkh([cb4cc245/84h/1h/0h]tpubDCinLkxqJCQE4Bqj9UC4nmsrVrWQJFTMh5uT3LwsuU9PNA8QW6MZE9Gr6oLMRnMJpUqHvY8BkjKP8ECZBvhwFda52pXgamyJ1czQ8APe9ca/1/*)#w9sg95lf" 0 [ "bcrt1q67apfk4szwgmtvgdytx07vrw8x26tj4260jefq" ] One important thing: compromising the master public key, and a child private key, will compromise all private keys, derived from this particular public key. But: if you never reveal those private keys, you shouldn't worry about that. Is using "getnewaddress" the proper way to do it? As shown above, if you know the descriptor, then you can derive keys from any range. But: if you call "getnewaddress", then it will be marked as "used", so the next call to "getnewaddress" will give you the next one in the queue. Why oh why is there not an option to set a change address in "sendtoaddress"? I don't know. But usually, I just do things manually, by using "createrawtransaction", and then "signrawtransactionwithwallet", to control everything. Is it me or dealing with crypto daemons only gets more complex as time passes, instead of getting simpler? Yes, things are getting more and more complex. In the past, you could just stick with P2PK, and not worry too much about it. And there was also just "generate" command, which was enough to mine some blocks, without connecting to any mining pool. But: as more people are jumping into the crypto world, it will be more and more complex, because they will create new systems, new layers, and sooner or later, you won't even have a single UTXO per user, because it would be too expensive, to do that on-chain (and then, you would need another API, to join and split many keys and signatures into single addresses and coins, and to handle multi-user transaction, just moving a single coin, from one UTXO to another).
|
|
|
Full KYC for every fiat/crypto transaction, regardless of the amount.

For that reason, I sold something like 99.9% of the BTC I had. I kept some small change for testing purposes, because it may come in handy to push some transaction through the network, but basically, KYC/AML means that I would have to show my ID at Bitcoin ATMs, and that is something I cannot accept.
|
|
|
You don't need blockchain to do that. Computers have been solving this problem since the 1950s.

How? If you have a valid signature, then both transactions are valid: "Alice->Bob" and "Alice->Charlie". Both signatures are correct, because Alice has the private key and can create as many alternative versions as she wants. And if you don't have a blockchain, and you don't want a trusted third party, then how do you decide which version is correct?
|
|
|
to allow those who could not safely transact for fear of their government to transact

It doesn't solve that problem either. For example, I sold 99.9% of my BTCs because of the new KYC/AML regulations in force since 2025. I still have some coins, just to play with them if needed, and Bitcoin is still a very interesting project from the programming perspective, but it will be harder and harder to use it in practice without having your ID checked by someone.

The problem which is really solved is just the double spending problem. If you have an "Alice->Bob" and an "Alice->Charlie" transaction, then the blockchain can tell you which one is valid, and which one was supported by the hashrate majority. And that's it. There's not that much beyond the Byzantine Generals Problem, really (by the way: if you want to execute the example described by Satoshi, where each General has a computer, then you can use OP_SIZE on the DER signature, and lock it into a shared public key for a given group of Generals, and then the fastest General can create a transaction which will claim the whole pot, and will be the first broadcast to the Bitcoin network).

And then programming would be dragged backwards in progress by about 40 years.

We are getting there, but not because of Bitcoin. Recently, the "magic box that can do everything" is Artificial Intelligence. Blockchain-based ideas will soon hit their limits, when people find out that transactions have fees, and that if you do a lot of computations without batching anything, then you have to pay a lot of money for doing that. And AI-based models will hit different limits, when people notice that "AI programmed by AI" is not the best way to do things, and can lead to dumber and dumber models, which will just turn everything into some kind of echo chamber, where the same data is processed over and over again, without adding anything creative.

Really, how should that blockchain-based magic "removal of AI fake news" even work?

Fake news will never be removed, because if some alternative version of reality is logical, and could happen in theory, then it is hard to falsify it if you don't have enough data. If you have an "Alice->Bob" genuine transaction, and an "Alice->Mallory" fake transaction, then both can have valid signatures, and the news saying that "Mallory received coins from Alice" is called "fake news" only because the "hashrate majority" decided so. And because all AI models have their limits, and their models are simplified and optimized to give answers fast enough, there will always be some room for "fake news". An AI can predict your age, based on your age, with 99% accuracy: there is always 1% left for fake answers, just because that's how AI models are trained, and that's what prevents them from "overfitting", when you introduce a penalty for producing "too complex solutions".
|
|
|
how to mine on testnet4?

Just apply those changes: https://bitcointalk.org/index.php?topic=5496494.msg64205870#msg64205870

Then, you will produce valid blocks locally. Getting them globally confirmed depends on your luck (whether you propagate the block faster than other nodes, and whether ASICs won't reorg it).

20 minutes difference in most blocks mined

Of course, because in this way you can get the most coins out of the system, by using consensus rules in your favor. Blocks with minimal difficulty are easy to mine, and they push the timestamps forward, so that ASIC blocks bring the time back to the past, and then you can mine blocks with the same timestamps over and over again (which is exactly what you can observe, if you just look at block timestamps alone). So, people are just maximizing their profits. I estimate the equilibrium is somewhere around 1/6th of blocks mined by ASICs, which means around 83% CPU-mined blocks (which you can easily challenge with any CPU, as long as ASICs won't reorg them).
|
|
|
without scriptsig you'll have no first block.

Why not? You can for example detect if something is P2PK, and then handle it. In all other cases, you can just assume it is valid. Then, it will be compatible with the rest of the network, just like a downgraded soft-fork. That's why old nodes have no Taproot or Segwit, and can still follow the same chain. And you can do that too: throw away the whole Script implementation, hardcode just some patterns for P2PK, and implement only ECDSA, and nothing else.
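If you want to go down that path, a minimal sketch of detecting the P2PK pattern (a push of a 65-byte uncompressed or 33-byte compressed public key, followed by OP_CHECKSIG); the example script is the well-known Genesis block output:

# Recognize the two classic P2PK scriptPubKey patterns.

OP_CHECKSIG = 0xac

def is_p2pk(script: bytes) -> bool:
    if len(script) == 67 and script[0] == 65 and script[1] == 0x04 and script[-1] == OP_CHECKSIG:
        return True   # push of an uncompressed key + OP_CHECKSIG
    if len(script) == 35 and script[0] == 33 and script[1] in (0x02, 0x03) and script[-1] == OP_CHECKSIG:
        return True   # push of a compressed key + OP_CHECKSIG
    return False

# The Genesis block output script, as an example of the uncompressed form:
genesis_spk = bytes.fromhex(
    "4104678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb6"
    "49f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f"
    "ac"
)
print(is_p2pk(genesis_spk))  # True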
|
|
|
Your model is too simple. Let's prove that Satoshi participated in your message:

P3=03678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB6
R3=02DE3DCE2CDA1208E97C20BB537BB13E349A853463AD4DBA1FA20501979B87D495
s3=0000000000000000000000000000000000000000000000000000000000000001

P4=0249A5B691630895861D099B168950F9E8A38289D90657EFFF6C30B121F4C5E208
R4=03BADE9D75A6D917A5D8BA2A26D1F1AED70E59E50E0C87D366CFAD5BC4D9B8D730
s4=9308a87057ca70d6b6c597f5043686f2e2fa8487e05e6e46e510528b404da728

P=P3+P4
P=0256B76A89AF551C6C58B1972EE5BE3DAE2A2A39202AD7439962F647780F025E6A
R=R3+R4
R=025DA08419189F4C48912F64130DC262C522EBE81E10663927FB1341EC41E208E8
s=s3+s4
s=9308a87057ca70d6b6c597f5043686f2e2fa8487e05e6e46e510528b404da729

Joined parts should not be used directly as a proof, by checking just your public key. You should instead hash all public keys, commit to that hash in the R-value, and then prove it by revealing the commitment. Otherwise, you can "prove" that anyone was a part of your multisig.
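To see why such joined parts prove nothing by themselves, note that anyone can add two public keys without knowing any private key. A minimal sketch in plain Python (the compressed keys are the ones quoted above, taken as posted):

# Minimal secp256k1 point addition, enough to recompute sums like P = P3 + P4.

p = 2**256 - 2**32 - 977   # the secp256k1 field prime

def decompress(hexkey: str):
    prefix, x = int(hexkey[:2], 16), int(hexkey[2:], 16)
    y = pow((x * x * x + 7) % p, (p + 1) // 4, p)   # square root works because p % 4 == 3
    if y % 2 != prefix % 2:
        y = p - y
    return x, y

def add(P, Q):
    (x1, y1), (x2, y2) = P, Q
    if P == Q:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p   # point at infinity not handled in this sketch
    x3 = (lam * lam - x1 - x2) % p
    return x3, (lam * (x1 - x3) - y1) % p

def compress(P):
    x, y = P
    return ("02" if y % 2 == 0 else "03") + format(x, "064x")

P3 = decompress("03678AFDB0FE5548271967F1A67130B7105CD6A828E03909A67962E0EA1F61DEB6")
P4 = decompress("0249A5B691630895861D099B168950F9E8A38289D90657EFFF6C30B121F4C5E208")
print(compress(add(P3, P4)))   # anyone can compute such a sum, no private key required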
|
|
|
One could argue that Satoshi has intentionally "lost" access to the coins that they've mined.

Even in that case, the public key is valid. Which means that there is a matching private key, which could be used to sign those coins and move them. Fortunately, the Genesis Block reward is unspendable, so even if you know the key, you cannot move those particular coins. But still: you could move all coins which were deposited later into this P2PK, or into the matching P2PKH.

There are assumptions, but no proof whatsoever, that Satoshi mined many blocks (~20,000).

There are only traces of extraNonce, pointing out that some miner produced a lot of blocks. It doesn't have to be Satoshi, and the estimation of 20k blocks also rests on the big assumption that a single miner owns all chunks of nonces (which may be incorrect, because every time extraNonce starts again from zero, there could be another miner with similar mining power).

The coins that they've generated haven't moved either.

Some of them moved, but then they are excluded from the pattern. For example: http://satoshiblocks.info/

We have blocks from 0 to 14 forming one extraNonce line, and people assume that Satoshi mined them. But: blocks number 9 and 12 are explicitly excluded from the pattern, because those coins were spent. And then, we have blocks from 15 to 25, forming another extraNonce line. And people assume that if the first line was mined by Satoshi, then the second line was too.
|
|
|
If we have addition, then it does follow to say that, by the definition of the words and the operations, there is subtraction, multiplication and division. But to be redundant, they cannot be applied directly to points on the curve, but to their constituent parts to produce the result desired.

When it comes to addition and subtraction, you can apply them to any two public keys. However, when it comes to multiplication and division, you can only multiply or divide a public key by a private key. You cannot multiply or divide two public keys alone; here is why: https://bitcointalk.org/index.php?topic=5460766.0

So, I guess the way forward for you would be to test everything on a small elliptic curve, where you can brute force everything. And then switch to a little bit bigger one. And so on, and so forth. Example parameters: https://bitcointalk.org/index.php?topic=5459153.0
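Following that advice, here is a minimal brute-force sketch of the small curve mentioned earlier in this thread, assuming the same equation as secp256k1 (y^2 = x^3 + 7) over p=79, with base point (1,18) and group order n=67:

# Brute-force walk over a tiny curve, where every point can be enumerated.

p, n, G = 79, 67, (1, 18)

def add(P, Q):
    if P is None: return Q            # None represents the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

# Walk through all multiples of G; after n steps we are back at infinity.
P, k = G, 1
while P is not None:
    P = add(P, G)
    k += 1
print(k)                              # 67, the group order

# "Halving": (1/2 mod 67) = 34, and doubling 34*G gives back 1*G = G.
half = pow(2, -1, n)
D = None
for _ in range(half):
    D = add(D, G)                     # D = (1/2)*G, computed by repeated addition
print(add(D, D) == G)                 # True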
|
|
|
Clients treat these outputs as standard

Sending to new Segwit addresses was always standard. However, spending from them is non-standard. Older example: https://mempool.space/pl/address/bc1sw50qgdz25j

But may be add some filters to a client

And then, when a new address type is officially introduced, it will be harder, because old clients wouldn't allow sending to it. For example: if you have a pre-Taproot client, it can still send coins into Taproot, as long as it has a Segwit implementation. The same with the recently added anchors: https://mempool.space/address/bc1pfeessrawgf

And also, there are reasonable limits: for example, you cannot have a Segwit address with a witness program larger than 40 bytes, so it is two times smaller than the OP_RETURN limit (so, if anyone would want to spam, then that person would abuse OP_RETURN anyway).
|
|
|
|