Bitcoin Forum
June 14, 2024, 10:41:51 AM
1  Bitcoin / Development & Technical Discussion / Re: Total confusion about Scriptpubkey, Standard and non standard Transactions on: June 12, 2024, 12:28:23 PM
Quote
Does NULL_DATA represent a transaction output (data) with a zero value entry?
https://github.com/bitcoin/bitcoin/blob/master/src/script/solver.h#L22
Quote
Code:
enum class TxoutType {
    NONSTANDARD,
    // 'standard' transaction types:
    PUBKEY,
    PUBKEYHASH,
    SCRIPTHASH,
    MULTISIG,
    NULL_DATA, //!< unspendable OP_RETURN script that carries data
    WITNESS_V0_SCRIPTHASH,
    WITNESS_V0_KEYHASH,
    WITNESS_V1_TAPROOT,
    WITNESS_UNKNOWN, //!< Only for Witness versions not already defined above
};

Quote
This part was a little bit complex to understand
https://github.com/bitcoin/bitcoin/blob/master/src/script/solver.cpp#L164
Quote
Code:
        if (witnessversion == 1 && witnessprogram.size() == WITNESS_V1_TAPROOT_SIZE) {
            vSolutionsRet.push_back(std::move(witnessprogram));
            return TxoutType::WITNESS_V1_TAPROOT;
        }
This is Taproot, also known as P2TR.

https://github.com/bitcoin/bitcoin/blob/master/src/script/solver.cpp#L196
Quote
Code:
    int required;
    std::vector<std::vector<unsigned char>> keys;
    if (MatchMultisig(scriptPubKey, required, keys)) {
        vSolutionsRet.push_back({static_cast<unsigned char>(required)}); // safe as required is in range 1..20
        vSolutionsRet.insert(vSolutionsRet.end(), keys.begin(), keys.end());
        vSolutionsRet.push_back({static_cast<unsigned char>(keys.size())}); // safe as size is in range 1..20
        return TxoutType::MULTISIG;
    }
And this is bare multisig, also known as P2MS.
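As a standalone sketch of that layout (illustrative code, not taken from Bitcoin Core; the `Bytes` alias and the dummy keys are made up here): for an m-of-n multisig, the solver output is {m}, followed by the n public keys, followed by {n}.

```cpp
#include <cassert>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Mimics the vSolutionsRet layout that Solver() produces for bare
// multisig: {m}, then each public key, then {n}.
std::vector<Bytes> MultisigSolutionsSketch(unsigned required,
                                           const std::vector<Bytes>& keys) {
    std::vector<Bytes> solutions;
    solutions.push_back({static_cast<unsigned char>(required)});    // m, 1..20
    solutions.insert(solutions.end(), keys.begin(), keys.end());    // pubkeys
    solutions.push_back({static_cast<unsigned char>(keys.size())}); // n, 1..20
    return solutions;
}
```

For a 2-of-3 script, the result has five elements: {2}, the three keys, {3}.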
2  Bitcoin / Development & Technical Discussion / Re: Total confusion about Scriptpubkey, Standard and non standard Transactions on: June 12, 2024, 10:08:44 AM
Quote
Did you write it yourself or find it elsewhere?
Let's see: https://en.bitcoin.it/wiki/Protocol_rules#%22tx%22_messages
3  Bitcoin / Development & Technical Discussion / Re: Total confusion about Scriptpubkey, Standard and non standard Transactions on: June 12, 2024, 07:32:38 AM
Quote
Is there any proper definition for ScriptPubKey, standard and non standard transactions?
https://github.com/bitcoin/bitcoin/blob/master/src/policy/policy.cpp#L70
Quote
Code:
bool IsStandard(const CScript& scriptPubKey, const std::optional<unsigned>& max_datacarrier_bytes, TxoutType& whichType)

Quote
Do ScriptPubKey and standard transactions mean the same thing?
No. The "scriptPubKey" is just a field, expressed as "const CScript& scriptPubKey", which contains the raw bytes of the Script. "IsStandard()", however, is a function that takes "scriptPubKey" as one of its arguments and returns a "bool": "true" if a transaction is standard, and "false" if it is not.

Quote
How do P2TR and P2MS work?
https://github.com/bitcoin/bitcoin/blob/master/src/script/solver.cpp#L140
Quote
Code:
TxoutType Solver(const CScript& scriptPubKey, std::vector<std::vector<unsigned char>>& vSolutionsRet)

Quote
Is it possible for a transaction to be non standard and still be valid?
Of course. Non-standard and invalid are two different things.

Quote
Can you explain the last paragraph above?
Non-standard transaction, which is valid: https://mempool.space/testnet4/tx/09096e1e6fb31f33f3de40b9a1d76908e565e9646e5aed4e0813e0a2a799fc4c
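A toy model of that distinction (hypothetical types, not Bitcoin Core code): validity is a consensus question, while standardness adds a stricter policy check on top, so "valid but non-standard" is a perfectly legal combination.

```cpp
#include <cassert>

// Consensus decides validity; policy decides standardness.
// Policy is strictly narrower: every standard tx is valid,
// but not every valid tx is standard.
struct Tx {
    bool passes_consensus; // would be accepted inside a block
    bool passes_policy;    // would be relayed by default-configured nodes
};

bool IsValidTx(const Tx& tx)    { return tx.passes_consensus; }
bool IsStandardTx(const Tx& tx) { return tx.passes_consensus && tx.passes_policy; }
```

The testnet4 transaction linked above is exactly the first case: it passes consensus, so it can be mined, but default nodes will not relay it.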
4  Bitcoin / Development & Technical Discussion / Re: Ordinals and other non-monetary "use cases" as miner reward on 2140+ on: June 12, 2024, 04:45:58 AM
Quote
How will miners find an incentive to mine blocks if there is no block reward left?
Look at testnet3, and try to use your CPU to mine something there. Then, you will find your answer.

Quote
Some people claim that the transaction fees will be enough.
They are enough on testnet3. Why wouldn't the same situation repeat on mainnet? Those networks are very similar; the 20-minute rule is one of the main differences between them, and other things are pretty much the same (which is also the reason why test coins got some value, even though they shouldn't, and why they often behave better than some altcoins).

Quote
the price will be so high that there will be not enough transaction volume
You cannot take all of your coins to the grave. Sooner or later, people would need to spend their coins, because if their plan is to never move them, then sending all of that to OP_RETURN will achieve the same outcome.

Quote
Well what if the fees are not enough, or there are not enough volume tx going on, or anything else?
Then people will abuse the 21 million coin limit even further than today, and that will "solve" the problem. Note that even though we have altcoins, they are not all pegged to existing coins. They usually create additional supply, which abuses the system. If you have 21 million BTC, but BTC has only 50% dominance, then the situation is the same as it would be with 100% BTC dominance and 42 million coins.

Quote
Would non-monetary use cases like Ordinal inscriptions solve the problem?
No, they would create the problem instead, for example by raising fees sooner than they should, and crowding out regular payments. It is not only about the incentive they bring to the miners. It is also about the incentive they removed, by discouraging people from transacting: when people saw some mempool stats, they switched to other payment methods.
5  Bitcoin / Development & Technical Discussion / Re: Requesting Testnet4 tBTC on: June 10, 2024, 08:05:18 PM
Quote
how to mine testnet4 using cpu
Code:
$ cat mining4.sh
# loop forever; 100000000 is the "maxtries" argument of generatetoaddress,
# and "nonce" here is just a loop counter, not the block header nonce
nonce=0
while true
do
  ./bitcoin-cli --testnet4 generatetoaddress 1 mk4Lmwd1g787twjQYbswdotpYDz9XVD3hH 100000000
  echo nonce: $nonce
  ((nonce=nonce+1))
done

Quote
please guide me step by step
1. Download, build and run Bitcoin Core, made by fjahr, from the branch 2024-04-testnet-4-fix: https://github.com/fjahr/bitcoin/tree/2024-04-testnet-4-fix
2. Generate a new address (or import one into Bitcoin Core).
3. Start mining. Most likely your blocks will be ignored, but you will see them in the GUI if you use Bitcoin Core as a wallet.

Quote
how to create a testnet4 address
Code:
getnewaddress
mk4Lmwd1g787twjQYbswdotpYDz9XVD3hH
dumpprivkey mk4Lmwd1g787twjQYbswdotpYDz9XVD3hH
cPLfGZGNgVsYvGfUTP62JdPBzHhL3jjqQ82RobkamETm7WzdgcGE

Quote
can this testnet address mk4Lmwd1g787twjQYbswdotpYDz9XVD3hH also be used on testnet4??
Yes: https://mempool.space/testnet4/address/mk4Lmwd1g787twjQYbswdotpYDz9XVD3hH
6  Bitcoin / Development & Technical Discussion / Re: [Guide] Solo mine testnet bitcoins with bfgminer, Bitcoin Core, and a CPU/GPU on: June 10, 2024, 01:05:23 PM
Quote
Why wouldn't history of Testnet3 repeat in Testnet4?
Of course it will repeat on testnet4. But, as I said in another topic, the solution is to improve the code in your Bitcoin Core node, not in your mining equipment. Think about it, and use proper rules to always work on blocks with minimal difficulty.

Quote
Does it make sense to drive difficulty up to levels where only few dare to waste money for energy for a worthless-by-design-and-intention coin like tBTC?
Of course it doesn't make any sense. However, as long as testnet rules are quite close to mainnet ones (on both testnet3 and testnet4), those coins are doomed to get some value over time. And then it is all about having enough support for more frequent resets.

Quote
It would be more fun for participants to be able to mine Testnet4 with CPU cycles in a truly decentralized fashion.
It is already possible. There are quite frequent block reorgs, but besides that, it works even on testnet3.

Quote
setting a max difficulty in testnet
There already is a max difficulty, but nobody has reached it yet, even on mainnet. Each hash is just a 256-bit number, and if you create a SHA-256 hash of almost all zeroes, then you will reach the max.
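A rough way to see the scale (an approximation, not the exact compact-target arithmetic Bitcoin uses): difficulty 1 corresponds to about 32 leading zero bits in the hash, and each additional required zero bit roughly doubles the difficulty.

```cpp
#include <cassert>
#include <cmath>

// Approximate difficulty for a target with the given number of leading
// zero bits: difficulty 1 sits near 32 zero bits, and every extra zero
// bit doubles the work. A hash of almost all zeroes is the theoretical max.
double ApproxDifficulty(int leading_zero_bits) {
    return std::pow(2.0, leading_zero_bits - 32);
}
```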

Quote
and known EOL would help
This one we have as well, currently set to year 2106 or year 2038, depending on the Bitcoin Core version (some of them are buggy).

Quote
mine all you want with whatever ASIC you have as of Jan 1 2025 testnet 4 is going to become testnet 5
Well, if you reset things more often than every four years, then you have no chance to test any halvings, so you could just as well permanently set the basic block reward to 50 tBTC.

Quote
It's not like the old testnets will stop working just new ones will start.
We already have something like that in signet: anyone can just pick a new signet challenge, and it will start a brand new network. And if you pick OP_TRUE, then it is more or less the same as testnet4, but without the 20-minute block rule.
7  Bitcoin / Development & Technical Discussion / Re: Requesting Testnet4 tBTC on: June 10, 2024, 10:09:12 AM
Quote
But still, when you cpu mine at a couple dozen Mh/s, you're still vastly outnumbered against a multi Terrahash/second asic that's also mining at the same difficulty.
It is obvious that you have to prepare some blocks in advance (for example, on testnet3, everything is filled up to 2 hours into the future). And then it is more a matter of network connections than of computing power. You just prepare a block, and you have your block with difficulty one versus some ASIC block, also with difficulty one. Then it is not about "who will mine it faster", but rather about "who will propagate it faster".

And of course, ASIC miners rule that world of test chains, so you have to mine your blocks around theirs. But it is still possible to propagate a block faster than some ASIC will, because both players prepare their blocks in advance, and then the competition comes down to network connections alone.

Another thing is that ASIC miners could potentially always reorg CPU-mined blocks, but for some reason they don't. It is more profitable to mine a strong, regular block on top of them, because then you can pick any block time you want. With CPU-mined blocks, the timestamp is more restricted.

Quote
i tested out the cpuminer that i included into my image for 2 weeks to see if it was stable, and i did not solve a single block...
It should mine at least some blocks, but they were probably reorged. If that's the case, then you have to improve the code of your node, not your mining equipment. For example, to mine testnet blocks easily, I slightly modified Bitcoin Core, and since then it works on my CPU.

Quote
So yes, maybe at the moment it might be possible to mine a couple testnet4 blocks with your cpu if you're really lucky.
I repeated the same thing on testnet3, just to be sure. Now I can mine on a CPU on both networks, but testnet4 is more stable. However, mining on testnet3 is still quite a good idea if you want to test block rewards based on transaction fees.
8  Bitcoin / Development & Technical Discussion / Re: Requesting Testnet4 tBTC on: June 10, 2024, 06:45:59 AM
Quote
Just for your information: no, you can't...
Yes, I can, and I did. It is always a lottery, but I can put something in the next coinbase if you want. There is always a chance that it will take some time, because of frequent chain reorganizations, but it is possible.

Quote
The diff is already sky-high
The diff for CPU mining is always one. Obviously, I won't try mining more difficult blocks on CPUs.

Also see: https://bitcointalk.org/index.php?topic=5468925
Connected topic: https://blog.lopp.net/griefing-bitcoin-testnet/

And note that on testnet4, some tricks are easier to achieve than on testnet3 (as long as the basic block reward is still quite high).
9  Bitcoin / Development & Technical Discussion / Re: Requesting Testnet4 tBTC on: June 07, 2024, 05:13:31 AM
Quote
However, I haven't been able to find a testnet4 faucet yet.
Why would anyone run a faucet for a network which is not yet finalized, and can be reset at any time?

Quote
Would anyone be willing to share some tBTC with me at the following address?
Why do you need a faucet if you can mine a block on your CPU? For example, note how many blocks were mined by wiz: https://mempool.space/testnet4/mining/pool/wiz

One of the latest blocks: https://mempool.space/testnet4/block/0000000012bfe95b1e2fcdccf0f855792a051e2412287869c758491a5cacdbf7

See? Those blocks have just the minimal difficulty. You need only a CPU to mine them.
10  Bitcoin / Development & Technical Discussion / Re: Addressing Block size and occasional mempool congestion on: June 05, 2024, 10:13:25 AM
Quote
That is unlikely to happen. It's going to break tons of software.
Some software will be broken anyway, and then people will have a choice: upgrade, or deal with a broken version somehow. For example: timestamps have four bytes allocated, which means that after year 2106, we will be forced to hard-fork anyway.

Another example: the year 2038 problem. Many people thought we were immune, but some versions are not, because of type casting between signed and unsigned, and between 32-bit and 64-bit values: https://bitcointalk.org/index.php?topic=5365359.msg58166985#msg58166985
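Both horizons follow directly from 32-bit timestamp arithmetic (a sketch; the year conversion uses an average Gregorian year of 31556952 seconds):

```cpp
#include <cassert>
#include <cstdint>

// A signed 32-bit time_t overflows at 2^31 - 1 seconds after the Unix
// epoch (January 2038); an unsigned 32-bit timestamp, as stored in the
// block header, wraps at 2^32 - 1 seconds (February 2106).
int ApproxOverflowYear(int64_t max_seconds) {
    const int64_t kSecondsPerYear = 31556952; // average Gregorian year
    return 1970 + static_cast<int>(max_seconds / kSecondsPerYear);
}
```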

Edit: By the way, a similar discussion is ongoing on Delving Bitcoin: https://delvingbitcoin.org/t/is-it-time-to-increase-the-blocksize-cap/941
11  Bitcoin / Development & Technical Discussion / Re: Addressing Block size and occasional mempool congestion on: June 04, 2024, 12:41:02 PM
Quote
you could allow blocks to be 50MB in size. and at the same time, only let transactions be 500 bytes maximum or something.
Then a single user will simply make more than one transaction.

Quote
1TB is old tech. they are making hard drives 20+TB now. lets get with the times.
You don't want to run a full archival node even now, while the size of the chain is below 1 TB. Would you run a node if we increased it? I guess not.

So why do you want to increase it, without participating in the costs of doing so?

Quote
The way you could implement such a thing into Bitcoin would be like you could have DNS entries for a particular domain where you define one record for the alias you want to use, containing a signed BIP322 transaction.
You mean LNURL for on-chain payments?

Quote
making OP_FALSE OP_IF ... OP_ENDIF non-standard
It could help, but it would not be enough when you have mining pools willing to bypass such limitations.

Quote
Your statement isn't relevant with today's condition
It somewhat is, but taken from another angle: big mining pools will handle it, but regular users may stop running non-mining nodes. And that will indirectly lead to mining centralization, because then nobody except big pools will agree to run a full archival node 24/7. And in that case, it will be possible to skip more and more steps, if users stop caring about validating the output produced by those mining pools.

Quote
For example, "peg" your Bitcoin to L2 and remove the "peg" from L2.
For that reason, decentralized sidechains are needed, because then you end up with a single, batched on-chain transaction every so often (for example, every three months). And I guess that, sooner or later, people may be forced to join their coins, and to handle more than one person on a single UTXO, when it becomes too expensive to make single-user transactions anymore.
12  Bitcoin / Development & Technical Discussion / Re: Addressing Block size and occasional mempool congestion on: June 04, 2024, 06:35:01 AM
Quote
but for how long are we going to continue like this?
As long as needed to get transaction joining, and to improve batching.

Quote
Even with this implementation, we haven't been able to say "Goodbye" to congestion.
Because those tools are not there to get rid of congestion for legacy transactions. They are there to allow cheaper transactions for those who opt in. Everyone else will pay as usual, because those changes are backward-compatible.

Quote
but this idea wasn't enough to get approval from the community due to the absence of replay protection.
1. It was because it was a hard-fork, not because of replay protection.
2. If you want to introduce replay protection, it can be done at the level of the coinbase transaction, but BTC simply didn't introduce replay protection as a soft-fork, and altcoins like BCH didn't bother to make it "gradually activated", or to maintain any compatibility in sighashes. Imagine how much better some altcoins could be if all of their transactions were compatible with BTC, and if everything that was confirmed on BCH would eventually be confirmed on BTC, and vice versa. Then you would have a 1:1 peg, and avoid a lot of issues.

Quote
Meaning, the absence of this protection could cause a replay attack.
It is a feature, not a bug. For example, some people talked about things like a "flippening", where some altcoin would reach a bigger hashrate than BTC and take the lead. But those altcoin creators introduced incompatible changes, which effectively destroyed any chance of such a "flippening". Because guess what: it is possible to start from two different points, then reach an identical UTXO set on both chains, and then simply switch to the heaviest chain, without affecting any user. But many people wanted to split coins, not to merge them. And if splitting and dumping coins is profitable, then the end result can be easily predicted.

Quote
This was supposed to address the issue, but it doesn't have a say in the size of blocks either.
If you want to solve the problem of scalability, then the perfect solution is one where you don't have to care about things like the maximum size of the block. Then it "scales": if you can do 2x more transactions without touching the maximum block size, then it is "scalable". If you can do 2x more transactions, but it consumes 2x more resources, then it is not "scaling" anymore. It is just "linear growth". And that can be done without any changes in the code: just release N altcoins: BTC1, BTC2, BTC3, ..., BTCN, and tada! You have N times more space!

Quote
but what's now left for others like myself, rather than to sit and wait in the queue along with the other transactions that are in the mempool?
If you need some technical solution, then you need better code. And then you have two options: write better code, or find someone who will do that.

Quote
What do you think is a possible solution to this problem?
Transaction joining, batching, and having more than one person on a single UTXO in a decentralized way.
13  Bitcoin / Bitcoin Discussion / Re: Google, Yahoo and Byzantine (fault) generals problem on: June 03, 2024, 07:55:27 AM
Quote
If everything was done by a single person, then why was Satoshi a big blocker in forum posts and emails, and a small blocker when writing software?
Because the whole network was in its infancy, and people thought in terms of hard-forks, not soft-forks. For example:

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
See?
1. Doing that "if" statement directly is a hard-fork. It could work while the coin is just a small altcoin with free, or almost free, transactions, but it is not serious once it grows bigger than that.
2. It treats the network in a semi-centralized manner, where you can ask everyone to upgrade, and they will obey without asking questions.
3. The centrally-controlled alert system was still there, and it was actively used after the Value Overflow Incident.

Quote
In this mail, Satoshi argues for 100 GB added in the blockchain everyday. In this forum post, he makes it clear that running a node is not the intended configuration for the average user, which is another way to say "only large farms will verify the blockchain".
There is no contradiction. In the same way, you can say that the blockchain will have enormous Proof of Work, so it will be well-protected from any chain reorganization, and a single confirmation will be sufficient in 99% of cases. And at the same time, you can say that regular users won't have enough power to mine it, so there will be some non-mining clients.

Taking that sentence, someone could say: "Hey, why isn't everyone a miner, if 1 CPU = 1 vote?". But there is no contradiction there. You actually can be a miner, even on a CPU. Nobody will ban you for doing so. And you can even receive millisatoshis on LN for mining very weak blocks, but don't expect that to be supported on the main layer, because it would cause a flood of transactions, each sending single satoshis over and over again. Or, to say the same thing another way: you can mine on a CPU, but the estimated time would be, for example, one block per 1234 years, or worse.
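Estimates like "one block per 1234 years" come from the standard back-of-envelope formula (a sketch with illustrative numbers): expected seconds per block ≈ difficulty × 2^32 / hashrate.

```cpp
#include <cassert>

// Each difficulty unit needs about 2^32 hashes on average, so the
// expected time to find one block is difficulty * 2^32 / hashrate.
double ExpectedSecondsPerBlock(double difficulty, double hashes_per_second) {
    return difficulty * 4294967296.0 / hashes_per_second;
}
```

At difficulty 1, a CPU doing 10 MH/s needs about 7 minutes per block on average; at mainnet-scale difficulty, the same CPU would need geological timescales.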

Quote
In the software, he lowered the block size limit from 32 MB to 1, IIRC.
1. The 32 MiB limit was specified indirectly, as the maximum size of a P2P message.
2. The 1 MB limit was still unreachable, because of BDB locks.
3. He didn't lift that 32 MiB limit; it was still there. He only introduced the block size limit, without even touching the P2P message limits.

Quote
That doesn't make sense, even for 2010.
It makes sense if your rule of thumb is that you have some kind of control over the code, so anyone could just raise this limit, and it would be followed by all users.

Quote
It would be trivial to add a lot more than 144 MB every day for "large server farms".
In 2010, we had zero "large server farms". If it had been unlimited back then, it could have been abused. And then people would have been forced to introduce some limit anyway. But since nobody knew how to make a temporary limit correctly, it became permanent. Because blocks are not like transactions, which you can reject now and include later. If you reject a block, then it is rejected.

Quote
Or was this arbitrary?
It was just some kind of guess. Satoshi loved decimal numbers, and putting "a million bytes" as a limit was a nice, round number to start with.

Quote
If you're not convinced about Nick being Satoshi, then have you read this?
No. But let's dig into that:

Quote
I must stress this: an open, unbiased search of texts similar in writing to the Bitcoin whitepaper over the entire Internet, identifies Nick’s bit gold articles as the best candidates.
You can run a similar analysis on forum posts from different people or groups. You could be surprised how many times there is a match, even though the content was not written by the same person. Often, the end result reflects your writing style or personality more than authorship. Which means that between two introverts there will be a bigger match than between an introvert and an extrovert.

Quote
For each expression, when it is possible and relevant, we will mention the proportion of cryptography papers containing the expression (using Google Scholar), to measure how common its use is among researchers, and later provide a rough value for the probability of the null hypothesis.
Note that the whitepaper was published on a mailing list, instead of being submitted formally. And that makes a lot of difference, also because of the timing: October 31, 2008. The academic year starts in October, so it was just one month after that. It was not yet polished and thought through enough to be officially submitted as a thesis or some other kind of dissertation toward a particular degree. It was just another experiment at that time, widely criticized, and many people rejected the whole approach upfront. So if you compare it with other papers, it will naturally lead you to blogs and less formal work.

I observed the same thing with my own topic: it was informal and experimental, and it received a lot of merits: https://bitcointalk.org/index.php?topic=5402178
However, if I had used a more formal description, with a more academic approach, it would have been rejected, and nobody would have paid attention.

Quote
the probability of finding all of “it should be noted”, “for our purposes”, “can be characterized” and “preclude” as part of a given researcher’s vocabulary has the upper bound 0.08%
Of course. And you can CTRL+F the phrase "guess what" and conclude that the author is vjudeu. Or maybe CTRL+F the word "that". Or check where commas are misplaced, and draw some conclusions from that. It is not what matters in this context: the whole system has a different technical approach to how the coin should be designed, and people focus on some unimportant writing details, which could be easily simulated.

Quote
This is evidenced by Satoshi’s reference to Wei Dai’s b-money, as well as hashcash, while both of them do not even seem to have been a direct inspiration to Bitcoin.
Aha. The whole model of hashing a message was taken almost 1:1 from hashcash, and the author seems to think that it is not related. Great. Even the initial difficulty of 20 leading zero bits is something that was always present in HashCash, and it was just copy-pasted into the pre-release version. Then Satoshi raised it to 40 leading zero bits (hence the Genesis Block), and then lowered it again to 32 leading zero bits (another HashCash reference).
Code:
SHA-1("1:20:1303030600:adam@cypherspace.org::McMybZIhxKXu57jd:ckvi")=00000b7c65ac70650eb8d4f034e86d7d5cd1852f
SHA-1("0:030626:adam@cypherspace.org:6470e06d773e05a8")=00000000c70db7389f241b8f441fcf068aead3f0
https://bitcointalk.org/index.php?topic=5355610.0
Quote
Code:
///static const unsigned int MINPROOFOFWORK = 40; /// need to decide the right difficulty to start with
static const unsigned int MINPROOFOFWORK = 20;  /// ridiculously easy for testing
Of course, this "main.h" file in the November 2008 version was not inspired by HashCash. Not at all.

Note: There was also some quite low SHA-1 hash (50 leading zero bits or something around that), but I cannot find it.

Quote
If Satoshi had been writing independently from Nick, wouldn’t he have cited his work as per proper scientific etiquette?
Many people think in a similar way without being aware of it. I thought about a UTXO-only model long before I read about it for the first time, and even longer before I started to grasp the whole complexity behind it. Does that mean I should quote Garlo Nicon? https://bitcointalk.org/index.php?topic=5197019.msg52920685#msg52920685

And the funniest thing is the first reply:
Congratulations, you've just invented Ripple. :P

https://twitter.com/tuurdemeester/status/1028262924987637760

This is what you will get as the result.
See? Should the current authors of "assumeutxo" quote Ripple in their work?

Quote
There is also the remarkable lack of public reaction on Nick’s part when Bitcoin started taking off.
Duh, in the same way you could say that there was "a remarkable lack of public reaction when Ripple started taking off". Or HashCash. Or "e-cheque" (if you know what I mean; I expect not that many people know what James A. Donald tried to create). Communication over the network is never instant: https://gwern.net/bitcoin-is-worse-is-better

Enough for now, this post is going to be too long.
14  Bitcoin / Bitcoin Discussion / Re: Google, Yahoo and Byzantine (fault) generals problem on: June 02, 2024, 07:02:37 PM
Quote
How do you know it was the Japanese version?
1. Because it is hard to get that Visual compiler in another language version, including English.
2. There were some bugs related to the ¥ character in paths, which usually happen if you use the Japanese version of Windows: https://en.wikipedia.org/wiki/Yen_and_yuan_sign#Microsoft_Windows
3. You have to use a Japanese system to handle the Japanese version of Visual. If you have an English system, it will display strange squares, or not render the font at all. Also, the language is an integral part of the program, and there is no option to change it easily in some menu, so you need both pieces of software in the same language (the system and the compiler).

Quote
Satoshi's whitepaper shares notable similarities with Nick Szabo's writings though.
Their writing styles are completely different. For example, Satoshi is known for using double spaces, instead of a single space, between sentences, after each dot. Another difference is the construction of each text: Satoshi builds it from the bottom to the top, while Nick simply uses a top-to-bottom approach. This is also another reason to read the whitepaper in the right order: Satoshi first tested hash functions, calculated the probability of chain reorganizations, and so on. The same pattern can be observed in his forum posts.

Quote
Perhaps Szabo penned the whitepaper, while the technical design originated from a different mind.
I am quite sure that everything was done by a single person. If you have ever tried to coordinate any group, then you probably know what I am talking about. Even when I am trying to create something with Garlo Nicon, it is sometimes hard to plan our actions in advance, and when the group is even larger, it becomes harder and harder.

If Satoshi had been a group of just two people, it would have created patterns very different from what you can observe. It is much more common to split a single identity between many accounts than to merge more than one person behind a single account. Been there, done that: the initial idea of the Garlo Nicon group was to publish all posts under a single name, and it quickly failed, so we registered separately.

For example: imagine that there are two people, and they disagree about something. The first person wants 10 minutes per block, and the second one wants 15 minutes per block. One wants an "explosion of special cases", and the other wants "the Script". One thinks about "a simple for loop with additions", and the other wants to "optimize every bit of that, and do right rotation". Even between Bitcoin developers, there is a huge space to disagree about details. How do you imagine handling all of that? Who would be in charge, and decide what is published or not?

And there is another problem: if there were more people, then what happened after Satoshi disappeared? Did all members of the group die simultaneously? Did nobody want to take the lead? And what about private keys? Did they share all of them? What about GPG keys? What about all kinds of accounts? Having a group of even just two people creates a lot of problems in doing all of that correctly, not to mention any bigger group.

Also, when it comes to the amount of work produced in a given time: if there were two people, then they would produce more content, and you could catch it immediately. Which means that if you have a larger group, and you want to pretend you are a single person, then you have to introduce delays in publication. And even then, to reach a high level of agreement, you need some kind of coordination, like putting everything through a single person before publishing it, so as not to introduce the small and subtle differences which could be used to easily tell each author apart.
15  Bitcoin / Bitcoin Technical Support / Re: [May 2024] Fees are low, use this opportunity to Consolidate your small inputs on: June 02, 2024, 12:16:18 PM
Quote
Would this megabyte type of transaction be propagated normally? I think it is non-standard.
It is non-standard only if you do that with legacy Scripts. But there are ways to perform similar things with Segwit or Taproot, because of OP_CODESEPARATOR and some other tricks. In general, OP_CODESEPARATOR is one of those opcodes which can force rehashing things over and over again, and I wonder if it will ever be disabled.

Quote
I'd appreciate if you could spare your two cents on the previous block size debate.
I already did:
Scaling is directly related to compression. If you can use the same resources to achieve more, then that thing is "scalable". So, if the size of the block is 1 MB, and your "scaling" is just "let's increase it to 4 MB", then it is not scaling anymore. It is just pure, linear growth. You increase the numbers four times, so you can now handle 4x more traffic. But it is not scaling. Not at all.

Scaling is about resources. If you can handle 16x more traffic with only 4x bigger blocks, then this is somewhat scalable. But we can go even further: if you can handle 100x more traffic, or even 1000x more traffic with only 4x bigger blocks, then this has even better scalability.
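As a sketch of that distinction (my own framing, not from any specification), "scaling" can be read as traffic handled per unit of additional resource:

```python
# Efficiency gain: how much more traffic you handle per unit of
# additional resource. A value of 1.0 means pure linear growth,
# i.e. no scaling at all.

def efficiency_gain(traffic_multiplier: float, resource_multiplier: float) -> float:
    return traffic_multiplier / resource_multiplier

print(efficiency_gain(4, 4))      # 1.0   -> 4x blocks for 4x traffic: linear growth
print(efficiency_gain(16, 4))     # 4.0   -> somewhat scalable
print(efficiency_gain(1000, 4))   # 250.0 -> much better scalability
```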

Quote
I think you support small block size, because I'd read you in the bitcoin-dev mailing list, and you're all small blockers there, IIRC.
Yes, for example here: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019988.html

As you can see, I wrote about commitments a long time ago, and I still think they are better than legacy OP_RETURNs. But instead of creating a separate output, the commitment can be moved into the R-value of the signature; that is even better, because then it can be used with every address type.
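A minimal sketch of how such an R-value commitment can work, in the style of "sign-to-contract" (hand-rolled secp256k1 math for illustration only; the curve constants are the standard parameters, but the hashing scheme is my own simplified choice, not a vetted standard):

```python
import hashlib

# secp256k1 parameters (standard values)
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    """Point addition; None represents the point at infinity."""
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        s = (3 * a[0] * a[0]) * pow(2 * a[1], -1, P) % P
    else:
        s = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (s * s - a[0] - b[0]) % P
    return (x, (s * (a[0] - x) - a[1]) % P)

def mul(k, pt):
    """Scalar multiplication by double-and-add."""
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def commit_nonce(k, data):
    """Tweak nonce k so that the final R-value commits to `data`."""
    R = mul(k, G)
    t = int.from_bytes(hashlib.sha256(
        R[0].to_bytes(32, 'big') + data).digest(), 'big') % N
    return (k + t) % N, R       # signer signs with k', reveals R as the opening

def verify_commit(R_final, R, data):
    """Check that R_final = R + H(R.x || data)*G."""
    t = int.from_bytes(hashlib.sha256(
        R[0].to_bytes(32, 'big') + data).digest(), 'big') % N
    return R_final == add(R, mul(t, G))

k_tweaked, R = commit_nonce(12345, b"hello commitment")
R_final = mul(k_tweaked, G)
print(verify_commit(R_final, R, b"hello commitment"))
```

Because the tweaked nonce produces an ordinary-looking signature, the commitment is invisible on-chain and works regardless of the address type holding the coins.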

Quote
I highly respect the endless hours you've spent, discussing on that mailing list. I think you can enlighten us.
Well, everyone can post on the mailing list. The main difference is that you have to wait some time for publication: your post is not visible immediately, but is first read by a human and manually accepted. Besides that, it is similar to forums, and the delay doesn't matter that much, because many times I write posts, keep them on disk, and wait for future input. If after some days or weeks they are still good enough to be published, then they are released.

By the way, in the current queue, I have some notes about RIPEMD-160 (and how it is related to SHA-1 and other hash functions), but I have to implement it from scratch to write about it properly. And I guess Garlo Nicon is thinking about DLEQ, for example in the context of secp160k1. But all of that is work in progress.

Quote
And it's worth to mention usage of CPU instruction (such as SSE2) also somewhat allow more scaling.
In general, yes, but it depends on how things are internally wired. A lot of optimizations are based on parallelism, and if you have to do some sequential hashing, then it hurts all full nodes equally. Which means that if some transaction is complex, then it is a bottleneck not only for mining pools, but also for the non-mining nodes used to propagate transactions in the network.

Some good video about internal x86 assembly, and why we are doomed to stick with some problems for years, even if we switch to another architecture: https://www.youtube.com/watch?v=xCBrtopAG80 (by the way, I expect that sooner or later, Script will also be split into smaller parts, like micro-ops, and it will be possible to deal with them more directly than today, so maybe the cost will no longer be measured in "satoshis per byte").
16  Bitcoin / Development & Technical Discussion / Re: (Ordinals) BRC-20 needs to be removed on: June 02, 2024, 07:23:15 AM
Quote
bitcoin is not like Windows where you can just not support the older operating system anymore
Aha. Is that why the terminal in Windows 11 still reports version "10", because of backward compatibility? Or why the old icons, drop-down menus, the whole EXE format, and a lot of other stuff are preserved? Or take WinAPI, where a large part is compatible even with Windows 95, and is still written in C?

Quote
If this were Microsoft and a new version of Windows they would do away with backwards compatibility with Script and just go with Tapscript but we can't do that.
Oh yes, I see how Microsoft hates backward compatibility, and for that reason, every EXE file has the old DOS header with the famous text "This program cannot be run in DOS mode".

Quote
The longer time goes on and the more "upgrades" that occur, the more excess baggage bitcoin is going to have weighting it down in my opinion. And that can't be good. Alot of inefficiency there...
As I said in some older topic, the world is unupgradable. You have to maintain backward compatibility, unless you want to lose a large part of your customers. Every serious project maintains compatibility. Currently, the C programming language is so compatible that it is no longer a language, but rather a protocol: https://www.youtube.com/watch?v=gqKyP2hXFoA (and Bitcoin is going the same way, because a lot of altcoins used the copycat strategy, so Bitcoin is becoming a protocol, rather than just a coin).

Quote
the right way is how Microsoft does it and how any reasonable company would do it
Yeah, and they created Windows Subsystem for Linux to be less compatible with other systems, right? And they Open Sourced the old MS-DOS code for the same reason, right?
17  Bitcoin / Bitcoin Discussion / Re: Google, Yahoo and Byzantine (fault) generals problem on: June 02, 2024, 06:36:31 AM
Quote
Debian running as a live CD would be almost as secure though.
Satoshi used the Japanese version of Windows XP, ran Microsoft Visual C++ 6.0 SP6 as the main compiler, and also used "MinGW GCC (v3.4.5)" to produce Linux versions. You can confirm all of that by reading the code of "BitCoin v0.01 ALPHA".

Quote
Anyway, it could help narrow the possibilities.
Not really. If I read something about RIPEMD-160, does it mean that I am Satoshi? Many people explored similar topics, and if you dig into the past, you can learn why Satoshi did something one way and not another. But you won't discover his true identity, because that kind of information is simply no longer there. You can only follow his way of thinking, to discover why he designed things the way they are.

In a similar way, you could try to find all people talking about "Proof of Work". And guess what: you would find Adam Back, the creator of HashCash, who is not Satoshi. Or you can dig into "Bit gold" and find this post: https://unenumerated.blogspot.com/2005/12/bit-gold.html (it is a fundamentally different approach than Bitcoin, because of the architecture).

Quote
The main problem with all these schemes is that proof of work schemes depend on computer architecture, not just an abstract mathematics based on an abstract "compute cycle."
See? It is completely different from what Satoshi did. But by digging for "proof of work", you can easily encounter that, and think "hey, Nick Szabo is Satoshi!". But it is not true. The whole Script is not a "wrapped x86 architecture", as described by Nick Szabo. Also, Nick paid huge attention to details like that, and could easily have avoided the "Value Overflow Incident", while Satoshi didn't care that much about internal processor assembly (and replaced a simple for loop of additions with a bunch of bitwise operations like right shifts).

Quote
Our job is to be curious but if you are not, or resisit being, it's perfectly fine and okay to be that respectful. Peace.
You can be curious about the design decisions made by Satoshi, because you need that kind of background if you plan to change the code, write your own altcoin, or just make some test network. However, digging into his identity is a completely different matter, and that kind of thing should not be done.

Fortunately, most traces are gone, and as we get better and better at cryptography, more ways to prove "I am Satoshi" will disappear. For example, if secp256k1 is ever broken, then a signature using the public key from the Genesis Block would be worthless, so the identity of Satoshi gets more and more protected after each breakthrough. Currently, most e-mail addresses and domain names have simply expired, or changed hands, so it is harder to trace Satoshi than it was. And I hope it will get even harder in the future, as more legacy things are broken and replaced with new versions.
18  Bitcoin / Development & Technical Discussion / Re: Genesis Wallet Address And Unspendable Bitcoin on: June 02, 2024, 05:55:13 AM
Quote
It is very true that a certain amount of Bitcoin can't be spent, but why is that?
Because Bitcoin works on coins, not on addresses. You have the first coin, which is in this particular output: https://mempool.space/tx/4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b#vout=0

You have transaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b and output index 0. If it were present in the database of all coins, then it would be normally spendable, as all other coins are. However, the Bitcoin client initially started from an empty database. And initially, there was no such thing as "the Genesis Block": the whole chain was created from scratch, starting from an all-zero "previous block hash".

And then, Satoshi created the first block and hardcoded it in the client. He simply added a rule that "the block 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f has to be there as the first block". But he didn't add the transaction 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b to the database! Which means that the block is there, but the coins are not. And if you create a valid transaction which spends the UTXO 4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b:0, it will be rejected by all nodes, because this UTXO is not in the database of all spendable coins.
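The mechanism can be sketched with a toy model (a deliberately simplified picture of the UTXO set, not real node code; the block-1 txid below is just a placeholder):

```python
# Toy sketch of why the genesis coinbase is unspendable: the block is
# hardcoded, but its output was never inserted into the UTXO database,
# so any attempt to spend it is rejected.

GENESIS_TXID = "4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b"

utxo_set = {}  # (txid, vout) -> amount in satoshis

def connect_block(coinbase_txid, amount, skip_coinbase=False):
    # Real nodes skip exactly one transaction this way: the genesis coinbase.
    if not skip_coinbase:
        utxo_set[(coinbase_txid, 0)] = amount

def can_spend(txid, vout):
    return (txid, vout) in utxo_set

connect_block(GENESIS_TXID, 50_0000_0000, skip_coinbase=True)   # block 0
connect_block("placeholder_block_1_coinbase", 50_0000_0000)     # block 1

print(can_spend(GENESIS_TXID, 0))                   # False: never in the set
print(can_spend("placeholder_block_1_coinbase", 0)) # True: a regular coin
```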

Quote
And what should happen to the other amount on same address?
It can be moved anywhere, without any restrictions, as long as you can produce a valid signature. That is simply because those coins were created in later blocks and are just regular Scripts.

Quote
But there haven't been emphasis on if further additions could be made and what will happen to those additions.
They can be moved normally. To test that, you can create a regtest chain, where you can replace Satoshi's address with your own, and just test it. Also, many altcoins prove that; for example, in Litecoin you have this address: https://blockchair.com/litecoin/address/Ler4HNAEfwYhBmGXcFP2Po1NpRUEiK8km2 and as you can see, all of its outputs are normally spendable, except the one from the genesis transaction 97ddfbbae6be97fd6cdf3e7ca13232a3afff2353e29badfab7f73011edd4ced9.

Quote
As at the time of my writing, this address contains about 100btc with the unspendable 50btc inclusive.
No, because it is P2PK, and it contains only 50.01004040 BTC: https://mempool.space/address/04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f

However, the P2PKH version was always fully spendable, and contains 49.99353524 BTC: https://mempool.space/address/1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa

Quote
can the remaining btc be spent assuming Satoshi has the private key?
Yes.

Quote
If Yes, give your reasons. If No, also share your reasons
Yes, and you can test it in your own regtest, or by exploring many altcoins like Litecoin, which copy-pasted the same code related to the Genesis Block, and just used their own public keys instead of Satoshi's key.

Quote
Are the unspendable coins limited to only the first block (Genesis block) ?
Yes, they are limited to the coins from the first transaction only, which is https://mempool.space/tx/4a5e1e4baab89f3a32518a88c31bc87f618f76673e2cc77ab2127b7afdeda33b

Quote
If the first reward (50btc) was intensionally transformed to unspendable coins by Satoshi, does that mean that our current developers or miners can also do thesame ?.
Yes, you can burn your coins by sending them to an OP_RETURN output. Also, when it comes to miners, they can claim fewer coins than allowed, and the difference is then irreversibly burned. For example, this miner burned a single satoshi, plus all the transaction fees of block 124724, in this transaction: https://mempool.space/tx/5d80a29be1609db91658b401f85921a86ab4755969729b65257651bb9fd2c10d
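A minimal sketch of such a provably unspendable output script, encoded by hand (standard opcode values; the payload is arbitrary):

```python
# Build an unspendable "burn" scriptPubKey: OP_RETURN followed by a
# single-byte push of arbitrary data. No signature can ever satisfy it.

OP_RETURN = 0x6a

def burn_script(data: bytes) -> bytes:
    assert len(data) <= 75          # a single-byte push opcode suffices here
    return bytes([OP_RETURN, len(data)]) + data

spk = burn_script(b"burned")
print(spk.hex())  # 6a066275726e6564
```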
19  Bitcoin / Bitcoin Technical Support / Re: [May 2024] Fees are low, use this opportunity to Consolidate your small inputs on: June 01, 2024, 02:13:17 PM
Quote
What happens during those 12 minutes?
Different pools will do different things. The attacker knows in advance whether "yes, this block is correct" or "no, it abuses the sigops limit, or some other quirky rule". And during those 12 minutes, different pools may apply different strategies. One strategy is to mine a block with just the coinbase transaction and nothing else. But on top of which block should it be mined?

An honest miner can decide to mine on top of what is already validated. But then there is a risk that this "12-minute block" is not an attack at all. Maybe all of those transactions were flying around in mempools, and some mining pool just included all of them, without any evil plan in mind? But then, how do you quickly check all the rules, without performing full block validation?

Quote
Once that happens, they can stop verifying the attacker's block
It depends. A block with just a coinbase transaction and nothing else would always be valid, as long as the previous block is valid. Which means that mining pools can mine on top of the attacker's block, and then they will keep validating it.

Some example of sigops limit violation: https://bitcointalk.org/index.php?topic=5447129.msg62014494#msg62014494

And then, imagine that you know in advance that some mining pools use some kind of simplified block validation, and don't check every rule (to get better performance, because of outdated custom software, or for whatever reason). Then you may, for example, broadcast a lot of 1-of-3 multisig transactions into the network on purpose, and wait for other pools to grab them and mine a new block. The block sigops limit is 80,000, but some of their blocks may accidentally contain 80,003. And there are more quirky rules to exploit.
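A rough sketch of why bare multisig is such a convenient tool here (my simplification: I use the 80k figure from above as a flat per-block limit, together with the legacy rule that a bare OP_CHECKMULTISIG counts as 20 sigops regardless of the actual key count):

```python
# Legacy sigop counting: OP_CHECKSIG = 1 sigop, and a bare
# OP_CHECKMULTISIG = 20 sigops, no matter whether it is 1-of-3
# or 20-of-20. Simplified flat limit for illustration.

MAX_BLOCK_SIGOPS = 80_000

def legacy_sigops(num_checksig: int, num_checkmultisig: int) -> int:
    return num_checksig + 20 * num_checkmultisig

print(legacy_sigops(0, 4000))                     # 80000: exactly at the limit
print(legacy_sigops(3, 4000))                     # 80003: 3 stray CHECKSIGs too many
print(legacy_sigops(3, 4000) > MAX_BLOCK_SIGOPS)  # True: such a block is invalid
```

Each 1-of-3 bare multisig output thus "costs" 20 sigops while using only 3 keys, so a pool that fills a block greedily can overshoot the limit without noticing.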

Also, if you broadcast transactions which are valid, but can become invalid in a particular context, then you can always say later: "See? Those transactions are valid, because they are included in other blocks; we did nothing wrong!".
20  Bitcoin / Development & Technical Discussion / Re: (Ordinals) BRC-20 needs to be removed on: May 31, 2024, 03:47:47 PM
Quote
This sounds pretty much like lightning, with the important difference that the tokens are not transferred off-chain.
They are transferred off-chain, if by "off-chain" you mean "outside the main Bitcoin network". But yes, they are on-chain, if you define "on-chain" as "on any chain, for example a sidechain".

In general, if you imagine a network where all LN transactions are shared with every node, then you will get my idea of decentralized sidechains. The rule is "sign your coins to peg them in", and LN channel-closing transactions contain valid signatures, so they can be pegged into such a sidechain. It is even compatible with hiding internal data from third parties, because penalty transactions are shared only in encrypted form, so the network can see basic things, like "this UTXO was already used", without having direct access to that data, and without being able to close the channel if the creators are not willing to do so.

Which means that the whole product of LN can be used as an input for the sidechain. And the output would contain just some batched transaction, broadcast to the main Bitcoin network.

Quote
"Bitcoins" in a sidechain are tokens issued, the Bitcoins in Lightning are actual UTXOs that have not been settled in the Bitcoin blockchain yet.
You don't have to "issue" a different token. You can just reuse what is already there. In the same way, you don't have to invent new signed transactions to peg coins in. You can reuse an existing output, produced for example by LN (but it is not limited to this case). And if you reuse existing signed transactions, then there are only two options: both networks are IOUs, or neither of them is. Because you can use exactly the same bytes in both, point at the same UTXOs, broadcast the same transactions, and so on.