Bitcoin Forum
  Show Posts
Pages: [30] of 290
581  Bitcoin / Development & Technical Discussion / Re: Depending on orphan blocks - abuse of the protocol ? on: August 30, 2020, 09:55:15 PM
Stale blocks (the correct term for what you want) are not generally accessible.
582  Bitcoin / Bitcoin Discussion / Re: When Bitcoin Maximalists are Promoting/Shilling an Altcoin on: August 29, 2020, 08:12:58 PM
Here is an email I wrote about this a few days ago:

Quote
I am really disappointed-- on the basis of the content excerpted in this thread: https://twitter.com/francispouliot_/status/1298423415594840066

Long before cryptocurrency existed I was regularly asked by friends to help them understand investments, taxes, and finance stuff-- since I'm the sort of person that finds numbers and contract terms fun.  If cryptocurrency had never existed and a friend came to me asking about an investment with INX's terms I would strongly caution them away from it;  I might even say it smells like a scam.

-- because whether it's a scam or not depends critically on unknowable post-issuance management decisions: Do they actually pay out income, or do they shovel all income back into operations and grow the company until they ultimately sell it out from under the token holders? AFAICT nothing about the terms suggests that they're particularly incentivized to ever pay the token holders even if their revenue becomes substantial, much less obligated.  I think at a minimum, to be equitable, there would need to be an option-like component to the terms under which, in the event the company was sold, token holders would receive a share of the company's gain in value minus the dividends already paid out.  Otherwise, I think an argument can be made that the management is legally obligated to the shareholders to rip off the token holders in this manner.

Even with that I'd still probably caution against the investment due to how complex and unusual it is and the potential for gotchas.

The fact that it is issued on some cryptocurrency system just makes it a little less trustworthy (potential for technical snafu), taps into an established market of ready-made victims, and imports a collection of people incentivized to pump a sketchy investment.  None of this helps, but it is not the most fundamental issue.

Some people have pointed out that unlike many ICOs this one doesn't appear to be an outright scam.  I think it would be more accurate to say that it isn't necessarily a scam: maybe it turns out to be one, maybe it doesn't.  Then again, if someone were making a really attractive offering in a market of scammy products you would expect them to put in a real effort to make it extremely clear that their investment isn't a bad one.  They haven't done that, and I think that is a pretty bad sign.

So, sure, while they might meet a "not certainly a scam" bar, that is an extremely low bar and the fact people think this is noteworthy is really just an indictment of the ecosystem.

That well respected Bitcoiners are sticking their names on it-- for what sounds like more or less an immediate $250k payment-- is really disappointing to me.  If INX merely goes as poorly as explicitly permitted by the contracts their reputations should be justifiably trashed-- because it'll be impossible to tell if the holders weren't paid because the company tunnelled out real profits as payroll and other expenses vs just failed in the marketplace.  Considering the failure rate of businesses, even if all intentions are good, the odds of it going poorly can't be extremely low.

So to me that says that they assign a fairly low value to their reputations, or that they believe that it's likely that even if it fails and takes everyone's money they won't take a substantial reputational hit from it, or that they're being foolish and don't actually see what an inequitable setup the whole thing is.  I think the first two are likely and the last isn't likely. All of them are bad.

I don't know if other people have similar thoughts but this is my impression.
583  Bitcoin / Bitcoin Technical Support / Re: Bitcoin Core software stuck, need help on: August 28, 2020, 07:17:24 PM
My best guess (and it's just a guess) is that you're behind some kind of (national? local?) firewall or anti-virus that is censoring or otherwise corrupting the transmission of a particular block.  Unfortunately, Bitcoin P2P is still not encrypted, which makes it vulnerable to being screwed up by random censorware in the network.

Can you post what your current best block hash is?  It'll be the bestblockhash field in the getblockchaininfo output.

We could try getting you past that block by giving you a hex copy of it over HTTPS; then you could submit it via the RPC.
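For reference, the recovery steps above could be scripted against Bitcoin Core's JSON-RPC interface roughly like this. This is only a sketch: the RPC credentials and the out-of-band block hex are placeholders you'd fill in yourself.

```python
import base64
import json
from urllib import request

RPC_URL = "http://127.0.0.1:8332/"             # default mainnet RPC port
RPC_USER, RPC_PASS = "rpcuser", "rpcpassword"  # placeholder credentials

def rpc_payload(method, params=()):
    """Build a JSON-RPC 1.0 request body in the shape Bitcoin Core expects."""
    return json.dumps({"jsonrpc": "1.0", "id": "stuck-node",
                       "method": method, "params": list(params)})

def call(method, *params):
    """Issue one RPC call to the local node (requires a running bitcoind)."""
    req = request.Request(RPC_URL, data=rpc_payload(method, params).encode())
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

# 1) Find where the node is stuck:
#    tip = call("getblockchaininfo")["bestblockhash"]
# 2) Obtain the hex of the next block out-of-band (e.g. over HTTPS from
#    someone you trust), then hand it to the node:
#    call("submitblock", block_hex)
```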

It would also be useful to add logips=1 to the configuration so you can make sure it's not connecting to the same peers over and over again (and we could try fetching the block in question from a peer that failed for you).
584  Bitcoin / Development & Technical Discussion / Re: What's stopping OP_CHECKMULTISIG extra pop bug from being fixed? on: August 27, 2020, 11:21:41 PM
I think it is probably wrong to describe it as a bug.  I think it was intended to indicate which signatures were present to fix the otherwise terrible performance of checkmultisig.

Regardless, there is no real point in fixing it:  Any 'fix' would require that all software using checkmultisig get an incompatible change (as part of a highly disruptive hard fork).  Because the extra value is now always zero (and was pretty much always zero before, possibly actually always), you can compress it out completely over the wire or on disk if you really care-- so the only effect it has is its weight in transactions and the one or so extra CPU cycles going into a hash.
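To make the "extra pop" concrete, here is a toy stack count (not real interpreter code, just the arithmetic): a k-of-n OP_CHECKMULTISIG consumes the n pubkeys, the pubkey count, the k signatures, the sig count, plus one extra unused element -- which standardness rules now require to be the empty OP_0.

```python
def checkmultisig_stack_items(k, n):
    """Stack items OP_CHECKMULTISIG consumes for a k-of-n spend:
    n pubkeys + the count n + k signatures + the count k + 1 dummy."""
    return n + 1 + k + 1 + 1

# A 2-of-3 scriptSig is therefore  OP_0 <sig1> <sig2>  -- the leading
# OP_0 is the dummy element that gets popped and ignored:
assert checkmultisig_stack_items(2, 3) == 8   # one more than strictly needed
```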

Instead a new operation can be introduced that just doesn't have that behaviour-- and that would be compatible, software that wants the new behaviour would just upgrade when it wants it,  no flag day, no disruption.

BIP342 replaces checkmultisig entirely with something that is more computationally efficient and more flexible (and more space/weight efficient too, once you count that the signatures are 9 bytes shorter and the pubkeys are 1 byte shorter).
585  Bitcoin / Development & Technical Discussion / Re: Electrumx not updating with mempool transactions on: August 22, 2020, 06:18:10 PM
I personally wouldn't use or run electrumx: its author is a big outspoken advocate of a scammer.  It's not much of a leap to worry that in the future the software will be changed to exploit users more directly.

If that happened everyone else will be going "well, duh, what did you expect?"
586  Bitcoin / Development & Technical Discussion / Re: Non-interactive schnorr signatures? on: August 22, 2020, 05:39:20 PM
This might be relevant to your interests: https://blockstream.com/2015/08/24/en-treesignatures/

It's written pre-taproot, so the specific scripts/leaves you'd use wouldn't be the same, but it gives counts on enumerations of combinations.  The linked implementation also constructs huge trees without needing a lot of memory, and the same approach could be adapted to a taproot construction.

Also this: https://medium.com/@murchandamus/2-of-3-multisig-inputs-using-pay-to-taproot-d5faf2312ba3
587  Bitcoin / Development & Technical Discussion / Re: Non-interactive schnorr signatures? on: August 22, 2020, 05:00:30 AM
Your question isn't quite clear enough for me.

For N of N no interaction for key creation is needed.  The keys have to be delinearized to prevent rogue key attacks-- but musig just multiplies each key with a value computed from the hash of all the keys.
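As an illustration of that delinearization, here is a toy sketch in a multiplicative group modulo a Mersenne prime, standing in for the real elliptic-curve group (real MuSig has additional details, e.g. tagged hashes, omitted here): each key is weighted by a hash of the whole key set, so an attacker can't choose a rogue key that cancels the others, because changing his key changes every weight.

```python
import hashlib

# Toy group: integers modulo the Mersenne prime 2^61 - 1, generator G.
# Exponents play the role of private keys; pow(G, x, P) of public keys.
P = 2**61 - 1
G = 3

def H(*parts):
    """Hash arbitrary values to an integer challenge (toy key-hash)."""
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return int.from_bytes(h, "big") % P

def aggregate_key(pubkeys):
    """MuSig-style delinearized aggregate: each key X is weighted by
    H(L, X), where L commits to ALL the keys."""
    L = H(*sorted(pubkeys))
    agg = 1
    for X in pubkeys:
        agg = agg * pow(X, H(L, X), P) % P
    return agg

# Three signers; no interaction needed to form the aggregate key:
privs = [123456789, 987654321, 555555555]
pubs = [pow(G, x, P) for x in privs]
L = H(*sorted(pubs))
# Jointly, the signers control the discrete log of the aggregate:
agg_priv = sum(x * H(L, X) for x, X in zip(privs, pubs))
assert pow(G, agg_priv, P) == aggregate_key(pubs)
```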

For N of M interaction and storage are fundamentally required, not just for schnorr but for other efficient threshold signatures too-- efficient being a key word.

But taproot has other ways of doing N of M: You can do a checkmultisig-like checksig-add, or you can make a tree of all the N-of-N subsets and get a script that scales linearly with the participant count. Like for a 2 of 3 with keys A, B, C...  the valid possibilities are A&&B, B&&C, and C&&A.

You can even make the taproot root key one of these N-of-Ns, e.g. the one most likely to get used.  So essentially the size of the signature scales with the log of the probability that a given choice will be made. It's not quite as efficient but it avoids interaction and storage.

In many applications there is some sufficient N of N that is much more likely to be used than others, so in practice the efficiency gap may not be large. For example, if you have 2 of 3 with you, an offline key of yours, and some 2FA service then you normally expect the 2-of-2 involving you and the 2fa will be signing 99.99% of the time.
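The subset enumeration described above is just combinations; a quick sketch (leaf scripts and the weighting of likelier subsets toward shallower tree positions are left out):

```python
from itertools import combinations
from math import comb

def nofn_leaves(keys, n):
    """All n-of-n subsets needed to emulate an n-of-m policy as a
    taproot script tree: one leaf per subset."""
    return list(combinations(sorted(keys), n))

# The 2-of-3 example with keys A, B, C from the post:
leaves = nofn_leaves(["A", "B", "C"], 2)
assert leaves == [("A", "B"), ("A", "C"), ("B", "C")]

# Leaf count grows as C(m, n), so this is only practical for small m:
assert len(nofn_leaves(range(5), 3)) == comb(5, 3)  # 10 leaves for 3-of-5
```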

Unrelated to efficiency/storage-durability there is another big reason that many applications may not want to use efficient threshold signatures:  They're unaccountable.  If an unauthorized payment is made there is no way to prove which keys were involved.

The above alternatives to native thresholds are all inherently accountable.

A couple years back I proposed an alternative construction I called polysig, which is not supported in taproot today but could be added in a later leaf version.  Its size is linear in the number of non-participating signers, it's relatively private (observers only learn the number of missing keys, not the total number of keys), and it's completely accountable. But given that taproot can efficiently do "a specific N of N" or "something else", there wasn't a lot of interest in going forward with completing the polysig work (e.g. formally proving its security).


588  Bitcoin / Development & Technical Discussion / Re: Vanity addresses are unsafe on: August 21, 2020, 01:23:49 AM
I think that while the OP is technically wrong, there is a kernel of truth in their claim.

Vanity addresses, generated correctly, aren't inherently weaker-- as people were quick to point out.

Attackers can generate lookalike addresses for any address, vanity or not-- but they can only look so alike due to exponential complexity.

It's unsafe to tell whether an address, ANY address, is the right one by comparing only a couple of characters, especially if an attacker can predict which characters you'll check.  Just like statistical methods in password guessing are surprisingly effective, I imagine that a sufficiently smart attacker could make more successful lookalikes than you'd assume at first guess-- maybe because he uses fancy techniques to compute really fast, maybe because he is really good at predicting the visual similarity of addresses.
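To put rough numbers on that exponential cost (a back-of-envelope model only: assume each targeted base58 character must match exactly and candidate addresses are uniform over the alphabet):

```python
# Expected key-generation attempts to force k chosen base58 characters
# at fixed positions of an address.  Rough model; real costs vary a bit
# with checksums and which characters are targeted.
BASE58_ALPHABET_SIZE = 58

def lookalike_cost(k):
    return BASE58_ALPHABET_SIZE ** k

assert lookalike_cost(4) == 11_316_496  # trivial on a laptop
assert lookalike_cost(8) > 10**14       # serious GPU farm territory
# ...which is why checking "the first and last few characters" is weak:
# an attacker who knows WHICH few you check only pays for those few.
```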

Why do people use vanity addresses?  Mostly because they expect people to recognize the name as being associated with them.  They use them because they want people to be comparing those friendly human-readable characters.

So essentially vanity addresses encourage an unsafe behaviour-- users validating an address by comparing a few easily predicted characters. Making a predictable and weak comparison is unsafe for any kind of address, but for a vanity address enabling a weak comparison is a lot of the point for people.

Is it possible to use vanity addresses safely?  Absolutely-- validate them by comparing parts other than the vanity part.   But do most users compare them safely? Probably not.

If users could be relied on to compare as much of the non-vanity part as they would on a non-vanity address, you could make an argument that vanity addresses increase security by making it more expensive to generate vaguely similar addresses. But I think it's clear that this benefit is dwarfed by people-- even experts-- not being as diligent in comparing them.   While working on BIP173 I contemplated making it expensive to brute-force similar addresses-- e.g. applying an expensive cryptographic permutation to the data before encoding it in an address.  But I felt I just couldn't justify making implementers' lives harder for a speculative benefit that only helps really sloppy users. Any operation expensive enough to really slow attackers would be painfully slow on hardware wallet devices, which are uniformly very underpowered.  ... and I also expected that because that kind of strengthening would make vanity address generation much harder too, I might have had to navigate opposition from users who didn't want vanity addresses to be much less available to them.

Another alternative I considered but didn't go for was requiring the underlying address data to have an all zeros suffix.  This would also make generation expensive-- but would have the additional benefit of allowing shorter addresses-- so at least the benefit wouldn't just be against sloppy users.   But it would really complicate hdwallet/taproot/multisig usage where generating a key is already complicated.

In any case, ... just because something exists doesn't mean it's a good idea. There is evidence in the wild of lookalike vanity addresses being used to rob people, and of lookalike onion domains in Tor, so the security concern isn't just a theory.

Given that, as mentioned, they're also pretty toxic for privacy (both due to encouraging reuse and being identifiable) they're probably better avoided.
589  Bitcoin / Development & Technical Discussion / Re: Taproot proposal on: August 19, 2020, 10:57:05 PM
Pieter has posted about changing BIP340 to use a different R tiebreaker: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-August/018081.html

This is some signature-algorithm behavioural minutia. Basically, BIP340 did an unconventional thing because we believed it was enough faster to be worth a small increase in implementation complexity, but it turns out that our belief was based on both a broken benchmark and a supporting (wrong) assumption: it's not actually faster and might, in fact, be slightly slower in the long run.  Changing to the more conventional thing would simplify implementations and make them somewhat faster.

He tells me that he's received a bunch of positive commentary on it, so I expect the change will be made soon!


590  Bitcoin / Development & Technical Discussion / Re: Bitcoin Core: can it show OP_RETURN data on the GUI ? on: August 19, 2020, 04:38:40 AM
No, and I'd avoid any wallet that displayed it.  That data comes from a third party and can be malicious.  If not malicious, it can be and often is spam.

E.g. if FooWallet had a behaviour to display it what would happen if someone started spamming FooWallet users with "Emergency FooWallet Upgrade required: Load www.emergencyfoowalletupgrade.com for more info!"?

The best defence against malicious messages and spam is just to avoid displaying human-readable information sourced from untrusted parties.
591  Bitcoin / Development & Technical Discussion / Re: Private key hack new method on: August 17, 2020, 02:17:34 PM
You also can't just come up with any domain parameters you like,
Unless you're Microsoft. Tongue


This thread is silly. If you can change the basepoint the problem is trivial.   Set the basepoint to any number times the pubkey.  The private key is the modular inverse of that number.
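A toy demonstration of why (in a small multiplicative group standing in for the curve, with the group generator playing the role of the basepoint): if the attacker may choose the basepoint B = pub^n, then the "private key" for pub relative to B is just n^{-1} modulo the group order.

```python
# Safe-prime toy group: p = 2q + 1, so the squares form a subgroup of
# prime order q.  g generates that subgroup (stand-in for the basepoint).
p, q = 23, 11
g = 4                      # 4 = 2^2 is a square; it has order 11

x = 7                      # the victim's "private key"
pub = pow(g, x, p)         # their public key

n = 5                      # attacker's arbitrary choice
B = pow(pub, n, p)         # attacker-chosen "basepoint"
fake_priv = pow(n, -1, q)  # modular inverse of n mod the group order

# Relative to basepoint B, fake_priv is a valid private key for pub,
# found with no discrete-log work at all:
assert pow(B, fake_priv, p) == pub
```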

592  Bitcoin / Development & Technical Discussion / Re: Joining mempool RBF transactions on: August 14, 2020, 08:00:33 PM
The reason I'm such a huge fan of the idea of "utxo-giftcards" is that it's pretty phenomenally space-efficient (as you can give someone money without even making a transaction) and a death-blow to entire classes of bitcoin analysis attacks. (Of course there are drawbacks in transferring a private key, but I feel those are pretty obvious and understood enough that it's easy to use whatever is most suitable.)

It's also a perfect match to the model of a gift or a donation-- you don't care if the donor/gifter claws their money back at the last moment (you'd rather they didn't, but you weren't relying on them not doing it).  It also addresses the problem that small bitcoin gifts usually result in lost coins.

I'm very happy that Bitcoin I gifted in the past was gifted by handing out the private key.  People lost their gifts and I was able to recover them for them.
593  Bitcoin / Development & Technical Discussion / Re: Selfish full node for production? on: August 13, 2020, 12:28:52 AM
What you signal is mostly moot however, if you're not even listening for connections from outside.
Wouldn't my 8 outbound peers receive whatever I broadcast with sendrawtransaction? I will set maxconnections to 20 then, just to be safe.
I mean it doesn't matter if you are node-limited or not.  Nodes that you connect out to will not request historical blocks from you unless they're weird and modified. This is done specifically to avoid burdening users on limited connections behind NAT with hundreds of gigabytes of block requests per month.  You only end up serving historical blocks if you accept incoming connections.
594  Bitcoin / Development & Technical Discussion / Re: Selfish full node for production? on: August 12, 2020, 04:26:00 PM
maxconnections=8 // no more than the 8 outbound connections my node will attempt
That will work, but disabling p2p listening would be better (it likewise disallows inbound connections, and is more secure).

Quote
Setting a low limit with -maxuploadtarget won't work for me because my application will broadcast many transactions (possibly new to the network), so it's very important that these broadcasts are done properly.
Yes it would. maxuploadtarget only restricts fetching historical blocks, it won't restrict anything about you sending transactions.

Quote
@JuleAdka suggestion seems interesting. I took a look at BIP 159 (https://github.com/bitcoin/bips/blob/master/bip-0159.mediawiki) which introduced NODE_NETWORK_LIMITED. Disabling NODE_NETWORK might be a good way to make sure nobody tries to download historical blocks from my node. Is there a way to disable this service flag? Searched through the options and didn't find a way to do it (bitcoind --help | grep "service").
Yes, enable pruning. You can set the pruning limit so high that nothing actually gets pruned, but you'll still signal yourself as pruned to the network. An option should probably get added to do this more directly, you should open a feature request for a "-nodelimited=1" option.

What you signal is mostly moot however, if you're not even listening for connections from outside.
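Pulling the suggestions in this thread together, a bitcoin.conf sketch (the values are illustrative, not recommendations: prune is in MiB, so anything far above the current chain size signals "pruned" without ever actually pruning; maxuploadtarget is in MiB per day and does not affect relaying your own transactions):

```ini
# Don't accept inbound P2P connections at all:
listen=0
# Enable pruning with an absurdly high target so nothing is actually
# pruned, but the node still signals itself as limited to the network:
prune=10000000
# Belt-and-braces cap on bandwidth spent serving historical blocks:
maxuploadtarget=5000
# Log peer IPs so repeat offenders are visible in debug.log:
logips=1
```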
595  Bitcoin / Development & Technical Discussion / Re: Selfish full node for production? on: August 11, 2020, 11:49:36 PM
You can set a very low daily upload limit with -maxuploadtarget.

You can also reduce your maximum connection count, which will reduce traffic a lot.


You can also run a pruned node, which will cause you to not serve historical blocks at all.  This is the best.  I'm not sure if there is a flag to set node-limited while still actually having all the blocks... there should be. (perhaps setting pruning with an absurdly high pruning limit is sufficient).


There are plenty of nodes on the network serving historical blocks, it's not particularly selfish to not do so. 
596  Bitcoin / Development & Technical Discussion / Re: Segwit Questions on: August 11, 2020, 02:12:39 PM
However, how does scriptSig: OP_TRUE unlock the original Pre-Segwit input UTXO A with scriptPubKey: OP_DUP OP_HASH160 404371705fa9bd789a2fcd52d2c580b65d35549d OP_EQUALVERIFY OP_CHECKSIG on the Non-Segwit node? 
It doesn't. Segwit style signing is only used for segwit style outputs.  Old style outputs are spent using the old means.
597  Alternate cryptocurrencies / Altcoin Discussion / Re: Thoughts on a bitcoin tax to pay for development? on: August 07, 2020, 12:25:48 AM
If you want to fund development yourself, great!

If you ever see someone trying to change bitcoin's consensus rules to literally stuff money into their pockets,  I hope you reject their efforts with all due vigour.  That kind of funding proposal is extremely centralizing.

Scammers gonna scam.  The fact that scamming has been reliably profitable in this ecosystem is no reason for Bitcoin to duplicate it.
 
598  Bitcoin / Development & Technical Discussion / Re: Segwit Questions on: August 06, 2020, 10:40:06 AM
Quote
In fact, a SegWit node will strip the witness data before sending the block to a legacy node.

Interesting, so a SegWit node sends out two versions of the mined block?  The stripped version for legacy nodes and the extended witness-data block for SegWit nodes?

Thanks

If it's connected to an old non-segwit peer it'll send that peer a stripped block. All blocks it receives are the complete blocks, and of course any it sends to modern nodes are complete too.

There aren't that many pre-segwit nodes on the network anymore since segwit has been out for something very close to four years now. E.g. my node at home has 45 peers right now and every one of them is node_witness.

599  Bitcoin / Development & Technical Discussion / Re: Joining mempool RBF transactions on: August 06, 2020, 03:37:35 AM
Except "batch on replacement" is ludicrously hard to do.
That's why I say they should batch in the first place. Smiley

Quote
I actually think it's the hardest (pure) programming problem I've worked on, and I don't feel like I'd be able to do it even given another 6 months. I've never done "logic programming" but I almost feel like something like that would be essential, where you sort of logically describe all the high-level concepts and ask it to solve what you should do, as just trying to handle all the cases imperatively seems impossible without ending up in an exploded spaghetti nightmare of a gazillion states.
[Bit off topic, just ranting here in case you have any insights]
That was all I had to offer. I think the right way to solve that isn't to write code for it, it's to write (or steal) a logic-relation engine.

In particular, handling all the cases where an earlier partial payment confirms, and then making sure that your follow-up payment conflicts with the earlier complete payments (either by being a child of the partial or more directly)... it's just a gnarly mess.

For most people the best advice right now is: batch in the first place.
600  Bitcoin / Development & Technical Discussion / Re: Joining mempool RBF transactions on: August 05, 2020, 10:05:28 PM
Without signature aggregation there wouldn't be much savings unless there was cut-through going on, but there isn't much of that naturally because wallets don't normally spend unconfirmed outputs by third parties.

In some cases senders have failed to batch and could batch on replacement, but the solution there is ... batching in the first place. Smiley