Bitcoin Forum
  Show Posts
Pages: « 1 ... 72 [73] 74 ... 288 »
1441  Bitcoin / Development & Technical Discussion / Re: SegWit is a waste of disk space? on: April 24, 2016, 06:19:32 AM
We need profit.
Pump at all costs altcoins are ----> over there.

Quote
Why do we need a solution for something that is not a problem?  Grin
It's a problem for some things and not others. The fact that whatever you're doing might not be bothered by it isn't a reason that it shouldn't be fixed for others that care about it.
1442  Bitcoin / Development & Technical Discussion / Re: SegWit is a waste of disk space? on: April 24, 2016, 06:14:15 AM
segwit makes individual transactions bigger.
It does not. (Unless you want to count a byte or two of flagging in a particular serialization, which could simply be compressed out if anyone cares.)

You should read the FAQ: https://bitcoincore.org/en/2016/01/26/segwit-benefits/

Key advantages include solving malleability, facilitating safe script upgrades, lowering resource usage for lite clients, and allowing additional capacity without significant additional UTXO-bloat risk (in fact, it largely corrects the issue that creating additional UTXOs is significantly cheaper than cleaning them up). In doing so it significantly reduces some of the risk points of increased capacity.

Quote
saying you don't need the segwit data is the equivalent of saying SPV mining is OK
The network consists of more than just miners, segwit doesn't change the data miners need. Allowing a continuum of options in using the network is important for avoiding some users being 'priced out' by the operating cost of it.

Quote
As for malleability, does anyone want to say why that can't be done properly on its own, without segwit?

Segwit is the only proper solution for malleability. The only other alternative is long lists of canonical encoding rules-- a mess of band-aids which, at best, only solve it for a subset of transaction types-- and even then maybe not (it's very hard to be sure that a set of canonical encoding rules is actually sufficient to guarantee non-malleability). That kind of patchy approach is a huge pile of technical debt that would make it much harder to implement and maintain the consensus code in Bitcoin implementations.
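To make the structural fix concrete: here is a minimal Python sketch (using a toy JSON serialization, not Bitcoin's real consensus serialization) of why moving the witness out of the txid computation removes third-party malleability at the root, rather than band-aiding individual encodings:

```python
import hashlib
import json

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def legacy_txid(tx: dict) -> str:
    """Legacy-style txid: hashes the whole transaction, witness included."""
    return dsha256(json.dumps(tx, sort_keys=True).encode()).hex()

def segwit_txid(tx: dict) -> str:
    """Segwit-style txid: the witness is excluded from the hash, so
    re-encoding a signature cannot change the transaction id."""
    stripped = {k: v for k, v in tx.items() if k != "witness"}
    return dsha256(json.dumps(stripped, sort_keys=True).encode()).hex()

# Two encodings of the "same" logical transaction, differing only in how
# the signature is serialized -- exactly what a third-party malleator tweaks:
tx_a = {"inputs": ["prev:0"], "outputs": ["addr1:50"], "witness": "30440221aa"}
tx_b = {"inputs": ["prev:0"], "outputs": ["addr1:50"], "witness": "304402ffbb"}

assert legacy_txid(tx_a) != legacy_txid(tx_b)   # legacy id is malleable
assert segwit_txid(tx_a) == segwit_txid(tx_b)   # segwit id is not
```

The field names and JSON encoding here are purely illustrative; the point is only which data feeds the hash.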
1443  Bitcoin / Development & Technical Discussion / Re: Will Schnorr signatures be able to give us default "CoinJoined" transactions? on: April 23, 2016, 10:09:49 PM
Schnorr sigs will deliver a more efficient way to deal with this stuff.
[...]
When will we be able to start a "roadmap to fungibility"? Like a transition from the current standard transaction model to the next standard ("CoinJoined") transaction model? Because that's what we should do as soon as possible.
This should be a top priority imo. Let's not forget that Bitcoin is supposed to be p2p cash, so this is a must to reach that definition. So like I said before, the default state of a transaction should be CoinJoin and CT enabled for everyone, unless you want to be transparent on purpose.
The only reason our schnorr sigs will have that property is because Adam Back, Pieter, and myself have been working on it-- this kind of aggregatability isn't something that would just automatically come from schnorr, it requires a special design.

We consider it a priority, but it's only with the advent of segwit that it becomes sufficiently easy to deploy these improvements that I can be pretty confident of getting them in (rather than having them end up as a marketing point in some altcoin). Segwit isn't in the network yet, and there is still a sizable "online" force of people attacking it and the folks working on it (and on fungibility improvements generally)-- which makes it harder to give concrete schedules.

I'd like to say that I expect to get aggregatable schnorr signatures into Bitcoin in the next year; but that depends on a multitude of factors that are hard to predict and that I can't control.
1444  Bitcoin / Development & Technical Discussion / Re: Turing completeness and state for smart contract on: April 23, 2016, 10:01:15 PM
It's state that matters. You can't have true interoperability without it.
That also seems confused to me. Bitcoin and Bitcoin smart contracts have state-- if they didn't, they wouldn't work at all. But rather than having sprawling state that requires random access to the system, hiding the true cost of providing it, Bitcoin transactions flow state information from inputs to outputs.

The result is a model which is more like monads in purely functional languages: A transaction collects some state (txin selection), creates new state (new txouts), and the scriptsig/witness proves that the state transition was permitted.
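A toy sketch of that monad-like flow, with a hash-preimage "lock" standing in for a real scriptPubKey (the names and the dict-based state model are illustrative, not Bitcoin's actual data structures):

```python
from hashlib import sha256

# Toy UTXO set: id -> (value, lock). The "lock" is a hash whose preimage
# must appear in the witness -- a stand-in for a scriptPubKey.
utxos = {
    "coin1": (50, sha256(b"secret-a").hexdigest()),
    "coin2": (25, sha256(b"secret-b").hexdigest()),
}

def apply_tx(utxos, spent_ids, witnesses, new_outputs):
    """One monad-like state transition: collect state (txin selection),
    prove the transition is permitted (witness satisfies each lock),
    create new state (new txouts). Returns the next UTXO set or raises."""
    in_value = 0
    for coin_id, wit in zip(spent_ids, witnesses):
        value, lock = utxos[coin_id]
        if sha256(wit).hexdigest() != lock:        # witness check
            raise ValueError("witness does not satisfy lock")
        in_value += value
    if sum(v for v, _ in new_outputs.values()) > in_value:
        raise ValueError("outputs exceed inputs")  # no inflation
    nxt = {k: v for k, v in utxos.items() if k not in spent_ids}
    nxt.update(new_outputs)
    return nxt

utxos = apply_tx(utxos, ["coin1"], [b"secret-a"],
                 {"coin3": (50, sha256(b"secret-c").hexdigest())})
assert "coin1" not in utxos and "coin3" in utxos
```

The verifier never needs random access to anything but the inputs named by the transaction; all state flows through the transition itself.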

Similarly to the move to thinking in terms of verification instead of computation, this insight often permits transformations that improve scalability. E.g. instead of storing the state explicitly in outputs, in some applications you can store a hash tree over the state; then subsequent actions can simply show access and update proofs. This kind of compaction can't be used in all cases, but where it can it's very efficient. I spoke of some of these advantages on the subject of code above (e.g. MAST), but they apply no less to state.

An example of elided state, a simple one without state updates, is https://blockstream.com/2015/08/24/treesignatures/ which shows the construction of fairly efficient, pretty private, accountable multisig that avoids putting all of the applicable public keys into the blockchain.

The advantages of this kind of construction will become clear to more people as future script enhancements restore functionality in Bitcoin script that was disabled, bringing back the ability to enforce constraints on state carried from inputs to outputs.
1445  Bitcoin / Bitcoin Discussion / Re: MIT ChainAnchor - Bribing Miners to Regulate Bitcoin on: April 22, 2016, 05:13:52 PM

Quote
Here ChainAnchor is deployed as an overlay above the current public and permissionless Blockchain. The goal of the overlay approach is not to create a separate chain, but rather use the current permissionless Blockchain (in Bitcoin) to carry permissioned-transactions relating to Users in ChainAnchor in such a way that non-ChainAnchor nodes are oblivious to the transactions belonging to a permissioned-group. We use the example of the current Bitcoin blockchain as the underlying blockchain due to the fact that today it is the only operational blockchain that has achieved scale.
1446  Bitcoin / Development & Technical Discussion / Re: txs in blocks: why Merkle tree instead of regular hash? on: April 20, 2016, 10:27:29 AM
Because then I can prove to you that a block contained a particular transaction without sending you the whole block.
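A minimal sketch of such a proof in Python-- the double-SHA256 and the duplication of an odd trailing hash mirror Bitcoin's tree; the rest is illustrative:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Bitcoin-style double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_and_proof(leaves, index):
    """Return the tree root plus the sibling hashes needed to prove that
    leaves[index] is included. Odd levels duplicate the last hash, as
    Bitcoin's tree does."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))  # (sibling, am I right child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, is_right in proof:
        acc = h(sibling + acc) if is_right else h(acc + sibling)
    return acc == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d", b"tx-e"]
root, proof = merkle_root_and_proof(txs, 2)
assert verify(b"tx-c", proof, root)      # log-sized proof, not the whole block
assert not verify(b"tx-x", proof, root)
```

With a flat hash over all transactions, the only way to prove inclusion would be to send every transaction; the tree makes the proof logarithmic in the block size.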
1447  Bitcoin / Development & Technical Discussion / Re: Turing completeness and state for smart contract on: April 20, 2016, 04:39:09 AM
On the pedantic points, I echo what tucenaber just said-- and I could not say it better.  (Also, see #bitcoin-wizards past logs for commentary about total languages. I also consider that a major useful point for languages for this kind of system).

People looking for "turing complete" smart contracts inside a public cryptocurrency network are deeply and fundamentally confused about what task is actually being performed by these systems.

It's akin to asking for "turing complete floor wax".   'What does that? I don't even.'

Smart contracts in a public ledger system are a predicate-- Bitcoin's creator understood this. They take input-- about the transaction, and perhaps the chain-- and they accept or reject the update to the system.   The network of thousands of nodes all around the world doesn't give a _darn_ about the particulars of the computation,  they care only that it was accepted.  The transaction is free to provide arbitrary side information to help it make its decision.

Deciding whether an arbitrarily complex condition was met doesn't require a turing complete language or the like-- verifying a supplied solution is in P, even when finding one is NP-hard.

In Bitcoin Script, we do use straight-up 'computation' to answer these questions, because that is the simplest thing to do and, for trivial rule sets, acceptably efficient. But when we think about complex rules-- having thousands and thousands of computers all around the world replicate the exact same computation becomes obviously ludicrous; it just doesn't scale.

Fortunately, we're not limited to the non-scalability-- and non-privacy-- of making the public network repeat computation just to verify it. All we have to do is recognize that computation wasn't what we were doing from the very beginning; verification was!

This immediately gives a number of radical improvements:

"The program is big and I don't want to have to put it in the blockchain in advance." ->  P2SH, hash of the program goes into the public key, the program itself ends up being side information.

"The program is big but we're only going to normally use one Nth of it-- the branches related to everything going right"  -> MAST: the program is decomposed into a tree of ORs and the tree is merkleized. Only the taken OR branches ever need to be made public; most of the program is never published, which saves capacity and improves confidentiality.

"The program is big, and there is a fixed number of parties to the contract. They'll likely cooperate so long as the threat of the program's execution exists."  -> Coinswap transformation; the entire contract stays outside the blockchain so long as the parties cooperate.

"The program is big, and there is a fixed number of parties to the contract, and I don't care if everything just gets put back to the beginning if things fail." -> ZKCP; run _arbitrary_ programs, which _never_ hit the blockchain and are not limited by its expressive power (so long as it supports hash-locked transactions and refunds).

"The program is kinda big, and we don't mind economic incentives for enforcement in the non-cooperative case"  -> challenge/response verification; someone says "I assert this contract accepts," and puts up a bond. If someone disagrees, they show up and put up a bond saying it doesn't. Now the first party has to prove it (e.g. by putting the contract on the chain) or lose their bond to the second party; if they're successful, they get the bond from the second party to cover the cost of revealing the contract.

"The program is too big for the chain, but I don't want to depend on economic incentives and I want my contract to be private." ->  ZKP smart contracts; the PCP theorem proves that a program can be proved probabilistically with no more data than the log of the size of its transcript. SNARKs use strong cryptographic assumptions to get non-interactive proofs for arbitrary programs which are constant size (a few hundred bytes). Slowness of the prover (and, in the case of SNARKs, trusted setup of the public key-- though for fixed sets of participants this can be avoided) limits the usefulness today, but the tech is maturing.
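One of the transformations above, MAST, can be sketched in a few lines (the branch scripts and the fixed four-leaf tree shape are hypothetical, chosen only for illustration):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# A contract with four OR branches; normally only one is ever taken.
branches = [b"IF alice+bob sign", b"IF timeout: refund alice",
            b"IF arbiter+alice sign", b"IF arbiter+bob sign"]

# The published commitment is just the Merkle root over the branches.
leaf = [h(b) for b in branches]
root = h(h(leaf[0] + leaf[1]) + h(leaf[2] + leaf[3]))

# At spend time, reveal only the taken branch plus two sibling hashes.
taken, sibling, aunt = branches[1], leaf[0], h(leaf[2] + leaf[3])
assert h(h(sibling + h(taken)) + aunt) == root  # verifier recomputes the root
# The other three branches never appear on the chain.
```

The chain only ever sees one branch and a logarithmic path of hashes, no matter how large the full contract is.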

All of these radical improvements in scalability, privacy, and flexibility show up when you realize that "turing complete" is the wrong tool-- that what our systems do is verification, not computation. This cognitive error confers no advantage, outside of marketing to people with a fuzzy idea of what smart contracts might be good for in the first place.

More powerful smart contracting in the world of Bitcoin will absolutely be a thing, I don't doubt. But the marketing blather around ethereum isn't power, it's a boat anchor-- a vector for consensus inconsistency, decentralization-destroying resource exhaustion, and incentive mismatches. Fortunately, the cognitive framework I've described here is well understood in the community of Bitcoin experts.
1448  Bitcoin / Development & Technical Discussion / Re: How to calculate public key manually? on: April 18, 2016, 10:30:30 PM
A simplistic construction that does an inversion for every add, even using extgcd (which is far from constant time), is going to be horribly slow-- and probably infeasible to do by hand. Though it's more complex to explain, projective coordinates would likely be needed to have any hope of performing a point-scalar multiply by hand, along with a big precomputed table.
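For illustration, here is the naive affine double-and-add for secp256k1-- exactly the "inversion for every add" construction described above. The curve parameters are the real ones; everything else is a sketch, not production code:

```python
# secp256k1 field prime and base point G
P = 2**256 - 2**32 - 977
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a):
    """The costly step: one modular inversion per affine point addition."""
    return pow(a, P - 2, P)

def add(p, q):
    """Affine point addition; None is the point at infinity."""
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1) * inv(2 * y1) % P      # tangent slope
    else:
        lam = (y2 - y1) * inv((x2 - x1) % P) % P   # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add scalar multiplication: ~256 doublings, each paying
    for an inversion in this affine form."""
    acc = None
    while k:
        if k & 1:
            acc = add(acc, p)
        p = add(p, p)
        k >>= 1
    return acc

def on_curve(pt):
    x, y = pt
    return (y * y - x * x * x - 7) % P == 0

assert on_curve(G) and on_curve(mul(2, G)) and on_curve(mul(12345, G))
assert mul(3, G) == add(mul(2, G), G)
```

Projective coordinates defer the inversion: the whole multiply is done with multiplications only, and a single inversion converts back to affine at the end.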
1449  Bitcoin / Wallet software / Re: Bitcoin core for android? on: April 17, 2016, 11:37:44 PM
There is an android version, ABCore: https://github.com/greenaddress/abcore

The low processor performance of most Android devices combined with the phenomenal growth of the blockchain really reduces its utility, however.

1450  Bitcoin / Development & Technical Discussion / Re: Bitcoin Core Paper Wallet on: April 14, 2016, 06:46:00 AM
Who do you think _invented_ BIP32?    Huh
1451  Bitcoin / Development & Technical Discussion / Re: SegWit and SPV-mining. What if...? on: April 14, 2016, 06:38:35 AM
Tomorrow they would say: "Sorry, man. Your client just doesn't have segwit-data. We also can not provide it to you, but we are sure that it exists somewhere so we will not reorganize the main chain."
Pedantically, that isn't how the protocol works. You cannot transfer a block stripped of its segwit data to full nodes that support segwit. Older systems don't get the witness data, but it doesn't matter whether they get it or not, as they wouldn't enforce any rules with it. I think your question is actually about old systems, in which case the transmission part is a distraction.

Quote
The segwit data is not sent separately.
Why? One could send the block to all 0.12.x, 0.11.x, etc. nodes on the network, wait 30 seconds, and then send the segwit data
Old software indeed doesn't necessarily verify all the rules that are in play: this is unavoidable, since you cannot even know all the rules potentially in play, as miners could be enforcing ones no one but they know about. The same holds for any rule addition, like P2SH, or CLTV, or CSV.

At least for intentional public rule additions, the process limits exposure. In the BIP9 protocol no new rules are enforced until two weeks after 95% of hashpower signals an intent to enforce them. During that time and before, full node software will have been producing loud warnings-- "Warning: Unknown block versions being mined! It's possible unknown rules are in effect", "unknown new rules are about to activate", and "Warning: unknown new rules activated"-- for a month, allowing parties to upgrade or connect their node via another upgraded node (which filters the blocks they receive). The process is also date-gated to give time for upgrades even before signaling can begin. No non-upgraded full node software will create addresses using the new rules (nor would any prudent wallet generate them until they're widely enforced in the network), nor accept unconfirmed transactions that use them, as they're non-standard. One could set their wallet to automatically delay or withhold action when a rule it hasn't been updated for has gone into effect, until upgraded or manually bypassed for that particular rule; but for virtually any application that would be excessive, considering that 95% of hashpower has updated to enforce and that the process results in wide deployment of node software.
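The activation sequence described above can be sketched roughly as a state machine (a simplification of BIP9: the timeout and the date-gating are omitted here):

```python
WINDOW = 2016        # one retarget window (~2 weeks)
THRESHOLD = 1916     # 95% of a window must signal

def bip9_states(signals):
    """signals: per-block booleans, supplied in whole 2016-block windows.
    STARTED -> LOCKED_IN when >=95% of one window signals, then -> ACTIVE
    one full window (~2 weeks) later, at which point rules are enforced."""
    state, states = "STARTED", []
    for w in range(0, len(signals), WINDOW):
        window = signals[w:w + WINDOW]
        if state == "LOCKED_IN":
            state = "ACTIVE"                 # enforcement begins here
        elif state == "STARTED" and sum(window) >= THRESHOLD:
            state = "LOCKED_IN"
        states.append(state)
    return states

quiet = [False] * WINDOW
loud = [True] * 1920 + [False] * 96          # ~95.2% of the window signals
assert bip9_states(quiet + loud + quiet) == ["STARTED", "LOCKED_IN", "ACTIVE"]
```

The LOCKED_IN window is exactly the two-week delay mentioned above, during which non-upgraded nodes are emitting warnings and operators can still upgrade before enforcement.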

(As an aside, segwit would be in 0.12.2/0.12.3; and potentially 0.11.x too though it looks like there isn't demand for it)
1452  Bitcoin / Development & Technical Discussion / Re: <<How Software Gets Bloated: From Telephony to Bitcoin>> on: April 14, 2016, 06:12:33 AM
Is not placing the Merkle root for the block's witness data in a transaction, rather than in the block's header, a quintessential example of a "kludge" or a "hack"?
The block header would emphatically _not_ be the right place to put it. Increasing the header size would drive up costs for applications of the system, including ones which do not need the witness data. No one proposing a hardfork is proposing header changes either, so even if it was-- that's a false comparison. (But I wish you luck in a campaign to consign much of the existing hardware to the scrap heap, if you'd like to try!)

Quote
What about the accounting trick needed to permit more bytes of transactional data per block without a hard-forking change?  This hack doesn't just add technical debt, it adversely affects the economics of the system itself.
Fixing the _incorrect costing_ of limiting based on a particular serialization's size-- one which ignores the fact that signatures are immediately prunable while UTXOs are perpetual state, far more costly to the system-- was an intentional and mandatory part of the design. Prior to segwit, the discussion coming out of Montreal was that some larger sizes could potentially be safe if there were a way to achieve them without worsening the UTXO bloat problem.

In a totally new system that costing would likely have been done slightly differently, e.g. also reducing the cost of the vin txids and indexes relative to the txouts; but it's unclear that this would have been materially better, and whatever it was would likely have been considerably more complex (look at the complex table of costs in the greenfield ethereum system), likely with little benefit.

Again we also find that your technical understanding is lacking. No "discount" is required to get a capacity increase: it could have been constructed as two linear constraints, a 1MB limit on the non-witness data and a 2MB limit on the composite. Multiple constraints are less desirable, because the multiple dimensions can make cost calculation depend on the mixture of demands, but they would be perfectly functional. The discounting itself is not driven by anything related to capacity; it is simply a better approximation of the system's long-term operating costs. By better accounting for the operating costs, the amount of headroom needed for safety is reduced.
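The two formulations can be compared with a little arithmetic. The 4,000,000-weight-unit cap is segwit's actual limit; the two-constraint variant is the hypothetical alternative described above:

```python
def weight(base_bytes, witness_bytes):
    """Segwit's single linear constraint: each non-witness byte costs 4
    weight units, each witness byte costs 1 (equivalently 3*base + total)."""
    return 4 * base_bytes + witness_bytes

def fits_single(base, wit):
    """Segwit rule: one linear constraint, capped at 4M weight units."""
    return weight(base, wit) <= 4_000_000

def fits_two_constraints(base, wit):
    """The alternative shape described above: a 1MB limit on non-witness
    data plus a 2MB limit on the composite."""
    return base <= 1_000_000 and base + wit <= 2_000_000

# A legacy-style block (no witness data) hits ~1MB either way:
assert fits_single(1_000_000, 0) and fits_two_constraints(1_000_000, 0)
assert not fits_single(1_000_001, 0)
# Witness-heavy usage buys more raw bytes under the single constraint,
# because pruned witness bytes are costed below perpetual UTXO-creating ones:
assert fits_single(600_000, 1_500_000)   # 2.1MB total, weight 3.9M
```

Both shapes yield a capacity increase; the single weighted constraint just prices bytes closer to their long-term cost to node operators.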
1453  Bitcoin / Development & Technical Discussion / Re: SegWit and SPV-mining. What if...? on: April 13, 2016, 07:52:18 PM
Is the following scenario valid?

1. Some dishonest segwit mining pool takes the top-1000 segwit UTXOs and mines a block at height N with a transaction that transfers all funds to its p2pkh address
2. This block does not have the segwit data portion, but it can be broadcast to all non-segwit nodes on the network
3. All other pools have a dilemma - wait for the segwit data associated with this block, or start mining block N+1 on top of N
4. What if miners use SPV-mining on top of this block? They will create blocks at heights N+1, N+2, etc. without checking the segwit-validity of block N

No different than the situation today with "spend all the coins in the first 10000 blocks without knowing the private keys, and hope that miners extend the chain without having or checking the block." The segwit data is not sent separately.

In either case the corrupt chain would be simply reorged out after miners hit their limit of mining without data or otherwise some process realizes they are mining a chain the network is rejecting. Non-validating clients would be exposed to the reorganization, ones that are validating would not be.
1454  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: April 13, 2016, 05:57:54 PM
I don't think it was specifically stated (yet?) that it would be at 95%. People only assume this as it was used in the past.
That's how BIP9 works right now. Perhaps experience with the first BIP9 deployments (CSV/etc.) will cause the spec to be changed, but for now it's reasonable to assume that it won't be.

(There is a lot more to the trigger threshold than just "95%"-- BIP9 gets rid of the rolling measurement, so the 95% from BIP9 is a much higher bar than the 95% from the rolling method used in the past; the network has never used a bar this high for a soft-fork deployment before, so we may learn some things during the rollout of the first features with it.)
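The difference between the rolling and fixed-window measurements can be seen in a short sketch (window size and threshold per BIP9; the rolling variant is a simplification of the older method):

```python
WINDOW, NEED = 2016, 1916   # 95% of one retarget window

def rolling_triggers(signals):
    """Old-style rolling measurement: any consecutive WINDOW blocks with
    NEED signaling blocks triggers -- the window slides one block at a time."""
    count = sum(signals[:WINDOW])
    for i in range(len(signals) - WINDOW):
        if count >= NEED:
            return True
        count += signals[i + WINDOW] - signals[i]
    return count >= NEED

def fixed_triggers(signals):
    """BIP9: only windows aligned to retarget boundaries are counted."""
    return any(sum(signals[i:i + WINDOW]) >= NEED
               for i in range(0, len(signals), WINDOW))

# 1916 signaling blocks straddling a retarget boundary: enough for the
# rolling measure, but no single aligned window reaches 95%.
signals = [False] * 1058 + [True] * 1916 + [False] * 1058
assert rolling_triggers(signals)
assert not fixed_triggers(signals)
```

This is why the aligned 95% is a strictly higher bar: a signaling burst that straddles a boundary counts under the rolling rule but not under BIP9.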
1455  Bitcoin / Development & Technical Discussion / Re: <<How Software Gets Bloated: From Telephony to Bitcoin>> on: April 10, 2016, 07:09:50 PM
He glosses over the important part in order to make an extremely tenuous inference;
"Most of my brain feels that this is a brilliant trick, except my deja-vu neurons are screaming with 'this is the exact same repurposing trick as in the phone switch.' It's just across software versions in a distributed system, as opposed to different layers in a single OS."
Bolded for emphasis.

That is not "just" a minor difference that can be hand-waved. That is the raison d'être of the soft fork. It is not because Bitcoin developers are concerned that they might mess something up, or that it would be a lot more work to do a hard fork, which seems to be the implication in this article.

It also ignores, or isn't aware of, the fact that we implemented segwit in the "green-field" form (with no constraint from history) in Elements Alpha-- and we consider the form proposed for Bitcoin superior, and are changing future Elements test chains to use it.

Applying this to Bitcoin conflates misleading source code with protocol design. None of these soft fork things result in things like mislabeled fields in an implementation or other similar technical debt-- if that happens it's purely the fault of the implementation.

1456  Bitcoin / Bitcoin Discussion / Re: /btc is full of hypocrites or shills on: April 06, 2016, 07:33:24 PM
Since when has malleability had _anything_ to do with the (in)security of accepting unconfirmed payments? 0_o
1457  Bitcoin / Bitcoin Discussion / Re: Clearing the FUD around segwit on: April 05, 2016, 09:02:11 AM
You sound like a central planner/protector for all the people. This is experimental, open-source software, isn't it; no one is responsible for people's financial losses, so everyone here knows to protect themselves by only investing risk money. But they hate being centrally controlled; it seems you value money over freedom
If third parties make decisions to take away your funds, then Bitcoin isn't money. I find it amusing that you call avoiding that, "centrally controlled".

hm - you can improve, debug, and grow only with constructive criticism, so I wonder (but could understand as well) that you lose patience here ...
You are a science guy, so you know your theory fails once you cannot convince your critics.
There is little point in arguing with a climate change denier or creationist whose positions are axioms and not based on the science. At some point all there is left to say is that this is what Bitcoin is, it's what it's been since the start-- the poster has already aggressively and insultingly disregarded the advice of virtually everyone with established expertise-- and promotes a vision, things like it being okay to make changes that confiscate people's funds, that in my opinion is practically and ethically incompatible with Bitcoin. I feel no shame in telling such a person that Bitcoin may not be what they seek.

If Bitcoin is to satisfy anyone it cannot satisfy _everyone_; some demands are mutually exclusive.

old clients won't check what's after 0x00 (OP_0).. they automatically treat the transaction as valid.
This is the property of the address, not the signature; if it were a property of the signature you could already simply steal any coin.
1458  Bitcoin / Bitcoin Discussion / Re: Clearing the FUD around segwit on: April 05, 2016, 07:14:48 AM
You really don't need all these wasted efforts of patch making and testing if you just do a hard fork
That would be strictly harder, not easier. The older behavior must be accommodated regardless, or it will confiscate people's coins.

I'm sorry you disagree with the method by which Bitcoin's creator-- as well as almost every technical expert to come after-- has fixed and maintained the system; perhaps you should use something else.
1459  Bitcoin / Bitcoin Discussion / Re: Clearing the FUD around segwit on: April 04, 2016, 09:30:56 PM
Thanks & yes, I got that. But this soft fork happens at 4) or two weeks later.
The question is whether the newly coded miners (SW-enabled) will orphan the non-SW chain down to that BAD block made in 2), because they will all agree that this BAD but old block is the last valid SW one, since it contains the last correct SW entry?
No, that block will not be invalid. The rule is not enforced for any blocks before the specific block where the activation starts, all at once for everyone upgraded (including the 95%+ hashpower that triggered it). The blockchain itself assures that the triggering is coordinated and that almost all the hashpower is running software that can enforce at that coordinated point.
1460  Bitcoin / Bitcoin Discussion / Re: ToominCoin aka "Bitcoin_Classic" #R3KT on: April 04, 2016, 04:21:41 PM
Gmaxwell R3KT - this criticism below seems reasonable and implies you have no clue how in PGP windows works, how about explaining this?
That document is a thoroughly confused rant written by some fraudster.

What the "paper" is pointing out is that although the hash preference list of "8 2 9 10 11" and the other metadata were not conceived of or implemented until a year after the claimed date (as I pointed out), it was possible, via a long series of complex manual commands, to override the preferences and punch in whatever ones you wanted, even the 'future' ones.

You may note that it takes great care to provide no citation to my actual comments; in fact it quotes me, but uses an image for the text-- making it more difficult to even search for. Allow me:

"The suspect keys claim to be October 2008; the commit was July 2009. So no, not without a time machine. It's possible that the settings could have been locally overridden to coincidentally the same defaults as now." https://www.reddit.com/r/Bitcoin/comments/3w027x/dr_craig_steven_wright_alleged_satoshi_by_wired/cxsm1yo?context=2

-- so the whole theory that this "paper" writes for pages and pages as if it were some great concealment on my part is a possibility I explicitly pointed out.

The problem with it is that it requires the user to have executed a long series of complex commands to override the preferences, and to have guessed the exact selection and ordering of preferences that wouldn't be written for a year-- when, if they preferred a particular cipher, they would more likely have taken the existing "2 8 3" and stuck their choice on the front. Not only that, but they would have had to do so on the same day that they created a totally ordinary key and published it; yet this other key-- which looks exactly like one created with post-2009 software and entirely unlike the well-known one-- was provided to no one for years, was not placed on public key servers until now, and otherwise has no evidence of its prior existence. Come on, give me a break.

It's "possible", a fact I pointed out explicitly back then, but this possibility thoroughly fails Occam's razor-- especially on top of the evidence presented by others: Archive.org showed the subtle "hint dropping" added to blog entries was back-dated, added in 2013; SGI reported that the published letter on their letterhead was fake; the lack of cogent technical commentary from that party; etc.

Bringing it back on topic, I'd say that it's surprising that all these Bitcoin Classic folks believe such tripe, but in the context of all the other incompetent nonsense they believe, it doesn't seem so surprising.