Bitcoin Forum
May 08, 2024, 10:15:11 PM *
  Show Posts
Pages: « 1 2 3 4 5 6 [7] 8 9 »
121  Bitcoin / Development & Technical Discussion / Re: Pay to contributors on: September 04, 2014, 11:33:13 PM
Addresses don't pay funds. They are static identifiers for transaction outpoints, more akin to an invoice number than anything actively involved in transactions. There is a wiki article by iwilcox on this subject.
122  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 04, 2014, 10:15:25 PM
Quote
But I believe we have already discussed the "non-TC" chestnut here and the consensus was that one can abuse P2SH to escape the "no-loops" restriction.

You can't use P2SH to create loops and nobody said anything about loops anyway.
123  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 04, 2014, 06:47:29 PM
Quote
I don't really know exactly what the ring signature protocol is (any links?) so maybe there are disadvantages to doing this?

Nope. Even with the same q, nobody except the signer is able to prove that they were or weren't the signer, and if the signer forgets her q then she can't either.

The "different q's" thing was just an artifact of my initial misunderstanding when I wrote the article.
124  Bitcoin / Development & Technical Discussion / Re: Keeping Transaction Costs Down on: September 04, 2014, 01:51:29 PM
I have a short writeup on raw transactions. It has not been field-tested for understandability yet, so if things are unclear please let me know.

It advises you to do a bit of background reading to make sure you're familiar with how transactions work -- take this advice!

As for your website, how small is "small"? Would your trust model make it appropriate for probabilistic payments or having a deposit/withdraw account system?
125  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 04, 2014, 12:58:38 PM
Quote
This claim about "typed data" and "provability" is false. There are actual proofs of that coming from the people involved in designing/implementing Algol 68. I don't have any references handy, but in broad terms the progression "classic Von Neumann" -> "type-tagged Von Neumann" -> "static-typed Von Neumann/Harvard modification" strictly increases the set of programs that have provable results.

We are not talking about von Neumann architecture. We are talking about a small non-TC stack machine without mutability and a fixed opcode limit. In this case the set of allowable programs absolutely does shrink, and more importantly, the space of accepting inputs for (most) given scripts shrinks. This is easy to see --- consider the program OP_VERIFY. There would be one permissible top stack element in a typed script; in untyped script, every legal stack element except the zero encodings (0x00*, optionally with 0x80 as the final byte) is permissible.

That said, nobody actually said anything about the space of provable programs. What I said is that script would be easier to analyze. This is obviously true because of the tighter restrictions on stack elements, as I already illustrated. As another example, consider the sequence OP_CHECKSIG OP_CHECKSIG, which always returns zero. One reason this is true today is that the output of OP_CHECKSIG always has length one, while the top element of its accepting input always has length greater than one. To analyze script today you need to carry around these sorts of length restrictions; with typing you only need to carry around the fact that CHECKSIG's output is boolean and its input is a bytestring.
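The OP_VERIFY example can be made concrete. Here is a Python sketch of Script's "cast to bool" rule as I understand it (an element is false iff every byte is zero, allowing 0x80 as the final, sign-carrying byte); it is illustrative only, not consensus code:

```python
def cast_to_bool(elem: bytes) -> bool:
    """Sketch of Script's CastToBool: an element is false iff every byte
    is zero, except that the final byte may be 0x80 (negative zero)."""
    for i, b in enumerate(elem):
        if b != 0:
            # the sign bit only neutralizes the element on the last byte
            return not (i == len(elem) - 1 and b == 0x80)
    return False

# OP_VERIFY accepts any truthy element, so infinitely many encodings pass;
# a typed script could instead demand one canonical "true".
assert cast_to_bool(b"\x01")
assert not cast_to_bool(b"")
assert not cast_to_bool(b"\x00\x00")
assert not cast_to_bool(b"\x00\x80")  # negative zero
assert cast_to_bool(b"\x80\x00")      # sign byte not last: truthy
```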
126  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 04, 2014, 02:04:54 AM
Quote
To avoid this, you generate a random blinding key Q and sign with gP+gQ instead, proving knowledge of P; then you forget Q. Later you cannot be coerced because you can honestly claim to have forgotten Q.

tacotime and I were quite confused by this ... after a long IRC conversation I believe the three of us landed on this description of how exactly the blinding is done.
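To illustrate just the algebra of the blinding trick, here is a toy Python sketch. It uses exponentiation in the multiplicative group mod a Mersenne prime as a stand-in for secp256k1 scalar multiplication; this is NOT real elliptic-curve code, only a demonstration of why the blinded key splits into two public parts:

```python
import secrets

# Toy stand-in for secp256k1: g**x here plays the role of x*G.
p = 2**127 - 1   # a Mersenne prime; toy group, not secp256k1
g = 3

sk = secrets.randbelow(p - 1)   # long-term secret (the "P" side)
q = secrets.randbelow(p - 1)    # ephemeral blinding key ("Q")

# Sign under the blinded key g**(sk+q); it equals the product of the two
# public parts, but linking it to g**sk alone requires knowing q.
blinded_pub = pow(g, (sk + q) % (p - 1), p)
assert blinded_pub == (pow(g, sk, p) * pow(g, q, p)) % p

# Afterwards, "forget" q: with it gone, even the signer cannot prove
# (or disprove) that blinded_pub belongs to her long-term key.
del q
```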
127  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 04, 2014, 12:41:58 AM
I think you made a typo tacotime, because two keys are repeated. What you mean is
Code:
{
        "0":"02b631fc5e901982a8d130ea65f2966e99a51375030b3c9c64288f4631943ed194"
        "1":"032e76a7de5584eee15a23e872c08543fcca5445d844a6ce63d37c5d25ce377888",
        "2":"04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f"
}
and I can indeed verify your signed message with this ring :D
128  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 03, 2014, 11:35:35 PM
Do you know if they are aware of our value blinding scheme described in my writeup?
129  Bitcoin / Development & Technical Discussion / Re: Unique Ring Signatures using secp256k1 keys on: September 03, 2014, 11:20:20 PM
This is great stuff tacotime! I bet we have a lot of fun with this. Maybe I will install Go, generate a key, and post it here for us to play ringsigning games.

Are you involved with Monero?

Edit: OK, it looks like we have to generate the keyring manually? To make a keypair you use the -g command. There is an example keyring that comes with the program
Code:
{
        "0":"024627032575180c2773b3eedd3a163dc2f3c6c84f9d0a1fc561a9578a15e6d0e3",
        "1":"02b266b2c32ba5fc8d203c8f3e65e50480dfc10404ed089bad5f9ac5a45ffa4251",
        "2":"031ea759e3401463b82e2132535393076dde89bf2af7fc550f0793126669ffb5cd",
        "3":"03320cd05f3538159693cd253c30ec4972fa06ad10f1812951923a5ea063e9748c",
        "4":"039b9033d0377e3af7fdf4369134f3ec96aa03326fd07f89d60dc3ba70d0a19956",
        "5":"03c81094edb63ba28b1e4d5556d91dc030b725e105be94fb4005bee987f80a38f0",
        "6":"032077679a3f1579acc22308f09b7d5f597cba4ea9f314b8aaf86ab2f052fa0157",
        "7":"039b9033d0377e3af7fdf4369134f3ec96aa03326fd07f89d60dc3ba70d0a19956",
        "8":"02dcdb96d05d6cd36ce7014a69ebce8b48f8d7de46ce3bfa99482af65284697e13",
        "9":"04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f"
}
So I guess we will just copy the format.

My key for this thread is 02b631fc5e901982a8d130ea65f2966e99a51375030b3c9c64288f4631943ed194
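For anyone else copying the format, here is a small Python sanity check for the key encodings the keyring uses. The check_pubkey helper is my own invention, not part of the Go program; it only checks SEC prefixes and lengths (33 bytes compressed with an 02/03 prefix, 65 bytes uncompressed with an 04 prefix), not that the point is on the curve:

```python
import json

def check_pubkey(hexkey: str) -> str:
    """Classify a hex SEC public key by prefix and length only."""
    raw = bytes.fromhex(hexkey)
    if len(raw) == 33 and raw[0] in (0x02, 0x03):
        return "compressed"
    if len(raw) == 65 and raw[0] == 0x04:
        return "uncompressed"
    raise ValueError("not a valid SEC-encoded public key")

keyring = json.loads("""{
    "0": "02b631fc5e901982a8d130ea65f2966e99a51375030b3c9c64288f4631943ed194",
    "1": "04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f"
}""")
assert check_pubkey(keyring["0"]) == "compressed"
assert check_pubkey(keyring["1"]) == "uncompressed"
```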
130  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 03, 2014, 11:17:16 PM
Quote
Well, there can only be one OP_CHECKSIG...

That is false. You could even do threshold signatures with multiple OP_CHECKSIGs if you wanted to be a goof.

Quote
This requires setting a size for each data type.  I think it is basically integer, byte array and boolean (which is an int).

Script is not typed; there is only one type, "raw byte data", that is interpreted in various ways by the various opcodes. (This makes accounting quite easy actually.) And today you are required to match the byte representation of all stack objects exactly, since OP_EQUAL requires it, so arguably a total stack size limit would be an easy thing to describe precisely.
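To illustrate the byte-matching point (a hypothetical helper of my own, not bitcoind's code): equality in script is an exact byte comparison, so values that decode to the same number can still fail OP_EQUAL.

```python
# "Equality" is a raw byte comparison with no numeric interpretation.
def op_equal(a: bytes, b: bytes) -> bool:
    return a == b

assert op_equal(b"\x01", b"\x01")
assert not op_equal(b"\x01", b"\x01\x00")  # both decode to 1, bytes differ
assert not op_equal(b"", b"\x00")          # both falsy, still unequal
```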

My biggest metawish for a script 2.0 would be ease of analysis.... in particular I would like separate types (uint, bool, bytedata) and explicit casts between them. I spent quite a bit of time working on script satisfiability analysis recently, and it seems the best way to describe abstract stack elements is as a bundle of complementary bounds on numeric values, boolean values, length, etc.

Bitcoin-ruby uses a typed script and has each opcode do casts.. the idea makes me smile but for consensus code it is really not appropriate, sadly. They have plans one day to replace it with a more-or-less direct port of bitcoind's script parser.
131  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 03, 2014, 08:53:37 PM
I'd add that the blocksize and scriptsize limits are easy to check -- you just have to look at the data on the wire, not understand it at all. Any limit on script stack objects is going to be nasty: script is an intricate machine with many, many weird corner cases that interact in surprising (at least) ways.

Quote from: gmaxwell
Point here was that realizing you _needed_ a limit and where you needed it was a risk.

This is probably the most important point. I had initially typed up a long reply explaining what needs to be done for OP_PUSH or OP_CAT limits (which are conceptually not so bad, but you have to be exhaustive and expect all implementors to be identically exhaustive). And then I thought, "Why did I do that? To stop stack size explosions, I guess...but did I stop that? No idea." because I would have to analyze the interactions with every single other part of the script system, and neither I nor anybody else has a complete model there. (Though I *think* I have a complete model of how everything affects the size of stack objects, at least right now.) So that's where the risk comes from.
132  Bitcoin / Development & Technical Discussion / Re: Testnet fees? on: September 02, 2014, 10:29:02 PM
Hi onezerobit,

Testnet does support transaction fees just like mainnet. In fact the only difference between testnet and mainnet (besides them being separate chains) is that testnet has a rule allowing minimum-difficulty blocks after twenty minutes, because the network is monetarily worthless and therefore prone to stalls.

I assume by "verify" you mean "confirm". When a transaction is included in a block is at the discretion of the miner -- typical policies on Bitcoin are that all inputs must be sufficiently old (to deter spam) or augmented by some fee. Presumably most testnet miners do not bother changing their policy when they switch to testnet....so I recommend adding a big fee and seeing if that helps.

Andrew
133  Bitcoin / Development & Technical Discussion / Re: Using OP_CAT on: September 01, 2014, 11:44:48 AM
Quote
I'm trying to look into NXT... and I have a hard time.

Don't waste your energy. They claim to have solved various extremely difficult problems without even acknowledging that they are problems, they are closed-source and they pay a lot of shills. There is not and never has been any evidence of technical innovation from them.

Quote
It isn't available in testnet either. It isn't just a question of "enabling it"— you have to prevent it from being a memory exhaustion attack via exponential growth. (This isn't theoretically hard, but it would, you know, require a little bit of work.)

Quote
They are enabled already on Testnet so you can try your experiments there.

I think Peter misunderstood the question he was answering and meant "nonstandard transactions are standard on Testnet". Either that or he was just wrong :)

Quote
As for the exponential growth, I read somewhere something like: replacing two inputs from the stack with one output that is only as long as the two inputs together... is unlikely to exhaust the memory. And I agreed very much.

Use OP_DUP and OP_CAT in succession, and you will double the size of your (single) input.
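A toy stack machine (a sketch of mine, not Script itself) makes the growth obvious: each OP_DUP/OP_CAT pair doubles the top element, so the element's size is exponential in the script's length.

```python
# Minimal stack machine showing why unrestricted OP_CAT allows
# exponential memory growth from a tiny script.
def op_dup(stack):
    stack.append(stack[-1])

def op_cat(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

stack = [b"\x01"]
for _ in range(20):      # 40 opcodes total
    op_dup(stack)
    op_cat(stack)

# The 1-byte input has doubled 20 times: 2**20 bytes (1 MiB).
assert len(stack[-1]) == 2**20
```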
134  Bitcoin / Development & Technical Discussion / Re: Fast synchronization idea on: August 31, 2014, 04:31:52 PM
Avoiding IP-linkage is easy by using Tor directly.

Avoiding address linkage with a "Tor-like protocol", on the other hand, has some serious problems:
- How do you avoid isolation or Sybil attacks? Tor itself uses a centralized repository of all nodes on the network. This doesn't prevent some TLA from running a huge chunk of the nodes, but it does make sure that every user can see a full list of nodes (and build circuits spanning a wide IP mask).

- Timing analysis. This is a big problem for Tor and would be absolutely enormous for this application, where every user downloads all their outputs in one shot then nothing ever again.

- How do you incentivize non-attackers to run nodes? There is a strong moral incentive to run Tor, plus running a node helps to shield your own traffic. If you are running a full node, then you don't need this service because you know all the outputs, and presumably you care enough about the network to be providing information about its state...so why do you want to help others who are avoiding doing this?

- Creating an entire new circuit for every output would be extremely expensive. Tor doesn't do this. It doesn't even create a new circuit for each new connection.

Generally any problem solved with a "Tor-like protocol on top of Bitcoin" would be better solved with Tor itself. In this case, using Tor only solves the first of my objections (Sybil resistance). There is also the meta-argument of this being an extremely complex solution to a specific problem.
135  Other / Meta / Non-truncated RSS feeds on: August 29, 2014, 10:38:01 PM
As described at http://wiki.simplemachines.org/smf/SMF2.0:News_and_newsletters#Settings it is possible to change the RSS feeds so that they do not truncate the text (setting Maximum message length to zero means no limit).

Currently the limit is set to some small number, which makes the RSS feed unusable without a web browser. The wiki page I linked recommends this "because some users have broken RSS readers", which is a non sequitur, but presumably is the reason for this setting.

If we could remove the RSS message length limit, that'd be awesome. I get most of my news over RSS in my email inbox, and it severely interrupts my flow when individual feeds require special attention to be readable. It also prevents me from forwarding or archiving forum messages.

Thanks!

Andrew
136  Bitcoin / Development & Technical Discussion / Re: How 'Anonymous' is Bitcoin? on: August 29, 2014, 10:15:42 PM
Quote
What is the most advanced technology for anonymous transactions feature?
I've heard about zerocash and coinjoin but which one of them has a technological edge?

This is maybe more technical than what you're looking for, but I have a long Bitcoin.SE post about anonymity which talks about both CoinJoin and Zerocash.
137  Bitcoin / Development & Technical Discussion / Re: Fast synchronization idea on: August 29, 2014, 07:36:45 PM
Some thoughts:

What I believe you are proposing is to store the root of a Merkle Tree of UTXOs in each block. This would be useful for a number of applications, and if it did not require a softfork to implement, I suspect it would have happened already. Such a commitment would allow fast proofs of inclusion or exclusion of individual UTXOs at individual block heights. The cost in blockchain space would be roughly a single hash: the root.

Note that actually using the root to check inclusion/exclusion/insertion/deletion of any UTXO requires a Merkle proof, which as you observe has size logarithmic in the total number of UTXOs. To be able to generate such a proof for an arbitrary output would require access to the entire UTXO set, while verifying it requires only the root. This means that without attaching proofs to transactions (which would be a huge increase in their size!), non-full validators could not follow updates to the Merkle root, nor could they generate their own proofs. This puts a big damper on the usefulness of this proposal, because you are forcing any users who don't have the UTXO set (i.e., everyone who would benefit from having a commitment in the first place) to request proofs from some full-node peer. They don't need to fear false proofs, since they can validate these themselves, but there is a privacy cost, not to mention bandwidth.

It would let users quickly synchronize the blockchain by doing an SPV validation, requesting proofs of their outputs in some recent block, and checking the transactions in every block after that; but they wouldn't be able to produce their own proofs, and they would need to request new proofs for each block if they wanted to avoid checking every transaction in the future.
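To make the mechanics concrete, here is a toy Python sketch of the commitment scheme (my own code, assuming Bitcoin-style double-SHA256 and last-node duplication; not a proposal-accurate implementation). It builds a root over a UTXO set, produces a logarithmic-size inclusion proof, and verifies it using only the root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Root of a simple Merkle tree, duplicating the last node on odd
    levels as Bitcoin's block Merkle tree does."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf to root: O(log n) of them."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_proof(leaf, index, proof, root):
    """Verification needs only the root -- the point made above."""
    acc = h(leaf)
    for sib in proof:
        acc = h(sib + acc) if index & 1 else h(acc + sib)
        index //= 2
    return acc == root

utxos = [b"utxo-%d" % i for i in range(13)]
root = merkle_root(utxos)
proof = merkle_proof(utxos, 5)
assert verify_proof(utxos[5], 5, proof, root)
assert len(proof) == 4   # ceil(log2(13)) sibling hashes
```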

Oh, and by the way, at the time of this posting there are 13057707 possibly spendable UTXOs plus 4886 provably unspendable ones. (Note that my spendability checker is really weak right now, so the unspendable count may be an undercount.) We rolled over 13 million some time last night.
138  Bitcoin / Development & Technical Discussion / Re: [ANN] Scalable Bitcoin Mixing on Unequal Inputs on: August 22, 2014, 11:39:48 PM
Very cool stuff! I really appreciate the clear expositions and diagrams.

I'll have to think a bit about DoS potential (e.g. can you join a session, commit to different values to each peer, and have them all abort when they're unable to agree on what's going on?) but the mixing mechanism is really slick. Looking forward to your upcoming CoinSwap-like proposal.

Edit: One interesting observation is that if one party tries to mix N coins, while everyone else combined has only M, where M < N, then it appears that this party will wind up mixing N - M coins with himself. If M ≥ N then nobody mixes with themselves. Is it generally true that this algorithm always leads to the least required self-mixing? Maybe this is a straightforward consequence of condition (4)?

Edit2: Ah, not quite. If there are a lot of zeroes in the priority matrix it is possible that self-matches will be taken before non-self-matches. If you were to enforce that self-matches were always lowest in the priority list, then self-matches would only be taken as a "last resort" and (unless I'm confused) this would always lead to the least possible amount of self-matching.

So I propose adapting Algorithm 2 to always sort the (n, n) pairs last, regardless of priority.
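A toy illustration of the proposed tweak (this is not the paper's Algorithm 2, whose internals I am only guessing at): when greedily taking candidate pairs, sort every self-pair (n, n) after all cross-pairs regardless of its priority, so self-mixing becomes a last resort.

```python
# pairs_with_priority: list of ((i, j), priority), lower priority taken
# first. Self-pairs (i == j) are forced to sort after all cross-pairs.
def order_pairs(pairs_with_priority):
    return sorted(pairs_with_priority,
                  key=lambda p: (p[0][0] == p[0][1], p[1]))

pairs = [((0, 0), 0), ((0, 1), 2), ((1, 1), 1), ((1, 2), 5)]
ordered = [ij for ij, _ in order_pairs(pairs)]
# Cross-pairs first (by priority), then self-pairs (by priority):
assert ordered == [(0, 1), (1, 2), (0, 0), (1, 1)]
```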

Andrew
139  Alternate cryptocurrencies / Altcoin Discussion / Re: [ANNOUNCE] Bitcoin Proof-of-Stake on: May 20, 2014, 04:21:47 PM

Hi SlipperySlope,


I see you've updated your whitepaper today (May 20), and I think also every day before that. I have a few comments, though I've got a lot of reading to do so I can't promise I'll follow up on them in a timely fashion.


1. You spend a bunch of the first part of your paper claiming that Bitcoin's proof-of-work system is wasteful and that hashing is "single-purpose". (This is a common meme on here and Reddit, I understand — in general, I'd advise you when learning about Bitcoin to not pick up ideas from this website unless they originate with somebody who is a known expert on Bitcoin.) I recently had a discussion with fenn on ##hplusroadmap about the "single-purpose" claim as well as the "zero-sum" game. Notice that there is no known way to achieve distributed consensus without proof-of-work, so these claims that the entropy production is unnecessary are extraordinary and require significant evidence.

2014-05-11 12:20:10     fenn    it's a zero sum game
2014-05-11 12:20:32     fenn    except for the value of the network of course
2014-05-11 12:20:55     fenn    network value is independent of number of hashes being performed
2014-05-11 12:21:36     kanzure do you know what the hashes are for?
2014-05-11 12:27:58     fenn    "Any block that is created by a malicious user that does not follow this rule (or any other rules) will be rejected by everyone else."
2014-05-11 12:29:18     fenn    "Each block memorializes what took place immediately before it was created."
2014-05-11 12:29:49     fenn    New blocks can't be submitted to the network without the correct answer - the process of "Mining" is essentially the process of competing to be the next to find the answer that "solves" the current block.
2014-05-11 12:30:24     fenn    each hash is a "guess" at the answer
2014-05-11 12:30:59     andytoshi       fenn: you are totally missing the forest for the trees
2014-05-11 12:31:11     andytoshi       like if you said aerobic respiration was a process for binding carbon to oxygen
2014-05-11 12:31:33     fenn    he said "do you know what the hashes are for" and i answered, what do you want
2014-05-11 12:31:40     andytoshi       and when asked what it's for, you started talking about valence electrons and how respiration gets you the right reconfiguration
2014-05-11 12:32:13     andytoshi       fenn: the hashes give a way to translate computational resources into something cryptographically verifiable
2014-05-11 12:32:21     andytoshi       that's what "proof of work" refers to
2014-05-11 12:32:35     fenn    it has nothing to do with computational resources
2014-05-11 12:32:48     andytoshi       it lets you /define/ the system mathematically so that it is hard to rewrite history
2014-05-11 12:33:08     andytoshi       fenn: the correct answer to kanzure's question was "no"
2014-05-11 12:33:24     fenn    it's just the ability to do this particular cryptographic algorithm, which happens to be implemented on something resembling a computer
2014-05-11 12:35:21     fenn    you can take all the bitcoin asics in the world and the won't be able to add 2+2
2014-05-11 12:35:52     chris_99        heh
2014-05-11 12:36:30     andytoshi       yeah, and you can take all the aerobic biomass in the world and they won't be able to either
2014-05-11 12:36:54     andytoshi       and yet here we are huffing and puffing as we type frantically
2014-05-11 12:37:01     fenn    the difference is you say one is "computational resources" and the other isn't?
2014-05-11 12:38:09     andytoshi       ?? the difference is that respiration is used to provide useful energy to the organism while bitcoin hashing is used to translate a fact of physics to a fact of mathematics
2014-05-11 12:38:22     andytoshi       they are more alike than they are different at the level we are talking
2014-05-11 12:38:42     andytoshi       in both cases they are a mechanism for taking resources from the environment and translating them into a form that the system can use
2014-05-11 12:38:44     fenn    but cells are more general purpose than bitcoin asics
2014-05-11 12:39:06     fenn    even "specialized" cells can do a large number of things
2014-05-11 12:39:16     andytoshi       i'd like a citation that DNA is more expressive than bitcoin script..


You later suggest that checkpoints are an improvement on consensus. This is not true. Checkpoints have nothing to do with consensus. Nada. This has been beaten to death by myself and several others, and is another example of a bitcointroll meme infecting your thought. Above you say that proof-of-stake is the reason that Peercoin is so popular. But Peercoin has been centralized from the start, and has no plan for ever being decentralized. Of course a centralized consensus system is able to be more efficient than a trustless one.

You compare Bitcoin confirmation times to CC transaction approval times. This is nonsense. There is no amount of time after which CC transactions are really irreversible in the sense of Bitcoin, but even the amount of time to eliminate the trivial "call the CC company and dispute the charge" method of reversal is several months. So you should be comparing hours (for Bitcoin) to months (for CC companies). If all you care about is transaction "approval" this is limited only by the speed of the network, just like with CC's, except that the Bitcoin network is more distributed and anyone can verify transactions, so the uptime is better. Again, this has been beaten to death here.

You say that Primecoin is an example of a "useful PoW". But there is no known use for Cunningham chains, and Primecoin has an awful awful "proof" of work which fails pretty-much every point laid out in my ASIC FAQ.

You repeatedly describe the adversarial nature of the Bitcoin network as something Satoshi made up. But adversariality is a fact about the world and not something you can model away when designing a decentralized cryptosystem.


2. I haven't looked into your tamper-resistant logs, but I worry about how efficient these can be regarding both CPU time and bandwidth, since you have every transaction producing a bunch of these, in some cases from cell phones or other transaction-producing devices. It looks like at any given time you have a single mint who gets to decide what order transactions occur in and which ones are valid or invalid. It's not clear what happens if the node approves conflicting transactions. (I guess he gets banned when the next block is produced, but then how do you decide which one was the "correct" one? This does not seem to achieve the "instantly confirmed transaction" scenario I think you are going for.)

Also, don't use Bitcoin addresses for authentication. They can be used to authenticate against "the owner of coins sent to this address" but pretty-much nothing else. Addresses are payment identifiers and confusing this purpose with that of signing keys is only going to cause user error.


3. As we discussed in Austin, this nomadic business makes historic consensus tricky. Suppose the superpeers wind up being in Vancouver for the summer. After some time they move on to London, so I have an opportunity to buy all the old superpeers' hardware from them for cheap prices. I use them to rewrite some history sufficient that they now pass off to a system in Austin which I control. From there on I'm able to do the usual stake-grinding or whatever tricks to maintain possession of the entire superpeer network forever, just sending it to my hubs around the world. If I were andyfastow rather than andytoshi I'd even set up a bunch of shell corps to disguise that I was doing this.

The point is that if you want to prevent rewriting history, you need to trust everyone with an ability to manipulate history, forever, even long after their incentives to help the network have gone. You need to prevent old hardware from winding up in unsecured dumps, from being stolen, from being hacked, etc., etc. I'd wager most TPM chips out there will expose their keys to (at least one of) the Chinese or Americans.


4. When you talk about trusted computing, what is the actual trust model behind this? I assume you need an authenticated channel to the TPM to verify the machines' software, but when you are only talking to a node through a network it is unclear to me how you can trust that you're talking to a real TPM which is really installed in the system that you're communicating with.


Andrew


140  Bitcoin / Development & Technical Discussion / Re: Idea: Reject payment system on: March 25, 2014, 07:22:49 PM
Sebastian, the problem is that those casinos misunderstand the structure of transactions and believe in "from addresses". No amount of patching will correct the moral failing here, which is that they are doing payment processing without understanding the network or protocol they are using to do it. I have a document which discusses "from addresses" and surrounding misunderstandings.

To make transactions which require a recipient's signature would require a soft-fork (to introduce the new transaction type), and would add an extra signature verification to the process of validating transactions. Since transaction validation is done by unpaid nodes, you would need a strong usecase (and this is not one, unfortunately) to justify the added load on them, since Bitcoin's decentralization depends on these nodes operating freely and diversely.

Edit: Here is a similar idea: suppose that instead of signing every output, you sign only your change output as well as a recipient's key. Then the recipient sees the transaction and chooses her own outputs, which she signs, and the transaction is invalid (i.e. not minable) until she does so. This gets the recipient cooperation that you want, but also allows the recipient to choose her own outputs (for merge avoidance) and also choose her own fee (because presumably she cares more than the sender that the transaction actually gets confirmed).

This is still a forking change of course, but it gives you some extra usecases so I figured I'd mention it.
