Bitcoin Forum
  Show Posts
741  Bitcoin / Development & Technical Discussion / Re: Where is the separate discussion devoted to possible Bitcoin weaknesses. on: August 11, 2010, 04:17:11 PM
Knightmb said increasing connections and bandwidth had little effect on khash speed.

Game over. Try again?
742  Bitcoin / Bitcoin Discussion / Re: Not a suggestion on: August 11, 2010, 05:13:24 AM
It's hard to think of how to apply zero-knowledge-proofs in this case.

It's hard for me too! :-) Was interesting to re-read though!

Was hoping it would spawn some insight on a way for nodes to demonstrate that they "always follow" the block-generating rules, in the absence of everyone needing to have the set of all transactions to double-check.

It didn't. :-)
743  Bitcoin / Bitcoin Discussion / Re: Not a suggestion on: August 11, 2010, 04:58:50 AM
Satoshi: I know you know the first part of what I'm writing, but I want others to be able to follow and for you to correct any misconceptions I might have.

I was looking at the current Merkle tree implementation trying to figure out when transactions could be removed without losing security.

In transaction graph terms, the transactions are the nodes. The edges of the transaction graph are represented by the in-points, which point to previous transactions using a BlockHash->TransHash->OutPoint kind of structure. It is the existence of an in-point that marks a previous out-point as spent.

So for a transaction to be valid, you must show for every in-point in a transaction that BOTH a previous out-point exists AND no previous in-point exists that references that out-point. So for every out-point, there are zero or one in-points referring to it. zero = unspent. one = spent.

That also means that no transaction can be culled from the block list until both its out-points are spent. Otherwise coins would disappear.
You can, however, delete all double-bound transactions as soon as you are confident the 2nd binding block will stick around. (earliest possibility)

However, as you delete transactions and replace them with their tree hashes, you lose the graph structure present in the block list. In effect, all transactions left undeleted in the block list have unspent value purely because they still exist. They can no longer prove validity by ancestry, since that part of the graph was culled.

Which got me thinking, is there a way to prove validity if you never put the whole transactions into the graph to begin with?

The challenge is, how do you prove that no other spends exist?  It seems a node must know about all transactions to be able to verify that.  If it only knows the hash of the in/outpoints, it can't check the signatures to see if an outpoint has been spent before.  Do you have any ideas on this?


The key is to hash the transaction information as part of the out-point hash. So instead of creating a single transaction hash, you represent the transaction as two out-point hashes. (I originally considered an in-point/transaction/out-point structure using hashes, but that proved unnecessary.)

Only transaction validators need to know the bitcoin address associated with a recorded out-point hash. That comes from the submitted antecedent transaction for an in-point of the current transaction. The antecedent transaction and out-point are hashed and presumed BOTH valid and unspent if that hash appears one-and-only-one time in the block list.

The current transaction must be signed by the key for the address in the antecedent transaction, of course. If this proves valid, two new out-point hashes are generated and inserted in the current block. The in-point hashes are marked spent by including them in the current block as well. (If a hash exists twice it is spent.) If you want to represent the transaction as a unit (and the currently visible transaction graph), the in-point hashes and out-point hashes could be grouped. However, this is not strictly necessary to prove validity.
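To make that concrete, here's a rough Python sketch of the rule (purely illustrative; the SHA-256 serialization and the function names are my own assumptions, not any actual implementation):

import hashlib

def outpoint_hash(tx_bytes, outpoint_index):
    # Hash the full transaction information together with the out-point index,
    # so the transaction is represented by one hash per out-point.
    return hashlib.sha256(tx_bytes + outpoint_index.to_bytes(4, "big")).hexdigest()

def valid_and_unspent(op_hash, all_block_list_hashes):
    # Presumed BOTH valid and unspent if the hash appears one-and-only-one
    # time in the (flattened) block list; a second appearance means spent.
    return all_block_list_hashes.count(op_hash) == 1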

We're trying to prove the absence of something, which seems to require knowing about all and checking that the something isn't included.

In this case we are trying to prove the presence of ONE matching hash and the absence of TWO matching hashes. It does require knowing all of them to prove.

I think the prohibitions against double spending are as strong as in the current version.


==== CAUTION! ====

However, you have to consider the case where a node causes mischief by deliberately adding random "canceling hashes". In this case, the node wouldn't be able to gain access to the coins, as he has no signed transaction hashing to a valid unspent out-point hash. However, the current owner wouldn't be able to spend the coins either. The in-point would be presumed already spent.

That means the validation conditions are EXACTLY THE SAME as with the current implementation. All validating nodes must examine and validate all transactions represented in a block before accepting it and building on it.

If there exist any hashes in the proposed block that are not represented by valid transactions, the block must be rejected.
That is exactly the same as in the current system: if any transaction doesn't validate, the block must be rejected.

I had hoped the condition to pass all transactions to all validators could be weakened but I can't see how (yet) without relying on trusted delegation.

----------

An interesting feature is that this simplifies the validation process. All that needs to be done is to parse the block list (of hashes) once. As each hash is parsed you simply look it up in a hash-set. If it doesn't exist you add it. If it does exist you delete it. When you are done parsing the block list, you will have the minimal set of valid and unspent out-points. You might even be able to keep the whole set in memory. (at least for a while!)
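A minimal sketch of that single pass, in Python (illustrative only):

def unspent_outpoints(block_list):
    # block_list is a sequence of blocks, each a sequence of hashes.
    seen = set()
    for block in block_list:
        for h in block:
            if h in seen:
                seen.remove(h)   # second appearance: spent, drop it
            else:
                seen.add(h)      # first appearance: tentatively unspent
    return seen                  # minimal set of valid, unspent out-points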


744  Bitcoin / Bitcoin Discussion / Re: Not a suggestion on: August 10, 2010, 05:29:44 PM
Although I was recently reading about Zero-knowledge proofs

Interesting idea to revisit! Thanks. Hadn't thought of them in a while.
745  Bitcoin / Bitcoin Discussion / Re: Not a suggestion on: August 10, 2010, 02:22:09 PM
By the way, this is the way most digital notary services work. You send them a hash of a signed document and they log it permanently. Then they create a hash chain like bitcoin does. They periodically publish the current hash chain value in a newspaper or other offline redundant record.

You don't have to send your private documents/transactions to the notary for them to be time stamped and recorded. The notary is just certifying that something that matched this hash existed at this point in time.
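For illustration only, the chaining step might look something like this in Python (not any particular notary's actual scheme):

import hashlib

def extend_chain(previous_chain_value, document_hash):
    # The notary only ever sees hashes; each new entry is linked to
    # everything recorded before it.
    return hashlib.sha256(previous_chain_value + document_hash).digest()

# Periodically the current chain value is published in an offline,
# redundant record (e.g. a newspaper), anchoring the whole history.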
746  Bitcoin / Bitcoin Discussion / Re: Not a suggestion on: August 10, 2010, 02:09:36 PM
You're just advocating security through obscurity.

I did mention that. I wouldn't count on this for monetary security. I would like the system to be equivalent to the current one.

However, privacy obscurity is known to add value. Your neighbors, or the FBI, could be watching everything you do all day long. But they probably aren't. If you happen to become "of interest", sure, they could start watching you now and from this time forward.

But the most asked-for additional legal power seems to be, "let me examine everyone's logs!" (phone calls, cell towers, email connections, facebook connections, credit/debit card transactions, Google history, browser history.) The other systems are "security through authority." Bitcoin doesn't have that.

====

By the way, I'd rather not broadcast every transaction to every node either. But that is for another thread.
747  Bitcoin / Bitcoin Discussion / Not a suggestion on: August 10, 2010, 05:45:45 AM
As some might have noticed, one of the things that bugs me about bitcoin is that the entire history of transactions is completely public. I totally understand the benefits of how this simplifies things and makes it easy for everyone to prove coins are valid.

So this is not a suggestion for a change to bitcoin. Rather it is a question about what could be possible, and what couldn't be possible.

The general question is, could the block list be/have been implemented in a way that didn't store the full transactions in the list? Specifically, *perhaps* it would be possible to store only hashes of the in-points and out-points in the block list. These would be time stamped (notarized) in the block list exactly as is being done now.

The major difference is that it would be the coin receiver's responsibility to store the full transaction. And perhaps he might have to store previous transactions X levels deep to show history.

Then when he wanted to transfer the coins to the next party, he would create a transaction exactly as is being done now, except he would have to submit the antecedents to the transaction for validation as well. For validation, each antecedent of the in-points would be hashed and validated as existing in the block list. The in-points would be hashed and identified in the blocklist as not yet spent. Then the transaction would be validated as is currently done.

If everything validated correctly, the additional in/out-point hashes would be added to the block. This closes the transaction's in-points, and marks the new out-point hashes as unspent.

Once a node completes the block (by winning the hashing contest), he then broadcasts the block of hashes and the related transactions plus antecedents to the other nodes for confirmation and acceptance.

as a rough example:

{block-9
 hash-a, hash-b, hash-c, hash-x
}
{block-12
 hash-a, hash-y, hash-c, hash-d
}
{block-17
 hash-b, hash-d, hash-e, hash-z, hash-f
}

{Transaction
 {in-points: hash-x, hash-y, hash-z}
 {address, signature and other transactions stuff}
 {out-points: hash-payed, hash-change}
}
 
{generating-block
 hash-x, hash-y, hash-z, hash-payed, hash-change
}

So basically, if an i/o-point hash exists twice in the block list, it has been spent. If it exists only once, it has not been spent.

So after block-17:
  a, b, c & d are spent.
  e, f, x, y, z are unspent.

The transaction spends x, y & z and creates hash-payed & hash-change, so the transaction is valid.

After the generating-block:
  a, b, c, d, x, y, & z are spent.
  e, f, payed, change are unspent.
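Here's a tiny Python sketch (purely illustrative) that reproduces the example above using the "appears twice = spent" rule:

blocks = [
    ["hash-a", "hash-b", "hash-c", "hash-x"],            # block-9
    ["hash-a", "hash-y", "hash-c", "hash-d"],            # block-12
    ["hash-b", "hash-d", "hash-e", "hash-z", "hash-f"],  # block-17
]

def unspent(blocks):
    seen = set()
    for block in blocks:
        for h in block:
            seen ^= {h}   # toggle: present once = unspent, present twice = spent
    return seen

print(sorted(unspent(blocks)))
# -> e, f, x, y, z unspent (a, b, c, d spent)

blocks.append(["hash-x", "hash-y", "hash-z", "hash-payed", "hash-change"])  # generating-block
print(sorted(unspent(blocks)))
# -> e, f, payed, change unspent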


====
The Goal:

The goal is to provide all the same security of the existing system, but to avoid creating a public graph of every transaction that is easily correlated. In this case, the hashes don't even have to be associated with one another within a block. The block could simply sort all hashes in ascending order.

In effect, I want to create real gold coins. I can give my coins to you, but everyone in the world doesn't know I did. You can give them to the next guy and prove they are pure gold coins, because you have the pedigree of the coins AND every generation in the pedigree was notarized in the public record.

====
The Question:

Satoshi showed that you can remove transactions from the block list through the Merkle tree structure, without compromising security. I guess my real question is:

"What is the earliest you can remove the transactions?"

You could argue that nodes could remember everything anyway (the web never forgets). But if you structured the protocol so that new nodes would only receive a block list of hashes, they could only remember from this moment forward. That would give a little additional privacy. (Maybe)

====
Any thoughts? Is there an obvious way that people could cheat and get rich?

748  Bitcoin / Bitcoin Discussion / Re: Bitcoin minting is thermodynamically perverse on: August 09, 2010, 06:36:05 PM
I really thought I understood the point of this thread, but I guess I didn't.

I assumed that if you could design something that does the exact same thing in the same commodity quantities and at the same protection level, BUT consumes less energy and produces fewer BTUs of heat, then that would be less wasteful.

You can mine gold in lots of ways; some require fewer resources than others. If two processes produce the same amount of gold, is there no point in optimizing anything else?
749  Bitcoin / Bitcoin Discussion / Re: Bitcoin minting is thermodynamically perverse on: August 07, 2010, 10:37:49 PM
I'm not opposed to for-profit companies. Nor am I opposed to your for-profit venture, even though you have a huge advantage by learning about bitcoin before most people.

My point was, if you have only twenty-one nodes validating transactions you don't need the block list at all. There are easier ways of reaching consensus with a smaller pool of peers.
750  Bitcoin / Bitcoin Discussion / Re: A proposal for a semi-automated Escrow mechanism on: August 07, 2010, 08:14:16 PM
I understand it! It adds value with a little risk.

If you did it twice at the same time, you would have exactly the "tear two $5 bills in half and swap halves" case.

That helps the risk cut both ways.
751  Bitcoin / Bitcoin Discussion / Re: Bitcoin minting is thermodynamically perverse on: August 07, 2010, 08:09:19 PM
I can't imagine a more splintered network. ;-)

You laugh and I did too. But consider that the launch of bitcoin could have gone this way.

--------

Hi, I'm Satoshi; you are the 20 largest ISPs in the world. I'm launching a new network cash service. It will add significant value to your existing customer base and make you some additional revenue.

So here is the deal: in exchange for each of you running a bitcoin transaction verification peer and signing this contract, you will receive 1,000,000 BTC. You agree (in the contract) to distribute >= 500,000 BTC to your existing customer base in the first year. You may sell them, give them away to new subscribers, or use them as a value-add in promotions. Compete among yourselves; dispose of them any way you want. Dispose of or keep the rest for the future.

You also agree to run the transaction verification servers honestly for X years. Here is a free client you can pass out to your customers; you can give it away or charge them a service fee. Compete among yourselves.

Each of you pays me $100 for the privilege; you'll make that back the first day.

Thank you for your $2,000, and good luck to you all!

Poof, now there is a functioning bitcoin network with all 21,000,000 coins in circulation. Satoshi gets to keep the last 1,000,000 BTC, plus he gets the $2,000.

Bitcoin builds on the reputation of the 20 largest ISPs and is adopted immediately. 20 trusted nodes can reach consensus quickly, so there is no reason for the hashing game or new minting rewards.

----

I don't think that is too pie in the sky for some of the clever salesmen I know to pull off.
752  Bitcoin / Bitcoin Discussion / Re: Bitcoin minting is thermodynamically perverse on: August 07, 2010, 06:48:01 PM
I say this half flippantly and half seriously. :-)

If there's something else each person has a finite amount of that we could count for one-person-one-vote, I can't think of it.  IP addresses... much easier to get lots of them than CPUs.

If I were you, I would have considered a mapping that included, "For each entry into the bitcoin transaction recording/gold mining lottery, please send $10 to Satoshi."

That would discourage botnet operators from running thousands of nodes. And if it didn't, you would have tens of thousands of dollars! ;-)

753  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 07, 2010, 06:42:39 PM
PS:---------

I really did think the CPU power contest was a brilliant solution. Then I actually did burn my leg with the bottom of my laptop. At that point I realized there was absolutely no way I was going to leave a node running just to support the network. I've run lots of P2P systems just to support the network, but bitcoin is much harder to justify absent the "free money" enticement. And that enticement was not enough for me.

By assuring that you need 32-CPU machines and fat network pipes to compete in the battle to record transactions, you absolutely guarantee that the bitcoin ecosystem slides into a central core of trusted nodes and peripheral users that rely on them. That in itself is not a bad thing. (Your news server analogy is appropriate here.)

However, many people are attempting to contrast bitcoin as better than a central bank style system.

By guaranteeing that there will be a small central set of high-powered, well-connected, and trusted nodes that clears all transactions, where these nodes are peers among each other but a service "superior" to the general public, and where only members of this elite core are eligible to receive "newly minted money," it really appears that you are just in the process of recreating all the existing fears.

The set of individuals who have lots of BTC (like knightmb) will be the most interested in assuring transactions are recorded correctly. They will make sure they run high-powered, honest nodes. As such, this set of the BTC-rich will receive a large subset of all new coins. That, combined with planned price depreciation, guarantees that the rich get relatively richer. The competition to this core of potentially "idle rich" is not the public at large, but scammers and BTC miners.

This is not an inspiring story to sell the public.
754  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 07, 2010, 06:05:01 PM
Once you get away from a system where each node's influence is proportional to their CPU power, then what else do you use to determine who is (approximately) one person?

That is the key question of course. I think you need to make it harder to demonstrate you are a person/node. I posted some initial thoughts in the previous post.
755  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 07, 2010, 05:15:08 PM
The method I was referring to was in the earliest version of Freenet. It was designed before the trusted connections model was implemented. I'd need to review all the details to remember the specifics, but basically the nodes you would connect to were picked somehow (I forget the method of picking). Then you pick a random number and broadcast it to all those nodes. Each of those nodes picks a random number and broadcasts it back to you, and to the other nodes in the set. All of the numbers are mathematically combined (I don't remember the function) into your arbitrary ID. If you don't answer to that consensus ID, no one will talk to you.

Don't sue me if I've got that wrong. That is from an obviously faulty memory.
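Just to illustrate the shape of the idea (the combining function below is a guess at the kind of thing involved, not Freenet's actual one):

import hashlib, os

def consensus_node_id(my_number, peer_numbers):
    # Everyone contributes a random number; the ID is derived from all of
    # them, so no single node gets to pick its own location.
    material = b"".join(sorted([my_number] + peer_numbers))
    return hashlib.sha256(material).hexdigest()

my_number = os.urandom(16)   # broadcast to the chosen peer set
# peer_numbers would be the random numbers each of those peers broadcasts back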

The Sybil attack (named after the movie, I think) was a network segmentation attack. You create multiple personalities and try to surround a given node to deduce what it is storing. Obviously, that turned out to be possible for this method of ID generation. Hence the new friend-to-friend topology. The same limitation may be a weakness for bitcoin as well. But failures often provide the best guidance on new ways to proceed.

Forgetting the Freenet tangent for a moment...

The "Proof-of-work" technique may actually be applicable to address the ID generation problem you deftly pointed out. Instead of making it harder to generate the record of transactions, you could make it harder to generate new nodes and attach them to the network.

Suppose you (making this up as I go) required that nodes create a private key, then calculate a proof-of-work checksum on that key, and then the node ID became RIPEMD-160(SHA-256(POWChecksum(public key) + SHA-256(public key))). Or some such.

The intention being to slow node ID generation down enough for the attack to be untenable, but not so much that it makes it onerous for new nodes to join.

That combined with some last minute "salt" on the transactions, might get us most of the way there.
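A rough Python sketch of the kind of thing I mean (the difficulty level, serialization, and checksum construction are made-up placeholders):

import hashlib

def pow_checksum(public_key, difficulty_bits=20):
    # Grind a nonce until SHA-256(public_key + nonce) has enough leading
    # zero bits. This is the step that slows node-ID generation down.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(public_key + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return digest
        nonce += 1

def node_id(public_key):
    # RIPEMD-160(SHA-256(POWChecksum(public key) + SHA-256(public key)))
    # Note: ripemd160 availability depends on the local OpenSSL build.
    inner = hashlib.sha256(pow_checksum(public_key) + hashlib.sha256(public_key).digest()).digest()
    return hashlib.new("ripemd160", inner).hexdigest()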

-------

By the way, I see what I overlooked in my previous conceptualization.

When a transaction is sent to the DHT, I was presuming it would actually be stored on many (hopefully independent) nodes: 5 times for each in-point and 5 times for each out-point, plus once by the payer and once by the payee. I guess that would be 17X redundant given 1 in and 2 outs.

Each of these places "could" resubmit the transaction back to the network if it went missing.

However, I failed to consider three things: 1) validators would only be using 5 of the 17 places for validation; 2) they have no way to query the other places of redundancy should the initial 5 locations be compromised; and 3) the additional redundant places have no way of identifying the compromised nodes if they return targeted, malicious lies.

Oops. But nice catch!
756  Bitcoin / Bitcoin Discussion / Re: Your Bitcoin "Elevator pitch"? on: August 07, 2010, 04:24:23 PM
Bitcoins, the only money fit for Libertarians!
757  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 07, 2010, 02:39:04 AM
Now I'm confused again.  I thought your scheme didn't have blocks, just transactions.  What do you mean, whoever solves "the block" first?

My scheme doesn't have blocks. I was referring here to how the existing system operates.

Huh?  Lets say the network has 10,000 nodes in it.  I query the network to find the network node closest to a transaction that I want to double-spend.

So I generate a private key.  It has about a 1 in 10,000 chance of being closer than the current closest node.  So I keep generating private keys until I have 5 that are closer.  It's too late for me to figure out the odds, but lets say I generate 100,000 private keys, I'm pretty darn likely to find 5.  My wimpy laptop can generate at LEAST 100 ECC keys/second, so in under 20 minutes it could generate 100,000.

I create 5 nodes with those keys (telling the rest of the network "honest, folks, I chose those keys RANDOMLY...") and I've won.

I'm trying to generate node ids that are "closer" to that transaction's hash than any other node currently on the network.  That's much easier.
Yes, it's much easier!

You've made a quite plausible argument for this particular case. Kudos!

I'm not going to do the math either, because that is really not the point. I'm not proposing "The solution". I'm suggesting that the amount of compute resources doesn't need to scale so badly to satisfy bitcoin's requirements. I'm only trying to show that there are other reasonable designs that can meet bitcoin's requirements with significantly lower CPU usage and, in the case of this thread, less latency.

I think with the brain-trust that is this forum, any limitations in a distributed solution are easily discovered and rectified, just as is being done with the current implementation. Perhaps I chose a poor way to create a node ID. Freenet proposes an entirely different way of shared generation of node IDs. It doesn't suffer from the issues you point out. Perhaps it has other issues in this situation. But I'm confident there exists a distributed implementation that would work.

Is the main thrust and incredulity in your argument because you think there CANNOT be a better solution than burning 100,000 CPUs at 100% 24/7 and sending 100,000+ redundant messages per transaction?

(100,000 was Satoshi's number of expected core nodes for a system that supported millions of users)

 
758  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 07, 2010, 12:19:16 AM
If you look for reasons to dismiss the idea out of hand, you will find them. However if you use the example to increase your understanding of why some P2P systems succeed yet many more fail, it will give you some insight.

In that spirit, let me answer your questions directly.

What happens when they disagree about which transaction happened first?  Majority rule?  Who decides what the majority is, and can it change if 4 of the five nodes leave the network and are replaced by another 5 nodes?

If ANY other transaction using that out-point is found, there is a double spend, same rules as the current bitcoin system. The only way there can be disagreement is if conflicting transactions got broadcast simultaneously, with one arriving at close node A first and the conflicting one arriving at close node E first. By the end of the two 5-node broadcasts, both parties would discover the double spend.

So which one is valid? Who cares. Flip a coin. That is exactly what bitcoin does in this situation. If my node is working on a block with one transaction, and your node is working on a block with a conflicting transaction, whoever solves the block first wins. Distributed coin flipping algorithms are trivial. All of this can be done almost immediately, much faster than in 10-minute windows. So no, the majority doesn't change if 4 nodes leave, because consensus was reached and the nodes were made consistent.
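For example, a bog-standard commit-reveal coin flip between two disagreeing nodes would do (just an illustration, not something bitcoin does today):

import hashlib, os

def commitment(secret):
    return hashlib.sha256(secret).hexdigest()

# Each node picks a secret and first exchanges only the commitment,
# then reveals the secret. Neither can change its mind after seeing
# the other's commitment.
secret_a, secret_b = os.urandom(16), os.urandom(16)
commit_a, commit_b = commitment(secret_a), commitment(secret_b)

# After both reveals, each side checks the other's commitment and
# derives the same winner from the combined randomness.
assert commitment(secret_a) == commit_a and commitment(secret_b) == commit_b
winner = (secret_a[0] ^ secret_b[0]) & 1   # 0 -> keep A's transaction, 1 -> keep B's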

By the way, standard DHTs already address preserving data when nodes leave, and spreading the data when nodes join.

And if I know that I'm going to create a large transaction, can I do some work precomputing node IDs such that the transaction (which I haven't yet sent out) will hash to nodes that I control?   If I control all the nodes storing the transaction, then I can just answer "yes, absolutely, that transaction is valid and hasn't been double-spent..."

No, this would be ruled out by a requirement constraint. Preventing it is possible for the same reason that it is impossible to generate two public keys that map to the same bitcoin address. See my previous faux pas.

Nodes would generate node addresses based upon private keys, exactly as is being done for bitcoin addresses. This makes node spoofing implausible. All of the inputs to the out-point hash are fixed except the payee, which is pre-specified. The only flexibility I can think of would be in the payment amount. If you want to iterate through all possible amounts and try to create a simultaneous 5-way hash collision, knock yourself out.

The brilliant insight behind bitcoin is the distributed timestamping mechanism; everybody agrees on an order of transactions.  I don't see how your scheme solves that problem.

It actually solves the problem in exactly the same way, just with much less CPU power.

The brilliant insight behind bitcoin's distributed time stamping mechanism is you don't need absolute timestamps at all! You only need relative order. And for conflicts in a short window, you don't have to care at all. You can simply arbitrarily choose one.

My solution does exactly the same thing. It maintains relative order among transactions. It arbitrarily reaches consensus on conflicts. Neither method has a requirement to accurately order unrelated transactions by time. Again, that was a brilliant insight.


759  Bitcoin / Bitcoin Discussion / Re: latency and locality on: August 06, 2010, 11:08:28 PM
What I'm suggesting doesn't exist yet. There was related talk about similar issues in the thread on the thermodynamic perversity of generating blocks. If I had just one central node, the system could generate a transaction block in a fraction of a second. If you wanted, it could do this only once every ten minutes. But it wouldn't need any more than a fraction of a second of CPU time on a single processor.

The reason is purely related to consensus.

So if you had two nodes, you could have them both redundantly capture all transactions, and then in a fraction of a second each could generate a block. They could then exchange block hashes, and if they matched, they would have consensus. If they didn't match, you could reach consensus in one of two ways: 1) compare the transactions of each block one by one and create a block consisting of the union of all transactions, or 2) just pick one of the blocks and go with it. The second is what bitcoin actually does now.

It just picks using the world's most expensive coin flipper. You could accomplish the same task with a much cheaper coin flipper and nothing in bitcoin would change one bit.
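In code, the two-node reconciliation above is trivial (sketch only; a real block hash would cover more than the sorted transaction hashes):

import hashlib

def block_hash(tx_hashes):
    return hashlib.sha256(b"".join(sorted(tx_hashes))).hexdigest()

def reconcile(txs_a, txs_b):
    if block_hash(txs_a) == block_hash(txs_b):
        return sorted(txs_a)                 # hashes match: consensus already
    return sorted(set(txs_a) | set(txs_b))   # option 1: union of both blocks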

That is what I mean by a "design decision": it doesn't change any of the requirements for the system's features or behavior. It just implements things differently.

----

So when I talk about distributing the transaction graph throughout a distributed hash table it is just a different possible (but currently non-coded) way of implementing bitcoin's key feature. The feature of verifying the transfer of bitcoins from one address to another, in a way that makes it impossible to double spend coins.

Optimally, if you had x,000 transactions per minute and 100 nodes, each node would only have to do 1/100th of the work of a single node handling x,000 transactions per minute. Each transaction is only required to be recorded once.

However, with the current bitcoin implementation, if you have 100 nodes, they all run at 100% CPU for x,000 transactions. If you add another 100 nodes, they all still run at 100% CPU but do exactly the same job. We used to call it "government work" when you could do a job with 5 guys, but instead they used 50 guys, because more people working is better for the economy!

So the new design constraint I'm proposing is that for a given number of nodes (n), it should take O(1/n), or maybe O(log n), work per CPU to handle a given number of transactions. In this case work means CPU time and network communication. My design modifications FAIL if they give up any of the transaction protections of the existing system.

----

It turns out that the block list itself is not required to validate transactions. Only the transactions are required. So to understand what I'm suggesting you have to think about an equivalent bitcoin implementation without a monolithic block list containing every transaction in the history of the system.

Instead, the transactions are redundantly scattered across all the nodes. No node needs to keep a complete list of all transactions, but they must be able to quickly retrieve any transaction on demand, optimally in O(1) time, though O(log n) time would probably suffice. That reduces the storage requirements of any given node to O(1/n).

Transactions are mapped to nodes by transaction out-points. You generate two unique out-point identifiers by hashing the transaction+out-point information. You then store the transaction redundantly on the five "closest nodes" to each out-point ID. That means that for a system with 10,000 nodes, each transaction is stored with 10X redundancy instead of 10,000X redundancy. (The 10X was an arbitrary choice; the redundancy would be based upon the DHT algorithm and characteristics of the node population.)

Now each of those 10 nodes holds that out-point data completely privately, unless another node can show a "need to know". In this case a need to know is demonstrated by submitting a signed transaction that includes a given out-point as an in-point. In that case the storage node stores the new transaction. It also returns any known transactions referencing that out-point. If there is a previous transaction in-point associated with the out-point, the second transaction is a double spend.

So for any new transaction, to verify it, you send it to the five closest nodes to each in-point on the transaction. They record the transaction and immediately tell you if they've seen a double spend. If any have, it's a bogus transaction, which gets broadcast to the other close nodes.
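A rough sketch of the storage-node side (the closeness metric, the choice of 5 nodes, and all the names here are my own placeholders):

import hashlib

def outpoint_id(tx_bytes, out_index):
    return int.from_bytes(hashlib.sha256(tx_bytes + bytes([out_index])).digest(), "big")

def closest_nodes(key, node_ids, k=5):
    # Plain numeric distance here; a real DHT would use its own metric.
    return sorted(node_ids, key=lambda n: abs(n - key))[:k]

class StorageNode:
    def __init__(self):
        self.claims = {}   # out-point id -> first spending transaction seen

    def submit(self, op_id, spending_tx):
        earlier = self.claims.get(op_id)
        if earlier is not None and earlier != spending_tx:
            return "double-spend"   # a different transaction already claimed this out-point
        self.claims[op_id] = spending_tx
        return "recorded"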

Now, what I'm suggesting also increases anonymity, because you no longer have to broadcast every transaction you make to the world. Also, random individuals can't go poking around in your business.

There are many ways to map bitcoin transactions to DHTs. I just chose an example that would be easy to explain. There may be other possibilities that offer improvements. But this one is sufficient to get the point across.

There is also a fun technology called a "dining cryptographers network" that could further improve some of the anonymity aspects of bitcoin.


760  Bitcoin / Bitcoin Discussion / Re: Your Bitcoin "Elevator pitch"? on: August 06, 2010, 08:54:00 PM
Nice pitch Liberty! Shows benefits and has a call to action! I'd probably follow up on that.
