Bitcoin Forum
May 24, 2024, 01:55:24 AM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
921  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 07, 2013, 07:08:06 PM
Finally: in my opinion, there is rough consensus that the 1MB block size limit WILL be raised. It is just a question of when and how much / how quickly.

To say there is a rough consensus that the 1MB block limit will be raised at some unspecified time in the future is missing the point. The real issue is, is there a consensus that a large fraction of the transaction volume will in the future happen off-chain? Given the range of opinions between you and Mike, who expect transaction fees to stay low enough for all but microtransactions, Pieter Wuille, who if I am correct is unsure, and Jeff Garzik, and Gregory Maxwell, who are both working on designs for off-chain transaction systems, I just don't see a consensus.

After all, for the user, the limit itself isn't the issue; the issue is how they will be expected to store and spend their Bitcoins, and what kind of security Bitcoin will provide them. I haven't seen anything from you or Mike actually talking concretely about what kind of protection from authorities and others trying to control Bitcoin you expect Bitcoin to be able to provide. For instance, in the future, do you expect that by paying a sufficiently high fee I can get my transaction confirmed eventually, regardless of who I am or what the transaction is for? Do you expect me to be able to vote on which transactions get confirmed and included in blocks, in proportion to the hashing power I possess? Do you expect me to be able to participate in Bitcoin by creating transactions and mining as a full validating node anonymously?


Satoshi said back in 2010 that he intended larger block sizes to be phased in with some simple if (height > flag_day) type logic, theymos has linked to the thread before.

I think he would be really amazed at how much debate this has generated. He never attributed much weight to it; it just didn't seem important to him. And yes, obviously, given the massive forum dramas that have resulted, it'd have been nice if he had made the size limit floating from the start like he did with difficulty. However, he didn't, and now we have to manage the transition.

You know, I didn't think anything at first of the Wiki page on scaling advocating simply increasing the block size as required, even to the point of requiring full nodes to have tens of thousands of dollars worth of high speed internet connections and UTXO storage hardware. But eventually I started thinking about the implications - the turning point for me was really when I realized how large miners could use large blocks as a way to force smaller miners out of business.

Satoshi didn't foresee pool mining, the dangers of bloating the UTXO set, or even that multiple clients would be used. Bitcoin was a brilliant idea, don't get me wrong, but Satoshi was a person, not some all-knowing, all-seeing deity. It doesn't surprise me in the slightest that he might not have thought through all the implications of allowing the block size to increase without bound, just like he left Bitcoin with a scripting system that turns out to not have that many actual applications.

The massive forum dramas ultimately come down to the fact that regardless of whether the block size is left as it is, or increased, Bitcoin as we know it will change; there just isn't consensus on how it will change. You come from Google, a place of massive centralized server farms controlled by one company. Google's services work pretty well - centralization can have benefits - but many of us feel that goes down a very dangerous path. It's easy to see how a world where blocks are so large that only well-funded pools with highly visible high-speed internet connections can participate would lead to government and large businesses controlling Bitcoin. Fundamentally, distributing large amounts of data in a censorship-resistant way is far harder than distributing small amounts of data.
922  Bitcoin / Development & Technical Discussion / Re: Now that we've reached the 250kb soft limit... on: March 07, 2013, 05:45:37 PM

The 250KB "limit" is simply the default maximum block size created by the reference implementation that most miners use. Nodes will still accept up to the hard maximum block size, which is 1MB.

For instance, on testnet there have been two 1MB blocks created: 00000000476781c04b82b3ea91af1a86f3a863e1c9312b50302ffa01b7bdf960 and 0000000010bf4453b170a6756d911e207734ae181af6c8c02b42787d5885b333
923  Bitcoin / Development & Technical Discussion / Re: Inflation-proofing via fraud notices on: March 07, 2013, 04:23:28 PM
I see what you mean now, but it still makes no difference. SPV clients validate NO transactions. Even if SPV clients add up all the coinbases, you can just create new money in a non-coinbase transaction and they'll accept it just the same.

No, the coinbase transactions will always sum to more than the Bitcoins actually in circulation, because of fees. After all, a coinbase is allowed to have up to subsidy + total fees in its outputs; the subsidy part sums to the total Bitcoins in circulation, and the total fees can be as high as you want in a valid block. Thus the way to pull off the fraud is with the coinbase transaction, because proving the fraud (currently) requires every transaction in the block, and every input to every transaction in that block.
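To make the overcounting concrete, here's a sketch (my own illustrative Python, not anyone's client code) of what an honest chain's subsidies sum to. An SPV client adding up coinbase *outputs* sees subsidy plus fees, so the sum always exceeds this figure once any block collects a fee:

```python
def total_subsidy(height, initial=50 * 10**8, interval=210_000):
    """Sum all block subsidies for blocks below `height`, in satoshis.
    This is the most new coins an honest chain can have created;
    coinbase outputs may additionally claim that block's fees."""
    total, subsidy, h = 0, initial, 0
    while h < height and subsidy > 0:
        n = min(interval, height - h)
        total += n * subsidy
        h += n
        subsidy //= 2  # halving every 210,000 blocks
    return total

# First halving: exactly 10.5 million BTC of subsidy.
print(total_subsidy(210_000) / 10**8)
```

So checking "sum of coinbases == expected supply" can't work without also knowing the fees, and fees can only be computed by fetching every transaction in the block plus every input they spend.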
924  Bitcoin / Development & Technical Discussion / Re: Block Size soft-limit maxing out this AM 6/3/13 on: March 07, 2013, 11:21:38 AM
So, despite the block reward being >$1000, and not due for halving until 3.75 years time, fees are forced to do a moonshot.

That "moonshot" is because someone created a single transaction with 94BTC in fees: 13dffdaef097881acfe9bdb5e6338192242d80161ffec264ee61cf23bc9a1164

Fees are rising, but they haven't spiked like you think they have.
925  Bitcoin / Bitcoin Discussion / Re: Bitcoin decentralization myth - is it important? on: March 07, 2013, 09:54:07 AM
Ask yourself: can Mt. Gox prevent you from doing a face-to-face bitcoin exchange?

No.

As long as mining stays decentralized, your ability to perform a Bitcoin transaction will be exactly the same as anyone else's: pay the fee, and your transaction gets confirmed. It doesn't matter if you're a huge bank settling accounts across borders, or just some guy who needs to wire money to your parents in Iran. You're all on a level playing field, because as long as no one entity controls more than 51% of the total hashing power, no-one can risk rejecting a valid block.
926  Bitcoin / Development & Technical Discussion / Re: Now that we've reached the 250kb soft limit... on: March 07, 2013, 04:55:27 AM
It seems to me the fairest way to decide block size is to have it be proportional to the hashing speed of the network.  

Over Bitcoin's relatively short period, the graph of the hashing speed of the network looks like this:

[image: chart of network hash rate over time]

...and you want to link block size to that wild roller coaster, when we haven't even seen what ASICs will do?

Even on a log scale the hashing power has been wild, and it still could be if anyone finds a partial shortcut in SHA256:

[image: network hash rate chart, log scale]

That said, since a shortcut is rather unlikely, and would probably break SHA256 entirely anyway - not to mention cause a 51% attack - I could consider supporting a max block size calculated as something like 1MB * log10(hashes/second * k). But really, I'm mostly saying that because log10 of anything doesn't grow very fast...
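Just to show how slowly that grows (the formula is the half-serious one above; the constant k is an arbitrary illustrative choice of mine, not a proposal):

```python
import math

def max_block_size_mb(hashes_per_sec, k=1e-9):
    """Sketch of a log-scaled cap: 1 MB * log10(hash rate * k),
    floored at 1 MB. Both the formula and k are illustrative."""
    return max(1.0, math.log10(hashes_per_sec * k))

# A thousand-fold increase in hash rate adds only 3 MB to the cap:
print(max_block_size_mb(1e12))  # ~3 MB at 1 TH/s
print(max_block_size_mb(1e15))  # ~6 MB at 1 PH/s
```

With log10, even ASIC-scale jumps in hashing power translate into single-digit megabyte changes in the limit.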
927  Bitcoin / Development & Technical Discussion / Re: Now that we've reached the 250kb soft limit... on: March 07, 2013, 04:13:02 AM
I'm not saying that Bitcoin will shrink, I'm saying that it will reach an equilibrium where there is no more growth in terms of new users operating directly on the block chain. Instead, there will be an industry of companies that do things off the block chain and settle up once a day or every few hours. Of course miners would love that, since the fees will be maximized. And anyone using Bitcoin as a store of value will be happy with it as well, since the network hash rate will be maximized.

This.

I'd consider myself a pretty knowledgeable guy about Bitcoin - I've even gotten a few lines of code into the reference client and once found a (minor) bug in code written by Satoshi dating back to almost the very first version.

You wanna know where I keep my coins for day-to-day spending? Easywallet. So it's centralized - so what? If it goes under I'm not going to cry about the $100 I have there, and I do care about how it ensures that every transaction I make is unlinkable, and since I access it over Tor, even Easywallet has a hard time figuring out who I am.

My savings though? Absolutely they're on the blockchain, with rock-solid security that I can trust - I've read most of the source-code myself. Sure, transactions won't be free, or even cheap, but you get what you pay for, and I know I'm getting the hashing security I paid for.

With cryptography we can create ways to audit services like Easywallet and make it impossible for them to lie about how many coins back the balances in the ledgers they maintain. Eventually we can even create ways to shut down those services instantly if they commit fraud. In fact, with some changes to Bitcoin's scripting, I think we can even make it possible for users of those services to automatically get refunds when fraud is proven, although I and others are still working on that idea - don't quote me on it yet.

The point is, we have options, and those options don't have to destroy the truly decentralized and censorship-proof blockchain we have now, just so people can make cheap bets on some silly gambling site.
928  Bitcoin / Mining / Re: Soft block size limit reached, action required by YOU on: March 07, 2013, 03:56:09 AM
Add an additional .001 optional fee in your client and your transaction will be in the next block. The blockchain flooders are cheapskates. Transactions are not supposed to be cheap enough that you can blast hundreds of them out an hour with your gambling bot.

This.

I'm running a timestamping service that's been making about two or three transactions an hour for the past few days, each with 0.0005BTC fees. Looks like about 90% to 95% of my transactions are confirming in the next block, and the rest within another block.

Seriously, if you can't spend 0.001BTC - a bit less than 5 cents - to publish a transaction that thousands of computers have to verify and store for eternity in the one rock-solid, high-value decentralized currency known to man, stop complaining. I'm surprised the large-blocks/low-fees group hasn't been compared to the usual socialists wanting to centralize profit and decentralize costs yet... or maybe they have - following the forums on this stuff is a full-time job.

You know, my timestamping thing - OpenTimestamps, if you want to know(1) - has been called blockchain spam by some. For the technically inclined: no, it doesn't bloat the UTXO set, but it does add blockchain space. However, it and other abuses like storing files in the blockchain will naturally be crowded out as transactions become more expensive, so the total damage will always be limited without centralized enforcement efforts like Mike's pleas for miners to "stop the satoshidice spam". Already, TX fees would cost me $100USD/month to run the timestamper if every block had a timestamp transaction in it, so I'll soon have plenty of reason to change the way it works, just like the price of gas gives people incentives not to waste it.
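A rough check on that $100/month figure, assuming one 0.0005BTC-fee transaction per block and an exchange rate of about $46/BTC (the rate is my assumption for early 2013, not a number from the post):

```python
# One timestamp transaction in every block, at the stated fee.
blocks_per_day = 144      # one block per ~10 minutes
fee_btc = 0.0005          # per-transaction fee from the post
usd_per_btc = 46          # assumed early-2013 exchange rate

monthly_usd = blocks_per_day * 30 * fee_btc * usd_per_btc
print(round(monthly_usd))  # ~99, i.e. roughly $100/month
```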

On the other hand, if we go ahead and let the maximum block size get as large as miners want, there's no reason not to use it for all sorts of BS things, and little we can do to stop abuse without centralization. Sadly, I suspect that as it becomes more and more expensive to participate as a full-fledged miner, we'll just see the number of pools decrease and P2Pool die off from lack of miners, until central efforts to "stop spam transactions" start happening - and then you're one step away from "stop Silk Road" being possible.

1) FWIW, it's not quite ready to be called production yet, but if you can find the software on github, go ahead and try it out.
929  Bitcoin / Development & Technical Discussion / Re: Lets Build TestNets to simulate the future of Bitcoin and the blocksize on: March 06, 2013, 10:10:00 PM
Actually, rather than a testnet, I'm working on doing a Bitcoin network simulator, that works out how fast blocks and transactions propagate, as well as allowing for models of miner behavior.
930  Bitcoin / Development & Technical Discussion / Re: A new bitcoin testnet3 faucet on: March 05, 2013, 03:31:35 PM
Thanks! It's been useful, sent you 0.1BTC, the real kind.
931  Bitcoin / Bitcoin Discussion / Re: Bitcoin website operators: please consider using Google sign-in on: March 01, 2013, 12:12:59 AM
I'd suggest website operators take a third approach: support Google Authenticator or, to be exact, RFC 6238 time-based one-time passwords. Under the hood it uses a shared secret key, which is combined with the current time via a cryptographic hash (an HMAC) to produce a secondary, one-time password. Your users just install the Google Authenticator app on their smartphone, use the camera to scan a special QR code containing the secret key, and from then on enter the 6-digit one-time password every time they log in, in addition to their normal password. Blockchain.info and many other Bitcoin sites already use it, not to mention non-Bitcoin sites. You do need a smartphone, but they're pretty common these days. Unless hackers get both your user's password and their phone, they can't do anything.
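For the technically inclined, the whole scheme fits in a dozen lines. Here's a sketch of RFC 6238 in Python (standard library only; the function name and parameters are mine):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter,
    truncated to a short decimal code per RFC 4226."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's SHA-1 test secret, at T = 59 seconds, yields "287082":
print(totp(b"12345678901234567890", 59))
```

The server stores the same secret it encoded into the QR code and just compares codes at login, usually also accepting the adjacent time step to tolerate clock drift.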

Unlike Mike's suggestion of using Google sign-in, RFC 6238 doesn't send any information whatsoever to third parties - not when you log in, not even the fact that you're using Google Authenticator at all. For non-Bitcoin sites, I can see why Google sign-in could make a lot of sense - if you use Google Analytics, Google already knows when your users sign in anyway - but Bitcoin is a target, and you really don't want to be one court order away from suddenly finding that none of your customers can log in. Google has a better track record than most of fighting court orders, but because their infrastructure and employees are spread out across the world, in most countries they have no choice but to comply. For instance, Google has an office in Argentina, and I could easily see a court order forcing Google to block sign-ins to Bitcoin exchanges pushed through under the guise of enforcing that country's capital controls. Equally, I can easily imagine Google getting a court order from the Argentinian government forcing them to reveal all the Google sign-ins made in that country, in an attempt to identify and prosecute people violating those same capital controls. Your website wouldn't even have to be based in Argentina for any of this to happen.

Mike has a point about Google sign-in being "one strong basket", but court orders can do things no attacker ever could, and if your risk is court orders, centralization is the last thing you need.
932  Bitcoin / Development & Technical Discussion / Re: A clean solution to ALL Bitcoin problems: SatoshiDice, Block size, future fees. on: February 28, 2013, 05:39:44 PM
1. fees equal or greater than the average fees of the last 50 blocks are always accepted.
(This allows any user to put whatever high fee he may want, and it's predictable)

Do we also allow the block size to exceed the 1 megabyte limit, by not counting the size of these transactions with above-average fees towards the total block size?

...which means large miners can offer "direct-submission" contracts where they accept transactions with above average fees, then refund the fees after they get them mined, minus the orphan rate and some small cut of profit. It'd be a great way for satoshidice to operate for example, and you could submit the same transaction with multiple miners at once if the miners agree to sign their coinbases. (you'll have to audit that the orphans are real, but to do so the miner just has to keep track of every block they produce, orphaned or not)
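To sketch the contract arithmetic (the rates here are made-up numbers for illustration, not anything from the thread):

```python
def refund_amount(fee, orphan_rate=0.01, miner_cut=0.02):
    """Hypothetical 'direct-submission' contract: the miner accepts an
    above-average-fee transaction, mines it, then refunds the fee minus
    the expected orphan loss and a small profit cut. All parameters
    are illustrative."""
    return fee * (1 - orphan_rate) * (1 - miner_cut)

# A 1 BTC fee comes back as ~0.97 BTC once the transaction is mined.
print(refund_amount(1.0))
```

The auditing point is that the miner's claimed orphan rate is verifiable if they commit to signing every coinbase they produce, orphaned or not.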
933  Bitcoin / Development & Technical Discussion / Re: A clean solution to ALL Bitcoin problems: SatoshiDice, Block size, future fees. on: February 26, 2013, 08:04:05 PM
How does this prevent a miner from creating their own transactions to game the coefficients? The only cost is the orphan rate, which can be kept well under 1% for a miner with sufficient infrastructure.
934  Bitcoin / Development & Technical Discussion / Re: At any given point in time the entire BTC networks txs are handled by 1 miner on: February 26, 2013, 04:38:45 PM
Regarding the maximum transaction rate, I worked it out using the most efficient possible transaction type and wrote up the results on the wiki.

Basically, 10.7tx/s is possible, although the actual rate will be somewhere between that and 5.2tx/s depending on how efficiently change can be managed.
935  Bitcoin / Bitcoin Discussion / Re: Fidelity-bonded banks: decentralized, auditable, private, off-chain payments on: February 26, 2013, 07:01:09 AM
The Intel/AMD stuff isn't secure though yet.

Well, security is a spectrum, but regardless I don't think you can tap high-speed memory buses with a few thousand dollars worth of equipment, and it goes without saying that you can't easily access anything that's held in the L1/L2 caches. So you have some secure memory there and can write your software such that it encrypts data that won't fit into the cache, if you want to.

I already mentioned this to you on IRC, but I'll repeat it: tapping high-speed memory busses is a lot easier than you'd think if they go off-chip. You can first force the memory bus to run slower than it should with the over/underclocking settings, then build some custom hardware directly on the bus itself - a cheap microscope and a steady hand - to sample the signal. After all, it's crypto: provided you don't actually crash the computer, you can try over and over again until you finally hit the key.

L1/L2 cache though... much, much harder, especially on 22nm, where probing busses becomes exceptionally difficult due to capacitance even for the people at Intel. The stuff we talked about on IRC re: L2 cache locking looks like it could really work.

Who says banks can engage in fractional reserve banking? You can force chaum-token redemption to be recorded in audit logs, and those logs prevent them from getting away with that. The logs themselves can be made public, and making them public still doesn't reveal anything.

How does that work? Unless your plan is to run the entire bank inside the remotely attested secure world, complete with all the code that talks to Bitcoin, you can't know that the bank didn't just issue themselves some tokens without making a deposit.

Actually you can. Just make every chaum-token operation update a counter of all the outstanding chaum tokens, with fraud defined as any mis-update of that counter. The audit log gets signed and so on, and published publicly. The tokens themselves remain perfectly private - it's just a counter.
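A minimal sketch of what such a log could look like (my own illustrative Python; the real design would use proper signatures rather than a bare hash chain):

```python
import hashlib
import json

def append_update(log, delta):
    """Append a hash-chained entry recording a token issue (+1) or
    redemption (-1). Fraud = any entry whose running total doesn't
    equal the previous total plus its delta. The public log reveals
    only a counter, never which tokens moved."""
    prev = log[-1] if log else {"total": 0, "hash": "0" * 64}
    entry = {"delta": delta,
             "total": prev["total"] + delta,
             "prev_hash": prev["hash"]}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def outstanding(log):
    """Outstanding tokens according to the log's latest entry."""
    return log[-1]["total"] if log else 0
```

Anyone can replay the chain and check every running total; the bank can't issue itself extra tokens without either mis-updating the counter (provable fraud) or admitting more tokens are outstanding than it has deposits.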

Anyway I explained in more detail about my further fidelity-bonded ledgers idea on the bitcoin-dev email list: http://sourceforge.net/mailarchive/message.php?msg_id=30531383

And if you want to run the entire bank inside a trusted computer, sure, that plan would work, but then you don't need Chaum's technique. The secure program can just generate a key and then accept encrypted deposit/withdraw commands. The database can itself be encrypted before it goes to/from disk.

I considered doing that, but I think the defense in depth of chaum + trusted hardware is safer. You don't want breaking the hardware to break the security as well as allow theft; your most formidable opponents are likely to not care about stealing money.

What makes you think most people will keep the whole chain? All you actually need is the pruned UTXO set and that is only a few hundred megabytes today. Bitcoin could operate just fine with only 5 different organisations holding complete copies of the chain. I can't imagine any time when hard disk size is the constraining factor on running a full node.

If block-space is cheap, what makes you think the UTXO set isn't going to just keep growing, and at a high rate? It's also the most expensive storage, because it needs to support a lot of IOPS, yet every validating node must have a full copy.

Also, "5 different organizations", so basically you just need to take out five targets to do a heck of a lot of damage to Bitcoin... lovely.

This is what I don't get about you: on the one hand you say fidelity-bonded banks have a serious problem due to legality - they're banks, basically - yet on the other hand you're happy to see a system so centralized that you expect just half a dozen entities in the world to maintain the full historical chain data required to validate the blockchain in a truly trust-free manner. What exactly do you expect to happen when countries decide "OK, Bitcoin is illegal now."? Do you have any plans other than "OK, you win"?

The OP's idea is well thought out...but is this a good idea? I mean, recreating banks? Seriously?

What the hell happened to revolutionary ideas? Now people want to recreate banks? What about replacing the financial infrastructure? Why are we talking about supplementing B&M banks with their digital equivalent?

Well, I mentioned banks because it's the simplest version of the idea that I could explain on the non-technical forum, and one that can be done with Bitcoin without any core changes to the protocol. If you're interested, I also wrote up a non-banking version, fidelity-bonded ledgers, which can be set up such that you rely on the third party only to keep an accurate ledger of transactions - they can't steal funds at all. That version requires a soft-fork to enforce the validation rules, though, so implementing it is less certain.


Would there be a limit to the number of receipts a bank can sign every 10 minutes?

For a given bank, yes, based on how much investment they made in hardware. A few hundred to a few thousand dollars worth of hardware could process thousands of transactions a second though; the requirement for audit logs verifiable by others is likely the real issue. A really high volume bank will actually operate, at the technical level, multiple "sub-banks" to split the load up, all of which can be made transparent to the user. (similar to how you care that you pay bitpay, not that you pay a particular address they gave you)


Garzik, Maxwell, and now ~retep are individuals who I find unusually credible.  I'll be following the work of these persons closely and potentially lending support as my resources allow.

Thank you! I also need to give credit to Gregory Maxwell: while I came up with these ideas, they've been refined mainly through discussions with him. In particular, it was his suggestion to combine the fidelity bonds with trusted hardware, and he realized they provide orthogonal protections against fraud.
936  Bitcoin / Development & Technical Discussion / Re: [PATCH] increase block size limit on: February 25, 2013, 08:48:01 PM
This comment concerns me very much. Dust spam may not have economic value when denominated in bitcoin, but such "spam" can be representative of much larger transactions if the dust is representative (such as with "colored" bitcoins).

Sane "colored coin" or "smartcoin" protocols don't depend on some fixed "1 satoshi = 1 share" ratio. Rather for each transaction moving an asset around they calculate what fraction of the asset was assigned to what transaction output, which means you can divide the asset indefinitely without requiring the actual amount of Bitcoins to change. If the asset represented by a txout is worth less than the minimum transaction fee, it's still dust that doesn't make sense to spend.
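A minimal sketch of the fraction-based idea (my own illustrative Python, not any specific colored-coin protocol's code):

```python
from fractions import Fraction

def assign_asset(asset_in, output_values):
    """Split an incoming asset quantity across a transaction's outputs
    in proportion to their Bitcoin values, so the asset can be divided
    indefinitely without the underlying satoshi amounts changing.
    Real protocols differ in details; this shows only the ratio idea."""
    total = sum(output_values)
    return [Fraction(asset_in) * v / total for v in output_values]

# 100 shares riding on a transaction paying 600 and 400 satoshis:
print(assign_asset(100, [600, 400]))  # 60 and 40 shares
```

Because the split is a pure ratio, a txout carrying a tiny sliver of the asset still costs the full minimum fee to move, which is the dust point above.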

It's unfortunate that the first smartcoin-type protocols weren't written that way; on the other hand, none of them have actually been used in practice. I've written a protocol that does this correctly as part of my fidelity bonds protocol - see here - although I haven't written code to implement it yet.
937  Bitcoin / Development & Technical Discussion / Re: [PATCH] increase block size limit on: February 25, 2013, 07:45:42 PM

Dust is by definition a transaction output (bitcoin) so small that it is economically worthless, and will probably sit around unspent.

The fee paid is irrelevant.

Speaking of, you made a "dust-spam non-standard" patch right? Where is it?
938  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 04:56:51 PM
With full nodes building on trusted computing platform, miners with low bandwidth can mine without being full nodes themselves.

In addition to working as a normal full node, a trusted computing full node will accept encrypted queries and reply with encryption, so the operator is unable to censor. It will also prepare a list of all unconfirmed valid transactions, including only the txid, size, fee (and optionally tx priority). The list will propagate in a P2P manner. Miners will construct blocks by choosing the transactions they want to include, in addition to coinbase and any other valid tx not provided by the trusted full node.

To prevent the node from cheating (not very possible due to trusted computing), individual miners will fully validate some blocks regularly, depending on their resources. A miner with only a 50kB/s connection (i.e. 30MB/10min) while maxblocksize is 300MB may validate only 1 in 10 blocks.

The trusted full nodes will be supported by donation and/or subscription fee. People with many bitcoins will support/offer these nodes to protect their wealth.

Technically speaking, that's a very clever idea.

Socially speaking though, it'll be an utter failure. Miners using pools have absolutely no incentive at all to verify the blocks they produce other than some vague desire to protect Bitcoin. This is why, currently, blocks from pools other than P2Pool aren't verified by their miners at all - Eligius supports getblocktemplate, but mining software that uses a full validating node to verify the blocks is hardly ever used.

Pools just aren't going to buy a bunch of expensive trusted computing hardware and switch their operations to use fragile trusted computing software just to please the 1% of miners who seem to care about this issue.
939  Bitcoin / Development & Technical Discussion / Re: How a floating blocksize limit inevitably leads towards centralization on: February 24, 2013, 04:48:08 PM
However, I have extracted the transaction counts and they average 1190 each. Looking at a bunch of blocks maxing out at 250Kb they are in the region of 600 transactions each, which is to be expected. Obviously, there is a block header overhead in all of them. But this does mean that the 1Mb blocks will be saturated when they carry about 2400 transactions. This ignores the fact that some blocks are virtually empty as a few miners seem not to care about including many transactions.

So 2400 transactions per block * 144 blocks per day = 345,600 transactions per day or Bitcoin's maximum sustained throughput is just 4 transactions per second.
This is even more anemic than the oft-quoted 7 tps!

Excellent. Someone finally quoted some real numbers instead of theoretical maximums, the picture is a bit clearer now, thanks!

Those numbers are bogus and show very little understanding on the part of solex.

The problem is apparent if you look at transaction 3e4116059a0edb1134126047d9e5ebfa1619b6180153cdc8390e6e36c375a179 from block #259156, one of the blocks in solex's analysis. That transaction has one input, and 41 outputs. Now for the purposes of determining how many transactions Bitcoin can perform per second, are you going to count it as one transaction, or more than one? solex is counting it as one.

As transactions become more expensive per byte people are going to use all sorts of techniques to make transaction size smaller. For instance you can combine transactions together with other parties; each transaction includes a 10 byte header. If you get together with 20 other people, you've saved 200 bytes and you improve your privacy because you've mixed your funds with those 20 other people.

The absolute minimum transaction size(1) is for single-input, single-output transactions. They are 192 bytes each, or 182 if transaction combining is aggressively used: 1MiB / 10 minutes / 182 bytes ≈ 9.6tx/s

Big eWallet services like instawallet will be able to get much closer to that theoretical limit than other services, simply because they'll have a wider range of txouts to choose from. We may even see agreements between services to use each other's wallets just to optimize fees, with something like ripple used to settle imbalances periodically; if off-chain transactions become supported, it's quite possible even day-to-day users will be running software that does stuff like that automatically.

1) Minimum signed transaction that is. The technical minimum is 60 bytes, but such transactions are spendable by anyone and thus offer no security.
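The back-of-envelope arithmetic, using the byte counts from the post:

```python
MIB = 1024 * 1024        # 1 MiB block size limit, in bytes
BLOCK_INTERVAL_S = 600   # one block per ~10 minutes

def tx_per_second(tx_bytes):
    """Throughput if every block is exactly 1 MiB of transactions
    of the given size."""
    return MIB / tx_bytes / BLOCK_INTERVAL_S

print(round(tx_per_second(192), 1))  # 9.1 - signed 1-in/1-out
print(round(tx_per_second(182), 1))  # 9.6 - with aggressive combining
```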
940  Bitcoin / Bitcoin Discussion / Re: Fidelity-bonded banks: decentralized, auditable, private, off-chain payments on: February 23, 2013, 09:00:55 PM
By the way, IBM's trusted computing system is pretty much a dead end these days; it's very hard to obtain the hardware. It was never that good anyway - you had to sign consulting contracts to get the SDKs and other things. Intel/AMD have a much better system, and I think x86 PC-based remote attestation is the way to go for a lot of reasons. See the XMHF project (trustvisor).

The Intel/AMD stuff isn't secure yet, though. While the IBM stuff really does bring the security to the level where your attackers need immense resources, because memory isn't encrypted, PC-based trusted computing is still vulnerable to attackers with just a few thousand dollars worth of equipment. There is pressure to make the PC stuff secure for cloud computing, so what the right approach is remains an open question.

Anyway, implementing trusted computing is the last step for any of this stuff; I don't want to have to solely rely on it.


Could you run a Chaum bank on the darknet? I don't think so. Even if the bank has put up a fidelity bond, the temptation to engage in fractional reserve banking would be immense, and could result in a lot of profit before the inevitable bank run. You can't really tell if this is happening because the coins you deposit are expected to be constantly moving as other people cash out their blinded tokens. I don't fully understand the time locking proposal for this reason - the blinded tokens only have value if you can turn them back into Bitcoins again, and that inherently means that your deposit can't be frozen or locked in any way.

Who says banks can engage in fractional reserve banking? You can force chaum-token redemption to be recorded in audit logs, and those logs prevent them from getting away with that. The logs themselves can be made public, and making them public still doesn't reveal anything.

Incidentally, this is why I mentioned above that there are probably good technical reasons why even off-chain Chaum transactions would still require fees: you want to ensure that proving fraud is cheap, which means keeping the size of the proofs down. I expect that there would be some period in which all tokens are expected to be turned over for a given set of deposit addresses, which limits the total size of any given audit log. Because fidelity bonds themselves are only useful if fraud can be proven, I expect new bonds to get purchased over time to "start fresh".

As for time locking: I expect the tokens to themselves be Bitcoin transactions in some fashion, albeit locked so they can't be used immediately. But that discussion is out of the scope of this forum I think; I'm writing up tech specs like I did for fidelity bonds.


From a first glance, this proposal sounds very similar to what the OpenTransactions project has implemented. (see the highlighted parts above)

OpenTransactions is basically a toolkit, and yes, I do plan to do more work studying it to determine which aspects of their ideas are applicable to fidelity-bonded banks - and equally, maybe they'll do the same for fidelity bonds.

Aha, can you walk me through what you think ten years ago would look like? Let's say the year 2000:

Quote:

I bought a new Gateway desktop in 2000 It had windows ME.
10 GB hard drive, 860 processor. Also at that time I was on dial up.
Boy, you talk about speed. I didn't have it.


Small hard-drives were a huge issue 10 years ago. I can't see people buying multiple hard drives just to experiment with this new-fangled "Bitcoin thing". The block size would probably have been set to something more like 100KiB, and a year or two in, this exact discussion would already be happening.