Author Topic: proposal: delete blocks older than 1000  (Read 2690 times)
gmaxwell (Moderator, Legendary)
July 14, 2013, 11:32:17 PM  #21

The freenet model does not provide for reliability, however. My past experience was that individual keys in freenet were fairly unreliable, especially if they were just community-of-interest keys and not linked from the main directories. It also lacked any kind of Sybil resistance on opennet, and darknet has the unfortunate bootstrapping usability issues. Perhaps things have changed in the last couple of years? (If so— anyone have any citations I can read about what's changed in freenet land?) An obvious proof of concept would be to insert the bitcoin blockchain as-is and to provide a tool to fetch the blocks that way. If nothing else, another alternative transport is always good.

In any case, this is serious overkill. We already have an address rumoring mechanism that works quite well, and the protocol already handles fetching and validation in an attack-resistant manner. If we supplement it with additional fields describing what range(s) of blocks nodes are willing to serve, with some authentication to prevent flag malleability, that should be adequate for our needs from a protocol perspective... and considerably simpler than a multihop DHT.
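As a rough illustration of that kind of supplement (the field and function names here are hypothetical, not the actual P2P protocol):

Code:
# Illustrative sketch only: peers rumor, alongside their address, the block-height
# ranges they are willing to serve; a node can then pick peers for historical fetches.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PeerAnnouncement:
    address: str                      # e.g. "203.0.113.5:8333"
    serves: List[Tuple[int, int]]     # inclusive (start_height, end_height) ranges

def peers_serving(peers: List[PeerAnnouncement], height: int) -> List[PeerAnnouncement]:
    """Return peers that claim to serve a block at the given height."""
    return [p for p in peers if any(lo <= height <= hi for lo, hi in p.serves)]

# A node keeping one historical hunk plus everything from height 245000 onward:
example = PeerAnnouncement("203.0.113.5:8333", serves=[(150000, 200000), (245000, 10**9)])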
Altoidnerd (Sr. Member)
http://altoidnerd.com
July 14, 2013, 11:42:29 PM  #22

Agreed. I also propose redesigning 3-stage rockets so that the top two stages carry payload and only the bottom stage carries fuel. That way twice as much could be carried into orbit for half the fuel. I am surprised it wasn't done like that for the manned space program.
We'd better make the speed of light higher so that optic fibers can allow much faster data transfers.
I think we can achieve both of these by first making space-time Riemannian instead of pseudo-Riemannian. With Euclidean space-time there should be no need for pesky limits like a constant speed of light, and the extra payload mass should be offsettable by simply moving some of the fuel you didn't need into the past.


ObOntopic: While not all nodes need to constantly store the complete history— it is not so simple as waving some hands and saying "just keep X blocks": access to historical data is important to Bitcoin's security model. Otherwise miners could invent coins out of thin air or steal coins, and later-attaching nodes would know nothing about it and couldn't prevent it. There is a careful balancing of motivations here: part of the reason someone doesn't amass a bunch of computing power to attack the system is because of how little they can get away with if they try.


To achieve all the aforementioned goals in one fell swoop, and then some, we should simply nullify the 2nd law of thermodynamics. Without this pesky restriction, we would not be faced with mortality and therefore would feel no need to rush to any accomplishments at all.

Since we do have the second law, however, people will also tend to lose track of their wallets, ensuring that even if hoarding were somehow discouraged or even eliminated, there would be accidental unspent coins.

justusranvier (Legendary)
July 14, 2013, 11:47:25 PM  #23

Quote
The freenet model does not provide for reliability, however.
That's true. The cost of strong anonymity is that storage nodes are dealing with encrypted blobs whose contents they know nothing about, so they have to drop keys randomly when an individual node runs out of space.

A Bitcoin-specific storage system could do better than this, for example by dropping prunable transactions before unspent transactions.
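One way to implement that preference, as a minimal sketch (the function name and dict-based bookkeeping are illustrative assumptions, not an existing interface):

Code:
# Illustrative sketch only: when a storage node must free space, evict transactions
# whose outputs are all spent (prunable) before anything still in the UTXO set.
def choose_evictions(stored_txids, utxo_txids, tx_sizes, bytes_needed):
    """stored_txids: txids held locally; utxo_txids: txids with unspent outputs;
    tx_sizes: txid -> size in bytes. Returns the txids to drop."""
    prunable = [t for t in stored_txids if t not in utxo_txids]
    unspent  = [t for t in stored_txids if t in utxo_txids]
    victims, freed = [], 0
    for txid in prunable + unspent:   # prunable first, unspent only as a last resort
        if freed >= bytes_needed:
            break
        victims.append(txid)
        freed += tx_sizes[txid]
    return victims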

Quote
In any case, this is serious overkill.
What kind of storage architecture will ultimately be needed if Bitcoin is going to scale as far as 10^6 transactions per second? Laying the groundwork for a network of that capacity is not overkill IMHO.
gmaxwell (Moderator, Legendary)
July 15, 2013, 01:45:27 AM  #24

Quote
A Bitcoin-specific storage system could do better than this, for example by dropping prunable transactions before unspent transactions.
Uh. The whole point of the discussion here is providing historic transactions in order to autonomously validate the state.

There is absolutely no reason to use a DHT-like system to provide UTXO data: even if you've chosen to make a storage/bandwidth trade-off where you do not store the UTXO set yourself, you would simply fetch the data from the party providing you with the transaction/block, as they must already have that information in order to have validated and/or produced the transaction.

Quote
A Bitcoin-specific storage system
Yes, freenet's alpha and omega is its privacy model, but again, that's why its architecture probably teaches us relatively little of value for the Bitcoin ecosystem.

Quote
In any case, this is serious overkill.
What kind of storage architecture will ultimately be needed if Bitcoin is going to scale as far as 10^6 transactions per second? Laying the groundwork for a network of that capacity is not overkill IMHO.
Why not specify 10e60000 transactions per second while you're making up random numbers? Bitcoin is a decentralized system; that's its whole reason for existence. There aren't non-linear costs that inhibit its scaling, at least not in the system itself, just linear ones— but they're significant. Positing 10e6 transactions per second directly inside Bitcoin is _ludicrous_ (you're talking about every full node needing to transfer 80 TB per day just to validate, with peak data rates in excess of 10 Gbit/sec required to obtain reliable convergence) and not possible without completely abandoning decentralization— unless you also assume a comparable increase in bandwidth/computing power, in which case it's trivial. Or if you're willing to abandon decentralization, again it's trivial. Or if you move that volume into an external system— then it's a question of the design of that system and not Bitcoin.
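For reference, a back-of-the-envelope check of those figures, assuming roughly 1,000 bytes per transaction (the per-transaction size is an assumption, not a number stated above):

Code:
# Rough arithmetic behind the 10e6 tx/s figures quoted above.
tx_per_sec   = 10**6
bytes_per_tx = 1_000                                        # assumed average size
per_day_tb   = tx_per_sec * bytes_per_tx * 86_400 / 1e12    # ~86 TB per day
avg_gbit     = tx_per_sec * bytes_per_tx * 8 / 1e9          # ~8 Gbit/s sustained average
print(f"~{per_day_tb:.0f} TB/day, ~{avg_gbit:.0f} Gbit/s average (peaks higher)")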

Regardless: achieving high scale by first dramatically _increasing_ the bandwidth required, by interposing a multihop DHT in the middle— when bandwidth has generally been scaling much slower than computation and storage— isn't a good start.
justusranvier (Legendary)
July 15, 2013, 02:00:08 AM  #25

Quote
A Bitcoin-specific storage system
Yes, freenet's alpha and omega is its privacy model, but again, that's why its architecture teaches us relatively little of value for the Bitcoin ecosystem.
I disagree with that, because their privacy model required them to make everything work automatically. You just start up a node and it bootstraps and specializes without any user intervention at all. This is something that other distributed storage systems, like Tahoe-LAFS, don't have.

Quote
Why not specify 10e60000 transactions per second while you're making up random numbers? Bitcoin is a decentralized system; that's its whole reason for existence. There aren't non-linear costs that inhibit its scaling, at least not in the system itself, just linear ones— but they're significant. Positing 10e6 transactions per second directly inside Bitcoin is _ludicrous_ (you're talking about every full node needing to transfer 80 TB per day just to validate, with peak data rates in excess of 10 Gbit/sec required to obtain reliable convergence) and not possible without completely abandoning decentralization— unless you also assume a comparable increase in bandwidth/computing power, in which case it's trivial. Or if you're willing to abandon decentralization, again it's trivial. Or if you move that volume into an external system— then it's a question of the design of that system and not Bitcoin.
Nielsen's Law of Internet Bandwidth suggests that high-end home broadband users will have 10 Gbit/sec connections by 2025. Does it not make sense to plan ahead?
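That projection is just compound growth; a quick check, assuming Nielsen's ~50% annual growth rate and a ~100 Mbit/s high-end home connection in 2013 (both figures are assumptions):

Code:
# Compound-growth check of the 2025 claim under the stated assumptions.
growth, start_mbit, years = 0.50, 100, 2025 - 2013
projected_mbit = start_mbit * (1 + growth) ** years    # ~13,000 Mbit/s
print(f"~{projected_mbit / 1000:.0f} Gbit/s by 2025")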
gmaxwell (Moderator, Legendary)
July 15, 2013, 02:20:57 AM (last edit: July 15, 2013, 02:32:29 AM by gmaxwell)  #26

Quote
You just start up a node and it bootstraps and specializes without any user intervention at all. This is something that other distributed storage systems, like Tahoe-LAFS, don't have.
Sure, and nothing interesting or fancy is required for that to work. Our blockchain space is _well defined_, not some effectively infinite sparse state space. The access patterns to it are also well defined: all historical data is accessed with equal/flat small probability, and accessed sequentially. Recent blocks are accessed with an approximately exponential decay. Data needed to validate a new block or transaction is always available from the party that gave you that block or transaction.

So, a very nice load-balancing architecture falls right out of that. Everyone keeps recent blocks with an exponentially distributed window size. Everyone selects a uniform random hunk of the history, size determined by their contributed storage and available bandwidth. This should result in nearly optimal traffic distribution and is highly attack resistant, in a way seriously stronger than freenet's node swapping, and without the big bandwidth overheads of having to route traffic through many homes to pick up data that's ended up far from its correct specialization as IDs have drifted.
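A minimal sketch of that storage policy (the window mean and hunk size below are illustrative assumptions, not a specification):

Code:
# Each node keeps an exponentially distributed window of recent blocks plus one
# uniformly random contiguous hunk of history sized by its contributed storage.
import random

def pick_storage(tip_height, hunk_blocks, mean_recent_window=2016):
    """Return (recent, historical) block-height ranges this node keeps."""
    recent_window = int(random.expovariate(1.0 / mean_recent_window))
    recent = range(max(0, tip_height - recent_window), tip_height + 1)

    start = random.randint(0, max(0, tip_height - hunk_blocks))
    historical = range(start, start + hunk_blocks)
    return recent, historical

recent, historical = pick_storage(tip_height=250_000, hunk_blocks=25_000)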

Quote
Nielsen's Law of Internet Bandwidth suggests that high-end home broadband users will have 10 Gbit/sec connections by 2025. Does it not make sense to plan ahead?
Arguing "does it not make sense to plan ahead" here sounds like some kind of cargo cult engineering: "Planning ahead must be done. This is a plan. Therefore it must be done."

Any proposed actions need to be connected to solving actual problems (or at least ones that are reasonably and justifiably anticipated). What you're suggesting— to the extent that it's even concrete enough to talk about the benefits or costs— would likely _decrease_ the scaling relative to the current and/or most obvious designs by at least a constant factor, and more probably a constant plus a logarithmic factor. Worse, it would move costs from storage, which appears to have the best scaling 'law', to bandwidth, which has the worst empirical scaling.

If you scale things based on the scaling laws you're assuming, then nothing further is required. If you strap on all the nice and pretty empirically observed exponential trends, then everything gets faster and everything automatically scales up no worse than the most limiting factor (which has been bandwidth historically, and looks like it will continue to be)— assuming no worse than linear performance. There is no worse-than-linear behavior in the Bitcoin protocol that I'm aware of; any in the implementations are just that, and can be happily hammered out asynchronously over time. Given computers and bandwidth that are ~10e6 better (up to a factor of 4 or so in either direction), you can have your 10e6 transactions/s. Now— I'm skeptical that these exponential technology trends will hold even just in my lifetime. And if they don't, that results in a ceiling on what you can do in a decentralized system, one that twiddling with the protocols can't lift without tossing the security model/decentralization.

Maybe people will want to toss the decentralization of Bitcoin in order to scale it further than the technology supports. If so, I would find that regrettable, since if you want to do that you could just do it in an external system.  But until I'm the boss of everything I suspect some people will continue to do things I find regrettable from time to time— I don't, however, see much point in participating in discussions about those things, since I certainly won't be participating in them.
 
marcus_of_augustus (Legendary)
July 15, 2013, 02:47:11 AM  #27

I'm pretty sure that the NSA will keep a full copy of the blockchain forever ... and probably all traffic that was ever on the bitcoin network.

Maybe we can just ask them to keep the "good" copy always available for new client downloads when anyone needs it?

justusranvier (Legendary)
July 15, 2013, 02:49:20 AM  #28

Quote
Any proposed actions need to be connected to solving actual problems (or at least ones that are reasonably and justifiably anticipated). What you're suggesting— to the extent that it's even concrete enough to talk about the benefits or costs— would likely _decrease_ the scaling relative to the current and/or most obvious designs by at least a constant factor, and more probably a constant plus a logarithmic factor. Worse, it would move costs from storage, which appears to have the best scaling 'law', to bandwidth, which has the worst empirical scaling.
I have pretty modest aspirations for Bitcoin: I just want it to be as successful in the currency world as TCP/IP has been in the networking world; i.e. I'm looking forward to a future in which there are no Bitcoin currency exchanges because there are no longer any currencies to exchange with.

The reason I like the distributed filesystem approach is that the storage requirements of a universal currency are going to be immense, and loosening the requirement that every node maintain a full copy of everything at the same time makes the problem easier to solve. Freenet has a self-assembling datastore where each node specializes in terms of which keys it stores, and while it doesn't guarantee that keys will be retrievable forever, it does a good job in practice (subject to certain caveats).

That makes it a good starting point to design a system that could scale up to the kind of storage system Bitcoin would need a decade from now if it's still on the road to becoming a universal currency.

On the other hand there's no guarantee that the Dollar and the Euro are going to make it to 2025, so it's always possible that we'd need to scale up very quickly much sooner than anyone could anticipate. It certainly wouldn't be the first time for that to happen to Bitcoin.
btcusr (Sr. Member)
July 15, 2013, 03:51:09 AM  #29

The first or oldest coins must be valued differently, as they are somehow special. I'd like to buy bitcoins mined from the first 10 blocks, for as much as 1 bitcent per 1 BTC.

Maybe such markets haven't opened up yet. :)
