Bitcoin Forum
Author Topic: A bit of criticism on how the bitcoin client does it  (Read 2595 times)
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 06:31:16 PM
 #1

As some of you may have noticed, I have been working on my own implementation of a bitcoin node. During this process I found a few things which I would like to point out, so that maybe they can be improved in the future...
Basically, they all relate to wasted network bandwidth.

1. [EDIT: never mind]
Maybe I have not verified it well enough, but I have the impression that the original client, whenever it sends "getblocks", always asks as deep as possible - why do this? It just creates unnecessary traffic. You could surely optimize it without much effort.

2. [EDIT: never mind]
IMO, you should also optimize the way you do "getdata".
Don't send getdata for all the blocks you don't know to all the peers at the same time - it's crazy.
Instead, try to, e.g., ask each node for a different block - one at a time, until you collect them all...
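A minimal sketch of that round-robin idea in Go (the language gocoin uses); `assignBlocks` and the peer/hash names are hypothetical illustrations, not the actual client's API:

```go
package main

import "fmt"

// assignBlocks distributes the missing block hashes across peers
// round-robin, so no two peers are asked for the same block at once.
// Illustrative only: real code would track timeouts and re-assign
// blocks whose download stalls.
func assignBlocks(missing []string, peers []string) map[string][]string {
	assignment := make(map[string][]string)
	for i, hash := range missing {
		peer := peers[i%len(peers)]
		assignment[peer] = append(assignment[peer], hash)
	}
	return assignment
}

func main() {
	missing := []string{"blkA", "blkB", "blkC", "blkD", "blkE"}
	peers := []string{"peer1", "peer2"}
	for peer, hashes := range assignBlocks(missing, peers) {
		fmt.Println(peer, "gets", hashes)
	}
}
```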

3. [EDIT: please, do mind]
Last, but not least.
The blocks are getting bigger, but there have been no improvements to the protocol whatsoever.
You cannot, for example, ask a peer for part of a block - you need to download the whole 1 MB of it from a single IP.
Moreover, each block has an exact hash, so it seems absurd that, in times when even the MtGox API goes through CloudFlare to save bandwidth, there is no solution allowing a node to simply download a block from an HTTP server; instead it is forced to leech it from the poor, mostly DSL-armed, peers.
The way I see it, the solution would be very simple: every mining pool could easily use CloudFlare (or any other cloud service) to serve blocks via HTTP.
So if my node says "getdata ...", it does not necessarily mean that I want that megabyte of data from the poor node and its thin DSL connection. I would be more than happy to just get a URL where I can download the block from - it would surely be faster, and would not drain the peer's uplink bandwidth as much.


That's about it, but if I recall anything more, I will surely append it to this topic.

Also, please don't take my criticism personally - it's only meant as feedback, pointing out areas that I think are important to improve, because the official bitcoin client is already eating up my internet connection more effectively than BitTorrent, and that is not cool at all :)

Check out gocoin - my original project of full bitcoin node & cold wallet written in Go.
PGP fingerprint: AB9E A551 E262 A87A 13BB  9059 1BE7 B545 CDF3 FD0E
Remember remember the 5th of November
Legendary
*
Offline Offline

Activity: 1722
Merit: 1001

Reverse engineer from time to time


View Profile
May 13, 2013, 06:38:34 PM
 #2

In what language are you implementing the node? I've been having an idea of also writing one, perhaps in C, but who knows, I don't usually finish projects.

BTC:1AiCRMxgf1ptVQwx6hDuKMu4f7F27QmJC2
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 06:39:29 PM
 #3

In what language are you implementing the node? I've been having an idea of also writing one, perhaps in C, but who knows, I don't usually finish projects.
I have implemented it, in Go language: https://bitcointalk.org/index.php?topic=199306.0

Remember remember the 5th of November
Legendary
*
Offline Offline

Activity: 1722
Merit: 1001

Reverse engineer from time to time


View Profile
May 13, 2013, 06:52:58 PM
 #4

In what language are you implementing the node? I've been having an idea of also writing one, perhaps in C, but who knows, I don't usually finish projects.
I have implemented it, in Go language: https://bitcointalk.org/index.php?topic=199306.0
I thought the motto of the Go language was "simple and efficient" - will it even perform well?

piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 06:56:27 PM
 #5

In what language are you implementing the node? I've been having an idea of also writing one, perhaps in C, but who knows, I don't usually finish projects.
I have implemented it, in Go language: https://bitcointalk.org/index.php?topic=199306.0
I thought the motto of the Go language was "simple and efficient" - will it even perform well?
It is simple and efficient, and it surely performs well.
I would even risk the statement that it performs better than the current satoshi client, though it takes much more RAM - so we are talking about different abstraction layers here.
Crypto operations are likely less efficient in Go than what OpenSSL gives, but my Go client only does them after checking a new highest block against the expected difficulty - so less often... :)

But the only way to be sure how fast it actually is, is to check it out for yourself :)

kjj
Legendary
*
Offline Offline

Activity: 1302
Merit: 1001



View Profile
May 13, 2013, 07:34:59 PM
 #6

These are all pretty common topics in IRC.  I recall some of them from the mailing list too, but I don't read that daily any more.  People are working on various parts.  For one example, see here.  The bootstrap torrent is another example.

Pitch in and help if you can.

17Np17BSrpnHCZ2pgtiMNnhjnsWJ2TMqq8
I routinely ignore posters with paid advertising in their sigs.  You should too.
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 07:46:35 PM
 #7

Sorry, but I think I can only help by implementing any of these new ideas in my client.
I just don't like having a boss, and in my code I am the boss - I hope you understand :)

Anyway, if anyone wants to test a solution against a different client - just let me know.
I can implement it, if it's not completely crazy - and then we can test against each other, benefiting everyone.

Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
May 13, 2013, 07:49:39 PM
 #8

1.
Maybe I have not verified it well enough, but I have an impression that the original client, whenever it sends "getblocks", it always asks as deep as possible - why to do this? It just creates unnecessary traffic. You can surely optimize it, without much effort.

Not sure what you mean by "as deep as possible". We always send getdata starting at whatever block we already know. The reason for starting from early blocks and moving forward is that validation is done in stages, and at each point as much as possible is already validated (mostly as a means to prevent DoS attacks). As most checks can only be done when you have the entire chain of blocks from genesis to the one being verified, you need them more or less in order.

Quote
2.
IMO, you should also optimize the way you do "getdata".
Don't just send getdata for all the block that you don't know, to all the peers at the same time - it's crazy.
Instead, try to i.e. ask each node for a different block - one at a time, until you collect them all...

That's not true; we only ask for each block once (and retry after a timeout), but it is done from a single peer (not from all, and not balanced across nodes). That's a known weakness, but changing it isn't trivial, because of how validation is done.

There is one strategy however that's pretty much accepted as the way to go, but of course someone still has to implement it, test it, ... and it's a pretty large change. The basic idea is that downloading happens in stages as well, where first only headers are fetched (using getheaders) in a process similar to how getblocks is done now, only much faster of course. However, instead of immediately fetching blocks, wait until a long chain of headers is available and verified. Then you can start fetching individual blocks from individual peers, assemble them, and validate as they are connected to the chain. The advantage is that you already know which chain to fetch blocks from, and don't need to infer that from what others tell you.

Quote
3.
Last, but not least.
Forgive me the sarcasm, but I really don't know what all the people in the Bitcoin Foundation have been doing for the past years, besides suing each other and increasing transaction fees Wink

The Bitcoin Foundation has only been around for a year or so, and they don't control development. They pay Gavin's salary, but other developers are volunteers that work on Bitcoin in their free time.

Quote
We all know that the current bitcoin protocol does not scale - so what has been done to improve it?
Nothing!
The blocks are getting bigger, but there have been no improvements to the protocol, whatsoever.
You cannot i.e. ask a peer for a part of a block - you just need to download the whole 1MB of it from a single IP.

BIP37 actually introduced a way to fetch parts of blocks, and it can be used to fetch a block with just the transactions you haven't heard about, so it avoids resending those that have already been transferred as separate transactions (though I don't know of any software that uses this mechanism of block fetching yet; once BIP37 is available on more nodes, I expect it will be). Any other system which requires negotiating which transactions to send adds latency to block propagation time, so it's not necessarily the improvement you want.
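The filter behind BIP37 can be illustrated with a toy Bloom filter in Go. Note this is a hedged sketch: BIP37 actually specifies Murmur3 hashes with a per-filter tweak, while FNV is used here only to keep the example self-contained:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a toy Bloom filter illustrating the BIP37 idea: a client
// hands a peer a compact bit array describing the transactions it
// cares about, and the peer relays only matching ones.
type bloom struct {
	bits   []bool
	hashes int
}

func newBloom(size, hashes int) *bloom {
	return &bloom{bits: make([]bool, size), hashes: hashes}
}

// indexes derives the bit positions for an item, varying the hash
// input per round to simulate multiple hash functions.
func (b *bloom) indexes(item []byte) []int {
	idx := make([]int, b.hashes)
	for i := 0; i < b.hashes; i++ {
		h := fnv.New32a()
		h.Write([]byte{byte(i)})
		h.Write(item)
		idx[i] = int(h.Sum32()) % len(b.bits)
	}
	return idx
}

func (b *bloom) Add(item []byte) {
	for _, i := range b.indexes(item) {
		b.bits[i] = true
	}
}

// MayContain can return false positives but never false negatives -
// exactly the property that lets a peer filter blocks without the
// client revealing its exact addresses.
func (b *bloom) MayContain(item []byte) bool {
	for _, i := range b.indexes(item) {
		if !b.bits[i] {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(1024, 4)
	f.Add([]byte("txid-1"))
	fmt.Println(f.MayContain([]byte("txid-1"))) // true: added items always match
}
```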

Quote
Moreover, each block has an exact hash, so it is just stupid that in times when even MtGox API goes through CloudFlare to save bandwidth, there is no solution that would allow a node to just download a block from an HTTP server, and so it is forced to leech it from the poor, mostly DSL armed, peers.
The way I see it, the solution would be very simple: every mining pool can easily use CloudFlare (or any other cloud service) to serve blocks via HTTP.
So if my node says "getdata ...", I do not necessarily mean that I want to have this megabyte of data from the poor node and its thin DSL connection. I would be more than happy to just get a URL, where I can download the block from - it surely would be faster, and would not drain the peer's uplink bandwidth that much.

There are many ideas about how to improve historic block download. I've been arguing for a separation between archive storage and fresh block relaying, so nodes could be fully verifying active nodes on the network without being required to provide any ancient block to anyone who asks. Regarding moving to other protocols, there is the bootstrap.dat torrent, and there's recently been talk about other mechanism on the bitcoin-development mailinglist.

aka sipa, core dev team

Tips and donations: 1KwDYMJMS4xq3ZEWYfdBRwYG2fHwhZsipa
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 08:13:27 PM
 #9

Not sure what you mean by "as deep as possible". We always send getdata starting at whatever block we already know. The reason for starting from early blocks and moving forward is because validation is done is stages, and at each point as much as possible is already validated (as a means to prevent DoS attacks, mostly). As most checks can only be done when you have the entire chain of blocks from genesis to the one being verified, you need them more or less in order.
What I mean is when the client has already downloaded the full chain - and is basically waiting for a new block.
Why ask 500 blocks back?

Quote
That's not true, we only ask for each block once (and retry after a timeout), but it is done to a single peer (not to all, and not balanced across nodes). That's a known badness, but changing isn't trivial, because of how validation is done.
OK - then I'm sorry.
It only proves how little I know about the bitcoin client, so I should not be the one changing it :)

Quote
There is one strategy however that's pretty much accepted as the way to go, but of course someone still has to implement it, test it, ... and it's a pretty large change. The basic idea is that downloading happens in stages as well, where first only headers are fetched (using getheaders) in a process similar to how getblocks is done now, only much faster of course. However, instead of immediately fetching blocks, wait until a long chain of headers is available and verified. Then you can start fetching individual blocks from individual peers, assemble them, and validate as they are connected to the chain. The advantage is that you already know which chain to fetch blocks from, and don't need to infer that from what others tell you.
I saw getheaders and I was thinking about using it. But it would basically only help with the initial chain download.
Now, if you really want to combine the data you get from getheaders with the parts of blocks acquired from your peers once they have implemented BIP37 (otherwise it won't be much faster) - then good luck with that project, man! ;)
I mean, I would rather prefer baby steps - even extreme ones, like having a central server from which you can fetch a block by its hash. I mean: how expensive would that be? And how much bandwidth would it save for these poor peers... :)

Quote
BIP37 actually introduced a way to fetch parts of blocks, and it can be used to fetch a block with just the transactions you haven't heard about, so it avoids resending those that have already been transferred as separate transactions (though I don't know of any software that uses this mechanism of block fetching yet; once BIP37 is available on more nodes, I expect it will be).
Interesting... thank you - then maybe that should be the next thing I add to my client :)

Quote
There are many ideas about how to improve historic block download. I've been arguing for a separation between archive storage and fresh block relaying, so nodes could be fully verifying active nodes on the network without being required to provide any ancient block to anyone who asks. Regarding moving to other protocols, there is the bootstrap.dat torrent, and there's recently been talk about other mechanism on the bitcoin-development mailinglist.
I was talking more about single blocks being made available via HTTP - at the very moment they have been mined.
I think all you need is a URL - so it should be up to the peer to choose which URL to give you. As long as the hash of the data you download from there matches what you asked for, you have no reason to question its method. Otherwise, just ban the bastard ;)

Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
May 13, 2013, 08:22:30 PM
 #10

Why to ask 500 blocks back?

It doesn't, as far as I know. It asks for "up to 500 blocks starting at hash X", where X is the last known block.
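For reference, the "starting at hash X" part comes from the block locator the requester builds: dense near the tip, exponentially sparser back toward genesis, so the peer can find the last common block even across a reorg. A rough Go sketch of the sampled heights (the real message carries hashes, not heights):

```go
package main

import "fmt"

// blockLocator returns the heights a "getblocks" locator samples:
// the last 10 blocks densely, then exponentially sparser steps back
// to genesis. The peer finds the first hash it recognises and replies
// with up to 500 blocks following it - so a fully synced node only
// ever receives blocks it is actually missing.
func blockLocator(tip int) []int {
	var heights []int
	step := 1
	for h := tip; h > 0; h -= step {
		heights = append(heights, h)
		if len(heights) >= 10 {
			step *= 2
		}
	}
	return append(heights, 0) // always include genesis
}

func main() {
	fmt.Println(blockLocator(200000))
}
```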

Quote
Quote
There is one strategy however that's pretty much accepted as the way to go, but of course someone still has to implement it, test it, ... and it's a pretty large change. The basic idea is that downloading happens in stages as well, where first only headers are fetched (using getheaders) in a process similar to how getblocks is done now, only much faster of course. However, instead of immediately fetching blocks, wait until a long chain of headers is available and verified. Then you can start fetching individual blocks from individual peers, assemble them, and validate as they are connected to the chain. The advantage is that you already know which chain to fetch blocks from, and don't need to infer that from what others tell you.
I saw getheaders and I was thinking about using it.
Now I think if you really want to combine the data you got from getheaders, with the parts of blocks acquired from you peers after they have implemented BIP37 (otherwise it won't be much faster) - then good luck with that project, man! Wink

Using Bloom filtering may not be entirely viable yet; I'll have to check. The big change is first downloading and validating the headers, and then downloading and validating the blocks themselves. IMHO, it's the only way to have a sync mechanism that is at once fast, stable and understandable (I have no doubt that there are other mechanisms that share two of those three properties...).

Quote
I mean, I would rather prefer baby steps - even extreme, like having a central sever from which you can fetch a block, by its hash. I mean: how expensive would be that? But how much bandwidth would it save for these poor people.. Smiley

What protocol is used to actually fetch blocks is pretty much orthogonal to the logic of deciding what to fetch, and how to validate it, IMHO.

piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 08:29:50 PM
 #11

Using Bloom filtering may not be entirely viable yet, I'll have to check. The big changes is first downloading and validating headers, and then downloading and validating the blocks itself. IMHO, it's the only way to have a sync mechanism that is both fast, stable and understandable (I have no doubt that there are other emchanisms that share two of those three properties...).
I still think a simple solution, like "give me this part of this block/transaction", would have a much better chance of success in the short term.
And I also think it would be nice to have something in the short term :)

Quote
What protocol is used to actually fetch blocks is pretty much orthogonal to the logic of deciding what to fetch, and how to validate it, IMHO.
I disagree. If you are a node behind DSL with a very limited upload bandwidth, you do not want to serve blocks (and maybe even transactions) unless it is really necessary.
There are servers out there connected to the fastest networks in the world - those you should use, as much as you can. Who is going to stop you?

Pieter Wuille
Legendary
*
qt
Offline Offline

Activity: 1050
Merit: 1005


View Profile WWW
May 13, 2013, 08:31:48 PM
 #12

Quote
What protocol is used to actually fetch blocks is pretty much orthogonal to the logic of deciding what to fetch, and how to validate it, IMHO.
I disagree. If you are a node behind DSL, and you have a very limited upload bandwidth, you do not want to serve blocks, unless it is really necessary.
There are servers out there, connected to the fastest networks in the world - these you should use, as much as you can. Who is going to stop you?

I agree completely. But it still has nothing to do with your logic of deciding what to fetch and how to validate it. It's just using a different protocol to do it.

piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 08:34:04 PM
 #13

I agree.
But in reality, the logic of what to fetch is only important during the initial chain download.
Later you just fetch whatever new blocks appear...

So it is not really that important, is it? ;)

piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 13, 2013, 08:48:51 PM
 #14

Maybe we should not focus so much on the initial blockchain download, but rather on limiting the bandwidth usage of a fully synchronized node.
As for relaying transactions, I would even go crazy enough to implement a web of trust - where you don't verify every transaction, but only random ones, and build up trust in the node that sends them to you - then you keep checking them randomly, but less frequently.

But transactions, too, could be kept on WWW servers. There is no economic reason to fetch them from China :)

Mike Hearn
Legendary
*
expert
Offline Offline

Activity: 1526
Merit: 1005


View Profile
May 13, 2013, 11:08:23 PM
 #15

Quote
I still think a simple solution, like "give me this part of this block/transaction", would have a much better chance of success in a short term.

I don't understand your point - that is exactly what Bloom filtering provides. It has been deployed and working for SPV clients for some months already, with no issues. You can't use it as a full node, because a full node, by definition, must download full blocks - it must know about all transactions.

Incidentally, if you're going to make sarcastic comments implying Bitcoin hasn't improved, you should actually know what you're talking about. Bloom filtering launched at the start of this year; it was not originally part of the protocol - so there have been big improvements quite recently.

For distributing the block chain you can just as well use BitTorrent or some other large-file distribution mechanism rather than HTTP serving; it's already possible, and there are already torrents distributing the chain this way. They aren't designed for end users, because end users should eventually all end up on SPV wallets, which already download only partial blocks.
grau
Hero Member
*****
Offline Offline

Activity: 836
Merit: 1000


bits of proof


View Profile WWW
May 14, 2013, 07:06:05 AM
 #16

Yes, Bloom filtering is a significant improvement to the core protocol.

In addition to serving SPV clients it is used to optimize the BOP server's communication to lightweight clients connected to its message bus.

The BOP message bus also offers an API to get blocks.
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 14, 2013, 08:08:54 AM
 #17

Quote
I still think a simple solution, like "give me this part of this block/transaction", would have a much better chance of success in a short term.

I don't understand your point - that is exactly what Bloom filtering provides. It is deployed and working for SPV clients for some months already. There have been no issues with it. You can't use it as a full node because a full node, by definition, must download full blocks as it must know about all transactions.
Well, you have just said it yourself: Bloom filtering does not help at all if you want to run a full node.
And I do want to run a full node - as probably do most of you guys out there.
So how does it help us?

Quote
Incidentally, if you're going to make sarcastic comments implying Bitcoin hasn't improved, you should actually know what you're talking about. Bloom filtering launched at the start of this year, it's not something that was originally a part of the protocol - so there have been big improvements quite recently.
I think you were the one who did not know what I was talking about :)
Improvements are worthless if there is no actual software that people want to use taking advantage of them.

What I meant by "give me this part of this block/transaction" is literally "give me X bytes of block Y, starting at offset Z".
So when a new block appears in the network and I need to download it, while being connected to a number of peers, I don't ask each one of them for the same megabyte of data - instead I can split the work into, let's say, 32 KB chunks, and this way fetch the entire new block from my peers much more quickly.
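That chunked download could be sketched in Go roughly as follows; the "give me X bytes of block Y at offset Z" message is hypothetical (no such protocol message exists), represented here by a `fetch` callback:

```go
package main

import (
	"fmt"
	"sync"
)

const chunkSize = 32 * 1024

// chunkReq is one hypothetical "give me `length` bytes of the block
// at `offset`" request.
type chunkReq struct{ offset, length int }

// chunkRequests splits a block of the given size into 32 KB pieces.
func chunkRequests(blockSize int) []chunkReq {
	var reqs []chunkReq
	for off := 0; off < blockSize; off += chunkSize {
		n := chunkSize
		if off+n > blockSize {
			n = blockSize - off
		}
		reqs = append(reqs, chunkReq{off, n})
	}
	return reqs
}

// fetchAll asks one peer per chunk concurrently and reassembles the
// block; fetch stands in for the real peer round-trip. Each goroutine
// writes a disjoint region of the buffer, so no locking is needed.
func fetchAll(blockSize int, fetch func(chunkReq) []byte) []byte {
	block := make([]byte, blockSize)
	var wg sync.WaitGroup
	for _, r := range chunkRequests(blockSize) {
		wg.Add(1)
		go func(r chunkReq) {
			defer wg.Done()
			copy(block[r.offset:], fetch(r))
		}(r)
	}
	wg.Wait()
	return block
}

func main() {
	fmt.Println(len(chunkRequests(1_000_000))) // 31 chunks for a ~1MB block
}
```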

But that would only be useful until the protocol supports fetching blocks from HTTP servers - which is the ultimate solution, and which IMO should be implemented ASAP, if you guys really care about all these small bitcoin users and their internet connections. The mining pools should help here, because it is in their very best interest to propagate the blocks they have mined across the network as quickly as possible - and what could be quicker than a static file, served via HTTP from the pool's domain, through a CloudFlare-like infrastructure?

Quote
For distributing the block chain you can as well use Bittorrent or some other large file distribution mechanism rather than HTTP serving, it's already possible and there are already torrents distributing the chain in this way. They aren't designed for end users because end users should eventually all end up on SPV wallets which already only download partial blocks.

As I said before, I would prefer to focus on improving the behavior of a node that is already synchronized, rather than on making it faster to set up a new one from scratch.
The initial chain download, and the fact that it takes so long, is inconvenient, but it is not really such a big issue for the actual network.
Besides, when you set up a node from scratch and need to re-parse the 236+k blocks, the network communication does not seem to be as much of an issue as all the hashing and elliptic-curve math your PC needs to go through.



There is another thing that came to my mind, so I will just add it here, to this post.
I believe nodes should not relay transactions that spend inputs existing only in the memory pool. They should relay only transactions whose inputs come from actually mined blocks.
This, IMHO, would reduce the network's traffic considerably. A regular user never needs (and usually is not even able) to spend an output that has not been mined yet - while, on the other hand, relaying such transactions takes up a huge part of his network connection.
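The proposed relay policy is easy to state in code. A hedged Go sketch, with a `confirmed` set standing in for a real UTXO lookup:

```go
package main

import "fmt"

// Tx is a minimal transaction stand-in: just the IDs of the
// transactions whose outputs it spends.
type Tx struct {
	ID     string
	Inputs []string
}

// shouldRelay implements the policy proposed above: relay a
// transaction only if every input it spends is already in a mined
// block, not merely sitting in the memory pool.
func shouldRelay(tx Tx, confirmed map[string]bool) bool {
	for _, in := range tx.Inputs {
		if !confirmed[in] {
			return false
		}
	}
	return true
}

func main() {
	confirmed := map[string]bool{"minedTx": true}
	fmt.Println(shouldRelay(Tx{ID: "a", Inputs: []string{"minedTx"}}, confirmed))   // true
	fmt.Println(shouldRelay(Tx{ID: "b", Inputs: []string{"mempoolTx"}}, confirmed)) // false
}
```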

Mike Hearn
Legendary
*
expert
Offline Offline

Activity: 1526
Merit: 1005


View Profile
May 14, 2013, 07:14:36 PM
 #18

You're trying to solve a non-existent problem: block propagation is not upload-bandwidth limited today, so why would anyone add such a protocol feature? That's why I'm confused. You're asking for something that just wouldn't speed anything up.
piotr_n
Legendary
*
Offline Offline

Activity: 1904
Merit: 1002


aka tonikt


View Profile WWW
May 14, 2013, 07:17:19 PM
 #19

You're trying to solve a non-existent problem: block propagation is not upload-bandwidth limited today, so why would anyone add such a protocol feature? That's why I'm confused. You're asking for something that just wouldn't speed anything up.
Well, man, if that problem is non-existent to you, then I can only envy you the internet connection you have at home :)

But even with such a great connection - if you want to create software able to import the entire block chain from scratch within a few minutes, then I would rather suggest looking into hardware support for elliptic-curve math, because IMO that is the weakest link in this process - not the network protocol.

And "block propagation" is eating up a hell of a lot of the poor bitcoin users' bandwidth - it might not be a problem for you, but it is a problem.

Peter Todd
Legendary
*
expert
Offline Offline

Activity: 1106
Merit: 1000


View Profile
May 14, 2013, 07:37:34 PM
 #20

And "block propagation" is eating up a hell of a lot of the poor bitcoin users' bandwidth - it might not be a problem for you, but it is a problem.

If you don't have enough bandwidth to be CPU-limited, stop trying to run a node. SPV clients are just fine for any user's needs, unless you want to run a mining pool or operate a big business. If you really want, go get a VPS; $20-$100/month should buy a fast enough one, at least for another year or two.
