Author Topic: [PRE-ANN][ZEN][Pre-sale] Zennet: Decentralized Supercomputer - Official Thread  (Read 57048 times)
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 20, 2014, 11:28:27 PM
 #161


Super interesting conversation between ohad and HMC   Smiley

But the big question: Who is wrong?  Wink

Is there anyone out there in internetland who wants to jump in with a new perspective?

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join?

I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also, I don't get the need for introducing a new currency. I'm sure you've discussed these; I've read the first 5/6 posts but couldn't quite follow you guys.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 03:47:57 AM
 #162


Super interesting conversation between ohad and HMC   Smiley

But the big question: Who is wrong?  Wink

Is there anyone out there in internetland who wants to jump in with a new perspective?

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

Hi,

Our discussion follows two main approaches:
1. Verifiable computing, and
2. Risk reduction

Zennet does not aim to verify the correctness of the computation, but to offer risk-reducing and risk-control mechanisms. HMC's opinion is that we should stick to path 1, towards verifiable computing, and we're also discussing this option off the board. HMC also suggests that Zennet's risk reduction model is incorrect and gives scammers an opportunity to ruin the network. I disagree.
I think it would be enough for you to read only the last comments, since many of the earlier ones are just clarifications, so you can get right into the clear ones Smiley
More information about Zennet is at http://zennet.sc/about, and more details on the math behind the pricing algo are available here: http://zennet.sc/zennetpricing.pdf

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 05:16:42 AM
Last edit: September 21, 2014, 08:06:30 AM by HunterMinerCrafter
 #163

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

I'll try, but the key points have been shifting a bit rapidly.  (I consider this a good thing, progress.)

Socrates1024 jumping in moved the goal posts a bit, too, in ways that are probably not obvious from the thread, now.  Undecided

Perhaps we need that dedicated IRC channel sooner?

1. Identity.  We agree that PoW mining should be used to claim identity.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: the publisher has no way to know what difficulty would be appropriate for any given worker.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and that the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without verification of at least some aspects of the job, the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each other.

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how to dispatch and price their work.  I say the benchmark must run the same algorithm as the job to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model, particularly with the prior points taken in conjunction, gives an attacker an opportunity to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually works in reverse of what was intended, ultimately giving the "sell-side" (worker) participants a particular incentive to misrepresent their resources and process.

Did I miss any of the major issues?

Quote
I like the idea, but I'm getting paranoid about the QoS and the verifiability of pieces of work. Also I don't get the need for introduction of a new currency. I'm sure you've discussed these as I've read the first 5/6 posts, but couldn't quite follow you guys.

I also like the idea, but even if the model is fixed it still has some dangerous flaws "as given."  It will be a prime target for hackers, data theft, espionage, and even just general "griefing" by some participants.  In some respects, this is easily resolved, but in other respects it may become very difficult and/or costly.  This will, in any case, have to be a bit of a "wait and see" situation.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:23:18 AM
 #164

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?
1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: Publisher has no way to know what an appropriate difficulty should be set at for any given worker.

I do agree, that was just a misunderstanding. I meant that when the client connects, the publisher can't know which address this IP owns, unless they challenge it with some string to sign.
Yes, the PoW should be invested in identity creation, like in the Keyhotee project.
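
Just to illustrate the direction, a toy Python sketch (made-up names and a made-up difficulty constant, not Zennet code):

Code:
import hashlib
import os

DIFFICULTY_BITS = 16  # made-up global target; a real network would tune this

def mine_identity(pubkey, difficulty_bits=DIFFICULTY_BITS):
    """Grind a nonce until sha256(pubkey || nonce) falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return digest, nonce  # the identity and the PoW that backs it
        nonce += 1

def verify_identity(pubkey, nonce, difficulty_bits=DIFFICULTY_BITS):
    """Any publisher can check the claimed identity against the same global target."""
    digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# The worker mines once, up front, and presents (pubkey, nonce) to any publisher.
pubkey = os.urandom(33)  # stand-in for a real public key
identity, nonce = mine_identity(pubkey)
assert verify_identity(pubkey, nonce)

The point is that the cost is paid once per identity against a target everyone knows, rather than per publisher.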

Tau-Chain & Agoras
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:28:03 AM
 #165

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

If we go with the new idea of "slim" paravirt provability, we might be able to prove the running of the benchmarks themselves.

Tau-Chain & Agoras
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 05:34:18 AM
 #166

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

if we go on the new idea of "slim" paravirt provability, we might prove the running of the benchmarks themselves.

Yes, but these are related-but-distinct concerns.  Related because of #4 there.  Distinct because even with authentication to verify that the correct benchmarks are run, I still see a potential problem if we lack that "functional extension" from the benchmark to the job itself.  Our baseline would still be the wrong baseline; we'd just have a proof of it being the correct wrong baseline, heh.
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 21, 2014, 11:04:44 AM
 #167

1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: Publisher has no way to know what an appropriate difficulty should be set at for any given worker.

I agree too, and it seems like it was a misunderstanding based on Ohad's post.

2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without a verification over at least some aspects of the job work the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each-other.

Many processes don't have the properties necessary to be authenticated. For example, you can verify the work of a miner with a simple hash function, but you can't verify the work of a neural network that simply. If the publisher has 1000 hosts on his VM and wants to verify their work one by one, it would take a lot of computational power on his side.

Also, I assume by 'work' we don't mean running a mathematical operation across hosts. I don't know the infrastructure for the VM, but the system may assume all hosts are online and cooperating in a non-malicious way, so it can build and operate an entire OS across them. If one host acts maliciously, it endangers the integrity of the whole VM. From this perspective, a single defective host out of 1000 endangers the entire system, not just 1/1000 of the work.
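
To make the asymmetry concrete, a toy Python sketch (hypothetical function names, not any real protocol):

Code:
import hashlib

def verify_pow(block_header, claimed_hash, target):
    # Constant cost: one double-SHA256, no matter how much work the miner did.
    h = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
    return h == claimed_hash and int.from_bytes(h, "big") < target

def verify_training_run(train, data, claimed_weights):
    # No shortcut here: the publisher has to redo the entire run to check it.
    return train(data) == claimed_weights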

3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

I agree with HMC here. Any kind of benchmarking used must run alongside the process. Any host can benchmark high and then detach resources after the process has begun. The host can even do this via the network itself: consider renting 1000 hosts just to benchmark high for a publisher and then releasing them. So you either have to benchmark and process at the same time, decreasing the effective resources available, or the work supplied must be 'benchmarkable' itself. In the perspective I introduced in the last question, this does not necessarily mean every publisher should change his work, but it would mean running an OS across hosts that can effectively measure the contribution of each host in terms of resources. This may introduce another problem as well: any open-source OS selected would have to be heavily changed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model leads to an opportunity, particularly because of prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually ends up working in reverse of what was intended, ultimately giving particular incentive to the "sell side" worker side participants to misrepresent their resources and process.

I think if points 2 and 3 are solved, this won't arise. If we can identify well-behaved nodes that give verifiable results with verifiable resources used, this incentive wouldn't exist. Any pricing model based on this would be sound.
MaximBitCoin
Newbie
*
Offline Offline

Activity: 1
Merit: 0


View Profile
September 21, 2014, 03:00:04 PM
 #168

Here is an interesting blog post about the project

http://data-science-radio.com/is-zennet-or-any-other-decentralized-computing-real/
Hueristic
Legendary
*
Offline Offline

Activity: 3808
Merit: 4894


Doomed to see the future and unable to prevent it


View Profile
September 21, 2014, 03:39:30 PM
 #169


Interesting article. I disagree with the premise that "most" DC users need that level of security for their proprietary tasks. The need for such massive computational power is in itself a deterrent to theft, as the use of the resulting data is specialized to the entity seeking it.

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 21, 2014, 05:09:10 PM
 #170


3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model leads to an opportunity, particularly because of prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually ends up working in reverse of what was intended, ultimately giving particular incentive to the "sell side" worker side participants to misrepresent their resources and process.

We just cleared up another misunderstanding (off-board): whether the mentioned contract is some futuristic promise (no) or just the agreed rate per CPU/disk/memory etc. (yes). This had brought confusion about how to avoid consumption spoofing. Short answer: if the publisher trusts procfs, no problem exists. If he doesn't trust it, he will be able to apply the various multivariate linear outlier-detection methods raised.
I think HMC now agrees that the ongoing measurements can be decomposed into the measurements from past benchmark runs.
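
Roughly, the publisher-side check could look like this (a simplistic least-squares sketch in Python, not the actual algorithm from the pricing paper):

Code:
import numpy as np

def flag_suspicious_hosts(claimed_usage, observed_progress, z_cut=3.0):
    """claimed_usage: (n_hosts, n_resources) resource consumption each host reports
    (CPU-seconds, memory, disk IO, network, ...) over an interval.
    observed_progress: (n_hosts,) progress the publisher can measure externally
    for the same interval.
    Fit progress ~ claimed_usage @ rates by least squares, then flag hosts whose
    residual is an outlier: they bill for resources their progress doesn't support."""
    rates, *_ = np.linalg.lstsq(claimed_usage, observed_progress, rcond=None)
    residuals = observed_progress - claimed_usage @ rates
    z = (residuals - residuals.mean()) / (residuals.std() + 1e-12)
    return np.where(np.abs(z) > z_cut)[0], rates

A host that inflates its procfs numbers shows up as a large residual once enough honest measurements are in the mix.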

I hope the blogger from http://data-science-radio.com/is-zennet-or-any-other-decentralized-computing-real/ will now understand how wrong he was Wink

Tau-Chain & Agoras
xenmaster (OP)
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile WWW
September 21, 2014, 05:57:09 PM
 #171

Please join us in #zennet on freenode
Hueristic
Legendary
*
Offline Offline

Activity: 3808
Merit: 4894


Doomed to see the future and unable to prevent it


View Profile
September 21, 2014, 06:18:27 PM
 #172

Please joins us on #zennet at freenode

During the Patriots game? Are you Insane! Tongue


http://www.ifeed2all.eu/type/american-football.html

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 06:38:33 PM
 #173

I do agree too and seems like it was a misunderstanding based on Ohad's post.

I think we do still have some "fine point" details to work out, but at least we now agree on the actual goal here.

Many processes don't have the properties necessary to be authenticated. For example, you can verify the work of a miner by a simple hash function, but you can't verify the work of a neural network that simply.

Really, you can!  An ANN is really just a composition of sigmoids in a graph, and you can certainly authenticate over sigmoid functions, graphs, and the composition.  You can't assert something like "it hit a correct error rate," because you can't define what a correct meeting of an arbitrary objective would be, but you can certainly assert "evaluation and backprop/annealing were applied correctly" and infer from that that the error rate reached is the same as what you would've gotten running locally, which is all we desire.
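
As a toy illustration of what I mean (Python, forward pass only, hypothetical structure, not what we'll actually build):

Code:
import hashlib
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def commit(*arrays):
    h = hashlib.sha256()
    for a in arrays:
        h.update(np.ascontiguousarray(a).tobytes())
    return h.hexdigest()

def forward_with_receipts(x, weights):
    """Worker: evaluate the network, emitting a per-layer receipt the publisher can sample."""
    receipts, a = [], x
    for W in weights:
        z = sigmoid(a @ W)
        receipts.append({"in": commit(a), "W": commit(W), "out": commit(z)})
        a = z
    return a, receipts

def spot_check_layer(a_in, W, receipt):
    """Publisher: re-run one sampled layer and compare against the committed output."""
    return (commit(a_in) == receipt["in"] and
            commit(W) == receipt["W"] and
            commit(sigmoid(a_in @ W)) == receipt["out"])

Nothing here asserts that the network learned anything useful, only that the declared composition was actually evaluated.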

Quote
If the publisher has 1000 hosts on his VM, and wants to verify their works one by one, it would take a lot of computational power on his side.

This is pretty central to the debate.  While this has historically been true, what we are finding in contemporary work in this field is that most of these overheads actually stem from the combination of authentication for correctness and privacy preservation.  One of the key insights that the work of Socrates1024 brings to the table is that if you remove the privacy-preservation criterion (and, as a side effect of the change, remove a particular type of termination criterion - though one that I think doesn't apply to the Zennet case anyway), the overhead of authentication is greatly reduced.

What we are mostly discussing now in our "side-band" IRC debate is whether introducing a scale on the "granularity" of authenticated big-step reductions creates, or can create, enough of an inflection point to control the overhead to within some acceptable bounds without sacrificing the utility of the authentication.  We think that, at least for the "resource measure authentication only" goals of Zennet, it can.

You can think of our approach (greatly simplified) as something like this:  Authenticating (hashing over and signing a receipt for) each individual opcode (or, worse, micro-coded state transition) creates a huuuuuuge overhead.  Authenticating over every stack scope (function call) state requires only hashing over call operations, creating far less overhead.  Authenticating over "an entire system run" starting from some confirmed-correct hardware loaded with some confirmed-correct software requires only one hash over a certification that you did turn the power on, in the correct state, and run the thing from there.  It becomes pretty self-evident that we have a clean gradient of overheads here.

Our goal is, in a nutshell, to make the granularity of the "authenticated step" naturally large enough to be efficient but small enough to still be meaningful.  We're attempting to do this by, basically, a model of authenticating not over the entire computation but only over "each step involving a resource access."  The key realization is that we don't care if the actual math done between these resource accesses is correct - that is not what we are trying to certify!  What we are trying to certify is that resource access is done with a correct ordering, accounted for correctly in the "billing receipt" created by the worker, and that any tampering cannot be hidden from the publisher, i.e. done with no evidence of the tampering included on the receipt.  We think that, under our new approach, for the worker to "get away with something" he will either have to provide visible evidence of his shenanigans to the publisher or else will have to expend significantly more resources than just running the computation correctly in order to generate a "believable" lie that will not stand out to the publisher on the receipt.
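
A very rough cartoon of that billing-receipt idea, in Python (hypothetical structure, nothing close to the final design):

Code:
import hashlib
import json
import time

class BillingReceipt:
    """Worker side: every resource access appends to a hash chain, so entries cannot
    be reordered or dropped afterwards without breaking the chain."""
    def __init__(self):
        self.head = b"\x00" * 32
        self.entries = []

    def record(self, kind, amount):
        entry = {"t": time.time(), "kind": kind, "amount": amount, "prev": self.head.hex()}
        self.head = hashlib.sha256(
            self.head + json.dumps(entry, sort_keys=True).encode()).digest()
        self.entries.append(entry)

def verify_receipt(entries):
    """Publisher side: replay the chain; tampering with order or amounts shows up."""
    head = b"\x00" * 32
    for e in entries:
        if e["prev"] != head.hex():
            return False
        head = hashlib.sha256(head + json.dumps(e, sort_keys=True).encode()).digest()
    return True

The numbers in the receipt still have to be cross-checked against the pricing model (point 4), but at least the worker can't silently rewrite the history of accesses after the fact.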

We are sketching out some details of a "toy proof of concept" (literally an adaptation of "hello world") to help confirm our theory.

Quote
Also, I assume by 'work' we don't mean running a mathematical operation across hosts.

Well, of course we do!  What we don't mean is running some specific prescribed operation, but any general operation we please. The work function can be summarized as "run our authenticated hypervisor and give us IO to the inside" basically.

Quote
I don't know the infrastructure for the VM,

HEH, neither do we, yet!  It will probably not stray very far from what was originally proposed, just with a small piece of the VM either added or removed, depending upon how you look at it.  We've tossed around a few ideas for different approaches, but decided to defer many of those details until after we can prove the basic premise with a "hello world" type example being authenticated.

Quote
but the system may assume all hosts are online and cooperating in a non-malicious way, so it can build and operate an entire OS across them. If one host acts maliciously, it would endanger the integrity of the whole VM. In this perspective, assuming 1 in a 1000 defective host endangers the entire system, not just 1/1000 of work.

This is also somewhat central to our debate.  I think at this point we have about 5 to 7 opinions between the three of us, hehe.  I think pretty much all we *do* agree on so far about this particular aspect is that we need a much more formal treatment of this particular aspect in the specification!  Cheesy

I agree with HMC here. Any kind of benchmarking used must be run alongside the process. Any host can benchmark high and detach resources after the process has begun.

I think you missed something key here.  Benchmarking is continuous and ongoing, in any case.  In other words, your job is benchmarked "alongside the process," so if you start out benchmarking high and then go about removing applied resources, you will not be able to (assuming we can get the ancillary issues sorted out) continue billing without also reducing your billing rate correspondingly.  We all agree that this will work fine and that rates will converge appropriately.

What we don't agree on is the meaningfulness of the initial "baseline" benchmark that you start from, to do your initial rounds of billing before this convergence starts to "settle into" the correct values via the linear decomposition.  I don't dispute the validity of the linear solve itself, only the applicability of a single "canonical" or general benchmark to the initial billing for an arbitrary process.

The details on this are a bit too deep and maths-y to get into here, I think.  Join #zennet and we can wade into it if you'd like. :-)

Quote
... This may introduce another problem aside, any open source OS selected must be heavily changed.

Yes, this has come up as well.  We'd obviously like to avoid something as (insanely) effort-intensive as authenticating an entire kernel and/or VM.  Although Ohad briefly considered it as an option, I discouraged such a "moon shot" goal, favoring instead an approach more like a special-purpose VM layer.

Quote
I think if point 3 and 2 are solved, this won't rise.

I agree!  If we can solve 2 and 3 then any "lower dimension" non-linearity introduced into the pricing model by an "attacking" worker becomes immediately quite visible, and the publisher can reliably abort.

Quote
If we can identify well behaved nodes that give verifiable results with verifiable resources used, this incentive wouldn't exist. Any pricing model based on this would be sound.

Exactly.  The conclusion we do all solidly agree on is that if we can verify enough such that a "big lie" becomes very self-evident and "creating lots of continuous small lies over time" becomes very computationally expensive, then the rest of the model follows soundly from that.  (Assuming ID cost is correct, i.e. my point #1 is solidly addressed as well.)

HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 06:42:52 PM
 #174

During the Patriots game? Are you Insane! Tongue

We'll still be around after, I'm sure.
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 21, 2014, 07:50:41 PM
 #175

I don't have the time to follow your discussion on IRC, but feel free to post any results here. I will check on this thread.
HunterMinerCrafter
Sr. Member
****
Offline Offline

Activity: 434
Merit: 250


View Profile
September 21, 2014, 07:59:32 PM
 #176

I don't have the time to follow your discussion on IRC, but feel free to post any results here. I will check on this thread.

I'm sure we will continue to relay any pertinent conclusions here.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 24, 2014, 04:07:40 AM
 #177

Below, when I mention "we" I mean HMC and myself:

I have the expertise to jump right in, but your discussion has got too long. Is it possible to summarize the points of dispute so other people can join ?

1. Identity.  We agree that a PoW mining to claim identity should be used.  I say that identity should be pre-claimed, based on a global difficulty target.  Ohad says that identity should be claimed when the worker client connects to the publisher for initiation, with an arbitrary target set by the publisher.
  My key concern: Publisher has no way to know what an appropriate difficulty should be set at for any given worker.

As above, the description of this issue was based on a misunderstanding. We continue working together on this mechanism.

Quote
2. Verification of execution.  We agree that it would be ideal to have authenticated processes, where the publisher can verify that the worker is well behaved.  Ohad says there doesn't need to be any, and the cost of any approach would be too high.  I say the cost can be made low and that there is a likely critical need.
  My key concern: Without a verification over at least some aspects of the job work the publisher can have far too little faith in the results of any one computation, and particularly in the results of a combination of computations.  In particular, lack of verification may lead to rational collusion by workers and added incentive to attack each-other.

We are making promising advances in that area.

Quote
3. System benchmarking.  We agree that it would be ideal to have an appropriate system resource model presented to publishers, on which to make their determination about how dispatch and price their work.  I say the benchmark must represent that same algorithm to take baseline measurements.  Ohad says the baseline can be established with any algorithm.
  My key concern:  Many attacks based on a combined "gaming" of benchmarks and work performance might be possible if the benchmarks are not representative of the actual work to be performed.

4. Pricing mechanism.  We agree that the presented linear decomposition utility pricing objective is probably "just about right."  Ohad says his approach is entirely sound.  I say that the overall model leads to an opportunity, particularly because of prior points taken in conjunction, for an attacker to introduce relevant non-linearity by lying about results.
  My key concern: the fundamental assumption of the system, which ends up "joining together" the entire economic model, actually ends up working in reverse of what was intended, ultimately giving particular incentive to the "sell side" worker side participants to misrepresent their resources and process.

I think we now agree after clarifying that the contract does not contain any futuristic promise.

PS
To anyone who didn't notice, I've credited HMC in the OP.

Tau-Chain & Agoras
Hueristic
Legendary
*
Offline Offline

Activity: 3808
Merit: 4894


Doomed to see the future and unable to prevent it


View Profile
September 24, 2014, 10:06:07 PM
 #178

I have found your first customers. Cheesy

http://thehackernews.com/2014/09/the-pirate-bay-runs-on-21-raid-proof.html

“Bad men need nothing more to compass their ends, than that good men should look on and do nothing.”
CryptoPiero
Member
**
Offline Offline

Activity: 98
Merit: 10


View Profile
September 24, 2014, 10:34:05 PM
 #179


And I have found 100+ customers. Unfortunately they are botnet owners.
ohad
Hero Member
*****
Offline Offline

Activity: 897
Merit: 1000

http://idni.org


View Profile WWW
September 24, 2014, 10:35:37 PM
 #180


yeah heard of it, cool stuff Smiley

Tau-Chain & Agoras