Bitcoin Forum
November 14, 2024, 09:04:55 PM
News: Latest Bitcoin Core release: 28.0 [Torrent]
Poll
Question: Do you Accept Komodo ICO conversion vs Reject Komodo ICO conversion and fund new dev team?
Accept - 145 (68.7%)
Reject - 66 (31.3%)
Total Voters: 211

Pages: « 1 ... 340 [341] 342 ... 547 »
Author Topic: BTCD is no more  (Read 1328490 times)
Azeh (OP)
Sr. Member
****
Offline

Activity: 441
Merit: 500
October 08, 2014, 03:22:25 AM
 #6801

We are getting almost 100 BTCD per month from the staking of the SuperNET funds.
I propose that these funds be used to fund ~100 VPS
I simply need to have ~100 servers to be able to properly test things and the lack of this is causing delay.

These servers can provide HDD space for cloud storage, public routing, public access points, etc.

Funding solves only part of the problem. We also need someone who can manage these servers and keep them updated with new releases, etc.

I need your help to complete the SuperNET, which is needed for Teleport.

James

P.S. we can certainly start with fewer than 100 servers, but I can't do any meaningful testing with half a dozen servers

for 100 VPSes it would probably be in the neighbourhood of $700-1000 a month, for 100 low-end boxes with 1 CPU, 1 GB RAM, and maybe 10-15 GB disk?
yes, that should be enough, so we can start with ~50
I am assuming we can change the hardware specs as needed?

yeah I think most offer scalability on the fly.
what would it take to go ahead with this?

@Azeh, maybe a poll to get community approval for this, we can make it for 3 months and then revisit it after that

James



Poll is up.

In the meantime, while voting is taking place, is anyone willing to step up and take on this task?
paulthetafy
Hero Member
*****
Offline

Activity: 820
Merit: 1000
October 08, 2014, 04:04:32 AM
 #6802

Quote from: Azeh on October 08, 2014, 03:22:25 AM
Is it necessary to have the VPSes up all the time? If testing is only going to be done for certain periods (say a few hours every few days, or whenever features need testing), then it might be worth using AWS spot instances. You can set up the initial instance and sync it, then image and clone as many as you need for as long as you need. To test a new build you would only need to spool up an instance from the last image, compile the updated source, let it sync (only from the point it was last terminated), re-image, and spool up X number of clones. That feels like a more manageable (and potentially cheaper) approach than renting X VPSes for a month and having to update them all individually.
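For a rough sense of the trade-off being discussed, here is a back-of-the-envelope comparison in shell. All prices are illustrative assumptions, not quotes from any provider, and the 12 h/day testing window is an assumption too.

```shell
# Illustrative only: hypothetical prices, not real provider quotes.
VPS_MONTHLY=10        # assumed $/month for one low-end VPS
SPOT_HOURLY_CENTS=2   # assumed ~$0.02/hour for a comparable spot instance
COUNT=50              # starting with ~50 servers, per the thread
HOURS_PER_DAY=12      # assumed active-testing window per day

monthly_vps_usd=$((VPS_MONTHLY * COUNT))
spot_cents=$((SPOT_HOURLY_CENTS * HOURS_PER_DAY * 30 * COUNT))
spot_usd=$((spot_cents / 100))

echo "50 monthly VPSes: \$${monthly_vps_usd}/month"
echo "50 spot instances, 12h/day: ~\$${spot_usd}/month"
```

With these made-up numbers the hourly route works out cheaper ($360 vs $500 a month), but only if someone automates the start/stop and re-imaging, which is exactly the management burden the thread keeps returning to.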
jl777
Legendary
*
Offline

Activity: 1176
Merit: 1134
October 08, 2014, 04:26:33 AM
 #6803

Quote from: paulthetafy on October 08, 2014, 04:04:32 AM
probably 12 hours a day, sometimes 18 while I am actively developing

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
paulthetafy
Hero Member
*****
Offline

Activity: 820
Merit: 1000
October 08, 2014, 04:45:23 AM
 #6804

Quote from: jl777 on October 08, 2014, 04:26:33 AM
probably 12 hours a day, sometimes 18 while I am actively developing
I guess it depends on the cost of renting by the month vs by the hour, and the benefit of being able to clone for fast deployment. That said, I often set up my VPS images to wget an initialization script on reboot (via cron), so that I can configure them remotely however I want. So even with a by-the-month VPS, a similar method would allow quick deployment of new code.
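A minimal sketch of that reboot-time bootstrap. The crontab line, URL, paths, and script contents are all hypothetical stand-ins, not paulthetafy's actual setup.

```shell
# Hypothetical crontab entry (add with `crontab -e`): on every reboot,
# fetch a fresh init script and run it, so a cloned VPS configures itself.
#   @reboot wget -qO /tmp/btcd-init.sh http://example.com/btcd-init.sh && sh /tmp/btcd-init.sh
#
# An init script for this thread's use case might pull and rebuild the
# latest code (all commands below are assumed, for illustration):
cat > /tmp/btcd-init.sh <<'EOF'
#!/bin/sh
cd "$HOME/btcd" || exit 1   # assumes the repo was cloned into the image
git pull                    # pick up whatever changed since imaging
./m_unix                    # rebuild
./BitcoinDarkd &            # restart the daemon
EOF
chmod +x /tmp/btcd-init.sh
```

The point of the pattern is that the image never goes stale: the per-boot script is the only thing that has to be kept current, in one place.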
Karlog
Member
**
Offline

Activity: 66
Merit: 10
October 08, 2014, 06:51:14 AM
 #6805

1. I think that if we are going to spend $500-1000 a month on servers, we should be able to negotiate the price down a little.

2. I would like to be part of the team working with the servers, but for almost two weeks this month I will be in Italy driving in the karting world finals, so I will not be online during that time.

I am also no expert in running servers, but I would like to give a hand if someone else is in charge (at least in the beginning), as winter is coming and I don't have much else to do after work.


3. If we are going to use 20+ servers, we need the ability to either clone one server out to all the others, or have some service pushing out the changes from a master server, as doing it by hand would be too time-consuming (and too error-prone).


Please bear in mind that I am a bit dyslexic.
jl777
Legendary
*
Offline

Activity: 1176
Merit: 1134
October 08, 2014, 06:53:55 AM
 #6806

Quote from: Karlog on October 08, 2014, 06:51:14 AM
all the help we can get is appreciated
cssh can deal with dozens of servers at a time
it echoes the command from keyboard to all the servers
so 50 servers would just be 2 batches
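For reference, cssh (ClusterSSH) batches are usually driven from a clusters file. The group names and hostnames below are placeholders, not the project's real servers.

```shell
# Hypothetical /etc/clusters (or ~/.clusterssh/clusters) entries: one named
# group per batch of servers. Hostnames are placeholders.
#
#   batch1 root@vps01 root@vps02 root@vps03 root@vps04 root@vps05
#   batch2 root@vps06 root@vps07 root@vps08 root@vps09 root@vps10
#
# A single command then opens one synced terminal per host, and every
# keystroke is echoed to all of them, which is the batching described above:
#
#   cssh batch1
```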

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
Karlog
Member
**
Offline

Activity: 66
Merit: 10
October 08, 2014, 07:26:12 AM
 #6807

Quote from: jl777 on October 08, 2014, 06:53:55 AM

Okay, I just read a bit about cssh; it seems pretty neat.

So how long does the poll run before we get in touch with some of the server hosting companies?

jl777, just PM me when you have work ready :-)
snuffish
Sr. Member
****
Offline

Activity: 259
Merit: 250
October 08, 2014, 07:58:20 AM
 #6808


In your first thread post, remove the "?key=14d5fa432f64c3ff1168ca3abc24fa01&page=start" part. That's not a good idea; it's the private URL key for a specific account.

jl777
Legendary
*
Offline

Activity: 1176
Merit: 1134
October 08, 2014, 10:43:52 AM
 #6809

simple cases of savefile and restorefile are working in loopback mode.
also found a bunch of recursion problems
even the nested ciphers worked, so now it is M of N testing, then combined encrypted M of N,
then on the live network, so not sure I can get all this done today...
I debugged savefile and restorefile, but only in loopback mode, so maybe this will just not work on the live network.

anyway, I did small files and large files, encrypted and unencrypted, M of N (just 2 of 3) without encryption, and M of N with encryption.

so those are all the cases, and savefile automatically does a restorefile with a compare to make testing easier.

I also updated to TweetNaCl instead of libnacl.a: much smaller source code, and fully reviewed.
fixed a bunch of possible recursion bugs
a bunch of other things too, can't remember them all. I got set up to test locally, so I must have done over 100 builds; it is much faster when I don't have to upload to servers.

don't use files that are too big. there is no error handling for that, so limit them to ~100 KB

James
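The save-then-restore-with-compare described above is the classic round-trip test. Sketched generically in shell, with gzip standing in for the actual savefile/restorefile API (which these commands do not call):

```shell
# Generic round-trip check, the same shape as savefile's built-in compare:
# transform a file, invert the transform, and cmp against the original.
printf 'hello supernet' > /tmp/original
gzip -c /tmp/original > /tmp/stored      # stand-in for "savefile"
gunzip -c /tmp/stored > /tmp/restored    # stand-in for "restorefile"
cmp -s /tmp/original /tmp/restored && echo "round-trip OK"
```

The value of baking the compare into savefile itself is that every save during testing doubles as a regression check at no extra effort.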

BTCDDev
Sr. Member
****
Offline

Activity: 255
Merit: 251
October 08, 2014, 12:14:19 PM
 #6810

Quote from: jl777 on October 08, 2014, 10:43:52 AM


Nice idea using TweetNaCl; it's much more compact than libnacl or libsodium on Windows, and should reduce compilation frustration.

I'm excited about decentralized storage. This opens up a lot of new possibilities for future applications built on top of BTCD.  Smiley

Matthew

BitcoinDark: RPHWc5CwP9YMMbvXQ4oXz5rQHb3pKkhaxc
Top Donations: juicybirds 420BTCD ensorcell 84BTCD Stuntruffle: 40BTCD
Top April Donations: juicybirds 420BTCD; ensorcell: 42BTCD
jl777
Legendary
*
Offline

Activity: 1176
Merit: 1134
October 08, 2014, 12:20:07 PM
 #6811

Quote from: BTCDDev on October 08, 2014, 12:14:19 PM
still need randombytes.o, but we can always extract that and compile it for different OS

BTCDDev
Sr. Member
****
Offline

Activity: 255
Merit: 251
October 08, 2014, 12:37:02 PM
 #6812

Quote from: jl777 on October 08, 2014, 12:20:07 PM
still need randombytes.o, but we can always extract that and compile it for different OS

Yes, randombytes has its own separate ./do file we can use to build it.

BTCDDev
Sr. Member
****
Offline

Activity: 255
Merit: 251
October 08, 2014, 01:48:58 PM
 #6813

Progress update!

Sending/receiving messages seems to be working well now. Teleport debugging starting very soon, probably this week.

We now have two new functions in SuperNET API: "savefile" and "restorefile"

These functions allow for the encryption of files and their secure storage. (Currently in RAM, soon in cloud)

In this line, I saved and encrypted the file m_unix:

Code:
matthew@matthew-Satellite-P845:~/Desktop/btcd/src$ ./BitcoinDarkd SuperNET '{"requestType":"savefile","filename":"../m_unix","L":0,"M":2,"N":2,"usbdir":"/tmp","password":"1234"}'

Then it gave me some information, such as 'sharenrs' and an array of 'txid' values. These MUST be remembered for restoration of the file.

Then:

Code:
./BitcoinDarkd SuperNET '{"requestType":"restorefile","filename":"m_unix","L":0,"M":2,"N":2,"usbdir":"/tmp","txids":["16996257282448948276", "15989003265508946305"],"destfile":"newfile", "password": "1234"}'

restored m_unix to a new file, newfile.

Think about the future applications that can be developed around this.  Smiley These files are also quite secure: an attacker would need to know your password, sharenrs, txids (in order), M, and N to restore your file.

It should support up to M-of-254 encryption.
Here is my output from saving m_unix with 200 of 254 encryption

http://pastebin.com/QfYrQ1Tx

However, restoring at 200 of 254 was too much for it. We are now calling on testers to find the current working limits of this value.

Matthew

P.S. if you aren't volunteering yet and would like to, head over to https://forum.thesupernet.org/index.php?topic=66.0 and get started testing!
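The M-of-N idea above can be illustrated with the simplest possible case: a 2-of-2 XOR split. This is only a stand-in for intuition, not the scheme savefile actually uses (which produces sharenrs and per-share txids and supports M of up to 254).

```shell
# 2-of-2 XOR secret sharing, for illustration only. Both shares are
# required; either one alone reveals nothing about the secret.
secret=0xdeadbeef            # the "file" reduced to one word, for the demo
pad=0x12345678               # in real use this pad must be random
share1=$pad
share2=$(printf '0x%08x' $(( $secret ^ $pad )))
restored=$(printf '0x%08x' $(( $share1 ^ $share2 )))
echo "$restored"             # -> 0xdeadbeef
```

A real M-of-N scheme (where any M of N shares suffice) needs polynomial-style shares rather than a single pad, which is why restoring requires the txids in order plus M and N.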

Cassius
Legendary
*
Offline

Activity: 1764
Merit: 1031
October 08, 2014, 02:09:50 PM
 #6814

Making slow progress. It compiled OK but is asking for BitcoinDark.conf.
I created the file with pico BitcoinDark.conf and pasted in the text from the OP, substituting rpcuser= and rpcpassword= with the ones it suggested. I also changed permissions to 700 as recommended. Next time around, I got the same message.

Put your BitcoinDark.conf in .BitcoinDark in your home directory.


That directory doesn't seem to exist.
Edit: found hidden file with ls -a
Steep learning curve for me today.
when your build environment is working, you should go:
cd
git clone https://github.com/jl777/btcd
cd btcd/libjl777
./onetime
<go into nacl directory, build, lib, until you find the randombytes.o and libnacl.a>
cp randombytes.o libnacl.a ~/btcd/libjl777/libs
cd ~/btcd
./m_unix
<make a SuperNET.conf>
./BitcoinDarkd

Ok up to ./onetime

Can't find nacl directory
find / randombytes.o -> No such file or directory
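A side note on the failed command: `find / randombytes.o` treats randombytes.o as a second directory to search rather than a filename pattern; the `-name` test is what's missing. A self-contained demo against a scratch tree (the paths are stand-ins for the real btcd checkout):

```shell
# Build a throwaway tree that mimics where nacl leaves its objects, then
# locate randombytes.o with find's -name test. Paths are stand-ins.
mkdir -p /tmp/btcd-demo/nacl/build/lib
: > /tmp/btcd-demo/nacl/build/lib/randombytes.o
found=$(find /tmp/btcd-demo -name randombytes.o)
echo "$found"    # -> /tmp/btcd-demo/nacl/build/lib/randombytes.o
```

Against the real checkout the equivalent would be `find ~/btcd -name randombytes.o`, then copying the reported path (with libnacl.a if needed) into ~/btcd/libjl777/libs.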

BTCDDev
Sr. Member
****
Offline

Activity: 255
Merit: 251
October 08, 2014, 02:25:47 PM
 #6815

Quote from: Cassius on October 08, 2014, 02:09:50 PM

The nacl folder should be in btcd/libjl777. Then navigate to build/lib/ and a few more folders to randombytes.o

The latest btcd repo no longer needs libnacl. However, you still need randombytes.o.

Matthew

Cassius
Legendary
*
Offline

Activity: 1764
Merit: 1031
October 08, 2014, 02:36:38 PM
 #6816


Quote from: BTCDDev on October 08, 2014, 02:25:47 PM
Thanks - a hidden file and a couple of extra directories I wasn't expecting. Found it now.
I'm very new to Linux and this is a baptism by fire.
cloudboy
Hero Member
*****
Offline

Activity: 690
Merit: 501
October 08, 2014, 02:53:05 PM
 #6817


Quote from: Cassius on October 08, 2014, 02:36:38 PM

Linux takes a while to get used to. Once you really understand how the terminal works and everything, it's a powerful tool.
Cassius
Legendary
*
Offline

Activity: 1764
Merit: 1031
October 08, 2014, 02:54:22 PM
 #6818


Quote from: cloudboy on October 08, 2014, 02:53:05 PM
Linux takes a while to get used to. Once you really understand how the terminal works and everything, it's a powerful tool.

Yeah, I get that. It's pretty cool. But I have little idea what the commands I'm typing in actually mean Smiley
Cassius
Legendary
*
Offline

Activity: 1764
Merit: 1031
October 08, 2014, 03:22:13 PM
Last edit: October 08, 2014, 03:35:36 PM by Cassius
 #6819

So I get a load of messages when compiling (long process), ending in:
cp: cannot stat 'BitcoinDarkd': no such file or directory
./m_unix: line 20: ./BitcoinDarkd: No such file or directory

I'm also unclear what needs to go in the SuperNET.conf file.

Edit:
I've got two directories now, btcd and bitcoindark, due to the two sets of instructions.
There's also this file: https://docs.google.com/document/d/1CGUgDAhimVhz7aHAnITeZGr2S4RiZPzF2LYZIWGbJ1E/edit
Looks like this is more comprehensive. It also says I do need to install an NXT wallet too.
I'm currently not using a VPS, just an old computer with Ubuntu on it. Is that a problem?
My purpose here is just to test and understand the SuperNET API calls to create some documentation, that's all.
junsha
Newbie
*
Offline

Activity: 34
Merit: 0
October 08, 2014, 03:38:14 PM
Last edit: October 08, 2014, 04:32:49 PM by junsha
 #6820

Hi, I still have a sync problem.
I downloaded the config file on page 1 and followed the instructions,
but I still have 0 connections.

Is it a known problem, and will there be a new wallet release?

Edit: after quite a long time, it started syncing Smiley  -> OK