Azeh (OP)
October 08, 2014, 03:22:25 AM
We are getting almost 100 BTCD per month from staking the SuperNET funds. I propose that these funds be used to pay for ~100 VPS. I simply need ~100 servers to be able to properly test things, and the lack of them is causing delay.
These servers can provide HDD space for cloud storage, public routing, public access points, etc.
Funding is only part of the problem to be solved. We also need someone who can manage these servers and keep them updated with new releases, etc.
I need your help to complete the SuperNET, which is needed for Teleport.
James
P.S. We can certainly start with fewer than 100 servers, but I can't do any meaningful testing with half a dozen.
> For 100 VPSs it would probably be in the neighbourhood of $700-1,000 a month, for 100 low-end boxes with 1 CPU, 1 GB RAM and maybe 10-15 GB of disk?
> Yes, that should be enough, so we can start with ~50.
> I am assuming we can change the hardware specs as needed?
> Yeah, I think most providers offer scalability on the fly.
> What would it take to go ahead with this?
> @Azeh, maybe a poll to get community approval for this; we can make it for 3 months and then revisit it after that. James
Poll is up. In the meantime, while voting is taking place, is anyone willing to step up and take on this task?
paulthetafy
October 08, 2014, 04:04:32 AM
Is it necessary to have the VPSs up all the time? If testing is only going to be done for certain periods, say a few hours every few days or whenever features need testing, then it might be worth using AWS spot instances. You can set up the initial instance and sync it, then image and clone as many as you need for as long as you need. To test a new build you would only need to spin up an instance from the last image, compile the updated source, let it sync (only from the point it was last terminated), re-image, and spin up X number of clones. This feels like a more manageable (and potentially cheaper) approach than renting X number of VPSs for a month and having to update them all individually.
jl777
Legendary
Offline
Activity: 1176
Merit: 1134
October 08, 2014, 04:26:33 AM
for 100 VPS's it would probably be in the neighbourhood of $700-1000 a month, for 100 low end 1 CPU 1GB ram and maybe 10-15gb space? yes, that should be enough, so we can start with ~50 I am assuming we can change the hardware specs as needed? yeah I think most offer scalability on the fly. what would it take to go ahead with this? @Azeh, maybe a poll to get community approval for this, we can make it for 3 months and then revisit it after that James Poll is up. In the meantime, while voting is taking place, is anyone willing to step up and take on this task? Is it necessary to have the VPS's up all the time? If testing is only going to be done for certain periods, say for a few hours every few days or whenever features need testing, then it might be worth using AWS spot instances. You can setup the initial instance and sync it, and then image & clone as many as you need for as long as you need. To test a new build you would only need to spool up an instance from the last image, compile the updated source, let it sync (only from the point it was last terminated), re-image, and spool up X number of clones. It feels like this will be a more manageable (and potentially cheaper) approach than renting X number of VPS's for a month and having to update them all individually. probably 12 hours a day, sometimes 18 while I am actively developing
paulthetafy
October 08, 2014, 04:45:23 AM
I guess it depends on the cost of renting by the month vs by the hour, and the benefit of being able to clone for fast deployment. That said, I often set up my VPS images to wget an initialization script on reboot (via cron) so that I can remotely configure them however I want. So even with a by-the-month VPS, a similar method would allow quick deployment of new code.
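The reboot-time bootstrap described here can be sketched as a small config fragment; the URL and paths are placeholders for wherever you host the script yourself, not a real SuperNET endpoint:

```shell
#!/bin/sh
# /usr/local/bin/fetch-init.sh -- minimal fetch-and-run bootstrap (sketch).
# Install it on the VPS image with a crontab entry such as:
#   @reboot /usr/local/bin/fetch-init.sh
# so every clone pulls the current configuration script when it boots.
set -e
URL="http://example.com/supernet-init.sh"   # placeholder: your own script host
wget -qO /tmp/init.sh "$URL"                # fetch the latest init script
chmod +x /tmp/init.sh
exec /tmp/init.sh                           # hand control to the fetched script
```

Baking only this tiny fetcher into the image means the image never goes stale: changing the centrally hosted script reconfigures every clone on its next reboot.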
Karlog
Member
Offline
Activity: 66
Merit: 10
October 08, 2014, 06:51:14 AM
1. I think that if we are going to spend $500-1,000 a month on servers, we should be able to negotiate the price down a little.
2. I would like to be part of the team working with the servers, but I have almost 2 weeks this month where I will be in Italy driving the world finals in go-kart, so I will not be online those 2 weeks.
Also, I am no expert in running servers, but I would like to give a hand if someone else is in charge (at least in the beginning), as winter is coming and I don't have much else to do after work.
3. If we are to use 20+ servers, we would need the ability to either clone 1 server out to all the others, or have some service pushing out the changes from a master server, as otherwise it would be too time-consuming (and too error-prone).
Please bear in mind that I am a bit dyslexic.
jl777
Legendary
Offline
Activity: 1176
Merit: 1134
October 08, 2014, 06:53:55 AM
All the help we can get is appreciated. cssh can deal with dozens of servers at a time: it echoes commands from your keyboard to all the servers, so 50 servers would just be 2 batches.
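cssh (ClusterSSH) reads host groups from a clusters file; a sketch of that setup, where the hostnames and tags are placeholders (check your distribution's docs for whether it reads /etc/clusters or a per-user clusters file):

```shell
# /etc/clusters -- ClusterSSH host groups; each line is: <tag> <host> <host> ...
batch1 vps01 vps02 vps03 vps04 vps05
batch2 vps06 vps07 vps08 vps09 vps10

# Open one terminal per host and mirror your keystrokes to all of them:
#   cssh batch1
# Hosts can also be given directly, without a clusters file:
#   cssh user@vps01 user@vps02
```

With 50 servers split into two tags like the post suggests, each update is typed once per batch instead of once per machine.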
Karlog
Member
Offline
Activity: 66
Merit: 10
October 08, 2014, 07:26:12 AM
Okay, I just read a bit about cssh; it seems pretty neat. So how long does the poll run before we get in touch with some of the server hosting companies? jl777, just PM me when you have work ready :-)
snuffish
October 08, 2014, 07:58:20 AM
In your first thread post, remove the "?key=14d5fa432f64c3ff1168ca3abc24fa01&page=start". That's not a good idea: it's the private URL key for a specific account.
jl777
Legendary
Offline
Activity: 1176
Merit: 1134
October 08, 2014, 10:43:52 AM
Simple cases of savefile and restorefile are working in loopback mode. Also found a bunch of recursion problems; even the nested ciphers worked. So now it is M of N testing, then combined encrypted M of N, then on the live network, so not sure I can get all this done today...
I debugged savefile and restorefile, but only in loopback mode, so maybe this will just not work with the live network. Anyway, I did small files and large files, encrypted and not encrypted, M of N (just 2 of 3) without encryption and M of N with encryption. So those are all the cases, and savefile automatically does a restorefile with a compare to make testing easier. I also updated to TweetNaCl instead of libnacl.a: much smaller source code and fully reviewed. Fixed a bunch of possible recursion bugs and a bunch of other things too; can't remember them all. I got set up to test locally, and so I must have done over 100 builds, as it is much faster when I don't have to upload to servers. Don't use files that are too big; there is no error handling for that, so limit to ~100kb. James
BTCDDev
October 08, 2014, 12:14:19 PM
Nice idea using TweetNaCl: much more compact than libnacl or libsodium on Windows. Should reduce compilation frustration. I'm excited about decentralized storage. This opens up a large number of new possibilities for future applications to be built on top of BTCD. Matthew
BitcoinDark: RPHWc5CwP9YMMbvXQ4oXz5rQHb3pKkhaxc Top Donations: juicybirds 420BTCD ensorcell 84BTCD Stuntruffle: 40BTCD Top April Donations: juicybirds 420BTCD; ensorcell: 42BTCD
jl777
Legendary
Offline
Activity: 1176
Merit: 1134
October 08, 2014, 12:20:07 PM
We still need randombytes.o, but we can always extract that and compile it for different OSes.
BTCDDev
October 08, 2014, 12:37:02 PM
Yes, randombytes has its own separate ./do file we can use to build it.
BTCDDev
October 08, 2014, 01:48:58 PM
Progress update! Sending/receiving messages seems to be working well now. Teleport debugging is starting very soon, probably this week. We now have two new functions in the SuperNET API: "savefile" and "restorefile". These functions allow for the encryption of files and their secure storage (currently in RAM, soon in the cloud). With this line, I saved and encrypted the file m_unix:

matthew@matthew-Satellite-P845:~/Desktop/btcd/src$ ./BitcoinDarkd SuperNET '{"requestType":"savefile","filename":"../m_unix","L":0,"M":2,"N":2,"usbdir":"/tmp","password":"1234"}'

Then it gave me some information, such as 'sharenrs' and an array of 'txid' values. These MUST be remembered for restoration of the file. Then:

./BitcoinDarkd SuperNET '{"requestType":"restorefile","filename":"m_unix","L":0,"M":2,"N":2,"usbdir":"/tmp","txids":["16996257282448948276", "15989003265508946305"],"destfile":"newfile", "password": "1234"}'

restored m_unix to a new file, newfile. Think about the future applications that can be developed around this. These files are also quite secure: an attacker would need to know your password, sharenrs, txids (in order), M, and N to restore your file. It should support up to M-of-254 encryption. Here is my output from saving m_unix with 200-of-254 encryption: http://pastebin.com/QfYrQ1Tx However, restoring was too much for it. We are calling on testers now to find the current working limits of this value. Matthew
P.S. If you aren't volunteering yet and would like to, head over to https://forum.thesupernet.org/index.php?topic=66.0 and get started testing!
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
October 08, 2014, 02:09:50 PM
Making slow progress. It compiled OK, but it is asking for BitcoinDark.conf. I created the file using pico BitcoinDark.conf and pasted in the text from the OP, substituting the rpcuser= and rpcpassword= values with the ones it suggested. Also changed permissions to 700 as recommended. Next time around, I got the same message.
Put your BitcoinDark.conf in .BitcoinDark in your home directory. That directory doesn't seem to exist. Edit: found the hidden directory with ls -a. Steep learning curve for me today.
When your build environment is working, you should go:
cd
git clone https://github.com/jl777/btcd
cd btcd/libjl777
./onetime
<go into nacl directory, build, lib, until you find the randombytes.o and libnacl.a>
cp randombytes.o libnacl.a ~/btcd/libjl777/libs
cd ~/btcd
./m_unix
<make a SuperNET.conf>
./BitcoinDarkd
OK up to ./onetime. Can't find the nacl directory. find / randombytes.o -> No such file or directory
BTCDDev
October 08, 2014, 02:25:47 PM
The nacl folder should be in btcd/libjl777. Then navigate through build/lib/ and a few more folders down to randombytes.o. The latest btcd repo no longer needs libnacl; however, you still need randombytes.o. Matthew
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
October 08, 2014, 02:36:38 PM
Thanks; a hidden directory and a couple of extra directories I wasn't expecting. Found it now. I'm very new to Linux, and this is a baptism by fire.
cloudboy
October 08, 2014, 02:53:05 PM
Linux takes a while to get used to. Once you really understand how the terminal works, it's a powerful tool.
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
October 08, 2014, 02:54:22 PM
Yeah, I get that. It's pretty cool. But I have little idea what the commands I'm typing in actually mean.
Cassius
Legendary
Offline
Activity: 1764
Merit: 1031
October 08, 2014, 03:22:13 PM Last edit: October 08, 2014, 03:35:36 PM by Cassius
So I get a load of messages when compiling (long process), ending in:
cp: cannot stat 'BitcoinDarkd': No such file or directory
./m_unix: line 20: ./BitcoinDarkd: No such file or directory
I'm also unclear what needs to go in the SuperNET.conf file.
Edit: I've got two directories now, btcd and bitcoindark, due to the two sets of instructions. There's also this file: https://docs.google.com/document/d/1CGUgDAhimVhz7aHAnITeZGr2S4RiZPzF2LYZIWGbJ1E/edit
It looks like this is more comprehensive. Also that I do need to install an NXT wallet too. I'm currently not using a VPS, just an old computer with Ubuntu on it. Is that a problem? My purpose here is just to test and understand the SuperNET API calls to create some documentation, that's all.
junsha
Newbie
Offline
Activity: 34
Merit: 0
October 08, 2014, 03:38:14 PM Last edit: October 08, 2014, 04:32:49 PM by junsha
Hi, I still have a sync problem. I downloaded the config file on page 1 and followed the instructions, but I still have 0 connections. Is it a known problem, and will there be a new wallet release? Edit: after quite a long time, it started syncing -> OK