Bitcoin Forum
Poll
Question: Do you Accept Komodo ICO conversion vs Reject Komodo ICO conversion and fund new dev team?
Accept - 145 (68.7%)
Reject - 66 (31.3%)
Total Voters: 211

Page 338 of 547
Author Topic: BTCD is no more  (Read 1328439 times)
dcct (Sr. Member)
October 05, 2014, 11:08:14 PM  #6741

Maybe you guys want another blockchain-explorer:

https://bchain.info/BTCD/
jl777 (Legendary)
October 05, 2014, 11:53:23 PM  #6742

Quote from: dcct
Maybe you guys want another blockchain-explorer:

https://bchain.info/BTCD/

very cool!
looks like blockchain.info

http://www.digitalcatallaxy.com/report2015.html
100+ page annual report for SuperNET
dcct (Sr. Member)
October 05, 2014, 11:55:28 PM  #6743

Quote from: jl777
very cool!
looks like blockchain.info

Let's put it this way:

Both look like this Bootstrap theme:

http://getbootstrap.com/examples/theme/

 Wink
jl777 (Legendary)
October 06, 2014, 12:05:11 AM  #6744

Quote from: dcct
Both look like this Bootstrap theme:

http://getbootstrap.com/examples/theme/
nice
do you do webdev work?

dcct (Sr. Member)
October 06, 2014, 12:06:44 AM  #6745

Quote from: jl777
nice
do you do webdev work?


Sure.
jl777 (Legendary)
October 06, 2014, 12:20:42 AM  #6746

lots of projects for SuperNET at forum.thesupernet.org

jl777 (Legendary)
October 06, 2014, 03:08:40 AM  #6747

[18:01] <jl777> just pushed a new version
[18:01] <jl777> ./BitcoinDarkd SuperNET '{"requestType":"ping","destip":"209.126.70.159"}'
[18:02] <jl777> you can now do a DHT ping, which should update internal routing table according to kademlia, but most importantly open a path to sendmessage to the other node
[18:02] <jl777> only ping is tested now, but I finally got it stable after a couple dozen builds so good time to push a release
[18:03] <jl777> please update with ./m_unix or you can just build the libjl777.so and update /usr/bin with make_shared in btcd/libjl777

James
[22:05] <jl777> I just pushed a version with findnode tested
[22:05] <jl777> ./BitcoinDarkd SuperNET '{"requestType":"findnode","key":"14139201960657020621"}'
[22:05] <jl777> the "key" is the NXT address of the destination acct, must be the privacyserver, eg public one
[22:06] <jl777> the private one is still inaccessible, I want to get Teleport working with the public privacy servers first
[22:07] <jl777> doing a findnode will automatically update the routing tables. theoretically if any node that you are connected to, knows of the node you are looking for, this should start a sequence that gets you in contact with that node
[22:07] <jl777> of course, this is what needs to be tested!
[22:08] <jl777> now all that is left to debug is store and findvalue.

jl777 (Legendary)
October 06, 2014, 09:21:29 AM  #6748

[04:13] <jl777> I just pushed another release
[04:14] <jl777> to support data store, I had to increase the capacity as 256 bytes per UDP packet is too small
[04:14] <jl777> so I had to add data after the JSON string to maintain compatibility
[04:15] <jl777> this affects the onion packet and it was great trouble to get it always 1400 bytes
[04:15] <jl777> the reason is that there cannot be any size information in plaintext
[04:15] <jl777> the only way to know if it was encrypted for you is if you can decrypt it
[04:16] <jl777> however to even have a chance of that, you need to know the size it was encrypted at
[04:16] <jl777> so this has to be in the plaintext or implicit by the size and if the real size was used, it would defeat the purpose of doing this
[04:17] <jl777> so it is not as simple as just padding out the packet as there are unknown random number of onion layers
[04:18] <jl777> plus every hop has to reencrypt an extra layer to pad it out to max size and of course this creates the need for double decryption at each node
[04:18] <jl777> as it could very well have to forward to itself
[04:18] <jl777> anyway, I hope I got all paths working properly. with the latest version any packet that isnt the same size is a bug
[04:19] <jl777> Oh, I also put a jsoncodec in the code path
[04:19] <jl777> this compresses and decompresses to make sure it isnt getting any errors
[04:20] <jl777> every 10 DHT commands it should print out stats, for small commands the compression isnt so good, and since we are padding anyway, it seems silly
[04:20] <jl777> however I want to be able to send 1024 store commands, and with the tokenization + onion layers, this will not easily fit, so I am getting ready for this
[04:21] <jl777> please upgrade to the latest, you can tell since it uses port 7777

Breasal (Hero Member)
October 06, 2014, 09:54:43 AM  #6749

Quote from: jl777
[04:13] <jl777> I just pushed another release

Sounds like great progress...
jl777 (Legendary)
October 06, 2014, 10:19:20 AM  #6750

[05:16] <jl777> I pushed another release. this one with the store command debugged
[05:16] <jl777> ./BitcoinDarkd SuperNET '{"requestType":"store","name":"jl777","data":"deadbeef"}'
[05:18] <jl777> it was a bit tricky to deal with the out-of-band data, but it now works for submitting the command with a "name" field and internally with the "key" field, which is the lower 64 bits of sha256 of the string
[05:18] <jl777> and it at least parses it right and gets it to another node.
[05:19] <jl777> I am out of things to test as the findvalue really requires more than a few nodes to verify, will be going offline for a bit, hopefully when I am back online there will be many servers running

jl777 (Legendary)
October 06, 2014, 10:23:25 AM  #6751

Quote from: Breasal
Sounds like great progress...
I am only debugging the easy cases, but once the bigger network is in place it will hopefully just work.
Since I can't do much without a bigger network, I figured I might as well get distributed storage working.
With the compression being tested against live data, I think 1kb stores can be handled, with some rare cases that have to be split up.

So the out-of-band data was quite important for this. A day to code up the file system and a day to test should give some simple setup with encrypted files in the cloud.

Of course, if a network can be built up so I can test the DHT with more than a few servers, I can do that and get Teleport validated. I say validated, as it worked over a month ago in loopback, and now that the network is getting stable I don't expect many problems. I remember I had to do a bunch of accounting APIs, but that's just tedious work, nothing difficult like doing a DHT system from scratch over the weekend.

James

jl777 (Legendary)
October 06, 2014, 10:30:26 AM  #6752

tested findvalue, worked the first time:

./BitcoinDarkd SuperNET '{"requestType":"findvalue","name":"jl777"}'
STORE.({"result":"kademlia_store key.(4322723274388863363) data.(deadbeef) len.4 -> txid.0"})

./BitcoinDarkd SuperNET '{"requestType":"findvalue","name":"jl777"}'
{"result":"deadbeef"}

The first time, the search terminates when another node sends a store API, which is exactly what happened. The second search just returns the value it already has locally.

So, as I predicted, I was able to get the DHT working, and these are all the current calls: ping (pong), findnode (havenode), store, findvalue (havenodeB). To do it right required adding out-of-band binary data beyond the JSON inside the onion packet, and that was a good time to get all the packets to the same size and also get the data compression into the loop.

Now I need a big network and bug reports. These are things I cannot do alone!

James

figroll (Member)
October 06, 2014, 11:50:41 AM  #6753

Quote from: dcct
Maybe you guys want another blockchain-explorer:

https://bchain.info/BTCD/

website updated at http://bitcoindark.pw/block-explorer/

nice work
PhilipMorris (Sr. Member)
October 06, 2014, 12:35:05 PM  #6754

Quote from: jl777
Now I need a big network and bug reports. These are things I cannot do alone!


Hi James,

If there is anything I can do, please let me know; I'd love to help out. However, I have close to zero coding experience, though I am currently enrolled in an IT study, so I do have some basic knowledge. I've also sent you a PM with some other questions. Smiley
torshammer (Full Member)
October 06, 2014, 07:19:17 PM  #6755

Trying to get the wallet to sync on my road machine ... no luck, even though it's my instructions in the OP. Arrggghh.

Can anyone comment on why the config file download is not the same as the conf details posted in the OP? Or whether there is some newer version, or what the problem might be now ...

Thx.
jl777 (Legendary)
October 06, 2014, 07:55:18 PM  #6756

Quote from: PhilipMorris
If there is anything I can do, please let me know; I'd love to help out.
Soon we should have end-user testable builds, but I am not the one doing that, so I'm not sure exactly when it will be.

Cassius (Legendary)
October 07, 2014, 06:58:03 AM  #6757

SuperNET forum is down. Anyone else having problems?
Bitinvestor (Sr. Member)
October 07, 2014, 07:13:37 AM  #6758

Quote from: Cassius
SuperNET forum is down. Anyone else having problems?

Yep, down for me too.

Those who cause problems for others also cause problems for themselves.
jl777 (Legendary)
October 07, 2014, 09:10:47 AM  #6759
Last edit: October 07, 2014, 09:39:36 AM by jl777

[ANN - MofNfs: store files in the SuperNET cloud using fully encrypted M of N shared secret fragments]

Since there was no large network for me to test with today, I decided to make two new API calls that allow for cloud storage of files. They are massively encrypted, and M of N is also supported to deal with hash collisions, sybil attacks, offline nodes, etc. With the proper M and N settings, I think this will be quite resilient file storage, appropriate for the files you just can't lose. The comms with the cloud are via the DHT API from this weekend; the L parameter is the max number of onion layers to use, and all the packets are the same size, so there is no leakage based on packet size.

Now I am not sure what all the other decentralized storage projects are doing and I am sure what I did today is just a small portion of a full system. Still, after I debug it tomorrow, it will be an easy way to safely put things in the cloud.

char *savefile[] = {  "filename", "L", "M", "N", "usbdir", "password", 0 };
char *restorefile[] = { "filename", "L", "M", "N", "usbdir", "password", "destfile", "sharenrs", "txids", 0 };

./BitcoinDarkd SuperNET '{"requestType":"savefile","filename":"<file to save>","L":0,"M":1,"N":1,"usbdir":"<dir for backups>","password":"<can be 4char PIN>"}'

The savefile will print (and save in usbdir) the required sharenrs and txids JSON fields to use for the restorefile.
The "destfile" field is where the file will be reconstructed.

If the "usbdir" parameter is set, then local backups are made (highly recommended!) and it is used to check the data coming back from the cloud. After you verify that the cloud has a proper copy, then you can partition the various parts from the usbdir directory to various places to have two full backups, one under your local control and one in the cloud.

The max value for N is 254, and M has to be less than or equal to N. The M of N parameters are independent of the "password" field. If you are using M of N, then unless the attacker gets hold of M pieces, they won't be able to reconstruct the file. Without the txid list, the attacker won't know how to reconstruct the file.

But why take any chances? So I made the password field use an iterative method to create what I think is a pretty practical encryption scheme, based on the name of the file, your pubNXT acct passphrase, and the password itself. The length of the password determines the number of ciphers that are applied:

        namehash = calc_txid(name,strlen(name));
        len = strlen(password);
        passwordhash = (namehash ^ calc_txid(keygen,strlen(keygen)) ^ calc_txid(password,len));
        for (i=0; i<len; i++)  // one cipher layer per password character
        {
            expand_nxt64bits(key,passwordhash);
            cipherids[i] = (passwordhash % NUM_CIPHERS);  // pick this layer's cipher
            privkeys[i] = clonestr(key);                  // per-layer key string
            if ( i < len-1 )  // chain the hash so each layer depends on the previous key
                passwordhash ^= (namehash ^ calc_txid(key,strlen(key)));
        }
  
Since the keygen is the pubNXT password, which in turn is a dumpprivkey for a BTCD address, this assures high entropy, and the filename being encrypted is added to the passwordhash so that different files will have different encryption keys. Using the password to modify the initial password hash and to determine the number of ciphers and their sequence creates a lot of impact from even a short password, like a PIN.

When M of N is combined with password, the attacker would need to get a hold of the name of the file, M fragments, the list of txids, the randomly generated sharenrs array and the password you used. Unless your computer is totally compromised and you divulge your short password, this seems like a pretty good level of security.

Now with the DHT there is the chance of collisions, sybil attacks, inaccessible nodes, etc. I think using M of N sidesteps all of these issues. Also, the txid (calculated like NXT does) is based on the contents being stored, so it would take a lot of computation to even get control of the nodes needed to block access to any specific content, and it is near impossible to spoof anything. Maybe someone can come up with a sybil attack that can be done? However, without knowing the hash values of all the fragments, where will the sybils set up their attack? And will they be able to invalidate M copies that they don't know the txid for?

The following are the ciphers:
    "aes","blowfish","xtea","rc5","rc6","saferp","twofish","safer_k64","safer_sk64","safer_k128",
    "safer_sk128","rc2","des3","cast5","noekeon","skipjack","khazad","anubis","rijndael"

Having a place to store things reliably will help with managing telepods, especially since with the privateblockchains, any loss of data on your computer would not be good. So I think I will default to cloud backup for the telepods. Unless your computer is compromised and you divulge your passwords, it won't do anybody any good to get some telepod fragments. In any case, if they have compromised your computer, they would have access to the same data anyway.

James

apenzl (Sr. Member)
October 07, 2014, 09:25:33 AM  #6760

More news and summaries. Enjoy!  Smiley

