Lethn
Legendary
Offline
Activity: 1540
Merit: 1000
|
|
November 09, 2014, 12:41:52 AM |
|
What we need are things like mesh networks and some nice speed to go with it so it's practical to use, the technology seems pretty far off right now though but I would love to play games and use a totally decentralised internet without the need for an ISP.
|
|
|
|
davidpbrown
|
|
November 09, 2014, 12:43:08 AM |
|
Would hidden service hosts not be relatively obvious, if only from the amount of data they upload? Users of the dark and normal web through Tor would be downloading more than they ever upload. Does Tor use distributed storage? I2P, I expect, is even more obvious: just disrupt the connections and watch a site go offline.
|
฿://12vxXHdmurFP3tpPk7bt6YrM3XPiftA82s
|
|
|
deluxeCITY
|
|
November 09, 2014, 07:14:00 AM |
|
Would hidden service hosts not be relatively obvious, if only from the amount of data they upload? Users of the dark and normal web through Tor would be downloading more than they ever upload. Does Tor use distributed storage? I2P, I expect, is even more obvious: just disrupt the connections and watch a site go offline. The entry guards and the "middle nodes" would also be uploading a large amount of Tor-related traffic. It would probably be advantageous for a hidden service to also act as a middle node in order to hide its identity.
|
|
|
|
repentance
|
|
November 09, 2014, 07:16:48 AM |
|
Well, both government attacks on the two SRs came down to user/admin error (although SR2 made much worse errors than SR1).
Ahhh.. Well, there you go. SR3, don't host it in the United States. Morons. hehehe. As for physical security, there are lots of methods, and although expensive, you can host it yourself. Have any of the SR1 and SR2 operators seen "See More Buds"? That talks a bit about the physical security of grow houses. I've never used SR1 or SR2 or any of the others that died, so I don't know how things would have been affected if I had done things differently, taken care of things on my end, or set up shop in some remote mountain compound with walls like UBL's (but UBL did not have internet, bummer). From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host. If you're using the same host as another illicit service, there's always the chance that you'll get caught in a dragnet intended for someone else. Operator stupidity is also rampant. The operator of C9 actually posted on reddit that one of her servers had been seized and that she was looking for a new host - not smart.
|
All I can say is that this is Bitcoin. I don't believe it until I see six confirmations.
|
|
|
ScreamnShout
|
|
November 09, 2014, 07:46:18 AM |
|
Well, both government attacks on the two SRs came down to user/admin error (although SR2 made much worse errors than SR1).
Ahhh.. Well, there you go. SR3, don't host it in the United States. Morons. hehehe. As for physical security, there are lots of methods, and although expensive, you can host it yourself. Have any of the SR1 and SR2 operators seen "See More Buds"? That talks a bit about the physical security of grow houses. I've never used SR1 or SR2 or any of the others that died, so I don't know how things would have been affected if I had done things differently, taken care of things on my end, or set up shop in some remote mountain compound with walls like UBL's (but UBL did not have internet, bummer). From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host. If you're using the same host as another illicit service, there's always the chance that you'll get caught in a dragnet intended for someone else. Operator stupidity is also rampant. The operator of C9 actually posted on reddit that one of her servers had been seized and that she was looking for a new host - not smart. I believe reddit accepts connections from Tor exit nodes, so it is quite possible that the operator of C9 was connecting over Tor (I am not sure what C9 even is). Also, do you have a link for the claim that many of the sites were using the same hosting provider? That would explain how law enforcement was able to take down so many sites.
|
|
|
|
Vessko
|
|
November 09, 2014, 10:09:24 AM |
|
From what I was reading earlier today, multiple sites which were taken down were using the same Bulgarian host. Link, please?
|
|
|
|
|
Dabs
Legendary
Offline
Activity: 3416
Merit: 1912
The Concierge of Crypto
|
|
November 09, 2014, 01:10:41 PM |
|
Someone should make a simple "How-To" if you ever want to do something similar. That's not illegal, is it? (I mean, information that is not an illegal number.) Like, don't host in Bulgaria or something.
|
|
|
|
kwukduck
Legendary
Offline
Activity: 1937
Merit: 1001
|
|
November 09, 2014, 01:52:28 PM |
|
In order to use I2P you have to relay traffic as well, which makes correlation and timing attacks harder. I2P also doesn't use Tor's static three-hop, one-way path that changes every 10 minutes: inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.
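The randomized tunnel selection described above can be sketched in a few lines. This is a hypothetical toy model, not I2P's actual implementation (real I2P also weighs peers by measured performance and builds several tunnels per destination); the `build_tunnel` helper and `min_hops`/`max_hops` parameters are assumptions for illustration.

```python
import random

def build_tunnel(peers, min_hops=2, max_hops=4):
    """Pick a random-length path of distinct relays (toy sketch of
    I2P-style tunnel building with a set min and max hop count)."""
    length = random.randint(min_hops, max_hops)
    return random.sample(peers, length)  # distinct routers, random order

peers = [f"router{i}" for i in range(20)]

# Inbound and outbound tunnels are built independently, so the two
# directions of a connection take different randomized paths.
inbound = build_tunnel(peers)
outbound = build_tunnel(peers)
```

Because path length itself varies per tunnel, an observer counting hops learns less than it would from Tor's fixed three-hop circuits.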
|
14b8PdeWLqK3yi3PrNHMmCvSmvDEKEBh3E
|
|
|
elliwilli
|
|
November 09, 2014, 03:26:48 PM |
|
I think it is time. I2P is objectively better, and the adage holds: the slower the system, the more secure.
|
|
|
|
RobertDJ
|
|
November 09, 2014, 06:43:09 PM |
|
In order to use I2P you have to relay traffic as well, which makes correlation and timing attacks harder. I2P also doesn't use Tor's static three-hop, one-way path that changes every 10 minutes: inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.
I think this makes using I2P riskier for the "average" user who does not intend to use it for illegal purposes. That user would risk traffic from illegal activity exiting through their node, without the cover that operators of recognized Tor exit nodes have.
|
|
|
|
Cubic Earth
Legendary
Offline
Activity: 1176
Merit: 1020
|
|
November 09, 2014, 07:28:18 PM |
|
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P): "By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack." But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by running a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target. (I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.) I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet, so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.
I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
There's a solution to traffic pattern attacks - it's just really expensive. The way you solve traffic pattern analysis is to make your protocol consume a constant amount of bandwidth all the time, regardless of whether anything is actually going on. I've been waiting to see "constant bandwidth" solutions for quite some time. They would help the anonymity networks, and the technique could also be applied to something like VoIP. It would consume lots of bandwidth, but not necessarily an unreasonable or unworkable amount. Further, 24/7 availability could be given up in favor of some window of time, maybe one hour per day, during which the constant bandwidth would be applied; it would be up to the user to take advantage of that time window. Command and control instructions, and text, take up very little bandwidth, so those kinds of activities should only need a small amount of fake data to effectively pad the timing.
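The constant-bandwidth idea can be sketched as a sender that emits exactly one fixed-size cell per tick, carrying real data when any is queued and random padding otherwise. This is a hypothetical illustration, not any deployed protocol; the `ConstantRateSender` class and the 512-byte cell size are assumptions chosen for the sketch.

```python
import os
from collections import deque

CELL_SIZE = 512  # bytes per cell; the wire carries one cell per tick

class ConstantRateSender:
    """Toy sketch of constant-bandwidth cover traffic: an observer sees
    one fixed-size cell per tick whether or not real data is queued."""

    def __init__(self):
        self.queue = deque()

    def send(self, data: bytes):
        # Split real data into cell-sized chunks (last chunk zero-padded).
        for i in range(0, len(data), CELL_SIZE):
            chunk = data[i:i + CELL_SIZE]
            self.queue.append(chunk.ljust(CELL_SIZE, b"\x00"))

    def tick(self) -> bytes:
        # Emit a real cell if one is queued, otherwise a padding cell.
        if self.queue:
            return self.queue.popleft()
        return os.urandom(CELL_SIZE)

sender = ConstantRateSender()
sender.send(b"hello hidden service")
wire = [sender.tick() for _ in range(10)]  # 1 real cell, then 9 padding cells
# Every tick produces exactly CELL_SIZE bytes, regardless of activity.
```

In a real system the cells would be encrypted before hitting the wire, which is what makes the padding cells indistinguishable from the real one.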
|
|
|
|
RobertDJ
|
|
November 09, 2014, 09:03:43 PM |
|
The main problem with I2P and Tor is that they only try to protect you against mostly-passive attackers who have absolutely no idea of where you might actually be on the Internet. The Tor threat model says (and this is also true of I2P): "By observing both ends, passive attackers can confirm a suspicion that Alice is talking to Bob if the timing and volume patterns of the traffic on the connection are distinct enough; active attackers can induce timing signatures on the traffic to force distinct patterns. Rather than focusing on these traffic confirmation attacks, we aim to prevent traffic analysis attacks, where the adversary uses traffic patterns to learn which points in the network he should attack." But attackers looking for the real IP of a target hidden service can significantly narrow the set of possible targets by enumerating all active Tor/I2P users (using widespread traffic analysis or by running a lot of nodes on the network), and then they can further narrow it by doing intersection attacks. Once they've narrowed it down to a few hundred possibilities, they can try timing attacks against each one to get solid proof that they're the target. (I wonder if the hidden services that were not taken down in the recent bust have anything in common. Are they in a particular country that's unfriendly to NSA demands? Do they use a fixed set of trusted entry guards? Probably we won't find out, unfortunately.) I just don't think that low-latency client<->server networks can be secure. What we need are distributed data stores like Freenet, so that the originator/owner of content doesn't need to always be online and moreover has plausible deniability even if they are under active surveillance. However, I really doubt that any existing anonymous data store could actually stand up to targeted traffic analysis of the content originator. Freenet seems to be put together in an especially haphazard way, without much theoretical basis for its claimed anonymity.
I like a lot of what I've read about GNUnet. I think that a good path forward for anonymous networks would be:
- Make the GNUnet software user-friendly.
- Create message board and Web functionality (like FProxy) on top of GNUnet.
- Make GNUnet work over I2P.
- Increase the popularity of GNUnet+I2P so that attackers can't just do traffic analysis of every single user.
There's a solution to traffic pattern attacks - it's just really expensive. The way you solve traffic pattern analysis is to make your protocol consume a constant amount of bandwidth all the time, regardless of whether anything is actually going on. I've been waiting to see "constant bandwidth" solutions for quite some time. They would help the anonymity networks, and the technique could also be applied to something like VoIP. It would consume lots of bandwidth, but not necessarily an unreasonable or unworkable amount. Further, 24/7 availability could be given up in favor of some window of time, maybe one hour per day, during which the constant bandwidth would be applied; it would be up to the user to take advantage of that time window. Command and control instructions, and text, take up very little bandwidth, so those kinds of activities should only need a small amount of fake data to effectively pad the timing.
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic and strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
|
|
|
|
Cubic Earth
Legendary
Offline
Activity: 1176
Merit: 1020
|
|
November 09, 2014, 11:52:42 PM |
|
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic and strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
If the cryptography is properly implemented, "real data" should appear identical to "junk data". The trick would be merging and alternating between them in a seamless way.
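The point that properly encrypted "real data" and "junk data" look identical can be demonstrated with a small sketch. The stream cipher below is a deliberately toy construction (SHA-256 in counter mode as a keystream) used only to make the example self-contained; a real system would use an established cipher such as AES-GCM or ChaCha20-Poly1305. The `keystream_encrypt` helper and the cell size are assumptions for illustration.

```python
import hashlib
import os

def keystream_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Toy XOR stream cipher: SHA-256 over (key, nonce, counter) as the
    keystream. Illustration only -- not a production cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(plaintext):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

CELL = 512
key = os.urandom(32)

real = b"GET /index.html".ljust(CELL, b"\x00")  # real payload, padded to a cell
junk = os.urandom(CELL)                         # junk payload of the same size

c_real = keystream_encrypt(key, os.urandom(16), real)
c_junk = keystream_encrypt(key, os.urandom(16), junk)
# Without the key, both ciphertexts are equal-length, uniform-looking
# byte strings; an observer cannot tell which one carries data.
```

Stripping the fake data out of a traffic analysis would therefore require breaking the encryption, not just inspecting the bytes, which is why the adversary is left with only volume and timing, and a constant-rate stream flattens those too.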
|
|
|
|
nothing2seeHere
Member
Offline
Activity: 110
Merit: 10
|
|
November 10, 2014, 02:43:29 AM |
|
While this kind of countermeasure could potentially work, I would think that an adversary would eventually be able to figure out what is fake data and what is "real" traffic and strip the fake data out of their analysis. IMO the only real way to prevent timing attacks is to have a large amount of "real" traffic flowing to your site on a regular basis.
If the cryptography is properly implemented, "real data" should appear identical to "junk data". The trick would be merging and alternating between them in a seamless way. I think that if the real and junk data are not merged seamlessly, an attacker can differentiate between the two (they may not be able to tell for sure which data is "real" and which is "junk", but they could potentially see two separate levels of data/bandwidth).
|
|
|
|
hamiltino
|
|
November 10, 2014, 02:30:37 PM |
|
maidsafe
|
stacking coin
|
|
|
bluemountain
|
|
November 11, 2014, 01:44:51 AM |
|
In order to use I2P you have to relay traffic as well, which makes correlation and timing attacks harder. I2P also doesn't use Tor's static three-hop, one-way path that changes every 10 minutes: inbound and outbound paths in I2P are separate and randomized in hop length within a set minimum and maximum. Besides this, it also uses multiple paths to the destination, not just one.
If you have enough nodes, you can still potentially execute a timing attack; the randomized number of hops and the additional relay traffic make such attacks much more difficult, but still not impossible.
|
|
|
|
Dabs
Legendary
Offline
Activity: 3416
Merit: 1912
The Concierge of Crypto
|
|
November 11, 2014, 02:36:08 AM |
|
Fake data is just some real data padded and then encrypted. There is no way to tell the two apart without the decryption keys. The fake data is simply generated by the sender and then discarded on the receiving end.
This reminds me of MP3s. Music used to be encoded at a constant bit rate (CBR); now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
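The "generated by the sender, discarded by the receiver" mechanism can be sketched as a simple cell-framing scheme: a type flag inside each (would-be encrypted) cell marks it as real or padding, and the receiver silently drops padding cells. This is a hypothetical toy framing, not any real protocol's wire format; real protocols would encrypt the cells and carry an explicit payload length instead of relying on zero-stripping.

```python
import os

CELL_SIZE = 64
REAL, PAD = b"\x01", b"\x00"  # one-byte type flag inside the cell

def make_real_cell(payload: bytes) -> bytes:
    # Toy framing: flag byte + zero-padded payload (real protocols carry
    # an explicit length field instead).
    assert len(payload) <= CELL_SIZE - 1
    return REAL + payload.ljust(CELL_SIZE - 1, b"\x00")

def make_pad_cell() -> bytes:
    # Sender fabricates padding; once encrypted on the wire, it would be
    # indistinguishable from a real cell.
    return PAD + os.urandom(CELL_SIZE - 1)

def receive(cells):
    # Receiver checks the flag after decryption and discards padding.
    return [c[1:].rstrip(b"\x00") for c in cells if c[0:1] == REAL]

wire = [make_pad_cell(), make_real_cell(b"hello"), make_pad_cell()]
messages = receive(wire)  # -> [b"hello"]
```

All three cells on the wire are the same size, so only the holder of the decryption key learns that two of them were cover traffic.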
|
|
|
|
Cubic Earth
Legendary
Offline
Activity: 1176
Merit: 1020
|
|
November 11, 2014, 04:24:45 AM |
|
This reminds me of MP3s. Music used to be encoded at a constant bit rate (CBR); now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
Exactly. I hesitate to make a statement with two absolutes, but I'll go out on a limb: the concept is literally the opposite of compression. A constant stream would even be resistant to the active attack Theymos described. In the context of Tor, imagine all nodes doing their best to keep a constant 1 Mbps of data flowing between peers in both directions. Then imagine big brother decides to modulate your home internet connection in some way, in hopes of tracing and/or confirming an effect somewhere else. The constant flow of data would cause the modulation to be 100% damped upon reaching the first node, leaving a downstream signal-to-noise ratio of zero.
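The damping argument can be checked with a small simulation: a relay that forwards exactly one fixed-size cell per tick, padding when its queue is empty, produces an identical output trace whether its input is heavily modulated or completely idle. The `relay_output` function and arrival model below are assumptions made for this sketch, not a model of real Tor relay behavior.

```python
import os
from collections import deque

CELL = 512  # bytes per cell

def relay_output(arrivals_per_tick):
    """Simulate a relay that forwards exactly one fixed-size cell per
    tick, substituting a padding cell when its queue is empty. Returns
    the per-tick output sizes an observer downstream would measure."""
    queue = deque()
    sent = []
    for arrivals in arrivals_per_tick:
        for _ in range(arrivals):
            queue.append(os.urandom(CELL))  # real cells arriving this tick
        cell = queue.popleft() if queue else os.urandom(CELL)  # pad if idle
        sent.append(len(cell))
    return sent

bursty = relay_output([5, 0, 0, 3, 0, 0, 0, 1])  # modulated upstream input
idle   = relay_output([0] * 8)                    # no upstream input at all
# Both output traces are identical: one 512-byte cell every tick, so a
# downstream observer recovers none of the upstream modulation.
```

The cost, as noted above, is that the link burns full bandwidth even when idle, which is exactly the CBR-versus-VBR trade-off.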
|
|
|
|
bluemountain
|
|
November 11, 2014, 05:23:31 AM |
|
Fake data is just some real data padded and then encrypted. There is no way to tell the two apart without the decryption keys. The fake data is simply generated by the sender and then discarded on the receiving end.
This reminds me of MP3s. Music used to be encoded at a constant bit rate (CBR); now there are more variable bit rate (VBR) files. We are simply going backwards with the technology.
If you are a node that is receiving the data, then you know what is fake (because you discard it) and what is real (because you continue to relay it).
|
|
|
|
|