Bitcoin Forum
Author Topic: Solution to Sia/Storj/etc DDOS issues and Sybil Vulnerability  (Read 3240 times)
super3
Legendary
Activity: 1094
Merit: 1006
June 22, 2016, 05:16:51 PM
#21

Wait a minute, how did you handle redundancy in your old solution? Did you do something like 3x redundancy in case some of the nodes went down?

Bitcoin Dev / Storj - Decentralized Cloud Storage. Winner of Texas Bitcoin Conference Hackathon 2014. / Peercoin Web Lead / Primecoin Web Lead / Armory Guide Author / "Am I the only one that trusts Dogecoin more than the Federal Reserve?"
iamnotback
Sr. Member
Activity: 336
Merit: 265
June 22, 2016, 05:59:23 PM
Last edit: June 22, 2016, 06:31:52 PM by iamnotback
#22

Quote from: super3
Wait a minute, how did you handle redundancy in your old solution? Did you do something like 3x redundancy in case some of the nodes went down?

I didn't develop the idea beyond the conceptual investigation phase, because I determined that it wasn't a solid enough direction to pursue.

The idealism of it appeals to me, of course. But I've also learned to be very skeptical of idealistic causes, because they can be intoxicating and cloud objectivity.

I am obviously going to be more circumspect about dubious project technologies, given my age. I don't have another decade to expend on something that does not pan out.

Quote
The problem with Monero is that it is not used much as a currency (https://getmonero.org/getting-started/merchants). The same thing happened to Peercoin: hardly used for anything besides speculation.

It is hardly profitable for me to mine it despite having 2x R9 290, and it is likely to get even worse as the block reward continues to decrease. I am afraid the coin will end up in the hands of botnets.

Quote
Not used much as a currency? Did you miss all the posts where people are talking about buying things with xmr.to? In short, in that sense every shop that accepts bitcoin also accepts Monero.

That was the main marketing innovation I saw from Monero's ecosystem. Someone even used it once to fund me.

It is befitting that Shapeshift.io copied you, given that one of the threats Monero supporters used to make, whenever I explained that I wanted to work on my own experiments, was that, being open source, they could just copy anything that was valuable.

Btw, I was pitching the conceptual idea of XMR.to back in 2013 on BCT. It was one of the rebuttals I had for the Bitcoin maximalists. And yet again one of my ideas becomes a blockbuster success. You think I don't have a lot more of those ideas in my back pocket?
super3
Legendary
Activity: 1094
Merit: 1006
June 22, 2016, 06:35:01 PM
#23

Quote from: iamnotback
I didn't develop the idea beyond the conceptual investigation phase, because I determined that it wasn't a solid enough direction to pursue. [...] I don't have another decade to expend on something that does not pan out.
Everything starts as an idea. Do you believe in the idea of distribution and decentralization? It all falls apart if we can't get our data out of centralized data centers. What good is a decentralized application if it's just run on Amazon S3?

iamnotback
Sr. Member
Activity: 336
Merit: 265
June 22, 2016, 06:42:49 PM
Last edit: June 22, 2016, 07:22:24 PM by iamnotback
#24

Quote from: super3
Everything starts as an idea. Do you believe in the idea of distribution and decentralization? It all falls apart if we can't get our data out of centralized data centers. What good is a decentralized application if it's just run on Amazon S3?

Of course I do.

I'll paradigm-shift you. We can decentralize our servers. Abstractly, I am thinking the fundamental error in decentralized file stores such as these is that we are modelling a monolith, i.e. a total order on redundancy. Paradigm-shift to a plurality of partial orders.

Btw, I like the name Storj.
super3
Legendary
Activity: 1094
Merit: 1006
June 27, 2016, 02:48:21 PM
#25

Quote from: iamnotback
Btw, I like the name Storj.
Thanks, it's taken with permission from this post: https://bitcointalk.org/index.php?topic=53855.msg642768#msg642768

iamnotback
Sr. Member
Activity: 336
Merit: 265
June 27, 2016, 03:18:59 PM
#26

Quote from: super3
Thanks, it's taken with permission from this post: https://bitcointalk.org/index.php?topic=53855.msg642768#msg642768

Kudos to Gmaxwell on the name then.

I have an idea.

What about a different approach to achieving redundancy?

Redundancy is fundamentally about making sure our data is stored on more than one hard disk.

If we could disperse the bits of the data across TBs of data, then the host actually has no incentive to cheat, as the host can use RAID striping to maximize its performance.

So then we probably need a blockchain to manage this coordination.
super3
Legendary
Activity: 1094
Merit: 1006
June 27, 2016, 03:35:07 PM
#27

Quote from: iamnotback
What about a different approach to achieving redundancy? [...] If we could disperse the bits of the data across TBs of data, then the host actually has no incentive to cheat, as the host can use RAID striping to maximize its performance. So then we probably need a blockchain to manage this coordination.
The best way to do this is through Reed-Solomon erasure encoding. It's kind of like a mini RAID per file. For example, we use 20-of-40: the file is broken into 40 pieces, of which you need any 20 to recover. Like you said, you can't solve Sybils, but proper erasure encoding can make it many orders of magnitude more resistant.
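
To make the any-20-of-40 property concrete, here is a minimal Reed-Solomon-style sketch using polynomial interpolation over GF(257). It is a toy (one byte per share, naive Lagrange interpolation, all names illustrative); a production codec like Storj's would use optimized systematic arithmetic over GF(256), but the recovery property is the same:

Code:
import random

PRIME = 257  # smallest prime above 255, so every byte is a field element

def _interp(points, x):
    """Evaluate, at x, the unique polynomial through `points`, mod PRIME
    (Lagrange interpolation)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

def encode(data: bytes, n: int):
    """Systematic k-of-n encode: shares 0..k-1 carry the data bytes
    themselves; shares k..n-1 are extra evaluations of the degree-(k-1)
    polynomial through them."""
    k = len(data)
    shares = list(enumerate(data))                  # (x, value) pairs
    return shares + [(x, _interp(shares, x)) for x in range(k, n)]

def decode(any_k_shares, k: int) -> bytes:
    """Rebuild the k data bytes from ANY k surviving shares."""
    return bytes(_interp(any_k_shares, x) for x in range(k))

# 20-of-40, as in the post: lose any 20 of the 40 pieces and still recover.
data = bytes(random.randrange(256) for _ in range(20))
shares = encode(data, n=40)
survivors = random.sample(shares, 20)               # any 20 pieces survive
assert decode(survivors, 20) == data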

iamnotback
Sr. Member
Activity: 336
Merit: 265
June 27, 2016, 05:34:27 PM
#28

Quote from: super3
...but proper erasure encoding can make it many orders of magnitude more resistant.

Even on disk failure, some sectors can often be recovered with forensics. But then you need the storage provider's reputation at risk, so they have an economic incentive to pay for that forensics. Again, it seems the Sybil attack is the problem, because they can blame the failure on a disposable Sybil.
super3
Legendary
Activity: 1094
Merit: 1006
June 27, 2016, 06:52:36 PM
#29

Quote from: iamnotback
Even on disk failure, some sectors can often be recovered with forensics. But then you need the storage provider's reputation at risk, so they have an economic incentive to pay for that forensics. Again, it seems the Sybil attack is the problem, because they can blame the failure on a disposable Sybil.
The issue is not a full solution to the Sybil attack, just Sybil resistance. When you get to something like 16 failures per trillion, it's not really an issue. Even Amazon S3 has 15x more failures than that.

If an attacker has to store 10% of the entire network but only has a 1.67e-11 chance of affecting a file, I'd say that is good enough. Worst case, you can start adding economic incentives and disincentives. Look up the attacker's funds/earnings on the blockchain for 3 months. "Oops, we lost your file out of 1 trillion. Here is $10k taken from the attacker."
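
That ballpark can be sanity-checked with a short binomial-tail calculation, under the simplifying assumption that each of a file's 40 shards lands on an attacker-controlled node independently with probability equal to the attacker's 10% share of the network (the exact quoted figure depends on the placement model, but the order of magnitude matches):

Code:
from math import comb

def p_file_loss(n: int = 40, k: int = 20, f: float = 0.10) -> float:
    """P(file destroyed): more than n - k of its n shards sit on
    attacker nodes, so fewer than k honest shards survive."""
    return sum(comb(n, i) * f**i * (1 - f) ** (n - i)
               for i in range(n - k + 1, n + 1))

print(f"{p_file_loss():.2e}")  # ~2e-11: tens of lost files per trillion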

iamnotback
Sr. Member
Activity: 336
Merit: 265
June 27, 2016, 07:09:33 PM
#30

Quote from: super3
If an attacker has to store 10% of the entire network but only has a 1.67e-11 chance of affecting a file, I'd say that is good enough. [...]

My original criticism was: what have we accomplished that I couldn't just buy from Google's cloud?
super3
Legendary
Activity: 1094
Merit: 1006
June 27, 2016, 08:51:59 PM
#31

Quote from: iamnotback
My original criticism was: what have we accomplished that I couldn't just buy from Google's cloud?
Because a P2P network can outperform Google's cloud at half the cost. If someone offered you a new car that goes 4x faster at half the cost, would you still want to stick with your own car?

iamnotback
Sr. Member
Activity: 336
Merit: 265
June 28, 2016, 01:03:30 AM
Last edit: February 24, 2017, 12:40:50 AM by iamnotback
#32

Quote from: super3
Because a P2P network can outperform Google's cloud at half the cost. If someone offered you a new car that goes 4x faster at half the cost, would you still want to stick with your own car?

How do you calculate that? Google can locate its servers next to hydropower and pay 4 cents per kWh. They have the economies of scale to buy hardware cheaper and to build the infrastructure for data centers. They can locate on the faster Tier 1 backbone Internet.

How can the average individual provide storage that competes?

Seems to me you will just build a system that Google can Sybil attack and provide all the storage for, increasing their profits and economies of scale.

The only possible way I can see to prevent this is to never pay for storage, but rather only swap storage for storage. In other words, if I store 500 GB from the network, then I can also store 500 GB on the network. But then the problem is the economics of accessing the data. Isn't this similar to what MaidSafe is doing? But then how did they corrupt that with a token to raise an ICO (there is no use for a token if peers are just trading storage for storage)?

To deal with the economics of access, I think the data one stores for the network would need to have the same access-rate pattern as the data one stores on the network. The network needs to institute this policy. A sketch of this accounting rule follows below.
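
Here is a minimal sketch of that swap-plus-parity policy, assuming a per-node ledger of hosted versus consumed bytes and served versus issued reads. All names are hypothetical; this is not MaidSafe's (or any deployed protocol's) actual design:

Code:
from dataclasses import dataclass

@dataclass
class NodeAccount:
    hosted_gb: float = 0.0        # data this node stores FOR the network
    consumed_gb: float = 0.0      # data this node stores ON the network
    served_reads_gb: float = 0.0  # read traffic served to other nodes
    issued_reads_gb: float = 0.0  # read traffic requested from other nodes

    def can_store(self, size_gb: float) -> bool:
        # Swap rule: you may store at most as much as you host.
        return self.consumed_gb + size_gb <= self.hosted_gb

    def can_read(self, size_gb: float) -> bool:
        # Access-rate parity: your reads are capped by the reads you serve.
        return self.issued_reads_gb + size_gb <= self.served_reads_gb

acct = NodeAccount(hosted_gb=500, served_reads_gb=50)
assert acct.can_store(400) and not acct.can_store(600)
assert acct.can_read(50) and not acct.can_read(51)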
iamnotback
Sr. Member
Activity: 336
Merit: 265
February 24, 2017, 12:50:23 AM
Last edit: February 26, 2017, 12:45:02 PM by iamnotback
#33

I am very sleepy, so please excuse the poor-quality prose that follows...

I realized today that there is an important distinction between the reason I originally dismissed my proof-of-diskspace concept in 2013 and its applicability to personal storage, which wasn't the application I dismissed.

I dismissed it as a way of proving a decentralized resource for participating in the consensus algorithm, because there was no way to produce the encrypted data variants needed to prevent the Sybil attack: the encryption of the variants of the same data would have to be done in public, i.e. no real encryption was possible.

Whereas with personal storage, the owner of the file can indeed provide multiple encrypted variants, so as to ensure those copies are stored redundantly (see the sketch below).
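
A minimal sketch of that owner-side trick: because the owner holds the keys, they can publish several distinct ciphertexts of the same file, so colluding or Sybil hosts cannot pass one stored copy off as many. The SHA-256 counter-mode keystream here is a toy stand-in for a real cipher (e.g. AES-GCM), and all names are illustrative:

Code:
import hashlib, os

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR with a SHA-256 counter keystream.
    Applying it twice with the same key decrypts."""
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

file_data = b"the owner's file, to be stored redundantly"
replica_keys = [os.urandom(32) for _ in range(3)]   # one key per variant
variants = [xor_keystream(k, file_data) for k in replica_keys]

# All variants decrypt to the same plaintext, yet the ciphertexts differ,
# so each redundant copy must actually occupy its own disk space.
assert all(xor_keystream(k, v) == file_data
           for k, v in zip(replica_keys, variants))
assert len(set(variants)) == 3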

So although I dismissed its applicability to blockchain consensus, it could still remain a valid concept for storage.

And thus there is no real innovation here for the consensus problems of blockchains, nor are the tokens of Sia, MaidSafe, and Storj going to have any value. To the extent that proof-of-storage/retrievability helps make a better enterprise cloud hosting system, it will not sustain a token + blockchain by itself. The payment and blockchain for such cloud hosting would still be whichever payment system and blockchain wins overall (and that is going to require innovation in blockchain technology).

Also, as I pointed out before, the P2P cloud storage would be dominated by the vendors with the greatest economies of scale, not by a system running on the storage of home computers connected over consumer Internet connections.

Edit: more on that here:

In addition to my prior critique of "proof-of-storage", I see some additional flaws in the idea expressed as quoted below:

Quote
Coins are issued by the network based on the following formula:

- 1 coin = 1 GB hosted for 1 month.
- Any downtime (detected by pinging) reduces profit 10x (i.e., if your mining machine is down for 1 day, you lose 10 days' worth of profit for that uptime month).
- 100% of your "storage" has to be downloadable by the network within 1 hour, tested by the network randomly 4 times per month (an uptime month of 30 days, not calendar). If you fail this test, your profits over this period are reduced by double the amount of the failed download; e.g., if you are hosting (mining with) 4 GB of space, a random download attempt occurs, and only 90% of the 4 GB is downloaded, then your profits are reduced by 20% until the next random download test.
- When you start mining you do not receive profit for the first uptime week of 7 days (this is to stop people who had some downtime from simply creating a new miner on a new wallet straight away).
- Ping checks are performed every 15 minutes; you need to fail 2 to be considered "down". Thus you can install an update and restart without "downtime".
- Miners are also rewarded the transaction fees of the network, spread evenly among the miners based on earnings.

None of these quoted rules is objectively provable to the public at large, i.e. on a blockchain. For example, proof-of-storage can work from the perspective of the owner of the data to be stored, but not from a public perspective.

Network performance can't be proven. This is one of the fundamental reasons we have to deal with Byzantine fault tolerance on networks. How do you prove to a blockchain that the ping time you measured was accurate? You can't. How do you prove downtime? You can't. If you say voting, then you have Sybil attacks on the voting. Byzantine agreement can't remain unstuck without a hardfork or whales. Etc.

Sorry, this is entirely impossible. It violates the basic research and fundamentals. Much more detail is in my unpublished white paper, wherein I start from first principles and try to explain these fundamentals (but it ends up being far too much to summarize for laymen, so I don't know if that version of the white paper will be the one I end up publishing).

So this is what I mean by my criticism that proof-of-storage can't really work well even for file storage in the Storj model, where each user encrypts the data to be stored (in multiple variants), because it is impossible to ensure fungible performance for data retrievability.