Bitcoin Forum
Pages: « 1 … 149 [150] 151 … 345 »
Author Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer  (Read 450429 times)
cryptoboy.architect (Hero Member; Activity: 513, Merit: 500)
December 09, 2016, 09:45:20 PM  #2981

Lannister (OP):
Last Active:   November 02, 2016, 04:56:07 PM

Might not be alive. He mentioned health issues.
"Governments are good at cutting off the heads of a centrally controlled networks like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own." -- Satoshi
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 09, 2016, 11:56:11 PM  #2982

So what I see left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 12:06:11 AM  #2983

Quote from: coralreefer
So what I see left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.

This is a very good question. Maybe we could pick one together? I think we agree that we cannot use anything that needs seconds of preprocessing; we need something blazing fast, otherwise plenty of DoS attacks are inevitable.

The partially homomorphic encryption schemes (unpadded RSA, ElGamal, Goldwasser–Micali, Benaloh and Paillier) all seem fast to me.
FHE schemes such as Brakerski-Gentry-Vaikuntanathan, Brakerski, NTRU and Gentry-Sahai-Waters are not only experimental but also have significantly higher overhead.

Not sure what to do, and tbh my knowledge of FHE comes down to a few hours of reading about it. But I agree, it's a plus.
It would be an even bigger plus if it could somehow be used in the verify routine (like searching for a vanity-address key without revealing the key itself to the network).
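To make the "partially homomorphic" part concrete: ElGamal, one of the schemes listed above, is multiplicatively homomorphic, meaning the componentwise product of two ciphertexts decrypts to the product of the plaintexts. A toy round trip (with deliberately tiny, insecure parameters, purely for illustration):

```python
# Toy ElGamal demonstrating the multiplicative homomorphism.
# Parameters are tiny and NOT secure; illustration only.
import random

p = 467                          # small prime modulus (demo only)
g = 2                            # group element
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def dec(c):
    c1, c2 = c
    return c2 * pow(c1, p - 1 - x, p) % p   # c2 / c1^x mod p (Fermat inverse)

ca, cb = enc(37), enc(11)
# Multiplying ciphertexts componentwise yields an encryption of 37 * 11.
prod = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)
assert dec(prod) == 37 * 11 % p
```

The catch, of course, is that a partially homomorphic scheme only gives you that one operation (multiplication here, addition for Paillier), which is why the FHE schemes are attractive despite their overhead.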
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 12:09:23 AM  #2984

Here is the Brakerski-Gentry-Vaikuntanathan FHE implementation; I will take a look at it: https://github.com/shaih/HElib
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 12:17:22 AM  #2985

Quote from: Evil-Knievel (quoting coralreefer)
So what I see left is:
c) Homomorphic Encryption

Did you pick an algorithm for this yet?  I imagine this could take a bit of effort so I'd like to get a head start if you know which approach you want to take.

This is a very good question. Maybe we could pick one together? I think we agree that we cannot use anything that needs seconds of preprocessing; we need something blazing fast, otherwise plenty of DoS attacks are inevitable.

The partially homomorphic encryption schemes (unpadded RSA, ElGamal, Goldwasser–Micali, Benaloh and Paillier) all seem fast to me.
FHE schemes such as Brakerski-Gentry-Vaikuntanathan, Brakerski, NTRU and Gentry-Sahai-Waters are not only experimental but also have significantly higher overhead.

Not sure what to do, and tbh my knowledge of FHE comes down to a few hours of reading about it. But I agree, it's a plus.
It would be an even bigger plus if it could somehow be used in the verify routine (like searching for a vanity-address key without revealing the key itself to the network).

I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 12:30:33 AM  #2986

Quote from: coralreefer
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer. Unless we only allow "operations" on the encrypted big integers, and leave encryption and decryption entirely up to the user.
Also, not sure how effective the "verify" routine can be made. Does any cryptosystem have some usable operations that would be suitable for the verify routine?
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 12:32:33 AM  #2987

Quote from: Evil-Knievel (quoting coralreefer)
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer.

Yep, saw that.  I'll keep looking though.  Wonder if there is a way to leverage FPGAs for this...I might have a bit of experience there ;-)
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 12:38:22 AM  #2988

Quote from: Evil-Knievel (quoting coralreefer)
I'll start taking a look as well.  If something doesn't stand out, we may be better served by creating a roadmap with this on it and moving forward "as-is".  I think we just need to stay ahead of the other blockchains exploring this distributed computing space, which I'm confident we already are.

HElib: 33 ms for encryption and up to 5 seconds for decryption  Shocked

A bummer. Unless we only allow "operations" on the encrypted big integers, and leave encryption and decryption entirely up to the user.
Also, not sure how effective the "verify" routine can be made. Does any cryptosystem have some usable operations that would be suitable for the verify routine?

Maybe another way to look at this is to target specific use cases.  For example, someone mentioned video rendering...what functions are required for that?  Maybe the partially homomorphic encryption systems are good enough for some of the mainstream use cases.
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 12:43:50 AM  #2989

Quote from: coralreefer
Maybe another way to look at this is to target specific use cases.  For example, someone mentioned video rendering...what functions are required for that?  Maybe the partially homomorphic encryption systems are good enough for some of the mainstream use cases.

I personally think that video rendering is impossible (and will also be impossible for Golem) because verification just takes a hell of a lot of time and needs to be performed on all nodes. Golem's idea to just verify several pixels ... well ... you know as much as I do how reliable that is with pseudo-random validators Grin And even if you just render 25% you still have a 1/4 chance to win. If you lose, you continue and try again at 30%. In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

But I can imagine that we will have some decentralized storage we can perform work on. Like uploading a large dataset and using the nodes (which would need to support a master-slave method alongside the brute-force search approach) to perform some operations on it. We would still have to figure out what such a scheme might look like. But on confidential data sets it might be wise to use homomorphic encryption. Not sure if partial HE suffices here.

Paillier, for example, supports just a few basic operations...
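For reference, Paillier's "few basic operations" come down to addition: the product of two ciphertexts decrypts to the sum of the plaintexts. A toy keypair shows it (tiny insecure primes, demo only):

```python
# Toy Paillier demonstrating the additive homomorphism.
# Primes are tiny and NOT secure; illustration only.
import math, random

p, q = 17, 19
n = p * q                        # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # private key lambda
g = n + 1                        # standard choice of g
mu = pow(lam, -1, n)             # with g = n+1, mu = lambda^-1 mod n

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # r must be coprime to n
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n
    return L * mu % n

a, b = enc(41), enc(29)
# Multiplying ciphertexts mod n^2 yields an encryption of 41 + 29.
assert dec(a * b % n2) == (41 + 29) % n
```

So a worker could sum encrypted values without ever seeing them, but anything beyond additions (and plaintext-scalar multiplications) is out of reach, which is exactly the limitation being discussed here.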
mxhwr (Hero Member; Activity: 616, Merit: 500)
December 10, 2016, 12:44:38 AM  #2990

watching this thread, interesting stuff
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 12:54:16 AM  #2991

Quote from: Evil-Knievel
In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

LOL...I always enjoy hearing your input...informative, and it always makes me laugh  Cheesy

If I recall, some of your early interest in this project was from an academic perspective (i.e. research).  Do you think the system "as-is" is marketable to researchers, or do we need this additional level of security?  Seems like HE is still in its infancy, so I'd hate to lose out on our advantage, but if there is a limited market w/o HE, then we need to pursue it.
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 01:02:25 AM  #2992

Quote from: coralreefer (quoting Evil-Knievel)
In my personal opinion ... this is doomed to fail BIG WILLY STYLE!

LOL...I always enjoy hearing your input...informative, and it always makes me laugh  Cheesy

If I recall, some of your early interest in this project was from an academic perspective (i.e. research).  Do you think the system "as-is" is marketable to researchers, or do we need this additional level of security?  Seems like HE is still in its infancy, so I'd hate to lose out on our advantage, but if there is a limited market w/o HE, then we need to pursue it.

I think it is not far away; we can use it to search any kind of search space in a brute-force manner. I think to make it perfect, we need some "synchronization" feature.

Often, optimization problems are solved iteratively. First a lot of work is put in, then after the iteration step we aggregate the solutions / prepare the next steps / and then perform more operations in the next iteration. Think of it as a multi-level bounty system. The bounties of one iteration go in as input to the next iteration.

Maybe we could collect some common problems and think about whether they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 01:23:24 AM  #2993

Quote from: Evil-Knievel
I think it is not far away; we can use it to search any kind of search space in a brute-force manner. I think to make it perfect, we need some "synchronization" feature.

Makes sense.  Just thinking through this...so we would display the solved Bounties to the author, with some sort of cut/paste feature of the outputs to ease setup of the next job?  Or is there a way to automate this even further?
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 01:31:16 AM  #2994

Quote from: coralreefer (quoting Evil-Knievel)
I think it is not far away; we can use it to search any kind of search space in a brute-force manner. I think to make it perfect, we need some "synchronization" feature.

Makes sense.  Just thinking through this...so we would display the solved Bounties to the author, with some sort of cut/paste feature of the outputs to ease setup of the next job?  Or is there a way to automate this even further?

The shitty thing with manual processing is that we lose time.
Let's say you need a quick genetic algorithm over 10000 generations. Manual processing cannot be quicker than 10000*60 seconds (roughly a week). You get your results when you retire. It would be great to have some "intermediate" bounties (these may be different from the actual paid bounties). Better would be to collect as many good "intermediate" bounties as possible; once a threshold is reached, the Elastic PL program automatically switches to round two and reuses those.

Not sure yet exactly how this might work. Imagine this algorithm:

A genetic algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded in our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol).
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we find no better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.
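The multi-round scheme described above can be sketched in a few lines: keep the best K candidate tours each generation (the "intermediate bounties"), mutate them, and repeat. This is pure illustration of the idea, not Elastic/ElasticPL code; sizes are scaled down from "millions" to something that runs instantly.

```python
# Minimal GA-for-TSP sketch of the "intermediate bounty" rounds described above.
import random

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(20)]

def length(tour):
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def mutate(tour):
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]          # swap two cities
    return t

K = 10                                # "intermediate bounties" kept per round
pool = [random.sample(range(20), 20) for _ in range(200)]
best0 = min(length(t) for t in pool)  # best of the purely random first round
for generation in range(30):
    pool.sort(key=length)
    elite = pool[:K]                  # winners feed the next round
    pool = elite + [mutate(random.choice(elite)) for _ in range(190)]
pool.sort(key=length)
assert length(pool[0]) <= best0       # elitism: never worse than round one
```

The open question from the post maps directly onto this loop: the `elite` list is what would have to survive between rounds, whether on-chain or only on the author's machine.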
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 01:36:30 AM  #2995

Quote from: Evil-Knievel
Maybe we could collect some common problems and think about whether they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.

Actually this looks interesting to me.  I might need some help getting pointed in the right direction, but I'd like to take on one or two of these simulations...I agree...other than general testing, putting the elastic network up against some real-world scenarios is probably the most beneficial thing we can do right now to prove the design works.

Maybe I'll start with simulated annealing...reminds me of my material science classes from decades past  Smiley
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 01:39:20 AM  #2996

Quote from: coralreefer (quoting Evil-Knievel)
Maybe we could collect some common problems and think about whether they can be implemented now, or - if not - what it would take.

Travelling Salesman, Monte Carlo experiments, GA optimization, Simulated Annealing, linear or non-linear optimization problems, just to name a few.

Actually this looks interesting to me.  I might need some help getting pointed in the right direction, but I'd like to take on one or two of these simulations...I agree...other than general testing, putting the elastic network up against some real-world scenarios is probably the most beneficial thing we can do right now to prove the design works.

Maybe I'll start with simulated annealing...reminds me of my material science classes from decades past  Smiley

https://github.com/chncyhn/simulated-annealing-tsp ;-) I think at the moment it might be hard to implement this, but maybe I am wrong about that.
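For orientation (this is a generic textbook sketch, not code from the linked repo and not ElasticPL): the core of simulated annealing for TSP is an acceptance rule that takes worse tours with probability exp(-delta/T), with T cooling each step.

```python
# Bare-bones simulated annealing for a small random TSP instance.
import math, random

random.seed(7)
pts = [(random.random(), random.random()) for _ in range(15)]

def cost(tour):
    return sum(math.dist(pts[a], pts[b]) for a, b in zip(tour, tour[1:] + tour[:1]))

tour = list(range(15))
best = cost(tour)
T = 1.0
for step in range(5000):
    i, j = sorted(random.sample(range(15), 2))
    cand = tour[:i] + tour[i:j][::-1] + tour[j:]   # 2-opt style segment reversal
    delta = cost(cand) - cost(tour)
    # Always accept improvements; accept worse tours with prob exp(-delta/T).
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = cand
    best = min(best, cost(tour))
    T *= 0.999                                     # geometric cooling schedule
assert best <= cost(list(range(15)))
```

The awkward fit with Elastic as it stands is visible here too: the loop carries state (`tour`, `T`) from step to step, which is exactly the "synchronization" feature discussed above.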
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 01:43:49 AM  #2997

Quote from: Evil-Knievel
Not sure yet exactly how this might work. Imagine this algorithm:

A genetic algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded in our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol).
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we find no better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.

Okay, I guess this is where I'm the non-conformist to the blockchain movement.  I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)...the results should be saved locally on their workstation and sent back as an updated job via RPC.  To me, the blockchain is the engine performing the work...but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 01:53:52 AM  #2998

Quote from: coralreefer (quoting Evil-Knievel)
Not sure yet exactly how this might work. Imagine this algorithm:

A genetic algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded in our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol).
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we find no better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.

Okay, I guess this is where I'm the non-conformist to the blockchain movement.  I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)...the results should be saved locally on their workstation and sent back as an updated job via RPC.  To me, the blockchain is the engine performing the work...but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?

I agree with you on this one. Maybe we (I??/You??) should just implement some real-world problem clients.
This will require constant monitoring of the blockchain (and the unconfirmed transactions) along with automated cancelling and recreation of jobs.

What we still have to think about: input might be larger than 12 integers (when it comes down to Bloom filters, for example). Do you think it might be wise to have at least some sort of persistent distributed storage?

And what about "job updating"? Could we allow jobs to be updated while running?
coralreefer (Sr. Member; Activity: 464, Merit: 260)
December 10, 2016, 02:07:13 AM  #2999

Quote from: Evil-Knievel
What we still have to think about: input might be larger than 12 integers

Maybe we implement a randomize function that authors can use to expand the 12 inputs into as many inputs as they need.
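One way such a randomize function could look (a sketch, not an actual ElasticPL built-in; the function name and 32-bit word layout are assumptions): treat the 12 job inputs as a seed and expand them deterministically, so every verifier derives exactly the same expanded inputs.

```python
# Hypothetical input-expansion helper: 12 seed ints -> arbitrarily many values.
import hashlib, struct

def expand_inputs(m, count):
    """Deterministically expand the 12-int input vector m into `count` 32-bit values."""
    assert len(m) == 12
    seed = struct.pack("<12I", *(x & 0xFFFFFFFF for x in m))
    out, counter = [], 0
    while len(out) < count:
        # Counter-mode hashing: SHA-256(seed || counter) yields 8 words per block.
        block = hashlib.sha256(seed + struct.pack("<I", counter)).digest()
        out.extend(struct.unpack("<8I", block))
        counter += 1
    return out[:count]

vals = expand_inputs(list(range(12)), 100)
assert len(vals) == 100
assert vals == expand_inputs(list(range(12)), 100)   # same seed, same expansion
```

Determinism is the key property: the miner and every verifying node must agree on the expanded inputs without any extra on-chain data.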

Quote from: Evil-Knievel
Do you think it might be wise to have at least some sort of persistent distributed storage?

I could definitely see a use case for this...not sure how to implement efficiently but should probably be on the roadmap.

Quote from: Evil-Knievel
And what about "job updating"? Could we allow jobs to be updated while running?

I thought about this when I first wrote the miner...right now it doesn't account for this.  However, if the UI allows an update and submits it with a different work ID, then the miner would run as currently designed.  I was trying to avoid having to reparse the ElasticPL every 60 sec or so to see if it changed...as long as the update cancels the old work ID and creates a new one, it should work seamlessly.
Evil-Knievel (Legendary; Activity: 1260, Merit: 1168)
December 10, 2016, 02:10:03 AM  #3000

I think the first thing to do would be to create some sort of SDK, to submit and control jobs programmatically.
I can pull one off in Python pretty quickly, I think!
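A sketch of what such an SDK wrapper might look like. Every endpoint name, parameter, and port below is a hypothetical placeholder, not the real Elastic RPC API; the point is only the shape of a job-submission client that also handles the cancel-and-recreate update flow discussed above.

```python
# Hypothetical Elastic job SDK skeleton (request construction only, no I/O).
import json

class ElasticJobClient:
    """Builds JSON request payloads for submitting, cancelling and updating jobs."""

    def __init__(self, node_url="http://localhost:6876"):   # placeholder URL/port
        self.node_url = node_url

    def create_job_request(self, source_code, bounties, fee):
        return {"requestType": "createWork",                # hypothetical endpoint
                "programSource": source_code,
                "bountyLimit": bounties,
                "fee": fee}

    def cancel_job_request(self, work_id):
        return {"requestType": "cancelWork", "workId": work_id}

    def update_job_requests(self, old_work_id, source_code, bounties, fee):
        # "Update" = cancel old work ID, then create a new one, as discussed above.
        return [self.cancel_job_request(old_work_id),
                self.create_job_request(source_code, bounties, fee)]

client = ElasticJobClient()
reqs = client.update_job_requests(42, "verify { ... }", bounties=1000, fee=5)
assert json.loads(json.dumps(reqs[0]))["workId"] == 42
assert reqs[1]["bountyLimit"] == 1000
```

A real version would POST these payloads to the node and poll for solved bounties, which is where the blockchain-monitoring loop from the previous posts would live.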