Bitcoin Forum
June 19, 2024, 03:33:31 PM *
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
Pages: « 1 ... 150 [151] 152 ... 345 »
Author Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer  (Read 450445 times)
This is a self-moderated topic. If you do not want to be moderated by the person who started this topic, create a new topic.
ttookk
Hero Member
*****
Offline Offline

Activity: 994
Merit: 513


View Profile
December 10, 2016, 02:21:14 AM
 #3001

Not sure yet how exactly this might work. Imagine this algorithm:

A genetic algorithm to solve the travelling salesman problem.
The order in which cities are visited is encoded in our chromosome.
In each iteration we generate millions of random solution candidates ... we want to take the best 1000 solutions. These are stored as an intermediate bounty (not sure how to store 1000 bounties lol).
Then, in the second generation, we take those 1000 bounties and again "mutate" them millions of times in the hope of getting even better solutions. Again, 1000 (hopefully better) solution candidates are taken into the next generation.

We repeat that until we find no better solutions.

I am right now thinking about how we could model that in Elastic, whether those 1000 candidates need to be stored in the blockchain at all, and what changes it would take to make it possible to implement such a simple optimization algorithm.

At the moment, we could just "roll the dice" over and over again ... basically a primitive search.
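Stripped of everything Elastic-specific, the generational loop described above could be sketched like this (a rough illustration only; `tourLength`, `mutate`, and `nextGeneration` are made-up names, and `dist` is a precomputed distance matrix, not anything from the Elastic code base):

```javascript
// Tour length for a city order, given a symmetric distance matrix.
function tourLength(order, dist) {
  let total = 0;
  for (let i = 0; i < order.length; i++) {
    total += dist[order[i]][order[(i + 1) % order.length]];
  }
  return total;
}

// Mutate a tour by swapping two random cities.
function mutate(order) {
  const next = order.slice();
  const a = Math.floor(Math.random() * next.length);
  const b = Math.floor(Math.random() * next.length);
  [next[a], next[b]] = [next[b], next[a]];
  return next;
}

// One generation: mutate each survivor `perParent` times, keep the best `keep`.
// In Elastic terms the survivors would be the "intermediate bounties".
function nextGeneration(survivors, dist, perParent, keep) {
  const pool = survivors.slice();
  for (const parent of survivors) {
    for (let i = 0; i < perParent; i++) pool.push(mutate(parent));
  }
  pool.sort((x, y) => tourLength(x, dist) - tourLength(y, dist));
  return pool.slice(0, keep);
}
```

The open question from the post stands unchanged: whether those survivors live on-chain as bounties or only in the author's client.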

Okay, I guess this is where I'm the non-conformist in the blockchain movement.  I would think the author should have some accountability here (i.e. run a client, or at least check in regularly)... the results should be saved locally on their workstation and sent back as an updated job via RPC.  To me, the blockchain is the engine performing the work... but I don't see why the author can't have a client that helps them analyze / prepare / submit jobs as and when they want.

Yes, the client would need to be coded to handle that logic, but does it really need to be part of the blockchain?

I agree with you on this one. Maybe we (I?? You??) should just implement some real-world problem clients.
This will require constant monitoring of the blockchain (and the unconfirmed transactions), along with automated cancelling and recreation of jobs.

What we still have to think about: input might be larger than 12 integers (when it comes down to Bloom filters, for example). Do you think it might be wise to have at least some sort of persistent distributed storage?

And what about "job updating"? Could we allow jobs to be updated while running?

Yes, I thought this was the approach from the beginning, actually. That's why I compared Elastic to Lego: I assumed everything being built right now was just the blocks, and everything else would grow out of it. For more complex approaches (e.g. the traveling salesman problem), schemes will emerge that job authors implement "from the outside", i.e. using Elastic as the building blocks while using outside libraries to improve functionality by managing what actually needs computation.

I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, highly optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.
coralreefer
Sr. Member
****
Offline Offline

Activity: 464
Merit: 260


View Profile
December 10, 2016, 02:36:50 AM
 #3002

I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, highly optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.

Ok, I'll spend some time looking at the annealing or traveling salesman problem to see if I can get a real-world example in Elastic.  EK, I'm sure I'll have some questions for you  Wink

Maybe ttookk, you can create your chess example as the final exam for your newly acquired coding skills  Grin
ttookk
Hero Member
*****
Offline Offline

Activity: 994
Merit: 513


View Profile
December 10, 2016, 02:44:34 AM
 #3003

I think the traveling salesman problem is perfect for testing, because it is extremely scalable: you could either generate millions of random combinations or very few, highly optimized solutions.

I still like the idea of two job authors playing chess against each other, using nothing but Elastic. Brute-forcing moves might not be the most promising approach to playing chess, but it sounds doable to me.

Ok, I'll spend some time looking at the annealing or traveling salesman problem to see if I can get a real-world example in Elastic.  EK, I'm sure I'll have some questions for you  Wink

Maybe ttookk, you can create your chess example as the final exam for your newly acquired coding skills  Grin

Whoa, whoa, whoa… That's like saying "I want to learn blacksmithing, and the second piece I'm going to make is a full-blown katana with steel I smelted myself from raw iron ore" or something like that. Final exam for sure!

The chess thing would probably be a classic decision-tree situation; the further into the future you look, the more options and possible outcomes there are. This would at least be the brute-force approach.

Nah, but seriously. I'm going to look at some chess source code I found on GitHub.
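The brute-force decision tree described above is essentially a depth-limited negamax search. A game-agnostic sketch (the `moves`/`applyMove`/`score` callbacks are placeholders for real chess rules, not anything from an actual chess engine):

```javascript
// Depth-limited negamax: score is always from the perspective of the
// player to move, so we negate it when stepping down a level.
function negamax(state, depth, moves, applyMove, score) {
  const ms = moves(state);
  if (depth === 0 || ms.length === 0) return score(state);
  let best = -Infinity;
  for (const m of ms) {
    best = Math.max(best, -negamax(applyMove(state, m), depth - 1, moves, applyMove, score));
  }
  return best;
}

// Pick the move whose resulting position is worst for the opponent.
function bestMove(state, depth, moves, applyMove, score) {
  let best = null, bestVal = -Infinity;
  for (const m of moves(state)) {
    const v = -negamax(applyMove(state, m), depth - 1, moves, applyMove, score);
    if (v > bestVal) { bestVal = v; best = m; }
  }
  return best;
}
```

The branching factor is the problem ttookk points at: each extra move of look-ahead multiplies the number of leaf positions, which is exactly why such a search maps naturally onto a network of many parallel workers.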
TheDR
Full Member
***
Offline Offline

Activity: 124
Merit: 100


View Profile
December 10, 2016, 06:24:49 AM
 #3004

The Vernam cipher, i.e. the one-time pad (the only provably unbreakable system)? Could that be used, based on irrational numbers for key generation or something?
coralreefer
Sr. Member
****
Offline Offline

Activity: 464
Merit: 260


View Profile
December 10, 2016, 02:20:42 PM
 #3005

EK, starting to look at fitting the traveling salesman problem into Elastic.  Obviously, this problem requires distances to be calculated.  Is the expectation that the author takes care of these calculations outside of Elastic and provides them as part of the raw data, or do you think we need to add some additional math functions to ElasticPL, such as sqrt (and I'm sure there are others)?
Evil-Knievel
Legendary
*
Offline Offline

Activity: 1260
Merit: 1168



View Profile
December 10, 2016, 04:16:06 PM
 #3006

EK, starting to look at fitting the traveling salesman problem into Elastic.  Obviously, this problem requires distances to be calculated.  Is the expectation that the author takes care of these calculations outside of Elastic and provides them as part of the raw data, or do you think we need to add some additional math functions to ElasticPL, such as sqrt (and I'm sure there are others)?

Yeah, I think we need additional mathematical operators.
EDIT: SQRT on integers sucks a bit; we might need to think about the design! What do you think?
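For what it's worth, one common way to keep a language integer-only and still offer a square root is an integer Newton iteration that returns floor(sqrt(n)). This is just a sketch of one possible design, not a statement about what ElasticPL will actually do:

```javascript
// Integer square root via Newton's method: returns floor(sqrt(n))
// using only integer arithmetic (Math.floor stands in for integer division).
function isqrt(n) {
  if (n < 0) throw new RangeError("isqrt of negative number");
  if (n < 2) return n;
  let x = n;
  let y = Math.floor((x + 1) / 2);
  while (y < x) {            // strictly decreasing until it stabilizes
    x = y;
    y = Math.floor((x + Math.floor(n / x)) / 2);
  }
  return x;
}
```

For distance comparisons in the traveling salesman case, even this may be unnecessary: comparing squared distances avoids the root entirely.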

I have started to implement an "Elastic NodeJS-based SDK".
This will allow us to use the Elastic network programmatically. Here is a simple example that creates a simple work and listens for status changes as well as for new POW submissions and bounties ;-) It's not finished yet, but it will be today (also, the core client had to be extended, so this version is not compatible with the 0.8.0 release).

Github:

https://github.com/OrdinaryDude/elastic-nodejs


Simple Elastic PL controller:

Code:
var elastic = require("./lib/index.js");

/* Configuration BEGIN */
var passphrase = "test";
var host = "127.0.0.1";
var port = 6876;
var use_ssl = false;
/* Configuration END */

elastic.init(passphrase, host, port, use_ssl, function(){

    console.log("[!] Established connection with Elastic Core.");

    /* Create a new work */
    var work = {
        work_title: "My simple work",
        source_code: "verify m[0] >= 1000000;",
        xel_per_bounty: 1,
        xel_per_pow: 0.1,
        deposit_in_xel: 10000
    };

    /* Publish and watch the new work */
    elastic.publish_work(work, function(event, data){
        switch(event){
            case elastic.events.WORK_WAITING_TO_CONFIRM:
                console.log("[*] Work with id = " + data.transaction + " is awaiting first confirmation.");
                break;
            case elastic.events.WORK_STARTED:
                console.log("[*] Work with id = " + data.transaction + " just became live.");
                break;
            case elastic.events.WORK_CLOSED:
                console.log("[*] Work with id = " + data.transaction + " is finished.");
                break;
            case elastic.events.WORK_UPDATE:
                // Right now, this event is fired every block with the current work JSON object as the data!
                // You can read the remaining balance and the number of POW submissions/bounties here.
                break;
            case elastic.events.NEW_BOUNTY:
                break;
            case elastic.events.NEW_POW:
                break;
            case elastic.events.FAILURE:
                console.log("[E] Failure:", data);
                break;
        }
    });
});
Evil-Knievel
Legendary
*
Offline Offline

Activity: 1260
Merit: 1168



View Profile
December 10, 2016, 06:46:05 PM
 #3007

What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (to recall, I code this for you guys) decide for themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from block 1 to block 5000 and then stop.
Then, version 1.1 will run from blocks 5001-10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)

I think this would give us some real field testing, not just on the testnet, and give us five chances to fix any future bugs.

What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.
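The block-window idea above can be expressed as a simple validity check. The block numbers for 1.0, 1.1, and 1.5 are the ones proposed in the plan; the windows for 1.2-1.4 are filled in following the stated pattern, and everything else (names, structure) is purely illustrative:

```javascript
// Each client version is hard-coded with the block range it may process.
// Outside its window it simply stops syncing, forcing the planned upgrade.
const VERSION_WINDOWS = {
  "1.0": { from: 1,     to: 5000 },
  "1.1": { from: 5001,  to: 10000 },
  "1.2": { from: 10001, to: 15000 },   // assumed, following the "and so on"
  "1.3": { from: 15001, to: 20000 },   // assumed
  "1.4": { from: 20001, to: 25000 },   // assumed
  "1.5": { from: 25001, to: Infinity } // "goes on forever"
};

function mayProcessBlock(version, height) {
  const w = VERSION_WINDOWS[version];
  return !!w && height >= w.from && height <= w.to;
}
```

The appeal of this scheme is that the stop is deterministic: an old client doesn't fork off onto its own chain, it just halts at a known height.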
wpalczynski
Legendary
*
Offline Offline

Activity: 1456
Merit: 1000



View Profile
December 10, 2016, 07:09:09 PM
 #3008

What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (to recall, I code this for you guys) decide for themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from block 1 to block 5000 and then stop.
Then, version 1.1 will run from blocks 5001-10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)

I think this would give us some real field testing, not just on the testnet, and give us five chances to fix any future bugs.

What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Sounds like an innovative release plan with the stages, and a good approach with the predetermined hard forks.  Sort of similar to the predetermined Monero forks, which removes contention.

Thanks again for all your hard work.

MiningSev0
Full Member
***
Offline Offline

Activity: 206
Merit: 106

Old Account was Sev0 (it was hacked)


View Profile
December 10, 2016, 07:20:29 PM
 #3009

What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (to recall, I code this for you guys) decide for themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from block 1 to block 5000 and then stop.
Then, version 1.1 will run from blocks 5001-10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)

I think this would give us some real field testing, not just on the testnet, and give us five chances to fix any future bugs.

What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Step 1 seems like a very good idea.
But the possibility of resetting the whole chain in step 2 would probably disappoint everybody externally interested - they have been waiting for a couple of months to get on board, and once the mainnet starts they might buy tokens from others - and would get ripped off by a reset.
Otherwise this model would be great for getting some media focus.
But bad reviews from disappointed "late investors" could be a killing argument for this project...

New signature to come =D
coralreefer
Sr. Member
****
Offline Offline

Activity: 464
Merit: 260


View Profile
December 10, 2016, 07:44:11 PM
 #3010

Yeah, I think we need additional mathematical operators.
EDIT: SQRT on integers sucks a bit; we might need to think about the design! What do you think?

Nice job with the SDK.

I had thought about this float issue during my original miner design.  An integer-based design lets us run at the best speeds, and I'd prefer not to move to a float-based design.  However, if we create a small chunk of memory (maybe 1000 floats) that can be used to store this type of data when needed, we may get the best of both worlds.  It would be available if needed, and if not, I don't think we'd see any decrease in performance.

Or am I oversimplifying this?
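The "best of both worlds" layout suggested here is easy to picture: the existing integer state plus a small optional float scratch area. A sketch (the sizes and the names `m`/`f` are illustrative, not the actual miner's memory layout):

```javascript
// Hypothetical memory layout: integer working memory as before,
// plus a small float scratch area a job uses only if it needs it.
const INT_SLOTS = 64000;   // illustrative integer memory size
const FLOAT_SLOTS = 1000;  // the "maybe 1000 floats" proposed above

const m = new Int32Array(INT_SLOTS);     // integer state, unchanged
const f = new Float64Array(FLOAT_SLOTS); // optional float scratch

// Example: an integer-only job never touches f and pays nothing for it,
// while a TSP-style job can park a real-valued distance there.
m[0] = 1000000;
f[0] = Math.sqrt(2);
```

Jobs that never touch the float area would incur no extra arithmetic cost, which is the performance argument made above; the only fixed cost is the allocation itself.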
ttookk
Hero Member
*****
Offline Offline

Activity: 994
Merit: 513


View Profile
December 10, 2016, 08:06:44 PM
 #3011

What about this "Mainnet Release Plan"?

Disclaimer: I am just a coder, so I am not launching anything; I am just coding an open-source product as best as my knowledge allows me to. This is what I thought:

Step 1:
The mainnet is locked and offline. In order to launch it, 25% of all tokens must be redeemed. This essentially means that the users (to recall, I code this for you guys) decide for themselves whether and when to launch. If the whales think the code sucks, they just delay!

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from block 1 to block 5000 and then stop.
Then, version 1.1 will run from blocks 5001-10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

We explicitly say that the blockchain may be reverted partially or totally during this time span.

Step 3:
After some more extensive testing we roll out 1.5, which begins at block 25001 (so basically after approx. 17.5 days) and goes on forever ;-)

I think this would give us some real field testing, not just on the testnet, and give us five chances to fix any future bugs.

What do you think about this? Do you have any suggestions?
Of course, we have to make the GPU miner happen first, and rethink the missing parts of the ElasticPL language.

Step 1 seems like a very good idea.
But the possibility of resetting the whole chain in step 2 would probably disappoint everybody externally interested - they have been waiting for a couple of months to get on board, and once the mainnet starts they might buy tokens from others - and would get ripped off by a reset.
Otherwise this model would be great for getting some media focus.
But bad reviews from disappointed "late investors" could be a killing argument for this project...

Yes, I see a similar problem with step 2. I think there should be no going back. Hard forks are a good tool in principle (I advocated them some pages back), but I think rolling back the chain is a no-go. When a hard fork is applied, all balances MUST stay the way they are, no matter how unfair the distribution has become for whatever reason. If a miner is able to exploit a loophole, that should be considered a bounty.
cyberhacker
Legendary
*
Offline Offline

Activity: 1330
Merit: 1000



View Profile
December 11, 2016, 02:09:52 AM
 #3012

does that mean we can redeem from 1.0?


or, to be safe, redeem after 1.5?

do we have simple win and mac builds for newbies?  thanks
ImI
Legendary
*
Offline Offline

Activity: 1946
Merit: 1019



View Profile
December 11, 2016, 02:42:07 AM
 #3013


i like the release plan. one issue i see is that after mainnet launch we have basically no way to keep exchanges from listing XEL. as soon as we launch, folks will start trading XEL even if it's just some bogus exchange like liqui.io or something. that could lead to problems.
Bgjjj2016
Sr. Member
****
Offline Offline

Activity: 448
Merit: 250

Ben2016


View Profile
December 11, 2016, 03:29:11 AM
 #3014


i like the release plan. one issue i see is that after mainnet launch we have basically no way to keep exchanges from listing XEL. as soon as we launch, folks will start trading XEL even if it's just some bogus exchange like liqui.io or something. that could lead to problems.
I don't understand - what kind of problems? Some fools are going to trade this as soon as it's out, but that's their problem!

My " I want that Old Toyota Camry very bad" BTC Fund :1DQU4oqmZRcKSzg7MjPLMuHrMwnbDdjQRM
Join the Elastic revolution! Elastic Network: The Decentralized Supercomputer 
ELASTIC WEBSITE|ANNOUNCEMENT THREAD|JOIN THE SLACK
bspus
Legendary
*
Offline Offline

Activity: 2165
Merit: 1002



View Profile
December 11, 2016, 11:46:57 AM
 #3015

does that mean we can redeem from 1.0?


or, to be safe, redeem after 1.5?

do we have simple win and mac builds for newbies?  thanks


I suppose the hard forks will not affect the original coin amounts in the genesis block, so you can claim whenever you want.

But if there is the slightest chance of reverting the blockchain, people should not be trading (well, at least should not be buying, unless they realise the risk) until the final version is out. Exchanges should definitely not list it until then.

BTCspace
Hero Member
*****
Offline Offline

Activity: 952
Merit: 501


View Profile
December 11, 2016, 12:47:28 PM
 #3016

when will step one start?

running farm worldwide
provenceday
Legendary
*
Offline Offline

Activity: 1148
Merit: 1000



View Profile
December 11, 2016, 01:29:22 PM
 #3017

nice to see some progress!

hopefully the project can release before the end of this year.
mr.coinzy
Hero Member
*****
Offline Offline

Activity: 500
Merit: 507



View Profile
December 11, 2016, 04:30:43 PM
 #3018

I would love to see XEL listed on serious exchanges as soon as possible (considering, of course, all the testing that needs to take place to have a solid system). It sounds to me, though, that with the suggested gradual forked release, it will take substantial time until we actually see this happening. For holders of the coin who are considering liquidating some of their holdings, that is not the best of plans.
I hope we can have some additional suggestions regarding the best way to roll it out to the masses.
Alohaboy?!
Hero Member
*****
Offline Offline

Activity: 1050
Merit: 506



View Profile
December 11, 2016, 04:45:00 PM
 #3019

I love the development and all the work being done for the project. For me the first release plan sounds great.
It is much more important to have a good and stable release than to release early so some guys can cash out.
So for me it's fine to take as much time as needed.
ttookk
Hero Member
*****
Offline Offline

Activity: 994
Merit: 513


View Profile
December 11, 2016, 04:53:35 PM
 #3020

(…)

Step 2:
We have a 5-stage launch. This means that the first version, let's say 1.0, will only run from block 1 to block 5000 and then stop.
Then, version 1.1 will run from blocks 5001-10000, and so on. So we basically have five "hard forks" between blocks 1 and 25000 in which we can embed potential fixes or changes to the protocol without crippling the network, since old clients will just stop syncing at block X.
Since the bounds are known, I don't see a problem here.

(…)

Step 1 seems like a very good idea.
But the possibility of resetting the whole chain in step 2 would probably disappoint everybody externally interested - they have been waiting for a couple of months to get on board, and once the mainnet starts they might buy tokens from others - and would get ripped off by a reset.
Otherwise this model would be great for getting some media focus.
But bad reviews from disappointed "late investors" could be a killing argument for this project...

Yes, I see a similar problem with step 2. I think there should be no going back. Hard forks are a good tool in principle (I advocated them some pages back), but I think rolling back the chain is a no-go. When a hard fork is applied, all balances MUST stay the way they are, no matter how unfair the distribution has become for whatever reason. If a miner is able to exploit a loophole, that should be considered a bounty.

I have to slightly soften my firm position regarding roll-backs. We could tell exchanges not to trade XEL for the first 5000 blocks. OTC-type deals would still be possible, but should be red-flagged as possible scams by the community.

However, a big problem will be that potential job authors aren't necessarily token holders, so without some kind of trading, the "real-world conditions" will not be as real-world as they should be.