tomkat
October 14, 2016, 02:56:30 PM
In which exchange can I buy XEL?
It's not listed yet. A few weeks minimum until mainnet launch...
corvins
Sr. Member
Offline
Activity: 424
Merit: 250
Blockchain is the future
October 14, 2016, 03:03:53 PM
In which exchange can I buy XEL?
It's not listed yet. A few weeks minimum until mainnet launch... Thanks
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 03:11:18 PM
I think, given that the new PoW hash scheme works out, the most important question that remains is how to model the PoW retargeting ... the way it is now sucks! Should we go for a "target value per work"? But if so, how do we model it correctly?
Maybe lannister reads this and wants to call out a bounty for the "best scheme"?
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 03:20:30 PM
Preliminaries:
Work authors push work to the Elastic network. At any time there may be 0 or more works active in parallel. To ensure that "workers" who work on the work packages are paid out continuously, they may submit PoW packages even if they do not solve the actual task. Those are solutions whose "hash", formed when the program has been executed fully, meets a certain target value (i.e., is lower than the target value).
Also: the blockchain grows by PoS only ... PoW submissions are just normal transactions that are included in the blocks.
Problem description:
Right now, all work packages share the same target value. Very quick jobs may therefore lower the target value (i.e., increase the difficulty) more significantly than long-running jobs. This can have several effects ...
- On the one hand, it is hard for a work author to estimate a good "per PoW submission price", as it highly depends on the other work in the network.
- On the other hand, effects like this may happen: currently the target value is measured over a sliding window of the last 3 blocks, counting how many PoW submissions were performed in those blocks. It is basically used to rate-limit the number of valid PoW submissions; the ideal number is 10 per block on average. But if there is no work in the network, the blockchain still grows (since it is backed by PoS), and when a new work is then created, the last 3 blocks had exactly 0 PoW submissions in them. 0 PoW submissions cause the difficulty to drop again. This may cause a huge burst when a lot of workers/miners start submitting PoW submissions that meet the "minimum possible difficulty", before the target value approaches its desired value again.
So now?
We need a way to regulate the rate of PoW submissions in a fair manner (fair for the work creators), and in a manner that prevents huge bursts of PoW submissions after periods without any work online.
Naive Approach:
The naive approach, giving each work its own target value and keeping the rest as is, sucks as well ... every time a work is newly created, the target value first has to find its correct value ... until then, the blockchain is bloated with dozens of PoW transactions.
Keeping a global target value and "remembering" the last value throughout periods without work (i.e., periods with no PoW submissions) sucks as well: the problem described in the first bullet point of the problem description still applies. Also, this scheme does not adapt to miners that shut down their miners. A potent miner could boost the difficulty sky high and then leave, rendering the entire project useless, as no PoW would ever get submitted again and, as a consequence, the old target value would be "remembered".
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 03:52:20 PM
Just in case Lannister does a bounty hunting ;-) Here is my first submission of an idea; it is really rough and up for discussion:
1. Every work has its own target value.
2. Once a work is created, the target value gets initialized with the average final target value of the last 10 closed jobs.
3. As long as no PoW submission is made, the target value rises (i.e., the difficulty drops) very quickly, so that within at most 5 blocks it reaches the "least possible difficulty" (to help readjust to a changed miner population).
4. There can be at most 20 unconfirmed PoW submissions in the memory pool (and in each block) per work.
5. The retargeting mechanism reacts quickly, per block, to adapt to a target value that results in 10 PoW submissions per block on average.
This approach:
- mitigates the problem of a "too easy initial difficulty";
- accounts for a changed number of miners (which can of course change during long periods with no work) in two ways: 1) if the number of miners decreased or potent miners disappeared, it takes at most 5 blocks to "heal" the problem; 2) if we suddenly have too many miners, or potent miners joined in the meantime, they can only clutter the blockchain at a rate of at most 20 tx/block until the retarget mechanism kicks in (should take 1 or at most 2 blocks);
- mitigates the attack where a potent miner "waits" until the difficulty drops and then bursts his precomputed PoW submissions. In this case he can only get through at most 20 until the difficulty readjusts in the following block.
PS: If I win, I dedicate my "bounty" (if one will ever exist) to coralreefer!
coralreefer
October 14, 2016, 04:10:28 PM
lol...thx EK. First, I completely agree that the target should be per work package. However, I don't know if your proposal gets rid of the burst of PoW submissions for low-difficulty work. If no PoW has been submitted yet, the difficulty lowers automatically, but the issue is whether there is no PoW because it's too difficult vs. no one mining that package. If no one is mining it, you would still get the burst of low-difficulty PoW when people jump on board. The simple solution here may be to use your proposal with the caveat that the target can't ever go below a certain level...maybe 0x0000FFFFFFF.... This may suck for the high-WCET packages, but if the author sets the rewards correctly, this would work out (i.e., if their WCET is 10x average, then you would expect their PoW reward to be 10x the average, or miners will go to other packages).
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 04:14:04 PM
The simple solution here may be to use your proposal with the caveat that the target can't ever go below a certain level...maybe 0x0000FFFFFFF....
This is what I was thinking when I said "minimum possible diff" ... I'd rather say "minimum allowed diff". The unconfirmed-PoW limit should bound any burst that may happen due to "new miners" entering the arena. Still not sure if this scheme is solid at all, and how it could be attacked!
coralreefer
October 14, 2016, 04:39:53 PM
EK, I upgraded elastic-core, but I'm not seeing any peers.
Do you need to update your amazon aws nodes to 0.5.0? If so, once upgraded please send a few test xel to XEL-8DES-WHKN-M6SZ-8JRP5 so I can submit some work packages.
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 05:53:38 PM
EK, I upgraded elastic-core, but I'm not seeing any peers.
Do you need to update your amazon aws nodes to 0.5.0? If so, once upgraded please send a few test xel to XEL-8DES-WHKN-M6SZ-8JRP5 so I can submit some work packages.
Yes, the hard fork to 0.5.0 was not yet done ... check your PM for the genesis account info ... you can use this to submit work on your local testnet node.
Ghoom
October 14, 2016, 06:13:00 PM
$ git pull
remote: Counting objects: 68, done.
remote: Compressing objects: 100% (51/51), done.
remote: Total 68 (delta 33), reused 4 (delta 4), pack-reused 7
Unpacking objects: 100% (68/68), done.
From https://github.com/OrdinaryDude/elastic-miner
   7f9cea7..84ae187  master      -> origin/master
   7b77987..57e2e6b  development -> origin/development
Updating 7f9cea7..84ae187
Fast-forward
 lib/ElasticPL.jar                  | Bin 0 -> 131216 bytes
 lib/bcprov-ext-jdk15on-155.jar     | Bin 0 -> 3466487 bytes
 lib/gnu-crypto.jar                 | Bin 0 -> 598036 bytes
 lib/javax-crypto.jar               | Bin 0 -> 96430 bytes
 lib/javax-security.jar             | Bin 0 -> 16969 bytes
 miner.jar                          | Bin 7401526 -> 7367002 bytes
 src/elastic_miner/CryptoStuff.java |   3 ++
 src/elastic_miner/Main.java        |  60 ++++++++++++++++++++++---------------
 8 files changed, 39 insertions(+), 24 deletions(-)
 create mode 100644 lib/ElasticPL.jar
 create mode 100644 lib/bcprov-ext-jdk15on-155.jar
 create mode 100644 lib/gnu-crypto.jar
 create mode 100644 lib/javax-crypto.jar
 create mode 100644 lib/javax-security.jar
$ ./compile.sh
javac: file not found: src/evil/ElasticPL/*.java
Usage: javac <options> <source files>
use -help for a list of possible options
Any idea?
Ghoom
October 14, 2016, 06:26:50 PM
$ ./compile.sh
javac: file not found: src/evil/ElasticPL/*.java
Found in elastic-reference-client/
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 14, 2016, 07:15:12 PM
javac: file not found: src/evil/ElasticPL/*.java ... Found in elastic-reference-client/
No, this will fail, as you are pulling in the wrong ElasticPL version. Just remove the ElasticPL part from the compile script, or use the binary instead for now! I will fix the script in a few minutes.
coralreefer
October 15, 2016, 12:10:24 AM
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built AST parser:
1) The good - it's faster. My C miner is running your latest changes at about 60K evals/sec
2) Bounties seem to be working correctly.
3) m[21]=--2424; Why does your parser allow this? Mine failed, then I tried it in Visual Studio and it failed as well. Is this really valid in ElasticPL?
4) However, I'm not convinced mangle_state is working correctly. I walked your sample ElasticPL through the interpreter line by line, and to me it doesn't look like it fires off the mangle_state for '<', '>', '&', etc. This is the good thing about comparing your code to mine, as it's a true independent test, but I don't know if the bug is in your code or mine. Mine is pretty literal and fires off for each operator...but yours seems to skip some. Not sure if this will be an issue or not.
Now that POW is running fast again, I'll spend some time this weekend to see if all my earlier concerns are resolved. For example, in the old version, I was able to manually feed canceled work packages into my miner and get paid for POW...still need to test this in the new version.
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 15, 2016, 07:30:03 AM
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built ast parser:
1) The good - it's faster. My C miner is running your latest changes at about 60K evals/sec
Yeah, still poor! I guess the best thing you could do is generate C code from the ElasticPL AST directly. This should be trivial; of course, several things would have to be handled correctly ;-) For example, "--2424" (which at the moment is actually valid ElasticPL) would be converted to -1*(-1*(2424)). But if we came up with an ElasticPL-to-C or -OpenCL converter, we would see millions of evals/sec.
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 15, 2016, 09:15:42 AM Last edit: October 15, 2016, 12:32:58 PM by Evil-Knievel
EK, I finally got everything up and running and here's some feedback...keep in mind this is based on my crappy scratch-built ast parser:
1) The good - it's faster. My C miner is running your latest changes at about 60K evals/sec
Might be the solution to all evil!!! Just finished a rudimentary, not-yet-tested Elastic-to-C compiler. See the current ElasticPL github tree. Just do

java -jar ElasticToCCompiler.jar yourprogram.spl

The test program, written in ElasticPL:

m[18]=-(1>2);
m[19]=-2424;
m[19]=2424;
m[20]=1-2424;
m[21]=--2424;
m[21]=-3;
verify m[1]<=50000;

converts to (and is saved as yourprogram.spl.c):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int32_t m[64000];

int execute();

int main(){
    clock_t start, end;
    int counter = 0;
    start = clock();
    while(1==1){
        end = clock();
        execute();
        counter=counter+1;
        if((double)(end-start)/CLOCKS_PER_SEC >=1) break;
    }
    printf("BENCHMARK: %d evaluations per second.\n",counter);
}

int execute(){
    m[18] = (-1*((((1) > (2))?1:0)));
    m[19] = (-1*(2424));
    m[19] = 2424;
    m[20] = ((1) - (2424));
    m[21] = (-1*((-1*(2424))));
    m[21] = (-1*(3));
    return ((((m[1]) <= (50000))?1:0));
}

... aaaaand executes 6 MILLION TIMES PER SECOND (at least for now without the sha256 PoW check, but written in pure C this will be no real "overhead"):

beavis@methusalem ~/Development/elastic-pl (git)-[master] % ./a.out
BENCHMARK: 6359785 evaluations per second.

Now, a mining application can link that C "library" and use it for mining! Somehow ... of course, the PoW check and so on is not yet included in the estimation!

EDIT: Also, loops are treated correctly:

m[18]=-(1>2);
m[19]=-2424;
m[19]=2424;
m[20]=1-2424;
m[21]=--2424;
m[21]=-3;
if(m[1]==1){
    repeat(5){
        m[1]=m[2]<<3;
    }
}
verify (m[1] and 255)<35000;

becomes

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int32_t m[64000];

int execute();

int main(){
    clock_t start, end;
    int counter = 0;
    start = clock();
    while(1==1){
        end = clock();
        execute();
        counter=counter+1;
        if((double)(end-start)/CLOCKS_PER_SEC >=3) break;
    }
    printf("BENCHMARK: %d evaluations per second.\n",counter/3);
}

int execute(){
    m[18] = (-1*((((1) > (2))?1:0)));
    m[19] = (-1*(2424));
    m[19] = 2424;
    m[20] = ((1) - (2424));
    m[21] = (-1*((-1*(2424))));
    m[21] = (-1*(3));
    if (((((m[1]) == (1))?1:0)) !=0 ) {
        int loop1 = 0;
        for (; loop1 < (5); ++loop1) {
            m[1] = ((m[2]) << (3));
        }
    }
    return ((((((m[1]) & (255))) < (35000))?1:0));
}

Aaaand things may become complicated in C, which I suspect to be levelled out by GCC anyway. Example for the C that our "fail-safe modulo function" generates:

m[0] = (((1) != 0)?(((m[1]) % (1))):0);
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 15, 2016, 09:29:11 AM
Btw. I followed HunterMinerCrafter's advice and entirely dropped the Boolean type. We are working on word-width types only.
coralreefer
October 15, 2016, 12:44:35 PM
Just finished a rudimentary, not-yet-tested Elastic-to-C compiler. [...] Now, a mining application can link that C "library" and use it for mining!
I think this sounds like a good approach and may finally give Elastic the performance that it needs. As mentioned previously, I only put together my miner as a way to test the AST parser I wrote (I simply wanted to learn about AST trees). However, I'll clean up my miner and get it posted to github this weekend if someone wants to rip out my AST logic and replace it with the C interpreter you have described above.
Evil-Knievel
Legendary
Offline
Activity: 1260
Merit: 1168
October 15, 2016, 12:47:44 PM
I think this sounds like a good approach and may finally give Elastic the performance that it needs.
As mentioned previously, I only put together my miner as a way to test the AST Parser I wrote (I simply wanted to learn about AST trees). However, I'll clean up my miner and get it posted to github this weekend if someone wants to rip out my AST logic and replace it with a C interpreter that you have described above.
The ideal would be to use your AST parser to create C code on the fly, which is then compiled and linked into a library that is, in turn, used to execute the logic inside your miner ;-) A polymorphic miner, basically, changing its own code depending on the live work. On-the-fly inline assembler would be great (and geeky) as well, but I think that's way too hard for now. If you post your code to the git, I will try to "hack in" the library approach shortly after ;-)
coralreefer
October 15, 2016, 01:42:11 PM Last edit: October 15, 2016, 02:04:22 PM by coralreefer
EK, I have posted my miner here: https://github.com/sprocket-fpga/xel_miner
Please keep in mind that this miner is in the earliest stages of development and still has lots of issues.
Edit: The AST is stored in the array vm_ast. I'm assuming this is what you'd need to traverse for the C code interpreter.
radley
Member
Offline
Activity: 163
Merit: 10
Will add something
October 15, 2016, 02:00:59 PM
Where can I buy?