Author Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer  (Read 450429 times)
Evil-Knievel (Legendary) | October 22, 2016, 01:48:56 PM | #2321

Quote
That is great... you can take my new "Dual KGW3 with Bitbreak". It is similar to DGW (Dark Gravity Wave) but better: after 6 hours without a block, the wallet reduces the difficulty. This currently runs in Europecoin and Bitsend.
https://github.com/LIMXTEC/BitSend/blob/master/src/diff_bitsend.h#L18

Nice one! I will have a look at it now!
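
For readers following along, here is a minimal sketch of the rule described in the quote, assuming a moving-average retarget plus a 6-hour emergency cut; it is not the actual diff_bitsend.h code, and every name and constant in it is hypothetical:

Code:
// Illustrative only: a KGW/DGW-style moving-average retarget with a
// "bitbreak" that slashes difficulty once the chain has stalled for 6 hours.
#include <algorithm>
#include <cstdint>

static const int64_t TARGET_SPACING = 10 * 60;      // desired block time (assumed)
static const int64_t BITBREAK_AFTER = 6 * 60 * 60;  // 6 hours without a block

// `difficulty` is an abstract scalar: higher means harder.
double NextDifficulty(double currentDifficulty,
                      int64_t lastBlockTime, int64_t now,
                      double avgActualSpacing /* over a trailing window */) {
    // Emergency break: the chain has stalled, so make the next block much easier.
    if (now - lastBlockTime >= BITBREAK_AFTER)
        return currentDifficulty / 10.0;            // drastic but bounded cut

    // Normal behaviour: nudge difficulty so the observed average spacing
    // drifts back toward the target spacing, with a capped per-step swing.
    double adjust = avgActualSpacing / double(TARGET_SPACING);
    adjust = std::clamp(adjust, 0.25, 4.0);
    return currentDifficulty / adjust;
}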
Evil-Knievel (Legendary) | October 22, 2016, 01:52:00 PM | #2322

Quote
Here is a question I've had for a while now; it's probably been answered somewhere:
How is the size of a "job block" determined? I guess job authors could chop up the work they want done in multiple ways (e.g. 5x2/10, 10x1/10…). How is this determined? And wouldn't this work for or against the retargeting issues?

I'll answer right away. But could you explain what exactly you mean by a "job block"?
ttookk (Hero Member) | October 22, 2016, 01:58:43 PM | #2323

Quote
Here is a question I've had for a while now; it's probably been answered somewhere:
How is the size of a "job block" determined? I guess job authors could chop up the work they want done in multiple ways (e.g. 5x2/10, 10x1/10…). How is this determined? And wouldn't this work for or against the retargeting issues?

Quote
I'll answer right away. But could you explain what exactly you mean by a "job block"?

Well, not exactly, to be honest. The way I understand it, the job is chopped into smaller chunks, which are to be solved by the miners. I meant those chunks. How is their size determined? By the job authors, I assume?
Evil-Knievel (Legendary) | October 22, 2016, 02:24:01 PM (last edit: 02:37:40 PM) | #2324

Quote
Here is a question I've had for a while now; it's probably been answered somewhere:
How is the size of a "job block" determined? I guess job authors could chop up the work they want done in multiple ways (e.g. 5x2/10, 10x1/10…). How is this determined? And wouldn't this work for or against the retargeting issues?

Quote
I'll answer right away. But could you explain what exactly you mean by a "job block"?

Quote
Well, not exactly, to be honest. The way I understand it, the job is chopped into smaller chunks, which are to be solved by the miners. I meant those chunks. How is their size determined? By the job authors, I assume?

Actually, right now the jobs are not chopped up at all! They can range from 1 to MAX_WCET in complexity and required computational effort. That is why we need to differentiate between jobs and give each job its own target value; otherwise small (easy) jobs would attract far more PoW submissions than those that take longer to execute.
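
As a minimal sketch of what "every job gets its own target value" could look like (hypothetical names and constants, not Elastic's actual code):

Code:
// Illustrative only: each job keeps its own target, retargeted so that it
// attracts a desired number of PoW submissions per block regardless of how
// cheap the job is to execute.
#include <algorithm>

struct Job {
    double target;   // higher target = easier PoW for this job
};

static const int POW_PER_BLOCK = 10;   // desired submissions per block (assumed)

// Called once per block for every open job.
void RetargetJob(Job& job, int powReceivedLastBlock) {
    // Too many submissions -> lower the target (harder);
    // too few -> raise it (easier). The +1 terms avoid division by zero.
    double ratio = (powReceivedLastBlock + 1.0) / (POW_PER_BLOCK + 1.0);
    ratio = std::clamp(ratio, 0.5, 2.0);   // bound the per-block movement
    job.target /= ratio;
}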
ttookk (Hero Member) | October 22, 2016, 04:04:35 PM | #2325

Quote
Here is a question I've had for a while now; it's probably been answered somewhere:
How is the size of a "job block" determined? I guess job authors could chop up the work they want done in multiple ways (e.g. 5x2/10, 10x1/10…). How is this determined? And wouldn't this work for or against the retargeting issues?

Quote
I'll answer right away. But could you explain what exactly you mean by a "job block"?

Quote
Well, not exactly, to be honest. The way I understand it, the job is chopped into smaller chunks, which are to be solved by the miners. I meant those chunks. How is their size determined? By the job authors, I assume?

Quote
Actually, right now the jobs are not chopped up at all! They can range from 1 to MAX_WCET in complexity and required computational effort. That is why we need to differentiate between jobs and give each job its own target value; otherwise small (easy) jobs would attract far more PoW submissions than those that take longer to execute.

Gotcha. You probably can't chop up every job anyway, so that idea would not work.

Doesn't matter; fast difficulty retargeting with an unstoppable PoS chain looks brilliant, actually. I wouldn't even mind if miners maxed out the possible PoW submissions; it should be seen as part of the game. Although, is there a risk of miners specializing in this, grabbing the rewards, then stopping mining?
coralreefer (Sr. Member) | October 22, 2016, 05:46:34 PM | #2326

Quote
Although, is there a risk of miners specializing in this, grabbing the rewards, then stopping mining?

That's the risk I've observed since the beginning; however, I've come around to the point of view that if the bounty rewards are set correctly, there would be no incentive for miners to jump around for PoW rewards, as this may result in them mining packages with undervalued bounty rewards.
ttookk (Hero Member) | October 22, 2016, 08:52:21 PM | #2327

Quote
Although, is there a risk of miners specializing in this, grabbing the rewards, then stopping mining?

Quote
That's the risk I've observed since the beginning; however, I've come around to the point of view that if the bounty rewards are set correctly, there would be no incentive for miners to jump around for PoW rewards, as this may result in them mining packages with undervalued bounty rewards.

Yeah, I guess one way or another, that's the best we can hope for. But I think it's possible, provided Elastic gets at least some amount of traction. As with other things, this is make or break based on the popularity of Elastic, which is why I think it would be good to use at least some of the BTC collected for promotion.
Evil-Knievel, if I remember correctly, you mentioned that you work at a university and would like to use the Elastic network yourself, right? You sound like the main target group, then. Do you have an idea how we could reach other people in that group, e.g. through promotion on certain sites, at events, or similar?
HomoHenning (Hero Member) | October 23, 2016, 11:59:23 PM | #2328

I am very happy to have invested in this project. Thanks to everyone who is contributing to it!
Jayjay04 (Legendary) | October 24, 2016, 03:48:39 PM | #2329

Quote
I am very happy to have invested in this project. Thanks to everyone who is contributing to it!

I am too. Even though I do not understand much of what you are discussing, I see great potential here!
Keep up the good work, guys!

Evil-Knievel (Legendary) | October 24, 2016, 05:33:30 PM | #2330

After some extensive testing I found a flaw in our new retargeting mechanism.

What if two jobs are online, one far more lucrative than the other? Since we now have independent target values for both of them, combined with the "inverse slowstart" that drastically reduces the difficulty of jobs which get started but do not receive any PoW in the first three blocks, the following happens:

1.) Miners mine the lucrative job only.
2.) The second (less lucrative) job gets no PoW packages at all.
3.) The difficulty of the second job starts dropping fast, because the network thinks that no potent miners are online anymore. This continues until it reaches the minimum possible value.
4.) When the first job is over, all miners switch at once to the second job at the easiest possible difficulty.
5.) FAIL!
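
To make the failure mode concrete, here is a toy simulation under assumed numbers; the halving cut per empty block and the floor of 1 are hypothetical, not Elastic's actual parameters:

Code:
// Toy simulation of the collapse described above: the starved job's
// difficulty halves every block in which it receives no PoW, until it
// hits the minimum, well before the lucrative job finishes.
#include <cstdio>

int main() {
    double difficulty = 1000000.0;       // starting difficulty (assumed)
    const double kMinDifficulty = 1.0;   // protocol minimum (assumed)

    // Blocks pass while every miner works on the more lucrative job,
    // so this job receives zero PoW submissions and keeps getting easier.
    int block = 0;
    while (difficulty > kMinDifficulty) {
        difficulty /= 2.0;               // "inverse slowstart" style cut (assumed factor)
        if (difficulty < kMinDifficulty) difficulty = kMinDifficulty;
        std::printf("block %2d: difficulty %.1f\n", ++block, difficulty);
    }
    // After ~20 blocks the job sits at minimum difficulty; when the
    // lucrative job ends, everyone floods it at the easiest possible target.
    return 0;
}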
Evil-Knievel (Legendary) | October 24, 2016, 05:42:12 PM | #2331

Quote
If I remember correctly, you mentioned that you work at a university
Not anymore; I'm finished with my PhD now! But it was a pain in the ass to carry out all these optimization tasks on shitty hardware! Hopefully the guys who come after me won't have these problems. ;)
ttookk (Hero Member) | October 24, 2016, 07:26:53 PM | #2332

Quote
After some extensive testing I found a flaw in our new retargeting mechanism.

What if two jobs are online, one far more lucrative than the other? Since we now have independent target values for both of them, combined with the "inverse slowstart" that drastically reduces the difficulty of jobs which get started but do not receive any PoW in the first three blocks, the following happens:

1.) Miners mine the lucrative job only.
2.) The second (less lucrative) job gets no PoW packages at all.
3.) The difficulty of the second job starts dropping fast, because the network thinks that no potent miners are online anymore. This continues until it reaches the minimum possible value.
4.) When the first job is over, all miners switch at once to the second job at the easiest possible difficulty.
5.) FAIL!

Hmmm, but wouldn't that mean that once the difficulty is low and miners jump on it, the difficulty would rise again?

A possible solution might be some kind of monitoring that sets a minimum difficulty based on overall network activity. The worst case would be that no one jumps on it because the payout is too low compared to the difficulty, but that would be the job author's problem.
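
A minimal sketch of that monitoring idea, with hypothetical names and an assumed tuning constant: derive the per-job floor from network-wide activity instead of a fixed protocol minimum.

Code:
// Illustrative only: a per-job difficulty floor derived from overall
// network activity, so an idle job cannot fall to the absolute protocol
// minimum while the network as a whole is busy.
#include <algorithm>

// totalPowLastBlock: PoW submissions across ALL jobs in the last block.
// Returns the lowest difficulty any single job is allowed to reach.
double MinDifficultyFloor(int totalPowLastBlock, double networkDifficulty) {
    if (totalPowLastBlock == 0)
        return 1.0;   // truly idle network: allow the protocol minimum

    // Busy network: keep every job within a fixed factor of the
    // network-wide difficulty (the factor of 100 is an assumed constant).
    return std::max(1.0, networkDifficulty / 100.0);
}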
ImI (Legendary) | October 24, 2016, 09:43:46 PM | #2333

Quote
If I remember correctly, you mentioned that you work at a university
Not anymore; I'm finished with my PhD now! But it was a pain in the ass to carry out all these optimization tasks on shitty hardware! Hopefully the guys who come after me won't have these problems. ;)

grats!
ttookk (Hero Member) | October 24, 2016, 09:49:09 PM | #2334

Quote
If I remember correctly, you mentioned that you work at a university
Not anymore; I'm finished with my PhD now! But it was a pain in the ass to carry out all these optimization tasks on shitty hardware! Hopefully the guys who come after me won't have these problems. ;)

Quote
grats!

Indeed
klintay (Legendary) | October 25, 2016, 04:48:06 AM | #2335

Quote
If I remember correctly, you mentioned that you work at a university
Not anymore; I'm finished with my PhD now! But it was a pain in the ass to carry out all these optimization tasks on shitty hardware! Hopefully the guys who come after me won't have these problems. ;)

Congrats bro! What kind of hardware do you need?
ttookk (Hero Member) | October 25, 2016, 10:53:29 AM | #2336

Quote
After some extensive testing I found a flaw in our new retargeting mechanism.

What if two jobs are online, one far more lucrative than the other? Since we now have independent target values for both of them, combined with the "inverse slowstart" that drastically reduces the difficulty of jobs which get started but do not receive any PoW in the first three blocks, the following happens:

1.) Miners mine the lucrative job only.
2.) The second (less lucrative) job gets no PoW packages at all.
3.) The difficulty of the second job starts dropping fast, because the network thinks that no potent miners are online anymore. This continues until it reaches the minimum possible value.
4.) When the first job is over, all miners switch at once to the second job at the easiest possible difficulty.
5.) FAIL!

Quote
Hmmm, but wouldn't that mean that once the difficulty is low and miners jump on it, the difficulty would rise again?

A possible solution might be some kind of monitoring that sets a minimum difficulty based on overall network activity. The worst case would be that no one jumps on it because the payout is too low compared to the difficulty, but that would be the job author's problem.

Quoted for visibility.

What would speak against adjusting the minimum difficulty based on network activity?
Evil-Knievel (Legendary) | October 25, 2016, 01:08:08 PM | #2337

Quote
What would speak against adjusting the minimum difficulty based on network activity?

This was the case before... at least in a sense! Isn't that equivalent to a "global target value"?
coralreefer (Sr. Member) | October 25, 2016, 02:23:05 PM | #2338

I'll throw this out there even though it has potential for issues. Maybe we incorporate WCET into the targeting calc. So, for example, internally the network has a global difficulty that maintains 10 PoW submissions per block, but at the work-package level that global difficulty is adjusted based on WCET, so simple jobs get a much harder difficulty than complex jobs. This way they all change at the same time based on the global network difficulty, not based on who is mining that particular job.

To me the main issue would be how accurate the WCET calc is.
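
A minimal sketch of this proposal, assuming hypothetical names and a simple linear scaling by WCET:

Code:
// Illustrative only: one global difficulty, tuned network-wide for ~10 PoW
// submissions per block, scaled per work package by its WCET so that cheap
// jobs face proportionally harder targets.
#include <cstdint>

// globalDifficulty: retargeted so total PoW per block stays near the goal.
// jobWcet: worst-case execution time estimate for this package.
// referenceWcet: a network-chosen baseline WCET (assumed normalization).
double JobDifficulty(double globalDifficulty,
                     uint64_t jobWcet, uint64_t referenceWcet) {
    // A job half as expensive as the reference gets twice the difficulty,
    // keeping the expected reward per unit of work roughly uniform.
    return globalDifficulty * (double(referenceWcet) / double(jobWcet));
}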
Evil-Knievel (Legendary) | October 25, 2016, 02:31:29 PM | #2339

Quote
I'll throw this out there even though it has potential for issues. Maybe we incorporate WCET into the targeting calc. So, for example, internally the network has a global difficulty that maintains 10 PoW submissions per block, but at the work-package level that global difficulty is adjusted based on WCET, so simple jobs get a much harder difficulty than complex jobs. This way they all change at the same time based on the global network difficulty, not based on who is mining that particular job.

To me the main issue would be how accurate the WCET calc is.

Also thought about this one, but I think it's really hard to calculate the WCET accurately...
Not only because of complex functions like hashes or EC math, but also because the C compiler heavily optimizes the generated assembly, and we would not know which operations were actually left intact, which were replaced by others, and which were stripped out entirely.

You could easily generate a program which at first sight has a WCET of 10000000 but which can be executed with only a few CPU instructions after being compiled with -O3.
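
A small example of that effect (assuming gcc or clang; whether a particular compiler version folds this exact loop is not guaranteed):

Code:
// Naively, this function "costs" 10,000,000 loop iterations, so a WCET
// estimator that counts source-level operations would rate it as expensive.
#include <cstdint>

uint64_t looks_expensive() {
    uint64_t acc = 0;
    for (uint64_t i = 0; i < 10000000; ++i)
        acc += i;          // no side effects, closed-form summable
    // With -O3, gcc/clang typically replace the loop with the constant
    // n*(n-1)/2, so the compiled function executes in a few instructions.
    return acc;
}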
Evil-Knievel (Legendary) | October 25, 2016, 02:39:42 PM | #2340

I even thought it could be possible to "force" everyone to work on a new job for x blocks, to measure the network's performance on that particular job for the initial difficulty, and then just let it adjust normally from there.
But this again sucks, because it could be used to DoS other jobs by repeatedly creating new ones!