Author Topic: [ANN][XEL] Elastic Project - The Decentralized Supercomputer  (Read 450510 times)
akhavr
Full Member
Activity: 235
Merit: 100
October 19, 2016, 08:00:00 AM
 #2301

Regarding the masternodes-thing, here are some more thoughts on that matter:

- Suppose some people manage masternodes. Apart from working as regular nodes, it is their job to monitor the network. Both job authors and miners need a minimum number of trusted masternodes in their direct contacts list to check for updates, forks and so on. If a sufficient number of masternodes red-flag a transaction or warn a user that they might be on a fork, their submissions are no longer processed.

- To become a masternode, you can "borrow" XEL from users; the more you have, the more "reputation" you have.

This looks more like DPOS/LPOS.

Users can still decide to include smaller masternodes in their personal list. Since it is in their best interest to have up-to-date and trustworthy masternodes as contacts, I don't see much room for shenanigans here.
Masternodes won't have direct access to the funds, but the XEL sits in an address that is associated with their account. In addition to being a node with a keen eye on the whole network, a masternode will work as a staking pool, paying out shares to the contributors (although this may pose a threat, since one really big pool tends to form over time).

If each masternode had to hold 1000 XEL (the number is arbitrary) and no two masternodes could share the same IP, this would push towards decentralization.

Now, two of the questions I have are:

What would we do if a reputation system were implemented? One scenario would be that masternodes are allowed to vote for miners with the funds they hold, but I am not too keen on that idea…

I don't think getting a percentage of transaction fees via the staking pool will be enough incentive to run a masternode, at least not if we want a considerable number of them to exist. Should there be a financial incentive, and if yes, what does it look like? If masternodes were allowed to vote for miners, there would be another source of income, but if they aren't, how do we reimburse them for what could turn out to be a lot of work, involving 4-in-the-morning phone calls because the chain forked, and so on…

Morning here, not enough caffeine in the blood yet. Can you explain in more detail why you're not OK with masternodes taking care of miner ratings? I can't see an attack vector at a glance.

tomkat
Hero Member
Activity: 1022
Merit: 507
October 19, 2016, 08:40:27 AM
 #2302

Regarding the masternodes-thing, here are some more thoughts on that matter:

- Suppose some people manage masternodes. Apart from working as regular nodes, it is their job to monitor the network. Both job authors and miners need a minimum number of trusted masternodes in their direct contacts list to check for updates, forks and so on. If a sufficient number of masternodes red-flag a transaction or warn a user that they might be on a fork, their submissions are no longer processed.

Wouldn't that mean that by the time a user is warned about being on a fork, it's basically too late, because the forked transactions can't be reversed anyway?

- To become a masternode, you can "borrow" XEL from users; the more you have, the more "reputation" you have. Users can still decide to include smaller masternodes in their personal list. Since it is in their best interest to have up-to-date and trustworthy masternodes as contacts, I don't see much room for shenanigans here.
Masternodes won't have direct access to the funds, but the XEL sits in an address that is associated with their account. In addition to being a node with a keen eye on the whole network, a masternode will work as a staking pool, paying out shares to the contributors (although this may pose a threat, since one really big pool tends to form over time).

It would be much easier to implement voting for masternodes instead of doing what you described above. Why move ("borrow") XEL between accounts if the funds aren't accessible anyway?
With voting, users would support masternodes without any need to move XEL around.

Also, if masternodes were allowed to vote for miners (as you described below), then it's actually the users' votes, so we are back to miners holding the central position in the network, and masternodes being, well, basically useless?

Now, two of the questions I have are:

What would we do if a reputation system were implemented? One scenario would be that masternodes are allowed to vote for miners with the funds they hold, but I am not too keen on that idea…

I don't think getting a percentage of transaction fees via the staking pool will be enough incentive to run a masternode, at least not if we want a considerable number of them to exist. Should there be a financial incentive, and if yes, what does it look like? If masternodes were allowed to vote for miners, there would be another source of income, but if they aren't, how do we reimburse them for what could turn out to be a lot of work, involving 4-in-the-morning phone calls because the chain forked, and so on…

Perhaps what you have in mind is rather a system similar to NEM, with their delegated harvesting and nodes/supernodes?
To become a supernode there, one needs to keep a collateral of 3M XEM, which has become a substantial amount recently.
ttookk
Hero Member
Activity: 994
Merit: 513
October 19, 2016, 07:09:48 PM
Last edit: October 19, 2016, 07:43:55 PM by ttookk
 #2303

@akhavr, @tomkat, I won't use quotes this time; I think they would only clutter the answer. But here are some more thoughts on the matter. Obviously, they are open for discussion. Please be aware that I don't think my ideas are in any way "better" than others; I'm just putting them out there. I am not completely convinced by them myself.

1.: I don't like the idea of a "hard number" one has to hold to run a masternode, because it is not easily scalable. By fixing a certain number, we inherently decide how many masternodes there are going to be. I'd want to keep this open, to keep the system flexible. My idea would be a system loosely based on the proposed reputation system: the amount associated with a masternode gives it a certain amount of reputation, and users decide for themselves how much reputation they think is enough. There would still be an amount of XEL needed to run a node, but this amount would be comparatively low and would only serve as an anti-spam measure. Theoretically, someone could run a masternode with only that amount of XEL associated with it (which pretty much means that every node is a potential "masternode"; the point at which a node becomes a masternode would not be fixed), but in practice, users would communicate and decide how much associated XEL they think is enough. Maybe the term "masternode" is not accurate for this specific system, since it borrows ideas from DPoS, but I'll keep using it for this post. If we need an alternative word, I suggest "custodians".

2.: "Borrowing" XEL as a masternode or getting votes is essentially the same thing (I called it "associated" in the paragraph above). My thought was that the XEL would remain accessible to the users at all times. The only real ways to vote on a blockchain I am aware of are to vote by the number of tokens in a given address, or to burn a certain amount (let's ignore that option here). To prevent double voting, you either have to continually monitor the addresses which voted, or create a snapshot at a certain point in time. Both are conceivable. I prefer the second one; it sounds like it has less overhead and allows at least some level of planning ahead.
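
To make the snapshot variant concrete, here is a rough sketch of how vote tallying could look (illustrative C++ only; every type and name here is hypothetical, nothing like this exists in the code yet):

Code:
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

using Address = std::string;

struct Vote {
    Address voter;       // account casting the vote
    Address masternode;  // account the vote supports
};

// Weight each vote by the XEL balance the voter held at a fixed snapshot
// height; coins moved after the snapshot cannot be used to vote twice.
std::map<Address, uint64_t> tallyVotes(
    const std::map<Address, uint64_t>& balancesAtSnapshot,
    const std::vector<Vote>& votes)
{
    std::map<Address, uint64_t> weightPerMasternode;
    std::set<Address> alreadyVoted;  // one vote per address

    for (const Vote& v : votes) {
        auto bal = balancesAtSnapshot.find(v.voter);
        if (bal == balancesAtSnapshot.end())
            continue;                                    // no stake at snapshot
        if (!alreadyVoted.insert(v.voter).second)
            continue;                                    // ignore double votes
        weightPerMasternode[v.masternode] += bal->second;  // weight = stake
    }
    return weightPerMasternode;
}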

3.: The main function of masternodes would be to approve transactions: not in the sense of writing them into the blockchain, but to check whether "the grammar" is right, whether the client version fits and so on. If they are fine, they get forwarded; if not, they get rejected and the user is warned that they are not up to date. When a hardfork is planned, the masternodes are the ones who are supposed to react first and keep "bad stuff" from happening. They obviously need to check the code and decide whether the hardfork proposal comes from a trusted source or not.
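
As a toy illustration of the kind of cheap gatekeeping meant here (all field names invented for the example):

Code:
#include <cstdint>
#include <string>

struct Tx {
    uint32_t clientVersion;  // version of the client that built the tx
    bool     syntaxOk;       // result of a stubbed "grammar" check
    std::string payload;
};

enum class Verdict { Forward, RejectOutdated, RejectMalformed };

// A masternode forwards only transactions that pass the cheap checks;
// rejected senders get a warning instead of silently vanishing work.
Verdict preValidate(const Tx& tx, uint32_t minSupportedVersion) {
    if (tx.clientVersion < minSupportedVersion)
        return Verdict::RejectOutdated;   // warn: client not up to date
    if (!tx.syntaxOk || tx.payload.empty())
        return Verdict::RejectMalformed;  // "grammar" check failed
    return Verdict::Forward;
}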

4.: To be honest, the comment about masternodes not being supposed to vote for miners was based on a feeling alone. Here are some pros and cons:

Pro masternodes voting on miners:

- It would be the masternodes' job to fully check a miner. Since the masternode operator's reputation is at stake, the community could be sure that at least someone actually put some due diligence into it.
- Voting for miners by all users may be quite chaotic, with only big miners being able to "stand out" and earn enough reputation. This system would also run the risk of leaving out miners with a lot of computing power who are just bad at marketing themselves.
- While a miner may not be willing to dox themselves to the whole internet, they may be OK with Skyping with a masternode operator or otherwise proving their trustworthiness.

Contra masternodes voting on miners:

- It would put a lot of power in few hands. If a big masternode operator and a miner have problems with each other, the masternode operator may have enough leverage to kick the miner out as an act of personal vengeance.
- Depending on the thoroughness of the research a masternode operator puts into it, it may be easier to fool a few or even a single person than a whole community.
- Since a masternode operator's reputation would hinge greatly on the trustworthiness of the miners they choose, they may be too careful about whom they choose, leaving out a lot of smaller and newer miners.
- The process of choosing trustworthy miners may become a "black box" for the rest of the community, with deals being made "behind closed doors". Not sure how much of a problem that would be, though.

All in all, I personally came to the conclusion that the pro arguments outweigh the contra arguments, so I was wrong on that end.

5.: Since masternodes would be an important part of the system and would mean a lot of work, at least at times, masternode operators need to be rewarded. I'm not sure what the best option would be here.

More thoughts on the matter are welcome. However, I somehow get the feeling that I am making a relatively small problem very complex. Am I wrong here?
tomkat
Hero Member
Activity: 1022
Merit: 507
October 20, 2016, 06:45:50 PM
 #2304

@akhavr, @tomkat, I won't use quotes this time; I think they would only clutter the answer. But here are some more thoughts on the matter. Obviously, they are open for discussion. Please be aware that I don't think my ideas are in any way "better" than others; I'm just putting them out there. I am not completely convinced by them myself.

1.: I don't like the idea of a "hard number" one has to hold to run a masternode, because it is not easily scalable. By fixing a certain number, we inherently decide how many masternodes there are going to be. I'd want to keep this open, to keep the system flexible. My idea would be a system loosely based on the proposed reputation system: the amount associated with a masternode gives it a certain amount of reputation, and users decide for themselves how much reputation they think is enough. There would still be an amount of XEL needed to run a node, but this amount would be comparatively low and would only serve as an anti-spam measure. Theoretically, someone could run a masternode with only that amount of XEL associated with it (which pretty much means that every node is a potential "masternode"; the point at which a node becomes a masternode would not be fixed), but in practice, users would communicate and decide how much associated XEL they think is enough. Maybe the term "masternode" is not accurate for this specific system, since it borrows ideas from DPoS, but I'll keep using it for this post. If we need an alternative word, I suggest "custodians".

With "comparatively low" I guess you mean a low percentage (like 5-10%) of collateral in Dash or NEM ?
Well, low collateral amount with additional users voting would be similar to DPoS but since XEL is PoW, then we could call it DPoW Wink
The result would be regular nodes voting for their delegates that are doing network tech maintenance.
However, the key to succesful implementation of masternodes is a balance between the reward they get and the service they provide.
In Dash, masternodes provide important network services like anonimization and instant send, and those services aren't planned in XEL afaik (nor exist in Next on which XEL is based).
So, the only task for masternodes in XEL would be the technical maintenance, which I doubt could be valued at incentivizing level Smiley

2.: "Borrowing" XEL as a masternode or getting votes is essentially the same thing (I called it "associated" in the paragraph above). My thought was that the XEL would remain accessible to the users at all times. The only real ways to vote on a blockchain I am aware of are to vote by the number of tokens in a given address, or to burn a certain amount (let's ignore that option here). To prevent double voting, you either have to continually monitor the addresses which voted, or create a snapshot at a certain point in time. Both are conceivable. I prefer the second one; it sounds like it has less overhead and allows at least some level of planning ahead.

Or, the voting could be a transaction with a sub-token (called VoXEL, for instance :) ), which the users could send to (or withdraw from) masternodes' voting accounts.
In Nxt this should be possible with assets, right?

3.: The main function of masternodes would be to approve transactions: not in the sense of writing them into the blockchain, but to check whether "the grammar" is right, whether the client version fits and so on. If they are fine, they get forwarded; if not, they get rejected and the user is warned that they are not up to date. When a hardfork is planned, the masternodes are the ones who are supposed to react first and keep "bad stuff" from happening. They obviously need to check the code and decide whether the hardfork proposal comes from a trusted source or not.

The worst thing that might happen would be a malicious (group of) masternodes in XEL's early days.
That leads straight to the conclusion that IF we want masternodes, they should be implemented at some later development stage.

4.: To be honest, the comment about masternodes not being supposed to vote for miners was based on a feeling alone. Here are some pros and cons:

Pro masternodes voting on miners:

- It would be the masternodes' job to fully check a miner. Since the masternode operator's reputation is at stake, the community could be sure that at least someone actually put some due diligence into it.
- Voting for miners by all users may be quite chaotic, with only big miners being able to "stand out" and earn enough reputation. This system would also run the risk of leaving out miners with a lot of computing power who are just bad at marketing themselves.
- While a miner may not be willing to dox themselves to the whole internet, they may be OK with Skyping with a masternode operator or otherwise proving their trustworthiness.

Contra masternodes voting on miners:

- It would put a lot of power in few hands. If a big masternode operator and a miner have problems with each other, the masternode operator may have enough leverage to kick the miner out as an act of personal vengeance.
- Depending on the thoroughness of the research a masternode operator puts into it, it may be easier to fool a few or even a single person than a whole community.
- Since a masternode operator's reputation would hinge greatly on the trustworthiness of the miners they choose, they may be too careful about whom they choose, leaving out a lot of smaller and newer miners.
- The process of choosing trustworthy miners may become a "black box" for the rest of the community, with deals being made "behind closed doors". Not sure how much of a problem that would be, though.

All in all, I personally came to the conclusion that the pro arguments outweigh the contra arguments, so I was wrong on that end.

5.: Since masternodes would be an important part of the system and would mean a lot of work, at least at times, masternode operators need to be rewarded. I'm not sure what the best option would be here.

More thoughts on the matter are welcome. However, I somehow get the feeling that I am making a relatively small problem very complex. Am I wrong here?


The "cons" above are more convicing at the moment (at least for me) Smiley
This is because incorrect, or too early implementation, of any feature may create more harm than good to the network.

As a side note, as EK mentioned, perhaps Bitcoin's soft-fork self-governance with their 75% level of feature activation could be the easiest solution at this stage?
The majority will always find its way, won't it...?
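
For reference, a toy version of that 75% rule, counting version signals over a 1000-block window as Bitcoin did for BIP 34 (the window and threshold are Bitcoin's; Elastic could pick different numbers):

Code:
#include <cstdint>
#include <vector>

// A feature activates once 750 of the last 1000 blocks (75%) signal a
// block version >= featureVersion.
bool featureActivated(const std::vector<uint32_t>& blockVersions,
                      uint32_t featureVersion)
{
    const size_t window = 1000;
    if (blockVersions.size() < window) return false;

    int signalling = 0;
    for (size_t i = blockVersions.size() - window; i < blockVersions.size(); ++i)
        if (blockVersions[i] >= featureVersion)
            ++signalling;
    return signalling >= 750;  // 75% of the window
}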
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 20, 2016, 09:15:36 PM
 #2305

Had a lot of paperwork to do today! I will catch up with all messages and comment in a few hours.
ttookk
Hero Member
Activity: 994
Merit: 513
October 20, 2016, 09:55:47 PM
Last edit: October 20, 2016, 10:19:58 PM by ttookk
 #2306


With "comparatively low" I guess you mean a low percentage (like 5-10%) of collateral in Dash or NEM ?
Well, low collateral amount with additional users voting would be similar to DPoS but since XEL is PoW, then we could call it DPoW Wink
The result would be regular nodes voting for their delegates that are doing network tech maintenance.
However, the key to succesful implementation of masternodes is a balance between the reward they get and the service they provide.
In Dash, masternodes provide important network services like anonimization and instant send, and those services aren't planned in XEL afaik (nor exist in Next on which XEL is based).
So, the only task for masternodes in XEL would be the technical maintenance, which I doubt could be valued at incentivizing level Smiley

If we put the burden of choosing trustworthy miners on the masternodes' shoulders as well, this would be more than just technical maintenance. Miners would pay a percentage to the masternodes as compensation. I agree that this might tempt a masternode operator not to look too closely at a miner's doings, but the miner would probably have to bribe more than just one masternode operator to gain enough trust. Plus, this would ruin the masternode's reputation.

2.: "Borrowing" XEL as a masternode or getting votes is essentially the same thing (I called it "associated" in the paragraph above). My thought was that the XEL would remain accessible to the users at all times. The only real ways to vote on a blockchain I am aware of are to vote by the number of tokens in a given address, or to burn a certain amount (let's ignore that option here). To prevent double voting, you either have to continually monitor the addresses which voted, or create a snapshot at a certain point in time. Both are conceivable. I prefer the second one; it sounds like it has less overhead and allows at least some level of planning ahead.

Or, the voting could be a transaction with a sub-token (called VoXEL, for instance :) ), which the users could send to (or withdraw from) masternodes' voting accounts.
In Nxt this should be possible with assets, right?


I don't see how using an asset would have any advantage. It would be traded as an independent currency, completely undermining the point of voting, which is that those holding a large amount of XEL have more voting weight than those with little XEL. With a dedicated voting asset, you might hold no XEL at all and still be able to vote. And if you somehow couldn't trade it, it would be absolutely pointless.

3.: The main function of masternodes would be to approve transactions: not in the sense of writing them into the blockchain, but to check whether "the grammar" is right, whether the client version fits and so on. If they are fine, they get forwarded; if not, they get rejected and the user is warned that they are not up to date. When a hardfork is planned, the masternodes are the ones who are supposed to react first and keep "bad stuff" from happening. They obviously need to check the code and decide whether the hardfork proposal comes from a trusted source or not.

The worst thing that might happen would be a malicious (group of) masternodes in XEL's early days.
That leads straight to the conclusion that IF we want masternodes, they should be implemented at some later development stage.
Well, how likely is that? Who do you think the first masternodes will be? Probably the guys working in this thread. If you can't trust them, XEL is fucked right now; you don't need to wait for the mainnet launch.

4.: To be honest, the comment about masternodes not being supposed to vote for miners was based on a feeling alone. Here are some pros and cons:

(…)

All in all, I personally came to the conclusion that the pro arguments outweigh the contra arguments, so I was wrong on that end.

5.: Since masternodes would be an important part of the system and would mean a lot of work, at least at times, masternode operators need to be rewarded. I'm not sure what the best option would be here.

More thoughts on the matter are welcome. However, I somehow get the feeling that I am making a relatively small problem very complex. Am I wrong here?


The "cons" above are more convicing at the moment (at least for me) Smiley
This is because incorrect, or too early implementation, of any feature may create more harm than good to the network.

I don't understand the last part. What do early implementations of features have to do with masternodes and miners? The question at hand was whether masternodes should be the ones to vote for miners, or the network as a whole. You'd have to implement it either way.
About the pro and contra arguments, I'm open. I'm not completely sure about it…

As a side note, as EK mentioned, perhaps Bitcoin's soft-fork self-governance, with its 75% feature-activation threshold, could be the easiest solution at this stage?
The majority will always find its way, won't it...?

I haven't given it much thought, to be honest. But I think a key problem is that Bitcoin's features are backwards compatible and/or the implementations can be rolled out over a long period of time to give users time to adapt, while in an extreme case this might not be true on the Elastic network. In that case, the soft-fork idea might not be sufficient.
coralreefer
Sr. Member
Activity: 464
Merit: 260
October 20, 2016, 11:49:26 PM
 #2307

Hi EK, I think I've got the C translator logic in place in my miner. The main thing missing now is the various crypto algos that ElasticPL supports. Do you happen to know of any good C libraries for the various crypto functions you want included?
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 21, 2016, 10:23:12 AM
 #2308

Hi EK, I think I've got the C translator logic in place in my miner. The main thing missing now is the various crypto algos that ElasticPL supports. Do you happen to know of any good C libraries for the various crypto functions you want included?

Great ;-)
For the last five hours I have been trying to get rid of any "file writing" and "external compiler" calls.
I have linked LLVM and Clang into the binary,
then I programmatically compile the C code into an LLVM module so I can call the functions directly from C (actually, I had to switch to C++) without any library loading.

The LLVM interface works fine; it's just that something still screws up with the unique_ptr thing that is returned by the following routine. I will keep trying.

FYI: This is how I build an LLVM module from the generated C code ... all purely in memory:

Code:
#include "clang/Basic/DiagnosticOptions.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/CodeGen/CodeGenAction.h"
#include "clang/Frontend/CompilerInstance.h"
#include "clang/Frontend/CompilerInvocation.h"
#include "clang/Frontend/TextDiagnosticPrinter.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>
#include <string>
#include <vector>

using namespace clang;
using namespace llvm;
using namespace std;

// Compile generated C code into an LLVM module, entirely in memory:
// no temporary file is written and no external compiler is spawned.
llvm::Module* compile_using_lvm(char* code)
{
    // Virtual file name the in-memory source buffer is remapped to.
    string testCodeFileName = "work_lib.c";

    // Wrap the generated C code in a MemoryBuffer.
    std::string code2(code);
    StringRef testCodeData(code2);
    unique_ptr<llvm::MemoryBuffer> buffer =
        llvm::MemoryBuffer::getMemBufferCopy(testCodeData);

    InitializeAllTargetMCs();
    InitializeAllAsmPrinters();
    InitializeAllTargets();

    // Prepare compilation arguments.
    vector<const char*> args;
    args.push_back(testCodeFileName.c_str());
    args.push_back("-I/usr/include");
    args.push_back("-I./include");

    // Prepare the diagnostics engine.
    DiagnosticOptions DiagOpts;
    TextDiagnosticPrinter* textDiagPrinter =
        new TextDiagnosticPrinter(errs(), &DiagOpts);
    IntrusiveRefCntPtr<DiagnosticIDs> pDiagIDs(new DiagnosticIDs());
    DiagnosticsEngine* pDiagnosticsEngine =
        new DiagnosticsEngine(pDiagIDs, &DiagOpts, textDiagPrinter);

    // Initialize the CompilerInvocation from the argument list.
    CompilerInvocation* CI = new CompilerInvocation();
    CompilerInvocation::CreateFromArgs(*CI, &args[0], &args[0] + args.size(),
                                       *pDiagnosticsEngine);

    // Remap the virtual file name to the in-memory buffer.
    CI->getPreprocessorOpts().addRemappedFile(testCodeFileName, buffer.get());

    // Create and initialize the CompilerInstance.
    CompilerInstance Clang;
    Clang.setInvocation(CI);
    Clang.createDiagnostics();

    // EmitLLVMOnlyAction compiles to LLVM IR without emitting an object file.
    std::unique_ptr<CodeGenAction> Act(new EmitLLVMOnlyAction());
    Clang.ExecuteAction(*Act);

    // Grab the module built by the EmitLLVMOnlyAction.
    std::unique_ptr<llvm::Module> module = Act->takeModule();

    // The remapped file owns the buffer now, so don't free it here.
    buffer.release();
    return module.release();
}
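
For completeness, a rough sketch of how the returned module could then be executed through LLVM's MCJIT engine; "work_main" is only a placeholder for whatever entry point the generated C code exports, and error handling is minimal:

Code:
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <cstdint>
#include <memory>
#include <string>

typedef int (*WorkFn)(uint32_t*);

// JIT-compile the module returned by compile_using_lvm() and hand back a
// directly callable function pointer.
WorkFn jitEntryPoint(llvm::Module* module) {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    std::string err;
    llvm::ExecutionEngine* engine =
        llvm::EngineBuilder(std::unique_ptr<llvm::Module>(module))
            .setErrorStr(&err)
            .create();
    if (!engine)
        return nullptr;  // err holds the reason

    engine->finalizeObject();  // emit machine code now
    return reinterpret_cast<WorkFn>(engine->getFunctionAddress("work_main"));
}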
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 21, 2016, 12:24:11 PM
 #2309

Hi EK, I think I've got the C translator logic in place in my miner. The main thing missing now is the various crypto algos that ElasticPL supports. Do you happen to know of any good C libraries for the various crypto functions you want included?

I have pushed a working Linux version which uses the internal LLVM compiler to transform the work into machine code (JIT engine).
https://github.com/OrdinaryDude/experimental-llvm-miner

However, the speeds are slower than with the other approach:

[14:22:55] CPU0: 267.24 kEval/s
[14:22:56] CPU4: 274.49 kEval/s
[14:22:56] CPU1: 304.25 kEval/s
[14:22:56] CPU3: 324.36 kEval/s
[14:22:56] CPU2: 293.04 kEval/s

That is around 1.4 million executions per second :-/ Damn, and I spent so much time on this approach!


The old hacky approach is a lot faster:

[14:31:34] CPU2: 699.66 kEval/s
[14:31:34] CPU3: 609.22 kEval/s
[14:31:34] CPU0: 692.43 kEval/s
[14:31:34] CPU4: 834.46 kEval/s
[14:31:34] CPU1: 708.38 kEval/s
coralreefer
Sr. Member
Activity: 464
Merit: 260
October 21, 2016, 01:40:40 PM
 #2310

I have pushed a working Linux version which uses the internal LLVM compiler to transform the work into machine code (JIT engine).
https://github.com/OrdinaryDude/experimental-llvm-miner

However, the speeds are slower than with the other approach: (…)


I'm not sure the old way was all that bad.  They will still have to have the compiler installed on their computer, so although complete integration would be great, I don't see that it is necessary.
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 21, 2016, 02:07:17 PM
 #2311

I'm not sure the old way was all that bad.  They will still have to have the compiler installed on their computer, so although complete integration would be great, I don't see that it is necessary.

The nice thing about (lib)LLVM would be that it can be linked in statically (both for Windows and Linux) and the miner could be distributed as one standalone binary without any necessary dependencies. But it's not worth losing 50% of the speed. So we'd best stick to the way it is now ;)

About the library ... I think OpenSSL would be the way to go! I will ask Google now!
coralreefer
Sr. Member
Activity: 464
Merit: 260
October 21, 2016, 04:30:03 PM
 #2312

The nice thing about (lib)LLVM would be that it can be linked in statically (both for Windows and Linux) and the miner could be distributed as one standalone binary without any necessary dependencies.

I didn't realize you'd be able to do that. Yeah, that would be great... hope you can pull it off.

I've made a pretty big change to my parser to handle parameters being passed to crypto functions (hopefully I didn't break anything). So at this point it should just be a matter of tracking down the crypto source and creating wrappers for the crypto functions (to correct for endianness). And of course the parser needs a lot more error handling... I'm just not sure how much effort to put into it, as elastic-core should have already done a more thorough syntax check before the miner ever gets the work.

Just an FYI... right now, for testing purposes, I have all the crypto functions included in the parser, so they will get into the C source file. However, the compile step will obviously fail, as the source code for the crypto functions is not there yet.
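
In case it helps, this is the sort of wrapper I have in mind (purely illustrative; it assumes OpenSSL's SHA256 and a little-endian host, and the names are made up):

Code:
#include <cstddef>
#include <cstdint>
#include <openssl/sha.h>

// ElasticPL state is an array of 32-bit ints, while hash libraries consume
// raw bytes, so the wrapper byte-swaps to a fixed endianness before hashing.
static uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

// Hash `words` 32-bit words of state into out[32] (SHA-256 digest size).
void sha256_wrapper(const uint32_t* state, size_t words, uint8_t out[32]) {
    uint32_t buf[256];  // assumes the state fits; sketch only
    if (words > 256) words = 256;
    for (size_t i = 0; i < words; ++i)
        buf[i] = swap32(state[i]);  // little-endian host -> big-endian bytes
    SHA256(reinterpret_cast<const uint8_t*>(buf),
           words * sizeof(uint32_t), out);
}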
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 22, 2016, 09:10:55 AM
 #2313

I have almost finished the only matured idea so far for the retargeting mechanism, for the next testnet iteration.

I will summarize the other ideas which we discussed here soon. So far I have posted the current description here:
http://elastic-project.com/retargeting. Discussions, detected flaws, criticism and totally different approaches are always welcome. I am not meant to do this alone ;-)

I have termed the retargeting mechanism the ...

Retaining Inverse Slow-Start

Preliminaries

Work authors push work to the Elastic network which they want to have solved by the “miners”. It is important to clarify that at any given time, there may be 0 or more works active in parallel. To ensure that “workers” who work on the work packages are paid out continuously, they are eligible (even if they do not solve the actual task) to submit PoW packages on a regular basis. This principle is similar to what traditional Bitcoin mining pools use to ensure constant payouts to their miners. Ultimately, those PoW packages are solutions of which the “hash” (formed through the execution of the actual program) meets a certain target value. When we say it “meets” a target value, we actually mean that it is lower than a constantly adjusted target value. The target value is meant to be adjusted in such a way that, per work, no more than 10 PoW packages are submitted in each block.

At this point it is worth noting that the blockchain grows by PoS only. That is, PoW submissions are just normal transactions that are included in blocks like any other transaction.

Problems with Traditional Retargeting Methods

Traditionally, blockchains use PoW submissions to extend the blockchain, and the required effort to find a solution that matches a specific target value is always the same (i.e., it takes on average the same number of hash calculations until a certain target value is met). In Elastic, on the contrary, we have many different work packages which may differ significantly in the required computational effort: there may be very simple programs as well as programs that require a significantly higher amount of processing power. If all work packages work towards one global target value, several problems arise.

Very fast-executing jobs may lower the target value (i.e. increase the difficulty) more significantly than jobs which need more computational effort. At the same time, the switch from a fast job to a complex job may result in a heavy drop in the PoW submission rate until the target value readjusts, and vice versa. This may cause heavy oscillations both in the target value and in the rate at which PoW submissions come in.

Furthermore, the target value is meant to provide a good measure of “computational investment”. That means that one PoW submission should be worth the same amount of computational effort across all open works. This cannot be guaranteed if different jobs (with different computational footprints) are working towards the same target value. As a matter of fact, those differences make it hard to estimate a good “per PoW submission price” for a work author, as it (in the traditional scheme) highly depends on the other live work packages in the network.

Another problem is also latent in these traditional approaches: usually, the target value is derived from a sliding window over the last x blocks and the time required to find them, so that, on average, one PoW is found in a preconfigured amount of time. In approaches where the blockchain is extended by the PoW submissions themselves, this scheme may be valid. In the case of Elastic, however, it may happen that no works are live but the blockchain is still extended by the PoS miners. No matter how precisely the target value has converged, after long periods without work the target value may approach the least possible value: it is impossible for such an approach to distinguish between “no PoW miners are active” and “no PoW is submitted because there is no work to do”. This effect may cause immense bursts of PoW submissions when a new work is created after such a long period without work and wakes up a huge number of miners which were on stand-by before.

The Solution

The solution to this problem is a retargeting mechanism which we termed “Retaining Inverse Slow-Start”. Retaining, because we aim to “remember” the average computational power of the miners as a whole, across multiple work packages and across “time gaps” with no work live in the network. Inverse Slow-Start, because we use a mechanism similar to an “inverted” version of TCP's slow-start to quickly adapt the retained target value both to the computational processing power of all miners, in case it changed unnoticed (when there is no work, no computational power measurement can be carried out), and to the varying computational effort required by different work packages.

In detail, the retargeting mechanism works as follows (a toy code sketch follows the list):

  • Every work has its own target value.
  • Once a work is created, its target value is initialized with the average final target value of the last 10 finished jobs.
  • There can be at most 20 unconfirmed PoW submissions per work in the memory pool (and in each block).
  • As long as no PoW submission is made, the difficulty drops exponentially, so that within at most 3 blocks it reaches the “least possible difficulty”. This is the “Inverse Slow-Start”, meant to help bootstrapping in case of significant changes in the total computational power of the network. Otherwise, the difficulty is adjusted by means of DGW. Hence, if the number of miners decreases significantly (due to potent miners abandoning ship), work packages do not end up with no PoW submissions at all because the difficulty is too high for the remaining miners: it takes at most 3 blocks to “heal” the problem.
  • That is, if we suddenly have too many miners, or if potent miners joined in the meantime, they can only clutter the blockchain at a rate of at most 20 tx per block and work until the retargeting mechanism fixes things. If we suddenly have only a few miners left, the target value “exponentially” heals itself.
  • Due to the usage of DGW (again, in the case of constant PoW submissions, regardless of the rate), the retargeting mechanism reacts quickly, per block, to adapt each work's target value so that the work receives, on average, 10 PoW submissions per block.
  • The above scheme inherently mitigates the attack where a potent miner “waits” until the difficulty drops and then bursts his precomputed PoW submissions. If he is the only live miner, he can only get through at most 20 PoW submissions before the difficulty readjusts in the following block.
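
To make the rules above concrete, here is a toy sketch of the per-work, per-block adjustment. The constants mirror the description (10 PoW per block aimed for, 20 max, a 3-block inverse slow-start); the real DGW step averages over a window and will differ in detail:

Code:
#include <cstdint>

const uint64_t MAX_TARGET = UINT64_MAX;  // "least possible difficulty"
const int POW_GOAL_PER_BLOCK = 10;       // desired PoW submissions per block
const int MAX_POW_PER_BLOCK = 20;        // hard cap per work and block

struct WorkTarget {
    uint64_t target;           // higher target = lower difficulty
    int blocksWithoutPow = 0;
};

// Called once per block for every live work. powSeen is already capped at
// MAX_POW_PER_BLOCK by the memory-pool rule.
void retarget(WorkTarget& w, int powSeen) {
    if (powSeen == 0) {
        // Inverse slow-start: exponential growth reaches the least possible
        // difficulty after at most 3 empty blocks.
        ++w.blocksWithoutPow;
        if (w.blocksWithoutPow >= 3 || w.target > MAX_TARGET / 100000)
            w.target = MAX_TARGET;
        else
            w.target *= 100000;  // illustrative growth factor
    } else {
        w.blocksWithoutPow = 0;
        // One-block caricature of the DGW step: fewer submissions than the
        // goal raises the target (easier), more submissions lowers it.
        uint64_t scaled = w.target / powSeen;
        if (scaled > MAX_TARGET / POW_GOAL_PER_BLOCK)
            w.target = MAX_TARGET;
        else
            w.target = scaled * POW_GOAL_PER_BLOCK;
        if (w.target == 0) w.target = 1;
    }
}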
provenceday
Legendary
Activity: 1148
Merit: 1000
October 22, 2016, 10:45:22 AM
 #2314

I have almost finished the only matured idea so far for the retargeting mechanism, for the next testnet iteration. (…)

I have termed the retargeting mechanism the ...

Retaining Inverse Slow-Start

(…)
That's some nice design and innovation.

coralreefer
Sr. Member
Activity: 464
Merit: 260
October 22, 2016, 12:49:29 PM
 #2315

I have almost finished the only matured idea so far for the retargeting mechanism, for the next testnet iteration.

Sounds good, looking forward to trying it out.

I do have one request. When the network has already received the max POW submissions for the block, please send a response that more clearly identifies this condition instead of "duplicate". This way the miner can clearly tell whether it truly sent a duplicate vs. when the network is no longer accepting POW.

Also, when you make this change, I'm still hoping you'll put the WCET value in the work package so the miner doesn't need to recalculate every few seconds to determine the best work package.
ttookk
Hero Member
Activity: 994
Merit: 513
October 22, 2016, 01:02:44 PM
 #2316

Simple and elegant solution. Since the chain moves forward thanks to PoS, the usual problems, i.e. a stalling chain, don't occur. Nice.

Here is a question I have had for a while; it's probably answered somewhere:
How is the size of a "job block" determined? I mean, I guess job authors could chop up the work they want done in multiple ways (e.g. 5x2/10, 10x1/10…). How is this determined? And would this work towards, or against, the retargeting issues?

I mean, I think we've found a nice solution here, but I wonder whether the sizing of job blocks could be used to adjust the difficulty as well.
Limx Dev
Copper Member
Legendary
Activity: 2352
Merit: 1348
October 22, 2016, 01:19:29 PM
 #2317

I have almost finished the only matured idea so far for the retargeting mechanism, for the next testnet iteration. (…)

Retaining Inverse Slow-Start

(…)


At this point it is worth noting that the blockchain grows by PoS only. That is, PoW submissions are just normal transactions that are included in blocks like any other transaction.

That is great... you can take my new "Dual KGW3 with Bitbreak". It is similar to DGW, but better: after 6 hours, the wallet reduces the difficulty. It currently runs in Europecoin and BitSend.
https://github.com/LIMXTEC/BitSend/blob/master/src/diff_bitsend.h#L18


FreemanIT
Newbie
Activity: 3
Merit: 0
October 22, 2016, 01:26:58 PM
 #2318

Someone from the XEL team in this thread, please tell me how to obtain XEL from the ICO.
ttookk
Hero Member
Activity: 994
Merit: 513
October 22, 2016, 01:39:36 PM
 #2319

Someone from the XEL team in this thread, please tell me how to obtain XEL from the ICO.

1.: There is no "XEL team". There are just a bunch of guys working on it.

2.: You can't claim XEL right now. You can when the mainnet goes live. There is no ETA on that yet.

3.: If you want it to go faster, help the Elastic Project with your ideas ;)
Evil-Knievel
Legendary
Activity: 1260
Merit: 1168
October 22, 2016, 01:47:24 PM
 #2320

I have almost finished the only matured idea so far for the retargeting mechanism, for the next testnet iteration.

Sounds good, looking forward to trying it out.

I do have one request. When the network has already received the max POW submissions for the block, please send a response that more clearly identifies this condition instead of "duplicate". This way the miner can clearly tell whether it truly sent a duplicate vs. when the network is no longer accepting POW.

Also, when you make this change, I'm still hoping you'll put the WCET value in the work package so the miner doesn't need to recalculate every few seconds to determine the best work package.

Will do both today ;-)