Bitcoin Forum
Author Topic: ★★DigiByte|极特币★★[DGB]✔ Core v6.16.5.1 - DigiShield, DigiSpeed, Segwit  (Read 3055611 times)
MentalCollatz
Newbie
*
Offline Offline

Activity: 40
Merit: 0


View Profile
February 21, 2016, 08:31:43 PM
 #25701

The idea that a huge increase in the hashrate of any one given algo only affects the diff of that particular algo and thus magically levels the playing field is also ludicrous, unless, that is, there was a major undocumented change in the DigiSpeed update.

Wait a minute...are you saying that the hashrate of one algorithm affects the diff of other algorithms or are you saying that the fact that it doesn't doesn't level the playing field?

Please don't take this as unfounded criticism. My numbers suggest something very different from what you suggest, and I think that my analysis is quite rigorous and merits a serious response. I'm asking for clarification and/or documentation so I can better understand; I am not attacking.

Your analysis may very well be rigorous, but I can't find your analysis - only your conclusions and a rough description of your sources.  If you want a serious response, post your source data, the conclusions you derive from the source data, and any required logical steps that go beyond "basic math skills".  I shouldn't have to re-do all your work just to have a conversation with you about the results.
HR
Legendary
*
Offline Offline

Activity: 1176
Merit: 1011


Transparency & Integrity


View Profile
February 21, 2016, 09:51:37 PM
 #25702

The idea that a huge increase in the hashrate of any one given algo only affects the diff of that particular algo and thus magically levels the playing field is also ludicrous, unless, that is, there was a major undocumented change in the DigiSpeed update.

Wait a minute...are you saying that the hashrate of one algorithm affects the diff of other algorithms or are you saying that the fact that it doesn't doesn't level the playing field?
You're one of the DigiSpeed update coders, you tell me. Don't you think that is the logical direction of information flow? Are they completely independent of one another? Total aggregate hashrate of the combined 5 algos means nothing in the dynamic adjustment of each individual algo's diff? Each algo's diff is completely independent of the others and is dependent exclusively and only on its own hashrate?

Please don't take this as unfounded criticism. My numbers suggest something very different from what you suggest, and I think that my analysis is quite rigorous and merits a serious response. I'm asking for clarification and/or documentation so I can better understand; I am not attacking.

Your analysis may very well be rigorous, but I can't find your analysis - only your conclusions and a rough description of your sources.  If you want a serious response, post your source data, the conclusions you derive from the source data, and any required logical steps that go beyond "basic math skills".  I shouldn't have to re-do all your work just to have a conversation with you about the results.

That's exactly right: you should do your own work, especially since you haven't already done it on your own and of your own initiative. Or do you think I'm here to work for you? I've done my job by alerting you to the fact that with basic math skills and the data source mentioned (that is freely available to the public), anyone can confirm it. It's your problem, not mine, if you don't care, and judging by your attitude, I guess it's just that, correct? If you don't care, why should I?

In any event, it's a moot issue until the first question above is answered.

Thanks in advance.


HR
Legendary
*
Offline Offline

Activity: 1176
Merit: 1011


Transparency & Integrity


View Profile
February 21, 2016, 09:58:30 PM
 #25703

I’m going to use these figures, pulled from a post a little way back by a well-known member of our community. I’m going to assume these figures are in the ballpark, though it does not particularly matter for the purpose of this exercise.

5000 DGB of mining:

SHA-256 ASIC: 2.4 kWh
Scrypt ASIC: 7.5 kWh
GPU: 9.9 kWh

Then, using some special magic, I’m going to make the SHA-256 portion of the network just over 4 times bigger and the Scrypt portion about 1.3 times bigger. Abracadabra: they are all on an equal footing, and the DigiByte network is considerably stronger than it was before I used the special magic. I’m going to need a bit more magic again when the latest ASIC technology hits mining.

I know I don’t really have any special magic to use, that’s why knights are so important!

We could follow the other suggestion that has been made to tackle this issue, doing away with the ASIC part of the network and replacing it with other algorithms, but I will argue that this would require more hard forks and would actually leave our network weaker than it already is.



I don't want to get entangled in this too much, but I just wanted to mention that I've been able to undervolt my 280x and mine Qubit at about 11 MH/s, and that only runs at about 150 watts according to my Kill-A-Watt. 11 MH/s is roughly 5k DGB per day, which puts my 150 watts at about 3.6 kWh per day. Still not as efficient as a SHA-256 ASIC, but a lot less than the 9.9 kWh mentioned above. I am guessing that's for either Skein or Scrypt on a GPU.

Great to see your presence Kayahoga, as always. Those are some very impressive results indeed! Let me guess, a dual-card NVIDIA rig?

Phenomenal.


Actually it's not NVIDIA, it's a single AMD MSI 280x. My "gaming" rig. Undervolting is the key; you can't do it with every card, which is why I went with the MSI 280x. Back when I had my rigs, once I figured out how to properly undervolt, I was saving about $50 a month in electricity while only sacrificing a small amount of hashing power.

Wow! That's more than double the hashrate I've ever seen published for a 280x ( http://asistec-ti.com/phpbb/viewtopic.php?f=25&t=38 ). Incredible. And you're still mining and those are current rewards?

EDIT: if you post your configuration, I'll add it to the list.


The magic isn't in the config, it's the new optimized miner. Check out the miners from NiceHash. They keep up on all the latest builds and optimized kernels.

Either way... I use --intensity 18 --worksize 64 -g 2. A 280x is one of the few cards that should always have -g 2; my engine stays at 1020 since I'm undervolting. I'm pretty sure if I threw more juice at it I could go to 1100 and get close to 12 MH/s.

Edit: Yes I'm still mining, just on my gaming rig with a single GPU, mining away today on my 280x

That's wild! Thanks for the tip. Someone else mentioned them again not too long ago on my forum and I appreciate you confirming that.

Cheers

Edit: btw, did you finally end up having any luck with Craptsy?

bitkapp
Hero Member
*****
Offline Offline

Activity: 517
Merit: 500


aka alaniz


View Profile
February 21, 2016, 10:00:52 PM
 #25704

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

HR
Legendary
*
Offline Offline

Activity: 1176
Merit: 1011


Transparency & Integrity


View Profile
February 21, 2016, 10:15:57 PM
 #25705

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

That's not the issue. The block find distribution is 20/20/20/20/20. About that there is no doubt.

The question is how does MultiShield adjust each algo's diff and as a function of exactly what? We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

bitkapp
Hero Member
*****
Offline Offline

Activity: 517
Merit: 500


aka alaniz


View Profile
February 21, 2016, 11:36:46 PM
 #25706

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

That's not the issue. The block find distribution is 20/20/20/20/20. About that there is no doubt.

The question is how does MultiShield adjust each algo's diff and as a function of exactly what? We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

I'm not very good at C++ but I suspect the answer to your question lies here: https://github.com/digibyte/digibyte/blob/master/src/miner.cpp#L155

Jumbley
Legendary
*
Offline Offline

Activity: 1218
Merit: 1003



View Profile
February 21, 2016, 11:57:25 PM
 #25707

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

That's not the issue. The block find distribution is 20/20/20/20/20. About that there is no doubt.

The question is how does MultiShield adjust each algo's diff and as a function of exactly what? We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

I'm not very good at C++ but I suspect the answer to your question lies here: https://github.com/digibyte/digibyte/blob/master/src/miner.cpp#L155
It's an interesting question, but I don't understand what it has to do with the distribution of energy costs.
If we accept that each algorithm finds 20% of the blocks, then if we were to double the mining on any one of them, wouldn't it be reasonable to expect the amount of energy we use to also double, even though the same number of blocks would be mined?
MentalCollatz
Newbie
*
Offline Offline

Activity: 40
Merit: 0


View Profile
February 22, 2016, 12:11:43 AM
Last edit: February 26, 2016, 05:06:19 AM by MentalCollatz
 #25708

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

That's not the issue. The block find distribution is 20/20/20/20/20. About that there is no doubt.

The question is how does MultiShield adjust each algo's diff and as a function of exactly what? We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

I suggest reading the source code.  That's exactly right: you should do your own work, especially since you haven't already done it on your own and of your own initiative. Or do you think I'm here to work for you? I've done my job by alerting you to the fact that with basic coding skills and the source code mentioned (that is freely available to the public), anyone can confirm it. It's your problem, not mine, if you don't care, and judging by your attitude, I guess it's just that, correct? If you don't care, why should I?

Trolling aside, it turns out you don't even need that.  Even if the only thing you know about MultiShield is that it keeps each algorithm's share at 20%, that is already sufficient to answer your question (with basic math skills, of course).  You can multiply and divide can't you?

Okay, trolling truly off now.  One thing you seem to be confused about is the latest hardfork.  MultiShield was not changed at all in the DigiSpeed hardfork (other than changing a few constants so blocks would be faster).  The formula that estimates work per block was changed, but this is unrelated to difficulty adjustments.

Edit: apparently HR didn't get it, so here's some context:

Your analysis may very well be rigorous, but I can't find your analysis - only your conclusions and a rough description of your sources.  If you want a serious response, post your source data, the conclusions you derive from the source data, and any required logical steps that go beyond "basic math skills".  I shouldn't have to re-do all your work just to have a conversation with you about the results.

That's exactly right: you should do your own work, especially since you haven't already done it on your own and of your own initiative. Or do you think I'm here to work for you? I've done my job by alerting you to the fact that with basic math skills and the data source mentioned (that is freely available to the public), anyone can confirm it. It's your problem, not mine, if you don't care, and judging by your attitude, I guess it's just that, correct? If you don't care, why should I?

The "multiply and divide" line was similarly copied from one of your posts which seems to no longer exist.
Kayahoga
Full Member
***
Offline Offline

Activity: 146
Merit: 100


View Profile
February 22, 2016, 01:08:24 AM
 #25709


That's wild! Thanks for the tip. Someone else mentioned them again not too long ago on my forum and I appreciate you confirming that.

Cheers

Edit: btw, did you finally end up having any luck with Craptsy?



No, I never did. Once I saw they weren't updating after the hard fork, I tried to get out with BTC, but at that point everything was going south and things were shut down a few weeks later. I should have gotten some out through another coin, but I was stubborn and didn't want to lose half my DGB. In the end I lost all of it. I guess the bright side is that I involuntarily took several million out of the DGB ecosystem =). I had all my eggs in cold wallets, but I had some on the exchange that needed to be traded to pay for bills / rig debt etc., and that's what I lost, so... live and learn I guess.
DigiByte (OP)
Legendary
*
Offline Offline

Activity: 1722
Merit: 1051


Official DigiByte Account


View Profile WWW
February 22, 2016, 03:50:49 AM
 #25710

HR as far as I understand it 20% of the blocks are ASIC, 20% are Scrypt, 60% are GPU (I do not know if these are equally distributed).

That's not the issue. The block find distribution is 20/20/20/20/20. About that there is no doubt.

The question is how does MultiShield adjust each algo's diff and as a function of exactly what? We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

Yes, block distribution is 20/20/20/20/20. To see the exact changes in the last hard fork, compare GetNextWorkRequiredV3 to GetNextWorkRequiredV4.

Current difficulty adjustment code:
https://github.com/digibyte/digibyte/blob/master/src/main.cpp#L1670
Code:
static unsigned int GetNextWorkRequiredV4(const CBlockIndex* pindexLast, const CBlockHeader *pblock, int algo, bool log)
{
    unsigned int nProofOfWorkLimit = Params().ProofOfWorkLimit(algo).GetCompact();

    // Genesis block
    if (pindexLast == NULL)
        return nProofOfWorkLimit;

    if (TestNet())
    {
        // Special difficulty rule for testnet:
        // If the new block's timestamp is more than 2 * 10 minutes
        // then allow mining of a min-difficulty block.
        if (pblock->nTime > pindexLast->nTime + nTargetSpacing*2)
            return nProofOfWorkLimit;
        else
        {
            // Return the last non-special-min-difficulty-rules-block
            const CBlockIndex* pindex = pindexLast;
            while (pindex->pprev && pindex->nHeight % nInterval != 0 && pindex->nBits == nProofOfWorkLimit)
                pindex = pindex->pprev;
            return pindex->nBits;
        }
    }

    if (log)
    {
        LogPrintf("GetNextWorkRequired RETARGET\n");
        LogPrintf("Algo: %s\n", GetAlgoName(algo));
        LogPrintf("Height (Before): %s\n", pindexLast->nHeight);
    }

    // Find first block in averaging interval:
    // go back by what we want to be nAveragingInterval blocks per algo
    const CBlockIndex* pindexFirst = pindexLast;
    for (int i = 0; pindexFirst && i < NUM_ALGOS*nAveragingInterval; i++)
    {
        pindexFirst = pindexFirst->pprev;
    }

    const CBlockIndex* pindexPrevAlgo = GetLastBlockIndexForAlgo(pindexLast, algo);
    if (pindexPrevAlgo == NULL || pindexFirst == NULL)
    {
        if (log)
            LogPrintf("Use default POW Limit\n");
        return nProofOfWorkLimit;
    }

    // Limit adjustment step;
    // use medians to prevent time-warp attacks
    int64_t nActualTimespan = pindexLast->GetMedianTimePast() - pindexFirst->GetMedianTimePast();
    nActualTimespan = nAveragingTargetTimespanV4 + (nActualTimespan - nAveragingTargetTimespanV4)/4;

    if (log)
        LogPrintf("nActualTimespan = %d before bounds\n", nActualTimespan);

    if (nActualTimespan < nMinActualTimespanV4)
        nActualTimespan = nMinActualTimespanV4;
    if (nActualTimespan > nMaxActualTimespanV4)
        nActualTimespan = nMaxActualTimespanV4;

    // Global retarget
    CBigNum bnNew;
    bnNew.SetCompact(pindexPrevAlgo->nBits);

    bnNew *= nActualTimespan;
    bnNew /= nAveragingTargetTimespanV4;

    // Per-algo retarget
    int nAdjustments = pindexPrevAlgo->nHeight + NUM_ALGOS - 1 - pindexLast->nHeight;
    if (nAdjustments > 0)
    {
        for (int i = 0; i < nAdjustments; i++)
        {
            bnNew *= 100;
            bnNew /= (100 + nLocalTargetAdjustment);
        }
    }
    else if (nAdjustments < 0) // make it easier
    {
        for (int i = 0; i < -nAdjustments; i++)
        {
            bnNew *= (100 + nLocalTargetAdjustment);
            bnNew /= 100;
        }
    }

    if (bnNew > Params().ProofOfWorkLimit(algo))
    {
        if (log)
        {
            LogPrintf("bnNew > Params().ProofOfWorkLimit(algo)\n");
        }
        bnNew = Params().ProofOfWorkLimit(algo);
    }

    if (log)
    {
        LogPrintf("nAveragingTargetTimespanV4 = %d; nActualTimespan = %d\n", nAveragingTargetTimespanV4, nActualTimespan);
        LogPrintf("Before: %08x  %s\n", pindexPrevAlgo->nBits, CBigNum().SetCompact(pindexPrevAlgo->nBits).getuint256().ToString());
        LogPrintf("After:  %08x  %s\n", bnNew.GetCompact(), bnNew.getuint256().ToString());
    }

    return bnNew.GetCompact();
}

DigiByte (OP)
Legendary
*
Offline Offline

Activity: 1722
Merit: 1051


Official DigiByte Account


View Profile WWW
February 22, 2016, 03:58:51 AM
 #25711

Here is the first weighting code (pre DigiSpeed):
https://github.com/digibyte/digibyte/blob/master/src/main.h#L866
Code:
    int GetAlgoWorkFactor() const
    {
        if (!TestNet() && (nHeight < multiAlgoDiffChangeTarget))
        {
            return 1;
        }
        if (TestNet() && (nHeight < 100))
        {
            return 1;
        }
        switch (GetAlgo())
        {
            case ALGO_SHA256D:
                return 1;
            // work factor = absolute work ratio * optimisation factor
            case ALGO_SCRYPT:
                return 1024 * 4;
            case ALGO_GROESTL:
                return 64 * 8;
            case ALGO_SKEIN:
                return 4 * 6;
            case ALGO_QUBIT:
                return 128 * 8;
            default:
                return 1;
        }
    }

New Adjusted Weighting Code (Post DigiSpeed):

Code:
    CBigNum GetBlockWorkAdjusted() const
    {
        if (nHeight < workComputationChangeTarget)
        {
            CBigNum bnRes;
            bnRes = GetBlockWork() * GetAlgoWorkFactor();
            return bnRes;
        }
        else
        {
            CBigNum bnRes = 1;
            CBlockHeader header = GetBlockHeader();
            // multiply the difficulties of all algorithms
            for (int i = 0; i < NUM_ALGOS; i++)
            {
                unsigned int nBits = GetNextWorkRequired(pprev, &header, i,false);
                CBigNum bnTarget;
                bnTarget.SetCompact(nBits);
                if (bnTarget <= 0)
                    return 0;
                bnRes *= (CBigNum(1)<<256) / (bnTarget+1);
            }
            // Compute the geometric mean
            bnRes = bnRes.nthRoot(NUM_ALGOS);
            // Scale to roughly match the old work calculation
            bnRes <<= 7;
            return bnRes;
        }
    }

DigiByte (OP)
Legendary
*
Offline Offline

Activity: 1722
Merit: 1051


Official DigiByte Account


View Profile WWW
February 22, 2016, 04:03:42 AM
 #25712

iOS Wallet update:

Our test wallet has been approved by the Apple iTunes store. If you have DigiBytes stuck in the old iOS wallet and would like to help us test the new wallet please email dev@digibyte.co your Apple ID email and we will add you to the Test Flight list.

Cheers,


HR
Legendary
*
Offline Offline

Activity: 1176
Merit: 1011


Transparency & Integrity


View Profile
February 22, 2016, 07:58:37 AM
Last edit: February 22, 2016, 08:30:34 AM by HR
 #25713

Here is the first weighting code (pre DigiSpeed):
https://github.com/digibyte/digibyte/blob/master/src/main.h#L866
Code:
    int GetAlgoWorkFactor() const
    {
        if (!TestNet() && (nHeight < multiAlgoDiffChangeTarget))
        {
            return 1;
        }
        if (TestNet() && (nHeight < 100))
        {
            return 1;
        }
        switch (GetAlgo())
        {
            case ALGO_SHA256D:
                return 1;
            // work factor = absolute work ratio * optimisation factor
            case ALGO_SCRYPT:
                return 1024 * 4;
            case ALGO_GROESTL:
                return 64 * 8;
            case ALGO_SKEIN:
                return 4 * 6;
            case ALGO_QUBIT:
                return 128 * 8;
            default:
                return 1;
        }
    }

New Adjusted Weighting Code (Post DigiSpeed):

Code:
    CBigNum GetBlockWorkAdjusted() const
    {
        if (nHeight < workComputationChangeTarget)
        {
            CBigNum bnRes;
            bnRes = GetBlockWork() * GetAlgoWorkFactor();
            return bnRes;
        }
        else
        {
            CBigNum bnRes = 1;
            CBlockHeader header = GetBlockHeader();
            // multiply the difficulties of all algorithms
            for (int i = 0; i < NUM_ALGOS; i++)
            {
                unsigned int nBits = GetNextWorkRequired(pprev, &header, i,false);
                CBigNum bnTarget;
                bnTarget.SetCompact(nBits);
                if (bnTarget <= 0)
                    return 0;
                bnRes *= (CBigNum(1)<<256) / (bnTarget+1);
            }
            // Compute the geometric mean
            bnRes = bnRes.nthRoot(NUM_ALGOS);
            // Scale to roughly match the old work calculation
            bnRes <<= 7;
            return bnRes;
        }
    }

Okay, pretty much what I thought: those magic knights don't exist, and the level of understanding about the second most important aspect of DigiByte is, in general, close to zero.

Considering just how important it is to have a good understanding of this, would it be too much to ask for an explanation in layman's terms?

As a guide, not that you need it as you clearly answered both with your posting of the respective codes, but in order to help with the framing of that explanation for the average person, I'll remind of the framing of the last questions:

We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

Looks to me like the overall relationship is still very similar, in that all 5 algos have their difficulties to mine calculated from the same base input, but, not being an expert, I don't want to misrepresent the facts and prefer to hear your, expert, explanation for the masses.

More than anything because I could be wrong. As I said, I'm not an expert, and in this matter I consider myself to be a member of the masses.

Very much appreciated!


DigiByte (OP)
Legendary
*
Offline Offline

Activity: 1722
Merit: 1051


Official DigiByte Account


View Profile WWW
February 22, 2016, 09:10:02 AM
 #25714


Okay, pretty much what I thought: those magic knights don't exist, and the level of understanding about the second most important aspect of DigiByte is, in general, close to zero.

Considering just how important it is to have a good understanding of this, would it be too much to ask for an explanation in layman's terms?

As a guide, not that you need it as you clearly answered both with your posting of the respective codes, but in order to help with the framing of that explanation for the average person, I'll remind of the framing of the last questions:

We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

Looks to me like the overall relationship is still very similar, in that all 5 algos have their difficulties to mine calculated from the same base input, but, not being an expert, I don't want to misrepresent the facts and prefer to hear your, expert, explanation for the masses.

More than anything because I could be wrong. As I said, I'm not an expert, and in this matter I consider myself to be a member of the masses.

Very much appreciated!


A great place to begin answering this question is with MentalCollatz's original post on the GitHub repo. And also many, many thanks to MentalCollatz for contributing this code and insight to DigiByte.

https://github.com/digibyte/digibyte/pull/36
Quote
First off, this will fix the safe mode warnings once and for all. But it does so much more than that. Currently, an attacker can 51% attack the network with roughly 60% of SHA256D and nothing else. After this change, an attacker with 90% of the SHA256D hashrate and 33% of each of the other 4 algorithms would have insufficient hashpower to mount a 51% attack.

The new formula was chosen as a function of the difficulties and based on these criteria:
1. It should be a symmetric function
2. It should be order-1 homogeneous
3. It should be homogeneous with respect to each variable

Or in plain English:
1. There should be nothing algorithm-specific (such as per-algorithm weights) nor should it depend on which algorithm actually solved the block.
2. If all difficulties double, the block work should double
3. If one difficulty doubles, the block work should change by some constant factor

There is only one function (save multiplication by a constant factor) satisfying all 3 conditions: the geometric mean. As an added bonus, because of how the difficulty algorithm works the geometric mean can change by at most 3% from block to block (which addresses the safe-mode warning issue).

In order to 51% attack the network, the product of the attacker's hashrates must exceed the product of the network's hashrates. In particular, the attacker must have some hashrate in all 5 algorithms.

So to summarise, there is no longer an individual algorithm weighting (work factor) but a geometric mean (nthRoot) of all the algos' difficulties. This kills five birds with one stone: preventing time-warp attacks, eliminating the safe-mode error, improving difficulty adjustments, making sure each algo gets only 20% of all blocks, and dramatically increasing the difficulty of a 51% attack. If you look back at the previous several thousand blocks and calculate the average share of each algo, you will get the 20% average. Things are working as expected in that regard.

Now, as to the electrical efficiency of each mining setup. ASICs are by their very nature designed to be highly efficient: https://en.wikipedia.org/wiki/Application-specific_integrated_circuit

The original idea was to allow people with used, obsolete ASICs and GPUs, as well as some CPUs, to mine. As time has gone on the mining scene has undoubtedly changed, and we now have two algos (SHA-256 and Scrypt) dominated by ASICs. Those miners will keep getting more electrically efficient, but they still only account for 40% of new DGB coming into circulation. The other 60% is up for grabs for GPUs.

As ASICs get cheaper and more efficient it will become easier to distribute ASIC miners to people such as gamers. Remember, the only thing a SHA-256 or Scrypt ASIC can be used for is mining a digital currency.

Does this answer everyone's questions? This is a very complex topic, so we understand the confusion. We are always open to new ideas and suggestions as technology is rapidly changing.









bitkapp
Hero Member
*****
Offline Offline

Activity: 517
Merit: 500


aka alaniz


View Profile
February 22, 2016, 09:14:56 AM
 #25715

Thanks for clearing this issue up Jared, much appreciated. I hope we can lay this issue to rest now and get on with more interesting developments/suggestions.

HR
Legendary
*
Offline Offline

Activity: 1176
Merit: 1011


Transparency & Integrity


View Profile
February 22, 2016, 10:25:40 AM
 #25716


Okay, pretty much what I thought: those magic knights don't exist, and the level of understanding about the second most important aspect of DigiByte is, in general, close to zero.

Considering just how important it is to have a good understanding of this, would it be too much to ask for an explanation in layman's terms?

As a guide, not that you need it as you clearly answered both with your posting of the respective codes, but in order to help with the framing of that explanation for the average person, I'll remind of the framing of the last questions:

We know that before the most recent hardfork, global network hashrate (that is the combined total of the 5 algos) was key, and that all the algos diffs adjusted in response to changes in aggregate hashrate. The question is how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

Looks to me like the overall relationship is still very similar, in that all 5 algos have their difficulties to mine calculated from the same base input, but, not being an expert, I don't want to misrepresent the facts and prefer to hear your expert explanation for the masses.

More than anything because I could be wrong. As I said, I'm not an expert, and in this matter I consider myself to be a member of the masses.

Very much appreciated!


A great place to begin answering this question is with MentalCollatz's original post on the GitHub repo. And also many, many thanks to MentalCollatz for contributing this code and insight to DigiByte.

https://github.com/digibyte/digibyte/pull/36
Quote
First off, this will fix the safe mode warnings once and for all. But it does so much more than that. Currently, an attacker can 51% attack the network with roughly 60% of SHA256D and nothing else. After this change, an attacker with 90% of the SHA256D hashrate and 33% of each of the other 4 algorithms would have insufficient hashpower to mount a 51% attack.

The new formula was chosen as a function of the difficulties and based on these criteria:
1. It should be a symmetric function
2. It should be order-1 homogeneous
3. It should be homogeneous with respect to each variable

Or in plain English:
1. There should be nothing algorithm-specific (such as per-algorithm weights) nor should it depend on which algorithm actually solved the block.
2. If all difficulties double, the block work should double
3. If one difficulty doubles, the block work should change by some constant factor

There is only one function (save multiplication by a constant factor) satisfying all 3 conditions: the geometric mean. As an added bonus, because of how the difficulty algorithm works, the geometric mean can change by at most 3% from block to block (which addresses the safe-mode warning issue).

In order to 51% attack the network, the product of the attacker's hashrates must exceed the product of the network's hashrates. In particular, the attacker must have some hashrate in all 5 algorithms.

So to summarise, there is no longer an individual algorithm weighting (workfactor) but a geometric mean (nthRoot) of all algos. This kills five birds with one stone: preventing time warp attacks, eliminating the safe-mode error, improving difficulty adjustments, making sure each algo is only getting 20% of all blocks, and dramatically increasing the difficulty of a 51% attack. If you look back at the previous several thousand blocks and calculate the average % of each algo, you will get the 20% average. Things are working as expected in that regard.

Now as to the electrical efficiency of each mining setup. ASICs are by their very nature designed to be highly efficient: https://en.wikipedia.org/wiki/Application-specific_integrated_circuit

The original idea was to allow people with used, obsolete ASICs, GPUs, and even some CPUs to be able to mine. As time has gone on the mining scene has undoubtedly changed, and we now have two algos (SHA256 and Scrypt) dominated by ASICs. As time goes on these two algos' miners will get more electrically efficient, but they still only account for 40% of new DGB coming into circulation. The other 60% is up for grabs for GPUs.

As ASICs get cheaper and more efficient, it will be easier to distribute ASIC miners to people such as gamers. Remember, the only thing a SHA256 or Scrypt ASIC can be used for is to mine a digital currency.

Does this answer everyone's questions? This is a very complex topic, so we understand the confusion. We are always open to new ideas and suggestions, as technology is rapidly changing.
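The quoted explanation can be sketched in a few lines of Python. This is a simplified model for intuition only, not DigiByte's actual source; the function names and the attacker-vs-honest comparison are illustrative assumptions:

```python
from math import prod

def block_work(difficulties):
    """Geometric mean of the per-algo difficulties -- the only function
    (up to a constant factor) that is symmetric, order-1 homogeneous,
    and homogeneous with respect to each variable."""
    return prod(difficulties) ** (1.0 / len(difficulties))

def attack_succeeds(attacker_share):
    """Simplified 51%-attack model: the attacker's chain outpaces the honest
    chain only if the product of his per-algo hashrate shares exceeds the
    product of the honest remainders (equivalently, his geometric mean
    exceeds theirs)."""
    return prod(attacker_share) > prod(1 - f for f in attacker_share)

# 90% of SHA256D plus 33% of each of the other 4 algos is NOT enough:
print(attack_succeeds([0.90, 0.33, 0.33, 0.33, 0.33]))  # False
# A majority on every one of the 5 algos is:
print(attack_succeeds([0.51] * 5))  # True
```

In this model an attacker with zero hashrate on any one algo has a product of zero, which is why the quote says the attacker must have some hashrate in all 5 algorithms.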


I'm sorry, but it still doesn't answer my question, and I must assume that is due to my inability to adequately frame the question. I will make another attempt by putting them into a closed, yes/no format that could/should be followed by an explanation.

Are the individual difficulties to mine calculated based on total aggregate hashrate (the 5 combined hashrates)?

Are the individual difficulties to mine calculated solely based on the hashrate of the individual algo in question?

When the diff of one algo rises (or falls), do the diffs of the other algos rise (or fall) in equal proportion?


24hralttrade
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500


Community Liaison,How can i help you?


View Profile WWW
February 22, 2016, 11:36:26 AM
 #25717

Need to get the word out!

https://twitter.com/Alttrade/status/701731690306211840

Want to see the Future of Retail omnichannel demo store powered by Digibyte & Tofugear teams?
Please feel free to contact me if you have anything to report or you have any questions.
Jumbley
Legendary
*
Offline Offline

Activity: 1218
Merit: 1003



View Profile
February 22, 2016, 01:21:47 PM
 #25718


Okay, pretty much what I thought: those magic knights don't exist, and the level of understanding about the second most important aspect of DigiByte is, in general, close to zero.

Considering just how important it is to have a good understanding of this, would it be too much to ask for an explanation in layman's terms?

As a guide, not that you need it as you clearly answered both with your posting of the respective code, but to help frame that explanation for the average person, I'll recall how the last questions were framed:

We know that before the most recent hardfork, global network hashrate (that is, the combined total of the 5 algos) was key, and that all the algos' diffs adjusted in response to changes in aggregate hashrate. The question is: how much did that change? To what precise degree are the algos currently independent, and to what degree are they still inter-dependent?

Looks to me like the overall relationship is still very similar, in that all 5 algos have their difficulties to mine calculated from the same base input, but, not being an expert, I don't want to misrepresent the facts and prefer to hear your expert explanation for the masses.

More than anything because I could be wrong. As I said, I'm not an expert, and in this matter I consider myself to be a member of the masses.

Very much appreciated!


A great place to begin answering this question is with MentalCollatz's original post on the GitHub repo. And also many, many thanks to MentalCollatz for contributing this code and insight to DigiByte.

https://github.com/digibyte/digibyte/pull/36
Quote
First off, this will fix the safe mode warnings once and for all. But it does so much more than that. Currently, an attacker can 51% attack the network with roughly 60% of SHA256D and nothing else. After this change, an attacker with 90% of the SHA256D hashrate and 33% of each of the other 4 algorithms would have insufficient hashpower to mount a 51% attack.

The new formula was chosen as a function of the difficulties and based on these criteria:
1. It should be a symmetric function
2. It should be order-1 homogeneous
3. It should be homogeneous with respect to each variable

Or in plain English:
1. There should be nothing algorithm-specific (such as per-algorithm weights) nor should it depend on which algorithm actually solved the block.
2. If all difficulties double, the block work should double
3. If one difficulty doubles, the block work should change by some constant factor

There is only one function (save multiplication by a constant factor) satisfying all 3 conditions: the geometric mean. As an added bonus, because of how the difficulty algorithm works, the geometric mean can change by at most 3% from block to block (which addresses the safe-mode warning issue).

In order to 51% attack the network, the product of the attacker's hashrates must exceed the product of the network's hashrates. In particular, the attacker must have some hashrate in all 5 algorithms.

So to summarise, there is no longer an individual algorithm weighting (workfactor) but a geometric mean (nthRoot) of all algos. This kills five birds with one stone: preventing time warp attacks, eliminating the safe-mode error, improving difficulty adjustments, making sure each algo is only getting 20% of all blocks, and dramatically increasing the difficulty of a 51% attack. If you look back at the previous several thousand blocks and calculate the average % of each algo, you will get the 20% average. Things are working as expected in that regard.

Now as to the electrical efficiency of each mining setup. ASICs are by their very nature designed to be highly efficient: https://en.wikipedia.org/wiki/Application-specific_integrated_circuit

The original idea was to allow people with used, obsolete ASICs, GPUs, and even some CPUs to be able to mine. As time has gone on the mining scene has undoubtedly changed, and we now have two algos (SHA256 and Scrypt) dominated by ASICs. As time goes on these two algos' miners will get more electrically efficient, but they still only account for 40% of new DGB coming into circulation. The other 60% is up for grabs for GPUs.

As ASICs get cheaper and more efficient, it will be easier to distribute ASIC miners to people such as gamers. Remember, the only thing a SHA256 or Scrypt ASIC can be used for is to mine a digital currency.

Does this answer everyone's questions? This is a very complex topic, so we understand the confusion. We are always open to new ideas and suggestions, as technology is rapidly changing.


I'm sorry, but it still doesn't answer my question, and I must assume that is due to my inability to adequately frame the question. I will make another attempt by putting them into a closed, yes/no format that could/should be followed by an explanation.

Are the individual difficulties to mine calculated based on total aggregate hashrate (the 5 combined hashrates)? NO

Are the individual difficulties to mine calculated solely based on the hashrate of the individual algo in question? YES

When the diff of one algo rises (or falls), do the diffs of the other algos rise (or fall) in equal proportion? NO

I have provided my answers to your questions above because logic tells me that this must be the case for the 20%-per-algo distribution to be maintained.
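Those yes/no answers describe fully independent per-algo retargeting: each algo adjusts its own difficulty from its own solve times, and the geometric mean is used only to score chain work. A minimal sketch of what that implies; the formula, target spacing, and 3% clamp here are placeholders for illustration, not DigiShield's actual parameters:

```python
def retarget(algo_difficulty, observed_spacing, target_spacing):
    """Illustrative per-algo retarget: only THIS algo's own block timing
    enters the formula -- no other algo's hashrate or difficulty appears.
    The clamp echoes the quoted 'at most 3% per block' bound (placeholder
    value, not the real DigiShield limits)."""
    ratio = target_spacing / max(observed_spacing, 1.0)   # fast blocks -> raise diff
    ratio = min(max(ratio, 0.97), 1.03)                   # limit per-block adjustment
    return algo_difficulty * ratio

# A hashrate surge on Scrypt makes Scrypt blocks come in fast, so only
# Scrypt's difficulty climbs; the other four algos are untouched.
print(retarget(1000.0, 75.0, 150.0))   # blocks too fast -> 1030.0
print(retarget(1000.0, 300.0, 150.0))  # blocks too slow -> 970.0
```

Under this scheme a miner pointing huge hashpower at one algo only prices themselves out of that one algo, which is how each algo keeps earning roughly 20% of the blocks.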
ghostycc
Sr. Member
****
Offline Offline

Activity: 270
Merit: 250


Lovin' Crypto


View Profile
February 22, 2016, 03:20:25 PM
 #25719

Do we have more gaming features coming soon ?

How about aiming at buying games on Steam?
Would love to buy my DLC and stuff with DGB!

We want to see DGB more involved in the digital buying markets. For now we have a great community. Being able to buy more with digital currencies would be perfect.
I know it must be hard to set up with the price evolving. But did you try talking with these kinds of markets to set something up?

Might be a good thing to be able to buy all these cell-phone game packs as well. Like Candy Crush, Clash of Clans, etc.

What do you guys think of this? "Buy your gems with DGB", even if I'm not using it myself. I'm pretty sure the people who manage the markets for these games wouldn't be against something like that.


Stop these Hype money, get in DGB!
FrenchFrog FTW
bitkapp
Hero Member
*****
Offline Offline

Activity: 517
Merit: 500


aka alaniz


View Profile
February 22, 2016, 04:38:45 PM
 #25720

We have CS:GO coming out soon on DGB gaming. I am not aware of any further updates apart from improvements to the user interface and little things.
