Bitcoin Forum
Author Topic: [ANN] ParallelCoin - DUO - SHA256 + Scrypt | HardFork Soon!!! We are going Go!  (Read 58548 times)
lokiverloren
Newbie
*
Offline Offline

Activity: 80
Merit: 0


View Profile
December 08, 2018, 09:39:05 PM
 #581

It's nice to know I'm not talking to myself  Grin

So, more news, and general commentary. I got through a lot of tasks today and I need to debrief a bit.

There are 7 new algorithms being added: blake14lr (as used in Decred), blake2b (Sia), Lyra2REv2 (Vertcoin), keccak, skein (one of Myriad's 5), x13 and x11. These were selected because all of the coins they are associated with use the standard bitcoin mining RPC API, and none of them require any changes to the block header structure (equihash requires an additional 1344 bytes, and monero's difficulty value is double the length at 64 bits). Most of the work is tedious administration, going through all the places where I already added support for Scrypt to the btcd base; now there will be 9 algorithms in total. I am most of the way through the initial implementation.

The difficulty adjustment deserves a little more discussion. KGW and Dark Gravity Wave are the better-known continuous difficulty adjustment algorithms, and as people may know, the Verge attack was due to flaws in DGW. I spent much of the last week staring at logs of the new node mining (only one algorithm so far, but this doesn't substantially matter, because all of them are Poisson point processes anyway, so if designed right it should behave not too differently with more). It might be a day or two before I have them all fully working, and then I can see how my algorithm copes with multiple algorithms.

So, the existing difficulty adjustment is terrible; I don't know how long the original authors spent testing it, but I suspect not very long. It bases its calculation on the time between a block of an algorithm and the previous block of that algorithm, with a mere 10 samples used to generate the average from which it adjusts linearly.

Firstly, yes, this new algorithm uses an exponent, though not a square as many of the others do, but rather a cubic curve: take one from the divergence ratio (target:actual), cube the result, then add one back. This is a slightly rough description:
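For the curious, that curve can be sketched in a few lines of Go. This is a simplified illustration with my own function and variable names, not the actual pod code:

```go
package main

import (
	"fmt"
	"math"
)

// cubicAdjust applies the shifted cubic described above: take the
// divergence ratio (target time : actual time), subtract one, cube it,
// then add one back. Near a ratio of 1 the curve is nearly flat, so
// small divergences barely move the difficulty; large divergences get
// pushed back hard.
func cubicAdjust(oldDiff, targetTime, actualTime float64) float64 {
	r := targetTime / actualTime
	return oldDiff * (math.Pow(r-1, 3) + 1)
}

func main() {
	fmt.Println(cubicAdjust(1000, 300, 300)) // on-target: unchanged
	fmt.Println(cubicAdjust(1000, 300, 150)) // blocks twice as fast: doubled
}
```

Note how a ratio of exactly 1 yields a factor of exactly 1, which is the flat centre of the response curve discussed below.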

https://github.com/parallelcointeam/pod/blob/master/docs/parabolic_filter_difficulty_adjustment.md

One subject I didn't fully address, because it wasn't on my mind so much while I was simply testing to see how it works, is how the different algorithms cooperate with each other. The schemes I have seen, monero's and others, do a LOT of processing and slicing and weighting, and I think if this can be avoided, it is better: this is a process that takes place during replay, and nothing is more irritating than waiting for sync because of bandwidth or disk access problems. It can be worse depending on the database; leveldb is not bad, but I would prefer, down the track, to switch it all to the Badger database, which separates the key and value fields and (optionally) keeps all of the keys in memory.

So I basically (lazily) implemented it, without too much thinking beforehand, such that for each algorithm it always steps back to the previous block of that algorithm, then walks backwards some number of blocks (probably 288, but I'm not completely sure yet), subtracts the older of the two timestamps from the newer, and adjusts from that.
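The walk-back averaging just described might look something like this. A toy sketch with invented names; the real code works on actual block headers:

```go
package main

import "fmt"

type block struct {
	algo string
	time int64 // unix timestamp
}

// algoAvgTime steps back from the tip to the most recent block of the
// given algorithm, walks back a further `window` blocks regardless of
// their algorithm, and averages the elapsed time over the window.
// Returns false if the chain is too short.
func algoAvgTime(chain []block, algo string, window int) (float64, bool) {
	i := len(chain) - 1
	for i >= 0 && chain[i].algo != algo {
		i--
	}
	if i < 0 || i-window < 0 {
		return 0, false
	}
	elapsed := chain[i].time - chain[i-window].time
	return float64(elapsed) / float64(window), true
}

func main() {
	// Blocks 300s apart, alternating between two algorithms.
	var chain []block
	algos := []string{"sha256d", "scrypt"}
	for i := 0; i < 10; i++ {
		chain = append(chain, block{algos[i%2], int64(i) * 300})
	}
	avg, ok := algoAvgTime(chain, "sha256d", 4)
	fmt.Println(avg, ok) // 300 true
}
```

Because the tip is rarely the same algorithm as the one being retargeted, the measured span lands a random distance in the past, which is the "time travelling" property discussed below.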

What I did not realise immediately was how this would work with this many algorithms. As I discussed in the document above, the difficulty is dithered by a tiny amount (just the last two bits) after getting that average block time and before feeding it into the curve filter. This is a unique feature for difficulty adjustments. I thought about more 'random' ways of doing it, but the fact is that when you are dithering, you should not shuffle things too much. The bit-flipping may not be sufficient to smooth out the bumpy, pointy timestamp resolution, but it probably goes a long way towards eliminating resonances caused by common factors between numbers.
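The two-bit dither is essentially a masked replacement of the low bits. A minimal sketch, assuming the dither is applied to an integer difficulty value and fed from some randomness source; in the real node both details will differ:

```go
package main

import "fmt"

// ditherLowBits replaces the two least-significant bits of a difficulty
// value with two random bits, leaving everything above untouched. This
// perturbs the value by at most 3, enough to break up repeating factors
// without meaningfully changing the target.
func ditherLowBits(diff uint32, rnd uint32) uint32 {
	return (diff &^ 0x3) | (rnd & 0x3)
}

func main() {
	fmt.Printf("%04b\n", ditherLowBits(0b1100, 0b01)) // 1101
	fmt.Printf("%04b\n", ditherLowBits(0b1111, 0b00)) // 1100
}
```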

So, in every case, the difficulty adjustment is computed based on what will often be some time in the past, maybe 1 block, maybe hundreds. The adjustment influences only the one algorithm's blocks, but those blocks mingle with all the others. The most important effect, really just a side effect that I haven't modelled yet, is that no matter how long the gap is while an algorithm isn't being used, it is as though no time has passed, so each of the different algorithms brings new, complex 'echoes' into the feedback loop that computes difficulty. I figure this will somewhat resemble reverb in sound or radiosity in light: the possibly different inherent distributions of solutions, already independent from each other, combine a lot of different numbers together.

Pure gaussian noise is by its nature soft and fluffy: in sound it is like the whooshing of breath, and visually it is blurring. Density of dots, for example, is used for most forms of print matter, and the best looking images are ones with the natural chaotic patterning you see in film. Many new developments in camera and display technology increasingly use multiple exposures and highly pattern-free algorithms like Floyd-Steinberg and later, fluffier types of ditherers.

So, put simply, the only thing that really differentiates my implementation is that I am making maximum use of the properties of Poisson point processes and gaussian distributions with intentional noise. Too much noise will overwhelm it, but between the bit flips and the time-travelling block difficulty, which always ignores the immediately previous block and reaches back to the previous time the algorithm found a block, the sampled spans will vary randomly: the echoes will sometimes be short, nearly not an echo at all, and sometimes reach quite a ways back. These add randomness to the edges of the adjustments, and this is very important in my opinion, as difficulty adjustment regimes really are not up to date compared to other fields of science where control systems are implemented.

You may have heard of Fuzzy Logic, and probably don't realise that almost all cars, washing machines, and many other devices now use fuzzy logic systems that adapt progressively via feedback loops to keep random or undesirably shaped response patterns from appearing in the device's behaviour. These unwanted patterns tend to cause inefficiency and catastrophic changes if they are not mitigated.

The new stochastic parabolic filter difficulty adjustment scheme should fix almost all of the issues relating to blockchain difficulty attacks. The most devastatingly efficient of these was the first live time warp attack, on Verge earlier this year, in which the attacker succeeded in freezing the clock and issuing arbitrary tokens (limited by block height and reward size, of course, but not in sheer number).

The only remaining vector of attack is the pure and simple 51%: overwhelming the chain for long enough to be able to rewrite it.

Just some brief comments about the timing behaviour I have observed so far in tests. With the dithering added, the block times started to swing nicely back and forth in an almost alternating pattern; it's probably actually a 4-part pattern, because there are 4 different ways 2 bits can be set. The filter does not react strongly to blocks that fall within the inner 20% of divergence, between 80% and 125% of target, so even with quite dramatic changes in hashpower it stays pretty close. But once block times fall, as they randomly will, out towards the edges, below 50% or above 200%, the algorithm kicks them hard. So in the event of a sharp rise in hashpower, the difficulty rapidly increases, *but* not in a linear fashion: it is dithered in the smaller harmonics while staying smooth like a mirror (made smoother by the dithering), so it homes in on the correct adjustment quite quickly.

From what I have seen so far, a sharp rise sees the difficulty accelerate upwards, though not smoothly. When a big miner stops mining there may be longer gaps between blocks, but as I mentioned, the block times seem to oscillate in roughly 1:4 and 1:2 ratios, and usually within 3 blocks the difficulty will come back down, just as it went up.

Most algorithms don't try to address the issue of aliasing distortion in the signal, and it is this that makes difficulty adjustment algorithms get stuck or oscillate excessively.

Anyway, I will have all of the bits changed to fit the 7 new algorithms in the next days, and then we will see more concrete data about the additional effect of the 9 different algorithms, as well as of the rule that the newest block is ignored if it is the same algorithm: due to the random distribution of the algorithms, the sampled time is itself dithered as a product of the Poisson process of finding solutions.

Blockchains basically take a random source and attempt to modulate the process with difficulty to make it more regular. Regular blocks are very important because the time between blocks is somewhat long; in fact blockchains can't really do much better than 2.5 minutes at best, and in my opinion 5 minutes is a good, sufficiently long interval. Between 10 minute blocks there are 600 seconds. You would not be happy if there were only 600 pixels across your screen (these days), and I think comparing an analogous phenomenon helps people understand what is going on.

Blockchains are highly dependent on clocks, and because of the many random factors between nodes, the clock is also the most difficult thing to really get right. Many of the older coins have terrible difficulty adjustments; bitcoin only adjusts every 2 weeks, and the only reason it can get away with that is that it has so much loyal hashpower that the volatility is small. But the more ASICs and GPUs are available, and the more variance there is between hashpower and profitability on various chains, the more of a problem this becomes for more chains.

I have thought about other strategies, and maybe they will be added later, but I am not sure yet whether they are even necessary. The most challenging thing with difficulty adjustment, and the main type of attack carried out on pretty much a daily basis (mostly unintentionally), is that blockchains can be caused to pause for indeterminate amounts of time: difficulty adjusts up due to high hashpower, and when that hashpower goes away, difficulty cannot come back down until a new block hits the old target. Min diff blocks, as used in test networks, do not work in the real world, because it is not possible to come to a clear consensus about time; the network can be highly divided about whether the necessary amount of time has elapsed, and this gets even worse when people alter their nodes to feed fake or divergent times as part of attempts to game the system.

I think the solution is just noise, basically (in case you hadn't figured, it's all about that): using noise to eliminate distortion. Even if the chain has dramatic jumps in difficulty, maybe even as high as 20x, with the added wiggle the difficulty does not get caught, slowed, or sped up on the way up; it wiggles all the way to the top. And at the top, though there will be a gap before the next block, the 'time travelling' averaging I have devised will likely further counter this.

There will naturally be clumpiness in the distribution, even given a long stretch of all algorithms at a stable hashrate, and especially with so many algorithms (now there are 9). This means that when difficulty goes up a lot, an algorithm that has gone through a long dry patch will already have a potentially lower (or higher) target, so when an algorithm with few recent solutions finds one, or another giant pool shows up, there are far more opportunities for a block to come in on target sooner than it otherwise would.

The real results will of course be seen with more testing and when the hardfork activates. I hold to Proof of Work as currently, and probably for the foreseeable future, the best strategy for protecting the security of blockchains, as I have seen first hand how easy it is for proof of stake blockchains to become extremely badly distributed. Proof of Stake is also, in the beginning, very cheap to acquire, and after a short time only the early people or someone with a lot of money can have any influence. This is not conducive to adoption, nor is it conducive to security.
lokiverloren
Newbie
*
Offline Offline

Activity: 80
Merit: 0


View Profile
December 11, 2018, 09:56:52 AM
 #582

Just a short note this time...

So, we had several options open: adding more PoW algorithms, and Merged Mining.

After some research on the latter, it became clear that it only expands the attack surface for malicious miners. Only Doge gained anything from its introduction; it has killed Namecoin pretty much dead, and Myriad has only avoided being destroyed by it because only scrypt and sha256d are merged. But Myriad doesn't seem to have generally benefited either: like DUO and many small coins, it hovers around a very small market value, unable to build momentum, with low volumes.

I added 7 new hashes: Lyra2REv2, Skein, X11, X13, Blake2b, Blake14lr, and Keccak. After some testing, it turned out the X13 implementation haemorrhaged memory, consuming all 32 GB of mine in about an hour. The Blake2b hash appeared to not function at all, though strangely it seemed to produce solutions now and then; no idea how, when the system monitor reported zero CPU activity.

So I had to discard these two, and being that both were from the same family of functions anyway, I thought this would be a good opportunity to make the set more diverse. I searched for a while and came up with GOST, the one used by Sibcoin in addition to x11, and I also selected Whirlpool, as it is of an entirely different construction from most of the others.

In actual fact, Lyra2REv2 and Scrypt are Key Derivation Functions; Lyra is based on Blake, I think? Maybe something else? It uses a scrambling construction called a sponge. Well, anyway, KDFs are mostly used for turning passwords into symmetric keys for encrypting data, such as the wallet data, usually using AES-SHA-256 or GCM.

TL;DR

Anyway, I'll just get to the point. What I discovered was that, inadvertently, my design determines the averaging window, the time between the first and last of a number of sequential blocks, simply by walking back through the chain to find the most recent previous instance of a particular version.

Because of the inherently random nature of finding solutions for blocks, the distance backwards for each algorithm's averaging window is also randomised, and, the most important thing is:

each algorithm is independent but the chain rate is bound together.

What this basically means is that to mount a hashrate attack on such a chain, you first have to be looking for solutions with more than half the algorithms. The extreme rise caused by a large pool showing up to grab a few blocks during a low diff or high price period only immediately affects the algorithm being mined.

As such an event leads to a string of blocks from the same algorithm, this shortening of the average block time does not immediately change the difficulty target of all the other algorithms.

What this means is that any attempt to 51% attack the chain carries quite a high logistical burden:

- one has to mine to more than half of the algorithms to have a sustained effect on the block times,

- all of the other algorithms not being targeted are likely to prevent a long delay before the next block arrives

- there is no rhythm or timing that can be used to amplify the effect with resonance, as the system is inherently stochastic, and randomness includes not repeating numbers or factors of numbers
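The first point above admits a back-of-envelope figure. Under the post's own assumptions (each of the 9 algorithms held near 1/9 of the chain rate, and a majority of algorithm lanes needed for a sustained effect), the minimum share of total work an attacker must control is:

```go
package main

import "fmt"

// minAttackShare: if an attacker must dominate a majority of the n
// algorithm lanes, and each lane carries roughly 1/n of the chain rate,
// the attacker needs at least (n/2+1)/n of the total work.
// Illustrative arithmetic only, not a security proof.
func minAttackShare(nAlgos int) float64 {
	majority := nAlgos/2 + 1 // integer division: majority of lanes
	return float64(majority) / float64(nAlgos)
}

func main() {
	fmt.Printf("%.2f\n", minAttackShare(9)) // 5 of 9 lanes
}
```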

I was finding that with this many algorithms, the time to convergence given a flat hashrate was excessively slow using the parabolic filter, so I am now testing a plain linear adjustment (target/actual * difficulty), and it appears that only a small amount of bit-flipping is required to jiggle the difficulty value enough to reduce its susceptibility to aliasing distortion.
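The plain linear retarget mentioned here is one line (a sketch with my own names, before any dithering is applied):

```go
package main

import "fmt"

// linearAdjust is the plain linear retarget: scale the old difficulty
// by the ratio of target block time to actual measured block time.
func linearAdjust(oldDiff, targetTime, actualTime float64) float64 {
	return targetTime / actualTime * oldDiff
}

func main() {
	// Blocks taking twice as long as targeted halve the difficulty.
	fmt.Println(linearAdjust(1000, 300, 600)) // 500
}
```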

As far as I can tell from my research, nobody has done 9 algorithms on one coin before; I think Myriad is the most, with 5. Most multi-algo coins use complex filters over the averaging window. The original Parallelcoin method, for instance, takes the time between each scrypt or sha256d block and the previous block of the same algorithm, grabs 10 of these, and works from their average. I believe others also try to isolate the algorithms.

When I told @marcetin about my idea of giving this hardfork the codename 'Plan 9 from Crypto Space' and that two of the extra 7 algorithms had serious implementation bugs, he said "There must be 9, that name is awesome", or something like this. So, I looked for algorithms and came up with GOST and Whirlpool.

Whirlpool, strangely, has not found its way into any PoW algorithm to date, which is odd, because it's a particularly good one; it was most notably used by Truecrypt. GOST only appears in a couple of coins, and not by itself.

SHA256D, Scrypt, Keccak, Skein, X11, Lyra2REv2, and Blake14lr are all used in numerous coins exactly as I have implemented them, so given the right getwork configuration, you will be able to mine these with ASICs or GPUs, depending on what you have available, what is sitting idle collecting dust.

On the other hand, GOST and Whirlpool have no existing ASIC or GPU miners. Likely a modification would allow an X11-GOST miner to deploy the GOST function by itself, but Whirlpool will be constrained to CPU mining for some time. Eventually someone will probably implement them for OpenCL and CUDA; in fact this announcement may prompt someone to spend a few weeks writing them in preparation. But at first, only CPU mining will likely be possible with these two.

If you followed the explanation above of how the indiscriminate averaging window (its only constraint being that it ends with the algorithm in question) causes stochastic behaviour and latency in the difficulty adjustment, what this means is essentially that no matter which algorithm you mine with, no algorithm can take more than 1/9th of the total hashing power for long, and the only miner for Whirlpool and GOST will initially be the new pod non-wallet full node.

Sure, maybe this enables botnets, but even then only 2/9ths of the hashpower could be affected this way.

Also, I should note that because so many options are available, with the stochastic latency of their averaging windows, it doesn't matter if ASICs mine the coins: would-be attackers face at least 5x the logistical burden for an attack attempt. I am not against ASICs; I just want to ensure miners don't wake up one day to find they are suddenly minnows up against a whale of hashpower.

The multi-algorithm based security of the chain hashrate is definitely an innovation, although only a simple logical step beyond existing multi-algo chains. And if arranging so many algorithms gets easier, we can probably add more to further dilute the effect.

After the Plan 9 from Crypto Space hardfork, Parallelcoin should be the most hashrate-attack-resistant small coin in the space.
lokiverloren
Newbie
*
Offline Offline

Activity: 80
Merit: 0


View Profile
December 11, 2018, 01:28:22 PM
 #583

My test console: going from top left downwards and then right, each terminal is a node mining one of the following algorithms:

sha256d
blake14lr
whirlpool
lyra2rev2
skein
x11
gost
keccak
scrypt

https://i.postimg.cc/hX71nLRD/Screenshot-from-2018-12-11-14-24-40.png

You can see, in the terminals that have found blocks recently, which algorithm found them and how far their block times are diverging according to the adjustment windows; they essentially act independently, and randomly.
SamaelDNM
Member
**
Offline Offline

Activity: 700
Merit: 13

New exchange generation


View Profile WWW
December 12, 2018, 08:21:11 PM
 #584

Good climb  Shocked



lokiverloren
Newbie
*
Offline Offline

Activity: 80
Merit: 0


View Profile
December 12, 2018, 10:24:21 PM
 #585

Not gonna get *too* excited just yet but it's a nice splash of cold water in the face Smiley I'm now building the proper hardfork framework so I can merge the alpha development branch back easily... It's been a pretty rough month so far.

I'm basically not going to get paid anything unless the market likes what I am doing, and even if this is a P&D for the moment, or just another community member playing silly buggers, it still woke me up.

Oh, of course, nothing is 100% confirmed, but I will be scheduling the hard fork to fall somewhere around 18 January, which should be plenty of time. The difficulty adjustment seems robust in testing so far; I only need to tidy up the mining RPC functions and I'll be able to test semi-realistic increases in difficulty and see how it responds. I won't really be over the moon until I see it consistently recover fast and stay on the clock even with a big drop in network hashrate.
lokiverloren
Newbie
*
Offline Offline

Activity: 80
Merit: 0


View Profile
December 13, 2018, 02:32:49 PM
 #586

I was advised by my boss, @marcetin, to put more of my rambling thoughts into a more accessible medium, as, rightly, the possibilities and designs I am working on have great potential, especially for us parallelcoiners, but also more broadly for proof of work altcoins in general: a good model for dealing with the circumstances of the milieu, decreasing instability and insecurity, is good for everyone and brings more money to the table for all.

So, as I am now almost finished adding an effective hardfork switch system, with most of the remaining work on the mining RPC API, I have written an article posted on the Steem-fork forum system being run, very protectively, by my good friend Bilal Haider, a project I will become more involved with once we reach the milestone and have the full suite ready for the parallelcoin network to upgrade.

It's about how Proof of Work is the only real way to protect networks, but that it needs a radical revamp in order to survive:

https://fast.bearshares.com/cryptocurrency/@loki/pow-s-not-dead

More generally, regarding the progress, as I said, the core network functionality changes are mostly complete, and will now enter the intensive alpha test phase once I have the GetWork functions working on all algorithms.

I found a suitable base for an official multi-algorithm GPU miner in the Decred miner: it is written in Go and currently contains the OpenCL and CUDA kernels for Blake14lr, one of the algorithms Parallelcoin will have post-fork. It should not be excessively difficult to expand it to at least 8 of the 9 algorithms, as GPU miners exist for most of them: Lyra2REv2, Skein, GOST (as part of x11-gost), x11, keccak, sha256d and scrypt. The repository has been forked and I will start working on it immediately.

Once the getwork functionality and multi-algo RPC ports are fully functional, the new miner will be available to use. One of the modes I plan for it is algorithm-hopping: it automatically switches to the lowest-difficulty algorithm (relative to benchmarks) currently on the chain, a feature that will help keep the chain robust against attacks. When some algorithms are hit with high hashpower, the others lower in proportion to the rise, as part of keeping the clock within as tight a range as possible.
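The algorithm-hopping selection could work roughly like this. The names and the scoring rule (benchmarked hashrate divided by current difficulty) are my assumptions for illustration, not the planned miner's actual logic:

```go
package main

import "fmt"

// pickAlgo chooses the algorithm with the best expected block rate,
// scoring each as benchmarked hashrate / current difficulty. Algorithms
// without a known positive difficulty are skipped.
func pickAlgo(benchHashrate, difficulty map[string]float64) string {
	best, bestScore := "", 0.0
	for algo, hr := range benchHashrate {
		d, ok := difficulty[algo]
		if !ok || d <= 0 {
			continue
		}
		if score := hr / d; score > bestScore {
			best, bestScore = algo, score
		}
	}
	return best
}

func main() {
	bench := map[string]float64{"scrypt": 1e6, "skein": 5e6}
	diff := map[string]float64{"scrypt": 1e3, "skein": 1e4}
	fmt.Println(pickAlgo(bench, diff)) // scrypt: 1000 blocks/unit beats 500
}
```

A real miner would refresh the difficulty map from the node's RPC ports and re-run the selection each time work goes stale.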

So, anyway, back to work, gotta wrangle these miner RPCs!
marcetin
Legendary
*
Offline Offline

Activity: 1101
Merit: 1008


ParalleCoin's ruler from the shadow


View Profile WWW
December 13, 2018, 05:33:34 PM
 #587

I was advised by my boss, @marcetin

I am the leader not a boss but yes ok I will take all responsibility Smiley

Anyhow, it has been a long time since my last login here, but that is just because I had no time, as Loki and I have been developing the new ParallelCoin platform, which is about 95% ready for release. Some people have gathered from Loki's writings on Discord how much we have done, where we are, and that we will fork in mid January; interest has risen, and the Cryptopia market shows it.

My hope is to settle everything by the end of next month, so that all things from step 1 will be done and released. I know this has all taken so long, but we finally have the new coin code, all golang, with the whole system: com-http for indexing coins and Bitnodes for coin node hosting. Bitnodes will go live in the next couple of days. Things have changed from my initial plan. As you know, I worked with mblados (Rod Zisso), who acted like a jerk and stopped responding to me after months of telling me he was working on the coin. Do not ever work with that guy if he is still here, and if he sees this: FUCK YOU ROD. I am still mad at him, as he left me hanging like a fool and stopped answering my messages.

On the new site you will get all the explanations, and we will create more when the fork happens.

So stay tuned and know that this is our life story and there is no turning back; it is a true story. We don't ask for money, we are just combining all accessible and suitable technologies to expand the cryptocoin tech story.
As we say these last days, for us it is like "ParallelCoin or death", because for the last 6 months we have really been living this: sleep, eat, programming. I even learned to program in golang, so all my new work is in Go, a language people still have not fully recognized as the language of the internet of things, servers, etc., built by Google to solve Google's problems, which are the greatest server problems in the world. Also, 3 legendary guys were involved in creating it, each of whom had been involved in developing at least 3 programming languages before.

So, as I was mentioned as the one in charge, I announce with this post that our "PLAN 9" goes live from this moment!!!

Cheers!

--------------------------------------------------
  BITNODES   MANAGED NODE VPS HOSTING SERVICES    SUPPORT YOUR COIN WITH A NODE
--------------------------------------------------
My fellow members, ask not what the community can do for you, ask what you can do for the community. CCW-WebRes-BitStickers-AnonStickers.shop------------------------------ ParallelCoin

The Secret of Success is to find out where people are going.. and get there first! - Mark Twain
Bitrated user: marcetin.
marcetin
Legendary
*
Offline Offline

Activity: 1101
Merit: 1008


ParalleCoin's ruler from the shadow


View Profile WWW
December 13, 2018, 10:30:14 PM
 #588

Why Go?

Cross-Platform and Binary
Go can build binaries for almost every operating system and CPU that anyone runs, which greatly improves the use of resources compared to interpreted or virtual-machine execution

Garbage Collection
When processing large amounts of data, it is critical that anything no longer in use is returned to the pool so it can be used for something else - in Go it is automatic, and most of the time, optimal. No more memory leaks!

Great Debugging and Profiling tools
Go lets you inspect every aspect of your software's performance in great detail, and the support for debugging is nearly perfect

Easy to Read
Which means it is easy to maintain and easy to change, and with its standard formatting, no longer do you have to think about where you can break a line and where you can continue, it always looks good!

Strict on Security
It is much simpler in Go to ensure that data is not going where it shouldn't; most security problems come from bugs

Concurrent
Whether your process is just a simple pipeline, or multiple processes handling different stages and input and outputs, Go makes it easy to keep everything synchronized and working to maximum capacity, as well as being able to scale up quickly at peak times

Libraries
Go has a huge and growing number of repositories to do almost everything. Because it is easy to interface with C, it should not be long before a lot of system software is written in it. Yet you can use it equally well for a website as any kind of desktop or mobile application


lokiverloren
December 15, 2018, 01:06:23 AM
#589

I have now extended the hard fork logic to cover everything: pre-hardfork mainnet information queries don't show the new algorithms.

The extra algorithm ports are all working. They open before the hard fork, but return an error saying they are not active while the block height is below the fork's starting height (so as the fork approaches you can simply start the miner against them, and it will begin mining the moment the activation block arrives). The testnet always runs the latest hardfork by default; currently there is only one, but for the next it is just a matter of adding sections to cover the rule changes, and all the slots are there to fill.
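A minimal sketch of that gating, with an invented fork height, algorithm set and error message (the real values live in the pod chain parameters):

```go
package main

import (
	"errors"
	"fmt"
)

// hardForkHeight is invented for illustration; the real height is in
// the chain parameters and will be fixed before release.
const hardForkHeight = 200000

// newAlgo marks algorithms that only activate at the hard fork;
// scrypt and sha256d have been live since genesis.
var newAlgo = map[string]bool{
	"scrypt": false, "sha256d": false,
	"x11": true, "skein": true, "lyra2rev2": true,
}

var errNotActive = errors.New("algorithm not active until hard fork height")

// checkAlgoActive mirrors the behaviour described above: the port
// accepts connections, but work requests fail until the fork height.
func checkAlgoActive(algo string, height int32) error {
	if newAlgo[algo] && height < hardForkHeight {
		return errNotActive
	}
	return nil
}

func main() {
	fmt.Println(checkAlgoActive("x11", 100000)) // pre-fork: error
	fmt.Println(checkAlgoActive("x11", 200001)) // post-fork: <nil>
}
```

A miner pointed at a new-algorithm port before the fork just retries until this check starts returning nil.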

I have also reinstated getwork, which allows Decred's gominer GPU miner to work with the node directly. I figure that unlike an ASIC, the wire to a GPU miner is PCI Express, so the time spent fetching new work is a much smaller fraction of its potential hashing time; most ASICs are connected by serial links far slower than PCI Express.

I should have kernels for all the algorithms at least in OpenCL; it may be harder to find Cuda kernels for some of them. I will be automating the miner's benchmarking function for use with an automatic lowest-difficulty algorithm switcher, to maximise profits for GPU miners and to help keep the difficulty floor from rising too fast before the price catches up to it.

There definitely is potential to add more algorithms, but in my opinion it would get a bit silly much past 20. The set just needs to represent a broad enough spread of hardware types, with the better efficiency ratios spread out geographically and in ownership so that no single miner gets an edge. I also agree that getblocktemplate ensures pools can't so easily abuse their users to attack coins. We will be building a pool that works this way too, and probably also a proxy that users can configure to translate getblocktemplate for getwork-only hardware, should there be a need for it.
lokiverloren
December 19, 2018, 05:06:26 AM
#590

Ok, another significant milestone, so I need to make a short report.

I got all the getwork stuff working properly now. Initially I was going to work from Decred's gominer GPU miner, but after spending a couple of days fighting with it and my previously incomplete work RPC code, I am writing a new, clean miner app based on the very neat and tidy config/version/etc frameworks used in these btcsuite apps.

The miner already automates a lot, and I simplified plenty of things. No more strings of complicated flags: just one parser for a credentialed URL, with options to mine on only one port, to automatically scan for the other 8 ports counting up from the first, or to supply a custom list. I should probably think about fallbacks, but since it is currently being made for solo mining first (for me, the developer!) I am getting that side done first. First I will implement the simplest getwork protocol, and then the rpcclient side of getblocktemplate, which is strangely missing from the RPC library (I guess nobody wanted it, bitcoin being bitcoin, and obviously nobody is really using btcd for mining, even on the back end of a pool).

I did a lot of reading of OpenCL code (and a bit of Cuda) and came to the conclusion that I am not going to target Cuda at all. I am only one man, I can only do so much, and OpenCL covers the broadest range of hardware platforms; I would love to add Cuda later, but it's not important at this stage. The stream compute kernels I have seen are written in all different ways, with different structures, some full of preprocessor macros, and none with any kind of standard protocol for invoking them, loading data and getting results back from the processor. I will design a simple, easy to understand framework for writing the 9 kernels required. Also, for those who might just want to 'play' with mining, the kernels will be designed to run for short periods, probably no longer than a single 60 Hz video frame, so that when 'interactive' mode is enabled you can work as normal on your machine without excessive latency in video updates. This is one area where Nvidia's platform is superior to OpenCL, but it's just a matter of priorities for the programmers. Most OpenCL mining is done on headless AMD rigs, and mining with Nvidia can be a real pain in the ass because Cuda doesn't run unless the X server is also running.

The main thing at this stage is that the kernels are decently fast, fast enough that between all the users I already know will be doing a bit of mining, they will be a substantial boost to the baseline hashrate.

The miner will have a default automatic mode that selects the algorithm with the lowest difficulty-to-solution-odds ratio based on a benchmark. Combined with pools, the lower margin of profit will be compensated by chasing the easiest blocks, and this also helps ensure GPU miners are not unduly excluded from mining. In the future, if big pools and ASIC miners become an irritation despite all the measures to mitigate their influence, the nuclear option is to construct arbitrary, not-implemented-in-custom-silicon algorithms to bolster the lower end of hashrate capabilities and further insure against centralisation of mining power.
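As a sketch of that automatic mode (function name and all numbers invented for illustration), the miner would pick the algorithm with the lowest ratio of current difficulty to benchmarked hashrate, i.e. the shortest expected time to a solution:

```go
package main

import "fmt"

// pickAlgo returns the algorithm whose difficulty divided by this
// rig's benchmarked hashrate is smallest, i.e. the best current odds
// of finding a block per unit time.
func pickAlgo(difficulty, hashrate map[string]float64) string {
	best, bestRatio := "", 0.0
	for algo, diff := range difficulty {
		hr := hashrate[algo]
		if hr <= 0 {
			continue // no benchmark for this algorithm
		}
		ratio := diff / hr
		if best == "" || ratio < bestRatio {
			best, bestRatio = algo, ratio
		}
	}
	return best
}

func main() {
	// Invented difficulties and benchmark results.
	diff := map[string]float64{"scrypt": 500, "x11": 900, "skein": 650}
	rate := map[string]float64{"scrypt": 100, "x11": 300, "skein": 200}
	fmt.Println(pickAlgo(diff, rate)) // x11: 900/300 = 3, the lowest ratio
}
```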

The current master branch at https://github.com/parallelcointeam/pod is already working perfectly on mainnet. I have pushed the hard-fork height back maybe another week into January; when it is ready it will be fixed and announced.
lokiverloren
December 19, 2018, 04:43:30 PM
#591

Well, the whole market... if you look at the last 5 years of bitcoin price also... I don't think we are quite at the bottom yet.

I know this process is taking some time, but I am living poor as a student so that I can do this. I love crypto and p2p tech, and I'm sure that once the shiny new GUI wallet is finished and everything is proven in testing, a freshly updated chain to build on will bring increased interest. In the process of building I have inevitably ended up studying code and architectures from, I guess, close to a hundred other projects.

There is now a big problem for any project trying to gain and hold a userbase: too few things really stand out.

The important thing is that we are all involved in building the future of how humans communicate and trade. We are up against several massive adversaries, governments and large multinational corporate cartels. They don't want to give up their control. Every way they can slow us down, they are trying.

I remember 2014. That was when this project originally started. Everyone was saying crypto was over because of the MtGox crash and hack. People are once again saying that crypto is over.

I am sure that we will find the bottom by summer time.
lokiverloren
December 20, 2018, 04:47:26 AM
#592

Pre-release fatigue is starting to get to me, so I am setting aside less important things in order to get the core release done.

It just so happens that today, thanks to my early work on the kopach GPU miner, I found and eliminated a bunch of problems in the mining work RPC and processing, so in a way the decision was made for me: it is getting very close to usable, and I think people will be happy enough, for a start, to have the upcoming hardfork and protocol improvements running. Miners won't mind too much if they have to use the podctl CLI client to send their coins to exchanges, so finishing the wallet can be slated alongside the GPU miner. I'll put together some simple instructions on how to do basic operations using the CLI.

So, I am completing the final parts as follows, in this order:

  • Finish changing the RPC (mainly responses) to incorporate the new protocol features
  • Test mining with ccminer/sgminer/bfgminer on all possible algorithms; ensure correct blocks are submitted and all that
  • Make sure the wallet is working; change any RPC features related to protocol changes (if any, and I don't think there are)
  • Package releases for easy, friendly installation (they already self-configure). There will be binary installers for Arch, debs and rpms; I will try to find the simplest way to make a Windows installer too, and the Mac one is a little simpler
  • Update parallelcoin.info with the new branding, new protocol features and a roadmap, and maybe a draft white paper
  • Fix the hard fork height
  • Create a new ANN thread with info about the protocol changes, where to get the new clients, and when users have to upgrade in order to stay on the main chain


There are a few other small things to note. The simple Whirlpool 256-bit and GOST Stribog 256-bit hashes have no existing GPU or ASIC miners. So of course we will be filling the office with every piece of hardware capable of hashing, even my partly broken old mobile phones; I have already confirmed that the node runs inside Termux on Android, so this is a possibility for anyone. Because it is not consensus related, I can (*after the release*) add multi-algo switching to the built-in CPU miner. Tonight I changed it so it can switch algorithms while running, and it probably won't take much more: add a benchmark function and use it to select the algorithm with the highest odds of finding a solution. Not a big priority, but a relatively easy task to complete in more relaxed conditions.

It is pretty rare for CPU mining to be at all practical. Effectively, due to the design of the system, these two non-GPU/ASIC-mineable algorithms are set to always win an equal share in proportion with the number of algorithms, meaning 2/9ths of the blocks will go to those mining them. I suppose this may induce someone to build kernels, at least for ccminer/sgminer; I know both algorithms have kernels already, just not put into place.

The other note I have to make is that in the first few days block times will inevitably be shorter as the network adjusts to the new algorithms. So for us here, living in the second world and responsible for updating this thing, obviously we are going to do our derndest to grab as many of those early easy blocks as we can. We won't be alone in this, but we should manage to make a few score DUO out of it, and a clever person might even get GPUs mining it with the few weeks' warning. Whatever, hype is good; people are always happy about a chance to share in the rewards.
lokiverloren
December 23, 2018, 11:20:41 AM
#593

The christmas cheer has seen me taking a more relaxed approach to work in the last couple of days, and as a consequence I rambled back over to the mining algorithms and difficulty parts again, and discovered there is a Go implementation of Cryptonight version 7. It has all three variants, including the new one which I think is yet to go live on the Monero network.

For some reason I thought the proofs were different, as the cryptonote protocol uses different proofs, but the actual hashing algorithm is a standard run-of-the-mill 256-bit hash. So it has been added, and then I looked around a bit more and discovered the Aurora2i Key Derivation Function (Scrypt is a KDF too, and funnily enough cn7 could probably be one as well).

I also played a bit with Desmos to make the adjustment curve less flat in the centre. It's being tested now, along with some big changes that mean CPU-only mining will be the rule for the early days at least, with many impediments to even a GPU implementation.

Here's how it works now:

Each of the 9 algorithm types (blake14lr, cryptonight7v2, keccak, lyra2rev2, scrypt, sha256d, skein, stribog, x11) now includes a first pass through Aurora, set at a buffer size of 64kb, followed by the algorithm named in the list, except for cn7v2, which does not require another hash.

The strategy impedes efficiency gains in this way: to implement the scheme on a GPU, you need to write a kernel that contains all the required component hash functions, and the combination is different for each of the 9. You could maybe write one kernel with a mode setting, but any way you slice it, the kernel has to be bigger, which is a latency cost for a miner.

Each algorithm requires about 6 component hash functions in total; one component differs across each of the set of 9, and one of the 9 has one fewer.
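A rough sketch of the two-stage chaining, using sha256 from the standard library as a stand-in for both the Aurora KDF pass and the named algorithm (the real primitives differ per algorithm; this only illustrates the structure and the cn7v2 exception):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// kdfPass stands in for the Aurora first pass (64kb buffer in the
// real scheme); sha256 is used here only to show the chaining.
func kdfPass(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

// namedAlgo stands in for the per-slot algorithm (skein, x11, etc.).
func namedAlgo(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

// hashBlock chains the stages; per the scheme above, cryptonight7v2
// is the one slot that does not require the extra hash.
func hashBlock(header []byte, isCN7v2 bool) []byte {
	out := kdfPass(header)
	if !isCN7v2 {
		out = namedAlgo(out)
	}
	return out
}

func main() {
	fmt.Printf("%x\n", hashBlock([]byte("block header"), false))
}
```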

As well as this, to further cut off shortcuts, the block time is being reduced to 30 seconds, which somewhat increases the chance of orphans being mined, but also ensures the chain clears transactions quite quickly.

The multiple algorithms can also be mined in an automatic random-selection mode, which I will be recommending to users. This further reduces the chance of an attacker determining quickly enough which algorithm is the weakest (lowest difficulty) at any given time: with many miners mining a randomly selected algorithm each block round, every new block has an entirely different and unpredictable distribution.

So even someone mining with a GPU gets less of an advantage from these algorithms, especially cn7v2, which runs best with 2mb of CPU cache per thread. Ryzen processors are pretty much on par with AMD GPUs of equivalent price when it comes to this kind of mining, because the GPU has far less cache and basically cannot mine effectively with all of its cores. GPU main memory is faster, but CPU cache is faster than GPU memory.

Requiring the common compute-hard hashes in the middle of most of these pipelines means the only way to make solving more efficient is to set up pipelines where several cores focus on one stage and pass results on to the next: first Aurora2i, then a compute-hard algorithm, then memory-hard cn7v2. This makes the interconnect between stages in such a hypothetical miner a source of serious latency and contention, which slows the solution search in a way that cannot be significantly reduced. On a GPU it would mean constructing multiple kernels with direct interconnects to pass the result through each processing phase. The CPU will be fastest at cn7v2, an ASIC would do the compute-hard hashes better, and Aurora (based on blake2b) probably does well on GPU. But as you can see, linking the optimal hardware for each stage of the pipeline creates all kinds of transport contention: how do you stream from a GPU computing Aurora2i to an ASIC core hashing, say, sha256d or blake14lr, which then has to pass its result to the CPU for the final cn7v2 hash and the target check?

Obviously this complexity makes engineering a significantly more efficient solver much harder, and compounding it, random algorithm selection, if used extensively, removes any advantage in choosing a faster arrangement and maximises the latency GPU solvers must pay if they try to match it to gain an edge.

So, in summary: after the hard fork, for maybe up to 3 months at least, if not longer, nobody will have an advantage, except perhaps people running Ryzen processors, especially Threadrippers, which are basically built for Cryptonight7 hashes. Ideally, and hopefully always, that will remain the best way to mine DUO after the fork.
lokiverloren
December 23, 2018, 06:11:58 PM
Last edit: December 24, 2018, 12:08:19 AM by lokiverloren
#594

I've not really done *that* much today, but I am pretty satisfied with the revised adjustment curve, which is no longer flat in the centre, and it converges pretty well on the target: within about 40 blocks from minimum difficulty to hitting target on my Ryzen 5 CPU.

I now have the randomised mining mode fully working, testing it with 3 nodes in a little testnet, each using two cores, and CPU utilisation stays around 80%. There is definitely less cache pressure as a result, so I'd guess the solver is doing more calculations per thread per unit time. Each mining thread selects a different algorithm, so the testnet mines with 6, and every time a block is found the selection restarts and a new set of 6 is used.

The next thing I wanted to look at was eliminating the stair-step halving, and I came up with an exponential decay curve that interpolates smoothly through the original halvings. I estimate the theoretical supply rate for the token was originally meant to be about a 24% increase per year, and due to the small block reward size I decided not to lower it to a more reasonable 5-10%. I don't think it will be a problem, and fewer tokens can have a negative impact on the market as the userbase expands, as it lowers the price precision available.

This is the formula that calculates the reward based on the block height:

// Plan 9 hard fork prescribes a smooth supply curve made using an exponential decay formula adjusted to fit the previous halving cycle
r = int64(2.7 * (math.Pow(2.7, -float64(height)/375000.0)) * 100000000)

https://i.postimg.cc/gkzvCDv9/unknown.png

(note, for illustration purposes I will multiply the height factor by 1000 to show what the reward level is at each 1000 blocks)

The image linked above shows the existing supply rate with orange dotted horizontal lines; the vertical green lines are the halving heights. The red curve is produced by the formula. You can see that it cuts each orange horizontal line at about the same point, and crosses the green vertical lines at the same ratio as each previous one.
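Wrapping the quoted formula in a runnable sketch shows the decay directly: with base 2.7, the reward divides by 2.7 every 375,000 blocks, starting from 270,000,000 base units (2.7 DUO, assuming 10^8 base units per coin):

```go
package main

import (
	"fmt"
	"math"
)

// blockReward reproduces the formula quoted above: an exponential
// decay fitted to the previous halving cycle, in base units.
func blockReward(height int64) int64 {
	return int64(2.7 * math.Pow(2.7, -float64(height)/375000.0) * 100000000)
}

func main() {
	// Each step of 375000 blocks divides the reward by 2.7.
	for _, h := range []int64{0, 375000, 750000} {
		fmt.Printf("height %7d reward %d\n", h, blockReward(h))
	}
}
```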

The difficulty adjustment curve used to be this:

adjustment := 1 + (divergence-1)*(divergence-1)*(divergence-1)

It will instead be calculated as follows:

d := divergence - 1
adjustment := 1 + d + d*d + d*d*d

https://i.postimg.cc/7xdjqCst/Screenshot-from-2018-12-23-19-04-52.png

The red curve is the old version; you can see it is very flat between about 0.8 and 1.25, so it doesn't always change the difficulty. As the Digibyte people observed when devising their 'digishield' adjustment regime, it is important to always move it. The purple line represents how it will adjust now: it will almost always move the difficulty at least a couple of bits back and forth.
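Evaluating both curves side by side shows why the old one barely moved: at a divergence of 1.1, the old cubic gives an adjustment of about 1.001, while the new 4-term polynomial gives about 1.111. A small sketch of the two formulas quoted above:

```go
package main

import "fmt"

// oldAdjust is the former curve: pure cubic in (divergence - 1),
// nearly flat for divergence near 1.
func oldAdjust(divergence float64) float64 {
	d := divergence - 1
	return 1 + d*d*d
}

// newAdjust adds the linear and quadratic terms, so the difficulty
// almost always moves even for small divergence.
func newAdjust(divergence float64) float64 {
	d := divergence - 1
	return 1 + d + d*d + d*d*d
}

func main() {
	for _, v := range []float64{0.9, 1.0, 1.1} {
		fmt.Printf("divergence %.1f  old %.4f  new %.4f\n",
			v, oldAdjust(v), newAdjust(v))
	}
}
```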

Currently the dithering consists of flipping the last 2 bits of the 'bits' value (3 becomes 0, 2 becomes 1, 1 becomes 2, and 0 becomes 3), representing a difference of about 1/2^21: very small, and maybe not ideal. I am considering deriving it instead from the last byte of the best block hash, as a signed 8-bit integer, meaning it would vary on average by a factor of 64 (2^6, to put it in perspective with the full precision). I'm not sure that's necessary, as what happens now is already quite randomised, depending entirely on the timing of the blocks. It's quite small, but I only meant it to jitter a small amount; more than this might make it prone to drift.
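That 2-bit flip is just an XOR with 0b11, which yields exactly the mapping described (3 to 0, 2 to 1, 1 to 2, 0 to 3); a one-line sketch with an illustrative function name:

```go
package main

import "fmt"

// ditherBits flips the last two bits of the compact 'bits' value by
// XORing with 0b11, producing the swap described above.
func ditherBits(bits uint32) uint32 {
	return bits ^ 0x3
}

func main() {
	for b := uint32(0); b < 4; b++ {
		fmt.Printf("last two bits %d -> %d\n", b, ditherBits(b)&0x3)
	}
}
```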
lokiverloren
December 25, 2018, 03:14:15 AM
#595

Another update, since I've got nothing else to do and no other place to be, and not feeling like sleeping after spending too much time napping today.

I had been observing that the simple 'average from the last block of the algo' scheme was perturbing the average block time, so I am biting the bullet and writing a per-block averaging scheme. It's not the typical one, though; I don't think single-algo difficulty adjustment carries over to multi-algo, even with only two algorithms, and definitely not with more than 3.

So I am testing an exponentially weighted average. It still looks only at previous blocks per algo, meaning the adjustments won't affect algos that don't precede a spike in hashpower, but instead of an indiscriminate fixed-width window I am using weighting. I haven't quite got the algorithm right yet, as it appears not to count as many previous blocks as it should (and I am getting an odd NaN problem with the first adjustments), but this is how the scheme works now:

1. Gather the full set of previous blocks per algorithm, specifically their timestamps.
2. Calculate the time between each of the blocks.
3. Weight each block time so that older ones decay at some rate, something like 90%, I think.
4. Sum the weighted actual times and divide by the sum of the weighted target times to get the divergence.
5. Plug the divergence into the 4-term polynomial that defines the adjustment curve, then modify the difficulty.

Even though I haven't quite yet got the algorithm 100% working exactly as described above, it already achieves pretty good convergence on block time target.
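The steps above can be sketched as follows (decay factor, target and interval values are invented for illustration; the real code works from per-algorithm block timestamps):

```go
package main

import "fmt"

// divergence computes the exponentially weighted ratio of actual to
// target block intervals: the newest interval gets weight 1, and each
// older one decays by the given factor (the ~0.9 mentioned above).
func divergence(intervals []float64, target, decay float64) float64 {
	weight, sumActual, sumTarget := 1.0, 0.0, 0.0
	for i := len(intervals) - 1; i >= 0; i-- { // newest interval last
		sumActual += intervals[i] * weight
		sumTarget += target * weight
		weight *= decay
	}
	return sumActual / sumTarget
}

// adjustment is the 4-term polynomial curve discussed earlier.
func adjustment(div float64) float64 {
	d := div - 1
	return 1 + d + d*d + d*d*d
}

func main() {
	// Seconds between an algorithm's recent blocks, invented values.
	intervals := []float64{25, 40, 31, 28}
	div := divergence(intervals, 30, 0.9)
	fmt.Printf("divergence %.3f adjustment %.3f\n", div, adjustment(div))
}
```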

Outside of that, I am also encountering issues that I can only assume relate to network timeouts and such on my 3-node testnet. I have reduced the retry period from the default 5 seconds to 1, as blocks previously seemed to come in at multiples of 5 seconds a lot. It may not be a change needed on the live network, but I suspect it is, because the 5 seconds was sized for the 10-minute block time and this chain runs fully 20x shorter than that. Block propagation hits many bottlenecks because of this, which would otherwise force a lengthening of the block time.

Bitcoin's engineers, and more or less any crypto with a big miner userbase, don't have to deal with this problem; I would love to be a fly on the wall when they are faced with it. In the long run it's an issue that will have to be engineered into the protocol, though many projects in the space protect themselves from it simply by following the herd and changing nothing. It's not all bad either: the alts that get this right are pretty much guaranteed success, simply by the way the population of miners shifts from mostly parasites to fans.

It needs some more work yet; I don't think my implementation is correct or efficient at this point. It is not showing me nearly as many past block time samples as I expect, nor are they weighted as I expect. But it's pretty promising so far.

Aside from that, I have a rudimentary randomisation script that enables and disables threads on the miner to simulate real variance in network hashpower, and each thread automatically selects a random algorithm whenever the template goes stale. This will help a lot. My old Ryzen 7 would have been more fun to work with, as this Ryzen 5 has only 6 cores, but then again, in a way this simulates the fluctuating network latency as caches fill and cause waiting.
lokiverloren
December 27, 2018, 06:50:02 PM
#596

Experience and testing are very important, and I don't want to let something loose on a live network until my confidence is very high; my criteria are a lot higher than most. So there is a development process, and I am writing about it here to give people a view into it, and, in case anyone wants to contribute, some feedback. In the absence of that, it is enough to describe what I am doing.

I have spent a lot of hours staring at logs slowly moving up my screen, making changes, resetting the chain, waiting for results... and I have learned many things about different strategies for difficulty adjustment.

One really key thing about multi-algorithm chains is that an algo cannot be totally decoupled from the timing of the rest of the chain. But tying it to the newest block of any algo causes oscillation problems, and keeping the algos fully separate leads to oscillations between minimum difficulty and ten or more target intervals passing before a block can be found.

So, the current thing I am testing combines the two averages.

On one hand, each algorithm computes an exponentially weighted average based only on that algorithm's blocks. This aims to have each algorithm tend towards an equal share of blocks; in other words, its per-algo target is the chain target multiplied by the number of algorithms.

However, running alone, this leads to oscillation between stupidly long blocks and too many blocks of one algorithm coming in quick succession.

So now I am experimenting with adding weight from the whole chain's block times: a simple average over some number of blocks in total, currently about 300. A 50/50 split between the two averages did not seem to be enough, so right now I am testing giving the global block time average 3 times the weight of the per-algorithm average; making the per-algo side match 1/9th of the average did not respond well. I can now also see, in my log outputs, the overall average, the per-algo average, the divergence and the compounded adjustment. Now when it is over target, the majority of algos are also raising the difficulty, and the ones that are still too short are lowering it.
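A sketch of the biasing under test, with the global average weighted three times as heavily as the per-algorithm average (the function name and sample values are illustrative, not the production code):

```go
package main

import "fmt"

// combinedDivergence blends the whole-chain average block time with
// one algorithm's average at a 3:1 ratio, then compares the blend to
// the target to produce the divergence fed into the adjustment curve.
func combinedDivergence(globalAvg, perAlgoAvg, target float64) float64 {
	combined := (3*globalAvg + perAlgoAvg) / 4
	return combined / target
}

func main() {
	// The chain is running slow (35s) while this algorithm is fast
	// (20s): the global term dominates, so the combined divergence
	// stays above 1 and the whole chain is pulled back together.
	fmt.Printf("%.3f\n", combinedDivergence(35, 20, 30))
}
```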

It seems to balance better, but I won't know exactly what the best ratios are until I have run enough tests to see how it pans out over a longer time period.

One of the things that I am noticing a lot has to do with the latency of nodes updating other nodes about new blocks. It seems to me that the connection-oriented TCP connections used in conventional blockchains are not nearly as fluid in propagation as they should be. I'm not going to address this issue in this hard fork, as the issue is less important with more hash power and the intervening natural network latency fluctuations, but I for sure would like to see an architecture with full duplex communication by streaming between nodes.

But that's for the future, so I won't dwell too much on that.
lokiverloren
December 28, 2018, 01:28:57 PM
#597

Some results!

https://github.com/parallelcointeam/pod/blob/master/isolatedtestnet/run1.csv

The all-time (since block one) average for this run is about 28 seconds per block. The variance generally stays between 0.5 and 2.0, and if you track the all-time average progressively you can see it oscillates randomly between about 25 and 35 seconds.

This data set was generated using exponential weighting on the per-algorithm blocks combined with the all-time average at a factor of 0.75; I am going to run it next with 0.9 (meaning it averages over a much longer range). Just by looking at the sequence of algorithms you can see they don't often run too short, but there are periods where it diverges more. I think the bigger window will smooth this out, and I will put that data set in the repo afterwards as well.

I have debug prints of the retarget parameters in the code currently, not perfectly pretty or anything, but it looks like this:


retarget: blake14lr height 36, old 1f1b1fa9 new 1f333493 average 38.79 weighted 43.797 blocks in window: 3 adjustment 1.887848271
retarget: keccak height 36, old 1f1b1fa9 new 1f2e7629 average 38.79 weighted 39.739 blocks in window: 3 adjustment 1.712952696
retarget: sha256d height 36, old 1f1b1fa9 new 1f23131b average 38.79 weighted 5.532 blocks in window: 2 adjustment 1.293137255
retarget: skein height 36, old 1f1b1fa9 new 1f23131b average 38.79 weighted 18.295 blocks in window: 3 adjustment 1.293137255
retarget: lyra2rev2 height 36, old 1f1b1fa9 new 1f23131b average 38.79 weighted 12.725 blocks in window: 2 adjustment 1.293137255
retarget: stribog height 36, old 1f1b1fa9 new 1f23131b average 38.79 weighted 3.957 blocks in window: 4 adjustment 1.293137255
retarget: scrypt height 36, old 1f1b1fa9 new 1f2c8107 average 38.79 weighted 38.065 blocks in window: 3 adjustment 1.640781434
retarget: cryptonight7v2 height 36, old 1f1b1fa9 new 1f223f51 average 38.79 weighted 29.292 blocks in window: 2 adjustment 1.262636357
retarget: x11 height 36, old 1f1b1fa9 new 1f23131b average 38.79 weighted 20.641 blocks in window: 4 adjustment 1.293137255
retarget: stribog height 37, old 1f23131b new 1f2ae479 average 39.43 weighted 27.914 blocks in window: 5 adjustment 1.222891696
retarget: keccak height 37, old 1f23131b new 1f3d1053 average 39.43 weighted 39.739 blocks in window: 3 adjustment 1.740966977
retarget: cryptonight7v2 height 37, old 1f23131b new 1f2d02c2 average 39.43 weighted 29.292 blocks in window: 2 adjustment 1.283285993
retarget: sha256d height 37, old 1f23131b new 1f2e191c average 39.43 weighted 5.532 blocks in window: 2 adjustment 1.314285714
retarget: lyra2rev2 height 37, old 1f23131b new 1f2e191c average 39.43 weighted 12.725 blocks in window: 2 adjustment 1.314285714
retarget: scrypt height 37, old 1f23131b new 1f3a7db1 average 39.43 weighted 38.065 blocks in window: 3 adjustment 1.667615399
retarget: x11 height 37, old 1f23131b new 1f2e191c average 39.43 weighted 20.641 blocks in window: 4 adjustment 1.314285714
retarget: blake14lr height 37, old 1f23131b new 1f434c6a average 39.43 weighted 43.797 blocks in window: 3 adjustment 1.918722861
retarget: skein height 37, old 1f23131b new 1f2e191c average 39.43 weighted 18.295 blocks in window: 3 adjustment 1.314285714


'average' is the all-time average and 'weighted' is the result of the per-algorithm exponential weighting; both are compared against the target (currently 30 seconds in testing, though I am not 100% sure it will stay that way). You can see that in this case the time has stretched to 39 seconds per block, so all of the adjustments are over 1. The higher the 'weighted' value, the higher the adjustment, and vice versa.

Tweaking the per-block weighting may improve things, so I am testing several variants. I think 0.9 will probably be quite good, as it should stretch to at least the 20 previous blocks. The width of variance this allows depends a lot on this factor. Ideally, blocks should only very rarely come in under 10 seconds or over 40 seconds (a variance of about 33%), and it does seem to manage this quite well: the average rarely goes under 20 seconds and rarely over 40, which is very satisfactory for ensuring regular block times.

In the event of one algorithm being mined at 10x the rate of the others, the scheme aggressively adjusts that algorithm's difficulty. It's important to note that the multiple algorithms are part of the security against block time manipulation. They also act as a limiter on how well an ASIC can perform on the chain, or more specifically, force an ASIC to mine all of the algorithms at the same time; that is the only way to shorten the overall block time without the adjustment springing back hard against it. The complexity, and the requirement of a parallel pipeline per algorithm, should hold ASICs to less than a 20% efficiency advantage, and GPUs are similarly hamstrung by the pipeline, so maybe they will gain 10% once implemented.

But the main thing I'm interested in is that mining remains accessible to most people, as the power cost and load of this scheme will not add much to an enthusiast's computer. Yes, that does make it the kind of workload that could be injected into people's machines by botnet miners.

Botnets will of course be able to mine this coin. But ultimately that's none of our business, and if it's worth coordinating such mischief then it's good for us too, as obviously the coin is in demand. In the future there may be countermeasures against botnets. Botnets tend to have high latency, and I think 30-second blocks naturally limit how productive a botnet farm can be: with higher latency in its control channels it will keep digging on stale blocks for longer after a new block comes in, whereas proper miners will have fast, low-latency connections.

There is no practical way at this point to further improve the speed of the chain without additions to the peer-to-peer network code. I can see two ways to tighten things up. First, node operators need to make sure their clocks never drift more than about 10 seconds from NTP time; really, I would say no more than 3 seconds of skew. Second, a new peer gossip protocol would be needed that does not, as the current one does, disconnect all nodes at once when moving them around. The connection should not be a TCP connection, with its high initiation overhead, but a streaming duplex UDP channel with FEC to reduce retransmit latency, used for announcing the arrival of new blocks, together with an 'epidemic' transmission scheme in which nodes pick their lowest-latency peers to stream updates to.

I would want to improve this down the track anyway. The same error-corrected UDP streaming protocol has many uses beyond accelerating and tightening the convergence of the mempool and new blocks; it is well suited to audio, video and instant messaging as well.

I will make a few runs of a decent number of hours each with different exponential weighting factors, but I think this regime should work quite well.

Oh, and currently, if I made no further changes, the averaging would be based on the first block directly after the genesis in early 2014, which means actual block times would likely run shorter for a long time. I'm not sure this is a good thing: 30 seconds is pretty much as tight as the current peer protocol can handle, and with the all-time block time average at 12m45s the new algorithm would pull block times down to the equivalent of about 2-3 minutes.

I'm not completely decided on this question. I'm inclined towards instead setting the 'all time' starting block to be the first block of the fork. The reason is that if the target were based on the original target time, with the reward computed on height rather than timestamps (which is a bad idea anyway), then the supply would swell at double the current rate, which is frankly way too fast. What I think will be perceived as most fair is to start at exactly 2 DUO per block from the fork block onwards. I want to constrain the supply rate; I think the current (theoretical) 25% inflation rate is terribly high and would want it to be at most 10%.

With 30-second blocks, the reward would thus start at 0.2 DUO per block, and with exponential decay 90% of the supply expansion always lands in the first year, which is why I talk about increasing the precision of the token. Recalibrating this so that, for example, the post-fork reward is 2 DUO per 30 seconds would require dividing the pre-fork tokens by a factor of 10 in their current value; the point is that it increases the precision by one decimal place.
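The arithmetic here is easy to sanity-check. The 0.2 DUO figure and the divide-by-10 recalibration are from the discussion above; the helper name is mine.

```go
package main

import "fmt"

// blocksPerYear is plain calendar arithmetic for a given block time.
func blocksPerYear(blockSeconds float64) float64 {
	return 365.25 * 24 * 3600 / blockSeconds
}

func main() {
	perBlock := 0.2 // DUO per 30-second block under the constrained rate
	n := blocksPerYear(30)
	fmt.Printf("blocks per year at 30s: %.0f\n", n) // 1051920
	fmt.Printf("first-year issuance at %.1f DUO/block: %.0f DUO\n", perBlock, perBlock*n)
	// Raising the reward tenfold to 2 DUO per block while dividing the
	// pre-fork balances by 10 leaves every holder's relative share
	// unchanged and adds one decimal place of precision to the token.
}
```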

So, my current, tentative idea is that I will indeed raise the block reward by 10, as it will fall back to the same level within a year at a 10% supply expansion rate, though it does increase the overall supply. That raises the potential for a bigger sell side, but only temporarily: after 1 year it would be at 2 DUO per 5 minutes, and after 2 years down to 0.2 DUO per 5 minutes. I think the supply is a little too tight anyway, but this way, after the first flush in the first 6 months, supply will be lower. That makes a big incentive to put miners on the network in the short term, which will swell the userbase, and as long as we keep doing the right things by our users (miners, traders and e-commerce) we can sustain that userbase while the supply tightens to a rate that keeps better in sync with the economy as a whole.

I will be discussing the subject in the Discord chat, which you can find here: https://discord.gg/nJKts94
lokiverloren
December 28, 2018, 02:49:19 PM
 #598

Just a note about a further improvement: using an all-time average necessarily means the adjustment will eventually lose precision, so I am adding a second factor, a 'half time' average, computed from the block at (integer-rounded) half the block height to the first block (which will be set as the hard fork activation height). This adds a further, half-decaying precision adjustment pegged to a long time range, to help reduce oscillation a bit further.
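One plausible reading of this, sketched in Go; the function name and details are mine, not the node's actual code.

```go
package main

import "fmt"

// halfTimeAverage returns the mean block interval between the first
// block (the fork activation height) and the block at integer-rounded
// half the chain height, given each block's unix timestamp.
func halfTimeAverage(timestamps []int64) float64 {
	half := len(timestamps) / 2 // integer-rounded half height
	if half == 0 {
		return 0
	}
	span := timestamps[half] - timestamps[0]
	return float64(span) / float64(half)
}

func main() {
	ts := make([]int64, 10)
	for i := range ts {
		ts[i] = int64(i) * 30 // ten blocks, 30 seconds apart
	}
	fmt.Printf("half-time average: %.1f s\n", halfTimeAverage(ts)) // 30.0 s
}
```

As the chain grows, the half-height anchor moves forward half a block for every new block, which is where the "half-decaying" character comes from.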

I may add a quarter, a third, or some other smaller long-run average, but I suspect the half will be sufficient.
lokiverloren
December 28, 2018, 08:14:18 PM
Last edit: December 29, 2018, 01:51:10 AM by lokiverloren
 #599


Because of the greater complexity of the PoW, I am finding that the 30-second testing block time is too short: every so often things slow down enough that the adjustment hits the minimum difficulty and a series of short blocks occurs, so I am going to leave it at 5 minutes. I am inclined to follow the strategy adopted by Bitcoin and Litecoin and implement a form of Lightning for faster clearance. Many functions don't require realtime confirmation, and even 30 seconds is too slow for physical point of sale anyway.

I am a firm believer in the Proof of Work scheme, and I absolutely do not want anything involving staking at the base level. I think stake can be useful at higher levels of the protocol, like social networks, but not so much at the replication protocol level.

A thing that is on my mind is eliminating the TCP spin-up/teardown overhead. Currently, even running entirely on localhost, nodes disconnect and then reconnect for no reason I can discern, and when this happens as a block is found, the other nodes can spend quite a long time (up to 5 seconds sometimes) mining a block that is stale but, *even on localhost*, they don't know it yet! Insane!

So it is not on the agenda for this hard fork, for which I am getting close to a settled difficulty and mining architecture, but I am very much in favour of adding a new connection method between peers, and I think I should even make RPC extensions using KCP reliable-UDP connections. A decent server with a 100 Mbit connection can handle 5000 open sessions, with zero overhead for keeping connections open and low retransmission rates once Reed-Solomon erasure coding is added to the data. With this kind of connection between peers, a block discovery should mean the longest any node keeps digging on a stale block is under 400 ms, even for the worst-connected pair at opposite sides of the planet.

Right now, for most crypto network protocols, that is the best case even on a loopback testnet. If it works as well as game servers require, and most of them work this way, using reliable UDP streaming, the same benefits could be realised for blockchains. With the TCP overhead latency eliminated, 30-second blocks would hardly be pushing it, and since I have seen 19 nodes on the Steem network hit a 3-second block cycle, who cares about the super-tight clocks staking chains can manage if you can make a PoW chain run at 5-second blocks? One interesting consequence: no matter how much hashpower you have, with a 5-second block time you barely get started digging on a block before someone else finds it. I can already see a related effect with 30-second blocks, because each test node mines only one algorithm: a node goes straight back to digging on top of the block it just made.

As for progress, I think I am only a few days away from being confident I have built a good difficulty adjustment mechanism. There is also the issue of the early blocks of the fork and the inevitable retargeting process. Looking at how it works so far, I think the chain will be fully on its proper clock within 6-12 hours, so there will be a 'fair hardfork' period of 288 blocks at the same 0.02 reward as the initial 998 blocks. Thereafter the closely fitted exponential decay curve will take over, starting from 2 and decaying to approximately 75% per year (about a 25% annual supply expansion rate), which roughly matches the 250,000-block halving cycle, but without the stair steps.
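That claim about matching the halving cycle checks out numerically. A quick worked check, assuming 5-minute blocks and taking the 0.75 yearly retention figure from above:

```go
package main

import (
	"fmt"
	"math"
)

// retainedOverHalving computes what fraction of the reward a smooth
// yearly-retention curve leaves after one halving cycle's worth of
// blocks at a given block time.
func retainedOverHalving(yearlyRetention, blockMinutes, halvingBlocks float64) float64 {
	blocksPerYear := 365.25 * 24 * 60 / blockMinutes
	return math.Pow(yearlyRetention, halvingBlocks/blocksPerYear)
}

func main() {
	// 250,000 blocks at 5 minutes each is about 2.38 years; retaining
	// 75% per year over that span leaves roughly half the reward, so
	// the smooth curve halves on about the same schedule as the
	// stair-step halving.
	r := retainedOverHalving(0.75, 5, 250000)
	fmt.Printf("reward retained over 250k blocks at 0.75/yr: %.3f\n", r)
}
```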
lokiverloren
December 30, 2018, 10:07:15 PM
Last edit: December 31, 2018, 01:41:39 AM by lokiverloren
 #600


Experimenting with shorter block times, watching the appearance of orphans, and trying several different combinations of averages, I am now close to something that seems fully functional. The main thing I learned is that the p2p network's connection protocol contains several hard-coded, arbitrarily long delays that hold up the propagation of new blocks, sometimes by up to 5 seconds; since I am testing on loopback, that delay is very visible.

So I have tweaked the timeouts and delay timers in the network protocol, and that alone has reduced orphans significantly and smoothed out the block production timing. The current master works fine on the main net, and I have now updated it with these better connection parameters, which will definitely improve the performance of the network.

The averaging now uses an exponentially weighted per-algo interval target; an all-history average block time, to pull the long-term average towards the target time; and a trailing average that sits approximately 9 hours behind the current head block, to reduce short-term variation. Cubing the per-algo interval term gives it more weight in the calculation and keeps the algorithms evenly distributed.
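A toy version of this combined retarget, to show the shape of it; the equal blending weights and function name here are mine and purely illustrative, not the node's actual formula.

```go
package main

import (
	"fmt"
	"math"
)

// combinedAdjustment blends three terms relative to the target block
// time: the cubed per-algo weighted interval (emphasised), the
// all-history average, and the ~9-hour trailing average.
func combinedAdjustment(perAlgo, allTime, trailing, target float64) float64 {
	cubed := math.Pow(perAlgo/target, 3) // per-algo term dominates
	return (cubed + allTime/target + trailing/target) / 3
}

func main() {
	target := 30.0
	// One algorithm running slow against otherwise on-time averages:
	// the cubed term pulls the adjustment well above 1, easing the
	// difficulty for the lagging algorithm faster than a plain mean.
	fmt.Printf("adjustment: %.4f\n", combinedAdjustment(40, 30, 31, target))
}
```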

After a little more testing I will build a miner app that works with the getwork RPC command I reinstated. I'm not sure it is properly compatible with regular miners; it looks like it pads some of the fields wrongly. But that doesn't matter much at this point. There will not be a wide gap between smaller and larger miners on the network for some time, and hopefully GPU miners won't be so much faster that solo CPU mining becomes unviable.

The biggest reason to get rid of the long block time, and shorten it as much as possible, is that it helps keep the distribution broad. Solo mining loses viability when the chance of finding a block stretches into days, and fewer blocks necessarily means more hash power spent per block. There are other ways to gain efficiencies, and GPU miners will probably still be more efficient, but the gap should not be as big as with the usual non-pipelined multi-algo mining. Specialisation is less viable on a chain where you can't squeeze significantly more out of your mining by focusing on one algorithm, which raises the complexity of getting better efficiency.

The other thing is that shorter block times mean lower overall clearance time, which is a desirable feature. Of course it has to be balanced against security and throughput, but why should it be any slower than it has to be? Network timeouts are entirely non-consensus, and variations won't get nodes banned so long as they are not significant, so as the network upgrades to the new node it will improve even before the fork takes effect.

To shorten the time it takes for me to see the block times balancing properly, I have set the first 15 blocks to a fixed 30-second delay each. I don't think I can really enforce that on the live network, but the all-time and trailing averages take effect as soon as the first block of each new algorithm comes in, and the weighting joins the calculation from each algorithm's second block, so it will probably converge quickly. If the speed currently being tested passes, it will be 15-second blocks: 240 blocks per hour and 5760 per day, with the trailing average window at 640 blocks per algorithm, approximately one day of averaging, combined with the all-time average.
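The block-rate figures are easy to verify: 15-second blocks give 240 per hour and 5760 per day, and split across the 9 algorithms that is 640 blocks per algorithm per day, so the 640-block trailing window spans roughly one day.

```go
package main

import "fmt"

// Checking the block-rate arithmetic for 15-second blocks across
// 9 mining algorithms.
func main() {
	const blockSeconds = 15
	const algos = 9
	perHour := 3600 / blockSeconds
	perDay := 24 * perHour
	fmt.Println(perHour, perDay, perDay/algos) // 240 5760 640
}
```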

Plan 9:
9 second blocks
9 complex new algorithms

I am just updating this to report that, even with a random submission latency of between 256 and about 4384 milliseconds when sending to the two other nodes in a 3-node testnet (roughly simulating the same happening to two thirds of the network), it does not produce excessive numbers of orphan blocks; hardly any, in fact. I still have to test with one node on a 4G connection and two VPS nodes, one in Asia and one in America, with the testnet submission delay disabled, to see if it behaves similarly on a realistic global network.

Seeing this reinforces my intuition that most chains are not really exploring the underlying network protocol as a factor in delays and wasted work on orphans. Averaged out, the typical delay in my test is about 2 seconds; if we allow a wide margin of 300 ms per hop to propagate to 8 peers, that is 8^5 (32k) nodes reachable in under a quarter of the block time, so it would be a pretty rare situation for the whole network to lag longer than this.

I suspect the per-algo block time average helps as well, tending to enforce an even delay between blocks over a 9-block period.