Coinchange
|
|
April 21, 2017, 08:18:47 PM |
|
Trading volume is extremely low.
|
|
|
|
tearodactyl
|
|
April 21, 2017, 10:38:14 PM Last edit: April 24, 2017, 11:24:23 PM by mprep |
|
I see on page 1: "faster block verifications 4x"... so what is the target block time?
As the dev said: 2-minute blocks at 10 coins == 7,200 coins/day (the same rate as BTC and Zcash before the first halvening). What he's referring to with the 'verification' are the Equihash PoW algorithm parameters. (Google it.) Tearo

TL;DR: Numerous earlier attempts by various cryptocurrencies to make PoW memory-hard did not succeed. The reasoning was to make ASICs impractical and to nerf GPUs, to support wider distribution of mining. Most of the time people talked about the 51% malicious attack, but the current Bitcoin fiasco shows that mining consolidation can also lead to severe governance problems. And we have discovered that large chunks of hash power moving to and from smaller chains can be disruptive and even predatory.

Zcash went with Equihash exactly because of its promise of being memory-hard. However, thanks to significant community work optimizing the CPU and particularly the memory requirements, the parameters the Zcash team chose initially turned out not to be hard enough to nerf GPUs. And they never got around to increasing them prior to launch, even though the choice was questioned already. Zcash uses N=200, K=9, which produces a 1344-byte solution.

The whole point of this Zero project is to toughen up the parameters, to N=192, K=7. The resulting PoW solution actually goes down in size, to 400 bytes, and takes less time to verify, which is great. Meanwhile, the RAM and CPU requirements increase substantially. However, even these may not be hard enough: as you can see, the hash rate is overwhelmingly dominated by GPU farms and CPU mining is not feasible. Personally, I'd like to test increasing the parameters further, so anyone likewise inclined, do message me.
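For the curious, those solution sizes fall out of Equihash's minimal solution encoding: a solution is 2^k indices, each n/(k+1)+1 bits wide. A quick sketch, assuming that standard encoding:

```python
# Equihash minimal solution encoding (standard-encoding assumption):
# 2^k indices, each n/(k+1)+1 bits wide, packed into bytes.
def solution_bytes(n, k):
    index_bits = n // (k + 1) + 1
    return (2 ** k) * index_bits // 8

print(solution_bytes(200, 9))  # 1344 (Zcash's N=200, K=9)
print(solution_bytes(192, 7))  # 400  (Zero's N=192, K=7)
```

So the tougher Zero parameters actually shrink the on-chain solution, which is why verification gets cheaper even as solving gets harder.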
The Blockchain Dynamics podcast, which is my favorite altcoin news source, keeps giving our little Zero continued coverage: http://www.blockchaindynamics.net/episode-59-outline.html It was also mentioned in earlier episodes (55-57). Give it a listen, and you can also thank them for supporting the cause in their forum thread on Cryptopia: https://www.cryptopia.co.nz/Forum/Thread/634?page=6
|
|
|
|
AKRO
Member
Offline
Activity: 83
Merit: 10
|
|
April 24, 2017, 06:48:27 PM |
|
Fascinating algo. I'm trying to stop my mind from immediately wondering what VEGA will do with its HBM2 memory and super-wide bus, and whether half, single, or double precision is the deciding factor with this algorithm. In any case, I can definitely see this coin finding its niche, especially if the novel differences setting it apart from the others come into play in a significant or generally positive way.
|
|
|
|
tearodactyl
|
|
April 24, 2017, 09:44:09 PM Last edit: April 24, 2017, 11:24:39 PM by mprep |
|
Fascinating algo, i'm trying to stop my mind from immediately wondering what VEGA will do with its HBM2 memory, super-wide bus and if half, single, or double precision is the deciding factor with this algorithm.
The Equihash algo is not particularly CPU-intensive; the Blake2 rounds it does are all simple integer arithmetic and bit ops. It was picked specifically because of its steep memory curve. So if you compare, architecturally, say an i7 with 4 cores against a cutting-edge GPU with dozens of cores, the performance does not scale with the number of cores. The GPUs use super-fast memory, but an i7 has a complex cache hierarchy with 6+ MB of on-chip SRAM.

So if we were to pick Equihash parameters such that four threads required 8 GB of memory, the resulting performance difference might not be huge, given highly optimized implementations tuned to each memory architecture's specifics. GPU dies are really huge, which makes them inherently expensive and always leaves room for yield issues. So the best price per Equihash solution has not been fully explored yet.

But it's clear that a dedicated ASIC maker would have to compete with the likes of Intel and AMD in designing complex memory subsystems and accessing volume production on the most advanced semiconductor processes. Good luck with that... Tearo
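To put rough numbers on that memory curve, here is a back-of-the-envelope sketch, not a solver benchmark: it counts only Equihash's initial hash list of 2^(n/(k+1)+1) entries at about n/8 bytes each, and ignores index bookkeeping and per-round buffers that real solvers add on top.

```python
# Rough, simplified memory estimate for Equihash's initial list only.
# Assumes 2^(n/(k+1)+1) entries of n/8 bytes; real solvers need more
# for indices and intermediate round buffers.
def initial_list_mib(n, k):
    entries = 2 ** (n // (k + 1) + 1)
    return entries * (n // 8) / 2 ** 20

print(f"{initial_list_mib(200, 9):.0f} MiB")  # 50 MiB  (Zcash params)
print(f"{initial_list_mib(192, 7):.0f} MiB")  # 768 MiB (Zero params)
```

Even this lower bound shows why the Zero parameters bite: the working set jumps by more than an order of magnitude over Zcash's, which is exactly the pressure on GPU memory the post describes.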
Hey, nobody bothered to leave a message for the BD folks, telling them how much we love them and appreciate their coverage of Zero. Wake up and promote the coin, you lazy bums!
|
|
|
|
AKRO
Member
Offline
Activity: 83
Merit: 10
|
|
April 26, 2017, 03:55:19 AM |
|
Fascinating algo, i'm trying to stop my mind from immediately wondering what VEGA will do with its HBM2 memory, super-wide bus and if half, single, or double precision is the deciding factor with this algorithm.
The Equihash algo is not particularly CPU intensive, and the Blake2 rounds it does, it's all simple integer arithmetic and bit ops. It was picked specifically because of the steep memory curve. So, if you compare architecturally say an i7 with 4 cores against a cutting edge GPU with dozens of cores, the performance does not scale with the number of cores. The GPUs are using super fast memories, but an i7 has a complex cache structure with 6+ MB of on-chip SRAM. So, if we were to pick Equihash parameters, such that four threads would require 8 GB of memory, resulting performance difference may not be huge, given highly optimized implementations tuned to the memory architecture specifics. These GPU dies are really huge, and that makes them inherently expensive and always leaves room for yield issues. So the best price per Equihash solution has not been fully explored yet. But it's clear that a dedicated ASIC maker would have to compete with the likes of Intel and AMD, for designing complex memory subsystems and accessing volume production in the most advanced semiconductor processes. Good luck with that... Tearo
Hey, nobody bothered to leave a message for the BD folks, telling them how much we love them and appreciate their coverage of Zero. Wake up and promote the coin, you lazy bums! Would you happen to know how well it hashes on an i7, say dual- vs single-channel memory?
|
|
|
|
tearodactyl
|
|
April 26, 2017, 05:26:45 AM |
|
would you happen to know how well it hashes on an i7, say dual vs single channel?
I'll be redoing the benchmarks in about two weeks and will let you know. What i7 motherboard are you using, and with what speed memory? Tearo
|
|
|
|
tearodactyl
|
|
April 26, 2017, 05:33:50 AM |
|
I have no tool to explore the network node numbers. I only tried to analyze how many nodes my node could connect to and, after discovering peers, how many of them accepted connections (the log shows connect failures).
Hm, anyone can jump in to correct me, but the two hosted nodes mentioned via addnode in the .conf file are probably re-serving addresses of nodes seen earlier but no longer there? Right now my VPS node is swamped by another project, but when that's over, I will leave a Zero node up in the background.

zerodev2, if it's not too much trouble, could you post a pre-built version of the Windows node on Github? [with all the disclaimers] I don't have the cycles to get into Windows builds right now, but could run one on an available box. Tearo

P.S. Kudos on filing a pull request with the Zero Github repo.
|
|
|
|
AKRO
Member
Offline
Activity: 83
Merit: 10
|
|
April 28, 2017, 12:34:55 AM Last edit: April 28, 2017, 01:20:24 AM by AKRO |
|
would you happen to know how well it hashes on an i7, say dual vs single channel?
Will be re-doing benchmarking in about two weeks and let you know. What i7 MB are you using, with what speed memory? Tearo

Ah, I don't have an i7 just yet, but I've been eyeing some upgrades; I'm currently on pretty entry-to-mid-grade CPUs. I get about 0.2 Sol/s on an AMD quad with -t1, but the pool suggests a bit more than that, ~3 S/s; my guess is it's such a low number that the pool can't give a solid estimate. In either case it does get a little unstable and comes really close to my RAM limit, which might be holding it back a bit, so I'm going to Frankenstein my parts and have it run on 16 GB.

*edit* I ran it with -t3 and had a few invalid shares, a lot actually, but it seems to be less than 1 Sol/s. Three threads on a Steamroller quad get toasty.
|
|
|
|
tearodactyl
|
|
April 28, 2017, 04:09:19 AM |
|
I had it run on -t3 and I had a few invalid shares, a lot actually, but it seems to be less than 1 sol/s. on a 3 threads on a steamroller quad that gets toasty.
You may be better off running 2 threads at 80% CPU load than, say, 3 pegged at 100%, forcing the system to swap all the time and perhaps overheat. Try it... Your mileage will vary. Tearo

P.S. I will have an updated set of CPU and GPU results in a week.
|
|
|
|
RisitasIssou
Newbie
Offline
Activity: 14
Merit: 0
|
|
April 30, 2017, 11:11:30 AM |
|
Hello. How can I get into the Slack? The link in the first post doesn't work anymore. Is there a website forthcoming? Thanks
|
|
|
|
matejbilahora
Sr. Member
Offline
Activity: 1418
Merit: 275
Community built, Privacy driven
|
|
April 30, 2017, 01:18:10 PM |
|
Hello How can I get in to the slack ? Because the link in the first post doesn't work anymore Is there a website forthcoming ? thks
PM me with your email address.
|
|
|
|
Coinchange
|
|
May 01, 2017, 02:48:18 PM |
|
Volume kicking in - market cap nearing 200 $ - nice
|
|
|
|
|
kilo17
Legendary
Offline
Activity: 980
Merit: 1001
aka "whocares"
|
|
May 02, 2017, 07:55:42 AM |
|
I posted this on the other thread: can you make the 4 GB version available for Linux? (Answered on the other thread.) Thanks
|
Bitcoin Will Only Succeed If The Community That Supports It Gets Support - Support Home Miners & Mining
|
|
|
tearodactyl
|
|
May 02, 2017, 04:58:11 PM |
|
optiminer, great news for so many smaller miners here. What did it take to fit into the 4GB, please? Tearo
|
|
|
|
tearodactyl
|
|
May 02, 2017, 05:03:20 PM |
|
I posted this on the other thread, can you make available the 4gb version for Linux. answered on the other thread

This is the thread in question: https://bitcointalk.org/index.php?topic=1896901.msg18834287#msg18834287

Summary: Windows 64-bit is supported, a Linux version is coming, and there are separate binaries for AMD 4 GB and 8 GB cards, with the 8 GB version (not surprisingly) being faster. Tearo
|
|
|
|
optiminer
|
|
May 02, 2017, 08:41:45 PM |
|
optiminer, great news for so many smaller miners here. What did it take to fit into the 4GB, please?
It reuses some of the memory buffers between rounds, at the expense of less efficient memory access patterns, and cuts the bucket sizes down a bit. This means it will find slightly fewer solutions per iteration.
|
|
|
|
|
tearodactyl
|
|
May 03, 2017, 06:50:05 PM |
|
optiminer, great news for so many smaller miners here. What did it take to fit into the 4GB, please?
It reuses some of the memory buffer between the rounds at the expense of less efficient memory access patterns. And cutting down the bucket sizes a bit. This means it will find a bit fewer solutions per iteration. Right on! Do you think some of these optimizations could apply to the CPU miner code as well? Tearo
|
|
|
|
optiminer
|
|
May 03, 2017, 08:10:25 PM |
|
optiminer, great news for so many smaller miners here. What did it take to fit into the 4GB, please?
It reuses some of the memory buffer between the rounds at the expense of less efficient memory access patterns. And cutting down the bucket sizes a bit. This means it will find a bit fewer solutions per iteration. Right on! Do you think some of these optimizations could apply to the CPU miner code as well? Tearo These 'optimizations' are for making the algo run with 4gb instead of 8gb but slower. So, this would only make sense for the CPU if you are short on memory. I still believe that equihash is much better suited for GPUs than CPUs even with the changes params. You could increase the memory usage even more like 8x more than currently. Then, it would be difficult to implement it with less then 32gb ram which would make it difficult to find a GPU where it works on.
|
|
|
|
|