tobeaj2mer01 (OP)
Legendary
Offline
Activity: 1098
Merit: 1000
Angel investor.
|
|
August 11, 2016, 02:21:33 AM |
|
I want to start a crowdsale to fund development of a GPU miner for Zcash. The rules are below; kindly let me know your opinion.
1. All of the money goes to the developer who writes the GPU miner for Zcash.
2. Each participant donates 0.2 BTC; when the miner is ready, each participant receives a copy. The miner will NOT be released to the public.
3. The BTC go to a trustworthy escrow and are released to the developer once the miner is confirmed working.
4. The miner should be delivered before the first Zcash release.
|
Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa
|
|
|
bbc.reporter
Legendary
Offline
Activity: 3066
Merit: 1478
|
|
August 12, 2016, 07:58:03 AM |
|
Is it not risky to commit to this? According to the Zcash team, the algorithm could change in the future, which may or may not allow GPU mining.
|
|
|
|
rdnkjdi
Legendary
Offline
Activity: 1256
Merit: 1009
|
|
August 16, 2016, 07:02:30 AM |
|
I would most certainly like to be involved in this. To do it right, I would suggest trying to find a few developers and seeing what they would charge.
Tromp, Wolf0, and two others I can't recall (one was the guy who built the Monero miner).
People are super non-committal, but I would be happy to pay significantly more for a closed miner as long as the money was held in escrow by a reputable BTT member.
|
|
|
|
nerdralph
|
|
August 16, 2016, 04:06:55 PM |
|
I might be interested. I've done a bunch of work on ethminer in the past: https://github.com/nerdralph/ethminer-nr/tree/110
I was already researching zcash/equihash, and communicated with Alex Biryukov about the Equihash paper. I can write OpenCL, but not CUDA; even though they are similar, I'm not interested in doing CUDA code. I'd need at least 10 BTC to write it.
|
|
|
|
YIz
|
|
August 16, 2016, 04:14:48 PM |
|
A GPU miner for Zcash will get released to the public eventually, and you will need a serious amount of bitcoin to get someone to code one for you.
|
|
|
|
Mugatu
Member
Offline
Activity: 93
Merit: 10
|
|
August 16, 2016, 08:03:39 PM |
|
I may be able to write a CUDA-based miner.
I haven't spent much time looking at zcash/equihash yet, and won't be able to for a few weeks.
I would need to look into this more before providing a quote.
|
|
|
|
tobeaj2mer01 (OP)
Legendary
Offline
Activity: 1098
Merit: 1000
Angel investor.
|
|
August 22, 2016, 02:03:55 AM |
|
There is another option: you could build a Zcash GPU miner and sell it to the public; you might make a lot of money. I will certainly buy one if it's not expensive.
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1108
|
|
August 22, 2016, 02:38:04 AM |
|
> I would most certainly like to be involved in this. To do it right, I would suggest trying to find a few developers and seeing what they would charge. Tromp, Wolf0, and two others I can't recall (one was the guy who built the Monero miner).

It's a little early to be developing GPU miners given that the CPU miner is still being optimized and is lacking important features like multi-threading. Once the CPU miner looks reasonably optimized, I might try my hand at a CUDA version.
|
|
|
|
Q_R_V
Sr. Member
Offline
Activity: 428
Merit: 250
|
|
August 22, 2016, 12:48:31 PM |
|
> There is another option: you could build a Zcash GPU miner and sell it to the public; you might make a lot of money.

A miner with a built-in fee would be the best idea, something like CDM.
|
|
|
|
Ty13rDerden
Jr. Member
Offline
Activity: 157
Merit: 7
|
|
August 28, 2016, 09:43:30 PM Last edit: August 28, 2016, 10:18:21 PM by Ty13rDerden |
|
Formal proposals should be made here on the Zcash forum: https://forum.z.cash/t/crowdfund-gpu-miner/1324
Most people have opted to commit 1 BTC each for access to the miner.
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1108
|
|
September 26, 2016, 02:45:38 PM |
|
> I would most certainly like to be involved in this. [...]

> It's a little early to be developing GPU miners given that the CPU miner is still being optimized and is lacking important features like multi-threading. Once the CPU miner looks reasonably optimized, I might try my hand at a CUDA version.

I ended up having to optimize the CPU miner myself: https://forum.z.cash/t/breaking-equihash-in-solutions-per-gb-second/1995
|
|
|
|
Nauticalam
Newbie
Offline
Activity: 14
Merit: 0
|
|
September 26, 2016, 03:15:33 PM |
|
> There is another option: you could build a Zcash GPU miner and sell it to the public. [...]

> A miner with a built-in fee would be the best idea, something like CDM.

0.2 BTC is quite expensive for small miners, so a fee-based system would suit them better.
|
|
|
|
nerdralph
|
|
September 26, 2016, 04:01:51 PM |
|
> It's a little early to be developing GPU miners given that the CPU miner is still being optimized and is lacking important features like multi-threading. Once the CPU miner looks reasonably optimized, I might try my hand at a CUDA version. I ended up having to optimize the CPU miner myself: https://forum.z.cash/t/breaking-equihash-in-solutions-per-gb-second/1995

I think you are making a mistake assuming the performance scales with available memory. I've already figured out how to create the initial table of 2M hashes in a fully parallel way, such that on a GPU with n compute units, each one creates 2M/n of the hashes. Evenly distributing the sorting amongst the CUs looks a lot harder. I'm confident I can get the memory requirements at k=9, n=200 down to 128MB. Since each hash is only 25 bytes, using index compression it might be possible to do an efficient solution with half that. The memory I/O requirements should be less than 4GB (i.e. < 4GB of reads + writes), meaning a low-end GPU like an R7 370 could do tens of solutions per second.
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1108
|
|
September 26, 2016, 04:38:47 PM |
|
> I think you are making a mistake assuming the performance scales with available memory.

I make no such assumption. Please read solardiz' reply, which I fully agree with.

> I'm confident I can get the memory requirements at k=9, n=200 down to 128MB.

I would be quite shocked if you can get peak memory usage down to that!
|
|
|
|
nerdralph
|
|
September 26, 2016, 06:23:14 PM |
|
> I think you are making a mistake assuming the performance scales with available memory.

> I make no such assumption. Please read solardiz' reply, which I fully agree with.

Your statement: "That gives it an estimated single-threaded time*space performance of 0.063 S/s / 0.54GB ~ 0.12 S/GBs". If it doesn't scale with memory, then it was nonsense to include the memory use in GB as a multiplier in performance.

> I'm confident I can get the memory requirements at k=9, n=200 down to 128MB.

> I would be quite shocked if you can get peak memory usage down to that!

The hashes themselves only take 50MB, so 128MB is not an unreasonable target. After the first sort, you only need to keep 200-20=180 bits/hash plus a 21-bit index per entry, since you only care about those that collide on the first 20 bits.
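The time*space figure quoted in this exchange is simple division; a quick check, taking the quoted numbers as given:

```python
# Reproducing the quoted time*space metric:
# 0.063 Sol/s at 0.54 GB peak memory use.
sols_per_sec = 0.063
peak_gb = 0.54
metric = sols_per_sec / peak_gb
print(f"{metric:.2f} S/GBs")  # -> 0.12 S/GBs, matching the quote
```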
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1108
|
|
September 26, 2016, 06:50:28 PM |
|
> Your statement: "That gives it an estimated single-threaded time*space performance of 0.063 S/s / 0.54GB ~ 0.12 S/GBs". If it doesn't scale with memory, then it was nonsense to include the memory use in GB as a multiplier in performance.

It makes about as much sense as measuring a country's productivity in GDP per capita. Does GDP scale linearly with capita? Can you just stick more people in a country to increase its GDP? No, it takes many other resources. Is GDP per capita nonsense then? Maybe there are even better analogies, but that's all I could think of for now.

> The hashes themselves only take 50MB, so 128MB is not an unreasonable target. After the first sort, you only need to keep 200-20=180 bits/hash plus a 21-bit index per entry, since you only care about those that collide on the first 20 bits.

You need a fair bit of space to hold the index-lists...
|
|
|
|
nerdralph
|
|
September 26, 2016, 07:27:23 PM Last edit: September 26, 2016, 08:22:54 PM by nerdralph |
|
> It makes about as much sense as measuring a country's productivity in GDP per capita. [...]

> You need a fair bit of space to hold the index-lists...

You're engaging in logical fallacies with the GDP analogy, besides taking the discussion off on a tangent. When I'm done writing a GPU implementation, I'll be quoting performance in solutions/second. An 8GB card will give no more performance than 4GB of memory at the same clock speed.

As for space for indexes, 21 bits isn't that significant. After the first sort round, you need 2*21 bits + 180 hash bits = 222 bits. After the 2nd round, you need 4*21 bits + 160 hash bits = 244 bits, which fits nicely in 32-byte fixed-size records. By the 3rd round the size of the indexes overtakes the hash size, but the number of hashes diminishes. My guess is the zcash C++ implementation uses inefficient data structures containing 64-bit pointers, bloating the memory use. I also think it is possible to modify Wagner's algorithm in ways that may optimize performance.

edit: here's a paper by DJB that discusses optimizing Wagner's algorithm. Optimizations may or may not be possible while still conforming to Equihash's algorithm-binding requirement. https://cr.yp.to/rumba20/expandxor-20070411.pdf
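The round-by-round counting here can be tabulated directly. A small sketch, assuming (as this post argues) that a round-r entry carries 2^r packed 21-bit indexes plus the remaining 200 - 20*r hash bits:

```python
# Per-entry storage by round for Equihash n=200, k=9, following the
# counting argument above: after round r, an entry is the XOR of 2^r
# original hashes, so it needs 2^r indexes of 21 bits each, plus the
# hash bits not yet cancelled (20 per round).

for r in range(1, 9):
    index_bits = (2 ** r) * 21
    hash_bits = 200 - 20 * r
    total = index_bits + hash_bits
    print(f"round {r}: {index_bits:5d} index bits + {hash_bits:3d} hash bits = {total} bits")
```

Rounds 1 and 2 give the 222- and 244-bit figures above; by round 8 the same formula gives 5416 bits (677 bytes) per entry, which is where the index lists take over completely.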
|
|
|
|
tromp
Legendary
Offline
Activity: 990
Merit: 1108
|
|
September 26, 2016, 09:37:48 PM |
|
> When I'm done writing a GPU implementation, I'll be quoting performance in solutions/second.

Sure, that makes the most sense. But as Jonathan Toomim pointed out in the Zcash forum, and as solardiz agreed, Sol/s and Sol/(GiB·s) are two distinct metrics. Both are important. I chose the somewhat less common measure because 1) for a memory-hard algorithm, it's crucial that you cannot trade off memory for speed without penalty, and 2) I didn't want to disclose the precise Sol/s performance of my miner.

> As for space for indexes, 21 bits isn't that significant. After the first sort round, you need 2*21 bits + 180 hash bits = 222 bits. After the 2nd round, you need 4*21 bits + 160 hash bits = 244 bits. [...] By the 3rd round the size of the indexes overtakes the hash size, but the number of hashes diminishes.

And by the 8th round you need 256*21 bits + 40 bits = 5416 bits = 677 bytes. That's where you need to get creative.

> edit: here's a paper by DJB that discusses optimizing Wagner's algorithm.

I'm familiar with some of DJB's papers on Wagner's algorithm, although not that particular one.
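To make concrete why the late rounds demand creativity: if the entry count stays near 2^21 per round (the expected behaviour when 20 collision bits are cancelled each time; an assumption for this sketch, not a figure from the thread), storing naive 677-byte round-8 entries would dwarf any 128MB budget.

```python
# Naive round-8 storage, assuming ~2^21 surviving entries per round.
entries = 2 ** 21
round8_entry_bytes = (256 * 21 + 40) // 8  # 677 bytes, as derived above
total_gib = entries * round8_entry_bytes / 2**30
print(f"{total_gib:.2f} GiB")  # -> 1.32 GiB for uncompressed round-8 entries
```

Hence the need for index compression or recomputing index lists rather than carrying them through every round.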
|
|
|
|
nerdralph
|
|
September 27, 2016, 12:04:08 AM |
|
> As for space for indexes, 21 bits isn't that significant. [...] By the 3rd round the size of the indexes overtakes the hash size, but the number of hashes diminishes.

> And by the 8th round you need 256*21 bits + 40 bits = 5416 bits = 677 bytes. That's where you need to get creative.

Possibly. If the C++ implementation is using 64 bits for each index and takes ~1GB of peak memory, using packed 21-bit indexes cuts the memory used by two-thirds. You also probably know that GPUs are more efficient at manipulating 32-bit and 24-bit values than modern CPUs (which are optimized for 64-bit register operations and larger cache lines). I don't know about NVidia, but on AMD GCN the memory is divided into 32-bit channels, and all channels can read or write simultaneously. Combined with the well-known higher peak bandwidth of GDDR5 vs DDR3/4, this makes the usable memory bandwidth on the GPU much higher. Even if I run into problems and can't get the memory requirements much lower than 256MB due to the size of the index storage in the final stages, the larger record size will make it easier to max out the memory bandwidth. With the latencies of GDDR5 and the way page open/close is queued, I find you need to read or write in at least 128-byte chunks to maximize memory bandwidth.
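A minimal sketch of the packed-index idea discussed here: 21-bit indexes stored contiguously in a byte buffer instead of one 64-bit word each. This is illustrative Python under the thread's stated parameters; a real miner would do the equivalent with integer shifts inside OpenCL/CUDA kernels.

```python
# Pack/unpack 21-bit Equihash indexes LSB-first into a contiguous buffer,
# instead of spending a 64-bit word (or pointer) per index.

INDEX_BITS = 21

def pack_indexes(indexes):
    """Pack 21-bit values LSB-first into a bytes object."""
    acc = bits = 0
    out = bytearray()
    for idx in indexes:
        acc |= idx << bits
        bits += INDEX_BITS
        while bits >= 8:
            out.append(acc & 0xFF)
            acc >>= 8
            bits -= 8
    if bits:                      # flush any trailing partial byte
        out.append(acc & 0xFF)
    return bytes(out)

def unpack_indexes(data, count):
    """Recover `count` packed 21-bit values from the buffer."""
    acc = bits = pos = 0
    result = []
    for _ in range(count):
        while bits < INDEX_BITS:
            acc |= data[pos] << bits
            pos += 1
            bits += 8
        result.append(acc & ((1 << INDEX_BITS) - 1))
        acc >>= INDEX_BITS
        bits -= INDEX_BITS
    return result

idxs = [0, 1, 2**21 - 1, 1234567]
assert unpack_indexes(pack_indexes(idxs), len(idxs)) == idxs
print(f"{len(pack_indexes(idxs))} bytes packed vs {len(idxs) * 8} with 64-bit words")
# -> 11 bytes packed vs 32 with 64-bit words
```

Four indexes take ceil(4*21/8) = 11 bytes instead of 32, roughly the two-thirds saving claimed above.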
|
|
|
|
bluedeep
Legendary
Offline
Activity: 977
Merit: 1011
|
|
September 27, 2016, 12:05:29 AM |
|
Where can we read more info about Zcash? Is there a BitcoinTalk ANN thread? Any links are welcome.
|
|
|
|
|
|