papampi
Full Member
Offline
Activity: 686
Merit: 140
Linux FOREVER! Resistance is futile!!!
November 05, 2017, 06:22:53 PM
I installed nvOC 19 1.4 and it works fine except for auto temp control; I'm constantly getting this message:
sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.
Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.
All done. GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50
I've set the PL to 150 W, but somehow it shows Power: 50. Can you help me please?
Open Maxximus007_AUTO_TEMPERATURE_CONTROL and find this line:
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -5 | head -c -3 )
and change it to:
POWERLIMIT=$(echo -n $PWRLIMIT | tail -c -6 | head -c -3 )
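The reason the original line breaks: `tail -c -5` keeps only the last 5 bytes of the power string, which truncates a three-digit wattage like 150.00 to 50.00 before `head -c -3` strips the decimals. A quick sketch of both variants (GNU coreutils; `head -c -3` meaning "all but the last 3 bytes" is a GNU extension):

```shell
PWRLIMIT="150.00"
# original: the last 5 bytes of "150.00" are "50.00"; stripping ".00" leaves "50"
echo -n "$PWRLIMIT" | tail -c -5 | head -c -3   # prints 50
# fixed: the last 6 bytes keep the full "150.00"; stripping ".00" leaves "150"
echo -n "$PWRLIMIT" | tail -c -6 | head -c -3   # prints 150
# two-digit limits are unaffected, since tail returns at most what is there
echo -n "99.00" | tail -c -6 | head -c -3       # prints 99
```

This is why the status line showed Power: 50 even though the limit was correctly set to 150 W.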
You can also fix the sudo error by editing your hostname in both files: sudo nano /etc/hosts and sudo nano /etc/hostname, making the names match.
azertyuiopvnx
Newbie
Offline
Activity: 52
Merit: 0
November 05, 2017, 06:26:18 PM
Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs). I would like to SSH on the rigs individually from a remote location from a different IP address.
I was thinking about setting SSH on a different port on each rig, for example:
rig1: SSH on port 1024
rig2: SSH on port 1025
rig3: SSH on port 1026
And so on...
On the router I would set up virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2, and so on.
Do you think it's a good idea or are there better ways to do this?
Just redirect a different port for each rig, so when you connect:
XXX.XXX.XXX.XXX port 10001 for rig1 -> redirect to 192.168.1.11 port 22
XXX.XXX.XXX.XXX port 10002 for rig2 -> redirect to 192.168.1.12 port 22
etc... If you are using PuTTY, just create a new shortcut with -P 1000x for each rig. I did the same and it works great.
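On the client side, OpenSSH can remember the port mapping for you. A minimal ~/.ssh/config sketch; the Host aliases and the placeholder IP 203.0.113.10 are illustrative, and m1 is nvOC's default login user:

```
# ~/.ssh/config on the client machine
Host rig1
    HostName 203.0.113.10   # your router's public IP (placeholder)
    Port 10001
    User m1

Host rig2
    HostName 203.0.113.10
    Port 10002
    User m1
```

After that, `ssh rig1` connects to the right rig without remembering port numbers.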
Stubo
Member
Offline
Activity: 224
Merit: 13
November 05, 2017, 06:39:58 PM
I installed nvOC 19 1.4 and it works fine except for auto temp control; I'm constantly getting this message:
sudo: unable to resolve host 19_1_4
Power limit for GPU 00000000:0E:00.0 was set to 150.00 W from 150.00 W.
Warning: persistence mode is disabled on this device. This settings will go back to default as soon as driver unloads (e.g. last application like nvidia-smi or cuda application terminates). Run with [--help | -h] switch to get more information on how to enable persistence mode.
All done. GPU 12, Target temp: 70, Current: 58, Diff: 12, Fan: 30, Power: 50
I've set the PL to 150 W, but somehow it shows Power: 50. Can you help me please?
I don't know if this will totally fix the issue, but the sudo error re: host resolution can be corrected by fixing the hostname. In 19-1.4 there is an issue where the hostnames in /etc/hosts and /etc/hostname do not match: IIRC, /etc/hostname has 19_1_4 and /etc/hosts has m1-desktop. Edit one or the other (or both) so they match. If you edit /etc/hostname, you will have to reboot. Hope this helps.
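For example, standardizing on m1-desktop (the name mentioned above; the 127.x lines shown are the stock Ubuntu defaults, so your file may have more entries), the two files would end up looking like:

```
# /etc/hostname
m1-desktop

# /etc/hosts
127.0.0.1   localhost
127.0.1.1   m1-desktop
```

Once both names agree, sudo stops printing the "unable to resolve host" warning.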
JayneL
Member
Offline
Activity: 104
Merit: 10
November 05, 2017, 06:41:26 PM
Been busy lately; I will try to respond to the PMs I haven't gotten to, and to posts in the thread, either tonight or tomorrow.
I will explain how the execution logic works in nvOC.
There are some problems with the newest Nvidia driver; so I will roll it back for the next update.
Hi, if you have time, can you help me add more algos to the NiceHash auto-switch? I just want to add Cryptonight, but I encounter errors when I try to add it to the code in 3main together with the other algos. Thanks!
codereddew12
Newbie
Offline
Activity: 36
Merit: 0
November 05, 2017, 07:04:27 PM
...Been mining ETH and my hashrates have been 30-31 per 1070. I never thought you could have a PL "too low" if it's within the supported wattage of the card. Of course some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, so that's why I say the average is roughly 100 W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
I'll get my "DOH!", for mining ETH with that many NVIDIA cards, out of the way right off the bat. That being out of the way: granted, in the real world the loss isn't 1:1; however, for ease of math we'll pretend it is. If you have a 150 W TDP card and you cut the output by 30%, then you have taken a 1500 W set of cards and lowered them to 1000 W. Now you have a 500 W reduction in power, the same as the total power required to run 3.33 cards at full power (for ease of math, call it 3 cards). So you have an effective rate of 7 cards while 10 cards sit on the rack. To what end? Yes, it's at the lower end of stable, but what is the point? That's not counting the 1060s and your other rig(s) that make up your other 8 cards... Even if my numbers are off by half, and we pretend you paid wholesale ($375) prices for those cards, you have $624 worth of cards sitting idle to save $438 per year in consumption, while giving up 49% of your potential earnings (by running cards at hashrates as low as 30 when they can hit as high as 58). It makes less and less sense the more cards you run.
Since when can a 1070 hit 58 MH/s?
ComputerGenie
November 05, 2017, 07:30:28 PM
Since when can a 1070 hit 58 MH/s?
I don't do ETH (nor do I understand doing it with NV cards), but: https://www.youtube.com/watch?v=cdeA7s9SmRY
If you have to ask "why?", you wouldn`t understand my answer. Always be on the look out, because you never know when you'll be stalked by hit-men that eat nothing but cream cheese....
WaveFront
Member
Offline
Activity: 126
Merit: 10
November 05, 2017, 08:22:04 PM
Let's suppose that I have several rigs running nvOC behind a router with one public IP and a NAT server (the rigs have static private IPs). I would like to SSH on the rigs individually from a remote location from a different IP address.
I was thinking about setting SSH on a different port on each rig, for example:
rig1: SSH on port 1024
rig2: SSH on port 1025
rig3: SSH on port 1026
And so on...
On the router I would set up virtual servers to redirect traffic on port 1024 to rig 1, 1025 to rig 2, and so on.
Do you think it's a good idea or are there better ways to do this?
Just redirect a different port for each rig, so when you connect:
XXX.XXX.XXX.XXX port 10001 for rig1 -> redirect to 192.168.1.11 port 22
XXX.XXX.XXX.XXX port 10002 for rig2 -> redirect to 192.168.1.12 port 22
etc... If you are using PuTTY, just create a new shortcut with -P 1000x for each rig.
Yes of course :-), no need to put SSH on a different port on the rigs themselves. Thanks Temporel
codereddew12
Newbie
Offline
Activity: 36
Merit: 0
November 05, 2017, 08:32:39 PM
That's not real. It's supposedly a mod used to modify the REPORTED hashrate to make it appear higher than the EFFECTIVE rate actually is. The video you linked is all a scam for $400+. See https://bitcointalk.org/index.php?topic=2145776.0
So now, back to your question: why is it detrimental to mine at 30% less power when the effective hashrate is the same as it would be at 100% power? And I mine ETH on NV cards because it's more efficient: you are getting 30 MH/s at 100-110 W with NV vs. 29-31 MH/s at whatever the wattage is for the RX 470/480/580s, which I believe was around 130-140 W when I looked into it a while ago. On top of that, you don't have to go through all of the BIOS mods with NV as you would for the RX cards and risk bricking them.
leenoox
November 05, 2017, 08:38:19 PM
...Been mining ETH and my hashrates have been 30-31 per 1070. I never thought you could have a PL "too low" if it's within the supported wattage of the card. Of course some GPUs, like the MSI 1070 Gaming X, require a minimum of 115 watts, but some other 1070s can go as low as 90, so that's why I say the average is roughly 100 W per card. It's been stable like this for nearly 2 weeks now, so I don't see why it's a big issue if it's stable?
I'll get my "DOH!", for mining ETH with that many NVIDIA cards, out of the way right off the bat. That being out of the way: granted, in the real world the loss isn't 1:1; however, for ease of math we'll pretend it is. If you have a 150 W TDP card and you cut the output by 30%, then you have taken a 1500 W set of cards and lowered them to 1000 W. Now you have a 500 W reduction in power, the same as the total power required to run 3.33 cards at full power (for ease of math, call it 3 cards). So you have an effective rate of 7 cards while 10 cards sit on the rack. To what end? Yes, it's at the lower end of stable, but what is the point? That's not counting the 1060s and your other rig(s) that make up your other 8 cards... Even if my numbers are off by half, and we pretend you paid wholesale ($375) prices for those cards, you have $624 worth of cards sitting idle to save $438 per year in consumption, while giving up 49% of your potential earnings (by running cards at hashrates as low as 30 when they can hit as high as 58). It makes less and less sense the more cards you run.
Your math doesn't make sense in the real world. Whether he runs his 1070 at a 100 or 150 watt PL, the hashrate for ETH will not increase; it will remain at 30-31 MH/s. So he is effectively saving 50 watts per card while getting the same hashrate; what's wrong with that?
ComputerGenie
November 05, 2017, 08:46:42 PM
Your math doesn't make sense in the real world. Whether he runs his 1070 at a 100 or 150 watt PL, the hashrate for ETH will not increase; it will remain at 30-31 MH/s. So he is effectively saving 50 watts per card while getting the same hashrate; what's wrong with that?
There is exactly 0 chance that you can cut the power consumption of any electronic component and receive the exact same output. If you could, Bitmain would already have an S11 on the shelves that uses 1 watt. If my math fails to make sense to you, it's likely because you don't even math, bro.
leenoox
November 05, 2017, 09:08:44 PM
Your math doesn't make sense in the real world. Whether he runs his 1070 at a 100 or 150 watt PL, the hashrate for ETH will not increase; it will remain at 30-31 MH/s. So he is effectively saving 50 watts per card while getting the same hashrate; what's wrong with that?
There is exactly 0 chance that you can cut the power consumption of any electronic component and receive the exact same output. If you could, Bitmain would already have an S11 on the shelves that uses 1 watt. If my math fails to make sense to you, it's likely because you don't even math, bro.
Which part of my post didn't you understand? Which part of other people's posts responding to you didn't you understand? Just because the card can run at 150 watts (when running 3D-intensive calculations, e.g. playing games) doesn't mean the ethash algo requires the card to run at full power to achieve max hashrate. You had better get your facts straight before resorting to insults! Ethash doesn't require the GPU core to run at max power; it is barely even using it. Ethash is a memory-intensive algo: it pushes the memory to its limits but not the core, hence the low PL is possible without affecting the hashrate. Once again, why waste more power if there is no gain?
kk003
Member
Offline
Activity: 117
Merit: 10
November 05, 2017, 09:15:11 PM
Your math doesn't make sense in the real world. Whether he runs his 1070 at a 100 or 150 watt PL, the hashrate for ETH will not increase; it will remain at 30-31 MH/s. So he is effectively saving 50 watts per card while getting the same hashrate; what's wrong with that?
There is exactly 0 chance that you can cut the power consumption of any electronic component and receive the exact same output. If you could, Bitmain would already have an S11 on the shelves that uses 1 watt. If my math fails to make sense to you, it's likely because you don't even math, bro.
My 1060 GPUs give practically the same hashrate mining ETC at 75~100+ W (around 24 MH/s). The only reason my 13-GPU 1060 rig has a PL of 95 is that it reduces the CPU load average.
codereddew12
Newbie
Offline
Activity: 36
Merit: 0
November 05, 2017, 10:29:37 PM Last edit: November 05, 2017, 10:48:55 PM by codereddew12
Your math doesn't make sense in the real world. Whether he runs his 1070 at a 100 or 150 watt PL, the hashrate for ETH will not increase; it will remain at 30-31 MH/s. So he is effectively saving 50 watts per card while getting the same hashrate; what's wrong with that?
There is exactly 0 chance that you can cut the power consumption of any electronic component and receive the exact same output. If you could, Bitmain would already have an S11 on the shelves that uses 1 watt. If my math fails to make sense to you, it's likely because you don't even math, bro.
No one ever said the SAME exact output, so of course you're going to make assumptions to fit your (greatly warped, may I add) perception. 150 W gives me 31 MH/s whereas 100 W gives me 30.5 MH/s, so in all practicality it is essentially MORE EFFICIENT (as we've been trying to tell you) to do it this way. The only reason I questioned whether a lower PL was "bad" was based on what you guys were saying, making it seem like lower wattage was a "bad" thing. But I can see now that I obviously can't take what you say with any type of validity; I guess people will firmly stand by what they think is correct even when they don't have a clue what they're talking about. So before you go making smart-ass statements like the one below:
There is no part of me that will ever understand the idea behind taking 10 cards and intentionally turning them into 7
maybe you should at least try to put forth a worthwhile post that helps, instead of trying to belittle someone in a topic you are obviously less versed in.
Also, back to my original question: by the way, a little off topic, but do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850 W PSUs? I undervolted all of the cards; each 1070 is right around 100 W apiece and the 1060s are roughly 80 W. Been stable now for over a week, just wanted to get some feedback regarding this setup.
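The efficiency claim is easy to sanity-check from the numbers in this post (31 MH/s at a 150 W limit vs. 30.5 MH/s at 100 W). A quick bash sketch; hashrates are scaled to kH/s because bash arithmetic is integer-only:

```shell
# hashrate per watt at each power limit, in kH/s per W
echo $(( 31000 / 150 ))   # 206 kH/s per W at a 150 W limit
echo $(( 30500 / 100 ))   # 305 kH/s per W at a 100 W limit
```

So the 100 W limit delivers roughly 48% more hashrate per watt in exchange for about a 1.6% drop in absolute hashrate.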
kk003
Member
Offline
Activity: 117
Merit: 10
November 05, 2017, 11:15:56 PM
[...] do you guys think it's OK to run 12 GPUs (all 1070s except for 2, which are 1060s) on two EVGA G3 850 W PSUs? I undervolted all of the cards; each 1070 is right around 100 W apiece and the 1060s are roughly 80 W. Been stable now for over a week, just wanted to get some feedback regarding this setup.
A friend has 1070s at a 100 W PL with MC +1160, getting 31.4 MH/s on ETC and, as you said, running stable (in this case for weeks). As said before, the 1060s can usually run stable at 75 W, but I prefer 80~90. Your PSUs have enough watts, as surely you know; I guess that if well connected they should run OK.
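As a back-of-envelope check of that PSU question (GPU draw only; CPU, motherboard, and riser overhead are ignored, so the real margin is somewhat smaller):

```shell
gpu_draw=$(( 10 * 100 + 2 * 80 ))   # ten 1070s at ~100 W plus two 1060s at ~80 W
psu_cap=$(( 2 * 850 ))              # two EVGA G3 850 W units
echo "${gpu_draw} W of GPU draw vs ${psu_cap} W of capacity"
```

That is 1160 W of GPU draw against 1700 W of capacity, leaving around 540 W for everything else, so the pairing is plausible on paper.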
VoskCoin
November 06, 2017, 03:07:11 AM
I cannot get nvOC 19.4 or 17 to work with the H110 BTC Pro mobo for the life of me; I can only get one card to mine successfully, and when I plug 2 in it crashes.
Any input/help here? Also, is the dstm miner going to be added?
fullzero (OP)
November 06, 2017, 04:17:06 AM Last edit: November 06, 2017, 05:03:58 AM by fullzero
I cannot get nvOC 19.4 or 17 to work with the H110 BTC Pro mobo for the life of me; I can only get one card to mine successfully, and when I plug 2 in it crashes.
Any input/help here? Also, is the dstm miner going to be added?
The most likely problem is that you aren't powering the molex ports on the mobo and you haven't disabled the power warning setting in the BIOS. If you don't intend to power the molex ports on the H110: attach a single GPU to the 16x slot (via riser or direct), remove the storage device (USB or SSD), and boot the rig. It will launch the BIOS. Find the power warning setting and disable it. After you have disabled it, save and reboot; after the BIOS has fully reloaded, power off the rig, attach all risers and the storage device, then power on. Let me know if this is the problem.
Edit: I have read your earlier post where you explain you are using an ATX PSU for the 24-pin / CPU / molex and a server PSU for the GPUs. I highly recommend using a pico PSU in place of the ATX PSU for the 24-pin and CPU power (you can use a 4-pin with an H110; read the manual to see which side should be powered) and modifying the BIOS to not use the molex ports, as above. (Note: if you use a pico, ensure you don't power the molex ports on the mobo with it.)
https://www.newegg.com/Product/Product.aspx?Item=9SIACJ45VN4104
and a 6-pin PCIe to barrel jack adapter like this (or make your own):
https://www.amazon.com/60cm-Express-5-5X2-5mm-Plugs-Gridseed/dp/B00MTI68IE
mnh_license@proton.me
https://github.com/hartmanm
How difficulty adjustment works: every 2016 blocks, the network adjusts the current difficulty to the estimated difficulty in an attempt to keep the block generation time at 10 minutes (600 seconds). Thus the network re-targets the difficulty over a total period of: 2016 blocks * 10 minutes per block = 20160 minutes / 60 = 336 hours / 24 = 14 days. When the network hashrate is increasing, a difficulty period (2016 blocks) should take less than 14 days. How much less can be estimated by comparing the current network hashrate (the hashrate at the beginning of the period plus its % growth since) against the hashrate at the beginning of the period. This is only an estimate, because you cannot account for "luck", but you can calculate reasonably well using explicitly delimited stochastic ranges. The easy way to think about this is to look at this graph and see how close to 0 the current data points are on its y axis: if the blue line is above 0, the difficulty period should take less than 14 days; if it is below, it should take more.
http://bitcoin.sipa.be/growth-10k.png
fullzero (OP)
November 06, 2017, 04:49:38 AM
A few updates I've needed to make since installing 1.4 that may help others:
Many errors in my syslog referencing "timeout waiting for device": there is a hardcoded SanDisk drive in /etc/fstab that should be commented out:
UUID=55184403759586FB /mnt/55184403759586FB auto nosuid,nodev,nofail,x-gvfs-show,ro 0 0
/dev/disk/by-id/usb-SanDisk_Cruzer_Blade_4C530001260812105231-0:0-part1 /mnt/usb-SanDisk_Cruzer_Blade_4C530001260812105231-0:0-part1 auto nosuid,nodev,nofail,x-gvfs-show,ro 0 0
Also, if you set wattage limits in the triple digits (more than 100 W), the temp script seems to throw issues now. Changing:
echo -n 117.00 | tail -c -5 | head -c -3
to
echo -n 117.00 | tail -c -6 | head -c -3
seems to bring things back to normal. Still seeing lower hashrates than I did on version 1.1, due to cards using about 80% of their available power limit, and I'm not sure why. Also seeing hostname errors and plenty of other small things on a vanilla build (literally just changed 5 items in 1bash). v1.1 was rock solid, and while I love the idea of these new features, I value stability of the mining operation above all else, and it seems like we're going backwards a bit on that front.
I checked, and I did malform the fstab; thanks for finding the error in the auto temp script and letting me know about the fstab problem.
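Commenting the stray entries out can be scripted. A hypothetical sketch, not from the post itself: the sed patterns and the idea of prefixing `#` are mine, and `-i.bak` keeps a backup of the original file:

```shell
# prefix '#' on the hardcoded SanDisk lines in /etc/fstab
sudo sed -i.bak -e 's|^UUID=55184403759586FB|#&|' \
                -e 's|^/dev/disk/by-id/usb-SanDisk|#&|' /etc/fstab
```

In sed's replacement, `&` stands for the whole matched text, so each matching line simply gains a leading `#`.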
fullzero (OP)
November 06, 2017, 04:52:46 AM
I have been spending a bit of time reading through all of the bash scripts that make up nvOC. I urge everybody to do that, because you will learn both how it works and how you may be able to take advantage of some of the vast functionality that is offered. To make that easier, I found a Python script, beautify_bash.py, that you can run against them (1bash, 2unix, 3main, etc.) to make them more readable:
https://github.com/ewiger/beautify_bash
It basically just reformats the code with proper syntax indentation for loops and if statements. The first thing I did with the 19-1.4 release the other day was to beautify all of the scripts, then make my personal miner tweaks to those versions, and finally deploy those to my miners. I have found no issues from doing this. The basic steps to get it going are:
-Download the zip file to your PC and unzip it to get to beautify_bash.py
-Use WinSCP or a suitable substitute to SFTP it down to your rig(s) into the /home/m1 directory
-Log in to the rig and make the script executable: chmod 750 beautify_bash.py
-Execute it against the scripts you use:
./beautify_bash.py 1bash
./beautify_bash.py 2unix
./beautify_bash.py 3main
./beautify_bash.py 4update
./beautify_bash.py IAmNotAJeep_and_Maxximus007_WATCHDOG
...etc.
It will beautify each script and leave a copy of the original as <filename>~. Perhaps there is some reason for releasing the scripts with no syntax formatting, but I have yet to find it. Hope this helps.
I have started to not use standard formatting with bash at all; I wasn't really thinking about readability for others. This script looks interesting; it can probably save a lot of time when used systematically.
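The per-script invocations above collapse naturally into a loop. A small sketch, assuming beautify_bash.py already sits in /home/m1 (the file list is the one from the post):

```shell
cd /home/m1
chmod 750 beautify_bash.py
# beautify each nvOC script in turn; originals are kept as <filename>~
for f in 1bash 2unix 3main 4update IAmNotAJeep_and_Maxximus007_WATCHDOG; do
    ./beautify_bash.py "$f"
done
```

Check the `~` backups before deleting anything, since the tool rewrites the files in place.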
fullzero (OP)
November 06, 2017, 05:10:49 AM
OK, so my USB stick gets corrupted about once a day now. My mining rig already has an M.2 SSD I originally had Windows on. I would like to install nvOC on the SSD; however, my other computer has only 1 M.2 slot, which it boots from, so I'm trying to think of a way to install this OS. The normal way of installing, with a raw HDD copy, would wipe the OS that's performing the copy on the miner. Is there a method of installing beside Windows on the same drive, in a different partition, and dual booting? Or is there a way to install this OS on my USB and then run an install like Ubuntu does from a live session? I'm really wishing I had a SATA SSD on this rig; trying to find a workaround is driving me nuts.
So, I guess there is no option for a workaround?
-1.4 should solve your problems. I recommend using one of these adapters when using an SSD, as your rig has fewer cables and it is easier to swap out:
https://www.amazon.com/StarTech-SATA-Drive-Adapter-Cable/dp/B00HJZJI84
or any similar adapter.
Thanks fullzero... I think you misunderstood my drive though. I have an M.2 SSD, not a SATA SSD, and I have no other computer with an extra M.2 slot to write nvOC to it. Is there a way to write this OS to the drive as a dual boot? If so, I could do that from a Windows partition; otherwise I have no way of installing this OS to the M.2. Also, what is in 1.4 that you think will fix the issue?
You should be able to safely run -1.4 alongside Windows, so long as you select the storage device with nvOC at boot. I combine an adapter like this:
https://www.amazon.com/StarTech-SATA-Drive-Adapter-Cable/dp/B00HJZJI84
with an adapter like this:
https://www.amazon.com/QNINE-Adapter-Converter-SATA3-0-Desktop/dp/B01NBJJ34Q
to easily use M.2 SSDs like USB keys.
fullzero (OP)
November 06, 2017, 06:14:27 AM
Dear fullzero, thanks for your hard work on nvoc. Have been using it for half a year already and very much pleased with it. Are you planning to add BTG in the new release? Didn't see it in 1.4. Would be great. Thanks a lot
I will add it to the next release.