Grim
|
|
May 27, 2015, 03:58:26 PM |
|
new numbers from lyra (work still in progress): gtx980: 1900kh/s, gtx780ti: 2400kh/s, gtx750ti: 990kh/s

Good job. Interesting to see that the 780ti is outperforming the 980. Will you share your changes or keep them private?

Yes, it is a little strange: the 780ti runs at 95% TDP while the 980 and 750ti run at about 65/66% (which could mean that the 980 could do a lot better). I will have a look at the PTX; the answer is probably there... For the moment those changes will remain private.

780ti = 384-bit vram ...
|
|
|
|
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
May 27, 2015, 04:31:35 PM |
|
new numbers from lyra (work still in progress): gtx980: 1900kh/s, gtx780ti: 2400kh/s, gtx750ti: 990kh/s

Good job. Interesting to see that the 780ti is outperforming the 980. Will you share your changes or keep them private?

Yes, it is a little strange: the 780ti runs at 95% TDP while the 980 and 750ti run at about 65/66% (which could mean that the 980 could do a lot better). I will have a look at the PTX; the answer is probably there... For the moment those changes will remain private. 780ti = 384-bit vram ...

True, however the 780ti is way behind the 980 on neoscrypt, which is more demanding at that level.
|
djm34 facebook pageBTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
May 27, 2015, 05:02:05 PM |
|
I strongly recommend high fan settings for anyone who can afford the extra noise this will cause. Modern-day GPUs can certainly run very hot without any trouble; heck, the 980 specifications allow for 98C temperatures. However, with cards at full load for prolonged periods of time (aka mining), higher temperatures will simply cause higher failure rates.
Failure rates vary greatly, and everyone's experience is their own. In any case, with lower average temperatures, one should expect fewer failures over time. For most of us, I think we tend to replace the hardware (upgrade, new toys, etc.) before the GPUs fail definitively - but there are certainly exceptions now and then.
My "hot" 980 runs 80% fan, and the "cold" one 70% fan (both reference cooler designs), and both of them sit at 70C (+/-2) temperatures while hashing non-stop. I think that 80C is perfectly OK for these cards, but if you can run them cooler, all the better.
Note that a lot of GPU failures are caused by VRMs running too hot (most of the time they are much hotter than the GPU core itself). Cards with backplates should do better in this respect, but only slightly.
Happy mining heating!
The reported temperature of the GPU comes from the core itself, and there are usually no temperature sensors on the MOSFETs/VRMs, which always run hotter as they are not directly connected to the heatsink (some cards do have additional sensors there which you can read with GPU-Z). Granted, these modules also have a higher maximum operating temperature than the core itself, but their lifespan depends on the operating temperature and the time spent at those temperatures. If you haven't already, I recommend picking up a cheap infrared thermometer to see for yourself.

A 750 Ti running at 60°C at 70-80% fan speed will still have VRM temperatures around 70-80°C, depending on the design (how much air gets to the parts) and the overclock. That is perfectly fine, but if your core is running at 80-90°C, the temperature of the VRMs/MOSFETs might be so high that they will probably fail within a few years. This site is about motherboards, but it should give you an idea of how VRM lifespan is estimated: http://event.asus.com/mb/5000hrs_vrm/
|
Not your keys, not your coins!
|
|
|
bensam1231
Legendary
Offline
Activity: 1750
Merit: 1024
|
|
May 27, 2015, 07:24:26 PM |
|
80°C for a GTX 980 is a normal operating temperature. I have a similar setup (two EVGA GTX 980s), and with the quark algo and a reasonable overclock (+100, but they are already OC, so I run them at about 1500MHz on the GPU) they sit between 77-80.
Those mining with AMD cards are used to going higher!!!
I've mined at 94C for months ;-) We have destroyed about 16 x Gigabyte 280x OC cards with those temps ... 1 x Gigabyte 750ti OC (powered) has a fan that is shut down due to temps as well ... thank God they have a 3-year warranty ... #crysx

Maybe the 290(x) is different; one of them ran for about a year at 94C and is still fine, overclocking and all :-)

The 290/X series were designed to operate at up to 95C, at which point the fan ramps up to whatever your maximum threshold is, and once it reaches the maximum you set, it starts downclocking to stay under 95C. AMD said they were perfectly alright operating at those temperatures; that is what they were engineered to do. That doesn't apply to the 280/X series though. The perception that it's 'too hot' is based on what people know about other GPUs besides those.
|
I buy private Nvidia miners. Send information and/or inquiries to my PM box.
|
|
|
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
|
|
May 27, 2015, 07:50:14 PM |
|
The 290/X series were designed to operate at up to 95C, at which point the fan ramps up to whatever your maximum threshold is, and once it reaches the maximum you set, it starts downclocking to stay under 95C. AMD said they were perfectly alright operating at those temperatures; that is what they were engineered to do. That doesn't apply to the 280/X series though. The perception that it's 'too hot' is based on what people know about other GPUs besides those.
For gaming but not for 24/7 heavy use.
|
Not your keys, not your coins!
|
|
|
rednoW
Legendary
Offline
Activity: 1510
Merit: 1003
|
|
May 27, 2015, 08:43:19 PM |
|
new numbers from lyra (work still in progress):
gtx980: 1900kh/s gtx780ti: 2400kh/s gtx750ti: 990kh/s
Nice; my gtx750 with the latest sp_ mod can only do 905kh/s (1510/1600). Performance is memory-constrained ...
|
|
|
|
Slava_K
|
|
May 27, 2015, 10:27:05 PM |
|
There was an excess #endif at ccminer.cpp line 480 in the latest commit. Now it is all right.
|
|
|
|
scryptr
Legendary
Offline
Activity: 1797
Merit: 1028
|
|
May 27, 2015, 10:33:30 PM Last edit: May 27, 2015, 10:49:18 PM by scryptr |
|
YAAMP--
I just got a payment from Yaamp for the small remainder of bitcoin that I had held there. The Yaamp bitcointalk thread has a post from Yaamp stating that they are trying to return to active business. --scryptr
EDIT: There are active miners on Yaamp. Fees look lower. --scryptr
|
|
|
|
flipclip
Member
Offline
Activity: 111
Merit: 10
|
|
May 28, 2015, 12:45:33 AM |
|
YAAMP--
I just got a payment from Yaamp for the small remainder of bitcoin that I had held there. The Yaamp bitcointalk thread has a post from Yaamp stating that they are trying to return to active business. --scryptr
EDIT: There are active miners on Yaamp. Fees look lower. --scryptr
They also have a rental area in beta.
|
|
|
|
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
May 28, 2015, 02:45:30 AM |
|
I strongly recommend high fan settings for anyone that can afford the extra noise that this will cause. Modern day GPUs can certainly run very hot without any trouble, heck, the 980 specifications allow for 98C temperatures, however, with cards at full load for prolonged periods of time (aka, mining), higher temperatures will simply cause higher failure rates.
Failure rates vary greatly, and everyone's experience is their own. In any case, with lower average temperatures, one should expect less failures over time. For most of us, I think we tend to replace the hardware (upgrade, new toys, etc) before the GPUs fail definitively - but there's certainly exceptions now and then.
My "hot" 980 runs 80% fan, and the "cold" one 70% fan (both reference cooler designs), and both of them sit at 70C (+/-2) temperatures while hashing non stop. I think that 80C is perfectly ok for these cards, but if you can run them cooler, all the better.
Note that a lot of the GPU failures are caused by VRMs running too hot (most times, they are much hotter than the GPU core itself). Cards with backplates should do better in this respect, but only slightly.
Happy mining heating!
Another reason for high fan settings is that some cards start throttling before the default fan profile ramps up the RPMs. This is especially obvious on my 980 with default clocks. Nvidia has obviously chosen to prioritize noise over peak performance; the card was not configured to run hot for extended periods of time. Nvidia also messed up on Linux by only allowing custom fan profiles on monitor-attached cards. It would be in everyone's interest to remove that restriction so we miners can get top performance while also benefitting from a longer MTBF on cards used for mining.
|
|
|
|
chrysophylax
Legendary
Offline
Activity: 2870
Merit: 1091
--- ChainWorks Industries ---
|
|
May 28, 2015, 03:15:25 AM |
|
I strongly recommend high fan settings for anyone that can afford the extra noise that this will cause. Modern day GPUs can certainly run very hot without any trouble, heck, the 980 specifications allow for 98C temperatures, however, with cards at full load for prolonged periods of time (aka, mining), higher temperatures will simply cause higher failure rates.
Failure rates vary greatly, and everyone's experience is their own. In any case, with lower average temperatures, one should expect less failures over time. For most of us, I think we tend to replace the hardware (upgrade, new toys, etc) before the GPUs fail definitively - but there's certainly exceptions now and then.
My "hot" 980 runs 80% fan, and the "cold" one 70% fan (both reference cooler designs), and both of them sit at 70C (+/-2) temperatures while hashing non stop. I think that 80C is perfectly ok for these cards, but if you can run them cooler, all the better.
Note that a lot of the GPU failures are caused by VRMs running too hot (most times, they are much hotter than the GPU core itself). Cards with backplates should do better in this respect, but only slightly.
Happy mining heating!
Another reason for high fan settings is that some cards start throttling before the default fan profile ramps up the RPMs. This is especially obvious on my 980 with default clocks. Nvidia has obviously chosen to prioritize noise over peak performance; the card was not configured to run hot for extended periods of time. Nvidia also messed up on Linux by only allowing custom fan profiles on monitor-attached cards. It would be in everyone's interest to remove that restriction so we miners can get top performance while also benefitting from a longer MTBF on cards used for mining.

i cant confirm that on our farm (5 or 6 card miners) - as (visually) they all seem to be running the same fan speeds ... but visual estimations are highly inaccurate - so i could be completely way off base here ... i really hope you are wrong about this - as this could lead to catastrophic failure if left too long on the majority of the cards ... #crysx
|
|
|
|
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
|
|
May 28, 2015, 04:11:32 AM |
|
I strongly recommend high fan settings for anyone that can afford the extra noise that this will cause. Modern day GPUs can certainly run very hot without any trouble, heck, the 980 specifications allow for 98C temperatures, however, with cards at full load for prolonged periods of time (aka, mining), higher temperatures will simply cause higher failure rates.
Another reason for high fan settings is that some cards start throttling before the default fan profile ramps up the rpms. This is especially obvious on my 980 with default clocks. Nvidia has obviously chosen to prioritize noise over peak performance. It was not configured to run hot for extended periods of time. Nividia also messed up on Linux by only allowing custom fan profiles on monitor-attached cards. It would be in everyone's interest to remove that restriction so we miners can get top performance while also benefitting from a longer MTBF on cards used for mining. i cant confirm that on our farm ( 5 or 6 card ) miners - as ( visually ) they all seems to be running the same fan speeds ... but visual estimations are highly inaccurate - so i could be completely way off base here ... i really hope you are wrong about this - as this could lead to catastrophic failure if left too long on the majority of the cards ... #crysx Throttling will reduce the potential for damage at the cost of performance. My 980 runs at 80C on default fan profile. The clock fluctuates just above the base rate to maintain that temp. With the fan at 80% the temp fluctuates 4 to 8C lower and the clock is steady near the boost rate. I don't have actual hash rate comparisons but you get the picture. The 750ti, which I believe your farm is mostly made of, seems to run cooler by default so it's the second card in my Linux rig.
|
|
|
|
chrysophylax
Legendary
Offline
Activity: 2870
Merit: 1091
--- ChainWorks Industries ---
|
|
May 28, 2015, 05:08:04 AM |
|
I strongly recommend high fan settings for anyone that can afford the extra noise that this will cause. Modern day GPUs can certainly run very hot without any trouble, heck, the 980 specifications allow for 98C temperatures, however, with cards at full load for prolonged periods of time (aka, mining), higher temperatures will simply cause higher failure rates.
Another reason for high fan settings is that some cards start throttling before the default fan profile ramps up the rpms. This is especially obvious on my 980 with default clocks. Nvidia has obviously chosen to prioritize noise over peak performance. It was not configured to run hot for extended periods of time. Nividia also messed up on Linux by only allowing custom fan profiles on monitor-attached cards. It would be in everyone's interest to remove that restriction so we miners can get top performance while also benefitting from a longer MTBF on cards used for mining. i cant confirm that on our farm ( 5 or 6 card ) miners - as ( visually ) they all seems to be running the same fan speeds ... but visual estimations are highly inaccurate - so i could be completely way off base here ... i really hope you are wrong about this - as this could lead to catastrophic failure if left too long on the majority of the cards ... #crysx Throttling will reduce the potential for damage at the cost of performance. My 980 runs at 80C on default fan profile. The clock fluctuates just above the base rate to maintain that temp. With the fan at 80% the temp fluctuates 4 to 8C lower and the clock is steady near the boost rate. I don't have actual hash rate comparisons but you get the picture. The 750ti, which I believe your farm is mostly made of, seems to run cooler by default so it's the second card in my Linux rig. it is jo ... the 750ti oc lp card runs quite cool - but like you said - probably at the cost of performance ... which really isnt a bad tradeoff ... i am looking at a cooling system ( evaporative ) to test with the cards ... it seems the cards are immune to any moisture being at the temperature they always run at - so im keen to test ... the unit is huge - and runs at a max of 10A ( normal power here @ 240V ) - so it will be a massive power saving ( im talking Amperes as opposed to AUD ) which means the farm can grow by at least another few machines ... 
the air conditioning unit cant cope - and draws SO much power ... so if this evaporative unit works - and works well - we will have an answer to the excess power spillage that the air conditioning system uses ... so the cooler the cards - the better the performance - the longer they last ... at least - thats the theory ... #crysx
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
May 28, 2015, 06:53:25 AM |
|
I have now added --gpu-memclock and --gpu-engine
I just use nvidia-smi, and it seems to fail to adjust the clocks on the 750ti's but reports no problems on the gtx 970 (if you set the correct speeds). However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)
If anyone can make it work there is a commandline tool available here that you can test:
C:\\Progra~1\\NVIDIA~1\\NVSMI\\nvidia-smi
|
|
|
|
rednoW
Legendary
Offline
Activity: 1510
Merit: 1003
|
|
May 28, 2015, 07:54:35 AM |
|
I have now added --gpu-memclock and --gpu-engine
I just use nvidia-smi, and it seems to fail to adjust the clocks on the 750ti's but reports no problems on the gtx 970 (if you set the correct speeds). However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)
If anyone can make it work there is a commandline tool available here that you can test:
"Supported products: - Full Support - All Tesla products, starting with the Fermi architecture - All Quadro products, starting with the Fermi architecture - All GRID products, starting with the Kepler architecture - GeForce Titan products, starting with the Kepler architecture - Limited Support - All Geforce products, starting with the Fermi architecture " This tool doesn't want to give full control for simple geforce cards. I think such programs as gpu-z or nvidia inspector or msi afterburner have special driver communication hacks that are not available as open source ((( Some more: "reading various sensors of graphics cards isn't as easy as people might imagine it to be. In GPU-Z's case, it needs to read and write to the I2C bus via MMIO(Memory Mapped Input-Output) on the graphics card, this can only be achieved through what is called a kernel-mode driver on Microsoft Windows operating systems. And that is exactly how GPU-Z does it, but have you ever wondered if the way GPU-Z is doing it is safe? The driver that GPU-Z uses is a digitally signed kernel-mode driver, so it can run with DSEO enabled without asking the user for permission, it can access physical memory at a whim. You use DeviceIoctl, you specify an address in physical memory and size and the driver returns a pointer to that address, now you can fiddle with kernel space memory however you like."
|
|
|
|
chrysophylax
Legendary
Offline
Activity: 2870
Merit: 1091
--- ChainWorks Industries ---
|
|
May 28, 2015, 08:32:36 AM |
|
I have now added --gpu-memclock and --gpu-engine
I just use nvidia-smi, and it seems to fail to adjust the clocks on the 750ti's but reports no problems on the gtx 970 (if you set the correct speeds). However, when I monitor the GPU in GPU-Z the clocks are not changed... (might be a permission issue or something)
If anyone can make it work there is a commandline tool available here that you can test:
C:\\Progra~1\\NVIDIA~1\\NVSMI\\nvidia-smi
maybe this is why we are having so much trouble adjusting the clocks on the 750ti oc cards here (regardless of whether it is linux or windows)? ... it might be wise to invest in a higher-level card ... what would you suggest for the higher-level card? ... even if all we do is test with it ... #crysx
|
|
|
|
rednoW
Legendary
Offline
Activity: 1510
Merit: 1003
|
|
May 28, 2015, 08:38:30 AM |
|
what would you suggest for the higher level card? ... even if all we do is test with it ...
Titan X )))
|
|
|
|
Epsylon3
Legendary
Offline
Activity: 1484
Merit: 1082
ccminer/cpuminer developer
|
|
May 28, 2015, 09:52:18 AM |
|
you made a bunch of code mistakes again:
case '1070' should be case 1070
-ac --applications-clocks= Specifies <memory,graphics> clocks
(not the reverse)
indeed, nvidia-smi was updated (on linux too, in 346.72), but it's maybe only a first step and not fully implemented by nvidia
|
|
|
|
rednoW
Legendary
Offline
Activity: 1510
Merit: 1003
|
|
May 28, 2015, 10:03:47 AM |
|
I think all this stuff can be useful in combination: when you can monitor the GPU temp and adjust engine/memory frequencies and fan speed together. For now you are just invoking an external executable to set clocks, which can be done easily in a .bat file before the ccminer launch. In its current state I think this part of your work is useless (((
|
|
|
|
sp_ (OP)
Legendary
Offline
Activity: 2954
Merit: 1087
Team Black developer
|
|
May 28, 2015, 10:24:49 AM |
|
Calling the API directly is currently not working on win32; that's why I use the command line. The code was pretty bugged: late-night quick changes (after work, a 10-hour workday in C#). In the next version I will retrieve the supported clocks and select the closest match. A memclock of 1504 is supported but not 1505; if you specify --gpu-memspeed 1505 it will crash. I want to force it to 1504, which is supported. This is something you cannot do from the command line.
|
|
|
|
|