1  Other / Off-topic / Radeon VII with Ryzen 2700X in 12 PC games at 2560x1440p on: February 10, 2019, 03:06:30 PM
For this testing, only official benchmarks produced by the Game Developers will be used, and only the Game Developers' official tools for counting FPS.

The official benchmarking tools that Game Developers release will show less difference between GPUs, because Game Developers include representative cross-sections of their PC game. Most PC games have entire segments of maps that are less demanding for GPUs, which wipe out a portion of the FPS differences between GPUs at similar performance points.

At 2560x1440p, the official benchmark tools released by Game Developers will show around 4% lower performance differences between GPUs at similar performance levels, compared to a 60-second test of a tiny part of a game that is more demanding of a GPU. For example, Hitman 2 has 7 maps and only 2 of those maps are demanding of GPUs: a) Miami; b) Mumbai. 71% of this PC game is played on maps that will show very low differences between GPUs at a similar performance level.

Next, of the 25 PC game benchmarks AMD Labs released at 3840x2160p, Fallout 76 (68.35%), Doom 2016 (33.18%), Battlefield One (35.98%), Battlefield V (33.19%) and The Witcher 3 (33.82%) cannot be benchmarked here, as they do not have official benchmark tools from their respective Game Developers; this will reduce the FPS delta of the Radeon VII over my Red Devil Vega 64 by 3.3%.

Therefore, do note that 5 of the biggest FPS increases to be had from buying the Radeon VII are not included in the results shown below.

Fan Noise
The Radeon VII is quieter than my Reference Blower RX Vega 56 and massively quieter than my Powercolor Red Devil Vega 64. There is nothing really to report on fan noise, other than that it is much quieter than the previous generations I own.

Radeon Drivers and Windows 10 1809 Version
Nothing to report on this either, zero game crashes to report and Radeon VII was completely stable in everything I played or tested.

Esports reductions in Blurriness or Fuzziness
This has improved again: as part of AMD's sponsorship of Fnatic (a leading professional esports team), the Radeon VII has another bump to the sharpness and clarity of rendered frames when gaming in rapidly changing FPS scenarios. Testing in Forza Horizon 4, there was a noticeable increase in final rendered frame clarity at 2560x1440p over the Red Devil Vega 64. I also tested Star Wars Battlefront 2 and there was another bump in clarity in the fast-paced competitive multiplayer.

Power Consumption
Powercolor Red Devil Vega 64
Adrenalin 19.2.1: 335 watts averaged out, +/- 10 watts depending on the PC game engine.
105% performance over the Reference Blower Vega 64

Sapphire Radeon VII (AMD manufactured)
Adrenalin 19.2.1: 255 watts averaged out, +/- 10 watts depending on the PC game engine.

Testing with a Watt-Meter from the Plug Socket
A total of 10 PC games were measured at the wall, and power draw reductions of around 80 watts were observed in every single PC game, using the default Power Plans for both GPUs.

Examples:
Strange Brigade Benchmark - every option set to Ultra
Red Devil Vega 64 Total System Power Draw =501watts
Radeon VII Total System Power Draw =390watts
110watt power draw reduction during the Strange Brigade Benchmark.

Shadow of the Tomb Raider - Highest Preset.
Red Devil Vega 64 Total System Power Draw =484watts
Radeon VII Total System Power Draw =404watts
80watt power draw reduction during the Shadow of Tomb Raider Benchmark.

FPS Result Performance Increases at 2560x1440p 12 Games

Ashes of the Singularity Vulkan Crazy Preset
Red Devil Vega 64 =54.9FPS (100%)
Radeon VII =62.8FPS (114.4%)

AC Odyssey highest Preset
Red Devil Vega 64 =50FPS (100%)
Radeon VII =60FPS (120%)

Deus EX Mankind DX12 Divided Ultra
Red Devil Vega 64 =59.1FPS (100%)
Radeon VII =73.9FPS (125%)

Far Cry 5 Ultra TAA
Red Devil Vega 64 =86FPS (100%)
Radeon VII =102FPS (118.6%)

Forza Horizon 4 Ultra
Red Devil Vega 64 =100FPS (100%)
Radeon VII =111.3FPS (111.3%)

Hitman DX12 Ultra
Red Devil Vega 64 =111.75FPS (100%)
Radeon VII =128.4FPS (115.9%)

Middle-Earth Shadow War HD Textures 8.44GBs
Red Devil Vega 64 =74FPS (100%)
Radeon VII =87FPS (117.5%)

Rise of the Tomb Raider DX12 Highest Preset
Red Devil Vega 64 =88.47FPS (100%)
Radeon VII =107.3FPS (121.3%)

Rainbow Six Siege Ultra 100% TAA and 100% Render Scaling
Red Devil Vega 64 =103.1FPS (100%)
Radeon VII =128.7FPS (124.8%)

Shadow of Tomb Raider DX12 Highest Preset
Red Devil Vega 64 =67FPS (100%)
Radeon VII =82FPS (122.4%)

Strange Brigade DX12 Ultra
Red Devil Vega 64 =106FPS (100%)
Radeon VII =138FPS (130.2%)

The Division DX12 Highest Preset
Red Devil Vega 64 =85.3FPS (100%)
Radeon VII =105.3FPS (123.4%)

Over the 12-game average, the Radeon VII offers a 20.4% performance increase over the Powercolor Red Devil Vega 64. Consequently, the Radeon VII's uplift at 2560x1440p is around 3.2% lower than in its results at 3840x2160p posted by me: https://www.reddit.com/r/Amd/comments/apgt62/radeon_vii_13_game_benchmark_result_at_3840x2160p/
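As a quick sanity check, averaging the per-game uplifts listed above: (14.4 + 20 + 25 + 18.6 + 11.3 + 15.9 + 17.5 + 21.3 + 24.8 + 22.4 + 30.2 + 23.4) / 12 = 244.8 / 12 ≈ 20.4%.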

At 2560x1440p, the Radeon VII offers a 25.4% performance increase over a Reference Blower RX Vega 64. When the 5 PC games that show the greatest performance-increase deltas are added back into the figures, this difference rises to 28.7% over the Reference Blower RX Vega 64. In the most demanding segments of a PC game, the Radeon VII will offer an extra 4%, or 32.7% more performance than a Reference Blower Vega 64.
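In other words: 25.4% + 3.3% (the five missing titles) = 28.7%, and 28.7% + 4% (the most demanding map segments) = 32.7%.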

Performance is as expected, slightly down at 2560x1440p versus 3840x2160p, but the change in performance is relatively minor. More importantly, the esports reductions in blurriness and fuzziness are certainly welcome and make the Radeon VII a much more attractive purchase for competitively orientated gamers or gamers who love car driving games.

Finally,
I've been playing The Division 2 Beta at Ultra DX12 Settings and VRAM usage is 14GBs at 2560x1440p.
I can confirm Call Of Duty Black Ops 4 Multiplayer is using 15GBs VRAM usage at 2560x1440p.
I can confirm, Apex Legends, when memory slider is moved to MAX position is using 15GBs of VRAM at 2560x1440p.

So, gamers have not had long to wait for the benefits of that 16GBs of VRAM to be seen at this resolution.

Notes:
Hitman 2's in-game benchmark throws away the first 5 seconds of the Simulation; there is an FPS cap of 45FPS at the beginning of the Simulation as the graphics engine initializes assets for Vega GPUs. However, the Radeon VII was not recognised as a Vega GPU by the game's engine at 2560x1440p and the Simulation did not apply the 5-second deduction. Therefore, it was excluded until a patch fixes the bug in the Simulation at 2560x1440p.
2  Other / Off-topic / Radeon VII 3840x2160p 13 PC Game Benchmarks on: February 09, 2019, 12:05:47 PM
For this testing, only PC games I own with official in-game benchmarks will be used; only the official Game Developers' FPS counters will be used.

The in-game benchmarks will show less difference between GPUs because Game Developers include representative cross-sections of their PC game. Most PC games have entire segments of maps that are less demanding for GPUs, which wipe out a portion of the FPS differences between GPUs at similar performance points, compared to the most demanding parts of PC games.

For example, Hitman 2 has 7 maps and only 2 maps are demanding of GPUs: a) Miami; b) Mumbai. 71% of this game is played on maps that will show low differences between GPUs at a similar performance level.

AMD Labs released FPS results for 25 PC games at 3840x2160p. The average difference between the RX Vega 64 LC and the Radeon VII was 28.59%. The switch to in-game benchmarks will show around a 4% lower difference between the two GPUs.

New target Metric=24.59%

Next, Fallout 76 (68.35%), Doom 2016 (33.18%), Battlefield One (35.98%), Battlefield V (33.19%) and The Witcher 3 (33.82%) cannot be benchmarked, as they do not have official benchmark tools provided by their Game Developers; this will reduce the FPS difference by a further 3.3%.

New Target Metric=21.3%

Finally, my  Powercolor Red Devil Vega 64 averages out to be 2.7% slower than the RX Vega 64 LC model on the stock Power Plan used by AMD Labs.

New Target Metric=24%

Therefore, using official benchmark tools, we should expect to see around a 24% performance increase between these two GPUs:
1) Powercolor Red Devil RX Vega 64 100%
2) Radeon VII 124%
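Working the chain through as a check: 28.59% - 4% (the switch to in-game benchmarks) = 24.59%; 24.59% - 3.3% (the five missing titles) ≈ 21.3%; 21.3% + 2.7% (my Red Devil being that much slower than the RX Vega 64 LC) ≈ 24%.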

Test System
Ryzen 2700X PBO - Powerdraw allowed up to 150watts.
DDR4-3421-CL14 Low Latency Subtimings at 1.47volts.
7 PC case fans and one 240mm AIO for the CPU.
Gigabyte X470 Aorus Gaming 7 WiFi (BCLK 100.7) with onboard audio used.
500GB SSD.
Corsair 850Watt Platinum PSU 94% Efficiency.

GPUs
Powercolor Red Devil Vega 64
Adrenalin 19.2.1 335watts Averaged Out -/+ 10watts depending on the PC game Engine.
Performance 106.3% over Reference Blower Vega 64

Sapphire Radeon VII (AMD manufactured)
Adrenalin 19.2.1 255watts Averaged Out -/+ 10watts depending on the PC game Engine.

Testing with a Watt-Meter from the Plug Socket
Default Power Plans for both GPUs.

Strange Brigade Benchmark - every option set to Ultra
Red Devil Vega 64 Total System Power Draw =501watts
Radeon VII Total System Power Draw =390watts
110watt power draw reduction during the Strange Brigade Benchmark.

Shadow of the Tomb Raider - Highest Preset.
Red Devil Vega 64 Total System Power Draw =484watts
Radeon VII Total System Power Draw =404watts
80watt power draw reduction during the Shadow of Tomb Raider Benchmark.

A total of 10 PC games were measured at the wall, and power draw reductions were observed in every single PC game, averaging out at around 80 watts.

FPS Result Performance Increase at 3840x2160p 13 Games

Ashes of the Singularity Vulkan Crazy Preset
Red Devil Vega 64 =49FPS (100%)
Radeon VII =58.3FPS (119%)

AC Odyssey highest Preset (Cloudless Day)
Red Devil Vega 64 =34FPS (100%)
Radeon VII =41FPS (120.5%)

Deus EX Mankind DX12 Divided Ultra
Red Devil Vega 64 =31.3FPS (100%)
Radeon VII =39.6FPS (126.5%)

Far Cry 5 Ultra TAA
Red Devil Vega 64 =47FPS (100%)
Radeon VII =60FPS (127.5%)

Forza Horizon 4 Ultra
Red Devil Vega 64 =66.7FPS (100%)
Radeon VII =75.9FPS (114%)

Hitman DX12 Ultra
Red Devil Vega 64 =64.30FPS (100%)
Radeon VII =78.15FPS (121.5%)

Hitman 2 Ultra Mumbai Benchmark
Red Devil Vega 64 =50.5FPS (100%)
Radeon VII =62.47FPS (124%)

Middle-Earth Shadow War HD Textures 8.8GBs
Red Devil Vega 64 =43FPS (100%)
Radeon VII =53FPS (123%)

Rise of the Tomb Raider DX12 Highest Preset
Red Devil Vega 64 =49.95FPS (100%)
Radeon VII =58.88FPS (118%)

Rainbow Six Siege Ultra 100% TAA and 100% Render Scaling
Red Devil Vega 64 =51FPS (100%)
Radeon VII =65FPS (127.5%)

Shadow of Tomb Raider DX12 Highest Preset
Red Devil Vega 64 =36FPS (100%)
Radeon VII =45FPS (125%)

Strange Brigade DX12 Ultra
Red Devil Vega 64 =64FPS (100%)
Radeon VII =87FPS (136%)

The Division DX12 Highest Preset
Red Devil Vega 64 =48.1FPS (100%)
Radeon VII =59.9FPS (124.5%)

The Performance Increase over the 13 games is 23.6%. Therefore, no real surprise in the performance delta for the Radeon VII, but a big surprise in power consumption.

Returning to power consumption, Radeon VII draws identical power to a GTX 1080TI Founders Edition for PC gaming and about 30watts more than the RTX 2080 Founders Edition.

The 300watt TDP rating appears to be a compute/OpenCL productivity power limit for things like Blender, Luxmark, etc.

Ryzen 2700X plus Radeon VII Comparative analysis

Far Cry 5 Ultra TAA (RTX 2080 FE with i9-7920X @ 4.4GHz all-core)
RTX 2080 FE =56FPS
https://www.youtube.com/watch?v=b03PqtYUp-0
Far Cry 5 Ultra TAA (Ryzen 2700X PBO)
Radeon VII =60FPS
The performance increase is 7.1% in the Official Benchmark for Far Cry 5.

Forza Horizon 4 Ultra (Intel I7 6700K at 4.7GHz with RTX 2080 OC)
RTX 2080 OC =69.9FPS
https://www.youtube.com/watch?v=CDoepsPwFlA
Forza Horizon 4 Ultra (Ryzen 2700X PBO)
Radeon VII =75.9FPS
The performance increase is 8.6% in the Official Benchmark for Forza Horizon 4.

Strange Brigade all settings at Ultra (Intel Core i7 8700K at 5GHz)
Asus Strix RTX 2080 O8G Gaming =67FPS
https://www.youtube.com/watch?v=EIeFoG4k_IY
Strange Brigade all settings at Ultra (Ryzen 2700X PBO)
Radeon VII =87FPS
The performance increase is 29.8% in the Official Benchmark for Strange Brigade.

As can be seen, for owners of Ryzen CPUs, buying the Radeon VII is a bit of a no-brainer decision; as expected, the Radeon drivers perform better with a Ryzen CPU installed than with their Intel CPU counterparts at 3840x2160p.

Notes on testing:
Hitman 2's in-game benchmark throws away the first 5 seconds of the Simulation; there is an FPS cap of 45FPS at the beginning of the Simulation as the graphics engine initializes assets for Vega GPUs. However, the graphics engine can take up to 8 seconds to initialize the assets with the 45FPS frame cap in place for Vega GPUs. Therefore, the Game Developers should increase the initial discard window on FPS results to 10 seconds in the Simulation to give the final pop-up result more consistency between runs of the Simulation.
3  Alternate cryptocurrencies / Mining (Altcoins) / Youtubers blaming crypto-currency miners for gaming shortage LIE on: March 04, 2018, 05:14:35 PM
Youtubers and some internet websites have been blaming crypto-currency miners for the shortage of gaming GPUs.

This is a lie, because the real cause of GPU shortages is a decrease in GPU shipments from Nvidia, which has had (historically) 70% of market share for GPU shipments.  

It is a well-known fact that Nvidia made significant inroads into AMD's market dominance in crypto-currency sales with the launch of the Pascal GPUs.

Before the launch of the Pascal GPUs, Nvidia's gaming sales were around $800 million per quarter in 2016. But from Q3 2016, Nvidia's Pascal sales to crypto-currency miners increased by $300 million per quarter. Most people on the Bitcointalk forums will know the Pascal GPUs were a big hit with crypto-currency miners. And Nvidia's gaming GPU sales rose from $800 million per quarter to over $1,244 million per quarter after its mainstream launches.

In fact, I remember the retail prices of RX 480s and RX 470s slumping in the latter half of 2016 and early 2017: the RX 480 8GB was selling for as little as $240 and the RX 480 4GB was around $210. Then, a memory strap mod was discovered for the RX 480s and RX 470s in 2017, which raised the mining productivity of these GPUs by 13.2%. Also, a number of crypto-currency miners who had bought Pascal GPUs experienced unusually high failure rates on their Pascal GPUs. This led people to switch back to AMD GPUs in Q1 and Q2 2017.

Finally, when AMD could not meet crypto-currency miners' demand for GPUs in Q3 and Q4, miners started buying Nvidia GPUs again. However, this was offset by a huge sell-off of defective GPU cores by Nvidia to large crypto-currency miners. These defective GPU cores were sent over to AIB partners, who made them into mining cards (usually lacking display outputs) and gave them 3-month warranties. The proceeds from these defective GPU core sales to crypto-currency miners were credited to Nvidia's Q4 gaming GPU sales, which rose from $1,251 million in Q3 to $1,739 million in Q4. Yep, large crypto-currency miners bought about $350 million worth of defective gaming GPU cores from Nvidia in 2017, once the extra sales from the launch of the GTX 1070 Ti are subtracted.


This is the general background of crypto-currency miners' purchasing history from Nvidia and AMD, and it does not indicate a causal effect for gaming GPU shortages, because crypto-currency miners' buying habits have been well known for many years.

This brings us to the real reason for the GPU shortages in 2017 and this year: Nvidia reduced its GPU shipments in 2017 by over 9%.

Jon Peddie Research (https://www.jonpeddie.com/):

Total GPU shipments down 4.8% year on year.

Desktop GPU shipments down 2% year on year.

Notebook GPU shipments down 7% year on year.

AMD GPU shipments rose by 8.1% year on year.

2016 GPU market share of GPU shipments: AMD 29.5%, Nvidia 70.5%.
2017 GPU market share of GPU shipments: AMD 33.7%, Nvidia 66.3%.

Nvidia has increased its datacentre sales from $250 million per quarter at the end of 2016 to $600 million per quarter by the end of 2017. Datacentres require bigger GPU cores, which comes at the cost of lower yields per silicon wafer, as these GPU cores are very large. To meet this demand, Nvidia diverted more of its foundry production to making them. This meant less foundry production for gaming and notebook GPUs, which meant a reduction in the total GPU shipments made in 2017.

Next, shipping fewer gaming GPUs should reduce gaming GPU turnover, but this did not happen for Nvidia because it launched very expensive GPUs like the GTX 1080 Ti and GTX Titan Xp. This allowed Nvidia to keep gaming sales turnover more or less flat whilst shipping fewer gaming GPUs. But again, the GTX 1080 Ti and Titan Xp are bigger GPU cores, which means a reduction in net GPU shipments for the year, because yield per wafer drops as the size of the GPU core increases.

And, datacentre demand for super-sized GPU cores has only increased in 2018 – so this will see a further reduction in GPU shipments for Q1 and Q2 of 2018.

As can be seen, the GPU shortages last year and this year are far more about Nvidia reducing its total shipments of gaming GPUs and its move to making more big-ticket gaming GPUs than about crypto-currency miners having periodic spikes in their demand for extra GPUs.

However, when you look at Youtubers' and internet websites' coverage of the shortages of gaming GPUs, the fact that Nvidia has been reducing its GPU shipments for 14 months has never been mentioned at all. Instead, crypto-currency miners have been blamed for it all.
4  Alternate cryptocurrencies / Mining (Altcoins) / HOW TO SET UP OVERCLOCKING AND FAN CONTROL ON UBUNTU 16.04 FOR NVIDIA CARDS on: December 12, 2016, 05:05:25 PM
This is a quick guide for setting up multiple Nvidia GPUs on Ubuntu 16.04LTS and 17.10LTS with the full desktop.

Enabling all GPUs with overclocking and fan control.

CURRENTLY, THIS GUIDE REQUIRES (UBUNTU 16.04.4LTS - LATEST VERSION) THAT ONE NVIDIA GPU BE CONNECTED TO A MONITOR.

IT IS NOW SUPER-SIMPLE.

These are the steps:

1) install Ubuntu 16.04LTS or 17.04LTS

2) Update the Operating System via Software Centre. REBOOT

3) Go to Additional Drivers and switch to the CPU drivers, if not automatically loaded. (If you have problems with the CPU drivers, switch back to the Ubuntu default.) REBOOT

4) Go to Additional Drivers and switch to the Nvidia drivers - I recommend you use the default 378 driver optimised for the Ubuntu OS. REBOOT

5) Open a Terminal and enter each line:

sudo update-grub

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

(You can run with other coolbits settings; 31 is frequently used as well.)
 
REBOOT

Fan control and overclocking are now enabled.
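For reference (this is my reading of the NVIDIA driver README, so double-check it against your driver version), --cool-bits is a bitmask: 4 enables manual fan control, 8 enables the clock offsets, and 16 enables overvoltage, so 28 = 4 + 8 + 16. A value of 31 simply sets every bit, including the two legacy overclocking bits (1 and 2).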

To finish, you will create a startup .sh file for each GPU so the overclocks and fan speeds are loaded when you log into Ubuntu 16.04LTS.

Create some empty documents on Ubuntu Desktop and call them whatever you like. Make sure the filename has .sh at the end.

Paste in:

#!/bin/bash

nvidia-settings -a '[gpu:0]/GPUGraphicsClockOffset[3]=100'

nvidia-settings -a '[gpu:0]/GPUMemoryTransferRateOffset[3]=100'

nvidia-settings -a '[gpu:0]/GPUFanControlState=1'

nvidia-settings -a '[fan:0]/GPUTargetFanSpeed=80'

Amend the clocks (GPU and Memory) and fan speeds to whatever you're comfortable with. Make separate documents for each GPU by changing the numbers for each card.

Save and open properties and make each file "executable".

Go to Startup Applications and ADD each .sh to the programs you run when you log in.
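For example, a startup script for a second card might look like this (a sketch: I'm assuming the second GPU shows up as gpu:1 and its fan as fan:1 - check with "nvidia-settings -q gpus" - and the 100/100/80 values are just the same placeholder offsets and fan speed as above):

#!/bin/bash
# second card: +100 core offset, +100 memory transfer rate offset, fan locked at 80%
nvidia-settings -a '[gpu:1]/GPUGraphicsClockOffset[3]=100'
nvidia-settings -a '[gpu:1]/GPUMemoryTransferRateOffset[3]=100'
nvidia-settings -a '[gpu:1]/GPUFanControlState=1'
nvidia-settings -a '[fan:1]/GPUTargetFanSpeed=80'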

--------------------------------------------------------------------------If you have problems setting nvidia-xconfig on multi-GPU rigs, try this work-around---------------------------

Firstly, log in and enable overclocking on one GPU:

sudo nvidia-xconfig -a --cool-bits=28

Log out and then log back in, then enable overclocking on all the other identical GPUs:

sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Then log out and log back in and all the GPUs will have overclocking and fan control enabled.


-------------------------------------------------------------------------Reducing Watts Used By the GPUs---------------------------------------------------------------------------------------------------

Set Nvidia Drivers to persistent state (you must be in root - open terminal and enter "sudo -i"):

nvidia-smi -pm 1

First, ask nvidia-smi what the max and min power limits are:

nvidia-smi -i 0 -q -d POWER

This will show MAX Power and MIN POWER allowed.

GTX 750TI as an example:
MIN POWER 30 W
MAX POWER 38.5 W

Then, you can reduce the watts to the MIN POWER allowed:

sudo nvidia-smi -pl 30

This gives you a net reduction of 22%.
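Using the GTX 750TI numbers above: (38.5 - 30) / 38.5 ≈ 22%.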

Tested on Ubuntu, with max GPU load via running Unigine Heaven 4 Benchmark at MIN POWER.

For rigs with identical GPUs, you can set all power watts for all the cards at the same time with:
nvidia-smi -pm 1
sudo nvidia-smi -pl 30
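If the rig mixes different GPU models, you can instead set each card individually by index (a sketch - the wattage values here are just examples, so use the MIN/MAX limits nvidia-smi reports for each of your own cards):

nvidia-smi -pm 1
sudo nvidia-smi -i 0 -pl 30
sudo nvidia-smi -i 1 -pl 38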

---------------------------------------------------------------------Losing Share Efficiency after OS Security Updates-----------------------------------------------------------------------

Switch to the Ubuntu stock non-Nvidia drivers - Reboot.
On the next boot-up, switch back to the 378 drivers - Reboot.
Re-enable overclocking and fan control.
Share efficiency will be restored to the expected rates.
 

-------------------------------------------------------------------------PSU Capacitor Ageing--------------------------------------------------------------------------------------------------------------

The principal effect of this will be a loss of efficiency. A PSU running at 88% efficiency can, after 5 years, run at a lower efficiency, closer to 78%.

Naturally, this will lead to more wasted watts; depending on your location's electricity pricing, buying a new PSU could be a worthwhile undertaking.

An extra 80 watts wasted on an 800-watt load works out to be about $84 per year (at $0.12 per kilowatt-hour (kWh)).
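The arithmetic behind that figure: 80 watts x 24 hours x 365 days ≈ 700.8 kWh per year, and 700.8 kWh x $0.12/kWh ≈ $84.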


----------------------------------------------------------Old Method as Reference Material and No Longer Needed-----------------------------------------------------------------------------

Install Ubuntu 16.04 with enabling software update options for Ubuntu Development team and third parties.

On reboot, after installation, open up Ubuntu Software - update the OS via Ubuntu Software (important to use the OS tool and not a terminal) and reboot.

Next, go to Software & Updates, then Additional Drivers, and install the Nvidia 367.57 drivers - these include extra tweaks from the Ubuntu development team for max GPU performance. Unfortunately, they do not allow overclocking, but you will fix this later on.

Next, go to Search Your Computer and bring up the Nvidia Control Center. Go to X Server Configuration and save the configuration file.

Next, open a terminal and enter the following:

sudo update-grub
sudo nvidia-xconfig -a --cool-bits=28 --allow-empty-initial-configuration

Reboot

This will enable all GPUs with screens and fan control on all GPUs.

Now, let's go and get the latest drivers from Nvidia and Cuda 8 from their website, or whatever Cuda version you need for your mining software. Save these downloads to the default Downloads folder.

Next, you will disable the Nvidia 367.57 driver to install the latest Nvidia drivers by going back into Software & Updates, then Additional Drivers, and selecting the Nouveau display drivers. Apply changes.

Reboot


--------------------------------INSTALL DRIVERS VIA ADDITIONAL DRIVERS UBUNTU 16.04LTS----------------------------
Install the latest drivers with the following instructions:

Press Control Alt F2 to get into non-desktop display.

Log in.

Switch off x-server with:

sudo service lightdm stop

Go to downloads folder with:

cd ~/Downloads

ls

This will display driver name and run with:

sudo sh ./<the NVIDIA driver file name listed by ls>

There will be two error messages, but select continue installation and say yes at the prompts.

Then, switch the x-server back on with:

sudo service lightdm start

Reboot
---------------------------------------------------------------------------------------------------------------------------------------------------
Now, when you go back to the Nvidia Control Panel, it will show that overclocking is enabled on all GPUs.

Next, install Cuda 8 by opening a terminal in the Downloads folder:

sudo sh cuda_8.0.44_linux.run

Press "Control C" to fast-forward to the end of the EULA and "accept".

Say "No" to installing the drivers (trying to install these drivers when you have active Nvidia drivers will wreck the OS) and yes to the toolkit, link and samples.
5  Alternate cryptocurrencies / Mining (Altcoins) / Ethereum GPU Mining Optimisations for Pools on: August 23, 2016, 07:27:11 AM
Many GPU rig owners have been reporting getting 10%-25% fewer shares or payouts for mining Dagger (Ethereum) at the pools when compared to the mining calculators. I decided more than 2 weeks ago to research this topic for the benefit of the GPU rig owners' community.

For these benchmarks – I will be using:

Claymore Dual Miner 6.2 Windows and Linux:
https://bitcointalk.org/index.php?topic=1433925.0

Modified Titan Nvidia Drivers, to be used for 1070s and 980s:
https://drive.google.com/drive/folders/0B69wv2iqszefQVRfOEhWS0FCdUE
These drivers are no longer recommended, as they can cause the PC to crash on restarts - switch to latest drivers

Latest AMD Drivers:
Crimson Edition 16.8.2 Hotfix

OS:
Windows 7 64Bit

-----------------------------Aimed to remove all causes of lost shares at pools------------------

Test settings and Hardware setups:

AMD RIG
1) Phenom 555 dual-core processor, 1 core disabled and overclocked to 3.5GHz.
2) Asus Crosshair IV Formula motherboard
3) Antec PSU 850watts
4) 8GB DDR3 running at 1066MHz (faulty memory controller on the Phenom, so it can only run memory at the lower speed).


INTEL RIG
1) Sandy Bridge 2500K quad-core, 2 cores disabled and set to 3.3GHz
2) MSI Z77A-GD65
3) EVGA PSU 600watts
4) 8GB DDR3 running at 1600MHz (memory speed does not really matter with this processor)

Firstly, running the CPUs at 3.5GHz and 3.3GHz does result in more shares accepted per hour (4% more than with the Intel and AMD power-saving modes) when dual mining or solo mining at a pool, but above 3.5GHz there are no extra shares found per hour. By disabling 1 core and 2 cores respectively in the motherboard BIOS, you can bring the electricity usage back down to an acceptable level.

Windows 7 OS - special settings:
1) Aero graphics disabled and put into High Contrast Black.
2) "Desktop Manager" disabled via "Services". It is debatable whether this is good for long-term OS stability, but it helps in benchmarking.
3) “Superfetch” disabled via “Services”; "Superfetch" destroys SSDs performance figures over time.
4) AMD Hotfix patch for FX CPU installed for the Phenom CPU

AMD GPUs

1) Earliest iteration of the R9 290 (ASUS) - overclocked to 1100MHz core and 1300MHz memory. Power limit 20% is the sweet spot for mining ETH and SC.

*A power limit above or below 20% produces fewer shares accepted per hour.
*Can be overclocked to 1150MHz with +50mV, but produces fewer shares per hour than when no extra voltage is added, for solo mining.
*For dual mining it can be overclocked to 1130MHz with +50mV, but produces fewer shares per hour than when no extra voltage is added, for dual mining.

This GPU does not like extra volts and will hash lower when overvolted and when undervolted.

2) XFX Radeon R7 370 2GB Double Dissipation - overclocked to 1180MHz and memory overclocked to 1500MHz. Power limit 20% is the sweet spot for mining ETH and SC.

No option in MSI Afterburner to change the power limit above 20% or add extra millivolts.


Nvidia GPUs

Go to Nvidia Panel - Scroll down to "Power Management Mode" and change to "Prefer Maximum Performance" and click "apply". The default "optimal" setting attempts to save a few watts, but causes rejected shares in dual mining mode. Updated 31.08.16.

A matched pair of MSI GTX 1070 8GB Aero OC Edition graphics cards.

1) Fans set to 90%.
2) It is recommended that you go into "Device Manager" and disable all the Nvidia HDMI drivers.


Currently, these MSI cards shipped with Micron memory modules; however, this is a lottery because manufacturers ship the same cards with either Micron or Samsung memory. BIOS updates have been released for Micron-equipped Nvidia cards by Palit, Gainward and EVGA, but MSI is still working on its BIOS update release.

More info here:
http://www.guru3d.com/news-story/manufacturers-roll-out-firmware-updates-for-geforce-gtx-1070-due-to-memory-issue.html

Interesting Discovery 1:

When the 1070s were put onto the AMD RIG, they found more than 11% (it can be as high as 20%) fewer ETH shares per hour than when they were on the INTEL RIG.

When the AMD GPUs were placed into the INTEL RIG, they found 8.5% fewer ETH shares per hour than when they were in the AMD RIG.

Yep, Nvidia compute is primarily being coded for Intel rigs and AMD OpenCL is primarily being coded for AMD rigs. Not having the right chipset/CPU combination on the rig for your GPUs is going to lose you a lot of ETH shares per hour at the pool.


---------------------------------Claymore Dual Miner Fees----------------------------------------------

3% better hash-rate than the public releases.

Fees paid to Claymore:

1% for solo pool mining ETH.
2% for dual mining ETH and SC or ETH and DECRED.

Consequently, you won’t actually notice the fees, when mining at a pool, because the Dualminer is 3% faster for solo mining ETH at the pools.

Special Setting for AMD GPUs – set all GPUs to slow mode or “-etha 1” in the bat.

Benchmark Dual Mining ETH and SC versus Solo Mining for AMD GPUs 200 and 300 series.

Dual Mining ETH and SC sweet spot setting is: -dcri 22

Solo mining ETH is less efficient than dual mining ETH and SC: the efficiency of shares found per hour is better by 2.5% when dual mining.

Yes, you read that correctly everybody - dual mining ETH and SC gets you more ETH at the pool, than solo mining ETH by itself.

Why would this be the case: these GPUs were built for gaming loads - give the GPUs the correct load and you get more efficiency and more ETH at pool.

Above a setting of -dcri 22 for dual mining - you lose a lot of ETH at the pool.

At a setting of -dcri 40, the reduction in ETH hashrate is 8%.

However, there is a 14.6% drop in shares found per hour at the pool.

The net disadvantage of this higher SC setting is therefore -6.6%, compared to the reduction in ETH hashrate.

Therefore, there is not really much reason not to dual mine, plus there is an extra 2.5% ETH at the pool each day.
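For reference, a minimal start.bat for the AMD cards along the lines above might look like this (a sketch, assuming the usual Claymore flag names; the pool addresses, wallets and worker name are placeholders you must replace, and -etha 1 / -dcri 22 are the AMD settings discussed above - do not carry them over to the 1070s, as noted below):

EthDcrMiner64.exe -epool <eth-pool-address>:<port> -ewal <your-eth-wallet>.<worker> -epsw x -etha 1 -dpool <sc-pool-address>:<port> -dwal <your-sc-wallet> -dcoin sc -dcri 22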

Benchmark for 1070's for dual mining SC and ETH using Nvidia 369.05 drivers provided by Claymore's for his Dual Miner!

DO NOT SET A FAST OR SLOW SETTING (-etha 0 or 1) in the bat file.

It appears the sweet spot is an SC setting of 50; this maximises the Ethereum shares found per hour for each 1070's hashpower.

Below 50, you lose a lot of shares (6.29%) accepted per hour on ETH, and above 50 you lose a lot of shares accepted per hour on ETH.

For example: at SC setting of 70, you see a 6.29% drop in Ethereum shares accepted found, but hashpower has only been reduced by 1.9% in the dual mining mode.

Therefore, you end up being 4.39% worse off, in ETH mining efficiency.

Should people find the sweet spot for dual mining ETH and Decred for AMD or NVIDIA GPUs, please post a reply.

I will be fine-tuning the dual-mining settings for the 1070s next week and will post an update.


----------------------------------------Turning to rumours of Bots intercepting your pool connection and stealing shares------------------------------------------------

I have no evidence that Bots exist - i.e. that they intercept your pool connection, pinch your shares and reassign them to the Bot operator's account or wallet address!

It is alleged Bots steal up to 12% of a mining rig's shares at the major (established) crypto pools.

It is alleged Bots steal up to 25% of a mining rig's shares at the minor (under development) crypto pools.

Therefore, I feel it is something that needs addressing, even if what follows is entirely hypothetical - since I am explaining how to stop Bots when they are only a rumour Grin

Firstly, with mining software that is open source (no fee disconnections), the solution is pretty straightforward.

There are 2 types of pools: a) wallet address pools; b) sign in accounts.

a) Simply use a new wallet address each time you restart your miner - a Bot can only intercept old connections from the public list of pool wallet addresses. Were it to indiscriminately intercept all connections to a pool, it would be discovered.

b) Set up a new worker with the longest randomised password on each restart of your crypto rig. Delete all of the very oldest workers as you go along, to help the account-based mining pool out.

Claymore Dual Miner - private miner with fees:

a) Using a new wallet address will work, but each time you disconnect there is always the chance a bot operator will be able to intercept your re-established connection. However, they won't know the exact time of the disconnections, so it will be hit and miss. The 2nd algo is always connected, so simply using a new wallet address each time you restart will keep the bot out.

b) Set up a new worker for both account-based mining pools, with the longest randomised password that the pool allows. Eventually, a bot operator (if you have one who is a hacker) will be able to crack the password on the ETH mining pool and intercept your shares. Therefore, you will need to set up new workers and passwords every 1-2 days to maintain your share averages and ETH payouts. Delete all of the very oldest workers as you go along, to help the mining pool out.

A future major improvement to the Claymore Miner would be not to disconnect from the main ETH mining pool, but merely to stop doing work for up to 72 seconds while mining Claymore's fee on his own connection.

So, those are some hypothetical solutions to deal with the rumours of Bots on the POW algos.
6  Alternate cryptocurrencies / Announcements (Altcoins) / CYPHER [CYP] [POS] [YOBIT - C-CEX] # # HOLDERS THREAD on: March 31, 2016, 05:23:07 PM
Due to the closure of the official thread for CYPHER [CYP] by the developer, here is a new thread for CYP holders to post ideas in or vent their frustrations upon Wink

Original Thread: https://bitcointalk.org/index.php?topic=1006527.0

Specifications:
Name: Cypher [CYP]
Algorithm: QuBit [Super secure hashing: 5 rounds of hashing functions
(luffa, cubehash, shavite, simd, echo)]
Qubit block generation: 120 second block time
PoS: 10%
Max supply: 6,333,700 approx.

EXCHANGES:

YOBIT: https://yobit.net/en/trade/CYP/BTC
C-CEX: https://c-cex.com/?p=cyp-btc

WINDOWS WALLET + CONFIGURATION FILE
https://www.dropbox.com/s/9t0m0ct36rwv1wb/Cypher-qt-V3.7z?dl=0

FACEBOOK:
https://www.facebook.com/cyphercoincommunity1/

All available nodes listed on C-CEX (about 7 connections work)

cypher.conf

listen=1
server=1
daemon=1
rpcuser=setyourname
rpcpassword=setyourpassword
rpcport=5424
rpcallowip=127.0.0.1
addnode=51.254.119.94:5424
addnode=85.25.198.151:5424
addnode=51.254.135.148:5424
addnode=5.9.36.211:60035
addnode=51.254.200.59:55133
addnode=204.11.237.233:49936
addnode=51.255.40.210:43637
addnode=51.254.100.104:40408
addnode=51.254.100.106:34078
addnode=51.254.100.105:5424
addnode=51.254.119.94:54873
addnode=85.214.23.49:47606
addnode=82.9.139.178:49207
addnode=122.151.176.247:42632
addnode=2.101.132.113:49202
addnode=188.165.3.6:59524
addnode=180.183.17.89:7563
addnode=58.96.109.62:44012
addnode=37.110.213.4:60214
addnode=85.139.116.252:49308
addnode=58.96.109.62:46918
addnode=49.228.41.109:25548
addnode=118.211.232.170:49641
addnode=82.9.139.178:60360
addnode=58.96.109.62:44016
addnode=58.96.109.62:38684
addnode=85.139.116.252:49371
addnode=82.9.139.178:55760
addnode=184.22.73.61:1557
addnode=58.96.109.62:33892
addnode=49.228.40.83:5707
addnode=49.228.40.83:8990
addnode=184.22.65.3:34358
addnode=58.96.109.62:57780
addnode=49.228.41.101:62288
addnode=122.151.176.247:54200
addnode=122.151.176.247:57780
addnode=122.151.176.247:55312
addnode=49.228.41.101:28783
addnode=118.211.238.24:61450
addnode=184.22.74.26:26833
addnode=184.22.74.26:1608
addnode=51.254.100.104:47405
addnode=51.254.100.106:37631
addnode=51.255.40.210:58044
addnode=49.228.42.131:2441
addnode=82.9.139.178:51113
addnode=49.228.42.131:24548
addnode=82.9.139.178:64386
addnode=49.228.42.131:27998
addnode=184.22.76.42:7567
addnode=184.22.76.42:1988
addnode=82.9.139.178:49210
addnode=184.22.76.42:19739
addnode=49.228.40.255:43860
addnode=82.9.139.178:50736
addnode=177.54.154.35:10699
addnode=212.68.41.83:54512
addnode=177.42.116.79:56118
addnode=82.9.139.178:49650
addnode=82.9.139.178:58651
addnode=184.22.64.59:52679
addnode=82.9.139.178:64370
addnode=177.207.21.96:61405
addnode=177.158.200.145:61644
addnode=122.151.176.247:48164
addnode=177.54.154.35:19869
addnode=177.54.154.35:3315
addnode=184.22.64.59:57939
addnode=82.9.139.178:49198
addnode=82.9.139.178:57522
addnode=49.228.43.137:64898
addnode=188.165.3.6:52803
addnode=49.228.43.137:17361
addnode=118.211.234.228:64751
addnode=118.211.234.228:52813
addnode=82.9.139.178:60407
addnode=82.9.139.178:50457
addnode=82.9.139.178:50820
addnode=93.86.33.25:28964
addnode=82.9.139.178:53420
addnode=184.22.75.158:3527
addnode=49.228.37.252:2771
addnode=49.228.37.252:54534
addnode=51.254.100.104:51445
addnode=51.254.119.94:34639
addnode=51.254.100.106:50077
addnode=51.254.100.105:50727
addnode=77.99.98.229:63750
addnode=82.9.139.178:60187
addnode=49.228.37.252:60651
addnode=82.9.139.178:58546
addnode=82.9.139.178:58547
addnode=82.9.139.178:51043
addnode=82.9.139.178:55857
addnode=82.9.139.178:49231
addnode=49.228.32.219:50783
addnode=177.16.230.104:64297
addnode=184.22.66.110:22203
addnode=149.202.98.160:10227
addnode=85.139.116.252:49555
addnode=184.22.72.3:30219
addnode=184.22.72.3:58840
addnode=177.54.154.35:6543
addnode=178.162.198.111:3240
addnode=51.254.216.250:5424
addnode=51.254.216.251:47716
addnode=51.254.100.105:44350
addnode=51.254.100.106:50996
addnode=178.162.198.111:13639
addnode=46.28.68.158:5412
addnode=87.103.171.14:7310
addnode=51.254.100.104:35058
addnode=51.254.100.105:35501
addnode=51.254.100.106:58395
addnode=94.253.240.15:50340
addnode=51.254.216.25:39034
addnode=118.211.237.33:59654
addnode=203.91.244.188:40870
addnode=118.211.237.33:55005
addnode=171.25.193.77:58041
addnode=82.9.139.178:54916
addnode=82.9.139.178:56954
addnode=188.113.203.76:55996
addnode=82.9.139.178:57189
addnode=37.110.214.85:57314
addnode=82.9.139.178:59642
addnode=203.91.244.188:51018
addnode=82.9.139.178:52028
addnode=51.254.216.25:38886
addnode=51.254.216.251:49139
addnode=51.254.216.250:38944
addnode=82.9.139.178:49270
addnode=82.9.139.178:49618
addnode=82.9.139.178:57283
addnode=85.139.116.252:60464
addnode=82.9.139.178:49366
addnode=82.9.139.178:49367
addnode=82.9.139.178:60621
addnode=82.9.139.178:49978
addnode=82.9.139.178:49979
addnode=82.9.139.178:49879
addnode=82.9.139.178:49946
