Author Topic: Estimated Hash Rates for the RTX 2080 and RTX 2080 Ti  (Read 11442 times)
rockmoney
Sr. Member
****
Offline

Activity: 439
Merit: 297

September 02, 2018, 09:26:25 PM
 #21

I was hoping to see a bit more solidity in the speculated stats for the new RTX 2080, especially since the Founders Edition is currently available for pre-order on NVIDIA's website for $799 and all of the specs have been released. I'll keep an eye out for more info, but I imagine these cards will be harder to find as time goes on, mainly because the two other editions are already sold out on NVIDIA's website.

I am also seeing 11GB editions of this card on second- and third-party websites. I assume the 11GB version would be the one to get for ETH mining, as it will certainly remain relevant for much longer than the 8GB versions.

Anyway, thanks for all of the info so far, and please post any new info as it becomes available!  Wink


infofront
Legendary
*
Offline

Activity: 2632
Merit: 2780

Shitcoin Minimalist

September 10, 2018, 03:39:22 PM
 #22

Is it technically possible to do any mining on the new ray tracing cores?
Wananavu99
Full Member
***
Offline

Activity: 345
Merit: 131

September 10, 2018, 08:34:20 PM
 #23

Just wait for AMD Navi on 7nm; that should force Nvidia to lower prices. For a possible 15-20% hashrate increase at current prices, and depending on electricity costs, the ROI could be minimal or even negative. It's a bear market right now, and some coins are seriously thinking of moving away from PoW.
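
To make that ROI point concrete, here is a rough payback sketch in Python. Every number in it is a placeholder assumption of mine (card price, hashrate, revenue per MH/s, power draw, electricity rate), not a figure from this thread:

Code:
def payback_days(card_price_usd, hashrate_mh, revenue_per_mh_day_usd,
                 power_w, electricity_usd_per_kwh):
    """Days to recoup the card from net mining income; None if it never pays back."""
    daily_revenue = hashrate_mh * revenue_per_mh_day_usd
    daily_power_cost = power_w / 1000.0 * 24 * electricity_usd_per_kwh
    daily_profit = daily_revenue - daily_power_cost
    return card_price_usd / daily_profit if daily_profit > 0 else None

# Hypothetical bear-market inputs: $799 card, 45 MH/s at 220 W,
# $0.01 revenue per MH/s per day, $0.10/kWh electricity.
print(payback_days(799, 45, 0.01, 220, 0.10))  # prints None -> negative ROI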
KevinMiles
Copper Member
Jr. Member
*
Offline

Activity: 110
Merit: 1

September 11, 2018, 04:29:28 PM
 #24

Turing's new cores are essentially ASICs. Imagine someone writing a miner that uses the Tensor and RT cores; it would then be possible to mine on them while gaming on the CUDA cores. I bet a sophisticated miner will pop up soon to take advantage of NVIDIA's new marvel.
zenstrive
Jr. Member
*
Offline

Activity: 46
Merit: 4

September 14, 2018, 03:18:31 AM
 #25

If Volta is used as a reference, the CryptoNight v7 hashrate will still be lower than that of a Vega 56.
revenant2017
Sr. Member
****
Offline

Activity: 728
Merit: 252

Healing Galing

September 14, 2018, 03:31:29 PM
 #26

Quote from: KevinMiles on September 11, 2018, 04:29:28 PM
Turing's new cores are essentially ASICs. Imagine someone writing a miner that uses the Tensor and RT cores; it would then be possible to mine on them while gaming on the CUDA cores. I bet a sophisticated miner will pop up soon to take advantage of NVIDIA's new marvel.
This is also what I had in mind with the Tensor cores. If someone wrote a miner optimized for the Tensor cores that also exploited the speed of GDDR6, the hashrate could end up higher than on past architectures. The sweet spot would be the RTX 2080, based on its price/performance.
giagge
Legendary
*
Offline

Activity: 1134
Merit: 1001

September 15, 2018, 03:23:55 PM
 #27

I think the RT and Tensor cores are much faster than a Xilinx FPGA.
vuli1
Jr. Member
*
Offline

Activity: 238
Merit: 3

September 15, 2018, 04:59:58 PM
 #28

No one has the new GPUs yet, so there are no results to share.

markiz73
Hero Member
*****
Offline

Activity: 1190
Merit: 641

September 15, 2018, 06:41:35 PM
 #29

Test results should be released in the near future, but they will not reflect all of the capabilities of the new generation of video cards.
After a few months, once developers have optimized their software for the new GPU architecture and memory, we will see the real results.

Here are a few tests of  RTX 2080 Ti & RTX 2080
https://videocardz.com/77983/nvidia-geforce-rtx-2080-ti-and-rtx-2080-official-performance-unveiled
R0land
Jr. Member
*
Offline

Activity: 208
Merit: 3

September 15, 2018, 07:08:41 PM
 #30

Ethash speed is limited by memory bandwidth.
With a GDDR6 384-bit memory bus you have ~768 GB/s of bandwidth --> ~90 MH/s on ETH.
With a GDDR6 256-bit memory bus you have ~512 GB/s of bandwidth --> ~60 MH/s on ETH.
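
As a sanity check on that rule of thumb, here is a small Python sketch of the bandwidth-to-hashrate conversion. It assumes the commonly cited figure of roughly 8 KiB of DAG reads per Ethash hash (64 reads of 128 bytes); the function name is just for illustration:

Code:
# Theoretical (zero-latency) Ethash hashrate from memory bandwidth.
BYTES_PER_HASH = 64 * 128  # ~8 KiB of DAG traffic per hash (assumption)

def ethash_mhs(bandwidth_gb_s):
    """Upper-bound hashrate in MH/s for a given memory bandwidth in GB/s."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

print(ethash_mhs(768))  # ~93.8 MH/s, i.e. the "~90 MH/s" figure above
print(ethash_mhs(512))  # ~62.5 MH/s, i.e. the "~60 MH/s" figure above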
cudapop
Member
**
Offline

Activity: 93
Merit: 41

September 16, 2018, 03:43:14 AM
 #31

The memory bandwidths you listed above are for 16 Gbps GDDR6. The RTX cards ship with 14 Gbps memory, so the bandwidths are: 2080 Ti (352-bit bus) = 616 GB/s, 2080 (256-bit bus) = 448 GB/s.
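
For reference, a quick sketch of how those figures fall out of bus width and per-pin data rate (my own illustration, nothing card-specific beyond the numbers already quoted):

Code:
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus pins times per-pin rate, in bytes."""
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(352, 14))  # 616.0 GB/s (RTX 2080 Ti)
print(bandwidth_gb_s(256, 14))  # 448.0 GB/s (RTX 2080)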
nitrobg
Member
**
Offline

Activity: 413
Merit: 17

September 16, 2018, 07:38:09 AM
 #32

Those 14 Gbps chips can probably be overclocked to at least 15 Gbps, just like most GDDR5 can reach 9 Gbps. 14 Gbps is perhaps just the lowest speed at which 100% of chips are stable.
cudapop
Member
**
Offline

Activity: 93
Merit: 41

September 16, 2018, 07:55:33 AM
 #33

Yes, that's true, but when memory bandwidth is used to estimate Ethash hashrates, the value obtained is really a theoretical zero-latency hashrate. Since Ethash takes each DAG sample from a pseudo-random location, latency actually has a substantial effect. So one conservatively assumes that the overclock largely goes toward making up for the real latencies relative to that theoretical zero-latency figure.

So for example, a 192-bit bus GTX 1060 with 8 Gbps memory has a theoretical zero-latency Ethash hashrate of around ~23.4 MH/s. An overclock to 9 Gbps would raise this to a theoretical zero-latency rate of ~26.4 MH/s. But due to the actual latencies involved, what we see in reality is hashrates closer to the ~23 MH/s value.
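
Putting the two calculations from earlier in the thread together reproduces those GTX 1060 numbers; this is just an illustrative sketch using the same ~8 KiB-per-hash assumption as before:

Code:
def zero_latency_mhs(bus_width_bits, data_rate_gbps, bytes_per_hash=64 * 128):
    """Theoretical zero-latency Ethash MH/s from bus width and per-pin rate."""
    bandwidth_bytes_s = bus_width_bits * data_rate_gbps / 8 * 1e9
    return bandwidth_bytes_s / bytes_per_hash / 1e6

print(zero_latency_mhs(192, 8))  # ~23.4 MH/s (stock GTX 1060)
print(zero_latency_mhs(192, 9))  # ~26.4 MH/s (9 Gbps overclock)
# Real-world rates stay near ~23 MH/s because DAG reads hit
# pseudo-random addresses and pay full DRAM latency.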
RivAngE
Full Member
***
Offline

Activity: 728
Merit: 169

What doesn't kill you, makes you stronger

September 17, 2018, 10:26:57 AM
 #34

Quote from: giagge on September 15, 2018, 03:23:55 PM
I think the RT and Tensor cores are much faster than a Xilinx FPGA.

Let me try to calculate that...

For starters, we can ignore the RT cores; they do a very specific kind of job and I don't expect them to have any use in mining.
 - Nvidia's Tensor cores on a 2080 Ti have an FP16 (aka half-precision) compute performance of 110 TFLOPS!
 - Xilinx's most powerful FPGA can offer 10,948 GFLOPS, or almost 11 TFLOPS (source: https://www.xilinx.com/products/technology/dsp.html#solution)

Xilinx doesn't mention which model produced that figure, only that it's from the UltraScale+ family. The fastest FPGA available for mining is this one https://store.mineority.io/sqrl/cvp13/ which costs $6,370 before tax, and if you import it into Europe... I don't want to think about it. This FPGA miner uses the Virtex UltraScale+ VU13P.

Nvidia's Tensor cores can also offer mixed-precision computation which, to be honest, I have no idea what it is, whether Xilinx offers it, or whether it matters for mining!

So as far as pure compute performance per $ goes, the RTX 2080 Ti puts everything else miles behind... and people call the RTX series overpriced Roll Eyes
I wish, however, that someone could go into more detail about how important the above numbers are and which algorithms would benefit most from them, because I'm sure TFLOPS are NOT the only factor.


For comparison, the 1080 Ti offers 13 TFLOPS of FP32 (aka single precision) on the shader cores alone. I don't know whether the 1080 Ti can compute FP16, but if it could, that would be 26 TFLOPS (double the FP32).
The 2080 Ti's shader cores offer 16 TFLOPS of FP32 on top of the Tensor cores mentioned above.

PS: It took me more than half an hour to gather all these numbers from valid sources! omg Shocked
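
To put the price/performance point in rough numbers, here is a throwaway comparison in Python. The GPU price below is my own placeholder assumption (not a figure from this thread); the FPGA price is the $6,370 quoted above, and raw peak TFLOPS of course ignore what each chip can actually be programmed to do:

Code:
# Peak TFLOPS per dollar, using the figures discussed above.
devices = {
    "RTX 2080 Ti (Tensor FP16)": (110.0, 1199.0),   # TFLOPS, assumed USD price
    "Xilinx VU13P board (CVP-13)": (11.0, 6370.0),  # TFLOPS, price quoted above
}

for name, (tflops, price_usd) in devices.items():
    print(f"{name}: {tflops / price_usd:.4f} TFLOPS per $")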
cudapop
Member
**
Offline

Activity: 93
Merit: 41

September 17, 2018, 10:57:21 AM
Last edit: September 17, 2018, 11:15:32 AM by cudapop
Merited by RivAngE (1)
 #35

Note that the 110 FP16 TFLOPS of performance of the Tensor cores is for one specific operation only: 4x4 matrix multiply and accumulate. That's all Tensor cores can do. A Tensor core is essentially an ASIC designed to take two 4x4 matrices of FP16 values and multiply them, accumulating the result into a 4x4 matrix of FP32 values. You can't use a Tensor core for anything else.

That's why it's rated at such a high TFLOPS number: because its hardware has been designed to do only matrix multiply and accumulate, it has no other functional use, and you can't reprogram a Tensor core to perform any other operation. Think of a Tensor core like an S9 ASIC, but instead of doing SHA-256, all it does is 4x4 matrix multiply and accumulate.

On the other hand, Xilinx logic cells can be reconfigured to perform different operations; they're completely flexible, hence VHDL/Verilog development. In fact, using something like Xilinx's SDAccel you can write a C++/OpenCL program and have it built into a bitstream to run on a Xilinx FPGA.
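
For anyone curious what that one operation looks like, here is a tiny NumPy sketch that just emulates the math a Tensor core implements, D = A·B + C with FP16 inputs and FP32 accumulation (it runs on the CPU, not on the Tensor cores themselves):

Code:
import numpy as np

# The single operation a Tensor core performs: D = A @ B + C,
# where A and B are 4x4 FP16 matrices and C/D are 4x4 FP32 accumulators.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C  # accumulate in FP32
print(D)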
RivAngE
Full Member
***
Offline

Activity: 728
Merit: 169

What doesn't kill you, makes you stronger

September 17, 2018, 11:26:53 AM
 #36

Quote from: cudapop on September 17, 2018, 10:57:21 AM
Note that the 110 FP16 TFLOPS of performance of the Tensor cores is for one specific operation only: 4x4 matrix multiply and accumulate. That's all Tensor cores can do. A Tensor core is essentially an ASIC designed to take two 4x4 matrices of FP16 values and multiply them, accumulating the result into a 4x4 matrix of FP32 values. You can't use a Tensor core for anything else.

That's why it's rated at such a high TFLOPS number: because its hardware has been designed to do only matrix multiply and accumulate, it has no other functional use, and you can't reprogram a Tensor core to perform any other operation. Think of a Tensor core like an S9 ASIC, but instead of doing SHA-256, all it does is 4x4 matrix multiply and accumulate.

On the other hand, Xilinx logic cells can be reconfigured to perform different operations; they're completely flexible, hence VHDL/Verilog development. In fact, using something like Xilinx's SDAccel you can write a C++/OpenCL program and have it built into a bitstream to run on a Xilinx FPGA.

Oh... I see! Good explanation!
Then I guess a specific algorithm would have to be built to take advantage of this operation. A miner developer probably wouldn't be able to use these cores for the current algos.
cudapop
Member
**
Offline

Activity: 93
Merit: 41

September 17, 2018, 11:35:33 AM
 #37

There are two current algos I'm aware of that use matrix multiplication: Tensority and Groestl.

I posted some details on them wrt Turing's new features in this post: https://bitcointalk.org/index.php?topic=4948083.msg44769341#msg44769341
goxed
Legendary
*
Offline

Activity: 1946
Merit: 1006

Bitcoin / Crypto mining Hardware.

September 19, 2018, 05:15:19 PM
 #38

Here's the sauce
https://www.computerbase.de/2018-09/nvidia-geforce-rtx-2080-ti-test/6/#abschnitt_ethereummining_ist_nicht_ueberragend




Meteorite777
Jr. Member
*
Offline

Activity: 68
Merit: 6

September 19, 2018, 06:27:04 PM
Merited by dbshck (1)
 #39


If that's true and the RTX 2080 Ti only hashes at 50 MH/s or less, that would make it just as fast as a GTX 1080 Ti with the ETHLargement Pill, correct? That would make it a massive flop, for mining ETH at least.
revenant2017
Sr. Member
****
Offline

Activity: 728
Merit: 252

Healing Galing

September 19, 2018, 06:56:40 PM
 #40


Quote from: Meteorite777 on September 19, 2018, 06:27:04 PM
If that's true and the RTX 2080 Ti only hashes at 50 MH/s or less, that would make it just as fast as a GTX 1080 Ti with the ETHLargement Pill, correct? That would make it a massive flop, for mining ETH at least.
If an ETHLargement-style pill were developed for the newer Turing architecture, we might speculate on at most a 50% increase in hashrate, which would put it around 84 MH/s.