Besides the G2 using a larger 140mm double-ball-bearing fan compared to the 130mm Hydraulic Dynamic Bearing fan on the G3, I would think the smaller casing on the G3 heats up faster, requiring the smaller fan to run at a higher speed. I think the Hydraulic Dynamic Bearing fan on the G3 is designed to be quieter, while double-ball-bearing fans should be more durable.
|
|
|
No further updates about the upcoming Constantinople fork were provided at the last Ethereum Foundation core developer meeting on Friday, other than that further updates should come at the next meeting in two weeks. There was some discussion about implementing account abstraction in Constantinople, and a shout-out to BBT: https://www.youtube.com/watch?v=biNCOCQdjQ0&feature=youtu.be&t=2985
|
|
|
In a dual-PSU setup you can figure out how much power the risers are using if you use the primary PSU (the one connected to the motherboard) to power all the risers, and a secondary PSU to power all the VGA inputs on the cards. Subtract 60-80 W for the motherboard from the total draw of the primary PSU, and the remainder is what the risers are using. In my experience with dual and triple PSU setups like this on RX 570/580s, risers draw between 45-50 W each when dual mining and about 60 W on Linux.
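The subtraction described above can be sketched as a quick calculation. The wattages below are illustrative figures consistent with this post, not measurements:

```python
def riser_power(primary_psu_watts, motherboard_watts, num_risers):
    """Estimate per-riser draw when the primary PSU feeds the
    motherboard plus all risers and a second PSU feeds the GPUs."""
    total_riser_watts = primary_psu_watts - motherboard_watts
    return total_riser_watts / num_risers

# Example: primary PSU drawing 460 W, ~70 W assumed for the motherboard,
# 8 risers -> 48.75 W per riser, in line with the 45-50 W range above.
print(riser_power(460, 70, 8))
```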
|
|
|
Hi Claymore,
I've received a few reports from users running Awesome Miner to monitor the Claymore Ethereum miner about issues with larger numbers of GPUs. On some systems with a large number of GPUs, requests to the Claymore monitoring API time out - it simply doesn't respond within a reasonable time. If the user starts the mining process with a few fewer GPUs, it works fine again. So above a certain number of GPUs the monitoring API stops working, even though mining itself runs fine.
Is this a known issue? Are there any workarounds?
Thanks!
Please send me some logs so I can check the reason.

I have the same problem with EthMan too. All rigs with up to 6 cards work perfectly; all 6 rigs with 12 cards time out constantly, reconnect after 10 seconds to 10 minutes for a couple of seconds, then time out again.

Exact same problem with v10.4. On a 6-card rig EthMan reporting works fine. On my ASRock H110 Pro BTC+ 13-card build, EthMan drops the connection and reconnects frequently. I am running the Adrenalin v17.12.2 drivers on both rigs, and it makes no difference whether EthMan is running on the 6-card or 13-card rig.
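For anyone who wants to reproduce the timeout from a script rather than EthMan: Claymore's remote management API accepts a JSON-RPC style request over a plain TCP connection (the port is set with `-mport`, 3333 by default). A minimal sketch, with the host, port, and timeout values as assumptions you'd adjust for your rig:

```python
import json
import socket

def build_stats_request():
    # JSON-RPC request for the miner_getstat1 method that EthMan
    # and other monitoring tools use.
    return (json.dumps({"id": 0, "jsonrpc": "2.0",
                        "method": "miner_getstat1"}) + "\n").encode()

def query_miner(host="127.0.0.1", port=3333, timeout=5.0):
    """Fetch miner stats; raises socket.timeout when the API stalls,
    which is the symptom reported above on large rigs."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        s.sendall(build_stats_request())
        return json.loads(s.recv(8192).decode())
```

Timing repeated `query_miner()` calls on a 6-card versus a 12-card rig would show whether the stall is in the API itself or in the client.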
|
|
|
www.whattomine.com and take your pick. Basically, you could throw a dart at all the mineable coins on coinmarketcap.com and whatever it lands on would be profitable to mine right now.
|
|
|
If you want to mine ETH, LTC, BCH, or BTC you can use a GDAX (owned by Coinbase) wallet address. From there you trade to BTC or USD/EUR and transfer the funds to Coinbase for free. Coinbase also offers cold storage for free, and all digital currency that Coinbase holds is insured. You can transfer USD to your bank account for a fee, or get a Shift debit Visa card connected to your Coinbase wallet to spend the BTC with no fees.
The minimum BTC, BCH, ETH, or LTC deposit amount is 0.01, and GDAX only charges a 0.30% fee on the taker side of the trade. Transfers from a GDAX wallet to a Coinbase wallet are free and instant. Cash-back transactions and purchases with the Shift card are also free, or you can do an ATM cash withdrawal for a $2 fee plus the ATM fee.
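A quick sanity check of what the 0.30% taker fee quoted above actually costs (the $500 trade size is just an example):

```python
TAKER_FEE = 0.0030   # GDAX taker fee mentioned above
MIN_DEPOSIT = 0.01   # minimum BTC/BCH/ETH/LTC deposit mentioned above

def taker_proceeds(trade_value):
    """Value received after the 0.30% taker fee on a market order."""
    return trade_value * (1 - TAKER_FEE)

# Selling $500 worth of ETH as a taker nets $498.50.
print(taker_proceeds(500))
```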
Thank you for your help! May I ask how it works if I want to mine a different altcoin? I mean not ETH, BTC, or BCH, but something like IOTA, Zcash, or VeChain. If I successfully mine, do I need a special crypto wallet to send it to? I don't understand how it works with other altcoins. Thank you for your answer!

The Ledger hardware wallets support many different altcoins and are very secure. Just make sure you back up your recovery key to a safe place in case you lose the device or it breaks. Another option is to mine to a paper wallet address. You can monitor the funds on a block explorer, and no internet connection is required to receive funds.
|
|
|
My vote for new GPUs is the RX 570 4GB cards. The problem is they've gone up ~25% over the last month and are currently hard to find at a decent price. Your best chance is to sign up for auto-notify stock alerts on the retailer websites or monitor nowinstock.net. For used GPUs I would go with HD 78XX/79XX 2GB cards, which you can find on eBay or Craigslist for under $100.
|
|
|
It should show up when you hit the 's' key.
|
|
|
You can download DDU from the developer's website; google DDU and it's the first link that comes up. Before installing the latest Nvidia driver, you should use DDU in safe mode to uninstall the driver Windows automatically installs. The first time you run DDU it will pop up a warning that it's not running in safe mode, which you can enable in the program's settings. After you enable the setting, launch DDU again and select booting into safe mode from the drop-down menu. When DDU reboots into safe mode it automatically disables the Windows automatic driver install; then select 'Clean and shutdown' to remove the drivers. After shutdown, connect all the x1 GPU risers to the motherboard, boot back into Windows, and install the latest driver for your cards from the Nvidia website. I also enable the group policy to disable driver updates, since you don't want Windows Update changing the drivers you install.
gpedit ==> Computer Configuration ==> Administrative Templates ==> Windows Components ==> Windows Update and set 'Do not include drivers with Windows Update' to enabled.
You can also install the network driver from Windows.
|
|
|
I would remove the drivers in safe mode using DDU, shutdown and connect all the GPU's then install the latest driver. Verify all the GPU's are listed in Device Manager.
You can download the latest network driver from the motherboard support page on the manufacturer's website or use the driver on the support CD that came with the motherboard.
|
|
|
Shutdowns are usually a power related issue. What PSU and how many cards are you trying to run? It could also be too much overclock. What are your settings?
|
|
|
I'm interested in hearing how you put 4 PCI-E cards on a board with only 2 PCI-E slots without using splitters.
|
|
|
I don't think it will ever support 19 regular GPUs. AMD never said that. You have to combine them with P106 or P104 mining cards, which normal people can't buy.

^^This. I don't recall Asus ever saying the motherboard would support 19 regular cards. That was just wishful thinking from those who bought the board, taking the driver update Asus referred to for removing the previous 8-card limit to mean they could use all regular cards with the board. Although Asus did a bad job of making the board's limitations clear, all the literature said you needed to use mining cards for more than 11 AMD GPUs.
|
|
|
You can install Windows using either the onboard video or a single card connected to the PCI-E x16 slot through a riser. You can flash/save the BIOS of up to 10 cards connected in Windows using AtiFlash and PowerShell. Connect 10 cards and save the original BIOS of each, naming it according to the card's index in PowerShell so you know which card it belongs to. Then remove all the cards and connect the rest of the cards to save their BIOS. You need to verify that all the cards have the exact same memory type and BIOS with Polaris Bios Editor before you flash a BIOS back to a card. If they all have the same BIOS and memory type, you can flash the same BIOS to all the cards after you do the BIOS mod.

https://www.youtube.com/watch?v=Syaf6o1SEUc
https://www.youtube.com/watch?v=xKlXDHdCtqM&feature=youtu.be&t=10032
https://www.techpowerup.com/download/ati-atiflash
https://github.com/jaschaknack/PolarisBiosEditor/tree/9ec64066eecdb55ac86da7bc82181eaab2161d51

After you have all the cards' BIOS modded, run DDU in safe mode to remove the driver installed by Windows and disable the Windows automatic driver install, then select 'Clean and shutdown' to remove the drivers. To run more than 8 cards in Windows you need to use the latest AMD Adrenalin drivers. After shutdown, connect all the x1 GPU risers to the motherboard and install the latest driver. Also enable the group policy to disable driver updates, since you don't want Windows Update updating the drivers: gpedit ==> Computer Configuration ==> Administrative Templates ==> Windows Components ==> Windows Update and set 'Do not include drivers with Windows Update' to enabled. Afterburner will not work with the latest drivers and multiple cards. Either use the Claymore config file to apply the overclock and undervolt settings at runtime, or use OverdriveNTool to create overclocking profiles for your cards as explained on www.mining.help.
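The save-by-index step described above can be scripted instead of typed by hand. AtiFlash's command line supports `-s <adapter index> <file>` to save a card's ROM; the file-name prefix below is illustrative, and the commands must be run from an elevated prompt:

```python
import subprocess

def save_bios_commands(num_gpus, prefix="original_gpu"):
    """Build one 'atiflash -s <index> <file>' command per adapter, so
    each saved ROM is named after the card's index (names illustrative)."""
    return [["atiflash", "-s", str(i), f"{prefix}{i}.rom"]
            for i in range(num_gpus)]

def save_all(num_gpus):
    for cmd in save_bios_commands(num_gpus):
        subprocess.run(cmd, check=True)  # requires elevated privileges

# save_all(10)  # first batch of 10 cards, as described above
```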
|
|
|
The maximum PCI-E specification for an 8-pin connector is 150 W, not 360 W. The maximum for a 6-pin PCI-E connector is 75 W. The PCI-E specification is not just based on wire gauge; it's also based on power delivery. Wires and connectors have resistance, and the voltage drop and power dissipated as heat increase as you increase the current. The resistance of connectors also tends to increase as they are plugged and unplugged, so after enough uses they can overheat and even melt when passing a large current. That is why it's NOT safe to overload an 8-pin connector beyond its rated specification, ESPECIALLY for a mining rig under constant high load 24/7.

Yes, that rating applies to the PCI-E pins, including the socket and connectors. I'm saying wire-wise there is no issue with splitting one 8-pin into two 8-pins. The 150 W limit applies to the socket on the GPU side, which still sees the draw of one GPU; on the PSU side the 150 W limit does not hold. However, I do agree that across the entire connection, the highest stress point is at the PSU's 8-pin connection sockets; if any overdraw happens, this is where it is most likely to heat up first. The brand of PSU is very important - that's why we pay more for the Corsairs. All the ratings are standards to abide by at a minimum, but good brands always surpass them by quite a bit. I have tried running the HX1200 at 1400 W for almost 3 months, 24/7, at 28C ambient with no issues, hence the confidence in their build. Of course, this does not apply to other brands.

PSU rail ratings differ between models from the same brand and even between different VGA ports on the same PSU - e.g. EVGA specifically says in their G2/P2 manuals that the 8-pin cables with a 6-pin pigtail should only be used in the last two VGA ports. Physics doesn't change because you pay more for a PSU. The fact is, by running a PSU over its rated capacity you are greatly reducing the efficiency and lifespan of the PSU, which is a bad idea for a mining rig. Your experience also doesn't change the fact that you are pulling more watts through a connector than it was intended to deliver, and your experience with one PSU brand doesn't mean others will have the same experience. https://www.youtube.com/watch?v=PXoPFPqU3Y8&feature=youtu.be&t=3798
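The heat argument above can be put into numbers. An 8-pin PCI-E connector carries its load on three 12 V wires, and the heat dissipated in each contact scales with I²R. The 10 milliohm contact resistance below is an assumed, illustrative figure for a worn connector, not a measured one:

```python
def per_wire_current(watts, volts=12.0, power_wires=3):
    """An 8-pin PCI-E connector carries its load on three 12 V wires."""
    return watts / volts / power_wires

def contact_heat_watts(watts, contact_resistance_ohms=0.01):
    """I^2 * R dissipation in one contact (resistance value assumed)."""
    i = per_wire_current(watts)
    return i * i * contact_resistance_ohms

# At the rated 150 W each wire carries ~4.17 A; at 360 W it carries 10 A.
# Heating per contact rises with the square: (360/150)^2 = 5.76x,
# which is why overloading a connector is far worse than it looks.
print(per_wire_current(150), per_wire_current(360))
```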
|
|
|