vg54dett
|
|
December 18, 2016, 04:20:29 PM |
|
I was fortunate to be mining at mmpool.org; we are a very small BTC pool that uses a complex but interesting payout plan.
The pool is so small we take 2, 3, even 4 months to hit a block.
But we hit one 45 hours ago and I got 2.4 BTC, so I do not have to worry about holding my ZEC.
Phil, I'm going to give mmpool a try. We just have to put a BTC address and a nickname in the website form to register, then mine to that nickname and wait for a possible payout someday, right?
|
|
|
|
Ziscadas
|
|
December 18, 2016, 05:00:33 PM |
|
Hi Phil, thanks to your BUY tip I put up a 2 BTC order for ZEC at Polo at $35 and got in at $34.99 today. ZEC just got a boost a while ago to $37.81... hope it goes past $40. A nice pre-Christmas present to put toward the Panda Miner. Appreciate the buy tip. Do you also think it will not drop further?
|
|
|
|
smaxz
Sr. Member
Offline
Activity: 430
Merit: 253
VeganAcademy
|
|
December 18, 2016, 05:14:03 PM |
|
According to whattomine.com ... ZEC is still up there. So I left the 390s to continue selling ZEC hash at NiceHash for BTC. I made a handsome sum of profit on Philip's previous ETH tips, so I will buy some ZEC on Polo today. Hope it will be a merry Christmas present and enough funds to get a unit of that Panda Miner! Just as I was about to give up on ZEC... this happens.... Well, looking forward to seeing how much longer my 480s and 390s can run. You must have a low electricity price. Mining XMR and ZEC was very "nice" to my power bill - still expensive, but not as expensive as mining ETH, which has the highest power consumption and most heat. Be grateful you don't have memories of scrypt mining.
|
- NGdTwHRSdnThdi1drQuHGT3khAHRtZ1HMq -
|
|
|
citronick
Legendary
Offline
Activity: 1834
Merit: 1080
---- winter*juvia -----
|
|
December 18, 2016, 05:19:31 PM |
|
Hi Phil, thanks to your BUY tip I put up a 2 BTC order for ZEC at Polo at $35 and got in at $34.99 today. ZEC just got a boost a while ago to $37.81... hope it goes past $40. A nice pre-Christmas present to put toward the Panda Miner. Appreciate the buy tip. Do you also think it will not drop further? Up or down - hard to tell with this ZEC.... however, I have already put in a sell order at $39.99. Don't be greedy - there is no such thing as bad profit.
|
If I provided you good and useful info or just a smile to your day, consider sending me merit points to further validate this Bitcointalk account ~ useful for future account recovery...
|
|
|
citronick
Legendary
Offline
Activity: 1834
Merit: 1080
---- winter*juvia -----
|
|
December 18, 2016, 05:23:02 PM |
|
According to whattomine.com ... ZEC is still up there. So I left the 390s to continue selling ZEC hash at NiceHash for BTC. I made a handsome sum of profit on Philip's previous ETH tips, so I will buy some ZEC on Polo today. Hope it will be a merry Christmas present and enough funds to get a unit of that Panda Miner! Just as I was about to give up on ZEC... this happens.... Well, looking forward to seeing how much longer my 480s and 390s can run. You must have a low electricity price. Mining XMR and ZEC was very "nice" to my power bill - still expensive, but not as expensive as mining ETH, which has the highest power consumption and most heat. Be grateful you don't have memories of scrypt mining. If you were GPU mining scrypt circa 2014 then yes, I think I know how that feels.... unless you had ASIC miners.
|
If I provided you good and useful info or just a smile to your day, consider sending me merit points to further validate this Bitcointalk account ~ useful for future account recovery...
|
|
|
QuintLeo
Legendary
Offline
Activity: 1498
Merit: 1030
|
|
December 18, 2016, 10:47:21 PM |
|
Be grateful you don't have memories of scrypt mining. Scrypt was no worse on my 7xxx series cards than the DNet work they were doing before (and after) I got involved in scrypt GPU mining. It was actually better in one way - I didn't get frustrated that the AMD Linux drivers didn't allow memory UNDERclocking. 8-)
|
I'm no longer legendary just in my own mind! Like something I said? Donations gratefully accepted. LYLnTKvLefz9izJFUvEGQEZzSkz34b3N6U (Litecoin) 1GYbjMTPdCuV7dci3iCUiaRrcNuaiQrVYY (Bitcoin)
|
|
|
GabryRox
|
|
December 18, 2016, 11:09:35 PM |
|
To Phil and anyone else about to, or considering, putting Claymore's CPU miner to work, here are some findings from my 3 PCs over the past few days:
- i7 6800K (6 cores, 12 threads, 15MB cache): the default setting (6 threads) results in 313 H/s. Adding "-t 7" to use 7 threads instead of 6 results in 347 H/s. -t 7 appears to be the optimal setting for this CPU.
- i7 4790K (4 cores, 8 threads, 8MB cache): the default setting of 4 threads results in about 307 H/s, which appears to be the optimal setting for this CPU.
- i7 2600K (4 cores, 8 threads, 8MB cache): the default setting of 4 threads results in about 190 H/s, but note that this is in slow mode. I need to tweak a few things, but since this is my main PC and I have other things running on it, I haven't had a chance to tweak and reboot yet. I think in fast mode this will get into the mid 200s, so about 230-240 maybe. I will update in a few days when I have more data.
The main thing I wanted to point out is that my tests confirmed the statements I have seen on other threads about this CPU miner being very reliant on cache... meaning each mining thread requires 2MB of cache to work effectively. Case in point: the top two CPUs above, especially the 6800K. Increasing from 6 to 7 threads (requiring 14MB of cache vs 12MB, still within the 15MB available) resulted in an increase of about 35 H/s. BUT... when I tried to use 8 threads (which would need 16MB, more than the CPU has), the hash rate actually dipped back down to around the 300 mark. Basically, going past 7 threads not only fails to increase the rate, it reduces it.
Another thing to check when running this miner is the mode (slow vs fast). To enable fast mode, you need to run the EXE as Admin. The readme says you only have to do this once, and after a reboot it will run in fast mode without being run as Admin, but I have had mixed results with that, so I just always run it as Admin, with no ill effects.
The one thing I do find interesting is the comparison between the 6800K and the 4790K. The 6800K maxes out at 347 H/s using 7 threads (about 50 H/s per thread), but the 4790K, which is two generations older, gets about 75 H/s per thread. Not sure why that is, or whether to be happy with the 4790K or disappointed with the 6800K, lol.
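If anyone else wants to play with the thread count, this is roughly what a launch .bat can look like. Treat it as a sketch only: the exe name and the pool/wallet flags below are placeholders I am guessing at, and the only option taken from my tests above is -t, so double-check everything against the readme that ships with the miner.
Code:
:: start.bat - hypothetical launcher for the CPU miner discussed above.
:: The exe name, pool URL and wallet flags are placeholders, not copied from my setup;
:: only the -t thread-count flag is the one from my tests.
cd /d %~dp0
miner.exe -zpool stratum+tcp://zec.pool.example:3333 -zwal YOUR_ZEC_ADDRESS.rig1 -zpsw x -t 7
pause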
|
|
|
|
GabryRox
|
|
December 18, 2016, 11:39:33 PM |
|
So, I am getting ready to strap-mod the BIOS on my 2nd mining rig. However, I realized that I need to know the device number of each of the 5 GPUs in order to do this, since the actual BIOS flash command line specifies the device to flash.
This was not a concern on rig #1 because it has 4 identical MSI 470s in it. However, on rig #2 I have 1 Sapphire Nitro 470 8GB in the mix, so I need to make sure I am flashing the correct cards. It may actually be that the Sapphire takes the same strap mod (more on that later), but I really need to know how to identify which device number Windows has assigned to each of my 5 GPUs.
If someone could point me to a resource on this, or suggest a way to easily identify them on Windows 7, I would appreciate it.
For instance, what determines which device # is assigned? Is it the order you install the cards? The PCIe slot they are plugged into? Or something else?
The PCIe slots on my ASRock BTC R2 mobo are numbered 1-6 from right to left, with the lone x16 slot (currently unoccupied) being slot #2. However, when I go into Device Manager and check the properties for each GPU, it shows only PCI bus 2-6 for the 5 GPUs, meaning nothing shows as PCI bus 1, but I really don't know how that correlates anyway, since I know the device #s I am looking for begin with 0, so I am assuming I should be looking for device #s 0-5, not 1-6. FYI, my lone Sapphire is plugged into PCIe slot #4 (the 3rd occupied slot from the right) and I am pretty sure (but not 100% confident) that it was also the 3rd GPU I installed. Should this mean that it is device #2 (after 0 and 1)? Or is it not that straightforward? Thanks in advance for any suggestions.
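For what it's worth, the only thing I have tried so far is dumping what Windows itself reports for each display adapter. This is just a rough check on my part, the command is my own suggestion rather than anything from the flashing guides, and I do not assume the listing order matches whatever numbering the flash tool uses - the SUBSYS part of the PNP ID is simply what I am hoping will tell the Sapphire board apart from the MSI ones.
Code:
:: list each GPU with its plug-and-play ID; the SUBSYS field should differ per board vendor
wmic path win32_VideoController get Name,PNPDeviceID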
|
|
|
|
philipma1957 (OP)
Legendary
Offline
Activity: 4312
Merit: 8849
'The right to privacy matters'
|
|
December 19, 2016, 12:06:07 AM |
|
So, I am getting ready to strap-mod the BIOS on my 2nd mining rig. However, I realized that I need to know the device number of each of the 5 GPUs in order to do this, since the actual BIOS flash command line specifies the device to flash.
This was not a concern on rig #1 because it has 4 identical MSI 470s in it. However, on rig #2 I have 1 Sapphire Nitro 470 8GB in the mix, so I need to make sure I am flashing the correct cards. It may actually be that the Sapphire takes the same strap mod (more on that later), but I really need to know how to identify which device number Windows has assigned to each of my 5 GPUs.
If someone could point me to a resource on this, or suggest a way to easily identify them on Windows 7, I would appreciate it.
For instance, what determines which device # is assigned? Is it the order you install the cards? The PCIe slot they are plugged into? Or something else?
The PCIe slots on my ASRock BTC R2 mobo are numbered 1-6 from right to left, with the lone x16 slot (currently unoccupied) being slot #2. However, when I go into Device Manager and check the properties for each GPU, it shows only PCI bus 2-6 for the 5 GPUs, meaning nothing shows as PCI bus 1, but I really don't know how that correlates anyway, since I know the device #s I am looking for begin with 0, so I am assuming I should be looking for device #s 0-5, not 1-6. FYI, my lone Sapphire is plugged into PCIe slot #4 (the 3rd occupied slot from the right) and I am pretty sure (but not 100% confident) that it was also the 3rd GPU I installed. Should this mean that it is device #2 (after 0 and 1)? Or is it not that straightforward? Thanks in advance for any suggestions.
Do not be stupid: pull the Sapphire and just flash the other cards (all the same, correct? MSI RX 470s), then flash the Sapphire by itself. And I am really pretty sure you need a different BIOS to flash it, or it's brick city.
|
|
|
|
GabryRox
|
|
December 19, 2016, 12:45:34 AM |
|
So, I am getting ready to strap-mod the BIOS on my 2nd mining rig. However, I realized that I need to know the device number of each of the 5 GPUs in order to do this, since the actual BIOS flash command line specifies the device to flash.
This was not a concern on rig #1 because it has 4 identical MSI 470s in it. However, on rig #2 I have 1 Sapphire Nitro 470 8GB in the mix, so I need to make sure I am flashing the correct cards. It may actually be that the Sapphire takes the same strap mod (more on that later), but I really need to know how to identify which device number Windows has assigned to each of my 5 GPUs.
If someone could point me to a resource on this, or suggest a way to easily identify them on Windows 7, I would appreciate it.
For instance, what determines which device # is assigned? Is it the order you install the cards? The PCIe slot they are plugged into? Or something else?
The PCIe slots on my ASRock BTC R2 mobo are numbered 1-6 from right to left, with the lone x16 slot (currently unoccupied) being slot #2. However, when I go into Device Manager and check the properties for each GPU, it shows only PCI bus 2-6 for the 5 GPUs, meaning nothing shows as PCI bus 1, but I really don't know how that correlates anyway, since I know the device #s I am looking for begin with 0, so I am assuming I should be looking for device #s 0-5, not 1-6. FYI, my lone Sapphire is plugged into PCIe slot #4 (the 3rd occupied slot from the right) and I am pretty sure (but not 100% confident) that it was also the 3rd GPU I installed. Should this mean that it is device #2 (after 0 and 1)? Or is it not that straightforward? Thanks in advance for any suggestions.
Do not be stupid: pull the Sapphire and just flash the other cards (all the same, correct? MSI RX 470s), then flash the Sapphire by itself. And I am really pretty sure you need a different BIOS to flash it, or it's brick city. Well yeah, obviously I could pull that orphan Sapphire as you mention, but I would rather avoid that if possible, for a couple of reasons. Firstly, I had a hell of a time getting 5 GPUs working on this rig, so aside from eventually trying to get a 6th working in that x16 slot, I am not eager to start unplugging and re-plugging cards in case it screws things up again. The other reason is this... I am not even sure how pulling that card would affect my device numbers. For instance, will Windows re-number the GPUs after a reboot based only on the 4 that are then plugged in, meaning they would be 0, 1, 2 and 3? Is Windows really so lame that it doesn't have a utility or easy way to ID which GPU it assigned which device number? It seems more likely that there is a way and I just can't figure it out.
|
|
|
|
ps_jb
|
|
December 19, 2016, 01:33:11 AM Last edit: December 20, 2016, 04:39:20 AM by ps_jb |
|
For instance, what determines which device # is assigned? Is it the order you install the cards? Or the PCIe slot they are plugged into? Or, is it something else?
Try atiflash -i, so you can immediately see the number of the card you want to flash.
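To spell out how I would use it - just a sketch, since the adapter index and file names below are made-up examples and the exact switches can differ between atiflash versions - list the adapters first, back up the stock BIOS, and only then program anything:
Code:
:: list every adapter with its index so you can spot which one is the Sapphire
atiflash -i
:: save a backup of the stock BIOS from adapter 2 (example index only)
atiflash -s 2 sapphire-stock.rom
:: program the modded BIOS onto that same adapter
atiflash -p 2 sapphire-modded.rom
The -i listing should also show enough board info to tell the Sapphire from the MSI cards, but verify the index before writing anything.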
|
|
|
|
philipma1957 (OP)
Legendary
Offline
Activity: 4312
Merit: 8849
'The right to privacy matters'
|
|
December 20, 2016, 01:42:43 PM |
|
I added 3 more MSI RX 470s; the rack is now at 19 of them. I am getting 1 more today, which will put it at 20.
I will post some photos later today.
|
|
|
|
Hotmetal
|
|
December 20, 2016, 01:59:14 PM |
|
I added 3 more MSI RX 470s; the rack is now at 19 of them. I am getting 1 more today, which will put it at 20.
I will post some photos later today.
At what point do you decide to throw everything into an industrial warehouse?
|
|
|
|
citronick
Legendary
Offline
Activity: 1834
Merit: 1080
---- winter*juvia -----
|
|
December 20, 2016, 02:41:27 PM |
|
I added 3 more MSI RX 470s; the rack is now at 19 of them. I am getting 1 more today, which will put it at 20.
I will post some photos later today.
At what point do you decide to throw everything into an industrial warehouse? Yes, pix please - always looking forward to learning from Phil.
A lot of capital cost goes into a warehouse solution... but ultimately that's the best route if you want to go big and churn hard revenue.
My story started when my wife "threw me out" in favor of extra space at home.... so after 7 rigs, me and a few buddies started on our mining project (GPU, BTC, X11, Scrypt), where only the GPU farm is within our full control. The rest of the mining farm, BTC and Scrypt, is hosted in Labrador.
.... but expect to cough up some dough on electricals, cabling, metal racks with wheels, a big working desk, temperature controls/fans/digital gauges/remote relays if possible, CCTV, physical security, RAS/Radmin/TeamViewer servers and a backup server, a big-ass 48-port router and a few smaller routers, broadband, a rig-resetter via remote relay (see Tyatanick's SRR), a small fridge and a microwave oven (trust me, you are gonna need this)..... the list goes on.... but when it is all set up and running almost lights-out, it's all worth it. I only visit my warehouse 3 times a week at most. Everything is remote controlled from home, with 3 x 27" screens on Win10 and my trusty iMac 27" as the main desktop and backup for all the wallets.
|
If I provided you good and useful info or just a smile to your day, consider sending me merit points to further validate this Bitcointalk account ~ useful for future account recovery...
|
|
|
Hotmetal
|
|
December 20, 2016, 02:49:36 PM |
|
Yes, pix please - always looking forward to learning from Phil.
A lot of capital cost goes into a warehouse solution... but ultimately that's the best route if you want to go big and churn hard revenue.
My story started when my wife "threw me out" in favor of extra space at home.... so after 7 rigs, me and a few buddies started on our mining project (GPU, BTC, X11, Scrypt), where only the GPU farm is within our full control. The rest of the mining farm, BTC and Scrypt, is hosted in Labrador.
.... but expect to cough up some dough on electricals, cabling, metal racks with wheels, a big working desk, temperature controls/fans/digital gauges/remote relays if possible, CCTV, physical security, RAS/Radmin/TeamViewer servers and a backup server, a big-ass 48-port router and a few smaller routers, broadband, a rig-resetter via remote relay (see Tyatanick's SRR), a small fridge and a microwave oven (trust me, you are gonna need this)..... the list goes on.... but when it is all set up and running almost lights-out, it's all worth it. I only visit my warehouse 3 times a week at most. Everything is remote controlled from home, with 3 x 27" screens on Win10 and my trusty iMac 27" as the main desktop and backup for all the wallets.
Now, this kind of thing really interests me! Some questions: * How do you figure out which rigs are using what amount of power, since everyone is mining different things? * Tyatanick's SRR? Do you have a link? (I've tried searching for it, no luck.) * At what point can you retire and just live off the profits of the rigs? (Yes, it's a serious question.) * How big is the entire operation? Have pics etc.?
|
|
|
|
citronick
Legendary
Offline
Activity: 1834
Merit: 1080
---- winter*juvia -----
|
|
December 20, 2016, 03:31:08 PM |
|
Yes, pix please - always looking forward to learning from Phil.
A lot of capital cost goes into a warehouse solution... but ultimately that's the best route if you want to go big and churn hard revenue.
My story started when my wife "threw me out" in favor of extra space at home.... so after 7 rigs, me and a few buddies started on our mining project (GPU, BTC, X11, Scrypt), where only the GPU farm is within our full control. The rest of the mining farm, BTC and Scrypt, is hosted in Labrador.
.... but expect to cough up some dough on electricals, cabling, metal racks with wheels, a big working desk, temperature controls/fans/digital gauges/remote relays if possible, CCTV, physical security, RAS/Radmin/TeamViewer servers and a backup server, a big-ass 48-port router and a few smaller routers, broadband, a rig-resetter via remote relay (see Tyatanick's SRR), a small fridge and a microwave oven (trust me, you are gonna need this)..... the list goes on.... but when it is all set up and running almost lights-out, it's all worth it. I only visit my warehouse 3 times a week at most. Everything is remote controlled from home, with 3 x 27" screens on Win10 and my trusty iMac 27" as the main desktop and backup for all the wallets.
Now, this kind of thing really interests me! Some questions: * How do you figure out which rigs are using what amount of power, since everyone is mining different things? * Tyatanick's SRR? Do you have a link? (I've tried searching for it, no luck.) * At what point can you retire and just live off the profits of the rigs? (Yes, it's a serious question.) * How big is the entire operation? Have pics etc.?
1. The Labrador hosting is billed by kWh metering (I don't have to worry about PSUs), so it doesn't matter which miner uses what; I prepay 3 months in advance to get maximum discounts. They get hydro power, so if you have a big-time mining operation you need to go to the cheapest power source to see profits. I get full remote access to the miners and also have my own subnet.
2. SRR - https://simplemining.net/page/simpleRigResetter
3. I retired in Jan 2016 and my group (me and a few close buddies) did some serious math.... begged our wives to allow us to do this project -- LOL -- and we made a handsome return during the early ETH and BTC days. We already had about a 3 GH/s ETH GPU farm back then (thanks to Claymore's magic)... that got us going and also got us more informed about mining in general - just being smart about when to invest and when to hold coins.
4. My group has 108 x S9s and 120 x Avalon 7s.... about a 2 PH farm. Also about a 15 GH/s X11 farm and a small Scrypt farm, 6 x A4s. Let's just say the BTC & GPU farms pay the cheque..... and the X11 & LTC farms pay for the operation (power, rental, etc). So yes, I think me and the group are enjoying our post-retirement nicely and, most importantly, the spouses are happy to see us learning new stuff/tech/friends... keeping us busy instead of bumming around at home.
5. Apologies, my group is paranoid about security and sharing pictures. I am the Administrator for the project while the boys manage the Polo accounts, investments and wallets.
6. I did share some basic pics of the GPU farm in this thread. I have started to streamline the GPU farm into the Biostar Z170 4-PCI riserless format like Phil's - wish I had started like this earlier on -- it could have been so much simpler to manage and cheaper, rather than wasting hours troubleshooting why my 5th or 6th GPU doesn't work.... or why it blew the PSU and whatnot.
|
If I provided you good and useful info or just a smile to your day, consider sending me merit points to further validate this Bitcointalk account ~ useful for future account recovery...
|
|
|
Exsumane
Newbie
Offline
Activity: 30
Merit: 0
|
|
December 20, 2016, 03:42:38 PM |
|
I added 3 more MSI RX 470s; the rack is now at 19 of them. I am getting 1 more today, which will put it at 20.
I will post some photos later today.
Do you add the cards from mining profits, or with extra cash?
|
|
|
|
Hotmetal
|
|
December 20, 2016, 04:57:36 PM |
|
1. The Labrador hosting is billed by kWh metering (I don't have to worry about PSUs), so it doesn't matter which miner uses what; I prepay 3 months in advance to get maximum discounts. They get hydro power, so if you have a big-time mining operation you need to go to the cheapest power source to see profits. I get full remote access to the miners and also have my own subnet.
Hrmm.. Did everyone invest the same amount at the start? Thanks, I can see this coming in handy.
3. I retired in Jan 2016 and my group (me and a few close buddies) did some serious math.... begged our wives to allow us to do this project -- LOL -- and we made a handsome return during the early ETH and BTC days. We already had about a 3 GH/s ETH GPU farm back then (thanks to Claymore's magic)... that got us going and also got us more informed about mining in general - just being smart about when to invest and when to hold coins.
Yeah, my wife is eyeballing me daily. Would you say it's too late to get into mining now?
4. My group has 108 x S9s and 120 x Avalon 7s.... about a 2 PH farm. Also about a 15 GH/s X11 farm and a small Scrypt farm, 6 x A4s. Let's just say the BTC & GPU farms pay the cheque..... and the X11 & LTC farms pay for the operation (power, rental, etc). So yes, I think me and the group are enjoying our post-retirement nicely and, most importantly, the spouses are happy to see us learning new stuff/tech/friends... keeping us busy instead of bumming around at home.
How old are you gents? If you had to do it again, what would you do differently? Aren't the BTC miners a nightmare when they break?
5. Apologies, my group is paranoid about security and sharing pictures. I am the Administrator for the project while the boys manage the Polo accounts, investments and wallets.
Perfectly understandable. What do you use to manage such a large farm?
6. I did share some basic pics of the GPU farm in this thread. I have started to streamline the GPU farm into the Biostar Z170 4-PCI riserless format like Phil's - wish I had started like this earlier on -- it could have been so much simpler to manage and cheaper, rather than wasting hours troubleshooting why my 5th or 6th GPU doesn't work.... or why it blew the PSU and whatnot.
Urgh.. Those cards must get insanely hot when packed in so close together.. Surely that must push the temps up drastically? What PSU are you using for those cards? I personally haven't had any issues with a 5th or 6th card. I'm using a Cooler Master 1200W PSU which is pretty decent.
|
|
|
|
|
|
|