It will even out in the end, so don't worry ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif)
|
|
|
You are using them on a 5850, right? What clocks are you running them at? I'm running one at 900 and the other at 850.

On a side note, as I was switching cards on a different rig the other day, I removed a 6-pin PCIe plug (no adapter) from one of my older PSUs. It has a white PCIe connector, and it had a brown tint near one of the pins. Nothing too serious, but it had clearly gotten hot. This PSU was powering a 5870 @ 1 GHz. So I began wondering whether underpowered PSUs drop the voltage too much, increasing the amperage, but it measured 11.7 V under load. So I'm not sure what's causing it ![Huh](https://bitcointalk.org/Smileys/default/huh.gif) I do know I don't like it one bit, and that I wish all connectors were white so you could actually see if something is getting too hot, or has ever gotten too hot. There is no way to tell with these black connectors until it's likely too late. My new PSU can't arrive soon enough... though I'm not sure this really is a PSU problem.

The 5850s are at 970 and 1.17 V. What PSU was it? If the PCIe plug gets hot, then there's clearly something wrong; even the best PSUs can break.
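For what it's worth, a quick back-of-envelope on the voltage-sag theory. The 200 W figure below is a made-up round number for what the card might pull through the 12 V connectors, just to show the shape of the effect:

```python
# Sketch: why a sagging 12 V rail raises the current a PCIe plug must carry.
# Power through the connector stays roughly constant, so I = P / V.
def rail_current(power_w, rail_v):
    """Current in amps the connector carries for a given power draw."""
    return power_w / rail_v

nominal = rail_current(200, 12.0)   # ~16.7 A at nominal voltage
sagging = rail_current(200, 11.7)   # ~17.1 A at the measured 11.7 V
print(f"at 12.0 V: {nominal:.1f} A, at 11.7 V: {sagging:.1f} A")
```

A 0.3 V sag only adds about 0.4 A, so sag alone probably doesn't explain a scorched pin; a high-resistance contact at the connector itself seems a likelier culprit.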
|
|
|
Keep using them at your own peril. At the very least check them.
Yes I did, several times; the first time when you created this topic. They haven't melted/burned so far and aren't hot.
|
|
|
I have been running overvolted Sapphire Extreme HD5850s with this kind of molex adapter for months without problems.
Don't take my word for it, do your own math: the spec clearly says 1.5 A per contact, and 3 contacts equal 4.5 A. You did notice this is a SATA-to-molex adapter, not a molex-to-PCIe adapter, right? Sorry, I think I need some sleep; I meant molex-to-PCIe adapters, but with a SINGLE molex, as this is what Sapphire supplied with their 5850 Extreme cards. Not sure, but I think the reason P4man had his connectors burn was either: a) he was unlucky enough to stumble upon an adapter with congenital faults, or b) there are numerous companies producing these adapters and some are of worse quality than others. So far I have only dared to connect an X600XT with a SATA adapter ![Wink](https://bitcointalk.org/Smileys/default/wink.gif)
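Using the figures quoted above (1.5 A per contact, 3 contacts on the 12 V side), the adapter's headroom works out like this:

```python
# Back-of-envelope from the quoted spec: 1.5 A per molex contact,
# with 3 contacts carrying current through the adapter.
AMPS_PER_CONTACT = 1.5
CONTACTS = 3
RAIL_V = 12.0

max_amps = AMPS_PER_CONTACT * CONTACTS   # 4.5 A total through the adapter
max_watts = max_amps * RAIL_V            # 54 W deliverable at 12 V

print(f"{max_amps} A -> {max_watts} W on the 12 V rail")
```

54 W is not much margin for a heavily overclocked card leaning on a single-molex adapter, which fits the "some adapters cook, some don't" experience in this thread.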
|
|
|
That's the impression I got too, which is why I won't work on it without some hardware to test with ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) Would remote access, as suggested by bulanula above, be fine? The cards I have are a 5830, 5850s, 5870s, a 6770, and a non-reference 6850.
|
|
|
Are you going to use those 580s for mining only, 24/7? It's a waste of electricity; you would do better selling them and buying HD 58xx or 5970 cards ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)
|
|
|
The temp shown is whatever the card sends in response to a request for "temp_0" or something like that, which is all most GPUs have. If you'd like something else, describe it in detail and I'll be happy to give you a quote for the work.
I'd just like to see temperatures from several sensors, like in GPU-Z, which reads temps from 3 or 4 different sensors (core, memory, VRM?); of course it depends on the card, and dual-GPU cards have more. The reason I'm asking is that core temps have usually been the lowest on my cards, with memory 2-12 C warmer. If my core temp is 80 C and mem temp is 82 C, that's fine with me, but if my core is at 80 C and memory at 90 C+, then I'm not that happy ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif) If it's doable for you, how big of a bounty should I collect? ;] I think people with dual-GPU cards would also be interested.
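The check I do in my head can be sketched like this. The sensor names, readings, and the 8 C threshold are all made up for illustration; the real values would come from whatever tool exposes the extra sensors (GPU-Z on Windows, for example):

```python
# Hypothetical sketch: flag a card whose memory sensor runs much hotter
# than the core sensor, since core temp alone can look deceptively fine.
def check_card(temps, max_delta=8):
    """temps: dict of sensor name -> degrees C. Warn if mem runs hot vs core."""
    delta = temps["mem"] - temps["core"]
    if delta <= max_delta:
        return "ok"
    return f"memory {delta} C hotter than core"

print(check_card({"core": 80, "mem": 82}))   # ok
print(check_card({"core": 80, "mem": 91}))   # memory 11 C hotter than core
```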
|
|
|
Not sure about the powered PCIe part, but if you are planning to boot off a USB stick, I'd go with a different brand of motherboard, as I have heard there are problems with Gigabyte boards when booting from the USB ports.
I have a Gigabyte mobo running BAMT from an 8 GB QPI (that's the brand name) USB stick. It's a GA-965-S3.
|
|
|
OK, I experimented with affinity. Oddly enough, it does not change the hashrate of AMD cards. It did, however, drop my Nvidia's rate from 130 MH/s or so to 6 MH/s when I set affinity to one core. No idea why.
nVidia cards are more CPU-heavy
|
|
|
Is there any way to read from more than one sensor on a card? The temperature shown in gpumon is from the GPU core, right?
|
|
|
Now, CPU-wise, OK, I might be $120 over compared to getting cheapo single cores. When I bought an X3 and saw the two cards peg it to 100%, I wasn't sure whether it was just a bug or whether the CPU really needed to be that powerful to keep the cards fed. I now think it's the former, and if a single core really is good enough to mine with, that's good to know.
The CPU usage bug depends on the cards, drivers, and system you're using. If you are using Linux you will probably be able to avoid it, but if Windows is a must, then in the Task Manager you can change the CPU affinity for your miner and make it run on only one core, so CPU usage falls to 50% (assuming no other CPU-intensive apps are running in the background). On the 12.1 drivers with the 2.6 SDK I noticed lower hashrates when the miner was using 100% of core0 on a Q9300 @ 1.8 GHz; only when I let it use all 4 cores would it return to 'normal' speed. No such issue on the previous drivers (11.5, 2.3 SDK).
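On Linux the same pinning can be done from the shell (`taskset -c 0 ./miner`) or from Python with the standard library. A minimal, Linux-only sketch:

```python
# Linux-only sketch: restrict the current process (and anything it spawns)
# to CPU core 0, mirroring Task Manager's affinity setting on Windows.
# os.sched_setaffinity is the Python 3.3+ wrapper for sched_setaffinity(2).
import os

os.sched_setaffinity(0, {0})       # pid 0 means "this process"
print(os.sched_getaffinity(0))     # -> {0}: the cores we may now run on
```

If a miner wrapper script does this before launching the worker threads, the children inherit the mask.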
|
|
|
Teh-Real-Link... very neat and nice, but are you using these PCs for anything aside from mining? If the answer is no, you could have saved a ton of money by:
1) buying second-hand hardware
2) using Linux (if you are inexperienced with non-Windows systems, BAMT is the best way to go for mining; you can start mining within a quarter of an hour at most)
3) a cheap 1- or at most 2-core low-end CPU with a box cooler
4) no case, or an improvised one (can cost $0-10)
5) less RAM (2 GB for Win7 64-bit, 1 GB for other OSes)
6) skipping LCDs, keyboards, HDDs (you can use a 4 GB memory stick), etc.
7) cheaper motherboards (but not the cheapest; best if there is at least an ATX 24-pin connector and an EPS12V 8-pin): something with 1-2 PCIe x16 slots and 3-4 PCIe x1 slots + extenders = saved $$$ and lower temperatures.
|
|
|
What are the VRM temps at stock voltage and core clock?
|
|
|
Thanks for clarifying, but I'm still not sure how much load my circuit can sustain; I saw 16 A labels on my sockets, so I'll have to ask the owner of the house ![Smiley](https://bitcointalk.org/Smileys/default/smiley.gif) BTW, I made a mistake earlier: 16 A, not 15 A.
|
|
|
Squatting domains is [more or less] ILLEGAL, and at the very least there's no doubt that it's morally unacceptable.
ROTFL
|
|
|
To people with 30 or 60 A 240 V circuits... I'm not sure I understood: does that mean you can draw up to 7.2 kW or 14.4 kW from one socket, or from all the sockets in one room (on the same circuit)?
I can draw at most 15 A × 230 V from each of my sockets (= 3.45 kW), but I'm not sure it would be all right to load all the sockets in my room (or house) like that. Not really a problem with 2.2 GH/s now (~1.3 kW, I think); maybe up to 3 GH/s by the end of the month.
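The arithmetic is just watts = amps × volts, and the limit is set per circuit breaker, so every socket on the same circuit shares it; a quick check:

```python
# Breaker capacity: all sockets on one circuit share the same amp limit.
def circuit_watts(amps, volts):
    """Maximum continuous draw (W) the breaker nominally allows."""
    return amps * volts

print(circuit_watts(30, 240))   # 7200  W on a 30 A / 240 V circuit
print(circuit_watts(60, 240))   # 14400 W on a 60 A / 240 V circuit
print(circuit_watts(15, 230))   # 3450  W on a 15 A / 230 V circuit
```

Note that some electrical codes derate continuous loads (e.g. the US NEC's 80% rule for loads running 3+ hours), so a 24/7 mining rig should stay well under the nominal breaker figure.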
|
|
|
But type oclvanitygen so you can use your GPU (a lot faster).
|
|
|
What's the average user's hashing power? Still curious ![Tongue](https://bitcointalk.org/Smileys/default/tongue.gif)
|
|
|
Currently they don't; I think that is why he said "when possible". There is no technical reason why, under 64-bit drivers (and their petabyte-sized VM space), a system couldn't support 9+ GPUs.
Missed that. But I doubt AMD will lift this limit in the foreseeable future; people using more than 2-4 GPUs are already a niche.
|
|
|
|