The RaspberryPi is supposed to be around 5 watts. Most Mini-ITX boards will be in the 50 to 75 watt range. Oh, the RaspberryPi also runs a proper linux distro... Debian
50-70W? For an Atom or VIA board? In what universe?
|
|
|
Had the same thing once, pulling the plug + 10 min wait magically fixed it.
|
|
|
+1 for MSI 890FXA-GD70 3 cards in standard cases with airspace, can do 6 cards with risers, 6-pin connector onboard so no need for powered risers. And +1 on "don't put cards right next to each other, unless you like overheating cards and dead fans".
|
|
|
...
Well, if that's your guess... I could tell you that that's *very* unlikely, but... meh.
|
|
|
I'd stay away from Scythe UKs; got a box full of 'em, sleeve bearings shot after a few months @ 100%. Btw, the OEM of the Scythe UK line is Young Lin Tech Co. If you're looking for decent rad fans in the size/power category of a UK3K: Delta AFB1212HHE, Panaflo FBA12G12U... or really pretty much anything 120x120x38 from a reputable mfg. with ball bearings and 5-7W.
|
|
|
...
Yeah, a FF784 package on a FF780 footprint. Right. While we're making random shit up: EP3SL70 or 110 (can't see any way to fit 120 rounds in an SL50; even 64 single-stage rounds is pushing it).
|
|
|
Can't set anything lower; when I try, it *shows* lower but jumps back to 1375 (obvious in power consumption @ 925 core).
|
|
|
So we should subtract 139W from the total power number to figure what the card is using?
I'd subtract 108W or so; that should give a decent estimate of real card power at the wall, assuming a 92% efficient PSU.
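The subtraction can be sketched like this (a rough estimate only; the ~108W non-card baseline and 92% PSU efficiency are the assumptions from the post above, and the function name is mine):

```python
PSU_EFF = 0.92         # assumed PSU efficiency
BASELINE_WALL_W = 108  # assumed non-card system draw at the wall

def card_power(total_wall_w, at_wall=True):
    """Rough estimate of the card's share of a wall-power reading."""
    card_wall = total_wall_w - BASELINE_WALL_W
    # DC-side draw = wall draw * PSU efficiency
    return card_wall if at_wall else card_wall * PSU_EFF

print(card_power(369))                 # 261 -> ~261W at the wall for the card
print(card_power(369, at_wall=False))  # ~240W DC-side
```

Treat the 108W baseline as a guess; measuring your own system's idle draw gives a better number.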
|
|
|
*shame* Disregard those results; I somehow managed to completely botch it.
I think I mixed up hashrates, clocks and power measurements from different sets of runs, or skipped lines somewhere. Great, have to throw all the data out and start over :/
Phenom II 1100T @ 4.0GHz, cpu freq scaling disabled
idle, display off: 122W
idle @ desktop: 139W
phoenix 1.7.3, -k poclbm AGGRESSION=10 WORKSIZE=256

stock V @ 50% fan:
925 core/1375 mem, 554 Mh/s, 369W
925 core/1020 mem, 554 Mh/s, 360W
1070 core/1020 mem, 640 Mh/s, 386W
1170 core/1020 mem, 698 Mh/s, 407W

1.20V @ 55% fan:
1170 core/1020 mem, 698 Mh/s, 421W
1200 core/1020 mem, 714 Mh/s, 428W
1240 core/1020 mem, 736 Mh/s, 431W

1.25V @ 70% fan:
1240 core/1020 mem, 736 Mh/s, 467W
1270 core/1020 mem, 753 Mh/s, 476W

1.30V @ 80% fan:
1270 core/1020 mem, 753 Mh/s, 509W
1350 core/1020 mem, 799 Mh/s, 525W
Not sure how I got 695/759/804; guess that was a run at 1375 mem.
|
|
|
Now, after massive thread derailment... back to the OP's question. Stock vs. stock, 2*6870 easily beats a 6970: 2*~290 = ~580 Mh/s vs. ~390 Mh/s for a 6970.
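The quoted math as a trivial sanity check (the per-card figures are the approximate stock hashrates from the post, not measurements of mine):

```python
mh_6870 = 290  # ~stock Mh/s for one 6870 (approximate figure from the post)
mh_6970 = 390  # ~stock Mh/s for one 6970

dual = 2 * mh_6870
print(dual, mh_6970, round(dual / mh_6970, 2))  # 580 390 1.49 -> ~49% faster
```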
|
|
|
Well, IIRC the GPU-Z shader # is simply based off a hardcoded table of device IDs, so that's no big help either. Atm I'm out of ideas on how to figure out what is really going on there; it just seems *very* odd that your 5850 is 11% faster than what everyone else is reporting, and 111% just happens to be very close to 1600/1440.
|
|
|
snip
Yup... looks like you have a 5870 there, clocking away at the normal 430Mh/s at 930 core.
Yeah, but the PCI device ID says it's supposed to be a 5850...
Honestly not sure WTF to make of that; all I get is "congratulations, you're the proud owner of a 5850 with 1600 enabled shaders."
|
|
|
930 core. under 70C.
430Mh/s at 930 core on a 58*50*? That'd be 335Mh/s at stock clocks on a 5850. Or 436Mh/s on a stock 5870. So... why is everyone else reporting numbers *way* below that?

ArtForz, how do you calculate these formulas and stuff? I look and don't understand anything. LOL How do you get the Mhash/s value from the core and shaders the card has?

It's rather simple to convert between cards of the same family; mining scales pretty much 100% with #shaders * clock.
430Mh/s / 930MHz / 1440 shaders = 0.000321087 hash/shader/clock
0.000321087 hash/shader/clock * 850 MHz = 272923.95 hash/shader/second
272923.95 hash/shader/second * 1600 shaders = 436678320 hash/sec
So to convert, it's generally simply hashrateA / clockA / #shadersA * clockB * #shadersB.
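The worked example above, wrapped into a small helper (a sketch; the function name is mine, and the 5850/5870 figures are the ones quoted in the post):

```python
def convert_hashrate(rate_a, clock_a, shaders_a, clock_b, shaders_b):
    """Scale a known hashrate from card A to card B within a GPU family.

    Mining scales pretty much 100% with #shaders * clock, so
    hash/shader/clock is treated as a constant across the family.
    """
    per_shaderclock = rate_a / clock_a / shaders_a
    return per_shaderclock * clock_b * shaders_b

# The 430 Mh/s @ 930 MHz result on 1440 shaders, rescaled to
# a stock 5870 (1600 shaders @ 850 MHz):
rate = convert_hashrate(430e6, 930e6, 1440, 850e6, 1600)
print(rate / 1e6)  # ~436.7 -- matches the stock-5870 figure above
```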
|
|
|
Can't remember who it was but some dude got them at around 720 or 700 mhash/s !
But then the power consumed makes your MH/W ratio suck !
1-up'd! Max @ stock V: 1070 core, 695Mh/s. @ 1.20V: 1170 core, 759Mh/s. And for shits and giggles... @ 1.25V: 1240 core, 805Mh/s.
Tip: try phoenix with -k poclbm AGGRESSION=10 WORKSIZE=256

Any advice on better settings with Diablo?

Nope, haven't found any better settings than what other people have been saying for Diablo; the best I found was ~2.2% worse than phoenix.
|
|
|
930 core. under 70C.
430Mh/s at 930 core on a 58*50*? That'd be 335Mh/s at stock clocks on a 5850. Or 436Mh/s on a stock 5870. So... why is everyone else reporting numbers *way* below that?
|
|
|
Just tried it for shits and giggles... 7970 primary, 6870 secondary, with the 7970 drivers from AMD's site for Windows and 11.12 for Linux. Works.
|
|
|
Can't remember who it was but some dude got them at around 720 or 700 mhash/s !
But then the power consumed makes your MH/W ratio suck !
1-up'd! Max @ stock V: 1070 core, 695Mh/s. @ 1.20V: 1170 core, 759Mh/s. And for shits and giggles... @ 1.25V: 1240 core, 805Mh/s.
Tip: try phoenix with -k poclbm AGGRESSION=10 WORKSIZE=256

So what is the consensus? Are GCN shaders more or less or the same in terms of mining compared to the VLIW4 or VLIW5 shaders from the 6970 and 5870?

Quick calc says we're already a few % higher hashrate/shader/clock than VLIWx. I'd have to take a closer look at what GCN shader ASM looks like and what engine performance is like, but I wouldn't expect any miracles; maybe another 2-3% increase. So my opinion so far: "pretty much a wash, a hair better than 5xxx per shader per clock, may get another few % when more time is spent tuning kernels for GCN". Mining is really unique in how it manages to get well over 90% of peak throughput even on VLIW5.
|
|
|
My understanding is that 2D acceleration was made possible with support from AMD. My point was that adding OpenCL to open source drivers would require AMD assistance (assistance they have so far been unwilling to provide) not that it would be impossible.
Now that I can agree with. The work required to add OpenCL support is pretty massive; AFAIK there's pretty much 0 done on an open source OpenCL -> VLIWx shader ASM compiler, and several other large parts of the puzzle are also completely missing (runtime, runtime/OpenGL/driver integration, ...). But saying that the open source driver can't use shaders is wrong. Now, for something like a dedicated miner for 5/6xxx you'd "only" have to handcraft a kernel for each arch in VLIWx ASM (fully documented in public AMD docs, btw) and hack the radeon driver so it allocates input/output regions and loads/runs your shader program on command. IIRC there's still an open 200BTC bounty for a miner working like that, but it's a *lot* of work for very little benefit.
|
|
|
Can't remember who it was but some dude got them at around 720 or 700 mhash/s !
But then the power consumed makes your MH/W ratio suck !
1-upd! max @ stock V: 1070 core, 695Mh/s @ 1.20V: 1170, 759Mh/s and for shits and giggles... @ 1.25V: 1240 core, 805Mh/s Tip: try phoenix with -k poclbm AGGRESSION=10 WORKSIZE=256
|
|
|
Without AMD support it is unlikely any Open Source driver that takes advantage of shaders will ever exist.
Err, what? Take a look at the Xorg radeon driver and how it does several parts of 2D acceleration on 5xxx and 69xx.
|
|
|
|