Bitcoin Forum
Author Topic: New AMD APUs... [AMD A8-Series]  (Read 18101 times)
themike5000
Member
**
Offline Offline

Activity: 99
Merit: 10


View Profile
July 07, 2011, 08:37:02 PM
 #21

What software are you using to mine?

GUIMiner 0701

But is GUIMiner taking advantage of the GPU core?

Yes. You can't get 65 MH/s out of a normal CPU.

Vertcoin: VdHjU3L2dcHCR3uQmqpM6mf4LCvp2678wh
CanaryInTheMine
Donator
Legendary
*
Offline Offline

Activity: 2352
Merit: 1060


between a rock and a block!


View Profile
July 29, 2011, 11:09:33 PM
 #22

I have an A8-3850 box,

Running all 4 cores. Getting 65 MH/s.


How are you cooling it? Stock heatsink/fan or something else?

Have you tried overclocking?
KKAtan
Newbie
*
Offline Offline

Activity: 50
Merit: 0


View Profile
July 30, 2011, 04:59:50 PM
 #23

So, the 2.9 GHz version will do 65 MH/s...

What about the 2.6 GHz version?
The CPU speed is irrelevant. What matters is how many shaders there are and how fast they run.

400 shaders at 600 MHz is the strongest Llano you can get.


Running all 4 cores. Getting 65 MH/s.
Best to deactivate the CPU mining and mine only with the graphics shaders.
Overclocking the shaders from 600 MHz should boost your hash rate far more than the CPUs ever could.


If they make a power-efficient
It's VLIW5 on 32 nm SOI. It's far more power efficient than any other graphics card currently in existence.
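
As a rough sanity check on the shaders-times-clock reasoning, here is a minimal back-of-envelope sketch in Python. The linear scaling and the HD 5870 reference point (1600 shaders at 850 MHz doing roughly 400 MH/s) are assumptions of the sketch, not figures from this thread:

# Back-of-envelope SHA-256 hash-rate estimate for VLIW5 GPUs, scaled
# linearly from an assumed reference card (HD 5870 ~ 400 MH/s).
REF_SHADERS, REF_CLOCK_MHZ, REF_MHASH = 1600, 850, 400.0

def estimate_mhash(shaders: int, clock_mhz: int) -> float:
    """Scale the reference rate by shader count times shader clock."""
    return REF_MHASH * (shaders * clock_mhz) / (REF_SHADERS * REF_CLOCK_MHZ)

print(f"A8-3850 (HD 6550D, 400 sh @ 600 MHz): {estimate_mhash(400, 600):.0f} MH/s")  # ~71
print(f"HD 6570 (480 sh @ 650 MHz):           {estimate_mhash(480, 650):.0f} MH/s")  # ~92

That lands close to the 65-68 MH/s people report for the A8 in this thread, which is why overclocking the shaders matters far more than the CPU clock.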
GenTarkin
Legendary
*
Offline Offline

Activity: 2450
Merit: 1002


View Profile
July 30, 2011, 08:19:33 PM
 #24

I have set one of these up and, using cgminer, the A8 can pull off around 68 MH/s. The OS is Win7 x64.
Not too bad!

GenTarkin's MOD Kncminer Titan custom firmware! v1.0.4! -- !!NO LONGER AVAILABLE!!
Donations: bitcoin- 1Px71mWNQNKW19xuARqrmnbcem1dXqJ3At || litecoin- LYXrLis3ik6TRn8tdvzAyJ264DRvwYVeEw
caston
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile WWW
October 05, 2011, 12:24:21 PM
 #25

Could you use this with something like Tenebrix, mining with both the CPU and the shaders?

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 12:39:15 PM
 #26

The shader core is a standard GPU (the HD 6550D); its specs are similar to the GPU on the HD 6570 video card.
It has 400 shaders versus 480, runs at a higher clock, and is more efficient thanks to the 32 nm process, but it is for all intents and purposes a GPU.

It has all the advantages and disadvantages of a GPU. Namely, it has only 64 KB of cache on the GPU side, which is insufficient to run scrypt efficiently. The CPU "side" of the chip is a rather pedestrian CPU; it should perform similarly to any other modern CPU.

So while you "could" use it to mine "CPU friendly" coins, you won't get a massive boost over other CPUs, because the GPU side is effectively "crippled" for that workload. A higher-clocked multi-core chip (like a 6-core Phenom II) would still be superior.
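
To put a number on the cache point, a minimal sketch in Python, assuming the scrypt parameters commonly cited for Tenebrix/Litecoin (N=1024, r=1, p=1):

# scrypt's ROMix scratchpad is 128 * r * N bytes per hash in flight.
N, r, p = 1024, 1, 1
scratchpad_bytes = 128 * r * N
print(scratchpad_bytes // 1024, "KB per concurrent scrypt hash")  # 128 KB

A single hash already needs 128 KB, double the GPU's 64 KB of on-chip memory, and a GPU wants many hashes in flight to hide latency, so the scratchpads spill out to slow global memory.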
caston
Hero Member
*****
Offline Offline

Activity: 756
Merit: 500



View Profile WWW
October 05, 2011, 01:22:33 PM
 #27

The shader core is a standard GPU (the HD 6550D); its specs are similar to the GPU on the HD 6570 video card.
It has 400 shaders versus 480, runs at a higher clock, and is more efficient thanks to the 32 nm process, but it is for all intents and purposes a GPU.

It has all the advantages and disadvantages of a GPU. Namely, it has only 64 KB of cache on the GPU side, which is insufficient to run scrypt efficiently. The CPU "side" of the chip is a rather pedestrian CPU; it should perform similarly to any other modern CPU.

So while you "could" use it to mine "CPU friendly" coins, you won't get a massive boost over other CPUs, because the GPU side is effectively "crippled" for that workload. A higher-clocked multi-core chip (like a 6-core Phenom II) would still be superior.

Thank you. I have been reading up a bit on them and it is quite confusing. You'd think that with AMD's whole Fusion strategy they could let the GPU and the CPU access the same cache. Perhaps that is further down the pipeline.

bitcoin BTC: 1MikVUu1DauWB33T5diyforbQjTWJ9D4RF
bitcoin cash: 1JdkCGuW4LSgqYiM6QS7zTzAttD9MNAsiK

-updated 3rd December 2017
Shevek
Sr. Member
****
Offline Offline

Activity: 252
Merit: 250



View Profile
October 05, 2011, 01:58:05 PM
 #28

It'll be about 92% of a 5570, or about 55 MH/s. Smiley

I have a 5570 and can do 200+ MH/s... Unless 92% means something different now, there is a problem somewhere.

OT

I'm curious about your settings ...

Proposals for improving bitcoin are like asses: everybody has one
1SheveKuPHpzpLqSvPSavik9wnC51voBa
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 02:54:55 PM
 #29

Thank you. I have been reading up a bit on them and it is quite confusing. You'd think that with AMD's whole Fusion strategy they could let the GPU and the CPU access the same cache. Perhaps that is further down the pipeline.

Possibly someday. One thing is for sure: we can expect tighter and tighter integration of CPU and GPU cores. Currently, though, the GPU doesn't have access to the CPU cache.

A high-end "Fusion" chip could be interesting for high-performance computing. Imagine a chip with 4 or 8 x86 cores and a much larger shader core, one with internal high-speed cache and the ability to share the L2 caches of the "traditional" cores. In a 4-socket server that would give you pretty amazing computing density.

Of course, AMD's whole blurring of the line between CPU and GPU will eventually kill the "CPU friendly" block chains. It is a futile endeavor.
MadHacker
Full Member
***
Offline Offline

Activity: 235
Merit: 100



View Profile
October 05, 2011, 04:03:51 PM
 #30

If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
it could dump 3000 watts of heat into a square inch.

You'd just have to figure out how to cool it  Huh
CanaryInTheMine
Donator
Legendary
*
Offline Offline

Activity: 2352
Merit: 1060


between a rock and a block!


View Profile
October 05, 2011, 04:05:36 PM
 #31

If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
it could dump 3000 watts of heat into a square inch.

You'd just have to figure out how to cool it  Huh
with a very cold liquid Smiley
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 04:23:29 PM
 #32

If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
it could dump 3000 watts of heat into a square inch.

You'd just have to figure out how to cool it  Huh

Liquid cooling could handle it. It takes about 4.4 watt-hours to raise one gallon of water by 1 °C.

So to keep the coolant for a 3000-watt chip within 10 °C of ambient, you would need roughly 68 gallons per hour. That sounds like a lot, but it is just over 1 gallon per minute, and a good water-cooling pump has many times that capacity.

Now, to keep the water from heating up you would need a fairly large radiator to dissipate 3 kW of heat, but nothing exotic is required.

To correct what someone said above: the temperature of the water is almost irrelevant, because water is a very effective conductor of heat. "Warm" water cools just about as well as "ice cold" water if your goal is simply to keep temperatures below, say, 60 °C.
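
A worked version of that flow-rate estimate, as a small Python sketch (it assumes pure water and a perfectly effective waterblock):

SPECIFIC_HEAT_J_PER_KG_K = 4186.0   # specific heat of water
LITERS_PER_US_GALLON = 3.785        # 1 L of water ~ 1 kg

def gallons_per_hour(watts: float, delta_t_c: float) -> float:
    """Coolant flow needed to carry `watts` away with a `delta_t_c` temperature rise."""
    kg_per_s = watts / (SPECIFIC_HEAT_J_PER_KG_K * delta_t_c)
    return kg_per_s * 3600 / LITERS_PER_US_GALLON

gph = gallons_per_hour(3000, 10)
print(f"{gph:.0f} gal/h (~{gph / 60:.1f} gal/min)")   # ~68 gal/h, ~1.1 gal/min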


MadHacker
Full Member
***
Offline Offline

Activity: 235
Merit: 100



View Profile
October 05, 2011, 04:54:54 PM
 #33

If AMD would just make GPU chips stackable into arrays, that would be something.
That would be awesome:
get 10 HD 6970 chips stacked together...
it could dump 3000 watts of heat into a square inch.

You'd just have to figure out how to cool it  Huh

Liquid cooling could handle it. It takes about 4.4 watt-hours to raise one gallon of water by 1 °C.

So to keep the coolant for a 3000-watt chip within 10 °C of ambient, you would need roughly 68 gallons per hour. That sounds like a lot, but it is just over 1 gallon per minute, and a good water-cooling pump has many times that capacity.

Now, to keep the water from heating up you would need a fairly large radiator to dissipate 3 kW of heat, but nothing exotic is required.

To correct what someone said above: the temperature of the water is almost irrelevant, because water is a very effective conductor of heat. "Warm" water cools just about as well as "ice cold" water if your goal is simply to keep temperatures below, say, 60 °C.

I still think it would be hard to create a waterblock able to absorb 3000 W of heat from a single square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 05:07:21 PM
 #34

I still think it would be hard to create a waterblock able to absorb 3000 W of heat from a single square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as insulation and "cook" the inner chips.

You could simply have a chip-waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock and then stacked)
CanaryInTheMine
Donator
Legendary
*
Offline Offline

Activity: 2352
Merit: 1060


between a rock and a block!


View Profile
October 05, 2011, 05:13:42 PM
 #35

I still think it would be hard to create a waterblock able to absorb 3000 W of heat from a single square inch.
I think you would then need micro-channels or tubes through the GPU chip itself to remove the heat.

No reason for it to be 3000 W from one square inch. Even if you could do that, the outer chip layers would act as insulation and "cook" the inner chips.

You could simply have a chip-waterblock sandwich:

waterblock
chip
waterblock
chip
waterblock
chip
waterblock
chip
waterblock

or maybe something more like a stacked grid array (4x4 chips under a waterblock and then stacked)


I think I saw something about interleaving chips with liquid-cooling pathways... I think it was IBM?
DeathAndTaxes
Donator
Legendary
*
Offline Offline

Activity: 1218
Merit: 1079


Gerald Davis


View Profile
October 05, 2011, 05:15:27 PM
 #36

It was IBM (and others); however, those designs are at micro scale: water cooling INSIDE the chip, in tiny channels, pumped via electrical impulses within the chip itself. We are likely some years away from that being commercially viable.

IBM does currently sell some water-cooled servers. Some of their high-performance servers use a 32-core Power6 chip. With up to 44 chips per 44U rack (2 per 2U server), that's roughly 1400 cores per standard datacenter rack. To dissipate that kind of thermal load, IBM sells an enterprise-rated water-cooling kit.

Many people don't know that most early computers were water-cooled. Most mainframes were (and some still are) liquid cooled. It was only when power densities dropped that air cooling became viable. In the datacenter, servers are getting smaller, packed more densely per rack, and pulling more power. Using forced air to remove that heat is horribly inefficient and noisy. As power density climbs, we will see more enterprise-grade water cooling.
kokojie
Legendary
*
Offline Offline

Activity: 1792
Merit: 1003



View Profile
October 05, 2011, 05:25:02 PM
 #37

Really? Share your magic then; I have a 5570 and can't even reach 100 MH/s.

It'll be about 92% of a 5570, or about 55 MH/s. Smiley

I have a 5570 and can do 200+ MH/s... Unless 92% means something different now, there is a problem somewhere.

btc: 15sFnThw58hiGHYXyUAasgfauifTEB1ZF6
epenue
Full Member
***
Offline Offline

Activity: 136
Merit: 100


View Profile
October 21, 2012, 09:23:00 PM
 #38

Trying to mine with one of these (3870K) and it's just impossible;

I get 3 MH/s, so obviously the GPU is doing nothing.

The problem, I think, is that I mine on Windows XP x64. I have installed the latest SDK available, 2.3, but with GUIMiner (OpenCL) I get only 3 MH/s, and when I try to mine from the command line I get this error:


C:\guiminer>poclbm
Traceback (most recent call last):
  File "poclbm.py", line 48, in <module>
pyopencl.LogicError: clGetPlatformIDs failed: invalid/unknown error code


All the help I can find about that error is for Linux-based systems.

Any help?

Thanks,
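
One quick thing to check, as a minimal diagnostic sketch (assuming pyopencl is importable): list whatever OpenCL platforms the driver exposes. If this raises the same LogicError, the AMD OpenCL runtime/ICD isn't visible to the process at all, which points to a driver/SDK install problem rather than anything in poclbm:

# List the OpenCL platforms and devices the installed driver exposes.
import pyopencl as cl

try:
    platforms = cl.get_platforms()          # the same call that fails in poclbm
except cl.LogicError as err:
    print("No OpenCL platform registered:", err)
else:
    for plat in platforms:
        print("Platform:", plat.name)
        for dev in plat.get_devices():
            print("  Device:", dev.name)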
legolouman
Hero Member
*****
Offline Offline

Activity: 504
Merit: 500


Decent Programmer to boot!


View Profile
October 21, 2012, 09:25:50 PM
 #39

Are we still debating?

The A4-3400 gets about 100 MH/s. I have firsthand experience using them.

If you love me, you'd give me a Satoshi!
BTC - 1MSzGKh5znbrcEF2qTrtrWBm4ydH5eT49f
LTC - LYeJrmYQQvt6gRQxrDz66XTwtkdodx9udz
superfastkyle
Sr. Member
****
Offline Offline

Activity: 437
Merit: 250


View Profile
October 22, 2012, 03:54:41 AM
 #40

Has anyone got an A10 to test yet? I believe on the SHA-256 benchmarks it was about 3 times as fast as the A8, so I was hoping it may be worth trying if I build new GPU-based systems.