Stinky_Pete
|
|
September 05, 2013, 08:29:29 PM |
|
But apart from the above, it's good to see the community working with mtrlt to debug and improve the program - and no sign of the pitchfork-wavers.
|
|
|
|
ReCat
|
|
September 05, 2013, 08:31:56 PM |
|
I'd be offended if I were offered only $27/hour for a professional job. Yes, it is quite underpaid considering the work being done. He seems to be satisfied, though, as he isn't asking for any more donations. Of course, that rate assumes he works on it full time.
|
BTC: 1recatirpHBjR9sxgabB3RDtM6TgntYUW Hold onto what you love with all your might, Because you can never know when - Oh. What you love is now gone.
|
|
|
bcp19
|
|
September 05, 2013, 10:17:01 PM |
|
It seems like memory corruption, the most annoying kind of bug. Might take some time to fix, I haven't been able to reproduce it on my computer.
This is a serious problem that may not have a fix. I have a GPU that I still have running on SHA-256 because it starts throwing massive errors if I try to use the same intensity on scrypt coins. I don't think this is a GPU error, but rather an error in the CPU code somewhere. The reason I brought it up is that several people had memory problems on relatively new cards, and one of the posters explained the tolerances of the GDDR5 memory used in GPUs as compared to the DDR3 used in computers. http://www.mersenneforum.org/showthread.php?t=17598 is the initial discussion. The answer given to the question "Why is GPU memory weaker than RAM?" was:

That is normal for video cards. I have commented on this a few times already: the video card industry is the only one (of all the electronics-related branches) that accepts memory with "some percent" of unstable or bad bits. That is done to make cheap cards for people like you and me, and it is accepted because your eyes will see no difference (even with specialized instruments it is difficult to detect) between a pixel on the screen which is Green-126 and one which is Green-127.

Your monitor even has only 18 physical "wires" (6 for each of Red, Green, and Blue) through which the color is transmitted, so it CAN'T show more than 2^18 = 262144 colors, and your eyes may see more than 4000 only if you are a professional photographer or painter (or a woman, hehe; my wife always tells me that I only see 16 colors, and indeed, for me "peach" is a fruit, not a color, and "rose" is a flower). So, displaying 24-bit or even 32-bit color exists just because old processors used a full byte for each of R/G/B, or because new processors can operate faster on 32-bit registers. But of the 8 bits of red, only the 6 most significant go to the monitor. When you use your monitor in 16-bit mode, there are 5 lines for red and blue, and 6 for green (as the human eye is more sensitive to green); the rest of the color "bits" are "wasted" anyhow (physically, the lines are not connected to the LCD controllers; I am not talking about serialization, LVDS, and the other stuff your card uses to send signals to your monitor, only about what happens at the LCD controller level).

That is why a Tesla C/M2090 is 4-6 times the price of a GTX 580: there is no difference between them except the ECC memory and intensive production testing for endurance.
--------------------------------------------------------------------------------
One possible remedy postulated is downclocking the memory.
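The bit arithmetic in the quoted post is easy to check for yourself; here is a minimal sketch (plain C, purely illustrative, not from the linked thread) showing why Green-126 and Green-127 collapse to the same 6-bit panel level, and where the 2^18 = 262144 figure comes from.
Code:
#include <stdio.h>

/* An 18-bit panel keeps only the 6 most significant bits of each
   8-bit channel, so adjacent 8-bit values can display identically. */
int main(void)
{
    printf("Green-126 -> panel level %d\n", 126 >> 2); /* 31 */
    printf("Green-127 -> panel level %d\n", 127 >> 2); /* also 31 */
    printf("6 bits x 3 channels = %d colors\n", 1 << 18); /* 262144 */
    return 0;
}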
|
I do not suffer fools gladly... "Captain! We're surrounded!" I embrace my inner Kool-Aid.
|
|
|
Vorksholk
Legendary
Offline
Activity: 1713
Merit: 1029
|
|
September 06, 2013, 04:01:14 AM |
|
I've made the primality tester 4-5x faster, but I haven't managed to fix the "fractional assert" problem. Should I just release the faster version (many people won't be able to use it) or try to fix the bug first (will take time)?
Personally I think releasing that version right now sounds awesome.
|
|
|
|
YipYip
|
|
September 06, 2013, 05:21:56 AM |
|
I've made the primality tester 4-5x faster, but I haven't managed to fix the "fractional assert" problem. Should I just release the faster version (many people won't be able to use it) or try to fix the bug first (will take time)?
Personally I think releasing that version right now sounds awesome.
Fractional assert problem... can it be bypassed by the OS or other workarounds? If so, then release it...
|
OBJECT NOT FOUND
|
|
|
ivanlabrie
|
|
September 06, 2013, 08:43:28 AM |
|
Yeah, if it works on Linux, release the kraken :p
|
|
|
|
DeaDTerra
Donator
Legendary
Offline
Activity: 1064
Merit: 1000
|
|
September 06, 2013, 10:31:03 AM |
|
I've made the primality tester 4-5x faster, but I haven't managed to fix the "fractional assert" problem. Should I just release the faster version (many people won't be able to use it) or try to fix the bug first (will take time)?
I would of course prefer it if you fixed the bug I am having first, but then again, I am biased. //DeaDTerra
|
|
|
|
jammertr
Member
Offline
Activity: 100
Merit: 10
|
|
September 06, 2013, 11:20:21 AM |
|
I've made the primality tester 4-5x faster, but I haven't managed to fix the "fractional assert" problem. Should I just release the faster version (many people won't be able to use it) or try to fix the bug first (will take time)?
I would of course prefer it if you fixed the bug I am having first, but then again, I am biased. //DeaDTerra
+1 bug fix
|
|
|
|
techbytes
Legendary
Offline
Activity: 1694
Merit: 1054
Point. Click. Blockchain
|
|
September 06, 2013, 12:47:04 PM |
|
I've made the primality tester 4-5x faster, but I haven't managed to fix the "fractional assert" problem. Should I just release the faster version (many people won't be able to use it) or try to fix the bug first (will take time)?
Bug fix first, or hear people bitching... (yip-yip hooray...) -tb-
|
|
|
|
refer_2_me
|
|
September 06, 2013, 06:04:26 PM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
|
BTC: 1reFerkRnftob5YvbB112bbuwepC9XYLj XPM: APQpPZCfEz3kejrYTfyACY1J9HrjnRf34Y
|
|
|
wyldfire
Newbie
Offline
Activity: 23
Merit: 0
|
|
September 06, 2013, 07:35:20 PM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
I doubt it's a priority. With NVIDIA's poor integer performance, it seems like it's not even worth it. Maybe it would be if the GPU miner could hit ~20x the performance of typical CPUs, but as it stands it's not really there yet. But yeah, there's definitely a way to eliminate the dependency on OCL 1.2. You just need to find someone motivated enough to do it.
|
|
|
|
ReCat
|
|
September 06, 2013, 07:44:25 PM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
NVIDIA is poor at doing anything GPGPU. Forget about it ever running there. Even if it does run, performance will be unbearable.
|
BTC: 1recatirpHBjR9sxgabB3RDtM6TgntYUW Hold onto what you love with all your might, Because you can never know when - Oh. What you love is now gone.
|
|
|
wyldfire
Newbie
Offline
Activity: 23
Merit: 0
|
|
September 06, 2013, 07:51:43 PM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
NVIDIA is poor at doing anything GPGPU.
Wha..?! No way! NVIDIA has a huge advantage over AMD in many aspects. Just look at how well their software works compared w/AMD's. You still need an X server running to do computation with AMD GPUs and that totally blows.
NVIDIA made a poor (IMO) strategic decision by abandoning OCL but you still have to give them the credit for creating it! I think they were afraid to abandon their early adopter CUDA customers and decided they didn't have the throughput to support both.
I think eventually they'll reverse their position on OCL. But to a lot of folks doing GPGPU they don't care about OCL and they're using CUDA and loving it. So it's not fair to say "NVIDIA is poor at doing anything GPGPU" IMO.
|
|
|
|
ReCat
|
|
September 06, 2013, 09:28:34 PM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
NVIDIA is poor at doing anything GPGPU.
Wha..?! No way! NVIDIA has a huge advantage over AMD in many aspects. Just look at how well their software works compared w/AMD's. You still need an X server running to do computation with AMD GPUs and that totally blows.
NVIDIA made a poor (IMO) strategic decision by abandoning OCL but you still have to give them the credit for creating it! I think they were afraid to abandon their early adopter CUDA customers and decided they didn't have the throughput to support both.
I think eventually they'll reverse their position on OCL. But to a lot of folks doing GPGPU they don't care about OCL and they're using CUDA and loving it. So it's not fair to say "NVIDIA is poor at doing anything GPGPU" IMO.
OpenCL trademarks belong to Apple Corp. I don't think Nvidia made OpenCL. They might be good at GPGPU, but only on the GPUs that specialize in it, i.e. their Tesla series. The consumer GPUs they make aren't as good... but they are also the vast majority. Idk. All I know is that the GPGPU software I've seen out there runs tons faster on ATI cards than it does on NVIDIA cards.
|
BTC: 1recatirpHBjR9sxgabB3RDtM6TgntYUW Hold onto what you love with all your might, Because you can never know when - Oh. What you love is now gone.
|
|
|
wyldfire
Newbie
Offline
Activity: 23
Merit: 0
|
|
September 06, 2013, 10:02:40 PM |
|
Wha..?! No way! NVIDIA has a huge advantage over AMD in many aspects. Just look at how well their software works compared w/AMD's. You still need an X server running to do computation with AMD GPUs and that totally blows.
NVIDIA made a poor (IMO) strategic decision by abandoning OCL but you still have to give them the credit for creating it! I think they were afraid to abandon their early adopter CUDA customers and decided they didn't have the throughput to support both.
I think eventually they'll reverse their position on OCL. But to a lot of folks doing GPGPU they don't care about OCL and they're using CUDA and loving it. So it's not fair to say "NVIDIA is poor at doing anything GPGPU" IMO.
OpenCL trademarks belong to Apple Corp. I don't think Nvidia made OpenCL. They might be good at GPGPU, but only on the GPUs that specialize in it, i.e. their Tesla series. The consumer GPUs they make aren't as good... but they are also the vast majority. Idk. All I know is that the GPGPU software I've seen out there runs tons faster on ATI cards than it does on NVIDIA cards.
Yeah, Apple owns the trademarks because they're the ones who brought everyone to the table. Apple loved CUDA but isn't dumb enough to sole-source any of their parts. So they told NVIDIA and ATI that they should all play nice and standardize CUDA. OpenCL was the result. It's only barely different from CUDA. The biggest differences are primarily in making CUDA fit a programming model similar to the shaders already used in OpenGL. NVIDIA wanted to win a contract with Apple and they had a huge headstart on the competition. AMD's Brook and CAL/IL were mostly a flop, so they would happily jump onboard with a Khronos standard.
If you look just at hashing (and now prime number computation), you're missing a much bigger part of the GPGPU marketplace. Most of the GPGPU customers (in terms of units purchased) are running floating-point computations on enormous matrices and using the interpolation hardware. They're used in scientific applications, oil & gas, medical stuff, etc. In those applications, NVIDIA does very well, often better than AMD.
|
|
|
|
ivanlabrie
|
|
September 06, 2013, 11:01:29 PM |
|
Thanks, didn't know how that started
|
|
|
|
ReCat
|
|
September 06, 2013, 11:27:48 PM |
|
Wha..?! No way! NVIDIA has a huge advantage over AMD in many aspects. Just look at how well their software works compared w/AMD's. You still need an X server running to do computation with AMD GPUs and that totally blows.
NVIDIA made a poor (IMO) strategic decision by abandoning OCL but you still have to give them the credit for creating it! I think they were afraid to abandon their early adopter CUDA customers and decided they didn't have the throughput to support both.
I think eventually they'll reverse their position on OCL. But to a lot of folks doing GPGPU they don't care about OCL and they're using CUDA and loving it. So it's not fair to say "NVIDIA is poor at doing anything GPGPU" IMO.
OpenCL trademarks belong to Apple Corp. I don't think Nvidia made OpenCL. They might be good at GPGPU, but only on the GPUs that specialize in it, i.e. their Tesla series. The consumer GPUs they make aren't as good... but they are also the vast majority. Idk. All I know is that the GPGPU software I've seen out there runs tons faster on ATI cards than it does on NVIDIA cards.
Yeah, Apple owns the trademarks because they're the ones who brought everyone to the table. Apple loved CUDA but isn't dumb enough to sole-source any of their parts. So they told NVIDIA and ATI that they should all play nice and standardize CUDA. OpenCL was the result. It's only barely different from CUDA. The biggest differences are primarily in making CUDA fit a programming model similar to the shaders already used in OpenGL. NVIDIA wanted to win a contract with Apple and they had a huge headstart on the competition. AMD's Brook and CAL/IL were mostly a flop, so they would happily jump onboard with a Khronos standard.
If you look just at hashing (and now prime number computation), you're missing a much bigger part of the GPGPU marketplace. Most of the GPGPU customers (in terms of units purchased) are running floating-point computations on enormous matrices and using the interpolation hardware. They're used in scientific applications, oil & gas, medical stuff, etc. In those applications, NVIDIA does very well, often better than AMD.
Any sources? There are very few applications of GPGPU out there, and the few I have seen seem to indicate that ATI performs better, but I'm not sure, especially at floating-point math. So I heard.
|
BTC: 1recatirpHBjR9sxgabB3RDtM6TgntYUW Hold onto what you love with all your might, Because you can never know when - Oh. What you love is now gone.
|
|
|
arnuschky
|
|
September 07, 2013, 08:52:47 AM |
|
Any word on getting this to work on NVIDIA cards? From what I understand, it's because NVIDIA cards don't support OpenCL 1.2 (yet?). Any potential workarounds on Windows or Linux?
I hacked up a solution for the clEnqueueFillBuffer problem (which, it seems, is the only OpenCL 1.2 function Sunny used; the rest is 1.1 and thus well supported by Nvidia). I then ran into another weird problem when compiling the kernels, at which point I decided it's too much work, because a) I don't know anything about OpenCL and b) I don't even want to mine.
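For anyone who wants to attempt the same route, here is a minimal sketch of the idea (my illustration, not the actual patch; it assumes size is a multiple of pattern_size): emulate OpenCL 1.2's clEnqueueFillBuffer by replicating the fill pattern into a host buffer and pushing it with clEnqueueWriteBuffer, which is plain OpenCL 1.1 and therefore works on Nvidia.
Code:
#include <CL/cl.h>
#include <stdlib.h>
#include <string.h>

/* Fallback for clEnqueueFillBuffer on OpenCL 1.1 runtimes:
   replicate the pattern host-side, then do an ordinary write. */
cl_int fillBufferFallback(cl_command_queue queue, cl_mem buffer,
                          const void *pattern, size_t pattern_size,
                          size_t offset, size_t size)
{
    unsigned char *host = malloc(size);
    if (!host)
        return CL_OUT_OF_HOST_MEMORY;
    for (size_t i = 0; i < size; i += pattern_size)
        memcpy(host + i, pattern, pattern_size);
    /* Blocking write, so the host copy can be freed right away. */
    cl_int err = clEnqueueWriteBuffer(queue, buffer, CL_TRUE, offset,
                                      size, host, 0, NULL, NULL);
    free(host);
    return err;
}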
|
|
|
|
arnuschky
|
|
September 07, 2013, 08:57:22 AM |
|
I then ran into another weird problem when compiling the kernels
For the record, here's the error:
Write buffer vPrimes, 6302644 bytes. Status: 0
Compiling kernel... this could take up to 2 minutes.
ptxas error : Entry function 'CalculateMultipliers' uses too much shared data (0x5078 bytes + 0x10 bytes system, 0x4000 max)
|
|
|
|
mtrlt (OP)
Member
Offline
Activity: 104
Merit: 10
|
|
September 07, 2013, 10:55:11 AM |
|
I then ran into another weird problem when compiling the kernels
For the record, here's the error:
Write buffer vPrimes, 6302644 bytes. Status: 0
Compiling kernel... this could take up to 2 minutes.
ptxas error : Entry function 'CalculateMultipliers' uses too much shared data (0x5078 bytes + 0x10 bytes system, 0x4000 max)
What GPU? It seems it only has 16 kilobytes of local memory, whereas I've programmed the miner with the assumption of 32 kilobytes, which is what ~all AMD GPUs have.
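For anyone wanting to check what their card reports, here is a quick sketch (a standard clGetDeviceInfo query; the setup boilerplate is illustrative and error checking is omitted) that prints the device's local memory size:
Code:
#include <CL/cl.h>
#include <stdio.h>

/* Print the first GPU's local memory size. 16384 bytes (0x4000)
   would explain the ptxas error above; the miner assumes 32768. */
int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong local_mem = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem), &local_mem, NULL);
    printf("Local memory: %lu bytes\n", (unsigned long)local_mem);
    return 0;
}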
|
|
|
|
|