At the moment, my miner doesn't work on GeForce until Nvidia fixes their bugs... however, in your case, it's because you're using a headless version of the JDK. Install a non-headless version of the JDK.
If you can figure out a good workaround for the Geforce bug (see above posts for description), feel free to send me a patch.
Actually no, I believe the problem is more likely that the library is linked against a different version of the Sun JVM. The headless problem can be easily solved using -Djava.awt.headless=true. Since GeForces are buggy, I will wait for them to work before having another go.

lwjgl doesn't handle being headless correctly yet, so trying to force AWT headless, or running with DISPLAY unset, will cause either the error you have or the ELFCLASS32 error. lwjgl is linked against Java 1.5 and my app is built against 1.6, so it's not an issue of what version stuff was built with; it works with many versions of Sun and OpenJDK. As for the GeForce bug, see the post above yours; I think I have it finally fixed, just use the beta drivers.
|
|
|
Hey Nvidia users... I just had an Nvidia user on IRC confirm my miner DOES work... but you have to use the beta drivers.
So, sweet, I fixed it. Go upgrade your drivers.
|
|
|
Not to sound n00bish, but when I follow the instructions I get:

[mithrandir@fedora poclbm]$ python ./poclbm.py
Traceback (most recent call last):
  File "./poclbm.py", line 3, in <module>
    import pyopencl as cl
ImportError: No module named pyopencl

Funny thing is, I did install pyopencl with an RPM I got. (I tried it with the src, and got errors. The RPM install worked fine.) My system is: Fedora 14, 32-bit, Python 2.7, ATI Radeon 3100 Graphics, AMD Sempron LE-1300.

EDIT: GPUCap via Wine says I don't have OpenCL support, but it also says I don't even have a GPU. Is there a way to find out if I have OpenCL support on GNU/Linux?

According to that Python error, no, you don't have pyopencl installed. Go figure. Also, you need at least a Radeon 4xxx to use OpenCL; 3xxx isn't enough.
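One quick way to confirm whether pyopencl is actually visible to the Python you're invoking (RPMs sometimes install for a different interpreter than the one on your PATH); this is just a stdlib sketch, the helper name is mine:

```python
import importlib.util

def has_module(name):
    """Return True if the named module can be imported by this interpreter."""
    return importlib.util.find_spec(name) is not None

# The RPM may have installed pyopencl for a different Python than the one
# actually running the miner; this checks the interpreter you are using now.
if has_module("pyopencl"):
    print("pyopencl is visible to this Python")
else:
    print("pyopencl is NOT visible -- check which Python the RPM targeted")
```

Running this with the same interpreter that runs poclbm.py tells you immediately whether the ImportError is a missing install or a mismatched Python.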
|
|
|
Update: I've fixed the nasty Nvidia bug. This should now work correctly on Nvidia and OSX. Everyone go try it and report back.
FAILED again on my machine:

./DiabloMiner-Linux.sh -u XXXX -p XXXX
Added GeForce 9800 GX2 (#1) (16 CU, local work size of
Exception in thread "main" java.lang.Exception: Failed to build program on GeForce 9800 GX2 (#1)
        at com.diablominer.DiabloMiner.DiabloMiner$DeviceState.<init>(DiabloMiner.java:319)
        at com.diablominer.DiabloMiner.DiabloMiner.execute(DiabloMiner.java:186)
        at com.diablominer.DiabloMiner.DiabloMiner.main(DiabloMiner.java:89)

Well, at least it failed farther into the kernel build process. This is good news.
|
|
|
Update: I've fixed the nasty Nvidia bug. This should now work correctly on Nvidia and OSX. Everyone go try it and report back.
|
|
|
I have got access for a limited time to a small cluster running some GeForce cards, and I'm wondering if I can get your miner to work on them at night. The problem is they are Debian machines and they are headless. Being headless is not so much a problem, since I can run Java in headless mode; what is a problem is this:
Any idea as to why?
At the moment, my miner doesn't work on GeForce until Nvidia fixes their bugs... however, in your case, it's because you're using a headless version of the JDK. Install a non-headless version of the JDK. If you can figure out a good workaround for the GeForce bug (see above posts for description), feel free to send me a patch.
|
|
|
Perhaps I could get some help:
I have the sinking feeling that my first foray into GPU processing has uncovered that my GPU is unsupported. It's a Radeon X2100 Mobility. I haven't seen anything saying that it is supported by OpenCL, but I haven't seen anything saying it isn't either.
It isn't supported, and either you didn't download the Stream SDK or you're not using it correctly.
|
|
|
Just installed the miner for my friend. We found this error on his MacOS box. Did anybody see this already? Thanks.

mac-mini-david:DiabloMiner david$ ./DiabloMiner-OSX.sh -u **** -p ****
Added GeForce 9400 (#1) (2 CU, 1x vector, local work size of
ERROR: [CL_INVALID_BUILD_OPTIONS] : OpenCL Error : clBuildProgram failed: Invalid build options "-D VECTORS=1 -D NS="(u)((nonce * 1) + 0)" -D CHECKOUTPUT="if(H == 0) {output[0] = ns;}" -D WORKGROUPSIZE="""
Exception in thread "main" java.lang.Exception: Failed to build program on GeForce 9400 (#1)
        at com.diablominer.DiabloMiner.DiabloMiner$DeviceState.<init>(DiabloMiner.java:371)
        at com.diablominer.DiabloMiner.DiabloMiner.execute(DiabloMiner.java:198)
        at com.diablominer.DiabloMiner.DiabloMiner.main(DiabloMiner.java:90)
It's the same as the Nvidia bug (and it's only by coincidence that your friend has Nvidia hardware; OSX has its own OpenCL implementation). Both Nvidia and OSX do not implement that correctly. It's annoying as hell.
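For the curious: the failing part is passing quoted macro bodies via clBuildProgram's "-D NAME=value" options, which the spec allows but these drivers reject. One possible workaround (a sketch only, not DiabloMiner's actual fix; the template and helper here are hypothetical, though NS and CHECKOUTPUT are the macro names from the error output above) is to splice the macro bodies directly into the kernel source before compiling, so no build options are needed at all:

```python
# Sketch: instead of "-D NS=... -D CHECKOUTPUT=..." build options, expand the
# macros into the kernel source text ourselves, then hand the expanded source
# to clBuildProgram with an empty options string.
KERNEL_TEMPLATE = """
__kernel void search(__global uint *output, const uint nonce) {
    uint ns = NS;
    uint H = 0; /* ... hashing elided ... */
    CHECKOUTPUT
}
"""

def splice_defines(source, defines):
    """Replace macro names with their bodies, longest names first so a short
    name never clobbers part of a longer one."""
    for name in sorted(defines, key=len, reverse=True):
        source = source.replace(name, defines[name])
    return source

kernel = splice_defines(KERNEL_TEMPLATE, {
    "NS": "(uint)((nonce * 1) + 0)",
    "CHECKOUTPUT": "if (H == 0) { output[0] = ns; }",
})
```

The resulting source contains no macros, so broken -D handling in the driver's compiler never comes into play.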
|
|
|
Update: The -52 errors only happen on SDK 2.2, and the miner now ignores them since they only happen periodically. They do not happen with 2.1 and have no effect on the mining process; I suggest you use 2.1 anyhow, since it runs much faster.
|
|
|
The open source driver bits are mostly there. The driver folks know how OpenCL will work, what ioctl(s) it will use.... once an open source OpenCL exists.
Actually, Mesa has a very early non-functional prototype that uses Gallium.

URL to the non-functional OpenCL prototype?

Hrm, good question. I saw it mentioned in one of my news feeds, but I can't find it now.
|
|
|
Hey man, an open, cross-platform, royalty-free API is very important. Imagine gaming if all we had was D3D... PC gaming would be dead. OpenGL is the only thing keeping it alive, IMO.
Agreed. I'm sorry, I didn't mean to imply that I was against industry members getting together to produce and promote open, cross-platform, royalty-free APIs. That is very important, and is indeed something that I encourage. I was simply remarking that I was fooled into thinking that it included an open-source implementation... since "Open" has been thrown around as an Orwellian marketing term by some organizations, even though it isn't really open.

As for older binaries, I don't keep those around; you can use git to pull in older revisions, but I don't recommend it, because they may end up being subtly broken since I've fixed bugs since then.
OK. I'll try that.

Until I fix Nvidia (if there is any fix at all), just buy a new video card. GeForces mine very slowly, about 3x slower per watt and maybe 4x slower per dollar depending on the card. Go buy a Radeon 5xxx; you'll be happier.
Nvidia actually gave me this GeForce 9800 GX2 as a consolation prize for a GPGPU research proposal I submitted to Nvidia that was rejected. So I have this big fancy GPGPU card which I haven't really been using. Oh well.

The Open in OpenGL doesn't imply any sort of implementation at all. There has already been an open OpenGL implementation for years; it's called Mesa. Khronos (formerly known as the OpenGL Steering Committee) doesn't maintain an implementation of anything.

As for Nvidia giving you that card... they lied again. No Nvidia card is good at GPGPU for any task outside of heavily float-oriented tasks similar to the graphics rendering the card normally would be doing. Trust me, it's really worth shelling out the cash for a Radeon 5xxx.
|
|
|
DAMNIT!!! I was fooled again by the marketing term "Open". Turns out that according to http://www.khronos.org/opencl, the Open just means "OpenCL™ is the first open, royalty-free standard for cross-platform"... nothing to do with implementation. Oh well... I have this nice Nvidia GeForce 9800 GX2... not doing anything... could be generating bitcoin.

Do you have a link to the older binaries?

Actually, Mesa has a very early non-functional prototype that uses Gallium. I suspect in the next 2-3 years you can run my miner on Radeon 5xxx hardware with a fully open source stack.
However, as long as Nvidia continues to unofficially threaten to sue projects like Nouveau for trying to support Nvidia on the new Gallium stack, Nvidia is probably going to go bankrupt before they turn around and quit pissing off customers.
DAMNIT!!! I swear! I will never again work at a pro-IP tech corporation in any manner!

Hey man, an open, cross-platform, royalty-free API is very important. Imagine gaming if all we had was D3D... PC gaming would be dead. OpenGL is the only thing keeping it alive, IMO.

As for older binaries, I don't keep those around; you can use git to pull in older revisions, but I don't recommend it, because they may end up being subtly broken since I've fixed bugs since then.

Until I fix Nvidia (if there is any fix at all), just buy a new video card. GeForces mine very slowly, about 3x slower per watt and maybe 4x slower per dollar depending on the card. Go buy a Radeon 5xxx; you'll be happier.
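The per-watt / per-dollar comparison is easy to sanity-check with a few lines. All numbers below are made-up illustrations chosen to reproduce roughly the 3x/4x ratios mentioned above, not benchmarks of any real card:

```python
def mining_efficiency(mhash_per_s, watts, price_usd):
    """Hash rate normalized by power draw and by purchase price."""
    return {
        "mhash_per_watt": mhash_per_s / watts,
        "mhash_per_dollar": mhash_per_s / price_usd,
    }

# Illustrative numbers only -- not measurements of real hardware.
radeon = mining_efficiency(mhash_per_s=300.0, watts=200.0, price_usd=300.0)
geforce = mining_efficiency(mhash_per_s=60.0, watts=120.0, price_usd=240.0)

per_watt_ratio = radeon["mhash_per_watt"] / geforce["mhash_per_watt"]
per_dollar_ratio = radeon["mhash_per_dollar"] / geforce["mhash_per_dollar"]
```

With these assumed figures the Radeon comes out 3x better per watt and 4x better per dollar, matching the rough ratios claimed in the post.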
|
|
|
Damnit!!! Proprietary crap!!! I swear, I will never work at a pro-IP corporation ever or buy another closed-source piece of hardware ever! My new goal in life: develop and release an open source GPGPU. Or maybe, for the time being, write an open source bitcoin FPGA miner and post it to opencores.org... Do you know of open source drivers that I can use instead?

It's not a proprietary-crap issue. Catalyst on Linux works fine, but obviously you need AMD hardware to use AMD drivers. FPGAs aren't worth dealing with: due to the cost of mass producing a generic massively parallel high-throughput product like the Radeon 5xxx series GPUs, you need about $3k worth of FPGA hardware to keep up with a $500 5970. AMD, however, is paying several million dollars to develop an entire open source solution through the existing X/Mesa/DRI community (along with Intel).

Do you know of open source drivers that I can use instead?
Open source drivers won't get you an open source OpenCL compiler and implementation, unfortunately.

The open source driver bits are mostly there. The driver folks know how OpenCL will work, what ioctl(s) it will use... once an open source OpenCL exists.

Actually, Mesa has a very early non-functional prototype that uses Gallium. I suspect in the next 2-3 years you can run my miner on Radeon 5xxx hardware with a fully open source stack.

However, as long as Nvidia continues to unofficially threaten to sue projects like Nouveau for trying to support Nvidia on the new Gallium stack, Nvidia is probably going to go bankrupt before they turn around and quit pissing off customers.
|
|
|
I get this error running the latest compiled version with the supplied Windows execution parameters:

As per the post immediately before yours, update to the normal 0.3.17 binary, not an m0-patched one.

I am having this error when running DiabloMiner-Linux.sh on Ubuntu 10.04 with a GeForce 9800 GX2:

./DiabloMiner-Linux.sh -u xxxxxxxx -p xxxxxxxx
Added GeForce 9800 GX2 (16 CU, 1x vector, local work size of
clang: Too many positional arguments specified!
Can specify at most 1 positional arguments: See: clang --help

./DiabloMiner-Linux.sh -u fontaine -p dsfah
Added GeForce 9800 GX2 (16 CU, 1x vector, local work size of
clang: Too many positional arguments specified!
Can specify at most 1 positional arguments: See: clang --help
Same on Win7 & Nvidia. Previous binaries working like a charm. Btw, thanks for the getwork update! It's cool we do not need the patch anymore!

You two are both suffering from the fact that Nvidia has broken drivers. They do not comply with the OpenCL specification. Until either they fix it, or I figure out a workaround that doesn't involve getting rid of defines in the kernel altogether, this can't be fixed. Go bitch at Nvidia; maybe they'll listen if more people do it.
|
|
|
My miner has now been updated to use the new getwork.
|
|
|
Update: Miner now works with Satoshi's getwork impl, so you don't need to patch anymore.
|
|
|
OSX seems to suffer from the same bug Nvidia does. Neither of them supports the specification-required ability to do -Dfoo=bar.
Do you see a fix for this in the future?

Ask Apple. They do not comply with the specification, and they probably don't care.
|
|
|
Weird, I get:

Added Radeon HD 4670 (8 CU, 2x vector, local work size of
ERROR: [CL_INVALID_BUILD_OPTIONS] : OpenCL Error : clBuildProgram failed: Invalid build options "-D VECTORS=2 -D NS="(u)((nonce * 2) + 0, (nonce * 2) + 1)" -D CHECKOUTPUT="if(H.s0 == 0) {output[0] = ns.s0;}if(H.s1 == 0) {output[1] = ns.s1;}" -D WORKGROUPSIZE="""
Exception in thread "main" java.lang.Exception: Failed to build program on Radeon HD 4670
        at com.diablominer.DiabloMiner.DiabloMiner$DeviceState.<init>(DiabloMiner.java:368)
        at com.diablominer.DiabloMiner.DiabloMiner.execute(DiabloMiner.java:195)
        at com.diablominer.DiabloMiner.DiabloMiner.main(DiabloMiner.java:88)
when trying to run DiabloMiner-OSX.sh with my username and pass.

OSX seems to suffer from the same bug Nvidia does. Neither of them supports the specification-required ability to do -Dfoo=bar.
|
|
|
Hi Diablo, I just tested your miner against the current (0.3.17) client version (unpatched) and it does not work. The getwork feature is already included in the official client. Do you plan to update your API to accept the default client implementation? It would be cool.
One more question: is it possible to add a parameter to specify the port, or even the whole URL, of the client? Right now you only accept a 'host' parameter, so both the port and the path inside the URI are hardcoded. I'm working on cooperative mining and I would like to test it against your miner, too. A full URL on the command line would help me a lot.
Thanks, Marek
It's on the todo list to make it work with Satoshi's getwork impl. My client supports both host and port; see -o and -p. If you need anything else, it may be more useful to request support in the official client from Satoshi (which I can then support).
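For anyone curious what a miner actually sends: getwork is just a JSON-RPC POST with HTTP basic auth. A minimal sketch of assembling the request pieces (the helper name is mine, and the exact host, port, and path depend on how your bitcoind is configured):

```python
import base64
import json

def build_getwork_request(user, password, request_id=1):
    """Assemble HTTP headers and a JSON-RPC body for a getwork call.
    Where to POST it (host/port/path) is up to your bitcoind config."""
    body = json.dumps({"method": "getwork", "params": [], "id": request_id})
    token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")
    headers = {
        "Authorization": "Basic " + token,
        "Content-Type": "application/json",
    }
    return headers, body

headers, body = build_getwork_request("miner", "secret")
```

The same payload shape with "params": [solved_block_data] submits a result instead of requesting work, which is why a single endpoint covers both directions.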
|
|
|
Hi, I am trying the latest DiabloMiner build, but when I run ./DiabloMiner-Linux.sh -u user -p pass I am still getting the same error:

Added GeForce 8800 GTS (12 CU, 1x vector, local work size of
clang: Too many positional arguments specified!
Can specify at most 1 positional arguments: See: clang --help

I am running Debian Testing and my GPU is a GeForce 8800 GTS. Can someone help me?

Nvidia drivers currently have a bug that Nvidia has not fixed yet. There is already an issue open about this on the tracker.
|
|
|