That's a really good "getting started" guide. Thank you for your work

No problem, I'm glad to help! If you have any input, let me know and I will update it.

Also, it seems that the calculation of nonces/minute may be off... https://bitcointalk.org/index.php?topic=731923.msg8771161#msg8771161 It shows me calculating at 121,230 nonces/minute... but actual file growth is more along the lines of only 12,123.0 nonces/minute. (Maybe this is from the message I got when compiling?) ...It might also be nice to have an "ETA" for completion time. But not really needed.

Maybe the CLOCKS_PER_SEC value doesn't match the actual clock() values on your platform (clock() is the function I use to retrieve the time). I will use the time() function instead. Here is the new version:

GPU plot generator v2.0.1

Warning to AMD owners: the 14.4 Catalyst driver causes a major bug, and the generated files are corrupted. I will investigate this problem. In the meantime, I recommend switching your driver back to the 13.12 version.

Changelog:
- Nonces display corrected (XX/YY nonces).
- Using the time() function rather than clock() in the nonces/minute computation.
- Nonces/minute repeated at the end of the generation process to keep a trace of it after execution.
- ETA and elapsed time added.

Windows x86 binaries: https://mega.co.nz/#!TVEVjQ5L!Uyd2HuPTZSP9eARwjiChjd1P6BMWHQtYT0SGGHLrnuI
Sources: https://mega.co.nz/#!2RUwHAyC!IOxfTZk9PsRdPS7Z3a76MAT4vT3ygvp--r_dVQ5GaD4
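The reported tenfold gap between displayed and actual nonces/minute is consistent with a clock-tick scaling problem: if the rate is divided by clock() ticks with a mismatched CLOCKS_PER_SEC, the result is off by exactly that ratio. The fix above amounts to computing the rate from wall-clock timestamps instead. A minimal sketch of the corrected computation (function name is illustrative, not from the actual gpuPlotGenerator source):

```python
def nonces_per_minute(nonces_done, start_seconds, now_seconds):
    """Rate based on wall-clock timestamps (e.g. from time.time())."""
    elapsed = now_seconds - start_seconds
    return nonces_done * 60.0 / elapsed if elapsed > 0 else 0.0

# 12123 nonces written over one minute of wall time:
print(nonces_per_minute(12123, 0, 60))  # 12123.0
```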
|
|
|
Here is a better guide on how to use the Windows GPU Plotter:

Getting Started

Download/install GPU drivers and OpenCL support:

For Nvidia GPU users: I don't have an Nvidia GPU, so I still need confirmation from y'all on what is required. According to Nvidia, "OpenCL support is included in the latest NVIDIA GPU drivers, available at http://www.nvidia.com/Download/index.aspx?lang=en-us".

For AMD/ATI GPU users: Download/install the drivers for your video card. (Note: you should already have video drivers installed, but you may need to try different driver versions for best performance.)
Latest version - http://support.amd.com/en-us/download/desktop?os=Windows+7+-+64
Archived versions - http://support.amd.com/en-us/download/desktop/previous?os=Windows%207%20-%2064
Then download/install the AMD APP SDK for your version of Windows: http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/

Download/extract the Windows GPU Plotter:
1) The GPU Plotter is archived in a .7z file, so you will need to download and install 7-Zip to extract it (if you don't already have it): http://www.7-zip.org/download.html
2) Download the Windows GPU plot generator (v2.0.0): https://mega.co.nz/#!2BNDXY4L!jgwHDZXDJyFp2Jg5mN8sxtpplgXEInSMf1cQGbPc5lM
3) Right-click the "gpuPlotGenerator-bin-win-x86-2.0.0.7z" file and select 7-Zip -> Extract to "gpuPlotGenerator-bin-win-x86-2.0.0\"

Using the Windows GPU Plotter:
1) Open the newly created "gpuPlotGenerator-bin-win-x86-2.0.0" folder.
2) Hold down the Shift key and right-click in an empty spot within the folder, then select "Open command window here". (Note: this option only appears when you hold Shift while right-clicking. Alternatively, open a command window manually and "cd" to the folder containing gpuPlotGenerator.exe.)
3) Run the following command to list the OpenCL platforms:

gpuPlotGenerator.exe list platforms

Note down the "ID" number of the proper platform; this number will be <platformId> in the next step.
4) Run the following command to list the devices on that platform, replacing <platformId> with the number you noted down:

gpuPlotGenerator.exe list devices <platformId>

Example: gpuPlotGenerator.exe list devices 0

Note down the "ID" number of the device to use; this will be <deviceId> in the next step.
Note down the "Max global memory size" number; this is the MAXIMUM <Stagger> we are able to set (may be more with trial and error).
Note down the "Max work group size" number; this is the MAXIMUM <Threads> we are able to set.
5) Finally, run the generation itself. The basic syntax is:

gpuPlotGenerator.exe generate <platformId> <deviceId> "<Plot folder path>" <AccountNumber> <StartingPlot> <NumberOfPlots> <Stagger> <Threads> <Hashes>

<platformId> = The ID we found in step 3 (in my case, this was 0)
<deviceId> = The ID we found in step 4 (in my case, this was also 0)
<Plot folder path> = The folder where you wish to have plots created (ex: C:\Path to\plots)
<AccountNumber> = Your numeric Burstcoin wallet address (ex: 11111222223333344444)
<StartingPlot> = The nonce number you would like to start generating at
<NumberOfPlots> = The number of nonces to create from the StartingPlot (this needs to be a number that is evenly divisible by the <Stagger> you set)
<Stagger> = Amount of memory to use on the GPU, in MB. (Ex: I set mine to 1024 instead of my max of 1265. I've been told you may be able to set this higher, but that would be trial-and-error testing on your part.)
<Threads> = Number of parallel GPU threads to use (typically 64, 128, or 256 depending on the capabilities of your card, as indicated by the "Max work group size" above)
<Hashes> = Number of chunks the GPU will split the work into. (Ranges from 1 to 8160; this is purely guesswork, so start low-ish and work up as close to 8160 as you can. Higher numbers stress the GPU more.)

As an example, this is the command that I used on my AMD Radeon 7800:

gpuPlotGenerator.exe generate 0 0 "C:\Path to\plots" 11111222223333344444 14670000 7335000 1000 64 1024

Note: the above command is probably not optimized for the best speed, but it's an example that works for my card.

Troubleshooting

[ERROR] An OpenCL error occurred in the generation process, aborting...
[ERROR] [-1001] Unable to retrieve the OpenCL platforms
There are no OpenCL devices detectable. Try installing the latest drivers/OpenCL for your device (listed in "Getting Started" above). This can also be thrown if you are using a remote desktop app such as RDP (which uses a generic mirror driver instead of the GPU driver); try something like TeamViewer or VNC instead of Remote Desktop.

Contribution needed for more troubleshooting steps...

Revised instructions for using the Windows GPU Plotter, as some were still getting confused. Also, thank you to bipben for the amazing job on getting a GPU Plotter functional (you are more than welcome to steal any of my notes for your README if it would help users).

That's a really good "getting started" guide. Thank you for your work
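As a quick sanity check on the generate parameters: one Burst nonce occupies 4096 scoops x 64 bytes = 256 KiB on disk, and <NumberOfPlots> must be evenly divisible by <Stagger>. A small hypothetical checker (not part of the plotter) that validates the pair and predicts the resulting plot file size:

```python
NONCE_SIZE = 4096 * 64  # bytes per nonce (256 KiB)

def plot_file_bytes(nonces, stagger):
    """Validate the nonces/stagger pair and return the plot file size in bytes."""
    if nonces % stagger != 0:
        raise ValueError("NumberOfPlots must be evenly divisible by Stagger")
    return nonces * NONCE_SIZE

# The example command above: 7335000 nonces with stagger 1000
print(plot_file_bytes(7335000, 1000) / 1024**3)  # 1790.771484375 (GiB)
```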
|
|
|
Setting OpenCL step3 kernel static arguments
0.0576701% (10652672/5326848 nonces), 8299.71 nonces/minutes...
0.11534% (10655744/5326848 nonces), 10400.3 nonces/minutes...
0.17301% (10658816/5326848 nonces), 11357.2 nonces/minutes...
0.230681% (10661888/5326848 nonces), 11893.5 nonces/minutes...
0.288351% (10664960/5326848 nonces), 12215.5 nonces/minutes...
0.346021% (10668032/5326848 nonces), 12467.4 nonces/minutes...
0.403691% (10671104/5326848 nonces), 12626.9 nonces/minutes...
0.461361% (10674176/5326848 nonces), 12742.9 nonces/minutes...
0.519031% (10677248/5326848 nonces), 12851.2 nonces/minutes...
0.576701% (10680320/5326848 nonces), 12947.5 nonces/minutes...
0.634371% (10683392/5326848 nonces), 13030.8 nonces/minutes...
0.692042% (10686464/5326848 nonces), 13093.8 nonces/minutes...
0.749712% (10689536/5326848 nonces), 13161.1 nonces/minutes...
0.807382% (10692608/5326848 nonces), 13223.1 nonces/minutes...
0.865052% (10695680/5326848 nonces), 13281.6 nonces/minutes...
0.922722% (10698752/5326848 nonces), 13347.3 nonces/minutes...
0.980392% (10701824/5326848 nonces), 13393.2 nonces/minutes...
1.03806% (10704896/5326848 nonces), 13444.7 nonces/minutes...
1.09573% (10707968/5326848 nonces), 13473.2 nonces/minutes...
1.1534% (10711040/5326848 nonces), 13509.5 nonces/minutes...
1.21107% (10714112/5326848 nonces), 13537.3 nonces/minutes...
1.26874% (10717184/5326848 nonces), 13566 nonces/minutes...

Command used (on an R9 280X): gpuplotgenerator generate 0 0 G:\plots blablabla 10649600 5324800 3072 128 1024
You're doing better than me on v2.0.0; I'm doing 11.5k nonces/min, but my hashes is at 8160. bipben still hasn't answered my question about how the stagger_size/hashes ratio relates in this GPU plotter, as the README only vaguely discusses it.

My answer is available here (it must have been lost in the whole flood): https://bitcointalk.org/index.php?topic=731923.msg8772880#msg8772880
|
|
|
Thanks Bipben for the GPU generator and your hard work! I've gone ahead and doubled your burst wallet balance for you.

Wow! Thanks a lot for your support. I will continue to enhance the plot generator. The next version will be centered on performance. Some people pointed out that it would be great to be able to split the work between many graphics cards. I will add this feature to the roadmap.

v2.0.0 seems solid, thanks... Also, could you explain the relationship of the stagger size/hashes ratio? It was kind of vague to understand.

There is no relationship between the two. The <staggerSize> parameter is used to order the scoops in the resulting plot file. The greater the stagger, the more RAM is required to generate the nonces. The <hashesSize> parameter is a value between 1 and 8160 used to split the step2 global work into chunks. Those chunks of workload are then queued to your graphics card one after another.

Example: ./gpuPlotGenerator 0 0 <path> <address> 0 100 10 64 400
-> Creating generation buffer ((PLOT_SIZE + 16) * staggerSize)
-> Creating result buffer (PLOT_SIZE * staggerSize)
-----> Generating 10 (staggerSize) nonces (from 0 to 9)
  -> Step1: buffer initialisation spread over <threadsNumber=64> threads
  -> Step2:
    -> Computing 400 hashes spread over <threadsNumber=64> threads (from 0 to 399)
    -> Computing 400 hashes spread over <threadsNumber=64> threads (from 400 to 799)
    -> ...
  -> Step3: reordering scoops, moving them from the generation buffer to the result buffer
  -> Writing nonces to disk
-----> Generating 10 (staggerSize) nonces (from 10 to 19)
... and so on
As you can see, without the reordering process we should be able to reduce the amount of memory needed by the plot generation (or at least scale it as we want, without any impact on mining) and greatly enhance its parallelization. Postponing this step could greatly improve the whole generation process, but will increase I/O operations. I will make different versions to test some ideas on that matter. Also, I will try to lighten the step2 kernel processing as much as possible to increase the nonces/minute.
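The chunked step2 queueing shown in the trace above can be modeled as a simple loop over hash ranges. This is a schematic illustration of the batching idea, not the actual OpenCL host code, and it assumes the standard Burst layout of 8192 32-byte hashes per nonce (4096 scoops x 2 hashes):

```python
def step2_batches(total_hashes, hashes_size):
    """Split the step2 global work into <hashesSize>-sized chunks,
    queued to the graphics card one after another."""
    return [(start, min(start + hashes_size, total_hashes))
            for start in range(0, total_hashes, hashes_size)]

# With hashesSize=400, as in the example trace, the work is queued
# as (0, 400), (400, 800), ... until all hashes are covered.
for start, end in step2_batches(8192, 400):
    pass  # enqueue a kernel covering hashes [start, end) here
```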
|
|
|
Uray, is it a networking problem or server hardware load? Or maybe a DDoS? Can you limit max miners manually?
It's a hashing problem: it's heavy to check a nonce submission to produce the deadline value. When a new block is announced, all miners submit their nonces to the pool at the same time, and the pool then needs to check their nonces one by one. Some miners will see a submission error while they are waiting for the deadline result, but the deadline actually is submitted, just more than 15 seconds later.

Do your pool servers have one or more available GPUs? The nonce submission check could be hardware accelerated. I can work on that part if you want.
|
|
|
Hello. I generated a plot from nonce 0 to 3276800 with stagger size 1024, and it's fine. Now I want to make my plot file bigger, so I generated from nonce 3276800+1024=3277824 to nonce 3600000 (3600000*256/1000000 = 921.6 GB), i.e. from 838 GB up to 921 GB. The problem is that my plot generation didn't stop at nonce 3600000 but kept going past that value. I had to stop it manually (Ctrl+C), but I don't know why this happened and whether everything is fine now.
Hi, it's a display bug: the current code displays ["currentNonce" / "noncesNumber"] but it should display ["currentNonce - startNonce" / "noncesNumber"]. This bug occurs when <startNonce> is greater than 0. Sorry for that; I will correct it in the next version. You can use the generated plot files without any issue. The displayed percentage value is correct, so rely on that instead.
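The described fix is just a subtraction before display. A one-function sketch (names are illustrative), using the numbers from the log earlier in the thread (startNonce 10649600, noncesNumber 5326848):

```python
def progress_label(current_nonce, start_nonce, nonces_number):
    """Show nonces completed in this run, not the absolute nonce index."""
    done = current_nonce - start_nonce  # the missing subtraction
    return f"{done}/{nonces_number} nonces"

# The buggy display "10652672/5326848 nonces" becomes:
print(progress_label(10652672, 10649600, 5326848))  # 3072/5326848 nonces
```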
|
|
|
Okay, after some review, I must conclude that this so-called GPU Plotter is a worthless piece of crap, suited only for hackers and code monkeys.
After downloading the 2 files and 7zip, and installing 7zip, the first directions here were to read the README files. Okay.
And.... progress comes to a halt. Direction #1 - install msys/mingw. Installing it isn't the main problem, rather, the bigger problem is that the file ISN'T HERE!!!!!
How the hell am I supposed to install something that isn't in the folder to install?!?!?!?!?
Then after installing OpenCL, which I can do, it gets into modifying files and command line directions for use of the cd command, etc.
Hey, how about releasing a version that WORKS without having to hack / guess / compile the damn thing?
Get to work on it, this current version SUCKS OBAMA!!!!!
And make it for Windows, no more Linux stuff that 1% or less of the world uses!!!!
+1, downloaded & installed msys/mingw from Google, but no idea after that!!!!

In my original post there are two links: the source code for "code monkeys", as you say, and an already pre-built version for Windows... It would be nice of you to pay a bit more attention before being so aggressive. As you can see from all the posts, the plotter is still at an early stage of its development. So yes, the README isn't perfect, and yes, there are bugs, but everyone here tries their best to help each other.
|
|
|
Trying the new GPU plotter... my config: gpuPlotGenerator generate 0 0 plots 4163282010088137402 15662000 20480 500 64 128 PAUSE

The error I get says "[-54] Error in step1 kernel launch". It does this after "Setting OpenCL step3 kernel static arguments". Any ideas? Is my config bad? I started with pretty low numbers. I should also mention I'm plotting on my CPU right now with 4 cores (i7-2720QM) and have 2/8 GB free.

-54 is the CL_INVALID_WORK_GROUP_SIZE error. It is related to your <threads> parameter. Different values for this parameter may solve your problem.
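For reference, the negative numbers in these error messages are standard OpenCL status codes. A small lookup helper covering the codes seen in this thread (the -1001 code comes from the cl_khr_icd extension and shows up when no OpenCL driver/ICD is installed; the table is deliberately abbreviated):

```python
CL_ERROR_NAMES = {
    -33: "CL_INVALID_DEVICE",
    -54: "CL_INVALID_WORK_GROUP_SIZE",   # <threads> too big for the kernel/device
    -1001: "CL_PLATFORM_NOT_FOUND_KHR",  # no OpenCL platforms/drivers found
}

def describe_cl_error(code):
    return CL_ERROR_NAMES.get(code, f"unknown OpenCL error {code}")

print(describe_cl_error(-54))  # CL_INVALID_WORK_GROUP_SIZE
```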
|
|
|
GPU plot generator v2.0.0

Changelog:
- Kernel split to increase graphics card compatibility (introduction of the <hashesNumber> parameter to batch hash computations).
- OpenCL platform and device listing added.
- Platform and device selection added as generation parameters.
- Enhanced CPU and GPU support (removed the auto-selection and fallback).
- Clearer information display.
- Displaying the % and nonces/minute while generating plots.
- README added with some basic build and run instructions.
- Makefile cleaned up.

Windows x86 binaries: https://mega.co.nz/#!2BNDXY4L!jgwHDZXDJyFp2Jg5mN8sxtpplgXEInSMf1cQGbPc5lM
Sources: https://mega.co.nz/#!LZs1RapR!6VGB1SssX5lFp8GX0bZmN2OH-MftLVzEAn1P1nKrdwA

Please read the README provided with both the binaries and the sources. Your feedback is welcome; it greatly helps me. If you like this software, support me.

Roadmap for the next versions (feel free to post your features/ideas):
- Kernel enhancement to speed up plot generation (the shabal core can be improved).
- Find a way to reduce the memory overhead caused by my workaround for the per-thread local memory limit.
- Make a list of graphics cards along with their optimal parameters (needs your participation).

>>>>>>> There is a dedicated topic on the new forum for the GPU plot generator: http://burstforum.com/index.php?threads/gpu-plot-generator.45/ <<<<<<<
|
|
|
The GPU plot generator v2.0.0 will allow you to do that. I will release it in a few hours at most. Be patient.

Will this also fix the weird double memory problem? I was able to plot up to 2024 stagger with my 3GB 280X, but anything more than that and I get the CL error. The strange thing is that, looking at MSI Afterburner, I'm only using 1.1GB on the card and less than 20% of my system RAM. The time it takes to look through the plots for valid shares is related to stagger size, right? I'm noticing that my systems aren't always able to go through all my plots before the end of some blocks. Most of my plots are at stagger 1000 because I was having problems going any higher with the Java miner.

The "weird" double memory problem is due to the limited amount of local memory available to each workgroup thread on the GPU side (the algorithm would require more than 260KB per thread, which is really big in the graphics card world). The only way I have found for now is to allocate another huge buffer (of (PLOT_SIZE + 16) * staggerSize bytes) that each workgroup thread can use to store its processing data. Thus, the amount of memory needed by the GPU plot generator is more than twice the normal amount. One solution to avoid this memory overhead is to find a way to do the last algorithm step in an in-place fashion. I will try to work on that and on the kernel enhancement (to speed things up even more :p) for the 2.1.0 version.
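The buffer sizes quoted above make the GPU memory requirement easy to estimate: with PLOT_SIZE = 4096 x 64 bytes per nonce, the card holds both a generation buffer of (PLOT_SIZE + 16) x staggerSize bytes and a result buffer of PLOT_SIZE x staggerSize bytes. A quick estimator sketch (illustrative, not from the plotter's source; it ignores the local buffers and kernel code, as noted elsewhere in the thread):

```python
PLOT_SIZE = 4096 * 64  # bytes per nonce

def gpu_bytes_needed(stagger_size):
    generation = (PLOT_SIZE + 16) * stagger_size  # workaround buffer
    result = PLOT_SIZE * stagger_size             # reordered scoops
    return generation + result

# For example, stagger 4000 needs a bit over 2 GB on the GPU:
print(gpu_bytes_needed(4000) / 1e9)  # 2.097216
```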
|
|
|
What command do I use in a bat file to make use of both of my two 280Xs?
anyone?

Read the thread. A miner for multiple graphics cards doesn't exist yet. And a few pages back there was a configuration for the R9 280X.

The GPU plot generator v2.0.0 will allow you to do that. I will release it in a few hours at most. Be patient.
|
|
|
GPU plot generator v1.1.0
Author: Cryo
Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
Burst: BURST-YA29-QCEW-QXC3-BKXDL
--------------
Path: /plots
Nonces: 4092000 to 4132960 (10 GB)
Process memory: 1024MB
Threads number: 128
--------------
Retrieving OpenCL platform
Retrieving OpenCL GPU device
Creating OpenCL context
An OpenCL error occurred in the generation process, aborting...
>>> [-33] Unable to create the OpenCL context

Oh look, a new error.

This error is CL_INVALID_DEVICE... Congrats, that's a first ^^ It looks like the selected GPU device is not located on the auto-selected platform. The next version will let you choose both the platform and the GPU to work on, so this issue should be fixed. Be patient, I'm working on it as fast as I can.
|
|
|
How much memory do you have in your GPU? The most I can gen @ is 3000 on a higher end card.
The lower end ones with 1GB constantly fail with the error posted above.
It's a 3GB R9 280X...

Hi guys, I am aware of this issue. The fact is that I have to create two full-size buffers on the GPU side to reduce thread-local memory consumption. Thus, the memory amount needed on the CPU side has to be doubled to get an estimate of what is needed on the GPU side. As an example, for a stagger size of 4000 you will need 1GB of RAM on the CPU side and more than 2GB (exactly (PLOT_SIZE + 16) x stagger) on the GPU side (this doesn't include the local buffers and the kernel code itself). Once I have a stable version (really soon), I will work on this particular problem.

Please also consider testing a version for Nvidia cards! AFAIK there isn't yet a user who was able to start the plot generator on Nvidia GPUs.

The next version should work on both NVIDIA and AMD cards. It still needs some tests, but it sounds promising.

Hope that you will release it quickly (at least while Burst profitability is still good).

It's nearly working already. It needs some more tests and two or three bug fixes ^^
|
|
|
Hi everyone. After many hours of setup I finally made it. I have a 1TB generation in progress and 3x100GB already finished. I would like to test the V2 pool, but I don't have any BURST for now. Could someone send me 1 BURST to test it, please? Here is my address: BURST-YA29-QCEW-QXC3-BKXDL. Regarding the plot generation, I found an OpenCL implementation of Shabal (https://github.com/aznboy84/X15GPU/blob/master/kernel/shabal.cl) that could be used to make a GPU version of the generator. I will try to work on it when I have some free time. Regards

Hi everyone. As promised, I have been working on a GPU plot generator over the last few days. I made a little program built on top of OpenCL, and it seems to work pretty well in CPU mode. Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46KB of private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot). Here is a preview you can test for now:
gpuPlotGenerator-src-1.0.0.7z: https://mega.co.nz/#!bcF2yKKL!3Ud86GaibgvwBehoxkbO4UNdiBgsaixRx7ksHrgNbDI
gpuPlotGenerator-bin-win-x86-1.0.0.7z: https://mega.co.nz/#!HJsziTCK!UmAMoEHQ3z34R4RsXoIkYo9rYd4LnFtO_pw-R4KObJs
I will build another release at the end of the day with some minor improvements (threads per compute unit selection, output of OpenCL error codes, improvement of the Makefile to generate the distribution directly). I will also try to figure out another way to dispatch the work between the GPU threads to reduce the amount of private memory needed by the program. Windows people can use the binary version directly. Linux people should download the source archive, make sure to modify the OpenCL library and lib path in the Makefile (and maybe the executable name), and build the project via "make". To run the program, you need the "kernel" and the "plots" directories beside the executable.

The executable usage is: ./gpuPlotGenerator <address> <start nonce> <nonces> <stagger size>
The parameters are the same as the original plot generator, without the threads number. If you find bugs or if you want some new features, let me know. If you want to support me, here are my Bitcoin and Burst addresses:
Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
Burst: BURST-YA29-QCEW-QXC3-BKXDL
Regards

Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46KB of private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot).

It's nice to see someone else working on this, since I seem to have failed at it. Private memory is actually part of global memory on AMD cards, so storing data in private isn't any better than just using global for everything; it's local memory that needs to be aimed for to get the massive speedup. No AMD card has more than 64KB of local memory per workgroup, however, which makes storing it all in local impossible. I haven't tried your implementation yet, but my own first attempt also used global for everything, and the result was faster than the Java plotter but slower than dcct's C plotter. My second attempt used a 32KB local buffer that I rotated through for the data currently being hashed, but I couldn't figure out how to copy it back to global fast enough, and the local -> global copy killed the performance. You might be interested in the kernels here: https://bitcointalk.org/index.php?topic=731923.msg8695829#msg8695829

Thanks, I will look at your kernels to see if I can find a better solution. Here is the new version. I reduced the amount of memory used from 40KB to about 1KB per unit. The only drawback is that it requires twice the global memory as before. I will search for a way to reduce this overhead later. In CPU mode, it all goes pretty well (when no graphics card is detected). The GPU mode is still kind of buggy on my graphics card (an old GeForce 9300M GS); I don't know the exact reason yet. Sometimes it works, sometimes not. I will try to fix this issue tomorrow. Here are the files:
gpuPlotGenerator-src-1.1.0.7z: https://mega.co.nz/#!iYFWAL5B!BvtmRQ5qGq4gGwjDglFNtDtNIX4LDaUvATBtClBdTlQ
gpuPlotGenerator-bin-win-x86-1.1.0.7z: https://mega.co.nz/#!aBVGBBQD!tBsRtb8VrHR12_anrFTrl41U0fPQu_OqFnxyi5nCyBY
For the Linux users, the Makefile has a new target named "dist" that builds and copies all the necessary files to the "bin" directory.

The executable usage is: ./gpuPlotGenerator <path> <address> <start nonce> <nonces> <stagger size> <threads>
<path>: the path to the plots directory
<threads>: number of parallel threads for each work group

Found the "randomness" cause. NVIDIA caches the kernel after the first build and rebuilds it from time to time. By cleaning the cache, I can force the kernel build and speed up the debugging process. I will notify you as soon as the crash cause is found and corrected.

Bad news, guys. There is no actual "bug" in the implementation. It seems the graphics card is being stressed too much by the shabal core, so the driver shuts down the kernel (there is a watchdog timer hard-coded in the display driver for this purpose, to ensure that the display doesn't freeze for too long). I will try to match the whole algorithm and its memory consumption to the available graphics card power. In the meantime, I found this thread (http://stackoverflow.com/questions/12259044/limitations-of-work-item-load-in-gpu-cuda-opencl) that speaks about this particular issue. The available options are:
- If you have more than one graphics card, you can launch the plotter on the one that does not hold the display. There is still no option to select the graphics card in the plotter, but I will code it soon so that you can test it in a multi-GPU environment.
- You can try to turn off the watchdog timer by following the provided link, but be CAREFUL: you may experience terrible display lags, or even full black screens, until the plotter process finishes its work.

You don't need to improve it to avoid this issue, just split it. One kernel for the first half, one kernel for the second half.

The new major update is in progress, thanks to burstcoin's advice. I think a lot more graphics cards will be compatible with this version (at least I hope so).

I'm a Linux n00b, but I managed to get Ubuntu up and running so that I could use dcct's plot generator.
...I'm currently trying to use your v1.1.0 of the GPU Plot Generator with my AMD (ATI) Radeon 7800. I've made the following changes to the Makefile (I chose the default install location for the AMD APP SDK):
CC = g++
CC_FLAGS = -ansi -pedantic -W -Wall -std=c++0x -O3 -I../opt/AMDAPPSDK-2.9-1/include
LD = g++
LD_FLAGS = -fPIC -L../opt/AMDAPPSDK-2.9-1/lib/x86_64 -static-libgcc -static-libstdc++ -lopencl
I keep getting this error when I issue the "make" command:
Linking [bin/gpuPlotGenerator.exe]
/usr/bin/ld: cannot find -lopencl
collect2: error: ld returned 1 exit status
make: *** [bin/gpuPlotGenerator.exe] Error 1
I've Googled the error and see a bunch of posts on the issue, but can't seem to fix this "/usr/bin/ld: cannot find -lopencl" error. Any suggestions?

Make sure you have the OpenCL libraries in your /opt/AMDAPPSDK-2.9-1/lib/x86_64 folder. On Linux the library is usually named libOpenCL.so, and the -l flag is case-sensitive, so try -lOpenCL instead of -lopencl. If the library isn't there, try to locate it on your disk; you may need to install or build the AMD SDK to get it in this directory. Hope this helps.
|
|
|
How's the GPU plot generator working? Any update on that, guys? By the way, where do we put the GPU plotter, inside the PoC miner folder, and run it there? I want to try on Win 8.1 with an R9 290X GPU, is that possible? Thanks.

Wait for the new version. Once it's stable, I will write a small "how to use" guide.

What's your ETA, and what does this mean? "<threads>: number of parallel threads for each work group"

I will work on it tonight (GMT+1). The <threads> parameter is used to set the work group size on the GPU side. Alphateam got better results with a value of 64 on his 290X. I will publish a list of the best-fitting parameters for each graphics card later.
|
|
|
I'm plotting with the 1.1 GPU plotter. But can anybody guarantee us that the files are generated correctly, and that the files are OK to mine on solo/pool? Is no one using GPU-generated plots and able to mine some Burst from a pool?
Thanks

I have tested the output files against the ones generated by the C plot generator for some low parameters ([<offset> <nb> <stagger>], [0 1 1], [0 4 2], [0 100 10], [10 100 10]). There was no diff.
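That kind of diff test is easy to reproduce yourself: generate the same nonce range with both plotters and compare the output files byte for byte. A minimal checker sketch; the file paths in the usage comment are hypothetical placeholders following the plotter's <address>_<startNonce>_<nonces>_<stagger> naming convention:

```python
def files_identical(path_a, path_b, chunk_size=1 << 20):
    """Compare two files byte for byte, reading in 1 MiB chunks."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            ca, cb = fa.read(chunk_size), fb.read(chunk_size)
            if ca != cb:          # content mismatch or different lengths
                return False
            if not ca:            # both files exhausted at the same point
                return True

# Hypothetical usage, e.g. for the [0 100 10] case:
# files_identical("gpu-plots/12345_0_100_10", "cpu-plots/12345_0_100_10")
```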
|
|
|
|