Irontiga
|
|
September 09, 2014, 07:49:10 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Probably not...
|
|
|
|
callmejack
|
|
September 09, 2014, 07:49:42 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Depends on the ambient temperature. The Intel stock cooler is designed to work well. Most of the time the power supplies and mainboards break in the long term before a CPU does. Aside from some old Athlon Thunderbirds (those with bare dies on the chip) many years back, I never had any issues with a CPU running within the specified conditions under full load 24/7 (idling Windows servers tend to keep the CPU constantly hot).
|
|
|
|
yellowduck2
|
|
September 09, 2014, 07:49:43 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Probably not... 3 months?
|
|
|
|
SpeedDemon13
|
|
September 09, 2014, 07:50:46 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Don't know about a year, but it could maybe for a couple of months. Server CPUs are probably more tolerant of 95-100% loads than common desktop CPUs for running a year. Regardless, it should run with no issues if everything around it runs properly, i.e. motherboard, PSU, etc.
|
|
|
|
burstcoin (OP)
|
|
September 09, 2014, 07:51:54 AM |
|
Anyone else facing virus notifications? My Kaspersky detects "Dangerous URL blocked, http://123.249.35.13:8123/burst" every half an hour or so. Why is this happening? Clients exchange addresses with each other so you can connect to more people. Someone's probably just running a burst client on an IP Kaspersky blocks for some other reason.
|
BURST-QHCJ-9HB5-PTGC-5Q8J9
|
|
|
alphateam
|
|
September 09, 2014, 07:52:10 AM |
|
Quote: Hi everyone, After many hours of setup I finally made it. I have a 1Tb generation in progress and 3x100Gb already finished. I would like to test the V2 pool but I haven't any BURST for now. Could someone send me 1 BURST to test it, please? Here is my address: BURST-YA29-QCEW-QXC3-BKXDL. Regarding the plot generation, I found an OpenCL implementation of Shabal ( https://github.com/aznboy84/X15GPU/blob/master/kernel/shabal.cl ) that could be used to make a GPU version of the generator. I will try to work on it when I have some free time. Regards

Quote: Hi everyone, As promised, I have been working on a GPU plot generator over the last few days. I made a little program built on top of OpenCL, and it seems to work pretty well in CPU mode. Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46kB private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot). Here is a preview you can test for now:
gpuPlotGenerator-src-1.0.0.7z : https://mega.co.nz/#!bcF2yKKL!3Ud86GaibgvwBehoxkbO4UNdiBgsaixRx7ksHrgNbDI
gpuPlotGenerator-bin-win-x86-1.0.0.7z : https://mega.co.nz/#!HJsziTCK!UmAMoEHQ3z34R4RsXoIkYo9rYd4LnFtO_pw-R4KObJs
I will build another release at the end of the day with some minor improvements (threads per compute unit selection, output of OpenCL error codes, improvement of the Makefile to generate the distribution directly). I will also try to figure out another way to dispatch the work between the GPU threads, to reduce the amount of private memory needed by the program. For the Windows people, you can use the binary version directly. For the Linux people, just download the source archive, make sure to modify the OpenCL library and lib path in the Makefile (and maybe the executable name), and build the project via "make". To run the program, you need the "kernel" and "plots" directories beside the executable. The executable usage is:
./gpuPlotGenerator <address> <start nonce> <nonces> <stagger size>
The parameters are the same as the original plot generator, without the threads number. If you find bugs or if you want some new features, let me know. If you want to support me, here are my Bitcoin and Burst addresses: Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD Burst: BURST-YA29-QCEW-QXC3-BKXDL Regards

Quote: Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46kB private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot).

Quote: It's nice to see someone else working on this, since I seem to have failed at it. Private memory is actually part of global on AMD cards, so storing it in private isn't any better than just using global for everything; it's local that needs to be aimed for for the massive speedup. No AMD cards have more than 64KB local per workgroup, however, which makes storing it all in local impossible. I haven't tried your implementation yet, but on my own first attempt I also used global for everything, and the result was faster than the java plotter but slower than dcct's c plotter. My 2nd attempt used a 32KB local buffer I rotated through for storing the data currently being hashed, but I couldn't figure out how to get it copied to global fast enough, and the local -> global copy killed the performance. You might be interested in the kernels here: https://bitcointalk.org/index.php?topic=731923.msg8695829#msg8695829

Thanks, I will look at your kernels to see if I can find a better solution. Here is the new version. I reduced the amount of memory used from 40KB to about 1KB per unit. The only drawback is that it requires twice the global memory as before. I will look for a way to reduce this overhead later. In CPU mode, it all goes pretty well (when no graphics card is detected). The GPU mode is still kind of buggy on my graphics card (an old GeForce 9300M GS), don't know the exact reason yet. Sometimes it works, sometimes not. I will try to fix this issue tomorrow. Here are the files:
gpuPlotGenerator-src-1.1.0.7z : https://mega.co.nz/#!iYFWAL5B!BvtmRQ5qGq4gGwjDglFNtDtNIX4LDaUvATBtClBdTlQ
gpuPlotGenerator-bin-win-x86-1.1.0.7z : https://mega.co.nz/#!aBVGBBQD!tBsRtb8VrHR12_anrFTrl41U0fPQu_OqFnxyi5nCyBY
For the Linux users, the Makefile has a new target named "dist" that builds and copies all the necessary files to the "bin" directory. The executable usage is:
./gpuPlotGenerator <path> <address> <start nonce> <nonces> <stagger size> <threads>
<path> : the path to the plots directory
<threads> : number of parallel threads for each work group

Quote: So the usage would be like this: "D:/gpuPlotGenerator <numerical_account_address> 0 819200 4096 <cpu/gpu_threads?>". Is that format correct? Is the thread count needed for GPU plotting? What's the nonce/minute rate?

Hi, this is still a buggy, early-stage version. I post it here to get feedback from people who own more powerful graphics cards (the behaviour may vary from one card to another). But yes, the final usage would be the one you mentioned. The threads parameter is the number of threads used in the local work group. In GPU mode, the value should be a multiple of 64; 256 is the typical value for most cards.

Quote: Ok, I made a test with my R9 290. I put 256 in threads (apparently can't put more), and in 1min15 I generated from nonce 888597 to nonce 900885, so 9830 nonces/minute. Not bad at all.
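For anyone double-checking the figures in this exchange, here is a quick sanity check (my own illustrative sketch, not code from gpuPlotGenerator): one nonce of plot data is 4096 scoops of 64 bytes each, and the R9 290 rate follows from the reported nonce range and the 75-second run.
Code:
#include <stdio.h>

/* Illustrative arithmetic check of the numbers quoted above;
   this is not part of gpuPlotGenerator. */
int main(void) {
    /* One nonce of plot data: 4096 scoops * 64 bytes each. */
    long bytes_per_nonce = 4096L * 64L;
    printf("bytes per nonce: %ld (%ld KB)\n",
           bytes_per_nonce, bytes_per_nonce / 1024);        /* 262144 (256 KB) */

    /* R9 290 test: nonces 888597..900885 in 1 min 15 s. */
    long nonces = 900885L - 888597L;                        /* 12288 nonces */
    double minutes = 75.0 / 60.0;
    printf("rate: %.0f nonces/minute\n", nonces / minutes); /* ~9830 */
    return 0;
}
And plugging the thread's example values into the v1.1.0 usage string gives an invocation along the lines of ./gpuPlotGenerator plots <numerical_account_address> 0 819200 4096 256 (the "plots" path and the 256 threads value are illustrative, following the multiple-of-64 advice above).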
|
|
|
|
yellowduck2
|
|
September 09, 2014, 07:52:30 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Don't know about a year, but it could maybe for a couple of months. Server CPUs are probably more tolerant of 95-100% loads than common desktop CPUs for running a year. Most people here are running on home PCs, and this coin is only 1 month old; a couple of months down the road, will we start seeing people complaining about their CPUs dying? If they kill their CPU, motherboard, RAM, I don't think it's easy to get ROI.
|
|
|
|
Irontiga
|
|
September 09, 2014, 07:55:13 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Don't know about a year, but it could maybe for a couple of months. Server CPUs are probably more tolerant of 95-100% loads than common desktop CPUs for running a year. Most people here are running on home PCs, and this coin is only 1 month old; a couple of months down the road, will we start seeing people complaining about their CPUs dying? If they kill their CPU, motherboard, RAM, I don't think it's easy to get ROI. Home users don't plot 1000TB... You only use high CPU while plotting.
|
|
|
|
SpeedDemon13
|
|
September 09, 2014, 07:55:24 AM |
|
Quote: Ok, I made a test with my R9 290. I put 256 in threads (apparently can't put more), and in 1min15 I generated from nonce 888597 to nonce 900885, so 9830 nonces/minute. Not bad at all.
What's your GPU load and temps?
|
|
|
|
|
Matt9301
Legendary
Offline
Activity: 1652
Merit: 1009
|
|
September 09, 2014, 08:01:31 AM |
|
|
|
|
|
duncan_idaho
|
|
September 09, 2014, 08:02:47 AM |
|
I did ... But I don't see any statistics on the pool main page.
|
|
|
|
uray
|
|
September 09, 2014, 08:03:58 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
hundreds of years
|
|
|
|
coinits
Legendary
Offline
Activity: 1582
Merit: 1019
011110000110110101110010
|
|
September 09, 2014, 08:06:05 AM |
|
For most people, they're better off buying coin. Just set a low buy order and your chance of return is far greater than mining, and much faster. Buying coin can get ROI in hours or days. Mining will take weeks to months.
Price tanked over the past few hours. Dumpers be dumping.
|
|
|
|
duncan_idaho
|
|
September 09, 2014, 08:09:16 AM |
|
uray, did you receive my PM?
|
|
|
|
victorteoh
Sr. Member
Offline
Activity: 334
Merit: 250
🌟 æternity🌟 blockchain🌟
|
|
September 09, 2014, 08:09:34 AM |
|
Anyone else facing virus notifications? My Kaspersky detects "Dangerous URL blocked, http://123.249.35.13:8123/burst" every half an hour or so. Why is this happening? Clients exchange addresses with each other so you can connect to more people. Someone's probably just running a burst client on an IP Kaspersky blocks for some other reason. The same blocked IP always appears on my Kaspersky. So it's OK if my Kaspersky blocks it, right? Should be no problem with mining and getting shares?
|
|
|
|
vaxman
Member
Offline
Activity: 99
Merit: 10
|
|
September 09, 2014, 08:10:26 AM |
|
anybody know how long a CPU can last running at 95-100% 24/7?
Can run fine for a long period, depending on the cooling and whether the CPU is overclocked too high/too long. Stock, no extra cooling, can it run for at least a year? Don't know about a year, but it could maybe for a couple of months. Server CPUs are probably more tolerant of 95-100% loads than common desktop CPUs for running a year. Regardless, it should run with no issues if everything around it runs properly, i.e. motherboard, PSU, etc. Oh people, that makes me cringe. My mail server is an ancient 23-year-old box, still humming away with up-to-date software. Load is at 100%, because it has been running the dnetc client from 2003 onwards. Not that that makes any particular sense. Anyway, if your machine has no massive design flaws (power supply at >90%, inadequate cooling, bad manufacturing (heat transfer, bad capacitors, oxidizing contacts)), it should keep on running for years, even at 100% load. Stop irritating the rookies, please.
|
|
|
|
AnonymousEconomist
Full Member
Offline
Activity: 154
Merit: 100
Add me on Twitter! @AnonOnAMoose
|
|
September 09, 2014, 08:11:17 AM |
|
For most people, they're better off buying coin. Just set a low buy order and your chance of return is far greater than mining, and much faster. Buying coin can get ROI in hours or days. Mining will take weeks to months.
Price tanked over the past few hours. Dumpers be dumping. Well, it was either going straight up or down, as illustrated in the chart I posted... we could use some good news... or, as I stated a few times previously, SLOW DOWN THE INFLATION
|
|
|
|
Avaahnaa
Member
Offline
Activity: 67
Merit: 10
|
|
September 09, 2014, 08:12:01 AM |
|
Quote: Ok, I made a test with my R9 290. I put 256 in threads (apparently can't put more), and in 1min15 I generated from nonce 888597 to nonce 900885, so 9830 nonces/minute. Not bad at all.
Hi, how exactly do you plot in 'GPU mode'? I also have an R9 290, but was told that GPU plotting wasn't a thing.
|
|
|
|
BurstBurst
|
|
September 09, 2014, 08:12:42 AM |
|
Quote: Ok, I made a test with my R9 290. I put 256 in threads (apparently can't put more), and in 1min15 I generated from nonce 888597 to nonce 900885, so 9830 nonces/minute. Not bad at all.
What is your full parameter?
|
|
|
|
|