djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
June 30, 2014, 12:34:57 PM |
|
Something strange: I am able to get the 780ti up to 3477 kHash/s (on X15) with 97% GPU usage, however I lose 200 kHash/s on the 750ti (it also compiles a lot faster).
Apparently this has to do with how the conversion from uint64 to uint32 is made:
faster on the 750ti: __double_as_longlong(__hiloint2double(HI, LO))
faster on the 780ti: (unsigned long long)LO | (((unsigned long long)HI) << 32ULL);
What is strange is I thought that GPUs were optimised for real numbers (rather than integers).
Very strange. Maybe compute 5 cards are better at that **shrugs shoulder** Are you still using the 2 uint32's being merged?
For the moment yes. The best thing would be to just get a pointer to the texture memory, because the long long is just there, but I can only access it in smaller chunks.
|
djm34 facebook page | BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze | Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
|
|
|
bigjme
|
|
June 30, 2014, 12:40:47 PM |
|
Can you message me the code for reading the memory values out that gives you the 2 uint32's? I'm at work so I can't load any code files up, only view text.
I will have a look.
|
Owner of: cudamining.co.uk
|
|
|
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
June 30, 2014, 12:45:11 PM |
|
Can you message me the code for reading the memory values out that gives you the 2 uint32's? I'm at work so I can't load any code files up, only view text.
I will have a look.
You mean the .cu file? By the way, could anyone with older cards (Fermi & Kepler) have a look at the performance with this executable: https://mega.co.nz/#!tMl0TAAK!UOlmfbmhT1_UEDC7NSYUcNFTTrdVDIp2Q6DJ1Riok5U
|
|
|
|
bigjme
|
|
June 30, 2014, 12:53:23 PM |
|
Just had a look and I can't see any way to do it without merging. There are ways to get 64-bit values out of the memory, but I can't find them.
|
|
|
|
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
June 30, 2014, 01:01:13 PM |
|
Just had a look and I can't see any way to do it without merging. There are ways to get 64-bit values out of the memory, but I can't find them.
I was thinking of using the funnel shift (without actually shifting); it is supposed to work with the uint64 type (at least I read that, however I can't find any variant returning a long long). Not to mention I'm not entirely sure it is for the same type of memory... (such a mess for a beginner...)
|
|
|
|
ahu
Newbie
Offline
Activity: 15
Merit: 0
|
|
June 30, 2014, 02:18:04 PM |
|
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.
[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)
...more performance gain than I thought. Really nice, thanks! Missing your donation address as a signature or on GitHub.
What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.
|
|
|
|
|
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
|
|
June 30, 2014, 02:25:16 PM |
|
Can you message me the code for reading the memory values out that gives you the 2 uint32's? I'm at work so I can't load any code files up, only view text.
I will have a look.
You mean the .cu file? By the way, could anyone with older cards (Fermi & Kepler) have a look at the performance with this executable: https://mega.co.nz/#!tMl0TAAK!UOlmfbmhT1_UEDC7NSYUcNFTTrdVDIp2Q6DJ1Riok5U
Did my previous exe (or Amph's exe) work for you? (Because if you had a problem with my previous exe, well, it won't change...)
|
|
|
|
tsiv
|
|
June 30, 2014, 02:56:52 PM |
|
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.
[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)
...more performance gain than I thought. Really nice, thanks! Missing your donation address as a signature or on GitHub.
What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.
My 6x750 Ti rig is drawing 310 W from the wall at 280 H/s per card, on a BTC H81 with a Celeron G1820. Somewhere slightly under 50 W per card, plus whatever the rest of the rig pulls.
Edit: Put some addresses on the GitHub front page earlier today, btw. My "member rank" on these forums allows for a massive 50-character signature, pretty much good for nothing.
|
|
|
|
ikanunaki
|
|
June 30, 2014, 03:03:13 PM |
|
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.
[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)
...more performance gain than I thought. Really nice, thanks! Missing your donation address as a signature or on GitHub.
What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.
My 6x750 Ti rig is drawing 310 W from the wall at 280 H/s per card, on a BTC H81 with a Celeron G1820. Somewhere slightly under 50 W per card, plus whatever the rest of the rig pulls.
Same for me, but with the old compiler version (about 220 H/s per card). Where can I find this new update, but in the w64 compiler version?
|
|
|
|
Quicken
|
|
June 30, 2014, 03:09:14 PM |
|
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh) which is compute 3.0 compatible, but I got a message in windows that the driver had crashed and recovered and ccminer reported outrageously high hashrates while never submitting anything successfully to the pool and my GPU meter showing basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?
I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here: https://bitcointalk.org/index.php?topic=656841.msg7600519#msg7600519
Any help appreciated.
|
|
|
|
yellowduck2
|
|
June 30, 2014, 03:14:48 PM |
|
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh) which is compute 3.0 compatible, but I got a message in windows that the driver had crashed and recovered and ccminer reported outrageously high hashrates while never submitting anything successfully to the pool and my GPU meter showing basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?
I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here: https://bitcointalk.org/index.php?topic=656841.msg7600519#msg7600519
Any help appreciated.
Same problem using Windows 8.1. Any optimized version for Windows 8.1?
|
|
|
|
Balitorium
Newbie
Offline
Activity: 23
Merit: 0
|
|
June 30, 2014, 03:31:15 PM |
|
Time to show some gratitude: the binary release readme.txt contains tsiv's "motivational addresses". Pushed some VTC and XMR your way.
|
|
|
|
tsiv
|
|
June 30, 2014, 03:33:03 PM |
|
Time to show some gratitude: the binary release readme.txt contains tsiv's "motivational addresses". Pushed some VTC and XMR your way.
Not gonna lie, working on this algo is making a huge dent in the whiskey fund. Cheers mate
|
|
|
|
tsiv
|
|
June 30, 2014, 03:35:51 PM |
|
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh) which is compute 3.0 compatible, but I got a message in windows that the driver had crashed and recovered and ccminer reported outrageously high hashrates while never submitting anything successfully to the pool and my GPU meter showing basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?
I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here: https://bitcointalk.org/index.php?topic=656841.msg7600519#msg7600519
Any help appreciated.
Same problem using Windows 8.1. Any optimized version for Windows 8.1?
Cross-post from the bounty thread:
I'm trying to use the pre-compiled ccminer-cryptonight_20140630_r2 ccminer on Windows 8.1 with a GTX 750 Ti and seem to be having some problems getting results. I am pointing the miner at minexmr as indicated on their website: http://minexmr.com/ with a batch file as follows:
C:\monero\ccminer-cryptonight_20140630_r2\ccminer.exe -t 1 -d gtx750ti -o stratum+tcp://pool.minexmr.com:7777 -u <address> -p x
At launch, I get a series of results like:
GPU #0: GeForce GTX 750 Ti, using 40 blocks of 8 threads
Pool set diff to 15000
GPU #0: GeForce GTX 750 Ti, 93.81 H/s
then a popup says the display driver stopped responding and has recovered. After that I see results with crazy high numbers of hashes like this:
GPU #0: GeForce GTX 750 Ti, 163611988.12 H/s
interspersed with 'stratum detected new block', but no accepted results within a half-hour check period. I also tried downloading the previous release, but switching to that one makes the cmd.exe pop up and vanish immediately on my system (Windows 8.1, driver 337.88). The GTX 750 Ti is not attached to a display output. Any help appreciated.
Not sure what's going wrong. Pretty sure it's still a TDR issue: the biggest part of the cryptonight core still gets run as a single launch, and it just might take 2+ seconds, so Windows with the default TDR delay considers the GPU stuck and does a driver reset. See https://bitcointalk.org/index.php?topic=656841.msg7529269#msg7529269 for a workaround. I plan on looking at splitting the work down; at a quick glance it looks like it could be run piece by piece. It will probably hurt performance a bit: the encryption keys have to be saved and reloaded on every kernel launch, and the launches themselves have some overhead.
My thought was to make it a cmd line option, allowing the user to decide how much (or if) they want to split it up. Maybe add a few microseconds of sleep between the launches, to stop the display freezing for 1+ seconds at a time and make the computer at least semi-usable.
|
|
|
|
QuadraQ
|
|
June 30, 2014, 03:38:13 PM |
|
Does the new cryptonight release of ccminer support anything other than the 750 Ti? I tried to get it running on my laptop with a GeForce GT 750M (don't laugh) which is compute 3.0 compatible, but I got a message in windows that the driver had crashed and recovered and ccminer reported outrageously high hashrates while never submitting anything successfully to the pool and my GPU meter showing basically zero activity. Do I need to do more than put the "-a cryptonight" flag in the batch file?
I get exactly the same problem with a 750 Ti on Windows 8.1, as described in detail here: https://bitcointalk.org/index.php?topic=656841.msg7600519#msg7600519
Any help appreciated.
Same problem using Windows 8.1. Any optimized version for Windows 8.1?
Ah, my laptop is also running Windows 8.1, but it never occurred to me that could be the problem. Please, tsiv, is there any way to fix this? Have mercy on us!
|
|
|
|
policymaker
Full Member
Offline
Activity: 210
Merit: 100
Crypto Currency Supporter
|
|
June 30, 2014, 03:49:48 PM |
|
Hello everyone,
I am using ccminer v1, which gets me 1800 kH/s from my 660 Ti, and ccminer 1.2 for 2200 kH/s; it just makes my card lag a bit when using the PC. Have there been any links to a better/more optimized ccminer other than the GitHub release? Thank you very much!
|
|
|
|
Starlightbreaker
Legendary
Offline
Activity: 1764
Merit: 1006
|
|
June 30, 2014, 04:36:30 PM Last edit: September 15, 2016, 10:50:00 AM by Starlightbreaker |
|
from 1200h/s to 1400h/s
fuck it, time to sell my amd rigs.
but it isn't profitable right? 2-3 monero a day?
My profit calc says otherwise.
Mining using the cryptonight miner is very unstable. The hashrate shown on the miner and on the pool is very different; most of the time the pool hashrate can be 50% lower than what is shown on the miner. I think the cryptonight miner's share submission to the pool needs improvement, because the pool doesn't receive shares often, which causes the hashrate shown on the pool to be cut down most of the time. The only time the pool shows the same hashrate as the miner is on the initial connection; after that the hashrate starts to decay. The hashrate shown on the miner is just a guideline; what matters most is how much hash the pool receives, because that is what they are going to pay you for. So what you see on the profit calc needs to be reduced by 30-40% due to the unstable hashrate.
Meh, it's pretty much accurate for me, still within ~5-10%, so no complaints there.
from 1200h/s to 1400h/s
fuck it, time to sell my amd rigs.
For XMR: the AMD R9 290X does about 750 h/s at stock (gpu@1000, Tri-X). The pair R9 290X @1000 + R9 290 @900 (underclocked for noise and heat, -100 mV) gives me about 1390 h/s. Power usage is about 330 W for the pair. Maybe when the BBR ccminer arrives, I will switch to Nvidia; up to now AMD is good enough.
I still have 15 AMD GPUs: 7 7950s, 5 280Xs and 3 7970s, so I'll probably just switch out the 7950s.
|
|
|
|
c18machine
Newbie
Offline
Activity: 59
Merit: 0
|
|
June 30, 2014, 04:40:29 PM |
|
from 1200h/s to 1400h/s
fuck it, time to sell my amd rigs.
but it isn't profitable right? 2-3 monero a day?
My profit calc says otherwise.
Mining using the cryptonight miner is very unstable. The hashrate shown on the miner and on the pool is very different; most of the time the pool hashrate can be 50% lower than what is shown on the miner. I think the cryptonight miner's share submission to the pool needs improvement, because the pool doesn't receive shares often, which causes the hashrate shown on the pool to be cut down most of the time. The only time the pool shows the same hashrate as the miner is on the initial connection; after that the hashrate starts to decay. The hashrate shown on the miner is just a guideline; what matters most is how much hash the pool receives, because that is what they are going to pay you for. So what you see on the profit calc needs to be reduced by 30-40% due to the unstable hashrate.
Meh, it's pretty much accurate for me, still within ~5-10%, so no complaints there.
I think that might depend on the pool/difficulty; that's why you have that issue. Or maybe you just need more RAM, not sure.
|
|
|
|
ivcelmik
|
|
June 30, 2014, 05:06:42 PM |
|
Took the easy gains first, around 18% boost on 750 Ti. Still trying to wrap my head around the hard part.
[2014-06-30 00:11:59] GPU #5: GeForce GTX 750 Ti, 286.07 H/s
[2014-06-30 00:11:59] GPU #1: GeForce GTX 750 Ti, 286.72 H/s
[2014-06-30 00:11:59] GPU #2: GeForce GTX 750 Ti, 285.23 H/s
[2014-06-30 00:11:59] GPU #0: GeForce GTX 750 Ti, 284.80 H/s
[2014-06-30 00:11:59] GPU #4: GeForce GTX 750 Ti, 284.69 H/s
[2014-06-30 00:11:59] GPU #3: GeForce GTX 750 Ti, 284.42 H/s
[2014-06-30 00:12:04] GPU #0: GeForce GTX 750 Ti, 274.37 H/s
[2014-06-30 00:12:04] accepted: 4/4 (100.00%), 1701.50 H/s (yay!!!)
...more performance gain than I thought. Really nice, thanks! Missing your donation address as a signature or on GitHub.
What's the power usage per GPU with the new build? It seems GPUs are getting close to Xeons in efficiency.
Can you help me with your conf file? I only get 260 H/s. Thanks in advance.
|
|
|
|
|