bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
November 29, 2015, 10:38:28 PM
Quote: @bathrobehero ....... What is a swapfile? thx
Pagefile or virtual memory.
Not your keys, not your coins!
tbearhere
Legendary
Offline
Activity: 3290
Merit: 1003
November 29, 2015, 11:22:29 PM
Quote: Pagefile or virtual memory.
Yes..thx.....I never heard it called a swapfile.
chrysophylax
Legendary
Offline
Activity: 3122
Merit: 1093
--- ChainWorks Industries ---
November 29, 2015, 11:36:01 PM
Quote: Yes..thx.....I never heard it called a swapfile.
linux / unix / posix based systems use ( and call them ) swapfiles ( and swap partitions ) ... it seems simple enough that if you 'swap' data from memory to a hard drive 'file' - then back again ... the file was sure to be called a 'swap' 'file' ... logical - right microsoft? ... ahem ... #crysx
chrysophylax
Legendary
Offline
Activity: 3122
Merit: 1093
--- ChainWorks Industries ---
November 30, 2015, 01:24:59 AM
Quote: Intel manuals for x86 refer to it as paging. There's actually a good reason for this - pages of memory are mapped into physical memory (and other places). If a page is not in physical memory, it's marked "not present", and any access to it triggers a page fault; the OS's handler for that loads the requested page into memory and restarts the faulting access.
and hence why posix systems simplified it into human readable terms ... ho hum ... its all geek to me ... hang on - i am one ... #crysx
Cryptozillah
November 30, 2015, 03:04:43 PM
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.
frazier34567
Member
Offline
Activity: 95
Merit: 10
November 30, 2015, 04:23:50 PM
Quote: With my 6x EVGA 750 Ti SC (no BIOS mod, no extra power connector, no OC, max 1320 MHz) on Lyra2REv2 it is 395 W in from the wall on a digital PSU, Windows 8.1 Pro. Quark: 420 W in from the wall. Idle: 90 W in from the wall.
Quote: Lyra2v2 on the 750 Ti should do around 5 MH/s each with the correct clocks, so your rig does around 30 MH/s if you tune it correctly. The 980 Ti only does around 18 MH/s with overclocking and costs the same as six used Tis. But the 980 Ti can of course do much better with the right code.
I currently do 4806 kH/s per card, or 28836 kH/s for the rig, but then I am not pushing my cards. The only thing I have turned up is the fan speed, at about 70% at 53°C. I am sure I could do more, but it is not worth the time at the current pay rate.
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
November 30, 2015, 04:32:46 PM
Quote: My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.
Glad to see you're running stable. I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux? From my understanding the memory/pagefile requirements increase with the number of GPUs. This suggests that all GPU threads initialize in parallel, allocate the memory at the same time, use it briefly, then free it. If the threads initialized serially, only one would require a large amount of memory at a time and would free it before the next thread initialized. The dynamic memory requirement would be reduced and larger rigs could run with less memory and smaller swap space. Make sense?
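For illustration, the serial scheme suggested above could look roughly like the sketch below. This is not ccminer's actual init path; the function names and the 512 MB figure are invented for the example:
Code:
// Hypothetical sketch: serialize each GPU thread's large temporary host
// allocation so only one staging buffer is alive at any moment, cutting
// peak RAM/pagefile demand from (N GPUs x buffer) to a single buffer.
#include <cuda_runtime.h>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

static std::mutex init_mtx;   // one thread initializes at a time

void init_gpu(int dev, size_t bytes)
{
    std::lock_guard<std::mutex> lock(init_mtx);
    cudaSetDevice(dev);

    void *d_buf = nullptr;
    cudaMalloc(&d_buf, bytes);   // device scratchpad, kept for mining

    // Large host staging buffer lives only inside the lock.
    std::vector<char> h_stage(bytes, 0);
    cudaMemcpy(d_buf, h_stage.data(), bytes, cudaMemcpyHostToDevice);
}   // h_stage freed here, before the next GPU thread enters

int main()
{
    int n = 0;
    cudaGetDeviceCount(&n);
    std::vector<std::thread> threads;
    for (int i = 0; i < n; ++i)
        threads.emplace_back(init_gpu, i, size_t(512) << 20);  // 512 MB each
    for (auto &t : threads) t.join();
    printf("initialized %d GPUs\n", n);
    return 0;
}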
bathrobehero
Legendary
Offline
Activity: 2002
Merit: 1051
ICO? Not even once.
November 30, 2015, 08:33:00 PM
Quote: From my understanding the memory/pagefile requirements increase with the number of GPUs. This suggests that all GPU threads initialize in parallel, allocate the memory at the same time, use it briefly, then free it. If the threads initialized serially, only one would require a large amount of memory at a time and would free it before the next thread initialized. Make sense?
It's definitely parallel initialization, but every piece of software I've used to monitor pagefile use never showed more than a few MB, tops. But if you run multiple instances of ccminer on the same device, then it will - depending on the algo - use a huge amount of memory, and of course pagefile if needed. It's almost like it needs memory/pagefile just in case the GPU runs out of it.
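One way to watch this from the outside on Windows is to poll the system-wide commit charge (RAM plus pagefile) with the Win32 call GlobalMemoryStatusEx; a minimal standalone sketch, not part of ccminer:
Code:
// Polls the commit charge once per second. Commit grows when any process
// allocates pagefile-backed memory, even if the pages are barely touched.
#include <windows.h>
#include <cstdio>

int main()
{
    for (;;) {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);
        // ullTotalPageFile/ullAvailPageFile describe the commit limit,
        // i.e. physical RAM plus the current pagefile size.
        printf("commit used: %llu MB of %llu MB\n",
               (ms.ullTotalPageFile - ms.ullAvailPageFile) >> 20,
               ms.ullTotalPageFile >> 20);
        Sleep(1000);
    }
}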
Not your keys, not your coins!
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
November 30, 2015, 09:00:42 PM
Quote: Glad to see you're running stable. I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux? From my understanding the memory/pagefile requirements increase with the number of GPUs.
The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM. The memory is allocated first in RAM/pagefile, then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying portions of what has been allocated back to CPU RAM...) (provided I am not too wrong in my representation of how it works...)
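If that picture is right, the pattern would look roughly like the sketch below; this is one reading of the description above, not ccminer's exact code, and the buffer size is arbitrary:
Code:
// Per-GPU init as described: allocate VRAM, stage the data through a
// host buffer (which counts against RAM/pagefile commit), and keep the
// host buffer around for possible device-to-host copies later.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>

int main()
{
    const size_t bytes = size_t(256) << 20;   // 256 MB, for illustration

    void *d_buf = nullptr;
    if (cudaMalloc(&d_buf, bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    // Host staging buffer: commit stays reserved as long as it is held.
    char *h_stage = (char *)malloc(bytes);
    memset(h_stage, 0, bytes);
    cudaMemcpy(d_buf, h_stage, bytes, cudaMemcpyHostToDevice);

    // ... kernels run against d_buf; h_stage is kept so that portions of
    // the device buffer can be copied back cheaply ...
    cudaMemcpy(h_stage, d_buf, bytes, cudaMemcpyDeviceToHost);

    free(h_stage);
    cudaFree(d_buf);
    return 0;
}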
djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
AzzAz
Legendary
Offline
Activity: 1030
Merit: 1006
November 30, 2015, 09:21:35 PM
Quote: The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM. The memory is allocated first in RAM/pagefile, then copied to GPU VRAM (however, it is never deallocated...)
Definitely important. My "problematic" rig (miner crashes) had a small pagefile. I added RAM and the miner then worked on Lyra2v2 and X13, but neoscrypt still didn't. Then I increased the pagefile and voila! - neoscrypt works too... So is that the most memory-dependent algo?
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
November 30, 2015, 09:55:33 PM
Quote: The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM. The memory is allocated first in RAM/pagefile, then copied to GPU VRAM (however, it is never deallocated...)
So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation strategy being better for speed, but if the CPU is swapping to disk, it would be slower than dynamic memory allocation.
djm34
Legendary
Offline
Activity: 1400
Merit: 1050
November 30, 2015, 10:11:31 PM
Quote: So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation strategy being better for speed, but if the CPU is swapping to disk, it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.
djm34 facebook page
BTC: 1NENYmxwZGHsKFmyjTc5WferTn5VTFb7Ze
Pledge for neoscrypt ccminer to that address: 16UoC4DmTz2pvhFvcfTQrzkPTrXkWijzXw
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
November 30, 2015, 10:40:31 PM
Quote: This memory isn't used anymore once it has been allocated to the VRAM.
When memory is not deallocated once it is no longer needed, that's usually called a leak. Is this something that cudaMalloc does transparently, or does the application have any control?
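The application does control its own host-side staging: for example, a pinned buffer allocated with cudaMallocHost can be released as soon as the upload completes. A sketch of that pattern follows; whatever the CUDA driver reserves internally is outside the application's reach:
Code:
// Explicit, application-controlled staging: allocate pinned host memory,
// upload, then free the host side immediately instead of holding it.
#include <cuda_runtime.h>
#include <cstring>

int main()
{
    const size_t bytes = size_t(128) << 20;   // 128 MB, for illustration

    void *d_buf = nullptr;
    cudaMalloc(&d_buf, bytes);

    void *h_stage = nullptr;
    cudaMallocHost(&h_stage, bytes);          // page-locked host memory
    memset(h_stage, 0, bytes);
    cudaMemcpy(d_buf, h_stage, bytes, cudaMemcpyHostToDevice);
    cudaFreeHost(h_stage);                    // host commit released right away

    // ... kernels keep using d_buf; only device memory stays allocated ...
    cudaFree(d_buf);
    return 0;
}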
bileq
Legendary
Offline
Activity: 1288
Merit: 1068
November 30, 2015, 10:50:32 PM
I always get: Cuda error in func 'x11_simd512_cpu_init' at line 791 : out of memory. I don't expect any profit, I just want to test my GPU. It's a GT9800 with 4GB memory, CUDA 7.5 installed. Any working config for me?
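That message is cudaMalloc failing inside the x11 SIMD init. A generic way to see how much VRAM is actually free and to report the CUDA error instead of dying (a standalone sketch, not a ccminer patch):
Code:
// Query free/total device memory, then attempt a large allocation and
// print the error string on failure rather than aborting.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    size_t free_b = 0, total_b = 0;
    cudaSetDevice(0);
    cudaMemGetInfo(&free_b, &total_b);
    printf("VRAM free: %zu MB of %zu MB\n", free_b >> 20, total_b >> 20);

    void *p = nullptr;
    cudaError_t err = cudaMalloc(&p, size_t(1) << 30);  // try 1 GB
    if (err != cudaSuccess)
        printf("cudaMalloc: %s\n", cudaGetErrorString(err));
    else
        cudaFree(p);
    return 0;
}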
joblo
Legendary
Offline
Activity: 1470
Merit: 1114
December 01, 2015, 12:14:53 AM
Quote: Wrong - it's only LEAKED if the pointer to that memory is lost, meaning you couldn't deallocate it if you wanted to - and it happens in repeated code. To "leak," you actually have to slowly continue to eat memory until there's no more left.
You are technically correct; perhaps "hog" would be a better term. That doesn't change the point that large amounts of CPU memory remain allocated after they are no longer needed.
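The distinction in illustrative code (hypothetical functions, not from any miner):
Code:
// A true leak loses the pointer, so repeated calls eat memory forever.
// A "hog" keeps the allocation reachable: wasteful, but recoverable.
#include <cstdlib>

void leak()
{
    void *p = malloc(1 << 20);   // 1 MB
    (void)p;                     // pointer lost at return: can never be freed
}

static void *g_held;             // allocated once, still reachable

void hog()
{
    g_held = malloc(1 << 20);    // freeable whenever the app decides
}

int main()
{
    for (int i = 0; i < 10; ++i) leak();   // 10 MB gone for good
    hog();                                 // 1 MB held, but recoverable
    free(g_held);
    return 0;
}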
scryptr
Legendary
Offline
Activity: 1798
Merit: 1028
December 01, 2015, 12:23:53 AM
Quote: I always get: Cuda error in func 'x11_simd512_cpu_init' at line 791 : out of memory. I don't expect any profit, I just want to test my GPU. It's a GT9800 with 4GB memory, CUDA 7.5 installed. Any working config for me?
GT9800 -- a GT9800 was top of the line once. I have a GT9800+ and it will mine scrypt at 14 kH/s with CudaMiner, early-2014-vintage software written to compile on CUDA 5.5. The GT9800 simply does not have the capacity to mine at a reasonable rate, and it will not work at all with software designed for the Maxwell chipset; the circuitry is not there. The version of ccminer written by sp_ and discussed in this thread is written specifically for the Maxwell chip architecture. --scryptr
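A quick way to check what a card actually reports, since the builds discussed in this thread target Maxwell (compute capability 5.x); a generic CUDA runtime sketch:
Code:
// Enumerate CUDA devices and print each one's compute capability.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, compute capability %d.%d%s\n",
               i, prop.name, prop.major, prop.minor,
               prop.major >= 5 ? " (Maxwell or newer)" : " (pre-Maxwell)");
    }
    return 0;
}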
MaxDZ8
December 01, 2015, 10:06:00 AM
Quote: This memory isn't used anymore once it has been allocated to the VRAM.
While this is technically correct, I can tell from experience that some drivers will still keep the address range reserved; apparently this has some benefits for driver mangling (I can see how assuming different range <--> different resource can help). Beware, CUDA is way more than your GPU or CPU; sometimes it goes through some heavy magic. How is the memory consumption being measured?
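One concrete way to measure it on Windows is the per-process commit charge (private bytes), which includes pagefile-backed allocations; a standalone sketch (link with psapi.lib):
Code:
// Reports this process's working set (physical RAM in use) and private
// commit (RAM + pagefile reservation), the number that balloons when a
// miner's host-side allocations pile up.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main()
{
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    pmc.cb = sizeof(pmc);
    GetProcessMemoryInfo(GetCurrentProcess(),
                         (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc));
    printf("working set: %zu MB, private commit: %zu MB\n",
           (size_t)pmc.WorkingSetSize >> 20, (size_t)pmc.PrivateUsage >> 20);
    return 0;
}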
theotherme
Member
Offline
Activity: 81
Merit: 10
December 01, 2015, 11:01:48 AM
Quote: Beware, CUDA is way more than your GPU or CPU; sometimes it goes through some heavy magic.
Hmm... heavy magic... (well, magical stuff is just science that isn't yet understood) The compiler has a bit of a life of its own, and depending on how things are written you might see a few magical tricks...
pallas
Legendary
Offline
Activity: 2716
Merit: 1094
Black Belt Developer
December 01, 2015, 11:45:48 AM
Quote: Hmm... heavy magic... (well, magical stuff is just science that isn't yet understood) The compiler has a bit of a life of its own, and depending on how things are written you might see a few magical tricks...
then a magician is just a scientist who is a bit ahead :-D
theotherme
Member
Offline
Activity: 81
Merit: 10
December 01, 2015, 11:56:50 AM
Quote: then a magician is just a scientist who is a bit ahead :-D
yep