Show Posts
|
Is there any way to store logs in a separate folder? Preferably without hardlinks.
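One way to sketch this with just the --logfile parameter, assuming (not confirmed here) that it accepts a relative path, and reusing the %LOGTIME% variable the bats in this thread already pass:

```
:: hypothetical bat sketch; folder and config names are placeholders
mkdir Logs 2>nul
SRBMiner-CN.exe --config Config\config.txt --pools pools.txt --logfile Logs\%LOGTIME%
```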
|
|
|
So... I reinstalled Windows and the rig works like a charm. My bet is on a Windows update.
|
|
|
Did you solve the problem? Did you try reinstalling Windows? No Windows update in my case. Weird. Thanks Doc. The first thing I did was DDU and a driver reinstall. No go. Thanks Doc. It had been working for weeks; yesterday it stopped working and I rebooted the box, and the miner started acting like that. My other miners use the same software and the same config, and all are just fine. SRBMiner-CN.exe --enablecoinforking --config Config\config-litev7.txt --pools pools.txt --logfile %LOGTIME%
Try without the --enablecoinforking parameter; maybe that's causing the issue. Interesting, then I guess it isn't that parameter causing the issue, but maybe Windows pulled some driver update. To be sure that's not the case, you could do a quick DDU and driver reinstall. Got the same issue. No CN miner works. Seems like it's Windows-update related, but even uninstalling the last updates won't help :/
|
|
|
My beliefs as well. I like "CryptoPay", and my logo can actually be adapted to such a name. I think "People Payment Network" is problematic: PP Network, Pee Pee Network, plus "network" is still somewhat unfriendly to non-tech consumers. Use it if you like it. https://cryptopay.me/ Do you like Apple or Coca-Cola? Should we take their names?
|
|
|
Go Quickto. Quick crypto.
|
|
|
2700+ H/s @ Vega 64, SRBMiner 1.8.0 — intensity 120, worksize 16, threads 2, core 1490, mem 1100, ~210 W
Thanks, what about before the fork? 2000-2100 H/s.
|
|
|
How do I set intensity through a bat file? SRBMiner-CN.exe --gpureorder --config Config\config-graft.txt --ccryptonighttype graft --cpool pool.graft.hashvault.pro:7777 --cwallet zzzzzzzzzz --logfile %LOGTIME% --apienable won't load intensity from config-graft.txt. First page or readme: Parameters:
--ccryptonighttype value (algo to use)
--cgpuid value (gpu id, comma separated values, use --listdevices to see available)
--cgpuintensity value (gpu intensity, comma separated values)
--cgputhreads value (number of gpu threads, comma separated values)
--cgpuworksize value (gpu worksize, comma separated values)
--cgpufragments value (can be 0,1,2,4,8,16,32,64,128, comma separated values)
--cgpuheavymode value (mode for heavy algos (1, 2, 3), comma separated values)
--cgputhreaddelay value (delay to maintain between same gpu threads, 1 - 1000, comma separated values)
--cgputargettemperature value (gpu temperature, comma separated values)
--cgputargetfanspeed value (gpu fan speed in RPM, comma separated values)
--cgpuofftemperature value (gpu turn off temperature, comma separated values)
--cgpuadltype value (gpu adl to use (1 or 2), comma separated values)
--cgpuoldmode value (old kernel creation mode - true or false, comma separated values)
Also on the first page: V1.8.0 . . - No more mixing of cmd line setup and config files, now it's one or the other. That's why nothing is read from config-graft.txt.
--cgpuintensity 120 won't work in a bat, and neither will --cgpuintensity 120,120,120,120,120,120,120,120,120 (9-Vega rig). I made standalone txts and they all work like a charm. 120 int @ Vega 64 = 2720 H/s.
It works in a bat too, but you need to set --cgpuid as well so the miner knows which setting belongs to which GPU. cpuid 0 = intel. Remove the word 'value' after --cgpuintensity 😀
Yeah, I saw it after upload; I just played with all the possibilities. BTW it then asks to define the number of threads, and finally I start it from a single bat: SRBMiner-CN.exe --gpureorder --ccryptonighttype graft --cpool pool.graft.hashvault.pro:7777 --cwallet zzzzzz --cgpuid 1,2,3,4,5,6,7,8,9 --cgpuintensity 120,120,120,120,120,120,120,120,120 --cgputhreads 2,2,2,2,2,2,2,2,2 --logfile %LOGTIME% --apienable
Update: --cgpuid 1,2,3,4,5,6,7,8,9 is the wrong definition; the right one is --cgpuid 0,1,2,3,4,5,6,7,8, even when I use the internal Intel graphics, which appears as GPU 0 in regedit & OverdriveNTool; otherwise 1 card is not working.
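Pulling that exchange together, a sketch of the final working single-bat launch (assembled from this thread, not from official docs; YOUR_WALLET is a placeholder for the redacted address):

```
:: sketch of the thread's conclusion: one comma-separated value per GPU,
:: and --cgpuid is 0-based even with an Intel iGPU present in the system
SRBMiner-CN.exe --gpureorder --ccryptonighttype graft --cpool pool.graft.hashvault.pro:7777 --cwallet YOUR_WALLET --cgpuid 0,1,2,3,4,5,6,7,8 --cgpuintensity 120,120,120,120,120,120,120,120,120 --cgputhreads 2,2,2,2,2,2,2,2,2 --logfile %LOGTIME% --apienable
```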
|
|
|
4 days till the Graft fork. Will you support reverse waltz? xmrig & xmr-stak will.
|
|
|
I have the same issue: it randomly closes on multi-algo but works like a charm for weeks on a single algo. Rigs of 8-9 Vegas; nothing unusual in the logs.
The developer said he hoped this would be fixed in this new version, but today I experienced it again: the miner window closed by itself and there wasn't anything unusual in the logs, just normal mining. Could you guys tell me which version of Windows 10 you are using? I have a feeling it is older than 1809. 1803 too.
|
|
|
One issue that I've been getting since earlier versions is that, while mining on a pool with multi-algo support (MO), sometimes the miner will just close for no apparent reason. The command prompt window just closes. When checking the log, nothing abnormal was happening. Sometimes the last line in the log is an algo change, sometimes it's submitting a share. The miner just closes. Any idea what may cause that, or what else I can do to test more and provide more feedback?
|
|
|
Give me an example/screenshot of how it happens, how often?
[2019-01-23 09:55:30] ERROR : Pool not responding [2019-01-23 09:55:30] Pool rejected result 0x000009CF () and then a crash.
I have the same issue with my Vega rig: poor wifi = random SRB crashes. I also sometimes see huge hashrate drops after an algo switch:
[2019-01-24 18:50:00] pool_have_job: Pool sent a new job (ID: Hgodqwlu/kpZWoc8SfpkVHJPNsfC) [bittubev2] [2019-01-24 18:50:22] hashrate: GPU0: 1936 H/s [T:52c][BUS:3] [2019-01-24 18:50:22] hashrate: GPU1: 1945 H/s [T:53c][BUS:6] [2019-01-24 18:50:22] hashrate: GPU2: 469 H/s [T:42c][BUS:9] [2019-01-24 18:50:22] hashrate: GPU3: 1790 H/s [T:55c][BUS:12] [2019-01-24 18:50:22] hashrate: GPU4: 1926 H/s [T:54c][BUS:15] [2019-01-24 18:50:22] hashrate: GPU5: 1939 H/s [T:53c][BUS:18] [2019-01-24 18:50:22] hashrate: GPU6: 1928 H/s [T:53c][BUS:22] [2019-01-24 18:50:22] hashrate: GPU7: 1942 H/s [T:54c][BUS:25] [2019-01-24 18:50:22] hashrate: GPU8: 1812 H/s [T:54c][BUS:28] [2019-01-24 18:50:22] hashrate: Total: 15687 H/s [2019-01-24 18:50:32] miner_result: Sending user result to pool [2019-01-24 18:50:32] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"Hgodqwlu/kpZWoc8SfpkVHJPNsfC","nonce":"92aa8e63","result":"23e1a60a1e8c883dd996f76f6e81161f2fa365443282b7da8139f5d383100000","algo":"cn-heavy/tube"}} [2019-01-24 18:50:32] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:50:32] miner_result: Pool accepted result 0x00001083 [333ms] [2019-01-24 18:50:34] miner_result: Sending user result to pool [2019-01-24 18:50:34] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"Hgodqwlu/kpZWoc8SfpkVHJPNsfC","nonce":"4241729c","result":"9795e7596350b8fde6c824e6b4e64b23ba7e91ee0ab775743ad526f33f0c0000","algo":"cn-heavy/tube"}} [2019-01-24 18:50:35] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:50:35] miner_result: Pool accepted result 
0x00000C3F [520ms] [2019-01-24 18:50:43] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"0505d2bda7e205f4c28e39a29b7d5cdaf46d5ccc929fc7c9644dc1b9c92e0c6365620dbd46fb1400000000fda2adffc4c3100624f4d70dc7f77fe3fadbc0c11f295d0b86833d10365e1fc402","algo":"cn-heavy/tube","variant":"tube","job_id":"iJHTHB35GVYx3EhPTkYBLUIuZYMP","target":"64230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:50:43] pool_have_job: Pool sent a new job (ID: iJHTHB35GVYx3EhPTkYBLUIuZYMP) [bittubev2] [2019-01-24 18:51:08] miner_result: Sending user result to pool [2019-01-24 18:51:08] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"iJHTHB35GVYx3EhPTkYBLUIuZYMP","nonce":"2400abaa","result":"a953779e472bfb15eee80b877ac21e8a92d1fca2ffcbecd880ab54605e020000","algo":"cn-heavy/tube"}} [2019-01-24 18:51:08] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:51:08] miner_result: Pool accepted result 0x0000025E [379ms] [2019-01-24 18:51:13] miner_result: Sending user result to pool [2019-01-24 18:51:13] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"iJHTHB35GVYx3EhPTkYBLUIuZYMP","nonce":"14d51c47","result":"f0f744240c20a9d989728380b6b945c802fa5160a2166108d3b6c3bcb0090000","algo":"cn-heavy/tube"}} [2019-01-24 18:51:13] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:51:13] miner_result: Pool accepted result 0x000009B0 [609ms] [2019-01-24 18:51:56] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"05059cbea7e205b157737b2dc6c396a1786df8bcb34c9a9181823eae67a1c0e9391eae277b7caf00000000e7e10ca3d3c1175ed9140945d720dfa40e0efa1233b665281fb829d491b3ae2e08","algo":"cn-heavy/tube","variant":"tube","job_id":"/fwCQXhH5SmeQddgTvS3d6AUQiwS","target":"65230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:51:56] pool_have_job: Pool sent 
a new job (ID: /fwCQXhH5SmeQddgTvS3d6AUQiwS) [bittubev2] [2019-01-24 18:52:04] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"0505a4bea7e205ea3c0394022bc7592b63c9596cf1a874e128a50335892f4aeb34c53d3d5443ed000000009285f4a3707fe4e0725134fc45a3a3fbdcf99d92b5fa4f04d00fc72971cc3c8f01","algo":"cn-heavy/tube","variant":"tube","job_id":"hG7D7QMA3J65lU7ciVfwh/4LK4NQ","target":"65230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:52:04] pool_have_job: Pool sent a new job (ID: hG7D7QMA3J65lU7ciVfwh/4LK4NQ) [bittubev2] [2019-01-24 18:52:22] hashrate: GPU0: 1934 H/s [T:52c][BUS:3] [2019-01-24 18:52:22] hashrate: GPU1: 1945 H/s [T:54c][BUS:6] [2019-01-24 18:52:22] hashrate: GPU2: 490 H/s [T:41c][BUS:9] [2019-01-24 18:52:22] hashrate: GPU3: 1783 H/s [T:54c][BUS:12] [2019-01-24 18:52:22] hashrate: GPU4: 1922 H/s [T:55c][BUS:15] [2019-01-24 18:52:22] hashrate: GPU5: 1938 H/s [T:55c][BUS:18] [2019-01-24 18:52:22] hashrate: GPU6: 1928 H/s [T:54c][BUS:22] [2019-01-24 18:52:22] hashrate: GPU7: 1934 H/s [T:55c][BUS:25] [2019-01-24 18:52:22] hashrate: GPU8: 1812 H/s [T:54c][BUS:28] [2019-01-24 18:52:22] hashrate: Total: 15686 H/s [2019-01-24 18:52:27] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"0505bbbea7e20533c6504517761841d4e9d7458626fb1e1bf934f37d687279ea706b5ccaad44c00000000060080b6ecb7a69d7b6342df785edf225ab259c0a6502c4a5ed0a2c1931b7f1b904","algo":"cn-heavy/tube","variant":"tube","job_id":"WNDolPJldBJEIKgP3vEUGN7jIaSt","target":"65230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:52:27] pool_have_job: Pool sent a new job (ID: WNDolPJldBJEIKgP3vEUGN7jIaSt) [bittubev2] [2019-01-24 18:52:30] miner_result: Sending user result to pool [2019-01-24 18:52:30] json_send: {"id":1,"jsonrpc": 
"2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"WNDolPJldBJEIKgP3vEUGN7jIaSt","nonce":"d3ee388e","result":"1a1c2a5f4f73d9480b08b269dcd2be66706f6de0f0e86f13af91c0d77b0e0000","algo":"cn-heavy/tube"}} [2019-01-24 18:52:31] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:52:31] miner_result: Pool accepted result 0x00000E7B [526ms] [2019-01-24 18:52:34] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"030385bea7e2051791f0b34606a150e54e44d58052672104b28398d5cd1b3949850182bb16e2ef000000001a57bedde54e2a33e586631eabebdb74d590f705a29f7c9dde0b3bf202bd7a9d02","algo":"cn-heavy/xhv","variant":"xhv","job_id":"A4P/lVmVqzqyxBp6F2ToUCNy5sDF","target":"65230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:52:34] algo_switch: Pool requested algo switch to [haven] [2019-01-24 18:52:47] ocl_release: Device BUS_ID25 thread0 resources released [2019-01-24 18:52:47] ocl_release: Device BUS_ID25 thread1 resources released [2019-01-24 18:52:47] ocl_release: Device BUS_ID28 thread2 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID28 thread3 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID12 thread4 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID12 thread5 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID6 thread6 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID6 thread7 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID22 thread8 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID22 thread9 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID18 thread10 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID18 thread11 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID3 thread12 resources released [2019-01-24 18:52:48] ocl_release: Device BUS_ID3 thread13 resources released [2019-01-24 18:52:48] ocl_release: 
Device BUS_ID15 thread14 resources released [2019-01-24 18:52:49] ocl_release: Device BUS_ID15 thread15 resources released [2019-01-24 18:52:49] ocl_release: Device BUS_ID9 thread16 resources released [2019-01-24 18:52:49] ocl_release: Device BUS_ID9 thread17 resources released [2019-01-24 18:52:49] AMD Platform ID: 0 [2019-01-24 18:52:49] Created OCL context [2019-01-24 18:52:49] Using fragments 1 for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Using heavy_mode 1 for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Using thread_delay 319 for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Created OCL command queue for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Created OCL input buffer for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Created OCL output buffer for DeviceID 0 (Thread 0) [2019-01-24 18:52:49] Loading [haven-old] kernel for DEVICE BUS_ID[25] ... [2019-01-24 18:52:49] ctx->Program for DeviceID 0 (Thread 0) loaded [2019-01-24 18:52:49] Using fragments 1 for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Using heavy_mode 1 for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Using thread_delay 319 for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Created OCL command queue for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Created OCL input buffer for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Created OCL output buffer for DeviceID 0 (Thread 1) [2019-01-24 18:52:49] Loading [haven-old] kernel for DEVICE BUS_ID[25] ... [2019-01-24 18:52:49] ctx->Program for DeviceID 0 (Thread 1) loaded [2019-01-24 18:52:49] Using fragments 1 for DeviceID 1 (Thread 2) [2019-01-24 18:52:49] Using heavy_mode 1 for DeviceID 1 (Thread 2) [2019-01-24 18:52:49] Using thread_delay 319 for DeviceID 1 (Thread 2) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 1 (Thread 2) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 1 (Thread 2) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 1 (Thread 2) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[28] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 1 (Thread 2) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 1 (Thread 3) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[28] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 1 (Thread 3) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 2 (Thread 4) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[12] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 2 (Thread 4) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 2 (Thread 5) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[12] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 2 (Thread 5) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 3 (Thread 6) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[6] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 3 (Thread 6) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 3 (Thread 7) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[6] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 3 (Thread 7) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 4 (Thread 8) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[22] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 4 (Thread 8) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 4 (Thread 9) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[22] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 4 (Thread 9) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 5 (Thread 10) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[18] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 5 (Thread 10) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 5 (Thread 11) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[18] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 5 (Thread 11) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 6 (Thread 12) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[3] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 6 (Thread 12) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 6 (Thread 13) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[3] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 6 (Thread 13) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 7 (Thread 14) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[15] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 7 (Thread 14) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 7 (Thread 15) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[15] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 7 (Thread 15) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 8 (Thread 16) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[9] ... [2019-01-24 18:52:50] ctx->Program for DeviceID 8 (Thread 16) loaded [2019-01-24 18:52:50] Using fragments 1 for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Using heavy_mode 1 for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Using thread_delay 319 for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Created OCL command queue for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Created OCL input buffer for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Created OCL output buffer for DeviceID 8 (Thread 17) [2019-01-24 18:52:50] Loading [haven-old] kernel for DEVICE BUS_ID[9] ... 
[2019-01-24 18:52:50] ctx->Program for DeviceID 8 (Thread 17) loaded [2019-01-24 18:52:51] GPU0: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU1: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU2: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU3: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU4: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU5: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU6: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU7: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] GPU8: [I: 62.0][W: 8][T: 2][F: 1] [2019-01-24 18:52:51] pool_have_job: Pool sent a new job (ID: A4P/lVmVqzqyxBp6F2ToUCNy5sDF) [haven] [2019-01-24 18:53:10] miner_result: Sending user result to pool [2019-01-24 18:53:10] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"A4P/lVmVqzqyxBp6F2ToUCNy5sDF","nonce":"3904721c","result":"ef4a5222d5ef7f8bae1c0ee0a100a816b8f8d754f30041a0542c7997db1f0000","algo":"cn-heavy/xhv"}} [2019-01-24 18:53:10] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:53:10] miner_result: Pool accepted result 0x00001FDB [320ms] [2019-01-24 18:53:26] miner_result: Sending user result to pool [2019-01-24 18:53:26] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"A4P/lVmVqzqyxBp6F2ToUCNy5sDF","nonce":"f07d0080","result":"58b0d87e43ca393fab23d93e663bc79b946477e016f2f7e56d434bf5a5170000","algo":"cn-heavy/xhv"}} [2019-01-24 18:53:27] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:53:27] miner_result: Pool accepted result 0x000017A5 [462ms] [2019-01-24 18:54:10] miner_result: Sending user result to pool [2019-01-24 18:54:10] json_send: {"id":1,"jsonrpc": 
"2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"A4P/lVmVqzqyxBp6F2ToUCNy5sDF","nonce":"ff851d47","result":"943788a035f9bd1eb02021141f16a62fdc85a5a6f3d83e43af7818f746120000","algo":"cn-heavy/xhv"}} [2019-01-24 18:54:11] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:54:11] miner_result: Pool accepted result 0x00001246 [304ms] [2019-01-24 18:54:22] hashrate: GPU0: 1994 H/s [T:51c][BUS:3] [2019-01-24 18:54:22] hashrate: GPU1: 2004 H/s [T:51c][BUS:6] [2019-01-24 18:54:22] hashrate: GPU2: 440 H/s [T:40c][BUS:9] [2019-01-24 18:54:22] hashrate: GPU3: 1866 H/s [T:52c][BUS:12] [2019-01-24 18:54:22] hashrate: GPU4: 1986 H/s [T:51c][BUS:15] [2019-01-24 18:54:22] hashrate: GPU5: 2000 H/s [T:52c][BUS:18] [2019-01-24 18:54:22] hashrate: GPU6: 1989 H/s [T:50c][BUS:22] [2019-01-24 18:54:22] hashrate: GPU7: 343 H/s [T:41c][BUS:25] [2019-01-24 18:54:22] hashrate: GPU8: 1876 H/s [T:50c][BUS:28] [2019-01-24 18:54:22] hashrate: Total: 14498 H/s [2019-01-24 18:54:24] miner_result: Sending user result to pool [2019-01-24 18:54:24] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"A4P/lVmVqzqyxBp6F2ToUCNy5sDF","nonce":"e804acaa","result":"2504ea66d3c5d17005e127e875f9165c7607f7e6ff883fe4dcf39ee50e100000","algo":"cn-heavy/xhv"}} [2019-01-24 18:54:24] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:54:24] miner_result: Pool accepted result 0x0000100E [437ms] [2019-01-24 18:55:15] json_receive: {"jsonrpc":"2.0","method":"job","params":{"blob":"0303e3bfa7e2056f57ef8e1c3231b7572eb4f5154a94ad0d25d9a7b5210f645cdf4359a5d149a2000000009b1664512a9fca9d2edb78138019408f99be63acafcc1fa6e99b9cfe36ffe20316","algo":"cn-heavy/xhv","variant":"xhv","job_id":"zAob6l20121XNBQY9RpsfcdKXSOH","target":"e2230000","id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2"}} [2019-01-24 18:55:15] pool_have_job: Pool sent a new job (ID: 
zAob6l20121XNBQY9RpsfcdKXSOH) [haven] [2019-01-24 18:56:04] miner_result: Sending user result to pool [2019-01-24 18:56:04] json_send: {"id":1,"jsonrpc": "2.0","method":"submit","params":{"id":"02f99cb6-c4d4-4c70-9ecb-8402de0f1cb2","job_id":"zAob6l20121XNBQY9RpsfcdKXSOH","nonce":"0596398e","result":"cb0da7227f1895ac58dd5bffad1ce89b539d54287761a6caa23d2839a3020000","algo":"cn-heavy/xhv"}} [2019-01-24 18:56:05] hashrate: GPU0: 1994 H/s [T:52c][BUS:3] [2019-01-24 18:56:05] hashrate: GPU1: 2005 H/s [T:53c][BUS:6] [2019-01-24 18:56:05] hashrate: GPU2: 461 H/s [T:41c][BUS:9] [2019-01-24 18:56:05] hashrate: GPU3: 1874 H/s [T:54c][BUS:12] [2019-01-24 18:56:05] hashrate: GPU4: 1986 H/s [T:54c][BUS:15] [2019-01-24 18:56:05] hashrate: GPU5: 1998 H/s [T:52c][BUS:18] [2019-01-24 18:56:05] hashrate: GPU6: 1989 H/s [T:52c][BUS:22] [2019-01-24 18:56:05] hashrate: GPU7: 368 H/s [T:39c][BUS:25] [2019-01-24 18:56:05] hashrate: GPU8: 1875 H/s [T:54c][BUS:28] [2019-01-24 18:56:05] hashrate: Total: 14550 H/s [2019-01-24 18:56:06] json_receive: {"id":1,"jsonrpc":"2.0","error":null,"result":{"status":"OK"}} [2019-01-24 18:56:06] miner_result: Pool accepted result 0x000002A3 [2145ms] [2019-01-24 18:56:15] hashrate: GPU0: 1994 H/s [T:52c][BUS:3] [2019-01-24 18:56:15] hashrate: GPU1: 2004 H/s [T:54c][BUS:6] [2019-01-24 18:56:15] hashrate: GPU2: 469 H/s [T:42c][BUS:9] [2019-01-24 18:56:15] hashrate: GPU3: 1871 H/s [T:54c][BUS:12] [2019-01-24 18:56:15] hashrate: GPU4: 1986 H/s [T:54c][BUS:15] [2019-01-24 18:56:15] hashrate: GPU5: 1998 H/s [T:54c][BUS:18] [2019-01-24 18:56:15] hashrate: GPU6: 1989 H/s [T:52c][BUS:22] [2019-01-24 18:56:15] hashrate: GPU7: 373 H/s [T:39c][BUS:25] [2019-01-24 18:56:15] hashrate: GPU8: 1878 H/s [T:54c][BUS:28] [2019-01-24 18:56:15] hashrate: Total: 14562 H/s [2019-01-24 18:56:22] hashrate: GPU0: 1994 H/s [T:52c][BUS:3] [2019-01-24 18:56:23] hashrate: GPU1: 2004 H/s [T:53c][BUS:6] [2019-01-24 18:56:23] hashrate: GPU2: 471 H/s [T:40c][BUS:9] [2019-01-24 18:56:23] 
hashrate: GPU3: 1866 H/s [T:54c][BUS:12] [2019-01-24 18:56:23] hashrate: GPU4: 1986 H/s [T:55c][BUS:15] [2019-01-24 18:56:23] hashrate: GPU5: 1998 H/s [T:54c][BUS:18] [2019-01-24 18:56:23] hashrate: GPU6: 1988 H/s [T:52c][BUS:22] [2019-01-24 18:56:23] hashrate: GPU7: 370 H/s [T:39c][BUS:25] [2019-01-24 18:56:23] hashrate: GPU8: 1878 H/s [T:54c][BUS:28] [2019-01-24 18:56:23] hashrate: Total: 14555 H/s
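Since the complaints in this thread boil down to spotting per-GPU hashrate drops in logs like the one above, here is a minimal sketch (my own helper, not part of SRBMiner) that pulls the `hashrate: GPUn: ... H/s` lines out of a pasted log and flags cards whose latest reading fell well below their peak:

```python
import re

# Matches SRBMiner log lines like:
# [2019-01-24 18:50:22] hashrate: GPU7: 1942 H/s [T:54c][BUS:25]
HASHRATE_RE = re.compile(r"\] hashrate: GPU(\d+): (\d+) H/s")

def gpu_rates(log_text):
    """Collect every per-GPU hashrate reading, keyed by GPU id, in log order."""
    rates = {}
    for m in HASHRATE_RE.finditer(log_text):
        rates.setdefault(int(m.group(1)), []).append(int(m.group(2)))
    return rates

def flag_drops(rates, threshold=0.5):
    """Return GPU ids whose latest reading is below threshold * their peak."""
    return sorted(g for g, rs in rates.items() if rs[-1] < threshold * max(rs))
```

On the log above this would flag GPU7, which falls from ~1940 H/s to ~370 H/s after the switch to haven.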
|
|
|
Does anyone get better hashrates on the haven algo with the new V1.7.6?
Vega 64 = ~2000 H/s, SRB 1.7.6, AMD 18.6.1, Win 10 x64 1803; int/work/thr 62/8/2; P6-P7 1490/900, P3 1100/900 (Samsung); ~167 W per card. 9 mixed Vega rig: GPUs 2, 3, 8 are 56 ref @ 64 BIOS, the others 64 ref; consumption ~1560 W.
|
|
|
WHERE ARE THE GRAFT PAYMENT CARDS? No update, as always, from the Graft team. Late November was nearly a month ago; a lot of promises, no delivery. From the website: https://www.graft.network/graft-coldpay-cold-wallet-payment-supercard/ Delivery: "First trial batch is tentatively scheduled for delivery late November 2018, with the majority of the cards going to project supporters. If you are interested in getting one, consider donating to the cause at the top of the page." DELIVERY: "The ColdPay card production was deprioritized based on the community vote, with attention shifting to the core critical-path efforts. The card is still being worked on in the background, but the production timelines have been pushed out until a later date, to be announced."
|
|
|
How do I get a Vega Frontier Edition to work properly?
I've tried 1703, 1709, the blockchain drivers, 17.2, 17.1, and various registry files, but for some strange reason
I keep getting code 43 when I manually disable/enable the Vega FE in Device Manager, or it enables and hashes at 1500 H/s.
Which miner do you use? My Vegas hash 2 kH/s on xmr-stak 2.2.0 but only 1.5 kH/s on the 2.3.0 version. Any advice is appreciated. P.S. Did you see that video? https://youtu.be/wUrt7DgSiDM
|
|
|
GraftSuperNode Archived Updated on 7 Nov 2017
|
|
|