go6ooo1212
Legendary
Offline
Activity: 1512
Merit: 1000
quarkchain.io
|
|
September 14, 2014, 07:27:46 PM |
|
Could someone help me with the number of nonces per HDD? I thought the count is <number of HDD GBs> * <default allocation size of the partition> (which is usually 4096 with NTFS under Windows). Example: I set the number of nonces to 7360512, which should be around 1800 GB, but gpuPlotGenerator 2.1.1 made the plot around 1500 GB. How do I properly calc the number of nonces?
maybe this xls can help you. https://www.dropbox.com/s/qajs4fj9iyc927l/Burst_PLOT.xlsx?dl=0 settings for cpu/gpu plotter, have fun
Thank you for this sheet, I have something more to understand. When I set the number of nonces in gpuPlotGenerator, it always changes the number of nonces by some small value, around 2000 nonces. Once it was 832 nonces, another time 2048 nonces of difference, and I don't know where it is coming from...
you need to plot in CPU mode or GPU mode? if gpu mode, AMD or Nvidia?
I'm plotting with an AMD R9 280X, with gpuPlotGenerator 2.1.1, 14.7 drivers and AMD SDK 2.9.1. EDIT: My command line is: gpuPlotGenerator.exe generate 0 0 "localpath\plots" account 25955332 1515520 3072 128 1024 pause
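A quick sketch of the sizing math discussed above: a Burst nonce is 4096 scoops of 64 bytes each, i.e. 256 KiB, so a drive holds 4096 nonces per GiB. The `adjust_to_stagger` helper below is an assumption on my part about where the "small value around 2000 nonces" comes from: the plotter appears to round the nonce count down to a multiple of the stagger size.

```python
# Burst plot sizing: 1 nonce = 4096 scoops x 64 bytes = 256 KiB.
NONCE_BYTES = 4096 * 64  # 262144 bytes

def nonces_for_gib(gib):
    """Number of nonces that fit in `gib` GiB of disk space."""
    return gib * 2**30 // NONCE_BYTES  # 4096 nonces per GiB

def plot_size_gib(nonces):
    """Plot file size in GiB for a given nonce count."""
    return nonces * NONCE_BYTES / 2**30

def adjust_to_stagger(nonces, stagger):
    """Hypothetical: round the nonce count down to a multiple of the
    stagger size, as the plotter seems to do, explaining the small
    deviations reported above."""
    return nonces - nonces % stagger

print(plot_size_gib(7360512))              # -> 1797.0 (GiB, ~1800 GB as expected)
print(adjust_to_stagger(1515520, 3072))    # -> 1514496 (1024 nonces dropped)
```

With these assumptions, 7360512 nonces really is ~1800 GB, so a resulting ~1500 GB plot suggests the drive had less free space than requested rather than a miscalculation.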
|
|
|
|
Stanist
Newbie
Offline
Activity: 2
Merit: 0
|
|
September 14, 2014, 07:30:03 PM |
|
Is it possible to use a synology diskstation to store plot files?
Yes. This has already been said in this thread. How do I link them to the miner so that it works? Thanks for the answer.
Any way you want. Synology has SMB and NFS export on its shared dirs, just use whatever. From your question, it seems to me you didn't even try before asking. You're right, I have not tried it, but I wanted to know before I start plotting. I only have an old Nvidia card, so I CPU-plot slowly. Thank you for the fast and good answer. Will give it a try now / tnx
|
|
|
|
fanepatent
|
|
September 14, 2014, 07:36:26 PM |
|
http://burstdice.ddns.net/ 80%, 60%, 40% and 20% chances. 2% house edge since the minimum bet is 10 and the fee is 1 (10%). Just send to one address and wait for the result. bump
|
BURST - BURST-58XP-63WY-XSVQ-ASG9A
|
|
|
Avaahnaa
Member
Offline
Activity: 67
Merit: 10
|
|
September 14, 2014, 07:41:53 PM |
|
It seems to me that plot size & disk I/O are also limiting factors. (Sometimes blocks come sooner than my miner evaluates the whole plot.) How long does it take your miner to evaluate yours?
I'm currently plotted into multiple 25 GB files... Is that a problem? Do you use one file per HDD partition (e.g. a 1 TB file)? Is there much improvement in disk I/O?
I'm using ext2, noatime... according to vmstat there is max 2000 kB_read/s disk reading during mining (so very mild without any significant CPU load).
I don't want to repost but... do you have any thoughts on this about mining performance? Thanks! It depends on the stagger size you use. My 4 TB usb3-drives take ~20 s to evaluate, with 50k stagger size. If you go with 1000 or less - that's ~64 KB/read - you should merge them using the merge utility. Where is this merge utility? The only thing I can think of that you could be referring to has been taken down.
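The "~64 KB/read" figure above follows from how plot files are laid out: during mining only one scoop (64 bytes) per nonce is needed, and the scoops of one stagger group sit next to each other on disk, so each seek reads stagger × 64 bytes. A minimal sketch of that arithmetic, assuming this layout:

```python
SCOOP_BYTES = 64  # one scoop per nonce is read while mining

def read_per_seek_kib(stagger):
    """KiB read contiguously per disk seek: the scoops of one stagger
    group are adjacent on disk, so one seek covers the whole group."""
    return stagger * SCOOP_BYTES / 1024

print(read_per_seek_kib(1000))   # -> 62.5  (the "~64 KB/read" above)
print(read_per_seek_kib(50000))  # -> 3125.0 (~3 MiB per seek)
```

This is why small stagger sizes are dominated by seek latency rather than sequential throughput, and why merging to a large stagger helps so much.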
|
|
|
|
dcct
|
|
September 14, 2014, 07:47:01 PM |
|
Where is this merge utility. The only thing that I can think of that you could be referring to has been taken down.
Right on the OP. https://bchain.info/dcct_miner.tgz It also includes the merge tool.
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 07:53:56 PM |
|
It seems to me that plot size & disk I/O are also limiting factors. (Sometimes blocks come sooner than my miner evaluates the whole plot.) How long does it take your miner to evaluate yours?
I'm currently plotted into multiple 25 GB files... Is that a problem? Do you use one file per HDD partition (e.g. a 1 TB file)? Is there much improvement in disk I/O?
I'm using ext2, noatime... according to vmstat there is max 2000 kB_read/s disk reading during mining (so very mild without any significant CPU load).
I don't want to repost but... do you have any thoughts on this about mining performance? Thanks! It depends on the stagger size you use. My 4 TB usb3-drives take ~20 s to evaluate, with 50k stagger size. If you go with 1000 or less - that's ~64 KB/read - you should merge them using the merge utility. Wow... 20 seconds is really fast for 4 TB. I'm nowhere near this... I created it using the GPU plotter and thought the stagger was mainly for optimizing plot generation. I used some 800 or so, per a recommendation to set it to whatever the MB of graphics-card RAM is... Clearly that's the issue... How long does it take to, say, merge 50 files into one 1 TB file and change the stagger to your recommendation using your utility? Is it feasible? How can I come up with the exact stagger for optimized mining? Is there some formula? How many files do you have on that usb3 drive? Does the number of files also have some impact, or is it just a matter of stagger?
|
|
|
|
dcct
|
|
September 14, 2014, 07:55:47 PM |
|
Wow .. 20 seconds is really fast for 4TB.. I'm nowhere near this... I've created it using GPU Plotter and thought its mainly for optimizing the plotting generation. I've used some 800 or what according to some recommendation to set it whatever is MB RAM of graphics card...
Clearly its the issue...
How long does it take to say merge 50 files into one 1TB with different stagger using your utility? Is it feasible?
How many files do you have on that usb3 drive? Does also the number of files have some impact or its just matter of stagger...
Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. It's worth it! I have just one file on them. As long as it's not 100's of small files it does not matter.
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 07:56:02 PM |
|
http://burstdice.ddns.net/ 80%, 60%, 40% and 20% chances. 2% house edge since the minimum bet is 10 and the fee is 1 (10%). Just send to one address and wait for the result. bump
Tested and working for me...
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 07:59:25 PM |
|
Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. But it's worth it!
I have just one file on them. As long as it's not 100's of small files it does not matter.
Do you have some estimation? Isn't it faster to generate them all over? I'm doing 4500 nonces/min.
|
|
|
|
dcct
|
|
September 14, 2014, 08:02:26 PM |
|
Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. But its worth it!
I have just one file on them. As long as its not 100's of small files it does not matter.
Do you have some estimation? Isn't faster to generate them all over? I'm doing 4500 nonces/min Give it a try. I think its a lot faster than 4500 nonces/min.
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 08:09:11 PM |
|
Give it a try. I think its a lot faster than 4500 nonces/min.
Thank you very much... And what about that formula? Should the staggered nonces fit in RAM? E.g. your stagger number: 50000 * 0.256 MB = 12.8 GB (1 nonce = 256 KB). So you've got 12.8 GB of available RAM? Sorry, I'm totally in the dark.
|
|
|
|
dcct
|
|
September 14, 2014, 08:11:07 PM |
|
Give it a try. I think its a lot faster than 4500 nonces/min.
Thank you very much.. And what about that formula? Should staggered nonces fit in RAM? e.g. your stagger number 50000*0.256= 12.8GB So you've got 12.8 GB available RAM? Sorry I'm totally in the dark Thats the amount of memory you need to create a plot with that stagger size. Using my C-plotter. If you merge if afterwards, it can be created with a lot less memory. Yes its 256kb per nonce
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 08:12:21 PM |
|
That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory. Yes, it's 256 KB per nonce. Yes, I understand that's for creating, but what about reading (mining)? What's the best stagger for mining?
|
|
|
|
m3ta
|
|
September 14, 2014, 08:12:32 PM |
|
We have reached the point where (since it is proven that GPU-made plots are "as good" as CPU-made ones) the gpuPlotGenerator is of utmost importance to this coin's future.
As such, it needs to run not only on Windows (blah blah blah, yes, most people use it, that's not what I'm debating) but also on any *nix (and possibly Mac OS too).
Is anyone planning on making this a reality?
|
|
|
|
dcct
|
|
September 14, 2014, 08:15:38 PM |
|
That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory. Yes, it's 256 KB per nonce. Yes, I understand that's for creating, but what about reading (mining)? What's the best stagger for mining? The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs 1M/10k = 100 disk seeks to read the file. So best is just one seek per file.
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 08:20:48 PM |
|
That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory. Yes, it's 256 KB per nonce. Yes, I understand that's for creating, but what about reading (mining)? What's the best stagger for mining? The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs 1M/10k = 100 disk seeks to read the file. So best is just one seek per file. Thanks for the clear explanation... Can you please tell me what throughput you get on that usb3 disk when mining for those 20 seconds? I get really low I/O when mining through plots - ca. 2000 kB/s... Is that ok? Anyway, I'll merge the files using your tool - thanks a lot!
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 14, 2014, 08:21:24 PM |
|
That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory. Yes, it's 256 KB per nonce. Yes, I understand that's for creating, but what about reading (mining)? What's the best stagger for mining? The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs 1M/10k = 100 disk seeks to read the file. So best is just one seek per file. Thanks for the clear explanation... Can you please tell me what throughput you get on that usb3 disk when mining for those 20 seconds? I get really low I/O when mining through plots - ca. 2000 kB/s (but for a lot longer than you)... Is that ok? Anyway, I'll merge the files using your tool to see the improvement. I just want to have something to compare with... Thank you!
|
|
|
|
dcct
|
|
September 14, 2014, 08:29:29 PM |
|
Thanks for the clear explanation...
Can you please tell me what throughput you get on that usb3 disk when mining for those 20 seconds?
I get really low I/O when mining through plots - ca. 2000 kB/s (but for a lot longer than you)... Is that ok?
Anyway, I'll merge the files using your tool to see the improvement. I just want to have something to compare with... Thank you!
While mining, ~1 GB of the 4 TB is read. With merged plots that would take less than 10 s to read, if you have a fast processor. Here it's ~50 MB/s while reading. 2000 kB/s is too low.
|
|
|
|
enta2k
Full Member
Offline
Activity: 294
Merit: 101
The Future of Security Tokens
|
|
September 14, 2014, 08:34:23 PM |
|
Uncaught error from thread [default-akka.actor.default-dispatcher-7] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[default]
java.lang.OutOfMemoryError: Java heap space
        at pocminer.ScoopReader.readFile(ScoopReader.java:30)
        at pocminer.ScoopReader.onReceive(ScoopReader.java:18)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[ERROR] [09/14/2014 22:32:40.971] the same OutOfMemoryError is thrown on dispatcher threads 3 and 7 with identical stack traces.
Any idea what to do? My miner gives me that error every ~30 min.
|
|
|
|
louiswwwwwww
Newbie
Offline
Activity: 59
Merit: 0
|
|
September 14, 2014, 08:35:30 PM |
|
How many coins are there right now???
|
|
|
|
|