Author Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000  (Read 2170603 times)
go6ooo1212
Legendary
Activity: 1512
Merit: 1000
quarkchain.io
September 14, 2014, 07:27:46 PM
 #9821

Could someone help me with the number of nonces per HDD? I thought the count is <number of HDD GBs> * <default allocation size of the partition> (which is usually 4096 with NTFS under Windows).
Example: I set the number of nonces to 7360512, which should be around 1800 GB, but gpuPlotGenerator 2.1.1 made the plot around 1500 GB. How do I properly calculate the number of nonces?

Maybe this xls can help you:

https://www.dropbox.com/s/qajs4fj9iyc927l/Burst_PLOT.xlsx?dl=0

It has the settings for the CPU/GPU plotter.

Have fun.


Thank you for this spreadsheet; there is something more I need to understand. When I set the number of nonces in gpuPlotGenerator, it always changes the nonce count itself by some small amount, around 2000 nonces.
Once the difference was 832 nonces, another time 2048 nonces, and I can't figure out where it is coming from...

Do you need to plot in CPU mode or GPU mode?
If GPU mode, AMD or Nvidia?
I'm plotting with an AMD R9 280X, with gpuPlotGenerator 2.1.1, the 14.7 drivers and AMD SDK 2.9.1.
EDIT: My command line is:
gpuPlotGenerator.exe generate 0 0 "localpath\plots" account 25955332 1515520 3072 128 1024

pause
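
For reference, a minimal sketch of the nonce/size arithmetic, assuming 1 nonce = 4096 scoops x 64 bytes = 256 KiB (the 256 kB-per-nonce figure mentioned later in the thread); the function names are made up for illustration:

# Rough nonce <-> plot-size arithmetic for Burst plots.
# Assumption: 1 nonce = 4096 scoops * 64 bytes = 256 KiB.
NONCE_BYTES = 4096 * 64  # 262144 bytes

def nonces_for_size(size_gib):
    """Nonce count that fits in a plot of the given size (GiB)."""
    return int(size_gib * 1024**3 // NONCE_BYTES)

def size_gib_for_nonces(nonces):
    """Plot size in GiB for a given nonce count."""
    return nonces * NONCE_BYTES / 1024**3

print(nonces_for_size(1800))         # 7372800 nonces for a 1800 GiB plot
print(size_gib_for_nonces(7360512))  # ~1797 GiB

The partition allocation unit (4096 bytes on NTFS) does not enter into it. Why gpuPlotGenerator nudges the nonce count by a few hundred or a few thousand I can only guess - rounding it to a multiple of the stagger size would be one plausible explanation.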
Stanist
Newbie
Activity: 2
Merit: 0
September 14, 2014, 07:30:03 PM
 #9822

Is it possible to use a Synology DiskStation to store plot files?

Yes. This has already been said in this thread.


How do I link them to the miner so that it works? Thanks for the answer.

Any way you want. Synology has SMB and NFS exports on its shared directories, so just use whatever.

From your question, it seems to me you didn't even try before asking.


You're right, I haven't tried it, but I wanted to know before I start plotting. I only have an old Nvidia card, so CPU plotting is slow for me. Thank you for the fast and helpful answer. I will give it a try now.
/ Thanks
fanepatent
Full Member
Activity: 224
Merit: 100
September 14, 2014, 07:36:26 PM
 #9823

http://burstdice.ddns.net/

80%, 60%, 40% and 20% chances.
2% house edge since minimum bet is 10 and fee is 1 (10%)
Just send to one address and wait for the result.

bump

BURST - BURST-58XP-63WY-XSVQ-ASG9A
Avaahnaa
Member
Activity: 67
Merit: 10
September 14, 2014, 07:41:53 PM
 #9824


It seems to me that plot size and disk I/O are also limiting factors. (Sometimes blocks come sooner than my miner can evaluate the whole plot.) How long does it take your miner to evaluate yours?

I currently have my plots in multiple 25 GB files... Is that a problem? Do you use one file per HDD partition (e.g. a 1 TB file)? Is there much improvement in disk I/O?

I'm using ext2 with noatime... according to vmstat there is at most 2000 kB read/s from disk during mining (so very mild, without any significant CPU load).
I don't want to repost, but... do you have any thoughts on mining performance? Thanks!

It depends on the stagger size you use. My 4 TB USB3 drives take ~20 s to evaluate, with a 50k stagger size. If you go with 1000 or less - that's 64 kB per read - you should merge them using the merge utility.


Where is this merge utility? The only thing I can think of that you could be referring to has been taken down.
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 07:47:01 PM
 #9825

Where is this merge utility? The only thing I can think of that you could be referring to has been taken down.

It's right in the OP:

https://bchain.info/dcct_miner.tgz

It also includes the merge tool.
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 07:53:56 PM
 #9826


It seems to me that plot size and disk I/O are also limiting factors. (Sometimes blocks come sooner than my miner can evaluate the whole plot.) How long does it take your miner to evaluate yours?

I currently have my plots in multiple 25 GB files... Is that a problem? Do you use one file per HDD partition (e.g. a 1 TB file)? Is there much improvement in disk I/O?

I'm using ext2 with noatime... according to vmstat there is at most 2000 kB read/s from disk during mining (so very mild, without any significant CPU load).
I don't want to repost, but... do you have any thoughts on mining performance? Thanks!

It depends on the stagger size you use. My 4 TB USB3 drives take ~20 s to evaluate, with a 50k stagger size. If you go with 1000 or less - that's 64 kB per read - you should merge them using the merge utility.

Wow... 20 seconds is really fast for 4 TB. I'm nowhere near this...
I created mine using the GPU plotter and thought the stagger was mainly for optimizing plot generation. I used a value of around 800, following a recommendation to set it to the amount of MB of RAM on the graphics card...

Clearly that's the issue...

How long does it take to, say, merge 50 files into one 1 TB file and change the stagger to your recommendation using your utility? Is it feasible?

How can I come up with the right stagger for optimized mining? Is there some formula?

How many files do you have on that USB3 drive? Does the number of files also have some impact, or is it just a matter of stagger?
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 07:55:47 PM
 #9827

Wow... 20 seconds is really fast for 4 TB. I'm nowhere near this...
I created mine using the GPU plotter and thought the stagger was mainly for optimizing plot generation. I used a value of around 800, following a recommendation to set it to the amount of MB of RAM on the graphics card...

Clearly that's the issue...

How long does it take to, say, merge 50 files into one 1 TB file with a different stagger using your utility? Is it feasible?

How many files do you have on that USB3 drive? Does the number of files also have some impact, or is it just a matter of stagger?

Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. It's worth it!

I have just one file on each of them. As long as it's not hundreds of small files, it doesn't matter.
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 07:56:02 PM
 #9828

http://burstdice.ddns.net/

80%, 60%, 40% and 20% chances.
2% house edge since minimum bet is 10 and fee is 1 (10%)
Just send to one address and wait for the result.

bump
Tested and working for me...
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 07:59:25 PM
 #9829

Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. But it's worth it!

I have just one file on each of them. As long as it's not hundreds of small files, it doesn't matter.
Do you have an estimate? Isn't it faster to just generate them all over again? I'm doing 4500 nonces/min.
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 08:02:26 PM
 #9830

Merging from a small stagger size takes quite some time - but it depends on how fast your HDD is. But it's worth it!

I have just one file on each of them. As long as it's not hundreds of small files, it doesn't matter.
Do you have an estimate? Isn't it faster to just generate them all over again? I'm doing 4500 nonces/min.

Give it a try. I think it's a lot faster than 4500 nonces/min.
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 08:09:11 PM
 #9831

Give it a try. I think it's a lot faster than 4500 nonces/min.
Thank you very much...

And what about that formula? Should the staggered nonces fit in RAM?
E.g. your stagger number: 50000 * 0.256 MB = 12.8 GB (1 nonce = 256 kB).
So you've got 12.8 GB of available RAM?

Sorry, I'm totally in the dark Smiley
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 08:11:07 PM
 #9832

Give it a try. I think it's a lot faster than 4500 nonces/min.
Thank you very much...

And what about that formula? Should the staggered nonces fit in RAM?
E.g. your stagger number: 50000 * 0.256 MB = 12.8 GB.
So you've got 12.8 GB of available RAM?

Sorry, I'm totally in the dark Smiley

That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory.

Yes, it's 256 kB per nonce Smiley
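
A small sketch of that memory relationship, assuming the plotter buffers one stagger's worth of 256 KiB nonces at a time (the function names are made up for illustration):

# Approximate plotting memory for a given stagger size, assuming the plotter
# buffers `stagger` nonces of 256 KiB each before writing them out.
NONCE_BYTES = 256 * 1024

def plot_ram_gib(stagger):
    """Approximate RAM needed to plot with the given stagger size."""
    return stagger * NONCE_BYTES / 1024**3

def max_stagger(ram_gib):
    """Largest stagger size that fits in the given amount of RAM."""
    return int(ram_gib * 1024**3 // NONCE_BYTES)

print(plot_ram_gib(50000))  # ~12.2 GiB for a 50k stagger (roughly the ~12.8 GB figure above)
print(max_stagger(8))       # 32768 nonces of stagger fit in 8 GiB of RAM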
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 08:12:21 PM
 #9833


That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory.

Yes, it's 256 kB per nonce Smiley
Yes, I understand that's for creating it, but what about using it for reading (mining)? What's the best stagger for mining?
m3ta
Sr. Member
Activity: 435
Merit: 250
September 14, 2014, 08:12:32 PM
 #9834

We have reached the point where (since it has been shown that GPU-made plots are "as good" as CPU-made ones) the gpuPlotGenerator is of utmost importance to this coin's future.

As such, it needs to run not only on Windows (blah blah blah, yes, most people use it, that's not what I'm debating) but also on any *nix (and possibly macOS too).

Is anyone planning on making this a reality?

Why the frell so many retards spell "ect" as an abbreviation of "Et Cetera"? "ETC", DAMMIT! http://en.wikipedia.org/wiki/Et_cetera

Host:/# rm -rf /var/forum/trolls
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 08:15:38 PM
 #9835


That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory.

Yes, it's 256 kB per nonce Smiley
Yes, I understand that's for creating it, but what about using it for reading (mining)? What's the best stagger for mining?

The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs (1M / 10k) = 100 disk seeks to read the file. So the best case is just 1 seek per file.
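
A rough sketch of that read pattern, assuming 64 bytes per scoop per nonce and one contiguous run of `stagger` scoops per seek (names made up for illustration):

# Per-block read pattern for a plot file.
# Assumptions: 64 bytes per scoop per nonce; one seek per stagger group.
SCOOP_BYTES = 64

def seeks_per_block(nonces, stagger):
    """Disk seeks needed to read one scoop from every nonce in the file."""
    return -(-nonces // stagger)  # ceiling division

def bytes_per_seek(stagger):
    """Contiguous bytes read per seek."""
    return stagger * SCOOP_BYTES

print(seeks_per_block(1_000_000, 10_000))  # 100 seeks, as in the example above
print(bytes_per_seek(1_000))               # 64000 bytes, i.e. ~64 kB per read
print(bytes_per_seek(50_000))              # 3200000 bytes, i.e. ~3.2 MB per read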
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 08:20:48 PM
 #9836


That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory.

Yes, it's 256 kB per nonce Smiley
Yes, I understand that's for creating it, but what about using it for reading (mining)? What's the best stagger for mining?

The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs (1M / 10k) = 100 disk seeks to read the file. So the best case is just 1 seek per file.
Thanks for the clear explanation...

Can you please tell me what disk I/O throughput you get on that USB3 disk when mining, during those 20 seconds?

I've got really low I/O when mining through my plots - circa 2000 kB/s... Is that OK?

Anyway, I'll merge the files using your tool - thanks a lot!
carlos
Member
Activity: 107
Merit: 10
September 14, 2014, 08:21:24 PM
 #9837


That's the amount of memory you need to create a plot with that stagger size, using my C plotter. If you merge it afterwards, it can be created with a lot less memory.

Yes, it's 256 kB per nonce Smiley
Yes, I understand that's for creating it, but what about using it for reading (mining)? What's the best stagger for mining?

The larger the better. If you have a 10k stagger size and a plot size of 1M nonces, the miner needs (1M / 10k) = 100 disk seeks to read the file. So the best case is just 1 seek per file.
Thanks for the clear explanation...

Can you please tell me what disk I/O throughput you get on that USB3 disk when mining, during those 20 seconds?

I've got really low I/O when mining through my plots - circa 2000 kB/s (but for a lot longer than you)... Is that OK?

Anyway, I'll merge the files using your tool to see the improvement.
I just want to have something to compare with...
Thank you!
dcct
Sr. Member
Activity: 280
Merit: 250
September 14, 2014, 08:29:29 PM
 #9838

Thanks for the clear explanation...

Can you please tell me what disk I/O throughput you get on that USB3 disk when mining, during those 20 seconds?

I've got really low I/O when mining through my plots - circa 2000 kB/s (but for a lot longer than you)... Is that OK?

Anyway, I'll merge the files using your tool to see the improvement.
I just want to have something to compare with...
Thank you!

While mining, ~1 GB of the 4 TB is read. With merged plots that would take less than 10 s to read, if you have a fast processor. Here it's ~50 MB/s while reading.

2000 kB/s is too low.
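
As a rough sanity check on those numbers, here is a sketch assuming the miner reads one of the 4096 scoops from every nonce each block (i.e. about 1/4096 of the plot); treat it as an illustration, not the miner's actual code:

# Approximate data read per block and the time to scan it, assuming the
# miner reads 1/4096 of the plot (one 64-byte scoop per 256 KiB nonce).
SCOOPS_PER_NONCE = 4096

def read_per_block_gb(plot_tb):
    """GB read per block for a plot of the given size in TB."""
    return plot_tb * 1000 / SCOOPS_PER_NONCE

def scan_time_s(plot_tb, throughput_mb_s):
    """Seconds to scan the plot at the given sequential throughput (MB/s)."""
    return read_per_block_gb(plot_tb) * 1000 / throughput_mb_s

print(read_per_block_gb(4))  # ~0.98 GB read from a 4 TB plot
print(scan_time_s(4, 50))    # ~20 s at 50 MB/s
print(scan_time_s(4, 2))     # ~490 s at 2 MB/s - far too slow to keep up with blocks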
enta2k
Full Member
Activity: 294
Merit: 101
The Future of Security Tokens
September 14, 2014, 08:34:23 PM
 #9839

[ERROR] [09/14/2014 22:32:40.971] [default-akka.actor.default-dispatcher-7] [ActorSystem(default)] Uncaught error from thread [default-akka.actor.default-dispatcher-7] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[default]
java.lang.OutOfMemoryError: Java heap space
        at pocminer.ScoopReader.readFile(ScoopReader.java:30)
        at pocminer.ScoopReader.onReceive(ScoopReader.java:18)
        at akka.actor.UntypedActor$$anonfun$receive$1.applyOrElse(UntypedActor.scala:167)
        at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
        at akka.actor.UntypedActor.aroundReceive(UntypedActor.scala:97)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
        at akka.actor.ActorCell.invoke(ActorCell.scala:487)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
        at akka.dispatch.Mailbox.run(Mailbox.scala:220)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

[ERROR] [09/14/2014 22:32:40.971] [default-akka.actor.default-dispatcher-3] [ActorSystem(default)] Uncaught error from thread [default-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled
java.lang.OutOfMemoryError: Java heap space
        (same stack trace from pocminer.ScoopReader.readFile as above)
Any idea what to do? My miner gives me this error every ~30 minutes.
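
A general note on this error: java.lang.OutOfMemoryError: Java heap space means the JVM hit its heap limit, so one common workaround is to raise the maximum heap in whatever script launches the miner, e.g. by adding -Xmx to the java command (the rest of the command line depends on your start script, so this is only illustrative):

java -Xmx4g <your existing pocminer arguments>

If the miner buffers one stagger's worth of scoop data per plot file at a time, very large stagger sizes or many plots mined in parallel could also push heap usage up, but that is just a guess.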

louiswwwwwww
Newbie
*
Offline Offline

Activity: 59
Merit: 0


View Profile
September 14, 2014, 08:35:30 PM
 #9840

how many coins right now???