mig6r
|
|
September 22, 2014, 07:10:05 PM |
|
Hello, I would like to understand how the memory is used. I thought the stagger determined memory use. I use a stagger of 4096: with 4 TB I consumed 8 GB, I just added a 2 TB HDD and now I use 11 GB. If I add yet another 2 TB, will I use 14 GB? Thank you
Edit: if someone could give me the best plot size and a good stagger to reach 22 TB with 16 GB of memory and little CPU, it would be really nice. Thank you
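For a rough idea of the arithmetic (a sketch only; it assumes the standard layout of 4096 scoops x 64 bytes = 256 KB per nonce and that the plotter buffers one stagger group at a time — actual behaviour depends on the plotter and miner you run):

// Sketch: why stagger, not disk size alone, drives the plotting buffer.
public class PlotMemEstimate {
    static final long NONCE_SIZE = 4096L * 64; // 262144 bytes per nonce

    public static void main(String[] args) {
        long stagger = 4096;                     // nonces per stagger group
        long bufferBytes = stagger * NONCE_SIZE; // RAM for one group
        System.out.printf("stagger %d -> ~%d MB buffer%n",
                stagger, bufferBytes / (1024 * 1024));
        // stagger 4096 -> ~1024 MB; the buffer grows with the stagger and with
        // how many files you plot or mine in parallel, not with total TB alone.
    }
}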
|
|
|
|
enta2k
Full Member
Offline
Activity: 294
Merit: 101
The Future of Security Tokens
|
|
September 22, 2014, 07:24:24 PM |
|
Windows:

@echo off
cls
:start
java -cp pocminer_pool.jar;lib/*;lib/akka/*;lib/jetty/* pocminer_pool.POCMiner mine http://127.0.0.1:8125 http://poolipgoeshere:8121
goto start

Linux:

while true; do
    java -cp "pocminer_pool.jar:lib/*:lib/akka/*:lib/jetty/*" pocminer_pool.POCMiner mine http://127.0.0.1:8125 http://poolipgoeshere:8121
done

Okay, I found it, thanks for nothing. Requoting it here in case someone is looking for the same.
|
|
|
|
carlos
Member
Offline
Activity: 107
Merit: 10
|
|
September 22, 2014, 07:29:22 PM Last edit: September 22, 2014, 07:41:27 PM by carlos |
|
Math question: how can I calculate the difficulty from a block's baseTarget? Does it still use NXT's formula? https://wiki.nxtcrypto.org/wiki/Whitepaper:Nxt#Base_Target_Value Was it changed, given that there is no Proof of Stake here? And most importantly: how can we estimate the total network plot size from the cumulative difficulty? I'd like to graph it and offer it to the public as an online tool. Thanks in advance
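In case it helps anyone experiment, here is a sketch under the assumption that Burst kept the NXT baseTarget semantics (difficulty scaling as genesisBaseTarget / baseTarget) and that total plot size is roughly inversely proportional to baseTarget; the genesis constant below is an assumption and should be checked against the Burst source:

// Hypothetical sketch: difficulty and network-size estimate from baseTarget.
// Assumes the NXT relation: difficulty ~ genesisBaseTarget / baseTarget.
public class BaseTargetMath {
    // Assumed genesis baseTarget - verify against the Burst source code.
    static final long GENESIS_BASE_TARGET = 18325193796L;

    public static void main(String[] args) {
        // baseTarget of the block you are inspecting (example value as default).
        long baseTarget = args.length > 0 ? Long.parseLong(args[0]) : 1832519379L;
        double difficulty = (double) GENESIS_BASE_TARGET / baseTarget;
        // If the genesis target corresponds to ~1 TB of plots (an assumption),
        // the estimated network size in TB is just that ratio.
        System.out.printf("difficulty %.2f -> ~%.1f TB network%n", difficulty, difficulty);
    }
}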
|
|
|
|
callmejack
|
|
September 22, 2014, 07:31:34 PM |
|
I don't get why everyone needs so much memory to plot. I created a few 100 TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great. Even the java plotter only used a few hundred megabytes of RAM during plotting. A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with Xmx750m without crashes. Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files in the cluster?
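For what it's worth, the 2 GB / 8191-nonce numbers line up with the standard plot layout (4096 scoops x 64 bytes = 256 KB per nonce); a quick check:

// Plot file size check: nonces x 256 KB per nonce (standard layout).
public class PlotSize {
    public static void main(String[] args) {
        long nonces = 8191;
        long bytes = nonces * 4096L * 64; // 4096 scoops x 64 bytes each
        System.out.printf("%d nonces -> %.2f GB%n", nonces, bytes / 1e9);
        // 8191 nonces -> ~2.15 GB, i.e. one full stagger group per file,
        // matching the 2 GB chunks described above.
    }
}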
|
|
|
|
fabula
|
|
September 22, 2014, 07:34:07 PM |
|
Quote from: callmejack
I don't get why everyone needs so much memory to plot. I created a few 100 TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great. [...]
Only 100TB?
|
|
|
|
m3ta
|
|
September 22, 2014, 07:41:29 PM |
|
Quote:
I am on your pool now but receiving the error "Unable to get mining info from wallet: http://burstpool.ddns.net:8124/"... It was working earlier. Ideas? I cannot get to the site directly, either. Must be down. Just a moment.
As I've already said, ddns.net is a dynamic DNS, which means this crap is running off a home connection, not a dedicated IP on a VM or hosting in a datacenter. Apparently the pool owner is even too lame to install an auto-updater. But even so, whenever his grandma's basement ISP connection restarts, the IP will change and all the miners will lose connection.
|
|
|
|
callmejack
|
|
September 22, 2014, 07:50:21 PM |
|
Quote from: fabula
Only 100TB?
129 x 1.9 TB for now, as spare parts I can mine with for 6 months or more as a test case. Also awaiting new hardware arrivals.
|
|
|
|
fabula
|
|
September 22, 2014, 07:53:23 PM |
|
Quote from: callmejack
129 x 1.9 TB for now. Waiting for new hardware arrivals.
You rock! Congratulations.
|
|
|
|
enta2k
Full Member
Offline
Activity: 294
Merit: 101
The Future of Security Tokens
|
|
September 22, 2014, 07:54:06 PM |
|
Quote from: m3ta
[...] whenever his grandma's basement ISP connection restarts, the IP will change and all the miners will lose connection.
Whatever, this grannypool works 10 times better than any other pool I've tried so far.
|
|
|
|
uray
|
|
September 22, 2014, 07:56:04 PM |
|
Quote from: callmejack
I don't get why everyone needs so much memory to plot. [...] Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files in the cluster?
Why do you need multiple miner instances? Which miner are you using? If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192); merging plots or not makes no difference. And why only 2 GB files? That's too small. Are you using FAT32?
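To put rough numbers on the stagger advice (a sketch; exact miner behaviour varies): each round touches one 64-byte scoop per nonce, and in an optimized file a group's scoop data is contiguous, so one group costs stagger x 64 bytes per read:

// Per-round read size for one stagger group: stagger nonces x 64-byte scoop.
public class ScoopRead {
    public static void main(String[] args) {
        long stagger = 8192;
        long groupReadBytes = stagger * 64; // contiguous in optimized plots
        System.out.printf("stagger %d -> %d KB per group read%n",
                stagger, groupReadBytes / 1024);
        // 8192 -> 512 KB sequential chunks: big enough to stream from disk
        // efficiently, small enough to keep miner buffers modest.
    }
}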
|
|
|
|
|
callmejack
|
|
September 22, 2014, 08:05:52 PM |
|
Quote from: fabula
You rock! Congratulations.
The good thing is that it runs totally unattended in the background, and if a drive fails I know the drive was bad and can't fail later in production use. Today I told some friends of mine this story, and they are thinking of throwing some PB onto Burst during the next couple of weeks to test their spare hardware with something useful too. Whether they will, I'm not sure, but I know what they have access to.
|
|
|
|
callmejack
|
|
September 22, 2014, 08:20:59 PM |
|
Quote from: uray
Why do you need multiple miner instances? Which miner are you using? [...] And why only 2 GB files? That's too small. Are you using FAT32?
The origin is the maximum stagger size of the original java plotter. I ran many nodes with custom scripts that plot the files automatically. I figured that if I had to move plots from one node to another, a 2 GB chunk is quite handy on a simple gigabit network, and it also fits completely into the filesystem read and write buffers during creation and distribution. I realized the java miner on an average CPU can only handle about 8-10 TB of plots while staying below half the block time, because of CPU load. I haven't analyzed this further because I could avoid it by running one miner per HDD. Tests with bigger plot files resulted in much more memory usage, so I was fine with up to 20 seconds of parsing for 2 TB in 2 GB plots and kept my initial setup as it was. For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks. If I figure out a way to create 20 TB plots, could such a file also be parsed by one miner instance in time, or would the CPU be too slow and skip most of it?
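A sketch of where the time goes with big files, assuming the usual optimized layout (file named accountId_startNonce_nonces_stagger, with each group's scoop data stored contiguously — my reading of the format, not verified against the plotter source):

// Hypothetical offset calculation for an optimized plot file.
// Assumes scoop data inside each stagger group is contiguous - verify
// against your plotter before relying on it.
public class ScoopOffset {
    static final long SCOOP_SIZE = 64;
    static final long SCOOPS_PER_NONCE = 4096;

    static long offset(long nonceIndex, long scoop, long stagger) {
        long group = nonceIndex / stagger;   // which stagger group
        long inGroup = nonceIndex % stagger; // position inside the group
        long groupBytes = stagger * SCOOPS_PER_NONCE * SCOOP_SIZE;
        return group * groupBytes            // skip earlier groups
             + scoop * stagger * SCOOP_SIZE  // skip earlier scoops in group
             + inGroup * SCOOP_SIZE;         // this nonce's 64 bytes
    }

    public static void main(String[] args) {
        // A 20 TB file still means one seek per stagger group per round,
        // so seeks, not hashing, are what a single miner instance pays for.
        System.out.println(offset(100_000L, 1234L, 8192L));
    }
}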
|
|
|
|
uray
|
|
September 22, 2014, 08:26:57 PM |
|
Quote from: callmejack
[...] For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks. [...]
Have you tried my miner? I'd like to know how the result compares to the java miner. A bigger plot file should not use more memory unless you set it to use a larger stagger size. Also, mining only does one Shabal hash per nonce to determine its deadline, so it is not computation-intensive; most of the time is spent on disk read/seek, and mining only reads 1/4096 of your data each round.
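That 1/4096 figure gives the per-round read volume directly (a quick check, assuming fully optimized plots and Burst's 4-minute block target):

// Per-round read volume: one 64-byte scoop out of 4096 per nonce.
public class RoundReadVolume {
    public static void main(String[] args) {
        double plotTB = 2.0;
        double readMB = plotTB * 1e12 / 4096 / 1e6; // 1/4096 of the plot
        System.out.printf("%.0f TB plot -> ~%.0f MB read per block%n",
                plotTB, readMB);
        // 2 TB -> ~488 MB, which a single disk at 100+ MB/s parses
        // comfortably within the 4-minute block time.
    }
}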
|
|
|
|
callmejack
|
|
September 22, 2014, 08:40:14 PM |
|
Quote from: uray
Have you tried my miner? I'd like to know how the result compares to the java miner. [...]
Not yet, because I'm fine with the java miner in my setup. On an average node I get disk I/O of 500-600 MB/s when a new block arrives, and it lasts 15-20 seconds depending on the disks; that's basically what the storage is capable of. Can your miner mine with the default wallet, or does it require the pool counterpart?
|
|
|
|
uray
|
|
September 22, 2014, 08:45:07 PM |
|
Quote from: callmejack
Can your miner mine with the default wallet, or does it require the pool counterpart?
It's pool only.
|
|
|
|
mig6r
|
|
September 22, 2014, 08:46:38 PM |
|
Quote from: uray
If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). [...]
So why does my miner use 14 GB of memory for 8 TB of plots with a 4096 stagger? Is there a problem with my plots?
|
|
|
|
uray
|
|
September 22, 2014, 09:07:13 PM |
|
Quote from: mig6r
So why does my miner use 14 GB of memory for 8 TB of plots with a 4096 stagger? Is there a problem with my plots?
I don't know exactly, but I'm using it fine with 15 TB and it only uses 600 MB with an 8192 stagger. How did you create that plot? Which plotter are you using?
|
|
|
|
|