Show Posts
I am willing to put up a 1k Burst bounty for the development of an OS X app, script, or service that will check whether my mining is running; if it isn't, start it, and if it is, do nothing.
I run 2 miners on the same PC, since I am mining 2 different drives of plot files using the poolminer software.
Do you realize that it's $0.09?
|
|
|
Or in my case - open plot file, seek to 1323 * 9,142,272 nonces from start, read next 9,142,272 nonces (which are all consecutive as it was written directly to the file). No difference in bandwidth, and certainly not 4096 times less bandwidth.
You're correct, I forgot that they're grouped inside the file. If seeking is supported, then splitting them into different files is unnecessary; I assumed seeking was not supported.
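The seek arithmetic above can be sketched in Python. This is a hedged sketch, not the real miner: `SCOOP_SIZE = 64` bytes is the Burst scoop size, the scoop index and nonce count come from the example above, and the function names are my own.

```python
SCOOP_SIZE = 64  # bytes per scoop in the Burst plot format

def scoop_offset(scoop_index, nonces_in_file):
    """Byte offset of a scoop block in an optimized (scoop-major) plot file.

    In an optimized file all copies of scoop 0 come first, then all copies
    of scoop 1, and so on, so the data for one scoop is consecutive.
    """
    return scoop_index * nonces_in_file * SCOOP_SIZE

def read_scoop(path, scoop_index, nonces_in_file):
    # Seek once, then read the whole consecutive scoop block.
    with open(path, "rb") as f:
        f.seek(scoop_offset(scoop_index, nonces_in_file))
        return f.read(nonces_in_file * SCOOP_SIZE)
```

For the example in the post, scoop 1323 of a 9,142,272-nonce file starts at byte 1323 * 9,142,272 * 64, and the whole scoop comes back in a single sequential read.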
|
|
|
I can mount amazon cloud drive as a physical drive, which reads as somewhere around 1PB of storage, can I use this to mine this coin?
Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor. To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

As I recall, by tweaking the plotter it's possible to store the data for one block per file, creating a set of 4096 files and only reading one of them on each block. So, the required bandwidth can be 4096 times smaller.

So, instead of reading 9,142,272 nonces from one single, optimized file (and that's one of 50), I should read one nonce from each of 9,142,272 individual files? I somehow don't see that being more efficient...

Create a plot [256 KB] -> store plot[0] to plots0.dat, store plot[1] to plots1.dat, etc. for each of the 4096 scoops.
Create a plot [256 KB] -> append plot[0] to plots0.dat, append plot[1] to plots1.dat, etc.
Create a plot [256 KB] -> append plot[0] to plots0.dat, append plot[1] to plots1.dat, etc.
...

This way there will be 4096 large files (plots0.dat ... plots4095.dat) which can be uploaded to the cloud. You decide how large these files will be; of course they can be split if necessary. They will also be badly fragmented when stored on HDD, but this doesn't matter because they will be uploaded one by one to the cloud anyway. When mining block 1323, simply fetch scoops from plots1323.dat, and so on. 4096 times less bandwidth, because unneeded scoops won't be downloaded.

p.s. In normal mining mode only ONE scoop out of the 4096 scoops in each plot is used. It's inefficient, but it was done this way because reading speed is not the bottleneck when using local HDDs, and it avoids fragmentation when plotting (leading to faster plotting, as there's no defrag step).
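The per-scoop splitting scheme above can be sketched in Python. This is only a sketch under the post's assumptions: the `plots{s}.dat` naming follows the post, `SCOOP_SIZE` and `NUM_SCOOPS` are the Burst plot constants, and a real plotter would buffer writes rather than reopen 4096 files per nonce.

```python
import os

SCOOP_SIZE = 64    # bytes per scoop (Burst plot format)
NUM_SCOOPS = 4096  # scoops per nonce (4096 * 64 B = 256 KB)

def split_plot_by_scoop(plot_nonces, out_dir):
    """Append each scoop of every nonce to its own per-scoop file.

    plot_nonces: iterable of 256 KB nonces, each a bytes object of
    NUM_SCOOPS * SCOOP_SIZE bytes.
    """
    os.makedirs(out_dir, exist_ok=True)
    for nonce in plot_nonces:
        for s in range(NUM_SCOOPS):
            chunk = nonce[s * SCOOP_SIZE:(s + 1) * SCOOP_SIZE]
            # "ab" creates plots{s}.dat on the first nonce and
            # appends on every later one, as described in the post
            with open(os.path.join(out_dir, f"plots{s}.dat"), "ab") as f:
                f.write(chunk)
```

When mining block 1323 you would then download only `plots1323.dat`, which holds exactly one scoop per nonce, so none of the other 4095 scoops cross the wire.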
|
|
|
I can mount amazon cloud drive as a physical drive, which reads as somewhere around 1PB of storage, can I use this to mine this coin?
Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor. To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

As I recall, by tweaking the plotter it's possible to store the data for one block per file, creating a set of 4096 files and only reading one of them on each block. So, the required bandwidth can be 4096 times smaller.
|
|
|
BinLaden as developer. He would be the best technical choice, but was rejected by a couple of people (below).
Why do you think so? Did he post any code?
|
|
|
In fact I like bitladen's idea of a coin based on CryptoNote+PoC+floating rewards, but IMHO any of these would be too radical a change for Burst. It would be interesting as a new coin.
|
|
|
I fail to see how increasing the reward will bring in more miners. Wouldn't it only worsen the inflation? Is the plan to kill the coin faster so another fork can be introduced? Rewriting the core code to shed the "NXT fork" reputation, changing the algo to make plotting much, much slower so people with petabytes of storage can't take over instantly, and releasing user-friendly mining/wallet software right from the start would have a better chance of success, IMHO.
|
|
|
I'm not doing something right with the run.sh file. On a Mac, do I just double-click it, like the .bat file on Windows, to start? The same thing happened before; it didn't work in Java 7, which is why I upgraded to Java 8.
Hmm, class not found. If you are sure you have the correct Java, I suggest redownloading Burst. Try downloading and installing the JDK, not the JRE (the JRE did not work for me; I haven't checked why). Then open Terminal and cd to the wallet folder. Run:
./compile.sh
./run.sh
|
|
|
Btw, on the dev2 pool make sure the miner sends all shares below the target deadline, not only the best one.
|
|
|
Is there any stable pool out there?
I have 50 TB and I cannot even get 12K per day, no matter which pool I use.
(I was using SG in the past, but currently something is wrong with the pool and my deadlines.)
Try dev2 pool.
|
|
|
What are your suggestions? This is a school project and we want to do this as long as it's not going to be a loss.
Why not hold the mined coins until they're worth much, much more? Power consumption is minimal, and the HDDs' lifespan (if they're configured properly) shouldn't shorten much.
|
|
|
How are external drives for mining?
I was thinking about getting an external 4 TB drive with USB 3.0.
My main concerns are heat and the lifespan of the drive. It's a bit more expensive, but I don't have any more room inside my computer and I'm not too eager to change the power supply.
Most people are probably using external drives. I have a few 4 TB external USB 3.0 drives and they've been doing well for me. If you can, stick to USB 3.0, but USB 2.0 will work also, with probably just a ~5% performance decrease. Once you're done plotting, make sure to use dcct's optimizer to max out the stagger for each plot file. This will drastically reduce the stress on your drives.

USB 2.0 - ~40 MB/s, too slow for large HDDs. Works well for ~1-1.5 TB.
USB 3.0 HDD - 160 MB/s and more.

To extend HDD lifespan it's important to disable sleep. If it spins down every block it will die fast. This can be done programmatically for some models, or with a hack, for example by writing a file to the HDD periodically. A simple USB fan works well for cooling down external HDDs, btw.
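The "write a file periodically" keep-awake hack mentioned above can be sketched in Python. The mount point and interval are assumptions for illustration; pick an interval well under your drive's spin-down timeout.

```python
import os
import time

# Assumed values for illustration only: a hypothetical external-drive
# mount point and a 60-second interval.
KEEPALIVE_PATH = "/Volumes/BurstDrive/.keepalive"
INTERVAL_SECONDS = 60

def keep_drive_awake(path=KEEPALIVE_PATH, interval=INTERVAL_SECONDS, rounds=None):
    """Periodically write a tiny file so the drive never spins down.

    rounds=None loops forever; an integer limits the number of writes.
    """
    i = 0
    while rounds is None or i < rounds:
        with open(path, "w") as f:
            f.write(str(time.time()))
            f.flush()
            os.fsync(f.fileno())  # force the write past the OS cache to the disk
        time.sleep(interval)
        i += 1
```

The `os.fsync` call matters: without it the OS may cache the write and the platters never actually spin, defeating the purpose.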
|
|
|
Hi, don't know if you already explained but I'm curious about how you calculate the chance to find a block in solo
YOUR_Tb * 100% / Total_plot_size_Tb, where Total_plot_size_Tb = (2^64) / 4 / 240 / baseTarget

Pardon me if it's a stupid question, but how do I use this formula? For example, I have 20 TB of plots. What is the chance of finding a block solo?
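Taking the formula above at face value, a small Python helper makes the question concrete. The `baseTarget` value would be read from the current block; any value used when calling this is hypothetical.

```python
def solo_block_chance_percent(your_tb, base_target):
    """Chance (%) of finding a given block solo, per the formula above."""
    # Estimated total network plot size in TB, as given in the post
    total_plot_size_tb = (2 ** 64) / 4 / 240 / base_target
    return your_tb * 100.0 / total_plot_size_tb

# e.g. for 20 TB of plots and whatever baseTarget the current block reports:
# solo_block_chance_percent(20, base_target)
```

The chance scales linearly with your plot size, so 40 TB gives exactly twice the chance of 20 TB at the same baseTarget.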
|
|
|
Anyway, there's some serious fail coding somewhere. I've seen so many people talk about the future of this altcoin, focusing on advertising and investors, blah blah, but the blunt truth is that no investor will want to invest in something that is this broken.
Some third-party tools are incompatible or have known issues, but the dev can't be held responsible for other developers' code. Did you try the dev2 pool? I've checked the shares list and it did register all the shares that were sent to it.
|
|
|