Author Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000  (Read 2170602 times)
Irontiga
Hero Member
*****
Offline Offline

Activity: 588
Merit: 500


View Profile
October 04, 2014, 02:53:05 AM
 #12801

Buy hash power @ cryptomining.farm

http://cryptomining.farm/mining.html

 Grin


it's spelt "contract" not contact

also

https://burstforum.com/index.php?threads/cloud-mining-proposal.93/

where prices were at ~$120/year per 4 TB -- your deal is a rip-off
Palmdetroit
Legendary
*
Offline Offline

Activity: 910
Merit: 1000


PHS 50% PoS - Stop mining start minting


View Profile
October 04, 2014, 02:54:12 AM
 #12802

Any reason to go over 25k Stagger size? Anyone find a max?

Irontiga
Hero Member
*****
Offline Offline

Activity: 588
Merit: 500


View Profile
October 04, 2014, 02:54:56 AM
 #12803

Any reason to go over 25k Stagger size? Anyone find a max?

I think pinballDude has a stagger of 60k; it's somewhere over on burstforum.com
koko2530
Sr. Member
****
Offline Offline

Activity: 397
Merit: 250


View Profile
October 04, 2014, 03:19:19 AM
 #12804


up2me and buyer

 Cool
Alex Coventry
Newbie
*
Offline Offline

Activity: 19
Merit: 0


View Profile
October 04, 2014, 04:29:54 AM
 #12805

As the HDD is now the bottleneck, there are two options: <snip>
  - Write to multiple disks at the same time (I will put this on the roadmap).

I modified an earlier version of your plotter to do this. I've been using it for a couple of weeks or so, and it seems to produce valid plots much faster. Only tested on Linux.

Code:
void CommandGenerate::help() const {
    std::cerr << "Usage: ./gpuPlotGenerator generate ";
    std::cerr << "<platformId> <deviceId> <staggerSize> <threadsNumber> ";
    std::cerr << "<hashesNumber> <path> <address> <startNonce> <noncesNumber> ";
    std::cerr << "[<path> <address> <startNonce> <noncesNumber> ...]" << std::endl;
    std::cerr << "    - platformId: Id of the OpenCL platform to use (see [list] command)." << std::endl;
    std::cerr << "    - deviceId: Id of the OpenCL device to use (see [list] command)." << std::endl;
    std::cerr << "    - staggerSize: Stagger size." << std::endl;
    std::cerr << "    - threadsNumber: Number of parallel threads for each work group." << std::endl;
    std::cerr << "    - hashesNumber: Number of hashes to compute for each step2 kernel calls." << std::endl;
    std::cerr << "    - path: Path to the plots directory." << std::endl;
    std::cerr << "    - address: Burst numerical address." << std::endl;
    std::cerr << "    - startNonce: First nonce of the plot generation." << std::endl;
    std::cerr << "    - noncesNumber: Number of nonces to generate." << std::endl;
    std::cerr << "With multiple [<path> <address> <startNonce> <noncesNumber>] arguments " << std::endl;
    std::cerr << "GPU calculation iterates through a stagger for each job and the results are " << std::endl;
    std::cerr << "saved asynchronously.  This is intended to be used for plotting multiple " << std::endl;
    std::cerr << "mechanical drives simultaneously in order to max out GPU bandwidth." << std::endl;
}
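For example (the paths, addresses, nonce ranges and tuning numbers below are just placeholders), plotting two drives in one run looks like this, with one <path> <address> <startNonce> <noncesNumber> group per drive:

Code:
./gpuPlotGenerator generate 0 0 8192 64 8192 \
    /mnt/disk1/plots 12345678901234567890 0 1000000 \
    /mnt/disk2/plots 12345678901234567890 1000000 1000000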
Chode
Newbie
*
Offline Offline

Activity: 42
Merit: 0



View Profile
October 04, 2014, 04:31:56 AM
 #12806

After a bunch of wasted hours I've finally managed to make dcct's miner work with dev's v2 pool.


So, first you have to download the original files from https://bchain.info/dcct_miner.tgz

After you extract them, replace the contents of mine.c with the following code:

Code:
/*
        Uses burstcoin plot files to mine coins
        Author: Markus Tervooren <info@bchain.info>
        BURST-R5LP-KEL9-UYLG-GFG6T

With code written by Uray Meiviar <uraymeiviar@gmail.com>
BURST-8E8K-WQ2F-ZDZ5-FQWHX

        Implementation of Shabal is taken from:
        http://www.shabal.com/?p=198

Usage: ./mine <node ip> [<plot dir> <plot dir> ..]
*/

#define _GNU_SOURCE
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <dirent.h>
#include <pthread.h>

#include "shabal.h"
#include "helper.h"

// Do not report results with deadline above this to the node. If you mine solo set this to 10000 to avoid stressing out the node.
#define MAXDEADLINE 5000000

// Change if you need
#define DEFAULT_PORT 8125

// These are fixed for BURST. Don't change!
#define HASH_SIZE 32
#define HASH_CAP 4096
#define PLOT_SIZE (HASH_CAP * HASH_SIZE * 2)

// Read this many nonces at once. 100k = 6.4MB per directory/thread.
// More may speed things up a little.
#define CACHESIZE 100000

#define BUFFERSIZE 2000

unsigned long long addr;
unsigned long long startnonce;
int scoop;

unsigned long long best;
unsigned long long bestn;
unsigned long long deadline;

unsigned long long targetdeadline;

char signature[33];
char oldSignature[33];

char nodeip[16];
int nodeport = DEFAULT_PORT;

unsigned long long bytesRead = 0;
unsigned long long height = 0;
unsigned long long baseTarget = 0;
time_t starttime;

int stopThreads = 0;

pthread_mutex_t byteLock;

#define SHARECACHE 1000

#ifdef SHARE_POOL
int sharefill;
unsigned long long sharekey[SHARECACHE];
unsigned long long sharenonce[SHARECACHE];
#endif

// Buffer to read the passphrase to. Only when SOLO mining
#ifdef SOLO
char passphrase[BUFFERSIZE + 1];
#endif

char readbuffer[BUFFERSIZE + 1];

// Some more buffers
char writebuffer[BUFFERSIZE + 1];

char *contactWallet(char *req, int bytes) {
int s = socket(AF_INET, SOCK_STREAM, IPPROTO_IP);

struct sockaddr_in ss;
ss.sin_addr.s_addr = inet_addr( nodeip );
ss.sin_family = AF_INET;
ss.sin_port = htons( nodeport );

struct timeval tv;
tv.tv_sec =  15;
tv.tv_usec = 0;  

setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv,sizeof(struct timeval));

if(connect(s, (struct sockaddr*)&ss, sizeof(struct sockaddr_in)) == -1) {
printf("\nError sending result to node                           \n");
fflush(stdout);
return NULL;
}

int written = 0;
do {
int w = write(s, &req[written], bytes - written);
if(w < 1) {
printf("\nError sending request to node                     \n");
return NULL;
}
written += w;
} while(written < bytes);

int bytesread = 0, rbytes;
do {
rbytes = read(s, &readbuffer[bytesread], BUFFERSIZE - bytesread);
if(rbytes > 0)
bytesread += rbytes;

} while(rbytes > 0 && bytesread < BUFFERSIZE);

close(s);

// Finish read
readbuffer[bytesread] = 0;

// locate HTTP header end
char *find = strstr(readbuffer, "\r\n\r\n");

// No header found
if(find == NULL)
return NULL;

return find + 4;
}

void procscoop(unsigned long long nonce, int n, char *data, unsigned long long account_id) {
char *cache;
char sig[32 + 64];

cache = data;

int v;

memmove(sig, signature, 32);

for(v=0; v<n; v++) {
memmove(&sig[32], cache, 64);

shabal_context x;
shabal_init(&x, 256);
shabal(&x, sig, 64 + 32);

char res[32];

shabal_close(&x, 0, 0, res);

unsigned long long *wertung = (unsigned long long*)res;
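// The deadline in seconds is *wertung / baseTarget (see the comparisons below),
// so smaller values are better.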

// Sharepool: Submit all deadlines below threshold
// Uray_pool: Submit best deadline
// Solo: Best deadline, but not low quality deadlines

#ifdef SHARE_POOL
// For sharepool just store results for later submission
if(*wertung < targetdeadline * baseTarget && sharefill < SHARECACHE) {
sharekey[sharefill] = account_id;
sharenonce[sharefill] = nonce;
sharefill++;
}
#else
if(bestn == 0 || *wertung <= best) {
best = *wertung;
bestn = nonce;

#ifdef SOLO
if(best < baseTarget * MAXDEADLINE) { // Has to be this good before we inform the node
#endif


#ifdef URAY_POOL
                       int bytes = sprintf(writebuffer, "POST /burst?requestType=submitNonce&accountId=%llu&nonce=%llu HTTP/1.0\r\nConnection: close\r\n\r\n", account_id,bestn);
#else
int bytes = sprintf(writebuffer, "POST /burst?requestType=submitNonce&secretPhrase=%s&nonce=%llu HTTP/1.0\r\nConnection: close\r\n\r\n", passphrase, bestn);
#endif
char *buffer = contactWallet( writebuffer, bytes );

if(buffer != NULL) {
char *rdeadline = strstr(buffer, "\"deadline\":");
if(rdeadline != NULL) {
rdeadline += 11;
char *end = strstr(rdeadline, "}");
if(end != NULL) {
// Parse and check if we have a better deadline
unsigned long long ndeadline = strtoull(rdeadline, 0, 10);
if(ndeadline < deadline || deadline == 0)
deadline = ndeadline;
}
} else {
printf("\nWalet reported no deadline.\n");
}
#ifdef SOLO
// Deadline too high? Passphrase may be wrong.
if(deadline > MAXDEADLINE) {
printf("\nYour deadline is larger than it should be. Check if you put the correct passphrase to passphrases.txt.\n");
fflush(stdout);
}
#endif

}
#ifdef SOLO
}
#endif
}

#endif
nonce++;
cache += 64;
}
}


void *work_i(void *x_void_ptr) {
        char *x_ptr = (char*)x_void_ptr;

char *cache = (char*) malloc(CACHESIZE * HASH_SIZE * 2);

if(cache == NULL) {
printf("\nError allocating memory                         \n");
exit(-1);
}


DIR *d;
struct dirent *dir;
d = opendir(x_ptr);

if (d) {
while ((dir = readdir(d)) != NULL) {
unsigned long long key, nonce, nonces, stagger, n;

char fullname[512];
strcpy(fullname, x_ptr);

if(sscanf(dir->d_name, "%llu_%llu_%llu_%llu", &key, &nonce, &nonces, &stagger)) {
// Does path end with a /? If not, add it.
if( fullname[ strlen( x_void_ptr ) - 1 ] == '/' ) { // check the last character of the path
strcpy(&fullname[ strlen( x_void_ptr ) ], dir->d_name);
} else {
fullname[ strlen( x_void_ptr ) ] = '/';
strcpy(&fullname[ strlen( x_void_ptr ) + 1 ], dir->d_name);
}

int fh = open(fullname, O_RDONLY);

if(fh < 0) {
printf("\nError opening file %s                             \n", fullname);
fflush(stdout);
continue; // Skip this file rather than reading from an invalid descriptor
}

unsigned long long offset = stagger * scoop * HASH_SIZE * 2;
unsigned long long size = stagger * HASH_SIZE * 2;
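// Plot layout: nonces are grouped in blocks of <stagger>; within each block the
// data for one scoop is a contiguous run of stagger * 64 bytes, located
// stagger * scoop * 64 bytes into the block (the offset/size above).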

for(n=0; n<nonces; n+=stagger) {
// Read one Scoop out of this block:
// start to start+size in steps of CACHESIZE * HASH_SIZE * 2

unsigned long long start = n * HASH_CAP * HASH_SIZE * 2 + offset, i;
unsigned long long noffset = 0;
for(i = start; i < start + size; i += CACHESIZE * HASH_SIZE * 2) {
unsigned int readsize = CACHESIZE * HASH_SIZE * 2;
if(readsize > start + size - i)
readsize = start + size - i;

int bytes = 0, b;
do {
b = pread(fh, &cache[bytes], readsize - bytes, i);
bytes += b;
} while(bytes < readsize && b > 0); // Read until cache is filled (or file ended)

if(b != 0) {
procscoop(n + nonce + noffset, readsize / (HASH_SIZE * 2), cache, key); // Process block

// Lock and add to totals
pthread_mutex_lock(&byteLock);
bytesRead += readsize;
pthread_mutex_unlock(&byteLock);
}

noffset += CACHESIZE;
}

if(stopThreads) { // New block while processing: Stop.
close(fh);
closedir(d);
free(cache);
return NULL;
}
}
close(fh);
}
}
closedir(d);
}
free(cache);
return NULL;
}

int pollNode() {

// Share-pool works differently
#ifdef SHARE_POOL
int bytes = sprintf(writebuffer, "GET /pool/getMiningInfo HTTP/1.0\r\nHost: %s:%i\r\nConnection: close\r\n\r\n", nodeip, nodeport);
#else
int bytes = sprintf(writebuffer, "POST /burst?requestType=getMiningInfo HTTP/1.0\r\nConnection: close\r\n\r\n");
#endif

char *buffer = contactWallet( writebuffer, bytes );

if(buffer == NULL)
return 0;

// Parse result
#ifdef SHARE_POOL
char *rbaseTarget = strstr(buffer, "\"baseTarget\": \"");
char *rheight = strstr(buffer, "\"height\": \"");
char *generationSignature = strstr(buffer, "\"generationSignature\": \"");
char *tdl = strstr(buffer, "\"targetDeadline\": \"");

if(rbaseTarget == NULL || rheight == NULL || generationSignature == NULL || tdl == NULL)
return 0;

char *endBaseTarget = strstr(rbaseTarget + 15, "\"");
char *endHeight = strstr(rheight + 11, "\"");
char *endGenerationSignature = strstr(generationSignature + 24, "\"");
char *endtdl = strstr(tdl + 19, "\"");

if(endBaseTarget == NULL || endHeight == NULL || endGenerationSignature == NULL || endtdl == NULL)
return 0;

// Set endpoints
endBaseTarget[0] = 0;
endHeight[0] = 0;
endGenerationSignature[0] = 0;
endtdl[0] = 0;

// Parse
if(xstr2strr(signature, 33, generationSignature + 24) < 0) {
printf("\nNode response: Error decoding generationsignature           \n");
fflush(stdout);
return 0;
}

height = strtoull(rheight + 11, 0, 10);
baseTarget = strtoull(rbaseTarget + 15, 0, 10);
targetdeadline = strtoull(tdl + 19, 0, 10);
#else
char *rbaseTarget = strstr(buffer, "\"baseTarget\":\"");
char *rheight = strstr(buffer, "\"height\":\"");
char *generationSignature = strstr(buffer, "\"generationSignature\":\"");
if(rbaseTarget == NULL || rheight == NULL || generationSignature == NULL)
return 0;

char *endBaseTarget = strstr(rbaseTarget + 14, "\"");
char *endHeight = strstr(rheight + 10, "\"");
char *endGenerationSignature = strstr(generationSignature + 23, "\"");
if(endBaseTarget == NULL || endHeight == NULL || endGenerationSignature == NULL)
return 0;

// Set endpoints
endBaseTarget[0] = 0;
endHeight[0] = 0;
endGenerationSignature[0] = 0;

// Parse
if(xstr2strr(signature, 33, generationSignature + 23) < 0) {
printf("\nNode response: Error decoding generationsignature           \n");
fflush(stdout);
return 0;
}

height = strtoull(rheight + 10, 0, 10);
baseTarget = strtoull(rbaseTarget + 14, 0, 10);
#endif

return 1;
}

void update() {
// Try until we get a result.
while(pollNode() == 0) {
printf("\nCould not get mining info from Node. Will retry..             \n");
fflush(stdout);
struct timespec wait;
wait.tv_sec = 1;
wait.tv_nsec = 0;
nanosleep(&wait, NULL);
};
}

int main(int argc, char **argv) {
int i;
if(argc < 3) {
printf("Usage: ./mine <node url> [<plot dir> <plot dir> ..]\n");
exit(-1);
}

#ifdef SOLO
// Reading passphrase from file
int pf = open( "passphrases.txt", O_RDONLY );
if( pf < 0 ) {
printf("Could not find file passphrases.txt\nThis file should contain the passphrase used to create the plotfiles\n");
exit(-1);
}

int bytes = read( pf, passphrase, 2000 );

// Replace spaces with +
for( i=0; i<bytes; i++ ) {
if( passphrase[i] == ' ' )
passphrase[i] = '+';

// end on newline
if( passphrase[i] == '\n' || passphrase[i] == '\r')
passphrase[i] = 0;
}

passphrase[bytes] = 0;
#endif

// Check if all directories exist:
struct stat d = {0};

for(i = 2; i < argc; i++) {
if ( stat( argv[i], &d) ) {
printf( "Plot directory %s does not exist\n", argv[i] );
exit(-1);
} else {
if( !(d.st_mode & S_IFDIR) ) {
printf( "%s is not a directory\n", argv[i] );
exit(-1);
}
}
}

char *hostname = argv[1];

// Contains http://? strip it.
if(strncmp(hostname, "http://", 7) == 0)
hostname += 7;

// Contains Port? Extract and strip.
char *p = strstr(hostname, ":");
if(p != NULL) {
p[0] = 0;
p++;
nodeport = atoi(p);
}

printf("Using %s port %i\n", hostname, nodeport);

hostname_to_ip(hostname, nodeip);

memset(oldSignature, 0, 33);

pthread_t worker[argc];
time(&starttime);

// Get startpoint:
update();

// Main loop
for(;;) {
// Get scoop:
char scoopgen[40];
memmove(scoopgen, signature, 32);

char *mov = (char*)&height;

scoopgen[32] = mov[7]; scoopgen[33] = mov[6]; scoopgen[34] = mov[5]; scoopgen[35] = mov[4]; scoopgen[36] = mov[3]; scoopgen[37] = mov[2]; scoopgen[38] = mov[1]; scoopgen[39] = mov[0];

shabal_context x;
shabal_init(&x, 256);
shabal(&x, scoopgen, 40);
char xcache[32];
shabal_close(&x, 0, 0, xcache);
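// The scoop index for this block is taken from the last two bytes of
// shabal256(signature || height), interpreted big-endian, modulo HASH_CAP.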

scoop = (((unsigned char)xcache[31]) + 256 * (unsigned char)xcache[30]) % HASH_CAP;

// New block: reset stats
best = bestn = deadline = bytesRead = 0;

#ifdef SHARE_POOL
sharefill = 0;
#endif

for(i = 2; i < argc; i++) {
if(pthread_create(&worker[i], NULL, work_i, argv[i])) {
printf("\nError creating thread. Out of memory? Try lower stagger size\n");
exit(-1);
}
}

#ifdef SHARE_POOL
// Collect threads back in for dev's pool:
                for(i = 2; i < argc; i++)
                       pthread_join(worker[i], NULL);

if(sharefill > 0) {
char *f1 = (char*) malloc(SHARECACHE * 100);
char *f2 = (char*) malloc(SHARECACHE * 100);

int used = 0;
for(i = 0; i<sharefill; i++)
used += sprintf(&f1[used], "%llu:%llu:%llu\n", sharekey[i], sharenonce[i], height);


int ilen = 1, red = used;
while(red > 10) {
ilen++;
red /= 10;
}


int db = sprintf(f2, "POST /pool/submitWork HTTP/1.1\r\nHost: %s:%i\r\nContent-Type: text/plain;charset=UTF-8\r\nContent-Length: %i\r\n\r\n%s\n", nodeip, nodeport, used + 1, f1);

printf("\nServer response: %s\n", contactWallet(f2, db));

free(f1);
free(f2);
}
#endif

memmove(oldSignature, signature, 32);

// Wait until block changes:
do {
update();

time_t ttime;
time(&ttime);
#ifdef SHARE_POOL
printf("\r%llu MB read/%llu GB total/%i shares@target %llu                 ", (bytesRead / ( 1024 * 1024 )), (bytesRead / (256 * 1024)), sharefill, targetdeadline);
#else
if(deadline == 0)
printf("\r%llu MB read/%llu GB total/no deadline                 ", (bytesRead / ( 1024 * 1024 )), (bytesRead / (256 * 1024)));
else
printf("\r%llu MB read/%llu GB total/deadline %llus (%llis left)           ", (bytesRead / ( 1024 * 1024 )), (bytesRead / (256 * 1024)), deadline, (long long)deadline + (unsigned int)starttime - (unsigned int)ttime);
#endif

fflush(stdout);

struct timespec wait;
// Query faster when solo mining
#ifdef SOLO
wait.tv_sec = 1;
#else
wait.tv_sec = 5;
#endif
wait.tv_nsec = 0;
nanosleep(&wait, NULL);
} while(memcmp(signature, oldSignature, 32) == 0); // Wait until signature changed

printf("\nNew block %llu, basetarget %llu                          \n", height, baseTarget);
fflush(stdout);

// Remember starttime
time(&starttime);

#ifndef SHARE_POOL
// Tell all threads to stop:
stopThreads = 1;
for(i = 2; i < argc; i++)
      pthread_join(worker[i], NULL);

stopThreads = 0;
#endif
}
}


and after that just follow the instructions from the original announcement made by dcct - https://bitcointalk.org/index.php?topic=731923.msg8879760#msg8879760

Note 1: If anyone is wondering, the problem was in the way the data was submitted to the pool. The tip came from burstcoin in a post about Blago's miner.
Note 2: I'm not a C programmer, so I'm not entirely sure the Content-Length header is calculated correctly; if the pool isn't counting all of your shares, that would be the most likely reason.
Note 3: The binaries provided in the archive are the original ones made by dcct and therefore won't work (the archive link is the one from his announcement post), so you'll have to recompile the code (a sample build line is below).
Note 4: I've only compiled and tested the code on 64-bit Linux, but the changes I've made shouldn't break anything.
Note 5: I've never used the v1 pool, nor do I intend to use it, so if the original version wasn't working with it, this one won't work either.
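A sample build line for 64-bit Linux (the exact source file names in dcct's archive may differ; -DSHARE_POOL selects the dev's-pool code paths used above, while -DSOLO or -DURAY_POOL select the other modes):

Code:
gcc -O2 -std=gnu99 -DSHARE_POOL -o mine_pool mine.c shabal.c helper.c -lpthread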
bensam123
Sr. Member
****
Offline Offline

Activity: 423
Merit: 250


View Profile
October 04, 2014, 07:04:14 AM
 #12807

Is this calculator still accurate? https://bchain.info/BURST/tools/calculator

Predicted output hasn't really changed much, even though the price of Burst has bottomed out over the last couple of weeks.
Irontiga
Hero Member
*****
Offline Offline

Activity: 588
Merit: 500


View Profile
October 04, 2014, 07:31:36 AM
 #12808

Is this calculator still accurate? https://bchain.info/BURST/tools/calculator

Predicted output hasn't really changed much, even though the price of Burst has bottomed out over the last couple of weeks.

It can never be accurate, but it is close. BaseTarget jumps around too much. It's a burst thing, nothing to do with the calc.
bensam123
Sr. Member
****
Offline Offline

Activity: 423
Merit: 250


View Profile
October 04, 2014, 07:46:45 AM
 #12809

Is this calculator still accurate? https://bchain.info/BURST/tools/calculator

Predicted output hasn't really changed much, even though the price of Burst has bottomed out over the last couple of weeks.

It can never be accurate, but it is close. BaseTarget jumps around too much. It's a burst thing, nothing to do with the calc.

+/-10%? Is it an average or a current value?
Irontiga
Hero Member
*****
Offline Offline

Activity: 588
Merit: 500


View Profile
October 04, 2014, 07:49:36 AM
 #12810

Is this calculator still accurate? https://bchain.info/BURST/tools/calculator

Predicted output hasn't really changed much, even though the price of Burst has bottomed out over the last couple of weeks.

It can never be accurate, but it is close. BaseTarget jumps around too much. It's a burst thing, nothing to do with the calc.

+/-10%? Is it an average or a current value?


Current value. If you made a graph (as someone has done) and sampled baseTarget every hour, you'd probably get a good idea of the current difficulty.
bensam123
Sr. Member
****
Offline Offline

Activity: 423
Merit: 250


View Profile
October 04, 2014, 08:36:37 AM
 #12811

Thanks for the answer.

So a question about the GPU Plotter. If you plot with it, do you have to continue to use it after plotting? This would interfere with normal coin mining if that's the case.
SpeedDemon13
Hero Member
*****
Offline Offline

Activity: 518
Merit: 500



View Profile WWW
October 04, 2014, 08:51:57 AM
 #12812

Thanks for the answer.

So a question about the GPU Plotter. If you plot with it, do you have to continue to use it after plotting? This would interfere with normal coin mining if that's the case.

Once GPU plotting is done, the GPU is free for mining other coins or gaming.

CRYPTSY exchange: https://www.cryptsy.com/users/register?refid=9017 BURST= BURST-TE3W-CFGH-7343-6VM6R BTC=1CNsqGUR9YJNrhydQZnUPbaDv6h4uaYCHv ETH=0x144bc9fe471d3c71d8e09d58060d78661b1d4f32 SHF=0x13a0a2cb0d55eca975cf2d97015f7d580ce52d85 EXP=0xd71921dca837e415a58ca0d6dd2223cc84e0ea2f SC=6bdf9d12a983fed6723abad91a39be4f95d227f9bdb0490de3b8e5d45357f63d564638b1bd71 CLAMS=xGVTdM9EJpNBCYAjHFVxuZGcqvoL22nP6f SOIL=0x8b5c989bc931c0769a50ecaf9ffe490c67cb5911
bipben
Member
**
Offline Offline

Activity: 60
Merit: 10


View Profile
October 04, 2014, 09:51:12 AM
 #12813

Awesome!
Are we able to plot with large stagger sizes in this version?
@twig123 Yes, the correlation between the staggerSize and the GPU RAM has been removed. The staggerSize only depends on the CPU RAM amount now.

Would it be more beneficial for HDDs to run in RAID arrays now, i.e. RAID 0? As for SSDs, that's not a very cost-efficient alternative... lol... I'll send you some Burst to thank you for all the efforts, most likely the middle to end of next week.
@SpeedDemon13 Yes, I agree with you, the SSD reference is not really cost-effective. Moreover, plotting on an SSD and transferring the files to a standard HDD later only shifts the problem.
However, I think that a multi-GPU, multi-disk version could be a good idea.

And to you, Cryo, a very big thanks!
Version 3.0 came out great, and at the moment it has everything you need to start creating a lot of plots without problems.

The steps to create this version took the author no small amount of time. Respect to him for that!

I transferred 5000 Burst to Cryo. Good people I will always support!
@ Palad1n Thanks for your support =D

Thanks a lot  Grin Have done some initial testing on Nvidia, and I think it's faster, but above all it can handle much larger stagger sizes. And I'm just on tiny 750 Tis

Donation incmming  Wink
@mmmaybe Glad to hear from an NVidia user. Thanks for your support =)

Awesome!

Fooling around a bit with it, 2 x 780 Ti went over 60k n/m and I'm fairly certain they could have gone higher if it weren't for the I/O limitation.

The highest stagger I could reach was 16383, which is just below 4 GB of RAM, which is the maximum it could allocate even though I have 16 GB. And that is completely regardless of the VRAM amount.
@bathrobehero Wow, impressive performance. Regarding the <staggerSize>, it may be because you have a 32-bit platform. What command line do you use? What error do you have when trying to allocate more than 4GB? Which OS do you have?
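For reference, if the generation buffer holds <staggerSize> nonces at the plot's per-nonce size (the same constants the miner source above uses), the 16383 ceiling lines up with a 4 GiB address-space limit:

Code:
1 nonce      = HASH_CAP * HASH_SIZE * 2 = 4096 * 32 * 2 = 262144 bytes (256 KiB)
16384 nonces = 16384 * 256 KiB = 4 GiB            -> more than a 32-bit process can address
16383 nonces = 16383 * 256 KiB = 4 GiB - 256 KiB  -> just fits under the limit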

Getting the following error message sometimes:

Code:
[ERROR] std::exception

Running 5 devices, within their memory limits. Any ideas...?
@mmmaybe After which step do you see this error? There is a problem with the error display here, so it'll be difficult to know the real cause. I'll have to correct the display problem first. I'll work on it, thanks for your feedback.

As the HDD is now the bottleneck, there are two options: <snip>
  - Write to multiple disks at the same time (I will put this on the roadmap).

I modified an earlier version of your plotter to do this. I've been using it for a couple of weeks or so, and it seems to produce valid plots much faster. Only tested on Linux.

Code:
void CommandGenerate::help() const {
    std::cerr << "Usage: ./gpuPlotGenerator generate ";
    std::cerr << "<platformId> <deviceId> <staggerSize> <threadsNumber> ";
    std::cerr << "<hashesNumber> <path> <address> <startNonce> <noncesNumber> ";
    std::cerr << "[<path> <address> <startNonce> <noncesNumber> ...]" << std::endl;
    std::cerr << "    - platformId: Id of the OpenCL platform to use (see [list] command)." << std::endl;
    std::cerr << "    - deviceId: Id of the OpenCL device to use (see [list] command)." << std::endl;
    std::cerr << "    - staggerSize: Stagger size." << std::endl;
    std::cerr << "    - threadsNumber: Number of parallel threads for each work group." << std::endl;
    std::cerr << "    - hashesNumber: Number of hashes to compute for each step2 kernel calls." << std::endl;
    std::cerr << "    - path: Path to the plots directory." << std::endl;
    std::cerr << "    - address: Burst numerical address." << std::endl;
    std::cerr << "    - startNonce: First nonce of the plot generation." << std::endl;
    std::cerr << "    - noncesNumber: Number of nonces to generate." << std::endl;
    std::cerr << "With multiple [<path> <address> <startNonce> <noncesNumber>] arguments " << std::endl;
    std::cerr << "GPU calculation iterates through a stagger for each job and the results are " << std::endl;
    std::cerr << "saved asynchronously.  This is intended to be used for plotting multiple " << std::endl;
    std::cerr << "mechanical drives simultaneously in order to max out GPU bandwidth." << std::endl;
}
@Alex Coventry I've looked at your code. I think that the N CPU buffers are really a bottleneck, as they will require a lot of RAM to plot N disks at the same time, or will force the user to reduce the <staggerSize> value, thus increasing disk stress when mining. Maybe a limited number of rotating buffers would be enough and more RAM-efficient. Maybe a stagger-less version (staggerSize = fileSize / PLOT_SIZE) of the plotter could solve this RAM issue (it needs some tests, as it will increase IO operations).
Anyhow, I will work on that part along with the already implemented multi-GPU support, beginning with the ideas behind your version.
What do you mean by "much faster"? Do you mean a performance difference between 3.0.0 and 2.1.1, or between 2.1.1 and your modded one?
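To put numbers on the stagger-less idea (using the 256 KiB per-nonce size and 64 bytes per scoop from the sources above; the 2 TiB file size is just an example):

Code:
2 TiB file          -> 2 TiB / 256 KiB = 8388608 nonces = staggerSize
one scoop per block -> 8388608 * 64 bytes = 512 MiB in one contiguous run

So when mining, each block needs one long sequential read per file instead of one seek per stagger group.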

Burst: BURST-YA29-QCEW-QXC3-BKXDL
bipben
Member
**
Offline Offline

Activity: 60
Merit: 10


View Profile
October 04, 2014, 10:29:54 AM
 #12814

I've transferred the code of the GPU plot generator from my personal SVN repository to GitHub.
Here is the link: https://github.com/bhamon/gpuPlotGenerator

Burst: BURST-YA29-QCEW-QXC3-BKXDL
DrTrouble
Member
**
Offline Offline

Activity: 111
Merit: 10


View Profile
October 04, 2014, 01:57:53 PM
 #12815

...

Just started to use your latest miner for uray's pool2.
Just curious about the last-line abbreviations: could you please tell me what sdl, cdl, ss & rs stand for? Also, there is a number in small brackets, so far always (0) for me; what is that number for?

One more request: could you please try to show the deadlines (at least the best deadline) in years:days:hours:minutes:seconds format, like it is shown by uray's miner?

Best Regards & Good luck for your work.

sdl - sent deadlines
rdl - received deadline confirmations
ss - sent getMiningInfo requests to the server
rs - received getMiningInfo responses from the server
(0) - errors

years:days:hours:minutes:seconds format - will be done.



OK, got it. But I have an issue.



I frequently get the error message shown in the attached screenshot, and unless I click Ignore, it keeps showing no deadline without reading any plot file!

Edit: Just read your message a few posts up, NVM. Please let us know when you fix it. Thanks.

I, too, am having the same issues on only SOME of my miners.  Eagerly awaiting a fix!

What is this "Bitcoin" of which you speak???
pimytron
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
October 04, 2014, 02:20:55 PM
 #12816

C-CEX lost my deposit again. Over 6000 BURST. I wonder how long, if ever, it will take to get my coins back this time?

Come on C-CEX. No need of this.
me too
redsn0w
Legendary
*
Offline Offline

Activity: 1778
Merit: 1042


#Free market


View Profile
October 04, 2014, 02:22:46 PM
 #12817

C-CEX lost my deposit again. Over 6000 BURST. I wonder how long, if ever, it will take to get my coins back this time?

Come on C-CEX. No need of this.
me too

Next time I suggest you use poloniex.com or bittrex.com  Wink . And have you tried to contact the C-CEX support? Only they can help you.
Chode
Newbie
*
Offline Offline

Activity: 42
Merit: 0



View Profile
October 04, 2014, 02:48:59 PM
 #12818

Recently I've seen a lot of blocks taking a long time, during which the disks are idle, so I'd suggest that the miner developers add an extra command-line option telling the miner a USB drive is in use. If the option is switched on, the miner would write a small (few-KB) file of random data to a folder on the hard drive(s) so they won't automatically power down (many external drives do this and disregard the OS power preferences). For example, the option could be something like --usb=/path/to/the/usb/folder/storing/the/plots/
I know this can be achieved through external software or shell scripts (something like the sketch below), but it's kind of annoying to have another extra piece of software installed and running on the system...
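A minimal standalone keep-alive sketch along those lines (the file name and the 60-second interval are just examples, not part of any miner):

Code:
/* keepalive.c - periodically rewrite a tiny file in each plot directory so an
   external USB drive never sits idle long enough to spin down.
   Build: gcc -O2 -o keepalive keepalive.c */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv) {
	if(argc < 2) {
		printf("Usage: ./keepalive <plot dir> [<plot dir> ..]\n");
		return 1;
	}

	for(;;) {
		int i;
		for(i = 1; i < argc; i++) {
			char fname[512];
			snprintf(fname, sizeof(fname), "%s/.keepalive", argv[i]);

			FILE *f = fopen(fname, "w");
			if(f != NULL) {
				/* a few changing bytes, pushed straight to the disk */
				fprintf(f, "%ld\n", (long)time(NULL));
				fflush(f);
				fsync(fileno(f));
				fclose(f);
			}
		}
		sleep(60); /* well below typical drive spin-down timeouts */
	}
	return 0;
}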
pimytron
Newbie
*
Offline Offline

Activity: 55
Merit: 0


View Profile
October 04, 2014, 03:08:24 PM
 #12819

C-CEX lost my deposit again. Over 6000 BURST. I wonder how long, if ever, it will take to get my coins back this time?

Come on C-CEX. No need of this.
me too

Next time I suggest you use poloniex.com or bittrex.com  Wink . And have you tried to contact the C-CEX support? Only they can help you.


Bye Bye C-CEX  Cry
Patek
Member
**
Offline Offline

Activity: 64
Merit: 10


View Profile
October 04, 2014, 03:19:29 PM
 #12820

Could someone please drop me the market ID for the cryptoport shares? I've been looking all around but can't really find it in this long thread :/