Author Topic: [ANN][RIC] Riecoin: constellations POW *CPU* HARD FORK successful, world record  (Read 684948 times)
steban
Full Member
***
Offline Offline

Activity: 168
Merit: 100



View Profile
March 04, 2014, 03:45:00 AM
 #1881

I'm mining this interesting coin...
The fair launch will really be a motivator for more adoption...
People, don't let this crypto fall into oblivion like Datacoin  Roll Eyes
I'm HODLING   Grin

RIE has been one of the very few coins with a fair launch; the work done is not wasted, and it remains extremely cheap. It should be evident that it will reach a much better place in the coinmarketcap rankings. Still, it puzzles me to see people holding meme coins (other than Doge) or other clone coins when they could have RIE.
surfer43
Sr. Member
****
Offline Offline

Activity: 560
Merit: 250


"Trading Platform of The Future!"


View Profile
March 04, 2014, 03:55:12 AM
 #1882

I'm mining this interesting coin...
The fair launch will really be a motivator for more adoption...
People, don't let this crypto fall into oblivion like Datacoin  Roll Eyes
I'm HODLING   Grin

RIE has been one of the very few coins with a fair launch; the work done is not wasted, and it remains extremely cheap. It should be evident that it will reach a much better place in the coinmarketcap rankings. Still, it puzzles me to see people holding meme coins (other than Doge) or other clone coins when they could have RIE.
When they could mine RIC.
bsunau7
Member
**
Offline Offline

Activity: 114
Merit: 10


View Profile
March 04, 2014, 09:06:28 AM
 #1883

As I think Supercomputing mentioned, using larger numbers instead of 2310*n+97, say some number 200 bits long instead of 2310, could go a long way.
Regarding the metric, I proposed "range scanned / s @ diff", but "time per 2^32 nonces @ difficulty" would probably work just as well; just don't forget to adjust for the numbers you are skipping (4 out of 5 in the 2310 case). I agree that it's difficult to compare between different difficulties...

I never liked adjusting for skipped numbers: if I can code something in which only one in a million numbers is considered a p6 candidate (i.e. it skips over 1M numbers on average), then I shouldn't have to adjust my rate by a million.  This is why "range/s @ difficulty" seems like the best fit; clever (or not so clever) algorithms can be rated.

@SC I'd love to know how you maintain primorials of that size (you don't have to tell, but I've kept mine to less than 64 bits to help with other parts of my code).


What do you mean specifically by "maintain"?  I'm happy to spill the beans on my big primorial version, since my hacked version of jh's is faster at this point. ;-)

  -Dave

I use a static (calculate once, use many times) primorial-based sieve, and the sieve consumes quite a lot of memory.  Right now I am using a hybrid approach just to keep the memory footprint down...  I assume that a sieve using 200-bit numbers is going to consume significantly more space than mine, and as I am getting close to not fitting in memory, it has my interest...

--
bsunau7
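
For readers following along: a "p6 candidate" is a start value n for which n, n+4, n+6, n+10, n+12 and n+16 are all prime -- the sextuplet constellation Riecoin's proof of work asks for. A minimal illustrative sketch of that check using GMP (not any poster's miner code):

Code:
// Illustrative only: checks whether n, n+4, n+6, n+10, n+12, n+16 are all
// (probable) primes, using GMP's mpz_probab_prime_p.
#include <gmp.h>
#include <cstdio>

static const unsigned kOffsets[6] = {0, 4, 6, 10, 12, 16};

bool is_p6(const mpz_t n) {
    mpz_t t;
    mpz_init(t);
    bool ok = true;
    for (unsigned d : kOffsets) {
        mpz_add_ui(t, n, d);                      // t = n + d
        if (mpz_probab_prime_p(t, 25) == 0) {     // 0 means definitely composite
            ok = false;
            break;
        }
    }
    mpz_clear(t);
    return ok;
}

int main() {
    mpz_t n;
    mpz_init_set_ui(n, 97);            // 97, 101, 103, 107, 109, 113 are all prime
    std::printf("%d\n", is_p6(n));     // prints 1
    mpz_clear(n);
    return 0;
}

The smallest such sextuplet starts at 7; the next starts at 97, which is where the 2310*n+97 form discussed above comes from.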
dga
Hero Member
*****
Offline Offline

Activity: 737
Merit: 511


View Profile WWW
March 04, 2014, 12:26:24 PM
 #1884

I also made some more optimized changes to my modded xptminer build to get even more 4ch/s for comparison with dga's new release.

static linux bins for different arch you can grab here: http://go.ispire.me/1vo
Fee: 2%

Did you follow the "sieve all six to 50k" guideline?  I'd encourage you and other miner creators to be very clear about the size of the all-six sieve used -- we all benefit from having blocks be found at a fair rate.

I ask because a miner that violates that guideline will be easy to detect and penalize server-side - so I'd caution people to always make sure their miners are playing by the rules.

dga
Hero Member
*****
Offline Offline

Activity: 737
Merit: 511


View Profile WWW
March 04, 2014, 12:30:13 PM
 #1885

As I think Supercomputing mentioned, using larger numbers instead of 2310*n+97, say some number 200 bits long instead of 2310, could go a long way.
Regarding the metric, I proposed "range scanned / s @ diff", but "time per 2^32 nonces @ difficulty" would probably work just as well; just don't forget to adjust for the numbers you are skipping (4 out of 5 in the 2310 case). I agree that it's difficult to compare between different difficulties...

I never liked adjusting for skipped numbers: if I can code something in which only one in a million numbers is considered a p6 candidate (i.e. it skips over 1M numbers on average), then I shouldn't have to adjust my rate by a million.  This is why "range/s @ difficulty" seems like the best fit; clever (or not so clever) algorithms can be rated.

@SC I'd love to know how you maintain primorials of that size (you don't have to tell, but I've kept mine to less than 64 bits to help with other parts of my code).


What do you mean specifically by "maintain"?  I'm happy to spill the beans on my big primorial version, since my hacked version of jh's is faster at this point. ;-)

  -Dave

I use a static (calculate once, use many times) primorial-based sieve, and the sieve consumes quite a lot of memory.  Right now I am using a hybrid approach just to keep the memory footprint down...  I assume that a sieve using 200-bit numbers is going to consume significantly more space than mine, and as I am getting close to not fitting in memory, it has my interest...

Nah, the trick is:
 - Generate up to a certain size polynomial.  I use 200560490130 or the next as my base primorial and store a vector of all 48923875 entries.
 - Sieve *this* out up to the huge primorial in advance.
 - Do your operations relative to the huge primorial.

But, as warned - the simple bitvector is still working better for me. Wink
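
A rough sketch of the two-level idea dga describes, assuming the same sextuplet offsets (my own illustration, not dga's miner code): start from the offsets that survive a small base primorial, then "sieve this out up to the huge primorial" by extending each offset across the larger wheel and discarding positions where any member of the sextuplet picks up one of the newly added primes.

Code:
// Illustration only: extend offsets valid modulo `base` to offsets valid
// modulo base * (product of new_primes).
#include <algorithm>
#include <cstdint>
#include <vector>

static const uint32_t kOffsets[6] = {0, 4, 6, 10, 12, 16};

std::vector<uint64_t> extend_wheel(const std::vector<uint64_t>& base_offsets,
                                   uint64_t base,
                                   const std::vector<uint32_t>& new_primes) {
    uint64_t extra = 1;
    for (uint32_t p : new_primes) extra *= p;

    // True if any of the six sextuplet positions is divisible by a new prime.
    auto hits_new_prime = [&](uint64_t cand) {
        for (uint32_t p : new_primes)
            for (uint32_t d : kOffsets)
                if ((cand + d) % p == 0) return true;
        return false;
    };

    std::vector<uint64_t> out;
    for (uint64_t o : base_offsets)
        for (uint64_t j = 0; j < extra; ++j)
            if (!hits_new_prime(o + base * j))
                out.push_back(o + base * j);
    std::sort(out.begin(), out.end());
    return out;
}

// extend_wheel({97, 937, 1147, 1357, 2197}, 2310, {13}) gives the 35 offsets
// valid mod 30030; repeating with more primes builds up a large primorial's
// offset vector, which then serves as the base of the sieve.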

bsunau7
Member
**
Offline Offline

Activity: 114
Merit: 10


View Profile
March 04, 2014, 01:13:33 PM
 #1886

As I think Supercomputing mentioned, using larger numbers instead of 2310*n+97, say some number 200 bits long instead of 2310, could go a long way.
Regarding the metric, I proposed "range scanned / s @ diff", but "time per 2^32 nonces @ difficulty" would probably work just as well; just don't forget to adjust for the numbers you are skipping (4 out of 5 in the 2310 case). I agree that it's difficult to compare between different difficulties...

I never liked adjusting for skipped numbers: if I can code something in which only one in a million numbers is considered a p6 candidate (i.e. it skips over 1M numbers on average), then I shouldn't have to adjust my rate by a million.  This is why "range/s @ difficulty" seems like the best fit; clever (or not so clever) algorithms can be rated.

@SC I'd love to know how you maintain primorials of that size (you don't have to tell, but I've kept mine to less than 64 bits to help with other parts of my code).


What do you mean specifically by "maintain"?  I'm happy to spill the beans on my big primorial version, since my hacked version of jh's is faster at this point. ;-)

  -Dave

I use a static (calculate once, use many times) primorial-based sieve, and the sieve consumes quite a lot of memory.  Right now I am using a hybrid approach just to keep the memory footprint down...  I assume that a sieve using 200-bit numbers is going to consume significantly more space than mine, and as I am getting close to not fitting in memory, it has my interest...

Nah, the trick is:
 - Generate up to a certain size polynomial.  I use 200560490130 or the next as my base primorial and store a vector of all 48923875 entries.
 - Sieve *this* out up to the huge primorial in advance.
 - Do your operations relative to the huge primorial.

But, as warned - the simple bitvector is still working better for me. Wink

Cool, that is what I am doing, but looking at your numbers I also pre-sieve the possible p6 chains, reducing my candidate count by ~128 times:

const uint64_t  primorial = 7420738134810;
const uint32_t  sexcount = 14243984;

Then I run a second scan inline to catch the next 2 dozen or so primes (lets me avoid gmp and use simple 64-bit math) before I hit the expensive code.  The general idea was to get a list of candidates which could be fed into something else (GPU was the thought).

It is much faster than the reference miner, but it is reaching the limit of how fast I can push it.

I have probably made a horrendous error in my algorithm... but coding again was fun...

Regards,

--
bsunau7
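
A minimal sketch of the kind of inline 64-bit pre-filter bsunau7 describes (my own reconstruction, not his code). Candidates are assumed to have the form N = origin + primorial*m + offset, so N mod p can be computed from precomputed residues without touching GMP until a candidate survives:

Code:
// Illustration only: reject candidates with a small factor among the next
// couple dozen primes after those already covered by the primorial (here
// assuming a 37# primorial, so the next primes start at 41).
#include <cstddef>
#include <cstdint>

static const uint32_t kSmallPrimes[] = {41, 43, 47, 53, 59, 61, 67, 71, 73, 79,
                                        83, 89, 97, 101, 103, 107, 109, 113,
                                        127, 131, 137, 139, 149, 151};
static const uint32_t kOffsets[6] = {0, 4, 6, 10, 12, 16};

// origin_mod[i] = origin mod kSmallPrimes[i] and primorial_mod[i] =
// primorial mod kSmallPrimes[i], both precomputed once with big-integer code.
bool survives_small_primes(const uint64_t* origin_mod,
                           const uint64_t* primorial_mod,
                           uint64_t m) {
    for (size_t i = 0; i < sizeof(kSmallPrimes) / sizeof(kSmallPrimes[0]); ++i) {
        const uint64_t p = kSmallPrimes[i];
        // N mod p using 64-bit arithmetic only (all factors here are < 2^8).
        uint64_t r = (origin_mod[i] + primorial_mod[i] * (m % p)) % p;
        for (uint32_t d : kOffsets)
            if ((r + d) % p == 0) return false;   // a sextuplet member is divisible by p
    }
    return true;
}

Only candidates that pass a filter like this would go on to the expensive big-integer primality tests.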
surfer43
Sr. Member
****
Offline Offline

Activity: 560
Merit: 250


"Trading Platform of The Future!"


View Profile
March 04, 2014, 01:14:48 PM
 #1887

wtf the ypool shares per second just doubled to 20  Angry
Supercomputing
Sr. Member
****
Offline Offline

Activity: 278
Merit: 250


View Profile
March 04, 2014, 02:30:26 PM
 #1888

As I think Supercomputing mentioned, using larger numbers instead of 2310*n+97, say some number 200 bits long instead of 2310, could go a long way.
Regarding the metric, I proposed "range scanned / s @ diff", but "time per 2^32 nonces @ difficulty" would probably work just as well; just don't forget to adjust for the numbers you are skipping (4 out of 5 in the 2310 case). I agree that it's difficult to compare between different difficulties...

I never liked adjusting for skipped numbers: if I can code something in which only one in a million numbers is considered a p6 candidate (i.e. it skips over 1M numbers on average), then I shouldn't have to adjust my rate by a million.  This is why "range/s @ difficulty" seems like the best fit; clever (or not so clever) algorithms can be rated.

@SC I'd love to know how you maintain primorials of that size (you don't have to tell, but I've kept mine to less than 64 bits to help with other parts of my code).


What do you mean specifically by "maintain"?  I'm happy to spill the beans on my big primorial version, since my hacked version of jh's is faster at this point. ;-)

  -Dave

I use a static (calculate once, use many times) primorial-based sieve, and the sieve consumes quite a lot of memory.  Right now I am using a hybrid approach just to keep the memory footprint down...  I assume that a sieve using 200-bit numbers is going to consume significantly more space than mine, and as I am getting close to not fitting in memory, it has my interest...

Nah, the trick is:
 - Generate up to a certain size polynomial.  I use 200560490130 or the next as my base primorial and store a vector of all 48923875 entries.
 - Sieve *this* out up to the huge primorial in advance.
 - Do your operations relative to the huge primorial.

But, as warned - the simple bitvector is still working better for me. Wink

Cool, that is what I am doing, but looking at your numbers I also pre-sieve the possible p6 chains, reducing my candidate count by ~128 times:

const uint64_t  primorial = 7420738134810;
const uint32_t  sexcount = 14243984;

Then I run a second scan inline to catch the next 2 dozen or so primes (lets me avoid gmp and use simple 64-bit math) before I hit the expensive code.  The general idea was to get a list of candidates which could be fed into something else (GPU was the thought).

It is much faster than the reference miner, but it is reaching the limit of how fast I can push it.

I have probably made a horrendous error in my algorithm... but coding again was fun...

Regards,

--
bsunau7


My implementation is a little different from both implementations mentioned above. In fact, the overhead is much less than that of jh00's implementation. My implementation is almost identical to Kim Walisch's primesieve implementation, with a few minor exceptions.

Please see Kim Walisch's description of wheel factorization if you would like to know exactly what I am doing:
http://primesieve.org/

Electrical Engineering & Computer Science
http://www.eecs.mit.edu/
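
For context on the wheel factorization being referenced: the idea is that the sieve only ever touches numbers coprime to the wheel primes. A toy mod-30 version for illustration only (primesieve itself uses much larger wheels and a segmented, heavily optimised layout):

Code:
// Toy mod-30 wheel: only the 8 residues coprime to 2, 3 and 5 are ever
// generated or crossed off, so roughly 8/30 of the work of a naive sieve.
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    const uint32_t limit = 200;
    const uint32_t wheel[8] = {1, 7, 11, 13, 17, 19, 23, 29};

    // Candidates: numbers in [7, limit] coprime to 30, in ascending order.
    std::vector<uint32_t> cand;
    for (uint32_t base = 0; base <= limit; base += 30)
        for (uint32_t w : wheel)
            if (base + w >= 7 && base + w <= limit) cand.push_back(base + w);

    std::vector<bool> composite(limit + 1, false);
    for (uint32_t p : cand) {
        if (composite[p]) continue;          // p is prime; strike its multiples
        for (uint32_t q : cand) {            // q also runs over wheel positions only
            if (q < p) continue;
            uint64_t m = (uint64_t)p * q;
            if (m > limit) break;
            composite[m] = true;
        }
    }

    std::printf("primes up to %u: 2 3 5", limit);
    for (uint32_t p : cand)
        if (!composite[p]) std::printf(" %u", p);
    std::printf("\n");
    return 0;
}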
beatfried
Newbie
*
Offline Offline

Activity: 28
Merit: 0


View Profile
March 04, 2014, 02:34:55 PM
 #1889

wtf the ypool shares per second just doubled to 20  Angry
yeah... I think many people (including me) pointed their rigs back to ypool after solomining while they had troubles...
GordonSSS
Member
**
Offline Offline

Activity: 63
Merit: 10


View Profile
March 04, 2014, 03:06:42 PM
 #1890

Any chance of Windows binaries?

I also made some more optimized changes to my modded xptminer build to get even more 4ch/s for comparison with dga's new release.

static linux bins for different arch you can grab here: http://go.ispire.me/1vo
Fee: 2%

XPM: AWFyioszN3vsyQsPbAtCybqu3j5v6FqQTE
RIC: RDzYLbepJdGu5vZMwYe5GtiJYe417AWJJV
BTC: 1LXgRb1F6KZmVQBzcKsfpAAL57Se9EKeT6
dga
Hero Member
*****
Offline Offline

Activity: 737
Merit: 511


View Profile WWW
March 04, 2014, 03:16:23 PM
 #1891


 - Generate up to a certain size polynomial.  I use 200560490130 or the next as my base primorial and store a vector of all 48923875 entries.
 - Sieve *this* out up to the huge primorial in advance.
 - Do your operations relative to the huge primorial.

But, as warned - the simple bitvector is still working better for me. Wink

Cool, that is what I am doing, but looking at your numbers I also pre-sieve the possible p6 chains, reducing my candidate count by ~128 times:
const uint64_t  primorial = 7420738134810;
const uint32_t  sexcount = 14243984;

Then I run a second scan inline to catch the next 2 dozen or so primes (lets me avoid gmp and use simple 64-bit math) before I hit the expensive code.  The general idea was to get a list of candidates which could be fed into something else (GPU was the thought).

It is much faster than the reference miner, but it is reaching the limit of how fast I can push it.

I have probably made a horrendous error in my algorithm... but coding again was fun...

Regards,

My implementation is a little different from both implementations mentioned above. In fact, the overhead is much less than that of jh00's implementation. My implementation is almost identical to Kim Walisch's primesieve implementation, with a few minor exceptions.

Please see Kim Walisch's description of wheel factorization if you would like to know exactly what I am doing:
http://primesieve.org/


@bsunau7 - mine does the same.  I kill any location that fails to produce a six-set.  I wonder which of us has a bug?  *grin*  I'll check my sieving code again.  As one way to start comparing, the polynomials for the first few primorials are:

Generator at Pn7 (210)
97  

Generator at Pn11 (2310)
97  937  1147  1357  2197  

Generator at Pn13 (30030)
97  1357  2407  3457  4717  5557  5767  6817  7867  8077  8287  10177  10597  11647  12907  13747  13957  15007  16057  16267  17107  18367  19417  19837  21727  21937  22147  23197  24247  24457  25297  26557  27607  28657  29917  

@Supercomputing - Did you figure out a way to combine wheel factorization with storing a dense bitvector div 2310 (or div 210)?  Or do you just allow a large bitvector and handle it through segmentation?  I liked the way the jh implementation saved a lot of sieve space that way, and a straightforward prime sieve achieves a less dense packing (3-4x).
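
The generator lists above can be reproduced with a quick brute-force check (a sketch of my own, handy for comparing sieve implementations): keep every residue r modulo the primorial for which all six sextuplet positions are coprime to the primorial.

Code:
// Brute-force verification of the generator lists for 210, 2310 and 30030.
#include <cstdint>
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

int main() {
    const uint64_t primorials[] = {210, 2310, 30030};
    const uint32_t offsets[6] = {0, 4, 6, 10, 12, 16};
    for (uint64_t P : primorials) {
        std::printf("Generator at %llu:", (unsigned long long)P);
        for (uint64_t r = 0; r < P; ++r) {
            bool ok = true;
            for (uint32_t d : offsets)
                if (std::gcd(r + d, P) != 1) { ok = false; break; }
            if (ok) std::printf(" %llu", (unsigned long long)r);
        }
        std::printf("\n");
    }
    // Prints 97; then 97 937 1147 1357 2197; then the 35 offsets for 30030.
    return 0;
}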

Supercomputing
Sr. Member
****
Offline Offline

Activity: 278
Merit: 250


View Profile
March 04, 2014, 06:31:02 PM
Last edit: March 04, 2014, 07:04:29 PM by Supercomputing
 #1892


 - Generate up to a certain size polynomial.  I use 200560490130 or the next as my base primorial and store a vector of all 48923875 entries.
 - Sieve *this* out up to the huge primorial in advance.
 - Do your operations relative to the huge primorial.

But, as warned - the simple bitvector is still working better for me. Wink

Cool, that is what I am doing, but looking at your numbers I also pre-sieve the possible p6 chains, reducing my candidate count by ~128 times:
const uint64_t  primorial = 7420738134810;
const uint32_t  sexcount = 14243984;

Then I run a second scan inline to catch the next 2 dozen or so primes (lets me avoid gmp and use simple 64-bit math) before I hit the expensive code.  The general idea was to get a list of candidates which could be fed into something else (GPU was the thought).

It is much faster than the reference miner, but it is reaching the limit of how fast I can push it.

I have probably made a horrendous error in my algorithm... but coding again was fun...

Regards,

My implementation is a little different from both implementations mentioned above. In fact, the overhead is much less than that of jh00's implementation. My implementation is almost identical to Kim Walisch's primesieve implementation, with a few minor exceptions.

Please see Kim Walisch's description of wheel factorization if you would like to know exactly what I am doing:
http://primesieve.org/


@bsunau7 - mine does the same.  I kill any location that fails to produce a six-set.  I wonder which of us has a bug?  *grin*  I'll check my sieving code again.  As one way to start comparing, the polynomials for the first few primorials are:

Generator at Pn7 (210)
97  

Generator at Pn11 (2310)
97  937  1147  1357  2197  

Generator at Pn13 (30030)
97  1357  2407  3457  4717  5557  5767  6817  7867  8077  8287  10177  10597  11647  12907  13747  13957  15007  16057  16267  17107  18367  19417  19837  21727  21937  22147  23197  24247  24457  25297  26557  27607  28657  29917  

@Supercomputing - Did you figure out a way to combine wheel factorization with storing a dense bitvector div 2310 (or div 210)?  Or do you just allow a large bitvector and handle it through segmentation?  I liked the way the jh implementation saved a lot of sieve space that way, and a straightforward prime sieve achieves a less dense packing (3-4x).

Well, think of a primorial as a wheel with no pre-sieving. For example, 43# guarantees that k, k+4, k+6, k+10, k+12, and k+16 will have no divisors less than or equal to 43. Therefore, the bigger the primorial, the more efficiently the sieve will run. Each bit in your sieve array already represents a potential chain k, and the trick is to segment the sieve so that you do not keep eliminating the same false chains over and over again within your sieve array.

Only a single static table of 32-bit integers (primes interleaved with prime inverses) is needed to coalesce memory access.

Electrical Engineering & Computer Science
http://www.eecs.mit.edu/
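
One way to read the "primes interleaved with prime inverses" table (my interpretation, not Supercomputing's actual code): if sieve bit j stands for the candidate N_j = origin + j*primorial, then the first j struck by a prime p for a given sextuplet offset is j = -(origin + offset) * primorial^{-1} mod p, so storing primorial^{-1} mod p right next to p keeps that computation to one multiply and a few reductions per (prime, offset) pair.

Code:
// Sketch of a prime/inverse table for a primorial-stepped sieve.
#include <cstdint>
#include <vector>

// Modular inverse of a mod m via extended Euclid (a and m coprime, m < 2^31).
uint32_t mod_inverse(uint64_t a, uint32_t m) {
    int64_t t = 0, new_t = 1, r = m, new_r = (int64_t)(a % m);
    while (new_r != 0) {
        int64_t q = r / new_r;
        int64_t tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r; r = new_r; new_r = tmp;
    }
    return (uint32_t)(t < 0 ? t + m : t);
}

struct PrimeEntry { uint32_t p, inv; };   // prime interleaved with primorial^{-1} mod prime

std::vector<PrimeEntry> build_table(uint64_t primorial,
                                    const std::vector<uint32_t>& primes) {
    std::vector<PrimeEntry> table;
    for (uint32_t p : primes)
        table.push_back({p, mod_inverse(primorial % p, p)});
    return table;
}

// First sieve index to strike for prime p and one sextuplet offset:
// solves origin + offset + j*primorial == 0 (mod p) for j.
uint64_t first_index(uint64_t origin_mod_p, uint32_t offset, const PrimeEntry& e) {
    uint64_t rem = (origin_mod_p + offset) % e.p;
    return ((e.p - rem) % e.p) * (uint64_t)e.inv % e.p;
}

In a segmented sieve you would then strike bits j, j+p, j+2p, ... within each segment, which is where coalesced access to the interleaved table pays off.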
steban
Full Member
***
Offline Offline

Activity: 168
Merit: 100



View Profile
March 04, 2014, 07:20:12 PM
 #1893

Any news on new pools up?
glongsword
Full Member
***
Offline Offline

Activity: 314
Merit: 100



View Profile
March 04, 2014, 07:21:44 PM
 #1894

FYI: Riecoin exchange Poloniex had 12% of its customers' BTC stolen, and they are working out how to refund them: https://bitcointalk.org/index.php?topic=499580
steban
Full Member
***
Offline Offline

Activity: 168
Merit: 100



View Profile
March 04, 2014, 07:54:55 PM
 #1895

FYI: Riecoin exchange Poloniex had 12% of its customers' BTC stolen, and they are working out how to refund them: https://bitcointalk.org/index.php?topic=499580

Riecoin is ONE of the coins traded at Poloniex, and actually only people holding BTC are affected. Go spread FUD somewhere else.

gpools
Sr. Member
****
Offline Offline

Activity: 364
Merit: 250


View Profile
March 04, 2014, 08:35:54 PM
 #1896

FYI: Riecoin exchange Poloniex had 12% of its customers' BTC stolen, and they are working out how to refund them: https://bitcointalk.org/index.php?topic=499580

Riecoin is ONE of the coins traded at Poloniex, and actually only people holding BTC are affected. Go spread FUD somewhere else.


other exchange https://www.mintpal.com/
surfer43
Sr. Member
****
Offline Offline

Activity: 560
Merit: 250


"Trading Platform of The Future!"


View Profile
March 04, 2014, 09:13:52 PM
 #1897

wtf the ypool shares per second just doubled to 20  Angry
yeah... I think many people (including me) pointed their rigs back to ypool after solomining while they had troubles...
Is it even feasible to solo mine right now?
dga
Hero Member
*****
Offline Offline

Activity: 737
Merit: 511


View Profile WWW
March 04, 2014, 09:16:42 PM
 #1898

wtf the ypool shares per second just doubled to 20  Angry
yeah... I think many people (including me) pointed their rigs back to ypool after solomining while they had troubles...
Is it even feasible to solo mine right now?

The whales can. Smiley  Someone's mined some blocks using my solo miner, and it's pretty clear from the ypool block logs that others are solomining also.

There's an interesting exercise in looking at the offsets in the block log, btw - it can tell you a lot about the miners that are in use.

Supercomputing
Sr. Member
****
Offline Offline

Activity: 278
Merit: 250


View Profile
March 04, 2014, 09:42:50 PM
 #1899

wtf the ypool shares per second just doubled to 20  Angry
yeah... I think many people (including me) pointed their rigs back to ypool after solomining while they had troubles...
Is it even feasible to solo mine right now?

The whales can. Smiley  Someone's mined some blocks using my solo miner, and it's pretty clear from the ypool block logs that others are solomining also.

There's an interesting exercise in looking at the offsets in the block log, btw - it can tell you a lot about the miners that are in use.

I mined 11 blocks yesterday with 10 Dell R620 servers (2 x Intel E5-2697 v2's each); can you tell which ones are mine?  Grin

Electrical Engineering & Computer Science
http://www.eecs.mit.edu/
steban
Full Member
***
Offline Offline

Activity: 168
Merit: 100



View Profile
March 04, 2014, 09:46:54 PM
Last edit: March 04, 2014, 10:28:02 PM by steban
 #1900

FYI: Riecoin exchange Poloniex had 12% of its customers' BTC stolen, and they are working out how to refund them: https://bitcointalk.org/index.php?topic=499580

Riecoin is ONE of the coins traded at Poloniex, and actually only people holding BTC are affected. Go spread FUD somewhere else.




I would like to add that Poloniex is actually taking full responsibility for the hack and reimbursing the lost BTC from their own pocket.  Most exchanges just disappear after something like this; I can see Poloniex becoming a major player in little time.

As I wrote before, Poloniex is one of the exchanges where RIE is traded. Riecoin is also now listed on MintPal and will soon appear on Cryptsy, as there was no premine and it has new code behind it.