gah - I always forget to resize my VMs to 2GB+ for compiling... new wallet running smoothly after remembering, looking good so far
|
|
|
Thanks dev for the new coin control features.
+1 Coin Control is an excellent feature, and will become essential for the faster PoS coins. Good, this will help distinguish CommunityCoin in the future
|
|
|
i asked before, but it was ignored, so ill ask again, when can u change the diff ? 512 is too big .. come on ...
It may be more accurate to say that your question was deferred rather than ignored. It's been stated as one of those "nice to have but not critical" things.
|
|
|
Well Ghash didn't expect all the multipools to jump on their promo, so the tiered reduction was "needed" to prevent 51% attack, lol. They should have just honored their offer and not allowed new sign ups, but it seems that whales and multipools are just mining when <40GH/s and then switching back and forth for the week. I haven't mined litecoin in a long time and have nothing in my wallet, so I am going to take advantage of the 200% bonus, even if it dips to 125% every now and then.
this thread is not about that shitty Ghash, can we stop talking about that crap ? so ghash is over 4GH/s, so their promotion for 2x payouts lasted a day? yep, typical bait and switch.
|
|
|
Like it or not, Multi-pool mining is helping this consolidation, by killing off the coins no one really wants. The only value in 90% of the crap coins created is false hope, pumped by speculators looking to take advantage of idiots. This, exactly this, and lots of it. Scamcoins are (IMHO) unhealthy for cryptocurrencies, and it takes the entire market to force them to where they should be (zero) by selling them as soon as they are mined. These multipools are simply exploiting the imbalance in overpriced altcoins and forcing them down to a level more on par with their actual demand. *IF* there is another coin (perhaps scrypt-jane or a variant) that deserves to take a place on the real stage, then it should survive the multipool process. Using multipools helps with that process. Now, if there were only a way to mine scrypt AND scrypt-jane (etc.) in a single pool... THAT would be very interesting to me.
|
|
|
You know, the best way I see to put this reject difference between pools stuff to bed would be a simple test: given two identical optimized rigs (largish ones to help combat variance) with identical configurations, each pointed to a different pool, run them for exactly the same amount of time for at least one week. Compare total payouts. Best payout wins.
A bunch of us have done that on this subreddit: http://www.reddit.com/r/multimining. I did wafflepool vs middlecoin for 5 days and wafflepool came out ahead. Also there is this site: http://poolpicker.eu/ (he talks about how he gets those stats on his twitter https://twitter.com/Webbson_/ ). That website gets its data from the pools' reported stats, so you would have to trust the pools not to pad their numbers. I might split my GPUs up between clevermining and another pool, but several of these pools look about the same over the long term, so I'm not sure if it matters. hmm... it's a good start, I guess, although I'm curious about actual payouts rather than the self-reported payouts at poolpicker, since "accepted hashrate" is an important variable between pools that should be part of the comparison.
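The side-by-side test described above comes down to normalizing each pool's total payout by the same hashrate and duration. A minimal sketch of that comparison (the payout figures and the 2.5 MH/s / 120-hour window are hypothetical numbers, not real pool data):

```python
def payout_per_mh_hour(total_payout_btc, hashrate_mhs, hours):
    """Normalize a pool's total payout by hashrate and test duration,
    so two identical rigs on different pools can be compared fairly."""
    return total_payout_btc / (hashrate_mhs * hours)

# Hypothetical 5-day (120 h) results for two identical 2.5 MH/s rigs:
pool_a = payout_per_mh_hour(0.0150, 2.5, 120)  # rig pointed at pool A
pool_b = payout_per_mh_hour(0.0138, 2.5, 120)  # rig pointed at pool B

print(pool_a > pool_b)  # True: pool A paid more per MH-hour in this window
```

Running both rigs simultaneously, as noted below, is what makes the comparison meaningful, since it cancels out market and difficulty swings that hit both pools equally.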
|
|
|
You know, the best way I see to put this reject difference between pools stuff to bed would be a simple test: given two identical optimized rigs (largish ones to help combat variance) with identical configurations, each pointed to a different pool, run them for exactly the same amount of time for at least one week. Compare total payouts. Best payout wins.
I may do that this weekend (I'll have to do some calculations on power draws first when switching things around to make identical setups though - back of the napkin math says should work at ~2.5M/h each for the two older rigs I have in mind for rearranging for this) unless someone else is already doing this and wishes to share with the class
Have to run them at the same time to rule out market fluctuations, if this isn't already known. Yes, exactly - which is why I said there should be two identical rigs (I suppose I didn't specify "running simultaneously", although that was my intent)
|
|
|
You know, the best way I see to put this reject difference between pools stuff to bed would be a simple test: given two identical optimized rigs (largish ones to help combat variance) with identical configurations, each pointed to a different pool, run them for exactly the same amount of time for at least one week. Compare total payouts. Best payout wins.
I may do that this weekend (I'll have to do some calculations on power draws first when switching things around to make identical setups though - back of the napkin math says should work at ~2.5M/h each for the two older rigs I have in mind for rearranging for this) unless someone else is already doing this and wishes to share with the class
|
|
|
Fighting a nasty DDOS Well, you know you've arrived, then.
|
|
|
well, good news is that my rejects are at 0% right now
|
|
|
is the quoting option on this forum broken or are people just unaware of its use? I'm having a hard time distinguishing the quoted text from the reply text over the last 10 or so posts.
|
|
|
does anybody here know how long you have to mine and how much hashing power you should have in order to acquire 1 MMC?
pools are in development as I type this; the answer to that question right now is a function of your odds of solo-mining a block, which aren't all that easy to determine due to the wide variance involved. As soon as the pools are up and running, difficulty will jump a bit as those who until now did not have access to high-end machines come (back) to mining, but you'll be able to mine fractions of a coin based on the hash power you contribute to the pool instead of waiting for a complete block on your own. Ok, I'll just sign up to a pool as soon as they are developed. Any idea where such info will be posted? very soon. I know a standalone miner has already been developed, just waiting on the pool website now.
|
|
|
does anybody here know how long you have to mine and how much hashing power you should have in order to acquire 1 MMC?
pools are in development as I type this; the answer to that question right now is a function of your odds of solo-mining a block, which aren't all that easy to determine due to the wide variance involved. As soon as the pools are up and running, difficulty will jump a bit as those who until now did not have access to high-end machines come (back) to mining, but you'll be able to mine fractions of a coin based on the hash power you contribute to the pool instead of waiting for a complete block on your own.
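The "how long for 1 MMC" question above reduces to your fraction of total network hashrate; pooling doesn't change the expected value, it just smooths out the variance. A rough sketch of the expected-value math (the network hashrate, block rate, and reward numbers are made-up placeholders, not actual MMC figures):

```python
def expected_coins_per_day(my_hashrate, network_hashrate,
                           blocks_per_day, reward_per_block):
    """Expected coins/day = your share of total hash power times the
    coins the network emits per day. Holds for solo or pooled mining;
    a pool only reduces the variance around this expectation."""
    share = my_hashrate / network_hashrate
    return share * blocks_per_day * reward_per_block

# Hypothetical: 1 kH/s out of a 500 kH/s network, 288 blocks/day, 1 coin each:
per_day = expected_coins_per_day(1_000, 500_000, 288, 1.0)
days_for_one_coin = 1.0 / per_day
print(per_day)  # 0.576 coins/day expected
```

Solo, the same expectation applies, but you might wait many multiples of `days_for_one_coin` (or get lucky early), which is the "wide variance" the post refers to.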
|
|
|
what is the value of a mmc?
0.01 BTC If one MMC = 10 PTS and 1 PTS = 0.02 BTC then one MMC = 0.2 BTC Again: if only... Just because PTS owners at block 32000 got awarded 1 MMC per 10 PTS doesn't necessarily mean that 1 MMC = 10 PTS. The market decides the value. Right now there isn't a market, so people interested in getting in on MMC should start making offers. Clearly no one is willing to pay 0.2 BTC / MMC, or even 0.01 BTC / MMC for that matter. Exactly. One thing Protoshares have going for them is that they are effectively a way to premine other coins as they are released (this is an oversimplification, but it's more or less accurate). Since MemoryCoin is one of those spinoffs, it doesn't have the inherent potential future value that Protoshares has. I expect the market to start near .001 BTC or so, right about where the spreadsheet has it, in fact, and gradually increase alongside difficulty (but it will be pegged to the actual cost of mining, not the exchange rate into Bitcoin - as such, if Bitcoin rises dramatically against fiat faster than Memorycoin difficulty adjusts, the price of Memorycoin should drop against Bitcoin but stay relatively near the same fiat price). If Memorycoin ever gets traction as a viable altcoin, then you'll start to see it detach from fiat and move more with Bitcoin or Litecoin.
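The quoted conversion is just chained rate multiplication, and it only yields the 0.2 BTC/MMC figure if the 0.02 BTC rate is per PTS (at 0.02 BTC per 10 PTS it would come out to 0.02 BTC/MMC). A minimal sketch of the arithmetic (the rates are the hypothetical ones from the post, not market prices):

```python
def mmc_in_btc(pts_per_mmc, btc_per_pts):
    """Chain the two hypothetical exchange rates: MMC -> PTS -> BTC."""
    return pts_per_mmc * btc_per_pts

# 1 MMC = 10 PTS and 1 PTS = 0.02 BTC gives 0.2 BTC per MMC:
print(mmc_in_btc(10, 0.02))  # 0.2
```

As the reply points out, the award ratio at block 32000 fixes neither rate; only an actual market does.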
|
|
|
FWIW, the newest git source as of 3 hours prior to this post is working on linux with no indication whatsoever of the memory leak, but I also rewrote my self-monitoring script a bit, as I noticed purely by accident that the CPU drops near zero when votes are being tabulated (my script may have been killing and restarting the process while votes were being counted, which is probably what caused the large number of corrupt blockchain issues I was having). Not sure if my stable systems are due to new source or to something I did... but I'm quite certain it wasn't 100% me.
|
|
|
Thanks for addressing this.
Tried on source posted as of 30 minutes ago, I still get the same thing... 32mb per thread added to vmem every time the CPU dips (not exactly sure what's happening, probably finished with old work and starting new work?)
Okay, thanks - so if you're just running 1 thread, on each iteration, it's adding 32mb? and if you're running 32 threads, it's adding 32x32mb = 1GB each iteration? correct.
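The leak math in the exchange above scales linearly with thread count, which is why high-core-count machines hit the wall so much faster. A small sketch estimating how many work iterations fit in a given RAM budget (the 32 MB per thread per iteration is the figure reported above; the 8 GB budget is just an example):

```python
def iterations_until_oom(ram_budget_mb, threads, leak_mb_per_thread=32):
    """Whole work iterations before leaked vmem exceeds the budget,
    assuming each thread leaks a fixed amount per iteration."""
    leak_per_iteration = threads * leak_mb_per_thread  # 32 threads -> 1024 MB
    return ram_budget_mb // leak_per_iteration

# With 32 threads leaking 32 MB each, an 8 GB budget survives 8 iterations,
# while a 2-thread run lasts 128 iterations on the same budget:
print(iterations_until_oom(8192, 32))  # 8
print(iterations_until_oom(8192, 2))   # 128
```

This is consistent with the observation further down that users running only 2 or 4 cores may simply not have hit the limit yet.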
|
|
|
Okay, can you try uncommenting (latest source) lines 136, 137
//CRYPTO_cleanup_all_ex_data(); //EVP_cleanup();
and 271, 272
//CRYPTO_cleanup_all_ex_data(); //EVP_cleanup();
Let me know if that helps please.
Thanks for addressing this. Tried on source posted as of 30 minutes ago, I still get the same thing... 32mb per thread added to vmem every time the CPU dips (not exactly sure what's happening, probably finished with old work and starting new work?) actually, this sometimes (!?) breaks ssl with "context: unable to load ssl2 md5 routines". reverting to commented.
|
|
|
Okay, can you try uncommenting (latest source) lines 136, 137
//CRYPTO_cleanup_all_ex_data(); //EVP_cleanup();
and 271, 272
//CRYPTO_cleanup_all_ex_data(); //EVP_cleanup();
Let me know if that helps please.
Thanks for addressing this. Tried on source posted as of 30 minutes ago, I still get the same thing... 32mb per thread added to vmem every time the CPU dips (not exactly sure what's happening, probably finished with old work and starting new work?)
|
|
|
Okay. Flash Fork at 607 - github updated. Binaries available shortly.
Not cool, man - you just effectively cut off those who took the time to upgrade due to the carefully and considerately planned announcement, in favor of those who happened to catch the thread just when you posted. :\
|
|
|
that's unnecessary. I've found that the problem is exacerbated by the number of threads being used; perhaps he's only using 2 or 4 cores and therefore hasn't hit a hard limit yet?
I ended up just writing a script to restart the daemon if (when) cpu drops below a certain threshold, indicating the process has crashed... hacky, but good enough for now.
radiumsoup, can you share that script? appreciated. radiumsoup, please post a workaround script, you will help many people. I'll put a link on www.MemoryCoin.info Thank you! sorry, I didn't see that earlier request... what I wrote is highly dependent on my own peculiar setup (resident launch/relaunch script triggered by rc.local, a cron job to launch a CPU monitor, and the daemon itself). I can point people in the direction I took, though. Bash script with pseudocode in comments:

INTERVAL=15  # seconds between CPU checks
THRESHOLD=".6 * $(/usr/bin/nproc)"  # average per-CPU load minimum before we assume it died
while :
do
    FIVEMINCPU=$(cut -d ' ' -f 2 /proc/loadavg)
    if (( $(echo "$FIVEMINCPU < $THRESHOLD" | bc -l) )); then
        # kill any remaining running process using whatever method matches your
        # setup (I used killall -s 9 bitcoind, because the memorycoin executable
        # is the only thing named bitcoind on these hosts)
        # launch the daemon again, again depending on your setup (I have it
        # launching automatically from another script if it detects it's not
        # already running)
        sleep 600  # must be long enough for the process to start, scan the
                   # blocks, and bring the 5-minute load average back above
                   # the threshold
    fi
    sleep $INTERVAL
done

Note that you should not run this at boot without first allowing the daemon to launch and pull the 5-minute average up, otherwise it will kill the process before it has a chance to peg the CPUs. I have it launching at boot with a 600-second sleep before the while loop begins. I know there are better ways to accomplish the same thing, but this is just lipstick on the pig until someone who knows C much better than I do (which isn't saying much) can figure out the memory leak, at which point this (hopefully) becomes unnecessary.
|
|
|
|