There is no risk: your client will verify the whole block chain anyway, no matter where it gets it from. Looks to me like there's some serious networking issue on your end. Are you able to browse the web with that machine at all?
|
|
|
Someone shoot this damned block, please! ...or shoot me instead for I've had enough
|
|
|
Like I said, this is your chance to shine. Cgminer has a very useful shares parameter (e.g. "shares" : "10000") which tells it to exit once it has solved the specified number of shares. Write a script that launches cgminer consecutively with different SDKs, kernels, vectors, and worksizes and post your results here. This will be the real deal, far more accurate than any info you can find online.
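Something along these lines could do it. Just a sketch: the flag names below mirror the config options mentioned above but may differ between cgminer versions, the pool URL and credentials are placeholders, and switching SDKs still has to be done by hand between runs.

#!/usr/bin/env python3
# Sketch of a cgminer benchmarking launcher: runs cgminer once per
# kernel/vectors/worksize combination and times how long each run takes
# to solve a fixed number of shares. Pool URL and credentials are placeholders.
import itertools
import subprocess
import time

POOL = ["--url", "http://pool.example.com:8332", "--user", "worker", "--pass", "x"]
SHARES = "10000"
kernels = ["poclbm", "phatk"]
vectors = ["1", "2", "4"]
worksizes = ["64", "128", "256"]

results = []
for kernel, vec, ws in itertools.product(kernels, vectors, worksizes):
    cmd = ["cgminer", "--text-only", "--shares", SHARES, "--kernel", kernel,
           "--vectors", vec, "--worksize", ws] + POOL
    start = time.time()
    subprocess.call(cmd)  # cgminer exits on its own after SHARES accepted shares
    results.append((kernel, vec, ws, time.time() - start))

# Fastest combination first
for kernel, vec, ws, secs in sorted(results, key=lambda r: r[3]):
    print("{} vectors={} worksize={}: {:.0f} s for {} shares".format(
        kernel, vec, ws, secs, SHARES))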
|
|
|
Phatk will almost surely be faster than poclbm on old hardware AFAIK. Why don't you do some benchmarking and post the results here?
|
|
|
Don't forget to factor mining-induced MTBF reduction (i.e. faster hardware wear) into your profitability calculations.
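A back-of-envelope illustration of what I mean; every number below is made up, plug in your own:

# Hypothetical profitability sketch: hardware wear treated as a daily cost.
card_price = 200.0          # USD paid for the card
mining_life_days = 2 * 365  # assumed (shortened) lifetime under 24/7 mining load
revenue_per_day = 2.50      # USD worth of coins mined per day
power_per_day = 0.80        # USD of electricity per day

wear_per_day = card_price / mining_life_days              # ~0.27 USD/day
profit = revenue_per_day - power_per_day - wear_per_day   # ~1.43 USD/day
print("net profit per day: {:.2f} USD".format(profit))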
|
|
|
I back up wallets with 2 bitcoins in them... and I am a lazy guy. I don't understand you guys that have fairly large wallets and don't make a backup of one freaking file! Of course, I bet you will from now on. I guess some people just have to learn the hard way. I couldn't agree more. You never bothered to drag a file worth $1500 over to your flash drive? Sounds like you need to be lashed into shape, soldier... Was that wallet encrypted? If it wasn't, you can use a whole range of tools including hex editors and data carving tools. If it was, the hex editors will be of no avail. Also, before you actually do anything with that drive, it would be prudent to make a perfect bit-by-bit image of your Windows partition on another drive and carve data off that image. Do the words "Linux", "dd" and "foremost" ring a bell?
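For illustration only (device and path names here are assumptions, adapt them to your setup): image the Windows partition first with something like
dd if=/dev/sda2 of=/mnt/rescue/windows.img bs=4M conv=noerror,sync
and then carve files out of the image rather than the live disk, e.g.
foremost -i /mnt/rescue/windows.img -o /mnt/rescue/carved
As far as I know foremost's stock signatures don't include wallet.dat, so you may need a custom foremost.conf entry or a hex search for known wallet markers.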
|
|
|
Perhaps an uninformed question, as I'm not doing Windows mining: can't you just request 330 MHz in cgminer? Does the card refuse to set its memory that low?
Works fine here. Though my dedicated miners run Linux, my main PC is also used for mining and runs both Windows and Linux. I have no problem letting cgminer set the memory clock to 300 on either OS. If it helps, on Windows I don't use Catalyst other than for enabling Overdrive (it still shows default clocks, btw). I don't use any of the other Trixx/whatever apps. Just wait until Catalyst has fully loaded before launching cgminer, and it will set 300 MHz with no problems. I was trying to help cypherdoc with that post. That's what I get for failing to specify my target... There, fixed.
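For reference, the relevant lines in a cgminer config file would look something like this (the engine value is only an example, and option names may vary slightly between versions): "gpu-engine" : "850", "gpu-memclock" : "300"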
|
|
|
The default value of 95°C for temp-cutoff is only good for determining the cutoff temperature in case the user failed to specify one. You still need auto-gpu for the temp-cutoff check to be enforced.
Let me reiterate: if you want any thermal throttling in cgminer, auto-gpu must be defined.
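In config-file terms that means something like this (temperatures are only example values): "auto-gpu" : true, "temp-target" : "70", "temp-overheat" : "80", "temp-cutoff" : "85". Leave out the "auto-gpu" : true part and, per the above, the cutoff will not be enforced.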
|
|
|
Remember that SolidCoin is trying to compete. Since SolidCoin has evolved more slowly than Bitcoin due to a much smaller developer base, what other weapons remain to them? They have very little traction, so an occasional low blow is nothing to marvel at. A bit of FUD should hardly come as a surprise. Colluding with the feds is one way of looking at the CIA talk. Another one could be the CIA "paying spooks" with bitcoin. Is there anything wrong in lecturing "the spooks" about Bitcoin? The CIA is very much interested in bleeding-edge tech. Do you think RealSolid would have done a better job explaining Bitcoin to them? Let's take a glance at the other "objective" information RealSolid provides:
(1) Bitomat failure: it was a Polish bitcoin exchange that capsized spectacularly. How do an exchange's issues pertain to Bitcoin?
(2) MtGox being hacked: ditto. Do you suppose that a SolidCoin exchange can't possibly be attacked due to some "adversary-repelling aura" built into the protocol?
(3) MyBitcoin hurt. It hurt a lot. Bitcoin (and every other cryptocurrency) was young back then, and the problem of entrusting private keys to a third party hadn't been so obvious. Since the MyBitcoin fiasco, not a single service has been built using this flawed approach. Don't release control over your private keys == problem solved.
As it turns out, there's nothing Bitcoin-specific at the protocol level. These are all third-party issues ANY cryptocurrency has to deal with. "SolidCoin: Ready for Bitcoin collapse"... waiting for that pesky Bitcoin to just explode out of the blue... waiting... waiting some more... still waiting, forlorn and forgotten...
|
|
|
How would you get yourself down to 330 in Windows?
Perhaps an uninformed question, as I'm not doing Windows mining: can't you just request 330 MHz in cgminer? Does the card refuse to set memory that low? If cgminer fails on Windows, I'd try Afterburner/Trixx. Something's bound to work.
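On the command line that would be something along the lines of cgminer --gpu-memclock 330 (plus your usual pool options); exact flag names may differ between cgminer versions.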
|
|
|
Litecoin, on the other hand, emphasizes CPU mining through its implementation of scrypt. Scrypt uses the low-latency cache memory of CPUs to provide greater hashing speeds on CPUs compared to GPUs (which we use for Bitcoin mining). The developers meant Litecoin to supplement Bitcoin as 'silver is meant to supplement gold'. Read: https://bitcointalk.org/index.php?topic=47417.0
It's inconsequential that Litecoin is CPU rather than GPU mined. It serves the same function as Bitcoin and hence is in direct competition with it. Sadly, Litecoin has no ace up its sleeve, not a single feature implemented which Bitcoin would lack. From a merchant's perspective, there is no advantage in going with Litecoin.
|
|
|
You are aware that SolidCoin is not a decentralized currency, right? Not only is RealSolid able to shut the network down completely at his slightest whim, he has done it before (SolidCoin 1.0). Would you actually pump money into a project which one person has total control over? What if RealSolid's system is compromised and an adversary gains control over the chain?
I won't even go into the numerous other technicalities (police nodes, mandatory tax) or the fact that SolidCoin is allegedly violating Oracle's software licence. Hard as it might be to believe, SolidCoin has even managed to violate Bitcoin's licence, which merely required attribution... that's the reason for the last DMCA takedown.
It's a no-contest actually.
Litecoin? Yet another alternative cryptocurrency created by grabbing the source code and changing the project name. DOA, with an exchange rate of some 2000 LTC per 1 BTC. I wouldn't bother.
|
|
|
Go for the 1GB 6950s, not the 2GB. You are pouring money down the drain and gain absolutely nothing by going with the 2GB versions.
|
|
|
Nicely done. I see you're staying true to your DIY spirit...
|
|
|
Yeah, I'd love to see that option as well. ABCPool's suspiciously high invalid rates always bothered me...
|
|
|
I already told you to measure it over at least 24 hours.
|
|
|
Options used:"auto-gpu" : true, "gpu-engine" : "942", "auto-fan" : true, "gpu-fan" : "52-57", "temp-target" : "59", "temp-overheat" : "63", "temp-cutoff" : "79,67",
But in your example above the card will start lowering the clock at the temp-target temp (59°C), not wait until the temp-overheat temp (63°C). Likely by the time you reached 63°C the card is already at the minimum clock speed anyway. Looking at the C code, it is now abundantly clear that core throttling begins at temp-target + hysteresis. I just happened to have had hysteresis at 4, therefore temp-target + hysteresis = temp-overheat. Bad example values. I guess I need to update THAT post once again... for great justice.

/* From cgminer's ADL code: once the card is at its top performance level and
   the temperature exceeds target + hysteresis, step the engine clock down. */
else if ((ga->lpActivity.iCurrentPerformanceLevel == ga->lpOdParameters.iNumberOfPerformanceLevels - 1) &&
         (temp > ga->targettemp + opt_hysteresis && engine > ga->minspeed && fan_optimal)) {
    if (opt_debug)
        applog(LOG_DEBUG, "Temperature %d degrees over target, decreasing clock speed", opt_hysteresis);
    newengine = engine - ga->lpOdParameters.sEngineClock.iStep;
}
|
|
|
Utility is the number of solved non-stale shares per minute. It's a slightly better indicator than MHash/s, although at the cost of higher variation. Run cgminer for a day or so to even out the variation.
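As a rough sanity check (back-of-envelope, assuming difficulty-1 shares): expected utility ≈ hashrate in hashes per second × 60 / 2^32. A 400 MH/s card, for example, should average about 400e6 × 60 / 2^32 ≈ 5.6 shares per minute over a long enough run.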
|
|
|
"auto-gpu" : true, "gpu-engine" : "942", "auto-fan" : true - the card will throttle itself down to its stock speed on exceeding temp-overheat, mining threads will be disabled on exceeding "temp-cutoff". The fan will speed up to 100% on temp-overheat.
I don't think this is right. I use auto-gpu and auto-fan on a rig, and if the fan is at 85% the GPU is throttled at temp-target (75 default) + temp-hysteresis (3 default). I think temp-overheat is just used to move the fan to 100% after other options fail. If that doesn't work, temp-cutoff is the failsafe. EDIT: You were correct. I used poorly-chosen example temperatures, as my hysteresis had been set to 4 all the time: 59 + 4 = 63 = temp-overheat. Both the core clock and fan speed are being changed at the same time:
[2012-02-02 20:47:46] Overheat detected on GPU 1, increasing fan to 100%
[2012-02-02 20:47:46] Overheat detected, decreasing GPU 1 clock speed
Options used:"auto-gpu" : true, "gpu-engine" : "942", "auto-fan" : true, "gpu-fan" : "52-57", "temp-target" : "59", "temp-overheat" : "63", "temp-cutoff" : "79,67",
|
|
|
|