Bear in mind that Mybitcoin.com is a small business with few people to answer messages, and a large and growing userbase. Sometimes these things take time. Plus it's the weekend and the middle of the night in New Zealand.
I've been trying to be patient, but it's now almost three days since I sent my last message. Oh well, hopefully the service has not gone rogue and I will get the BTC back.
|
|
|
I tried to purchase Bitbills through the MyBitcoin merchant service. I followed the instructions precisely, but got a message saying "Payment failed". The BTC was sent, the merchant didn't get the BTC or a confirmation, and MyBitcoin doesn't reply to support messages.
Has anyone else had similar problems with MyBitcoin? Did you get the dispute resolved? How? Does anyone know who the owner of MyBitcoin is here in the forums?
Given my experience, I'd recommend that others avoid using the payment gateway.
|
|
|
Tried to order Bitbills, but MyBitcoin told me the payment had failed. I paid the exact amount to the exact address. I've tried to use the contact form on the MyBitcoin site without any answer.
Does the payment gateway just suck, or is this a scam? Maybe llama could consider using other payment systems (I've heard that others have had problems with MyBitcoin too, but thought it was rare).
|
|
|
I just lost a HDD and am writing this from a Mac, from memory. I did something similar to this:
- Create a shortcut with all your mining options (username, password, etc.), targeting one GPU (usually with the DEVICE parameter).
- Modify the shortcut so it is wrapped inside cmd.exe. Note that start treats the first quoted argument as a window title, so pass an empty title first:
  cmd.exe /C start "" /AFFINITY 2 your_miner.exe -plus -all -the -cool -options
- Test that the shortcut works.
- Copy & clone this shortcut into the Startup folder, one for each GPU, varying the DEVICE parameter.
- Reboot.
The trick with /AFFINITY is that it sets the process to only use one CPU core; the value is a hex affinity mask, so 2 means the second core in this case. On Windows 7 64-bit this helps you save power, since the poclbm miner uses 100% of the computing power of a single CPU core for every GPU core.
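As a minimal sketch, the Startup shortcut targets for a two-GPU rig would look something like this (your_miner.exe and its options are placeholders, not a real miner invocation; both processes get pinned to the same core via the /AFFINITY 2 hex mask):

```bat
:: Shortcut target for GPU 0 -- the empty "" is the required window title
cmd.exe /C start "" /AFFINITY 2 your_miner.exe --device=0 <other options>
:: Shortcut target for GPU 1 -- same affinity mask, different DEVICE
cmd.exe /C start "" /AFFINITY 2 your_miner.exe --device=1 <other options>
```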
HTH
|
|
|
CPU miners will never find a single share before the total share count is near 5k shares or so.
Why? This is not correct - everybody has the same chance to hit the first share, proportionally to his hashrate of course. Just out of curiosity: 7 of the first 8 shares were submitted by normal users with decent GPUs (I know some of the users, so I can tell), not by users with strong rigs. Of course the probability that a CPU user with 1 Mhash/s hits one of the first 8 shares is very low, but don't forget that his worker is 200-300x slower. In these cases, you could use the share ratio of the last round they did have time to actually find some shares for.
Sorry, I still don't see why, or why the limit should be 5k shares and not 3k or 10k. Don't confuse fairness and luck; this artificial limit is definitely less _fair_, because it favors unlucky workers.

Fair points, maybe the idea I threw out wasn't quite up to the task. However, since I have an endless supply of ideas, here's another one: what if the rewards were calculated only once per day? Every user would get a reward of

total_btc_for_given_day * their_shares_for_given_day / total_shares_for_given_day

Would that make more sense? What other downsides would this have, except slightly slower feedback on rewards for users? (Although you could still show a daily estimate.)
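The once-per-day payout rule above can be sketched in a few lines of Python (all names here are made up for illustration; this is not any pool's actual code):

```python
def daily_rewards(total_btc, shares_by_user):
    """Split a day's earnings proportionally to shares submitted that day.

    total_btc      -- total BTC the pool earned during the day
    shares_by_user -- dict of user -> shares submitted that day
    """
    total_shares = sum(shares_by_user.values())
    if total_shares == 0:
        # Nobody mined; nobody gets anything.
        return {user: 0.0 for user in shares_by_user}
    return {
        user: total_btc * shares / total_shares
        for user, shares in shares_by_user.items()
    }

# Example day: the pool earned 100 BTC (two 50 BTC blocks).
rewards = daily_rewards(100.0, {"gpu_rig": 4000, "cpu_miner": 20})
print(rewards)  # gpu_rig gets ~99.5 BTC, cpu_miner ~0.5 BTC
```

Per-round luck averages out over the day, which is exactly why feedback gets slower: a user only learns their reward once the day's totals are in.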
|
|
|
I want to make it so difficult for authorities to separate darknet traffic from legitimate traffic that they would have to render their networks almost useless to block it.
I've had this idea of an "Outernet" in my head. Basically a whole new network that has nothing to do with the Internet or our current IPv4 address space. So a route from A to Z could first go through the Internet for hops A-H, then jump out to the Outernet, take a few hops there, and later return to the Internet to continue on its way towards node Z. The Outernet could be protocol/hardware agnostic. It could use something like WiFi/WiMAX/hobbyist radio frequencies to operate. I will need to think about this more; sorry for any bad terms - I'm not a networking guy.
|
|
|
Next, why should anybody who didn't contribute to the round receive any reward? It would be a lot fairer that way. CPU miners will never find a single share before the total share count is near 5k shares or so. In these cases, you could use the share ratio of the last round they did have time to actually find some shares for. The idea is not fully finished yet... but I guess you get the point. If the block had not been invalid, only one or two guys would have gotten rewards even though everyone had started to hash for it.
|
|
|
#:            2538
Block found:  2011-03-29 04:57:49
Duration:     0:00:02
Total shares: 8
Your shares:  8
Reward:       none
Block #:      115552
Validity:     93 confirmations left

Now that doesn't seem too fair. How could this be "fixed" to be more fair with the score-based system? Maybe if total shares < 1000, use the ratio from the last round?

EDIT: Ah, so it was a glitch/cheat attempt?
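The "if total shares < 1000, use the ratio from the last round" fallback could look roughly like this (the 1000-share threshold and all names are illustrative only, not an actual pool implementation):

```python
# Hypothetical sketch: when a round ends with very few total shares,
# fall back to each user's share ratio from the previous full round.
MIN_SHARES_FOR_OWN_RATIO = 1000

def payout_ratios(current_round, previous_round):
    """Each argument: dict of user -> shares submitted in that round."""
    total = sum(current_round.values())
    # Tiny rounds (like the 8-share, 2-second one above) don't reflect
    # real contribution, so use the previous round's ratios instead.
    basis = current_round if total >= MIN_SHARES_FOR_OWN_RATIO else previous_round
    basis_total = sum(basis.values())
    if basis_total == 0:
        return {}
    return {user: shares / basis_total for user, shares in basis.items()}

# A 2-second round with only 8 shares falls back to the previous round:
print(payout_ratios({"lucky": 8}, {"lucky": 500, "others": 4500}))
# -> {'lucky': 0.1, 'others': 0.9}
```

The downside noted earlier still applies: this favors whoever did well in the previous round, so it trades one kind of unfairness for another.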
|
|
|
I've read about Windows guys having the ability to under-clock memory down to 300 MHz on the HD 5xxx cards. Is there a tool to do this in Linux? aticonfig seems to be getting its minimum memory clock speeds from somewhere (the GPU BIOS?) - how do the Windows clocking tools get around that?
E.g.: $ aticonfig --odgc --adapter=all

Adapter 0 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    725           1000
             Current Peak :    725           1000
  Configurable Peak Range : [550-1000]    [1000-1500]
                 GPU load :    99%

Adapter 1 - ATI Radeon HD 5900 Series
                            Core (MHz)    Memory (MHz)
           Current Clocks :    725           1000
             Current Peak :    725           1000
  Configurable Peak Range : [550-1000]    [1000-1500]
                 GPU load :    98%
indicates the minimum mem. clock speed is 1000 MHz, configurable range [1000-1500].
Is there a --pplib-cmd that will do this, perhaps? (Related: does someone have a list of the --pplib-cmd options?)
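For reference, a sketch of the standard aticonfig overdrive commands (the clock values are just examples; presumably anything below the BIOS-reported floor in the Configurable Peak Range gets rejected, which is exactly the problem here):

```shell
# Overdrive must be enabled before clocks can be changed
aticonfig --od-enable

# Query current/peak clocks and the configurable range (as above)
aticonfig --odgc --adapter=all

# Try to set core,mem clocks on adapter 0 -- a memory value below
# the reported 1000 MHz floor will presumably be refused
aticonfig --od-setclocks=550,1000 --adapter=0
```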
Now that you mention it: what are the benefits of this? Anything else besides slightly lower wattage and temps?
|
|
|
I accidentally stumbled across CAL++ as well. Sounds like a sweet deal. Unfortunately, I couldn't easily build pyrit under Windows, so I have to give up on the idea for now. And I'm not familiar with C at all.
An 80% improvement sounds awesome, and it just might work. Even a 10-20% improvement to hashing would be major. I think you could get quite a few donations for porting poclbm to use CAL++ instead of the OpenCL wrapper.
I hope someone will look into this!
|
|
|
I just spent 2 hours trying to solve the issue by installing different versions of both the Catalyst drivers and the Stream SDK - nothing helped. Combinations I tried: 10.09 + 2.1 SDK, 10.10 + the included OpenCL runtime, 10.10 + 2.1 SDK, 11.2 + 2.1 SDK, 11.2 + 2.3 SDK, 11.4rc2 + 2.3 SDK... and maybe some others. Every combination still used 100% of a CPU core for each GPU core. Hope this saves some time for others. Any ideas on how to fix the issue are also welcome.

Uninstall every ATI driver & SDK using an uninstaller like Revo Uninstaller. Also, never install the SDK from the standalone SDK package; instead install it from the driver package. If you have a 5000-series card, install 10.12 & the 2.1 SDK. If you have a 6000-series card, install 11.2 & 2.3 APP. You haven't mentioned what card you have, what OS, or how many Mhash/s you are getting. Instead of going for the express install, go for the custom install and install only these four: ATI display driver, ATI Catalyst install manager, APP 2.3 or SDK 2.1, & VC++.

Thank you for the response. I tried this, but still no luck. I'm going to try some wild stuff next, like removing some AMD-specific drivers. Hardware specs: AMD 880G (3x PCIe slots for GPUs), AMD Phenom II x4 BE550, 1800 MHz DDR3, 1x Radeon HD 5970 (about to get another one soon), and I'm running Win7 64-bit. I'll post if I figure something out. I need to fix this before I get the other card, or this will use 100% of all my CPU cores.
|
|
|
I'm also on Win7 64bit and seeing the same issue. Newest Catalyst drivers and Stream SDK.
Must have something to do with 64-bit Windows. IIRC, I tried this on 32-bit Win7 and didn't have such problems. Maybe someone else could confirm whether they're seeing this issue on 32-bit Win7.
Luckily I have 4 cores... But the rig is still consuming almost 100 W more than it should :--/
|
|
|