Looking at the bitcoin block chain on blockchain.info, I was wondering what happened to block #1551. Have a look here: http://blockchain.info/block-index/1551

You'll see that this is block #1550; I've just addressed it by the block-index number in the URL, and they follow one another. However, the address for the next block is: http://blockchain.info/block-index/3104

Why does block #1551 have block-index 3104 and not 1552? The block-index had incremented by 1 along with the block number from the start, but where I was expecting an increment of 1 from 1551 to 1552, it jumped by 1553 from 1551 to 3104. Is this just a peculiarity of blockchain.info's URLs? I notice that 1552 + 1552 = 3104, but that doesn't mean anything to me.

MOD EDIT: Added (blockchain.info) to the title to make this less confusing.
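For what it's worth, the jump can be checked with a couple of lines of arithmetic (just the numbers from the URLs above; the block-index values themselves are blockchain.info's internal row IDs, so this only illustrates the pattern, not its cause):

```python
# block-index values observed in the blockchain.info URLs
index_for_block_1550 = 1551
index_for_block_1551 = 3104

# the unexpected jump between two consecutive blocks
jump = index_for_block_1551 - index_for_block_1550
print(jump)  # 1553

# the curious doubling: 3104 is exactly twice the expected index 1552
print(index_for_block_1551 == 2 * (index_for_block_1550 + 1))  # True
```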
|
|
|
Mechanizm, yeah, could be that guy, the numbers are close enough.
By the way, I'd never buy a rig like that today. I saw from the comments on YouTube that he'd already got his rig paid for, which is great, but I don't think you'd ever get a rig like that covered at the speed difficulty is increasing now. I haven't done the math, but I doubt it's worth it.
|
|
|
Of course, there are probably 50 ways to "even out the graph" if you have lots of servers available. But I still think this is one rig rather than someone shaping the curve of their pooled uplink. That would just mean they aren't getting the peak of their traffic, which doesn't sound like a great idea. Why bother?
|
|
|
I figure it's a single rig, because it seems to be mining at a pretty stable rate. If it was many people, I'd imagine the rate would vary more.
|
|
|
Brilliant, that worked! Thanks! Well, it wasn't enough to just connect a second screen; I actually had to modify the ServerLayout in my xorg.conf so that both screens are active. Here's my current xorg.conf. I'm not sure all of this is necessary, but at least it works for me now. Getting about 340 Mhash/s on each card, not overclocked.

Section "Module"
    Load "dri"
    Load "glx"
EndSection

Section "ServerLayout"
    Identifier "Standard config"
    Screen 0 "screen0" 0 0
    Screen "screen1" LeftOf "screen0"
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
    Identifier "monitor0"
    Option "VendorName" "Fujitsu-Siemens"
    Option "ModelName" "SCALEOVIEW T19-2"
EndSection

Section "Monitor"
    Identifier "monitor1"
    Option "VendorName" "Vendor"
    Option "ModelName" "Model"
EndSection

Section "Device"
    Identifier "radeon0"
    Driver "fglrx"
    BusID "PCI:9:0:0"
EndSection

Section "Device"
    Identifier "radeon1"
    Driver "fglrx"
    BusID "PCI:4:0:0"
EndSection

Section "Screen"
    Identifier "screen0"
    Device "radeon0"
    Monitor "monitor0"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "screen1"
    Device "radeon1"
    Monitor "monitor1"
    DefaultDepth 24
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection
Folax, thanks, I'll check out LinuxCoin. Would be great to avoid multiple screens and dummy plugs.
|
|
|
Hi, I have a computer with 2 ATI Radeon HD 5870 cards. I have got things up and running, but OpenCL can only see one GPU, not both. I've installed Ubuntu 11.04 Natty Narwhal, and I've installed the fglrx driver (8.840) and the AMD APP SDK (2.4), built pyopencl, and have successfully run poclbm on one GPU, and also on the CPU (though jgarzik's cpuminer was a lot faster on the CPU using sse2_64). Anyone have any idea what I need to do to get both cards mining? lspci shows both cards:

04:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress)
09:00.0 VGA compatible controller: ATI Technologies Inc Radeon HD 5870 (Cypress)
I've configured both cards in xorg.conf:

Section "Device"
    Identifier "radeon0"
    Driver "fglrx"
    BusID "PCI:9:0:0"
EndSection

Section "Device"
    Identifier "radeon1"
    Driver "fglrx"
    BusID "PCI:4:0:0"
EndSection
and X starts up fine, but clinfo (bundled with the AMD APP SDK) only shows one GPU plus the CPU:

Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 1.1 AMD-APP-SDK-v2.4 (595.10)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 2

Device Type: CL_DEVICE_TYPE_GPU
Device ID: 4098
Max compute units: 20
Max work items dimensions: 3
  Max work items[0]: 256
  Max work items[1]: 256
  Max work items[2]: 256
Max work group size: 256
Preferred vector width char: 16
Preferred vector width short: 8
Preferred vector width int: 4
Preferred vector width long: 2
Preferred vector width float: 4
Preferred vector width double: 0
Native vector width char: 16
Native vector width short: 8
Native vector width int: 4
Native vector width long: 2
Native vector width float: 4
Native vector width double: 0
Max clock frequency: 850Mhz
Address bits: 32
Max memory allocation: 134217728
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 8192
Max image 2D height: 8192
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 1024
Alignment (bits) of base address: 32768
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
  Denorms: No
  Quiet NaNs: Yes
  Round to nearest even: Yes
  Round to zero: Yes
  Round to +ve and infinity: Yes
  IEEE754-2008 fused multiply-add: Yes
Cache type: None
Cache line size: 0
Cache size: 0
Global memory size: 536870912
Constant buffer size: 65536
Max number of constant args: 8
Local memory type: Scratchpad
Local memory size: 32768
Kernel Preferred work group size multiple: 64
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
  Execute OpenCL kernels: Yes
  Execute native function: No
Queue properties:
  Out-of-Order: No
  Profiling : Yes
Platform ID: 0x7f1b34e26800
Name: Cypress
Vendor: Advanced Micro Devices, Inc.
Driver version: CAL 1.4.1353
Profile: FULL_PROFILE
Version: OpenCL 1.1 AMD-APP-SDK-v2.4 (595.10)
Extensions: cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_amd_device_attribute_query cl_amd_printf cl_amd_media_ops cl_amd_popcnt
Device Type: CL_DEVICE_TYPE_CPU
Device ID: 4098
Max compute units: 6
Max work items dimensions: 3
  Max work items[0]: 1024
  Max work items[1]: 1024
  Max work items[2]: 1024
Max work group size: 1024
Preferred vector width char: 16
Preferred vector width short: 8
Preferred vector width int: 4
Preferred vector width long: 2
Preferred vector width float: 4
Preferred vector width double: 0
Native vector width char: 16
Native vector width short: 8
Native vector width int: 4
Native vector width long: 2
Native vector width float: 4
Native vector width double: 0
Max clock frequency: 800Mhz
Address bits: 64
Max memory allocation: 2147483648
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 8192
Max image 2D height: 8192
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 4096
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
  Denorms: Yes
  Quiet NaNs: Yes
  Round to nearest even: Yes
  Round to zero: Yes
  Round to +ve and infinity: Yes
  IEEE754-2008 fused multiply-add: No
Cache type: Read/Write
Cache line size: 64
Cache size: 65536
Global memory size: 8388317184
Constant buffer size: 65536
Max number of constant args: 8
Local memory type: Global
Local memory size: 32768
Kernel Preferred work group size multiple: 1
Error correction support: 0
Unified memory for Host and Device: 1
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
  Execute OpenCL kernels: Yes
  Execute native function: Yes
Queue properties:
  Out-of-Order: No
  Profiling : Yes
Platform ID: 0x7f1b34e26800
Name: AMD Phenom(tm) II X6 1055T Processor
Vendor: AuthenticAMD
Driver version: 2.0
Profile: FULL_PROFILE
Version: OpenCL 1.1 AMD-APP-SDK-v2.4 (595.10)
Extensions: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_media_ops cl_amd_popcnt cl_amd_printf
|
|
|
This worked for me in a similar situation just now, but I couldn't find any pointer to an updated list of IP addresses to use with the --addnode option. I found this list, and I guess it's the right one to use: https://en.bitcoin.it/wiki/Fallback_Nodes
|
|
|
I've been mining for deepbit.net, but wanted to withdraw my balance because I've switched. I only had small change left: 0.04230547 bitcoins. However, when I requested payment, I only got 0.04 bitcoins, and deepbit.net keeps the remaining 0.00230547? That may not be a lot of money, but for every user who quits they keep up to 0.009999 BTC. I guess that's on top of their 3% fee, so in reality it's 3% plus 0.005 BTC on average. For me, though, that 0.005 BTC is more than 3% of what I've put in...

Or is there a way for me to get it? Do I have to mine until I have exactly 1.00 bitcoins or something like that before I receive everything?
Those sub-bitcents are never taken by the pool; they are still in your account. Of course I'm not sending full-precision sums, because that would cause problems for users with older clients, and then they'd ask me why their money is lost. So it's better to send only amounts >= 0.01 BTC, at least until most clients have upgraded to the official bitcoin client, which doesn't discard < 0.01 change. You can mine 0.00769453 BTC more and then withdraw 0.01 BTC.

Thanks for explaining! I have a fairly new client, but I don't know what it supports. Maybe sending sub-bitcent values could be an option (off by default)? I'm not sure how I'd mine exactly 0.00769453 BTC; I'd have to keep an eye on the mining and stop it at exactly the right moment, or otherwise just let it pass, I guess.
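The numbers above work out exactly; here's a quick check using Python's Decimal (worth using for BTC amounts, since binary floats can't represent values like 0.04230547 exactly):

```python
from decimal import Decimal

balance = Decimal("0.04230547")   # account balance before withdrawal
paid = Decimal("0.04")            # amount actually sent by the pool

# the sub-bitcent remainder left behind in the account
remainder = balance - paid
print(remainder)                  # 0.00230547

# how much more to mine before the remainder reaches the 0.01 BTC payout floor
to_mine = Decimal("0.01") - remainder
print(to_mine)                    # 0.00769453
```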
|
|
|
|
|
100 FPGAs @ 100 MHz (one hash per clock) = 10 Ghash/s in one computer case? I would gladly pay for that. My biggest constraint right now is physical space, followed distantly by the amount of electricity my circuit breaker can route my way.
Given that these numbers are right: do you think it would be worth it? If you're willing to do this, then probably a few others are too. If you buy a rig like this for US$50,000 (I don't know the price, just assuming something), you'd need to generate ~7142 bitcoins at a value of US$7 each before it pays for itself, or about 143 blocks. Remember that the network only hands out the 50 BTC reward for the first 210,000 blocks (until around 2013), so difficulty will increase while you're doing this. The question is: will it pay off? Will you actually be able to use this rig to get those 143 blocks?

Maybe smaller investments are the way to go for bitcoins (unless you already have some fancy hardware): buy something, generate a couple of blocks on it, then buy some newer, better hardware and generate a few blocks on that, and so on. Heavy investments will probably be overtaken by bitcoin's limitations and by technology before they've paid off. If so, what would be the right level of investment? If someone works on it, I guess it should be possible to find an optimal price vs. block-performance ratio that could be a buyer's guide for getting into bitcoins. Unless you want to gamble that bitcoin prices will increase a lot, of course. You're already gambling that they'll stay at current levels.
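The payback figures follow from simple division (the US$50,000 rig price and US$7/BTC rate are the same assumed numbers as above, not real quotes):

```python
rig_cost_usd = 50000      # assumed price of the FPGA rig
btc_price_usd = 7         # assumed exchange rate
block_reward_btc = 50     # current reward per solved block

# bitcoins needed to break even, and the blocks that implies
btc_needed = rig_cost_usd / btc_price_usd
blocks_needed = btc_needed / block_reward_btc

print(int(btc_needed))      # 7142
print(round(blocks_needed)) # 143
```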
|
|
|
OK, it's approximately valid for more than a week, I guess. Sorry, I'm just a pedant sometimes. But judging from the FPGA thread, something might happen to difficulty soon, and on a much longer horizon there'll be an overnight doubling of that ceiling in 2013, when bitcoins-per-block falls from 50 to 25, and again in 2017, etc.

As an indicator of how much you definitely should NOT be paying for a bitcoin, it could be a good number. But it's actually very inflated: while the Amazon EC2 Cluster GPU instance is (probably) good for scientific work, it has a bad price/performance ratio for generating bitcoins, because the hardware is very suboptimal for this task. If Amazon's GPU instances had two ATI Radeon 5970s mining at maybe 700 Mhash/s each (instead of 70 Mhash/s for each Tesla card), then ErMurazor would have got approximately 3.8 BTC instead of 0.38 BTC for his $20. Unless I've made a mistake, that makes the calculation look like: 20/3.8 = 5.26 USD/BTC.

Of course, this assumes that someone operating at the same efficiency as Amazon EC2 actually starts setting up servers suitable for mining at similar prices. That's not happening for a while... And it probably means that people selling bitcoins on Mt. Gox even as high as $8.50 aren't making a fortune, unless they generated those bitcoins when it was a lot easier.
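Both price points can be reproduced directly (0.38 BTC for $20 is ErMurazor's reported figure; the 3.8 BTC case is the hypothetical roughly-10x-faster AMD hardware):

```python
spend_usd = 20.0

# observed: ErMurazor got 0.38 BTC for $20 of EC2 GPU-instance time
ceiling_usd_per_btc = round(spend_usd / 0.38, 2)
print(ceiling_usd_per_btc)  # 52.63

# hypothetical: two Radeon 5970s at ~700 Mhash/s each instead of the Teslas,
# i.e. roughly 10x the hashrate, so ~3.8 BTC for the same $20
amd_usd_per_btc = round(spend_usd / 3.8, 2)
print(amd_usd_per_btc)      # 5.26
```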
|
|
|
Congratulations! You've just discovered a ceiling above which price of bitcoin will never rise (at current difficulty). 20/0.38 = 52.63 USD/BTC.
Never at current difficulty, so never this week?
|
|
|
OK, sorry I missed those, I tried browsing the hundreds of messages. Also, it suddenly works, but I'll look more closely.
|
|
|
I'm getting "500 Internal Server Error" on http://coin-control.appspot.com/addr/<address> with the exception:

KeyError('<address>',)

Apparently this happens in ".../coin control.py", line 33, in getStats:

if balance[address]:

I've just started mining. Does it take time for my address to appear here?
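For what it's worth, the KeyError means the app indexes balance[address] for an address it hasn't seen yet. A defensive lookup along these lines would avoid the 500 (a sketch only: get_stats and the balance dict are hypothetical stand-ins for the app's internals):

```python
def get_stats(balance, address):
    """Return stats for an address, or None if it isn't known yet.

    `balance` is assumed to be a dict mapping address -> amount;
    dict.get avoids the KeyError that crashes the page.
    """
    amount = balance.get(address)
    if amount is None:
        return None  # caller can render "address not seen yet" instead of a 500
    return {"address": address, "balance": amount}

print(get_stats({"1abc": 5}, "1abc"))  # {'address': '1abc', 'balance': 5}
print(get_stats({"1abc": 5}, "1xyz"))  # None
```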
|
|
|
Bitcoin and EC2 aren't comparable. With EC2 you can rent virtual Windows or Linux servers for whatever use you like (with root access).
Perhaps it could be possible to change the type of work done by bitcoin miners to actually be something useful, rather than just solving a bunch of hashes. I don't know anything about how easy/hard that would be to implement, or what that could mean for security or anything, so I don't know if it's a good idea.
|
|
|
It's not so bad, but it depends on what you want to use it for. The Tesla GPU cards support double-precision floating point operations, which can be useful for scientific work, but that doesn't help when mining bitcoins. And the Tesla doesn't have a much higher rating when it comes to raw GFLOPS.
|
|
|
Find a pool you want to join. New pools are posted here regularly, and there's also an article on the wiki listing a few: https://en.bitcoin.it/wiki/Pooled_mining

When you've joined a pool you'll receive a username, password, hostname and port number. Use the standard bitcoin program something like this to do work:

bitcoin --get --rpcuser=<user> --rpcpassword=<password> --rpcconnect=<hostname> --rpcport=<port>

or get poclbm or cpuminer or some other client that may be able to work more efficiently.
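Under the hood, the miner talks JSON-RPC over HTTP to the pool. A sketch of how the getwork request is framed (request construction only, no network code; the worker credentials are placeholders, and the exact framing may vary by pool):

```python
import base64
import json

def getwork_request(user, password):
    """Build the JSON-RPC body and HTTP headers for a getwork call."""
    body = json.dumps({"method": "getwork", "params": [], "id": 1})
    # pools use HTTP Basic auth with the worker credentials
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    }
    return body, headers

body, headers = getwork_request("worker1", "secret")
print(body)  # {"method": "getwork", "params": [], "id": 1}
```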
|
|
|
I'll post this howto here, because it could be useful as documentation: 1. it could help people configure other (similar) systems, and 2. it documents that it's not worth it: the Amazon servers cost more than the bitcoin earnings.

Anyway, here's what to do to get maximum juice out of one of these servers. They have 2 Tesla C2050 GPU cards and 16 CPU cores. With these instructions, you'll run poclbm on each GPU and jgarzik's cpuminer on all processors, all mining for deepbit.net.

First, start up an Amazon EC2 GPU (cg1.4xlarge) instance with Amazon's own cluster Linux distribution (I used ami-321eed5b). When it's up and running, log on and run the following commands. Cut'n'paste is fine, but remember to fill in your own username/password/pooled-mining server.

# set up path stuff
echo >> $HOME/.bash_profile
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/tools/lib' >> $HOME/.bash_profile
echo 'export PATH=$PATH:$HOME/tools/bin' >> $HOME/.bash_profile
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/tools/lib
export PATH=$PATH:$HOME/tools/bin
cpus=$(cat /proc/cpuinfo | grep ^processor | wc -l)
# initial package config
sudo yum -y groupinstall "Development Tools"
sudo yum -y install git libcurl-devel python-devel screen rsync
# install yasm
git clone git://github.com/yasm/yasm.git
cd yasm
./autogen.sh
./configure --prefix=$HOME/tools
make -j $cpus
make install
cd -
# install and start cpuminer
git clone https://github.com/jgarzik/cpuminer.git
cd cpuminer
./autogen.sh
./configure
make -j $cpus
screen -d -m ./minerd --threads $cpus --algo sse2_64 --url http://deepbit.net:8332/ --userpass YOUR_EMAIL:YOUR_PASSWORD
cd -
# install numpy
git clone git://github.com/numpy/numpy.git numpy
cd numpy
git checkout remotes/origin/maintenance/1.6.x
sudo python setup.py install
cd -
# set up newer nvidia library
wget http://developer.download.nvidia.com/compute/cuda/3_2_prod/drivers/devdriver_3.2_linux_64_260.19.26.run
wget http://developer.download.nvidia.com/compute/cuda/3_2_prod/toolkit/cudatoolkit_3.2.16_linux_64_fedora13.run
sudo mv -v /lib/modules/$(uname -r)/kernel/drivers/video/nvidia.ko /root/

At this point, you need to reboot your server. The easiest way to do that is to run sudo reboot. Log back in after the reboot and continue. You'll need to interact with the NVIDIA installers, so you can't cut'n'paste everything here. You'll also need to edit the siteconf.py file (vi siteconf.py) and make sure it says CL_ENABLE_DEVICE_FISSION = False.

cpus=$(cat /proc/cpuinfo | grep ^processor | wc -l)
# restart cpuminer
cd cpuminer
screen -d -m ./minerd --threads $cpus --algo sse2_64 --url http://deepbit.net:8332/ --userpass YOUR_EMAIL:YOUR_PASSWORD
cd -
sudo bash devdriver_3.2_linux_64_260.19.26.run
sudo bash cudatoolkit_3.2.16_linux_64_fedora13.run
# install pyopencl
git clone http://git.tiker.net/trees/pyopencl.git
cd pyopencl
sudo easy_install Mako
git submodule init
git submodule update
python configure.py --cl-inc-dir=/usr/local/cuda/include --cl-lib-dir=/usr/local/cuda/lib64
vi siteconf.py   # set CL_ENABLE_DEVICE_FISSION = False
sudo make install
cd -
# get poclbm and start it for each device
git clone https://github.com/m0mchil/poclbm.git
cd poclbm/
screen -d -m python poclbm.py -o deepbit.net -p 8332 -u YOUR_EMAIL --pass=YOUR_PASSWORD -v -w 256 --device 0
screen -d -m python poclbm.py -o deepbit.net -p 8332 -u YOUR_EMAIL --pass=YOUR_PASSWORD -v -w 256 --device 1
If you're not familiar with the screen program: it runs your programs on a virtual console. You can attach to them with screen -r <id>, and release them again with the key combo Ctrl-a Ctrl-d. Now you should be running 2 poclbm.py instances and one instance of cpuminer with 16 threads. In my test, each GPU calculated ~75,000 khash/s, while each CPU core did approx 1,400 khash/s, for a grand total of approximately 170 Mhash/s. ...and that's not worth it given an instance price of $2.10 per hour.
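The totals add up (numbers as reported in my test above):

```python
gpus, gpu_khash = 2, 75000        # two Tesla C2050s, ~75,000 khash/s each
cpu_cores, cpu_khash = 16, 1400   # cg1.4xlarge CPU threads, ~1,400 khash/s each

total_khash = gpus * gpu_khash + cpu_cores * cpu_khash
print(total_khash / 1000)  # 172.4 (Mhash/s, i.e. "approximately 170")

# cost per Mhash/s-hour at the $2.10/hour instance price
cost_per_mhash_hour = round(2.10 / (total_khash / 1000), 4)
print(cost_per_mhash_hour)
```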
|
|
|
|