I don't want to log in to a forum I won't use just to see a picture.
|
|
|
2. As long as you use powered USB hubs, you can hook up to 144 devices on each computer USB port. Two USB hubs are not necessary, one would do just fine. The only reason I can think of to use two hubs would be to make it more stable (1 USB hub dies, all asics go down.... but if you have 2, only half of them will.) Just make sure to shop around and read reviews. Some hubs are super cheap.
127 devices per USB host controller, including the hubs themselves. Most 7-port hubs are two 4-port hubs internally, so they count as 2. The maximum depth for chaining, connecting a hub to another hub, is 5 iirc. So connecting a 7-port hub to a 7-port hub to a 7-port hub is possible, but you might have a hard time figuring out which physical port you need to connect to.
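As a rough sketch of the address budget described above (the 2-addresses-per-7-port-hub figure is the assumption from this post, not something every hub shares):

```python
# USB allows 127 addressable devices per host controller, and hubs
# consume addresses too. Assumption from the post: a 7-port hub is
# built from two cascaded 4-port hub chips, so it uses 2 addresses.

USB_MAX_DEVICES = 127

def max_miners(num_7port_hubs):
    """Devices left for miners after the hubs take their addresses."""
    hub_devices = 2 * num_7port_hubs        # 2 addresses per 7-port hub
    ports_available = 7 * num_7port_hubs    # physical ports to plug into
    address_budget = USB_MAX_DEVICES - hub_devices
    return min(ports_available, address_budget)

print(max_miners(2))    # 14: ports run out long before addresses
print(max_miners(20))   # 87: now the 127-address limit bites first
```

With a handful of hubs the physical port count is the limit; only with many chained hubs does the 127-address ceiling matter.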
|
|
|
I'm not sure you really want to clock these things at 2GHz. A normal CPU/GPU has many parts that aren't active at the same time. A bitcoin miner the size of a modern processor, on the same manufacturing process and at the same clock speed, will generate far more heat.
|
|
|
If someone finds some way to reduce the work by, for example, 20 bits, it would be a million times faster. That trick would work for all current technologies: CPU/GPU/FPGA/ASIC. But keep in mind, the trick would be something many cryptographers haven't found yet. So personally I would bet on quantum computers.
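The "million times" figure follows directly from the bit count; a one-liner to check:

```python
# Removing 20 bits of search space shrinks the remaining work
# by a factor of 2**20, which is roughly a million.
speedup = 2 ** 20
print(speedup)   # 1048576
```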
|
|
|
Like what? A netbook? IIRC laptops and netbooks still draw 40W-95W depending on the AC adapter.
What? My netbook draws 12W. It's only got a 35W adapter! Yes, most smaller laptops will have a charger somewhere in the 60-95W range, but that's MAX LOAD! That's like saying my desktop tower with a 1000W CM PSU uses 1000W. A laptop mining on a bunch of FPGAs will not use 95 or even 60W. My Asus Eee can draw more power than the adapter can deliver. Fortunately that only happens when compiling big sources (like a Linux kernel) with multiple make jobs.
|
|
|
Does this have anything to do with GIGAMINING from gigavps? I haven't seen anything about this in the original gigamining thread.
|
|
|
We have 30 hours left to reach the goal of $750,000. If we do not then the project will NOT be funded at all. Even though it can't mine, it's only $99 and way more powerful than a normal PC. Please dig deep and support this project!
A single-threaded application is still much faster on a normal PC than on this 'supercomputer'. A massively parallel application with many identical threads is still much faster on a video card than on this 'supercomputer'. Don't get me wrong, I like the idea of 64 cores on one chip, but I'll skip this one.
|
|
|
Why would the pools disappear? If someone mines with a single GPU now and has 1/10000th of the total network speed, and gets an ASIC and still has 1/10000th of the total network speed, it's still a good idea to use a pool.
All the major pools now won't support an ASIC. There have to be major changes for them to support ASICs, especially at the beginning when ASICs are returning easy work extremely fast. Even with Stratum there are going to be major bandwidth problems in the beginning.

Why is the 1kB/s (regardless of mining speed) a major bandwidth problem?

If there were a period where a block was solved once every 5 seconds (just making this number up) until difficulty caught up, I am guessing there could be a temporary network congestion problem. Mining might only be 1kB/s, but block propagation could take up a large chunk of network bandwidth. You could see p2pool-like problems manifesting on the main Bitcoin network, the major one being collisions of solved blocks that result in many orphans. This wouldn't be a pool issue, though.

5 seconds a block won't be a big problem because it won't generate extra transactions. Each block will be the 80-byte header with a little data for the block reward, at most 1/4kB. There probably won't even be many collisions, because I really doubt a 100,000x increase in speed will come at once from multiple sources. At 5 seconds a block it takes at most 2016 * 5 = 10080 seconds = 2 hours 48 minutes to change the difficulty to 20 seconds a block (the maximum increase is 4x).
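The retarget arithmetic in that last paragraph, spelled out (the 5 s/block figure is the post's made-up example, not a prediction):

```python
# Worst-case time until a difficulty retarget, using the numbers from
# the post: a hypothetical 5 s/block rate, Bitcoin's 2016-block
# retarget interval, and the 4x cap on each adjustment.

RETARGET_BLOCKS = 2016
MAX_ADJUST = 4            # difficulty can at most quadruple per retarget

block_time = 5            # seconds per block during the hash-rate spike
retarget_seconds = RETARGET_BLOCKS * block_time
print(retarget_seconds)   # 10080 s = 2 h 48 min to the next retarget

# After one maximal adjustment, block time climbs from 5 s to 20 s;
# further retargets are needed to get back to the 600 s target.
block_time_after = block_time * MAX_ADJUST
print(block_time_after)   # 20
```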
|
|
|
That would be $200M worth of BFL hardware... Not going to happen, leaving quite a margin for less efficient devices to cover their investments ... at some point.
Are you one of those guys who think that "640kB ought to be enough for anyone"? If Bitcoin continues to succeed, it will come to a point where $200M will have been spent on mining hardware. Just consider that today, $20M has already been spent on CPU/GPU/FPGA mining hardware. Going to $200M is only a 10x increase. Unless by then the dollar ceases to exist and you buy your hardware with BTC.
|
|
|
This whole power debate is exactly why I have always planned to set up solar/wind/whatever power sources ASAP for any significant mining effort. ...I should set up a solar-selling business for BTC...

I have wind, hydro and solar set up on my property. I don't live there, but when I do: free everything!

If I want 'free' energy like that, I still have to buy the turbines/panels, so there still is no free energy.
|
|
|
- Dividends are paid every hour except for religious holidays.
- Every day is a religious holiday for our shamanistic culture -- please be understanding.
Nice! You can actually keep all the promises you made.
|
|
|
I've seen this kind of question before. With the type of getwork that just sends one block header, the ASIC has to search 2^32 nonces and on average finds 1 nonce worth a share. The getwork uses about 1kB of data. Sending the nonce back to the pool uses less iirc, about 1/4kB. For 1500 GHash/s this gives 1500G / 2^32, which is about 350 requests per second: roughly 350kB/s pool->client and 90kB/s client->pool.

For getwork there is already rollntime, which tells the client it may change the time field in the block header. So with a rollntime value of 10 the client gets 10 times more work out of each getwork, and the 350kB/s pool->client is reduced to 35kB/s. If the client->pool data gets too much, it is possible to raise the share difficulty from 1 to something bigger. That way the client finds fewer shares, but the shares are worth more, so on average it is still the same.

There are at least 2 new protocol proposals, but I can't remember their names so I can't find them right now. One of them lets the client build the header itself iirc, with a higher share difficulty and something better than the current longpoll to prevent stales. The other I haven't read, but I think it works much the same way.
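A quick back-of-the-envelope check of those bandwidth numbers (the ~1kB and ~1/4kB message sizes are the estimates from the post above):

```python
# getwork bandwidth estimate for 1500 GHash/s: ~1 kB per work unit,
# ~0.25 kB per submitted share, one difficulty-1 share per 2**32
# hashes on average.

hashrate = 1500e9           # hashes per second
getwork_bytes = 1024        # ~1 kB per getwork
share_bytes = 256           # ~0.25 kB per submitted share

works_per_second = hashrate / 2**32                  # ~349 getworks/s
down_kbps = works_per_second * getwork_bytes / 1024  # pool -> client
up_kbps = works_per_second * share_bytes / 1024      # client -> pool
print(round(down_kbps), round(up_kbps))              # ~349 and ~87 kB/s

# rollntime with 10 usable timestamps cuts the download tenfold
print(round(down_kbps / 10))                         # ~35 kB/s
```

Raising the share difficulty scales `up_kbps` down the same way rollntime scales `down_kbps` down.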
|
|
|
On the deepbit site under help:

Deepbit's transaction fee policy (this is not related to the pool's payments to our users). We think that free transactions are very important for the Bitcoin system, so we are including as many free transactions as possible into our blocks, possibly more than anyone else. But sometimes a fee may be useful, so here is our fee policy:
- A transaction is considered to be 'free' if the offered fee is less than 0.01 BTC.
- Free transactions are only accepted if they are less than 2000 bytes in size and have no more than 5 outputs.
- Outputs less than 0.01 BTC are not allowed in free transactions.
- The minimum fee for a non-free transaction is 0.01 BTC for each started 1000 bytes.
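A sketch of that policy as a predicate (function and variable names are mine, not deepbit's; fees are handled in multiples of 0.01 BTC to dodge float rounding):

```python
import math

FEE_UNIT = 0.01  # BTC; deepbit's fee quantum

def accepted(size_bytes, outputs, fee_btc):
    """True if the stated policy would include this transaction."""
    fee_units = round(fee_btc / FEE_UNIT)   # fee in units of 0.01 BTC
    if fee_units >= 1:
        # paying transaction: 0.01 BTC per *started* 1000 bytes
        return fee_units >= math.ceil(size_bytes / 1000)
    # free transaction: size, output-count and output-size limits apply
    return (size_bytes < 2000
            and len(outputs) <= 5
            and all(o >= 0.01 for o in outputs))

print(accepted(500, [0.5, 0.2], 0.0))    # True: small free tx
print(accepted(2500, [1.0], 0.01))       # False: 2500 bytes needs 0.03
print(accepted(2500, [1.0], 0.03))       # True: fee covers 3 started kB
```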
|
|
|
I don't know in detail how the GPU litecoin miners work, but my guess is that it's indeed something with GPU threads. Maybe while the GPU is calculating on the current 1GB of work, the CPU already puts the next 1GB of work in place. That way the GPU can start the new work immediately when the old is done. Doing this with more threads would probably lower the chance that all work is done before new work is ready. But I'm no expert in litecoin GPU mining, just a computer programmer thinking about what approach he would choose.
And another thing: litecoin doesn't use 128kB but a little more, around 128.5kB. It is possible it's faster to align on some boundary. If the boundary for memory blocks is 64kB, the total of 1GB becomes 1.5GB. With 4 threads staging work it becomes 6GB.
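The padding estimate works out like this (the 128.5kB state size and 64kB boundary are the post's assumptions):

```python
import math

KB = 1024
state = 128.5 * KB     # per-hash scrypt working set, per the post
boundary = 64 * KB     # hypothetical allocation alignment

# Pad each instance's buffer up to the next 64 kB boundary.
padded = math.ceil(state / boundary) * boundary
print(padded // KB)                 # 192 kB per in-flight hash

# 192 / 128.5 is ~1.49x overhead, which is how 1 GB of raw state
# balloons to ~1.5 GB, and 4 staging threads to ~6 GB.
print(round(padded / state, 2))     # 1.49
```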
|
|
|
Litecoin uses 128.5kB of memory. I doubt the FPGA itself has this memory, so you need some memory chip(s) next to it. And even then, there are currently no litecoin bitfiles to load onto the FPGA, so those must be created as well.
The FPGAs I will be using have a few MB of memory on them. How hard would it be to make the bitfiles and whatnot? That would have to be done in Verilog, right? What kind of FPGA is that? The Xilinx Spartan-6 LX150 only has 18Kb block RAMs if I'm correct. Wait, can anyone explain in greater detail why LTC mining only uses 128kB of memory, yet any scrypt mining program uses enormous (GBs) amounts of memory?
The scrypt hash calculation only uses a little over 128kB, but if you run it on a quad-core CPU there are 4 calculations going at the same time, so 512kB is used. If you run it on a GPU, which for example has 1024 compute units, you run 1024 calculations at the same time, so you need 1024 * 128kB = 128MB.
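In other words, total memory is just per-instance memory times concurrency:

```python
# Scrypt working memory scales linearly with the number of hashes
# computed concurrently: ~128 kB per instance, as described above.

SCRYPT_KB = 128

def total_memory_kb(concurrent_hashes):
    return SCRYPT_KB * concurrent_hashes

print(total_memory_kb(4))      # quad-core CPU: 512 kB
print(total_memory_kb(1024))   # GPU with 1024 units: 131072 kB = 128 MB
```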
|
|
|
|