2421  Alternate cryptocurrencies / Altcoin Discussion / Re: A blockchain with a hashing function doing useful work on: October 03, 2011, 08:22:37 PM
What's proof-of-stake and BTCDD?
Proof of stake is not currently a single thing; it's a set of ideas, some of which were discussed here. Bitcoin days destroyed (edited after realizing BTCDD is not a standard abbreviation) is a measure of the movement of coins. I talked about "circulation" in the first linked thread, and later realized that bitcoin days destroyed could be a good way to measure it.
2422  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 08:12:58 PM
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose the wrong ratio, one type will be more profitable than the other and will be used exclusively.
2. Have two separate difficulty values, each computed from the time it took to find X blocks of a type compared to the desired time. To know what the desired time is, you have to set what % of the blocks you want to be of this type. It needn't be 50/50, but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.

Or simply make both halves independent.  Currently the bitcoin block is signed by a single hash (well, technically a hash of the hash).  There is no reason some alternate design couldn't require 2 hashes.  A block is only valid if signed by both hashing algorithms and each hash is below its required difficulty.  In essence a double key to lock each block.  Each algorithm would be completely independent in terms of difficulty and target and would reset

Even if you compromised one of the two algorithms you still wouldn't have control over the block chain.  If the algorithms were different enough that no single piece of hardware was efficient at both, you would see hashing split into two distinct camps, each developing independent "economies".
That's the opposite of independence. It means that the same party needs to do both CPU and GPU hashing to produce a valid block. So end users can't mine because they don't have GPUs. And there's no way to adjust difficulty separately, since you don't have separate blocks to count.

It would also create very weird dynamics: you first try to find one key, and if you do, you stop working on that one and start on the other to find a matching key. If you can find both keys before anyone else, you get the block. And I think this means it's enough to dominate one type, because then you "only" need to find the key for the other to win a block.
2423  Alternate cryptocurrencies / Altcoin Discussion / Re: A blockchain with a hashing function doing useful work on: October 03, 2011, 08:03:47 PM
I'll warrant that the F@H people can cheat the system (destroying all value) if they wanted to. But 1) why would they want to do that? and 2) that's the price of doing business--there's no practical way around centralization of work distribution and verification in this context.
I don't know what your system is like and I understand if you want to keep it under wraps for now, but based on my naive understanding there are lots of problems:
 - At times when F@H is down and there are no rewards, why would anyone waste power to confirm transactions?
 - What if someone greedy at F@H rigs things to make it look like an address belonging to him did the work?
 - What if the F@H project is terminated? I assume more than one project will be involved, but if something happens to a major one it could create a shock.
 - Who decides what projects are included? What prevents someone from gaining free computing for his for-profit project?
 - What happens when Bitcoin is 1,000,000 times the size of F@H and you have the whole world's economy relying on a few obscure charities?

Unless you have some magic solution, I think you're completely missing the point of why Bitcoin's decentralization is so important. The solution to the "wastefulness" of hashing should be in the form of allowing more security with fewer resources (e.g. branch selection based on proof-of-stake and bitcoin days destroyed).
2424  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 07:36:13 PM
You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
I think you shouldn't artificially balance targets. They would both converge to a point where both are at the same level of profitability, regardless of the advances in technology. Requiring an alternate every n blocks makes sense though...
If you have blocks corresponding to two different hash functions, you have two options:
1. Have a single difficulty value, and hardcode a ratio between the hash targets. Then if you choose the wrong ratio, one type will be more profitable than the other and will be used exclusively.
2. Have two separate difficulty values, each computed from the time it took to find X blocks of a type compared to the desired time. To know what the desired time is, you have to set what % of the blocks you want to be of this type. It needn't be 50/50, but that gives the most security for this application. Then the respective difficulties of the two types will converge to a point where they are equally profitable.
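To illustrate option 2, here's a rough Python sketch of how each block type could retarget its own difficulty from the time its last X blocks took, given a desired share of all blocks. The constants and the clamping factor are illustrative assumptions, not a concrete proposal.

Code:
BLOCK_INTERVAL = 600        # desired seconds per block, over all types combined
RETARGET_BLOCKS = 2016      # X: blocks of one type per retarget window
MAX_STEP = 4.0              # clamp factor, in the spirit of Bitcoin's retargeting

def retarget(old_difficulty, window_timespan, type_fraction):
    """Retarget one block type.

    window_timespan -- seconds it actually took to find the last
                       RETARGET_BLOCKS blocks of this type
    type_fraction   -- desired share of all blocks for this type (e.g. 0.5)
    """
    # If this type should be, say, 50% of blocks, one of its blocks is
    # expected every BLOCK_INTERVAL / type_fraction seconds.
    desired_timespan = RETARGET_BLOCKS * BLOCK_INTERVAL / type_fraction
    ratio = desired_timespan / window_timespan
    # Clamp so a single window can't swing the difficulty too far.
    ratio = max(1.0 / MAX_STEP, min(MAX_STEP, ratio))
    return old_difficulty * ratio

With both types adjusted this way, hashpower migrating to whichever type is momentarily more profitable pushes that type's difficulty up, so the two converge to equal profitability.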
2425  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 06:19:30 PM
the flaw in Bitcoin's design is not the GPUs, it is the mining pools. They completely invalidate the initial assumption that every Bitcoin participant is contributing to the network's security. With the pools, only pool owners do. Currently the end miners don't need a Bitcoin client and can't even know for sure which network they are mining for...
I guess you need to use bold and all caps to be heard around here. So, for the third time,

POOLS ARE ONLY A SECURITY THREAT DUE TO AN IMPLEMENTATION DETAIL THAT CAN BE EASILY FIXED. IT IS NOT A FUNDAMENTAL PROBLEM WITH THE DESIGN.
2426  Alternate cryptocurrencies / Altcoin Discussion / Re: A blockchain with a hashing function doing useful work on: October 03, 2011, 05:44:18 PM
I don't see centralism being quite the demon it is made out to be, personally.
Centralization is good for some things, bad for others. If I understand correctly that the F@H server is required to confirm that work was done, then if the server goes down the whole currency shuts down, which is a MAJOR weakness.
2427  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 03, 2011, 04:58:49 PM
if we lose 90% of hashing power in a month then that is a sign something much more serious is wrong with bitcoin than the difficulty algorithm;
"Shit happens" as they say. It would certainly mean there's something wrong, but not necessarily fatal. It will be fatal if we're not prepared. Also, merely preparing for the possibility may make it less likely by preventing a "hash run" - price drops, miners quit, transactions are confirmed more slowly, causing loss of confidence and more price drops, more miners quit...

This is way off-topic from the OP... but Gavin, why couldn't ArtForz's patch be applied to core? Bumping the interval of consideration up from 2015 to 2016 will have no effect unless nodes are acting dishonestly. And this isn't far-fetched--as ArtForz points out the current situation gives miners significant incentive to collaborate and cheat.
I'm all for making the fix, but I don't think the incentive is really that great. If miners collude to generate coins ahead of schedule, it will cause a loss of confidence in Bitcoin, making their mined coins worthless.
2428  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 01:36:53 PM
GPUs make the network very botnet resistant.    Bitcoin likely would have been destroyed by botnets already (whether out of spite or simply to see if they could) had it not been for the rise of specialized (i.e. GPU) mining.

This idea that specialized mining may defend the Bitcoin network from botnets might have merit.

I wonder if it might be possible to have the best of both worlds - where specialist mining makes commercial sense and casual CPU miners can also convert electricity to cryptocurrency at a rate that isn't prohibitive.

I'd guess you'd need to see a ratio of approximately 1.5:1 (efficiency on specialist hardware : general CPU) to have both co-exist.

You could do something like that. Have two kinds of blocks, call them "CPU blocks" and "GPU blocks", each using a hashing function with corresponding friendliness. They're mostly equivalent for purposes of confirming transactions and they exist on the same chain (a GPU block can reference a CPU block, etc.). Each type will have its own difficulty, where the difficulties are targeted so that 50% of blocks are GPU and 50% are CPU. There is an additional rule that a block is invalid if the last 8 blocks were the same type as it. So a botnet dominating the CPU blocks or a cluster dominating the GPU blocks can't do much because it can't generate a branch longer than 8 blocks (if someone manages to do both there's still a problem). You can change the recommended waiting period from 6 to 10 blocks.
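As a toy illustration of that run-length rule, here is a hedged Python sketch of the validity check; the 'CPU'/'GPU' labels and the helper name are assumptions made up for the example.

Code:
MAX_RUN = 8

def run_length_ok(new_block_type, prev_block_types):
    """prev_block_types: types ('CPU' or 'GPU') of prior blocks, oldest first."""
    recent = prev_block_types[-MAX_RUN:]
    if len(recent) < MAX_RUN:
        return True   # near the start of the chain there is no full window yet
    # Invalid only if all of the last MAX_RUN blocks share the new block's type.
    return any(t != new_block_type for t in recent)

Under this check, a party dominating only one block type can never extend a branch by more than 8 consecutive blocks of its own kind before it needs the other type.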


Those users will likely join pools, so pools, which are already the largest threat to decentralization, will remain an issue.
I already explained that pools are not a threat to decentralization going forward.
2429  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 01:08:15 PM
Basing it on RAM is even more foolish.

While most consumer-grade hardware only supports ~16GB per system and the average computer likely has ~4GB, there already exist specialized motherboards which support up to 16TB per system.  This would give a commercial miner 4000x the hashing power of an average node.  A commercial miner is always going to be able to pick the right hardware to maximize yield.  Limiting the hashing algorithm by RAM wouldn't change that.
And they get this 16TB of RAM for free? RAM is expensive, and the kind of RAM usually used on servers is more expensive than consumer RAM. And again, even if they manage to make it a bit more efficient it's not close to competing with already having a computer.

BTW, 2GB would be a poor choice, as many GPUs now have 2GB, thus the entire solution set could fit in video RAM, and GDDR5 is many magnitudes faster than DDR3 (desktop RAM).
You need 2GB per instance. You can't parallelize within that 2GB to bring all the GPU's ALUs to bear. GPU computation and RAM are very parallel but not "fast" sequentially; this takes away their advantage.

Sure, we don't want a monopoly, but as long as no entity achieves a critical mass we also don't need 200K+ nodes either.  If you are worried about the strength of the network, a better change would be one which has a gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool size.  This would cause pools to stabilize at a sweet spot that minimizes variance and minimizes the effect of the non-linear hashing relationship.  Rather than deepbit having 50%, the next 10 pools having 45% and everyone else making up 5%, you would likely see the top 20 pools having on average 4% of network capacity.
There's no need for that; the "deepbit security problem" exists only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves or get it from another node and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.
2430  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 03, 2011, 12:36:45 PM
So back on topic... This idea actually looks promising. I have yet to run the numbers, but I suspect it'll work "economically" as long as you keep the total adjustment possible over X blocks symmetrical for up vs. down (or even allow quicker up than down adjustment). Still somewhat worried about Poisson noise causing issues for a faster-adjusting algo, but I guess that *should* average out over time...
I ran some simulations; I don't think there's anything wrong with it, but by itself it won't solve the problem. I tried m=1 (adjustment every block), n=2016 (ratio calculated over the past 2016 blocks) and a correction of ratio^(m/n). It takes 600 days to recover from a 100x drop in hashrate. Using a correction of ratio^(2m/n) it's still stable and improves this to 400 days. Maybe it's a matter of finding the highest stable exponent.
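For reference, here is a rough Python re-creation of this kind of simulation: retarget every block from the last n = 2016 blocks, with the correction ratio raised to a small exponent. The modelling choices (exponential block times, the recovery threshold) are my assumptions; this is not the code behind the quoted 600/400-day figures.

Code:
import random

TARGET = 600.0            # desired seconds per block
N = 2016                  # window of blocks used for the correction ratio

def days_to_recover(exponent, hash_drop=100.0, seed=0):
    random.seed(seed)
    difficulty = 1.0
    hashrate = 1.0 / hash_drop          # hashrate just fell by a factor of hash_drop
    window = [TARGET] * N               # pre-drop blocks were roughly on schedule
    elapsed = 0.0
    while difficulty > 1.1 * hashrate:  # "recovered" once difficulty ~ hashrate again
        # Expected block time scales with difficulty / hashrate.
        block_time = random.expovariate(1.0) * TARGET * difficulty / hashrate
        elapsed += block_time
        window.pop(0)
        window.append(block_time)
        ratio = N * TARGET / sum(window)      # < 1 when blocks are coming too slowly
        difficulty *= ratio ** exponent       # damped per-block correction
    return elapsed / 86400.0

print(days_to_recover(1 / 2016), days_to_recover(2 / 2016))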
2431  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 03, 2011, 11:24:11 AM
Wouldn't it be easiest to keep the original retarget algo (with the 2016-block window etc.), but just do the retargeting more often so that retarget windows overlap? You could retarget every block if you wanted (or every 10 or whatever).

With a system like this, for example, Namecoin difficulty would have dropped to "profitable" levels a long, long time ago, and it would have suffered only a temporary slowdown.

Is there a downside to a system like this that I don't see? Some problem it doesn't solve or some exploit it introduces? It seems so simple :-)
This only has limited effectiveness against the doomsday problem. Even if you retarget on every block, you still have to wait for that first block, which could take a while if things are really bad. And as ArtForz said, you need to take a root of the correction (otherwise it explodes), so even after the block is found the correction is not enough.
2432  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 03, 2011, 08:46:45 AM
Solidcoin/ixcoin/i0coin/... with *1.1 /4 limits (edit: in theory, as the real chains all also have the same off-by-1 in their retargeting):
N = 1 cooperate, everyone gets 1.
N > 0.5 cooperate, cooperators get 1/N, defectors get 0.
N < 0.5 cooperate, cooperators get 0, defectors get 3.6/(1-N).
N = 0 cooperate, everyone gets 3.6.
Can't this kind of problem be solved with a term corresponding to the I in PID? If more blocks than normal are found consistently, this term will kick in and increase the difficulty.

Also, the suggestion in the OP isn't to change the limits asymmetrically. It's to expedite the adjustment when the recent history indicates the next adjustment is expected too far in the future. If the hashrate goes back up, the difficulty can rapidly increase just fine. Maybe even make it so that in this scenario the difficulty can even more easily go up, to further decrease any gameability.

All I'm saying is that just because some alts came up with a half-assed algo doesn't mean the doomsday problem is unsolvable.
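A minimal sketch, under assumed thresholds, of the expedited adjustment as I read the OP: trigger an early retarget only when the blocks found so far in the current window imply the scheduled retarget is far overdue.

Code:
BLOCK_INTERVAL = 600          # desired seconds per block
WINDOW = 2016                 # blocks per normal retarget
MIN_SAMPLE = 144              # require roughly a day's worth of evidence (assumption)
LATENESS_FACTOR = 4           # how overdue the window must look to act early (assumption)

def should_retarget_early(blocks_since_retarget, seconds_since_retarget):
    if blocks_since_retarget < MIN_SAMPLE:
        return False                                   # not enough evidence yet
    remaining = WINDOW - blocks_since_retarget
    observed_rate = blocks_since_retarget / seconds_since_retarget
    expected_wait = remaining / observed_rate          # seconds until the scheduled retarget
    normal_wait = remaining * BLOCK_INTERVAL           # what that wait should be
    return expected_wait > LATENESS_FACTOR * normal_wait

Upward adjustments could be left unrestricted (or made even easier, as suggested above), so the rule can't be gamed by briefly withholding hashpower.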


Under the “bitcoin as planned” alternative there is no incentive either way, so fix the off-by-1 and make sure adjustments are +/- the same maximum ("symmetric" by everyone but kjj's definition) and we're done, right? In other words, you need the off-by-1 or asymmetry for this to be exploitable, right?
Done with ArtForz's attack, yes. But not with the problem of the OP.

Also, once new coins are not being generated, the issue goes away as well, right? It doesn't matter how many blocks you generate if the entire subsidy is coming from transaction fees (a blocknumber based demurrage would still suffer the same problem, however).
I believe so. But the dynamics of that era are still not understood well enough even without considering potential attacks.
2433  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 08:22:43 AM
Wouldn't trying to keep changing things all the time result in fragmentation of the network, as bunches of people are too lazy or simply not computer-savvy enough to feel comfortable constantly upgrading things, and stay with their old stuff?
Upgrading the client every so often is good practice anyway. If the big players agree to the change, everyone else will just have to follow. Those who can't be bothered to keep up are better off using an eWallet rather than a client. It's not essential that we actually do change the hashing function frequently, only that we are prepared for the contingency. The "change every year" plan was just an example of how we could prevent specialization if we really wanted to and all else fails.
2434  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 05:08:43 AM
only profitable for those with the most (and fastest) CPUs and the resources needed to support them (electricity, etc)
It's not about quantity. Someone with 1000 CPUs will make 1000 times the revenue, but with 1000 times the cost. It's about efficiency: cost per bitcoin generated (where all costs are considered: electricity, hardware, maintenance...). If I have just 1 CPU but with the same efficiency, I can also profit.

those with the skills and resources will be the ones getting the profits
Skills, resources and opportunities. Someone who has a computer he bought for other purposes, which happens to be able to mine, has an opportunity to profit. At-home miners have several other big advantages over dedicated businesses. If there's really no specialized hardware, all the business has is things like somewhat more technical knowledge and slightly better negotiated power prices, and it simply can't compete.


1) It still won't be "fair".  Sure, if you can only use CPUs then the traditional mining farm becomes kaput.  It still doesn't give the average user an "equal share".  What about IT department managers who may have access to thousands of CPUs?  They dwarf the returns that an "average" user can ever make.  You simply substitute one king of the hill for another.
Nobody in the thread said it should be "fair". It's about making Bitcoin decentralized per the vision, and making it more secure (by making it more difficult for an attacker to build a dedicated cluster).

2) It makes the currency very, very vulnerable to botnets.  The largest botnet (Storm Trojan) has roughly 230,000 computers under its control.  It could instantaneously fork/control/double-spend any cryptocurrency.   There are far fewer computers with high-end GPU systems, they are more detectable when compromised, and on average they tend to be owned by more computer-savvy users, making controlling an equally powerful GPU botnet a more difficult task.
Then solve that problem. Botnets are a potential problem now but they will become less so as Bitcoin grows. In any case they seem like a challenge to overcome rather than a fatal flaw in CPU-mining.

3) If GPUs were dead then FPGAs would simply reign supreme.  CPUs are still very inefficient because they are a jack of all trades.  That versatility means they don't excel at anything.  If bitcoin or some other cryptocurrency were GPU-immune, large professional miners would simply use FPGAs and drive the price down below the electrical cost of CPU-based nodes.  The bad news is it would make the network even smaller and even more vulnerable to a botnet (whose owner doesn't really care about electrical costs).
The point with CPU-friendly functions is RAM. With a given amount of RAM you can only run so many instances, so you're bound by your sequential speed. Unless FPGAs can achieve a big advantage over CPUs in this regard, they will just be too expensive to be worthwhile.

4) Technology is always changing.  GPUs are becoming more and more "general purpose".  It is entirely possible that code which runs inefficiently on today's GPUs would run much more efficiently on next-generation GPUs.  So what, are we going to scrap the block chain and start over every time there is an architectural change?
Who said anything about scrapping the block chain? We're using the same block chain but deciding that starting with block X a different hash function is used. And yes, having a policy for updating the hash function is good for this and other reasons.

5) CPUs will become much more GPU-"like" in the future.  The idea of using multiple cores with redundant, fully independent implementations is highly inefficient (current Phenom and Core i designs).  To continue to maintain Moore's law, expect more designs like AMD's APUs, which blend CPU and GPU elements.  Another example is the PS3's Cell processor, with a single general-purpose core and 8 "simple" number-crunching cores.  As time goes on these hybrid designs will become the rule rather than the exception.  It would be very silly if any cryptocurrency were less efficient on future CPU designs than current ones out of some naive goal of making it "GPU proof".
Again, RAM. You can choose a hash function which requires 2GB RAM per instance. Then the amortized cost of the CPU time will be negligible, and your computing rate is determined strictly by your available RAM and sequential speed.
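As a back-of-the-envelope illustration of that bound (all figures made up for the example), the throughput of such a memory-hard function is capped by how many 2GB instances fit in RAM, times the per-instance sequential speed:

Code:
MEM_PER_INSTANCE = 2 * 2**30           # 2 GB of RAM per hashing instance (assumption)

def max_hash_rate(total_ram_bytes, cores, hashes_per_core_per_sec):
    # You can't run more instances than you have RAM (or cores) for.
    instances = min(cores, total_ram_bytes // MEM_PER_INSTANCE)
    return instances * hashes_per_core_per_sec

print(max_hash_rate(8 * 2**30, 4, 1.0))       # typical desktop: 4 instances
print(max_hash_rate(16 * 2**40, 64, 1.0))     # RAM-stuffed server: still capped at 64 cores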
2435  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 02, 2011, 03:08:35 PM
Bitcoins have most certainly gone up in price due to GPU mining and lack of energy efficiency. The more energy put into the system, the more coins cost to make, the higher the bitcoin price is.
No. The causal chain is Market => Price => Total mining reward => Incentive to mine => Number of miners => Difficulty => Cost to mine. The other direction, the direct influence of the specifics of mining on the coin price, is negligible. If the cost per hash were lower, there would simply be more hashes and a higher difficulty, thus maintaining market equilibrium (though there are indirect effects due to network security, popularity, etc.).
2436  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 02, 2011, 07:46:40 AM
Bitcoin does not have the problem with miners coming and going at anywhere near the level seen with the alternates.
Scenario 1: Something really bad happens and the Bitcoin exchange rate quickly drops to a tenth of its previous value. Mining was already close to breakeven, so for most miners it is no longer profitable to mine and they quit. Hashrate drops to a tenth of its previous value, blocks are found every 100 minutes, and the next retarget is 5 months away.

Scenario 2: Someone breaks the hashing function or builds a huge mining cluster, and uses it to attack Bitcoin. He drives the difficulty way up and then quits.

Scenario 3: It is decided to change the hashing function. Miners want to protect their investment so they play hardball bargaining and (threaten to) quit.

These can all be solved by hardcoding a new value for the difficulty, but wouldn't it be better to have an adjustment algorithm robust against this? Especially considering most Bitcoin activity will freeze until a solution is decided, implemented and distributed.

Anyone proposing a change in the feedback controller for difficulty should be required to show the stability region of his proposal. Pretty much everyone who tries to "improve" the PI controller implemented by Satoshi comes up with some hackneyed version of a PID controller and then is surprised that it can be made to oscillate and is not even asymptotically stable in the Lyapunov sense if any nonlinearity is included.
...
If I remember correctly it integrates the error over 2016 time intervals. This is some approximation of PI; a more accurate approximation would be if the calculation of the expected block time were carried out since block 0 (time = -infinity).
I know enough to understand the ideas of what you're saying, but not the specifics. And I think you're wrong about the last part. Integrating the error over 2016 time intervals is not an approximation for I, the integral from -infinity. It is an approximation for P, used rather than direct measurements (equivalent to 1 time interval) because the quantity measured is stochastic.

The P term is what exists currently.
The I term would fix problems with long-term trends. Currently, halving happens in less than 4 years because of the rising difficulty trend.
The D term would rapidly adapt to abrupt changes, which is basically what the OP is suggesting.

It's possible that the existing linear control theory doesn't apply directly to the block finding system and that P, I and D are only used metaphorically.
People who understand this stuff should gather and design a proper difficulty adjustment algorithm with all these components.
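To make the P/I/D split concrete, here is a speculative Python sketch operating on the log of the block-time error, so corrections act multiplicatively on the difficulty. The gains and window are arbitrary assumptions, and, as the quoted warning notes, nothing here is a tuned or provably stable controller.

Code:
import math

TARGET = 600.0                        # desired seconds per block

class PIDDifficulty:
    def __init__(self, kp=1.0, ki=0.01, kd=0.2, window=2016):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.window = window
        self.recent = []              # last `window` block times
        self.integral = 0.0           # accumulated long-term error (the I term)
        self.prev_error = 0.0

    def update(self, difficulty, block_time):
        self.recent.append(block_time)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        # P: error averaged over the window -- roughly what Bitcoin uses today.
        avg = sum(self.recent) / len(self.recent)
        error = math.log(avg / TARGET)           # > 0 means blocks are too slow
        # I: long-running trend, e.g. difficulty persistently lagging hashrate growth.
        self.integral += math.log(block_time / TARGET)
        # D: reaction to an abrupt change between consecutive windowed errors.
        derivative = error - self.prev_error
        self.prev_error = error
        correction = (self.kp * error
                      + self.ki * self.integral / self.window
                      + self.kd * derivative)
        # Positive error (too slow) should lower the difficulty, hence the minus sign.
        return difficulty * math.exp(-correction)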

I did some simulation of block-time variance (assuming honest nodes, and after calibration for network growth) for various difficulty adjustment intervals, posted on the freicoin forums. The nifty chart is copied below. The difference between one-week and two-week intervals was negligible, and one week is what I would recommend if a shorter interval were desired.

A 24-hour interval (144 blocks) would have a variance/clock skew of 8.4%, meaning that one would *expect* the parameters governing difficulty adjustment to be in error by as much as 8.4% (vs. bitcoin's current 2.2%). That's a significant difference. A 1-week retarget would have 3.8% variance. Twice-weekly would have 4.4% variance. I certainly wouldn't let it go any smaller than that.
We're not talking about using the same primitive algorithm with a shorter timespan. We're talking about either a new intelligent algorithm, or a specific exception to deal with emergencies.
2437  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 02, 2011, 07:14:18 AM
I don't understand why you consider GPUs as being a "problem", presumably because the average user without a GPU farm cannot compete...
If the vision is that Bitcoin will be completely decentralized and that everyone will mine (and I'm not saying this has to be the vision), then this is a problem, regardless of whether it is an inevitable one.

The average user, by definition, owns an average PC that will run workloads at an average performance. It is inevitable that specialized hardware will appear and significantly outperform him. Even if mining were done by an algorithm designed to be costly to implement in hardware (like Tenebrix using scrypt), this would cause miners to evolve toward specialized hardware like many-core server farms.
At-home miners use hardware they already purchased for other purposes; they have a place for their PC with presumably sufficient cooling; they can mine only when their PC is on anyway so they pay only for the additional electricity of the CPU load, not to keep their computer running. Commercial miners need to pay for all of this as well as other expenses; they take the risk that Bitcoin value will decrease, that competition will increase, or that the hashing function used will change obsoleting their investment; and they still need it to be profitable enough to make for a viable business. For this, their hardware advantage needs to be huge.

How huge will it be? The idea with scrypt is that every core needs a certain amount of RAM, the exact amount being configurable. So those many-core servers will need lots of RAM to go along with it, which will get pretty expensive.
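As a small illustration of scrypt's configurable memory cost, here is a sketch using Python's OpenSSL-backed hashlib.scrypt; the parameter choices are examples only, not Tenebrix's (or any coin's) actual settings.

Code:
import hashlib

def scrypt_hash(header: bytes, n: int = 2**14, r: int = 8) -> bytes:
    # scrypt needs roughly 128 * r * n bytes of RAM per hash -- about 16 MiB here.
    mem_bytes = 128 * r * n
    return hashlib.scrypt(header, salt=header, n=n, r=r, p=1,
                          maxmem=2 * mem_bytes, dklen=32)

# Every instance run in parallel needs its own 128*r*n bytes, so the number of
# concurrent hashes is bounded by available memory, not by raw ALU count.
print(scrypt_hash(b"example block header").hex())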
2438  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 02, 2011, 06:51:00 AM
I guess you could also have two or more hashing functions concurrently.
Yes, this is what I alluded to when I said that "the change can be gradual", but it can only be done for a transitional period because you need to hardcode how the difficulties of the two hashing functions relate. Either you have a shared difficulty and you hardcode the ratio of the targets, or you have separate difficulties and you hardcode the difficulty adjustment algorithm to target a certain percentage of the total blocks coming from each method. A "gradual" change would be starting with 100% old method, 0% new method and slowly shifting to 0% old, 100% new.
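One illustrative (entirely made-up) way to encode such a gradual shift is a schedule mapping block height to the target share of new-function blocks, which the per-method difficulty adjustment would then aim for:

Code:
TRANSITION_START = 200_000      # example start height (assumption)
TRANSITION_SPAN = 50_000        # example number of blocks for the shift (assumption)

def new_function_share(height: int) -> float:
    """Target fraction of blocks using the new hash function at this height."""
    if height < TRANSITION_START:
        return 0.0
    if height >= TRANSITION_START + TRANSITION_SPAN:
        return 1.0
    return (height - TRANSITION_START) / TRANSITION_SPAN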

But this is only required when we make changes that obsolete existing investments. It will be necessary at the time we adopt a "change every year" strategy, but once that's adopted there's no need for a gradual transition each year, because there will be no investment in hardware specific to the function of the year.
2439  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 02, 2011, 05:58:21 AM
The thing is, I just don't see how such a design is possible.
Did you read the scrypt paper (linked by the alt currency Tenebrix)? It sounds promising. In the end it's a numbers game, and I don't know what the numbers will be. But I think that if the Bitcoin economy as a whole decides it doesn't want specialized mining, it can be done - for example, by deciding to change the hashing function every year, so that no one will want to design chips that will expire by the time they are out.
2440  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 02, 2011, 05:48:31 AM
Not only did Satoshi plan for an increasing amount of power being thrown at mining, but he knew that GPUs would excel at it, way before any GPU miner even existed, in December 2009:

We should have a gentleman's agreement to postpone the GPU arms race as long as we can for the good of the network.  It's much easier to get new users up to speed if they don't have to worry about GPU drivers and compatibility.  It's nice how anyone with just a CPU can compete fairly equally right now.

Thanks - very revealing quote there. My reading of it is that he became aware of the GPU problem after the 'horse had already bolted' - and is thus appealing for a 'gentleman's agreement' for what he might have otherwise prevented in design from the outset.
Did you read the quote in context? (The quote header is a link.) The immediately preceding comment says

Suggestion:
Since the coins are generated faster on fast machines, many people will want to use their GPU power to do this, too.
So, my suggestion is to implement GPU-computing support using ATI Stream and Nvidia CUDA.
So, this means GPU mining still wasn't implemented (and I think this remained the case for another year). And, as I and others said, GPGPU is old news (and was probably foreseen well before the popular implementations).