  Show Posts
1341  Bitcoin / Development & Technical Discussion / Re: Integer or float used in Bitcoin? on: December 23, 2012, 04:08:29 AM
Decimal is easy to screw up.  Even if computers don't have a problem with it, in my experience humans make all kinds of subtle bugs dealing with decimal and floating datatypes.   Integer is clean and simple.  
Your experience is very atypical. Of the two problems, keeping track of the scale factor (for a binary integer) and keeping track of the decimal point, the first is always the more error-prone. This has been proven statistically over many significant code bases, repeatedly, and in many programming languages.

Personally I think this is somewhat academic.  Let's step back and ask what is the smallest discrete pricing unit which matters today. One could say $0.01 USD, but honestly I doubt $0.01 is material anymore (more like just inertia carrying it onward). I think in the worst-case scenario a crypto-coin would be fine even if pricing was limited to a precision of $0.05 USD in value. Note I am not saying optimal, but it wouldn't hinder adoption in any significant manner. If a satoshi is worth even 1/100th of a US cent (2012 purchasing power) then BTC has exploded in popularity and usage. If 1 satoshi is worth $0.05 USD then 1 BTC = $5M and the money supply is worth $105T USD. That is more than the combined money supplies of all countries on earth. 
I don't really disagree with the above, but I know of a different point of view. Many financial markets routinely quote prices with more than 2 fractional digits: 3 for bond prices; 4 for OTC stock prices and wholesale electronic component prices. There is quite a bit of research showing that more decimal digits make markets more efficient and collusion more difficult. Advanced trading software pretty much always uses many more than 2.
1342  Bitcoin / Development & Technical Discussion / Re: Integer or float used in Bitcoin? on: December 23, 2012, 03:18:26 AM
Clearly we must be talking about different arithmetic types. I'm talking about arbitrary-precision rational arithmetic of the type provided by mpq_t in the GNU multi-precision library, for which "2/1" + "2/1" = "2/1" * "2/1" = "4/1" (exact rationals with no concept of uncertainty).
No, we've been talking about the same thing. You've just spent so much time amongst scientists and engineers that you've forgotten how average folks (e.g. bookkeepers, tradesmen, etc.) do arithmetic.

What I wrote was: 200/100 + 200/100 = 400/100 and 200/100 * 200/100 = 40000/10000. The concept of "un/certainty" is known to most people as "in/significant digits". You may have saved yourself some time right now, but you'll pay it all back (and more) when users start showing you examples where your mathematically correct arithmetic simply doesn't agree with their textbooks or with the results from the HP-12c.

What is the value of your "exact" arithmetic if it cannot exactly represent many of the solutions to time-value-of-money problems, e.g. if the APR is 5%, how much is that per month, per week, or per day? Lots of TVM math requires irrational numbers, and quite a few commonly used calculations (e.g. IRR) don't have closed-form solutions and require iterative approximations. If your software is going to disagree with the HP-12c, your software is going to lose, not the HP-12c.
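
To make the irrationality point concrete, here is a small illustration of my own (not from the original post): converting a 5% annual rate to an equivalent monthly rate requires a 12th root, which is irrational, so any rational or fixed-precision representation has to round it somewhere.

Code:
# Assumed example: effective monthly rate equivalent to a 5% effective annual rate.
monthly = (1 + 0.05) ** (1 / 12) - 1
print(monthly)   # ~0.0040741..., the exact value is irrational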

Does GNU gmp support all the rounding modes routinely used in accounting? I couldn't quickly find them in the documentation. IEEE-754 folks support at least the basic 5.
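
For what it's worth, here is a tiny sketch (my example, not anything from GMP or the thread) of how a decimal library that does implement the standard rounding modes lets you make the accounting rule explicit; Python's decimal module is used purely for illustration.

Code:
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_DOWN

amount = Decimal("2.665")
for mode in (ROUND_HALF_UP, ROUND_HALF_EVEN, ROUND_DOWN):
    print(mode, amount.quantize(Decimal("0.01"), rounding=mode))
# ROUND_HALF_UP -> 2.67, ROUND_HALF_EVEN -> 2.66, ROUND_DOWN -> 2.66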

Edit: Anyway, the advanced quantitative trading software that I've seen used continuous compounding for the TVM calculations. I can't recall the minimum fixed precision that was required; it was either 6 or 8 digits after the decimal point.

Edit2: To avoid any further miscommunication: please try correctly implementing http://speleotrove.com/decimal/telco.html using your chosen representation.
1343  Bitcoin / Development & Technical Discussion / Re: Integer or float used in Bitcoin? on: December 22, 2012, 11:54:36 PM
Wouldn't a simpler system be to keep it as a 64-bit unsigned integer value, increase the number of digits to 12, and put in place a limitation that no tx (or block) can involve more than 10.5M BTC (50% of the money supply)?

2^64 = 1.84467E+19

10.5M BTC * 1E12 satoshis of precision per BTC = 1.05E19

You keep everything in bigint math, no need to go to a larger datatype (and thus larger tx size), no implementation issues with floats or decimals.  Simple & easy with the very minor limitation that a valid transaction can't have outputs or inputs valued at more than 10.5M BTC.  The limit is highly unlikely to be anything more than academic because Bitcoin is likely horribly flawed if a single entity has >50% of the money supply.
What I was trying to convey is that it is pretty much immaterial what any cryptocoin does internally. What matters is that such a *coin implementation can be integrated into a general financial workflow.

In order to satisfy the above it has to be easily representable in the various SQL variants, JavaScript, Ruby, Microsoft ASP and .NET, PHP, etc. Therefore the "just barely fits in a double" is actually a very good choice and a lifesaver for those who program in PHP.

Check out BitFinEx: they had Ruby code & a database using binary floating point. When somebody pointed this out on the forum, they converted to decimal floating point in a few days. This was a definite win for them.

The worst that could happen would be that Bitcoin overtakes all other currencies but is impossible to represent easily and correctly in COBOL. Or maybe that would be the best that could happen?  Smiley
1344  Bitcoin / Development & Technical Discussion / Re: Integer or float used in Bitcoin? on: December 22, 2012, 09:26:28 PM
Yes, but without being partisan BID is the only format that really makes sense for this application. DPD is added complexity but allows easy conversion to BCD in hardware, for hardware accelerated decimal arithmetic. But being a financial application all internal arithmetic would have to be exact, arbitrary-precision rational rather than truncated floating point anyway, so the decimal64 format would only be used for serialization, making the hardware efficiency gains (and associated complexity) rather pointless.
Well, your point of view depends on the point of sitting.

DPD is the only format with full support (from the vendor IBM) and has direct hardware acceleration (on POWER and z/Arch).

BID is the most widely available (because of Intel), but has glaring omissions in support, e.g. no widely available implementations of scanf() and printf(). It is no big task to write them looking at IBM's code, but I would consider it dangerous to distribute such a rewrite.

In my experience rational math is overrated for financial applications. Even very simple TVM calculations have irrational solutions. And you may still need to implement correct rounding just to properly interface with the outside world. And don't even mention the word "truncation". Smiley There are quite a few generally accepted accounting procedures that are very explicit about the rounding mode, and you'll have to match them in your rational math library to pass audits.

Edit: Also, one thing I forgot about rational math is that many people's expectations will not match the formal mathematical definitions. The most common example is 2 + 2 != 2 * 2. Mathematically this is false, but in practice you'll need to accommodate the old custom: 2.00 + 2.00 = 4.00, but 2.00 * 2.00 = 4.0000.
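
As an aside, a quick sketch of my own showing that decimal floating point (here Python's decimal module, which follows the IEEE 754 / IBM decimal arithmetic rules) already behaves the way the bookkeepers expect: addition keeps the coarser exponent, multiplication adds the exponents.

Code:
from decimal import Decimal

print(Decimal("2.00") + Decimal("2.00"))   # 4.00
print(Decimal("2.00") * Decimal("2.00"))   # 4.0000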
1345  Bitcoin / Development & Technical Discussion / Re: Integer or float used in Bitcoin? on: December 22, 2012, 07:29:37 PM
Actually the fact that all bitcoins fit within 2**53 is quite a happy accident. It means that integer-satoshi amounts can be represented exactly by the IEEE decimal64 type, which provides one route towards practically infinite divisibility by using exact arbitrary-precision rational arithmetic and decimal64 for serialization.
Yeah, definitely a fortuitous choice. It fits correctly in all three floating-point representations: binary64 (a.k.a. double precision) and both encodings of decimal64 (BID and DPD). The 53-bit and 16-digit limits both happen to be quite broadly supported.
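
A quick back-of-the-envelope check (mine, just to make the claim concrete): the maximum possible number of satoshis fits inside both a 53-bit binary64 significand and a 16-digit decimal64 significand.

Code:
max_satoshis = 21_000_000 * 100_000_000    # 2.1e15 satoshis ever
print(max_satoshis < 2**53)                # True, 2**53 is about 9.007e15
print(len(str(max_satoshis)) <= 16)        # True, exactly 16 decimal digits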

Use of decimal64 for serialization may not be the best choice, because two competing encodings are promoted, respectively, by Intel and IBM.
1346  Bitcoin / Mining software (miners) / Re: Mining protocol bandwidth comparison: GBT, Stratum, and getwork on: December 18, 2012, 09:34:24 PM
We have a very suspicious community that will continually check the pools' actions.
No disrespect intended for your thoughtful response, but this made me chuckle.  Deepbit was (is?) frequently attempting to mine forks of its own blocks. It's only Luke's paranoia code that caused anyone to actually notice it. Look at what happened with the BIP16 enforcement— discussed and announced months in advance, there were popular pools that didn't update for it, some (e.g. 50BTC) for as long as a month, and they kept hundreds of GH/s of mining during that time. A lot of malicious activity is indistinguishable from the normal sloppiness of Bitcoin services, and what does get caught takes hours (or weeks, months...) for people to widely notice, much less respond to... an attack event can be over and done with far faster than that.

The only way to have any confidence that the centralized pools aren't single points of failure is making them so they aren't— so that most malicious activity causes miners to fail over to other pools that aren't misbehaving... but most of those checks can't be implemented without actually telling miners what work they're contributing to. I'll leave it to Luke to point to the checks he currently implements and sees being implemented in the future.

Quote from: 2112
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.
So many possible responses… (1) You must have me confused with someone else— I write portable software. (2) Show me this person who "can't" use GCC. (3) But not officially supporting VC is more about being conservative and making the software testable, especially since we have a significant shortage of testing resources, especially on Windows (and Microsoft's standards conformance is lackluster to say the least)— that a compiler bug caused a miscompilation that ate someone's coin, but it was fine on the compiler the tester used, is little consolation. (4) Although you do seem to have me confused for your employee: while I'm happy for you to use things I write, I'm certainly not writing it to your specifications without a sizable paycheck. (5) But if you'd like to submit some clean portability fixes to the reference client I'll gladly review them, (6) though having heterogeneous software is the direct opposite of people blindly copying code with so little attention that they can't bother to do a little work in their environment, so changes that have risk or complexity may not be worth it if they're only for the sake of helping forks and blind copying. (7) Finally, why have you made this entirely offtopic reply here? The thread is about protocols between pools and miners, and has basically nothing to do with the reference client not providing you with MSVC project files.

I'm quoting in full just in case. In my experience the various attempts at disparaging Microsoft software stem from a lack of understanding of its capabilities and the requirements it fulfills in many markets. Close to half of the bitcoind code is just a re-implementation of a database (but not-really-a-database), a multitude of which are available on Windows, together with a plenitude of available APIs. Probably close to half of the bitcoin-qt code is just a re-implementation of the data-bound controls that are taught in the first semester of Visual Basic, C#, or C++ programming.

I actually admire Gavin Andresen for his willingness to simply admit that he's not familiar with the various available database/financial APIs and that learning them would distract him too much from the short-term goals that he sees for Bitcoin.

Anyone can also note that various moral and ideological opponents of Microsoft suddenly change their stance when talking about Apple. And Apple's OS X/iOS is an even worse prison-like system than Microsoft Windows is.

But let's get back to the supposed superior security of GBT and the value that such security provides for Bitcoin. Let's look at the recent successful double-spends against SatoshiDICE losing bets. If GBT were really security-oriented then we would've heard about this from many simultaneous GBT users, and the list of would-be double-spenders would be freely circulating here on this forum. Instead, we heard about it voluntarily from the cheater himself.

I'm having trouble believing anyone who simultaneously claims that he has a long-term financial security outlook but is oblivious to the attempts to abuse the protocol happening right now.

Combined with the above is the fact that you seem to pop up in every thread discussing alternate Bitcoin implementations and protocols and start disparaging their authors.

I don't know what your and Luke's goals are, but I'm getting more and more convinced that they aren't the ones that you are openly defending here. There is a possibility that you are completely incorruptible, like Felix Dzerzhinsky, and that you will show up in your armoured train anywhere you observe some counter-revolutionary activity that is trying to corrupt the ideals of the Bitcoin revolution.

Edit: I may be suffering from a case of mixed revolutions. Substitute Maximilien de Robespierre and the guillotine in the above paragraph.
1347  Bitcoin / Mining software (miners) / Re: Mining protocol bandwidth comparison: GBT, Stratum, and getwork on: December 18, 2012, 08:26:57 PM
This is why I, and most other Bitcoin developers, work to improve standardization and make Bitcoin more friendly to multiple competing implementations maintained by different people. Remember that GBT is the standard developed by free cooperation of many different people with different projects, whereas stratum was the protocol developed behind closed doors by a single individual.
The Stratum family of protocols was not developed behind closed doors. The development started right here, with a lively discussion:

https://bitcointalk.org/index.php?topic=55842.0

From the very beginning it was ignored by the core development team because of its key features:

1) it allows a clean break from the polling-only (RPC) behaviour deeply embedded into the architecture of the Satoshi client.

2) it shows a way forward after abandoning the legacy of long-poll (one of the most horrible hacks in the Bitcoin milieu): correctly using the two-way transport ability of a single TCP/IP socket.

Those things must have been a complete cultural shock to the core development team: a protocol that not only acknowledges the essential asynchronicity of the Bitcoin network, but exploits it to reduce network resource usage.

I'm going to assume that developers still living in a world where everything has to be polled for will spare no effort to disparage anyone from outside their group.

Bitcoin is essentially asynchronous, and anyone who tries to hide that fact is going to be working on a very bad software project. Completely asynchronous implementations like 0MQ or FIX are currently too advanced for the average Bitcoin implementer. Stratum is somewhere in between and is a way for the average Bitcoin implementer to learn modern software architecture.
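
To illustrate what "correctly using the two-way transport ability of a single TCP/IP socket" looks like in practice, here is a rough sketch of my own (the method names are Stratum-style, but the host and the details are made up): the client keeps one connection open, the server pushes new work whenever it likes, and the client can submit at any time, with no polling and no long-poll hack.

Code:
import asyncio, json

async def mine(host="pool.example.com", port=3333):   # hypothetical pool address
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(json.dumps({"id": 1, "method": "mining.subscribe",
                             "params": []}).encode() + b"\n")
    await writer.drain()
    async for line in reader:                  # server-pushed notifications
        msg = json.loads(line)
        if msg.get("method") == "mining.notify":
            pass                               # hand the new job to the hasher here

asyncio.run(mine())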

1348  Bitcoin / Development & Technical Discussion / Re: Proof of Proof - an alternative to proof of ___ systems on: December 17, 2012, 06:56:09 PM
Disregard this, I am writing up an even better proposal called Proof of Metaproof.
Can you distill it until it is 190-proof or at least 151-proof?
1349  Bitcoin / Hardware / Re: [Announcement] Avalon ASIC Development Status [Batch #1] on: December 17, 2012, 03:26:55 PM
These chips crunch near a billion hashes per second.  Losing a small handful of those each second is miniscule.

Mine along on your CPU if you wanna make up the difference and then some.
I get a feeling that a longer explanation is required for those unfamiliar with digital logic design.

The issue isn't really about losing one in billions of hashes. It is about gaining the timing margin (a.k.a. overclocking headroom) in the design.

Of course Avalon's logic is secret, but I'm going to discuss the problem based on one of the open-source FPGA hashers. It had a critical timing path in the logic that latched the "golden nonce". Since the design was a 125-deep pipeline, it had hardware that subtracted the constant 125 from the nonce counter before sending it out of the chip.

Now we have two ways to speed up the above design:

1) remove the 32-bit wide constant subtractor. This will gain a fraction of a nanosecond on every hash tried. It is very easy to subtract 125 in software from the nonce downloaded from the chip.

2) acknowledge that the timing violation may occur and the nonce latched may not be the exact one that solved the block, but the next or previous one, depending on the details of the latching logic. It is somewhat more involved, but still easily doable in software: recompute the hashes for nonce values n-126, n-125, n-124 and use the one that solved the block. Again, this will make the design more tolerant to overclocking for every hash tried inside the chip.

Obviously 1) cannot be applied to an ASIC chip or a closed-source FPGA bitstream. But method 2) remains applicable; just use a different set of test values. A rough software sketch of that recheck follows below.
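
Here is the promised sketch (my own code, not Avalon's or any FPGA project's): given the nonce reported by the chip and the known pipeline depth, recompute a small window of candidate nonces on the host and keep whichever one actually meets the target.

Code:
import hashlib, struct

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def recheck(header76: bytes, reported_nonce: int,
            offsets=(-126, -125, -124), target=2**224):
    # header76 is the 80-byte block header minus its 4-byte nonce field;
    # target=2**224 is roughly the difficulty-1 threshold, for illustration only.
    for off in offsets:
        nonce = (reported_nonce + off) & 0xFFFFFFFF
        h = double_sha256(header76 + struct.pack("<I", nonce))
        if int.from_bytes(h[::-1], "big") < target:
            return nonce        # this candidate really solves the share
    return None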
1350  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 17, 2012, 01:49:44 PM
The problem with these solutions is that the final executable is not created by the compiler/MSVS.  It is created by py2exe.  MSVS is only compiling a DLL that gets bundled into the final distributable with the python code (containing the C++ blockchain utilities).

However, as I was writing this response, I realized that I bet py2exe has a solution for this.  Sure enough, this page shows that it's pretty simple.  So yeah, I wish I'd realized that before...

Haven't tried it yet, but this looks pretty damned easy...
Well, the next problem you're going to run into will be related to the manifest files. I took a quick look at the py2exe web page and it seems like it was developed with Visual Studio .NET 2003 in mind (MSVC 7.1). That is the last release that didn't require manifest files for full functionality and full security.

I don't know how to fully solve this problem without modifying py2exe.

Edit: I looked at the news "py2exe 0.6.9 released (2008/11/15)" and they talk about supporting Vista UAC, which means that they may have some rudimentary, unfinished support for manifests.
1351  Bitcoin / Hardware / Re: [Announcement] Avalon ASIC Development Status [Batch #1] on: December 17, 2012, 02:55:27 AM
No, a solution is not provided every clock cycle. A mining logic block will drop non-solutions without requiring any communication with any external logic: it just has to look if the high 32-bits are zero or not.

The end result is that a ~7.5 Ghash/sec chip, for example, is going to output a difficulty-1 solution every half second, on average. That's only a few hundred bytes transmitted every second. Hardly "rocket science".
I think I understand what hardcore-fs has in mind. He is saying that with multiple hashing pipelines you may miss a more valuable difficulty-n (n>1) share if your glue hardware is occupied with transmitting a difficulty-1 share that has just been found by another pipeline. This situation is probably infrequent, but he insists on a synchronous FIFO to handle it properly.

I had a similar problem back in school, where we had to handle quite improbable fault conditions but we didn't want to lose track of them. We simply used asynchronous S/R flip-flops and interrupts. Software would single-step backtrack the faulty channels if more than one fault occurred nearly simultaneously.

I think the same approach can be used for a hashing chip: don't bother catching the exact nonce; since you know the order in which nonces are tried, you can check a couple of previous nonces in software. Even if the hashing chip cannot reliably use asynchronous S/R flip-flops, it could for sure use synchronous J/K flip-flops.

Basically, it is a hardware/software tradeoff in handling rare, but important, conditions.
1352  Bitcoin / Armory / Re: Armory - Discussion Thread on: December 16, 2012, 09:57:50 PM
I guess you don't need the specifics, I just need to be able to add a *.ico file to a *.exe file from the command line, and then I can add it to my MSVS post-build scripts to do it automatically on each build.
In Visual Studio all you need to do is right-click on the executable project and do "Add->Resource->Icon". Visual Studio will generate the resource script (*.rc) for you and automatically invoke the resource compiler (rc) and add the compiled resource file (*.res) to the linker's input.
1353  Bitcoin / Mining software (miners) / Re: Mining protocol bandwidth comparison: GBT, Stratum, and getwork on: December 16, 2012, 09:48:30 PM
Luke runs a service, though it's not much of a money-making service (as his pool doesn't take a cut of the subsidy).  I don't— but I don't earn a cent from working on Bitcoin (except for a few donations here and there— of which I've had none in months).  I don't think this has squat to do with motivations, but if you're looking to disparage _someone_ based on motivations you're looking at the wrong parties.

Bitcoin is a distributed and decentralized system.  You can make any distributed system non-distributed by just having some centralized parties run it.  If that is what you want— Paypal is a _ton_ more efficient (all this trustless distributed support has costs, you know), and more carefully audited and trustworthy than basically _any_ centralized Bitcoin service. I happen to think that Bitcoin is worth having, but only worth having if it offers something that more efficient options can't; without decentralization bitcoin is just a scheme for digicoin pump-and-dumpers to make a buck. So, looking out for Bitcoin's security is my only motivation on this subject. People using GBT or verified stratum for mining against a centralized pool aren't even running any software I wrote, and they're certainly not paying me for it.

I'm quite happy that people provide services for Bitcoin, even though services have centralization risk. But in the community we should demand that things be run in a way which minimally undermines Bitcoin and we should reject the race to the bottom that would eventually leave us with something no better than a really byzantine implementation of paypal.

Quote
Do I want to delve into programming, compiling and reinstalling every time I need to change rules in my mining operation?
And yet this isn't a question anyone is forced to ask themselves, even P2Pool users. And at the same time, people on centralized mining setups still need to maintain their software (e.g. it was pretty hilarious to watch big pools lose half their hashrate when phoenix just stopped submitting blocks at some difficulties).
I understand your arguments, but I'm unable to reconcile them with your actions.

With your writing in English you are advocating distributed systems and heterogeneous implementations.

With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.

The marginalization of Windows users and developers (like Casascius) is a very glaring example: lots more people would contribute to the project if the core developers actually downloaded Microsoft Visual Studio Express, purged the code base of GNU-isms, and made sure that all future changes developed on Linux don't break Visual Studio builds.

I don't disagree with you: a decentralized project can be surreptitiously centralized by malicious (or incompetent) software-as-a-service vendors.

But there is another avenue to such surreptitious centralization: a core development team that is full of "not invented here" and "our way or the highway".
1354  Bitcoin / Mining software (miners) / Re: Mining protocol bandwidth comparison: GBT, Stratum, and getwork on: December 16, 2012, 06:13:13 PM
The whole post above is well argued. But I wanted to highlight just this fragment.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.
This is a standard dialectic argument used when a software vendor tries to disparage a software-as-a-service (SaaS) vendor.

In this case we have Luke-Jr & gmaxwell as the conventional software vendors. They hawk their complex software and the updates to it. Although the software is open-sourced, it is so complex and obfuscated that only a very few will be able to sensibly maintain it. The service component (PoW pooling) is pushed into the background.

slush, ckolivas and kano together form a SaaS coalition. They hawk first of all their service, and the software part is the simplest possible required to relay the work that needs to be performed.

The "who's better" decision shouldn't be made on the bandwidth cost alone. The most rational discriminator is each miner's administrative overhead costs and skills. Simply speaking you the question you want to ask yourself is: Do I want to delve into programming, compiling and reinstalling every time I need to change rules in my mining operation?
1355  Bitcoin / Hardware / Re: [Announcement] Avalon ASIC Development Status [Batch #1] on: December 16, 2012, 01:59:46 AM
they would need 88 chips. That sounds very unlikely to me...
For Chinese designers 88 would be a doubly prosperous number or joy number. Sounds likely to me...

http://en.wikipedia.org/wiki/Numbers_in_Chinese_culture#Eight
1356  Bitcoin / Project Development / Re: Probably the hottest business idea of the moment in BTC on: December 13, 2012, 06:57:59 PM
There is a miscommunication going on here.

I'm pretty sure that killerstorm had mathematical formal verification in mind.

http://en.wikipedia.org/wiki/Formal_verification

The ISO stuff mentioned by spyked is a bureaucratic formal verification of management processes, possibly in relation to information technology.

I think the best summary of the ISO certificates is the old Russian joke: a certificate that you aren't a camel.

http://en.wikipedia.org/wiki/Russian_jokes
1357  Bitcoin / Development & Technical Discussion / Re: ANN: Announcing code availability of the bitsofproof supernode on: December 13, 2012, 03:14:18 PM
The primary interface to the kernel will be defined as a Java interface (BCSAPI), available through the usual ways of remote invocation just by configuration. For debug insight I prefer implementing JMX.
BCSAPI == Bean Context Services API
or
BCSAPI == Business Connectivity Services API
?
1358  Bitcoin / Hardware / Re: Goliath on: December 13, 2012, 01:35:51 AM
The general rule of thumb is an FPGA is 10x slower, 10x higher area (die cost), and 10x higher electricity.
This is a fair point. I don't have any hard data pointing otherwise.

I did, however, discuss SHA-2 implementations with somebody up to speed. His best guess was that the power-reduction factor would be 2.5-5 times. We came to that conclusion after noticing that SHA-2 has an unusually high toggle rate (especially in D-type flip-flops) and uses a lot of carry-look-ahead chains (in multi-level adders). Also, a sea-of-hashers (not unrolled) implementation would use comparatively little of the long-distance routing resources.

The area reduction should be easily 10 times, especially after realizing that neither JTAG chains nor even the reset signals are required in the optimized hashers.

The timing reduction should fall somewhere in between the above two.

Last, but not least, all open-source FPGA bitstreams were speed-optimized, not power-optimized. I don't know which optimization strategy was used by the Avalon group, but I'm betting that they've chosen the default (timing closure) due to their tight time-to-market constraints.
1359  Bitcoin / Hardware / Re: Goliath on: December 12, 2012, 11:11:18 PM
Taking advantage of un-utilized FPGA time would have been great for 2012. However once Avalon sells batch #4 at end of 2013 and the network hash rate is 10x-50x where it is today, even free FPGAs will not break even. Just as free CPUs did not break even after GPUs became mainstream.

As per my note above, the type of semicon chip being used is by far the #1 consideration IMHO. Knowing this, and the source of the chips, is required to make a real analysis of long-term potential of the project.
I think you needlessly extrapolate the current FPGA technology used in the Bitcoin miners against the future process shrink of ASIC miners.

If Enterpoint has capital-outlay-free access to the most recent FPGAs (28nm), then I think they will be quite competitive with ASICs in the 110-90-65nm range, based purely on the electricity rates.

We think we can build so much hashing power that it would make the market unstable and we plan to control the hashing power release to avoid creating a problem.
What else could cause "instability" in the Bitcoin market aside from part-time operation? Pretend for a moment that Enterpoint had already delivered (and received payment for) high-frequency trading hardware that is operated 5*4 hours/week during the window when both the NYSE and LSE are open.

Long term, the particular type of chip doesn't really matter. What matters is continuous access to the new fabrication nodes and the ability to share development costs between Bitcoin finance and classical finance.

For a humorous take on a really-long-term prognosis, see the 2nd link in my signature.
1360  Bitcoin / Hardware / Re: Goliath on: December 12, 2012, 10:11:26 PM
Before we ask for money you will be able to see the initial small system results in the network hash rate. It will be big enough to be noticed.
This is an interesting point, if done outside of the market hours when high-frequency trading goes on. In that case Enterpoint may just use idle time on the FPGA trading machines. There would be no capital outlay for them whatsoever, only the electricity cost.