Bitcoin Forum / Show Posts
481  Bitcoin / Hardware / Re: Request for Discussion: proposal for standard modular rack miner on: August 22, 2015, 07:15:04 AM
We checked several other miners for heatsink size comparisons.
I guess I'm overly cautious. I have always worked on devices with very strict quality-assurance requirements: medical or industrial. Things like self-adhesive thermal pads were unacceptable; only mica sheets would do.

The lottery-ticket-printing devices that you are designing have a much shorter lifespan, so a more relaxed design approach may be used. On the other hand, things like retail GPU cards have very carefully designed heatsinks: the die (or at most 2 dies) sits in the center, and everything else has a heat spreader or some other kind of interposer.

2112 - have you talked to PlanetCrypto about chip dev at all? I figured something like what he's doing might interest you.
It is interesting, but I need to keep my nose very close to my own grindstone. I'm not in a position to get seriously involved in new projects. I'm fine with openly sharing knowledge and commenting in public on the forum.
482  Bitcoin / Hardware / Re: Request for Discussion: proposal for standard modular rack miner on: August 22, 2015, 12:03:20 AM
After re-reading the original post, I see I had missed the fact that the hashing chips may be in the supply-voltage-serial, a.k.a. string, configuration. Since each chip will sit at a different ground potential, there must already be some sort of galvanic isolation between the chips and the heatsinks.

So the heatsinks may be able to slide over their isolation layer enough to accommodate thermal expansion.
483  Bitcoin / Hardware / Re: BITMAIN launches 4th generation Bitcoin mining ASIC: BM1385 on: August 21, 2015, 11:15:43 PM
You have completely the wrong view of full custom, a rolled design would be a really dumb idea for a modern mining chip and very area inefficient, the customisation involves only two circuit elements, but I'm sure you know that. Not rocket science at all, no magic, and very little risk if you have some respect for semiconductor physics. DRC is there for very good reasons which again I'm sure you know, and only an idiot would even consider violating them.
The rolled-vs.-unrolled question isn't a fully resolved choice. The losses and noise in the very long lines that drag signals over 15 SHA-256 rounds are quite significant. I think the bitfury approach of routing hashed words in one direction and constant SHA coefficients in a perpendicular direction gives overall savings over trying to squeeze combinatorial optimizations out of a fully unrolled design. Most of the combinatorial optimization gain is achieved by SHA-256 round pairing, i.e. 32 round-pairs instead of the by-the-FIPS explicit 64 single rounds.
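To make the round pairing concrete, here is a behavioral software model (a sketch only, in Python for readability; the silicon payoff is that the combinational logic of the two fused rounds can be merged). The compression loop is written the way a rolled hasher is laid out: one round circuit, reused in a loop, stepped two rounds per pass.

Code:
import hashlib, struct

# Behavioral model of the SHA-256 compression function, arranged the way a
# rolled hasher is laid out: one round "circuit", reused in a loop, stepped
# two rounds per pass to mirror the round pairing described above
# (32 fused round-pairs instead of 64 explicit FIPS single-rounds).

K = [int(k, 16) for k in """
428a2f98 71374491 b5c0fbcf e9b5dba5 3956c25b 59f111f1 923f82a4 ab1c5ed5
d807aa98 12835b01 243185be 550c7dc3 72be5d74 80deb1fe 9bdc06a7 c19bf174
e49b69c1 efbe4786 0fc19dc6 240ca1cc 2de92c6f 4a7484aa 5cb0a9dc 76f988da
983e5152 a831c66d b00327c8 bf597fc7 c6e00bf3 d5a79147 06ca6351 14292967
27b70a85 2e1b2138 4d2c6dfc 53380d13 650a7354 766a0abb 81c2c92e 92722c85
a2bfe8a1 a81a664b c24b8b70 c76c51a3 d192e819 d6990624 f40e3585 106aa070
19a4c116 1e376c08 2748774c 34b0bcb5 391c0cb3 4ed8aa4a 5b9cca4f 682e6ff3
748f82ee 78a5636f 84c87814 8cc70208 90befffa a4506ceb bef9a3f7 c67178f2
""".split()]

H0 = [0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
      0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19]

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xffffffff

def sha_round(s, k, w):
    # One FIPS-180 round: the single reusable "circuit" of a rolled design.
    a, b, c, d, e, f, g, h = s
    t1 = (h + (rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25))
            + ((e & f) ^ (~e & g)) + k + w) & 0xffffffff
    t2 = ((rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22))
            + ((a & b) ^ (a & c) ^ (b & c))) & 0xffffffff
    return ((t1 + t2) & 0xffffffff, a, b, c, (d + t1) & 0xffffffff, e, f, g)

def compress(block, state):
    w = list(struct.unpack(">16I", block))
    for i in range(16, 64):  # message schedule expansion
        s0 = rotr(w[i-15], 7) ^ rotr(w[i-15], 18) ^ (w[i-15] >> 3)
        s1 = rotr(w[i-2], 17) ^ rotr(w[i-2], 19) ^ (w[i-2] >> 10)
        w.append((w[i-16] + s0 + w[i-7] + s1) & 0xffffffff)
    s = tuple(state)
    for i in range(0, 64, 2):  # 32 iterations of a fused round-pair
        s = sha_round(sha_round(s, K[i], w[i]), K[i+1], w[i+1])
    return [(x + y) & 0xffffffff for x, y in zip(s, state)]

# Sanity check against hashlib for a single-block message.
msg = b"abc"
block = msg + b"\x80" + b"\x00" * (55 - len(msg)) + struct.pack(">Q", 8 * len(msg))
digest = b"".join(struct.pack(">I", h) for h in compress(block, H0))
assert digest == hashlib.sha256(msg).digest()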

I did not do full analog modeling of both choices (rolled/unrolled) for SHA-256. But I've done something similar in the past that was bound by the speed of carry-look-ahead adders. I actually doubt that anyone on this forum (maybe with the exception of bitfury) did the required tradeoff analysis. My scientific wild-ass guess is that a Bitcoin miner may well be an example of a circuit where leaving things rolled is of great benefit. The very high toggle ratio (only 6 dB below the theoretical maximum of a ring oscillator) would probably benefit from using some sort of SCL (source-coupled logic) or CML (current-mode logic) instead of the garden-variety CMOS bang-bangs that every CAD monkey throws at the Bitcoin mining problem.

People do fully unrolled hashers because the logic synthesis tools use heuristic place & route algorithms that don't converge, or converge extremely slowly, on rolled designs.

As far as I understand, full DRC compliance on a "mature" 28nm process is very, very conservative. I don't have any exact numbers handy, but the gate error ratios assumed for a "digital" manufacturing process are far stricter than a Bitcoin miner needs, since it can easily tolerate a percentage point of errors. Violating some of the DRCs to shed the unnecessary margins is one of the simplest ways to save power, after the obvious things like dropping JTAG and other testability overheads.

Re-reading your first sentence, I don't really understand the part
Quote
the customisation involves only two circuit elements, but I'm sure you know that.
Could you restate what you had in mind?
484  Bitcoin / Hardware / Re: Request for Discussion: proposal for standard modular rack miner on: August 21, 2015, 09:51:10 PM
Isn't 10in roughly the same size slab as a GPU these days?
Have you ever looked under a GPU's heatsink? How many chips does it touch, and where are the chips in relation to the heatsink?
485  Bitcoin / Hardware / Re: Request for Discussion: proposal for standard modular rack miner on: August 21, 2015, 09:39:03 PM
A 10-inch-long contiguous aluminum heatsink?

The thermal expansion of such a slab of aluminum will literally rip the chips off of the PCB.

Either the heatsink or the PCB needs to be partitioned into sectors.
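For a feel of the magnitudes, a back-of-the-envelope check (a sketch using typical textbook material constants, not measurements of any specific product):

Code:
# Back-of-the-envelope differential expansion; the material constants are
# typical textbook values, not measurements of any specific product.
ALPHA_AL  = 23e-6   # 1/K, aluminum CTE
ALPHA_FR4 = 16e-6   # 1/K, FR-4 in-plane CTE (mid-range value)
LENGTH_MM = 254.0   # 10 inches
DELTA_T_K = 50.0    # cold start to full load

growth_al  = ALPHA_AL  * LENGTH_MM * DELTA_T_K
growth_fr4 = ALPHA_FR4 * LENGTH_MM * DELTA_T_K

print(f"heatsink grows {growth_al:.3f} mm, PCB grows {growth_fr4:.3f} mm")
print(f"shear across the outermost chips: {growth_al - growth_fr4:.3f} mm")
# Roughly 0.09 mm of mismatch, concentrated at the solder joints of the
# chips farthest from the slab's center.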
486  Bitcoin / Hardware / Re: BITMAIN launches 4th generation Bitcoin mining ASIC: BM1385 on: August 21, 2015, 07:59:21 PM
I am surprised that they say that full custom poses a higher 'risk' - that's only true for very complex chips like cpu's, not for the very simple (and I mean very simple) functions found in SHA256.
The added 'risk' comes from stepping outside the 'standard cell' design flow for the first time.

I doubt that the new design is truly full custom. Their previous designs were simple unrolled hashers. A true 'full custom' optimized design would be rolled, and switching from unrolled to rolled would involve a redesign of the I/O protocol.

My bet is that they purchased a custom standard macro library: lower power by way of lower area and lower noise margins. Sort of like bitfury did for his first chip: 55nm-drawn transistors in a 65nm-nominal process.

Such an 'extra-low-power' library may violate some of the foundry's default DRCs (design rule checks). Thus the foundry makes them explicitly waive the DRC conformance warranty with their mask order.


487  Bitcoin / Hardware / Re: BITMAIN launches 4th generation Bitcoin mining ASIC: BM1385 on: August 21, 2015, 07:10:18 PM
That's why you employ people that know what they are doing and use MPW runs. Not rocket science, is it?
It seems to be impossible for Bitcoin miner designers to hire knowledgeable people. Do you know of any Bitcoin mining ASIC project that doesn't look like a student project or a quickie hack job?

Thus far only ASICMINER acknowledged (very early on) being unable to hire or contract anyone with power/analog ASIC experience.
488  Other / Meta / Re: Do we have IP bans? on: August 11, 2015, 11:23:40 PM
For most people using residential ISPs all over the world it is trivial to get a new IP:

1) if the ISP uses DHCP: first use "RELEASE" then "RENEW" while not having any reservation;

2) if the ISP uses PPPo{A,E}: just disconnect the PPP session and reconnect.

What are the residential ISPs that make it difficult to change the IPs? I have yet to see any, aside from the ones that don't even assign globally routable IPs but force the use of CG-NAT.
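For the DHCP case (1) above, a minimal sketch of the release/renew dance, assuming a Linux machine with ISC dhclient, root privileges, and a hypothetical interface name:

Code:
import subprocess

IFACE = "eth0"  # hypothetical interface name; substitute your own

# Step 1: RELEASE the current lease back to the ISP's DHCP server.
subprocess.run(["dhclient", "-r", IFACE], check=True)

# Step 2: RENEW, i.e. run a fresh DISCOVER/REQUEST cycle. With no address
# reservation on the ISP side, the new lease often carries a different IP.
subprocess.run(["dhclient", IFACE], check=True)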

489  Bitcoin / Development & Technical Discussion / Re: Berkeley Database version 4.8 requirement unsustainable on: August 09, 2015, 02:41:35 AM
To be honest I don't get the point you are trying to convey. How does this relate to benchmarking or distributed transaction processing?
It's related to your previous claim of a need for distributed transactions, which does not exist IMHO, as blockchain & wallet can be completely separated in terms of transactions ('You want to be able to produce a distributed transaction that updates both Bitcoin wallet and some external database as a single transaction that either goes to completion in both databases or gets rolled back in both databases')
I think I understand our miscommunication. You are thinking of "transaction" as defined by Bitcoin's CTransaction class. I'm thinking of "transaction" as defined in the usual financial sense:

https://en.wikipedia.org/wiki/Transaction_processing
https://en.wikipedia.org/wiki/Transaction_processing_system

Writing financial database applications without transactional consistency is essentially a form of fraud, as committed by GLBSE and ASICMINER (and others that I don't recall at this moment). I remember those two as the first two large examples of sending outgoing Bitcoin transactions without recording them properly in their auxiliary databases. They performed the Bitcoin "sendmany" twice and then publicly appealed to the recipients to return the overpayment.

Append-only is indeed most researched, but as you also stated, there is nothing 100% predictable or standardized in terms of behavior, hence you cannot assume any simple append-only code will behave correctly or consistently across platforms (in case of power failures or crashes).
Properly implemented append-{only,mostly} data storages provide the same guarantees as the general-purpose ones. I don't see any reason for those caveats, beyond that you probably haven't seen the "properly implemented" ones.

Actually mempools (and non-confirmed or non-confirmable transactions) contain much business-critical data that is very valuable long-term:

It could, but should it? Do those not fall into the realm of custom needs?

What I'm arguing for here is to have bitcoin core as a foundation, a bitcoin protocol toolbox, a reference implementation that can be built upon and expanded, rather than the alpha and omega of bitcoin.

So the more open, standard and modular it is, the easier it is to build and expand upon.
Every custom format or storage reduces openness (practical openness, the one that matters), and does not necessarily improve performance or usability (like the current wallet format).
You are contradicting yourself. You want the code to be a toolbox for possible extensions, yet you consider CTxMemPool an appropriate abstraction. CTxMemPool should be an abstract base class that interfaces to one of many https://en.wikipedia.org/wiki/In-memory_database implementations.
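A minimal sketch of what that abstract interface could look like (hypothetical names; Python for brevity, where the real thing would be C++):

Code:
from abc import ABC, abstractmethod
from typing import Optional

class MemPoolBackend(ABC):
    # Hypothetical engine-neutral interface; concrete subclasses could wrap
    # any in-memory database instead of a hard-coded container.

    @abstractmethod
    def add(self, txid: str, raw_tx: bytes) -> None: ...

    @abstractmethod
    def remove(self, txid: str) -> None: ...

    @abstractmethod
    def lookup(self, txid: str) -> Optional[bytes]: ...

class DictMemPool(MemPoolBackend):
    # Reference backend: a plain process-local dictionary.
    def __init__(self):
        self._pool = {}

    def add(self, txid, raw_tx):
        self._pool[txid] = raw_tx

    def remove(self, txid):
        self._pool.pop(txid, None)

    def lookup(self, txid):
        return self._pool.get(txid)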

Basically I sense that you have exactly zero previous experience with financial software and the relevant accounting and auditing practices.

490  Bitcoin / Development & Technical Discussion / Re: Berkeley Database version 4.8 requirement unsustainable on: August 08, 2015, 01:19:15 AM
1) Block explorer is not a valid benchmark. You want to be able to produce a distributed transaction that updates both Bitcoin wallet and some external database as a single transaction that either goes to completion in both databases or gets rolled back in both databases.
That's incorrect, and is actually a problematic design if you force that dependency.

What you need are actually 3 databases, which can be completely independent from each other:
- a blockchain DB (same as explorer DB), that can be restricted/filtered to only tx that concern you for size considerations
- a key DB (which will hold your key pool, xpriv, etc.)
- a wallet helper DB to hold metadata (address labels, payments extra info, accounts...)

The first two DBs are only related by a filtering optimization, so no cross-DB transactions are ever necessary, and all the heavy lifting happens in the blockchain DB. When creating a new key, you always write it to the key DB first, and it can only end up in the blockchain DB after having been confirmed. New unconfirmed tx are persisted in the wallet helper (with metadata and for eventual re-broadcasting), then go straight to the mempool for broadcasting. This could also help clean up the annoyances with new unconfirmed tx in the current bitcoin core code.

It would also isolate wallet management from key and tx creation/signing; the blockchain & key DBs would be part of the core, but the wallet helper could/should be considered as not being "core", but something alternative wallets could replace or do differently.

The key DB is really critical in terms of security; the wallet helper much less so (if it leaks, you would leak some private info, but no funds). Having it separate also means it will be simpler to move it to dedicated, standardized hardware.
To be honest I don't get the point you are trying to convey. How does this relate to benchmarking or distributed transaction processing?

2) Append-only files don't mean having to read and parse the whole file every time. For a popular example, check out the PDF format, which is designed to be read from the end, including appending new page indexes at the end.
But then you need some validation and recovery as what you appended could be partial or corrupted if a crash or power issue happened while you were writing it. Basically you would be re-inventing the ACID properties of a DB. Plenty of pitfalls there, especially with a cross-platform target, plenty of testing required.

You want to avoid NIHS and Yet-Another-Attempt-At-A-Crash-Proof-File-Format.
I disagree on that. Append-only or append-mostly data storage is a new and fertile area of research. It is mostly motivated by the constraints of flash storage, where 1->0 bit transitions can be written (or rewritten) in small blocks (e.g. 4kB) but 0->1 transitions require erasing big blocks (e.g. 256kB).

Probably not much is published or open-sourced in this area. Flash storage vendors are very secretive about their new research in flash-optimized storage formats.

The 3 layers are: (a) blockchain storage, including confirmed transactions; (b) wallet keys & addresses storage; (c) mempool, a.k.a. unconfirmed transactions storage.
Yes, and currently (a) & (b) range from pretty bad to awful in terms of performance and accessibility of data.
(c) is a technical internal layer, I would not consider it as something alongside a & b, as it is transient and can be discarded.
Actually mempools (and non-confirmed or non-confirmable transactions) contain much business-critical data that is very valuable long-term:

1) validly signed double-spend attempts
2) the history of payment flows and fees at a granularity finer than decaminutes (i.e. finer than the ~10-minute block interval).
491  Economy / Speculation / Re: Gold collapsing. Bitcoin UP. on: August 08, 2015, 12:23:17 AM
Interesting piece on the 2013 fork.  https://freedom-to-tinker.com/blog/randomwalker/analyzing-the-2013-bitcoin-fork-centralized-decision-making-saved-the-day/

Some ungrateful people here need to show a little more respect for the work of the devs, especially LukeJr.
Thanks for posting that link. It was a great example of how to manufacture panic and inflate trivial resource-allocation problems into widespread abuse opportunities.

It is still interesting that, despite the years passing, nearly nobody in Bitcoin development circles has even passing knowledge of the operational issues of database management systems.

The problem could have been trivially fixed by creating a two-line text file and restarting the Bitcoin daemon. The patch could have been issued later, after more careful analysis.

https://bitcointalk.org/index.php?topic=152208.0
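For reference, the two-line file in question was BerkeleyDB's DB_CONFIG, dropped into the node's database environment directory to raise the lock-table limit. The lock count below is the figure that circulated in the linked thread at the time; treat it as illustrative:

Code:
# DB_CONFIG placed in the node's database environment directory
set_lk_max_locks 537000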

The crowd around Mircea Popescu follows the same path years later:

http://qntra.net/2015/08/new-per-block-transaction-highs-wedge-some-nodes-patch-available/

492  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] Litecoin - a lite version of Bitcoin. Launched! on: August 07, 2015, 07:15:53 PM
None of what you said is unexpected behavior. During the fork, a majority of 0.8 nodes were relaying version 2 blocks being mined, and no version 3 blocks were being mined. This caused the v0.10 nodes to appear 'hung'. If you checked the node's log file, you would see many ERROR: ContextualCheckBlockHeader : rejected nVersion=2 block errors, signifying that your node is receiving blocks but is not going to process them due to BIP66 enforcement. During the fork incident, there were some people who upgraded their 0.8 nodes to 0.10 nodes but had the 0.8 chain data and were attempting to relay it to 0.10 nodes, which they obviously rejected. This is only solved by the people who upgraded from 0.8.x running -reindex to rebuild a valid chain in accordance with the BIP66 ruleset which 0.10 supports, or by simply waiting until the valid v3 chain takes over, forcing a reorg, after which they start accepting blocks again.

Whilst we were in the process of taking over the invalid chain, we set up a dedicated node which people could connect to via addnode or -connect in order to bypass any issues until the v3 chain became the main one (as explained in my announcement). Also, since the pool is GPU mining, as it was guaranteed to take over the v2 chain at the time, your CPU-mining nodes would have experienced issues finding blocks because of the adjusted difficulty, which might have appeared to you as 'stalling' if the pool were to experience downtime. As explained previously, I attempt to operate the pool with high uptime, but there may be periods of X minutes to possibly even an hour or two until it comes back online, which may also appear to you as 'stalling'. FYI, the pool will continue to operate until the v2 miners upgrade.

If you start up a node and do a fresh sync now, you'll encounter no issues since ALL nodes (0.8 and 0.10) are on the same chain. Note, if the pool does go down and the v2 chain does take over, a 'hung' node is possible until blocks start getting mined again, which is totally expected behavior and nothing to be concerned about (apart from the fact that we need more 0.10 testnet miners). And to address your other concern, we use the same block propagation and relaying code as Bitcoin Core, so this behavior isn't Litecoin-specific, nor network-specific.

Also, let me reiterate: if this was an actual issue, Bitcoin's and Litecoin's main networks would be experiencing issues (which they are not). Relaying of blocks and transactions to other nodes works fine.
Sadly, not much communication had occurred in our discussion.

Forget about v2 blocks. The forks occur with exclusively v3 blocks with all nodes running 0.10.2.2.

The RPC command "getchaintips" doesn't lie (this is one amongst many more trivial forks):
Code:
    {
        "height" : 642020,
        "hash" : "f979424831796342c3cd4cce98d33b1c6d2bc7e908270114fbd01d9110120f20",
        "branchlen" : 542,
        "status" : "valid-fork"
    },
A valid fork of 542 blocks (all of them v3) is not a symptom of normal operation. It is a symptom of a grave failure to converge to consensus. Regretfully, I'm in the process of moving, therefore I cannot post the similar but non-identical forks that occurred on the other nodes I had running at that time.

Probably, by running a single-address & single-node pool, you are precluding the bug from recurring. This is a workaround, not a fix. I see your post as an explanation of the theory of operation; you don't seem to be even interested in reproducing the bug. And the bug will probably recur as soon as you stop pool mining on the testnet.

493  Bitcoin / Development & Technical Discussion / Re: Berkeley Database version 4.8 requirement unsustainable on: August 04, 2015, 04:15:33 PM
Not really, since the wallet holds critical information, you want it safe from the usual range of errors, and you want ACID properties on it, which means a database.

A standard database means the wallet information is directly accessible to the users through standard tools, without having to use custom tools.

Also having a proper database means you can do away with a lot of the bitcoin-side wallet bookkeeping in C++, and just query the DB.

If properly indexed, you will have no performance issues with SQLite (unlike the current wallet code). I run whole blockchain explorers on SQLite, and can compute the balance for any address or wallet faster than Bitcoin Core can compute the balance for a moderately busy wallet, and that's with a brute-force select sum() and wallets with hundreds of thousands of addresses (like major exchanges and darknet marketplaces).

Quote
But does "real databases" include SQLite here? Does SQLite give any assurances in the case of sudden reboots?

Yes, it was built for that (and other issues)

Quote
At least with an append-only file (and regular syncing) you can be fairly sure that only the record of your last activity before the reboot will be lost, and not that some management structure in the middle broke and made it impossible to parse the entire file.

SQLite was in part designed to compete with fopen(), to provide robustness vs crashes which a simple file does not provide, and a lot of people use it just because of that robustness.

Also, a simple append-only file means you would have to parse and load everything every time, which will be slow and inefficient.
My comments to the above:

1) Block explorer is not a valid benchmark. You want to be able to produce a distributed transaction that updates both Bitcoin wallet and some external database as a single transaction that either goes to completion in both databases or gets rolled back in both databases.

BerkeleyDB (with bitcoind -privdb=0) actually has an X/Open distributed transaction monitor interface (not out of the box in Bitcoin Core, but not too much work either). I have never seen SQLite participate in a distributed transaction, but I'm not saying that it is impossible, or even very hard, to modify it to participate in some form of transactional processing.
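That said, SQLite can at least provide the single-commit property across two database files via ATTACH. A minimal sketch with a hypothetical wallet/ledger schema (Python for brevity; note the atomicity caveat in the comments):

Code:
import sqlite3

# Hypothetical schema: wallet.db tracks spendable outputs, ledger.db is the
# external accounting database. ATTACH lets one connection wrap statements
# against both files in a single transaction. Caveat: SQLite only makes the
# multi-database commit atomic in rollback-journal mode, not in WAL mode.
con = sqlite3.connect("wallet.db")
con.execute("ATTACH DATABASE 'ledger.db' AS ledger")
con.execute("CREATE TABLE IF NOT EXISTS outputs (txid TEXT, spent INTEGER)")
con.execute("CREATE TABLE IF NOT EXISTS ledger.entries (txid TEXT, amount INTEGER)")

try:
    with con:  # one transaction: both writes commit, or both roll back
        con.execute("UPDATE outputs SET spent = 1 WHERE txid = ?", ("ab12",))
        con.execute("INSERT INTO ledger.entries VALUES (?, ?)", ("ab12", -5000))
except sqlite3.Error:
    pass  # on failure, neither database was modified
con.close()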

2) Append-only files don't mean having to read and parse the whole file every time. For a popular example, check out the PDF format, which is designed to be read from the end, including appending new page indexes at the end.
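A toy illustration of the idea (a hypothetical format, Python for brevity): every append writes the record, a fresh index of all records, and a fixed-size footer pointing at that index, so a reader recovers the full index from the tail without scanning the file:

Code:
import json, os, struct

FOOTER = struct.Struct("<Q4s")  # (offset of the latest index, magic tag)
MAGIC = b"IDX1"                 # hypothetical format tag

def load_index(path):
    # Recover the record index by reading the file from the END, PDF-style;
    # nothing before the footer needs to be parsed.
    if not os.path.exists(path) or os.path.getsize(path) < FOOTER.size:
        return []
    with open(path, "rb") as f:
        f.seek(-FOOTER.size, os.SEEK_END)
        idx_off, magic = FOOTER.unpack(f.read(FOOTER.size))
        assert magic == MAGIC  # torn footer: a real reader would fall back to a scan
        f.seek(idx_off)
        return json.loads(f.read()[:-FOOTER.size])

def append_record(path, payload):
    # Append the record, a fresh index of ALL records, and a new footer.
    # Bytes already written are never modified, only superseded.
    index = load_index(path)
    with open(path, "ab") as f:
        index.append((f.tell(), len(payload)))
        f.write(payload)
        idx_off = f.tell()
        f.write(json.dumps(index).encode())
        f.write(FOOTER.pack(idx_off, MAGIC))

append_record("journal.dat", b"first record")
append_record("journal.dat", b"second record")
print(load_index("journal.dat"))  # offsets recovered from the tail alone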

3) You guys are all going to lose if you keep up your popularity contest for the favorite embedded database engine. The 3 database layers in Bitcoin Core all need to be abstracted to be engine-independent. Then it would not matter if an example implementation uses a less-than-ideal engine for the particular application's needs. The 3 layers are: (a) blockchain storage, including confirmed transactions; (b) wallet keys & addresses storage; (c) mempool, a.k.a. unconfirmed transactions storage.

494  Bitcoin / Development & Technical Discussion / Re: Idea: "Superpeers" Bitcoin core/block broadcast should use prioritized peer list on: August 02, 2015, 04:50:00 PM
You've spent too much time thinking and not enough time reading.

Your idea is already implemented by Matt Corallo under the name "Relay Network".

https://bitcointalk.org/index.php?topic=766190.0

495  Alternate cryptocurrencies / Announcements (Altcoins) / Re: [ANN] Litecoin - a lite version of Bitcoin. Launched! on: July 31, 2015, 10:10:36 PM
I work on Litecoin Core and know about this testnet issue; I was the one who made a post advising users to stop mining on 0.8.7.x and upgrade to 0.10.2.2, whilst also setting up a private pool to orphan the version 2 chain. After our notification, and to encourage others to begin mining, I lowered my testnet pool's mining intensity. In doing so, version 2 blocks got the lead again, which resulted in a small fork. If you actually looked at testnet recently, you would see that this issue has now been corrected, and my pool settings will permanently stay like this until I see no more 0.8.x miners (subject to availability, obviously). The block height is currently at 646208, and I'm currently connected to over 25 nodes (similar to the dedicated testnet node we have set up), all reflecting this. This can also be verified by a block explorer site like http://blockchains.io/ltct/blocks/

Also, please don't make ridiculous claims that we don't know what we are doing. We know very well what is in Litecoin Core, including the cause of this, and we know how to resolve issues like this (as shown above), which is why we contacted all pools to make sure that mainnet saw a smooth transition for BIP66 activation (which has had 0 problems). Your concern stating that 'if somebody managed to split the testnet that far there maybe a bug lurking in the code that could be used to split the mainnet' is clearly unfounded and shows inadequate knowledge of how this issue occurs. If people could do it on mainnet, they would. BTW, this issue also occurred on Bitcoin's testnet https://blog.blocktrail.com/2015/06/bitcoin-testnet-is-forking-19-blocks-deep-and-counting/; admittedly it isn't a good thing to have happen on testnet, but both Bitcoin and Litecoin devs were extremely focused on mainnet BIP66 activation, as testnet is low priority and can always be reset if things go haywire (which has been done in the past).
I'm sorry, thrasher, but I still think you are missing the root cause of the problem exhibited on the Litecoin testnet. I'll write it in a single sentence in a separate paragraph to avoid it getting lost in a wall of text.

The problem with the new 0.10.* nodes seems to be that they maintain their mutual connections but under certain circumstances cease to listen for or distribute the newly mined blocks.

Running a single pool that overwhelms the combined competition from all the old 0.8.* nodes is a neat workaround and a temporary safety measure. But the old nodes still have the advantage that they correctly pass the mined blocks amongst themselves and properly aggregate the hashing power of the individual nodes into their (sub-)net-wide hashing power.

This aggregation ceased occurring on the 0.10.* sub-net, so the net-wide hashing power is no longer the sum of the individual hashing powers of the nodes that were actively mining. Before you started or restarted your pool, I was CPU-mining on several of my test nodes, and I noticed the stalls almost immediately after upgrading from 0.8.* to 0.10.*.

I'm currently in the process of moving, so I'm back to running a single node and thus cannot easily reproduce this problem or search the logs for orphaned transactions.

But I believe the bug is still there, and as soon as you stop your pool it will recur.

The remaining questions are:

1) is this bug new to the Litecoin codebase, or was it imported from the Bitcoin codebase;

2) is this bug exploitable on the mainnet, or is it particular to the testnet behavior of temporarily switching to minimum difficulty under certain conditions?
496  Bitcoin / Armory / Re: Why should I trust the Ubuntu compilers to produce an Armory binary? on: July 31, 2015, 06:25:06 PM
Cool thread, interesting read.

The answer to why should you trust depends on how paranoid you are.

This whole Gitian business is a rehash of the old "secure Ada compilers" flop. First they paid to develop a reproducible-binaries build system. Then they paid to develop reproducible testbench environments. Then they paid to evaluate the possibility of building reproducible random number generators and tightly controlled environmental chambers to exactly recreate fair testing conditions. The spiral of paranoia stopped thereabouts.

But the Ada software quality did not improve; all that happened was that more butts were covered and the blame for defects was spread more evenly amongst the subcontractors. Well, and kajillions of dollars were spent and earned.

When I joined this forum the "Satoshi client" was at 0.3.24 and was essentially untestable. Right now "Bitcoin Core" is at 0.11.0 and is still essentially untestable.

I don't recall when the GCC compiler suite gained the "-frandom-seed=number" flag, but at least the GCC maintainers showed an understanding of the problems of testability.

Edit:

I just looked through the most recent Bitcoin Core code and I see that sometime before 0.11.0 a refactoring was made that at least makes partial testability possible: seed_insecure_rand(bool);

So the Bitcoin Core developers are at least on a good track towards making general testability possible.

I'm sorry that my message doesn't directly pertain to Armory. The gist of my message is that "reproducible builds" at most guarantee that everyone has the same bugs and the same exploits in their binaries.
497  Bitcoin / Hardware / Re: GekkoScience BM1384 Project Development Discussion on: July 31, 2015, 05:17:54 AM
Wow, that's a lot to digest. It'll take me a few days to chew on all the references. But REALLY appreciate your taking the time to weed through stuff.
Obviously there are some road blocks. The real question looming in my mind is 1) Can this get done in time and 2) in the process of doing it will the entity be forced or coerced into falling into the big 4's business model by economic realities.

If it were easy to do, everybody would be doing it.
One thing that seriously hampers the big Bitcoin mining ASIC vendors is that all of them (thus far) are "pure plays," meaning they do Bitcoin mining ASICs and nothing else. The better strategy seems to be a "conglomerate" approach, where an organization that has another ASIC development program dedicates a fraction of the available silicon real estate on its chips to research/experimental versions of its hashing cores. This can give them a couple of serious test runs essentially for free, provided that the design is intelligent.

The forum user helveticoin (https://bitcointalk.org/index.php?action=profile;u=82676) did try doing the above "compound" chip over 2 years ago, but it had a fatal flaw: it was a full ARM SoC with Bitcoin hashers as additional peripherals. This is a very wrong way to do it, as the optimal operating points for the hashers lie outside the reliable operating zone of the SoC; therefore their design seriously underperformed.

The intelligent way of doing a "compound" ASIC would be one where the primary ASIC function of the chip shares only the ground net with the mining function. The chip then remains commercially usable in its primary function with the mining I/O pads declared as no-connects; at the same time it can serve as a mining ASIC testbed by doing the opposite: declaring the primary I/O pads as no-connects.

A Bitcoin mining ASIC is not that difficult to do. The reason hardly anyone is doing it is the personality issues and general instability of Bitcoin mining entrepreneurs. It is a subject for a separate discussion, but many people in the Bitcoin milieu are seriously hampered by a peculiar outlook that is MLM-like salesmanship mixed with religious fervor, kind of a mixture of AmWay & Scientology. This is very repelling to many skilled professionals. Thus far only Spondoolies seem to be capable of not pigeonholing themselves into that niche.

498  Bitcoin / Development & Technical Discussion / Re: Level DB vs VSAM KSDS on: July 30, 2015, 12:46:55 AM
Nobody in the core development team has any significant experience with databases.

By not providing a database abstraction layer for the various storage pools (including mempool) they can have a better grip on the whole project.

Check out the historical perspective from 3 years ago:

blah blah blah blah
Gentle reminder to the other bitcoin developers: it is generally best not to feed trolls.  Use the ignore button.

Edit: Actually, I just realized that you asked about VSAM, which would mean that you compiled it on a (big-endian) IBM mainframe. Does it work at all?
499  Bitcoin / Hardware / Re: GekkoScience BM1384 Project Development Discussion on: July 29, 2015, 06:08:01 PM
In the Synopsys world of IP things these SHA-256 cells are a proven item down to 14nm. I assume that they have taken that into account (... noise margins for each gate/transistor).
Oh, for sure not. "Proven" means "works correctly", not "works efficiently". The Intellectual Property blocks are sold as encrypted https://en.wikipedia.org/wiki/Register-transfer_level sub-circuits that rely completely on the underlying implementation technology to give correct results.

This is the essence of the digital design workflow: the assumption that errors/faults are vanishingly rare, with rates like 10^-10 or better. This is vastly over-reliable for Bitcoin mining, which can easily tolerate some percentage points of erroneous computations, like 10^-1.

Besides, a general-purpose hashing circuit will be easily outrun, even by the open-source dedicated mining circuit that takes all the cheap and trivial optimizations: the nonce only changes the second block of the inner SHA-256 (so the first block's midstate can be computed once), the bit lengths of both stages are fixed, and the last 3 rounds of the outer stage are unnecessary because we only look for zeros in the most significant word.

https://github.com/progranism/Open-Source-FPGA-Bitcoin-Miner
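For orientation, here is the full, unoptimized computation in software (a sketch; hashlib obviously cannot skip rounds, so the comments only mark where a dedicated circuit saves work):

Code:
import hashlib, struct

def scan_nonces(header76, max_nonce=2**32):
    # Reference double-SHA256 scan over the 4-byte nonce field; header76 is
    # the first 76 bytes of an 80-byte block header. A dedicated circuit
    # computes the midstate of bytes 0..63 once per work unit, re-hashes
    # only the last 16 bytes per nonce, and drops the final 3 rounds of the
    # outer hash because only the top word of the result is ever inspected.
    for nonce in range(max_nonce):
        header = header76 + struct.pack("<I", nonce)
        digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        # Bitcoin displays hashes byte-reversed, so "zeros in the most
        # significant word" are trailing zero bytes of the raw digest.
        if digest[-4:] == b"\x00\x00\x00\x00":
            return nonce, digest[::-1].hex()
    return None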

This is a snippet of info I was totally unaware of. Thanks. Typically, do these clock generators "slide" the freq up and down or are they selecting a predetermined freq from a pool of freqs and then hopping amongst them?
No frequency hopping. This is continuous sliding up and down within a preselected range around the center frequency, or below an upper-limit frequency. Check out the manual for the example chip that I've had in my browser bookmarks for a couple of years now:

http://www.ti.com/product/cdce913

That sounds dreamily expensive, if for no other reason than it has the word "Intel" in it. lol
Well, it is worth dreaming sometimes, just so one won't become another bitter Bitcoin miner with perpetually pursed lips:

http://www.hotchips.org/wp-content/uploads/hc_archives/hc23/HC23.17.1-tutorial1/HC23.17.121,Inductence-Gardner-Intel-DG%20081711-correct.pdf

Edit:

Simply stated, if the community (of which I'd like to think we're a part) designs the chip, then the community owns the design (all the stuff one forwards to a foundry to get a run made). Put it up on GitHub. GPL-licensed open source hardware? In theory, anyone could take that design and have a foundry make them a batch, much like an individual downloads source, compiles it, and runs it. Granted, H/W is a little different, as most can't afford a "foundry compiler", but a group might be able to pool resources and have a run made. And this community has pulled together in the past to make things happen.
You have a nice ideology, but it is nearly completely impractical. Any foundry will require you to sign a mutual non-disclosure agreement that prohibits publication of their design kit. So you'll have two options:

1) open-source the RTL design, which is really kinda trivial, on par with students' homework at better schools.

2) open-source the uncommitted pre-layout BSIM4 analog model, which is more or less useless for the actual design of the masks.

The real value in a mining chip design is on the back end, in the optimization of the layout. And for that, open source is currently helpless. Without violating NDAs, one could at most produce a scientific/research paper that could get published in peer-reviewed journals.

Edit2: I'm so out of date. BSIM is now at BSIM6 and is about to split into more specialized branches:

http://www-device.eecs.berkeley.edu/bsim/

Edit3: An older, now-closed thread about this: OpenBitASIC : The Open Source Bitcoin ASIC Initiative

https://bitcointalk.org/index.php?topic=76351.0
500  Bitcoin / Hardware / Re: GekkoScience BM1384 Project Development Discussion on: July 25, 2015, 10:10:25 PM
I am not in favor of one-wire communication. No real reason to make both endpoints more complex in order to save a couple traces/leads.
I'm not in favor either. But I understand the constraints of lead time. If you had a choice of your ICs delivered in 5-lead packages in 1 month or in 7-lead packages in 6 months, which one would you choose?

What are your thoughts on the previous discussions regarding chained UART versus address-decoded SPI?
I see this question as incorrectly posed. There are actually two independent choices in it:

1) UART vs SPI. On this I have no real preference, but way more experience with USARTs (which include not only asynchronous but also synchronous devices/protocols). Even the very lame UARTs have parity error detection, whereas the very lame SPIs have nothing but "Hail Mary" protection; see the parity sketch after this list.
2) Star topology vs daisy-chain topology. On this I prefer star, because the ICs need to be running at the edge of failure (thermal or noise), otherwise the project is not competitive.
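To make the "parity vs nothing" point concrete, a minimal sketch of the even-parity check a UART applies per frame; any single flipped bit is detected, whereas SPI ships raw bits:

Code:
def even_parity_bit(data, bits=8):
    # Parity bit for one UART frame under even parity: chosen so that the
    # data bits plus the parity bit contain an even number of ones.
    ones = bin(data & ((1 << bits) - 1)).count("1")
    return ones & 1

def check_frame(data, parity, bits=8):
    # Receiver side: any single flipped bit in the frame fails this check.
    return even_parity_bit(data, bits) == parity

# Example: 0x5A = 0b01011010 has four ones, so the parity bit is 0.
assert even_parity_bit(0x5A) == 0
assert check_frame(0x5A, 0) and not check_frame(0x5A ^ 0x10, 0)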