As insiders have known for many years now, a chip's 'node size' has become a pointless and misleading measure of the actual dimensions inside a chip, and is not very useful for predicting what the chip's performance can be.
Looks like a new definition is in the works to better reflect actual progress being made. ref:
https://read.nxtbook.com/ieee/spectrum/spectrum_na_august_2020/the_node_is_nonsense.html

My take on it is that companies like MicroBT and Canaan made the right choice by staying at 16nm for so long, relying on the fact that the market forces driving the quest for lower 'node sizes' in memory, CPUs, GPUs, etc. as well as mining chips are at the same time benefiting the older nodes. How?
Easy - take for example the '16nm node' that MicroBT and Canaan used for their respective M10/M20 and A10xx series miners. My guess is that TSMC applied lower-node metal interconnect tech to the now much cheaper 16nm node. Smaller/thinner metallization interconnect layers mean lower switching losses, improving efficiency while still using the larger (and cheaper, fully mature) 16nm gate size. It keeps the long-ago-paid-for 16nm fabs busy while providing decent-performance chips at far lower cost than cutting-edge chips like BM's 7nm and now their bleeding-edge 5nm node...
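To put a rough number on why lower interconnect capacitance helps, here is a minimal sketch of the standard CMOS dynamic-power relation P = a·C·V²·f. All of the per-net values below (activity factor, wire capacitance, voltage, clock) are made-up placeholders, not real TSMC process data; the only point is that cutting capacitance C cuts switching power proportionally, even at the same gate size, voltage, and frequency.

```python
# Sketch of CMOS dynamic switching power: P = a * C * V^2 * f.
# Every number here is a hypothetical placeholder for illustration only.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    """Dynamic (switching) power in watts for one toggling net."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

# Same hypothetical 16nm logic, two interconnect stacks with different wire cap:
older_stack = dynamic_power(0.1, 2.0e-15, 0.8, 1.0e9)   # assume 2.0 fF of wire cap
denser_stack = dynamic_power(0.1, 1.5e-15, 0.8, 1.0e9)  # assume 1.5 fF of wire cap

print(f"older stack : {older_stack * 1e6:.3f} uW per net")
print(f"denser stack: {denser_stack * 1e6:.3f} uW per net")
print(f"saving      : {(1 - denser_stack / older_stack) * 100:.0f}%")
```

With these assumed numbers a 25% drop in wire capacitance gives a 25% drop in per-net switching power, which across billions of nets on a mining chip is where an efficiency (J/TH) gain like the one guessed at above could come from.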
AFAIK the M30x from MicroBT use what, 12nm or 8nm chips? I know Canaan is still smarting from the 7nm A921 debacle: they and/or Samsung rather blew it on that chip, so even now their latest A1066 is still a brick of 16nm chips sourced from TSMC.
Edit: Earlier this year Canaan mentioned an A1166, but I'm not sure what 'node size' it is/was supposed to be. They've been mighty quiet about it ever since.
Edit edit: The A11's specs say it has 342 of their 16nm A3205 ASICs.