Regarding the upthread post wherein I alleged that Ethereum can't scale (and tangentially that smart contracts on a Turing-complete blockchain are another DAO attack waiting to happen), I was asked to provide more details.
(Additionally, I have recently explained that the 10s (soon 100s?) of ICOs now being launched every year, thanks to this Ethereum smart contract nonsense which enables rapid prototyping of 1000s of hare-brained ideas that will probably never achieve any serious adoption (because, again, Ethereum can't scale, and another smart contract attack would mean all smart contracts become untrustworthy), likely mean we have a coming ICO graveyard where our ecosystem has died. Thanks to greedy developers who want to cash out after making a fancy presentation, and thus don't develop first and launch without an ICO, and really don't have much incentive to develop anything at all after they raise $millions to party with, especially young guys who have a lot of energy for conferences, dinners, parties, fancy hotels, etc. What happened to my generation of startups, where we developed them from our bedroom while living at our parents' house?
The marginal cost of preparing the snazzy website, white paper, and perhaps even a barely functional smart contract demo is minimal, perhaps less than $5000. Yet these ICOs can easily raise 100s or 1000s of BTC. So the supply of ICOs rises to meet the demand from gullible speculators who all "want to buy earlier than each other," so they all end up buying ICOs; thus none of them buys earlier than the others, the ICO developers walk away with the Bitcoins, and the speculators will be left penniless in the coming ICO oversupply graveyard, because all the demand will have been transferred to ICO developers who don't buy in the aftermarket.)
NEW PROBLEM! There is also an inconsistency in the HumanIQ design as expressed on pages 6 and 19-22 of the white paper and in the wallet features section of the website.
Firstly, you claim that users will only need to use their face and voice and will not need to deal with passwords. But then you tie the user to one device and to signing keys stored on that mobile device. So if the mobile device is lost, stolen, or accessed even momentarily by a hacker (even if the hacker just extracts the keys and device ID, perhaps remotely, and doesn't carry out the attack from the user's own device), then the attacker only needs to fool the face and video identification system in order to steal the user's funds. They could, for example, exchange the funds for Bitcoin and be gone.
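To make the failure mode concrete, here is a minimal sketch of the login flow as I understand the white paper to describe it (all names here are hypothetical, my own illustration, not HumanIQ's actual code). Note that the device-resident key is the only secret, and the biometric check is the only gate in front of it:

```python
# A minimal sketch, assuming the flow described in the white paper.
# All names (Device, biometric_match, sign) are my assumptions,
# not HumanIQ's actual API.

from dataclasses import dataclass

@dataclass
class Device:
    device_id: str
    stored_signing_key: bytes   # the only secret; lives on the phone
    enrolled_template: bytes    # face/voice enrollment template

def biometric_match(template: bytes, sample: bytes) -> bool:
    # Placeholder for face/voice matching; in reality this has a
    # nonzero false positive rate, and synthetic samples can push
    # it to True.
    return template == sample

def sign(key: bytes, tx: str) -> str:
    # Placeholder; in reality an ECDSA signature over the transaction.
    return f"signed({tx})-with-{key.hex()[:8]}"

def authorize_transfer(device: Device, sample: bytes, tx: str) -> str:
    # The biometric check is the ONLY gate in front of the key.
    if not biometric_match(device.enrolled_template, sample):
        raise PermissionError("biometric check failed")
    return sign(device.stored_signing_key, tx)

# Attack: copy stored_signing_key + device_id off a lost or
# compromised phone, then defeat biometric_match once with a
# synthetic face/voice sample.
```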
Face and video identification systems are theoretically reasonably easy to fool with synthetic methods, and I expect that to be true even against
automated challenge-response:
https://engineering.cmu.edu/media/feature/2016/12_01_facial_recognition_eyeglasses.html
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.461.7360&rep=rep1&type=pdf#page=6 (see Figure 9)
Secondly, if the user loses his device, he has lost his keys. So the only way those keys could be recovered (again, meeting your requirement that users never have to mess with keys or store them in a paper wallet) is if some centralized service is storing the keys and can perform a recovery process that identifies the user. But again, the attacker can game this recovery process, perhaps unless it is administered by a real human interrogator. And the centralized service becomes a huge attack surface for a hacker to steal all the users' keys (Bitcoin exchange failures show this is a serious risk). And of course, as you admit on page 20 of your white paper, it means the biometric identification is not decentralized on the blockchain but is a centralized service. Centralized blockchain concepts will never scale out. The world won't trust a system that depends on one centralized service. Sorry.
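A minimal sketch of the centralized recovery service such a design implies (hypothetical names and structure, my assumption of how it would have to work), showing why the escrow database concentrates risk:

```python
# Hypothetical sketch of the implied centralized recovery service;
# names and structure are mine, not HumanIQ's.

def biometric_match(user_id: str, sample: bytes) -> bool:
    # Placeholder; the same spoofable face/voice check as at login,
    # with the same nonzero false positive rate.
    return bool(sample)

class RecoveryService:
    def __init__(self) -> None:
        # One database holding every user's signing key: a single
        # breach here compromises ALL users at once.
        self._escrowed_keys: dict[str, bytes] = {}

    def enroll(self, user_id: str, signing_key: bytes) -> None:
        self._escrowed_keys[user_id] = signing_key

    def recover(self, user_id: str, sample: bytes) -> bytes:
        # An attacker can target either this gate (synthetic
        # face/voice) or the database behind it.
        if not biometric_match(user_id, sample):
            raise PermissionError("identity check failed")
        return self._escrowed_keys[user_id]
```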
HumanIQ is quite obviously lacking the skills of a qualified, experienced CTO on its team. Afaics, the HumanIQ design is irreparably flawed and has apparently not been properly vetted by an appropriately skilled technical person.
Note that it is possible to utilize face and voice biometrics stored on a blockchain to prove that the users are all unique within the false positive tolerance of multimodal biometric identification, e.g. perhaps to within a few percent false positive rate against synthetic attacks (at the current state-of-the-art), and even better if supplemented with real human interrogation (which would be retroactively, objectively verifiable from the blockchain data in the future as the state-of-the-art advances). But for automated logins and identification of users' ongoing usage of the system, the false positive rate is currently too high, because it would be the theft rate. I suppose you were looking at service providers such as the following and just assumed the EER rates were acceptable without reading the research:
https://www.onefacein.com/faq.html
http://www.voicetrust.com/solutions/voice-login/
https://playground.bioid.com/BioIDWebService/LiveDetection
https://www.keylemon.com/oasis
https://www.quora.com/What-are-the-best-face-detection-APIs/answer/Ben-Virdee-Chapman

So I also do not understand why the white paper implies we can't store a (hash of a) video on a blockchain. Surely we can.
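Committing to a video on-chain only requires storing a small digest, with the full video kept off-chain. A minimal sketch (the file name and helper are purely illustrative):

```python
import hashlib

def video_commitment(path: str) -> str:
    """Return a SHA-256 digest of a video file, small enough to embed
    in an on-chain payload (e.g. an OP_RETURN-style commitment). The
    full video stays off-chain; the chain only proves it existed
    unmodified at commit time."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

# Example (hypothetical file name):
# print(video_commitment("enrollment_video.mp4"))
```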
Please do point out any misunderstanding or mistake I have made in my analysis.
Note that although, via deep learning with deep neural nets, state-of-the-art research on face recognition accuracy has reached as high as 99% (and do look at how close to indistinguishable the errors were, even to a human observer such as yourself), and state-of-the-art research on voice identification has reached as low as ~1% EER (i.e. roughly a 1% false positive rate paired with a 1% false negative rate; compare the 2.33% EER on page 5), these performance figures do not include data sets where the attacker is trying to game the system. So refer to the links I provided above on synthetic attacks to get a better idea of the risk of theft with multimodal biometric face and voice identification. The state-of-the-art is advancing rapidly, given that a 2014 paper cited only a 7% EER, but the datasets and training criteria are probably not comparable.
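For clarity on what EER means: it is the operating point where the false accept rate equals the false reject rate. A minimal sketch of reading it off from genuine and impostor score distributions (toy data, my own illustration):

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """Sweep the accept threshold and return the rate where the
    false accept rate (impostors scoring above threshold) crosses
    the false reject rate (genuine users scoring below it)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy scores: genuine matches score high, impostors low, with overlap.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 2_000)
impostor = rng.normal(0.5, 0.1, 2_000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.2%}")
```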
1% or even 0.1% rates of theft would still be unacceptable.
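And a per-attempt false accept rate compounds when the attacker can retry. Back-of-the-envelope arithmetic (my own, assuming independent attempts):

```python
def success_probability(far: float, attempts: int) -> float:
    # Probability that at least one of `attempts` independent spoof
    # attempts is falsely accepted at false accept rate `far`.
    return 1 - (1 - far) ** attempts

for far in (0.01, 0.001):          # 1% and 0.1% false accept rates
    for n in (10, 100, 1000):
        print(f"FAR={far:.1%}, {n:4d} attempts -> "
              f"{success_probability(far, n):.1%} chance of theft")
# At FAR=1%, 100 attempts already give ~63% success; even at 0.1%,
# 1000 attempts give ~63%. Rate limiting only slows this down.
```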
Very nice analysis. Adoption will be a tough challenge for this if it makes it through. I'd like to see the developers' response/comment on this analysis.