Bitcoin Forum
Poll
Question: Viᖚes (social currency unit)?
like - 27 (27.6%)
might work - 10 (10.2%)
dislike - 17 (17.3%)
prefer tech name, e.g. factom, ion, ethereum, iota, epsilon - 15 (15.3%)
prefer explicit currency name, e.g. net⚷eys, neㄘcash, ᨇcash, mycash, bitoken, netoken, cyberbit, bitcash - 2 (2%)
problematic - 2 (2%)
offending / repulsive - 4 (4.1%)
project objectives unrealistic or incorrect - 10 (10.2%)
biased against lead dev or project ethos - 11 (11.2%)
Total Voters: 98

Author Topic: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin?  (Read 95272 times)
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 06, 2016, 08:14:59 PM
Last edit: February 07, 2016, 03:49:32 PM by TPTB_need_war
 #841

<r0ach> you can't solve byzantine generals problem with a probabilistic model unless you've first solved sybil with a probabilistic model and Bitcoin doesn't do that
<r0ach> because there's no way of telling if all pools are owned by the same person, then it's not collusion or 51% attack, it's a sybil attack
<r0ach> since the essence of the byzantine generals problem is sybil attack, dealing with sybil comes first in the hierarchy before byzantine generals is discussed at all

I made this same point in either 2013 or 2014.

Afaics, the only solution is unprofitable PoW which is the design I am now pursuing.

...and another incentive structure must be developed to encourage decentralized p2p mining.

Switching to an ASIC-resistant PoW coin doesn't solve this problem but merely delays the inevitable. As interest and hash power grow, ASICs will be developed in time regardless.

I believe it is possible to design a memory hard PoW that is not electrically more efficient on an ASIC, but it will be very slow. I originally didn't think so, but have since realized there was a mistake in my 2013/4 research on memory hard hashes. It is possible that Cuckoo Cycle already achieves this, but it is more difficult to be certain, and it is very slow when DRAM economics are maximized (although it adds asymmetric validation, which is important for DDoS rejection if the transaction signatures are ECC and not Winternitz, and for verification when PoW share difficulty can't be high because each PoW trial is so slow).

Cryptonote's memory hard hash can't possibly be ASIC resistant: by my computation it could not both achieve 100 hashes/second on Intel CPUs and remain ASIC resistant.

See also Zcash's analysis thus far.

Correction follows.

It will be impossible to design a memory hard PoW that is not electrically more efficient on an ASIC, unless the hash function employed (for randomizing the read/writes over the memory space) is insignificant w.r.t. the RAM power consumption, which is probably not going to be the case in any design where that hash function has sufficient diffusion to be secure.

The only way to make an ASIC resistant PoW is for the proving computation to be memory latency bound, because DRAM latency can't be improved much in general (whereas hardwired arithmetic computation and memory bandwidth can be accelerated with custom hardware):

http://community.cadence.com/cadence_blogs_8/b/ii/archive/2011/11/17/arm-techcon-paper-why-dram-latency-is-getting-worse
http://www.chipestimate.com/techtalk.php?d=2011-11-22
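The latency-bound design goal can be illustrated with a toy pointer-chasing loop, where each read address depends on the value just read, so accesses serialize on memory latency (my own sketch, not the actual PoW; `pointer_chase` and its parameters are hypothetical):

```python
import random

def pointer_chase(memory_size_words: int, steps: int, seed: int) -> int:
    """Toy model of a latency-bound PoW prover: the next read address
    depends on the value just read, so reads cannot be pipelined."""
    rng = random.Random(seed)
    # Stand-in for a buffer filled by a memory-hard setup phase.
    mem = [rng.getrandbits(32) for _ in range(memory_size_words)]
    idx = seed % memory_size_words
    acc = 0
    for _ in range(steps):
        v = mem[idx]                       # serialized random read
        acc = (acc * 31 + v) & 0xFFFFFFFF  # fold the read into the result
        idx = v % memory_size_words        # next address derived from the read
    return acc
```

Because each iteration's address is data-dependent, hardwired arithmetic gains little; throughput is governed by memory latency, which is the property argued above that custom hardware cannot improve much.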

However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs) and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved, such that the latencies are combined and amortized over all instances and the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction). This can even be done in software as interleaved memory spaces without needing a custom memory controller. More exotic optimizations might have custom memory controllers and larger memory banks (note I am not an expert on this hardware issue). This is probably why Cryptonote also includes AES-NI instructions, because GPUs are at best at parity with CPUs in performance per watt on AES, but that won't be enough to stop ASICs.

However, that optimization for ASICs will bump into the memory bandwidth limit, so the amortization has a limit. Theoretically memory bandwidth can be increased with duplicated memory banks for reads, but not for writes!

Using larger memory spaces in a properly designed memory hard PoW hash function (not Scrypt) can decrease the probability that instances will hit the same memory bank within the sufficiently small window of time necessary to reduce the latency. Also, using wider hash functions (e.g. my Shazam at 2048 to 4096-bits) reduces the number of instances that can be interleaved in the same memory bank (and standard DRAM I think has a bank/page size of 4KB?). The ASIC can respond by designing custom DRAM with larger memory banks and running more instances, but that not only raises the investment required; the memory bandwidth limit for writes seems to be an insurmountable upper bound.
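The claim that larger memory spaces reduce coalescing opportunities can be quantified with a birthday-style estimate (my own illustration; it assumes uniformly random accesses, one outstanding access per interleaved instance):

```python
def same_page_prob(mem_space: int, page_size: int) -> float:
    """Probability that two independent uniform accesses fall in the same page."""
    return page_size / mem_space

def any_coalesce_prob(mem_space: int, page_size: int, instances: int) -> float:
    """Probability that among `instances` simultaneous accesses at least two
    fall in the same page (birthday bound), i.e. can be coalesced."""
    pages = mem_space // page_size
    if instances > pages:
        return 1.0  # pigeonhole: some page must be shared
    p_none = 1.0
    for i in range(instances):
        p_none *= (pages - i) / pages
    return 1.0 - p_none
```

Doubling the memory space halves the pairwise coalescing odds, which is the lever this paragraph describes; widening the hash output (fewer instances per bank) attacks the `instances` term instead.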

So although I think a memory hard PoW hash can be made which is more ASIC resistant than current ones, I think it will be impossible to sustain parity in hashes/Watt and hashes/$hardware. Perhaps the best will be within 1 to 2 orders-of-magnitude on those.

So all profitably mined PoW coins (with sufficient market caps) are destined to be centralized into ASIC mining farms running on cheap or free electricity, but the scale and rate at which this happens can be drastically reduced compared to SHA256 (Bitcoin, etc.).

My design of unprofitably mined PoW will only require that the difficulty of the PoW shares sent with transactions is sufficient to make ASIC mining unprofitable for the level of block reward offered. Keeping the CPU implementation of the PoW prover within 1 to 2 orders-of-magnitude of an ASIC implementation reduces the level of difficulty needed.

I hope I didn't make another error in this corrected statement. It is late and I am rushing.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 05:33:01 AM
 #842

Quote from: myself in pvt msg to my angel investor
Thank you for your understanding. Well I don't plan on failing! The main issue is the one I can't entirely control (yet)...

The main issue I am still struggling with is my health. Because I relapsed this week, I increased my intake of 80% concentrated curcumin extract (w/piperine) to 30 grams per day! (mixed with locally fresh cold-pressed coconut milk/virgin oil). That means 1 kilo per month! And that is concentrated extract, thus equivalent to consuming 100 kilos of turmeric monthly.

The result is I want to sleep always. But I think that is good. After I sleep, I awake with some energy and feel good for perhaps 4 hours. But then I take more curcumin and I feel sleepy again.

It seems I am so messed up in my pancreas/gall bladder/colon area that it will require either surgery (but no MRI or diagnosis yet) or massive doses of curcumin and massive sleep (I assume sleep is the body's way of repairing damage).

So all I can say is the curcumin extract treatment is very active in the area of the problem (gut/digestion) and seems to be efficacious in terms of calming the systemic inflammation and improving digestion and (frequency/stool quality of) defecation. And it is causing me to want to sleep so much that it limits the hours I can be awake to work. And I can't yet discern whether it is actually curing my gut issue. I have some signs that cause me to believe it may be, but I can't yet tell for sure. I think this fight for a cure is one where I will struggle while undergoing the cure (I do imagine there is really cancer in there and I am attempting to shrink the tumor with the curcumin). So much guessing because I haven't had an MRI. I contemplate going again to try to get a doctor here, but then again I'd rather spend my time trying to work. I will pursue the curcumin for some more weeks before deciding what to do next on my health issue.

About the work in front of me now, first step is I am working on the memory hard PoW function. After that, I will investigate the XXXXXXXXXXX issues. Then I will be able to make some sort of estimate as to whether I think we can do a quicker launch and ramp from a rudimentary set of features, or whether I really need a year of coding before launch.

I think our window of opportunity is more limited than you think, because everyone is searching for the solution to Bitcoin's scaling problem. And others are starting to get wind of the idea of combining social networking and crypto [e.g. GetGems]. I think we need to strike now asap. Also yes I need money to go abroad [for medical diagnosis].

st0at
Newbie
Activity: 28
Merit: 0
February 07, 2016, 06:24:40 AM
 #843

CoinHoarder, stoat, and I ain't fallin' for AnnoyingMint's BS:

You guys are clueless as to Zcash not being able to succeed with an 11% block subsidy. Ripple premined 100% of their coin and they are doing quite well; Dash did too. Bitcoin was effectively "instamined" by early adopters. There is no way to fairly distribute any cryptocurrency. Then again, this "fatal flaw" is being brought up by someone who insists they should include a backdoor for the government, so I will take anything you say about their business plan (and the fact that yours is so much better) with a grain of salt.

Yeah that idiot TPTB thinks every anonymous coin should have a viewkey, but you are correct that Monero should be shunned because it enables auditing!

Mega Kim.Com is our hero! He is a marketing genius (who studied under Charles Ponzi's mastery of manipulating human psychology) who cleverly amassed $100 million by charging us small commissions to provide Bittorrent links so we can steal all the content we love (from our beloved artists or from Hollywood) and be free of the censorship that we can't otherwise post to our blog for free access to readers.

Yeah we love Hollywood's content because we love a model of interaction wherein we are dumb zombies who do not create content but sit in front of the TV. Because long live TV and the model of non-interaction as content!

Yeah you are so smart and TPTB is so dumb. Thank you immensely for leading your generation to the truth. Amen.


And he will also not admit the following is why he is incorrect about stealing content.

Governments are organizing now around controlling the internet. The illegal activity through Bittorrent (which also steals from ISPs, which have higher upload bandwidth allowances) is helping the governments feel they are justified in regulating the internet via Net Neutrality and other measures. You, young fellow, feel free to pursue theft of music and other content, which deprives the millions of artists of income to pay their rent. You are not going to create the new Knowledge Economy with your theft model. And by advocating theft, you are helping the NWO totalitarianism to take form by providing an economic incentive and political support from the millions of artists who are violated by piracy. Dumb. But I expect that from you.

I didn't have time to go over this earlier, but you are using a straw man argument here. My point in bringing up Bittorrent is that decentralized technologies exist that the government cannot shut down. I was not condoning or promoting Bittorrent's copyright infringement, but rather admiring the technology behind it. You said that anything that broke laws or regulations would be shut down by the government, even if it is built on decentralized technology, and I was pointing out that is not necessarily the case.

Yeah let's move the goalposts and nobody will notice. That is just a little secret between you and me.  Lips sealed

Yeah the word 'decentralization' is always correct, even when the concept of decentralized file storage has the intractable issue of enabling copyright theft until an algorithmic identification of sameness is invented.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 06:49:02 AM
 #844

My (age 17 in May) high IQ daughter has confirmed that my plan is correct and if I can implement then I do have a shot of replacing Facebook for her generation.

She is very excited to promote my new site to her 10,000+ Facebook friends.

My daughter has Fb friends in every country of the world.

tromp
Legendary
Activity: 990
Merit: 1110
February 07, 2016, 04:03:46 PM
 #845

However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs)

Where do you get those numbers? What I can measure is that a GPU has a 5x higher throughput of random memory accesses. I don't know to what extent that is due to more memory banks in the GPU, but that makes it hard to believe your numbers.

Quote
and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction).

This makes no sense to me. When all your memory banks are already busy switching rows on every (random) memory access, every additional PoW instance you run will just slow things down. You cannot combine multiple random accesses because the odds of them being in the same row are around 2^-14 (the number of rows).
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 04:28:22 PM
Last edit: February 07, 2016, 05:05:36 PM by TPTB_need_war
 #846

However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs)

Where do you get those numbers? What I can measure is that a GPU has a 5x higher throughput of random memory accesses. I don't know to what extent that is due to more memory banks in the GPU, but that makes it hard to believe your numbers.

From my old rough draft:

Quote
The random access latency of Intel's L3 cache [13] is 4 times faster than DRAM main memory [2] and 25 times faster than GPU DDR main memory [14].

[14] http://www.sisoftware.co.uk/?d=qa&f=gpu_mem_latency&l=en&a=9
     GPU Computing Gems, Volume 2, Table 1.1, Section 1.2 Memory Performance

Unfortunately that cited page has disappeared since 2013. You can use their software to measure it. That is referring to one sequential process.

You are referring to the latency when the GPU is running multiple instances or (in Cuckoo's case) otherwise exploiting parallelism in the PoW proving function. Of course the latency drops then, because the GPU is able to schedule simultaneous accesses to the same memory bank (or can it schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism).

Edit: Try these:

http://courses.cms.caltech.edu/cs101gpu/2015_lectures/cs179_2015_lec05.pdf#page=11
http://stackoverflow.com/questions/13888749/what-are-the-latencies-of-gpu

Edit#2: http://arxiv.org/pdf/1509.02308.pdf#page=11

and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction).

This makes no sense to me. When all your memory banks are already busy switching rows on every (random) memory access, every additional PoW instance you run will just slow things down. You cannot combine multiple random accesses because the odds of them being in the same row are around 2^-14 (the number of rows).

If the odds are great enough then I agree, and that is why I said increasing the size of the memory space helps. For example, for a 128KB memory space with 32KB memory banks, the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

I am not an expert on the size of memory banks or the implications of increasing them.
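The rough 1/4 figure, and the "more complex" computation alluded to, can be checked numerically (my own sketch; it assumes uniformly random accesses over the memory space):

```python
MEM_SPACE = 128 * 1024          # the 128KB memory space from the example
PAGE_SIZE = 32 * 1024           # the hypothetical 32KB page ('bank' above)
PAGES = MEM_SPACE // PAGE_SIZE  # = 4

# Probability that two independent uniform accesses land in the same page:
pair_prob = 1 / PAGES           # = 0.25, the rough 1/4 in the post

def coalesce_with_any(k: int) -> float:
    """Fuller computation: probability that a given access lands in the
    same page as at least one of k - 1 other simultaneous accesses."""
    return 1 - ((PAGES - 1) / PAGES) ** (k - 1)
```

With more simultaneous instances the per-access coalescing odds rise quickly, which is why the interleaving trick pays off on small memory spaces.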

tromp
Legendary
Activity: 990
Merit: 1110
February 07, 2016, 04:56:57 PM
 #847

If the odds are great enough then I agree, and that is why I said increasing the size of the memory space helps. For example, for a 128KB memory space with 32KB memory banks, the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.
But each bank can only have one of its 2^14=16384 rows active at any time.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 05:28:40 PM
Last edit: February 07, 2016, 06:39:45 PM by TPTB_need_war
 #848

If the odds are great enough then I agree, and that is why I said increasing the size of the memory space helps. For example, for a 128KB memory space with 32KB memory banks, the odds will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.

Why do you say 'no' when I also wrote that the alternative possibility is that banks are independent:

(or can schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism)



But each bank can only have one of its 2^14=16384 rows active at any time.

My point remains that if there is parallelism in the memory access (whether by coalescing accesses to the same bank/row or, for example, 32K simultaneous accesses across 32K independent banks), then by employing the huge number of threads on the GPU (ditto an ASIC), the effective latency of the memory due to parallelism (not the latency as seen per thread) drops until the memory bandwidth bound is reached.

However, there might be an important distinction, in terms of electricity consumption, between whether the accesses are coalesced versus simultaneously accessing (and thus energizing) more than one memory bank (row of the bank). Yet I think the DRAM power consumption is always much less than the computation's, so as I said, unless the computation portion (e.g. the hash function employed) can be made insignificant, electricity consumption will be lower on the ASIC. Still waiting to see what you find out when you measure Cuckoo with a Kill-A-Watt meter.
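The "effective latency drops with parallelism until the bandwidth bound" argument is essentially Little's law; a sketch with made-up numbers (the latency, access size, and bandwidth here are illustrative, not measured figures):

```python
def throughput(instances: int, latency_ns: float,
               bytes_per_access: int, bandwidth_gb_s: float) -> float:
    """Random accesses per second with `instances` outstanding requests:
    latency-limited (Little's law) until the bandwidth bound is hit."""
    latency_limited = instances / (latency_ns * 1e-9)
    bandwidth_limited = bandwidth_gb_s * 1e9 / bytes_per_access
    return min(latency_limited, bandwidth_limited)

one = throughput(1, 300.0, 64, 100.0)       # single sequential instance
many = throughput(64, 300.0, 64, 100.0)     # 64 interleaved instances
cap = throughput(10_000, 300.0, 64, 100.0)  # saturated at the bandwidth bound
```

Per-instance effective latency improves as instances are added, exactly as described above, but only until the bandwidth limit is reached; beyond that, additional instances gain nothing.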

Why did you claim that memory latency is not very high on the GPU? Did you not see the references I cited? By not replying to my point on that, does that mean you agree that you were confusing latency per sequential access with latency under parallelism?



Edit: I was conflating 'bank' with 'page'. I meant page, since I think I mentioned 4KB and it was also mentioned in the link I provided:

http://www.chipestimate.com/techtalk.php?d=2011-11-22

I hope I didn't make another error in this corrected statement. It is late and I am rushing.

Quote from that link:

DDR DRAM requires a delay of tRCD between activating a page in DRAM and the first access to that page. At a minimum, the controller should store enough transactions so that a new transaction entering the queue would issue its activate command immediately and then be delayed by execution of previously accepted transactions by at least tRCD of the DRAM.

And note:

The size of a typical page is between 4K to 16K. In theory, this size is independent of the OS pages which are typically 4KB each.

Thus again, I was correct in what I wrote before: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 05:44:13 PM
 #849

This makes no sense to me. When all your memory banks are already busy switching rows on every (random) memory access, every additional PoW instance you run will just slow things down.

The bolded statement is not correct in any case. Threads are cheap on the GPU; it is memory bandwidth that is the bound. Adding more instances and/or more per-instance parallelism (if the PoW proving function exhibits per-instance parallelism) are both valid means to increase throughput until the memory bandwidth bound is reached. Adding instances doesn't slow down the performance of each instance unless the memory bandwidth bound has been reached (regardless of whether the memory spaces of separate instances are interleaved or not).

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 06:24:10 PM
 #850

I have come to the conclusion that we will all stab each other to death when faced with the choice between that and applauding/cheering each other (working together).

It is the nature of men. Find leverage. Seek. Destroy. Pretend to be part of a team while it serves one's interests but only while it does.

Men are damn competitive.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 08:23:16 PM
 #851

If you think about what gives a currency its value independent of any FX exchange, it is that the level of production for sale in that currency increases, so via competition more is offered at a lower currency price, and thus the value of the currency has increased. Thus our goal is to get more users offering more things for sale in the currency.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 07, 2016, 08:36:26 PM
Last edit: February 07, 2016, 08:48:21 PM by TPTB_need_war
 #852

Example, for a 128KB memory space with 32 KB memory banks

Hard to argue with someone who either confuses terms or whose numbers are way off.
You have at best a few hundred memory banks.

Quoting from http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html

Banks

To reduce access latency, memory is split into multiple equal-sized units called banks. Most DRAM chips today have 8 to 16 banks.

...

A memory bank can only service one request at a time. Any other accesses to the same bank must wait for the previous access to complete, known as a bank-conflict. In contrast, memory access to different banks can proceed in parallel (known as bank-level parallelism).

Row-Buffer

Each DRAM bank has one row-buffer, a structure which provides access to the page which is open at the bank. Before a memory location can be read, the entire page containing that memory location is opened and read into the row buffer. The page stays in the row buffer until it is explicitly closed. If an access to the open page arrives at the bank, it can be serviced immediately from the row buffer within a single memory cycle. This scenario is called a row-buffer hit (typically less than ten processor cycles). However, if an access to another row arrives, the current row must be closed and the new row must be opened before the request can be serviced. This is called a row-buffer conflict. A row-buffer conflict incurs substantial delay in DRAM (typically 70+ processor cycles).

I have already explained to you that the page size is 4KB to 16KB according to one source, and I made the assumption (just for a hypothetical example) that it could be as high as 32KB in a specially designed memory setup for an ASIC. And I stated that I don't know what the implications are of making the size larger. I did use the word 'bank' instead of 'page', but I clarified in the prior post that I meant 'page' (see quote below), and that should have been evident from the link I had provided (in the post you are quoting above), which discussed memory pages as the unit of relevance to latency (which I guess you apparently didn't bother to read).

Thus again, I was correct in what I wrote before: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.

What number is far off? Even if we take the page size to be 4KB, that is not going to be anywhere near your 2^-14 nonsense.

The number of memory banks is irrelevant to the probability of coalescing multiple accesses into one scheduled latency window. What is relevant is the ratio of the page size to the memory space (and the rate of accesses relative to the latency window). Duh!

I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

The page size and the row buffer size are equivalent. And the fact that only one page (row) per bank can be active at a time is irrelevant!
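Using the row-buffer numbers quoted earlier (roughly 10 processor cycles for a hit, 70+ for a conflict), the expected access cost as a function of the page-size to memory-space ratio can be modeled (my own sketch; the open-page approximation p_hit = page_size / mem_space is an assumption, not from the thread):

```python
def expected_cycles(page_size: int, mem_space: int,
                    hit_cycles: float = 10.0,
                    conflict_cycles: float = 70.0) -> float:
    """Expected DRAM access cost when the row-buffer hit probability is
    approximated by the page-size / memory-space ratio."""
    p_hit = page_size / mem_space
    return p_hit * hit_cycles + (1.0 - p_hit) * conflict_cycles

small = expected_cycles(32 * 1024, 128 * 1024)        # p_hit = 1/4
large = expected_cycles(4 * 1024, 256 * 1024 * 1024)  # p_hit ~ 1.5e-5
```

A memory space small relative to the page keeps most accesses in the row buffer (the 128KB example), while a large one drives the expected cost toward the conflict latency, which is the lever behind the page-size/memory-space ratio argument.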

Now what is it that you are slobbering about?

(next time before you start to think you deserve to act like a pompous condescending asshole, at least make sure you have your logic correct)

I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post, thinking that in the past you've always been amicable and a person to collaborate productively with. I don't know wtf happened to your attitude lately. Seems ever since I stated upthread some alternatives to your Cuckoo PoW, you've decided you need to hate on me. What is up with that? Did you really think you were going to get rich or gain massive fame from a PoW algorithm? Geez man, we have bigger issues to deal with. That is just one cog in the wheel. It isn't worth destroying friendships over. I thought of you as friendly, but not any more.

tromp
Legendary
Activity: 990
Merit: 1110
February 07, 2016, 09:37:44 PM
 #853

I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

Oops; I didn't realize you were talking about a tiny Cuckoo Cycle instance. I normally think of sizes in the dozens or hundreds of MB. Apologies for the misunderstanding.

In that case you are right that most memory accesses will coalesce into the same row/page. But when you start running multiple instances, they may occupy different pages in the same bank and conflict with each other.

The 2^-14 applies when most of your physical memory is used for Cuckoo Cycle, as might be tried in an ASIC setup.

Quote
I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post thinking that in the past you've always been amicable and a person to collaborate productively with.

I'm just keen to correct what I perceive as misrepresentations or false claims about my work.
TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 10, 2016, 10:01:51 AM
 #854

I suspect Benthach and spoetnik are both tptbneedwar alt accounts. Just 3 different personalities of the same unhinged loon

This is slander against my reputation. If you don't retract this, I will put negative trust on your profile.

I am absolutely not either of those accounts. I demand a retraction from you.

I have pointed out the engineering reasons that Ethereum is fundamentally flawed. I have nothing to do with the weak arguments of those two clowns.

FreeTrade
Legendary
Activity: 1470
Merit: 1030
February 10, 2016, 10:47:23 AM
 #855

I suspect Benthach and spoetnik are both tptbneedwar alt accounts. Just 3 different personalities of the same unhinged loon

This is slander against my reputation. If you don't retract this, I will put negative trust on your profile.

I am absolutely not either of those accounts. I demand a retraction from you.

I have pointed out the engineering reasons that Ethereum is fundamentally flawed. I have nothing to do with the weak arguments of those two clowns.

Yeah Anonymint is merely grumpy, disagreeable and egomaniacal, but he makes good technical points to make up for it. Spoetnik is a clown, agreed, don't know about the other one.

TPTB_need_war (OP)
Sr. Member
Activity: 420
Merit: 262
February 10, 2016, 10:48:08 AM
Last edit: February 11, 2016, 04:21:28 PM by TPTB_need_war
 #856

The first draft specification for my Shazam encryption function is complete and will be deployed in my upcoming sequential memory hard PoW hash function (much improved over Scrypt and my prior design from 2013). Note that Shazam is a simple tweak of ChaCha, so it isn't as if I invented much for Shazam, but I did need to analyze and assimilate several issues as stated in the specification ... also included is my formerly unpublished security analysis of ARX from 2013:

Code:
/*
Shazam is a fast 1024-bit (128B) encryption function. Shazam has the following
attributes— which are required[1] for use in constructing sequential memory-hard
functions¹; and which are not sufficient to make Shazam a secure stream cipher
nor a cryptographic hash function[2]:

  1. The outputs are uniformly distributed.
  2. Per instance computation can’t be significantly accelerated by more
     internal parallelism than can be exploited on CPUs nor by precomputation of
     a limited-space data structure.
  3. Computation of any segment of the output can’t be significantly faster
     than computing the entire output.
  4. Maximizes the ratio of the output length to the execution time.

For security against the structure around the matrix diagonal which passes
through[3] the Salsa20 and ChaCha block function, the first use of Shazam in a
chain of hashes should employ input constants[4].

ChaCha[5] seems best fit to these requirements; and compared to Salsa20 (and its
impressive visualized diffusion[8]), ChaCha has 50% greater bit diffusion[6],
updating each 32-bit doubleword twice per quarter-round, equivalent security yet
requiring one less round[9], and equivalent or faster per-round execution speed.
Same as for Salsa20, each ChaCha quarter-round is a confusion-and-diffusion[10]
block function that employs (48 per round, i.e. 16 each of) 32-bit
add-rotate-xor (ARX) operations.

Salsa20 alternates row and column rounds which required a slow matrix transpose
between rounds (i.e. swapping rows and columns across the diagonal) for naive
SIMD vector implementations. ChaCha incorporates an optimization[7] first
discovered for Salsa20[11] that instead rotates each column (except the first)
by multiples of the 32-bit doublewords; which is a faster operation on SIMD
registers than swapping. The slow inverse mapping to generate the final
output[11] is eliminated in ChaCha[7].

Rotate operations on SIMD registers are typically fast only for multiples of
8 bits (one byte). Although ChaCha has one each of 16-bit and 8-bit rotate
operations (multiples of a byte) that Salsa20 didn’t, the other two rotate
operations per quarter-round (12-bit and 7-bit) are not multiples of a byte.
To maximize the ratio in requirement #4 and make every rotate operation a
multiple of a byte, Shazam widens ChaCha from 512 bits to 1024 bits by
increasing the 32-bit doublewords to 64-bit quadwords. The chosen rotate
operations are thus 32-bit, 24-bit, 16-bit, and 8-bit per quarter-round. Most
recent mobile ARM processors have the NEON SIMD feature, which provides
sixteen 128-bit SIMD registers, each comprising two 64-bit quadwords; thus
two instances of Shazam per core (or per hyperthread for Intel CPUs) can be
executed in parallel. NEON provides the VTBL instruction[12] for (multiple of
a byte) rotates on 64-bit quadwords. ARM’s Advanced SIMD doubles the number
of 128-bit SIMD registers, and Intel’s AVX2 doubles the SIMD registers’ width
to 256 bits; each can thus execute four instances of Shazam in parallel per
core (or per hyperthread).
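A hypothetical sketch of the widened quarter-round, assuming Shazam keeps
ChaCha's ARX pattern but on 64-bit quadwords with the byte-multiple rotation
constants 32, 24, 16, 8 named above (the actual Shazam specification is not
published here, so treat this as an illustration of the widening only):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def rotl64(x, r):
    """Rotate a 64-bit quadword left by r bits; r is a multiple of 8, so
    SIMD byte-shuffle instructions (e.g. NEON VTBL) can implement it."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def shazam_quarter_round(a, b, c, d):
    """ChaCha's quarter-round pattern widened to 64-bit quadwords, with
    rotations 32, 24, 16, 8 replacing ChaCha's 16, 12, 8, 7 (assumed)."""
    a = (a + b) & MASK64; d = rotl64(d ^ a, 32)
    c = (c + d) & MASK64; b = rotl64(b ^ c, 24)
    a = (a + b) & MASK64; d = rotl64(d ^ a, 16)
    c = (c + d) & MASK64; b = rotl64(b ^ c, 8)
    return a, b, c, d
```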

The widening to 256-bit registers can be implemented on AVX2 with a single
instruction per column for the column rotates (of the four 64-bit quadwords).
On ARM NEON the maximum register size is 128 bits, so each column rotate
can’t be accomplished with one instruction, because the VTBL instruction has
only a 64-bit quadword output. The third column rotate can be accomplished
with a single VSWP instruction, and the second and fourth columns each with
two VSWP instructions. Note that the terms ‘quadword’ and ‘doubleword’ are
defined differently for AVX2 and NEON; this document adopts the AVX2
definition, where quadwords are four 16-bit words and doublewords are two
16-bit words.

________________________________________________________________________________
¹ A sequential memory-hard function is typically one-way and has uniformly
  distributed, constant-length outputs; it is a non-invertible hash function
  only if the input is variable-length, because variable-length input is part
  of the definition of a hash function.

ARX Security

ARX operations are employed in some block encryption algorithms because they
are relatively fast in software on general-purpose CPUs, offer reasonable
performance in dedicated hardware circuits, and run in constant time, making
them immune to timing attacks.

Rotational cryptanalysis attempts to attack encryption functions that employ
ARX operations. Salsa20 and ChaCha employ input constants to defeat such
attacks[3].

Addition and multiplication modulo 2ⁿ diffuse changes toward the high bits
but not toward the low bits. Without shuffles or rotation permutations to
diffuse changes from high to low bits, addition and multiplication modulo 2ⁿ
can be broken with low complexity by working from the low bits to the high
bits[13].
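This low-to-high attack surface can be seen directly: the low k bits of a sum
or product modulo 2ⁿ depend only on the low k bits of the operands, so
without rotations an attacker can solve for a secret operand k bits at a
time. A small demonstration (hypothetical helper name):

```python
def low_bits_only(op, k, n=32):
    """Check that the low k bits of op(a, b) mod 2**n are unchanged when
    only bits at or above position k of the operands are perturbed."""
    mask_k, mask_n = (1 << k) - 1, (1 << n) - 1
    for a, b, hi in [(0x12345678, 0x9ABCDEF0, 0xDEAD0000),
                     (0xFFFF1234, 0x00015678, 0x12340000)]:
        full = op(a, b) & mask_n
        perturbed = op(a ^ hi, b ^ hi) & mask_n  # hi touches only high bits
        if (full ^ perturbed) & mask_k:
            return False
    return True

# Carries propagate only upward, so addition and multiplication modulo 2**n
# leave the low bits independent of the high bits of the operands:
assert low_bits_only(lambda a, b: a + b, 16)
assert low_bits_only(lambda a, b: a * b, 16)
```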

The overflow carry bit, i.e. addition over the integers minus addition modulo
2ⁿ, takes the value 0 or 1 with equal probability. Thus addition modulo 2ⁿ is
discontinuous: it defeats linearity over the ring Z/2ⁿ[17] because the carry
is 1 in half of the instances[14], and defeats linearity over the ring
Z/2[16] because the low bit of both operands is 1 in one-fourth of the
instances.

The number of overflow high bits in multiplication over the integers minus
multiplication modulo 2ⁿ depends on the highest set bits of the operands;
thus multiplication modulo 2ⁿ defeats linearity over the range of rings from
Z/2 to Z/2ⁿ.

Logical exclusive-or always defeats linearity over the ring Z/2ⁿ[16] because
it is not a linear operator over that ring.

Each multiplication amplifies the diffusion and confusion provided by each
addition. For example, multiplying any number by 23 is equivalent to adding
the number multiplied by 16, the number multiplied by 4, the number
multiplied by 2, and the number itself. This is recursive, since multiplying
the number by 4 is equivalent to adding the number multiplied by 2 to the
number multiplied by 2, and adding a number to itself is equivalent to a
1-bit left shift, i.e. multiplication by 2. Multiplying one variable number
by another variable number creates additional confusion.
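The decomposition in the example can be checked mechanically: 23 = 16 + 4 +
2 + 1, so multiplication by 23 reduces to shifts (repeated doublings) and
additions:

```python
def times23(x):
    """x * 23 via the shift-and-add decomposition 23 = 16 + 4 + 2 + 1."""
    return (x << 4) + (x << 2) + (x << 1) + x

def double(x):
    """Adding a number to itself is a 1-bit left shift, i.e. times 2."""
    return x + x

assert all(times23(x) == x * 23 for x in range(1000))
assert double(21) == 21 << 1 == 42
```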

Multiplication defeats rotational cryptanalysis[15] because, unlike addition,
rotation of the multiplication of two operands never distributes over the
operands, i.e. it is not equal to the multiplication of the rotated operands.
A proof sketch: rotation is equivalent to the exclusive-or of left and right
shifts, and left and right shifts are equivalent to multiplication and
division by a factor of 2, which don’t distribute over multiplication, e.g.
(8 × 8) × 2 ≠ (8 × 2) × (8 × 2) and (8 × 8) ÷ 2 ≠ (8 ÷ 2) × (8 ÷ 2). Addition
over the integers is always distributive over rotation[17] because
multiplication and division by 2 distribute over addition, e.g.
(8 + 8) ÷ 2 = (8 ÷ 2) + (8 ÷ 2). Due to the aforementioned non-linearity over
Z/2ⁿ caused by the carry, addition modulo 2ⁿ is only distributive over
rotation with a probability of 1/4 up to 3/8, depending on the relative
number of bits of rotation[15][18].
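This probability can be checked exhaustively for small word sizes
(hypothetical helper names; the 1/4 and 3/8 figures are for large n, so an
8-bit word lands near but not exactly on those bounds):

```python
def rotl(x, r, n=8):
    """Rotate an n-bit word x left by r bits."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def distributes_fraction(r, n=8):
    """Fraction of all operand pairs for which rotation by r distributes
    over addition modulo 2**n, counted exhaustively."""
    mask = (1 << n) - 1
    hits = sum(rotl((a + b) & mask, r, n) ==
               (rotl(a, r, n) + rotl(b, r, n)) & mask
               for a in range(1 << n) for b in range(1 << n))
    return hits / (1 << (2 * n))

# For 8-bit words the fraction stays near the cited 1/4 to 3/8 range,
# varying with the rotation amount r.
```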

However, multiplication modulo 2ⁿ sets all low bits to 0 orders of magnitude
more frequently than addition modulo 2ⁿ does, a degenerate result that
squashes diffusion and confusion.

References

[1] C. Percival, “Stronger Key Derivation Via Sequential Memory-Hard Functions”,
    pg. 9: http://www.tarsnap.com/scrypt/scrypt.pdf#page=9

[2] Stream cipher security considerations for Salsa20 aren’t required, cf.
    D. Bernstein, “Salsa20 security”: https://cr.yp.to/snuffle/security.pdf

    Cryptographic hash function security considerations aren’t required[1].

[3] D. Bernstein, “Salsa20 security”, §4 Notes on the diagonal constants,
    pg. 4: https://cr.yp.to/snuffle/security.pdf#page=4

    J. Hernandez-Castro et al, “On the Salsa20 Core Function”, §4 Conclusions,
    pg. 6: https://www.iacr.org/archive/fse2008/50860470/50860470.pdf#page=6

[4] Y. Nir et al, “ChaCha20 and Poly1305 for IETF Protocols”, IRTF RFC 7539,
    §2.3. The ChaCha20 Block Function, pg. 7:
    https://tools.ietf.org/html/rfc7539#page-7

[5] D. Bernstein, “ChaCha, a variant of Salsa20”:
    http://cr.yp.to/chacha/chacha-20080128.pdf

[6] D. Bernstein, “ChaCha, a variant of Salsa20”, pg. 3:
    http://cr.yp.to/chacha/chacha-20080128.pdf#page=3

[7] D. Bernstein, “ChaCha, a variant of Salsa20”, pg. 5:
    http://cr.yp.to/chacha/chacha-20080128.pdf#page=5

[8] https://cr.yp.to/snuffle/diffusion.html

[9] https://en.wikipedia.org/w/index.php?title=Salsa20&oldid=703900109#ChaCha_variant

[10] http://en.wikipedia.org/wiki/Confusion_and_diffusion

     http://www.theamazingking.com/crypto-block.php

     H. Feistel, “Cryptography and Computer Privacy”, Scientific American,
     Vol. 228, No. 5, 1973:
     http://apprendre-en-ligne.net/crypto/bibliotheque/feistel/

[11] P. Mabin Joseph et al, “Exploiting SIMD Instructions in Modern
     Microprocessors to Optimize the Performance of Stream Ciphers”, IJCNIS
     Vol. 5, No. 6, pp. 56-66, §C. Salsa 20/12, pg. 61:
     http://www.mecs-press.org/ijcnis/ijcnis-v5-n6/IJCNIS-V5-N6-8.pdf#page=6

[12] “ARM Compiler toolchain Assembler Reference”, §4.4.9 VTBL, VTBX, pg. 224:
     http://infocenter.arm.com/help/topic/com.arm.doc.dui0489f/DUI0489F_arm_assembler_reference.pdf#page=224

     T. Terriberry, “SIMD Assembly Tutorial: ARM NEON”,
     §Byte Permute Instructions, pg. 53:
     http://people.xiph.org/~tterribe/daala/neon_tutorial.pdf#page=53

[13] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §2 Related Work, pg. 2:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=2

[14] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §6 Cryptanalysis of generic AR systems, pg. 10:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=10

[15] D. Khovratovich et al, “Rotational Cryptanalysis of ARX”,
     §3 Review of Rotational Cryptanalysis, pg. 3:
     https://www.iacr.org/archive/fse2010/61470339/61470339.pdf#page=3

[16] D. Bernstein, “Salsa20 design”, §2 Operations:
     https://cr.yp.to/snuffle/design.pdf

[17] M. Daum, “Cryptanalysis of Hash Functions of the MD4-Family”,
     §4.1 Links between Different Kinds of Operations, pg. 40:
     www-brs.ub.ruhr-uni-bochum.de/netahtml/HSS/Diss/DaumMagnus/diss.pdf#page=48

[18] M. Daum, “Cryptanalysis of Hash Functions of the MD4-Family”,
     §4.1.3 Modular Additions and Bit Rotations, Corollary 4.12, pg. 47:
     www-brs.ub.ruhr-uni-bochum.de/netahtml/HSS/Diss/DaumMagnus/diss.pdf#page=55
*/

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
February 10, 2016, 09:34:20 PM
Last edit: February 11, 2016, 01:08:16 AM by TPTB_need_war
 #857

The truth about Ethereum, i.e. borderline scam or at least technological incompetence:

Ignore the fud and the hype.  I'm long term excited about this project.

[...]

Even though I think eth will destroy the need and market cap of bitcoin (wait and see).  It won't happen today or tomorrow.  You are buying into a pump.  Wait until things are boring and then don't put off your purchase.

Eth is already better than bitcoin.  People just don't know it yet.  And the ecosystem is coming down the line at an insane rate.  Most popular alts are scams.

Why are you displaying your ignorance about technological issues that you are apparently incapable of comprehending? You were even in my Reddit thread where I explained it, yet you somehow remain under the delusion that Ethereum is not incompetent.  Huh

Ethereum is not better than any other scam. And it is a borderline scam or at least incompetence masked by technobabble from some young nerds who know some math and programming, but have limited capitulation to reality.

So why are you constantly making threads spreading pointless FUD about ethereum?

Yeah why are you doing that stoat?

And you are lying about my identity and have refused to retract your slander, when I am clearly not the two users you accused me of being and I have even shared my LinkedIn photo and identity.

Why can't you admit that Ethereum's developers suck and after $millions wasted, they still have not solved the most fundamental issue that must be solved in order to make scripting on a block chain work?

The technological challenge with a long-running script on a block chain is verification. The gas (and transaction fees) is paid to the winner of the PoW block, not to all miners, yet all miners (full nodes) must bear the SAME cost of verification. Not all miners have the same hashrate, so not all miners have the same income per block; thus some miners recoup less of their verification costs than others. As I explained in greater detail, this forces mining to become 100% centralized in one miner with 100% of the hashrate.

Ethereum is off on another tangent named Casper, with shards, consensus-by-betting, etc, which is another hopeless and futile attempt to solve a problem that CAN NOT BE SOLVED BECAUSE OF THE INVIOLABLE CAP THEOREM!

Ethereum will never solve this problem and remain decentralized. Never. Thus all the scripts and products being built on top of Ethereum are headed to failure when Ethereum fails to solve the scaling problem of verification in a decentralized manner. Because centralization of scripting is meaningless, we always had that already.

I have solved the problem because I realized verification MUST be centralized (due to the inviolable CAP theorem and the correct understanding that a 100% decentralized system can not solve the Byzantine Generals Problem), and thus I instead designed a way to control the centralization of verification with decentralized PoW miners (because each user submits a PoW share with their txn, and because PoW mining is rendered UNprofitable for all parties).

So who will be the winner of everything? Me. Not Ethereum. Not to mention that my marketing plan is light-years ahead of any altcoin's, because I will market directly to the millions of masses and achieve millions of adoptions (and be the first coin to do so).

Look I was there at the beginning telling Charles (one of the guys who founded and organized the creation of Ethereum) in Skype that Vitalik's PoW algorithm could be parallelized thus not CPU only, telling him that they could not solve the fundamental problem above, and telling him that they were going to raise too much $ with too many mouths to feed and still wouldn't solve the fundamental problems. Originally Charles was recruiting me to form this company, not Vitalik. But I balked and said I didn't want to raise all that money and I didn't want to start something until I was sure I had solved all fundamental issues. If you don't believe me, go ask Charles.

All the gory details about Ethereum's technical incompetence are here:

https://www.reddit.com/r/ethtrader/comments/42rvm3/truth_about_ethereum_is_being_banned_at/

Enjoy the Ethereum pump while it is hot and while people are ignorant of the truth about the technical incompetence of the Ethereum developers. Eventually the truth will come out and especially when my white papers and coin are released.



Ethereum is off on another tangent named Casper, with shards, consensus-by-betting, etc, which is another hopeless and futile attempt to solve a problem that CAN NOT BE SOLVED BECAUSE OF THE INVIOLABLE CAP THEOREM!

Casper adds game theory into the mix. You can fool lightweight nodes, but you will lose a lot of money after that. Security deposits solve a lot of problems in the real world.

Proof-of-cheating (or by any other name) backed by deposits is a game-theoretic design that leads to centralization. Think it out. It is not really that hard to show. It has flaws analogous to PoS.

Edit: for n00bs, note that if Casper uses proof-of-cheating with deposits (along with consensus-by-betting) to rely on a limited number of deposit-holding nodes to verify the block chain for us, then those masternodes face the same economic problem I explained in my prior post: the one with the most income is the winner-take-all, i.e. 100% centralization. There is simply no way to avoid centralization of verification. The solution is the one I have designed and explained, and which Ethereum is not implementing.



Quote from: TrashMan
I just don't want the same thing happening to ZCash that happened to ethereum. Vitalik announced a while ago how much Ether the foundation had left, and then the price skyrocketed because everybody was waiting for the whales to sell out.

Could you please provide to us a link to this announcement by Vitalik?

How do we know the insiders did not sell to themselves and have (intentionally) created a deception which caused P&D fever?




Let's start this by making myself a little less popular, saying: the altcoin scene will die in 2016

It is quite possible that the altcoin market will be devastated by a potential decline of Bitcoin to < $150, and perhaps well below $100, because an unexpected global contagion is coming that will be worse than 2008 when everything crashed. As we all know, when Bitcoin's price catches a flu, altcoins' prices go into comas.


Are there any exceptions?


Yes, there are!

Coins that offer unique decentralized services, "Blockchain 2.0" or other disrupting technology will be the buy-and-holds for 2016.

Some examples?


- Ethereum | Decentralized software & smart contracts on the blockchain
- Factom | Honesty to record-keeping
- Voxelus | Virtual Reality without coding
- Radium | Decentralized Services on the smartchain
- MaidSafe | Crowd-sourced internet
- Synereo (AMP) | Decentralized social network
- Sia | Decentralized storage

All those coins are doomed to fail due to insoluble fundamental technological issues, which render those coins entirely useless:

https://bitcointalk.org/index.php?topic=1219023.msg13842262#msg13842262
https://bitcointalk.org/index.php?topic=1354274.msg13833591#msg13833591
https://bitcointalk.org/index.php?topic=1219023.msg13043602#msg13043602 (I will be responding to AlanX's post soon)



So far I see the ethereum blockchain and consensus protocol working fine.

It hasn't been scaled yet. Bitcoin's scalepocalypse will pale in comparison to Ethereum's doom in the wild. Essentially, what Ethereum is designing with Casper is a technobabble wrapper around centralized verification: either they know verification can't be decentralized (as I have explained), or they are determined to delude themselves otherwise (with the result that centralization occurs anyway).

With centralization, Ethereum can scale, except that will be viewed as a failure by the market, unless verification centralization can be hidden behind Sybil attacks on the verification nodes (meaning no one can prove that 1000 nodes aren't controlled by the same entity). I have a strong suspicion that is why Ethereum is being funded by Peter Thiel and other banksters, because they understand Ethereum is a way for them to control without being detected. Satoshi had prevented this outcome in Bitcoin by setting the maximum block size to 1MB, which restricted verification from centralizing entirely (yet it will still be impossible to prevent Bitcoin Classic from centralizing, due to the other economics of profitable PoW mining).

But centralization always leads to failure. So ultimately this will fail sooner or later.

And what I see from you is just a load of wild claims.

That is because you are a n00b and you can't understand the technological arguments. The points I have made are not wild at all. Do you realize I was probably the first person to predict Bitcoin's scalepocalypse, back in 2013, as ArticMine graciously acknowledges today:

I introduced this concept in 2013 in my thread Spiraling Transaction Fees and I nailed the block size as the fundamental issue in my last post in that 2013 thread.

Seems, stoat, you had no clue how long I have been on this forum doing serious technological research.

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
February 11, 2016, 01:25:38 AM
 #858

...but it's really just bitcoin for me.  All the rest are wannabees IMO.  But if something comes along that can challenge bitcoin--legitimately--then more power to whatever that is.  If it's ethereum, well then it's got its work cut out for itself.  It's not going to be easy to dethrone btc.

The only coin that could dethrone Bitcoin would be one that had higher levels of adoption.

None of the altcoins have generated any adoption. Maybe Doge generated a few 1000s of actual users, but that is nothing compared to 10,000s of serious users of Bitcoin.

Estimates of millions of users for Bitcoin are delusion, based for example on wallet counts from Coinbase, which don't exclude wallets that were driven originally by affiliate payouts and have since been abandoned (Coinbase has gone to the well three times, burning up investors' money, because its business model has failed). No coin has that yet.

You'll know it when you become aware of the Bitcoin killer. It will be so damn obvious due to the millions of users adopting it.

TPTB_need_war (OP)
Sr. Member
****
Offline Offline

Activity: 420
Merit: 262


View Profile
February 11, 2016, 02:57:06 AM
 #859

Congratulations to the Monero Team on your successful Hydrogen Helix release. I primarily believe in bitcoin and strongly believe most altcoins are doomed for failure. Monero has good chances to be an important exception.

https://github.com/monero-project/bitmonero/releases/tag/v0.9.0

Monero scales to 0 block size  Shocked

It is dangerous when you hand chess masters the keys to software design.

tobeaj2mer01
Legendary
*
Offline Offline

Activity: 1098
Merit: 1000


Angel investor.


View Profile
February 11, 2016, 07:10:21 AM
 #860

Is there a rough timeline of your project?

Sirx: SQyHJdSRPk5WyvQ5rJpwDUHrLVSvK2ffFa