2801  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 10:36:50 PM
"Correctly functioning components of a Byzantine fault tolerant system will be able to provide the system's service, assuming there are not too many faulty components."

In Bitcoin "too many faulty components" = majority of the CPU power.

We can't count the components because identities can be Sybil attacked.

But more saliently, since I know your retort would be that hashrate is the count: you seem to be going in circles because you ignore what I already wrote:

In bitcoin, BGP is solved to within the stated tolerance of 51% byzantine faulty nodes.

Satoshi's PoW does not distinguish between faulty and non-faulty nodes.

You guys are like a dog chasing its tail.
2802  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 09:47:53 PM
monsterer and smooth, I repeat again: how do you prove that a 51% attack is censoring transactions? In other words, how do you even detect it in an objective and provable manner?

A system which doesn't objectively (from the perspective of all observers) know when it is failing is not Byzantine fault tolerant.

Refer again to the Wikipedia definitions:

The following practical, concise definitions are helpful in understanding Byzantine fault tolerance:[3][4]

Byzantine fault
    Any fault presenting different symptoms to different observers
Byzantine failure
    The loss of a system service due to a Byzantine fault in systems that require consensus

This circular logic of yours is getting redundant. I have made my point and you have not refuted it.
2803  Economy / Economics / Re: Economic Devastation on: February 07, 2016, 09:41:51 PM
You young fellow feel free to pursue theft of music and other content which deprives the millions of artists of income to pay their rent.

I view this in completely different terms.  Before file sharing existed, people would record songs off the radio onto their tape cassettes.  The music was already technically (but not legally) out in the public domain for anyone to hear, you were just bypassing the business model of ad supported revenue.  The music was even being beamed at you via radio waves against your own will, yet there's probably plenty of obscure laws trying to govern whether you can or can't record it and what you can do with it.

We have a similar situation with ad blockers on websites.  Their business model is starting to fail.  To me, the whole situation with music is just the state trying to prop up an invalid business model.  In the old days, entertainers were considered to have the lowest of social status possible.  This is one of the initial reasons Nero was ridiculed as an emperor, because he wanted to be an actor and emperor at the same time.  Even if entertainer's social status was garbage, they could still get paid doing it, they just had to do it through live performance.  There was no "record thyself and make millions".
 
Modern civilization elevates these entertainers from the social status of garbage men, to basically higher than the president of the country in both fame and wealth.  This is not to say they shouldn't get paid, but past history and current technology both point to the idea that they will likely be required to do so only through live performance.  If you're saying it's the government's job to make sure their invalid business model is still able to make them mega-millionaires without even having to do live performance at all, then that would be an extreme left wing view.

I really read your rebuttal with an open mind, because if I am incorrect I will suffer immensely. So I am not writing the following based on what I want to believe, but rather based on my sober analysis of the facts. I am eager to read any rebuttal which can teach me why I am wrong.

First of all, distinguish SUPER STARS from the average indie musician earning a couple hundred dollars a month, or the more successful indie or small-label outfit earning just above the poverty line. The former number several dozen to maybe a few hundred (active), whereas the latter number in the 100,000s to millions (and maybe many more if they could earn a bit more).

Depriving indie musicians of a decent income (not even wealth!) to pay their rent and food is not the way to build a new Knowledge Age economy wherein we creative people create things and sell them directly to each other instead of being slaves to corporations. If you are going to advocate stealing music, and since we are moving into a digital age where all work will be digitized, then let's advocate stealing everything, including 3D printer designs, commercial software, etc., so that we will be reduced to an economy valued only by physical raw materials and energy production, and the bankers will own and control all value in the economy. Yeah nice.  Cry

Afaik, the reason artists were devalued throughout history was due to two facts:

  • Lack of abundance in the ancient economy which is required to produce a gift culture. The artists in a gift culture are on the receiving end of the gifts because they don't directly produce necessities of life that are thus in abundance in a gift culture.
  • Economies of yore were capital intensive and built on economies of scale (e.g. Roman road building, post-Dark Age agriculture, Industrial Age factories), thus artists contributed no useful labor to the capitalists. The point being that the capitalists were in control. But I have explained this all changes in the Knowledge Age[1].

Why would you not want to pay an insignificant tip to indie musicians, so they can flourish and you don't have to view ads? We are now in an abundance economy. There is no excuse not to tip the indie artists.

Would you prefer to have massive unemployment and a social welfare system that will sink us into a Dark Age?

Do you want all those unemployed artists on welfare to vote to steal your money with capital controls because the economy failed them?

Not everyone wants to be a programmer or whatever.

If you enjoy or listen to a song regularly, then there is absolutely no financial reason you can justify for not tipping the creator a penny. You will only destroy society, the Knowledge Age, and yourself by being so selfish and myopic. Perhaps you could justify it for other reasons, such as micropayments being a hassle and subscription being a lock-in (to one provider) paradigm.

What might be more convincing to me is to argue that those people who are going to steal (or who won't bother to find the music in official venues) will do it anyway (or at least will have been exposed to the music, thus potentially being another fan for the musician to sell a T-shirt to), thus arguing there is no economic incentive to prevent bootleg copies from appearing on decentralized file storage systems. And thus to argue that the business model that works is to give away the downloads for free, and sell the fans trinkets and live performances. Perhaps that is your point?

Afaics, SoundCloud was supposed to be offering that model: the musicians pay SoundCloud to offer the downloads for free. In return, musicians could afaics promote their music and gain fans, for example on their Facebook page, and then sell the fans stuff such as T-shirts. But lately SoundCloud has started to limit apps to 15,000 plays per day, apps that play SoundCloud content aren't allowed to develop social networking type features, and SoundCloud disabled their Facebook embedded player (changed it to a link to SoundCloud's website) so that SoundCloud could drive ad revenues and/or synergies on their own site. It appears SoundCloud was being hammered by the RIAA with DMCA requests and SoundCloud caved in to the major record labels. Now Universal has access to delete any song from SoundCloud.

So one could argue that a decentralized file storage could provide the function SoundCloud was supposed to be offering.

Musicians like to get statistics on how many plays their song has. They like to get feedback on their songs. Etc.

If society decides to adopt the decentralized file storage and end copyrights, then I will adjust to it. But for the time being, it is not clear whether that is the best model for the indie artists and for our Knowledge Age future.

For example, it is not clear to me that I need 150 T-shirts, one from each indie band I like. And then how do I tip them for new music they create if I already bought a T-shirt? I don't have time to go to live concerts, and what if the band is not in my area? We are moving to a global economy (check out songdew.com for music from India). Wouldn't it make more sense for my music organizer to tip them automatically based on my plays? Then I don't have to hassle with making sure I take care of the artists who provide the music that I love.

So you could argue: okay, but there's no reason not to let others steal it if they really want to. Well, maybe true, but in that case the decentralized file storage can coexist with the micropayment model.

Which outcome do you think is realistically the most likely and why?


[1] https://bitcointalk.org/index.php?topic=355212.0
https://bitcointalk.org/index.php?topic=355212.msg13761518#msg13761518 (see the "Edit:")
2805  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 08:36:26 PM
Example, for a 128KB memory space with 32 KB memory banks

Hard to argue with someone who either confuses terms or whose numbers are way off.
You have at best a few hundred memory banks.

Quoting from http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html

Banks

To reduce access latency, memory is split into multiple equal-sized units called banks. Most DRAM chips today have 8 to 16 banks.

...

A memory bank can only service one request at a time. Any other accesses to the same bank must wait for the previous access to complete, known as a bank-conflict. In contrast, memory access to different banks can proceed in parallel (known as bank-level parallelism).

Row-Buffer

Each DRAM bank has one row-buffer, a structure which provides access to the page which is open at the bank. Before a memory location can be read, the entire page containing that memory location is opened and read into the row buffer. The page stays in the row buffer until it is explicitly closed. If an access to the open page arrives at the bank, it can be serviced immediately from the row buffer within a single memory cycle. This scenario is called a row-buffer hit (typically less than ten processor cycles). However, if an access to another row arrives, the current row must be closed and the new row must be opened before the request can be serviced. This is called a row-buffer conflict. A row-buffer conflict incurs substantial delay in DRAM (typically 70+ processor cycles).

I have already explained to you that the page size is 4KB to 16KB according to one source, and I made the assumption (just for a hypothetical example) that maybe it could be as high as 32KB in a specially designed memory setup for an ASIC. And I stated that I don't know what the implications are of making the size larger. I did use the word 'bank' instead of 'page', but I clarified for you in the prior post that I meant 'page' (see quote below), and it should have been evident from the link I had provided (in the post you are quoting above), which discussed memory pages as the unit of relevance to latency (which I guess you didn't bother to read).

Thus again, what I wrote before was correct: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.

What number is far off? Even if we take the page size to be 4KB, that is not going to be anywhere near your 2^-14 nonsense.

The number of memory banks is irrelevant to the probability of coalescing multiple accesses into one scheduled latency window. What is relevant is the ratio of the page size to the memory space (and the rate of accesses relative to the latency window). Duh!
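To make that ratio concrete, here is a rough Python sketch (my own numbers, just for the hypothetical 128KB memory space with 32KB pages) comparing the analytic same-page probability for a pair of uniform random accesses against a quick simulation. Coalescing many accesses within one latency window is more complex, as I said, but the relevant scale is clearly 1/4, not 2^-14:

Code:
import random

# Hypothetical example: 128 KB memory space divided into 32 KB pages.
MEM_SPACE = 128 * 1024   # bytes allocated to the PoW's random-access data structure
PAGE_SIZE = 32 * 1024    # assumed page (row) size for this example
NUM_PAGES = MEM_SPACE // PAGE_SIZE   # = 4

# Analytic: a second uniformly random access lands in the same page as the
# first with probability 1/NUM_PAGES, i.e. 1/4 here.
print("analytic same-page probability:", 1 / NUM_PAGES)

# Quick Monte Carlo check over pairs of independent uniform accesses.
trials = 100_000
hits = sum(random.randrange(MEM_SPACE) // PAGE_SIZE == random.randrange(MEM_SPACE) // PAGE_SIZE
           for _ in range(trials))
print("simulated same-page probability:", hits / trials)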

I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

The page size and the row buffer size are equivalent. And the fact that only one page (row) per bank can be accessed synchronously is irrelevant!

Now what is that you are slobbering about?

(next time before you start to think you deserve to act like a pompous condescending asshole, at least make sure you have your logic correct)

I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post, thinking that in the past you've always been amicable and a person to collaborate productively with. I don't know wtf happened to your attitude lately. Seems ever since I stated upthread some alternatives to your Cuckoo PoW, you've decided you need to hate on me. What is up with that? Did you really think you were going to get rich or gain massive fame from a PoW algorithm? Geez man, we have bigger issues to deal with. That is just one cog in the wheel. It isn't worth destroying friendships over. I thought of you as friendly, but not any more.
2806  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 08:23:16 PM
If you think about what gives a currency its value independent of any FX exchange, it is that the level of production offered for sale in that currency increases, so via competition more is offered at a lower currency price and thus the value of the currency has increased. Thus our goal is to get more users offering more things for sale in the currency.
2807  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 06:50:44 PM
Go ahead. Find a way to put me down. I dare you all!

Fuck I am tired of this forum.
2808  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 06:44:52 PM
Satoshi's PoW does not distinguish between faulty and non-faulty nodes.

In bitcoin, faulty nodes = sybil nodes.

A byzantine fault in bitcoin is a fork.

Quote
So we can conclude Bitcoin didn't solve BGP because there is no block chain objectivity about faults. And then we can say that Sybil attacks on pools destroy one of our subjective metrics for community appraisal of Consistency.

A poor conclusion. LCR provides the objectivity; branches which get orphaned were objectively selected against as being byzantine faulty.

So if the LCR chain is censoring transactions, is that not a fault/failure? What the hell is the use of Byzantine fault tolerance if it doesn't guarantee a system that can be used by the participants?

The following practical, concise definitions are helpful in understanding Byzantine fault tolerance:[3][4]

Byzantine fault
    Any fault presenting different symptoms to different observers
Byzantine failure
    The loss of a system service due to a Byzantine fault in systems that require consensus

Loss of Access is a failure. CAP theorem applies.
2809  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 06:24:10 PM
I have come to the conclusion that we will all stab each other to death when faced with the choice between that and applauding/cheering each other (working together).

It is the nature of men. Find leverage. Seek. Destroy. Pretend to be part of a team while it serves one's interests but only while it does.

Men are damn competitive.
2810  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 06:18:33 PM
Quote
The stated problem bounds do not include being able to tell whether someone controls >50% of the hash rate. That isn't in the paper at all. The wording of the paper is "As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network". It doesn't matter whether they cooperate via pools or otherwise, either way it is outside the bounds.

Without considering the Sybil attack, one isn't solving the Byzantine fault issue, i.e. isn't solving the Byzantine Generals problem (which is the correct title of this thread). Just because Satoshi failed to mention that he hadn't solved what he was implying to have solved doesn't mean that merely having a majority of the hashrate is the only consideration in a PoW solution to the Byzantine Generals problem.

There is no Sybil attack possible on the problem as stated. "A majority of CPU power" is a physical quantity which can't be Sybil attacked. Period.

The Byzantine Generals problem does not state "A majority of CPU power" as the problem. I already stated that is Satoshi's requirement but as the correct title of this thread points out, Satoshi's stated requirement is not a solution to the Byzantine Generals problem. Period.

Okay, but so what?

Bitcoin also didn't solve P ?= NP or any number of other problems.

And unless I'm mistaken, Satoshi did not say that it did solve the Byzantine Generals problem, especially in the specific manner that problem is formulated (with discrete General-actors, something that doesn't even exist in Bitcoin at all). At best there is a rough similarity. Correction: Satoshi did say it was a solution in this email. But again, he formulated in a very careful manner, stating that each general has a laptop, which ends up making "majority of CPU power" equivalent to a majority of discrete General-actors.

He said exactly what it does solve. If a majority of the CPU power is not conspiring to attack the network, then it reaches consensus that is final and secure (though slowly in the case close to 50%).

It is up you as a prospective user or investor to decide whether "a majority of the CPU power" is an acceptable requirement. It seems at this point there isn't anything better, and some number of people think it is useful (most of the world does not).

Because as I explained to monsterer in the prior post, Satoshi's design is ambiguous about Byzantine faults such as censoring transactions and thus it does not solve the BGP.

And because I assert there are other ways to organize a PoW block chain design so that some of those faults can be objectively identified and reacted to (e.g. the fault of censoring transactions). Since in Satoshi's design these faults can't even be objectively identified (and the Sybil attack on pools is another issue that destroys any objectivity), there is no recourse other than for the system to centralize and fail. Pool centralization is increasing despite the move away from pools that had too much hashrate (and the linked data doesn't even account for the Sybil attack we can't see).
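For readers who want to see what the quoted "majority of CPU power" assumption buys quantitatively (and nothing more; it says nothing about censorship faults), here is a rough Python transcription of the attacker catch-up probability from section 11 of Satoshi's paper, i.e. the standard Poisson/gambler's-ruin approximation:

Code:
from math import exp, factorial

def attacker_success_probability(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hashpower ever catches up
    from z blocks behind (Satoshi's section 11 approximation)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # with >= 50% the attacker eventually catches up with certainty
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for q in (0.10, 0.30, 0.45, 0.49):
    print(q, [round(attacker_success_probability(q, z), 4) for z in (1, 6, 12)])

Note this only quantifies double-spend reversal under the honest-majority assumption; whether that assumption can even be verified (pools, Sybil attacks) is exactly what is being disputed here.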
2811  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 06:03:53 PM
BGP is not solved if there is Sybil attack vulnerability.

In bitcoin, BGP is solved to within the stated tolerance of 51% byzantine faulty nodes.

Satoshi's PoW does not distinguish between faulty and non-faulty nodes.

The following is not an "other attack"; rather it is a Byzantine fault (because the loyal participants can't be certain of Consistency by keeping the control of the hashrate below 51%).

There is no way to distinguish a 51% attack from a non-attack, e.g. censoring transactions, in a way that is provable with block chain data (i.e. to an offline node that comes online). One of the key innovations in my design is that it is possible for a payer to send his PoW share away from a "pool" (not exactly the same as a pool in Bitcoin) that is provably (from that payer's individual perspective) responsible for censoring the transaction.

Since nothing about faults is provable from the block chain, there is no provable Consistency (w.r.t. what loyal nodes would consider a fault, e.g. censoring txns) and thus the BGP has not been solved.

We use community monitoring to estimate that we have Consistency, but this can't be proven objectively just from the block chain. We must correlate user experiences and other data points such as pool concentration.
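As a rough illustration of why such monitoring is only statistical evidence and never a proof (my own simplified model, ignoring fees and mempool policy): if a fraction q of the hashrate censors a given transaction but does not orphan other miners' blocks, the chance it is still unconfirmed after n blocks is about q^n, so a censoring minority can only delay it. A true >50% censor that orphans competing blocks can delay it forever, and from the chain data alone that is indistinguishable from the transaction never having been broadcast:

Code:
def prob_still_unconfirmed(q: float, n: int) -> float:
    """Chance a transaction that every non-censoring miner would include
    is still unconfirmed after n blocks, if a fraction q of the hashrate
    censors it without orphaning other miners' blocks."""
    return q ** n

for q in (0.30, 0.51, 0.90):
    print(q, [round(prob_still_unconfirmed(q, n), 6) for n in (6, 36, 144)])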

A Sybil attack against the means by which loyal participants determine whether 51% control has perhaps been ceded to pools removes one of the key data points.

So we can conclude Bitcoin didn't solve BGP because there is no block chain objectivity about faults. And then we can say that Sybil attacks on pools destroy one of our subjective metrics for community appraisal of Consistency.
2812  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 05:48:17 PM
Yes, Bitcoin solves BGP (in some way)...

Quote
There is no Sybil attack possible on the problem as stated. "A majority of CPU power" is a physical quantity which can't be Sybil attacked. Period.

True, but there are other "attacks". Such as calling up Chinese miners and convince them to do a certain thing.

Incorrect again.

BGP is not solved if there is Sybil attack vulnerability. The following is not an "other attack"; rather it is a Byzantine fault (because the loyal participants can't be certain of Consistency by keeping the control of the hashrate below 51%). Since you have no way to know which pools are controlled by the same entity, and thus which pools have the lowest VERIFICATION costs per block reward (which is very important once you scale Bitcoin to Visa scale), you have no way to know where to send your PoW shares so as to prevent that Sybil attacker from leeching off the other pools and driving them bankrupt, thus centralizing all pools under one control but hidden by a Sybil attack. In other words, the system is GUARANTEED to become 51% attacked due to the economics and the fact that control can be hidden behind a Sybil attack.

I wrote that already upthread and you just don't read apparently.
2813  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 05:44:13 PM
This make no sense to me. When all your memory banks are already busy switching rows on every
(random) memory access, then every additional PoW instance you run will just slow things down.

The bolded statement is not correct in any case. Threads are cheap on the GPU. It is memory bandwidth that is the bound. Adding more instances and/or more per instance parallelism (if the PoW proving function exhibits per instance parallelism) are both valid means to increase throughput until the memory bandwidth bound is reached. Adding instances doesn't slow down the performance of each instance unless the memory bandwidth bound has been reached (regardless of whether the memory spaces of separate instances are interleaved or not).
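A back-of-the-envelope model of that claim (my own simplification with assumed numbers, not a GPU spec): each latency-bound instance completes roughly one random access per latency period, so aggregate throughput grows about linearly with the number of instances until it hits the memory bandwidth ceiling:

Code:
def aggregate_throughput(instances: int,
                         latency_s: float = 400e-9,     # assumed ~400 ns random-access latency
                         bytes_per_access: int = 64,     # assumed one DRAM burst per access
                         bandwidth_bps: float = 300e9):  # assumed ~300 GB/s device bandwidth
    """Random accesses per second for N independent latency-bound instances,
    capped by the memory bandwidth bound."""
    latency_bound = instances / latency_s
    bandwidth_bound = bandwidth_bps / bytes_per_access
    return min(latency_bound, bandwidth_bound)

for n in (1, 10, 100, 1000, 10000):
    print(n, "%.3e accesses/s" % aggregate_throughput(n))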
2814  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 05:28:40 PM
If the odds against a same-page hit are great enough then I agree, and that is why I said increasing the size of the memory space helps. Example: for a 128KB memory space with 32 KB memory banks, the odds of a hit will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.

Why do you say 'no' when I also wrote that the alternative possibility is that the banks are independent:

(or can schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism)



But each bank can only have one of its 2^14=16384 rows active at any time.

My point remains that if there is parallelism in the memory access (whether it be coalescing accesses to the same bank/row or, for example, 32K simultaneous accesses to 32K independent banks), then by employing the huge number of threads in the GPU (ditto an ASIC), the effective latency of the memory under parallelism (not the latency as seen per thread) drops until the memory bandwidth bound is reached.

However, in terms of electricity consumption there might be an important distinction between accesses that are coalesced within one bank (row) versus accesses spread across simultaneously energized memory banks (rows). Yet I think the DRAM power consumption is always much less than the computation, so as I said, unless the computation portion (e.g. the hash function employed) can be made insignificant, electricity consumption will be lower on the ASIC. Still waiting to see what you find out when you measure Cuckoo with a Kill-A-Watt meter.

Why did you claim that memory latency is not very high on the GPU? Did you not see the references I cited? By not replying to my point on that, does that mean you agree with what I wrote about you confusing latency per sequential access with latency under parallelism?



Edit: I was conflating 'bank' with 'page'. I meant page, since I think I mentioned 4KB and it was also mentioned in the link I provided:

http://www.chipestimate.com/techtalk.php?d=2011-11-22

I hope I didn't make another error in this corrected statement. It is late and I am rushing.

Quote from that link:

DDR DRAM requires a delay of tRCD between activating a page in DRAM and the first access to that page. At a minimum, the controller should store enough transactions so that a new transaction entering the queue would issue it's activate command immediately and then be delayed by execution of previously accepted transactions by at least tRCD of the DRAM.

And note:

The size of a typical page is between 4K to 16K. In theory, this size is independent of the OS pages which are typically 4KB each.

Thus again, what I wrote before was correct: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.
2815  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: February 07, 2016, 04:28:22 PM
However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs)

Where do you get those numbers? What I can measure is that a GPU has a 5x higher throughput
of random memory accesses. I don't know to what extent that is due to more memory banks in the GPU
but that makes it hard to believe your numbers.

From my old rough draft:

Quote
The random access latency of Intel's L3 cache [13] is 4 times faster than DRAM main memory [2] and 25 times faster than GPU DDR main memory [14].

[14] http://www.sisoftware.co.uk/?d=qa&f=gpu_mem_latency&l=en&a=9
     GPU Computing Gems, Volume 2, Table 1.1, Section 1.2 Memory Performance

Unfortunately that cited page has disappeared since 2013. You can use their software to measure it. That is referring to one sequential process.

You are referring to the latency when the GPU is running multiple instances or (in Cuckoo's case) otherwise exploiting parallelism in the PoW proving function. Of course the latency drops then, because the GPU is able to schedule simultaneous accesses to the same memory bank (or can it schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism)

Edit: Try these:

http://courses.cms.caltech.edu/cs101gpu/2015_lectures/cs179_2015_lec05.pdf#page=11
http://stackoverflow.com/questions/13888749/what-are-the-latencies-of-gpu

Edit#2: http://arxiv.org/pdf/1509.02308.pdf#page=11

and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction).

This make no sense to me. When all your memory banks are already busy switching rows on every
(random) memory access, then every additional PoW instance you run will just slow things down.
You cannot combine multiple random accesses because the odds of them being in the same row
is around 2^-14 (number of rows).

If the odds against a same-page hit are great enough then I agree, and that is why I said increasing the size of the memory space helps. Example: for a 128KB memory space with 32 KB memory banks, the odds of a hit will only be roughly 1/4 (actually the computation is more complex than that), not 2^-14.

I am not expert on the size of memory banks and the implications of increasing them.
2816  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 04:16:19 PM
"Incorrect. It only proves some information existed at a certain block. There is no way to put a clock time in the block chain."

It seems you really haven't understood the most basic things. Blocks are of course timestamped with actual UTC timestamp from the node that generates it.

You do not understand basic issues.

A 51% attacker can put any time he wants in the block chain.

Honest nodes can sync to a global clock, but this is not guaranteed to be accurate unless an offline node can later prove that the chain was not generated by a 51% attack on the clock records. And of course there is no objective way to prove this, other than trusting the community. And so then you lose the objective, trustless quality.
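For context on how loosely block timestamps are constrained (as I understand Bitcoin's rules; treat the exact figures as my assumption): a block's timestamp only has to exceed the median of the previous 11 block timestamps and not sit more than about 2 hours ahead of the validating node's network-adjusted time, which leaves a majority miner wide latitude over the recorded clock. A sketch of that check:

Code:
from statistics import median

MAX_FUTURE_DRIFT = 2 * 60 * 60  # ~2 hours; my understanding of the future-drift rule

def timestamp_acceptable(new_ts: int, prev_11_ts: list, network_time: int) -> bool:
    """Rough sketch of the two timestamp checks: exceed the median-time-past of
    the last 11 blocks, and not run too far ahead of local network-adjusted time."""
    median_time_past = median(prev_11_ts)
    return median_time_past < new_ts <= network_time + MAX_FUTURE_DRIFT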

This is fundamental and if you don't understand this, then you are not qualified as a block chain expert. Block chains are only objective relative to blocks. Period.

Sorry you lose again. And I know damn well the underhanded methods you are employing to try to discredit me and sorry you will lose.
2817  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 03:59:44 PM
Yes, Bitcoin solves BGP (in some way). It solves also a bunch of other completely unknown problems:

* how to prove some information existed at a certain time

Incorrect. It only proves some information existed at a certain block. There is no way to put [objectively prove] a clock time in the block chain.

* how to create a public ledger of ownership

Incorrect. A longest-chain-rule (LCR) block chain records non-conflicting state transformations. That isn't limited to a ledger of ownership.

* how to issue a currency without requiring a nation state army to enforce scarcity

Incorrect. A block chain can distribute tokens. That doesn't guarantee anything about it becoming a currency and being immune to nation state armies. If not immune (i.e. not defensible against), then 'without' is incorrect. (it doesn't even guarantee the distribution won't be centralized by mining farms)

* how to reach agreement over a communications channel on value

Again you are pigeon-holing what a block chain does. Again a longest-chain-rule (LCR) block chain records non-conflicting state transformations.

E.g. last year there has been 1B$ investment in this area, and there been almost no progress at all in terms of advanced applications (just an increase in noise levels).

Thanks for ignoring my progress and thereby insinuating my sharing/progress has been noise.

I think the possibilities are largely not explored.

I appear to be reasonably skilled at distilling to the generative essence and I will assert that there isn't a large space of possible designs that will work to eliminate the centralization issue. Mine seems to be the only possible one.

Many also don't know the pre Bitcoin designs, Bitgold and b-money, which are also helpful to consider, see http://www.weidai.com/bmoney.txt and https://en.bitcoin.it/wiki/Bit_Gold_proposal . Actually quite surprising since satoshi said Bitcoin is an implementation of those ideas:

Quote from satoshi:
Bitcoin is an implementation of Wei Dai's b-money proposal http://weidai.com/bmoney.txt on Cypherpunks http://en.wikipedia.org/wiki/Cypherpunks in 1998 and Nick Szabo's Bitgold proposal http://unenumerated.blogspot.com/2005/12/bit-gold.html

https://bitcointalk.org/index.php?topic=342.msg4508#msg4508

See also:
https://bitcointalk.org/index.php?topic=583.msg11405#msg11405

Now that is noise or at least veering very far from a solution to the problem this thread raises.
2818  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 03:27:19 PM
...and another incentive structure must be developed to encourage decentralized p2p mining.

Switching to an ASIC resistant PoW coin doesn't solve this problem but merely delays the inevitable. As interest and hash power grows ASICS will be developed within time regardless.

I believe it is possible to design a memory hard PoW that is not electrically more efficient on an ASIC, but it will be very slow. I originally didn't think so, but have since realized I had a mistake in my 2013/14 research on memory hard hashes. It is possible that Cuckoo Hash already achieves this, but it is more difficult to be certain, and it is very slow when DRAM economics are maximized (although it adds asymmetric validation, which is important for DDoS rejection if the transaction signatures are ECC and not Winternitz, and for verification when PoW share difficulty can't be high because each PoW trial is so slow).

Cryptonote's memory hard hash can't possibly be ASIC resistant, because by my computation it could not both achieve 100 hashes/second on Intel CPUs and be ASIC resistant.

See also Zcash's analysis thus far.

Correction follows.

It will be impossible to design a memory hard PoW that is not electrically more efficient on an ASIC, unless the hash function employed (for randomizing the read/writes over the memory space) is insignificant w.r.t. the RAM power consumption, which is probably not going to be the case in any design where that hash function has sufficient diffusion to be secure.

The only way to make an ASIC resistant PoW is for the proving computation to be memory latency bound, because DRAM latency can't be improved much in general (whereas hardwired arithmetic computation and memory bandwidth can be accelerated with custom hardware):

http://community.cadence.com/cadence_blogs_8/b/ii/archive/2011/11/17/arm-techcon-paper-why-dram-latency-is-getting-worse
http://www.chipestimate.com/techtalk.php?d=2011-11-22
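To illustrate what I mean by memory latency bound, here is a toy Python sketch (an illustration only, not my actual design): each step's read address depends on the data just read, so the accesses form a serial dependency chain that extra compute or extra bandwidth cannot shortcut; only the memory's random-access latency sets the pace.

Code:
import hashlib, os

def latency_bound_walk(seed: bytes, memory: bytearray, steps: int = 1 << 16) -> bytes:
    """Toy serially-dependent random walk over a large buffer.
    Each address is derived from the previous read, so steps cannot be issued in parallel."""
    n = len(memory)
    state = hashlib.sha256(seed).digest()
    for _ in range(steps):
        addr = int.from_bytes(state[:8], "little") % (n - 32)
        chunk = bytes(memory[addr:addr + 32])           # the latency-bound random read
        state = hashlib.sha256(state + chunk).digest()  # next address depends on this read
    return state

buf = bytearray(os.urandom(1 << 20))  # 1 MiB buffer, sized small just for the toy example
print(latency_bound_walk(b"block header", buf).hex())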

However, what a GPU (which starts with 4 - 10X worse main memory latency than CPUs) and especially an ASIC will do to get better DRAM amortization (if not also lower electricity consumption due to less latency) is run dozens or hundreds of instances of the proving algorithm with the memory spaces interleaved, such that the latencies are combined and amortized over all instances, so that the effective latency drops (because reading from the same memory bank of DRAM is latency free if multiple accesses within the same bank are combined into the same transaction). This can even be done in software as interleaved memory spaces without needing a custom memory controller. More exotic optimizations might have custom memory controllers and larger memory banks (note I am not an expert on this hardware issue). This is probably why Cryptonote also includes AES-NI instructions, because GPUs are at best at parity in performance per watt on AES, but that won't be enough to stop ASICs.

However, that optimization for ASICs will bump into the memory bandwidth limit, so the amortization will have a limit. Theoretically memory bandwidth can be increased with duplicated memory banks for reads, but not for writes!

Using larger memory spaces in a properly designed memory hard PoW hash function (not Scrypt) can decrease the probability that instances will hit the same memory bank within the sufficiently small window of time necessary to reduce the latency. Also, using wider hash functions (e.g. my Shazam at 2048 to 4096 bits) reduces the number of instances that can be interleaved in the same memory bank (and standard DRAM I think has a bank/page size of 4KB?). The ASIC can respond by designing custom DRAM with larger memory banks and running more instances, but that not only raises the investment required; the memory bandwidth limit for writes seems to be an insurmountable upper bound.
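The arithmetic behind that last point, roughly (assuming a 4KB page, which is my assumption above): a 4096-bit state is 512 bytes, so at most about 8 instances can interleave their states in one page, versus about 128 instances for a 256-bit state, so the per-page coalescing odds drop accordingly:

Code:
PAGE_BYTES = 4 * 1024  # assumed standard DRAM page size

for state_bits in (256, 2048, 4096):
    state_bytes = state_bits // 8
    instances_per_page = PAGE_BYTES // state_bytes
    print("%4d-bit state = %3d bytes -> at most %3d interleaved instances per page"
          % (state_bits, state_bytes, instances_per_page))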

So although I think a memory hard PoW hash can be made which is more ASIC resistant than current ones, I think it will be impossible to sustain parity in hashes/Watt and hashes/$hardware. Perhaps the best will be within 1 to 2 orders-of-magnitude on those.

So all profitably mined PoW coins (with sufficient market caps) are destined to be centralized into ASIC mining farms running on cheap or free electricity, but the scale and rate at which this happens can be drastically improved over SHA256 (Bitcoin, etc).

My design of unprofitably mined PoW will only require that the difficulty from the PoW shares sent with transactions is sufficient to make ASIC mining unprofitable for the level of block reward offered. Keeping the CPU implementation of the PoW prover within 1 to 2 orders-of-magnitude of an ASIC implementation reduces the level of such aforementioned difficulty needed.

I hope I didn't make another error in this corrected statement. It is late and I am rushing.
2819  Economy / Economics / Re: Martin Armstrong Discussion on: February 07, 2016, 09:56:47 AM

Sloanf will disappear and hide under a rock later this year when the markets yet again do everything MA has predicted.

Sloanf refuses to reveal his true identity so that his personal reputation won't be destroyed later this year.

I and some others here have revealed our true identity.

Sloanf is a Sybil attack.
2820  Alternate cryptocurrencies / Altcoin Discussion / Re: Satoshi didn't solve the Byzantine generals problem on: February 07, 2016, 09:17:35 AM
Quote
The stated problem bounds do not include being able to tell whether someone controls >50% of the hash rate. That isn't in the paper at all. The wording of the paper is "As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network". It doesn't matter whether they cooperate via pools or otherwise, either way it is outside the bounds.

Without considering the Sybil attack, one isn't solving the Byzantine fault issue, i.e. isn't solving the Byzantine Generals problem (which is the correct title of this thread). Just because Satoshi failed to mention that he hadn't solved what he was implying to have solved doesn't mean that merely having a majority of the hashrate is the only consideration in a PoW solution to the Byzantine Generals problem.

There is no Sybil attack possible on the problem as stated. "A majority of CPU power" is a physical quantity which can't be Sybil attacked. Period.

The Byzantine Generals problem does not state "A majority of CPU power" as the problem. I already stated that is Satoshi's requirement but as the correct title of this thread points out, Satoshi's stated requirement is not a solution to the Byzantine Generals problem. Period.

One of the attack vectors in solving the Byzantine Generals problem is the Sybil attack. The Byzantine Generals problem is all about the need to trust that more than 2/3 of the generals are loyal without centralization where all generals are the same person, i.e. that there is no Sybil attack.

Anyone who has studied all the variants of consensus algorithms (as I have) will know clearly that Sybil attacks are always resolved via centralization of the protocol.

This is why, as I looked for an improvement over all of what has already been tried, I was cognizant that I would need to accept centralization in some aspect, and so I began to look for the possibility of controlling centralization with decentralization, i.e. a separation of orthogonal concerns, which is often how paradigm shifts arise to solve intractable design challenges.

Every consensus design creates centralization. This will always be unavoidable due to the CAP theorem. The key in my mind is to select carefully where that centralization should be.

  • Satoshi's PoW consensus design centralizes because a) SHA256 has orders-of-magnitude lower electrical cost on ASICs, b) full nodes must centralize (maximize pooled hashrate) to win the battle over who will have the most profitable verification costs (which can be accomplished with a Sybil attack), and c) variance of block rewards requires maximizing pooled hashrate (at least up to double-digit percentages, and Sybil attack incentives kick in from there).
  • Stellar's SCP consensus design centralizes because although it can't diverge, it requires that slices are not Sybil attacked to avoid eternal preemption (being jammed stuck forever).
  • Ripple's consensus algorithm diverges unless it is centralized trust, as confirmed by Stellar's divergence before it switched to the SCP algorithm.
  • Iota's (any DAG's) consensus diverges unless centralization can force the mathematical model that payers and recipients encode in their interaction with the system.
  • Ethereum never solved the issue that verification of long-running scripts can't be decentralized. They are now off on another dead-end tangent (consensus-by-betting, Casper, shards), trying to deny the CAP theorem.
  • PoS is centralization.

Extracting the generative essence of an issue is what I do. That is where I have made my career in the past and will do so again.