3001  Alternate cryptocurrencies / Altcoin Discussion / Re: Synereo - Earn Money Using Social Media on: January 31, 2016, 06:44:26 AM
I will be making some comments as I read the Synereo white paper; a coherent, holistic analysis will hopefully follow after I am done reading the entire white paper. (Note Elokane already admitted to me that the white paper is missing some critical algorithms, such as the ability to filter out low/zero information/entropy actions, e.g. when users Like all their friends' timeline posts irrespective of the value of the content shared.)

End users can run the service on their own computation devices, gaining full control over their data. Thus, Synereo is directly aligned with the shifting trends of the web from a centralized model to a more open, user-centric, distributed architecture. As part of an effort to make the Synereo technology easily accessible to large audiences, we provide an architecture for deploying centralized Synereo web-based gateway services, such that technically-able users - or partner services - can host Synereo nodes as a service to their peers.

Unfortunately I explained a dilemma:

https://bitcointalk.org/index.php?topic=1340057.msg13670558#msg13670558

If the users store their files on their own computers, then there is no way to enforce legal orders, thus the Synereo protocol will be banned by (centralized gateway) hosts. And if Synereo doesn't allow users to host content from their computers, then users can't resist government regulation of their activities. Besides, serving files from user computers over ISPs that have asymmetrically low upload (relative to download) bandwidth is a Tragedy of the Commons, as some ISPs effectively pay for other ISPs' lower upload allowances (which is why Bittorrent is throttled/banned by many ISPs, which thus helps drive the Net Neutrality politics that will enslave us in internet taxation ... which btw I pointed out to Bittorrent in 2008, when I offered a solution and they apparently ignored me... click the link above to read more).

So the point is that hosting illegal content is a non-starter. And thus hosting (at least high-bandwidth or copyrightable) content on user computers is a non-starter.

If we are going to find utility in Synereo, it has to come from gains in users' sense of value from the attention model, which is what I will be analyzing.

It is not clear if there is any advantage to it being decentralized (but I do hope to find one that matters to users). Mostly the masses don't care about decentralization ideology; they care about the value they get out of the social experience (and not just monetized value). However, we are already seeing where the values of certain users (e.g. musicians) are in conflict with the values of certain music promotion sites such as SoundCloud and Spotify. I have also been analyzing these music promotion sites; there are many. And that might be the more important trust that is being violated, not the reporting to the government, which I think is not an actionable cause for most people:

Faced with these numbers, many people are asking themselves: Does it make sense that the value we create simply by sharing our lives online is retained by the people who happened to be the first to provide the infrastructure allowing us to do so? That these social platforms' stated aim is to increase the revenue they can extricate from us? From our basic need to communicate and share ourselves with others?

Indeed, this is how current social networking service providers see their users: as unpaid laborers. As free content creators whose behaviors can be recorded and measured, the data generated auctioned off to corporations. And for many, this may still be fine. The services given are now seen as basic necessities in our digital age, and so perhaps the balance struck between user and service provider is a fair one. However, there are other issues tipping the scale against the incumbents: there's been a breach of trust. The information going into user feeds is being manipulated, and the information going out - including details of our activity outside of Facebook - is being handed over to governmental authorities; privacy settings be damned.

I didn't know this:

Ello, a recent attempt to create an environment where users aren't monetized through ads, exploded in popularity within a few months of its launch, registering 1 million users and keeping 3 million more on its waiting list as it went on to scale its centralized technology. [17]


Please refer to prior discussion here:

https://www.reddit.com/r/ethtrader/comments/42rvm3/truth_about_ethereum_is_being_banned_at/

Note I am ideologically in support of a decentralized concept. But economics rules ideology. So let's see where the chips fall in terms of analysis. As the preface to the white paper says, a manifesto is not sufficient. One must also have an economic plan.
3002  Alternate cryptocurrencies / Altcoin Discussion / Re: Synereo - Earn Money Using Social Media on: January 31, 2016, 05:54:52 AM
Let me attempt a "back of the napkin" guesstimate.

Normally ads pay between $1 - $10 per CPM (thousand impressions), but it can be even less if there is very low CTR, low sales conversion/brand recall, or too many ads on the same page. For starters let's assume the revenue distribution is egalitarian (i.e. uniformly distributed among users), and assume a typical user views 100 ads per day. So that is $0.10 to $1 per day in revenue for each user. That isn't even a third world wage any more.
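
To make the back-of-the-napkin arithmetic explicit, here is a minimal Python sketch; the 100 ads/day and $1 - $10 CPM figures are simply the assumptions stated above, not measured data:

   ads_per_day = 100                 # assumed impressions viewed per user per day
   for cpm in (1.0, 10.0):           # assumed revenue per 1000 impressions (USD)
       per_day = ads_per_day / 1000.0 * cpm
       print("$%.0f CPM -> $%.2f/day, ~$%.0f/month" % (cpm, per_day, per_day * 30))
   # $1 CPM -> $0.10/day, ~$3/month; $10 CPM -> $1.00/day, ~$30/month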

Perhaps with multiple ads on the same page and very well targeted ad content, we could raise that by a factor of 10. Then it might be worthwhile to someone in a third world country, but it doesn't seem like it will ever be worthwhile to someone in a developed nation. Okay, $300 per month might be worth it to some kids who live with their parents in a developed nation, but that isn't going to constitute a significant portion of the economy.

Please note that the $300 per month upper end of the guesstimate is for targeting adults in Western countries with higher paying jobs that enable them to spend money. But that $30 - $300 monthly guesstimate is not worth their time to waste viewing 100 - 1000 ads per day on a social network.

The Western teenager demographic and third world users have less money to spend on the products advertised, thus the ads pay less. So that would not be $300 for a teenage or third world user; more likely in the realm of $10 per month. And that is a waste of their time as well.

Paying users to view ads is not an economic model (ditto paying users to solve CAPTCHAs, such as for crypto coin faucets). If it were, many sites would be doing it successfully, and teenagers and the third world would be employed doing it.

While Chinese consumers are increasingly listening to music on licensed services, the most popular services are free and supported by advertising, generating very little revenue for record companies.

As shown by Information is Beautiful’s updated-for-2015 visualization of the subject, signed artists make .0019 cents per stream on Pandora and .0011 cents per Spotify stream. The worst payout of all for musicians, however, comes from Youtube, which pays out about .0003 per play. An artist signed to a record label would thus have to have their Youtube video played 4,200,000 times in order to earn the monthly U.S. minimum wage of $1,260.
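
A quick sanity check on those quoted figures (for the $1,260 arithmetic to work, the per-stream rates must be read as dollars, not cents):

   # Plays needed per month to earn the quoted $1,260 minimum wage.
   for service, usd_per_stream in (("Pandora", 0.0019), ("Spotify", 0.0011), ("Youtube", 0.0003)):
       print("%s: %.0f plays" % (service, 1260.0 / usd_per_stream))
   # Youtube: 1260 / 0.0003 = 4,200,000 plays, matching the quote.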

Pharrell’s ‘Happy’ Streams 43 Million Times, Makes Less Than $3,000

Paying a site for advertising, since it aggregates users and earns $1 - $10 per user per month, is a viable economic model:

Facebook recently acquired WhatsApp for 19b USD, paying 42 dollars per user. [11] Similarly, the value of Facebook and Twitter users is often calculated through their market cap - currently at 141 and 81.5 dollars, respectively.
3003  Economy / Economics / Re: Martin Armstrong Discussion on: January 31, 2016, 05:15:56 AM
The following zerohedge-like article's timing couldn't be worse, with the market ripping up due to Japanese NIRP the next morning on Jan 29th, Fri.

Is a Slingshot Move Setting Up? dated Jan 28th, Thurs.

We have penetrated last year’s low in the cash S&P500, but not in the Dow yet. The Yearly Bearish does not come into play until we get to the bottom of the upward channel. Penetrating last year’s low is indeed setting the stage for the Slingshot. Everything on our models is clearly pointing to this trend extending into 2020, and instead of concluding by 2017 that will be the starting point.

We NEED the vast majority to get really bearish calling for the end of the world and declines of 50% to 90%. This is the fuel for the Slingshot we need in place, just as we saw in 1987.

He is writing about the prospect for the USA markets (a decline into a V bottom and slingshot rebound), which is within his long-standing prediction for the USD and US stocks to be bullish as the rest of the world collapses (and capital runs to the USA for safe haven). He has had two or three possible scenarios since at least 2012: US stocks could pause around the 2015.75 ECM turn date, or they could phase transition up to a 28 - 40K top before the turn date. It became clear on the 2014 close (for which btw he nailed the price of oil, predicting publicly in his blog when oil was $100+ that it would close at $54) that the former scenario was the most likely.

Thus he continues to predict accurately on the longer-term.

When you idiots don't even comprehend what MA is writing about, you have no place to come here and troll this thread with your ignorance and lack of comprehension.

Please stop trolling, idiots.
3004  Economy / Economics / Re: Martin Armstrong Discussion on: January 31, 2016, 05:02:55 AM
Oh here we go again. You think if you keep repeating something, it somehow will turn into facts. Show me the documentation and everything else that prove your and MA’s claims. I brought to you several sources on which I based my conclusion here

I have presented all of this documentation innumerable times. Go read all of AnonyMint's archives. Then come back in one month after you are done.

Also you could watch the movie about Armstrong, which was vetted by the attorneys of the film's insurance company. I already told you that, but you refuse to go buy the movie.

Please stop trolling. It is not my job to repeat again what I already provided in my archives. It is your job to stop whining and go do your homework.

The so-called "facts" you claim to present are very poorly researched, combed through very loosely, and have already demonstrated that you don't even comprehend what he has been writing; thus they are not facts. I will not go backwards to re-summarize all the posts I made over 3 years. You go read them and learn.

MA is not God and he is not perfect. But you are slandering him with errors. As well, you have no objectivity, because you had clearly formed an opinion before you even started your "research", as is evident by the way you attack all of us by implying we are religious nutcases.

And doing it from a newbie account and you won't tell us who you are. You are not even confident enough to put your personal reputation on the line, yet you put MA's reputation on the line. Everyone knows my real identity.

Do you understand that you are wasting and consuming my very scarce and important time? And this is making me and others here angry.


Please stop trolling. Please.

You should really start learning how to read, otherwise it’s going to get even worse. All you mentioned is exactly what MA claims.  He claims that he is dealing with empirical science like physics. That’s why he always brings a lot of bs on Mandelbrot set and other stuff everywhere regardless of any relevance. He claims that markets are not random walk and everything not only can be predicted but is predicted by him and\or his computer (which nobody has ever seen). He also claims that all those variables are just noise, market manipulations don’t exist\or don’t affect markets, natural disasters go in cycles and therefore can be predicted and are predicted by him and so on. That’s precisely what he is selling: the claim that markets can be predicted like physics and is predicted by him and only by him\his computer.

And predicting quite accurately, in spite of your inability to understand what he has predicted.

I already refuted your nonsense about his real estate prediction being wrong. His chart on real estate has never changed. It always predicted a bounce from 2007 to 2012, but not to new highs in internationally inflation-adjusted value. You don't pay attention to that detail. His model is global, not domestic, so it uses internationally inflation-adjusted value.

You decided that what he does is impossible, but you didn't decide that based on objective science and research; it is just your opinion that what Armstrong does is impossible.

Real Estate Cycles & International Value
Posted on January 21, 2016 by Martin Armstrong   

[Chart: Case Shiller 1890]


QUESTION: Mr. Armstrong, your real estate cycle turned up from 1955. It does not match the Case-Shiller index, which peaked in the 1890s, bottomed in 1920, and then began to rally after 1940 into the 1955 period. Something seems strange with that index given the huge Florida real estate bubble which burst in 1927. Can you explain why the Case-Shiller seems to be off so much? Here is a chart that has been going around the Web.

Thanks



ANSWER: This is the typical problem with people creating an index and then trying to extend it back in time. They ALWAYS ignore the currency and project purely a domestic view. During the 1890s, J.P. Morgan had to bail out the U.S. Treasury for it was dead broke. As people feared the government would declare bankruptcy, private assets rose in NOMINAL terms. This was matched by the massive exit of foreign capital from the USA.

The Case-Shiller index bottoms in 1920, but this was the point of a massive rise in the dollar's value. Foreign capital poured into the USA to park because of World War I. This, in turn, led to wild speculation in Florida, which, as you correctly stated, burst in 1927. Because this index is national, it also suppresses regional booms. As real estate peaked in Florida, the hot money then shifted to stocks, creating the Phase Transition into 1929. It was this capital flow between asset classes into stocks where that concentration led to the 1929 bubble.

The Case-Shiller index, which suddenly rose from the Great Depression, does not take into account the dollar devaluation that sparked that rise as it did in equities. That was virtually a 60% devaluation of the dollar that moved it from $20 to $35 on a gold standard by FDR. Was that rise “real” or currency related? Sorry, the real rise begins post-war from 1955. That was the real housing boom.

The Case-Shiller does not accurately reflect the changes in currency. One must look at everything in terms of international value before they can see if they really made money or just broke even because the currency declined. From a value perspective, the 1929 high was more than three times that of the 1890s. So the high of the 1890s was purely a rise due to the collapse in the dollar; it was the hallmark of the panic of 1893 and was best expressed in Grover Cleveland’s speech before Congress.

We use international value rather than nominal local currency. If you fail to use the international value, the end result will always be erroneous for you will NEVER see when foreign capital will rush in or flee. You simply must look at the world in this manner or you will lose your shirt and everything else. Those who thought gold was in a bull market after the 1980 Crash, lost a lot because they failed to grasp international value.
3005  Alternate cryptocurrencies / Speculation (Altcoins) / Re: Is Ethereum a bubble? on: January 31, 2016, 04:58:21 AM
Ethereum is a platform. So many applications can be built on it. So the application is limitless and will increase over time.

I have already explained that isn't the case. But anyway, never mind: just invest in the slogan, not in the truth. The slogan may be more important than the truth, since slogans spread virally amongst n00bs while truth is impeded by lack of knowledge.
3006  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 31, 2016, 04:41:06 AM
Proving data is stored on a decentralised node is something of an ongoing project.  

Others are looking at the issue:

https://bitslog.wordpress.com/2015/09/16/proof-of-unique-blockchain-storage-revised/

So far I think PoW for nodes to validate blocks or data they contain is an interesting approach.

Afaics, all of these proof-of-storage/retrievability designs are fundamentally flawed (MaidSafe, Storj, Sia, etc) and can't ever be fixed.

They try to use network latency to prevent centralized outsourcing, but ubiquitously consistent network latency is not a reliable commodity. Duh. Sorry!

Sergio is also the one who started the idea for a DAG, which I have explained is fundamentally flawed (and afaics his white paper for DagCoin even had an egregious/fundamental error). Find that info in the thread linked above.
3007  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 31, 2016, 04:35:45 AM
Note the 5.5 minute per GB proving time for your Cuckoo Cycle PoW (and slower on mobile devices I presume) is entirely unacceptable for my design for decentralizing Satoshi's PoW design by having each payer include a PoW share. Even one minute is too slow, so then you aren't even monetizing more than a fraction of the CPU's main memory, which drives the performance per $ of hardware economics.

Less time translates to less memory. A graph size of 2^26 takes about 8MB of memory and 5 sec runtime single-threaded. But you might need several runs to find a cycle of the required length.
You'd probably want to accept a whole range of lengths, like 16-64, to reduce the variance.

Yeah, but my point is that then we haven't monetized the DRAM of the computer relative to the professional mining farm. The user paid the entire purchase price for the computer, not just 8MB of his 2+GB DRAM. As I wrote upthread, this impacts the ratio of unprofitable miners to those who desire to be profitable that is required to make all mining unprofitable in my design (it also has impacts in a profitable PoW mining design such as Satoshi's). The economics of mining is electricity and hardware amortization. For a memory hard scenario, afaics the DRAM amortization becomes a more significant factor in the economics equation.

I figure I can probably do 0.5 GB in roughly 5 seconds or less (perhaps under a second; I need to restudy my late 2013/early 2014 Shazam 512B variant of Chacha work in a new context). I lose the asymmetric verification of yours, but gain much better proving times relative to memory amortized (and this wouldn't be vulnerable to probabilistic graph-theoretic analysis breakage of your NP problem). Nevertheless, afaics (thus far) it doesn't help me with the music marketing strategy. It may help with my decentralized PoW design; I need to study that aspect. I am more focused on marketing strategy today.
3008  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 31, 2016, 04:01:26 AM
Also I am wondering if, instead of hitting the memory bound with one instance, you tried running multiple instances on the GPU, since I assume you are not using 6GB for each instance? In this case the tradeoff between increased computation (thus masking memory latency) and memory bandwidth may be more favorable to the GPU?

Cuckoo Cycle, even using smaller amounts of memory, like 512MB, consumes practically all memory bandwidth of a GPU. So the GPU having 4GB or 6GB is irrelevant. If you run two 512MB Cuckoo instances, then each runs at about half the speed due to memory contention. What happens is that the memory used by Cuckoo Cycle is spread across all the memory banks (using let's say 1/8th of each) as this allows the maximum throughput. Each memory bank suffers a row switching delay on every memory access, forming the bottleneck. Running two instances means they have to take turns getting results from each memory bank, so there's no increase in throughput. No amount of extra computational resources changes that.

I've measured the same phenomenon on CPUs.

I know that of course. What I was getting at is whether the parallelization of one instance was more costly in terms of memory accesses versus running a separate instance. It has been 2 years since I studied that Cuckoo Cycle graph NP problem briefly (and I have no available time or human brain bandwidth to delve into it today), so I didn't know the answer. See also my other reply above (which I even edited) while you were replying.

Note the 5.5 minute per GB proving time for your Cuckoo Cycle PoW (and slower on mobile devices I presume) is entirely unacceptable for my design for decentralizing Satoshi's PoW design by having each payer include a PoW share. Even one minute is too slow, so then you aren't even monetizing more than a fraction of the CPU's main memory, which drives the performance per $ of hardware economics.

Designing a holistic PoW system for crypto currency has many variables, including marketing strategy.
3009  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 31, 2016, 03:48:22 AM
Was that running all threads of the i7 versus all compute units of the GPU? Did you maximize the # of instances each could do with its available compute units, i.e. normalize for if the GPU has 8 or 16GB as necessary to max out its FLOPS?

That's with maxing out either cores (CPU) or memory bandwidth (GPU).

But not necessarily maxing out CPU computation, since the NP problem is memory latency bound, i.e. cores spend a lot of time idle. This is why TDP isn't a reliable estimate of what is going on. I assume you know the CPU has 25 times lower main memory latency than the GPU. So the CPU may be sucking a lot more electricity than the GPU, which is hitting its bound on memory bandwidth (and not latency, thus not maximum masked parallel computation).

I am shocked that you haven't purchased a $20 Kill-A-Watt meter, given you have invested so much of your time in this. However, I believe these may not be available in some countries in Europe. In the USA, we can order one from Amazon.

I see you said you maxed out memory bandwidth, but what about trading some memory for 10X more computation until the memory bandwidth bound and computation bound (FLOPS) are matched?

The only known trade-off uses k times less memory but 15k times more computation *and* memory accesses.

Perhaps there might be a probabilistic approach. I really haven't studied the NP problem you are applying. It worries me that you assume there aren't other tradeoffs when this is not a serial random walk. Your claims are graph-theoretic only, not information-theoretic (as is the case in a perfectly uniformly distributed serial random walk). How extensively has this specific NP problem been studied?
3010  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 31, 2016, 02:45:13 AM
tromp, the bottom line is that when I wrote that document I was trying to find a way to make a PoW where CPUs would be as power efficient as GPUs and ASICs, so that a professional miner wouldn't be more profitable than a CPU miner. That was back when I hadn't yet contemplated the solution of making mining unprofitable. And in the portion of the paper I omitted, I concluded that I had failed (even with fiddling around the various SRAM caches on Intel CPUs). Even if one makes a Scrypt-like random walk through memory entirely latency bound, the GPU can thus run multiple instances until the latency is masked by computation and it becomes computation bound or memory bandwidth bound. And I believe in both cases the GPU will then be more efficient on the computation being performed.

What Cryptonote's Cryptonite PoW hash apparently does is make it impossible to run enough instances on a typical GPU (with its limited memory of say 6GB, unless one were to customize one) to overcome the AES-NI instructions incorporated into the memory hard algorithm, since the GPU is apparently only at par in computational efficiency on AES-NI. Or, what I think is really going on but haven't confirmed: Cryptonite is AES-NI bound, so the GPU remains at parity. Which is exactly the direction I investigated next in 2014 after abandoning memory hard PoW (even the NP complexity class asymmetric variant such as yours). Also CN attempts to fiddle around the sizes of the various SRAM caches, but that can be a pitfall in scenarios such as ASICs or Tilera or other hardware competitors.

So that is why I had abandoned memory hard PoW and investigated a specific instruction in the AES-NI instruction set which appears to have the highest level of optimization in terms of power efficiency, as far as I can estimate. This also meant the PoW hash could be very fast (noting Cryptonite is slow), which would help with asymmetric validation of PoW shares in the DDoS scenario (although in my latest 2015 coin design I can verify Merkle signatures orders-of-magnitude faster than any memory hard PoW hash, so the point becomes irrelevant).

I have recently entertained the thought that the only way to make them (nearly) equal with a memory hard approach would be for there to be no computation, or for the computation to be so small as to require an inordinate amount of total RAM, or where the memory bandwidth bound would limit the computation's latency masking to an insignificant portion of total power consumed. But this may not be easy to accomplish because DRAM is so power efficient. I also noticed an error in my earlier thought process in my rough draft paper, where I hadn't contemplated another way to force a serial random walk that much more strongly resists the memory vs. computation tradeoff and for which the computation would be very tiny relative to memory latency. And now my goal is no longer for them to be equal (besides, even if they were equal, the mining farms have up to an order-of-magnitude cheaper electricity and more efficient amortization of power supplies), but just to be within say an order-of-magnitude, because I am targeting unprofitable mining and that ratio dictates the ratio of unprofitable miners to those miners who desire to be profitable that is required for all miners to be unprofitable. This approach might be superior to the specific AES-NI instruction I had designed a PoW hash around in 2014.

But the main reason I revisited memory hard PoW is because I can't optimize an AES-NI instruction PoW hash from a browser (no native assembly code, and because on mobile phones WebGL means a GPU or ASIC is orders of magnitude more power efficient and hardware cost efficient), which impacted a marketing strategy I was investigating. However, I concluded last night that the marketing strategy I was contemplating is flawed, because there isn't enough value in the electricity (and memory cost) consumed by the PoW hash to give sufficient value to computing the hash for transferred income (even if unprofitable) on a mobile phone. It turns out that marketing is much more important than PoW in terms of a problem that needs to be solved for crypto currency. The income transfer would make music download bandwidth profitable, but that is peanuts compared to the value generated by social media advertising and user expenditures. I am starting to bump up against some fundamental marketing barriers, e.g. microtransactions are useless in most every scenario (even music!), mobile is the future, and there isn't enough electricity to monetize anything from PoW (besides, competing on electricity consumption due to computation is a losing strategy w.r.t. GPUs and ASICs). The money to be made in social media isn't from monetizing the CPU/DRAM nor the users' Likes as Synereo is attempting, but from creating value for the users (then either profiting on the advertising and/or the users' expenditures on what they value). This unfortunately has nothing to do with crypto currency, although it is very interesting to me and requires a lot of fun programming & design, so I am starting to get frustrated with crypto currency as being an enormous time waster for me. Three years of researching and still not finding a sure project to work on in crypto currency that doesn't just devolve to a P&D to speculators, because crypto currency has no significant adoption markets (subject to change as I do my final thinking on these fundamental questions before I quit crypto).

Bottom line is your Cuckoo Cycle PoW can be parallelized, and so the GPU can employ more computation to mask some of its slower main memory latency, up to the memory bandwidth bound. With a guesstimated power efficiency advantage of perhaps 2 - 3X, although you have only estimated that from TDP and not actually measured it. It behoves you to attempt to measure it, as it is a very important metric for considering whether to deploy your PoW. The reason it can be parallelized is that the entropy of the data structures is not a serial random walk (which is the point I was trying to make in my poorly worded text from the rough draft of an unpublished paper). Also I am wondering if, instead of hitting the memory bound with one instance, you tried running multiple instances on the GPU, since I assume you are not using 6GB for each instance? In this case the tradeoff between increased computation (thus masking memory latency) and memory bandwidth may be more favorable to the GPU?

Note that the other paper's proposed PoW can also be parallelized up to the memory bandwidth bound, but afaics they didn't measure relative power efficiency:


Quote
thus the total advantage of GPU over CPU is about the factor of 4, which is even smaller than the bandwidth ratio (134 GB/s in GTX480 vs 17 GB/s for DDR3). This supports our assumption of very limited parallelism advantage due to restrictions of memory bandwidth


tromp, I think you will remember the discussion we had in 2014 about ASICs, where I was claiming it could be parallelized and you were pointing out the bandwidth limitation at the interconnect between IC chips, due to the fact that the memory can't fit on the same silicon as the computational logic (or that not all the memory can fit on one chip).

So what I am really saying above is that afaics the fundamentally important invention of lasting value that I have found for crypto is unprofitable mining. I haven't decided yet whether to pursue that to implementation.
3011  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 30, 2016, 07:14:57 PM
What I am saying is that the entropy of your problem space is large but limited, which is indeed because the confusion and diffusion injected into the memory space is not entirely randomized over the entire memory space allocated to the PoW computation. Duh. Which is precisely what Andersen discovered when he broke your Cuckoo Cycle as I warned you would be the case. Quoting from the above paper:

You're living in the past quoting 2 papers that both focus on an early 2014 version of Cuckoo Cycle.

What David Andersen did in April 2014 is to reduce memory consumption by a factor of 32, which became part of the reference miner in May 2014, well before my Cuckoo Cycle paper was published at BITCOIN 2015.

The paper says:

Quote
¹ The project webpage [37] claims Andersen's optimizations to be integrated into the miner, but the performance numbers are mainly unchanged since before the cryptanalysis appeared


What is the performance per Watt and performance per $ hardware comparing CPU and GPU now for reference miners?

It should be possible to use the superior FLOPS of the GPU to trade less memory for more computation and/or parallelization, thus giving the GPU an advantage over the CPU. Maintaining parity for the CPU was the entire point of a memory hard PoW algorithm. Also, the more parallelization, the lower the effective latency of the GPU's memory, because latency gets masked by computation proceeding in parallel. Up to the limit of memory bandwidth (which is very high on the GPU as you know).

Edit: I will need to study this when I am not so sleepy, to give it proper thought.

Edit#2: you added the following to your post after I replied to it:

But you don't need to read that paper to learn of the linear time memory trade off, which is right on the project page:

"I claim that trading off memory for running time, as implemented in tomato_miner.h, incurs at least one order of magnitude extra slowdown".

Btw, there is maximum entropy in the bitmap of alive edges once 50% of them have been eliminated.

But the GPU gets that computation for free because it is masked by the latency of the random accesses. That is why I asked for some performance figures above comparing CPU and GPU. I haven't looked at your project page for 2+ years.

Edit#3: so apparently about a 3X advantage in rate per watt, since the TDP of the GPU you cited is 165W (including 4 GB RAM) and the i7 afair is about 125W:

https://github.com/tromp/cuckoo#1-memory-bank--1-virtual-core--1-vote

Was that running all threads of the i7 versus all compute units of the GPU? Did you maximize the # of instances each could do with its available compute units, i.e. normalize for if the GPU has 8 or 16GB as necessary to max out its FLOPS? I see you said you maxed out memory bandwidth, but what about trading some memory for 10X more computation until the memory bandwidth bound and computation bound (FLOPS) are matched?
3012  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 30, 2016, 06:52:44 PM
Verification—that a hash output is valid for a given input—should be orders-of-magnitude more efficient than computing the hash.

The validation ratio when used in a proof-of-work system also depends on how fast the hash is, because validation only requires one hash whereas solving proof-of-work requires as many hashes as the collective difficulty or more importantly the pool's minimum share difficulty.

Cuckoo Cycle is not a hash.

I was using the term 'hash' to mean a proof-of-work problem with a verifier, in the sense defined here:

https://eprint.iacr.org/2015/946.pdf#page=2

I wasn't focused on defining terms (in an informal internal paper for my own analysis that I never prepared for publishing), but rather on analyzing the issues pertaining to memory hardness, such as resistance to parallelization and/or trading computation for space, as should be obvious from my analysis of Scrypt.

The Cuckoo Cycle hash [19] significantly increases the asymmetric validation ratio, but has unprovable security because its algorithm is based on permutated orderings which do not incorporate diffusion and confusion [20]

Diffusion and confusion are properties of hash functions. Cuckoo Cycle is not a hash function.
You need to get your basics straight.

Careful of that condescending attitude. You are starting to act like Shen-noether, Gregory Maxwell and other condescending jerks that have come before you.

What I am saying is that the entropy of your problem space is large but limited, which is indeed because the confusion and diffusion injected into the memory space is not entirely randomized over the entire memory space allocated to the PoW computation. Duh. Which is precisely what Andersen discovered when he broke your Cuckoo Cycle as I warned you would be the case. Quoting from the above paper:


A more promising scheme was proposed by Tromp [37] as the Cuckoo-cycle PoW. The prover must find a cycle of certain length in a directed bipartite graph with N vertices and O(N) edges. It is reasonably efficient (only 10 seconds to fill 1 GB of RAM with 4 threads) and allows very fast verification. The author claimed prohibitive time-memory tradeoffs. However, the original scheme was broken by Andersen [6]: a prover can reduce the memory by the factor of 50 with time increase by the factor of 2 only. Moreover, Andersen demonstrated a simple time-memory tradeoff, which allows for the constant time-memory product (reduced by the factor of 25 compared to the original). Thus the actual performance is closer to 3-4 minutes per GB¹. Apart from Andersen's analysis, no other tradeoffs were explored for the problem in [37], there is no evidence that the proposed cycle-finding algorithm is optimal, and its amortization properties are unknown
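
Unpacking Andersen's numbers from that quote: memory reduced by a factor of 50 at only twice the time means the time-memory product falls by a factor of 25, consistent with the "constant time-memory product (reduced by the factor of 25)" claim.

   mem_factor  = 1.0 / 50            # memory reduced 50x
   time_factor = 2.0                 # time increased only 2x
   print(mem_factor * time_factor)   # 0.04 = 1/25 of the original product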



Although its entropy is large, i.e. the factorial permutation of all buckets in the hash space, it doesn't require that its entire memory space be accessed, thus possibly bit flags could be employed to reduce the memory used and make it faster.

There is no "entire memory space". Cuckoo Cycle uses memory for data structures, not for filling up with random garbage like scrypt does.

You lack imagination and abstract conceptualization skills. Think about the algorithm from the analogy that applies, and discover new insights.

Think about the entropy of the memory allocated for the data structures. If the entropy is not maximal, one can in theory find a way to shrink that memory space by trading computation for memory, and then your algorithm is no longer memory hard. As well, if one can parallelize computation within the same memory structures, it is no longer memory hard. And if the entropy is not maximized, there is no evidence that a faster algorithm can't be found.

The "random garbage" is precisely necessary to maximize entropy and eliminate the possibility of trading memory for computation and/or parallelizing the computation. Even the above paper's proposal seems to give up parallelization to the GPU (although they claim only by an order-of-magnitude, and that is not counting that the GPU has multi-GB of memory and can run more than one instance if not memory bandwidth bounded).

I agree that Cuckoo Cycle can be parallelized. In fact the GPU solver works best with 16384 threads. But it's only marginally faster than 4096 threads, because they're saturating the random access memory bandwidth.

Not just parallelization, but also the memory for computation time tradeoff, as I predicted and as is now confirmed:

"Andersen [6]: a prover can reduce the memory by the factor of 50 with time increase by the factor of 2 only."
3013  Alternate cryptocurrencies / Speculation (Altcoins) / Re: Is Ethereum a bubble? on: January 30, 2016, 02:48:12 PM
Again, people shouldn't keep this at arm's length.  Imagine if it really does become the 'world computer' with a billion users, and billions of smart contracts, the blockchain would be too huge.

There won't be any block chain that scales to the significant global usage where most users (and miners) run full nodes. Simply can't be done.

Centralization is required for scaling. The only solution I thought of is to control the centralized full nodes with decentralized power. Keep the mining power in the hands of the users.

Agreed. You need trust to scale. Mining = creating coins + verification. It would be better to talk about verifiers or auditors rather than miners.

Trust & verify (statistically).
3014  Alternate cryptocurrencies / Altcoin Discussion / Re: Synereo - Earn Money Using Social Media on: January 30, 2016, 02:46:47 PM
Normally ads pay between $1 - $10 per CPM (thousand impressions),

In my experience that's too high by a factor of 10 - 100 for CPM ads. Maybe CPA you might get closer to $1 per action, but they're generally pretty low quality ads, which you wouldn't want on a social network.

Are you sure? I am saying $1 - $10 paid per 1000 displays of the banner ad. During the dot.com bubble it was as high as $40. Has it declined now below $1?

Last time I looked into this (a good 5 years ago, mind) you'd be lucky to get $0.4 CPM on google adsense, which is the highest paying service.

You mean the website owner's earnings. That is after Google takes its cut, right? So figure the advertiser is paying roughly $1 CPM. Synereo intends to cut out the middleman.
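
As a sketch of that middleman arithmetic (the publisher revenue-share fractions below are my assumption, not quoted figures):

   publisher_cpm = 0.40              # the quoted adsense figure above
   for share in (0.4, 0.5, 0.68):    # assumed publisher revenue shares
       print("share %d%% -> advertiser pays ~$%.2f CPM" % (share * 100, publisher_cpm / share))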
3015  Alternate cryptocurrencies / Speculation (Altcoins) / Re: Is Ethereum a bubble? on: January 30, 2016, 02:41:35 PM
Again, people shouldn't keep this at arm's length.  Imagine if it really does become the 'world computer' with a billion users, and billions of smart contracts, the blockchain would be too huge.

There won't be any block chain that scales to the significant global usage where most users (and miners) run full nodes. Simply can't be done.

Centralization is required for scaling. The only solution I thought of is to control the centralized full nodes with decentralized power. Keep the mining power in the hands of the users.
3016  Economy / Economics / Re: Martin Armstrong Discussion on: January 30, 2016, 02:36:37 PM
"There is always someone around who will take your gold."    (In payment, not steal it)

If you need to buy or barter for something, even if for very high value, you can find someone who will give you what you need for your gold.  At this moment that is not true for Bitcoin.  Sure there are people around who will do some trades for your BTC, but not that many, nor in large amounts.

I'd guess the number of people who will accept BTC at localbitcoins.com is roughly the same (within an order-of-magnitude) as the number of pawnshops in the world that will take your gold coin. Here in the Philippines I bet you'd be lucky to get 50% of the spot price value at a pawnshop.

The key difference in my mind is that with gold you can find a physical buyer whom you can trust won't kill you (e.g. a storefront pawnshop or gold dealer), whereas meetups with Bitcoin are going to be with a stranger (and risky for both parties). This is important in the sense of needing to get cash and not wanting it to pass through a bank account.

Once the NWO outlaws cash (and regulates gold dealers, though the lack of cash will kill the black markets too), then there will be no advantage for gold any more. You will have to sell it for digital cash. Barter will become extremely rare to find.

OROBTC, I am sorry to tell you that gold is dying. The digital cash age will kill it and everyone will throw their gold into the streets as the Bible says. We may not get there entirely though until 2032 or so.
3017  Economy / Economics / Re: Bitcoin or Gold? What would you pick? on: January 30, 2016, 02:35:13 PM
"There is always someone around who will take your gold."    (In payment, not steal it)

If you need to buy or barter for something, even if for very high value, you can find someone who will give you what you need for your gold.  At this moment that is not true for Bitcoin.  Sure there are people around who will do some trades for your BTC, but not that many, nor in large amounts.

I'd guess the number of people who will accept BTC at localbitcoins.com is roughly the same (within an order-of-magnitude) as the number of pawnshops in the world that will take your gold coin. Here in the Philippines I bet you'd be lucky to get 50% of the spot price value at a pawnshop.

The key difference in my mind is that with gold you can find a physical buyer whom you can trust won't kill you (e.g. a storefront pawnshop or gold dealer), whereas meetups with Bitcoin are going to be with a stranger (and risky for both parties). This is important in the sense of needing to get cash and not wanting it to pass through a bank account.

Once the NWO outlaws cash (and regulates gold dealers, though the lack of cash will kill the black markets too), then there will be no advantage for gold any more. You will have to sell it for digital cash. Barter will become extremely rare to find.

OROBTC, I am sorry to tell you that gold is dying. The digital cash age will kill it and everyone will throw their gold into the streets as the Bible says. We may not get there entirely though until 2032 or so.
3018  Economy / Economics / Re: Economic Totalitarianism on: January 30, 2016, 02:27:39 PM
Debtor's prison is back in Texas:

http://www.extremetech.com/internet/222033-texas-police-now-double-as-debt-collectors-thanks-to-free-license-plate-readers
3019  Alternate cryptocurrencies / Altcoin Discussion / Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? on: January 30, 2016, 08:42:50 AM
I decided to publish a section of my research on memory hard PoW hash functions from 2013. This is only the first section, where I analyzed Scrypt. I may have published this section before, when Monero was first announced and I was publicly debating with one of the Monero dudes about Cryptonote's PoW hash Cryptonite (yet another discussion that turned condescending and foul mouthed). Note there are 23 references cited in the complete version of this paper. This explains why Cryptonite employs AES-NI instructions to defeat the FLOPS and superior memory bandwidth advantage of the GPU.

I claim my analysis of tromp's Cuckoo below predated this as I added it to my paper immediately after tromp posted about his new paper:

David Andersen. A public review of cuckoo cycle. http://www.cs.cmu.edu/dga/crypto/cuckoo/analysis.pdf, 2014
http://da-data.blogspot.com/2014/03/a-public-review-of-cuckoo-cycle.html



Scrypt ROMix

   1: X <- B
   2: for i = 0 to N-1 do
   3:    V[i] <- X
   4:    X <- H(X)
   5: end for
   6: for i = 0 to N-1 do
   7:    j <- Integerify(X) mod N
   8:    X <- H(X ^ V[j])
   9: end for
  10: B' <- X
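
Below is a minimal runnable transcription of ROMix in Python, as a sketch only: SHA-256 stands in for H, and the trailing 8 bytes of X stand in for Integerify (real scrypt uses the Salsa20/8-based BlockMix), so this illustrates just the memory access pattern.

   import hashlib, struct

   def H(x):                      # stand-in for scrypt's BlockMix (Salsa20/8 based)
       return hashlib.sha256(x).digest()

   def integerify(x, n):          # interpret the trailing bytes of X as an index mod N
       return struct.unpack('<Q', x[-8:])[0] % n

   def romix(b, n):
       x = b
       v = []
       for _ in range(n):         # first loop: sequential writes of V[i] (bandwidth bound)
           v.append(x)
           x = H(x)
       for _ in range(n):         # second loop: random reads of V[j] (latency bound)
           j = integerify(x, n)
           x = H(bytes(a ^ c for a, c in zip(x, v[j])))
       return x

   print(romix(H(b'seed'), 1 << 10).hex())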

Without parallelism, the execution time of ROMix is bounded by that of the hash function H or by the random access memory latency to read V[j].

Without parallelism, the memory bandwidth to write V[i] cannot be a significant factor: even if the execution time of the first loop were bound by memory bandwidth rather than by H (its writes are sequential, so random access latency does not apply), the random access latency to read V[j] in the second loop is still slower than the sequential memory bandwidth of the first loop.

Percival's sequential memory-hard proof [1] states that redundant recomputation of H in exchange for a reduced memory footprint will not be asymptotically faster. For example, even if H is asymptotically (as N goes to infinity) perfectly distributed in the Random Oracle model, so that the second loop accesses each V[j] at most once, the H for each second and third element will be recomputed in the second loop instead of retrieved from memory when the memory footprint is reduced by ⅔.

Percival's proof fails if the random access latency is not insignificant compared to the execution time of H, because the execution time of H in a parallel thread is free: it is masked by the other thread, which is stalled for the duration of the latency. This is why BlockMix is employed to increase the execution time of H.

Consider the example where the execution of H is twice as fast as the random access memory latency, i.e. H executes in ½ the delay of each random access. Analogous to cpuminer's "lookup gap" of 3, the computation of H for each second and third element of V[j] can be repeated in the second loop instead of retrieved from memory. Thus ⅓ the memory requirements, average execution time equal to the latency (indicated by the computed value of 1 below), and only a ½ × ⅓ = 1/6 average increase in computational cost for accessing the third elements, which is masked by the ½ of the latency not offset by H in line 8. Each first, second, and third element of V[j] has a ⅓ probability of being accessed, so the relative execution time is computed as follows.

   ½ × ⅓ + (½ + ½) × ⅓ + (½ + ½ + ½) × ⅓ = 1
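
The same expected-cost computation can be generalized to an arbitrary "lookup gap" g, as a sketch (each of the g offsets is equally likely, and the k-th offset requires k executions of H, each costing ½ of one latency, per the worked example above):

   # Expected relative execution time for lookup gap g.
   def relative_time(g, h_cost=0.5):
       return sum(h_cost * k for k in range(1, g + 1)) / g

   print(relative_time(3))   # 1.0, matching the computed value above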

Since GPU DDR main memory has nearly an order-of-magnitude slower random access latency than the CPU's DRAM, the GPU employs a "lookup gap" to reduce the memory footprint, allowing more parallel instances of Scrypt to execute simultaneously, up to the available number of threads and memory bandwidth. The GPU's order-of-magnitude faster memory bandwidth allows running more parallel instances of the first loop. Thus the superior FLOPS of the GPU are fully utilized, making it faster than the CPU.

L3crypt

[...]

Even without employing "lookup gap", the GPU could potentially execute more than 200 concurrent instances of L3crypt to leverage its superior FLOPS and offset the 25x slower main memory latency and the CPU's 8 hyperthreads. To defeat this, the output of L3crypt should be hashed with a cryptographic hash that leverages the CPU's AES-NI instructions, with enough rounds to roughly equal the computation time of L3crypt. GPUs are roughly at parity with AES-NI in hashes per watt [24].
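
As an illustration only (nothing below is the actual L3crypt construction, which is elided here), one way to realize "hashed with a cryptographic hash that leverages AES-NI" is to chain AES block encryptions for a tunable number of rounds. This hypothetical sketch uses PyCryptodome, whose AES dispatches to AES-NI where the CPU supports it:

   from Crypto.Cipher import AES   # PyCryptodome; uses AES-NI when available

   def aes_finalize(digest32, rounds):
       # digest32: hypothetical 32-byte output of the memory-hard stage (AES-256 key).
       # rounds would be tuned so runtime roughly matches the memory-hard stage.
       cipher = AES.new(digest32, AES.MODE_ECB)
       block = b'\x00' * 16
       for _ in range(rounds):
           block = cipher.encrypt(block)
       return block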

[...]

Asymmetric Validation

Verification—that a hash output is valid for a given input—should be orders-of-magnitude more efficient than computing the hash.

The validation ratio when used in a proof-of-work system also depends on how fast the hash is, because validation only requires one hash whereas solving proof-of-work requires as many hashes as the collective difficulty or more importantly the pool's minimum share difficulty.

The Cuckoo Cycle hash [19] significantly increases the asymmetric validation ratio, but has unprovable security because its algorithm is based on permutated orderings which do not incorporate diffusion and confusion [20], and thus it is not provably secure in the Random Oracle model, i.e. we can't prove there aren't algorithms to speed up the solutions. Although its entropy is large, i.e. the factorial permutation of all buckets in the hash space, it doesn't require that its entire memory space be accessed; thus possibly bit flags could be employed to reduce the memory used and make it faster.

The Cuckoo Cycle hash isn't a personal-computer-only hash, because it requires only a minimal CPU and is non-sequentially latency bound. It can be parallelized over the same memory space, masking much of the latency. Thus for example the Tilera server CPUs will outperform Intel's personal computer CPUs, because Tilera has 3X to 12X more hardware threads per watt and isn't plagued by the GPU's slow main memory latency. Whereas for L3crypt no further parallelization is possible, so even though compared to the 8-core Xeon E5 or Haswell-E the Tilera has the same L3 cache [21] and 50% to 67% of the power consumption, its latency is 2X greater [22] and each clock cycle is 2X to 3X slower. Although parallelization can be applied to computing H to try to match Intel's AVX2 acceleration, L3crypt is sequentially memory latency bound.

[...]

Future Proof


CPU memory bandwidth is doubling approximately every four years [7], with up to a 50% improvement expected by 2015 [6], and memory size is doubling approximately every two years, which together roughly track Moore's Law's expected doubling of performance every 18 months [8], computed as follows.

   2^(years/2) × 2^(years/4) = 2^(3 × years/4) ≈ 2^(years/1.33)

However Percival noted that memory latency is not following Moore's Law [1].

References

[1] Percival, Colin. Stronger key derivation via sequential memory-hard functions.
    BSDCan'09, May 2009. http://www.tarsnap.com/scrypt/scrypt.pdf

[...]

[6] http://www.pcworld.com/article/2050260/hefty-price-premium-awaits-early-ddr4-memory-adopters.html

[7] http://www.timeline-help.com/computer-memory-timeline-4.html

[8] http://en.wikipedia.org/wiki/Moore's_law#cite_note-IntelInterview-2

[...]

[19] https://github.com/tromp/cuckoo/blob/master/cuckoo.pdf

[20] http://en.wikipedia.org/wiki/Confusion_and_diffusion
     http://www.theamazingking.com/crypto-block.php

[21] http://www.tilera.com/sites/default/files/productbriefs/TILE-Gx8072_PB041-03_WEB.pdf

[22] http://www.tilera.com/scm/docs/UG101-User-Architecture-Reference.pdf#page=369

[...]

[24] https://www.iacr.org/workshops/ches/ches2010/presentations/CHES2010_Session06_Talk03.pdf#page=16
3020  Economy / Economics / Re: Martin Armstrong Discussion on: January 30, 2016, 08:16:04 AM
sloanf, here you go for a concrete long-term prediction from Armstrong's Socrates computer.