Bitcoin Forum
May 25, 2024, 07:08:35 PM
News: Latest Bitcoin Core release: 27.0 [Torrent]
 
  Show Posts
Pages: « 1 ... 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 [121] 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 »
2401  Economy / Marketplace / Re: Wanted: Long-term BTC purchase contract with mining company on: October 10, 2011, 05:09:49 AM
Vladimir and ArtForz are probably the only ones dealing with that kind of volume (if at all). But I don't know if they'll want to sell at that price.
2402  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 06, 2011, 03:04:01 PM
to augment proof of work with proof-of-stake and circulation
and have been active in the system for a longer time and are trusted/interconnected to more users have a stronger say
Things like this tend to be prone to Sybil attacks. I'm not an expert but what I learned from comments like this is that there are some very good reasons why Bitcoin is based on Proof-of-Work rather than trust. Not sure if this applies to your system.
2403  Bitcoin / Bitcoin Discussion / Re: I need help translating Satoshi's design paper into as many languages as possible on: October 06, 2011, 02:56:38 PM
People that had already done almost half of the job are gonna be upset with things getting reset :/

Yea, I know. :-(  I feel pretty bad that people may have wasted a few hours of their time.  But I made the decision to do it because there are 100+ languages that need to be translated and the sentences need to be complete.  Please let me know how I can make your life any easier.
I only did a small part but I'm not at all happy about this. I could swallow it but now I have to worry this may happen again after I spend even more time. Is it possible to access the old translations so that only copy/pasting needs to be done?

Also, if I go to the translating page now the original texts are all messed up - going to http://crowdin.net/translate/bitcoin/bitcoins.html/en-he, for me at least the texts are in a seemingly random order.

My right-to-left question still stands.

I decided to wake up at 3AM in the morning to answer your question.  :-)  I checked if you can access the older revisions.  It looks like you can but it seems like you can only access one or the other but not both.  There is this file format called "tmx".  It has some translations in it and I've put it in this document.  I don't know if it will help you:   https://docs.google.com/leaf?id=0B1UsG65HCLkuNGQyYzU1NmQtZmE4ZS00YWM0LWJjNWYtMDZhYzQwZjc1ZGI4&hl=en_US

I don't know why crowdin seems to "randomize" the order.  If crowdin is not a good tool to use for your language, I guess I am open for others to do it in Word.  The problem I see with this, however, is I won't be able to convert it to PDF and I don't know what English word correlates to the translated word.  So you will have to do all of the work should you decide to translate it without using crowdin.

As for the right-to-left question, what I do to generate the PDF is to do it manually, line by line, so it takes a bit of time.  Are the Hebrew translations right-to-left in crowd-in?  What I do is I copy the sentence, and I paste it in OpenDraw.  I just copied a Hebrew sentence from crowdin and I pasted it in OpenDraw, and it seemed to paste ok.  It seems like a bunch of squiggles to me so I have no idea if it is right-to-left.  Maybe if I have a sample sentence you can paste here I can use?
Downloading the tmx file doesn't seem to work for me.

There are a lot of subtleties with correct functioning of RTL languages in various software, some of which can't really be seen by looking at just one line, so there really is a need to look at a more "wholesome" chunk and see that it turns out correct.

I'm sure crowdin has its advantages but for me it would be much more straightforward to simply create a translated Word file and send it to you. Is that something which will be easy to import to OpenDraw? I don't know how to create the diagrams, but I can send separately a list of translations of the terms involved.

Also, I think trying to translate the bibliography is futile.
2404  Bitcoin / Pools / Re: [38 GH/s] yourbtc.net - Double Geometric Method - 0% Fee - API - Full Decimal on: October 06, 2011, 02:35:25 PM
I may sound idiotic, but is pure PPS still best for slow miners with low hashrates (under 100 MH/s)? There are some PPS pools with a 0% fee that share the full 50 BTC per block. So that should be the best option, right? Or do we get more of an advantage with DGM in the long run? I'm not sure I understand even a single thing about DGM.
0% fee PPS (and I'm talking about real PPS, not fakes) would have been best, but in fact it's too good to be true - in pools that offer this, it is a promotional offer that can't be sustained over the long run. Many PPS operators don't completely understand the risks involved, and their pool is at risk of bankruptcy or of switching to a safer method. But if a pool does manage to offer PPS with low fees, it's better than any other method with the same fees.

But it doesn't matter too much. 100 MH/s is less than what some people have but it's high enough that you don't have to worry about share-based variance. DGM, PPLNS etc. will be perfectly fine and I think this pool is better than currently existing PPS offers. Larger pools have an advantage though.

You don't really need to understand the specifics of DGM. There are good methods and bad methods, and all good methods (such as DGM) will give you pretty much the same payout over the long run.
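To make the PPS comparison concrete, here is a small sketch of the expected-value arithmetic behind "real PPS". All numbers are made up for illustration (not from any actual pool), and the key fact used is that each hash meets the difficulty-1 share target with probability 2^-32:

```python
# All numbers are hypothetical, chosen only to illustrate the arithmetic.
block_reward = 50.0           # BTC subsidy per block (2011 era)
difficulty = 1_500_000        # made-up network difficulty
hashrate = 100e6              # the asker's 100 MH/s
seconds_per_day = 86_400

# Under pure PPS, each difficulty-1 share pays reward / difficulty,
# whether or not the pool finds a block; the pool absorbs all variance.
pps_share_value = block_reward / difficulty

# Each hash meets the difficulty-1 target with probability 2**-32.
shares_per_day = hashrate * seconds_per_day / 2**32

expected_btc_per_day = shares_per_day * pps_share_value
print(f"{expected_btc_per_day:.4f} BTC/day")
```

The point of the sketch is that the miner's expected income is fixed per share, so the pool bears the block-finding variance - which is exactly why sustained 0%-fee PPS would bankrupt an operator who doesn't price that risk.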
2405  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 06, 2011, 09:25:00 AM
@Meni Care to give a hint about what you are thinking of? Working on a new application type, a "fundamental change" in how branches are selected is a real option. Therefore I would be very interested in learning about this, even if it is "just" a "raw" idea and not a source code patch.
I'm mostly referring to ideas I've expressed in the previously linked thread, to augment proof of work with proof-of-stake and circulation (possibly quantified as bitcoin days destroyed) in branch selection.
2406  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 06, 2011, 07:44:15 AM
The botnets seemed to have come to the conclusion that it is better to join the bitcoin network rather than sabotage it.
This depends on who runs the botnet.

What would be the point of an attack even if it could be briefly successful before being discovered and blocked?
How do you block an attack? Reject blocks that have the "evil" bit set? (Actually there are ways, but they require a fundamental change in how branches are selected)
2407  Bitcoin / Bitcoin Discussion / Re: I need help translating Satoshi's design paper into as many languages as possible on: October 06, 2011, 06:54:18 AM
People that had already done almost half of the job are gonna be upset with things getting reset :/

Yea, I know. :-(  I feel pretty bad that people may have wasted a few hours of their time.  But I made the decision to do it because there are 100+ languages that need to be translated and the sentences need to be complete.  Please let me know how I can make your life any easier.
I only did a small part but I'm not at all happy about this. I could swallow it but now I have to worry this may happen again after I spend even more time. Is it possible to access the old translations so that only copy/pasting needs to be done?

Also, if I go to the translating page now the original texts are all messed up - going to http://crowdin.net/translate/bitcoin/bitcoins.html/en-he, for me at least the texts are in a seemingly random order.

My right-to-left question still stands.
2408  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 05, 2011, 08:19:26 PM
However now comes the crucial difference. Assume I have 2^256 participants, numbered 0, 1, 2, 3, ... How long will they need for the first block? In the current (parallelizable) PoW used in Bitcoin they need a few microseconds. Every participant uses his own number as nonce in the first round...and most likely one of them will produce a hash which is smaller than the current target value. In the non-parallelizable PoW I am thinking of, they will still need more or less 10 minutes as they should, since this corresponds more or less to the number of operations they have to do before they get a realistic chance for reaching the goal. However, since there is some variability, also a slower CPU with better random choices gets a chance.
I think I now understand what you're talking about. This is basically making the computation more granular, significantly increasing the time it takes to test one value (from a microsecond to 10 minutes).

I think you'll find that still, an entity with enough resources wins, and more easily than with the current system.

Thanx again for challenging my thoughts in the discussion. This is very fruitful.
You're welcome, glad to help.
2409  Bitcoin / Bitcoin Discussion / Re: I need help translating Satoshi's design paper into as many languages as possible on: October 05, 2011, 07:29:55 PM
It would've been better if it was organized in terms of paragraphs and sentences instead of lines.

If this is a problem I can try and redo it so it has sentences and paragraphs instead of lines. 

Ok, I converted the document into sentences.  I apologize for this.  Still new at this.
That's great and all, but it looks like the work already done has vanished.
2410  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 05, 2011, 07:17:03 PM
let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?
Why?
If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.
Why?
It is characteristic of non-parallelizable PoWs that they do not scale in the way you describe. I believe we have a misunderstanding here.
This isn't about parallelizable vs. non-parallelizable computations. Performance in serial computations doesn't scale linearly with more computing cores, but this is irrelevant. This is about the process of block finding, which is why I asked if your system diverges fundamentally in the notion that blocks are something found once in a while by people on the network. If not then it's really "if Billy and Sally each have an apple, then that's two apples" math - if in a given scenario (not in distinct scenarios) two people find 1 block each, then both of them together find 2 blocks. If a network of 4000 people finds 4000 blocks per month, each finds on average 1 block per month. This isn't enough data to know the distribution (it's possible one person finds all 4000) but the best scenario is when each finds close to 1.

It also means that if in a given situation 4000 people find 4000 blocks, each finding about 1, then if I join in it would only be fair if I also find about 1 (or, more precisely, that each will now find 4000/4001).

Because the pool shouldn't be the one deciding what goes in a block. As was explained, a pool is essentially just an agreement to share rewards.

Ok. Forget the pool as part of the argument here but think of parallel computing. The pool is a parallel computer.

The line of reasoning is about parallel computation and scalability of the PoWs.

With parallelizable PoWs, Bill Gates can buy as much computing power as he wants. He then changes a transaction in block 5 in his favour. Thanks to his computing power he can easily redo the entire block chain history since then. If the PoWs are, as I suggest, non-parallelizable, he simply cannot do better by buying more computers. The only thing he can do is increase the clocking. By this, he can speed up his computation maybe by a factor of 5 or 10 - as opposed to buying more computers, where only money is his limit. So, non-parallelizable PoWs are an effective solution against this kind of attack.

(Yes, I know that the hashes of some 6 or so intermediate blocks are hardcoded in the bitcoin program and hence the attack will not work out exactly the way I described it - but this does not damage the line of reasoning in principle.)
Yes, with parallelizable PoW you can overwhelm the network given enough time and money. My contention is that non-parallelizable PoW makes the problem worse, not better. With a fully serial PoW only the fastest one will do anything, so no one else will be incentivized to contribute his resources. So this one person can do the attack, and even if he's honest, it's only his resources that stand against a potential attacker (rather than the resources of many interested parties).

And there's no indication that some hybrid middle ground gives better results - to me it seems more like a linear utility function where fully parallel is best and it gets worse the closer you make it to fully serial.

Also, I hold the position that security can be significantly improved using some form of proof-of-stake (basically a more methodical version of the hardcoded hashes).

Variance in block finding times is unwanted, but I think most will agree it pales in comparison to the other issues involved. Especially since there are basically two relevant timescales - "instant" (0 confirmations) and "not instant". The time for 10 confirmations follows Erlang(10) distribution which has less variance.
I do not think that the "variance in block finding times" is the essential advantage; it is rather convergence speed to the "longest chain" (I have no hard results on this but am currently simulating this a bit) and better resistance against attacks which involve pools of parallel computers.
See above. I think you're going the wrong way.

By all means you should pursue whatever research question interests you, but I expect you'll be disappointed both in finding a solution satisfying your requirements, and in its potential usefulness.
Trying to understand the argument. Do you think there is no PoW matching all the requirements? Care to give a hint why?
I'm still not completely sure what the requirements are, this whole discussion has been confusing to me. But yes, to me it seems that from a "back to basics" viewpoint a serial computation only makes it easier for one entity to dominate the blockchain, making the "better security" requirement impossible. Again, if multiple computers don't give more power over the network, it means the attacker doesn't have to compete against multiple computers, only against one.

As to the potential usefulness: The concept is by no means "finished", but until now the discussion on the board has proved very fruitful and helps to improve the system I am working on. This is for a different kind of block-chain application, so I am not expecting an impact on Bitcoin. Bitcoin is widely disseminated, so I do not expect significant protocol changes to occur any time soon, especially from suggestions outside the core team.
You mean an alternative Bitcoin-like currency, or something that doesn't look anything like it? If the former I doubt this will be applicable, if the latter I can only speculate unless you give more details about the application.

The Bitcoin code progresses slowly, probably mostly because of the sophistication of the code, but I trust that all sufficiently good ideas will make it in eventually.
2411  Bitcoin / Development & Technical Discussion / Re: Difficulty adjustment needs modifying on: October 05, 2011, 05:19:56 PM
Hell you don't even have to have an attack, and the "doomsday" scenario is written into the code.
Mining right now is marginally profitable if you have efficient GPUs and cheap electricity.
It fairly obviously isn't profitable for a decent number of people, as evidenced by the dropping hash rate and difficulty.
Now look into the future a little bit to the 50% drop in rewards.
Presto!  Anybody without free electricity won't be mining profitably anymore, and bitcoin has a namecoin type issue.

Bitcoin prices better at least double by then, or bitcoin is in serious trouble.
Halving is not going to cause doomsday, for several reasons.
 - Capital expenditure is a major component in mining cost. First, this means that there will be plenty of people who are making more than twice their electricity cost.
 - Second, it means that in the time before halving, people will avoid buying hardware in anticipation of decreased profitability, so the difficulty will be less than it would have otherwise been.
 - The price will gradually increase in the time before halving in anticipation of the reduced supply.


* Price does not have to double, only difficulty has to halve.
Not a counterargument by itself, because the point with doomsday is that difficulty doesn't get a chance to adjust.
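For readers unfamiliar with why difficulty "doesn't get a chance to adjust": Bitcoin retargets only every 2016 blocks, scaling difficulty by the ratio of the intended two-week span to the time those blocks actually took, clamped to a factor of 4 each way. A simplified sketch (my own illustration, not the actual client code):

```python
def retarget(old_difficulty, actual_seconds):
    """Simplified Bitcoin-style difficulty retarget (every 2016 blocks)."""
    target_seconds = 2016 * 600          # two weeks at one block per 10 minutes
    ratio = target_seconds / actual_seconds
    # The real protocol clamps the adjustment to a factor of 4 each way.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# If half the hash rate leaves, the next 2016 blocks take four weeks
# instead of two, and only then does difficulty halve.
print(retarget(1_000_000, 2 * 2016 * 600))  # -> 500000.0
```

The "doomsday" concern is precisely this lag: if miners leave en masse after the subsidy halves, blocks slow down for the whole (now much longer) retarget window before difficulty catches up.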
2412  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 05, 2011, 01:39:07 PM
Now suppose it is you and me and some 40 other guys with the same hash performance as you have in your example. Suppose I want to claim a 100 BTC bounty for every block instead of the standard 50 BTC. Chances are next to 100% that I will manage. Since, on average, I am faster than you (and all the other guys combined), I will dominate the longest chain in the long run.
Ok, you're definitely confused about the capabilities of someone with >50% of the hashing power. He cannot do things like put a 100BTC generation transaction per block. Such blocks are invalid and will be rejected by the network (particularly the nodes that actually accept bitcoins for goods and services). In other words, these will not be Bitcoin blocks - the rest of the network will happily continue to build the Bitcoin chain, while he enjoys his own isolated make-believe chain.

let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right?

Why?

The perspective I am looking at is not the single block but the development of the block chain.

As soon as one of the four people finds a block, this person broadcasts it and the puzzles the other three had been working on become obsolete (at least that's my understanding of what the reference implementation does). Only a cheater would be interested in continuing to work on "his" version of the block; however, having lost the block in question, chances are higher that he will not manage to push "his" version of the next block.

Four people with a computer would rather find a total of 4 blocks in FOUR months - and these blocks would be the four blocks chained next to each other, ie a block chain of length 4.
Does your system maintain the notion that each given block is found by some specific individual? If so, if 4 people find 4 blocks in 4 months, it means each person finds 1 block in 4 months, contrary to the premise that each person finds 1 block per month...

If it wasn't clear, in this example the intention was that the 4 people aren't all there is, there are 4000 more similar people each finding 1 block per month, for a total of 4000 blocks per month. So again, if 4 people find 1 block per month each, then between them they find 4 blocks per month.

And, once more - pools are not a security threat ...
How do you prevent a pool from pooling more than 50% of the hashability and then imposing its own understanding of Bitcoin upon the remaining nodes?
Because the pool shouldn't be the one deciding what goes in a block. As was explained, a pool is essentially just an agreement to share rewards. Even in centralized pools (and like I said there are decentralized ones), all the operator needs is to verify that miners intend to share rewards, by checking that they find shares which credit the pool in the generation transaction. But everything else can be chosen by the miner.

This is a future fix, however - currently centralized pools do tell miners what to include in the block. But miners can still verify that they're building on the latest block, so they can detect pools attempting a double-spend attack (which is the main thing you can do with >50%).
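A miner-side check of the kind described could look like the following sketch (the function and variable names are mine, purely illustrative; the one real fact used is that bytes 4..36 of Bitcoin's 80-byte block header hold the previous block's hash):

```python
def building_on_latest(header: bytes, latest_block_hash: bytes) -> bool:
    """Check that pool-provided work extends the latest known block.

    In Bitcoin's 80-byte header, bytes 4..36 hold the previous block's
    hash (serialized little-endian). If it doesn't match the chain tip
    we know about, the pool may be withholding recent blocks or
    attempting a double-spend.
    """
    prev_hash = header[4:36][::-1]      # flip to big-endian for comparison
    return prev_hash == latest_block_hash

# Illustrative use with made-up values:
tip = bytes(32)                              # pretend chain-tip hash
header = bytes(4) + tip[::-1] + bytes(44)    # 80-byte header, prev = tip
print(building_on_latest(header, tip))       # True
```

The design point is that a miner can run this check on every piece of work a centralized pool hands out, without needing to trust the pool about transaction selection at all.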

Block finding follows a Poisson process, which means that the time to find a block follows the exponential distribution (where the variance is the square of the mean). The variance is high, but that's an inevitable consequence of the fair linearly scaling process.

Again you are raising an important aspect. The task thus is to see that two goals can be balanced: Linear scaling and small variance.
Variance in block finding times is unwanted, but I think most will agree it pales in comparison to the other issues involved. Especially since there are basically two relevant timescales - "instant" (0 confirmations) and "not instant". The time for 10 confirmations follows Erlang(10) distribution which has less variance.

I agree that the Poisson process is a very natural solution here and prominently unique due to a number of its characteristic features, such as independence, being memoryless and stateless, etc. A non-parallelizable PoW will certainly lose the stateless property. If we drop this part, how will the linear scaling (effort to expected gain) and the variance change? We will not have all the properties of Poisson, but we might keep most of the others. The question sounds quite interesting to me.
By all means you should pursue whatever research question interests you, but I expect you'll be disappointed both in finding a solution satisfying your requirements, and in its potential usefulness.
2413  Bitcoin / Bitcoin Discussion / Re: Why is bitcoin proof of work parallelizable ? on: October 05, 2011, 11:10:11 AM
I think you're confused about how the so-called "Bitcoin lottery" works. You seem to think that if I have some system and you have a parallel system with x100 the power, then you will find all the blocks and I will find none, because you'll always beat me to the punch. But no, these are independent Poisson processes (tied only via occasional difficulty adjustments) with different rates, meaning that you will simply find 100 times the blocks I will. So over a period where 1010 blocks were found between us, about 1000 will be yours and 10 will be mine.

In other words, it scales linearly - the amount you get out is exactly proportional to what you put in.
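The proportional-split claim is easy to check numerically. Given that a block was found, competing Poisson processes assign it to each miner with probability proportional to his rate, so a quick simulation (illustrative only, with made-up rates) reproduces the ~1000:10 split:

```python
import random

random.seed(42)

def simulate_blocks(rate_a, rate_b, total_blocks):
    """Split blocks between two miners modeled as independent Poisson
    processes: each block goes to A with probability rate_a/(rate_a+rate_b),
    which is exactly how competing Poisson processes divide events."""
    p_a = rate_a / (rate_a + rate_b)
    wins_a = sum(1 for _ in range(total_blocks) if random.random() < p_a)
    return wins_a, total_blocks - wins_a

a, b = simulate_blocks(100, 1, 101_000)
print(a, b)   # roughly 100000 vs 1000
```

The faster miner never "locks out" the slower one; he just collects a proportionally larger share of the blocks.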

If that's all you're after, mission is already accomplished.

But if you think your "non-parallelizable PoW" system should behave differently, let's say that in this system a person with a computer finds one block per month. Then four people with a computer each should find a total of 4 blocks per month, right? So a person with 4 computers also finds 4 blocks per month, because the system can't know who the computers belong to (and if it can then it's not at all about a different computational problem, but about using non-computational cues in distributing blocks). So a person with a special 4-CPU system also finds 4 blocks, as does a person with a quad-core CPU.


And, once more - pools are not a security threat if implemented correctly. There's no reason the pooling mediator also has to generate the work. And, there are already peer-to-peer pools such as p2pool.


Edit: Parallelism means that an at-home miner can plug in his computer and contribute to security/receive rewards exactly in proportion to what he put in. Non-parallelism means his effect will depend in complicated ways on what others are doing and usually leave the poor person at a significant disadvantage (since others are using faster computers), which is the opposite of what you want.

In addition to the ones outlined in my above posts, I see one more: Currently the time for solving a PoW is distributed according to a Poisson distribution (Satoshi describes the consequences of this in his paper). We have a parameter (difficulty) where we can tune the mean of this distribution, but we cannot independently tune the variance of the distribution (with Poisson it will always be equal to the mean). With a different PoW system we will be able to obtain different distribution shapes (possibly with a smaller variance than Poisson). This could make the entire system more stable. Certainly it will impact the Bitcoin convergence behaviour. For the end user the impact might be a higher trust in a block with smaller waiting times.
Block finding follows a Poisson process, which means that the time to find a block follows the exponential distribution (where the variance is the square of the mean). The variance is high, but that's an inevitable consequence of the fair linearly scaling process.

If it pleases you, the variance of block finding times will probably be less in the transaction fees era.
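The variance claims above (exponential block times, and the narrower Erlang(k) distribution for k confirmations) can be checked with the closed-form moments. A short sketch of my own, using the common rate parameterization:

```python
import math

lam = 1 / 600.0   # blocks per second: one block per 10 minutes on average

def erlang_stats(k, lam):
    """Mean, variance and relative spread of an Erlang(k, lam) waiting time."""
    mean = k / lam
    var = k / lam ** 2
    return mean, var, math.sqrt(var) / mean

# k = 1 is the plain exponential: variance equals the square of the mean.
m1, v1, r1 = erlang_stats(1, lam)
m10, v10, r10 = erlang_stats(10, lam)

print(r1)    # 1.0: a single confirmation time has std dev equal to its mean
print(r10)   # ~0.316: ten confirmations are relatively much more predictable
```

The relative spread of the k-confirmation wait shrinks like 1/sqrt(k), which is the quantitative form of "the time for 10 confirmations has less variance".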
2414  Bitcoin / Bitcoin Discussion / Re: I need help translating Satoshi's design paper into as many languages as possible on: October 05, 2011, 10:40:32 AM
I think the value in translating a technical paper is fairly limited, those who want technical details would typically have no problem (and possibly prefer) reading the English original. A better thing to have in multiple languages is a paper giving a thorough, reasoned discussion about all of Bitcoin's advantages (preferably tailored to what's relevant for each specific country).

Anyway, how well does the platform handle right-to-left languages? Can you generate a pdf sample for the work done so far in Hebrew?
2415  Bitcoin / Meetups / Re: EUROPEAN BITCOIN CONFERENCE 2011, PRAGUE NOV 25-27 on: October 05, 2011, 08:13:42 AM
I just bought this account from someone on ebay.
Huh
2416  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 04, 2011, 03:14:15 PM
Satoshi didn't see the pool miners coming for sure.
Satoshi understands probability, so he clearly expected pools to emerge. It's likely though that he didn't think they needed any special consideration in the design.
2417  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 04, 2011, 07:37:32 AM
And even if we reach "Visa levels" as described here, miners would only have to download, at peaks, 76 × 4,000 = 304 KB/s if I got it right (a new header each time a new transaction arrives and changes the Merkle Tree).
No, as I explained, the miner doesn't need to get a new header when there's a new transaction. He just keeps mining on a header which doesn't include all the new transactions. When he finishes 4GH he gets a new header with all the recent transactions. That's how it's done right now, it's not a potential future optimization.
2418  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 04, 2011, 05:11:43 AM
Or is it an indirect hash of something in the header which is itself a hash to all transactions? Even if it's that, wouldn't such header have to be retransmitted each time a new transaction is propagated?
What DeathAndTaxes said, the Merkle root is the "executive summary" of the transactions. And, inclusion of transactions in the block is on a "best effort" basis - everyone chooses which transactions to include, and currently most miners/pools include all transactions they know. But it's ok if a miner is missing a few recent transactions, he'll get (a header corresponding to) them in the next getwork.
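The "executive summary" works by pairing transaction hashes and hashing each pair with double SHA-256 until a single 32-byte root remains; only that root goes into the 80-byte header. A simplified sketch (ignoring Bitcoin's little-endian serialization details; Bitcoin does duplicate the last hash of an odd-length level, as done here):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(tx_hashes):
    """Reduce a list of transaction hashes to a single 32-byte root."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Five dummy "transaction hashes" stand in for real ones.
txs = [dsha256(bytes([i])) for i in range(5)]
root = merkle_root(txs)
print(root.hex())
```

This is why a miner never needs the full transaction list to hash a header: any change in the included transactions surfaces as a new 32-byte Merkle root in his next getwork.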
2419  Alternate cryptocurrencies / Altcoin Discussion / Re: A blockchain with a hashing function doing useful work on: October 04, 2011, 04:58:44 AM
I think you're completely missing the point of this project, and are far too serious Smiley .
Probably. Carry on then Smiley.

Maybe not in this thread, but people have been seriously suggesting that Bitcoin proof-of-work should be based on external distributed computing projects.
2420  Bitcoin / Bitcoin Discussion / Re: Are GPU's Satoshi's mistake? on: October 03, 2011, 08:40:04 PM
like Gavin said, the problem is msg relaying
Good point. It doesn't really matter if the mining algorithm is CPU-friendly. If bitcoin usage grows significantly, other resources - mainly bandwidth - required by the mining process will probably rule out the "average guy".

Mining will probably become a specialized business despite the mining algorithm. So, better to keep the algorithm which doesn't make us vulnerable to botnets.
Mining pools. The miner only needs the block headers.