681  Bitcoin / Development & Technical Discussion / Re: Reasons to keep 10 min target blocktime? on: July 22, 2013, 08:29:52 PM
That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
I'm speaking specifically about confirm count without regard to latency.
Under the assumption that orphaning isn't an issue, with a lower mean block time you need to wait less on average for a given level of security.

For example, let's say that with either 5 min or 10 min mean time, orphaning isn't an issue. Let's say also that you want a 10%-hashrate attacker to have less than 2% chance of successfully double spending, so you wait for 3 confirms.

If 10 min is the time constant, you have to wait 30 mins on average. If it's 5 min, you have to wait 15 mins. This is an advantage of a lower time constant, as suggested by the OP.
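
To make the numbers concrete, here is a minimal sketch of the catch-up calculation behind a figure like that (the standard whitepaper-style model; the paper linked above refines it, and the function name here is just illustrative):

Code:
from math import comb

def double_spend_prob(q, n):
    """Chance that a q-fraction attacker eventually overtakes a chain
    that is n confirmations ahead (standard catch-up model)."""
    p = 1.0 - q
    # While the honest chain finds its n blocks, the attacker finds m,
    # negative-binomially distributed.
    prob_m = [comb(m + n - 1, m) * p**n * q**m for m in range(n)]
    # From a deficit of z blocks, the attacker catches up w.p. (q/p)^z.
    behind = sum(pm * (q / p) ** (n - m) for m, pm in enumerate(prob_m))
    ahead = 1.0 - sum(prob_m)  # attacker already has n or more blocks
    return behind + ahead

print(double_spend_prob(0.10, 3))  # ~0.017, i.e. under the 2% target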

If what I was saying wasn't true, miners publishing all their diff 1 shares would suddenly make Bitcoin a million times more secure.
The behavior of the security function is different in different regions of the parameter space, so an extreme case does not prove anything about a more typical case. With infinitesimal blocks the focus is on total compute power spent, as you say, but with more typical block times the focus is on the statistical properties of an atemporal random walk.

Of course with a network difficulty of 1 you'd never converge, but this violates the assumption that "you are slow enough that orphaning isn't a major consideration".


E.g. with shorter blocktimes one needs much less time to estimate the honesty of a non-PPS mining pool.
If miners use software that verifies that the blocks it finds are reported by the pool, and the pool publishes the headers of its shares, then no time is needed to verify that the pool is working honestly.
682  Bitcoin / Development & Technical Discussion / Re: Reasons to keep 10 min target blocktime? on: July 22, 2013, 07:46:07 PM
(3) With the exception of 1-confirmation transactions, once you are slow enough that orphaning isn't a major consideration there is no real security difference that depends on the particular rate. For moderate-length attacks the total computation matters, and how you dice it up doesn't matter much. One-confirm security, however, isn't particularly secure.
That's false, which is the main point I was trying to explain in https://bitcoil.co.il/Doublespend.pdf.
683  Bitcoin / Bitcoin Technical Support / Re: Where do mined coins go? on: July 17, 2013, 05:04:44 PM
It will be sent to a new address, a different one each time.

Where "new" does mean it's taken from the key pool.
684  Other / Meta / Re: Forum time on: July 14, 2013, 09:18:17 AM
Some time ago I changed my timezone so I went back to Profile> Look and Layout Preferences and I found this:

Current forum time: 14-07-2013, 09:17:01
Yes, this is the one. paraipan misunderstood your question.

You can also look at the time in the top-right corner and subtract your offset.
685  Local / עברית (Hebrew) / Re: I want to start mining, which site should I use? on: July 06, 2013, 06:47:05 PM
You can try https://www.ozcoin.net or https://eclipsemc.com.
686  Economy / Securities / Re: [GLBSE] PureMining: Infinite-term, deterministic mining bond on: July 03, 2013, 02:23:07 PM
any new news?
Not at this time.
687  Local / עברית (Hebrew) / Re: Physical Casascius bitcoin for sale on: July 01, 2013, 08:30:00 PM
I had no problems; I ordered FedEx delivery and the courier arrived without me having to pay anything extra.
688  Bitcoin / Pools / Re: PPLNS on: June 27, 2013, 11:37:23 AM
I'm afraid I must make another correction to the OP: The payout (sB/(tX)) * min(r, t) is applied not just to the earliest share, but to all shares. For most shares r is much bigger than t, so this reduces to sB/X, but there can be several early shares (not just the earliest) for which r < t.

Thanks for the in-depth explanation.

After looking at this for over an hour, there is just one detail that I don't understand: for the final share (the one that only partially fits in the remainder of the window), how do we calculate the value of t?

Let's take the situation where a round has ended. (Network difficulty always = 240, block value always = 50, X = 2, no pool fee.)
The winning share is share number 28419 (which has share difficulty = 7).
Looking backwards from here, we begin paying out rewards with share number 28418 (which has share difficulty = 7).
....
We keep going until we get to a share that only partially fits in the remainder of the window: share number 28353 (which has share difficulty = 90).

When I calculate what I think is the value of t, the reward for the partial block ends up spilling well over the window boundary.
What would t be in this case?
(PS - This is actual data I have in a spreadsheet from a block mined on testnet)
t or r? t is simply the score of the winning share, 7/240 in this case.

For r, if all shares 28353-28418 are of difficulty 7, their total score is 66*7/240 = 1.925. Thus r = 0.075.
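
Spelled out in code (just the arithmetic above):

Code:
t = 7 / 240           # score of the winning share
total = 66 * 7 / 240  # shares 28353-28418 at difficulty 7 -> 1.925
r = 2 - total         # X - total = 0.075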

If you mean that the total payout is higher than the block reward, this is true. In this design, when small shares follow large shares the payout is bigger; when large shares follow small shares the payout is lower.

The practical difference is small since share scores aren't as big as 90/240 in reality.

Requiring that the payout is exactly equal to the block reward is not sustainable because the block reward is variable. However, if we assume the block reward is fixed, there are some alternative designs which satisfy this.

The "General unit-based framework" in AoBPMRS (currently section 5.2) does this, but isn't easy to implement. There's a randomized variant that is fairly easy though:

1. When a block is found, letting t be the score of the winning share, choose a random real u between 0 and t.

2. Pay out uB/X to the winning share.

3. Moving backwards, pay out sB/X to each share fully within the window, s being the share's score.

4. For the last (earliest) share, letting r be the score remaining to complete to X (where the winning share has contributed u), pay out rB/X.
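
A minimal sketch of these four steps (function and variable names are mine, not from AoBPMRS; scores are assumed already normalized by difficulty):

Code:
import random
from collections import defaultdict

def randomized_pplns(winner, past_shares, B, X):
    """winner = (miner, t); past_shares = [(miner, s), ...] ordered
    newest to oldest, winner excluded. Illustrative, not a pool API."""
    pay = defaultdict(float)
    winner_id, t = winner
    u = random.uniform(0.0, t)     # step 1: u uniform in [0, t]
    pay[winner_id] += u * B / X    # step 2: winning share gets uB/X
    used = u                       # window units consumed so far
    for m_id, s in past_shares:
        if used + s < X:           # step 3: share fully inside the window
            pay[m_id] += s * B / X
            used += s
        else:                      # step 4: earliest share completes to X
            pay[m_id] += (X - used) * B / X
            break
    return dict(pay)               # sums to exactly B once the window fills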
689  Bitcoin / Mining / Re: Solo each rig or pool on: June 26, 2013, 07:38:05 AM
It seems to me it would be better to do the private pool so the rigs don't work against each other.

Is that how it works?
No. You can simply point all your rigs to your instance of bitcoin-qt and it will know to give different work units to each rig.
690  Economy / Lending / Re: CFD's available - LTC and BTC on: June 25, 2013, 07:15:39 PM
That's not a CFD, it's a call option...
You are right, this is an example of CFD.
It's the same thread.
Yes, thanks for pointing it out.
No idea what just happened. You give a supposed example of a CFD, I say it's not a CFD, then you give the same example again?

I believe that a call option is for the total amount, while this agreement is only for the difference, so I do not need to provision the whole amount. Anyway, I'm not so familiar with the exact definition of each instrument, so feel free to call it whatever you wish. ;)
What matters is not how you settle it, but how the profit responds to price changes of the underlying. With an option to buy 10K LTC at a strike price of $6, if the LTC price is $7 when exercised the profit is $10K, just like in what you offered.

See also the graphs here. You're taking a short call, the counterparty is taking a long call.

A CFD is completely different and is equivalent to simply buying the commodity. The corresponding graph would be a straight line through the origin.

If you're offering call options and saying you're offering CFDs, it's misleading.
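
To make the distinction concrete, a two-function sketch of the payoffs (illustrative names; per-unit quantities):

Code:
def call_profit(spot, strike, qty):
    """Call payoff at exercise: zero below the strike, linear above it
    (the 'hockey stick' graph)."""
    return max(spot - strike, 0.0) * qty

def cfd_profit(spot, entry, qty):
    """CFD payoff: linear through the origin, like holding the coins."""
    return (spot - entry) * qty

# The offer above: 10K LTC at a $6 strike, exercised at $7.
print(call_profit(7.0, 6.0, 10_000))  # 10000.0, as stated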
691  Economy / Lending / Re: CFD's available - LTC and BTC on: June 25, 2013, 06:36:39 PM
That's not a CFD, it's a call option...
You are right, this is an example of CFD.
It's the same thread.
692  Economy / Lending / Re: CFD's available - LTC and BTC on: June 25, 2013, 05:51:47 PM
Please see this thread  for example.
That's not a CFD, it's a call option...
693  Bitcoin / Pools / Re: PPLNS on: June 24, 2013, 10:36:13 PM
Regarding the small correction you made, I don't quite understand what's happening.
Are we talking about the "oldest" share, the one that may get a partial reward (in order to get the total score to exactly X)?
If so, I don't see how "t" is relevant to the calculation, if "t" is the score of the "winning share" (i.e. the share that we didn't even take into consideration to begin with).
First we need to remember the goal: each share should have an expected reward of sB.

Thus we need to take a given share, and consider the rewards it will get from future shares.

We construct a timeline window of length X after the share and state that for every block found within this timeline, the original share will be rewarded by (sB/X). Since X blocks are expected to be found, the average reward will be sB.

More specifically, every future share found, with score t, will consume t units of the window, and will have probability t of resulting in a block that gives a reward of sB/X; hence its expected reward is sBt/X. The invariant is that when t units are consumed, an expected reward of sBt/X is added. Thus, summing over shares totaling X units, the average reward is sB.

However, a problem remains with the last share in the window. It extends beyond the window; thus, while its expected reward would be sBt/X, the number of units it consumes is only the number remaining in the window, which is r. To maintain the invariant, we modify the reward, in case the last share is a block, to sBr/(tX) (only a portion r/t of the block, if found, falls within the window). The expected reward is then sBr/X, maintaining the invariant. This correction is only done for the last share in the window, which can be identified by the fact that r < t; hence for any share the reward is min(sB/X, sBr/(tX)) = (sB/(tX)) * min(r, t).

This analysis took a share and looked at how much reward it needs to get from future shares if they end up a block; now we just need to look at it backwards to find out how much each block found needs to reward past shares. The result is what is in the OP.
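
In code, the per-share rule this analysis produces (a sketch; s, r, t as defined above, and the function name is mine):

Code:
def share_payout(s, r, t, B, X):
    """Reward to a past share of score s when a block is found by a
    winning share of score t, with r window units still unconsumed at
    that share: (sB / (tX)) * min(r, t). For r >= t this is just sB/X;
    only the few earliest shares have r < t."""
    if r <= 0:
        return 0.0  # the share has fallen out of the window entirely
    return (s * B / (t * X)) * min(r, t)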

My final question: Is X=2 an acceptable setting for real-world usage?
It depends on pool size and preference; for a smaller pool I'd say it's good, a larger pool could use a higher value (since maturity time becomes less of an issue).
694  Bitcoin / Pools / Re: PPLNS on: June 22, 2013, 07:10:50 PM
I am currently implementing the "correct method" PPLNS and I have one question.

When a block is found, is the "winning share" counted in the window? (the original post suggests that it isn't)
If it isn't counted, what is the reasoning behind this?
Correct, it isn't.

If you included it, the expected payout for a share would depend on future difficulty changes. For example let X=1 and take an extreme case where after I submit a share, the difficulty changes to 1. Then the next share found will be a block and it will take the entire reward, and I will get nothing.

The same happens if instead of the global difficulty changing, a different share difficulty is submitted.

In practice the differences will be minor because shares are small. Whereas the difference between the naive method and the unit-based method is measured in percent, the difference in how you count the edges is measured in ppm.

Actually, the question prompted me to revisit the model (it's been a while), and I noticed there was a small correction which I included in AoBPMRS but didn't fix in this post; for the last share, instead of a payout of (srB)/(tX), it should be (sB)/(tX) * min (r,t). I fixed it now.

PS. You should consider shift-PPLNS; I've specified the exact way to do it in this comment.
695  Other / Meta / Re: Activity & new membergroup limits on: June 19, 2013, 07:02:24 PM
Also, even though it didn't seem to be used much, I think the "trust" button is useful. You can still see it in the profile, but it's much more useful next to the activity/email/whatnot.
The trust rating is visible (edit: only) next to posts when you're browsing the marketplace subforum, as it has been since inception.
696  Other / Meta / Re: Activity & new membergroup limits on: June 19, 2013, 06:31:00 PM
If I post 1 message per fortnight for the next year, I'll get hero. That's incentive to contribute.
Uh ... no. 

1 post every 14 days would be +1 activity every 14 days. So in a year you would have +26 activity.

If you post 14 times per 14 days then yes, you will be a "hero" in a year. Then again, if you are still around regularly posting week after week for a year, doesn't it make sense that you should be? You haven't gamed anything.
Actually dogie was right about this particular point.

Note he said "I". He already has 834 posts so if he now posts once a fortnight for 52 weeks he'll have time = 33, posts = 860, activity = min (33*14, 860) = 462 = hero.

That's one of the things I tried to solve with my modification.

Old System:
Post 400 times in any amount of time (even a single day) = "hero"
It was 500 posts per hero in the old system.

Agreed. That would be horrible. stackoverflow has a badge where you need to log in for 100 consecutive days. Despite regularly posting questions and answers there for years I always miss a day. It annoys the crap out of me.
Tell me about it. Eventually I realized "WTF do I care about this badge" and restored order to my life.
697  Other / Meta / Re: Activity & new membergroup limits on: June 19, 2013, 10:32:34 AM
1. It's smoother. Instead of having an arbitrary threshold above which posts cease to count, the output varies continuously in the inputs of interest.
2. It requires you to actually be active throughout the registration period. With your method, someone who has been registered for 104 weeks and posted once per 2 weeks (that is, not very active at all) can jack up his activity from 52 to 728 by spamming 700 posts at once. Whereas with my method there is an upper bound on how much you can boost your score by concentrated posting.
With your method, someone who posts 5 posts per period for 40 weeks has a worse score than someone who posts 100 posts per period for 5 weeks. This is wrong. Slower, more consistent posting is better. A min() somewhere is needed, I think.
No. I purposefully offered two options because I figured you might not like the sqrt version. With the hyperbolic version (x / (1 + x/28)) there's no need for min because there is already an upper bound, asymptotically approached, on the activity gained per period; in your example 5*40 would get the higher score.

Assuming the periods are treated separately and not in aggregate, this is simply a softer, superior version of min.

In any case 28 in the formula is a tunable parameter.
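
Numerically (a quick check of the claim above):

Code:
def f(x, k=28):
    """Hyperbolic per-period activity: x / (1 + x/k), bounded above by k."""
    return x / (1 + x / k)

slow = 40 * f(5)     # 5 posts/period for 40 periods  -> ~169.7
binge = 5 * f(100)   # 100 posts/period for 5 periods -> ~109.4
print(slow > binge)  # True: the consistent poster scores higher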

As you mentioned, the current method doesn't work perfectly in some strange cases because it only looks at two-week periods in aggregate,
The idea was to have a system that is difficult to game. The method doesn't work in exactly the case that someone is trying to game it.

but this makes the implementation much easier and more efficient. I can do it with one SQL statement:
Code:
-- activity = min(14 * number of distinct two-week periods posted in, posts);
-- 1210000 seconds is roughly 14 days
select smf_messages.ID_MEMBER as id,
       least(count(distinct posterTime div 1210000) * 14, posts) as activity
from smf_messages
join smf_members on (smf_messages.ID_MEMBER = smf_members.ID_MEMBER)
group by id;

Your method is in principle not significantly less efficient than this, but it will at least make the SQL significantly more complicated, and I might have to create a slower and much larger PHP function. (I know that your method is directly possible in PostgreSQL, but I'm not sure about MySQL.)
Would it be possible to have an auxiliary table with the number of posts per user per 2-week period, update it in a batch job every 2 weeks, and have the activity calculation simply sum over values in this table (plus the activity over the current 2-week period which is not yet in the auxiliary table)? It seems to me to be even more efficient than the current method, and is more flexible.

That said, I think the score will be even more representative if instead of looking at disjoint 2-week periods, it will count all 2-week periods (i.e. days 1-14, days 2-15, etc.). But that may be harder to do.

It could have been worse, I could have asked you to integrate over a Gaussian kernel smoother.
698  Other / Meta / Re: Activity & new membergroup limits on: June 19, 2013, 06:49:55 AM
I like you, but please be quiet, you're killing my business plan. Didn't you hear theymos? He doesn't want your help.
I believe theymos' question "how is that better?" wasn't rhetorical but rather an honest effort to understand what the best solution is.

I was going to suggest something but I see Meni already said something like it. Right now, the system isn't very smart - it's still susceptible to "binge posting".

Every single post should give you up to 1 activity, but the actual amount it gives depends on the time since your last post.

For example, if your last post was 4 minutes ago, another post will add ~0.01. If your last post was 4 hours ago, another post would add ~0.7. If your last post was 2 days ago, another post would add 1. If your last post was 14 months ago, another post would add 1.
There are issues with this as well. There is nothing wrong with two posts 4 minutes apart; the problem is if you have 100 posts with 4 minutes between each. The scoring should take that into account and penalize only over a larger scale.
699  Other / Meta / Re: Activity & new membergroup limits on: June 19, 2013, 06:29:41 AM
Time spent logged in will also be useful to include as a component.
That can be manipulated very easily, so it's useless for something like this.
Ok.

And you should consider more sophisticated metrics such as sum of f(x) over all two-week periods, f(x) being x / (1 + sqrt(x/14)) or x / (1 + x/28).
Why is that better? I'm not sure that it can be done very easily/efficiently.
1. It's smoother. Instead of having an arbitrary threshold above which posts cease to count, the output varies continuously in the inputs of interest.
2. It requires you to actually be active throughout the registration period. With your method, someone who has been registered for 104 weeks and posted once per 2 weeks (that is, not very active at all) can jack up his activity from 52 to 728 by spamming 700 posts at once. Whereas with my method there is an upper bound on how much you can boost your score by concentrated posting.

I do think a voting system can have merit as well, though that is much easier to screw up.

Also, if there isn't already, there should be an option in profile settings whether to display people's activity, post count, or both.


but what is the ","? is it t*14/p, t*14*p, t*4 + g(p)?, min( t*14 || p)

I think he means min (t*14||p) to say z = g(x,y) is a 3 dimensional function

the logical "or" || should be used not ","

if (t*14 < p)
{ cout << t*14; }
else
{ cout << p; }
No, min(x, y) is a function of two variables which returns the lower of the two. min(t*14 || p) is meaningless; || works on boolean values, not on numbers.
700  Other / Meta / Re: Activity & new membergroup limits on: June 18, 2013, 09:10:31 PM
time = number of two-week periods in which you've posted since your registration
activity = min(time * 14, posts)
I don't like it.

It embodies the assumption that any post above one per day (averaged over two weeks) is not contributing, which I do not agree with.

At the very least the number of posts per day should be increased, e.g. min(time * 28, posts).

And you should consider more sophisticated metrics such as sum of f(x) over all two-week periods, f(x) being x / (1 + sqrt(x/14)) or x / (1 + x/28).

Time spent logged in will also be useful to include as a component.
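
For reference, the candidate metrics in code (a sketch; function names are mine):

Code:
def activity_simple(time, posts):
    """The proposal quoted above: min(time * 14, posts)."""
    return min(time * 14, posts)

def f_sqrt(x):
    """Smoother per-period score: x / (1 + sqrt(x/14))."""
    return x / (1 + (x / 14) ** 0.5)

def f_hyperbolic(x):
    """Bounded per-period score: x / (1 + x/28), asymptote 28."""
    return x / (1 + x / 28)

# Under the f variants, total activity = sum of f(posts) over all
# two-week periods since registration.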