Bitcoin Forum
Author Topic: Does a high pool difficulty lower anyone's profits?  (Read 4363 times)
Liquidfire (Newbie, Activity: 28, Merit: 0)
August 22, 2013, 10:57:39 PM (last edit: August 22, 2013, 11:10:06 PM by Liquidfire), #21

In your simulation, I suggest you don't use any of those formulas, and just model mining as though it really is at the base level. Randomly generate a number between 1 and 100000, and if it's below 1000, it's a share. If it's below 10, it's a block.

I thought about doing this. The problem is that I'd then need to process the workers in parallel, meaning multiple threads/processes, because both miners are racing each other. That complicates the code significantly, and it also introduces variables you didn't have before: race conditions within the operating system.

I agree that would be a more accurate method, but to do it really properly you'd almost be better off just setting up a real pool, having real miners mine shares at different rates, and taking stats.

Minus my admittedly inefficient block/share-time random generation (working on that), as long as you trust the probability distribution it should be accurate, to the best of my knowledge. The same block-solve time is used by both workers on each block, so they can accurately be evaluated serially.

Edit: also, 2 workers is a really small sample in terms of solving a block, something that in reality is very consistent.
h2odysee (OP) (Full Member, Activity: 238, Merit: 119)
August 22, 2013, 11:06:15 PM, #22

So here's a sim I just made: http://pastebin.com/nud90qDK

After doing it, I realized that there's really not much to simulate. It's pretty trivial.

No matter how fast the block rate, I get the same result:

Shares per step: 0.01029

turtle83 (Sr. Member, Activity: 322, Merit: 250)
August 22, 2013, 11:06:28 PM, #23

Fast miners are more efficient at reporting their work, that is, they have less work go "unreported". Slower miners are less efficient at reporting their work, because the interruption of new blocks makes a larger percentage of their work go unrecognized. They will have been working longer without finding a share at block change.

Result: no change to overall pool profit. The effect is on the distribution of rewards.

Look forward to your simulations...

but.. i think the argument is invalid.

Say the duration to the interruption is 20 seconds.

A 100 GH/s miner has exactly 10x the odds of finding a share as a 10 GH/s miner, irrespective of the share difficulty. Over time it's the same distribution of rewards.

The argument sounds like saying: "Pools using diff 1 shares are unfair, because I have done many hashes and didn't find a single diff 1 hash to submit. The work was done and went unrecognized."

I am pretty sure when you do the simulation, you will find that over time, at any difficulty, the 100 GH/s miner will have submitted 10x what the 10 GH/s miner has.

Going by h2odysee's suggestion, try this: two processes generating random numbers between 1 and 1000000. Process A generates 1000 randoms a second, B does 10000/sec. Log the output in one place if the number is below 100 and in another place if it's below 10. Run the processes many times for 10-second intervals. I'm pretty sure A and B will have a similar ratio between the two difficulties over time, i.e. B's ratio will be more consistent than A's, but over time it'll be the same.
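turtle83's two-process experiment doesn't actually need two real processes: drawing the numbers sequentially gives the same statistics. Here's a sketch with numpy (the thresholds "below 100" and "below 10" for the two difficulties, and the function name, are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def run_miner(draws):
    # draw integers in 1..1000000 and count hits below each threshold
    r = rng.integers(1, 1_000_001, size=draws)
    return int((r < 100).sum()), int((r < 10).sum())

# A does 1000 randoms/sec, B does 10000/sec; simulate 1000 seconds' worth
a_easy, a_hard = run_miner(1_000_000)
b_easy, b_hard = run_miner(10_000_000)

# B should log roughly 10x as many hits as A at BOTH thresholds,
# so over time each miner's easy/hard ratio comes out about the same
print(a_easy, a_hard, b_easy, b_hard)
```

The faster "miner" ends up with about ten times the hits at both difficulties, which is turtle83's point.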


Liquidfire (Newbie, Activity: 28, Merit: 0)
August 23, 2013, 12:10:46 AM, #24

Fast miners are more efficient at reporting their work, that is, they have less work go "unreported". Slower miners are less efficient at reporting their work, because the interruption of new blocks makes a larger percentage of their work go unrecognized. They will have been working longer without finding a share at block change.

Result: no change to overall pool profit. The effect is on the distribution of rewards.

Look forward to your simulations...

but.. i think the argument is invalid.

Say the duration to the interruption is 20 seconds.

A 100 GH/s miner has exactly 10x the odds of finding a share as a 10 GH/s miner, irrespective of the share difficulty. Over time it's the same distribution of rewards.

The argument sounds like saying: "Pools using diff 1 shares are unfair, because I have done many hashes and didn't find a single diff 1 hash to submit. The work was done and went unrecognized."

I am pretty sure when you do the simulation, you will find that over time, at any difficulty, the 100 GH/s miner will have submitted 10x what the 10 GH/s miner has.

Going by h2odysee's suggestion, try this: two processes generating random numbers between 1 and 1000000. Process A generates 1000 randoms a second, B does 10000/sec. Log the output in one place if the number is below 100 and in another place if it's below 10. Run the processes many times for 10-second intervals. I'm pretty sure A and B will have a similar ratio between the two difficulties over time, i.e. B's ratio will be more consistent than A's, but over time it'll be the same.




When a block changes, every miner will have been working towards their next share.

But one share means different things for different miners. Losing one share for a slow miner is a bigger hit than for a very fast miner.

In the time the slow miner was working, getting nothing (yet), the fast miner already got work in, and got it accepted.

A higher ratio of recognized work to work actually done.

mueslo (Member, Activity: 94, Merit: 10)
August 23, 2013, 12:50:37 AM, #25

So here's a sim I just made: http://pastebin.com/nud90qDK

After doing it, I realized that there's really not much to simulate. It's pretty trivial.

No matter how fast the block rate, I get the same result:

Shares per step: 0.01029

I also made one in the old thread, but again, Liquidfire didn't even answer. https://bitcointalk.org/index.php?topic=259649.msg2988147#msg2988147

mueslo, yes, very guilty of oversimplification, but the analogy is not so flawed as to miss the point for some of the misconceptions I am trying to clear up. I hope my description simplifies the issues without preaching anything baseless. I'm not explaining how crypto mining works, just the fairness aspect of pool difficulty.

I didn't think presenting the information in the form you have above was going to help the discussion at this point.

If you want to take the discussion further into a detailed description and the minutiae of the mathematics, please be my guest. I was getting sick of the huge misunderstanding some people were labouring under.

An analogy with a die is much better; it's simpler and more extensible.
roy7 (Sr. Member, Activity: 434, Merit: 250)
August 23, 2013, 01:07:08 AM, #26

In the time the slow miner was working, getting nothing (yet), the fast miner already got work in, and got it accepted.

How is that a problem? The slow miner hasn't found a share yet. Whenever he does find one, he'll report it to the pool and get credited.
Arros (Member, Activity: 62, Merit: 10)
August 23, 2013, 01:38:15 AM, #27

In your simulation, I suggest you don't use any of those formulas, and just model mining as though it really is at the base level. Randomly generate a number between 1 and 100000, and if it's below 1000, it's a share. If it's below 10, it's a block.

I thought about doing this. The problem is that I'd then need to process the workers in parallel, meaning multiple threads/processes, because both miners are racing each other. ...

I think you can still simulate their race in sequence by seeing how long each one takes to get something and comparing their "times" to see who won.
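Arros's sequential idea can be sketched in a few lines: draw each miner's successive share times within the same block window and count what lands before the block changes. Exponential waiting times and all the constants here are my own assumptions for the sketch:

```python
import random

random.seed(42)

BLOCK_T = 30.0               # average seconds per block
FAST_T, SLOW_T = 5.0, 30.0   # average seconds per share for each miner

def shares_in_block(block, mean):
    """Count shares found before the block window closes."""
    n, t = 0, random.expovariate(1 / mean)
    while t < block:
        n += 1
        t += random.expovariate(1 / mean)
    return n

fast_shares = slow_shares = 0
for _ in range(100_000):
    block = random.expovariate(1 / BLOCK_T)   # this block's actual duration
    fast_shares += shares_in_block(block, FAST_T)
    slow_shares += shares_in_block(block, SLOW_T)

# the fast miner is 6x the speed and ends up with ~6x the shares,
# even though work in progress is "lost" at every block change
print(fast_shares / slow_shares)
```

No threads are needed: both miners are run against the exact same block duration, so the race is evaluated serially, and the share ratio converges to the hashrate ratio.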
Liquidfire (Newbie, Activity: 28, Merit: 0)
August 23, 2013, 02:01:55 AM, #28

In the time the slow miner was working, getting nothing (yet), the fast miner already got work in, and got it accepted.

How is that a problem? The slow miner hasn't found a share yet. Whenever he does find one, he'll report it to the pool and get credited.

Because the block changed. When he finds one, it's a whole new block by then. All his work on that last block will never be given credit.

Meanwhile, the fast miner got 9 on the last block (or 8, or 10, whatever you like). Even if the slow miner finds a share this time, the fast miner probably got 8-12 shares again.

Over time, the block changing hurts the slow miner more.
mueslo (Member, Activity: 94, Merit: 10)
August 23, 2013, 02:18:14 AM, #29

Because the block changed. When he finds one, it's a whole new block by then. All his work on that last block will never be given credit.

That's just not the case. He has the exact same chances per hashrate of finding a share before the block changes as a high hashrate miner. Just with a higher variance.
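mueslo's claim, same expected share rate per unit of hashrate but higher variance for the slow miner, can be checked numerically. This sketch (my own, with made-up numbers) uses the fact that for a Poisson process the number of arrivals in a window of length L is Poisson-distributed with mean L divided by the average share time:

```python
import numpy as np

rng = np.random.default_rng(3)
block_times = rng.exponential(30.0, size=100_000)   # per-block durations

def shares_per_block(mean_share_time):
    # counts of a Poisson process inside each block window
    return rng.poisson(block_times / mean_share_time)

slow = shares_per_block(30.0)   # averages 1 share per block
fast = shares_per_block(5.0)    # averages 6 shares per block

print(fast.sum() / slow.sum())  # ~6: same reward rate per unit hashrate
# but the slow miner's per-block counts are relatively noisier:
print(slow.std() / slow.mean(), fast.std() / fast.mean())
```

The totals come out in exactly the 6:1 hashrate ratio; only the relative spread of the slow miner's per-block counts is larger, which is the variance mueslo is talking about.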
the joint (Legendary, Activity: 1834, Merit: 1020)
August 23, 2013, 02:24:03 AM, #30

I think of a higher difficulty in a pool as solo-mining with a lower difficulty.  If you want to try to be lucky for a day as a GPU or Block Erupter miner, set your stratum difficulty to 128 or 256 and watch the variance kick in.
Liquidfire (Newbie, Activity: 28, Merit: 0)
August 23, 2013, 03:20:36 AM, #31

Because the block changed. When he finds one, it's a whole new block by then. All his work on that last block will never be given credit.

That's just not the case. He has the exact same chances per hashrate of finding a share before the block changes as a high hashrate miner. Just with a higher variance.

If the block changes every 30 seconds on average, and you find a share every 30 seconds on average, and someone else finds a share every 5 seconds, how do you have the same chances?

You'll completely whiff on any given block just as often as you get one share; meanwhile the other guy gets 6 in. If he has a little bad luck, he gets 5. You have a little bad luck? You get 0. But it all evens out, right? A little bad luck this time, a little good luck next time. Of course, every time you have a little good luck you still don't get 2, you get 1. He'll get 7.

You each have a good block and a bad block. You have 1 share. He has 12.

1/12 is not the same ratio as 1/6.
CryptoBullion (Sr. Member, Activity: 266, Merit: 250)
August 23, 2013, 03:40:57 AM, #32

Visit the pools in my sig... I have tuned the PPLNS to cover exactly what you are talking about. I noticed this happening a long time ago and couldn't stand seeing the little miners get a poor PPLNS score. My diff is set to 63, which saves everyone bandwidth, but the PPLNS scoring I use evens out what you were asking about.

Liquidfire (Newbie, Activity: 28, Merit: 0)
August 23, 2013, 03:50:46 AM, #33

Alright. In the main forum, I posted my simulation script showing the skew towards fast miners. If you want the background, read that.

The main criticism was a fair one: I was "cheating" and using 0.5x and 1.5x of the average as the range for my random values for the share times and block times. It was a shortcut that admittedly skewed the results, but in such a negligible way that I felt it would still expose the effect. It was essentially cutting off the tiny sliver on the far right end that pierces out toward infinity (plus some of each end). It's a minuscule amount of the distribution, but it was certainly a fair criticism.

So I did some research. The appropriate distribution is the Poisson distribution. This is the same distribution that is used by Bitcoin itself to calculate the expected difficulty for the next difficulty change. It is used to predict the network's behavior, to maintain the statistical probability of one block every 10 minutes. Bitcoin would collapse without this prediction.

So it turns out there's a wonderful module in Python called numpy. It has a built-in Poisson-distributed random generator, making my life easy and the change a breeze.

The main point about the Poisson distribution that should address the concerns: the Poisson distribution is not symmetrical; it is skewed toward the infinity end.

I also found and fixed a bug in my stats calculation. It didn't change the results enough to invalidate my previous conclusion, but these results should be accurate.

So with that, I reran my tests from before.


Slowest Coin

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 7.29973071787 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 8.33333333333X the block-find-speed

Slow Coin

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 6.86225439815 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 4.0X the block-find-speed

Medium Coin

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 6.00872764122 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 2.0X the block-find-speed

Fast Coin

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 4.23694576719 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 1.0X the block-find-speed

Very Fast Coin

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 0.0129950864524 percent of the profit
Over sample size of 100000
When worker1's average share-find-speed was: 0.5X the block-find-speed

As you can see, the results are very similar.



And of course, the obligatory code, so you know I am not full of shit and can try it for yourself.

import numpy as np

class Worker:
    def __init__(self, hashrate):
        self.hashrate = hashrate
        self.sharesolvetime = 60 / hashrate  # average seconds per share
        self.shares = 0

class Pool:
    def __init__(self, blockfindtime):
        self.blockfindtime = blockfindtime   # average seconds per block

pool1 = Pool(500)
worker1 = Worker(1)
worker2 = Worker(12)
samplesize = 100000

for n in range(samplesize):
    # one block-solve time per round, shared by both workers
    clock = np.random.poisson(pool1.blockfindtime)
    clock1 = clock
    while clock1 > 0:
        sharesolve = np.random.poisson(worker1.sharesolvetime)
        if sharesolve > clock1:
            break
        worker1.shares += 1
        clock1 -= sharesolve
    clock2 = clock
    while clock2 > 0:
        sharesolve = np.random.poisson(worker2.sharesolvetime)
        if sharesolve > clock2:
            break
        worker2.shares += 1
        clock2 -= sharesolve

print("Worker 1 has:", worker1.hashrate / (worker2.hashrate + worker1.hashrate) * 100, "percent of the hash power")
print("But worker 1 has:", worker1.shares / (worker2.shares + worker1.shares) * 100, "percent of the profit")
print("Over sample size of", samplesize)
print("When worker1's average share-find-speed was:", pool1.blockfindtime / worker1.sharesolvetime, "X the block-find-speed")
    



If you want to run it yourself, you need numpy. http://www.numpy.org/
h2odysee (OP) (Full Member, Activity: 238, Merit: 119)
August 23, 2013, 05:15:26 AM, #34

Looking at a graph of a Poisson distribution, it starts low, peaks, then falls off. That's not the right distribution to use here. We need one that starts high and falls off; the peak should be the very first sample.

If you are using a difficulty such that you have a 50% chance of getting a share each hash, the distribution will look like so:
1st hash: 50%
2nd hash: 25%
3rd hash: 12.5%
4th hash: 6.25%
and so on, dividing by two each time.

So whatever kind of distribution you call that.
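The halving sequence h2odysee describes is the geometric distribution (the discrete counterpart of the continuous exponential). A quick empirical check with a 50% per-hash success chance, purely illustrative:

```python
import random

random.seed(0)
p = 0.5                      # chance that each hash is a share
trials = 100_000
counts = {}
for _ in range(trials):
    n = 1
    while random.random() >= p:   # this hash missed; try another
        n += 1
    counts[n] = counts.get(n, 0) + 1

# expect ~50% at 1 hash, ~25% at 2, ~12.5% at 3, halving each step
for n in range(1, 5):
    print(n, counts[n] / trials)
```

The observed frequencies halve at each step, matching the 50% / 25% / 12.5% / 6.25% sequence in the post.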

CoinBuzz (Sr. Member, Activity: 490, Merit: 250)
August 23, 2013, 08:38:11 AM, #35

In other pools, vardiff selects a difficulty of 64 for my hashrate.

How does vardiff decide on that? (Maybe this could help us find the answer.)

minerapia (Full Member, Activity: 168, Merit: 100)
August 23, 2013, 01:05:20 PM, #36

It merely calculates how many shares you post per second and adjusts the diff so it's in the desirable range (shares per second).
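A vardiff controller in that spirit might look like the sketch below. Everything here is illustrative: the target rate, the clamping, and the function name are made up, and real pool software has its own retarget rules.

```python
def retarget(diff, shares_in_window, window_secs, target_share_secs=5.0):
    """Scale difficulty toward ~1 submitted share every target_share_secs."""
    if shares_in_window == 0:
        return diff / 2                      # too hard for this miner: halve it
    observed_rate = shares_in_window / window_secs
    factor = observed_rate * target_share_secs
    factor = max(0.5, min(2.0, factor))      # clamp: at most halve/double per step
    return diff * factor

d = 64.0
# miner submits 48 shares in 60 s (4x the target rate): diff doubles (clamped)
d = retarget(d, shares_in_window=48, window_secs=60)
print(d)  # 128.0
```

A miner exactly on target (12 shares in 60 s here) keeps its difficulty unchanged, which is the steady state minerapia describes.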

roy7 (Sr. Member, Activity: 434, Merit: 250)
August 23, 2013, 01:19:07 PM, #37

Because the block changed. When he finds one, it's a whole new block by then. All his work on that last block will never be given credit.

There is no "partial work" on blocks. Each random hash is either a valid share or not. When you get notified there is a new block, you keep trying hashes to see if they meet the new block's target instead of the old block's. The only time a new block has any effect is if you report a share a moment too late and it is 'stale'. But that's true for anyone.

The probability of finding a share is independent of your miner's speed or how many blocks you've looked at previously. A new block notification has no effect on this (unless the difficulty for the new block has changed).

Meanwhile, fast miner got 9 on the last block (or 8, or 10, whatever you like). Even if slow miner finds a block this time, fast miner probably got 8-12 shares again.

If the fast miner is 9x the speed of the slow miner, they should be reporting shares 9x as often as the slow miner. If it takes about 5 shares to find a block, then for every 2 blocks found on average that is 10 total shares, 1 share from the slow miner and 9 shares from the fast miner. So every other block the slow miner didn't report a share during that block. But that doesn't matter. The slow miner is reporting 1/10 of the total shares and will earn 1/10 of the total payout.

Over time, the block changing hurts the slow miner more.

Untrue.
Liquidfire (Newbie, Activity: 28, Merit: 0)
August 23, 2013, 01:26:28 PM, #38

Looking at a graph of a Poisson distribution, it starts low, peaks, then falls off. That's not the right distribution to use here. We need one that starts high and falls off; the peak should be the very first sample.

If you are using a difficulty such that you have a 50% chance of getting a share each hash, the distribution will look like so:
1st hash: 50%
2nd hash: 25%
3rd hash: 12.5%
4th hash: 6.25%
and so on, dividing by two each time.

So whatever kind of distribution you call that.

Your example would solve something like: "What is the distribution of how many hashes it should take to solve a share, given a 50% success rate per hash?"

What I am using the Poisson for is to solve: "Given a known mean of 60 seconds, what is the distribution of the time (in seconds) it will take to solve a share?"

So the low-peak-low shape (skewed toward infinity on the right) is correct for my use.



I can tell you'd prefer a full simulation instead of a half simulation. Mine is a half simulation since I am just using probability to say how long the block/share time took.

Let me give a full simulation some thought. I initially thought it would be hard to do, but the more I think about it, the more I think it might be possible to make each worker a thread. I thought there would be a race condition, but I forgot about the global interpreter lock in Python. Basically, Python can't achieve true multi-threading because the GIL locks the interpreter to one thread at a time, but in this case that would actually be favorable.

Something like calculating how often a worker attempts a hash per second based on hash rate, putting a sleep(x) command in appropriate to that hash rate, and letting them all go to town until someone solves the block.
roy7 (Sr. Member, Activity: 434, Merit: 250)
August 23, 2013, 01:28:33 PM, #39

You'll completely whiff on any given block just as often as you get one share; meanwhile the other guy gets 6 in. If he has a little bad luck, he gets 5. You have a little bad luck? You get 0. But it all evens out, right? A little bad luck this time, a little good luck next time. Of course, every time you have a little good luck you still don't get 2, you get 1. He'll get 7.

Miners aren't paid based on some sort of weighted system where it's the # of shares per block you submit. It's the total number of shares you submit vs the total number of shares all miners have submitted.

Say the slow miner's average hash rate is about 1 share every 2 blocks and the fast miner is 9 shares every 2 blocks. Look at the slow miner over a period of 4 blocks. Maybe he submits 0 shares in blocks 1-3, and 2 shares in block 4. Or 2 shares in block 1 and no shares in block 2-4. Or one share in block 1 and one share in block 4, with no shares in blocks 2-3. The end result is after four blocks he's submitted 2 shares.

The fast miner, on the other hand, would average about 18 shares over these 4 blocks. In total after 4 blocks, on average, we'd see 20 total shares. 2 from the slow miner, 18 from the fast miner. And 2/20 of the coins being generated by the pool would be getting paid to the slow miner. Which specific blocks the slow miner submitted the shares on doesn't matter.

If it's more clear, raise the slow miner's average speed to 1 share per block. Whether he gets 1 share each in blocks 1-4, or 4 shares in block 1 and 0 in blocks 2-4, he still submitted 4 shares and is paid 4/total_shares_in_pool. How the shares were spread between the blocks doesn't change this payout.

(I'm assuming something like proportional or PPLNS payout of course. DGM doesn't use a proportional ratio although steady miners over time end up with the exactly fair proportional income based on their relative speeds.)
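roy7's arithmetic can be checked with a toy proportional-payout model (the per-block numbers below are from his example; which block a share lands in is irrelevant to the payout):

```python
# 4 blocks; per-block share counts for each miner (roy7's example numbers)
slow_per_block = [0, 0, 0, 2]    # slow miner: 2 shares total
fast_per_block = [5, 4, 5, 4]    # fast miner: 18 shares total

total = sum(slow_per_block) + sum(fast_per_block)
slow_cut = sum(slow_per_block) / total
print(slow_cut)   # 0.1: the slow miner earns 1/10 of the payout

# moving the same shares into different blocks changes nothing
for reshuffled in ([2, 0, 0, 0], [1, 0, 0, 1], [0, 1, 1, 0]):
    assert sum(reshuffled) / total == slow_cut
```

Only the totals enter the payout fraction, so "whiffing" on individual blocks costs the slow miner nothing over a payout window.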
mueslo (Member, Activity: 94, Merit: 10)
August 23, 2013, 02:03:26 PM (last edit: August 23, 2013, 02:39:45 PM by mueslo), #40


If the block changes every 30 seconds on average, and you find a share every 30 seconds on average, and someone else finds a share every 5 seconds, how do you have the same chances?

You cannot precalculate when an event like this may happen; that would violate your having a constant chance of finding a hash, and would make it dependent on the past. If I find a share now, I have the exact same chance of finding another one a moment later, since finding a share is (approximately) instantaneous. Or let me put it this way: say you have been unlucky and got, e.g., 0 shares in the time you were on average supposed to get two. That does not make it any more likely that you will find shares in the next timespan. Since you are constantly looking for shares with the same probability at each point in time, the Poisson process continues even across block changes, so you can't just reset the time.

I explained this in the old thread. If you have an event that has a constant chance of happening in time (e.g. finding a hash, nuclear decay in atoms, ...), the number of times the event occurs on average in a given timespan (here: between two blocks) is given by the Poisson distribution.

Here is the probability (y axis) that you find N shares (x-axis) between the two blocks if your hashrate is equal to the block find time.

Another analogy. You are gambling with slot machines that are free to use. Every time the block changes you have to change slot machines (but you can do so instantly; we are not simulating latency). By your argument, someone who can pull the lever five times as fast would get more than five times what you got. Why should that be the case?

I suggest you read up a bit on Poisson processes; I don't think you understand them quite correctly at the moment. Additionally, I'm confident I know what I'm talking about: I study physics. I'm also by no means a fast miner; I have a measly 1 MH/s.



You'll completely whiff on any given block just as often as you get one share; meanwhile the other guy gets 6 in. If he has a little bad luck, he gets 5. You have a little bad luck? You get 0. But it all evens out, right? A little bad luck this time, a little good luck next time. Of course, every time you have a little good luck you still don't get 2, you get 1. He'll get 7.

You each have a good block and a bad block. You have 1 share. He has 12.

1/12 is not the same ratio as 1/6.

He might get 7, but that's only 17% more than he should have gotten. When you get 2, that's 100% more than you should have gotten. Here's the picture accompanying the above, for a miner that has 6x the hashrate of the share-rate-equal-to-block-rate miner:




Now on to your simulation: You are using the wrong distribution. If you don't know why, read my post again from the top, read the wiki page or watch the video I linked above.

The probability distribution of the times between finding shares (again, this is a Poisson process) is proportional to e^(-t/T), where T is the average time between shares. This has exactly the property that you can restart it at any point without changing anything.
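The restart-at-any-point property is the memorylessness of the exponential distribution: P(T > s+t | T > s) = P(T > t). A quick numerical sketch of it (numbers chosen by me for illustration):

```python
import random

random.seed(7)
T = 30.0   # average time between shares
samples = [random.expovariate(1 / T) for _ in range(500_000)]

s, t = 10.0, 20.0
p_uncond = sum(x > t for x in samples) / len(samples)
survivors = [x for x in samples if x > s]      # already waited s seconds
p_cond = sum(x > s + t for x in survivors) / len(survivors)

# memorylessness: having already waited s seconds changes nothing
print(round(p_uncond, 3), round(p_cond, 3))
```

The conditional and unconditional probabilities agree up to sampling noise, which is why regenerating share times at a block change has no effect on the outcome.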


Here is the correct version of your code:
Code:
import numpy.random as rnd

class Worker:
    sharetime = None  # time the next share is found
    def __init__(self, avgsharetime):
        self.avgsharetime = avgsharetime
        self.hashrate = 60 / avgsharetime
        self.shares = 0
        self.generatesharetime(0.0)

    def generatesharetime(self, currenttime):
        self.sharetime = currenttime + rnd.exponential(scale=self.avgsharetime)

class Pool:
    blocktime = None  # time the next block is found
    def __init__(self, avgblocktime):
        self.avgblocktime = avgblocktime
        self.generateblocktime(0.0)
    def generateblocktime(self, currenttime):
        self.blocktime = currenttime + rnd.exponential(scale=self.avgblocktime)

pool1 = Pool(2)
worker1 = Worker(12)
worker2 = Worker(1)
duration = 1000.

t = 0.
while t < duration:
    if pool1.blocktime < worker1.sharetime and pool1.blocktime < worker2.sharetime:
        t = pool1.blocktime
        print("new block t=", t)
        worker1.generatesharetime(t)  # if you disable these, nothing changes in the outcome
        worker2.generatesharetime(t)  #
        pool1.generateblocktime(t)
    elif worker1.sharetime < worker2.sharetime:
        t = worker1.sharetime
        print("worker 1 found a share t=", t)
        worker1.shares += 1
        worker1.generatesharetime(t)
    elif worker2.sharetime < worker1.sharetime:
        t = worker2.sharetime
        print("worker 2 found a share t=", t)
        worker2.shares += 1
        worker2.generatesharetime(t)
    else:
        print("this is hugely improbable")

print(worker1.shares)
print(worker2.shares)

print("Worker 1 has:", worker1.hashrate / (worker2.hashrate + worker1.hashrate) * 100, "percent of the hash power")
print("But worker 1 has:", worker1.shares / (worker2.shares + worker1.shares) * 100, "percent of the profit")
print("When worker1's average share-find-speed was:", pool1.avgblocktime / worker1.avgsharetime, "x the block time")
 

Example Output:

blocks and shares over time

Worker 1 has: 7.69230769231 percent of the hash power
But worker 1 has: 8.26645264848 percent of the profit
When worker1's average share-find-speed was: 0.166666666667x the block time