Reading the source is a terribly inefficient way of checking for backdoors. Unless you *really* know what to look for, you could read source code 100 times and never realise that there was a backdoor in it.
You can learn far more, far more quickly, about whether a program is secure, full of trojans, or likely to open up backdoors by running it in a Virtual Machine and inspecting what it does while running.
Strictly speaking, it's nearly impossible to tell if a program is malicious just by running it in a virtual machine and inspecting what it does. For example, suppose the actual SolidCoin 2.0 client had a booby-trap: "if block number is greater than 8000 and difficulty is greater than 100, find and upload all wallets for all Bitcoin variants to RealSolid and erase the hard disk". That's easy to code and very difficult to detect, because until the booby-trap is triggered it doesn't do anything suspicious - no unexpected network activity, no dodgy file accesses, nothing. It'd also be more or less impossible to hit the booby-trap condition in testing before it triggered for real.
(The observant will notice that SolidCoin has actually crossed that threshold and nothing's happened - it's just a hypothetical example.)
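To make it concrete, the entire booby-trap could be a couple of lines buried anywhere in the client. A minimal sketch of the idea - every name here is invented for illustration, and this is not taken from any real client:

```python
# Hypothetical logic bomb of the kind described above. The function is a
# no-op on every block below the threshold, so sandbox/VM inspection sees
# nothing suspicious; the hostile path only becomes reachable once the
# live chain crosses the trigger condition.

def maybe_trigger(block_number: int, difficulty: float) -> str:
    if block_number > 8000 and difficulty > 100:
        # Armed path: in a real attack the payload (steal wallets, wipe
        # the disk) would run here. Unreachable during normal testing.
        return "armed"
    # Dormant path: behaves like perfectly innocent code.
    return "dormant"
```

Run it against any realistic test chain and it returns "dormant" every time, which is exactly the point.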
In any case, if I were clever and nefarious enough to plant some advanced, remotely activated, undetectable trojan like that in the client, surely I wouldn't be so stupid as to leave it in the source code? So how exactly would reading the source help you there?
If memory serves me correctly, Bitcoin's moved to having its binaries built deterministically by multiple trusted developers, who check that they all get bit-for-bit identical binaries, in order to make this kind of attack harder. It's a shame SolidCoin doesn't do the same thing.
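The check those developers perform boils down to hashing their independently built binaries and confirming the digests agree. A rough sketch of the idea - the file paths and helper names are my own, and Bitcoin's actual process used the Gitian deterministic-build tooling rather than anything this simple:

```python
# Sketch of the "multiple builders compare their binaries" check.
import hashlib

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large binaries don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(paths) -> bool:
    # A release only ships if every independent build hashes identically;
    # a single mismatch means someone's build (or toolchain) was tampered
    # with, or the build isn't actually deterministic.
    digests = {sha256_of(p) for p in paths}
    return len(digests) == 1
```

The value of the scheme is that an attacker would have to compromise every builder's machine the same way to slip a trojaned binary past the comparison.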