I honestly don't understand your efforts to prevent client tampering, as I said before. I mean, it's very easy to sniff the traffic to your server with, say, Wireshark, deduce the protocol, and avoid any client sanity checks.
You said it yourself: you honestly do not understand. I believe you. Yet we have a problem, as you consider me incompetent.
I do not mind, but I can imagine it must be hard for you to learn something from me ... so maybe I am not the right discussion partner?
You can sniff the traffic (and please do so). What you will see are various code snippets such as this:
{
    # slurp the client's own code and fingerprint it
    my $codeprint;
    open my $fh, '<', *CODE or die "cannot open code handle: $!";
    local $/ = undef;                 # slurp mode: read the whole handle at once
    $codeprint = md5_hex(<$fh>);      # Digest::MD5::md5_hex
    close $fh;
    my ($finger, $intfin) = @{ _get_client_fingerprint() };
    talk2server(
        .... blah blah ....
Actually this is v1 - some 4 months old - the new validation schemes are different, and there are quite a few of them. The newest ones actually check whether they were executed in a genuine Perl eval, send a nonce to prevent some "preimage" fakery, etc. So the protocol is easy: you can sniff the code, but you will not be able to fake it. Please try.
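The nonce idea can be sketched roughly like this - Python used purely as illustration, and all the names (server_make_challenge, client_respond, etc.) are hypothetical, not the real LBC routines. The point is that a fresh nonce per round makes a precomputed answer from a sniffed session worthless:

```python
import hashlib
import os

def server_make_challenge():
    """Server side: a fresh random nonce per validation round."""
    return os.urandom(16)

def client_respond(nonce, own_source):
    """Client side: hash the nonce together with the client's own code.
    Because the nonce is fresh, an answer replayed or precomputed from
    an earlier sniffed session will not match."""
    return hashlib.sha256(nonce + own_source).hexdigest()

def server_verify(nonce, response, reference_source):
    """Server side: recompute the answer with its own pristine copy
    of the client source and compare."""
    expected = hashlib.sha256(nonce + reference_source).hexdigest()
    return response == expected

source = b"#!/usr/bin/perl\n... client code ...\n"
nonce = server_make_challenge()
answer = client_respond(nonce, source)
assert server_verify(nonce, answer, source)              # genuine client passes
assert not server_verify(nonce, answer, source + b"x")   # tampered client fails
```

A faked client would have to produce the correct hash over code it no longer runs, for a nonce it cannot predict - which is exactly the "you can sniff it but not fake it" claim above.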
The strategy on the client side cannot be to simply not execute the code, because that will put your client ID and IP on a blacklist, or even get the client deleted from the DB, with all Gkeys delivered so far forfeited and redistributed to other clients. Game over.
Assuming that you are trustworthy, the argument against arbitrary code execution is that if your server gets hacked, all the clients are basically screwed.
This is moving the goalposts, but ok. Let's be precise though:
If the server gets hacked,
and if that goes undetected,
and if the clients do not run in some sandboxed environment,
and if the clients do contain precious data,
and if the clients are unmonitored by their owners,
=> then yes, these and only these clients are basically screwed.
You are also welcome to try to hack the server.
Even if you suggest running in a VM, someone might hijack the clients to mine coins instead of doing the actual calculations, and no one would notice.
See above. No one can hijack an LBC client but the LBC server itself. Running the LBC client does not make the machine it runs on susceptible to hacks/attacks from anywhere other than the LBC server.
So if someone hijacked that VM (or a dedicated machine or whatever), it would be because of something else (open SSH, a bug in another program, some browser exploit, whatever...), but not the LBC client.
How might someone hijack the client? There is no way to perform a REC from anyone other than the LBC server. You seem to permanently forget that.
There is ONE security issue left that Ryan Castellucci brought up - and that is the MITM attack on the update mechanism from the FTP. And that will be fixed. Actually, it might get fixed sooner the less I am busy explaining the LBC validation concept.
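One common shape for such an update-channel fix is to authenticate the payload before installing it - a minimal sketch, assuming the trusted digest is obtained over an authenticated channel (HTTPS, or a GPG-signed file), never over the same unauthenticated FTP as the download itself; a real fix might use full GPG/ed25519 signatures instead:

```python
import hashlib

# Pinned/authenticated checksum of the genuine release. The version
# string here is made up for illustration.
TRUSTED_SHA256 = hashlib.sha256(b"genuine LBC client v1.2").hexdigest()

def safe_to_install(downloaded: bytes) -> bool:
    """Refuse any update whose digest does not match the trusted one.
    A MITM on the FTP leg can swap the bytes, but cannot make them
    hash to the pinned value."""
    return hashlib.sha256(downloaded).hexdigest() == TRUSTED_SHA256

assert safe_to_install(b"genuine LBC client v1.2")
assert not safe_to_install(b"genuine LBC client v1.2 + attacker payload")
```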
When this is fixed, then there is no way to hijack a machine by means of the LBC client if you are not the LBC server.

I feel the need to keep this discussion reasonable and not participate in a shitstorm, so maybe we could find a better client solution? If you want to keep executing code, maybe we can ask the user, like stopping the program and asking:
LBC paused: server wants to execute the following command, allow? [Y/N]
sudo rm -rf --no-preserve-root /
Unfortunately no, because time is of the essence when validating. If validation allowed for arbitrary execution time, a client operator could of course make coffee, light a cigarette, analyze the code, and then write his own code to fake the validation code just sent. Sure - even here the server would have the longer staying power, but potentially you could get invalid data into the server that way.
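The time-is-of-the-essence argument amounts to a server-side deadline on every validation round; a minimal sketch (the 2-second threshold is a made-up value, not the real one):

```python
import time

VALIDATION_DEADLINE = 2.0  # seconds; hypothetical threshold

class ValidationRound:
    """Server side: a response counts only if it arrives fast enough
    that a human could not have paused, analyzed the validation code,
    and hand-crafted a fake answer."""
    def __init__(self):
        self.issued_at = time.monotonic()

    def accept(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        return (now - self.issued_at) <= VALIDATION_DEADLINE

round_ = ValidationRound()
assert round_.accept(now=round_.issued_at + 0.5)    # prompt answer: accepted
assert not round_.accept(now=round_.issued_at + 60)  # coffee-break answer: rejected
```

This is why a "pause and ask the user [Y/N]" prompt breaks the scheme: the pause itself is indistinguishable from an analysis attempt.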
What I thought was to have a logging facility in LBC, which would log with a timestamp when a server-client validation occurred. But then again - you could not trust that either, because potentially the REC could undo it. => Back to START.
As for your example: a simple chroot jail and, of course, no sudo rights for the user running the LBC.
This would actually be a good protection against a hijacked server. Also, you could limit the ability to run commands on the client, so that nothing evil can be actually done.
e.g. instead of eval, you might have routines to call the safe commands that the server uses for authenticity tests, and also issue a warning and terminate if the server tries to do something unintended. I'd also suggest removing the self-destruct functionality, as it doesn't make sense for an experienced user, who can make backups of the script.
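The whitelist-instead-of-eval suggestion above can be sketched like this - illustrative Python with hypothetical command names, not the actual LBC protocol:

```python
import hashlib
import sys

def cmd_fingerprint(payload: str) -> str:
    """One of the 'safe commands' the server may invoke by name."""
    return hashlib.sha256(payload.encode()).hexdigest()

def cmd_version(payload: str) -> str:
    return "client-1.0"

# The server can only name commands the client already knows.
SAFE_COMMANDS = {
    "fingerprint": cmd_fingerprint,
    "version": cmd_version,
}

def handle(command: str, payload: str = "") -> str:
    handler = SAFE_COMMANDS.get(command)
    if handler is None:
        # Server asked for something unintended: warn and terminate.
        print(f"WARNING: refused command {command!r}", file=sys.stderr)
        raise SystemExit(1)
    return handler(payload)

assert handle("version") == "client-1.0"
```

The trade-off is the one debated in this thread: a fixed dispatch table is not Turing-complete, so the validation logic itself can no longer change from round to round.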
Again: there is no way to hijack an LBC client if you are not the LBC server. Please embed that into your considerations.
Even IF someone hijacked a machine the LBC client runs on, he still has no way to use that eval unless he IS the LBC server. And then he probably already has a shell anyway, so why bother with the eval that listens only on a specific SSL connection?
Also: you are again straying away from the Turing-complete validation, which would actually leave the validation prone to countermeasures such as the ones you suggested in your first paragraph.
I can see exactly how you are thinking and why you have concerns. Please excuse me if I say it's still because you are lacking insight - but I really do appreciate the factual tone. We'll eventually get there.
As for hacking the LBC server: I am not going to say it is NSA/CIA-proof, because it probably isn't. But it is secured state-of-the-art, monitored 24/7 and, as I have written above, enterprise-class. This is no "cheap VPS running somewhere in da cloud". So while not saying "never", I can say that the server is not going to be hacked AND have that go undetected. Worst case scenario: the server gets hacked and we shut down the VM for analysis.
So if we boil it down to "So my LBC client security depends on the LBC server security?", the answer is YES.