And why would bcrypt have any more resistance to a mining farm?
I mean, if we start using mining farms to brute force (against what, exactly? You need something to check the outcome against, right?), no hashing method is safe.
Of course bcrypt is safe against hashing farms. That is the whole point.
As to why/how? It isn't a single hash but a looping construct which performs multiple rounds of hashing, using the prior round's output as input for the next. The default work factor is 10, which means 2^10 = 1,024 iterations. Today the recommended minimum is 16, i.e. 2^16 = 65,536 iterations. That significantly cuts the throughput of a hashing farm: instead of being able to check billions of candidate passwords every second, the same hardware can only check thousands.
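The looping construct can be sketched in a few lines of Python using the stdlib `hashlib`. This is a simplified illustration of the key-stretching idea only, not bcrypt's actual algorithm (bcrypt iterates an expensive Blowfish key schedule, not SHA-512):

```python
import hashlib

def stretched_hash(password: bytes, salt: bytes, cost: int) -> bytes:
    """Iterate a hash 2**cost times, feeding each round's output into the next.
    Illustrative only -- real bcrypt uses a Blowfish-based construction."""
    state = password + salt
    for _ in range(2 ** cost):
        state = hashlib.sha512(state).digest()
    return state

# cost 10 = 1,024 rounds; cost 16 = 65,536 rounds -- each +1 doubles the work
h = stretched_hash(b"hunter2", b"16-byte-salt....", 10)
```

An attacker must pay the same 2^cost rounds per password guess, which is exactly what throttles a hashing farm's throughput.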
Another way to look at it: for a given password complexity and cracking hardware, if a SHA-512 hash could be brute forced in 1 hour, the same attack would take multiple years against bcrypt(16).
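The arithmetic behind that comparison, assuming a work factor of 16 (the 1-hour baseline is the hypothetical figure from above):

```python
# Hypothetical hardware that brute-forces the keyspace against one SHA-512 in 1 hour.
single_hash_hours = 1
iterations = 2 ** 16                    # bcrypt work factor 16 = 65,536 rounds
total_hours = single_hash_hours * iterations
print(total_hours / 24 / 365)           # ~7.48 years for the same keyspace
```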
It isn't any harder to brute force 3 algorithms which combined take 3x clock cycles than it is to brute force a single algorithm which takes 3x clock cycles.
Doesn't it take 3 times as long to crack a password that has been hashed 3 times?
Above you say it will take as long to crack a single-hashed MD5 password as one that has been hashed with SHA-512, MD5, and Whirlpool.
I think you are missing the point. If you use n algorithms to achieve a total hashing time of 3x, you haven't gained anything (other than needless complexity) over using a SINGLE good algorithm and iterating it enough times to take 3x time.
Stored hash = RandomAlgo#1(RandomAlgo#2(RandomAlgo#3(password+salt)+salt)+salt)
isn't any more secure than this:
Stored hash = SHA512(SHA512(SHA512(password+salt)+salt)+salt)
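Written out with Python's `hashlib` (concatenating password and salt exactly as in the pseudo-formula; a sketch of the construction being discussed, not a recommended scheme):

```python
import hashlib

def sha512(data: bytes) -> bytes:
    return hashlib.sha512(data).digest()

password, salt = b"correct horse", b"some-random-salt"

# SHA512(SHA512(SHA512(password+salt)+salt)+salt)
stored = sha512(sha512(sha512(password + salt) + salt) + salt)

# An attacker testing a guess performs exactly these same 3 hashes --
# the identical cost to any other trio of algorithms totalling 3x clock cycles.
```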
All of this randomly picking algorithms from a pool and chaining them is what is known as "feel-good security". It does nothing that simply iterating a single good algorithm doesn't do. The second issue is that, when dealing with massive cracking engines, a multiplier of 3x (or 5x, or 18x) doesn't provide any meaningful increase in resistance. Since you seem hell-bent on catching the "I must reinvent the wheel" train, your next thought is likely to increase the number of iterations. So let's take this to its logical conclusion.
If you took the above idea, used a core hashing algorithm which was harder to brute force, ensured sufficient entropy in the salt (64 bits minimum), built in protection against intra-round key collisions (which would otherwise speed up rainbow-table attacks), designed it to store the version, hash, salt, and number of rounds in the output (which allows upgrades/compatibility), built in a whole bunch of error checking to reduce/prevent usage bugs which weaken the output, and allowed for raising the number of rounds as computers get faster... well, you would have just re-invented bcrypt. In a decade or two, when it was as exhaustively tested and analyzed as bcrypt, you would have high confidence it was as secure as bcrypt.
Alternatively, you could just use what was already written a decade ago.
BTW, bcrypt isn't the only option. scrypt is a memory-hard variant, which restricts throughput on GPUs and FPGAs. PBKDF2 (Password-Based Key Derivation Function 2) is a third common option, which can use a variety of underlying hash algorithms.
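Both of those alternatives ship in Python's standard library `hashlib`; the parameter values below are illustrative, not a tuning recommendation:

```python
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)

# PBKDF2: iterates an HMAC; cost scales linearly with the iteration count.
dk1 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# scrypt: memory-hard -- n is the CPU/memory cost, r the block size,
# p the parallelism factor.
dk2 = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)
```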