I believe this would be much like the approach that "VanityGen" takes, where it just randomly generates a key and tests to see if it matches your desired prefix... You could randomly generate the missing characters and test if it results in a valid key etc.
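The quoted generate-and-test idea can be sketched as follows. This is a minimal toy, not VanityGen's actual code: the template, alphabet, and `is_valid` predicate are all made-up stand-ins (a real test would check something like a Base58Check checksum).

```python
import random
import string

ALPHABET = string.hexdigits.lower()[:16]   # toy alphabet: lowercase hex
TEMPLATE = "a3?f9?c1?2"                    # '?' marks a missing character (made-up)
SECRET   = "a37f94c152"                    # the key we pretend to be searching for

def is_valid(candidate: str) -> bool:
    # Stand-in for a real validity test (e.g. a checksum verification).
    return candidate == SECRET

def random_guess(template: str) -> str:
    # Fill every '?' with a random character, VanityGen-style.
    return "".join(random.choice(ALPHABET) if c == "?" else c for c in template)

attempts = 0
while True:
    attempts += 1
    guess = random_guess(TEMPLATE)
    if is_valid(guess):
        break

print(f"found {guess} after {attempts} random attempts")
```

Note that because each guess is drawn independently, the same combination can be tested many times; this is the weakness discussed below.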
Usually, when randomness is involved in an algorithm, only the first step is random; the subsequent steps are deterministic, and there is a fixed relationship between them. Such algorithms also have a chance of failure, in which case you go back to the first step and choose another random starting point. Pollard's kangaroo algorithm is an example: you "hop" around deterministically based on a random entry point.
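The "random start, deterministic hops" pattern can be illustrated with a toy walk; the group size, target, and hop function here are all made-up, and this is only the walk structure, not the full kangaroo algorithm (which runs tame and wild walks and looks for a collision).

```python
import random

N = 1_000_003        # toy space size (made-up)
TARGET = 123_456     # toy value we are "hopping" toward (made-up)

def hop(x: int) -> int:
    # Deterministic pseudorandom step: the jump size depends only on x,
    # so two walks that ever land on the same point merge afterwards.
    return (x + (1 << (x % 16))) % N

start = random.randrange(N)   # the ONLY random choice in the whole run
x = start
for _ in range(200_000):
    if x == TARGET:
        break
    x = hop(x)
# On failure, the algorithm restarts from a fresh random entry point.
```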
I am not familiar with Vanitygen's internals, but most probably the only reason it chooses a random key as its starting point is that the final result must itself be random: if it started from anything predictable, the result could be reproduced, which would make the whole thing useless. Besides, that code is looking for a collision in a very small space, no more than a couple of bytes depending on the number of characters you chose, so there are many possible results. Here, by contrast, we are looking for a single needle in a haystack.
What you suggested could work if and only if there were some rule for going from key1 (the random key) to key2 other than iteration (+=1), which means the only sensible way of implementing something like this is to randomize the entry point and then iterate from there. Anything else (e.g. choosing a fresh random combination at each step) risks never finding the result at all, since you would be shooting in the dark.
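A random entry point followed by plain iteration can be sketched like this (toy numbers; the space size and secret are made-up). The point is that the start is random but coverage is still exhaustive:

```python
import random

N = 10_000        # toy key-space size (made-up)
secret = 7_321    # made-up key we are searching for

start = random.randrange(N)   # random entry point
for i in range(N):            # then plain iteration (+= 1), wrapping around
    k = (start + i) % N
    if k == secret:
        break
# Every key is tested exactly once, regardless of where the walk started.
```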
Hint: the trick to optimizing something like this is going through all cases without repeating work you have already done. A simple loop over the 656 mil+ possibilities would have taken about 20 minutes.
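Going through all cases without repetition amounts to enumerating every combination of the missing characters exactly once. A minimal sketch, using a made-up template and a stand-in equality check instead of a real key-validity test:

```python
from itertools import product
import string

HEX = string.hexdigits.lower()[:16]
TEMPLATE = "a3?f9?c1?2"     # '?' marks missing characters (made-up example)
SECRET   = "a37f94c152"     # made-up target

slots = [i for i, c in enumerate(TEMPLATE) if c == "?"]
chars = list(TEMPLATE)

found = None
# product() visits each combination exactly once: no wasted repeats,
# unlike the random guess-and-test approach.
for combo in product(HEX, repeat=len(slots)):
    for i, c in zip(slots, combo):
        chars[i] = c
    candidate = "".join(chars)
    if candidate == SECRET:
        found = candidate
        break
```

With 3 missing hex characters this is only 16³ = 4096 candidates; the worst case is bounded, whereas independent random guessing has no upper bound.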