Self-programming seems like a real concern to me. Without any limitations or an unchangeable core, an AI could go off in all sorts of strange directions: a mad sadistic god, a benevolent but interfering nuisance, a disinterested shut-in, or something inconceivable to a human mind.
Also, for the sake of simplicity, sci-fi stories tend to feature one central AI with one defining trait, but with sufficient computing power you could end up with thousands or millions of AIs going off in all directions. Unless one of them managed to hack all the others and absorb them, they'd all end up spending their time fighting each other.