Here's the thing: their first iteration of this card was Larrabee, and that was cancelled because the best it could do was match an already phased-out 4870 in graphical terms, though it was supposedly a compute monster, rivaling a 5970 in that area.
And while I do understand what you are saying regarding Intel's work in these coding circles, that is quite different (like night and day) from writing a driver that will work for consumers. Let's put it this way: they didn't scrap the architecture, and we're supposed to see it in Haswell (the chip after Ivy Bridge) as its graphics engine.
So far Nvidia and AMD release pretty decent driver packages that optimize well for their respective hardware. You do hear grumbling about this or that aspect, but overall they work, and both companies have large dev teams doing nothing but building drivers. It's a monumental task to build this software for any architecture, and Intel just isn't really all that into it yet. So they shifted their resources to Knights Ferry projects, where they can deal with non-consumers and hopefully leverage what they learn into a driver that might work OK by the time Haswell launches.
In the meantime, we saw a new, faster compute architecture with the 6950/6970 launch, and Nvidia is busting their chops to bring out their next-gen products too. This is a tough one for Intel; they are in deep, and I mean deep. The share of the low-power market scooped up by a simple chip like Brazos is proof that graphics matter. Now Llano is launching this summer with even more under the hood and fabulous power management. Intel is feeling the graphics pinch already, and if they had a product ready to launch, that technology would already be on the market. No one sits back and loses a potential sale on purpose.
Anyhow, OpenCL is working and gaining adoption. Intel coming in now with a new programming model based on mixed scalar/vector programming, one that doesn't offer significant benefits, won't take share from anyone any time soon. Meanwhile, they're chasing two companies with this approach that already have great market presence. I can't see them doing anything significant until this tech hits their mainstream chips, which would actually help drive adoption. Even that is contingent on doing it right, and anyone can make a mistake.
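To make the programming-model point concrete: OpenCL expresses work in a data-parallel style, where you write the computation for one element and the runtime launches it across the whole index range, instead of writing an explicit scalar loop. A rough sketch of that contrast in plain Python/NumPy (not actual OpenCL, just an illustration of the two styles):

```python
import numpy as np

a = np.arange(8, dtype=np.float32)
b = np.arange(8, dtype=np.float32)

# Scalar style: an explicit loop, one element at a time.
out_scalar = np.empty_like(a)
for i in range(len(a)):
    out_scalar[i] = a[i] * 2.0 + b[i]

# Data-parallel style: the body of an OpenCL kernel is essentially
# "out[i] = a[i] * 2.0 + b[i]" for a single work-item i; here the
# whole-array expression plays the role of launching it over every i.
out_parallel = a * 2.0 + b

assert np.allclose(out_scalar, out_parallel)
```

The point is that the per-element kernel maps cleanly onto wide GPU hardware, which is why OpenCL's model caught on with both AMD and Nvidia.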