There's a big difference between feeding data to an API and building an engine to crawl sites.
Couldn't participating sites keep a bitcoin.xml file in the root directory of their websites, containing a list of all their products and current prices? Sites could regenerate the file automatically with cron jobs, and the directory service/search engine would only have to fetch that one file from each site on its list.
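A minimal sketch of what this could look like, assuming a hypothetical bitcoin.xml layout (the element names and structure here are just an illustration, not an existing standard):

```python
import xml.etree.ElementTree as ET

# Hypothetical bitcoin.xml a participating site might serve from its web root,
# e.g. at http://example.com/bitcoin.xml (regenerated by a cron job).
SAMPLE_XML = """<?xml version="1.0"?>
<products updated="2011-06-01T12:00:00Z">
  <product>
    <name>Alpaca socks</name>
    <price currency="BTC">0.25</price>
  </product>
  <product>
    <name>Coffee beans (1 lb)</name>
    <price currency="BTC">0.75</price>
  </product>
</products>
"""

def parse_listing(xml_text):
    """Return (name, price_in_btc) pairs from a bitcoin.xml document."""
    root = ET.fromstring(xml_text)
    items = []
    for product in root.findall("product"):
        name = product.findtext("name")
        price = float(product.findtext("price"))
        items.append((name, price))
    return items

# The directory service would fetch each site's bitcoin.xml on a schedule
# and index the results; parsing one file looks like this:
for name, price in parse_listing(SAMPLE_XML):
    print(f"{name}: {price} BTC")
```

The point is that each site only exposes a static file, so the "search engine" is just a periodic fetch-and-parse loop over a known list of URLs rather than a full crawler.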