Author Topic: Idea for a new coin, seeing if it might be feasible. P2P Search engine?  (Read 849 times)
skeeterskeeter (OP)
Full Member

Activity: 160
Merit: 100


October 14, 2014, 12:50:08 AM
 #1

I had the idea of a P2P search engine. It's informal and I have no real plans to develop it (yet?), but I just wanted to share it and get opinions or ideas.

So the "search engine" would be miners/nodes on the network. There would be two jobs for the miners. They would need to crawl the web and produce the massive page rank array that is then used by the other miners to serve search results. Miners would be paid in "coins", which could then be sold to users. The page rank array would be akin to the block chain, but it would only evolve at a local level, based on each miner crawling the web and adding to its part of the array.
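
To make that concrete, here's a rough Python sketch of what one miner's crawl step might look like. None of this is specified anywhere; the shard rule (hashing the URL into one of N slices) and the regex link extraction are just my assumptions for illustration:

Code:
# Rough sketch of a miner's crawl step: fetch a page, pull out its
# outbound links, and record them in the miner's local shard of the
# global page rank array. Shard rule and link extraction are
# assumptions for illustration only.
import re
import hashlib
import urllib.request

LINK_RE = re.compile(r'href="(https?://[^"]+)"')

def shard_id(url, num_shards=1024):
    # Which slice of the global link graph a URL belongs to.
    return int(hashlib.sha256(url.encode()).hexdigest(), 16) % num_shards

def crawl_page(url, local_shard, my_shards):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    links = set(LINK_RE.findall(html))
    if shard_id(url) in my_shards:
        # This miner is responsible for this URL's slice of the array,
        # so it records the outbound links it just verified by crawling.
        local_shard.setdefault(url, set()).update(links)
    return links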

So there is proof of work, namely crawling the web and adding to the array; there would have to be miners that "fight" the crawlers to verify that the crawlers are creating proper links, otherwise the proof of work could easily be gamed by just submitting garbage data.

There would also be proof of work for the miners who search their local copy of the page rank table to produce search results for users; these results would be correlated across the network (multiple "miners" would hold duplicate page rank data) to produce a final result set with an "accuracy" rating based on which miners agree on the results of the search.
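
Roughly something like this (a sketch only; the quorum rule and the scoring are my assumptions, not a worked-out protocol):

Code:
# Sketch of the "accuracy" rating: several miners answer the same
# query from their local shards, and each result is scored by the
# fraction of responding miners that returned it.
from collections import Counter

def correlate_results(responses, quorum=2):
    # responses: one list of result URLs per responding miner.
    votes = Counter(url for results in responses for url in results)
    scored = [(url, count / len(responses)) for url, count in votes.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    # Keep only results that at least `quorum` miners agree on.
    return [(url, acc) for url, acc in scored if acc * len(responses) >= quorum]

# correlate_results([["a", "b"], ["a", "c"], ["a", "b"]])
# -> [("a", 1.0), ("b", 0.666...)]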

There could also be proof of stake for "miners" who just hold part of the page rank array and "occasionally verify" with the network; they would hold the "stronger" links in the web, basically pages that have always linked to each other and have done so for a long time. This would keep malicious miners from slowly changing search results in their favor over time.
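
Picking out those "stronger" links could be as simple as tracking how long each link has survived re-verification; the 30-day threshold below is an arbitrary assumption on my part:

Code:
# Sketch of identifying "stronger" links: links that have existed and
# been re-verified for a long time.
import time

THIRTY_DAYS = 30 * 24 * 3600

def strong_links(link_first_seen, min_age=THIRTY_DAYS, now=None):
    # link_first_seen: dict mapping (src_url, dst_url) -> unix timestamp
    # of when the link was first verified by a crawler.
    now = time.time() if now is None else now
    return {link for link, first in link_first_seen.items()
            if now - first >= min_age}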

---------

Obviously that's a lot; it's not well thought out, it's missing logic, and I'm not sure I understand proof of stake versus proof of work. But I just wanted to bring it up for discussion.
Willisius
Sr. Member

Activity: 364
Merit: 250

I'm really quite sane!


October 14, 2014, 01:28:47 AM
Last edit: October 14, 2014, 01:49:00 AM by Willisius
 #2

The first thing that comes to mind is how rankings would be determined. It sounds like all miners would have to agree that a given rank is valid. If there were a deterministic ranking method, a sort of DoS attack could be possible whereby one creates pages that fully satisfy the ranking criteria but have no usable content.

edit:
If you'd like to see some empirical evidence of this attack, look up search engine optimization. ;) It turns out the economically rational action is to game the ranking algorithm.
skeeterskeeter (OP)
Full Member

Activity: 160
Merit: 100


October 15, 2014, 05:19:25 PM
 #3

Quote from: Willisius on October 14, 2014, 01:28:47 AM
The first thing that comes to mind is how rankings would be determined. It sounds like all miners would have to agree that a given rank is valid. If there were a deterministic ranking method, a sort of DoS attack could be possible whereby one creates pages that fully satisfy the ranking criteria but have no usable content.

edit:
If you'd like to see some empirical evidence of this attack, look up search engine optimization. ;) It turns out the economically rational action is to game the ranking algorithm.

What do you mean by "to game the ranking algorithm"?

---------

I am saying this on the assumption that this network would require a huge amount of data to be stored across all of the nodes. If that is true, I think it would be more feasible for each node to hold only a subset of the entire rank list. There would be many copies of the list stored by many nodes; when a search is done, a consensus is taken from the nodes that respond to the search request as to what the results should be.

The required number of copies of a page's information in the rank list is the number of other pages linking to that page. So if there are 10,000 links to a site, then there should be ~10,000 nodes holding that site's information (and its inbound links). Every time a node responds to a search and returns a set of results, it "releases" that page's information to a new node, which then takes in the links, crawls the web to check them, and then "waits" for searches against them, and the cycle continues.
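
In code, the replication rule might look something like this (the floor and cap are my own assumptions, to keep obscure pages replicated at all and popular ones from flooding the network):

Code:
# Replication rule from the paragraph above: a page's link info is
# held by roughly as many nodes as it has inbound links.
def required_replicas(inbound_links, floor=10, cap=10_000):
    return max(floor, min(len(inbound_links), cap))

# A site with 10,000 inbound links -> ~10,000 replicas.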

I think this would make the probability very small that one entity could control all (or most) of the nodes that hold the current link information for a page. Thus, assuming everyone works together to create good links and page ranks, an attacker would have to control the entire network, or a very large portion of it, to affect search results.
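
A back-of-the-envelope way to see this: if an attacker controls a fraction p of the nodes and a page's info is replicated on k independently chosen nodes, the chance they hold every copy is about p^k. (This ignores Sybil attacks and correlated node placement, which are probably the real hard part.)

Code:
# Chance that an attacker holding fraction p of the network controls
# all k replicas of a page's link info, assuming replicas land on
# independently random nodes (a simplifying assumption).
def capture_probability(p, k):
    return p ** k

# An attacker with 10% of the network vs. a page replicated on
# 10,000 nodes: 0.1 ** 10000 is effectively zero; even k = 20
# already gives ~1e-20.
print(capture_probability(0.1, 20))  # ~1e-20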
YarkoL
Legendary

Activity: 996
Merit: 1013


October 16, 2014, 05:16:19 AM
 #4


Quote from: skeeterskeeter on October 14, 2014, 12:50:08 AM
So there is proof of work, namely crawling the web and adding to the array; there would have to be miners that "fight" the crawlers to verify that the crawlers are creating proper links, otherwise the proof of work could easily be gamed by just submitting garbage data.


What about nodes that cheat by just going to Google?

"God does not play dice"
Willisius
Sr. Member

Activity: 364
Merit: 250

I'm really quite sane!


October 16, 2014, 06:01:06 AM
 #5

Quote from: skeeterskeeter on October 15, 2014, 05:19:25 PM
Quote from: Willisius on October 14, 2014, 01:28:47 AM
The first thing that comes to mind is how rankings would be determined. It sounds like all miners would have to agree that a given rank is valid. If there were a deterministic ranking method, a sort of DoS attack could be possible whereby one creates pages that fully satisfy the ranking criteria but have no usable content.

edit:
If you'd like to see some empirical evidence of this attack, look up search engine optimization. ;) It turns out the economically rational action is to game the ranking algorithm.

What do you mean by "to game the ranking algorithm"?
...

I mean that web pages benefit from being perceived as relevant to a search query, even if they're not. If the methods for determining a web page's relevance are completely open source, it follows that a particularly dedicated entity could possibly create numerous web pages that are designed solely to be ranked as relevant, while having little actual content other than advertisements.