Bitcoin Forum
Author Topic: Here is the solution for TRUST or WOT  (Read 2089 times)
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 05, 2011, 02:37:25 AM (last edit: July 16, 2011, 08:33:26 AM)
#1

This is my first post. Been reading here since June 9th and mining since then.
I am a Senior Systems Engineer, endpoint security expert and Forex Trader.
I am posting this message because I have a solution to the constant problem of TRUST, and I was finally driven to post by this crap:
https://forum.bitcoin.org/index.php?topic=25962.0 (since I can't post there I had to post in the newbie section)

I'm talking about trust not just in BTC but anywhere in life.
I created an algorithm in 1989 that handles this problem perfectly.
I saw OTC's web of trust and it was shocking to me.
They got the basic idea right, but there is no real algorithm behind it beyond a rudimentary point system, and it is open to trust hacking.
My algo can't be hacked or manipulated by anyone, even me.
The best example I've seen was OTC WOT and it's not even close.
vertygo
Newbie | Activity: 25 | Merit: 0
July 05, 2011, 05:20:49 AM
#2

Wouldn't a new trust system require complete transparency?

Your trust rating (or "Quatloo Bank") has been lowered!

Being vague about your system
 -2 Quatloos.

Everyone and every corporation will be stored in some sort of SQL DB?
 -1 Quatloo.

Assigning everyone a QR barcode just to force them into your trust system
 - Not cool, man.

Sitting on this since 1989?
 +1 Quatloo for keeping the dream alive!

Perhaps not constructive, but the foundation didn't have the right permits anyway.

:P

incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 05, 2011, 06:04:40 AM (last edit: July 16, 2011, 08:38:10 AM)
#3

I can understand the negatives. I've now been up for 2 days, so I'm pretty unable to think straight, and I've been thinking for a long while about how to present any of this without explaining everything, including the algo itself, which I absolutely refuse to do.
This would be the 3rd port of the algo, as I am already using it successfully on 2 other very difficult problems.

I was hoping to get some feedback as far as what people thought of the concept more than anything.

As for transparency, it must be understood that the algo would never be revealed, but it would be plainly obvious to the users, once they used it and understood its use, that it is a very sound system.
People or entities would simply be assigned an ID consisting of a string of characters, probably 128 bits. No info about "them" would be stored or even able to be entered, and it wouldn't be needed anyway. It's just referring to an object with values attached to it that can be modified by others through rating and looked up.
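For illustration only, here is a minimal sketch of what such an opaque ID could be, assuming it is nothing more than a random 128-bit value; the thread never specifies the actual format.

Code:
# Illustrative only: generate an opaque 128-bit ID carrying no personal info.
# The real ID format is not specified in the thread; a random value is assumed.
import secrets

def new_id() -> str:
    """Return a random 128-bit identifier as 32 hex characters."""
    return secrets.token_hex(16)  # 16 bytes = 128 bits

print(new_id())  # e.g. 'f3a1...' -- refers to an entity, stores nothing about it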
Can't store in a DB? How can you ref any object that is not stored somewhere?
Now if entities were not assigned an ID (that could also be referenced by a QR code), then how could anything be calculated or looked up?
Not cool? Alternatives?
You can't rate or assign any values to a non-object.
There is the foundation.
Did you have an idea of how to calculate variables that were never defined?

The "dream" has already been kept alive by using this algo on 2 other hard problems before this.
I just had this idea a few months ago after constantly being barraged by the question of why people have such a weak trust model and readily trust information from sources that are proven untrustworthy, in addition to other people who are also proven untrustworthy.
In addition, there is always a "trust chain" that could go on for very many branches, which would basically be impossible for anyone to verify with a high degree of certainty.
My algo takes care of all that and does it extremely well.
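To make the trust-chain point concrete, a minimal sketch of transitive trust along a chain, assuming a simple multiplicative rule on 0..1 trust values; the rule and the numbers are illustrative assumptions, not the poster's undisclosed algorithm.

Code:
# Illustrative only: trust along a chain A -> B -> C -> ... where each hop is a
# 0..1 trust value. With a multiplicative rule, confidence fades after a few hops,
# which is why long chains are hard to verify by hand. (Assumed rule, not the OP's.)
def chain_trust(hops):
    total = 1.0
    for t in hops:
        total *= t
    return total

print(chain_trust([0.9, 0.9, 0.9, 0.9]))  # ~0.66 after only four 90%-trust hops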
vertygo
Newbie | Activity: 25 | Merit: 0
July 05, 2011, 09:44:52 AM
#4

I appreciate your response, and I do respect your enthusiasm. However, you say:
"it would be plainly obvious to the users, once they used it and understood its use, that it is a very sound system."

If it's that obvious, why can't you actually explain it? Why do I have to engage and immerse myself in your system before knowing its benefits? You have to understand that fundamentally you want everyone to be able to trust everyone else, but YOU are not willing to trust ANYONE with a plain explanation. You have already stated it's hack/tamper proof. Fantastic, no reason it can't be open source then!

As for databases, that's a whole other topic. The way you've hinted at it, you will host a central trust database. Well, bitcoin is doing very well without such a thing. Distributed systems have some interesting possibilities. I'm not saying you are wrong, or that it's possible to do it some other way; I'm just saying the control of this data (even if anonymized) is a huge factor. And of course, just basic security against having the DB DoS'd constantly is another issue.

I just don't get why you can't say: here, this is it. A couple of graphs, some flowcharts, and the way you've solved the other 2 problems with the same algo. Is it like Amazon, where people review each other? Is it like eBay, where we get gold stickers for doing the same thing over and over again? Is it like this forum, where you have to earn a certain level by mucking about in the newbie area?

Assigning concrete IDs to real people might not be too popular on a forum based on the ideology of a completely anonymous electronic currency. I don't have an alternative to IDs; I've never thought about a trust system.

If you want feedback on the system, how it might be integrated and adopted, obstacles that need to be overcome, etc., then you need to give specific information.

This is trust you're talking about.. probably the most difficult and powerful feeling a human being can have.

I'll end with a question because I would like to continue this conversation..

What is to prevent me from paying $1000 to 1000 people to ensure my trust score is top of the line.. JUST so I can scam another 1000 people (seconds after confirming my 100% trust rating) for $5000 each?
Stephen Gornick
Legendary | Activity: 2506 | Merit: 1010
July 05, 2011, 09:46:34 AM
#5

A couple of questions.

In that example that Foodstamp gave, he claims he was given a negative rating from someone he never did business with.  Does your solution provide a defense against false negatives?

An additional example from the Foodstamp scam -- he operated under multiple identities, one of which had a good rating even though his main one was bad.  How does your solution address that?


incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 06, 2011, 05:56:06 AM (last edit: July 16, 2011, 08:41:56 AM)
#6

vertygo:
This doesn't only have application to bitcoin, but in the case of OTC it would apply.
It is, however, at the heart of all things that people trust, and it gives a way for people to rate and verify such trust.

I can't run a distributed database because of the nature of the queries that need to be run. It will need to be on a huge computer / data center. Not sure why you want it to be distributed, as it has nothing to do with anything. DDoS? Sure, any website can get DDoSed. So does this stop Google or Amazon or me?

To your question:
You want to know if a specially crafted attack could be manufactured.
The attack you described would fail in a short time because in my algo the cream rises to the top and the crap sinks to the bottom.
Any one highly rated person (or group of them) could of course be paid to rate someone highly.
The problem with this is that in the end the attacker (being a false actor) would get bad ratings, and those who rated him highly would be negatively impacted.
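A minimal sketch of that back-propagation idea: raters whose ratings turn out to disagree with the consensus about a subject take a hit themselves. The penalty formula is an illustrative assumption; the actual algorithm is never disclosed in the thread.

Code:
# Illustrative only: if someone you rated highly ends up with a low consensus
# score, your own score is pulled down in proportion to the disagreement.
# (Penalty formula assumed for illustration, not the OP's algorithm.)
def penalize_raters(ratings, scores, weight=0.1):
    """ratings: list of (rater_id, subject_id, value in 0..1);
    scores: dict id -> current 0..1 score. Returns an updated copy."""
    updated = dict(scores)
    for rater, subject, value in ratings:
        disagreement = abs(value - scores[subject])
        updated[rater] = max(0.0, updated[rater] - weight * disagreement)
    return updated

scores = {"alice": 0.8, "bob": 0.7, "scammer": 0.1}
ratings = [("alice", "scammer", 1.0),   # alice vouched for the scammer
           ("bob", "scammer", 0.1)]     # bob rated him accurately
print(penalize_raters(ratings, scores))  # alice drops to ~0.71, bob stays at 0.7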
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 06, 2011, 06:09:26 AM (last edit: July 16, 2011, 08:44:44 AM)
#7

Stephen:
Yes, it is possible for someone to rate someone else they don't know, but they would need that person's ID code.
I envision ID codes to be freely available but the exact mechanism I haven't yet figured out.

Multiple identities are spotted simply by observing that an ID has few web connections or none (this quantity would be shown) and that the ID generally is not mature.
I have figured out that IDs cannot rate others until they are connected to the web (by others rating them) with at least 2 connections, maybe more.
This would prevent a hacker group from creating their own web and jacking up ratings on some of its members.
They would all show ZERO web connections, and a legit user would see that.
I prevent this by requiring a certain web connection count before an outgoing rating can occur.
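A minimal sketch of that rule, assuming incoming ratings count as web connections and a threshold of 2; both details are stated only loosely above, so treat them as illustrative.

Code:
# Illustrative only: an ID may issue ratings only after it has been rated
# (i.e. connected to the web) by at least MIN_CONNECTIONS existing members.
MIN_CONNECTIONS = 2  # "at least 2, maybe more" per the post

def can_rate(subject_id, incoming_ratings):
    """incoming_ratings: dict id -> set of rater ids that have rated it."""
    return len(incoming_ratings.get(subject_id, set())) >= MIN_CONNECTIONS

incoming = {"newbie": {"alice"}, "regular": {"alice", "bob", "carol"}}
print(can_rate("newbie", incoming))   # False: only one connection so far
print(can_rate("regular", incoming))  # True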
vertygo
Newbie | Activity: 25 | Merit: 0
July 07, 2011, 04:33:41 AM
#8

I like to break things.. so here goes:

- I don't think you could ever stop one person from having multiple accounts.

- I still see groups of people elevating and destroying one person's trust rating without any appeal.
e.g. A bunch of secret racists who all have good ratings because they network, etc. can destroy a minority person's trust rating as soon as they move into the "wrong" neighbourhood.
e.g. A member of a church comes out as gay, and now everyone there demotes his trust rating.

Sure, he can demote every individual that "has it out for him", but that's probably not as influential as 100 people demoting him. Correct?

I like the idea, I just think you'll never avoid manipulation.. and if it becomes the de facto standard, people's lives could be ruined: job interviews, bank loans, potential relationships.
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 08, 2011, 10:05:37 PM (last edit: July 16, 2011, 08:50:21 AM)
#9

Good points, let me try to explain:
Yes, of course, if LOTS of people rate someone down (with my algo) then his rating will go down, BUT it will only stay down if he is in fact bad and continually gets rated down normally.
In other words, it would take lots of people who are rated high (not just the same people over and over because it won't accumulate that way) consistently to do any damage.
Like I said before, cream rises to the top etc..
In a small "universe", yes, manipulation is much easier, but even then, assuming the person being rated down is in fact a good person, what you would see would be large movements in his rating AND you would see the same in the bad people who were attacking him.
It's difficult to explain any better than that.
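A minimal sketch of the "same people over and over won't accumulate" point, assuming only the latest rating from each rater counts for a given subject; this mechanism is assumed for illustration, since the actual algorithm is not disclosed.

Code:
# Illustrative only: keep just the latest rating from each rater for each subject,
# so repeating the same down-rating does not pile up.
# (Assumed mechanism; the OP does not disclose his algorithm.)
def effective_ratings(events):
    """events: list of (timestamp, rater, subject, value); later entries win."""
    latest = {}
    for ts, rater, subject, value in sorted(events):
        latest[(rater, subject)] = value
    return latest

events = [(1, "troll", "joe", 0.0),
          (2, "troll", "joe", 0.0),   # repeat down-vote, replaces the first
          (3, "alice", "joe", 0.9)]
print(effective_ratings(events))  # {('troll', 'joe'): 0.0, ('alice', 'joe'): 0.9}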
As for multiple accounts: sure, they could, absolutely. The problem with them doing that is they would always be starting over, their "web" connections would show none or few (which marks an immature ID and a possible hacker), and they would have a very difficult task of re-engineering a high rating again, which would be nearly impossible from any one starting point.
I carefully engineered the algo with hacking in mind and I fully expect hackers to try but it's already taken care of.
This is ALL assuming that I'm dealing with a large user base.
If it's attacked in the bootstrap phase then yeah it's a problem.
I'm not sure of the timeframe that this will be actually "opened for business" as I'm still pushing through the analysis BS with the other application of this.
I took a 3 month break from that one because it's a TON of work.
It was a mindbender to port it over to that problem, but I'll skip the details.
It's another, completely different mindbender for Trust, but I managed to figure it out except for a few remaining things.
SgtSpike
Legendary | Activity: 1400 | Merit: 1005
July 08, 2011, 10:22:53 PM
#10

So if no one can leave a rating without having ratings themselves, how does a person ever get rated?

What would stop me from performing 5 $10 sales in order to get good feedback, then screw someone over for $100?

What do you mean by "web" connections?
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 08, 2011, 10:38:04 PM (last edit: July 16, 2011, 08:52:05 AM)
#11

Sgt:
Consider that the system is structured in a way to prevent scam/hack attempts on ratings, and that it's geared towards the world in general and anything related to trust, not just the bitcoin WoT.
The "web" connections I am referring to are connections in the "web of trust": any person (entity/object) must have a direct connection to the complete web by being rated by someone who is already in the web.
This is necessary to prevent "sub webs" from being created, by hackers or even valid groups who are not connected.
"Sub webs" would produce ratings that are not comparable to other "sub webs".
The downside is that this demands a "seed web" be started among original trusted (or not trusted, of course) people, and those people would then be able to invite others into the web by rating them.
Anyone, good or bad, can join the web. They just need to be rated.
Sure, if their initiating rater gives them a good rating just to get them in, that's ok. They will show 1 rating and 1 web connection, which should be seen as not yet mature.
Of course, if their initiating rater is foolish enough to rate a bad person good, then their rating will suffer as soon as the bad person acts like himself (bad) and gets bad ratings, so the whole thing takes care of itself in short order.
SgtSpike
Legendary | Activity: 1400 | Merit: 1005
July 08, 2011, 10:48:28 PM
#12

Quote from: incognegro on July 08, 2011, 10:38:04 PM
Sgt:
Consider that the system is structured in a way to prevent scam/hack attempts on ratings, and that it's geared towards the world in general and anything related to trust, not just the bitcoin WoT.
Given that trust is important in so many aspects of everyone's life, this is a revolutionary system.
The "web" connections I am referring to are connections in the "web of trust": any person (entity/object) must have a direct connection to the complete web by being rated by someone who is already in the web.
This is necessary to prevent "sub webs" from being created, by hackers or even valid groups who are not connected.
"Sub webs" would produce ratings that are not comparable to other "sub webs".
The downside is that this demands a "seed web" be started among original trusted (or not trusted, of course) people, and those people would then be able to invite others into the web by rating them.
People know lots of people, so this certainly is not a problem.
Anyone, good or bad, can join the web. They just need to be rated.
Sure, if their initiating rater gives them a good rating just to get them in, that's ok. They will show 1 rating and 1 web connection, which should be seen as not yet mature.
Of course, if their initiating rater is foolish enough to rate a bad person good, then their rating will suffer as soon as the bad person acts like himself (bad) and gets bad ratings, so the whole thing takes care of itself in short order.
It was cool to see the bitcoin WoT. I said "wow, this really needs my algo".
The bitcoin WoT could simply be part of the big universal WoT my algo would run on.
I wouldn't call it WoT though.
So what you're saying is, if I rate my buddy with a good rating so that he can come use the system, and then he abuses the system, I get a negative rating on my scorecard because of actions he took?  Or, say I purchased a bitbill from someone who is relatively new, so I give him a good rating because I had a good transaction; then if he screws somebody over down the line, I would get screwed over too.

It sounds like this whole system actually discourages people from leaving positive feedback.  Why would I want to leave someone else positive feedback when it could potentially bring my own feedback score down in the future?
trentzb
Sr. Member | Activity: 406 | Merit: 251
July 08, 2011, 10:54:52 PM
#13

Quote from: incognegro on July 06, 2011, 05:56:06 AM
I can't run a distributed database because of the nature of the queries that need to be run. It will need to be on a huge computer / data center.

If member ratings are stored in a central database, this idea will fail. There can be NO single point of failure in such an idea, and a central database, no matter how secure, isolated, obscure, encrypted, inaccessible, etc., is a single point of failure.

Further, it sounds to me that this idea has a significant dependency on human nature/emotion/whatever you want to name it, and on whether a human will do the right or wrong thing. Any time dependence on human decision-making comes into play, your best chance of predicting the outcome of such a decision will be to look at and analyze the human incentives in that decision, although even that is not foolproof.

If you truly believe this idea would "revolutionize mankind" then I am interested to learn what your incentives are to keep it from mankind.
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 08, 2011, 11:14:38 PM (last edit: July 16, 2011, 08:57:26 AM)
#14

Think of it this way: Let's consider this example...
All people in the world are in the web, like 6 billion people.
There is a mathematical link between Hyunmo Chen in Korea and Billybob Smith in Kentucky and their ratings can be 100% directly compared.
Good people through more and more ratings show themselves as being good and they end up in the upper part of the ratings range.
The same for bad people going to the bottom.
What I was trying to explain in the last post was that people need to rate people as they SHOULD be rated or they will face a "pull" on their own rating.
In other words, knowingly (and consistently) rating a bad person good will have some sort of negative effect on your rating for some period of time.
Doing it once would be negligible.
There is no discouraging for any sort of feedback as long as it is accurate.
If any one rating was accurate for its action but was not indicative of the person generally, that's ok too. The system takes it all into account.
Once one understands the dynamics, it basically forces people not only to act correctly but also to rate people correctly, whatever "correct" rating is warranted.
A great example of where people don't rate correctly AT ALL is where men consistently rate ugly women with large breasts a 10. We know there are VERY few real 10's in the world yet guys would make you think there are 100 million. Hope that is not offensive, didn't mean to be. It is a real example.
SgtSpike
Legendary | Activity: 1400 | Merit: 1005
July 08, 2011, 11:22:00 PM
#15

Ok, that makes a whole lot more sense now.

What would stop me from making 100 new profiles, then rating each of them with positive ratings to each other?  I think you mentioned something about subwebs being prevented, but I am interested to hear more about that aspect...

How would you differentiate a malicious subweb from an accidental, but legitimate subweb?  In other words, maybe a bunch of Amish people (I know they wouldn't, but hear me out) use the system, but only rate each other, because they have no real contact with other people?
trentzb
Sr. Member | Activity: 406 | Merit: 251
July 08, 2011, 11:32:44 PM
#16

Ok, so the human incentive would be to preserve my rating (my interests) by accurately rating someone else, and the consequence of an inaccurate rating would be to tarnish my own rating. That makes all the sense in the world. But what/who defines accurate and inaccurate ratings?

Edit: I added you to the pending whitelist so as soon as an admin has a chance you should be out of newbie jail. :)
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 08, 2011, 11:37:48 PM (last edit: July 16, 2011, 08:59:23 AM)
#17

trentzb:
There is no possible way to do hundreds of SQL queries quickly unless the DB is on 1 fast machine or fast system.
To calculate the thing with data all over the place would be impossible.
There are thousands of examples of central DB systems running critical systems and they have architectures for security/redundancy/backup etc.
Your assertion shoots all of them down, including Google, Amazon (not their cloud BS), and the IRS.
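For concreteness, a minimal sketch of the kind of central ratings table being argued about, using SQLite; the schema is an assumption made for illustration, since none is given in the thread.

Code:
# Illustrative only: the sort of single central ratings store under discussion.
# The schema is assumed; nothing like this is actually specified in the thread.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ratings (
    rater_id   TEXT NOT NULL,     -- opaque 128-bit ID, no personal info
    subject_id TEXT NOT NULL,
    value      REAL NOT NULL,     -- e.g. 0..1
    rated_at   INTEGER NOT NULL,  -- unix timestamp
    PRIMARY KEY (rater_id, subject_id))""")
db.execute("INSERT INTO ratings VALUES ('a1', 'b2', 0.9, 1309999999)")
avg, count = db.execute(
    "SELECT AVG(value), COUNT(*) FROM ratings WHERE subject_id = 'b2'").fetchone()
print(avg, count)  # 0.9 1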
Human nature, yes absolutely you are correct.
The bottom line with that is that not everyone is bi-polar or similar all the time; in fact, on average the population is generally stable and generally makes the right observations.
Because abnormal or anomalous behavior is not the norm, this system is able to put everything where "general" is and handle those abnormalities.
I know the system is foolproof as I have described because of its current use in systems that are HIGHLY anomalous but have a general normality to them.
It handles them very well.


SgtSpike
Legendary | Activity: 1400 | Merit: 1005
July 08, 2011, 11:47:43 PM
#18

You still didn't answer my question about what would stop me from registering 100 new names...

Another potentially vicious downside:  What happens if a group of people decide to attack a particular person they don't like?  Not because of that person actually doing something wrong, but because the group doesn't agree with that person?

Take Anonymous for example.  They might decide to go after Joe Shmoe, for whatever reason.  Maybe just to troll.  Maybe because Joe Shmoe made a post on 4chan that no one liked, and then all the anons went and figured out who he was.

Suddenly, Joe Shmoe ends up with -1000 rating, and since the majority of votes agree that Joe Shmoe is a scumbag, then he must be a scumbag, right?  Since so many anons voted negatively against Joe Shmoe, none of their personal ratings are affected, since they voted "with the flow".  Anonymous has accomplished their mission, which was to ruin Joe Shmoe's rating, with no personal side effects.
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 08, 2011, 11:52:50 PM (last edit: July 16, 2011, 09:01:30 AM)
#19

Sgt:
You could make as many IDs as you want, but none of them could rate anything, because none of them have been rated by a web-connected user.
Even if I dropped that requirement and they crafted a few super-high-rated IDs, those would immediately get adjusted as soon as they got connected to the web; a handful of ratings is probably all that would be needed.
After that their whole effort would be toast and any user could see they were "newbies" to begin with.
There is NO manipulation possible once they have robust connections to the web. It's all a done deal then.
I'm sure lulz or anon will try but I'll be giggling.
Important to note here: user security is important! If lulz or anon were to hijack hundreds of high-rated IDs due to lax security on the user side, then it could cause a problem.
It would be much simpler for them to steal bitcoin wallets or hack emails than it would be to locate hundreds of high-rated IDs, so consider that, since IDs have no info attached to them.

Your subweb ideas: this is why I would require a web connection before outgoing ratings could be made. Subwebs are bad. I could, however, allow subwebs, and users would just need to look at the "web connections:" stat to see if they are out in la la land or not. La-la-land ratings do not compare to Web ratings.
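A minimal sketch of that "web connections" idea: an ID counts as connected only if it can be reached from the seed web by following rating edges, so an isolated sub-web of sockpuppets never joins the main web. The graph representation is assumed for illustration.

Code:
# Illustrative only: IDs are part of the main web only if reachable from the
# seed members via rating edges; isolated "sub webs" are therefore easy to spot.
# (Graph representation assumed; the OP's actual method is not disclosed.)
from collections import deque

def connected_to_web(edges, seeds):
    """edges: dict rater_id -> set of rated ids; returns all ids reachable from seeds."""
    seen, queue = set(seeds), deque(seeds)
    while queue:
        node = queue.popleft()
        for nbr in edges.get(node, set()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

edges = {"seed1": {"alice"}, "alice": {"bob"},
         "sock1": {"sock2"}, "sock2": {"sock1"}}   # a sub-web rating only itself
main_web = connected_to_web(edges, {"seed1"})
print("bob" in main_web, "sock1" in main_web)  # True False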
incognegro (OP)
Newbie | Activity: 23 | Merit: 0
July 09, 2011, 12:03:26 AM (last edit: July 16, 2011, 09:03:09 AM)
#20

Sgt:
I'm typing as fast as I can and I get new ones.
Yes, Anon could do that, but you must remember that if Joe Shmoe is not really a bad person then he will still be rated highly by others. Also, Anon would probably not have a high rating; how could they? So their attacks would be coming from low- to mid-rated users as a whole. Anon could not be expected to maintain any sort of high ratings; who would rate them highly? They themselves could, but that doesn't mean anything because they are only a small, small part of the web.
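A minimal sketch of that point: if each incoming rating is weighted by the rater's own standing, a mob of low-rated accounts moves the target's score much less than the plain average would suggest, though a large enough mob still has some effect. The weighting rule is an illustrative assumption; the actual algorithm is never revealed in the thread.

Code:
# Illustrative only: weight each incoming rating by the rater's own score, so a
# mob of low-rated accounts has limited pull against a few well-rated raters.
# (Weighting rule assumed; the OP never reveals his algorithm.)
def weighted_score(ratings, rater_scores):
    """ratings: list of (rater_id, value in 0..1)."""
    num = sum(rater_scores[r] * v for r, v in ratings)
    den = sum(rater_scores[r] for r, _ in ratings)
    return num / den if den else 0.0

rater_scores = {"friend1": 0.9, "friend2": 0.8}
rater_scores.update({f"anon{i}": 0.1 for i in range(20)})  # 20 low-rated attackers
ratings = [("friend1", 1.0), ("friend2", 0.9)] + [(f"anon{i}", 0.0) for i in range(20)]
print(round(sum(v for _, v in ratings) / len(ratings), 2))  # 0.09 plain average
print(round(weighted_score(ratings, rater_scores), 2))      # 0.44 weighted by rater standing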