> Still, it's hard to turn something as simple-sounding as "honors their contracts" into a number. Like if I see 6/7, what am I to think? Do I really think that one 11/11 is the same as, or even likely to be similar to, another 11/11 I did business with?
It is a simple yes-no question, and I do not see much justification for intermediate numbers.
So here is my proposal: a global black list of persons who have cheated. The only valid record on this list contains
1.) the signature of the blacklisted person confirming that he accepts a given arbiter,
2.) the signature of the arbiter confirming that the person has not fulfilled a contract and has not accepted the arbiter's decision about it,
3.) additional information about the person as well as the contract.
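The record structure above can be sketched as follows. This is a minimal sketch: the field names and the pluggable signature-verification callbacks are my assumptions, not a fixed format or a concrete signature scheme.

```python
from dataclasses import dataclass

@dataclass
class BlacklistRecord:
    person_id: str       # identifier of the blacklisted person
    arbiter_id: str      # the arbiter both parties agreed on
    acceptance_sig: str  # 1.) person's signature accepting this arbiter
    verdict_sig: str     # 2.) arbiter's signature: contract unfulfilled,
                         #     decision not accepted
    details: str         # 3.) information about the person and contract

def is_valid(rec: BlacklistRecord, verify_person_sig, verify_arbiter_sig) -> bool:
    """A record is valid only if BOTH required signatures check out:
    the person accepted the arbiter, and the arbiter issued the verdict."""
    return (verify_person_sig(rec.person_id, rec.arbiter_id, rec.acceptance_sig)
            and verify_arbiter_sig(rec.arbiter_id, rec.person_id, rec.verdict_sig))
```

In a real deployment the two callbacks would be public-key signature checks; here they are left abstract so the validity rule itself stays visible.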
So, to appear on the black list you have to be a total scumbag.
> It makes me think of the doctor who has a great record because he never takes risky cases.
The point is that you can take risky cases, and, once you fail, the arbiter makes a decision about compensation. Then all you have to do is accept it and do what the arbiter has told you. The penalties themselves are established in the contract.
The idea behind this is to exclude only those who refuse any cooperation. Everything else is (and can be) left to freedom of contract and the free choice of arbiters.
This black list is negative reputation. Clearly, there is also a need for something different, a positive reputation. For this purpose, I would propose a system of guarantees built on top of the black list:
So, you offer compensation to the victim of A if A appears on the black list for violating a contract with that victim.
This information suggests a simple metric: you can easily add up the compensation you would obtain if you were cheated.
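As a toy illustration of this metric (all names and amounts invented): sum the guarantees pledged to A's potential victims.

```python
# Hypothetical guarantors and pledged amounts for trading partner A.
guarantees_for_A = {
    "B": 100,
    "C": 250,
    "D": 50,
}

def expected_compensation(guarantees: dict) -> int:
    """Total compensation you would receive if A cheated you."""
    return sum(guarantees.values())

print(expected_compensation(guarantees_for_A))  # 400
```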
Not that easy, to be honest, because one should take into account the possibility of clones of A giving guarantees. One way to solve this problem would be the possibility to create persons connected with real-life data - passport data, biometric data and so on. These data do not necessarily have to be public; it is sufficient to have them available to the arbiter, who makes them public in the case of a black list entry. Until then, he only confirms that he holds a given set of personal data, and that there is not yet a record with the personal data of that person on the black list.
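A sketch of the arbiter's confirmation for such an account. The hash-based commitment is my assumption (it lets the arbiter prove he holds the same data later, when publishing a black list entry, without revealing it now); the proposal itself does not prescribe a mechanism.

```python
import hashlib

def confirm_account(personal_data: str, blacklist_commitments: set) -> dict:
    """The arbiter commits to the personal data he holds and attests
    that no black list record with that data exists yet."""
    commitment = hashlib.sha256(personal_data.encode()).hexdigest()
    return {
        "holds_data": commitment,                    # proof of possession, not the data
        "not_yet_blacklisted": commitment not in blacklist_commitments,
    }
```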
Then the simple case is that you have an expectation of the worth of an open personal account (even a scumbag loses a lot if he appears on the black list with his personal data), so that you can compute: either I receive that much money as compensation, or even more money's worth of reputation will be destroyed. The more complicated case is if the guarantee is given by a pseudonym - but this pseudonym is itself supported by other personal accounts.
Another problem is how to evaluate hidden personal accounts, where the personal information is known only to the arbiter. Here, many pseudonyms of the same person may be giving guarantees. This problem can be solved pseudonymously by a trusted notary who assigns ordering information to pseudonyms: an open personal account can give a signed, timestamped confirmation to this notary that it owns a pseudonym. The notary then returns a confirmation that this is the n-th pseudonym of a real person. So, if pseudonyms supported by such numbers give guarantees, one can obtain a lower bound on the number of real persons involved.
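The lower bound falls out of a simple counting argument. Assuming each guarantor pseudonym is reduced to its notary-confirmed index n: a single real person cannot hold two pseudonyms with the same index, so the number of distinct real persons is at least the largest multiplicity of any index.

```python
from collections import Counter

def min_real_persons(pseudonym_indices) -> int:
    """Lower bound on the number of real persons behind a set of
    guarantor pseudonyms, each carrying the notary's confirmation
    'this is the n-th pseudonym of some real person'."""
    counts = Counter(pseudonym_indices)
    return max(counts.values(), default=0)

# Three pseudonyms claim to be somebody's first pseudonym, so at
# least three distinct real persons must be involved.
print(min_real_persons([1, 1, 2, 3, 1]))  # 3
```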
The remaining problem is trust in the arbiters. It has to be solved by iteration (arbiters sign open rules of behaviour and accept second-order arbiters) and by personal trust in at least one arbiter as a starting point.
With this system, you can prove something: whatever happens, either you receive a compensation, or at least one real person ends up on the black list with personal data, or a person on your personal list of first-order trusted persons did not deserve that trust. A scumbag on the black list improves the reliability of the network as a whole. So either everything is fine, or the reliability of the trust information of the whole network increases, or at least your personal trust choices have been improved. That means you and the whole network learn from bad experiences.
Note that the failure to pay electronically is an objective question - the accused can prove that he has paid - so justice is easy for compensation payments, and the failure of arbiters to punish nonpayment with black list entries is easily detectable too. The notary who counts pseudonyms is also simple to implement as an automatic responder, and cheating can be established easily after the fact. So there is no problem of complex evaluations.
You have given a guarantee to A, and A is on the black list: pay, or else you, your arbiter, his arbiter and so on end up on the black list.
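This enforcement cascade can be sketched as a walk up the chain of responsibility (names and the `fulfills` predicate are hypothetical): the guarantor must pay; if he defaults, his arbiter must blacklist him; if that arbiter defaults, the second-order arbiter must blacklist the arbiter, and so on. Everyone below the first party that does its duty ends up on the black list.

```python
def enforcement_outcome(chain, fulfills):
    """chain[0] is the guarantor; chain[i] for i > 0 is the arbiter
    responsible for blacklisting chain[i-1] if it defaults.
    Returns the list of parties that end up on the black list; the
    cascade stops at the first party that fulfills its duty."""
    blacklisted = []
    for party in chain:
        if fulfills(party):
            break
        blacklisted.append(party)
    return blacklisted

# Guarantor G refuses to pay, arbiter A1 refuses to blacklist him,
# second-order arbiter A2 does its job: G and A1 are blacklisted.
print(enforcement_outcome(["G", "A1", "A2"], lambda p: p == "A2"))  # ['G', 'A1']
```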
How do guarantees appear? If you have had a successful cooperation in which you made some profit, it is reasonable to exchange guarantees with a value which is some part of that profit. Then, even if you have to pay, you have not made a loss - only a smaller profit.
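A back-of-envelope check of that claim, with invented numbers: as long as the exchanged guarantee is a proper fraction of the profit, paying it out still leaves you ahead.

```python
profit = 1000                # profit from the successful cooperation
guarantee = 0.2 * profit     # pledge a fraction of the profit as a guarantee
worst_case = profit - guarantee

print(worst_case)  # 800.0 - a smaller profit, but never a loss
```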
> I don't mean to imply that work should not be done on this, just that a complete solution (even for something as simple as "does this person honor contracts") is probably going to evade us forever, and tools for managing the information we can get about potential trading partners are a good place to start.
I'm not that pessimistic. Of course, there remain weak points, but the whole idea of basing trust on past behaviour has similar weak points.
What is important for a reputation system is that there are no scam patterns which can be repeated easily, so that professional cheating becomes impossible.