Reputation rank

Take three on finding a fundamental purpose for my life.

The original post is here: https://erikbenson.typepad.com/mu/2003/0...

Google has PageRank, and when I first started thinking about this, it seemed like they might have reputation rank too.

For example, consider a reputation system where you could vote for the trustworthiness of other people. Your vote would have a weight that was equivalent to the sum of the votes from other people towards you. In other words, your feedback is weighted by the feedback that people leave for you. But the feedback that people leave for you is weighted by the feedback that each of those people got from everyone else. The reputation system is recursive, and pretty much has to know the reputation rank of everyone before it can calculate the reputation rank of any particular person. How would you get out of an infinite loop like that? Most likely, you’d have to snapshot reputation ranks occasionally and use old reputation ranks to help calculate new reputation ranks. Not perfect, but still pretty workable. And that’s pretty much what Google does with PageRank, except it’s ranking pages instead of people.
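A minimal sketch of that snapshot trick, in Python (the people and votes here are all made up): each round weights everyone’s votes by the previous round’s reputation snapshot, the same way PageRank iterates on the previous round’s page scores.

```python
# Hypothetical trust-vote graph: votes[a] is the set of people a vouches for.
votes = {
    "alice": {"bob", "carol"},
    "bob":   {"carol"},
    "carol": {"alice", "bob"},
}

people = sorted(votes)
rank = {p: 1.0 / len(people) for p in people}  # starting snapshot: everyone equal

for _ in range(20):                 # repeat until the ranks settle down
    snapshot = rank                 # freeze last round's ranks
    rank = {p: 0.0 for p in people}
    for voter, targets in votes.items():
        weight = snapshot[voter] / len(targets)  # a vote's weight comes from the voter's own rank
        for target in targets:
            rank[target] += weight

# (Real PageRank also adds a damping factor to handle people who cast no votes.)
print(rank)  # carol settles highest: she's vouched for by everyone else
```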

But then I began to wonder: should people have a single reputation rank that applies to the entire universe, or should reputation rank be a function of the relationship between you and the person you’re interacting with? As a trite example, I trust George W. Bush a lot less than a lot of people. And it’s possible that I trust him more than a lot of people too. The practical reputation rank for George W. Bush should not be the same for all people… otherwise I would not trust it enough to help me make decisions about who to trust. Even an average reputation rank would not work… I want my own. I want to know how George W. Bush’s opinions/values/thoughts/etc line up with my own, not how they line up with the general country, or world. This significantly complicates the calculation of reputation rank for a large group.

As always, thinking about it in terms of software helps come to some conclusions. The calculation can’t be impossible if our brains are capable of doing it without even thinking. A proof of concept script is probably in order here, one that encapsulates all the complexity of a real reputation system without trying to make it scale to thousands or billions of people and without trying to make it account for all things that reputation currently does.

Here’s what I would like to build. If, after I describe it, it seems compelling enough and has enough unanswered questions that only a prototype could answer, maybe I’ll try to build part of it later today. If all the value seems to come out of the functional spec, so to speak, then I’ll leave it at that.

Of course, there would have to be an account manager so that each user of the prototype could have their own space within which to fight for and accumulate reputation.

There would have to be a way of making statements. “I have two cats,” would be a statement. “There is a God,” would be another. Metadata for these statements would include the person making the statement, the time it was made, and maybe even a unique ID so that all statements could be referred to unambiguously. Ideally, these statements would be structured in such a way that they could be stored with semantic meaning. Maybe “I believe there is a God,” would work better for that. Statements could be broken up into subject/verb/object triplets and stored as RDF, then you could surf similar statements, and compare the frequency of a statement across the users… but I get ahead of myself. (Even better, statements could be translated into Lojban.)
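To make that concrete, here’s a sketch of one possible shape for a statement, assuming subject/verb/object triples plus the metadata above (the field names are my own invention, not any real RDF library’s):

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Statement:
    # The subject/verb/object triple carries the semantic meaning,
    # RDF-style: ("erik", "has", "two cats").
    subject: str
    verb: str
    obj: str
    author: str                                   # the person making the statement
    made_at: float = field(default_factory=time.time)
    id: str = field(default_factory=lambda: uuid.uuid4().hex)  # unambiguous reference

belief = Statement("erik", "believes", "there is a God", author="erik")
```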

Statements could be voted on by other users. Actually, votes should turn into new statements: “Erik’s statement (I have two cats) is correct.” That would mean that you agree that Erik has two cats. And you may say, “Erik’s statement (I believe there is a God) is unverified.” It may or may not be true, but the fact that Erik says it’s true may reflect on his ability to only say correct statements (in your eyes). Other people may then make statements/votes about my original statements, or about the votes of other people on my statements.
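Continuing the sketch above, votes fall out of the same structure for free: a vote is just a statement whose subject is another statement’s ID, which also makes votes-on-votes work with no extra machinery.

```python
cats = Statement("erik", "has", "two cats", author="erik")

# A vote is a Statement about a Statement: its subject is the target's ID.
vote = Statement(cats.id, "is", "correct", author="mary")

# ...and since a vote is itself a statement, it can be voted on too.
meta_vote = Statement(vote.id, "is", "unverified", author="joe")
```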

Now we have a web of interrelated statements and people. How would you determine the reputation of a particular statement or person in relation to yourself?

It would be easy to determine this if you had actually voted on the reputation of the statement or person in question. But there are going to be tons of statements and people who you have not bothered to vote on explicitly. That’s the difference between having a bookmark for “erik benson’s weblog” and searching Google for “erik benson’s weblog”. A good reputation system would similarly give you the same answer “Erik has two cats” even if you had not explicitly voted on it yet.

Okay, so you’re looking at the statement “Man Versus Himself is a book worth reading.” This is a statement that is definitely going to require a reputation rank (and perhaps you’ve already given it one, using your brain’s from-the-manufacturer installation). If you have already come to a conclusion, you could make a statement about it, “Erik’s statement (Man Versus Himself is a book worth reading) is a malicious lie.” But, if you don’t yet know whether or not it’s true, this is when the reputation system could show its strength.

First, you would gather together all of the votes/statements that have been made about that statement. Then, for each of those votes, you would look at the person who made it and calculate their reputation rank in relation to you. Their reputation rank would be determined by comparing their votes with your own: identical votes improve the reputation score between the two of you, contradicting votes decrease it. The resulting reputation rank tells you how much you agree with this person about statements of any kind, expressed as a percentage chance that you will agree with things they say in the future. The result would be a collection of probabilities, one per person, that you will agree with each person’s next statement. Their next statement, in this case, happens to be their vote on whether “Man Versus Himself is a book worth reading.” Those who you have a large chance of agreeing with will have more sway on the reputation rank of that statement than those whom you have closer to a 50% chance of agreeing with. And if you have only a 10% chance of agreeing with someone, their saying the book is worth reading would push the reputation rank of that statement in the opposite direction.
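Here is one guess at that calculation, continuing the sketch (I’m flattening each person’s vote-statements into a plain {statement_id: verdict} dict for clarity, and treating a sub-50% agreement rate as a negative weight is just the simplest formula with the right shape, not the real math):

```python
def agreement_probability(my_votes, their_votes):
    """Chance I agree with this person's next statement, estimated
    from the statements we have both voted on."""
    shared = set(my_votes) & set(their_votes)
    if not shared:
        return 0.5  # no shared history: a coin flip
    agreed = sum(1 for s in shared if my_votes[s] == their_votes[s])
    return agreed / len(shared)

def statement_rank(statement_id, my_votes, everyone_votes):
    """Personalized reputation rank of one statement, in [-1, 1]."""
    score = total_weight = 0.0
    for person, their_votes in everyone_votes.items():
        if statement_id not in their_votes:
            continue  # this person hasn't voted on the statement
        p = agreement_probability(my_votes, their_votes)
        weight = abs(p - 0.5) * 2      # 0 at coin-flip, 1 at total (dis)agreement
        verdict = 1.0 if their_votes[statement_id] == "correct" else -1.0
        if p < 0.5:
            verdict = -verdict         # a reliable contrarian is still a signal
        score += weight * verdict
        total_weight += weight
    return score / total_weight if total_weight else 0.0

my_votes = {"s1": "correct", "s2": "incorrect"}
everyone = {
    "mary": {"s1": "correct", "s2": "incorrect", "book": "correct"},  # always agrees with me
    "joe":  {"s1": "correct", "s2": "correct", "book": "incorrect"},  # a coin flip, weight 0
}
print(statement_rank("book", my_votes, everyone))  # 1.0: only mary's vote counts
```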

Finally, after doing some math (which I can’t exactly figure out right now… maybe Jeff could help out), you have a reputation rank for that statement. That’s a lot of work to get a reputation rank for a single statement… most likely if this were done in a prototype, it would have to do a lot of these calculations offline in a build so that you didn’t have to sit there and wait forever to get an answer. Then again, if you had 55,000 servers helping to compute the answers for you on the fly, I think a run-time answer would be worth it.

Am I overlooking anything in this very informal functional spec? Will it create reputation of the same quality that our brains do? That’s what I’m really after. Sure, there are probably ways to take shortcuts here and get a good approximation (so that the end result is actually buildable without a huge investment) but right now I’d rather limit the scope of the prototype (by creating a system that would have scaling troubles) than the functionality/quality of the prototype.

One thing I just realized I overlooked is reputation rank scope. I might actually have several different reputation ranks for any given person… one for how trustworthy they are on the subject of politics, one for how trustworthy they are on the subject of plant care, one for how trustworthy they are on the subject of writing Perl code, etc. To support this, we’d need to know the scope that each statement is made in (perhaps another piece of metadata), and when calculating reputation rank we would only consider votes made within that scope. Or maybe we should just give votes within that scope higher weights, in case there aren’t enough of them to get an accurate score… and backfill with each person’s general reputation from all scopes.
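That weighting-plus-backfill idea might look something like this (the prior strength k is an arbitrary knob I made up; nothing here is the “right” formula):

```python
def scoped_rank(in_scope_verdicts, general_rank, k=5.0):
    """Blend a person's in-scope reputation with their general reputation,
    leaning on the general score when in-scope votes are scarce.
    in_scope_verdicts: agreement-weighted verdicts in [-1, 1] within the scope.
    k: made-up prior strength; with exactly k in-scope votes, it's a 50/50 blend."""
    if not in_scope_verdicts:
        return general_rank               # no votes in this scope: pure backfill
    in_scope = sum(in_scope_verdicts) / len(in_scope_verdicts)
    w = len(in_scope_verdicts) / (len(in_scope_verdicts) + k)  # confidence grows with votes
    return w * in_scope + (1 - w) * general_rank

# Two in-scope votes aren't much evidence, so the general rank still dominates:
print(scoped_rank([1.0, 1.0], general_rank=-0.2))  # ≈ 0.14
```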
