Trust - the wider context
These are some thoughts partly prompted by Bernard Williams's "Formal Structures and Social Reality" in Gambetta's "Trust" book, but also by my growing feel for the subject and how it interests me. Very random thoughts, and in little order.
- Trust is not equivalent to cooperation: quite apart from anything else, I may be trusting in cooperation of which the cooperating party is unaware, or trusting in the enforcement of rules rather than in the parties themselves. For instance, do I need to trust the individual judges in order to trust the judiciary?
- it may be enough to trust someone to do something, or to refrain from doing it.
- or it may be enough to trust an agent of theirs, or of the system, to do (or not do) something.
- Identity is very important in a trust-based system (or a system with a trust model or set of trust models), but will identity always be fully checkable?
- do we always want fully checkable identity? Clearly not: think about Slashdot's Anonymous Coward concept, which seems to work quite well much of the time.
- is a probabilistic measure of identity acceptable in certain circumstances? What would those circumstances be, and what would such a measure entail? What benefits and problems would arise? (A toy sketch of one such measure follows these identity bullets.)
- how about partial revealing of identity? It might often be useful, for instance, to identify oneself as a member of a group, but no further: for anonymous moderation, blackballing, etc. (A minimal sketch of this also follows below.)
- how often are you trusting a cryptographic key/certificate? How often is this actually tempered by your understanding of the person behind the certificate? See people and machines below.
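As a concrete (if naive) illustration of the probabilistic-identity question above, here is a toy sketch that combines several identity signals, each with an estimated false-match rate, into a single confidence value. The signal names and rates are entirely hypothetical, and the independence assumption is the obvious weak point: writing style and response timing, say, are likely to be correlated.

```python
# Toy sketch: a probabilistic identity "score" built from independent
# signals. Each signal carries an estimated probability that it would
# match even if the claimed identity were false (its false-match rate).
# All names and numbers are hypothetical, not a real identity system.

def identity_confidence(signals):
    """Assuming the signals are independent, the chance that *every*
    one of them is a false match is the product of their rates; the
    confidence in the identity is the complement of that."""
    p_all_false = 1.0
    for false_match_rate in signals.values():
        p_all_false *= false_match_rate
    return 1.0 - p_all_false

# A key signature is strong evidence; writing style is weak evidence.
signals = {
    "key_signature": 0.001,    # hypothetical false-match rates
    "writing_style": 0.3,
    "response_timing": 0.5,
}
print(identity_confidence(signals))   # 0.99985
```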
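For the partial-identity point, here is a deliberately simple sketch in which every member of a group holds the same shared secret and proves membership, but nothing more, by answering a challenge: a verifier learns only that "some member signed this". Real schemes (ring or group signatures, for example) fix the obvious weaknesses here, namely that any member can speak for the whole group and that one leaked key burns everyone.

```python
# Minimal group-membership proof via a shared secret and an HMAC.
# Suitable only as an illustration of "identify as part of a group,
# but no further"; the key handling is deliberately naive.

import hashlib
import hmac
import os

GROUP_KEY = os.urandom(32)    # issued to every member of the group

def prove_membership(challenge: bytes) -> bytes:
    # Any holder of GROUP_KEY can produce this proof; it identifies
    # the group, not the individual - anonymous blackballing, say.
    return hmac.new(GROUP_KEY, challenge, hashlib.sha256).digest()

def verify_membership(challenge: bytes, proof: bytes) -> bool:
    expected = hmac.new(GROUP_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

challenge = os.urandom(16)
print(verify_membership(challenge, prove_membership(challenge)))  # True
```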
- Such systems will often have imperfect information: unlike perfect markets, and, for the same reason, unlike Coleman's optimistic model of social capital. One dimension of imperfect information is identity; another may well be reputation (you can't always check someone's reputation fully, or they're new to the system, or you're interacting with them in a new context). (See the sketch below for one way of representing that uncertainty.)
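One way of making that concrete is to represent reputation not as a bare number but together with the amount of evidence behind it, along the lines of the "beta reputation" idea from the literature: count positive and negative interactions, and let a newcomer start at an uninformative 0.5. The class and method names below are my own illustration.

```python
# Sketch: reputation carried together with its supporting evidence,
# so "they're new to the system" is visible rather than hidden.

from dataclasses import dataclass

@dataclass
class Reputation:
    positive: int = 0
    negative: int = 0

    def expected(self) -> float:
        # Laplace-smoothed estimate: a newcomer (0, 0) sits at 0.5.
        return (self.positive + 1) / (self.positive + self.negative + 2)

    def evidence(self) -> int:
        # How many observations back the estimate - a cheap proxy for
        # "you can't always check someone's reputation fully".
        return self.positive + self.negative

newcomer = Reputation()
veteran = Reputation(positive=95, negative=5)
print(newcomer.expected(), newcomer.evidence())   # 0.5 0
print(veteran.expected(), veteran.evidence())     # ~0.94 100
```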
- Reputation is an area where quite a lot of work has been done, and which I need to read up on. However, it's a difficult issue, and not the panacea it's sometimes made out to be (in my opinion). Some of the questions which raise their ugly heads:
- how do you trust the reputation servers? Not just in terms of identity - this is one of the easier questions - but to give you the right information, or to be secure (tamperproof...), truthful, contextual, etc.
- Reengagement - many theorists seem to assume static probabilities of reengagement - often very high (mandated) or very low (as in a large population) - but what about fluid populations? The likelihood of reengagement has a significant impact on how game theory applies to the system (see the sketch after this list), and how well you can identify participants is another question altogether.
- Enforcement of punishment (or reward), or statefulness, is very important in many systems, but how is it governed? The word "governed" is used with care: the idea of government suggests a hierarchy, and how does that sit in a p2p system? The answer is probably that it sits fine with some, but very badly with others.
- Even if a punishment is agreed, how is it applied, and to whom? In some teenage boot camps (or army units), for instance, punishment is applied to related members even if they were not aware of the crime's being plotted: "just because you didn't know x would defect, it doesn't mean that you shouldn't be punished."
- Is widespread punishment a way to try to avoid the tragedy of the commons?
- Can "naming and shaming" be enough (following ebay)? What about issues of identity?
- Reward may be as important as punishment, but what reward structures are sensible? Reputation, as in Advogato? "Karma", as in Slashdot? Or something else?
- how do you "earn your spurs" (gain credit as a newcomer)? Introduction of new members may be difficult, particularly if they want to have input early in their membership.
- what about getting rewards for helping out? This was a common feature of MOOs and MUDs, but I'm not sure whether it's still used. The question arises of who decides on your reward: your peers, wizards, or some subset of the community (and what's to stop people voting each other up?). Note - Howard Rheingold had some good anecdotal studies of early MUDs.
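To make the reengagement point concrete, here is a sketch using the textbook iterated prisoner's dilemma, where after each round the same pair meets again with probability w. The payoff values are the standard T > R > P > S ones from the game-theory literature; nothing here is specific to any real system.

```python
# Expected payoffs in an iterated prisoner's dilemma where play
# continues after each round with probability w (so the expected
# number of rounds is 1 / (1 - w)).

T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker

def cooperate_payoff(w: float) -> float:
    # Two tit-for-tat players cooperate every round.
    return R / (1 - w)

def defect_payoff(w: float) -> float:
    # Always-defect exploits tit-for-tat once (T), then both sides
    # punish each other (P) for the rest of the expected rounds.
    return T + w * P / (1 - w)

for w in (0.1, 0.3, 0.5, 0.7, 0.9):
    holds = cooperate_payoff(w) >= defect_payoff(w)
    print(f"w={w}: cooperate={cooperate_payoff(w):.2f} "
          f"defect={defect_payoff(w):.2f} cooperation holds: {holds}")

# With these payoffs cooperation holds once w >= (T - R) / (T - P) = 0.5:
# in a fluid population where you rarely meet the same party again,
# defection simply pays better.
```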
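And one possible answer, with entirely made-up numbers, to both "earning your spurs" and "what's to stop people voting each other up": let newcomers contribute immediately, but make vote weight grow sublinearly with earned karma, so that a veteran's vouch counts for a lot while mutual vote-trading compounds only slowly.

```python
# Sketch of a karma scheme in which vote weight is the square root
# of karma: newcomers' votes count, veterans' count more, and a pair
# of accounts upvoting each other gains karma far more slowly than
# under linear weighting. All names and numbers are illustrative.

import math

class Member:
    def __init__(self, name: str, karma: float = 1.0):
        self.name = name
        self.karma = karma     # small starting grant for newcomers

    def vote_weight(self) -> float:
        return math.sqrt(self.karma)

def upvote(voter: Member, author: Member) -> None:
    # Karma conferred is bounded by the voter's current weight.
    author.karma += voter.vote_weight()

alice = Member("alice", karma=100.0)
newbie = Member("newbie")
upvote(alice, newbie)   # a veteran's vouch: newbie.karma = 1 + 10
upvote(newbie, alice)   # the return vote is worth only ~3.3
print(newbie.karma, alice.karma)   # 11.0, ~103.3
```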
- People and machines are different, and we react to and interact with them in different ways (until we have Turing Test-passing machines, of course). What are some of the ways in which we tell the difference?
- text elements are a major way of identifying people: smileys, subjects, grammar, humour and digressions in conversation
- promptness of response also tempers how we react to different people, but response time may be affected by a number of different factors, including:
- the amount of thought being given to a reply (a long time may in fact signal high involvement with an issue, rather than low involvement)
- commitments and availability
- engagement in a topic
Mike Bursell email@example.com