The more I think about it, the more convinced I am that network theory, as applied by sociologists, is a good way of dealing with some of the issues around trust in p2p networks. I need to read more about network flows - this is how the Advogato trust metric works, and this may well be useful, too.
My hope is that using network theory to map trust relationships may allow, among other things, calculation of the "background trust" level in a system. This may be useful when you're trying to work out how long to spend looking for someone to trust with some resources: if, for instance, you don't want to trust anyone whom you (via some as-yet-unexplained method, using some as-yet-to-be-defined criteria) trust at less than, say, 80% in this context, but the background level of trust in the system is only 40%, then you're going to spend a long time looking (and there may well be no one who fits the bill), so you may wish to give up, or to change your trust criteria. It may also be useful to be able to calculate your own (or even other people's) background trust level (from you, certainly, and maybe to you, as well). I'm keen to examine how models such as trust decay over time (if unused) and reinforcement over time (if used) will affect systems; without some measures such as these, it seems unlikely that reasonable results will be achievable.
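As a sketch of what such measures might look like (the function names, the simple averaging rule, and the decay rate are all my own assumptions, not an established metric):

```python
def background_trust(edges):
    """Mean trust level over all directed trust edges in the system.
    `edges` maps (truster, trustee) pairs to a level in [0, 1].
    Averaging is only one possible definition of 'background trust'."""
    if not edges:
        return 0.0
    return sum(edges.values()) / len(edges)

def worth_searching(required, edges, margin=0.2):
    """Crude heuristic: if the trust level you require is far above
    the background level, a long (possibly fruitless) search is likely.
    The margin is arbitrary."""
    return required <= background_trust(edges) + margin

def decayed(trust, idle_periods, rate=0.9):
    """Unused trust decays geometrically per idle period; use of the
    relationship would instead reinforce it (not modelled here)."""
    return trust * rate ** idle_periods

# A toy system whose background level is 0.4: demanding an
# 80%-trusted partner here is probably futile.
edges = {("a", "b"): 0.3, ("a", "c"): 0.5, ("b", "c"): 0.4}
```

The interesting questions, of course, are which averaging and decay rules actually reflect behaviour in real systems; the code above only fixes the vocabulary.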
Network theory doesn't end there - there are likely to be other uses. One of the most promising, on initial musings, will be discovery of cliques. These groups of people with high levels of mutual trust should appear as "islands of trust" in the network, and are likely to be important to any system. They may, indeed, act as anchors, and there may be opportunities to anchor trust to the group, rather than to specific members of it: this is one of the most interesting areas in p2p trust that I've currently identified. Of course, if there are islands of trust, we have to be aware that many trust relationships will be asymmetrical - just because you trust entity A (who may be a major force in the system) doesn't mean that she should trust you back. This echoes issues of asymmetric "esteem" relationships in the core sociological literature.
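A brute-force sketch of what finding such "islands of trust" might involve, on the assumption (mine, not established) that an island is a group of three or more entities who all trust each other above some threshold in both directions:

```python
from itertools import combinations

def mutual_cliques(trust, threshold=0.7):
    """Find groups (size >= 3) in which every pair trusts each other
    above `threshold` in both directions. `trust` maps (truster,
    trustee) to a level in [0, 1]. Brute force is fine for toy graphs;
    a real system would want a proper maximal-clique algorithm."""
    nodes = sorted({n for pair in trust for n in pair})
    islands = []
    for size in range(len(nodes), 2, -1):       # largest groups first
        for group in combinations(nodes, size):
            ok = all(trust.get((a, b), 0) >= threshold and
                     trust.get((b, a), 0) >= threshold
                     for a, b in combinations(group, 2))
            # keep only maximal islands (not subsets of one found already)
            if ok and not any(set(group) <= set(big) for big in islands):
                islands.append(group)
    return islands

# Three mutually trusting peers, plus "d", who trusts them but is not
# trusted back -- the asymmetry noted above keeps d off the island.
trust = {("a","b"):0.9, ("b","a"):0.8, ("a","c"):0.9, ("c","a"):0.9,
         ("b","c"):0.8, ("c","b"):0.9, ("d","a"):0.9, ("a","d"):0.1}
```

The asymmetry point falls out naturally here: membership requires trust in both directions, so trusting a major force in the system does not put you on its island.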
I also wonder whether spatial relationship mapping has a place within this field of study. This, however, is not an area where I have much knowledge at all, and needs more research by me into possible applicability.
The concept of structurally equivalent position is one which I came across in the sociological attitude literature - again, within network theory. If, for instance, I "trust" entities B, C, D and E, and you (Y) trust entities C, D, E and F, and your levels of trust for the overlapping group of C, D and E are similar to mine, what does that tell me, first about you, and second about F? From a network analytical position, we could be said to hold structurally equivalent positions, at least in respect to our relationships with C, D and E. It may be that you and I have similar criteria for trusting entities within this context (though a correlation across other shared contexts might be required for more confidence), and that I should therefore trust F. Maybe I should also trust you (Y)?
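The comparison in this example can be made concrete; the similarity measure below (one minus the mean absolute difference over entities we have both rated) is just one of many possible choices, and the discounting of F's trust by our similarity is my own assumption:

```python
def similarity(mine, yours):
    """Structural-equivalence score over the entities we have both
    rated; 1.0 means identical levels, 0.0 means no overlap at all."""
    shared = mine.keys() & yours.keys()
    if not shared:
        return 0.0
    return 1 - sum(abs(mine[e] - yours[e]) for e in shared) / len(shared)

# The example from the text: I rate B, C, D, E; you (Y) rate C, D, E, F,
# and our levels for the overlap C, D, E are similar.
me  = {"B": 0.8, "C": 0.7, "D": 0.9, "E": 0.6}
you = {"C": 0.7, "D": 0.8, "E": 0.6, "F": 0.9}

sim = similarity(me, you)      # high, since we only differ on D by 0.1
inferred_F = sim * you["F"]    # a tentative, discounted trust in F
```

Note that this says nothing yet about whether I should trust *you*, which, as the next section argues, is a different question entirely.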
There are, however, some major issues with both of these suggestions. To take the second suggestion first: maybe I should trust you. In fact, I should almost certainly not trust you in this context - although I might consider trusting you as a recommender for this context. To explain in more detail, let us assume that the context of trust is repairing scuba gear. I trust a number of people to repair my scuba gear - Beatrice's dive store, Charles' scuba, Deepa's kitshop and Extropian submarine. You don't trust Beatrice (we'll return to this statement later), but you do trust Felafel tanks. So, we have similar trust structures - should I trust you? The answer to this is: trust you to do what? If you are Jim, should I trust you in this context - to repair scuba gear? Absolutely not: it seems that you have similar capabilities and resources to me in terms of scuba repair (if these are the bases for your criteria of trust, which they may well be), so you would be a very bad person to trust with the repair of my scuba gear, but I might trust you to recommend others. The same is likely to be the case in other contexts. If I trust entities to run particular services on my behalf, and trust their results, then I may follow the recommendations of those who also trust those entities, but am unlikely to trust the recommenders with the task themselves.
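One way to keep this distinction straight is to key trust not just by entity and context, but also by role; the keying scheme and names below are illustrative assumptions, not a worked-out design:

```python
# Trust keyed by (entity, context, role): trusting Jim as a recommender
# for scuba repair says nothing about trusting him to do the repair.
trust = {
    ("Jim", "scuba_repair", "perform"):   0.1,  # similar kit to mine
    ("Jim", "scuba_repair", "recommend"): 0.8,  # but his picks are good
}

def trusted_for(entity, context, role, threshold=0.5):
    """Unknown (entity, context, role) triples default to no trust."""
    return trust.get((entity, context, role), 0.0) >= threshold
```

Under this scheme, structural equivalence in the "perform" role would feed trust in the "recommend" role, never in the "perform" role itself.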
Earlier, we set aside a question: we stated that "you don't trust Beatrice". What, however, does this mean? Do you distrust Beatrice, or are you neutral on the subject? You may not have come across Beatrice before, and so have no view on the subject - your trust level is "null", we could say. Or you may have come across her before, and actively distrust her - could you have registered a negative trust level somehow? Or you may, in fact, be abstaining - your trust level is probably still "null". In all of these cases, I have to think carefully about whether to trust your recommendations. If you are abstaining, should this signify that you have some reservations about Beatrice, and are withholding judgement, or that you just don't have enough information to form an opinion at this stage? How am I to tell the difference between a "null" due to abstention and one due to lack of previous knowledge without a complete history of your interactions? All of these questions may have an impact not only on my level of trust of you as a recommender, but also conceivably on my views of Beatrice.
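The distinctions here are easy to lose if a system records only a single "null" value. A minimal sketch of keeping them apart, with weights that are purely illustrative (nothing in the literature fixes them):

```python
from enum import Enum

class Opinion(Enum):
    """The three ways 'you don't trust Beatrice' can arise; a flat
    'null' collapses the first two together."""
    UNKNOWN  = "never encountered"
    ABSTAIN  = "encountered, withholding judgement"
    DISTRUST = "encountered, negative experience"

def weigh_recommendation(opinion):
    """How much a recommender's silence or negativity should count
    against the subject; the numbers are arbitrary assumptions."""
    return {Opinion.UNKNOWN:   0.0,   # no information either way
            Opinion.ABSTAIN:  -0.2,   # a mild warning sign
            Opinion.DISTRUST: -1.0}[opinion]
```

Even this small refinement exposes the problem raised above: without a history of your interactions, I cannot tell whether to record UNKNOWN or ABSTAIN on your behalf.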
Last, but not least, there is the question of whether I should trust F based on your recommendation. As mentioned above, it may make sense to look for a correlation across a wider set of contexts before making such a decision, but even then there are issues which I need to examine before coming to a decision. The first of these is to decide whether you might be conspiring with F ("Felafel", in our example) to make me and others trust him. You might be trying to increase the likelihood that Felafel will be trusted by overplaying how much you trust him. You may have chosen similar types of entities to Felafel as trusted entities in order to make it look like Felafel is a worthy trust partner. Or, to slip further into the slough of paranoia, you may even have targeted my trust relationships and copied/adapted them in an attempt to get me in particular to trust Felafel. So, what can we do about this?
The obvious answer is: "Look at others' trust of you as a recommender". Unfortunately, this way lies madness and infinite regress: layers of "trust" will build up, as I look to find whether I trust T to recommend V to recommend X, etc., hoping to find a final link to "Y", but with the concern, of course, that T, the first link in the chain, is herself untrustworthy. So what other options exist? This is the sort of area where a broader examination of the subject may yield some answers, and the concept, raised above, of "islands of trust" may provide some possibilities.
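One pragmatic (if unsatisfying) way to keep the regress finite is to discount each hop and cut the chain off at a fixed depth; both the discount factor and the depth cap below are arbitrary assumptions, not a known result:

```python
def chain_trust(links, discount=0.8, max_depth=3):
    """Trust along a recommendation chain T -> V -> X -> ...:
    multiply the per-link trust levels, apply a per-hop discount, and
    refuse to follow chains longer than `max_depth` hops at all."""
    total = 1.0
    for depth, level in enumerate(links):
        if depth >= max_depth:
            return 0.0          # too far removed to count
        total *= level * discount
    return total

# Even three strong links decay quickly:
# 0.9^3 * 0.8^3 is only about 0.37.
```

This does nothing to solve the real problem (that T herself may be untrustworthy); it merely bounds how much damage a long chain can do.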
Identity is a slippery thing, particularly in the distributed world. Admittedly, it is fairly easy to identify an entity with whom one has had previous dealings through digital signatures, and to know that no other entity can spoof that identity in the longer term (leaving aside the many areas of possible breakdown in this interaction, many of which are human: see Schneier's "Secrets and Lies"). However, there is little that can be done - beyond profiling and human checking (see Bogard's "The simulation of surveillance") - to stop entities representing themselves as several different entities. This raises the possibility that the entity representing itself as "Felafel" above was, in fact, the same entity as you ("Y"), another layer of confusion to add to the question of how to know whom to trust. Other than the profiling and human checking already mentioned, no obvious solutions present themselves to this question, which must remain at this stage subject for further research.
I raised the question above of whether one set of criteria for trust might be based on the local availability of capabilities and resources. It might be more appropriate to look at the difference between local and remote availability - though these are unlikely to be the only criteria, by any means. A brokering function may well be able to match these specific requirements (if you trust the broker, of course...), but the relative importance of these differences must be left to you. Added to these would be the worth of the resource with which you are trusting a third party. If, for instance, you want someone to look after a set of recipes for you, you are likely to have very different criteria for trust than if you are trusting your credit card details to them (unless you are a debt-heavy chef!). This just goes to show that the criteria for trust may be very complicated, and very context-specific - another reason for trying to get a better understanding of the field.
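The recipes-versus-credit-cards point can be expressed as a threshold that rises with the worth of what is entrusted; the linear scaling rule below is invented for illustration only:

```python
def required_trust(resource_worth, base=0.3):
    """Trust level demanded of a third party, rising from `base` (for
    worthless resources) to 1.0 as worth approaches its maximum.
    `resource_worth` is normalised to [0, 1]; the linear rule and the
    base level are assumptions, not derived from anything."""
    return min(1.0, base + (1 - base) * resource_worth)

# A set of recipes (low worth) demands far less trust than
# credit card details (high worth):
recipes_bar = required_trust(0.05)   # ~0.34
cards_bar   = required_trust(0.95)   # ~0.97
```

Any real system would presumably combine this with the context-specific criteria discussed above, rather than relying on a single worth axis.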