I've been thinking some more about trust generally. Partly in response to some emails I've exchanged with Nik Silver of jtrix, and partly as a result of other conversations and reading, I thought it would make sense to expand somewhat on those thoughts.
There are quite a few models of trust out in the p2p world, and most seem to have been put together with little thought about what they are trying to implement. By this I mean that assumptions are made about how the system should work, and a system is then created to match. There are exceptions, notably Advogato, where you can find a discussion of its trust metric. Even in the case of Advogato, however, I would suggest that more thought has been put into the system than into what model it is trying to espouse. This isn't necessarily a bad thing, but I believe that if we have better ways of discussing and modelling the trust models we want to espouse, and of looking past initial assumptions, then we will create much more robust and useful systems. I make, therefore, a distinction between models - which are conceptual - and systems - which are implementations.
When I set up any kind of system, I am almost bound to be making assumptions about the trust model being espoused. I believe that there could be radically different answers to each of the questions below, ranging from "absolutely, always" to "no way, never" (this raises the issue of timeliness, which is another aspect of context, about which more later). Consider, then, the following questions:
I've tried to bring out, particularly in the later questions, the suggestion that context is very important. I may believe you to be a fantastic computer security expert, but I might not trust you to mend my scuba equipment. On the other hand, if you take a course in scuba equipment repair, I might decide that I now will trust you with this task. There are many aspects to trust - context is always important.
A very interesting question which my ex-colleague Will Harwood brought up is this: if I take a hostage - anything of value to you, it doesn't need to be a person - how does this change the relationship? I might now trust you to upload something to my PDA - I'll harm the hostage if you don't do what I want - but I still won't trust you to mend my scuba equipment.
Another fascinating aspect of trust, and one which really comes to the fore in massively distributed peer-to-peer networks - where my interest lies - is that of transferral of trust. I am very unlikely to spend my entire time interacting with people I know in real life, or even with people who have been recommended to me first hand by people I know and trust in real life. How, then, can I be sure that I can trust someone in a particular context? How can I transfer trust? How might we model decaying trust relationships across multiple hops? For instance, if I trust you 90%, and you trust Jon 80%, how much trust should I have in Jon? How should this change if other people I trust 80% only trust Jon 20% in a particular context?
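To make the question concrete, here is a minimal sketch of one possible answer - my own illustration, not a model proposed anywhere in this piece: trust decays multiplicatively along a chain of hops, and several independent opinions about the same person are combined as an average weighted by how much I trust each recommender. The function names and the combining rule are assumptions for the sake of the example; many other rules (minimum, probabilistic OR, Advogato-style flow) are equally defensible.

```python
# Sketch of one possible transitive-trust model (illustrative only).
# Trust values are in [0.0, 1.0].

def chained_trust(*hops):
    """Trust across a chain of recommendations, decaying multiplicatively.
    E.g. I trust you 0.9 and you trust Jon 0.8, so the chain gives
    0.9 * 0.8 = 0.72."""
    result = 1.0
    for t in hops:
        result *= t
    return result

def combined_trust(opinions):
    """Combine several opinions about the same target.
    Each opinion is (my_trust_in_recommender, their_trust_in_target);
    opinions are averaged, weighted by my trust in each recommender."""
    weighted = sum(w * t for w, t in opinions)
    total = sum(w for w, _ in opinions)
    return weighted / total if total else 0.0

# The two scenarios from the questions above:
# I trust you 90%, you trust Jon 80%.
via_you = chained_trust(0.9, 0.8)
# Someone else I trust 80% only trusts Jon 20%.
overall = combined_trust([(0.9, 0.8), (0.8, 0.2)])
print(round(via_you, 2), round(overall, 2))
```

Under this particular rule the dissenting opinion pulls my trust in Jon down from roughly 0.72 to roughly 0.52 - which shows how much the answer depends on the combining rule chosen, exactly the kind of assumption a trust model should make explicit.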
I don't have answers to the questions I've laid out. I hope to find the time to research what the answers might be. However, I do believe that it's important that we find ways to model the questions, so that when systems are created, we at least understand the assumptions that are being made, and accept the risks that go with those assumptions.
Mike Bursell, 2001-10-17