The notion of reputation is commonly used in social life and in the economy, and there is a common understanding of its meaning. According to a formal definition given by Wilson (1985), reputation is "a characteristic or attribute ascribed to one person (organization, community, etc.) A by another person (or community) B". In practice, the reputation of, say, a service provider can be formed by aggregating ratings from different users; each such rating essentially expresses the user's satisfaction.
The notion of reputation is very relevant to systems where there is information asymmetry about quality and trust, due to the large number of players involved and their anonymity/pseudonymity. Reputation can be seen as a state variable that provides evidence about the missing information; thus, reputation gives providers and consumers incentives to behave properly.
Reputation does not reveal the hidden information when:
- A significant amount of 'noisy' ratings is in place, e.g. because the reputation mechanism does not provide the means (such as enough rating levels) for expressing one's particular rating.
- There is limited historical information.
- The shadow of the future (i.e., the negative future impact of a bad reputation) is not adequate.
- Strategic manipulation of ratings is easy and thus significant.
When a proper reputation mechanism is absent in systems that serve as a market of services, the following may occur:
- Adverse selection, when there is "hidden quality" in the provision of services; this decreases individual surplus and gives incentives for a "market for lemons", i.e., a market where it is preferable to offer low-quality services.
- Moral hazard, when there is "hidden action" (i.e., the potential for intentional reduction of quality by the provider); this discourages participation in the market altogether.
- Other kinds of abusive behavior (e.g., free-riding, hacking).
A reputation mechanism is successful when:
- It leads to high efficiency; in particular, the user surplus/social welfare achieved is comparable to that attained under perfect information.
- A steady-state market situation can be achieved and maintained.
- It has good performance, which depends on the efficient treatment of technical issues: storage, processing and communication overhead, scalability, etc.
- It provides effective solutions (robustness) to identity changes, strategic manipulation of ratings, and the milking (i.e., exploitation) of one's reputation.
Our relevant research so far has mainly focused on:
- Reputation-based policies for efficient exploitation of the reputation metric and approaches for efficient aggregation of ratings' feedback.
- Mechanisms for enforcing submission of truthful ratings' feedback (to be used in reputation calculation) both in peer-to-peer systems and e-marketplaces.
- Other topics, such as calculation of the reputation of a remote node in a trust graph (e.g. the Semantic Web), namely FACiLE, and use of personal reputation in systems requiring team effort (e.g. grid clusters).
Our approach for dealing with the above topics combines:
- Modelling of the relevant system, of the proposed mechanism and of its impact.
- Experimental/analytical evaluation and/or game-theoretic equilibrium analysis of each mechanism.
- Study of the implementation of each mechanism in actual systems.
Reputation-Based Policies in Peer-to-Peer Environments
In particular, in [1,2], we present an in-depth and innovative study of how reputation can be exploited so that the right incentives for high performance are provided to peers in a peer-to-peer system. Such incentives do not arise if peers exploit reputation only when selecting the best providing peer; we show that this approach may lead high-performing peers to receive unfairly low value from the system. We argue and justify experimentally that the calculation of reputation values has to be complemented by proper reputation-based policies that determine the pairs of peers eligible to interact with each other. We introduce two independent dimensions of reputation-based policies, namely "provider selection" and "contention resolution", as well as specific policies for each dimension. We perform an extensive comparative assessment of a wide variety of policy pairs and identify the most effective ones by means of simulations of dynamically varying peer-to-peer environments. We show that both dimensions have considerable impact on both the incentives for peers and the efficiency attained. In particular, when peers follow fixed strategies, certain policy pairs differentiate the value received by different types of peers in accordance with the value offered to the system per peer of each type [1,2]. Moreover, when peers follow dynamic rational strategies, incentive compatibility applies under certain pairs of reputation-based policies: each peer is provided with the incentive to improve her performance in order to receive a higher value.
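As a minimal illustration of the two policy dimensions, the sketch below implements one simple policy per dimension; both the function names and the specific "pick the highest reputation" rule are illustrative assumptions, not the exact policies evaluated in [1,2]:

```python
def select_provider(candidates, reputation):
    """Provider-selection policy: the client picks the eligible provider
    with the highest reputation (one simple policy instance)."""
    return max(candidates, key=lambda peer: reputation.get(peer, 0.0))

def resolve_contention(requesters, reputation):
    """Contention-resolution policy: when several clients compete for the
    same provider, serve the requester with the highest reputation, so
    that high performers receive more value from the system."""
    return max(requesters, key=lambda peer: reputation.get(peer, 0.0))
```

Under policies of this kind, a peer's reputation affects not only whom it can obtain services from, but also how often it is served itself, which is what creates the incentive to perform well.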
Randomized Feedback Aggregation
Also, we show experimentally that reputation values can be computed quickly and accurately (thus reducing the associated overhead) when aggregating only: a) a small, randomly selected subset of the ratings' feedback provided by the peers, or b) the subset of ratings' feedback that users tend to submit on their own without any enforcement, namely all negative feedback and a portion (~25%) of the positive feedback.
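The two aggregation variants can be sketched as follows; the sample fraction and the 0.5 threshold separating negative from positive ratings are illustrative assumptions:

```python
import random

def aggregate_reputation(ratings, sample_fraction=0.2, rng=None):
    """Estimate a reputation value from a small random subset of the
    ratings (variant a). `ratings` holds values in [0, 1]."""
    rng = rng or random.Random()
    k = max(1, int(len(ratings) * sample_fraction))
    sample = rng.sample(ratings, k)
    return sum(sample) / len(sample)

def aggregate_voluntary(ratings, positive_share=0.25, rng=None):
    """Mimic voluntary submission (variant b): all negative ratings are
    kept, plus roughly `positive_share` of the positive ones."""
    rng = rng or random.Random()
    kept = [r for r in ratings if r < 0.5]
    kept += [r for r in ratings if r >= 0.5 and rng.random() < positive_share]
    return sum(kept) / len(kept) if kept else 0.0
```

Note that variant b biases the estimate downwards (negative feedback is over-represented), so a reputation mechanism relying on it would need to account for this skew when interpreting the aggregate.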
Credible Reporting of Feedback Information
We propose a mechanism that provides incentives for reporting truthful ratings' feedback in a peer-to-peer system for exchanging services. This mechanism complements reputation mechanisms that employ ratings' feedback on the various transactions in order to provide peers with incentives for offering better services to others. Under our approach, both transacting peers (rather than just the client) submit ratings on the performance of their mutual transaction. Only if the two ratings are in agreement are they taken into account in the calculation of the providing peer's reputation. If they are in disagreement, then both transacting peers are punished, since such an occasion is a sign that one of them is lying, but the system cannot tell which one! The severity of each peer's punishment is determined by his corresponding non-credibility metric; this is maintained by the mechanism and evolves according to the peer's record. When under punishment, a peer is not allowed to transact with others, while others do not have any incentive to transact with such a peer. We present the results of a multitude of experiments on dynamically evolving peer-to-peer systems. The results show clearly that our mechanism effectively detects and isolates liar peers, while rendering lying costly. Also, our mechanism diminishes the efficiency losses induced to sincere peers by the presence of large subsets of the peer population that provide their ratings either falsely or according to various unfair strategies. Finally, we explain how our approach can be implemented in practical cases of peer-to-peer systems.
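The core of the mechanism can be sketched as follows; the class and attribute names, and the exponential growth of the punishment duration with non-credibility, are illustrative assumptions:

```python
class CredibleReporting:
    """Bilateral-rating sketch: both transacting peers rate; agreement
    feeds the provider's reputation, disagreement punishes both peers."""

    def __init__(self, punishment_base=2):
        self.reputation = {}       # peer -> list of agreed-upon ratings
        self.non_credibility = {}  # peer -> non-credibility counter
        self.punished_for = {}     # peer -> remaining punishment rounds
        self.punishment_base = punishment_base

    def submit(self, provider, client, provider_rating, client_rating):
        if provider_rating == client_rating:
            # Agreement: the rating counts toward the provider's reputation.
            self.reputation.setdefault(provider, []).append(client_rating)
        else:
            # Disagreement: one of the two is lying, but the system cannot
            # tell which one; punish both, more severely the worse a
            # peer's credibility record is.
            for peer in (provider, client):
                nc = self.non_credibility.get(peer, 0) + 1
                self.non_credibility[peer] = nc
                self.punished_for[peer] = self.punishment_base ** nc

    def is_punished(self, peer):
        """A punished peer may not transact until its punishment expires."""
        return self.punished_for.get(peer, 0) > 0
```

Because the punishment duration grows with the non-credibility counter, a peer that repeatedly ends up in disagreements is excluded from the system for increasingly long periods, which is what renders systematic lying costly.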
In subsequent work, we further investigate the above mechanism that provides strong incentives for the submission of truthful feedback in peer-to-peer environments. In particular, we develop a Markov-chain model of the mechanism. Based on this, we prove that, when the mechanism is employed, the system evolves to a beneficial steady-state operation even in the case of a dynamically renewed population. Furthermore, we develop a procedure for the efficient selection of the parameters of the mechanism for any given peer-to-peer system; this procedure is based on ergodic arguments. Simulation experiments reveal that the procedure is indeed accurate, as well as effective regarding the incentives provided to participants for submitting truthful feedback.
Game-Theoretic Analysis of the Stability of Truthful Ratings' Equilibria Enforced by Monetary Punishments in Electronic Marketplaces
We define and analyze a game-theoretic model that captures the dynamics and the rational incentives in a competitive e-marketplace in which providers and clients exchange roles. That is, we assume that entities in such environments can act both as providers and as clients, and thus they are careful about their reputation. The situation differs from that of a peer-to-peer environment due to the payment involved in each transaction and due to the arising competition among providers for clients. We study how to enforce equilibria where ratings are submitted truthfully. We employ a mechanism prescribing that each service provision is rated by both the provider and the client, while the rating is included in the calculation of reputation only in case of agreement. However, contrary to the peer-to-peer mechanism above, we assume that monetary penalties are imposed on both raters in case of disagreement. First, we analyze the case where these penalties are fixed. By studying the evolutionary dynamics of the system, we prove that, under certain assumptions on the initial conditions, the system is led to a stable equilibrium where all participants report their ratings truthfully. We also investigate the introduction of non-fixed penalties to provide the right incentives for truthful reporting. We derive lower bounds on such penalties that depend on the participants' reputation values, so that the truthful-rating equilibrium is established. Thus, by employing a punishment that is tailored properly for each participant, this approach can limit the unavoidable social welfare losses due to the penalties for disagreement.
In subsequent work, we extend this approach. In particular, by means of game-theoretic analysis, we establish that, by employing proper fixed fines for disagreement (yet different ones for the provider and for the client of each transaction), we can enforce a stable equilibrium with honest feedback in the market under certain conditions, which we thoroughly investigate. Moreover, we calculate proper non-fixed, reputation-based fines that render honest feedback a Nash equilibrium, which is experimentally shown to be stable. Then, we numerically confirm that the reduction of the social loss per disagreement (due to the unfair punishment of one of the participants) achieved by reputation-based fines is significant both for providers and for clients. Our results apply even if a participant employs a different account for each role. Finally, we investigate the impact of employing our approach in eBay, and numerically estimate fixed and reputation-based monetary punishments that lead all eBay participants to a stable truthful-rating equilibrium.
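The distinction between fixed and reputation-based fines can be illustrated with a toy fine schedule; the linear dependence on reputation, its direction, and the parameter names are assumptions for illustration only, standing in for the lower bounds derived in the analysis:

```python
def disagreement_fines(provider_rep, client_rep, fixed_fine=None, scale=1.0):
    """Return the (provider_fine, client_fine) charged on a disagreement.

    With `fixed_fine` set, both raters pay the same fixed monetary
    penalty. Otherwise, each pays a fine that depends on their own
    reputation, so the punishment is tailored per participant and the
    social welfare lost to unfair punishments can be kept lower.
    """
    if fixed_fine is not None:
        return fixed_fine, fixed_fine
    # Illustrative reputation-based schedule: the fine grows linearly
    # with the rater's reputation value (in [0, 1]).
    return scale * provider_rep, scale * client_rep
```

The point of the tailored schedule is that a single fixed fine must be large enough to deter the participant with the strongest incentive to lie, whereas per-participant fines need only be as large as each participant's own incentive requires.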
Accurate Trust Inference for Distant Nodes in Trust Graphs
We propose an innovative approach for more accurate trust inference for distant nodes in trust graphs (e.g. the Semantic Web). In this context, Web referrals are often employed in order to assess the trustworthiness of the information published. This is due to the fact that most information sources are visited only occasionally by the same client; thus, direct "personal" experience rarely suffices. The accuracy of trust inference for unknown information sources may considerably deteriorate due to "noise" or to the intervention of malicious nodes producing and propagating untrustworthy referrals. Our method for trust inference in the Semantic Web, and in trust networks in general, is referred to as FACiLE (Faith Assessment Combining Last Edges). Unlike all other approaches, FACiLE infers a trust value for an information source from a proper combination of only the direct trust values of its neighbours. The efficiency of our approach is evaluated by a series of simulation experiments run for a wide variety of mixes of sources of untrustworthy information. FACiLE outperforms other trust-inference methods in the most interesting cases of population mixes; its performance is so satisfactory that it does not improve further when direct trust for occasionally visited sites is also incorporated.
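The last-edge idea can be sketched as follows; the weighted mean below is an illustrative combination rule, not the exact combination defined by FACiLE:

```python
def last_edge_estimate(reports):
    """Infer trust in a distant source from its neighbours' *direct*
    (last-edge) trust values only. Each report pairs the weight placed
    on a neighbour (e.g. our inferred trust in it, in [0, 1]) with that
    neighbour's direct trust in the target source."""
    total_weight = sum(weight for weight, _ in reports)
    if total_weight == 0:
        return None  # no usable last-edge information
    return sum(weight * direct for weight, direct in reports) / total_weight
```

By combining only the final edges of the referral paths, the estimate avoids compounding the noise and manipulation that accumulate when trust values are multiplied along long chains of intermediaries.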
Reputation-based Revelation of Individual Poor Performance in Grid Clusters
We have also investigated the use of reputation in systems requiring team effort (e.g. in a grid cluster), which in turn depends on individual effort. The objective is to provide incentives for high individual performance, thus promoting team performance too. We have analyzed the effectiveness of reputation for revealing the real performance of team members in the presence of both sincere and liar types of feedback reporting. We have considered various cases of members' individual observation of others' performance, ranging from full information (where all members rate all others) to no information (where no ratings' feedback is submitted). We have found experimentally that, in the case of full information on the performance of others, employing the majority rule accurately reveals the performance of team members, provided that the population contains fewer than 30% liars. If this is not the case (or if no ratings' feedback is available), blaming all team members for the poor performance of the entire team gives the most satisfactory results in terms of revealing individual performance.
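The majority rule of the full-information case can be sketched as follows (function and label names are illustrative):

```python
from collections import Counter

def reveal_performance(peer_ratings):
    """Majority-rule revelation under full information: each member is
    rated 'good' or 'poor' by all other members, and the majority label
    is taken as that member's revealed performance. Per the experiments,
    this is accurate while liars are fewer than ~30% of the population."""
    return {member: Counter(labels).most_common(1)[0][0]
            for member, labels in peer_ratings.items()}
```

With more than ~30% liars a coordinated minority can flip the majority label for a target member, which is why the fallback of blaming the whole team becomes preferable in that regime.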
Th. G. Papaioannou and G. D. Stamoulis. Achieving Honest Ratings with Reputation-based Fines in Electronic Markets. Accepted for publication as full paper at IEEE INFOCOM 2008 (Acceptance Rate: 21%), Phoenix, AZ, USA, April 2008.
Th. G. Papaioannou and G. D. Stamoulis. Reputation-based Estimation of Individual Performance in Grids. Accepted for publication as full paper at IEEE CCGRID 2008, Lyon, France, May 2008.
Th. G. Papaioannou and G. D. Stamoulis. Enforcing Truthful-Rating Equilibria in Electronic Marketplaces. In Proc. of the IEEE ICDCS Workshop on Incentive-Based Computing, Lisbon, Portugal, July 2006.
V. Bintzios, Th. G. Papaioannou and G. D. Stamoulis. An Effective Approach for Accurate Estimation of Trust of Distant Information Sources in the Semantic Web. In Proc. of the IEEE ICPS Workshop on Security, Privacy and Trust in Pervasive and Ubiquitous Computing, Lyon, France, June 2006.
Th. G. Papaioannou and G. D. Stamoulis. Optimizing an Incentives' Mechanism for Truthful Feedback in Virtual Communities. Presented at the AAMAS Workshop on Agents and Peer-to-Peer Computing (in press in an LNCS issue), Utrecht, The Netherlands, July 2005.
Th. G. Papaioannou and G. D. Stamoulis. Reputation-based Policies that Provide the Right Incentives in Peer-to-Peer Environments. Computer Networks (Special Issue on Management in Peer-to-Peer Systems: Trust, Reputation and Security) (Acceptance Ratio: 13.3%), Elsevier, vol. 50, issue 4, pp. 563-578, 2006.
Th. G. Papaioannou and G. D. Stamoulis. An Incentives' Mechanism Promoting Truthful Feedback in Peer-to-Peer Systems. In Proc. of IEEE/ACM CCGRID 2005 (Workshop on Global P2P Computing), May 2005. (Cited in the Reputation Mechanisms chapter of the Elsevier Handbook on Economics and Information Systems, 2006.)
Th. G. Papaioannou and G. D. Stamoulis. Effective Use of Reputation in Peer-to-Peer Environments. In Proc. of IEEE/ACM CCGRID 2004 (Workshop on Global P2P Computing), April 2004.
Part of the above research was performed in the context of the IST Project MMAPPS.
Related links:
- Reputations Research Network
- IRTF Peer-to-Peer WG
- C. Dellarocas Home Page
- P. R. Milgrom Home Page