
Structured P2P Networks by Example: Chord and DKS(N,k,f)

Jun Qin
Computational Engineering Technische Universitaet Dresden

August 26, 2006

Abstract. Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables scale well and can serve as infrastructures for Internet-scale applications. This paper presents a study of two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how they work, shows their performance and discusses them. Chord solves a fundamental problem that confronts peer-to-peer applications: how to efficiently locate the node that stores a particular data item. DKS(N,k,f), which can be perceived as an optimal generalization of Chord, avoids the additional bandwidth that Chord consumes for correcting routing table entries.

Contents
1 Introduction
2 Basics
3 Chord
3.1 The Overlay Graph and Items Mapping
3.2 The Lookup Process
3.3 Joins, Leaves and Maintenance
3.4 Replication and Fault Tolerance
3.5 Evaluation
4 DKS(N,k,f)
4.1 The Overlay Graph and Items Mapping
4.2 The Lookup Process
4.3 Joins, Leaves and Maintenance
4.4 Replication and Fault Tolerance
4.5 Evaluation
5 Related Work
6 Discussion
7 Conclusion


1 Introduction

The need for sharing information and computing resources in large-scale networks is motivating a significant amount of research in the area of P2P computing. Unstructured P2P networks suffer from problems such as single points of failure or control (e.g., Napster [Nap]) and a lack of scalability caused by the widespread use of broadcasts (e.g., Gnutella [200]). Recent developments in the area of peer-to-peer computing show that structured overlay networks implementing distributed hash tables (DHTs) scale well and can serve as infrastructures for Internet-scale applications. As one of the mainstream techniques of structured P2P networks, Chord [ISB03] is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures. The Chord protocol supports just one operation: given a key, it maps the key onto a node. Simplicity, correctness and performance are the three features that distinguish Chord from many other peer-to-peer protocols. DKS(N,k,f) [LOAH03] can be perceived as an optimal generalization of Chord. It stands for Distributed k-ary Search and avoids the additional bandwidth that Chord consumes for correcting routing table entries by using the correction-on-use technique. This paper presents a study of these two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how Chord and DKS(N,k,f) work, shows the evaluations of these two DHT-based P2P networks and discusses them.

2 Basics

The need for sharing information and computing resources in large-scale networks is motivating a significant amount of research in the area of peer-to-peer (P2P) computing. In Napster, files were exchanged between computers (peers) relying on a central directory for knowing which peer has which file. Napster was followed by a number of systems such as Gnutella, where the central directory was replaced by a flooding process: each computer connects to random members of the peer-to-peer network and queries its neighbors, who act similarly, until a query is resolved. Such P2P networks are called unstructured P2P networks because of their use of a random graph of peers. Random overlay networks attracted academic researchers from the networking and distributed systems communities with the simplicity of the solution and its ability to completely diffuse central authority. From a computer science point of view, this elimination of central control is very attractive for eliminating single points of failure and building large-scale distributed systems. But the huge amount of induced traffic still renders the solution unscalable.

The problem of building a scalable P2P overlay network with no central control became a scientifically challenging problem, and the efforts to solve it resulted in the emergence of what are known as structured P2P overlay networks, also referred to as Distributed Hash Tables (DHTs), the main approach introduced by academics to build structured overlay networks. A distributed hash table is the distributed version of a hash table data structure with the two primitive operations Put(key, value) and Get(key). The Put operation should result in the storage of the value at one of the peers such that any of the peers can perform the Get operation and reach the peer that holds the value. More importantly, both operations need to take a small number of hops. A first naive solution would be that every peer knows all other peers, so that every Get operation is resolved in one hop. Obviously, that is not scalable. Therefore, a second constraint is needed: each node should know only a small number of other peers. From a graph-theory point of view, this means that a directed graph of a certain known structure, rather than a random graph, needs to be constructed, with scalable sizes of both the outgoing degree of each node and the diameter of the graph. Chord and DKS(N,k,f) are two representative examples of DHT-based structured P2P networks.
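
To make the Put/Get abstraction concrete, the following is a minimal sketch in Python of the naive full-membership design mentioned above: every peer knows every other peer, so both operations resolve in one hop at the price of O(N) routing state per node. The names and the identifier size m = 16 are illustrative assumptions, not part of any of the systems discussed here.

import hashlib

def ident(key: str, m: int = 16) -> int:
    """Hash an arbitrary key into the identifier space [0, 2^m)."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

class NaiveDHT:
    """Naive full-membership DHT: every peer knows every other peer, so
    Put/Get take one hop, but each node keeps O(N) routing state."""
    def __init__(self) -> None:
        self.stores = {}                      # node identifier -> local store

    def join(self, address: str) -> None:
        self.stores[ident(address)] = {}

    def responsible(self, key: str) -> int:
        """First node identifier equal to or following the key's identifier."""
        k = ident(key)
        ids = sorted(self.stores)
        return next((n for n in ids if n >= k), ids[0])

    def put(self, key: str, value) -> None:
        self.stores[self.responsible(key)][key] = value

    def get(self, key: str):
        return self.stores[self.responsible(key)].get(key)

dht = NaiveDHT()
for peer in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    dht.join(peer)
dht.put("song.mp3", "stored-bytes")
assert dht.get("song.mp3") == "stored-bytes"

Structured P2P networks keep the same interface but replace the full membership view with a small routing table and a multi-hop lookup, as the following sections describe.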

3 Chord

Chord is a scalable peer-to-peer lookup service for Internet applications. The Chord protocol supports just one operation: given a key, it maps the key onto a node. Depending on the application using Chord, that node might be responsible for storing a value associated with the key. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing.

3.1 The Overlay Graph and Items Mapping

At its heart, Chord provides fast distributed computation of a hash function mapping keys to the nodes responsible for them. It uses consistent hashing. The consistent hash function assigns each node and key an m-bit identifier using a base hash function such as SHA-1 [Sta95]. A node's identifier is chosen by hashing the node's IP address, while a key identifier is produced by hashing the key. The identifier length m must be large enough to make the probability of two nodes or keys hashing to the same identifier negligible.

Figure 1: An identifier circle consisting of the three nodes 0, 1, and 3. In this example, key 1 is located at node 1, key 2 at node 3, and key 6 at node 0. [ISB03]

Consistent hashing assigns keys to nodes as follows. Identifiers are ordered on an identifier circle modulo 2^m. Key k is assigned to the first node whose identifier is equal to or follows (the identifier of) k in the identifier space. This node is called the successor node of key k, denoted by successor(k). If identifiers are represented as a circle of numbers from 0 to 2^m - 1, then successor(k) is the first node clockwise from k. Figure 1 shows an identifier circle with m = 3. The circle has three nodes: 0, 1, and 3. The successor of identifier 1 is node 1, so key 1 would be located at node 1. Similarly, key 2 would be located at node 3, and key 6 at node 0.

Consistent hashing is designed to let nodes enter and leave the network with minimal disruption. To maintain the consistent hashing mapping when a node n joins the network, certain keys previously assigned to n's successor now become assigned to n. When node n leaves the network, all of its assigned keys are reassigned to n's successor. No other changes in the assignment of keys to nodes need occur. In the example above, if a node were to join with identifier 7, it would capture the key with identifier 6 from the node with identifier 0.
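
The assignment of keys to nodes can be stated in a few lines. The sketch below (Python; the function name and the toy parameter m = 3 are chosen for illustration only) reproduces the example of Figure 1.

def successor(k, nodes, m=3):
    """First node identifier equal to or following k on the identifier circle
    modulo 2^m, i.e. the successor of key k."""
    space = 2 ** m
    ring = sorted(n % space for n in nodes)
    for n in ring:
        if n >= k % space:
            return n
    return ring[0]                      # wrap around the circle

# Figure 1: m = 3 and the three nodes 0, 1 and 3.
assert successor(1, [0, 1, 3]) == 1     # key 1 is located at node 1
assert successor(2, [0, 1, 3]) == 3     # key 2 at node 3
assert successor(6, [0, 1, 3]) == 0     # key 6 wraps around to node 0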

3.2 The Lookup Process

The lookup process comes as a natural result of how the identifier space is partitioned. Both inserting and querying items depend on finding the successor of an identifier. As before, let m be the number of bits in the key/node identifiers. Each node, n, maintains a routing table with (at most) m entries, called the finger table. The i-th entry in the table at node n contains the identity of the first node, s, that succeeds n by at least 2^(i-1) on the identifier circle, i.e., s = successor(n + 2^(i-1)), where 1 <= i <= m (and all arithmetic is modulo 2^m). We call node s the i-th finger of node n, and denote it by n.finger[i].node. A finger table entry includes both the Chord identifier and the IP address (and port number) of the relevant node. Note that the first finger of n is its immediate successor on the circle; for convenience we often refer to it as the successor rather than the first finger. In the example shown in Figure 2, the finger table of node 1 points to the successor nodes of identifiers (1 + 2^0) mod 2^3 = 2, (1 + 2^1) mod 2^3 = 3, and (1 + 2^2) mod 2^3 = 5, respectively. The successor of identifier 2 is node 3, as this is the first node that follows 2; the successor of identifier 3 is (trivially) node 3; and the successor of 5 is node 0.
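
As a sketch of how the finger table is populated (Python; the function name and the toy parameters of Figure 2 are illustrative), the following reproduces the finger table of node 1.

def build_finger_table(n, nodes, m):
    """finger[i] = successor(n + 2^(i-1)) for 1 <= i <= m, computed on the
    identifier circle modulo 2^m."""
    space = 2 ** m
    ring = sorted(x % space for x in nodes)
    def successor(k):
        return next((x for x in ring if x >= k % space), ring[0])
    return [successor((n + 2 ** (i - 1)) % space) for i in range(1, m + 1)]

# Figure 2: node 1 with m = 3 and nodes 0, 1, 3 points at identifiers 2, 3, 5,
# whose successors are nodes 3, 3 and 0.
assert build_finger_table(1, [0, 1, 3], 3) == [3, 3, 0]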

Figure 2: Finger tables and key locations for a net with nodes 0, 1, and 3, and keys 1, 2, and 6. [ISB03]

As a query example, consider the Chord ring in Figure 2. Suppose node 3 wants to find the successor of identifier 1. Since 1 belongs to the circular interval [7, 3), it belongs to 3.finger[3].interval; node 3 therefore checks the third entry in its finger table, which is 0. Because 0 precedes 1, node 3 will ask node 0 to find the successor of 1. In turn, node 0 will infer from its finger table that 1's successor is node 1 itself, and return node 1 to node 3. In general, under normal conditions a lookup takes O(log_2(N)) hops.
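
The routing decision itself can be sketched as a greedy loop (Python; the in_interval helper and the hard-coded finger tables of Figure 2 are for illustration only, and the real protocol runs as remote calls between nodes rather than inside one process).

def in_interval(x, a, b, space):
    """True if x lies in the circular interval (a, b] modulo 'space'."""
    x, a, b = x % space, a % space, b % space
    return (a < x <= b) if a < b else (x > a or x <= b)

def find_successor(start, key, fingers, m):
    """Greedy Chord-style lookup: from 'start', keep jumping to the closest
    finger preceding 'key' until the node owning 'key' is found."""
    space = 2 ** m
    n, hops = start, [start]
    while not in_interval(key, n, fingers[n][0], space):    # finger[1] is the successor
        nxt = next((f for f in reversed(fingers[n])
                    if in_interval(f, n, key - 1, space)), n)   # f in (n, key)
        if nxt == n:                                        # no closer finger known
            break
        n = nxt
        hops.append(n)
    return fingers[n][0], hops

# Finger tables of Figure 2 (m = 3, nodes 0, 1 and 3).
fingers = {0: [1, 3, 0], 1: [3, 3, 0], 3: [0, 0, 0]}
assert find_successor(3, 1, fingers, 3) == (1, [3, 0])      # node 3 asks node 0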

3.3 Joins, Leaves and Maintenance

In a dynamic network, nodes can join (and leave) at any time. Joins and leaves make the network change constantly. To join the network, a node n performs a lookup for its own id through some first contact in the network and inserts itself in the ring between its successor s and the predecessor of s using a periodic stabilization algorithm. Initialization of n's routing table is done by copying the routing table of s or by letting s look up each required edge of n. The subset of nodes that need to adjust their tables to reflect the presence of n will eventually do so, because all nodes run a stabilization algorithm that periodically goes through the routing table and looks up the value of each edge. The last task is to transfer part of the items stored at s: items with identifiers less than or equal to n need to be transferred to n, which is also handled by the application layers of n and s. Graceful removals (leaves) are done by first transferring all items to the successor and informing the predecessor and successor. The rest of the fingers are corrected by virtue of the stabilization algorithm.

A basic stabilization protocol is used to keep nodes' successor pointers up to date; this is the most important task and is sufficient to guarantee correctness of lookups and to add nodes to a Chord ring in a way that preserves the reachability of existing nodes, even in the face of concurrent joins and lost and reordered messages. As a simple example, suppose node n joins the system, and its ID lies between nodes np and ns. n would acquire ns as its successor. Node ns, when notified by n, would acquire n as its predecessor. When np next runs stabilize, it will ask ns for its predecessor (which is now n); np would then acquire n as its successor. Finally, np will notify n, and n will acquire np as its predecessor. At this point, all predecessor and successor pointers are correct.
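
A compact sketch of one stabilization round (Python; a single-process toy with direct object references instead of remote messages, and with the initial successor assumed to have been obtained from a prior lookup) may make the np/n/ns example easier to follow.

def between(x, a, b, space):
    """True if x lies strictly between a and b on the circle modulo 'space'."""
    x, a, b = x % space, a % space, b % space
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, ident, space=8):
        self.id, self.space = ident, space
        self.successor, self.predecessor = self, None   # a lone node points to itself

    def stabilize(self):
        """Periodically verify the successor and tell it about ourselves."""
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id, self.space):
            self.successor = x                          # a closer successor appeared
        self.successor.notify(self)

    def notify(self, candidate):
        """Adopt 'candidate' as predecessor if it is closer than the current one."""
        if self.predecessor is None or between(candidate.id, self.predecessor.id,
                                               self.id, self.space):
            self.predecessor = candidate

# np = 0 and ns = 3 already form a ring; n = 1 joins with ns as its successor.
np, n, ns = Node(0), Node(1), Node(3)
np.successor, ns.predecessor = ns, np
n.successor = ns                 # learned via a lookup through some first contact
n.stabilize()                    # ns learns that n is its predecessor
np.stabilize()                   # np adopts n as successor and notifies it
assert (np.successor, ns.predecessor, n.predecessor) == (n, n, np)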

3.4 Replication and Fault Tolerance

When a node n fails, nodes whose finger tables include n must find n's successor. In addition, the failure of n must not be allowed to disrupt queries that are in progress while the system is re-stabilizing. The key step in failure recovery is maintaining correct successor pointers. To help achieve this, each Chord node maintains a successor-list of its r nearest successors on the Chord ring. In ordinary operation, a modified version of the stabilize routine maintains the successor-list. If node n notices that its successor has failed, it replaces it with the first live entry in its successor-list. At that point, n can direct ordinary lookups for keys for which the failed node was the successor to the new successor. As time passes, stabilize will correct finger table entries and successor-list entries pointing to the failed node.

After a node failure, but before stabilization has completed, other nodes may attempt to send requests through the failed node as part of a lookup. Ideally the lookups would be able to proceed, after a timeout, along another path despite the failure. In many cases this is possible. All that is needed is a list of alternate nodes, easily found in the finger table entries preceding that of the failed node. If the failed node had a very low finger table index, nodes in the successor-list are also available as alternates. For an item to be lost or the ring to be disconnected, O(log_2(N) + 1) successive nodes have to fail simultaneously. The successor-list mechanism also helps higher-layer software replicate data. A typical application using Chord might store replicas of the data associated with a key at the k nodes succeeding the key. The fact that a Chord node keeps track of its r successors means that it can inform the higher-layer software when successors come and go, and thus when the software should propagate new replicas.
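
The failover rule itself is tiny; the following sketch (Python, with a caller-supplied liveness predicate, purely illustrative) shows how a node would replace a failed successor with the first live entry of its successor-list.

def repair_successor(successor_list, is_alive):
    """Return the first live entry of the successor-list; the ring stays
    connected unless all r listed successors have failed simultaneously."""
    for candidate in successor_list:
        if is_alive(candidate):
            return candidate
    raise RuntimeError("all successors in the list failed simultaneously")

# Example: the immediate successor 12 has failed, so 17 takes over.
assert repair_successor([12, 17, 23], lambda n: n != 12) == 17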

3.5 Evaluation

The experiment on simultaneous node failures evaluates the ability of Chord to regain consistency after a large percentage of nodes fail simultaneously. Figure 3 plots the mean lookup failure rate and the 95% confidence interval as a function of the fraction p of failed nodes. The lookup failure rate is almost exactly p. Since this is just the fraction of keys expected to be lost due to the failure of the responsible nodes, we conclude that there is no significant lookup failure in the Chord network. For example, if the Chord network had partitioned into two equal-sized halves, we would expect one half of the requests to fail because the querier and target would be in different partitions half the time. The results do not show this, suggesting that Chord is robust in the face of multiple simultaneous node failures. Figure 4 shows the measured latency of Chord lookups over a range of numbers of nodes. Experiments with a number of nodes larger than ten are conducted by running multiple independent copies of the Chord software at each site. The lesson from Figure 4 is that lookup latency grows slowly with the total number of nodes, confirming the simulation results that demonstrate Chord's scalability.


Figure 3: The fraction of lookups that fail as a function of the rate (over time) at which nodes fail and join. Only failures caused by Chord state inconsistency are included, not failures due to lost keys. [ISB03]

4 DKS(N,k,f)

DKS(N,k,f) can be perceived as an optimal generalization of Chord. It stands for Distributed k-ary Search and avoids the additional bandwidth that Chord consumes for correcting routing table entries by using the correction-on-use technique.

4.1 The Overlay Graph and Items Mapping

DKS could be perceived as an optimal generalization of Chord that provides a shorter diameter at the price of larger routing tables. At the same time, DKS can be perceived as a meta-system from which other systems can be instantiated. DKS stands for Distributed k-ary Search; it was designed after the observation that many DHT systems are instances of a form of k-ary search. Figure 5 shows the division of the space done in DKS. It has in common with Chord that each node perceives itself as the start of the space. However, each interval is divided into k rather than 2 sub-intervals. In line with the goal of DKS to act as a meta-system, the mapping of items onto nodes is also left as a design choice. A Chord-like mapping is valid as a simple first choice, but different mappings are possible as well.
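
A small sketch (Python; the parameters N = 256 and k = 4 are chosen only for illustration, and representing intervals as (start, width) pairs is a simplification) shows how a node's view of the identifier space is split level by level.

import math

def dks_levels(n, N, k):
    """For node n, split the identifier space of size N = k^L into k intervals
    per level, always recursing into the interval that starts at n itself.
    Returns, per level, the k intervals as (start, width) pairs."""
    L = round(math.log(N, k))
    assert k ** L == N, "identifier space size must be a power of k"
    levels, width = [], N
    for _ in range(L):
        width //= k
        levels.append([((n + i * width) % N, width) for i in range(k)])
    return levels

levels = dks_levels(0, 256, 4)
assert len(levels) == 4                                   # log_4(256) levels
assert levels[0] == [(0, 64), (64, 64), (128, 64), (192, 64)]
assert levels[1] == [(0, 16), (16, 16), (32, 16), (48, 16)]

With k = 2 this degenerates to the Chord-style binary division of the space; larger k trades larger routing tables for a shorter lookup path of log_k(N) hops.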

4.2 The Lookup Process

When a node n receives a lookup request for a key identifier t from its user, node n checks whether t lies between its predecessor p and itself. If this is the case, node n does a local lookup to find the value associated with t and the result is returned to the user. Otherwise, node n triggers a forwarding process that proceeds level by level and consists in routing lookup messages towards the node that succeeds t on the identifier circle. Each lookup message carries the information (level and interval) necessary for the detection and correction of routing entries. When the node that is the successor of t is reached, it performs a local lookup to retrieve the value associated with t. The result is forwarded backward along the lookup path or sent directly to the originator of the lookup. A query arriving at a node is forwarded to the first node of the interval to which the identifier belongs; therefore, a lookup is resolved in log_k(N) hops. Inserting key/value pairs into the system works like a lookup. In addition, the messages for inserting key/value pairs are also used for the detection and correction of routing entries.


Figure 4: The measured latency of Chord lookups as a function of the total number of nodes. [ISB03]

4.3 Joins, Leaves and Maintenance

Unlike Chord, DKS avoids any kind of periodic stabilization, both for the maintenance of the successors and the predecessor and for the routing table. Instead, it relies on three principles: local atomic actions, correction-on-use and correction-on-change. When a node joins, a form of atomic distributed transaction is performed to insert it on the ring. Routing tables are then maintained using the correction-on-use technique, an approach introduced in DKS. Every lookup message contains information about the position of the receiver in the routing table of the sender. Upon receiving that information, the receiver can judge whether the sender has an up-to-date routing table. If it is correct, the receiver continues the lookup; otherwise the receiver notifies the sender that its routing table entry is stale and advises it of a better candidate for the lookup according to the receiver's knowledge. The sender then contacts the candidate, and the process is repeated until the correct node for the routing table of the sender is used for the lookup. By applying the correction-on-use technique, a routing table entry is not corrected until there is a need to use it in some lookup. This approach reduces the maintenance cost significantly. However, the number of joins and leaves is assumed to be considerably smaller than the number of lookup messages. In cases where this assumption does not hold, DKS combines correction-on-use with the correction-on-change technique [LOAH04a]. Correction-on-change notifies all nodes that need to be updated upon the occurrence of a join, leave or failure.
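
The core of correction-on-use is a responsibility check performed by the receiver of every lookup message. The sketch below (Python; the message format, the dict-based node representation and the rule for picking a better candidate are illustrative assumptions, not the exact DKS algorithm) shows the idea.

def between(x, a, b, N):
    """True if x lies in the circular interval (a, b] modulo N."""
    x, a, b = x % N, a % N, b % N
    return (a < x <= b) if a < b else (x > a or x <= b)

def handle_lookup(node, msg, N):
    """Receiver-side check of correction-on-use: the message says at which
    position the sender believes the receiver is responsible. If that position
    does not fall between the receiver's predecessor and the receiver itself,
    the sender's routing entry is stale and a better candidate is suggested."""
    pos = msg["interval_start"]
    if between(pos, node["predecessor"], node["id"], N):
        return {"action": "forward", "key": msg["key"]}       # entry was correct
    # Stale entry: suggest the known node that most closely precedes 'pos'.
    better = min(node["routing_table"], key=lambda c: (pos - c) % N)
    return {"action": "correct", "better_candidate": better}

# Node 40 (predecessor 30) receives a lookup that assumed it still owned 25.
node = {"id": 40, "predecessor": 30, "routing_table": [10, 20, 40, 60]}
reply = handle_lookup(node, {"interval_start": 25, "key": 25}, 64)
assert reply == {"action": "correct", "better_candidate": 20}

Because stale entries are only repaired when they are actually exercised by traffic, the maintenance cost is folded into the lookup cost instead of being paid periodically.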

4.4 Replication and Fault Tolerance

In early versions of DKS, fault tolerance was handled similarly to Chord, where replicas of an item are placed on the successor nodes. In later developments [AGH04], DKS addresses replication more at the DHT level rather than delegating most of the work to the application layer. Additionally, to avoid congestion in a particular segment of the ring, replicas are placed at dispersed, well-chosen positions and not on the successor list. In general, for the correction-on-use technique to work, an invariant is maintained that the predecessor pointer is always correct; this is guaranteed by the atomic actions on the circle.

4.5 Evaluation

Figure 5: Illustration of how a DKS node divides the space in an identifier space of size N = 2^8 = 256. [LOAH03]

Figure 6 shows that as the number of lookups increases, the average lookup length tends to (1/2) log_2(2^10) and the 99th percentile of the lookup length tends to log_2(2^10). Those are the typical lookup bounds offered by the Chord system. The DKS(N,k,f) system offers the same bounds, yet without active stabilization.

Figure 6: The average, the 1st and the 99th percentile of the lookup length as a result of increasing the lookup traffic in a system bootstrapped with 500 nodes, while 3500 joins are done concurrently with lookups. [LOAH03]

In Figure 7, the 99th percentile of the lookup length for the case k = 4 tends to be high when there is not enough lookup traffic, which is natural, since the number of out-of-date entries is larger because of the larger routing tables. As the lookup traffic increases, the system with k = 4 starts to outperform the system with k = 2. In such experiments, the number of lookup failures observed was negligible with respect to the amount of lookup requests injected.

Figure 7: The 99th percentile of the lookup length as a result of increasing the lookup traffic in a system of actual size 2^10 while 10% of the nodes leave and another 10% join concurrently. [LOAH03]

5 Related Work

While Chord maps keys onto nodes, traditional name and location services provide a direct mapping between keys and values. A value can be an address, a document, or an arbitrary data item. Chord can easily implement this functionality by storing each key/value pair at the node to which that key maps. For this reason, and to make the comparison clearer, the rest of this section assumes a Chord-based service that maps keys onto values. DNS provides a host name to IP address mapping [MD88]. Chord can provide the same service with the name representing the key and the associated IP address representing the value. Chord requires no special servers, while DNS relies on a set of special root servers. DNS names are structured to reflect administrative boundaries; Chord imposes no naming structure. DNS is specialized to the task of finding named hosts or services, while Chord can also be used to find data objects that are not tied to particular machines.

Chord can be used as a lookup service to implement a variety of systems. In particular, it can help avoid the single points of failure or control that systems like Napster possess, and the lack of scalability that systems like Gnutella display because of their widespread use of broadcasts. A couple of applications, including a cooperative file system [FDS01], were built on top of Chord. As a general-purpose service, a broadcast algorithm was also developed for Chord [SEAH03]. For DKS, a general-purpose multicast algorithm [LOAH04b] was developed.

6 Discussion

Some questions were raised during the talk on this study; they are discussed below.

1. On the basic definitions and assumptions in Chord.
Values. The set of values V, such as files, directory entries, etc. Each value has a corresponding key from the set Keys(V). If a value is a file, the key could be, for instance, its checksum, a combination of owner, creation date and name, or any such unique attribute.
Nodes. The set P of machines/processes, also referred to as nodes or peers. Keys(P) is the set of unique keys for the members of P, usually the IP addresses or public keys of the nodes.
The Identifier Space. A common and fundamental assumption of all DHTs is that the keys of the values and the keys of the nodes are mapped into one range using a hashing function. For instance, the IP addresses of the nodes and the checksums of files are hashed using SHA-1 to obtain 160-bit identifiers. The term identifier is used to refer to hashed keys of items and of nodes. The term identifier space refers to the range of possible identifier values; its size is usually referred to as N. We use id as an abbreviation for identifier most of the time.



Items. When a new value is inserted in the hash table, its key is saved with it. We use the term item to refer to such a key-value pair.

2. On the SHA-1 hash function used in Chord. The standard SHA-1 function is used as the base hash function. This makes the Chord protocol deterministic. Producing a set of keys that collide under SHA-1 can be seen, in some sense, as inverting, or decrypting, the SHA-1 function, which is believed to be hard to do.

3. On the application example for Chord, a P2P storage system. The main idea of this system is that Chord identifies the node responsible for storing a block and then talks to the server on that node. Which peer is the server and which is the client follows from context: in pure P2P systems, any peer can act as a server or a client, depending on the role it actually plays in the system.

4. On the round-trip time (RTT) and the related problems in Chord. Besides the additional bandwidth consumed for maintaining routing tables, Chord also has the problem that nodes that are close on the ring can be far apart in the underlying network, as shown in Figure 8.

Figure 8: Nodes close on the ring can be far apart in the network. [Kaa]

A partial solution could be to weight neighbor nodes by their round-trip time (RTT) when routing: choosing a neighbor that is closer to the destination in network terms (lowest RTT) reduces path latency. In telecommunications, the term round-trip delay time or round-trip time (RTT) has the following meanings: (1) the elapsed time for the transit of a signal over a closed circuit, or the time elapsed for a message to reach a remote place and come back again; (2) in primary or secondary radar systems, the time required for a transmitted pulse to reach a target and for the echo or transponder reply to return to the receiver. Round-trip delay time is significant in systems that require two-way interactive communication, such as voice telephony, or ACK/NAK data systems where the round-trip time directly affects the throughput rate, such as the Transmission Control Protocol. It may range from a few microseconds for a short line-of-sight (LOS) radio system to many seconds for a multiple-link circuit with one or more satellite links involved. This includes the node delays as well as the media transit time.
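
As a rough illustration of the "weight neighbors by RTT" idea (Python; the candidate selection and the RTT map are made up for the example, and a real implementation would also have to balance RTT against progress on the ring so as not to inflate the hop count):

def pick_next_hop(current, key, fingers, rtt, N):
    """Among the fingers that still precede the key on the ring (i.e. make
    progress towards it), pick the one with the lowest measured round-trip time."""
    def progresses(f):
        # f lies in the circular interval (current, key) exclusive
        return 0 < (f - current) % N < (key - current) % N
    candidates = [f for f in fingers if progresses(f)]
    if not candidates:
        return None                       # only the final successor step remains
    return min(candidates, key=lambda f: rtt.get(f, float("inf")))

# Fingers 9 and 33 both precede key 50 as seen from node 1; 33 is nearer on the
# ring, but 9 has a much lower measured RTT, so it is chosen.
assert pick_next_hop(1, 50, [9, 33, 52], {9: 10, 33: 80, 52: 5}, 64) == 9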

7 Conclusion

This paper presents a study of two representative examples of structured P2P networks: Chord and DKS(N,k,f). It explains how Chord and DKS(N,k,f) work, shows the evaluations of these two DHT-based P2P networks and discusses them. Chord emphasizes simplicity, providing a scalable distributed hash service. It solves a fundamental problem that confronts peer-to-peer applications: how to efficiently locate the node that stores a particular data item. DKS(N,k,f), which can be perceived as an optimal generalization of Chord, significantly decreases the additional bandwidth consumed for correcting routing table entries.



Chord and DKS(N,k,f) are two representative examples of structured P2P networks: both are fully distributed and scalable services that, using the distributed hash table (DHT) technique as the mainstream mechanism, can reach the target node with little routing information and without any flooding algorithm. Although structured P2P networks have many good characteristics, they still have problems: the representative systems generally assume nodes with equal capabilities, adapt well only to systems of moderate scale, are more complex than unstructured P2P systems, and there is still a lack of successful large-scale deployments of DHT-based P2P systems on the Internet. Better solutions still need to be developed.

References

[200] Gnutella, 2003. http://www.gnutella.com.

[AGH04] Ali Ghodsi, Luc Onana Alima, and Seif Haridi. A novel replication scheme for load-balancing and increased security. Technical Report TR-2004-11, SICS, June 2004.

[FDS01] Frank Dabek, M. Frans Kaashoek, David Karger, Robert Morris, and Ion Stoica. Wide-area cooperative storage with CFS. In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP'01), Chateau Lake Louise, Banff, Canada, October 2001.

[ISB03] Ion Stoica, Robert Morris, David Liben-Nowell, David Karger, M. Frans Kaashoek, Frank Dabek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup protocol for Internet applications. IEEE/ACM Transactions on Networking, 11(1):17-32, February 2003.

[Kaa] Frans Kaashoek. Slides on "Peer-to-peer computing research: a fad?". MIT.

[LOAH03] Luc Onana Alima, Sameh El-Ansary, Per Brand, and Seif Haridi. DKS(N,k,f): A family of low communication, scalable and fault-tolerant infrastructures for P2P applications. In The 3rd International Workshop on Global and Peer-to-Peer Computing on Large Scale Distributed Systems (CCGRID 2003), May 2003.

[LOAH04a] Luc Onana Alima, Ali Ghodsi, and Seif Haridi. A framework for structured peer-to-peer overlay networks. In LNCS post-proceedings of the Global Computing 2004 workshop. Springer-Verlag, 2004.

[LOAH04b] Luc Onana Alima, Ali Ghodsi, Per Brand, and Seif Haridi. Multicast in DKS(N,k,f) overlay networks. In The 7th International Conference on Principles of Distributed Systems (OPODIS 2003). Springer-Verlag, 2004.

[MD88] P. Mockapetris and K. J. Dunlap. Development of the Domain Name System. In Proc. ACM SIGCOMM (Stanford, CA), pages 123-133, 1988.

[Nap] Napster. Open source Napster server, 2002. http://opennap.sourceforge.net.

[SEAH03] Sameh El-Ansary, Luc Onana Alima, Per Brand, and Seif Haridi. Efficient broadcast in structured P2P networks. In 2nd International Workshop on Peer-to-Peer Systems (IPTPS'03), Berkeley, CA, USA, February 2003.

[Sta95] Secure Hash Standard. Department of Commerce/NIST, National Technical Information Service, Springfield, VA, April 1995.
