
SCALABLE AND COST-EFFECTIVE INTERCONNECTION OF DATA-CENTER SERVERS USING DUAL SERVER PORTS

ABSTRACT:

The goal of data-center networking is to interconnect a large number of server machines at low equipment cost while providing high network capacity and high bisection width. It is well understood that the current practice, in which servers are connected by a tree hierarchy of network switches, cannot meet these requirements. In this paper, we explore a new server-interconnection structure. We observe that the commodity server machines used in today's data centers usually come with two built-in Ethernet ports, one used for the network connection and the other left for backup. We believe that if both ports are actively used for network connections, we can build a scalable, cost-effective interconnection structure without either expensive higher-level large switches or any additional hardware on the servers. We have proven that FiConn is highly scalable, encompassing hundreds of thousands of servers with low diameter and high bisection width. We have developed a low-overhead traffic-aware routing mechanism that improves effective link utilization based on dynamic traffic state. We have also proposed how to deploy FiConn incrementally.

EXISTING SYSTEM: The existing data-center network architecture typically consists of a tree of routing and switching elements, with progressively more specialized and expensive equipment moving up the network hierarchy. Unfortunately, even when deploying the highest-end IP switches/routers, the resulting topologies may support only 50% of the aggregate bandwidth available at the edge of the network, while still incurring tremendous cost. Non-uniform bandwidth among data-center nodes complicates application design and limits overall system performance, and no single Ethernet switch can support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Just as clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we argue that appropriately architected and interconnected commodity switches may deliver more performance at lower cost than today's high-end solutions.
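The bandwidth shortfall of a switch hierarchy can be seen with a small back-of-the-envelope calculation. The sketch below is our own illustration; the port counts (40 server-facing ports and 8 uplinks on a 48-port edge switch) are assumed for the example and are not figures from the paper.

```python
def oversubscription(down_ports, up_ports, link_gbps=1):
    """Ratio of server-facing edge bandwidth to uplink capacity
    at one switch, assuming all ports run at the same speed."""
    edge_bw = down_ports * link_gbps
    uplink_bw = up_ports * link_gbps
    return edge_bw / uplink_bw

# A 48-port edge switch with 40 server-facing ports and 8 uplinks:
ratio = oversubscription(40, 8)
print(ratio)            # 5.0 -> servers can offer 5x what the uplinks carry
print(100 / ratio)      # 20.0 -> only 20% of edge bandwidth crosses the tree
```

With every hierarchy level oversubscribed like this, the fraction of edge bandwidth the tree can actually carry shrinks multiplicatively, which is why even high-end trees fall well short of full bisection bandwidth.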

PROPOSED SYSTEM: In this paper, we explore a new server-interconnection structure. We observe that the commodity server machines used in today's data centers usually come with two built-in Ethernet ports, one used for the network connection and the other left for backup. We believe that if both ports are actively used for network connections, we can build a scalable, cost-effective interconnection structure without either expensive higher-level large switches or any additional hardware on the servers. We propose FiConn, a novel server-interconnection network structure that utilizes the dual-port configuration already present in most commodity data-center server machines. FiConn is highly scalable, encompassing hundreds of thousands of servers with low diameter and high bisection width, because the total number of servers it can support is not limited by the number of server ports or switch ports. It is cost-effective because it requires fewer switches and links than other recently proposed data-center structures. We have developed a low-overhead traffic-aware routing mechanism that improves effective link utilization based on dynamic traffic state, and we have also proposed how to deploy FiConn incrementally.
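The claimed scalability follows from FiConn's recursive construction: a level-0 FiConn is n servers on one n-port switch, and a level-k FiConn combines b/2 + 1 level-(k-1) FiConns, where b is the number of servers whose backup port is still free, using half of those free ports for level-k links. The sketch below is our own illustration of that recursion; treat it as a scaling estimate, not an implementation of the structure.

```python
def ficonn_size(n, k):
    """Estimated number of servers in a level-k FiConn built from
    n-port commodity switches, following the recursive construction:
    N = total servers, b = servers whose backup port is still free."""
    N, b = n, n  # level 0: n servers on one n-port switch, all backup ports free
    for _ in range(k):
        g = b // 2 + 1            # level-(k-1) units combined at this level
        N, b = g * N, g * (b // 2)  # half of the free backup ports are consumed
    return N

print(ficonn_size(4, 1))   # 12
print(ficonn_size(48, 2))  # 361200 -> hundreds of thousands with 48-port switches
```

The doubly exponential growth in k is what lets two levels of 48-port switches reach well past 300,000 servers without any high-end switching hardware.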

SYSTEM SPECIFICATION:

HARDWARE SPECIFICATION:

Processor  : Intel Pentium-IV
Speed      : 1.1 GHz
RAM        : 512 MB
Hard Disk  : 40 GB
General    : Keyboard, Monitor, Mouse

SOFTWARE SPECIFICATION:

Operating System : Windows XP
Software         : JAVA (JDK 1.6)

CONCLUSION: We propose FiConn, a novel server-interconnection network structure that utilizes the dual-port configuration already present in most commodity data-center server machines. It is a highly scalable structure because the total number of servers it can support is not limited by the number of server ports or switch ports. It is cost-effective because it requires fewer switches and links than other recently proposed data-center structures. We have designed traffic-aware routing in FiConn to make better use of link capacities according to traffic states, and we have proposed solutions to increase the bisection width of incomplete FiConns during incremental deployment.

