
Distributed File System Review

Schubert Zhang May 2008

File Systems
- Google File System (GFS)
- Kosmos File System (KFS)
- Hadoop Distributed File System (HDFS)
- GlusterFS
- Red Hat Global File System
- Lustre
- Summary

Google File System (GFS)



- A file system oriented toward specific applications:
  - Search engines
  - Grid computing applications
  - Data mining applications
  - Other applications that generate and process large data sets
- Workload characteristics
  - Performance, scalability, reliability, and availability requirements
  - Large distributed data-intensive applications
  - Large/huge files (tens of MB to tens of GB in size)
  - Primarily write-once/read-many; appending rather than overwriting
  - Mostly sequential access
  - Emphasis on high sustained throughput of data access rather than low latency
- System requirements
  - Inexpensive commodity hardware that may often fail
  - Adequate memory for the master server
  - GE (Gigabit Ethernet) network interface
- Architecture
  - Usually a client and a chunkserver run on the same machine
  - Fixed-size chunks (usually 64 MB), so chunk metadata fits in the master's memory (see the estimate below)
  - Files replicated at chunk granularity (usually 3 replicas)
  - A single master, multiple chunkservers, accessed by multiple clients
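A rough back-of-the-envelope sketch in Java of why 64 MB chunks keep master metadata small, assuming ~64 bytes of in-memory metadata per chunk (the figure reported in the GFS paper) and 1 PB of file data as an arbitrary example:

```java
// Rough estimate of master memory needed for chunk metadata.
// Assumes ~64 bytes of in-memory metadata per 64 MB chunk (figure from the GFS paper);
// real numbers vary with namespace size and replication bookkeeping.
public class MasterMemoryEstimate {
    public static void main(String[] args) {
        long chunkSize = 64L * 1024 * 1024;                   // 64 MB chunks
        long bytesPerChunkMeta = 64;                          // per-chunk metadata in master RAM
        long totalData = 1024L * 1024 * 1024 * 1024 * 1024;   // 1 PB of file data (example)

        long chunks = totalData / chunkSize;                  // ~16 million chunks
        long metaBytes = chunks * bytesPerChunkMeta;          // ~1 GB of master memory

        System.out.printf("chunks=%d, metadata ~= %d MB%n", chunks, metaBytes / (1024 * 1024));
    }
}
```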

Google File System (GFS)


- Single master server (metadata server)
  - Namespaces (files and chunks)
  - File access control info
  - Mapping from files to chunks
  - Locations of chunk replicas
  - Metadata kept in memory (see the sketch below); namespaces and mappings persisted on disk via checkpoints and an operation log
  - Namespace management and locking
  - Metadata HA and fault tolerance
- Replica placement: rack-aware placement policy
- Chunk creation, re-replication, rebalancing
- Chunkserver management (heartbeat and control)
- Chunk lease management
- Garbage collection
- Minimize the master's involvement in all operations
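A minimal sketch of the kinds of in-memory tables the master keeps; the class, field, and method names are hypothetical, not GFS's actual code:

```java
import java.util.*;

// Hypothetical sketch of the GFS master's in-memory metadata tables.
// Namespace and file->chunk mapping are also persisted via operation log + checkpoints;
// chunk locations are NOT persisted -- they are rebuilt from chunkserver reports.
class MasterMetadata {
    // Full path name -> ordered list of chunk handles for that file.
    final Map<String, List<Long>> fileToChunks = new HashMap<>();
    // Chunk handle -> chunkservers currently holding a replica (typically 3).
    final Map<Long, Set<String>> chunkLocations = new HashMap<>();
    // Chunk handle -> current lease holder (primary replica), if any.
    final Map<Long, String> leaseHolder = new HashMap<>();

    // Heartbeat handling: a chunkserver reports the chunks it stores.
    void onChunkReport(String chunkserver, List<Long> chunks) {
        for (long handle : chunks) {
            chunkLocations.computeIfAbsent(handle, h -> new HashSet<>()).add(chunkserver);
        }
    }
}
```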

Google File System (GFS)


- Large number of chunkservers
  - No cache for file data
  - Lazy chunk allocation
  - Leases and the data replication chain
  - Block checksums
  - Chunk state reports
  - P2P replication: replication pipelining and cloning
- Large number of clients
  - Linked as a library into each application
  - Interact with the master only for metadata operations
  - Data-bearing communication goes directly to the chunkservers
  - No cache for file data, but metadata is cached
  - Translate an operation's byte offset into a chunk index (see the sketch below)
  - Applications/clients must work around the limitations of the GFS implementation
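A hypothetical sketch of the client read path described above: compute the chunk index from the byte offset, consult the master (with caching) for the chunk's locations, then read the data directly from a chunkserver. The RPC interfaces here are invented for illustration:

```java
import java.util.*;

// Hypothetical GFS-style client read path (not Google's code).
interface MasterRpc {
    ChunkInfo lookup(String path, long chunkIndex);              // metadata only
}
interface ChunkserverRpc {
    byte[] read(long chunkHandle, long offsetInChunk, int length);  // data path
}
record ChunkInfo(long chunkHandle, List<ChunkserverRpc> replicas) {}

class GfsClient {
    static final long CHUNK_SIZE = 64L * 1024 * 1024;
    private final MasterRpc master;
    private final Map<String, ChunkInfo> locationCache = new HashMap<>();

    GfsClient(MasterRpc master) { this.master = master; }

    // Single-chunk read; a real client would split reads that straddle a chunk boundary.
    byte[] read(String path, long fileOffset, int length) {
        long chunkIndex = fileOffset / CHUNK_SIZE;     // byte offset -> chunk index
        long offsetInChunk = fileOffset % CHUNK_SIZE;
        ChunkInfo info = locationCache.computeIfAbsent(path + "#" + chunkIndex,
                key -> master.lookup(path, chunkIndex));   // metadata from master, cached
        // Data path: go straight to a chunkserver (a real client picks the closest replica).
        return info.replicas().get(0).read(info.chunkHandle(), offsetInChunk, length);
    }
}
```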

Google File System (GFS)


Cluster scale and performance
- Thousands of disks on over a thousand machines
- Hundreds of TB or several PB of storage
- Hundreds or thousands of clients

Limitations
- No standard API such as POSIX; file system operations are not integrated
- Some performance issues depend on the application and client implementation
- GFS does not guarantee that all replicas are byte-wise identical; it only guarantees that the data is written at least once as an atomic unit
- Record append is atomic but only "at least once": GFS may insert padding or duplicate records in between
- Applications/clients may read a stale chunk replica; readers must deal with it (see the sketch below)
- If an application write is large or straddles a chunk boundary, it may be interleaved with fragments from other clients
- Requires tight cooperation from applications
- No support for hard links or soft links
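Because record append is at-least-once and readers may also hit stale replicas, applications typically make records self-validating and self-identifying. A hypothetical reader-side filter (the record format and helper names are invented for illustration):

```java
import java.util.*;

// Hypothetical reader-side handling of GFS record-append semantics:
// each record carries a checksum (to skip padding/garbage) and a unique id
// (to drop duplicates that GFS may have appended more than once).
class RecordReader {
    private final Set<Long> seenIds = new HashSet<>();

    // Returns only valid, not-yet-seen records.
    List<byte[]> filter(List<Record> rawRecords) {
        List<byte[]> out = new ArrayList<>();
        for (Record r : rawRecords) {
            if (!r.checksumOk()) continue;       // padding or partial write: skip
            if (!seenIds.add(r.id())) continue;  // duplicate append: skip
            out.add(r.payload());
        }
        return out;
    }

    record Record(long id, byte[] payload, long storedChecksum) {
        boolean checksumOk() {
            java.util.zip.CRC32 crc = new java.util.zip.CRC32();
            crc.update(payload);
            return crc.getValue() == storedChecksum;
        }
    }
}
```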

Google File System (GFS)


Needs further components to achieve completeness:
- Chubby (distributed lock service and consistency)
- BigTable (a distributed storage system for structured data)
- etc.

Kosmos File System (KFS)


An open-source implementation of the Google File System.
[Architecture diagram: many distributed-computing clients (application + KFS client library) issue FS operations; location and signaling traffic goes to the KFS meta-data server (with HA), while block data streams flow directly between clients and the many KFS block servers, each storing blocks on its local Linux FS.]
Kosmos File System (KFS)


- Architecture
  - Meta-data server = Google FS master
  - Block server = Google FS chunkserver
  - Client library = Google FS client
- Workload characteristics
  - Primarily write-once/read-many workloads
  - A few million large files, each on the order of a few tens of MB to a few tens of GB in size
  - Mostly sequential access
- Implemented in C++
- Client API support for C++, Java, Python

Kosmos File System (KFS)


Valued stuff
- Client write cache (Google said this is not necessary)
- FUSE support: KFS exports a POSIX file interface; Hadoop does not (GFS does not, either)
- Monitoring tools and shell
- Deployment scripts
- Job placement and local-read optimization
- Can be integrated with Hadoop: replace HDFS and keep Hadoop MapReduce (patch in Hadoop JIRA-1963); see the configuration sketch below
- KFS supports atomic append, HDFS does not
- KFS supports rebalancing, HDFS does not
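A sketch of what the Hadoop-side configuration looks like when KFS replaces HDFS; the property names and adapter class follow the KFS contrib module from HADOOP-1963 but are version-dependent, so treat them as assumptions to verify, and the host/port are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: pointing a Hadoop job at KFS instead of HDFS.
// NOTE: property names are assumptions based on the HADOOP-1963 KFS contrib module;
// verify them against the Hadoop/KFS release actually in use.
public class KfsOnHadoop {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "kfs://metaserver.example.com:20000"); // hypothetical host/port
        conf.set("fs.kfs.impl", "org.apache.hadoop.fs.kfs.KosmosFileSystem");
        conf.set("fs.kfs.metaServerHost", "metaserver.example.com");
        conf.set("fs.kfs.metaServerPort", "20000");

        FileSystem fs = FileSystem.get(conf);            // resolves to the KFS adapter
        System.out.println(fs.exists(new Path("/")));    // simple sanity check
    }
}
```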

Status and Limitations


- Not well implemented yet; no real users known
- We failed to build a usable program from it
- Shares the limitations of the Google FS design

Kosmos File System (KFS)


Client supports FUSE
[Client-side FUSE diagram: an application (e.g. the shell command ls) issues FS operations through glibc to the VFS; the FUSE kernel module forwards them via libfuse to the KFS client library, which talks to the KFS meta-data server and block servers. KFS is mounted at /mnt/kfs as a FUSE file system alongside the local ext3 disks.]
Hadoop Distributed File System (HDFS)


- An open-source implementation of the Google File System
- HDFS relaxes a few POSIX requirements to enable streaming access to file system data
- Grew out of the infrastructure for Apache Nutch
- "Moving computation is cheaper than moving data"
- Portability across heterogeneous hardware and software platforms; implemented in Java
  - Java client API (see the example below)
  - C language wrapper for the Java API
  - HTTP browser interface
- Architecture (master/slave)
  - Namenode = Google FS master server
  - Datanodes = Google FS chunkservers
  - Clients = Google FS clients
  - Blocks = Google FS chunks
- Namenode safe mode
- Persistence of file system metadata, like Google FS (periodic checkpoints not yet supported)
- Communication protocols: RPCs
- Staging: client-side data buffering (similar to a POSIX client implementation)
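A minimal example of the Java client API: write a file to HDFS and read it back. The namenode URI and file path are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal HDFS client example: create a file, write a line, read it back.
// The namenode URI and the file path are placeholders.
public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/hello.txt");

        try (FSDataOutputStream out = fs.create(file, true)) {  // overwrite if it exists
            out.writeUTF("hello, HDFS");
        }
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }
        fs.close();
    }
}
```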

Hadoop Distributed File System (HDFS)



- Status and limitations
  - Shares the limitations of the Google FS design
  - Does not yet support appending writes to files
  - Does not yet implement user quotas or access permissions
  - Replica placement policy not completed
  - Does not yet support periodic checkpoints of metadata
  - Does not yet support re-balancing
  - Does not yet support snapshots
- Who's using HDFS
  - Facebook (implements a read-only FUSE layer over HDFS, 300 nodes)
  - Yahoo! (1000 nodes)
  - Mostly non-commercial usage (log analysis, search, etc.)

GlusterFS
Gluster targets specific tasks such as HPC clustering, storage clustering, enterprise provisioning, and database clustering. Its main packages are GlusterFS and GlusterHPC.

GlusterFS

[Architecture diagram: on the client side, an application (e.g. shell: ls) goes through the VFS, the FUSE kernel module (fuse.ko), libfuse, and the GlusterFS client, which provides a POSIX interface; on the server side, a storage server cluster exports namespace bricks (replicated with AFR) and file data bricks (AFR, Stripe, etc.).]
GlusterFS
Architecture
- Different from the GoogleFS family: no meta-data server, no master server
- A user-space logical volume management approach
- Server node machines export disk storage as bricks
- Brick nodes store the distributed files in an underlying Linux file system
- File namespaces are also stored on storage bricks, just like the file data bricks, except that the namespace copies of the files have zero size (see the toy sketch below)
- Bricks (file data or namespace) support replication
- NFS-like disk layout
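A toy Java illustration (not GlusterFS code, which is written in C) of placement without a metadata server: a client-side scheduler picks a data brick for each new file, while the namespace brick keeps only a zero-size marker so listings work without a central master:

```java
import java.util.*;

// Toy illustration of GlusterFS-style placement without a metadata server.
// A client-side scheduler picks a data brick for each new file (round-robin here;
// GlusterFS offers several schedulers), and a zero-size marker is recorded for
// the namespace so directory listings need no central master.
class UnifyScheduler {
    private final List<String> dataBricks;
    private final Map<String, String> namespace = new HashMap<>(); // path -> chosen brick
    private int next = 0;

    UnifyScheduler(List<String> dataBricks) { this.dataBricks = dataBricks; }

    String createFile(String path) {
        String brick = dataBricks.get(next++ % dataBricks.size()); // round-robin placement
        namespace.put(path, brick);   // namespace entry (zero-size marker in real GlusterFS)
        return brick;                 // file data will be written to this brick
    }

    public static void main(String[] args) {
        UnifyScheduler s = new UnifyScheduler(List.of("brick1", "brick2", "brick3"));
        System.out.println(s.createFile("/data/a.log")); // brick1
        System.out.println(s.createFile("/data/b.log")); // brick2
    }
}
```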

Interconnect
- Infiniband RDMA (high throughput)
- TCP/IP

Features
- FUSE support, complete POSIX interface
- AFR (mirroring)
- Self-heal
- Stripe (note: not well implemented)

GlusterFS
Valued stuff
- Easy to set up for a moderate-size cluster
- FUSE and POSIX support
- Scheduler modules for load balancing
- Flexible performance tuning
- Design: stackable modules (translators) loaded at run time as .so files; not tied to particular I/O profiles, hardware, or OS (see the toy sketch below)
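A toy Java rendering of the stackable-translator pattern: each translator wraps a child and adds one behavior; GlusterFS builds such chains from C shared objects loaded at run time, so this is only an illustration of the idea:

```java
// Toy version of the stackable-translator idea: each translator wraps a child
// and adds one behavior; GlusterFS builds such chains from run-time loaded .so modules.
interface Translator {
    byte[] read(String path, long offset, int length);
}

// Bottom of the stack: the storage translator talking to the brick's local FS (stubbed here).
class StorageTranslator implements Translator {
    public byte[] read(String path, long offset, int length) {
        return new byte[length]; // placeholder for a real read from the local file system
    }
}

// A caching translator stacked on top of any other translator.
class ReadCacheTranslator implements Translator {
    private final Translator child;
    private final java.util.Map<String, byte[]> cache = new java.util.HashMap<>();

    ReadCacheTranslator(Translator child) { this.child = child; }

    public byte[] read(String path, long offset, int length) {
        String key = path + "@" + offset + ":" + length;
        return cache.computeIfAbsent(key, k -> child.read(path, offset, length));
    }
}

class TranslatorStackDemo {
    public static void main(String[] args) {
        Translator stack = new ReadCacheTranslator(new StorageTranslator()); // cache -> storage
        System.out.println(stack.read("/vol/file", 0, 8).length);            // prints 8
    }
}
```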

Well tested with several representative benchmarks; performance and simplicity are better than Lustre.

Limitations
- Lacks a global management function; there is no master
- The AFR function depends on static configuration and lacks automation and flexibility
- Currently, new bricks cannot be added automatically
- If a master component were added, it would be a better cluster FS

Who's using GlusterFS


Indian Institute of Technology Kanpur: a 24-brick GlusterFS deployment over Infiniband. Other small cluster projects.

Red Hat Global File System


- Part of the Red Hat Cluster Suite
- It is a shared-storage solution, i.e. the traditional approach
- Depends on Red Hat Cluster Suite components: GLVM, DLM, GNBD, over SAN/NAS/DAS
- Configuration and management: Conga (luci and ricci)

Red Hat Global File System


Deployment options
- GFS with a SAN (superior performance and scalability)
- GFS and GNBD with a SAN (performance, scalability, moderate price)
- GFS and GNBD with directly connected storage (economy and performance)

Red Hat Global File System


- GFS functions
  - Making a file system
  - Mounting a file system
  - Unmounting a file system
  - GFS quota management
  - Growing a file system
  - Adding journals to a file system
  - Direct I/O
  - Data journaling
  - Configuring atime updates
  - Suspending activity on a file system
  - Displaying extended GFS information and statistics
  - Repairing a file system
  - Context-Dependent Path Names (CDPN)
- Cluster volume management
  - Aggregates multiple physical volumes into a single logical device across all nodes in a cluster
  - Provides a logical view of the storage to GFS
- Lock management
- Cluster management, fencing, and recovery
- Cluster configuration management

Red Hat Global File System


Status
- It is a shared-storage solution, which is far from our target
- A little too complicated and not easy to manage
- High performance and scalability require high-end storage hardware and networking (e.g. SAN)
- The implementation is not simple

Lustre
- From Sun Microsystems
- Targets 10,000s of nodes, PBs of storage, 100 GB/sec throughput
- Lustre is kernel software that interacts with storage devices; a deployment must be correctly installed, configured, and administered to reduce the risk of security issues or data loss
- Uses Object-Based Storage Devices (OSDs) to manage entire file objects (inodes) instead of blocks
- Components
  - Meta Data Servers (MDSs)
  - Object Storage Targets (OSTs)
  - Lustre clients

Lustre is a little too complex for our use, but it appears to be a proven and reliable file system.

Lustre OSD Architecture

Summary

Shared, Cluster, Parallel, Cloud

Summary
- Technology families: cluster volume managers, SAN file systems, cluster file systems, Parallel NFS (pNFS), Object-based Storage Devices (OSD), global/parallel file systems
- Where the distributed/cluster/parallel function lives:
  - Volume level (block based)
  - File or file-system level (file, block, or object based, for OSD)
  - Database or application level
- Located either directly at the storage or in the network

Summary: Traditional/Historical
Block level: Volume Management
- EMC PowerPath (PPVM)
- HP Shared LVM
- IBM LVM
- MACROIMPACT SAN CVM
- Red Hat LVM
- SANBOLIC LaScala
- VERITAS

File/File System level:


- Local disk FS
- Distributed: NAS, Samba, AFP, DFS, AFS, RFS, Coda
- SAN FS

App/DB level: RDBMS, Email system

Advanced/Recent: File/FS level


Distributed: WAFS (NAS extension), NFM, GlobalFS, SANFS, ClusterFS
