Basic Concepts Quickstart
Table of Contents
NetApp Products
Storage Terminology
NetApp Terminology
Volumes
Snapshots
Qtrees
LUNs
http://now.netapp.com/
+32 2 416 32 90
Uptime ServiceDesk
+32 3 451 23 74
servicedesk@uptime.be
NetApp Products
Filers (and gateway products), maximum capacity and spindle count:
FAS2020: 24 TB, 40 spindles
FAS2050: 69 TB, 104 spindles
FAS3020: 84 TB, 168 spindles
FAS3040: 126 TB, 252 spindles
FAS3050: 168 TB, 336 spindles
FAS3070: 252 TB, 504 spindles
FAS6030: 420 TB, 840 spindles
FAS6040: 840 TB, 840 spindles
FAS6070: 504 TB, 1008 spindles
Data ONTAP
One architecture
One application interface
One management interface
Total interoperability
Learn one; know them all
IBM N series equivalents: N7000, N5000, N3000 / N3700 (= FAS270).
Product history: FAS200 series, FAS800 series, FAS900 series ... present.
Table of Contents
NetApp Products
Storage Terminology
Volumes
Snapshots
Qtrees
LUNs
Storage Terminology
SAN vs. NAS
NAS: clients access FILEs over a TCP/IP network, using NFS (Unix) or CIFS (Windows).
SAN: hosts (servers) access LUNs as SCSI block devices, over SCSI, FCP or iSCSI networks.
(Diagram: one NetApp FAS serves departmental SAN (iSCSI over dedicated Ethernet), SAN (block) over Fibre Channel, and enterprise & departmental NAS (file) over the corporate LAN.)
NAS = Network-Attached Storage: client/server access to storage exposed as shares (CIFS) and exports (NFS).
SAN terms: target (the storage side), initiator (the host side), LUN, HBA (host bus adapter), fabric (the FC switch network, typically duplicated), zoning (controls which initiators see which targets), multipathing (MPIO) across both fabrics.
NetApp Terminology
Some NetApp-specific Terms ...
Data ONTAP = the operating system on Network Appliance filers and NearStores; it borrows ideas from Unix (BSD), e.g. the /etc/ directory on vol0, e.g. inodes.
RC = release candidate
ONTAP 7G
Head/filer
(Disk) shelf
The motherboard and the first disk shelf are integrated (the disk shelf can be turned into a filer and vice versa).
Upgrades:
Disk firmware: non-disruptive.
Shelf firmware: non-disruptive for FC-AL shelves, disruptive for (S)ATA shelves.
Data ONTAP: requires a reboot.
When to upgrade?
Management interfaces: FilerView (http(s)), console cable, telnet, Windows MMC (Computer Management snap-in), (SNMP, NDMP).
What? Why? How?
Now: (diagram) an aggregate containing FlexVol1 (File1, File2) and FlexVol2 (a LUN), ...
Data ONTAP is Unix-based, hence terms like inodes, but it allows NTFS permissions (NTFS security style).
Formatting disks? No: zeroing disks.
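A hedged sketch of the related 7-mode console commands:

    vol status -s      # list spare disks and whether they are already zeroed
    disk zero spares   # zero all spare disks in advance, so adding them later is instant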
Aggregate
Data disks
Spare disks
Parity disks
(Broken disks)
(Partner disks)
RAID 1 (mirroring): the controller writes every block to two data disks.
RAID 5: parity rotates across all disks in the group; each stripe holds data blocks plus one parity block, so no single disk carries all the parity.
This is an aggregate: data disks plus dedicated parity disk(s) (RAID 4); adding a double-parity disk (RAID-DP) improves reliability roughly 10,000x.
RAID 4 advantages: new (zeroed) disks can be added to a RAID group without recalculating parity (the parity doesn't change), and writes are collected in NVRAM and written out as full stripes, so the dedicated parity disk is not a bottleneck.
(Figure: two sysstat console captures with per-interval columns for total CPU %, NFS / CIFS / HTTP / DAFS / FCP / iSCSI ops, Net kB/s in/out, Disk kB/s read/write, CP type and Disk util %. Most protocol counters are zero in the samples, and disk utilization repeatedly peaks at 96-100 % while consistency points are written.)
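The capture above comes from the sysstat command; a sketch of how to reproduce it on a 7-mode console (the exact columns vary by release and flags):

    sysstat -x 1   # extended view, 1-second interval: protocol ops, net/disk kB/s, CP type, disk util
    sysstat -u 1   # utilization-oriented view; Ctrl-C stops either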
(Diagram: an aggregate is built from RAID groups; each RAID group holds one parity disk plus data disks, and spare disks stand outside the aggregate.)
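A hedged way to inspect this layout on a live filer (7-mode commands):

    sysconfig -r    # RAID layout per aggregate: RAID groups, parity/data disks, spares
    aggr status -r  # equivalent per-aggregate view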
Shelf Modules
Disk Shelves
DS14Mk2-FC
DS14Mk2-AT
DS14Mk4-FC
DS12-ESAS
DS20-ESAS
GBIC
Why 2x modules per shelf? Redundancy, or clustered systems.
Each module (ESH or ESH2) carries shelf In/Out ports and the shelf ID selector; shelves can run with 1x or 2x modules.
Data Cables
A DB9 (serial or console) cable is needed for the connection to a controller. It is required during initial setup, when there is no network connection to the filer yet.
(Disk) Loop
A loop is one FC-AL chain of shelves behind an adapter port. Example: loop 0a carries shelf ID 1 (disks 0a.16-0a.29), shelf ID 2 (disks 0a.32-0a.45) and shelf ID 3 (disks 0a.48-0a.61); a second loop 0c carries shelf ID 1 (disks 0c.16-0c.29) and shelf ID 2 (disks 0c.32-0c.45). The disk ID encodes the shelf: shelf ID x 16 + bay number (bays 0-13), so shelf 2 holds disks .32-.45.
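To see which disks sit on which loop, two 7-mode commands (output details vary by release):

    fcadmin device_map    # maps adapters/loops to shelf IDs and drive bays
    storage show disk -p  # lists each disk with its primary and secondary path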
(Photos: shelf rear views. The FAS250 module is a 'shrunken head' filer module inside the shelf; redundant power supply units; ESH module (autotermination) vs. the modern ESH2 module; backplane speed switch (1/2/4 Gbps); shelf ID selector.)
(Photos: FAS270 rear. Fibre (FC) connection for tape backup; FC connection for SAN and to additional disk shelves; 2x Gigabit NICs, which can be teamed into a VIF; connection for the serial console cable; shelf ID selector. With a second module installed it becomes a FAS270c (cluster).)
(Photo: mid-range filer rear. 4x modular I/O expansion slots; SCSI connection for tape backup: 0e (not on all models); 1x RLM (Remote LAN Module) NIC; 1x 10/100 Mbps NIC; connection for the serial console cable.)
(Diagram: the same dual-loop layout as above: loop 0a with shelf IDs 1-3 (disks 0a.16-0a.61), loop 0c with shelf IDs 1-2 (disks 0c.16-0c.45). Compare with the console output below.)
Excerpt of console output (likely from sysconfig -a):

...
slot 0: FC Host Adapter 0c (Dual-channel, QLogic 2322 rev. 3, 64-bit, L-port, <UP>)
        Firmware rev:    3.3.10
        Host Loop Id:    7
        FC Node Name:    5:00a:098000:006b3b
        Cacheline size:  16
        FC Packet size:  2048
        SRAM parity:     Yes
        External GBIC:   No
        Link Data Rate:  2 Gbit
        16: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA3TMA)
        17: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA3J7A)
        18: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA5UKA)
        19: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5W9UBDA)
        20: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA5NJA)
        21: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA2USA)
        22: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5W9U72A)
        23: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5W9TRMA)
        24: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA30AA)
        25: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5W8475A)
        26: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA006A)
        27: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA5SLA)
        28: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5W9GDEA)
        29: NETAPP X274_HPYTA146F10 NA02 136.0GB 520B/sect (V5WA394A)
        Shelf 1: ESH2  Firmware rev. ESH A: 14  ESH B: 14
I/O base 0xee00, size 0x100
memory mapped I/O base 0xe1240000, size 0x1000
slot 0: FC Host Adapter 0d (Dual-channel, QLogic 2322 rev. 3, 64-bit, L-port, <OFFLINE (hard)>)
...
(Diagram: two loops, 0a and 0c, each cabled to shelf IDs 1-3: disks 0a.16-0a.29 / 0c.16-0c.29, 0a.32-0a.45 / 0c.32-0c.45, 0a.48-0a.61 / 0c.48-0c.61.)
Cluster Interconnect (heartbeat) cables
(Diagram: two clustered heads; every shelf is cabled to both heads via loops 0a and 0c, so its disks are reachable as 0a/0c.16-29, 0a/0c.32-45, 0a/0c.48-61.)
The filer that connects to the TOP module of a shelf controls the disks in that shelf under normal (i.e. non-failover) circumstances.
Different scenarios are possible, e.g. 0a/0b & 0c/0d in FC SAN configs!
Cluster Interconnect (heartbeat) cables
(Diagram: clustered heads using four loops each, 0a/0b/0c/0d; total capacity = 3 + 2 shelves.)
Syslog translator:
https://now.netapp.com/eservice/ems
Disk LED commands (advanced privilege level):
led_on <disk-id>
blink_off <disk-id>
led_off <disk-id>
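A minimal usage sketch, assuming a 7-mode console (the disk name 0a.16 is illustrative):

    priv set advanced   # the LED commands live in the advanced privilege level
    led_on 0a.16        # light the LED on disk 0a.16 so it can be found in the shelf
    led_off 0a.16
    priv set admin      # drop back to the normal privilege level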
Volumes
A volume holds two types of objects:
Files (NAS)
LUNs (SAN)
Volumes (cont.)
Why FlexVols? Maximize storage utilization and performance with virtualization.
(Diagram: with traditional volumes, each volume strands lost space at around 40% utilization; FlexVols reclaim that space and reach around 80% utilization.)
Volumes (cont.)
Why FlexVols? (cont.)
Regular volumes: performance is limited by the number of disks the volume owns, and a hot volume can't be helped by disks in other volumes.
FlexVol volumes: every FlexVol can draw on all the spindles in its aggregate (see the sketch below).
Snapshots
Notes:
Can be scheduled (a schedule sketch follows).
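A hedged example of the 7-mode schedule syntax (vol1 and the numbers are illustrative):

    snap sched vol1 0 2 6@8,12,16,20   # keep 0 weekly, 2 nightly and 6 hourly snapshots (taken at 08/12/16/20h)
    snap list vol1                     # list snapshots and the space they occupy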
Snapshots (cont.)
A Bird's-Eye View of Snapshots & SnapRestore
(Diagram: the active file system (file a, file b, ..., file z) points at disk data blocks; the remaining blocks are free.)
Snapshots (cont.)
A Bird's-Eye View of Snapshots & SnapRestore
(Diagram: a snapshot adds a second set of file entries (file a ... file z) that point at exactly the same disk blocks.)
Taking a snapshot = a very fast operation, no space overhead.
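The matching console command, as a hedged sketch (the snapshot name is illustrative):

    snap create vol1 before_upgrade   # manual snapshot of vol1: instant, no space consumed up front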
Snapshots (cont.)
A Bird's-Eye View of Snapshots & SnapRestore
(Diagram: file b is modified: the new data goes to a fresh block C'; the active file system now points at C', while the snapshot keeps pointing at the old block, which therefore stays allocated.)
Snapshots (cont.)
A Bird's-Eye View of Snapshots & SnapRestore
(Diagram: snap restore of a single file: the file's active entry is repointed back to the snapshot's blocks.)
Snapshots (cont.)
A Bird's-Eye View of Snapshots & SnapRestore
(Diagram: snap restore of a volume (complete file system): the whole active file system reverts to the snapshot's block pointers.)
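The corresponding 7-mode commands, as a sketch (snapshot and file names are illustrative):

    snap restore -t file -s before_upgrade /vol/vol1/file_b   # revert a single file to the snapshot
    snap restore -t vol -s before_upgrade vol1                # revert the whole volume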
Snapshots (cont.)
Accessing Snapshots from Clients
NFS clients: the .snapshot directory
CIFS clients: the ~snapshot directory
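From the client side, as a sketch (mount point and share name are illustrative):

    ls /mnt/vol1/.snapshot           # NFS: browse a volume's snapshots
    dir \\filer\vol1share\~snapshot  # CIFS (Windows cmd): the same snapshots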
Snapshots (cont.)
The Problem of Consistent Snapshots
NAS clients modify files; the NetApp filer manages the WAFL metadata and buffers in its own memory.
(Diagram: clients -> filer memory (WAFL metadata, buffers) -> FlexVol holding directories and files.)
Snapshots (cont.)
The Problem of Consistent Snapshots (cont.)
During snapshot creation, the necessary buffers are flushed to disk and user I/O to the volume is briefly suspended, so the snapshot is consistent.
(Diagram: clients -> WAFL metadata and buffers -> directories and files.)
Snapshots (cont.)
The Problem of Consistent Snapshots (cont.)
SAN complicates things! The server keeps its own filesystem metadata and buffers above the LUN, and the filer cannot see or flush them, so a filer-side snapshot alone may capture an inconsistent server filesystem.
(Diagram: server (filesystem metadata) -> filer memory (WAFL metadata, buffers) -> FlexVol holding files and a LUN.)
The answer: SnapDrive!
Snapshots (cont.)
The Problem of Consistent Snapshots (cont.)
SnapDrive triggers the snapshot creation from the server side.
(Diagram: server (SnapDrive, filesystem buffers & metadata) -> filer (WAFL metadata, buffers) -> FlexVol holding a LUN.)
Snapshots (cont.)
The Problem of Consistent Snapshots (cont.)
Databases add another layer: the database keeps its own buffers & metadata above the server filesystem, which sits above the LUN.
(Diagram: server (database buffers & metadata, filesystem metadata) -> filer memory (WAFL metadata, buffers) -> FlexVol holding a LUN.)
The answer: SnapManager!
Snapshots (cont.)
The Problem of Consistent Snapshots (cont.)
When backing up a database via NetApp snapshots, SnapManager performs these steps:
1. SnapManager talks to the database(s) and puts them in backup mode.
2. SnapManager talks to SnapDrive to take snapshots of the LUNs containing the database(s) and transaction logfiles.
3. SnapDrive talks to NTFS to suspend server I/O during snapshot creation.
4. SnapDrive talks to the filer to take snapshots of the affected volumes.
5. The filer takes consistent snapshots of the affected volumes.
SnapManager packages all this in an application with a nice management GUI and takes care of snapshot management (e.g. snapshot renaming & deleting).
Qtrees
A qtree logically partitions a volume; each qtree can have its own security style, quotas and oplock setting (a command sketch follows).
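A hedged sketch of common qtree commands on a 7-mode console (names are illustrative):

    qtree create /vol/vol1/projects          # create a qtree inside vol1
    qtree security /vol/vol1/projects ntfs   # per-qtree security style: unix | ntfs | mixed
    qtree status vol1                        # list qtrees, security styles and oplock settings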
LUNs
A LUN is a special file inside a volume or qtree that is presented to SAN hosts as a block device (a command sketch follows).
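A hedged sketch of carving out and mapping a LUN on a 7-mode console (names, size and IQN are illustrative):

    lun create -s 100g -t windows /vol/vol1/lun0                        # 100 GB LUN with Windows geometry
    igroup create -i -t windows web01 iqn.1991-05.com.microsoft:web01   # iSCSI initiator group
    lun map /vol/vol1/lun0 web01 0                                      # present the LUN to the host as LUN ID 0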
Network Configuration
Single-mode VIF: one link active at a time, the others stand by for failover.
Multi-mode VIF: all links active simultaneously; requires link-aggregation support on the switch (see the sketch below).
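A hedged sketch of the 7-mode vif syntax (interface names and the address are illustrative):

    vif create single vif0 e0a e0b        # failover-only VIF
    vif create multi vif1 -b ip e0c e0d   # all links active, IP-based load balancing
    ifconfig vif1 192.168.1.10 netmask 255.255.255.0 up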
Replication Technologies
SnapMirror, SnapVault (and OSSV), SyncMirror
Name: SnapMirror
Type: Async mirror (> 1 minute)
Protocol: IP (WAN/LAN)
Mode: Active/Active
Filer type: mix of models
Distance: no limit
Solutions: long-distance DR, data consolidation

Name: SnapVault
Type: Async mirror (> 1 hour)
Protocol: IP (WAN/LAN)
Mode: Active/Active
Filer type: mix of models; SV for Open Systems (Win 2K, NT, Unix)
Distance: no limit
Solutions: disk-to-disk backup/restore, HSM

Name: SyncMirror
Type: Synchronous
Protocol: Fibre Channel or DWDM
Mode: Active/Active
Filer type: clustered filers, same models
Distance: max. 35 km
Solutions: real-time replication of data
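A hedged sketch of setting up async SnapMirror in 7-mode (filer, aggregate and volume names are illustrative):

    vol create vol1_mirror aggr1 200g                        # on the destination filer
    vol restrict vol1_mirror                                 # destination must be restricted before the baseline
    snapmirror initialize -S filerA:vol1 filerB:vol1_mirror  # one-time baseline transfer
    snapmirror status                                        # monitor the relationship

    # /etc/snapmirror.conf on the destination: incremental update every hour, on the hour
    filerA:vol1 filerB:vol1_mirror - 0 * * *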
SnapVault (Backup/Restore)
Overview
SnapMirror replicates: volumes, qtrees.
SnapVault replicates: qtrees (OSSV, Open Systems SnapVault, replicates directories from ordinary Windows/Unix/Linux servers).
SyncMirror mirrors: complete aggregates, synchronously.
(Diagram: servers (Windows, Unix, Linux) are backed up via OSSV, and filer volumes (with qtrees & snapshots) via SnapVault, to a 2nd-tier NetApp filer; volumes (with snapshots) are replicated with SnapMirror / Synchronous SnapMirror / SyncMirror; the 2nd tier is SnapMirrored on to a 3rd tier.)