

ind-filer08> cifs terminate
CIFS local server is shutting down...
CIFS local server has shut down...
ind-filer08> nfs
Use of "nfs" by itself is deprecated, please use "nfs status"
The following commands are available; for more information type "nfs help <command>"
diag        nsdb        on          stat
help        off         setup       status
NFS server is NOT running.
ind-filer08> nfs off
NFS server is NOT running.
ind-filer08>
ind-filer08> revert_to -f 7.3
Newer snapshots on volume vol0 that must be deleted prior to reverting:
nightly.1
hourly.5
hourly.4
hourly.3
hourly.2
nightly.0
hourly.1
hourly.0
Please address the above conditions, then try "revert_to" again.
ind-filer08> snap list
Volume vol0
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  5% ( 5%)    0% ( 0%)  Sep 24 12:18  hourly.0
  7% ( 2%)    0% ( 0%)  Sep 24 08:00  hourly.1
  9% ( 2%)    0% ( 0%)  Sep 24 00:00  nightly.0
 11% ( 2%)    0% ( 0%)  Sep 23 20:00  hourly.2
 12% ( 2%)    0% ( 0%)  Sep 23 16:00  hourly.3
 14% ( 2%)    0% ( 0%)  Sep 23 12:00  hourly.4
 15% ( 2%)    0% ( 0%)  Sep 23 08:00  hourly.5
 17% ( 2%)    0% ( 0%)  Sep 23 00:00  nightly.1
ind-filer08> Mon Sep 24 14:58:08 GMT [ind-filer08: asup.post.host:info]: AutoSupport (HA Group Notification from ind-filer08 (PERFORMANCE SNAPSHOT) INFO) cannot connect to url support.netapp.com (ZSM - Can't read reply header (line 1))
Mon Sep 24 14:58:08 GMT [ind-filer08: asup.post.retry:info]: AutoSupport message (HA Group Notification from ind-filer08 (PERFORMANCE SNAPSHOT) INFO) was not posted to NetApp for host (0).
The system will retry later to post the message
ind-filer08> snap
usage:
snap list [-A | -V] [-n] [-b] [-l] [[-q] [<vol-name>] | -o [<qtree-path>]]
snap create [-A | -V] <vol-name> <snapshot-name>
snap delete [-A | -V] <vol-name> <snapshot-name> |
snap delete [-A | -V] -a [-f] [-q] <vol-name>
snap delta [-A | -V] [<vol-name> [<snapshot-name>] [<snapshot-name>]]
snap rename [-A | -V] <vol-name> <old-snapshot-name> <new-snapshot-name>
snap sched [-A | -V] [<vol-name> [weeks [days [hours[@<list>]]]]]
snap reclaimable <vol-name> snapshot-name ...
snap reserve [-A | -V] [<vol-name> [percent]]
snap restore [-A | -V] [-f] [-t vol | file] [-s <snapshot-name>] [-r <restore-as-path>] <vol-name> | <restore-from-path>
snap autodelete <vol-name> [on | off | show | reset | help] |
snap autodelete <vol-name> <option> <value>...

ind-filer08> snap delete -V vol0
usage: [snap usage output repeated; elided]
ind-filer08> vol status
         Volume State      Status            Options
           vol0 online     raid_dp, flex     root, create_ucode=on
ind-filer08> df -Lh
Filesystem               total       used      avail capacity  Mounted on
/vol/vol0/              1276GB     3314MB     1273GB       0%  /vol/vol0/
/vol/vol0/.snapshot       67GB      464MB       66GB       1%  /vol/vol0/.snapshot
ind-filer08> snap delete -V /vol/vol0
usage: [snap usage output repeated; elided]
ind-filer08> snap delete -V -f vol0
snap delete: Options specified are not applicable unless used with -a option.
usage: [snap usage output repeated; elided]
ind-filer08> snap list
Volume vol0
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  5% ( 5%)    0% ( 0%)  Sep 24 12:18  hourly.0
  7% ( 2%)    0% ( 0%)  Sep 24 08:00  hourly.1
  9% ( 2%)    0% ( 0%)  Sep 24 00:00  nightly.0
 11% ( 2%)    0% ( 0%)  Sep 23 20:00  hourly.2
 12% ( 2%)    0% ( 0%)  Sep 23 16:00  hourly.3
 14% ( 2%)    0% ( 0%)  Sep 23 12:00  hourly.4
 15% ( 2%)    0% ( 0%)  Sep 23 08:00  hourly.5
 17% ( 2%)    0% ( 0%)  Sep 23 00:00  nightly.1
ind-filer08> snap delete
usage: [snap usage output repeated; elided]
ind-filer08> Mon Sep 24 15:00:00 GMT [ind-filer08: kern.uptime.filer:info]: 3:00pm up 1:39 0 NFS ops, 5021 CIFS ops, 0 HTTP ops, 0 FCP ops, 0 iSCSI ops
Mon Sep 24 15:00:01 GMT [ind-filer08: raid.root.unmirrored:error]: Root volume is not mirrored. A takeover of this filer may not be possible in case of a disaster.
ind-filer08> snap delete -a vol0
Mon Sep 24 15:00:06 GMT [ind-filer08: cf.takeover.disabled:warning]: Controller Failover is licensed but takeover of partner is disabled.
Are you sure you want to delete all snapshots for volume vol0? y
Deleted vol0 snapshot nightly.1.
Deleted vol0 snapshot hourly.5.
Deleted vol0 snapshot hourly.4.
Deleted vol0 snapshot hourly.3.
Deleted vol0 snapshot hourly.2.
Deleted vol0 snapshot nightly.0.
Deleted vol0 snapshot hourly.1.
Deleted vol0 snapshot hourly.0.
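As the session shows, `snap delete` with no arguments, with `-V <vol>`, or with `-V -f <vol>` only prints the usage text or an error; the form that clears every snapshot on a volume is `snap delete -a <vol>`, and adding `-f` skips the confirmation prompt. A minimal sketch of a non-interactive cleanup (not from the transcript; the `plan` helper only records and prints each filer command rather than sending it, so the sequence can be reviewed first):

```shell
#!/bin/sh
# Sketch: plan the snapshot cleanup that must precede revert_to.
# FILER matches the transcript; "plan" is a local helper, not a Data ONTAP command.
FILER="ind-filer08"
CMDS=""
plan() {
    CMDS="$CMDS$*;"
    echo "$FILER> $*"
}

plan snap list vol0            # confirm which snapshots would be removed
plan snap delete -a -f vol0    # -a: all snapshots on the volume, -f: no prompt
```

In a real run the recorded commands would be entered at the filer console (or sent over rsh/ssh if remote administration is enabled).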
ind-filer08> revert_to
usage: revert_to [-f] 7.3 (for 7.3 and 7.3.x)
       -f      Attempt to force revert.
ind-filer08> revert_to -f 7.3.6
revert_to: You can only revert to Data ONTAP 7.3
ind-filer08> revert_to -f 7.3
You are about to revert the system to work with Data ONTAP 7.3
The system will be halted immediately after the conversion process completes.
Make sure that you have installed Data ONTAP 7.3 onto the boot device,
or you will have to run "revert_to" again.
Are you sure you want to proceed? [yes/no]? yes
Stopping M-host processes...Mon Sep 24 15:00:50 GMT [ind-filer08: revertTo.start:notice]: Starting revert to 7.3.

No matching processes were found
done
Unmounting root volume...root: mroot unmounted
done
Removing mroot and pmroot checksum files...done
Removing .noteto files from /etc/log/autosupport...done
Clearing autosupport message spool
Mon Sep 24 15:00:50 GMT [ind-filer08: perf.archive.stop:info]: Performance archiver stopped. (11068)
spinhi was turned off before doing "revert_to".
coral was turned off before doing "revert_to".
Reverting metafiles with Version 3 Disk Inodes in aggregate aggr0
Reverting private inodes in aggregate aggr0
5% inodes reverted. 10% inodes reverted. 15% inodes reverted. 20% inodes reverted.
25% inodes reverted. 30% inodes reverted. 35% inodes reverted. 40% inodes reverted.
45% inodes reverted. 50% inodes reverted. 55% inodes reverted. 60% inodes reverted.
Reverting public inodes in aggregate aggr0
5% inodes reverted. 10% inodes reverted. 15% inodes reverted. 20% inodes reverted.
25% inodes reverted. 30% inodes reverted. 35% inodes reverted. 40% inodes reverted.
45% inodes reverted. 50% inodes reverted. 55% inodes reverted. 60% inodes reverted.
Reverting metafiles with Version 3 Disk Inodes in volume vol0
Reverting private inodes in volume vol0
5% inodes reverted. 10% inodes reverted. 15% inodes reverted. 20% inodes reverted.
25% inodes reverted. 30% inodes reverted. 35% inodes reverted. 40% inodes reverted.
45% inodes reverted. 50% inodes reverted. 55% inodes reverted. 60% inodes reverted.
Reverting public inodes in volume vol0
5% inodes reverted. 10% inodes reverted. 15% inodes reverted. 20% inodes reverted.
25% inodes reverted. 30% inodes reverted. 35% inodes reverted.

40% inodes reverted. 45% inodes reverted. 50% inodes reverted. 55% inodes reverted.
60% inodes reverted. 65% inodes reverted. 70% inodes reverted. 75% inodes reverted.
80% inodes reverted. 85% inodes reverted. 90% inodes reverted. 95% inodes reverted.
Waiting for asynchronous deletes to go to zero... Done.
Waiting for asynchronous deletes to drain...Done.
Reverting software-based disk ownership areas...
Mon Sep 24 15:01:35 GMT [ind-filer08: revertTo.complete:notice]: Revert to 7.3[.x] was completed.
Mon Sep 24 15:01:51 GMT [ind-filer08: kern.shutdown:notice]: System shut down because : "REVERT".
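Taken together, the session reduces to four steps: stop the client protocols, delete all snapshots newer than the target release, then run `revert_to`, after which the filer converts its metadata and halts. A hedged outline of the same sequence (the host name and release come from the transcript; as above, `plan` is a local helper that only records and prints each console command):

```shell
#!/bin/sh
# Outline of the revert procedure from the session above.
# These are Data ONTAP console commands; nothing here contacts a real filer.
FILER="ind-filer08"
STEPS=""
plan() {
    STEPS="$STEPS$*;"
    echo "$FILER> $*"
}

plan cifs terminate          # stop the CIFS server
plan nfs off                 # stop the NFS server (already off in the session)
plan snap delete -a -f vol0  # remove snapshots newer than the target release
plan revert_to -f 7.3        # convert on-disk metadata; the system halts when done
# After the halt, the filer must boot a Data ONTAP 7.3 image already installed
# on the boot device, or "revert_to" has to be run again.
```

Note that `revert_to` accepts only the major release (`7.3`), not a patch level: the transcript shows `revert_to -f 7.3.6` being rejected with "You can only revert to Data ONTAP 7.3".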
