Archive

Archive for the ‘7-mode Manual Pages’ Category

NetApp Man Pages

July 7th, 2009

NetApp have kindly given me permission to republish their man pages here. They still need a little tidying up, and the sheer quantity means it’ll take me a while to get them all sorted and cross-referenced properly, so please excuse any visual issues for the moment. I wrote a quick parsing tool to pull all the information in, so a few issues may remain from that as well.

I’ve always liked the way PHP’s function pages give users the ability to comment directly on them. This lets people leave feedback on functions and tools, and also follow up with extra uses or syntax for commands that aren’t necessarily clearly documented. Hopefully this can feed back into NetApp to improve their documentation.

I’d definitely like to encourage people to comment on the man pages with anything that may be useful, and hopefully build this into a useful little reference section. Many thanks again to the NetApp folks for helping me with this.

7-mode Manual Pages, General

aggr

July 7th, 2009

NAME

aggr – commands for managing aggregates, displaying aggregate status, and copying aggregates

SYNOPSIS

aggr command argument

DESCRIPTION

The aggr command family manages aggregates. The aggr commands can create new aggregates, destroy existing ones, undestroy previously destroyed aggregates, manage plexes within a mirrored aggregate, change aggregate status, apply options to an aggregate, copy one aggregate to another, and display aggregate status. Aggregate commands often affect the volume(s) contained within aggregates.

The aggr command family is new in Data ONTAP 7.0. The vol command family provided, and still provides, control over traditional volumes, which fuse a single user-visible file system and a single RAID-level storage container (aggregate) into an indivisible unit. To allow for more flexible use of storage, aggregates now also support the ability to contain multiple, independent user-level file systems named flexible volumes.

Data ONTAP 7.0 fully supports both traditional and flexible volumes. The aggr command family is the preferred method for managing a filer’s aggregates, including those that are embedded in traditional volumes.

Note that most of the aggr commands apply equally to both the type of aggregate that contains flexible volumes and the type that is tightly bound to form a traditional volume. Thus, the term aggregate is often used here to refer to both storage classes. In those cases, it provides a shorthand for the longer and more unwieldy phrase “aggregates and traditional volumes”.

Aggregates may either be mirrored or unmirrored. A plex is a physical copy of the WAFL storage within the aggregate. A mirrored aggregate consists of two plexes; unmirrored aggregates contain a single plex. In order to create a mirrored aggregate, you must have a filer configuration that supports RAID-level mirroring. When mirroring is enabled on the filer, the spare disks are divided into two disk pools. When an aggregate is created, all of the disks in a single plex must come from the same disk pool, and the two plexes of a mirrored aggregate must consist of disks from separate pools, as this maximizes fault isolation. This policy can be overridden with the -f option to aggr create, aggr add and aggr mirror, but it is not recommended.

An aggregate name can contain letters, numbers, and the underscore character (_), but the first character must be a letter or underscore. A combined total of up to 200 aggregates (including those embedded in traditional volumes) can be created on each filer.

A plex may be online or offline. If it is offline, it is not available for read or write access. Plexes can be in combinations of the following states:

normal
All RAID groups in the plex are functional.

failed
At least one of the RAID groups in the plex has failed.

empty
The plex is part of an aggregate that is being created, and one or more of the disks targeted to the aggregate need to be zeroed before being added to the plex.

active
The plex is available for use.

inactive
The plex is not available for use.

resyncing
The plex’s contents are currently out of date and are in the process of being resynchronized with the contents of the other plex of the aggregate (applies to mirrored aggregates only).

adding disks
Disks are being added to the plex’s RAID group(s).

out-of-date
This state only occurs in mirrored aggregates where one of the plexes has failed. The non-failed plex will be in this state if it needed to be resynchronized at the time the other plex failed.

A plex is named using the name of the aggregate, a slash character delimiter, and the name of the plex. The system automatically selects plex names at creation time. For example, the first plex created in aggregate aggr0 would be aggr0/plex0.

An aggregate may be online, restricted, or offline. When an aggregate is offline, no read or write access is allowed. When an aggregate is restricted, certain operations are allowed (such as aggregate copy, parity recomputation or RAID reconstruction) but data access is not allowed. Aggregates that are not a part of a traditional volume can only be restricted or offlined if they do not contain any flexible volumes.

Aggregates can be in combinations of the following states:

aggr
The aggregate is a modern-day aggregate; it is capable of containing zero or more flexible volumes.

copying
The aggregate is currently the target aggregate of an active aggr copy operation.

degraded
The aggregate contains at least one degraded RAID group that is not being reconstructed.

foreign
The disks that the aggregate contains were moved to the current filer from another filer.

growing
Disks are in the process of being added to the aggregate.

initializing
The aggregate is in the process of being initialized.

invalid
The aggregate contains no volumes and none can be added. Typically this happens only after an aborted aggregate copy operation.

ironing
A WAFL consistency check is being performed on this aggregate.

mirror degraded
The aggregate is a mirrored aggregate, and one of its plexes is offline or resyncing.

mirrored
The aggregate is mirrored and all of its RAID groups are functional.

needs check
A WAFL consistency check needs to be performed on the aggregate.

partial
At least one disk was found for the aggregate, but two or more disks are missing.

raid0
The aggregate consists of RAID-0 (no parity) RAID groups (V-Series and NetCache only).

raid4
The aggregate consists of RAID-4 RAID groups.

raid_dp
The aggregate consists of RAID-DP (Double Parity) RAID groups.

reconstruct
At least one RAID group in the aggregate is being reconstructed.

redirect
Aggregate reallocation or file reallocation with the -p option has been started on the aggregate. Read performance to volumes in the aggregate may be degraded.

resyncing
One of the plexes of a mirrored aggregate is being resynchronized.

snapmirrored
The aggregate is a snapmirrored replica of another aggregate. This state can only arise if the aggregate is part of a traditional volume.

trad
The aggregate is fused with a single volume. This is also referred to as a traditional volume and is exactly equivalent to the volumes that existed before Data ONTAP 7.0. Flexible volumes cannot be created inside of this aggregate.

verifying
A RAID mirror verification operation is currently being run on the aggregate.

wafl inconsistent
The aggregate has been marked corrupted. Please contact Customer Support if you see an aggregate in this state.

USAGE

The following commands are available in the aggr suite:

  add             mirror          restrict        undestroy
  copy            offline         scrub           verify
  create          online          show_space
  destroy         options         split
  media_scrub     rename          status

aggr add aggrname
[ -f ]
[ -n ]
[ -g {raidgroup | new | all} ]
{ ndisks[@size]
|
-d disk1 [ disk2 … ] [ -d diskn [ diskn+1 … ] ] }

Adds disks to the aggregate named aggrname. Specify the disks in the same way as for the aggr create command. If the aggregate is mirrored, then the -d argument must be used twice (if at all).

If the -g option is not used, the disks are added to the most recently created RAID group until it is full, and then one or more new RAID groups are created and the remaining disks are added to the new groups. Any other existing RAID groups that are not full remain partially filled.

The -g option allows specification of a RAID group (for example, rg0) to which the indicated disks should be added, or a method by which the disks are added to new or existing RAID groups.

If the -g option is used to specify a RAID group, that RAID group must already exist. The disks are added to that RAID group until it is full. Any remaining disks are ignored.

If the -g option is followed by new, Data ONTAP creates one or more new RAID groups and adds the disks to them, even if the disks would fit into an existing RAID group. Any existing RAID groups that are not full remain partially filled. The names of the new RAID groups are selected automatically. It is not possible to specify the names of the new RAID groups.

If the -g option is followed by all, Data ONTAP adds the specified disks to existing RAID groups first. After all existing RAID groups are full, it creates one or more new RAID groups and adds the specified disks to the new groups.

The -n option can be used to display the command that the system will execute, without actually making any changes. This is useful for displaying the automatically selected disks, for example.

By default, the filer fills up one RAID group with disks before starting another RAID group. Suppose an aggregate currently has one RAID group of 12 disks and its RAID group size is 14. If you add 5 disks to this aggregate, it will have one RAID group with 14 disks and another RAID group with 3 disks. The filer does not evenly distribute disks among RAID groups.

You cannot add disks to a mirrored aggregate if one of the plexes is offline.

The disks in a plex are not permitted to span disk pools. This behavior can be overridden with the -f flag when used together with the -d argument to list disks to add. The -f flag, in combination with -d, can also be used to force adding disks that have a rotational speed that does not match that of the majority of existing disks in the aggregate.
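
For example, the following (with hypothetical aggregate, RAID group, and disk names; run it with -n first to preview the result) adds two explicitly chosen disks to an existing RAID group:

aggr add aggr1 -g rg0 -d 8a.4 8a.5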

aggr copy abort [ -h ] operation_number | all

Terminates aggregate copy operations. The operation_number parameter specifies which operation to terminate. If you specify all, all active aggregate copy operations are terminated.

aggr copy start
[ -S | -s snapshot ] [ -C ]
source destination

Copies all data, including snapshots and flexible volumes, from one aggregate to another. If the -S flag is used, the command copies all snapshots in the source aggregate to the destination aggregate. To specify a particular snapshot to copy, use the -s flag followed by the name of the snapshot. If you use neither the -S nor -s flag in the command, the filer creates a snapshot at the time when the aggr copy start command is executed and copies only that snapshot to the destination aggregate.

The -C flag is required if the source aggregate has had free-space defragmentation performed on it, or if the destination aggregate will be free-space defragmented. Free-space defragmentation can be performed on an aggregate using the reallocate command.

Aggregate copies can only be performed between aggregates that host flexible volumes. Aggregates that are embedded in traditional volumes cannot participate.

The source and destination aggregates can be on the same filer or different filers. If the source or destination aggregate is on a filer other than the one on which you enter the aggr copy start command, specify the aggregate name in the filer_name:aggregate_name format.

The filers involved in an aggregate copy must meet the following requirements for the aggr copy start command to be completed successfully:

The source aggregate must be online and the destination aggregate must be restricted.

If the copy is between two filers, each filer must be defined as a trusted host of the other filer. That is, the filer’s name must be in the /etc/hosts.equiv file of the other filer.

If the copy is on the same filer, localhost must be included in the filer’s /etc/hosts.equiv file. Also, the loopback address must be in the filer’s /etc/hosts file. Otherwise, the filer cannot send packets to itself through the loopback address when trying to copy data.

The usable disk space of the destination aggregate must be greater than or equal to the usable disk space of the source aggregate. Use the df -A pathname command to see the amount of usable disk space of a particular aggregate.

Each aggr copy start command generates two aggregate copy operations: one for reading data from the source aggregate and one for writing data to the destination aggregate. Each filer supports up to four simultaneous aggregate copy operations.
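
As a sketch (the filer and aggregate names are hypothetical), copying the local aggregate aggr1 to a restricted aggregate aggr1 on a filer named toaster2 might look like this:

aggr copy start aggr1 toaster2:aggr1

Because neither -S nor -s is given, only a snapshot taken at the time the command executes is copied.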

aggr copy status [ operation_number ]

Displays the progress of one or all aggr copy operations. The operations are numbered from 0 through 3.

Restart checkpoint information for all transfers is also displayed.

aggr copy throttle [ operation_number ] value

Controls the performance of the aggr copy operation. The value ranges from 10 (full speed) to 1 (one-tenth of full speed). The default value is maintained in the filer’s aggr.copy.throttle option and is set to 10 (full speed) at the factory. You can apply the performance value to an operation specified by the operation_number parameter. If you do not specify an operation number in the aggr copy throttle command, the command applies to all aggr copy operations.

Use this command to limit the speed of the aggr copy operation if you suspect that the aggr copy operation is causing performance problems on your filer. In particular, the throttle is designed to help limit the CPU usage of the aggr copy operation. It cannot be used to fine-tune network bandwidth consumption patterns.

The aggr copy throttle command only enables you to set the speed of an aggr copy operation that is in progress. To set the default aggr copy speed to be used by future copy operations, use the options command to set the aggr.copy.throttle option.
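
For example, to slow a running copy operation (hypothetically, operation number 0) to half speed:

aggr copy throttle 0 5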

aggr create aggrname
[ -f ]
[ -m ]
[ -n ]
[ -t raidtype ]
[ -r raidsize ]
[ -T disk-type ]
[ -R rpm ]
[ -L [compliance | enterprise] ]
[ -v ]
[ -l language-code ]
{ ndisks[@size]

|
-d disk1 [ disk2 … ] [ -d diskn [ diskn+1 … ] ] }

Creates a new aggregate named aggrname. The aggregate name can contain letters, numbers, and the underscore character (_), but the first character must be a letter or underscore. Up to 200 aggregates can be created on each filer. This number includes those aggregates that are embedded within traditional volumes.

An embedded aggregate can be created as part of a traditional volume using the -v option. It cannot contain any flexible volumes.

A regular aggregate, created without the -v option, can contain only flexible volumes. It cannot be incorporated into a traditional volume, and it contains no volumes immediately after creation. New flexible volumes can be created using the vol create command.

The -t raidtype argument specifies the type of RAID group(s) to be used to create the aggregate. The possible RAID group types are raid4 for RAID-4, raid_dp for RAID-DP (Double Parity), and raid0 for simple striping without parity protection. The default raidtype for aggregates and traditional volumes on filers is raid_dp. Setting the raidtype is not permitted on V-Series systems; the default of raid0 is always used.

The -r raidsize argument specifies the maximum number of disks in each RAID group in the aggregate. The maximum and default values of raidsize are platform-dependent, based on performance and reliability considerations. See aggr options raidsize for more details.

The -T disk-type argument specifies the type of disks to use when creating a new aggregate. It is needed only on systems connected to disks of different types. Possible disk types are: ATA, FCAL, LUN, SAS, SATA, and SCSI. Mixing disks of different types in one aggregate is not allowed. -T cannot be used together with -d.

Disk type identifies disk technology and connectivity type. ATA identifies ATA disks with either IDE or serial ATA interface in shelves connected in FCAL (Fibre Channel Arbitrated Loop). FCAL identifies FC disks in shelves connected in FC-AL. LUN identifies virtual disks exported from external storage arrays. The underlying disk technology and RAID type depends on implementation of such external storage arrays. SAS identifies Serial Attached SCSI disks in matching shelves. SATA identifies serial ATA disks in SAS shelves. SCSI stands for Small Computer System Interface, and it is included for backward compatibility with earlier disk technologies.

The -R rpm argument specifies the type of disks to use based on their rotational speed in revolutions per minute (rpm). It is needed only on systems having disks with different rotational speeds. Typical values for rotational speed are 5400, 7200, 10000, and 15000. -R cannot be used together with -d.
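
As an illustration of these selection options (the aggregate name and disk count are hypothetical), the following would build a 16-disk aggregate using only SAS disks; -R 15000 could similarly restrict selection to 15000-RPM disks:

aggr create aggr2 -T SAS 16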

ndisks is the number of disks in the aggregate, including the parity disks. The disks in this newly created aggregate come from the pool of spare disks. The smallest disks in this pool join the aggregate first, unless you specify the @size argument. size is the disk size in GB, and disks that are within 10% of the specified size will be selected for use in the aggregate.

The -m option can be used to specify that the new aggregate be mirrored (have two plexes) upon creation. If this option is given, then the indicated disks will be split across the two plexes. By default, the new aggregate will not be mirrored.

The -n option can be used to display the command that the system will execute, without actually making any changes. This is useful for displaying the automatically selected disks, for example.

If you use the -d disk1 [ disk2 … ] argument, the filer creates the aggregate with the specified spare disks disk1, disk2, and so on. You can specify a space-separated list of disk names. If the new aggregate is mirrored, two separate lists must be specified, and the indicated disks must result in an equal number of disks in each new plex.

The disks in a plex are not permitted to span spare pools. This behavior can be overridden with the -f option. The same option can also be used to force using disks that do not have matching rotational speed. The -f option has effect only when used with the -d option specifying disks to use.

To create a SnapLock aggregate, specify the -L flag with the aggr create command. This flag is only supported if either SnapLock Compliance or SnapLock Enterprise is licensed. The type of the SnapLock aggregate created, either Compliance or Enterprise, is determined by the installed SnapLock license. If both SnapLock Compliance and SnapLock Enterprise are licensed, use -L compliance or -L enterprise to specify the desired aggregate type.

The -l language_code argument may be used only when creating a traditional volume using option -v. The filer creates the traditional volume with the language specified by the language code. The default is the language used by the filer’s root volume. See the vol man page for a list of language codes.

aggr destroy { aggrname | plexname } [ -f ]

Destroys the aggregate named aggrname, or the plex named plexname. Note that if the specified aggregate is tied to a traditional volume, then the traditional volume itself is destroyed as well.

If an aggregate is specified, all plexes in the aggregate are destroyed. The named aggregate must also not contain any flexible volumes, regardless of their mount state (online, restricted, or offline). If a plex is specified, the plex is destroyed, leaving an unmirrored aggregate or traditional volume containing the remaining plex. Before destroying the aggregate, traditional volume or plex, the user is prompted to confirm the operation. The -f flag can be used to destroy an aggregate, traditional volume or plex without prompting the user.

The disks originally in the destroyed object become spare disks. Only offline aggregates, traditional volumes and plexes can be destroyed.

aggr media_scrub status [ aggrname | plexname | groupname ]
[ -v ]

Prints the media scrubbing status of the named aggregate, plex, or group. If no name is given, then status is printed for all RAID groups currently running a media scrub. The status includes a percent-complete and whether it is suspended.

The -v flag displays the date and time at which the last full media scrub completed, the date and time at which the current instance of media scrubbing started, and the current status of the named aggregate, plex, or group. If no name is given, this more verbose status is printed for all RAID groups with active media scrubs.
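
For example, to display verbose media scrub status for a hypothetical aggregate aggr0:

aggr media_scrub status aggr0 -v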

aggr mirror aggrname
[ -f ]
[ -n ]
[ -v victim_aggrname ]
[ -d disk1 [ disk2 … ] ]

Turns an unmirrored aggregate into a mirrored aggregate by adding a plex to it. The plex is either newly-formed from disks chosen from a spare pool, or, if the -v option is specified, is taken from another existing unmirrored aggregate. Aggregate aggrname must currently be unmirrored. Use aggr create to make a new, mirrored aggregate from scratch.

Disks may be specified explicitly using -d in the same way as with the aggr create and aggr add commands. The number of disks indicated must match the number present on the existing aggregate. The disks specified are not permitted to span disk pools. This behavior can be overridden with the -f option. The -f option, in combination with -d, can also be used to force using disks that have a rotational speed that does not match that of the majority of existing disks in the aggregate.

If disks are not specified explicitly, then disks are automatically selected to match those in the aggregate’s existing plex.

The -v option can be used to join victim_aggrname back into aggrname to form a mirrored aggregate. The result is a mirrored aggregate named aggrname which is otherwise identical to aggrname before the operation. victim_aggrname is effectively destroyed. victim_aggrname must have been previously mirrored with aggrname, and then separated via the aggr split command. victim_aggrname must be offline. Combined with the -v option, the -f option can be used to join aggrname and victim_aggrname without prompting the user.

The -n option can be used to display the command that the system will execute without actually making any changes. This is useful for displaying the automatically selected disks, for example.
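
For example (the aggregate name is hypothetical), previewing the automatic disk selection and then mirroring the aggregate:

aggr mirror aggr1 -n
aggr mirror aggr1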

aggr offline { aggrname | plexname }
[ -t cifsdelaytime ]

Takes the aggregate named aggrname (or the plex named plexname) offline. The command takes effect before returning. If the aggregate is already in restricted state, then it is already unavailable for data access, and much of the following description does not apply.

If the aggregate contains any flexible volumes, then the operation is aborted unless the filer is in maintenance mode.

Except in maintenance mode, the aggregate containing the current root volume may not be taken offline. An aggregate containing a volume that has been marked to become root (using vol options vol_name root) also cannot be taken offline.

If the aggregate is embedded in a traditional volume that has CIFS shares, users should be warned before taking the aggregate (and hence the entire traditional volume) offline. Use the -t switch for this. The cifsdelaytime argument specifies the number of minutes to delay before taking the embedded aggregate offline, during which time CIFS users of the traditional volume are warned of the pending loss of service. A time of 0 means take the aggregate offline immediately with no warnings given. CIFS users can lose data if they are not given a chance to terminate applications gracefully.
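
For example, to warn CIFS users and then take the embedded aggregate of a hypothetical traditional volume offline after a five-minute delay:

aggr offline tradvol1 -t 5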

If a plexname is specified, the plex must be part of a mirrored aggregate and both plexes must be online. Prior to offlining a plex, the system will flush all internally-buffered data associated with the plex and create a snapshot that is written out to both plexes. The snapshot allows for efficient resynchronization when the plex is subsequently brought back online.

A number of operations being performed on the aggregate’s traditional volume can prevent aggr offline from succeeding, for various lengths of time. If such operations are found, there will be a one-second wait for such operations to finish. If they do not, the command is aborted.

A check is also made for files in the aggregate’s associated traditional volume opened by internal ONTAP processes. The command is aborted if any are found.

aggr online { aggrname | plexname }
[ -f ]

Brings the aggregate named aggrname (or the plex named plexname) online. This command takes effect immediately. If the specified aggregate is embedded in a traditional volume, the volume is also brought online.

If an aggrname is specified, it must be currently offline, restricted, or foreign. If the aggregate is foreign, it will be made native before being brought online. A “foreign” aggregate is an aggregate that consists of disks moved from another filer and that has never been brought online on the current filer. Aggregates that are not foreign are considered “native.”

If the aggregate is inconsistent, but has not lost data, the user will be cautioned and prompted before bringing the aggregate online. The -f flag can be used to override this behavior. It is advisable to run WAFL_check (or do a snapmirror initialize in case of an aggregate embedded in a traditional volume) prior to bringing an inconsistent aggregate online. Bringing an inconsistent aggregate online increases the risk of further file system corruption. If the aggregate is inconsistent and has experienced possible loss of data, it cannot be brought online unless WAFL_check (or snapmirror initialize in the embedded case) has been run on the aggregate.

If a plexname is specified, the plex must be part of an online mirrored aggregate. The system will initiate resynchronization of the plex as part of online processing.

aggr options aggrname [ optname optval ]

Displays the options that have been set for aggregate aggrname, or sets the option named optname of the aggregate named aggrname to the value optval. The command remains effective after the filer is rebooted, so there is no need to add aggr options commands to the /etc/rc file. Some options have values that are numbers. Some options have values that may be on (which can also be expressed as yes, true, or 1) or off (which can also be expressed as no, false, or 0). A mixture of uppercase and lowercase characters can be used when typing the value of an option. The aggr status command displays the options that are set per aggregate.
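
For example (the aggregate name is hypothetical), setting an option and then displaying all options for the aggregate:

aggr options aggr1 nosnap on
aggr options aggr1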

The following describes the options and their possible values:

fs_size_fixed on | off

This option only applies to aggregates that are embedded in traditional volumes. It causes the file system to remain the same size and not grow or shrink when a SnapMirrored volume relationship is broken, or an aggr add is performed on it. This option is automatically set to be on when a traditional volume becomes a SnapMirrored volume. It will remain on after the snapmirror break command is issued for the traditional volume. This allows a traditional volume to be SnapMirrored back to the source without needing to add disks to the source traditional volume. If the traditional volume size is larger than the file system size, turning off this option will force the file system to grow to the size of the traditional volume. The default setting is off.

ignore_inconsistent on | off

This command can only be used in maintenance mode. If this option is set, it allows the aggregate containing the root volume to be brought online on booting, even though it is inconsistent. The user is cautioned that bringing it online prior to running WAFL_check or wafliron may result in further file system inconsistency.

nosnap on | off

If this option is on, it disables automatic snapshots on the aggregate. The default setting is off.

raidsize number

The value of this option is the maximum size of a RAID group that can be created in the aggregate. Changing the value of this option will not cause existing RAID groups to grow or shrink; it will only affect whether more disks will be added to the last existing RAID group and how large new RAID groups will be.

Legal values for this option depend on raidtype. For example, raid_dp allows larger RAID groups than raid4. Limits and default values also differ across filer appliance types and disk types. The following tables define the limits and default values for raidsize.

  ---------------------------------------------
   raid4 raidsize        min   default   max
  ---------------------------------------------
   R100                   2        8       8
   R150                   2        6       6
   FAS250                 2        7      14
   other (FCAL disks)     2        8      14
   other (ATA disks)      2        7       7
  ---------------------------------------------

  ---------------------------------------------
   raid_dp raidsize      min   default   max
  ---------------------------------------------
   R100                   3       12      12
   R150                   3       12      16
   other (FCAL disks)     3       16      28
   other (ATA disks)      3       14      16
  ---------------------------------------------

Those values may change in future releases of Data ONTAP.

raidtype raid4 | raid_dp | raid0

Sets the type of RAID used to protect against disk failures. Use of raid4 provides one parity disk per RAID group, while raid_dp provides two. Changing this option immediately changes the RAID type of all RAID groups within the aggregate. When upgrading RAID groups from raid4 to raid_dp, each RAID group begins a reconstruction onto a spare disk allocated for the second ‘dparity’ parity disk.

Changing this option also changes raidsize to a more suitable value for new raidtype. When upgrading from raid4 to raid_dp, raidsize will be increased to the default value for raid_dp. When downgrading from raid_dp to raid4, raidsize will be decreased to the size of the largest existing RAID group if it is between the default value and the limit for raid4. If the largest RAID group is above the limit for raid4, the new raidsize will be that limit. If the largest RAID group is below the default value for raid4, the new raidsize will be that default value. If raidsize is already below the default value for raid4, it will be reduced by 1.
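
For example, to convert a hypothetical raid4 aggregate to RAID-DP, which also raises its raidsize to the raid_dp default:

aggr options aggr1 raidtype raid_dp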

resyncsnaptime number

This option is used to set the mirror resynchronization snapshot frequency (in minutes). The default value is 60 minutes.

root

If this option is set on a traditional volume, then the effect is identical to that described in the vol man page. Otherwise, if this option is set on an aggregate capable of containing flexible volumes, then that aggregate is marked as the one that will contain the root flexible volume on the next reboot. This option can be used on only one aggregate or traditional volume at any given time. The existing root aggregate or traditional volume will become a non-root entity after the reboot.

Until the system is rebooted, the original aggregate and/or traditional volume will continue to show root as one of its options, and the new root aggregate or traditional volume will show diskroot as an option. In general, the aggregate that has the diskroot option is the one that will contain the root flexible volume following the next reboot.

The only way to remove the root status of an aggregate or traditional volume is to set the root option on another aggregate or traditional volume.

snaplock_compliance

This read only option indicates that the aggregate is a SnapLock Compliance aggregate. Aggregates can only be designated SnapLock Compliance aggregates at creation time.

snaplock_enterprise

This read only option indicates that the aggregate is a SnapLock Enterprise aggregate. Aggregates can only be designated SnapLock Enterprise aggregates at creation time.

snapmirrored off

If SnapMirror is enabled for a traditional volume (SnapMirror is not supported for aggregates that contain flexible volumes), the filer automatically sets this option to on. Set this option to off if SnapMirror is no longer to be used to update the traditional volume mirror. After setting this option to off, the mirror becomes a regular writable traditional volume. This option can only be set to off; only the filer can change the value of this option from off to on.

snapshot_autodelete on | off

This option is used to set whether snapshots are automatically deleted in the aggregate. If set to on, then snapshots may be deleted in the aggregate to recover storage as necessary. If set to off, then snapshots in the aggregate are not automatically deleted to recover storage. Note that snapshots may still be deleted for other reasons, such as maintaining the snapshot schedule for the aggregate, or deleting snapshots that are associated with specific operations that no longer need the snapshot. To allow snapshots to be deleted in a timely manner, the number of aggregate snapshots is limited when snapshot_autodelete is enabled. Because of this, if there are too many snapshots in an aggregate, some snapshots must be deleted before the snapshot_autodelete option can be enabled.

aggr rename aggrname newname

Renames the aggregate named aggrname to newname. If this aggregate is embedded in a traditional volume, then that volume’s name is also changed.

aggr restrict aggrname
[ -t cifsdelaytime ]

Puts the aggregate named aggrname into the restricted state, starting from either the online or offline state. The command takes effect before returning.

If the aggregate contains any flexible volumes, the operation is aborted unless the filer is in maintenance mode.

If the aggregate is embedded in a traditional volume that has CIFS shares, users should be warned before restricting the aggregate (and hence the entire traditional volume). Use the -t switch for this. The cifsdelaytime argument specifies the number of minutes to delay before taking the embedded aggregate offline, during which time CIFS users of the traditional volume are warned of the pending loss of service. A time of 0 means take the aggregate offline immediately with no warnings given. CIFS users can lose data if they are not given a chance to terminate applications gracefully.

aggr scrub resume [ aggrname | plexname | groupname ]

Resumes parity scrubbing on the named aggregate, plex, or group. If no name is given, scrubbing is resumed on all RAID groups whose parity scrub is currently suspended.

aggr scrub start [ aggrname | plexname | groupname ]

Starts parity scrubbing on the named online aggregate. Parity scrubbing compares the data disks to the parity disk(s) in their RAID group, correcting the parity disk’s contents as necessary. If no name is given, parity scrubbing is started on all online aggregates. If an aggregate name is given, scrubbing is started on all RAID groups contained in the aggregate. If a plex name is given, scrubbing is started on all RAID groups contained in the plex.

aggr scrub status [ aggrname | plexname | groupname ] [ -v ]

Prints the status of parity scrubbing on the named aggregate, plex, or group; all RAID groups currently undergoing parity scrubbing if no name is given. The status includes a percent-complete, and the scrub’s suspended status.

The -v flag displays the date and time at which the last full scrub completed along with the current status on the named aggregate, plex, or group; all RAID groups if no name is given.

aggr scrub stop [ aggrname | plexname | groupname ]

Stops parity scrubbing on the named aggregate, plex, or group; if no name is given, on all RAID groups currently undergoing a parity scrubbing.

aggr scrub suspend [ aggrname | plexname | groupname ]

Suspends parity scrubbing on the named aggregate, plex, or group; if no name is given, on all RAID groups currently undergoing parity scrubbing.

aggr show_space [ -h | -k | -m | -g | -t | -b ] < aggrname >

Displays the space usage in an aggregate. Unlike df, this command shows the space usage for each flexible volume within an aggregate. If aggrname is specified, aggr show_space only runs on the corresponding aggregate; otherwise it reports space usage on all the aggregates.

All sizes are reported in 1024-byte blocks, unless otherwise requested by one of the -h, -k, -m, -g, or -t options. The -k, -m, -g, and -t options scale each size-related field of the output to be expressed in kilobytes, megabytes, gigabytes, or terabytes respectively; the -h option scales each field to a unit appropriate to its size.

The following terminology is used by the command in reporting space.

Total space
This is the amount of total disk space that the aggregate has.

WAFL reserve
WAFL reserves a percentage of the total disk space for aggregate-level metadata. The space used for maintaining the volumes in the aggregate comes out of the WAFL reserve.

Snap reserve
Snap reserve is the amount of space reserved for aggregate snapshots.

Usable space
This is the total amount of space that is available to the aggregate for provisioning. This is computed as: Usable space = Total space - WAFL reserve - Snap reserve. df displays this as the ‘total’ space.

BSR NVLOG
This is valid for Synchronous SnapMirror destinations only. This is the amount of space used in the aggregate on the destination filer to store data sent from the source filer(s) before sending it to disk.

Allocated
This is the sum of the space reserved for the volume and the space used by non-reserved data. For volume-guaranteed volumes, this is at least the size of the volume, since no data is unreserved. For volumes with a space guarantee of none, this value is the same as the ‘Used’ space (explained below), since no unused space is reserved. The Allocated space value shows the amount of space that the volume is taking from the aggregate. This value can be greater than the size of the volume because it also includes the metadata required to maintain the volume.

Used
This is the amount of space that is taking up disk blocks. This value is not the same as the ‘used’ space displayed by the df command. The Used space in this case includes the metadata required to maintain the flexible volume.

Avail
Total amount of free space in the aggregate. This is the same as the avail space reported by df.

aggr split plexname aggrname
[ -r oldvol newvol ] …
[ -s suffix ]

Removes plexname from a mirrored aggregate and creates a new unmirrored aggregate named aggrname that contains the plex. The original mirrored aggregate becomes unmirrored. The plex to be split from the original aggregate must be functional (not partial), but it could be inactive, resyncing, or out-of-date. aggr split can therefore be used to gain access to a plex that is not up to date with respect to its partner plex, if its partner plex is currently failed.

If the aggregate in which plexname resides is embedded in a traditional volume, aggr split behaves identically to vol split. The new aggregate is embedded in a new traditional volume of the same name.

If the aggregate in which plexname resides contains exactly one flexible volume, aggr split will by default rename the flexible volume image in the split-off plex to be the same as the new aggregate.

If the aggregate in which plexname resides contains more than one flexible volume, it is necessary to specify how to name the volumes in the new aggregate resulting from the split. The -r option can be used repeatedly to give each flexible volume in the resulting aggregate a new name. In addition, the -s option can be used to specify a suffix that is added to the end of all flexible volume names not covered by a -r.
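
As a sketch (all names hypothetical), the following splits plex1 off into a new aggregate aggr1split, renaming the flexible volume vol1 to vol1_split and appending _s to the names of the remaining volumes:

aggr split aggr1/plex1 aggr1split -r vol1 vol1_split -s _s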

If the original aggregate is restricted at the time of the split, the resulting aggregate will also be restricted. If the restricted aggregate is hosting flexible volumes, they are not renamed at the time of the split. Flexible volumes will be renamed later, when the name conflict is detected while bringing an aggregate online. Flexible volumes in the aggregate that is brought online first keep their names. That aggregate can be either the original aggregate, or the aggregate resulting from the split. When the other aggregate is brought online later, flexible volumes in that aggregate will be renamed.

If the plex of an aggregate embedded within a traditional volume is offline at the time of the split, the resulting aggregate will be offline. When splitting a plex from an aggregate that hosts flexible volumes, if that plex is offline, but the aggregate is online, the resulting aggregate will come online, and its flexible volumes will be renamed. It is not allowed to split a plex from an offline aggregate.

A split mirror can be joined back together via the -v option to aggr mirror.

aggr status [ aggrname ]
[ -r | -v | -d | -c | -b | -s | -f | -i ]

Displays the status of one or all aggregates on the filer. If aggrname is used, the status of the specified aggregate is printed; otherwise the status of all aggregates in the filer is printed. By default, it prints a one-line synopsis of the aggregate which includes the aggregate name, whether it contains a single traditional volume or some number of flexible volumes, whether it is online or offline, other states (for example, partial, degraded, wafl inconsistent, and so on) and per-aggregate options. Per-aggregate options are displayed only if the options have been changed from the system default values by using the aggr options command, or by the vol options command if the aggregate is embedded in a traditional volume. If the wafl inconsistent state is displayed, please contact Customer Support.

The -v flag shows the on/off state of all per-aggregate options and displays information about each volume, plex and RAID group contained in the aggregate.

The -r flag displays a list of the RAID information for that aggregate. If no aggrname is specified, it prints RAID information about all aggregates, information about file system disks, spare disks, and failed disks. For more information about failed disks, see the -f switch description below.

The -d flag displays information about the disks in the specified aggregate. The types of disk information are the same as those from the sysconfig -d command.

The -c flag displays the upgrade status of the Block Checksums data integrity protection feature.

The -b flag is used to get the size of source and destination aggregates for use with aggr copy. The output contains the storage in the aggregate and the possibly smaller size of the aggregate. The aggregate copy command uses these numbers to determine if the source and destination aggregate sizes are compatible. The size of the source aggregate must be equal to or smaller than the size of the destination aggregate.

The -s flag displays a listing of the spare disks on the filer.

The -f flag displays a list of the failed disks on the filer. The command output includes the disk failure reason, which can be any of the following:

  unknown           Failure reason unknown.
  failed            Data ONTAP failed disk due to a
                    fatal disk error.
  admin failed      User issued a ‘disk fail’ command
                    for this disk.
  labeled broken    Disk was failed under Data ONTAP
                    6.1.X or an earlier version.
  init failed       Disk initialization sequence failed.
  admin removed     User issued a ‘disk remove’ command
                    for this disk.
  not responding    Disk not responding to requests.
  pulled            Disk was physically pulled, or no
                    data path exists on which to access
                    the disk.
  bypassed          Disk was bypassed by ESH.

The -i flag displays a list of the flexible volumes contained in an aggregate.

aggr undestroy [ -n ] < aggrname >

Undestroys a partially intact or previously destroyed aggregate or traditional volume. The command prints a list of candidate aggregates and traditional volumes matching the given name which can potentially be undestroyed.

The -n option prints the list of disks contained by the aggregate or by the traditional volume, which can be potentially undestroyed. This option can be used to display the result of command execution, without actually making any changes.

aggr verify resume [ aggrname ]

Resumes RAID mirror verification on the named aggregate; if no aggregate name is given, on all aggregates currently undergoing a RAID mirror verification that has been suspended.

aggr verify start [ aggrname ] [ -f plexnumber ]

Starts RAID mirror verification on the named online mirrored aggregate. If no name is given, then RAID mirror verification is started on all online mirrored aggregates. Verification compares the data in both plexes of a mirrored aggregate. In the default case, all blocks that differ are logged, but no changes are made. If the -f flag is given, the specified plex is fixed to match the other plex when mismatches are found. An aggregate name must be specified with the -f plexnumber option.
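
For example (the aggregate and plex number are hypothetical), to fix plex 1 wherever it differs from its partner plex:

aggr verify start aggr1 -f 1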

aggr verify stop [ aggrname ]

Stops RAID mirror verification on the named aggregate; if no aggregate name is given, on all aggregates currently undergoing a RAID mirror verification.

aggr verify status [ aggrname ]

Prints the status of RAID mirror verification on the named aggregate; on all aggregates currently undergoing RAID mirror verification if no aggregate name is given. The status includes a percent-complete, and the verification’s suspended status.

aggr verify suspend [ aggrname ]

Suspends RAID mirror verification on the named aggregate; if no aggregate name is given, on all aggregates currently undergoing RAID mirror verification.

CLUSTER CONSIDERATIONS

Aggregates on different filers in a cluster can have the same name. For example, both filers in a cluster can have an aggregate named aggr0.

However, having unique aggregate names in a cluster makes it easier to migrate aggregates between the filers in the cluster.

EXAMPLES

aggr create aggr1 -r 10 20

Creates an aggregate named aggr1 with 20 disks. The RAID groups in this aggregate can contain up to 10 disks, so this new aggregate has two RAID groups. The filer adds the current spare disks to the new aggregate, starting with the smallest disk.

aggr create aggr1 20@9

Creates an aggregate named aggr1 with 20 9-GB disks. Because no RAID group size is specified, the default size (8 disks) is used. The newly-created aggregate contains two RAID groups with eight disks each and a third group with four disks.

aggr create aggr1 -d 8a.1 8a.2 8a.3

Creates an aggregate named aggr1 with the specified three disks.

aggr create aggr1 10
aggr options aggr1 raidsize 5

The first command creates an aggregate named aggr1 with 10 disks which belong to one RAID group. The second command specifies that if any disks are subsequently added to this aggregate, they will not cause any current RAID group to have more than five disks. Each existing RAID group will continue to have 10 disks and no more disks will be added to that RAID group. When new RAID groups are created, they will have a maximum size of five disks.

aggr show_space -h ag1

Displays the space usage of the aggregate ‘ag1’ and scales the unit of space according to the size.

  Aggregate ‘ag1’

      Total space    WAFL reserve    Snap reserve    Usable space    BSR NVLOG
           66GB          6797MB           611MB            59GB          65KB

  Space allocated to volumes in the aggregate

  Volume                Allocated            Used       Guarantee
  vol1                       14GB            11GB          volume
  vol2                     8861MB          8871MB            file
  vol3                     6161MB          6169MB            none
  vol4                       26GB            25GB          volume
  vol1_clone               1028MB          1028MB       (offline)

  Aggregate             Allocated            Used           Avail
  Total space                55GB            51GB          3494MB
  Snap reserve              611MB            21MB           590MB
  WAFL reserve             6797MB          5480KB          6792MB

aggr status aggr1 -r

Displays the RAID information about aggregate aggr1. In the following example, we see that aggr1 is a RAID-DP aggregate protected by block checksums. It is online, and all disks are operating normally. The aggregate contains four disks: two data disks, one parity disk, and one double-parity disk. Two disks are located on adapter 0b, and two on adapter 1b. The disk shelf and bay numbers for each disk are indicated. All four disks are 10,000 RPM Fibre Channel disks attached via disk channel A. The disk “Pool” attribute is displayed only if SyncMirror is licensed, which is not the case here (if SyncMirror were licensed, Pool would be either 0 or 1). The amount of disk space that is used by Data ONTAP (“Used”) and is available on the disk (“Phys”) is displayed in the rightmost columns.

  Aggr aggr1 (online, raid_dp) (block checksums)
    Plex /aggr1/plex0 (online, normal, active)
      RAID group /aggr1/plex0/rg0 (normal)

        RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)    Phys (MB/blks)
        --------- ------  --- ----- --- ---- ---- ----  ----- ----------------  ----------------
        dparity   0b.16   0b    1   0   FC:A   -  FCAL  10000 136000/278528000  137104/280790184
        parity    1b.96   1b    6   0   FC:A   -  FCAL  10000 136000/278528000  139072/284820800
        data      0b.17   0b    1   1   FC:A   -  FCAL  10000 136000/278528000  139072/284820800
        data      1b.97   1b    6   1   FC:A   -  FCAL  10000 136000/278528000  139072/284820800

SEE ALSO

vol, partner, snapmirror, sysconfig.


7-mode Manual Pages

cf

July 7th, 2009

NAME

cf – controls the takeover and giveback operations of the filers in a cluster

SYNOPSIS

cf [ disable | enable | forcegiveback | forcetakeover [ -df ] | giveback [ -f ] | hw_assist [ status | test | stats [ clear ] ] | monitor | partner | status [ -t ] | takeover [ -f | -n ] ]

cf nfo [ enable | disable ] disk_shelf

cf nfo status

DESCRIPTION

The cf command controls the cluster failover monitor, which determines when takeover and giveback operations take place within a cluster.

The cf command is available only if your filer has the cluster license.

OPTIONS

disable
Disables the takeover capability of both filers in the cluster.

enable
Enables the takeover capability of both filers in the cluster.

forcegiveback
forcegiveback is dangerous and can lead to data corruption; in almost all cases, use cf giveback -f instead.

Forces the live filer to give back the resources of the failed filer even though the live filer determines that doing so might result in data corruption or cause other severe problems. giveback will refuse to give back under these conditions. Using the forcegiveback option forces a giveback. When the failed filer reboots as a result of a forced giveback, it displays the following message:

partner giveback incomplete, some data may be lost

forcetakeover [ -f ]
forcetakeover is dangerous and can lead to data corruption; in almost all cases, use cf takeover instead.

Forces one filer to take over its partner even though the filer detects an error that would otherwise prevent a takeover. For example, normally, if a detached or faulty ServerNet cable between the filers causes the filers’ NVRAM contents to be unsynchronized, takeover is disabled. However, if you enter the cf forcetakeover command, the filer takes over its partner despite the unsynchronized NVRAM contents. This command might cause the filer being taken over to lose client data. If you use the -f option, the cf command allows such a forcetakeover to proceed without requiring confirmation by the operator.

forcetakeover -d [ -f ]
Forces a filer to take over its partner in all cases where a forcetakeover would fail. In addition, it will force a takeover even if some partner mailbox disks are inaccessible. It can only be used when cluster_remote is licensed.

forcetakeover -d is very dangerous. Not only can it cause data corruption; if not used carefully, it can also lead to a situation where both the filer and its partner are operational (split brain). As such, it should only be used as a means of last resort when the takeover and forcetakeover commands are unsuccessful in achieving a takeover. The operator must ensure that the partner filer does not become operational at any time while a filer is in a takeover mode initiated by the use of this command. In conjunction with RAID mirroring, it can allow recovery from a disaster when the two filers in the cluster are located at two distant sites. The use of the -f option allows this command to proceed without requiring confirmation by the operator.

giveback [ -f ]
Initiates a giveback of partner resources. Once the giveback is complete, the automatic takeover capability is disabled until the partner is rebooted. A giveback fails if outstanding CIFS sessions, active system dump processes, or other filer operations make a giveback dangerous or disruptive. If you use the -f option, the cf command allows such a giveback to proceed as long as it would not result in data corruption or filer error.

hw_assist [ status | test | stats [ clear ] ]
Displays information related to the hardware-assisted takeover functionality. Use the cf hw_assist status command to display the hardware-assisted functionality status of the local as well as the partner filer. If hardware-assisted status is inactive, the command displays the reason and, if possible, a corrective action. Use the cf hw_assist test command to validate the hardware-assisted takeover configuration. An error message is printed if the hardware-assisted takeover configuration cannot be validated. Use the cf hw_assist stats command to display the statistics for all hw_assist alerts received by the filer. Use cf hw_assist stats clear to clear hardware-assisted functionality statistics.
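
For example, to check whether hardware-assisted takeover is active and to review the alert statistics:

cf hw_assist status
cf hw_assist stats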

monitor
Displays the time, the state of the local filer and the time spent in this state, the host name of the partner and the state of cluster failover monitor (whether enabled or disabled). If the partner has not been taken over currently, the status of the partner and that of the interconnect are displayed and any ongoing giveback or scheduled takeover operations are reported.

partner
Displays the host name of the partner. If the name is unknown, the cf command displays “partner.”

status
Displays the current status of the local filer and the cluster. If you use the -t option, displays the status of the node as time master or slave.

takeover [ -f | -n ]
Initiates a takeover of the partner. If you use the -f option, the cf command allows such a takeover to proceed even if it will abort a coredump on the other filer.

If you use the -n option, the cf command allows a takeover to proceed even if the partner node was running an incompatible version of Data ONTAP. The partner node must be cleanly halted in order for this option to succeed. This is used as part of a nondisruptive upgrade process.
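
For example, as part of a nondisruptive upgrade, after the partner has been cleanly halted:

cf takeover -n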

nfo [ enable | disable ] disk_shelf
Enables or disables negotiated failover on disk shelf count mismatch.

This command is obsolete. Option cf.takeover.on_disk_shelf_miscompare replaces it.

Negotiated failover is a general facility which supports negotiated failover on the basis of decisions made by various modules. disk_shelf is the only negotiated failover module currently implemented. When communication is first established over the interconnect between the local filer and its partner, a list of disk shelves seen by each node on its A and B loops is exchanged. If a filer sees that the count of shelves that the partner sees on its B loops is greater than the filer’s count of shelves on its A loops, the filer concludes that it is “impaired” (as it sees fewer of its shelves than its partner does) and asks the partner to take it over. If the partner is not itself impaired, it will accept the takeover request and, in turn, ask the requesting filer to shut down gracefully. The partner takes over after the requesting node shuts down, or after a time-out period of approximately 3 minutes expires. The comparison of disk shelves is only done when communication between the filers is established or re-established (for example, after a node reboots).

nfo status
Displays the current negotiated failover status.

This command is obsolete. Use cf status instead.

SEE ALSO

partner


Table of Contents



7-mode Manual Pages

bootfs

July 7th, 2009

Table of Contents

NAME

bootfs – boot file system accessor command (ADVANCED)

SYNOPSIS

bootfs chkdsk disk

bootfs core [ -v ] disk

bootfs dir [ -r ] path

bootfs dump { disk | drive } { sector | cluster }

bootfs fdisk disk partition1sizeMB [ partition2sizeMB ] [ partition3sizeMB ] [ partition4sizeMB ]

bootfs format drive [ label ]

bootfs info disk

bootfs sync [ -f ] { disk | drive }

bootfs test [ -v ] disk

DESCRIPTION

The bootfs command allows content viewing and format manipulation of the boot device.

Using the bootfs command, you may perform several important functions. You may check the integrity of the boot device via the chkdsk subcommand. You may view the contents of your boot device via the dir, dump, and info subcommands. You may alter the partition sizes and format types present on the boot device via the fdisk subcommand. You may reformat the partitions present on the boot device via the format subcommand. You may sync all in-memory contents to the physical media via the sync subcommand. Lastly, you may diagnose the health of your boot device via the test subcommand.

OPTIONS

-v
Turns on verbose output.

-r
Recursively lists directories and files.

path
A path consists of a drive, optional directories, and an optional file name. Directories are separated by a /. To discover your boot drive’s name, use "bootfs help subcommand".

disk
A disk is a physical object, probably a compact flash in this case. A disk name is generally of the form [PCI slot number]a.0, e.g. 0a.0. To discover your boot disk’s name, use "bootfs help subcommand".

drive
A drive is a formatted partition on the disk. A disk may contain up to four drives. A drive name is generally of the form [PCI slot number]a.0:[partition number]:, e.g. 0a.0:1:. To discover your boot drive’s name, use "bootfs help subcommand".

sector
Disks are divided into sectors. Sectors are based at 0.

cluster
Drives are divided into clusters. Clusters are based at 2, though the root directory can be thought to reside at cluster 0.

partitionNsizeMB
The size of partition N in megabytes. There can be at most four partitions per disk.

label
A string of up to 11 characters which names the drive.

CLUSTER CONSIDERATIONS

The bootfs command cannot be used on a clustered system’s partner.

EXAMPLES

The dir subcommand lists all files and subdirectories contained in the path provided. The information presented for each file and subdirectory is (in this column order) name, size, date, time, and cluster.

bootfs dir 0a.0:1:/x86/kernel/

  Volume Label in Drive 0a.0:1: is KERNEL
  Volume Serial Number is 716C-E9F8
  Directory of 0a.0:1:/x86/kernel/

  .                 DIR        02-07-2003   2:37a     2
  ..                DIR        02-07-2003   2:37a     3
  PRIMARY.KRN       9318400    04-07-2003   6:53p     4

          2187264 bytes free

The dump subcommand lists either a sector on a disk or a cluster on a drive, depending on the command line arguments provided. The sector or cluster is listed in both hexadecimal and ASCII form.

bootfs dump 0a.0 110

  sector 110 absolute byte 0xdc00 on disk 0a.0

        00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f   0123456789abcdef
  ----++------------------------------------------------++----------------
  0000  00 90 ba 5e b4 01 00 80 7b 0c 00 7d 05 ba 51 b4   ...^....{..}..Q.
  0010  01 00 83 7b 04 00 74 0a 8b 47 24 a3 dc ce 01 00   ...{..t..G$.....
  0020  eb 0a c7 05 dc ce 01 00 00 00 e0 fe 83 c4 fc ff   ................
  0030  35 dc ce 01 00 52 68 80 b4 01 00 e8 26 b0 ff ff   5....Rh.....&...
  0040  a1 dc ce 01 00 8b 90 f0 00 00 00 80 ce 01 89 90   ................
  [etc.]

  bootfs dump 0a.0:1: 5

  cluster 5 absolute byte 0x25a00 on drive 0a.0:1:

        00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f   0123456789abcdef
  ----++------------------------------------------------++----------------
  0000  0a 19 12 00 19 0f 00 01 00 64 00 00 00 00 00 00   .........d......
  0010  a1 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
  0020  00 00 00 00 5a 44 5a 44 00 10 00 00 00 00 01 b0   ....ZDZD........
  0030  20 04 00 10 20 05 00 01 20 06 00 02 20 07 00 13    ... ... ... ...
  0040  fc ef 00 00 fc b1 20 80 fc d0 20 80 4a 63 c0 55   ...... ... .Jc.U
  [etc.]

The fdisk subcommand creates drives within a disk. A maximum of four drives may be created per disk. The sum of the drives must be less than the size of the disk. Note that most disk manufacturers define a megabyte as 1000*1000 bytes, resulting in a disk being smaller than the size advertised (for example, a 32 MB disk is really 30.5 MB). Performing an fdisk destroys all data on the disk.

bootfs fdisk 0a.0 30

The format subcommand formats a drive to the FAT file system standard. A drive must be formatted before it can store files.

bootfs format 0a.0:1: NETAPP

The info subcommand prints information about a disk. The locations of various elements and the sizes of sections are displayed.

bootfs info 0a.0

  --------------------------------------------------------------------
            partition:           1           2           3           4
  --------------------------------------------------------------------
          file system:        0x01        0x01        0x01        0x01
    bytes per cluster:        4096        4096        4096        4096
   number of clusters:        2809        2809        2042         251
          total bytes:    11534336    11534336     8388608     1048576
         usable bytes:    11501568    11501568     8359936     1024000
           free bytes:    11505664    11505664     8364032     1028096
         FAT location:         512         512         512         512
        root location:        9728        9728        6656        1536
        data location:       26112       26112       23040       17920

The test subcommand reads from and writes to every byte on the disk. The test subcommand can be used if you suspect your disk is faulty. A faulty disk would, for example, result in a download command failure.

bootfs test -v 0a.0

  [...............................]
  disk 0a.0 passed I/O test

SEE ALSO

download


Table of Contents

7-mode Manual Pages

boot

July 7th, 2009

Table of Contents

NAME

boot – directory of Data ONTAP executables

SYNOPSIS

/etc/boot

DESCRIPTION

The boot directory contains copies of the executable files required to boot the filer. The download command (see download ) copies these files from /etc/boot into the filer’s boot block, from which the system boots.

FILES

/etc/boot
directory of Data ONTAP executables. Files are placed in /etc/boot after tar or setup.exe has decompressed them. These files vary from release to release.

SEE ALSO

download


Table of Contents

Copyright © 1994-2008 NetApp, Inc. Legal Information

7-mode Manual Pages

bmc

July 7th, 2009

Table of Contents

NAME

bmc – commands for use with a Baseboard Management Controller (BMC)

SYNOPSIS

bmc help

bmc reboot

bmc setup

bmc status

bmc test autosupport

DESCRIPTION

The bmc command is used to manage and test a Baseboard Management Controller (BMC), if one is present.

OPTIONS

help
Display a list of Baseboard Management Controller (BMC) commands.

reboot
The reboot command forces the BMC to reboot itself and perform a self-test. If your console connection is through the BMC it will be dropped.

setup
Interactively configure the BMC local-area network (LAN) settings.

status
Display the current status of the BMC.

test autosupport
Test the BMC autosupport by commanding the BMC to send a test autosupport to all autosupport email addresses in the option lists autosupport.to, autosupport.noteto, and autosupport.support.to.
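
For instance, once the autosupport recipient options have been configured (see options), the BMC mail path might be exercised with:

bmc test autosupport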

CLUSTER CONSIDERATIONS

This command only acts upon the Baseboard Management Controller (BMC) that is local to the system.

EXAMPLES

bmc status

might produce:

              Baseboard Management Controller:
                 Firmware Version:   1.0
                 IPMI version:       2.0
                 DHCP:               on
                 BMC MAC address:    00:a0:98:05:2b:4a
                 IP address:         10.98.144.170
                 IP mask:            255.255.255.0
                 Gateway IP address: 10.98.144.1
                 BMC ARP interval:   10 seconds
                 BMC has user:       naroot
                 ASUP enabled:       on
                 ASUP mailhost:      mailhost@netapp.com
                 ASUP from:          postmaster@netapp.com
                 ASUP recipients:    dl-qa-autosupport@netapp.com

SEE ALSO

setup , options

NOTES

Some of these commands might pause before completing while the Baseboard Management Controller (BMC) is queried. This is normal behavior.


Table of Contents

Copyright © 1994-2008 NetApp, Inc. Legal Information

7-mode Manual Pages

backuplog

July 7th, 2009

Table of Contents

NAME

backuplog – captures significant events during file system backup/recovery activities.

SYNOPSIS

/etc/log/backup

DESCRIPTION

The filer captures significant dump/restore-related events and the respective times at which they occur. All events are recorded in one-line messages in /etc/log/backup.

The following are the events the filer monitors:

Start
Dump/restore starts.

Restart
Restart of a dump/restore.

End
Dump/restore completes successfully.

Abort
The operation aborts.

Error
Dump/restore hits an unexpected event.

Options
Logs the options as users specify.

Tape_open
Output device is opened successfully.

Tape_close
Output device is closed successfully.

Phase_change
As dump/restore completes a stage.

Dump specific events:

Snapshot
When the snapshot is created or located.

Base_dump
When a valid base dump entry is located.

Logging events:

Start_logging
Logging begins.

Stop_logging
Logging ends.

Each event record is in the following format:

TYPE TIME_STAMP IDENTIFIER EVENT (EVENT_INFO)

TYPE
Either dmp(dump), rst(restore) or log events.

TIME_STAMP
Shows date and time at which event occurs.

IDENTIFIER
Unique ID for the dump/restore.

EVENT
The event name.

EVENT_INFO
Event specific information.

A typical event record message looks like:

dmp Thu Apr 5 18:54:56 PDT 2001 /vol/vol0/home(5) Start (level 0, NDMP)

In the particular example:

TYPE
= dmp

TIME_STAMP
= Thu Apr 5 18:54:56 PDT 2001

IDENTIFIER
= /vol/vol0/home(5)

EVENT
= Start

EVENT_INFO
= level 0, NDMP

All event messages go to /etc/log/backup. Every Sunday at 00:00, backup is rotated to backup.0, backup.0 is moved to backup.1, and so on. Up to 6 log files (spanning up to 6 weeks) are kept.

The registry option backup.log.enable controls the enabling and disabling of the logging with values on and off respectively. The functionality is enabled by default. (See options for how to set options.)
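
For example, the logging could be switched off and back on through the options command, using the option named above:

options backup.log.enable off
options backup.log.enable on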

FILES

/etc/log/backup
backup log file for the current week.

/etc/log/backup.[0-5]
backup log files for previous weeks.

SEE ALSO

dump , restore , options


Table of Contents

7-mode Manual Pages

backup

July 7th, 2009

Table of Contents

NAME

backup – manages backups

SYNOPSIS

backup status [ <ID> ]

backup terminate <ID>

DESCRIPTION

The backup commands provide facilities to list and manipulate backups on a filer.

A backup job runs on a filer as a process that copies a file system or a subset of it to secondary media, usually tapes. Data can be restored from the secondary media in case the original copy is lost. There are several types of backup processes that run on the filers:

dump
runs natively on the filer.

NDMP
driven by a third-party client through the NDMP protocol.

RESTARTABLE
A failed dump that can be restarted.

USAGE

backup status [ <ID> ]
displays all active instances of backup jobs on the filer. For each backup, the backup status command lists the following information:

ID
The unique ID that is assigned to the backup and persists across reboots until the backup completes successfully or is terminated. After that, the ID can be recycled for another backup.

State
The state can either be ACTIVE or RESTARTABLE. ACTIVE state indicates that the process is currently running; RESTARTABLE means the process is suspended and can be resumed.

Type
Either dump or NDMP.

Device
The current device. It is left blank for RESTARTABLE dumps since they are not running and thus do not have a current device.

Start Date
The time and date that the backup first started.

Level
The level of the backup.

Path
Points to the tree that is being backed up.

An example of the backup status command output:

  ID  State        Type  Device   Start Date   Level  Path
  --  -----------  ----  ------  ------------  -----  ---------------
   0  ACTIVE       NDMP  urst0a  Nov 28 00:22    0    /vol/vol0/
   1  RESTARTABLE  dump          Nov 29 00:22    1    /vol/vol1/

If a specific ID is provided, the backup status command displays more detailed information for the corresponding backup.
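
Using the IDs from the sample output above, the detailed view for the active NDMP backup might be requested with:

backup status 0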

backup terminate <ID>
A RESTARTABLE dump, though not actively running, retains a snapshot and other file system resources. To release the resources, the user can explicitly terminate a RESTARTABLE dump. Once terminated, it cannot be restarted.
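
Continuing the example above, the suspended dump (ID 1) could be terminated, releasing its snapshot and other resources:

backup terminate 1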

SEE ALSO

dump


Table of Contents



7-mode Manual Pages

autosupport

July 7th, 2009

Table of Contents

NAME

autosupport – notification daemon

SYNOPSIS

Data ONTAP is capable of sending automated notification to Customer Support at Network Appliance and/or to other designated addressees in certain situations. The notification contains useful information to help them solve or recognize problems quickly and proactively. The system can also be configured to send a short alert notification containing only the reason for the alert to a separate list of recipients. This notification is sent only for critical events that might require some corrective action and can be useful for Administrators with alphanumeric pagers that can accept short email messages.

DESCRIPTION

The autosupport mechanism will use SMTP if there are any (user configured) destination email addresses set in the autosupport.to option. If autosupport.support.enable is on then autosupports will also be sent to Network Appliance. Autosupports sent to Network Appliance may be transmitted by SMTP or by HTTP as specified in the autosupport.support.transport option.

If SMTP is used then the autosupport mechanism contacts a mail host that is listening on the SMTP port (25) to send email. A list of up to 5 mailhosts can be specified by using the autosupport.mailhost option, and they will be accessed in the order specified until one of them answers as a mailhost. Email is then sent through the successful mailhost connection to the destination email addresses specified in the autosupport.to option. Note that the autosupport.to option only allows 5 email addresses. To send to more than 5 recipients, create a local alias, or distribution list, and add that as the recipient.
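
A sketch of this configuration, using hypothetical host and recipient names:

options autosupport.mailhost mailhost1.example.com,mailhost2.example.com
options autosupport.to storage-admins@example.com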

If autosupport.support.enable is on then a copy of the autosupport message is also sent to Network Appliance as follows:

If autosupport.support.transport is smtp then the copy of the autosupport is emailed to the destination specified in autosupport.support.to and the same mailhost picking algorithm is used as above.

If autosupport.support.transport is http then a direct connection to the location specified in autosupport.support.url is made and the autosupport is transmitted to Network Appliance via HTTP POST.

The autosupport mechanism is triggered automatically once a week by the kernel to send information before backing up the messages file. It can also be invoked to send the information through the options command. Autosupport mail will also be sent on events that require corrective action from the System Administrator. Finally, the autosupport mechanism will send notification upon system reboot from disk.

To accommodate multiple delivery methods and destinations and to preserve time dependent values, the outgoing autosupport messages are now spooled in /etc/log/autosupport. Autosupport processing will attempt to deliver all (currently undelivered) messages until the autosupport.retry.count has been reached or until subsequent autosupport messages "fill the spool" such that the oldest (undelivered) messages are forced to be dropped. The spool size is currently 40 messages.

The subject line of the mail sent by the autosupport mechanism contains a text string to identify the reason for the notification. The subject also contains a relative prioritization of the message, using syslog severity levels from DEBUG to EMERGENCY (see syslog.conf ). The messages and other information in the notification should be used to check on the problem being reported.

The setup command tries to configure autosupport as follows:

If a mailhost is specified, it adds an entry for mailhost to the /etc/hosts file.

Setup also queries for autosupport.from information.

OPTIONS

Autosupport features are manipulated through the options command (see options ). The available options are as follows:

autosupport.cifs.verbose
If on, includes CIFS session and share information in autosupport messages. If off, those sections are omitted. The default is off.

autosupport.content
The type of content that the autosupport notification should contain. Allowable values are complete and minimal. The default value is complete. The minimal option allows the delivery of a "sanitized" and smaller version of the autosupport, at the cost of reduced support from Network Appliance. Please contact Network Appliance if you feel you need to use the minimal option. The complete option is the traditional (and default) form of autosupport. If this option is changed from complete to minimal then all previous and pending autosupport messages will be deleted under the assumption that complete messages should not be transmitted.

autosupport.doit
Triggers the autosupport daemon to send an autosupport notification immediately. A text word entered as the option is sent in the notification subject line and should be used to explain the reason for the notification.
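
For example, a notification could be triggered immediately with a subject keyword of your choosing (the word TESTING here is arbitrary):

options autosupport.doit TESTING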

autosupport.enable
Enables/disables the autosupport notification features (see autosupport ). The default is on, which causes autosupport notifications to be sent. This option overrides the autosupport.support.enable option.

autosupport.from
Defines the user to be designated as the sender of the notification. The default is postmaster@your.domain. Email replies from Network Appliance will be sent to this address.

autosupport.local.nht_data.enable
Enables/disables the NHT data autosupport to be sent to the recipients listed in the autosupport.to option. NHT data is the binary, internal log data from each disk drive, and in general is not parsable by anyone other than Network Appliance. There is no customer data in the NHT autosupport. The default for this option is off.

autosupport.local.performance_data.enable
Enables/disables performance data autosupport to be sent to the recipients listed in autosupport.to. The performance autosupport contains hourly samples of system performance counters, and in general is only useful to Network Appliance. The default is off.

autosupport.mailhost
Defines the list of up to 5 mailhost names. Enter the host names as a comma-separated list with no spaces in between. The default is an empty list.

autosupport.minimal.subject.id
Defines the type of string that is used in the identification portion of the subject line when autosupport.content is set to minimal. Allowable values are systemid and hostname. The default is systemid.

autosupport.noteto
Defines the list of recipients for the autosupport short note email. Up to 5 mail addresses are allowed. Enter the addresses as a comma-separated list with no spaces in between. The default is an empty list to disable short note emails.

autosupport.nht_data.enable
Enables/disables the generation of the Health Trigger (NHT) data autosupport. The default is off.

autosupport.performance_data.enable
Enables/disables hourly sampling of system performance data, and weekly creation of a performance data autosupport. The default is on.

autosupport.retry.count
Number of times to try resending the mail before giving up and dropping the mail. The minimum is 5; the maximum is 4294967295. The default is 15.

autosupport.retry.interval
Time in minutes to delay before trying to send the autosupport again. Minimum is 30 seconds, maximum is 1 day. Values may end with `s’, `m’ or `h’ to indicate seconds, minutes or hours respectively, if no units are specified than input is
assumed to be in seconds. The default value is 4m.
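
For instance, either of the following would set a 10-minute retry delay; the second form relies on the default unit of seconds:

options autosupport.retry.interval 10m
options autosupport.retry.interval 600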

autosupport.support.enable
Enables/disables the autosupport notification to Network Appliance. The default is on, which causes autosupport notifications to be sent directly to Network Appliance as described by the autosupport.support.transport option. This option is superseded (overridden) by the value of autosupport.enable.

autosupport.support.proxy
Allows the setting of an http based proxy if autosupport.support.transport is https or http. The default
for this option is the empty string, implying no proxy is necessary.

autosupport.support.to
This option is read only; it shows where autosupport notifications to Network Appliance are sent if autosupport.support.transport is smtp.

autosupport.support.transport
Allows setting the type of delivery desired for autosupport notifications that are destined for Network Appliance. Allowed values are https, http (for direct web-based posting) or smtp (for traditional email). The default value is https. Note that http and https may (depending on local network configuration) require that the autosupport.support.proxy option be set correctly. Also, smtp requires that autosupport.mailhost be configured correctly before autosupport delivery can be successful.
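
As a sketch, switching delivery to HTTPS through a proxy might look like the following; the proxy host and port are hypothetical:

options autosupport.support.transport https
options autosupport.support.proxy proxy.example.com:8080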

autosupport.support.url
This option is read only; it shows where autosupport notifications to Network Appliance are sent if autosupport.support.transport is https or http.

autosupport.throttle
Enables autosupport throttling (see autosupport ). When too many autosupports are sent in too short a time, additional messages of the same type will be dropped. Valid values for this option are on or off. The default value for this option is on.

autosupport.to
Defines the list of recipients for the autosupport email notification. Up to 5 mail addresses are allowed. Enter the addresses as a comma-separated list with no spaces in between. The default is an empty list. Note that it is no longer necessary to use the standard Network Appliance autosupport email address in this field to direct autosupport messages to Network Appliance. Please use autosupport.support.enable instead.
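
For example, with hypothetical addresses (remember the 5-address limit; use a distribution list for more recipients):

options autosupport.to admin1@example.com,admin2@example.com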

CONTENTS

A complete autosupport will contain the following information. Note that some sections are configurable, and/or available depending on what features are licensed. The order given is the general order of appearance in the autosupport message itself.

Generation date and timestamp

Software Version

System ID

Hostname

SNMP contact name (if specified)

SNMP location (if specified)

Partner System ID (if clustered)

Partner Hostname (if clustered)

Cluster Node Status (if clustered)

Console language type

sysconfig -a output

sysconfig -c output

sysconfig -d output

System Serial Number

Software Licenses (scrambled prior to transmission)

Option settings

availtime output

cf monitor all output (if clustered)

ic stats performance output (if clustered with VIA)

ic stats error -v output (if clustered with VIA)

snet stats -v output (if clustered with SNET)

ifconfig -a output

ifstat -a output

vlan stat output

vif status output

nis info output

nfsstat -c output (if licensed)

cifs stat output (if licensed)

cifs sessions summary (if licensed)

cifs sessions output (if licensed and enabled)

cifs shares summary (if licensed)

cifs shares output (if licensed and enabled)

vol status -l (if cifs is licensed)

httpstat output

vfiler status -a output (if licensed)

df output

df -i output

snap sched output

vol status -v output

vol status output

vol status -c output

vol scrub status -v output

sysconfig -r output

fcstat fcal_stats output

fcstat device_map output

fcstat link_stats output

ECC Memory Scrubber Statistics

ems event status output

ems log status output

registry values

perf report -t output

storage show adapter -a output

storage show hub -a output

storage show disk -a output

storage show fabric output

storage show switch output

storage show port output

EMS log file (if enabled)

/etc/messages content

Parity Inconsistency information

WAFL_check logs

TYPES

The following types of autosupport messages, with their associated severity, can be generated automatically. The autosupport message text is in bold, and the LOG_XXX value is the syslog severity level. Note that text inside of square brackets ([]) is descriptive and is not static for any given autosupport message of that type.

BATTERY_LOW!!!
LOG_ALERT

BMC_EVENT: BUS ERROR
LOG_ERR

BMC_EVENT: POST ERROR
LOG_ERR

CLUSTER DOWNREV BOOT FIRMWARE
LOG_CRIT

CLUSTER ERROR: DISK/SHELF COUNT MISMATCH
LOG_EMERG

CLUSTER GIVEBACK COMPLETE
LOG_INFO

CLUSTER TAKEOVER COMPLETE AUTOMATIC
LOG_ALERT

CLUSTER TAKEOVER COMPLETE MANUAL
LOG_INFO

CLUSTER TAKEOVER FAILED
LOG_INFO

CONFIGURATION_ERROR!!!
LOG_ALERT

CPU FAN WARNING - [fan]
LOG_WARNING

DEVICE_QUALIFICATION_FAILED
LOG_CRIT

DISK CONFIGURATION ERROR
LOG_ALERT

DISK RECONSTRUCTION FAILED!!
LOG_ALERT

DISK_FAIL!!! - Bypassed by ESH
LOG_ALERT

DISK_FAIL!!!
LOG_ALERT

DISK_FAILURE_PREDICTED!!!
LOG_ALERT

DISK_FIRMWARE_NEEDED_UPDATE!!!
LOG_EMERG

DISK_IO_DEGRADED
LOG_WARNING

DISK_LOW_THRUPUT
LOG_NOTICE

DISK_RECOVERED_ERRORS
LOG_WARNING

DISK_SCRUB!!!
LOG_EMERG

FC-AL LINK_FAILURE!!!
LOG_ERR

FC-AL RECOVERABLE ERRORS
LOG_WARNING

OVER_TEMPERATURE_SHUTDOWN!!!
LOG_EMERG

OVER_TEMPERATURE_WARNING!!!
LOG_EMERG

PARTNER DOWN, TAKEOVER IMPOSSIBLE
LOG_ALERT

POSSIBLE BAD RAM
LOG_ERR

POSSIBLE UNLINKED INODE
LOG_ERR

REBOOT (CLUSTER TAKEOVER)
LOG_ALERT

REBOOT (after WAFL_check)
LOG_INFO

REBOOT (after entering firmware)
LOG_INFO

REBOOT (after giveback)
LOG_INFO

REBOOT (halt command)
LOG_INFO

REBOOT (internal halt)
LOG_INFO

REBOOT (internal reboot)
LOG_INFO

REBOOT (panic)
LOG_CRIT

REBOOT (power glitch)
LOG_INFO

REBOOT (power on)
LOG_INFO

REBOOT (reboot command)
LOG_INFO

REBOOT (watchdog reset)
LOG_CRIT

REBOOT
LOG_INFO

SHELF COOLING UNIT FAILED
LOG_EMERG

SHELF COOLING UNIT FAILED
LOG_WARNING

SHELF_FAULT!!!
LOG_ALERT

SNMP USER DEFINED TRAP
LOG_INFO

SPARE_FAIL!!!
LOG_ALERT

SYSTEM_CONFIGURATION_CRITICAL_ERROR
LOG_CRIT

SYSTEM_CONFIGURATION_ERROR
LOG_ERR

UNDER_TEMPERATURE_SHUTDOWN!!!
LOG_EMERG

UNDER_TEMPERATURE_WARNING!!!
LOG_EMERG

USER_TRIGGERED ([user input from autosupport.doit])
LOG_INFO

WAFL_check!!!
LOG_ALERT

WEEKLY_LOG
LOG_INFO

[EMS event]
LOG_INFO

[fan] FAN_FAIL!!!
LOG_ALERT

[mini core]
LOG_CRIT

[power supply failure]
LOG_ALERT

[power supply] POWER_SUPPLY_DEGRADED!!!
LOG_ALERT

[shelf over temperature critical]
LOG_EMERG

CLUSTER CONSIDERATIONS

The autosupport email messages from a filer in a cluster are different from the autosupport email messages from a standalone filer in the following ways:

The subject in the autosupport email messages from a filer in a cluster reads, “Cluster notification, ” instead of “System notification.”

The autosupport email messages from a filer in a cluster contain information about its partner, such as the partner system ID and the partner host name.

In takeover mode, if you reboot the live filer, two autosupport email messages notify the email recipients of the reboot: one is from the live filer and one is from the failed filer.

The live filer sends an autosupport email message after it finishes the takeover process.

SEE ALSO

options , partner , setup , hosts , RFC821


Table of Contents

7-mode Manual Pages

auditlog

July 7th, 2009

Table of Contents

NAME

auditlog – contains an audit record of recent administrative activity

SYNOPSIS

<logdir>/auditlog

<logdir> is /etc/log for filers and /logs for NetCache appliances.

DESCRIPTION

If the option auditlog.enable is on, the system logs all input to the system at the console/telnet shell and via rsh to the auditlog file. The data output by commands executed in this fashion is also logged to auditlog. Administrative servlet invocations (via HTTP, typically from FilerView) and API calls made via the ONTAPI interface are also logged to the auditlog. A typical message is:

Wed Feb 9 17:34:09 GMT [rshd_0:auditlog]: root:OUT:date: Wed Feb 9 17:34:09 GMT 2000

This indicates that there was an rsh session around Wed Feb 9 17:34:09 GMT which caused the date command to be executed. The user performing the command was root. The type of log is data output by the system as indicated by the OUT keyword.

Commands typed at the filer’s console or executed by rsh are designated by the IN keyword as in:

Wed Feb 9 17:34:03 GMT [rshd_0:auditlog]: :IN:rsh shell: RSH INPUT COMMAND is date

The start and end of an rsh session are specially demarcated as in

Wed Feb 9 17:34:09 GMT [rshd_0:auditlog]: root:START:rsh shell:orbit.eng.mycompany.com

and

Wed Feb 9 17:34:09 GMT [rshd_0:auditlog]: root:END:rsh shell:

The maximum size of the auditlog file is controlled by the auditlog.max_file_size option. If the file gets to this size, it is rotated (see below).

Every Saturday at 24:00, <logdir>/auditlog is moved to <logdir>/auditlog.0, <logdir>/auditlog.0 is moved to <logdir>/auditlog.1, and so on. This process is called rotation. Auditlog files are saved for a total of six weeks, if they do not overflow.

If you want to forward audit log messages to a remote syslog log host (one that accepts syslog messages via the BSD Syslog protocol specified in RFC 3164), modify the filer’s /etc/syslog.conf file to forward messages from the filer’s "local7" facility to the remote host. Do this by adding a line like:

local7.*    @1.2.3.4

to /etc/syslog.conf. An IP address has been used here, but a valid DNS name could also be used. Note that using a DNS name can fail if the filer is unable to resolve the name given in the file. If that happens, your messages will not be forwarded.

On the log host, you’ll need to modify the syslog daemon’s configuration file to redirect syslog message traffic from the "local7" facility to the appropriate log file. That is typically done by adding a line similar to the one shown above for the filer:

local7.*    /var/logs/filer_auditlogs

Then restart the daemon on the log host, or send an appropriate signal to it. See the documentation for your log host’s syslog daemon for more information on how to make that configuration change.
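
On many BSD-style log hosts, re-reading the configuration amounts to sending syslogd a HUP signal; the PID file path below is typical but varies by system:

kill -HUP `cat /var/run/syslog.pid`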

FILES

<logdir>/auditlog
auditlog file for the current week.

<logdir>/auditlog.[0-5]
auditlog files for previous weeks.

SEE ALSO

options , syslog.conf


Table of Contents

7-mode Manual Pages



This site is not affiliated with or sponsored in any way by NetApp or any other company mentioned within.