Thursday, November 25, 2010
netapp command summary
sysconfig -d : shows information about the disks attached to the filer
version : shows the netapp Ontap OS version.
uptime : shows the filer uptime
dns info : shows the dns resolvers, the number of hits and misses, and other info
nis info : shows the nis domain name, yp servers etc.
rdfile : Like "cat" in Linux, used to read contents of text files.
wrfile : Creates/Overwrites a file. Similar to "cat > filename" in Linux
aggr status : Shows the aggregate status
aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads etc.
vol status : Shows the volume information
vol status -s : Displays the spare disks on the filer
vol status -f : Displays the failed disks on the filer
vol status -r : Shows the raid configuration, reconstruction information of the disks
df -h : Displays volume disk usage
df -i : Shows the inode counts of all the volumes
df -Ah : Shows "df" information of the aggregate
license : Displays/add/removes license on a netapp filer
maxfiles : Displays and adds more inodes to a volume
aggr create : Creates aggregate
vol create : Creates volume in an aggregate
vol offline : Offlines a volume
vol online : Onlines a volume
vol destroy : Destroys and removes a volume
vol size [+|-] : Resize a volume in netapp filer
vol options : Displays/Changes volume options in a netapp filer
qtree create : Creates qtree
qtree status : Displays the status of qtrees
quota on : Enables quota on a netapp filer
quota off : Disables quota
quota resize : Resizes quota
quota report : Reports the quota and usage
snap list : Displays all snapshots on a volume
snap create : Create snapshot
snap sched : Schedule snapshot creation
snap reserve : Display/set snapshot reserve space in volume
/etc/exports : File that manages the NFS exports
rdfile /etc/exports : Read the NFS exports file
wrfile /etc/exports : Write to NFS exports file
exportfs -a : Exports all the filesystems listed in /etc/exports
cifs setup : Setup cifs
cifs shares : Creates/displays cifs shares
cifs access : Changes access of cifs shares
lun create : Creates iscsi or fcp luns on a netapp filer
lun map : Maps lun to an igroup
lun show : Show all the luns on a filer
igroup create : Creates netapp igroup
lun stats : Show lun I/O statistics
disk show : Shows all the disks on the filer
disk zero spares : Zeros the spare disks
disk_fw_update : Upgrades the disk firmware on all disks
options : Display/Set options on netapp filer
options nfs : Display/Set NFS options
options timed : Display/Set NTP options on netapp.
options autosupport : Display/Set autosupport options
options cifs : Display/Set cifs options
options tcp : Display/Set TCP options
options net : Display/Set network options
ndmpcopy : Initiates ndmpcopy
ndmpd status : Displays status of ndmpd
ndmpd killall : Terminates all the ndmpd processes.
ifconfig : Displays/Sets IP address on a network/vif interface
vif create : Creates a VIF (bonding/trunking/teaming)
vif status : Displays status of a vif
netstat : Displays network statistics
sysstat -us 1 : begins a 1 second sample of the filer's current utilization (Ctrl-C to end)
nfsstat : Shows nfs statistics
nfsstat -l : Displays nfs stats per client
nfs_hist : Displays nfs histogram
statit : begins/ends a performance workload sampling [-b starts / -e ends]
stats : Displays stats for every counter on netapp. Read stats man page for more info
ifstat : Displays Network interface stats
qtree stats : displays I/O stats of qtree
environment : display environment status on shelves and chassis of the filer
storage show : Shows storage component details
snapmirror initialize : Initialize a snapmirror relation
snapmirror update : Manually Update snapmirror relation
snapmirror resync : Resyncs a broken snapmirror
snapmirror quiesce : Quiesces a snapmirror relation
snapmirror break : Breaks a snapmirror relation
snapmirror abort : Abort a running snapmirror
snapmirror status : Shows snapmirror status
lock status -h : Displays locks held by filer
sm_mon : Manage the locks
storage download shelf : Installs the shelf firmware
software get : Download the Netapp OS software
software install : Installs OS
download : Updates the installed OS
cf status : Displays cluster status
cf takeover : Takes over the cluster partner
cf giveback : Gives back control to the cluster partner
reboot : Reboots a filer
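Tying several of the commands above together, a typical provisioning session might look like the following (the filer prompt, aggregate, volume, and host names are made up for illustration):

filer> aggr create aggr1 -t raid_dp 16
filer> vol create vol1 aggr1 200g
filer> qtree create /vol/vol1/projects
filer> snap reserve vol1 10
filer> exportfs -p rw=adminhost /vol/vol1
filer> df -h vol1

This sketch creates a RAID-DP aggregate from 16 spare disks, carves a 200 GB flexible volume with a qtree out of it, reserves 10% of the volume for snapshots, exports it over NFS (persisting the rule to /etc/exports), and verifies the result.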
Thursday, September 16, 2010
inetd: the Internet super server
==> Understanding inetd
==> Configuring inetd
What does inetd do?
To provide services over the network, you need an application that understands your requests and can provide that service. These applications usually work behind the scenes, without any user interface or interaction. This means that they are not visible in the normal work environment.
They are called daemons or network daemons. A daemon "listens" on a specific port for incoming requests. When a request arrives, the daemon "wakes up" and begins to perform a specific operation.
If you want to have many services available on your machine, you need to run a daemon for each service. These daemons need memory, take up space in the process table, and so on. For frequently used services, standalone daemons make sense. For services that receive few requests, they're a waste of resources.
To help you optimize your resources, there is a super server that can be configured to listen on a list of ports and invoke the specific application the moment a request for it arrives. This daemon is inetd. Most network daemons offer you the option to run as stand-alone applications or to be invoked by inetd on demand. Although the inetd setup saves resources while the service is not in use, it creates a delay for the client: the daemon for the service has to be loaded and probably has to initialize itself before it's ready to serve the request. So think about which services you want to run as stand-alone applications and which ones you want to be called by inetd.
HTTP servers such as Apache are likely to be run as stand-alone services. Apache needs a relatively long load time, and you don't want your customers waiting too long while your Web site loads. Apache is highly optimized to run in stand-alone mode, so you probably don't want it to be spawned by inetd. A service such as in.telnetd, which enables logins from other machines, doesn't need much time to load. It makes a good candidate for being invoked by the super server.
You should be sure that you need inetd before you set it up. A client machine that primarily acts as a workstation has no need to run inetd, because it is not meant to provide any services for the network. You probably don't need inetd for home machines, either. It's basically a question of security: using inetd on machines that don't really need it may open up entry points for crackers, and inetd-enabled services such as telnet and rlogin can be used to exploit your machine. This doesn't mean that those services are generally insecure, but they may be a first source of information for potential crackers trying to enter your system.
During the installation, SuSE asks whether inetd should be started. The referring variable in /etc/rc.config is START_INETD. If it is set to yes, inetd will start at system boot time.
Configuring inetd
The inetd daemon reads several files in /etc when it's executed. The main configuration file is /etc/inetd.conf. It specifies which daemon should be started for which service.
The other files it reads are shared with other daemons and contain more general information about Internet services.
* /etc/services
This file maps port numbers to service names. Most of the port numbers below 1024 are assigned to special services (specified in RFC 1340). These assignments are reflected in this file. Every time you refer to a service by its name (as opposed to its port number), this name will be looked up in /etc/services and the referring port number will be used to process your request.
* /etc/rpc
Just like /etc/services, but here RPC (Remote Procedure Call) services are mapped to names.
* /etc/protocols
Another map. Here, protocols are specified and mapped to the numbers the kernel uses to distinguish between the different TCP/IP protocols.
Usually none of these files needs to be edited. The only candidate for changes is /etc/services. If you run some special daemon on your system (such as a database engine), you may want to add the port number of this service to this list.
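The format of /etc/services mentioned above is simple enough to parse by hand: each non-comment line maps a service name to port/protocol, optionally followed by aliases. A minimal Python sketch of such a lookup table, using a made-up snippet instead of the live file:

```python
# Parse /etc/services-style text into {(name, protocol): port}.
sample = """
# comments and blank lines are ignored
ftp      21/tcp
telnet   23/tcp
domain   53/tcp   nameserver   # alias: nameserver
domain   53/udp
"""

def parse_services(text):
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line:
            continue
        fields = line.split()
        port, proto = fields[1].split("/")
        table[(fields[0], proto)] = int(port)
        for alias in fields[2:]:               # aliases share the same port
            table[(alias, proto)] = int(port)
    return table

services = parse_services(sample)
print(services[("telnet", "tcp")])   # → 23
```

This mirrors what the C library's getservbyname() does for inetd: resolve a service name plus protocol to a port number.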
The inetd daemon needs these files because in its own configuration file /etc/inetd.conf, these names are used to specify which daemon should be started to serve each service. Each line defines one service. Comment lines (starting with a hash sign -- #) and empty lines are ignored. The definition lines have seven fields, which must all contain legal values:
* service name
The name of a valid service in the file /etc/services. For internal services (discussed in the next chapter), the service name must be the official name of the service (that is, the first entry in /etc/services). When used to specify a Sun-RPC based service, this field is a valid RPC service name as listed in the file /etc/rpc. The part to the right of the slash (/) is the RPC version number. This can simply be a single numeric value or a range of versions. A range is bounded by the lowest value and the highest value (for example, rusers/1-3).
* socket type
These should be one of the keywords stream, dgram, raw, rdm, or seqpacket, depending on whether the socket is a stream, datagram, raw, reliably delivered message, or sequenced packet socket.
* protocol
The protocol must be a valid protocol as listed in /etc/protocols. Examples might be tcp or udp. RPC-based services are specified with the rpc/tcp or rpc/udp service type.
* wait/nowait.max
The wait/nowait entry is applicable to datagram sockets only (other sockets should have a nowait entry in this space). If a datagram server connects to its peer, freeing the socket so that inetd can receive further messages on the socket, it is said to be a multithreaded server and should use the nowait entry. For datagram servers that process all incoming datagrams on a socket and eventually time out, the server is said to be single-threaded and should use a wait entry. Comsat(8) and talkd(8) are both examples of the latter type of datagram server. The optional max suffix (separated from wait or nowait by a period) specifies the maximum number of server instances that may be spawned from inetd within an interval of 60 seconds. When omitted, it defaults to 40.
* user.group
This entry should contain the user name of the user as whom the server should run. This allows for servers to be given less permission than root. An optional group name can be specified by appending a dot to the user name followed by the group name. This allows for servers to run with a different (primary) group id than specified in the password file. If a group is specified and user is not root, the supplementary groups associated with that user will still be set.
* server program
The server-program entry should contain the pathname of the program that is to be executed by inetd when a request is found on its socket. If inetd provides this service internally, this entry should be internal.
* server program arguments
This is the place to give arguments to the daemon. Note that they start with argv[0], which is the name of the program. If the service is provided internally, the keyword internal should take the place of this entry.
The inetd daemon provides several trivial services internally by use of routines within itself. These services are echo, discard, chargen (character generator), daytime (human readable time), and time (machine readable time, in the form of the number of seconds since midnight, January 1, 1900).
XREF These services are discussed more extensively in Chapter 12 .
Some examples can show you how it works. To enable telnet into the machine, you will need a line like this in /etc/inetd.conf:
telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd
Telnet is the name of the service; it uses a stream socket and the TCP protocol, and because it's a TCP service, you specify it as nowait. The user ID the daemon should run with is root. The last two arguments give the path and name of the actual server application and its arguments.
You always have to give the program name here, because most daemons require it as their argv[0].
An example for a datagram-based service is talk:
talk dgram udp wait root /usr/sbin/in.talkd in.talkd
You see the differences. Rather than stream you use dgram to specify a datagram type of service. The protocol is set to udp, and inetd has to wait until the daemon exits before listening for new connections.
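The seven-field structure described above lends itself to a quick parser. This Python sketch (an illustration, not part of any real inetd) splits a configuration line into its named fields, including the optional .max suffix on wait/nowait and the optional .group suffix on the user field:

```python
def parse_inetd_line(line):
    """Split one /etc/inetd.conf line into its seven documented fields."""
    fields = line.split(None, 6)       # keep server-program arguments together
    service, socktype, proto, waitmode, usergroup, program, args = fields
    wait, _, maxspawn = waitmode.partition(".")   # e.g. "nowait.60"
    user, _, group = usergroup.partition(".")     # e.g. "nobody.nogroup"
    return {
        "service": service,
        "socket_type": socktype,
        "protocol": proto,
        "wait": wait == "wait",
        "max_per_minute": int(maxspawn) if maxspawn else 40,  # inetd default
        "user": user,
        "group": group or None,
        "server": program,
        "argv": args.split(),          # argv[0] is the program name
    }

entry = parse_inetd_line(
    "telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd")
print(entry["wait"], entry["server"])   # → False /usr/sbin/in.telnetd
```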
Now if you look into the preinstalled /etc/inetd.conf file, you will notice that almost all services are using /usr/sbin/tcpd as a daemon for the service, and give the actual service daemon on the command line for tcpd. This is done to increase network security. The tcpd daemon acts as a wrapper and provides lists of hosts who are allowed to use this service. For the moment, think about tcpd as an in-between step that starts the daemon after checking whether the person requesting the service is actually allowed to do this. The SuSE default configuration allows everybody to connect to any port.
Friday, May 14, 2010
how Netapp Snapvault works
Compiled by Rajeev
Netapp SnapVault is a heterogeneous disk-to-disk backup solution for Netapp filers and heterogeneous OS systems (Windows, Linux, Solaris, HP-UX, and AIX). Basically, Snapvault uses Snapshot technology to store online backups. In the event of data loss or corruption on a filer, the backup data can be restored from the SnapVault filer with less downtime. It has significant advantages over traditional tape backups, such as:
• Media cost savings
• Reduced backup windows versus traditional tape-based backup
• No backup/recovery failures due to media errors
• Simple and fast recovery of corrupted or destroyed data
Snapvault consists of two major entities – snapvault clients and a snapvault storage server. A snapvault client (Netapp filers and unix/windows servers) is the system whose data should be backed up. The SnapVault server is a Netapp filer – which pulls the data from clients and backs it up.
Snapvault supports two types of backup infrastructure:
1. Netapp to Netapp backups
2. Server to Netapp backups
For Server to Netapp Snapvault, we need to install the Open Systems SnapVault client software provided by Netapp on the servers. Using the snapvault agent software, the Snapvault server can pull and back up data onto the backup qtrees. SnapVault protects data on a client system by maintaining a number of read-only versions (snapshots) of that data on a SnapVault filer. The replicated data on the snapvault server can be accessed via NFS or CIFS, and client systems can restore entire directories or single files directly from the snapvault filer. Snapvault requires primary and secondary licenses.
How snapvault works
When snapvault is set up, initially a complete copy of the data set is pulled across the network to the SnapVault filer. This initial, or baseline, transfer may take some time to complete, because it duplicates the entire source data set on the server – much like a level-zero backup to tape. Each subsequent backup transfers only the data blocks that have changed since the previous backup. When the initial full backup is performed, the SnapVault filer stores the data in a qtree and creates a snapshot image of the volume holding the backed-up data. SnapVault creates a new snapshot copy with every transfer, and allows retention of a large number of copies according to a schedule configured by the backup administrator. Each copy consumes an amount of disk space proportional to the differences between it and the previous copy.
Snapvault commands
Initial step to setup Snapvault backup between filers is to install snapvault license and enable snapvault on all the source and destination filers.
Source filer – filer1
filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=svfiler
Destination filer – svfiler
svfiler> license add XXXXX
svfiler> options snapvault.enable on
svfiler> options snapvault.access host=filer1
In our GeekyFacts.com tutorial, consider svfiler:/vol/demo_vault as the snapvault destination volume, where all backups are stored. The source data is filer1:/vol/datasource/qtree1. As we have to manage all the backups on the destination filer (svfiler) using snapvault, manually disable scheduled snapshots on the destination volume; the snapshots will be managed by Snapvault instead. Disable Netapp scheduled snapshots with the command below.
svfiler> snap sched demo_vault 0 0 0
Creating the initial backup: Initiate the initial baseline data transfer (the first full backup) of the data from source to destination before scheduling snapvault backups. On the destination filer, execute the command below to initiate the baseline transfer. The time taken to complete depends on the size of the data in the source qtree and the network bandwidth. Use “snapvault status” on the source/destination filers to monitor the baseline transfer's progress.
svfiler> snapvault start -S filer1:/vol/datasource/qtree1 svfiler:/vol/demo_vault/qtree1
Creating backup schedules: Once the initial baseline transfer is completed, snapvault schedules have to be created for incremental backups. The retention period of the backups depends on the schedule created. The snapshot name should be prefixed with “sv_”. The schedule is of the form count[@hour_list][@day_list] – for example, 2@0-22 keeps 2 copies, taken hourly between 00:00 and 22:00.
On source filer:
For example, let us create the schedules on the source as below: 2 hourly, 2 daily and 2 weekly snapvault snapshots. These snapshot copies on the source enable administrators to recover directly from the source filer without accessing any copies on the destination, which makes restores more rapid. However, it is not necessary to retain a large number of copies on the primary; higher retention levels are configured on the secondary. The commands below show how to create hourly, daily & weekly snapvault snapshots.
filer1> snapvault snap sched datasource sv_hourly 2@0-22
filer1> snapvault snap sched datasource sv_daily 2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun
On snapvault filer:
Based on the retention period of the backups you need, configure the snapvault schedules on the destination accordingly. Here, the sv_hourly schedule checks all source qtrees once per hour for a new snapshot copy called sv_hourly.0. If it finds one, it updates the SnapVault qtrees with new data from the primary and then takes a snapshot copy on the destination volume, also called sv_hourly.0. If you don't use the -x option, the secondary does not contact the primary and transfer the snapshot copy; it just creates a snapshot copy of the destination volume.
svfiler> snapvault snap sched -x demo_vault sv_hourly 6@0-22
svfiler> snapvault snap sched -x demo_vault sv_daily 14@23@sun-fri
svfiler> snapvault snap sched -x demo_vault sv_weekly 6@23@sun
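The schedule strings used above pack up to three things into one token: a retention count, an hour list, and optionally a day list. A small Python helper (hypothetical, just to make the format explicit) that unpacks them:

```python
def parse_sv_schedule(spec):
    """Unpack a 'count[@hour_list[@day_list]]' snapvault schedule string."""
    parts = spec.split("@")
    return {
        "count": int(parts[0]),                         # snapshots to retain
        "hours": parts[1] if len(parts) > 1 else None,  # e.g. "0-22" or "23"
        "days": parts[2] if len(parts) > 2 else None,   # e.g. "sun-fri"
    }

print(parse_sv_schedule("14@23@sun-fri"))
# → {'count': 14, 'hours': '23', 'days': 'sun-fri'}
```

So the sv_daily schedule above means: keep 14 copies, taken at 23:00, Sunday through Friday.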
To check the snapvault status, use the command "snapvault status" on either the source or destination filer. And to see the backups, do a "snap list" on the destination volume - that will list all the backup copies, their creation times etc.
Restoring data : Restoring data is simple: mount the snapvault destination volume through NFS or CIFS and copy the required data from the backup snapshot.
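Besides copying files back over NFS/CIFS, the snapvault restore command can push a backed-up qtree from the secondary back to the primary. A sketch using the same hypothetical names as above, run on the primary filer:

filer1> snapvault restore -S svfiler:/vol/demo_vault/qtree1 /vol/datasource/qtree1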
Tuesday, April 13, 2010
Net App filer in Vmware
The other prerequisite you need is a Linux OS. I don't want to install some bloody Linux on my iMac, so I use VMware Fusion and the problem is solved. In this tutorial I use Ubuntu 8.04 LTS Server. You might be wondering why I chose Ubuntu; the simple answer is I don't know. Normally I use OpenSUSE from Novell, but in this case I really don't know. Strange things happen sometimes. Another side note: I will guide you through the installation, so don't panic if you are not a unix geek. But actually, as a non-console junkie, you should consider not buying an EMC / IBM / … NAS ☺
Create a VM with the normal settings: 1 CPU, 10 GB HD, NAT NIC and 512 MB RAM. Attach the OS iso and boot the whole thing. Using the default settings is a nice strategy for not-so-experienced users. I only changed the keyboard layout to “swiss german (mac)” and, in the “install additional software” screen, checked “ssh server” so that file transfer will be easy. Create a user called “geek”. Then after next, next, …, next, next, the server has finished the miraculous installation. As a windows administrator, the first thing I did was enter in the console
sudo reboot
After the restart I wanted to log in as the root user, but the login doesn't accept the root password. After some time I decided to log in as user “geek”. Because it is only a testing environment, I decided to enable root login in the console
sudo passwd root
After setting the password I started a console session in my Mac shell and connected with ssh. You don't need to do this, but with an ssh session you are able to use copy and paste with the host OS. For the poor windows users, putty is a nice ssh tool. Mac and Linux users just need to enter the following command in the console
ssh
The next thing you need to do is copy the download from the NetApp site, “7.3.1-tarfile-v22.tgz”, to the VM server. I use “Cyberduck”. For windows users a nice program is winscp. The Linux guys should use the mighty shell console. By the way, I created a folder named “/netapp”. So let us open the tar file
tar xvf /netapp/7.3.1-tarfile-v22.tgz
In the readme, NetApp gives us a hint that the simulator uses perl. To install perl on your machine use “apt-get install perl”; in my case perl was already installed, so I didn't need to. Okay, now we are ready to start installing the simulator. The strangest thing at this point was that I didn't have to deal with any problems; everything worked just fine. This is a bad sign.
cd /netapp/simulator
./setup.sh
and the installation starts. Some logs from the console behind
Script version 22 (18/Sep/2007)
Where to install to? [/sim]:
Would you like to install as a cluster? [no]:
Would you like full HTML/PDF FilerView documentation to be installed [yes]:
Continue with installation? [no]: yes
Creating /sim
Unpacking sim.tgz to /sim
Configured the simulators mac address to be [00:50:56:0:6c:5]
Please ensure the simulator is not running.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 10
The following disk types are available in MB:
Real (Usable)
a - 43 ( 14)
b - 62 ( 30)
c - 78 ( 45)
d - 129 ( 90)
e - 535 (450)
f - 1024 (900)
If you are unsure choose the default option a
What disk size would you like to use? [a]:
Disk adapter to put disks on? [0]:
Use DHCP on first boot? [yes]:
Ask for floppy boot? [no]:
Checking the default route…
You have a single network interface called eth0 (default route) . You will not be able to access the simulator from this Linux host. If this interface is marked DOWN in ifconfig then your simulator will crash.
Which network interface should the simulator use? [default]:
Your system has 455MB of free memory. The smallest simulator memory you should choose is 110MB. The maximum simulator memory is 415MB.
The recommended memory is 512MB.
Your original default appears to be too high. Seriously consider adjusting to below the maximum amount of 415MB.
How much memory would you like the simulator to use? [512]:
Create a new log for each session? [no]:
Overwrite the single log each time? [yes]:
Adding 10 additional disk(s).
Complete. Run /sim/runsim.sh to start the simulator.
Wow, this was very easy. Nice job, NetApp. The last row contains the hint on how to start the simulator, so let's go.
/sim/runsim.sh
Peng, klaap, kabumm, and an error appears on the console: “Error ./maytag.L: No such file or directory”. F.. After some time, maybe 4 hours of reading logs, trying this and that, and drinking some coffee, I figured out that I needed to install the 32-bit libraries on AMD64. This sounds funny, but it solved my problem. This was the penalty for the easy setup.
apt-get install ia32-libs
and try to start it up again. I deleted some info rows from the console log, but all questions should be present in the log below.
root@netapp:/netapp/simulator# /sim/runsim.sh
runsim.sh script version Script version 22 (18/Sep/2007)
This session is logged in /sim/sessionlogs/log
NetApp Release 7.3.1: Thu Jan 8 00:10:49 PST 2009
Copyright (c) 1992-2008 NetApp.
….
….
….
Do you want to enable IPv6? [n]: n
Do you want to configure virtual network interfaces? [n]:
Please enter the IP address for Network Interface ns0 [172.16.111.136]:
Please enter the netmask for Network Interface ns0 [255.255.255.0]:
Please enter media type for ns0 {100tx-fd, auto} [auto]:
Please enter the IP address for Network Interface ns1 []:
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the IPv4 default gateway [172.16.111.2]:
The administration host is given root access to the filer’s
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host:
Please enter timezone [GMT]:
Where is the filer located? []:
What language will be used for multi-protocol files (Type ? for list)?:
language not set
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]: Setting the administrative (root) password for mynetapp …
New password:
Retype new password:
Mon May 4 20:31:11 GMT [passwd.changed:info]: passwd for user ‘root’ changed.
….
….
….
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.
Your filer is currently visible to all systems using WINS. The WINS
name server currently configured is: [ 172.16.111.2 ].
(1) Keep the current WINS configuration
(2) Change the current WINS name server address(es)
(3) Disable WINS
Selection (1-3)? [1]:
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer
(1) Multiprotocol filer
(2) NTFS-only filer
Selection (1-2)? [1]:
CIFS requires local /etc/passwd and /etc/group files and default files
will be created. The default passwd file contains entries for ‘root’,
‘pcuser’, and ‘nobody’.
Enter the password for the root user []:
Retype the password:
The default name for this CIFS server is ‘MYNETAPP’.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.
(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer’s local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication
Selection (1-4)? [1]: 4
What is the name of the Workgroup? [WORKGROUP]:
CIFS – Starting SMB protocol…
Welcome to the WORKGROUP Windows(R) workgroup
CIFS local server is running.
Password:
mynetapp> Mon May 4 20:32:25 GMT [console_login_mgr:info]: root logged in from console
Mon May 4 20:32:31 GMT [nbt.nbns.registrationComplete:info]: NBT: All CIFS name registrations have completed for the local server.
mynetapp>
So this was not so hard. Now enjoy the world-class filer in your VMware.
Rajeev
source :- http://www.dambeck.ch/2009/05/10/net-app-filer-in-vmware/
Netapp Snapmirror Setup Guide
Snapmirror is a licensed utility in Netapp for doing data transfer across filers. Snapmirror works at the volume level or qtree level. Snapmirror is mainly used for disaster recovery and replication.
Snapmirror needs a source and destination filer. (When source and destination are the same filer, the snapmirror happens on the local filer itself. This is for when you have to replicate volumes inside a filer. If you need DR capabilities for a volume inside a filer, you have to try syncmirror.)
Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.
When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.
This guides you quickly through the Snapmirror setup and commands.
1) Enable Snapmirror on source and destination filer
source-filer> options snapmirror.enable
snapmirror.enable on
source-filer>
source-filer> options snapmirror.access
snapmirror.access legacy
source-filer>
2) Snapmirror Access
Make sure destination filer has snapmirror access to the source filer. The snapmirror filer's name or IP address should be in /etc/snapmirror.allow. Use wrfile to add entries to /etc/snapmirror.allow.
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
3) Initializing a Snapmirror relation
Volume snapmirror : Create a destination volume on the destination netapp filer, of the same size as the source volume or greater. For volume snapmirror, the destination volume should be in restricted mode. For example, let us consider we are snapmirroring a 100G volume – we create the destination volume and make it restricted.
destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy and compares the changed blocks. These changed blocks are sent as part of the update transfer.
Snapmirror is always destination filer driven. So the snapmirror initialize has to be done on destination filer. The below command starts the baseline transfer.
destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>
Qtree SnapMirror : For qtree SnapMirror, do not create the destination qtree; the snapmirror command creates it automatically. Creating a destination volume of the required size is sufficient.
Qtree SnapMirror determines changed data by first scanning the inode file for inodes that have changed, and then scanning the changed inodes belonging to the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.
destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
4) Monitoring the status : SnapMirror data transfer status can be monitored from either the source or the destination filer, using the "snapmirror status" command.
destination-filer> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
source-filer:demo_source destination-filer:demo_destination Uninitialized - Transferring (1690 MB done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree Uninitialized - Transferring (32 MB done)
destination-filer>
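The status output above is plain text, so it is easy to post-process from a monitoring host. As a hedged sketch (the column layout is assumed from the transcript above, and this parser is my illustration, not a NetApp tool), a few lines of Python can pull the state and lag of each relationship out of captured "snapmirror status" output:

```python
# Sketch: parse the tabular output of "snapmirror status" so a script
# can alert on lag or stuck transfers. The sample text mirrors the
# transcript above; the column order (Source, Destination, State, Lag,
# Status) is assumed fixed.
sample = """\
Snapmirror is on.
Source Destination State Lag Status
source-filer:demo_source destination-filer:demo_destination Uninitialized - Transferring (1690 MB done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree Uninitialized - Transferring (32 MB done)
"""

def parse_status(text):
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Data rows start with a "filer:path" style source field;
        # the banner and header lines contain no colon there.
        if parts and ":" in parts[0]:
            rows.append({"source": parts[0], "destination": parts[1],
                         "state": parts[2], "lag": parts[3],
                         "status": " ".join(parts[4:])})
    return rows

for row in parse_status(sample):
    print(row["source"], row["state"], row["lag"])
```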
5) SnapMirror schedule : This is the schedule used by the destination filer to update the mirror; it tells the SnapMirror scheduler when to initiate transfers. The schedule field contains either the word sync, to request synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month, and day of week.
To sync data on a scheduled frequency, set the schedule in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. To request synchronous mirroring instead, put "sync" in place of the cron-style schedule.
destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs daily at 9:00 pm
destination-filer>
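For reference, here are two more hedged /etc/snapmirror.conf sketches (the volume names are hypothetical, and you should verify option syntax against your ONTAP release's documentation). The third field is the arguments field ("-" above means defaults) and can carry options such as kbs= to throttle transfer bandwidth; the word sync in place of the four cron fields requests synchronous mirroring:

```
# synchronous mirroring: "sync" replaces the cron-style schedule
source-filer:demo_source destination-filer:demo_sync_dest - sync
# hourly mirror throttled to roughly 2000 KB/s via the arguments field
source-filer:demo_source destination-filer:demo_destination kbs=2000 0 * * *
```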
6) Other Snapmirror commands
* To break a SnapMirror relationship, run snapmirror quiesce followed by snapmirror break.
* To trigger a manual update, run snapmirror update.
* To resync a broken relationship, run snapmirror resync.
* To abort a running transfer, run snapmirror abort.
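Using the example volumes from this guide, the break-and-resync sequence can be sketched as follows (run on the destination filer; console output omitted, and as always the resync direction should be double-checked, since it discards changes on the resync target):

```
destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
# ... demo_destination is now writable; when done, resync to resume mirroring ...
destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination
```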
SnapMirror also provides multipath support. More than one physical path between a source and a destination system may be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load-balanced between these paths and provides failover in the event of a network outage.
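As a hedged sketch of the /etc/snapmirror.conf multipath syntax (the connection name, IP addresses, and volumes here are all hypothetical; verify the exact form against your ONTAP release's Data Protection guide), a named connection listing two address pairs is declared first, and the connection name is then used in place of the source filer name:

```
demo-conn = multi(10.0.1.10,10.0.2.10)(10.0.1.11,10.0.2.11)
demo-conn:demo_source destination-filer:demo_destination - 0 * * *
```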
To learn how to tune the performance of NetApp SnapMirror or SnapVault replication transfers and adjust the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.
Tags: howto, netapp, storage
6 comments:
Anonymous said...
its very good, simple and helpful
April 22, 2009 12:19 AM
Anonymous said...
Thank you - Works great!
May 4, 2009 7:33 AM
Anonymous said...
Thanks a lotttttttttt dude... great work..
this helped me a lotttttttt......
is there any way can you please do update the snap manager commands also.
it would be nice.
thanks once again...
July 22, 2009 8:36 AM
Veeresh said...
Thanks, it is very good and simple.
November 19, 2009 4:43 AM
Harryjames said...
Also please check the Hosts File , if you are working on simulator
you may want to edit the hosts file and make sure the IP's of nic 1 is on first ..
March 15, 2010 12:20 PM
Anonymous said...
Please Also Check Hosts File on filer1(source) if your working on simulators
If nic1 ip of filer1(source) is not listed to be first then add it to first by editing wrfile /etc/hosts
and to read its rdfile /etc/hosts
======================
http://unixfoo.blogspot.com/2009/01/netapp-snapmirror-setup-guide.html
======================
http://www.unix.com/unix-advanced-expert-users/46140-etc-mnttab-zero-length-i-have-done-silly-thing.html
Being the clever-clogs that I am, I have managed to clobber the contents of /etc/mnttab.
It started when I tried to unmount all the volumes in a particular Veritas disk group and neglected to include a suitable grep in my command line, thus attempting to unmount _all_ the filesystems on the server (in a for loop).
They all came back with a filesystem-busy message, so I was presuming no harm done. However, a df to check all was in order came back with no results, and likewise running mount.
When I went to look in the mnttab itself I find it's zero length !?!
Everything appears ok in that the filesystems are still actually mounted and read/write.
If I try a mount or umount I get the message
"mount: Inappropriate ioctl for device"
I suspect a reboot would sort me out but that's a big hammer and I'd rather get to the bottom of it.
What have I done and how in the world do I fix it?
Running Solaris 10 on a T2000, fairly recent patches applied. Bugger all running aside from base OS and SAN kit.
=================
I understand that /etc/mnttab is actually implemented as a screwy filesystem itself on Solaris. I read somewhere that:
umount /etc/mnttab
mount -F mntfs mnttab /etc/mnttab
might rebuild it. But I have never tried it. I guess you get to run the experiment.
=================
system() vs `backtick`
system() vs `backtick` by dovert (Initiate)
on Aug 06, 2004 at 19:02 UTC ( #380670=perlquestion )
Re: system() vs `backtick` by Prior Nacre V (Hermit) on Aug 06, 2004 at 19:21 UTC
The difference is: Can't offer any help on O/S issue. Regards, PN5
Re: system() vs `backtick` by dave_the_m (Vicar) on Aug 06, 2004 at 19:24 UTC
Dave.
Re: system() vs `backtick` by hmerrill (Friar) on Aug 06, 2004 at 19:43 UTC
Re: system() vs `backtick` by etcshadow (Priest) on Aug 06, 2004 at 20:36 UTC
Why does this matter? Well, some programs will produce different output (or just generally work differently) if they detect that they are attached to a terminal than if they are not. For example, at least on many Linux variants (can't speak for all *nixes), the ps command will format its output for a terminal (it sets the output width to the width of your terminal) if it sees that it is outputting to a terminal, and otherwise it will truncate its output width at 80 characters. Likewise, ls may use terminal escape codes to set the color of file names to indicate permissions or types. Also, ls may organize its output into a pretty-looking table when writing to a terminal, but make a nice neat list when NOT writing to a terminal. Anyway, an easy way to check this out with your program is via a useless use of cat, such as this:
And comparing the differences in the output. You can see the same thing (sorta) happening between system and backticks with this (which simply checks to see if the output stream is attached to a terminal or not):
As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos.
by ambrus (Monsignor) on Aug 06, 2004 at 23:01 UTC
"As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos." That is not needed here. The OP says that the command runs fine in backticks, but hangs when run with system(). The program probably waits for input (confirmation) when run on a terminal; that's why it hangs. What the OP actually needs is to fool the program into thinking it's not running interactively, by calling it through backticks or redirecting its stdout. This is much simpler to achieve. It might even be unnecessary if the program has some command-line switch to force it not to ask questions. Many programs like rm or fsck have such an option: if you give rm a file for which you do not have write access, it prompts you whether you want to delete it; you can override this with the -f switch or by redirecting its stdin. Running rm in backticks redirects the stdout, which is not enough for rm: it prompts for confirmation even if stderr is redirected too, and if you don't see the error it appears to hang.
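The tty-detection behaviour these replies describe can be demonstrated outside Perl. Below is a Python sketch (my illustration, not the original poster's code): a child process that checks isatty() sees a pipe, never a terminal, when its output is captured, which is what backticks do; when the output stream is inherited, as with system(), it sees whatever the parent's stdout is.

```python
# Demonstrate why capturing output (like Perl backticks) and inheriting
# it (like Perl's system()) can make the same child behave differently.
import subprocess
import sys

# A child that reports whether its stdout is a terminal.
child = [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"]

# Captured, like `backticks`: the child's stdout is a pipe, never a tty.
captured = subprocess.run(child, capture_output=True, text=True)
print(captured.stdout.strip())  # → False

# Inherited, like system(): the child sees the parent's stdout, which is
# a real terminal in an interactive session -- so programs may colorize,
# reformat, or stop to prompt for input and appear to hang.
subprocess.run(child)
```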
Re: system() vs `backtick` by sgifford (Prior) on Aug 06, 2004 at 22:00 UTC
Re: system() vs `backtick` by hossman (Parson) on Aug 07, 2004 at 00:31 UTC
Many people have pointed out the difference between `` and system(), but I'd like to suggest that the problem you're seeing might not be something you need to fix in your Perl script. The problem might be that in upgrading your RedHat distro, the program(s) you are executing using system() may have changed, such that if they detect they are running attached to a terminal, they wait for input on STDIN (and in the old version, the program may not have assumed this). It's also important to remember that system() and `` can/will pass your input to "/bin/sh -c" if it contains shell metacharacters, and on many newer systems /bin/sh is just a symlink to /bin/bash -- which may also be why you are now seeing a difference in the behavior of your script. Your old distribution may have genuinely had "sh" installed, which may have invoked your program(s) differently so that they didn't know they were attached to a terminal.
by etcshadow (Priest) on Aug 07, 2004 at 02:45 UTC