Thursday, December 9, 2010

How to disable SSH host key checking

Remote login using the SSH protocol is a frequent activity in today's internet world. With the SSH protocol, the onus is on the SSH client to verify the identity of the host to which it is connecting. The host identity is established by its SSH host key. Typically, the host key is auto-created during initial SSH installation setup.

By default, the SSH client verifies the host key against a local file containing known, trustworthy machines. This provides protection against possible Man-in-the-Middle attacks. However, there are situations in which you want to bypass this verification step. This article explains how to disable host key checking using OpenSSH, a popular free and open-source implementation of SSH.

When you login to a remote host for the first time, the remote host's host key is most likely unknown to the SSH client. The default behavior is to ask the user to confirm the fingerprint of the host key.

$ ssh Rajeev@192.168.0.100
The authenticity of host '192.168.0.100 (192.168.0.100)' can't be established.
RSA key fingerprint is 3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
Are you sure you want to continue connecting (yes/no)?


If your answer is yes, the SSH client continues login, and stores the host key locally in the file ~/.ssh/known_hosts. You only need to validate the host key the first time around: in subsequent logins, you will not be prompted to confirm it again.
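Each accepted host occupies one line in that file. As an illustrative sketch (the key material is elided here), an entry looks roughly like this:

$ cat ~/.ssh/known_hosts
192.168.0.100 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA...

The format is simply the host name (or IP address), the key type, and the base64-encoded public key.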

Yet, from time to time, when you try to remote login to the same host from the same origin, you may be refused with the following warning message:

$ ssh Rajeev@192.168.0.100
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
3f:1b:f4:bd:c5:aa:c1:1f:bf:4e:2e:cf:53:fa:d8:59.
Please contact your system administrator.
Add correct host key in /home/rajeev/.ssh/known_hosts to get rid of this message.
Offending key in /home/rajeev/.ssh/known_hosts:3
RSA host key for 192.168.0.100 has changed and you have requested strict checking.
Host key verification failed.$


There are multiple possible reasons why the remote host key changed. A Man-in-the-Middle attack is only one possible reason. Other possible reasons include:

* OpenSSH was re-installed on the remote host but, for whatever reason, the original host key was not restored.
* The remote host was replaced legitimately by another machine.


If you are sure that this is harmless, you can use either of the two methods below to trick OpenSSH into letting you login. But be warned that you will then be vulnerable to man-in-the-middle attacks.

The first method is to remove the remote host from the ~/.ssh/known_hosts file. Note that the warning message already tells you the line number in the known_hosts file that corresponds to the target remote host. The offending line in the above example is line 3 ("Offending key in /home/rajeev/.ssh/known_hosts:3").

You can use the following one liner to remove that one line (line 3) from the file.

$ sed -i 3d ~/.ssh/known_hosts
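If you would rather not count lines, OpenSSH's ssh-keygen can also remove all keys belonging to a given host by name or address (a sketch; on most versions this also saves the original file as ~/.ssh/known_hosts.old):

$ ssh-keygen -R 192.168.0.100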


Note that after removing the old key with either of the above commands, you will be prompted to confirm the host key fingerprint again the next time you run ssh to login.

The second method uses two openSSH parameters:

* StrictHostKeyChecking, and

* UserKnownHostsFile.


This method tricks SSH by configuring it to use an empty known_hosts file, and NOT to ask you to confirm the remote host identity key.

$ ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no rajeev@192.168.0.100
Warning: Permanently added '192.168.0.100' (RSA) to the list of known hosts.
rajeev@192.168.0.100's password:


The UserKnownHostsFile parameter specifies the database file to use for storing the user host keys (default is ~/.ssh/known_hosts).

The /dev/null file is a special system device file that discards anything and everything written to it, and when used as the input file, returns End Of File immediately.

By configuring the null device file as the host key database, SSH is fooled into thinking that the SSH client has never connected to any SSH server before, and so will never run into a mismatched host key.

The StrictHostKeyChecking parameter specifies whether SSH will automatically add new host keys to the host key database file. By setting it to no, the host key is added automatically, without user confirmation, for any first-time connection. Because the key database is the null device, every connection looks like a first-time connection to that SSH server host, so the host key is silently "added" each time -- and writing the key to /dev/null simply discards it while reporting success.


By specifying the above 2 SSH options on the command line, you can bypass host key checking for that particular SSH login. If you want to bypass host key checking on a permanent basis, you need to specify those same options in the SSH configuration file.

You can edit the global SSH configuration file (/etc/ssh/ssh_config) if you want to make the changes permanent for all users.

If you want to target a particular user, modify the user-specific SSH configuration file (~/.ssh/config). The instructions below apply to both files.

Suppose you want to bypass key checking for a particular subnet (192.168.0.0/24).

Add the following lines to the beginning of the SSH configuration file.

Host 192.168.0.*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null


Note that the configuration file should have a line like Host * followed by one or more parameter-value pairs. Host * means that it will match any host. Essentially, the parameters following Host * are the general defaults. Because the first matched value for each SSH parameter is used, you want to add the host-specific or subnet-specific parameters to the beginning of the file, as in the sketch below.
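For example, a sketch of how the top of such a configuration file might look, with the subnet-specific block placed before the general defaults (the Host * value shown is just the usual default):

Host 192.168.0.*
StrictHostKeyChecking no
UserKnownHostsFile=/dev/null

Host *
StrictHostKeyChecking ask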

As a final word of caution, unless you know what you are doing, it is probably best to bypass key checking on a case by case basis, rather than making blanket permanent changes to the SSH configuration files.

Thursday, November 25, 2010

netapp command summary

sysconfig -a : shows hardware configuration with more verbose information
sysconfig -d : shows information of the disk attached to the filer
version : shows the netapp Data ONTAP OS version.
uptime : shows the filer uptime
dns info : this shows the dns resolvers, the number of hits and misses and other info
nis info : this shows the nis domain name, yp servers etc.
rdfile : Like "cat" in Linux, used to read contents of text files.
wrfile : Creates/Overwrites a file. Similar to "cat > filename" in Linux
aggr status : Shows the aggregate status
aggr status -r : Shows the raid configuration, reconstruction information of the disks in filer
aggr show_space : Shows the disk usage of the aggregate, WAFL reserve, overheads etc.
vol status : Shows the volume information
vol status -s : Displays the spare disks on the filer
vol status -f : Displays the failed disks on the filer
vol status -r : Shows the raid configuration, reconstruction information of the disks
df -h : Displays volume disk usage
df -i : Shows the inode counts of all the volumes
df -Ah : Shows "df" information of the aggregate
license : Displays/adds/removes licenses on a netapp filer
maxfiles : Displays and adds more inodes to a volume
aggr create : Creates aggregate
vol create : Creates volume in an aggregate
vol offline : Offlines a volume
vol online : Onlines a volume
vol destroy : Destroys and removes a volume
vol size [+|-] : Resize a volume in netapp filer
vol options : Displays/Changes volume options in a netapp filer
qtree create : Creates qtree
qtree status : Displays the status of qtrees
quota on : Enables quota on a netapp filer
quota off : Disables quota
quota resize : Resizes quota
quota report : Reports the quota and usage
snap list : Displays all snapshots on a volume
snap create : Create snapshot
snap sched : Schedule snapshot creation
snap reserve : Display/set snapshot reserve space in volume
/etc/exports : File that manages the NFS exports
rdfile /etc/exports : Read the NFS exports file
wrfile /etc/exports : Write to NFS exports file
exportfs -a : Exports all the filesystems listed in /etc/exports
cifs setup : Setup cifs
cifs shares : Create/displays cifs shares
cifs access : Changes access of cifs shares
lun create : Creates iscsi or fcp luns on a netapp filer
lun map : Maps lun to an igroup
lun show : Show all the luns on a filer
igroup create : Creates netapp igroup
lun stats : Show lun I/O statistics
disk show : Shows all the disks on the filer
disk zero spares : Zeros the spare disks
disk_fw_update : Upgrades the disk firmware on all disks
options : Display/Set options on netapp filer
options nfs : Display/Set NFS options
options timed : Display/Set NTP options on netapp.
options autosupport : Display/Set autosupport options
options cifs : Display/Set cifs options
options tcp : Display/Set TCP options
options net : Display/Set network options
ndmpcopy : Initiates ndmpcopy
ndmpd status : Displays status of ndmpd
ndmpd killall : Terminates all the ndmpd processes.
ifconfig : Displays/Sets IP address on a network/vif interface
vif create : Creates a VIF (bonding/trunking/teaming)
vif status : Displays status of a vif
netstat : Displays network statistics
sysstat -us 1 : begins a 1 second sample of the filer's current utilization (ctrl-c to end)
nfsstat : Shows nfs statistics
nfsstat -l : Displays nfs stats per client
nfs_hist : Displays nfs histogram
statit : begins/ends a performance workload sampling [-b starts / -e ends]
stats : Displays stats for every counter on netapp. Read stats man page for more info
ifstat : Displays Network interface stats
qtree stats : displays I/O stats of qtree
environment : display environment status on shelves and chassis of the filer
storage show : Shows storage component details
snapmirror initialize : Initialize a snapmirror relation
snapmirror update : Manually update snapmirror relation
snapmirror resync : Resyncs a broken snapmirror
snapmirror quiesce : Quiesces a snapmirror bond
snapmirror break : Breaks a snapmirror relation
snapmirror abort : Abort a running snapmirror
snapmirror status : Shows snapmirror status
lock status -h : Displays locks held by filer
sm_mon : Manage the locks
storage download shelf : Installs the shelf firmware
software get : Download the Netapp OS software
software install : Installs OS
download : Updates the installed OS
cf status : Displays cluster status
cf takeover : Takes over the cluster partner
cf giveback : Gives back control to the cluster partner
reboot : Reboots a filer
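As a quick illustration of how several of these commands fit together, here is a hypothetical 7-mode session that provisions a small NFS-exported volume (the aggregate name, disk count, sizes and export options below are made up for the example):

filer> aggr create aggr1 5
filer> vol create vol1 aggr1 100g
filer> snap reserve vol1 20
filer> qtree create /vol/vol1/projects
filer> wrfile -a /etc/exports /vol/vol1/projects -rw=192.168.0.0/24
filer> exportfs -a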

Thursday, September 16, 2010

inetd: the Internet super server




==> Understanding inetd
==> Configuring inetd


What does inetd do?


To provide services over the network, you need an application that understands your requests and can provide that service. These applications usually work behind the scenes, without any user interface or interaction. This means that they are not visible in the normal work environment.

They are called daemons or network daemons. A daemon "listens" on a specific port for incoming requests. When a request arrives, the daemon "wakes up" and begins to perform a specific operation.

If you want to have many services available on your machine, you need to run a daemon for each service. These daemons need memory, take up space in the process table, and so on. For frequently used services, standalone daemons make sense. For services that receive few requests, they're a waste of resources.

To help you optimize your resources, there is a super server that can be configured to listen on a list of ports and invoke the specific application the moment a request for it arrives. This daemon is inetd. Most network daemons offer you the option to run as stand-alone applications or to be invoked by inetd on demand. Although the setup with inetd saves resources while the service is not in use, it creates a delay for the client: the daemon for the service has to be loaded, and probably has to initialize itself, before it's ready to serve the request. Think about which services you want to run as stand-alone applications and which ones you want to be called by inetd.

HTTP servers such as Apache are likely to be run as stand-alone services. Apache needs a relatively long load time, and you don't want your customers waiting too long while your Web site loads. Apache is highly optimized to run in stand-alone mode, so you probably don't want it to be spawned by inetd. A service such as in.telnetd, which enables logins from other machines, doesn't need much time to load. It makes a good candidate for being invoked by the super server.

You should be sure that you need inetd before you set it up. A client machine that primarily acts as a workstation has no need to run inetd, because the client machine is not meant to provide any services for the network. You probably don't need inetd for home machines. It's basically a question of security. Using inetd on machines on which you really don't need it may expose possible entry points for crackers. Inetd-enabled services such as telnet and rlogin can be used to exploit your machine. This doesn't mean that those services are generally insecure, but they may be a first source of information for potential crackers trying to enter your system.

During the installation, SuSE asks whether inetd should be started. The referring variable in /etc/rc.config is START_INETD. If it is set to yes, inetd will start at system boot time.

Configuring inetd


The inetd daemon reads several files in /etc when it's executed. The main configuration file is /etc/inetd.conf. It specifies which daemon should be started for which service.

The other files it reads are shared with other daemons and contain more general information about Internet services.


* /etc/services
This file maps port numbers to service names. Most of the port numbers below 1024 are assigned to special services (specified in RFC 1340). These assignments are reflected in this file. Every time you refer to a service by its name (as opposed to its port number), this name will be looked up in /etc/services and the referring port number will be used to process your request.
* /etc/rpc
Just like /etc/services, but with RPC (Remote Procedure Call), services are mapped to names.
* /etc/protocols
Another map. Here, protocols are specified and mapped to the numbers the kernel uses to distinguish between the different TCP/IP protocols.


Usually none of these files needs to be edited. The only candidate for changes is /etc/services. If you run some special daemon on your system (such as a database engine), you may want to add the port number of this service to this list.

The inetd daemon needs these files because in its own configuration file /etc/inetd.conf, these names are used to specify which daemon should be started to serve each service. Each line defines one service. Comment lines (starting with a hash sign -- #) and empty lines are ignored. The definition lines have seven fields, which must all contain legal values:


* service name
The name of a valid service in the file /etc/services. For internal services (discussed below), the service name must be the official name of the service (that is, the first entry in /etc/services). When used to specify a Sun-RPC based service, this field is a valid RPC service name as listed in the file /etc/rpc. The part on the right of the slash (/) is the RPC version number; this can be a single numeric argument or a range of versions, given as lowest-highest (for example, rusers/1-3).
* socket type
These should be one of the keywords stream, dgram, raw, rdm, or seqpacket, depending on whether the socket is a stream, datagram, raw, reliably delivered message, or sequenced packet socket.
* protocol
The protocol must be a valid protocol as listed in /etc/protocols. Examples might be tcp or udp. RPC-based services are specified with the rpc/tcp or rpc/udp service type.
* wait/nowait.max
The wait/nowait entry is applicable to datagram sockets only (other sockets should have a nowait entry in this space). If a datagram server connects to its peer, freeing the socket so that inetd can receive further messages on the socket, it is said to be a multithreaded server and should use the nowait entry. For datagram servers that process all incoming datagrams on a socket and eventually time out, the server is said to be single-threaded and should use a wait entry. Comsat(8) and talkd(8) are both examples of the latter type of datagram server. The optional max suffix (separated from wait or nowait by a period) specifies the maximum number of server instances that may be spawned from inetd within an interval of 60 seconds. When omitted, it defaults to 40.
* user.group
This entry should contain the user name of the user as whom the server should run. This allows for servers to be given less permission than root. An optional group name can be specified by appending a dot to the user name followed by the group name. This allows for servers to run with a different (primary) group id than specified in the password file. If a group is specified and user is not root, the supplementary groups associated with that user will still be set.
* server program
The server-program entry should contain the pathname of the program that is to be executed by inetd when a request is found on its socket. If inetd provides this service internally, this entry should be internal.
* server program arguments
This is the place to give arguments to the daemon. Note that they start with argv[0], which is the name of the program. If the service is provided internally, the keyword internal should take the place of this entry.


The inetd daemon provides several trivial services internally by use of routines within itself. These services are echo, discard, chargen (character generator), daytime (human readable time), and time (machine readable time, in the form of the number of seconds since midnight, January 1, 1900).
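For these internal services, the server program field in /etc/inetd.conf simply contains the keyword internal. As an illustrative sketch (your distribution's file may differ slightly):

echo stream tcp nowait root internal
daytime stream tcp nowait root internal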



Some examples can show you how it works. To enable telnet into the machine, you will need a line like this in /etc/inetd.conf:



telnet stream tcp nowait root /usr/sbin/in.telnetd in.telnetd



Telnet is the name of the service; it uses a stream socket and the TCP protocol, and because it's a TCP service, you can specify it as nowait. The user ID the daemon should run with is root. The last two arguments give the path and name of the actual server application and its arguments.

You always have to give the program name here, because most daemons require it to get an argv[0].

An example for a datagram-based service is talk:



talk dgram udp wait root /usr/sbin/in.talkd in.talkd



You see the differences. Rather than stream, you use dgram to specify a datagram type of socket. The protocol is set to udp, and inetd has to wait until the daemon exits before listening for new connections.

Now if you look into the preinstalled /etc/inetd.conf file, you will notice that almost all services use /usr/sbin/tcpd as the daemon for the service, and give the actual service daemon on the command line for tcpd. This is done to increase network security. The tcpd daemon acts as a wrapper and consults lists of hosts that are allowed to use each service. For the moment, think of tcpd as an in-between step that starts the daemon only after checking whether the host requesting the service is actually allowed to do so. The SuSE default configuration allows everybody to connect to any port.
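As an illustrative sketch (paths vary between distributions), a wrapped telnet entry and matching TCP wrapper rules could look like this. inetd starts tcpd, which consults /etc/hosts.allow and /etc/hosts.deny before launching the real daemon:

telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd

# /etc/hosts.allow - permit telnet only from the local subnet
in.telnetd: 192.168.0.

# /etc/hosts.deny - refuse everything not explicitly allowed
ALL: ALL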

Friday, May 14, 2010

how Netapp Snapvault works

Netapp Snapvault guide
Compiled by Rajeev

Netapp SnapVault is a heterogeneous disk-to-disk backup solution for Netapp filers and heterogeneous OS systems (Windows, Linux, Solaris, HP-UX and AIX). Basically, Snapvault uses Snapshot technology to store online backups. In the event of data loss or corruption on a filer, the backup data can be restored from the SnapVault filer with less downtime. It has significant advantages over traditional tape backups, such as:
• Media cost savings
• Reduced backup windows versus traditional tape-based backup
• No backup/recovery failures due to media errors
• Simple and fast recovery of corrupted or destroyed data

Snapvault consists of two major entities – snapvault clients and a snapvault storage server. A snapvault client (Netapp filers and unix/windows servers) is the system whose data should be backed up. The SnapVault server is a Netapp filer, which gets the data from clients and backs it up.

Snapvault supports two types of backup infrastructure:
1. Netapp to Netapp backups
2. Server to Netapp backups

For Server to Netapp Snapvault, we need to install the Open System Snapvault client software, provided by Netapp, on the servers. Using the snapvault agent software, the Snapvault server can pull and back up data on to the backup qtrees. SnapVault protects data on a client system by maintaining a number of read-only versions (snapshots) of that data on a SnapVault filer. The replicated data on the snapvault server system can be accessed via NFS or CIFS. The client systems can restore entire directories or single files directly from the snapvault filer. Snapvault requires primary and secondary licenses.

How snapvault works

When snapvault is set up, initially a complete copy of the data set is pulled across the network to the SnapVault filer. This initial, or baseline, transfer may take some time to complete, because it duplicates the entire source data set on the server – much like a level-zero backup to tape. Each subsequent backup transfers only the data blocks that have changed since the previous backup. When the initial full backup is performed, the SnapVault filer stores the data in a qtree and creates a snapshot image of the volume for the data that is to be backed up. SnapVault creates a new Snapshot copy with every transfer, and allows retention of a large number of copies according to a schedule configured by the backup administrator. Each copy consumes an amount of disk space proportional to the differences between it and the previous copy.

Snapvault commands

Initial step to setup Snapvault backup between filers is to install snapvault license and enable snapvault on all the source and destination filers.

Source filer – filer1
filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=svfiler
Destination filer – svfiler
svfiler> license add XXXXX
svfiler> options snapvault.enable on
svfiler> options snapvault.access host=filer1

In our GeekyFacts.com tutorial, consider svfiler:/vol/demo_vault as the snapvault destination volume, where all backups are done. The source data is filer1:/vol/datasource/qtree1. As we have to manage all the backups on the destination filer (svfiler) using snapvault, manually disable scheduled snapshots on the destination volume – the snapshots will be managed by Snapvault instead. Disable the Netapp scheduled snapshots with the command below.
svfiler> snap sched demo_vault 0 0 0

Creating the initial backup: Initiate the initial baseline data transfer (the first full backup) of the data from source to destination before scheduling snapvault backups. On the destination filer, execute the below command to initiate the base-line transfer. The time taken to complete depends upon the size of the data on the source qtree and the network bandwidth. Check "snapvault status" on the source/destination filers to monitor the base-line transfer progress.
svfiler> snapvault start -S filer1:/vol/datasource/qtree1 svfiler:/vol/demo_vault/qtree1

Creating backup schedules: Once the initial base-line transfer is completed, snapvault schedules have to be created for incremental backups. The retention period of the backup depends on the schedule created. The snapshot name should be prefixed with "sv_". The schedule is of the form "count[@day_list][@hour_list]", where count is the number of copies to retain and the optional day and hour lists specify when to take them (for example, 2@0-22 retains 2 copies, taken hourly from 0:00 to 22:00).

On source filer:

For example, let us create the schedules on the source as below – 2 hourly, 2 daily and 2 weekly snapvault snapshots. These snapshot copies on the source enable administrators to recover directly from the source filer without accessing any copies on the destination, which makes restores more rapid. However, it is not necessary to retain a large number of copies on the primary; higher retention levels are configured on the secondary. The commands below show how to create hourly, daily & weekly snapvault snapshots.
filer1> snapvault snap sched datasource sv_hourly 2@0-22
filer1> snapvault snap sched datasource sv_daily 2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun
On snapvault filer:

Based on the retention period of the backups you need, the snapvault schedules on the destination should be set. Here, the sv_hourly schedule checks all source qtrees once per hour for a new snapshot copy called sv_hourly.0. If it finds such a copy, it updates the SnapVault qtrees with new data from the primary and then takes a Snapshot copy on the destination volume, also called sv_hourly.0. If you don't use the -x option, the secondary does not contact the primary to transfer the Snapshot copy; it just creates a snapshot copy of the destination volume.
svfiler> snapvault snap sched -x demo_vault sv_hourly 6@0-22
svfiler> snapvault snap sched -x demo_vault sv_daily 14@23@sun-fri
svfiler> snapvault snap sched -x demo_vault sv_weekly 6@23@sun

To check the snapvault status, use the command "snapvault status" either on source or destination filer. And to see the backups, do a "snap list" on the destination volume - that will give you all the backup copies, time of creation etc.

Restoring data : Restoring data is simple – mount the snapvault destination volume through NFS or CIFS and copy the required data back from the appropriate backup snapshot.
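If a qtree on the primary is lost entirely, the data can also be pulled back with the snapvault restore command, run on the primary filer. A sketch using the names from this tutorial (verify the exact syntax against your ONTAP version):

filer1> snapvault restore -S svfiler:/vol/demo_vault/qtree1 /vol/datasource/qtree1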

Tuesday, April 13, 2010

Net App filer in Vmware

For a project I have to learn some of the specifics of an iSCSI NAS. To be precise, it was a Net App FAS 2500 Filer. These things are very nice but a little too expensive for my personal use. But Net App offers a nice simulator of their product, and the simulator is free. You just need to create an account on the Net App web site and search for the word "simulator", or just follow this link (http://now.netapp.com/NOW/cgi-bin/simulator). Side note: the website doesn't work well with Safari. Just use Firefox and all works fine.

The other prerequisite you need is a Linux OS. I don't want to install some bloody Linux on my iMac, so I use VMware Fusion and the problem is solved. In this tutorial I use Ubuntu 8.04 LTS Server. You might be wondering why I chose Ubuntu; the simple answer is I don't know. Normally I use openSUSE from Novell, but in this case I really don't know. Strange things happen sometimes. Another side note: I will guide you through the installation, so don't panic if you are not a unix geek. But actually, as a non console junkie you should consider not buying an EMC / IBM / … NAS ☺

Create a VMware VM with the normal settings: 1 CPU, 10 GB HD, NAT NIC and 512 MB RAM. Attach the OS iso and boot the whole batch job. Using the default settings is a nice strategy for less experienced users. I only changed the keyboard layout to "swiss german (mac)" and, in the "install additional software" screen, checked "ssh server" so file transfer will be easy. Create a user called "geek". Then next, next, ...., next, next – the server has finished the miraculous installation. As a Windows administrator, the first thing I did was enter in the console

sudo reboot

After the restart I wanted to log in as the root user, but the login doesn't accept a root password. After some time I decided to log in as the user "geek". Because it is only a testing environment, I decided to enable root login by setting a root password in the console

sudo passwd root

After setting the password I started a console session in my Mac shell and connected with ssh. You don't need to do this, but with an ssh session you are able to use copy and paste with the host OS. For the poor Windows users, PuTTY is a nice ssh tool. Mac and Linux users just need to enter the following command in the console

ssh -l root <ip-of-the-vm>

The next thing you need to do is copy the download from the Net App site, "7.3.1-tarfile-v22.tgz", to the VM server. I use "Cyberduck". For Windows users a nice program is WinSCP. The Linux guys should use the mighty shell console. By the way, I created a folder named "/netapp". So let us open the tar file

tar xvf /netapp/7.3.1-tarfile-v22.tgz

In the readme, Net App gives us a hint that the simulator uses perl. To install perl on your machine use "apt-get install perl"; actually, perl was already installed, so you don't need to do it. Okay, now we are ready to start installing the simulator. The strangest thing at this point was that I didn't have to deal with any problems – all worked just fine. This is a bad sign.

cd /netapp/simulator
./setup.sh

and the installation starts. Some logs from the console behind

Script version 22 (18/Sep/2007)
Where to install to? [/sim]:
Would you like to install as a cluster? [no]:
Would you like full HTML/PDF FilerView documentation to be installed [yes]:
Continue with installation? [no]: yes
Creating /sim
Unpacking sim.tgz to /sim
Configured the simulators mac address to be [00:50:56:0:6c:5]
Please ensure the simulator is not running.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 10

The following disk types are available in MB:
Real (Usable)
a - 43 ( 14)
b - 62 ( 30)
c - 78 ( 45)
d - 129 ( 90)
e - 535 (450)
f - 1024 (900)

If you are unsure choose the default option a
What disk size would you like to use? [a]:
Disk adapter to put disks on? [0]:
Use DHCP on first boot? [yes]:
Ask for floppy boot? [no]:
Checking the default route…
You have a single network interface called eth0 (default route) . You will not be able to access the simulator from this Linux host. If this interface is marked DOWN in ifconfig then your simulator will crash.
Which network interface should the simulator use? [default]:
Your system has 455MB of free memory. The smallest simulator memory you should choose is 110MB. The maximum simulator memory is 415MB.
The recommended memory is 512MB.
Your original default appears to be too high. Seriously consider adjusting to below the maximum amount of 415MB.
How much memory would you like the simulator to use? [512]:
Create a new log for each session? [no]:
Overwrite the single log each time? [yes]:
Adding 10 additional disk(s).
Complete. Run /sim/runsim.sh to start the simulator.

Wow, this was very easy. Nice job, Net App. The last row contains the hint on how to start the simulator, so let's go.

/sim/runsim.sh

Peng, klaap, kabumm, doing – and an error appears on the console: "Error ./maytag.L: No such file or directory". F.. after some time, maybe 4 hours of reading logs, trying this and that, and drinking some coffee, I figured out that I needed to install some 32-bit compatibility libraries on my AMD64 system. This sounds funny but it solved my problem. This was the penalty for the easy setup.
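For what it's worth, the misleading "No such file or directory" message is the classic symptom of running a 32-bit binary on a 64-bit system that is missing the 32-bit dynamic loader: the kernel cannot find the loader the binary requests, so it reports the binary itself as missing. Assuming the simulator was unpacked to /sim, you can confirm this with file (output is illustrative):

root@netapp:~# file /sim/maytag.L
/sim/maytag.L: ELF 32-bit LSB executable, Intel 80386, dynamically linked

The exact output will vary, but "ELF 32-bit" on an x86_64 host is the giveaway.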

apt-get install ia32-libs


and again try the startup. I deleted some info rows from the console log, but all questions should be present in the log below.

root@netapp:/netapp/simulator# /sim/runsim.sh
runsim.sh script version Script version 22 (18/Sep/2007)
This session is logged in /sim/sessionlogs/log

NetApp Release 7.3.1: Thu Jan 8 00:10:49 PST 2009
Copyright (c) 1992-2008 NetApp.
….
….
….
Do you want to enable IPv6? [n]: n
Do you want to configure virtual network interfaces? [n]:
Please enter the IP address for Network Interface ns0 [172.16.111.136]:
Please enter the netmask for Network Interface ns0 [255.255.255.0]:
Please enter media type for ns0 {100tx-fd, auto} [auto]:
Please enter the IP address for Network Interface ns1 []:
Would you like to continue setup through the web interface? [n]:
Please enter the name or IP address of the IPv4 default gateway [172.16.111.2]:
The administration host is given root access to the filer’s
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host:
Please enter timezone [GMT]:
Where is the filer located? []:
What language will be used for multi-protocol files (Type ? for list)?:
language not set
Do you want to run DNS resolver? [n]:
Do you want to run NIS client? [n]: Setting the administrative (root) password for mynetapp …

New password:
Retype new password:
Mon May 4 20:31:11 GMT [passwd.changed:info]: passwd for user ‘root’ changed.
….
….
….
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.

Your filer is currently visible to all systems using WINS. The WINS
name server currently configured is: [ 172.16.111.2 ].

(1) Keep the current WINS configuration
(2) Change the current WINS name server address(es)
(3) Disable WINS

Selection (1-3)? [1]:
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer

(1) Multiprotocol filer
(2) NTFS-only filer

Selection (1-2)? [1]:
CIFS requires local /etc/passwd and /etc/group files and default files
will be created. The default passwd file contains entries for ‘root’,
‘pcuser’, and ‘nobody’.
Enter the password for the root user []:
Retype the password:
The default name for this CIFS server is ‘MYNETAPP’.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.

(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer’s local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication

Selection (1-4)? [1]: 4
What is the name of the Workgroup? [WORKGROUP]:
CIFS – Starting SMB protocol…
Welcome to the WORKGROUP Windows(R) workgroup

CIFS local server is running.

Password:
mynetapp> Mon May 4 20:32:25 GMT [console_login_mgr:info]: root logged in from console
Mon May 4 20:32:31 GMT [nbt.nbns.registrationComplete:info]: NBT: All CIFS name registrations have completed for the local server.

mynetapp>

So, this was not so hard. Now enjoy the world class filer in your VMware.




Tags: app, Filer, NAS, Net, NetApp, vmware

A response from Abomination (Saturday, 10, April, 2010 at 4:55 pm):

I have a problem with the part that says: "You have a single network interface called eth0 (default route). You will not be able to access the simulator from this Linux host. If this interface is marked DOWN in ifconfig then your simulator will crash."
This also creates the difficulty of connecting from the Linux host to the filer using FilerView. Have you managed to find any solutions for that?
Thanks


Rajeev

source :- http://www.dambeck.ch/2009/05/10/net-app-filer-in-vmware/

Netapp Snapmirror Setup Guide

Snapmirror is a licensed utility in Netapp to do data transfer across filers. Snapmirror works at the volume level or qtree level. Snapmirror is mainly used for disaster recovery and replication.

Snapmirror needs a source and destination filer. (When source and destination are the same filer, the snapmirror happens on the local filer itself. This is for when you have to replicate volumes inside a filer. If you need DR capabilities for a volume inside a filer, you have to try syncmirror.)


Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.

This guides you quickly through the Snapmirror setup and commands.

1) Enable Snapmirror on the source and destination filers. Check the current settings as below; if snapmirror.enable is off, turn it on with "options snapmirror.enable on".

source-filer> options snapmirror.enable
snapmirror.enable on
source-filer>
source-filer> options snapmirror.access
snapmirror.access legacy
source-filer>

2) Snapmirror Access

Make sure the destination filer has snapmirror access to the source filer. The destination filer's name or IP address should be in /etc/snapmirror.allow on the source. Use wrfile to add entries to /etc/snapmirror.allow.

source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>

3) Initializing a Snapmirror relation

Volume snapmirror : Create a destination volume on the destination netapp filer, of the same size as the source volume or greater. For volume snapmirror, the destination volume should be in restricted mode. For example, let us consider we are snapmirroring a 100G volume - we create the destination volume and make it restricted.

destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination


Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy and compares the changed blocks. These changed blocks are sent as part of the update transfer.

Snapmirror is always destination filer driven. So the snapmirror initialize has to be done on destination filer. The below command starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>

Qtree Snapmirror : For qtree snapmirror, you should not create the destination qtree. The snapmirror command automatically creates the destination qtree, so just creating a volume of the required size is good enough.

Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then looking through the changed inodes of the interesting qtree for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy that is associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status : Snapmirror data transfer status can be monitored either from source or destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
source-filer:demo_source destination-filer:demo_destination Uninitialized - Transferring (1690 MB done)
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree Uninitialized - Transferring (32 MB done)
destination-filer>

5) Snapmirror schedule : This is the schedule used by the destination filer for updating the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word sync, to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month and day of week.

If you want to sync the data on a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous snapmirror schedule in /etc/snapmirror.conf by adding "sync" instead of the cron-style frequency.


destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs every 9:00 pm
destination-filer>

6) Other Snapmirror commands

* To break a snapmirror relation - do snapmirror quiesce and then snapmirror break (see the sketch below).
* To update snapmirror data manually - do snapmirror update.
* To resync a broken relation - do snapmirror resync.
* To abort a running transfer - do snapmirror abort.
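A sketch of the break-and-resync sequence, using the volumes from this guide (all run on the destination filer; verify against your ONTAP version):

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination

After the break, the destination volume becomes writable; resync re-establishes the mirror from the last common Snapshot copy.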

Snapmirror does provide multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load balanced between these paths and provides for failover in the event of a network outage.

To read how to tune the performance & speed of netapp snapmirror or snapvault replication transfers and adjust the transfer bandwidth, go to Tuning Snapmirror & Snapvault replication data transfer speed.
Tags: howto, netapp, storage
6 comments:

Anonymous said...

its very good, simple and helpful
April 22, 2009 12:19 AM
Anonymous said...

Thank you - Works great!
May 4, 2009 7:33 AM
Anonymous said...

Thanks a lotttttttttt dude... great work..
this helped me a lotttttttt......
is there any way can you please do update the snap manager commands also.
it would be nice.
thanks once again...
July 22, 2009 8:36 AM
Veeresh said...

Thanks, it is very good and simple.
November 19, 2009 4:43 AM
Harryjames said...

Also please check the Hosts File , if you are working on simulator

you may want to edit the hosts file and make sure the IP's of nic 1 is on first ..
March 15, 2010 12:20 PM
Anonymous said...

Please Also Check Hosts File on filer1(source) if your working on simulators

If nic1 ip of filer1(source) is not listed to be first then add it to first by editing wrfile /etc/hosts
and to read its rdfile /etc/hosts

======================
http://unixfoo.blogspot.com/2009/01/netapp-snapmirror-setup-guide.html
======================

http://www.unix.com/unix-advanced-expert-users/46140-etc-mnttab-zero-length-i-have-done-silly-thing.html

/etc/mnttab is zero length - I have done a silly thing
Being the clever-clogs that I am, I have managed to clobber the contents of /etc/mnttab.

It started when I tried to unmount all the volumes in a particular veritas disk group and neglected to include a suitable grep in my command line, thus attempting to unmount _all_ the filesystems on the server (in a for loop).

They all came back with a fs busy message, so I was presuming no harm done. However, a df to check that all was in order came back with no results, and likewise running mount.

When I went to look in the mnttab itself I find it's zero length !?!

Everything appears ok in that the filesystems are still actually mounted and read/write.

If I try a mount or umount I get the message
"mount: Inappropriate ioctl for device"

I suspect a reboot would sort me out but that's a big hammer and I'd rather get to the bottom of it.

What have I done and how in the world do I fix it?

Running solaris 10 on a T2000, fairly recent patches applied. Bugger all running aside from base OS and SAN kit.

=================
I understand that /etc/mnttab is actually implemented as a screwy filesystem itself on Solaris. I read somewhere that:

umount /etc/mnttab
mount -F mntfs mnttab /etc/mnttab

might rebuild it. But I have never tried it. I guess you get to run the experiment.
=================

system() vs `backtick`

by dovert (Initiate) on Aug 06, 2004 at 19:02 UTC
dovert has asked for the wisdom of the Perl Monks concerning the following question:
update 2004/8/10: Red Hat has shown me (by invoking the perl in a 'kernel backward compatibility mode') that there is some (as yet unidentified) incompatibility between perl 5 and the new kernel, probably in the threading code. We're still trying to peel the onion. Thanks for your responses. A little feedback for the interested: the child process never gets to main() in the hang situation (i.e. it is not looking for input); in fact, swapping 'ls' in as the called program doesn't change the behavior.

I have a working perl program that, among other things, uses system() to run some other programs. I upgraded a Linux box from RH 9.3 to Enterprise Linux WS 3.1 and the program no longer works. (This is Perl 5.004_04.) I can see that perl has forked, but the child is hanging, and the parent is waiting. ^C interrupts the child, and the parent completes. I discovered that if I run the program in `backticks`, it behaves properly. But I don't want to seek out all the system() calls & replace them & I'd like to know what is going on. So, I am interested in any insight about what is different enough about these two ways of running the program that might make the difference. Or if you have heard of problems in this version of RH Linux. (It is not 'convenient' to upgrade the perl version, BTW.) I'd be happy for any clues about where to look.
Re: system() vs `backtick`
by Prior Nacre V (Hermit) on Aug 06, 2004 at 19:21 UTC

    The difference is:

    system()
    runs command and returns command's exit status
    backticks
    runs command and returns the command's output

    Can't offer any help on O/S issue.

    Regards,

    PN5

Re: system() vs `backtick`
by dave_the_m (Vicar) on Aug 06, 2004 at 19:24 UTC
    Try strace -p pid on the child process to see what it's hanging on, and if it's on I/O, use lsof -p pid to see what file/socket etc it's hanging on.

    Dave.

Re: system() vs `backtick`
by hmerrill (Friar) on Aug 06, 2004 at 19:43 UTC
    I would suggest you file a bug in bugzilla on Red Hat's site against Enterprise Linux WS 3.1 for the "Perl" package, and copy and paste exactly the info you've written here. If you're lucky the "Perl" package owner at Red Hat may respond quickly. If you can get the email address of the owner of the "Perl" package at Red Hat, try sending him/her a message asking your question.
Re: system() vs `backtick`
by etcshadow (Priest) on Aug 06, 2004 at 20:36 UTC
    It depends on what the program is that you are running... but another possible issue is this: When running a command with system(), it is hooked up directly to your terminal, both for output and input... just as though a user had run that command at a shell prompt. When running a command with backticks, the input (as well as stderr) is still hooked up to the terminal, but the output stream is no longer the terminal... it is a pipe back to the parent (perl) process.

    Why does this matter? Well, some programs will produce different output (or just generally work differently) if they detect that they are attached to a terminal than if they are not. For example, at least on many linux variants (can't speak for all *nixes), the ps command will format its output for a terminal (it sets the output width to the width of your terminal) if it sees that it is outputting to a terminal, and otherwise it will truncate its output width at 80 characters. Likewise, ls may use terminal escape codes to set the color of file-names to indicate permissions or types. Also, ls may organize its output into a pretty looking table when writing to a terminal, but make a nice neat list when NOT writing to a terminal.

    Anyway, an easy way to check this out with your program is via a useless use of cat such as this:

    [me@host]$ command
    ...
    [me@host]$ command | cat
    ...

    And comparing the differences in the output. You can see the same thing (sorta) happening between system and backticks with this (which simply checks to see if the output stream is attached to a terminal or not):

    [me@host]$ perl -e 'print `perl -le "print -t STDOUT"`'

    [me@host]$ perl -e 'system q(perl -le "print -t STDOUT")'
    1
    [me@host]$

    As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos.

    ------------
    :Wq
    Not an editor command: Wq
      As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos.

      That is not needed here.

      The OP says that the command runs fine in backticks, but hangs when run with system(). The program probably waits for input (confirmation) when run on a terminal; that's why it hangs. What the OP actually needs is to fool the program into thinking it's not running interactively, by calling it through backticks or redirecting its stdout. This is much simpler to achieve. It might even be unnecessary if the program has some command line switch to force it not to ask questions. Many programs like rm or fsck have such an option: if you give rm a file for which you do not have write access, it prompts you whether you want to delete it; you can override this with the -f switch or by redirecting its stdin. Running rm in backticks redirects the stdout, which is not enough for rm: it prompts for a confirmation even if stderr is redirected too, and if you don't see the error it appears to hang.

Re: system() vs `backtick`
by sgifford (Prior) on Aug 06, 2004 at 22:00 UTC
    backticks send the executed program's STDOUT to a variable, and system sends it to your main program's STDOUT. So it could depend on what your main program's STDOUT is connected to. If it's connected to something that blocks, for example, or something that returns an error when you write to it, that could explain the problem.
Re: system() vs `backtick`
by hossman (Parson) on Aug 07, 2004 at 00:31 UTC

    Many people have pointed out the difference between `` and system(), but I'd like to suggest that the problem you're seeing might not be something you need to fix in your perl script. The problem might be that in upgrading your RedHat distro, the program(s) you are executing using system() may have changed, such that if they detect they are running attached to a terminal, they wait for input on STDIN (and in the old version, the program may not have assumed this).

    It's also important to remember that system() and `` can/will pass your input to "/bin/sh -c" if it contains shell metacharacters, and on many newer systems /bin/sh is just a symlink to /bin/bash -- which may also be why you are now seeing a difference in the behavior of your script. Your old distribution may have genuinely had "sh" installed, which may have invoked your program(s) differently so that they didn't know they were attached to a terminal.

      Regarding the compatibility of bash and sh... Bash is supposed to run in a compatibility mode when invoked as "sh" (by "compatibility mode" I mean that it should function in exactly the same way as sh). Of course, I cannot speak for how well it pulls this off. :-D
      ------------
      :Wq
      Not an editor command: Wq

Monday, March 22, 2010

File descriptors


A file descriptor is a handle created by a process when a file is opened. There is a limit to the number of file descriptors per process. The default Solaris file descriptor limit is 64.

If the file descriptor limit is exceeded for a process, you may see the following errors:

"Too Many Open Files"
"Err#24 EMFILE" (in truss output)

To display a process' current file descriptor limit, run /usr/proc/bin/pfiles pid | grep rlimit on Solaris systems.

Display system file descriptor settings:
ulimit -Hn (hard limit, cannot be exceeded)
ulimit -Sn / ulimit -n (soft limit, may be increased up to the hard limit value)

Increasing file descriptor settings for child processes (example):
$ ulimit -Hn
1024
$ ulimit -Sn
64
$ ulimit -Sn 1024
$ ulimit -Sn
1024

Solaris kernel parameters:
rlim_fd_cur: soft limit

It may be dangerous to set this value higher than 256 due to limitations with the stdio library. If programs require more file descriptors, they should use setrlimit directly.

rlim_fd_max: hard limit

It may be dangerous to set this value higher than 1024 due to limitations with select. If programs require more file descriptors, they should use setrlimit directly.

More information:
http://www.princeton.edu/~unix/Solaris/troubleshoot/filedesc.html
http://help.netscape.com/kb/corporate/19980403-12.html

Linux NFS configuration services



This is specific to RedHat, but most other Linux distributions follow the same pattern.

An NFS server on linux requires 3 services to be running in order to share files:


/etc/rc.d/init.d/portmap
/etc/rc.d/init.d/nfslock
/etc/rc.d/init.d/nfs

You can start/stop/restart these services by issuing the above lines with the corresponding arguments: start, stop or restart. You can also use the shortcut 'service' command, as in:

# service portmap start

You need to ensure that these 3 services start up when the system does.

First, verify the default runlevel of the computer. Check the line in /etc/inittab that starts with "id:". The next number will be the default runlevel (usually 3 for non-gui, 5 for gui). You can manage the startup scripts manually by creating symlinks in the corresponding /etc/rc.d/rcX.d directory (where X is the default runlevel) or you can use redhat's "chkconfig" command. Assuming the default runlevel is 3, these 3 commands will ensure that the services necessary for an nfs server start at boot time (these all must run as root):

# chkconfig --level 3 portmap on
# chkconfig --level 3 nfslock on
# chkconfig --level 3 nfs on


Now either reboot the box or issue these commands to start the nfs server:

# service nfs stop
# service nfslock stop
# service portmap stop
# service portmap start
# service nfslock start
# service nfs start


Order is important. Portmap must start first, followed by nfslock, followed by nfs.

The file that defines what directories are shared is /etc/exports. 'man exports' will give you the full overview of all options available for this file. Here is an example line:

/public *(rw,no_root_squash)

This says share the folder /public, allow any IP access, give read/write access, and allow the root user to connect as root.

The * wildcard can be a list or range of IP addresses. For example, if we wanted to restrict this access to the 192.168.1.0/24 Class C subnet, the line would look like this:

/public 192.168.1.0/255.255.255.0(rw,no_root_squash)

rw means read/write. no_root_squash is a setting that allows nfs clients to connect as root. Without this setting, the root user on clients that connect has the permissions of the user 'nfsnobody', uid 65534.

Anytime you make any changes to the /etc/exports file, run the command:

# exportfs -avr
on the server to update the nfs server.


On to the nfs client:

On the client, you need the 'portmap' and 'nfslock' services running. Follow the instructions above to ensure that these 2 services start with the default runlevel. Once they are running, you can mount a directory on the nfs server. If the server ip is 192.168.1.1 and the share name is public, you can do this on a client:

# mkdir /mnt/public
# mount 192.168.1.1:/public /mnt/public


You should get a prompt back. A 'mount' command should show you that the nfs share is mounted, and you should be able to cd to this directory and view the contents.

A couple of things to keep in mind with NFS: permissions are handled by UID. It is very beneficial to ensure that a user on the client has the same UID on the server. Otherwise the permissions will just confuse you. If you are just using root, and using the no_root_squash option on every mount, you don't have to worry about this.
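
A quick way to compare is to run the 'id' command for the same username on both the client and the server and check that the uid values match ('jdoe' here is just a placeholder):

$ id jdoe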

Security: NFS should stand for No F**king Security. It's all IP based, so if anyone can spoof an IP, they can mount one of your shares. NEVER do NFS over an untrusted network (like the internet) and ALWAYS restrict your shares to at least your local subnet.

There are 2 kinds of nfs mounts: soft mounts and hard mounts. With a soft mount, if the server goes away (reboot/lost network/whatever), the client can gracefully close the connection, unmount the share and continue on. However, in order for the client to maintain this ability, it has to do more caching of reads and writes, so performance is lower and it's possible to lose data if the server goes away unexpectedly.

A hard mount means that the client never gives up trying to hit the server. Ever. Eventually the load on the client will be so high due to backed-up i/o requests that you'll have to reboot it. Not all that good, but hard mounts don't have the caching overhead soft mounts do, so you have less chance of losing data. I always use hard mounts, but then specify the 'intr' option on the mount, so I can unmount it manually if the server goes away (see example below).
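
For example, a manual hard mount with the intr option, reusing the hypothetical server and share from above, would look like this:

# mount -o hard,intr 192.168.1.1:/public /mnt/public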

Troubleshooting:
As with most things in linux, watch the log files. If you get an error on the client when trying to mount a share, look at /var/log/messages on the server. If you get an error like "RPC: program not registered" that means that the portmap service isn't running on one of the machines. Verify all the processes are running and try again.
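
You can also ask portmap directly which RPC services are registered on either machine with rpcinfo (192.168.1.1 again being the hypothetical server); entries for mountd, nlockmgr and nfs should appear in the output:

# rpcinfo -p 192.168.1.1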

Examples:
Here are some examples from one of my production linux nfs servers. On the server, a line from /etc/exports:

/aim/exports/clink 192.168.23.0/255.255.255.0(rw,no_root_squash,async,wdelay)

Please see the manpage on exports for explanations of some of these options. Here is the corresponding /etc/fstab entry on a client that mounts this share (all on one line):

192.168.23.10:/aim/exports/clink /aim/imports/clink nfs
rsize=8192,wsize=8192,timeo=20,retrans=6,async,rw,noatime,intr 0 0


You might want to experiment with the rsize and wsize parameters to get that last bit of performance. With this line in /etc/fstab, the share is mounted automatically at boot. I can also issue the command 'mount -t nfs -a' to mount all nfs entries in /etc/fstab with the corresponding options.
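
A crude way to benchmark a given rsize/wsize combination is to time a large sequential write into the mounted share, then remount with different values and compare (the path and sizes here are just placeholders):

# time dd if=/dev/zero of=/aim/imports/clink/testfile bs=8k count=4096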

setuid on shell scripts


Running software as root without requiring a root password is the subject of a number of tutorials on the web, and although it may seem a little bit confusing, it's fairly simple. Inevitably, users want to run shell scripts as root, too. After all, they're considered a 'program', so why not? Unfortunately, there are unseen bumps in the road to Unix convenience.

The tutorial

Many tutorials show this method for creating a script that runs as root automatically.

  1. Open a text editor, and type up your script:
    #!/bin/sh
    program1
    program2
    ...
  2. Save the file as something.sh.
  3. Open a terminal, and enter the following commands:
    $ su
    [enter password]
    chown root:root something.sh
    chmod 4755 something.sh
    exit
  4. Then, finally run it with ./something.sh, and it'll have root access!

...or not. Most likely, you'll get the same error messages that you did before you ran those commands. If your script does actually work, go ahead and skip the rest of this tutorial. If you experience this problem, read on.

The problem

The instructions are fairly straightforward. Create the shell script that you want to execute, and change the owner and group to root (chown root:root). Now comes the command that's supposed to do the magic:

chmod 4755

Let's break this down a little bit. The 755 part means that there's read/write/execute permissions for the owner (root), and only read/execute permissions for everyone else. This makes sense because you want everyone to be able to execute the script, although you don't want everyone to be able to modify what it does.

Now for the 4 prefix. It sets the setuid bit on the file, which means that whatever is run will have the permissions of the owner. Since we set root as the owner, this will do exactly what we want. Perfect!
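
You can verify that the bit took with a long listing; the setuid bit shows up as an "s" in the owner's execute position (the size and date will of course differ):

$ ls -l something.sh
-rwsr-xr-x 1 root root 42 Dec 9 12:00 something.sh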

Except it doesn't. The truth is that the setuid bit is ignored for shell scripts on a lot of *nix implementations due to the massive security holes it incurs. If the method originally mentioned doesn't work for you, chances are that your Linux distribution has disabled setuid for shell scripts.

The solution(s)

One way of solving this problem is to call the shell script from a program that can use the setuid bit. For example, here is how you would accomplish this in a C program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    /* Become root; this only has effect if the binary is setuid root. */
    setuid( 0 );

    /* Run the shell script with root privileges. */
    system( "/path/to/script.sh" );

    return 0;
}

Save it as runscript.c. You'll need the gcc compiler. If you don't have it already, look for it in your package manager. You can usually install the majority of your compiler tools with one large package, but many distros also offer the option of installing gcc by itself.

Once you have it, compile it at the prompt:

gcc runscript.c -o runscript

Now set the setuid bit on the program binary:

su
[enter password]
chown root:root runscript
chmod 4755 runscript

Now, you should be able to run it, and you'll see your script being executed with root permissions. Congratulations!

Another alternative, if you've got it installed, is to prefix all the commands in the shell script with 'sudo'. Then set up the permissions so that a password is not required to run those commands with sudo. Read the sudoers manpage for more information.
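
For example, a sudoers entry like the following, added with visudo, lets a single user run one specific script as root without a password ('rajeev' and the script path are placeholders):

rajeev ALL=(root) NOPASSWD: /path/to/script.sh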

Conclusion

With all that said, running shell scripts with setuid isn't very safe, and the distro designers had a pretty good idea of what they were doing when many of them disabled it. If you're running a multiuser Unix environment and security matters to you, make sure that your scripts are secure; a single slip can compromise an entire network. Only use setuid scripts when absolutely necessary, and make sure you know exactly what you're doing if you do decide to use them.

Sticky Bit

The sticky bit and directories

Another important enhancement involves the use of the sticky bit on directories. A directory with the sticky bit set means that only the file owner and the superuser may remove files from that directory. Other users are denied the right to remove files regardless of the directory permissions. Unlike with file sticky bits, the sticky bit on directories remains there until the directory owner or superuser explicitly removes the directory or changes the permissions.

You can gain the most security from this feature by placing the sticky bit on all public directories, which are writable by any non-administrative user. Teach users that the sticky bit, together with the default umask of 077, solves a big problem for otherwise less secure systems: together, both features prevent other users from altering or replacing any file you have in a public directory. The only information they can gain from the file is its name and attributes.

"Sticky bit example" illustrates the power of such a scheme. The sticky bit is the "t" in the permissions for the directory.

Sticky bit example

$ id
uid=76(slm) gid=11(guru)
$ ls -al /tmp
total 64
drwxrwxrwt 2 bin bin 1088 Mar 18 21:10 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rwxr-xr-x 1 slm guru 19587 Mar 17 19:41 mine
-rw------- 1 slm guru 279 Mar 17 19:41 mytemp
-rw-rw-rw- 1 root sys 35 Mar 16 12:27 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile
$ rm /tmp/Ex16566
rm: /tmp/Ex16566 not removed. Permission denied
$ rm /tmp/protfile
rm: /tmp/protfile not removed. Permission denied
$ cat /tmp/openfile
Ha! Ha!
You can't remove me.
$ rm /tmp/openfile
rm: /tmp/openfile not removed. Permission denied
$ rm -f /tmp/openfile
$ rm /tmp/mine /tmp/mytemp
$ ls -l /tmp
drwxrwxrwt 2 bin bin 1088 Mar 18 21:19 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rw-rw-rw- 1 root sys 35 Mar 16 12:27 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile
$ cp /dev/null /tmp/openfile
$ cat /tmp/openfile
$ cp /dev/null /tmp/protfile
cp: cannot create /tmp/protfile
$ ls -l /tmp
drwxrwxrwt 2 bin bin 1088 Mar 18 21:19 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rw-rw-rw- 1 root sys 0 Mar 18 21:19 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile

The only files removed are those owned by user slm (the user in the example). The user slm could not remove any other file, even the accessible file /tmp/openfile. However, the mode setting of the file itself allowed slm to destroy the file contents; this is why the umask setting is important in protecting data. Conversely, the mode on /tmp/protfile, together with the sticky bit on /tmp, makes /tmp/protfile impenetrable.

All public directories should have the sticky bit set. These include, but are not limited to:

  • /tmp

  • /usr/tmp

  • /usr/spool/uucppublic

If you are unsure, it is far better to set the sticky bit on a directory than to leave it off. You can set the sticky bit on a directory with the following command, where directory is the name of the directory:

chmod u+t directory

To remove the bit, replace the "+" with a "-" in the chmod command.
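
To hunt down public directories that are missing the sticky bit, you can use find. This sketch lists directories that are world-writable but do not have the sticky bit set:

# find / -type d -perm -0002 ! -perm -1000 -print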

*******************Enjoy*********************