Tuesday, April 13, 2010

Netapp Snapmirror Setup Guide

Snapmirror is a licensed feature of NetApp Data ONTAP for transferring data across filers. Snapmirror works at the volume level or qtree level, and is mainly used for disaster recovery and replication.

Snapmirror needs a source and a destination filer. (When the source and destination are the same filer, the snapmirror happens on the local filer itself. This is useful when you have to replicate volumes inside a filer. If you need DR capabilities for a volume inside a single filer, consider syncmirror instead.)


Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.

This guide walks you quickly through the Snapmirror setup and commands.

1) Enable Snapmirror on the source and destination filer. If it is not already on, enable it with "options snapmirror.enable on", then verify the settings:

source-filer> options snapmirror.enable
snapmirror.enable on
source-filer>
source-filer> options snapmirror.access
snapmirror.access legacy
source-filer>

2) Snapmirror Access

Make sure the destination filer has snapmirror access to the source filer. The destination filer's hostname or IP address should be listed in /etc/snapmirror.allow on the source filer. Use wrfile to add entries to /etc/snapmirror.allow.

source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
source-filer>
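
To append an entry without retyping the whole file, wrfile's append flag can be used. A sketch (destination-filer3 is a placeholder hostname; verify against your Data ONTAP version):

```
source-filer> wrfile -a /etc/snapmirror.allow destination-filer3
source-filer> rdfile /etc/snapmirror.allow
destination-filer
destination-filer2
destination-filer3
```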

3) Initializing a Snapmirror relation

Volume snapmirror : Create a destination volume on the destination NetApp filer, of the same size as the source volume or greater. For volume snapmirror, the destination volume should be in restricted mode. For example, to snapmirror a 100G volume, we create the destination volume and mark it restricted.

destination-filer> vol create demo_destination aggr01 100G
destination-filer> vol restrict demo_destination


Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another Snapshot copy and compares it against the previous one to identify the changed blocks. These changed blocks are sent as part of the update transfer.

Snapmirror is always driven by the destination filer, so snapmirror initialize has to be run on the destination filer. The command below starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
destination-filer>

Qtree Snapmirror : For qtree snapmirror, you should not create the destination qtree; the snapmirror command creates it automatically. Creating a destination volume of the required size is good enough.

Qtree SnapMirror determines changed data by first scanning the inode file for inodes that have changed, and then scanning the changed inodes belonging to the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status : Snapmirror data transfer status can be monitored either from source or destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status
Snapmirror is on.
Source                          Destination                          State          Lag  Status
source-filer:demo_source        destination-filer:demo_destination   Uninitialized  -    Transferring (1690 MB done)
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   Uninitialized  -    Transferring (32 MB done)
destination-filer>

5) Snapmirror schedule : This is the schedule used by the destination filer for updating the mirror. It tells the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word "sync" to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields: minute, hour, day of month, and day of week.

If you want to sync the data on a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous snapmirror schedule in /etc/snapmirror.conf by using "sync" instead of the cron-style frequency.


destination-filer> rdfile /etc/snapmirror.conf
source-filer:demo_source destination-filer:demo_destination - 0 * * * # This syncs every hour
source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs daily at 9:00 PM
destination-filer>
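
For synchronous mirroring, the schedule field is replaced with the word sync. A hypothetical /etc/snapmirror.conf entry for the demo volume above:

```
source-filer:demo_source destination-filer:demo_destination - sync
```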

6) Other Snapmirror commands

* To break a snapmirror relation - run snapmirror quiesce, then snapmirror break.
* To update the snapmirror data - run snapmirror update.
* To resync a broken relation - run snapmirror resync.
* To abort a running transfer - run snapmirror abort.
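
Putting those together, a typical DR failover and resync on the destination filer might look like the sketch below: quiesce pauses transfers, break makes the destination writable, and resync later re-establishes the mirror. Verify against your Data ONTAP version before use.

```
destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination
destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination
```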

SnapMirror also provides multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load-balanced between these paths and provides failover in the event of a network outage.
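
Multipath connections are declared in /etc/snapmirror.conf with a named connection line. A sketch from memory of the 7-mode syntax (the interface hostnames are placeholders; check your documentation):

```
demo-conn = multi(source-filer-e0a,destination-filer-e0a)(source-filer-e0b,destination-filer-e0b)
demo-conn:demo_source destination-filer:demo_destination - 0 * * *
```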

To read how to tune the performance and speed of NetApp snapmirror or snapvault replication transfers and adjust the transfer bandwidth, see Tuning Snapmirror & Snapvault replication data transfer speed.
A tip from the comments: if you are working on a NetApp simulator, also check /etc/hosts on the source filer and make sure the IP address of nic1 is listed first. Use wrfile /etc/hosts to edit the file and rdfile /etc/hosts to read it back.

======================
http://unixfoo.blogspot.com/2009/01/netapp-snapmirror-setup-guide.html
======================

http://www.unix.com/unix-advanced-expert-users/46140-etc-mnttab-zero-length-i-have-done-silly-thing.html

/etc/mnttab is zero length - I have done a silly thing
Being the clever-clogs that I am, I have managed to clobber the contents of /etc/mnttab.

It started when I tried to unmount all the volumes in a particular Veritas disk group and neglected to include a suitable grep in my command line, thus attempting to unmount _all_ the filesystems on the server (in a for loop).

They all came back with a fs busy message, so I presumed no harm done. However, a df to check that all was in order came back with no results, and likewise running mount.

When I went to look in the mnttab itself I find it's zero length !?!

Everything appears ok in that the filesystems are still actually mounted and read/write.

If I try a mount or umount I get the message
"mount: Inappropriate ioctl for device"

I suspect a reboot would sort me out but that's a big hammer and I'd rather get to the bottom of it.

What have I done and how in the world do I fix it?

Running solaris 10 on a T2000, fairly recent patches applied. Bugger all running aside from base OS and SAN kit.

=================
I understand that /etc/mnttab is actually implemented as a screwy filesystem itself on Solaris. I read somewhere that:

umount /etc/mnttab
mount -F mntfs mnttab /etc/mnttab

might rebuild it. But I have never tried it. I guess you get to run the experiment.
=================

system() vs `backtick`

by dovert (Initiate) on Aug 06, 2004 at 19:02 UTC

dovert has asked for the wisdom of the Perl Monks concerning the following question:
update 2004/8/10: Red Hat has shown me (by invoking perl in a 'kernel backward compatibility mode') that there is some (as yet unidentified) incompatibility between perl 5 and the new kernel, probably in the threading code. We're still trying to peel the onion. Thanks for your responses - a little feedback for the interested: the child process never gets to main() in the hang situation (i.e. it is not looking for input). In fact, swapping 'ls' in as the called program doesn't change the behavior.

I have a working perl program that, among other things, uses system() to run some other programs. I upgraded a Linux box from RH 9.3 to Enterprise Linux WS 3.1 and the program no longer works. (This is Perl 5.004_04.) I can see that perl has forked, but the child is hanging, and the parent is waiting. ^C interrupts the child, and the parent completes. I discovered that if I run the program in `backticks`, it behaves properly. But I don't want to seek out all the system() calls & replace them, & I'd like to know what is going on. So, I am interested in any insight about what is different enough about these two ways of running the program that might make the difference. Or if you have heard of problems in this version of RH Linux. (It is not 'convenient' to upgrade the perl version, BTW.) I'd be happy for any clues about where to look.
Re: system() vs `backtick`
by Prior Nacre V (Hermit) on Aug 06, 2004 at 19:21 UTC

    The difference is:

    system()
    runs command and returns command's exit status
    backticks
    runs command and returns the command's output

    Can't offer any help on O/S issue.

    Regards,

    PN5

Re: system() vs `backtick`
by dave_the_m (Vicar) on Aug 06, 2004 at 19:24 UTC
    Try strace -p pid on the child process to see what it's hanging on, and if it's on I/O, use lsof -p pid to see what file/socket etc it's hanging on.

    Dave.

Re: system() vs `backtick`
by hmerrill (Friar) on Aug 06, 2004 at 19:43 UTC
    I would suggest you file a bug in bugzilla on Red Hat's site against Enterprise Linux WS 3.1 for the "Perl" package, and copy and paste exactly the info you've written here. If you're lucky the "Perl" package owner at Red Hat may respond quickly. If you can get the email address of the owner of the "Perl" package at Red Hat, try sending him/her a message asking your question.
Re: system() vs `backtick`
by etcshadow (Priest) on Aug 06, 2004 at 20:36 UTC
    It depends on what the program is that you are running... but another possible issue is this: When running a command with system(), it is hooked up directly to your terminal, both for output and input... just as though a user had run that command at a shell prompt. When running a command with backticks, the input (as well as stderr) is still hooked up to the terminal, but the output stream is no longer the terminal... it is a pipe back to the parent (perl) process.

    Why does this matter? Well, some programs will produce different output (or just generally work differently) if they detect that they are attached to a terminal than if they are not. For example, at least on many linux variants (can't speak for all *nixes), the ps command will format its output for a terminal (it sets the output width to the width of your terminal) if it sees that it is outputting to a terminal, and otherwise it will truncate its output width at 80 characters. Likewise, ls may use terminal escape codes to set the color of file-names to indicate permissions or types. Also, ls may organize its output into a pretty looking table when writing to a terminal, but make a nice neat list when NOT writing to a terminal.

    Anyway, an easy way to check this out with your program is via a useless use of cat such as this:

    [me@host]$ command
    ...
    [me@host]$ command | cat
    ...

    And comparing the differences in the output. You can see the same thing (sorta) happening between system and backticks with this (which simply checks to see if the output stream is attached to a terminal or not):

    [me@host]$ perl -e 'print `perl -le "print -t STDOUT"`'

    [me@host]$ perl -e 'system q(perl -le "print -t STDOUT")'
    1
    [me@host]$

    As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos.

      As a shameless personal plug, I've actually written a node about how you can fool a process into thinking that it is connected to a terminal when it is not. This may not be of the utmost use to you in fixing your problem... but it is apropos.

      That is not needed here.

      The OP says that the command runs fine in backticks, but hangs when run with system(). The program probably waits for input (confirmation) when run on a terminal; that's why it hangs. What the OP actually needs is to fool the program into thinking it's not running interactively, by calling it through backticks or redirecting its stdout. This is much simpler to achieve. It might even be unnecessary if the program has some command line switch to force it not to ask questions. Many programs like rm or fsck have such an option: if you give rm a file for which you do not have write access, it prompts you whether you want to delete it; you can override this with the -f switch or by redirecting its stdin. Running rm in backticks redirects the stdout, which is not enough for rm: it prompts for confirmation even if stderr is redirected too, and if you don't see the error it appears to hang.

Re: system() vs `backtick`
by sgifford (Prior) on Aug 06, 2004 at 22:00 UTC
    backticks send the executed program's STDOUT to a variable, and system sends it to your main program's STDOUT. So it could depend on what your main program's STDOUT is connected to. If it's connected to something that blocks, for example, or something that returns an error when you write to it, that could explain the problem.
Re: system() vs `backtick`
by hossman (Parson) on Aug 07, 2004 at 00:31 UTC

    Many people have pointed out the difference between `` and system(), but I'd like to suggest that the problem you're seeing might not be something you need to fix in your perl script. The problem might be that in upgrading your Red Hat distro, the program(s) you are executing using system() may have changed, such that if they detect they are running attached to a terminal, they wait for input on STDIN. (In the old version, the program may not have assumed this.)

    It's also important to remember that system() and `` can/will pass your input to "/bin/sh -c" if it contains shell metacharacters, and on many newer systems /bin/sh is just a symlink to /bin/bash -- which may also be why you are now seeing a difference in the behavior of your script. Your old distribution may have genuinely had "sh" installed, which may have invoked your program(s) differently so that they didn't know they were attached to a terminal.

      Regarding the compatibility of bash and sh... Bash is supposed to run in a compatibility mode when invoked as "sh" (by "compatibility mode" I mean that it should function in exactly the same way as sh). Of course, I cannot speak for how well it pulls this off. :-D

Monday, March 22, 2010

File descriptors


A file descriptor is a handle created by a process when a file is opened. There is a limit to the number of file descriptors per process. The default Solaris file descriptor limit is 64.

If the file descriptor limit is exceeded for a process, you may see the following errors:

"Too Many Open Files"
"Err#24 EMFILE" (in truss output)

To display a process' current file descriptor limit, run /usr/proc/bin/pfiles pid | grep rlimit on Solaris systems.

Display system file descriptor settings:
ulimit -Hn (hard limit, cannot be exceeded)
ulimit -Sn / ulimit -n (soft limit may be increased to hard limit value)

Increasing file descriptor settings for child processes (example):
$ ulimit -Hn
1024
$ ulimit -Sn
64
$ ulimit -Sn 1024
$ ulimit -Sn
1024
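
The same behaviour can be reproduced in any POSIX shell. A minimal sketch (the actual numbers depend on your system's limits):

```shell
#!/bin/sh
# Show the hard (-H) and soft (-S) limits for open file descriptors (-n)
echo "hard: $(ulimit -Hn)"
echo "soft: $(ulimit -Sn)"

# Lower the soft limit for this shell and its children; it can later be
# raised again by an unprivileged process, but never above the hard limit.
ulimit -Sn 64
echo "soft now: $(ulimit -Sn)"
```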

Solaris kernel parameters:
rlim_fd_cur: soft limit

It may be dangerous to set this value higher than 256 due to limitations with the stdio library. If programs require more file descriptors, they should use setrlimit directly.

rlim_fd_max: hard limit

It may be dangerous to set this value higher than 1024 due to limitations with select. If programs require more file descriptors, they should use setrlimit directly.
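
On Solaris these tunables live in /etc/system and take effect at the next reboot. An illustrative fragment (the values are examples, not recommendations):

```
* /etc/system - file descriptor tunables (example values)
set rlim_fd_cur=256
set rlim_fd_max=1024
```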

More information:
http://www.princeton.edu/~unix/Solaris/troubleshoot/filedesc.html
http://help.netscape.com/kb/corporate/19980403-12.html

Linux NFS configuration services



This is specific to RedHat, but most other Linux distributions follow the same pattern.

An NFS server on linux requires 3 services to be running in order to share files:


/etc/rc.d/init.d/portmap
/etc/rc.d/init.d/nfslock
/etc/rc.d/init.d/nfs

You can start/stop/restart these services by invoking the above scripts with the corresponding argument: start, stop or restart. You can also use the shortcut 'service' command, as in:

# service portmap start

You need to ensure that these 3 services start up when the system does.

First, verify the default runlevel of the computer. Check the line in /etc/inittab that starts with "id:". The next number will be the default runlevel (usually 3 for non-gui, 5 for gui). You can manage the startup scripts manually by creating symlinks in the corresponding /etc/rc.d/rcX.d directory (where X is the default runlevel) or you can use redhat's "chkconfig" command. Assuming the default runlevel is 3, these 3 commands will ensure that the services necessary for an nfs server start at boot time (these all must run as root):

# chkconfig --level 3 portmap on
# chkconfig --level 3 nfslock on
# chkconfig --level 3 nfs on


Now either reboot the box or issue these commands to start the nfs server:

# service nfs stop
# service nfslock stop
# service portmap stop
# service portmap start
# service nfslock start
# service nfs start


Order is important. Portmap must start first, followed by nfslock, followed by nfs.

The file that defines what directories are shared is /etc/exports. 'man exports' will give you the full overview of all options available for this file. Here is an example line:

/public *(rw,no_root_squash)

This says share the folder /public, allow any IP access, give read/write access, and allow the root user to connect as root.

The * wildcard can be a list or range of IP addresses. For example, if we wanted to restrict this access to the 192.168.1.0/24 Class C subnet, the line would look like this:

/public 192.168.1.0/255.255.255.0(rw,no_root_squash)

rw means read/write. no_root_squash is a setting that allows nfs clients to connect as root. Without this setting, the root user on clients that connect has the permissions of the user 'nfsnobody', uid 65534.

Anytime you make any changes to the /etc/exports file, run the command:

# exportfs -avr
on the server to update the nfs server.


On to the nfs client:

On the client, you need the 'portmap' and 'nfslock' services running. Follow the instructions above to ensure that these 2 services start with the default runlevel. Once they are running, you can mount a directory on the nfs server. If the server ip is 192.168.1.1 and the share name is public, you can do this on a client:

# mkdir /mnt/public
# mount 192.168.1.1:/public /mnt/public


You should get a prompt back. A 'mount' command should show you that the nfs share is mounted, and you should be able to cd to this directory and view the contents.

A couple of things to keep in mind with NFS: permissions are handled by UID. It is very beneficial to ensure that a user on the client has the same UID on the server. Otherwise the permissions will just confuse you. If you are just using root, and using the no_root_squash option on every mount, you don't have to worry about this.

Security: NFS should stand for No F**king Security. It's all IP based, so if anyone can spoof an IP, they can mount one of your shares. NEVER do NFS over an untrusted network (like the internet) and ALWAYS restrict your shares to at least your local subnet.

There are 2 kinds of nfs mounts: soft mounts and hard mounts. With a soft mount, if the server goes away (reboot/lost network/whatever) the client can gracefully close the connection, unmount the share and continue on. However, in order for the client to maintain this ability, it has to do more caching of the reads and writes, so performance is lower and it's possible to lose data if the server goes away unexpectedly.

A hard mount means that the client never gives up trying to hit the server. Ever. Eventually the load on the client will be so high due to backed up i/o requests, you'll have to reboot it. Not all that good, but hard mounts don't have the caching overhead soft mounts do, so you have less chance of losing data. I always use hard mounts, but then specify the 'intr' option on the mount, so I can unmount it manually if the server goes away (see example below).

Troubleshooting:
As with most things in linux, watch the log files. If you get an error on the client when trying to mount a share, look at /var/log/messages on the server. If you get an error like "RPC: program not registered" that means that the portmap service isn't running on one of the machines. Verify all the processes are running and try again.

Examples:
Here are some examples from one of my production linux nfs servers. On the server, a line from /etc/exports:

/aim/exports/clink 192.168.23.0/255.255.255.0(rw,no_root_squash,async,wdelay)

Please see the manpage on exports for explanations of some of these options. Here is the corresponding /etc/fstab entry on a client that mounts this share (all on one line):

192.168.23.10:/aim/exports/clink /aim/imports/clink nfs
rsize=8192,wsize=8192,timeo=20,retrans=6,async,rw,noatime,intr 0 0


You might want to experiment with rsize and wsize parameters to get that last bit of performance. With this line in /etc/fstab, the mount is remounted at boot. I can also issue the command 'mount -t nfs -a' to mount all nfs entries in /etc/fstab with the corresponding options.

setuid on shell scripts


Running software as root without requiring a root password is the subject of a number of tutorials on the web, and although it may seem a little bit confusing, it's fairly simple. Inevitably, users want to run shell scripts as root, too. After all, they're considered a 'program', so why not? Unfortunately, there are unseen bumps in the road to Unix convenience.

The tutorial

Many tutorials show this method for creating a script that runs as root automatically.

  1. Open a text editor, and type up your script:
    #!/bin/sh
    program1
    program2
    ...
  2. Save the file as something.sh.
  3. Open a terminal, and enter the following commands:
    $ su
    [enter password]
    chown root:root something.sh
    chmod 4755 something.sh
    exit
  4. Then, finally run it with ./something.sh, and it'll have root access!

...or not. Most likely, you'll get the same error messages that you did before you ran those commands. If your script does actually work, go ahead and skip the rest of this tutorial. If you experience this problem, read on.

The problem

The instructions are fairly straightforward. Create the shell script that you want to execute, and change the owner and group to root (chown root:root). Now comes the command that's supposed to do the magic:

chmod 4755

Let's break this down a little bit. The 755 part means that there's read/write/execute permissions for the owner (root), and only read/execute permissions for everyone else. This makes sense because you want everyone to be able to execute the script, although you don't want everyone to be able to modify what it does.

Now for the 4 prefix. This means that the specified file will have the setuid bit set. This means that whatever is run will have the permissions of the owner. Since we set root as the owner, this will do exactly what we want. Perfect!

Except it doesn't. The truth is that the setuid bit is ignored for shell scripts on most *nix implementations due to the massive security holes it incurs. If the method originally mentioned doesn't work for you, chances are that your Linux distribution ignores the setuid bit on shell scripts.
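
You can see the setuid bit in the mode string, where an 's' replaces the owner's execute 'x'. A quick sketch on a scratch file:

```shell
#!/bin/sh
# Create a scratch file and apply mode 4755 (setuid + rwxr-xr-x)
f=$(mktemp)
chmod 4755 "$f"
ls -l "$f" | cut -c1-10   # prints: -rwsr-xr-x
rm -f "$f"
```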

The solution(s)

One way of solving this problem is to call the shell script from a program that can use the setuid bit. For example, here is how you would accomplish this in a C program:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    setuid( 0 );
    system( "/path/to/script.sh" );

    return 0;
}

Save it as runscript.c. You'll need the gcc compiler. If you don't have it already, look for it in your package manager. You can usually install the majority of your compiler tools with one large package, but many distros also offer the option of installing gcc by itself.

Once you have it, compile it at the prompt:

gcc runscript.c -o runscript
Now do the setuid on this program binary:
su
[enter password]
chown root:root runscript
chmod 4755 runscript

Now, you should be able to run it, and you'll see your script being executed with root permissions. Congratulations!

Another alternative, if you've got it installed, is to prefix all the commands in the shell script with 'sudo'. Then set up the permissions so that a password is not required to run those commands with sudo. Read the manpage for more information.
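
Such a sudo rule might look like the following fragment of /etc/sudoers (always edit via visudo; the username and script path here are placeholders):

```
# allow user 'alice' to run one specific script as root without a password
alice ALL=(root) NOPASSWD: /path/to/script.sh
```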

Conclusion

With all that said, running shell scripts with setuid isn't very safe, and the distro designers had a pretty good idea of what they were doing when many of them disabled it. If you're running a multiuser Unix environment and security is an asset for you, make sure that your scripts are secure. A single slip can result in the compromising of an entire network. Only use them when absolutely necessary, and make sure you know exactly what you're doing if you do decide to use them.

Sticky Bit

The sticky bit and directories

Another important enhancement involves the use of the sticky bit on directories. A directory with the sticky bit set means that only the file owner and the superuser may remove files from that directory. Other users are denied the right to remove files regardless of the directory permissions. Unlike with file sticky bits, the sticky bit on directories remains there until the directory owner or superuser explicitly removes the directory or changes the permissions.

You can gain the most security from this feature by placing the sticky bit on all public directories. These directories are writable by any non-administrator. You should train users that the sticky bit, together with the default umask of 077, solves a big problem for less secure systems. Together, both features prevent other users from altering or replacing any file you have in a public directory. The only information they can gain from the file is its name and attributes.

``Sticky bit example'' illustrates the power of such a scheme. The sticky bit is the ``t'' in the permissions for the directory.

Sticky bit example

$ id
uid=76(slm) gid=11(guru)
$ ls -al /tmp
total 64
drwxrwxrwt 2 bin bin 1088 Mar 18 21:10 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rwxr-xr-x 1 slm guru 19587 Mar 17 19:41 mine
-rw------- 1 slm guru 279 Mar 17 19:41 mytemp
-rw-rw-rw- 1 root sys 35 Mar 16 12:27 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile
$ rm /tmp/Ex16566
rm: /tmp/Ex16566 not removed. Permission denied
$ rm /tmp/protfile
rm: /tmp/protfile not removed. Permission denied
$ cat /tmp/openfile
Ha! Ha!
You can't remove me.
$ rm /tmp/openfile
rm: /tmp/openfile not removed. Permission denied
$ rm -f /tmp/openfile
$ rm /tmp/mine /tmp/mytemp
$ ls -l /tmp
drwxrwxrwt 2 bin bin 1088 Mar 18 21:19 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rw-rw-rw- 1 root sys 35 Mar 16 12:27 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile
$ cp /dev/null /tmp/openfile
$ cat /tmp/openfile
$ cp /dev/null /tmp/protfile
cp: cannot create /tmp/protfile
$ ls -l /tmp
drwxrwxrwt 2 bin bin 1088 Mar 18 21:19 .
dr-xr-xr-x 19 bin bin 608 Mar 18 11:50 ..
-rw------- 1 blf guru 19456 Mar 18 21:18 Ex16566
-rw------- 1 blf guru 10240 Mar 18 21:18 Rx16566
-rw-rw-rw- 1 root sys 0 Mar 18 21:19 openfile
-rw------- 1 root root 32 Mar 10 10:26 protfile

The only files removed are those owned by user slm (the user in the example). The user slm could not remove any other file, even the accessible file /tmp/openfile. However, the mode setting of the file itself allowed slm to destroy the file contents; this is why the umask setting is important in protecting data. Conversely, the mode on /tmp/protfile, together with the sticky bit on /tmp, makes /tmp/protfile impenetrable.

All public directories should have the sticky bit set. These include, but are not limited to:

  • /tmp

  • /usr/tmp

  • /usr/spool/uucppublic

If you are unsure, it is far better to set the sticky bit on a directory than to leave it off. You can set the sticky bit on a directory with the following command, where directory is the name of the directory:

chmod +t directory

To remove the bit, replace the ``+'' with a ``-'' in the chmod command.
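
On a modern Linux or Unix box you can verify the bit with a scratch directory; the trailing 't' in the mode string confirms it is set:

```shell
#!/bin/sh
# Create a world-writable directory and set the sticky bit (mode 1777, like /tmp)
d=$(mktemp -d)
chmod 1777 "$d"
ls -ld "$d" | cut -c1-10   # prints: drwxrwxrwt
rmdir "$d"
```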
