The daemon's real work is stubbed out in a C<code()> subroutine, which
just prints some debugging info to show that it works; it should be
replaced with the real code.
    #!/usr/bin/perl -w

    use POSIX ();
    use FindBin ();
    use File::Basename ();
    use File::Spec::Functions;

    $| = 1;

    # make the daemon cross-platform, so exec always calls the script
    # itself with the right path, no matter how the script was invoked.
    my $script = File::Basename::basename($0);
    my $SELF   = catfile($FindBin::Bin, $script);

    # POSIX unmasks the sigprocmask properly
    $SIG{HUP} = sub {
        print "got SIGHUP\n";
        exec($SELF, @ARGV) || die "$0: couldn't restart: $!";
    };

    code();

    sub code {
        print "PID: $$\n";
        print "ARGV: @ARGV\n";
        my $count = 0;
        while (++$count) {
            sleep 2;
            print "$count\n";
        }
    }
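To try it out, run the script in the background and send it a SIGHUP
from another shell. The session below is only illustrative (your PID
will differ); note that exec() replaces the process image without
changing the PID, which is why the same PID is reported after the
restart:

    % ./daemon foo &
    PID: 4278
    ARGV: foo
    1
    2
    % kill -HUP 4278
    got SIGHUP
    PID: 4278
    ARGV: foo
    1
    ...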
=head2 Deferred Signals (Safe Signals)
Before Perl 5.7.3, installing Perl code to deal with signals exposed you to
danger from two things. First, few system library functions are
re-entrant. If the signal interrupts while Perl is executing one function
(like malloc(3) or printf(3)), and your signal handler then calls the same
function again, you could get unpredictable behavior--often, a core dump.
Second, Perl isn't itself re-entrant at the lowest levels. If the signal
interrupts Perl while Perl is changing its own internal data structures,
similarly unpredictable behavior may result.
There were two things you could do, knowing this: be paranoid or be
pragmatic. The paranoid approach was to do as little as possible in your
signal handler. Set an existing integer variable that already has a
value, and return. This doesn't help you if you're in a slow system call,
which will just restart. That means you have to C<die> to longjmp(3) out
of the handler. Even this is a little cavalier for the true paranoiac,
who avoids C<die> in a handler because the system I<is> out to get you.
The pragmatic approach was to say "I know the risks, but prefer the
convenience", and to do anything you wanted in your signal handler,
and be prepared to clean up core dumps now and again.
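As an illustration of the paranoid style, a pre-5.7.3 handler might have
done no more than set a flag for the main loop to poll; in this sketch,
do_work() and handle_usr1() are hypothetical stand-ins for your own code:

    my $got_usr1 = 0;                    # an existing integer variable
    $SIG{USR1} = sub { $got_usr1 = 1 };  # set it and return, nothing more

    while (1) {
        do_work();
        if ($got_usr1) {                 # poll the flag at a safe point
            $got_usr1 = 0;
            handle_usr1();
        }
    }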
Perl 5.7.3 and later avoid these problems by "deferring" signals. That is,
when the signal is delivered to the process by the system (to the C code
that implements Perl) a flag is set, and the handler returns immediately.
Then at strategic "safe" points in the Perl interpreter (e.g. when it is
about to execute a new opcode) the flags are checked and the Perl level
handler from %SIG is executed. The "deferred" scheme allows much more
flexibility in the coding of signal handlers as we know the Perl
interpreter is in a safe state, and that we are not in a system library
function when the handler is called. However the implementation does
differ from previous Perls in the following ways:
=over 4
=item Long-running opcodes
As the Perl interpreter looks at signal flags only when it is about
to execute a new opcode, a signal that arrives during a long-running
opcode (e.g. a regular expression operation on a very large string) will
not be seen until the current opcode completes.
If a signal of any given type fires multiple times during an opcode
(such as from a fine-grained timer), the handler for that signal will
be called only once, after the opcode completes; all other
instances will be discarded. Furthermore, if your system's signal queue
gets flooded to the point that there are signals that have been raised
but not yet caught (and thus not deferred) at the time an opcode
completes, those signals may well be caught and deferred during
subsequent opcodes, with sometimes surprising results. For example, you
may see alarms delivered even after calling C<alarm(0)> as the latter
stops the raising of alarms but does not cancel the delivery of alarms
raised but not yet caught. Do not depend on the behaviors described in
this paragraph as they are side effects of the current implementation and
may change in future versions of Perl.
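You can see the deferral for yourself with something like the following
sketch, which arranges for an alarm to fire while a single long-running
sort opcode is executing (the list size and timings are rough and
machine-dependent):

    my @big = map { sprintf "%09d", rand(1e9) } 1 .. 3_000_000;

    $SIG{ALRM} = sub { print "alarm handled after ", time - $^T, "s\n" };
    alarm(1);                   # fires one second in, mid-sort
    my @sorted = sort @big;     # default sort: one long-running opcode
    # The handler runs only here, at the next opcode boundary, so it
    # reports the full sort duration rather than one second.
    print "sort finished after ", time - $^T, "s\n";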
=item Interrupting IO
When a signal is delivered (e.g., SIGINT from a control-C) the operating
system breaks into IO operations like I<read>(2), which is used to
implement Perl's readline() function, the C<< <> >> operator. On older
Perls the handler was called immediately (and as C<read> is not "unsafe",
this worked well). With the "deferred" scheme the handler is I<not> called
immediately, and if Perl is using the system's C<stdio> library that
library may restart the C<read> without returning to Perl to give it a
chance to call the %SIG handler. If this happens on your system the
solution is to use the C<:perlio> layer to do IO--at least on those handles
that you want to be able to break into with signals. (The C<:perlio> layer
checks the signal flags and calls %SIG handlers before resuming IO
operation.)
The default in Perl 5.7.3 and later is to automatically use
the C<:perlio> layer.
Note that it is not advisable to access a file handle within a signal
handler where that signal has interrupted an I/O operation on that same
handle. While perl will at least try hard not to crash, there are no
guarantees of data integrity; for example, some data might get dropped or
written twice.
Some networking library functions like gethostbyname() are known to have
their own implementations of timeouts which may conflict with your
timeouts. If you have problems with such functions, try using the POSIX
sigaction() function, which bypasses Perl safe signals. Be warned that
this does subject you to possible memory corruption, as described above.
Instead of setting C<$SIG{ALRM}>:
    local $SIG{ALRM} = sub { die "alarm" };
try something like the following:
    use POSIX qw(SIGALRM);
    POSIX::sigaction(SIGALRM, POSIX::SigAction->new(sub { die "alarm" }))
        || die "Error setting SIGALRM handler: $!\n";
Another way to disable the safe signal behavior locally is to use
the C<Perl::Unsafe::Signals> module from CPAN, which affects
all signals.
=item Restartable system calls
On systems that supported it, older versions of Perl used the
SA_RESTART flag when installing %SIG handlers. This meant that
restartable system calls would continue rather than returning when
a signal arrived. In order to deliver deferred signals promptly,
Perl 5.7.3 and later do I<not> use SA_RESTART. Consequently,
restartable system calls can fail (with $! set to C<EINTR>) in places
where they previously would have succeeded.
The default C<:perlio> layer retries C<read>, C<write>
and C<close> as described above; interrupted C<wait> and
C<waitpid> calls will always be retried.
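So if you do your own low-level IO with sysread() and friends, you may
need to retry interrupted calls yourself. A minimal sketch, assuming
C<$fh> is an already-opened handle:

    use Errno qw(EINTR);

    my ($n, $buf);
    do {
        $n = sysread($fh, $buf, 8192);
    } until (defined $n || $! != EINTR);   # retry only if interrupted
    defined $n || die "sysread failed: $!";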
=item Signals as "faults"
Certain signals like SEGV, ILL, and BUS are generated by virtual memory
addressing errors and similar "faults". These are normally fatal: there is
little a Perl-level handler can do with them. So Perl delivers them
immediately rather than attempting to defer them.
=item Signals triggered by operating system state
On some operating systems certain signal handlers are supposed to "do
something" before returning. One example can be CHLD or CLD, which
indicates a child process has completed. On some operating systems the
signal handler is expected to C<wait()> for the completed child
process. On such systems the deferred signal scheme will not work for
those signals: it does not do the C<wait()>. Again the failure will
look like a loop as the operating system will reissue the signal because
there are completed child processes that have not yet been C<wait()>ed for.
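On such systems you can do the reaping yourself in the handler. This
classic reaper sketch uses a non-blocking waitpid() loop so that several
children exiting at once are all collected:

    use POSIX ":sys_wait_h";            # for WNOHANG

    $SIG{CHLD} = sub {
        # reap every child that has exited, without blocking
        while ((my $child = waitpid(-1, WNOHANG)) > 0) {
            # record $child and $? here if you care about them
        }
    };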
=back
If you want the old signal behavior back despite possible
memory corruption, set the environment variable C<PERL_SIGNALS> to
C<"unsafe">. This feature first appeared in Perl 5.8.1.
=head1 Named Pipes
A named pipe (often referred to as a FIFO) is an old Unix IPC
mechanism for processes communicating on the same machine. It works
just like regular anonymous pipes, except that the
processes rendezvous using a filename and need not be related.
To create a named pipe, use the C<mkfifo()> function.
    use POSIX qw(mkfifo);
    mkfifo($path, 0700) || die "mkfifo $path failed: $!";
You can also use the Unix command mknod(1), or on some
systems, mkfifo(1). These may not be in your normal path, though.
    # system return val is backwards, so && not ||
    #
    $ENV{PATH} .= ":/etc:/usr/etc";
    if  (      system("mknod",  $path, "p")
            && system("mkfifo", $path) )
    {
        die "mk{nod,fifo} $path failed";
    }
A fifo is convenient when you want to connect a process to an unrelated
one. When you open a fifo, the program will block until there's something
on the other end.
For example, let's say you'd like to have your F<.signature> file be a
named pipe that has a Perl program on the other end. Now every time any
program (like a mailer, news reader, finger program, etc.) tries to read
from that file, the reading program will read the new signature from your
program. We'll use the pipe-checking file-test operator, B<-p>, to find
out whether anyone (or anything) has accidentally removed our fifo.
    chdir();                    # go home
    my $FIFO = ".signature";

    while (1) {
        unless (-p $FIFO) {
            unlink $FIFO;       # discard any failure, will catch later
            require POSIX;      # delayed loading of heavy module
            POSIX::mkfifo($FIFO, 0700)
                || die "can't mkfifo $FIFO: $!";
        }

        # next line blocks till there's a reader
        open (FIFO, "> $FIFO") || die "can't open $FIFO: $!";
        print FIFO "John Smith (smith\@host.org)\n", `fortune -s`;
        close(FIFO)            || die "can't close $FIFO: $!";
        sleep 2;                # to avoid dup signals
    }
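On the reading side the fifo behaves like an ordinary file. For example,
a reader in Perl (assuming the fifo created above lives in your home
directory):

    # open() blocks here until the writing process opens its end
    open(my $sig, "<", "$ENV{HOME}/.signature")
        || die "can't open .signature: $!";
    print while <$sig>;     # copy the signature to our STDOUT
    close($sig);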
=head1 Using open() for IPC
Perl's basic open() statement can also be used for unidirectional
interprocess communication by either appending or prepending a pipe
symbol to the second argument to open(). Here's how to start
something up in a child process you intend to write to:
    open(SPOOLER, "| cat -v | lpr -h 2>/dev/null")
        || die "can't fork: $!";
    local $SIG{PIPE} = sub { die "spooler pipe broke" };
    print SPOOLER "stuff\n";
    close SPOOLER || die "bad spool: $! $?";
And here's how to start up a child process you intend to read from:
    open(STATUS, "netstat -an 2>&1 |")
        || die "can't fork: $!";
    while (<STATUS>) {
        next if /^(tcp|udp)/;
        print;
    }
    close STATUS || die "bad netstat: $! $?";
If one can be sure that a particular program is a Perl script expecting
filenames in @ARGV, the clever programmer can write something like this:
    % program f1 "cmd1|" - f2 "cmd2|" f3 < tmpfile
and no matter which sort of shell it's called from, the Perl program will
read from the file F<f1>, the process F<cmd1>, standard input (F<tmpfile>
in this case), the F<f2> file, the F<cmd2> command, and finally the F<f3>
file. Pretty nifty, eh?
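Such a program needs no special code of its own; the magic ARGV handle
does all the work. A minimal stand-in for F<program> might be just:

    #!/usr/bin/perl -w
    # The <> operator opens each member of @ARGV in turn: plain files,
    # "cmd|" pipe specifications, and "-" for standard input all work.
    while (<>) {
        print;
    }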
You might notice that you could use backticks for much the
same effect as opening a pipe for reading:
    print grep { !/^(tcp|udp)/ } `netstat -an 2>&1`;
    die "bad netstatus ($?)" if $?;
While this is true on the surface, it's much more efficient to process the
file one line or record at a time because then you don't have to read the
whole thing into memory at once. It also gives you finer control of the
whole process, letting you kill off the child process early if you'd like.
Be careful to check the return values from both open() and close(). If
you're I<writing> to a pipe, you should also trap SIGPIPE. Otherwise,
think of what happens when you start up a pipe to a command that doesn't
exist: the open() will in all likelihood succeed (it only reflects the
fork()'s success), but then your output will fail--spectacularly. Perl
can't know whether the command worked, because your command is actually
running in a separate process whose exec() might have failed. Therefore,
while readers of bogus commands return just a quick EOF, writers
to bogus commands will get hit with a signal, which they'd best be prepared
to handle. Consider:
open(FH, "|bogus") || die "can't fork: $!";
print FH "bang\n"; # neither necessary nor sufficient
# to check print retval!
close(FH) || die "can't close: $!";
The reason for not checking the return value from print() is pipe
buffering: physical writes are delayed, so the failure won't blow up
until the close, and it will blow up with a SIGPIPE. To catch it, you
could use this:
    $SIG{PIPE} = "IGNORE";
    open(FH, "|bogus") || die "can't fork: $!";
    print FH "bang\n";
    close(FH)          || die "can't close: status=$?";
=head2 Filehandles
Both the main process and any child processes it forks share the same
STDIN, STDOUT, and STDERR filehandles. If both processes try to access
them at once, strange things can happen. You may also want to close
or reopen the filehandles for the child. You can get around this by
opening your pipe with open(), but on some systems this means that the
child process cannot outlive the parent.
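For example, a freshly forked child can redirect its own copies of the
standard handles before doing anything else; a sketch (the log filename
is arbitrary):

    defined(my $pid = fork()) || die "can't fork: $!";
    unless ($pid) {                     # in the child
        open(STDOUT, ">", "/tmp/child.out")
            || die "can't reopen STDOUT: $!";
        open(STDERR, ">&STDOUT")        # errors go the same place
            || die "can't dup STDOUT: $!";
        # ... child's real work here ...
        exit(0);
    }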
=head2 Background Processes
You can run a command in the background with:
system("cmd &");
The command's STDOUT and STDERR (and possibly STDIN, depending on your
shell) will be the same as the parent's. You won't need to catch
SIGCHLD because of the double-fork taking place; see below for details.
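If you'd rather the command's output not be mingled with yours, you can
redirect it away inside the command string itself, for example:

    system("cmd </dev/null >/dev/null 2>&1 &");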
=head2 Complete Dissociation of Child from Parent
In some cases (starting server processes, for instance) you'll want to
completely dissociate the child process from the parent. This is
often called daemonization. A well-behaved daemon will also chdir()
to the root directory so it doesn't prevent unmounting the filesystem
containing the directory from which it was launched, and redirect its
standard file descriptors from and to F</dev/null> so that random
output doesn't wind up on the user's terminal.
use POSIX "setsid";
sub daemonize {
chdir("/") || die "can't chdir to /: $!";
open(STDIN, "< /dev/null") || die "can't read /dev/null: $!";
open(STDOUT, "> /dev/null") || die "can't write to /dev/null: $!";
defined(my $pid = fork()) || die "can't fork: $!";
exit if $pid; # non-zero now means I am the parent
(setsid() != -1) || die "Can't start a new session: $!"
open(STDERR, ">&STDOUT") || die "can't dup stdout: $!";
}
The fork() has to come before the setsid() to ensure you aren't a
process group leader; the setsid() will fail if you are. If your
system doesn't have the setsid() function, open F</dev/tty> and use the
C<TIOCNOTTY> ioctl() on it instead. See tty(4) for details.
Non-Unix users should check their C<< I<Your_OS>::Process >> module for
other possible solutions.
=head2 Safe Pipe Opens
Another interesting approach to IPC is making your single program go
multiprocess and communicate between--or even amongst--yourselves. The
open() function will accept a file argument of either C<"-|"> or C<"|-">
to do a very interesting thing: it forks a child connected to the
filehandle you've opened. The child is running the same program as the
parent. This is useful for safely opening a file when running under an
assumed UID or GID, for example. If you open a pipe I<to> minus, you can
write to the filehandle you opened and your kid will find it in I<its>
STDIN. If you open a pipe I<from> minus, you can read from the filehandle
you opened whatever your kid writes to I<its> STDOUT.
    use English qw[ -no_match_vars ];
    my $PRECIOUS = "/path/to/some/safe/file";
    my $sleep_count;
    my $pid;

    do {
        $pid = open(KID_TO_WRITE, "|-");
        unless (defined $pid) {
            warn "cannot fork: $!";
            die "bailing out" if $sleep_count++ > 6;
            sleep 10;
        }
    } until defined $pid;

    if ($pid) {                     # I am the parent
        print KID_TO_WRITE @some_data;
        close(KID_TO_WRITE)         || warn "kid exited $?";
    } else {                        # I am the child
        # drop permissions in setuid and/or setgid programs:
        ($EUID, $EGID) = ($UID, $GID);
        open (OUTFILE, "> $PRECIOUS")
            || die "can't open $PRECIOUS: $!";
        while (<STDIN>) {
            print OUTFILE;          # child's STDIN is parent's KID_TO_WRITE
        }
        close(OUTFILE)              || die "can't close $PRECIOUS: $!";
        exit(0);                    # don't forget this!!
    }
Another common use for this construct is when you need to execute
something without the shell's interference. With system(), it's
straightforward, but you can't use a pipe open or backticks safely.
That's because there's no way to stop the shell from getting its hands on
your arguments. Instead, use lower-level control to call exec() directly.
Here's a safe backtick or pipe open for read:
    my $pid = open(KID_TO_READ, "-|");
    defined($pid)           || die "can't fork: $!";

    if ($pid) {             # parent
        while (<KID_TO_READ>) {
            # do something interesting
        }
        close(KID_TO_READ)  || warn "kid exited $?";
    } else {                # child
        ($EUID, $EGID) = ($UID, $GID);  # suid only
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }
And here's a safe pipe open for writing:
    my $pid = open(KID_TO_WRITE, "|-");
    defined($pid)           || die "can't fork: $!";

    $SIG{PIPE} = sub { die "whoops, $program pipe broke" };

    if ($pid) {             # parent
        print KID_TO_WRITE @data;
        close(KID_TO_WRITE) || warn "kid exited $?";
    } else {                # child
        ($EUID, $EGID) = ($UID, $GID);
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }
It is very easy to dead-lock a process using this form of open(), or
indeed with any use of pipe() with multiple subprocesses. The
example above is "safe" because it is simple and calls exec(). See
L"Avoiding Pipe Deadlocks"> for general safety principles, but there
are extra gotchas with Safe Pipe Opens.
In particular, if you opened the pipe using C<open FH, "|-">, then you
cannot simply use close() in the parent process to close an unwanted
writer. Consider this code:
    my $pid = open(WRITER, "|-");       # fork open a kid
    defined($pid) || die "first fork failed: $!";
    if ($pid) {
        my $sub_pid = fork();
        defined($sub_pid) || die "second fork failed: $!";
        if ($sub_pid) {
            close(WRITER) || die "couldn't close WRITER: $!";
            # now do something else...
        }
        else {
            # first write to WRITER
            # ...
            # then when finished
            close(WRITER) || die "couldn't close WRITER: $!";
            exit(0);
        }
    }
    else {
        # first do something with STDIN, then
        exit(0);
    }
In the example above, the true parent does not want to write to the WRITER
filehandle, so it closes it. However, because WRITER was opened using
C<open FH, "|-">, it has a special behavior: closing it calls
waitpid() (see L<perlfunc/waitpid>), which waits for the subprocess
to exit. If the child process ends up waiting for something happening
in the section marked "do something else", you have deadlock.
This can also be a problem with intermediate subprocesses in more
complicated code, which will call waitpid() on all open filehandles
during global destruction--in no predictable order.
To solve this, you must manually use pipe(), fork(), and the form of
open() which sets one file descriptor to another, as shown below:
    pipe(READER, WRITER)    || die "pipe failed: $!";
    $pid = fork();
    defined($pid)           || die "first fork failed: $!";
    if ($pid) {
        close READER;
        my $sub_pid = fork();
        defined($sub_pid)   || die "second fork failed: $!";
        if ($sub_pid) {
            close(WRITER)   || die "can't close WRITER: $!";
        }
        else {
            # write to WRITER...
            # ...
            # then when finished
            close(WRITER)   || die "can't close WRITER: $!";
            exit(0);
        }
        # now do something else...
    }
    else {
        open(STDIN, "<&READER") || die "can't reopen STDIN: $!";
        close(WRITER)           || die "can't close WRITER: $!";
        # do something...
        exit(0);
    }
Since Perl 5.8.0, you can also use the list form of C<open> for pipes.
This is preferred when you wish to avoid having the shell interpret
metacharacters that may be in your command string.
So for example, instead of using:
open(PS_PIPE, "ps aux|") || die "can't open ps pipe: $!";
One would use either of these:
open(PS_PIPE, "-|", "ps", "aux")
|| die "can't open ps pipe: $!";
@ps_args = qw[ ps aux ];
open(PS_PIPE, "-|", @ps_args)
|| die "can't open @ps_args|: $!";
Because there are more than three arguments to open(), it forks the ps(1)
command I<without> spawning a shell, and reads its standard output via the
C<PS_PIPE> filehandle. The corresponding syntax to I<write> to command
pipes is to use C<"|-"> in place of C<"-|">.
This was admittedly a rather silly example, because you're using string
literals whose content is perfectly safe. There is therefore no cause to
resort to the harder-to-read, multi-argument form of pipe open(). However,
whenever you cannot be assured that the program arguments are free of shell
metacharacters, the fancier form of open() should be used. For example:
    @grep_args = ("egrep", "-i", $some_pattern, @many_files);
    open(GREP_PIPE, "-|", @grep_args)
        || die "can't open @grep_args|: $!";
Here the multi-argument form of pipe open() is preferred because the
pattern and indeed even the filenames themselves might hold metacharacters.
Be aware that these operations are full Unix forks, which means they may
not be correctly implemented on all alien systems. Additionally, these are
not true multithreading. To learn more about threading, see the F<modules>
file mentioned below in the SEE ALSO section.
=head2 Avoiding Pipe Deadlocks
Whenever you have more than one subprocess, you must be careful that each
closes whichever half of any pipes created for interprocess communication
it is not using. This is because any child process reading from the pipe
and expecting an EOF will never receive it, and therefore never exit. A
single process closing a pipe is not enough to close it; the last process
with the pipe open must close it for it to read EOF.
Certain built-in Unix features help prevent this most of the time. For
instance, filehandles have a "close on exec" flag, which is set I<en masse>
under control of the C<$^F> variable. This is so any filehandles you
didn't explicitly route to the STDIN, STDOUT or STDERR of a child
I<program> will be automatically closed.
Always explicitly and immediately call close() on the writable end of any
pipe, unless that process is actually writing to it. Even if you don't
explicitly call close(), Perl will still close() all filehandles during
global destruction. As previously discussed, if those filehandles have
been opened with Safe Pipe Open, this will result in calling waitpid(),
which may again deadlock.
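In the simple two-process case, the rule just means each side closes the
pipe end it isn't using, as in this sketch:

    pipe(READER, WRITER)      || die "pipe failed: $!";
    defined(my $pid = fork()) || die "fork failed: $!";
    if ($pid) {                 # parent reads
        close(WRITER);          # else EOF never arrives on READER
        while (<READER>) { print "parent got: $_" }
        close(READER);
        waitpid($pid, 0);
    } else {                    # child writes
        close(READER);          # close the half we don't use
        print WRITER "hello from $$\n";
        close(WRITER) || die "can't close WRITER: $!";
        exit(0);
    }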
=head2 Bidirectional Communication with Another Process
While this works reasonably well for unidirectional communication, what
about bidirectional communication? The most obvious approach doesn't work:
    # THIS DOES NOT WORK!!
    open(PROG_FOR_READING_AND_WRITING, "| some program |")
If you forget to C<use warnings>, you'll miss out entirely on the
diagnostic message.