-------------------------------------------------------------------------------
Shell File Handles and Descriptors

Shells can 'open' files, and keep them open.  It is just rarely used.
Complex file descriptor handling is often just not needed; simpler use of
named pipes and background commands can be used to simplify things.

For a practical example see "co-processing/shell_example.txt" where a shell
script does "expect" like interaction with a telnet session.

-------------------------------------------------------------------------------
Bash-isms

   <( command )    read from command (a pipe is created)
   <<< "$var"      read from a variable (bash 'here' string)
                   WARNING: a newline is appended!   try:  od -xc <<< "hi"

Don't bother with <<[-]EOF type 'here files'.
See "file.txt" and look for "HERE, Here, here files" for better alternatives.

-------------------------------------------------------------------------------
Basic Bourne shell file descriptor handling...

Opening Files (Read)

In the csh, all you've got is $<, which reads a line from your tty.  What if
you've redirected?  Tough noogies, you still get your tty.  Read in the
Bourne shell allows you to read from stdin, which catches redirection.
It also means that you can do things like this:

   exec 3<file 4>file.log
   echo >&4 "an error has happened"

Closing FDs

In the Bourne shell, you can close file descriptors you don't want open,
like 2>&-, which isn't the same as redirecting it to /dev/null.
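For instance, a quick demonstration of the difference (a write to a closed
descriptor fails, a write to /dev/null quietly succeeds):

```shell
# Write to a closed stdout -- the echo gets a write error (non-zero status).
( echo hi >&- ) 2>/dev/null
echo "write to closed fd:  exit status $?"
# Write to /dev/null -- the output is simply discarded (zero status).
echo hi >/dev/null
echo "write to /dev/null:  exit status $?"
```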
   exec 3<&-
   exec 4>&-

Clone a File descriptor

   exec 5>&4

Move a File descriptor (essentially clone and close)

   exec 5>&4-      # equivalent to:  exec 5>&4 4>&-

Open File/Network (Read and Write full example)

   exec 4<>/dev/tcp/www.some.domain/80
   echo >&4 GET http://www.some.domain:80/file HTTP/1.0
   echo >&4
   while read line; do
     echo $line
   done <&4
   # note you need to close each side separately
   exec 4>&- 4<&-

Swap stdout and stderr

   command 3>&1 1>&2 2>&3 3>&-

-------------------------------------------------------------------------------
Using variables to hold file descriptor numbers (Bash)

   exec {fd}<~/lib/line-page   # File Descriptor Assignment, added Bash 4.1
   echo "file descriptor opened : $fd"

   read line <&$fd              # read first line
   echo "$line"
   read -u$fd line              # read second line (bash)
   echo "$line"

   print_line() { echo "$2"; }  # read using mapfile callback (5 lines)
   mapfile -C print_line -u$fd -O1 -c1 -n5 -t
   unset MAPFILE                # Junk the lines read in (default array)

   while read -r -u$fd line; do # read line in a loop, till line = 20
     echo "$line"
     [[ $line == 20 ]] && break
   done

   read -d'\n' -u$fd -a lines   # read rest of lines into an array
   printf '%s\n' "${lines[@]}"

   exec {fd}<&-                 # close descriptor

NOTE: it is important to close a descriptor before re-using the variable, or
the file will remain open.  Re-opening assigns a new file descriptor.

-------------------------------------------------------------------------------
Handle STDERR differently to STDOUT

Filter STDERR

   stuff () {
     echo standard output
     echo more output
     echo standard error 1>&2
     echo more error 1>&2
   }
   filter () { grep a; }

   { stuff 2>&1 1>&3 | filter 1>&2; } 3>&1

   standard output
   more output
   standard error
   # Note that 'more error' was filtered out as it has no 'a'

Practical example...

   # Find the directory with largest inode count,
   # Ignoring directory errors for dirs that disappear during the search.
   raw=$( { ( du -xs --inodes /app/docker/overlay2/* | sort -nr | head -1 ) \
            2>&1 1>&3 | grep -v 'du: cannot access' 1>&2; } 3>&1 )

Using built-in named pipes...

   command > >(stdout_pipe) 2> >(stderr_pipe)
OR
   { command | stdout_pipe; } 2> >(stderr_pipe)

Unless stdout_pipe can also produce errors!

Direct both output and errors to a log, but errors also to screen...

   (./doit >> log) 2>&1 | tee -a log

WARNING: any separation may cause the order of stdout and stderr lines to
become wrong!

-------------------------------------------------------------------------------
Using named pipes

This can be simpler, especially if you need an actual filename

   mknod /tmp/pipe$$ p
   echo "A plumber's job is never done" >/tmp/pipe$$ &
   cat /tmp/pipe$$
   rm -f /tmp/pipe$$

Bash allows this to be a little more direct

   cat <( echo "A plumber's job is never done" )

In this case the cat was given an actual filename.

-------------------------------------------------------------------------------
What file descriptors are being used by a shell

   ls /proc/$$/fd

A long listing will show symbolic links to the actual file that is open!
This lets you find an unused file descriptor number, especially for older
shells - see previous.

-------------------------------------------------------------------------------
Shell File Descriptors (advanced usage)

Open file ONCE, read lines one at a time, close

   exec 5<&0 0<File    # save stdin on fd 5, open "File" as stdin
   read line1
   read line2
   exec 0<&5 5<&-      # restore stdin, close fd 5

Read and Write to the same open file

   echo 1234567890 > File  # Write string to "File".
   exec 3<> File           # Open "File" and assign fd 3 to it.
   read -n 4 <&3           # Read only 4 characters.
   echo -n . >&3           # Write a decimal point there.
   exec 3>&-               # Close fd 3.
   cat File                # ==> 1234.67890

More on read is in "Bash "read" Notes.." in "co-processes.hints"

-------------------------------------------------------------------------------
More Elaborate Combinations

Maybe you want to pipe stderr to a command and leave stdout alone.  Not too
hard an idea, right?  You can't do this in the csh.
In a Bourne shell, you can do things like this:

   exec 3>&1; grep yyy xxx 2>&1 1>&3 3>&- | sed s/file/foobar/ 1>&2 3>&-
   grep: xxx: No such foobar or directory

Normal output would be unaffected.  The closes there were in case something
really cared about all its FDs.  We send stderr to the sed, and then put it
back out fd 2.

-------------------------------------------------------------------------------
Anthony's General Example Of complex File descriptor use (bourne shell)...

Extracting the output of 3 (or more) channels of data is a difficult task.
The third channel is used to pass the status of the command ($?) for error
checking.  This is vital as the status of the first command in a pipeline is
normally unavailable (the status of a pipeline is that of the last command!)

ASIDE: Bash lets you get the status of ALL pipeline commands with the
$PIPESTATUS array.

Note the use of sub-shells so that the output stream (example fd 3) is taken
from the sub-shell where it is defined and NOT from the previous command.

   exec 9>&1   # set fd 9 to be the normal (at this time) stdout of program

   cmd() { echo OUTPUT; echo >&2 ERROR; }   # fake command for testing.

   (( ( cmd; echo STATUS >&3 ) \
      | sed 's/^/out:/' >&9 ) 2>&1 \
      | sed 's/^/err:/' >&9 ) 3>&1 \
      | sed 's/^/stat:/' >&9

   unset cmd
   exec 9>&-   # close fd 9

results in the following output to fd 9 (order of lines may vary)

   out:OUTPUT
   err:ERROR
   stat:STATUS

Note however the point is not to output the status but to somehow save it
(for the parent shell) while doing further processing of the other channels.

Dan Bernstein --- brnstnd@nyu.edu

No exit status report...

   exec 9>&1
   ( exec 2>&1; ls /tmp /foo | sed 's/^/out: /' >&9 ) | sed 's/^/err: /'

Simplified to just tag error output

   exec 9>&1
   ls /tmp /foo 2>&1 >&9 | sed --unbuffered 's/^/ERROR: /'

See also my "cmdout" script.
https://antofthy.gitlab.io/software/#cmdout

-------------------------------------------------------------------------------
Anthony's Practical example

I had a pipe in which the first command may or may not exist on the system,
and if it did exist its location was unknown (non-standard unix command).
If it was not present I wanted to do something else.

One solution was to just run it, get its status, and redirect stderr to
device null.  The command before this conversion was

   zoo xp $Zoo $mesgname | tail -n+6

and afterward

   exec 9>&1    # fd 9 = the real stdout
   status=`( ( zoo xp $Zoo $name; echo $? >&3 ) \
              | tail -n+6 >&9 ) 3>&1 2>/dev/null`
   if [ $status -ne 0 ]; then
      ....
   fi

Of course it is simpler to just do a "type" test on the command to see if it
is present or not (See "general_hints.txt", "Is COMMAND available").
But this works for unexpected errors too.

Also note that saving output to a temporary file may also have been simpler,
unless the command takes a very very long time, and you want to display what
you have received so far.

Complex stdout and stderr example (original):

Here's something I had to do where I ran dd's stderr into a grep -v pipe to
get rid of the statistics dd produces, but retain any errors, and to return
dd's exit status, not the grep's:

   device=/dev/rmt8
   dd_noise='^[0-9]+\+[0-9]+ records (in|out)$'
   exec 3>&1
   status=`((dd if=$device ibs=64k 2>&1 1>&3 3>&- 4>&-; echo $? >&4) \
            | egrep -v "$dd_noise" 1>&2 3>&- 4>&-) 4>&1`
   exit $status;

This is one example where a named pipe and a backgrounded egrep may have
been more practical, and simpler to understand.

Of course BASH has things that can make this much easier.
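In bash, for instance, the exit status of every command in a pipeline is
available in the $PIPESTATUS array, so no descriptor juggling is needed just
to check the first command.  A sketch (the filenames are only illustrative):

```shell
# Filter a command's merged output, then check the FIRST command's status.
ls /etc/hosts /no/such/file 2>&1 | grep -v 'No such file'
status=${PIPESTATUS[0]}        # exit status of ls, not of grep
echo "first command exited with status $status"
```

Note $PIPESTATUS must be saved immediately; the next command overwrites it.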
-------------------------------------------------------------------------------
Linux special filenames

   /dev/stdin     read from standard input as a pipeline file
   /dev/fd/       the file descriptors for the current process
                  (e.g. /dev/fd/0 is standard input)

-------------------------------------------------------------------------------
BASH special file handles 'tcp'

Reading network ports, using <> for a bi-directional stream

   exec {w}<>/dev/tcp/www.google.com/80
   printf "%s\n\n" "GET http://www.google.com/ HTTP/1.0" >&$w
   while read -u$w line; do
     echo "$line"
   done
   exec {w}>&-

-------------------------------------------------------------------------------
Rewind a Read/Write File Handle

Writing to, then re-reading from, a RW file handle is tricky.
Here we pre-delete a temporary file after opening it.

   F=$(mktemp)
   exec {tmp}<> "$F"
   rm -f "$F"
   echo "Hello world" >&$tmp
   { exec < /dev/stdin; cat; } <&$tmp
   exec {tmp}>&- {tmp}<&-

When you open a file descriptor in bash it is also visible in /dev/fd/
As such

   cat /dev/fd/3      or      cat /proc/self/fd/3

will rewind the file descriptor for you.  While

   echo "something" >> /dev/fd/3    or    echo "something" >> /proc/self/fd/3

will append to the end.  Both are repeatable as many times as you like.

-------------------------------------------------------------------------------
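The rewind/append behaviour described above can be checked with a small
self-contained test (Linux specific; bash 4.1+ for the {var} descriptors):

```shell
# Re-opening /proc/self/fd/N gives a fresh file offset, even after the
# file's name has been deleted.
F=$(mktemp)
exec {tmp}<> "$F"                  # open read/write, fd number saved in $tmp
rm -f "$F"                         # pre-delete; the open fd keeps the file
echo "Hello world" >&$tmp          # write; fd offset is now at end-of-file
cat /proc/self/fd/$tmp             # fresh open => reads from the start
echo "more" >> /proc/self/fd/$tmp  # ">>" re-opens and appends at the end
cat /proc/self/fd/$tmp             # both lines, again from the start
exec {tmp}>&-                      # close the descriptor
```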