-------------------------------------------------------------------------------
Remote SSH commands...
Specifically, "Re-quoting commands safely as an argument to other commands"

TL;DR:
  All remote commands are executed by the login shell of the remote system.
  If that shell does not allow the execution of a command, SSH cannot run a
  command.

Exceptions....
  You do NOT need a valid login shell on the remote system to set up a
  remote port-forwarding link, or for an SSH internal forward (the ssh
  'ProxyJump' configuration option), if either is allowed by the sshd
  config.

  But if the remote machine's PAM configuration includes a "pam_shells"
  check, then no SSH action will be allowed if the user has an invalid
  shell, as PAM authentication and authorisation will fail.

  You still need a valid authorised account regardless.

-------------------------------------------------------------------------------
Test command

My wrappers can be very complicated, involving FQDN lookups for the remote
account and the username of that account, host aliases, ssh double jumps
(through a jumpbox that does not allow SSH config files), and the login
initialisation scripts of the remote systems.

As such I need to test that a remote command works exactly as it would for
a normal ssh command, especially if I want to run 'rsync' or 'unison' file
transfer commands through the wrapper.

I found this command useful in testing, to ensure things are right in ssh
command wrappers.

  your_ssh_cmd host 'id;pwd;printf "[-%s-]\n" "hello'\''\$t\"";hostname'

Example Correct Results:

  ssh remote 'id;pwd;printf "[-%s-]\n" "hello'\''\$t\"";hostname'
  uid=601(remote) gid=100(users) groups=100(users),10(www)
  /home/remote
  [-hello'$t"-]
  remote.example.com

The "hello" line specifically should be exactly as above, quotes and all,
both for normal ssh and for your wrapper.

-------------------------------------------------------------------------------
SSH can directly run commands on a remote machine...

  ssh host hostname
  # => hostname of the remote machine

Or even more than one command...

  ssh host 'hostname; pwd; env'
  # => directory and environment on the remote host

What is returned depends on the setup of the remote machine, including the
login shell of the account, and any 'non-interactive' configuration of that
shell on the remote machine.

This configuration can (though generally does not) change the current
working directory, and even the "$HOME" environment variable (usually one
and the same).  It can also change the "$PATH" used to locate other
commands to run on the remote server, or even set up shell function and
alias alternatives for commands.

Most non-interactive setups of a login shell will set the PATH, and many
other environment variables, appropriately for that server and the software
installed on it.  That is why the above set of commands can be very useful.

However nothing is guaranteed to run as you would expect on the remote
machine.  In all likelihood the command will be run in some standard user
shell, and all login shells I know of will at least run the above set of
commands just as you would expect.
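A quick way to check what such a non-interactive command actually sees on a
particular server is simply to ask the remote shell.  A minimal sketch (the
output shown is only illustrative, it will differ from server to server):

  ssh host 'echo "shell=$0  home=$HOME"; echo "path=$PATH"; pwd'
  # => shell=bash  home=/home/user
  #    path=/usr/local/bin:/usr/bin:/bin
  #    /home/user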
-------------------------------------------------------------------------------
The Login Shell

The command given will always be passed to the 'LOGIN_SHELL' of the account
on the remote host.  This includes when ssh is starting "scp" or "sftp" in
daemon mode for those special "sub-system" protocols.  It also happens when
users use other similar SSH-based commands like "rsync" or "unison", or run
a special command to DIY their own "proxy tunneling", such as "netcat" or
"nc", rather than use the SSH built-in version.

As the LOGIN_SHELL is practically always run, it will also run any
configuration profiles or scripts that it is supposed to.  Bash for example
will ALWAYS execute the user's ".bashrc", regardless of whether a login
shell or a command was requested.  If that file contains an "exit" it will
be next to impossible for that user to login without admin help!

Special notes...

Built-in Proxy Tunnel

The ONLY time I have found where an SSH server will not run the login
shell, at all, is when the user requests the SSH built-in "proxy tunnel"
using the '-W' option.  This does not run any shell or other configuration,
it just sets up the tunnel using in-built handling, and it can be disabled
in a server's "sshd_config" using the option "AllowTcpForwarding no".

TCP forwarding should be turned off on any host you do not want users to
use as an ssh "jumpbox", though unless you also control their login shell,
that will not prevent them using their own DIY proxy commands.

User RC

The only other thing to note is the "PermitUserRC" option of sshd (default
"yes"), provided the old "UseLogin" option is not set (and some authorized
key options do not disable it).  This gets sshd to execute a "~/.ssh/rc"
script (if present) before the user's shell is run.  However it too is
invoked using the user's login shell.

ASIDE: This RC program (or script) will not prevent the login shell from
running afterward, once it finishes.  It also cannot change the environment
of the later login shell!  Though it could modify the login shell's
configuration files.

-------------------------------------------------------------------------------
How remote commands are run on the remote system...

The 'LOGIN_SHELL' could be anything.  It does not need to be an actual
shell.  It can be a shell script that only allows a limited set of remote
commands to be run.

The SSH daemon on the remote server calls the shell as...

  LOGIN_SHELL -c 'remote command string'

It is up to that 'LOGIN_SHELL' to interpret the given 'remote command'
string argument, and decide what it will do with it.  If the login shell
does not understand the '-c' option, then no remote commands will be able
to run, and you just get whatever error that program produces.  That is,
only 'interactive login' style execution can happen.

ALL the arguments making up the remote command are passed as a single
white space separated string, regardless of how they are given to the
original ssh command.  So with multiple ssh command arguments such as...

  ssh host 'hostname;' 'pwd;' 'env'

The remote 'LOGIN_SHELL' would still see the single string...

  LOGIN_SHELL -c 'hostname; pwd; env'

Remember the 'remote command' is given to the login shell as a single
string, with the individual separate arguments joined together, and it is
completely up to the login shell to interpret that string, or produce an
error.
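One way to see exactly what single string will be sent is to ask the
OpenSSH client itself, using its verbose debug output.  A quick sketch
(the exact debug text may vary a little between OpenSSH versions):

  ssh -v host 'hostname;' 'pwd;' 'env' 2>&1 | grep 'Sending command'
  # => debug1: Sending command: hostname; pwd; env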
It is expected that there will be no output by the shell before the given
commands are run.  That includes MOTD, or error messages.  This is
especially important for the special file transfer commands....

---

SSH sub-system and file transfer commands...

If the user is using "scp", the remote command string will be either...

  'scp -f {file_to_get}'    or    'scp -t {file_to_put}'

For interactive "sftp", the sftp server is given a command such as...

  '/usr/libexec/openssh/sftp-server'

The actual command path is specified in the "sshd_config" configuration
file, in the "Subsystem sftp" option.

And for "rsync", which "ssh" has no built-in knowledge of, the string is...

  'rsync --server {arguments_for_transfer...}'

The LOGIN_SHELL can either permit these to run, or not, or even modify what
those commands will do!

---

Authorised Key Commands.

The "authorized_keys" file on the remote destination host can specify a
'command="command"' to be executed when a particular key is used to
authorise a login.  That is, for the key identifying the account you are
logging in from, or for a specially provided public key.  The destination
host's SSH server may also have a 'ForceCommand' globally set.

This "command" is given the 'remote command' (as the
"$SSH_ORIGINAL_COMMAND" environment variable), and it is then up to that
program to perform the appropriate actions.  Again the LOGIN_SHELL will be
used to run these commands, so control the login shell and you control
everything.
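As a concrete illustration, here is a minimal sketch of such a forced
command, inspecting "$SSH_ORIGINAL_COMMAND" and only permitting a few
specific actions.  The script name and the permitted commands are only
examples, set on a key in "authorized_keys" as
command="/usr/local/bin/ssh_allowed".

  =======8<--------
  #!/bin/bash
  # ssh_allowed -- example forced command for a restricted key
  case "$SSH_ORIGINAL_COMMAND" in
    '')                  exec /bin/bash -l ;;   # no command given: normal login
    uptime|hostname)     exec "$SSH_ORIGINAL_COMMAND" ;;   # exact, known commands
    'rsync --server '*)  exec /bin/sh -c "$SSH_ORIGINAL_COMMAND" ;;  # rsync transfers
    *)  echo "Remote command not permitted" >&2
        exit 1 ;;
  esac
  =======8<--------

Remember this script will itself be run by the account's login shell, as
described above.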
-------------------------------------------------------------------------------
Avoiding an 'Unknown Login Shell'

To avoid the effects of an unknown login shell on the remote server, you
may like to run your own shell.  However commands are still given to the
remote LOGIN_SHELL to be executed!  So while you can run your own shell, it
will be up to the LOGIN_SHELL to allow and run it.

  ssh host '/bin/bash -c '\''hostname; pwd; env'\'

If the 'LOGIN_SHELL' understands the above in the usual way, it will then
run "bash" with the quoted arguments given.

It may be better to also use 'exec', so the login shell is replaced by your
own shell, if the LOGIN_SHELL allows such usage.

  ssh host 'exec /bin/bash -c '\''hostname; pwd; env'\'

Note however that this does mean any final cleanups the remote LOGIN_SHELL
may normally perform on exit (logout) will NOT be performed, as "exec" (in
general) replaces that shell.  But then, such special cleanup code for a
non-interactive remote command is uncommon.

Also this adds another layer of re-quoting!

-------------------------------------------------------------------------------
Keeping arguments separated, and quoted!

The other problem with running remote commands is the merging of separate
arguments, as ssh only provides the command to the shell as a single
string.

Running this works fine locally...

  printf "[-%s-]\n" "hello'\$t\""
  # => [-hello'$t"-]

NOTE: there is more than one argument, and special shell command quoting is
involved in the above test.

But running it as a remote command is a syntax error...

  ssh host printf "[-%s-]\n" "hello'\$t\""
  # => bash ... unexpected EOF while looking for matching `''

Again, extra quoting is needed: once for the local shell, and again for the
remote login shell...

---

This is the right way to quote a command (into a single string argument) so
it is run as expected on the remote host...

  ssh host 'printf "[-%s-]\n" "hello'\''\$t\""'
  # => [-hello'$t"-]

An alternative is to use printf to add the appropriate quoting around each
of the arguments, without you needing to figure it out...

  ssh host $( printf "%q " printf "[-%s-]\n" "hello'\$t\"" )
  # => [-hello'$t"-]

Also see "running a script" below for another method avoiding quote
problems in arguments.

In a script this can be made easier using Bash Quoting of the "$@"
arguments.

  =======8<--------
  #!/bin/bash
  ssh host "${@@Q}"
  =======8<--------

However do not use this as a replacement SSH command, as the quoting
requirement of ssh is well known by programs that use SSH, and it may cause
commands to become over-quoted.

See below for a better method of running a 'random script' of commands on a
remote system.

-------------------------------------------------------------------------------
Double SSH/SU Quoting...

There is an uncommon situation where you want to "ssh" to a remote host,
and then "ssh" again to a second machine (or use "sudo") with the command.
That is, the command is passed not only through "ssh" but is then given as
an argument to another command.  For example

  ssh host1 ssh host2 command
or
  ssh host1 sudo command

This will require you to not only quote the "command" for the first "ssh",
but quote it a second time for the second remote "ssh" or "sudo".

The problem is quoting things twice gets very messy very quickly!

  printf "[-%s-]\n" "hello'\$t\""
  # => [-hello'$t"-]

  ssh host1 'echo "[-hello'\''\$t\"-]"'
  # => [-hello'$t"-]

  ssh host1 ssh host2 \''echo "[-hello'\'\\\'\''\$t\"-]"'\'
  # => [-hello'$t"-]

A situation often called 'backslash hell'.

Here is one solution, using BOTH methods that BASH provides to perform that
extra quoting...

  # Put command into a command array - no special quoting required!
  command=( printf "[-%s-]\n" "hello'\$t\"" )

  ${command[@]}       # execute command locally
  # => [-hello'$t"-]

  ssh host1 "${command[@]@Q}"                  # using variable quoting
  # => [-hello'$t"-]

  ssh host1 $(printf "%q " "${command[@]}" )   # using bash printf to quote
  # => [-hello'$t"-]

  # now combine the two for the double ssh jump!
  ssh host1 ssh host2 $(printf "%q " "${command[@]@Q}" )
  # => [-hello'$t"-]

In an ssh double jump script...

  =======8<--------
  #!/bin/bash
  # ssh_double_jump host1 host2 command...
  host1=$1
  host2=$2
  shift 2
  ssh "$host1" ssh "$host2" $(printf "%q " "${@@Q}" )
  =======8<--------

Remember that second "ssh" could also be a "su" or a "sudo" command.

-------------------------------------------------------------------------------
SSH Input/Output...

The remote command can receive the standard input given to the SSH command,
so you can, for example, send commands to a 'shell' you run on the remote
system.

  echo 'hostname; pwd; env' | ssh host /bin/bash

The normal output (STDOUT) of the commands will be returned.  Even better,
the error channel (STDERR) will be returned separately!  And the status
from the final command will also be returned.

You can see this if you use my "cmdout" script to mark what a command
returns.
  https://antofthy.gitlab.io/software/#cmdout

  cmdout ssh host 'echo >&2 "Error"; echo "Output"; exit 42'
  # => results from 'cmdout' script...
  # CMD: 'ssh' 'host' 'echo >&2 "Error"; echo "Output"; exit 42'
  # OUT: Output
  # ERR: Error
  # STAT:42

WARNING: Be wary of SSH errors and status errors that may come out of order
due to network failures!  In some cases SSH may even hang for 5 to 10
minutes due to DNS lookup failures!

  cmdout ssh unknown 'echo "it worked"'
  # => results from 'cmdout' script...
  # CMD: 'ssh' 'unknown' 'echo "it worked"'
  # ERR: ssh: Could not resolve hostname unknown: Name or service not known
  # STAT:255
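Because the exit status is passed back like this, a script can test a
remote command just like a local one.  A minimal sketch (the directory
tested is only an example; 255 is the status ssh itself uses for its own
errors):

  ssh host 'test -d /var/backups'
  status=$?
  if [ $status -eq 255 ]; then
    echo "ssh itself failed (connection, DNS, authentication...)"
  elif [ $status -ne 0 ]; then
    echo "remote test failed, status $status"
  else
    echo "remote directory exists"
  fi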
However the remote commands will not have a TTY available.

  ssh host 'tty'
  # => not a tty

Which means you cannot run commands that require passwords, or an editor,
unless you tell ssh to create a pseudo-TTY for the commands to run in,
using the "-t" option.

  cmdout ssh -t host 'tty'
  # =>
  # CMD: 'ssh' '-t' 'host' 'tty'
  # OUT: /dev/pts/0
  # ERR: Connection to host.example.com closed.
  # STAT:0

Note the extra output on the error channel (unless you also give ssh a
"-q" option).  But it does now have a tty, which means you can now use an
editor directly on a remote host!

  ssh -t host 'vim some_file'

HOWEVER the pseudo-tty means STDOUT and STDERR will now become merged into
a single IO stream...

  cmdout ssh -t host 'echo >&2 "Error"; echo "Output"; exit 42'
  # CMD: 'ssh' '-t' 'host' 'echo >&2 "Error"; echo "Output"; exit 42'
  # OUT: Output
  # OUT: Error
  # ERR: Connection to host.example.com closed.
  # STAT:42

As you see, 'Error' on the error channel is no longer distinct from the
standard output channel.  Also I have found the merged streams sometimes
result in the time order becoming 'mixed up'.  On the bright side, the
error channel will then ONLY consist of ssh and network errors.

-------------------------------------------------------------------------------
Setting up an interactive shell

I had one case where I wanted to change the 'HOME' of my login shell on a
remote server, before switching to an interactive shell.  This is what I
did...

  ssh host 'cd /home/fake 2>/dev/null && HOME=$(pwd) exec bash -l'

This will switch to a new home of "/home/fake" if it exists, but otherwise
leaves home as is.  This can also be useful if you need to switch user (via
su or sudo) on the remote server.

-------------------------------------------------------------------------------
Running a complex shell script on a remote server.

The simplest way to run a script on the remote server is to copy the script
over, run it, and optionally clean it up again afterwards.

  unique_id=$$
  rsync -aHvcx script "host:/tmp/script_$unique_id"
  ssh host "/tmp/script_$unique_id; rm /tmp/script_$unique_id"

Of course you would want to also include error checking, and the cleanup
code can also become tricky.  However the above means two separate network
connections, one to transfer the script, and another to execute it.  This
can take a lot more time, and double the amount of password typing, if that
is needed for authentication.

Another way is to pipe the shell script directly into the remote shell.

  cat script | ssh host 'exec /bin/bash'

However there is a possibility that the script hits a special SSH escape
character (normally '~' at the start of a line).  This can happen in a bash
script that is running a command in the user's remote home directory.  The
escape character can be disabled for the connection...

  cat script | ssh -e none host 'exec /bin/bash'

Also piping it means you lose STDIN, and the possibility of using a TTY for
interactive commands.

-------------------------------------------------------------------------------
Better Remote Scripting...

An alternative way to send the script and avoid quoting problems is to
'encode' the script.  This relies on the fact that command lines can now be
huge (megabytes long) in modern shells, without any problem.

Basic example...

  ssh host "bash <(base64 --decode <<<'$(base64 < script_file)' )"

The "$(...)" first base64 encodes the script into a string, before the ssh
is even run.  On the remote side this string is given to "base64 --decode"
as a bash '<<<' here-string.  The decoded script is then given to bash as a
'piped filename' using '<(...)'.  That is, everything is performed in
sequence, right to left.

You can even pre-prepare the base64 encoded strings as a separate step.
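For example the encoding can be done once, saved in a variable (or a file),
and re-used for any number of connections.  A small sketch, assuming the
same "script_file" as above (base64 output contains no single quotes, so it
is safe to embed inside the remote single quotes):

  enc=$(base64 < script_file)     # encode once, locally
  ssh host1 "bash <(base64 --decode <<<'$enc')"
  ssh host2 "bash <(base64 --decode <<<'$enc')"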
The "base64 --decode" will even accept two separately encoded base64 strings, that are simply concatenated together, allowing you to build a more complex sequence of commands, from multiple scripts. If you want an interactive bash shell after the script has set up the remote environment, you can pass the 'decode' to "bash" as a "--rcfile" profile... ssh -t host "/bin/bash --rcfile <(base64 --decode <<<'$(base64 < script)' )" It is up to you if you want to prepend 'exec' to the above command. --- An alternative is to use "sshrc" program, which essentially performs the above, as well as transfers/updates configuration files when you login to a remote server. sshrc https://github.com/IngoHeimbach/sshrc/ This program takes a script saved locally as ".sshrc", and transfers and runs it on the remote server. It also copies all the files found in ".sshrc.d" before starting the interactive shell on the remote server. All in a single ssh connection. -------------------------------------------------------------------------------