condor_ssh_to_job [-debug] [-name schedd-name] [-pool pool-name] [-ssh ssh-command] [-keygen-options ssh-keygen-options] [-shells shell1,shell2,...] [-auto-retry] cluster | cluster.process | cluster.process.node [remote-command]
condor_ssh_to_job creates an ssh session to a running job. The job is specified with the cluster, cluster.process, or cluster.process.node argument. If only the job cluster id is given, then the job process id defaults to the value 0.
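For example, the following two invocations refer to the same job; the job id shown is illustrative:

% condor_ssh_to_job 32
% condor_ssh_to_job 32.0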
It is available in Unix Condor distributions, and it works for vanilla, java, local, and parallel universe jobs. The user must be the owner of the job or must be a queue super user, and both the condor_schedd and condor_starter daemons must allow condor_ssh_to_job access. If no remote-command is specified, an interactive shell is created. An alternate ssh program such as sftp may be specified with the -ssh option, for example to upload and download files.
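As an illustration, a single remote command may also be run in the job's environment without starting an interactive shell; the job id and command here are placeholders:

% condor_ssh_to_job 32.0 'ls -l'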
The remote command or shell runs with the same user id as the running job, and it is initialized with the same working directory. The environment is initialized to be the same as that of the job, plus any changes made by the shell setup scripts and any environment variables passed by the ssh client. In addition, the environment variable _CONDOR_JOB_PIDS is defined. It is a space-separated list of PIDs associated with the job. At a minimum, the list will contain the PID of the process started when the job was launched, and it will be the first item in the list. It may contain additional PIDs of other processes that the job has created.
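For example, the list can be inspected from within an interactive session; the host name and process ids shown are illustrative:

% condor_ssh_to_job 32.0
Welcome to slot2@tonic.cs.wisc.edu!
Your condor job is running with pid(s) 65881.
% echo $_CONDOR_JOB_PIDS
65881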
The ssh session and all processes it creates are treated by Condor as though they are processes belonging to the job. If the slot is preempted or suspended, the ssh session is killed or suspended along with the job. If the job exits before the ssh session finishes, the slot remains in the Claimed Busy state and is treated as though not all job processes have exited until all ssh sessions are closed. Multiple ssh sessions may be created to the same job at the same time. Resource consumption of the sshd process and all processes spawned by it is monitored by the condor_starter as though these processes belong to the job, so any policies such as PREEMPT that enforce a limit on resource consumption also take into account resources consumed by the ssh session.
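As a hedged illustration only, not taken from this manual page, a site policy expression along the following lines would therefore count resources consumed by an ssh session toward the job's total:

# Illustrative startd policy sketch; the attribute and threshold are assumptions,
# and real preemption policies are site-specific.
PREEMPT = ($(PREEMPT)) || (TARGET.ImageSize > 3000000)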
condor_ssh_to_job stores ssh keys in temporary files within a newly created and uniquely named directory. The newly created directory will be within the directory defined by the environment variable TMPDIR. When the ssh session is finished, this directory and the ssh keys contained within it are removed.
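For example, the temporary key files can be placed elsewhere by setting TMPDIR when invoking the tool; the directory shown is a placeholder:

% TMPDIR=/scratch/$USER condor_ssh_to_job 32.0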
See section 3.3.34 for details of the configuration variables related to condor_ssh_to_job.
An ssh session works by first authenticating and authorizing a secure connection between condor_ssh_to_job and the condor_starter daemon, using Condor protocols. The condor_starter generates an ssh key pair and sends it securely to condor_ssh_to_job. Then the condor_starter spawns sshd in inetd mode with its stdin and stdout attached to the TCP connection from condor_ssh_to_job. condor_ssh_to_job acts as a proxy for the ssh client to communicate with sshd, using the existing connection authorized by Condor. At no point is sshd listening on the network for connections or running with any privileges other than those of the user identity running the job. If CCB is being used to enable connectivity to the execute node from outside of a firewall or private network, condor_ssh_to_job is able to make use of CCB in order to form the ssh connection.
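The Condor-side connection setup can be observed by adding the -debug option, which causes debugging output to be printed; the job id is illustrative:

% condor_ssh_to_job -debug 32.0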
The login shell of the user id running the job is used to run the requested command, sshd subsystem, or interactive shell. This is hard-coded behavior in OpenSSH and cannot be overridden by configuration. This means that condor_ssh_to_job access is effectively disabled if the login shell is a program that disallows interactive use, such as /bin/true or /sbin/nologin.
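One way to check the login shell of a job's owner is shown below; the user name is a placeholder:

% getent passwd jobowner | cut -d: -f7
/sbin/nologin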
condor_ssh_to_job is intended to work with OpenSSH as installed in typical environments. It does not work on Windows platforms. If the ssh programs are installed in non-standard locations, then the paths to these programs will need to be customized within the Condor configuration. Versions of ssh other than OpenSSH may work, but they will likely require additional configuration of command-line arguments, changes to the sshd configuration template file, and possibly modification of the $(LIBEXEC)/condor_ssh_to_job_sshd_setup script used by the condor_starter to set up sshd.
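A hedged sketch of such a customization is shown below; the paths are placeholders, and the exact variable names and full default values (which include additional arguments) are documented in section 3.3.34:

# Illustrative sketch only: point Condor at a non-standard OpenSSH installation.
SSH_TO_JOB_SSH_CMD        = /opt/openssh/bin/ssh
SSH_TO_JOB_SSHD_CMD       = /opt/openssh/sbin/sshd
SSH_TO_JOB_SSH_KEYGEN_CMD = /opt/openssh/bin/ssh-keygen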
% condor_ssh_to_job 32.0
Welcome to slot2@tonic.cs.wisc.edu!
Your condor job is running with pid(s) 65881.
% gdb -p 65881
(gdb) where
...
% logout
Connection to condor-job.tonic.cs.wisc.edu closed.
To upload or download files interactively with sftp:
% condor_ssh_to_job -ssh sftp 32.0
Connecting to condor-job.tonic.cs.wisc.edu...
sftp> ls
...
sftp> get outputfile.dat
This example shows downloading a file from the job with scp. The string "remote" is used in place of a host name. It is not necessary to insert the correct remote host name, or even a valid one, because the connection to the job is created automatically, so any placeholder string works here.
% condor_ssh_to_job -ssh scp 32 remote:outputfile.dat .
This example uses condor_ssh_to_job to run rsync, synchronizing a local file with a remote file in the job's working directory. Job id 32.0 is used in place of a host name, which causes rsync to insert the expected job id in the arguments to condor_ssh_to_job.
% rsync -v -e "condor_ssh_to_job" 32.0:outputfile.dat .
condor_ssh_to_job will exit with a non-zero status value if it fails to set up an ssh session. If it succeeds, it will exit with the status value of the remote command or shell.
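For example, the exit code of the remote command is passed through; the job id is illustrative:

% condor_ssh_to_job 32.0 /bin/false
% echo $?
1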
See the Condor Version 7.7.5 Manual or http://www.condorproject.org/license for additional notices.