If you manually shell into a remote supercomputer to deploy into production, you are doing it wrong.
Tools exist to help you manage deployment.
"Systems administration" should be treated with the same respect as development.
This is systems programming (also known as DevOps).
cat ~/devel/myproject/fabfile.py
fab deploy
from fabric.api import env, task, cd, run

env.hosts = ['gauss.chem.ucl.ac.uk']
env.repo = 'ssh://hg@myserver/myrepo'
env.remote_path = '/var/www/www.myapp.com/'

@task
def deploy():
    # Clone the repository into the deployment directory on the remote host
    with cd(env.remote_path):
        run('hg clone {repo}'.format(**env))
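Because the run() calls execute over SSH on the remote host, the same fab deploy works from any machine that can reach the server. As a small aside (assuming Fabric 1.x and the host from the example above), the target can also be chosen on the command line:
fab -H gauss.chem.ucl.ac.uk deploy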
@task
def configure(*configurations, **extras):
    """CMake configure step for HemeLB and dependencies."""
    configure_cmake(configurations, extras)
    with cd(env.build_path):
        with prefix(env.build_prefix):
            run(template("rm -f $build_path/CMakeCache.txt"))
            run(template("cmake $repository_path $cmake_flags"))
cd devel/projects/hemelb
fab hector cold
fab tianhe send_geometry:cylinder
fab archer hemelb:cylinder
fab dirac wait_on_run
fab legion steer
fab stampede fetch_results
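Each machine name (hector, archer, legion, and so on) is itself a Fabric task that loads that machine's block from machines.yml into env before the real task runs. The project's actual implementation is not shown here; a minimal sketch of the idea, with illustrative names, might be:
import yaml
from fabric.api import env, task

@task
def hector():
    # Illustrative sketch: load this machine's settings and point
    # Fabric at its login node before later tasks run.
    with open('deploy/machines.yml') as f:
        env.update(yaml.safe_load(f)['hector'])
    env.hosts = ['{username}@{remote}'.format(**env)]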
cat ~/devel/hemelb/deploy/machines.yml
hector:
  remote: "login.hector.ac.uk"
  username: jamespjh
  job_dispatch: "qsub"
  run_command: "aprun -n $cores -N $coresusedpernode"
  batch_header: pbs
  max_job_name_chars: 15
  make_jobs: 4
legion:
  ...
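The $cores and $coresusedpernode placeholders in run_command are filled in with job-specific values when the job script is generated. Purely as an illustration (the numbers and the substitution call are assumptions, not the project's code):
from string import Template

run_command = Template("aprun -n $cores -N $coresusedpernode")
print(run_command.substitute(cores=96, coresusedpernode=24))
# aprun -n 96 -N 24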
We use it to: