Commit ab86bb73 authored by Sylvester Joosten

Merge branch '63-tweak-singularity-container-deploy' into 'master'

Resolve "Tweak singularity container deploy"

Closes #63

See merge request !65
parents 8901e1de 7061fc3a
......@@ -24,9 +24,6 @@ variables:
DOCKER_NTRIES: 5
DOCKER_WAIT_TIME: 5
## By default this is not a nightly build, unless the CI says so
NIGHTLY: 0
stages:
- config
- build:base ## base OS image
......@@ -48,8 +45,8 @@ default:
## only run CI in the following cases:
## master, stable branch, release tag, MR event and nightly builds
## note that nightly builds run from the master branch, but with "NIGHTLY" set to
## 1, which triggers a slightly different workflow
## nightly builds are now part of the regular master build in order to keep
## all artifacts available at all times.
workflow:
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
......@@ -60,17 +57,17 @@ workflow:
## plan:
## Workflows:
## - master --> config + all build stages + singularity
## - <nightly> --> config + build:release only + singularity
# + nightly build:release + nightly singularity
## - v3.0-stable --> config + all build stages + singularity
## - v3.0.0 --> config + all build stages + singularity
## - MR --> config + all build stages
##
## Container images tags
## - master --> testing
## - <nightly> --> nightly
## - <nightly> --> nightly (run as part of master)
## - v3.0-stable --> 3.0-stable
## - v3.0.0 --> 3.0-stable, 3.0.0
## - MR --> unstable (on all registries)
## - MR --> 3.0-unstable (on all registries)
## --> unstable-mr-XXX (on eicweb only, untag at end of pipeline)
## - all other --> do nothing
##
......@@ -85,7 +82,8 @@ version:
VERSION=`head -n1 VERSION`
STABLE=${VERSION%.*}-stable
TESTING="testing"
UNSTABLE="unstable"
NIGHTLY="nightly"
UNSTABLE=${VERSION%.*}-unstable
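## illustration (hypothetical version number): with VERSION=3.0.1,
## ${VERSION%.*} strips the trailing ".1", so the expansions above
## yield STABLE=3.0-stable and UNSTABLE=3.0-unstable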
## determine appropriate major docker tag for this scenario
- |
## internal tag used for the CI. Also temporarily tagged
......@@ -94,27 +92,28 @@ version:
## main export tag, optional secondary export tag,
EXPORT_TAG=${TESTING}
EXPORT_TAG2=
## nightly tag, only used in master
NIGHTLY_TAG=${NIGHTLY}
if [ "x${CI_PIPELINE_SOURCE}" == "xmerge_request_event" ]; then
INTERNAL_TAG="unstable-mr-${CI_MERGE_REQUEST_ID}"
NIGHTLY_TAG=
EXPORT_TAG=$UNSTABLE
EXPORT_TAG2=
elif [ "$CI_COMMIT_TAG" = "v${VERSION}" ]; then
INTERNAL_TAG="stable-br-${VERSION}"
NIGHTLY_TAG=
EXPORT_TAG=${STABLE}
EXPORT_TAG2=${VERSION}
elif [ "$CI_COMMIT_BRANCH" == "v${STABLE}" ]; then
INTERNAL_TAG="stable-tag-${VERSION}"
NIGHTLY_TAG=
EXPORT_TAG=${STABLE}
EXPORT_TAG2=
elif [ "$NIGHTLY" != "0" ]; then
INTERNAL_TAG="nightly-${VERSION}"
EXPORT_TAG="nightly"
EXPORT_TAG2=
fi
echo "INTERNAL_TAG=$INTERNAL_TAG" >> build.env
echo "NIGHTLY_TAG=$NIGHTLY_TAG" >> build.env
echo "EXPORT_TAG=$EXPORT_TAG" >> build.env
echo "EXPORT_TAG2=$EXPORT_TAG2" >> build.env
echo "PIPELINE_TMP_TAG=$PIPELINE_TMP_TAG" >> build.env
cat build.env
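## illustrative build.env for a plain master pipeline, where none of
## the conditions above fire (the INTERNAL_TAG default is set in a
## part of this job not shown in the diff):
##   NIGHTLY_TAG=nightly
##   EXPORT_TAG=testing
##   EXPORT_TAG2=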
artifacts:
......@@ -127,8 +126,6 @@ version:
## note that the nightly builds use a different pipeline
.build:
rules:
- if: '$NIGHTLY != "0"'
when: never
- when: on_success
## cookie-cutter docker push code, to be included at the
## end of the regular job scripts
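The `gitlab-ci/docker_push.sh` helper itself is not part of this diff. As a minimal sketch under stated assumptions, a push-with-retry wrapper honoring `DOCKER_NTRIES` and `DOCKER_WAIT_TIME` could look as follows; the flag names mirror the invocation in the `jug_xl:nightly` job below, and everything else is hypothetical:
```bash
#!/bin/bash
## Hypothetical sketch only -- the real gitlab-ci/docker_push.sh may differ.
## -i image name, -l extra label to push, -n number of tries, -t wait time
while getopts "i:l:n:t:" opt; do
  case $opt in
    i) IMAGE=$OPTARG ;;
    l) LABEL=$OPTARG ;;
    n) NTRIES=$OPTARG ;;
    t) WAIT=$OPTARG ;;
  esac
done
shift $((OPTIND - 1))
## push every requested tag, retrying each with a fixed wait in between
for tag in "$@" $LABEL; do
  for i in $(seq 1 "${NTRIES:-5}"); do
    if docker push "${IMAGE}:${tag}"; then
      break
    fi
    echo "push of ${IMAGE}:${tag} failed (attempt $i), retrying in ${WAIT:-5}s"
    sleep "${WAIT:-5}"
  done
done
```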
......@@ -207,24 +204,25 @@ jug_xl:nightly:
extends: .build
stage: build:release
rules:
- if: '$NIGHTLY != "0"'
when: always
- if: '$CI_COMMIT_BRANCH == "master"'
when: on_success
- when: never
needs:
- version
- jug_dev:default
variables:
BUILD_IMAGE: "jug_xl"
script:
- docker build -t ${CI_REGISTRY_IMAGE}/${BUILD_IMAGE}:${INTERNAL_TAG}
- docker build -t ${CI_REGISTRY_IMAGE}/${BUILD_IMAGE}:${NIGHTLY_TAG}
-f containers/jug/Dockerfile.xl
--build-arg INTERNAL_TAG="testing"
--build-arg INTERNAL_TAG=${INTERNAL_TAG}
containers/jug
- !reference [.build, script]
- ./gitlab-ci/docker_push.sh -i ${BUILD_IMAGE} -l ${NIGHTLY_TAG}
-n $DOCKER_NTRIES -t $DOCKER_WAIT_TIME
${NIGHTLY_TAG}
.singularity:
rules:
- if: '$NIGHTLY != "0"'
when: never
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
when: never
- when: on_success
......@@ -258,14 +256,15 @@ jug_xl:singularity:nightly:
extends: .singularity
stage: deploy
rules:
- if: '$NIGHTLY != "0"'
when: always
- if: '$CI_COMMIT_BRANCH == "master"'
when: on_success
- when: never
needs:
- version
- jug_xl:nightly
variables:
BUILD_IMAGE: "jug_xl"
INTERNAL_TAG: ${NIGHTLY_TAG}
cleanup:
stage: finalize
......
......@@ -25,90 +25,18 @@ eic-shell
4. Within your development environment (`eic-shell`), you can install software to the
internal `$ATHENA_PREFIX`.
Installation
------------
1. Clone the repository and go into the directory
```bash
git clone https://eicweb.phy.anl.gov/containers/eic_container.git
cd eic_container
```
2. Run the install script `install.py` to install to your `<PREFIX>` of choice
(e.g. $HOME/local/opt/eic_container_1.0.4). By default the
modulefile will be installed to `$PREFIX/../../etc/modulefiles`.
You can use the `-v` flag to select the version you want to install, or omit the
flag if you want to install the master build. The recommended stable
release version is `v3.0.1`.
```bash
./install.py -v 3.0.1 <PREFIX>
```
Available flags (a combined example follows this list):
```bash
-c CONTAINER, --container CONTAINER
(opt.) Container to install. D: jug_xl (also available: jug_dev, and legacy eic container).
-v VERSION, --version VERSION
(opt.) project version. D: 3.0.1. For MRs, use mr-XXX.
-f, --force Force-overwrite already downloaded container
-b BIND_PATHS, --bind-path BIND_PATHS
(opt.) extra bind paths for singularity.
-m MODULE_PATH, --module-path MODULE_PATH
(opt.) Root module path to install a modulefile. D: Do not install a
modulefile
```
3. To use the container in installed mode, you can load the modulefile,
and then use the included apps as if they are native apps on your system!
```bash
module load eic_container
```
4. To use the container in local mode, you can install the container without the `-m` flag,
and then use the runscripts (under `$PREFIX/bin`) manually.
```bash
./install.py $PREFIX -l
...
$PREFIX/bin/eic-shell
```
5. (Advanced) If you need to add additional bind directives for the internal singularity container,
you can add them with the `-b` flag. Run `./install.py -h` to see a list of all
supported options.
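As referenced in the flag list above, a combined invocation with hypothetical paths might look like:
```bash
## hypothetical prefix and module path; adjust for your site
./install.py -v 3.0.1 -c jug_xl -b /scratch \
             -m $HOME/local/etc/modulefiles \
             $HOME/local/opt/eic_container
```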
Usage
-----
### A. Running the singularity development environment with modulefiles
1. Add the installed modulefile to your module path, e.g.,
```bash
module use <prefix>/../../etc/modulefiles
```
2. Load the eic container
```bash
module load eic_container
```
3. To start a shell in the container environment, do
```bash
eic-shell
```
### B. Running the singularity development locally (without modulefiles)
1. This is assuming you installed with the `-l` flag to a prefix `$PREFIX`:
```bash
./install.py $PREFIX -l
```
2. To start a shell in the container environment, do
```bash
$PREFIX/bin/eic-shell
```
### C. Using the docker container for your CI purposes
Using the docker container for your CI purposes
-----------------------------------------------
The docker containers are publicly accessible from
[Dockerhub](https://hub.docker.com/u/eicweb). You probably want to use the default
`jug_xl` container. Relevant versions are:
- `eicweb/jug_xl:nightly`: nightly release, with latest detector and reconstruction
version. This is probably what you want to use unless you are dispatching a large
simulation/reconstruction job
- `eicweb/jug_xl:3.0-stable`: latest stable release, what you want to use for large
simulation jobs (for reproducibility). Please coordinate with the software group to
ensure all desired software changes are present in this container.
1. To load the container environment in your run scripts, you have to do nothing special.
The environment is already set up with good defaults, so you can use all the programs
......
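As a hedged illustration of the point above, a CI run script can use the image directly; the image name comes from this README, while the invoked command is a placeholder:
```bash
## pull the nightly image and run a tool inside it; the container
## environment is already configured, so no extra setup is required
docker pull eicweb/jug_xl:nightly
docker run --rm eicweb/jug_xl:nightly bash -c 'gaudirun.py --help'
```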
......@@ -134,23 +134,25 @@ RUN cd /opt/spack-environment \
/etc/profile.d/z10_spack_environment.sh \
&& cd /opt/spack-environment \
&& echo -n "" \
&& echo "Add extra environment variables for Podio and Gaudi" \
&& echo "Add extra environment variables for Jug, Podio and Gaudi" \
&& spack env activate . \
&& echo "export JUG_DEV_VERSION=${INTERNAL_TAG}-$(date +%Y-%m-%d)" \
>> /etc/profile.d/z11_jug_env.sh \
&& export PODIO=`spack find -p podio \
| grep software \
| awk '{print $2}'` \
&& echo "export PODIO=${PODIO};" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& echo "export BINARY_TAG=x86_64-linux-gcc9-opt" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& echo "if [ ! -z \${ATHENA_PREFIX} ]; then" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& echo "export LD_LIBRARY_PATH=\$ATHENA_PREFIX/lib:\$LD_LIBRARY_PATH" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& echo "export PATH=\$ATHENA_PREFIX/bin:\$PATH" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& echo "fi" \
>> /etc/profile.d/z10_spack_environment.sh \
>> /etc/profile.d/z11_jug_env.sh \
&& cd /opt/spack-environment && spack env activate . \
&& echo -n "" \
&& echo "Installing additional python packages" \
......@@ -204,6 +206,9 @@ RUN --mount=from=staging,target=/staging \
&& cp -r /staging/usr/local /usr/local \
&& cp /staging/etc/profile.d/z10_spack_environment.sh /etc/eic-env.sh \
&& sed -i '/MANPATH/ s/;$/:;/' /etc/eic-env.sh \
&& cp /staging/etc/profile.d/z11_jug_env.sh \
/etc/profile.d/z11_jug_env.sh \
&& cat /etc/profile.d/z11_jug_env.sh >> /etc/eic-env.sh \
&& cp /etc/eic-env.sh /etc/profile.d/z10_eic-env.sh
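## for illustration (hypothetical values), the /etc/profile.d/z11_jug_env.sh
## copied above contains roughly:
##   export JUG_DEV_VERSION=testing-2021-06-01
##   export PODIO=<path reported by spack find -p podio>
##   export BINARY_TAG=x86_64-linux-gcc9-opt
## plus the conditional ATHENA_PREFIX PATH/LD_LIBRARY_PATH block;
## the same content is appended to /etc/eic-env.sh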
## Bugfix to address issues loading the Qt5 libraries on Linux kernels prior to 3.15
......
#!/bin/bash
CONTAINER="jug_xl"
VERSION="3.0-stable"
VERSION="nightly"
PREFIX="$PWD"
function print_the_help {
echo "USAGE: ./install.sh [-p PREFIX] [-v VERSION]"
echo "OPTIONAL ARGUMENTS:"
echo " -p,--prefix Working directory to deploy the environment (D: $PREFIX)"
echo " -v,--version Version to install (D: $VERSION)"
echo " -h,--help Print this message"
echo ""
echo " Set up containerized development environment."
echo ""
echo "EXAMPLE: ./install.sh"
exit
}
while [ $# -gt 0 ]; do
key=$1
case $key in
-p|--prefix)
PREFIX=$2
shift
shift
;;
-v|--version)
VERSION=$2
shift
shift
;;
-h|--help)
print_the_help
exit 0
;;
*)
echo "ERROR: unknown argument: $key"
echo "use --help for more info"
exit 1
;;
esac
done
mkdir -p $PREFIX || exit 1
if [ ! -d $PREFIX ]; then
echo "ERROR: not a valid directory: $PREFIX"
echo "use --help for more info"
exit 1
fi
echo "Setting up development environment for eicweb/$CONTAINER:$VERSION"
## Simple setup script that installs the container
## in your local environment under $PWD/local/lib
## in your local environment under $PREFIX/local/lib
## and creates a simple top-level launcher script
## that launches the container for this working directory
## with the $ATHENA_PREFIX variable pointing
## to the $PWD/local directory
## to the $PREFIX/local directory
mkdir -p local/lib || exit 1
......@@ -53,11 +100,11 @@ if [ ${SINGULARITY_VERSION:0:1} = 2 ]; then
echo "We will attempt to use a fall-back SIMG image to be used with this singularity version"
if [ -f /gpfs02/eic/athena/jug_xl-3.0-stable.simg ]; then
ln -sf /gpfs02/eic/athena/jug_xl-3.0-stable.simg local/lib
SIF="$PWD/local/lib/jug_xl-3.0-stable.simg"
SIF="$PREFIX/local/lib/jug_xl-3.0-stable.simg"
else
echo "Attempting last-resort singularity pull for old image"
echo "This may take a few minutes..."
SIF="$PWD/local/lib/jug_xl-3.0-stable.simg"
SIF="$PREFIX/local/lib/jug_xl-3.0-stable.simg"
singularity pull --name "$SIF" docker://eicweb/$CONTAINER:$VERSION
fi
## we are in sane territory, yay!
......@@ -65,22 +112,22 @@ else
## check if we can just use cvmfs for the image
if [ -d /cvmfs/singularity.opensciencegrid.org/eicweb/jug_xl:${VERSION} ]; then
ln -sf /cvmfs/singularity.opensciencegrid.org/eicweb/jug_xl:${VERSION} local/lib
SIF="$PWD/local/lib/jug_xl:${VERSION}"
SIF="$PREFIX/local/lib/jug_xl:${VERSION}"
elif [ -f /gpfs02/cvmfst0/eic.opensciencegrid.org/singularity/athena/jug_xl_v3.0-stable.sif ]; then
ln -sf /gpfs02/cvmfst0/eic.opensciencegrid.org/singularity/athena/jug_xl_v3.0-stable.sif local/lib
SIF="$PWD/local/lib/jug_xl_v${VERSION}.sif"
SIF="$PREFIX/local/lib/jug_xl_v${VERSION}.sif"
## if not, download the container to the system
else
## get the python installer and run the old-style install
wget https://eicweb.phy.anl.gov/containers/eic_container/-/raw/master/install.py
chmod +x install.py
./install.py -c $CONTAINER -v $VERSION $PWD/local
./install.py -f -c $CONTAINER -v $VERSION $PREFIX/local
## Don't place eic-shell in local/bin as this may
## conflict with things we install inside the container
rm $PWD/local/bin/eic-shell
rm $PREFIX/local/bin/eic-shell
## Cleanup
rm -rf __pycache__ install.py
SIF=$PWD/local/lib/${CONTAINER}.sif.${VERSION}
SIF=$PREFIX/local/lib/${CONTAINER}.sif.${VERSION}
fi
fi
......@@ -90,6 +137,20 @@ else
echo " - Deployed ${CONTAINER} image: $SIF"
fi
## We want to make sure the root directory of the install directory
## is always bound. We also check for the existence of a few standard
## locations (/scratch /volatile /cache) and bind those too if found
echo " - Determining additional bind paths"
PREFIX_ROOT="/$(realpath $PREFIX | cut -d "/" -f2)"
BINDPATH=$PREFIX_ROOT
echo " --> $PREFIX_ROOT"
for dir in /work /scratch /volatile /cache; do
if [ -d $dir ]; then
echo " --> $dir"
BINDPATH="${BINDPATH},$dir"
fi
done
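## illustrative example: for PREFIX=/home/user/eic on a node that also
## provides /scratch and /cache, this loop yields
## BINDPATH=/home,/scratch,/cache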
## create a new top-level eic-shell launcher script
## that sets the ATHENA_PREFIX and then starts singularity
## need different script for old singularity versions
......@@ -97,14 +158,16 @@ if [ ${SINGULARITY_VERSION:0:1} != 2 ]; then
## newer singularity
cat << EOF > eic-shell
#!/bin/bash
export ATHENA_PREFIX=$PWD/local
export ATHENA_PREFIX=$PREFIX/local
export SINGULARITY_BINDPATH=$BINDPATH
$SINGULARITY run $SIF
EOF
else
## ancient singularity
cat << EOF > eic-shell
#!/bin/bash
export ATHENA_PREFIX=$PWD/local
export ATHENA_PREFIX=$PREFIX/local
export SINGULARITY_BINDPATH=$BINDPATH
$SINGULARITY exec $SIF eic-shell
EOF
fi
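## illustrative end-to-end usage: after running
##   ./install.sh -p $HOME/eic -v nightly
## the launcher is written to the working directory and the environment
## starts with ./eic-shell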
......
#!/bin/bash
CONTAINER="jug_dev"
VERSION="testing"
ODIR="$PWD"
function print_the_help {
echo "USAGE: ./install_dev.sh [-o DIR] [-v VERSION]"
echo "OPTIONAL ARGUMENTS:"
echo " -o,--outdir Directory to download the container to (D: $ODIR)"
echo " -v,--version Version to install (D: $VERSION)"
echo " -h,--help Print this message"
echo ""
echo " Download development container into an output directory"
echo ""
echo "EXAMPLE: ./install.sh"
exit
}
while [ $# -gt 0 ]; do
key=$1
case $key in
-o|--outdir)
ODIR=$2
shift
shift
;;
-v|--version)
VERSION=$2
shift
shift
;;
-h|--help)
print_the_help
exit 0
;;
*)
echo "ERROR: unknown argument: $key"
echo "use --help for more info"
exit 1
;;
esac
done
mkdir -p $ODIR || exit 1
if [ ! -d $ODIR ]; then
echo "ERROR: not a valid directory: $ODIR"
echo "use --help for more info"
exit 1
fi
echo "Deploying development container for eicweb/$CONTAINER:$VERSION to $ODIR"
## Simple download script that fetches the development container
## image and deploys it to the $ODIR output directory
mkdir -p local/lib || exit 1
## Always deploy the SIF image using the python installer,
## as this is for experts only anyway
SIF=
## work in temp directory
tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
pushd $tmp_dir
wget https://eicweb.phy.anl.gov/containers/eic_container/-/raw/master/install.py
chmod +x install.py
./install.py -f -c $CONTAINER -v $VERSION .
SIF=`ls lib/$CONTAINER.sif.* | head -n1`
## That's all
if [ -z "$SIF" ] || [ ! -f "$SIF" ]; then
  echo "ERROR: no singularity image found"
  exit 1
else
  echo "Container download successful"
fi
## move over the container to our output directory
mv $SIF $ODIR
## cleanup
popd
rm -rf $tmp_dir
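## illustrative usage: ./install_dev.sh -o /tmp/containers -v testing
## leaves an image named along the lines of jug_dev.sif.testing
## in /tmp/containers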