Commit ab86bb73 (containers/eic_container), authored 3 years ago by Sylvester Joosten

Merge branch '63-tweak-singularity-container-deploy' into 'master'

Resolve "Tweak singularity container deploy"

Closes #63

See merge request !65
Parents: 8901e1de 7061fc3a
1 merge request: !65 Resolve "Tweak singularity container deploy"
Showing 5 changed files with 211 additions and 128 deletions:

  .gitlab-ci.yml                   +24  −25
  README.md                        +12  −84
  containers/jug/Dockerfile.dev    +12  −7
  install.sh                       +75  −12
  install_dev.sh                   +88  −0
.gitlab-ci.yml (+24 −25)
...
...
@@ -24,9 +24,6 @@ variables:
  DOCKER_NTRIES: 5
  DOCKER_WAIT_TIME: 5
-  ## By default this is not a nightly build, unless the CI says so
-  NIGHTLY: 0
stages:
  - config
  - build:base        ## base OS image
...
...
@@ -48,8 +45,8 @@ default:
## only run CI for in the following cases:
## master, stable branch, release tag, MR event and nightly builds
-## not that nightly builds got from the master branch, but with "NIGHTLY" set to
-## 1 which triggers a slightly different workflow
+## nightly builds are now part of the regular master build in order to keep
+## all artifacts available at all times.
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
...
...
@@ -60,17 +57,17 @@ workflow:
## plan:
## Workflows:
## - master        --> config + all build stages + singularity
-##   - <nightly>   --> config + build:release only + singularity
+##                     + nightly build:release + nightly singularity
## - v3.0-stable   --> config + all build stages + singularity
## - v3.0.0        --> config + all build stages + singularity
## - MR            --> config + all build stages
##
## Container images tags
## - master        --> testing
-## - <nightly>     --> nightly
+## - <nightly>     --> nightly (run as part of master)
## - v3.0-stable   --> 3.0-stable
## - v3.0.0        --> 3.0-stable, 3.0.0
-## - MR            --> unstable (on all registries)
+## - MR            --> 3.0-unstable (on all registries)
##                 --> unstable-mr-XXX (on eicweb only, untag at end of pipeline)
## - all other     --> do nothing
##
...
...
@@ -85,7 +82,8 @@ version:
      VERSION=`head -n1 VERSION`
      STABLE=${VERSION%.*}-stable
      TESTING="testing"
-      UNSTABLE="unstable"
+      NIGHTLY="nightly"
+      UNSTABLE=${VERSION%.*}-unstable
      ## determine appropriate major docker tag for this scenario
    - |
      ## internal tag used for the CI. Also temporarily tagged
...
...
@@ -94,27 +92,28 @@ version:
      ## main export tag, optional secondary export tag,
      EXPORT_TAG=${TESTING}
      EXPORT_TAG2=
+      ## nightly tag, only used in master
+      NIGHTLY_TAG=${NIGHTLY}
      if [ "x${CI_PIPELINE_SOURCE}" == "xmerge_request_event" ]; then
        INTERNAL_TAG="unstable-mr-${CI_MERGE_REQUEST_ID}"
+        NIGHTLY_TAG=
        EXPORT_TAG=$UNSTABLE
        EXPORT_TAG2=
      elif [ "$CI_COMMIT_TAG" = "v${VERSION}" ]; then
        INTERNAL_TAG="stable-br-${VERSION}"
+        NIGHTLY_TAG=
        EXPORT_TAG=${STABLE}
        EXPORT_TAG2=${VERSION}
      elif [ "$CI_COMMIT_BRANCH" == "v${STABLE}" ]; then
        INTERNAL_TAG="stable-tag-${VERSION}"
+        NIGHTLY_TAG=
        EXPORT_TAG=${STABLE}
        EXPORT_TAG2=
-      elif [ "$NIGHTLY" != "0" ]; then
-        INTERNAL_TAG="nightly-${VERSION}"
-        EXPORT_TAG="nightly"
-        EXPORT_TAG2=
      fi
      echo "INTERNAL_TAG=$INTERNAL_TAG" >> build.env
+      echo "NIGHTLY_TAG=$NIGHTLY_TAG" >> build.env
      echo "EXPORT_TAG=$EXPORT_TAG" >> build.env
      echo "EXPORT_TAG2=$EXPORT_TAG2" >> build.env
      echo "PIPELINE_TMP_TAG=$PIPELINE_TMP_TAG" >> build.env
      cat build.env
  artifacts:
...
...
@@ -127,8 +126,6 @@ version:
## note that the nightly builds use a different pipeline
.build:
  rules:
-    - if: '$NIGHTLY != "0"'
-      when: never
    - when: on_success
## cookie-cutter docker push code, to be included at the
## end of the regular job scripts
...
...
@@ -207,24 +204,25 @@ jug_xl:nightly:
  extends: .build
  stage: build:release
  rules:
-    - if: '$NIGHTLY != "0"'
-      when: always
+    - if: '$CI_COMMIT_BRANCH == "master"'
+      when: on_success
    - when: never
  needs:
    - version
    - jug_dev:default
  variables:
    BUILD_IMAGE: "jug_xl"
  script:
-    - docker build -t ${CI_REGISTRY_IMAGE}/${BUILD_IMAGE}:${INTERNAL_TAG}
+    - docker build -t ${CI_REGISTRY_IMAGE}/${BUILD_IMAGE}:${NIGHTLY_TAG}
        -f containers/jug/Dockerfile.xl
-        --build-arg INTERNAL_TAG="testing"
+        --build-arg INTERNAL_TAG=${INTERNAL_TAG}
        containers/jug
    - !reference [.build, script]
    - ./gitlab-ci/docker_push.sh -i ${BUILD_IMAGE} -l ${NIGHTLY_TAG}
        -n $DOCKER_NTRIES -t $DOCKER_WAIT_TIME ${NIGHTLY_TAG}

.singularity:
  rules:
    - if: '$NIGHTLY != "0"'
      when: never
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: never
    - when: on_success
...
...
@@ -258,14 +256,15 @@ jug_xl:singularity:nightly:
  extends: .singularity
  stage: deploy
  rules:
-    - if: '$NIGHTLY != "0"'
-      when: always
+    - if: '$CI_COMMIT_BRANCH == "master"'
+      when: on_success
    - when: never
  needs:
    - version
    - jug_xl:nightly
  variables:
    BUILD_IMAGE: "jug_xl"
+    INTERNAL_TAG: ${NIGHTLY_TAG}

cleanup:
  stage: finalize
...
...
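For readers tracing the new tag logic above, here is a minimal offline sketch of how the `version` job's variables resolve for a merge-request pipeline. The values are invented for illustration (`VERSION=3.0.1`, `CI_MERGE_REQUEST_ID=65`); the real job reads `VERSION` from the repository and the MR id from GitLab:

```bash
#!/bin/bash
## Hypothetical dry run of the tag resolution in the version job above.
VERSION=3.0.1                      # assumed; the job really runs `head -n1 VERSION`
STABLE=${VERSION%.*}-stable        # --> 3.0-stable
UNSTABLE=${VERSION%.*}-unstable    # --> 3.0-unstable (was plain "unstable" before this MR)
NIGHTLY_TAG=nightly                # survives only on master builds

## merge-request case (CI_MERGE_REQUEST_ID=65 assumed):
INTERNAL_TAG="unstable-mr-65"      # eicweb-only, untagged at the end of the pipeline
NIGHTLY_TAG=                       # MR pipelines never publish a nightly tag
EXPORT_TAG=$UNSTABLE

echo "INTERNAL_TAG=$INTERNAL_TAG EXPORT_TAG=$EXPORT_TAG NIGHTLY_TAG=$NIGHTLY_TAG"
# INTERNAL_TAG=unstable-mr-65 EXPORT_TAG=3.0-unstable NIGHTLY_TAG=
```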
README.md (+12 −84)
...
...
@@ -25,90 +25,18 @@ eic-shell
4. Within your development environment (`eic-shell`), you can install software to the
   internal `$ATHENA_PREFIX`
-
-Installation
-------------
-1. Clone the repository and go into the directory
-   ```bash
-   git clone https://eicweb.phy.anl.gov/containers/eic_container.git
-   cd eic_container
-   ```
-2. Run the install script `install.py` to install to your `<PREFIX>` of choice
-   (e.g. $HOME/local/opt/eic_container_1.0.4). By default the
-   modeuefile will be installed to `$PREFIX/../../etc/modulefiles`.
-   You can use the `-v` flag to select the version you want to install, or omit the
-   flag if you want to install the master build. The recommended stable
-   release version is `v3.0.1`.
-   ```bash
-   ./install.py -v 3.0.1 <PREFIX>
-   ```
-   Available flags:
-   ```bash
-   -c CONTAINER, --container CONTAINER
-                         (opt.) Container to install. D: jug_xl
-                         (also available: jug_dev, and legacy eic container).
-   -v VERSION, --version VERSION
-                         (opt.) project version. D: 3.0.1. For MRs, use mr-XXX.
-   -f, --force           Force-overwrite already downloaded container
-   -b BIND_PATHS, --bind-path BIND_PATHS
-                         (opt.) extra bind paths for singularity.
-   -m MODULE_PATH, --module-path MODULE_PATH
-                         (opt.) Root module path to install a modulefile.
-                         D: Do not install a modulefile
-   ```
-3. To use the container in installed mode, you can load the modulefile,
-   and then use the included apps as if they are native apps on your system!
-   ```bash
-   module load eic_container
-   ```
-4. To use the container in local mode, you can install the container without the `-m` flag,
-   and then use the runscripts (under `$PREFIX/bin`) manually.
-   ```bash
-   ./install.py $PREFIX -l
-   ...
-   $PREFIX/bin/eic-shell
-   ```
-4. (Advanced) If you need to add additional bind directives for the internal
-   singularity container, you can add them with the `-b` flag. Run `./install.py -h`
-   to see a list of all supported options.
-
-Usage
------
-### A. Running the singularity development environment with modulefiles
-1. Add the installed modulefile to your module path, e.g.,
-   ```bash
-   module use <prefix>/../../etc/modulefiles
-   ```
-2. Load the eic container
-   ```bash
-   module load eic_container
-   ```
-3. To start a shell in the container environment, do
-   ```bash
-   eic-shell
-   ```
-### B. Running the singularity development locally (without modulefiles)
-1. This is assuming you installed with the `-l` flag to a prefix `$PREFIX`:
-   ```bash
-   ./install.py $PREFIX
-   ```
-2. To start a shell in the container environment, do
-   ```bash
-   $PREFIX/bin/eic-shell
-   ```
-### C. Using the docker container for your CI purposes
+Using the docker container for your CI purposes
+-----------------------------------------------
The docker containers are publicly accessible from
[Dockerhub](https://hub.docker.com/u/eicweb). You probably want to use the default
`jug_xl` container. Relevant versions are:
- `eicweb/jug_xl:nightly`: nightly release, with latest detector and reconstruction
  version. This is probably what you want to use unless you are dispatching a large
  simulation/reconstruciton job
- `eicweb/jug_xl:3.0-stable`: latest stable release, what you want to use for large
  simulation jobs (for reproducibility). Please coordinate with the software group to
  ensure all desired software changes are present in this container.
1. To load the container environment in your run scripts, you have to do nothing special.
   The environment is already setup with good defaults, so you can use all the programs
...
...
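As a quick sanity check of the tags documented above (not part of the README itself, and assuming only standard docker CLI usage plus bash being present on the image, which the Dockerfiles in this repository suggest):

```bash
## Pull the nightly jug_xl image from Dockerhub and open an interactive shell.
docker pull eicweb/jug_xl:nightly
docker run --rm -it eicweb/jug_xl:nightly bash
```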
containers/jug/Dockerfile.dev (+12 −7)
...
...
@@ -134,23 +134,25 @@ RUN cd /opt/spack-environment \
      /etc/profile.d/z10_spack_environment.sh \
 && cd /opt/spack-environment \
 && echo -n "" \
-&& echo "Add extra environment variables for Podio and Gaudi" \
+&& echo "Add extra environment variables for Jug, Podio and Gaudi" \
 && spack env activate . \
+&& echo "export JUG_DEV_VERSION=${INTERNAL_TAG}-$(date +%Y-%m-%d)" \
+      >> /etc/profile.d/z11_jug_env.sh \
 && export PODIO=`spack find -p podio \
                  | grep software \
                  | awk '{print $2}'` \
 && echo "export PODIO=${PODIO};" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && echo "export BINARY_TAG=x86_64-linux-gcc9-opt" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && echo "if [ ! -z \${ATHENA_PREFIX} ]; then" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && echo "export LD_LIBRARY_PATH=\$ATHENA_PREFIX/lib:\$LD_LIBRARY_PATH" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && echo "export PATH=\$ATHENA_PREFIX/bin:\$PATH" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && echo "fi" \
-      >> /etc/profile.d/z10_spack_environment.sh \
+      >> /etc/profile.d/z11_jug_env.sh \
 && cd /opt/spack-environment && spack env activate . \
 && echo -n "" \
 && echo "Installing additional python packages" \
...
...
@@ -204,6 +206,9 @@ RUN --mount=from=staging,target=/staging \
 && cp -r /staging/usr/local /usr/local \
 && cp /staging/etc/profile.d/z10_spack_environment.sh /etc/eic-env.sh \
 && sed -i '/MANPATH/ s/;$/:;/' /etc/eic-env.sh \
+&& cp /staging/etc/profile.d/z11_jug_env.sh \
+      /etc/profile.d/z11_jug_env.sh \
+&& cat /etc/profile.d/z11_jug_env.sh >> /etc/eic-env.sh \
 && cp /etc/eic-env.sh /etc/profile.d/z10_eic-env.sh
## Bugfix to address issues loading the Qt5 libraries on Linux kernels prior to 3.15
...
...
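A possible spot check that the new `z11_jug_env.sh` contents survive into a running container might look like this (a sketch; the `jug_dev:testing` tag is taken from the CI config above, and `bash -l` is used so the `/etc/profile.d` scripts get sourced):

```bash
## Print the variables exported by z11_jug_env.sh from a login shell
## inside the freshly built development image.
docker run --rm eicweb/jug_dev:testing \
  bash -lc 'echo "JUG_DEV_VERSION=$JUG_DEV_VERSION PODIO=$PODIO BINARY_TAG=$BINARY_TAG"'
```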
install.sh (+75 −12)
#!/bin/bash

CONTAINER="jug_xl"
-VERSION="3.0-stable"
+VERSION="nightly"
PREFIX="$PWD"

function print_the_help {
  echo "USAGE: ./install.sh [-p PREFIX] [-v VERSION]"
  echo "OPTIONAL ARGUMENTS:"
  echo "  -p,--prefix     Working directory to deploy the environment (D: $PREFIX)"
  echo "  -v,--version    Version to install (D: $VERSION)"
  echo "  -h,--help       Print this message"
  echo ""
  echo "  Set up containerized development environment."
  echo ""
  echo "EXAMPLE: ./install.sh"
  exit
}

while [ $# -gt 0 ]; do
  key=$1
  case $key in
    -p|--prefix)
      PREFIX=$2
      shift
      shift
      ;;
    -v|--version)
      VERSION=$2
      shift
      shift
      ;;
    -h|--help)
      print_the_help
      exit 0
      ;;
    *)
      echo "ERROR: unknown argument: $key"
      echo "use --help for more info"
      exit 1
      ;;
  esac
done

mkdir -p $PREFIX || exit 1
if [ ! -d $PREFIX ]; then
  echo "ERROR: not a valid directory: $PREFIX"
  echo "use --help for more info"
  exit 1
fi

echo "Setting up development environment for eicweb/$CONTAINER:$VERSION"

## Simple setup script that installs the container
-## in your local environment under $PWD/local/lib
+## in your local environment under $PREFIX/local/lib
## and creates a simple top-level launcher script
## that launches the container for this working directory
## with the $ATHENA_PREFIX variable pointing
-## to the $PWD/local directory
+## to the $PREFIX/local directory
mkdir -p local/lib || exit 1
...
...
@@ -53,11 +100,11 @@ if [ ${SINGULARITY_VERSION:0:1} = 2 ]; then
  echo "We will attempt to use a fall-back SIMG image to be used with this singularity version"
  if [ -f /gpfs02/eic/athena/jug_xl-3.0-stable.simg ]; then
    ln -sf /gpfs02/eic/athena/jug_xl-3.0-stable.simg local/lib
-    SIF="$PWD/local/lib/jug_xl-3.0-stable.simg"
+    SIF="$PREFIX/local/lib/jug_xl-3.0-stable.simg"
  else
    echo "Attempting last-resort singularity pull for old image"
    echo "This may take a few minutes..."
-    SIF="$PWD/local/lib/jug_xl-3.0-stable.simg"
+    SIF="$PREFIX/local/lib/jug_xl-3.0-stable.simg"
    singularity pull --name "$SIF" docker://eicweb/$CONTAINER:$VERSION
  fi
  ## we are in sane territory, yay!
...
...
@@ -65,22 +112,22 @@ else
  ## check if we can just use cvmfs for the image
  if [ -d /cvmfs/singularity.opensciencegrid.org/eicweb/jug_xl:${VERSION} ]; then
    ln -sf /cvmfs/singularity.opensciencegrid.org/eicweb/jug_xl:${VERSION} local/lib
-    SIF="$PWD/local/lib/jug_xl:${VERSION}"
+    SIF="$PREFIX/local/lib/jug_xl:${VERSION}"
  elif [ -f /gpfs02/cvmfst0/eic.opensciencegrid.org/singularity/athena/jug_xl_v3.0-stable.sif ]; then
    ln -sf /gpfs02/cvmfst0/eic.opensciencegrid.org/singularity/athena/jug_xl_v3.0-stable.sif local/lib
-    SIF="$PWD/local/lib/jug_xl_v${VERSION}.sif"
+    SIF="$PREFIX/local/lib/jug_xl_v${VERSION}.sif"
  ## if not, download the container to the system
  else
    ## get the python installer and run the old-style install
    wget https://eicweb.phy.anl.gov/containers/eic_container/-/raw/master/install.py
    chmod +x install.py
-    ./install.py -c $CONTAINER -v $VERSION $PWD/local
+    ./install.py -f -c $CONTAINER -v $VERSION $PREFIX/local
    ## Don't place eic-shell in local/bin as this may
    ## conflict with things we install inside the container
-    rm $PWD/local/bin/eic-shell
+    rm $PREFIX/local/bin/eic-shell
    ## Cleanup
    rm -rf __pycache__ install.py
-    SIF=$PWD/local/lib/${CONTAINER}.sif.${VERSION}
+    SIF=$PREFIX/local/lib/${CONTAINER}.sif.${VERSION}
  fi
fi
...
...
@@ -90,6 +137,20 @@ else
  echo " - Deployed ${CONTAINER} image: $SIF"
fi

+## We want to make sure the root directory of the install directory
+## is always bound. We also check for the existence of a few standard
+## locations (/scratch /volatile /cache) and bind those too if found
+echo " - Determining additional bind paths"
+PREFIX_ROOT="/$(realpath $PREFIX | cut -d "/" -f2)"
+BINDPATH=$PREFIX_ROOT
+echo "   --> $PREFIX_ROOT"
+for dir in /work /scratch /volatile /cache; do
+  if [ -d $dir ]; then
+    echo "   --> $dir"
+    BINDPATH="${BINDPATH},$dir"
+  fi
+done

## create a new top-level eic-shell launcher script
## that sets the ATHENA_PREFIX and then starts singularity
## need different script for old singularity versions
...
...
@@ -97,14 +158,16 @@ if [ ${SINGULARITY_VERSION:0:1} != 2 ]; then
  ## newer singularity
  cat << EOF > eic-shell
#!/bin/bash
-export ATHENA_PREFIX=$PWD/local
+export ATHENA_PREFIX=$PREFIX/local
+export SINGULARITY_BINDPATH=$BINDPATH
$SINGULARITY run $SIF
EOF
else
  ## ancient singularity
  cat << EOF > eic-shell
#!/bin/bash
-export ATHENA_PREFIX=$PWD/local
+export ATHENA_PREFIX=$PREFIX/local
+export SINGULARITY_BINDPATH=$BINDPATH
$SINGULARITY exec $SIF eic-shell
EOF
fi
...
...
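Taken together, a usage sketch for the reworked installer (the prefix `~/eic` is a hypothetical example; per the script above, the generated `eic-shell` launcher is written to the current working directory):

```bash
## Deploy the default nightly jug_xl container with ~/eic as the prefix,
## then enter the container through the generated launcher.
./install.sh -p ~/eic -v nightly
./eic-shell
```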
install_dev.sh (new file, mode 100755, +88 −0)
#!/bin/bash

CONTAINER="jug_dev"
VERSION="testing"
ODIR="$PWD"

function print_the_help {
  echo "USAGE: ./install_dev.sh [-o DIR] [-v VERSION]"
  echo "OPTIONAL ARGUMENTS:"
  echo "  -o,--outdir     Directory to download the container to (D: $ODIR)"
  echo "  -v,--version    Version to install (D: $VERSION)"
  echo "  -h,--help       Print this message"
  echo ""
  echo "  Download development container into an output directory"
  echo ""
  echo "EXAMPLE: ./install.sh"
  exit
}

while [ $# -gt 0 ]; do
  key=$1
  case $key in
    -o|--outdir)
      ODIR=$2
      shift
      shift
      ;;
    -v|--version)
      VERSION=$2
      shift
      shift
      ;;
    -h|--help)
      print_the_help
      exit 0
      ;;
    *)
      echo "ERROR: unknown argument: $key"
      echo "use --help for more info"
      exit 1
      ;;
  esac
done

mkdir -p $ODIR || exit 1
if [ ! -d $ODIR ]; then
  echo "ERROR: not a valid directory: $ODIR"
  echo "use --help for more info"
  exit 1
fi

echo "Deploying development container for eicweb/$CONTAINER:$VERSION to $ODIR"

## Simple setup script that installs the container
## in your local environment under $ODIR/local/lib
## and creates a simple top-level launcher script
## that launches the container for this working directory
## with the $ATHENA_ODIR variable pointing
## to the $ODIR/local directory
mkdir -p local/lib || exit 1

## Always deploy the SIF image using the python installer,
## as this is for experts only anyway
SIF=
## work in temp directory
tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
pushd $tmp_dir
wget https://eicweb.phy.anl.gov/containers/eic_container/-/raw/master/install.py
chmod +x install.py
./install.py -f -c $CONTAINER -v $VERSION .
SIF=`ls lib/$CONTAINER.sif.* | head -n1`
## That's all
if [ -z $SIF -o ! -f $SIF ]; then
  echo "ERROR: no singularity image found"
else
  echo "Container download succesfull"
fi
## move over the container to our output directory
mv $SIF $ODIR
## cleanup
popd
rm -rf $tmp_dir
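And a matching usage sketch for the new expert-only script (output directory invented for illustration; the image file name follows the `${CONTAINER}.sif.${VERSION}` pattern used above):

```bash
## Fetch the jug_dev testing image into /tmp/containers.
./install_dev.sh -o /tmp/containers -v testing
ls /tmp/containers/jug_dev.sif.*
```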