Essential components
====================

Prerequisites
-------------

System dependencies
~~~~~~~~~~~~~~~~~~~
The following requirements should be provided natively by your system or can be installed from the official repositories of your Unix-like distribution:
- subversion (svn) for version control of XIOS sources
- git for version control of NEMO sources
- Perl interpreter
- Fortran compiler (``ifort``, ``gfortran``, ``pgfortran``, ``ftn``, ...)
- Message Passing Interface (MPI) implementation (e.g. OpenMPI or MPICH)
- Network Common Data Form (NetCDF) library with its underlying Hierarchical Data Format (HDF)
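As a quick sanity check, a small shell loop can probe the ``PATH`` for each required tool. The command names below are common defaults and may differ on your system (e.g. ``ftn`` instead of ``mpif90`` on Cray, ``mpiifort`` with Intel MPI):

```shell
# Probe PATH for the tools needed to build NEMO and XIOS.
# Command names are typical defaults; adjust for your system.
for tool in svn git perl gfortran mpif90 nc-config; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "MISSING: $tool"
    fi
done
```

Any line reporting ``MISSING`` points at a package to install before proceeding.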
NEMO, by default, takes advantage of some MPI features introduced into the MPI-3 standard.
.. hint::

   The MPI implementation is not strictly essential, since it is possible to compile and run NEMO on a single processor. However, most realistic configurations require the parallel capabilities of NEMO, and these use the MPI standard.
.. note::

   On older systems that do not support MPI-3 features, the ``key_mpi2`` preprocessor key should be used at compile time. This will limit MPI features to those defined within the MPI-2 standard (but will lose some performance benefits).
Specifics for NetCDF and HDF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

NetCDF and HDF versions from official repositories may not have been compiled with MPI support. However, access to all the options available with the XIOS IO-server requires parallel builds of these libraries.
.. hint::

   When building these libraries from source, enable their parallel features at configure time::

      $ ./configure [--{enable-fortran,disable-shared,enable-parallel}] ...

   It is recommended to build the tests with ``--enable-parallel-tests`` and run them with ``make check``.
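To check whether an already-installed netCDF was built with the features XIOS needs, the ``nc-config`` (netCDF-C) and ``nf-config`` (netCDF-Fortran) helper scripts can be queried. The ``--has-parallel4`` and ``--has-nc4`` flags are those of recent netCDF releases; check ``nc-config --help`` on yours:

```shell
# Report which netCDF helper scripts are available; each one can then
# be queried for the build options of the installed library.
for helper in nc-config nf-config; do
    if command -v "$helper" >/dev/null 2>&1; then
        echo "$helper: version $("$helper" --version)"
    else
        echo "$helper: not found on PATH"
    fi
done
# With the helpers present, recent netCDF releases report:
#   nc-config --has-parallel4   # "yes" => parallel I/O over HDF5
#   nf-config --has-nc4         # "yes" => Fortran netCDF-4 interface
```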
.. caution::

   Particular versions of these libraries may have their own restrictions; check the netCDF-4 support requirements of the versions you install.
Extract and install XIOS
------------------------
With the sole exception of running NEMO in mono-processor mode (in which case output options are limited to those supported by the IOIPSL library), diagnostic outputs from NEMO are handled by the third-party XIOS library. It can be used in two different modes:

attached
   Every NEMO process also acts as a XIOS server.

detached
   Every NEMO process runs as a XIOS client. Output is collected and collated by external, stand-alone XIOS server processes.
Instructions on how to install XIOS can be found on its :xios:`wiki<>`.
.. hint::

   Prior to NEMO version 4.2.0 it was recommended to use the XIOS 2.5 release. However, versions 4.2.0 and beyond utilise some newer features of XIOS2, and users will need to upgrade to the trunk version of XIOS2. Note that 4.2.1 does not support the use of XIOS3.

This can be checked out with::

   svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS2/trunk
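Once checked out, XIOS is built with its own ``make_xios`` script against an architecture file from its :file:`arch` directory. The sketch below assumes the current XIOS2 script interface, and ``X64_MYHOST`` is a placeholder architecture name; consult the XIOS wiki for the exact options of your checkout:

```shell
# Build XIOS against an architecture file from its arch/ directory.
# X64_MYHOST is a placeholder; copy and adapt an existing arch-* file
# set for your machine if none of the provided ones match.
cd trunk                          # directory created by the svn checkout
./make_xios --avail               # list the architecture files provided
./make_xios --arch X64_MYHOST --job 8
```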
Download and install the NEMO code
==================================

Check out the NEMO source
-------------------------
There are several ways to obtain the NEMO source code. Users who are not familiar with ``git`` and simply want a fixed code to compile and run can download a tarball from the :tarrepo:`4.2.0 release site<>`.
Users who are familiar with ``git``, and likely to use it to manage their own local branches and modifications, can clone the repository at the release tag::

   git clone --branch 4.2.1 https://forge.nemo-ocean.eu/nemo/nemo.git nemo_4.2.1
Experienced developers who may wish to experiment with other branches or code more recent than the release (perhaps with a view to returning developments to the system) can clone the main repository and then switch to the tagged release::

   git clone https://forge.nemo-ocean.eu/nemo/nemo.git
   cd nemo
   git switch --detach 4.2.1
Description of 1st level tree structure
---------------------------------------

============== ==================================================
:file:`arch`   Compilation settings
:file:`cfgs`   :doc:`Reference configurations <cfgs>`
:file:`ext`    Dependencies included (AGRIF, FCM, PPR & IOIPSL)
:file:`mk`     Compilation scripts
:file:`src`    NEMO codebase
:file:`tests`  :doc:`Test cases <tests>` (unsupported)
:file:`tools`  :doc:`Utilities <tools>` to {pre,post}process data
:file:`sette`  :doc:`SETTE <sette>`, a code testing framework
============== ==================================================
Set up your architecture configuration file
-------------------------------------------

All compiler options in NEMO are controlled using files in :file:`./arch/arch-my_arch.fcm`, where ``my_arch`` is the name of the computing architecture (generally following the pattern ``HPCC-compiler`` or ``OS-compiler``).
It is recommended to copy and rename a configuration file from an architecture similar to your own. You will need to set appropriate values for all of the variables in the file. In particular, the FCM variables ``%NCDF_HOME``, ``%HDF5_HOME`` and ``%XIOS_HOME`` should be set to the installation directories used for the NetCDF, HDF5 and XIOS installations::

   %NCDF_HOME    /usr/local/path/to/netcdf
   %HDF5_HOME    /usr/local/path/to/hdf5
   %XIOS_HOME    /home/$( whoami )/path/to/xios-trunk
   %OASIS_HOME   /home/$( whoami )/path/to/oasis
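For orientation, a complete arch file might look like the sketch below. The compiler flags are illustrative gfortran settings, not tuned recommendations, and the paths are placeholders; the variable names follow the pattern of the files shipped in :file:`arch`:

```
# arch-my_arch.fcm -- illustrative sketch for a Linux/gfortran machine
%NCDF_HOME           /usr/local/path/to/netcdf
%HDF5_HOME           /usr/local/path/to/hdf5
%XIOS_HOME           /home/user/path/to/xios-trunk

%NCDF_INC            -I%NCDF_HOME/include
%NCDF_LIB            -L%NCDF_HOME/lib -lnetcdff -lnetcdf
%XIOS_INC            -I%XIOS_HOME/inc
%XIOS_LIB            -L%XIOS_HOME/lib -lxios -lstdc++

%CPP                 cpp
%FC                  mpif90
%FCFLAGS             -fdefault-real-8 -O2 -funroll-all-loops
%FFLAGS              %FCFLAGS
%LD                  mpif90
%LDFLAGS
%FPPFLAGS            -P -traditional
%AR                  ar
%ARFLAGS             rs
%MK                  make
%USER_INC            %XIOS_INC %NCDF_INC
%USER_LIB            %XIOS_LIB %NCDF_LIB
```

Compare against the :file:`arch-*.fcm` files distributed with the code for the full set of variables expected on your platform.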
Preparing an experiment
=======================

Create and compile a new configuration
--------------------------------------

The main script used to (re)compile the code and create the executable is called :file:`makenemo`; it is located at the root of the working copy. It identifies the routines you need from the source code, builds the makefile and runs it. As an example, compile a :file:`MY_GYRE` configuration from GYRE with 'my_arch'::

   ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE'
Then, at the end of the compilation, the :file:`MY_GYRE` directory will have the following structure:

========= ===========================================================================
Directory Purpose
========= ===========================================================================
BLD       BuiLD folder: target executable, headers, libs, preprocessed routines, ...
EXP00     Run folder: link to executable, namelists, ``*.xml`` files and IOs
EXPREF    Files under version control only for :doc:`official configurations <cfgs>`
MY_SRC    New routines or modified copies of NEMO sources
WORK      Links to all raw routines from :file:`./src` considered
========= ===========================================================================
After successful execution of the :file:`makenemo` command, the executable called ``nemo`` is available in the :file:`EXP00` directory.
Viewing and changing list of active CPP keys
--------------------------------------------

For a given configuration (here called MY_CONFIG), the list of active CPP keys can be found in :file:`./cfgs/MY_CONFIG/cpp_MY_CONFIG.fcm`. This text file can be edited by hand or with :file:`makenemo` to change the list of active CPP keys. Once changed, ``nemo`` needs to be recompiled for the change to be taken into account. Note that most NEMO configurations will need to specify the following CPP keys: ``key_xios`` for IOs. MPI parallelism is activated by default; use ``key_mpi_off`` to compile without MPI.
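Rather than editing the file by hand, the same change can be made through ``makenemo``'s ``add_key``/``del_key`` arguments. MY_CONFIG is the example name from this section, and the exact argument syntax may vary between releases; check ``./makenemo -h`` on yours:

```shell
# Add key_xios to, and drop key_mpi_off from, cpp_MY_CONFIG.fcm,
# then trigger a rebuild of the existing MY_CONFIG configuration.
./makenemo -n 'MY_CONFIG' add_key 'key_xios' del_key 'key_mpi_off'
```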
More :file:`makenemo` options
-----------------------------

``makenemo`` has several other options that can control which source files are selected and the operation of the build process itself. These options can be useful for maintaining several code versions with only minor differences, but they should be used sparingly. Note, however, the ``-j`` option, which should be used more routinely to speed up the build process. For example::

   ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' -j 8

will compile with up to 8 processes running simultaneously.
Default behaviour
~~~~~~~~~~~~~~~~~

On first use, you need the ``-m`` option to specify the architecture configuration file (compiler and its options, routines and libraries to include); for subsequent compilations, it is assumed you will be using the same compiler. If the ``-n`` option is not specified, the last compiled configuration will be used.
Tools used during the process
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The various bash scripts used by ``makenemo`` (for instance, to create the :file:`WORK` directory) are located in the :file:`mk` subdirectory. In most cases, there should be no need for user intervention in these scripts. Occasionally, incomplete builds can leave the environment in an indeterminate state. If problems are experienced with subsequent attempts, then running::

   ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' clean

will prepare the directories for a fresh attempt and remove any intermediate files that may be causing issues.
The reference configurations that may be provided to the ``-r`` argument of ``makenemo`` are listed in the :file:`cfgs/ref_cfgs.txt` file::

   AGRIF_DEMO OCE ICE TOP NST
   AMM12 OCE
   C1D_PAPA OCE
   GYRE_BFM OCE TOP
   GYRE_PISCES OCE TOP
   ORCA2_OFF_PISCES OCE TOP OFF
   ORCA2_OFF_TRC OCE TOP OFF
   ORCA2_SAS_ICE OCE ICE NST SAS
   ORCA2_ICE_PISCES OCE TOP ICE NST ABL
   ORCA2_ICE_ABL OCE ICE ABL
   SPITZ12 OCE ICE
   WED025 OCE ICE
User-added configurations will be listed in :file:`cfgs/work_cfgs.txt`.
Running the model
=================

Once :file:`makenemo` has run successfully, a symbolic link to the ``nemo`` executable is available in :file:`./cfgs/MY_CONFIG/EXP00`. For the reference configurations, the :file:`EXP00` folder also contains the initial input files (namelists, ``*.xml`` files for the IOs, ...). If the configuration needs other input files, they have to be placed here::

   cd 'MY_CONFIG'/EXP00
   mpirun -n $NPROCS ./nemo   # $NPROCS is the number of processes
                              # mpirun is your MPI wrapper
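On an HPC system the launch above is usually wrapped in a batch script. A hypothetical SLURM sketch follows; the scheduler directives, sizing, path and the use of ``srun`` are all placeholders to adapt to your site:

```shell
#!/bin/bash
#SBATCH --job-name=nemo_MY_CONFIG
#SBATCH --nodes=2                  # placeholder sizing
#SBATCH --ntasks=64                # total MPI processes
#SBATCH --time=01:00:00

cd /path/to/cfgs/MY_CONFIG/EXP00   # placeholder path
srun ./nemo                        # or: mpirun -n $SLURM_NTASKS ./nemo
```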
If you are completely new to NEMO or this is your first experience of running MPI-based
ocean models, then you may wish to consider activating the SETTE
testing framework. If
your HPC environment matches or is similar to one of the supported environments then you
may find it simpler to activate one of the reference configurations this way since SETTE
includes example submission scripts for common batch environments and scripts to fetch
all the input files for the more realistic reference configurations. See the
:doc:`SETTE <sette>` section of this guide for more details.