********************
Essential components
********************
| The NEMO source code is written in *Fortran 2008* and
  some of its prerequisite tools and libraries are already included in the download.
| These include the AGRIF_ mesh refinement library, the FCM_ build system,
  the PPR_ polynomial reconstruction library and the IOIPSL_ library for parts of the output.

System prerequisites
--------------------
The following should be provided natively by your system;
if not, they need to be installed from the official repositories:

- a Unix-like machine (e.g. a Linux distribution, MacOS)
- *subversion (svn)* for version control of XIOS sources
- *git* for version control of NEMO sources
- a *Perl* interpreter
- a *Fortran* compiler (``ifort``, ``gfortran``, ``pgfortran``, ``ftn``, ...)
- a *Message Passing Interface (MPI)* implementation (e.g. |OpenMPI|_ or |MPICH|_)
- the |NetCDF|_ library with its underlying |HDF|_

.. note::

   By default, NEMO requires MPI-3.
   However, it is possible to circumvent this requirement with one of the following work-arounds:

   - Activate the ``key_mpi2`` preprocessor key at compile time.
     This allows you to run the model with MPI-2, but keep in mind that you will lose some performance benefits.
   - Activate the ``key_mpi_off`` preprocessor key at compile time.
     This restricts the model to a single process (no MPI parallelization) and you will not be able to use XIOS.

.. |OpenMPI| replace:: *OpenMPI*
.. _OpenMPI: https://www.open-mpi.org
.. |MPICH|   replace:: *MPICH*
.. _MPICH:   https://www.mpich.org
.. |NetCDF|  replace:: *Network Common Data Form (NetCDF)*
.. _NetCDF:  https://www.unidata.ucar.edu
.. |HDF|     replace:: *Hierarchical Data Form (HDF)*
.. _HDF:     https://www.hdfgroup.org
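
As a quick sanity check, the sketch below reports which of these prerequisite tools are
visible on your ``PATH``; the tool names used here (in particular the MPI Fortran wrapper
``mpif90`` and the ``nc-config`` utility) are assumptions that may differ on your system.

.. code-block:: sh

   # Report which prerequisite tools are on PATH; names may differ per system.
   for tool in svn git perl gfortran mpif90 nc-config; do
      if command -v "$tool" >/dev/null 2>&1; then
         echo "found:   $tool"
      else
         echo "missing: $tool"
      fi
   done

Anything reported as missing can usually be installed from your distribution's
official repositories, as noted above.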

Specifics for NetCDF and HDF
----------------------------

In order to take full advantage of the XIOS IO-server
(the *one_file* option, which combines all of your output into one file),
the HDF (C library) and NetCDF (C and Fortran libraries) must be compiled with MPI support.
To do this:

- Compile these libraries with the same version of the MPI implementation that
  both NEMO and XIOS will be compiled and linked with (see below).
- When compiling the HDF library, pass the ``--enable-parallel`` option to ``configure``:

   .. code-block:: console

      $ ./configure --enable-parallel ...
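
For reference, a hedged end-to-end sketch of building parallel HDF and NetCDF is shown
below; the install prefix, the separate source trees, and the MPI wrapper names
(``mpicc``, ``mpif90``) are assumptions to adapt to your environment.

.. code-block:: console

   # In the HDF5 source tree (assumed prefix and MPI wrappers):
   $ CC=mpicc FC=mpif90 ./configure --enable-parallel --enable-fortran --prefix=$HOME/local
   $ make install
   # Then in the netcdf-c source tree, pointing at the parallel HDF5:
   $ CC=mpicc ./configure --prefix=$HOME/local CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib
   $ make install
   # Finally in the netcdf-fortran source tree:
   $ CC=mpicc FC=mpif90 ./configure --prefix=$HOME/local CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib
   $ make install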
  
.. note::

   | XIOS requires NetCDF-4. NetCDF-3 can still be used in NEMO if you do not wish to use XIOS.
   | The files created by XIOS are true NetCDF-4, not NetCDF4-classic,
     and are therefore incompatible with NetCDF-3 software.
     In order to handle any XIOS output, you need software that is compatible with
     true NetCDF-4 files (e.g. ncview, Matlab, Python).
     If you would like to use other software that is not compatible with NetCDF-4,
     you can convert your XIOS output with a NetCDF utility such as ``nccopy``.

Install XIOS library
--------------------
With the sole exception of running NEMO without MPI
(in which case output options are limited to the default minimum),
diagnostic outputs from NEMO are handled by the third-party ``XIOS`` library.
It can be used in two different modes:

:*attached*:  Each NEMO process deals directly with its own output.
:*detached*:  Separate XIOS processes deal with the output.

In both modes you can choose between the *one_file* and *multiple_file* options.
With *one_file*, output is collected and collated to directly produce one single file for your domain.
With *multiple_file*, you will have as many output files as you have NEMO processes (if attached) or
XIOS processes (if detached).
Instructions on how to install XIOS can be found on its :xios_doc:`wiki<>`.

.. note::

   Prior to NEMO version 4.2.0 it was recommended to use the XIOS 2.5 release.
   However, versions 4.2.0 and beyond utilise some newer features of XIOS2 and
   users will need to upgrade to the trunk version of XIOS2.
   Note that 4.2.1 does support the use of XIOS3 by activating ``key_xios3``
   (in this case you cannot use the tiling capability).
   The XIOS2 trunk can be checked out with:

   .. code-block:: console

      $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS2/trunk

If you find problems at this stage, support can be found by subscribing to
the :xios:`XIOS mailing list <../mailman/listinfo.cgi/xios-users>` and sending a mail message to it.

**********************************
Download and install the NEMO code
**********************************

Checkout the NEMO source
------------------------

There are several ways to obtain the NEMO source code. Users who are not familiar with ``git``
and simply want a fixed code to compile and run can download a tarball from the :tarrepo:`4.2.1 release site<>`.

Users who are familiar with ``git`` and likely to use it to manage their own local branches and 
modifications, can clone the repository at the release tag:

.. code:: console

    git clone --branch 4.2.1 https://forge.nemo-ocean.eu/nemo/nemo.git nemo_4.2.1
   
Experienced developers who may wish to experiment with other branches or code more recent than the
release, perhaps with a view to returning developments to the system, can clone the main repository
and then switch to the tagged release:

.. code:: console

    git clone https://forge.nemo-ocean.eu/nemo/nemo.git
    cd nemo
    git switch --detach 4.2.1

Description of the directory tree
---------------------------------

+---------------+-------------------------------------------------+
| :file:`arch`  | Compilation settings                            |
+---------------+-------------------------------------------------+
| :file:`cfgs`  | :doc:`Reference configurations <cfgs>`          |
+---------------+-------------------------------------------------+
| :file:`ext`   | Dependencies included                           |
|               | (``AGRIF``, ``FCM``, ``PPR`` & ``IOIPSL``)      |
+---------------+-------------------------------------------------+
| :file:`mk`    | Compilation scripts                             |
+---------------+-------------------------------------------------+
| :file:`src`   | NEMO codebase                                   |
+---------------+-------------------------------------------------+
| :file:`tests` | :doc:`Test cases <tests>`                       |
+---------------+-------------------------------------------------+
| :file:`tools` | :doc:`Utilities <tools>`                        |
|               | to {pre,post}process data                       |
+---------------+-------------------------------------------------+
| :file:`sette` | :doc:`SETTE <sette>`                            |
|               | a code testing framework                        |
+---------------+-------------------------------------------------+

Setup your architecture configuration file
------------------------------------------

All compiler options in NEMO are controlled using files in :file:`./arch/arch-'my_arch'.fcm` where
``my_arch`` is the name you use to refer to your computing environment.

.. note::

   You can use :file:`build_arch-auto.sh` to automatically set up your arch file:

   .. code:: console

      cd arch
      ./build_arch-auto.sh

   For further help on how to use this functionality, run ``./build_arch-auto.sh -h``.

Alternatively, you can copy, rename and edit a configuration file from an architecture similar to your own.
You will need to set appropriate values for all of the variables in the file.
``%NCDF_HOME``, ``%HDF5_HOME`` and ``%XIOS_HOME`` should be set to
the installation directories used during the XIOS installation. For example:

.. code-block:: sh

   %NCDF_HOME    /usr/local/path/to/netcdf
   %HDF5_HOME    /usr/local/path/to/hdf5
   %XIOS_HOME    /home/$( whoami )/path/to/xios-trunk
   %OASIS_HOME   /home/$( whoami )/path/to/oasis
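
If the NetCDF utilities are installed, they can suggest a value for ``%NCDF_HOME``.
This is a hedged helper, not part of NEMO; the availability of ``nc-config`` on your
system is an assumption.

.. code-block:: sh

   # Print a candidate %NCDF_HOME from nc-config, if it is on PATH.
   if command -v nc-config >/dev/null 2>&1; then
      echo "%NCDF_HOME    $(nc-config --prefix)"
   else
      echo "nc-config not found; set %NCDF_HOME by hand"
   fi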

***********************
Preparing an experiment
***********************

Create and compile a new configuration
--------------------------------------

The main script to (re)compile the code and create the executable is called :file:`makenemo` and
is located at the root of the working copy.
It identifies the routines you need from the source code, builds the makefile and runs it.
As an example, compile a :file:`MY_GYRE` configuration from GYRE with 'my_arch':

.. code-block:: sh

   ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE'

Then, at the end of the configuration compilation,
the :file:`MY_GYRE` directory will have the following structure.

+------------+----------------------------------------------------------------------------+
| Directory  | Purpose                                                                    |
+============+============================================================================+
| ``BLD``    | BuiLD folder: target executable, headers, libs, preprocessed routines, ... |
+------------+----------------------------------------------------------------------------+
| ``EXP00``  | Run   folder: link to executable, namelists, ``*.xml`` and IOs             |
+------------+----------------------------------------------------------------------------+
| ``EXPREF`` | Files under version control only for :doc:`official configurations <cfgs>` |
+------------+----------------------------------------------------------------------------+
| ``MY_SRC`` | New routines or modified copies of NEMO sources                            |
+------------+----------------------------------------------------------------------------+
| ``WORK``   | Links to all raw routines from :file:`./src` considered                    |
+------------+----------------------------------------------------------------------------+

After successful execution of the :file:`makenemo` command,
the executable called ``nemo`` is available in the :file:`EXP00` directory.


Viewing and changing list of active CPP keys
--------------------------------------------

For a given configuration (here called ``MY_CONFIG``),
the list of active CPP keys can be found in :file:`./cfgs/MY_CONFIG/cpp_MY_CONFIG.fcm`.

This text file can be edited by hand or with :file:`makenemo` to change the list of active CPP keys.
Once changed, ``nemo`` needs to be recompiled for the change to be taken into account.
Note that most NEMO configurations will need to specify ``key_xios`` for IOs.
MPI parallelism is activated by default; use ``key_mpi_off`` to compile without MPI.
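
For illustration, the file uses FCM syntax; a minimal :file:`cpp_MY_CONFIG.fcm` might
contain a single line such as the sketch below, where ``key_si3`` (sea ice) and
``key_top`` (passive tracers) are example keys and the exact set depends on your
configuration.

.. code-block:: sh

   bld::tool::fppkeys   key_xios key_si3 key_top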


Configure XIOS outputs
----------------------

XIOS allows for efficient management of diagnostic outputs, as
the time averaging, file writing and even some simple arithmetic or regridding are carried out in
parallel with the NEMO model run.
This page gives a basic introduction to using XIOS with NEMO.
Additional information is available at the XIOS :xios:`wiki<>` and in the NEMO reference manual.

Use of XIOS for NEMO IOs is activated using the pre-compiler key ``key_xios``.

XIOS is controlled by means of XML input files that should be copied to your model run directory before running the model.
Examples of these files can be found in the reference configurations (:file:`./cfgs`).
The XIOS executable expects to find a file called :file:`iodef.xml` in the model run directory.
In NEMO we have made the decision to use include statements in the :file:`iodef.xml` file to include:

- :file:`field_def_nemo-oce.xml` (for physics),
- :file:`field_def_nemo-ice.xml` (for ice),
- :file:`field_def_nemo-pisces.xml` (for biogeochemistry) and
- :file:`domain_def.xml` from the :file:`./cfgs/SHARED` directory.

Most users will not need to modify :file:`domain_def.xml` or :file:`field_def_nemo-???.xml` unless
they want to add new diagnostics to the NEMO code.
The definition of the output files is organized into separate :file:`file_definition.xml` files which
are included in the :file:`iodef.xml` file.
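
As an illustration, a minimal file definition might look like the sketch below; the
``toce``/``thetao`` field identifiers and the output frequency are examples, and any
``field_ref`` you use must match an entry in the included :file:`field_def_nemo-???.xml` files.

.. code-block:: xml

   <file_definition type="multiple_file">
      <file id="file1" name_suffix="_grid_T" output_freq="1d">
         <field field_ref="toce" name="thetao"/>
      </file>
   </file_definition>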

XIOS can be used with NEMO in two different modes:

Detached Mode
^^^^^^^^^^^^^

In detached mode the XIOS executable is executed on separate cores from the NEMO model.
This is the recommended method for using XIOS for realistic model runs.
To use this mode set ``using_server`` to ``true`` at the bottom of the :file:`iodef.xml` file:

.. code-block:: xml

   <variable id="using_server" type="boolean">true</variable>

Make sure there is a copy of (or link to) your XIOS executable in the working directory, and
allocate processors to XIOS in your job submission script.
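
In a job script, an MPMD launch for detached mode might look like the following sketch;
the process counts and the ``xios_server.exe`` executable name are assumptions that
depend on your XIOS build and scheduler.

.. code-block:: sh

   # 28 NEMO processes plus 4 detached XIOS server processes (example counts)
   mpirun -n 28 ./nemo : -n 4 ./xios_server.exe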

Attached Mode
^^^^^^^^^^^^^

In attached mode XIOS runs on each of the cores used by NEMO.
This method is less efficient than the detached mode but can be more convenient for testing or
for small configurations.
To activate this mode, simply set ``using_server`` to ``false`` in the :file:`iodef.xml` file.
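
Mirroring the detached-mode example above, the attached-mode entry reads:

.. code-block:: xml

   <variable id="using_server" type="boolean">false</variable>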


More :file:`makenemo` options
-----------------------------

``makenemo`` has several other options that can control which source files are selected and
the operation of the build process itself.

.. rli:: https://forge.nemo-ocean.eu/nemo/nemo/-/raw/4.2.0/makenemo
   :caption: Output of ``makenemo -h``

These options can be useful for maintaining several code versions with only minor differences, but
they should be used sparingly.
Note, however, the ``-j`` option, which can be used routinely to speed up the build process.
For example:

.. code-block:: sh

        ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' -j 8

will run up to 8 compilation processes simultaneously.

Default behaviour
-----------------

On first use,
you need the ``-m`` option to specify the architecture configuration file
(compiler and its options, routines and libraries to include);
for subsequent compilations, it is assumed you will be using the same compiler.
If the ``-n`` option is not specified, the last compiled configuration will be used.

Tools used during the process
-----------------------------

The various bash scripts used by ``makenemo`` (for instance, to create the
``WORK`` directory) are located in the ``mk`` subdirectory. In most cases, there
should be no need for user intervention with these scripts. Occasionally,
incomplete builds can leave the environment in an indeterminate state. If
problems are experienced with subsequent attempts, then running:

.. code-block:: sh

        ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' clean

will prepare the directories for a fresh attempt and remove any intermediate files
that may be causing issues.

The reference configurations that may be provided to the ``-r`` argument of ``makenemo``
are listed in the :file:`cfgs/ref_cfgs.txt` file:

.. code-block:: sh

   AGRIF_DEMO OCE ICE TOP NST
   AMM12 OCE
   C1D_PAPA OCE
   GYRE_BFM OCE TOP
   GYRE_PISCES OCE TOP
   ORCA2_OFF_PISCES OCE TOP OFF
   ORCA2_OFF_TRC OCE TOP OFF
   ORCA2_SAS_ICE OCE ICE NST SAS
   ORCA2_ICE_PISCES OCE TOP ICE NST ABL
   ORCA2_ICE_ABL OCE ICE ABL
   SPITZ12 OCE ICE
   WED025 OCE ICE

User-added configurations will be listed in :file:`cfgs/work_cfgs.txt`.

*******************
Running the model
*******************

Once :file:`makenemo` has run successfully, a symbolic link to
the ``nemo`` executable is available in :file:`./cfgs/MY_CONFIG/EXP00`.
For the reference configurations, the :file:`EXP00` folder also contains the initial input files
(namelists, ``*.xml`` files for the IOs, ...).
If the configuration needs other input files, they have to be placed here.

.. code-block:: sh

   cd 'MY_CONFIG'/EXP00
   mpirun -n $NPROCS ./nemo   # $NPROCS is the number of processes
                              # mpirun is your MPI wrapper

If you are completely new to NEMO or this is your first experience of running MPI-based
ocean models, then you may wish to consider activating the ``SETTE`` testing framework. If 
your HPC environment matches or is similar to one of the supported environments then you
may find it simpler to activate one of the reference configurations this way since SETTE
includes example submission scripts for common batch environments and scripts to fetch 
all the input files for the more realistic reference configurations. See the 
:doc:`SETTE <sette>` section of this guide for more details.