
A guide to using SETTE

Overview

SETTE is a suite of bash scripts that automates the building, running and basic validation and verification of a broad spectrum of NEMO reference and test configurations. Because compilation and batch running environments differ wildly, automation is only achieved after some effort by the user for each new test platform. However, examples are provided for all the major systems used by the NEMO System team and many new platforms can be incorporated simply by adapting these templates.

When configured correctly, a single command will:

  • Compile multiple reference and test configurations
  • Run restartability and reproducibility tests with each configuration
  • Run additional conformance checks with any AGRIF-based configurations
  • Archive sufficient output measures for meaningful comparison between future tests at different revisions
  • On completion, run a secondary script to table the successes or failures of each test

Many namelist-controlled options can be varied using command line arguments to the main sette script, and test results can be compared across different invocations. Thus, by chaining sette invocations with different options, more complex and comprehensive testing can be achieved.

Installation

SETTE is provided within the main NEMO git repository and will be found in the subdirectory sette below the top-level of a checked out (cloned) copy (at the same level as src/ or cfgs/).

Initial setup

Assuming, for now, that you intend to run SETTE on one of the platforms already supported, there are only a few settings required for each individual user. These settings are all to be found in the sette/param.default file:

NEMO_VALIDATION_REF=/path/to/reference/sette/results
NEMO_REV_REF=0000
COMPILER=${SETTE_COMPILER:-XXXXXXXX}
BATCH_CMD=${SETTE_BATCH_CMD:-llsubmit}
BATCH_STAT=${SETTE_BATCH_STAT:-llq}
FORCING_DIR=${SETTE_FORCING_DIR:-$WORKDIR/FORCING}
NEMO_VALIDATION_DIR=${SETTE_NEMO_VALIDATION_DIR:-$MAIN_DIR}/NEMO_VALIDATION
JOB_PREFIX_NOMPMD=${SETTE_JOB_PREFIX_NOMPMD:-batch}
JOB_PREFIX_MPMD=${SETTE_JOB_PREFIX_MPMD:-batch-mpmd}

and each can be set either in a param.cfg file (created by copying param.default to param.cfg and editing) or through the corresponding environment variable. For example, changing the contents of a param.cfg to include:

NEMO_VALIDATION_REF=/work/n01/n01/acc/NEMO/2024/5.0.0/sette/NEMO_VALIDATION
NEMO_REV_REF=24335_c1604aac
COMPILER=X86_ARCHER2-Cray
BATCH_CMD=sbatch
BATCH_STAT=squeue
FORCING_DIR=/work/n01/n01/acc/FORCING/SETTE_inputs/5.0.0
NEMO_VALIDATION_DIR=/work/n01/n01/acc/NEMO/2024/5.0.0/sette/NEMO_VALIDATION
JOB_PREFIX_NOMPMD=batch
JOB_PREFIX_MPMD=batch

or settings:

export SETTE_COMPILER=X86_ARCHER2-Cray
export SETTE_BATCH_CMD=sbatch
export SETTE_BATCH_STAT=squeue
export SETTE_FORCING_DIR=/work/n01/n01/acc/FORCING/SETTE_inputs/5.0.0
export SETTE_NEMO_VALIDATION_DIR=/work/n01/n01/acc/NEMO/2024/5.0.0/sette/NEMO_VALIDATION
export SETTE_JOB_PREFIX_NOMPMD=batch
export SETTE_JOB_PREFIX_MPMD=batch

in your runtime environment will achieve the same result. The requirement to create a param.cfg from param.default for each installation protects against developers accidentally committing a local param.cfg to the main repository. The param.default file should only be altered by SETTE developers.
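The ${VAR:-default} expansions shown above are what allow the environment to override param.default: an exported variable wins, otherwise the hard-wired fallback applies. A minimal bash illustration of this precedence (the values here are illustrative):

```shell
# unset, so the hard-wired fallback applies (placeholder value from param.default)
unset SETTE_COMPILER
COMPILER=${SETTE_COMPILER:-XXXXXXXX}
echo "$COMPILER"                    # XXXXXXXX
# an exported environment variable takes precedence over the fallback
export SETTE_COMPILER=X86_ARCHER2-Cray
COMPILER=${SETTE_COMPILER:-XXXXXXXX}
echo "$COMPILER"                    # X86_ARCHER2-Cray
```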

Note

All of these settings have a corresponding environment variable apart from NEMO_VALIDATION_REF and NEMO_REV_REF, which do not yet have an equivalent - TO BE FIXED

The purposes of these settings should be clear:

  • NEMO_VALIDATION_REF points to a SETTE-generated directory of previously archived results to be used, optionally, as a reference set.
  • NEMO_REV_REF defines the reference tag (since raw git commit hashes do not list chronologically, they are prepended with a string constructed from the year and year-day).
  • COMPILER names the architecture file to be used to compile NEMO.
  • BATCH_CMD names the command used to submit jobs to the batch system.
  • BATCH_STAT names the command used to query the batch system and return a list of queued and running jobs.
  • FORCING_DIR points to a directory containing the input files required by the reference configurations. Details of how to obtain the files with which to populate this directory are provided in the Obtaining configuration input files section. Note that this directory must be in a part of the filesystem that is visible to the back-end compute nodes.
  • NEMO_VALIDATION_DIR points to a directory under which SETTE will archive its results.
  • JOB_PREFIX_NOMPMD names a prefix for the template batch file used when running in SPMD mode (e.g. with attached XIOS servers).
  • JOB_PREFIX_MPMD names an alternative prefix for the template batch file used when running in MPMD mode (i.e. with detached XIOS servers). This isn't necessarily different from the SPMD setting since some of the templates provided are written to handle both modes. See the Template batch files section for details.

Obtaining configuration input files

Many of the reference configurations require domain files, initial conditions and surface forcing fields. The exceptions are the GYRE_PISCES configurations and the simpler test cases. It is possible to limit SETTE to a subset of tests to avoid the need for downloading data files, but far less of the code will be covered by the tests in this case. All the required files are available from the SETTE inputs site.

The entire set of inputs can be downloaded using the ./sette_fetch_inputs.sh script which uses wget to retrieve the files and populate the FORCING_DIR directory. I.e.:

sette_fetch_inputs.sh -h
sette_fetch_inputs.sh :
     Fetch 5.0.0 input files from remote store

Template batch files

If you have previously compiled NEMO successfully on your test platform then you can have confidence that providing the same environment and arch file to SETTE will allow SETTE to compile successfully (barring any compile-time bugs in previously untested code). However, SETTE also needs to configure and run a series of batch jobs with varying resource requirements. To do this, you must provide SETTE with a means to generate valid job submission scripts. There are essentially two ways of doing this:

  • Provide a template batch file with known strings that SETTE can replace with settings based on the needs of each job.
  • Provide an external script that can accept those settings and generate a new batch file.

If you are working on a test platform already supported by SETTE, a solution will already be in place and you can skip to the Component scripts section.

If you are commissioning a new platform then you will need to provide either a template batch file or an external generating script. Batch file templates are located in the sette/BATCH_TEMPLATE subdirectory. They are named with the COMPILER setting prefixed by the JOB_PREFIX_NOMPMD (or JOB_PREFIX_MPMD) setting and separated by a hyphen. To use the first method, the batch template should be a version of a working submission script with the following strings in place of their corresponding numerical values:

NODES                  The total number of nodes required
TOTAL_NPROCS           The total number of cores required
NPROCS                 The number of ocean cores
NXIOPROCS              The number of XIOS cores
MPI_FLAG               A logical to declare if MPI is being used
DEF_EXE_DIR            The test execution directory
                       Paths and names that have to be passed through
                       to the post-run tidy-up script:
DEF_SETTE_DIR
DEF_INPUT_DIR
DEF_CONFIG_DIR
DEF_TOOLS_DIR
DEF_NEMO_VALIDATION
DEF_NEW_CONF
DEF_CMP_NAM
DEF_TEST_NAME

Not all of these need to be used, and if your particular system needs additional information then this can be added as a case statement (dependent on COMPILER) in the prepare_job.sh script.

An example template file is given in the sette_batch_template file:

cat BATCH_TEMPLATE/sette_batch_template
#!/bin/bash
#!
# @ job_name = MPI_config
# standard output file
# @ output = $(job_name).$(jobid)
# standard error file
# @ error =  $(job_name).$(jobid)
# job type
# @ job_type = parallel
# Number of procs
# @ total_tasks = NPROCS
# time
# @ wall_clock_limit = 0:30:00
# @ queue

#
# Test specific settings. Do not hand edit these lines; the prepare_job.sh script will set these
# (via sed operating on this template job file).
#
  OCEANCORES=NPROCS
  export SETTE_DIR=DEF_SETTE_DIR

###############################################################
#
# set up mpp computing environment
#
# Local settings for machine IBM Power6 (VARGAS at IDRIS France)
#
export MPIRUN="mpiexec -n $OCEANCORES"

#
# load sette functions (only post_test_tidyup needed)
#
  . ${SETTE_DIR}/all_functions.sh

# Do not remove or change the following comment line
# BODY


#
# These variables are needed by post_test_tidyup function in all_functions.sh
#
  export EXE_DIR=DEF_EXE_DIR
  export INPUT_DIR=DEF_INPUT_DIR
  export CONFIG_DIR=DEF_CONFIG_DIR
  export TOOLS_DIR=DEF_TOOLS_DIR
  export NEMO_VALIDATION_DIR=DEF_NEMO_VALIDATION
  export NEW_CONF=DEF_NEW_CONF
  export CMP_NAM=DEF_CMP_NAM
  export TEST_NAME=DEF_TEST_NAME
#
# end of set up


###############################################################
#
# change to the working directory
#
cd ${EXE_DIR}

  echo Running on host `hostname`
  echo Time is `date`
  echo Directory is `pwd`
#
#  Run the parallel MPI executable
#
  if [ MPI_FLAG == "yes" ]; then
     echo "Running time ${MPIRUN} ./nemo"
     time ${MPIRUN} ./nemo
  else
     echo "Running time ./nemo"
     time ./nemo
  fi


#
  post_test_tidyup

# END_BODY
# Do not remove or change the previous comment line

  exit

Batch commands and environments are likely to differ on every test platform, so some effort may be required to produce an equivalent template for new platforms.
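The placeholder substitution that prepare_job.sh performs on such templates can be sketched with a self-contained example (the template lines and values below are illustrative, not the real template):

```shell
# a toy template carrying SETTE-style placeholder strings (names from the table above)
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
#SBATCH --nodes=NODES
#SBATCH --ntasks=TOTAL_NPROCS
OCEANCORES=NPROCS
export TEST_NAME=DEF_TEST_NAME
EOF
# prepare_job.sh performs essentially this substitution for each job it creates;
# note TOTAL_NPROCS must be replaced before the shorter NPROCS to avoid a partial match
sed -e 's/TOTAL_NPROCS/40/' -e 's/NPROCS/32/' -e 's/NODES/1/' \
    -e 's/DEF_TEST_NAME/LONG/' "$tmpl" > "$tmpl.job"
cat "$tmpl.job"
```

Running this prints a complete job header with the placeholders resolved (nodes=1, ntasks=40, OCEANCORES=32, TEST_NAME=LONG).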

For cases where the calculation and declaration of resources is more complex (for example, in heterogeneous computing environments requiring het-job declarations), it may be easier to provide an external script to generate the job script. An example is included in the case of X86_ARCHER2-Cray where the batch template is simply a placeholder:

#!/bin/bash
#
# A batch script will be generated using:
# /work/n01/shared/acc/mkslurm_settejob -S $NXIO_PROC -s 8 -m 4 -C $NB_PROC -g 2 -a n01-CLASS -j sette_job -t 20:00 > ${SETTE_DIR}/job_batch_template
# by prepare_job.sh
#

and the suggested script is executed by prepare_job.sh instead of editing the template:

case ${COMPILER} in
   X64_MOBILIS*)
     .
     .
     ;;
   X86_ARCHER2*)
     MK_TEMPLATE=$( /work/n01/shared/acc/mkslurm_settejob_4.2 -S $NXIO_PROC -s 8 -m 4 -C $NB_PROC -g 2 -a n01-CLASS -j sette_job -t 20:00 > ${SETTE_DIR}/job_batch_template )
     ;;

Any such solutions should be fed back to the system team for incorporation into future releases.

Component scripts

SETTE consists of a suite of scripts and settings files. A complete list is given here but basic use of SETTE only requires familiarisation with the first two listed:

  • User scripts and settings
    • param.cfg
    • sette.sh
      • sette_reference-configurations.sh
      • sette_test-cases.sh
    • sette_rpt.sh
    • sette_eval.sh
    • sette_fetch_inputs.sh
    • sette_list_avail_cfg.sh
    • sette_list_avail_rev.sh
  • Internal scripts and settings
    • all_functions.sh
    • fcm_job.sh
    • prepare_exe_dir.sh
    • prepare_job.sh
    • input_<CONFIG>.cfg

Usage of main scripts

The purpose and contents of param.cfg were explained in the Initial setup section. sette.sh is the main utility script that, when executed without any arguments, will compile, configure and submit a pre-set series of tests. After all the tests have completed, a basic report is presented to the user which lists the various successes or failures.

./sette.sh

<lots of progress information and compilation stages followed by:>


SETTE validation report generated for :

       trunk @ c1604aac (with local changes)

       on X86_ARCHER2-Cray arch file


!!---------------1st pass------------------!!

   !----restart----!
GYRE_PISCES                  run.stat    restartability  passed :  24335_c1604aac+
GYRE_PISCES                  tracer.stat restartability  passed :  24335_c1604aac+
ORCA2_ICE_PISCES             run.stat    restartability  passed :  24335_c1604aac+
ORCA2_ICE_PISCES             tracer.stat restartability  passed :  24335_c1604aac+
ORCA2_OFF_PISCES             tracer.stat restartability  passed :  24335_c1604aac+
AMM12                        run.stat    restartability  passed :  24335_c1604aac+
ORCA2_SAS_ICE                run.stat    restartability  passed :  24335_c1604aac+
AGRIF_DEMO                   run.stat    restartability  passed :  24335_c1604aac+
AGRIF_DEMO                   tracer.stat restartability  passed :  24335_c1604aac+
WED025                       run.stat    restartability  passed :  24335_c1604aac+
ISOMIP+                      run.stat    restartability  passed :  24335_c1604aac+
OVERFLOW                     run.stat    restartability  passed :  24335_c1604aac+
LOCK_EXCHANGE                run.stat    restartability  passed :  24335_c1604aac+
VORTEX                       run.stat    restartability  passed :  24335_c1604aac+
ICE_AGRIF                    run.stat    restartability  passed :  24335_c1604aac+
SWG                          run.stat    restartability  passed :  24335_c1604aac+

   !----repro----!
GYRE_PISCES                  run.stat    reproducibility passed :  24335_c1604aac+
GYRE_PISCES                  tracer.stat reproducibility passed :  24335_c1604aac+
ORCA2_ICE_PISCES             run.stat    reproducibility passed :  24335_c1604aac+
ORCA2_ICE_PISCES             tracer.stat reproducibility passed :  24335_c1604aac+
ORCA2_OFF_PISCES             tracer.stat reproducibility passed :  24335_c1604aac+
AMM12                        run.stat    reproducibility passed :  24335_c1604aac+
ORCA2_SAS_ICE                run.stat    reproducibility passed :  24335_c1604aac+
ORCA2_ICE_OBS                run.stat    reproducibility passed :  24335_c1604aac+
AGRIF_DEMO                   run.stat    reproducibility passed :  24335_c1604aac+
AGRIF_DEMO                   tracer.stat reproducibility passed :  24335_c1604aac+
WED025                       run.stat    reproducibility passed :  24335_c1604aac+
ISOMIP+                      run.stat    reproducibility passed :  24335_c1604aac+
VORTEX                       run.stat    reproducibility passed :  24335_c1604aac+
ICE_AGRIF                    run.stat    reproducibility passed :  24335_c1604aac+
SWG                          run.stat    reproducibility passed :  24335_c1604aac+

   !----agrif check----!
ORCA2 AGRIF vs ORCA2 NOAGRIF run.stat    unchanged  -    passed :  24335_c1604aac+ 15541

   !----result comparison check----!

check result differences between :
VALID directory : /work/n01/n01/acc/NEMO/4.2.0/sette/NEMO_VALIDATION/MAIN at rev 24335_c1604aac+
and
REFERENCE directory : /work/n01/n01/acc/NEMO/4.2.0/sette/NEMO_VALIDATION/MAIN at rev 15150

GYRE_PISCES           run.stat    files are identical
GYRE_PISCES           tracer.stat files are identical
ORCA2_ICE_PISCES      run.stat    files are DIFFERENT (results are different after  1  time steps)
ORCA2_ICE_PISCES      tracer.stat files are DIFFERENT (results are different after  1  time steps)
ORCA2_OFF_PISCES      tracer.stat files are DIFFERENT (results are different after  1  time steps)
AMM12                 run.stat    files are DIFFERENT (results are different after  1  time steps)
ORCA2_SAS_ICE         run.stat    files are DIFFERENT (results are different after  3  time steps)
AGRIF_DEMO            run.stat    files are DIFFERENT (results are different after  1  time steps)
AGRIF_DEMO            tracer.stat files are DIFFERENT (results are different after  1  time steps)
WED025                run.stat    files are DIFFERENT (results are different after  1  time steps)
ISOMIP+               run.stat    files are DIFFERENT (results are different after  1  time steps)
VORTEX                run.stat    files are DIFFERENT (results are different after  1  time steps)
ICE_AGRIF             run.stat    files are DIFFERENT (results are different after  2  time steps)
OVERFLOW              run.stat    files are identical
LOCK_EXCHANGE         run.stat    files are identical
SWG                   run.stat    files are identical

Report timing differences between REFERENCE and VALID (if available) :
GYRE_PISCES              ref. time:     22.805     cur. time:     40.126         diff.:     17.321
ORCA2_ICE_PISCES         ref. time:    133.614     cur. time:     63.484         diff.:     -70.13
ORCA2_OFF_PISCES         ref. time:    172.469     cur. time:    471.569         diff.:      299.1
AMM12                    ref. time:    139.546     cur. time:    222.412         diff.:     82.866
WED025                   ref. time:    462.350     cur. time:    913.722         diff.:    451.372
ISOMIP+                  ref. time:     33.319     cur. time:     69.091         diff.:     35.772
OVERFLOW                 ref. time:     16.864     cur. time:     35.474         diff.:      18.61
LOCK_EXCHANGE            ref. time:     11.912     cur. time:     13.802         diff.:       1.89

The report shows the result of restartability and reproducibility tests on the whole range of test configurations. Passing these tests is a mandatory requirement for any official release of NEMO. Note that these tests are not sufficient to guarantee restartability and reproducibility in all user-defined configurations, and anyone running configurations that are not close variants of the reference or test configurations should conduct their own tests.

This report ends by comparing the latest results against a reference set (as defined in param.cfg). In this case the comparison is between revisions that were known to introduce numerical differences and between runs with different levels of compiler optimisation, which the comparison confirms. The report is most useful when numerical results are not expected to change between revisions and when changes are expected to provide a performance benefit. It is not shown here but, on many terminals, test failures or performance drops are presented in red to highlight areas of concern.

The set of tests executed by default are set in param.cfg in the TEST_CONFIGS environment variable:

grep TEST_CONFIGS= param.cfg
export TEST_CONFIGS=(${SETTE_TEST_CONFIGS[@]:-"ORCA2_ICE_PISCES ORCA2_OFF_PISCES AMM12 AGRIF WED025 GYRE_PISCES SAS ORCA2_ICE_OBS SWG ICE_AGRIF OVERFLOW LOCK_EXCHANGE VORTEX ISOMIP+"})

Note this set can be overridden by externally setting the SETTE_TEST_CONFIGS environment variable, but individual tests or sub-sets of tests can also be selected via arguments to the -n option of sette.sh. The latter is more explicit and has been the recommended method since release 4.2.
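The array expansion in the TEST_CONFIGS assignment is what makes the external override work: if SETTE_TEST_CONFIGS is exported, its space-separated value replaces the built-in list. A minimal bash sketch of this behaviour (the configuration names are just examples):

```shell
# with SETTE_TEST_CONFIGS unset, the quoted default is used and word-split into an array
unset SETTE_TEST_CONFIGS
TEST_CONFIGS=(${SETTE_TEST_CONFIGS[@]:-"GYRE_PISCES OVERFLOW AMM12"})
echo "${#TEST_CONFIGS[@]} tests"        # 3 tests
# an exported value overrides the default list
export SETTE_TEST_CONFIGS="OVERFLOW"
TEST_CONFIGS=(${SETTE_TEST_CONFIGS[@]:-"GYRE_PISCES OVERFLOW AMM12"})
echo "${TEST_CONFIGS[@]}"               # OVERFLOW
```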

Other options to sette.sh can be listed using the -h argument:

./sette.sh -h
sette.sh with no arguments (in this case all configuration will be tested with default options)
  -T to set ln_timing false for configurations (default: true)
  -t set ln_tile false in all tests that support it (default: true)
  -e set nn_hls=3 but it is not yet supported (default: nn_hls=2)
  -i set ln_icebergs false (default: true)
  -a set ln_abl true (default: false)
  -C set nn_comm=1 (default: nn_comm=2 ==> use MPI3 collective comms)
  -N set ln_nnogather false for ORCA2 configurations (default: true)
  -q to remove the key_qco key (default: added)
  -X to remove the key_xios key (default: added)
  -Q to remove the key_RK3 key
  -A to run tests in attached (SPMD) mode (default: MPMD with key_xios)
  -n "CFG1_to_test CFG2_to_test ..." to test some specific configurations
  -x "TEST_type TEST_type ..." to specify particular type(s) of test(s) to run after compilation
                TEST_type choices are: COMPILE RESTART REPRO CORRUPT PHYOPTS TRANSFORM
  -v "subdir" optional validation record subdirectory to be created below NEMO_VALIDATION_DIR
  -g "group_suffix" single character suffix to be appended to the standard _ST suffix used
                    for SETTE-built configurations (needed if sette.sh invocations may overlap)
  -r to execute without waiting to run sette_rpt.sh at the end (useful for chaining sette.sh invocations)
  -y to perform a dryrun to simply report what settings will be used
  -b to compile Nemo with debug options (only if %DEBUG_FCFLAGS if defined in your arch file)
  -c to clean each configuration
  -s to synchronise the sette MY_SRC and EXP00 with the reference MY_SRC and EXPREF
  -w to wait for Sette jobs to finish
  -r to print Sette report after Sette jobs completion
  -u to run sette.sh without any user interaction. This means no checks on creating
            directories etc. i.e. no safety net!

The first 11 options are switches to toggle commonly used namelist options or compile time keys. The default setting is to have the option set .true. or the corresponding key added. Thus, for example, running:

./sette.sh -i -Q

will run the full suite without icebergs (i.e. ln_icebergs=.false.) and without key_RK3 added at compile time. Some of these options (-i, for example) will only affect those configurations that activate the related code option by default. The -n option allows a sub-set of tests to be named on the command line. Testing will be restricted to the named tests; multiple tests can be listed as a quoted, space-separated string. For example:

./sette.sh
 and
./sette.sh -n "ORCA2_ICE_PISCES ORCA2_OFF_PISCES AMM12 AGRIF_DEMO WED025 GYRE_PISCES ORCA2_SAS_ICE ORCA2_ICE_OBS C1D_PAPA SWG ICE_AGRIF OVERFLOW LOCK_EXCHANGE VORTEX ISOMIP+ IWAVE"

are equivalent. As a point of interest, it is good practice to list tests in decreasing order of their expected execution time. This allows compilation time to overlap run time as much as possible and should minimise time to completion.

The default operation is to perform all RESTART, REPRO and (AGRIF-related) CORRUPT checks. The checks performed can be limited to a sub-set of these by supplying arguments to the -x option. The combination of -n and -x is particularly useful when working to solve a specific issue with a single configuration. There is also a PHYOPTS check which is not currently used but has been implemented, as a demonstration, to run the OVERFLOW and LOCK_EXCHANGE test cases with a selection of different physical schemes. Any other string supplied as an argument to the -x option will force a compilation only. This is useful for quickly checking for compile-time errors. Although any non-recognised string will trigger this, it is good practice to be explicit, i.e.:

./sette.sh -x COMPILE

The -s option forces a synchronisation of MY_SRC contents and input files from the EXPREF directory of the reference configuration on which the test is based. This is useful if you know these contents have changed but do not wish to force a complete rebuild. The -c option forces a clean rebuild from scratch of the test configuration. The *_ST? directory will be deleted and recreated before a complete compilation and run is performed. Finally, the -u option disables any interaction with the user. By default, sette.sh will request confirmation from the user for actions such as creating the base NEMO_VALIDATION_DIR or disabling options when incompatible choices are selected. This interaction can be problematic with continuous integration systems, and the -u option should always be used in such applications. It is then the responsibility of the user to ensure that the correct information is provided to sette.sh.

The test results archive

This latest version of SETTE (released with NEMO v4.2) changes the organisation of the records kept under the NEMO_VALIDATION_DIR. This is partly to accommodate the fact that the new command-line options provide so much flexibility for running a series of tests on any one revision of the code with different options. To facilitate such testing, two new command-line options have been introduced: -v <subdir> and -g <sub-group_suffix>.

-v names a sub-directory to create (or re-use) beneath NEMO_VALIDATION_DIR as the root of the records tree. If the -v option is not used then the root of the directory tree will be the branch name as returned by the:

git branch --show-current

command (or MAIN if this command fails for any reason).
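The fallback behaviour can be sketched as follows: run from a directory that is not a git checkout, the branch query fails and MAIN is used as the records root.

```shell
# move to an empty temporary directory, i.e. outside any git checkout
workdir=$(mktemp -d)
cd "$workdir"
# the query fails here (no repository), leaving $branch empty
branch=$(git branch --show-current 2>/dev/null)
# fall back to MAIN whenever no branch name was obtained
echo "records root: ${branch:-MAIN}"    # records root: MAIN
```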

-g names a single character suffix that will be appended to the traditional _ST suffix that is added to the configurations built for testing. I.e.:

./sette.sh -e -v HALO1 -g 0

will compile and run the full suite with nn_hls=1. The configurations will be constructed with names such as: GYRE_PISCES_ST0 and the directory structure eventually populated with the test records would be similar to:

./NEMO_VALIDATION
   |-HALO1
     |---X86_ARCHER2-Cray
       |-----21334_a2c5986+
         |-------AGRIF_DEMO
           |---------LONG
           |---------ORCA2
           |---------REPRO_2_8
           |---------REPRO_4_4
           |---------SHORT
         |-------AGRIF_DEMO_NOAGRIF
           |---------ORCA2
         |-------AMM12
           |---------LONG
           |---------REPRO_4_8
           |---------REPRO_8_4
           |---------SHORT
         |-------GYRE_PISCES
           |---------LONG
           |---------REPRO_2_4
           |---------REPRO_4_2
           |---------SHORT
.
.
.

Use of the -g option isn't always necessary. In this case, for example, -e only triggers a namelist change so there is no difference in the compiled code between this set and the default set (which will use names such as GYRE_PISCES_ST and would have records stored under the branch name). Thus, running tests sequentially such as:

./sette.sh
./sette.sh -e -v HALO1

will reuse the same run-time directories and only require one set of compilations. However, it will not be possible to diagnose any issues with the first set after the second has run. The use of -g is recommended when running multiple tests with different compilation keys since future tests with updated code may only need to recompile changed modules and dependencies.

Note also that the move from subversion to git forces a change in the revision tag used to identify the code base being tested. Whereas, with subversion, the revision number was an integer that increased monotonically in the time-order of commits, git identifies its commits with long, hexadecimal hash strings that are not necessarily time-ordered when listed alphanumerically. The:

git rev-list --abbrev-commit -1 origin

command can be used to obtain an abbreviated commit hash that still provides a unique identifier, but extra steps are required to provide a revision tag that retains some indication of time-order. The current solution is to prepend the abbreviated hash string with a representation of the date on which the commit was made. This information can be obtained from the git log response as follows:

git log -1 | grep Date | sed -e 's/.*Date: *//' -e's/ +.*$//'
Mon Dec 6 15:24:36 2021

Condensing this string into something usable requires the unix date command, whose syntax varies between flavours of the OS. Two examples are currently supported: one for MacOSX and one, more general, POSIX variety. More can be added in param.cfg as required. Each supported style is tested in param.cfg to determine which form to use:

# command for converting date (from git log -1) into 2-digit year + yearday
#
date -j -f "%a %b %d %H:%M:%S %Y" "Tue Nov 30 17:10:53 2021" +"%y%j" >& /dev/null
if [ $? == 0 ] ; then DATE_CONV='date -j -f "%a %b %d %H:%M:%S %Y" ' ;fi
#
date --date="Tue Nov 30 17:10:53 2021" +"%y%j" >& /dev/null
if [ $? == 0 ] ; then DATE_CONV='date --date=' ;fi

In both cases, the output format is a 2-digit year + a 3-digit year-day resulting in a 5-digit string to prepend to the short hash. There is still scope for slight mis-ordering between commits committed on the same day but this compromise avoids over-long revision tags.
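Putting these pieces together, the revision tag construction can be sketched as follows (GNU date variant shown; the short hash is illustrative):

```shell
# convert the commit date into the 2-digit year + 3-digit year-day prefix
ydate=$(date --date="Tue Nov 30 17:10:53 2021" +"%y%j")
# prepend it to the abbreviated commit hash to form the revision tag
rev_tag="${ydate}_a2c5986"
echo "$rev_tag"                     # 21334_a2c5986
```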

In the example directory listing above the revision tag is shown as: 21334_a2c5986+ which displays a typical 5-digit date and 7-digit short hash separated by an underscore. In this case a + has been appended because sette has detected local changes to the base commit. The output of:

git status --short -uno

is used to check for local modifications when making this decision.
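The decision can be sketched with a throwaway repository (the tag value is illustrative): -uno ignores untracked files, so only modifications to tracked files earn the + suffix.

```shell
# build a throwaway repository to demonstrate the '+' (locally modified) suffix
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=sette -c user.email=sette@example.com commit -q --allow-empty -m "init"
tag="21334_a2c5986"
# a clean checkout produces no -uno output, so no suffix is added
if [ -n "$(git status --short -uno)" ]; then tag="${tag}+"; fi
clean_tag=$tag
# commit a file, then modify it in place to create a local change
echo demo > tracked
git add tracked
git -c user.name=sette -c user.email=sette@example.com commit -q -m "add tracked"
echo edit >> tracked
# the modified tracked file now shows up and triggers the suffix
if [ -n "$(git status --short -uno)" ]; then tag="${tag}+"; fi
echo "$clean_tag vs $tag"           # 21334_a2c5986 vs 21334_a2c5986+
```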