Development workflow
Preliminary document on GitLab configuration for development: MP2021 Minutes
Branch naming
Proposed convention (similar to what was previously done on SVN):
- development:
  dev_"YEAR"_"STREAM"_"PI"_"NAME"
- bugfix:
  issue_"NB"_"YEAR"_"PI"
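For instance, creating and publishing branches that follow the proposed convention could look like the sketch below (the year, stream, PI and name values are hypothetical placeholders, not real developments):

```sh
# Hypothetical development branch ("YEAR"=2022, "STREAM"=HPC,
# "PI"=SMITH, "NAME"=loop_fusion):
git switch -c dev_2022_HPC_SMITH_loop_fusion
git push -u origin dev_2022_HPC_SMITH_loop_fusion

# Hypothetical bugfix branch ("NB"=42, "YEAR"=2022, "PI"=SMITH):
git switch -c issue_42_2022_SMITH
git push -u origin issue_42_2022_SMITH
```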
With our current GL CE edition, we don't have access to the push rules feature, which would make it possible to define pre-receive hooks applying rules on commit messages or branch naming. A branch can be created in various ways: directly on the remote repo from an issue or from the project dashboard's page, or simply by pushing a branch from the local to the remote repo. So compliance with the naming convention will depend on the goodwill of the developer when specifying the branch name or the issue title, as stated by the official GL doc:
"The branch name is based on an internal ID, and the issue title."
But I'm wary of applying a naming convention to bugfixes: do we want all issues to have the same pattern in their title? I don't think so.
Also, where is the added value if the developer spends as much time setting the branch name as coding? If we stick to the usual way, I think we should focus on providing relevant information in the issue, because the only way to recover a deleted branch is to identify the SHA1 checksum of its last commit, which will be quite challenging if you are not the owner or the reviewer of the branch (having had at some point a local copy of the dev branch).
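For the record, recovering a deleted branch could look like this, assuming your clone once held a copy of it (the branch name reuses the hypothetical example above):

```sh
# Find the SHA1 of the branch's last commit in your local reflog
# (only works if your clone once had the branch checked out or fetched):
git reflog | grep dev_2022_HPC_SMITH_loop_fusion

# Recreate the branch from that commit and push it back to the remote:
git branch dev_2022_HPC_SMITH_loop_fusion <SHA1>
git push -u origin dev_2022_HPC_SMITH_loop_fusion
```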
Open questions
Common issues
Do we need labels for issue triage, like the Trac ticket parameters Priority (low < normal < high) or Severity (minor < major < critical)?
Do we need a template for bug/enhancement MR? I would say "No" but that's just my view...
Workplan action
I decided in the template to remove the table summarising the development task, because the information is already present in the issue and MR parameters.
Also, it is more appropriate under GL to group all activities related to a given development under the issue, because the wiki has become more incidental to the code development and GL has replaced it with its "epic" feature, which is not available with our free licence.
CI
Pipelines have yet to be implemented for launching SETTE tests to check developments as new code is pushed to the repository.
@acoward has done the job for the user guide, and on my side I have coded something almost functional for the LaTeX compilation of the reference manuals. Regarding testing our model, @khutchinson and @mathiot have validated the possibility of initiating pipelines on the French national HPCC, by installing gitlab-runner in an Anaconda environment in the user's HOME and authorizing the connection between our forge and the HPCC front end. It remains that the aforementioned procedure is only valid if you maintain the connection for the time it takes to complete the tests (good luck with your job queue).
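For reference, the runner setup could look roughly like the sketch below; the forge URL and token are placeholders, and the actual procedure validated on the HPCC (gitlab-runner installed via Anaconda) may differ in the install step.

```sh
# Install the gitlab-runner binary under $HOME (generic fallback for the
# Anaconda-based install used in the validated procedure):
mkdir -p "$HOME/bin"
curl -L -o "$HOME/bin/gitlab-runner" \
  "https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64"
chmod +x "$HOME/bin/gitlab-runner"

# Register the runner against our forge (URL and token are placeholders;
# the token comes from the project's CI/CD settings):
"$HOME/bin/gitlab-runner" register \
  --url "https://our-forge.example.org/" \
  --registration-token "<project-registration-token>" \
  --executor shell \
  --description "HPCC front-end runner"

# Run the runner in the foreground: jobs are only picked up while this
# session stays alive, hence the connection caveat above.
"$HOME/bin/gitlab-runner" run
```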
I will investigate with @mpeltier if one could routinely submit pipeline jobs to internal runners within MOI.
Issue templates
Please find below my template drafts; I will add them to the development branch of an upcoming related MR so anyone will be able to edit them.
Bug
Context
Please provide information on how to reproduce the bug:
- Branches impacted: current release and/or main
- Reference configuration/test case (chosen or used as template): AMM, GYRE, ORCA, PAPA, or engines involved: ABL, NST, OFF, TOP, SAO, SAS, SI³, SWE
- Computing architecture: compiler, MPI & NetCDF libs (name and version)
- Dependencies: AGRIF, BFM, CICE, OASIS, XIOS (with known branch and hash/revision/version), ...
- Any other relevant information
Analysis
Please give your thoughts on the issue.
Fix
Please share your proven solution or your recommendation on how to proceed.
You can:
- 📋 Copy code blocks (```fortran ...```) or diff outputs (```diff ...```)
- 📎 Include files
- 🔗 Add external links
⚠ Please remove all unnecessary lines in this description, like the one you are reading in italic, before creating the issue. ⚠
Feature or Enhancement
Context
Please provide information on how to set up the modelling environment:
- Reference configuration/test case (to add, chosen or used as template)
- Modifications of versioned files: Fortran routines (*.[Ffh]90), namelists (namelist_*cfg), output settings (*.xml), ...
- Additional dependencies
- New datasets
- Any other relevant information
Proposal
Please share your ideas or your wishes about modelling improvements for the NEMO model.
In particular, express if you are willing to contribute personally to the implementation of this feature in NEMO.
You can:
- 📋 Copy code blocks (```fortran ...```) or diff outputs (```diff ...```)
- 📎 Include files
- 🔗 Add external links
⚠ Please remove all unnecessary lines in this description, like the one you are reading in italic, before creating the issue. ⚠
Task or Workplan action preview
Development description
Describe the goal and the methodology.
Add reference documents or publications if relevant.
Code implementation
Describe the flow chart of the changes in the code.
List the Fortran modules and subroutines to be created/edited/deleted.
Give a detailed list of new variables to be defined (including namelists); for each, give the chosen name and a description wrt coding rules.
Documentation updates
Using previous parts, define the main changes to be done in the doc (manuals, guide, web pages, ...).
Merge request template
Workplan action review
Tests
Once the development is done, the PI should complete the tests section below and then ask the reviewers to start their review.
This part should contain the detailed results of SETTE tests (restartability and reproducibility for each of the reference configurations), and the detailed results of restartability and reproducibility when the option is activated on the specified configurations used for this test.
Regular checks
- Can this change be shown to produce the expected impact (option activated)?
- Can this change be shown to have a null impact (option not activated)?
- Have the required bit comparability tests been run: are there no differences when activating the development?
- If some differences appear, is the reason for the change valid/understood?
- If some differences appear, is the impact as expected on model configurations?
- Is this change expected to preserve all diagnostics? If not, is the reason for the change valid/understood?
- Are there significant changes in run time/memory?
Review
A successful review is needed to schedule the merge of this development into the future NEMO release during the next Merge Party (usually in November).
Assessments
- Is the proposed methodology now implemented?
- Are the code changes in agreement with the flowchart defined at the preview step?
- Are the code changes in agreement with the list of routines and variables proposed at the preview step? If not, are the discrepancies acceptable?
- Is the in-line documentation accurate and sufficient?
- Do the code changes comply with NEMO coding standards?
- Is the development documented with sufficient detail for others to understand the impact of the change?
- Is the project doc (manual, guide, web, ...) now updated or completed following the proposed summary in the preview section?