Resolve: "Basic fixes for the trends diagnostics"
Closes #395.
Tests
Regular checks
- Can this change be shown to produce expected impact (option activated)?
- Can this change be shown to have a null impact (option not activated)?
- Have the required bit comparability tests been run: are there no differences when activating the development?
  - If some differences appear, is the reason for the change valid/understood?
  - If some differences appear, is the impact as expected on model configurations?
- Is this change expected to preserve all diagnostics?
  - If no, is the reason for the change valid/understood?
- Are there significant changes in run time/memory?
Note that SETTE does not use the trends diagnostics.
Other tests
A 10-day run of an ocean-only ORCA2 configuration was used for testing. This only checked that the diagnostics could be run successfully (they are not necessarily correct) and that they were restartable. I also did not check whether the results match those of some reference commit; there have been too many changes to the model since the diagnostics last worked.
As noted in #395, the wind-stress trends are not restartable when using RK3.
I did not check whether the passive tracer trends diagnostics work. The aim was to get the diagnostics to at least run in an ocean-only configuration; there will be more work on these diagnostics in the future.
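For context on the "option activated / not activated" checks above: the trends diagnostics are controlled through the `&namtrd` namelist block. The fragment below is an illustrative sketch of a typical NEMO 4.x `namtrd` setup, not the exact configuration used in the test run; switch names and defaults should be checked against the reference namelist of the target revision.

```fortran
!-----------------------------------------------------------------------
&namtrd        !   trend diagnostics                        (default: OFF)
!-----------------------------------------------------------------------
   ln_dyn_trd  = .true.    ! (T) 3D momentum trend output
   ln_tra_trd  = .true.    ! (T) 3D tracer trend output
   ln_KE_trd   = .false.   ! (T) 3D kinetic energy trends
/
```

Setting all of these switches to `.false.` corresponds to the "option not activated" case, which is expected to reproduce the reference results bit for bit.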
Review
Assessments
- Is the proposed methodology now implemented?
- Are the code changes in agreement with the flowchart defined at preview step?
- Are the code changes in agreement with the list of routines and variables as proposed at preview step?
  - If not, are the discrepancies acceptable?
- Is the in-line documentation accurate and sufficient?
- Do the code changes comply with NEMO coding standards?
- Is the development documented with sufficient details for others to understand the impact of the change?
- Is the project doc (manual, guide, web, ...) now updated or completed following the proposed summary in preview section?