\documentclass[../main/NEMO_manual]{subfiles}
\begin{document}
\chapter{Surface Boundary Condition (SBC, SAS, ISF, ICB, TDE)}
\label{chap:SBC}
\chaptertoc
\paragraph{Changes record} ~\\
{\footnotesize
\begin{tabularx}{\textwidth}{l||X|X}
Release & Author(s) & Modifications \\
\hline
{\em next} & {\em A. Moulin, E. Clementi} & {\em Update of \autoref{sec:SBC_wave}}\\[2mm]
{\em next} & {\em Simon M{\"u}ller} & {\em Update of \autoref{sec:SBC_TDE}; revision of \autoref{subsec:SBC_fwb}}\\[2mm]
{\em next} & {\em Pierre Mathiot} & {\em Update of the ice shelf section (2019 developments)}\\[2mm]
{\em 4.0} & {\em ...} & {\em ...} \\
{\em 3.6} & {\em ...} & {\em ...} \\
{\em 3.4} & {\em ...} & {\em ...} \\
{\em <=3.4} & {\em ...} & {\em ...}
\end{tabularx}
}
\clearpage
\begin{listing}
\nlst{namsbc}
\caption{\forcode{&namsbc}}
\label{lst:namsbc}
\end{listing}
The ocean needs seven fields as surface boundary conditions:
\begin{itemize}
\item the two components of the surface ocean stress $\left( {\tau_u \;,\;\tau_v} \right)$
\item the incoming solar and non solar heat fluxes $\left( {Q_{ns} \;,\;Q_{sr} } \right)$
\item the surface freshwater budget $\left( {\textit{emp}} \right)$
\item the surface salt flux associated with freezing/melting of seawater $\left( {\textit{sfx}} \right)$
\item the atmospheric pressure at the ocean surface $\left( p_a \right)$
\end{itemize}
Five different ways are available to provide these fields to the ocean. They are controlled by
namelist \nam{sbc}{sbc} variables:
\begin{itemize}
\item a bulk formulation (\np[=.true.]{ln_blk}{ln\_blk}), featuring a selection of five bulk parameterization algorithms,
\item an atmospheric boundary layer model (\np[=.true.]{ln_abl}{ln\_abl}) associated with the bulk formulation,
\item a flux formulation (\np[=.true.]{ln_flx}{ln\_flx}),
\item a coupled or mixed forced/coupled formulation (exchanges with an atmospheric model via the OASIS coupler)
(\np{ln_cpl}{ln\_cpl} or \np[=.true.]{ln_mixcpl}{ln\_mixcpl}),
\item a user defined formulation (\np[=.true.]{ln_usr}{ln\_usr}).
\end{itemize}
The frequency at which the forcing fields have to be updated is given by the \np{nn_fsbc}{nn\_fsbc} namelist parameter.
When the fields are supplied from data files (bulk, abl, flux and mixed formulations),
the input fields do not need to be supplied on the model grid.
Instead, a file of coordinates and weights can be supplied to map the data from the input field's grid to
the model points (so-called "Interpolation on the Fly", see \autoref{subsec:SBC_iof}).
If the "Interpolation on the Fly" option is used, input data belonging to land points (in the native grid)
should be masked or filled to avoid spurious results near the coasts, as
large sea-land gradients characterize most of the atmospheric variables.
In addition, the resulting fields can be further modified using several namelist options.
These options control:
\begin{itemize}
\item the rotation of vector components supplied relative to an east-north coordinate system onto
the local grid directions in the model,
\item the use of a land/sea mask for input fields (\np[=.true.]{nn_lsm}{nn\_lsm}),
\item the addition of a surface restoring term to observed SST and/or SSS (\np[=.true.]{ln_ssr}{ln\_ssr}),
\item the modification of fluxes below ice-covered areas (using climatological ice-cover or a sea-ice model)
(\np[=0..3]{nn_ice}{nn\_ice}),
\item the addition of river runoffs as surface freshwater fluxes or lateral inflow (\np[=.true.]{ln_rnf}{ln\_rnf}),
\item the addition of a freshwater flux adjustment in order to avoid a mean sea-level drift
(\np[=0..2]{nn_fwb}{nn\_fwb}),
\item the transformation of the solar radiation (if provided as daily mean) into an analytical diurnal cycle
(\np[=.true.]{ln_dm2dc}{ln\_dm2dc}),
\item the activation of wave effects from an external wave model (\np[=.true.]{ln_wave}{ln\_wave}),
\item the light penetration in the ocean (\np[=.true.]{ln_traqsr}{ln\_traqsr} with \nam{tra_qsr}{tra\_qsr}),
\item the atmospheric surface pressure gradient effect on ocean and ice dynamics (\np[=.true.]{ln_apr_dyn}{ln\_apr\_dyn} with namelist \nam{sbc_apr}{sbc\_apr}),
\item the effect of sea-ice pressure on the ocean (\np[=.true.]{ln_ice_embd}{ln\_ice\_embd}).
\end{itemize}
In this chapter, we first discuss where the surface boundary conditions appear in the model equations.
Then we present the five ways of providing the surface boundary conditions,
followed by the description of the atmospheric pressure and the river runoff.
Next, the scheme for interpolation on the fly is described.
Finally, the different options that further modify the fluxes applied to the ocean are discussed.
One of these is modification by icebergs (see \autoref{sec:SBC_ICB_icebergs}),
which act as drifting sources of fresh water.
%% =================================================================================================
\section{Surface boundary condition for the ocean}
\label{sec:SBC_ocean}
The surface ocean stress is the stress exerted by the wind and the sea-ice on the ocean.
It is applied in the \mdl{dynzdf} module as a surface boundary condition of the computation of
the momentum vertical mixing trend (see \autoref{eq:DYN_zdf_sbc} in \autoref{sec:DYN_zdf}).
As such, it has to be provided as a 2D vector interpolated onto the horizontal velocity ocean mesh,
\ie\ resolved onto the model (\textbf{i},\textbf{j}) direction at $u$- and $v$-points.
The surface heat flux is decomposed into two parts, a non solar and a solar heat flux,
$Q_{ns}$ and $Q_{sr}$, respectively.
The former is the non penetrative part of the heat flux
(\ie\ the sum of sensible, latent and long wave heat fluxes plus
the heat content of the mass exchange between the ocean and sea-ice).
It is applied in the \mdl{trasbc} module as a surface boundary condition trend of
the first level temperature time evolution equation
(see \autoref{eq:TRA_sbc} and \autoref{eq:TRA_sbc_lin} in \autoref{subsec:TRA_sbc}).
The latter is the penetrative part of the heat flux.
It is applied as a 3D trend of the temperature equation (\mdl{traqsr} module) when
\np[=.true.]{ln_traqsr}{ln\_traqsr}.
The penetration of light inside the water column is generally described as a sum of decreasing exponentials
(see \autoref{subsec:TRA_qsr}).
The surface freshwater budget is provided by the \textit{emp} field.
It represents the mass flux exchanged with the atmosphere (evaporation minus precipitation) and
possibly with the sea-ice and ice shelves (freezing minus melting of ice).
It affects the ocean in two different ways:
$(i)$ it changes the volume of the ocean, and therefore appears in the sea surface height equation as %GS: autoref ssh equation to be added
a volume flux, and
$(ii)$ it changes the surface temperature and salinity through the heat and salt contents of
the mass exchanged with atmosphere, sea-ice and ice shelves.
%\colorbox{yellow}{Miss: }
%A extensive description of all namsbc namelist (parameter that have to be
%created!)
%Especially the \np{nn_fsbc}{nn\_fsbc}, the \mdl{sbc\_oce} module (fluxes + mean sst sss ssu
%ssv) \ie\ information required by flux computation or sea-ice
%\mdl{sbc\_oce} containt the definition in memory of the 7 fields (6+runoff), add
%a word on runoff: included in surface bc or add as lateral obc{\ldots}.
%Sbcmod manage the ``providing'' (fourniture) to the ocean the 7 fields
%Fluxes update only each nf\_sbc time step (namsbc) explain relation
%between nf\_sbc and nf\_ice, do we define nf\_blk??? ? only one
%nf\_sbc
%Explain here all the namlist namsbc variable{\ldots}.
% explain : use or not of surface currents
%\colorbox{yellow}{End Miss }
The ocean model provides, at each time step, to the surface module (\mdl{sbcmod})
the surface currents, temperature and salinity.
These variables are averaged over \np{nn_fsbc}{nn\_fsbc} time-steps (\autoref{tab:SBC_ssm}), and
these averaged fields are used to compute the surface fluxes at the frequency of \np{nn_fsbc}{nn\_fsbc} time-steps.
\begin{table}[tb]
\centering
\begin{tabular}{|l|l|l|l|}
\hline
Variable description & Model variable & Units & Point \\
\hline
i-component of the surface current & ssu\_m & $m.s^{-1}$ & U \\
\hline
j-component of the surface current & ssv\_m & $m.s^{-1}$ & V \\
\hline
Sea surface temperature & sst\_m & \r{}$K$ & T \\ \hline
Sea surface salinity & sss\_m & $psu$ & T \\ \hline
\end{tabular}
\caption[Ocean variables provided to the surface module]{
Ocean variables provided to the surface module (\texttt{SBC}).
The variables are averaged over \protect\np{nn_fsbc}{nn\_fsbc} time-steps,
\ie\ at the frequency of computation of the surface fluxes.}
\label{tab:SBC_ssm}
\end{table}
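The averaging described above can be sketched as follows. This is an illustration, not NEMO code: the helper name and the instantaneous SST values are invented, and only a single field is shown.

```python
# Sketch (not NEMO code): averaging a surface field over blocks of
# nn_fsbc time-steps, as done for the variables of the table above
# before they are passed to the surface module.

def average_surface_fields(series, nn_fsbc):
    """Return the mean of 'series' over each block of nn_fsbc time-steps,
    i.e. the averaged value seen by the surface module."""
    means = []
    for start in range(0, len(series), nn_fsbc):
        block = series[start:start + nn_fsbc]
        means.append(sum(block) / len(block))
    return means

# With nn_fsbc = 3, the fluxes are recomputed every third time-step,
# each time from the average of the last three instantaneous values.
sst_m = average_surface_fields([290.0, 290.3, 290.6, 291.0, 291.2, 291.4], 3)
```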
%\colorbox{yellow}{Penser a} mettre dans le restant l'info nn\_fsbc ET nn\_fsbc*rdt de sorte de reinitialiser la moyenne si on change la frequence ou le pdt
%% =================================================================================================
\section{Input data generic interface}
\label{sec:SBC_input}
A generic interface has been introduced to manage the way input data
(2D or 3D fields, like surface forcing or ocean T and S) are specified in \NEMO.
This task is achieved by \mdl{fldread}.
The module is designed with four main objectives in mind:
\begin{enumerate}
\item optionally provide a time interpolation of the input data every specified model time-step, whatever their input frequency is,
and according to the different calendars available in the model.
\item optionally provide an on-the-fly space interpolation from the native input data grid to the model grid.
\item make the run duration independent of the period covered by the input files.
\item provide a simple user interface and a rather simple developer interface by
limiting the amount of prerequisite information.
\end{enumerate}
As a result, the user only has to fill in, for each variable, a structure in the namelist file to
define the input data file and variable names, the frequency of the data (in hours or months),
whether it is climatological data or not, the period covered by the input file (one year, month, week or day),
and three additional parameters for the on-the-fly interpolation.
When adding a new input variable, the developer has to add the associated structure in the namelist,
read this information by mirroring the namelist read in \rou{sbc\_blk\_init} for example,
and simply call \rou{fld\_read} to obtain the desired input field at the model time-step and grid points.
The only constraints are that the input file is a NetCDF file, the file name follows a nomenclature
(see \autoref{subsec:SBC_fldread}), the period it covers is one year, month, week or day, and,
if on-the-fly interpolation is used, a file of weights must be supplied (see \autoref{subsec:SBC_iof}).
Note that when input data are archived on a disc that is directly accessible from the workspace where
the code is executed, the user can set \np{cn_dir}{cn\_dir} to the path leading to the data.
By default, the data are assumed to be in the same directory as the executable, so that cn\_dir='./'.
%% =================================================================================================
\subsection[Input data specification (\textit{fldread.F90})]{Input data specification (\protect\mdl{fldread})}
\label{subsec:SBC_fldread}
The structure associated with an input variable contains the following information:
\begin{forlines}
! file name ! frequency (hours) ! variable ! time interp. ! clim ! 'yearly'/ ! weights ! rotation ! land/sea mask !
! ! (if <0 months) ! name ! (logical) ! (T/F) ! 'monthly' ! filename ! pairing ! filename !
\end{forlines}
where
\begin{description}
\item [File name]: the stem name of the NetCDF file to be opened.
This stem will be completed automatically by the model, with the addition of a '.nc' at its end and
by date information and possibly a prefix (when using AGRIF).
\autoref{tab:SBC_fldread} provides the resulting file name in all possible cases according to
whether it is a climatological file or not, and to the open/close frequency (see below for definition).
\begin{table}[htbp]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
& daily or weekLLL & monthly & yearly \\
\hline
\texttt{clim=.false.} & fn\_yYYYYmMMdDD.nc & fn\_yYYYYmMM.nc & fn\_yYYYY.nc \\
\texttt{clim=.true.} & not possible & fn\_m??.nc & fn.nc \\
\hline
\end{tabular}
\caption[Naming nomenclature for climatological or interannual input file]{
Naming nomenclature for climatological or interannual input file,
as a function of the open/close frequency.
The stem name is assumed to be 'fn'.
For weekly files, the 'LLL' corresponds to the first three letters of the first day of the week
(\ie\ 'sun','sat','fri','thu','wed','tue','mon').
The 'YYYY', 'MM' and 'DD' should be replaced by the actual year/month/day,
always coded with 4 or 2 digits.
Note that (1) in mpp, if the file is split over each subdomain,
the suffix '.nc' is replaced by '\_PPPP.nc',
where 'PPPP' is the process number coded with 4 digits;
(2) when using AGRIF, the prefix '\_N' is added to files, where 'N' is the child grid number.
}
\label{tab:SBC_fldread}
\end{table}
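The nomenclature of \autoref{tab:SBC_fldread} can be sketched as a small helper. This is an illustration only (the function name is invented): the 'weekLLL' case is omitted for brevity, as are the MPP '\_PPPP' suffix and the AGRIF prefix.

```python
# Sketch (not NEMO code) of the file-naming nomenclature: the stem name
# is completed with date information and a '.nc' suffix, depending on the
# climatological flag and the open/close frequency.

def forcing_file_name(stem, clim, freq, year=0, month=0, day=0):
    """Expected NetCDF file name for 'daily', 'monthly' or 'yearly'
    open/close frequencies (weekly files omitted here)."""
    if clim:
        if freq == 'monthly':
            return f"{stem}_m{month:02d}.nc"
        if freq == 'yearly':
            return f"{stem}.nc"
        raise ValueError("daily/weekly climatological files are not possible")
    if freq == 'daily':
        return f"{stem}_y{year:04d}m{month:02d}d{day:02d}.nc"
    if freq == 'monthly':
        return f"{stem}_y{year:04d}m{month:02d}.nc"
    return f"{stem}_y{year:04d}.nc"

# e.g. an interannual monthly file for March 1998:
name = forcing_file_name('fn', False, 'monthly', year=1998, month=3)
```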
\item [Record frequency]: the frequency of the records contained in the input file.
Its unit is in hours if it is positive (for example 24 for daily forcing) or in months if negative
(for example -1 for monthly forcing or -12 for annual forcing).
Note that this frequency must REALLY be an integer and not a real.
On some computers, setting it to '24.' can be interpreted as 240!
\item [Variable name]: the name of the variable to be read in the input NetCDF file.
\item [Time interpolation]: a logical to activate, or not, the time interpolation.
If set to 'false', the forcing will have a step-like shape, remaining constant during each forcing period.
For example, when using a daily forcing without time interpolation, the forcing remains constant from
00h00'00'' to 23h59'59".
If set to 'true', the forcing will have a broken line shape.
Records are assumed to be dated at the middle of the forcing period.
For example, when using a daily forcing with time interpolation,
linear interpolation will be performed between mid-day of two consecutive days.
If you want to change this behaviour, it is possible to prepend the variable name with a '-' or a '+' sign.
In the first case, the records will be dated at the beginning of the forcing period,
while in the second case, the records will be dated at the end of the forcing period.
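The dating convention of the records, including the '-'/'+' prefix behaviour, can be sketched as follows (the helper name is invented; the rule itself is the one stated above):

```python
# Sketch (hypothetical helper) of how forcing records are dated.  A record
# covering a forcing period of length 'period' (in hours) is dated at the
# middle of the period by default, at its beginning if the variable name
# is prefixed with '-', and at its end if prefixed with '+'.

def record_date(varname, period_start, period):
    """Hour at which a record of the given variable is assumed valid."""
    if varname.startswith('-'):
        return period_start                 # beginning of the forcing period
    if varname.startswith('+'):
        return period_start + period        # end of the forcing period
    return period_start + period / 2.0      # middle (default)

# Daily forcing: the record of the first day is dated 12h00 by default,
# 00h00 with a '-' prefix, 24h00 with a '+' prefix.
```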
\item [Climatological forcing]: a logical to specify whether an input file contains climatological forcing, which can be cycled in time,
or an interannual forcing, which will require additional files if
the period covered by the simulation exceeds the one of the file.
See the above file naming strategy which impacts the expected name of the file to be opened.
\item [Open/close frequency]: the frequency at which forcing files must be opened/closed.
Four cases are coded:
'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week), 'monthly' and 'yearly', which
mean that the forcing files contain data for one day, one week, one month or one year.
Files are assumed to contain data from the beginning of the open/close period.
For example, the first record of a yearly file containing daily data is Jan 1st even if
the experiment is not starting at the beginning of the year.
\item [Others]: 'weights filename', 'pairing rotation' and 'land/sea mask' are associated with
on-the-fly interpolation which is described in \autoref{subsec:SBC_iof}.
\end{description}
Additional remarks:\\
(1) The time interpolation is a simple linear interpolation between two consecutive records of the input data.
The only tricky point is therefore to specify the date at which we need to do the interpolation and
the date of the records read in the input files.
Following \citet{leclair.madec_OM09}, the date of a time step is set at the middle of the time step.
For example, for an experiment starting at 0h00'00" with a one-hour time-step,
a time interpolation will be performed at the following time: 0h30'00", 1h30'00", 2h30'00", etc.
However, for forcing data related to the surface module,
values are not needed at every time-step but at every \np{nn_fsbc}{nn\_fsbc} time-step.
For example with \np[=3]{nn_fsbc}{nn\_fsbc}, the surface module will be called at time-steps 1, 4, 7, etc.
The date used for the time interpolation is thus redefined to the middle of \np{nn_fsbc}{nn\_fsbc} time-step period.
In the previous example, this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
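The interpolation dates of remark (1) can be sketched as follows (illustration only; the function name is invented):

```python
# Sketch (not NEMO code) of the interpolation dates discussed above, for
# an experiment starting at 0h00'00": the surface module is called every
# nn_fsbc time-steps and the target date is the middle of each nn_fsbc
# time-step period.

def interpolation_dates(rdt_hours, nn_fsbc, n_calls):
    """Dates (in hours since the start) at which forcing is interpolated."""
    return [k * nn_fsbc * rdt_hours + nn_fsbc * rdt_hours / 2.0
            for k in range(n_calls)]

# With a one-hour time-step:
#   nn_fsbc = 1 -> interpolation at 0h30, 1h30, 2h30, ...
#   nn_fsbc = 3 -> interpolation at 1h30, 4h30, 7h30, ...
```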
(2) For code readability and maintenance reasons, we do not take into account the calendar of the NetCDF input file.
The calendar associated with the forcing field is built according to the information provided by the
user in the record frequency, the open/close frequency and the type of temporal interpolation.
For example, the first record of a yearly file containing daily data that will be interpolated in time is assumed to
start Jan 1st at 12h00'00" and end Dec 31st at 12h00'00". \\
(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file when
interpolating data with the first (last) record of the open/close period.
For example, if the input file specifications are ``yearly, containing daily data to be interpolated in time'',
the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values between
Dec 31st 12h00'00" and Jan 1st 12h00'00".
If the forcing is climatological, Dec and Jan will be taken from the same year.
However, if the forcing is not climatological, at the end of
the open/close period, the code will automatically close the current file and open the next one.
Note that, if the experiment starts (ends) at the beginning (end) of
an open/close period, we do accept that the previous (next) file does not exist.
In this case, the time interpolation will be performed between two identical values.
For example, when starting an experiment on Jan 1st of year Y with yearly files and daily data to be interpolated,
we accept that the file for year Y-1 does not exist.
The value of Jan 1st will be used as the missing one for Dec 31st of year Y-1.
If the file of year Y-1 exists, the code will read its last record.
Therefore, this file can contain only one record corresponding to Dec 31st,
a useful feature for users who consider the complete file for year Y-1 too heavy to manipulate.
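Remark (3) amounts to a linear interpolation that degenerates to a constant when a boundary record is duplicated. A minimal sketch (hypothetical helper, with invented record values):

```python
# Sketch (not NEMO code) of linear time interpolation across the open/close
# period boundary.  When the previous (next) file does not exist, the first
# (last) available record is duplicated, so interpolation returns a
# constant value.

def interp(t, t0, v0, t1, v1):
    """Linear interpolation at time t between records (t0, v0) and (t1, v1)."""
    if t1 == t0:                         # duplicated record: constant value
        return v0
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Jan 1st 06h00 (t = 6), between the Dec 31st 12h00 record of the previous
# file (t0 = -12) and the Jan 1st 12h00 record of the current file (t1 = 12):
val = interp(6.0, -12.0, 10.0, 12.0, 14.0)   # three quarters of the way
```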
%% =================================================================================================
\subsection{Interpolation on-the-fly}
\label{subsec:SBC_iof}
Interpolation on the Fly allows the user to supply input files required for the surface forcing on
grids other than the model grid.
To do this, he or she must supply, in addition to the source data file(s), a file of weights to be used to
interpolate from the data grid to the model grid.
The original development of this code used the SCRIP package
(freely available \href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement).
In principle, any package such as CDO can be used to generate the weights, but the variables in
the input weights file must have the same names and meanings as assumed by the model.
Two methods are currently available: bilinear and bicubic interpolations.
Prior to the interpolation, if a land/sea mask file is provided, the user can decide to remove land points from
the input field and substitute the corresponding values with the average of the 8 neighbouring points of
the native external grid.
Only "sea points" are considered for the averaging.
The land/sea mask file must be provided in the structure associated with the input variable.
The NetCDF land/sea mask variable name must be 'LSM'; it must have the same horizontal and vertical dimensions as
the associated variables and be equal to 1 over land and 0 elsewhere.
The procedure can be applied recursively by setting nn\_lsm > 1 in the namsbc namelist.
Note that nn\_lsm=0 forces the code to not apply the procedure, even if a land/sea mask file is supplied.
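The land-point substitution can be sketched as a single pass over the native grid (illustration only, not NEMO code; pure-Python lists stand in for the NetCDF arrays and the fill value is invented):

```python
# Sketch (not NEMO code) of the land-point substitution described above:
# each land value is replaced by the average of its sea neighbours among
# the 8 surrounding points of the native grid.  Applying the pass
# recursively (nn_lsm > 1) progressively fills larger land areas.

def fill_land(field, lsm):
    """One pass; lsm == 1 over land and 0 over sea, as in the 'LSM' variable."""
    nj, ni = len(field), len(field[0])
    out = [row[:] for row in field]
    for j in range(nj):
        for i in range(ni):
            if lsm[j][i] == 1:                       # land point
                vals = [field[jj][ii]
                        for jj in range(max(j - 1, 0), min(j + 2, nj))
                        for ii in range(max(i - 1, 0), min(i + 2, ni))
                        if (jj, ii) != (j, i) and lsm[jj][ii] == 0]
                if vals:                             # at least one sea neighbour
                    out[j][i] = sum(vals) / len(vals)
    return out

field = [[1.0, 2.0], [3.0, -999.0]]   # -999.0 marks an unfilled land value
lsm   = [[0, 0], [0, 1]]
filled = fill_land(field, lsm)        # land point becomes (1+2+3)/3 = 2.0
```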
%% =================================================================================================
\subsubsection{Bilinear interpolation}
\label{subsec:SBC_iof_bilinear}
The input weights file in this case has two sets of variables:
src01, src02, src03, src04 and wgt01, wgt02, wgt03, wgt04.
The "src" variables correspond to the point in the input grid to which the weight "wgt" is applied.
Each src value is an integer corresponding to the index of a point in the input grid when
written as a one dimensional array.
For example, for an input grid of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8.
There are four of each variable because bilinear interpolation uses the four points defining
the grid box containing the point to be interpolated.
All of these arrays are on the model grid, so that values src01(i,j) and wgt01(i,j) are used to
generate a value for point (i,j) in the model.
Symbolically, the algorithm used is:
\[
f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
\]
where function idx() transforms a one dimensional index src(k) into a two dimensional index,
and wgt(1) corresponds to variable "wgt01" for example.
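The algorithm above can be sketched for a single model point (illustration only; the 1-based index convention follows the (3,2)-is-point-8 example in the text, and a dict stands in for the input field):

```python
# Sketch (not NEMO code) of the bilinear algorithm: four weights applied
# to four source points, each addressed by its one-dimensional index in
# the input grid.

def idx(src, ni):
    """Transform the 1-D index 'src' into a 2-D (i, j) index for an input
    grid with ni points in the i direction (1-based, so that point (3,2)
    of a 5x10 grid is referenced as 8, since (2-1)*5+3 = 8)."""
    j, i = divmod(src - 1, ni)
    return i + 1, j + 1

def bilinear(src4, wgt4, f, ni):
    """Value at one model point from 4 source indices and 4 weights;
    f maps (i, j) to the input field value."""
    return sum(w * f[idx(s, ni)] for s, w in zip(src4, wgt4))

# The indexing example from the text: point 8 of a 5x10 grid is (3,2).
point = idx(8, 5)
```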
%% =================================================================================================
\subsubsection{Bicubic interpolation}
\label{subsec:SBC_iof_bicubic}
Again, there are two sets of variables: "src" and "wgt".
But in this case, there are 16 of each.
The symbolic algorithm used to calculate values on the model grid is now:
\[
\begin{split}
f_{m}(i,j) = f_{m}(i,j) +& \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
+ \sum_{k=5 }^{8 } {wgt(k)\left.\frac{\partial f}{\partial i}\right| _{idx(src(k))} } \\
+& \sum_{k=9 }^{12} {wgt(k)\left.\frac{\partial f}{\partial j}\right| _{idx(src(k))} }
+ \sum_{k=13}^{16} {wgt(k)\left.\frac{\partial ^2 f}{\partial i \partial j}\right| _{idx(src(k))} }
\end{split}
\]
The gradients here are taken with respect to the horizontal indices and not distances since
the spatial dependency has been included into the weights.
%% =================================================================================================
\subsubsection{Implementation}
\label{subsec:SBC_iof_imp}
To activate this option, a non-empty string should be supplied in
the weights filename column of the relevant namelist;
if this is left as an empty string no action is taken.
In the model, weights files are read in and stored in a structured type (WGT) in the fldread module,
as and when they are first required.
This initialisation procedure determines whether the input data grid should be treated as cyclical or not by
inspecting a global attribute stored in the weights input file.
This attribute must be called "ew\_wrap" and be of integer type.
If it is negative, the input non-model grid is assumed to be not cyclic.
If zero or greater, then the value represents the number of columns that overlap.
\eg\ if the input grid has columns at longitudes 0, 1, 2, ...., 359, then ew\_wrap should be set to 0;
if longitudes are 0.5, 2.5, ...., 358.5, 360.5, 362.5, ew\_wrap should be 2.
If the model does not find attribute ew\_wrap, then a value of -999 is assumed.
In this case, the \rou{fld\_read} routine defaults ew\_wrap to value 0 and
therefore the grid is assumed to be cyclic with no overlapping columns.
(In fact, this only matters when bicubic interpolation is required.)
Note that no testing is done to check the validity in the model,
since there is no way of knowing the name used for the longitude variable,
so it is up to the user to make sure his or her data is correctly represented.
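For a grid whose longitudes are known, the ew\_wrap value corresponding to the convention above can be derived as follows. This is a hypothetical helper for illustration (NEMO itself only reads the attribute and performs no such check):

```python
# Sketch (hypothetical helper, not NEMO code) of the ew_wrap convention:
# the number of overlapping columns of a cyclic input grid equals the
# number of trailing longitudes that duplicate the leading columns
# shifted by 360 degrees.

def ew_wrap_from_longitudes(lons):
    """Number of trailing columns that duplicate the leading ones
    (0 for a cyclic grid with no overlap)."""
    return sum(1 for lon in lons if lon >= lons[0] + 360.0)

# Columns at 0, 1, ..., 359              -> ew_wrap = 0
# Columns at 0.5, 2.5, ..., 360.5, 362.5 -> ew_wrap = 2
```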
Next the routine reads in the weights.
Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise bilinear interpolation is used.
The WGT structure includes dynamic arrays both for the storage of the weights (on the model grid),
and when required, for reading in the variable to be interpolated (on the input data grid).
The size of the input data array is determined by examining the values in the "src" arrays to
find the minimum and maximum i and j values required.
Since bicubic interpolation requires the calculation of gradients at each point on the grid,
the corresponding arrays are dimensioned with a halo of width one grid point all the way around.
When the array of points from the data file is adjacent to an edge of the data grid,
the halo is either a copy of the row/column next to it (non-cyclical case),
or is a copy of one from the first few columns on the opposite side of the grid (cyclical case).
%% =================================================================================================
\subsubsection{Limitations}
\label{subsec:SBC_iof_lim}
\begin{enumerate}
\item The case where input data grids are not logically rectangular (irregular grid case) has not been tested.
\item This code is not guaranteed to produce positive definite answers from positive definite inputs when
a bicubic interpolation method is used.
\item The cyclic condition is only applied on left and right columns, and not to top and bottom rows.
\item The gradients across the ends of a cyclical grid assume that the grid spacing between
the two columns involved is consistent with the weights used.
\item Neither interpolation scheme is conservative. (There is a conservative scheme available in SCRIP,
but this has not been implemented.)
\end{enumerate}
%% =================================================================================================
\subsubsection{Utilities}
\label{subsec:SBC_iof_util}
% to be completed
A set of utilities to create a weights file for a rectilinear input grid is available
(see the directory NEMOGCM/TOOLS/WEIGHTS).
%% =================================================================================================
\subsection{Standalone surface boundary condition scheme (SAS)}
\label{subsec:SBC_SAS}
\begin{listing}
\nlst{namsbc_sas}
\caption{\forcode{&namsbc_sas}}
\label{lst:namsbc_sas}
\end{listing}
In some circumstances, it may be useful to avoid calculating the 3D temperature,
salinity and velocity fields and simply read them in from a previous run or receive them from OASIS.
For example:
\begin{itemize}
\item Multiple runs of the model are required in code development to
see the effect of different algorithms in the bulk formulae.
\item The effect of different parameter sets in the ice model is to be examined.
\item Development of sea-ice algorithms or parameterizations.
\item Spinup of the iceberg floats.
\item Ocean/sea-ice simulation with both models running in parallel (\np[=.true.]{ln_mixcpl}{ln\_mixcpl}).
\end{itemize}
The Standalone Surface scheme provides this capacity.
Its options are defined through the \nam{sbc_sas}{sbc\_sas} namelist variables.
A new copy of the model has to be compiled with a configuration based on ORCA2\_SAS\_LIM.
However, no namelist parameters need be changed from the settings of the previous run (except perhaps nn\_date0).
In this configuration, a few routines in the standard model are overriden by new versions.
Routines replaced are:
\begin{itemize}
\item \mdl{nemogcm}: This routine initialises the rest of the model and repeatedly calls the stp time stepping routine (\mdl{step}).
Since the ocean state is not calculated all associated initialisations have been removed.
\item \mdl{step}: The main time stepping routine now only needs to call the sbc routine (and a few utility functions).
\item \mdl{sbcmod}: This has been cut down and now only calculates the surface forcing and the ice model, if required.
New surface modules that can function when only the surface level of the ocean state is defined can also be added
(\eg\ icebergs).
\item \mdl{daymod}: No ocean restarts are read or written (though the ice model restarts are retained),
so calls to restart functions have been removed.
This also means that the calendar cannot be controlled by time in a restart file,
so the user must check that nn\_date0 in the model namelist is correct for his or her purposes.
\item \mdl{stpctl}: Since there is no free surface solver, references to it have been removed from \rou{stp\_ctl} module.
\item \mdl{diawri}: All 3D data have been removed from the output.
The surface temperature, salinity and velocity components (which have been read in) are written along with
relevant forcing and ice data.
\end{itemize}
One new routine has been added:
\begin{itemize}
\item \mdl{sbcsas}: This module initialises the input files needed for reading temperature, salinity and
velocity arrays at the surface.
These filenames are supplied in namelist namsbc\_sas.
Unfortunately, because of limitations with the \mdl{iom} module,
the full 3D fields from the mean files have to be read in and interpolated in time,
before using just the top level.
Since fldread is used to read in the data, Interpolation on the Fly may be used to change input data resolution.
\end{itemize}
The user can also choose in the \nam{sbc_sas}{sbc\_sas} namelist to read the mean (over \np{nn_fsbc}{nn\_fsbc} time-steps) fraction of solar net radiation absorbed in the 1st T level
(\np[=.true.]{ln_flx}{ln\_flx}) and to provide 3D oceanic velocities instead of 2D ones (\np[=.true.]{ln_3d_uve}{ln\_3d\_uve}). In that last case, only the 1st level will be read in.
%% =================================================================================================
\section[Flux formulation (\textit{sbcflx.F90})]{Flux formulation (\protect\mdl{sbcflx})}
\label{sec:SBC_flx}
% Laurent: DO NOT mix up ``bulk formulae'' (the classic equation) and the ``bulk
% parameterization'' (i.e NCAR, COARE, ECMWF...)
\begin{listing}
\nlst{namsbc_flx}
\caption{\forcode{&namsbc_flx}}
\label{lst:namsbc_flx}
\end{listing}
In the flux formulation (\np[=.true.]{ln_flx}{ln\_flx}),
the surface boundary condition fields are directly read from input files.
The user has to define in the namelist \nam{sbc_flx}{sbc\_flx} the name of the file,
the name of the variable read in the file, the time frequency at which it is given (in hours),
and a logical setting whether a time interpolation to the model time step is required for this field.
See \autoref{subsec:SBC_fldread} for a more detailed description of the parameters.
Note that, in general, a flux formulation is used in association with a restoring term to observed SST and/or SSS.
See \autoref{subsec:SBC_ssr} for its specification.
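When time interpolation to the model time step is requested, it amounts, conceptually, to a linear weighting of the two forcing records that bracket the model time. A minimal sketch under that assumption (scalar fields, prescribed record times), not a transcription of the actual fldread code:

```python
def time_interp(field_before, field_after, t, t_before, t_after):
    """Linearly interpolate a forcing field to model time t, given the
    two bracketing records and their times (illustrative sketch only)."""
    w = (t - t_before) / (t_after - t_before)   # weight of the later record
    return (1.0 - w) * field_before + w * field_after
```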
%% =================================================================================================
\section[Bulk formulation (\textit{sbcblk.F90})]{Bulk formulation (\protect\mdl{sbcblk})}
\label{sec:SBC_blk}
% L. Brodeau, December 2019... %
\begin{listing}
\nlst{namsbc_blk}
\caption{\forcode{&namsbc_blk}}
\label{lst:namsbc_blk}
\end{listing}
If the bulk formulation is selected (\np[=.true.]{ln_blk}{ln\_blk}), the air-sea
fluxes associated with surface boundary conditions are estimated by means of the
traditional \emph{bulk formulae}. As input, bulk formulae rely on a prescribed
near-surface atmosphere state (typically extracted from a weather reanalysis)
and the prognostic sea (-ice) surface state averaged over \np{nn_fsbc}{nn\_fsbc}
time-step(s).
% Turbulent air-sea fluxes are computed using the sea surface properties and
% atmospheric SSVs at height $z$ above the sea surface, with the traditional
% aerodynamic bulk formulae:
Note: all the NEMO Fortran routines involved in the present section have been
initially developed (and are still developed in parallel) in
the \href{https://brodeau.github.io/aerobulk}{\texttt{AeroBulk}} open-source project
\citep{brodeau.barnier.ea_JPO16}.
%%% Bulk formulae are this:
\subsection{Bulk formulae}
\label{subsec:SBC_blkform}
In NEMO, the set of equations that relates each component of the surface fluxes
to the near-surface atmosphere and sea surface states reads
\begin{subequations}
\label{eq:SBC_bulk}
\label{eq:SBC_bulk_form}
\begin{align}
\mathbf{\tau} &= \rho~ C_D ~ \mathbf{U}_z ~ U_B \\
Q_H &= \rho~C_H~C_P~\big[ \theta_z - T_s \big] ~ U_B \\
E &= \rho~C_E ~\big[ q_s - q_z \big] ~ U_B \\
Q_L &= -L_v \, E \\
Q_{sr} &= (1 - a) Q_{sw\downarrow} \\
Q_{ir} &= \delta (Q_{lw\downarrow} -\sigma T_s^4)
\end{align}
\end{subequations}
with
\[ \theta_z \simeq T_z+\gamma z \]
\[ q_s \simeq 0.98\,q_{sat}(T_s,p_a ) \]
from which the non-solar heat flux follows as \[ Q_{ns} = Q_L + Q_H + Q_{ir} \]
where $\mathbf{\tau}$ is the wind stress vector, $Q_H$ the sensible heat flux,
$E$ the evaporation, $Q_L$ the latent heat flux, and $Q_{ir}$ the net longwave
flux.
$Q_{sw\downarrow}$ and $Q_{lw\downarrow}$ are the surface downwelling shortwave
and longwave radiative fluxes, respectively.
Note: a positive sign for $\mathbf{\tau}$, $Q_H$, $Q_L$, $Q_{sr}$ or $Q_{ir}$
implies a gain of the relevant quantity for the ocean, while a positive $E$
implies a freshwater loss for the ocean.
$\rho$ is the density of air. $C_D$, $C_H$ and $C_E$ are the bulk transfer
coefficients for momentum, sensible heat, and moisture, respectively.
$C_P$ is the heat capacity of moist air, and $L_v$ is the latent heat of
vaporization of water.
$\theta_z$, $T_z$ and $q_z$ are the potential temperature, absolute temperature,
and specific humidity of air at height $z$ above the sea surface,
respectively. $\gamma z$ is a temperature correction term which accounts for the
adiabatic lapse rate and approximates the potential temperature at height
$z$ \citep{josey.gulev.ea_OCC13}.
$\mathbf{U}_z$ is the wind speed vector at height $z$ above the sea surface
(possibly referenced to the surface current $\mathbf{u_0}$).%,
%\autoref{s_res1}.\autoref{ss_current}). %% Undefined references
The bulk wind speed $U_B$ is the scalar wind speed $|\mathbf{U}_z|$,
possibly augmented by a gustiness contribution.
$a$ and $\delta$ are the albedo and emissivity of the sea surface, respectively.\\
%$p_a$ is the mean sea-level pressure (SLP).
$T_s$ is the sea surface temperature. $q_s$ is the saturation specific humidity
of air at temperature $T_s$; it includes a 2\% reduction to account for the
presence of salt in seawater \citep{sverdrup.johnson.ea_bk42,kraus.businger_QJRMS96}.
Depending on the bulk parametrization used, $T_s$ can either be the temperature
at the air-sea interface (skin temperature, hereafter SSST) or at typically a
few tens of centimeters below the surface (bulk sea surface temperature,
hereafter SST).
The SSST differs from the SST due to the contributions of two effects of
opposite sign, the \emph{cool skin} and \emph{warm layer} (hereafter CS and WL,
respectively, see \autoref{subsec:SBC_skin}).
Technically, when the ECMWF or COARE* or MFS bulk parametrizations are selected
(\np[=.true.]{ln_ECMWF}{ln\_ECMWF} or \np[=.true.]{ln_COARE*}{ln\_COARE\*} or \np[=.true.]{ln_MFS}{ln\_MFS}),
$T_s$ is the SSST, as opposed to the NCAR bulk parametrization
(\np[=.true.]{ln_NCAR}{ln\_NCAR}) for which $T_s$ is the bulk SST (\ie~temperature
at first T-point level).
For more details on all these aspects the reader is invited to refer
to \citet{brodeau.barnier.ea_JPO16}.
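For readers who want to check the sign conventions numerically, the bulk formulae above can be transcribed directly once the transfer coefficients are given. The following Python sketch is purely illustrative: the constants and the Tetens-type saturation formula are assumptions of this example, not the expressions implemented in \mdl{sbcblk}, and the transfer coefficients are simply prescribed rather than computed by a bulk parametrization.

```python
import math

# Illustrative constants (assumed values, not those used in sbcblk.F90)
RHO_AIR = 1.2       # air density [kg/m^3]
CP_AIR  = 1004.0    # heat capacity of moist air [J/(kg K)]
LV      = 2.5e6     # latent heat of vaporization [J/kg]
SIGMA   = 5.67e-8   # Stefan-Boltzmann constant [W/(m^2 K^4)]
GAMMA   = 0.0098    # adiabatic lapse-rate correction [K/m]

def q_sat(T, P=101000.0):
    """Saturation specific humidity [kg/kg] at temperature T [K] and
    pressure P [Pa] (Tetens-type approximation, an assumption here)."""
    es = 611.2 * math.exp(17.67 * (T - 273.15) / (T - 29.65))   # [Pa]
    return 0.622 * es / (P - 0.378 * es)

def bulk_fluxes(Cd, Ch, Ce, Uz, UB, Tz, qz, Ts, z=10.0,
                Qsw_down=200.0, Qlw_down=350.0, albedo=0.066, emiss=0.97):
    """Evaluate the bulk formulae for prescribed transfer coefficients
    Cd, Ch, Ce (which a bulk parametrization would normally provide)."""
    theta_z = Tz + GAMMA * z                   # potential air temperature at z
    qs = 0.98 * q_sat(Ts)                      # 2% reduction for salinity
    tau  = RHO_AIR * Cd * Uz * UB              # wind stress magnitude [N/m^2]
    Q_H  = RHO_AIR * Ch * CP_AIR * (theta_z - Ts) * UB   # sensible [W/m^2]
    E    = RHO_AIR * Ce * (qs - qz) * UB       # evaporation [kg/(m^2 s)]
    Q_L  = -LV * E                             # latent heat [W/m^2]
    Q_sr = (1.0 - albedo) * Qsw_down           # net solar [W/m^2]
    Q_ir = emiss * (Qlw_down - SIGMA * Ts**4)  # net longwave [W/m^2]
    Q_ns = Q_L + Q_H + Q_ir                    # non-solar heat flux [W/m^2]
    return tau, Q_H, E, Q_L, Q_sr, Q_ir, Q_ns
```

With air colder than the sea ($\theta_z < T_s$) this yields $Q_H < 0$, i.e.\ an ocean heat loss, consistent with the sign convention stated above.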
\subsection{Bulk parametrizations}
\label{subsec:SBC_blk_ocean}
%%%\label{subsec:SBC_param}
Accuracy of the estimate of surface turbulent fluxes by means of bulk formulae
strongly relies on that of the bulk transfer coefficients: $C_D$, $C_H$ and
$C_E$. They are estimated with what we refer to as a \emph{bulk
parametrization} algorithm. When relevant, these algorithms also perform the
height adjustment of humidity and temperature to the wind reference measurement
height (from \np{rn_zqt}{rn\_zqt} to \np{rn_zu}{rn\_zu}).
For the open ocean, five bulk parametrization algorithms are available in NEMO:
\begin{itemize}
\item NCAR, formerly known as CORE, \citep{large.yeager_trpt04,large.yeager_CD09}
\item COARE 3.0 \citep{fairall.bradley.ea_JC03}
\item COARE 3.6 \citep{edson.jampana.ea_JPO13}
\item ECMWF (IFS documentation, cy45)
\item MFS \citep{castellari_JMS98}
\end{itemize}
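When \np{rn_zqt}{rn\_zqt} differs from \np{rn_zu}{rn\_zu}, the algorithms shift the air temperature and humidity to the wind reference height as part of their stability iteration. In the neutral limit this reduces to a logarithmic profile. The sketch below (assumed roughness length and prescribed turbulent scales, neutral stratification only) shows the shape of that adjustment; it is not the full Monin-Obukhov iteration performed by the algorithms.

```python
import math

KAPPA = 0.4  # von Karman constant

def adjust_neutral(x_zt, x_star, zt, zu):
    """Shift a scalar (air temperature or humidity) from height zt to zu
    along a neutral logarithmic profile; x_star is the corresponding
    turbulent scale (t* or q*), here simply prescribed for illustration."""
    return x_zt + (x_star / KAPPA) * math.log(zu / zt)

def wind_neutral(u_z1, z1, z2, z0=1.0e-4):
    """Neutral log-profile wind speed at height z2 given the speed at z1,
    for an assumed roughness length z0 [m]."""
    return u_z1 * math.log(z2 / z0) / math.log(z1 / z0)
```

For ECMWF-type forcing this is the 2\,m to 10\,m adjustment implied by \forcode{rn_zqt = 2.} and \forcode{rn_zu = 10.}.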
With respect to version 3.0, the principal advances in version 3.6 of the COARE
bulk parametrization are improvements in the representation of the effects of
waves on fluxes \citep{edson.jampana.ea_JPO13,brodeau.barnier.ea_JPO16},
including improved relationships linking surface roughness and whitecap
fraction to wave parameters. It is therefore recommended to choose version 3.6
over 3.0.
\subsection[Cool-skin and warm-layer parameterizations ( \forcode{ln_skin_cs} \& \forcode{ln_skin_wl} )]
{Cool-skin and warm-layer parameterizations (\protect\np{ln_skin_cs}{ln\_skin\_cs} \& \np{ln_skin_wl}{ln\_skin\_wl})}
\label{subsec:SBC_skin}
As opposed to the NCAR bulk parametrization, more advanced bulk
parametrizations such as COARE3.x and ECMWF are meant to be used with the skin
temperature $T_s$ rather than the bulk SST (which, in NEMO is the temperature at
the first T-point level, see \autoref{subsec:SBC_blkform}).
As such, the relevant cool-skin and warm-layer parametrization must be
activated through \np[=T]{ln_skin_cs}{ln\_skin\_cs}
and \np[=T]{ln_skin_wl}{ln\_skin\_wl} to use COARE3.x or ECMWF in a consistent
way.
For the cool-skin parametrization, the COARE and ECMWF algorithms share the same
basis \citep{fairall.bradley.ea_JGRO96}, with some minor updates based
on \citet{zeng.beljaars_GRL05} for ECMWF\iffalse, and \citet{fairall.ea_19?} for
COARE 3.6\fi.
For the warm-layer scheme, ECMWF is based on \citet{zeng.beljaars_GRL05} with a
recent update from \citet{takaya.bidlot.ea_JGR10} (consideration of the
turbulence input from Langmuir circulation).
Importantly, the COARE warm-layer scheme \iffalse \citep{fairall.ea_19?} \fi includes a prognostic
equation for the thickness of the warm layer, while this thickness is considered
constant in the ECMWF algorithm.
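The net effect of the two corrections on $T_s$ can be summarised as a cool-skin cooling of a few tenths of a kelvin, partially or fully offset by a warm-layer warming under sunny, low-wind conditions. The one-liner below only illustrates how the two corrections combine; the magnitudes are placeholders, whereas NEMO computes both corrections prognostically in \textit{sbcblk\_skin\_ecmwf.F90} and \textit{sbcblk\_skin\_coare.F90}.

```python
def skin_temperature(sst_bulk, dT_cs=0.25, dT_wl=0.0):
    """Compose the interface (skin) temperature from the bulk SST:
    dT_cs is the cool-skin cooling and dT_wl the warm-layer warming.
    Both default magnitudes are illustrative placeholders, not NEMO's
    prognostic estimates."""
    return sst_bulk - dT_cs + dT_wl
```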
\subsection{Appropriate use of each bulk parametrization}
\subsubsection{NCAR}
The NCAR bulk parametrization (formerly known as CORE) is meant to be used with
the CORE II atmospheric forcing \citep{large.yeager_CD09}. The expected sea
surface temperature is the bulk SST. Hence the following namelist parameters
must be set:
\begin{forlines}
...
ln_NCAR = .true.
...
rn_zqt = 10. ! Air temperature & humidity reference height (m)
rn_zu = 10. ! Wind vector reference height (m)
...
ln_skin_cs = .false. ! use the cool-skin parameterization
ln_skin_wl = .false. ! use the warm-layer parameterization
...
ln_humi_sph = .true. ! humidity "sn_humi" is specific humidity [kg/kg]
\end{forlines}
\subsubsection{ECMWF}
With an atmospheric forcing based on an ECMWF reanalysis, such as the
Drakkar Forcing Set \citep{brodeau.barnier.ea_OM10}, we strongly recommend
using the ECMWF bulk parametrization with the cool-skin and warm-layer
parametrizations activated. In ECMWF reanalyses, air temperature and
humidity are provided at the 2\,m height, and the humidity is
distributed as dew-point temperature, so the namelist must be tuned as follows:
\begin{forlines}
...
ln_ECMWF = .true.
...
rn_zqt = 2. ! Air temperature & humidity reference height (m)
rn_zu = 10. ! Wind vector reference height (m)
...
ln_skin_cs = .true. ! use the cool-skin parameterization
ln_skin_wl = .true. ! use the warm-layer parameterization
...
ln_humi_dpt = .true. ! humidity "sn_humi" is dew-point temperature [K]
...
\end{forlines}
Note: when \np{ln_ECMWF}{ln\_ECMWF} is selected, the selection
of \np{ln_skin_cs}{ln\_skin\_cs} and \np{ln_skin_wl}{ln\_skin\_wl} implicitly
triggers the use of the ECMWF cool-skin and warm-layer parametrizations,
respectively (found in \textit{sbcblk\_skin\_ecmwf.F90}).
\subsubsection{COARE 3.x}
Since the ECMWF parametrization is largely based on the COARE* parametrization,
the two algorithms are very similar in terms of structure and closure
approach. As such, the namelist tuning for COARE 3.x is identical to that of
ECMWF:
\begin{forlines}
...
ln_COARE3p6 = .true.
...
ln_skin_cs = .true. ! use the cool-skin parameterization
ln_skin_wl = .true. ! use the warm-layer parameterization
...
\end{forlines}
Note: when \np[=T]{ln_COARE3p0}{ln\_COARE3p0} or \np[=T]{ln_COARE3p6}{ln\_COARE3p6} is selected, the selection
of \np{ln_skin_cs}{ln\_skin\_cs} and \np{ln_skin_wl}{ln\_skin\_wl} implicitly
triggers the use of the COARE cool-skin and warm-layer parametrizations,
respectively (found in \textit{sbcblk\_skin\_coare.F90}).
%lulu
% In a typical bulk algorithm, the BTCs under neutral stability conditions are
% defined using \emph{in-situ} flux measurements while their dependence on the
% stability is accounted through the \emph{Monin-Obukhov Similarity Theory} and
% the \emph{flux-profile} relationships \citep[\eg{}][]{Paulson_1970}. BTCs are
% functions of the wind speed and the near-surface stability of the atmospheric
% surface layer (hereafter ASL), and hence, depend on $U_B$, $T_s$, $T_z$, $q_s$
% and $q_z$.