How to add a new type of flux data to be processed by the CIF into a model’s inputs

Pre-requisites

Before starting to implement a new flux plugin, you must have:

  • a YAML file set up for a simulation that already works with known plugins

  • a folder where the data you want to implement is stored

  • basic information about the data you want to implement (licensing, format, etc.)

The sections below guide you through the different documentation pages needed to implement your plugin. The main reference pages are the datastream documentation page and the flux template documentation page.

Switch from working fluxes to the reference template

The datavect paragraph of your working YAML should look like this:

Example with CHIMERE

datavect:
  plugin:
    name: standard
    version: std
  components:
    flux:
      parameters:
        CO2:
          plugin:
            name: CHIMERE
            type: flux
            version: AEMISSIONS
          file_freq: 120H
          dir: some_dir
          file: some_file

Do the following to make it work with the template flux:

  1. follow the initial steps in the flux template documentation page to initialize your new plugin and register it.

    This includes copying the template folder to a new path and changing the variables _name, _fullname and _version in the file __init__.py (see the sketch after this list).

  2. update your YAML to use the template flux (renamed as you prefer). It should now look like this:

     datavect:
       plugin:
         name: standard
         version: std
       components:
         flux:
           parameters:
             CO2:
               plugin:
                 name: your_new_name
                 type: flux
                 version: your_version
    
  3. re-run your test case. It should generate fluxes with random values, as in the template.
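
For reference, a minimal sketch of the variables to change in __init__.py (step 1 above) is given below; the values are placeholders to be replaced by your own names, and the rest of the template file is kept as-is:

_name = "your_new_name"        # plugin name referenced in the yaml
_version = "your_version"      # plugin version referenced in the yaml
_fullname = "Full name describing your flux data"  # placeholder description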

Document your plugin

Before going further, be sure to document your plugin properly.

To do so, please replace the docstring header in the file __init__.py.

Include the following information (a possible skeleton is sketched after this list):

  • licensing information

  • permanent link to download the data (or a contact person if no link is publicly available)

  • data format (temporal and horizontal resolution, names and shape of the data files)

  • any specific treatment that prevents the plugin from working with other types of files.
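
As an illustration only, the docstring header could follow a skeleton such as the one below; the section labels are suggestions, not a required format:

"""
your_new_name fluxes
====================

Licensing: <licensing information>
Download: <permanent link, or contact person if no public link is available>
Format: <temporal and horizontal resolution; names and shape of the data files>
Limitations: <any specific treatment preventing use with other types of files>
"""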

Build and check the documentation

Before going further, please compile the documentation and check that your new plugin appears in the list of datastream plugins here.

Also check that the documentation of your new plugin is satisfactory.

To compile the documentation, use the command:

cd $CIF_root/docs
make html

Further details can be found here.

Updating functions and data to implement your flux data

Your new plugin needs several functions to be implemented in order to work.

fetch

The fetch function determines what files and corresponding dates are available for running the present case. The structure of the fetch function is shown here: fetch. Please read carefully all explanations therein before starting to implement your case.
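
As an orientation only, a minimal sketch of a possible fetch implementation is given below. It is not the definitive implementation: the exact signature and the structure of the returned lists must be taken from the fetch documentation page, and the daily file layout and the dictionary structure of input_dates assumed here are illustrative assumptions.

import datetime
import os


def fetch(ref_dir, ref_file, input_dates, target_dir, tracer=None, **kwargs):
    # Assumption: input_dates is a dictionary of simulation dates per sub-period
    list_files = {}
    list_dates = {}
    for datei, dates in input_dates.items():
        # Assumption: one daily file, named according to the "file" pattern of the yaml
        files = [dd.strftime(os.path.join(ref_dir, ref_file)) for dd in dates]
        intervals = [[dd, dd + datetime.timedelta(days=1)] for dd in dates]

        # Link the original files into the working directory
        local_files = []
        for ff in files:
            target = os.path.join(target_dir, os.path.basename(ff))
            if os.path.isfile(ff) and not os.path.lexists(target):
                os.symlink(ff, target)
            local_files.append(target)

        list_files[datei] = local_files
        list_dates[datei] = intervals

    return list_files, list_dates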

By default, the fetch function will use the arguments dir and file in your yaml. Make sure to update your yaml accordingly:

datavect:
  plugin:
    name: standard
    version: std
  components:
    flux:
      parameters:
        CO2:
          plugin:
            name: your_new_name
            type: flux
            version: your_version
          dir: path_to_data
          file: file_name

Depending on how you implement your data stream, extra parameters may be needed. Please document them as you add them in the input_arguments variable in __init__.py.

One classical parameter is file_freq, which gives the frequency of the input files (independently of the simulation to be computed).
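
As an illustration, documenting file_freq in input_arguments could look like the sketch below; the exact keys expected for each entry should be checked against the template and existing plugins:

input_arguments = {
    "file_freq": {
        "doc": "Frequency of the input files, independently of the simulation "
        "to be computed (e.g. 120H for 5-day files)",
        "default": None,
        "accepted": str,
    },
}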

Once implemented, re-run your test case. You can check that everything went as expected by verifying that:

  1. in the folder $workdir/datavect/flux/your_species/, links to the original data files have been initialized

  2. the list of dates and files is initialized as expected. To check this, use the option dump_debug in the datavect paragraph of the yaml (see details here). It will dump the list of dates and files to a file named $workdir/datavect/flux.your_species.txt

get_domain (optional)

A datastream plugin needs to be described by a domain to be processed in pyCIF. There are three valid approaches to associate a domain with your flux data. The first two are given for information, but the third one is to be preferred in most cases:

  1. fetch it from another object in the set-up. This is relevant when the domain should be exactly the same as the one of another Plugin in your configuration. For instance, if you are implementing a flux plugin dedicated to a model, you will expect it to have exactly the same domain as the model.

    To ensure that your flux plugin fetches the domain from the present set-up, it is possible to define a so-called requirement. This is done by adding the following lines to the __init__.py file:

    requirements = {
        "domain": {"name": "CHIMERE", "version": "std", "empty": False},
    }
    

    In that case, the flux plugin will expect a CHIMERE domain to be defined in the set-up; otherwise pyCIF will raise an exception.

  2. directly define the domain in the yaml as a sub-paragraph, which looks like this:

     datavect:
       plugin:
         name: standard
         version: std
       components:
         flux:
           parameters:
             CO2:
               plugin:
                 name: your_new_name
                 type: flux
                 version: your_version
               dir: path_to_data
               file: file_name
               domain:
                 plugin:
                   name: my_domain_name
                   version: my_domain_version
                 some_extra_parameters: grub
    

    This approach is not necessarily recommended, as it forces the user to configure his/her YAML file properly for the case to work.

    Warning

    If this approach is chosen, please document its usage very carefully.

  3. use the function get_domain to define the domain dynamically, based on input files or with fixed parameters.

    The structure of the get_domain function is shown here: get_domain (optional). Please read carefully all explanations therein before starting to implement your case. A minimal sketch is given after this list.
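
The sketch below only illustrates the beginning of such a function: it reads the native grid from one of the available input files (the variable names "lat" and "lon" and the signature shown are assumptions); building and returning the actual pyCIF domain object from these coordinates must follow the structure described on the get_domain documentation page.

import glob
import os

import xarray as xr


def get_domain(ref_dir, ref_file, input_dates, target_dir, tracer=None):
    # Use any available input file to retrieve the native grid
    sample = sorted(glob.glob(os.path.join(ref_dir, "*.nc")))[0]
    with xr.open_dataset(sample) as ds:
        lat_centers = ds["lat"].values
        lon_centers = ds["lon"].values

    # From here on, build and return the domain Plugin (grid centres, corners,
    # projection, etc.) following the template's get_domain structure
    ...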

Once implemented, re-run your test case. The implementation of the correct domain will have an impact on the native resolution used to randomly generate fluxes (remember that the read function still comes from the template and thus generates random fluxes on the corresponding domain). Therefore, pyCIF will automatically reproject the fluxes from the implemented domain to your model’s domain.

One can check that the implemented domain is correct by:

  1. checking that the flux files generated for your model seem to follow the native resolution of your data

  2. dumping intermediate data during the pyCIF computation. To do so, activate the option save_debug in the obsoperator paragraph:

    obsoperator:
      plugin:
        name: standard
        version: std
      save_debug: True
    

    When activated, this option dumps intermediate states in $workdir/obsoperator/$run_id/transform_debug/. One has to find the ID of the regrid transform that reprojects the native fluxes to your model’s domain. This information can be found in $workdir/obsoperator/transform_description.txt.

    Once the transform ID is retrieved, go to the folder $workdir/obsoperator/$run_id/transform_debug/$transform_ID. The directory tree below that folder can be complex; go to the deepest level. You should find two netCDF files, one for the inputs and one for the outputs. The inputs should be at the native resolution of your data and the outputs at the projected (model) resolution; a minimal check is sketched below.
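
    For instance, a quick way to compare the two resolutions (the file paths below are placeholders; adapt them to what you actually find at the deepest level):

    import xarray as xr

    # Placeholders: replace by the actual paths found under the transform folder
    ds_in = xr.open_dataset("path/to/transform_debug/deepest/level/inputs.nc")
    ds_out = xr.open_dataset("path/to/transform_debug/deepest/level/outputs.nc")

    print(ds_in.dims)   # should match the native resolution of your data
    print(ds_out.dims)  # should match your model's domain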

read

The read function simply reads data for the list of dates and files deduced from the fetch function. The expected structure for the read function is shown here: read.

This function is rather straightforward to implement. Be sure that your outputs have the following structure:

# output_data has shape (ndate, nlevel, nlat, nlon)
# output_dates contains the start date of each time interval

return xr.DataArray(
    output_data,
    coords={"time": output_dates},
    dims=("time", "lev", "lat", "lon"),
)
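
A more complete sketch is given below, assuming netCDF input files with an emission variable named "emis" of shape (time, lev, lat, lon); the signature shown and the structure assumed for the dates and files arguments are indicative only and must be checked against the read documentation page.

import numpy as np
import xarray as xr


def read(self, name, varnames, dates, files, interpol_flx=False, tracer=None, **kwargs):
    output_data = []
    output_dates = []
    # Assumption: dates and files are aligned, with one list of [start, end]
    # intervals per input file
    for intervals, ff in zip(dates, files):
        with xr.open_dataset(ff) as ds:
            output_data.append(ds["emis"].values)  # (ntime, nlev, nlat, nlon)
        output_dates.extend([dd[0] for dd in intervals])  # start of each interval

    return xr.DataArray(
        np.concatenate(output_data, axis=0),
        coords={"time": output_dates},
        dims=("time", "lev", "lat", "lon"),
    )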

Similarly to the get_domain function, it is possible to check that the read function is properly implemented by using the option save_debug and checking that the input fluxes are correct.

Warning

It is likely that the fluxes in your native data stream do not have the same unit as the one expected by your model. To convert the unit properly, add a unit_conversion paragraph to your YAML file (for instance, a scale of 1/3600 would convert fluxes given per hour into fluxes given per second):

datavect:
  plugin:
    name: standard
    version: std
  components:
    flux:
      parameters:
        CO2:
          plugin:
            name: your_new_name
            type: flux
            version: your_version
          dir: path_to_data
          file: file_name
          unit_conversion:
            scale: ${scaling factor to apply}

write (optional)

This function is optional and is only necessary when it is called by other plugins. You probably do not need to bother with it at the moment.