How to add a new basic transformation#


Before starting to implement a new transformation, you must have:

  • a yaml file ready with a simulation that works with a known transformation

The sections below guide you through the different documentation pages needed to implement your basic transformation. The main reference pages are the basic transformations’ documentation page and the basic transformation template documentation page.

Switch from a working basic transformation to the reference template#

Here, we take the example of a basic transformation applied to fluxes in CHIMERE. Copy the template with a new name and use it in your yaml; the datavect paragraph of your working yaml should then look like this:

datavect:
  plugin:
    name: standard
    version: std
  components:
    flux:
      parameters:
        CH4:
          plugin:
            name: CHIMERE
            type: flux
            version: AEMISSIONS
          dir: /home/users/ipison/CHIMERE/debugchimere/
          file:
          file_freq: 3H
          your_new_basic_transf:
            dummy_arg: "new!"
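To make the link between the yaml paragraph and the plugin concrete, here is a minimal sketch of how the new argument could be declared and read. The names (`input_arguments`, `read_argument`) are illustrative assumptions, not the actual pycif interface; follow the template page for the real structure.

```python
# Hypothetical sketch: declaring and reading the argument of a new
# basic transformation. Names are illustrative, not the pycif API.
input_arguments = {
    "dummy_arg": {
        "doc": "Free-text argument passed from the yaml to the transformation",
        "default": "default value",
        "accepted": str,
    }
}

def read_argument(yaml_params, name):
    """Return the value given in the yaml paragraph, or the declared default."""
    if name in yaml_params:
        return yaml_params[name]
    return input_arguments[name]["default"]

# Mimicking the your_new_basic_transf paragraph of the yaml above
yaml_params = {"dummy_arg": "new!"}
print(read_argument(yaml_params, "dummy_arg"))  # prints: new!
```

Declaring the default alongside the documentation string keeps the yaml optional arguments self-documenting.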

Document your plugin#

Before going further, be sure to document your plugin properly.

To do so, please replace the docstring header in the plugin file.

Include the following information:

  • licensing information

  • permanent link to download the data (or a contact person if no link is publicly available)

  • data format (temporal and horizontal resolution, names and shape of the data files)

  • any specific treatment that prevents the plugin from working with other types of files.
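As a sketch, the header could gather the four items above as follows; every item between angle brackets is a placeholder to fill in for your own datastream, and the layout itself is only a suggestion.

```python
# Hypothetical docstring header for the plugin file; items in angle
# brackets are placeholders, not real licenses, links or formats.
PLUGIN_DOCSTRING = """Fluxes from <data product name>.

Licensing: <license under which the data are distributed>
Download: <permanent link, or a contact person if no public link exists>
Format: <temporal and horizontal resolution, names and shapes of the data files>
Caveats: <any specific treatment preventing use with other types of files>
"""

# The four pieces of information listed above should all be present:
for keyword in ("Licensing", "Download", "Format", "Caveats"):
    assert keyword in PLUGIN_DOCSTRING
```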

Build and check the documentation#

Before going further, please compile the documentation and check that your new plugin appears in the list of datastream plugins here.

Also check that the documentation of your new plugin is satisfactory.

To compile the documentation, use the command:

cd $CIF_root/docs
make html

Further details can be found here.

Updating functions and data to implement your flux data#

Your new plugin needs several functions to be implemented in order to work.

The fetch function determines what files and corresponding dates are available for running the present case. The structure of the fetch function is shown here: fetch. Please read carefully all the explanations therein before starting to implement your case.
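The gist of what fetch must do can be sketched as below. The name, signature and return type here are assumptions for illustration only; the real template linked above defines the actual interface.

```python
# Hypothetical sketch of what a fetch function must determine: which
# input files exist and which dates they cover. Signature and return
# type are illustrative; follow the real fetch template instead.
import datetime
import os

def fetch(ref_dir, ref_file, dates):
    """Return a {date: path} dict for the dates whose file is available.

    ref_file is assumed to contain strftime patterns (e.g.
    'fluxes_%Y%m%d_%H.nc'), matching the 'file' argument of the yaml.
    """
    available = {}
    for date in dates:
        path = os.path.join(ref_dir, date.strftime(ref_file))
        if os.path.isfile(path):
            available[date] = path
    return available
```

Resolving date placeholders in the file name is what lets the same yaml `dir`/`file` pair describe a whole archive of input files.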

By default, the fetch function will use the arguments dir and file in your yaml. Make sure to update your yaml accordingly:

datavect:
  plugin:
    name: standard
    version: std
  components:
    flux:
      parameters:
        CO2:
          plugin:
            name: your_new_name
            type: flux
            version: your_version
          dir: path_to_data
          file: file_name

Depending on how you implement your data stream, extra parameters may be needed. Please document them as you add them in the input_arguments variable of your plugin.

One common parameter is file_freq, which gives the frequency of the input files (independently of the simulation to be computed).
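For instance, a file_freq of 3H means input files exist every three hours whatever the simulation window is. This hypothetical helper (not part of pycif) lists the 3-hourly file dates covering one simulation day:

```python
# Sketch of what file_freq expresses: input files come at a fixed
# frequency, independently of the simulation window (hypothetical helper).
import datetime

def file_dates(start, end, freq_hours):
    """List the file dates at the given frequency covering [start, end)."""
    dates = []
    current = start
    while current < end:
        dates.append(current)
        current += datetime.timedelta(hours=freq_hours)
    return dates

dates = file_dates(datetime.datetime(2019, 1, 1),
                   datetime.datetime(2019, 1, 2), 3)
print(len(dates))  # prints: 8 (3-hourly files covering one day)
```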

Once implemented, re-run your test case and check that everything went as expected:

  1. in the folder $workdir/datavect/flux/your_species/, links to the original data files should be initialized;

  2. the list of dates and files should be initialized as expected. To check it, use the option dump_debug in the datavect paragraph of the yaml (see details here); it dumps the list of dates and files into a file named $workdir/datavect/flux.your_species.txt
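As an illustration of the kind of content such a dump contains (one line per date with the corresponding input file; the exact format of the real flux.your_species.txt may differ), consider this hypothetical sketch:

```python
# Hypothetical illustration of a dates-and-files dump; the real
# dump_debug output format may differ.
import datetime
import io

list_files = {
    datetime.datetime(2019, 1, 1, h): "fluxes_20190101_%02d.nc" % h
    for h in (0, 3, 6)
}

dump = io.StringIO()
for date in sorted(list_files):
    dump.write("{} {}\n".format(date, list_files[date]))

print(dump.getvalue())
```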