########################################
How to add a new basic transformation
########################################

.. role:: bash(code)
   :language: bash

Pre-requisites
================

Before starting to implement a new transformation, you must have:

- a yaml file ready with a simulation that works with a known transformation

We help you below to navigate through the different documentation pages needed to implement your basic transformation.
The main reference pages are :doc:`the basic transformations' documentation page ` and
:doc:`the basic transformation template documentation page`.

Switch from a working basic transformation to the reference template
====================================================================

Here, we take the example of a basic transformation applied to fluxes in CHIMERE.
The :bash:`datavect` paragraph of your working yaml should look like this:

.. container:: toggle

    .. container:: header

        Example with CHIMERE

    .. code-block:: yaml
        :linenos:

        datavect:
          plugin:
            name: standard
            version: std
          components:
            flux:
              parameters:
                CH4:
                  plugin:
                    name: CHIMERE
                    type: flux
                    version: AEMISSIONS
                  dir: /home/users/ipison/CHIMERE/debugchimere/
                  file: AEMISSIONS.%Y%m%d%H.3.nc
                  file_freq: 3H
                  unit_conversion:
                    scale: 1e12

Copy the template with the new name and use it in your yaml:

.. container:: toggle

    .. container:: header

        Show/Hide Code

    .. code-block:: yaml
        :linenos:

        datavect:
          plugin:
            name: standard
            version: std
          components:
            flux:
              parameters:
                CH4:
                  plugin:
                    name: CHIMERE
                    type: flux
                    version: AEMISSIONS
                  dir: /home/users/ipison/CHIMERE/debugchimere/
                  file: AEMISSIONS.%Y%m%d%H.3.nc
                  file_freq: 3H
                  your_new_basic_transf:
                    dummy_arg: "new!"

Document your plugin
====================

Before going further, be sure to document your plugin properly.
To do so, please replace the docstring header in the file :bash:`__init__.py`.
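The copy step above can be sketched with a few shell commands. The directory layout below is an assumption for illustration only: adapt :bash:`CIF_ROOT`, the template location, and the new plugin name to your actual CIF installation.

.. code-block:: bash

    # Hypothetical layout: adapt the paths to your actual CIF source tree.
    CIF_ROOT=${CIF_ROOT:-"$(mktemp -d)"}       # stand-in root for illustration
    TEMPLATE_DIR="$CIF_ROOT/pycif/plugins/transforms/basic/template"
    NEW_DIR="$CIF_ROOT/pycif/plugins/transforms/basic/your_new_basic_transf"

    mkdir -p "$TEMPLATE_DIR"                   # (in practice, the template ships with CIF)
    touch "$TEMPLATE_DIR/__init__.py"

    # Copy the template under the new plugin name; the name registered in
    # __init__.py must match the name used in your yaml.
    cp -r "$TEMPLATE_DIR" "$NEW_DIR"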
Include the following information:

- licensing information
- permanent link to download the data (or a contact person if no link is publicly available)
- data format (temporal and horizontal resolution, names and shape of the data files)
- any specific treatment that prevents the plugin from working with another type of files

Build and check the documentation
=================================

Before going further, please compile the documentation and check that your new plugin appears
in the list of datastreams plugins :doc:`here`.
Also check that the documentation of your new plugin is satisfactory.

To compile the documentation, use the commands:

.. code-block:: bash

    cd $CIF_root/docs
    make html

Further details can be found :doc:`here`.

Updating functions and data to implement your flux data
=======================================================

Your new plugin needs several functions to be coded in order to work.

fetch
------

The :bash:`fetch` function determines which files and corresponding dates are available for running the present case.
The structure of the :bash:`fetch` function is shown here: :ref:`datastreams-fetch-funtions`.
Please read carefully all the explanations therein before starting to implement your case.

By default, the :bash:`fetch` function uses the arguments :bash:`dir` and :bash:`file` from your yaml.
Make sure to update your yaml accordingly:

.. container:: toggle

    .. container:: header

        Show/Hide Code

    .. code-block:: yaml
        :linenos:

        datavect:
          plugin:
            name: standard
            version: std
          components:
            flux:
              parameters:
                CO2:
                  plugin:
                    name: your_new_name
                    type: flux
                    version: your_version
                  dir: path_to_data
                  file: file_name

Depending on how you implement your data stream, extra parameters may be needed.
Please document them on-the-fly in the :bash:`input_arguments` variable in :bash:`__init__.py`.
One classical parameter is :bash:`file_freq`, which gives the frequency of the input files
(independently of the simulation to be computed).

Once implemented, re-run your test case.
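To fix ideas, here is a minimal, self-contained sketch of what a :bash:`fetch` function can look like. It is an assumption for illustration only: the authoritative signature and return values are those documented in :ref:`datastreams-fetch-funtions`. The sketch assumes :bash:`input_dates` is a list of (start, end) periods and a 3-hourly :bash:`file_freq`, mirroring the CHIMERE example above; all names are illustrative.

.. code-block:: python

    import datetime
    import os


    def fetch(ref_dir, ref_file, input_dates, target_dir, tracer=None, **kwargs):
        """Illustrative sketch only (not the official CIF signature).

        For each simulation period, list the input files (one per
        3-hour slot, mimicking file_freq: 3H) and the date interval
        each file covers. Returns two dictionaries keyed by the
        period start date: available files and corresponding dates.
        """
        list_files = {}
        list_dates = {}
        freq = datetime.timedelta(hours=3)  # assumed file_freq of 3H
        for datei, datef in input_dates:
            files, dates = [], []
            dd = datei
            while dd < datef:
                # ref_file is a strftime pattern, e.g. AEMISSIONS.%Y%m%d%H.3.nc
                files.append(os.path.join(ref_dir, dd.strftime(ref_file)))
                dates.append([dd, dd + freq])
                dd += freq
            list_files[datei] = files
            list_dates[datei] = dates
        return list_files, list_dates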
You can check that everything went as expected by verifying that:

1. in the folder :bash:`$workdir/datavect/flux/your_species/`, links to the original data files are initialized;
2. the list of dates and files is initialized as expected.
   To check this, use the option :bash:`dump_debug` in the :bash:`datavect` paragraph of the yaml (see details :doc:`here`).
   It dumps the list of dates and files into a file named :bash:`$workdir/datavect/flux.your_species.txt`.
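For instance, the debug dump can be switched on as sketched below; the option name comes from the text, but its exact placement at the top level of the :bash:`datavect` paragraph is an assumption to be checked against the linked documentation.

.. code-block:: yaml

    datavect:
      plugin:
        name: standard
        version: std
      dump_debug: true   # placement assumed; writes flux.your_species.txt in $workdir/datavect/
      components:
        flux:
          parameters:
            CH4:
              plugin:
                name: your_new_name
                type: flux
                version: your_version
              dir: path_to_data
              file: file_name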