Binary file removed docs/assets/model_evaluation/New model output.png
Binary file removed docs/assets/model_evaluation/Run analysis.png
Binary file added docs/assets/model_evaluation/meorg initialise.png
24 changes: 15 additions & 9 deletions docs/user_guide/config_options.md
@@ -375,9 +375,21 @@ realisations:
### [meorg_output_name](#meorg_output_name)


: **Default:** unset, _optional key_. :octicons-dash-24: Chosen as the model name for one of the realisations, if the user wants to upload the Model Output to me.org for further analysis. A `base32` format hash derived from `model_profile_id` and `$USER` is appended to the model name.

Note: It is the user's responsibility to ensure the model output name does not clash with existing names belonging to other users on modelevaluation.org. The realisation name is set via `name` if provided, otherwise the default realisation name of the `Repo`.
: **Default:** unset, _optional key_. :octicons-dash-24: Chosen as the model name for one of the realisations, if the user wants to upload the Model Output to me.org for further analysis. The following workflow is executed:

1. A `model_output_name` is created using the format `<realisation_name>-<hash>`, where the `realisation_name` is taken from the realisation for which `meorg_output_name` is set to `true`.
**Note**: The `realisation_name` is set via [name](#name) if provided, otherwise the default repository name is used. A 6-character hash derived from `realisations`, `model_profile_id` and `$USER` is appended to the end; the hash minimises name conflicts between different users.
**Note**: If the `model_output_name` already exists on `me.org`, the files within that model output are deleted. This sends a fresh set of benchmarking results for analysis and ensures that the user can re-run `benchcab` without any issues.
2. The following settings are taken by default for the model output:
* Model Profile - `CABLE`
* State Selection - `default`
* Parameter Selection - `automated`
* Bundled experiments - `true`
* Comments - `none`
3. Depending on the fluxsite [`experiment`](#experiment), `benchcab` will do the following:
- Add the corresponding experiment to the model output.
- Associate the experiment with the base benchmark (already stored on `me.org`) and with the other listed realisations (since they share the same experiment).
4. Run the analysis and provide a link for the user to check its status.

The model output name should also follow the GitHub issue branch format (i.e. it should start with a digit, with words separated by dashes). Finally, the maximum number of characters allowed for `meorg_output_name` is 50.
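For illustration only, the naming logic described above could be sketched as follows. This is a rough approximation: the function name, hash algorithm and exact seed layout are assumptions rather than `benchcab`'s actual implementation.

```python
import base64
import hashlib
import os
import re


def build_meorg_output_name(realisation_name: str, realisations_dump: str, model_profile_id: str) -> str:
    """Append a 6-character base32 hash to the realisation name."""
    # Hash the realisations, the model profile ID and $USER so that
    # different users and configurations are unlikely to clash on me.org.
    seed = f"{realisations_dump}{model_profile_id}{os.environ.get('USER', '')}".encode()
    suffix = base64.b32encode(hashlib.sha256(seed).digest()).decode().lower()[:6]

    name = f"{realisation_name}-{suffix}"

    # The name must follow the GitHub issue branch format (start with a digit,
    # words separated by dashes) and must not exceed 50 characters.
    if not re.fullmatch(r"\d[a-z0-9-]*", name) or len(name) > 50:
        raise ValueError(f"Invalid model output name: {name}")
    return name


# Example usage; the printed suffix depends on $USER and the other inputs.
print(build_meorg_output_name("456-my-branch", "<realisations section of config.yaml>", "abc123"))
```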

@@ -393,12 +405,6 @@ realisations:
git:
branch: 456-my-branch
```
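As an illustrative sketch (not taken verbatim from the documentation), a realisation that opts in to the me.org upload could look like this in `config.yaml`, assuming `meorg_output_name` sits at the realisation level alongside `repo`:

```yaml
realisations:
  - repo:
      git:
        branch: 456-my-branch
    meorg_output_name: true  # upload this realisation's output to me.org
```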
f(mo_name, user, profile)
123-my-branch-34akg9 # Add by default

<!-- Branch name different from mo_name -->



### [name](#name)

51 changes: 8 additions & 43 deletions docs/user_guide/index.md
@@ -237,13 +237,13 @@ The following files and directories are created when `benchcab run` executes suc

## Analyse the output with [modelevaluation.org][meorg]

Once the benchmarking has finished running all the simulations, you need to upload the output files to [modelevaluation.org][meorg] via the web interface. To do this:

!!! warning "Limitations"
Model evaluation for offline spatial outputs is not yet available (see issue [CABLE-LSM/benchcab#193](https://github.com/CABLE-LSM/benchcab/issues/193)).

`benchcab` communicates with `meorg` using the `meorg_client` package (available in the `xp65` conda environment on Gadi). The benchmarking results are uploaded to `modelevaluation.org` for further analysis and can be viewed via the web interface. To enable support:

1. Go to [modelevaluation.org][meorg] and log in or create a new account.
2. Navigate to the `benchcab-evaluation` workspace. To do this, click the **Current Workspace** button at the top of the page, and select `benchcab-evaluation` under "Workspaces Shared With Me".
2. To view analyses in the web interface, you need to select the `benchcab-evaluation` workspace. To do this, click the **Current Workspace** button at the top of the page and select `benchcab-evaluation` under "Workspaces Shared With Me".
<figure markdown>
![Workspace Button](../assets/model_evaluation/Current%20Workspace%20button.png){ width="500" }
<figcaption>Button to choose workspace</figcaption>
@@ -253,48 +253,13 @@ Once the benchmarking has finished running all the simulations, you need to uplo
<figcaption>Workspaces available to you</figcaption>
</figure>

3. Create a model profile for your set of model outputs. You can see [this example][model_profile_eg] to get started. To create your own, select the **Model Profiles** tab and click **Create Model Profile**.
<figure markdown>
![Model profile](../assets/model_evaluation/Create%20model%20profile.png){ width="500" }
<figcaption>Create model profile</figcaption>
</figure>

The model profile should describe the versions of CABLE used to generate the model outputs and the URLs to the repository pointing to the code versions. You are free to set the name as you like.

4. Upload model outputs created by `benchcab` by doing the following:
1. Transfer model outputs from the `runs/fluxsite/outputs/` directory to your local computer so that they can be uploaded via the web interface.
2. Create a new model output form. You can see [this example][model_output_eg] to get started. To create your own, select the **Model Outputs** tab on [modelevaluation.org][meorg] and click **Upload Model Output**.
<figure markdown>
![Model output](../assets/model_evaluation/New%20model%20output.png){ width="500" }
<figcaption>Create model output</figcaption>
</figure>

3. Fill out the fields for "Name", "Experiment" and "Model" ("State Selection", "Parameter Selection" and "Comments" are optional):
- **The experiment** should correspond to the experiment specified in the [configuration file][config_options] used to run `benchcab`.
- **The model** should correspond to the Model Profile created in the previous step.
- Optionally, in **the comments**, you may also want to include the URL to the Github repository containing the benchcab configuration file used to run `benchcab` and any other information needed to reproduce the outputs.

4. Under "Model Output Files", click **Upload Files**. This should prompt you to select the model outputs you want to upload from your file system. We recommend users to make their model outputs public to download by checking **Downloadable by other users**.
<figure markdown>
![Public output](../assets/model_evaluation/Public%20output.png){ width="300" }
<figcaption>Make model output public</figcaption>
</figure>

5. Under "Benchmarks", you may need to add a benchmark depending on the experiment chosen. This is an error and will be fixed soon.
- **Five site test** and **Forty two site test**: a benchmark is required to run the analysis for the `Five site test` experiment. You can use:
- [this model profile][benchmark_5] as a benchmark for the **five site experiment**.
- [this model profile][benchmark_42] as a benchmark for the **forty-two site experiment**.
- **single site experiments**: No benchmark is required. You can add your own if you would like to. You can use [this example][benchmark_eg] to know how to set up your own model output as a benchmark.

6. **Save** your model output!

5. Once the model outputs have been uploaded you can then start the analysis by clicking the **Run Analysis** button at the top of the page. The same button is also found at the bottom of the page.
2. `benchcab` requires credentials to interface with `meorg`. Run `meorg initialise` to create the credentials file (see the example after the figure below).
<figure markdown>
![Run analysis](../assets/model_evaluation/Run%20analysis.png){ width="700" }
<figcaption>Run analysis button</figcaption>
![meorg initialise](../assets/model_evaluation/meorg%20initialise.png){ width="500" }
<figcaption>Initialising `meorg_client`</figcaption>
</figure>
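As a rough example of this step on Gadi (the module names for the `xp65` environment are an assumption and may differ; check the ACCESS-NRI documentation for the current ones):

```bash
# Load the conda environment that provides meorg_client (exact module names may differ).
module use /g/data/xp65/public/modules
module load conda/access-med  # or whichever xp65 environment provides meorg_client

# Create the credentials file that benchcab uses to talk to modelevaluation.org.
meorg initialise
```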

6. Once the analysis has completed, view the generated plots by clicking **view plots** under "Analyses".
3. Run `benchcab`, making sure to set [`meorg_output_name`](config_options.md#meorg_output_name) to `true` for one of the realisations to enable the analysis workflow (run as a PBS job script). Once the files have been submitted and the analysis has started, the job script output will contain a link for checking the analysis status and plots.
4. Once the analysis has completed, besides using the direct link provided above, you can also view the generated plots by clicking **view plots** under "Analyses".
<figure markdown>
![View plots](../assets/model_evaluation/View%20plot.png){ width="500" }
<figcaption>Link to plots</figcaption>