diff --git a/docs/assets/model_evaluation/Create model profile.png b/docs/assets/model_evaluation/Create model profile.png
deleted file mode 100644
index 27bb80b6..00000000
Binary files a/docs/assets/model_evaluation/Create model profile.png and /dev/null differ
diff --git a/docs/assets/model_evaluation/New model output.png b/docs/assets/model_evaluation/New model output.png
deleted file mode 100644
index 48725f12..00000000
Binary files a/docs/assets/model_evaluation/New model output.png and /dev/null differ
diff --git a/docs/assets/model_evaluation/Run analysis.png b/docs/assets/model_evaluation/Run analysis.png
deleted file mode 100644
index cba09450..00000000
Binary files a/docs/assets/model_evaluation/Run analysis.png and /dev/null differ
diff --git a/docs/assets/model_evaluation/meorg initialise.png b/docs/assets/model_evaluation/meorg initialise.png
new file mode 100644
index 00000000..78362853
Binary files /dev/null and b/docs/assets/model_evaluation/meorg initialise.png differ
diff --git a/docs/user_guide/config_options.md b/docs/user_guide/config_options.md
index 2c83ed44..4c152344 100644
--- a/docs/user_guide/config_options.md
+++ b/docs/user_guide/config_options.md
@@ -375,9 +375,21 @@ realisations:
 
 ### [meorg_output_name](#meorg_output_name)
 
-: **Default:** unset, _optional key_. :octicons-dash-24: Chosen as the model name for one of the realisations, if the user wants to upload the Model Output to me.org for further analysis. A `base32` format hash derived from `model_profile_id` and `$USER` is appended to the model name.
-
-Note: It is the user's responsbility to ensure the model output name does not clash with existing names belonging to other users on modelevaluation.org. The realisation name is set via `name` if provided, otherwise the default realisation name of the `Repo`.
+: **Default:** unset, _optional key_. :octicons-dash-24: Chosen as the model name for one of the realisations, if the user wants to upload the Model Output to me.org for further analysis. The following workflow is executed:
+
+1. A `model_output_name` is created using the format `<realisation_name>-<hash>`. Here, the `realisation_name` is taken from the realisation where `meorg_output_name` is set to `true`.
+    **Note**: The `realisation_name` is set via [name](#name) if provided, otherwise the default repository name is used. A 6-character hash derived from `realisations`, `model_profile_id` and `$USER` is appended at the end. The hash minimises name conflicts between different users.
+    **Note**: If the `model_output_name` already exists on `me.org`, the files within that model output are deleted. This sends a fresh set of benchmarking results for analysis and ensures that the user can re-run `benchcab` without any issues.
+2. The following settings are used by default for the model output:
+    * Model Profile - `CABLE`
+    * State Selection - `default`
+    * Parameter Selection - `automated`
+    * Bundled experiments - `true`
+    * Comments - `none`
+3. Depending on the fluxsite [`experiment`](#experiment), `benchcab` will do the following:
+    - Add the corresponding experiment to the model output.
+    - Associate the experiment with the base benchmark (already stored on `me.org`) and the other listed realisations (since they share the same experiment).
+4. Run the analysis and provide the user with a link to check its status.
+
+The model output name should also follow the GitHub issue branch format (i.e. it should start with a digit, with words separated by dashes). Finally, the maximum number of characters allowed for `meorg_output_name` is 50.
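+
+For illustration, a realisation on a branch named `123-my-branch` would produce a model output name such as the following (the 6-character hash shown is only an example):
+
+```
+123-my-branch-34akg9
+```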
@@ -393,12 +405,6 @@ realisations:
       git:
         branch: 456-my-branch
 ```
-f(mo_name, user, profile)
-123-my-branch-34akg9 # Add by default
-
-
-
-
 ### [name](#name)
 
diff --git a/docs/user_guide/index.md b/docs/user_guide/index.md
index ffd99a78..204ac13e 100644
--- a/docs/user_guide/index.md
+++ b/docs/user_guide/index.md
@@ -237,13 +237,13 @@ The following files and directories are created when `benchcab run` executes suc
 
 ## Analyse the output with [modelevaluation.org][meorg]
 
-Once the benchmarking has finished running all the simulations, you need to upload the output files to [modelevaluation.org][meorg] via the web interface. To do this:
-
 !!! warning "Limitations"
     Model evaluation for offline spatial outputs is not yet available (see issue [CABLE-LSM/benchcab#193](https://github.com/CABLE-LSM/benchcab/issues/193)).
 
+`benchcab` communicates with `meorg` using the `meorg_client` package (available in the `xp65` conda environment on Gadi). The benchmarking results are uploaded to `modelevaluation.org` for further analysis and can then be viewed via the web interface. To enable this support:
+
 1. Go to [modelevaluation.org][meorg] and login or create a new account.
-2. Navigate to the `benchcab-evaluation` workspace. To do this, click the **Current Workspace** button at the top of the page, and select `benchcab-evaluation` under "Workspaces Shared With Me".
+2. To view the analyses in the web interface, you need to enable the `benchcab-evaluation` workspace. To do this, click the **Current Workspace** button at the top of the page, and select `benchcab-evaluation` under "Workspaces Shared With Me".
![Workspace Button](../assets/model_evaluation/Current%20Workspace%20button.png){ width="500" }
Button to choose workspace
@@ -253,48 +253,13 @@ Once the benchmarking has finished running all the simulations, you need to uplo
Workspaces available to you
-3. Create a model profile for your set of model outputs. You can see [this example][model_profile_eg] to get started. To create your own, select the **Model Profiles** tab and click **Create Model Profile**.
-
-    ![Model profile](../assets/model_evaluation/Create%20model%20profile.png){ width="500" }
-    Create model profile
-
-    The model profile should describe the versions of CABLE used to generate the model outputs and the URLs to the repository pointing to the code versions. You are free to set the name as you like.
-
-4. Upload model outputs created by `benchcab` by doing the following:
-    1. Transfer model outputs from the `runs/fluxsite/outputs/` directory to your local computer so that they can be uploaded via the web interface.
-    2. Create a new model output form. You can see [this example][model_output_eg] to get started. To create your own, select the **Model Outputs** tab on [modelevaluation.org][meorg] and click **Upload Model Output**.
-
-        ![Model output](../assets/model_evaluation/New%20model%20output.png){ width="500" }
-        Create model output
-
-    3. Fill out the fields for "Name", "Experiment" and "Model" ("State Selection", "Parameter Selection" and "Comments" are optional):
-        - **The experiment** should correspond to the experiment specified in the [configuration file][config_options] used to run `benchcab`.
-        - **The model** should correspond to the Model Profile created in the previous step.
-        - Optionally, in **the comments**, you may also want to include the URL to the Github repository containing the benchcab configuration file used to run `benchcab` and any other information needed to reproduce the outputs.
-    4. Under "Model Output Files", click **Upload Files**. This should prompt you to select the model outputs you want to upload from your file system. We recommend users to make their model outputs public to download by checking **Downloadable by other users**.
-
-        ![Public output](../assets/model_evaluation/Public%20output.png){ width="300" }
-        Make model output public
-
-    5. Under "Benchmarks", you may need to add a benchmark depending on the experiment chosen. This is an error and will be fixed soon.
-        - **Five site test** and **Forty two site test**: a benchmark is required to run the analysis for the `Five site test` experiment. You can use:
-            - [this model profile][benchmark_5] as a benchmark for the **five site experiment**.
-            - [this model profile][benchmark_42] as a benchmark for the **forty-two site experiment**.
-        - **single site experiments**: No benchmark is required. You can add your own if you would like to. You can use [this example][benchmark_eg] to know how to set up your own model output as a benchmark.
-
-    6. **Save** your model output!
-
-5. Once the model outputs have been uploaded you can then start the analysis by clicking the **Run Analysis** button at the top of the page. The same button is also found at the bottom of the page.
-
-    ![Run analysis](../assets/model_evaluation/Run%20analysis.png){ width="700" }
-    Run analysis button
-
+3. `benchcab` requires the necessary permissions to interface with `meorg` on your behalf. Use `meorg initialise` to create the credentials file (a sketch of the setup is given below the screenshot).
+
+    ![Initialise meorg_client](../assets/model_evaluation/meorg%20initialise.png){ width="500" }
+    Initialising `meorg_client`
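+
+    A minimal sketch of the setup on Gadi is shown below. The module names are indicative only and may differ; check the `xp65` project documentation for the environment that provides `meorg_client`:
+
+    ```sh
+    # Make the xp65 conda environments available on Gadi (assumed module path and name)
+    module use /g/data/xp65/public/modules
+    module load conda/access-med
+
+    # Create the modelevaluation.org credentials file
+    meorg initialise
+    ```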
-
-6. Once the analysis has completed, view the generated plots by clicking **view plots** under "Analyses".
+4. Run `benchcab`, making sure to set [`meorg_output_name`](config_options.md#meorg_output_name) to `true` for one of the realisations to enable the analysis workflow (a minimal configuration sketch is given at the end of this section). The workflow runs as a PBS job script; once the files have been uploaded and the analysis has started, the job script output will contain a link for checking the analysis status and plots.
+5. Once the analysis has completed, besides the direct link provided above, you can also view the generated plots by clicking **view plots** under "Analyses".
![View plots](../assets/model_evaluation/View%20plot.png){ width="500" }
Link to plots
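+
+As a minimal sketch (the branch names below are hypothetical and the full realisation layout is described under [meorg_output_name](config_options.md#meorg_output_name)), a configuration that enables the upload might look like:
+
+```yaml
+realisations:
+  - repo:
+      git:
+        branch: main              # reference version
+  - repo:
+      git:
+        branch: 123-my-branch     # development version to upload
+    meorg_output_name: true       # upload this realisation's output to me.org
+```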