Reporting

Validation results can be visualized in an HTML report using the function wind_validation.reporting.create_report(). Currently, there are three types of reports one can generate (a usage sketch for all three follows the list):

  1. Aggregated report

    The aggregated report takes one or many points produced by the wind_validation.validation.validate() method. It shows mean error values and histograms or distributions of the error metrics for a point-to-point comparison between model and observations. The report can be opened directly in your browser for inspection (see the show parameter).

  2. Comparison report

    The comparison report is similar, but accepts a list of multiple validation results and shows a side-by-side comparison of how different models perform on the same set of sensor data points. This can help you judge which model performs better. The report type is selected by the type of the input: a single validation object produces an aggregated report, while a list produces a comparison report. For more information on the accepted types, see the documentation below.

  3. Detailed report

    This report is generated when you pass in a dataset with a single point. All the metrics and statistics that were used to generate the validation are reported, and the map is automatically zoomed in to the site. This is useful for a detailed investigation of a single site, for example one that shows large errors.
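
A minimal sketch of all three invocations is shown below. The datasets result_a and result_b, the site and model labels, and the "point" dimension name are illustrative assumptions; only create_report() and its documented keywords come from this page:

    from wind_validation.reporting import create_report

    # result_a and result_b are assumed to be xr.Dataset objects returned by
    # wind_validation.validation.validate() for two models on the same points.

    # 1. Aggregated report: a single validation dataset with many points.
    create_report(result_a, show=True)

    # 2. Comparison report: a list of validation results produces a side-by-side
    #    comparison; name lists are required to tell the inputs apart.
    create_report(
        [result_a, result_b],
        obs_name=["Site A", "Site A"],
        mod_name=["Model 1", "Model 2"],
    )

    # 3. Detailed report: a dataset holding a single point yields the detailed
    #    per-site report ("point" is an assumed dimension name).
    create_report(result_a.isel(point=[0]))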

wind_validation.reporting.create_report(validation_results: Union[list[xr.Dataset], xr.Dataset], **kwargs) → None

Create an HTML report from validation results, comparing a model with observations. It is also possible to compare multiple models on the same dataset to inspect the performance of different techniques against observations.

Parameters
  • validation_results (xarray.Dataset | list[xr.Dataset]) – Validation result produced by the main “validate” function; its output is directly compatible with this reporting function. When comparing multiple models, a list of datasets with the same structure can be passed.

  • obs_name (str | list[str], optional) – Observation site name(s), used in the general description of the report. A single string suffices when comparing a single model to a single observation dataset; for a comparison report, a list of names is required, since there is otherwise no way to distinguish the datasets.

  • mod_name (str | list[str], optional) – Model name(s), used in the general description of the report. A single string suffices when comparing a single model to a single observation dataset; for a comparison report, a list of names is required, since there is otherwise no way to distinguish the models.

  • dest (str, optional) – Path of the HTML file to write the report to. If not provided, the file is saved in a temporary directory. By default None.

  • author (str, optional) – Author of the generated report, stated in its description. By default “”.

  • show (bool, optional) – If True, tries to open the report in the browser right away. By default False.

Raises

RuntimeError – If the destination folder does not exist, if the type of the validation results does not match the documented ones, or if model and observation names are not provided for a comparison report.
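
A usage sketch with the optional keywords (the dataset variable and all labels are placeholders, assumed for illustration):

    import os

    from wind_validation.reporting import create_report

    out_dir = "reports"
    # The destination folder must exist, otherwise report generation
    # raises a RuntimeError.
    os.makedirs(out_dir, exist_ok=True)

    create_report(
        validation_result,              # xr.Dataset from validate(); placeholder name
        obs_name="Mast 42",             # illustrative observation site name
        mod_name="WRF 3 km",            # illustrative model name
        dest=os.path.join(out_dir, "validation_report.html"),
        author="Jane Doe",
        show=False,                     # set True to open the report in a browser
    )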