
Compute reward functions for all weeks from 1 to 52 and for all scenarios in mcyears for the given area, based on the simulations listed in simulation_names. For a given week and scenario, the reward function is evaluated from the results of all simulations, depending on the method selected via method_old. Mainly used in Grid_Matrix().

Usage

get_Reward(
  simulation_values = NULL,
  simulation_names = NULL,
  opts,
  correct_monotony = FALSE,
  method_old = TRUE,
  possible_controls = NULL,
  max_hydro_hourly = NULL,
  mcyears = "all",
  area,
  efficiency = NULL,
  expansion = FALSE
)

Arguments

simulation_values

A dplyr::tibble() with columns "week", "sim", "u" and optionally "mcYear", giving the constraint value per week (and per scenario) used in each simulation. Corresponds to the simulation_values output of runWaterValuesSimulation().

simulation_names

Vector of character. Names of the simulations used to compute rewards. Corresponds to the simulation_names output of runWaterValuesSimulation().

opts

List of study parameters returned by antaresRead::setSimulationPath(simulation = "input"), i.e. with the study opened in input mode.

correct_monotony

Binary. TRUE to correct the monotony of rewards; only applies when method_old = TRUE.

method_old

Binary. TRUE to use the old method to build the reward function, FALSE to use the new one. See vignette("Reward-interpolation").

possible_controls

Only used if method_old = FALSE. Controls for which to compute the reward, generated by constraint_generator().

max_hydro_hourly

Maximum hourly pumping and generating power, as returned by get_max_hydro() with timeStep = "hourly".

mcyears

Vector of integer, or "all" (the default). Monte Carlo years used to compute water values.

area

Character. The Antares area concerned by water values computation.

efficiency

Double between 0 and 1. Pumping efficiency ratio, which can be retrieved with getPumpEfficiency().

expansion

Binary. TRUE if the expansion mode (i.e. linear relaxation) of Antares is used to run the simulations; passed to runSimulation. Expansion mode is recommended: it is faster (only one iteration is performed) and the results are smoother, since the reported cost corresponds to the linear relaxation of the problem.

Value

reward

A dplyr::tibble() with columns "timeId", "mcYear", "control" and "reward". Reward functions for all weeks (timeId) and scenarios (mcYear).

local_reward

Only returned if method_old = FALSE. A dplyr::tibble() with columns "week", "mcYear", "u", "reward" and "simulation". The reward functions of each individual simulation, computed with get_local_reward() and reward_offset().

simulation_names

See arguments.

simulation_values

See arguments.
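
Examples

A minimal sketch of a typical call, following the workflow described above. The study path, area name and the exact arguments of runWaterValuesSimulation() are assumptions for illustration, not values from this documentation; this cannot run without an actual Antares study on disk.

```r
library(antaresRead)

# Open the study in input mode, as required by the `opts` argument.
# "path/to/study" is a placeholder, not a real path.
opts <- setSimulationPath("path/to/study", simulation = "input")

# Simulations previously launched with runWaterValuesSimulation();
# additional arguments are likely needed depending on the study.
sim <- runWaterValuesSimulation(area = "my_area", opts = opts)

# Reward functions for every week and Monte Carlo scenario.
res <- get_Reward(
  simulation_values = sim$simulation_values,
  simulation_names  = sim$simulation_names,
  opts              = opts,
  area              = "my_area",
  mcyears           = "all",
  efficiency        = getPumpEfficiency(area = "my_area", opts = opts)
)

# res$reward is a tibble with columns timeId, mcYear, control and reward.
head(res$reward)
```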