Calculate an optimal trajectory for the reservoir levels based on water values, taking into account the mean inflow. Used in calculateBellmanWithIterativeSimulations.
Source: R/iterations_simulation_DP.R
Calculate an optimal trajectory for the reservoir levels based on water values, taking into account the mean inflow. Used in calculateBellmanWithIterativeSimulations.
Usage
getOptimalTrend(
level_init,
watervalues,
mcyears,
reward,
controls,
niveau_max,
df_levels,
penalty_low,
penalty_high,
penalty_final_level,
final_level,
max_hydro_weekly,
n = 0,
pump_eff,
mix_scenario = TRUE,
df_previous_cut = NULL
)
Arguments
- level_init
Initial level of the reservoir in MWh
- watervalues
Data frame of aggregated water values generated by Grid_Matrix
- mcyears
Vector of Monte Carlo years used to evaluate rewards
- reward
Data frame containing an estimation of the reward function, same format as the output of reward_offset
- controls
Data frame containing the possible transitions for each week, generated by the function constraint_generator
- niveau_max
Capacity of the reservoir in MWh
- df_levels
Data frame containing all previously evaluated controls, same format as the output of getOptimalTrend
- penalty_low
Penalty for violating the bottom rule curve
- penalty_high
Penalty for violating the top rule curve
- penalty_final_level
Penalty for not reaching the final level
- final_level
Final reservoir level to reach at the end of the period
- max_hydro_weekly
Data frame with weekly maximum pumping and generating powers
- n
Iteration number
- pump_eff
Pumping efficiency (1 if no pumping)
- mix_scenario
Should scenarios be mixed from one week to another?
- df_previous_cut
Data frame containing previous estimations of cuts
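Examples
A minimal call sketch, not taken from the package documentation: the input objects (wv_aggregated, weekly_rewards, weekly_controls, previous_levels, max_hydro) are hypothetical placeholders assumed to come from Grid_Matrix, reward_offset, constraint_generator and earlier iterations, and the numeric values are illustrative only.
if (FALSE) { # hypothetical sketch, not run
  trajectory <- getOptimalTrend(
    level_init = 5000,               # initial reservoir level in MWh (placeholder)
    watervalues = wv_aggregated,     # aggregated water values from Grid_Matrix (placeholder object)
    mcyears = 1:10,                  # Monte Carlo years used to evaluate rewards
    reward = weekly_rewards,         # same format as the output of reward_offset (placeholder object)
    controls = weekly_controls,      # weekly transitions from constraint_generator (placeholder object)
    niveau_max = 10000,              # reservoir capacity in MWh (placeholder)
    df_levels = previous_levels,     # previously evaluated controls (placeholder object)
    penalty_low = 1000,              # penalty for violating the bottom rule curve (placeholder)
    penalty_high = 1000,             # penalty for violating the top rule curve (placeholder)
    penalty_final_level = 1000,      # penalty on the final level (placeholder)
    final_level = 5000,              # target final level (placeholder; check units for your setup)
    max_hydro_weekly = max_hydro,    # weekly maximum pumping and generating powers (placeholder object)
    n = 0,                           # iteration number (default shown in Usage)
    pump_eff = 1,                    # pumping efficiency, 1 if no pumping
    mix_scenario = TRUE,             # mix scenarios from one week to the next
    df_previous_cut = NULL           # no previous cut estimations (default shown in Usage)
  )
}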