This script trains an RNN model on previously prepared data, which is loaded into a pandas DataFrame from a CSV file.
Hyperparameter exploration with Optuna is also possible if desired. The trained model is saved at the location
specified under "output_path" in the corresponding config.json and can be loaded via torch.load() or evaluated
with the evaluate.py script. The script scales the data, loads a custom data structure, and then generates and
trains a neural net.
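For instance, once training has finished, the saved model can be restored for inference along these lines (a minimal sketch; the file path below is a placeholder, not the project's actual default):

```python
import torch

# Placeholder path: in practice, use the value stored under "output_path"
# in the corresponding config.json.
model_path = "output/model.pkl"

# The training script saves the model object itself, so torch.load restores
# it directly. Recent PyTorch versions may additionally require
# weights_only=False when loading fully pickled models.
model = torch.load(model_path, map_location="cpu")
model.eval()  # switch to evaluation mode before forecasting
```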
Any training parameter is considered a hyperparameter as long as it is specified in either config.json or tuning.json. The latter is the standard file in which the best configuration found so far is saved; it should usually not be adapted manually unless new tests are for some reason not comparable to older ones (e.g. after changing the loss function).
The config should define all parameters that are not considered hyperparameters. Hyperparameters (see above) may also be defined there, but their values will be overwritten during training. Many parameters are not listed here; their structure is evident from the config itself, and the less obvious ones are covered in the docs referenced at the end of this section.
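For example, the hyperparameters that will be explored, and whose config.json values are therefore overwritten, can be read off from the tuning file like this (a sketch with assumed file paths, not the script's actual loading code):

```python
import json

# Assumed paths; adjust to wherever the project keeps its configs.
with open("config.json") as f:
    config = json.load(f)
with open("tuning.json") as f:
    tuning = json.load(f)

# Parameters defined in the tuning file's "settings" dict are explored;
# whatever they are set to in config.json is overwritten during training.
explored = sorted(tuning.get("settings", {}))
print(explored)  # e.g. ['batch_size', 'history_horizon', 'learning_rate']
```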
In tuning.json, all settings concerning hyperparameter exploration can be adjusted. Any hyperparameter that is to be explored should be defined in the "settings" dict (see the example below). In addition to the settings dict there are two further top-level settings; one of them, "number_of_tests", can be seen in the example.
Example:
```json
{
    "number_of_tests": 100,
    "settings": {
        "learning_rate": {
            "function": "suggest_loguniform",
            "kwargs": {
                "name": "learning_rate",
                "low": 0.000001,
                "high": 0.0001
            }
        },
        "history_horizon": {
            "function": "suggest_int",
            "kwargs": {
                "name": "history_horizon",
                "low": 1,
                "high": 168
            }
        },
        "batch_size": {
            "function": "suggest_int",
            "kwargs": {
                "name": "batch_size",
                "low": 32,
                "high": 120
            }
        }
    }
}
```
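Each "function" entry names a suggestion method of an Optuna trial object, called with the given kwargs. The following is a hedged sketch of how such a file could drive a study; the glue code is illustrative, not the script's actual implementation (note also that newer Optuna versions replace suggest_loguniform with suggest_float(..., log=True)):

```python
import json
import optuna

with open("tuning.json") as f:  # path assumed
    tuning = json.load(f)

def objective(trial: optuna.trial.Trial) -> float:
    # Resolve each setting to a trial method, e.g. trial.suggest_int,
    # and call it with the configured kwargs.
    params = {
        name: getattr(trial, spec["function"])(**spec["kwargs"])
        for name, spec in tuning["settings"].items()
    }
    # ... build and train the net with `params`, then return the
    # validation loss so the study can minimize it ...
    return 0.0  # placeholder

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=tuning["number_of_tests"])
```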
Possible hyperparameters are:
target_column, encoder_features, decoder_features, max_epochs, learning_rate, batch_size, shuffle, history_horizon, forecast_horizon, train_split, validation_split, core_net, relu_leak, dropout_fc, dropout_core, rel_linear_hidden_size, rel_core_hidden_size, optimizer_name, cuda_id
If you need more details, please take a look at the docs for this script.