python – validation and test loss for a variety of PyTorch time series forecasting models

Hi everyone, I’m trying to reduce the complexity of some of my Python code. The function below computes the validation and test loss for a variety of PyTorch time series forecasting models. I won’t go into all the intricacies, but it needs to support models that return multiple targets, models that return an output distribution + std (as opposed to a single tensor), and models that require masked elements of the target sequence. Over time this has resulted in long if/else blocks and lots of other bad practices.

I’ve used dictionaries before to map away long if/else statements, but due to the nested nature of this code it doesn’t seem like that would work well here. I also don’t really see the point in just creating more functions, as that just moves the if/else statements somewhere else and requires passing more parameters around. Does anyone have any ideas? There are several unit tests that exercise the different paths in this code now. However, it is still cumbersome to read, and soon I will have even more model variations to support. Full code in context can be seen at this link.

def compute_validation(validation_loader: DataLoader,
                       model,
                       epoch: int,
                       sequence_size: int,
                       criterion: Type[torch.nn.modules.loss._Loss],
                       device: torch.device,
                       decoder_structure=False,
                       meta_data_model=None,
                       use_wandb: bool = False,
                       meta_model=None,
                       multi_targets=1,
                       val_or_test="validation_loss",
                       probabilistic=False) -> float:
    """Function to compute the validation loss metrics

    :param validation_loader: The data-loader of either validation or test-data
    :type validation_loader: DataLoader
    :param model: model
    :type model: (type)
    :param epoch: The epoch where the validation/test loss is being computed.
    :type epoch: int
    :param sequence_size: The number of historical time steps passed into the model
    :type sequence_size: int
    :param criterion: The evaluation metric function
    :type criterion: Type[torch.nn.modules.loss._Loss]
    :param device: The device
    :type device: torch.device
    :param decoder_structure: Whether the model should use sequential decoding, defaults to False
    :type decoder_structure: bool, optional
    :param meta_data_model: The model to handle the meta-data, defaults to None
    :type meta_data_model: PyTorchForecast, optional
    :param use_wandb: Whether Weights and Biases is in use, defaults to False
    :type use_wandb: bool, optional
    :param meta_model: Whether the model leverages meta-data, defaults to None
    :type meta_model: bool, optional
    :param multi_targets: The number of targets the model predicts, defaults to 1
    :type multi_targets: int, optional
    :param val_or_test: Whether validation or test loss is computed, defaults to "validation_loss"
    :type val_or_test: str, optional
    :param probabilistic: Whether the model is probabilistic, defaults to False
    :type probabilistic: bool, optional
    :return: The loss of the first metric in the list.
    :rtype: float
    """
    print('Computing validation loss')
    unscaled_crit = dict.fromkeys(criterion, 0)
    scaled_crit = dict.fromkeys(criterion, 0)
    model.eval()
    output_std = None
    multi_targs1 = multi_targets
    scaler = None
    if validation_loader.dataset.no_scale:
        scaler = validation_loader.dataset
    with torch.no_grad():
        i = 0
        loss_unscaled_full = 0.0
        for src, targ in validation_loader:
            src = src if isinstance(src, list) else src.to(device)
            targ = targ if isinstance(targ, list) else targ.to(device)
            i += 1
            if decoder_structure:
                if type(model).__name__ == "SimpleTransformer":
                    targ_clone = targ.detach().clone()
                    output = greedy_decode(
                        model,
                        src,
                        targ.shape(1),
                        targ_clone,
                        device=device)(
                        :,
                        :,
                        0)
                elif type(model).__name__ == "Informer":
                    multi_targets = multi_targs1
                    filled_targ = targ[1].clone()
                    pred_len = model.pred_len
                    filled_targ[:, -pred_len:, :] = torch.zeros_like(filled_targ[:, -pred_len:, :]).float().to(device)
                    output = model(src[0].to(device), src[1].to(device), filled_targ.to(device), targ[0].to(device))
                    labels = targ[1][:, -pred_len:, 0:multi_targets]
                    src = src[0]
                    multi_targets = False
                else:
                    output = simple_decode(model=model,
                                           src=src,
                                           max_seq_len=targ.shape[1],
                                           real_target=targ,
                                           output_len=sequence_size,
                                           multi_targets=multi_targets,
                                           probabilistic=probabilistic,
                                           scaler=scaler)
                    if probabilistic:
                        output, output_std = output[0], output[1]
                        output, output_std = output[:, :, 0], output_std[0]
                        output_dist = torch.distributions.Normal(output, output_std)
            else:
                if probabilistic:
                    output_dist = model(src.float())
                    output = output_dist.mean.detach().numpy()
                    output_std = output_dist.stddev.detach().numpy()
                else:
                    output = model(src.float())
            if multi_targets == 1:
                labels = targ[:, :, 0]
            elif multi_targets > 1:
                labels = targ[:, :, 0:multi_targets]
            validation_dataset = validation_loader.dataset
            for crit in criterion:
                if validation_dataset.scale:
                    # Should this also do loss.item() stuff?
                    if len(src.shape) == 2:
                        src = src.unsqueeze(0)
                    src1 = src[:, :, 0:multi_targets]
                    loss_unscaled_full = compute_loss(labels, output, src1, crit, validation_dataset,
                                                      probabilistic, output_std, m=multi_targets)
                    unscaled_crit[crit] += loss_unscaled_full.item() * len(labels.float())
                loss = compute_loss(labels, output, src, crit, False, probabilistic, output_std, m=multi_targets)
                scaled_crit[crit] += loss.item() * len(labels.float())
    if use_wandb:
        if loss_unscaled_full:
            scaled = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in scaled_crit.items()}
            newD = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in unscaled_crit.items()}
            wandb.log({'epoch': epoch,
                       val_or_test: scaled,
                       "unscaled_" + val_or_test: newD})
        else:
            scaled = {k.__class__.__name__: v / (len(validation_loader.dataset) - 1) for k, v in scaled_crit.items()}
            wandb.log({'epoch': epoch, val_or_test: scaled})
    model.train()
    return list(scaled_crit.values())[0]
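
To clarify the dictionary idea I mentioned: a registry keyed by type(model).__name__ would let each model variant own one small handler instead of one branch of the if/else. A rough sketch against the code above, nothing more (ctx is a hypothetical dict bundling device, sequence_size, multi_targets, probabilistic and scaler; the handler names are made up):

def _decode_simple_transformer(model, src, targ, ctx):
    # SimpleTransformer branch, lifted as-is from the if/else above
    targ_clone = targ.detach().clone()
    return greedy_decode(model, src, targ.shape[1], targ_clone,
                         device=ctx["device"])[:, :, 0]

def _decode_default(model, src, targ, ctx):
    # fallback: plain sequential decoding
    return simple_decode(model=model, src=src, max_seq_len=targ.shape[1],
                         real_target=targ, output_len=ctx["sequence_size"],
                         multi_targets=ctx["multi_targets"],
                         probabilistic=ctx["probabilistic"],
                         scaler=ctx["scaler"])

DECODE_HANDLERS = {"SimpleTransformer": _decode_simple_transformer}

def run_decode(model, src, targ, ctx):
    # dispatch on the model class name, falling back to simple_decode
    handler = DECODE_HANDLERS.get(type(model).__name__, _decode_default)
    return handler(model, src, targ, ctx)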

time series – What could I be doing wrong to get this result from Azure AutoML timeseries forecasting?

I’m experimenting with Azure AutoML for time series forecasting. I have a simple two-column training dataset with two years of data at hourly intervals: column 1 is the date/time and column 2 is the variable I want to predict. I’ve done several runs of Azure AutoML and it seems to complete successfully. However, when I do a forecast and graph it, something is obviously wrong: it looks like the forecast is being quantised somehow. The graph below covers the 7 days after the training set; blue is actual and red is the forecast. This is obviously not right.

[Plot: hourly actuals (blue) vs. forecast (red) for the 7 days after the training set; the forecast steps between a few discrete levels]
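
For reference, the forecast in the plot is produced roughly like this (a sketch; remote_run is the submitted AutoML run, test_df is a hypothetical frame holding the 168 hourly timestamps after the training window, and y_actual is the held-out actuals series):

from matplotlib import pyplot as plt

# best fitted model from the completed AutoML run
best_run, fitted_model = remote_run.get_output()

# forecast the 7 * 24 hours immediately after the training data
y_pred, X_trans = fitted_model.forecast(test_df)

plt.plot(test_df["DateTime"], y_actual, label="actual")  # blue
plt.plot(test_df["DateTime"], y_pred, label="forecast")  # red
plt.legend()
plt.show()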

Here is my configuration for the training (python):

lags = [1, 24, 168]
forecast_horizon = 7 * 24 # 7 days of hourly data
forecasting_parameters = ForecastingParameters(
    time_column_name="DateTime",
    forecast_horizon=forecast_horizon,
    target_lags=lags,
    country_or_region_for_holidays='NZ',
    freq='H',
    use_stl='season',
    seasonality='auto'
)
automl_config = AutoMLConfig(task='forecasting',
                             debug_log='automl_forecasting_function.log',
                             primary_metric='normalized_root_mean_squared_error',
                             experiment_timeout_hours=1,
                             experiment_exit_score=0.05, 
                             enable_early_stopping=True,
                             training_data=train_df,
                             compute_target=compute,
                             n_cross_validations=10,
                             verbosity = logging.INFO,
                             max_concurrent_iterations=19,
                             max_cores_per_iteration=19,
                             label_column_name="Output",
                             forecasting_parameters=forecasting_parameters,
                             featurization="auto",
                             enable_dnn=False)

The best model from the run is a VotingEnsemble:

ForecastingPipelineWrapper(pipeline=Pipeline(
  memory=None,
  steps=[('timeseriestransformer',
  TimeSeriesTransformer(
    featurization_config=None,
    pipeline_type=<TimeSeriesPipelineType.FULL: 1>)),
  ('prefittedsoftvotingregressor',
  PreFittedSoftVotingRegressor(estimators=[('7',
  Pipeline(memory=None,
  steps=[('minmaxscaler',
  MinMaxScaler(copy=True,
  feature_range=(0,
  1))...
  DecisionTreeRegressor(ccp_alpha=0.0,
  criterion='mse',
  max_depth=None,
  max_features=0.5,
  max_leaf_nodes=None,
  min_impurity_decrease=0.0,
  min_impurity_split=None,
  min_samples_leaf=0.00218714609400816,
  min_samples_split=0.00630957344480193,
  min_weight_fraction_leaf=0.0,
  presort='deprecated',
  random_state=None,
  splitter='best'))],
  verbose=False))],
  weights=[0.5,
  0.5]))],
  verbose=False),
  stddev=None)

Python Project Structure for Forecasting Application

I have written a forecasting application using a Jupyter notebook and would like to structure the application (and supporting) code as a Python project.

At a high level, the application:

  1. fetches data from a remote database, and organizes and formats it
  2. trains a model using this data
  3. evaluates its goodness-of-fit
  4. extrapolates the model into the future (thereby generating a forecast)
  5. uploads this forecast to a remote database

Below is my initial attempt at a project structure.
My intention is for the application to be executed by running main.py on the command line (a sketch of how main.py might wire the stages together follows the tree).
Note that I would not intend for git to track ./output/; the directory would exist to provide a copy of the forecast and plots of the data for diagnostics by an analyst.

forecast/
│
├── docs/
│   ├── fetch.md
│   ├── train.md
│   ├── evaluate.md
│   ├── extrapolate.md
│   └── upload.md
│
├── forecast/
│   ├── __init__.py
│   ├── main.py
│   │
│   ├── fetch/
│   │   ├── __init__.py
│   │   ├── fetch.py
│   │   ├── organize.py
│   │   └── format.py
│   │
│   ├── train/
│   │   ├── __init__.py
│   │   └── train.py
│   │
│   ├── evaluate/
│   │   ├── __init__.py
│   │   └── metrics.py
│   │
│   ├── extrapolate/
│   │   ├── __init__.py
│   │   └── extrapolate.py
│   │
│   ├── upload/
│   │   ├── __init__.py
│   │   └── upload.py
│   │
│   └── helpers/
│       ├── __init__.py
│       └── helpers.py
│
├── data/
│   ├── sql
│   │   ├── fetch.sql
│   │   └── upload.sql
│   └── config
│       ├── database_connection.json
│       └── holiday.json
│
├── output/
│   ├── plots
│   │   └── …
│   └── forecast
│       └── forecast.tsv
│
├── tests/
│   └── …
│
├── .gitignore
├── LICENSE
├── proof_of_concept.ipynb
├── requirements.txt
└── README.md
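
For illustration, main.py might wire the five stages together like this (a sketch only; every function name here is an assumption based on the module layout above):

# forecast/main.py, hypothetical wiring of the five stages
from fetch.fetch import fetch_data
from fetch.organize import organize
from fetch.format import format_data
from train.train import train_model
from evaluate.metrics import evaluate
from extrapolate.extrapolate import extrapolate
from upload.upload import upload_forecast

def main() -> None:
    raw = fetch_data("data/sql/fetch.sql", "data/config/database_connection.json")
    data = format_data(organize(raw))
    model = train_model(data)
    print(evaluate(model, data))                # goodness-of-fit for the analyst
    forecast = extrapolate(model, horizon=168)  # hypothetical one-week hourly horizon
    forecast.to_csv("output/forecast/forecast.tsv", sep="\t")
    upload_forecast(forecast, "data/sql/upload.sql")

if __name__ == "__main__":
    main()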

What could be improved about this project structure?

C++ program for forecasting, storing and displaying data about a virus

This is one of my first projects in C++, and I would call it a program for storing and displaying data and forecasting numbers.

I will try to explain my code starting with the main method, which is at the end of the code. The main method in my program acts as the main menu. There are four options: view data, enter data, forecast numbers, and exit. The source code is at the end of this entry.

1. Display data

If you select this option, your output follows (if the data file is not empty):

Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: d
___________________
|Day|Infected|Dead|
|1  |2       |0   |
|2  |2       |0   |
|3  |3       |0   |
|4  |3       |0   |
|5  |9       |0   |
|6  |14      |0   |
|7  |18      |0   |
|8  |21      |0   |
|9  |29      |0   |
|10 |41      |0   |
|11 |55      |0   |
|12 |79      |0   |
|13 |104     |0   |
|14 |131     |0   |
|15 |182     |0   |
|16 |246     |0   |
|17 |302     |1   |
|18 |504     |1   |
|19 |655     |1   |
|20 |860     |1   |
|21 |1016    |3   |
|22 |1332    |3   |
|23 |1646    |4   |
|24 |2053    |6   |
|25 |2388    |6   |
Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: 

If you select this option, the function displayData() is called with the parameter file_name. The function has two parameters, but the second, prognose_days, is set to -1 by default. This is because there are two ways to use it: the first is to display only the current data (prognose_days = -1), and the second is to display the current data plus forecast numbers (prognose_days = n).

After setting a few variables, the function first checks whether the data file contains data. Then it takes every line from the data file and splits it at the delimiter :. The splitting is done by the method splitString(). The split lines are saved in a vector called vector_list.

The next condition checks whether only the current data or also the forecast numbers should be displayed. If forecast numbers are to be displayed as well, they are appended to the end of vector_list.

The rest of the function is, I think, pretty hard for others to read. It is responsible for displaying the data table correctly and neatly: first, the longest length of each column (day, infected, dead) is saved in a variable using the function getLongestLength(), then the table is output using ternary operators with the correct widths, etc.

2. Enter data

If you select this option, your output is as follows:

Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: i
Day 26: Enter new data or quit (Q): 2500:7 
Day 27: Enter new data or quit (Q): q
Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: 

This option takes the numbers you entered (e.g. 2500:7 → 2500 infections and 7 deaths) and writes them into the data file using the function writeFile().

3. Predict numbers

If you select this option, your output is as follows:

Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: p
How many days do you want to prognose?: 3
______________________
|Day   |Infected|Dead|
|1     |2       |0   |
|2     |2       |0   |
|3     |3       |0   |
|4     |3       |0   |
|5     |9       |0   |
|6     |14      |0   |
|7     |18      |0   |
|8     |21      |0   |
|9     |29      |0   |
|10    |41      |0   |
|11    |55      |0   |
|12    |79      |0   |
|13    |104     |0   |
|14    |131     |0   |
|15    |182     |0   |
|16    |246     |0   |
|17    |302     |1   |
|18    |504     |1   |
|19    |655     |1   |
|20    |860     |1   |
|21    |1016    |3   |
|22    |1332    |3   |
|23    |1646    |4   |
|24    |2053    |6   |
|25    |2388    |6   |
|26    |2500    |7   |
|27 (P)|3409    |9   |
|28 (P)|4649    |12  |
|29 (P)|6339    |16  |
Infections: 6339
Deaths: 16
Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: 

This option calculates a forecast by computing the average growth factor of the numbers. For example:

1 Infected
3 Infected (×3)
8 Infected (×2.66)
9 Infected (×1.125)
Average factor: (3 + 2.66 + 1.125) / 3 = 2.26

Prognosis for two days:
9 × 2.26 = 20.34
20.34 × 2.26 = 45.97

I know that mathematically this is not the best approach, but that is not important right now.
The function does exactly what I calculated above and returns an array of the predicted infections and deaths.
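
In symbols: given daily counts x1, …, xn, the function computes the average factor f = (x2/x1 + x3/x2 + … + xn/xn−1) / (n − 1), skipping zero denominators, and forecasts day n + d as xn · f^d.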

4. Exit

Exits the program.

My questions are:

  • Does my code have major errors or no-gos, either in general programming or specifically in C++?
  • Should I have used pointers and/or references in my code?
  • What is the general style of my programming?
  • Did you notice anything else?

coronavirus.cpp

#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cmath>
#include <algorithm>
#include <cctype>

using namespace std;

// DECLARES METHODS
vector<string> splitString(string s, char delim);
// DEFAULTLY PROGNOSE DAYS IS -1 WHICH MEANS IT ONLY DISPLAYS CURRENT DATA AND NOT PROGNOSE DATA
void displayData(string file_name, int prognose_days = -1);

// PUBLIC VARIABLE FOR FILENAME
string file_name = "/home/adrianone/Schreibtisch/Development/CPPDevelopment/Corona/coronaprognose/corona_numbers.dat";

// CALCULATES PROGNOSE WITH FILE AND PROGNOSE DAYS
vector<int> calculatePrognose(double days, string file_name) {
    int zero_dead = 0, zero_infected = 0;
    // DECLARES INPUT FILESTREAM TO READ FILE
    ifstream file(file_name);
    // DECLARES VECTOR WHICH CONTAINS VECTORS OF SPLIT FILE-LINES
    vector<vector<string>> vector_list;
    string line;
    // DECLARES VARIABLES
    double avg_infected_factor, avg_dead_factor, last_infected_count, last_dead_count, infected = 0, dead = 0;
    vector<int> empty_file = { -1, -1 };
    // CHECKS IF FILE EXISTS/IS EMPTY
    if (file.peek() == ifstream::traits_type::eof()) {
        cout << "\033[1;31m[ERROR]\033[0m File is empty. Please input data!" << endl;
        return empty_file;
    }

    // GETS LINES OF FILE, SPLITS THEM AND PUTS THEM IN A VECTOR
    while (getline(file, line)) {
        vector<string> splitted_string = splitString(line, ':');
        vector_list.push_back(splitted_string);
    }

    // TODO: CHANGE CALCULATION OF AVG FACTOR BY SORTING OUT EXTREMES

    // SETS FIRST NUMBER OF DATA TO LAST COUNT
    last_infected_count = stoi(vector_list[0][1]);
    last_dead_count = stoi(vector_list[0][2]);

    avg_dead_factor = 0;
    avg_infected_factor = 0;

    // SUMS AVERAGES OF COUNTS
    for (int i = 1; i < vector_list.size(); i++) {

        // CHECK IF LAST NUMBER IS ZERO TO PREVENT A DIVISION BY ZERO
        if (last_dead_count != 0) {
            avg_dead_factor += stoi(vector_list[i][2]) / last_dead_count;
            last_dead_count = stoi(vector_list[i][2]);

        } else {
            // INCREMENT COUNTER FOR TIMES ZERO PEOPLE WERE DEAD
            zero_dead++;
        }

        // CHECK IF LAST NUMBER IS ZERO TO PREVENT A DIVISION BY ZERO
        if (last_infected_count != 0) {
            avg_infected_factor += stoi(vector_list[i][1]) / last_infected_count;
            last_infected_count = stoi(vector_list[i][1]);

        } else {
            // INCREMENT COUNTER FOR TIMES ZERO PEOPLE WERE INFECTED
            zero_infected++;
        }

        // SET LAST COUNTS TO CURRENT COUNTS
        last_infected_count = stoi(vector_list[i][1]);
        last_dead_count = stoi(vector_list[i][2]);
    }


    // CALCULATES AVERAGES
    avg_infected_factor = avg_infected_factor / (vector_list.size() - 1 - zero_infected);
    avg_dead_factor = avg_dead_factor / (vector_list.size() - 1 - zero_dead);

    // CALCULATES PROGNOSES
    infected = stoi(vector_list[vector_list.size() - 1][1]) * pow(avg_infected_factor, days);
    dead = stoi(vector_list[vector_list.size() - 1][2]) * pow(avg_dead_factor, days);

    vector<int> prognose = { (int) infected, (int) dead };

    return prognose;
}

// SPLITS STRING AT DELIMITER AND RETURNS A VECTOR
vector<string> splitString(string s, char delim) {
    string string_token;
    vector<string> string_vector;

    for (int i = 0; i < s.length(); i++) {
        if (s[i] == delim) {
            string_vector.push_back(string_token);
            string_token = "";
        } else {
            string_token += s[i];
        }
    }
    return string_vector;
}

// WRITES TO FILE
void writeFile(string content, string file_name) {
    ofstream file;
    file.open(file_name, ios_base::out | ios_base::app);
    file << content;
    file.close();
}

// GET LONGEST NUMBER IN DATA TO DISPLAY TABLE CORRECTLY
int getLongestLength(vector<vector<string>> vector_list, int index) {
    int result;
    result = vector_list[0][index].length();
    for (vector<string> splitted_string : vector_list) {
        if (splitted_string[index].length() > result) {
            result = splitted_string[index].length();
        }
    }
    return result;
}

// DISPLAYS CURRENT DATA AS TABLE, PLUS THE PROGNOSE FOR EACH DAY IF REQUESTED
void displayData(string file_name, int prognose_days) {
    int width_day, width_infected, width_dead, current_day;
    string day = "Day";
    string infected = "Infected";
    string dead = "Dead";
    ifstream file(file_name);
    string line;
    vector<vector<string>> vector_list;

    // CHECKS IF FILE EXISTS/IS EMPTY
    if (file.peek() == ifstream::traits_type::eof()) {
        cout << "\033[1;31m[ERROR]\033[0m File is empty. Please input data!" << endl;
        return;
    }

    // GETS LINES OF FILE, SPLITS THEM AND PUTS THEM IN A VECTOR
    while (getline(file, line)) {
        vector<string> splitted_string = splitString(line, ':');
        vector_list.push_back(splitted_string);
    }


    if (prognose_days != -1) {
        vector<string> prognose_vector;
        ifstream file(file_name);
        string line;
        int current_day;
        while (getline(file, line)) {
            vector<string> splitted_string = splitString(line, ':');
            // GETS LAST DAY AND CONTINUES WITH THE NEXT
            current_day = stoi(splitted_string[0]) + 1;
        }

        for (int i = 1; i <= prognose_days; i++) {
            // CREATE EXTRA STRING TO ADD COLOR
            prognose_vector.push_back(to_string(current_day) + " (P)");
            prognose_vector.push_back(to_string(calculatePrognose(i, file_name)[0]));
            prognose_vector.push_back(to_string(calculatePrognose(i, file_name)[1]));

            vector_list.push_back(prognose_vector);
            current_day++;
            prognose_vector.clear();
        }
    }

    file.close();

    // GET LONGEST LENGTH OF EACH COLUMN
    width_day = getLongestLength(vector_list, 0);
    width_infected = getLongestLength(vector_list, 1);
    width_dead = getLongestLength(vector_list, 2);

    // DISPLAYS SEPARATORS IN CORRECT COUNT WITH TERNARY OPERATORS
    cout << string(((width_day < day.length()) ? 0 : width_day - day.length())
    + ((width_infected < infected.length()) ? 0 : width_infected - infected.length())
    + ((width_dead < dead.length()) ? 0 : width_dead - dead.length())
    + day.length() + infected.length() + dead.length() + 4, '_') << endl;

    // DISPLAYS HEADER IN CORRECT WIDTH WITH TERNARY OPERATORS
    cout <<
    "|" << "\033[1;32m" << day << "\033[0m" << string((width_day < day.length()) ? 0 : width_day - day.length(), ' ') <<
    "|" << "\033[1;33m" << infected << "\033[0m" << string((width_infected < infected.length()) ? 0 : width_infected - infected.length(), ' ') <<
    "|" << "\033[1;31m" << dead << "\033[0m" << string((width_dead < dead.length()) ? 0 : width_dead - dead.length(), ' ') <<
    "|" << endl;

    // DISPLAYS DATA IN CORRECT WIDTH WITH TERNARY OPERATORS
    for (vector<string> splitted_string : vector_list) {
        cout <<
        "|" << splitted_string[0] << string((splitted_string[0].length() < day.length())
        ? ((width_day < day.length()) ? day.length() - splitted_string[0].length()
        : width_day - splitted_string[0].length()) : width_day - splitted_string[0].length(), ' ') <<
        "|" << splitted_string[1] << string((splitted_string[1].length() < infected.length())
        ? ((width_infected < infected.length()) ? infected.length() - splitted_string[1].length()
        : width_infected - splitted_string[1].length()) : width_infected - splitted_string[1].length(), ' ') <<
        "|" << splitted_string[2] << string((splitted_string[2].length() < dead.length())
        ? ((width_dead < dead.length()) ? dead.length() - splitted_string[2].length()
        : width_dead - splitted_string[2].length()) : width_dead - splitted_string[2].length(), ' ') <<
        "|" << endl;
    }
}


int main(int argc, char **argv) {
    string use_choice, infected_and_dead;
    double prognose_days;

    // LOOP TO GET BACK TO THE MAIN MENU WHEN LEAVING A SUB MENU
    while (true) {
        // GETS CHOICE
        cout << "Do you want to make a prognose (P), input new data (I), show current data (D) or quit (Q)?: ";
        cin >> use_choice;
        // TRANSFORMS CHOICE TO LOWERCASE TO IGNORE CASE
        transform(use_choice.begin(), use_choice.end(), use_choice.begin(), ::tolower);

        // PROGNOSE
        if (use_choice == "p") {
            cout << "How many days do you want to prognose?: ";
            cin >> prognose_days;
            // CALCULATES PROGNOSE OUT OF SUBMITTED DAYS
            vector<int> prognose = calculatePrognose(prognose_days, file_name);

            if (prognose[0] == -1 && prognose[1] == -1) {
                continue;
            } else {
                displayData(file_name, prognose_days);
                cout << "\033[1;33mInfections:\033[0m " << prognose[0] << endl
                << "\033[1;31mDeaths: \033[0m" << prognose[1] << endl;
            }

        // INPUT NEW DATA
        } else if (use_choice == "i") {
            ifstream file(file_name);
            string line;
            int current_day;
            // CHECKS IF FILE EXISTS/IS EMPTY AND STARTS WITH DAY 1
            if (file.peek() == ifstream::traits_type::eof()) {
                current_day = 1;
            } else {
                while (getline(file, line)) {
                    vector<string> splitted_string = splitString(line, ':');
                    // GETS LAST DAY AND CONTINUES WITH THE NEXT
                    current_day = stoi(splitted_string[0]) + 1;
                }
            }
            file.close();
            while (true) {
                cout << "Day " << current_day << ": Enter new data or quit (Q): ";
                cin >> infected_and_dead;
                // TRANSFORMS CHOICE TO LOWERCASE TO IGNORE CASE
                transform(infected_and_dead.begin(), infected_and_dead.end(), infected_and_dead.begin(), ::tolower);
                if (infected_and_dead == "q") {
                    current_day++;
                    break;
                }
                // BUILDS THE LINE TO STORE: DAY:INFECTED:DEAD:
                infected_and_dead = to_string(current_day) + ":" + infected_and_dead + ":" + "\n";
                // CHECKS IF DATA IS CORRECTLY INPUTTED
                if (splitString(infected_and_dead, ':').size() < 3) {
                    cout << "\033[1;31m[ERROR]\033[0m Invalid Input! Please enter data in this form: 'infected:dead' " << endl;
                    continue;
                }
                // WRITES DATA TO FILE
                writeFile(infected_and_dead, file_name);
                current_day++;
            }
        // DISPLAY DATA
        } else if (use_choice == "d") {
            displayData(file_name);
        // QUIT
        } else if (use_choice == "q") {
            break;
        } else {
            // DISPLAYS ERROR MESSAGE WHEN A WRONG CHOICE IS SUBMITTED
            cout << "\033[1;31m[ERROR]\033[0m Invalid Input" << endl;
        }
    }
    return 0;
}

corona_numbers.dat (example)

1:10:0:
2:25:2:
3:50:3:

finance – What financial forecasting methods are available to assess the profitability of a cash flow?

I am sorry for this question. I'm looking for forecasting methods to assess the profitability of a cash flow.

I have no experience in this industry; I did some searches in the documentation, but I couldn't find much.

Does Mathematica have anything better than Excel's IRR? Any suggestion of where to look is appreciated. I will use the suggested references to improve this question.
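
For context: the IRR that Excel computes is the discount rate r at which the net present value, the sum over t of CF_t / (1 + r)^t, equals zero. Outside Mathematica, a minimal sketch in Python with the numpy-financial package (the cash-flow numbers are made up):

import numpy_financial as npf

# year-0 outlay followed by five yearly inflows (illustrative numbers)
cashflows = [-1000, 200, 250, 300, 350, 400]

irr = npf.irr(cashflows)                 # the rate at which NPV == 0
npv_at_10pct = npf.npv(0.10, cashflows)  # NPV at a 10% discount rate

print(f"IRR: {irr:.2%}, NPV at 10%: {npv_at_10pct:.2f}")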

Python MLP network for forecasting

Good evening. I'm new to neural networks and programming, and I am trying to develop a network for predicting power consumption. But I am stuck in my code and cannot make progress.

The idea is to have a neural network that finds the best delay for prediction by the wrapper method.
The number of neurons varies in steps of 5.
n_iterations is left at 30 (for the network below I'm testing with 3 iterations, just to see if it runs, but in the end I only want to leave 1 iteration per network configuration).
I need the training, validation, and test MSE for the 30 iterations,
and the denormalized network test output (18 points) for the 30 iterations.
Below is my network; could someone help me?

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing
from numpy.random import rand
from numpy import ndarray, array

def randomNumber():
    return rand()

df = pd.read_excel(r"/content/Data Parana - Copia.xlsx")

arquivo = df[df.columns[0]].values
arquivo = arquivo.reshape(-1, 1)  # shape (185 rows, 1 column)

min_max_scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1))
narquivo = min_max_scaler.fit_transform(arquivo)
narquivo = narquivo.flatten()
df = pd.Series(data=narquivo)

def separateDataTreinValTest(vector, qt_val: int, qt_test: int):
    vector = np.array(vector)
    qt_trein = vector.shape[0] - qt_val - qt_test
    x1 = vector[:qt_trein]
    x2 = vector[qt_trein:qt_trein + qt_val]
    x3 = vector[qt_trein + qt_val:]
    return x1, x2, x3

def separateDataXY(vector):
    vector = np.array(vector)
    result = np.split(vector, [vector.shape[1] - 1], axis=1)
    return result

def setDFWithDelays(df, delays, forward_steps=1):
    if isinstance(df, (pd.Series, pd.DataFrame)) is not True:
        df = pd.Series(np.array(df))
    max_delay = max(delays)
    df_atrasos = pd.DataFrame()
    for i, delay in enumerate(delays):
        df_atrasos['delay' + str(delays[-i - 1])] = df.iloc[max_delay - delays[-i - 1]: -delays[-i - 1] - (forward_steps - 1)].reset_index(drop=True)
    df_atrasos['Output'] = df.iloc[max_delay + forward_steps - 1:].reset_index(drop=True)
    return df_atrasos

def redeMLP(xtrein, ytrein, xval, yval, xtest, ytest, maxEpocas=10000, tap=0.01, ee=1e-6, caI=10, caS=1, n_iteracoes=3):
    q = xtrein.shape[0]
    caE = xtrein.shape[1]
    bias = 1
    e = np.zeros(q)
    MSE = [0]
    p = xtest.shape[0]

    for i in range(n_iteracoes):
        # layer weights initialized randomly
        w1 = np.random.random((caI, caE + 1))
        w2 = np.random.random((caS, caI + 1))
        nEpocas = 1
        e = np.zeros(q)
        MSE = [0]
        MSE_val = []
        MSET = []
        erro_teste = np.zeros(p)

        erro = 1
        while erro > ee:

            xb = np.insert(xtrein, 0, bias, axis=1)
            # hyperbolic tangent activation function (np.tanh)
            saidacaI = np.tanh(np.dot(xb, w1.T))
            saidacaIb = np.insert(saidacaI, 0, bias, axis=1)
            saidacaS = np.tanh(np.dot(saidacaIb, w2.T))
            # error = desired output - network output
            e = ytrein - saidacaS
            # backpropagation: gradient at the output layer
            delta2 = e * (1 - saidacaS) * saidacaS
            vdelta2 = delta2 * (w2[0:, 1:])
            delta1 = (saidacaI * (1 - saidacaI) * vdelta2)
            # updating the weights
            w1 = w1 + tap * (np.dot(delta1.T, xb))
            w2 = w2 + tap * (np.dot(delta2.T, saidacaIb))
            # computing the MSE
            MSE.append((e**2).mean())
            erro = abs(MSE[-1] - MSE[-2])
            nEpocas += 1
            xb_val = np.insert(xval, 0, bias, axis=1)
            saidacaI_val = np.tanh(np.dot(xb_val, w1.T))
            saidacaIb_val = np.insert(saidacaI_val, 0, bias, axis=1)
            saidacaS_val = np.tanh(np.dot(saidacaIb_val, w2.T))
            # error = desired output - network output
            e_val = yval - saidacaS_val
            MSE_val.append((e_val**2).mean())

            if (MSE_val[-1] <= min(MSE_val)):
                ErroValMenor = min(MSE_val)
                epocaIdeal = nEpocas
                w1_Ideal = w1
                w2_Ideal = w2

            if (nEpocas >= maxEpocas):
                break
        xb_teste = np.insert(xtest, 0, bias, axis=1)
        saidacaI_teste = np.tanh(np.dot(xb_teste, w1_Ideal.T))
        saidacaIb_teste = np.insert(saidacaI_teste, 0, bias, axis=1)
        saidacaS_teste = np.tanh(np.dot(saidacaIb_teste, w2_Ideal.T))
        e_teste = ytest.T - saidacaS_teste
        MSET.append((e_teste**2).mean())

    return MSE, MSE_val, MSET, saidacaS_teste

class Wrapper():
    def __init__(self, order=5):
        self.order = order  # maximum delay amount
        self.n_delays = order * (order + 1) // 2
        self.all_delays = []
        self.n_row = -1  # as if it were a pyramid

        self.sum_delays_per_row = [0]
        n_rows = 0
        sum_rows = 0
        for i in range(1, self.n_delays + 1):
            if i == (n_rows + 1) * self.order - sum_rows:  # if it is a new row
                n_rows += 1
                sum_rows += n_rows
                self.sum_delays_per_row.append(i)

    def newDiferentDelays(self, delays_add):
        self.delays_to_add = []
        for i in range(self.order):
            if i + 1 not in delays_add:
                self.delays_to_add.append(i + 1)

    def nextDelay(self, results: array):
        if isinstance(results, ndarray) is not True:
            results = array(results)
        if len(self.all_delays) in self.sum_delays_per_row:  # if it is a new row
            if len(results) == 0:
                self.best_delays = []
            else:
                argmin_from_delays_to_add = results[self.sum_delays_per_row[self.n_row]:self.sum_delays_per_row[self.n_row + 1]].argmin()
                best_delay = self.all_delays[argmin_from_delays_to_add + self.sum_delays_per_row[self.n_row]][-1]
                self.best_delays.append(best_delay)
            self.newDiferentDelays(self.best_delays)
            self.n_row += 1
            # self.sum_n_rows += self.n_row
        new_delay = self.best_delays.copy()
        new_delay.append(self.delays_to_add.pop(0))
        self.all_delays.append(new_delay)
        if len(self.all_delays) == self.n_delays and len(results) > 0:
            self.best_delay = self.all_delays[results.argmin()]
        elif len(results) == 0:
            self.best_delay = [1]
        else:
            self.best_delay = None
        return self.all_delays[-1]

qt_val = 24
qt_test = 18

n_iteracoes = 10
wrapper = Wrapper(5)
results = []
caI = [5, 10, 15, 20]  # np.arange(20) * 5 + 5 would give every multiple of 5 from 5 to 100
for _ in range(wrapper.n_delays):
    next_delay = wrapper.nextDelay(results)
    df_atrasos = setDFWithDelays(df, next_delay, forward_steps=1)
    trein, val, test = separateDataTreinValTest(df_atrasos, qt_val, qt_test)
    xtrein, ytrein = separateDataXY(trein)
    xval, yval = separateDataXY(val)
    xtest, ytest = separateDataXY(test)
    erro_neuronios = []
    for n in caI:  # vary the neurons and keep the count with the smallest error
        erro_iteracao = []
        for i in range(n_iteracoes):  # run N times and keep the iteration with the lowest MSE
            MSE, MSE_val, MSET, saidacaS_teste = redeMLP(xtrein, ytrein, xval, yval, xtest, ytest, maxEpocas=2000, tap=0.001, ee=1e-6, caI=n, caS=1)
            # mse = randomNumber()
            erro_iteracao.append(MSET)
        erro_iteracao = np.array(erro_iteracao)
        erro_neuronios.append(erro_iteracao.min())  # min or mean
    erro_neuronios = np.array(erro_neuronios)
    results.append(erro_neuronios.min())

# Run one last time for the best delay
df_atrasos = setDFWithDelays(df, wrapper.best_delay, forward_steps=1)
trein, val, test = separateDataTreinValTest(df_atrasos, qt_val, qt_test)
xtrein, ytrein = separateDataXY(trein)
xval, yval = separateDataXY(val)
xtest, ytest = separateDataXY(test)
erro_neuronios = []
for n in caI:  # vary the neurons and keep the count with the smallest error
    erro_iteracao = []
    for i in range(n_iteracoes):  # run N times and keep the iteration with the lowest MSE
        MSE, MSE_val, MSET, saidacaS_teste = redeMLP(xtrein, ytrein, xval, yval, xtest, ytest, maxEpocas=2000, tap=0.001, ee=1e-6, caI=n, caS=1)
        erro_iteracao.append(MSET)
    erro_iteracao = np.array(erro_iteracao)
    erro_neuronios.append(erro_iteracao.min())  # min or mean
erro_neuronios = np.array(erro_neuronios)
best_neuron = caI[erro_neuronios.argmin()]

print("MSE train =" + str(MSE[-1]))
print("MSE validation =", min(MSE_val))
print("MSE test =", MSET)
print(saidacaS_teste)
saidacaS_teste_desn = min_max_scaler.inverse_transform(saidacaS_teste)
print(saidacaS_teste_desn)
print(best_neuron)
print(df_atrasos)

Global Fireproof Safes Market Insights, Forecasting until 2025 – Everything Else

This forecast for the 2019-2025 research report is a significant source of clever information for corporate strategists. It provides the business design with development review and recorded and up-to-date information on costs, revenues, inquiries and deliveries (depending on relevance). The exploration auditors provide a detailed representation of the value chain and their wholesaler audit. This Market Concentrate contains extensive information that enhances the retrieval, expansion, and use of this report.

The report shows the market's competitive landscape and a comparative analysis of the major vendors/key players in the market. Top companies in the global refractory safe market: AMSEC Safes, Liberty Safe, Godrej and Boyce, Gunnebo, Kaba Group, Access Security Products, Cannon Safe, SentrySafe, Paragon, Honeywell, First Alert, Gardall Safes, Paritet-K, Stack-On, V-Line, John Deere, China Wangli, Barska, Viking Security Safe and others.

Follow the link for a free sample copy of the report:

https://www.marketinsightsreports.com/reports/04301213068/worldwide-flame-retardant-safe-deposit-knowledge-component-until-2025/request?Source=ottheedge&Mode=28

This report divides the global refractory safe market into the following types:

Money that ensures the board

Gun safes

media safes

Other

Based on the application, the global market for refractory safes is divided into the following areas:

home use

office

Accommodations

diversion centers

Other

Essentially, this survey will identify in which market segments or regions or countries Fireproof Safes should be located to channel their efforts and plans to increase growth and productivity. The report provides an introduction to the market-driven scene and a constant analysis of the key sellers / key players in the market.

Provincial analysis for the market for refractory safes:

In order to gain a broad understanding of the market elements, the global market for refractory safes will be cross-examined on the basis of the most important topographies: USA, China, Europe, Japan, Southeast Asia, India and others. Each of these regions is prepared on the basis of market discoveries about major nations in these areas in order to gain a comprehensive understanding of the market.

Key features listed under offer and main features of the reports:

– Detailed description of the market

– Change of the branch elements of the business

– Distribution of the market from top to bottom by type, application, etc.

– Historical, current and expected market assessment in terms of volume and value

– Current industry patterns and improvements

– Competition scene of the market for refractory safes

– Main actors' strategies and article contributions

– Potential and specialized sections / districts that have a promising development

Finally, the market report on fireproof safes contains some important recommendations for another refractory company before assessing its accessibility. In general, the report provides top-to-bottom knowledge about the market and covers enormously important parameters. Table, figure, outlines, tables of contents, parts etc. given by the industry. Perfectly clear information for the customer with precise information about the market and its patterns.

We also offer customizations for reports that depend on the explicit customer requirement:

1-level investigation at country level for any 5 countries of your decision.

2-Free Competition Review of 5 Major Market Participants.

3-Free 40 Expert Lessons to cover some other information topics

Read More – Top 10 Best Gun Safes Reviews
