Estimating nitrogen concentrations in streams and rivers using NN

Antonio Fonseca

GeoComput & ML

May 20th, 2021

Exercise based on:

Estimating nitrogen and phosphorus concentrations in streams and rivers, within a machine learning framework

Longzhu Q. Shen, Giuseppe Amatulli, Tushar Sethi, Peter Raymond & Sami Domisch
Scientific Data volume 7, Article number: 161 (2020)

Lecture: Artificial Neural Networks for geo-data

Introduction

The field of deep learning begins with the assumption that everything is a function, and leverages powerful tools like gradient descent to learn these functions efficiently. Although many deep learning tasks (like classification) require supervised learning (with labels, training and testing sets), a rich subset of the field has developed potent methods for automated, non-linear unsupervised learning, in which all you need to provide is the data. These unsupervised methods include Autoencoders, Variational Autoencoders, and Generative Adversarial Networks. They can be used to visualize or compress data, to generate novel data, and even to learn the functions underlying your data. In this assignment, you’ll gain hands-on experience using simple networks to perform classification and regression on a variety of datasets, and will then apply these techniques to generate new samples using Variational Autoencoders and GANs.

This assignment will also serve as a hands-on introduction to PyTorch, currently one of the most popular machine learning libraries in Python. It provides a framework of pre-built classes and helper functions that greatly simplify the creation of neural networks and their paraphernalia. Before PyTorch and its ilk, machine learning researchers were known to spend days juggling weight and bias vectors, or tediously implementing their own data-processing functions. With PyTorch, this takes minutes.
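
As a taste of how much boilerplate this removes, here is a minimal sketch (on made-up toy tensors, not the exercise data) of one full gradient-descent step in PyTorch; the nn.Linear module and the optimizer own all the weight and bias bookkeeping:

import torch
from torch import nn, optim

# Toy regression problem: 10 samples with 3 features each (illustrative only)
x = torch.randn(10, 3)
y = torch.randn(10, 1)

model = nn.Linear(3, 1)                       # weights and biases are created for us
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

optimizer.zero_grad()                         # clear gradients from any previous step
loss = loss_fn(model(x), y)                   # forward pass
loss.backward()                               # autograd fills in all the gradients
optimizer.step()                              # one parameter update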

Before diving into this assignment, you’ll need to install PyTorch and some other important packages (see below). The PyTorch website provides an interactive quick-start guide that tailors the installation to your system’s configuration: https://pytorch.org/get-started/locally/. (The installation instructions will ask you to install torchvision in addition to torch. Thankfully we have a VM prepared for you!)

Needed packages for this lecture:

conda install pandas
conda install -c conda-forge scikit-learn
conda install -c anaconda seaborn
conda install -c conda-forge tensorflow
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
conda install ipywidgets
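
Once everything is installed, a quick sanity check (a small snippet added here for convenience, not part of the original exercise) confirms the environment is ready:

import torch, torchvision, sklearn, pandas, seaborn
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('scikit-learn:', sklearn.__version__)
print('pandas:', pandas.__version__)
print('seaborn:', seaborn.__version__)
print('CUDA available:', torch.cuda.is_available())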

Background

  • Geo-environmental variables

  • Ground observations: Nitrogen in US streams

[1]:
import os
import copy
import json
import time
import random
import codecs

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.spatial.distance import cdist, pdist, squareform
from scipy.linalg import eigh
from sklearn import manifold
from sklearn.cluster import KMeans
from sklearn.metrics import r2_score
from sklearn.preprocessing import MinMaxScaler

import torch
import torchvision
from torch import optim, nn
from torch.nn import functional as F
from torch.nn.functional import softmax
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.sampler import SubsetRandomSampler, RandomSampler
from torchvision import datasets, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
cpu

Loading data and Logistic Regression

[2]:
# Download the MNIST train and test sets (as tensors, via ToTensor)
mnist_train = datasets.MNIST(root = 'data', train=True, download=True, transform = transforms.ToTensor())
mnist_test = datasets.MNIST(root = 'data', train=False, download=True, transform = transforms.ToTensor())

Downloading MNIST: yann.lecun.com returned HTTP Error 503 (Service Unavailable) for each archive, so torchvision fell back to the https://ossci-datasets.s3.amazonaws.com/mnist/ mirror, downloaded the four archives (train/test images and labels), and extracted them to data/MNIST/raw.
Processing... (torchvision emitted a UserWarning about converting a non-writeable NumPy array to a tensor)
Done!
[3]:
# Prepare the dataset: wrap the (features, target) numpy arrays as float tensors for a DataLoader
class MyDataset(Dataset):
    def __init__(self, data, target):
        self.data = torch.from_numpy(data).float()
        self.target = torch.from_numpy(target).float()

    def __getitem__(self, index):
        x = self.data[index]
        y = self.target[index]
        return x, y

    def __len__(self):
        return len(self.data)
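
A quick sanity check of the wrapper on made-up arrays (hypothetical shapes, chosen only to mirror the real data below):

import numpy as np

toy = MyDataset(np.random.rand(5, 47), np.random.rand(5))  # 5 samples, 47 predictors
x0, y0 = toy[0]
print(len(toy), x0.shape, y0.shape)  # 5 torch.Size([47]) torch.Size([])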
[1]:
def DataPreProcessing(verbose=False):
    dataset = pd.read_csv("/media/sf_LVM_shared/my_SE_data/exercise/txt/US_TN_season_1_proc.csv")

    # Check for NaNs in the table and drop any rows that contain them
    dataset.isna().sum()
    dataset = dataset.dropna()  # dropna() returns a new frame; reassign, or the drop is a no-op

    # Remove extra variables from the dataset (keep just the 47 predictors and 'bcmean', the prediction target)
    dataset = dataset.drop(["RVunif_bc","mean","std","cv","longitude","latitude","RVunif"],axis=1)
    if verbose:
        print('Example of the dataset: \n',dataset.head())

    dataset_orig = dataset.copy()

    # Rescale: differences in scale across input variables can make the problem harder to model and result in unstable connection weights
    sc = MinMaxScaler(feature_range = (0,1)) #Scaling features to a range between 0 and 1

    # Scaling and translating each feature to our chosen range
    dataset = sc.fit_transform(dataset)
    dataset = pd.DataFrame(dataset, columns = dataset_orig.columns)
    if verbose:
        print('dataset (after transform): \n',dataset.head())
    dataset_scaled = dataset.copy() #Just backup
    inverse_data = sc.inverse_transform(dataset) #just to make sure it works
    inverse_data = pd.DataFrame(inverse_data, columns = dataset_orig.columns)
    if verbose:
        print('inverse_data: \n',inverse_data.head())


    #Check the overall stats
    train_stats = dataset.describe()
    train_stats.pop('bcmean') #because that is what we are trying to predict
    train_stats = train_stats.transpose()
    if verbose: print('train_stats: ',train_stats) #now train_stats has 47 predictors (as described in the paper).


    labels = dataset.pop('bcmean')
    if verbose: print('labels.describe: ',labels.describe())
    dataset = MyDataset(dataset.to_numpy(), labels.to_numpy())
    dataset_size  = len(dataset)
    if verbose: print('dataset_size: {}'.format(dataset_size))
    validation_split=0.3

    batch_size=25 # number of samples drawn per mini-batch

    # -- split dataset
    indices       = list(range(dataset_size))
    split         = int(np.floor(validation_split*dataset_size))
    if verbose: print('samples in validation: {}'.format(split))
    np.random.shuffle(indices) # shuffle before splitting (avoid this if sample order carries information you want to model)
    train_indices, val_indices = indices[split:], indices[:split]

    # -- create dataloaders
    # SubsetRandomSampler draws from the given index values; plain RandomSampler(train_indices)
    # would instead sample positions 0..len-1 of the list, letting the two splits overlap
    train_sampler = SubsetRandomSampler(train_indices)
    valid_sampler = SubsetRandomSampler(val_indices)

    dataloaders   = {
        'train': torch.utils.data.DataLoader(dataset, batch_size=batch_size, num_workers=1, sampler=train_sampler),
        'val': torch.utils.data.DataLoader(dataset, batch_size=batch_size, num_workers=1, sampler=valid_sampler),
        'test': torch.utils.data.DataLoader(dataset,  batch_size=dataset_size, num_workers=1, shuffle=False),  # the whole dataset in one batch
        'all_val': torch.utils.data.DataLoader(dataset, batch_size=split, num_workers=1, shuffle=True),  # the whole dataset in validation-sized shuffled batches
        }


    if verbose:
        # Inspect the distribution of the target ('bcmean')
        sns.set()
        f, (ax1,ax2) = plt.subplots(2, 1,sharex=True)
        sns.distplot(labels,hist=True,kde=False,bins=75,color='darkblue',  ax=ax1, axlabel=False)
        sns.kdeplot(labels,bw=0.15,legend=True,color='darkblue', ax=ax2)

        ax1.set_title('Original histogram')
        ax1.legend(['bcmean'])
        ax2.set_title('KDE')
        ax2.set_xlabel('Mean Concentration N')
        ax1.set_ylabel('Count')
        ax2.set_ylabel('Dist')

    # Inspect the joint distribution of a few pairs of columns from the training set;
    # note that scaling the data did not affect its skewness
    if verbose:
        sns.pairplot(dataset_scaled[["lc09", "lc07", "hydro05", "hydro07","soil01","dem"]], diag_kind="kde")
        plt.show()

    return dataloaders
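
One caveat worth flagging: the scaler above is fit on the full dataset before the train/validation split, so the validation rows leak into the scaling statistics. The effect is usually small with MinMaxScaler, but a leakage-free variant would fit on the training rows only, along these lines (a sketch reusing the names defined inside DataPreProcessing; raw_df stands for the unscaled frame, i.e. dataset_orig):

# Sketch: split the indices first, then fit the scaler only on the training rows
train_df = raw_df.iloc[train_indices]                   # raw_df = dataset_orig above
sc = MinMaxScaler(feature_range=(0, 1)).fit(train_df)
scaled_all = pd.DataFrame(sc.transform(raw_df), columns=raw_df.columns)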


[6]:
dataloaders = DataPreProcessing(verbose=True)

#Check out how the dataloader works
dataiter = iter(dataloaders['val'])
samples_tmp, labels_tmp = next(dataiter)

print('Dim of sample batch: {}\n'.format(samples_tmp.shape))
print('Dim of labels: {}\n'.format(labels_tmp.shape))
print('max {}, min {}'.format(torch.max(samples_tmp),torch.min(samples_tmp)))
print('max_labels {}, min {}'.format(torch.max(labels_tmp),torch.min(labels_tmp)))
Example of the dataset:
    lc01  lc02  lc03  lc04  lc05  lc06  lc07  lc08  lc09  lc10  ...   hydro14  \
0    66     0     1    33     0     0     0     0     0     0  ...      4418
1    53     0     2    43     0     0     0     0     0     0  ...      3710
2    28     0    15    45    23    22    12     0     0     0  ...  38819396
3    59     0     2    33     0     0     6     0     0     0  ...      7735
4    52     0     2    43     0     0     1     0     0     0  ...      3999

   hydro15    hydro16    hydro17    hydro18    hydro19   dem  slope  order  \
0       68     182937      21313     182937      21313   498    407      2
1       68     155202      18107     155202      18107   365    402      2
2       29  310689856  136347296  310689856  136347296  1470    492      8
3       67     302169      37266     302169      37266   342    467      2
4       68     167904      19575     167904      19575   341    391      2

     bcmean
0 -1.435592
1 -1.070739
2 -0.474586
3 -0.747083
4 -0.589795

[5 rows x 48 columns]
dataset (after transform):
        lc01  lc02      lc03      lc04    lc05     lc06      lc07  lc08  lc09  \
0  0.880000   0.0  0.010204  0.478261  0.0000  0.00000  0.000000   0.0   0.0
1  0.706667   0.0  0.020408  0.623188  0.0000  0.00000  0.000000   0.0   0.0
2  0.373333   0.0  0.153061  0.652174  0.2875  0.30137  0.127660   0.0   0.0
3  0.786667   0.0  0.020408  0.478261  0.0000  0.00000  0.063830   0.0   0.0
4  0.693333   0.0  0.020408  0.623188  0.0000  0.00000  0.010638   0.0   0.0

   lc10  ...   hydro14   hydro15   hydro16   hydro17   hydro18   hydro19  \
0   0.0  ...  0.000024  0.759036  0.000140  0.000035  0.000140  0.000035
1   0.0  ...  0.000020  0.759036  0.000118  0.000030  0.000118  0.000030
2   0.0  ...  0.209210  0.289157  0.237482  0.225111  0.237482  0.225111
3   0.0  ...  0.000042  0.746988  0.000231  0.000061  0.000231  0.000061
4   0.0  ...  0.000021  0.759036  0.000128  0.000032  0.000128  0.000032

        dem     slope  order    bcmean
0  0.143854  0.374539   0.25  0.028016
1  0.105202  0.369926   0.25  0.130145
2  0.426330  0.452952   1.00  0.297018
3  0.098518  0.429889   0.25  0.220742
4  0.098227  0.359779   0.25  0.264769

[5 rows x 48 columns]
inverse_data:
    lc01  lc02  lc03  lc04  lc05  lc06  lc07  lc08  lc09  lc10  ...  \
0  66.0   0.0   1.0  33.0   0.0   0.0   0.0   0.0   0.0   0.0  ...
1  53.0   0.0   2.0  43.0   0.0   0.0   0.0   0.0   0.0   0.0  ...
2  28.0   0.0  15.0  45.0  23.0  22.0  12.0   0.0   0.0   0.0  ...
3  59.0   0.0   2.0  33.0   0.0   0.0   6.0   0.0   0.0   0.0  ...
4  52.0   0.0   2.0  43.0   0.0   0.0   1.0   0.0   0.0   0.0  ...

      hydro14  hydro15      hydro16      hydro17      hydro18      hydro19  \
0      4418.0     68.0     182937.0      21313.0     182937.0      21313.0
1      3710.0     68.0     155202.0      18107.0     155202.0      18107.0
2  38819396.0     29.0  310689856.0  136347296.0  310689856.0  136347296.0
3      7735.0     67.0     302169.0      37266.0     302169.0      37266.0
4      3999.0     68.0     167904.0      19575.0     167904.0      19575.0

      dem  slope  order    bcmean
0   498.0  407.0    2.0 -1.435592
1   365.0  402.0    2.0 -1.070739
2  1470.0  492.0    8.0 -0.474586
3   342.0  467.0    2.0 -0.747083
4   341.0  391.0    2.0 -0.589795

[5 rows x 48 columns]
train_stats:            count      mean       std  min       25%       50%       75%  max
lc01     1118.0  0.137448  0.193109  0.0  0.013333  0.053333  0.173333  1.0
lc02     1118.0  0.008497  0.057215  0.0  0.000000  0.000000  0.000000  1.0
lc03     1118.0  0.204839  0.247051  0.0  0.020408  0.081633  0.364796  1.0
lc04     1118.0  0.220464  0.175918  0.0  0.101449  0.159420  0.275362  1.0
lc05     1118.0  0.033777  0.107309  0.0  0.000000  0.000000  0.012500  1.0
lc06     1118.0  0.149607  0.162242  0.0  0.041096  0.095890  0.191781  1.0
lc07     1118.0  0.339512  0.271629  0.0  0.106383  0.265957  0.542553  1.0
lc08     1118.0  0.005134  0.057887  0.0  0.000000  0.000000  0.000000  1.0
lc09     1118.0  0.080954  0.188890  0.0  0.000000  0.012048  0.048193  1.0
lc10     1118.0  0.004584  0.054904  0.0  0.000000  0.000000  0.000000  1.0
lc11     1118.0  0.014183  0.067649  0.0  0.000000  0.000000  0.000000  1.0
lc12     1118.0  0.018244  0.069379  0.0  0.000000  0.000000  0.015873  1.0
prec     1118.0  0.010144  0.066673  0.0  0.000039  0.000184  0.001226  1.0
tmin     1118.0  0.497205  0.158301  0.0  0.400538  0.492608  0.586022  1.0
tmax     1118.0  0.528114  0.172596  0.0  0.405479  0.526027  0.642123  1.0
soil01   1118.0  0.090513  0.072919  0.0  0.053125  0.071875  0.112500  1.0
soil02   1118.0  0.492454  0.187220  0.0  0.358974  0.461538  0.615385  1.0
soil03   1118.0  0.517907  0.172268  0.0  0.431373  0.509804  0.588235  1.0
soil04   1118.0  0.493104  0.173381  0.0  0.387097  0.483871  0.580645  1.0
soil05   1118.0  0.383697  0.128425  0.0  0.331081  0.405405  0.459459  1.0
soil06   1118.0  0.211986  0.160954  0.0  0.114286  0.171429  0.257143  1.0
soil07   1118.0  0.289927  0.159142  0.0  0.138889  0.291667  0.416667  1.0
soil08   1118.0  0.701262  0.114678  0.0  0.656499  0.718833  0.773210  1.0
soil09   1118.0  0.978158  0.090370  0.0  1.000000  1.000000  1.000000  1.0
soil10   1118.0  0.210858  0.122884  0.0  0.130435  0.217391  0.260870  1.0
hydro01  1118.0  0.448534  0.193156  0.0  0.320513  0.431624  0.585470  1.0
hydro02  1118.0  0.396587  0.152295  0.0  0.290598  0.358974  0.435897  1.0
hydro03  1118.0  0.400296  0.187627  0.0  0.281250  0.375000  0.531250  1.0
hydro04  1118.0  0.550702  0.166883  0.0  0.467990  0.567450  0.618271  1.0
hydro05  1118.0  0.570012  0.182000  0.0  0.433333  0.560000  0.718333  1.0
hydro06  1118.0  0.447054  0.170048  0.0  0.346204  0.439791  0.562173  1.0
hydro07  1118.0  0.584048  0.161894  0.0  0.509434  0.586478  0.672956  1.0
hydro08  1118.0  0.650470  0.212953  0.0  0.537500  0.712500  0.790625  1.0
hydro09  1118.0  0.471208  0.243332  0.0  0.294964  0.386091  0.740408  1.0
hydro10  1118.0  0.593690  0.191173  0.0  0.469613  0.588398  0.745856  1.0
hydro11  1118.0  0.434005  0.176386  0.0  0.320917  0.418338  0.564470  1.0
hydro12  1118.0  0.011632  0.073962  0.0  0.000046  0.000223  0.001267  1.0
hydro13  1118.0  0.012683  0.077900  0.0  0.000042  0.000218  0.001422  1.0
hydro14  1118.0  0.010287  0.069454  0.0  0.000042  0.000209  0.001256  1.0
hydro15  1118.0  0.258196  0.216782  0.0  0.096386  0.168675  0.385542  1.0
hydro16  1118.0  0.012388  0.077245  0.0  0.000041  0.000211  0.001403  1.0
hydro17  1118.0  0.010475  0.069597  0.0  0.000047  0.000219  0.001358  1.0
hydro18  1118.0  0.012388  0.077245  0.0  0.000041  0.000211  0.001403  1.0
hydro19  1118.0  0.010475  0.069597  0.0  0.000047  0.000219  0.001358  1.0
dem      1118.0  0.155945  0.206628  0.0  0.047733  0.085731  0.140294  1.0
slope    1118.0  0.139799  0.166496  0.0  0.035978  0.071033  0.174354  1.0
order    1118.0  0.338327  0.229210  0.0  0.125000  0.375000  0.500000  1.0
labels.describe:  count    1118.000000
mean        0.511352
std         0.229915
min         0.000000
25%         0.341402
50%         0.509617
75%         0.676309
max         1.000000
Name: bcmean, dtype: float64
dataset_size: 1118
samples in validation: 335
/home/user/miniconda3/lib/python3.8/site-packages/seaborn/distributions.py:2551: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
  warnings.warn(msg, FutureWarning)
/home/user/miniconda3/lib/python3.8/site-packages/seaborn/distributions.py:1659: FutureWarning: The `bw` parameter is deprecated in favor of `bw_method` and `bw_adjust`. Using 0.15 for `bw_method`, but please see the docs for the new parameters and update your code.
  warnings.warn(msg, FutureWarning)
[Figure: histogram and KDE of the target 'bcmean']
[Figure: pairplot of selected predictors (lc09, lc07, hydro05, hydro07, soil01, dem)]
Dim of sample batch: torch.Size([25, 47])

Dim of labels: torch.Size([25])

max 1.0, min 0.0
max_labels 0.9762871265411377, min 0.10735689848661423
[7]:
# Training and Evaluation routines
def train(model,loss_fn, optimizer, train_loader, test_loader, config_str, num_epochs=None, verbose=False):
    """
    A standard training loop.
    INPUT:
    :param model: an untrained pytorch model
    :param loss_fn: e.g. Cross-Entropy or Mean Squared Error loss.
    :param optimizer: the model optimizer, initialized with a learning rate.
    :param train_loader: the training data, in a dataloader for easy iteration.
    :param test_loader: the testing data, in a dataloader for easy iteration.
    :param config_str: configuration tag used to name the saved loss/R^2 plot.
    """

    path_to_save = './plots'
    if not os.path.exists(path_to_save):
        os.makedirs(path_to_save)

    print('optimizer: {}'.format(optimizer))
    if num_epochs is None:
        num_epochs = 100
    print('n. of epochs: {}'.format(num_epochs))
    train_loss=[]
    val_loss=[]
    r2train=[]
    r2val=[]
    for epoch in range(num_epochs+1):
        # loop through each data point in the training set
        all_loss_train=0
        for data, targets in train_loader:
            start = time.time()
            # run the model on the data
            model_input = data.view(data.size(0),-1).to(device)  # flatten each sample into a 1-D vector of the 47 predictors
            if verbose:
                print('model_input.shape: {}'.format(model_input.shape))
                print('model_input: {}'.format(model_input))

            # Clear gradients w.r.t. parameters
            optimizer.zero_grad()

            out = model(model_input).squeeze()
            if verbose:
                print('targets: {}'.format(targets.shape))
                print('out: {}'.format(out.shape))

            # Calculate the loss
            targets = targets.to(device) # move targets to the same device as the model
            if verbose: print('targets.shape: {}'.format(targets.shape))
            loss = loss_fn(out,targets)
            if verbose: print('loss: {}'.format(loss))

            # Find the gradients of our loss via backpropagation
            loss.backward()

            # Adjust accordingly with the optimizer
            optimizer.step()
            all_loss_train += loss.item()
        train_loss.append(all_loss_train/len(train_loader))

        with torch.no_grad():
            all_loss_val=0
            for data, targets in test_loader:

                # run the model on the data
                model_input = data.view(data.size(0),-1).to(device)  # flatten each sample into a 1-D feature vector
                out = model(model_input).squeeze()
                targets = targets.to(device) # move targets to the same device as the model
                loss = loss_fn(out,targets)
                all_loss_val += loss.item()
            val_loss.append(all_loss_val/len(test_loader))


        # Give a status report every 10 epochs
        if epoch % 10==0:
            print(f" EPOCH {epoch}. Progress: {epoch/num_epochs*100}%. ")
            r2train.append(evaluate(model,train_loader,verbose))
            r2val.append(evaluate(model,test_loader,verbose))
            print(" Training R^2: {:.4f}. Test R^2: {:.4f}. Loss Train: {:.4f}. Loss Val: {:.4f}. Time: {:.4f}".format(r2train[-1], r2val[-1],
                                                                                                                train_loss[-1], val_loss[-1], 10*(time.time() - start)))

    # Plot
    plt.close('all')
    fig,ax = plt.subplots(1,2,figsize=(10,5))
    ax[0].plot(np.arange(num_epochs+1),train_loss, label='Training')
    ax[0].plot(np.arange(num_epochs+1),val_loss, label='Test')
    ax[0].set_xlabel('Epochs')
    ax[0].set_ylabel('Loss')
    ax[0].legend()

    ax[1].plot(np.arange(0,num_epochs+1,10),r2train, label='Training')
    ax[1].plot(np.arange(0,num_epochs+1,10),r2val, label='Test')
    ax[1].set_xlabel('Epochs')
    ax[1].set_ylabel('$R^2$')
    ax[1].legend()

    plt.show()
    print('saving ', os.path.join(path_to_save,  config_str + '.png'))
    fig.savefig(os.path.join(path_to_save, config_str + '.png'), bbox_inches='tight')

def evaluate(model, evaluation_set, verbose=False):
    """
    Evaluates the given model on the given dataset.
    Returns the R^2 score averaged over the batches of evaluation_set.
    """
    with torch.no_grad(): # this disables gradient tracking, which makes the model run much more quickly.
        r_score = []
        for data, targets in evaluation_set:

            # run the model on the data
            model_input = data.view(data.size(0),-1).to(device)  # flatten each sample into a 1-D feature vector
            if verbose:
                print('model_input.shape: {}'.format(model_input.shape))
                print('targets.shape: {}'.format(targets.shape))
            predicted = model(model_input).squeeze()

            if verbose:
                print('predicted[:5]: {}'.format(predicted[:5].cpu()))
                print('targets[:5]: {}'.format(targets[:5]))

            r_score.append(r2_score(targets, predicted.cpu()))
            if verbose: print('r2_score(targets, out): ',r2_score(targets, predicted.cpu()))

    r_score = np.array(r_score)
    r_score = r_score.mean()

    return r_score

[8]:
# A single linear layer: despite the name, this performs plain linear regression when trained with MSE
class LogisticRegression(nn.Module):
    def __init__(self, in_dim,out_dim,verbose=False):
        super(LogisticRegression, self).__init__()
        # Linear function
        self.fc1 = nn.Linear(in_dim, out_dim)
        torch.nn.init.zeros_(self.fc1.weight)
        torch.nn.init.zeros_(self.fc1.bias)

    def forward(self, x):
        # Linear function
        out = self.fc1(x)
        return out
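
Since this model is a single linear layer trained with MSE, ordinary least squares is the ceiling it can reach; a quick baseline check (a sketch assuming the dataloaders built above) gives a useful reference point before tuning the optimizer:

from sklearn.linear_model import LinearRegression

def stack(loader):
    # Concatenate all mini-batches of a dataloader back into numpy arrays
    xs, ys = zip(*[(x, y) for x, y in loader])
    return torch.cat(xs).numpy(), torch.cat(ys).numpy()

X_tr, y_tr = stack(dataloaders['train'])
X_va, y_va = stack(dataloaders['val'])
ols = LinearRegression().fit(X_tr, y_tr)
print('OLS validation R^2:', r2_score(y_va, ols.predict(X_va)))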
[10]:
lr_range = [0.01, 0.001,0.0001]
weight_decay_range = [0,0.1,0.01,0.001]
momentum_range = [0,0.1,0.01,0.001]
dampening_range = [0,0.1,0.01,0.001]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                    model = LogisticRegression(in_dim=47,out_dim=1, verbose=False).to(device)
                    print(model)
                    SGD = torch.optim.SGD(model.parameters(), lr = lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
                    # MSE loss: appropriate for this regression task
                    loss_fn = nn.MSELoss()
                    config_str = 'lr' + str(lr) + '_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) +'_nesterov' + str(nesterov)
                    train(model,loss_fn, SGD, dataloaders['train'], dataloaders['val'], config_str,num_epochs=100, verbose=False)
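
As an aside, the five nested loops can be flattened with itertools.product; an equivalent sweep (same ranges as above) would read:

from itertools import product

for lr, momentum, weight_decay, nesterov, dampening in product(
        lr_range, momentum_range, weight_decay_range, nesterov_range, dampening_range):
    ...  # same body as above: build the model, optimizer and loss, then call train(...)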
[17]:
# Train the best configurations for longer
lr_range = [0.01]
weight_decay_range = [0.001]
momentum_range = [0.1,0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    try:
                        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                        model = LogisticRegression(in_dim=47,out_dim=1, verbose=False).to(device)
                        print(model)
                        SGD = torch.optim.SGD(model.parameters(), lr = lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
                        # MSE loss: appropriate for this regression task
                        loss_fn = nn.MSELoss()
                        config_str = 'lr' + str(lr) + 'LONGER_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) +'_nesterov' + str(nesterov)
                        train(model,loss_fn, SGD, dataloaders['train'], dataloaders['val'], config_str,num_epochs=500, verbose=False)
                    except Exception:
                        # skip invalid hyperparameter combinations (e.g. Nesterov requires momentum > 0 and zero dampening)
                        pass

lr: 0.01, momentum: 0.1, weight_decay: 0.001, dampening: 0, nesterov: False
LogisticRegression(
  (fc1): Linear(in_features=47, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.1
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.1292. Test R^2: 0.1760. Loss Train: 0.0809. Loss Val: 0.0527. Time: 5.9460
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4196. Test R^2: 0.3470. Loss Train: 0.0322. Loss Val: 0.0412. Time: 2.5306
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4320. Test R^2: 0.3854. Loss Train: 0.0303. Loss Val: 0.0404. Time: 2.1711
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4587. Test R^2: 0.4069. Loss Train: 0.0289. Loss Val: 0.0396. Time: 3.1840
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.4671. Test R^2: 0.3842. Loss Train: 0.0282. Loss Val: 0.0383. Time: 2.2062
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.4788. Test R^2: 0.4042. Loss Train: 0.0279. Loss Val: 0.0382. Time: 2.1512
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.4832. Test R^2: 0.3908. Loss Train: 0.0277. Loss Val: 0.0405. Time: 2.3680
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.4932. Test R^2: 0.3883. Loss Train: 0.0272. Loss Val: 0.0390. Time: 2.0704
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.4834. Test R^2: 0.4274. Loss Train: 0.0272. Loss Val: 0.0373. Time: 2.1498
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.4877. Test R^2: 0.4394. Loss Train: 0.0268. Loss Val: 0.0375. Time: 2.1138
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.4941. Test R^2: 0.3365. Loss Train: 0.0270. Loss Val: 0.0365. Time: 2.3166
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.4872. Test R^2: 0.3795. Loss Train: 0.0267. Loss Val: 0.0376. Time: 2.1446
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.4927. Test R^2: 0.4362. Loss Train: 0.0264. Loss Val: 0.0374. Time: 2.8796
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.4973. Test R^2: 0.4363. Loss Train: 0.0267. Loss Val: 0.0379. Time: 3.0462
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.4965. Test R^2: 0.4315. Loss Train: 0.0267. Loss Val: 0.0377. Time: 2.1279
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.4898. Test R^2: 0.3951. Loss Train: 0.0267. Loss Val: 0.0361. Time: 2.0948
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5092. Test R^2: 0.4080. Loss Train: 0.0267. Loss Val: 0.0386. Time: 2.1599
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5045. Test R^2: 0.4345. Loss Train: 0.0263. Loss Val: 0.0363. Time: 2.7793
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.4790. Test R^2: 0.4126. Loss Train: 0.0262. Loss Val: 0.0354. Time: 2.1298
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.4640. Test R^2: 0.4252. Loss Train: 0.0262. Loss Val: 0.0364. Time: 2.1042
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5035. Test R^2: 0.4357. Loss Train: 0.0261. Loss Val: 0.0350. Time: 2.1428
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5116. Test R^2: 0.4321. Loss Train: 0.0259. Loss Val: 0.0359. Time: 2.1280
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5220. Test R^2: 0.4098. Loss Train: 0.0263. Loss Val: 0.0364. Time: 2.0515
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5080. Test R^2: 0.3754. Loss Train: 0.0262. Loss Val: 0.0354. Time: 2.1203
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5181. Test R^2: 0.4549. Loss Train: 0.0259. Loss Val: 0.0357. Time: 2.0732
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5137. Test R^2: 0.4251. Loss Train: 0.0260. Loss Val: 0.0371. Time: 2.1298
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.5134. Test R^2: 0.4464. Loss Train: 0.0261. Loss Val: 0.0360. Time: 2.3224
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.4937. Test R^2: 0.4324. Loss Train: 0.0257. Loss Val: 0.0357. Time: 2.0788
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.5045. Test R^2: 0.4489. Loss Train: 0.0257. Loss Val: 0.0367. Time: 2.2380
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.5001. Test R^2: 0.4146. Loss Train: 0.0260. Loss Val: 0.0359. Time: 2.1239
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5207. Test R^2: 0.4330. Loss Train: 0.0260. Loss Val: 0.0355. Time: 2.6492
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5231. Test R^2: 0.4120. Loss Train: 0.0267. Loss Val: 0.0363. Time: 2.3547
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5032. Test R^2: 0.4650. Loss Train: 0.0257. Loss Val: 0.0366. Time: 2.1155
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.5227. Test R^2: 0.4368. Loss Train: 0.0259. Loss Val: 0.0365. Time: 2.0535
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.5147. Test R^2: 0.4457. Loss Train: 0.0262. Loss Val: 0.0385. Time: 2.6143
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5198. Test R^2: 0.4558. Loss Train: 0.0260. Loss Val: 0.0356. Time: 3.3361
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5272. Test R^2: 0.4013. Loss Train: 0.0259. Loss Val: 0.0356. Time: 2.0422
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.4989. Test R^2: 0.4503. Loss Train: 0.0263. Loss Val: 0.0345. Time: 2.2167
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.5141. Test R^2: 0.4529. Loss Train: 0.0261. Loss Val: 0.0352. Time: 2.3632
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.4922. Test R^2: 0.4053. Loss Train: 0.0258. Loss Val: 0.0360. Time: 3.0163
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.5315. Test R^2: 0.4407. Loss Train: 0.0259. Loss Val: 0.0352. Time: 2.4278
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.4209. Test R^2: 0.4272. Loss Train: 0.0257. Loss Val: 0.0351. Time: 2.7372
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.5020. Test R^2: 0.4106. Loss Train: 0.0256. Loss Val: 0.0351. Time: 2.3255
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5092. Test R^2: 0.4564. Loss Train: 0.0256. Loss Val: 0.0353. Time: 2.3361
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5206. Test R^2: 0.3941. Loss Train: 0.0262. Loss Val: 0.0344. Time: 2.4134
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5316. Test R^2: 0.4250. Loss Train: 0.0253. Loss Val: 0.0352. Time: 2.3447
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5000. Test R^2: 0.3961. Loss Train: 0.0260. Loss Val: 0.0354. Time: 2.1631
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.5037. Test R^2: 0.4630. Loss Train: 0.0254. Loss Val: 0.0347. Time: 2.8979
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.5328. Test R^2: 0.3998. Loss Train: 0.0255. Loss Val: 0.0366. Time: 2.0738
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5091. Test R^2: 0.4204. Loss Train: 0.0254. Loss Val: 0.0349. Time: 1.9961
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5191. Test R^2: 0.4330. Loss Train: 0.0256. Loss Val: 0.0351. Time: 3.1330
[Figure: training/validation loss and R^2 curves]
saving  ./plots/lr0.01LONGER_momentum0.1_wdecay0.001_dampening0_nesterovFalse.png

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
LogisticRegression(
  (fc1): Linear(in_features=47, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2370. Test R^2: 0.1497. Loss Train: 0.0843. Loss Val: 0.0538. Time: 3.2603
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4846. Test R^2: 0.4289. Loss Train: 0.0270. Loss Val: 0.0367. Time: 2.6736
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5036. Test R^2: 0.4182. Loss Train: 0.0268. Loss Val: 0.0379. Time: 2.7447
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5200. Test R^2: 0.4422. Loss Train: 0.0266. Loss Val: 0.0359. Time: 2.2237
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5165. Test R^2: 0.4516. Loss Train: 0.0260. Loss Val: 0.0370. Time: 7.8190
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.4289. Test R^2: 0.4077. Loss Train: 0.0289. Loss Val: 0.0379. Time: 2.4827
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5101. Test R^2: 0.4489. Loss Train: 0.0254. Loss Val: 0.0353. Time: 2.3484
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.5097. Test R^2: 0.4530. Loss Train: 0.0268. Loss Val: 0.0355. Time: 3.0138
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.5224. Test R^2: 0.4542. Loss Train: 0.0260. Loss Val: 0.0354. Time: 2.6839
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5271. Test R^2: 0.3807. Loss Train: 0.0260. Loss Val: 0.0349. Time: 1.8238
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5101. Test R^2: 0.4279. Loss Train: 0.0261. Loss Val: 0.0359. Time: 2.3156
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5306. Test R^2: 0.4605. Loss Train: 0.0258. Loss Val: 0.0345. Time: 1.8360
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.5264. Test R^2: 0.4653. Loss Train: 0.0254. Loss Val: 0.0367. Time: 2.4769
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5018. Test R^2: 0.4304. Loss Train: 0.0263. Loss Val: 0.0352. Time: 2.2373
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.5286. Test R^2: 0.4372. Loss Train: 0.0258. Loss Val: 0.0349. Time: 2.5191
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.4869. Test R^2: 0.4419. Loss Train: 0.0272. Loss Val: 0.0362. Time: 2.7092
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5121. Test R^2: 0.4195. Loss Train: 0.0263. Loss Val: 0.0374. Time: 2.3216
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5133. Test R^2: 0.4503. Loss Train: 0.0263. Loss Val: 0.0357. Time: 2.1948
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.5382. Test R^2: 0.3948. Loss Train: 0.0274. Loss Val: 0.0352. Time: 2.3185
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.4829. Test R^2: 0.4170. Loss Train: 0.0268. Loss Val: 0.0367. Time: 2.4032
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5256. Test R^2: 0.4373. Loss Train: 0.0262. Loss Val: 0.0350. Time: 2.3591
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5272. Test R^2: 0.4283. Loss Train: 0.0256. Loss Val: 0.0354. Time: 2.1982
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5143. Test R^2: 0.4589. Loss Train: 0.0259. Loss Val: 0.0345. Time: 3.1806
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5401. Test R^2: 0.4181. Loss Train: 0.0261. Loss Val: 0.0343. Time: 2.5634
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5014. Test R^2: 0.4760. Loss Train: 0.0255. Loss Val: 0.0352. Time: 2.0837
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5255. Test R^2: 0.4414. Loss Train: 0.0268. Loss Val: 0.0347. Time: 1.9711
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.5492. Test R^2: 0.4560. Loss Train: 0.0256. Loss Val: 0.0350. Time: 2.4712
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.3971. Test R^2: 0.4073. Loss Train: 0.0258. Loss Val: 0.0380. Time: 1.8694
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.4833. Test R^2: 0.4405. Loss Train: 0.0252. Loss Val: 0.0342. Time: 1.9649
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.5295. Test R^2: 0.4399. Loss Train: 0.0255. Loss Val: 0.0349. Time: 1.8793
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5335. Test R^2: 0.4543. Loss Train: 0.0255. Loss Val: 0.0347. Time: 2.0025
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.4871. Test R^2: 0.4498. Loss Train: 0.0256. Loss Val: 0.0348. Time: 1.9823
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5254. Test R^2: 0.4363. Loss Train: 0.0265. Loss Val: 0.0346. Time: 2.0350
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.4671. Test R^2: 0.4699. Loss Train: 0.0263. Loss Val: 0.0348. Time: 2.0128
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.4482. Test R^2: 0.4442. Loss Train: 0.0263. Loss Val: 0.0365. Time: 1.9249
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5402. Test R^2: 0.4583. Loss Train: 0.0253. Loss Val: 0.0346. Time: 2.1697
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5130. Test R^2: 0.4348. Loss Train: 0.0281. Loss Val: 0.0350. Time: 1.9188
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.5427. Test R^2: 0.4000. Loss Train: 0.0257. Loss Val: 0.0349. Time: 1.9767
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.5321. Test R^2: 0.4589. Loss Train: 0.0261. Loss Val: 0.0366. Time: 1.8689
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5216. Test R^2: 0.4523. Loss Train: 0.0263. Loss Val: 0.0352. Time: 1.7124
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.4896. Test R^2: 0.4235. Loss Train: 0.0262. Loss Val: 0.0367. Time: 1.9804
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5069. Test R^2: 0.4444. Loss Train: 0.0251. Loss Val: 0.0351. Time: 2.0140
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.4853. Test R^2: 0.4509. Loss Train: 0.0262. Loss Val: 0.0361. Time: 1.9275
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5214. Test R^2: 0.4328. Loss Train: 0.0255. Loss Val: 0.0358. Time: 1.9275
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5252. Test R^2: 0.4525. Loss Train: 0.0254. Loss Val: 0.0357. Time: 2.0793
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5405. Test R^2: 0.4435. Loss Train: 0.0254. Loss Val: 0.0339. Time: 1.9715
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5270. Test R^2: 0.4243. Loss Train: 0.0266. Loss Val: 0.0346. Time: 1.9307
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.5364. Test R^2: 0.4669. Loss Train: 0.0254. Loss Val: 0.0343. Time: 2.1174
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.4949. Test R^2: 0.4394. Loss Train: 0.0261. Loss Val: 0.0339. Time: 1.9938
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.4509. Test R^2: 0.4009. Loss Train: 0.0257. Loss Val: 0.0414. Time: 1.9509
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5098. Test R^2: 0.4527. Loss Train: 0.0258. Loss Val: 0.0337. Time: 2.0501
[Figure: training/validation loss and R^2 curves]
saving  ./plots/lr0.01LONGER_momentum0.9_wdecay0.001_dampening0_nesterovFalse.png
[18]:
# Test: predictions on a validation-sized batch
with torch.no_grad():
    data, targets_val = next(iter(dataloaders['all_val']))
    model_input = data.to(device)
    predicted_val = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted_val.shape))
    print('predicted[:20]: \t{}'.format(predicted_val[:20]))
    print('targets[:20]: \t\t{}'.format(targets_val[:20]))

# Test: predictions on the whole dataset in one batch
with torch.no_grad():
    data, targets = next(iter(dataloaders['test']))
    model_input = data.to(device)
    predicted = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted.shape))
    print('predicted[:20]: \t{}'.format(predicted[:20]))
    print('targets[:20]: \t\t{}'.format(targets[:20]))
predicted.shape: torch.Size([335])
predicted[:20]:         tensor([0.4197, 0.4249, 0.6119, 0.8562, 0.6343, 0.3911, 0.4445, 0.2016, 0.4953,
        0.5134, 0.2582, 0.4030, 0.3671, 0.4673, 0.5299, 0.7208, 0.5424, 0.1291,
        0.6937, 0.1303])
targets[:20]:           tensor([0.3880, 0.2883, 0.7370, 0.9044, 0.5947, 0.5528, 0.3247, 0.1974, 0.5815,
        0.8917, 0.3609, 0.3433, 0.3853, 0.5961, 0.9056, 0.7102, 0.6711, 0.0362,
        0.5681, 0.2078])
predicted.shape: torch.Size([1118])
predicted[:20]:         tensor([0.3200, 0.3313, 0.6222, 0.3567, 0.3359, 0.4297, 0.3794, 0.4444, 0.5455,
        0.5131, 0.5163, 0.5323, 0.5762, 0.5698, 0.5697, 0.5701, 0.5701, 0.5715,
        0.5823, 0.5200])
targets[:20]:           tensor([0.0280, 0.1301, 0.2970, 0.2207, 0.2648, 0.3342, 0.2873, 0.3362, 0.6023,
        0.5031, 0.5335, 0.6247, 0.5454, 0.4926, 0.4442, 0.5030, 0.5472, 0.5128,
        0.5702, 0.5017])
[20]:
# Time for a real test: scatter the predictions against the true values
path_to_save = './plots'
if not os.path.exists(path_to_save):
    os.makedirs(path_to_save)

fig, (ax1,ax2) = plt.subplots(1,2, sharey=True)
r = r2_score(targets_val, predicted_val.cpu())
ax1.scatter(targets_val, predicted_val.cpu(),alpha=0.5, label='$R^2$ = %.3f' % (r))
ax1.legend(loc="upper left")
ax1.set_xlabel('True Values [Mean Conc.]')
ax1.set_ylabel('Predictions [Mean Conc.]')
ax1.axis('equal')
ax1.axis('square')
ax1.set_xlim([0,1])
ax1.set_ylim([0,1])
_ = ax1.plot([-100, 100], [-100, 100], 'r:')
ax1.set_title('Test dataset')
fig.set_figheight(30)
fig.set_figwidth(10)

#Whole dataset
r = r2_score(targets, predicted.cpu())
ax2.scatter(targets, predicted.cpu(), alpha=0.5, label='$R^2$ = %.3f' % (r))
ax2.legend(loc="upper left")
ax2.set_xlabel('True Values [Mean Conc.]')
ax2.set_ylabel('Predictions [Mean Conc.]')
ax2.axis('equal')
ax2.axis('square')
ax2.set_xlim([0,1])
ax2.set_ylim([0,1])
_ = ax2.plot([-100, 100], [-100, 100], 'r:')
ax2.set_title('Whole dataset')
fig.savefig(os.path.join(path_to_save, 'LogisticReg_LONGER_R2Score_' + config_str + '.png'), bbox_inches='tight')

[Figure: predicted vs. true mean N concentration, for the test batch and the whole dataset]

Question 1

What R^2 score did your simple network achieve? Make a table with the configurations you tested and results you obtained!

Feed-forward Neural Network

This time, keeping the rest of your logistic model fixed:

  • Create one hidden layer (with 128 units) between the input and output by creating another weight and bias variable.

  • Try training this without a non-linearity between the layers (linear activation), and then try adding a sigmoid non-linearity both before and after the hidden layer, recording your test R^2 results for each in a table (see the sketch after the FeedForwardNet class below).

  • Try adjusting the learning rate (by making it smaller) if your model is not converging/improving. You might also try increasing the number of epochs used.

  • Experiment with the non-linearity used before the middle layer. Here are some activation functions to choose from: relu, softplus, elu, tanh.

  • Lastly, experiment with the width of the hidden layer, keeping the activation function that performs best. Remember to add these results to your table.

[21]:
class FeedForwardNet(nn.Module):
    def __init__(self, in_dim,hid_dim,out_dim,verbose=False):
        super(FeedForwardNet, self).__init__()
        # Two linear layers with no non-linearity in between (the 'linear activation' case)
        self.fc1 = nn.Linear(in_dim, hid_dim)
        self.fc2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        out = self.fc1(x)
        out = self.fc2(out)
        return out
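
Note that FeedForwardNet stacks two linear layers with nothing in between, which collapses to a single linear map. The sigmoid variant from the instructions only needs the activation inserted in forward; a sketch (SigmoidNet is a name chosen here, not from the original notebook):

class SigmoidNet(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super(SigmoidNet, self).__init__()
        self.fc1 = nn.Linear(in_dim, hid_dim)
        self.act = nn.Sigmoid()   # non-linearity between the two layers
        self.fc2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))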
[22]:
# Sweep hidden-layer widths with the best optimizer settings, training for longer
lr_range = [0.001]
hid_dim_range = [64,128]
weight_decay_range = [0.001]
momentum_range = [0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    for hid_dim in hid_dim_range:
                        try:
                            print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                            model = FeedForwardNet(in_dim=47,hid_dim=hid_dim, out_dim=1, verbose=False).to(device)
                            print(model)
                            SGD = torch.optim.SGD(model.parameters(), lr = lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
                            # MSE loss: appropriate for this regression task
                            loss_fn = nn.MSELoss()
                            config_str = 'lr' + str(lr) + 'FFNet_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) +'_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
                            train(model,loss_fn, SGD, dataloaders['train'], dataloaders['val'], config_str,num_epochs=500, verbose=False)
                        except Exception:
                            # skip invalid hyperparameter combinations
                            pass

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedForwardNet(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.001
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: -0.1002. Test R^2: -0.1883. Loss Train: 0.1547. Loss Val: 0.0770. Time: 2.8679
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.3881. Test R^2: 0.2935. Loss Train: 0.0333. Loss Val: 0.0443. Time: 2.2752
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4002. Test R^2: 0.3021. Loss Train: 0.0306. Loss Val: 0.0407. Time: 7.0543
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4327. Test R^2: 0.3883. Loss Train: 0.0296. Loss Val: 0.0408. Time: 2.1320
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.4659. Test R^2: 0.3870. Loss Train: 0.0287. Loss Val: 0.0389. Time: 2.2508
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.4600. Test R^2: 0.3723. Loss Train: 0.0281. Loss Val: 0.0391. Time: 2.6312
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.4675. Test R^2: 0.4191. Loss Train: 0.0280. Loss Val: 0.0381. Time: 2.1675
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.4880. Test R^2: 0.3722. Loss Train: 0.0277. Loss Val: 0.0394. Time: 3.2555
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.4800. Test R^2: 0.4036. Loss Train: 0.0273. Loss Val: 0.0388. Time: 3.5889
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.4860. Test R^2: 0.4194. Loss Train: 0.0267. Loss Val: 0.0368. Time: 2.2345
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.4921. Test R^2: 0.4067. Loss Train: 0.0269. Loss Val: 0.0372. Time: 2.6351
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.4988. Test R^2: 0.4150. Loss Train: 0.0276. Loss Val: 0.0368. Time: 2.4887
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.4870. Test R^2: 0.3986. Loss Train: 0.0266. Loss Val: 0.0366. Time: 2.3648
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5127. Test R^2: 0.4201. Loss Train: 0.0265. Loss Val: 0.0361. Time: 2.3501
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.4983. Test R^2: 0.4107. Loss Train: 0.0265. Loss Val: 0.0359. Time: 2.6944
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.4872. Test R^2: 0.4233. Loss Train: 0.0272. Loss Val: 0.0364. Time: 2.9972
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.4895. Test R^2: 0.4423. Loss Train: 0.0263. Loss Val: 0.0364. Time: 3.7616
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5143. Test R^2: 0.4369. Loss Train: 0.0262. Loss Val: 0.0362. Time: 2.2601
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.5009. Test R^2: 0.4351. Loss Train: 0.0269. Loss Val: 0.0372. Time: 2.2206
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.4984. Test R^2: 0.4373. Loss Train: 0.0259. Loss Val: 0.0361. Time: 2.2813
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5306. Test R^2: 0.4306. Loss Train: 0.0265. Loss Val: 0.0359. Time: 2.2075
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5060. Test R^2: 0.4286. Loss Train: 0.0260. Loss Val: 0.0362. Time: 2.2509
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5179. Test R^2: 0.4087. Loss Train: 0.0267. Loss Val: 0.0351. Time: 2.0607
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.4969. Test R^2: 0.4329. Loss Train: 0.0260. Loss Val: 0.0359. Time: 3.0709
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5218. Test R^2: 0.4391. Loss Train: 0.0259. Loss Val: 0.0359. Time: 2.5126
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5194. Test R^2: 0.4123. Loss Train: 0.0260. Loss Val: 0.0363. Time: 3.0602
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.4955. Test R^2: 0.4243. Loss Train: 0.0262. Loss Val: 0.0354. Time: 2.8255
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.4839. Test R^2: 0.4214. Loss Train: 0.0262. Loss Val: 0.0357. Time: 2.8350
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.5136. Test R^2: 0.4427. Loss Train: 0.0262. Loss Val: 0.0361. Time: 2.8567
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.5287. Test R^2: 0.4405. Loss Train: 0.0266. Loss Val: 0.0369. Time: 2.5230
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5248. Test R^2: 0.4311. Loss Train: 0.0257. Loss Val: 0.0376. Time: 4.1712
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5220. Test R^2: 0.4311. Loss Train: 0.0256. Loss Val: 0.0356. Time: 2.7101
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5235. Test R^2: 0.4212. Loss Train: 0.0260. Loss Val: 0.0357. Time: 2.4286
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.3678. Test R^2: 0.4150. Loss Train: 0.0262. Loss Val: 0.0366. Time: 2.4606
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.5088. Test R^2: 0.4462. Loss Train: 0.0266. Loss Val: 0.0349. Time: 2.1945
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5135. Test R^2: 0.4145. Loss Train: 0.0271. Loss Val: 0.0354. Time: 2.5988
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5219. Test R^2: 0.3990. Loss Train: 0.0256. Loss Val: 0.0356. Time: 2.5520
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.4989. Test R^2: 0.4453. Loss Train: 0.0254. Loss Val: 0.0372. Time: 2.2263
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.5259. Test R^2: 0.4489. Loss Train: 0.0256. Loss Val: 0.0375. Time: 2.3282
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5307. Test R^2: 0.4354. Loss Train: 0.0256. Loss Val: 0.0354. Time: 2.7453
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.5182. Test R^2: 0.3862. Loss Train: 0.0257. Loss Val: 0.0357. Time: 2.2663
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5016. Test R^2: 0.4173. Loss Train: 0.0256. Loss Val: 0.0354. Time: 2.8625
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.5244. Test R^2: 0.4451. Loss Train: 0.0257. Loss Val: 0.0353. Time: 2.4082
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5214. Test R^2: 0.4493. Loss Train: 0.0267. Loss Val: 0.0348. Time: 2.2367
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5235. Test R^2: 0.4562. Loss Train: 0.0255. Loss Val: 0.0357. Time: 2.6790
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.4598. Test R^2: 0.4491. Loss Train: 0.0255. Loss Val: 0.0346. Time: 2.3281
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5301. Test R^2: 0.4335. Loss Train: 0.0252. Loss Val: 0.0359. Time: 2.3309
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.5316. Test R^2: 0.4573. Loss Train: 0.0256. Loss Val: 0.0364. Time: 2.3654
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.4818. Test R^2: 0.4548. Loss Train: 0.0255. Loss Val: 0.0356. Time: 2.9039
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5159. Test R^2: 0.4394. Loss Train: 0.0256. Loss Val: 0.0348. Time: 2.6864
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5061. Test R^2: 0.4538. Loss Train: 0.0254. Loss Val: 0.0348. Time: 2.3089
[Figure: training/validation loss and R^2 curves]
saving  ./plots/lr0.001FFNet_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedForwardNet(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (fc2): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.001
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: -0.1650. Test R^2: 0.1370. Loss Train: 0.1189. Loss Val: 0.0560. Time: 2.2674
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.3564. Test R^2: 0.3440. Loss Train: 0.0348. Loss Val: 0.0429. Time: 3.4910
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4045. Test R^2: 0.3636. Loss Train: 0.0316. Loss Val: 0.0414. Time: 2.2355
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4447. Test R^2: 0.3739. Loss Train: 0.0294. Loss Val: 0.0425. Time: 2.1547
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.4718. Test R^2: 0.3802. Loss Train: 0.0298. Loss Val: 0.0385. Time: 2.1527
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.4640. Test R^2: 0.4105. Loss Train: 0.0283. Loss Val: 0.0374. Time: 2.1591
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.4905. Test R^2: 0.4069. Loss Train: 0.0281. Loss Val: 0.0376. Time: 2.1030
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.4900. Test R^2: 0.3919. Loss Train: 0.0270. Loss Val: 0.0376. Time: 2.2291
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.4953. Test R^2: 0.4300. Loss Train: 0.0271. Loss Val: 0.0387. Time: 2.1138
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.4747. Test R^2: 0.4137. Loss Train: 0.0270. Loss Val: 0.0377. Time: 2.2466
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5014. Test R^2: 0.4071. Loss Train: 0.0268. Loss Val: 0.0369. Time: 2.1457
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5030. Test R^2: 0.4142. Loss Train: 0.0267. Loss Val: 0.0379. Time: 2.0818
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.4969. Test R^2: 0.4068. Loss Train: 0.0269. Loss Val: 0.0379. Time: 2.1583
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5030. Test R^2: 0.4296. Loss Train: 0.0266. Loss Val: 0.0364. Time: 2.1612
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.5082. Test R^2: 0.4010. Loss Train: 0.0265. Loss Val: 0.0361. Time: 2.1903
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.5041. Test R^2: 0.4510. Loss Train: 0.0260. Loss Val: 0.0359. Time: 2.0836
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5151. Test R^2: 0.4335. Loss Train: 0.0259. Loss Val: 0.0389. Time: 2.3125
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.4959. Test R^2: 0.4220. Loss Train: 0.0262. Loss Val: 0.0364. Time: 2.1887
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.4698. Test R^2: 0.4247. Loss Train: 0.0261. Loss Val: 0.0366. Time: 2.2226
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.5144. Test R^2: 0.3948. Loss Train: 0.0267. Loss Val: 0.0356. Time: 2.1010
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5189. Test R^2: 0.4354. Loss Train: 0.0274. Loss Val: 0.0359. Time: 2.1634
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5187. Test R^2: 0.4082. Loss Train: 0.0261. Loss Val: 0.0355. Time: 2.1772
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.4928. Test R^2: 0.4425. Loss Train: 0.0259. Loss Val: 0.0356. Time: 2.2574
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5200. Test R^2: 0.4205. Loss Train: 0.0258. Loss Val: 0.0364. Time: 2.3135
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5169. Test R^2: 0.3820. Loss Train: 0.0261. Loss Val: 0.0363. Time: 2.1045
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5190. Test R^2: 0.4478. Loss Train: 0.0257. Loss Val: 0.0361. Time: 2.0785
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.5122. Test R^2: 0.4269. Loss Train: 0.0259. Loss Val: 0.0356. Time: 2.0899
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.5235. Test R^2: 0.4715. Loss Train: 0.0257. Loss Val: 0.0351. Time: 2.3125
 EPOCH 280. Progress: 56.0%.
 Training R^2: 0.4812. Test R^2: 0.4334. Loss Train: 0.0256. Loss Val: 0.0361. Time: 2.3766
 EPOCH 290. Progress: 58.0%.
 Training R^2: 0.5019. Test R^2: 0.4556. Loss Train: 0.0261. Loss Val: 0.0365. Time: 2.0649
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5156. Test R^2: 0.4563. Loss Train: 0.0265. Loss Val: 0.0371. Time: 2.3816
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5344. Test R^2: 0.4310. Loss Train: 0.0258. Loss Val: 0.0349. Time: 2.0712
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5257. Test R^2: 0.4413. Loss Train: 0.0255. Loss Val: 0.0360. Time: 2.1857
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.4794. Test R^2: 0.4378. Loss Train: 0.0252. Loss Val: 0.0364. Time: 2.0993
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.5277. Test R^2: 0.4453. Loss Train: 0.0253. Loss Val: 0.0353. Time: 2.1207
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5116. Test R^2: 0.4509. Loss Train: 0.0257. Loss Val: 0.0359. Time: 2.1579
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5204. Test R^2: 0.4529. Loss Train: 0.0254. Loss Val: 0.0349. Time: 2.4573
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.5156. Test R^2: 0.4320. Loss Train: 0.0254. Loss Val: 0.0348. Time: 4.0994
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.5010. Test R^2: 0.4440. Loss Train: 0.0265. Loss Val: 0.0355. Time: 2.2201
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.4996. Test R^2: 0.4569. Loss Train: 0.0257. Loss Val: 0.0349. Time: 2.3245
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.5232. Test R^2: 0.4469. Loss Train: 0.0256. Loss Val: 0.0355. Time: 2.3519
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5372. Test R^2: 0.4467. Loss Train: 0.0253. Loss Val: 0.0348. Time: 2.4484
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.5324. Test R^2: 0.4684. Loss Train: 0.0257. Loss Val: 0.0352. Time: 2.1310
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.4772. Test R^2: 0.4459. Loss Train: 0.0254. Loss Val: 0.0357. Time: 2.2095
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5210. Test R^2: 0.4543. Loss Train: 0.0256. Loss Val: 0.0343. Time: 2.1805
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5117. Test R^2: 0.4440. Loss Train: 0.0253. Loss Val: 0.0365. Time: 2.0739
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5289. Test R^2: 0.4707. Loss Train: 0.0260. Loss Val: 0.0354. Time: 2.1470
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.5273. Test R^2: 0.4631. Loss Train: 0.0259. Loss Val: 0.0358. Time: 2.1357
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.5195. Test R^2: 0.4381. Loss Train: 0.0256. Loss Val: 0.0358. Time: 2.1766
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5120. Test R^2: 0.4202. Loss Train: 0.0254. Loss Val: 0.0348. Time: 2.1652
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5386. Test R^2: 0.4514. Loss Train: 0.0254. Loss Val: 0.0345. Time: 2.0892
../_images/CASESTUDY_NN-day1_18_3.png
saving  ./plots/lr0.001FFNet_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
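The two configurations above stack two Linear layers with nothing in between (the model printout lists only fc1 and fc2), so the network can only express an affine map of the 47 predictors, and the test R^2 plateaus around 0.45. The next cell registers an explicit nn.ReLU between the layers (hence the _ReLU tag in the saved plot names) and repeats the learning-rate/hidden-size sweep. ReLU is simply the elementwise max(0, x); a quick sanity check:

import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(torch.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 1.5000])

Clipping negative pre-activations to zero is what lets the hidden layer represent piecewise-linear, rather than purely linear, relationships between the predictors and the nitrogen concentration.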
[23]:
# A regular feed-forward net, this time with an explicit ReLU non-linearity
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # Linear function
        self.fc1 = nn.Linear(in_dim, hid_dim)
        # Non-linearity
        self.relu = nn.ReLU()
        # Linear function (readout)
        self.fc2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        # Linear function
        out = self.fc1(x)
        # Non-linearity
        out = self.relu(out)
        # Linear function (readout)
        out = self.fc2(out)
        return out

# Grid search: learning rate x hidden dimension (other SGD settings held fixed)
lr_range = [0.1,0.01]
hid_dim_range = [64,128]
weight_decay_range = [0.001]
momentum_range = [0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    for hid_dim in hid_dim_range:
                        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                        model = FeedforwardNeuralNetModel(in_dim=47, hid_dim=hid_dim, out_dim=1).to(device)
                        print(model)
                        SGD = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
                        # MSE loss for the single regression target (scaled mean concentration)
                        loss_fn = nn.MSELoss()
                        config_str = 'lr' + str(lr) + 'FFNet_ReLU_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) + '_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
                        train(model, loss_fn, SGD, dataloaders['train'], dataloaders['val'], config_str, num_epochs=500, verbose=False)

lr: 0.1, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.1
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.3646. Test R^2: 0.3192. Loss Train: 0.0818. Loss Val: 0.0423. Time: 2.5027
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.1156. Test R^2: 0.0489. Loss Train: 0.0308. Loss Val: 0.0594. Time: 2.2376
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5008. Test R^2: 0.3898. Loss Train: 0.0249. Loss Val: 0.0345. Time: 2.1767
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4401. Test R^2: 0.4661. Loss Train: 0.0269. Loss Val: 0.0349. Time: 2.1309
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.3220. Test R^2: 0.4064. Loss Train: 0.0248. Loss Val: 0.0372. Time: 2.3498
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.5217. Test R^2: 0.4487. Loss Train: 0.0277. Loss Val: 0.0333. Time: 2.7613
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.4267. Test R^2: 0.3631. Loss Train: 0.0303. Loss Val: 0.0404. Time: 2.1412
 EPOCH 70. Progress: 14.0%.
 Training R^2: 0.5459. Test R^2: 0.4925. Loss Train: 0.0289. Loss Val: 0.0316. Time: 2.1796
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.5843. Test R^2: 0.5128. Loss Train: 0.0227. Loss Val: 0.0298. Time: 2.2113
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5063. Test R^2: 0.4281. Loss Train: 0.0249. Loss Val: 0.0350. Time: 2.5134
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5870. Test R^2: 0.5082. Loss Train: 0.0230. Loss Val: 0.0314. Time: 2.3755
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5838. Test R^2: 0.5122. Loss Train: 0.0248. Loss Val: 0.0296. Time: 2.1351
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.5589. Test R^2: 0.4741. Loss Train: 0.0267. Loss Val: 0.0315. Time: 2.1909
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5485. Test R^2: 0.4603. Loss Train: 0.0239. Loss Val: 0.0342. Time: 2.1711
 EPOCH 140. Progress: 28.0%.
 Training R^2: 0.5472. Test R^2: 0.5175. Loss Train: 0.0244. Loss Val: 0.0301. Time: 2.1524
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.5417. Test R^2: 0.5201. Loss Train: 0.0276. Loss Val: 0.0314. Time: 2.1584
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5917. Test R^2: 0.5359. Loss Train: 0.0244. Loss Val: 0.0289. Time: 2.3367
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5555. Test R^2: 0.4634. Loss Train: 0.0261. Loss Val: 0.0346. Time: 2.0601
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.4819. Test R^2: 0.3625. Loss Train: 0.0251. Loss Val: 0.0363. Time: 2.1442
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.4647. Test R^2: 0.3741. Loss Train: 0.0245. Loss Val: 0.0385. Time: 2.0878
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5655. Test R^2: 0.4926. Loss Train: 0.0234. Loss Val: 0.0311. Time: 2.3180
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5308. Test R^2: 0.4326. Loss Train: 0.0240. Loss Val: 0.0345. Time: 2.2221
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5925. Test R^2: 0.5164. Loss Train: 0.0253. Loss Val: 0.0316. Time: 2.1547
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5802. Test R^2: 0.4898. Loss Train: 0.0255. Loss Val: 0.0318. Time: 2.9204
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5904. Test R^2: 0.5499. Loss Train: 0.0236. Loss Val: 0.0290. Time: 2.1554
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5527. Test R^2: 0.5071. Loss Train: 0.0262. Loss Val: 0.0305. Time: 2.1597
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.5447. Test R^2: 0.4810. Loss Train: 0.0255. Loss Val: 0.0335. Time: 2.1949
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.5747. Test R^2: 0.4692. Loss Train: 0.0230. Loss Val: 0.0324. Time: 2.1299
 EPOCH 280. Progress: 56.0%.
 Training R^2: 0.5909. Test R^2: 0.5150. Loss Train: 0.0233. Loss Val: 0.0316. Time: 2.0958
 EPOCH 290. Progress: 58.0%.
 Training R^2: 0.5533. Test R^2: 0.4990. Loss Train: 0.0307. Loss Val: 0.0322. Time: 2.2280
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5911. Test R^2: 0.5194. Loss Train: 0.0244. Loss Val: 0.0296. Time: 2.0876
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5781. Test R^2: 0.5346. Loss Train: 0.0246. Loss Val: 0.0312. Time: 2.1579
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5869. Test R^2: 0.4697. Loss Train: 0.0293. Loss Val: 0.0308. Time: 2.1631
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.5359. Test R^2: 0.5073. Loss Train: 0.0249. Loss Val: 0.0322. Time: 2.1336
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.3939. Test R^2: 0.3366. Loss Train: 0.0255. Loss Val: 0.0414. Time: 2.5117
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5605. Test R^2: 0.5145. Loss Train: 0.0245. Loss Val: 0.0322. Time: 2.1713
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5575. Test R^2: 0.4871. Loss Train: 0.0250. Loss Val: 0.0341. Time: 2.1935
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.5218. Test R^2: 0.4586. Loss Train: 0.0244. Loss Val: 0.0341. Time: 2.1901
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.3631. Test R^2: 0.2953. Loss Train: 0.0306. Loss Val: 0.0404. Time: 2.3216
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5393. Test R^2: 0.4570. Loss Train: 0.0252. Loss Val: 0.0342. Time: 2.1549
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.5360. Test R^2: 0.4534. Loss Train: 0.0275. Loss Val: 0.0353. Time: 2.1271
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5757. Test R^2: 0.4784. Loss Train: 0.0249. Loss Val: 0.0299. Time: 2.1469
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.5373. Test R^2: 0.5167. Loss Train: 0.0230. Loss Val: 0.0320. Time: 2.0999
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5818. Test R^2: 0.4904. Loss Train: 0.0247. Loss Val: 0.0332. Time: 2.1413
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.4773. Test R^2: 0.4302. Loss Train: 0.0263. Loss Val: 0.0355. Time: 2.1924
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.4953. Test R^2: 0.4957. Loss Train: 0.0241. Loss Val: 0.0323. Time: 2.0620
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5256. Test R^2: 0.5028. Loss Train: 0.0247. Loss Val: 0.0302. Time: 2.1600
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.3495. Test R^2: 0.4748. Loss Train: 0.0278. Loss Val: 0.0320. Time: 2.2276
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.4896. Test R^2: 0.4707. Loss Train: 0.0258. Loss Val: 0.0345. Time: 2.3685
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.6014. Test R^2: 0.5493. Loss Train: 0.0254. Loss Val: 0.0291. Time: 2.1611
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5324. Test R^2: 0.5026. Loss Train: 0.0244. Loss Val: 0.0318. Time: 2.2871
../_images/CASESTUDY_NN-day1_19_1.png
saving  ./plots/lr0.1FFNet_ReLU_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png

lr: 0.1, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.1
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2283. Test R^2: 0.2728. Loss Train: 0.0909. Loss Val: 0.0450. Time: 2.7173
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.5042. Test R^2: 0.3939. Loss Train: 0.0308. Loss Val: 0.0374. Time: 2.1023
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4693. Test R^2: 0.4194. Loss Train: 0.0277. Loss Val: 0.0383. Time: 2.0253
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5159. Test R^2: 0.4670. Loss Train: 0.0281. Loss Val: 0.0342. Time: 1.9902
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5297. Test R^2: 0.4673. Loss Train: 0.0303. Loss Val: 0.0332. Time: 2.0638
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.4762. Test R^2: 0.4752. Loss Train: 0.0288. Loss Val: 0.0344. Time: 2.0154
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5820. Test R^2: 0.4874. Loss Train: 0.0242. Loss Val: 0.0311. Time: 1.9918
 EPOCH 70. Progress: 14.0%.
 Training R^2: 0.5141. Test R^2: 0.5016. Loss Train: 0.0249. Loss Val: 0.0318. Time: 2.0654
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.5299. Test R^2: 0.4122. Loss Train: 0.0248. Loss Val: 0.0366. Time: 2.0807
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5084. Test R^2: 0.4630. Loss Train: 0.0238. Loss Val: 0.0351. Time: 2.0694
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5886. Test R^2: 0.5127. Loss Train: 0.0267. Loss Val: 0.0292. Time: 2.3217
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5817. Test R^2: 0.5371. Loss Train: 0.0245. Loss Val: 0.0301. Time: 1.9418
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.4942. Test R^2: 0.3745. Loss Train: 0.0241. Loss Val: 0.0372. Time: 2.0107
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5857. Test R^2: 0.5444. Loss Train: 0.0229. Loss Val: 0.0286. Time: 2.0223
 EPOCH 140. Progress: 28.0%.
 Training R^2: 0.3522. Test R^2: 0.2840. Loss Train: 0.0253. Loss Val: 0.0461. Time: 2.0770
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.6004. Test R^2: 0.5261. Loss Train: 0.0258. Loss Val: 0.0298. Time: 2.6616
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5011. Test R^2: 0.4417. Loss Train: 0.0314. Loss Val: 0.0363. Time: 2.1031
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5446. Test R^2: 0.4964. Loss Train: 0.0241. Loss Val: 0.0298. Time: 2.7993
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.5949. Test R^2: 0.5359. Loss Train: 0.0219. Loss Val: 0.0297. Time: 1.7726
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.5681. Test R^2: 0.5055. Loss Train: 0.0234. Loss Val: 0.0325. Time: 2.2046
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5713. Test R^2: 0.4666. Loss Train: 0.0258. Loss Val: 0.0318. Time: 2.1275
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.4814. Test R^2: 0.4655. Loss Train: 0.0252. Loss Val: 0.0334. Time: 2.6702
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5877. Test R^2: 0.5165. Loss Train: 0.0226. Loss Val: 0.0284. Time: 2.3608
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.2601. Test R^2: 0.2126. Loss Train: 0.0261. Loss Val: 0.0508. Time: 2.8814
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.3508. Test R^2: 0.3333. Loss Train: 0.0272. Loss Val: 0.0426. Time: 2.2305
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5467. Test R^2: 0.5143. Loss Train: 0.0257. Loss Val: 0.0296. Time: 2.4325
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.4763. Test R^2: 0.4690. Loss Train: 0.0284. Loss Val: 0.0351. Time: 2.6164
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.5053. Test R^2: 0.4363. Loss Train: 0.0280. Loss Val: 0.0361. Time: 2.3808
 EPOCH 280. Progress: 56.0%.
 Training R^2: 0.5457. Test R^2: 0.5389. Loss Train: 0.0236. Loss Val: 0.0302. Time: 2.1011
 EPOCH 290. Progress: 58.0%.
 Training R^2: 0.5960. Test R^2: 0.5262. Loss Train: 0.0260. Loss Val: 0.0295. Time: 2.2730
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5312. Test R^2: 0.5019. Loss Train: 0.0225. Loss Val: 0.0328. Time: 2.2471
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5914. Test R^2: 0.5194. Loss Train: 0.0227. Loss Val: 0.0310. Time: 2.1525
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.4003. Test R^2: 0.3783. Loss Train: 0.0240. Loss Val: 0.0390. Time: 2.0833
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.6086. Test R^2: 0.5362. Loss Train: 0.0260. Loss Val: 0.0285. Time: 1.9955
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.4901. Test R^2: 0.4330. Loss Train: 0.0242. Loss Val: 0.0356. Time: 2.1022
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5079. Test R^2: 0.4892. Loss Train: 0.0285. Loss Val: 0.0334. Time: 2.0113
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5812. Test R^2: 0.5080. Loss Train: 0.0222. Loss Val: 0.0305. Time: 2.2167
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.4940. Test R^2: 0.4996. Loss Train: 0.0241. Loss Val: 0.0316. Time: 2.0167
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.5079. Test R^2: 0.4638. Loss Train: 0.0228. Loss Val: 0.0327. Time: 2.1083
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5362. Test R^2: 0.4818. Loss Train: 0.0228. Loss Val: 0.0343. Time: 2.0472
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.3227. Test R^2: 0.2568. Loss Train: 0.0262. Loss Val: 0.0475. Time: 2.1222
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5846. Test R^2: 0.5075. Loss Train: 0.0231. Loss Val: 0.0304. Time: 2.0978
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.5472. Test R^2: 0.4843. Loss Train: 0.0258. Loss Val: 0.0327. Time: 2.0564
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.6086. Test R^2: 0.5734. Loss Train: 0.0248. Loss Val: 0.0289. Time: 2.3157
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5590. Test R^2: 0.5247. Loss Train: 0.0233. Loss Val: 0.0302. Time: 2.2981
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5330. Test R^2: 0.4452. Loss Train: 0.0237. Loss Val: 0.0330. Time: 2.2508
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5879. Test R^2: 0.5059. Loss Train: 0.0243. Loss Val: 0.0320. Time: 2.3504
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.5669. Test R^2: 0.4830. Loss Train: 0.0298. Loss Val: 0.0332. Time: 2.4113
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.5283. Test R^2: 0.4607. Loss Train: 0.0257. Loss Val: 0.0355. Time: 2.3665
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5325. Test R^2: 0.4850. Loss Train: 0.0241. Loss Val: 0.0329. Time: 2.3898
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5753. Test R^2: 0.5075. Loss Train: 0.0240. Loss Val: 0.0324. Time: 2.6969
../_images/CASESTUDY_NN-day1_19_3.png
saving  ./plots/lr0.1FFNet_ReLU_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2071. Test R^2: 0.1615. Loss Train: 0.0837. Loss Val: 0.0541. Time: 5.2521
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4945. Test R^2: 0.4320. Loss Train: 0.0276. Loss Val: 0.0365. Time: 2.5691
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4887. Test R^2: 0.3529. Loss Train: 0.0265. Loss Val: 0.0354. Time: 2.6201
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4845. Test R^2: 0.3973. Loss Train: 0.0252. Loss Val: 0.0370. Time: 3.0503
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5197. Test R^2: 0.4612. Loss Train: 0.0244. Loss Val: 0.0340. Time: 2.6444
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.5593. Test R^2: 0.4810. Loss Train: 0.0240. Loss Val: 0.0317. Time: 2.2123
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5602. Test R^2: 0.5208. Loss Train: 0.0232. Loss Val: 0.0313. Time: 2.2148
 EPOCH 70. Progress: 14.0%.
 Training R^2: 0.5714. Test R^2: 0.5037. Loss Train: 0.0232. Loss Val: 0.0301. Time: 2.3758
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.5782. Test R^2: 0.4720. Loss Train: 0.0219. Loss Val: 0.0300. Time: 2.3277
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5353. Test R^2: 0.5136. Loss Train: 0.0246. Loss Val: 0.0319. Time: 2.3695
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5828. Test R^2: 0.5317. Loss Train: 0.0217. Loss Val: 0.0282. Time: 2.6416
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5944. Test R^2: 0.5701. Loss Train: 0.0214. Loss Val: 0.0278. Time: 2.5943
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.5940. Test R^2: 0.5253. Loss Train: 0.0215. Loss Val: 0.0288. Time: 2.2874
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.6363. Test R^2: 0.5689. Loss Train: 0.0219. Loss Val: 0.0269. Time: 2.2321
 EPOCH 140. Progress: 28.0%.
 Training R^2: 0.6164. Test R^2: 0.5844. Loss Train: 0.0205. Loss Val: 0.0286. Time: 2.7046
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.6418. Test R^2: 0.5830. Loss Train: 0.0201. Loss Val: 0.0257. Time: 2.2595
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.6178. Test R^2: 0.5370. Loss Train: 0.0212. Loss Val: 0.0260. Time: 2.1360
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.6210. Test R^2: 0.6071. Loss Train: 0.0195. Loss Val: 0.0257. Time: 2.0929
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.6152. Test R^2: 0.5949. Loss Train: 0.0199. Loss Val: 0.0257. Time: 2.3402
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.6336. Test R^2: 0.6208. Loss Train: 0.0190. Loss Val: 0.0240. Time: 4.4691
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.6342. Test R^2: 0.6176. Loss Train: 0.0185. Loss Val: 0.0250. Time: 2.1974
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.6577. Test R^2: 0.6409. Loss Train: 0.0188. Loss Val: 0.0253. Time: 2.3948
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.6419. Test R^2: 0.5804. Loss Train: 0.0188. Loss Val: 0.0257. Time: 2.3758
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.6658. Test R^2: 0.6293. Loss Train: 0.0184. Loss Val: 0.0236. Time: 2.4037
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.6620. Test R^2: 0.6130. Loss Train: 0.0185. Loss Val: 0.0243. Time: 3.1687
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.6337. Test R^2: 0.6433. Loss Train: 0.0186. Loss Val: 0.0234. Time: 2.2961
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.6647. Test R^2: 0.6205. Loss Train: 0.0184. Loss Val: 0.0239. Time: 2.2908
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.6879. Test R^2: 0.6262. Loss Train: 0.0177. Loss Val: 0.0229. Time: 2.6227
 EPOCH 280. Progress: 56.0%.
 Training R^2: 0.6833. Test R^2: 0.6393. Loss Train: 0.0173. Loss Val: 0.0225. Time: 2.2992
 EPOCH 290. Progress: 58.0%.
 Training R^2: 0.6869. Test R^2: 0.6364. Loss Train: 0.0177. Loss Val: 0.0223. Time: 3.4453
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.6897. Test R^2: 0.6678. Loss Train: 0.0171. Loss Val: 0.0216. Time: 2.4738
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.6930. Test R^2: 0.6566. Loss Train: 0.0175. Loss Val: 0.0214. Time: 2.7012
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.6923. Test R^2: 0.6470. Loss Train: 0.0167. Loss Val: 0.0232. Time: 2.5880
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.6725. Test R^2: 0.6499. Loss Train: 0.0167. Loss Val: 0.0215. Time: 2.1894
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.6971. Test R^2: 0.6649. Loss Train: 0.0177. Loss Val: 0.0206. Time: 2.2053
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.6934. Test R^2: 0.5029. Loss Train: 0.0160. Loss Val: 0.0212. Time: 2.2440
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.6842. Test R^2: 0.6561. Loss Train: 0.0169. Loss Val: 0.0217. Time: 3.0857
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.6782. Test R^2: 0.6649. Loss Train: 0.0165. Loss Val: 0.0225. Time: 3.0184
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.6628. Test R^2: 0.6578. Loss Train: 0.0167. Loss Val: 0.0231. Time: 2.4221
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.6963. Test R^2: 0.7008. Loss Train: 0.0161. Loss Val: 0.0207. Time: 2.3058
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.7049. Test R^2: 0.6829. Loss Train: 0.0167. Loss Val: 0.0199. Time: 2.3138
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.6270. Test R^2: 0.6322. Loss Train: 0.0178. Loss Val: 0.0232. Time: 2.2893
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.7030. Test R^2: 0.6925. Loss Train: 0.0163. Loss Val: 0.0204. Time: 2.3054
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.7005. Test R^2: 0.6811. Loss Train: 0.0164. Loss Val: 0.0197. Time: 2.3668
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.6835. Test R^2: 0.6851. Loss Train: 0.0155. Loss Val: 0.0216. Time: 2.5748
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.6968. Test R^2: 0.6856. Loss Train: 0.0159. Loss Val: 0.0204. Time: 2.3139
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.6903. Test R^2: 0.5642. Loss Train: 0.0160. Loss Val: 0.0213. Time: 2.2897
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.7064. Test R^2: 0.6803. Loss Train: 0.0167. Loss Val: 0.0207. Time: 2.3248
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.7177. Test R^2: 0.6822. Loss Train: 0.0152. Loss Val: 0.0198. Time: 2.3316
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.7157. Test R^2: 0.6994. Loss Train: 0.0158. Loss Val: 0.0194. Time: 2.3627
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.6759. Test R^2: 0.6570. Loss Train: 0.0158. Loss Val: 0.0204. Time: 2.3460
../_images/CASESTUDY_NN-day1_19_5.png
saving  ./plots/lr0.01FFNet_ReLU_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.1922. Test R^2: 0.1661. Loss Train: 0.0977. Loss Val: 0.0542. Time: 2.8262
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4624. Test R^2: 0.3758. Loss Train: 0.0272. Loss Val: 0.0403. Time: 2.8381
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5202. Test R^2: 0.4412. Loss Train: 0.0261. Loss Val: 0.0368. Time: 2.4058
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5272. Test R^2: 0.4667. Loss Train: 0.0244. Loss Val: 0.0363. Time: 2.6949
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5523. Test R^2: 0.4698. Loss Train: 0.0242. Loss Val: 0.0325. Time: 2.5087
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.5638. Test R^2: 0.5104. Loss Train: 0.0232. Loss Val: 0.0314. Time: 2.4473
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5776. Test R^2: 0.5289. Loss Train: 0.0226. Loss Val: 0.0305. Time: 2.5074
 EPOCH 70. Progress: 14.0%.
 Training R^2: 0.5527. Test R^2: 0.5220. Loss Train: 0.0225. Loss Val: 0.0308. Time: 2.5925
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.5839. Test R^2: 0.5442. Loss Train: 0.0217. Loss Val: 0.0308. Time: 2.3873
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5832. Test R^2: 0.5532. Loss Train: 0.0213. Loss Val: 0.0297. Time: 2.4564
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.6024. Test R^2: 0.5454. Loss Train: 0.0209. Loss Val: 0.0282. Time: 2.4184
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.6247. Test R^2: 0.5949. Loss Train: 0.0203. Loss Val: 0.0266. Time: 2.6069
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.6297. Test R^2: 0.6127. Loss Train: 0.0206. Loss Val: 0.0257. Time: 2.3823
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.6306. Test R^2: 0.5292. Loss Train: 0.0198. Loss Val: 0.0265. Time: 2.3792
 EPOCH 140. Progress: 28.0%.
 Training R^2: 0.6307. Test R^2: 0.6146. Loss Train: 0.0198. Loss Val: 0.0251. Time: 2.5422
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.6490. Test R^2: 0.5916. Loss Train: 0.0199. Loss Val: 0.0247. Time: 2.3709
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.6454. Test R^2: 0.6160. Loss Train: 0.0194. Loss Val: 0.0242. Time: 2.4208
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.6512. Test R^2: 0.6117. Loss Train: 0.0186. Loss Val: 0.0237. Time: 2.4226
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.6708. Test R^2: 0.5843. Loss Train: 0.0191. Loss Val: 0.0235. Time: 2.4393
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.6683. Test R^2: 0.6493. Loss Train: 0.0180. Loss Val: 0.0238. Time: 2.4745
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.6799. Test R^2: 0.6506. Loss Train: 0.0183. Loss Val: 0.0223. Time: 2.3793
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.6681. Test R^2: 0.6399. Loss Train: 0.0174. Loss Val: 0.0240. Time: 3.3691
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.6829. Test R^2: 0.6597. Loss Train: 0.0184. Loss Val: 0.0222. Time: 2.5731
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.6830. Test R^2: 0.6617. Loss Train: 0.0171. Loss Val: 0.0224. Time: 2.4132
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.6737. Test R^2: 0.6520. Loss Train: 0.0175. Loss Val: 0.0217. Time: 2.5320
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.6877. Test R^2: 0.6565. Loss Train: 0.0176. Loss Val: 0.0219. Time: 2.4302
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.6386. Test R^2: 0.6378. Loss Train: 0.0181. Loss Val: 0.0236. Time: 2.3277
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.6866. Test R^2: 0.6669. Loss Train: 0.0183. Loss Val: 0.0214. Time: 2.7861
 EPOCH 280. Progress: 56.0%.
 Training R^2: 0.6795. Test R^2: 0.6910. Loss Train: 0.0170. Loss Val: 0.0204. Time: 2.8787
 EPOCH 290. Progress: 58.0%.
 Training R^2: 0.6791. Test R^2: 0.6636. Loss Train: 0.0166. Loss Val: 0.0210. Time: 2.2770
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.6888. Test R^2: 0.6586. Loss Train: 0.0170. Loss Val: 0.0237. Time: 2.6371
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.6771. Test R^2: 0.6642. Loss Train: 0.0163. Loss Val: 0.0210. Time: 2.5277
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.6739. Test R^2: 0.6389. Loss Train: 0.0168. Loss Val: 0.0213. Time: 2.4118
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.7075. Test R^2: 0.6793. Loss Train: 0.0158. Loss Val: 0.0203. Time: 2.4315
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.7008. Test R^2: 0.6798. Loss Train: 0.0162. Loss Val: 0.0216. Time: 2.4063
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.6858. Test R^2: 0.6689. Loss Train: 0.0164. Loss Val: 0.0205. Time: 2.3764
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.6970. Test R^2: 0.6683. Loss Train: 0.0170. Loss Val: 0.0207. Time: 2.3104
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.7051. Test R^2: 0.6635. Loss Train: 0.0159. Loss Val: 0.0201. Time: 2.3505
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.6774. Test R^2: 0.6889. Loss Train: 0.0161. Loss Val: 0.0201. Time: 2.3744
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.6908. Test R^2: 0.6898. Loss Train: 0.0166. Loss Val: 0.0200. Time: 2.3658
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.7163. Test R^2: 0.7087. Loss Train: 0.0154. Loss Val: 0.0199. Time: 2.5999
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.7052. Test R^2: 0.7244. Loss Train: 0.0159. Loss Val: 0.0178. Time: 2.3856
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.7283. Test R^2: 0.7120. Loss Train: 0.0160. Loss Val: 0.0193. Time: 2.5393
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.7103. Test R^2: 0.6982. Loss Train: 0.0155. Loss Val: 0.0189. Time: 3.6950
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.6793. Test R^2: 0.7085. Loss Train: 0.0156. Loss Val: 0.0189. Time: 2.3910
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.7306. Test R^2: 0.7129. Loss Train: 0.0157. Loss Val: 0.0200. Time: 2.4181
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.5886. Test R^2: 0.5491. Loss Train: 0.0151. Loss Val: 0.0269. Time: 2.9231
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.7126. Test R^2: 0.7135. Loss Train: 0.0152. Loss Val: 0.0182. Time: 2.1260
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.6784. Test R^2: 0.6746. Loss Train: 0.0156. Loss Val: 0.0200. Time: 2.0990
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.7294. Test R^2: 0.6995. Loss Train: 0.0157. Loss Val: 0.0182. Time: 2.8657
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.6277. Test R^2: 0.6065. Loss Train: 0.0169. Loss Val: 0.0234. Time: 2.5456
../_images/CASESTUDY_NN-day1_19_7.png
saving  ./plots/lr0.01FFNet_ReLU_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
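In this sweep the learning rate matters far more than the hidden size: with lr=0.01 both widths climb steadily to a test R^2 around 0.6-0.7 by epoch 500 and are still improving, while lr=0.1 swings widely (between roughly 0.05 and 0.57) without settling. The next cell therefore re-trains the lr=0.01, hid_dim=64 configuration for 1000 epochs. When training longer it is usually worth keeping the weights from the best epoch rather than the last one; the train() helper used here saves plots, and if it does not also checkpoint weights, a pattern like the following can help (a minimal sketch; keep_best and the dummy R^2 values are hypothetical):

import copy

import torch
from torch import nn

def keep_best(model, val_r2, best):
    # remember the highest validation R^2 seen so far, plus a copy of the weights
    if val_r2 > best.get('r2', float('-inf')):
        best['r2'] = val_r2
        best['state'] = copy.deepcopy(model.state_dict())
    return best

model = nn.Linear(47, 1)             # stand-in for FeedforwardNeuralNetModel
best = {}
for val_r2 in [0.40, 0.47, 0.44]:    # one value per evaluation epoch
    best = keep_best(model, val_r2, best)
model.load_state_dict(best['state'])         # restore the best weights
torch.save(best['state'], 'best_ffnet.pt')   # and persist them to disk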
[24]:
# Re-train the best configuration (lr=0.01, hid_dim=64) for twice as long
# (the FeedforwardNeuralNetModel class is unchanged; redefined here for convenience)
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # Linear function
        self.fc1 = nn.Linear(in_dim, hid_dim)
        # Non-linearity
        self.relu = nn.ReLU()
        # Linear function (readout)
        self.fc2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        # Linear function
        out = self.fc1(x)
        # Non-linearity
        out = self.relu(out)
        # Linear function (readout)
        out = self.fc2(out)
        return out

# Best configuration for longer
lr_range = [0.01]
hid_dim_range = [64]
weight_decay_range = [0.001]
momentum_range = [0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    for hid_dim in hid_dim_range:
                        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                        model = FeedforwardNeuralNetModel(in_dim=47, hid_dim=hid_dim, out_dim=1).to(device)
                        print(model)
                        SGD = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov)
                        # MSE loss for the regression target
                        loss_fn = nn.MSELoss()
                        config_str = 'LONGER_FFNet_ReLU_lr' + str(lr) + '_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) + '_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
                        train(model, loss_fn, SGD, dataloaders['train'], dataloaders['val'], config_str, num_epochs=1000, verbose=False)

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: SGD (
Parameter Group 0
    dampening: 0
    lr: 0.01
    momentum: 0.9
    nesterov: False
    weight_decay: 0.001
)
n. of epochs: 1000
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2232. Test R^2: 0.2320. Loss Train: 0.0805. Loss Val: 0.0466. Time: 2.9867
 EPOCH 10. Progress: 1.0%.
 Training R^2: 0.4549. Test R^2: 0.3706. Loss Train: 0.0290. Loss Val: 0.0387. Time: 3.3358
 EPOCH 20. Progress: 2.0%.
 Training R^2: 0.5057. Test R^2: 0.4227. Loss Train: 0.0263. Loss Val: 0.0358. Time: 2.7223
 EPOCH 30. Progress: 3.0%.
 Training R^2: 0.5079. Test R^2: 0.4562. Loss Train: 0.0252. Loss Val: 0.0335. Time: 2.2713
 EPOCH 40. Progress: 4.0%.
 Training R^2: 0.5576. Test R^2: 0.4683. Loss Train: 0.0244. Loss Val: 0.0334. Time: 2.2900
 EPOCH 50. Progress: 5.0%.
 Training R^2: 0.5486. Test R^2: 0.4982. Loss Train: 0.0242. Loss Val: 0.0314. Time: 2.3105
 EPOCH 60. Progress: 6.0%.
 Training R^2: 0.5516. Test R^2: 0.4947. Loss Train: 0.0228. Loss Val: 0.0319. Time: 2.3228
 EPOCH 70. Progress: 7.0%.
 Training R^2: 0.5863. Test R^2: 0.5308. Loss Train: 0.0225. Loss Val: 0.0325. Time: 2.7094
 EPOCH 80. Progress: 8.0%.
 Training R^2: 0.6004. Test R^2: 0.4622. Loss Train: 0.0220. Loss Val: 0.0288. Time: 3.0514
 EPOCH 90. Progress: 9.0%.
 Training R^2: 0.5927. Test R^2: 0.5153. Loss Train: 0.0225. Loss Val: 0.0295. Time: 2.8121
 EPOCH 100. Progress: 10.0%.
 Training R^2: 0.5910. Test R^2: 0.5321. Loss Train: 0.0228. Loss Val: 0.0294. Time: 2.1782
 EPOCH 110. Progress: 11.0%.
 Training R^2: 0.5820. Test R^2: 0.5354. Loss Train: 0.0217. Loss Val: 0.0299. Time: 3.8535
 EPOCH 120. Progress: 12.0%.
 Training R^2: 0.6145. Test R^2: 0.5886. Loss Train: 0.0210. Loss Val: 0.0273. Time: 2.5599
 EPOCH 130. Progress: 13.0%.
 Training R^2: 0.6297. Test R^2: 0.5751. Loss Train: 0.0206. Loss Val: 0.0264. Time: 3.8149
 EPOCH 140. Progress: 14.0%.
 Training R^2: 0.6195. Test R^2: 0.5630. Loss Train: 0.0208. Loss Val: 0.0277. Time: 4.3139
 EPOCH 150. Progress: 15.0%.
 Training R^2: 0.6359. Test R^2: 0.6050. Loss Train: 0.0202. Loss Val: 0.0265. Time: 2.9903
 EPOCH 160. Progress: 16.0%.
 Training R^2: 0.6253. Test R^2: 0.6161. Loss Train: 0.0199. Loss Val: 0.0258. Time: 5.2340
 EPOCH 170. Progress: 17.0%.
 Training R^2: 0.6169. Test R^2: 0.6163. Loss Train: 0.0199. Loss Val: 0.0255. Time: 2.9977
 EPOCH 180. Progress: 18.0%.
 Training R^2: 0.6311. Test R^2: 0.6027. Loss Train: 0.0202. Loss Val: 0.0256. Time: 2.3679
 EPOCH 190. Progress: 19.0%.
 Training R^2: 0.6460. Test R^2: 0.5935. Loss Train: 0.0188. Loss Val: 0.0256. Time: 2.5715
 EPOCH 200. Progress: 20.0%.
 Training R^2: 0.6415. Test R^2: 0.6116. Loss Train: 0.0191. Loss Val: 0.0239. Time: 2.8218
 EPOCH 210. Progress: 21.0%.
 Training R^2: 0.6490. Test R^2: 0.6230. Loss Train: 0.0189. Loss Val: 0.0238. Time: 2.2883
 EPOCH 220. Progress: 22.0%.
 Training R^2: 0.6347. Test R^2: 0.5862. Loss Train: 0.0189. Loss Val: 0.0255. Time: 2.2429
 EPOCH 230. Progress: 23.0%.
 Training R^2: 0.6424. Test R^2: 0.5863. Loss Train: 0.0186. Loss Val: 0.0251. Time: 2.4448
 EPOCH 240. Progress: 24.0%.
 Training R^2: 0.6649. Test R^2: 0.5864. Loss Train: 0.0184. Loss Val: 0.0243. Time: 2.5898
 EPOCH 250. Progress: 25.0%.
 Training R^2: 0.6604. Test R^2: 0.6103. Loss Train: 0.0186. Loss Val: 0.0249. Time: 2.4353
 EPOCH 260. Progress: 26.0%.
 Training R^2: 0.6760. Test R^2: 0.6581. Loss Train: 0.0186. Loss Val: 0.0222. Time: 2.2146
 EPOCH 270. Progress: 27.0%.
 Training R^2: 0.6397. Test R^2: 0.5926. Loss Train: 0.0180. Loss Val: 0.0258. Time: 2.3377
 EPOCH 280. Progress: 28.0%.
 Training R^2: 0.6660. Test R^2: 0.6523. Loss Train: 0.0195. Loss Val: 0.0246. Time: 2.7603
 EPOCH 290. Progress: 29.0%.
 Training R^2: 0.6841. Test R^2: 0.6416. Loss Train: 0.0187. Loss Val: 0.0227. Time: 2.7224
 EPOCH 300. Progress: 30.0%.
 Training R^2: 0.6880. Test R^2: 0.6541. Loss Train: 0.0176. Loss Val: 0.0216. Time: 2.6406
 EPOCH 310. Progress: 31.0%.
 Training R^2: 0.6694. Test R^2: 0.6531. Loss Train: 0.0183. Loss Val: 0.0221. Time: 2.6460
 EPOCH 320. Progress: 32.0%.
 Training R^2: 0.6774. Test R^2: 0.6436. Loss Train: 0.0169. Loss Val: 0.0211. Time: 2.4566
 EPOCH 330. Progress: 33.0%.
 Training R^2: 0.6632. Test R^2: 0.6591. Loss Train: 0.0169. Loss Val: 0.0212. Time: 2.9468
 EPOCH 340. Progress: 34.0%.
 Training R^2: 0.6894. Test R^2: 0.6351. Loss Train: 0.0171. Loss Val: 0.0211. Time: 2.5672
 EPOCH 350. Progress: 35.0%.
 Training R^2: 0.6896. Test R^2: 0.6695. Loss Train: 0.0164. Loss Val: 0.0212. Time: 2.6775
 EPOCH 360. Progress: 36.0%.
 Training R^2: 0.6232. Test R^2: 0.6326. Loss Train: 0.0165. Loss Val: 0.0247. Time: 2.3944
 EPOCH 370. Progress: 37.0%.
 Training R^2: 0.6804. Test R^2: 0.6576. Loss Train: 0.0164. Loss Val: 0.0210. Time: 2.1897
 EPOCH 380. Progress: 38.0%.
 Training R^2: 0.6921. Test R^2: 0.6647. Loss Train: 0.0169. Loss Val: 0.0205. Time: 2.2228
 EPOCH 390. Progress: 39.0%.
 Training R^2: 0.6972. Test R^2: 0.6491. Loss Train: 0.0162. Loss Val: 0.0218. Time: 2.2152
 EPOCH 400. Progress: 40.0%.
 Training R^2: 0.6947. Test R^2: 0.6753. Loss Train: 0.0167. Loss Val: 0.0212. Time: 2.1881
 EPOCH 410. Progress: 41.0%.
 Training R^2: 0.7018. Test R^2: 0.6643. Loss Train: 0.0167. Loss Val: 0.0226. Time: 3.0118
 EPOCH 420. Progress: 42.0%.
 Training R^2: 0.7000. Test R^2: 0.7003. Loss Train: 0.0163. Loss Val: 0.0201. Time: 2.6304
 EPOCH 430. Progress: 43.0%.
 Training R^2: 0.6625. Test R^2: 0.6237. Loss Train: 0.0167. Loss Val: 0.0233. Time: 3.1598
 EPOCH 440. Progress: 44.0%.
 Training R^2: 0.7060. Test R^2: 0.6623. Loss Train: 0.0159. Loss Val: 0.0197. Time: 3.0086
 EPOCH 450. Progress: 45.0%.
 Training R^2: 0.6988. Test R^2: 0.6952. Loss Train: 0.0162. Loss Val: 0.0192. Time: 2.2963
 EPOCH 460. Progress: 46.0%.
 Training R^2: 0.7103. Test R^2: 0.6961. Loss Train: 0.0159. Loss Val: 0.0202. Time: 2.3076
 EPOCH 470. Progress: 47.0%.
 Training R^2: 0.7002. Test R^2: 0.7008. Loss Train: 0.0170. Loss Val: 0.0192. Time: 2.2157
 EPOCH 480. Progress: 48.0%.
 Training R^2: 0.7115. Test R^2: 0.6744. Loss Train: 0.0161. Loss Val: 0.0197. Time: 2.8708
 EPOCH 490. Progress: 49.0%.
 Training R^2: 0.7282. Test R^2: 0.7196. Loss Train: 0.0155. Loss Val: 0.0197. Time: 2.3536
 EPOCH 500. Progress: 50.0%.
 Training R^2: 0.7260. Test R^2: 0.6962. Loss Train: 0.0154. Loss Val: 0.0194. Time: 2.3680
 EPOCH 510. Progress: 51.0%.
 Training R^2: 0.7025. Test R^2: 0.6908. Loss Train: 0.0154. Loss Val: 0.0186. Time: 2.7710
 EPOCH 520. Progress: 52.0%.
 Training R^2: 0.6770. Test R^2: 0.6661. Loss Train: 0.0169. Loss Val: 0.0222. Time: 2.5762
 EPOCH 530. Progress: 53.0%.
 Training R^2: 0.5970. Test R^2: 0.6155. Loss Train: 0.0161. Loss Val: 0.0228. Time: 2.3154
 EPOCH 540. Progress: 54.0%.
 Training R^2: 0.7074. Test R^2: 0.6608. Loss Train: 0.0157. Loss Val: 0.0196. Time: 2.7320
 EPOCH 550. Progress: 55.0%.
 Training R^2: 0.7204. Test R^2: 0.7196. Loss Train: 0.0150. Loss Val: 0.0184. Time: 2.1466
 EPOCH 560. Progress: 56.0%.
 Training R^2: 0.7081. Test R^2: 0.7074. Loss Train: 0.0157. Loss Val: 0.0182. Time: 2.1762
 EPOCH 570. Progress: 57.0%.
 Training R^2: 0.6813. Test R^2: 0.6728. Loss Train: 0.0157. Loss Val: 0.0205. Time: 2.0891
 EPOCH 580. Progress: 58.0%.
 Training R^2: 0.7157. Test R^2: 0.6813. Loss Train: 0.0155. Loss Val: 0.0204. Time: 2.4246
 EPOCH 590. Progress: 59.0%.
 Training R^2: 0.7274. Test R^2: 0.6947. Loss Train: 0.0150. Loss Val: 0.0180. Time: 2.9342
 EPOCH 600. Progress: 60.0%.
 Training R^2: 0.6775. Test R^2: 0.6712. Loss Train: 0.0152. Loss Val: 0.0212. Time: 2.6074
 EPOCH 610. Progress: 61.0%.
 Training R^2: 0.7253. Test R^2: 0.6968. Loss Train: 0.0153. Loss Val: 0.0178. Time: 2.2037
 EPOCH 620. Progress: 62.0%.
 Training R^2: 0.6835. Test R^2: 0.6555. Loss Train: 0.0146. Loss Val: 0.0216. Time: 2.4210
 EPOCH 630. Progress: 63.0%.
 Training R^2: 0.7259. Test R^2: 0.6918. Loss Train: 0.0150. Loss Val: 0.0189. Time: 2.6791
 EPOCH 640. Progress: 64.0%.
 Training R^2: 0.7129. Test R^2: 0.6974. Loss Train: 0.0152. Loss Val: 0.0183. Time: 2.9655
 EPOCH 650. Progress: 65.0%.
 Training R^2: 0.7340. Test R^2: 0.7267. Loss Train: 0.0147. Loss Val: 0.0176. Time: 2.6578
 EPOCH 660. Progress: 66.0%.
 Training R^2: 0.7294. Test R^2: 0.7002. Loss Train: 0.0153. Loss Val: 0.0188. Time: 2.6740
 EPOCH 670. Progress: 67.0%.
 Training R^2: 0.7191. Test R^2: 0.7300. Loss Train: 0.0149. Loss Val: 0.0180. Time: 3.2143
 EPOCH 680. Progress: 68.0%.
 Training R^2: 0.7424. Test R^2: 0.7137. Loss Train: 0.0152. Loss Val: 0.0173. Time: 2.7885
 EPOCH 690. Progress: 69.0%.
 Training R^2: 0.6990. Test R^2: 0.6807. Loss Train: 0.0148. Loss Val: 0.0180. Time: 3.0227
 EPOCH 700. Progress: 70.0%.
 Training R^2: 0.7389. Test R^2: 0.7138. Loss Train: 0.0148. Loss Val: 0.0182. Time: 3.1861
 EPOCH 710. Progress: 71.0%.
 Training R^2: 0.7239. Test R^2: 0.7211. Loss Train: 0.0148. Loss Val: 0.0178. Time: 2.8397
 EPOCH 720. Progress: 72.0%.
 Training R^2: 0.7309. Test R^2: 0.7357. Loss Train: 0.0142. Loss Val: 0.0171. Time: 2.7046
 EPOCH 730. Progress: 73.0%.
 Training R^2: 0.7172. Test R^2: 0.7073. Loss Train: 0.0145. Loss Val: 0.0191. Time: 2.6449
 EPOCH 740. Progress: 74.0%.
 Training R^2: 0.7328. Test R^2: 0.7340. Loss Train: 0.0147. Loss Val: 0.0179. Time: 2.3361
 EPOCH 750. Progress: 75.0%.
 Training R^2: 0.7431. Test R^2: 0.6926. Loss Train: 0.0144. Loss Val: 0.0172. Time: 2.4534
 EPOCH 760. Progress: 76.0%.
 Training R^2: 0.7266. Test R^2: 0.7168. Loss Train: 0.0142. Loss Val: 0.0192. Time: 2.3319
 EPOCH 770. Progress: 77.0%.
 Training R^2: 0.7168. Test R^2: 0.7135. Loss Train: 0.0143. Loss Val: 0.0198. Time: 2.4297
 EPOCH 780. Progress: 78.0%.
 Training R^2: 0.7356. Test R^2: 0.7314. Loss Train: 0.0152. Loss Val: 0.0169. Time: 2.6842
 EPOCH 790. Progress: 79.0%.
 Training R^2: 0.7401. Test R^2: 0.7071. Loss Train: 0.0146. Loss Val: 0.0174. Time: 2.5626
 EPOCH 800. Progress: 80.0%.
 Training R^2: 0.7335. Test R^2: 0.7114. Loss Train: 0.0149. Loss Val: 0.0179. Time: 2.3640
 EPOCH 810. Progress: 81.0%.
 Training R^2: 0.6838. Test R^2: 0.7167. Loss Train: 0.0153. Loss Val: 0.0186. Time: 2.4697
 EPOCH 820. Progress: 82.0%.
 Training R^2: 0.7174. Test R^2: 0.7257. Loss Train: 0.0145. Loss Val: 0.0176. Time: 2.3813
 EPOCH 830. Progress: 83.0%.
 Training R^2: 0.7178. Test R^2: 0.7105. Loss Train: 0.0153. Loss Val: 0.0178. Time: 2.5285
 EPOCH 840. Progress: 84.0%.
 Training R^2: 0.7328. Test R^2: 0.7158. Loss Train: 0.0143. Loss Val: 0.0181. Time: 2.5079
 EPOCH 850. Progress: 85.0%.
 Training R^2: 0.7306. Test R^2: 0.7422. Loss Train: 0.0145. Loss Val: 0.0172. Time: 2.3710
 EPOCH 860. Progress: 86.0%.
 Training R^2: 0.7161. Test R^2: 0.6795. Loss Train: 0.0139. Loss Val: 0.0200. Time: 2.3648
 EPOCH 870. Progress: 87.0%.
 Training R^2: 0.7056. Test R^2: 0.7028. Loss Train: 0.0161. Loss Val: 0.0182. Time: 2.4802
 EPOCH 880. Progress: 88.0%.
 Training R^2: 0.7299. Test R^2: 0.7267. Loss Train: 0.0141. Loss Val: 0.0169. Time: 2.9453
 EPOCH 890. Progress: 89.0%.
 Training R^2: 0.7232. Test R^2: 0.7243. Loss Train: 0.0143. Loss Val: 0.0177. Time: 2.3615
 EPOCH 900. Progress: 90.0%.
 Training R^2: 0.7262. Test R^2: 0.7361. Loss Train: 0.0147. Loss Val: 0.0182. Time: 2.3210
 EPOCH 910. Progress: 91.0%.
 Training R^2: 0.7386. Test R^2: 0.6966. Loss Train: 0.0143. Loss Val: 0.0179. Time: 2.4063
 EPOCH 920. Progress: 92.0%.
 Training R^2: 0.7289. Test R^2: 0.7135. Loss Train: 0.0148. Loss Val: 0.0183. Time: 2.3717
 EPOCH 930. Progress: 93.0%.
 Training R^2: 0.7319. Test R^2: 0.7358. Loss Train: 0.0148. Loss Val: 0.0171. Time: 2.3222
 EPOCH 940. Progress: 94.0%.
 Training R^2: 0.7294. Test R^2: 0.7323. Loss Train: 0.0144. Loss Val: 0.0189. Time: 2.3810
 EPOCH 950. Progress: 95.0%.
 Training R^2: 0.7457. Test R^2: 0.7434. Loss Train: 0.0152. Loss Val: 0.0165. Time: 2.4457
 EPOCH 960. Progress: 96.0%.
 Training R^2: 0.7436. Test R^2: 0.7233. Loss Train: 0.0155. Loss Val: 0.0170. Time: 2.4220
 EPOCH 970. Progress: 97.0%.
 Training R^2: 0.7127. Test R^2: 0.6912. Loss Train: 0.0143. Loss Val: 0.0172. Time: 2.6444
 EPOCH 980. Progress: 98.0%.
 Training R^2: 0.7222. Test R^2: 0.7002. Loss Train: 0.0147. Loss Val: 0.0188. Time: 2.4263
 EPOCH 990. Progress: 99.0%.
 Training R^2: 0.7367. Test R^2: 0.7214. Loss Train: 0.0143. Loss Val: 0.0174. Time: 2.2914
 EPOCH 1000. Progress: 100.0%.
 Training R^2: 0.7344. Test R^2: 0.7440. Loss Train: 0.0151. Loss Val: 0.0177. Time: 2.4692
../_images/CASESTUDY_NN-day1_20_1.png
saving  ./plots/LONGER_FFNet_ReLU_lr0.01_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png
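The longer run pays off: test R^2 climbs from about 0.70 at epoch 500 to roughly 0.74 by epoch 1000. The next cell produces predictions for entire splits at once. Two details of the pattern are worth noting: torch.no_grad() disables gradient tracking, so no autograd graph is built for these forward passes, and calling model.eval() beforehand is good practice, although this two-layer net has no dropout or batch-norm layers whose behaviour would change. A minimal, self-contained sketch of the inference pattern (with a dummy batch; the real cell draws batches from dataloaders):

import torch
from torch import nn

net = nn.Sequential(nn.Linear(47, 64), nn.ReLU(), nn.Linear(64, 1))
net.eval()                     # switch layers like dropout to inference mode (none here)
with torch.no_grad():          # skip gradient bookkeeping for plain prediction
    x = torch.randn(8, 47)     # dummy batch: 8 samples x 47 predictors
    y_hat = net(x).squeeze()   # drop the trailing singleton dim -> shape (8,)
print(y_hat.shape)             # torch.Size([8])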
[26]:
# Predictions on the held-out split (dataloaders['all_val'], plotted as 'Test dataset' below)
with torch.no_grad():
    data, targets_val = next(iter(dataloaders['all_val']))
    model_input = data.to(device)  # move the batch of 47 predictors to the device
    predicted_val = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted_val.shape))
    print('predicted[:20]: \t{}'.format(predicted_val[:20]))
    print('targets[:20]: \t\t{}'.format(targets_val[:20]))

# Predictions on the whole dataset (plotted as 'Whole dataset' below)
with torch.no_grad():
    data, targets = next(iter(dataloaders['test']))
    model_input = data.to(device)  # move the batch of 47 predictors to the device
    predicted = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted.shape))
    print('predicted[:20]: \t{}'.format(predicted[:20]))
    print('targets[:20]: \t\t{}'.format(targets[:20]))

# Plot predictions against observed values and save the figure
path_to_save = './plots'
if not os.path.exists(path_to_save):
    os.makedirs(path_to_save)

fig, (ax1,ax2) = plt.subplots(1,2, sharey=True)
r = r2_score(targets_val, predicted_val.cpu())
ax1.scatter(targets_val, predicted_val.cpu(),alpha=0.5, label='$R^2$ = %.3f' % (r))
ax1.legend(loc="upper left")
ax1.set_xlabel('True Values [Mean Conc.]')
ax1.set_ylabel('Predictions [Mean Conc.]')
ax1.axis('equal')
ax1.axis('square')
ax1.set_xlim([0,1])
ax1.set_ylim([0,1])
_ = ax1.plot([-100, 100], [-100, 100], 'r:')
ax1.set_title('Test dataset')
fig.set_figheight(30)
fig.set_figwidth(10)
#plt.show()
#plt.close('all')

# Whole dataset
r = r2_score(targets, predicted.cpu())
ax2.scatter(targets, predicted.cpu(), alpha=0.5, label='$R^2$ = %.3f' % (r))
ax2.legend(loc="upper left")
ax2.set_xlabel('True Values [Mean Conc.]')
ax2.set_ylabel('Predictions [Mean Conc.]')
ax2.axis('equal')
ax2.axis('square')
ax2.set_xlim([0,1])
ax2.set_ylim([0,1])
_ = ax2.plot([-100, 100], [-100, 100], 'r:')
ax2.set_title('Whole dataset')
# plt.show()
fig.savefig(os.path.join(path_to_save, 'FFNet_LONGER_R2Score_' + config_str + '.png'), bbox_inches='tight')
# #plt.close('all')
predicted.shape: torch.Size([335])
predicted[:20]:         tensor([0.5857, 0.5157, 0.6183, 0.4734, 0.4436, 0.3312, 0.6941, 0.7504, 0.5765,
        0.1581, 0.6536, 0.8482, 0.3342, 0.6194, 0.4693, 0.5088, 0.6190, 0.2262,
        0.6980, 0.4626])
targets[:20]:           tensor([0.5908, 0.3096, 0.5722, 0.6008, 0.4230, 0.1593, 0.6624, 0.7025, 0.5815,
        0.1031, 0.5105, 0.9683, 0.2082, 0.6711, 0.5554, 0.6664, 0.4915, 0.1425,
        0.6479, 0.5573])
predicted.shape: torch.Size([1118])
predicted[:20]:         tensor([0.2354, 0.2575, 0.4211, 0.2812, 0.2703, 0.4035, 0.3168, 0.4194, 0.5279,
        0.5243, 0.5281, 0.5377, 0.5387, 0.5360, 0.5383, 0.5388, 0.5388, 0.5397,
        0.5434, 0.5010])
targets[:20]:           tensor([0.0280, 0.1301, 0.2970, 0.2207, 0.2648, 0.3342, 0.2873, 0.3362, 0.6023,
        0.5031, 0.5335, 0.6247, 0.5454, 0.4926, 0.4442, 0.5030, 0.5472, 0.5128,
        0.5702, 0.5017])
../_images/CASESTUDY_NN-day1_21_1.png
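As a final experiment, SGD with momentum is swapped for Adam, which maintains exponential moving averages of each parameter's gradient (first moment) and squared gradient (second moment) and rescales every update per parameter. Below is a minimal sketch of a single Adam update for one scalar parameter, using the defaults visible in the optimizer printout further down (betas=(0.9, 0.999), eps=1e-08); it is illustrative only, not the torch.optim.Adam internals:

import math

beta1, beta2, eps = 0.9, 0.999, 1e-08

def adam_step(theta, grad, m, v, t, lr=0.01):
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate (uncentred variance)
    m_hat = m / (1 - beta1 ** t)              # bias corrections for the zero initialisation
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
for t, grad in enumerate([0.5, 0.3, 0.4], start=1):
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # each step is close to lr in size, regardless of the gradient magnitude

Because the step size adapts to the gradient history, Adam tends to be less sensitive to the raw learning rate than SGD; consistent with that, the log below passes a test R^2 of 0.6 within the first 50 epochs, compared with roughly 150 epochs for the SGD run above.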
[27]:
# Trying Adam instead of SGD with momentum
# (the FeedforwardNeuralNetModel class below is unchanged from the previous cells)
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # Linear function
        self.fc1 = nn.Linear(in_dim, hid_dim)
        # Non-linearity
        self.relu = nn.ReLU()
        # Linear function (readout)
        self.fc2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        # Linear function
        out = self.fc1(x)
        # Non-linearity
        out = self.relu(out)
        # Linear function (readout)
        out = self.fc2(out)
        return out

# Best configuration, now trained with Adam at two learning rates
lr_range = [0.01,0.001]
hid_dim_range = [64]
# SGD-specific settings, ignored by Adam; kept only so the printout and
# config_str stay comparable with the SGD runs above
momentum, weight_decay, dampening, nesterov = 0.9, 0.001, 0, False
for lr in lr_range:
    for hid_dim in hid_dim_range:
        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
        model = FeedforwardNeuralNetModel(in_dim=47, hid_dim=hid_dim, out_dim=1).to(device)
        print(model)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        # MSE loss for the regression target
        loss_fn = nn.MSELoss().to(device)
        config_str = 'LONGER_ADAM_FFNet_ReLU_lr' + str(lr) + '_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) + '_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
        train(model, loss_fn, optimizer, dataloaders['train'], dataloaders['val'], config_str, num_epochs=1000, verbose=False)

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.01
    weight_decay: 0
)
n. of epochs: 1000
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.3907. Test R^2: 0.3027. Loss Train: 0.0637. Loss Val: 0.0418. Time: 2.5571
 EPOCH 10. Progress: 1.0%.
 Training R^2: 0.5283. Test R^2: 0.4687. Loss Train: 0.0272. Loss Val: 0.0335. Time: 2.3194
 EPOCH 20. Progress: 2.0%.
 Training R^2: 0.5861. Test R^2: 0.5017. Loss Train: 0.0250. Loss Val: 0.0318. Time: 2.1796
 EPOCH 30. Progress: 3.0%.
 Training R^2: 0.5841. Test R^2: 0.5459. Loss Train: 0.0249. Loss Val: 0.0283. Time: 2.1632
 EPOCH 40. Progress: 4.0%.
 Training R^2: 0.4285. Test R^2: 0.3964. Loss Train: 0.0226. Loss Val: 0.0375. Time: 2.2345
 EPOCH 50. Progress: 5.0%.
 Training R^2: 0.6103. Test R^2: 0.6069. Loss Train: 0.0199. Loss Val: 0.0246. Time: 2.3856
 EPOCH 60. Progress: 6.0%.
 Training R^2: 0.5282. Test R^2: 0.4812. Loss Train: 0.0202. Loss Val: 0.0336. Time: 2.1624
 EPOCH 70. Progress: 7.0%.
 Training R^2: 0.6683. Test R^2: 0.6814. Loss Train: 0.0209. Loss Val: 0.0201. Time: 2.6237
 EPOCH 80. Progress: 8.0%.
 Training R^2: 0.6456. Test R^2: 0.6339. Loss Train: 0.0203. Loss Val: 0.0246. Time: 2.3192
 EPOCH 90. Progress: 9.0%.
 Training R^2: 0.6576. Test R^2: 0.6848. Loss Train: 0.0209. Loss Val: 0.0205. Time: 2.1755
 EPOCH 100. Progress: 10.0%.
 Training R^2: 0.6762. Test R^2: 0.5351. Loss Train: 0.0177. Loss Val: 0.0203. Time: 2.4397
 EPOCH 110. Progress: 11.0%.
 Training R^2: 0.7007. Test R^2: 0.6804. Loss Train: 0.0166. Loss Val: 0.0192. Time: 2.8877
 EPOCH 120. Progress: 12.0%.
 Training R^2: 0.6851. Test R^2: 0.6832. Loss Train: 0.0173. Loss Val: 0.0206. Time: 2.3207
 EPOCH 130. Progress: 13.0%.
 Training R^2: 0.6630. Test R^2: 0.6563. Loss Train: 0.0166. Loss Val: 0.0217. Time: 2.2040
 EPOCH 140. Progress: 14.0%.
 Training R^2: 0.7237. Test R^2: 0.7122. Loss Train: 0.0164. Loss Val: 0.0175. Time: 2.1338
 EPOCH 150. Progress: 15.0%.
 Training R^2: 0.6712. Test R^2: 0.6499. Loss Train: 0.0159. Loss Val: 0.0213. Time: 2.4244
 EPOCH 160. Progress: 16.0%.
 Training R^2: 0.6859. Test R^2: 0.6410. Loss Train: 0.0169. Loss Val: 0.0220. Time: 2.7059
 EPOCH 170. Progress: 17.0%.
 Training R^2: 0.7282. Test R^2: 0.7186. Loss Train: 0.0156. Loss Val: 0.0162. Time: 2.3983
 EPOCH 180. Progress: 18.0%.
 Training R^2: 0.7141. Test R^2: 0.7137. Loss Train: 0.0157. Loss Val: 0.0175. Time: 2.2243
 EPOCH 190. Progress: 19.0%.
 Training R^2: 0.7243. Test R^2: 0.7229. Loss Train: 0.0166. Loss Val: 0.0168. Time: 2.1938
 EPOCH 200. Progress: 20.0%.
 Training R^2: 0.7037. Test R^2: 0.7038. Loss Train: 0.0148. Loss Val: 0.0176. Time: 2.2693
 EPOCH 210. Progress: 21.0%.
 Training R^2: 0.6899. Test R^2: 0.7147. Loss Train: 0.0153. Loss Val: 0.0189. Time: 2.3512
 EPOCH 220. Progress: 22.0%.
 Training R^2: 0.6460. Test R^2: 0.6535. Loss Train: 0.0149. Loss Val: 0.0215. Time: 2.2150
 EPOCH 230. Progress: 23.0%.
 Training R^2: 0.7394. Test R^2: 0.7456. Loss Train: 0.0146. Loss Val: 0.0150. Time: 2.2148
 EPOCH 240. Progress: 24.0%.
 Training R^2: 0.7453. Test R^2: 0.7390. Loss Train: 0.0157. Loss Val: 0.0165. Time: 2.1932
 EPOCH 250. Progress: 25.0%.
 Training R^2: 0.7065. Test R^2: 0.7047. Loss Train: 0.0151. Loss Val: 0.0182. Time: 2.4854
 EPOCH 260. Progress: 26.0%.
 Training R^2: 0.7158. Test R^2: 0.7538. Loss Train: 0.0146. Loss Val: 0.0166. Time: 2.2082
 EPOCH 270. Progress: 27.0%.
 Training R^2: 0.6339. Test R^2: 0.5646. Loss Train: 0.0154. Loss Val: 0.0268. Time: 2.1861
 EPOCH 280. Progress: 28.0%.
 Training R^2: 0.7504. Test R^2: 0.7543. Loss Train: 0.0144. Loss Val: 0.0151. Time: 2.1763
 EPOCH 290. Progress: 29.0%.
 Training R^2: 0.7155. Test R^2: 0.7553. Loss Train: 0.0146. Loss Val: 0.0163. Time: 2.2144
 EPOCH 300. Progress: 30.0%.
 Training R^2: 0.7428. Test R^2: 0.7277. Loss Train: 0.0176. Loss Val: 0.0160. Time: 2.4493
 EPOCH 310. Progress: 31.0%.
 Training R^2: 0.7452. Test R^2: 0.7370. Loss Train: 0.0150. Loss Val: 0.0156. Time: 2.2905
 EPOCH 320. Progress: 32.0%.
 Training R^2: 0.7173. Test R^2: 0.7602. Loss Train: 0.0147. Loss Val: 0.0158. Time: 2.3167
 EPOCH 330. Progress: 33.0%.
 Training R^2: 0.6786. Test R^2: 0.6614. Loss Train: 0.0157. Loss Val: 0.0223. Time: 3.1340
 EPOCH 340. Progress: 34.0%.
 Training R^2: 0.7275. Test R^2: 0.6948. Loss Train: 0.0148. Loss Val: 0.0184. Time: 2.9551
 EPOCH 350. Progress: 35.0%.
 Training R^2: 0.7568. Test R^2: 0.7569. Loss Train: 0.0134. Loss Val: 0.0146. Time: 3.0766
 EPOCH 360. Progress: 36.0%.
 Training R^2: 0.7079. Test R^2: 0.7480. Loss Train: 0.0138. Loss Val: 0.0161. Time: 2.3230
 EPOCH 370. Progress: 37.0%.
 Training R^2: 0.7385. Test R^2: 0.7483. Loss Train: 0.0145. Loss Val: 0.0157. Time: 2.2408
 EPOCH 380. Progress: 38.0%.
 Training R^2: 0.7519. Test R^2: 0.7619. Loss Train: 0.0141. Loss Val: 0.0150. Time: 2.3781
 EPOCH 390. Progress: 39.0%.
 Training R^2: 0.7424. Test R^2: 0.7667. Loss Train: 0.0131. Loss Val: 0.0144. Time: 2.3558
 EPOCH 400. Progress: 40.0%.
 Training R^2: 0.7519. Test R^2: 0.7712. Loss Train: 0.0151. Loss Val: 0.0152. Time: 2.5086
 EPOCH 410. Progress: 41.0%.
 Training R^2: 0.7430. Test R^2: 0.7316. Loss Train: 0.0137. Loss Val: 0.0171. Time: 2.6166
 EPOCH 420. Progress: 42.0%.
 Training R^2: 0.7765. Test R^2: 0.7671. Loss Train: 0.0136. Loss Val: 0.0131. Time: 2.5005
 EPOCH 430. Progress: 43.0%.
 Training R^2: 0.6761. Test R^2: 0.6853. Loss Train: 0.0134. Loss Val: 0.0200. Time: 2.5180
 EPOCH 440. Progress: 44.0%.
 Training R^2: 0.7145. Test R^2: 0.7455. Loss Train: 0.0132. Loss Val: 0.0175. Time: 2.4153
 EPOCH 450. Progress: 45.0%.
 Training R^2: 0.7104. Test R^2: 0.7209. Loss Train: 0.0126. Loss Val: 0.0177. Time: 2.2806
 EPOCH 460. Progress: 46.0%.
 Training R^2: 0.7438. Test R^2: 0.7513. Loss Train: 0.0130. Loss Val: 0.0158. Time: 3.4491
 EPOCH 470. Progress: 47.0%.
 Training R^2: 0.7741. Test R^2: 0.7920. Loss Train: 0.0130. Loss Val: 0.0132. Time: 2.5159
 EPOCH 480. Progress: 48.0%.
 Training R^2: 0.7337. Test R^2: 0.7877. Loss Train: 0.0125. Loss Val: 0.0143. Time: 2.3047
 EPOCH 490. Progress: 49.0%.
 Training R^2: 0.7610. Test R^2: 0.7817. Loss Train: 0.0142. Loss Val: 0.0146. Time: 2.2825
 EPOCH 500. Progress: 50.0%.
 Training R^2: 0.7630. Test R^2: 0.7784. Loss Train: 0.0148. Loss Val: 0.0143. Time: 2.2451
 EPOCH 510. Progress: 51.0%.
 Training R^2: 0.7710. Test R^2: 0.8033. Loss Train: 0.0140. Loss Val: 0.0141. Time: 2.2595
 EPOCH 520. Progress: 52.0%.
 Training R^2: 0.7628. Test R^2: 0.7718. Loss Train: 0.0135. Loss Val: 0.0136. Time: 2.2565
 EPOCH 530. Progress: 53.0%.
 Training R^2: 0.7754. Test R^2: 0.7837. Loss Train: 0.0122. Loss Val: 0.0124. Time: 2.5370
 EPOCH 540. Progress: 54.0%.
 Training R^2: 0.7569. Test R^2: 0.7522. Loss Train: 0.0132. Loss Val: 0.0148. Time: 2.2073
 EPOCH 550. Progress: 55.00000000000001%.
 Training R^2: 0.7675. Test R^2: 0.7798. Loss Train: 0.0126. Loss Val: 0.0137. Time: 2.1677
 EPOCH 560. Progress: 56.00000000000001%.
 Training R^2: 0.7860. Test R^2: 0.7938. Loss Train: 0.0153. Loss Val: 0.0124. Time: 2.1945
 EPOCH 570. Progress: 56.99999999999999%.
 Training R^2: 0.7332. Test R^2: 0.7530. Loss Train: 0.0124. Loss Val: 0.0154. Time: 2.3476
 EPOCH 580. Progress: 57.99999999999999%.
 Training R^2: 0.7587. Test R^2: 0.7810. Loss Train: 0.0127. Loss Val: 0.0129. Time: 2.3162
 EPOCH 590. Progress: 59.0%.
 Training R^2: 0.7203. Test R^2: 0.7146. Loss Train: 0.0123. Loss Val: 0.0175. Time: 2.4659
 EPOCH 600. Progress: 60.0%.
 Training R^2: 0.7534. Test R^2: 0.7553. Loss Train: 0.0121. Loss Val: 0.0144. Time: 2.3201
 EPOCH 610. Progress: 61.0%.
 Training R^2: 0.7489. Test R^2: 0.7627. Loss Train: 0.0134. Loss Val: 0.0145. Time: 2.7255
 EPOCH 620. Progress: 62.0%.
 Training R^2: 0.6548. Test R^2: 0.6529. Loss Train: 0.0144. Loss Val: 0.0227. Time: 2.6257
 EPOCH 630. Progress: 63.0%.
 Training R^2: 0.7746. Test R^2: 0.8111. Loss Train: 0.0119. Loss Val: 0.0120. Time: 2.8682
 EPOCH 640. Progress: 64.0%.
 Training R^2: 0.6986. Test R^2: 0.7465. Loss Train: 0.0123. Loss Val: 0.0163. Time: 2.2709
 EPOCH 650. Progress: 65.0%.
 Training R^2: 0.7670. Test R^2: 0.7824. Loss Train: 0.0127. Loss Val: 0.0138. Time: 2.5673
 EPOCH 660. Progress: 66.0%.
 Training R^2: 0.7683. Test R^2: 0.7935. Loss Train: 0.0127. Loss Val: 0.0118. Time: 2.6559
 EPOCH 670. Progress: 67.0%.
 Training R^2: 0.7814. Test R^2: 0.8140. Loss Train: 0.0121. Loss Val: 0.0119. Time: 2.1893
 EPOCH 680. Progress: 68.0%.
 Training R^2: 0.7712. Test R^2: 0.7824. Loss Train: 0.0122. Loss Val: 0.0140. Time: 2.5471
 EPOCH 690. Progress: 69.0%.
 Training R^2: 0.7881. Test R^2: 0.8145. Loss Train: 0.0128. Loss Val: 0.0117. Time: 1.9975
 EPOCH 700. Progress: 70.0%.
 Training R^2: 0.7613. Test R^2: 0.8018. Loss Train: 0.0121. Loss Val: 0.0128. Time: 1.9574
 EPOCH 710. Progress: 71.0%.
 Training R^2: 0.7207. Test R^2: 0.7138. Loss Train: 0.0117. Loss Val: 0.0183. Time: 1.9471
 EPOCH 720. Progress: 72.0%.
 Training R^2: 0.7946. Test R^2: 0.8287. Loss Train: 0.0124. Loss Val: 0.0111. Time: 1.9482
 EPOCH 730. Progress: 73.0%.
 Training R^2: 0.7411. Test R^2: 0.7795. Loss Train: 0.0119. Loss Val: 0.0142. Time: 2.0103
 EPOCH 740. Progress: 74.0%.
 Training R^2: 0.7652. Test R^2: 0.7693. Loss Train: 0.0127. Loss Val: 0.0142. Time: 1.8568
 EPOCH 750. Progress: 75.0%.
 Training R^2: 0.7594. Test R^2: 0.8003. Loss Train: 0.0115. Loss Val: 0.0125. Time: 2.1871
 EPOCH 760. Progress: 76.0%.
 Training R^2: 0.7727. Test R^2: 0.8022. Loss Train: 0.0120. Loss Val: 0.0126. Time: 2.1412
 EPOCH 770. Progress: 77.0%.
 Training R^2: 0.7260. Test R^2: 0.6854. Loss Train: 0.0122. Loss Val: 0.0194. Time: 1.9576
 EPOCH 780. Progress: 78.0%.
 Training R^2: 0.7772. Test R^2: 0.8020. Loss Train: 0.0119. Loss Val: 0.0131. Time: 1.9466
 EPOCH 790. Progress: 79.0%.
 Training R^2: 0.7488. Test R^2: 0.8251. Loss Train: 0.0121. Loss Val: 0.0112. Time: 1.9324
 EPOCH 800. Progress: 80.0%.
 Training R^2: 0.7789. Test R^2: 0.8018. Loss Train: 0.0122. Loss Val: 0.0127. Time: 1.9675
 EPOCH 810. Progress: 81.0%.
 Training R^2: 0.7686. Test R^2: 0.7813. Loss Train: 0.0129. Loss Val: 0.0134. Time: 1.8761
 EPOCH 820. Progress: 82.0%.
 Training R^2: 0.7578. Test R^2: 0.8056. Loss Train: 0.0123. Loss Val: 0.0126. Time: 1.9206
 EPOCH 830. Progress: 83.0%.
 Training R^2: 0.7945. Test R^2: 0.7955. Loss Train: 0.0119. Loss Val: 0.0113. Time: 1.9568
 EPOCH 840. Progress: 84.0%.
 Training R^2: 0.7344. Test R^2: 0.7341. Loss Train: 0.0123. Loss Val: 0.0160. Time: 2.0485
 EPOCH 850. Progress: 85.0%.
 Training R^2: 0.7771. Test R^2: 0.8063. Loss Train: 0.0126. Loss Val: 0.0123. Time: 1.8801
 EPOCH 860. Progress: 86.0%.
 Training R^2: 0.7941. Test R^2: 0.8254. Loss Train: 0.0117. Loss Val: 0.0107. Time: 1.9319
 EPOCH 870. Progress: 87.0%.
 Training R^2: 0.7810. Test R^2: 0.8012. Loss Train: 0.0121. Loss Val: 0.0118. Time: 1.9670
 EPOCH 880. Progress: 88.0%.
 Training R^2: 0.7466. Test R^2: 0.8268. Loss Train: 0.0124. Loss Val: 0.0113. Time: 1.9424
 EPOCH 890. Progress: 89.0%.
 Training R^2: 0.7937. Test R^2: 0.8166. Loss Train: 0.0122. Loss Val: 0.0115. Time: 1.8968
 EPOCH 900. Progress: 90.0%.
 Training R^2: 0.8011. Test R^2: 0.8291. Loss Train: 0.0114. Loss Val: 0.0120. Time: 1.9333
 EPOCH 910. Progress: 91.0%.
 Training R^2: 0.7764. Test R^2: 0.7922. Loss Train: 0.0115. Loss Val: 0.0134. Time: 1.9312
 EPOCH 920. Progress: 92.0%.
 Training R^2: 0.7935. Test R^2: 0.8062. Loss Train: 0.0132. Loss Val: 0.0119. Time: 2.3265
 EPOCH 930. Progress: 93.0%.
 Training R^2: 0.7865. Test R^2: 0.8079. Loss Train: 0.0118. Loss Val: 0.0131. Time: 2.8770
 EPOCH 940. Progress: 94.0%.
 Training R^2: 0.7785. Test R^2: 0.8259. Loss Train: 0.0120. Loss Val: 0.0107. Time: 2.8656
 EPOCH 950. Progress: 95.0%.
 Training R^2: 0.7886. Test R^2: 0.8126. Loss Train: 0.0113. Loss Val: 0.0115. Time: 2.2849
 EPOCH 960. Progress: 96.0%.
 Training R^2: 0.7984. Test R^2: 0.8284. Loss Train: 0.0122. Loss Val: 0.0108. Time: 2.2682
 EPOCH 970. Progress: 97.0%.
 Training R^2: 0.7796. Test R^2: 0.8180. Loss Train: 0.0129. Loss Val: 0.0113. Time: 2.4791
 EPOCH 980. Progress: 98.0%.
 Training R^2: 0.7381. Test R^2: 0.7013. Loss Train: 0.0133. Loss Val: 0.0175. Time: 2.6938
 EPOCH 990. Progress: 99.0%.
 Training R^2: 0.7749. Test R^2: 0.8163. Loss Train: 0.0122. Loss Val: 0.0115. Time: 2.3152
 EPOCH 1000. Progress: 100.0%.
 Training R^2: 0.7629. Test R^2: 0.7712. Loss Train: 0.0131. Loss Val: 0.0133. Time: 2.2270
../_images/CASESTUDY_NN-day1_22_1.png
saving  ./plots/LONGER_ADAM_FFNet_ReLU_lr0.01_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0
)
n. of epochs: 1000
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2028. Test R^2: 0.1560. Loss Train: 0.0692. Loss Val: 0.0486. Time: 2.3983
 EPOCH 10. Progress: 1.0%.
 Training R^2: 0.4759. Test R^2: 0.4325. Loss Train: 0.0263. Loss Val: 0.0361. Time: 2.0657
 EPOCH 20. Progress: 2.0%.
 Training R^2: 0.5427. Test R^2: 0.4802. Loss Train: 0.0245. Loss Val: 0.0329. Time: 2.7306
 EPOCH 30. Progress: 3.0%.
 Training R^2: 0.5391. Test R^2: 0.5306. Loss Train: 0.0224. Loss Val: 0.0307. Time: 2.2783
 EPOCH 40. Progress: 4.0%.
 Training R^2: 0.6295. Test R^2: 0.5802. Loss Train: 0.0204. Loss Val: 0.0273. Time: 2.8154
 EPOCH 50. Progress: 5.0%.
 Training R^2: 0.6422. Test R^2: 0.6217. Loss Train: 0.0190. Loss Val: 0.0242. Time: 2.1834
 EPOCH 60. Progress: 6.0%.
 Training R^2: 0.6406. Test R^2: 0.6399. Loss Train: 0.0178. Loss Val: 0.0236. Time: 2.6537
 EPOCH 70. Progress: 7.000000000000001%.
 Training R^2: 0.6753. Test R^2: 0.6471. Loss Train: 0.0180. Loss Val: 0.0239. Time: 2.2396
 EPOCH 80. Progress: 8.0%.
 Training R^2: 0.7032. Test R^2: 0.6909. Loss Train: 0.0166. Loss Val: 0.0203. Time: 2.2092
 EPOCH 90. Progress: 9.0%.
 Training R^2: 0.6927. Test R^2: 0.6677. Loss Train: 0.0153. Loss Val: 0.0215. Time: 2.2108
 EPOCH 100. Progress: 10.0%.
 Training R^2: 0.7271. Test R^2: 0.7260. Loss Train: 0.0143. Loss Val: 0.0179. Time: 2.1933
 EPOCH 110. Progress: 11.0%.
 Training R^2: 0.7351. Test R^2: 0.7490. Loss Train: 0.0140. Loss Val: 0.0169. Time: 2.1348
 EPOCH 120. Progress: 12.0%.
 Training R^2: 0.7628. Test R^2: 0.7371. Loss Train: 0.0132. Loss Val: 0.0157. Time: 2.1435
 EPOCH 130. Progress: 13.0%.
 Training R^2: 0.7598. Test R^2: 0.7347. Loss Train: 0.0139. Loss Val: 0.0153. Time: 2.3784
 EPOCH 140. Progress: 14.000000000000002%.
 Training R^2: 0.7580. Test R^2: 0.7632. Loss Train: 0.0127. Loss Val: 0.0153. Time: 2.1633
 EPOCH 150. Progress: 15.0%.
 Training R^2: 0.7713. Test R^2: 0.7808. Loss Train: 0.0130. Loss Val: 0.0147. Time: 2.8185
 EPOCH 160. Progress: 16.0%.
 Training R^2: 0.7694. Test R^2: 0.7524. Loss Train: 0.0119. Loss Val: 0.0157. Time: 2.2254
 EPOCH 170. Progress: 17.0%.
 Training R^2: 0.7840. Test R^2: 0.7902. Loss Train: 0.0122. Loss Val: 0.0133. Time: 2.1863
 EPOCH 180. Progress: 18.0%.
 Training R^2: 0.7839. Test R^2: 0.8062. Loss Train: 0.0114. Loss Val: 0.0125. Time: 2.1960
 EPOCH 190. Progress: 19.0%.
 Training R^2: 0.8026. Test R^2: 0.8171. Loss Train: 0.0114. Loss Val: 0.0120. Time: 2.8994
 EPOCH 200. Progress: 20.0%.
 Training R^2: 0.8004. Test R^2: 0.8059. Loss Train: 0.0108. Loss Val: 0.0127. Time: 2.1921
 EPOCH 210. Progress: 21.0%.
 Training R^2: 0.7806. Test R^2: 0.7829. Loss Train: 0.0110. Loss Val: 0.0130. Time: 5.2012
 EPOCH 220. Progress: 22.0%.
 Training R^2: 0.8110. Test R^2: 0.8164. Loss Train: 0.0107. Loss Val: 0.0114. Time: 2.1437
 EPOCH 230. Progress: 23.0%.
 Training R^2: 0.8206. Test R^2: 0.8211. Loss Train: 0.0102. Loss Val: 0.0120. Time: 2.1805
 EPOCH 240. Progress: 24.0%.
 Training R^2: 0.8222. Test R^2: 0.8289. Loss Train: 0.0098. Loss Val: 0.0115. Time: 2.2140
 EPOCH 250. Progress: 25.0%.
 Training R^2: 0.8244. Test R^2: 0.8094. Loss Train: 0.0102. Loss Val: 0.0112. Time: 2.3777
 EPOCH 260. Progress: 26.0%.
 Training R^2: 0.8277. Test R^2: 0.8291. Loss Train: 0.0096. Loss Val: 0.0107. Time: 2.9095
 EPOCH 270. Progress: 27.0%.
 Training R^2: 0.8364. Test R^2: 0.8429. Loss Train: 0.0095. Loss Val: 0.0100. Time: 2.1717
 EPOCH 280. Progress: 28.000000000000004%.
 Training R^2: 0.8072. Test R^2: 0.8402. Loss Train: 0.0093. Loss Val: 0.0097. Time: 7.8986
 EPOCH 290. Progress: 28.999999999999996%.
 Training R^2: 0.8334. Test R^2: 0.8464. Loss Train: 0.0092. Loss Val: 0.0100. Time: 2.7538
 EPOCH 300. Progress: 30.0%.
 Training R^2: 0.7824. Test R^2: 0.7868. Loss Train: 0.0096. Loss Val: 0.0129. Time: 2.2980
 EPOCH 310. Progress: 31.0%.
 Training R^2: 0.8343. Test R^2: 0.8366. Loss Train: 0.0090. Loss Val: 0.0100. Time: 2.1131
 EPOCH 320. Progress: 32.0%.
 Training R^2: 0.8383. Test R^2: 0.8348. Loss Train: 0.0090. Loss Val: 0.0102. Time: 2.2003
 EPOCH 330. Progress: 33.0%.
 Training R^2: 0.8388. Test R^2: 0.8457. Loss Train: 0.0088. Loss Val: 0.0095. Time: 2.4573
 EPOCH 340. Progress: 34.0%.
 Training R^2: 0.8498. Test R^2: 0.8529. Loss Train: 0.0084. Loss Val: 0.0089. Time: 2.7186
 EPOCH 350. Progress: 35.0%.
 Training R^2: 0.8288. Test R^2: 0.8138. Loss Train: 0.0089. Loss Val: 0.0115. Time: 2.4576
 EPOCH 360. Progress: 36.0%.
 Training R^2: 0.8576. Test R^2: 0.8621. Loss Train: 0.0084. Loss Val: 0.0087. Time: 3.0517
 EPOCH 370. Progress: 37.0%.
 Training R^2: 0.8561. Test R^2: 0.8660. Loss Train: 0.0081. Loss Val: 0.0082. Time: 2.6274
 EPOCH 380. Progress: 38.0%.
 Training R^2: 0.8491. Test R^2: 0.8655. Loss Train: 0.0079. Loss Val: 0.0086. Time: 2.2206
 EPOCH 390. Progress: 39.0%.
 Training R^2: 0.8490. Test R^2: 0.8699. Loss Train: 0.0080. Loss Val: 0.0086. Time: 2.3447
 EPOCH 400. Progress: 40.0%.
 Training R^2: 0.8618. Test R^2: 0.8707. Loss Train: 0.0075. Loss Val: 0.0084. Time: 2.2453
 EPOCH 410. Progress: 41.0%.
 Training R^2: 0.8660. Test R^2: 0.8819. Loss Train: 0.0075. Loss Val: 0.0088. Time: 2.1638
 EPOCH 420. Progress: 42.0%.
 Training R^2: 0.8731. Test R^2: 0.8894. Loss Train: 0.0076. Loss Val: 0.0074. Time: 2.2099
 EPOCH 430. Progress: 43.0%.
 Training R^2: 0.8611. Test R^2: 0.8809. Loss Train: 0.0074. Loss Val: 0.0089. Time: 2.2619
 EPOCH 440. Progress: 44.0%.
 Training R^2: 0.8699. Test R^2: 0.8724. Loss Train: 0.0070. Loss Val: 0.0081. Time: 2.0963
 EPOCH 450. Progress: 45.0%.
 Training R^2: 0.8672. Test R^2: 0.8861. Loss Train: 0.0073. Loss Val: 0.0074. Time: 2.6645
 EPOCH 460. Progress: 46.0%.
 Training R^2: 0.8652. Test R^2: 0.8532. Loss Train: 0.0067. Loss Val: 0.0089. Time: 2.2519
 EPOCH 470. Progress: 47.0%.
 Training R^2: 0.8743. Test R^2: 0.8826. Loss Train: 0.0076. Loss Val: 0.0069. Time: 2.6056
 EPOCH 480. Progress: 48.0%.
 Training R^2: 0.8699. Test R^2: 0.8694. Loss Train: 0.0072. Loss Val: 0.0081. Time: 2.4231
 EPOCH 490. Progress: 49.0%.
 Training R^2: 0.8383. Test R^2: 0.8550. Loss Train: 0.0077. Loss Val: 0.0085. Time: 2.3354
 EPOCH 500. Progress: 50.0%.
 Training R^2: 0.8849. Test R^2: 0.8842. Loss Train: 0.0065. Loss Val: 0.0069. Time: 2.0191
 EPOCH 510. Progress: 51.0%.
 Training R^2: 0.8795. Test R^2: 0.8905. Loss Train: 0.0068. Loss Val: 0.0068. Time: 2.4990
 EPOCH 520. Progress: 52.0%.
 Training R^2: 0.8657. Test R^2: 0.8653. Loss Train: 0.0071. Loss Val: 0.0076. Time: 2.5995
 EPOCH 530. Progress: 53.0%.
 Training R^2: 0.8777. Test R^2: 0.8941. Loss Train: 0.0066. Loss Val: 0.0070. Time: 2.3459
 EPOCH 540. Progress: 54.0%.
 Training R^2: 0.8790. Test R^2: 0.8877. Loss Train: 0.0064. Loss Val: 0.0071. Time: 2.9594
 EPOCH 550. Progress: 55.00000000000001%.
 Training R^2: 0.8903. Test R^2: 0.8929. Loss Train: 0.0062. Loss Val: 0.0065. Time: 2.1561
 EPOCH 560. Progress: 56.00000000000001%.
 Training R^2: 0.8719. Test R^2: 0.8808. Loss Train: 0.0061. Loss Val: 0.0076. Time: 2.5248
 EPOCH 570. Progress: 56.99999999999999%.
 Training R^2: 0.8848. Test R^2: 0.8927. Loss Train: 0.0064. Loss Val: 0.0063. Time: 2.1494
 EPOCH 580. Progress: 57.99999999999999%.
 Training R^2: 0.8837. Test R^2: 0.8893. Loss Train: 0.0063. Loss Val: 0.0070. Time: 2.2030
 EPOCH 590. Progress: 59.0%.
 Training R^2: 0.8776. Test R^2: 0.9042. Loss Train: 0.0072. Loss Val: 0.0060. Time: 2.2955
 EPOCH 600. Progress: 60.0%.
 Training R^2: 0.8867. Test R^2: 0.8880. Loss Train: 0.0065. Loss Val: 0.0071. Time: 2.8852
 EPOCH 610. Progress: 61.0%.
 Training R^2: 0.8930. Test R^2: 0.8890. Loss Train: 0.0065. Loss Val: 0.0068. Time: 2.9015
 EPOCH 620. Progress: 62.0%.
 Training R^2: 0.8952. Test R^2: 0.9084. Loss Train: 0.0059. Loss Val: 0.0056. Time: 2.4803
 EPOCH 630. Progress: 63.0%.
 Training R^2: 0.8945. Test R^2: 0.9036. Loss Train: 0.0061. Loss Val: 0.0063. Time: 2.2777
 EPOCH 640. Progress: 64.0%.
 Training R^2: 0.8942. Test R^2: 0.8980. Loss Train: 0.0065. Loss Val: 0.0058. Time: 2.2503
 EPOCH 650. Progress: 65.0%.
 Training R^2: 0.9067. Test R^2: 0.9076. Loss Train: 0.0056. Loss Val: 0.0056. Time: 2.1684
 EPOCH 660. Progress: 66.0%.
 Training R^2: 0.8991. Test R^2: 0.8998. Loss Train: 0.0060. Loss Val: 0.0065. Time: 2.2702
 EPOCH 670. Progress: 67.0%.
 Training R^2: 0.8743. Test R^2: 0.8937. Loss Train: 0.0073. Loss Val: 0.0069. Time: 2.4311
 EPOCH 680. Progress: 68.0%.
 Training R^2: 0.8516. Test R^2: 0.8870. Loss Train: 0.0062. Loss Val: 0.0069. Time: 2.2977
 EPOCH 690. Progress: 69.0%.
 Training R^2: 0.8600. Test R^2: 0.8760. Loss Train: 0.0056. Loss Val: 0.0075. Time: 2.1302
 EPOCH 700. Progress: 70.0%.
 Training R^2: 0.9006. Test R^2: 0.9012. Loss Train: 0.0060. Loss Val: 0.0061. Time: 2.1501
 EPOCH 710. Progress: 71.0%.
 Training R^2: 0.9048. Test R^2: 0.9060. Loss Train: 0.0059. Loss Val: 0.0055. Time: 2.1127
 EPOCH 720. Progress: 72.0%.
 Training R^2: 0.9020. Test R^2: 0.8869. Loss Train: 0.0055. Loss Val: 0.0055. Time: 2.5087
 EPOCH 730. Progress: 73.0%.
 Training R^2: 0.8889. Test R^2: 0.9134. Loss Train: 0.0054. Loss Val: 0.0054. Time: 2.6028
 EPOCH 740. Progress: 74.0%.
 Training R^2: 0.9135. Test R^2: 0.9221. Loss Train: 0.0050. Loss Val: 0.0051. Time: 2.3192
 EPOCH 750. Progress: 75.0%.
 Training R^2: 0.9123. Test R^2: 0.9141. Loss Train: 0.0060. Loss Val: 0.0053. Time: 2.2378
 EPOCH 760. Progress: 76.0%.
 Training R^2: 0.9118. Test R^2: 0.9186. Loss Train: 0.0058. Loss Val: 0.0054. Time: 2.2381
 EPOCH 770. Progress: 77.0%.
 Training R^2: 0.9103. Test R^2: 0.9151. Loss Train: 0.0051. Loss Val: 0.0056. Time: 2.1798
 EPOCH 780. Progress: 78.0%.
 Training R^2: 0.8842. Test R^2: 0.8948. Loss Train: 0.0051. Loss Val: 0.0067. Time: 2.1070
 EPOCH 790. Progress: 79.0%.
 Training R^2: 0.9094. Test R^2: 0.9133. Loss Train: 0.0055. Loss Val: 0.0053. Time: 2.2978
 EPOCH 800. Progress: 80.0%.
 Training R^2: 0.9154. Test R^2: 0.9182. Loss Train: 0.0049. Loss Val: 0.0050. Time: 2.4846
 EPOCH 810. Progress: 81.0%.
 Training R^2: 0.9108. Test R^2: 0.9168. Loss Train: 0.0058. Loss Val: 0.0053. Time: 2.7128
 EPOCH 820. Progress: 82.0%.
 Training R^2: 0.9180. Test R^2: 0.9214. Loss Train: 0.0049. Loss Val: 0.0050. Time: 2.9778
 EPOCH 830. Progress: 83.0%.
 Training R^2: 0.8779. Test R^2: 0.9026. Loss Train: 0.0051. Loss Val: 0.0062. Time: 2.0989
 EPOCH 840. Progress: 84.0%.
 Training R^2: 0.9098. Test R^2: 0.9109. Loss Train: 0.0054. Loss Val: 0.0054. Time: 2.3217
 EPOCH 850. Progress: 85.0%.
 Training R^2: 0.9130. Test R^2: 0.9193. Loss Train: 0.0045. Loss Val: 0.0052. Time: 2.8466
 EPOCH 860. Progress: 86.0%.
 Training R^2: 0.9044. Test R^2: 0.9118. Loss Train: 0.0048. Loss Val: 0.0051. Time: 2.6478
 EPOCH 870. Progress: 87.0%.
 Training R^2: 0.9165. Test R^2: 0.9212. Loss Train: 0.0052. Loss Val: 0.0045. Time: 3.0131
 EPOCH 880. Progress: 88.0%.
 Training R^2: 0.9131. Test R^2: 0.9275. Loss Train: 0.0049. Loss Val: 0.0047. Time: 2.3367
 EPOCH 890. Progress: 89.0%.
 Training R^2: 0.8942. Test R^2: 0.8939. Loss Train: 0.0050. Loss Val: 0.0062. Time: 2.5193
 EPOCH 900. Progress: 90.0%.
 Training R^2: 0.9157. Test R^2: 0.9114. Loss Train: 0.0046. Loss Val: 0.0053. Time: 2.2089
 EPOCH 910. Progress: 91.0%.
 Training R^2: 0.9176. Test R^2: 0.9082. Loss Train: 0.0052. Loss Val: 0.0050. Time: 2.2816
 EPOCH 920. Progress: 92.0%.
 Training R^2: 0.9207. Test R^2: 0.9222. Loss Train: 0.0052. Loss Val: 0.0048. Time: 2.5538
 EPOCH 930. Progress: 93.0%.
 Training R^2: 0.9163. Test R^2: 0.9321. Loss Train: 0.0051. Loss Val: 0.0044. Time: 2.2491
 EPOCH 940. Progress: 94.0%.
 Training R^2: 0.9143. Test R^2: 0.8993. Loss Train: 0.0046. Loss Val: 0.0052. Time: 2.6589
 EPOCH 950. Progress: 95.0%.
 Training R^2: 0.9080. Test R^2: 0.9124. Loss Train: 0.0054. Loss Val: 0.0055. Time: 2.3593
 EPOCH 960. Progress: 96.0%.
 Training R^2: 0.9220. Test R^2: 0.9357. Loss Train: 0.0046. Loss Val: 0.0043. Time: 2.2051
 EPOCH 970. Progress: 97.0%.
 Training R^2: 0.9123. Test R^2: 0.9244. Loss Train: 0.0044. Loss Val: 0.0049. Time: 2.2814
 EPOCH 980. Progress: 98.0%.
 Training R^2: 0.9086. Test R^2: 0.9219. Loss Train: 0.0049. Loss Val: 0.0050. Time: 2.3907
 EPOCH 990. Progress: 99.0%.
 Training R^2: 0.9198. Test R^2: 0.9207. Loss Train: 0.0051. Loss Val: 0.0048. Time: 2.3489
 EPOCH 1000. Progress: 100.0%.
 Training R^2: 0.9296. Test R^2: 0.9287. Loss Train: 0.0047. Loss Val: 0.0042. Time: 2.2467
../_images/CASESTUDY_NN-day1_22_3.png
saving  ./plots/LONGER_ADAM_FFNet_ReLU_lr0.001_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png
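The two 1000-epoch Adam runs above differ only in learning rate: lr=0.01 plateaus around a test R^2 of 0.77, while lr=0.001 climbs to roughly 0.93. The network itself is tiny; as a quick sanity check (a minimal sketch, assuming `model` still refers to the 47-64-1 FeedforwardNeuralNetModel printed above), the trainable parameters can be counted directly:

# 47*64 + 64 (fc1) plus 64*1 + 1 (fc2) = 3137 trainable parameters
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(n_params)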
[29]:
# Predictions on the validation split ('all_val', 335 samples)
with torch.no_grad():
    data, targets_val = next(iter(dataloaders['all_val']))
    model_input = data.to(device)  # move the feature batch to the active device
    predicted_val = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted_val.shape))
    print('predicted[:20]: \t{}'.format(predicted_val[:20]))
    print('targets[:20]: \t\t{}'.format(targets_val[:20]))

# Predictions on the 'test' loader, which here covers the whole dataset (1118 samples)
with torch.no_grad():
    data, targets = next(iter(dataloaders['test']))
    model_input = data.to(device)  # move the feature batch to the active device
    predicted = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted.shape))
    print('predicted[:20]: \t{}'.format(predicted[:20]))
    print('targets[:20]: \t\t{}'.format(targets[:20]))

# Plot predictions against true values and save the figure
path_to_save = './plots'
if not os.path.exists(path_to_save):
    os.makedirs(path_to_save)

fig, (ax1,ax2) = plt.subplots(1,2, sharey=True)
r = r2_score(targets_val, predicted_val.cpu())
ax1.scatter(targets_val, predicted_val.cpu(),alpha=0.5, label='$R^2$ = %.3f' % (r))
ax1.legend(loc="upper left")
ax1.set_xlabel('True Values [Mean Conc.]')
ax1.set_ylabel('Predictions [Mean Conc.]')
ax1.axis('equal')
ax1.axis('square')
ax1.set_xlim([0,1])
ax1.set_ylim([0,1])
_ = ax1.plot([-100, 100], [-100, 100], 'r:')
ax1.set_title('Test dataset')
fig.set_figheight(30)
fig.set_figwidth(10)
#plt.show()
#plt.close('all')

#Whole dataset
r = r2_score(targets, predicted.cpu())
ax2.scatter(targets, predicted.cpu(), alpha=0.5, label='$R^2$ = %.3f' % (r))
ax2.legend(loc="upper left")
ax2.set_xlabel('True Values [Mean Conc.]')
ax2.set_ylabel('Predictions [Mean Conc.]')
ax2.axis('equal')
ax2.axis('square')
ax2.set_xlim([0,1])
ax2.set_ylim([0,1])
_ = ax2.plot([-100, 100], [-100, 100], 'r:')
ax2.set_title('Whole dataset')
# plt.show()
fig.savefig(os.path.join(path_to_save, 'FFNet_ADAM_LONGER_R2Score_' + config_str + '.png'), bbox_inches='tight')
# #plt.close('all')
predicted.shape: torch.Size([335])
predicted[:20]:         tensor([0.3208, 0.7937, 0.8534, 0.3528, 0.4023, 0.1810, 0.3304, 0.0766, 0.8246,
        0.6655, 0.2339, 0.2251, 0.4263, 0.6891, 0.7257, 0.3355, 0.6974, 0.3572,
        0.8831, 0.2559])
targets[:20]:           tensor([0.8422, 0.7139, 0.8761, 0.5382, 0.2834, 0.1301, 0.3854, 0.2266, 0.8438,
        0.6774, 0.3873, 0.4123, 0.4433, 0.8151, 0.7777, 0.3105, 0.6314, 0.3211,
        0.8954, 0.2512])
predicted.shape: torch.Size([1118])
predicted[:20]:         tensor([0.0556, 0.1810, 0.2741, 0.1903, 0.2008, 0.2848, 0.2288, 0.3241, 0.5576,
        0.5037, 0.4970, 0.5641, 0.5313, 0.5002, 0.4886, 0.4925, 0.4925, 0.4964,
        0.5011, 0.4783])
targets[:20]:           tensor([0.0280, 0.1301, 0.2970, 0.2207, 0.2648, 0.3342, 0.2873, 0.3362, 0.6023,
        0.5031, 0.5335, 0.6247, 0.5454, 0.4926, 0.4442, 0.5030, 0.5472, 0.5128,
        0.5702, 0.5017])
../_images/CASESTUDY_NN-day1_23_1.png
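Both panels report the coefficient of determination next to the prediction-vs-target scatter. As a reminder of what r2_score computes: R^2 = 1 - SS_res / SS_tot. A minimal, self-contained check on made-up arrays (not the notebook's data):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([0.84, 0.71, 0.88, 0.54, 0.28])
y_pred = np.array([0.32, 0.79, 0.85, 0.35, 0.40])

ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
print(1.0 - ss_res / ss_tot)                    # matches r2_score(y_true, y_pred)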
[30]:
# Testing a deeper feed-forward network: two hidden layers with dropout
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # First hidden layer
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.drop = nn.Dropout(0.2)  # dropout for regularization
        # Non-linearity
        self.relu = nn.ReLU()
        # Second hidden layer and linear readout
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Hidden layer 1 -> ReLU -> dropout, hidden layer 2 -> ReLU -> dropout, readout
        out = self.relu(self.fc1(x))
        out = self.drop(out)
        out = self.relu(self.fc2(out))
        out = self.drop(out)
        out = self.fc3(out)
        return out

# Sweep learning rate and hidden dimension around the best configuration found above
lr_range = [0.001,0.01]
hid_dim_range = [64,128]
weight_decay_range = [0.001]
momentum_range = [0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    for hid_dim in hid_dim_range:
                        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                        model = FeedforwardNeuralNetModel(input_dim=47,hidden_dim=hid_dim, output_dim=1).to(device)
                        print(model)
                        optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam is configured with lr only; the momentum/weight_decay/dampening/nesterov values above are not passed to it
                        loss_fn = nn.MSELoss().to(device)
                        config_str = 'lr' + str(lr) + '_3FFNet_2ReLU_Drop_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) + '_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
                        train(model, loss_fn, optimizer, dataloaders['train'], dataloaders['val'], config_str, num_epochs=500, verbose=False)

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (drop): Dropout(p=0.2, inplace=False)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=64, bias=True)
  (fc3): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: -0.1116. Test R^2: -0.0033. Loss Train: 0.1149. Loss Val: 0.0621. Time: 3.7105
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4195. Test R^2: 0.3340. Loss Train: 0.0315. Loss Val: 0.0411. Time: 3.9621
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.4887. Test R^2: 0.4492. Loss Train: 0.0295. Loss Val: 0.0366. Time: 2.6475
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5252. Test R^2: 0.4602. Loss Train: 0.0255. Loss Val: 0.0329. Time: 2.6859
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5288. Test R^2: 0.5088. Loss Train: 0.0222. Loss Val: 0.0307. Time: 2.4811
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.6128. Test R^2: 0.5821. Loss Train: 0.0221. Loss Val: 0.0285. Time: 3.0067
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.6259. Test R^2: 0.5806. Loss Train: 0.0201. Loss Val: 0.0258. Time: 2.9824
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.6326. Test R^2: 0.5524. Loss Train: 0.0203. Loss Val: 0.0254. Time: 2.6254
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.6316. Test R^2: 0.5922. Loss Train: 0.0191. Loss Val: 0.0237. Time: 2.5129
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.6932. Test R^2: 0.6243. Loss Train: 0.0190. Loss Val: 0.0234. Time: 2.6534
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.6969. Test R^2: 0.6793. Loss Train: 0.0180. Loss Val: 0.0214. Time: 2.6627
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.6888. Test R^2: 0.6659. Loss Train: 0.0169. Loss Val: 0.0210. Time: 3.4999
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.6329. Test R^2: 0.6819. Loss Train: 0.0166. Loss Val: 0.0195. Time: 2.8489
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.6987. Test R^2: 0.6835. Loss Train: 0.0159. Loss Val: 0.0195. Time: 2.9676
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.7195. Test R^2: 0.7354. Loss Train: 0.0160. Loss Val: 0.0204. Time: 2.9275
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.7237. Test R^2: 0.7208. Loss Train: 0.0162. Loss Val: 0.0169. Time: 3.5524
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.7257. Test R^2: 0.7492. Loss Train: 0.0150. Loss Val: 0.0178. Time: 3.0206
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.7380. Test R^2: 0.7035. Loss Train: 0.0151. Loss Val: 0.0180. Time: 2.7954
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.7294. Test R^2: 0.7261. Loss Train: 0.0161. Loss Val: 0.0159. Time: 2.8494
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.7208. Test R^2: 0.7466. Loss Train: 0.0134. Loss Val: 0.0169. Time: 2.8734
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.7529. Test R^2: 0.7552. Loss Train: 0.0148. Loss Val: 0.0142. Time: 2.9881
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.7380. Test R^2: 0.7536. Loss Train: 0.0139. Loss Val: 0.0154. Time: 2.8477
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.7237. Test R^2: 0.7282. Loss Train: 0.0132. Loss Val: 0.0145. Time: 3.3642
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.7590. Test R^2: 0.7601. Loss Train: 0.0134. Loss Val: 0.0149. Time: 2.9623
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.7489. Test R^2: 0.7782. Loss Train: 0.0136. Loss Val: 0.0144. Time: 3.0059
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.7542. Test R^2: 0.7567. Loss Train: 0.0130. Loss Val: 0.0141. Time: 3.2227
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.7870. Test R^2: 0.7698. Loss Train: 0.0117. Loss Val: 0.0132. Time: 2.9511
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.7815. Test R^2: 0.7898. Loss Train: 0.0122. Loss Val: 0.0137. Time: 2.7537
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.7869. Test R^2: 0.7865. Loss Train: 0.0121. Loss Val: 0.0133. Time: 2.8151
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.7609. Test R^2: 0.7872. Loss Train: 0.0118. Loss Val: 0.0148. Time: 4.0698
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.7668. Test R^2: 0.7948. Loss Train: 0.0111. Loss Val: 0.0129. Time: 2.7746
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.7591. Test R^2: 0.7990. Loss Train: 0.0119. Loss Val: 0.0133. Time: 3.1289
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.7815. Test R^2: 0.7622. Loss Train: 0.0104. Loss Val: 0.0137. Time: 2.6921
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.7867. Test R^2: 0.7924. Loss Train: 0.0121. Loss Val: 0.0122. Time: 2.5199
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.7781. Test R^2: 0.7896. Loss Train: 0.0113. Loss Val: 0.0145. Time: 3.1129
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.7975. Test R^2: 0.8189. Loss Train: 0.0113. Loss Val: 0.0116. Time: 3.3524
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.8009. Test R^2: 0.8176. Loss Train: 0.0105. Loss Val: 0.0120. Time: 2.6168
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.7851. Test R^2: 0.8166. Loss Train: 0.0100. Loss Val: 0.0113. Time: 2.9490
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.7779. Test R^2: 0.7722. Loss Train: 0.0105. Loss Val: 0.0125. Time: 2.7381
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.8053. Test R^2: 0.8106. Loss Train: 0.0106. Loss Val: 0.0120. Time: 2.9906
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.8104. Test R^2: 0.8018. Loss Train: 0.0119. Loss Val: 0.0122. Time: 2.8241
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.8031. Test R^2: 0.8322. Loss Train: 0.0120. Loss Val: 0.0124. Time: 2.8660
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.8009. Test R^2: 0.8090. Loss Train: 0.0110. Loss Val: 0.0111. Time: 2.7802
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.7885. Test R^2: 0.7977. Loss Train: 0.0115. Loss Val: 0.0136. Time: 2.6376
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.8036. Test R^2: 0.7974. Loss Train: 0.0099. Loss Val: 0.0107. Time: 2.7612
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.8133. Test R^2: 0.8169. Loss Train: 0.0114. Loss Val: 0.0128. Time: 2.8415
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.8185. Test R^2: 0.8351. Loss Train: 0.0112. Loss Val: 0.0101. Time: 2.5918
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.8342. Test R^2: 0.8323. Loss Train: 0.0104. Loss Val: 0.0105. Time: 2.7809
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.8234. Test R^2: 0.8328. Loss Train: 0.0098. Loss Val: 0.0099. Time: 3.1542
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.8127. Test R^2: 0.8577. Loss Train: 0.0106. Loss Val: 0.0114. Time: 2.5527
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.8027. Test R^2: 0.8241. Loss Train: 0.0110. Loss Val: 0.0107. Time: 2.7262
../_images/CASESTUDY_NN-day1_24_1.png
saving  ./plots/lr0.001_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (drop): Dropout(p=0.2, inplace=False)
  (relu): ReLU()
  (fc2): Linear(in_features=128, out_features=128, bias=True)
  (fc3): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.1006. Test R^2: 0.1840. Loss Train: 0.0998. Loss Val: 0.0517. Time: 2.6534
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4191. Test R^2: 0.3644. Loss Train: 0.0283. Loss Val: 0.0408. Time: 2.6442
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5341. Test R^2: 0.5072. Loss Train: 0.0266. Loss Val: 0.0310. Time: 2.4408
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5715. Test R^2: 0.5125. Loss Train: 0.0233. Loss Val: 0.0297. Time: 2.9272
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.6088. Test R^2: 0.6030. Loss Train: 0.0214. Loss Val: 0.0256. Time: 2.6961
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.6220. Test R^2: 0.6237. Loss Train: 0.0203. Loss Val: 0.0232. Time: 2.7807
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.6905. Test R^2: 0.6549. Loss Train: 0.0188. Loss Val: 0.0208. Time: 2.6271
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.6881. Test R^2: 0.6740. Loss Train: 0.0175. Loss Val: 0.0180. Time: 2.8633
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.7055. Test R^2: 0.7114. Loss Train: 0.0173. Loss Val: 0.0183. Time: 2.6832
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.6565. Test R^2: 0.6804. Loss Train: 0.0156. Loss Val: 0.0210. Time: 2.5049
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.7215. Test R^2: 0.7378. Loss Train: 0.0148. Loss Val: 0.0161. Time: 2.4777
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.7345. Test R^2: 0.7626. Loss Train: 0.0147. Loss Val: 0.0148. Time: 2.8506
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.7580. Test R^2: 0.7546. Loss Train: 0.0129. Loss Val: 0.0150. Time: 2.8920
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.7640. Test R^2: 0.7900. Loss Train: 0.0144. Loss Val: 0.0148. Time: 2.6616
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.6779. Test R^2: 0.7368. Loss Train: 0.0135. Loss Val: 0.0162. Time: 3.2868
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.7856. Test R^2: 0.8021. Loss Train: 0.0132. Loss Val: 0.0123. Time: 3.0104
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.8018. Test R^2: 0.7938. Loss Train: 0.0121. Loss Val: 0.0133. Time: 2.5466
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.7795. Test R^2: 0.8494. Loss Train: 0.0122. Loss Val: 0.0118. Time: 3.2550
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.7942. Test R^2: 0.8183. Loss Train: 0.0107. Loss Val: 0.0119. Time: 2.9926
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.8078. Test R^2: 0.8063. Loss Train: 0.0111. Loss Val: 0.0112. Time: 4.9563
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.7939. Test R^2: 0.8080. Loss Train: 0.0121. Loss Val: 0.0113. Time: 2.5114
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.7413. Test R^2: 0.6860. Loss Train: 0.0116. Loss Val: 0.0165. Time: 2.8041
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.8138. Test R^2: 0.8304. Loss Train: 0.0115. Loss Val: 0.0115. Time: 2.5076
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.8340. Test R^2: 0.8408. Loss Train: 0.0104. Loss Val: 0.0111. Time: 2.7833
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.7997. Test R^2: 0.8122. Loss Train: 0.0099. Loss Val: 0.0109. Time: 4.3409
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.8180. Test R^2: 0.8270. Loss Train: 0.0110. Loss Val: 0.0100. Time: 2.4696
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.8192. Test R^2: 0.8465. Loss Train: 0.0094. Loss Val: 0.0097. Time: 3.0370
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.8276. Test R^2: 0.8379. Loss Train: 0.0084. Loss Val: 0.0094. Time: 2.6946
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.8436. Test R^2: 0.8470. Loss Train: 0.0090. Loss Val: 0.0092. Time: 2.5102
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.8386. Test R^2: 0.8624. Loss Train: 0.0089. Loss Val: 0.0087. Time: 2.8042
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.8205. Test R^2: 0.8548. Loss Train: 0.0090. Loss Val: 0.0095. Time: 3.0341
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.8296. Test R^2: 0.8491. Loss Train: 0.0094. Loss Val: 0.0094. Time: 3.0174
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.8443. Test R^2: 0.8615. Loss Train: 0.0087. Loss Val: 0.0081. Time: 2.4216
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.8441. Test R^2: 0.8643. Loss Train: 0.0088. Loss Val: 0.0092. Time: 2.8313
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.8522. Test R^2: 0.8641. Loss Train: 0.0083. Loss Val: 0.0081. Time: 2.2324
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.8495. Test R^2: 0.8272. Loss Train: 0.0082. Loss Val: 0.0086. Time: 2.4278
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.8395. Test R^2: 0.8802. Loss Train: 0.0080. Loss Val: 0.0088. Time: 2.2403
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.8253. Test R^2: 0.8545. Loss Train: 0.0079. Loss Val: 0.0095. Time: 2.3689
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.8380. Test R^2: 0.8688. Loss Train: 0.0092. Loss Val: 0.0078. Time: 2.3288
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.8439. Test R^2: 0.8692. Loss Train: 0.0076. Loss Val: 0.0073. Time: 2.3230
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.8536. Test R^2: 0.8641. Loss Train: 0.0081. Loss Val: 0.0084. Time: 2.2428
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.8551. Test R^2: 0.8414. Loss Train: 0.0082. Loss Val: 0.0090. Time: 2.3062
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.8601. Test R^2: 0.8843. Loss Train: 0.0081. Loss Val: 0.0078. Time: 2.2651
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.8603. Test R^2: 0.8657. Loss Train: 0.0079. Loss Val: 0.0080. Time: 3.3355
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.8760. Test R^2: 0.8742. Loss Train: 0.0078. Loss Val: 0.0076. Time: 2.5091
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.8422. Test R^2: 0.8567. Loss Train: 0.0083. Loss Val: 0.0077. Time: 2.9181
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.8607. Test R^2: 0.8852. Loss Train: 0.0071. Loss Val: 0.0080. Time: 2.8883
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.8590. Test R^2: 0.8947. Loss Train: 0.0075. Loss Val: 0.0085. Time: 2.7605
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.8684. Test R^2: 0.8668. Loss Train: 0.0085. Loss Val: 0.0081. Time: 2.6697
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.8751. Test R^2: 0.8838. Loss Train: 0.0067. Loss Val: 0.0071. Time: 3.1223
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.8764. Test R^2: 0.8946. Loss Train: 0.0068. Loss Val: 0.0065. Time: 2.5091
../_images/CASESTUDY_NN-day1_24_3.png
saving  ./plots/lr0.001_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
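At lr=0.001 with 128 hidden units, the validation loss is still falling at epoch 500 (test R^2 ≈ 0.89), which is what motivates retraining this configuration for 1000 epochs after this sweep. Since the curves are noisy, in practice it also pays to keep the weights with the lowest validation loss rather than the final ones; a hypothetical bookkeeping helper (not part of the train() function used here) could look like:

import copy

def track_best(model, val_loss, best_loss, best_weights):
    # Call once per epoch; keeps the state_dict with the lowest val loss.
    if val_loss < best_loss:
        return val_loss, copy.deepcopy(model.state_dict())
    return best_loss, best_weights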

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=64, bias=True)
  (drop): Dropout(p=0.2, inplace=False)
  (relu): ReLU()
  (fc2): Linear(in_features=64, out_features=64, bias=True)
  (fc3): Linear(in_features=64, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.01
    weight_decay: 0
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.2998. Test R^2: 0.2777. Loss Train: 0.0643. Loss Val: 0.0466. Time: 2.5134
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.3839. Test R^2: 0.2695. Loss Train: 0.0285. Loss Val: 0.0448. Time: 2.4105
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5141. Test R^2: 0.3926. Loss Train: 0.0257. Loss Val: 0.0333. Time: 2.2853
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.5089. Test R^2: 0.4398. Loss Train: 0.0310. Loss Val: 0.0376. Time: 2.2108
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5461. Test R^2: 0.4677. Loss Train: 0.0254. Loss Val: 0.0338. Time: 2.2169
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.5158. Test R^2: 0.4651. Loss Train: 0.0260. Loss Val: 0.0338. Time: 2.3295
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5175. Test R^2: 0.4284. Loss Train: 0.0277. Loss Val: 0.0356. Time: 2.4124
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.4871. Test R^2: 0.5046. Loss Train: 0.0260. Loss Val: 0.0314. Time: 2.2909
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.4907. Test R^2: 0.4062. Loss Train: 0.0264. Loss Val: 0.0349. Time: 2.5873
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5500. Test R^2: 0.5296. Loss Train: 0.0263. Loss Val: 0.0316. Time: 2.6759
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5196. Test R^2: 0.5304. Loss Train: 0.0245. Loss Val: 0.0320. Time: 2.1618
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5000. Test R^2: 0.5130. Loss Train: 0.0256. Loss Val: 0.0318. Time: 2.2579
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.5664. Test R^2: 0.5156. Loss Train: 0.0251. Loss Val: 0.0288. Time: 2.3752
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5466. Test R^2: 0.5108. Loss Train: 0.0237. Loss Val: 0.0312. Time: 3.1384
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.5116. Test R^2: 0.5307. Loss Train: 0.0233. Loss Val: 0.0296. Time: 2.8697
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.5299. Test R^2: 0.5219. Loss Train: 0.0258. Loss Val: 0.0288. Time: 3.5238
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5890. Test R^2: 0.5202. Loss Train: 0.0226. Loss Val: 0.0265. Time: 2.9057
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.5342. Test R^2: 0.5756. Loss Train: 0.0237. Loss Val: 0.0267. Time: 2.3796
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.5633. Test R^2: 0.5595. Loss Train: 0.0236. Loss Val: 0.0295. Time: 2.3007
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.6085. Test R^2: 0.5920. Loss Train: 0.0224. Loss Val: 0.0285. Time: 3.9387
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5341. Test R^2: 0.4958. Loss Train: 0.0263. Loss Val: 0.0307. Time: 2.5295
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5787. Test R^2: 0.5873. Loss Train: 0.0218. Loss Val: 0.0267. Time: 2.5182
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.4903. Test R^2: 0.4651. Loss Train: 0.0236. Loss Val: 0.0370. Time: 3.1504
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5705. Test R^2: 0.5264. Loss Train: 0.0220. Loss Val: 0.0271. Time: 2.7024
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.6059. Test R^2: 0.6056. Loss Train: 0.0224. Loss Val: 0.0277. Time: 2.4748
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.6056. Test R^2: 0.5032. Loss Train: 0.0241. Loss Val: 0.0265. Time: 2.9292
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.5261. Test R^2: 0.5335. Loss Train: 0.0233. Loss Val: 0.0308. Time: 2.0955
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.6430. Test R^2: 0.6282. Loss Train: 0.0228. Loss Val: 0.0250. Time: 2.1221
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.6421. Test R^2: 0.6347. Loss Train: 0.0222. Loss Val: 0.0253. Time: 2.0285
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.5530. Test R^2: 0.4413. Loss Train: 0.0223. Loss Val: 0.0317. Time: 2.0783
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.6043. Test R^2: 0.5717. Loss Train: 0.0240. Loss Val: 0.0274. Time: 1.9601
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5727. Test R^2: 0.6019. Loss Train: 0.0224. Loss Val: 0.0250. Time: 2.1326
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.6096. Test R^2: 0.5911. Loss Train: 0.0213. Loss Val: 0.0237. Time: 2.0559
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.5851. Test R^2: 0.5921. Loss Train: 0.0218. Loss Val: 0.0247. Time: 1.9013
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.6065. Test R^2: 0.6318. Loss Train: 0.0215. Loss Val: 0.0257. Time: 2.0601
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.6115. Test R^2: 0.6292. Loss Train: 0.0225. Loss Val: 0.0235. Time: 2.1107
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5679. Test R^2: 0.5540. Loss Train: 0.0210. Loss Val: 0.0267. Time: 1.9785
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.5849. Test R^2: 0.5736. Loss Train: 0.0219. Loss Val: 0.0277. Time: 1.9948
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.6078. Test R^2: 0.5650. Loss Train: 0.0211. Loss Val: 0.0259. Time: 2.1080
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5616. Test R^2: 0.4989. Loss Train: 0.0224. Loss Val: 0.0277. Time: 2.2105
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.5890. Test R^2: 0.5888. Loss Train: 0.0213. Loss Val: 0.0264. Time: 2.3170
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.5709. Test R^2: 0.5842. Loss Train: 0.0227. Loss Val: 0.0261. Time: 3.8046
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.6271. Test R^2: 0.6695. Loss Train: 0.0193. Loss Val: 0.0230. Time: 2.8342
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5913. Test R^2: 0.5732. Loss Train: 0.0222. Loss Val: 0.0228. Time: 2.0817
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.5846. Test R^2: 0.6495. Loss Train: 0.0202. Loss Val: 0.0256. Time: 2.2084
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5689. Test R^2: 0.5825. Loss Train: 0.0217. Loss Val: 0.0224. Time: 2.0139
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.6162. Test R^2: 0.5900. Loss Train: 0.0208. Loss Val: 0.0261. Time: 2.0481
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.6126. Test R^2: 0.6053. Loss Train: 0.0209. Loss Val: 0.0300. Time: 2.1299
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.6341. Test R^2: 0.6465. Loss Train: 0.0208. Loss Val: 0.0225. Time: 2.2350
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5622. Test R^2: 0.5931. Loss Train: 0.0238. Loss Val: 0.0257. Time: 2.1152
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.6081. Test R^2: 0.5442. Loss Train: 0.0231. Loss Val: 0.0280. Time: 2.0840
../_images/CASESTUDY_NN-day1_24_5.png
saving  ./plots/lr0.01_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim64.png
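With lr=0.01 the dropout network never escapes the 0.55-0.65 R^2 band; the step size is simply too large for this loss surface. If one wanted the fast early progress of a high learning rate anyway, a decaying schedule is the usual compromise. A minimal sketch (hypothetical, not used by the train() helper in this notebook):

from torch import nn, optim

model_sketch = nn.Linear(47, 1)  # stand-in model for the sketch
optimizer = optim.Adam(model_sketch.parameters(), lr=0.01)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

for epoch in range(500):
    optimizer.step()   # stands in for the usual forward/backward pass
    scheduler.step()   # halves the learning rate every 100 epochs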

lr: 0.01, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (drop): Dropout(p=0.2, inplace=False)
  (relu): ReLU()
  (fc2): Linear(in_features=128, out_features=128, bias=True)
  (fc3): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.01
    weight_decay: 0
)
n. of epochs: 500
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.3500. Test R^2: 0.2949. Loss Train: 0.0595. Loss Val: 0.0439. Time: 1.9680
 EPOCH 10. Progress: 2.0%.
 Training R^2: 0.4798. Test R^2: 0.4110. Loss Train: 0.0307. Loss Val: 0.0393. Time: 2.0753
 EPOCH 20. Progress: 4.0%.
 Training R^2: 0.5034. Test R^2: 0.4412. Loss Train: 0.0277. Loss Val: 0.0351. Time: 2.0036
 EPOCH 30. Progress: 6.0%.
 Training R^2: 0.4865. Test R^2: 0.4305. Loss Train: 0.0260. Loss Val: 0.0345. Time: 2.0027
 EPOCH 40. Progress: 8.0%.
 Training R^2: 0.5258. Test R^2: 0.4015. Loss Train: 0.0267. Loss Val: 0.0349. Time: 2.0814
 EPOCH 50. Progress: 10.0%.
 Training R^2: 0.5195. Test R^2: 0.4381. Loss Train: 0.0278. Loss Val: 0.0341. Time: 2.3265
 EPOCH 60. Progress: 12.0%.
 Training R^2: 0.5268. Test R^2: 0.4469. Loss Train: 0.0276. Loss Val: 0.0320. Time: 2.3129
 EPOCH 70. Progress: 14.000000000000002%.
 Training R^2: 0.5594. Test R^2: 0.4859. Loss Train: 0.0252. Loss Val: 0.0318. Time: 2.7331
 EPOCH 80. Progress: 16.0%.
 Training R^2: 0.4903. Test R^2: 0.4379. Loss Train: 0.0282. Loss Val: 0.0321. Time: 2.3010
 EPOCH 90. Progress: 18.0%.
 Training R^2: 0.5962. Test R^2: 0.5093. Loss Train: 0.0269. Loss Val: 0.0277. Time: 2.2278
 EPOCH 100. Progress: 20.0%.
 Training R^2: 0.5259. Test R^2: 0.5340. Loss Train: 0.0259. Loss Val: 0.0298. Time: 2.5875
 EPOCH 110. Progress: 22.0%.
 Training R^2: 0.5319. Test R^2: 0.4856. Loss Train: 0.0232. Loss Val: 0.0296. Time: 2.3185
 EPOCH 120. Progress: 24.0%.
 Training R^2: 0.4807. Test R^2: 0.4518. Loss Train: 0.0237. Loss Val: 0.0376. Time: 2.7557
 EPOCH 130. Progress: 26.0%.
 Training R^2: 0.5671. Test R^2: 0.5112. Loss Train: 0.0233. Loss Val: 0.0319. Time: 2.0406
 EPOCH 140. Progress: 28.000000000000004%.
 Training R^2: 0.5711. Test R^2: 0.5372. Loss Train: 0.0236. Loss Val: 0.0286. Time: 2.0366
 EPOCH 150. Progress: 30.0%.
 Training R^2: 0.5868. Test R^2: 0.5373. Loss Train: 0.0244. Loss Val: 0.0269. Time: 2.0286
 EPOCH 160. Progress: 32.0%.
 Training R^2: 0.5343. Test R^2: 0.5735. Loss Train: 0.0233. Loss Val: 0.0254. Time: 2.0422
 EPOCH 170. Progress: 34.0%.
 Training R^2: 0.6077. Test R^2: 0.5212. Loss Train: 0.0224. Loss Val: 0.0267. Time: 2.4437
 EPOCH 180. Progress: 36.0%.
 Training R^2: 0.5493. Test R^2: 0.5407. Loss Train: 0.0235. Loss Val: 0.0273. Time: 2.1331
 EPOCH 190. Progress: 38.0%.
 Training R^2: 0.5637. Test R^2: 0.5632. Loss Train: 0.0225. Loss Val: 0.0247. Time: 2.1259
 EPOCH 200. Progress: 40.0%.
 Training R^2: 0.5241. Test R^2: 0.4822. Loss Train: 0.0235. Loss Val: 0.0334. Time: 1.8994
 EPOCH 210. Progress: 42.0%.
 Training R^2: 0.5772. Test R^2: 0.5895. Loss Train: 0.0223. Loss Val: 0.0265. Time: 2.5330
 EPOCH 220. Progress: 44.0%.
 Training R^2: 0.5771. Test R^2: 0.5445. Loss Train: 0.0233. Loss Val: 0.0283. Time: 1.9817
 EPOCH 230. Progress: 46.0%.
 Training R^2: 0.5913. Test R^2: 0.6073. Loss Train: 0.0248. Loss Val: 0.0274. Time: 1.9013
 EPOCH 240. Progress: 48.0%.
 Training R^2: 0.5661. Test R^2: 0.5790. Loss Train: 0.0231. Loss Val: 0.0256. Time: 2.0631
 EPOCH 250. Progress: 50.0%.
 Training R^2: 0.5597. Test R^2: 0.5917. Loss Train: 0.0219. Loss Val: 0.0281. Time: 2.0783
 EPOCH 260. Progress: 52.0%.
 Training R^2: 0.6261. Test R^2: 0.6122. Loss Train: 0.0212. Loss Val: 0.0266. Time: 2.0385
 EPOCH 270. Progress: 54.0%.
 Training R^2: 0.5363. Test R^2: 0.4878. Loss Train: 0.0241. Loss Val: 0.0287. Time: 2.4742
 EPOCH 280. Progress: 56.00000000000001%.
 Training R^2: 0.5605. Test R^2: 0.5216. Loss Train: 0.0228. Loss Val: 0.0308. Time: 2.0517
 EPOCH 290. Progress: 57.99999999999999%.
 Training R^2: 0.5725. Test R^2: 0.5658. Loss Train: 0.0215. Loss Val: 0.0254. Time: 2.1625
 EPOCH 300. Progress: 60.0%.
 Training R^2: 0.5841. Test R^2: 0.5739. Loss Train: 0.0227. Loss Val: 0.0239. Time: 2.0216
 EPOCH 310. Progress: 62.0%.
 Training R^2: 0.5970. Test R^2: 0.5840. Loss Train: 0.0258. Loss Val: 0.0264. Time: 1.9824
 EPOCH 320. Progress: 64.0%.
 Training R^2: 0.5970. Test R^2: 0.5848. Loss Train: 0.0210. Loss Val: 0.0286. Time: 2.4123
 EPOCH 330. Progress: 66.0%.
 Training R^2: 0.5824. Test R^2: 0.5670. Loss Train: 0.0225. Loss Val: 0.0268. Time: 2.2330
 EPOCH 340. Progress: 68.0%.
 Training R^2: 0.5783. Test R^2: 0.5184. Loss Train: 0.0204. Loss Val: 0.0268. Time: 2.1709
 EPOCH 350. Progress: 70.0%.
 Training R^2: 0.5371. Test R^2: 0.4856. Loss Train: 0.0262. Loss Val: 0.0311. Time: 2.0219
 EPOCH 360. Progress: 72.0%.
 Training R^2: 0.5605. Test R^2: 0.4832. Loss Train: 0.0221. Loss Val: 0.0295. Time: 2.0305
 EPOCH 370. Progress: 74.0%.
 Training R^2: 0.5855. Test R^2: 0.5614. Loss Train: 0.0216. Loss Val: 0.0264. Time: 2.3363
 EPOCH 380. Progress: 76.0%.
 Training R^2: 0.6143. Test R^2: 0.5463. Loss Train: 0.0218. Loss Val: 0.0280. Time: 3.0703
 EPOCH 390. Progress: 78.0%.
 Training R^2: 0.5974. Test R^2: 0.5755. Loss Train: 0.0276. Loss Val: 0.0264. Time: 2.8149
 EPOCH 400. Progress: 80.0%.
 Training R^2: 0.6077. Test R^2: 0.6129. Loss Train: 0.0216. Loss Val: 0.0263. Time: 1.9911
 EPOCH 410. Progress: 82.0%.
 Training R^2: 0.6044. Test R^2: 0.6259. Loss Train: 0.0217. Loss Val: 0.0260. Time: 2.0055
 EPOCH 420. Progress: 84.0%.
 Training R^2: 0.6007. Test R^2: 0.5875. Loss Train: 0.0213. Loss Val: 0.0248. Time: 1.9980
 EPOCH 430. Progress: 86.0%.
 Training R^2: 0.5965. Test R^2: 0.5867. Loss Train: 0.0229. Loss Val: 0.0257. Time: 1.8969
 EPOCH 440. Progress: 88.0%.
 Training R^2: 0.6066. Test R^2: 0.5855. Loss Train: 0.0229. Loss Val: 0.0264. Time: 2.3634
 EPOCH 450. Progress: 90.0%.
 Training R^2: 0.5972. Test R^2: 0.6188. Loss Train: 0.0219. Loss Val: 0.0253. Time: 1.9153
 EPOCH 460. Progress: 92.0%.
 Training R^2: 0.6156. Test R^2: 0.6333. Loss Train: 0.0207. Loss Val: 0.0257. Time: 2.0352
 EPOCH 470. Progress: 94.0%.
 Training R^2: 0.6052. Test R^2: 0.5840. Loss Train: 0.0243. Loss Val: 0.0284. Time: 2.0433
 EPOCH 480. Progress: 96.0%.
 Training R^2: 0.5111. Test R^2: 0.4789. Loss Train: 0.0250. Loss Val: 0.0313. Time: 1.9572
 EPOCH 490. Progress: 98.0%.
 Training R^2: 0.5837. Test R^2: 0.5839. Loss Train: 0.0233. Loss Val: 0.0282. Time: 1.9536
 EPOCH 500. Progress: 100.0%.
 Training R^2: 0.5336. Test R^2: 0.5838. Loss Train: 0.0228. Loss Val: 0.0285. Time: 2.0231
../_images/CASESTUDY_NN-day1_24_7.png
saving  ./plots/lr0.01_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
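Across the four configurations, lr=0.001 with 128 hidden units is the clear winner (test R^2 ≈ 0.89 at epoch 500), so it is retrained for 1000 epochs below. One caveat once dropout is in the model: PyTorch only disables dropout in eval mode, so evaluation code like cell [29] should switch modes explicitly. A minimal sketch, assuming the model, device and dataloaders defined above:

model.eval()  # disables Dropout at inference time
with torch.no_grad():
    data, targets = next(iter(dataloaders['val']))
    preds = model(data.to(device)).squeeze()
model.train()  # re-enable dropout before any further training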
[31]:
# Best configuration from the sweep above: ./plots/lr0.001_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
# Retrain the deeper feed-forward network (same class as above) for longer
class FeedforwardNeuralNetModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(FeedforwardNeuralNetModel, self).__init__()
        # First hidden layer
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.drop = nn.Dropout(0.2)  # dropout for regularization
        # Non-linearity
        self.relu = nn.ReLU()
        # Second hidden layer and linear readout
        self.fc2 = nn.Linear(hidden_dim, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Hidden layer 1 -> ReLU -> dropout, hidden layer 2 -> ReLU -> dropout, readout
        out = self.relu(self.fc1(x))
        out = self.drop(out)
        out = self.relu(self.fc2(out))
        out = self.drop(out)
        out = self.fc3(out)
        return out

# Best configuration (lr=0.001, hidden dim 128), retrained for 1000 epochs
lr_range = [0.001]
hid_dim_range = [128]
weight_decay_range = [0.001]
momentum_range = [0.9]
dampening_range = [0]
nesterov_range = [False]
for lr in lr_range:
    for momentum in momentum_range:
        for weight_decay in weight_decay_range:
            for nesterov in nesterov_range:
                for dampening in dampening_range:
                    for hid_dim in hid_dim_range:
                        print('\nlr: {}, momentum: {}, weight_decay: {}, dampening: {}, nesterov: {} '.format(lr, momentum, weight_decay, dampening, nesterov))
                        model = FeedforwardNeuralNetModel(input_dim=47,hidden_dim=hid_dim, output_dim=1).to(device)
                        print(model)
                        optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # Adam is configured with lr only; the momentum/weight_decay/dampening/nesterov values above are not passed to it
                        loss_fn = nn.MSELoss().to(device)
                        config_str = 'lr' + str(lr) + '_3FFNet_2ReLU_Drop_momentum' + str(momentum) + '_wdecay' + str(weight_decay) + '_dampening' + str(dampening) + '_nesterov' + str(nesterov) + '_HidDim' + str(hid_dim)
                        train(model, loss_fn, optimizer, dataloaders['train'], dataloaders['val'], config_str, num_epochs=1000, verbose=False)

lr: 0.001, momentum: 0.9, weight_decay: 0.001, dampening: 0, nesterov: False
FeedforwardNeuralNetModel(
  (fc1): Linear(in_features=47, out_features=128, bias=True)
  (drop): Dropout(p=0.2, inplace=False)
  (relu): ReLU()
  (fc2): Linear(in_features=128, out_features=128, bias=True)
  (fc3): Linear(in_features=128, out_features=1, bias=True)
)
optimizer: Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 0.001
    weight_decay: 0
)
n. of epochs: 1000
 EPOCH 0. Progress: 0.0%.
 Training R^2: 0.0927. Test R^2: 0.0356. Loss Train: 0.1167. Loss Val: 0.0563. Time: 3.3496
 EPOCH 10. Progress: 1.0%.
 Training R^2: 0.4223. Test R^2: 0.4131. Loss Train: 0.0295. Loss Val: 0.0375. Time: 2.4228
 EPOCH 20. Progress: 2.0%.
 Training R^2: 0.5436. Test R^2: 0.5038. Loss Train: 0.0266. Loss Val: 0.0323. Time: 2.4585
 EPOCH 30. Progress: 3.0%.
 Training R^2: 0.5576. Test R^2: 0.5039. Loss Train: 0.0248. Loss Val: 0.0303. Time: 2.6364
 EPOCH 40. Progress: 4.0%.
 Training R^2: 0.5357. Test R^2: 0.5614. Loss Train: 0.0220. Loss Val: 0.0256. Time: 4.1378
 EPOCH 50. Progress: 5.0%.
 Training R^2: 0.6205. Test R^2: 0.6498. Loss Train: 0.0191. Loss Val: 0.0246. Time: 2.1606
 EPOCH 60. Progress: 6.0%.
 Training R^2: 0.6510. Test R^2: 0.6262. Loss Train: 0.0191. Loss Val: 0.0223. Time: 2.5680
 EPOCH 70. Progress: 7.000000000000001%.
 Training R^2: 0.6858. Test R^2: 0.6749. Loss Train: 0.0182. Loss Val: 0.0210. Time: 2.3999
 EPOCH 80. Progress: 8.0%.
 Training R^2: 0.6682. Test R^2: 0.6727. Loss Train: 0.0174. Loss Val: 0.0197. Time: 2.4131
 EPOCH 90. Progress: 9.0%.
 Training R^2: 0.7282. Test R^2: 0.6968. Loss Train: 0.0162. Loss Val: 0.0181. Time: 2.8423
 EPOCH 100. Progress: 10.0%.
 Training R^2: 0.7063. Test R^2: 0.6932. Loss Train: 0.0162. Loss Val: 0.0208. Time: 2.4898
 EPOCH 110. Progress: 11.0%.
 Training R^2: 0.7479. Test R^2: 0.7652. Loss Train: 0.0148. Loss Val: 0.0158. Time: 3.4190
 EPOCH 120. Progress: 12.0%.
 Training R^2: 0.7348. Test R^2: 0.7248. Loss Train: 0.0150. Loss Val: 0.0175. Time: 2.4495
 EPOCH 130. Progress: 13.0%.
 Training R^2: 0.7539. Test R^2: 0.6989. Loss Train: 0.0133. Loss Val: 0.0163. Time: 2.4874
 EPOCH 140. Progress: 14.0%.
 Training R^2: 0.7633. Test R^2: 0.7787. Loss Train: 0.0133. Loss Val: 0.0138. Time: 2.5096
 EPOCH 150. Progress: 15.0%.
 Training R^2: 0.7354. Test R^2: 0.7661. Loss Train: 0.0132. Loss Val: 0.0136. Time: 2.4669
 EPOCH 160. Progress: 16.0%.
 Training R^2: 0.7896. Test R^2: 0.7857. Loss Train: 0.0121. Loss Val: 0.0133. Time: 2.5675
 EPOCH 170. Progress: 17.0%.
 Training R^2: 0.7779. Test R^2: 0.7814. Loss Train: 0.0121. Loss Val: 0.0131. Time: 2.3809
 EPOCH 180. Progress: 18.0%.
 Training R^2: 0.7951. Test R^2: 0.7855. Loss Train: 0.0112. Loss Val: 0.0140. Time: 3.3293
 EPOCH 190. Progress: 19.0%.
 Training R^2: 0.7885. Test R^2: 0.8003. Loss Train: 0.0110. Loss Val: 0.0118. Time: 2.8167
 EPOCH 200. Progress: 20.0%.
 Training R^2: 0.7988. Test R^2: 0.8261. Loss Train: 0.0110. Loss Val: 0.0111. Time: 2.5940
 EPOCH 210. Progress: 21.0%.
 Training R^2: 0.7719. Test R^2: 0.7948. Loss Train: 0.0102. Loss Val: 0.0126. Time: 2.6867
 EPOCH 220. Progress: 22.0%.
 Training R^2: 0.7516. Test R^2: 0.7911. Loss Train: 0.0108. Loss Val: 0.0125. Time: 2.1718
 EPOCH 230. Progress: 23.0%.
 Training R^2: 0.8133. Test R^2: 0.8168. Loss Train: 0.0102. Loss Val: 0.0098. Time: 1.7636
 EPOCH 240. Progress: 24.0%.
 Training R^2: 0.8212. Test R^2: 0.8234. Loss Train: 0.0090. Loss Val: 0.0092. Time: 2.1938
 EPOCH 250. Progress: 25.0%.
 Training R^2: 0.8284. Test R^2: 0.8352. Loss Train: 0.0103. Loss Val: 0.0113. Time: 2.8295
 EPOCH 260. Progress: 26.0%.
 Training R^2: 0.8318. Test R^2: 0.8382. Loss Train: 0.0094. Loss Val: 0.0104. Time: 2.3420
 EPOCH 270. Progress: 27.0%.
 Training R^2: 0.8218. Test R^2: 0.8272. Loss Train: 0.0093. Loss Val: 0.0099. Time: 3.1340
 EPOCH 280. Progress: 28.0%.
 Training R^2: 0.7975. Test R^2: 0.8056. Loss Train: 0.0099. Loss Val: 0.0130. Time: 2.1983
 EPOCH 290. Progress: 29.0%.
 Training R^2: 0.8144. Test R^2: 0.8154. Loss Train: 0.0086. Loss Val: 0.0107. Time: 2.5251
 EPOCH 300. Progress: 30.0%.
 Training R^2: 0.8338. Test R^2: 0.8196. Loss Train: 0.0088. Loss Val: 0.0118. Time: 2.5312
 EPOCH 310. Progress: 31.0%.
 Training R^2: 0.8166. Test R^2: 0.8148. Loss Train: 0.0094. Loss Val: 0.0107. Time: 3.2249
 EPOCH 320. Progress: 32.0%.
 Training R^2: 0.8454. Test R^2: 0.8754. Loss Train: 0.0084. Loss Val: 0.0098. Time: 2.2585
 EPOCH 330. Progress: 33.0%.
 Training R^2: 0.8487. Test R^2: 0.8329. Loss Train: 0.0082. Loss Val: 0.0095. Time: 2.5670
 EPOCH 340. Progress: 34.0%.
 Training R^2: 0.8199. Test R^2: 0.8629. Loss Train: 0.0090. Loss Val: 0.0098. Time: 2.3122
 EPOCH 350. Progress: 35.0%.
 Training R^2: 0.8398. Test R^2: 0.8288. Loss Train: 0.0086. Loss Val: 0.0089. Time: 3.6857
 EPOCH 360. Progress: 36.0%.
 Training R^2: 0.8452. Test R^2: 0.8503. Loss Train: 0.0078. Loss Val: 0.0103. Time: 2.3320
 EPOCH 370. Progress: 37.0%.
 Training R^2: 0.8209. Test R^2: 0.8351. Loss Train: 0.0086. Loss Val: 0.0087. Time: 2.4615
 EPOCH 380. Progress: 38.0%.
 Training R^2: 0.8335. Test R^2: 0.8488. Loss Train: 0.0080. Loss Val: 0.0092. Time: 2.4492
 EPOCH 390. Progress: 39.0%.
 Training R^2: 0.8596. Test R^2: 0.8726. Loss Train: 0.0095. Loss Val: 0.0088. Time: 2.3357
 EPOCH 400. Progress: 40.0%.
 Training R^2: 0.8378. Test R^2: 0.8771. Loss Train: 0.0084. Loss Val: 0.0087. Time: 2.3776
 EPOCH 410. Progress: 41.0%.
 Training R^2: 0.8691. Test R^2: 0.8665. Loss Train: 0.0078. Loss Val: 0.0078. Time: 2.3687
 EPOCH 420. Progress: 42.0%.
 Training R^2: 0.8564. Test R^2: 0.8439. Loss Train: 0.0076. Loss Val: 0.0108. Time: 2.3925
 EPOCH 430. Progress: 43.0%.
 Training R^2: 0.8437. Test R^2: 0.8196. Loss Train: 0.0090. Loss Val: 0.0090. Time: 2.5062
 EPOCH 440. Progress: 44.0%.
 Training R^2: 0.8572. Test R^2: 0.8826. Loss Train: 0.0082. Loss Val: 0.0086. Time: 3.2034
 EPOCH 450. Progress: 45.0%.
 Training R^2: 0.8639. Test R^2: 0.8509. Loss Train: 0.0071. Loss Val: 0.0086. Time: 2.7788
 EPOCH 460. Progress: 46.0%.
 Training R^2: 0.8728. Test R^2: 0.8566. Loss Train: 0.0067. Loss Val: 0.0101. Time: 2.2754
 EPOCH 470. Progress: 47.0%.
 Training R^2: 0.8457. Test R^2: 0.8577. Loss Train: 0.0076. Loss Val: 0.0090. Time: 2.5371
 EPOCH 480. Progress: 48.0%.
 Training R^2: 0.8591. Test R^2: 0.8624. Loss Train: 0.0083. Loss Val: 0.0085. Time: 2.4443
 EPOCH 490. Progress: 49.0%.
 Training R^2: 0.8629. Test R^2: 0.8648. Loss Train: 0.0079. Loss Val: 0.0089. Time: 3.1879
 EPOCH 500. Progress: 50.0%.
 Training R^2: 0.8636. Test R^2: 0.8729. Loss Train: 0.0077. Loss Val: 0.0084. Time: 2.5132
 EPOCH 510. Progress: 51.0%.
 Training R^2: 0.8715. Test R^2: 0.8626. Loss Train: 0.0073. Loss Val: 0.0096. Time: 2.3002
 EPOCH 520. Progress: 52.0%.
 Training R^2: 0.8685. Test R^2: 0.8791. Loss Train: 0.0071. Loss Val: 0.0076. Time: 2.3063
 EPOCH 530. Progress: 53.0%.
 Training R^2: 0.8638. Test R^2: 0.8644. Loss Train: 0.0076. Loss Val: 0.0088. Time: 2.3837
 EPOCH 540. Progress: 54.0%.
 Training R^2: 0.8898. Test R^2: 0.8858. Loss Train: 0.0070. Loss Val: 0.0081. Time: 2.3558
 EPOCH 550. Progress: 55.0%.
 Training R^2: 0.8539. Test R^2: 0.8689. Loss Train: 0.0085. Loss Val: 0.0081. Time: 2.4504
 EPOCH 560. Progress: 56.0%.
 Training R^2: 0.8736. Test R^2: 0.9091. Loss Train: 0.0068. Loss Val: 0.0072. Time: 2.4290
 EPOCH 570. Progress: 57.0%.
 Training R^2: 0.8794. Test R^2: 0.8838. Loss Train: 0.0066. Loss Val: 0.0069. Time: 2.3208
 EPOCH 580. Progress: 58.0%.
 Training R^2: 0.8754. Test R^2: 0.8821. Loss Train: 0.0069. Loss Val: 0.0076. Time: 2.3361
 EPOCH 590. Progress: 59.0%.
 Training R^2: 0.8846. Test R^2: 0.8744. Loss Train: 0.0065. Loss Val: 0.0078. Time: 2.4653
 EPOCH 600. Progress: 60.0%.
 Training R^2: 0.8788. Test R^2: 0.8516. Loss Train: 0.0069. Loss Val: 0.0089. Time: 2.2863
 EPOCH 610. Progress: 61.0%.
 Training R^2: 0.8901. Test R^2: 0.8855. Loss Train: 0.0069. Loss Val: 0.0079. Time: 2.4429
 EPOCH 620. Progress: 62.0%.
 Training R^2: 0.8856. Test R^2: 0.8852. Loss Train: 0.0060. Loss Val: 0.0079. Time: 2.4149
 EPOCH 630. Progress: 63.0%.
 Training R^2: 0.8649. Test R^2: 0.8844. Loss Train: 0.0066. Loss Val: 0.0083. Time: 2.9441
 EPOCH 640. Progress: 64.0%.
 Training R^2: 0.8742. Test R^2: 0.8713. Loss Train: 0.0072. Loss Val: 0.0074. Time: 2.3610
 EPOCH 650. Progress: 65.0%.
 Training R^2: 0.8897. Test R^2: 0.8725. Loss Train: 0.0067. Loss Val: 0.0074. Time: 2.3834
 EPOCH 660. Progress: 66.0%.
 Training R^2: 0.8690. Test R^2: 0.8578. Loss Train: 0.0066. Loss Val: 0.0100. Time: 2.4020
 EPOCH 670. Progress: 67.0%.
 Training R^2: 0.8719. Test R^2: 0.8608. Loss Train: 0.0061. Loss Val: 0.0078. Time: 2.3500
 EPOCH 680. Progress: 68.0%.
 Training R^2: 0.8727. Test R^2: 0.8649. Loss Train: 0.0078. Loss Val: 0.0076. Time: 2.4209
 EPOCH 690. Progress: 69.0%.
 Training R^2: 0.8731. Test R^2: 0.8843. Loss Train: 0.0061. Loss Val: 0.0067. Time: 2.4152
 EPOCH 700. Progress: 70.0%.
 Training R^2: 0.8366. Test R^2: 0.8930. Loss Train: 0.0067. Loss Val: 0.0072. Time: 2.4015
 EPOCH 710. Progress: 71.0%.
 Training R^2: 0.8710. Test R^2: 0.9092. Loss Train: 0.0059. Loss Val: 0.0078. Time: 2.3803
 EPOCH 720. Progress: 72.0%.
 Training R^2: 0.8553. Test R^2: 0.8707. Loss Train: 0.0067. Loss Val: 0.0078. Time: 2.3663
 EPOCH 730. Progress: 73.0%.
 Training R^2: 0.8767. Test R^2: 0.8790. Loss Train: 0.0073. Loss Val: 0.0076. Time: 2.4198
 EPOCH 740. Progress: 74.0%.
 Training R^2: 0.8737. Test R^2: 0.8797. Loss Train: 0.0057. Loss Val: 0.0063. Time: 2.2871
 EPOCH 750. Progress: 75.0%.
 Training R^2: 0.8882. Test R^2: 0.8922. Loss Train: 0.0070. Loss Val: 0.0065. Time: 2.3605
 EPOCH 760. Progress: 76.0%.
 Training R^2: 0.8805. Test R^2: 0.9162. Loss Train: 0.0067. Loss Val: 0.0058. Time: 2.4398
 EPOCH 770. Progress: 77.0%.
 Training R^2: 0.8902. Test R^2: 0.8882. Loss Train: 0.0054. Loss Val: 0.0076. Time: 2.3947
 EPOCH 780. Progress: 78.0%.
 Training R^2: 0.8868. Test R^2: 0.9099. Loss Train: 0.0066. Loss Val: 0.0065. Time: 2.3312
 EPOCH 790. Progress: 79.0%.
 Training R^2: 0.8956. Test R^2: 0.8954. Loss Train: 0.0060. Loss Val: 0.0067. Time: 2.2812
 EPOCH 800. Progress: 80.0%.
 Training R^2: 0.8889. Test R^2: 0.8974. Loss Train: 0.0058. Loss Val: 0.0067. Time: 2.3829
 EPOCH 810. Progress: 81.0%.
 Training R^2: 0.9014. Test R^2: 0.8878. Loss Train: 0.0063. Loss Val: 0.0067. Time: 2.3537
 EPOCH 820. Progress: 82.0%.
 Training R^2: 0.9012. Test R^2: 0.9011. Loss Train: 0.0067. Loss Val: 0.0058. Time: 2.5143
 EPOCH 830. Progress: 83.0%.
 Training R^2: 0.8852. Test R^2: 0.8669. Loss Train: 0.0064. Loss Val: 0.0063. Time: 2.3291
 EPOCH 840. Progress: 84.0%.
 Training R^2: 0.8828. Test R^2: 0.8866. Loss Train: 0.0067. Loss Val: 0.0072. Time: 2.4063
 EPOCH 850. Progress: 85.0%.
 Training R^2: 0.8931. Test R^2: 0.9013. Loss Train: 0.0060. Loss Val: 0.0068. Time: 2.4110
 EPOCH 860. Progress: 86.0%.
 Training R^2: 0.8884. Test R^2: 0.8992. Loss Train: 0.0059. Loss Val: 0.0064. Time: 2.4111
 EPOCH 870. Progress: 87.0%.
 Training R^2: 0.8783. Test R^2: 0.8800. Loss Train: 0.0062. Loss Val: 0.0078. Time: 2.5044
 EPOCH 880. Progress: 88.0%.
 Training R^2: 0.8779. Test R^2: 0.8987. Loss Train: 0.0062. Loss Val: 0.0066. Time: 2.4080
 EPOCH 890. Progress: 89.0%.
 Training R^2: 0.8950. Test R^2: 0.8487. Loss Train: 0.0063. Loss Val: 0.0056. Time: 4.4715
 EPOCH 900. Progress: 90.0%.
 Training R^2: 0.8870. Test R^2: 0.8729. Loss Train: 0.0060. Loss Val: 0.0068. Time: 2.4720
 EPOCH 910. Progress: 91.0%.
 Training R^2: 0.9006. Test R^2: 0.8966. Loss Train: 0.0055. Loss Val: 0.0063. Time: 2.3388
 EPOCH 920. Progress: 92.0%.
 Training R^2: 0.8962. Test R^2: 0.9175. Loss Train: 0.0059. Loss Val: 0.0058. Time: 5.9139
 EPOCH 930. Progress: 93.0%.
 Training R^2: 0.8890. Test R^2: 0.9077. Loss Train: 0.0055. Loss Val: 0.0065. Time: 2.3512
 EPOCH 940. Progress: 94.0%.
 Training R^2: 0.8723. Test R^2: 0.8937. Loss Train: 0.0068. Loss Val: 0.0062. Time: 2.3694
 EPOCH 950. Progress: 95.0%.
 Training R^2: 0.8747. Test R^2: 0.8997. Loss Train: 0.0067. Loss Val: 0.0066. Time: 2.4371
 EPOCH 960. Progress: 96.0%.
 Training R^2: 0.8907. Test R^2: 0.9183. Loss Train: 0.0055. Loss Val: 0.0054. Time: 2.4315
 EPOCH 970. Progress: 97.0%.
 Training R^2: 0.8956. Test R^2: 0.9058. Loss Train: 0.0054. Loss Val: 0.0053. Time: 2.3393
 EPOCH 980. Progress: 98.0%.
 Training R^2: 0.8887. Test R^2: 0.9052. Loss Train: 0.0063. Loss Val: 0.0067. Time: 2.3698
 EPOCH 990. Progress: 99.0%.
 Training R^2: 0.8926. Test R^2: 0.9005. Loss Train: 0.0064. Loss Val: 0.0056. Time: 2.3520
 EPOCH 1000. Progress: 100.0%.
 Training R^2: 0.9004. Test R^2: 0.9033. Loss Train: 0.0061. Loss Val: 0.0066. Time: 2.7625
../_images/CASESTUDY_NN-day1_25_1.png
saving  ./plots/lr0.001_3FFNet_2ReLU_Drop_momentum0.9_wdecay0.001_dampening0_nesterovFalse_HidDim128.png
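Note that although the loop above sweeps SGD-style hyperparameters (momentum, weight_decay, dampening, nesterov), the optimizer actually instantiated is torch.optim.Adam with only the learning rate, so those values reach the plot filename but never the optimizer (the echoed configuration above indeed shows weight_decay: 0). A minimal sketch, assuming the same model and the lr from the loop, of how the grid values could be wired into an optimizer that accepts them:

# Sketch: pass the grid's hyperparameters to the optimizer itself.
optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                            momentum=momentum, weight_decay=weight_decay,
                            dampening=dampening, nesterov=nesterov)

# Alternatively, keep Adam and at least apply the L2 penalty:
# optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)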
[32]:
# Predictions on the validation set
model.eval()  # disable dropout for inference
with torch.no_grad():
    data, targets_val = next(iter(dataloaders['all_val']))
    model_input = data.to(device)
    predicted_val = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted_val.shape))
    print('predicted[:20]: \t{}'.format(predicted_val[:20]))
    print('targets[:20]: \t\t{}'.format(targets_val[:20]))

# Predictions on the test set
with torch.no_grad():
    data, targets = next(iter(dataloaders['test']))
    model_input = data.to(device)
    predicted = model(model_input).squeeze()
    print('predicted.shape: {}'.format(predicted.shape))
    print('predicted[:20]: \t{}'.format(predicted[:20]))
    print('targets[:20]: \t\t{}'.format(targets[:20]))

# Plot predicted vs. true values
path_to_save = './plots'
if not os.path.exists(path_to_save):
    os.makedirs(path_to_save)

fig, (ax1,ax2) = plt.subplots(1,2, sharey=True)
r = r2_score(targets_val, predicted_val.cpu())
ax1.scatter(targets_val, predicted_val.cpu(),alpha=0.5, label='$R^2$ = %.3f' % (r))
ax1.legend(loc="upper left")
ax1.set_xlabel('True Values [Mean Conc.]')
ax1.set_ylabel('Predictions [Mean Conc.]')
ax1.axis('equal')
ax1.axis('square')
ax1.set_xlim([0,1])
ax1.set_ylim([0,1])
_ = ax1.plot([-100, 100], [-100, 100], 'r:')
ax1.set_title('Validation set')
fig.set_figheight(30)
fig.set_figwidth(10)
#plt.show()
#plt.close('all')

# Test set
r = r2_score(targets, predicted.cpu())
ax2.scatter(targets, predicted.cpu(), alpha=0.5, label='$R^2$ = %.3f' % (r))
ax2.legend(loc="upper left")
ax2.set_xlabel('True Values [Mean Conc.]')
ax2.set_ylabel('Predictions [Mean Conc.]')
ax2.axis('equal')
ax2.axis('square')
ax2.set_xlim([0,1])
ax2.set_ylim([0,1])
_ = ax2.plot([-100, 100], [-100, 100], 'r:')
ax2.set_title('Test set')
# plt.show()
fig.savefig(os.path.join(path_to_save, '3FFNet_2ReLU_Drop_ADAM_LONGER_R2Score_' + config_str + '.png'), bbox_inches='tight')
# plt.close('all')
predicted.shape: torch.Size([335])
predicted[:20]:         tensor([ 0.8374,  0.4542,  0.3902,  0.4156,  0.7975,  0.6249,  0.4712,  0.2219,
         0.6796,  0.5515,  0.0676,  0.5882,  0.6912,  0.6129, -0.0232,  0.1744,
         0.7455,  0.0933,  0.4177,  0.4790])
targets[:20]:           tensor([0.9596, 0.4605, 0.8906, 0.3956, 0.8300, 0.6755, 0.6471, 0.1907, 0.6822,
        0.5598, 0.0016, 0.5665, 0.6302, 0.6531, 0.0855, 0.1301, 0.6856, 0.2712,
        0.6779, 0.5099])
predicted.shape: torch.Size([1118])
predicted[:20]:         tensor([0.0534, 0.2025, 0.3275, 0.2179, 0.1938, 0.2668, 0.3248, 0.3472, 0.5163,
        0.5817, 0.5051, 0.5089, 0.4742, 0.5025, 0.5184, 0.5432, 0.5099, 0.5161,
        0.5317, 0.4993])
targets[:20]:           tensor([0.0280, 0.1301, 0.2970, 0.2207, 0.2648, 0.3342, 0.2873, 0.3362, 0.6023,
        0.5031, 0.5335, 0.6247, 0.5454, 0.4926, 0.4442, 0.5030, 0.5472, 0.5128,
        0.5702, 0.5017])
../_images/CASESTUDY_NN-day1_26_1.png
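The predicted and target values above are in MinMaxScaler-normalized units, which is why both axes run from 0 to 1 rather than showing nitrogen concentrations in mg/L. A minimal sketch, assuming the scaler fitted on the target column earlier in the notebook is available as target_scaler (a hypothetical name; adapt it to the object actually fitted), of mapping the values back to concentration units:

# `target_scaler` is assumed to be the MinMaxScaler fitted on the
# nitrogen concentration column (hypothetical name).
pred_conc = target_scaler.inverse_transform(
    predicted.cpu().numpy().reshape(-1, 1)).ravel()  # scaler expects 2-D input
true_conc = target_scaler.inverse_transform(
    targets.numpy().reshape(-1, 1)).ravel()
print(pred_conc[:5])
print(true_conc[:5])

Since R^2 is invariant under the same affine rescaling applied to both predictions and targets, the scores shown in the plot legends are unchanged by this back-transformation.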