# train_one_net.m

Generates a training library, trains a network, and writes the network into a .mat file with the specified number as the name. If that file already exists, the function loads the stored network and continues training it.

## Syntax

    train_one_net(parameters,trsets,file_number)


## Arguments

    parameters  - library and network specification structure
                  (see the header of deer_lib_gen.m)

    trsets      - sizes of the independent, randomly generated
                  training libraries to sequentially train
                  against; [160k 160k 160k 160k 160k 160k]
                  works well on a Tesla V100 card

    file_number - the network object will be saved into a file
                  with this number as the name; this file also
                  serves as a restart checkpoint


Four more fields are required in the parameters structure:

    parameters.layer_sizes - number of neurons per layer, a row
                             vector whose number of elements is
                             the number of hidden layers desired

    parameters.lastlayer   - activation function to use in the
                             output layer ('tansig' or 'logsig')

    parameters.method      - training algorithm ('trainscg' is
                             recommended)

    parameters.nobias      - if set to 1, the network will not
                             have bias vectors

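The fields above can be assembled before the call as shown in the sketch below. The specific values are illustrative assumptions only (apart from 'trainscg', which the text above recommends), and the library generation fields documented in the header of deer_lib_gen.m are omitted:

```matlab
% Network specification fields of the parameters structure
% (library generation fields from deer_lib_gen.m not shown)
parameters.layer_sizes=[256 256 256];  % three hidden layers, 256 neurons each (illustrative)
parameters.lastlayer='tansig';         % output layer activation function
parameters.method='trainscg';          % recommended training algorithm
parameters.nobias=0;                   % keep bias vectors
```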

## Outputs

This function writes a .mat file with the network object.
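To inspect a saved checkpoint afterwards, the .mat file can be loaded back into the workspace. The variable names stored in the file are not specified in this documentation, so listing the file contents first is the safe approach:

```matlab
% List the variables stored in the checkpoint file, then load them
whos('-file','111.mat')  % show what the file contains
data=load('111.mat');    % load everything into a structure
```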

## Examples

The example below will train a single network using the parameters given in the netset_params.m file for the network ensemble optimized for any peak width.

    % Load training set parameters
    run('net_set_any_peaks/netset_params.m');

    % Specify the sizes of training databases to train against
    trsets=[160e4 160e4 160e4 160e4];

    % Run the network training for a single network
    train_one_net(parameters,trsets,111);


The function will save the network into the file 111.mat.

## Notes

A CUDA-capable NVIDIA GPU is required.