Monitoring

Main module for monitoring the training process.

It provides the following classes:

class neural_pipeline.monitoring.MonitorHub[source]

Aggregator of monitors. This class collects monitors and provides a unified interface to them.

add_monitor(monitor: neural_pipeline.monitoring.AbstractMonitor) → neural_pipeline.monitoring.MonitorHub[source]

Connect a monitor to the hub

Parameters: monitor – AbstractMonitor object
Returns: self object
set_epoch_num(epoch_num: int) → None[source]

Set the current epoch number

Parameters: epoch_num – number of the current epoch
update_losses(losses: {}) → None[source]

Update losses in all monitors

Parameters: losses – dict of loss values with keys 'train' and 'validation'
update_metrics(metrics: {}) → None[source]

Update metrics in all monitors

Parameters: metrics – metrics dict with keys 'metrics' and 'groups'
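
Example (a minimal usage sketch, not taken from the library itself; the per-stage loss values are assumed here to be numpy arrays of per-batch losses):

    import numpy as np
    from neural_pipeline.monitoring import MonitorHub, ConsoleMonitor

    hub = MonitorHub()
    hub.add_monitor(ConsoleMonitor())

    for epoch in range(3):
        hub.set_epoch_num(epoch)
        # in a real run these arrays would come from the training loop
        train_losses = np.random.uniform(0.1, 1.0, size=100)
        val_losses = np.random.uniform(0.1, 1.0, size=20)
        hub.update_losses({'train': train_losses, 'validation': val_losses})
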
class neural_pipeline.monitoring.AbstractMonitor[source]

Base class for every monitor.

set_epoch_num(epoch_num: int) → None[source]

Set the current epoch number

Parameters: epoch_num – number of the current epoch
update_losses(losses: {}) → None[source]

Update losses on the monitor

Parameters: losses – dict of loss values whose keys are the names of stages in the train pipeline (e.g. ['train', 'validation'])
update_metrics(metrics: {}) → None[source]

Update metrics on the monitor

Parameters: metrics – metrics dict with keys 'metrics' and 'groups'
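
Example (a hedged sketch of a custom monitor; the MeanLossPrinter class is hypothetical, relies only on the methods documented above, and assumes AbstractMonitor needs no constructor arguments, as none are documented here):

    import numpy as np
    from neural_pipeline.monitoring import AbstractMonitor

    class MeanLossPrinter(AbstractMonitor):
        """Hypothetical monitor that prints the mean loss of every stage"""

        def update_losses(self, losses: dict) -> None:
            for stage_name, values in losses.items():
                print('{}: mean loss {:.4f}'.format(stage_name, float(np.mean(values))))

Such a monitor can then be connected to a MonitorHub via add_monitor() alongside the built-in monitors.
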
class neural_pipeline.monitoring.ConsoleMonitor[source]

Monitor that writes metrics to the console.

Output looks like: Epoch: [#]; train: [-1, 0, 1]; validation: [-1, 0, 1]. The three numbers are the [min, mean, max] of that stage's loss values.

update_losses(losses: {}) → None[source]

Update losses on the monitor

Parameters: losses – dict of loss values whose keys are the names of stages in the train pipeline (e.g. ['train', 'validation'])
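
Example (direct use outside a hub; the printed line in the comment follows the documented format, but the exact number formatting may differ):

    import numpy as np
    from neural_pipeline.monitoring import ConsoleMonitor

    monitor = ConsoleMonitor()
    monitor.set_epoch_num(1)
    monitor.update_losses({'train': np.array([0.2, 0.5, 0.8]),
                           'validation': np.array([0.3, 0.6, 0.9])})
    # prints something like: Epoch: [1]; train: [0.2, 0.5, 0.8]; validation: [0.3, 0.6, 0.9]
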
class neural_pipeline.monitoring.LogMonitor(fsm: neural_pipeline.utils.file_structure_manager.FileStructManager)[source]

Monitor used for logging metrics. It writes the full log and can also write the last metrics to a separate file if required.

All output files are in JSON format and are stored in <base_dir_path>/monitors/metrics_log

Parameters: fsm – FileStructManager object
close() → None[source]

Close monitor

get_final_metrics_file() → str[source]

Get final metrics file path

Returns: path, or None if writing wasn't enabled by write_final_metrics()
update_losses(losses: {}) → None[source]

Update losses on the monitor

Parameters: losses – dict of loss values whose keys are the names of stages in the train pipeline (e.g. ['train', 'validation'])
update_metrics(metrics: {}) → None[source]

Update metrics on the monitor

Parameters: metrics – metrics dict with keys 'metrics' and 'groups'
write_final_metrics(path: str = None) → neural_pipeline.monitoring.LogMonitor[source]

Enable saving of final metrics to a separate file

Parameters: path – path to the result file. If not defined, the file will be placed next to the full metrics log and named 'metrics.json'
Returns: self object
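
Example (a hedged end-to-end sketch; the FileStructManager constructor arguments shown here are assumptions, not documented on this page, so check the FileStructManager documentation for its exact signature):

    import numpy as np
    from neural_pipeline.monitoring import MonitorHub, LogMonitor
    from neural_pipeline.utils.file_structure_manager import FileStructManager

    fsm = FileStructManager(base_dir='train_dir', is_continue=False)  # assumed arguments
    log_monitor = LogMonitor(fsm).write_final_metrics()  # also write last metrics to 'metrics.json'

    hub = MonitorHub()
    hub.add_monitor(log_monitor)

    hub.set_epoch_num(0)
    hub.update_losses({'train': np.random.uniform(0.1, 1.0, size=100),
                       'validation': np.random.uniform(0.1, 1.0, size=20)})

    log_monitor.close()
    print(log_monitor.get_final_metrics_file())  # path to the final metrics JSON file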