Train Config
class neural_pipeline.train_config.train_config.TrainConfig(train_stages: [], loss: torch.nn.modules.module.Module, optimizer: torch.optim.optimizer.Optimizer)

Train process settings storage.

Parameters:
- train_stages – list of stages for the train loop
- loss – loss criterion
- optimizer – optimizer object
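In spirit, TrainConfig just bundles the stages, the loss criterion, and the optimizer into one object that the train loop consumes. A minimal pure-Python analogue (the class name and the placeholder string values here are illustrative; the real API takes a torch Module and Optimizer):

```python
class SketchTrainConfig:
    """Illustrative bundle mirroring TrainConfig(train_stages, loss, optimizer)."""
    def __init__(self, train_stages, loss, optimizer):
        self.train_stages = train_stages  # list of stages for the train loop
        self.loss = loss                  # loss criterion (a torch Module in the real API)
        self.optimizer = optimizer        # optimizer object

cfg = SketchTrainConfig(train_stages=['train', 'validation'],
                        loss='CrossEntropyLoss', optimizer='Adam')
print(cfg.train_stages)  # ['train', 'validation']
```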
class neural_pipeline.train_config.train_config.TrainStage(data_producer: neural_pipeline.data_producer.data_producer.DataProducer, metrics_processor: neural_pipeline.train_config.train_config.MetricsProcessor = None, name: str = 'train')

Standard training stage.

When run() is called, it iterates process_batch() of the data processor over the data loader with the is_train=True flag. After iteration stops, TrainStage accumulates losses from the DataProcessor.

Parameters:
- data_producer – DataProducer object
- metrics_processor – MetricsProcessor
- name – name of the stage. 'train' by default

disable_hard_negative_mining() → neural_pipeline.train_config.train_config.TrainStage

Disable hard negative mining.

Returns: self object
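The run() contract described above can be sketched in plain Python (FakeDataProcessor and SketchTrainStage are illustrative stand-ins, not the library's internals; only the names process_batch, is_train, and get_losses mirror the docs):

```python
class FakeDataProcessor:
    """Stands in for neural_pipeline's DataProcessor."""
    def process_batch(self, batch, is_train):
        # Pretend the "loss" of a batch is just its mean value.
        return sum(batch) / len(batch)

class SketchTrainStage:
    def __init__(self, data_producer, name='train'):
        self.name = name
        self._data_producer = data_producer  # any iterable of batches
        self._losses = []

    def run(self, data_processor):
        # Iterate process_batch() over the data loader with is_train=True,
        # accumulating per-batch losses, as the docs describe.
        self._losses = [
            data_processor.process_batch(batch, is_train=True)
            for batch in self._data_producer
        ]

    def get_losses(self):
        return self._losses

stage = SketchTrainStage(data_producer=[[1.0, 3.0], [2.0, 4.0]])
stage.run(FakeDataProcessor())
print(stage.get_losses())  # [2.0, 3.0]
```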
class neural_pipeline.train_config.train_config.ValidationStage(data_producer: neural_pipeline.data_producer.data_producer.DataProducer, metrics_processor: neural_pipeline.train_config.train_config.MetricsProcessor = None, name: str = 'validation')

Standard validation stage.

When run() is called, it iterates process_batch() of the data processor over the data loader with the is_train=False flag. After iteration stops, ValidationStage accumulates losses from the DataProcessor.

Parameters:
- data_producer – DataProducer object
- metrics_processor – MetricsProcessor
- name – name of the stage. 'validation' by default
class neural_pipeline.train_config.train_config.AbstractMetric(name: str)

Abstract class for metrics. When used inside neural_pipeline, it stores the metric value on every call of calc().

Parameters:
- name – name of the metric. The name will be used in monitors, so be careful with unsupported characters

calc(output: torch.Tensor, target: torch.Tensor) → numpy.ndarray

Calculate the metric from the model output and the target.

Parameters:
- output – output from the model
- target – ground truth

static max_val() → float

Get the maximum value of the metric. This is used for correct histogram visualisation in some monitors.

Returns: maximum value
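A concrete metric following the calc()/max_val() contract might look like this accuracy sketch (hypothetical subclass; written without importing torch or neural_pipeline, so plain lists stand in for tensors):

```python
class AccuracyMetric:
    """Hypothetical metric following the AbstractMetric contract:
    calc() maps (output, target) to a value; max_val() bounds it for histograms."""
    def __init__(self, name='accuracy'):
        self.name = name  # used by monitors, so avoid unsupported characters

    def calc(self, output, target):
        # Fraction of predictions matching the ground truth.
        correct = sum(1 for o, t in zip(output, target) if o == t)
        return correct / len(target)

    @staticmethod
    def max_val():
        # Accuracy never exceeds 1, which lets monitors scale histograms correctly.
        return 1.0

metric = AccuracyMetric()
print(metric.calc([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```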
class neural_pipeline.train_config.train_config.MetricsGroup(name: str)

Class for uniting metrics or other MetricsGroup's in one namespace. Note: a MetricsGroup may contain only 2 levels of MetricsGroup's, so MetricsGroup().add(MetricsGroup().add(MetricsGroup())) will raise MGException.

Parameters:
- name – group name. The name will be used in monitors, so be careful with unsupported characters

add(item: neural_pipeline.train_config.train_config.AbstractMetric) → neural_pipeline.train_config.train_config.MetricsGroup

Add an AbstractMetric or a MetricsGroup.

Parameters:
- item – object to add

Returns: self object
Return type: MetricsGroup

calc(output: torch.Tensor, target: torch.Tensor) → None

Recursively calculate all metrics in this group and in all nested groups.

Parameters:
- output – predicted value
- target – target value

have_groups() → bool

Check whether this group contains other metrics groups.

Returns: True if it contains any, otherwise False
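The two-level nesting rule can be sketched as follows (a pure-Python analogue; the depth check and this MGException class are illustrative of the documented behaviour, not the library's code):

```python
class MGException(Exception):
    """Raised when MetricsGroup nesting exceeds two levels (per the docs)."""

class SketchMetricsGroup:
    def __init__(self, name):
        self.name = name
        self._members = []

    def _depth(self):
        # 1 for this group, plus the deepest nested group chain below it.
        groups = [m for m in self._members if isinstance(m, SketchMetricsGroup)]
        return 1 + (max(g._depth() for g in groups) if groups else 0)

    def add(self, item):
        self._members.append(item)
        if self._depth() > 2:
            raise MGException("MetricsGroup may contain only 2 levels of groups")
        return self  # returns self, so calls can be chained

# Two levels are fine; a third level raises, matching the doc's example.
inner = SketchMetricsGroup('inner').add(SketchMetricsGroup('leaf'))
try:
    SketchMetricsGroup('outer').add(inner)
except MGException as e:
    print(e)
```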
class neural_pipeline.train_config.train_config.MetricsProcessor

Collection for all AbstractMetric's and MetricsGroup's.

add_metric(metric: neural_pipeline.train_config.train_config.AbstractMetric) → neural_pipeline.train_config.train_config.AbstractMetric

Add an AbstractMetric object.

Parameters:
- metric – metric to add

Returns: metric object
Return type: AbstractMetric

add_metrics_group(group: neural_pipeline.train_config.train_config.MetricsGroup) → neural_pipeline.train_config.train_config.MetricsGroup

Add a MetricsGroup object.

Parameters:
- group – metrics group to add

Returns: metrics group object
Return type: MetricsGroup

calc_metrics(output, target) → None

Recursively calculate all metrics.

Parameters:
- output – predicted value
- target – target value
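A sketch of how a processor fans calc_metrics() out to its registered metrics and groups (an illustrative pure-Python analogue of the documented API; RecordingMetric is hypothetical):

```python
class SketchMetricsProcessor:
    def __init__(self):
        self._metrics = []
        self._groups = []

    def add_metric(self, metric):
        self._metrics.append(metric)
        return metric  # per the docs, returns the metric object

    def add_metrics_group(self, group):
        self._groups.append(group)
        return group  # per the docs, returns the group object

    def calc_metrics(self, output, target):
        # Calculate every plain metric, then every group
        # (a real group would recurse into its own members).
        for m in self._metrics:
            m.calc(output, target)
        for g in self._groups:
            g.calc(output, target)

class RecordingMetric:
    """Illustrative metric that records the result of each calc() call."""
    def __init__(self, name):
        self.name = name
        self.values = []

    def calc(self, output, target):
        value = sum(1 for o, t in zip(output, target) if o == t) / len(target)
        self.values.append(value)
        return value

proc = SketchMetricsProcessor()
acc = proc.add_metric(RecordingMetric('accuracy'))
proc.calc_metrics([1, 1, 0], [1, 0, 0])
print(acc.values)  # [0.6666666666666666]
```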
class neural_pipeline.train_config.train_config.AbstractStage(name: str)

Stage of the training process. For example, there may be 2 stages: train and validation. Every epoch of the train loop is an iteration over the stages.

Parameters:
- name – name of the stage

get_losses() → numpy.ndarray

Get losses from this stage.

Returns: array of losses, or None if this stage doesn't track losses
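The note that every epoch iterates over the stages can be sketched like so (CountingStage and the loop are hypothetical, standing in for a stage subclass and the library's train loop):

```python
class CountingStage:
    """Illustrative AbstractStage-like object: run() does the work,
    get_losses() reports per-batch losses (or None if not tracked)."""
    def __init__(self, name, losses):
        self.name = name
        self._losses = losses
        self.runs = 0

    def run(self):
        self.runs += 1

    def get_losses(self):
        return self._losses

stages = [CountingStage('train', [0.5, 0.4]), CountingStage('validation', [0.6])]
for epoch in range(3):
    for stage in stages:  # every epoch is one pass over all stages
        stage.run()

print([s.runs for s in stages])  # [3, 3]
```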
class neural_pipeline.train_config.train_config.StandardStage(stage_name: str, is_train: bool, data_producer: neural_pipeline.data_producer.data_producer.DataProducer, metrics_processor: neural_pipeline.train_config.train_config.MetricsProcessor = None)

Standard stage for the train process.

When run() is called, it iterates process_batch() of the data processor over the data loader. After iteration stops, the stage accumulates losses from the DataProcessor.

Parameters:
- data_producer – DataProducer object
- metrics_processor – MetricsProcessor

metrics_processor() → neural_pipeline.train_config.train_config.MetricsProcessor

Get the metrics processor of this stage.

Returns: MetricsProcessor if specified, otherwise None