Data Processor¶
class neural_pipeline.data_processor.data_processor.DataProcessor(model: torch.nn.modules.module.Module, device: torch.device = None)[source]¶
DataProcessor manages the model, data processing, and device selection.
Parameters:
- model – the model that will be used to process data
- device – the device to which the model and data are passed for processing
class neural_pipeline.data_processor.data_processor.TrainDataProcessor(model: torch.nn.modules.module.Module, train_config: TrainConfig, device: torch.device = None)[source]¶
TrainDataProcessor does everything DataProcessor does, but additionally runs the training process.
Parameters:
- model – the model that will be used to process data
- train_config – the train config
- device – the device to which the model, data, and optimizer are passed for processing
get_state() → {}[source]¶
Get the model and optimizer state dicts.
Returns: dict with keys [weights, optimizer]
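A minimal sketch of the returned state's shape, using plain dicts as stand-ins for real torch state dicts. Only the `weights`/`optimizer` key names come from the docstring above; the helper and its values are illustrative:

```python
# Hypothetical stand-in for TrainDataProcessor.get_state(): per the
# docstring, the result is a dict with 'weights' and 'optimizer' keys.
def get_state(model_state: dict, optimizer_state: dict) -> dict:
    # Bundle both state dicts under the documented keys so the pair
    # can be saved and restored together.
    return {"weights": model_state, "optimizer": optimizer_state}

state = get_state({"fc.weight": [0.1, 0.2]}, {"lr": 0.01})
```

Keeping both state dicts in one structure lets a checkpoint restore the model weights and the optimizer state in a single step.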
predict(data, is_train=False) → torch.Tensor[source]¶
Run the model on data; if is_train is True, the data is processed in training mode.
Parameters:
- data – data as a dict
- is_train – whether the data processor should train on the data or just predict
Returns: processed output
Return type: model return type
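How the is_train flag could steer prediction can be sketched with a toy stand-in. The ToyModel class, its training flag, and the doubling forward pass are all made up for illustration; only the predict(data, is_train) signature follows the docs above:

```python
# Toy model stand-in: mimics how a data processor might switch the
# model between training and inference mode before a forward pass.
class ToyModel:
    def __init__(self):
        self.training = False

    def __call__(self, data):
        # Trivial forward pass: double every input value.
        return [x * 2 for x in data["data"]]

def predict(model, data, is_train=False):
    # Put the model into the requested mode, then run the forward pass.
    model.training = is_train
    return model(data)

model = ToyModel()
out = predict(model, {"data": [1, 2, 3]}, is_train=True)
```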
process_batch(batch: {}, is_train: bool, metrics_processor: AbstractMetricsProcessor = None) → numpy.ndarray[source]¶
Process one batch of data.
Parameters:
- batch – dict containing 'data' and 'target' keys; each value must be an instance of torch.Tensor or a dict
- is_train – whether the batch is processed for training
- metrics_processor – metrics processor used to collect metrics after the batch is processed
Returns: array of losses with shape (N, …), where N is the batch size
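The expected batch layout and the shape of the returned losses can be sketched in plain Python. The squared-error loss and the values are made up; only the 'data'/'target' keys and the per-sample (N, …) result shape come from the docs above:

```python
# A batch is a dict with 'data' and 'target' keys, as described above.
batch = {"data": [0.5, 1.0, 1.5], "target": [1.0, 1.0, 1.0]}

def process_batch(batch):
    # Hypothetical per-sample squared-error loss: one loss value per
    # sample, so the result has length N (the batch size).
    return [(p - t) ** 2 for p, t in zip(batch["data"], batch["target"])]

losses = process_batch(batch)
```

Returning one loss per sample rather than a single scalar lets callers aggregate (mean, sum, per-sample weighting) however they need.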
Model¶
class neural_pipeline.data_processor.model.Model(base_model: torch.nn.modules.module.Module)[source]¶
Wrapper for torch.nn.Module. This class provides initialization, calling, and serialization for it.
Parameters: base_model – a torch.nn.Module object
model() → torch.nn.modules.module.Module[source]¶
Get the internal torch.nn.Module object.
Returns: the internal torch.nn.Module object
set_checkpoints_manager(manager: neural_pipeline.utils.file_structure_manager.CheckpointsManager) → neural_pipeline.data_processor.model.Model[source]¶
Set the checkpoints manager that will be used to resolve the path of the weights file for reading and writing.
Parameters: manager – a CheckpointsManager instance
Returns: self object
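Because set_checkpoints_manager returns self, calls can be chained in a fluent style. A minimal stand-in, with both stub classes invented purely for illustration:

```python
# Stand-in showing the fluent style enabled by returning self.
class CheckpointsManagerStub:
    def __init__(self, path):
        # Path the manager would resolve for weights reading/writing.
        self.path = path

class ModelStub:
    def __init__(self):
        self._manager = None

    def set_checkpoints_manager(self, manager):
        # Store the manager and return self so calls can be chained.
        self._manager = manager
        return self

model = ModelStub().set_checkpoints_manager(CheckpointsManagerStub("weights.pth"))
```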