zae_engine.models.foundations package¶
Submodules¶
zae_engine.models.foundations.benchmark module¶
- class zae_engine.models.foundations.benchmark.TimeSeriesBert(vocab_size: int, d_model: int, max_len: int, num_layers: int, num_heads: int, dim_feedforward: int, dropout: float = 0.1, dim_pool: int = None, sep_token_id: int = 102, **factory_kwargs)[source]¶
Bases: Module
Encoder-only Transformer model based on BertBase with integrated embedding and positional encoding.
- Parameters:
vocab_size (int) – The size of the vocabulary.
d_model (int) – The dimension of the embedding space.
max_len (int) – The maximum sequence length.
num_layers (int) – The number of layers in the Transformer encoder.
num_heads (int) – The number of attention heads in the Transformer encoder.
dim_feedforward (int) – The dimension of the feedforward network in the Transformer encoder.
dropout (float, optional) – The dropout rate for regularization. Default is 0.1.
dim_pool (int, optional) – The hidden dimension for the pooler. If provided, a pooler is applied to the [CLS] token.
sep_token_id (int, optional) – The token ID for [SEP]. Default is 102.
- forward(input_ids: Tensor, positions: Tensor = None, src_mask: Tensor = None, src_key_padding_mask: Tensor = None) → Tensor [source]¶
Forward pass through the encoder-only Transformer model.
- Parameters:
input_ids (torch.Tensor) – Tensor of input token IDs with shape (batch_size, seq_len).
positions (torch.Tensor, optional) – Tensor of positions (timestamps) with shape (batch_size, seq_len).
src_mask (torch.Tensor, optional) – Source mask for masking certain positions in the input. Shape: (seq_len, seq_len).
src_key_padding_mask (torch.Tensor, optional) – Mask for padding tokens in the input sequence. Shape: (batch_size, seq_len).
- Returns:
Output from the encoder or pooled output if dim_pool is set.
- Return type:
torch.Tensor
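A minimal usage sketch based on the signature above; the hyperparameters, the integer positions standing in for timestamps, and the input shapes are illustrative assumptions, not values prescribed by the library.
>>> import torch
>>> from zae_engine.models.foundations.benchmark import TimeSeriesBert
>>> model = TimeSeriesBert(
...     vocab_size=1000, d_model=128, max_len=512,
...     num_layers=4, num_heads=8, dim_feedforward=512, dim_pool=128,
... )
>>> input_ids = torch.randint(0, 1000, (2, 64))  # (batch_size, seq_len)
>>> positions = torch.arange(64).expand(2, 64)   # assumed integer timestamps
>>> out = model(input_ids, positions=positions)  # pooled [CLS] output, since dim_pool is set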
zae_engine.models.foundations.bert module¶
- class zae_engine.models.foundations.bert.BertEmbedding(vocab_size, max_len, dim_embedding)[source]¶
Bases: Module
- forward(*input_args)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
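Because forward accepts *input_args, the docstring does not pin down the call pattern. The sketch below assumes the module is called with a single tensor of token IDs; the vocabulary size and shapes are placeholders.
>>> import torch
>>> from zae_engine.models.foundations.bert import BertEmbedding
>>> emb = BertEmbedding(vocab_size=30522, max_len=512, dim_embedding=768)
>>> token_ids = torch.randint(0, 30522, (2, 128))
>>> vectors = emb(token_ids)  # assumed call pattern: one tensor of token IDs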
zae_engine.models.foundations.resnet module¶
- zae_engine.models.foundations.resnet.cbam_injection(model: CNNBase) → CNNBase [source]¶
Inject CBAM modules into the given ResNet model.
- Parameters:
model (cnn.CNNBase) – The ResNet model to inject CBAM modules into.
- Returns:
The ResNet model with CBAM modules injected.
- Return type:
CNNBase
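A usage sketch combining cbam_injection with the plain resnet18 factory documented below; the 3-channel 224x224 dummy input is an assumption about the expected image format.
>>> import torch
>>> from zae_engine.models.foundations.resnet import cbam_injection, resnet18
>>> model = cbam_injection(resnet18(pretrained=False))  # retrofit CBAM attention blocks
>>> y = model(torch.randn(1, 3, 224, 224))              # assumed ImageNet-style input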
- zae_engine.models.foundations.resnet.cbamresnet101(pretrained=False) → CNNBase [source]¶
Create a CBAM-ResNet-101 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for CBAM modules. Default is False.
- Returns:
An instance of the CBAM-ResNet-101 model.
- Return type:
CNNBase
References
[1] Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://arxiv.org/abs/1807.06521
- zae_engine.models.foundations.resnet.cbamresnet152(pretrained=False) → CNNBase [source]¶
Create a CBAM-ResNet-152 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for CBAM modules. Default is False.
- Returns:
An instance of the CBAM-ResNet-152 model.
- Return type:
CNNBase
References
[1] Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://arxiv.org/abs/1807.06521
- zae_engine.models.foundations.resnet.cbamresnet18(pretrained=False) → CNNBase [source]¶
Create a CBAM-ResNet-18 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for CBAM modules. Default is False.
- Returns:
An instance of the CBAM-ResNet-18 model.
- Return type:
CNNBase
References
[1] Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://arxiv.org/abs/1807.06521
- zae_engine.models.foundations.resnet.cbamresnet34(pretrained=False) → CNNBase [source]¶
Create a CBAM-ResNet-34 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for CBAM modules. Default is False.
- Returns:
An instance of the CBAM-ResNet-34 model.
- Return type:
CNNBase
References
[1] Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://arxiv.org/abs/1807.06521
- zae_engine.models.foundations.resnet.cbamresnet50(pretrained=False) → CNNBase [source]¶
Create a CBAM-ResNet-50 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for CBAM modules. Default is False.
- Returns:
An instance of the CBAM-ResNet-50 model.
- Return type:
CNNBase
References
[1] Woo, S., Park, J., Lee, J.-Y., & Kweon, I. S. (2018). CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). https://arxiv.org/abs/1807.06521
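All five cbamresnet* factories share the calling convention shown here for the 50-layer variant; note that, per the parameter descriptions above, pretrained=True only prints a notice rather than loading CBAM weights.
>>> from zae_engine.models.foundations.resnet import cbamresnet50
>>> model = cbamresnet50(pretrained=False)  # CBAM-ResNet-50 with randomly initialized weights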
- zae_engine.models.foundations.resnet.resnet101(pretrained=False) → CNNBase [source]¶
Create a ResNet-101 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the ResNet-101 model.
- Return type:
CNNBase
References
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
- zae_engine.models.foundations.resnet.resnet152(pretrained=False) → CNNBase [source]¶
Create a ResNet-152 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the ResNet-152 model.
- Return type:
CNNBase
References
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
- zae_engine.models.foundations.resnet.resnet18(pretrained=False) → CNNBase [source]¶
Create a ResNet-18 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the ResNet-18 model.
- Return type:
CNNBase
References
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
- zae_engine.models.foundations.resnet.resnet34(pretrained=False) → CNNBase [source]¶
Create a ResNet-34 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the ResNet-34 model.
- Return type:
CNNBase
References
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
- zae_engine.models.foundations.resnet.resnet50(pretrained=False) → CNNBase [source]¶
Create a ResNet-50 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the ResNet-50 model.
- Return type:
CNNBase
References
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
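A usage sketch for the plain resnet* factories; the input shape is an assumed ImageNet-style format.
>>> import torch
>>> from zae_engine.models.foundations.resnet import resnet50
>>> model = resnet50(pretrained=True).eval()  # loads weights from the library's checkpoint
>>> with torch.no_grad():
...     logits = model(torch.randn(1, 3, 224, 224))  # assumed 3-channel image input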
- zae_engine.models.foundations.resnet.resnet_deco(n)[source]¶
Decorator to wrap ResNet model creation functions with predefined configurations.
- Parameters:
n (int) – The number of layers for the ResNet model.
- Returns:
Wrapped function with ResNet configurations.
- Return type:
function
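A hypothetical sketch of how such a decorator might be applied. The docstring only states that resnet_deco(n) wraps a model-building function with the n-layer configuration; the exact arguments it injects are not documented, so cfg below is a placeholder.
>>> from zae_engine.models.foundations.resnet import resnet_deco
>>> @resnet_deco(34)  # hypothetical use: fix the wrapped builder to the 34-layer configuration
... def my_resnet34(pretrained=False, **cfg):  # cfg stands in for whatever the decorator injects
...     ...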
- zae_engine.models.foundations.resnet.se_injection(model: CNNBase) → CNNBase [source]¶
Inject SE modules into the given ResNet model.
- Parameters:
model (cnn.CNNBase) – The ResNet model to inject SE modules into.
- Returns:
The ResNet model with SE modules injected.
- Return type:
CNNBase
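se_injection mirrors cbam_injection above; a one-line sketch:
>>> from zae_engine.models.foundations.resnet import resnet34, se_injection
>>> model = se_injection(resnet34(pretrained=False))  # retrofit SE blocks into a plain ResNet-34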
- zae_engine.models.foundations.resnet.seresnet101(pretrained=False) → CNNBase [source]¶
Create an SE-ResNet-101 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for SE modules. Default is False.
- Returns:
An instance of the SE-ResNet-101 model.
- Return type:
CNNBase
References
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
- zae_engine.models.foundations.resnet.seresnet152(pretrained=False) → CNNBase [source]¶
Create an SE-ResNet-152 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for SE modules. Default is False.
- Returns:
An instance of the SE-ResNet-152 model.
- Return type:
CNNBase
References
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
- zae_engine.models.foundations.resnet.seresnet18(pretrained=False) → CNNBase [source]¶
Create an SE-ResNet-18 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for SE modules. Default is False.
- Returns:
An instance of the SE-ResNet-18 model.
- Return type:
CNNBase
References
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
- zae_engine.models.foundations.resnet.seresnet34(pretrained=False) → CNNBase [source]¶
Create an SE-ResNet-34 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for SE modules. Default is False.
- Returns:
An instance of the SE-ResNet-34 model.
- Return type:
CNNBase
References
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
- zae_engine.models.foundations.resnet.seresnet50(pretrained=False) → CNNBase [source]¶
Create an SE-ResNet-50 model with the option to load pre-trained weights.
- Parameters:
pretrained (bool, optional) – If True, prints a message indicating no pre-trained weights for SE modules. Default is False.
- Returns:
An instance of the SE-ResNet-50 model.
- Return type:
CNNBase
References
Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).
zae_engine.models.foundations.unet module¶
- zae_engine.models.foundations.unet.unet_brain(pretrained: bool = False) → AutoEncoder [source]¶
Create a U-Net model with the option to load pre-trained weights.
The U-Net model is a type of convolutional neural network developed for biomedical image segmentation.
References
[1] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in MICCAI 2015. (https://arxiv.org/abs/1505.04597)
- Parameters:
pretrained (bool, optional) – If True, loads pre-trained weights from a specified checkpoint. Default is False.
- Returns:
An instance of the AutoEncoder model with U-Net architecture.
- Return type:
zae_engine.models.autoencoder.AutoEncoder
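A usage sketch; the 3-channel 256x256 input is an assumption about the expected image format, not something the docstring specifies.
>>> import torch
>>> from zae_engine.models.foundations.unet import unet_brain
>>> model = unet_brain(pretrained=False)
>>> mask = model(torch.randn(1, 3, 256, 256))  # assumed input format; dense segmentation-style output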
zae_engine.models.foundations.word_embedding module¶
- class zae_engine.models.foundations.word_embedding.FastTextEmbedding[source]¶
Bases: Module
A PyTorch module that uses pre-trained FastText embeddings from gensim.
- embedding¶
PyTorch embedding layer initialized with FastText pre-trained weights.
- Type:
nn.Embedding
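A usage sketch. How token strings map to the indices expected by the embedding layer is not documented here, so the integer indices below are placeholders; loading gensim's pre-trained FastText vectors may also trigger a large download on first use.
>>> import torch
>>> from zae_engine.models.foundations.word_embedding import FastTextEmbedding
>>> ft = FastTextEmbedding()                           # builds nn.Embedding from FastText weights
>>> vectors = ft.embedding(torch.tensor([[0, 1, 2]]))  # placeholder vocabulary indices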