zae_engine.nn_night.blocks package

Submodules

zae_engine.nn_night.blocks.conv_block module

class zae_engine.nn_night.blocks.conv_block.ConvBlock(ch_in: int = 3, ch_out: int = 3, kernel_size: int | ~typing.Tuple[int, int] = 3, dilate: int = 1, pre_norm: bool = False, conv_layer: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.conv.Conv2d'>, norm_layer: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, act_layer: ~torch.nn.modules.module.Module = <class 'torch.nn.modules.activation.ReLU'>)[source]

Bases: Module

Residual Block with Convolution, Batch Normalization, and ReLU Activation.

This module performs a convolution followed by batch normalization and ReLU activation. It serves as a fundamental building block in the RSU (Residual U-block) structure.

Parameters:
  • ch_in (int, optional) – Number of input channels. Default is 3.

  • ch_out (int, optional) – Number of output channels. Default is 3.

  • kernel_size (int | Tuple[int, int], optional) – Kernel size for the convolution. Default is 3.

  • dilate (int, optional) – Dilation rate for the convolution. Default is 1.

  • pre_norm (bool, optional) – Whether to apply normalization before the convolution. Default is False.

  • conv_layer (nn.Module, optional) – Convolution layer class to use. Default is nn.Conv2d.

  • norm_layer (nn.Module, optional) – Normalization layer class to use. Default is nn.BatchNorm2d.

  • act_layer (nn.Module, optional) – Activation layer class to use. Default is nn.ReLU.

forward(x: Tensor) → Tensor[source]

Forward pass through the ConvBlock.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch_size, ch_in, height, width).

Returns:

Output tensor of shape (batch_size, ch_out, height, width).

Return type:

torch.Tensor

kernel_type

alias of int | Tuple[int, int]
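
Examples

A minimal usage sketch; the channel counts and input shape below are illustrative assumptions, not values required by the block.

>>> import torch
>>> from zae_engine.nn_night.blocks.conv_block import ConvBlock
>>> block = ConvBlock(ch_in=3, ch_out=16, kernel_size=3)
>>> x = torch.randn(2, 3, 64, 64)  # (batch_size, ch_in, height, width), illustrative shape
>>> out = block(x)  # expected shape: (2, 16, 64, 64), per the forward() documentation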

zae_engine.nn_night.blocks.resblock module

class zae_engine.nn_night.blocks.resblock.BasicBlock(ch_in: int, ch_out: int, stride: int = 1, groups: int = 1, dilation: int = 1, norm_layer: ~typing.Callable[[...], ~torch.nn.modules.module.Module] = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]

Bases: Module

Basic residual block.

Parameters:
  • ch_in (int) – Number of input channels.

  • ch_out (int) – Number of output channels.

  • stride (int, optional) – Stride of the convolution. Default is 1.

  • groups (int, optional) – Number of groups for the convolution. Default is 1.

  • dilation (int, optional) – Dilation for the convolution. Default is 1.

  • norm_layer (Callable[..., nn.Module], optional) – Normalization layer to use. Default is nn.BatchNorm2d.

References

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

expansion: int = 1
forward(x: Tensor) → Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
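
Examples

A minimal usage sketch; the channel count and input shape below are illustrative assumptions. ch_in equals ch_out and stride is 1 here so the residual addition is shape-compatible regardless of whether the block adds a projection shortcut.

>>> import torch
>>> from zae_engine.nn_night.blocks.resblock import BasicBlock
>>> block = BasicBlock(ch_in=64, ch_out=64)
>>> x = torch.randn(2, 64, 32, 32)  # (batch_size, ch_in, height, width), illustrative shape
>>> out = block(x)  # residual output with ch_out channels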

class zae_engine.nn_night.blocks.resblock.Bottleneck(ch_in: int, ch_out: int, stride: int = 1, groups: int = 1, dilation: int = 1, norm_layer: ~typing.Callable[[...], ~torch.nn.modules.module.Module] = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>)[source]

Bases: Module

Bottleneck residual block.

Parameters:
  • ch_in (int) – Number of input channels.

  • ch_out (int) – Number of output channels.

  • stride (int, optional) – Stride of the convolution. Default is 1.

  • groups (int, optional) – Number of groups for the convolution. Default is 1.

  • dilation (int, optional) – Dilation for the convolution. Default is 1.

  • norm_layer (Callable[..., nn.Module], optional) – Normalization layer to use. Default is nn.BatchNorm2d.

References

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

expansion: int = 4
forward(x: Tensor) → Tensor[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
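
Examples

A minimal usage sketch; the channel counts and input shape are illustrative assumptions chosen under the standard ResNet bottleneck convention (He et al., 2016), in which the block expands its output by the expansion factor of 4.

>>> import torch
>>> from zae_engine.nn_night.blocks.resblock import Bottleneck
>>> block = Bottleneck(ch_in=256, ch_out=64)  # 64 * expansion (4) = 256 matches ch_in for the residual addition (assumed convention)
>>> x = torch.randn(2, 256, 32, 32)  # (batch_size, ch_in, height, width), illustrative shape
>>> out = block(x)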

zae_engine.nn_night.blocks.spatial_attention module

class zae_engine.nn_night.blocks.spatial_attention.CBAM1d(ch_in: int, kernel_size: int | Tuple[int] = 7, reduction: int = 8, bias: bool = False, conv_pool: bool = False, *args, **kwargs)[source]

Bases: Module

Convolutional Block Attention Module for 1D inputs.

This module implements the Convolutional Block Attention Module (CBAM).

Parameters:
  • ch_in (int) – The channel-wise dimension of the input tensor.

  • kernel_size (_size_1_t, optional) – The kernel size for the convolutional layer. Default is 7.

  • reduction (int, optional) – The reduction ratio for the SE block. Default is 8. Must be a divisor of ch_in.

  • bias (bool, optional) – Whether to use bias in the convolutional and fully connected layers. Default is False.

  • conv_pool (bool, optional) – If True, use convolutional pooling for the spatial attention mechanism. Default is False.

References

Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). CBAM: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV) (pp. 3-19).

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

spatial_wise(x: Tensor)[source]
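
Examples

A minimal usage sketch; the channel count, sequence length, and batch size are illustrative assumptions. reduction must divide ch_in, as noted above.

>>> import torch
>>> from zae_engine.nn_night.blocks.spatial_attention import CBAM1d
>>> cbam = CBAM1d(ch_in=64, reduction=8)  # 8 divides 64
>>> x = torch.randn(2, 64, 128)  # (batch_size, channels, length), illustrative 1D input
>>> out = cbam(x)  # attention-reweighted tensor, expected to keep the input shape
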
class zae_engine.nn_night.blocks.spatial_attention.SE1d(ch_in: int, reduction: int = 8, bias: bool = False, *args, **kwargs)[source]

Bases: Module

Squeeze and Excitation module for 1D inputs.

This module implements the Squeeze and Excitation (SE) block.

Parameters:
  • ch_in (int) – The channel-wise dimension of the input tensor.

  • reduction (int, optional) – The reduction ratio for the SE block. Default is 8. Must be a divisor of ch_in.

  • bias (bool, optional) – Whether to use bias in the fully connected layers. Default is False.

References

Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).

channel_wise(x)[source]
forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
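
Examples

A minimal usage sketch; the channel count, sequence length, and batch size are illustrative assumptions. reduction must divide ch_in, as noted above.

>>> import torch
>>> from zae_engine.nn_night.blocks.spatial_attention import SE1d
>>> se = SE1d(ch_in=64, reduction=8)  # 8 divides 64
>>> x = torch.randn(2, 64, 128)  # (batch_size, channels, length), illustrative 1D input
>>> out = se(x)  # channel-recalibrated tensor, expected to keep the input shape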

zae_engine.nn_night.blocks.unet_block module

class zae_engine.nn_night.blocks.unet_block.RSUBlock(ch_in: int, ch_mid: int, ch_out: int, height: int = 7, dilation_height: int = 7, pool_size: int = 2)[source]

Bases: Module

Residual U-block (RSU) implementation.

Parameters:
  • ch_in (int) – Number of input channels.

  • ch_mid (int) – Number of middle channels.

  • ch_out (int) – Number of output channels.

  • height (int) – Number of layers in the RSU block (e.g., RSU4 has height=4, RSU7 has height=7).

  • dilation_height (int) – Dilation rate for convolutions within the block.

  • pool_size (int, optional) – Pooling kernel size. Default is 2.

References

Qin, X., Zhang, Z., Huang, C., Dehghan, M., Zaiane, O. R., & Jagersand, M. (2020). U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognition, 106, 107404.

down_layer(at_first: bool = False)[source]

Returns a downsampling layer or identity based on the position.

forward(x: Tensor) → Tensor[source]

Forward pass through the RSUBlock.

Parameters:

x (torch.Tensor) – Input tensor of shape (batch_size, channels, height, width).

Returns:

Output tensor of shape (batch_size, ch_out, height, width).

Return type:

torch.Tensor

up_layer(at_last: bool = False)[source]

Returns an upsampling layer or identity based on the position.
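
Examples

A minimal usage sketch; the channel counts and spatial size are illustrative assumptions. The 256x256 input is chosen so that repeated pooling with pool_size=2 inside a height=7 block stays well-defined.

>>> import torch
>>> from zae_engine.nn_night.blocks.unet_block import RSUBlock
>>> rsu = RSUBlock(ch_in=3, ch_mid=16, ch_out=64, height=7, dilation_height=7)
>>> x = torch.randn(1, 3, 256, 256)  # (batch_size, channels, height, width), illustrative shape
>>> out = rsu(x)  # expected shape: (1, 64, 256, 256), per the forward() documentation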

class zae_engine.nn_night.blocks.unet_block.UNetBlock(ch_in: int, ch_out: int, stride: int = 1, groups: int = 1, norm_layer: ~typing.Callable[[...], ~torch.nn.modules.module.Module] = <class 'torch.nn.modules.batchnorm.BatchNorm2d'>, *args, **kwargs)[source]

Bases: BasicBlock

Two consecutive [Conv-Normalization-Activation] blocks for the UNet architecture.

This module is a modified version of the BasicBlock used in ResNet, adapted for the UNet architecture.

Parameters:
  • ch_in (int) – The number of input channels.

  • ch_out (int) – The number of output channels.

  • stride (int, optional) – The stride of the convolution. Default is 1.

  • groups (int, optional) – The number of groups for the convolution. Default is 1.

  • norm_layer (Callable[..., nn.Module], optional) – The normalization layer to use. Default is nn.BatchNorm2d.

expansion

The expansion factor of the block, set to 1.

Type:

int

References

Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention (pp. 234-241).

expansion: int = 1
forward(x: Tensor) → Tensor[source]

Forward pass for the UNetBlock.

Parameters:

x (Tensor) – Input tensor.

Returns:

Output tensor after applying the block operations.

Return type:

Tensor
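
Examples

A minimal usage sketch; the channel counts and input shape are illustrative assumptions.

>>> import torch
>>> from zae_engine.nn_night.blocks.unet_block import UNetBlock
>>> block = UNetBlock(ch_in=64, ch_out=128)
>>> x = torch.randn(2, 64, 32, 32)  # (batch_size, ch_in, height, width), illustrative shape
>>> out = block(x)  # two conv-norm-act stages, expected to have ch_out channels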

Module contents