This module extends nn.ModuleList to implement an additive connection.
Each input tensor is passed through its corresponding module,
and the output tensors are summed. If the shapes of the output tensors
do not match, an error is raised.
Parameters:
*args (nn.Module) – Sequence of PyTorch modules. Each module will be applied to
a corresponding input tensor in the forward pass.
Applies each module to its corresponding input tensor and returns
the sum of the output tensors. If the shapes of the output tensors
do not match, an error is raised.
Parameters:
*inputs (torch.Tensor) – Sequence of input tensors. Each tensor is passed through its corresponding module.
Returns:
The sum of the output tensors of each module.
Return type:
torch.Tensor
Raises:
ValueError – If the output tensors have mismatched shapes.
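The behavior described above can be sketched as a small nn.ModuleList subclass. This is a minimal illustration, not the library's actual implementation; the class name `ParallelSum` and the error messages are assumptions.

```python
import torch
import torch.nn as nn


class ParallelSum(nn.ModuleList):
    """Sketch: applies each module to its corresponding input and sums the outputs.

    The class name and error messages are illustrative, not the real API.
    """

    def __init__(self, *args: nn.Module) -> None:
        super().__init__(args)

    def forward(self, *inputs: torch.Tensor) -> torch.Tensor:
        if len(inputs) != len(self):
            raise ValueError(f"Expected {len(self)} inputs, got {len(inputs)}")
        outputs = [module(x) for module, x in zip(self, inputs)]
        # All outputs must share one shape for the elementwise sum.
        shapes = {tuple(out.shape) for out in outputs}
        if len(shapes) > 1:
            raise ValueError(f"Output tensors have mismatched shapes: {shapes}")
        return torch.stack(outputs).sum(dim=0)


# Usage: two branches over two differently-sized inputs, summed elementwise.
layer = ParallelSum(nn.Linear(4, 8), nn.Linear(6, 8))
a, b = torch.randn(2, 4), torch.randn(2, 6)
out = layer(a, b)  # shape (2, 8)
```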
Dynamic Pooling Layer using Gumbel Softmax trick for discrete pooling ratios.
This layer dynamically adjusts the pooling ratio using a learnable parameter,
allowing for adaptive pooling during training. The Gumbel Softmax trick is
applied to ensure the ratio remains discrete.
…
:param ch: Number of channels in the input tensor (signal or 1D array)
:type ch: int
:param num_groups: Number of groups into which the channels are divided
:type num_groups: int
:param kernel_size: Size of the convolving kernel
:type kernel_size: int
:param stride: Stride of the convolution. Default: 1
:type stride: int
:param reduction_ratio: Ratio of channel reduction. This value must be a divisor of ch.
:type reduction_ratio: int
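The discrete ratio selection via the Gumbel Softmax trick can be sketched as below. The candidate ratios, class name, and parameter names are assumptions for illustration, not this layer's actual internals; only `torch.nn.functional.gumbel_softmax` is the real API.

```python
import torch
import torch.nn.functional as F


class DynamicRatioSelector(torch.nn.Module):
    """Sketch: pick a discrete pooling ratio with the Gumbel Softmax trick.

    Candidate ratios and names are illustrative assumptions.
    """

    def __init__(self, ratios=(1, 2, 4)):
        super().__init__()
        self.register_buffer("ratios", torch.tensor(ratios, dtype=torch.float32))
        # Learnable preference over the candidate ratios.
        self.logits = torch.nn.Parameter(torch.zeros(len(ratios)))

    def forward(self) -> torch.Tensor:
        # hard=True samples a one-hot vector (a discrete choice) while the
        # straight-through estimator keeps gradients flowing to the logits.
        one_hot = F.gumbel_softmax(self.logits, tau=1.0, hard=True)
        return (one_hot * self.ratios).sum()


selector = DynamicRatioSelector()
ratio = selector()  # one of 1.0, 2.0, 4.0; differentiable w.r.t. the logits
```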
Although the recipe for forward pass needs to be defined within
this function, one should call the Module instance afterwards
instead of this since the former takes care of running the
registered hooks while the latter silently ignores them.
Apply sinusoidal positional encoding to the input tensor.
Parameters:
x (torch.Tensor) – Input tensor of shape (batch_size, seq_len, d_model).
positions (torch.Tensor, optional) – Optional tensor of shape (batch_size, seq_len) specifying the positions (e.g., timestamps) for each element in the sequence.
If not provided, the default positions (0 to seq_len - 1) are used.
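A functional sketch of this encoding, assuming the classic Transformer sinusoid formulation; the function name is illustrative, and `positions` follows the optional-timestamp behavior described above.

```python
import math
from typing import Optional

import torch


def sinusoidal_encoding(x: torch.Tensor, positions: Optional[torch.Tensor] = None) -> torch.Tensor:
    """Sketch: add sinusoidal positional encodings to x of shape
    (batch_size, seq_len, d_model). `positions` may carry custom indices
    (e.g., timestamps) of shape (batch_size, seq_len)."""
    batch, seq_len, d_model = x.shape
    if positions is None:
        # Default positions 0 .. seq_len - 1, shared across the batch.
        positions = torch.arange(seq_len, dtype=torch.float32).expand(batch, seq_len)
    positions = positions.float().unsqueeze(-1)  # (batch, seq_len, 1)
    # Frequency terms 10000^(-2i/d_model) for the even dimensions.
    div = torch.exp(
        torch.arange(0, d_model, 2, dtype=torch.float32) * (-math.log(10000.0) / d_model)
    )
    pe = torch.zeros_like(x)
    pe[..., 0::2] = torch.sin(positions * div)
    pe[..., 1::2] = torch.cos(positions * div)
    return x + pe


x = torch.zeros(2, 5, 8)
out = sinusoidal_encoding(x)  # shape (2, 5, 8)
```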
This module extends nn.Sequential to implement a residual connection. The input tensor is added
to the output tensor of the sequence of modules provided during initialization, similar to a
residual block in ResNet architectures.
Parameters:
*args (nn.Module) – Sequence of PyTorch modules to be applied to the input tensor.
Although the recipe for forward pass needs to be defined within
this function, one should call the Module instance afterwards
instead of this since the former takes care of running the
registered hooks while the latter silently ignores them.
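The residual wrapper described above can be sketched in a few lines; the class name `Residual` is illustrative. Note the wrapped modules must preserve the input shape so the skip addition is well-defined.

```python
import torch
import torch.nn as nn


class Residual(nn.Sequential):
    """Sketch: an nn.Sequential whose input is added to its output,
    like a residual block in ResNet architectures. Name is illustrative."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # super().forward runs the child modules in order, as a plain Sequential.
        return x + super().forward(x)


block = Residual(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
x = torch.randn(2, 8)
out = block(x)  # same shape as x: (2, 8)
```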