torch.nn.intrinsic.quantized
This module implements quantized versions of fused operations like conv + relu.
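These fused modules are typically not constructed by hand: they are produced by the eager-mode quantization workflow, which first fuses a float conv + relu pair and then swaps in the quantized version during conversion. Below is a minimal sketch of that workflow using the torch.quantization eager-mode APIs (fuse_modules, prepare, convert); the model definition and calibration input are hypothetical:

>>> import torch
>>> import torch.nn as nn
>>> class M(nn.Module):  # hypothetical float model with a conv followed by a relu
...     def __init__(self):
...         super().__init__()
...         self.conv = nn.Conv2d(3, 16, 3)
...         self.relu = nn.ReLU()
...     def forward(self, x):
...         return self.relu(self.conv(x))
...
>>> m = M().eval()  # fusion for inference requires eval mode
>>> fused = torch.quantization.fuse_modules(m, [['conv', 'relu']])
>>> fused.qconfig = torch.quantization.default_qconfig
>>> prepared = torch.quantization.prepare(fused)
>>> _ = prepared(torch.randn(4, 3, 32, 32))  # calibration pass with representative data
>>> quantized = torch.quantization.convert(prepared)
>>> print(type(quantized.conv).__name__)  # the fused pair is now a single quantized module
ConvReLU2d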
ConvReLU2d
class torch.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

A ConvReLU2d module is a fused module of Conv2d and ReLU.

We adopt the same interface as torch.nn.quantized.Conv2d.

Variables: Same as torch.nn.quantized.Conv2d
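Because the interface matches torch.nn.quantized.Conv2d, the module can also be exercised directly. A minimal usage sketch mirroring the torch.nn.quantized.Conv2d example; the scale and zero point passed to quantize_per_tensor are illustrative only:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.intrinsic.quantized.ConvReLU2d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50, 100)
>>> # quantized modules operate on quantized tensors (illustrative scale/zero_point)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> print(output.size())
torch.Size([20, 33, 24, 49])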
ConvReLU3d
class torch.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros')

A ConvReLU3d module is a fused module of Conv3d and ReLU.

We adopt the same interface as torch.nn.quantized.Conv3d.

Variables: Same as torch.nn.quantized.Conv3d
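The 3d variant follows the same pattern with a 5-dimensional (N, C, D, H, W) input. A sketch assuming the torch.nn.quantized.Conv3d interface, again with illustrative quantization parameters:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.intrinsic.quantized.ConvReLU3d(4, 8, 3)
>>> input = torch.randn(1, 4, 10, 32, 32)
>>> # quantize the 5-D input before calling the module (illustrative scale/zero_point)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> print(output.size())
torch.Size([1, 8, 8, 30, 30])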
LinearReLU
class torch.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)

A LinearReLU module fused from Linear and ReLU modules.

We adopt the same interface as torch.nn.quantized.Linear.

Variables: Same as torch.nn.quantized.Linear
Examples:

>>> import torch
>>> import torch.nn as nn
>>> m = nn.intrinsic.quantized.LinearReLU(20, 30)
>>> input = torch.randn(128, 20)
>>> # quantized modules operate on quantized tensors, so quantize the input first
>>> q_input = torch.quantize_per_tensor(input, scale=0.1, zero_point=10, dtype=torch.quint8)
>>> output = m(q_input)
>>> print(output.size())
torch.Size([128, 30])