Defining the General Architectures for 3D Applications

1. Init Function and Single Layers

Evolving Normalization-Activation Layers

Based on the paper https://arxiv.org/pdf/2004.02967.pdf, inspired by https://github.com/digantamisra98/EvoNorm


instance_std[source]

instance_std(x, eps=1e-05)

group_std[source]

group_std(x, groups=32, eps=1e-05)
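A minimal sketch of the two statistics helpers; the (N, C, H, W) layout and the reduction over the spatial dimensions are assumptions based on the reference EvoNorm implementation:

import torch

x = torch.randn(8, 32, 16, 16)    # (N, C, H, W) feature map
s_inst = instance_std(x)          # std per (sample, channel) over H and W
s_grp = group_std(x, groups=32)   # std per channel group, broadcastable to x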

class EvoNorm2D[source]

EvoNorm2D(input, non_linear=True, version='S0', momentum=0.9, training=True) :: Module

A 2D Evolving Normalization-Activation layer. The version argument selects the EvoNorm variant (the default 'S0' is the sample-based variant from the paper), non_linear toggles the evolved nonlinearity, and momentum controls the running-statistics update used by the batch-based variants.
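A minimal usage sketch, assuming (as in the reference EvoNorm implementation) that the first argument input is the number of channels:

import torch

evo = EvoNorm2D(16, non_linear=True, version='S0')
x = torch.randn(4, 16, 32, 32)  # (N, C, H, W)
y = evo(x)                      # normalized and activated, same shape as x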

weights_init[source]

weights_init(m)

Initialize the weight parameters of a module depending on its type (Conv or BatchNorm).
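weights_init is meant to be passed to nn.Module.apply, which invokes it on every submodule; a minimal sketch:

import torch.nn as nn

net = nn.Sequential(nn.Conv3d(1, 16, 3), nn.BatchNorm3d(16))
net.apply(weights_init)  # re-initializes the Conv and BatchNorm parameters in place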

class Quantize[source]

Quantize(dim, n_embed, decay=0.99, eps=1e-05) :: Module

Quantization layer, inspired by https://github.com/deepmind/sonnet and modified from https://github.com/rosinality/vq-vae-2-pytorch

import torch

quant = Quantize(64, 100)
inp = torch.randn(16, 16, 8, 64)  # channel-last input, last dim = embedding dim
quantize, diff, embed_ind = quant(inp)
print(quantize.shape, diff, embed_ind.shape)
torch.Size([16, 16, 8, 64]) tensor(1.2985) torch.Size([16, 16, 8])

2. Block of Layers

class ResNetBlock[source]

ResNetBlock(n_chan, convsize=3, activation=ReLU(inplace=True), init_w='weights_init', dim=3, evo_on=False) :: Module

An individually configurable ResNet block for 3-dimensional convolutions, based on https://arxiv.org/abs/1512.03385
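A minimal sketch; since the block keeps n_chan channels and a standard ResNet block preserves the spatial size, the output shape should match the input (an assumption about this implementation):

import torch

block = ResNetBlock(16)              # dim=3: expects 5D input
inp = torch.randn(2, 16, 8, 32, 32)  # (N, C, D, H, W)
out = block(inp)                     # expected: same shape as inp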

class ConvBn[source]

ConvBn(in_chan, out_chan, convsize=3, stride=2, activation=LeakyReLU(negative_slope=0.2, inplace=True), init_w='weights_init', padding=1, dim=3, p_drop=0, evo_on=False) :: Module

An individually configurable block for 3-dimensional convolutions with batch normalization and dropout

m = ConvBn(1, 16, convsize=4, stride=(2, 2, 2), padding=(1, 1, 1))
inp = torch.randn(20, 1, 16, 64, 64)
output = m(inp)
print(output.shape)
torch.Size([20, 16, 8, 32, 32])

class ConvTpBn[source]

ConvTpBn(in_chan, out_chan, convsize=3, stride=2, activation=ReLU(inplace=True), init_w='weights_init', padding=1, dim=3, evo_on=False) :: Module

An individually configurable block for 3-dimensional transposed convolutions with batch normalization

m = ConvTpBn(1, 16, convsize=3, stride=(2, 2, 2), padding=(1, 1, 1))
output = m(inp)
print(output.shape)
torch.Size([20, 16, 31, 127, 127])

class LinearSigmoid[source]

LinearSigmoid(hidden_dim, y_dim, bias=False) :: Module

Helper class providing a simple network ending: a linear layer followed by a Sigmoid
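A minimal sketch, assuming hidden_dim is the incoming feature size and y_dim the output size:

import torch

head = LinearSigmoid(128, 1)
probs = head(torch.randn(4, 128))  # the Sigmoid squashes the outputs into (0, 1)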

3. The Generic Convolutional Network Block

class DownUpConv[source]

DownUpConv(args, n_fea_in, n_fea_next, pic_size, depth, move='down', p_drop=0) :: Module

A helper type containing the generic convolutional network for 3D up- or downscaling, depending on "move"

4. Whole Networks

class Encoder[source]

Encoder(args, init_w='weights_init', vae_mode=True) :: Module

Encoder with a 3-dimensional convolutional setup

class Decoder[source]

Decoder(args, init_w='weights_init') :: Module

Decoder class (can also serve as a generator)

class Discriminator[source]

Discriminator(args, diag_dim=1, init_w='weights_init', wgan=False) :: Module

Discriminator class, either only for true/fake discrimination or as a classifier for distinguishing between several classes