
class MoCoAE[source]

MoCoAE(device, args) :: AbsModel

The MoCoAE implements the autoencoder-based architecture together with the modified MoCo pretext task.

Functions handling the queue

concat_all_gather[source]

concat_all_gather(tensor)

Performs an all_gather operation on the provided tensor. Warning: torch.distributed.all_gather has no gradient.
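A minimal sketch of such a helper, following the usual MoCo implementation pattern (the actual body lives behind the `[source]` link). The single-process fallback is an assumption added here so the function also runs without a distributed process group:

```python
import torch
import torch.distributed as dist

def concat_all_gather(tensor):
    """Gather `tensor` from every process and concatenate along dim 0.

    torch.distributed.all_gather does not propagate gradients, so the
    gathered result is detached from the autograd graph.
    """
    # Assumed fallback for single-process runs (not part of the original doc).
    if not (dist.is_available() and dist.is_initialized()):
        return tensor
    gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
    dist.all_gather(gathered, tensor)
    return torch.cat(gathered, dim=0)
```

In a single-process setting the input comes back unchanged; under `torch.distributed` the batch dimension grows by the world size.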

Functions handling the K and Q network updates:

copy_q2k_params[source]

copy_q2k_params(Q_network:Module, K_network:Module)

Helper function to copy parameters from network Q to network K and deactivate gradient computation on K.
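A sketch of what this helper does, assuming the two networks share the same architecture so their parameters line up pairwise:

```python
import torch
import torch.nn as nn

def copy_q2k_params(Q_network: nn.Module, K_network: nn.Module):
    """Initialize K with Q's weights and freeze K."""
    for q_param, k_param in zip(Q_network.parameters(), K_network.parameters()):
        k_param.data.copy_(q_param.data)
        # K is updated only via the momentum rule, never by the optimizer.
        k_param.requires_grad = False
```

Freezing K here is what makes the key encoder a slowly moving target: the optimizer never touches it, and all of its movement comes from `momentum_update` below.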

momentum_update[source]

momentum_update(Q_network:Module, K_network:Module, m:float)

Momentum update of the key network based on the query network
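The update rule from the MoCo paper is k ← m·k + (1 − m)·q for every parameter pair; a minimal sketch:

```python
import torch
import torch.nn as nn

def momentum_update(Q_network: nn.Module, K_network: nn.Module, m: float):
    """Exponential moving average update of the key network K from Q."""
    with torch.no_grad():
        for q_param, k_param in zip(Q_network.parameters(), K_network.parameters()):
            # k <- m * k + (1 - m) * q
            k_param.data.mul_(m).add_(q_param.data, alpha=1.0 - m)
```

With a momentum close to 1 (the paper uses m = 0.999), K drifts slowly toward Q, which keeps the keys in the queue consistent across iterations.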

Momentum Contrastive Loss:

$ Loss = -\log \left( \frac{\exp\left(q \cdot k_+ / \tau\right)}{\sum_{i=0}^{n} \exp\left(q \cdot k_i / \tau\right)} \right) $

momentumContrastiveLoss[source]

momentumContrastiveLoss(k, W, q, queue, device, tau=1)

Calculates the loss of the network from the current key (k), the query (q), and the overall queue (queue). We follow Algorithm 1 of the MoCo paper: https://arxiv.org/pdf/1911.05722.pdf
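A sketch of the loss as given in Algorithm 1 of the MoCo paper. The repo's version also takes `W` and `device` arguments whose roles are not documented here, so this simplified signature is an assumption:

```python
import torch
import torch.nn.functional as F

def momentum_contrastive_loss(q, k, queue, tau=1.0):
    """InfoNCE loss over one positive key and a queue of negatives.

    q, k:   (N, C) query / key features for the current batch
    queue:  (C, K) dictionary of negative keys
    """
    # Positive logits: one per query, from its matching key.   (N, 1)
    l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
    # Negative logits: each query against every queued key.    (N, K)
    l_neg = torch.einsum("nc,ck->nk", q, queue)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau            # (N, 1+K)
    # The positive key sits at index 0 for every sample, so the
    # contrastive loss reduces to cross entropy with label 0.
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

This matches the formula above: the numerator is the positive pair's similarity, the denominator sums over the positive plus all queued negatives, and cross entropy supplies the -log softmax.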

Testing