



Reconstruct-It! A Collection of 3D Reconstruction Datasets and Trained Splats
This collection brings together 77 carefully curated scenes with multi-view sequences, camera parameters, and pre-trained Gaussian Splats—everything you need to jump into radiance field training.
Why Does SGD Love Flat Minima?
This article looks back through the chronicles of Stochastic Gradient Descent (SGD) and examines why the stochastic gradient noise inherent in minibatch updates is largely responsible for SGD working so well, in particular for its tendency to settle in flat minima.
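As a minimal sketch of the object under discussion (notation mine, not necessarily the article's): with a minibatch $B_t$ sampled at step $t$, the update and its noise decomposition read

$$
\theta_{t+1} = \theta_t - \eta\, \nabla \hat{L}_{B_t}(\theta_t),
\qquad
\nabla \hat{L}_{B_t}(\theta_t) = \nabla L(\theta_t) + \xi_t,
$$

where $\nabla L$ is the full-batch gradient and $\xi_t$ is zero-mean sampling noise whose scale shrinks as the batch size grows.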
#BIS-Hard but Not Impossible: Ferromagnetic Potts Model on Expanders
How do you efficiently sample from a distribution that's algorithmically #BIS-hard? The ferromagnetic Potts model is a canonical Markov random field in which monochromatic edges win the popularity contest: colorings with more of them carry more probability mass. This article shows how polymer methods and extremal graph theory crack the sampling puzzle on d-regular weakly expanding graphs.
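For reference, one standard way to write the model (the article's exact normalization may differ): the ferromagnetic $q$-state Potts distribution on a graph $G$ at inverse temperature $\beta > 0$ is

$$
\mu_{G,\beta}(\sigma) \;\propto\; \exp\bigl(\beta\, m(\sigma)\bigr),
\qquad \sigma : V(G) \to \{1,\dots,q\},
$$

where $m(\sigma)$ counts the monochromatic edges of the coloring $\sigma$.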
Compression Unlocks Statistical Learning Secrets
Characterizing the sample complexity of different machine learning tasks is an important question in learning theory. This article reviews the less conventional approach of using compression schemes for proving sample complexity upper bounds, with specific applications in learning under adversarial perturbations and learning Gaussian mixture models.
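One common way the connection is stated, up to constants and with notation that may differ from the article's: if a class admits a sample compression scheme of size $k$, then in the realizable case a sample of size $m$ yields, with probability at least $1 - \delta$, a hypothesis $h$ with

$$
\mathrm{err}(h) \;\lesssim\; \frac{k \log m + \log(1/\delta)}{m},
$$

so bounding the compression size immediately bounds the sample complexity.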
What Is the Largest Integer Not of the Form mb+nc?
What's the largest number that absolutely refuses to be written as mb+nc?
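For the classical two-generator case the answer is known. Assuming $b$ and $c$ are coprime positive integers and $m, n$ range over nonnegative integers (the article's exact conventions may differ), the Sylvester–Frobenius (Chicken McNugget) theorem gives

$$
g(b, c) = bc - b - c,
$$

so for $b = 3$, $c = 5$ the largest unreachable number is $3 \cdot 5 - 3 - 5 = 7$.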
Implementing Self-Supervised Contrastive Learning with NNCLR
Augmented views of the same image aren't the only useful positives: nearest neighbors in the learned embedding space can also help each other learn better representations. This article implements NNCLR, a self-supervised contrastive learning method for computer vision.
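A minimal NumPy sketch of the core idea, not the article's Keras code: each embedding's positive is swapped for its nearest neighbor in a support queue before the usual InfoNCE contrastive loss. Names and shapes here are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale rows to unit length so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def nnclr_positives(z_a, support_queue):
    """For each embedding of view A, return its nearest neighbor from the queue.

    z_a:           (batch, dim) embeddings of view A
    support_queue: (queue_size, dim) embeddings kept from earlier batches
    """
    z_a = l2_normalize(z_a)
    queue = l2_normalize(support_queue)
    sims = z_a @ queue.T           # cosine similarity to every queued embedding
    nn_idx = sims.argmax(axis=1)   # index of the closest queued embedding
    return queue[nn_idx]           # these act as positives for view B

def info_nce_loss(positives, z_b, temperature=0.1):
    # Standard InfoNCE: each nearest-neighbor positive should match its own
    # view-B embedding more closely than any other embedding in the batch.
    z_b = l2_normalize(z_b)
    logits = positives @ z_b.T / temperature          # (batch, batch)
    labels = np.arange(len(z_b))
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[labels, labels].mean()

# Toy usage with random embeddings (illustration only).
rng = np.random.default_rng(0)
z_a, z_b = rng.normal(size=(8, 32)), rng.normal(size=(8, 32))
queue = rng.normal(size=(256, 32))
loss = info_nce_loss(nnclr_positives(z_a, queue), z_b)
```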
Implementing Swin Transformers
This article implements Swin Transformers, a hierarchical vision Transformer that computes self-attention within shifted local windows and serves as a general-purpose backbone for computer vision.
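A minimal NumPy sketch of the window partitioning at the heart of the shifted-window scheme; the shapes, shift size, and helper names are illustrative assumptions, not the article's code.

```python
import numpy as np

def window_partition(x, window_size):
    """Split a feature map into non-overlapping windows.

    x: (H, W, C) feature map; H and W are assumed to be multiples of window_size.
    Returns: (num_windows, window_size * window_size, C) token groups, one per window.
    """
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size, W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

def shift_windows(x, shift):
    # Cyclically shift the feature map so the next attention block sees windows
    # that straddle the previous block's window boundaries.
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

# Toy usage: a 56x56 map with 96 channels and 7x7 windows -> 64 windows of 49 tokens.
feat = np.zeros((56, 56, 96), dtype=np.float32)
tokens = window_partition(shift_windows(feat, shift=3), window_size=7)
print(tokens.shape)  # (64, 49, 96)
```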
Gradient Centralization for Better Training Performance
Gradient Centralization improves DNN training by centralizing gradient vectors to have zero mean, which helps with training stability and performance.
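A minimal NumPy sketch of the operation, assuming PyTorch-style weight shapes with the output dimension first; this is an illustration of the technique, not the article's training code.

```python
import numpy as np

def centralize_gradient(grad):
    """Gradient Centralization: subtract the mean over all axes except the output one.

    grad: gradient of a weight tensor, e.g. (out_features, in_features) for a dense
    layer or (out_channels, in_channels, kh, kw) for a conv layer. 1-D gradients
    (biases, norm parameters) are returned unchanged.
    """
    if grad.ndim <= 1:
        return grad
    axes = tuple(range(1, grad.ndim))                  # every axis except the output axis
    return grad - grad.mean(axis=axes, keepdims=True)  # enforce zero mean per output unit

def sgd_step_with_gc(weight, grad, lr=0.01):
    # Plain SGD update with the centralized gradient plugged in.
    return weight - lr * centralize_gradient(grad)

# Toy usage on a dense-layer gradient.
rng = np.random.default_rng(0)
w, g = rng.normal(size=(16, 64)), rng.normal(size=(16, 64))
w = sgd_step_with_gc(w, g)
print(centralize_gradient(g).mean(axis=1)[:3])  # ~0 for each output unit
```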