Hello! 👋 I'm a Data Scientist @ Bidgely. I graduated in Mathematics & Computing from IIT Kharagpur in 2021.

My interests include Computer Vision, Deep Learning, Software and Web development in Python and JS, and Asynchronous Programming.

Feel free to reach out to me @ dibyadas998 at gmail dot com

Résumé


Blog ↓

Implicit Rank Minimization in Gradient Descent

October 14, 2020

I came across the paper “Implicit Rank-Minimizing Autoencoder” [1] by FAIR, which Yann LeCun shared on his Facebook timeline. In this paper, the authors show that inserting a few linear layers between the encoder and the decoder reduces the effective dimensionality (rank) of the latent space. The authors build on the results of [2], which showed how overparameterization of linear neural networks results in implicit regularization of the rank when trained via gradient descent (GD). ... Read more
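A minimal sketch of the architectural idea (my own illustration in PyTorch, not the paper's code; the layer sizes and the number of extra linear layers are hypothetical): a plain autoencoder with a stack of square, bias-free linear layers applied to the latent code.

```python
import torch.nn as nn

class IRMAESketch(nn.Module):
    """Autoencoder with extra linear layers between encoder and decoder.

    Composing the extra linear layers is still a single linear map, but
    training the overparameterized product with GD is what implicitly
    biases the latent code toward low rank. Sizes here are illustrative.
    """
    def __init__(self, input_dim=784, latent_dim=128, num_linear=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # The inserted linear layers (no nonlinearity, no bias)
        self.linear_stack = nn.Sequential(
            *[nn.Linear(latent_dim, latent_dim, bias=False)
              for _ in range(num_linear)]
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim),
        )

    def forward(self, x):
        z = self.linear_stack(self.encoder(x))
        return self.decoder(z), z
```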

Sherman-Morrison-Woodbury | Effect of low-rank changes to matrix inverse

October 1, 2020

I recently came across this tweet about the Sherman-Morrison-Woodbury formula (henceforth referred to as SMW in this post). I was reading about linear regression and realized that this formula has a very practical application there. I will state the formula and briefly explain one of its applications. The Sherman-Morrison-Woodbury formula is: $$(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1+v^TA^{-1}u}$$ where $A$ is an invertible $n \times n$ matrix and $u$ and $v$ are $n \times 1$ column vectors. ... Read more
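A quick numerical sanity check of this rank-1 case (strictly, the Sherman-Morrison formula) in NumPy; the matrix size and random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # keep A well-conditioned
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

A_inv = np.linalg.inv(A)
# Sherman-Morrison: update A^{-1} in O(n^2) instead of re-inverting in O(n^3)
smw = A_inv - (A_inv @ u @ v.T @ A_inv) / (1.0 + v.T @ A_inv @ u)

direct = np.linalg.inv(A + u @ v.T)
print(np.allclose(smw, direct))  # True
```

This is also where the linear-regression connection comes from: adding one observation $x$ changes $X^TX$ by the rank-1 term $xx^T$, so $(X^TX)^{-1}$ can be updated cheaply rather than recomputed from scratch.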

Hucore theme & Hugo ♥