Hello! 👋 I worked as a Data Scientist @ Bidgely from 2021 to 2024. I graduated in Mathematics & Computing from IIT Kharagpur in 2021.

I took a break from my job to focus on a research-oriented career path. I am actively looking for jobs in ML and Data Science. Please do reach out if you have any opportunities in research roles.

Feel free to reach out to me @ dibyadas998 at gmail dot com

Résumé


Blog ↓

t-distributed Stochastic Neighbour Embedding (t-SNE)

October 3, 2024

I first heard about t-SNE a few years back as a tool to help visualize high-dimensional data in 2D or 3D. I had come across these amazing visualizations which displayed the MNIST dataset as a 2D map, and they immediately drew my attention. Another one I came across showed the different clusters of the Fashion MNIST data in 3D. I didn't think much of it until very recently, when I decided to read up on what t-SNE is and how it works. ... Read more

Understanding Gaussian Processes

September 14, 2024

It took me forever to understand what Gaussian Processes (GPs) were all about. I had been trying to understand them for quite some time but always gave up after the initial hurdle. I think that happened because I got confused by the way Gaussian Processes are portrayed in many blog posts. Introductory texts on GPs usually talk about how they can be used to model arbitrary smooth functions non-parametrically. ... Read more
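As a quick illustration of that non-parametric view, here is a minimal NumPy sketch (my own, not from the post; the squared-exponential kernel and length scale are arbitrary choices) that draws smooth random functions from a GP prior:

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0):
    # Squared-exponential kernel: k(a, b) = exp(-(a - b)^2 / (2 * l^2))
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))  # jitter for numerical stability

# Each sample is one smooth function evaluated at the 50 input points
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)
print(samples.shape)  # (3, 50)
```

Plotting each row of `samples` against `x` gives the wavy "random smooth functions" that GP tutorials usually show.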

Implicit Rank Minimization in Gradient Descent

October 14, 2020

I came across the paper "Implicit Rank-Minimizing Autoencoder" [1] by FAIR, which was shared by Yann LeCun on his Facebook timeline. In this paper, the authors show that inserting a few linear layers between the encoder and the decoder decreases the dimensionality of the latent space. The authors build on the results of [2], which showed how overparameterization of linear neural networks results in implicit regularization of the rank when trained via gradient descent (GD). ... Read more
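The effect can be illustrated on a toy deep matrix factorization in the spirit of [2] (this is my own sketch, not code from either paper; the sizes, learning rate, initialization scale, and step count are arbitrary): two square linear layers trained by plain GD on a rank-1 target recover it with the product staying effectively rank 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Rank-1 target matrix
M = np.outer(rng.standard_normal(n), rng.standard_normal(n))

# Overparameterized factorization W = W2 @ W1 with small init
W1 = 0.1 * rng.standard_normal((n, n))
W2 = 0.1 * rng.standard_normal((n, n))

lr = 0.02
for _ in range(4000):
    E = W2 @ W1 - M            # residual of the factorized fit
    g2 = 2 * E @ W1.T          # dL/dW2 for L = ||W2 W1 - M||_F^2
    g1 = 2 * W2.T @ E          # dL/dW1
    W2 -= lr * g2
    W1 -= lr * g1

s = np.linalg.svd(W2 @ W1, compute_uv=False)
print(s[1] / s[0])  # second singular value is negligible: effectively rank 1
```

Note this toy only shows the linear-factorization result of [2]; the autoencoder setting of [1] adds the encoder/decoder nonlinearities around the linear stack.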

Sherman-Morrison-Woodbury | Effect of low rank changes to matrix inverse

October 1, 2020

I recently came across a tweet about the Sherman-Morrison-Woodbury formula (henceforth referred to as SMW in this post). I was reading about linear regression and realized that this formula has a very practical application there. I will state the formula and briefly explain one of its applications. The Sherman-Morrison-Woodbury formula is: $$(A + uv^T)^{-1} = A^{-1} - \frac{A^{-1}uv^TA^{-1}}{1+v^TA^{-1}u}$$ where $A$ is an $n \times n$ matrix and $u$ and $v$ are both $n \times 1$ vectors. ... Read more
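The identity is easy to check numerically; a small NumPy sketch (random well-conditioned matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, invertible
u = 0.1 * rng.standard_normal((n, 1))
v = 0.1 * rng.standard_normal((n, 1))

A_inv = np.linalg.inv(A)
# Left side: invert the rank-1-updated matrix directly
left = np.linalg.inv(A + u @ v.T)
# Right side: the SMW correction to the known inverse of A
right = A_inv - (A_inv @ u @ v.T @ A_inv) / (1 + v.T @ A_inv @ u)
print(np.allclose(left, right))  # True
```

The practical point is the right side: given $A^{-1}$, it costs only matrix-vector products instead of a fresh $O(n^3)$ inversion.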

Waiting to Sync

July 2, 2020

In the month of May, when my productivity was at its lowest, I tried different ways to make myself do something worthwhile. But everything was in vain and I just gave up, planning to ride out this wave of unproductivity. One of the things I tried was learning async/await in Python, which I had never been able to fully follow through on in the past. ... Read more
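For context, a minimal example of the kind of async/await usage the post refers to (my own sketch, not code from the post):

```python
import asyncio

async def fetch(name, delay):
    # Simulate an I/O-bound task; control returns to the event loop at await
    await asyncio.sleep(delay)
    return name

async def main():
    # Run both coroutines concurrently: total time is ~max(delay), not the sum
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.2))

print(asyncio.run(main()))  # ['a', 'b']
```

`asyncio.gather` preserves the order of its arguments in the returned list, regardless of which coroutine finishes first.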

Éireann

August 17, 2019

I spent the summer of 2019 interning at the Insight Centre for Data Analytics, NUI Galway, and I would like to write about my experience so that I can come back later and smile in reminiscence :). I was elated to get my visa just in time after my exams ended, so that I could make my travel plans for Ireland. Unfortunately, around that time a severe cyclone originating off the east coast of India hit my city and my state, which delayed my plans to spend time at home. ... Read more

intel_idle.max_cstate=1

July 4, 2018

This post is looooong overdue. I was supposed to write it about 2 years back, and it should have been my first post. I've finally taken out the time and I'm determined to finish & publish this. First, a little background. I finished my higher secondary education (also referred to as +2 in India) under the CHSE Board, Odisha. A youth empowerment initiative launched by our state government offers free laptops to the top 15,000 students of the 12th board exams. ... Read more

Visualizing Different Normalization Techniques

May 29, 2018

While implementing Semantic Segmentation using Adversarial Networks, I came across a normalization technique called Local Contrast Normalization. Before being fed into the segmentor network, the image undergoes this normalization. I then came across other ways to normalize image data, like Simplified Whitening and Local Response Normalization. Here is a great article on the same, from which I drew my inspiration. But I wanted to see for myself how the image turns out after being normalized. ... Read more
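A naive version of Local Contrast Normalization can be written in a few lines of NumPy (my own sketch, not the post's code; the window size `k` and `eps` are arbitrary choices): subtract the local mean and divide by the local standard deviation over a small window.

```python
import numpy as np

def local_contrast_normalize(img, k=3, eps=1e-5):
    # For each pixel, normalize by the mean and std of its k x k neighborhood
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + k, j:j + k]
            out[i, j] = (img[i, j] - win.mean()) / (win.std() + eps)
    return out

img = np.arange(36, dtype=float).reshape(6, 6)  # toy "image"
norm = local_contrast_normalize(img)
print(norm.shape)  # (6, 6)
```

On a constant image this maps every pixel to zero, which is exactly the point: only local contrast, not absolute intensity, survives.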

Machine Learning and Its Implementation in Specialized Hardware

December 6, 2017

Yet another post on ML 😛 Recently, I was working with my group on an assignment about the needs and uses of specialized hardware for ML. I browsed through a few articles on the internet about this and decided to compile them here. Why is specialized hardware important for Machine Learning? Cores of a CPU are far more versatile than GPU cores, so one may ask, why is a GPU needed at all? ... Read more

Scraping Facebook posts and using pelican for generating a static site

June 19, 2017

There are times when I want to see an old post shared by a page. But it is buried so deep that it becomes a humongous task to scroll down and retrieve it. I am talking about a page with more than 8000 posts. You need to be Indiana Jones to find the old posts. But being a programmer has its own perks! :). I undertook a small project last week. ... Read more

Hucore theme & Hugo ♥