Project 1

Design, Implementation, and Delivery of an ML System for Cloud, Shadow, and Haze Detection from Satellite Data

Goal: Given unprocessed geospatial data from standard sources, detect and predict cloud, shadow, and haze cover.

This project involved formatting raw data to match a standard specification, which was then used as input to a deep learning-based image segmentation model. The entire software stack was cloud-based, with AWS EC2 serving as the main development server. Docker was used to create virtual environments and to deploy multiple training jobs in parallel. Jupyter notebooks integrated code and documentation into a single deliverable medium.
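As a minimal sketch of the "formatting raw data to match a standard specification" step: the function below scales raw digital numbers from a satellite band into the [0, 1] reflectance range a segmentation model would expect. The function name, scale factor, and nodata sentinel are illustrative assumptions, not the project's actual specification.

```python
# Hypothetical sketch: normalize a raw satellite band to the [0, 1] range
# expected by a segmentation model. The scale factor and nodata value are
# placeholders, not the project's real specification.

def normalize_band(raw_values, scale=10000.0, nodata=-9999):
    """Scale raw digital numbers to reflectance in [0, 1], masking nodata."""
    out = []
    for v in raw_values:
        if v == nodata:
            out.append(None)  # keep nodata pixels unset
        else:
            out.append(min(max(v / scale, 0.0), 1.0))
    return out

print(normalize_band([0, 5000, 12000, -9999]))  # → [0.0, 0.5, 1.0, None]
```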

Effort was focused on developing machine learning models suited to our purpose. This involved data augmentation, data filtering, iterating over different hyper-parameters, extending the code with added functionality, and implementing the solution using standard software packages and frameworks such as PyTorch.
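The "iterating over different hyper-parameters" part can be sketched as a simple grid enumeration, where each combination becomes one training job (which Docker then runs in parallel). The grid values below are made-up placeholders, not the settings actually used in the project.

```python
import itertools

# Hypothetical hyper-parameter grid; the actual project values are not
# recorded in this document.
grid = {
    "lr": [1e-3, 1e-4],
    "batch_size": [8, 16],
    "augment": [True, False],
}

def enumerate_trials(grid):
    """Yield one config dict per hyper-parameter combination."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

trials = list(enumerate_trials(grid))
print(len(trials))  # 2 * 2 * 2 = 8 combinations
```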

Project 2

Optimization Methods for a Better Cloud Mask from Sentinel-2 Data

Goal: Scientific understanding of how radiometric properties affect cloud detection.

A standard cloud detection approach for visible bands models the threshold value as a linear function of reflectance. We applied a linear transformation from our Top-of-Atmosphere (TOA) reflectance values (band reflectance) to the NBAR albedo values (reference reflectance). The idea was to use this approximate correction to make the threshold more accurate and thereby generate clearer cloud masks.
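A minimal numeric sketch of this idea follows: a linear TOA-to-NBAR correction feeds a threshold that is itself a linear function of the corrected reflectance. All coefficients here are placeholder values for illustration, not fitted parameters from the project.

```python
def toa_to_nbar(toa, a=0.95, b=0.01):
    """Approximate linear mapping from TOA reflectance to NBAR albedo.
    The coefficients a and b are placeholders, not fitted values."""
    return a * toa + b

def cloud_threshold(reference_reflectance, slope=0.5, intercept=0.2):
    """Threshold modeled as a linear function of the reference reflectance."""
    return slope * reference_reflectance + intercept

def is_cloud(band_reflectance):
    """Flag a pixel as cloud when its reflectance exceeds the corrected threshold."""
    return band_reflectance > cloud_threshold(toa_to_nbar(band_reflectance))

print(is_cloud(0.8), is_cloud(0.2))  # bright pixel flagged, dark pixel not
```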

Sample Sentinel 2 Cloud Mask.

Note: For a basic exploration of geospatial data and what it looks like, see the code here: . QGIS is recommended for viewing TIF files when needed.

Project 3

Microsoft Kinect RGB-Depth Based Visual Tracking

The project (part of my Master's thesis) addresses the challenge of re-detecting a single occluded target. This work focuses specifically on human targets occluded by objects (e.g. chairs) or by other humans (e.g. a tall person). The proposed tracking methodology uses a single Kinect RGB-D camera, is single-target and model-free, and is applied to long-term tracking. Model-free means that the only supervised training example is the bounding box provided in the first frame. Long-term tracking means that the tracker learns to re-detect the target after it is lost, i.e. to infer the object's position in the current frame.
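One way the depth channel helps with occlusion is sketched below: the target can be declared occluded when too many pixels inside its bounding box lie significantly closer to the camera than the target's own depth. This is an illustrative heuristic under assumed thresholds, not the thesis's actual method or parameters.

```python
def is_occluded(depth_in_box, target_depth, tolerance=0.3, min_fraction=0.5):
    """Declare the target occluded when at least `min_fraction` of the pixels
    inside its bounding box are more than `tolerance` meters closer to the
    camera than the target's estimated depth. Thresholds are illustrative."""
    closer = [d for d in depth_in_box if d < target_depth - tolerance]
    return len(closer) / len(depth_in_box) >= min_fraction

# A chair at ~1.2-1.3 m in front of a person at ~2.0 m covers half the box:
print(is_occluded([1.2, 1.3, 2.0, 2.1], target_depth=2.0))  # True
```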

Sample RGB and Depth images which were used in the process.

Click the image below to see the video of the final results.

Occlusion aware depth based tracking

Project 4

Image Classification Model Deployment Using Flask

This project builds an image classification model that recognises house numbers in photos. Flask is used to create an API, so the model can be deployed behind a simple web page for loading and classifying new images.

The model can be tested on the website:

Note: To test the model, please download sample images from here: 1, 2, and 3, or download all the images from here.

Code and steps to reproduce are on GitHub: