
Hi, I am Hardik Devrangadi,
a Roboticist
from Boston

I am a Graduate Student pursuing an MS in Robotics at Northeastern University. Currently, I conduct research at the Dynamic Systems and Control Laboratory under the guidance of Prof. Rifat Sipahi, where I lead a project focused on building an autonomous UAS for high-altitude atmospheric data collection.


I have a deep passion for robotics and its transformative potential in enhancing our daily lives. I hold a Bachelor's degree in Electronics and Communication Engineering from RV College of Engineering in Bangalore, India.

Before embarking on my Master's journey, I gained invaluable experience as a Software Developer at PricewaterhouseCoopers, where I honed my skills in software development, coding practices, CI/CD, and agile methodologies.

During my academic pursuits, I had the privilege of interning at Samsung R&D, where I led a project to build a comprehensive image dataset of over 65,000 images using Generative Adversarial Networks (GANs) and used it to train a convolutional neural network (CNN). The trained model was deployed on a mobile device using TensorFlow Lite and classified images into five distinct categories with high accuracy. This work earned considerable appreciation and recognition from the Samsung team, culminating in awards.

My dedication to this field, coupled with hands-on experience applying advanced technologies, reflects my commitment to innovation and excellence. I'm excited about the opportunity to contribute to the ongoing development of robotics and its potential to benefit society in countless ways.



Current Research

In my current role, I lead the development of an autonomous Unmanned Aerial Vehicle (UAV) in collaboration with Prof. Rifat Sipahi. Our primary objective is to collect high-altitude atmospheric data to advance the fields of robotics and meteorology. As part of this project, I have integrated onboard sensors that measure humidity, temperature, pressure, and wind speed. This sensor suite, driven by Python-based drivers, provides precise, real-time data acquisition and underpins the accuracy and reliability of our atmospheric measurements.
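
To give a sense of what those Python-based drivers look like, here is a minimal sketch of the polling loop; the AtmosphericSensor interface and the placeholder readings are hypothetical stand-ins for the actual flight code.

import time

class AtmosphericSensor:
    """Hypothetical driver interface; the real drivers wrap I2C/serial devices."""
    def read(self):
        # Would query the hardware here; returns placeholder values.
        return {"humidity_pct": 45.2, "temperature_c": 12.8,
                "pressure_hpa": 913.6, "wind_speed_ms": 3.4}

def poll_sensors(sensor, rate_hz=10):
    """Poll at a fixed rate and timestamp each sample for logging."""
    period = 1.0 / rate_hz
    while True:
        sample = sensor.read()
        sample["t"] = time.time()
        print(sample)           # in practice: append to the onboard log
        time.sleep(period)

if __name__ == "__main__":
    poll_sensors(AtmosphericSensor())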

Moreover, my responsibilities encompass the design and development of a robust Guidance, Navigation, and Control (GNC) system, which plays a pivotal role in ensuring stable, autonomous flight during missions. The GNC system has proven highly effective at maintaining the UAV's stability even in challenging conditions.
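
For flavor, here is a minimal single-axis PID loop of the kind an attitude controller in a GNC stack builds on; the gains and the toy plant model are illustrative, not the values flown on our UAV.

class PID:
    """Textbook PID controller; one instance per controlled axis."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold zero roll, starting from a 5-degree disturbance.
pid = PID(kp=0.8, ki=0.05, kd=0.2)
roll, dt = 5.0, 0.02          # degrees, 50 Hz loop
for _ in range(5):
    correction = pid.update(setpoint=0.0, measurement=roll, dt=dt)
    roll += correction * dt   # toy plant: the correction nudges the state
    print(f"roll={roll:.3f} deg")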

To complement the data collection aspect of our project, I have engineered a dependable onboard imaging system. This system facilitates the real-time transmission of aerial imagery, which not only enhances the quality of the data collected but also provides valuable visual insights for analysis.
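
Conceptually, the imaging pipeline grabs a frame, compresses it, and pushes it down the radio link. A minimal OpenCV sketch, assuming a hypothetical UDP downlink address, looks like this:

import socket
import cv2

GROUND_STATION = ("192.168.1.10", 5600)   # placeholder downlink address

cap = cv2.VideoCapture(0)                 # onboard camera
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

ok, frame = cap.read()
if ok:
    # JPEG-compress the frame so it fits the radio link's bandwidth.
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    if ok and len(buf) < 65507:           # stay under the UDP datagram limit
        sock.sendto(buf.tobytes(), GROUND_STATION)

cap.release()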

In addition to the technical build-out, I handle mission planning and execution for our autonomous UAV. We are currently running flight tests to verify stable flight before beginning data collection.

As I continue to work on this groundbreaking project, I remain committed to pushing the boundaries of what autonomous UAVs can achieve, showcasing my passion for robotics and my dedication to contributing to cutting-edge research in the field.


Realtime 2D Object Detection

A robust object classification system that processes video frames from a live webcam stream to classify 15 different objects, leveraging K-Nearest Neighbor classification to achieve 95% accuracy.

C++, OpenCV, Xcode
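
The project itself is written in C++; this minimal Python sketch shows the same OpenCV KNN API on made-up feature vectors, standing in for the shape features extracted from each frame.

import cv2
import numpy as np

# Toy training data: 2-D feature vectors for three of the object classes.
samples = np.array([[1.0, 1.1], [0.9, 1.0],     # class 0
                    [5.0, 5.2], [5.1, 4.9],     # class 1
                    [9.0, 0.8], [8.8, 1.1]],    # class 2
                   dtype=np.float32)
labels = np.array([0, 0, 1, 1, 2, 2], dtype=np.int32)

knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

query = np.array([[5.05, 5.0]], dtype=np.float32)
_, result, neighbours, dist = knn.findNearest(query, k=3)
print(f"predicted class: {int(result[0][0])}")   # expected: 1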

Disaster Response Autonomous Hexacopter - "The Eclipse"

An industrial-grade autonomous hexacopter with a 1000 mm frame, producing over 22 lb of thrust and equipped with a suite of sensors; capable of surveying, static obstacle avoidance, target classification, and air delivery.

C++, Python, OpenCV, PX4, MAVLink, MAVProxy, MAVROS, Pixhawk 2.4
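
For context on the MAVLink side, here is a minimal pymavlink sketch of an arm-and-takeoff sequence; the connection string and altitude are placeholders, and the real mission logic adds mode selection and acknowledgement handling.

from pymavlink import mavutil

# Placeholder connection string; on the vehicle this points at the Pixhawk.
master = mavutil.mavlink_connection("udp:127.0.0.1:14550")
master.wait_heartbeat()                      # confirm the autopilot is alive

master.arducopter_arm()                      # request arming
master.motors_armed_wait()                   # block until motors report armed

# MAV_CMD_NAV_TAKEOFF: the last parameter is target altitude in meters.
master.mav.command_long_send(
    master.target_system, master.target_component,
    mavutil.mavlink.MAV_CMD_NAV_TAKEOFF,
    0, 0, 0, 0, 0, 0, 0, 10)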

Comparative Analysis of Optical Flow Techniques: Classical Computer Vision vs Deep Learning Approach

Conducted a comparative analysis of optical flow estimation techniques, evaluating the accuracy of classical approaches such as the Farneback algorithm against deep learning-based methods like FlowNet 2.0. Evaluated endpoint error, angular error, flow discontinuity, velocity estimation, object tracking, and pixel displacement, and performed bounding-box analysis.

C++, Python, OpenCV, Xcode, OpenGL, PyTorch, TensorFlow
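
As a concrete taste of the classical side of the comparison, here is OpenCV's dense Farneback flow between two consecutive frames; the video path is a placeholder.

import cv2
import numpy as np

cap = cv2.VideoCapture("flight.mp4")        # placeholder video path
ok1, frame1 = cap.read()
ok2, frame2 = cap.read()
if ok1 and ok2:
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow: one (dx, dy) vector per pixel.
    # Args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print(f"mean pixel displacement: {magnitude.mean():.2f}")
cap.release()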

At a Glance

These are some of my mini projects, which helped me master core concepts and build practical skills across various areas of technology and development. Each project represents a unique learning experience and showcases my ability to apply theoretical knowledge to real-world scenarios.
An easy way to view all my projects is my LinkTree.

Panorama Image Stitching

The objective of this project is to generate a panoramic or mosaic image by stitching multiple images together. This is achieved through a series of steps, including camera calibration to remove distortions, detecting major features using the Harris Detector, and stitching images with overlapping Harris corners. The end goal is to create a seamless and distortion-free panorama from the input images.

Skills Honed:
Camera calibration, distortion correction, feature detection, Harris Detector, image stitching, image transformation, panorama creation.
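
A minimal stitching sketch, assuming two placeholder input images: the project matched overlapping Harris corners, but for brevity this version swaps in ORB features, which OpenCV can match directly, before estimating the aligning homography with RANSAC.

import cv2
import numpy as np

img1 = cv2.imread("left.jpg")               # placeholder input images
img2 = cv2.imread("right.jpg")

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching, keeping the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC homography maps img1 into img2's frame; warp and paste to stitch.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
pano = cv2.warpPerspective(img1, H,
                           (img1.shape[1] + img2.shape[1], img2.shape[0]))
pano[:img2.shape[0], :img2.shape[1]] = img2
cv2.imwrite("panorama.jpg", pano)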

Real-time Webcam Image Filtering

This project aims to apply real-time image filtering to webcam input. It involves several image processing tasks, such as converting images to grayscale, applying Gaussian blur, cartoonizing with gradient magnitude and blur/quantize filters, and adding salt and pepper noise. The core concept revolves around using convolution techniques to manipulate images. The desired image manipulations are achieved through pixel-level operations.

Skills Honed:
Real-time image processing, webcam input, grayscale conversion, Gaussian blur, cartoonization, gradient magnitude, convolution, noise addition.
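
A minimal sketch of the cartoonize filter, combining a blur/quantize pass with a Sobel gradient-magnitude edge mask; the quantization step and edge threshold are illustrative values.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # Blur, then quantize each channel to a few levels for flat color regions.
    blurred = cv2.GaussianBlur(frame, (7, 7), 0)
    quantized = (blurred // 64) * 64 + 32

    # Gradient magnitude via Sobel gives the edge mask.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    magnitude = cv2.magnitude(gx, gy)

    # Darken pixels with strong gradients to draw the cartoon outlines.
    cartoon = quantized.copy()
    cartoon[magnitude > 100] = 0
    cv2.imwrite("cartoon.jpg", cartoon)
cap.release()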

Content-Based Image Retrieval

This project aims to retrieve images from a dataset based on their characteristics, such as color, texture, and spatial layout, using a selected target image. It involves the extraction of feature vectors and the calculation of the top N matches, providing practical experience in image matching and pattern recognition.

Skills Honed:
Content-based image retrieval, feature vector extraction, color spaces, histograms, texture features, spatial layout, distance metrics, pattern recognition, Gabor filters, parameter tuning.
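
A minimal sketch of the color-histogram path: each image is reduced to a normalized histogram feature vector, and the top N matches are the smallest-distance vectors. The file names are placeholders.

import cv2
import numpy as np

def color_histogram(path, bins=8):
    """Normalized 3-D BGR histogram flattened into a feature vector."""
    img = cv2.imread(path)
    hist = cv2.calcHist([img], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

database = ["img1.jpg", "img2.jpg", "img3.jpg"]   # placeholder dataset
target = color_histogram("target.jpg")

# Rank by Euclidean distance between feature vectors; smallest = best match.
scores = [(np.linalg.norm(target - color_histogram(p)), p) for p in database]
for dist, path in sorted(scores)[:2]:             # top-2 matches
    print(f"{path}: distance {dist:.4f}")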

Camera Calibration and Augmented Reality Application

This project involves camera calibration and processing image sequences from a webcam to detect and isolate a chessboard pattern. The detected corners are used to calibrate the camera, estimate its pose, and track movement, with robust features supporting the tracking. The application extracts and displays chessboard corners, performs camera calibration, estimates camera pose, and overlays a virtual object on the image frame. It also extends to perspective transformation and blending of images.

Skills Honed:
Camera calibration, feature detection (SURF Features), image processing, pose estimation, virtual object overlay, perspective transformation, QR code detection, Augmented Reality techniques.
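
A condensed sketch of the calibrate-then-project pipeline on a single 9x6 chessboard view; in the real application the detection runs per frame and a full virtual object replaces the single projected point drawn here.

import cv2
import numpy as np

PATTERN = (9, 6)                                  # inner-corner grid size
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

img = cv2.imread("board.jpg")                     # placeholder frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, PATTERN)

if found:
    # Calibrate from this view, then recover the board's pose.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [objp], [corners], gray.shape[::-1], None, None)
    _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)

    # Project a virtual 3-D point (one square above the board's origin).
    tip, _ = cv2.projectPoints(np.float32([[0, 0, -1]]), rvec, tvec, K, dist)
    center = tuple(int(v) for v in tip.ravel())
    cv2.circle(img, center, 8, (0, 0, 255), -1)
    cv2.imwrite("augmented.jpg", img)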

RTK GPS Data Collection and Analysis

The objective of this project is to set up and collect RTK GPS (u-blox ZED-F9P) data from both a base station and a rover. This data collection is conducted in open spaces and locations with occlusions to analyze and compare the gathered data. The goal is to eliminate common noise, study the differences between stationary and moving data, and draw meaningful inferences from the collected data sets.

Skills Honed:
Hardware setup for RTK GPS data collection, base station and rover configuration, noise elimination, stationary and moving data collection, data comparison, analysis, and drawing inferences from the collected data.
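
To illustrate the analysis step, here is a sketch that tallies RTK fix quality from the GGA sentences in a logged NMEA file; field positions follow the standard GGA layout, and the log name is a placeholder.

# Parse GGA sentences from a u-blox log and tally RTK fix quality.
# GGA field 6 is fix quality: 4 = RTK fixed, 5 = RTK float.
FIX_NAMES = {0: "no fix", 1: "GPS", 2: "DGPS", 4: "RTK fixed", 5: "RTK float"}

counts = {}
with open("rover.nmea") as log:                   # placeholder log file
    for line in log:
        if "GGA" not in line:
            continue
        fields = line.strip().split(",")
        quality = int(fields[6]) if fields[6] else 0
        counts[quality] = counts.get(quality, 0) + 1

for q, n in sorted(counts.items()):
    print(f"{FIX_NAMES.get(q, q)}: {n} sentences")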

IMU Data Collection and Analysis

The project's objective was to configure the VectorNav IMU hardware to collect data in the $VNYMR format and set the data collection rate at 40Hz. Data collection took place in stationary conditions, and various parameters including Orientation, Angular Velocity, Linear Acceleration, Magnetic Field, Noise Characteristics, Error Distribution, and Allan Deviation were analyzed for the IMU data.

Skills Honed:
Configuration of IMU hardware, data format setting, data collection rate adjustment, data analysis of parameters including Orientation, Angular Velocity, Linear Acceleration, Magnetic Field, Noise Characteristics, Error Distribution, and Allan Deviation.
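
A sketch of the two analysis steps: splitting a $VNYMR sentence into its fields, and computing non-overlapping Allan deviation from a stationary gyro series. The sample sentence and synthetic data are illustrative; only the 40Hz rate matches the project setup.

import numpy as np

def parse_vnymr(line):
    """Split a $VNYMR sentence: yaw, pitch, roll, mag(3), accel(3), gyro(3)."""
    body = line.split("*")[0]                 # drop the checksum
    fields = body.split(",")[1:]              # drop the $VNYMR tag
    values = [float(f) for f in fields]
    return {"ypr": values[0:3], "mag": values[3:6],
            "accel": values[6:9], "gyro": values[9:12]}

def allan_deviation(samples, m):
    """Non-overlapping Allan deviation for clusters of m samples."""
    n = len(samples) // m
    means = samples[:n * m].reshape(n, m).mean(axis=1)  # cluster averages
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# Example sentence (illustrative values) and a synthetic stationary gyro series.
line = ("$VNYMR,+100.95,-01.80,+00.49,+1.03,-0.21,+2.86,"
        "-0.10,+0.09,-9.81,+0.002,-0.001,+0.003*64")
print(parse_vnymr(line)["gyro"])

gyro = np.random.normal(0.0, 0.003, 40 * 600)   # 10 minutes at 40 Hz
for m in (4, 40, 400):
    tau = m / 40.0                               # averaging time in seconds
    print(f"tau={tau:5.1f}s  sigma={allan_deviation(gyro, m):.5f}")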

Get in touch

Feel free to reach out if you'd like to discuss exciting opportunities in the world of robotics, computer vision, and drones, or if you're interested in collaborating on groundbreaking projects with a passionate robotics enthusiast like me.



while True:
    try:
        drone.ponder_life_choices()
        robot.find_coffee()
        perception.have_an_epiphany()
        if gps.satellite_status() == 'LOST':
            print("Lost in the code sauce! Panic!")
        else:
            drone.navigate_to("CoffeeShop")
            robot.pour_coffee()
            perception.recognize_coffee_mug()
            robot.drink_coffee()
            if perception.detect_sleepy_eyes():
                robot.wake_up()
                drone.perform_a_barrel_roll()
            else:
                code()
    except Exception as e:
        print("Oops! I crashed, but I promise it wasn't the drone.")
        drone.apologize()
        drone.land()