P R O J E C T S
In my second semester of graduate school, I took Mobile Robotics, a class entirely dedicated to SLAM. For this project, we were tasked with solving 2D and 3D pose estimation problems (in batch and incrementally) using the GTSAM library. I completed pose graph optimization for the Intel Research Lab dataset (2D) and the parking garage dataset (3D), both of which can be found here. The work involved setting up a new CMake project, implementing the pose graph optimization solutions in C++, and writing scripts to plot the unoptimized and optimized trajectories.
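The core building block in both datasets is the between factor, which constrains the relative pose between two trajectory nodes. As a rough illustration of the 2D case, here is a pure-Python sketch of the SE(2) operations that GTSAM's `Pose2` and `BetweenFactorPose2` provide (this is illustrative only, not the course implementation, which was in C++):

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a (+) b, each given as (x, y, theta)."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + math.cos(ath) * bx - math.sin(ath) * by,
            ay + math.sin(ath) * bx + math.cos(ath) * by,
            ath + bth)

def between(a, b):
    """Relative pose a^-1 (+) b: the quantity a between factor constrains
    against an odometry or loop-closure measurement."""
    ax, ay, ath = a
    bx, by, bth = b
    dx, dy = bx - ax, by - ay
    return (math.cos(ath) * dx + math.sin(ath) * dy,
            -math.sin(ath) * dx + math.cos(ath) * dy,
            bth - ath)
```

During optimization, each edge's residual is the difference between the measured relative pose and `between(pose_i, pose_j)`, which the solver drives toward zero over the whole graph.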
Due to the Michigan Engineering Honor Code, I cannot publicly share the implementation of this project. Feel free to email me (firstname.lastname@example.org) and I can provide access to the project repository, provided it is used strictly to evaluate my work for future opportunities.
Robotic Systems Laboratory MBot
Botlab: Differential Drive Autonomous Robot
University of Michigan: Robotic Systems Laboratory
The final goal of this project was to implement various subsystems of a differential drive, non-holonomic autonomous robot. The robot had to complete two challenges: navigating to known locations in a known environment and navigating autonomously through an unknown environment. For the first challenge, I co-implemented open-loop calibration and PID control in C for low-level motor control on a custom control board. For the robot's naïve position estimate, I also developed a gyro-fused odometry algorithm. The high-level autonomy code (SLAM and planning) ran on a Raspberry Pi 4. For an improved estimate of the robot's pose, I wrote and tested a particle filter in C++. Alongside localization, I implemented an occupancy grid mapping algorithm to map the environment. Together, these formed a SLAM system that provided an accurate pose estimate and map of the environment. Lastly, I completed the autonomous navigation system with the A* path planning algorithm and frontier exploration, enabling the robot to navigate through an unknown environment.
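The gyro-fused odometry idea can be sketched as a complementary filter that trusts the gyro for short-term heading changes while taking translation from the wheel encoders. The constants and function shape below are illustrative, not the MBot firmware:

```python
import math

def fused_odometry_step(pose, d_left, d_right, gyro_rate, dt,
                        wheelbase, alpha=0.98):
    """One gyro-fused odometry update for a differential-drive robot.

    pose: (x, y, theta). The heading change blends the integrated gyro
    rate (weight alpha) with the wheel-odometry estimate; translation
    comes from the encoder arc length. All constants are illustrative.
    """
    x, y, theta = pose
    d_center = 0.5 * (d_left + d_right)          # distance traveled
    dtheta_wheels = (d_right - d_left) / wheelbase
    dtheta_gyro = gyro_rate * dt
    dtheta = alpha * dtheta_gyro + (1.0 - alpha) * dtheta_wheels
    theta_mid = theta + 0.5 * dtheta             # midpoint integration
    return (x + d_center * math.cos(theta_mid),
            y + d_center * math.sin(theta_mid),
            theta + dtheta)
```

The gyro dominates because wheel slip corrupts encoder-only heading, while the encoders remain the better source for distance traveled.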
5 DoF RX200 Robotic Arm
Armlab: 5 DOF Robotic Arm for Block Manipulation
University of Michigan: Robotic Systems Laboratory
The final goal of this project was to implement various software subsystems for the RX200 robotic manipulator to execute simple motion tasks such as stacking, classifying, and lining up colored wooden blocks. I worked on the robot control system for manipulation, co-implementing forward kinematics, inverse kinematics, and simple motion planning algorithms for various tasks through the state machine. The forward kinematics method was based on a Denavit-Hartenberg (DH) table, using the DH parameters to calculate the transformation matrices representing the poses of the desired links. The inverse kinematics method was geometric: it defined equations for each of the five joint angles and enumerated the possible joint angle configurations in the N-by-5 configuration space. The motion tasks were implemented in a state machine by defining a path plan for each task and leveraging methods from the kinematics and computer vision components of Armlab.
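The forward kinematics step can be illustrated with a generic DH sketch. The helper names and the single-link example are my own for illustration; the actual Armlab code and the RX200's DH table are not reproduced here:

```python
import math

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link from classic DH parameters
    (link length a, link twist alpha, link offset d, joint angle theta)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows, joint_angles):
    """Chain per-link DH transforms into the end-effector pose.
    dh_rows hold (a, alpha, d, theta_offset) for each link."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for (a, alpha, d, off), q in zip(dh_rows, joint_angles):
        T = matmul(T, dh_transform(a, alpha, d, off + q))
    return T
```

For a single unit-length revolute link rotated 90 degrees, the chained transform places the link tip at (0, 1, 0), which is a quick sanity check on the convention.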
Seeing Stars: A Computer Vision-Based Approach for Hotel Star Ratings
Stanford University: CS230 Final Project
With over 10 million hotels globally, travel booking websites need to provide accurate and reliable information to visitors. Star rating is the most frequently used filtering criterion, but it is unreliable given the absence of commonly accepted standards for assigning star ratings. Manual human verification is subjective and incurs high operating costs. Several major third-party distribution platforms, e.g., Booking.com, therefore let hotel owners self-report their own star ratings, with highly inconsistent results.
I co-developed a computer-vision-based deep learning model that more accurately assigns hotel star ratings using images and metadata (e.g., pricing, facilities). This promises a cheaper and more objective methodology for assigning hotel star ratings. The models leverage ReLU activation functions, one-hot encodings, and ResNet50.
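As a toy illustration of how the metadata side of such a model might be encoded before being combined with image features, here is a sketch with made-up field names and scaling (not the project's actual preprocessing):

```python
def encode_metadata(price, facilities, facility_vocab, max_price):
    """Encode hotel metadata as a flat feature vector: a price normalized
    to [0, 1] followed by multi-hot facility indicators. In a model like
    this, the resulting vector can be concatenated with ResNet50 image
    features before the final fully-connected layers."""
    vec = [min(price / max_price, 1.0)]
    vec += [1.0 if f in facilities else 0.0 for f in facility_vocab]
    return vec
```

For example, a $100/night hotel with a pool, against a vocabulary of ["pool", "spa"] and a $200 price cap, encodes to [0.5, 1.0, 0.0].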
Computer Vision Side Project
This side project was spawned from my mentorship pairing from Women in Robotics Project Advance. Eager to learn more about robotic perception, I read robotic vision papers, discussed their novel contributions with my mentor, and wrote short summaries of them for future reference. In the process, I also contributed to a full visual odometry (VO) pipeline using the OpenCV library to gain a deeper understanding of the algorithms in the field. For the VO pipeline, I worked on image preprocessing, feature extraction, feature matching, inlier detection, and motion estimation. The methods used for the pipeline are based on this paper.
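The final motion-estimation stage of a VO pipeline chains the per-frame relative transforms into absolute camera poses. A minimal sketch of that accumulation step (illustrative only, not the OpenCV-based code itself):

```python
def chain_poses(relative_transforms):
    """Accumulate per-frame relative transforms T_k (4x4 homogeneous
    matrices, frame k-1 -> k) into absolute camera poses, the last
    stage of a visual odometry pipeline."""
    identity = [[float(i == j) for j in range(4)] for i in range(4)]
    poses = [identity]
    T = identity
    for rel in relative_transforms:
        # Right-multiply: each new pose is the previous pose composed
        # with the newest relative motion.
        T = [[sum(T[i][k] * rel[k][j] for k in range(4)) for j in range(4)]
             for i in range(4)]
        poses.append(T)
    return poses
```

In the real pipeline, each relative transform comes from inlier feature matches via essential-matrix decomposition, which is why good inlier detection upstream directly determines trajectory quality.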
P6 Out-of-Order Processor | EECS 470
Computer Architecture Senior Design Project
The goal of this project was to develop an out-of-order processor in SystemVerilog using a 5-stage multi-cycle processor as our baseline. We based our design on the Intel P6 microarchitecture and implemented several advanced features, such as superscalar execution and a load-store queue.
Indoor Localization Algorithm using WiFi Signal Strength and CNN-based Velocity Estimation | EECS 507
Embedded Systems Research Project
The basic goal of this project is to estimate an object's position in a two-dimensional space based on inertial (IMU) and WiFi signal strength (RSSI) data from a combination of Apple devices, specifically the iPhone XR and the Apple Watch Series 3. Our algorithm combines an RSSI localization algorithm with a deep convolutional neural network (DCNN) velocity estimator to provide an accurate position estimate.
Signal Strength Localization
RSSI measurements taken along a pre-determined path are compared against a database of RSSI measurements surveyed over a 9 m-by-6 m plane to determine a position. This RSSI-based estimate then corrects the position integrated from velocity over a fixed interval, improving the localization estimate.
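The fingerprinting lookup can be sketched as a nearest-neighbor search over the surveyed database. The data layout here is illustrative, not our exact database format:

```python
def locate(rssi, fingerprints):
    """Nearest-neighbor lookup in an RSSI fingerprint database.

    rssi: dict mapping access-point id -> signal strength (dBm).
    fingerprints: list of (position, fingerprint_dict) entries surveyed
    at known grid positions. Returns the surveyed position whose
    fingerprint is closest in Euclidean distance over shared APs.
    """
    def dist(fp):
        shared = set(rssi) & set(fp)
        if not shared:
            return float("inf")  # no APs in common: can't compare
        return sum((rssi[ap] - fp[ap]) ** 2 for ap in shared) ** 0.5

    return min(fingerprints, key=lambda entry: dist(entry[1]))[0]
```

A denser survey grid tightens the position estimate at the cost of a longer offline calibration phase, which is the usual fingerprinting trade-off.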
User Velocity Estimation using a DCNN
One of my responsibilities was reproducing an algorithm from "DeepWalking: Enabling Smartphone-based Walking Speed Estimation Using Deep Learning" (Aawesh Shrestha & Myounggyu Won), which approximates a user's velocity using raw gyroscope and accelerometer data from an iPhone and an Apple Watch. The data is first filtered with a low-pass filter, and the magnitudes of the vertical and horizontal components are then computed. A neural network is trained on 7 minutes of velocity data collected from our pre-determined paths and tested on 3 minutes of held-out data. The network consists of convolutional, batch normalization, ReLU, fully-connected, and regression layers. I worked on the entire pipeline, from training/testing data collection to prediction and evaluation.
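The preprocessing steps can be sketched as follows: a first-order IIR low-pass filter, then a gravity-based split of each accelerometer sample into vertical and horizontal magnitudes. The filter coefficient and gravity direction are illustrative, not the paper's exact parameters:

```python
import math

def low_pass(samples, alpha=0.1):
    """First-order IIR low-pass filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1].
    The coefficient alpha is illustrative, not the paper's cutoff."""
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def vertical_horizontal_magnitudes(accel, gravity=(0.0, 0.0, 1.0)):
    """Split an accelerometer sample into its magnitude along the (unit)
    gravity direction and the magnitude orthogonal to it."""
    v = sum(a * g for a, g in zip(accel, gravity))
    h = math.sqrt(max(sum(a * a for a in accel) - v * v, 0.0))
    return abs(v), h
```

These two scalar streams, rather than the raw triaxial signals, are what the convolutional network consumes, which makes the input invariant to how the phone is oriented in the user's pocket.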
Light-Navigating Robot | EECS 373
Introduction to Embedded Systems Final Project
The basic goal of the project is to use the Nucleo and Zumo shield robot base to navigate a field of obstacles to reach a light source. The robot scans the field for a light source, such as a lamp or phone flashlight, and navigates to it. Obstacles in the field, about the size of shoe boxes, are detected with an ultrasonic sensor and maneuvered around. Additional features were implemented, such as sounding the buzzer on the Zumo shield and stopping the robot when the reflectance sensor detects a piece of paper. Demonstration videos are provided in the presentation shown!
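The seek-light behavior can be sketched as a simple decision step per control cycle; the thresholds, headings, and return values here are illustrative, not the actual firmware:

```python
def next_action(light_readings, ultrasonic_cm, stop_cm=20.0):
    """One step of a seek-light behavior. light_readings is a list of
    (heading_degrees, light_sensor_value) pairs from a sweep of the
    field; ultrasonic_cm is the current range reading. If an obstacle
    is close, turn away and re-scan; otherwise drive toward the
    brightest heading. Thresholds are illustrative."""
    if ultrasonic_cm < stop_cm:
        return ("avoid", 90.0)  # turn away from the obstacle
    best_heading = max(light_readings, key=lambda r: r[1])[0]
    return ("drive", best_heading)
```

Running this in a loop, with the buzzer and reflectance-sensor stop layered on top, gives roughly the behavior described above.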
Image Processing Projects | R2 Space, Inc.
N O T E O F T H A N K S
I would like to recognize Jeff Pennings and Adam Skoczylas for reaching out and taking me on at such short notice, just two weeks after I lost my internship during final exams in Winter 2020. I also want to thank Clyde Stanfield, my supervisor, for his mentorship, his leadership of the Processing team, and for consistently giving me impactful and challenging projects throughout the summer.
O V E R V I E W
R2 Space, established in 2018, is an aerospace startup providing end-to-end space-based satellite radar intelligence to the US government and commercial customers. R2 develops satellites in-house for synthetic aperture radar (SAR) data collection and provides high-quality processed SAR imagery to its customers.
During Summer 2020, I played a critical role in the development of R2's image reconstruction and processing capabilities. The Processing team was small, so each individual significantly influenced the development of R2's image processor. With the goals of significantly speeding up our processing pipeline and demonstrating our capabilities to R2's customers, I worked on everything from supporting multithreaded applications and GPU integration with CUDA to developing scripts for performance analysis of geolocation using heatmaps.
Because part of my internship was funded by the NASA Michigan Space Grant Consortium Program, I submitted a public report on my work. Feel free to read through my report on the left for a dive into what I did. To protect R2's IP, the report does not name any proprietary algorithms, implementations, or final products produced by my work.