My Portfolio


Autonomous Systems and Perception

India     Contact me     CURRICULUM VITAE

Jerrin is a versatile and self-motivated engineer pursuing the final year of his Bachelor’s degree at Vellore Institute of Technology (VIT), India. His research interests lie at the intersection of Autonomous Systems and real-time robotic perception. He develops autonomy stacks for self-driving cars; mobile and aerial robots; and robotic arms.

He has a strong robot-programming background (Python, ROS, C++). With hard work as his core pillar, combined with practical experience, Jerrin is someone you can count on to persevere through challenges. He is an ardent follower of cricket and chess and loves travelling.

He has earned research opportunities at prestigious universities around the world, including McMaster University, Canada; the Indian Institute of Science, India; Arizona State University, USA; and Yuan-Ze University, Taiwan. He has also collaborated with industry, including Aero2Astro and BrainMagic InfoTech Pvt Ltd. He contributes to society through organizations such as the National Service Scheme (NSS) and the Madras Scientific Research Foundation (MSRF).

Core Interests:

  •   Autonomous Systems
  •   Real-time Perception
  •   Human-Computer Interaction
  •   Soft Robotics


Vellore Institute of Technology - Chennai, India

Mechanical Engineering - School of Mechanical Engineering (SMEC)
Primarily focused on Robotics, Mechatronics, Machine Vision, Artificial Intelligence and Automation.
July 2018 - April 2022 (scheduled)

Chettinad Vidyashram - Chennai, India

Computer Science - Central Board of Secondary Education (CBSE)
Computer Science, Physics, Mathematics and Chemistry
June 2004 - March 2018


Tools and Libraries

ROS, MATLAB, SolidWorks, SOFA, Fusion 360, ANSYS,
TensorFlow, Git, FEBio, Gazebo, Proteus, MoveIt

Programming Languages

Python, C++, C, HTML, CSS, JavaScript


McMaster University Hamilton, Canada

Globalink Research Intern
  • Designing and testing software for controlling a pneumatically powered soft robotic arm.
  • Acquiring real-time data from several sensors and implementing a suitable controller (e.g., model predictive control).
  • Applying advanced control techniques to a real-world problem.
  • Supervised by Prof. Gary Bone @ Robotics and Manufacturing Automation Laboratory

Soft Robotics Robotic Arm MPC C++

July 2021 - Present

Arizona State University Phoenix, USA

Summer Research Intern
  • Used collaborative visual-inertial multi-agent Simultaneous Localization and Mapping (SLAM) to digitize environments, fusing data collected from multiple sensors into a unified system.
  • Proposed a custom architecture to screen dynamic features and update changes in the map at regular intervals.
  • Used deep learning algorithms for automated analysis.
  • Processed the resulting digital representations to provide insights to builders and stewards.
  • Supervised by Prof. Thomas Czerniawski @ Edifice Lab

Perception Stereo Vision Sensor Fusion Python

May 2021 - Present

Aero2Astro Chennai, India

Autonomous System Developer- Intern
  • Developed ROS-based autonomous navigation firmware using visual-inertial SLAM concepts for indoor environments.
  • The implementation was based on sensor fusion techniques and Extended Kalman Filters, and aimed to eliminate the need for GPS.
  • Work Report | Source Code | Certificate

SLAM Kinect + IMU Visual Odometry Sensor Fusion ROS (Gazebo, RViz) ROSpy C++

October 2020 - April 2021
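The EKF-based fusion above reduces, in its simplest scalar form, to a predict/update cycle. The sketch below is illustrative only, not the actual firmware; the noise values and measurement sequence are assumptions chosen for the example.

```python
def kalman_fuse(x, P, z, Q=1e-3, R=0.05):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: prior state estimate and its variance; z: new measurement
    (e.g., a visual-odometry position fix); Q, R: assumed noise levels."""
    # Predict: with no motion model, the state carries over with added process noise
    x_pred, P_pred = x, P + Q
    # Update: blend prediction and measurement using the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Fuse a short sequence of noisy position fixes around 1.0 m
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_fuse(x, P, z)
```

In the full EKF, `x` and `P` become a state vector and covariance matrix, and the predict step uses the IMU-driven motion model; the gain computation is otherwise analogous.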

Madras Scientific Research Foundation Chennai, India

Research Fellow (NGO)
  • Taught and researched various cutting-edge areas of Manufacturing, Robotics and Vision systems.
  • Worked on defect detection and reinforcement for 3D-printed models, and on autonomous lane detection for self-driving using Convolutional Neural Networks (CNNs).
  • Also spread awareness of basic robotics in schools and among underprivileged kids as part of the Non-Governmental Organization (NGO).
  • My NGO Profile | Source Code | Certificate

Robotic Arm 3D Printing defects Teaching Python Embedded C G-Code

October - November 2020

BrainMagic InfoTech Pvt Ltd Chennai, India

Data Science Intern
  • Automobile fault detection using vision techniques (transfer learning, extrema detection and augmentation), achieving an IoU of 95%.
  • Dimensional analysis was performed to locate and monitor defects and prevent catastrophic failures.
  • Preliminary testing was done with Flask, Heroku and Git, deploying to a web server.
  • The final model was then deployed on AWS using Amazon SageMaker and S3 buckets.
  • Source Code

Perception Transfer Learning AWS S3 Buckets Python

May - July 2020
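Intersection over Union (IoU), the metric quoted above, can be computed for two axis-aligned boxes in a few lines of Python. This is a generic sketch of the metric, not the project's code:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 95% means the predicted defect region and the ground-truth region overlap almost entirely relative to their combined area.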

Yuan-Ze University Taoyuan City, Taiwan

Project Research Intern - Perception
  • Built a robust smart parking system using semantic segmentation with Convolutional Conditional Random Fields and atrous convolution.
  • Aimed to visually enhance the parking system to assist drivers.
  • Fused camera and IMU data with an EKF to avoid failures under lossy or sparse environmental conditions.
  • Supervised by Prof. Wei-Tyng Hong
  • Work Report | Certificate

Perception Segmentation Convolutional CRFs Kalman Filters Python

April – June 2020

Atom Robotics Chennai, India

Team Captain and Co-Founder
  • An intelligent robotics and satellite exploration team of 50+ aspiring young minds.
  • The team focuses on the Intelligent Ground Vehicles Challenge, USA; the International Planetary Aerial Systems challenge, Mars Society South Asia (MSSA); and CanSat, USA.
  • Won more than 20 awards to date.
  • Team Report | Our Achievements

Intelligent Robotics Space Robotics Team Leadership Team Management

January 2019 - Present


RIACT 2020 International Conference

Optimization of quadcopter frame using generative design and comparison with DJI F450 drone frame
Jerrin Bright et al 2021 IOP Conf. Ser.: Mater. Sci. Eng. 1012 012019
  • The research explores designing a drone frame using generative design tools.
  • A quadcopter frame is designed using the generative design tools embedded in Autodesk Fusion 360.
  • Simulation results (static stress-strain, modal frequency and displacement) of the additively manufactured quadcopter are compared with the DJI Flame Wheel F450 drone frame.
  • The generatively designed frame shows minimal displacement compared with the traditionally designed drone frame.
  • Generative design combined with additive manufacturing yields frames with improved fracture resistance and lower displacement than the traditionally designed DJI Flame Wheel F450 frame.
  • Paper Link | Presentation Slides

Generative designing Drone frames GAN Autodesk

Ongoing Research

Attention Embedded Squeeze Excitation Network for Medical Imaging
  • The research focuses on establishing a benchmark for medical imaging enhanced by attention-based convolutional neural networks.
  • Considering the requirements, we adapted state-of-the-art networks, thereby increasing their performance.
  • The model achieves accuracies of 96%, 98%, 95% and 96% (on brain tumor, pneumonia, diabetic retinopathy and skin lesion datasets, respectively) with only a minute increase in computation.
  • Statistical evidence of 5% and 8% improvements over comparable networks, and a minimum improvement of 3% over state-of-the-art networks, was observed and logged in this research.

Attention Learning Medical Imaging Deep Neural Network Residual Network Python
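The channel attention at the heart of a squeeze-and-excitation block can be sketched in NumPy. This is an illustrative toy with random weights, not the trained network; the shapes and reduction ratio are assumptions for the example.

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    """Squeeze-and-Excitation channel attention on a (C, H, W) feature map.
    w1: (C/r, C) reduction weights; w2: (C, C/r) expansion weights."""
    # Squeeze: global average pool each channel down to one descriptor
    s = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating
    z = np.maximum(w1 @ s, 0)                       # shape (C/r,)
    gate = 1 / (1 + np.exp(-(w2 @ z)))              # shape (C,), values in (0, 1)
    # Scale: reweight channels by their learned importance
    return feat * gate[:, None, None]

# Toy example: 8 channels, 4x4 spatial map, reduction ratio 4
rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(2, 8)) * 0.1
w2 = rng.normal(size=(8, 2)) * 0.1
out = squeeze_excitation(feat, w1, w2)
```

Because the gate is a sigmoid, each output channel is the input channel scaled by a factor in (0, 1); training learns which channels to emphasize.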

Ongoing Research

Panoptic-Dynamic Obstacle Avoidance for Visual Trajectory Prediction
  • Vision-based dynamic obstacle detection is a leading research area among roboticists, given that visual sensors are among the most indispensable in autonomous systems.
  • Currently proposed systems rely on feature-tracking algorithms or on specialized hardware such as event cameras.
  • Our research aims to use a standard visual sensor to predict and assign each pixel a panoptic head (using panoptic segmentation).
  • The panoptic head is then tracked with an optical flow algorithm, effectively predicting the trajectory of the dynamic obstacle.

Dynamic Screening Panoptic Segmentation KLT Tracking Trajectory Prediction ROS Python


Visual Odometry
Developed a Python package to reconstruct indoor and outdoor environments with diverse texture contrasts, using the Oriented FAST and Rotated BRIEF (ORB) feature detector, FLANN-based matchers and RANSAC for outlier removal, with optical flow and PnP (DLT and Levenberg-Marquardt) for estimating the robot's pose.
Techniques used:
  • ORB Extraction
  • KLT Tracking
  • PnP
  • FLANN Matchers
  • Triangulation
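The triangulation step in a pipeline like the one above can be sketched as a linear (DLT) triangulation of one point seen in two views. The projection matrices and point below are toy values, not the package's API:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 camera projection matrices; pt1, pt2: (x, y) observations."""
    # Each view contributes two rows of the homogeneous system A X = 0
    A = np.array([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two identity-intrinsic cameras, the second shifted 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])  # ground-truth 3D point

def project(P, X):
    x = P @ np.append(X, 1)
    return x[:2] / x[2]

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the SVD recovers the point exactly; with real matched features, RANSAC discards the outlier correspondences before triangulation.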
RTAB-Map Implementation using a Mobile Robot
Implemented 3D SLAM, namely Real-Time Appearance-Based Mapping (RTAB-Map), using a Kinect sensor on a ground vehicle. The robot was fitted with the necessary sensors and used to map an indoor environment. The 3D map was built and launched in a Gazebo environment; the BUG-2 algorithm then drove the robot through the environment, reading sensor data and ultimately completing the map. RViz was used to visualize the 2D map created with RTAB-Map.
Techniques used:
  • SLAM
  • ROS- Gazebo & RViz
  • Path Planning
  • Loop Closure
GMapping for AGV
Implemented GMapping with a 2D LiDAR on a TurtleBot in a ROS/Gazebo environment, constructing a 2D map of an indoor space visualized in RViz. An Adaptive Monte Carlo Localization (AMCL) based navigation stack was then used to drive the ground robot to a navigation goal.
Techniques used:
  • SLAM- GMapping
  • ROS- Gazebo & RViz
  • Turtlebot
  • Path Planning
  • Monte Carlo Localization
  • Teleoperation
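The Monte Carlo Localization used by AMCL boils down to a move/weight/resample loop over particles. The sketch below is a toy 1D corridor, not the ROS navigation stack; the noise levels and sensor model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measurement, noise=0.1, sensor_sigma=0.2):
    """One Monte Carlo Localization cycle for a robot in a 1D corridor.
    particles: candidate positions; control: odometry displacement;
    measurement: observed distance from the wall at position 0."""
    # 1. Motion update: move every particle, adding odometry noise
    particles = particles + control + rng.normal(0, noise, particles.shape)
    # 2. Measurement update: weight by the likelihood of the range reading
    weights = np.exp(-0.5 * ((particles - measurement) / sensor_sigma) ** 2)
    weights /= weights.sum()
    # 3. Resample: keep particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.uniform(0, 10, 500)   # initially uniform over the corridor
for _ in range(5):
    particles = mcl_step(particles, control=0.0, measurement=3.0)
```

After a few cycles the particle cloud collapses around the true pose (3.0 here); AMCL does the same in 2D with laser scan likelihoods and an adaptive particle count.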
An autonomous aerial planetary system designed by our team for logistics and reconnaissance missions on Mars. Completely designed and tested from scratch, with compatibility for flight in a Martian environment in mind.
Techniques used:
  • ORBv3
  • CFD Analysis
  • Orthomosaic stitching
  • Landscape Detection
  • SITL Pixhawk
  • XBee Communication
PID based Autonomous Drone
Developed a ROSpy-based control system for a quadcopter to traverse a set of GPS setpoints autonomously. The control system has two modules, an Altitude Controller (AC) and a Position Controller (PC). The AC stabilizes the drone at zero-error roll, pitch and yaw (R-P-Y) angles using a PID-based controller; the PC takes the target GPS coordinate as its setpoint and calculates the R-P-Y angles needed to reach it. The two controllers work in synchronization to fly the drone autonomously from one coordinate to another.
Techniques used:
  • PID Controller
  • ROS- Gazebo & RViz
  • Python
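A minimal discrete PID loop like the ones described above might look like the following sketch. The toy first-order plant and hand-picked gains are assumptions for illustration, not the actual drone firmware:

```python
class PID:
    """Minimal discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt                  # accumulate the I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy 1D "altitude" toward a 10 m setpoint; the controller output
# is treated as a climb rate and integrated into the altitude each tick.
pid = PID(kp=1.2, ki=0.1, kd=0.3, dt=0.1)
alt = 0.0
for _ in range(400):
    alt += pid.update(10.0, alt) * 0.1
```

On the real drone, one such loop runs per axis (roll, pitch, yaw, thrust), with the Position Controller supplying the setpoints.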
Click here to view some of my other projects.


Key awards won to date are shown here!
Click here to have a look at all my awards.

Outstanding Research Paper Award

RIACT 2020 International Conference

Recognized Galactic Problem Solver

NASA International Space Challenge

Second Runner-up IEEE Hackathon

Apogee’21, BITS Pilani Campus

Winner of CURRENTS’20 NIT Trichy

Autonomous Line Follower, National Level Techfest


Published more than 10 blog posts to date for machine learning and deep learning enthusiasts.
Click here to have a look at all my blogs.

Contact me

My inbox is always open; glad to connect, discuss and collaborate.
Whether it's about work, volunteering or just a casual chat!
Feel free to shoot me an email; will get back ASAP!

Jerrin Bright | Roboticist

Copyright © 2021
Designed by Jerrin Bright