Anshul Paigwar

Validation of a Probabilistic Perception Framework using Statistical Model Checking
 Anshul Paigwar (INRIA - Team CHROMA)

Problem:
Nowadays, a large number of Automated Cyber-Physical Systems (ACPS) are based on probabilistic algorithms.
Validation of such systems is a crucial but complex task.
  • In this work we validate CMCDOT, a probabilistic occupancy grid framework developed at Inria, which also estimates the risk of collision in the near future.
  • We use the CARLA simulator to model the ego-vehicle and its sensors, as well as other vehicles, in diverse intersection-crossing scenarios.
  • To validate the CMCDOT algorithm, we define appropriate Key Performance Indicators (KPIs) and run a large number of simulations to evaluate the probability of meeting them. For the evaluation we use Plasma Lab, a statistical model checking platform (see the sketch below).
Publication
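As a rough illustration of this kind of statistical evaluation, the minimal Python sketch below estimates the probability of meeting a KPI from repeated simulation runs. The function run_scenario and the time-to-collision threshold are hypothetical placeholders; the actual work relies on CARLA and Plasma Lab rather than this hand-rolled Monte Carlo loop.

    import random

    def run_scenario(seed):
        # Hypothetical stand-in for one simulated intersection crossing.
        # Returns the minimum time-to-collision (s) observed during the run.
        rng = random.Random(seed)
        return rng.uniform(0.5, 4.0)

    def estimate_kpi_probability(n_runs=1000, ttc_threshold=2.0):
        # Monte Carlo estimate of P(KPI met), i.e. the time-to-collision
        # never drops below the threshold during a scenario.
        successes = sum(run_scenario(s) >= ttc_threshold for s in range(n_runs))
        p_hat = successes / n_runs
        # 95% normal-approximation confidence interval half-width
        half_width = 1.96 * (p_hat * (1.0 - p_hat) / n_runs) ** 0.5
        return p_hat, half_width

    p, hw = estimate_kpi_probability()
    print(f"P(KPI met) = {p:.3f} +/- {hw:.3f}")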

Attentional PointNet for 3D Object Detection in Point Clouds
 Anshul Paigwar, Ozgur Erkent, Christian Wolf and Prof. Christian Laugier (INRIA - Team CHROMA)

Problem:
Real-time, accurate detection of objects in 3D point clouds is a central problem for autonomous navigation. Most existing methods
require data from multiple sensors, which makes them vulnerable to sensor failure.
  • We propose a novel deep architecture called Attentional PointNet for 3D object detection. The network directly operates on sparse 3D points.
  • We extend visual attention mechanisms to 3D point clouds for multiple object detection.
  • We train the model on the KITTI dataset. For car detection, Attentional PointNet achieves a comparable average precision (AP) of 52.28% among architectures using LiDAR data only, and surpasses many approaches in terms of inference time (see the sketch below).
Publication
Poster
Code
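For readers unfamiliar with PointNet-style networks, the sketch below shows the kind of shared per-point MLP followed by max pooling that architectures operating directly on raw points are built on. The layer sizes and the box-regression head are illustrative placeholders, not the actual Attentional PointNet design.

    import torch
    import torch.nn as nn

    class TinyPointNet(nn.Module):
        # Shared per-point MLP + symmetric max pooling, the basic building
        # block of PointNet-like encoders. Sizes here are illustrative only.
        def __init__(self, out_dim=7):  # e.g. (x, y, z, l, w, h, yaw) of a 3D box
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, 256, 1), nn.ReLU(),
            )
            self.head = nn.Linear(256, out_dim)

        def forward(self, points):
            # points: (batch, N, 3) raw coordinates of a cropped 3D region
            feats = self.mlp(points.transpose(1, 2))   # (batch, 256, N)
            global_feat = feats.max(dim=2).values      # order-invariant pooling
            return self.head(global_feat)              # one prediction per region

    boxes = TinyPointNet()(torch.rand(2, 1024, 3))
    print(boxes.shape)  # torch.Size([2, 7])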
 

PyTorch implementation of PointWise Convolutional Neural Network

Problem:
Point clouds are an important type of geometric data structure. However, using point clouds with neural networks for feature extraction and representation learning is an active topic of research.
  • Hua et al., in Pointwise Convolutional Neural Networks, present a new convolution operator, called pointwise convolution, which can be applied at each point in a point cloud to learn pointwise features.
  • We implement a modified pointwise convolution operator in PyTorch by writing a CUDA/C++ extension using the torch ATen library.
  • For each point, we divide the space around it into spherical quadrants and sections. Then, using a nearest-neighbor search, we sort the points into these sections. All the points in a section share the same weight, which can be learned using backpropagation (see the sketch below).
  • Finally, we run object classification experiments on the ModelNet40 and ACFR datasets and compare the results with PointNet.
Code
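The following is a minimal, pure-PyTorch sketch of the idea rather than the actual extension in the repository: neighbors of each point are binned into the eight octants around it, and all neighbors in a bin share one learned weight matrix. Binning by octant only (no radial shells) is a simplification for illustration.

    import torch
    import torch.nn as nn

    class NaivePointwiseConv(nn.Module):
        # Naive (non-CUDA) sectorized pointwise convolution: the space around
        # each point is split into 8 octants and each octant has its own weight.
        def __init__(self, in_dim, out_dim, k=16):
            super().__init__()
            self.k = k
            self.weight = nn.Parameter(torch.randn(8, in_dim, out_dim) * 0.1)
            self.bias = nn.Parameter(torch.zeros(out_dim))

        def forward(self, xyz, feats):
            # xyz: (N, 3) coordinates, feats: (N, in_dim) input features
            dists = torch.cdist(xyz, xyz)                       # (N, N)
            knn = dists.topk(self.k, largest=False).indices     # (N, k) neighbors
            offsets = xyz[knn] - xyz[:, None, :]                # (N, k, 3)
            octant = ((offsets > 0).long() * torch.tensor([1, 2, 4])).sum(-1)
            nbr_feats = feats[knn]                              # (N, k, in_dim)
            out = torch.zeros(xyz.shape[0], self.weight.shape[-1])
            for b in range(8):
                mask = (octant == b).float().unsqueeze(-1)      # (N, k, 1)
                count = mask.sum(dim=1).clamp(min=1.0)          # (N, 1)
                avg = (nbr_feats * mask).sum(dim=1) / count     # mean feature per octant
                out = out + avg @ self.weight[b]
            return out + self.bias

    conv = NaivePointwiseConv(in_dim=3, out_dim=32)
    pts = torch.rand(256, 3)
    print(conv(pts, pts).shape)  # torch.Size([256, 32])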
 

Ground Estimation & Point Cloud Segmentation
Lukas Rummelhard, Anshul Paigwar, Amaury Negre, Christian Laugier ( INRIA - Team CHROMA )

Problem:
Real-time ground estimation and extraction of ground points is a critical pre-processing task for object detection and tracking systems, and for generating proper occupancy grids for autonomous navigation.
  • We propose an adaptive method for ground labeling in 3D point clouds, modeling the ground using a Spatio-Temporal Conditional Random Field.
  • Ground elevation parameters are estimated in parallel for each node, using a variant of the Expectation-Maximization (EM) algorithm (see the sketch below).
  • We use CUDA and an NVIDIA GPU to achieve real-time performance. The algorithm performs efficiently with both highly dense (Velodyne-64) and sparse (IbeoLux) 3D point cloud data.
  • The ground estimation system has been deployed on the experimental vehicle at INRIA and has been tested on embedded systems such as the Nvidia Jetson TX1 and TK1.
Publication
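The sketch below conveys the flavor of the EM-style estimation on a 2D grid: alternate between labeling points as ground or obstacle given the current cell elevations, and re-estimating each cell's elevation from its ground-labeled points. It is a simplified NumPy illustration only, without the CRF coupling between cells or the CUDA implementation.

    import numpy as np

    def estimate_ground(points, cell_size=1.0, z_tol=0.2, n_iters=5):
        # points: (N, 3) array of x, y, z. Returns a boolean ground mask.
        cells = np.floor(points[:, :2] / cell_size).astype(int)
        keys = {tuple(c): i for i, c in enumerate(np.unique(cells, axis=0))}
        cell_id = np.array([keys[tuple(c)] for c in cells])
        # Initialize each cell's elevation with the lowest point it contains
        elev = np.full(len(keys), np.inf)
        np.minimum.at(elev, cell_id, points[:, 2])
        ground = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            # E-step: label points lying close to their cell's current elevation
            ground = np.abs(points[:, 2] - elev[cell_id]) < z_tol
            # M-step: re-estimate elevation as the mean height of ground points
            sums = np.zeros(len(keys))
            counts = np.zeros(len(keys))
            np.add.at(sums, cell_id[ground], points[ground, 2])
            np.add.at(counts, cell_id[ground], 1.0)
            nonempty = counts > 0
            elev[nonempty] = sums[nonempty] / counts[nonempty]
        return ground

    pts = np.random.rand(1000, 3) * [20.0, 20.0, 0.3]
    print(estimate_ground(pts).sum(), "points labeled as ground")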
 


Dynamic Object Detection and Tracking in Point Cloud
Anshul Paigwar, Zubin Priyansh, Pradyot Kvn (The Hitech Robotics Systems, Gurgaon, India)

Problem:
For autonomous robots, dynamic object detection and tracking is a crucial task for safer trajectory planning. Existing methods require prior mapping of the environment or models of the objects, which is often not feasible.
  • We proposed a new framework for real-time Detection and Tracking of Moving Objects (DATMO) in 3D point clouds.
  • The framework involves RANSAC-based ground plane extraction, octree-based 3D spatial change detection to classify static and dynamic objects, and Kalman filter based tracking of dynamic objects and prediction of their state (see the sketch below).
  • We tested our algorithm with point clouds from an Intel RealSense camera and a Velodyne-16 in indoor environments.
Code
Documentation
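As an illustration of the tracking stage, here is a minimal Kalman filter with a constant-velocity model (an assumed motion model for the sketch) for a single object centroid in the ground plane. The noise values are arbitrary placeholders; the actual framework tracks full 3D clusters.

    import numpy as np

    class CentroidKalman:
        # Constant-velocity Kalman filter over the state [x, y, vx, vy].
        def __init__(self, x0, y0, dt=0.1):
            self.x = np.array([x0, y0, 0.0, 0.0])
            self.P = np.eye(4)
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1,  0],
                               [0, 0, 0,  1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)
            self.Q = 0.01 * np.eye(4)   # process noise (placeholder)
            self.R = 0.05 * np.eye(2)   # measurement noise (placeholder)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            # z: measured (x, y) centroid of the dynamic cluster
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    kf = CentroidKalman(0.0, 0.0)
    for t in range(1, 6):
        kf.predict()
        kf.update(np.array([0.1 * t, 0.05 * t]))
    print(kf.x)  # estimated position and velocity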
 

Road Boundaries Detection
Under supervision of Anthony Wong (Institute of Infocomm Research, Singapore )

Problem:
GPS accuracy can vary by a few meters, whereas an autonomous vehicle needs to know its position within a particular lane to within a few centimeters. Detecting road boundaries and combining this information with GPS is one good way to tackle this issue.
  • We proposed two methods for road boundary detection: one using RGB images from cameras and one using 3D point clouds from LiDARs.
  • For detection in RGB images, we first discretize the image into blocks and compute features unique to the texture of the road. We then use these hand-crafted features to train a multi-layer perceptron that classifies each block as road or non-road.
  • For detection in 3D point clouds, we first use the RANSAC algorithm to find the dominant plane (the road), then use a region growing algorithm and the orientation of surface normals to detect the edges of the road (see the sketch below).
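A minimal NumPy sketch of the kind of RANSAC plane fit used to find the dominant road plane; the threshold, iteration count and synthetic data are illustrative only.

    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.05, seed=None):
        # points: (N, 3). Returns the best (normal n, offset d) with n.p + d ~ 0
        # for inliers, and the boolean inlier mask of that model.
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(points), dtype=bool)
        best_model = None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:           # degenerate (collinear) sample
                continue
            n = n / norm
            d = -n @ sample[0]
            inliers = np.abs(points @ n + d) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers, best_model = inliers, (n, d)
        return best_model, best_inliers

    pts = np.random.rand(500, 3) * [10.0, 10.0, 0.02]   # mostly flat "road"
    (n, d), mask = ransac_plane(pts)
    print(mask.sum(), "inliers, normal", np.round(n, 2))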
 

Localisation of Autonomous Vehicle using EKF
Under supervision of Anthony Wong (Institute of Infocomm Research, Singapore )

Problem:
GPS accuracy can vary by a few meters, while odometry data drifts as the vehicle moves; relying on a single sensor for localisation is therefore imprecise.
  • We designed and implemented an Extended Kalman Filter (EKF) based sensor fusion system for localisation of the autonomous vehicle.
  • We used a Constant Heading and Velocity (CHCV) vehicle model for the dynamics of the vehicle, and fused GPS, odometry and IMU data for accurate localisation (see the sketch below).
  • We tested the sensor fusion algorithm on the Toyota E-COMs experimental autonomous vehicle platform.
  • We plot the trajectory given by the raw GPS sensor and the trajectory estimated by the EKF, and superimpose them on the map created by the LiDAR sensor.
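For illustration, here is a minimal EKF prediction/update cycle with a constant heading and velocity state [x, y, heading, speed] and a GPS position measurement. The noise matrices are placeholders, and the real system additionally fuses odometry and IMU data.

    import numpy as np

    def ekf_step(x, P, z_gps, dt=0.1):
        # x = [px, py, heading, speed]; z_gps = measured (px, py).
        px, py, psi, v = x
        # Predict with the constant heading and velocity (CHCV) model
        x_pred = np.array([px + v * np.cos(psi) * dt,
                           py + v * np.sin(psi) * dt,
                           psi,
                           v])
        F = np.array([[1, 0, -v * np.sin(psi) * dt, np.cos(psi) * dt],
                      [0, 1,  v * np.cos(psi) * dt, np.sin(psi) * dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        Q = np.diag([0.01, 0.01, 0.001, 0.1])     # process noise (placeholder)
        P_pred = F @ P @ F.T + Q
        # Update with the GPS position measurement
        H = np.array([[1.0, 0, 0, 0],
                      [0, 1.0, 0, 0]])
        R = np.diag([2.0, 2.0])                   # GPS noise (placeholder)
        y = z_gps - H @ x_pred
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        return x_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

    x, P = np.array([0.0, 0.0, 0.3, 5.0]), np.eye(4)
    for t in range(1, 11):
        gps = np.array([0.5 * t * np.cos(0.3), 0.5 * t * np.sin(0.3)])
        x, P = ekf_step(x, P, gps)
    print(np.round(x, 2))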

Drawing smooth contour using tangible input device
Anshul Paigwar, Laura Lassance (ENSIMAG, Univ. Grenoble Alpes, France)

Problem:
Drawing smooth contours with computer drawing software can often be a challenging and time-consuming task. The mouse, the traditional input method for such software, is not intuitive because of its indirect input nature.
  • The proposed solution is a tangible input device for drawing curves, as tangible interactions with computers have been shown to be more intuitive than traditional mouse/keyboard ones.
  • To validate our hypothesis, we designed a tangible input device using a metal wire. We used an OptiTrack motion capture system, with a group of optical markers attached to the wire; three cameras detect the positions of the markers. These positions serve as input to our software, which fits a curve to the points (see the sketch below). The user can freely bend the metal wire into the desired shape, and a real-time image of the contour is generated on the computer screen.
  • We performed experiments with 15 people drawing curves in Inkscape with a mouse and with our tangible input device. Tasks were divided according to the difficulty of the curve; the time required and ease of use were noted for each task, and the curves drawn with both input modalities were compared for precision.
  • We found that, in terms of time, the tangible input system was more efficient for smooth contours than the traditional one: on average, the tangible device was 2.64 times faster than the mouse. Concerning precision, the traditional approach performed better in our experiments.
Documentation
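The curve-fitting step can be sketched with SciPy's smoothing B-spline routines; the marker coordinates below are synthetic stand-ins for the OptiTrack output, and this is not the project's actual fitting code.

    import numpy as np
    from scipy.interpolate import splprep, splev

    # Synthetic stand-in for 3D marker positions read from motion capture
    t = np.linspace(0, 2 * np.pi, 12)
    markers = np.stack([np.cos(t), np.sin(t), 0.1 * t]) + 0.01 * np.random.randn(3, 12)

    # Fit a smoothing B-spline through the marker positions...
    tck, _ = splprep(markers, s=0.001)
    # ...and sample it densely to render the contour on screen
    u = np.linspace(0, 1, 200)
    curve = np.array(splev(u, tck))   # (3, 200) points along the smooth contour
    print(curve.shape)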
 

Omni- Directional Modular Snake Robot
Akash Singh, Anshul Paigwar, Sai Teja Manchukanti, Manish Saroya, Shital Chiddarwar  ( IVLABS, Visvesvaraya National Institute of Technology, India )

Problem:
Control, state estimation and motion planning of highly articulated snake robots have been challenging tasks for researchers. As a result, formulating gaits for the modular structure, both for motion on flat terrain and for overcoming obstacles, is mathematically complicated.
  • We present a novel design of a Compliant Omni-directional Snake robot (COSMOS), consisting of spherical robot modules linked mechanically and in software.
  • This design avoids the problems planar snake robots face in handling versatile motions with complex gait analysis, by leveraging the omni-directional motion capabilities of spherical robots.
  • The spherical bots are based on the BHQ-3 design and the barycenter offset principle, with two driving units: one to propel the robot forward and the other to steer the propelling unit.
Publication
Poster

Blind Navigator
​An assistive wearable device for the blind
 Anshul Paigwar, Sai Teja Manchukanti, K. M. Bhurchandi
  (IVLABS, Visvesvaraya National Institute of Technology, India)

​With an aim of helping blind people to navigate, we designed the Blind Navigator,
a wearable device, through strategic design thinking, integrating function,
ergonomics and aesthetics while keeping the product cost low.

The device consists of ultrasonic sensors for obstacle detection, vibratory motors for
haptic feedback, a battery, a charging solution, and a circuit board in a compact
design that fits comfortably in the hand.

Poster
 

20 DOF Humanoid Robot
Anshul Paigwar, Akash Singh, Prasad Vagdargi, Sai Teja Manchukanti, Manish Saroya, Shivam Shrivastav,
Shital Chiddarwar, K M Burchandi (IVLABS, Visvesvaraya National Institute of Technology, India )

  • Inspired by Darwin-OP, we designed a 3D CAD model of the complete framework of a 20 DOF humanoid robot using SOLIDWORKS and fabricated it using CNC machines in our lab.
  • We designed a sub-controller circuit board using CadSoft EAGLE for the control and power management of the Dynamixel-28 motors used in the robot.
  • We worked on kinematic and inverse kinematic modeling of the 20 DOF humanoid.
  • SWAYAT is capable of walking, grabbing objects, and tasks like writing and sketching.
  • We participated in the humanoid sprint challenge with SWAYAT at FIRA 2016, Beijing, China.
Poster
Know More

Trajectory Generation for Spray Painting Robot
Mayur Andulkar, Shital Chiddarwar, Anshul Paigwar ( Visvesveraya National Institute of Technology, India)

Problem:
The spray gun trajectory for a robotic arm in industry is usually handcrafted and specific to the part being painted. This process is tedious and time consuming.
  • We propose a new automated offline robot trajectory generation approach for free-form surfaces.
  • The spray gun trajectory is generated from the computer aided design (CAD) model of a surface, depending on the process parameters and spray gun characteristics, for a structured environment.
  • A 'section'-based approach is developed, where paint passes consisting of points and normals are generated from different overlapping 'sections'.
  • The paint thickness deviation is minimized by computing an optimal spray gun velocity per paint pass (see the sketch below).
Publication
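The velocity computation can be illustrated with a very simple uniform-deposition model: if paint leaves the gun at volumetric flow Q and is spread over a pass of width w, the average film thickness is roughly Q / (w * v), so the gun speed per pass follows from the target thickness. The numbers and the uniform-deposition assumption below are illustrative only, not the optimization used in the paper.

    # Illustrative uniform-deposition model: thickness ~ efficiency * flow / (width * speed)
    def gun_speed_for_thickness(flow_m3_per_s, pass_width_m, target_thickness_m,
                                transfer_efficiency=0.7):
        # Solve thickness = efficiency * flow / (width * speed) for speed
        return transfer_efficiency * flow_m3_per_s / (pass_width_m * target_thickness_m)

    # Example: 100 ml/min paint flow, 0.2 m pass width, 60 micron target film
    q = 100e-6 / 60.0                       # m^3/s
    v = gun_speed_for_thickness(q, 0.2, 60e-6)
    print(f"required gun speed: {v:.3f} m/s")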

Design of a wireless glove based I/O interface
Anshul Paigwar, Prasad Vagdargi, Shaishav Vashi ( IVLABS, Visvesvaraya National Institute of Technology, Nagpur, India)

Problem:
Little has been done to bridge the communication gap between society and people with hearing and speech disabilities.
  • With this wireless assistive glove, we aim to use technology to aid the disabled by embedding an array of features into the glove: speech-to-text and text-to-speech conversion, and tactile keys that act as a keyboard and mouse pointer, enabling further control of computers and other digital environments. An Android application also helps make the learning process more interactive.
  • The major application of this device is proposed in schools, to help students with hearing and speech disabilities learn in public schools.
Know More

Writing Arm
Anshul Paigwar, Prasad Vagdargi, Shivam Shrivastav ( IVLABS, Visvesvaraya National Institute of Technology, India )

  • The robotic arm is designed around Dynamixel AX-12A robot actuators, driven through the ROBOTIS software, and controlled by an OpenCM9.04, a microcontroller board based on a 32-bit ARM Cortex-M3.
Know More

Design of Leech Robot
Under the supervision of Prof. Prasanna Gandhi ( Indian Institute of Technology, Bombay, India )

  • Inspired by the leech, I designed, fabricated and assembled a bio-robot consisting of a flexible link, capable of performing leech-like motions and stair climbing.
  • The robot consists of a flexible beam (copper beryllium) and two modules, one attached at each end of the beam. These modules contain servo motors, a LiPo battery, circuit boards, an electromagnet, an inertial measurement unit and an XBee module. The ends of the beam are connected to the servo motors, providing the necessary torque to lift the module at the other end.
Documentation

Line Follower Robot
Visvesvaraya National Institute of Technology, India


Hydraulic Crane using Syringes
Anshul Paigwar, Varun Gupta ( Visvesvaraya National Institute of Technology, India)


Projects Assisted

ReBiS - Reconfigurable Bipedal Snake Robot
Rohan Thakker, Ajinkya Kamat, Sachin Bharambe, Shital Chiddarwar, KM Bhurchandi ( IVLABS, Visvesvaraya National Institute of Technology, India)

Paper

Low cost Underactuated Robotic hand
Parag Khanna, Khushdeep Singh, K. M. Bhurchandi, Shital S. Chiddarwar ( IVLABS, Visvesvaraya National Institute of Technology, India)

Paper
Know More

Garuda Drone Delivery System
Shubhanshu Gupta, Pranay Pourkar, Vedant Ranade, Anish Gupta, Aditya Bastapure, Akshay Kulkarni, Amit Balki, Shital Chiddarwar
( IVLABS, Visvesvaraya National Institute of Technology, India)

Know More

Low cost Portable e-Braille Reader
Deba Prakash Nayak, Abhishek Tommy, Sumedh Warade, Vivek Patel, Sai Teja Manchukanti, A. S. Gandhi
(IVLABS, Visvesvaraya National Institute of Technology, India)

Documentation
Know More

Butler Bot - An Omni-directional Mobile Robot
Akash Singh, Sai Teja Manchukanti, Manish Saroya, Manish Maurya, K. M. Burchandi (IVLABS, Visvesvaraya National Institute of Technology, India)

Documentation
Know More