Robotics and Machine Intelligence (ROMI) Lab

Industrial robots have revolutionized industry over the last few decades. The world of robotics has now moved on to developing mobile robots with navigational intelligence and scene interpretation ability, so that they can interact with their environment and efficiently execute a task. Robotics today is therefore a combination of navigational intelligence (which includes localization, mapping, SLAM, path planning, etc.) and computer vision based scene interpretation. These research areas continue to attract researchers, and many challenges remain to be solved before we can see a truly autonomous mobile agent capable of navigating on its own and interacting with its environment.

The Robotics and Machine Intelligence (ROMI) Lab, part of the Department of Electrical Engineering at SEECS, is mainly interested in developing intelligent systems for robots. We work in the areas of robot localization, mapping, SLAM, and path planning; these capabilities lie at the heart of any system that claims to be truly autonomous. We are currently active in all these areas through a funded project to develop efficient SLAM algorithms for indoor and outdoor mobile robots.

As robots become more intelligent, scene understanding has become an essential skill, and computer vision and machine learning are now part of almost any robotic system that interacts with its environment. We are also interested in these areas and are actively pursuing computer vision and machine learning techniques that can be applied to robotics. Current areas of work include human pose estimation for robotic applications, SLAM with vision sensors in dark/low-light conditions, reinforcement learning in robotics, and developing transferable skills for robots.

People:

  • Dr. Latif Anjum (Founder and Lab Director)

Faculty Researchers:

  • Dr. Wajahat Hussain (Assistant Professor, SEECS)

MS Students:

  • Muhammad Mateen Zafar (path planning algorithms)
  • Muhammad Haseeb (human pose estimation for robotics applications)

UG Students:

  • Abdul Samad Usman (BEE-5, FYP: Development of an anthropomorphic robot hand with basic finger movement capability)
  • Muhammad Saad Tariq (BEE-5, FYP: Development of an anthropomorphic robot hand with basic finger movement capability)
  • Ammad Ahmed (BEE-5, FYP: Development of an anthropomorphic robot hand with basic finger movement capability)
  • Yasir Islam (BEE-5, FYP: Automatic Multistoried Car Parking System)
  • Hafiz Faisal Naseer (BEE-5, FYP: Automatic Multistoried Car Parking System)   
  • Osama Imran (BEE-5, FYP: Automatic Multistoried Car Parking System)

Research:

Funded Projects:

Project Title: Developing an efficient and robust SLAM algorithm for indoor and outdoor mobile robots.
Funding Agency: Higher Education Commission (HEC), Government of Pakistan. NRPU-2016/2017
PI: Dr. Latif Anjum
Co-PI: Dr. Osman Hasan
Funded amount: PKR 3.86811 M
Project duration: 2 years

This project aims to produce a robust and efficient SLAM algorithm for indoor and outdoor navigation of mobile robots. SLAM implementations reported in the literature vary in the sensors they use for localization and mapping and in the filtering algorithms they apply. Proprioceptive sensors (such as encoders, gyroscopes, and accelerometers) have traditionally been used for localization. An increasing trend is to use exteroceptive sensors (such as laser range finders and depth cameras) for localization, thanks to their convenience of use and efficiency.
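As a concrete illustration of proprioceptive localization, the following minimal sketch dead-reckons a differential-drive pose from wheel encoder ticks. It is not the project's implementation; the wheel radius, encoder resolution, and wheel base are assumed values, and the drift such dead reckoning accumulates is precisely what SLAM corrects.

```python
import math

# Illustrative platform parameters (assumptions, not the lab's robot).
WHEEL_RADIUS = 0.05    # wheel radius in metres
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution
WHEEL_BASE = 0.30      # distance between the two wheels in metres

def ticks_to_distance(ticks):
    """Convert encoder ticks to distance travelled by one wheel."""
    return 2 * math.pi * WHEEL_RADIUS * ticks / TICKS_PER_REV

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Integrate one odometry step of a differential-drive robot."""
    d_left = ticks_to_distance(left_ticks)
    d_right = ticks_to_distance(right_ticks)
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Advance along the average heading of the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    # Wrap the heading to [-pi, pi).
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta

pose = (0.0, 0.0, 0.0)
pose = update_pose(*pose, left_ticks=210, right_ticks=190)
print(pose)  # small error per step, but it grows without bound
```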

Mapping the environment requires sensors that can observe its features. The most popular exteroceptive sensors used for mapping include laser range finders, depth cameras, and RGB cameras. Apart from the choice of sensors, SLAM implementations differ in the filtering algorithm they employ. The most common filtering algorithms are the particle filter, the extended Kalman filter, the information filter, and the Rao-Blackwellised particle filter.
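To make the filtering step concrete, here is a minimal particle filter update on a toy 1-D localization problem: particles are weighted by the likelihood of a range measurement and then resampled. The measurement value and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Systematic resampling: survival in proportion to weight."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(n, 1.0 / n)

# 500 position hypotheses spread along a 10 m corridor.
particles = rng.uniform(0.0, 10.0, size=500)
weights = np.full(500, 1.0 / 500)

# Weight by the likelihood of a (hypothetical) range measurement.
z, sigma = 4.2, 0.5
weights *= np.exp(-0.5 * ((particles - z) / sigma) ** 2)
weights /= weights.sum()

particles, weights = resample(particles, weights)
print(particles.mean())  # posterior mean settles near the measurement
```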

Our first implementation of SLAM will utilize both laser range finders and RGB-D sensors. Both sensors measure distances to nearby obstacles and have been independently used in many SLAM implementations. Laser range finders (for example, the SICK LMS500 laser scanner) provide distance and angle data in a 2-dimensional field. The same information can be extracted from RGB-D cameras (such as the Kinect) mounted in specified directions. The accuracy of the measurement can be greatly improved if data from both sensors is combined by an efficient sensor fusion algorithm. We plan to use the unscented Kalman filter (UKF), which is widely reported to be more accurate than the conventionally used extended Kalman filter.
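Below is a minimal sketch of this kind of fusion, assuming the filterpy library and a hypothetical planar state [x, y, heading] observed as range and bearing to one known landmark; sequentially applying UKF updates with different noise covariances is one simple way to trust the laser more than the depth camera. The landmark position, noise levels, and measurement values are all made up.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

LANDMARK = np.array([5.0, 3.0])  # assumed known landmark position

def fx(x, dt):
    """Placeholder motion model (stationary); replace with odometry."""
    return x

def hx(x):
    """Predicted range and bearing to the landmark from [x, y, heading]."""
    dx, dy = LANDMARK[0] - x[0], LANDMARK[1] - x[1]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - x[2]])

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=2, dt=0.1, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([0.0, 0.0, 0.0])
ukf.P *= 0.5
ukf.Q = np.eye(3) * 1e-3

R_LASER = np.diag([0.01, 0.001])  # laser: low range/bearing noise
R_RGBD = np.diag([0.10, 0.010])   # depth camera: noisier

ukf.predict()
ukf.update(np.array([5.82, 0.54]), R=R_LASER)  # laser observation
ukf.update(np.array([5.90, 0.52]), R=R_RGBD)   # RGB-D observation
print(ukf.x)  # pose estimate pulled towards the consistent readings
```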

The project aims to produce at least three variants of SLAM using various sensors and filtering algorithms. A second variant of the above implementation will use the Rao-Blackwellised particle filter with laser range finders and RGB-D sensors. Localization accuracy can be greatly improved if data from inertial sensors and encoders is fused with the laser range finder, so our third variant will combine encoder and inertial data with laser range finders. The results of each implementation will be shared with the robotics community through publication in reputable journals and conferences.
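As a small illustration of why inertial and encoder data complement each other, the sketch below blends a gyro-integrated heading (accurate over short intervals, but biased) with an encoder-derived heading (drift-free in bias terms, but corrupted by wheel slip). The blending weight is an assumption, and this is a simple complementary filter rather than the Kalman-based fusion the project will use.

```python
ALPHA = 0.98  # assumed blending weight: trust the gyro for fast changes

def fuse_heading(prev_heading, gyro_rate, encoder_heading, dt):
    """Complementary fusion of gyro and encoder heading estimates."""
    gyro_heading = prev_heading + gyro_rate * dt  # integrate angular rate
    return ALPHA * gyro_heading + (1.0 - ALPHA) * encoder_heading

heading = 0.0
heading = fuse_heading(heading, gyro_rate=0.10, encoder_heading=0.012, dt=0.1)
print(heading)
```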

Path Planning for Mobile Robots:

Resource person: Muhammad Mateen Zafar

We are working towards the development of a robust and fast algorithm for robotic path planning. Path planning is a basic requirement for mobile robots to navigate autonomously: the task is to find a path to a destination within a given map. We are looking to optimize a multi-objective cost function that balances time to destination, computational efficiency, and clearance from obstacles.
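A minimal sketch of such a cost function is shown below, scoring candidate paths by length and by clearance from obstacles; the weights and the clearance penalty are assumptions, not the lab's final formulation, and the computational-efficiency objective would be a property of the planner rather than of this score.

```python
import math

def path_cost(waypoints, obstacles, w_length=1.0, w_clearance=2.0):
    """Lower is better: short paths that stay clear of obstacles win."""
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    # Penalize the closest approach to any obstacle along the path.
    min_clearance = min(math.dist(p, o) for p in waypoints for o in obstacles)
    return w_length * length + w_clearance / max(min_clearance, 1e-6)

obstacles = [(1.0, 1.2)]
path_direct = [(0, 0), (1, 1), (2, 2)]  # short but skims the obstacle
path_detour = [(0, 0), (0, 2), (2, 2)]  # longer but safer
for name, path in [("direct", path_direct), ("detour", path_detour)]:
    print(name, round(path_cost(path, obstacles), 3))
```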

Publications:

  • Anjum, M. L., Rosa, S., & Bona, B. (2017, in press). Tracking a Subset of Skeleton Joints: An Effective Approach Towards Complex Human Activity Recognition. Journal of Robotics.
  • Anjum, M. L., Ahmad, O., Rosa, S., Yin, J., & Bona, B. (2014). Skeleton Tracking Based Complex Human Activity Recognition Using Kinect Camera. In Social Robotics (pp. 23-33). LNAI-8755, Springer International Publishing.
  • Anjum, M. L., Ahmad, O., Bona, B., & Cho, D. I. (2014). Sensor Data Fusion Using Unscented Kalman Filter for VOR-Based Vision Tracking System for Mobile Robots. In Towards Autonomous Robotic Systems (pp. 103-113). LNAI-8069, Springer Berlin Heidelberg.
  • Yin, J., Carlone, L., Rosa, S., Anjum, M. L., & Bona, B. (2014). Scan Matching for Graph SLAM in Indoor Dynamic Scenarios. In The 27th International Conference of the Florida Artificial Intelligence Research Society (FLAIRS-27) (pp. 418-423). Florida, USA.
  • Ahmad, O., Bona, B., Anjum, M. L., & Khosa, I. (2014). Using Time Proportionate Intensity Images with Non-linear Classifiers for Hand Gesture Recognition. In The 8th International Conference on Robotic, Vision, Signal Processing & Power Applications (ROVISP-2013), Penang, Malaysia (pp. 343-354). Springer Singapore.
  • Anjum, M. L., Park, J., Hwang, W., Kwon, H. I., Kim, J. H., Lee, C. H., & Cho, D. I. (2010). Sensor Data Fusion Using Unscented Kalman Filter for Accurate Localization of Mobile Robots. In IEEE International Conference on Control Automation and Systems (ICCAS-2010) (pp. 947-952). Seoul, South Korea.
  • Park, J., Hwang, W., Kwon, H. I., Kim, J. H., Lee, C. H., Anjum, M. L., & Cho, D. I. (2010). High Performance Vision Tracking System for Mobile Robot Using Sensor Data Fusion with Kalman Filter. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2010) (pp. 3778-3783). Taipei, Taiwan.
  • Kwon, H. I., Park, J., Hwang, W., Kim, J. H., Lee, C. H., Anjum, M. L., & Cho, D. I. (2010). Sensor Data Fusion Using Fuzzy Control for VOR-Based Vision Tracking System. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS-2010) (pp. 2920-2925). Taipei, Taiwan.
  • Hwang, W., Park, J., Kwon, H. I., Anjum, M. L., Kim, J. H., Lee, C., & Cho, D. I. D. (2010). Vision Tracking System for Mobile Robots Using Two Kalman Filters and a Slip Detector. In IEEE International Conference on Control Automation and Systems (ICCAS-2010) (pp. 2041-2046). Seoul, South Korea.
  • Shim, E. S., Hwang, W., Anjum, M. L., Kim, H. S., Park, K. S., Kim, K., & Cho, D. I. D. (2009). Stable Vision System for Indoor Moving Robot Using Encoder Information. In Robot Control (Vol. 9, No. 1) (pp. 50-55). The International Federation of Automatic Control (IFAC).