Overview of Harris Underwater Grasping Project

Robotic grasping is a challenging problem. Performing grasps underwater with a robotic arm attached to an Autonomous Underwater Vehicle (AUV) is an even greater challenge. Without prior knowledge of the objects to be grasped, we must plan grasps online using point cloud data directly as the input. To do this, we need to answer the following questions:
  • Perception - How well can we locate and identify objects vs. the background?
  • Object Extraction - How can we extract the object from the point cloud data without losing the graspable features?
  • Grasp Planning - How do we plan grasps on the approximated object representation within the time limit?
  • Execution - How well can we execute the plan if our perception and control are not ideal?

The raw point cloud data on which we need to plan grasps is shown below:

    The point cloud data for a top hat
    The point cloud data for a pipe frame

    Our Solution Approach

Point cloud data is acquired either by laser range finders, which is the expected sensor for underwater vehicles, or by cameras. The point cloud is clustered, and each cluster is fitted with primitive shapes to obtain an approximate representation of the object's shape. This approximated shape representation is then fed to RPI's modified version of GraspIt!. A primitive-shape grasp planner generates potential grasps a few inches away from the object (known as "pregrasps"). To make the grasp sampling process more efficient, we compute the valid space of the point cloud data. A negative space filtering technique is also used here: it takes the perception data and discards grasps that would close on parts of the object the perception system cannot "see". The best reachable pregrasp is then selected and execution begins.

To deal with perception and control uncertainties, the robot monitors force and torque sensors at the wrist and between the fingers for readings that indicate unexpected contact, and the robot controller reacts to such contact to achieve the best possible grasp.
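As a rough illustration of the clustering and primitive-fitting step, the minimal sketch below groups a raw point cloud into Euclidean clusters and greedily covers each cluster with fixed-radius spheres. It uses only NumPy and SciPy; the function names, radii, and the input file pipe_frame_pcl.txt are illustrative assumptions, not our actual implementation.

```python
# Minimal sketch (not the project code): Euclidean clustering of a point
# cloud followed by a greedy sphere decomposition of each cluster.
import numpy as np
from scipy.spatial import cKDTree


def cluster_points(points, eps=0.02, min_size=30):
    """Group points into Euclidean clusters via region growing."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        frontier = [seed]
        labels[seed] = current
        while frontier:
            idx = frontier.pop()
            for nbr in tree.query_ball_point(points[idx], eps):
                if labels[nbr] == -1:
                    labels[nbr] = current
                    frontier.append(nbr)
        current += 1
    # Keep only clusters with enough support to plan grasps on.
    return [points[labels == c] for c in range(current)
            if np.sum(labels == c) >= min_size]


def fit_spheres(cluster, radius=0.03):
    """Greedily cover a cluster with fixed-radius spheres (centers returned)."""
    remaining = cluster.copy()
    centers = []
    while len(remaining):
        center = remaining[0]                   # pick an uncovered point
        centers.append(center)
        dist = np.linalg.norm(remaining - center, axis=1)
        remaining = remaining[dist > radius]    # drop points the sphere covers
    return np.array(centers)


if __name__ == "__main__":
    cloud = np.loadtxt("pipe_frame_pcl.txt")    # hypothetical Nx3 point file
    for cluster in cluster_points(cloud):
        spheres = fit_spheres(cluster)
        print("cluster of %d points -> %d spheres" % (len(cluster), len(spheres)))
```

The sphere centers produced this way sit on the measured surface; the actual decomposition used for grasp planning may place and size spheres differently.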

Experiment Environment
We set up our experiment environment at CSRL and test our approaches there before sending them to Harris. The environment includes the following:
  • 7DOF Barrett WAM with Barrett Hand
  • Schunk Powerball arm with Schunk Parallel Jaw Gripper
  • Six-axis force torque sensor
  • Force sensors on the fingertips of the gripper
  • Nine camera OptiTrack System for object perception
  • Arm controller implemented in Matlab and xPC Target
  • Pregrasp planning performed in GraspIt!
  • Simulation performed in OpenRave
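For reference, simulation in OpenRAVE can be scripted through its Python bindings. The snippet below is only a minimal sketch of that kind of setup: it loads the Barrett WAM model bundled with OpenRAVE, sets a placeholder joint configuration, and reads back the end-effector pose. The variable pregrasp_config is an illustrative placeholder, not part of our controller.

```python
# Minimal OpenRAVE scripting sketch (assumptions, not our exact setup).
from openravepy import Environment
import numpy as np

env = Environment()
env.SetViewer('qtcoin')                     # optional 3D viewer
env.Load('robots/barrettwam.robot.xml')     # WAM model bundled with OpenRAVE
robot = env.GetRobots()[0]

pregrasp_config = np.zeros(robot.GetDOF())  # placeholder joint angles
robot.SetDOFValues(pregrasp_config)

manip = robot.GetManipulators()[0]
print(manip.GetEndEffectorTransform())      # 4x4 pose of the hand
```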
Results in Simulation and Actual Experiments
Below are our initial results of planning grasps on the pipe frame point cloud data. The point cloud is decomposed into spheres and the exact object model is aligned with the decomposition; a sketch of one alignment step appears after the examples below. Grasps planned on the spheres show good potential for grasping the exact model. For more detail about our system, please check here. Here are more videos about our progress on this project. You can also find an introduction to our previous work on this project here.
    Pipe frame grasp first example
    Pipe frame grasp second example
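The alignment of the exact object model with the sphere decomposition can be illustrated by a single point-to-point ICP step: match each model point to its nearest sphere center, then solve for the best rigid transform with the SVD-based (Kabsch) method. This is only a sketch under those assumptions; our actual alignment procedure may differ.

```python
# Illustrative sketch: one point-to-point ICP step aligning an exact model
# (Nx3 points) to the sphere centers of the decomposition (Mx3 points).
import numpy as np
from scipy.spatial import cKDTree


def rigid_align(model_pts, target_pts):
    """Return R, t that best map model_pts onto their nearest target points."""
    matches = target_pts[cKDTree(target_pts).query(model_pts)[1]]
    mu_m, mu_t = model_pts.mean(axis=0), matches.mean(axis=0)
    H = (model_pts - mu_m).T @ (matches - mu_t)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_m
    return R, t
```

Iterating this step (re-matching after each transform) gives the usual ICP refinement.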
    -- ShuaLi - 10 Mar 2014