Reinforcement Learning for Robotic Grasping and Manipulation: A Review

Authors

  • Praveen Kumar Donepudi, UST-Global, Inc.

DOI:

https://doi.org/10.18034/apjee.v7i2.526

Keywords:

Grasp planning, Machine learning, Reinforcement learning

Abstract

The 21st century is the century of robots, which have long since crossed the divide between the virtual world and the physical one. As a leading contender in the coming technological revolution, robotics will play an ever more important role in society through its impact on every field of life, including medicine and healthcare, architecture, manufacturing and food supply, logistics, and transport. This paper reviews a modern approach to robotic grasping that extracts grasp strategies from human demonstration and combines them in a grasp-planning framework, yielding grasps suited to both the object's geometry and the intended manipulation task. The findings show that grasp strategies in the form of grasp type and thumb placement are not only essential to human grasping but also impose significant constraints on hand posture and wrist position, greatly reducing both the robot hand's workspace and the search space for grasp planning. The method has been tested extensively in simulation and on a real robotic system with several representative objects of everyday living, and experiments under varying degrees of perception uncertainty demonstrate its effectiveness.
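Although the review presents no code, the abstract's core idea, using a demonstrated grasp type and thumb placement to constrain the grasp-planning search space, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only: the sampler, thresholds, and function names (sample_candidate_grasps, prune_by_demonstration) are assumptions of this sketch, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's method): prune randomly sampled
# grasp candidates using a demonstrated thumb contact point and approach
# direction, mimicking how demonstration constraints shrink the grasp-planning
# search space described in the abstract.
import numpy as np

def sample_candidate_grasps(n=1000, seed=0):
    """Placeholder sampler: random wrist positions and unit approach directions."""
    rng = np.random.default_rng(seed)
    positions = rng.uniform(-0.1, 0.1, size=(n, 3))      # wrist positions (m)
    approach = rng.normal(size=(n, 3))
    approach /= np.linalg.norm(approach, axis=1, keepdims=True)
    return positions, approach

def prune_by_demonstration(positions, approach,
                           demo_thumb_point, demo_approach_dir,
                           max_thumb_dist=0.03, max_angle_deg=30.0):
    """Keep candidates whose wrist lies near the demonstrated thumb contact
    region and whose approach direction stays within an angular tolerance of
    the demonstrated approach."""
    near_thumb = np.linalg.norm(positions - demo_thumb_point, axis=1) < max_thumb_dist
    aligned = (approach @ demo_approach_dir) > np.cos(np.deg2rad(max_angle_deg))
    return near_thumb & aligned

if __name__ == "__main__":
    pos, appr = sample_candidate_grasps()
    keep = prune_by_demonstration(
        pos, appr,
        demo_thumb_point=np.array([0.02, 0.0, 0.05]),   # hypothetical thumb contact
        demo_approach_dir=np.array([0.0, 0.0, -1.0]),   # hypothetical top-down approach
    )
    print(f"kept {keep.sum()} of {len(keep)} candidate grasps")
```

In this toy setting only a small fraction of the uniformly sampled candidates survive, which is the practical benefit the abstract describes: the planner then searches, or a reinforcement learning policy explores, over a far smaller, demonstration-consistent set of grasps.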


Author Biography

  • Praveen Kumar Donepudi, UST-Global, Inc.

    Enterprise Architect, Information Technology, UST-Global, Inc., Ohio, USA




Published

2020-07-30

How to Cite

Donepudi, P. K. (2020). Reinforcement Learning for Robotic Grasping and Manipulation: A Review. Asia Pacific Journal of Energy and Environment, 7(2), 69-78. https://doi.org/10.18034/apjee.v7i2.526
