Bibliography

References and Further Reading

This page contains citations for all sources referenced throughout the course. Citations follow APA 7th edition format.


Foundational Robotics

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-159. https://doi.org/10.1016/0004-3702(91)90053-M

Pfeifer, R., & Bongard, J. (2006). How the body shapes the way we think: A new view of intelligence. MIT Press.

Siciliano, B., & Khatib, O. (Eds.). (2016). Springer handbook of robotics (2nd ed.). Springer. https://doi.org/10.1007/978-3-319-32552-1


ROS 2 and Software Architecture

Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., & Ng, A. Y. (2009). ROS: An open-source Robot Operating System. ICRA Workshop on Open Source Software, 3(3.2), 5.

Macenski, S., Foote, T., Gerkey, B., Lalancette, C., & Woodall, W. (2022). Robot Operating System 2: Design, architecture, and uses in the wild. Science Robotics, 7(66), eabm6074. https://doi.org/10.1126/scirobotics.abm6074


Simulation and Digital Twins

Koenig, N., & Howard, A. (2004). Design and use paradigms for Gazebo, an open-source multi-robot simulator. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2149-2154. https://doi.org/10.1109/IROS.2004.1389727

Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., & Abbeel, P. (2017). Domain randomization for transferring deep neural networks from simulation to the real world. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 23-30. https://doi.org/10.1109/IROS.2017.8202133


Visual SLAM

Mur-Artal, R., & Tardós, J. D. (2017). ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Transactions on Robotics, 33(5), 1255-1262. https://doi.org/10.1109/TRO.2017.2705103

Labbé, M., & Michaud, F. (2019). RTAB-Map as an open-source lidar and visual simultaneous localization and mapping library for large-scale and long-term online operation. Journal of Field Robotics, 36(2), 416-446. https://doi.org/10.1002/rob.21831

Navigation and Path Planning

Macenski, S., Martín, F., White, R., & Ginés Clavero, J. (2020). The Marathon 2: A navigation system. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2718-2725. https://doi.org/10.1109/IROS45743.2020.9341207

Fox, D., Burgard, W., & Thrun, S. (1997). The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4(1), 23-33. https://doi.org/10.1109/100.580977


Bipedal Locomotion

Kajita, S., Kanehiro, F., Kaneko, K., Fujiwara, K., Harada, K., Yokoi, K., & Hirukawa, H. (2003). Biped walking pattern generation by using preview control of zero-moment point. 2003 IEEE International Conference on Robotics and Automation (ICRA), 1620-1626. https://doi.org/10.1109/ROBOT.2003.1241826

Vukobratović, M., & Borovac, B. (2004). Zero-moment point—Thirty five years of its life. International Journal of Humanoid Robotics, 1(1), 157-173. https://doi.org/10.1142/S0219843604000083


Manipulation and Grasping

Bohg, J., Morales, A., Asfour, T., & Kragic, D. (2014). Data-driven grasp synthesis—A survey. IEEE Transactions on Robotics, 30(2), 289-309. https://doi.org/10.1109/TRO.2013.2289018

Billard, A., & Kragic, D. (2019). Trends and challenges in robot manipulation. Science, 364(6446), eaat8414. https://doi.org/10.1126/science.aat8414


Vision-Language-Action Models

Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., ... Zeng, A. (2022). Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691. https://arxiv.org/abs/2204.01691

Brohan, A., Brown, N., Carbajal, J., Chebotar, Y., Dabis, J., Finn, C., ... Zitkovich, B. (2023). RT-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818. https://arxiv.org/abs/2307.15818


Speech Recognition and NLP

Radford, A., Kim, J. W., Xu, T., Brockman, G., McLeavey, C., & Sutskever, I. (2023). Robust speech recognition via large-scale weak supervision. International Conference on Machine Learning, 28492-28518. PMLR.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.


NVIDIA Isaac and GPU-Accelerated Robotics

Makoviychuk, V., Wawrzyniak, L., Guo, Y., Lu, M., Storey, K., Macklin, M., ... State, G. (2021). Isaac Gym: High performance GPU-based physics simulation for robot learning. arXiv preprint arXiv:2108.10470. https://arxiv.org/abs/2108.10470


Reinforcement Learning for Robotics

Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J., & Quillen, D. (2018). Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. The International Journal of Robotics Research, 37(4-5), 421-436. https://doi.org/10.1177/0278364917710318

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. https://arxiv.org/abs/1707.06347


Humanoid Robot Platforms

Unitree Robotics. (2023). Unitree H1 humanoid robot specifications. https://www.unitree.com/h1

Boston Dynamics. (2023). Atlas: The world's most dynamic humanoid robot. https://www.bostondynamics.com/atlas


Online Resources

ROS 2 Documentation. (2024). ROS 2 Humble Hawksbill documentation. https://docs.ros.org/en/humble/

NVIDIA Isaac Documentation. (2024). Isaac Sim documentation. https://docs.omniverse.nvidia.com/isaacsim/

Gazebo Documentation. (2024). Gazebo simulation documentation. https://gazebosim.org/docs

Unity Technologies. (2024). Unity Robotics Hub. https://github.com/Unity-Technologies/Unity-Robotics-Hub


Note on Citations

All citations in this course follow APA 7th edition format. Where available, DOI links or arXiv identifiers are provided for direct access to source materials. Because robotics platforms and software evolve quickly, consult the official documentation and manufacturer websites for the most current information.
