Robot Manipulation and Grasping with Deep Learning Training Course
Deep Learning for Robotic Manipulation and Grasping is an advanced course that connects robotic control with contemporary machine learning methods. Participants will investigate how deep learning can improve perception, motion planning, and dexterous grasping capabilities within robotic systems. Through theoretical study, simulation, and practical coding exercises, the course guides learners from perception-based control towards end-to-end policy learning for manipulation tasks.
This instructor-led live training, available online or onsite, targets advanced professionals looking to apply deep learning methods to achieve intelligent, adaptable, and precise robotic manipulation.
Upon completion of this training, participants will be able to:
- Develop perception models for object recognition and pose estimation.
- Train neural networks for grasp detection and motion planning.
- Integrate deep learning modules with robotic controllers using ROS 2.
- Simulate and evaluate grasping and manipulation strategies in virtual environments.
- Deploy and optimize learned models on real or simulated robotic arms.
Course Format
- Expert-led lectures and deep dives into algorithms.
- Practical coding and simulation exercises.
- Project-based implementation and testing.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Course Outline
Introduction to Robotic Manipulation and Deep Learning
- Overview of manipulation tasks and system components
- Traditional vs. learning-based approaches
- Deep learning in perception, planning, and control
Perception for Manipulation
- Visual sensing and object detection for grasping
- 3D vision, depth sensing, and point cloud processing
- Training CNNs for object localization and segmentation
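Localization models like those covered above are usually scored with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal pure-Python sketch of the metric (box format `(x1, y1, x2, y2)` is an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as correct when its IoU with the ground truth exceeds a threshold such as 0.5.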
Grasp Planning and Detection
- Classical grasp planning algorithms
- Learning grasp poses from data and simulation
- Implementing grasp detection networks (e.g., GGCNN, Dex-Net)
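Networks in the GG-CNN family output per-pixel maps of grasp quality, gripper rotation, and gripper width; executing a grasp then reduces to picking the best-scoring pixel. A pure-Python sketch of that selection step, assuming the three maps are same-shaped 2D grids (the grid representation here is illustrative, not any library's API):

```python
def select_grasp(quality, angle, width):
    """Pick the pixel with the highest predicted grasp quality and return
    the grasp parameters (row, col, rotation, gripper width) at that pixel.
    quality/angle/width are same-shaped 2D grids (e.g. network output maps)."""
    best_q, best = -1.0, None
    for r, row in enumerate(quality):
        for c, q in enumerate(row):
            if q > best_q:
                best_q, best = q, (r, c, angle[r][c], width[r][c])
    return best, best_q
```

In practice the chosen pixel is deprojected through the camera intrinsics to a 3D grasp pose before being sent to the motion planner.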
Control and Motion Planning
- Inverse kinematics and trajectory generation
- Learning-based motion planning and imitation learning
- Reinforcement learning for manipulation control policies
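As a reference point for the inverse-kinematics topic above, the classic closed-form solution for a planar 2-link arm fits in a few lines (a simplified textbook case, not a full manipulator solver; the elbow-down branch is chosen arbitrarily here):

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.
    Returns (theta1, theta2) in radians for the elbow-down solution,
    or None if the target is out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the arm's workspace
    t2 = math.acos(c2)  # law of cosines gives the elbow angle
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2
```

Real arms with six or seven joints need numerical or analytical solvers (e.g. as provided by MoveIt), but the same reachability check and branch choice reappear there.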
Integration with ROS 2 and Simulation Environments
- Setting up ROS 2 nodes for perception and control
- Simulating robotic manipulators in Gazebo and Isaac Sim
- Integrating neural models for real-time control
End-to-End Learning for Manipulation
- Combining perception, policy, and control in unified networks
- Using demonstration data for supervised policy learning
- Domain adaptation between simulation and real hardware
Evaluation and Optimization
- Metrics for grasp success, stability, and precision
- Testing under varying conditions and disturbances
- Model compression and deployment on edge devices
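Evaluation pipelines typically aggregate per-trial outcomes into the metrics listed above. A minimal sketch, where the trial fields (`lifted`, `held_secs` from a shake test) are illustrative assumptions rather than a standard schema:

```python
def grasp_metrics(trials):
    """Aggregate grasp-trial outcomes into summary metrics.
    Each trial is a dict with 'lifted' (bool) and 'held_secs' (float,
    how long the object stayed in the gripper during a shake test)."""
    n = len(trials)
    successes = sum(1 for t in trials if t["lifted"])
    success_rate = successes / n if n else 0.0
    # stability: mean hold time over successful lifts only
    hold = [t["held_secs"] for t in trials if t["lifted"]]
    stability = sum(hold) / len(hold) if hold else 0.0
    return {"success_rate": success_rate, "mean_hold_secs": stability}
```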
Hands-on Project: Deep Learning-Based Robotic Grasping
- Designing a perception-to-action pipeline
- Training and testing a grasp detection model
- Integrating the model into a simulated robotic arm
Summary and Next Steps
Requirements
- Strong understanding of robotics kinematics and dynamics
- Experience with Python and deep learning frameworks
- Familiarity with ROS or similar robotic middleware
Audience
- Robotics engineers developing intelligent manipulation systems
- Perception and control specialists working on grasping applications
- Researchers and advanced practitioners in robot learning and AI-based control
Open Training Courses require 5+ participants.
Testimonials (2)
The supply of the materials (virtual machine) to get straight into the exercises, and the explanation of the ROS 2 core: why things work a certain way.
Arjan Bakema
Course - Autonomous Navigation & SLAM with ROS 2
Its knowledge and utilization of AI for robotics in the future.
Ryle - PHILIPPINE MILITARY ACADEMY
Course - Artificial Intelligence (AI) for Robotics
Upcoming Courses
Related Courses
Artificial Intelligence (AI) for Robotics
21 Hours
The field of Artificial Intelligence (AI) in Robotics merges machine learning, control systems, and sensor fusion to develop intelligent machines that can perceive, reason, and act autonomously. By leveraging contemporary tools such as ROS 2, TensorFlow, and OpenCV, engineers are now empowered to create robots capable of intelligent navigation, planning, and interaction within real-world settings.
This instructor-led live training, available online or onsite, is designed for intermediate-level engineers looking to develop, train, and deploy AI-powered robotic systems using modern open-source technologies and frameworks.
Upon completion of this training, participants will be equipped to:
- Utilize Python and ROS 2 to construct and simulate robotic behaviors.
- Apply Kalman and Particle Filters for precise localization and tracking.
- Employ computer vision techniques via OpenCV for perception and object detection.
- Leverage TensorFlow for motion prediction and learning-based control.
- Integrate SLAM (Simultaneous Localization and Mapping) to enable autonomous navigation.
- Develop reinforcement learning models to enhance robotic decision-making processes.
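As a taste of the filtering outcome above, a minimal 1D Kalman filter for a slowly drifting scalar state can be written in a few lines (an illustrative sketch; the random-walk model and variable names are assumptions, not course material):

```python
def kalman_1d(z_measurements, r, q, x0=0.0, p0=1.0):
    """Minimal 1D Kalman filter for a random-walk state.
    r: measurement noise variance, q: process noise variance.
    Returns the filtered state estimate after each measurement."""
    x, p = x0, p0
    estimates = []
    for z in z_measurements:
        p += q                # predict: state unchanged, uncertainty grows
        k = p / (p + r)       # Kalman gain: trust in the new measurement
        x += k * (z - x)      # update: pull the estimate toward the measurement
        p *= (1 - k)          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

Full robot localization uses the same predict/update cycle in higher dimensions, with matrices in place of the scalars here.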
Course Format
- Engaging lectures and interactive discussions.
- Practical implementation exercises using ROS 2 and Python.
- Hands-on practice with both simulated and real-world robotic environments.
Customization Options
To arrange a customized training session for this course, please get in touch with us.
AI and Robotics for Nuclear - Extended
120 Hours
In this instructor-led live training in Greece (online or onsite), participants will learn the different technologies, frameworks, and techniques for programming various types of robots for use in the field of nuclear technology and environmental systems.
The 6-week course is held 5 days a week. Each day is 4 hours long and consists of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete various real-world projects applicable to their work in order to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D through simulation software. The ROS (Robot Operating System) open-source framework, C++ and Python will be used for programming the robots.
By the end of this training, participants will be able to:
- Understand the key concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Extend a robot's ability to perform complex tasks through Deep Learning.
- Test and troubleshoot a robot in realistic scenarios.
AI and Robotics for Nuclear
80 Hours
This instructor-led, live training in Greece (online or onsite) allows participants to learn the technologies, frameworks, and techniques for programming various types of robots for use in nuclear technology and environmental systems.
The four-week course takes place five days a week. Each day consists of four hours of lectures, discussions, and hands-on robot development in a live lab environment. Participants will complete real-world projects relevant to their work to practice their acquired knowledge.
The target hardware for this course will be simulated in 3D via simulation software. The code will then be deployed onto physical hardware (such as Arduino) for final testing. The ROS (Robot Operating System) open-source framework, along with C++ and Python, will be used for robot programming.
By the end of this training, participants will be able to:
- Understand the core concepts used in robotic technologies.
- Understand and manage the interaction between software and hardware in a robotic system.
- Understand and implement the software components that underpin robotics.
- Build and operate a simulated mechanical robot that can see, sense, process, navigate, and interact with humans through voice.
- Understand the necessary elements of artificial intelligence (machine learning, deep learning, etc.) applicable to building a smart robot.
- Implement filters (Kalman and Particle) to enable the robot to locate moving objects in its environment.
- Implement search algorithms and motion planning.
- Implement PID controls to regulate a robot's movement within an environment.
- Implement SLAM algorithms to enable a robot to map out an unknown environment.
- Test and troubleshoot a robot in realistic scenarios.
Autonomous Navigation & SLAM with ROS 2
21 Hours
ROS 2 (Robot Operating System 2) is an open-source framework designed to support the development of complex and scalable robotic applications.
This instructor-led, live training (online or onsite) is aimed at intermediate-level robotics engineers and developers who wish to implement autonomous navigation and SLAM (Simultaneous Localization and Mapping) using ROS 2.
By the end of this training, participants will be able to:
- Set up and configure ROS 2 for autonomous navigation applications.
- Implement SLAM algorithms for mapping and localization.
- Integrate sensors such as LiDAR and cameras with ROS 2.
- Simulate and test autonomous navigation in Gazebo.
- Deploy navigation stacks on physical robots.
Format of the Course
- Interactive lecture and discussion.
- Hands-on practice using ROS 2 tools and simulation environments.
- Live-lab implementation and testing on virtual or physical robots.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Developing Intelligent Bots with Azure
14 Hours
Azure Bot Service unites the capabilities of the Microsoft Bot Framework with Azure Functions, offering a robust platform for rapidly constructing intelligent bots.
Through this instructor-led live training, participants will investigate efficient methods for developing intelligent bots utilizing Microsoft Azure.
Upon completing the training, participants will be able to:
- Comprehend the fundamental concepts underpinning intelligent bots.
- Construct intelligent bots using cloud-based applications.
- Acquire practical expertise in the Microsoft Bot Framework, the Bot Builder SDK, and Azure Bot Service.
- Implement established bot design patterns in practical scenarios.
- Create and deploy their first intelligent bot using Microsoft Azure.
Target Audience
This course is tailored for developers, hobbyists, engineers, and IT professionals keen on bot development.
Course Format
The training integrates lectures and discussions with exercises, placing a strong emphasis on practical, hands-on application.
Computer Vision for Robotics: Perception with OpenCV & Deep Learning
21 Hours
OpenCV is an open-source library for computer vision that facilitates real-time image processing, while deep learning frameworks like TensorFlow offer the necessary tools for enabling intelligent perception and decision-making capabilities within robotic systems.
This instructor-led live training, available online or on-site, is designed for intermediate-level robotics engineers, computer vision specialists, and machine learning engineers aiming to leverage computer vision and deep learning techniques for robotic perception and autonomy.
Upon completion of this training, participants will be capable of:
- Building computer vision pipelines using OpenCV.
- Integrating deep learning models for object detection and recognition.
- Leveraging vision-based data for robotic control and navigation.
- Merging classical vision algorithms with deep neural networks.
- Deploying computer vision solutions on embedded and robotic platforms.
Course Format
- Interactive lectures and discussions.
- Practical exercises using OpenCV and TensorFlow.
- Live laboratory implementation on simulated or physical robotic systems.
Customization Options
- To arrange a customized training session for this course, please contact us.
Developing a Bot
14 Hours
A bot or chatbot functions as a digital assistant designed to automate user interactions across various messaging platforms, enabling faster task completion without requiring direct human intervention.
Through this instructor-led live training, participants will learn how to begin developing bots by building sample chatbots using dedicated bot development tools and frameworks.
By the conclusion of this training, participants will be able to:
- Understand the various uses and applications of bots
- Comprehend the complete bot development process
- Explore the different tools and platforms utilized in bot construction
- Construct a sample chatbot for Facebook Messenger
- Construct a sample chatbot using the Microsoft Bot Framework
Audience
- Developers interested in creating their own bot
Format of the course
- A mix of lectures, discussions, exercises, and extensive hands-on practice
Edge AI for Robots: TinyML, On-Device Inference & Optimization
21 Hours
Edge AI empowers artificial intelligence models to operate directly on embedded or resource-constrained devices, thereby minimizing latency and power usage while enhancing the autonomy and privacy of robotic systems.
This instructor-led training, available online or onsite, is designed for intermediate-level embedded developers and robotics engineers aiming to implement machine learning inference and optimization techniques directly on robotic hardware via TinyML and edge AI frameworks.
Upon completion of this training, participants will be capable of:
- Gaining a solid understanding of TinyML and edge AI fundamentals within robotics.
- Converting and deploying AI models for on-device inference.
- Optimizing models to improve speed, reduce size, and enhance energy efficiency.
- Integrating edge AI systems into robotic control architectures.
- Evaluating performance and accuracy in real-world scenarios.
Course Format
- Interactive lectures and discussions.
- Hands-on practice utilizing TinyML and edge AI toolchains.
- Practical exercises conducted on embedded and robotic hardware platforms.
Customization Options
- To arrange customized training for this course, please contact us.
Human-Centric Physical AI: Collaborative Robots and Beyond
14 Hours
This instructor-led, live training in Greece (online or onsite) is designed for intermediate-level learners who want to explore the role of collaborative robots (cobots) and other human-centric AI systems in modern workplaces.
By the end of this training, participants will be able to:
- Understand the principles of Human-Centric Physical AI and its applications.
- Explore the role of collaborative robots in enhancing workplace productivity.
- Identify and address challenges in human-machine interactions.
- Design workflows that optimize collaboration between humans and AI-driven systems.
- Promote a culture of innovation and adaptability in AI-integrated workplaces.
Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control
21 Hours
The course 'Human-Robot Interaction (HRI): Voice, Gesture & Collaborative Control' is a practical programme aimed at equipping participants with the skills to design and implement intuitive interfaces for effective human–robot communication. This training blends theoretical knowledge, design principles, and hands-on programming to create natural and responsive interaction systems utilising speech, gestures, and shared control methods. Participants will gain expertise in integrating perception modules, developing multimodal input systems, and designing robots that collaborate safely with humans.
Delivered by an instructor, this live training (available online or onsite) is tailored for beginners to intermediate-level professionals looking to design and implement human–robot interaction systems that improve usability, safety, and overall user experience.
Upon completion of this training, participants will be capable of:
- Grasping the foundational concepts and design principles of human–robot interaction.
- Creating voice-based control and response mechanisms for robotic systems.
- Implementing gesture recognition through computer vision techniques.
- Designing collaborative control systems that ensure safe and shared autonomy.
- Assessing HRI systems against criteria of usability, safety, and human factors.
Course Format
- Interactive lectures and live demonstrations.
- Practical coding and design exercises.
- Hands-on experimentation within simulation or real-world robotic environments.
Customisation Options for the Course
- For a bespoke training version of this course, please contact us to arrange your session.
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins
28 Hours
Industrial Robotics Automation: ROS-PLC Integration & Digital Twins is a practical course designed to bridge the gap between industrial automation and contemporary robotics frameworks. Participants will acquire the skills to integrate ROS-based robotic systems with PLCs for synchronized operations, while exploring digital twin environments to simulate, monitor, and optimize production processes. The curriculum emphasizes interoperability, real-time control, and predictive analysis through the use of digital replicas of physical systems.
This instructor-led, live training (available online or onsite) targets intermediate-level professionals aiming to develop practical expertise in connecting ROS-controlled robots with PLC environments and implementing digital twins to enhance automation and manufacturing efficiency.
Upon completion of this training, participants will be equipped to:
- Comprehend the communication protocols facilitating interaction between ROS and PLC systems.
- Implement real-time data exchange mechanisms between robots and industrial controllers.
- Develop digital twins for monitoring, testing, and process simulation purposes.
- Integrate sensors, actuators, and robotic manipulators within industrial workflows.
- Design and validate industrial automation systems using hybrid simulation environments.
Format of the Course
- Interactive lectures and architectural walkthroughs.
- Hands-on exercises focused on integrating ROS and PLC systems.
- Implementation of simulation and digital twin projects.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Artificial Intelligence (AI) for Mechatronics
21 Hours
This instructor-led live training in Greece (available online or onsite) is designed for engineers who wish to explore the application of artificial intelligence to mechatronic systems.
By the end of this training, participants will be able to:
- Gain an overview of artificial intelligence, machine learning, and computational intelligence.
- Understand the concepts of neural networks and different learning methods.
- Choose artificial intelligence approaches effectively for real-life problems.
- Implement AI applications in mechatronic engineering.
Multi-Robot Systems and Swarm Intelligence
28 Hours
Multi-Robot Systems and Swarm Intelligence is an advanced training course that delves into the design, coordination, and control of robotic teams, drawing inspiration from biological swarm behaviors. Participants will acquire skills in modeling interactions, implementing distributed decision-making, and optimizing collaboration across multiple agents. This course integrates theoretical knowledge with practical simulation exercises, preparing learners for applications in logistics, defense, search and rescue, and autonomous exploration.
This instructor-led, live training (available online or onsite) is designed for advanced-level professionals eager to design, simulate, and implement multi-robot and swarm-based systems using open-source frameworks and algorithms.
Upon completion of this training, participants will be able to:
- Grasp the principles and dynamics of swarm intelligence and cooperative robotics.
- Design communication and coordination strategies for multi-robot systems.
- Implement distributed decision-making and consensus algorithms.
- Simulate collective behaviors such as formation control, flocking, and coverage.
- Apply swarm-based techniques to real-world scenarios and optimization problems.
Format of the Course
- Advanced lectures featuring algorithmic deep dives.
- Hands-on coding and simulation exercises in ROS 2 and Gazebo.
- A collaborative project applying swarm intelligence principles.
Course Customization Options
- To request a customized training for this course, please contact us to make arrangements.
Smart Robots for Developers
84 Hours
A Smart Robot represents an Artificial Intelligence (AI) system capable of learning from its surroundings and past experiences, thereby enhancing its capabilities based on that acquired knowledge. These intelligent entities can collaborate with humans, working alongside them and observing their behavior to adapt. Beyond performing manual labor, Smart Robots are equipped to handle complex cognitive tasks. It is important to note that Smart Robots are not limited to physical hardware; they can also exist purely as software applications within a computer, operating without moving parts or direct physical interaction with the world.
In this instructor-led, live training, participants will explore the various technologies, frameworks, and techniques required to program different types of mechanical Smart Robots. The course culminates in participants applying this knowledge to complete their own Smart Robot projects.
The curriculum is structured into 4 distinct sections. Each section spans three days, featuring lectures, interactive discussions, and hands-on robot development within a live laboratory environment. To ensure practical mastery, each section concludes with a practical, hands-on project, allowing participants to practice and demonstrate their newly acquired skills.
The hardware targeted in this course will be simulated in 3D using specialized simulation software. Programming for the robots will utilize the open-source ROS (Robot Operating System) framework, along with C++ and Python.
Upon completion of this training, participants will be able to:
- Grasp the core concepts underpinning robotic technologies
- Understand and manage the interaction between software and hardware within a robotic system
- Comprehend and implement the software components that form the foundation of Smart Robots
- Construct and operate a simulated mechanical Smart Robot capable of seeing, sensing, processing, grasping, navigating, and interacting with humans via voice
- Enhance a Smart Robot's capacity to execute complex tasks through the application of Deep Learning
- Test and troubleshoot a Smart Robot within realistic scenarios
Audience
- Developers
- Engineers
Format of the course
- A blend of lectures, discussions, exercises, and intensive hands-on practice
Note
- To customize any aspect of this course (such as the programming language or robot model), please contact us to make arrangements.
Smart Robotics in Manufacturing: AI for Perception, Planning, and Control
21 Hours
Intelligent robotics involves embedding artificial intelligence into robotic systems to enhance perception, decision-making capabilities, and autonomous control.
This instructor-led live training, available either online or onsite, is designed for advanced robotics engineers, systems integrators, and automation leaders who aim to implement AI-driven perception, planning, and control within smart manufacturing settings.
Upon completion of this training, participants will be able to:
- Comprehend and apply AI methodologies for robotic perception and sensor fusion.
- Create motion planning algorithms for both collaborative and industrial robots.
- Implement learning-based control strategies to enable real-time decision-making.
- Seamlessly integrate intelligent robotic systems into smart factory workflows.
Course Format
- Interactive lectures and discussions.
- Extensive exercises and practical application.
- Hands-on implementation within a live-lab environment.
Customization Options for the Course
- To arrange a tailored training session for this course, please get in touch with us.