Autonomous Robotic Arm with Machine Teaching, Cognitive Services and IoT

Experiment #204




Experiment #204 explores the possibilities of using Custom Vision Object Detection and Reinforcement Learning in a Machine Teaching approach to build an autonomous robotic system.


For this Proof of Concept, we will try to build a robotic arm. The main purpose of the project is to create an Artificial Intelligence capable of detecting objects automatically and manipulating the arm autonomously.



The system will detect objects inside its action radius and tell the robot how to rotate, grab and move things to reach a goal by calculating the best path with the fewest instructions.
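To make the path-planning idea concrete, here is a minimal sketch, assuming a heavily simplified arm: its pose is reduced to a discrete (rotation step, extension step) pair, and each instruction moves one step. A breadth-first search then yields the shortest instruction sequence from the current pose to the pose over a detected object. The instruction names and the `plan_path` helper are hypothetical, not part of the actual project code.

```python
from collections import deque

# Hypothetical discrete instruction set: each entry maps an instruction name
# to its effect on (rotation step, extension step).
INSTRUCTIONS = {
    "rotate_left":  (-1, 0),
    "rotate_right": (1, 0),
    "retract":      (0, -1),
    "extend":       (0, 1),
}

def plan_path(start, goal, rotation_steps=12, extension_steps=4):
    """Return the shortest list of instructions taking the arm from
    the start pose to the goal pose, or None if unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (rot, ext), path = queue.popleft()
        if (rot, ext) == goal:
            return path
        for name, (d_rot, d_ext) in INSTRUCTIONS.items():
            nxt = ((rot + d_rot) % rotation_steps, ext + d_ext)
            if 0 <= nxt[1] < extension_steps and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None
```

Because BFS explores poses in order of distance, the first time it reaches the goal it has necessarily used the fewest instructions, which matches the project's stated optimization goal.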



First, we will print some 3D pieces and assemble the servo motors and electronics on them. Then, we will create an Azure Custom Vision project to build and train an Object Detection model. After that, we will use Machine Teaching and Reinforcement Learning to build and train a model to manage the autonomous system. Finally, we will put everything together in a Windows IoT Core application and deploy it on a Raspberry Pi device attached to the robotic arm.
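To illustrate the Reinforcement Learning step, here is a minimal tabular Q-learning sketch, not the project's actual model: the arm is reduced to a hypothetical one-dimensional line of positions, the two actions move it left or right, and the agent earns a reward for reaching the position of the detected object. All names and parameters here are illustrative assumptions.

```python
import random

def train(num_positions=5, goal=4, episodes=500,
          alpha=0.5, gamma=0.9, epsilon=0.1):
    """Learn a Q-table mapping (position, action) to expected return.
    Action 0 moves the arm left, action 1 moves it right."""
    q = [[0.0, 0.0] for _ in range(num_positions)]
    random.seed(0)  # deterministic for reproducibility
    for _ in range(episodes):
        state = 0
        while state != goal:
            # Epsilon-greedy: mostly exploit the best known action,
            # occasionally explore a random one.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            nxt = max(0, min(num_positions - 1,
                             state + (1 if action == 1 else -1)))
            # Small step penalty encourages short paths; +1 at the goal.
            reward = 1.0 if nxt == goal else -0.01
            # Standard Q-learning update.
            q[state][action] += alpha * (
                reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q_table = train()
```

After training, the greedy policy (picking the higher-valued action in each state) moves the arm toward the goal in every state, which is the behavior the real system would need, only over a far richer state space of poses and detected objects.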



Using Artificial Intelligence to manage a robot has many advantages. The system optimizes its movements, saving energy and increasing speed. Also, a well-calibrated robotic arm is more accurate than a human at manual, repetitive tasks.



Step by Step

1. How to assemble the Robotic Arm

First, we imported the .STL files into PrusaSlicer to generate the .GCODE files for the 3D printer. We added supports to the model for better printing of some parts:


If you want to see the whole process just go to this post.
