What is our project all about?
The project aims to demonstrate an algorithm by which a drone or similar mobile platform can move autonomously in 3D space and avoid the static obstacles in its path.
The algorithm:
With video from just one camera we can only measure the apparent size of each obstacle in each frame; there is no direct depth information. This observation is what the algorithm is built on.
We measure the size of an obstacle in the current frame and calculate how much it "grew" since the previous frame.
That alone gives us neither the real size nor the speed of the object.
So we defined a "danger rate": the obstacle that grows fastest is the most dangerous, and the one that grows slowest is the least dangerous.
How does the algorithm work?
The input: two frames. The output: the direction for the next step.
The algorithm compares the size of each object between the two frames.
If the algorithm identifies an object that "grows", it concludes that the object is dangerous.
We calculate the percentage of growth for each object and tag the one that grows the most as the most dangerous.
From the direction of the object we know which way to turn in order to avoid it.
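A minimal Python sketch of the danger rate, using made-up pixel widths (the real implementation runs as a Simulink flow):

```python
def danger_rate(prev_width, curr_width):
    """Relative growth of an object's apparent width between two frames.

    A higher value means the object is approaching faster and is
    therefore more dangerous. Widths are measured in pixels.
    """
    return (curr_width - prev_width) / prev_width

# Hypothetical widths (pixels) of three poles in two consecutive frames.
prev_widths = {"red": 40, "green": 25, "blue": 60}
curr_widths = {"red": 48, "green": 26, "blue": 63}

rates = {name: danger_rate(prev_widths[name], curr_widths[name])
         for name in prev_widths}
most_dangerous = max(rates, key=rates.get)
print(rates)           # {'red': 0.2, 'green': 0.04, 'blue': 0.05}
print(most_dangerous)  # 'red': it grows fastest, so it is the biggest threat
```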
Assumptions:
Our obstacles are colorful poles:
The image processing is lighter – we can use only a few rows of pixels instead of the whole frame.
We look for a specific colour, which makes isolating the poles easier.
We can easily calculate the width by counting pixels in a single row (see the sketch after this list).
The background does not contain the poles' colours, which makes the isolation even easier.
All the poles are static, so they all move toward the drone at the same relative speed.
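Under these assumptions, isolating a pole and measuring its width needs only one pixel row. A sketch of the idea in Python with OpenCV; the HSV range below is a placeholder for a red pole, and the real thresholds would have to be tuned to the actual pole colours:

```python
import cv2
import numpy as np

# Placeholder HSV range for a "red" pole; tune to the real colours.
RED_LO = np.array([0, 120, 70])
RED_HI = np.array([10, 255, 255])

def pole_width(frame_bgr, row):
    """Width in pixels of the red pole on one row of the frame.

    Since the background never contains the pole colours, counting the
    matching pixels on a single row directly gives the pole's width.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv[row:row + 1, :], RED_LO, RED_HI)  # one row only
    return int(np.count_nonzero(mask))

def pole_center(frame_bgr, row):
    """Horizontal centre (column index) of the pole on that row, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv[row:row + 1, :], RED_LO, RED_HI)
    cols = np.flatnonzero(mask)
    return float(cols.mean()) if cols.size else None
```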
The algorithm, step by step:
Take two frames.
Isolate each pole by its colour.
Calculate the width of each pole (the wider, the closer).
Calculate the growth rate in order to find how great the threat is.
Calculate the direction of each pole.
According to both direction and growth rate, turn left, slightly left, slightly right, or right to avoid the pole (see the decision sketch after this list).
Take a new frame and repeat the algorithm.
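The last navigation step can be sketched as a simple decision rule. The threshold and the four commands below are illustrative; the pole's direction is taken from its horizontal position relative to the image centre:

```python
def steer(pole_center_x, image_width, growth_rate, danger_threshold=0.1):
    """Choose a turn command from the pole's direction and growth rate.

    pole_center_x: column of the pole's centre in the current frame.
    growth_rate:   relative width growth between the two frames.
    Returns one of: 'left', 'slight left', 'slight right', 'right'.
    """
    pole_on_right = pole_center_x > image_width / 2
    urgent = growth_rate > danger_threshold  # fast growth = close = urgent
    if pole_on_right:
        # The pole is to the right, so escape to the left.
        return "left" if urgent else "slight left"
    # The pole is to the left, so escape to the right.
    return "right" if urgent else "slight right"
```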
Finding the best platform for the mission:
After failed attempts to easily control both the Parrot 2.0 and the Bebop 2 (both drones), we finally found a convenient platform for our algorithm – the TurtleBot, a ground (2D) robot that communicates easily with the ROS system.
Our system now included: the TurtleBot, ROS running on a Linux machine, Wi-Fi communication between the robot and ROS, and a Simulink model that runs the algorithm and sends the commands to ROS.
Simulink has built-in support for ROS, which makes the communication easy.
After setting the robot's IP address in both ROS and Simulink, we used the built-in blocks that Simulink supplies for ROS communication.
We use those blocks both for taking a picture and for sending navigation commands.
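Our communication runs through Simulink's graphical ROS blocks, so it cannot be shown as text; the rospy sketch below performs an equivalent exchange. The topic names are common TurtleBot defaults and may differ on the actual robot:

```python
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

bridge = CvBridge()
latest_frame = None  # newest camera frame, as a BGR numpy array

def on_image(msg):
    # Convert the incoming ROS image to OpenCV's BGR format and keep it.
    global latest_frame
    latest_frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")

rospy.init_node("pole_avoider")
rospy.Subscriber("/camera/rgb/image_raw", Image, on_image)
cmd_pub = rospy.Publisher("/cmd_vel_mux/input/navi", Twist, queue_size=1)

def send_turn(angular_z, linear_x=0.2):
    """Publish one navigation command: forward speed plus a turn rate."""
    cmd = Twist()
    cmd.linear.x = linear_x
    cmd.angular.z = angular_z
    cmd_pub.publish(cmd)
```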
Our system:
Simulink asks ROS for a frame.
ROS passes the request to the TurtleBot, and after capturing the frame the TurtleBot sends it back to Simulink through ROS.
Simulink runs the algorithm on the previous frame and the new one it just received from the TurtleBot's new position.
Based on the algorithm's result, Simulink sends a navigation command to the TurtleBot.
ROS delivers the command and the TurtleBot moves to its new position.
This flow repeats until the robot reaches the target.
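Stitching the hypothetical helpers from the sketches above into one loop, the whole flow could look roughly like this (the stopping condition is left to the mission logic):

```python
def run(row=240, image_width=640):
    """Sense-plan-act loop built from the sketches above (illustrative)."""
    prev = None
    rate = rospy.Rate(10)  # ten control steps per second
    while not rospy.is_shutdown():
        curr = latest_frame                # newest frame delivered by ROS
        if prev is not None and curr is not None:
            w_prev = pole_width(prev, row)
            w_curr = pole_width(curr, row)
            if w_prev > 0 and w_curr > 0:  # the pole is visible in both
                growth = danger_rate(w_prev, w_curr)
                command = steer(pole_center(curr, row), image_width, growth)
                # Map the textual decision to a turn rate and publish it.
                turn = {"left": 1.0, "slight left": 0.4,
                        "slight right": -0.4, "right": -1.0}[command]
                send_turn(turn)
        prev = curr
        rate.sleep()
```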
