On-Board Facial Recognition Using a Bitcraze Crazyflie 2.0 Drone

The objective of this project was to develop a real-time facial recognition system running directly on a Bitcraze Crazyflie 2.0 drone equipped with the AI-Deck expansion module. Our goal was to train a neural network for face identification and deploy it to the AI-Deck's GAP8 processor so that inference could run fully on-board during flight.

Development Challenges and Approach

Throughout the project, we encountered multiple technical issues with the GAP8 deployment toolchain, specifically NNTool and the AutoTiler. Despite our efforts, we were unable to flash our trained neural network onto the AI-Deck. To continue evaluating the system, we adopted an alternative approach: running the trained model on a local computer that received the real-time Wi-Fi video stream from the drone's camera. This allowed us to perform live testing, measure recognition performance, and verify system functionality.

Dataset, Preprocessing, and Model Setup

We used the VGGFace2 dataset and added images of ourselves to enable real-time recognition. To match the AI-Deck camera characteristics, we preprocessed all images by resizing them and converting them to grayscale. Using Bitcraze's Wi-Fi streaming example as a base, we modified the code so that a frame was extracted every 10 seconds, passed through our facial-recognition model, and annotated with the classification result.
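The sketch below shows the shape of this off-board loop. It is a minimal illustration, not our exact code: the model file name (face_model.h5), the class labels, and the 96x96 input size are hypothetical, and the webcam capture (cv2.VideoCapture(0)) merely stands in for the AI-Deck's Wi-Fi stream, which in practice is read with code adapted from Bitcraze's Wi-Fi image-streamer viewer.

```python
# Minimal sketch of the off-board inference loop (assumed names and sizes).
import time
import cv2
import numpy as np
from tensorflow.keras.models import load_model

MODEL_PATH = "face_model.h5"                   # hypothetical model file
LABELS = ["person_a", "person_b", "unknown"]   # hypothetical class labels
INPUT_SIZE = (96, 96)                          # assumed model input resolution
CLASSIFY_EVERY_S = 10.0                        # one classification every 10 seconds


def preprocess(frame):
    """Match the AI-Deck camera: grayscale, resized, scaled to [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, INPUT_SIZE)
    return small.astype(np.float32)[None, :, :, None] / 255.0


def main():
    model = load_model(MODEL_PATH)
    # Stand-in frame source; replace with the AI-Deck Wi-Fi stream reader.
    cap = cv2.VideoCapture(0)
    last_label, last_time = "waiting...", 0.0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        now = time.time()
        if now - last_time >= CLASSIFY_EVERY_S:
            probs = model.predict(preprocess(frame), verbose=0)[0]
            last_label = f"{LABELS[int(np.argmax(probs))]} ({probs.max():.2f})"
            last_time = now
        # Annotate the live view with the most recent classification result.
        cv2.putText(frame, last_label, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
        cv2.imshow("face recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()
```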

Neural Networks and Performance Tradeoff

To explore the limitations of the AI-Deck, we trained two neural networks:
– Large Model (~91% validation accuracy): too large for the AI-Deck's memory and compute budget, so it could not be deployed on the device.
– Small Model (~70% validation accuracy): lightweight enough to fit on the AI-Deck in principle, but flashing still failed due to the toolchain issues described above.

Working with these two models made the tradeoff between model size and performance concrete: larger, deeper models achieve higher accuracy but exceed the constraints of embedded hardware, while smaller models fit on the device but deliver reduced accuracy.
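To make the tradeoff tangible, the following sketch defines two Keras networks of roughly the kind we compared. The layer counts, filter widths, input size, and number of identities are illustrative assumptions, not the exact architectures we trained; printing the parameter counts simply shows how quickly model size grows with depth and width relative to an embedded memory budget.

```python
# Hypothetical large-vs-small CNN comparison (illustrative architectures only).
from tensorflow.keras import layers, models

N_CLASSES = 10            # assumed number of identities
INPUT_SHAPE = (96, 96, 1) # assumed grayscale input size


def build_large():
    """Deeper, wider network: higher accuracy, but too large for the GAP8."""
    return models.Sequential([
        layers.Input(INPUT_SHAPE),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(256, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])


def build_small():
    """Compact network sized for an embedded memory budget; lower accuracy."""
    return models.Sequential([
        layers.Input(INPUT_SHAPE),
        layers.Conv2D(8, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])


if __name__ == "__main__":
    for name, builder in [("large", build_large), ("small", build_small)]:
        print(f"{name} model parameters: {builder().count_params():,}")
```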

Conclusion

Even though full on-device deployment was not achieved, this project gave us extensive hands-on experience with embedded machine learning, working under tight hardware constraints, and debugging a complex deployment toolchain. We developed practical skills and gained valuable insight into real-time AI systems. Most importantly, we enjoyed working on a challenging and engaging project.