SharkDetector

The SharkDetector project aimed to develop an algorithm for detecting falls, classifying fall types, and identifying head trauma. The model integrates YOLOv5, MoveNet, and MobileNetv2.

The detection and classification process follows these steps:

  1. Image Classification: At a fixed sampling interval, frames are classified to determine whether a person is present.

  2. Object Detection and Pose Extraction: If a person is detected, an object detection model localizes and crops the person, and a pose-estimation model extracts the person's pose (joint angles and positions) from the cropped image.

  3. Fall Detection and Classification: A trained RNN model then determines whether a fall occurred, the fall type, and whether head trauma is present.
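The cascade above can be sketched in plain Python. The classifier, detector, pose extractor, and RNN below are hypothetical stubs standing in for MobileNetv2, YOLOv5, MoveNet, and the trained RNN; the function names, frame format, and the hip-height threshold are illustrative assumptions, not the project's actual interfaces.

```python
from collections import deque

def classify_frame(frame):
    """Stage 1 stub: cheap image classification -- is a person present?"""
    return frame.get("person_present", False)

def detect_and_crop(frame):
    """Stage 2a stub: object detection -- return the person crop."""
    return frame.get("person_box")

def extract_pose(crop):
    """Stage 2b stub: pose estimation -- joint angles/positions."""
    return crop.get("joints", [])

def rnn_classify(pose_window):
    """Stage 3 stub: sequence model over a window of poses.
    Here a 'fall' is simply flagged when average hip height drops
    below an arbitrary threshold (0.3), purely for illustration."""
    heights = [p["hip_y"] for poses in pose_window for p in poses]
    avg = sum(heights) / len(heights)
    return {"fall": avg < 0.3, "fall_type": "forward" if avg < 0.3 else None}

def run_pipeline(frames, window=3):
    poses = deque(maxlen=window)  # sliding temporal context for the RNN
    results = []
    for frame in frames:
        if not classify_frame(frame):   # stage 1: skip frames with no person
            continue
        crop = detect_and_crop(frame)   # stage 2a: localize and crop
        poses.append(extract_pose(crop))  # stage 2b: pose from the crop
        if len(poses) == window:        # stage 3: enough temporal context
            results.append(rnn_classify(poses))
    return results
```

The point of the cascade is cost: the cheap classifier gates the expensive detector and pose model, so most frames exit at stage 1.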

Additional techniques, including quantization and pruning, keep the model lightweight enough for real-time deployment on camera hardware.
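A minimal sketch of what those two techniques do to a weight tensor, in pure Python: symmetric per-tensor int8 quantization (a simplification of what toolchains such as TensorFlow Lite perform) and magnitude pruning. These helper functions are illustrative assumptions, not the project's actual compression code.

```python
def quantize_int8(weights):
    """Symmetric quantization sketch: map floats to int8 codes
    using a single per-tensor scale derived from the max magnitude."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

def prune_by_magnitude(weights, sparsity=0.5):
    """Magnitude pruning sketch: zero the smallest fraction of weights.
    (Ties at the threshold may zero slightly more than requested.)"""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Quantization shrinks storage and speeds up integer arithmetic at a small accuracy cost; pruning introduces sparsity that hardware or runtimes can exploit, which is why the two are commonly combined for on-device inference.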

Figure: Model structure.
