We trained an LSTM model on gestures from John 3:16 and John 14:6. Here is a demo of the model's inference: our colleague performs the gestures for the entire John 4:16 verse, and the model correctly predicts 6 out of 7 gestures.
We trained 8 classes: 'Yes', 'No', 'Hello', 'Thank You', 'Sorry', 'What is your name', 'Are you deaf', and 'Nice to meet you'. From each frame, we extracted six (X, Y) coordinates — the shoulder, elbow, and wrist positions detected by OpenPose — to build a per-frame feature vector. From each 2-second training video we sampled 12 frames and stacked their vectors into a feature matrix. Using 400 training videos, we trained an LSTM model on these feature matrices to build the classifier.
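The feature-extraction step above can be sketched as follows. This is a minimal illustration, not our actual pipeline code; the joint names, ordering, and helper functions are assumptions for the example, and the dummy keypoints stand in for real OpenPose output.

```python
import numpy as np

# Assumed joint order (six keypoints from shoulders, elbows, wrists).
JOINTS = ["l_shoulder", "r_shoulder", "l_elbow", "r_elbow", "l_wrist", "r_wrist"]
FRAMES_PER_VIDEO = 12  # 12 frames sampled from each 2-second clip

def frame_to_feature_vector(keypoints):
    """Flatten a dict of joint -> (x, y) into one 12-value row."""
    return np.array([c for j in JOINTS for c in keypoints[j]], dtype=np.float32)

def video_to_feature_matrix(frames):
    """Stack the per-frame vectors into the matrix fed to the LSTM."""
    assert len(frames) == FRAMES_PER_VIDEO
    return np.stack([frame_to_feature_vector(f) for f in frames])

# Dummy data standing in for OpenPose detections on one training video:
dummy_frames = [{j: (0.5, 0.5) for j in JOINTS} for _ in range(FRAMES_PER_VIDEO)]
X = video_to_feature_matrix(dummy_frames)
print(X.shape)  # (12, 12): one training sample, 12 timesteps x 12 features
```

Each video thus becomes a (timesteps, features) matrix, which is the natural input shape for an LSTM sequence classifier.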
This is a demo of our computer vision algorithm that calculates a person's joint angles and computes their ergonomic risk score, known as the RULA (Rapid Upper Limb Assessment) score. It can be used for posture analysis and injury prevention.
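The core geometric step — computing a joint angle from three detected keypoints — can be sketched like this. The angle function is standard vector geometry; the banding thresholds in `upper_arm_score` are illustrative approximations of RULA-style scoring, not our exact scoring tables.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c (e.g. shoulder-elbow-wrist)."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def upper_arm_score(flexion_deg):
    """Illustrative RULA-style banding of upper-arm flexion (approximate thresholds)."""
    if flexion_deg < 20:
        return 1
    if flexion_deg < 45:
        return 2
    if flexion_deg < 90:
        return 3
    return 4

# A right-angle elbow: shoulder above the elbow, wrist straight out.
angle = joint_angle((0, 1), (0, 0), (1, 0))
print(round(angle))  # 90
```

Per-joint angle scores like these are then combined into the overall RULA risk score.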
This is a demo of training a 3D Convolutional Neural Network to recognize activities such as smoking, washing hands, and talking on a cell phone. We used curated YouTube video clips as training data for this model. Use cases for this model include hand hygiene compliance monitoring in hospitals, and monitoring of safety and privacy violations in industrial workplaces.
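The key idea of a 3D CNN is that its filters convolve over time as well as space, so motion patterns across frames are learned directly. Below is a minimal numpy sketch of a single 3D convolution over a video clip; a real model would stack many such layers in a deep-learning framework, and the clip and kernel sizes here are arbitrary for illustration.

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Naive single-channel 3D convolution with 'valid' padding over (T, H, W)."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value sees a small spatiotemporal patch of the clip.
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

clip = np.random.rand(16, 32, 32)  # 16 frames of 32x32 grayscale video
kernel = np.random.rand(3, 3, 3)   # one 3x3x3 spatiotemporal filter
feat = conv3d_valid(clip, kernel)
print(feat.shape)  # (14, 30, 30): shrinks by kernel_size - 1 in each dimension
```

Because the kernel spans 3 frames, each activation responds to short motion patterns, which is what lets the network distinguish activities like hand-washing from phone use.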