Intelligent Document Search using Semantic Indexing

This is a demo of an intelligent resume search we built using Latent Semantic Indexing (LSI) to build a semantically ordered index of 100 resumes. From this index, we can query resumes by keyword search or by providing a reference resume to match. The top 5 resumes with the highest similarity scores are returned.
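
For flavor, here is a minimal sketch of LSI-style search using scikit-learn's TruncatedSVD over a TF-IDF matrix; the library choice, toy resume texts, and component count are illustrative assumptions, not the demo's actual implementation.

```python
# Minimal LSI search sketch (illustrative, not the production code).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

resumes = [
    "Python developer with machine learning and NLP experience",
    "Java backend engineer, Spring and microservices",
    "Data analyst skilled in SQL, Excel and reporting dashboards",
]

# Build the TF-IDF term-document matrix, then project it into a low-rank
# "semantic" space via truncated SVD -- this is the core of LSI.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(resumes)
lsi = TruncatedSVD(n_components=2)      # ~100-300 components for 100 real resumes
index = lsi.fit_transform(tfidf)

def search(query_text, top_k=5):
    """Query by keywords or by pasting a whole reference resume."""
    query_vec = lsi.transform(vectorizer.transform([query_text]))
    scores = cosine_similarity(query_vec, index).ravel()
    return sorted(enumerate(scores), key=lambda s: -s[1])[:top_k]

print(search("python machine learning engineer"))
```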

FAQ Chatbot

We trained an NLP model to automatically extract intents from the FAQs on our website, then trained a chatbot on these intents.
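
A toy sketch of the intent side of this pipeline, assuming a simple TF-IDF plus logistic regression classifier; the intent names and training phrases below are made-up examples, not our actual FAQ data.

```python
# Rough sketch of intent classification over FAQ questions (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

faq_phrases = [
    "What are your opening hours?", "When are you open?",
    "How do I reset my password?", "I forgot my password",
]
intents = ["opening_hours", "opening_hours", "password_reset", "password_reset"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(faq_phrases, intents)

# At chat time, the predicted intent selects a canned answer from the FAQ.
print(clf.predict(["what time do you open on weekends"]))   # -> ['opening_hours']
```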

Gesture Recognition – Biblical Verse

We trained an LSTM model with gestures from John 3:16 and John 14:6. Here is a demo of the model inference: our colleague performs the gestures for the entire John 14:6 verse, and the model correctly predicts 6 out of 7 gestures.

ASL Gesture Recognition – 8 Gestures

We trained a classifier on 8 classes: ‘Yes’, ‘No’, ‘Hello’, ‘Thank You’, ‘Sorry’, ‘What is your name’, ‘Are you deaf’, and ‘Nice to meet you’. For each frame, we extracted six (X, Y) coordinates from the shoulder, elbow, and wrist positions reported by OpenPose to build a feature vector. From each 2-second training video, we extracted 12 frames and assembled them into a feature matrix. Using these feature matrices from a total of 400 training videos, we trained an LSTM model as the classifier.
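
A minimal sketch of such an LSTM classifier in Keras, assuming the OpenPose keypoints have already been extracted; the layer sizes, training hyperparameters, and random stand-in data are illustrative.

```python
# Minimal LSTM gesture classifier over keypoint sequences (illustrative sketch).
import numpy as np
from tensorflow import keras

# 12 frames per clip, 12 features per frame (6 keypoints x (X, Y)), 8 gesture classes
NUM_FRAMES, NUM_FEATURES, NUM_CLASSES = 12, 12, 8

model = keras.Sequential([
    keras.layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    keras.layers.LSTM(64),                      # summarizes the 12-frame motion sequence
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# X: (400 clips, 12 frames, 12 coordinates), y: one gesture label per clip.
# Random stand-ins here; the real data comes from the OpenPose extraction step.
X = np.random.rand(400, NUM_FRAMES, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=400)
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.2)
```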

Human Activity Recognition using Vision Intelligence

This is a demo of a computer vision algorithm that recognizes human activity and gestures. Neural networks are software algorithms that loosely mimic the human brain, and our team used recent advances in training them to build our human activity recognition models. We had a lot of fun training these models and learned a lot along the way: how to optimize them to run on inexpensive hardware, and how to make them small enough to run locally without sending the video feed to an external device, which greatly improves privacy. These models are useful in many situations, such as productivity measurement and injury prevention. Our goal is to help our customers automate safety and monitoring tasks to save money and improve productivity.
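
As one illustration of what shrinking a model for inexpensive, offline hardware can look like, here is a sketch of post-training quantization with TensorFlow Lite; this particular approach and the model filename are assumptions for the example, not necessarily the pipeline used in the demo.

```python
# Illustrative sketch only: post-training quantization with TensorFlow Lite,
# one common way to shrink a trained Keras model for inexpensive edge hardware.
import tensorflow as tf

model = tf.keras.models.load_model("activity_model.h5")    # hypothetical trained model file

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enables weight quantization
tflite_model = converter.convert()

with open("activity_model.tflite", "wb") as f:
    f.write(tflite_model)                                   # typically much smaller than float32
```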

Personal Protective Equipment Detection including goggles

Personal Protective Equipment (PPE) such as vests, helmets, and goggles is mandatory in certain hazardous workplaces. We built a vision-based AI model that can monitor PPE compliance by employees. Our team developed this model using the latest advances in training machine learning models with neural networks, and we optimized it to run on inexpensive embedded systems. We connect a camera to this device and run our model to track PPE compliance, turning the camera into a smart camera capable of detecting PPE. The device can be installed at entrances to monitor PPE adherence as employees enter hazardous areas, and it can easily be configured to send notifications of any violations. Our team designed this device with employee privacy in mind: it doesn't record or transmit video frames. In fact, it works in stand-alone mode and doesn't have to be connected to any network.
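
To make the compliance logic concrete, here is a rough sketch with the detector abstracted behind a placeholder; the class names, required-PPE set, and notification hook are assumptions for illustration, not the device's actual code.

```python
# Sketch of the PPE compliance check around an object detector (illustrative only).
REQUIRED_PPE = {"helmet", "vest", "goggles"}

def detect(frame):
    """Placeholder for the on-device detector; would return labels such as
    {'person', 'helmet', 'vest'} for the current frame."""
    return {"person", "helmet", "vest"}          # stubbed result for illustration

def notify(message):
    # In the real device this could trigger a configurable alert;
    # no video frames are recorded or transmitted.
    print(message)

def check_frame(frame):
    labels = detect(frame)
    if "person" in labels:
        missing = REQUIRED_PPE - labels
        if missing:
            notify(f"PPE violation at entrance: missing {', '.join(sorted(missing))}")

check_frame(frame=None)      # -> "PPE violation at entrance: missing goggles"
```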

Person Detection with distance from camera

A demo of a Caffe-based person detector that also estimates the person's distance from the camera. Tested on different moving platforms such as a forklift and a car, and achieved detection at more than 50 ft. The whole setup runs on a Raspberry Pi with an attached Movidius NCS.
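
A rough sketch of this kind of pipeline, assuming the standard MobileNet-SSD Caffe model loaded through OpenCV's DNN module and a simple pinhole-camera distance estimate; the model files, focal length, and assumed person height are placeholders, not calibrated values from the demo.

```python
# Person detection plus a pinhole-camera distance estimate (illustrative sketch).
import cv2

PERSON_CLASS_ID = 15            # "person" in the standard MobileNet-SSD class list
FOCAL_LENGTH_PX = 600.0         # camera focal length in pixels (needs calibration)
PERSON_HEIGHT_FT = 5.5          # assumed real-world height of a person

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")
frame = cv2.imread("frame.jpg")                 # placeholder input frame
h, w = frame.shape[:2]

blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)), 0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()      # shape (1, 1, N, 7): [.., class_id, confidence, x1, y1, x2, y2]

for i in range(detections.shape[2]):
    class_id, confidence = int(detections[0, 0, i, 1]), detections[0, 0, i, 2]
    if class_id == PERSON_CLASS_ID and confidence > 0.5:
        y1, y2 = detections[0, 0, i, 4] * h, detections[0, 0, i, 6] * h
        box_height_px = max(y2 - y1, 1)
        # Similar triangles: distance = real height * focal length / pixel height
        distance_ft = PERSON_HEIGHT_FT * FOCAL_LENGTH_PX / box_height_px
        print(f"person at ~{distance_ft:.1f} ft (confidence {confidence:.2f})")
```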

Web Automation including CAPTCHA using UIPATH

A demo of using UIPATH to automate a browser-based workflow, including interpreting a CAPTCHA to proceed to the next screen. The flow looks up and downloads land ownership details for a plot of land from a local government web portal. This is a routine task that people wait in daily queues at government offices for; it took about an hour to automate and takes about a minute to execute. This demo shows the power and versatility of front-end automation using UIPATH.

Activity Recognition using Conv3D Neural Net

This is a demo of training a 3D convolutional neural network (Conv3D) to recognize activities such as smoking, washing hands, and talking on a cell phone. We used curated YouTube video clips as training data for this model. Use cases for this model include hand hygiene compliance monitoring in hospitals and monitoring of safety and privacy violations in industrial workplaces.
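
A minimal sketch of what such a Conv3D classifier can look like in Keras; the clip length, resolution, class count, and layer sizes are illustrative assumptions, not the trained model.

```python
# Minimal Conv3D activity classifier over short video clips (illustrative sketch).
from tensorflow import keras

# e.g. 16-frame clips at 112x112 RGB; 3 classes such as smoking / hand-washing / phone
FRAMES, HEIGHT, WIDTH, CHANNELS, NUM_CLASSES = 16, 112, 112, 3, 3

model = keras.Sequential([
    keras.layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
    keras.layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu"),  # spatio-temporal filters
    keras.layers.MaxPooling3D(pool_size=(1, 2, 2)),
    keras.layers.Conv3D(64, kernel_size=(3, 3, 3), activation="relu"),
    keras.layers.MaxPooling3D(pool_size=(2, 2, 2)),
    keras.layers.GlobalAveragePooling3D(),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```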