As human beings, our most critical external stimuli - from our communities, our friends, and even our family - have been disrupted or blocked entirely since the COVID-19 pandemic. In tandem with the disruptions to our normal daily routines, it has become an increasing struggle to maintain our physical fitness and emotional well-being in an organized manner.
Many individuals lack a personal fitness schedule, and fewer still have access to personal trainers.
But when our human connections must grow distant for the foreseeable future, artificial intelligence can temporarily fill the gap.
Russell can take the role of an AI-based trainer that organizes various exercises for users. Among other features, Russell is able to analyze a user's daily fitness data and help the user increase their endurance and stretch their limits.
This reinforces our collective attention towards fitness both in the COVID-19 context, where many lack the motivation to stay in shape, and outside the pandemic, where we strive towards the greater goal of equitable access to personal well-being.
In particular, our project invokes techniques from two separate domains: natural language processing and computer vision.
On the natural language processing side, our fitness tracker uses the open-source spaCy library to extract structured fitness data from natural language input. Specifically, a user can write a normal English sentence, which is processed by spaCy to record the quantity of a certain exercise performed.
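To illustrate the idea, here is a minimal sketch of number-plus-exercise extraction using spaCy's blank English pipeline (tokenizer only, so no model download is needed). The `parse_workout` helper is hypothetical, not the project's actual code, and real input would need more robust handling of number words and phrasing.

```python
import spacy

# Blank English pipeline: tokenization only, no trained model required.
nlp = spacy.blank("en")

def parse_workout(sentence):
    """Hypothetical helper: extract (count, exercise) pairs like (20, 'pushups')."""
    doc = nlp(sentence)
    results = []
    for i, token in enumerate(doc):
        # Treat a digit token followed by a word as "<count> <exercise>".
        if token.like_num and token.text.isdigit() and i + 1 < len(doc) and doc[i + 1].is_alpha:
            results.append((int(token.text), doc[i + 1].text.lower()))
    return results

print(parse_workout("I did 20 pushups and 15 squats today"))
# → [(20, 'pushups'), (15, 'squats')]
```

A full pipeline could instead use spaCy's part-of-speech tags or named-entity recognition to handle spelled-out numbers ("twenty pushups") and multi-word exercise names.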
On the computer vision side, our emotions tracker uses the OpenCV library to detect a user’s emotion from a webcam snapshot of their face. We utilize cascade classifiers called Haar cascades, which we train ourselves, to gauge the presence of each emotion in the recorded image.
Unfortunately, we did not have time to demonstrate the chatbot feature. In brief, it applies sentiment analysis to the text the user types, detecting emotional cues in the input and responding with a message that either cheers up a sad user or encourages a happy user to stay positive.
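A toy version of this sentiment-to-response logic might look like the following. The word lists and the `respond` function are hypothetical stand-ins; the actual chatbot would use a proper sentiment model rather than keyword matching.

```python
# Tiny illustrative sentiment lexicons (not the project's actual model).
NEGATIVE = {"sad", "tired", "upset", "unhappy", "stressed"}
POSITIVE = {"happy", "great", "good", "excited", "proud"}

def respond(message):
    """Hypothetical helper: pick a reply based on simple sentiment cues."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "Cheer up! A quick workout might lift your mood."
    if words & POSITIVE:
        return "That's great, keep that energy going!"
    return "Tell me more about how you're feeling."

print(respond("I feel sad today"))
# → Cheer up! A quick workout might lift your mood.
```

A production version would score the whole sentence (for example with a trained sentiment classifier) instead of checking individual words, so negations like "not happy" are handled correctly.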
Another feature we did not demonstrate is emotion recognition: depending on whether the user appears happy, sad, angry, and so on, it offers an uplifting or encouraging response. Because this function overlaps with the chatbot, with more time we would have added a toggle between the two.
Future directions for this project include developing a more advanced chatbot model under the hood (particularly, one that distinguishes between different user intentions) and improving the emotions recognition feature through experimentation with other machine learning techniques, like convolutional neural networks.