T.O.O.F.A.N.

Enabling AI tools to be applied in the real world. Our project centralizes a variety of physical and virtual tools which can be deployed as backup in areas that have been hit by natural disasters.


Categories

  • HawkHacks Global Category


Description

We developed an AI-powered Arduino car equipped with danger zone detection and a helper chat feature. The car uses sensors and machine learning algorithms to identify and respond to hazardous conditions. Real-time communication is facilitated through the helper chat, providing assistance and information to users. This innovative project showcases the integration of advanced technology in a compact, versatile platform.


Inspiration

As individuals from countries prone to a diverse array of natural disasters, such as earthquakes, floods, and landslides, we have firsthand experience with the devastating effects these events can bring. We have seen the profound impact on people's lives, from the loss of property to the tragic loss of life.

As four dedicated engineers representing the University of Western Ontario, we feel a deep responsibility to use our skills to make a meaningful difference in the world. Inspired by this mission, we have set out to create a solution that can significantly impact those affected by such disasters, potentially saving lives in the aftermath.

While we cannot prevent natural disasters, we are determined to enhance the recovery process and minimize their impact. Our goal is to improve resilience and provide swift assistance when it is needed most, ensuring a brighter, safer future for all.


What it Does

The rover is specifically designed to operate in areas affected by natural disasters. Its robust construction enables it to navigate rough terrain, and it can be remotely monitored and controlled within a 100-meter radius. Equipped with advanced sensors, the rover can analyze its surroundings and make informed decisions. For instance, it can assess the structural integrity of a building and determine if it's damaged. If the building is damaged, the rover will enter the premises to search for signs of human life. Upon detecting survivors, it sends out a distress signal to alert rescue teams.

The rover is also programmed with a sophisticated state machine that enables it to engage in conversations with survivors. This feature is designed to provide a calming and reassuring presence, mimicking human-like empathy. If survivors are able to move, the rover provides directions to guide them to safety. If they are unable to move or are trapped, the rover attempts to engage them in conversation to alleviate their distress. If communication is not possible, the rover sends out a distress signal and continues its search for other survivors.


How We Built It

Various components came together in our final product. As soon as hacking started, we split up the work.

The rover was the first thing we built. We had an old RC car kit sitting around, so we assembled it with some upgrades. Instead of using the single included motor driver, we used two motor drivers to power all four wheels with more torque, since the wheels need some strength when navigating the rough terrain of a disaster-stricken area. On top of the extra power from the additional motor driver and power supply, we replaced the traditional Arduino Uno with an ESP32 microcontroller, which adds Bluetooth and WiFi capabilities along with faster processing and a smaller physical profile. In general, the car was a simple build: the plastic chassis pieces, some batteries, and the motors. To connect it to our phone via Bluetooth, we used an open-source application called Dabble, giving us an easy-to-use Bluetooth controller.

The second thing we built was our radar. An even simpler build, it is just an ultrasonic sensor glued to a servo motor, programmed to sweep back and forth. The Arduino then transmits each reading to the laptop through the USB serial connection, where our radar application, written in Java, renders it. Although we made some slight alterations, the majority of the Java radar code was sourced from online, and the original creator is credited in the comments of the code.
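On the laptop side, each serial line has to be parsed into an angle and a distance before it can be plotted. Our app does this in Java, but the idea can be sketched in a few lines of Python. The `"angle,distance."` framing below is an assumption based on common Arduino radar sketches; the actual firmware's format may differ.

```python
def parse_radar_line(line: str):
    """Parse one serial line of the assumed form 'angle,distance.' into ints.

    Returns (angle_deg, distance_cm), or None if the line is malformed.
    The trailing '.' delimiter is an assumption borrowed from common
    hobby radar sketches; adjust to match the actual firmware.
    """
    line = line.strip().rstrip(".")
    try:
        angle_str, dist_str = line.split(",")
        return int(angle_str), int(dist_str)
    except ValueError:
        return None

# Example: a sweep sample at 45 degrees reading 120 cm
print(parse_radar_line("45,120."))  # -> (45, 120)
print(parse_radar_line("garbage"))  # -> None
```

Returning `None` instead of raising keeps the sweep loop alive when a serial read is cut off mid-line.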

We also worked on recognition software, built on the YOLO-World libraries, which come with pretrained models and ready-made methods. Using their built-in inference methods, we integrated recognition with our camera, so the application can now identify a wide variety of objects. On top of this, we added a small "distress signal" feature that sends a (mock) signal to the operators whenever the camera detects a phone.
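The trigger for the mock distress signal reduces to a check over the labels the detector returns each frame. This sketch shows that logic in isolation; the label string `"cell phone"` follows the COCO class naming that YOLO-family models commonly use, and the trigger set is an assumption, not our exact code.

```python
# Labels that should fire the (mock) distress signal; illustrative only.
DISTRESS_LABELS = {"cell phone"}

def should_send_distress(detections, trigger_labels=DISTRESS_LABELS):
    """Return True if any detected label should trigger the mock distress signal.

    `detections` is a list of label strings, as an object detector such
    as YOLO-World would return per frame.
    """
    return any(label.lower() in trigger_labels for label in detections)

print(should_send_distress(["chair", "cell phone"]))  # -> True
print(should_send_distress(["chair", "table"]))       # -> False
```

Keeping the trigger set as data rather than hard-coding it makes it trivial to add more labels (for instance, a person under debris) later.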

Next came our neural network, which we built from scratch to pinpoint safe and unsafe locations in a disaster zone. We trained it on a dataset of 2,500 satellite images of damaged and undamaged buildings; the resulting model correctly classifies 96% of unseen satellite images.
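Since the network was written from scratch, its core is just matrix math. The sketch below shows the forward pass of a one-hidden-layer binary classifier (damaged vs. undamaged) in plain NumPy; the layer sizes and single hidden layer are illustrative assumptions, and the real network trained on the satellite dataset may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def forward(x, params):
    """One-hidden-layer forward pass: image features -> P(building is damaged)."""
    h = relu(x @ params["W1"] + params["b1"])
    return sigmoid(h @ params["W2"] + params["b2"])

rng = np.random.default_rng(0)
n_features, n_hidden = 64, 16  # assumed sizes, not the real ones
params = {
    "W1": rng.normal(0, 0.1, (n_features, n_hidden)),
    "b1": np.zeros(n_hidden),
    "W2": rng.normal(0, 0.1, (n_hidden, 1)),
    "b2": np.zeros(1),
}
batch = rng.normal(size=(4, n_features))  # 4 fake flattened image feature vectors
probs = forward(batch, params)
print(probs.shape)  # (4, 1): one damage probability per image, each in (0, 1)
```

Training then amounts to backpropagating a binary cross-entropy loss through these same two layers.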

Finally, there is the ChatBot. Although there is much to improve, we currently have a program that can comfort a person who may be in trouble. We built it around a state machine design pattern, using an off-the-shelf Python state-machine library, and it can determine when to help the person itself and when to call for backup.
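A state-machine chatbot of this kind boils down to a transition table: (current state, observation) maps to (next state, response). The states, observations, and replies below are illustrative assumptions, not our exact table.

```python
# (state, observation) -> (next_state, response); illustrative, not the real table.
TRANSITIONS = {
    ("greeting", "can_move"):      ("guiding",    "Stay calm. I can lead you to the exit."),
    ("greeting", "cannot_move"):   ("comforting", "Help is on the way. Talk to me while we wait."),
    ("greeting", "no_response"):   ("escalated",  "No response; sending a distress signal."),
    ("guiding", "arrived"):        ("done",       "You made it out. Rescuers are nearby."),
    ("comforting", "no_response"): ("escalated",  "No response; sending a distress signal."),
}

def step(state, observation):
    """Advance the chatbot one step; unknown inputs keep the current state."""
    return TRANSITIONS.get((state, observation), (state, "I'm still here with you."))

state = "greeting"
state, reply = step(state, "cannot_move")
print(state, "|", reply)          # comforting | Help is on the way. ...
state, reply = step(state, "no_response")
print(state)                      # escalated: backup has been called
```

The "escalated" state is where the backup call happens: once entered, the bot hands off to the distress-signal path instead of continuing the conversation itself.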


Challenges We Ran Into

Feature Integration:

  • Complexity and Diversity: Implementing all features within the limited hackathon time was challenging due to the diverse nature of the topics.
  • Time Constraints: The time limitations of the hackathon made it difficult to integrate all features seamlessly.

Individual Feature Development:

  • Debugging Issues: We encountered several debugging issues while building individual features, which consumed a significant amount of our time.
  • Technical Hurdles: Each feature posed unique technical challenges, requiring careful troubleshooting and problem-solving.

Front-End and Back-End:

  • Integration Challenges: Merging the front-end and back-end components of our tools into a cohesive website was a major hurdle.
  • Resource Allocation: Balancing time between feature development and integration proved to be difficult.

Additional Points:

Despite the challenges, our team demonstrated resilience and problem-solving skills, managing to develop individual features successfully. The experience highlighted the importance of effective time management and collaboration in complex projects. Moving forward, we plan to refine and integrate these features fully, ensuring a seamless and functional final product.




What We Learned

Participating in the hackathon gave us invaluable technical experience. We integrated a range of technologies, such as AI for object detection, machine learning, and real-time communication for the chatbot. Developing the T.O.O.F.A.N. rover involved a blend of hardware and software, from designing a durable, Bluetooth-controlled rover to implementing AI-driven detection systems with neural networks. We also leveraged cloud services, like the Google Cloud Translation API, for real-time language translation, sharpening our skills in combining and deploying diverse technological solutions. Rapid prototyping, iterative development, and managing communication between team members were crucial for refining our project under tight deadlines, ultimately improving our problem-solving abilities and technical proficiency.

Beyond technical skills, the hackathon reinforced our understanding of user-centric design and the ethical responsibilities of engineering. We focused on creating a solution that addresses real-world needs, such as enhancing search and rescue operations and providing timely assistance to disaster victims. This project highlighted the importance of features like real-time communication and language translation. We also learned the significance of scalability and future-proofing our design to adapt to various disaster conditions. This experience underscored our commitment to using our engineering expertise for social good, ensuring our technology is reliable, safe, and capable of making a positive impact on affected communities.


What's Next?

The Car and Radar: To improve the radar system, we can focus on increasing its accuracy and range; upgrading to a more advanced ultrasonic sensor would help detect objects at greater distances and with higher precision. Refining the software that processes radar data will also allow the rover to better identify obstacles and navigate complex terrain. As for the rover itself, adding thermal cameras would improve its ability to detect humans in challenging environments, such as under debris or in low-visibility conditions. Implementing AI controls would enable autonomous navigation and decision-making, allowing the rover to continue its mission without an operator. Finally, improving the rover's durability and battery life will let it operate for longer periods in disaster zones.

Neural Network: The main improvement needed is more training data; there is not much else to upgrade. We could also add lower, ground-level images in addition to satellite views, but that dataset isn't large enough yet.

ChatBot: The car will be upgraded with GPS, which will be used to pinpoint the exact location of survivors. The ChatBot currently runs on a state machine, but in the future it will be powered by conversational AI to enable more natural conversations with survivors.
