Tomorroad

Safer roads for tomorrow. A road safety system that alerts drivers to hazardous road conditions and saves lives!


Categories

  • Main

Description

Provide a short and concise description of the social problem your project is addressing/solving:

Through our project, we aim to improve the safety of people on our roads. Many car crashes and accidents occur because the driver is inattentive or distracted, and until now there has been no effective system in place to warn these drivers and curb these incidents.

Tomorroad alerts the driver to upcoming road conditions that they may have missed. For example, the program will identify upcoming sharp turns, unusual road curvature, rock slide areas, speed zones, steep inclines, and other hazardous road conditions. Many inattentive drivers miss these dangers, resulting in car accidents. Tomorroad helps make sure that you are always aware of your surrounding driving conditions.

Provide a short and concise description of how you integrated AI into your project (Please make this description short. You will be able to explain further in the video):

AI is integrated into our project in the detection, identification, and analysis of road conditions and road signs; we used TensorFlow and Keras as the main libraries. The program captures an image of the road a few times every second and identifies road conditions in the image using AI and machine learning. Color masks and shape recognition first detect these conditions in the frame; the detected regions are then identified and analyzed by a trained neural network model. When the program identifies a hazardous road condition, it alerts the driver using audio prompts.
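
A minimal sketch of that capture-and-alert loop (hedged: classify_frame is a stand-in for our mask + neural-network pipeline, and pyttsx3 is just one possible text-to-speech library, not necessarily the one we used):

```python
import time
import cv2
import pyttsx3  # assumed text-to-speech library; the project may use another

# Hedged sketch of the overall loop described above: grab a frame a few
# times every second, run detection/identification, and speak an alert.
def classify_frame(frame):
    """Placeholder for the mask + neural-network pipeline."""
    return None  # return a sign label, or None if no sign was found

engine = pyttsx3.init()
cap = cv2.VideoCapture(0)  # assumed dashboard camera index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    label = classify_frame(frame)
    if label is not None:
        engine.say(f"Caution: {label} ahead")  # audio prompt for the driver
        engine.runAndWait()
    time.sleep(0.3)  # roughly a few captures per second
cap.release()
```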

Attach a video with a maximum length of 3 minutes that includes project demonstration, further explanation, etc. of your project.

How we built it (AI explanation):

As mentioned before, we used TensorFlow and Keras as the main libraries for artificial intelligence and machine learning. We split off 20% of the image dataset for testing and trained a convolutional neural network built with Keras's Sequential model, which let us stack layers for the training process. After compiling the model, we trained for 15 epochs, with 981 files passing through in each epoch. After these epochs, the accuracy was 99.3%, and the model was saved in an .h5 file.
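
A minimal training sketch under those settings (the dataset path, image size, and exact layer stack are illustrative assumptions; only the overall recipe of a Sequential CNN, an 80/20 split, 15 epochs, and an .h5 save matches what we actually did):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Assumed folder layout: one subdirectory per sign class.
train_ds = keras.utils.image_dataset_from_directory(
    "data/signs",
    validation_split=0.2,   # hold out 20% of the data for testing
    subset="training",
    seed=42,
    image_size=(30, 30),    # assumed input size
)
val_ds = keras.utils.image_dataset_from_directory(
    "data/signs",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=(30, 30),
)

# Illustrative layer stack; the real model's layers may differ.
model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(30, 30, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),  # assumed: 43 German sign classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=15)
model.save("signs_model.h5")  # assumed file name
```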

After noticing the sign detection problem, we decided to use shape recognition and color masks. Using the cv2 library to capture video and images for input, we looked for shapes based on the corners and contours detected, and used color masks to confirm that the detected object was a sign. This color masking and shape recognition ensured that the input the AI received actually contained a sign, so the model stayed accurate and was only called upon when needed. We then integrated these two sections, loading the model from the .h5 file and letting it identify the sign and produce the audio output for an input image, but only if the image passed the color masks and shape recognition.
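
A hedged sketch of that detection gate plus model call (the HSV range, corner counts, and thresholds are assumptions, not our exact values):

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("signs_model.h5")  # assumed file name

def find_sign_candidates(frame):
    """Return crops that pass the color mask and shape checks."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed mask for saturated red, common on warning signs
    mask = cv2.inRange(hsv, (0, 150, 90), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        if cv2.contourArea(c) < 500:  # skip tiny blobs (noise)
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if 3 <= len(approx) <= 8:  # triangle through octagon
            x, y, w, h = cv2.boundingRect(approx)
            crops.append(frame[y:y + h, x:x + w])
    return crops

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for crop in find_sign_candidates(frame):
        # The model rescales pixels internally (see the training sketch)
        x = cv2.resize(crop, (30, 30)).astype("float32")
        probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
        print("sign class:", int(np.argmax(probs)))  # audio prompt in the app
cap.release()
```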

What we learned:

We all learned a lot about image processing and filtering. Before this hackathon, we each had some introductory experience with artificial intelligence and machine learning, but none of us had attempted detailed image processing, and at first, the idea of a program that could identify signs in an image seemed impossible. We worked on the AI aspect first, focusing on being able to run an image through the neural network and get an output. We then worked on the sign detection and color masks to filter what data was being sent to the AI. Overall, we learned a great deal about image/video processing and integrating artificial intelligence.

However, the biggest thing we learned was how to collaborate purely online. Before this hackathon, none of us knew each other. Throughout these last few days, we made sure to communicate clearly and work together to get this project done, and being able to learn about AI during this process is something we are very proud of.

Challenges:

While building the project, we encountered many challenges:

  1. The first prototype of our project did not have a sign detector, so the AI would always try its hardest to guess which sign it was seeing, even when there was no sign in the picture.
  2. The next challenge grew out of that problem once we started building the sign detector. We tried to find something every sign had in common, so it would be easy to identify. We thought identifying the pole the sign stands on would work, but after testing we found that a lot of things could stand in our way (rust, sunlight, reflections on the pole).
  3. Our final decision was to build color masks that check whether a specific shape of a specific color is present. However, one of the masks kept picking up brown/light brown colors, which we did not want. In the end, making 2 additional masks that together make up 1 mask was the correct decision, because the combined mask picked up only the colors we wanted (see the sketch after this list).
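
Here is a hedged sketch of that two-masks-into-one idea. In HSV space, red hue wraps around 0, so a single range also admits nearby brownish tones; two tighter ranges OR-ed together behave like one clean mask. The exact ranges below are assumptions, not our project's values:

```python
import cv2
import numpy as np

def red_sign_mask(frame):
    """Combine two hue bands into one mask that excludes brownish tones."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 150, 90), (8, 255, 255))      # low-hue red
    upper = cv2.inRange(hsv, (172, 150, 90), (180, 255, 255))  # high-hue red
    return cv2.bitwise_or(lower, upper)  # the 2 masks make up 1 mask
```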

What's next for Tomorroad:

In the future, we would be interested in expanding this application to mobile devices and making it a professional product. This would mean moving it onto iOS and Android, and expanding the types and number of signs and conditions it can detect. First, we would want to make this Python script work on these mobile devices, and ensure the AI and image processing can all work together with the phone's hardware. Once mobile devices are supported, we would want to improve the application itself, in terms of both variety and efficiency. The dataset we used originated from Germany, which is why the signs follow the metric system and are specific to Germany compared to other countries. Ideally, we would ask for the user's location and adjust the set of signs to look for based on their selection. We would also like to improve the speed and latency of the application and its video capturing, as performance right now isn't consistent, which can hurt accuracy. Improving this would mean optimizing the libraries being used and the application itself. Overall, we would definitely be interested in expanding this into a real product, increasing the devices, data, and efficiency it supports.
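
One plausible first step toward mobile (a sketch of an approach we have not implemented, assuming the model is still the .h5 file described above) is converting it to TensorFlow Lite, which runs on both iOS and Android:

```python
import tensorflow as tf
from tensorflow import keras

# Hedged sketch: convert the saved Keras model to TensorFlow Lite,
# one common route to on-device inference. File names are assumptions.
model = keras.models.load_model("signs_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # smaller, faster model
tflite_bytes = converter.convert()
with open("signs_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```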

Code Repository: https://github.com/mnhegde/Traffic-Signs-AI
