Try it Yourself!
Want to try out EndangARed Defender yourself? Check it out at https://andrewdimmer.github.io/endangared-defender/. Note: Due to time constraints, we currently only support AR on iOS 11 or higher. We also recommend using smaller images on mobile devices, as the TensorFlow model is CPU-intensive.
Provide a short and concise description of the social problem your project is addressing/solving:
Human activity on Earth is putting dozens of animal species in imminent danger of extinction. Whether it’s because of poaching, or loss of habitats caused by climate change, pollution, and human encroachment, many species’ very survival hangs in the balance.
The good news about these human-caused problems is that they have the potential to be human-solved before it’s too late.
Unfortunately, one-size-fits-all solutions aren't effective because of the huge range of different animals and habitats. A solution that helps critically endangered Sumatran elephants, for example, probably wouldn't help Hawksbill sea turtles. Therefore, the very first step in trying to save an endangered species is finding out how many animals of that species are left, and where they're currently living. This is a hugely time-consuming and labor-intensive process that requires enormous amounts of money and large numbers of highly organized, highly trained conservationists to locate, track, and monitor the animals.
And that’s exactly why we built EndangARed Defender. Our web app combines the nearly unlimited power of crowdsourcing and AI object detection to easily and inexpensively locate, track, and monitor endangered species. This has the double benefit of freeing specialized wildlife protection organizations to focus more of their time, money, and resources on specific conservation efforts, while simultaneously increasing public awareness of endangered species.
Provide a short and concise description of how you integrated AI into your project (Please make this description short. You will be able to explain further in the video):
We started by building our machine learning model in Google Cloud AutoML Vision. To do this, we first collected as many different images of the sample animals as we could. Then, we wrote Python scripts to handle things like bulk renaming and CSV generation, before labeling each image and training the model. In particular, we used an object detection model so that we could identify, count, and track multiple animals (of the same or different types) in a single image.
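As a rough illustration of the kind of preprocessing helpers we wrote (the exact scripts aren't included here, and the function names, naming scheme, and bucket name below are illustrative assumptions), the bulk-rename and CSV-generation steps can be sketched like this:

```python
import csv
from pathlib import Path


def bulk_rename(folder, species):
    """Rename every .jpg in `folder` to a consistent
    <species>_<index>.jpg scheme and return the new paths."""
    renamed = []
    for i, path in enumerate(sorted(Path(folder).glob("*.jpg"))):
        new_path = path.with_name(f"{species}_{i:04d}.jpg")
        path.rename(new_path)
        renamed.append(new_path)
    return renamed


def write_import_csv(paths, bucket, out_csv):
    """Write a CSV of Cloud Storage URIs pointing at the uploaded
    images, for AutoML Vision's dataset import step. (Bounding
    boxes were drawn later in the labeling UI, so each row here
    only needs the image URI.)"""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for p in paths:
            writer.writerow([f"gs://{bucket}/{p.name}"])
```

For example, `bulk_rename("photos/elephants", "elephant")` followed by `write_import_csv(...)` yields one `gs://` URI per image, ready to import into a dataset for labeling.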
From there, we built a web app to allow users to upload photos and view information about the animals in the photos. We also connected Google Maps to display information about where that animal has been sighted recently. This allows us to track the range and estimate the size of the population over time as users upload more images.
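The range-tracking idea above can be sketched as a small helper: given a list of geotagged sightings, take the bounding box of recent ones as a rough proxy for the species' current range. This is a minimal, hypothetical sketch (the data model and function names are assumptions, not our production code):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Sighting:
    """One user-submitted sighting: species plus where and when."""
    species: str
    lat: float
    lng: float
    seen_at: datetime


def recent_range(sightings, species, days=90):
    """Return (min_lat, min_lng, max_lat, max_lng) covering where
    `species` was sighted in the last `days` days, or None if no
    recent sightings exist."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    pts = [(s.lat, s.lng) for s in sightings
           if s.species == species and s.seen_at >= cutoff]
    if not pts:
        return None
    lats, lngs = zip(*pts)
    return (min(lats), min(lngs), max(lats), max(lngs))
```

The returned box maps directly onto a Google Maps viewport or rectangle overlay, and counting the recent sightings gives a crude lower bound on population size that improves as more users upload images.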
Finally, we used echoAR to upload, host, and display 3D models of the animal(s) identified in the picture so that users can see them up close, even from the comfort of their own homes.
Attach a video with a maximum length of 3 minutes that includes project demonstration, further explanation, etc. of your project.