The Three Blind Mice

  • Machine Learning
  • Hackathon
  • February 2019



Inspiration

In 7th grade, I interacted with visually impaired students from the National Association for the Blind (NAB). I observed the challenges these students faced in the classroom and later came up with an idea that could help minimize the hurdles caused by their limited vision: an app that could help them detect objects and avoid obstacles. My friends and I built a prototype of that idea at the LancerHacks 2019 hackathon, winning “Best Beginners Hack”.

What it does

The app uses the phone’s camera and a machine learning model to determine what kind of object is in front of the user. It displays the object’s name in a color scheme that can be comfortably viewed by the partially sighted, converts that text to speech, and vibrates the phone to alert the user to the object ahead.
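The notify-the-user flow described above can be modeled as one small step: given a recognized object label, produce the on-screen text, the spoken phrase, and a flag to vibrate the phone. This is a hedged sketch with illustrative names (`Alert`, `makeAlert`), not the app’s actual code; in the real app the spoken text would be handed to the iOS speech APIs and the vibration triggered through the system haptics.

```swift
import Foundation

// What the app presents for one recognized object (illustrative type).
struct Alert {
    let displayText: String   // shown on screen in a high-contrast color scheme
    let spokenText: String    // handed off to text-to-speech
    let vibrate: Bool         // buzz the phone to get the user's attention
}

// Build the alert for a recognized object label.
func makeAlert(for label: String) -> Alert {
    Alert(displayText: label.uppercased(),
          spokenText: "There is a \(label) in front of you.",
          vibrate: true)
}
```

For example, `makeAlert(for: "chair")` would display “CHAIR”, speak “There is a chair in front of you.”, and vibrate the phone.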

How I built it

We built the app in Xcode using Swift. It runs a pre-trained ResNet-50 model to recognize the objects in front of the user, and our code uses the model’s predictions to name each object. We started from Brian Voong’s YouTube tutorial, then modified the code to serve visually impaired users: we set a minimum confidence threshold for detected objects, added text-to-speech, and adjusted the colors for easy readability by the partially sighted.
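The confidence-threshold step can be sketched as a small filter over the classifier’s output. This is a simplified sketch, not the app’s actual code: `Classification` and `bestLabel` are illustrative names, and in the real app the (label, confidence) pairs come from the Core ML ResNet-50 model.

```swift
import Foundation

// One prediction from the image classifier (illustrative type,
// standing in for a Core ML / Vision classification result).
struct Classification {
    let label: String
    let confidence: Double  // 0.0 ... 1.0
}

// Return the top prediction only if it clears the confidence
// threshold; otherwise return nil so the app stays silent
// rather than announcing a low-confidence guess.
func bestLabel(_ results: [Classification],
               threshold: Double = 0.5) -> String? {
    guard let top = results.max(by: { $0.confidence < $1.confidence }),
          top.confidence >= threshold else {
        return nil
    }
    return top.label
}
```

Raising the threshold trades coverage for reliability: the app speaks less often but is wrong less often, which matters when the user cannot visually double-check the result.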

What I learned

I learned the basics of UI/UX design, which is essential for any app, and I built an app for the first time.

Links: Devpost