Case study: An app for the illiterate

Vishal Kumar · Published in Bootcamp · Feb 22, 2021

Smartphones are getting smarter, and so are their design and user experience. But while most of the world moves forward, a small chunk of the population is being left behind: the illiterate. About 14% of the world's population is illiterate (a small percentage, but a huge number). Various NGOs and other organizations are working to bring this number down to zero. In the meantime, we can help through design thinking: reducing their daily technical challenges by improving smartphone accessibility.

Today’s Solution

After going through different apps that provide accessibility features, I found problems in all of them. Even tools like text-to-speech require some level of technical knowledge to use (they are becoming more accessible with every update, thanks to Google). So the only option left to rely on is the multimedia phone, but even that requires some knowledge of language and numbers.

User Stories

To build realistic user stories and create empathy, I surveyed 9 people, aged between 40 and 60, who try to use smartphones but often fail because of their illiteracy. All of them are from my own village, people from my neighborhood.

  1. 4 said they like giving voice commands to Google Assistant (in Hindi), usually for making calls, but the failure rate is very high (maybe because of Google's own low literacy in Hindi XD).
  2. 2 said they can't read or write text messages, so it's hard for them to communicate over text.
  3. 3 said it's easier to identify contacts by their picture than by name or number.
  4. All of them wanted to know why smartphones are called smart.

Pain Points

I started with a few assumptions of my own, then added questions to the survey to support or contradict them and help me identify the major and minor pain points.

  1. 5/9 complained that the UI for basic tasks (calls, texts, etc.) feels confusing.
  2. 4/9 said Google Assistant is fun to play around with, but they can't rely on it for these basic tasks.
  3. All of them said they would like to hear the text in their documents read aloud, but they have no idea what Google Lens is, and it is hard to use because of the many features it bundles together.
  4. 7/9 said it would be better if they could hear a text message read aloud and type anything just by saying it.

Suggested Solution: MITRA

MITRA is a sign- and audio-based UI that makes the features already available on the device easy to use. The goal is to design user interfaces (UIs) that let novice and low-literate users access the broad range of services and utilities increasingly available to them, with minimal training and external assistance.

  1. Sign & Audio-based UI: Sign and audio-based UI will help the users to interact with the Application easily. A speaker button will be provided along with every title and sentence so that the user can get to know what’s written there. Using big signs for call, text, etc. will make the user understand the functions easily.

2. Minimal UI: Decluttering unnecessary elements from the UI will help the user not getting confused/distracted.

3. Tutorial Section: A tutorial section will be always there to help out the user.

4. Text to speech for documents: Other than the audio button feature elaborated in the first point, there will be a dedicated button for understanding a document just by taking a picture or uploading a picture from the gallery. (Just like google lens but with a minimal UI without a plethora of features google lens provides.)
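To make the speaker-button idea concrete, here is a minimal Kotlin sketch built on Android's standard TextToSpeech API. The class name SpeakerButtonHelper and the Hindi-first locale choice are illustrative assumptions, not a finished implementation.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import android.view.View
import java.util.Locale

// Hypothetical helper: wires a speaker icon next to any label so that
// tapping it reads the label aloud, preferring Hindi and falling back
// to the device's default locale if a Hindi voice is not installed.
class SpeakerButtonHelper(context: Context) : TextToSpeech.OnInitListener {

    private val tts = TextToSpeech(context, this)
    private var ready = false

    override fun onInit(status: Int) {
        if (status == TextToSpeech.SUCCESS) {
            val result = tts.setLanguage(Locale("hi", "IN"))
            if (result == TextToSpeech.LANG_MISSING_DATA ||
                result == TextToSpeech.LANG_NOT_SUPPORTED) {
                tts.language = Locale.getDefault()
            }
            ready = true
        }
    }

    // Attach to the speaker icon shown beside a title or sentence.
    fun bind(speakerIcon: View, textToRead: String) {
        speakerIcon.setOnClickListener {
            if (ready) {
                tts.speak(textToRead, TextToSpeech.QUEUE_FLUSH, null, "mitra-utterance")
            }
        }
    }

    // Call from the Activity's onDestroy() to release the engine.
    fun shutdown() {
        tts.stop()
        tts.shutdown()
    }
}
```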
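For the document feature, a similar sketch: recognize the text in a photo with ML Kit's on-device text recognizer, then hand the result to the same speech engine. The function readDocumentAloud and its callback wiring are hypothetical; ML Kit also offers a Devanagari recognizer option, which a Hindi-first app would likely prefer.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical flow: the user takes or picks a photo of a document,
// we run on-device OCR, then speak the recognized text through the
// TextToSpeech helper sketched above.
fun readDocumentAloud(photo: Bitmap, speak: (String) -> Unit) {
    val image = InputImage.fromBitmap(photo, /* rotationDegrees = */ 0)
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    recognizer.process(image)
        .addOnSuccessListener { result ->
            // result.text is the full recognized text of the document.
            if (result.text.isNotBlank()) speak(result.text)
        }
        .addOnFailureListener {
            // In the real app this would trigger an audio error prompt,
            // not a text message the user cannot read.
        }
}
```

Because recognition runs on the device, the feature should keep working without an internet connection, which matters for users in a village setting.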

Wireframes for MITRA

MITRA's features are elaborated in the wireframes below.
