Age prediction in the AI age

This post is about an experiment I did with Microsoft's Seeing AI app, available on the Google Play Store and the iOS App Store. It is a very interesting and thoughtful app: it takes images as input (photos of people, colors, scenes, documents short or long, currency notes, or product barcodes) and produces a text description along with an audio reading of that text. One of the ideas behind the app was to help people with visual challenges understand the world around them better. That is why I downloaded it: I wanted to see whether my uncle, whose vision has been declining with advancing age, could make use of it.

One of the features of this app is that you can take photos of people, or selfies, and the resulting text and audio output tells you how many people are in the image and how close or far each of them is from the camera. As an added touch, the output also predicts the age of each person in the photo. This caught my attention, and I wanted to see how accurate it could get.

Testing brings out the best in me

I took a series of photos of myself at various angles, against various backgrounds and in different lighting, to see what the app would predict my age to be. You would be amazed to know that my age is not a single value but a range between 17 and 51 years! Not just that: according to the output, I sometimes had a beard and was beardless at other times. My hair was black, grey, white, brown and even blond :). The faces I made also led to detection of my moods: angry, surprised, disgusted, happy, sad and neutral. Though some of these matched my intent, several were far from it.

A technologist who keenly follows AI and machine learning could probably deduce why this happens. An image recognition model is typically trained on tens of thousands, or even millions, of images, and based on the age labels in that training data, the model predicts the age of a person seen in a new image. If the training data included sufficient examples of all kinds of people across race, skin color, gender and age, the resulting age predictions could be reasonably accurate across all kinds of people. If that were not the case, the predictions would be good only for the sets of people the training images happened to represent.
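To make the intuition concrete, here is a minimal sketch, with entirely made-up data, of how a prediction can only be as good as the training set's coverage. It uses a toy 1-nearest-neighbour "age predictor" over hypothetical feature vectors standing in for what a vision model would extract from a face photo; real systems use deep neural networks trained on millions of labelled images, but the coverage problem is the same.

```python
import math

def predict_age(features, training_set):
    """Return the age label of the nearest training example."""
    nearest = min(training_set, key=lambda ex: math.dist(features, ex["features"]))
    return nearest["age"]

# Toy training data: the feature vectors and ages are invented for illustration.
training_set = [
    {"features": [0.2, 0.1], "age": 25},
    {"features": [0.8, 0.9], "age": 60},
    {"features": [0.5, 0.4], "age": 40},
]

# A face similar to something in the training set gets a sensible answer.
print(predict_age([0.25, 0.15], training_set))  # -> 25

# A face unlike anything in the training set is still mapped to the
# nearest example, however poor the match; sparse or skewed training
# data produces confidently wrong predictions.
print(predict_age([0.95, 0.05], training_set))  # -> 40
```

The second call is the point: the model never says "I don't know" — it just returns the closest thing it has seen, which is one way under-representation in training data turns into biased output.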

These experiments led me to delve deeper into the notion of bias in AI, whether in training datasets, in machine learning models, or in how their outcomes are interpreted. A good resource I stumbled upon was the ACM FAT* Conference, which brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems. I will continue to explore in this direction.

I would love to hear about your experiences with bias in AI and any examples that you have come across.

Disclaimer:

  • This article does not require you to voluntarily disclose your true age :).