África Periáñez, data scientist and founder of benshi.ai

“To combat bias in artificial intelligence, we seek talents from different continents, genders and sexual orientations”

The Madrid-born researcher worked in video games and fashion before the Bill & Melinda Gates Foundation recruited her. Her experience with machine learning and behavioral prediction models will help improve the learning apps used by midwives in resource-poor countries. The goal is to reduce maternal and infant deaths during pregnancy and childbirth.

Data scientist África Periáñez on a Madrid terrace during the interview with SINC. / Álvaro Muñoz Guzmán (SINC)

África Periáñez (Madrid, 1980) is the founder and CEO of benshi.ai, a newly created nonprofit based in Barcelona and fully funded by the Bill & Melinda Gates Foundation. The powerful institution will provide almost 9 million dollars (about 7.5 million euros) over the next two years for her and her team to develop a machine learning and behavioral prediction platform that improves mobile health apps used in resource-poor countries. These may include apps for midwives, telemedicine and the management of epidemiological diseases (such as AIDS, malaria and COVID-19), among others.

This data scientist and entrepreneur has spent much of her career in Japan, working in the field of video games. There she founded Yokozuna Data, a company devoted to predicting the behavior of individual gamers by means of machine learning algorithms. At the end of 2019, she returned to Spain to work as the Chief Analytics Officer at Inditex, where she led the data science research, development and implementation across the organization. However, despite being very interested in the project, she did not last long in that position. "I got a call with a proposal that was hard to turn down," she said with a big smile.

The Bill & Melinda Gates Foundation approached us to enhance mobile health apps through our machine learning platform, which features recommendations, individual behavior predictions and experimentation systems

How did you feel when the Bill & Melinda Gates Foundation reached out to you with a proposal like this?

It was such a nice opportunity that I did not have to think about it at all. This foundation recruited us to make improvements in health apps through artificial intelligence (AI). Among other things, we will work on promoting positive reinforcement and nudging the behaviors of the users of these apps. Our mission is to take the most advanced machine learning algorithms for personalization, which are used in the apps of large technology companies, to the healthcare field in resource-poor countries.

To this end, we created benshi.ai, a nonprofit that is fully funded by the Bill & Melinda Gates Foundation and that operates as one of its arms. Our headquarters are in Barcelona and we already have a budget: about 9 million dollars for the next two years. We are very happy and motivated.

What is benshi.ai working on now?

After registering our organization, we are developing a machine learning platform that will include different products, such as recommendation systems, individual predictions of the behavior of health app users, and an experimentation system.

Are you already using the platform for any specific issue?

Yes. While we continue improving the platform in the areas I mentioned before (predictions, experimentation and recommendations), we are already working with another nonprofit called Maternity Foundation [also funded by the Gates foundation] that has built an online learning app for midwives. They have been active for quite some time now and are present in most of Africa, in India and in many Southeast Asian countries. Their app is called the Safe Delivery App and is designed to train midwives in these countries to assist women during childbirth.

Every day 800 women die from complications related to pregnancy and childbirth, so it is crucial to improve the training of midwives and the resources they can use

There are many organizations investing in midwives, as they are women who help other women. Midwives are largely neglected by the institutions of these countries, where every day 800 women and 14,000 babies die from complications related to pregnancy and childbirth. This is why it is critical to improve their knowledge and the resources they can use.

África Periáñez during the interview. / Álvaro Muñoz Guzmán / SINC

And what improvements are you making to the Safe Delivery App?

First, we are developing predictive models to help us determine how midwives use the application on their cell phones. For instance, we want to find out when they will connect again and which modules they will access. To that end, we use algorithms that combine autoregressive and deep learning models, such as DeepAR.
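To make this concrete, here is a minimal, hypothetical sketch of the kind of per-user activity forecast she describes, using the DeepAR estimator from the open-source GluonTS library. The data, field names and hyperparameters are invented for illustration and are not benshi.ai's actual pipeline.

```python
# Hypothetical sketch: forecasting each midwife's daily app activity with DeepAR (GluonTS).
# Toy data and hyperparameters only; not benshi.ai's production models.
import numpy as np
from gluonts.dataset.common import ListDataset
from gluonts.torch import DeepAREstimator

# One time series per midwife: daily counts of app sessions (simulated here).
rng = np.random.default_rng(0)
series = [rng.poisson(lam=1.5, size=90).tolist() for _ in range(20)]
train_ds = ListDataset(
    [{"start": "2024-01-01", "target": s} for s in series],
    freq="D",
)

# DeepAR combines an autoregressive structure with a recurrent neural network
# and produces a probabilistic forecast for each individual series.
estimator = DeepAREstimator(freq="D", prediction_length=7, trainer_kwargs={"max_epochs": 5})
predictor = estimator.train(train_ds)

# Expected activity over the next week; high values suggest good moments
# to surface a new training module or send a notification.
for forecast in predictor.predict(train_ds):
    print(forecast.mean)
```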

Once we know more about what they will do and how they will use the app, we can start creating incentives based on their behavioral data. We already did that in video games to motivate players. In health apps, you can also use this technology to shift behaviors.

In the Safe Delivery App, we are including predictive models to determine how midwives use the app on their cell phones. For instance, we want to find out when they will connect again and which modules they will access, so that we can choose the best time to send them notifications

What incentives can be given to a midwife in these countries?

 To investigate this question, we developed a tool that we called XP Engine. This experimentation system allows us to test different approaches in order to decide, for example, what kind of incentive will work best: financial help, food, working material, etc. Such "prizes" can encourage midwives in these countries to access the app more often whenever we recommend the specific modules they are going to need, so that they can improve their training and practice at the time of delivery. It is a matter of sending the most appropriate incentive to each midwife. We also use reinforcement learning models to enhance their training.
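One common way to run this kind of incentive experiment is a Thompson-sampling bandit, sketched below. This is an illustration of the general technique, not benshi.ai's actual XP Engine; the incentive names and response rates are made up.

```python
# Illustrative Thompson-sampling bandit: learn which incentive type best
# encourages midwives to open the app. Not benshi.ai's actual XP Engine.
import numpy as np

incentives = ["financial_help", "food", "working_material"]
successes = np.ones(len(incentives))  # Beta prior: alpha (starts at 1)
failures = np.ones(len(incentives))   # Beta prior: beta (starts at 1)
rng = np.random.default_rng(0)

def choose_incentive():
    # Sample a plausible success rate for each incentive and pick the best draw.
    samples = rng.beta(successes, failures)
    return int(np.argmax(samples))

def record_outcome(arm, opened_app):
    # Update the posterior for the incentive that was actually sent.
    if opened_app:
        successes[arm] += 1
    else:
        failures[arm] += 1

# Simulated field trial; in production the outcome would come from app telemetry.
true_rates = [0.30, 0.20, 0.25]
for _ in range(1000):
    arm = choose_incentive()
    record_outcome(arm, rng.random() < true_rates[arm])

for name, s, f_ in zip(incentives, successes, failures):
    print(f"{name}: estimated open rate ~ {s / (s + f_):.2f}")
```

The appeal of this approach is that traffic automatically shifts toward the incentives that work, so fewer midwives are assigned to the less effective options as evidence accumulates.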

How do these models help?

Reinforcement learning models are AI agents that automatically learn how each midwife reacts to the different interventions we have devised. In this way, we can optimize the notifications she receives on her cell phone by including the right content and sending them at the best time for her to assimilate and apply the information. For example, would it be better to send her the recommendations just as she is delivering a baby, or on the night before, so that certain materials are fresh in her mind? 

We take into account many factors. We need to know when the best time is to send these notifications. This is why we turn to behavioral scientists, who give us tips like: "Be careful if you want to send a recommendation at night in some African countries, because women usually do not look at their phones during these hours." There are many things we cannot see in the data, so we try to complement the data with the knowledge of these experts.
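A minimal sketch of this idea, under invented assumptions, is a per-midwife epsilon-greedy policy over candidate send times, with the behavioral-science advice encoded as a hard constraint (no night-time sends). The hours, identifiers and rewards below are purely illustrative.

```python
# Hypothetical per-midwife policy for notification timing.
# Night hours are excluded up front, following the behavioral scientists' advice.
import random
from collections import defaultdict

ALLOWED_HOURS = [8, 12, 17, 20]  # candidate send hours; night-time excluded by design

# value[midwife_id][hour] = running mean reward (e.g., 1 if the module was opened)
value = defaultdict(lambda: defaultdict(float))
counts = defaultdict(lambda: defaultdict(int))

def pick_send_hour(midwife_id, epsilon=0.1):
    # Explore occasionally; otherwise exploit the hour with the best observed reward.
    if random.random() < epsilon:
        return random.choice(ALLOWED_HOURS)
    return max(ALLOWED_HOURS, key=lambda h: value[midwife_id][h])

def update(midwife_id, hour, reward):
    # Incremental update of the mean reward for that midwife and hour.
    counts[midwife_id][hour] += 1
    n = counts[midwife_id][hour]
    value[midwife_id][hour] += (reward - value[midwife_id][hour]) / n

# Usage: choose when to notify, observe whether the content was opened, learn.
hour = pick_send_hour("midwife_001")
update("midwife_001", hour, reward=1)
```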

With the experimentation module, we test which incentive will work best, and we also take into account the recommendations of behavioral scientists to obtain information that is not in the data

In which countries, and in what other areas, are you working?

At the moment we are conducting projects with midwives in Ethiopia and India, and we are already gathering data from almost all countries in sub-Saharan Africa.

We are also starting to improve general diagnostic apps in other African countries, such as Nigeria, Kenya and Ghana. Plus, we are working with pharmacy apps, in order to ensure the correct use of drugs, encourage the monitoring of treatments, and perform the necessary tests to guarantee the control of diseases such as AIDS, malaria or COVID-19.

We will investigate what incentives can be given to change patients' habits, so that, for example, they start taking their medication or collecting the results of the tests they have undergone.


Interview with África Periáñez. / Álvaro Muñoz Guzmán / SINC

Do you collaborate with research institutions?

Yes, we are working with universities to turn research results into production systems on our platform. Currently, we collaborate with the Statistical Reinforcement Learning group at Harvard University. Also, with the University of California, we are developing personalized surveys for midwives; this will help us to further assess the impact these personalized recommendations and incentives have on them, beyond the analysis of the data they generate in the apps.

One of the main issues in connection with AI bias is that most data scientists are straight white males from rich countries. In the end, you are not aware of your own bias

AI algorithms have a very bad reputation for introducing all kinds of bias. How can this problem be mitigated?

One of the main issues in connection with AI bias is that most data scientists are straight white males from rich countries. In the end, you are not aware of your own bias.                                                            

To improve things, I think it is important to have diverse teams. This is something we are working very hard on. To that end, we are trying to attract talents from Africa and elsewhere. We also want to open offices in Zambia and Nigeria, and our staff include people from different continents, religions, genders — we have more women than men — and sexual orientations.

In addition, we are collaborating closely with behavioral scientists in the countries where we work, because, as I have said before, many times data alone is not enough.

Source:
SINC
Copyright: Creative Commons