In the latest episode of the AI for Information Accessibility podcast, host Ayushi Khemka speaks with Dr. Eleni Stroulia and Dr. Martha White, both professors in the Department of Computing Science at the University of Alberta. Stroulia is also the director of the university's AI4Society Signature Area, while White is a principal investigator at the Alberta Machine Intelligence Institute. The two discussed questions around AI and gender, exploring both pedagogical and industry contexts and shedding light on how to situate gender equity as a guiding principle in AI and on the different ways gender shows up in a computing science classroom. The conversation concluded with a discussion of the representation problem in AI and related fields, making room for women's experiences in technology more broadly.
The AI4IA podcast series is associated with the 2022 Conference on Artificial Intelligence for Information Accessibility, which took place on September 28 to commemorate the International Day for Universal Access to Information. The AI4IA Conference and the podcast series are also organized in collaboration with AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta; the Centre for New Economic Diplomacy at India's Observer Research Foundation; and the Broadcasting Commission of Jamaica.
To access the conference presentations, use this link.
CORDEL GREEN: Hello and welcome. My name is Cordel Green, chairman of the UNESCO Information for All Programme Working Group on Information Accessibility. Welcome to the AI for Information Accessibility podcast, organized by Carnegie Council for Ethics in International Affairs. Your host is Ayushi Khemka, a Ph.D. student at the University of Alberta.
AYUSHI KHEMKA: Hello and welcome to our final episode of the AI for Information Accessibility podcast. We started our journey on all things AI and information accessibility a few months ago and it comes to a close today with a wonderful chat that I had with Dr. Martha White and Dr. Eleni Stroulia from the University of Alberta. In this episode we discuss their journeys and focus on issues around gender and AI.
Dr. Martha White is an associate professor of computing science at the University of Alberta and principal investigator at Amii, the Alberta Machine Intelligence Institute, which is one of the top machine learning centers in the world. She holds a Canada CIFAR AI Chair and received IEEE's "AI's 10 to Watch: The Future of AI Award" in 2020. Her research focus is on developing algorithms for agents continually learning on streams of data with an emphasis on representation learning and reinforcement learning.
Dr. Eleni Stroulia is a professor in the Department of Computing Science at the University of Alberta. Since 2020, she has been the director of the University of Alberta's AI4Society Signature Area and has been serving as the acting vice dean of the Faculty of Science since 2021. In the recent past, she has received a McCalla professorship and was also recognized with a Killam Award for Excellence in Mentoring. Her flagship project in the area of health care is the Smart Condo in which she investigates the use of technology to support people with chronic conditions to live independently longer and to educate health-science students to provide better care for these clients.
To start our conversation, I spoke with Dr. White and Dr. Stroulia about what developed their interests in AI, data, and algorithms. And I asked them about their current areas of research.
Here’s Dr. Stroulia, discussing her start in AI and how it connects to her work studying how young children learn.
ELENI STROULIA: I did my studies in the early 1990s, and our understanding back then about what AI is was slightly different than what we think now. At the time, there was a very strong impetus to understand human intelligence, and artificial intelligence was the study of how human intelligence works, and the objective was to build simulations. So if we have an algorithm that behaves similarly to how humans behave, then that would give us an idea about how we are intelligent, about what in our cognitive processes exhibits these properties that we consider intelligence. At the time, I was very interested in how children learn, particularly language, and I had heard about some examples of how children generalize rules; this is primarily the work that I was interested in. So, for example, according to Piaget, children learn by aggregating examples into higher-order rules. And then when they encounter exceptions to their rules, they keep on ignoring the exceptions until they have a few of them, and then they make another rule to cover the exceptions.
So, for example, young children learn the present and past tenses of verbs one by one. Very young children know that "go" is associated with "went," but then at some point they start making the mistake of saying "goed." And the reason they make the mistake is that they have enough examples telling them that the past tense is formed by adding -ed to the present tense form. So now they're starting to make these mistakes because they have learned a generalization, and then they need multiple examples again to recognize that there are exceptions to the rule, and they start accumulating the exceptions. This notion of learning as an aggregation of examples into higher-order rules, a discovery of patterns that explain or predict what the right thing to do is, was the thing that fascinated me. At the time, the learning was very much conceptual, very much through examples that were more intuitive.
Now we are again interested in learning from examples, but the processes are very much mathematical, the examples that we're learning from are just much larger, and we rely on big data sets and so forth. But for me, it was always this idea that we can build algorithms that behave like humans, and that this offers an explanation of what aspects of our cognitive infrastructure make us exhibit these behaviors that are characteristically intelligent, characteristically human. So it was just a very exciting time to think about computing science as a mechanism for explaining intelligence and a mechanism for simulating intelligence.
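To make the rule-and-exceptions picture concrete, here is a minimal toy sketch (an editorial illustration, not anything from the episode): a learner that memorizes past tenses, over-generalizes an "add -ed" rule once it has seen enough regular verbs, and only later stores irregulars as exceptions, reproducing the "went, then goed, then went again" pattern Dr. Stroulia describes.

```python
# Toy sketch of Piaget-style rule generalization with exceptions (illustration only).

class PastTenseLearner:
    def __init__(self):
        self.memorized = {}   # verb -> past tense seen so far (rote stage)
        self.exceptions = {}  # irregulars recognized after the rule is formed
        self.has_rule = False

    def observe(self, present, past):
        self.memorized[present] = past
        # After enough regular examples, generalize the "add -ed" rule.
        regular = sum(1 for p, q in self.memorized.items() if q == p + "ed")
        if regular >= 3:
            self.has_rule = True
        # Once the rule exists, irregulars are stored explicitly as exceptions.
        if self.has_rule and past != present + "ed":
            self.exceptions[present] = past

    def past_tense(self, present):
        if not self.has_rule:
            return self.memorized.get(present)                 # rote stage: "went"
        return self.exceptions.get(present, present + "ed")    # rule stage: "goed" until the exception is learned


learner = PastTenseLearner()
learner.observe("go", "went")
print(learner.past_tense("go"))   # "went" (rote memory)
for verb in ["walk", "jump", "play"]:
    learner.observe(verb, verb + "ed")
print(learner.past_tense("go"))   # "goed" (over-generalized rule)
learner.observe("go", "went")
print(learner.past_tense("go"))   # "went" again (exception accumulated)
```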
AYUSHI KHEMKA: Here’s Dr. White discussing her start in the field. She is currently focused on a different type of learning. We then started speaking about the ongoing issues of sexism and racism in AI.
MARTHA WHITE: So my undergraduate was in math, so I've always been drawn to algorithms and understanding functions, and understanding the technical underpinnings of things. And I did some research in my undergraduate and that research happened to be in AI. And I think a lot of people can probably say the same.
I'll say that as soon as you start working on AI, it really hooks you. There's just lots of really interesting things to do with AI. It's a field where you can work on really important applications, you can use it for really great things. And it's also a field where you get to explore a very technical side, understanding algorithms and applying knowledge from computing science and from math. So it really hooked me pretty early on.
As for your second question, what I'm currently working on: I'm actually working on lots of things, but my main area is called reinforcement learning. It's a formalism for decision-making: How can we get agents to interact with the world and actually learn how to make better decisions? Think, for example, of a vacuum-cleaning robot that's going to drive around your house. We want it to do a better job of navigating, finding dust, doing these things more efficiently.
So that's the general area that I work in. And my current main passion is understanding how we can get these kinds of agents to learn in deployment. By that I mean that a lot of our machine learning algorithms learn from big batches of data, and we might deploy fixed functions or fixed policies. I think one of the most important paths forward is to understand how we can have adaptive agents, ones that adjust based on new experience once they're actually in deployment.
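As a rough illustration of the contrast Dr. White draws between deploying a fixed policy and letting an agent keep adapting, here is a small sketch (an editorial illustration, not her code; the environment and numbers are invented) of an epsilon-greedy agent that continues updating its value estimates even after the world changes mid-deployment.

```python
import random

class AdaptiveAgent:
    """Epsilon-greedy agent that updates incremental value estimates online."""
    def __init__(self, n_actions, step_size=0.1, epsilon=0.1):
        self.values = [0.0] * n_actions
        self.step_size = step_size
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))   # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, action, reward):
        # Incremental update: the agent keeps learning after deployment.
        self.values[action] += self.step_size * (reward - self.values[action])


def reward_for(action, world_shifted):
    # Hypothetical environment: which action pays off changes partway through.
    best_action = 1 if world_shifted else 0
    return 1.0 if action == best_action else 0.0


agent = AdaptiveAgent(n_actions=2)
for step in range(2000):
    world_shifted = step > 1000        # the world changes after "deployment"
    a = agent.act()
    r = reward_for(a, world_shifted)
    agent.update(a, r)                 # a frozen policy would skip this line
print(agent.values)                    # estimates track the new best action
```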
AYUSHI KHEMKA: Thank you. That sounds so fascinating. And you talked about how we train our algorithms and machine learning models on big batches of data. I think somewhat related to that would be my next question. There are multiple research projects where people are talking about how AI is sexist, racist, and adds to a lot of oppressive discourses already existing in these spaces. So I'm wondering how to move beyond the how of it and understand the why of it. Where did we as humans just lose the plot? Or was the plot never there to begin with? What do you think about that?
MARTHA WHITE: Yeah, it's a great question. It's definitely not something I'm an expert in, but of course anyone who's in AI machine learning can't help but spend a little time thinking about it.
So I think maybe one of the reasons why this has happened, why weren't we more responsible and prevented these kinds of bad outcomes, I think is just the classic issue that many of us were experts in one thing. Maybe we were experts in algorithm design or experts in statistics, and we weren't experts in understanding how these algorithms were going to affect the real world.
So I think what you see here is people doing what they were already doing and then there being ramifications of that: of just doing the same thing we were doing, but now with much more access to data and much more access to actually deploying these models. And once people saw those ramifications, of course there were lots of people stepping up and saying, "Okay, we have to fix this problem." But maybe for many it was hard to even know that it would be a problem. They kind of needed to see the problem happen. And then of course we need to do something about that problem.
So it's maybe a little bit of negligence. Maybe we should have anticipated, we should have been asking how does our work actually impact the real world. But also for quite some time, maybe surprisingly, machine learning, and especially reinforcement learning was a smaller part of the research space, a smaller part of computing science, a smaller part of statistics. And so as the community grew, this is when we really need to start looking at these things.
So a niche area is exploding, and lots of ramifications are going to come out of that, and then we're going to have to deal with those ramifications.
AYUSHI KHEMKA: Here’s Dr. Stroulia’s answer to how humans “lost the plot,” with some historical context.
We then discussed the representation problem when it comes to women and emerging technology.
ELENI STROULIA: So the type of AI machine learning algorithms that we're able to design and use today rely on large sets of examples. In the early days, the examples were rather carefully crafted because we did not have the computational machinery to deal with many, many examples. And if you craft the examples, then you get inspired by scenarios, like the linguistic scenarios I mentioned, so you can recreate the phenomena that you're looking for fairly easily. Now we have lots of data sets, and the data sets have been accumulated through long years of using software to conduct our processes. So in some sense what we're learning now, what our algorithms are learning now, is systematizing biases that are manifested in the data that we have collected. For example, up until, I don't know, the 1960s or '70s, women could not get a credit card without a man co-signing.
Therefore, if you consider this data set, there is not a single example of a woman getting a credit card, and if you learn from these examples, women applying for credit cards would be denied. This is a very stark example of what the data can include and what an algorithm might learn from it. Or, for example, most of our data from human resources systems only have male and female genders, and therefore nothing that we can learn from our past experience is applicable to new systems that might recognize more genders. So the issue of sexism is absolutely a valid concern, because these algorithms learn from our behaviors. And not only that, they systematize, they kind of freeze, these biases. If you consider one single biased person making HR decisions in a big company, that's bad. But if you consider an algorithm that has learned from these examples, and this algorithm is adopted by most Fortune 500 companies, that's terrible. That's unacceptable. Because in effect, people who do not fit the stereotypes that used to be successful are locked out across a whole sector.
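A tiny sketch of the credit-card example (the data and the model here are invented for illustration, not from the episode): a model fit to historical records in which no woman was ever approved simply reproduces, and freezes, that bias.

```python
# Toy illustration: training on biased historical data reproduces the bias.

historical_applications = (
    [{"gender": "male", "approved": 1}] * 70 +
    [{"gender": "male", "approved": 0}] * 30 +
    [{"gender": "female", "approved": 0}] * 100   # pre-1970s style data: never approved
)

def fit_approval_rates(records):
    # "Learn" the historical approval rate for each group.
    rates = {}
    for group in {r["gender"] for r in records}:
        group_records = [r for r in records if r["gender"] == group]
        rates[group] = sum(r["approved"] for r in group_records) / len(group_records)
    return rates

def predict(rates, gender, threshold=0.5):
    return "approve" if rates.get(gender, 0.0) >= threshold else "deny"

rates = fit_approval_rates(historical_applications)
print(rates)                          # {'male': 0.7, 'female': 0.0}
print(predict(rates, "female"))       # "deny": the historical bias is now frozen in the model
print(predict(rates, "male"))         # "approve"
```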
Another problem behind the sexism that has sometimes been manifested in AI systems is the fact that the engineers developing computer systems, software systems, AI systems are for the most part male, and they are not necessarily seeing the same things that women in the same roles would see. It's a fascinating example to me that in 2014, Apple launched HealthKit, which is basically a self-monitoring piece of software for supporting individuals who take care of their own health, which is a great idea. It basically democratizes everybody's ability to pay attention to their own physiological parameters. Now it turns out that that application had functionality for monitoring copper intake, which I'm sure is important, but it did not monitor menstrual cycles, which seems to me much more relevant to 50 percent, give or take, of the human population. So at the last step we have the people who are deciding what's important to do, what's important to deliver, what functionalities should be included, and some people do not necessarily see the needs of the genders they do not belong to. So we need to have more women in software development, in engineering roles, just so that all the needs of the human population are considered when we're deciding what is important to support and what's important to address.
AYUSHI KHEMKA: I think you've answered the next question that I was going to ask, but I will still ask it and add a little bit to it. As you mentioned, there are not enough women in AI or computing science or tech at large, and the same goes for persons belonging to other gender minorities in that space. And there are a lot of ways in which people try to rationalize it and reason it out. Why is that happening? There's definitely a representation problem. So there's a school of thought which tends to say that, yes, there's a lack of technical skills within individual women at large. What do you have to say to that?
And also, we know that there is a representation problem, and we know how that affects the way our tech is being designed, because it's not just what is being designed, it's also who is designing it. Do you have any ideas on how that can be addressed? Obviously it's a huge, huge thing, but what are your basic thoughts on that?
ELENI STROULIA: So we are seeing that female high school students have the grades, the good grades they need to come to university and get into disciplines that have high admission averages, like science and engineering, for example. And these would be the pathways that lead to AI. But we see many women dropping out when they join a team or a classroom where they are clearly a minority. We're still seeing, in everyday life, microaggressions and statements about who belongs and who doesn't, statements about what women should be doing, and the typical profiles of successful women influencers and role models do not necessarily cover science and engineering. And when you find yourself in a lonely environment and you have many more options to do something else, because you are skilled and you are competent and you have high grades, in some sense it makes eminent sense to drop out.
So that's one problem: those of us who are in the field have to stand up and try to stop microaggressions, try to establish a culture of respect, and make sure that we project this idea and build this culture that everybody who is here belongs here equally. And whatever our population, our team, used to look like is not necessarily the way it should look in the future.
At the same time, I am actually a very strong believer that strong measures must be taken to actively encourage and retain women, Indigenous students, Black students, and students from communities that tend to be more rural or from countries that may not have such representation. I think we need to actually support these individuals with special scholarships and special mentoring programs, so that we support them when they feel that it would be easier in their personal lives to drop out. So we have to both build a general culture of respect and provide active support, recruitment, and scholarships to change the representation profile of who is in technology.
But there is a more general problem with our culture, with what we value. When I started my studies, I was told by my parents that I should become a teacher, because as a woman I would eventually have children, which I did, and wouldn't it be nice if I had the whole summer free to be with my family on vacation, which is an absolutely great idea. And my parents told me that because it was good practical advice. But they had also taught me before that it's very important to be educated, and everybody in my environment valued the investment that people made in their education: you went to school and got a degree and you worked hard and you built a career around your degree. That was something that was important and respected.
I think our culture no longer values education just for the purpose of being educated. I think education is seen more as an avenue to become wealthy. And if there are other avenues to become wealthy, that's perfectly fine and more power to you. So I think that we're missing something when we lose this value: that being educated and being among the people who solve science problems and construct engineering artifacts and build the world is important in itself, no matter whether you're making enough money, or a lot of money, or too much money, or not. And I'm not sure how to address that.
AYUSHI KHEMKA: Here is Dr. White’s answer to the question about the representation issue in tech and whether or not it comes down to a lack of technical skills. This then led into a discussion about how to teach different genders and the ways that people may respond in the classroom.
MARTHA WHITE: For me, the short answer is that it's a systemic problem. I find it a little sad that people would say that it's about a lack of technical skills in women. Although I will say, maybe a little outrageously, that there was some point years ago when I wondered if maybe I wasn't as good at math as some of my colleagues because I was a woman. And that's an absolutely ridiculous thing to think. I was very young at the time, and it's also false: I'm actually very good at math. But I was conflating having exposure and actual training in an area with some kind of inherent skill.
So whenever you run into this, or you ask whether these people have these skills, I think it's dangerous to go down the route that there is some lack in the individual; it's much more likely there's some kind of systemic problem.
And I can definitely see that there is a systemic problem. There's a systemic problem in how AI and computing science are taught. There's a systemic problem in how the rest of society sees the area. I've definitely heard from young women that parents might not push their daughters, for example, to go into these areas, whereas men might be more likely to go into these technical areas.
So there's clearly a representation problem. There's a societal-view problem. I think computing science, for example, doesn't do a good enough job of advertising that it's actually a very diverse field: diverse in the sense of what you can do with it once you've actually studied computing science. You can springboard from computing science into many other fields, like biology or health. It's just a very useful skill to have. And I think we end up making the world think that it's narrower than it is, focused on games, focused only on sitting at a desk and doing software engineering. So it's a systemic problem.
AYUSHI KHEMKA: Dr. White, you just talked about how there's an issue in how AI is being taught, and how computing science is perhaps not advertising itself as a diverse discipline that can be applied in so many different ways. So I want to understand how you, as a professor, deal with these issues in your classes. Are there ways in which you approach these from a pedagogical perspective? For instance, how men might react to certain situations or contexts, or to the basic educational materials, like the books they're reading or the things you're covering, and how people of other genders might react to that. Are there any pedagogical approaches that you take for that?
MARTHA WHITE: To be honest, likely not enough. I'm usually teaching slightly older students. And I think one of the ways that we need to change our pedagogy is likely earlier on, so that we attract more people to the area, retain more people in the area, and make them realize that it might actually be a field for them. So that's more in our first-year courses, and I teach the slightly later machine learning courses.
However, I do at least try to make sure that when I come up with examples of how we're going to use machine learning or AI, I think about practical ones, ones where you could imagine, "I would like to help in health," or "I would like to apply this technology to help with assistive robots," or something along those lines, rather than gravitating towards examples that involve games or that are more traditionally viewed as the ones we might use in AI or in computing science. That's a very, very minor thing that I do.
So in general, pedagogically, I teach machine learning quite like many other people teach machine learning, and I haven't thought enough about how to make it maybe address some of these pedagogical issues.
AYUSHI KHEMKA: Here is Dr. Stroulia speaking about gender and her pedagogical approaches and some of the experiences that are unique to women in this field.
ELENI STROULIA: I teach a course that has students working in teams, and every team solves a different project working with a client. So I get to see quite closely how these teams work together. I meet them regularly, every couple of weeks, and I talk with them about who's doing what, who's responsible for what activities, and I try to learn their backgrounds. And I often see a phenomenon where my male students start by saying they knew programming when they were in third grade; they got a computer and they built a program and they know programming. They have built 3,000 games and they have been programming geniuses since some early elementary grade. And this notion that they're experts when they hit university can be intimidating for women, who may not necessarily have started programming in grade three.
But the thing is that it absolutely does not matter. As a matter of fact, some of our students who have been playing with computers for a long time have very bad habits about what the right thing to do is, and we have to sort of deprogram them. But when you are on a team and you know that everybody around you has been building games since grade three, it's kind of intimidating. And I see many of my women students retreating to supporting roles: documentation, talking with the client, organizing meetings. They don't necessarily fall behind, and on the positive side, some take on project management roles. But they skip the core technical part and do the other things. Sometimes these other roles are recognized by the team as valuable, especially when they involve project planning and project management. But for the most part, the men assign value and credit to the programming, to the engineering, and the women come to understand that they have missed the opportunity to actually become technically competent.
And that's something that I try to pay attention to and forcefully change. If I see a team going that way, I make sure that everybody takes a turn at project management, including the men, who may not want to talk to the clients. And I want to make sure that the women students actually get to do the programming of a module. Sometimes I will talk to my women students individually and try to figure out whether this is their choice or whether they have been pushed into this role. And sometimes I see that there have been microaggressions that led them there, demeaning comments like, "Since you don't have as much experience as I do, it's better for you not to mess with our program."
The other thing that I have seen, on the positive side, is that some of my women students tend to be much more versatile. I remember one particular case of an extremely gifted student who was doing a double major in computing science and English, and she told me that everybody in English thought it was strange that she was doing computing science, and everybody in computing science thought that she was incompetent because she was doing English. What was wrong with her? And she had no problem managing this kind of . . . no matter what, she was strange. She was just wrong, on both sides of campus. But this combination of being able to speak well and with nuance and being able to solve problems and write software is something that is uniquely valuable in jobs afterwards. So she was one of the first to be picked up by a new startup, and she rose really fast through the ranks and has been very successful.
But I find it difficult. It does take a particular kind of thick skin not to be offended by everyone's explicit sense that you are strange. So we see positive and negative influences in this loneliness of being a woman in a mostly male-dominated field, but it sometimes has the positive effect that when there are fewer women, they tend to be noticed and they tend to be sought after, given an equal or similarly relevant set of skills.
Still, I wish that all my students had the same opportunity to experience all aspects of our field so that they're well-rounded and able to choose what kind of career paths they want to pursue after they get out of school.
AYUSHI KHEMKA: Yeah. I really like how you talk about the loneliness of being a woman in a male-dominated field, because I think that's so well put: it is an everyday experience, and it is going to affect what your performance is like, or your productivity, or just the way you yourself feel about your own work. I really like that.
MARTHA WHITE: And it doesn't affect people the same way. But I find that it's not fair to have to be stronger and more robust to this feeling in order to make it. I'm kind of used to it. I've been here for a long time and I'm used to it, but it's not easy when you start. Somehow I find that if you survive the first years and you find some success and you find joy and you find your career rewarding, then all of these things start falling away; they don't matter and they kind of wash over you. But they can be much more impactful when you're trying to find your way in your career and to decide what you like and what you're good at. So this is the tipping point. This can be the tipping point that makes the difference for someone to drop out.
AYUSHI KHEMKA: For my final question to both Dr. White and Dr. Stroulia, I asked them how we can situate gender equity as a guiding principle in AI while keeping different stakeholders in perspective. Here's Dr. White.
MARTHA WHITE: Okay, that's a big question. I suppose you might mean how can we pay attention to gender equity in terms of the people who are building AI and coming into the field, and also in terms of the models that we produce. Maybe you mean both. Maybe you mean the first one more though.
AYUSHI KHEMKA: Yeah, kind of the first one more, but I also wanted to understand, because when we talk about AI, there are multiple stakeholders: people whose lives are going to be affected, people who are building the AI, people who are sponsoring the building of that AI. And then there are governments and everything. And given that there are so many stakeholders, and the field is relatively new, if not super new, and still building, with newer things happening every day, there is just a lot of newness to it. In that sense, how do you situate one principle of gender equity across these stakeholders? Is that even possible? Is that a lot to ask?
MARTHA WHITE: Right. I do think that representation is one of the ways that we start addressing all of these things at different levels. The more we actually get women into the field, women in AI, the more they will also start helping ask questions about how this technology affects other women in the rest of the world. Or if we get more women into the positions you mentioned, political positions that shape policy, then they of course would also have an impact that way.
So there's no doubt that a diverse set of opinions helps us understand how to tackle some of these diverse problems. If you have a very narrow view, if everyone in the room is the same, then they're sitting there trying very hard to think about how their technology is going to impact the world, but they have a narrow view of the world. Having a diverse group there can actually help that group accomplish what it really wants to do, which is to make sure that the technology is used well. So I do think that by bringing more women into the field, we will naturally also start tackling some of those problems of how to make sure that these models are used ethically and to promote equity.
And so then the question is of course, how do we actually get more women to come into AI? And you would hope that over the last few years this problem has been getting better, but personally, again, in my own classes and in computing science, I don't necessarily see more women than I used to see. So I think this is just something that we have to work really hard at. We have to start working harder to recruit students. We have to start asking ourselves, why are there certain students that don't come into our program, and how can we change our program to make it more attractive to those students?
So we are taking some steps, at least at the University of Alberta, and I'm sure lots of other universities are doing the same, where we are asking how we can change some of our first-year courses to be more attractive to those students. We have introduced an introductory data science course inside computing science that we are hoping will appeal to some of these more junior students, who may not want to take an introductory computing science class because they don't really know what that means, but who can see how data science could be useful for whatever program they go into.
So I think we, as a community, are just becoming more serious about this very serious problem. And I think that's going to help us a lot toward gender equity around AI and AI models.
AYUSHI KHEMKA: And here's Dr. Stroulia answering the same question: How can we situate gender equity as a guiding principle in AI while keeping different stakeholders in perspective?
ELENI STROULIA: I think we have to consider it across the complete life cycle of developing systems that include AI. In fact, all technical systems that we deploy to support and systematize processes have to be evaluated through this lens. First, we need to understand the documents, the specifications, the data on which we are relying to build our algorithms. I gave you a few examples: pregnant women could get fired, women could not get credit cards. If this is your data set, then you have to be very thorough about throwing out data. Understanding how the data has been collected, and through what systems and what processes, is essential in deciding whether this data is useful to learn from.
Then we have to ensure that our teams are reasonably balanced. We have to have representation from the people and stakeholders who will decide the value systems that we want to build in. Because when an algorithm learns from data, we have to decide what optimization functions will make this algorithm learn faster. And in effect, an optimization function tells the algorithm that these properties make a result better and these properties . . . Basically, we have to describe functions that will measure the quality of the outcome. So when you decide what is good, what is of high quality, this function in effect represents the values that we have as a society. And it has to be a team of people who consider what values we're going to build into our optimization functions. And then at the end, even if you have done all of this, our algorithms are quite complex, and it's very difficult to know that the outcome will be desirable even if you have done everything right.
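As a hedged sketch of the point that the optimization function encodes values (the penalty term, its weight, and all the numbers below are editorial choices for illustration, not anything prescribed in the conversation), an objective can be written to trade prediction error against a measure of how differently two groups are treated; deciding whether and how heavily to weight that term is exactly the value judgment a team has to make.

```python
# Illustrative objective: prediction error plus a group-disparity penalty.

def prediction_error(predictions, labels):
    return sum((p - y) ** 2 for p, y in zip(predictions, labels)) / len(labels)

def parity_gap(predictions, groups):
    # Largest difference in average predicted approval rate across groups.
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

def objective(predictions, labels, groups, fairness_weight=1.0):
    # What counts as a "good" model is defined here, not discovered by the algorithm.
    return prediction_error(predictions, labels) + fairness_weight * parity_gap(predictions, groups)

preds  = [0.9, 0.8, 0.2, 0.1]   # hypothetical model outputs
labels = [1,   1,   0,   0  ]
groups = ["a", "a", "b", "b"]
print(objective(preds, labels, groups, fairness_weight=0.0))  # accuracy-only values
print(objective(preds, labels, groups, fairness_weight=1.0))  # accuracy plus parity values
```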
So there have to be thorough testing mechanisms, and you have to understand the performance of the algorithm for all the genders, or all the classes, on which you want your algorithm to behave well, including the underrepresented ones. For example, if you had an algorithm that you asked, black or white, and the algorithm simply said black all the time, and you tested this algorithm on a data set that's 80 percent black, your results would look pretty good. You're going to be 80 percent correct, and 80 percent is not something to be disappointed about. But really, you would be 100 percent incorrect on the minority class: the 20 percent that is white, you get entirely wrong. So this kind of testing, taking your algorithm and testing it against a data set that represents the classes you want to recognize, will give you an understanding of whether your algorithm is equally good or equally bad for every class.
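The 80/20 example can be reproduced in a few lines: overall accuracy looks respectable while the per-class breakdown exposes total failure on the minority class. (The numbers below simply mirror the hypothetical in the conversation.)

```python
# Per-class accuracy check on an imbalanced test set.

labels      = ["black"] * 80 + ["white"] * 20
predictions = ["black"] * 100            # degenerate model: always answers "black"

overall = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"overall accuracy: {overall:.0%}")          # 80%: looks respectable

for cls in ["black", "white"]:
    pairs = [(p, y) for p, y in zip(predictions, labels) if y == cls]
    per_class = sum(p == y for p, y in pairs) / len(pairs)
    print(f"accuracy on {cls}: {per_class:.0%}")    # 100% and 0%: the real story
```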
There was a study looking at recidivism among criminal defendants, and the system predicted a much higher likelihood of recidivism for African Americans. And it turns out that its false negative results were much higher for white defendants. So basically it more often predicted that African Americans would reoffend, while its predictions were lower for white people, and its predictions for white people were much more often incorrect than its predictions for Black people. When you see this kind of imbalance in when you're correct and when you're wrong across the different classes, this is when you see that your algorithm is built to prefer, or to behave differently across, different clusters of examples, classes, populations, and that is probably undesirable.
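A similar check can be run on false positive and false negative rates per group. The sketch below uses invented numbers, not data from the actual recidivism study, but it shows how very different kinds of mistakes across groups become visible once the errors are broken down.

```python
# Per-group false positive and false negative rates (invented data for illustration).

def error_rates(records):
    fp = sum(1 for r in records if r["predicted"] == 1 and r["actual"] == 0)
    fn = sum(1 for r in records if r["predicted"] == 0 and r["actual"] == 1)
    negatives = sum(1 for r in records if r["actual"] == 0)
    positives = sum(1 for r in records if r["actual"] == 1)
    return fp / negatives, fn / positives   # false positive rate, false negative rate

group_a = [{"predicted": 1, "actual": 0}] * 30 + [{"predicted": 1, "actual": 1}] * 40 + \
          [{"predicted": 0, "actual": 0}] * 20 + [{"predicted": 0, "actual": 1}] * 10
group_b = [{"predicted": 1, "actual": 0}] * 10 + [{"predicted": 1, "actual": 1}] * 25 + \
          [{"predicted": 0, "actual": 0}] * 40 + [{"predicted": 0, "actual": 1}] * 25

for name, group in [("group A", group_a), ("group B", group_b)]:
    fpr, fnr = error_rates(group)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
# Very different error rates across groups signal the imbalance described above.
```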
So consider your inputs; make sure that the people who build the system represent the values of our society today, because values evolve, and if a data set has been built through processes that represented our values 20 years ago, that's not something we want to codify. We want to represent who we are as a culture today. And then make sure that you test thoroughly, and with proper evaluation metrics, the artifact you have built. And there are quite a few technical solutions, which gives me immense pleasure: technical people are working on the testing side and also on statistical analysis of the input data to notice these kinds of skews in your inputs. So we're becoming much smarter as technologists at understanding and teasing apart these nuances. But I think it's also important to have domain stakeholders who will tell us exactly which classes we need to pay attention to. It used to be male, female. Now we understand gender much more, but maybe we also have socioeconomic and representational differences, and this kind of understanding of the intersectionalities that we need to pay attention to will come from domain experts, health experts, sociologists, lawyers. They will tell us which are the right classes to consider.
AYUSHI KHEMKA: With that, we have come to the end of the AI for Information Accessibility podcast. I would like to thank our partnering institutes, who helped us not only in organizing this podcast but also with our annual AI4IA Conference. I'm thankful for the support offered to this series by AI4Society and the Kule Institute for Advanced Studies, both at the University of Alberta; the Centre for New Economic Diplomacy at the Observer Research Foundation in India; and the Broadcasting Commission of Jamaica.
I hope you all had a good time listening to this series. Thank you.