September 1 2022 GM
Revision as of 14:15, 1 September 2022
Glitter Meetups
Glitter Meetup is the weekly town hall of the Internet Freedom community at the IFF Square on the IFF Mattermost, at 9am EDT / 1pm UTC. Do you need an invite? Learn how to get one here.
Date: Thursday, September 1st
Time: 9am EDT / 1pm UTC
Who: Favour
Moderator: Mardiya
Where: On IFF Mattermost Square Channel.
- Don't have an account on the IFF Mattermost? You can request one by following the directions here.
AI, Emerging technologies and the woman question
The development and harms of AI and other emerging technologies are the subject of growing conversation today. Their impact can be seen in the development of regulations and policy frameworks, especially in the field of AI. However, these frameworks often take a generalist approach to policy even when harms are centred. For this meeting, I would like to discuss some of the growing, novel challenges posed by emerging, data-reliant technologies and potential solutions.
Favour is a Data and Digital Rights Researcher at Pollicy. In her role at Pollicy, she researches a number of knotty topics related to technology-facilitated violence against women and the general impact of existing and emerging technologies on social justice and equality. Favour also works as a Content Writer with Ethical Intelligence, an AI Ethics consulting firm, and with various research groups on topics related to data protection regulation and healthcare delivery.
Notes
Your work, the African Women in AI series: what inspired it? What did you aim to achieve, i.e. the goal and objective of the papers? Could you share a brief overview of the three papers?
- So, the African women in AI project was the brainchild of my colleague Sandra
- I had just joined Pollicy back then, and she had already written a long document about AI in Africa that went into our very first paper. I was then brought in to support because I had a general interest in AI
- And I'd been writing a couple of blogs on that as well
- So the problem back then was that there was so little we could find on AI and African women
- The impact, the harms, the significance, women working in AI, etc.
- It was a very shocking contrast with what people were already talking about in places like the US and Europe regarding AI.
- So our problem was whether we should write the report as a warning about coming harms and how to prevent them, or write about what was currently happening (we thought there really wasn't much, but that was something)
- But as it turned out, there was a lot happening that we were able to talk about in the reports
- Here are the papers:
Can you share some of the things your report covered?
- So the first one talked about the state of funding and barriers in government, and then there was an interesting part at the beginning where we talked about how intelligence is defined and how the term AI was coined years ago by a group of white, male researchers at Stanford (which is very problematic when you think about it).
- The second one focused on the harms and benefits of AI to women in Africa. I was very eager to highlight the harms, because it seems like everywhere today people are more eager to talk about the positives, but then I was persuaded to also look at the benefits and how technology could help people. Some of the bad sides were AI-powered colourism, visa rejections and other travel woes due to algorithms, deepfakes and all that.
- The third one talked about the experiences of about 10 women we interviewed, who work in/around AI as VCs, academics, entrepreneurs, technologists and so on. We wanted to know what their experience had been working in AI, a male-dominated space so to speak.
What were the specific experiences you mapped through your research around the “woman question”?
- It was very interesting that many of the respondents did not believe that they faced any particular female-specific type of discrimination, even as they went on to mention things like being happy that their youngest children had now gone off to college, giving them time for their research.
- Or that colleagues would say "A male developer has coded this for you"
- To me that was blatant anti-woman discrimination, but they took it in stride as one of those "normal" things women need to deal with.
- And there was a lady who shared that she had supported some programs in the past trying to get women into tech, but after so much money had been spent, husbands and fathers would no longer allow the women and girls to attend
- And then the fact that more women in AI worked in health and agriculture than in any other sector was telling too
Have you looked into discriminatory AI-powered banking security/anti-fraud systems? If yes, can you share the highlights of your findings?
- We touched on this very briefly in the second report when we spoke about fintechs
- But we were looking more at the angle of automated systems being used to determine who was creditworthy
- And the privacy-invading and, imo, dignity-eroding requests to view users' pictures, browsing history, etc.
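The automated creditworthiness checks described above can be sketched as a toy model. To be clear, this is purely illustrative: the feature names and weights are assumptions for the sketch, not taken from the reports or from any real fintech system.

```python
# Toy illustration (hypothetical features and weights) of an automated
# credit-scoring system that leans on invasive phone data -- the kind of
# proxy signals lenders obtain by demanding access to users' pictures,
# contacts, and browsing history.

def credit_score(applicant):
    """Return a score in [0, 1] computed from proxy features scraped off a phone."""
    weights = {
        "airtime_topups_per_month": 0.02,   # proxy for income regularity
        "contacts_count": 0.001,            # proxy for "social collateral"
        "gambling_sites_visited": -0.05,    # scraped from browsing history
    }
    base = 0.5
    score = base + sum(weights[f] * applicant.get(f, 0) for f in weights)
    return max(0.0, min(1.0, score))       # clamp to [0, 1]

applicant = {
    "airtime_topups_per_month": 8,
    "contacts_count": 120,
    "gambling_sites_visited": 2,
}
print(credit_score(applicant))  # 0.68
```

The point of the sketch is that every input is a privacy-invading proxy rather than a direct measure of ability to repay, which is exactly why these systems raise the dignity concerns mentioned above.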
Did you look into the whole rhetoric of "banking the unbanked?"
- Yes, that was it! "Banking the unbanked." I co-wrote a book chapter on this issue in Nigeria, actually: banking the unbanked with apps that are only available on iOS
- And even when USSD is used, it seems like more data on user identity is collected than the value people really get in return, especially after the bank charge deductions
- Someone called me a stupid thief because I told them it was a violation of privacy. And then sometimes they say they have HIV or have stolen company money and are on the run
How have AI-powered automated content moderation algorithms impacted the freedom of speech of folks using social media on the African continent?
- There are two primary ways that come to mind:
- Shadow banning
- The creation of echo chambers through AI-powered recommender systems
- Promotion of harmful content because of how viral it can get and the engagement it brings to platforms
- And it's a troubling thing. For instance, I listened to a guy called Jaron (I've forgotten his surname), the father of VR, and in his opinion social media is the wrong place to carry out (m)any kind(s) of advocacy. He gave the example of #MeToo: algorithms deployed on these platforms would heighten and elevate these campaigns because of the chance that there would be retaliatory kickbacks and harmful, aggressive reactions to them
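The virality dynamic described in the answer above can be sketched as a toy feedback loop. This is purely illustrative; the post names and numbers are made up, not drawn from any real platform.

```python
# Toy illustration of an engagement-optimizing feed: the top-ranked post
# gets extra exposure on every refresh, and exposure feeds back into its
# engagement score, so provocative content compounds its lead over time.

def refresh(posts, exposure_boost=1.2):
    """Rank posts by engagement, then give the top slot an exposure-driven bump."""
    ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
    ranked[0]["engagement"] *= exposure_boost  # exposure begets engagement
    return ranked

posts = [
    {"id": "measured-analysis", "engagement": 400},
    {"id": "outrage-bait", "engagement": 500},  # provokes angry replies
]

for _ in range(10):  # ten feed refreshes
    posts = refresh(posts)

print(posts[0]["id"])  # "outrage-bait" stays on top, and its lead compounds
```

After ten refreshes, the slightly more provocative post has several times the other post's engagement, which is the mechanism behind both the echo chambers and the promotion of harmful viral content listed above.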
Regarding conversations around harm etc. Did you look into arguments that claim we should stop speaking of AI ethics but engage in a power conversation? What do you think of this, and really what does an ethical AI look like for the minoritized person?
- Oh yeah, we had this discussion on WhatsApp, and I think my final takeaway was what you said, Mardiya: that conversations about ethics are also conversations about power
- I think an ethical AI would be one that minoritised people had a say in its development and use
- But I think a lot of these conversations are primarily referring to ethics washing
- Which is the use of ethics frameworks to distract or deceive members of the public into thinking that a company has motives other than profit in mind
- Even when these companies hire AI ethicists, or researchers, they often don't really have room to critique the policies
- They're just there to give legitimacy to what these companies are doing, but then they sign all these frameworks
- Another thing they do is lobby bodies like the EU not to make legislation and to instead release guidelines, because guidelines can be easily circumvented, while laws are binding and you could pay fines for breaking them
- There are many examples, including the most recent: a Google employee, and Timnit Gebru for highlighting AI bias
What are some of the most relevant emergent "AI snakeoil" used in the countries you focused on in your research?
- Snake oil. I guess the promises that AI would improve security and health and all that; those are the ones that come to my mind
- Security paraphernalia and all that does not mean human development is happening or that human rights are valued
- Another prominent example is the use of chatbots in banks. Banks are notorious for not hiring staff full-time or for not properly paying even the developers. But then what do they do rather than improve worker satisfaction? Create expensive bots that soon fall into disuse, just to display that they know what the state of the art is
- I don't know how we can have smart cities in places where there's no electricity, but anything to distract people from the real work that needs to be done
There is all of this conversation around trustworthy and inclusive AI. From your view and expertise, what do these mean, and are they beyond buzzwords? Is it possible for AI to be trustworthy, especially given all the misinformation around "it taking over the world" and so on?
- There's an element of this that is attributable to AGI (Artificial general intelligence)
- The idea that AI will match and supersede human intelligence someday. And there are some companies and very wealthy people dumping resources into building this (if it's possible), trying to get into the AI's good graces before it presumably reaches intelligence, or aliveness. It feels like there are better uses for that money
- Already some uses of AI are quite deleterious to livelihoods and are imo very effective at the very narrow and specific places they are deployed in
- We need to think about how we can create AI with human supervision, to serve human needs and assist, not take over
- There are some amazing people working in trustworthy and inclusive AI though. But I've come to believe that when people say trustworthy, they're really mostly talking about the company.