
Mo Lewis, NSVRC Prevention Coordinator, contemplates AI and its ethical and functional place in the sexual violence prevention movement.
As I was talking with a friend last week about ChatGPT and other forms of AI large language models, she said something that has stuck with me – “I think the train has already left the station.”
My online feeds are simultaneously filled with some people saying "I asked ChatGPT" to answer a question and other people sharing stickers that say "using AI is loser behavior," encouraging everyone to use their own brains.
I come from a generation that was taught strict rules about choosing reputable sources and citing them correctly; at that point, online sources were only just beginning to exist. That internal practice of scrutiny has stayed with me, especially as new technology rolls out faster and faster these days.
These experiences have undoubtedly shaped my current views, as has my work in sexual violence prevention. For years, my prevention work focused on media literacy and digital safety, and the conversation has since expanded to include terms like deepfake pornography and sextortion. But even as AI is being used to hurt people, is there a way to use it to help people? I think that is what we need to examine in the current moment.
We had a discussion at NSVRC a few weeks ago about organizational use of AI and watched this webinar hosted by Safe States. It was a great place to start, covering how AI models work, the limitations and risks of the tools, their potential uses in public health work, and the ethical concerns to weigh before incorporating them into our work.
Our work also involves collecting data and making sense of it, so is there a place for AI there? Stephanie Evergreen wrote a blog post about AI and data visualization describing how limited current AI tools are at drawing meaning from data sets. It turns out they are not a replacement for our own discernment and expertise.
Many of the concerns and questions about the secure use of AI remain largely unanswered, which matters in a field that places great importance on confidentiality, consent, and safety. Some organizations have encouraged avoiding the built-in AI tools offered in programs like Zoom and Teams for meetings where confidential information may be shared.
RALIANCE recently published a blog post with considerations for the anti-sexual violence movement, highlighting ways that AI tools carry racial and gender biases, share incorrect information, and struggle to identify harmful behavior. I have encountered instances of abled people touting AI as something that can "create access" for people with disabilities without taking into account disabled people's knowledge of how AI tools actually perpetuate biases and lead to increased discrimination in employment and healthcare.
These tools are built and "taught" by people and require large amounts of data. But what are the harms of that? These two reports delve into the exploitation of workers in Brazil and Kenya who are made to sort through immense amounts of sexual abuse images and other violent text and imagery for unfair wages, all to build and refine AI tools. And right now, unpermitted methane-powered gas turbines used to power the AI chatbot Grok are increasing air pollution in South Memphis, TN, a predominantly Black area already facing heavy industrial pollution.
If our goal is to create healthy, safe, and connected communities in order to prevent sexual abuse, harassment, and assault, where do tools that are created through the exploitation of others fit into that? We can pause to learn more and to make sure the tools we are using are not perpetuating the harms we want to prevent.
We can still make decisions about our individual and organizational use of AI tools. The train has not yet left our station, and we owe it to this movement's past and future to make careful, educated decisions about whether and how we use these new technologies.
