AI Literacy
AI had already entered mainstream discourse by the time NPQ took up the topic in its Winter 2024 issue, but that issue is what finally got me to give it my attention, however grudgingly. Tonie Marie Gordon’s interview with AI ethicist Shannon Vallor is just one of the articles that had me scribbling in the margins (see below), but it’s the one that stuck with me. I immediately requested The AI Mirror from my public library, read it in April/May, and found it an eye-opener. Its core premise: AI shows us not the future but our (past) selves and what we already (think we) know.
[Image: Excerpts from the NPQ interview with Shannon Vallor, showing the author’s marginal annotations (“DAMN, SHE’s good,” “THIS”) with arrows, underlines, and circles.]
Before continuing, it’s important to clarify terms: AI is not one thing. AI is a catch-all term for a whole spectrum of computerized pattern-recognition and predictive technologies, from the Netflix “You might also like” menu to the algorithms controlling our social media feeds to large language models (LLMs) like ChatGPT. Lumping these together obscures the fact that they’re designed and deployed vastly differently, with some task-specific AIs narrowly focused on calculations they do exceedingly well, and others operating a bit like modern oracles, promising to grant us answers if only we learn to ask in the right way.
This was a key takeaway from the American Evaluation Association webinar “Who is afraid of … AI?” (June 2025) with Edge Hill University’s Axel Kaehne: LLM chatbots can be used in all phases of research and evaluation, provided the appropriate supports are in place. Our workshop cohort used queries to analyze sample research papers in real time and discovered how wildly our results could differ from one another’s. Kaehne emphasized the importance of iterating queries and “talking to” the AI to improve the relevance of results. He also shared an example of a multi-page set of parameters to feed the AI as instructions to ensure effective analysis. So when we ask AI to do a textual analysis, we’re delegating that cognitive load so we can take on the task of analyzing the AI’s analysis? How… Meta (*looks into the camera*). Maybe it’s just me, but it’s hard to get excited about a technology that needs as much training as a new puppy squaring off to pee on the carpet.
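For anyone curious what “iterating queries” and “feeding the AI instructions” actually look like in practice, here’s a minimal sketch using the OpenAI Python SDK. To be clear, this is my own illustration, not Kaehne’s actual setup: the model name, the file name, the standing instructions, and the follow-up prompt are all placeholders.

```python
# A minimal sketch of iterative querying with standing instructions,
# using the OpenAI Python SDK. Model name, file name, instructions,
# and prompts are illustrative placeholders, not Kaehne's parameters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical sample paper saved as plain text.
paper_text = open("sample_paper.txt", encoding="utf-8").read()

# The "multi-page set of parameters" becomes a system message that
# constrains every response in the conversation.
messages = [
    {
        "role": "system",
        "content": (
            "You are assisting with qualitative analysis of research papers. "
            "Quote passages verbatim, note where each claim appears in the "
            "text, and flag anything you cannot ground in the supplied paper."
        ),
    },
    {
        "role": "user",
        "content": "Summarize the methodology of this paper:\n\n" + paper_text,
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# "Talking to" the AI: append its answer plus a refining follow-up,
# then ask again. Relevance tends to improve over these iterations.
messages.append(
    {"role": "assistant", "content": response.choices[0].message.content}
)
messages.append(
    {"role": "user", "content": "Too general. Focus only on the sampling strategy."}
)

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

The point of the pattern is that the system message persists across turns, so each refinement builds on the standing constraints instead of starting from scratch, which is exactly the kind of scaffolding (and babysitting) the webinar described.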
My point isn’t to trash talk all AI (just most of it, LOL). AIs designed to handle massive calculations for a specific need are said to demonstrate strong fidelity, quality, and accountability. However, the data-thirsty LLMs and image generators now being popularized by OpenAI (ChatGPT), Google (Gemini), and others are not, in fact, designed to do many of the things people are looking to them to do. For example, they perform poorly as search engines. They are patently dangerous as sources of health advice. They are not the “everything” machines we might like or expect them to be.
As much as I love a good Sarah Connor meme with the fire of a thousand suns, and although we (and others) misuse these tools to our very real peril, the greatest danger of AI isn’t a Terminator-style Rise of the Machines but the rise of the broligarchy. At the same time that Google, Meta, Amazon, and others are falling over themselves to suck up as much of our data as possible to train their AIs, their CEOs are sucking up to the Neo-fascist in Chief. Tech innovation is serving the concentration of privilege and power, shaping an AI-powered New Gilded Age in which a few seek to consolidate control over not just knowledge and narrative but also natural resources, human labor, and our futures.
I’ve not yet read Karen Hao’s book Empire of AI (the library is processing my request), but webcast interviews make it clear that she sees, in big AI, a digital colonialism. Many have noted that the way LLMs are built and trained is inherently extractive and exploitative, and that the algorithms powering them are opaque (there’s literally a term for it: algorithmic opacity). But Hao also observes that the AI industry positions its products as irresistible and inevitable (you’ve heard the argument: “if we don’t all learn how to write prompts, our careers will end in tears”), in the same way that colonialism and empire assert their own inevitability to convince us that resistance is futile.
As if it weren’t bad enough that mainstream AI amplifies power imbalances (though I’ll acknowledge that Indigenous, anticolonial, and ethical AI efforts are seeking to change the game), there’s also the question of electrical power. For me, the bottom line has to be the planet. The water. The air. The fact that communities are being sacrificed to data centers, and that the race for more “efficiency” through AI consumes such exorbitant amounts of energy that proponents are clamoring to literally go nuclear. Now I’m reading that OpenAI’s Sam Altman is saying “Let’s just put it in space!” This man is unserious.
Unless he IS serious… and that’s a darker philosophical topic, one that gets into transhumanism. (I’m going to go off on a little tangent here; care to join me?)
In the meantime, back here on Earth, the novelty of chatbots, the convenience of digital transcripts, and the instant gratification of summarizing large amounts of text have made AI appealing to many nonprofit professionals, even as some take more circumspect approaches or say, “no thank you.” For an example of the latter, I recommend Berlin-based feminist futures lab SUPERRR’s policy statement, “About AI and Unlikelihood” (which summarizes much of what I’ve attempted to say here, far better than I’ve managed).
Like Vallor, SUPERRR contends that, in large part, “generative” AI isn’t generating or telling us anything we don’t already know. Data-hungry LLMs voraciously scrape books and scholarly articles without consent, stealing our brains and selling them back to us with a side order of state surveillance and environmental destruction. It’s an innovation offering nothing new. We know how that story goes. What we need isn’t a mashup of probabilities. As SUPERRR states: “We are and we want the improbable.”
Or, as I wrote in my real-life notebook after reading The AI Mirror:
“In a world where we’re increasingly letting AI tell us different permutations of what we’ve already done before, I want to be one who asks: What is possible that isn’t recycled from our past but is utterly creative and liberatory?”