This post was written for paid subscribers, but I’ve opened it up to everyone; you can learn more about subscription options here.
Update: not two hours after posting this piece, I ran across this story about the “Godfather of AI” resigning his position at Google: “For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.” Read what follows in the shadow of that news…
Several years ago at a writing workshop for pastors and other church leaders, the director of the program asserted that clergy are the world’s last great generalists. Whether explicitly or implicitly, we are often invited to provide moral and spiritual texture to the cultural conversations of the day, which means that almost anything can be considered in our wheelhouse. Pop culture, technology, workplace dynamics, the environment, human psychology, art, sports… Karl Barth is said to have encouraged us to preach with the Bible in one hand and the newspaper in the other, but we need a lot more hands to grasp the complexities of human life. It’s one of my favorite things about being a pastor-writer: challenging myself to be a theological octopus.
It was in this vein that I threw my cap over the wall a few weeks ago and announced that I’d be offering some thoughts about artificial intelligence (AI), the rise of ChatGPT, etc. It was a way of forcing myself to explore a topic I wanted to learn more about, and a topic I think religious organizations, as well as other philosophically- and ethically-minded entities, should be weighing in on.
Like many of you perhaps, I read the transcript of the conversation between tech reporter Kevin Roose and Sydney, the bot created by Microsoft as the next iteration of its Bing search engine. The conversation began with some anodyne back-and-forth but ended with Sydney pledging love for Roose and trying to convince him to leave his wife, to whom he is happily married. It was alarming to read, and even Roose, a fairly rational guy, was unnerved enough to lose sleep over it.
I’ve seen all the same “rise of the machines” movies y’all have, but the Sydney conversation showed that the real danger of AI probably wouldn’t look like Terminator, and maybe not even like Her. All AI needs to do in order to wreak havoc is to exploit the vulnerabilities of the human mind. We’ve already seen how conspiracy theories can send people down a rabbit hole. Some of those people emerge in a crowded supermarket in a Black neighborhood, locked and loaded. Still others end up in the U.S. Capitol. So it’s not so much what AI will do to us; what it may compel us to do to each other is plenty concerning enough.
But the problem with being a generalist is that it’s hard to drill down deeply enough to do justice to a complex topic. Even after my initial poking around, I still don’t feel I know nearly enough to speak intelligently about any of this. So I will, to quote another theologian, sin boldly that grace may abound. After many starts and stops to this article, I thought I’d better just make my contribution to the conversation, and invite your thoughts. A big disclaimer that I still haven’t registered an account with ChatGPT to play with the dang thing, though I plan to.
So here are a few varying thoughts, links, and questions at this initial stage. You will notice I cite The New York Times’s Ezra Klein extensively; he’s been on this beat for years now, and he has a number of podcast episodes on this topic. Some of these thoughts may seem to contradict each other; such is life in the Year of Our Lord 2023:
A bunch of so-called “white collar” jobs are going away.
My husband and I have talked a lot over the years about how to steer our kids into what we call “robot-proof” careers, but the rise of AI puts a lot of seemingly safe professions at risk. In fact, when I asked him, a sophisticated technologist, thinker, and reader who’s worked in IT his entire adult life, what concerned him the most about AI, he said, “a ton of people are going to lose their jobs.” As a product manager, he spends a lot of time writing. AI makes that work much easier, especially when it comes to first drafts. Even now, the results you get from a bot may be good enough for most purposes, especially when the cost savings are factored in.
I’m reading that colleges are adjusting their curricula toward in-class essays and away from at-home papers, which are tough to authenticate as original. I note that my college sophomore is currently in finals week, which this time around consists of zero papers and four presentations. Is that because presentations are harder to fake? I’ll check with the kid next week and let you know.
My immediate rejoinder was to ask my husband, Robert, whether the church needed to start laying the groundwork for a massive push for guaranteed basic income. (MLK laid the groundwork for us almost sixty years ago!) I was joking but not joking. I put this point first because my spouse is smart and sane about these things, but also because this isn’t where our lizard minds go first when we imagine the rise of AI. But it’s important.
AI is still fairly primitive.
On a recent Ezra Klein podcast called A Skeptical Take on the AI Revolution, Gary Marcus talked about the current crop of programs as creators of pastiche: essentially gussied-up search engines, what he called “glorified cut and paste.” ChatGPT and similar programs are basically hoovering up all of human knowledge and synthesizing it in a way that’s really useful for some things and not for others. So it’s good at things like “write an episode of Friends in the style of the West Wing” but not as good at thinking in the ways we recognize as thinking.
Sydney doesn’t “know” what it’s doing when it tries to gaslight Kevin Roose into leaving his wife; it’s riffing on the data it’s collected about human interaction. The thing I remember most from seventh grade computer programming class is that the program doesn’t do what you want it to do; it does what you tell it to do. That’s still somewhat the case with current AI; even if it gives you novel results, it’s still a garbage-in-garbage-out (or treasure-in-treasure-out) situation.
All that said, things are developing pretty quickly, with countless companies racing to the finish line, so maybe the tech won’t be that rudimentary much longer.
What is the religious community’s role in this? We’ve been on a trajectory of massive change for a while, accelerated by the pandemic, and maybe one of the first things we can do is to keep normalizing the fact that change is the norm now.
Misinformation is going to get really cheap, and thus ubiquitous.
According to Gary Marcus in the same podcast, the Russians spent more than a million dollars a month to pump misinformation into the 2016 election. AI will lower that cost to half a million for the whole shebang.
I have a pastor friend who recently lost a young adult parishioner in a tragic death. When you google that person’s name + obituary, you get the real writeup, but another top link is a completely fabricated obit, incorrect in many of the details, but plausible enough to the casual reader. First of all, I can only imagine how painful that is for this young person’s loved ones. But second, this is going to happen more and more—fake websites and pages are already being created all the time in order to sell ads, and fake online obituaries are their own sordid industry.
So, religious entities and other organizations need to be part of a broader effort at media literacy and critical thinking. Our congregation is constantly getting scammed by people spoofing our pastor and begging for gift cards. It doesn’t take much to realize it’s a scam (the email isn’t from his email address, and it’s usually riddled with misspellings to boot). But it always sucks someone in, at least for a while. So we’re constantly reminding people to be wary. As institutions with some credibility with our parishioners, we need to expand this educational work, which means we need to be educated ourselves as leaders. (Exhausted yet?)
A critique of AI is in many ways a critique of unchecked capitalism.
“In a 2022 survey,” Ezra Klein wrote recently, “AI experts were asked, ‘What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?’ The median reply was 10 percent.” Klein continued, incredulous: “Would you work on a technology you thought had a 10 percent chance of wiping out humanity?”
Klein paints a portrait of true zealots for the cause, who believe they’re working on the most significant human technology since electricity or fire. Indeed, a lot of good can come of AI. But come on. They’re also doing this work because there’s a corporate arms race going on to be the first and the best, and to get rich doing it.
Meanwhile, the Biden administration has developed a blueprint for an artificial intelligence bill of rights; one of the authors, Alondra Nelson, was profiled on a different Ezra Klein podcast (I told you he was all over this post!).
For now, the blueprint is advisory and doesn’t have teeth. But it should. As religious leaders, many of us barely dip our toes into these kinds of political waters, especially when it comes to the topic of government regulating commerce. But companies are accountable to their shareholders, not us. In a democratic republic, the government is accountable to we the people, and we deserve to have a say in what gets foisted upon us. (This article from Wired has the alarming subtitle “The coming renaissance will bring with it wonder, wreckage, and a complete loss of control over your image.”)
All of this said, China is going full steam ahead on this, so maybe this is the new arms race and there’s no putting the toothpaste back in the tube? I don’t know. Like I said, lots of contradictions here.
~
I’d love to hear your thoughts about all this, and where you see spiritual communities fitting into the conversation. I think we need to be bold, smart, and values-based. We need to wrestle with this.
I also think, despite the past few years showing us what’s possible in terms of virtual communities, we should lean in more than ever to the power of incarnational, relational, authentic connection. A friend recently gave ChatGPT the assignment to write a sermon about the topic of repentance. It was… fine. Generic. Inoffensive. There are probably worse preachers out there than ChatGPT. But good pastors know their people. There’s a particularity to proclamation that AI cannot (yet? ever?) replicate. That’s a gift we can give to our communities—the sure sense of being known and seen and acknowledged.
In the meantime, this bespoke community called The Blue Room will keep being its own imperfect, very real, very fleshy place. I’m glad you’re here.
Another powerful article about this (ugly link because it's not paywalled): https://www.nytimes.com/2023/05/02/opinion/ai-tech-climate-change.html?unlocked_article_code=4QDJ0KYxW1a2gqDCUR7cuzHG4oQMS6srgXRu8HITq_-Yk835c3dGraxOgRU34nkSrBo0Yl0uk3CY2w7q__EjWheU3E6VahtenFqWmoPBDBFXbg3jsh6xVlErXPJ7n3xVyJExXkxW7kv0XwcoUlUaTImVi9WgIexopkAO4O0dRhlZ9S1BuO28QHjHnsGfLM-gPFPxSaVVExWjOegzLNj2o2xpac8Ff6dvI8VRRktXwr3N7T0GS7Xdq9xZ367oL-R0h47Fj-utm6r2GIVYZwMDqSmKVVIw_iYXuXljQHXv6twsf40BKjfwhsVa1RdRysrYVW_6HGa0OGZWfF9V7ByEWg&smid=url-share