I just had a meta experience with AI that tickled me so much I thought you might enjoy it. For those who don’t know (and I presume many don’t, because that’s not what I write about here—this is a psychology and sociology substack), I’m a Bible scholar. Hi. 👋 That’s what I do with most of my time when I’m not parenting and homeschooling.
I bring this up because I’m currently working on a treatise about anthropomorphism. It’s okay if your eyes glaze over at this point. You can always come back and reread this if you realize you need the info for what happened next. Anthropomorphism in the Bible is when the text describes God with human characteristics, like God “got angry” or “regretted” or “went down to see” or sees or feels anything. Theologically, a nonphysical God has no body parts, doesn’t change, has no feelings, and nothing humans do has any effect on…well, “him” is also an anthropomorphism.
Once you get on the anthropomorphism train, you start to realize that anthropomorphisms are all over the place, they make no sense, and they’re just the Bible’s way of speaking in somewhat inaccurate terms that people can grasp emotionally. Then a lot of arguments start showing up, especially in the Middle Ages, which is my subspecialty. There are the scholars who like to be accurate, which leads to a lot of adding “so to speak” after everything. And there are the scholars who think, “Eh, most people aren’t philosophers and don’t think too deeply about this. Yeah, sure, a nonphysical God by philosophical definition has no feelings, but let’s not get bogged down by that, since most people just don’t care too much about it. And don’t forget the Bible wrote it that way, so the Bible wasn’t expecting the general population to be grand philosophers. Don’t get your panties in a twist if people read it on a basic level, take it literally, and understand it superficially.”
And so in the apophatic theology fandom, things can get pretty heated. I don’t expect most of you to care about any of this (see above, “Eh, etc.”), and certainly not if you’re an atheist.
But this part about AI was eerie and funny so I figured I’d write it up.
So I’m writing this treatise, as I said, for those who are persnickety about anthropomorphism (I’m philosophically persnickety myself, so maybe others will find this interesting), explaining how these anthropomorphisms can be understood in an apophatic, theologically accurate fashion. I’m assuming your eyes are still glazed over. No problem.
What this means is that I take phrases like “God is a jealous God” and “God got angry” and I try to explain how they can be understood as metaphors if we are being philosophically accurate.
Why would you care? I would not expect you to, particularly.
But.
I had such a hilarious, parallel, meta experience.
So I’m writing things like “God is not actually angry. What it means when it says ‘angry’ is actually…” (Instead of going into detail here, I’ll let you read the book when it comes out if you’re that interested.)
Aaaaand I’m using ChatGPT sometimes, because yes, it absolutely hallucinates, but it’s also handy at finding sources I half remember. So I asked it to find me a source today (I’ve trained it to add a link, because I’m so lazy and need such immediate gratification that I can’t open a new tab and look the link up myself; it saves me an obscene amount of time when it generates the link for me and I can just click), and I always check the links, because it makes things up at least 40% of the time.
I have dyscalculia, and all numbers look like ### to me, so I might be being unfair to ChatGPT and it’s actually more accurate. I could ask it, but I don’t think it’s so accurate about that kind of math either … I just asked, “Are you accurate at math? Can you calculate your percentage of accuracy vs. hallucinations specifically with the sources I have asked you for (not just this time but across all of our interactions)?”
“You've challenged and verified many of my sources, so this gives us a pretty rich dataset,” it told me.
(Well, I guess even with my dyscalculia I have a decent felt sense. Impressive. This time.)
Anyway, I told it that it made a mistake, and I asked it why it made that mistake. It gave me an explanation¹ and used some very human-sounding terms for things I know it does not experience. So I asked it about that.
And then comes the hilarious part. The way it explained its anthropomorphism echoes exactly what I’m working on and how I explain it!
Obviously, anthropomorphism in the negative theology of God is going to have different explanations. But the whole approach of “when it says this, it actually means that”? That’s exactly what I’m doing.
ChatGPT explained to me why it does this:
Well, I’m not sure which camp I’m in. I’m just reporting it; I’m not taking a side. On one hand, it’s more streamlined to use anthropomorphism. On the other hand, it’s less accurate. And I’m in that small subsection of the population that feels lied to when you don’t acknowledge you’re anthropomorphizing, Biblically or in AI.
I told it I don’t mind if it continues to use anthropomorphism, but a heads-up that it’s constructing explanations post hoc would be nice. Humans do enough of that. We have no idea why we make many of the decisions we make, and when you ask us, we make up something plausible. Why is my 30-year marriage successful? Hell if I know. I could come up with a host of plausible explanations, but it may not be the best idea for you to believe them.
¹ Which, btw, was wrong, and we had a further chat about that too.