Misreading Scripture with Artificial Eyes

I asked ChatGPT to interpret the Sermon on the Mount. Here’s what I learned about AI’s exegetical errors.

This piece has been adapted from an article that was originally published in the Biblical Mind.

In the past several months, it has been difficult to avoid discussion of artificial intelligence or, more particularly, ChatGPT and a host of other chatbots developed by technology companies. Given the popularity of the topic (and the often hand-wringing tone) in higher-education and technology reporting, I decided I needed to see what ChatGPT would say about the Bible.

Specifically, I wanted to explore how ChatGPT interprets the Sermon on the Mount. I did this for the sake of the young undergraduate men I mentor—especially since our group is convinced the sermon is meant to be followed and we are committed to living it out in our everyday lives.

In my conversations with the bot, I was struck by the fact that ChatGPT holds up a mirror to the North American church, as well as to the broader Western scholarly community, by sharing three major shortcomings with us as we have been shaped by the spirit of our age:

First, ChatGPT metaphorizes and individualizes Scripture without warrant or any clear method for when and why, often in direct contradiction to the text itself. Second, the bot’s interpretations are ignorant of the interpretive traditions that produced them. Third, because the bot is disembodied, its interpretations are necessarily disembodied—and thus the bot is unable to recognize the embodied realities of Scripture and interpretation. Each of these tendencies in AI’s responses reflects, in some way, a historic weakness in our own human interpretation.

When I asked ChatGPT, “How should we interpret the Sermon on the Mount?” the chatbot spat out an expected definition, including …