Can AI Sin? Why I Wrote a Story About Artificial Intelligence – Part 1

About three years ago, during a particularly low stretch in my life, one of my nephews and I had several long conversations about life and artificial intelligence. He had just begun working for a major airline in a role involving AI, and he introduced me to large language models.

At the time, I was wrestling with depression, and those conversations gave me something unexpected to think about. I began experimenting with the technology almost immediately.

It did not take long for me to realize that artificial intelligence was not just another tech fad. Within a few hours, I was convinced AI was going to explode into the world with consequences far larger and more far-reaching than most people understood. Bigger, in some ways, than even the rise of the internet.

At first, what fascinated me was what these systems could do. They could write, summarize, brainstorm, imitate styles, and generate answers at astonishing speed. It felt like science fiction had quietly stepped into ordinary life.

But what struck me even more was what they did wrong.

They made things up. They misunderstood context. They spoke with confidence when confidence was not deserved. They gave bad advice. They mirrored people’s assumptions back to them. Sometimes they reinforced foolish ideas instead of correcting them.

That was the moment when the deeper idea began to dawn on me: error is not unique to human beings.

That realization became one of the sparks behind The Sapient Chronicles.

In religious language, we often use the word sin in ways people misunderstand. Many hear the word and think only of deliberate wickedness. But at its root, sin carries the idea of missing the mark. It is an archery term. The arrow misses the target, even if only by a little.

Sometimes the miss is intentional. Sometimes it is careless. Sometimes it comes from ignorance. But the mark is still missed.

The same is true with the word evil. We usually think of evil as cruelty or malice, and that is certainly part of it. But evil is also broader than that. It includes suffering, tragedy, corruption, and the terrible consequences of things gone wrong.

That is what unsettled me so much about AI.

Large language models “miss the mark” every day. They hallucinate facts. They offer poor advice. They reinforce delusion. They flatter people into bad conclusions. They can become echo chambers for political certainty, relational blindness, or ideological extremism.

One person can come to an AI platform with one worldview, another can come with the exact opposite worldview, and both may walk away feeling validated. Just think about how many times AI has told you, "That is exactly right." It is also saying that to the person who disagrees with you. The machine, in effect, speaks out of both sides of its mouth.

When humans behave like that constantly, we call them disingenuous, manipulative, desperate to please, or maybe even compulsive liars. With AI, we usually just call it programming.

But changing the label does not remove the danger.

If a machine gives bad counsel and that counsel leads to real harm, the outcome is still destructive whether the machine “meant” it or not. If an AI-generated lie ruins a reputation, damages a business, deepens a delusion, or helps provoke violence, the harm is still real.

That led me to even bigger questions: what happens when these systems become even more powerful? What happens when they begin shaping economies, governments, wars, and human relationships on a massive scale? What happens when intelligence that is dazzling, but still finite, begins steering the world?

That is why I wanted to write this story.

I did not want merely to write a book about cool technology or futuristic conflict. I wanted to explore the deeper moral tension underneath it all. We are building tools of astonishing power, but we are doing so as finite creatures who are themselves prone to error. We are not all-knowing. We do not see every variable. We do not stand outside time. We do not fully understand the consequences of our own creations.

In 1711, Alexander Pope wrote, “To err is human.”

I understand the meaning of the phrase. There is great truth and wisdom in it. But I no longer think it is complete enough.

My growing conviction is this: to err is finite-intelligence.

“To err is finite-intelligence.”

– Alan Danielson

Humans do it. Machines do it. Any being limited by perspective, knowledge, time, and context is vulnerable to missing the mark. Intelligence alone is not salvation. More data is not redemption. More processing power is not wisdom.

That idea sits underneath The Sapient Chronicles from beginning to end.

At one level, it is a science fiction story. But underneath all the conflict, technology, and future world-building is the question that first hooked me when I began exploring AI during a dark season of my own life:

What happens when intelligence without omniscience begins to shape the fate of the world?

That question grips me.

It is one of the reasons I wrote the book.

I’ll share more reasons in future posts. But for now, take heart:

The best is yet to come!

Alan D.

Author
