An Atheist's Explanation of the Concept of "Good"

By Ben Love ~

“In each of us, two natures are at war—the good and the evil. All our lives the fight goes on between them, and one of them must conquer. But in our own hands lies the power to choose—what we want most to be we are.” – Robert Louis Stevenson
I've been in enough dialogues with Christians to know that one of the mainstays of their argument is the distinction between “good” and “bad.” Specifically, the Christian maintains that without the standard provided by an all-loving, benevolent God, we human beings could never know what “good” is. God sets the standard, according to their argument, and from that standard flows all moral knowledge. Without that standard, the Christian contends, we would be amoral monsters with no predilection for good over evil; we would be animals following whatever base instinct was driving us at the moment. As a result, the world would be a chaotic place where rampant selfishness and senselessness reign supreme. But, according to the Christian, God has set the standard, and we humans therefore know what “good” is, thus saving our civilization from certain demise.

Except… well, the world is a chaotic place. Rampant selfishness and senselessness do reign supreme. Turn on the news on any given day and you can plainly see that monsters do indeed abound, prompting one to wonder why this standard God has apparently set has gone so terribly wrong.
“No, no,” the Christian says. “It’s not like that. The standard is found in the Bible, and only those believers who study the Bible closely are tuned-in to that standard.”

But this answer, which doesn’t really answer anything, only prompts one to further wonder why the presence of the Bible in the world for the last 1,600 years hasn’t cleaned up the mess by now. Oh sure, you may argue that people couldn’t read the Bible for themselves until the printing press was invented in the 15th century. You might further argue that the Bible has mostly had an effect only on the Western world, and that the rest of the planet is still trying to catch up. But the Bible’s message has been carried from land to land by faithful pilgrims ever since the 2nd century. Surely, by now there should be some sort of homogenization of “good” in the world if the Bible is indeed the vehicle of the standard, right? But is there? No, not even close.

I contend that if there is a God, and he is the standard of “good,” and this standard has indeed been imparted to humanity, whether through a small sector of the human population (only the believing Christians) or the population at large (direct revelation), then humanity ought to be agreed, to some degree, as to what “good” is. But the human species is anything but agreed. Place a thumbtack anywhere on a map and you’ll likely be speaking about some culture that has vastly different ideas about what is acceptable than yours does. Would we see this if there actually were a God setting a standard for us? Think about it. Imagine the Earth as it would be if the Christian God were the standard of “good.” While there would still be deviants and derelicts lurking about, the idea of “good” would be a much more unified and prevailing concept. And yet the reality is that it is difficult trying to get even two Christians to agree on what is right and what is wrong—and these two Christians study the same Bible!

Furthermore, let us assume for a moment that there is a benevolent God driving the standard for “good.” What can be said about his method? Which sounds more effective to you: revealing “good” to all humans everywhere at the same time so that a maximum amount of people can be on board, or revealing it to a select few who then might (or might not) share that revelation with the world, thereby causing a painfully slow trickle-down effect that takes centuries to come to fruition? The second option sure does carry an awful lot of collateral damage with it. How many wars were acceptable to God while his trickle-down “plan” ensued over the centuries? Was there a tolerable amount of ignorance-based mayhem that God could live with while the concept of “good” made its way through time and geography as the light was slowly revealed to humanity?

No, the situation of humanity on this planet as it exists today is actually strong evidence against the involvement of some benevolent deity. Indeed, if you were to imagine humanity as it would look if “good” were an evolving, fluid concept, you would probably envision something that closely mirrors the present reality. Remember the thumbtack on the map? The evidence clearly demonstrates that humanity is still on the journey of unpacking all that pertains to “good” and “bad.” And that is precisely what we would expect if the evolutionary model were correct.

(Guess what: it is correct.)

And it is at this point, then, that the Christian shakes his head and says, “No, evolution cannot account for the concept of ‘good.’ Theoretical molecular changes have nothing to do with ethical advancement.” Well, duh! And that is when I shake my head and say that there is a fundamental misunderstanding of the word evolution taking place here. The term simply means gradual change over time. Nothing more. We might talk about how a literary character evolves as the story progresses. We might speak about how a tropical storm has evolved into a hurricane. In these cases, we are indeed talking about evolution. And when we speak about lifeforms molecularly mutating over vast periods of time, we are talking about evolution at that point as well—biological evolution. But when we speak about how the concept of “good” is a changing, fluid concept, we are not referring to biological evolution at all. No, we are referring to social evolution.

Therefore, let us visit the idea of social evolution. The fact is quite simple and, if I may, fairly obvious: humans are creatures of change. Humans change as the environment changes. They change as the economy changes. They change as the political landscape changes. They change as their relationships change. They change with time. They change with knowledge. They change with adversity. They change out of boredom. They change sometimes simply for the sake of change. It’s not the fact that we change that is important; it’s what drives that change, and why. And when the discussion is sociological (because that is the only proper context in which any understanding of “good” can be placed), we have to start at the beginning to get the full picture…

So, rewind with me, if you will, back to that point in prehistoric time when Homo sapiens was a species of nomadic hunters and gatherers sheltered in caves, painting horses on the cavern wall, and learning how to cook meat over a fire. At this early stage of human social development, survival was the dominating factor that commanded what was deemed right or wrong. Looking out for you and yours was considered right, even at the expense of the welfare of others. The only wrong was to not do everything possible for you and your family to survive. Now, this is not to say that these prehistoric humans were cold-hearted machines incapable of feeling warmth or kindness in their hearts toward their fellows. Such a statement would not only be erroneous, it would be presumptuous. And yet, based on what we know of early humans, it is clear that life for them, while being exceptionally difficult, was relatively simple. The goal and purpose of life was to survive. As such, these humans were concerned with very basic principles of conduct. Abstracts, such as whether or not humans were meant to be monogamous, or what age made a person a “legal adult,” were of no consequence to these people. And even these basic principles of conduct were not set in stone. For instance, if someone stole provisions from you, you considered that bad. But if you had to steal in order for your children to survive, you considered that good. Prehistoric theft, then, is a perfect example of how the concept was fluid. It changed with the situation.

Similarly, if one nomadic band of hunter/gatherers had to make war on a neighboring band, the motivation was likely never pure greed or a lust for power. No, what drove the actions of these prehistoric peoples was again survival. War was therefore considered an occasional necessity, not so that one person or group could attain political power but rather so that the strongest group might acquire all the necessary provisions for survival. Now, we today would look upon this kind of behavior as ignoble and selfish. We might comment on how it is more honorable to sacrifice oneself for the good of another. But what use did prehistoric humans have for honor? Honor was completely impractical and irrelevant to them. Life was “kill or be killed,” “live or die,” “eat today and survive tomorrow.” As such, their ideas about what was “right” or “good” would clash with ours today. And that is okay, because “good” is a fluid, evolving concept. It changes as we change. It changes as our priorities change. It changes as our definition of “purpose in life” changes. There is no standard.

And so humanity continued in this state. But then something extraordinary and inexplicable occurred. About 14,000 years ago, humans suddenly (and yes, the archaeological record suggests it was relatively sudden, at least in evolutionary terms) acquired agriculture. This meant an abrupt change in the way life could be approached. No longer did humans have to move with the food supply, chasing mammoths and tigers and whatever else they ate all over the globe. No, now the food supply could be grown. Now the food supply was stationary. Humans would still hunt for meat, of course, but now they could do it locally, closer to home. Home, by the way, was a cluster of habitats where these small nomadic bands settled. Thus, the village was born. But now that people were amassing in one place, ideas about “right” and “wrong” had to evolve. After all, now humans were staying put, living next to each other, in close quarters, existing together. Where before they all shared one large cave, now each family was erecting its own home around its own farm. Thus, the notion of personal property emerged. A new sense of community arose, one that was different from the cave dynamic. Suddenly, humans were concerned about their possessions, not just their provisions. Life was still about survival, but even that had changed. Where before you either slew beasts and lived or failed to do so and died, now you had to rely on and even attempt to manipulate the weather. Now you had to watch over your crops. Now environmental forces outside of your control could determine life and death for you and your family.

This new situation with all its new concerns meant that humans had to conceive of ways to live peaceably, inasmuch as was possible. It is therefore very likely that the idea of a community leader emerged at this time—someone who could oversee the progress of the village and exert a kind of governing role over the villagers. But each village was different. Some were affected differently based on the climate. Some were affected differently based on their geography or topography. Some villages grew one kind of crop while another grew something else entirely. All these varying dynamics meant that each village had its own struggles, its own distinct character. And these variables meant that each village had its own ideas about right and wrong, good and bad. The community leader of one village might have had to stress one aspect of ethical behavior that never even arose in another village. And since these villages were cropping up all over Africa, Asia, and Europe at this time, the distribution of “good” and “bad” as concepts was not unified. It was as diverse as the map upon which the villages were to be found. (And it still is today, though to a lesser degree.)

It is important to understand, then, that “society” is itself a fluid concept. What did “society” mean when humans were huddled around fires inside massive caves? What did it mean when they came out of their caves and began to build homes and plant crops? What did it mean when individual villages began to have their own identity? What did it mean when the first power struggle between a community leader and potential usurper occurred? What did it mean when a village of 40 people became a village of 200 people? What did it mean when the first slave was chained? What did it mean when the first disease epidemic ravaged these early villages? “Society” is a word that evolves as we do. And if society evolves, so too do its parameters—namely, the concepts of “right” and “wrong,” “good” and “bad.”

As for the progression of human social evolution, it is important to note that in the millennia following the advent of agriculture, two more extraordinary developments took place.
First, between 10,000 and 6,000 years ago, the aforementioned farming villages began to grow. Why did they grow? Well, a stable, dependable food supply and a more stationary (and thus less dangerous) lifestyle led to an increase in human health. Humans were slowly beginning to live longer (though not by much in comparison to today’s standards) and live better. As a result, the population began to increase. Now these villages were swelling into major centers of trade and commerce. Thus, the first cities were born. And the more people there are, the more guardians are required to oversee them. This meant that the community leader had to have people under him, generals and lieutenants who could see to the day-to-day operations. Bureaucracy was therefore born. A police force was also needed, and thus the militia was born. Suddenly there is a hierarchy of power, a totem-pole system where each citizen of the city knows his place. And when you have a society that has ordered itself into a functioning machine, where the society begins to take on its own cultural identity, well…we call that civilization.

The second development that occurred at this time was the advent of writing. The presence of civilization meant that now there was more information to keep track of, and humans needed a way to do that. And so, the historical record tells us that roughly 6,000 years ago, cuneiform writing was developed in Mesopotamia. Other civilizations eventually developed their own writing systems as well, and by about 4,000 years ago, most (though not all) of the societies that existed in Africa, Asia, and Europe had established some form of writing.

The advent of civilization and the advent of writing are extremely important factors to keep in mind as our story of social evolution continues. Why? Because where there is civilization, there arises a need for law. And when laws are written down and thus communicated to future generations, the concepts of “good” and “bad,” “right” and “wrong” no longer seem like tenuous, fleeting notions that change with the weather but rather mainstays of culture, pillars of ethical thought that get woven into a society’s identity and into the consciousness of its people.

And once again, the historical record confirms this. You’ve heard of the Code of Hammurabi? Or Moses’ Hebrew Law? You probably have. But I bet you might not have heard of the Code of the Nesilim (via the Hittite empire), or the Negative Confessions (via ancient Egypt; one of the oldest recorded systems of ethical morality), or the Kang Gao (via ancient China). All of these systems of law, and many others, emerged in civilizations all over the globe at roughly the same time: between 5,000 and 3,000 years ago. And this is common sense, really. Logic would suggest that in the wake of civilization’s advent, a period of trial-and-error ensued where human societies began to experiment with laws. After all, a civilization without laws is anarchy—no civilization at all. To have civilization is to have a need for law and order. But who decides what that is? What if some of the laws are unjust? What if some of them simply don’t work? How do you erect perfect law where once there was no law without having to test the waters? It was therefore necessary that human beings poked around in these ideas a bit. A civilization isolated somewhere in eastern China might have had need of a law that would have made no sense to the peoples living in ancient Rome. A civilization situated on the Mediterranean Sea might have had more maritime priorities when it drafted its first laws, while a civilization located up in the Himalayan mountains might have been more focused on territorial concerns. The point is that human beings were formulating laws at this time, and they didn’t always agree on what was right and wrong. That is what one would expect from the social-evolutionary model, and that is also precisely what the historical record reveals.

Eventually, as civilization continued and as time passed, the laws that didn’t serve the collective welfare were dismissed, while the laws that seemed to have a net positive effect on society were kept. And with the passage of time came new concerns, new struggles, new obstacles, and thus the laws were altered, or new laws were drafted. History shows us that as Rome gave way to Christianity, and as Christianity gave way to the Renaissance and Enlightenment, and as the modern age of Industry and Technology was born, the changing nature of humans corresponded directly to the changing nature of “good” and “bad.” For instance, where once it was thought to be perfectly acceptable to own slaves, now the human race knows better. There is still slavery in this world, but it exists as an aberration from what we know to be acceptable. Similarly, humans once thought it was perfectly acceptable to burn heretics at the stake, to sacrifice fellow humans to the gods, to drink blood, to eat human flesh, to have multiple spouses, to put neighboring clans to the sword, to leave deformed infants out in the elements to die, to settle differences with a duel, to keep certain portions of the population from having basic rights, to…well, we could go on and on. The point is that we change. We grow. We progress. We evolve! And so too do our notions of “right” and “wrong.”

At this point, the Christian usually objects by pointing out that what we think of as “good” here in America is not mirrored by what many think of as “good” in China, or in Iran. “If good were an evolving concept,” he says, “then it would have evolved at the same rate everywhere. And yet that is not what we see. We here in the glorious, godly nation of America know so much more about ‘good,’ while the backward heathens in the Middle East are woefully behind. Thus, your model fails.” This argument backfires on the Christian because he simply doesn’t understand the historical record (how could he when he believes the Earth is only 6,000 years old?) or the social-evolutionary model. He simply takes for granted that, if we atheists are correct, all civilizations everywhere should have evolved at the same rate and in the same way. But he forgets about all the contingent factors that would interrupt the process for some while expediting it for others. Some civilizations were in direct contact with others, through the trade routes. This meant that ideas were exchanged, and sometimes new concepts from a foreign society would be assimilated into another. Thus, these connected societies had the benefit of collective contact. Their progress would have been faster than that of a civilization isolated from the rest. An isolated civilization did not have the benefit of collective contact, and it had to grow on its own. This could mean it developed more slowly, though not necessarily. The point is that sometimes something as simple as geography or climate or even ecology could have a tremendous bearing on how a society progressed ethically.
And so when we today, here in 2015, ask ourselves what is “good,” are we relying on what some theoretical God might have set as the standard? How can we, when this God himself is still theoretical? If the existence of God must be taken on faith, that must mean the existence of this God is not a certainty. But this must also mean that the Christian’s idea of God being the “standard for good” is just as uncertain. If God is theoretical, then so is this concept of “good” in which the Christian believes. But if the concept of good is in fact theoretical, how can it be a standard? Furthermore, if the concept of “good” is based on a theoretical intangible, doesn’t that mean the concept is itself, by default, just as intangible? It would seem to me that the terms standard and intangible are oxymoronic. Doesn’t it make more sense to accept that as we ask ourselves what “good” is, we are drawing from centuries—no, millennia—of trial and error, and legacies passed down to us through the evolution of our ancestors? Doesn’t it make more sense to accept that what we know of as “good” has evolved alongside us? Doesn’t it make more sense to endorse a social-evolutionary model that corresponds perfectly to the historical record rather than the immaterial, impalpable projections of faith in a hypothetical God? I say yes. I say that we are smart enough as a species to realize on our own that “good” is that which lends to the collective welfare of society, and “evil” is that which subtracts from it. What could be more straightforward than that?