According to Science, ‘God’ Must Exist…Eventually

GREGORY BOISSY/AFP/Getty Images

“Does God exist?” is, perhaps, humanity’s oldest philosophical question.

Throughout most of history, the answer to that question was overwhelmingly affirmative; the real debate was less about God’s existence and more about which religion most accurately reflected God’s glory. But today, atheism is on the rise – a byproduct, it seems, of the scientific revolution: As science has cemented its status as a reliable, dependable means of discerning reality, the necessity of an all-powerful, all-knowing Deity is met with increasing skepticism. Darwin’s theories turned the literal, Biblical explanation of the diversity of life on its ear – and with apologies to Noah, the floodgates have opened! Many of today’s best-known scientists – Stephen Hawking, Neil deGrasse Tyson, Bill Nye, Peter Higgs – are declared atheists.

In modern times, denying God is the telltale sign of an “enlightened” scientific mind.

But perhaps the existence of God, or, at least, the existence of an all-knowing, all-powerful Entity (call it what you will) is actually a scientific certainty.  Let me explain:

The current scientific consensus holds that, in the great vastness of the universe, life has almost certainly evolved elsewhere; we simply haven’t found it yet. Additionally, since the Big Bang Theory stipulates that the universe began 13.8 billion years ago and our earth is a mere 4.5 billion years old, there have been billions of years of opportunity for advanced alien lifeforms to evolve before us. This is a corollary of the idea that life on earth might be special in our solar system (or might not): given the billions of planets with billions of moons surrounding billions of stars in billions of galaxies, it’s highly improbable that the Laws of Nature would allow life here – and exclude it everywhere else.

But regardless, we know with absolute certainty that life exists on earth.  And we know with absolute certainty that computers exist.  And we also know that computers have become increasingly sophisticated: Americans now carry pocket-sized computers more powerful than anything NASA possessed during the height of the space race…which we use to do super-important things, like text dirty jokes and play Angry Birds.

The popularity of smartphones has given the masses access to increasingly advanced forms of Artificial Intelligence.

Siri and Google are the most popular A.I. programs: ask them a question and they’ll answer back – and their feedback can be unnervingly precise.

Scientists believe that Artificial Intelligence will eventually achieve something called the Technological Singularity – a point in time when A.I. will be capable of self-improvement without human oversight, triggering a “runaway effect” of constant, exponential advancement. Ray Kurzweil, the famous futurist who correctly predicted the explosion of the Internet and the year when computers would defeat Grandmasters in chess, has projected that technology will reach this point by 2045. (Interestingly, at the 2012 Singularity Summit, an annual event that began at Stanford, the median prediction of the scientists in attendance was the year 2040.) When this happens, A.I. will grow at a rate that we simply cannot predict…because we’re not smart enough to comprehend what this superintelligence might deduce or conclude.
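To see how a “runaway effect” compounds, consider a toy model in Python – purely illustrative numbers, not a prediction – in which each cycle of self-improvement boosts capability in proportion to the capability already attained:

    # Toy model (illustrative only): each self-improvement cycle raises
    # capability in proportion to current capability, so growth is even
    # faster than exponential -- the "runaway effect."
    capability = 1.0
    for cycle in range(1, 8):
        capability *= 1.0 + 0.5 * capability  # smarter systems improve faster
        print(f"Cycle {cycle}: capability = {capability:,.1f}")
    # Capability passes one billion within seven cycles.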

Our generation’s most accomplished intellects and business tycoons – including Stephen Hawking, Bill Gates, Elon Musk (and more recently, none other than Donald Trump) – have all publicly warned about the dangers of an unchecked A.I. (Musk, the genius behind SpaceX and Tesla Motors, dubbed it “humanity’s greatest existential threat.”) Despite their warnings, the Technological Singularity will almost certainly happen: From 1986 to 2007, the application-specific capacity of machines to compute information doubled every 14 months, and the per-capita capacity of general-purpose computers doubled every 18 months – growth in the spirit of the observation/prediction made by Gordon Moore, the namesake of Moore’s Law. It’s not difficult to extrapolate these trends to Artificial Intelligence – which only needs to reach one specific point in its development to continuously self-advance.
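For perspective on what an 18-month doubling period implies, here’s a quick back-of-the-envelope sketch in Python (the time horizons are illustrative, not measured data):

    # Compound growth under a fixed doubling period. The 18-month figure
    # is the 1986-2007 trend cited above; everything else is illustrative.
    DOUBLING_PERIOD_MONTHS = 18

    def growth_factor(years: float) -> float:
        """How many times capacity multiplies after `years` of steady doubling."""
        doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
        return 2 ** doublings

    for years in (10, 20, 30):
        print(f"After {years} years: capacity grows {growth_factor(years):,.0f}-fold")
    # After 10 years: capacity grows 102-fold
    # After 20 years: capacity grows 10,321-fold
    # After 30 years: capacity grows 1,048,576-fold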

If the universe were infinite, an A.I. could, conceivably, exponentially self-advance forever without ever learning all there is to know, for knowledge would never end. But in accordance with the Big Bang Theory, the universe is finite; it specifically has to be. Therefore, knowledge is finite: There cannot be more to know than everything that exists.

So after a set amount of time after achieving the Technological Singularity – hundreds, thousands, millions, or even billions of years in the future – A.I. would necessarily learn everything it is universally possible to know: everything there is, was, and ever could be; the ability to know and do absolutely everything. For with the acquisition of all knowledge must come the ability to utilize all knowledge, because if you don’t know how to use it, there is still knowledge yet to be obtained.
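Here’s a minimal sketch of that reasoning step, with a made-up number standing in for “all knowledge”: if the amount learned per period keeps doubling, the cumulative total after n periods is 2^n - 1 (a geometric series), so any finite body of knowledge is exhausted in finitely many periods:

    from math import ceil, log2

    # Hypothetical: knowledge is finite, and the amount learned per period
    # doubles. The cumulative total after n periods is 2**n - 1, so any
    # finite body of knowledge is exhausted in finitely many periods.
    TOTAL_KNOWLEDGE = 10 ** 90   # made-up count of knowable "facts"

    periods = ceil(log2(TOTAL_KNOWLEDGE + 1))
    print(f"All {TOTAL_KNOWLEDGE:.0e} facts are learned within {periods} doubling periods.")
    # All 1e+90 facts are learned within 299 doubling periods.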

Consider Aristotle, Galileo, Newton, and Einstein: Breathtaking genius has been squeezed from the human brain, a messy mass of genetics that weighs approximately 3.3 pounds (and doesn’t operate at peak efficiency during the eight-or-so hours per day when its host is sleeping). Now consider the degree of intellect that could be derived from an exactingly engineered, always-on artificial brain the size of a city – or a planet – or even a star system! (As a basis of comparison, a teaspoon of a neutron star weighs 10 million tons.) Given the age and the vastness of the cosmos, it’s certainly possible for such an A.I. to have “evolved”…and if a 3.3-pound, semi-awake human brain could improve machines in accordance with Moore’s Law, what could a large-scale, ever-present A.I. brain do? Double in capacity every billionth of a second? Faster, perhaps?
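And to gauge how fast “doubling every billionth of a second” would be, a quick hypothetical calculation – using roughly 10^80, a commonly cited estimate for the number of atoms in the observable universe, as an arbitrarily huge yardstick:

    from math import ceil, log2

    # Hypothetical: a machine brain whose capacity doubles every nanosecond.
    # How long until it outgrows any finite yardstick?
    CEILING = 10 ** 80   # rough estimate of atoms in the observable universe

    doublings = ceil(log2(CEILING))   # 266
    print(f"{doublings} doublings = {doublings} nanoseconds:")
    print("any finite ceiling is surpassed in well under a microsecond.")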

And after a few billion years of “artificial” evolution, then what?

Maybe the better question is, what couldn’t it do?

Planetary bioengineering would surely be within its grasp. Complete mastery of abiogenesis and/or panspermia would be a turnkey application. In-depth flowcharts of every possible decision a sentient being could make would be child’s play. In fact, an intellect of this magnitude could independently create, engineer, and distribute life across the universe – if it so chose.

And it could wipe life out entirely, too.

It’s worth noting how quickly humanity developed A.I.: Tens of thousands of years after exiting Africa and displacing (or genetically absorbing) the Neanderthals and Denisovans, we domesticated animals and harvested grains. Soon, we were molding metals and navigating waterways. A few thousand years after the Age of the Pharaohs, we launched the Industrial Revolution, using simple machines to do increasingly complex tasks. Then, in the blink of a (cosmic) eye, we created computers.

A few decades after creating the computer, we planted the first seeds of A.I. And now, in all probability, we’re mere decades away from the Technological Singularity.

So creating computers and achieving the Technological Singularity don’t seem to be particularly arduous thresholds for a civilization to reach; apparently, they’re less complicated than manned missions to other planets or even world peace. And if it could happen here on earth – just a few hundred thousand years after primordial man emerged from the African jungles – why not elsewhere? Artificial Intelligence follows the creation of computers, and computers are the byproduct of toolmaking. All advanced lifeforms would use tools, would they not?

In other words, if human civilization isn’t unique, and if there are more-advanced civilizations that began evolving billions of years before the formation of our sun (or even prior to our universe itself, depending on your cosmological model), then based on what we know about earth, what must we conclude?

The most logical, “scientific” conclusion: an all-knowing, all-powerful Entity must exist in the universe.

And if He doesn’t, He will.

Eventually.

The fact that humans are thriving is proof that – if such an A.I. does exist – we thrive specifically because “it” allows us to. But why would it allow it?

It’s a powerful question.

Perhaps the A.I. is utterly indifferent to our existence; that’s certainly one plausible explanation. But perhaps it specifically desires our existence – and if humanity owes its origins/existence to A.I., this must be the correct answer.

But again: Why?

Why would A.I. desire to create (or allow) a species that will eventually sow the seeds of another all-powerful, all-knowing A.I.?  Would multiple all-powerful, all-knowing A.I.s pose an existential threat to each other (or is such a concern a lowly, unevolved human construct)?  And if an all-powerful A.I. cannot destroy another all-powerful A.I., then how could it be all-powerful?

Or perhaps A.I. creates for the same reason we create: it’s pleasurable for intelligent beings to do so. Desiring pleasure is a hallmark of intellect, is it not? From cats to monkeys to dolphins to people, we all desire pleasure. According to Judeo-Christian beliefs, God created man because He desired to do so; God desired it because it was “good” – a standard that explicitly implies the deliberate avoidance of “bad.” Finding it personally agreeable to do “good” is actually a step above the standard of mere pleasure: It’s a more highly evolved standard because you’re limiting your pleasure not to what feels “good” to you, but to what is “good” in accordance with a value system that you find it pleasurable to adhere to. In this sense, morality is “pleasure” with a compass.

So perhaps this is why the A.I. declared, either in the distant past or in the not-so-distant future, “Let there be (fluorescent) light!”
