

Hiding in a Dangerous Universe?

By Taylor Marvin

Jason T. Wright has a fascinating series of posts discussing the energy and waste heat constraints facing extraterrestrial civilizations, and in his most recent post argues that expansionary alien civilizations are likely long-lived. This has interesting implications:

“I am arguing that once a civilization gets going, it’s going to take over the whole galaxy quickly, and that L (the lifetime of a typical civilization) is actually longer than the current age of the Universe. If this means that subsequent civilizations are unlikely to arise, then N= 0 or 1 for most galaxies (0, in fact, since most galaxies don’t look like they’re full of Dyson spheres).”

This is an interesting challenge to the classically understood implications of the Fermi Paradox and explanations for the Great Silence. Even if we accept the Rare Earth argument that the conditions required for the evolution of Earth-like complex life are rare, the sheer number of rocky planets in our galaxy suggests that even intelligent civilizations based on Earth-like biology should be relatively common. This presumed frequency is typically squared with our failure to observe evidence of extraterrestrial civilizations by noting that even if these civilizations are common, they are unlikely to coincide with us in space and time. However, if the first civilization to begin expansion is likely to dominate the galaxy fairly quickly, then this explanation doesn’t hold. As Wright notes, if an intelligent expansionist civilization had arisen in the past it would be long-lived, and we would observe it today. The fact that we don’t is clear evidence that an alien civilization has never begun expanding in our galaxy — or, for that matter, in any other galaxy close enough that we could observe its lack of Dyson spheres today. If we haven’t observed an alien civilization in the Local Group, it’s likely because there have never been any there.

There are a few ways around this observation. Speculatively, perhaps interstellar civilizations do arise frequently, expand, but quickly evolve into a state that renders them undetectable. Science fiction plays around with this conjecture: Kardashev type III civilizations could exist, but just in some way we’re incapable of recognizing, or advanced civilizations could choose to expand in virtual reality rather than the outside universe.

Another possibility is the classic answer to the Drake Equation: perhaps intelligent civilizations are common, but they universally fail to expand beyond their home systems and are short-lived. This state is possible even if civilizations do not destroy themselves before expanding. As I have previously discussed, there are reasons to believe that a low-birthrate, energy-constrained species of individually rational actors (in short, a species like us) would fail to expand even if expansion is in its long-term interest. However, this explanation is obviously problematic. Even if a species’ innate characteristics discourage expansion, AIs derived from that civilization would not necessarily share the same traits: it is easy to imagine a universe where biological species fail to expand, but AI entities descended from their computers do not. Similarly, even if many species destroy themselves before they can expand beyond their home systems, or elect not to, it’s unreasonable to suppose that this tendency is universal. If Wright is correct that expansionary civilizations’ L is long and leads to an N=1 galactic outcome, then we would only have to be preceded by a single expansionary civilization in order to observe alien intelligence. Given that our galaxy has likely been potentially habitable for over ten billion years, this implies that civilizations are extremely rare.
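Wright’s point can be read directly off the structure of the Drake equation, N = R* · fp · ne · fl · fi · fc · L: holding the astrophysical and biological factors fixed, N scales linearly with L. A minimal sketch, with parameter values that are purely illustrative placeholders:

```python
# Drake equation: expected number of detectable civilizations in the
# galaxy. All parameter values below are illustrative assumptions.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """N = R* . fp . ne . fl . fi . fc . L."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# N grows linearly with L, the lifetime of a typical civilization:
for L in (1_000, 1_000_000, 10_000_000_000):
    print(f"L = {L:>14,} years -> N = {drake(1.5, 1.0, 0.2, 0.5, 0.1, 0.1, L):,.1f}")
```

If L really exceeds the age of the Universe, as Wright argues, the equation stops describing a steady-state population of civilizations and instead describes whichever civilization arose first.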

Possible reason for dangerous universe.


Even more speculatively, another possibility is that civilizations are common, but universally elect not to expand. Because alien civilizations’ behavior would be governed by vastly different biological and economic constraints, this decision would have to be motivated by outside influence. Specifically, alien civilizations, like us, fail to observe evidence of aliens. Civilizations then face a choice: they can either expand, and enjoy the massive first mover advantage that leads to the N=1 outcome, or not expand. Why not? One possibility is that alien civilizations do exist, but are actively hiding. Alien civilizations may reason that the absence of observable aliens is evidence that the universe is extremely dangerous, and that other civilizations are either hiding or extinct. This allows for the original Drake equation’s assumption, in Wright’s words, of a “steady-state of short-lived civilizations” that never move from Kardashev type II to III. Instead, they elect not to expand, or to expand only as much as remaining hidden allows. Importantly, it’s possible that this no-first-mover equilibrium exists whether or not there is actually a malevolent entity that makes the universe dangerous. Since no civilization can be sure that they’re the first intelligence in the galaxy, the possibility of frightened and hiding extraterrestrials is impossible to rule out.

If the universe is dangerous, then expansion — or being detectable, in general — is risky. Of course, not expanding is the ultimate “risky” choice — civilizations that don’t spread beyond their home star are doomed to die along with it. If humans fail to expand beyond Earth in significant numbers, our civilization will die at most a billion years from now. Given this time frame, the risk that detection leads to immediate, complete destruction must be very high for civilizations to choose not to expand. For example, say that humans will be completely destroyed 42,000 years after being detected by whatever malevolent entity makes the galaxy dangerous (the Earth is 26,000 light years from the center of the galaxy; this assumes that a destructive force is dispatched at light speed as soon as evidence of humans reaches the center of the galaxy. I’m spitballing here). Given this assumption, the expected number of years human civilization survives is maximized by not expanding only if the risk that the universe is dangerous is >.99996. Obviously, all alien species must be extremely risk averse to make a conscious choice not to expand out of fear, especially since the only evidence of a dangerous universe type is the absence of observable civilizations.
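The break-even arithmetic in this paragraph can be made explicit. A minimal sketch, assuming a ~1 billion year lifespan for a home-bound civilization and 42,000 years of survival after detection, and treating the payoff of a successful expansion (“essentially forever”) as a free parameter, since any finite stand-in changes the exact threshold:

```python
# Expected-survival comparison: stay home vs. expand in a possibly
# dangerous universe. The expansion payoff is a free parameter and
# the specific values tried below are illustrative assumptions.
def breakeven_risk(home_years, detected_years, expansion_years):
    """Minimum probability p that the universe is dangerous for
    staying home to maximize expected years of survival.
    Not expanding beats expanding when:
        home > p * detected + (1 - p) * expansion
    which solves to p > (expansion - home) / (expansion - detected)."""
    return (expansion_years - home_years) / (expansion_years - detected_years)

# The required confidence climbs toward 1 as the expansion payoff grows:
for payoff in (1e11, 1e13, 1e15):
    print(f"expansion payoff {payoff:.0e} years -> p > {breakeven_risk(1e9, 42_000, payoff):.6f}")
```

For payoffs in the tens-of-trillions-of-years range the threshold lands near the .99996 figure above; for an unbounded payoff no finite probability justifies staying home, which previews the instability argument below.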

So not particularly convincing! But pretend this is the case, and the galaxy is relatively full of risk averse alien civilizations afraid to expand. This suggests an interesting strategic interaction. The first civilization to expand beyond its home star takes a bet that the universe is not dangerous. If it is wrong, it is exterminated. However, if it’s right its civilization enjoys the first mover advantage that leads to an N=1 galaxy, and survives essentially forever. Given the low likelihood that all civilizations in the galaxy are extremely risk averse, the lack of evidence that the universe is a dangerous type, and the massive first mover advantage, it is unlikely that universal hiding is a stable equilibrium.

Another problem is that colonization is not necessarily risky. As Robin Hanson has noted, “if this colonization effort could hide its origins from those who might retaliate, what would they have to lose?” For expansion to be risky, the hidden malevolent entity must be so all-powerful that it can arbitrarily destroy colonies separated by many light years and remain dangerous across deep time, yet still fail to detect hidden civilizations — narrow criteria! These requirements, and the likelihood that a galaxy full of hiding, non-expansionistic civilizations is not in stable equilibrium, suggest that the ‘aliens are hiding’ answer to the Fermi Paradox is not convincing.


Previous posts on aliens and human expansion:

Through Struggle, the Stars: What’s an Interstellar Humanity Look Like?

Reconsidering “First Contact”

What Would an Expansionist Alien Species Be Like?

“Putting Your Mind” to Space Industrialization

Battle: Los Angeles, Red Dawn, and Alien Invasions

The Economics of Alien Invasion

What Would an Expansionist Alien Species Be Like?

By Taylor Marvin

One of the more interesting questions about the universe is the apparent rarity of intelligent life. It is reasonable to suspect that, given the vast size of the universe and the apparent frequency of rocky planets, intelligent civilizations are common in galactic habitable zones, even disregarding the possibility of exotic biologies. However, humans have not encountered aliens and have observed no evidence of these civilizations, despite the fact that evidence of both extant and extinct sufficiently advanced civilizations should be apparent across galactic distances. This is especially puzzling because today’s humans are not far from the technological requirements — conservatively, fusion drives and generation ships — required to colonize a significant portion of the galaxy.

This puzzle — if aliens are common, where are they? — is termed the Fermi Paradox. Scientific American author Ian Crawford elegantly summarized the possible solutions to the paradox:

“There are only four conceivable ways of reconciling the absence of ETs with the widely held view that advanced civilizations are common. Perhaps interstellar spaceflight is infeasible, in which case ETs could never have come here even if they had wanted to. Perhaps ET civilizations are indeed actively exploring the galaxy but have not reached us yet.

Perhaps interstellar travel is feasible, but ETs choose not to undertake it. Or perhaps ETs have been, or still are, active in Earth’s vicinity but have decided not to interfere with us. If we can eliminate each of these explanations of the Fermi Paradox, we will have to face the possibility that we are the most advanced life-forms in the galaxy.”

There’s a lot to explore here, but I’d like to focus on two of the four potential answers: that intelligent civilizations choose not to expand through the galaxy, or are somehow prevented from doing so. Importantly, it appears that this “prevention” is not based on an inherent difficulty of interstellar colonization. Again quoting Crawford:

“Any civilization with advanced rocket technology would be able to colonize the entire galaxy on a cosmically short timescale. For example, consider a civilization that sends colonists to a few of the planetary systems closest to it. After those colonies have established themselves, they send out secondary colonies of their own, and so on. The number of colonies grows exponentially. A colonization wave front will move outward with a speed determined by the speed of the starships and by the time required by each colony to establish itself. New settlements will quickly fill in the volume of space behind this wave front.”
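Crawford’s wave-front picture is easy to quantify. A back-of-the-envelope sketch; the hop distance, ship speed, and settling time below are illustrative assumptions, not figures from the article:

```python
# Time for a colonization wave front to cross the galaxy.
GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way disk

def crossing_time(hop_ly, ship_speed_c, settle_years):
    """Years to cross the galaxy, given the distance of each
    colonization hop (light-years), ship speed (fraction of c),
    and the time each colony needs before launching its own ships."""
    years_per_hop = hop_ly / ship_speed_c + settle_years
    front_speed_ly_per_year = hop_ly / years_per_hop
    return GALAXY_DIAMETER_LY / front_speed_ly_per_year

# 5-light-year hops at 5% of c with 400-year settling pauses:
print(f"{crossing_time(hop_ly=5, ship_speed_c=0.05, settle_years=400):,.0f} years")
```

Even with these modest assumptions the front crosses the galaxy in roughly ten million years, a blink on the ~10-billion-year timescale the galaxy has been habitable, which is what makes “they haven’t reached us yet” a weak answer.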

In a famous 1998 paper, “The Great Filter – Are We Almost Past It?”, economist Robin Hanson suggests that humans do not observe aliens because life encounters a “great filter between death and expanding lasting life” that prevents it from colonizing the galaxy.

“No alien civilizations have substantially colonized our solar system or systems nearby. Thus among the billion trillion stars in our past universe, none has reached the level of technology and growth that we may soon reach. This one data point implies that a Great Filter stands between ordinary dead matter and advanced exploding lasting life.”

Either intelligent life evolves extremely rarely, or it is extinguished before expanding. While Hanson believes this filter is best explained by the presumed rarity of the evolution of intelligence, he provides a fascinating description of social hypotheses that explain the theorized short lifespan of intelligent civilizations. Interestingly, as humans appear to be relatively close to interstellar capability, this suggests — rejecting a biological Great Filter mechanism — that humans are also close to encountering the Great Filter.

Confounding the puzzle, Hanson argues that evolutionary theory suggests that civilizations that do arise tend towards expansion, making their absence harder to explain:

“In general, it only takes a few individuals of one species to try to fill an ecological niche, even if all other life is uninterested. And mutations that encourage such trials can be richly rewarded. Similarly, we expect internally-competitive populations of our surviving descendants to continue to advance technologically, and to fill new niches as they become technologically and economically feasible.”

Hanson argues that energy constraints, the desire to outpace potential competitors, and concerns over local disasters would motivate even sedentary civilizations to expand — the galaxy is not full of hermit civilizations. Similarly, the finite lifespans of main-sequence stars would eventually force all civilizations that reach the end of their sun’s life to expand or die. This suggests that most intelligent civilizations eventually expand, leading back to the Fermi Paradox — if intelligent civilizations are common and expansionist, why don’t we observe them?

There are three broad possibilities: aliens are expansionist but hide, either on purpose or inadvertently; civilizations are routinely destroyed before they can expand; or civilizations elect not to expand.

Because evidence of advanced civilization is typically thought to be detectable on galactic scales, if expansionist civilizations exist in our galaxy something must be preventing us from detecting them. Typical explanations include that we have detected but cannot recognize evidence of very alien extraterrestrial civilizations for what it is, by chance aliens avoid technology detectable over vast distances, or that the galaxy is dangerous and technological civilizations are actively hiding.

Another possibility: rather than electing not to expand, planets are somehow routinely prevented from developing interstellar civilizations. Science fiction suggests a few fictional answers. In Alastair Reynolds’ Revelation Space, nascent interstellar civilizations inevitably attract the malevolent attention of the “Inhibitors”, dormant machines left over from an early interstellar war; more fancifully, in Charles Stross’ A Colder War humanity ill-advisedly meddles with H.P. Lovecraft’s monsters. Other commonly theorized dangers are nuclear or biological warfare, or environmental disaster. More exotic theorized perils include civilization-destroying experiments with strong artificial intelligence, or attracting the attention of rapacious hidden aliens (I find this unlikely).

Another potential “Great Filter” mechanism is that alien civilizations do arise and are not prevented from expanding, but instead elect not to. There are numerous explanations for this tendency. An early, widespread alien civilization could have imposed a “no-expansion” norm on following civilizations; Reynolds’ long-lived Inhibitors could be considered a particularly violent way of enforcing this norm across deep time. Civilizations could be universally cautious, and avoid expansion at all costs for fear of attracting the attention of hidden malevolent aliens; however, it is difficult to reconcile this with the death of stars — why would a solar system-bound civilization fear a potential danger over certain death at the end of its sun’s life? Alien civilizations could also universally prize preserving the natural state of the galaxy, though again it is doubtful that this naturalistic impulse would survive the death of a civilization’s stars. Or, advanced civilizations could universally embrace virtual reality or lose physical form while somehow avoiding the resource and survivability incentives to expand.

Another potential solution is that advanced civilizations commonly arise, but are prevented from expanding for purely economic or organizational reasons; in this case, the solution to the Fermi Paradox would be the “it is too expensive to physically spread throughout the galaxy” hypothesis. As Hanson notes, there are numerous problems with this theory; most notably, evolutionary pressures tend to select expansionary traits in successful or long-lived societies. However, I’d like to examine this possibility in more detail: why would civilizations choose not to expand in the absence of external pressures (previously set non-expansion norms, fear), innate non-expansion traits (a tendency towards naturalism), or disinterest (a move to virtual reality without a local resource constraint, etc.)?

There are clearly long-term benefits to galactic expansion. Civilizations that do expand would have access to much greater energy resources and vastly increased security. However, it is important to remember these benefits are collective, long-term benefits, and species with finite lives have little reason to invest in the extreme long term. If we restrict our discussion to human-like species composed of reproducing, autonomous, sentient individuals, it is possible to argue (speculatively!) that the drive for galactic expansion largely vanishes. Interstellar colonization is a collective effort that likely fails a human-based cost-benefit test scaled around a few human generations; when rational, short-lifespan, individual utility maximizers are the decision-makers, under conditions roughly similar to foreseeable future humanity interstellar colonization seems unlikely. It is even possible that individual species like our own would be unable to organize interstellar expansion even when motivated by the impending death of their sun.

I am not arguing that the “it is too expensive to physically spread throughout the galaxy” hypothesis is a particularly convincing universal solution to the Fermi Paradox, but instead that economic constraints are a more likely reason for supposing that near-baseline humans will not expand widely in the foreseeable future than astronomical or socially triggered destruction.

Of course, “conditions roughly similar to foreseeable future humanity” benchmarked on the early 21st century certainly leaves a lot of leeway for future humans, not to mention other species broadly similar to our own. That said, we can broadly speculate about the qualities of expansionist species with biology (again, reproducing, autonomous, sentient individuals) similar to our own:

  • Exponential reproduction: In the last half century world total fertility has fallen precipitously, from a mean of 4.95 in the 1950–55 era to 2.36 today. This fall is well understood, and is associated with the advent of birth control, rising incomes, and women’s increased social empowerment and education. But importantly, falling total fertility is only possible because birth control allows sex to be decoupled from reproduction, and the human reproductive drive is a sex drive. It’s entirely possible that an alien species would have a reproductive, rather than sex, drive that negated the entire idea of birth control and made exponential population growth difficult to avoid. Massive population growth could be a powerful incentive to invest in interstellar expansion.
  • Extreme life extension: I’ve previously wondered if humans’ falling birthrates would prevent humanity from ever investing in space colonization — after all, barring some catastrophe, living off-world will in the medium run always be more expensive and uncomfortable than living on Earth. If humans don’t have a pressing reason to leave in large numbers, they likely won’t. While human colonies off Earth would significantly improve the survivability of the human species, it’s difficult to imagine this is a sufficient reason to motivate investing in these colonies. However, medical advances resulting in extreme life extension would undo the population control gains from stable world total fertility and again raise the specter of global overpopulation, perhaps prompting investment in off-world colonization. The same logic could apply to other species.
  • Competing local societies: As Hanson notes, competition creates strong pressure to expand into unexploited niches. Competition among local societies could create incentives to expand in otherwise non-expansionistic species. However, it is difficult to imagine sufficient competition among human-like species to prompt interstellar expansion while avoiding local war that destroys the capability for extensive interstellar travel, though perhaps strong prohibitions on armed conflict could avoid this.
  • Innate expansionistic tendencies: To move into more speculative factors, it’s possible to imagine alien species with an innate desire to expand — just as human behavioral evolution appears to have favored aggression. An innate desire for expansion would motivate investment in colonization beyond that justified by human cost/benefit calculations.
  • Low/high risk tolerance: Space exploration is risky, both in direct risk and its high opportunity cost. Space colonization is much more risky. It’s conceivable that a species with a higher innate psychological tolerance for risk would elect to invest in risky expansion for reasons that don’t make sense to humans. Conversely, a species with a tolerance for risk much lower than humans’ could judge the long-term security of space colonization worth the risk and opportunity cost. Lifespan could conceivably play a role as well; assuming species consisting of sentient individuals, longer-lived species could have either a lower (more to lose) or higher (boredom) tolerance for risk than humans.
  • Extreme technological advancement: All of these previous traits alter the benefit side of an expansion cost/benefit ratio. However, extremely advanced technology developed for other purposes could justify expansion by radically reducing the cost of expansion. For example, self-replicating von Neumann machines could make expansion much cheaper. This relative affordability could prompt highly advanced species to expand when they otherwise would elect not to.

If this theory holds (and I’m not entirely convinced that it does; for example, extreme life extension could be very common even in relatively young intelligent species), we would expect human-type civilizations that do expand to be dominated by those with innate high population growth or extremely high technological capabilities (i.e. no expensive generation ships or warp drives). More speculatively, we could expect the most expansionist species to be those where policy is not set by individual utility maximizers. These “non-individually rational” species could include hive minds a la Star Trek’s Borg, machine races, or something else entirely.

If we accept the argument that species composed of short-lived, individual utility maximizers are not particularly inclined to expansion, and these civilizations tend to not delegate social decisions to non-individual utility maximizing actors like “God computers”, then a potential solution to the Fermi Paradox is that civilizations with the expansionist traits listed above arise only rarely. This, however, does not address the problem that expansionist societies would tend to out-compete and displace non-expansionist societies.