What Will it Take for Humanity to Survive?

Vic Grout has a post at Turing’s Radiator on What Will it Take for Humanity to Survive? (And Why is Trump Such a Complete Bellend?). The four pillars he comes up with are:

  1. No Non-Renewable Energy
  2. No Nuclear Weapons
  3. No Countries
  4. No Capitalism

It is hard to argue against the first two, although some might say we will still need some non-renewable energy for some time (how much is TBD). The next two are considerably more radical and almost unimaginable for most people, even if some might agree with them in theory.

If there was ever a time to begin imagining a world without countries or capitalism, it might be now. My imagination is sometimes lacking so let me reuse a Henry Miller quote I used in another post:

“The cultural era is past. The new civilization, which may take centuries or a few thousand years to usher in, will not be another civilization. It will be the open stretch of realization which all the past civilizations have pointed to. The city, which was the birthplace of civilization, such as we know it to be, will exist no more. There will be nuclei, of course, but they will be mobile and fluid. The peoples of the earth will no longer be shut off from one another within states but will flow freely over the surface of the earth and intermingle. There will be no fixed constellations of human aggregates. Governments will give way to management, using the word in a broad sense. The politician will become as superannuated as the dodo bird. The machine will never be dominated, as some imagine; it will be scrapped, eventually, but not before men have understood the nature of the mystery which binds them to their creation. The worship, investigation and subjugation of the machine will give way to the lure of all that is truly occult. This problem is bound up with the larger one of power—and of possession. Man will be forced to realize that power must be kept open, fluid and free. His aim will be not to possess power but to radiate it.”


9 Responses to What Will it Take for Humanity to Survive?

  1. Wyrd Smythe says:

    Wishful thinking. Human nature hasn’t changed since time immemorial. Ancient Greek comedies are still funny, and humor is subtle. We’re still the same slightly evolved chimps we ever were. I see no signs we’ll change anytime soon. In fact, quite the opposite. To my eyes, we seem to be backsliding to the Dark Ages. Our post-empirical, consumer-based world.


    • James Cross says:

      Since so much of our behavior is driven by culture, we have more potential for change, I think, than is commonly thought. The feedback between technology and culture as a driver of change is an area that is not well understood.

      We are quite a bit different from chimps. There are, of course, the obvious things – language, intelligence, tools, etc. With all of that came the reduced intra-group aggression that has made larger societal groups possible. Technology might eventually play a role in extending the size of the group to encompass all humanity.

      Perhaps somewhat wishful but not impossible.


      • Wyrd Smythe says:

        Oh, no, not impossible! But I’m pretty cynical and pessimistic about our chances. I’m beginning to see the human race as a fail — an explanation of the Fermi Paradox.


  2. I would argue that, of those four pillars, only one is relevant to mankind’s survival. Nuclear weapons are on the list of existential threats to humanity [1], but they are not the only threat. More countries or fewer countries, more capitalism or less capitalism – that is important, but neither is included in the list of existential threats. The energy problem is essential, but even a shortage of energy sources is not an existential threat.
    The known existential problems are those that scientists in the relevant fields know about, consider very dangerous, and can even assign some probability of happening. Those problems are mostly man-made, like nuclear weapons and AGI (Artificial General Intelligence). AGI does not exist yet, but AI exists, and there are early signs that mankind has just entered a transitional period from AI to AGI.
    AGI is a huge topic, and I will leave it at that.
    From my perspective, AGI is the biggest threat for several reasons. The first is that many people, including scientists, think that they understand what AGI is, while they do not. The second is that the speed of AI development is enormous and growing every year. The third is that, unlike nuclear weapons, AGI could pop up from private, non-government facilities and would be much, much harder to control globally, even if governments agreed on control.
    1. Sandberg, A. & Bostrom, N. (2008): Global Catastrophic Risks Survey, Technical Report #2008-1, Future of Humanity Institute, Oxford University, pp. 1–5, https://www.fhi.ox.ac.uk/reports/2008-1.pdf.


    • James Cross says:

      I’m not wholly convinced of the risks of the AI/AGI itself. For one thing, it isn’t totally clear to me when AI will become AGI. That seems to be something long believed to be just on the horizon but with surprisingly little progress.

      But even if I assume AGI capability will come soon, I would argue that the risk of humans using it is far worse than the risk inherent in it by itself. Humans might be able to do all sorts of things with it but I don’t see it doing much by itself in the near future.


  3. “For one thing, it isn’t totally clear to me when AI will become AGI.” – Nobody knows when and from where the virus for the next pandemic will appear. The same goes for AGI. Nobody knows when and where.
    What is AGI by definition? It is an improved AI that is many times (hundreds, or hundreds of billions of times) better at any task humans can perform, in any area of human activity. Any means any – not only professional tasks, but also humor, bluffing, cheating, creating poetry and art, understanding and predicting different people’s behavior, and so on. Imagine something that relates to a human in smartness and other capabilities as a human relates to a mosquito. Could a mosquito understand and predict the actions of humans? Correspondingly, could humans understand and predict the actions of AGI? That problem is clear to AI professionals. We can predict the results of a nuclear bomb explosion, but we cannot predict AGI’s behavior, its relation to mankind, or the consequences of that.
    Conversely, we could say that AI is much dumber than AGI and, unlike AGI, works well in only one very narrow area of human activity. This is what we have now. Yet even with that hugely dumb AI (which is still better than people at its specific activity), we had an incident several years back when such an AI got out of people’s control. That was noted by many people across the world.


    • James Cross says:

      You’re assuming a sudden breakthrough that results in a technological AI singularity. It could happen that way. But it also could happen much more incrementally with uneven progress in different areas. The latter would provide plenty of time for adaptation and mitigation.

      Your viewpoint is certainly valid; I don’t know of any definitive way of proving either your view or a more incrementalist view correct. We won’t know until it happens or doesn’t happen.

      I read Bostrom’s book and found too much anthropomorphizing of AI, especially in assuming all of the worst human traits would manifest.


  4. I agree with you that an incremental path is possible. It is not only possible; it is happening right now. There is already early work on AI that can perform not one but multiple tasks. The real question is how long that transition window will be, and whether mankind will use that time wisely and adapt. So far, humanity is too fragmented and prefers infighting to fighting against common existential threats.

