Rational Transhumanism

Quoting Wikipedia, the Free Encyclopedia:

Transhumanism (abbreviated as H+ or h+) is an international cultural and intellectual movement with an eventual goal of fundamentally transforming the human condition by developing and making widely available technologies to greatly enhance human intellectual, physical, and psychological capacities.

I've considered myself a transhumanist for several years now, working towards transhumanist values in most activities I consciously undertake. While my initial fascination came from some excellent hard science-fiction novels, I learned the philosophy itself from Eliezer Yudkowsky's notion of Simplified Humanism and from the LessWrong community. Very early in my activism I learned to draw a strong distinction between fiction and philosophy: hoping instead of believing, and taking action today.

I've had the pleasure of working with some excellent people in the field, inter alia Anton Kulaga, founder of the Ukrainian transhumanist movement, and, with his help, Dr. Aubrey de Grey, the founder of SENS and a well-known biogerontology pioneer and promoter. I've seen dozens of educational actions showing ordinary people the importance of longevity research, took part in crowdfunding an experiment, and had a lot of stimulating discussions about the structure of modern academia. I learned about the importance of Open Science and journals such as PLOS ONE or the GitHub-like PeerJ. It has been a stimulating intellectual journey, one where I felt that every action brought a change to the world, which is what I love about transhumanism.

However, when I mention the topic in conversation with uninvolved people, I am often met with misunderstanding. There's a popular notion that transhumanism is based more on science fiction than on actual science and rational prediction, with transhumanists preferring to live out their fantasies, treating the future and futurism like a sort of fashion or a culture to play out. While I cannot deny the existence of such groups, I would like to present my own definition of the philosophy at hand:

I define transhumanism as humanism extended by the belief in, and anticipation of, accelerating change brought about by technology: the anticipation that many new scientific breakthroughs and inventions will not only bring advances in a specific field, but also cause shifts in unexpected places. My favorite examples include the scientific method, the Internet itself, and the free software / open source movement.

That anticipation may be both the biggest advantage and the gravest risk of transhumanism. It gives a transhumanist the ability to make slightly bolder predictions, a little further into the future. While this may lead to noticing problems unseen by other people and to shaping the existing world in preparation, just as often it produces far-fetched arguments that seem ridiculous to outside readers, such as warnings about telepathy patents.

As the notion of shaping the future lies at the center of my definition of transhumanism, many people seem to mistake it for preparing for the future. They may be the ones justly accused of merely living out their fantasies instead of partaking in any kind of activism, or of acting just for the sake of accelerating change.

I see no point in working towards that aim without inspecting the changes closely. It's important to remember that rapid social transformation (almost always brought about by new technological inventions) often shifts the balance of power in society. I believe (and think it's part of the transhumanist creed) that the changes will come either way.

That should encompass all my major views on transhumanism. While not everyone will agree with me, I like to call this specific set of beliefs rational transhumanism, understood mainly as activism, not as a theoretical model of the world.

Holding to my word, I try to anticipate opportunities to shape the future towards the ideals I believe in. And I believe that the core of transhumanism is old, simple humanism: putting humans first. That means allowing individuals to act freely, to express their opinions and govern themselves, to have access to all the knowledge we have gathered so far, and to build upon it.

I don't try to take any political stance here: the values mentioned above can be interpreted as either left- or right-wing, and I wouldn't want to imply the libertarian themes popular among some transhumanists. Nonetheless, there are plenty of threats:

Old social structures changing too slowly to accommodate all the changes may quench inventiveness, lead to conflicts in multiple places, and cause loss of knowledge. Academia is the most prominent, if not the only, example. The trickle-down model of knowledge, the publication-based funding system, resources closed off from the outside world, stratification, and rigid career rules are all very real problems. There are others, too: limitations on research based on moral grounds, or even economic lobbying, remain problems for regenerative medicine worldwide.

While the totalitarianisms of the twentieth century are waning around the world, a new competitor emerges: multinational megacorporations, with the power not only to lobby governments but even to challenge them as equals. Several philosophers have already suggested modelling corporations as organisms, with humans as mere internal mechanisms. With shareholders guarding the single purpose of profit, no CEO can single-handedly change the politics of such a giant: every strategy will be used by more or less willing workers to maximize revenue. The existing mechanisms defending societies from monopolies, such as patents, are rapidly being corrupted to serve corporate purposes. While new technologies are developed ever faster, we are losing control over them, and losing knowledge about them, leaving us forced into certain lifestyles and surrendering our human rights as well.

To combat the problems developing in the transhuman future, I turn to the ideals of openness: Free Software, Open Science, and limiting any one entity's control over our knowledge. I recognize that the fastest progress may come through multinational corporations and military research programs, and I oppose that path, because it comes at the price of being controlled and surveilled, which, as a human, I won't stand for, even at the cost of slower progress.

I am not a researcher, and a part of me regrets it. In the same way, I am not a true hacker, although I possess a few of the skills of both callings. I consider myself an activist, working for and from the Warsaw Hackerspace. As I write this, some people around me are trying to debug a laser cutter, a drone is whirring over my head, and a soldering workshop is producing someone's own spin on the Arduino, all amid an ongoing discussion of the political consequences of Apple banning Pebble-capable applications. It's not only a hub of technology; it's a place that promotes self-education and self-development, where new breeds of political ideas are discussed and a biology workshop meets a digital human rights training.

I believe that communities such as this one (local makerspaces, hackerspaces, and fablabs) are one of the keys to keeping the future in people's hands. For several months I've been helping to set up several technology hubs in the Middle East, and I honestly believe they are the biggest chance for stabilization in the region. Caught between the hammer of religious fanaticism and the anvil of predatory corporations exploiting local markets, young people there need to see that they have other choices.

On the other hand, Europe is threatened by TTIP, an agreement that would severely weaken governments' (and people's) hold on megacorporations. There are several organizations, such as the Polish Panoptykon Foundation, that see the dangers and are working to prevent them.

Where I see a modern transhumanist is not at a science-fiction convention among other Trekkies, but involved in serious activism, as in the examples above. Not everyone will agree with me, but I think it's the only way to take part in rational transhumanism.

Rational transhumanism, point by point:

  • Technology brings exponentially accelerating changes in basically every aspect of life
  • Humanist values are the most important ones; let's focus on real people
  • It's pointless to work towards accelerating the pace of change without any control over it; the changes will come either way
  • It's important to work towards shaping those changes around humanist values

If you're looking for a place to start, I suggest abandoning science fiction for a moment and trying some of the LessWrong sequences, especially How to Actually Change Your Mind.