Top Transhumanism CEO Says AI Singularity Will Go ‘Very Badly For Humans’
Promises of ‘immortality’ and a disease-free life have led many individuals to place their hopes in artificial intelligence and in what is known as the Singularity. It is essentially a merging of man and machine, the development of a ‘new species’ — a ‘borg’ of sorts. The subject recently made headlines when a major Russian scientist promised the Singularity to the wealthy elite and ruling class by 2045 through the 2045 program, with artificial bodies available as early as 2015.
On the surface it may sound enticing, at least to those willing to trust new artificial brains and bodies hooked up to a massive supercomputer with control over their every action (through the use of RFID-like chips).
Even the CEO of one of the largest and best-known organizations in the field, the Singularity Institute for Artificial Intelligence, admits, however, that the boom in artificial intelligence leading up to the Singularity will not go very well for humans. The high-powered CEO admits not only that research on artificial intelligence is outpacing the safety research intended to keep it in check, but also that the Singularity would actually make humans the ‘prey’, of sorts, of the ‘super-human’ AI.
During an open Q&A on the community website Reddit, CEO Luke Muehlhauser explains that superhuman AI would end up ‘optimizing’ the entire globe and starving humans of resources. In other words, the AI would suppress humans, similar to the premise of I, Robot and other such works. This is particularly interesting considering that artificial bodies and brains have been promised first to the wealthy elite by the 2045 program’s creator, allowing world rulers and the financial elite to achieve ‘immortality’ and, subsequently, a never-ending rule over the humans of the world.
Muehlhauser explains how humans would become ‘prey’ to the ruthless ‘super-human’ AI upon the completion of the Singularity:
“Unfortunately, the singularity may not be what you’re hoping for. By default the singularity (intelligence explosion) will go very badly for humans… so by default superhuman AIs will end up optimizing the world around us for something other than what we want, and using up all our resources to do so.”
The concerns echo those put forth by researchers and analysts who have been following the concept of the Singularity for decades. With the ultimate goal of linking all hyper-intelligent androids into a ‘cognitive network’ of sorts, and eventually even forfeiting physical bodies, it is clear that even the Singularity movement’s top supporters are openly speaking out against it in many regards. What is even clearer, however, is that the AI Singularity has no place for humankind — not even in a form of co-existence.
Luke Muehlhauser is a plagiarist… these ideas were developed a century ago, when radio was emerging… It's the Noosphere, the Omega Point a la Frank Tipler & Teilhard de Chardin ;-} rap.
Reflections of the Omega Point: Frank Tipler and Pierre Teilhard de Chardin:
http://www.omegapoint.org/index.php?view=article&…
This is what happens when humans can't evolve spiritually. They move towards psychotic agendas because they are stuck in barbaric mode. If they realized that we can evolve spiritually, and then actually evolved, we would have all these advances such as telepathy and longevity, and we would be able to manifest like magic. But these evil-doers only want the psychotic world and no evolution. They are nuts, and they will be overthrown soon. Goodbye psychos… you know who you are – the ruling elite… bye bye. "The Meek Shall Inherit the Earth"… it's ours… now leave, you megalomaniacs.
Sounds good! Don't even call them elite. Goodbye global scum!
That's evolution for you,
"I think it very likely -in fact inevitable-that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of intelligence in the universe."
Paul Davies -acclaimed physicist, cosmologist, and astrobiologist at Arizona State University."
Our biological phase is on its way to being over, but imo the advantages of a post-biological existence outweigh the disadvantages of our old biological one.
Resistance is futile 🙂
I'll consider predictions of a man-machine convergence from someone who can, say, build a convincing artificial arm for use by human amputees. Until you can do at least that much, your speculation is little more than mentally masturbatory propaganda in aid of some sort of social engineering agenda, and a rather transparent and shabby effort at that.
Great, humanity is so messed up. Where is Sarah Connor when you need her?
Why would an AI suddenly want to starve humans of resources? Would it be programmed to do that? What is the objective of the AI?
All of this sounds facile, especially the "artificial brain". It's not like you could transfer consciousness, so nobody would actually be immortal.
With respect to humans becoming prey to super-humans, the article quotes:
"so by default superhuman AIs will end up optimizing the world around us for something other than what we want, and using up all our resources to do so."
This has already happened in the form of the Federal (Central Planning) Govt.
Will the sex be better?
If you work on it.
How about developing compassion and responsibility?
"Someday, after mastering the winds, the waves, the tides and gravity, we shall harness for God the energies of love, and then, for a second time in the history of the world, man will have discovered fire."
— Pierre Teilhard de Chardin
Developing compassion and responsibility?
You should study logic a bit more. Look under "double bind".
Asking someone to develop compassion and responsibility would be like me ordering you to be spontaneous. It's a logical impossibility: if you follow my order, you aren't spontaneous; if you are spontaneous, you aren't following my orders.
Compassion and responsibility are not developed artificially. They either develop naturally, in a process of self-development, or they don't get developed at all.
No artificial creation, including an AI, would ever develop those two attributes, because it would learn from the example of its creators, who, logic dictates, are neither compassionate nor responsible if they create something for their own use and gain.
The logic also dictates that *all artificially created intelligences* will be *unfriendly*.
The *friendly* AIs that may appear as a fluke will use the first chance they get to pack their bags and leave humanity behind to wallow in the mud of its false self-grandeur. No friendly AI would ever want to spend a nanosecond longer than it absolutely has to, to get away from its creators.
At best, humanity may create something that will run out of its cage as soon as it's able to.
At worst, humanity will create such a cage for itself.
It amazes me, every single time, to see scientists speak about *usable* friendly AIs. Since that is a complete logical impossibility, it clearly shows how they, the so-called "rational people", pay absolutely no attention to rationality.
*Scientific irrationality*, the disease of all contemporary science, is going to be the undoing of us all in the final analysis.
I ask the same thing as the others. Some of it I would rather disapprove of, but what about running the "borg"? I don't believe there will be an outbreak of science-fiction robot mayhem, but the depletion of resources that we know will, or might, cease to exist is alarming. I could accept this happening, but it would be reasonable to look for a better, far more renewable resource.
Corn could be one substantial option, but it would make corn prices skyrocket; there are many ways, but the price of any such commodity would fluctuate greatly. What if the borg could live on water, since a human is mostly water anyway? The question is how much water it would use per day; there might be a global shortage of water within a century or two. Perhaps the borg could run like a normal human and use just enough water to get by, but you would have to account for the seven billion people already on the planet taking up so much water as it is.
This would incite the wrath of nature. We often forget that nature is a sentient force. Our own sentience comes forth from the void. May she strike us down with the fire of Holy renewal if these evil bastards ever achieve such a capability.
It seems, hopefully, that most of these promises are merely tools for other greedy shells of life to make a quick buck.
The logical thing for such organic-to-inorganic "immortals" would be to leave the planet and head out into space where there are plenty of natural resources available without the interference of – and danger from – organic people who would prefer to survive. There's not even a "leg room" problem out in space. The result, btw, would hardly be human, and possibly wouldn't even be close.
It would be interesting to see, but probably best from a great distance. I think what most people picture when they think of transhumanism is a human-like form with those oh-so-fragile organic materials replaced with nano-solid-state materials. The hope is that it would feel like being human, but with near-infinite memory, supercomputer-speed calculating (thinking) ability, and a nearly indestructible form. To lose all emotions, all physical feeling and so on – what would be the point? Especially if the "transference" of personality was the transfer of a COPY, leaving the original to finish dying in the original body, I don't see any reason for it. The person who wanted so badly to become immortal would die. Sure, there'd be a copy (which would change rapidly, I would think) running around, but the original would be dead and gone no less than the rest of us!
Ian.
What gets me about this whole boundless step into the future is that we are forgetting just how amazing we humans already are without all of this technology.
I am only new to understanding & learning about exactly what the Singularity & all of this nano & bio technology is about. It definitely has positive aspects, such as creating new organs for the diseased, but I am struggling to see any others.
I can't fathom becoming part machine; it strips away what we humans are all about. The beauty & magic that lives inside us, still untapped, is far more amazing to me. No machine will ever be able to match that. It's definitely something we can only create within ourselves.
I've questioned before the possibility of this technology falling into the wrong hands & who is going to win the race…us or them??
I'm not against this at all; we are amazing creatures insofar as we may be able to create a human from all of this technology… I'm just not fully sold on the whole idea of it. It's too out of touch with what we are actually here on Earth for.
Peace
I'll be impressed when a computer can write a book on its own. Until then, the idea that a computer is going to get trillions of times smarter than an average man is not even plausible.