What is Transhumanism? What is the Singularity? In detail, if possible.
Jaster Mereel said: What is Transhumanism? What is the Singularity? In detail, if possible.
makeshift said: What's the Singularity?
The Singularity is characterized as a time in the future when computers and AI (artificial intelligence) become so fast and advanced that they become a truly disruptive force in the fabric of society as we know it. These superintelligent machines use their superior cognitive abilities to improve and re-engineer themselves, making themselves ever more intelligent and ever more powerful. Because smarter and more advanced intelligences will be doing the engineering, the rate at which technology advances will speed up dramatically. Therefore, it's extremely difficult to say what the world is going to be like in the coming decades. That is in fact why it's called the Singularity: you can't see or reasonably predict the future beyond that point, just like you can't see beyond the singularity of a black hole.
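To make the "machines improving themselves" idea concrete, here's a rough toy sketch in Python (my own illustration with made-up numbers, not anything from Kurzweil): assume each generation of AI designs its successor, and the smarter the designer, the bigger the jump it can make. Because the gains feed back into the next round of design, the growth compounds on itself.

# Toy model of recursive self-improvement (illustrative numbers only).
# Each generation designs the next; a smarter designer produces a
# proportionally bigger improvement, so the growth compounds.
intelligence = 1.0        # arbitrary units; 1.0 = "human-level"
for generation in range(1, 11):
    improvement = 0.5 * intelligence   # assumed: gains scale with the designer's ability
    intelligence += improvement
    print(f"generation {generation}: intelligence = {intelligence:.1f}")

After ten generations that's already roughly 57 times the starting level, and if each generation also arrives sooner than the last, the curve steepens even more. That runaway feedback is the intuition behind the "you can't see past it" framing.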
I'm reading a book right now called "The Singularity is Near" by a dude named Ray Kurzweil. Brilliant guy. He's a futurist, inventor and author. Bill Gates says he's the best person to predict the future of AI.
Ray predicts that we're going to see the Singularity happen around 2045. Not very far away.
Jaster Mereel said: So, I am to understand that this is supposed by some to be a genuine historical event at some time in the future? How did Mr. Kurzweil arrive at the date of 2045? Also, if it is supposed to be a real historical event, how does he connect now with that event? What is leading up to it in the historical sense? Are there any factors other than technological development that he considers?
And I would appreciate it if you responded to MY reply. I already posted this and it went walkies... hmmm.
Jaster Mereel said: I would appreciate it greatly if someone engaged in this thread and at least attempted to answer my questions.
I've just read the utterly fascinating Wikipedia articles along with some of the links and that's the sum of my knowledge on the subject but, as I understand it, the key event is going to be the point when we develop the first technology that's smarter than us.
I'm going to reiterate baumgarten's point by saying that you can't honestly talk about a machine being more intelligent than us until you define what intelligence is. I absolutely agree that machines will be (and already are) capable of doing things that we as Homo sapiens cannot do, but to say that they will be more "intelligent" than us is kind of shortsighted. Most people seem to look at intelligence as if it works on a hierarchical scale, as if some things are less or more intelligent than others. I would say that, as we study animal intelligence more and more, we are realizing that intelligence is a matter of the niche in which the animal exists. Machine intelligence is likely to be the same thing (it already is, actually), whereby the machine's niche is a particular area determined by us, rather than a drive to survive and reproduce like most life. I think you're looking at intelligence the wrong way.
redarmy11 said: As I understand it, the key event is going to be the point when we develop the first technology that's smarter than us.
Again, my point above. Intelligence doesn't work on a hierarchical plane, and machines won't think like human beings. What you just described is how a person (albeit an amoral one) would think, not how a machine would think. Machines are built to perform a specific function which is determined by us. Unless we design these machines to solve the world food shortage problem, or we design them like animals with a need to survive and reproduce, then none of these things will happen. The only way it could happen is if we were building said machines with the purpose of simulating living organisms, but that's not what the vast majority of machines are built to do.
This will take over the design of newer, smarter technologies, which will design even smarter technologies, and so on, at an ever faster rate, until we reach a point - the Singularity - "where our old models must be discarded and a new reality rules". The "new reality" will be one designed by a machine with an IQ of, say, 6 trillion but only as much 'compassion' as we can build into it. How do we know it won't 'solve' world food shortages by eliminating 'surplus' humans? How do we know it won't play dumb, only to trick us at a later stage? The answer is: right now, we don't.
Vinge, writing in 1993, predicts that the design of the first 'superhuman' intelligence will happen "within 30 years", and that the singularity will take place very quickly afterwards - but because technology has by this point completely outstripped us, predicting the pace and nature of change from our current co-ordinates is an impossible task (we'll become semi-human cyborgs in order to keep up, but only if the machines don't vaporise us first).
Well, we were. We hit a plateau a while ago, so PC manufacturers have been pushing 64-bit and dual-core architectures in order to compensate. IBM has been using nanotechnology in research to attempt to squeeze even more transistors onto a chip, but you can only take miniaturization so far. At a certain point (likely within the next ten years) it will be impossible to build significantly smaller and faster electronic microprocessors.
RoyLennigan said: The whole basis for the idea of a technological singularity is derived from the fact that the number of transistors we can fit into a circuit is doubling every 18 months (if I recall correctly). So we are advancing at an exponential rate, doubling every year and a half.
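For what it's worth, the doubling claim is easy to put numbers on. Here is a minimal Python sketch, assuming a clean 18-month doubling period and a made-up starting count of 100 million transistors (real chip history only roughly follows this):

# Transistor count under an assumed 18-month doubling (Moore's-law-style growth).
def transistors(start_count, years, doubling_period_years=1.5):
    return start_count * 2 ** (years / doubling_period_years)

# e.g. starting from an assumed 100 million transistors:
for years in (0, 3, 6, 9, 15):
    print(f"after {years:2d} years: ~{transistors(100e6, years):,.0f} transistors")

Fifteen years of uninterrupted doubling works out to ten doublings, roughly a thousand-fold increase, which is why even a modest-sounding doubling period produces the dramatic curves these predictions lean on - and also why the plateau mentioned above matters so much.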
baumgarten said: What's the standard unit of 'smart'? Humans might be able to build intelligent machines, but what determines whether they're superior or inferior to us?
There truly is very little precedent for such a historical development - unless you consider the agricultural revolution of the ninth millennium BCE, or the industrial and scientific revolutions of recent centuries. Both of these had profound effects on most of the competing societies in the world, to a greater or lesser extent (but they were effects which would have been impossible to predict beforehand).
It definitely wasn't a rhetorical question (well, maybe a little). Thanks for the reply.
eburacum45 said: Good question; it deserves some debate, rather than leaving it hanging as a rhetorical question.
Such a test could only return a binary result, however. We wouldn't know how self-aware a machine was, only that it had self-awareness. I suppose a sufficiently rigorous test could establish several arbitrary levels of self-awareness under which things can be categorized, but such a test's practical application would be limited.
Perhaps we could take general human intelligence as a measure of 'smartness'. Psychologists already use the IQ measure, supposedly based on average human ability; there are certain cultural difficulties associated with this measure, but it has a certain amount of validity when applied to humans. But machine intelligence is likely to be very different, and I suspect the IQ scale will be difficult if not impossible to apply to intelligent machines.
Alan Turing, as is well known, suggested a test of whether machines are indistinguishable from humans in their responses. A concerted effort might sooner or later produce a machine able to pass a Turing Test; but this would only result in a machine that imitates a certain aspect of humanity.
Far more intensive tests would be required to determine whether a machine has real self-awareness. Alternatively, a machine might be capable of self-awareness without being able to pass the Turing test. Self-awareness might not even be a requirement for an artificial intelligence; a machine capable of running a traffic control system, or a market trading system, or a country, might not need self-awareness and may even be inhibited by such a capacity.
I believe Roger Penrose has presented an argument against the idea that a machine can completely emulate a human mind. I don't know much more about this, but perhaps someone on this forum does. It is worth researching, anyway.
But given a machine that can be shown to emulate a human mind in every conceivable way, this could then provide a baseline to measure the 'smartness' of other machines. If a human-level machine is upgraded to be twice as fast, it simply becomes a sped-up human, capable of making mistakes twice as fast as a human. Similarly, a series of human-level machines connected in parallel could be as indecisive as a committee.
A machine with a vast memory could exceed any human capacity for knowledge and recall; but these aspects too could be no more useful than a human with a competent search engine and access to the internet, and who is fastidious about keeping records.
It seems to me that a machine that thought and behaved exactly like a human wouldn't be much more than a very expensive human. It could be a personal bias, but I favor the idea of more concrete applications of artificial intelligence. Sentient computers might be capable of pondering the nature of their own existence, but such functionality wouldn't be useful in a wide range of applications, which is why I think they are unlikely to become prevalent in our society.
So will a machine that is equivalent to a human mind be the pinnacle of machine intelligence? I think not. A machine with a hundred times as much processing power and a hundred times as much memory will be more capable in many ways; it seems entirely likely that such a machine will exceed most human ability, even when the difference in operating speed is taken into account.
And it seems to me that machines which mimic human characteristics will be in the minority - there will be very competent machines with very large processing abilities which are entirely different in form and function.