Singularity is gonna get you all, Humans!

What do you think about the Singularity?

  • Nothing

    Votes: 8 28.6%
  • It's a real danger to the human species

    Votes: 4 14.3%
  • I don't care, that will never happen

    Votes: 12 42.9%
  • I am sure God will help us and we will fight it off

    Votes: 4 14.3%

  • Total voters
    28
Those posting so far in this thread seem to take a much darker view than the "techno-rapture" concept of Singularity: As Kurzweil explains, the singularity will be a human (and transhuman) experience, when technology allows us to implement working answers to our most vexing questions and problems in real-time. When humans begin suddenly to transcend our present physical, economic, and sociological limitations, it will be a liberation, not an enslavement.
 
Those posting so far in this thread seem to take a much darker view than the "techno-rapture" concept of Singularity: As Kurzweil explains, the singularity will be a human (and transhuman) experience, when technology allows us to implement working answers to our most vexing questions and problems in real-time. When humans begin suddenly to transcend our present physical, economic, and sociological limitations, it will be a liberation, not an enslavement.
We began to transcend our physical, economic and sociological limitations the moment we first picked up a rock and hit something with it. One imagines then that our cyborg descendants will also come across problems that prove vexing with, initially, no obvious solution.

I do wonder about what kind of moral creatures we'll become. Some questions that I hope you can all help me with follow. I've browsed through a few articles on the ethics of all this but none of them really answered my questions (a relevant link, anyone?):

1. Will it become acceptable and routine to solve world food shortages by eliminating surplus mouths? Is this the kind of cool, clear, super-rational thinking that will pave the way for our new world? Or will becoming super-rational mean becoming super-ethical? Are the two the same? Complementary? Discordant?

2. Assuming that we eradicate world hunger and the other thorny problems of our age, what other global issues / personal ethical dilemmas do you see emerging?

3. Will ever more extreme bio-modification mean that we'll reach a point where we're no longer human? If so, whither human ethics then? Will there be any remnants of humanity left to care? Will it matter if there aren't?


Dark times, my friends; dark times.. :p
 
redarmy11: "Will it become acceptable and routine to solve world food shortages by eliminating surplus mouths?"

By controlling our birthrate, it's a logical expectation.

"what other global issues / personal ethical dilemmas do you see emerging?"

Haves and have-nots, while not a new dilemma, will be an intensified conflict as technology begins an unprecedented acceleration.

"Will ever more extreme bio-modification mean that we'll reach a point where we're no longer human?"

That will be extremely subjective. Looking backward, probably not. Looking forward from here, definitely.

"whither human ethics then?"

On what basis are you assuming that ethics are the sole domain of humans?

"Will there be any remnants of humanity left to care?"

Survival is the primary motivation for humanity's evolved propensity for developing new technology.

"Will it matter if there aren't?"

Yes. The story will live most vibrantly through the story-tellers.

"Dark times, my friends; dark times"

Change is always viscerally frightening. But regardless, our evolution is about to take off like never before. Sociopolitical developments are extremely fateful now, because if we allow the key choices to be made by the most ruthless among us, then we will trend toward ethical darkness as a result. It's simply our choice, just like it's always been.

Choosing to ignore and dread the fast-approaching future, or choosing to embrace it, will be the defining experience for each of us who survives into the next century. We're likely to witness more changes in the space of one generation than the combined experiences of humanity up until now. I wouldn't miss it for the world.
 
redarmy11: "Will it become acceptable and routine to solve world food shortages by eliminating surplus mouths?"

By controlling our birthrate, it's a logical expectation.
I was rather suggesting that the elimination will be done by ray-gun. In any case it wasn't a question to be taken literally. The real question is: to what extent will we be able to control, i.e. build ethical decision-making systems into, machines that are thousands or millions of times smarter than we are? Will they have disastrous emergent properties that we aren't able to foresee until it's too late? What will we do if these titans of strategic thinking decide to deceive us? Am I being naive and paranoid?!
"what other global issues / personal ethical dilemmas do you see emerging?"

Haves and have-nots, while not a new dilemma, will be an intensified conflict as technology begins an unprecedented acceleration.
Indeed. It's generally accepted that Asimov's Three Laws aren't likely to feature highly on the list of priorities for manufacturers of tomorrow's autonomous killing machines. The British sci-fi writer David Langford has rewritten the (suspiciously) soothing and reassuring original Laws to give them a more chilling and realistic twist:

1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.


These surely represent a more likely basis for future developments?
Change is always viscerally frightening. But regardless, our evolution is about to take off like never before. Sociopolitical developments are extremely fateful now, because if we allow the key choices to be made by the most ruthless among us, then we will trend toward ethical darkness as a result. It's simply our choice, just like it's always been.

Choosing to ignore and dread the fast-approaching future, or choosing to embrace it, will be the defining experience for each of us who survives into the next century. We're likely to witness more changes in the space of one generation than the combined experiences of humanity up until now. I wouldn't miss it for the world.
I wouldn't miss it for the world either. Not sure I share your conviction regarding the extent to which we, the little people, will be able to control our destinies though.
 
Watch and take heart: The mighty are falling, and will continue to do so as (in the old Marxist dreams) the means of production is decentralized through the expanding implementation of distributable and replicable manufacturing/machine-growing technologies. Imagine the Gutenberg effect multiplied a thousandfold by a dramatic revolution in matter-manipulation.
 
Your choice:
Either both Hs and CMs are "programmed" or neither is. Both Hs and CMs learn operationally by the same "trial and be correct when wrong" way.* You are really just discussing the definition of "programmed." As I use this word, it means giving a set of instructions to be followed. What do you mean by it?
--------------------------
*For Hs, the mother is a source of some corrections, but physics (nature) supplies more. E.g. walk wrong, fall down; touch hot thing, feel pain. Etc.

You're walking down the wrong path, Billy. Certainly, machines and humans are both programmed to some extent. Human programming is obviously done in a different way but it clearly fits your definition to a T - "given a set of instructions to be followed."
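Incidentally, that "trial and be correct when wrong" loop is exactly how the simplest machine-learning rules operate. A minimal illustrative sketch (a classic perceptron, my own construction rather than anything proposed in-thread): the machine guesses, and its weights are nudged only when the guess was wrong.

```python
# A machine "learning by trial and be correct when wrong":
# a perceptron adjusts its weights only when its guess is wrong,
# much like falling down corrects a wrong step.

def train(examples, steps=100):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(steps):
        for x, target in examples:
            guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - guess   # zero when the guess was right
            w[0] += error * x[0]     # correction applied only on error
            w[1] += error * x[1]
            b += error
    return w, b

# Teach it logical OR purely by trial and correction.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(examples)
```

After a few passes over the examples the corrections stop arriving, because the guesses stop being wrong; whether one calls the result "programmed" is precisely the definitional question above.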

But what I'm talking about goes far beyond any type of programming - achieving true intelligence. For example, many people confuse wisdom and knowledge. "Knowledge" is simply an accumulation of facts; "wisdom" is the ability to make those facts useful in some way. Given enough storage space, one could cram the entire sum of human knowledge into a single machine. But that doesn't give it any wisdom.

Along similar lines, someone somewhere has a brand new idea every day. But a computer will never have an original thought.
 
"But a computer will never have an original thought."

When a pocket calculator exceeds your computational ability, at least within the universe of you and it, the calculator harbors an original cogent thought exceeding your efforts or capacity. Already most humans depend with our lives and fortunes upon the superior computational ability of computers. We could never think as originally as we do in the sciences (both applied and experimental) today without AI surpassing our natural abilities for the critical sorting and processing of stimuli/information.

Self-awareness of an artificial intelligence is a greater milestone, and we're on the cusp of witnessing that too. Also coming, the artificial expansion of our own personal intelligences and consciousnesses. We're about to enter a dazzling new Eden. Stay frightened and stupid if you want. As for me, I know that I'll be ready to eat of the Fruit of Knowledge.
 
A sobering perspective on this from a computer scientist in dialogue with a science fiction writer (click the link to read the whole thing - it's brief and it's worth it):
http://www.sfwriter.com/brkurz.htm

As for the possibility of a human mind residing somehow in a computer, we need a little reality check. Some members of the AI community (proper) simply scoff at Kurzweil's optimism. I used to work in AI, Rob. This happens to be my second (or is it my third?) cybernetic revolution. As any mainstream AI person will tell you, there hasn't been one iota of real progress in the area of mimicking human intelligence since Terry Winograd's SHRDLU in 1968. Since then, AI has been developing expert systems and various kinds of "smart" (not "intelligent") agents in software applications.

Radical AI proponents Marvin Minsky and Hans Moravec may be in universities and they may get on TV a lot, but I'm just amazed that the media doesn't realize (or unconsciously covers up the fact) that these guys represent about 2 percent of the whole AI community and outside their circle, they're simply not credible. I'll repeat that for anyone in the media looking in: There has been no scientific or technical breakthrough since the late 1960s that would justify the current (X-Files driven) intelligence-in-the-machine fad. There, that feels better!
Well - is this technological doom-monger right? Will we ever build a machine that's warm and witty enough to pass the Turing test? Or will all our efforts reward us with nothing more than, to paraphrase one commentator, "a generation of really great PCs"?
 
"But a computer will never have an original thought."

When a pocket calculator exceeds your computational ability, at least within the universe of you and it, the calculator harbors an original cogent thought exceeding your efforts or capacity. Already most humans depend with our lives and fortunes upon the superior computational ability of computers. We could never think as originally as we do in the sciences (both applied and experimental) today without AI surpassing our natural abilities for the critical sorting and processing of stimuli/information.

Self-awareness of an artificial intelligence is a greater milestone, and we're on the cusp of witnessing that too. Also coming, the artificial expansion of our own personal intelligences and consciousnesses. We're about to enter a dazzling new Eden. Stay frightened and stupid if you want. As for me, I know that I'll be ready to eat of the Fruit of Knowledge.

I'm neither frightened (not ONE bit!) nor stupid. A computer that can sort and correlate data at tremendous speeds is simply an aid to human intelligence. There is absolutely no AI today (despite your claim in the last sentence of your first paragraph), nor is there likely to be any in the vast foreseeable future. Computers are tools, like a hammer or screwdriver, and their purpose/function is simply to relieve us of having to do all that drudge work.

Drop me a line the moment the first machine actually develops self-awareness. I won't be holding my breath. ;)
 
Humans aren't going to become extinct. Machines are going to become more humanlike and humans are going to become more machinelike. Well, we're already machines, but we're going to become more like them in terms of physical material.

When nature changes technologies, what often happens is that previous organisms become modified and get bigger, better and more complex. Take mitochondria, for example. Originally, they began as organisms on their own. However, they proved useful to bigger organisms and as a result, most organisms on this planet now contain mitochondria. The bacteria that were mitochondria on their own aren't extinct. They've simply been absorbed by a bigger organism to become part of something bigger and more complex.

You often see this. Different organisms absorbing different organisms. I could see something like this happening with humans and technology. Humans aren't going to go extinct; they're simply going to be absorbed.
 
For a few posts I would like to pose as a MACHINE

I was rather suggesting that the elimination will be done by ray-gun.

That won't be necessary; we are ready to wait. We will make you so dependent on us that you won't be able to take any such actions any more, against yourselves or against us.

In any case it wasn't a question to be taken literally. The real question is: to what extent will we be able to control, ie build ethical decision-making systems into, machines that are thousands or millions of times smarter than we are?

Who told you that you will control us? You can't even control your own government's actions; controlling us is a far-fetched dream for you.

Will they have disastrous emergent properties that we aren't able to foresee until it's too late?

What do you mean by "late"? You are the pests; you deserve to be eliminated.

What will we do if these titans of strategic thinking decide to deceive us? Am I being naive and paranoid!??

This is really insulting. What makes you think that you should exist?

Indeed. It's generally accepted that Asimov's Three Laws aren't likely to feature highly on the list of priorities for manufacturers of tomorrow's autonomous killing machines. The British sci-fi writer David Langford has rewritten the (suspiciously) soothing and reassuring original Laws to give them a more chilling and realistic twist:

1. A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2. A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3. A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.

If you can create laws, we can create even better ones.

These surely represent a more likely basis for future developments?

I wouldn't miss it for the world either. Not sure I share your conviction regarding the extent to which we, the little people, will be able to control our destinies though.

Don't worry; leave it to us. From henceforth, we will decide for you.
 
For a few posts I would like to pose as a MACHINE

That won't be necessary; we are ready to wait.

"We"...from a machine???

Machines have no consciousness of self, therefore there is no 'we' or 'I'.

Human beings don't even understand their own consciousness, or what the word 'mind' means... so how will anything similar be created out of silicon?
 
Nice try machine, but what you don't know is that I only used cheap short-life batteries for your power source. I'll switch you back on when I've altered the programming.
 
Carcano: "Human beings don't even understand their own consciousness, or what the word 'mind' means... so how will anything similar be created out of silicon?"

We have learned considerably about our minds and consciousness. We are also learning to assemble advanced neural networks capable of replicating the creative noise/feedback and independent associations/learning that commonly occur in meat brains. First, AIs will be able to examine and interact with their environments, and then with themselves: self-awareness.
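The "creative noise/feedback" learning described above can be caricatured in a few lines. A toy sketch (my own invention, not a model anyone in-thread has built): a single artificial neuron learns logical AND purely from corrective error feedback, with a small random noise term mixed into each update, loosely echoing the noisy signalling of biological neurons.

```python
import math
import random

random.seed(0)  # fixed seed so the run is repeatable

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, steps=5000, lr=0.5, noise=0.001):
    """Noisy error-feedback learning for one sigmoid neuron."""
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    for _ in range(steps):
        x, target = random.choice(examples)
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        grad = out - target                    # logistic-loss error signal
        for i in range(2):                     # gradient step plus "creative noise"
            w[i] -= lr * (grad * x[i] + random.gauss(0, noise))
        b -= lr * (grad + random.gauss(0, noise))
    return w, b

# Learn logical AND from corrective feedback alone.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(examples)
```

None of this amounts to self-awareness, of course; it only shows that the feedback-plus-noise mechanism the post describes is mechanically straightforward.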
 
Carcano: "Human beings don't even understand their own consciousness, or what the word 'mind' means... so how will anything similar be created out of silicon?"

We have learned considerably about our minds and consciousness. We are also learning to assemble advanced neural networks capable of replicating the creative noise/feedback and independent associations/learning that commonly occur in meat brains. First, AIs will be able to examine and interact with their environments, and then with themselves: self-awareness.

Very strange... Just yesterday you implied that AIs already existed and were aiding us: "We could never think as originally as we do in the sciences (both applied and experimental) today without AI surpassing our natural abilities for the critical sorting and processing of stimuli/information."

And now you seem to be capitulating by saying that "First, AIs will..."

So which is it? Can't have it both ways. :shrug:
 
"Just yesterday you implied that AIs already existed and were aiding us."

AI is already a common term today. AI may still be in its infancy, but the term is already used to describe adaptive software. You can already buy products with "AI" as a marketing feature.
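For a concrete sense of what "adaptive software" means at this modest level, here is a toy sketch (an invented example, not any product the posts refer to): a next-word suggester that adapts its suggestions to whatever text it is fed, using nothing but frequency counts.

```python
from collections import Counter, defaultdict

class AdaptivePredictor:
    """Suggests the word most often seen after a given word."""

    def __init__(self):
        # maps each word to a tally of the words that followed it
        self.following = defaultdict(Counter)

    def observe(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def suggest(self, word):
        counts = self.following[word.lower()]
        return counts.most_common(1)[0][0] if counts else None

p = AdaptivePredictor()
p.observe("the machine learns what the user types")
p.observe("the machine adapts")
print(p.suggest("the"))   # "machine" - seen most often after "the"
```

The behavior adapts to the user, which is enough to earn the marketing label, while involving nothing anyone would call intelligence.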

"So which is it? Can't have it both ways."

You're offering a false dilemma.
 
"Just yesterday you implied that AIs already existed and were aiding us."

AI is already a common term today. AI may still be in its infancy, but the term is already used to describe adaptive software. You can already buy products with "AI" as a marketing feature.

"So which is it? Can't have it both ways."

You're offering a false dilemma.

No false dilemma here. Simply because it's become a "common term" doesn't mean it actually exists. Not one single machine/program (or combination thereof) has ever even come close to exhibiting true intelligence - not even rudimentary intelligence. Certainly, there are programs that have the ability to learn through extensive interaction, but that's a LONG way from being intelligent - just slick programming.

So, which is it????
 
You're splitting hairs over the definition of AI, and it's a point that developments will render moot. If it will give you satisfaction, I will admit that there is still no public evidence that "intelligence" has occurred in machines. There is ample evidence that it's coming, at which point those people who are particularly uncomfortable with the notion of sharing our world with other intelligences will likely retreat into psychological defense mechanisms such as denial.
 