AI and the singularity

Oh, I agree.
IMO, unfortunately "greed" is also a natural extension of a mathematical survival technique.
Darwinian evolution is a mathematical process, with a built-in restraint of "natural selection".

An interesting debate I would like to follow...
Should society be regulated based on the example of Darwinian evolution as a known absolute regulatory mechanism?
Is regulation itself going against Darwinian evolution?
A panel with the likes of Niel D~T, Richard D, Brian C, Bill G, Steve W and a few others would make an amazing TV series.

"Divine Evolution"

OTOH, why are we building AIs, with say the intellectual ability of an ant?

Removing humans from the labour market, by taking away their only means of generating food, clothing, housing, breeding, etc., is a regulation process.
Producing robots is a process of producing regulation of the labour market.

That needs to be addressed as a socio-economic-geo-political-cultural mandate to prevent extinction by regulation.
The regulator (robot) must itself be regulated to prevent extinction as a result of regulation.
 
"Divine Evolution"
Personally I would like to call Evolution a "probabilistic mathematical function", which can be statistically shown to exist by 14 billion years of development of ever greater complex patterns, eventually resulting in the advent of self-replicating polymers. Mutations in this process (random assimilation of compatible chemicals) may have resulted in a survival advantage and the beginning of competition to dominate, i.e. Darwinian Evolution.

This does not necessarily mean that simpler structures cease to exist altogether; they may well continue to exist, but will take a different path of evolutionary development.

A great example IMO, is the fusion of chromosome 2 in humans, which marked the split from our hominid precursors.
All great apes apart from man have 24 pairs of chromosomes. There is therefore a hypothesis that the common ancestor of all great apes had 24 pairs of chromosomes and that the fusion of two of the ancestor's chromosomes created chromosome 2 in humans. The evidence for this hypothesis is very strong.

hum_ape_chrom_2.gif

http://www.evolutionpages.com/chromosome_2.htm

If it were not for man's interference by destruction of habitat and hunting for "bushmeat", the great apes would be thriving instead of declining. The same holds true for whales
What springs to mind when you think of a whale? Blubber, blowholes and flukes are among the hallmarks of the roughly 80 species of cetaceans (whales, dolphins and porpoises) alive today. But, because they are mammals, we know that they must have evolved from land-dwelling ancestors.
Read more: https://www.smithsonianmag.com/science-nature/how-did-whales-evolve-73276956/
and coral reefs which provide a rich environment for a great variety of fish and other life forms.
The bio-diversity of a coral reef is why it is often called the rainforest of the sea. Coral reefs are home to almost one-fourth of the total species living in the sea. Reefs are mostly found near the coastline in the tropics, but deep-sea coral reefs can also be found....
https://aquaworld.com.mx/en/how-are-coral-reefs-formed/

"mess with Mother Nature and she will exact a price for destabilizing local and/or global ecosystems"

We cannot regulate global functions; we can only disturb the symmetries and balance which took Earth some 4 billion years to establish by self-organization of the ecosphere, due to Earth's inherent potentials derived from chemical reactions and our stable proximity to the sun.

The rare probabilistic events were due to "outside" interferences, such as the collision with Theia.

300px-Big_Splash_Theia.gif

https://en.wikipedia.org/wiki/Theia_(planet)

That collision may have introduced several elements (such as gold) into the earth's chemistry, and may have contributed to the eventual emergence of self-replicating polymers, which continued to organize into more complex biochemical structures over the following millions of years.

This is why I see the creation of life as a probabilistic, but statistically demonstrable, process of trial and error over some 2 trillion quadrillion quadrillion quadrillion (about 2 x 10^57) chemical interactions on earth alone (Hazen), in spite of the early chaotic state of the earth's atmosphere, which created the great extinction epochs, where only the hardiest or sheltered organisms survived and, when conditions settled, continued to populate the earth.

I see this as a "fundamental" process and, as Hazen proposed, there may have been other ways to form self-duplicating biomolecules, but they all must have had something like it in common.
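The trial-and-error argument above can be sketched with simple probability arithmetic. This is only a toy illustration, not a chemistry model: the per-interaction probability below is a made-up number, and the trial count is the ~2 x 10^57 figure quoted above.

```python
# Toy illustration (not a chemistry model): an extremely unlikely event
# becomes near-certain given enough independent trials.
import math

def chance_of_at_least_one(p, n):
    """P(at least one success in n independent trials of probability p each).

    Computed as 1 - (1 - p)^n in log space, so it survives
    astronomically large n without underflow in the exponent.
    """
    return 1.0 - math.exp(n * math.log1p(-p))

# Hypothetical per-interaction probability of producing a self-replicating
# polymer, and the number of chemical interactions quoted above (Hazen).
p = 1e-50
n = 2e57
print(chance_of_at_least_one(p, n))  # effectively 1.0: all but certain
```

The point of the sketch is only that multiplying a minuscule per-trial chance by an astronomical number of trials can still make the overall outcome statistically near-inevitable.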
 

Yes, of course, the probability theory of life is simple, but complex.

But how does this account for Arctic worms?
 
But how does this account for Arctic worms?
They are simple.

Water bears (tardigrades) are also very simple.
The name Tardigrada (meaning "slow stepper") was given by the Italian biologist Lazzaro Spallanzani.[7] They have been found everywhere: from mountain tops to the deep sea and mud volcanoes;[8] from tropical rain forests to the Antarctic.[9] Tardigrades are one of the most resilient known animals,[10][11] with individual species able to survive extreme conditions that would be rapidly fatal to nearly all other known life forms, such as exposure to extreme temperatures, extreme pressures (both high and low), air deprivation, radiation, dehydration, and starvation. About 1,150 known species[12][13] form the phylum Tardigrada, a part of the superphylum Ecdysozoa. The group includes fossils dating from 530 million years ago, in the Cambrian period.
220px-SEM_image_of_Milnesium_tardigradum_in_active_state_-_journal.pone.0045682.g001-2.png

https://en.wikipedia.org/wiki/Tardigrade

And of course there are the extremophiles
An extremophile (from Latin extremus meaning "extreme" and Greek philiā (φιλία) meaning "love") is an organism that thrives in physically or geochemically extreme conditions that are detrimental to most life on Earth.
300px-Grand_prismatic_spring.jpg


https://en.wikipedia.org/wiki/Extremophile

IMO, the fact that these organisms formed in extreme environments may well have prevented them from evolving beyond the niches where they thrive now.
 
On topic, what will the advent of "useful" quantum computers mean for AI and machine learning?

The current paucity of quantum algorithms (perhaps due largely to our lack of intuition) is being addressed by computer scientists (or at least by those who can grasp what a quantum algorithm is and how it differs from classical algorithms). One way to think about the difference: a classical algorithm is like a musical composition where single notes are played in sequence, whereas a quantum algorithm is like playing all the notes at the same time so that they interfere; the interference "is" the algorithm, so to speak.
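The "all the notes at once" picture can be made concrete with a minimal single-qubit sketch, assuming nothing more than plain Python: a qubit state is a pair of amplitudes, and applying a Hadamard gate twice makes the two branches interfere so the state returns to where it started.

```python
# A qubit state as a pair of amplitudes; a "gate" mixes the amplitudes
# so that computational branches can interfere.
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a, b)."""
    a, b = state
    s = 1.0 / math.sqrt(2.0)
    return (s * (a + b), s * (a - b))

# Start in |0>. One Hadamard creates an equal superposition; a second
# Hadamard makes the |1> branches interfere destructively, returning the
# state to |0>. The interference pattern is what does the computational work.
state = (1.0, 0.0)
state = hadamard(hadamard(state))
# state is now (1.0, 0.0) up to floating-point rounding
```

The sequential simulation here is of course classical; a real quantum device would realize the superposition physically rather than as two numbers in memory.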

But our current understanding of machine learning, and of AI in general, is entirely classical; even designing a quantum computer follows classical rules of physics.
So what will happen when we start to use quantum systems in AI (already happening)?

Does that mean we will somehow need to "lose" our classical perspective, because we will have handed the problem over, so to speak? Will we in that case be able to predict what such systems can learn? Will we be able to implement algorithms on quantum computers that can faithfully simulate natural systems like black holes, or even living cells (nobody really knows just now)?

What could that mean? I'd say right now we have no real idea. But of course we don't; we've always been unable to predict how new technology will change the world as we know it.
 
But of course we don't; we've always been unable to predict how new technology will change the world as we know it.

So, intuitively, we can only develop new technology based on existing knowledge, and so based on what we know about the technology we already have; that is to say, on theories which "explain" it, its "working", or its "usefulness".
Which means it's based on what we conceive of as "information" we can collect from things we designate as "outputs": abstract signals which encode this information.

But physically, information is there whether or not we can collect it. In other words, the nature of information is pretty much a decision we make, based on theories of its collection and storage in some kind of archive. We invoke theories of information transmission from senders to receivers: "observers" of information. Thus, quantum information itself cannot "escape" this paradigm, or so we think.

So we're, in some sense we don't really understand, constrained to a context where information is encoded in abstract signals. Because we apply an input-to-output context, we have to choose which is which, and we have to choose an encoding.

The thing is that a quantum of energy can be encoded in many ways. When an electron "absorbs" a photon, that is equivalent, in terms of the energy, to an electron "emitting" a photon: the encoding is either a change in the electron's momentum or the (equivalent) photon. So our decision about which is an input and which is an output appears to depend on the time direction (we don't understand photons as being able to "encode momentum" into the past).
 
Apart, then, from the problems we still have with constructing large enough "qubit" ensembles, and then encoding them all at the same time (which it seems will only ever be statistically possible, such that we have a "fiducial" signal of some kind), we don't yet understand why we need to connect the outputs back to the inputs (in a virtual way, i.e. not physically): everything, it seems, must be in a superposition, even when there is no detectable input or output.

How strange. As if something in the past "knows about" something in the future. Or are we just somehow confused about the nature of information and its transmission and storage?
Moreover, a qubit only has to stay in a "stable" state long enough to "store" a signal. And the output signal corresponds to correlations between outputs, not so much the outputs themselves, since individually these are random.
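The idea that a signal can live in correlations while each output alone looks random can be sketched classically. This is a toy model, not real qubit physics: two bit streams, each individually indistinguishable from noise, whose XOR correlation carries the message.

```python
# Toy model (not real qubit physics): each output stream alone is uniform
# random noise, but the XOR correlation between the two streams carries
# the message.
import random

def correlated_streams(message_bits, rng):
    a = [rng.randint(0, 1) for _ in message_bits]   # pure noise
    b = [x ^ m for x, m in zip(a, message_bits)]    # noise XOR message
    return a, b

rng = random.Random(0)
msg = [1, 0, 1, 1, 0]
a, b = correlated_streams(msg, rng)
recovered = [x ^ y for x, y in zip(a, b)]  # the correlation, not a or b alone
print(recovered == msg)  # True
```

Looking at stream `a` or stream `b` by itself tells you nothing; only comparing them bit by bit recovers the signal, which is loosely the sense in which measurement correlations, rather than individual outcomes, carry the result.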
 
About the question of what a quantum computer could solve (even one we haven't built yet, i.e. one that exists in our "technological" future, along a classical causal chain of events: an extension of a chain reaching into the past that includes modern electronics, the transistor and other semiconducting devices, and a "class" of materials that now includes graphene, many kinds of doped silicon and other metallic elements, and so on):

If we, let's say, successfully predict that the technological problems with fabrication and subsequent operation of large-scale quantum ensembles, such as n x n arrays of qubits, will all be conquered, then would such a computer be able to solve problems like (ta da!) the black hole paradox?

If the answer is yes, what will that mean? Will the answer depend on the "context switch": a quantum computer with a (large enough) number of qubits can "faithfully" simulate a black hole (akin to a physics model on a classical computer doing this, say in a video game, which projects, as it were, images onto an n x m array of pixels) . . .

If instead we see problems with the simulations, we might have to assume it can't be done, that we will never have the technology.
Or what? How might we conceive of testing a simulation of a black hole? What would the architecture of the quantum computer be so that we can take classical measurements? Moreover, that means we must also have classical inputs (unless we want to throw causality away).
 
Hameroff believes, and provides some compelling evidence, that quantum processing of information may well be found in the microtubules of living organisms.

He has a cooperative working relationship with Roger Penrose (Quantum Universe), and I found this lecture very interesting and thought-provoking.
 
Yet I disagree.
The brain is made from life energy and material elements.
Explain a "life energy" which does not function as quantum mechanics.

Yes, the brain is material, but thoughts are fleeting immersive experiences, caused by the energetic stimulation of quantum wave collapse in the microtubules within the brain, which act as tiny quantum computers and also form short- and long-term chemical memories, which are used to compare and identify later experiences as cognitive experiences.
This process is explained by Hameroff and Penrose in their hypothesis of the quantum nature of Orch-OR (orchestrated objective reduction).

But even though the brain has billions of neurons, it can only process a small portion of the total amount of information contained in the stream of photons. The result is a "best guess", but as the entire system is flexible and open to instant modification, it allows for learning new interpretations of the received information, as Seth demonstrated with the verbal example of
"I think breakfast is a terrible idea".
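The "best guess, open to instant modification" idea can be sketched as a tiny Bayes-rule update. This is an analogy only, not a model of neurons or microtubules, and all the numbers are made up for illustration.

```python
# Toy Bayes-rule sketch: a "best guess" belief that updates instantly as
# each new piece of evidence arrives. Numbers are illustrative only.
def update(prior, likelihood_if_true, likelihood_if_false):
    # Bayes' rule: posterior = prior * L(true) / total evidence
    num = prior * likelihood_if_true
    den = num + (1.0 - prior) * likelihood_if_false
    return num / den

belief = 0.5                       # no idea yet what was said
belief = update(belief, 0.6, 0.4)  # ambiguous audio, slight evidence
belief = update(belief, 0.8, 0.3)  # context reinforces the interpretation
print(round(belief, 3))  # 0.8
```

The point is only structural: the current guess is never final, and each new signal re-weights it immediately, which is roughly how Seth describes perception as controlled "best guessing".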
 

https://www.ted.com/talks/anil_seth_how_your_brain_hallucinates_your_conscious_reality
Or corrected link;
https://soundcloud.com/royal-institution/sets/ri-science-podcast
 
But even as the brain has billions of neurons, it can only process a small portion of the total amount of information contained in the stream of photons. The result is a "best guess" but as the entire system is flexible and open to instant modification, it allows for learning new interpretations of the received information as Seth demonstrated with the verbal example of;
"I think breakfast is a terrible idea" .

Information is not just about photons.

Smell, touch, taste, and hearing are based not on photons, but on molecules of matter.
 
Computers are based on physical things existing: things that are physical in the Universe (galaxies, suns, planets, etc.), energy and matter.

Therefore the three-dimensional, physical things, the Universe, existed before computers.
 