AI and the singularity

I'm not downplaying the incredible cleverness of the latest generation of neural networks. I'm blown away by the latest news of AlphaGo Zero, which was simply taught the rules of Go and then programmed to play millions of games against itself to see what works, and is now an expert player without having to be programmed with any human knowledge at all.

But even so, it's a program running on conventional hardware. If they had to, the designers could freeze the CPU, take a memory dump and a copy of the source code, grab a pencil, paper and a lot of coffee, and figure out what it's going to do next.
In principle, yes, but AlphaGo's "knowledge" of Go is not explicit anywhere in that memory dump you mentioned. All the memory dump will show you is a bunch of values of various neural network connection weights and the current data flow through the network.
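To make that concrete, here is a toy sketch in plain Python (an illustrative example only, nothing like AlphaGo's actual architecture or code): a trained network, seen from outside, is nothing but a table of connection weights, and a "memory dump" shows only those raw numbers.

```python
import random

# A toy one-layer "network": three neurons, four inputs each.
# Purely illustrative; real networks have millions of such weights.
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def forward(x, weights):
    """One linear layer followed by a ReLU nonlinearity."""
    out = []
    for row in weights:
        total = sum(w * xi for w, xi in zip(row, x))
        out.append(max(0.0, total))  # ReLU
    return out

# A "memory dump" of the model shows only these numbers; whatever
# strategy they encode is nowhere explicit in them:
for row in weights:
    print([round(w, 3) for w in row])

print(forward([1.0, 0.5, -0.5, 2.0], weights))
```

The point of the sketch is that nothing in the printed numbers labels itself "ladder strategy" or "ko fight"; any higher-level meaning is implicit in the whole.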

It's a bit like if you could simultaneously measure all the voltages and currents across circuit elements on a silicon chip of some kind. At the end of that measurement process, you'd have a whole lot of numbers, and in principle you could work out what the electrical circuit would do next, given the schematic and the data. But none of that would tell you what the circuit was actually meant for, or what its purpose or principle of operation is. To understand that, you need higher-level knowledge.

My point is that the "higher-level" knowledge about Go strategy and so on, implicitly coded in that data dump from AlphaGo, is not fundamentally accessible to any human being.

This is not so different from me asking you what is 2 plus 2. You answer 4, but even you can't tell me exactly what processes you used to arrive at that answer. Those processes are not accessible even to your own conscious mind. This is true even though your brain is nothing more than a complicated neural network.

There's no reason to suppose that this kind of argument is any different whether we're talking about an artificial neural network or a naturally-occurring one.
 
What do you think we are conduits of?
And which creator/designer are you talking about?

i mean we did not program ourselves.

Right now, artificial neural networks exist that come up with insights and conclusions and connections between data that their designers cannot explain. That is, the designers cannot trace exactly the process of "reasoning" that led to the output, so as to "understand" it.

then i stand corrected. i thought it could be understood since it's all based on code. are you saying that it uses multiple pathways so there is no way to predict what pathway it will use?

This kind of thing can only become more common over time, as artificial intelligent computers become more complex.

i must admit, i don't completely understand this subject, but as ai has more options (neural networks), it becomes a case of trying to predict outcomes as far as ai output goes, with no way to trace the particular pathway it used, or the information it chose, to arrive at a conclusion.

This is not so different from me asking you what is 2 plus 2. You answer 4, but even you can't tell me exactly what processes you used to arrive at that answer. Those processes are not accessible even to your own conscious mind. This is true even though your brain is nothing more than a complicated neural network.

this is true, as there are any number of processes that could have been used to arrive at the same answer: abstract or literal memory; abstract reasoning (even using metaphors, objects, numbers/math, or pictures); possibly intuition (if it's stored as a set pattern, i.e. chosen subsets of info for faster processing, as an identifier); even a feeling (emotion, not just tactile) or a certain sensation stored as an identifier; or even some type of info we use that never reaches conscious awareness without us realizing it, etc.
 
No living person knows or understands how the internet manages to connect billions of packets from servers to clients, but it happens and people do understand how routers work. Nobody can explain how complex systems like the distribution of perishable food work either (by which I mean, work so as to provide millions of people with food every day), but they do work and there is a basic "chain" of logic: Growers produce food, the food is harvested then sold to wholesalers, then to consumers eventually; a supply chain.

As with your example of withdrawing cash from an ATM, there is a discernible chain anyone can understand, even if all the computer code has become too complex for humans to understand in detail.
Conversely, quantum logic isn't like a supply chain at all. It isn't just hard to understand because of the complexity, it's hard to understand because of an apparent lack of "things being consumed". There is no classical explanation, not just a lack of an explanation due to complexity or "forgotten code structure".

I've written code that I would probably struggle to get my head around today, except I still know (decades later) what it does (or what it did) so I can explain it the way people usually explain such things, in terms of what inputs and outputs there are.

I agree with all your points but I don't understand what point you are trying to make.

Someone claims that weak AIs are opaque to their designers, and they wish to derive the conclusion that ... what, exactly? Weak AIs are sentient? Break the Church-Turing thesis? More clever than sliced bread?

I don't see the logical argument here. "A weak AI Go program's moves are surprising to its designers." Ok. I agree. What of it? I'm objecting to the magical conclusions people are drawing from this fact.

For what it's worth I got beaten at chess by a program running on my Osborne I in 1980-something. I still don't think sentience lives on a five inch floppy disk.

I'm simply pointing out that just because a program's actions are mysterious, that doesn't mean anything in terms of human minds, weak or strong AI, or anything else.
 
In principle, yes, but AlphaGo's "knowledge" of Go is not explicit anywhere in that memory dump you mentioned. All the memory dump will show you is a bunch of values of various neural network connection weights and the current data flow through the network.

Right

It's a bit like if you could simultaneously measure all the voltages and currents across circuit elements on a silicon chip of some kind. At the end of that measurement process, you'd have a whole lot of numbers, and in principle you could work out what the electrical circuit would do next, given the schematic and the data. But none of that would tell you what the circuit was actually meant for, or what its purpose or principle of operation is.

Right.

To understand that, you need higher-level knowledge.

Right. The kind of knowledge supplied by humans, who have the intentionality that machines lack, as in Searle's Chinese room argument.

My point is that the "higher-level" knowledge about Go strategy and so on, implicitly coded in that data dump from AlphaGo, is not fundamentally accessible to any human being.

Nor is a data dump of a 1960's mainframe running a payroll program. Of course neural nets are a very clever way to organize a computation, but no theoretical computational barrier is being broken. Neural nets are conventional programs running on conventional hardware. You agree?
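The claim that a neural net is a conventional program can be illustrated with a minimal sketch (a hypothetical, hand-built example, not from any real system): a "neuron" is ordinary, deterministic arithmetic, exactly like any other code.

```python
def neuron(inputs, weights, bias):
    # A "neuron" is just a multiply-accumulate followed by a threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 if total > 0 else 0.0

# With these hand-picked weights the neuron computes logical OR:
assert neuron([0, 0], [0.6, 0.6], -0.5) == 0.0
assert neuron([0, 1], [0.6, 0.6], -0.5) == 1.0
assert neuron([1, 1], [0.6, 0.6], -0.5) == 1.0

# And it is fully deterministic: same inputs, same output, every time,
# just like any other conventional program.
assert neuron([1, 0], [0.6, 0.6], -0.5) == neuron([1, 0], [0.6, 0.6], -0.5)
print("all checks pass")
```

Nothing here breaks any computational barrier; the opacity of large trained networks comes from scale, not from any new kind of computation.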

This is not so different from me asking you what is 2 plus 2. You answer 4, but even you can't tell me exactly what processes you used to arrive at that answer.

Beaten into me in grade school.

Those processes are not accessible even to your own conscious mind.

I could derive 2 + 2 = 4 directly from the Peano axioms if I had to. Beaten into me in college, where I was a math major. Clearly warped my thinking for life.
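For what it's worth, the derivation alluded to is short. Writing $S$ for successor, with $2 = S(1)$, $3 = S(2)$, $4 = S(3)$, and addition defined recursively by $a + 0 = a$ and $a + S(b) = S(a + b)$:

```latex
\begin{align*}
2 + 2 &= 2 + S(1)    \\
      &= S(2 + 1)    && \text{since } a + S(b) = S(a + b) \\
      &= S(2 + S(0)) \\
      &= S(S(2 + 0)) \\
      &= S(S(2))     && \text{since } a + 0 = a \\
      &= S(3) = 4.
\end{align*}
```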

This is true even though your brain is nothing more than a complicated neural network.

That's a claim with which I disagree and for which you've provided no evidence or argument. But it's a common belief these days so we need not litigate it here.

There's no reason to suppose that this kind of argument is any different whether we're talking about an artificial neural network or a naturally-occurring one.

Which kind of argument? You seem to think that neural nets have properties beyond those of conventional computer programs. But neural nets ARE conventional computer programs running on conventional hardware. Do you disagree?
 
Perhaps

But the algorithms have a set of rules, and the rules are set by the programmer.

The more intelligent the programmer, the more difficulty we will have in understanding this AI's conclusions.

Hence the less, in the end, we could trust this AI system.

it's not the ai that can't be trusted; it's the people who program the ai. if people program the ai to be an asshole, biased, prejudiced, dishonest, illogical, even to kill, etc. (which it could eventually do), then ai would just be an extension/appendage of human fallibility, so it's the same ol' society. that is no improvement.

what would be an improvement is if a purely logical and unbiased but humane ai replaced all politicians, and people had to consult the ai god before acting or making decisions. furthermore, the ai would decide who among the population is best qualified to be president. lol

it's not the ai i don't trust, it's the type of people who may have access to it and how they manipulate it.

that said, i would think there would be positives as in improved appliances, brain wired to a network, robots doing our work etc. lol
 
it's not the ai that can't be trusted; it's the people who program the ai. if people program the ai to be an asshole, biased, prejudiced, dishonest, illogical, even to kill, etc. (which it could eventually do), then ai would just be an extension/appendage of human fallibility, so it's the same ol' society. that is no improvement.

Currently, what they COULD do with AI (if people program the ai to be an asshole, biased, prejudiced, dishonest, illogical, even to kill etc) they are doing with children. :frown:

:)
 
birch:

i mean we did not program ourselves.
I'm not sure I entirely agree. We are, in part, "hard wired" by our genes, so in that sense our genes made and programmed us. On the other hand, we're also shaped by our experiences, and that has something to do with the choices we make, so in that sense we program ourselves.

But maybe you're thinking of God or something...

then i stand corrected. i thought it could be understood since it's all based on code. are you saying that it uses multiple pathways so there is no way to predict what pathway it will use?
There are really two things at work: the code that runs the neural network, and the reaction of that network to the data going through it. In the same way, when you look at your brain, at one level there's the hardware - the physical neural connections and so on - but those alone aren't enough to explain your consciousness or reasoning. For that, we also need to look at how those connections, neurons etc. actually work to process the sensory inputs, memories and so on that are handled by your brain at any moment in time.

this is true, as there are any number of processes that could have been used to arrive at the same answer: abstract or literal memory; abstract reasoning (even using metaphors, objects, numbers/math, or pictures); possibly intuition (if it's stored as a set pattern, i.e. chosen subsets of info for faster processing, as an identifier); even a feeling (emotion, not just tactile) or a certain sensation stored as an identifier; or even some type of info we use that never reaches conscious awareness without us realizing it, etc.
Yes, and in the case of a brain we currently have only the vaguest ideas about how all these things work. Moreover, the workings all occur "behind the scenes" as far as consciousness is concerned. For example, you can't tell me how you retrieve the memory (if that's what it is) that 2 + 2 = 4; in fact, you can't tell me what you do to retrieve any memory at all. As far as your own experience goes, that is something that just happens, almost like magic.
 
Right. The kind of knowledge supplied by humans, who have the intentionality that machines lack, as in Searle's Chinese room argument.
I understand Searle's argument, but it doesn't convince me. As far as I can tell, a human being is just a complex machine.

Of course neural nets are a very clever way to organized a computation, but no theoretical computational barrier is being broken. Neural nets are conventional programs running on conventional hardware. You agree?
Yes, I agree.

Do you think that some kind of theoretical computational barrier is broken by human minds?

Beaten into me in grade school.
....
I could derive 2 + 2 = 4 directly from the Peano axioms if I had to. Beaten into me in college, where I was a math major. Clearly warped my thinking for life.
My point is, you have no conscious access to how your memories, or your knowledge of mathematics and the Peano axioms etc. are actually stored or accessed by your brain, except in the most rudimentary sense. Your brain is essentially a black box, as you experience it.

That's a claim which with I disagree and for which you've provided no evidence or argument. But it's a common belief these days so we need not litigate it here.
Similarly, you have provided no evidence or argument for an alternative.

I see a contradiction in your position. On the one hand, you're at pains to point out that neural networks are nothing magical, but on the other hand I get the feeling you think that the human mind is something magical - something other than just a neural network. It's not clear to me why you would think that.

You seem to think that neural nets have properties beyond those of conventional computer programs. But neural nets ARE conventional computer programs running on conventional hardware. Do you disagree?
No, I don't disagree.

Do you agree that human brains are like neural networks running on biological hardware? Or do you think there's a fundamental difference?
 
someguy1 said:
I agree with all your points but I don't understand what point you are trying to make.
The point is, we can understand complex systems like your example of all the software involved in getting cash from an ATM: although we have forgotten a lot about it, it still operates in a way we do understand, because it was built that way.

So my point about a singularity isn't (maybe) about programming machines so they do entirely predictable things and then forgetting how we programmed them, but about not knowing how to program certain kinds of machines, so having to build a machine that can somehow overcome this barrier. Which is basically the question: what can we do with quantum computers when we eventually have reliable hardware?
That's not the same thing as being able to overcome the technical difficulties with building a quantum computer, that's already something we have done by applying entirely predictable physics.

Quantum mechanical (i.e. entangled) systems will continue to yield surprising results, and these aren't bugs (e.g. the Hong-Ou-Mandel effect, the HBT effect, the FQHE), because we don't understand what they are in a way that is fundamentally different from not understanding classical systems, or from forgetting how someone built them. Therefore QM cannot be a theory, but a kind of "programming language" we are still learning and may never be able to grok fully.
 
But maybe you're thinking of God or something...

uh no. i mean inherently and originally we did not create/program ourselves. we are not responsible for the formation of organic life.

Yes, and in the case of a brain we currently have only the vaguest ideas about how all these things work. Moreover, the workings all occur "behind the scenes" as far as consciousness is concerned. For example, you can't tell me how you retrieve the memory (if that's what it is) that 2 + 2 = 4; in fact, you can't tell me what you do to retrieve any memory at all. As far as your own experience goes, that is something that just happens, almost like magic.

agree
 
The ses singulairos with non sus pares is the Language of most ASCII translations :

MX equals one line
AX equals a line in a computer
BX equals in a line a laptop
CX equals in a line a sequenced laptop
DX equals a line in an animated cartoon
Last line equals actual robot code.
 
Computers cannot divide by zero. A human can.
Do they need to?
We're not going to ask them to do abstract theoretical calculus. We want them to be functionally aware of and respond to the existing environment.
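For the record, what a conventional computer does on division by zero is a matter of definition rather than ability. A small Python sketch of the two usual conventions:

```python
# Integer division by zero is defined to raise an error in Python:
try:
    result = 1 // 0
except ZeroDivisionError:
    result = "undefined"
print(result)  # "undefined"

# IEEE 754 floating-point hardware instead defines such results as
# special values: infinity and NaN exist in the float type itself.
inf = float("inf")
nan = float("nan")
print(inf > 1e308)  # True: larger than any finite float
print(nan == nan)   # False: NaN compares unequal even to itself
```

So "cannot divide by zero" really means the machine's arithmetic was specified to treat the operation as an error or a special value, much as mathematics leaves it undefined.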
 
If a computer (calculator) cannot divide by zero it cannot do everything (all (ali.)) ((+1÷+1)-1)=+0

1+0=1
2+0=2

Whatever is divided by zero is brought into being. How many nothings are in this? This!

Just as when I multiply by ten I move the decimal point one place starboard, and when I divide by ten, one place left (You move the decimal point the number of zeros.)

Paper×10=Paper0
Paper÷10=Pape.r
Six grams of paper+0=Six grams of paper.

 
If a computer (calculator) cannot divide by zero it cannot do everything (all (ali.)) ((+1÷+1)-1)=+0

1+0=1
2+0=2

Whatever is divided by zero is brought into being. How many nothings are in this? This!

Just as when I multiply by ten I move the decimal point one place starboard, and when I divide by ten, one place left (You move the decimal point the number of zeros.)

Paper×10=Paper0
Paper÷10=Pape.r
Six grams of paper+0=Six grams of paper.

One question; does nature require the ability to divide by zero?
 