The question, though, isn't so much about AI and whether we can program consciousness into a machine; it's about whether we can write specialist, knowledge-based programs that "understand" quantum programming better than we do.
If so, will they discover better quantum algorithms? Will we in fact be compelled to do this, because we just don't understand quantum algorithms that well? Evidence for this is the relatively small set of quantum algorithms we are sure about, and the fact that most of them use only two or three qubits. Beyond that, there is Shor's algorithm, which works with any number of qubits, and a handful of others.
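To make the "two or three qubits" point concrete, here is a minimal sketch (plain Python/NumPy, my own illustration rather than any particular quantum library) of Grover's search on two qubits, one of the small algorithms we are reasonably sure about:

```python
import numpy as np

# Minimal state-vector sketch of Grover's search on 2 qubits (4 basis states).
# Illustrative only: the "marked" state index is an arbitrary choice.
n_states = 4
marked = 2  # hypothetical target state |10>

# Start in the uniform superposition (Hadamard on both qubits).
state = np.full(n_states, 1 / np.sqrt(n_states))

# Oracle: flip the sign of the marked state's amplitude.
oracle = np.eye(n_states)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean amplitude.
diffusion = 2 * np.full((n_states, n_states), 1 / n_states) - np.eye(n_states)

# For 2 qubits a single Grover iteration already suffices.
state = diffusion @ (oracle @ state)

print(np.round(np.abs(state) ** 2, 3))  # probability ~1.0 on the marked state
```

The point of the sketch is only scale: the whole algorithm fits in a 4-dimensional state vector, which is what "two or three qubits" amounts to.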
If we do get AI systems to do this because of the trouble we have with quantum logic (something a machine can presumably be programmed to ignore), will we understand what they're doing?
I suppose the common thread here is that we already build machines that do things for us which are hard, and which we therefore "don't understand" in a sense. Can you understand how Deep Blue beats a chess grandmaster?
I was not so much talking about consciousness (as we understand it) as about the processing and categorizing of neural wave-functions, which have specific, computable values.
Example: a little girl lying on the ground, crying. The AI approaches and, after analyzing the visual and sound input, calculates a "high-value" (emergency) condition and assumes an "assist mode", overriding another mode of "lesser value".
Ultimately it comes down to analyzing values and sets of values, IMO.
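As a rough sketch of what I mean (the conditions, scores, and mode names below are entirely made up for illustration), the decision reduces to picking the mode attached to the highest-value condition:

```python
# Toy sketch of priority-based mode selection: each perceived condition gets a
# numeric "value", and the highest-value condition overrides lower-priority modes.

def assess(observations):
    """Map raw observations to (condition, value) pairs."""
    scores = []
    for obs in observations:
        if obs == "child crying on ground":
            scores.append(("emergency", 0.95))
        elif obs == "dropped litter":
            scores.append(("cleanup", 0.30))
        else:
            scores.append(("idle", 0.05))
    return scores

def select_mode(scores):
    """Choose the mode associated with the highest-value condition."""
    condition, value = max(scores, key=lambda pair: pair[1])
    return {"emergency": "assist mode",
            "cleanup": "maintenance mode",
            "idle": "patrol mode"}[condition]

print(select_mode(assess(["dropped litter", "child crying on ground"])))
# -> "assist mode": the high-value condition overrides the lesser one
```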
I admit I know little about AI neural processing architectures, but as I understand it, we are already able to construct "intuitive" response systems in computers.
A spell checker, for instance, is able to recognize a word with incorrect values, which causes it to signal a possible error and even offer alternative words that are similar in structure.
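Something like the following toy sketch, assuming a tiny made-up dictionary and plain Levenshtein edit distance as the measure of structural similarity:

```python
# Toy spell-check sketch: flag a word not in the dictionary and suggest
# dictionary words with the smallest edit (Levenshtein) distance.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

dictionary = ["value", "valve", "vault", "valid"]
word = "vallue"

if word not in dictionary:
    suggestions = sorted(dictionary, key=lambda w: edit_distance(word, w))
    print(f"Possible error: '{word}'. Did you mean {suggestions[:2]}?")
```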
I seldom see mention of a learning curve in AI. I know there are chess programs that learn from experience: the more games they play, the better they get. But it takes time and many games before such a program recognizes the possible implications of an opponent's strategy and makes the correct counter-moves.
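A toy sketch of that learning curve, with invented opening moves and hidden win rates, just to show that the value estimates only become reliable after many games:

```python
import random

# Toy sketch of learning from experience: estimate each opening move's value
# as a running average of game outcomes. Move names and win probabilities
# are invented; the point is that the estimates improve only with many games.

true_win_prob = {"e4": 0.55, "d4": 0.52, "a3": 0.40}  # hidden from the learner
value = {m: 0.0 for m in true_win_prob}               # learned estimates
plays = {m: 0 for m in true_win_prob}

random.seed(0)
for game in range(2000):
    move = random.choice(list(true_win_prob))             # explore uniformly
    outcome = 1.0 if random.random() < true_win_prob[move] else 0.0
    plays[move] += 1
    value[move] += (outcome - value[move]) / plays[move]  # running average

print({m: round(v, 2) for m, v in value.items()})
# After many games the estimates approach the true win rates,
# and the program can start preferring the higher-valued move.
```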
But IMO, in order to recognize a value, one must have prior cognitive and associative experiences, which can be acquired only after a period of learning.
A medical doctor has to spend eight years in higher education to acquire the knowledge needed to make diagnoses and prescribe treatment. Why should an AI be exempt from having to learn from experience?
I believe a "learning" AI must also undergo a period of training before it can properly diagnose an existing condition and the variables involved. You cannot expect any form of intelligence to respond to value inputs according to their priorities unless it has a kind of mirror function by which it can form comparative and relative abstract perspectives, which enhances cognition.