Open question

Please explain what rubbish has to do with synchronous clocks and observers in a rest frame.

After removing your foot from that big mouth.
 
You see, the laws of physics prevent computers from communicating externally or internally other than in a serial fashion. A local processor can only guarantee synchronized parallel steps locally; the serial steps depend on other processors so a machine step is a function of the degree of synchronicity between processor nodes.

In spacetime a serial event is a communication; parallel events mean synchronicity so two observers have to communicate serially to synchronize their clocks - although they can signal each other "at the same time" this only has meaning in a synchronous frame.

P.S. thanks for the lecture, Bob!
 
P is a subset of NP. Therefore a polynomial deterministic machine or algorithm has a nondeterministic polynomial "dual" solution in principle,
There's nothing about NP that requires an algorithm to use non-determinism. If you have a polynomial deterministic algorithm, then by definition that algorithm is also polynomial non-deterministic.
there are some actual examples.
Indeed. Like, every algorithm that solves a problem in P, even the deterministic ones.
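This point can be illustrated with a small sketch (names and the example problem are my own, hypothetical choices): a deterministic polynomial-time decider already meets the NP "verifier" interface, because it can simply ignore any certificate it is handed.

```python
# Sketch: a problem in P, and the same algorithm viewed as an NP verifier.
def has_duplicate(xs):
    """A problem in P, decided deterministically in O(n) expected time."""
    return len(set(xs)) != len(xs)

def np_style_verifier(xs, certificate=None):
    """The same algorithm dressed up as an NP verifier; the certificate
    (a hypothetical witness) is simply unused."""
    return has_duplicate(xs)

print(np_style_verifier([1, 2, 3, 2]))  # True: a duplicate exists
print(np_style_verifier([1, 2, 3]))     # False: all elements distinct
```

No non-determinism is required anywhere; the deterministic decider is itself a valid polynomial-time verifier, which is exactly why P is contained in NP.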

You're conflating the decision problems (which are the members of P, NP, etc.) with the algorithms (machines) that decide these problems. It's stuff like that which makes me think you don't have a very deep grasp of complexity theory.
<stuff about parallelism and relativity>
This isn't actually as stupid as it looks, but it's really garbled, and you don't draw any conclusions. What are you trying to say?
 
You see, the laws of physics prevent computers from communicating externally or internally other than in a serial fashion.

The notions of P and NP are mathematical, so they do not depend on the laws of physics. You have to modify your question.
 
funkstar said:
You're conflating the decision problems (which are the members of P, NP, etc.) with the algorithms (machines) that decide these problems. It's stuff like that which makes me think you don't have a very deep grasp of complexity theory.

When an algorithm runs on a physical machine, do you mean it's separate from the P or NP domain, which is strictly mathematical and not physical?
I find your statement puzzling - how are P or NP problems decided if they aren't run as algorithms on real computers?
Do you mean writing a solution on paper, using equations, is different from running an algorithm on a machine? Can you explain what the difference is between a problem in P/NP and an algorithm running on a machine? Which one makes decisions and which one is a decision problem?

There's nothing about NP that requires an algorithm to use non-determinism.
No there isn't, and I didn't say or imply any such requirement. What I said was P is a subset of NP.

If problems in P and NP are mathematical, why do computers run algorithms? When they do, are the algorithms making decisions, or are the computers making decisions, or are the decisions made by humans? Do these decisions, whoever or whatever makes them, depend on "the laws of physics"?
 
I thought the P vs. NP problem stated: it takes more information (time) to calculate the solution to an event than the amount of information (time) needed for that event to occur. Doesn't quantum computation address this by adding a superposition as a third "hidden" bit (1, 0, 1&0)? Then again, I know next to nothing about quantum computation and even less about parallel computation, so I may be completely out of line here.
 
The complexity class P is often seen as a mathematical abstraction modeling those computational tasks that admit an efficient algorithm. The complexity class NP, on the other hand, contains many problems that people would like to solve efficiently, but for which no efficient algorithm is known, such as the Boolean satisfiability problem, the Hamiltonian path problem and the vertex cover problem. All the problems in this class have the property that their solutions can be checked efficiently.[2]

Since deterministic Turing machines are special cases of nondeterministic Turing machines, it is easily observed that each problem in P is also a member of the class NP.

The question of whether P equals NP is one of the most important open questions in theoretical computer science because of the wide implications of a solution.[2] If the answer is yes, many important problems can be shown to have more efficient solutions. These include various types of integer programming in operations research, many problems in logistics, protein structure prediction in biology,[4] and the ability to find formal proofs of pure mathematics theorems.[5]

http://en.wikipedia.org/wiki/Computational_complexity_theory
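The "solutions can be checked efficiently" property quoted above can be made concrete with a sketch (the CNF encoding below is my own convention, not from the quoted text): verifying a candidate truth assignment for a Boolean formula takes polynomial time, even though finding one may require search.

```python
# Sketch: a CNF formula as a list of clauses; each literal is a signed
# variable index (e.g. -2 means "not x2"). Checking a given assignment
# is polynomial time -- which is what places SAT in NP.
def verify_sat(clauses, assignment):
    def lit_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    # The formula holds iff every clause has at least one true literal.
    return all(any(lit_true(l) for l in clause) for clause in clauses)

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: True, 3: False}))   # True
print(verify_sat(formula, {1: False, 2: True, 3: False}))  # False
```

The verifier runs in time linear in the size of the formula; the hard part of SAT is producing an assignment worth checking.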
 
@noodler: Start from here. What I'm pointing to is that there's a difference between a decision problem and a decision procedure. The fact that a decision problem is in, say, NP, does not imply that a procedure has to be non-deterministic to decide it. It merely says that the decision problem can be decided by some non-deterministic polynomial time algorithm.

If you are interested in these things, I suggest finding an introductory textbook on the subject, or, better yet, taking a course on computability and complexity.
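The problem-versus-procedure distinction can be sketched in a few lines (the EVEN example is my own, hypothetical illustration): the decision problem is a set, and any number of different procedures can decide membership in it.

```python
# The decision problem EVEN is the (infinite) set {0, 2, 4, ...}.
# A decision procedure is any algorithm that decides membership in it;
# here are two different procedures for the same problem.
def decide_even_mod(n: int) -> bool:
    return n % 2 == 0

def decide_even_subtract(n: int) -> bool:
    # A slower procedure, still deciding the very same problem.
    n = abs(n)
    while n >= 2:
        n -= 2
    return n == 0

# Both procedures agree on every input, because the problem is one thing
# and the procedures deciding it are another.
assert all(decide_even_mod(k) == decide_even_subtract(k) for k in range(50))
```

One problem, many procedures: complexity classes like P and NP classify the problems, not any particular algorithm.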
 
funkstar said:
What I'm pointing to is that there's a difference between a decision problem and a decision procedure
Yes, obviously a problem is what you input to an algorithm that then processes the problem. At some point there is an "output".

What the algorithm does, step by step, is input the previous step, so that each step has an input and an output. Therefore the initial input "problem" is also the result of a "step".

If a problem is in P that implies there is an NP solution, not that there is a "requirement" for one. You appear to think that I need to read a book; I've read plenty of books, thanks, and I've written a few complex systems into algorithms.
Sorry if you have the impression that I don't understand complexity theory, I think you don't understand the connection between P/NP and theories of causality.
 
All I can say is that your use of the terminology is not in line with the technical meaning used in formal complexity (or computability) theory, which makes it very difficult to have a lucid debate on these issues.
 
Well, look at it like this:
my use of terminology includes the terminology Alan Turing used in his paper on the decision problem. Basically he decided that, since decidability is the same as deciding whether a number can be written, or whether its next digit can be, the halting problem is about an algorithm that can determine whether the number of places for a figurable number is finite; i.e., decidability requires that finitary numbers exist.

Since any algorithm is also a written representation or a figurable "number", this is recursive; any design for a computer is a representation, as is any actual construction of a computer. This is why computers are used to design their successors.
 
Against my better judgment:

Firstly: Don't use terminology from the paper that was never picked up. Don't say circular/circle-free. I happen to know the terminology because I have books with the original article reprinted literally within reach, and an interest in the foundations and history of computability theory, but most people (even computer scientists) will not know it. There are newer and better expositions on computability. Also, it's not "his paper on the decision problem", it's his paper on the "Entscheidungsproblem". That is the terminology he uses himself, and "decision problem" has another, far more general meaning now.

Secondly: Decision problems are not the input to a decision procedure. Decision problems are (in general, infinite) sets, and decision procedures decide whether or not a given element belongs to the set or not.

Thirdly: "since decidability is the same as deciding if a number can be written or if the next digit of a number can be": You're conflating his concept of "satisfactory" numbers, which encode circle-free (non-halting!) machines that write out (infinite) representations of numbers, with decidability, which requires halting. Be precise.

Fourthly: "the halting problem is about an algorithm that can determine if the number of places for a figurable number is finite" is very garbled. Also, it's perfectly easy to give an example of a diverging Turing machine that only writes to a finite number of cells.
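The "diverging machine that writes only finitely many cells" point can be demonstrated with a toy simulator (the machine and its transition table are my own, hypothetical construction):

```python
# A two-state machine with no halting state: it runs forever, yet the
# head only ever visits two tape cells -- divergence does not imply an
# unbounded number of written cells.
def simulate(steps):
    tape = {}                  # sparse tape; missing cells read as blank '_'
    state, head = 'q0', 0
    visited = set()
    # (state, symbol) -> (new_state, symbol_to_write, head_move)
    delta = {
        ('q0', '_'): ('q1', '1', +1),  # write a single 1, step right
        ('q1', '_'): ('q1', '_', -1),  # then bounce left...
        ('q1', '1'): ('q1', '1', +1),  # ...and right, forever
    }
    for _ in range(steps):
        sym = tape.get(head, '_')
        state, write, move = delta[(state, sym)]
        tape[head] = write
        visited.add(head)
        head += move
    return visited

print(simulate(10_000))  # the head never leaves cells 0 and 1
```

There is no halting state in the table, so the machine diverges; run it for any number of steps and the set of visited cells stays {0, 1}.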

If you do insist on going to the source, I recommend Charles Petzold's "The Annotated Turing", which goes methodically through Turing's seminal paper, explaining its quirks, idiosyncrasies and methods quite clearly.
 
Try this simple Turing test:

1. think of a number
2. write the number somewhere
3. check that the number is the one you thought of
4. repeat until satisfied

... you can conflate 2 and 3 so you check the number as you write it.
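The four steps above can be sketched as a guess-and-check loop (a toy, not a formal Turing machine; the stopping rule is an arbitrary stand-in for "until satisfied"):

```python
import random

def turing_doodle(satisfied_after=3):
    """Toy version of the think/write/check loop above.
    'satisfied_after' is an arbitrary stand-in for step 4's stopping rule."""
    written = []
    for _ in range(satisfied_after):
        n = random.randint(0, 9)   # 1. think of a number
        written.append(n)          # 2. write the number somewhere
        assert written[-1] == n    # 3. check it is the one thought of
    return written                 # 4. repeat until satisfied, then stop

print(turing_doodle())
```

Note that the loop only halts because the stopping rule is fixed in advance; without step 4's condition the procedure would run forever.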

Now consider the following questions:

When is the number a) real, b) imaginary, c) decidable, d) determinable, e) completed?
At which step is the number infinitely recursive?

Why does step 3 imply a halting condition? Or why is it that you stop thinking of numbers and writing them down, so the "checking" part is eventually satisfied, and a decision is made?

(this is a dummy test)
 
BTW, I think this forum is suffering from a terminal disease: you simply are incapable of abstract thinking, for the most part.

Your responses, mr funkstar, have tried for the most part to address "problems" I never raised. You've avoided addressing the fact you cannot speed up a process beyond a certain "computational" limit, and you can only achieve a tradeoff by doing serial steps in parallel. Here, think of a number of people writing down numbers, in synchronous steps.

Jeez, where's my hat?

Circular/circle-free means what, Mr. I'm-an-expert-on-Alan-Turing?
 
Goodbye, Vkothii. I don't feel the need to further qualify my knowledge to you, as I'm fairly certain it will lead to nothing good.

I hope you read the book I mentioned - it's very good. Or, even better, read an introductory text on computability and complexity.
 
What is computability, though? What's a computable number?

Why is complexity something that goes with computability?

Because all numbers are complex if they are computable, since you can in principle build a machine and configure it so it determines a computable number, by writing it out in a finite series of digits. This has something to do with encoding.
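One concrete way to read "computable number" (a sketch in the modern digit-emitting sense, not Turing's exact formalism): a number is computable if some procedure emits its digits one at a time, for as many digits as you ask for.

```python
def digits_of_third(n):
    """Emit the first n decimal digits of 1/3 by long division --
    a computable number whose full expansion is infinite."""
    num, out = 1, []
    for _ in range(n):
        num *= 10
        out.append(num // 3)   # next digit of the quotient
        num %= 3               # carry the remainder forward
    return out

print(digits_of_third(5))  # [3, 3, 3, 3, 3]
```

The machine never finishes writing 1/3 in full, yet the number is computable because any finite prefix of its digits can be produced.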
 