Open question

noodler

Is there a connection between GR/SR and parallel computation problems (problems in P/NP)?

Is the answer:
1) Could be, but who cares?
2) Yes, there is a deep connection.
3) No, computers have nothing to do with gravity.

e.g. a link between polynomial time and the number of processors, and nondeterministic polynomial time. N being equated to both 'number' and (non)deterministic. Determining N for Amdahl's Law is also in P/NP.
 
In the set of computational problems with solutions in P, with a correspondence to T(p) for a p-processor system, where T(s) and T(p) are computational measures of serial and parallel steps T, respectively: T(s) is circle-free, and T(p) is a closed elliptic form of a T(s'), its circle-free rewriting by the machine $$ M(T,p): M \rightarrow M'$$ (M changes state in steps T, over p processors).


* Given a parallel computation exhibiting speedup ψ on p processors, where p > 1, the experimentally determined serial fraction e is defined to be the Karp–Flatt metric, viz:

$$ e = \frac{\frac{1}{\psi}-\frac{1}{p}}{1-\frac{1}{p}}$$

The smaller the value of e, the better the parallelization ~ rewriting. This translates into e being a shortest open ellipse in M; e is a span of M.
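As a minimal sketch of how the metric is used in practice (the timing numbers and function name below are illustrative, not part of the argument):

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e = (1/speedup - 1/p) / (1 - 1/p)."""
    if p <= 1:
        raise ValueError("the Karp-Flatt metric is defined for p > 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Illustrative numbers: a job taking 100 s serially and 30 s on 4 processors.
serial_time, parallel_time, p = 100.0, 30.0, 4
psi = serial_time / parallel_time       # measured speedup, about 3.33
e = karp_flatt(psi, p)                  # about 0.067, i.e. roughly 6.7% effectively serial
print(f"speedup = {psi:.2f}, e = {e:.3f}")
```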

Example:
The shortest 'number' in binary is a 1-bit span; 1 is the span of an M which uses the open and closed ellipse, i.e. the physical basis (1,0), and a single-pole switching function that inverts the number. Replace the switching function with a photon and M with spacetime.

??
 
I don't think so, but I also thought I knew what the P=NP problem was.

I thought the P=NP problem was: can computers be programmed to solve logical mathematical proofs without using complex algorithms that rely on semi-random variables to approach solutions, despite the fact that solutions, once found, are simple to verify? Or in simple terms: can computers engage in higher-order logic in order to determine solutions?

First... is my understanding of the topic right at all? Second (assuming it is), why do you think there is a relation?

Never mind, I didn't see your second post. If that is the problem... this is the answer... if not, I don't understand the question.

What you seem to be describing is the Planck length, which is generally accepted in relativity. Gravity exhibits different behavior at this threshold. So I think the answer to your question is no... while the limit is rather well defined in both cases, physics has ways of determining what's beyond the limit (assuming something is), whereas in computation the limit is well defined as a property of the computer. (Which may even be true in physics.)
 
'meh'
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).

In such analysis, a model of the computer for which time must be analyzed is required. Typically, such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). As of 2009, these assumptions are satisfied by all practical computers yet devised.

In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine.[5] Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:

Is P equal to NP?

In a 2002 poll of 100 researchers, 61 believed the answer is no, 9 believed the answer is yes, 22 were unsure, and 8 believed the question may be independent of the currently accepted axioms, and so impossible to prove or disprove.[6]

http://en.wikipedia.org/wiki/P_%3D_NP_problem
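A concrete toy illustration of "verified in polynomial time given the right information" (my own example, not from the article): a subset-sum certificate can be checked in time linear in its length, even though no polynomial-time method is known for finding one in general.

```python
def verify_subset_sum(numbers, target, certificate):
    """Polynomial-time verifier: certificate is a list of distinct indices claimed to sum to target."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

# Finding a certificate may require exponential search; checking one is cheap.
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [2, 4]))   # numbers[2] + numbers[4] = 4 + 5 = 9 -> True
```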

Planck's constant = e is true in an M where there is no vacuum energy...
I mean computational devices (switching nets) of any stripe in general; since Amdahl's Law is in fact a specific rather than general result - you might say it's special to each machine type or example - whereas the K-F metric is general.
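For reference (stated from memory, with f the serial fraction assumed known in advance rather than measured), Amdahl's Law gives the model-specific bound that the Karp-Flatt metric above inverts:

$$ \psi(p) \le \frac{1}{f + \frac{1-f}{p}}, \qquad \lim_{p \to \infty} \psi(p) = \frac{1}{f} $$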

In line with c being a special velocity, there is e, the special serial restriction in P/NP for an M(T,p).
 
I think it's:

$$e=\ell_P$$ if $$M \approx 3K$$ in which case the value of $$t_P$$ (Planck's Time) is known. That would be the lower computational limit. If different particles in space are experiencing different resonant energy levels, or different relativistic effects, then they should have higher computational limits. If they're asynchronous, its physical manifestation would be quantum foam.
 
Yes, if you ignore the fact that the machine has to be made out of something that processes information. If e is the smallest structural constant, you have to allow, say, Planck mass to build the machine with and Planck energy as the information.
Spacetime is then the input and the output - but you have to allow for a true vacuum to do it all in. I'd say Maxwell allows for mass etc already so that's the domain.

This is where e is the distance a photon travels to communicate, so c is the information carrier. To get from very small to very large distances is a bit of a crunch, you need GR to tune the carrier, as it were.
 
Herewith I digress and posit the following homework assignment, vis-à-vis the open question(s).

Design a state-transition network that can add, multiply and divide numbers (binary is a good place to start) and show that "subtraction" is possible (i.e. use the STN to induce -1).
Show that this is relevant to the question by deriving fast multiplication/division algorithms, starting with 'naive' versions and finding a path that gets to the most efficient, for the graph G the STN maps, for a given summation -> multiplication. Show that the STN is consistent with a parallel algorithm that uses a circuit model (see e.g. R.W. Doran "The Circuit Model for Parallel Algorithms", Kunkle, Cooperman "Twenty-six moves suffice for Rubik's Cube" and other papers addressing the question of the shortest-path problem or 'God's algorithm').
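As a minimal starting sketch for the addition part only (the representation and names are my own, and this is nowhere near the full assignment), binary addition can be phrased as a two-state transducer whose state is the carry bit, consuming one bit of each operand per transition:

```python
def add_stn(a_bits, b_bits):
    """Binary addition as a state-transition network: the state is the carry bit;
    each transition consumes one bit from each operand (LSB first) and emits one sum bit."""
    # Transition table: (carry, a, b) -> (next_carry, output_bit)
    delta = {(c, a, b): ((a + b + c) // 2, (a + b + c) % 2)
             for c in (0, 1) for a in (0, 1) for b in (0, 1)}
    state, out = 0, []
    for a, b in zip(a_bits, b_bits):
        state, s = delta[(state, a, b)]
        out.append(s)
    out.append(state)          # final carry
    return out

# Example: 6 + 3 = 9, operands given LSB first.
print(add_stn([0, 1, 1, 0], [1, 1, 0, 0]))   # -> [1, 0, 0, 1, 0], i.e. 9
```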
 
That's obviously nonsense, but I'll humour you with your original question: No. Relativistic computing cannot give you the (believed) required exponential speedup to go from NP to P.

Oh, and P vs. NP doesn't have anything to do with parallelism.
 
Relativistic computing cannot give you the (believed) required exponential speedup to go from NP to P.

Oh, and P vs. NP doesn't have anything to do with parallelism.
Relativistic computing can't do what?
What's the required speedup?

What does parallelism have to do with P/NP, to fire your last question back? Do you have anything pithy about that subject?
How much parallel design have you done, btw? What's your grasp of "computational problem"?
 
Relativistic computing can't do what?
What's the required speedup?
Well, I read your original query as asking whether relativistic computing would allow us to solve problems in NP in polynomial time. The answer to that is "no". The reason is that to the best of our knowledge, solving the hard problems in NP in polynomial time requires exponential speedup (if you want to solve them deterministically.) So, if you want to use time dilation for that speedup, it'll require that you push an exponential amount of fuel through your rocket in polynomial time. Not gonna happen.
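To make the arithmetic behind that explicit (my own paraphrase of the argument, not funkstar's wording): with time dilation, the traveller's proper time is $$\tau = T/\gamma$$, so if the home computation needs $$T = 2^{cn}$$ lab-frame steps and the traveller is to wait only polynomially long, the dilation factor must satisfy $$\gamma(n) \ge 2^{cn}/\mathrm{poly}(n)$$ - that is, γ itself has to grow exponentially with the input size, which is where the exponential fuel requirement comes from.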
What does parallelism have to do with P/NP, to fire your last question back? Do you have anything pithy about that subject?
I'm sorry. Your OP says: "Is there a connection between GR/SR and parallel computation problems (problems in P/NP)?" which strongly implies that you think that P/NP has something to do with parallelism, which it doesn't. If you meant something else, please elaborate.
How much parallel design have you done, btw? What's your grasp of "computational problem"?
I'm currently doing computability theory, so my grasp of what a computational problem is is fairly good, I reckon. Why?
 
funkstar said:
my grasp of what a computational problem is is fairly good, I reckon

Then the next question I have is: do you consider the problem of determining the cost and effectiveness of parallel processing to be a computational problem, and if so, which domain do you consider it to be in?
 
Then the next question I have is: do you consider the problem of determining the cost and effectiveness of parallel processing to be a computational problem, and if so, which domain do you consider it to be in?
You're going to have to be a lot more precise about pretty much all the terms involved, if you want a sensible answer.

I.e: Which cost metrics? Which model of parallel processing? Do you want an analysis of a given algorithm/circuit/program, or a general decision method, or something else entirely? What does effectiveness mean in this context? Etc.

Dependent on your answers, my response could range from boolean circuit theory to program analysis to complexity theory to computability theory.

Anyway: It's fairly easy to prove, for most computational models of parallel computing, that p parallel machines bring you at most a speedup of p (yes, I know about superlinear speedups in practice - it doesn't matter, here, as those effects don't scale.) So you cannot get out of NP with p parallel machines even assuming maximal speedup (by definition, really, as constants are consumed by the big O. But even polynomially many machines doesn't help.) Note, btw, that the quintessential NP-complete problem 3SAT does, in fact, parallelize wonderfully if you're brute forcing: Just try different valuations for the atoms on each machine.
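A toy illustration of that last point (my own sketch, just to show the partitioning idea, not an efficient solver): each of p workers brute-forces a disjoint slice of the 2^n valuations, so the search parallelizes perfectly while the total work stays exponential.

```python
from itertools import product

def satisfied(clauses, valuation):
    """clauses: list of clauses, each a list of (variable_index, polarity) literals."""
    return all(any(valuation[v] == pol for v, pol in clause) for clause in clauses)

def brute_force_slice(clauses, n_vars, worker, p):
    """Worker `worker` of `p` checks every p-th valuation out of the 2**n_vars total."""
    for i, valuation in enumerate(product((False, True), repeat=n_vars)):
        if i % p == worker and satisfied(clauses, valuation):
            return valuation
    return None

# Illustrative instance: (x0 or not x1) and (x1 or x2), 3 variables, 4 workers.
clauses = [[(0, True), (1, False)], [(1, True), (2, True)]]
hits = [brute_force_slice(clauses, 3, w, 4) for w in range(4)]
print([h for h in hits if h is not None])   # satisfying assignments found by any worker
```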
 
Are you familiar with Amdahl's Law, and does using it to determine "speedup and cost" require analysis of the method of parallelization used in a system?

Are you also convinced this thread is about "getting out of" NP? Do you know there are NP problems for which there are no known solutions in P (but no proof that none exist)?

I was under the impression that the original question was about a CONNECTION between Einstein's THEORIES and computational PROBLEMS.

That's: theory (of spacetime) connected to problem (in space and time). Or not.
 
Are you also convinced this thread is about "getting out of" NP?
Pretty much, yes. The way the OP was phrased strongly suggested that you were asking whether one could use relativity theory to speed up computations such that problems in NP became problems in P due to the speedup.
Do you know there are NP problems for which there are no known solutions in P (but no proof that none exist)?
If you want to test my knowledge of complexity you might want to find something to ask which is not absurdly trivial. Asking whether I even speak the language is kind of insulting, really.
I was under the impression that the original question was about a CONNECTION between Einstein's THEORIES and computational PROBLEMS.

That's: theory (of spacetime) connected to problem (in space and time). Or not.
Ok. No, there's no obvious connection. People have tried to exploit certain features of relativity to show that relativistic computing is more powerful than our (implicitly Euclidean) standard computational models - but no one has really come up with anything convincing. Usually these schemes take the form

1. Start brute-force computation.
2. Launch self into space at relativistic speed. (Alternatively, lower yourself close to the event horizon of a black hole.)
3. Return as the younger twin to find the computation finished.

The problem is that to get out of a complexity class you have to get more than constant speedup, so your speed actually has to vary with the input size for it to be interesting. And nobody really thinks it's possible to get exponential speedup, because it's fairly difficult to get exponentially much fuel burned in less than exponential proper rocket time.

A really kooky scheme I'm aware of suggests throwing yourself into a super-massive rotating black hole, with the universe outside performing your calculation.

If that doesn't answer your question, you might want to rephrase.
 
Is there a connection between GR/SR and parallel computation problems (problems in P/NP)?

Is the answer:
1) Could be, but who cares?
2) Yes, there is a deep connection.
3) No, computers have nothing to do with gravity.
4) Have no clue what i am talking about..

e.g. a link between polynomial time and the number of processors, and nondeterministic polynomial time. N being equated to both 'number' and (non)deterministic. Determining N for Amdahl's Law is also in P/NP.

forgot one..
I vote #4...
there was a time I was capable of understanding this stuff... but that was a long time ago... what you don't use, you lose... anyway... sorry...
 
funkstar said:
The way the OP was phrased strongly suggested that you were asking whether one could use relativity theory to speed up computations such that problems in NP became problems in P due to the speedup.

You think the question "Is there a connection between GR/SR and parallel computation problems (problems in P/NP)?" is asking about relativistic computing? I think that, so far, you've completely misunderstood the question. Hence I thought I'd do a reality check.

We know there are problems that have solutions in P and, since P is a subset of NP, that they have NP-equivalent solutions. However, the converse is not necessarily true; in fact, we can't say.
You say there is no obvious connection between general relativity and computational problems; this includes problems amenable to parallelization.

But parallelization is also a computational problem; basically all computational devices are parallelized to some extent. Can anyone name a computer architecture that uses only serial computation? (I don't think so.)

P.S. The fact that relativistic computers can't get past the nondeterministic "barrier" of NP is a connection between parallel computation and GR, right there. It's not possible to compute backwards in time.
 
Hence I thought I'd do a reality check.

We know there are problems that have solutions in P and, since P is a subset of NP, that they have NP-equivalent solutions.
Eh? What does it mean to have an "NP equivalent solution"?
p.s. the fact that relativistic computers can't get past the nondeterministic "barrier" of NP, is a connection between parallel computation and GR, right there.
I have no idea what this is supposed to mean. Why do you qualify it with "parallel"? I don't think you're using the term in the way (or with the meaning) it is usually done in computer science.

In fact, I'm actually fairly convinced that you don't really know what you are talking about.
 
P is a subset of NP. Therefore a polynomial deterministic machine or algorithm has a nondeterministic polynomial "dual" solution in principle; there are some actual examples.

Parallel means synchronous.
Steps T(s) are asynchronous events, T(p) are synchronous events (with synchronized clocks). Replace processors with observers and T with events. (T,p) is a rest frame; start with "On the Electrodynamics of Moving Bodies".

Am I right, people?
 