Are you saying anything more than:
"I make a simulated world / universe where the laws of physics are different?" If so, what?
I've touched on it before, but it's to do with a recursive method of emulation. The notion initially started with the hypothetical act of emulating 1cm[sup]3[/sup] from data supplied by observing a real 1cm[sup]3[/sup]. Obviously an emulation would be a "slave" to the real world, following it after the smallest of time delays. The hypothetical was then to suggest that if, by manipulation of a paradox (which requires the use of qubit computing), you could make any "slave" action in the emulation occur prior to the real-world event being modelled, you could actually get to the point of manipulating reality through the use of the emulation.
This was initially hypothesised as a way to "trick the universe into the machine", as such. It did, however, raise the question of whether that initial real-world 1cm[sup]3[/sup] was already itself a running emulation, and whether the attempt to emulate it was just compounding its make-up into a recursive emulation state.
In essence, the physics laws of that 1cm[sup]3[/sup] would initially start as a "default". A default is what you would want to begin with: after all, if the theory is right we are playing catch-up, and the last thing we want to do is cut corners and jump the gun when we don't actually know how something was done. "Default" is also a good state for reassuring people who might worry about potential technological disasters like "grey goo" in nanotechnology. The understanding, however, is that subtle manipulation can then be done to alter the very physics of this finite model space, iterating with each recursion.
Hypothetically, if a 1cm[sup]3[/sup] is such an emulation, there is no upstairs universe sitting at the top of a hierarchy to define its limitations. Its limitations would more likely be tied to the logic of an inverse-square law, in the sense that only so many iterations could be done before the communication between iterations becomes too large to be transmitted as a packet. Of course, this implies there is a data-transfer specification, which I would hope would have a "broadcast" method applied.
(It will seem like word salad, but that's because I'm cutting across interdisciplinary schools of thought, and like I've mentioned, I am still struggling to write it up to the point of explaining every detail. I also know that, no matter what I've written, I shouldn't edit things out however word-salad they are, because they have some meaning; even if you don't get it, you can still ask me to reiterate bits, which you wouldn't be able to do if I just pressed the delete key.)
As for the statement about "non-locality communication", I have a theoretical model where you first start with a finite number of servers emulating 1cm[sup]3[/sup]. This is the first finite building block, which, through the use of a super-symmetry method of planning and the capacity to create paradoxes, will eventually allow for infinite duplicate volumes. Through the super-symmetry, these are initialised at a different global coordinate to the original and also differ in what is output during the initial "universe"-building calculations.
The 1cm[sup]3[/sup] is then populated in a way similar to load-bearing benchmarking: the volume is populated with dimensions until it cannot be processed any further. The hypothetical is that all of the infinite spatial volumes that are paradoxically interlinked and placed through super-symmetry will also duplicate this initial event. This is the equivalent of spamming a volume with gamma radiation prior to element building.
Once this "load test" is complete, it's then possibly to run "builder algorithms" in the sense that you reduce the volumes dimensional population and then start to repopulate but to specifications. I hypothesised something similar to
Conway's Game of Life in the sense that the algorithms would attempt through basic default fundamentals attempt to build structures that are stable.
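For illustration only, here is a minimal Game of Life sketch in that spirit: simple default rules are iterated until the pattern stops changing, i.e. until a "stable structure" emerges. The grid size, the seed, and the stop-at-first-fixed-point rule are assumptions made purely for the example, not part of the model described above.

[code]
# Illustrative only: a minimal Conway's Game of Life step, used as an analogy
# for "builder algorithms" that iterate simple default rules until a stable
# structure emerges. Grid size, seed and stopping rule are my own assumptions.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life update on a 2D array of 0s and 1s (toroidal edges)."""
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Standard rules: a live cell survives with 2-3 neighbours,
    # a dead cell becomes live with exactly 3 neighbours.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

def run_until_stable(grid: np.ndarray, max_iters: int = 1000) -> np.ndarray:
    """Iterate until the pattern stops changing (a "stable structure")."""
    for _ in range(max_iters):
        nxt = step(grid)
        if np.array_equal(nxt, grid):
            break
        grid = nxt
    return grid

seed = np.zeros((8, 8), dtype=int)
seed[3, 3:5] = 1   # two live cells
seed[4, 3] = 1     # a third; this L-shape settles into a stable 2x2 block
print(run_until_stable(seed))
[/code]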
What is interesting about the infinite array of 1cm[sup]3[/sup] volumes at this point is that they do not necessarily stack like blocks on top of each other, or stagger like a brick weave; in essence, you've potentially got one volume existing composited over eight other volumes. (This is where the term "composition" or "composite" comes into play.)
Imagine you have a construct cube of eight blocks, 2 x 2 x 2. You then have an emulated block the same size as the other blocks, but centred on the constructed cube's centre. In essence, if that centred block were to contain, say, the emulation of an atom, you could suggest that all the components that make up that atom will only ever be emulated by that block, while the other blocks emulate an "effect", or a negative impression, of what the emulated block outputs.
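Purely to illustrate the geometry being described, here is a small sketch of the 2 x 2 x 2 construct with a ninth block of the same size centred on the construct's centre; it just works out the volume each of the eight blocks shares with the centred block. The coordinates and the axis-aligned overlap calculation are assumptions of the illustration, nothing more.

[code]
# Illustrative geometry only: eight unit blocks forming a 2 x 2 x 2 construct,
# plus one extra unit block centred on the construct's centre. It reports the
# volume each construct block shares with the centred block.
from itertools import product

def overlap_1d(a_min, a_max, b_min, b_max):
    """Length of the overlap between two 1D intervals."""
    return max(0.0, min(a_max, b_max) - max(a_min, b_min))

# Corners of the eight unit blocks in the 2 x 2 x 2 construct.
construct_blocks = list(product((0, 1), repeat=3))

# The "emulated" block, same size, centred on the construct centre (1, 1, 1).
centre_min = (0.5, 0.5, 0.5)

for corner in construct_blocks:
    vol = 1.0
    for axis in range(3):
        vol *= overlap_1d(corner[axis], corner[axis] + 1,
                          centre_min[axis], centre_min[axis] + 1)
    print(f"block at {corner}: shared volume = {vol}")   # 0.125 for each block
[/code]

Each of the eight construct blocks shares exactly 1/8 of the centred block's volume, which is the sense in which one volume can sit "composited over" eight others.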
The reason for this is that in this emulated hypothetical I am trying to follow certain rules for protecting data and making sure it is "non-volatile". So having, say, a hydrogen atom with one electron implies that you can't have extra electrons suddenly conjured by the other emulation areas whose matrices have fallen out of sync and duplicated it; instead, the rule of thumb is that you can only ever have that one electron in the emulation model. If the model just happens to be emulated as a construct, it means that electron would shift in and out of existence across the emulation blocks, so as to remain non-volatile.
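As a toy illustration of that "only ever one electron" rule, the following sketch hands a single particle token between emulation blocks instead of ever duplicating it; the block labels, the error on duplication, and the hand-over order are all assumptions made for the example.

[code]
# Toy illustration only: a single particle token is handed between emulation
# blocks and can never be duplicated, echoing the "only ever one electron"
# rule above. Block labels and the hand-over order are arbitrary assumptions.
class CompositeEmulation:
    def __init__(self, block_ids):
        self.blocks = {b: None for b in block_ids}  # which block holds what
        self.holder = None                          # block currently holding it

    def place(self, particle, block_id):
        """Introduce the one and only copy of the particle."""
        if self.holder is not None:
            raise ValueError("particle already exists; duplication not allowed")
        self.blocks[block_id] = particle
        self.holder = block_id

    def shift(self, to_block):
        """Move the particle to another block; it never exists twice."""
        particle = self.blocks[self.holder]
        self.blocks[self.holder] = None
        self.blocks[to_block] = particle
        self.holder = to_block

sim = CompositeEmulation(block_ids=range(8))
sim.place("electron", 0)
for b in (1, 2, 3):
    sim.shift(b)
print(sim.holder, [b for b, p in sim.blocks.items() if p])  # -> 3 [3]
[/code]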
What is also interesting is how data exists within a composite environment. Obviously we are used to thinking in 3D; however, data transfers where computers are concerned are 2D arrays processed against planar spacetime (basically, the computer's timing "eventually" processes the data).
An emulated construct (in this case the observer) will see 3D; however, the information that was composited together to create it was stored as 2D arrays and then processed into 3D. (I'm not entirely sure if this is what Fermilab was investigating with its Holometer, but I have a "fix" for emulators that use 2D arrays: the array is duplicated and placed next to an inverse version, thereby negating the "one direction of time" arrow. The effect is that a 2D array no longer suffers from being processed along one planar axis alone, but is in fact capable of being processed from both "ends" at the same time. It also hypothetically increases data integrity on disc storage media, since a scratch on a 2D-array disc would have to damage both ends of the data for it to be completely corrupted.)
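Here is a rough sketch of that mirrored-array idea as described in the paragraph above: the data is stored together with a reversed duplicate, so it can be read from either "end", and a damaged region in one copy can be recovered from its mirror unless both copies are hit at the corresponding positions. The function names, the use of None to mark damage, and the recovery rule are illustrative assumptions only.

[code]
# Rough sketch only: the payload is stored next to a reversed duplicate, so it
# can be read from both "ends" and a damaged region in one copy can be
# recovered from its mirror. Names and the None-as-damage convention are
# assumptions made purely for illustration.
def store(data):
    """Return the payload plus its reversed duplicate."""
    return list(data), list(reversed(data))

def recover(forward, backward):
    """Rebuild the payload, taking whichever copy survived at each index."""
    n = len(forward)
    out = []
    for i in range(n):
        a = forward[i]
        b = backward[n - 1 - i]   # mirrored position in the reversed copy
        out.append(a if a is not None else b)
    return out

fwd, bwd = store([10, 20, 30, 40, 50])
fwd[1] = None                     # a "scratch" damages the forward copy only
print(recover(fwd, bwd))          # -> [10, 20, 30, 40, 50]
[/code]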
In essence it's all part and parcel: creating paradoxes requires violating physics, which requires the manipulation of time, which would currently be impossible within a system that doesn't see itself as an emulation. If it were indeed possible, it would explain some, if not all, of the "oddities" of particle physics, and open the potential to re-write the very foundations of the universe (when we are, of course, ready).