Pincho,
[I apologise for the word soup; the following is something I should really be discussing with multiple people, so that between us something more digestible can emerge.]
Swap your collision nodes for emulations of a cm3 volume, housed in separate universes' neural network arrays (one world could technically support many dimensional clouds that formulate the cm3 emulation), and you might be a little closer to the truth.
I initially looked at emulation by particle or atom; however, the problem with locking into particles or atoms is that "they didn't exist to begin with". The suggested course is actually pure energy, which incidentally would be measured by vacuum space (cm3).
It's suggested that the initialisation phase is just one world creating that one small vacuum space. The plan is then to utilise the generation of a temporal paradox to create multiple instances of the same emulated node, infinitely in all dimensional directions (or at least to a radix horizon signified by the inverse square law).
Based upon the theory that "you can't make a parallel without a universe's worth of energy", it can be suggested that a parallel plays a proportional part in making the paradox of what the beginning of the universe is. It therefore incorporates a pointer back to the universe at its beginning, so you don't have to create a parallel; you just hook into one that already exists.
This is basically like initialising a String variable at the beginning of a program to make sure that the pointer has been set and isn't clobbered by other data. You can then use the pointer throughout the program and assign it a new value.
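To ground that analogy, here's a minimal C sketch as I picture it (the names and values are purely illustrative, not part of any real system):

    #include <stdio.h>

    /* The "parallel" that already exists; we never create it again. */
    static const char *existing_parallel = "the beginning of the universe";

    int main(void)
    {
        /* Initialise the pointer up front so it's set and valid... */
        const char *node = existing_parallel;
        printf("node -> %s\n", node);

        /* ...then reassign it later in the program with a new value. */
        node = "a diverged state";
        printf("node -> %s\n", node);
        return 0;
    }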
From a multiworlds perspective, this implies that you'd have an infinitely sized universe of initialised "pointers", all set to the same values to begin with. The real magic, however, is utilising methods not just to create paradoxes and differences between each node, but also to give them the capacity to communicate as an overall structure.
As with any programming test, however, you initialise the system further with a LOAD TEST. After all, you need to know that your physical resources are capable of "running the program". This means ramping up the initial emulation node (which is auto-paralleled by all the other nodes, because they haven't been assigned to be different; so from a multiworlds perspective, all of "space" becomes energy and mass) until it can't be filled any more, in this particular instance with pure digital energy [String Theory]. (Symbolically, the Big Bang, or preferably "The Big Number Crunch".)
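A toy version of what I mean by the LOAD TEST, in C (the node count and capacity are arbitrary stand-ins for "infinite" and "full"):

    #include <stdio.h>

    #define NODES    8      /* stand-in for the infinitely duplicated nodes */
    #define CAPACITY 100    /* the most "pure energy" one node can hold     */

    int main(void)
    {
        int node[NODES] = {0};

        /* Ramp every node up until it can't be filled any more. */
        for (int i = 0; i < NODES; i++)
            while (node[i] < CAPACITY)
                node[i]++;

        /* Confirm the resources survived "running the program". */
        for (int i = 0; i < NODES; i++)
            printf("node %d holds %d/%d\n", i, node[i], CAPACITY);
        return 0;
    }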
Once the initial LOAD TEST is complete, it's then possible to start manipulating those nodes to become different; this is where the creation of particles and matter occurs. You see, pure energy is great, but it doesn't have structure; particles are basically a construct. In some respects the system would mirror the concepts of Conway's Game of Life, with a very simplified rule set creating particles ("balls of string") and leaving "space/spacing".
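For reference, here's a bare-bones Game of Life step in C. The grid size and glider seed are arbitrary, and this is only a loose analogue of simple rules turning uniform "energy" into structured "particles", not the actual mechanism described above:

    #include <stdio.h>
    #include <string.h>

    #define W 8
    #define H 8

    /* Count the live neighbours of cell (y, x) on a wrapping grid. */
    static int neighbours(int g[H][W], int y, int x)
    {
        int n = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
                if (dy || dx)
                    n += g[(y + dy + H) % H][(x + dx + W) % W];
        return n;
    }

    int main(void)
    {
        int g[H][W] = {0}, next[H][W];
        g[1][2] = g[2][3] = g[3][1] = g[3][2] = g[3][3] = 1;  /* a glider */

        for (int step = 0; step < 4; step++) {
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++) {
                    int n = neighbours(g, y, x);
                    /* Live cells survive with 2-3 neighbours; dead cells
                       are born with exactly 3. Structure from uniformity. */
                    next[y][x] = g[y][x] ? (n == 2 || n == 3) : (n == 3);
                }
            memcpy(g, next, sizeof g);
        }

        for (int y = 0; y < H; y++) {
            for (int x = 0; x < W; x++)
                putchar(g[y][x] ? '#' : '.');
            putchar('\n');
        }
        return 0;
    }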
To stop there and just cut to the chase:
Imagine if every cm3 of vacuum space were emulated by one single world. That single world emulates it not as a single two-dimensional occurrence across the whole network, but by running multiple cloud systems together to compile a multidimensional representation for the emulation.
That single emulation (and world) is joined by an infinite number of emulations that all started with a "Singularity": a single world that created the first Kernel parameters, which were then manipulated into being duplicated infinitely across spacetime.
The initial Kernel would function much like a Linux kernel, in the sense that any updates to physics through observation are automatically applied to the Singularity kernel. (Heh, a Singularity in this case is "an auto-updater for a kernel build".) Each emulation then has its own parameters; those parameters might govern how atoms bridge their interactions to bond into molecules (if each emulation were just an atom), or possibly how they communicate through a square-law interpretive "bridge".
(I guess what I'm saying is that if an atom exists in an emulation, perhaps there is a way to "talk" to it from our relative perception, to communicate with the universe that emulates it.)
Incidentally, Fermilab might like this interpretation, or maybe not, if I didn't read what they were up to with their Holometer correctly.
Each world is in glorified 3D+; its interactions with other worlds are through "bridges", and those bridges use compressed information in 2D arrays. I have a concept that if the universe is left to its current devices, those 2D arrays would be linear with regard to time's passage ("time's arrow moving in one direction"). However, I have a hypothesis for an encapsulation method, which basically means copying a single array, inverting its direction, and then reading both arrays together back to back. So while one array travels from A to Z, the other array is travelling Z to A.
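A toy version of that inverse-array read in C (purely illustrative; a real "bridge" would obviously be 2D and far larger):

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        const char forward[N] = { 'A', 'B', 'C', 'D', 'E' };
        char inverse[N];

        /* Copy the array and invert its direction. */
        for (int i = 0; i < N; i++)
            inverse[i] = forward[N - 1 - i];

        /* Read both arrays together back to back: A->Z alongside Z->A. */
        for (int i = 0; i < N; i++)
            printf("%c <-> %c\n", forward[i], inverse[i]);
        return 0;
    }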
I see the Doppler effect as an example of this; it's just that we are still only interpreting the information linearly, in one direction.
(Incidentally, if I'd applied these dual inverse 2D arrays to, say, the creation of a storage medium like a DVD, ideally each array would be written and read using a different laser colour, blue and red. I actually posited that if the whole universe were a giant computer, its current limitation is that we build computers around a linear sequence. So if we wanted to be able to manipulate time or utilise the system to its fullest, we'd need to rework the hardware to no longer follow the linear system we're used to; this means both how the system processes data and how it stores data. And in both those cases, the data is 2D arrays, even if the data itself contains information in greater dimensions.)