In the multi-processing version, the vacuum at any time looks only at a finite sequence of the disturbance, and thus looks at many short sequences without overlap. But the encoders (there are many) never have complete knowledge of the full sequence. Simultaneity is restored, but knowledge is incomplete.
Increasing the encoding makes the sequence more efficient through collisions between encoded versions, which cause the local dimensionality to increase as adjacent encoders merge.
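To make the merging idea concrete, here is a minimal numerical sketch (my own analogy, not anything taken from the post): two adjacent scalar encoders are merged into a single two-dimensional vector quantizer with the same total number of levels, and the merged, higher-dimensional encoder packs a correlated disturbance more efficiently. All names and parameters below are illustrative.

```python
# Toy illustration: two adjacent scalar encoders merged into one 2-D vector
# quantizer. With the same total number of levels, the joint (higher-
# dimensional) encoder packs a correlated "disturbance" more efficiently,
# i.e. it reaches a lower distortion.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
x = rng.normal(size=(4000, 2))           # raw samples of the disturbance
x[:, 1] = 0.7 * x[:, 0] + 0.3 * x[:, 1]  # make the two channels correlated

# Two independent scalar encoders, 4 levels each (16 level pairs in total).
def scalar_quantize(v, levels=4):
    edges = np.quantile(v, np.linspace(0, 1, levels + 1)[1:-1])
    idx = np.digitize(v, edges)
    centers = np.array([v[idx == k].mean() for k in range(levels)])
    return centers[idx]

sep = np.column_stack([scalar_quantize(x[:, 0]), scalar_quantize(x[:, 1])])
err_separate = np.mean((x - sep) ** 2)

# One merged 2-D encoder with the same 16 levels.
km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(x)
err_merged = np.mean((x - km.cluster_centers_[km.labels_]) ** 2)

print(f"separate encoders MSE: {err_separate:.4f}")
print(f"merged 2-D encoder MSE: {err_merged:.4f}")   # lower: better packing
```

The only point of the toy is that joint, higher-dimensional quantization of correlated samples wastes fewer levels than independent scalar quantization, which is how I read "merging increases efficiency."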
This is the rough version of the encoder that atomic physicists currently use. In this version physicists discover incomplete encoding and add various properties to the quantizations to simulate the optimum congestion theorem. But the current standard model is always a bit skewed, and its skewness decreases as portions of the vacuum collide and must re-encode. Hence simultaneity decreases as the encoders collide and merge. The fate of the universe is a decrease in the number of encoders as dimensionality increases.
Action at a distance, therefore, occurs because the physicist causes a long sequence to be observed and, without knowing it, observes an increase in dimensionality. A black hole arises because the encoder observes a longer sequence. This happens naturally because encoding increases packing efficiency relative to the vacuum density. The vacuum thus measures the encoded disturbance, seeing a greater sample of events.
Light is a disturbance too sparse for encoding, and its speed is constant because the vacuum still has finite bandwidth.
Kinetic energy is the compression of the encoding against the band limit of the vacuum; it causes an increase in encoding dimensionality. If the dimensionality of the disturbance is higher than the capability of the vacuum, we get cyclic big bang behavior. If it is less, we get a steady state.
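Read literally, that dichotomy is a threshold condition. A toy iteration, with a capacity, growth rule, and reset rule that are entirely my own assumptions, shows the two regimes:

```python
# Toy dynamics (my reading, not the author's math): the disturbance's
# dimensionality relaxes toward an intrinsic target as it is compressed.
# If it exceeds the vacuum's capacity the encoding resets (a "cyclic big
# bang"); otherwise it settles into a steady state.
def simulate(target_dim, vacuum_capacity=10.0, steps=40):
    d = 1.0
    resets = 0
    for _ in range(steps):
        d += 0.5 * (target_dim - d)       # compression pushes d toward target
        if d > vacuum_capacity:
            d, resets = 1.0, resets + 1   # over capacity: encoding resets
    return "cyclic (%d resets)" % resets if resets else "steady state at %.2f" % d

print(simulate(target_dim=14.0))   # beyond the vacuum's capacity -> cyclic
print(simulate(target_dim=6.0))    # within capacity -> steady state
```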
Light should always be affected by the local dimensionality of the vacuum.
Propagations from encoders of higher dimensionality:
When the disturbance is dense enough, the magnetic field is quantized. In this case we would expect the gravitational field to be curved about the surface and the dark matter field to point vertically outward. Such an encoder would emit gravitational-magnetic light, but electromagnetic light never escapes. This is likely the black hole. All of the standard particles would be slightly requantized, having more complete properties than our physicists would expect.
Gravity, within our sun, is vertical outward and static, the magnetic field is highly curved, and charge leaving the sun produces standard light. One dimensionality down from the Sun would be nuclear-electro propagation: very high frequency, appearing to us as particles coming from the nuclei of atoms. So the emissions decrease in wavelength as the dimensionality increases. We should expect four or five major propagation modes in the universe. As the universe increases to maximum dimensionality, these propagations all curve around the surface and eventually collapse into the high-dimensionality vacuum.
The model is based on the optimum congestion principle.
I start with the unknown disturbance and the fixed sample rate of the vacuum. The encoding process needs no map; its goal is to cover the spectrum of the disturbance with maximum coverage. It undersamples such that the uncertainty constant it creates and the implied signal-to-noise ratio of the encoding match. Hence, I think, it measures the total disturbance with the minimum number of quantization levels. Thus it needs no map, and there is no predetermined standard model; it creates and refines the standard model as it progresses.
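Here is a small numerical sketch of how I read that matching condition, with every constant and name my own assumption: the vacuum undersamples a broadband disturbance at a fixed rate, the undersampling leaves a residual uncertainty, and the encoder takes the fewest quantization levels whose quantization noise does not exceed that residual, since extra levels would be wasted precision.

```python
# Sketch of the matching condition: pick the minimum number of quantization
# levels whose quantization noise is no larger than the uncertainty created
# by undersampling. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Unknown disturbance: a broadband signal on a fine grid.
t_fine = np.linspace(0.0, 1.0, 4096, endpoint=False)
disturbance = sum(rng.normal() * np.sin(2 * np.pi * f * t_fine + rng.uniform(0, 2 * np.pi))
                  for f in range(1, 60))

# Fixed (under)sample rate of the vacuum.
decimation = 16
samples = disturbance[::decimation]

# Uncertainty created by undersampling: error of the piecewise-constant
# reconstruction from the coarse samples.
reconstruction = np.repeat(samples, decimation)
uncertainty_power = np.mean((disturbance - reconstruction) ** 2)

def quantize(x, levels):
    """Uniform quantizer over the sample range."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / levels
    return lo + (np.floor((x - lo) / step).clip(0, levels - 1) + 0.5) * step

# Minimum number of levels whose quantization noise does not exceed the
# undersampling uncertainty.
for levels in range(2, 256):
    q_noise = np.mean((samples - quantize(samples, levels)) ** 2)
    if q_noise <= uncertainty_power:
        print(f"levels = {levels}, quantization noise = {q_noise:.4f}, "
              f"undersampling uncertainty = {uncertainty_power:.4f}")
        break
```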
And the disturbance is not expanding; it is becoming more efficiently packed and shrinking relative to the vacuum. And the vacuum reduces to restore density. If we think of the vacuum with no disturbance, it has