One way to put it: it is the error band between frequency estimates. It is the result of calculating up to the maximum number of things.
It is measured by physicists on the flat earth, and there is compression loss. But the error is otherwise real and can be considered the error in counting all events. Within the Markov model there is unknown kinetic energy.
The fine structure deviations are made of kinetic energy and relativistic effects from flattening the space. But those numbers should all be known. Hurwitz gives us the maximum count in 3D space.
Ignore relativistic effects for now. 137 is the exponent for my Avogadro, but it includes kinetic energy. How much kinetic energy? I suggest that the electron is at the barrier, and it looks to be (3/2)^29 near its minimum; that is the number of paths it will sustain. So we would expect our fine structure to be composed of the uncertainty the kinetic energy state takes for each of the countable objects, up to the maximum index, which has an exponent.
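For scale, here is a quick check of that path count in Python; this is just the arithmetic as stated, nothing more:

```python
# A quick magnitude check of the path count above; just the arithmetic as stated.
paths = 1.5 ** 29
print(f"(3/2)^29 ~ {paths:,.0f} paths")   # about 1.3e5
```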
My numbers seem to match on this, but I am working backwards. There is also residual error in N, and that will add a notch in the error exponent.
I start by noting this is a triple system and the total sample space is a power of (3/2). The system is undersampled and admits a finite uncertainty.
I think I know the maximum usable sample space, everything in logs of (3/2). The exponent is 108, and if I take 29 as the exponent that counts kinetic states, I get 137. But there is Planck's error, at the top. Normalize it to three deviations. It is measurable, of order 10^-5.
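A minimal sketch of that bookkeeping; reading the residual error as (3/2)^-29 is my own assumption, but it does land at the quoted order of 10^-5:

```python
# Exponent bookkeeping from the paragraph above: 108 sample levels plus 29
# kinetic levels gives 137.  Reading the residual as (3/2)^-29 is an
# assumption, but it comes out at the quoted order of 10^-5.
sample_exp, kinetic_exp = 108, 29
print(sample_exp + kinetic_exp)            # 137
print(f"(3/2)^108 = {1.5**108:.3e}")       # ~1.0e19
print(f"(3/2)^137 = {1.5**137:.3e}")       # ~1.3e24, Avogadro scale
print(f"(3/2)^-29 = {1.5**-29:.3e}")       # ~7.8e-6, order 10^-5
```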
So, building up the Markov tree, we quickly get a stop unless the electrons have their kinetic energy counted; they have already hit the bandwidth barrier. And we are mostly spreading N across the Markov tree. The 137 number assumes a fully tranquil environment with only the kinetic energy needed for quantization spent.
So, then, (3/2)^137 is the largest usable integer in 3D space; beyond that we need a jump to 5D. (3/2)^106 is the largest integer any 3D system can count. Markov is hyperbolic, so on the Lie flat paper there is a slight resolution loss going out, due to the projection onto a 2D surface.
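For scale, the two limits side by side; nothing here beyond the numbers already quoted:

```python
# The two limits quoted above, written out as plain powers of 3/2.
largest_usable    = 1.5 ** 137   # largest usable integer in 3D space, per the text
largest_countable = 1.5 ** 106   # largest integer any 3D system can count, per the text
print(f"(3/2)^137 = {largest_usable:.3e}")
print(f"(3/2)^106 = {largest_countable:.3e}")
print(f"gap (3/2)^31 = {largest_usable / largest_countable:.3e}")
```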
Light takes two samples to correct the deviations. The numbers given are aggregate; they are usually spread around the central node, usually the proton, and so on.
This is a Bayesian theory of physics: we only get a finite amount, and there is no Godot. The key is to understand that iLog(i) gets the set size that optimally centers your binomial. If your binomial is centered, then the electrical engineers won't bitch, because you are at least as Gaussian as the next guy.
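A minimal sketch of what "centered" buys you, assuming iLog(i) means a set size n of about i*log(i) and that "centered" means a symmetric p = 1/2 binomial; the function names and the choice i = 29 are mine, purely for illustration:

```python
import math

# Sketch: a centered (p = 1/2) binomial on n ~ i*log(i) trials, compared
# pointwise to the Gaussian with the same mean and variance.
def set_size(i: int) -> int:
    """Set size n ~ i*log(i) (natural log), rounded to an integer."""
    return max(1, round(i * math.log(i)))

def binomial_pmf(n: int, k: int, p: float = 0.5) -> float:
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

i = 29                       # the kinetic exponent, reused here only as an example index
n = set_size(i)              # ~ 98 for i = 29
mu, sigma = n / 2, math.sqrt(n) / 2

# Worst pointwise gap between the centered binomial and its Gaussian stand-in.
worst = max(abs(binomial_pmf(n, k) - normal_pdf(k, mu, sigma)) for k in range(n + 1))
print(f"n = {n}, worst pointwise gap = {worst:.2e}")
```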