Friday, March 20, 2015

Wiener processes

I see this process as a very large number of elements crowded together, relative to the external environment. The elements have the capacity to combine and reduce crowding.
I have t as the number of combinations that have taken place.
I have x as the notches on a measuring stick.

The job, then, is to approximate pi and e such that every notch on the measuring stick is equally precise in measuring the number of combinations. The elements evidently have the ability to partially overlap when they combine.
In this formulation, the number of elements is finite, since the approximations to pi and e stabilize, and the approximation error determines the largest x on the measuring stick. At that point, the environment and the largest x have the same imprecision; we can let that precision go toward zero, but never reach it.

This is what I am playing with at the moment. But I can tell you that this formulation leads to the hyperbolics with solutions at the Lucas angles, and to some combination of the Wythoff array if precision increases. That is because the rate of recombinations relative to the number of combinations (tanh), at some x along the measuring stick, will equal the minimum precision possible, and that will determine the approximations to e and pi. The result holds because I set the precision at a finite value. That precision will be 1/2 * tanh'', I think. And when that value is normalized to one, I always get the same solution, and that fixes the number of elements possible. The Lucas angles themselves are finite approximations based on Phi. And that is why we have quarks: the vacuum is precise enough that it needs to get close to the angle 1.5 * ln(Phi). Hence the proton needs three quarks, making three pi, allowing the proton to operate at that angle offset. We all want to know whether the vacuum is equally precise everywhere; that should be a major project for astrophysicists.
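As a concrete anchor for the connection between the hyperbolics and the Lucas numbers, here is a small sketch (my own illustration, not the post's derivation) verifying a known identity: tanh evaluated at integer multiples of ln(Phi) gives ratios of Fibonacci and Lucas numbers, so the "Lucas angles" sit exactly where tanh takes these rational-over-root-5 values.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def fib(n):
    # Fibonacci numbers: 0, 1, 1, 2, 3, 5, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    # Lucas numbers: 2, 1, 3, 4, 7, 11, ...
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Known identity: tanh(n * ln(Phi)) = sqrt(5)*F_n/L_n for even n,
# and L_n/(sqrt(5)*F_n) for odd n.
for n in range(1, 12):
    t = math.tanh(n * math.log(PHI))
    if n % 2 == 0:
        expected = math.sqrt(5) * fib(n) / lucas(n)
    else:
        expected = lucas(n) / (math.sqrt(5) * fib(n))
    assert abs(t - expected) < 1e-12
```

So an angle like 1.5 * ln(Phi) in the post sits between the integer Lucas angles n * ln(Phi) at which these exact ratios occur.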

The Lucas solution is fixed because of the 1/2 and the 1/root(2) in the formulation above. This defines causality: the process is spectrally matched, and it can sample itself (recombine) at the rate needed to cover its stable motion; it is at the Shannon-Nyquist rate. That is a necessity because of the value pi that is used as the maximum divergence. Spectra different from the symmetric, band-limited case are possible if the environment has a gradient (is not uniform). But since the environment has a gradient, it too must have elements doing recombinations, and you are eventually stuck with either finding God or making the world spherical.
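The Shannon-Nyquist claim can be illustrated with a hedged sketch (the frequency, sampling rate, and truncation length are my own choices): a band-limited signal sampled above twice its highest frequency can be recovered between the samples by Whittaker-Shannon (sinc) interpolation, i.e. the samples "cover" the motion completely.

```python
import math

def sinc(x):
    # normalized sinc, the interpolation kernel of the sampling theorem
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

f = 3.0      # signal frequency in Hz (hypothetical choice)
fs = 8.0     # sampling rate, above the Nyquist rate 2*f = 6 Hz
T = 1.0 / fs
N = 400      # truncate the ideal infinite series to samples n = -N..N

samples = [math.sin(2 * math.pi * f * n * T) for n in range(-N, N + 1)]

def reconstruct(t):
    # Whittaker-Shannon interpolation from the discrete samples
    return sum(s * sinc((t - n * T) / T)
               for n, s in zip(range(-N, N + 1), samples))
```

At any sample instant the reconstruction is exact by construction; between samples the truncated series recovers the sine to within the truncation error, which shrinks as N grows.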

What does log(1/x) = -log(x) mean?

It means that 1/x will not do the -log(x) actions, so that log(x) could do them. 1/x will never see those actions (recombinations): actions that cannot happen for 1/x if it is to be a good estimate of the inverse of x. x and 1/x have to divide up their allocated actions.
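A trivial numeric check of the identity, with the "action count" reading made explicit in base 2 (the doubling framing is my own illustration, not the post's):

```python
import math

x = 8.0

# The identity itself: log(1/x) = -log(x)
assert math.isclose(math.log(1.0 / x), -math.log(x))

# Reading log2 as a count of doubling "actions":
# x = 8 is 3 doublings from 1, while 1/x = 0.125 is those same
# 3 actions counted as not taken, i.e. -3.
assert math.log2(x) == 3.0
assert math.log2(1.0 / x) == -3.0
```

The counts for x and 1/x always sum to zero, which is one way to read "x and 1/x have to divide up their allocated actions."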


Negative time really means actions that have not happened yet. So we let time go on forever, like the fools we are, and in finite systems we are unknowingly counting actions that did not happen. Time is not going anywhere; it is just subdividing actions and unactions.
