Another way to manage three binomials, say in a portfolio.
The three-sided weighted coin is flipped. One binomial gets heads, the others get tails. Then, to get cancellations (the mathematician's favorite trick), let the probabilities on the three-sided coin be prime fractions of the coin-toss count. If N is the coin-toss count, then the heads probabilities become 1/N, 2/N, 3/N. One of the terms in each binomial becomes:
(k^k)·((N-k)^(N-k)), which is the total Bayesian space of the two sets, one on k and the other on N-k. Then, using these binomials, reconstruct the Markov tuple. The entropy is the log of that multiple. The normalizer is N^N, the total Bayes space available. Coin tosses must be allocated in proportion to entropy: each toss goes to the binomial that maximizes total entropy at the current state. The problem becomes one of keeping the weights on the coin optimized. N, the total number of coin tosses, becomes a random variable.
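The allocation rule above can be sketched as a greedy loop. This is a minimal sketch of the unweighted case only, taking the log of the binomial multiplicity as the entropy; the helper names `log_binom` and `allocate_tosses` are mine, not anything from the post.

```python
from math import lgamma

def log_binom(n, k):
    # log of the binomial coefficient C(n, k), computed via log-gamma
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def allocate_tosses(total, sides=3):
    # heads[i] counts how many tosses binomial i has won;
    # every toss advances the shared trial count n by one
    heads = [0] * sides
    for n in range(1, total + 1):
        # greedy step: award the head to whichever binomial
        # maximizes the summed log-multiplicity (total entropy)
        best = max(range(sides),
                   key=lambda i: sum(log_binom(n, heads[j] + (j == i))
                                     for j in range(sides)))
        heads[best] += 1
    return heads
```

With equal weights the greedy rule just evens the heads out, e.g. `allocate_tosses(9)` gives `[3, 3, 3]`; the weighted 1/N, 2/N, 3/N coin would need the entropy terms reweighted accordingly.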
Imposing the Markov condition on pr(i=2) for the tuple and pr(i=1) for all of them demands a normalization of the binomials, and that yields another round of cancellations. Carry it all through and there is a residual (M+1)/M, which drives the overlap in the binomials.
The math gets simpler as you trim the factorial down to N^N. The effect is to just miss the requirement to use the imaginary axis, keeping it all positive definite. N is an open set, and we get a series of expanding solutions, walking up the m-tuple tree.
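One way to see the round off error left by trimming the factorial down to N^N: the gap between the exact log-multiplicity, log C(N,k), and the trimmed term N·log N − k·log k − (N−k)·log(N−k) stays small, growing only like log N. A quick numeric sketch of that gap (the helper names are mine):

```python
from math import lgamma, log

def log_binom(n, k):
    # exact log of C(n, k) via log-gamma
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def trimmed(n, k):
    # N log N - k log k - (N-k) log(N-k):
    # the log of N^N / (k^k (N-k)^(N-k))
    return n * log(n) - k * log(k) - (n - k) * log(n - k)

# the residual of the trim is positive and only O(log N)
for n in (10, 100, 1000):
    k = n // 3
    print(n, trimmed(n, k) - log_binom(n, k))
```

The residual is the Stirling correction, roughly (1/2)·log(2π·k(N−k)/N), so it is bounded and slowly growing, in line with the small bounded error described below.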
This model ends up optimizing the overlap between binomials, which is really the round-off error, a bounded, finite combination. Log(N) is mostly normal with a small variance, much smaller than the sum of the variances. At maximum entropy, N will have a mean of N with a small, bounded, and countable error. If we are dealing with m-tuples, then there is a factor of m, and the appropriate base would be m, or really 1/m. But that fraction is never one hundred percent on target.
Kinetic energy, the round-off error, is carried along with the color operator.