Software, and other stuff

Friday, June 12, 2020

Duplicates in the exponential

The Taylor series of the exponential is the sum of x^n/n! over n.

Each term x^n counts all possible messages taken from x elements, n at a time, with replacement, versus n!, the number of unique paths. The ratio is another partition ratio: it tells us the number of duplicate paths.
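As a quick sanity check, here is a small Python sketch (the particular values of x and n are mine, just for illustration) that counts both quantities directly:

    from itertools import product, permutations
    from math import factorial

    x, n = 4, 3  # alphabet of x symbols, messages of length n

    # All possible messages: x symbols taken n at a time, with replacement.
    messages = list(product(range(x), repeat=n))
    assert len(messages) == x ** n

    # The unique orderings (paths) of n distinct elements.
    paths = list(permutations(range(n)))
    assert len(paths) == factorial(n)

    # The partition ratio: duplicate paths per unique path.
    print(x ** n / factorial(n))  # 64 / 6, about 10.7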

The terms in the exponential expansion have a maximum duplication ratio when n = x: x^x/x!. The inverse of that is the number of valid paths over the total number of paths, the information entropy. The idea of information theory is to make the information entropies all within one bit for all messages, so that the duplicates can never get over-congested for any set of messages. All messages are sent with near-equal information entropy, and this yields the Huffman encoding tree, which manages to keep duplicate messages from over-congesting any node on the encoding tree.
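A minimal sketch of that peak claim, assuming only the term formula above (the choice x = 7.5 is mine): consecutive terms grow by the factor x/(n+1), which crosses 1 near n = x, so the duplication ratio tops out at n = floor(x).

    from math import factorial, floor

    x = 7.5
    terms = [x ** n / factorial(n) for n in range(40)]

    # Consecutive terms grow by x/(n+1), which crosses 1 near n = x,
    # so the duplication ratio x^n/n! is largest at n = floor(x).
    peak = max(range(len(terms)), key=lambda n: terms[n])
    print(peak, floor(x))     # both print 7
    print(1.0 / terms[peak])  # the inverse: valid paths over total paths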

The Huffman encoder eliminates duplicates, like the Markov color operator; both are encoding networks. Both are derived from the general Shannon-Nyquist condition, which implicitly specifies Gaussian arrivals. So the condition for Gaussian 'noise' in the original specification becomes Gaussian arrivals when converted to actions. The Huffman encoder essentially specifies deviations from Gaussian.
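For concreteness, here is a plain textbook Huffman encoder in Python, the standard heapq construction rather than anything specific to the Markov color operator:

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Heap entries: (frequency, tiebreak id, {symbol: code-so-far}).
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)
            f2, _, right = heapq.heappop(heap)
            # Merging the two rarest nodes prepends one bit to each side;
            # rare symbols end up deep, common ones shallow, so no node
            # of the encoding tree gets over-congested.
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (f1 + f2, count, merged))
            count += 1
        return heap[0][2]

    print(huffman_codes("aaaabbbcc"))  # {'a': '0', 'c': '10', 'b': '11'}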

Information theory is about meeting the Shannon-Nyquist condition. Information entropy is the ratio of unique messages to duplicates, the idea being to minimize duplicates. Thus, maximizing entropy is the same move as minimizing redundancy. And it all boils down to a congestion problem.
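That can be made concrete with the standard Shannon entropy; redundancy here is just the gap between the maximum possible entropy and the actual one (the example string is mine):

    from collections import Counter
    from math import log2

    def entropy(text):
        # Shannon entropy in bits per symbol.
        total = len(text)
        return -sum(c / total * log2(c / total) for c in Counter(text).values())

    text = "aaaabbbcc"
    h = entropy(text)
    h_max = log2(len(set(text)))  # uniform case: no symbol favored
    print(h, h_max, h_max - h)    # the gap is the redundancy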

The equation log(SNR+1) has the plus one because it is positive definite; the deviation count never goes to zero, but the deviation counter will still roll over, and it still counts deviation messages, N. So Shannon information theory is a march up the 1, y, z on the Markov tree.
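The plus one is the standard Shannon-Hartley capacity formula, C = B log2(1 + SNR); a one-liner makes the positive-definite point visible, since at SNR = 0 the log argument stays at 1 and the capacity bottoms out at zero rather than going negative:

    from math import log2

    def capacity(bandwidth_hz, snr):
        # Shannon-Hartley channel capacity in bits per second.
        # The +1 keeps the log argument positive definite: at snr = 0
        # the capacity is exactly 0, never negative.
        return bandwidth_hz * log2(snr + 1)

    print(capacity(3000, 0))     # 0.0
    print(capacity(3000, 1000))  # about 29,900 bits/s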
