Wednesday, October 22, 2014

Brownian Motion

The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10^21 collisions per second.[6] Thus Einstein was led to consider the collective motion of Brownian particles. He showed that if ρ(x, t) is the density of Brownian particles at point x at time t, then ρ satisfies the diffusion equation:

\frac{\partial\rho}{\partial t}=D\frac{\partial^2\rho}{\partial x^2},
where D is the mass diffusivity.
Assuming that N particles start from the origin at the initial time t=0, the diffusion equation has the solution

\rho(x,t)=\frac{N}{\sqrt{4\pi Dt}}e^{-\frac{x^2}{4Dt}}.
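Einstein's payoff from this solution is that the mean squared displacement grows linearly in time, <x^2> = 2Dt. A minimal simulation sketch (my illustration, not part of the quoted derivation), checking the empirical variance of many random walkers against 2Dt:

import numpy as np

# Simulate N independent 1-D Brownian walkers starting at the origin.
rng = np.random.default_rng(0)
N, steps, dt, D = 100_000, 1_000, 1e-3, 1.0

# Each step is Gaussian with variance 2*D*dt (standard Brownian increments).
x = rng.normal(0.0, np.sqrt(2 * D * dt), size=(N, steps)).cumsum(axis=1)

print("empirical <x^2> at t=1:", x[:, -1].var())  # ~ 2*D*t = 2.0
print("theoretical 2*D*t:     ", 2 * D * 1.0)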

This one is going to be fun. What did Einstein tell us? He identified some approximations in the relative movement of two mixed chemical solutions. These approximations let us use Isaac's rules of grammar.

Can we define how the actual molecules do it without Isaac's rules? Can we eliminate the Greek symbols and dump the t thing? We sure can. I am working on it, but likely I will only get us part way there, as I am a hack. This is a work in progress: I am going to dump the e thing and make a finite log, replace the t thing with the fine structure spectrum, and let the pi thing fall out as a result of local action by the bubbles of the universe.

I think we will find molecules at the markets, and they keep marching further into the market, where trades expand mostly as the square of the market 'quant' and they rarely have to wait in line. I am going to do it with the Édouard Lucas rules of grammar.

This guy, smart dude, but a little overweight. He doesn't look like a mad scientist.
It was not until 1918 that a proof (using hyperelliptic functions) was found for this remarkable fact, which has relevance to the bosonic string theory in 26 dimensions.[1] Elementary proofs have been more recently published.[2][3]

Uh oh, he only does 2D bosons. How are we going to fix that? Hopefully Andrey Markov had some ideas. Do geodesic integers exist? Bubbles make spheres easily enough, but they need to connect them. Or we can make unit ellipsoids using Mr. Markov's triples. Where are the brilliant mathematicians who work this out? Let me go look for them and save myself time and effort.

How accurately does Phi measure Pi?

In this previous post I noticed that the Phi sequence, incorporated into Lucas numbers, can compute Pi as a hyperbolic power series of Tanh squared. It looks like the rational fraction is 223/71, an error of about 7e-4, as taken from this set of rational approximations from John Heidemann. That made me think: how does the Sun do better? NASA measured it and found that 2*Pi*r for the Sun holds to about 8e-6. The Sun did almost 100 times better than Mr. Lucas and his Phi. Well, NASA removed magnetic variations over time under the assumption that magnetism and gravity are unrelated. Do they have a Theory of Everything that allows them to do that?
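For what it is worth, the 223/71 figure is easy to check; 223/71 is Archimedes' classical lower bound for Pi:

import math

approx = 223 / 71
print(approx, abs(math.pi - approx))  # 3.14084..., absolute error ~7.5e-4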


Tuesday, October 21, 2014

These mathematician folks

Copied from Wolfram's page on cyclotomic polynomials. Who are they? These guys are rewriting science as we know it; they are brilliant. I am a hack, an amateur. I know what is happening, but I can only occasionally work around the edges.

What is happening is that these polynomials have an associated recursive integer set. The polynomials will be mapped to standard physics and integerized on the unit circle. Much like Schrödinger, except the result will be a proton that is stable with only local knowledge anywhere: the finite element version of quantum physics. It's happening; I wish I were smarter.
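The "associated recursive integer set" can at least be made concrete for the polynomials themselves: Phi_n(x) = (x^n - 1) divided by the product of Phi_d(x) over the proper divisors d of n, with integer coefficients at every step. A small sketch of that recursion (my own illustration, not from the Wolfram page):

from functools import lru_cache

def polydiv(num, den):
    # Exact division of integer polynomials, coefficients low degree first.
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in reversed(range(len(q))):
        q[i] = num[i + len(den) - 1] // den[-1]
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return tuple(q)

@lru_cache(maxsize=None)
def cyclotomic(n):
    # Coefficients of the n-th cyclotomic polynomial Phi_n, low degree first.
    poly = (-1,) + (0,) * (n - 1) + (1,)  # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            poly = polydiv(poly, cyclotomic(d))
    return poly

print(cyclotomic(12))  # (1, 0, -1, 0, 1)  ->  x^4 - x^2 + 1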

The physicists get it: Weinberg, Higgs, all of them are in on the game. These are exciting times.

REFERENCES:
Apostol, T. M. "Resultants of Cyclotomic Polynomials." Proc. Amer. Math. Soc. 24, 457-462, 1970.
Apostol, T. M. "The Resultant of the Cyclotomic Polynomials F_m(ax) and F_n(bx)." Math. Comput. 29, 1-6, 1975.
Beiter, M. "The Midterm Coefficient of the Cyclotomic Polynomial F_(pq)(x)." Amer. Math. Monthly 71, 769-770, 1964.
Beiter, M. "Magnitude of the Coefficients of the Cyclotomic Polynomial F_(pq)." Amer. Math. Monthly 75, 370-372, 1968.
Bloom, D. M. "On the Coefficients of the Cyclotomic Polynomials." Amer. Math. Monthly 75, 372-377, 1968.
Brent, R. P. "On Computing Factors of Cyclotomic Polynomials." Math. Comput. 61, 131-149, 1993.
Carlitz, L. "The Number of Terms in the Cyclotomic Polynomial F_(pq)(x)." Amer. Math. Monthly 73, 979-981, 1966.
Conway, J. H. and Guy, R. K. The Book of Numbers. New York: Springer-Verlag, 1996.
de Bruijn, N. G. "On the Factorization of Cyclic Groups." Indag. Math. 15, 370-377, 1953.
Dickson, L. E.; Mitchell, H. H.; Vandiver, H. S.; and Wahlin, G. E. Algebraic Numbers. Bull. Nat. Res. Council, Vol. 5, Part 3, No. 28. Washington, DC: National Acad. Sci., 1923.
Diederichsen, F.-E. "Über die Ausreduktion ganzzahliger Gruppendarstellungen bei arithmetischer Äquivalenz." Abh. Math. Sem. Hansischen Univ. 13, 357-412, 1940.
Lam, T. Y. and Leung, K. H. "On the Cyclotomic Polynomial Phi_(pq)(X)." Amer. Math. Monthly 103, 562-564, 1996.
Lehmer, E. "On the Magnitude of the Coefficients of the Cyclotomic Polynomial." Bull. Amer. Math. Soc. 42, 389-392, 1936.
McClellan, J. H. and Rader, C. Number Theory in Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1979.
Migotti, A. "Zur Theorie der Kreisteilungsgleichung." Sitzber. Math.-Naturwiss. Classe der Kaiser. Akad. der Wiss., Wien 87, 7-14, 1883.
Nagell, T. "The Cyclotomic Polynomials" and "The Prime Divisors of the Cyclotomic Polynomial." §46 and 48 in Introduction to Number Theory. New York: Wiley, pp. 158-160 and 164-168, 1951.
Nicol, C. "Sums of Cyclotomic Polynomials." Apr. 26, 2000. http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0004&L=nmbrthry&T=0&F=&S=&P=2317.
Riesel, H. "The Cyclotomic Polynomials" in Appendix 6. Prime Numbers and Computer Methods for Factorization, 2nd ed. Boston, MA: Birkhäuser, pp. 305-308, 1994.
Séroul, R. "Cyclotomic Polynomials." §10.8 in Programming for Mathematicians. Berlin: Springer-Verlag, pp. 265-269, 2000.
Sloane, N. J. A. Sequences A013594, A010892, A054372, and A075795 in "The On-Line Encyclopedia of Integer Sequences."
Trott, M. "The Mathematica Guidebooks Additional Material: Graphics of the Argument of Cyclotomic Polynomials." http://www.mathematicaguidebooks.org/additions.shtml#N_2_03.
Vardi, I. Computational Recreations in Mathematica. Redwood City, CA: Addison-Wesley, pp. 8 and 224-225, 1991.
Wolfram, S. A New Kind of Science. Champaign, IL: Wolfram Media, 2002.

Computing a bit of Pi with the Lucas sequence

Here I have the partial sums of 1 - tanh(n*a)^2, where a is ln(phi) and n is the X axis. So this is really the sum of Tanh', the first derivative. Since the Lucas polynomials are cyclotomic, they have roots on the unit circle. These sums might approach the value Pi/2. They do, and get closest at the Lucas prime 29. After that, the sums stay close to Pi/2.

Here are the sums from my spreadsheet (a quick code check follows the list):


0.8
1.2444444444
1.4444444444
1.5260770975
1.5580770975
1.5704227765
1.5751565043
1.5769672784
1.57765932
1.5779237128
1.5780247102
1.578063289
1.5780780249
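A minimal sketch that reproduces the spreadsheet column, assuming the terms are tanh'(n*a) = 1 - tanh(n*a)^2 with a = ln(phi) and n = 1, 2, 3, ...:

import math

phi = (1 + math.sqrt(5)) / 2
a = math.log(phi)  # the hyperbolic angle step, ln(phi) = 0.4812...

total = 0.0
for n in range(1, 14):
    total += 1 - math.tanh(n * a) ** 2  # tanh'(x) = 1 - tanh(x)^2
    print(n, total)  # 0.8, 1.2444..., matching the column above

print("pi/2 =", math.pi / 2)  # the sums level off near 1.57808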
Should we care? I am not sure. Whatever the starting angle, the series sums converge to some number, since Tanh goes to 1.0. So I have to show that the series sums from ln(phi) converge to a specific value tied to Pi. But Professor Lucas may have already figured that out. I am not surprised that they might; I just want to know if this is a relatively unique series from the hyperbolic angles made of phi.

The sinh and cosh still obey this:
\cosh^2 x - \sinh^2 x = 1,
so any derivation of Pi from the Pythagorean theorem can be derived from this. The Taylor series of Tanh is limited to Pi/2 because this is a triangle. Why Phi works likely goes back to Lagrange.

But leave that to later. I am more interested in the differential:
sum(tanh'(n*a)) = pi/2. The residual error on that series is (tanh')^n when the series has n-1 terms. N is about 7. That is a large power. It means that light ultimately has 7 degrees of freedom, or thereabouts. That also implies a big charge of six, I think, in the gluons. But I had estimated about 4, so dunno?

But this point conforms with what the physicists are doing with the natural units: they are making pi an output, not a constant. It seems all those tiny constituents of the vacuum are concerned about getting an accurate value for pi. It is back to sphere packing, I presume.

One and a half seems interesting

Let x = 2/3
Then x+x^2+x^3+... = 2
The Maclaurin series for (1 - x)^{-1} is the geometric series

1+x+x^2+x^3+\cdots
And we can see that in the limit we get 2, since I have removed the first term, 1. If we add more samplers, each new sampler being 2/3 the rate of the previous, we still approach the Shannon-Nyquist limit of sampling at twice the arrival rate. (I use queuing terminology instead of bandwidth.) Anyway, adding more and slower samplers, we can always add enough to prevent any sampler from queuing up. It would be inefficient, but I would think this effect would show up in proofs on sampling theory.
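A quick numeric check of that limit, in the post's sampler framing (my sketch):

# Partial sums of (2/3)^n for n >= 1 approach x/(1-x) = 2: each added
# sampler runs at 2/3 the rate of the previous one, and together the
# sampler bank approaches twice the arrival rate (the Shannon-Nyquist rate).
x, term, total = 2 / 3, 1.0, 0.0
for n in range(1, 40):
    term *= x  # term = (2/3)**n
    total += term
print(total)  # ~2.0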

How does the spacing between arrivals look? For each slower sampler it goes as the inverse, 3/2. Now it gets interesting. Consider packing a sphere, but we allow volume spaces for the density to adjust continually. So we make room for (3/2)^n, n = 1...Max, spaces inside the sphere, approaching the Shannon sampling rate. How many samplers can we put in the sphere at a maximum before we reach the sphere volume?
Here I have it. The blue line is accumulated empty space taken in powers of 3/2. The red line is sphere volume; the X axis is radius in integer increments. We run out of room for empty spaces at about 17.1, which is one and a half short of the exponent in the ratio of the proton to electron mass: (3/2)^18.53 ≈ 1836. I guess the volume of the proton is 3/2 times the number of empty spaces, allowing room for the things in motion.
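The exponent in that last step checks out numerically (1836.15 is the proton-to-electron mass ratio):

import math

ratio = 1836.15
x = math.log(ratio) / math.log(3 / 2)
print(x)  # ~18.53
print((3 / 2) ** 18.53)  # ~1836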

This shows up on my spectral chart. This is also what makes the atom seem to be a 17-bit computer.

Monday, October 20, 2014

Lucas numbers make a hyperbolic sequence

I should have noticed.



Every odd-indexed Lucas number is twice a sinh, and every even-indexed one is twice a cosh, stepping up by a hyperbolic angle increment of ln(Phi) = 0.481211825. The angles are integer multiples of ln(Phi). That brings up interesting properties: sinh(k) + cosh(k+d) = sinh(k+2d) when k is an odd multiple of d = ln(Phi), which is just the Lucas recurrence L_n + L_{n+1} = L_{n+2} in hyperbolic form. And the sum-of-angle identities yield linear combinations of Lucas numbers.
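A check of the correspondence (my sketch, using L_n = phi^n + (-1/phi)^n, which is where the factor of two comes from):

import math

phi = (1 + math.sqrt(5)) / 2
a = math.log(phi)  # 0.481211825...

L = [2, 1]  # L_0, L_1
while len(L) < 12:
    L.append(L[-1] + L[-2])

for n in range(1, 12):
    h = 2 * (math.sinh(n * a) if n % 2 else math.cosh(n * a))
    print(n, L[n], round(h, 9))  # the two columns agree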




Boy, I should have looked six months ago, but I am a bit of a dummy.

Sunday, October 19, 2014

Shiller needs a clue

He complains about secular stagnation being a rumor harming the market.

When a Stock Market Theory Is Contagious

Since Sept. 18, the stock market has fallen more than 6 percent. An abrupt decline last week — after five years of gains — prompted fears that the market may have reached a major turning point. Has a bear market begun? It's a great question. The problem is that short-term market movements are extremely hard to forecast. But we live in the present and must try to understand what's driving markets now, even if it's much easier to predict their behavior over the long run. Fundamentally, stock markets are driven by popular narratives, which don't need basis in solid fact. True or not, such stories may be described as "thought viruses." When they are pernicious, they are analogous to the Ebola virus: They spread by contagion.
Here is the chart showing QE and the S&P 500. The market needs to know if the Fed is making another QE run, because the market has to hold the liquidity over the cycle. Secular stagnation means yes, the Fed will make another QE run. No secular stagnation means the market will start the next business cycle with a correction.

It is not that difficult.

Conclusion on RGDP and NGDP

The two numbers are nearly perfectly hedged against any borrowing DC does. The first differentials of these two values act as if the economy knows perfectly well that DC is a constraint. There look to be two or three modes in that ratio, all of them designed to hedge short, medium, or long term against any sudden moves by Congress.

The claim of money illusion is bogus. The claim of sudden inflation is bogus. We are not likely to have a major crash, but we will have a period of the slogs. The economy will get about 1.5 points (YoY) of RGDP over the next two years.
The Keynesians are stuck: their claim of near-zero rates, or near-zero real rates, is completely bogus.

NGDP and RGDP

Here I have the average of Real GDP growth divided by Nominal GDP growth, taken over various-sized windows. One can see the window size by noting the point where the line starts for each of the three colors. The red, for example, has the longest window over which the average is taken. The X axis is the number of quarters. The data start in 1950 and go through today.
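A sketch of the calculation, assuming the FRED quarterly series GDP (nominal) and GDPC1 (real); the post's exact data source and window sizes are not stated, so those are my guesses:

import pandas_datareader.data as web

ngdp = web.DataReader("GDP", "fred", "1950-01-01")["GDP"]
rgdp = web.DataReader("GDPC1", "fred", "1950-01-01")["GDPC1"]

ratio = rgdp.pct_change(4) / ngdp.pct_change(4)  # YoY real over YoY nominal

for window in (20, 40, 80):  # rolling window lengths, in quarters
    print(window, ratio.rolling(window).mean().iloc[-1])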

Notice they all cycle around 0.5. Why is that? Because the economy always computes enough Nominal GDP so that it can contain the variation in Real GDP. Mean equals variance. This is the Compton wavelength in physics: matter must be large enough to trap the variance of the light it contains. Economics is the same. This is optimum queuing. It appears in the growth rate simply because the economy is built around net gain, or flow.

So the fiat banker has to take 'losses', actuarial losses, to cover variation in real GDP. This is not real money lost; it is the fiat banker doing fiat banking. The real question is cause and effect. Does the economy need cycles to grow, or is the fiat banker creating cycles? Dunno, still thinking.

But Real GDP is the real change in inventories, counted using matched dollars but counting real goods, if the bean counters have this right. So why does inventory have to fall and rise to make growth, if Real GDP is the cause here? The answer would be sphere packing, yet again. The economy can pack more stuff into the sphere if some of the stuff is in motion, which is my unproven conjecture on sphere packing. In this case, the economy wants the arrival variation to match the lines at the queue, and wants that to be constant. So it manages the service time at the queue until the number of folks in line is exactly right, likely two or three. Here is a description of the Poisson distribution:

In probability theory and statistics, the Poisson distribution (French pronunciation [pwasɔ̃]; in English usually /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.
I have highlighted the appropriate term: independent arrivals. That is, arrivals must not be conditional upon each other. That is the minimal redundancy condition. But that business cycle seems rather redundant; it looks like an inefficiency. Real GDP should be more of a random walk, so some inefficiency is introduced. Further, it looks like one inefficiency, since we clearly have one unit root. Some queue in the economy is much slower and less able to adapt.
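The mean-equals-variance property of Poisson arrivals that keeps coming up here is easy to see by simulation (a minimal sketch):

import numpy as np

# Poisson-distributed arrival counts: the mean equals the variance.
rng = np.random.default_rng(1)
arrivals = rng.poisson(lam=5.0, size=1_000_000)
print(arrivals.mean(), arrivals.var())  # both ~5.0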

What happens when all the queues are equally congested? We still have mean equals variance, but the variance would not be dominated by a single queue; it would be spread out. We would get less inflation. So the question becomes: who is the guilty party that is slow to adapt?


Saturday, October 18, 2014

Nature's amazing counting ability

Look here: real GDP, and it looks like a 2.2% growth rate with about a half point of variance. If you take growth at quarterly rates, I bet you get the mean and variance of growth being equal. Also, this means the bean counters are using log base 2; they have approximated the M2 velocity as 2, instead of 1.8.

Is this a plot? No, the BEA has to do revisions because data are late. Data are late because inventories are jammed and uncountable, or foreign currency has to be swapped, or the derivatives industry has to price insurance. Stores across the economy take a good shot at getting this right, and it is basically improved in accuracy as it goes up the stream.

If you look at RGDP/NGDP YoY growth rates and average those over two or three recession cycles, the number is 0.5, to an accuracy of 1%. This is true from 1950 until today. That number is also a variance, and it evenly splits the variance in money and goods inventory; it is the optimally accurate double-entry accounting system. It is Shannon sampling theory, but accountants never really used that theory. And entropy theory was mainly buried in physics and engineering until the digital age. This is minimum redundancy; we need this as the basic law of nature.

What does it mean?
The economy has somehow built a six-bit, base-2 counter, sampling at Nyquist-Shannon on half-quarter periods. Six bits because the economy knows we do an eight-year cycle, counting down from eight: 8, 4, 2, 1, 1/2, 1/4. How did 100 million monkeys across three time zones, four geographies, and four different weather systems figure this out? They did a Huffman encode on trade, creating a minimum redundancy network.
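Huffman's algorithm is the textbook minimum-redundancy code, so here is a toy sketch of what a "Huffman encode" means: frequent symbols get short codes, rare ones get long codes (my illustration; the trade analogy is the post's):

import heapq
from collections import Counter

def huffman_codes(text):
    # Build a minimum-redundancy (Huffman) code for the symbols in text.
    # Heap entries are [weight, tiebreaker, {symbol: code}].
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [w1 + w2, i, merged])
        i += 1
    return heap[0][2]

print(huffman_codes("aaaabbbccd"))  # e.g. a -> 0, b -> 10, c -> 111, d -> 110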

Is it a trick?

No, quasars make a mixed two- and three-base log system and make baryons, matching the coefficients of motion to fit the finite log pattern. DNA adds a log base five, and Vegas gets the number seven. Something is going on. Nature makes groups to match finite logs, creates minimum spanning finite log networks, and packs them with Higgs bubbles. Mathematicians are just now cracking nature's code.

The connection:
There is a connection between mean equals variance from Poisson and optimal transfer networks.  Something about making all the queues equal length distributes uncertainty (or motion in physics). Mean equals variance seems to be the fundamental Compton wave equivalence.

How did nature figure out how to make F and 1/F from recursive equations? We can do it by simple division and aggregation from a starting ratio. How did the universe figure out how to do it in log? How did it figure out it needed two different bubbles of wave to effect subtraction? Is this simply minimum redundancy, such that non-solutions fly away? Do random events result in a stable solution?


This is not a drill. These are exciting times for mathematicians; the world is at their door. This stuff is better than Isaac's rules of grammar. I wish I were young and brilliant.