Still a work in progress, but right now I have Lucas order counting up from the edge of the orbital toward the proton. So a small hyperbolic angle, with respect to the electron reference, which grows large as the electron moves radially toward the proton center. Moving toward the proton, the radial gradient decreases and the transverse gradient increases. Why the increase? Lucas numbers grow! Yes, but the derivative of tanh decreases. I have likely inverted entropy in prior posts; always be careful with me and sign dyslexia. But when the Lucas angle grows, gradients become fuzzy. The electron does not know radius; it only knows its own surface curvature relative to the immediate environment.
So the electron transfers radial motion to transverse motion going down, as kinetic energy increases. The transverse motion becomes circular as the electron creates its own center of motion. Or, one can say, as the electron moves toward the proton, the proton center becomes ambiguous and its uncertain location is distributed by angular rotation. The recursive nature of uncertainty causes an obliqueness in the angular movement, allowing the electron to move from orbital to orbital.
So, in this model, the orbitals are centered by Lagrange points, centers of motion. The electron is guided by a hyperbolic compass and will move toward the most certain Lagrange point. But the motion of the electron, itself, makes the Lagrange points fuzzy and ambiguous, causing angular motion. So, in the end, we have the unit spheres moving in a dense shell of multiple Lagrange points. The unit spheres constantly distribute 'fuzziness' among these points, which is the minimum redundancy condition. It is a sort of spectral optimization in the sphere. The number of Lagrange points increases with energy, and seems to be limited to about 16.
Being careful with terms: certain is the opposite of fuzzy. When the electron has a small angle, it has high variance, 1-tanh^2 is large, and that means it has a radial direction. Fuzzy is when the electron's angle approaches Pi/2 and 1-tanh^2 is near the fine structure constant. I am learning about this along with everyone else, so always expect errors, and do not be surprised if professional mathematicians switch directionality.
The math:
I would give the electron one standard unit of fuzziness, which it distributes among the selected number of Lagrange points. The electron can deal with no more than three L points at a time, and I would give it a separate tanh angle set for each point. The electron moves in units of hyperbolic angle quants: it takes one step toward the least fuzzy L point by increasing the angle, which increases that L point's fuzziness. Then it takes the next quant step toward the second least fuzzy L point and performs the same step. If the electron exceeds its fuzzy budget, it degenerates. The goal of the electron is to avoid calculus, so it must operate from 0 to Pi/2; generally this would be the Pi/2 square angle. If the electron is doing log 3, then each of the angle changes means a net change in the electron's inertness sum network: it takes in and emits inertness, the opposite of fuzzy. Hence the electron leaves inertness balanced in the shell and L points. I have not done this yet, but the method is close. At equilibrium, the electron is suspended in adiabatic motion, always staying below its fuzzy budget, and the atomic shell has balanced inertness. When the electron is near the edge of an orbital shell, it likely dumps some inertness, essentially defining the Lucas zero points and setting the grid. When it gets too much fuzziness it will take in a batch of inertness. I think the dumping and gulping of inertness causes the tanh to scale a bit, subtracting or adding separation between angles. There is no distance or time in the thing; one has to create an engineering unit standard and compute these as functions of the quants.
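The stepping rule above can be sketched as a toy loop. This is only my reading of it: the fuzziness measure (tanh^2 of the accumulated angle), the budget value of 2.0, and the degeneration test are all assumptions I made to get something runnable, not settled parts of the model.

```python
import math

PHI = (1 + 5 ** 0.5) / 2
QUANT = math.log(PHI)          # one hyperbolic angle quant

def run(n_points=3, budget=2.0, max_steps=100):
    # assumption: the fuzziness of a Lagrange point is tanh(theta)^2
    theta = [0.0] * n_points
    steps = []
    for _ in range(max_steps):
        fuzz = [math.tanh(t) ** 2 for t in theta]
        i = fuzz.index(min(fuzz))           # pick the least fuzzy L point
        trial = theta[:]
        trial[i] += QUANT                   # one quant step toward it
        if sum(math.tanh(t) ** 2 for t in trial) > budget:
            break                           # next step would exceed the fuzzy budget
        theta = trial
        steps.append(i)
    return steps, theta

steps, theta = run()
print(steps)   # the sequence of L points visited before the budget binds
```

With these assumptions the electron cycles the three points in order and stops after seven quant steps, which at least shows the greedy rule self-limits below the budget.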
With three hyperbolic navigators, I think the electron will solve cubic roots along the way by noting the sequence of quant jumps it takes. I have not done this, but it seems like a simple extension of finite difference computation incorporating entropy. For example, when the computational system arranges the grid for maximum entropy, it can run the Lucas equations from zero to one and draw or derive the motion.
Finite Log:
This should work. Put N bodies in the system with enough fuzziness to prevent degeneration. Then step them through, removing excess inertness until they stabilize. The stable angles will be the finite log rotation.
Quarks?
Dunno, but they definitely do not have a dominant Lagrange point. All their motion seems to be angular. But I am still not sure.
Friday, October 31, 2014
The magic seems to be 17
Phi, like Pi, is something that allows mathematicians to use some grammar, in the limit. But at the end of the day, there is no magic in the vacuum except, I think, 17 and spherical symmetry. 17 seems to be the number of different things that can be packed into a sphere and still make Stokes work within the error of the fine structure. The fine structure seems to be an outcome of the finite sphere problem. Each time I look at any power series that counts up to the fine structure, I get a limit in its order, and that seems to be 17. The limit can be measured using a recursive power series and decoded into a finite rational ratio. But even 'ratio' is a bad term; there seems to be an actual set of integers that maximally pack a sphere. And even 'integer' is an improper term, since the smallest elements that are counted can vary in their counts up to the fine structure. 17 is the open limit; the actual degrees of freedom end up being 16, or 8 if we count fractions and wholes.
This seems to be the ruling equation, where f is radius(n), n being the count. At the limit, r(n) becomes r(16) and has a value that cancels Pi from the volume. Two radii rule, sinh and cosh, and when they are maximally identical we get 1/sqrt(Pi), and the warping of the grid just cancels Pi out of the equation. When their angular separation is moderate, we get an even mix of angular motion and radial motion, the peak of the Planck curve for the sphere, otherwise known as optimal queueing. It is mainly combinatorics; the binomial coefficients have an optimum integer order for the sphere. The limit is based on allowable error in identifying the sphere center. Too many combinations exceed that limit and the sphere degenerates. Too few combinations and the local grid will renormalize to restore the correct degrees of freedom.
Here is a measure model:
Any unit sphere will follow a density gradient when the change in density is larger than its own curvature. When the condition is not met, the unit sphere will engage in angular motion because its own density dominates the gradient. So the electron at the edge of its shell sees a dominant gradient toward the proton, and the electron will move the imbalance through itself, causing motion toward the center. At some point the gradient becomes so steep that it cannot carry charge, and the electron begins motion along an angle to the proton radius. The net is a fixed uncertainty about the actual proton center. When the gradient toward the center goes as 3/r and the gradient orthogonal to the radius goes as Pi * r, they match at r^2 = 3/Pi. At 1/cosh(a)^2 = FS (the fine structure constant), the hyperbolic angle is roughly a = Pi - FS/2. So it all boils down to:
- 1) We are limited to the spherical model mainly because of combinatorics packing the 1, 2, 3 together at the low end.
- 2) Combinatorics are finite, so our degrees of freedom are limited and maximum entropy applies.
- 3) From these two, hyperbolics follow, as does 16/2, and Pi/2 falls out automatically.
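The gradient-matching numbers above can be checked directly. A small sketch, where the fine structure value and the acosh inversion are my additions; it also shows that the claimed angle identity a = Pi - FS/2 is only approximate (the two values differ by about half a percent):

```python
import math

ALPHA = 0.0072973525693   # fine structure constant (CODATA value)

# Radial gradient 3/r equals transverse gradient Pi*r when r^2 = 3/Pi.
r = math.sqrt(3 / math.pi)
assert abs(3 / r - math.pi * r) < 1e-12

# Hyperbolic angle where 1/cosh(a)^2 falls to ALPHA,
# compared with the text's claim a = Pi - ALPHA/2.
a_exact = math.acosh(1 / math.sqrt(ALPHA))
a_claim = math.pi - ALPHA / 2
print(r, a_exact, a_claim)   # ~0.977, ~3.151, ~3.138
```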
And, minimum redundancy theory is mostly setting the boundary conditions, not about mechanism.
And, this is not a gimmick with the hyperbolic function, rather the reverse. The hyperbolic system is a gimmick that bridges the gap between finite and infinite systems. Past the hyperbolic angle Pi/2, calculus becomes accurate faster than finite systems can grow. When the angle is between zero and Pi/2, the differential equation above is more accurate with certain finite number sets than with calculus approximations.
At some point the space for a next generation of spheres is not available, and crowding makes counting by five possible and we get organic and bio chemistry. Up to that point, its a spherical world. I think the quarks have managed a log base 3 system, a 3-ary network. It is when that is squeezed against the orbitals that chemistry evolves with log base 5. The total theory seems to be minimum redundancy combinatorics.
Wednesday, October 29, 2014
Here is a fun equation
4^phi = 3*Pi, to an error of .003.
Weird? Not sure, but I think it means irrationality might be a spherical definition. It means that 2*Phi = log2(3*Pi), or that the precision of measuring 3*Pi-1, assuming Gaussian noise of one, is about Phi. That is, when measuring 3*Pi-1, one notch on your ruler gets you about Phi bits of precision when notches sample at Nyquist-Shannon rates.
Now if you were to reverse the equation, assume 3Pi-1 was gaussian noise and compute log2 precision you get:
log2(1 + 1/(3*Pi-1)) = Phi/10, to within .0001. Weird? With Gaussian noise, only one or the other is valid. The Shannon equation, as written, assumes perfect orthogonality between signal and noise.
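Both coincidences are easy to reproduce away from the spreadsheet; this just re-checks the stated error bounds numerically:

```python
import math

phi = (1 + 5 ** 0.5) / 2   # the golden ratio

# 4^Phi vs 3*Pi, claimed to match to about .003
assert abs(4 ** phi - 3 * math.pi) < 0.003

# equivalently, 2*Phi vs log2(3*Pi)
assert abs(2 * phi - math.log2(3 * math.pi)) < 0.001

# the reversed form: log2(1 + 1/(3*Pi - 1)) vs Phi/10, claimed to .0001
assert abs(math.log2(1 + 1 / (3 * math.pi - 1)) - phi / 10) < 1e-4
```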
Anyway, spreadsheets are fun.
Talk about stupid
In the battles for smart card dominance:
OK, explain to me why a bunch of merchants want to destroy the very fiat system they are creating? I can see why Apple's Cook is a zillionaire; stupid merchants simply want Apple to win the game.
These are them:
Walmart, Best Buy, Gap, and others. Otherwise known as incredibly idiotic executives. Here is a clue for these idiot store managers: money is different from spying; they are two different things. You cannot spy using money; it will not work.
CurrentC’s maker MCX, for those unfamiliar, is a group of over 50 retailers who have been working to develop their own mobile wallet technology. Essentially, they want to own the mobile wallet experience for themselves, instead of turning it over to a company like Apple, whose Apple Pay mobile payments solution prevents them from gaining access to customer data. Instead, retailers involved with MCX want to use mobile payments as a way to learn more about their customers’ shopping behavior, which could mean they could better target offers to them in the future.
Tuesday, October 28, 2014
It's about 'one' and 'zero' being equidistant everywhere
This condition, sinh^2 + 1 = cosh^2, and the Shannon condition are all the same theory. The 'one' is a computed quantity, maintained by the motion of matter. If matter hits close to the 'zero' it degenerates. I have a few drawings almost ready to show how the 'zeros' are moved about so the 'one' has equidistant paths through the linear portion of tanh. It is all about making the optimum 'yardstick', where F and 1/F are chosen to keep the zeros at the proper mid-points. This is the minimum redundancy condition. The system is ultimately limited because the relative ratio of the largest F to 1/F becomes close to the residual irrational error in Pi.
Weinberg knows this, I was listening to one of his lectures, and he got the problem, and knew this was measure theory and warping of Riemann space. So this is real stuff.
Stay tuned, I am going to put up some rough schematics and show how these tear-dropped orbitals have the effect of making the zeros equidistributed. So, the Lattice Chromodynamics folks are on the right track; the idea of a continuous coupling constant is off base a little. The coupling is still quantized, but wave-like in the imaginary part of the lattice between grid 'notches'. Things are getting much simpler in physics, it's a new world. The folks who did quantum conductance nailed it. The multiple paths through the linear tanh are the key link between combinatorics and quantum mechanics.
This graph is the plot of Lucas polynomial L(5,r)+L(6,r)-L(6,r), over the unit circle from 0 to Pi. The boundary conditions are met; the resultant is zero at angles 0 and Pi. But along the unit circle is a double wave at Pi/2. The vacuum wants to disperse that variation over the whole circumference, so it makes an oval; it literally changes the shape scaling of the unit circle x,y axes (still a little fuzzy), but this distributes entropy. We get the tear drop thing. What unit sphere is warped? The shell and the 'one' both have complementary distortions. Doesn't that distort the value of Pi? No, because of -i*log(i). In the sum, the power series remains the same, because the probability of a 'digit' counting is adjusted for the remapping of the grid.
Finite element computations use the concept, matching grid size to the variation in the function being computed. In Shannon theory this condition makes the baud size, the distance between signals, consistent. In measure theory this condition makes the estimation of 'one' consistent. In physics it makes the total system, in the sum, nearly spherical. In statistics it means you have square matrices and a diagonal identity. In number theory it means you are counting with maximum accuracy using the minimum number of digits. And in network theory it means you have a minimal spanning tree.
How does the vacuum stretch the grid?
It redistributes the Nulls; the little inert elements of the vacuum can be shifted around. Packing the Nulls makes matter by keeping contained light trapped in the limit of the curvature, the Fine Structure limit. That is the Higgs mechanism. So when stretching the grid, the relative change in curvature, or Tanh'(L(n) + L(n+1)), must total Pi in its variation. That is, the total variation in Pi matches Pi: the finite log condition, the Shannon condition, the equal queue lengths condition, etc. Kinetic energy becomes adiabatic.
Note:
I figured that the Fine Structure Constant was a variation in volume. I may be wrong, and physicists should clarify. But it really should be computed as the variation in the value 'one', for spherical systems. It is itself a 'constant' with the same proportional variation. Given the constant, the laws of physics should fall out from the equal distribution of entropy. What if one built a system with a different number? Hmmm... It may not be possible; the set of combinations may always be open. This was Shannon's world, where cables were infinitely long. Recursive sequences tend toward Phi in their ratios, though I cannot prove this is always the case. Making the variance in the value 'one' larger just causes the system to scale up and work with an nth root of 'one'.
Monday, October 27, 2014
So how many units of 'Disorder' in the atom?
The Lucas system is circular, but it would accumulate the 1-tanh^2 as it spirals through the orbit. That gets Pi/2. But it is scaled, and so may actually be getting Pi or 2*Pi, who knows? Anyway, that accumulation is a volume, as the hyperbolics are circular. So the fine structure residue is a volume, and I am not sure where physicists have the 3/2 in their formula.
To get the total number of those, take Pi/Fs or Pi/(2*Fs) and so on; you get numbers like 430, 215, etc. Call these the total volume of the sphere in units of Fs, and you have it, I think.
Do I have units upside down? Could be. Previously, in that spectral chart, I could count up or down; it wasn't hyperbolic. Now I have to be careful. But tanh is linear around small angles, and that is where most measurements of Pi will take place. So we may have to count from the center out. The system will always try to operate along the linear portion.
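The accumulation described above can be run directly. A sketch, using the true golden ratio as the quant spacing; the choice of 60 terms is mine (the tail is negligible by then):

```python
import math

PHI = (1 + 5 ** 0.5) / 2
A = math.log(PHI)               # hyperbolic angle quant
ALPHA = 0.0072973525693         # fine structure constant

# accumulate the 1 - tanh^2 slices over the Lucas quants
s = sum(1 - math.tanh(n * A) ** 2 for n in range(1, 60))
print(s, s - math.pi / 2)       # ~1.5781; overshoots Pi/2 by ~0.0073

# counting the sphere volume in units of Fs
print(int(math.pi / ALPHA), int(math.pi / (2 * ALPHA)))   # 430 and 215
```

The sum comes out near Pi/2 as claimed, overshooting by about 0.0073, which is suspiciously close to the fine structure constant; and Pi/Fs and Pi/(2*Fs) give the 430 and 215 counts quoted above.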
Atomic Orbitals:
This makes sense regarding the atomic orbitals because the first two Lucas polynomials have no zero, so the shell is spherical. Add another Lucas number and we get that inner and outer shell. How does the Lucas system handle the higher s orbitals with no angular or magnetic moment? Most likely the Lucas polynomials that have congruent zeros are used in series, as if it can skip intermediate polynomials.
Magnetic moment:
But you can see the set of Lucas polynomials can satisfy the n,l,m system in standard use. Magnetic moment probably happens when the center is no longer a point source. So the Lucas numbers no longer count by 2, 4, 8, etc. They will snap to the odd numbers. When the incongruent polynomial appears, it is mismatched with the s orbital and has to curve the shell to realign them along the tanh linear region, and that means splitting the outer shell into separate regions, each region having the same odd curvature.
The thing seems to be driven by surface distortion on the surface of the unit sphere, wherever that is, near the center. So the orbitals compensate for asymmetry in the nucleus, I would think. It likely makes Pi by placing the zeros as symmetrically about some center as possible. The vacuum does not have direct access to both sinh and cosh. It basically adds, and even then can only add maybe three numbers at a time. So the vacuum simply redistributes the inerts and imbalances to make a 3 bit adder work at any one time. In other words, it does not do sophisticated math to meet some human regular complex plane; it just warps the local vacuum to do simple straight-line adds.
I hate to spoil the party, but we are dealing with something having an IQ of 1.5.
Tie in Shannon to Lucas
Look at log(1 + entropy of Pi), a version of Shannon. A slice of the entropy is 1-tanh(n*a)^2, where a is log(Phi). The idea is that the electron moves such that it separates out the integer values of Lucas at the surface (or center?) of the orbital shell, where the Lucas polynomial is at x=1. We want to know how the Lucas numbers are paired to make orbitals. The electron's angular motion would effect a transfer to balance out the phase and match motion to phase imbalance.
So we get:
log(2-[sinh^2/cosh^2]) or log(2*cosh^2-sinh^2) - log(cosh^2).
The first term becomes: log(sinh^2+2)
Then converting to the Lucas system, which scales the unit circle, we find that 4*sinh(n*a)^2 = L(2n) - 2, so the integer 2 pairs off against the square of sinh. And L(2n) is 2*cosh(2n*a), so we end up with terms like:
log(cosh(2n*a)) - 2*log(cosh(n*a)) + log(2), and these come out to integrals of tanh, which is a sum in our discrete system. So it may be that the system is working with copies of Lucas in n and 2n to balance Pi.
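The identities used in the derivation above can be verified numerically; this checks the algebraic step (using cosh^2 = sinh^2 + 1) and the Lucas-number links at the angle a = log(Phi):

```python
import math

PHI = (1 + 5 ** 0.5) / 2
a = math.log(PHI)

def lucas(n):
    # Lucas numbers: L(n) = Phi^n + (-1/Phi)^n
    return round(PHI ** n + (-1 / PHI) ** n)

for n in range(1, 9):
    s, c = math.sinh(n * a), math.cosh(n * a)
    # log(2 - tanh^2) = log(sinh^2 + 2) - log(cosh^2), since cosh^2 = sinh^2 + 1
    lhs = math.log(2 - (s / c) ** 2)
    rhs = math.log(s * s + 2) - 2 * math.log(c)
    assert abs(lhs - rhs) < 1e-12
    # Lucas links at a = log(Phi):
    assert lucas(2 * n) == round(2 * math.cosh(2 * n * a))   # L(2n) = 2*cosh(2n*a)
    assert abs(4 * s * s - (lucas(2 * n) - 2)) < 1e-6        # 4*sinh(n*a)^2 = L(2n) - 2
```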
I think I have most of this, but errors abound, so be careful.
Another way to check this is to go to the spreadsheet, use irrational Phi, take the logs of 1-tanh(n)^2 for the power series, and directly construct a two binary digit system.
This is not something I should be working on; it is a critical piece of the puzzle and the pros need to be on this.
Sunday, October 26, 2014
Handling the Bohr model
I am trying to match counting directions for quants with the Bohr model. This is about aligning the axis between this and the Lucas model and converting the total energy of the Bohr model into delta energy per level in the Lucas model. On top of this is the angle matching between hyperbolic and trigonometric with the Lucas model. Bohr also counts negative energy, the amount of energy needed to remove the electron. I am trying to straighten this out, see below.
The energy of the n-th level for any atom is determined by the radius and quantum number. An electron in the lowest energy level of hydrogen (n = 1) therefore has about 13.6 eV less energy than a motionless electron infinitely far from the nucleus. The next energy level (n = 2) is −3.4 eV. The third (n = 3) is −1.51 eV, and so on. For larger values of n, these are also the binding energies of a highly excited atom with one electron in a large circular orbit around the rest of the atom.
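The quoted Bohr levels all come from one formula, E_n = -13.6 eV / n^2 (13.6 eV being the rounded Rydberg energy), which is worth keeping at hand while lining up the counting directions:

```python
# Bohr model binding energies for hydrogen: E_n = -13.6 eV / n^2
RYDBERG_EV = 13.6

def bohr_energy_ev(n):
    return -RYDBERG_EV / n ** 2

levels = [bohr_energy_ev(n) for n in (1, 2, 3)]
print(levels)   # matches the quoted -13.6, -3.4, -1.51 eV
```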
Let's try to nail down the direction in counting Lucas numbers.
1) The highest order Lucas number matches the Fine Structure.
2) The fine structure constant must be the high end band limit of light from the electron orbital. Isn't that limit a radial frequency?
3) The atom gets heavy toward the center; high Compton frequency goes with higher heaviness, λ=h/mc, and high Gamma frequency comes from the atom center.
4) The Lucas quants count layers; it is not an integral. I mean light is a power spectrum and all of its cylinders are firing. So the Lucas atom will be a power series, going from the exterior to the center.
That means Lucas numbers count up from the outer shell toward the more inert center. This is the way I have been counting exponents, but hyperbolics say we can count symmetrically either way.
And, the Lucas polynomials are equally defined over the interval zero to one, and are not a radial function, so that variable has to be scaled. And the hyperbolic angle will be scaled.
I likely got a lot of this backward, but that is going to be my first effort, so you experts out there, feel free to jump in.
This equation says it all
In the fields of electrical engineering and solid-state physics, the fine-structure constant is one fourth the product of the characteristic impedance of free space, Z0 = µ0 c, and the conductance quantum, G0 = 2e2/h:
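That relation is easy to verify from the SI constants; the values below are the standard defining figures (with the older defined value for µ0):

```python
import math

# SI constants
c   = 299_792_458            # speed of light, m/s
e   = 1.602_176_634e-19      # elementary charge, C
h   = 6.626_070_15e-34       # Planck constant, J*s
mu0 = 4 * math.pi * 1e-7     # vacuum permeability (pre-2019 defined value)

Z0 = mu0 * c                 # characteristic impedance of free space, ~376.73 ohm
G0 = 2 * e ** 2 / h          # conductance quantum, ~7.748e-5 S

alpha = Z0 * G0 / 4
print(alpha, 1 / alpha)      # ~0.0072974, ~137.036
```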
It's on the Wiki site. This tells me that finite combinatorics and group theory are superseding Isaac's rules of grammar and relativity. I think this generation of physicists and mathematicians has everything together. The next step would be to define the first order of light entropy as the unit of momentum for engineers, then let them divide up straight line light, distance, mass and time as they see fit. Physicists can move on.
We are at a great turning point in science and amateurs and 'numerologists' like myself cannot keep up. The new constructions of the orbitals, based on Lucas theory, are already in the computers of these brilliant minds.
My numbers
These numbers are me using the rational ratio for Phi. If I use the limit of Lucas and actual Phi, I get -137.1590861574, which is about right.
Using the rational ratio, my entropy in pi is lower than the actual fine structure,
.006 instead of .007. However, that is likely a selection error on my part. I have the titles and Fibonacci numbers both in the chart. I think there is a trade-off where adding more degrees of freedom and suffering a bit of error in Pi yields better stability for the atom.
The odd orbital shapes happen because the entire power series is not used everywhere.
The Lucas numbers do not actually divide out a rational ratio; they count up to the maximum degree (the L number), then count down, in a quantum oscillation. Everything is done by n, the Lucas quant, no time or space. It looks like everything in the nucleus and orbitals is composed of just 16-20 integer counts. Separability by primes ensures that they combine in the proper two or three integers near the surface of a sphere.
Rational ratio: 1.619047619

| n | p^n | (p^n − p^−n)/(p^n + p^−n) | 1 − ratio^2 |
|---|---|---|---|
| 1 | 1.619047619 | 0.4477144646 | 0.7995517582 |
| 2 | 2.6213151927 | 0.7459121502 | 0.4436150642 |
| 3 | 4.2440341216 | 0.8948023173 | 0.199328813 |
| 4 | 6.8712933397 | 0.958518851 | 0.0812416123 |
| 5 | 11.1249511214 | 0.9839698039 | 0.031803425 |
| 6 | 18.0118256252 | 0.993854207 | 0.0122538153 |
| 7 | 29.1620033931 | 0.9976509898 | 0.0046925025 |

Sum of first six terms: 1.5677944879
Rational phi, sum of all seven: 1.5724869905

| F_n | F_{n+1}/F_n | error from pi/2 |
|---|---|---|
| 1 | 2 | |
| 2 | 1.5 | 0.0030018388 |
| 3 | 1.6666666667 | -0.0016906637 |
| 5 | 1.6 | |
| 8 | 1.625 | 0.0060036777 |
| 13 | 1.6153846154 | |
| 21 | 1.619047619 | |
| 34 | 1.6176470588 | |
| 55 | 1.6181818182 | |
| 89 | 1.6179775281 | |
| 144 | 1.6180555556 | |
| 233 | 1.6180257511 | |
| 377 | 0 | |
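The chart values can be reproduced directly. A sketch, assuming the rational ratio is 34/21 and the last column is 1 − tanh(n·ln p)²:

```python
import math

p = 34 / 21                 # rational ratio for Phi, 1.619047619...
a = math.log(p)             # hyperbolic angle step

# 1 - tanh(n*a)^2 for n = 1..7, the last column of the chart
terms = [1 - math.tanh(n * a)**2 for n in range(1, 8)]

sum6 = sum(terms[:6])       # "sum of": 1.5677944879
sum7 = sum(terms)           # "Rational phi, sum of": 1.5724869905
print(sum6, sum7)
print(math.pi/2 - sum6, math.pi/2 - sum7)  # the "error from pi/2" entries
```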
Here is where I think we are headed
Some of this will be wrong, but it's a start.
This is the atom. I have marked the Lucas four and three quant levels. I have also marked what I think are the charge vector, Cosh, and the magnetic vector, Sinh. Note: the magnetic vectors will be remapped by the odd zero set, so this diagram is more a schematic. The sinh is always a half angle off from the cosh, and the sinh always has odd zeros on its bounding circle. The Lucas numbers appear on the bounding circle for that quant level and go to zero toward the proton center. Lucas numbers are quant values of light.
The blue and yellow circles are the zero points of the Lucas polynomial, but with respect to what real and complex axis? I think the axis will have to be relative to the dominant vector of radial motion. I also am still confused about mapping this to a sphere, but I think Riemann sphere packing is on its way.
But let's continue anyway. The zeros are spots where phase imbalance is zero! They would be likely spots for the electron to appear, I think. The half angle difference and the Lucas numbers result from the fact that the unit spheres, the proton and electron, are finite, not point charges, and that is why we have magnetism vs charge. The magnetic vector is mostly tangent to the unit sphere, the charge vector perpendicular. But they are both made of light bubbles, the magnetic bubble having an odd set of zeros; that is why we think it is dipole.
The motion of the electron is out from the page, a cross product of the sinh and cosh vectors. The electron itself is composed of Higgs elements and trapped light, as is the rest of the atom; it is all just these two things. The electron will spiral in and out from the center, its momentum determined by the angle of the two vectors. In this chart, the electron bounces between the L4 and L3, driven by the half angle difference. The odd and even zeros will cause spirals in two dimensions.
The movement of the unit sphere follows the vector cross product until force is drained then reverses motion, so the electron climbs from L3 to L4, and back again. The light bubbles follow the reverse circuit.
And of course we would have the complementary motion in the quarks. This stuff is very vague to me, and a lot of work needs to be done here. The Lucas values are reusable at higher energy levels and will deform as we see in existing orbitals. This is mainly because the degrees of freedom for light are nearly orthogonal. As the bounding circles warp into teardrops and rings, the zeros are redistributed and we get a remapping of the grid. And the magnetic vectors will assume the normal pattern we are used to, being curved by the odd number of zeros.
But, no fields in this thing, just bunches of local bubbles packed in a sphere.
Saturday, October 25, 2014
How close can Fibonacci get to the fine structure constant?
Well the maximum deviation Fibonacci has from Pi/2 is -0.0072908055, and its inverse is -137.1590565066, the physicists have:
I call that close. We need to dig up Prof Lagrange and Lucas for the Swedish Banana. This had to be true: the degrees of freedom are equally utilized, so the Lagrange estimation will be spherical, and that makes Phi a winner. The Fine Structure is simply the variance of the light exchange rate, and given the size of the universe, the quasars simply got the vacuum optimum to the degrees of freedom that match.
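That deviation can be reproduced directly. A sketch, summing the tanh-squared complement over the hyperbolic angles n·ln(Phi):

```python
import math

phi = (1 + math.sqrt(5)) / 2
a = math.log(phi)                # 0.481211825...

# sum_{n>=1} (1 - tanh(n*a)^2); terms shrink like phi^(-2n),
# so 200 terms is far more than enough for double precision.
s = sum(1 - math.tanh(n * a)**2 for n in range(1, 200))

deviation = math.pi / 2 - s
print(deviation, 1 / deviation)  # ~ -0.0072908055, ~ -137.1590565
```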
It came out to a power series in tanh', the derivative, because tanh is the allowable motion, and the variance in that has to be split up to minimize redundancy. The variance in that value is of course the spectrum of fundamental light, its sample rate noise.
This number is theoretical, based on using Phi to the limit of my spreadsheet. When a finite rate is substituted the number is between 0.0002269164 and -0.0045091191. My finite ratio for Phi is 89/55. So the universe can compute Pi and, implicitly, Ln(Phi), so it has enough know-how to do Isaac's grammar to a finite limit of .00729 = dx. Is this the limit? I dunno if the quark system is more accurate; this only applies outside the proton shell as near as I can tell, but do not quote me, check with Weinberg and Higgs.
How does this fit with my spectral chart? The chart was just a computation of how much would fit if there were no structure, no electron, no quarks. Then came me trying to find applicable recursive power series which the vacuum could do with only local knowledge and maximum entropy. Most of my blundering about was ignorant application of things like primes and power series, most of which I barely understood. Lucas did the real work here when I realized his integer series was cyclotomic.
Packing spheres, Richard, packing spheres.
There should be up to six configurations in which the elements of light can be packed against Higgs elements. But, like I say, I am not sure how the fine structure is split between the proton and the electron. But all the measurements of the constant seem to be along a single axis, but do not quote me, I am still reading up.
What about magnetism? Not sure, but I am looking at cosh(n*w) plus i*sinh(n*w − n), the two being one delta angle separated. The motion of the unit sphere needs to avoid zeros when an electron is involved. Look at the Lucas sequence and note the alternation between cosh and sinh. The quant, ln(Phi), is built into the cross product, I think.
What about size difference?
One of my incomplete ideas. Light (or the Higgs) can change shape and that gives us the seven degrees of freedom. That seven determines the degree of the power series. So that problem melted away.
Who is doing numerology here?
The physicists who assume the number line goes to infinity. Not the finite element, finite order wave folks like me.
There is a most profound and beautiful question associated with the observed coupling constant, e – the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with about an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.) Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the "hand of God" wrote that number, and "we don't know how He pushed his pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out, without putting it in secretly!
—Richard Feynman, Richard P. Feynman (1985). QED: The Strange Theory of Light and Matter. Princeton University Press. p. 129. ISBN 0-691-08388-6.
Charge is offset; it is the electrodynamic entropy in a sphere, composed of what seems to be a fifth to sixth order power series. By offset I mean the exponent in the power series when phi is the base. It is the contribution to the entropy of pi from each of the 'digits'. Free light in 'space' is the spectrum of light without enough Higgs density to meet the Compton limit, and that is conductance. But in other words, conductance is the ability to move charge without moving Higgs elements.
Friday, October 24, 2014
Adding zeros
I inverted my Lucas polynomials to see the zeros. This is mainly me still getting handy with Maxima plots, so nothing much to reveal.
Where am I going?
My Hamiltonian is simple: the electron should make one step along the axis that makes the atom sphere surface a bit more spherical. Deformities in the electron surface should match deformities in the atom surface. I have a hard time imagining 'digits' in my system doing anything more intelligent.
Eventually, the accumulated error in Pi would guide the electron along its designated orbit, and as near as I can tell, that error is the accumulated value of d/dn( tanh(n*ln(Phi)) ), using the Lucas numbers. Seriously, we are talking about bubbles. What else would the bubbles of the universe do?
Here, for example: a smooth vacuum caused these orbital coefficients to follow a Pascal-like pyramid? And those rings around 4f: likely the fourth order Lucas polynomial is roaming those rings, adding a bit of accuracy. This is obviously a distributed mechanism that is refining the fractional ratio digits of Pi as more energy is added.
Lucas spaces
Lucas polynomials without the zeros.
Mainly I am still learning 3d plotting in Maxima, a year late. But getting closer. Eventually this is going to be an orbital shell.
How did the physicists do this before the PC and the algebra machines?
The idea is that the bubbles of the universe just compute Pi. All the atomic orbitals, even the higher energy orbitals, just divide up into subsets of Lucas recursions and compute, in total, the value of Pi so the surface of the atom is spherical. Then they aggregate the powers up and do it again, and again, until they get stars and quasars.
Thursday, October 23, 2014
Me and my Maxima plotter
Step one, make a plot. I have to relearn plotting every time unless I do it continually.
Step Two, I have to make the plot of the Lucas polynomial, as a power series, along a radial from the atom center, in cylindrical coordinates. That gets us nice orbital looking things. I am slow at this.
Wednesday, October 22, 2014
Brownian Motion
The first part of Einstein's argument was to determine how far a Brownian particle travels in a given time interval. Classical mechanics is unable to determine this distance because of the enormous number of bombardments a Brownian particle will undergo, roughly of the order of 10^21 collisions per second. Thus Einstein was led to consider the collective motion of Brownian particles. He showed that if ρ(x, t) is the density of Brownian particles at point x at time t, then ρ satisfies the diffusion equation ∂ρ/∂t = D ∂²ρ/∂x²,
where D is the mass diffusivity.
Assuming that N particles start from the origin at the initial time t = 0, the diffusion equation has the solution ρ(x, t) = (N/√(4πDt)) exp(−x²/(4Dt)).
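As a toy check of that diffusion law (my own sketch, not Einstein's argument): a Gaussian random walk should show a mean squared displacement of 2Dt.

```python
import random
import statistics

random.seed(1)
D, dt, steps, particles = 0.5, 1.0, 500, 1000
sigma = (2 * D * dt) ** 0.5      # step std so that var(one step) = 2*D*dt

# One-dimensional random walks, all starting from the origin.
finals = []
for _ in range(particles):
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0, sigma)
    finals.append(x)

msd = statistics.fmean(v * v for v in finals)
print(msd, 2 * D * steps * dt)   # sample MSD vs the theoretical 2*D*t
```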
This one is going to be fun. What did Einstein tell us? He identified some approximations in the relative movement of two mixed chemical solutions. These approximations let us use Isaac's Rules of grammar.
Can we define how the actual molecules do it without Isaac's rules? Can we eliminate the Greek symbols and dump the t thing? We sure can. I am working on it, but likely I will only get us part way through as I am a hack. This is a work in progress, I am going to dump the e thing and make finite log, replace the t thing with the fine structure spectrum, and let the pi thing fall out as a result of local action by the bubbles of the universe.
I think we will find molecules at the markets, and they keep marching further into the market where trades expand mostly as the square of the market 'quant' and they rarely have to wait in line. I am going to do it with the Frank Lucas rules of grammar.
This guy, smart dude but a little overweight. He doesn't look like a mad scientist.
It was not until 1918 that a proof (using hyperelliptic functions) was found for this remarkable fact, which has relevance to the bosonic string theory in 26 dimensions. Elementary proofs have been more recently published.
Uh oh, he only does 2D bosons. How are we going to fix that? Hopefully Andrey Markov had some ideas. Do geodesic integers exist? Bubbles make spheres easily enough, but they need to connect them. Or we can make unit ellipsoids using Mr. Markov's triples. Where are the brilliant mathematicians who work this out? Let me go look for them and save myself time and effort.
How accurately does Phi measure pi?
In this previous post I noticed that the Phi sequence, incorporated into the Lucas numbers, can compute Pi as a hyperbolic power series of Tanh squared. It looks like the rational fraction is 223/71, an error of 7e-4, as taken from this set of rational approximations from John Heidemann. That made me think, how does the Sun do better? NASA measured it, and the error in 2*Pi*r for the Sun was 8e-6. The Sun did almost 100 times better than Mr. Lucas and his Phi. Well, NASA removed magnetic variations over time under the assumption that magnetism and gravity are unrelated. Do they have a Theory of Everything that allows them to do that?
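A quick check of that fraction (my arithmetic, not Heidemann's table):

```python
import math

approx = 223 / 71            # the rational fraction for pi mentioned above
err = approx - math.pi
print(approx, err)           # 3.1408..., error about -7.5e-4
```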
Tuesday, October 21, 2014
These mathematician folks
Copied from Wolfram cyclotomic polynomials. Who are they? These guys are rewriting science as we know it; they are brilliant. I am a hack, an amateur. I know what is happening, but I can only occasionally work around the edges.
What is happening is these polynomials have an associated recursive integer set. The polynomials will be mapped to standard physics and integerized on the unit circle. Much like Schrodinger, except the result will be a proton that is stable with only local knowledge anywhere, the finite element version of quantum physics. It's happening; I wish I were smarter.
The physicists get it, Weinberg, Higgs, all of them are in on the game. These are exciting times.
Computing a bit of Pi with the Lucas sequence
Here I have the sums of 1 − tanh(n*a)², where a is log(phi) and n is the X axis. So this is really the sums of Tanh', the first derivative. Since the Lucas polynomials are cyclotomic, they have roots on the unit circle. These sums might approach the value Pi/2. They do, and get closest at the Lucas prime 29. After that, the sums stay close to Pi/2.
Should we care? I am not sure. Whatever the starting angle, the series sums converge to some number since Tanh goes to 1.0. So I have to show that somehow the series sums from ln(phi) converge to a specific value of pi. But Professor Lucas may have already figured that out. I am not surprised that it might, I just want to know if this is a relatively unique series from the hyperbolic angles made of phi.
The sinh and cosh still obey cosh(x)² − sinh(x)² = 1.
But leave that to later. I am more interested in the differential:
sum(tanh'(n*a)) = pi/2. The residual error on that series is (tanh')^n when the series has n-1 terms. N is about 7. That is a large power. It means that light ultimately has 7 degrees of freedom, or thereabouts. That also implies a big charge of six, I think, in the gluons. But I had estimated about 4, so dunno?
But this point conforms with what the physicists are doing with the natural units: they are making pi an output, not a constant. It seems all those tiny constituents of the vacuum are concerned about getting an accurate value for pi. It is back to sphere packing, I presume.
Here are the sums from my spread sheet:
| n | partial sum |
|---|---|
| 1 | 0.8 |
| 2 | 1.2444444444 |
| 3 | 1.4444444444 |
| 4 | 1.5260770975 |
| 5 | 1.5580770975 |
| 6 | 1.5704227765 |
| 7 | 1.5751565043 |
| 8 | 1.5769672784 |
| 9 | 1.57765932 |
| 10 | 1.5779237128 |
| 11 | 1.5780247102 |
| 12 | 1.578063289 |
| 13 | 1.5780780249 |
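Those partial sums are reproducible to the printed precision; a sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2
a = math.log(phi)

# Partial sums of 1 - tanh(n*a)^2, one per spreadsheet row above.
total, sums = 0.0, []
for n in range(1, 14):
    total += 1 - math.tanh(n * a)**2
    sums.append(round(total, 10))
print(sums)   # 0.8, 1.2444444444, 1.4444444444, ...
```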
One and a half seems interesting
Let x = 2/3
Then x+x^2+x^3+... = 2
How does the spacing between arrivals look? For each slower sample it goes as the inverse, 3/2. Now it gets interesting. Consider packing a sphere but we allow volume spaces for the density to adjust continually. So we make room for (3/2)^n, n=1...Max spaces inside the sphere, approaching the Shannon sampling rate. How many samplers can we put in the sphere at a maximum before we reach the sphere volume:
Here I have it. The blue line is accumulated empty space taken in powers of 3/2. The red line is sphere volume; the X axis is radius in integer increments. We run out of room for empty spaces at about 17.1, which is one and a half short of the exponent in the ratio of the proton to electron mass: (3/2)^18.53 = 1836. I guess the volume of the proton is 3/2 times the number of empty spaces, allowing room for the things in motion.
This shows up on my spectral chart. This is also what makes the atom seem to be a 17-bit computer.
The Maclaurin series for (1 − x)^−1 is the geometric series, and in the limit we get 2 once the first term, 1, is removed. If we add more samplers, each new sampler being 2/3 slower than the previous, we still approach the Shannon-Nyquist limit of sampling at twice the arrival rate (I use queuing terminology instead of bandwidth). Anyway, adding more and slower samplers, we can always add enough to prevent any sampler from queuing up. It would be inefficient, but I would think this effect would show up in proofs on sampling theory.
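Two of the numerical claims in this post are easy to check (the sphere-filling crossover depends on spreadsheet details I don't have, so I only check these):

```python
import math

# The geometric series: for x = 2/3, x + x^2 + x^3 + ... = x/(1-x) = 2.
x = 2 / 3
s = sum(x**k for k in range(1, 200))
print(s)   # ~2.0

# The proton/electron mass ratio as a power of 3/2: (3/2)^18.53 ~ 1836.
print(math.log(1836) / math.log(1.5))   # ~18.53
```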
Monday, October 20, 2014
Lucas numbers make a hyperbolic sequence
I should have noticed.
Every odd Lucas number is a sinh, and every even one is a cosh, counting up by a delta hyperbolic angle of ln(Phi) = 0.481211825. The angles are integer multiples of the log base Phi. That brings up interesting properties: sinh(k) + cosh(k+d) = sinh(k+2d). And the sum of angle identities yields linear combinations of Lucas numbers.
Boy I should have looked earlier six months ago, but I am a bit of a dummy.
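A sketch of that identity: odd Lucas numbers are 2·sinh(n·ln Phi), even ones are 2·cosh(n·ln Phi):

```python
import math

phi = (1 + math.sqrt(5)) / 2
a = math.log(phi)            # the delta hyperbolic angle, 0.481211825...

def lucas(n):
    """Return the n-th Lucas number: 2, 1, 3, 4, 7, 11, 18, 29, ..."""
    x, y = 2, 1              # L_0, L_1
    for _ in range(n):
        x, y = y, x + y
    return x

for n in range(1, 8):
    h = 2 * math.sinh(n * a) if n % 2 else 2 * math.cosh(n * a)
    print(n, lucas(n), round(h, 9))   # the two columns agree
```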
Sunday, October 19, 2014
Shiller needs a clue
He complains about secular stagnation being a rumor harming the market.
It is not that difficult.
Here is the chart showing QE and the SP500. The market needs to know if the Fed is making another QE run, because the market has to hold the liquidity over the cycle. Secular stagnation means: yes, the Fed will make another QE run. No secular stagnation means the market will start the next business cycle with a correction.
When a Stock Market Theory Is Contagious: Since Sept. 18, the stock market has fallen more than 6 percent. An abrupt decline last week — after five years of gains — prompted fears that the market may have reached a major turning point. Has a bear market begun? It’s a great question. The problem is that short-term market movements are extremely hard to forecast. But we live in the present and must try to understand what’s driving markets now, even if it’s much easier to predict their behavior over the long run. Fundamentally, stock markets are driven by popular narratives, which don’t need basis in solid fact. True or not, such stories may be described as “thought viruses.” When they are pernicious, they are analogous to the Ebola virus: They spread by contagion.
Conclusion on RGDP and NGDP
The two numbers are nearly perfectly hedged against any borrowing DC does. The first differential of these two values acts as if the economy knows perfectly well that DC is a constraint. There look to be two or three modes in that ratio, all of them designed to hedge short, medium or long term against any sudden moves by Congress.
The claim of money illusion is bogus. The claim of sudden inflation is bogus. We are not likely to have a major crash, but we will have a period of the slogs. The economy will get about 1.5 (YoY) points of RGDP over the next two years.
The Keynesians are stuck, their claim of near zero rates, or near zero real rates is completely bogus.
NGDP and RGDP
Here I have the average of Real GDP growth divided by Nominal GDP growth, taken over various sized windows. One can see the window size by noting the point where each of the three colored lines starts. The red, for example, has the longest window size over which the average is taken. The X axis is the number of quarters. The data start in 1950 and go through today.
Notice they all cycle about 0.5. Why is that? Because the economy always computes enough Nominal GDP so that it can contain the variation in Real GDP. Mean equals variance. This is the Compton wavelength in physics: matter must be large enough to trap the variance of the light it contains. Economics is the same. This is optimum queuing. It appears in the growth rate simply because the economy is built around net gain, or flow.
So the fiat banker has to take 'losses', actuarial losses to cover variation in real GDP. This is not real money lost, it is the fiat banker doing fiat banking. The real question is cause and effect. Does the economy need cycles to grow or is the fiat banker creating cycles? Dunno, still thinking.
But Real GDP is the real change in inventories, counted using matched dollars but counting real goods, if the bean counters have this accurate. So why does inventory have to fall and rise to make growth, if Real GDP is the cause here? The answer would be sphere packing, yet again. The economy can pack more stuff into the sphere if some of the stuff is in motion, which is my unproven conjecture on sphere packing. In this case, the economy wants the arrival variation to match the lines at the queue, and wants that to be constant. So it manages the service time at the queue until the folks in line are exactly right, likely two or three. Here is a description of the Poisson distribution:
In probability theory and statistics, the Poisson distribution (French pronunciation [pwasɔ̃]; in English usually /ˈpwɑːsɒn/), named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.
I have highlighted the appropriate term: independent arrivals. That is, the arrivals must not be conditional upon each other. That is the minimal redundancy condition. But that business cycle seems rather redundant; it looks like an inefficiency. Real GDP should be more of a random walk, so some inefficiency is introduced. Further, it looks like one inefficiency, since we clearly have one unit root. Some queue in the economy is much slower and less able to adapt.
What happens when all the queues are equally congested? We still have mean equals variance, but the variance would not be dominated a single queue, but spread out. We would get less inflation. So, the question becomes, who is the guilty party that is slow to adapt?
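The mean-equals-variance property leaned on throughout this post is easy to demonstrate numerically (a sketch using Knuth's textbook Poisson sampler; the rate 3.0 is an arbitrary choice of mine):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: count uniform draws until their running
    # product falls below exp(-lam)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam = 3.0
draws = [poisson_sample(lam, rng) for _ in range(200_000)]

mean = sum(draws) / len(draws)
var = sum((x - mean) ** 2 for x in draws) / len(draws)

# For a Poisson distribution, mean and variance are both equal to lam
assert abs(mean - lam) < 0.05
assert abs(var - lam) < 0.1
```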
Saturday, October 18, 2014
Nature's amazing counting ability
Look here, real GDP, and it looks like a 2.2% growth rate with about a half point variance. If you take growth at quarterly rates, I bet you get mean and variance of growth being equal. Also, this means the bean counters are using log base 2, they have approximated the M2 Velocity as 2, instead of 1.8.
Is this a plot? No, the BEA has to do revisions because data is late. Data is late because inventories are jammed and uncountable, or foreign currency has to be swapped or the derivative industry has to price insurance. Stores across the economy take a good shot at getting this right, and it is basically improved in accuracy as it goes up the stream.
If you look at RGDP/NGDP, YoY growth rates, and average those over two or three recession cycles, the number is .5, to an accuracy of 1%. This is true from 1950 until today. That number is also a variance, and evenly splits the variance in money and goods inventory; it is the optimally accurate double entry accounting system. It is Shannon sampling theory, but accountants never really used that theory. And entropy theory was mainly buried in physics and engineering until the digital age. This is minimum redundancy; we need this as the basic law of nature.
What does it mean?
The economy has somehow done a six bit, base 2, counter with sampling at Nyquist-Shannon on half quarter periods. Six bits because the economy knows we do an eight year cycle, counting down from eight: 8, 4, 2, 1, 1/2, 1/4. How did 100 million monkeys across three time zones, four geographies, and four different weather systems figure this out? They did a Huffman encode on trade, creating a minimum redundancy network.
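A Huffman encode is the canonical minimum-redundancy construction; a minimal sketch (the 8, 4, 2, 1 weights echo the counting ladder above, but the symbols are just placeholders of mine):

```python
import heapq

def huffman_code(freqs):
    # Build a minimum-redundancy prefix code from symbol weights.
    # Heap entries carry a unique tiebreak id so equal weights
    # never force a comparison of the code dictionaries.
    heap = [[w, i, {sym: ""}] for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Merge the two lightest subtrees, prefixing 0 and 1
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

# Dyadic weights like the 8, 4, 2, 1 ladder give code lengths 1, 2, 3, 3
code = huffman_code({"a": 8, "b": 4, "c": 2, "d": 1})
assert sorted(len(c) for c in code.values()) == [1, 2, 3, 3]
```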
Is it a trick?
No, quasars make a mixed two and three base log system and make baryons, matching the coefficients of motion to fit the finite log pattern. DNA adds a log base five, and Vegas gets the number seven. Something is going on. Nature makes groups to match finite logs, creates minimum spanning finite log networks, and packs them with Higgs bubbles. Mathematicians are just now cracking nature's code.
The connection:
There is a connection between mean equals variance from Poisson and optimal transfer networks. Something about making all the queues equal length distributes uncertainty (or motion in physics). Mean equals variance seems to be the fundamental Compton wave equivalence.
How did nature figure out how to make F and 1/F from recursive equations? We can do it by simple division and aggregation from a starting ratio. How did the universe figure out how to do it in log? How did it figure out it needed two different bubbles of wave to effect subtraction? Is this simply minimum redundancy, such that non-solutions fly away? Do random events result in a stable solution?
This is not a drill. These are exciting times for mathematicians, the world is at their door. This stuff is better than Isaac's rules of grammar. I wish I were young and brilliant.
Friday, October 17, 2014
Is it possible to hedge the risk of a large fart
You know, what is the cost to cows in Montana if Obama let loose at a black tie event?
Wednesday, October 15, 2014
Hilsenrath has brilliant timing
HILSENRATH’S TAKE: COMMODITY PRICES DROP GIVES FED ADDITIONAL BREATHING ROOM
Federal Reserve officials have an additional reason to be patient about raising short-term interest rates: Downward pressure on commodities prices.
Nymex crude oil futures prices dropped $3.90 per barrel Tuesday to $81.84, the lowest level since June 2012. Crude is down 19% from a year earlier. Nymex RBOB gasoline futures, at $2.1802, are at their lowest level since November 2010 and off 18% from a year ago. The Reuters/Jefferies CRB Index, a broad measure of commodities prices, is down 4.7% from a year earlier.
Of course the driller stocks are crushed by 20%, ready to start layoffs. The ten year almost breached 2%. So, yes, the pressure is off to raise rates, though it was never clear what that meant, since the interest on reserves is the key rate and that is about three times the one year Treasury market rate. In fact, the Fed right now is scrambling to find a way back to QE.
World leaders take action to protect the economy
That would be Victor Leonardi reporting from the IMF world headquarters..
Tuesday, October 14, 2014
The Keynesian Clowns are back
The Swamprats from DC, Blanchard, Krugman, Summers and DeLong.
The same idiots that caused the HSR nightmare in California, a 68 billion dollar Sword of Damocles hanging over the California budget. We cannot get rid of HSR; it is a recurring nightmare.
The same group that fouled the fiat banking system.
These same idiots brought us 15 years of No Child left behind, nothing but volatility to the California education system. We pay a 25% tax bonus to be a member of Swampland, that is how much extra tax they take from us to pay for this crap, and we are not that rich.
The Keynesians are the same group that refuse to allow the Keystone pipeline, even though it was privately funded.
The same group that brought us the dead weight San Jose Light Rail system. This group had nothing but a string of failures in California, and have caused nearly 20 years of a floundering economy out here.
The California Legislature is not that smart. They are easily swindled by the DC carpet baggers, and California is better off leaving DC behind and giving our legislators some time to learn a few things about governance. We will not escape the Carpet Baggers from DC; they will haunt California into bankruptcy, and we almost went there. It is time to finish the secession. It is time for the tenth largest economy to stand on its own two feet before the Swamp Creatures devour us.
Monday, October 13, 2014
Market down again
SP500 off 1.65%, which I think makes about 4% in the past week or so. I guess the economy now hangs on whether George Soros and Carl Icahn know what the proper shorts should be. Tell them the number is about 15%.
Sunday, October 12, 2014
They are simply units of an accounting system
The stuff in the basement of the Fed. They are units of an accounting system, and their job is to move those units in and out of the economy. They need that pile of ink and paper in the basement to count money not in the economy. The fiat banker operates the first differential; the pile holds units of flow, and they come in and out of the basement. Otherwise they are like any other normal manufacturing business.
If by law Congress is required to go into the basement and take all of the Fed inventory, then it is a nearly hopeless task for the Fed to keep track of their business. Imagine if the shareholders of General Motors monthly went to the factory and removed all of the aluminum stock. General Motors would be forever tracking down aluminum in the open market for instant delivery.
I would expect Congress to overdraw their seigniorage, and I thought that is what we meant by the term. But evidently, all of the Fed inventory in the basement is classified as earnings to Congress. I never would have imagined something this stupid would have become law.
Saturday, October 11, 2014
I am visibly shaken by fiat money
I still hope, wish that I am mistaken.
I always thought that fiat banking was a balanced flow proposition. Interest earned by the fiat banker becomes paper and ink in the basement. Ink and paper paid out in interest becomes money. And the fiat banker simply balanced the flow. I would hope he would do it with an efficient transaction rate, but balancing the flow every eight years is still OK. The proposition is so simple that even an eight year balance cycle would work.
If anyone ever flushed a toilet, then they know how a fiat bank operates.
I want to see that; I want to see the point where money into the bank becomes 'ink and paper in'. I hope it was me being stupid when I looked for the 'ink and paper in' account in the Federal Reserve and could not find it, that it was just me failing to understand the accounting system, and that the 'ink and paper in' account really does exist. I am not talking to anyone until a banker or economist points toward the 'ink and paper in' account. If that account does not exist, then mathematicians should be ashamed for failing to stomp their feet, scream out loud, and file lawsuits. I need to see some sort of chart like the one below:
If this chart does not exist then we end up with one great big unit sphere, the wealthiest thing in the Milky Way. The economy will minimize a redundant exponential growth of fiat.
Friday, October 10, 2014
Tiniest dictator in the world is missing
The tiniest dictator in the world, at 3'4", Dim Kim Son of North Korea is missing:
Business Insider: The formerly tiny band of journalists and academics who specialize in North Korea has expanded dramatically in numbers over the past several years. Watching this has been a cause for rejoicing.
The pendulum may have swung a bit too far, though, judging from the current mad rush by what has become a veritable horde of Pyongyang watchers to put forth theories about the meaning of Kim Jong Un’s mysterious month-long absence from the public eye.
On the basis of scanty evidence we’re hearing theories that:
- Kim is gravely ill (if not dead).
- He has been replaced by his sister.
- He has been removed in a coup.
- He is under house arrest and — this one you gotta love — in danger of being hung out to dry as the fall guy taking the blame for the three-generation Kim dynasty’s horrible human rights record.
Wednesday, October 8, 2014
Josh Zumbrun at the WSJ is hilarious
His headline report:
The Federal Deficit is Now Smaller than the Average Since the 1980s
And he shows this chart. But he has one problem. Unless Obama is Bill Clinton, the deficit has to get back to its normal giant size. There is only one way to get the deficit higher: we have to crash the economy.
I don't think we will have a real crash, you know a watched pot never boils. But in order to drive that deficit higher we will need a long period of the slogs, slow growth, less than 2%, for about a year.
Here is the IMF:
The Washington-based IMF said that more than half a decade in which official borrowing costs have been close to zero had encouraged speculation rather than the hoped-for pick up in investment.
In its half-yearly global financial stability report, it said the risks to stability no longer came from the traditional banks but from the so-called shadow banking system – institutions such as hedge funds, money market funds and investment banks that do not take deposits from the public.
José Viñals, the IMF’s financial counsellor, said: “Policymakers are facing a new global imbalance: not enough economic risk-taking in support of growth, but increasing excesses in financial risk-taking posing stability challenges.”
Viñals said the IMF had analysed 300 large banks in advanced economies, making up the bulk of their banking system. It found that institutions representing almost 40% of total assets lacked the financial muscle to supply adequate credit in support of the recovery. In the eurozone, this proportion rose to about 70%.
“And risks are shifting to the shadow banking system in the form of rising market and liquidity risks,” Viñals said. “If left unaddressed, these risks could compromise global financial stability.”
Tuesday, October 7, 2014
Does the grey market explain money velocity decline?
Here is Jonathan Ashworth and Charles A.E. Goodhart doing real economics, actual good stuff, not the hand wave.
A 1.5% YoY growth in the grey economy puts the money velocities right on track for the USA.
http://www.voxeu.org/article/trying-glimpse-grey-economy
Despite the growth of online and card payments, the ratio of currency to GDP in the UK has been rising. This column argues that rapid growth in the grey economy has been a key cause. The authors estimate that the grey economy in the UK could have expanded by around 3% of UK GDP since the beginning of the Global Crisis.
We use the currency demand approach to estimate the likely increase in the size of the grey economy. The currency demand approach has historically been one of the main methods for estimating the size of the ‘shadow economy’ (grey plus black economy).1 The basic idea is that, since tax evasion is illegal, almost all grey (and black) economy transactions will be made in cash.2 For obvious reasons, cash is almost always anonymous, whereas most other payment mechanisms leave a record. What one does, then, is to estimate how much of the currency-to-GDP ratio is due to incomes, interest rates, technological trends, and such other variables as theory or direct observation suggest (a standard currency demand regression). One can then either take the residuals from such an equation as an estimate of the shifting shape of the hidden economy, or, better, add additional variables that should be correlated with the grey economy, such as tax rates – especially VAT – and the ratio of the self-employed and unemployed to the total workforce. We did the latter.
There are, however, a couple of other factors that probably have raised currency holdings in recent years. The first is the decline in interest rates to nearly zero. If you cannot earn interest on a bank account, there is less incentive to put spare cash on deposit in a bank. We test for this effect in our empirical exercises and, like most other studies of this kind, we find that the demand for cash holdings is reduced if interest rates on bank deposits rise and vice versa (i.e. that the interest elasticity of cash holdings is negative), although the impact is relatively slight in terms of magnitude. (In this particular instance, however, the decline in interest rates was very large, meaning the overall effect was quite sizeable).
Monday, October 6, 2014
In economics its transaction rate
The red and blue lines indicate how often money turns over in a specified period. Look at the red line which includes checking and short term accounts. This is the nominal middle class and how often they make purchases.
The rate of transactions is below any level ever recorded by the Fed. One would think economists would be looking at that value, and many are, I am sure. But for Keynesians it's not even in their equation yet! Yet they claim to know how the economy works. Absent any transactions, the economy simply breaks up; it can no longer keep pricing accurate enough to engage in investment.
Look at producer prices and consumer prices. Producers are in blue, consumer in red. Producer price is a difficult statistic because the supply chain is long while the consumer prices are end points, so producer prices are both input and output. Producer prices crash during the recession, mainly folks stopped buying oil, especially the transportation sector. Since the crash, producer prices have held steady while consumer prices kept on rising.
Retail input prices held steady while consumers pay more. Somewhere in the value chain profits are rising. Where? Who knows. But clearly consumers are shopping less and someone else making better gains, hence the drop in velocity.
Sphere packing with inert bubbles
This shows that the gravitational coupling constant can be thought of as the analogue of the fine-structure constant; while the fine-structure constant measures the electromagnetic repulsion between two electrons, the gravitational coupling constant measures the gravitational attraction.
Now that I see this, I want to revise my interpretation of the fine structure constant. I am fishing here, so beware. Gravity is about how much motion bubbles of a single size need to measure 4*Pi exactly when filling the balloon. Thus the fine structure is about how much motion is required when filling the balloon with two sizes of bubbles. But something is wrong, because the difference between the fine structure and the gravity structure is about 10^-45. So there may be another Avogadro stuck somewhere between the two, I am not sure; an Avogadro squared is about 10^45. In terms of relative sphere size, the charge spheres, two of them, and the inert sphere cannot be that far apart. The Schwarzschild radius gives the coupling constant when the balloon has all three bubble types:
where p is the density.
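For reference, the two constants being compared can be computed from their standard definitions (a sketch; the constant values are rounded CODATA figures, and alpha_G here uses the electron mass, as in the quoted passage):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.9979e8       # speed of light, m/s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m

# Fine-structure constant: electromagnetic coupling of two electrons
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

# Gravitational coupling constant for two electrons
alpha_G = G * m_e**2 / (hbar * c)

assert abs(alpha - 1 / 137.0) < 1e-4
assert 1e-46 < alpha_G < 1e-44  # about 1.75e-45
```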
The Sun is so round!
Yes, but it is emitting enormous excess energy, it has more motion than needed to get a good 4*pi. This is still mostly a puzzle for me.
Inert spheres have no motion
Yes, but when optimally packed the variation in surface area should be enough to get the best 4*pi possible. In fact, I think Gauss already did this problem.
I am pretty sure physics is getting down to the business of sphere packing.
So all you sphere packing mathematicians, gear up, your skills will be in demand.