Saturday, May 31, 2014

Vitaly Efimov, you are my hero

Have a nice Nobel day! We will enshrine 22.7 everywhere in physics.

Impedance part 2

One has to decode the history of engineering physics before doing calculations on the finite number line. Charge and magnetism are the same group. Magnetism is managed inside the proton using the 1/B fractional part, and electron charge, discovered independently, uses B. So physicists divide them, naturally getting the square, and root that to get the actual value, 377, in normalized math under Isaac's rules of grammar. But we number line folks see it simply as the measure the proton uses in physics rules of units. That really is the third power, because they are quantized to the third moment, squared and then rooted again. Then I take the square root and get the log, at base 3/2. The rules are fairly simple: is it Wave or Nulls? Then, is it a really big thing or a really tiny thing? That tells me if they got an Avogadro. So, by playing around, I eventually figure out how the proton works. Watch out for Pi; physicists mostly get it right, but sometimes it may end up in a measured constant. And physicists inevitably end up with a log, from F'/F, so there will be an exponent adjustment of 2/3 or 3/2, and occasionally a sqrt(5) somewhere.

Impedance is the log spectrum of the third moment, the motion of the bubbles inside the electron orbitals.
It is 17 + 2.4; the 2.4 is log3/2(Phi), the electron packed null being loaded with what looks like two 1/2 + sqrt(5)/2 of wave. So the electron is 17 of Null plus 2.4 of Phi, in units of base 3/2 exponents. The quarks are the opposite. The effect was to widen the unit circle for the quarks, and extend Gauss downstream one notch. This is a common feature among sphere packing, finite number lines.
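Here is a quick Python check of that arithmetic as I read it, taking the "two Phi" literally as Phi^2; the 377 is the measured impedance, and everything else is just logs:

import math

PHI = (1 + 5 ** 0.5) / 2              # the most irrational number
Z0 = 376.730                          # measured impedance of free space

def log32(x):                         # log base 3/2
    return math.log(x) / math.log(1.5)

print(math.sqrt(Z0))                  # 19.409...
print(17 + log32(PHI ** 2))           # 19.373..., the 17 + 2.4 split

The two come out within a few hundredths of each other, which is as close as I expect given the imprecision in the most irrational number.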

I originally got this wrong because I didn't know the details of the gluon.  Then Vitaly came along, bless his Nobel prize, and gave me the number 22.7, which I quickly digested.

These rules help out:

Packed Nulls are a unit circle, and can do fractional multiply within, but can only do stable addition in their motion. Packed Nulls cannot multiply outside the unit circle because they count bubbles, and bubbles do not multiply. Waves multiply all the time, but cannot take powers across packed null potential barriers, only within. I think this is it: it is mainly operators jumping up from Null to Wave. Nothing in the proton counts Pi. Free waves, I think, only count 1/2 plus or minus fractions less than 1/2. It seems everything the proton does is in the third moment, including adding bubbles. The W and K bosons are something I do not completely understand, yet.

So what does impedance mean to free space?

Nothing, really. Impedance is the bandwidth of the atomic orbitals. The real capacity of space is the Schwarzschild radius. The magnetic and electric constants may have gotten capacity individually, but in their combination that was cancelled. The wave equation only works because impedance is bandwidth.

Impedance of free space

They say it is Z0 = sqrt(mu0/eps0), which looks like a band ratio along the line of travel. The sqrt tells me they have collapsed from two lines of symmetry to one; the division tells me it is a band ratio. But in Maxwell's equations the two constants are multiplied, telling me it is a power spectrum, or the bandwidth over the surface area of a sphere, two degrees of symmetry.

I am a licensed ham, but have long since given up on forces. As near as I can tell, forces are isolated bits, and their fractions, in the proton computing machine. In a travelling wave, the quants are less than half, and all multiply.

I am determined to decode this. Time is a squared differential in Maxwell, so it is computing the power spectrum of the wave surface relative to a flat space, and the Laplacian in Maxwell is squared, a double derivative along each of three lines of symmetry. If I take the square root of impedance I get 19.4, within one half of the log of the spherical spectrum of the electron in units of bubbles. Why the log? Hmm... If I adjust for Pi I get something close to 16.7. But why the square root? I am thinking that light quantizes the third moment in optimum units, but (3/2) quantizes the first moment in units of bubbles. They still match because bubbles are a third moment. And why isn't the electron 108 * 17 anymore? Because physicists distinguish charge and mass. The proton is unitless. It looks like the proton considers the electron to be a combined (17 + 2.4). The proton never went to engineering school. Physicists are in the habit of taking (f'/f) when they measure, so the 19 does not surprise me. That would be the bandwidth of the electron, and the measured impedance. Still, this needs work; I never got it right.
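For the record, here is the engineering division and root in Python; mu0 and eps0 are the standard values, nothing of mine in here:

import math

mu0 = 4 * math.pi * 1e-7             # magnetic constant, H/m
eps0 = 8.854187817e-12               # electric constant, F/m

Z0 = math.sqrt(mu0 / eps0)           # impedance of free space
print(Z0)                            # 376.730...
print(math.sqrt(Z0))                 # 19.409..., the number I keep getting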

There is a little nuance in my theory I have not worked out.

The Universe cannot all be 3/2

The universe has to have 7/5, 13/11, and 19/17 curvature. And they would be barely coupled, so wave motion from the 3/2 would pass through 7/5 group structures. It has to be this way, otherwise we would not have Heisenberg, Jello Pudding, and houses. So maybe, in the end, the goal of the Universe is to increase dimensionality everywhere.

In the balance between minimum redundancy and minimum variance, the 3/2 has run its course. More groups make finer resolution on Gauss, but Planck spreads, and vice versa.

Great work Mr. Gibbs and Wiki for the explanation

In statistical mechanics, the Gibbs algorithm, first introduced by J. Willard Gibbs in 1878, is the injunction to choose a statistical ensemble (probability distribution) for the unknown microscopic state of a thermodynamic system by minimising the average log probability

 

You are right up there with Planck and Heisenberg, Weinberg and Higgs. And that Russian guy with no vowels.
Entropy has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J/K) in the International System of Units.

Correct, that is bandwidth per quant of third order moment.


We are now ready to provide a definition of entropy. The entropy S is defined as

S = k_B ln Ω
where

k_B is Boltzmann's constant and
Ω is the number of microstates consistent with the given macrostate.
Where consistent means the Hamiltonian of the system is consistent between the micro and macro. That implies the decoding map is not necessary. If you include the decoding map then you have made the system consistent, but added the entropy of the decoding map.
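A toy check of the formula in Python; the hundred two-state bubbles are my example, not anything from the quote:

import math

k_B = 1.380649e-23         # Boltzmann's constant, J/K

N = 100                    # a hundred two-state 'bubbles'
omega = 2.0 ** N           # microstates consistent with the macrostate
S = k_B * math.log(omega)  # S = k_B ln(Omega)
print(S)                   # about 9.6e-22 J/K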
In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining.

Coarse graining is actually the business of physics, that is what it is all about, finding the decoding map. The states in nature are not continuous, but Isaac's rules of grammar are defined for continuous systems. But Isaac's rules just define a discrete set of functions which go to their limit faster than the finite number line grows in size.

The decoding map:

The correct interpretation of the Shannon channel rate in the presence of carrier noise includes the bandwidth needed to send the decoding map. That is, it is information flow up to the limit of Gaussian noise in the channel. If you know how many degrees of freedom are needed for the map, then take your baud rate and include room for the map, separating the two functions. In physics, the proton does this; it sets aside bauds to send information to the vacuum, called quantum entanglement. It is just enough information, 1 1/2 to 2 bits, so the vacuum and the proton can agree on space curvature, which is the only map they ever use. The 1 1/2 to 2 bit measure is a third moment, and according to the MIT cold people, that is 180 meters, or 6.4e6 meters as a first moment. That means gravity knows the typical space curvature at 1e6 meters. But within 180 meters, quantum entanglement to 1 1/2 degrees of freedom is maintained. So I think 180/6.4e6 is the correct Schwarzschild radius, or about 3e-5. But we have to do the 4*Pi/3, I think, or about 1.2e-4. And I am off an order of magnitude. Hmmm. That point should be the point where gravity is completely part of the proton baud system. Together they have gained 2 baud in group organization. Am I supposed to multiply by 2^3 to account for encoding gain? That would be the same as packing efficiency, which I have as 3.
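The bookkeeping, sketched in Python; the bandwidth and SNR are made-up illustration values, and the two bits per baud for the map is the budget I describe above:

import math

B = 1.0e6                          # channel bandwidth, Hz (made up)
SNR = 100.0                        # carrier signal to noise (made up)

C = B * math.log2(1.0 + SNR)       # Shannon capacity, bits per second
baud = 2.0 * B                     # Nyquist symbol rate
bits_per_baud = C / baud           # about 3.3

map_bits = 2.0                     # set aside for the decoding map
print(bits_per_baud - map_bits)    # about 1.3 bits per baud of payload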

Flat space must be energy generating

In the balance between space curvature, degrees of freedom, and the precision of light, there must be a point at which light is too imprecise. The vacuum then flattens space and steals potential energy from it. And anywhere the vacuum curves too much, it returns potential energy to it. Existence and countability are co-defined. Finiteness is energy constant. Beyond that we can know nothing more.

OK, then, where is the sample rate of light on my spectral chart?

Let's find the light sample rate as a singlet line of symmetry, a point on the X axis of a 20 bit computer.

It seems the clock phase error is about .001 per Higgs baud. Converting from a count of things in a sphere (third moment = baud rate) to a clock rate (tick mark on the X axis) is a collapse of the third moment to the first moment. The second and third moments are left in the group structure.

The proton itself is 16 bits (groups) plus about 4-6 bits of W and K boson counters, call it a 20 bit computer with a Higgs Baud rate. Give the vacuum two bits to do quantum entanglement. The electron is a 19 bit adder only, I think.

The proton obviously figured out Nyquist, so Higgs is likely two or three times the light rate. The proton treats light as noise in a Shannon Gaussian channel. That gives light a wave number around 113 or 127, the most likely primes; but because light is irrational it will not have a single wave number, and will likely range between two primes. Given the circumstance, I guess around 113 to 127, which is a variation of about .001 per baud. The wave number should vary along the singlet axis as the curve of the finite number line used.

Should I have done the estimation in the first, second, or third moment? I think that once we start working in Higgs bauds, we have linearized to the first moment. A baud is a third moment. Like Einstein, who converted energy, a third moment, to frequency, a measure in the first moment.

I could easily be off in my description of moments, but not off in the method. Remember, there is an Avogadro of single bit processors that comprise the system, of which at least (3/2)^19 are electron adders; and, I think, (3/2)^7 quark adders plus fractional multiplies of 1/11. Most of the multiply unit is the 20 bits of gluon and boson. It looks like spin is a one bit multiplier (2 and 1/2). There is actually 1/2 wave action inside the packed Nulls, reflected back out as a 2 multiplier of wave motion. I have not sorted this out; it could be a two bit quantity, or a 1 bit multiplier and 1 bit adder.

I have previously estimated the third moment of light as 10e54; Planck estimates the third moment of light. But Planck converts that estimate into engineering units, which are first, second, and third moments. That is why the Planck curve is an ungodly mess. The X axis on Planck seems to be a first moment measure, however. So when MIT reports the coldest point on the X axis is 6e6, I am tempted to think that is the first moment estimate.

Try reconciling all the methods and you will do much better than I have done here. The best approach is to treat the first moments as tick marks on three curved, finite, orthogonal axes. Then the second and third moments are circular grids and spherical grids composed of power series (finite logs) of the singlet axis tick marks. The number line theorists owe us a set of algebra rules for exponents on a finite number line.

Why is the Planck curve balanced the way it is?

The proton is marked between Nulls of:
(3/2)^89 to (3/2)^108 to (3/2)^127

The exponents are 108 = 89 + 19 and 127 = 89 + 19 + 19.

Why those primes? They are big and close to the origin of the number line; it is that simple. Go play with my spectrum on the page to the right.
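You can see why I like those exponents; a few lines of Python, nothing assumed except the exponents themselves:

PHI = (1 + 5 ** 0.5) / 2
for n in (89, 108, 127):
    print(n, 1.5 ** n)        # 4.7e15, 1.0e19, 2.3e22
print(PHI ** 107)             # 2.3e22, the same count as (3/2)^127

Note that (3/2)^127 and Phi^107 land on the same 2.3e22, which is my count of bubbles in the sphere.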

Those products on the right side of the Zeta all form a separable digit system; they can divide and multiply. Hence they are a minimum redundancy group organizer.

The quant error at 89, where the electron goes, is 9e-3; at 108, where the proton is, it is 9e-5; and the Heisenberg number, 127, is at 9e-3.

19 does well with multiply because it counts the largest number of things near the multiplicative identity. 89 and 127 are both primes, so there is a prime balance, a 19 digit set on either side of the proton.
 
Isn't there another prime balance out there? Somewhere, sure, but will it balance with a 108 = 2*2*3*3*3 in the middle?

108 +- 19 is the best shot for a three sphere packer. And on either side, the most irrational number lines up for a balanced 16 digit system, perfect for Nyquist sampling. We get 16 digits of whole numbers, and their fractions for multiply.

But 91 + 16 = 107, a little tight for a full 16 bit computer; the clock rate will slip. Well, we gave the vacuum spin, so it is really a 15 bit computer. Then we did a shift, by adding 3/2. Blame Higgs.

Is this enough bits to do quantum entanglement? Sure, with a 1/3 charge plus a spin, as long as the messages are 1/2 and the message length less than a few hundred meters.

But why is Planck uncertainty so small if all we have is a 15 bit computer? Because this is not uncertainty; the vacuum is very certain about what it is doing. Bit errors are about .0001/Avogadro, caused by the imprecision in the sample rate of light. The reason an Avogadro of bubbles is stable is because they are in fewer than 15 countable groups.

All of physics is the result of the utility of multiply. The gluons get most of the bits:

2*11 + [3+3+1]/11 + [3+3+1]/11^2; the .7 * 11 is a continued fraction, it can just take power series of itself endlessly.

Vitaly is going to tell us this is his secret 22.7. The gluons and quarks count out the orbitals. We get a 2 to multiply the gluon 11, so no quark gets a 2. That leaves 3, 3, 1 for the quarks. So we get 11 bits to the gluon, and three bits, one per quark. And one bit to flip electron spin. Since the electron is in independent motion, it just counts out its 19 bits of Null.
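The arithmetic, checked in Python; the s0 at the bottom is Efimov's universal constant, which is real physics and not mine:

import math

gluon = 2 * 11 + (3 + 3 + 1) / 11 + (3 + 3 + 1) / 11 ** 2
print(gluon)                     # 22.694...

s0 = 1.00624                     # Efimov's constant for three identical bosons
print(math.exp(math.pi / s0))    # 22.694..., the universal Efimov scaling factor

Both come out at 22.69, which is the 22.7 I keep going on about.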

Friday, May 30, 2014

That was a close one

Yes, the conditions are met to match the geometry of the sphere and quantize the most irrational number. Yes, I got lost and equated volume as an integral with volume as a count of three spheres; so, accommodating all the factors, Higgs and Avogadro match. The optimum finite number is real.

It is hard for me to see how any process in nature which has to pack integer things would not notice the separation of groups, and would not adopt an irrational number somehow. If it cannot get a three sphere, too bad; it will still use the same principle. But most likely it will find a three sphere line of symmetry. If not a three sphere, it will try a five configuration, and do the best that can be done. The most efficient number of groupings will always be limited by 107 and 127 in the exponent; that is always (speculation) the second best precision, and because the density of groups is highest near the multiplicative identity, there will be some group match with better precision, so the middle and the end points of groupings are always defined. This all results from the necessity of having a multiplier in the system to maintain group structure, or remove adverse groups, in other words.

So the count of things will always be Higgs, or some number close by, using the most irrational number and the Heisenberg equivalent ratio times the packing efficiency.

Frequency has what dimension?

Physicists say frequency is a one dimensional measurement of three dimensional energy density in the atom. Then they tell me frequency is a wavelength measure of distance, which is not energy, by their definition.
E = hν.
OK, this is the definition of frequency. 
Then h introduces the error in collapsing the atomic orbital energy to one dimension. h is energy times seconds, or mean energy * unit, for one unit of the finite curved number line. Got it. This tells me that when using the number line to measure volume, frequency introduces a one time error, and Planck gives the mean value. I interpret the second to be the tick mark on the optimally curved finite number line. c then is simply a scale factor, taking one unit of the finite number line to some other units.
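As a sanity check, the definition in Python; the frequency is my example, a green photon, and h is the value quoted further down this page:

h = 6.62606957e-34      # Planck constant, J·s
nu = 5.4e14             # a green photon, Hz (my example)
print(h * nu)           # about 3.6e-19 J, one quantum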

So, let's call energy the third moment in our system of units, since the atomic orbitals are three dimensional. Then the Compton wavelength is a measure of volume.

I am struggling a bit with the most efficient partition of states, as in the number of states an Avogadro set can be in. There is only one solution in the range of partitions we have discovered, and Avogadro is about three times too high.

In other words, whether it is the atom or the ball of gas, the minimum redundancy stacking will not do better than the proton; Avogadro is doing three times better, unless I have an error. But I do not think any quantized system can directly know the third moment of something, only the first or second. For example, consider the power relationship between circumference, volume, and radius. Convert that to a quantized system, and it will always quantize the unit one. In other words, I do not see a geometry where it can measure Phi^N and Phi^(N/3), or some combination of a third root, only Phi^N and Phi^(N/2). Hence my SNR falls as Phi^(N/2). I never proved what the volume was; I just used the geometry to note that the conditions for upward quantization meet the sphere equation. The packing efficiency could easily be three times what I assumed.

Hence my interest in Vitaly, he will show why three sphere packing scales up and the number packed always stays the same.

Getting excited for Vitaly

Wired: More than 40 years after a Soviet nuclear physicist proposed an outlandish theory that trios of particles can arrange themselves in an infinite nesting-doll configuration, experimentalists have reported strong evidence that this bizarre state of matter is real.
In 1970, Vitaly Efimov was manipulating the equations of quantum mechanics in an attempt to calculate the behavior of sets of three particles, such as the protons and neutrons that populate atomic nuclei, when he discovered a law that pertained not only to nuclear ingredients but also, under the right conditions, to any trio of particles in nature.

And he has a good name, the Efimov condition, we can call it. I am looking for data and equations as we speak.

Anyway, this linked to the Hoyle state, another state of the triangular nature of three particles. I am just now looking into it. That led to this mathematical 'trick':
Instead, Meissner's group combined the theory with numerical methods often used to describe the interaction of individual quarks via the strong force. This approach breaks down space and time into discrete chunks, constraining particles to exist only at the vertices of a space–time lattice and so radically simplifying the possible evolution of the particle system. 

Why not think that using discrete chunks is something the vacuum might very well do? That is what the vacuum spectral management is all about. It is this nagging thing with physicists: everything must integrate on a flat space. Somehow they lose sight of the fact that flat space, with continuous force fields, is a human model, having little to do with the vacuum.

Thursday, May 29, 2014

Constructor Theory of Everything?

In constructor theory, a transformation or change is described as a task. A constructor is a physical entity which is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible and impossible tasks. Counterfactuals are thus fundamental statements and the properties of information may be described by physical laws.

OK, I can buy that, except there is only one task: pack the sphere. The only information free space needs is directional advice.

Bell's Theorem again

The best possible local realist imitation (red) for the quantum correlation of two spins in the singlet state (blue), insisting on perfect anti-correlation at zero degrees, perfect correlation at 180 degrees.
Bell considered an experiment in which there are "a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions."[2]

Bell says something other than local hidden variables must cause this. I never got his theory because I could not understand why he would take integrals as infinitely divisible in a finite bandwidth world. He claims no idea of local variables can explain quantum mechanics, and he is right. That idea extends to the idea of taking integrals to the limit of zero and expecting free space to obey that law. The quantum correlations are the simple result of minimizing redundancy in a finite bandwidth space.



If the speed of light is not infinite, then free space is band limited, and free space will be curved, end of story. I can remove some restrictions: if the speed of light is not infinite, then one can treat free space as if it were finite bandwidth.
Gill points out that in the same conference volume in which Jaynes argues against Bell, Jaynes confesses to being extremely impressed by a short proof by Steve Gull presented at the same conference, that the singlet correlations could not be reproduced by a computer simulation of a local hidden variables theory.

John Burg did it, simply, and got his PhD from Stanford. He converts the signal to white noise using a minimum phase filter; then, after computing the filter, he inverts it, and the spectral peaks show up. Using just local information. Then he makes a 'curved ruler' to match the peaks so he is not plagued with divide by zero. Otherwise known as minimizing redundancy by making the number line fit the data. Otherwise known as making the number line match the blue line above so as not to have that sharp peak in the red line. Who says that nature, or anyone else, should use the same infinite number line?
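For anyone who wants to see Burg's trick run, here is a minimal numpy sketch of the maximum entropy method as I understand it; the test signal, the order 8, and the grid sizes are all made-up illustration values:

import numpy as np

def burg(x, order):
    # Fit a minimum phase prediction-error filter a that whitens x,
    # using only the local data (no autocorrelation windowing).
    f, b = x.copy(), x.copy()            # forward and backward errors
    a = np.array([1.0])
    e = np.dot(x, x) / len(x)            # white noise power
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]              # Levinson-style coefficient update
        e *= 1.0 - k * k
        f, b = ff + k * bb, bb + k * ff
    return a, e

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 120.0 * t) + 0.5 * rng.standard_normal(t.size)

a, e = burg(x, order=8)
w = np.linspace(0.0, np.pi, 2048)
A = np.polyval(a[::-1], np.exp(-1j * w))     # filter response around the circle
psd = e / np.abs(A) ** 2                     # invert the whitener: peaks show up
print(w[np.argmax(psd)] * fs / (2 * np.pi))  # close to the 120 Hz line

Compute the filter from local data, invert it, and the spectral peak pops out; that is the whole trick.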

The infinite number line exists in mathematical grammar because a whole class of problems that obey infinite rules can be simplified. It does not really exist in nature. Space is curved, and particles with spin know it.

Anyway, here is the thing.

How could physicists look at the blue line and not notice the thing is minimum phase? I mean, minimum phase is all the information the particles need. It tells them a helluva lot; local, general, whatever, that is a big piece of information.

Quantum fields are a grammar, not necessarily a reality

In theoretical physics, quantum field theory (QFT) is a theoretical framework for constructing quantum mechanical models of subatomic particles in particle physics and quasiparticles in condensed matter physics. A QFT treats particles as excited states of an underlying physical field, so these are called field quanta.

This is another error in the physics stack exchange. A quantum field is no more real than an electromagnetic field or a gravitational field. It is simply a more accurate model than we had used before. The much better model, the model that explains much more, is simply that free space is band limited. An even better model is that free space is a three sphere packer. An even better model still is that free space is a self packing set of three sphere sizes, and the optimum number of bubbles they pack in a sphere is about 10e22.

There is a simple rule: the best model of free space is one that a grammar school kid can understand. Free space is not that smart, folks.

Matching spiral moments?

These are the things below, the denominators, which come out as spiral rates from the center. When their product is 1, or within precision of 1, the surface of the object is marked. Each time a prime group is added, the 1 points obtain one more line of symmetry.
Look at this paper by D. Pozdnyakov. But these are counter posing spirals for each prime, which combine with the previous spiral moments. It is all about splitting up the curvature to accommodate sphere packing with multiple spectral modes, each mode independent of the other, so they can superimpose.

Each prime has the form p + 1/p and p - 1/p. That creates a counter posing dual spiral. With one prime, terms cancel only on the sphere; with the second prime, 2 spheres are formed by term cancellation; with three, 2*3 spheres are formed by cancellation. I am pretty sure this is what is going on. The more energy, the more modes, and these include much smaller fractions, meaning higher bandwidth, making for more points of cancellation. A discrete, radial power spectrum designed to encode as many Nulls as possible, that is, put them into separable groups. In this system, the phase imbalance in the proton center is absolutely minimized, so the proton lives for 10E34 years.

Schrodinger and Heisenberg describe this as the error function of the engineering approximations, and got it right, mainly because the engineering approximations are damn good, especially charge and magnetism. Uncertainty, imprecision, is still the main component of physics; it is just that the value of imprecision is always put in engineering terms. But the true uncertainty in this system is the band limit defined by the finite system under minimum redundancy grouping. Go to Pozdnyakov's paper where he found the zeros of the Riemann, and remember, these are near zeros. So trace out the small boundary where all the near zeros have the same nearly zero value. Invert that boundary and get the orbitals.

Another way to do this is to take an X axis of unit length. Mark all the whole number primes, and the 1/prime fractions, up to the center of the atom, about prime 13. Mark them to a precision of 1/sqrt(Higgs), decreasing out on X. Then spread the X axis out to a circle, reducing the precision again by 1/sqrt(Higgs), decreasing out; then to a spherical volume, reducing precision by 1/sqrt(Higgs), decreasing out again. Then look at the ruler and mark the spots where every prime whole number lands on the same integer. I think that works.

Like this

OK, I say to myself, if mathematicians are so smart, then a search of sphere packing on Riemann surfaces should turn something up. And it comes right up!  Mathematicians are really that smart!

But this is a binary packing, two bubble sizes. Not what I thought, now these smart mathematicians have confused me, darn. Then:
A compact binary circle packing with the most similarly sized circles possible.[4] It is also the densest possible packing of discs with this size ratio.[5]

OK, great. I bet when we make this a sphere, we get that three sphere packing is most efficient. So I drill further down the link chain. I further suspect that four sphere packing is best in an ellipsoidal world. Stay tuned...

And this:

The expression on the left is the definition of the finite log taken to infinity on a Riemann number line curved at s:

sum over n of 1/n^s = product over primes p of 1/(1 - 1/p^s)

Einstein used this to find that the finite log for a 3/2 curved number line has a maximum, 2.6124... That is, there is a point where the finite log and the most irrational number match. The thing on the right is the product of those denominators. Each denominator is the precision by which a separable group has an inverse, how close its smallest fractions are to an inverse. So the log measures entropy.
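You can get the 2.6124 with a few lines of Python; the tail term is just the standard Euler-Maclaurin correction so the sum converges in reasonable time:

s = 1.5
N = 100000
partial = sum(n ** -s for n in range(1, N + 1))
tail = N ** (1 - s) / (s - 1) - 0.5 * N ** -s
print(partial + tail)          # 2.61237..., the zeta function at 3/2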

Combine the two ideas: sphere packing has a maximum accuracy, and that maximum is right at the proton peak in my spectral chart. So, depending on what you are doing with your number line, there is a best finite size: the point at which multiply is best matched to the entropy your process creates.

The individual components on the left are simply the measuring error each of your quants creates when your units are maximum entropy (minimum redundancy). These are the measuring units, exponents, when the integer set is converted to the power series used by the process. That power series measures signal when redundancy is minimized. They make the Planck curve for your governing process when using the optimum group separation.

All this stuff fits; all we need to do is follow up and find the group generating function for three sphere packing, which defines the Riemann curve. Then we have our line of symmetry generator, and we have the theory of everything. Those darn mathematicians are going to get a banana real soon.

It's the Web and/or the mathematicians

I am not that smart, and I am not adding much here, just reporting what I read in group theory and measurement theory. I have just been around this stuff so long that I can read through the mathematical texts. But the canonical links in the web keep driving me straight to confirmational theories, like Haar's theory on measurement from the 1930s. Not all of these links are new; I have become paranoid. Either the mathematicians have kept their mouths shut, or they are currently relinking based on my blog, or Wiki is just imposing a canonical link order.

Currently I am just a traveller in this thing; it is all right there, on the web. Is this really about some nut ball in Fresno just traversing the web links? Is it, like, I am the first one to click through it all? No, that cannot be. I am not the only one; there are others who know the secret much better than I. Heisenberg's great simplicity, it was a secret; I just ran across it and reported it as my duty as a blogger. Like this, on the logarithm of complex numbers:

Another way to resolve the indeterminacy is to view the logarithm as a function whose domain is not a region in the complex plane, but a Riemann surface that covers the punctured complex plane in an infinite-to-1 way.
Branches have the advantage that they can be evaluated at complex numbers. On the other hand, the function on the Riemann surface is elegant in that it packages together all branches of log z and does not require any choice for its definition.
Making this my spiral function to drive the generation of lines of symmetry. What is the logarithm?

This definition. What does it tell us? Add up all the fractions up to the point X. Why? So a finite number system can use those fractions optimally and measure X. That is minimum redundancy encoding; that is the basis of Shannon. For every separable spectrum defined by a quant, we automatically get the separable spectrum defined by its inverse. The Planck curve is simply aligning the wholes and fractions for maximum efficiency. The finite number line is simply using finite summation instead of the integral, the limit of the finite summation being the current maximum finite whole number. The result is the closest approximation to the natural log. The method is multiplication using the most irrational number, computed to the same finite precision. The effect, in the center of the proton, is chromodynamics: the prime number with its fractional and whole parts spread in a balance about the peak of the Planck curve.
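Here is the finite summation against the natural log, in Python; the 0.5772 offset is the Euler-Mascheroni constant, which is where the two bookkeepings differ:

import math

def finite_log(x):
    # add up all the fractions up to the point X
    return sum(1.0 / n for n in range(1, int(x) + 1))

for x in (10, 100, 1000):
    print(x, finite_log(x), math.log(x) + 0.5772156649)

The two columns close in on each other as X grows, which is the sense in which the finite number line approximates the natural log.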

My spectrum has this very sharp peak, before maximum entropy packing. Why? Three sphere packing is the best there is. I saw it; it is on the web somewhere. The bubbles simply pack spheres using natural addition, causing the finite log to be computed at the sample rate of the most irrational number. We could have made a universe out of billiard balls.

I know it's not me; some mathematician, likely that Russian guy with no vowels in his name, he dunnit.

Wednesday, May 28, 2014

Something strange in the uncertainty business

The Heisenberg uncertainty value should be unitless, yet all I see are engineering units. As near as I can tell, the vacuum can measure the Compton wavelength with a deviation cubed of 9.22e-5, a unitless value, and the precision of light. We have measured light to a deviation of about 5e-4. We know Planck to a precision of about 1e-5 (relative to the multiplicative identity), and that is a measure of energy, or the cube of deviation. So the precisions all seem to agree, more or less. All of these precisions should be deviation cubed, energy, as that is all we can measure, energy.

There is no other error to measure. Even the theoretical idea of the uncertainty principle has to rest on the speed of light. What is Heisenberg saying? The current engineering approximations are limited by error. Yes, but what does that have to do with the precision of the vacuum? Heisenberg says we have to extract energy to measure something, and he may be right. But there is no need for the engineering approximations: take the deviation cubed, multiply by sqrt(2), and you have the combined variance between two energies reacting. It is still a unitless number, a statistical measure.

Here is how they define Planck. In each case they have, correctly, taken the number in parentheses and transformed it through all the known laws of physics, producing the precision in the units of physics. But ultimately, the precision in the engineering units is derived through the speed of light, and the number in parentheses was derived from measurements using the speed of light.

The value of the Planck constant is:[1]
h = 6.626 069 57(29) × 10^-34 J·s = 4.135 667 516(91) × 10^-15 eV·s.
The value of the reduced Planck constant is:
ħ = h/(2π) = 1.054 571 726(47) × 10^-34 J·s = 6.582 119 28(15) × 10^-16 eV·s.
Putting this into engineering units: 2e22 bubbles can fit themselves into a sphere, and the number of bubble types will be balanced to 1/1000th of a bubble. The sampling error for quants is lower than the sampling error of light, by bubble design, a result of minimum redundancy bandwidth management. So they do not use all the accuracy of bubbles to make one sphere, but if they wanted, they could do it. Now my number seems to agree with the lifetime of the proton at 10e34 years, a very long time. And I can say no error in the 107 wave number exceeded 1/2; three sphere packing is extraordinarily accurate. That is why Planck comes out so small in all the engineering units: all the engineering laws are about sphere packing with mass and two phases of charge.

Correcting an error, MIT not IBM and the coldest temperature

A record cold temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium atoms was achieved in 2003 by researchers at MIT.[8] The associated black-body (peak emittance) wavelength of 6,400 kilometers is roughly the radius of Earth.

I mistakenly said IBM, somehow.

Compton and Avogadro

Avogadro and Higgs seem to agree on the largest number of things we can count in a sphere. Yet Compton and the wave number seem to agree on the number of pseudo masses counted in a straight line. The Null count for bubbles agrees with the frequency. MIT reports a wavelength of 6e6 meters. Is this correct?

This system is simply the optimal counter under the conditions imposed by the bubbles of the vacuum. Compton introduces something called c, which relates two unit definitions, and is a scale factor used for division in his system. So both Compton and Avogadro agree in this system.

But the bubbles seem to prefer the Avogadro standard. And on the quantum scale, Compton is likely measuring a small sphere, because momentum times position makes a cubed power.

The speed of light, meters per second, is just some scale factor used in engineering approximations.  The real units of physics are bandwidth and band limits.

Standardizing

Having discovered the largest efficient sphere, we can simplify by eliminating sources of confusion, like Avogadro and the speed of light and Boltzmann. We also have choices in setting what the wave number measures: we can say it measures radius P^N for area P^(2N+2); or area of P^(N+1) for radius P^(N/2+1/2); or volume. The SNR came out as a ratio of volume and area, so it shouldn't matter one way or the other. The current Compton standard is radius, but Avogadro is volume. Compton radius makes sense because it is a symmetry of dimension one, the lowest integer so to speak.

Then we say: the largest unit of length is 1/2 * Phi^107 and the smallest unit of length is 2 * Phi^(-107), where I define Higgs as Phi^107.

Set the smallest packed thing to 1, set the largest thing to Higgs/2, and the smallest fraction to 2/Higgs. We add the 2 to maintain the Nyquist rate and ensure Higgs quantizes nothing. Then, for all the engineering approximations, we can just set a scale on Higgs to make engineering numbers look nice within some range. We really count everything in quantum numbers. But we know all the separability and symmetry, and thus know all the general relativity adjustments for the engineering approximations. All computations are done in power spectra and converted to engineering units after the fact; much simpler.

All the Hamiltonians can be written as a function of volume and area:

H(n,m), where n and m are quant numbers, shifted depending on whether you want fermions, black body, bosons, or chemical potential. The generating function is always an exponential polynomial with lines of symmetry based on a defined generating graph. The job of physicists becomes discovering that graph. Relativity is always the case of crossing a node on that graph. Much, much simpler, as Einstein and Heisenberg would say. The coupling constants in the Lie groups are really just normative functions of imprecision in the most irrational number.

Quantum Chromodynamics

QCD enjoys two peculiar properties:
  • Confinement, which means that the force between quarks does not diminish as they are separated. Because of this, when you do split the quark the energy is enough to create another quark thus creating another quark pair; they are forever bound into hadrons such as the proton and the neutron or the pion and kaon. Although analytically unproven, confinement is widely believed to be true because it explains the consistent failure of free quark searches, and it is easy to demonstrate in lattice QCD.
  • Asymptotic freedom, which means that in very high-energy reactions, quarks and gluons interact very weakly creating a quark–gluon plasma. This prediction of QCD was first discovered in the early 1970s by David Politzer and by Frank Wilczek and David Gross. For this work they were awarded the 2004 Nobel Prize in Physics.
There is no known phase-transition line separating these two properties; confinement is dominant in low-energy scales but, as energy increases, asymptotic freedom becomes dominant.

There is no analytical proof (proof with analytic continuity) of this feature. Discrete QCD can prove this.

So, if there was ever any doubt about the theory of vacuum bubbles, this should seal the case. The wave numbers for the two conditions are determined by the maximally packed spheres. The ratio of Nulls to Phase admits only two separable solutions.

Anyway, this is my next project. Wow, just starting gets you ten links deep into group theory, and I eventually hit some theory Haar had, and a whole bunch of school work comes rushing back.

But I know how this is going to end: the universal finite line counter for sphere packers, driven from a single crank, generating quants, including their lines of symmetry. We end up with this mechanical contraption with 1,000 pencils making notches on our 3D graph, a massive minimum spanning tree of drawing tools. It will be Nlog(N) scribblers, a one hundred rank drawing tree.

Each branch of the drawing tree has a prime resolution about its radial axis, and only passes on the quant count when it counts up to its resolution.

Take one of the bosons at 47 * 2. Put it in the center of the proton and let it make radial motion, right and left spin, with 47 + 1/47 spherical modes for each wave spin motion. I see three of those possible: 3*3*11, 11*8, 2*47, 4*26. Those bosons are massless because a big enough wave prime in the proton has grabbed any appropriate size that fits these bosons. The proton, if its gluon has 13, leaves these larger wave bosons looking for 14. They try to add in the quarks, but they are 7+13, so they look for 20, and reach the edge of the orbitals, doing nothing but rounding out the sphere surface. They are limited to [2+1/2]*[47+1/47]; all the other phase has better stability, and the boson is its own group of two, managing a specific band of the spectrum.

Details and corrections on the sphere packing curve

Correction: quantum number N counts a sphere of area P^(N+1), so I have the X axis shifted. Still correct, I think, because I inverted the SNR.

I want to convert the quantizer from what it tried to do, block off the sphere into equal area layers, into what it did after hitting the Higgs boundary and packing the Nulls.

I used the basic maximum entropy encoding of a noise channel:

Q/C = log2(1+S/N)

Where C is a multiple of the wave number n, Q is the quant measured, a unitless number, S is area, and N is volume.

Signal to noise came to:

3/(phi^(n/2)), n being the wave number and phi the most irrational number. This is area to volume because phi^n is counting area, and area to volume is 3/radius.

Why not just plot the noise by wave number?
Because I want the noise relative to the maximum entropy packing function, not the minimum phase wave number; hence Shannon converts from one norm to the other. Think of the shoe store manager. He measures noise in the mall, people walking around. Then he sets up his shoe store, and measures the noise of people entering his store. He wants to know how much tennis shoes radiate noise in his register, and I want to know how much noise is radiated from the quarks. How does the spectrum look after Higgs did the maximum entropy encoding, not before.

Anyway, I put this in the form below. I have the exponent and the irrational phi because I want to remove signal and just look at noise. That is, how congested are my checkout counters, not how much per sale, using my shoe store analogy.

Noise = j*(phi^(n/2)) / [2^(Q/(k*n)) - 1]

I am treating the function as a two bit number in which each quant is counted at a declining multiple of the Higgs rate. Longer wavelengths, lower frequency counting, and lower order bits make for lower signal to noise and lower Q.

The j is there to scale my Y axis into nice looking fractions. The k in the exponent serves the same purpose as Boltzmann's constant: it changes the base from what nature uses to what I use, and it scales temperature (rate) from some historical standard. k also has a factor of two in it because I need to sample each bit at twice the quant rate.

So this tells me which parts (frequencies) of the atom will radiate when I shake the thing.
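Here is the form, tabulated in Python so you can turn the crank yourself; the j, k, and Q values are made up for illustration, since the real ones are whatever scales the curve onto the packed Higgs spectrum:

PHI = (1 + 5 ** 0.5) / 2

def noise(n, Q, j=1.0, k=2.0):
    # the form above: j * phi^(n/2) / (2^(Q/(k*n)) - 1)
    return j * PHI ** (n / 2.0) / (2.0 ** (Q / (k * n)) - 1.0)

for n in range(1, 128, 9):
    print(n, noise(n, Q=1000.0))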

Tuesday, May 27, 2014

Take these atomic orbitals, for example.

Look at how energy increased and the medium sized sphere on the left became two spheres, one large and one small. The two sphere looks like a ratio of 1/4 in radius, making it 1/16 in quantum number. Who has that wave number? Something in the center of the proton, something with a multiple of 16. Well, a [2+2+1]*16 makes 90: two ups and a down and a 16 quantum gyron color thing in the middle. Or was it the electron, at 75, 15*3; except it has the longer, phase shifted wave for charge, making it a 16 with phase shift.

Now there are certainly Nulls in this game; that is what keeps the small sphere stable. The excited, short wave mode has gathered into its favorite radius, and the outer sphere is bereft of nulls for any other wave mode except the long one, the b to the 1/b, getting the most action.

I think the quarks and the gyro machine are governing this process. But how? The quarks are this odd ball triangular thing with no radial symmetry, yet this is a strictly radial game going on. What? Do the two ups start spinning about the one down? That is all I can see. Then we go from the one sphere to the two sphere; do the two ups have two radial movement modes?

But the two sphere is still maximum entropy, so that means my SNR crank has a radial mode in it: SNR jumps for that small sphere, then drops a little faster than usual until the large sphere.

I am still a little weak here; I need an algebra for wave numbers. It should exist, this algebra. It is just sphere packing, or almost sphere packing in the case of waves.

Increasing the temperature of the atom

Higgs is the fundamental sampling rate. When the temperature exceeds that rate, more events (spheres packed) are happening than Higgs can sample. You split the curve and make complex orbitals. How? As little as possible: by two, then three, then two, then three, then five; allocating precision among the known quant numbers. Then separate the splits evenly among the radial angles and along the radial axis, using the rules of symmetry, which I have not figured out. But I think each wave, except Higgs, carries both b and 1/b; they all have upper and lower spectral limits, which should be separable. I have not learned everything about this; I am still learning. If I knew, I would write the rules of algebra for wave numbers. But if we had those rules, then we would put them all into the multi-factor SNR, give each wave a matrix, and turn the SNR crank at whatever temperature pleases us.

Then make a sphere packing Planck's curve along each of the new lines of symmetry.

Band stops in the universe

Anyone find it odd that MIT got the temperature of sodium to 6e6 meters in wavelength, about the size of a planet? That is about the wavelength of gravity that supports a Lagrange point in space. The proton is stuck in a bandwidth well; it is stable. Interesting how that happens, and how lucky it happened in one go around with the Big Bang?

Anyway, I bring it up because the travelling wave never has enough nulls in free space to stabilize. If the Planck curve is an entropy filter, then the travelling wave is its inverse. We may have a simple method of constructing the travelling wave function using the inverse Planck curve. So, in the reverse process, the travelling wave counts down in surface area, from high quants to low, moving the intended center of the sphere forward as it attempts to enclose the smaller and smaller volume.

I think we can work this and greatly simplify the whole polynomial wave equation. The key is a phase shifted B and E quantum match which continually rolls over the quant numbers. Go back to the sphere packing Planck's curve, and rotate the signal to noise through the quantum numbers that define the bandwidth of the wave. You will continually regenerate a smoother version of the peak of that curve. Just define the axis of movement along the moving center of a series of spherical waves forming and degenerating along the travel line. Using the same geometry as here, we could easily draw the wave in three space. I mean, the sphere packing Planck's curve should have all the elements of physics; we really do not need much more. If you want to change space curvature, just change the three space step size. If you want to add noise, just add a noise term to SNR, which you are carrying along. Adding Nulls to free space is the same as increasing SNR, and it flattens free space.

We can even generate our atomic orbitals with the Planck curve, as long as we carry the frequency in a matrix and use exponential matrices. Split the SNR among the dimensions of the wave function. Then take your atomic curve, multiply it into overlapping bands, turn the crank, and make molecules.

Making the sphere packing Planck's curve

Completely out of scale, but the relative shape is close to the packed Higgs spectrum. I hesitate to go into the morass and normalize Boltzmann's constant to power spectra. Anyway, this is simply the noise value for any given quant rate relative to the wavelength on the X axis, done in log2 format. The quant size grows with shorter wavelength, and so does the noise, up to the peak, where the lower quant size in the numerator dominates. The peak occurs right where the quark/gluon machine is constructed.

I have SNR as area over volume, the reverse of what I thought. So, in statistical terms, the noise is the error in volume size. Thus SNR drops as we make the total sphere larger. It is packing volume efficiently, but counting surface area by quant number. Hence my interpretation: signal is accurate to the quant number, volume is increasingly noisy. So this is all strictly Shannon, using SNR as volume to area. I used the most irrational number as the base for SNR.

The point of physics and shoes is simple: you can only pack spheres up to the accuracy of light, or the accuracy of the checkout counter. Why is this curve different from the bankers' curve? Bankers measure noise to signal, I think, and physicists just measure noise. But they both use wavelength on the X axis.

This is also finite number line theory and Heisenberg theory. The error in b must match the error in 1/b; fractions and wholes split the error.

Gaussian noise in the shoe industry

In the shopping mall, the shoe store owner knows who wants new shoes and who is just looking around. Why? Because that is his business; he wants to capture all the customers who may be making redundant visits to multiple shoe stores. When he has captured all the redundant shoe buyers, then by definition he has recognized all the wanderers. Gaussian noise is a good outcome; it means that temperature is stable and redundancy removed.


Shannon had a Gaussian noise requirement in his theory because of sampling error in electrical systems. That Gaussian noise is there because the atom has removed the redundancy in the system. But Shannon could have easily re-interpreted his result to say:
'When all redundancy is removed from the system, then all that remains is the sampling error of finite systems.'

That error will fall off as x^2; it will be minimum phase; it will be Gaussian. Engineers complain because of the method I use; not everything is Gaussian, they say. Except when temperature (bandwidth) is stable and the system is finite; then it is Gaussian, whether it is the shoe store or a coax cable.

What is sampling error in the shoe business? Periods when things are a bit congested. Shoe buyers see the store has a bit of a wait, so they tend to add a bit of wandering to their shoe buying habits. They become customers who wander a bit occasionally, and when a good shoe buy comes around, they grab at it. That process of equilibration is called radiation, in physics. When the shoe store is equilibrated, some folks will buy four pairs, rarely, and some buy two pairs, more frequently, one person per pair. So the amount purchased is proportional to SNR, but the frequency is inversely proportional to the amount purchased; we get a power series, the -iLog(i) are matched.

Ultimately, the electrical channel is encoding information until its own sampling error matches the sampling error of the underlying carrier. Like the shoe store owner in a mall, it can do no better than the normal phase error generated by all the stores in the mall.

When phase knows it's time to requantize



So I delved into this relationship from Gary Meisner. If the value Phi is the current quant rate, and the phase finds the minimum phase path to be a sphere, then at this point its own circle of phase surrounding the sphere is equal in density to the packed area of the sphere.

Look at this circle with the same triangle. A group of phase, optimally spread, attempts to penetrate the bubble. The peak of the triangle is exactly where the spread of the colonizing phase meets an equal gradient in the area of the sphere. The root(Phi) will be an area density that matches the spread quant Phi. There is no more minimum phase path into the sphere.

The general condition holds by multiplying all sides through by phi^n. For a wave trapped inside the atomic orbital, surrounded by a sphere of the Higgs limit, the same process works in reverse.

So I think the irrational number quantizes surface area in constant ratios. The surface area is spectral power, and counts linearly with the integer quantum number, I believe. That means energy goes as n^(3/2) with the quant numbers. And Compton Null/Freq = Constant is a spectral power match.

So spectrum is allocated increasingly toward the center of the atom, no surprise. But check me on this, as I have surface area decreasing but spectral flux increasing. This is one of the cases where I forget if I am counting down or up. The thing is letting surface area go by quant number, but I think that means more flux density. So, in other words, I am likely counting up instead of down, but I do not care; it is a sign change and I will get to it later.

Mainly by packing more of the Nulls toward the center of the atom. The quarks are packed in and only short wavelengths are supported, the Compton effect. This simplifies the job of allocating polynomial bandwidth. Density of packed nulls is the inverse of power spectra. EM light is low bandwidth compared to gamma. Quarks make faster adjustments over shorter lengths than does the electron.

What about signal to noise? They are power or energy ratios. I am not sure here, but suppose we went with volume as the signal and area as the noise; area goes as phi^n, and volume goes as phi^(n*3/2). SNR = r/3 becomes (phi)^(n*1/2)/3.

Here is what I am doing

The phase is packed according to minimum phase, radially. It is linear along the radial axis. But it gives me the SNR, which is constant going up the axis, so I can compute the maximum entropy packing, which I did, and at first glance it matches the proton well enough. So light quantizes for minimum phase, and Higgs does the maximum entropy on the rebound.

Monday, May 26, 2014

Maxwell in polynomial quants, part 2

I think out loud in these posts, generally assuming I am speaking to mathematicians much smarter than I. So I am refining the concept of polynomial wave motion in parts.

First refinement. Waves do not pack Nulls, so we should be counting out the Maxwell polynomial wave in fractional units.

Let b = 1/r, where r is the most irrational number to a fixed precision. The precision of b is 107, the Higgs precision. The wave equation should end up looking like:

Basic form: 1/2 - b^j + a1*b^k + a2*b^l + ... + b^107

where the total number of terms will be on the order of 6 to 9, and the exponents are composed of multiples of the basic quant modes in the atom, say [2+2+2]*[13] and various multiples up to 107.

The basic form simply counts in units of b^107, where b is the fractional precision, or one over the most irrational number. Whenever the fractional power series carries the 'one', we have completed one complete wave. Meanwhile, the wave should count along the line of travel in units of fractional Higgs.

The coefficients a1 .. an are really vectors radial to the line of travel, but I think we can collapse them to scalars to get the bitstream version of the wave. These radial vectors should all be of the form [1/3]*k, defining 'mixing' angles along the concentric circle about the line of travel (to be precise, they should be radials perpendicular to the spherical surface perpendicular to the line of travel). To be even more precise, the B and E polynomials should be counting out counter posing spirals against the spherical Higgs boundary along the line of travel, the spirals fixed by the two wave modes at the source, the atom. In the process of counting those spirals, they reset, or normalize, the vacuum of space to the natural 2,3 curvature. Even more precisely, I believe the Higgs fraction itself is a composite of the two vacuum spirals. So all waves and matter are composed of power functions of the two vacuum spirals, and the sphere is perfectly preserved in all structure.

The angular separation of the fundamental 2,3 spirals on a spherical surface is the Weinberg mixing angle, naturally: pi/6, plus or minus the imprecision of the most irrational number, which changes with energy. So you see, the world consists of a Higgs plus a Weinberg, my two favorites.

Anyway, we should end up with a simple digit based rate counter, counting fractional quant numbers.
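A toy version of that counter in Python; I use b^7 instead of b^107 so the loop finishes, but the carry logic is the same:

PHI = (1 + 5 ** 0.5) / 2
b = 1 / PHI              # the fractional count
EXP = 7                  # stand-in for 107, to keep the loop short

quant = b ** EXP         # the smallest increment
acc, waves, ticks = 0.0, 0, 0
while waves < 3:
    acc += quant
    ticks += 1
    if acc >= 1.0:       # the fractional series carries the one
        acc -= 1.0
        waves += 1
print(ticks / waves, 1 / quant)   # about 29 ticks per complete wave, phi^EXP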

We have to do Maxwell in finite polynomials


c becomes the Higgs sample rate relative to the vacuum, which is also a polynomial, of order 1 or 2. E and B come from the atom; they are already bandlimited to the fundamental sample rate of the orbitals.

The Laplace operator need only have an axis along the line of travel and one radial axis. But the polynomials B and E have to carry the vectors induced by kinetic motion. The vectors are identified as [pi/3 or pi/6] and so on, is my guess.

If r is the most irrational number and b = 1/r the fractional count, then the vacuum, against which you differentiate, is of order b^2 if you want to try curved vacuum, or b otherwise. Higgs is b^107, by my calculations. There is that one self-symmetric Boson that makes gamma around b^99 or b^104. The visible light is all b^15 to b^17, with exponents to three or six, depending on the game you play with the quarks. I think all the differentials come out as division, actually, and everything should divide OK, I would think. We end up with a digit sequence that counts out fractional units of 1/Higgs.
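To get a feel for the sizes involved, a few lines evaluating those exponents under the same assumption that b = 1/phi; only the exponent assignments come from the paragraph above:

```python
# Orders of magnitude for the quant exponents named above, assuming
# b = 1/phi (most irrational number = golden ratio). The assignments
# (vacuum ~ b or b^2, visible ~ b^15..b^17, gamma ~ b^99 or b^104,
# Higgs ~ b^107) are the post's; the code just evaluates them.
import math

phi = (1 + math.sqrt(5)) / 2
b = 1 / phi

for name, k in [("vacuum (curved)", 2), ("visible, low", 15),
                ("visible, high", 17), ("gamma boson", 99),
                ("gamma boson alt", 104), ("Higgs", 107)]:
    print(f"{name:16s} b^{k:<3d} = {b ** k:.3e}")
```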

The only thing different from the analog version is that we never entered the world where irrational numbers prevail, except for b.

Sunday, May 25, 2014

Weinberg and Higgs are my favorites

Higgs marked the upper end of my spectral chart, and Weinberg verified my Null quant ratio. D. Pozdnyakov is another favorite for his work with the Zeta. And the scientists working on the muon atom. And the folks at Wolfram working on the most irrational number. But if you discover the limits of space, then you should have a name like Higgs.

Things that bother me


The relative densities of galaxies today and the CMB do not match, the Big Bangers claim. The Hubble deep field takes just a tiny part of the CMB picture. I have a hard time seeing why the density variation is so dissimilar.

And below, I wonder what it means that 20% of the red-shifted galaxies seem gravity lensed. My theory says that when waves hit the Higgs they do a 20 degree phase shift, the mixing angle. The rays we see are the rays that combine from the under sampling of the lens, along a path perpendicular to us. Is that the same ray that lines up with the emitting galaxy?

The Big Bangers would have us believe that the Planck curve of the CMB can be scaled back to visible light. But my theory says that a round billiards table with a Planck curve of billiard balls can be scaled anywhere I want. Why would a box full of protons around a galaxy not organize into a maximum entropy Planck curve?

Is microwave subject to gravity lensing? Not as much, perhaps not at all; the bandwidth is much lower. In fact, if all these galaxies held maximum entropy protons, the impedance match would be nearly perfect.
This is a color composite image of the Hubble Ultra Deep Field. Green circles mark the locations of candidate galaxies at a redshift of z~8, while higher-redshift candidates are circled in red. The estimated distances to these candidates have not been confirmed spectroscopically. About 20 to 30 percent of these high-z galaxy candidates are very close to foreground galaxies, which is consistent with the prediction that a significant fraction of galaxies at very high redshifts are gravitationally lensed by individual foreground galaxies.



while adiabatic density perturbations produce peaks whose locations are in the ratio 1:2:3:...[68] Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.

This one is tough: how did they determine the density was entirely adiabatic? How would I model galaxies today? Dunno, haven't thought that far. But my first guess is large galaxies widely separated and small galaxies more compact. But large galaxies would have more radiation.

Dave Dilworth says my theory cannot go anywhere until the cosmic foreground is removed from the data. They removed all known sources of microwave radiation from the data.

What would be the theory of counting, by the way?

I would state it as:

What is the finite value of the most irrational number as the precision of the number line approaches the next axis of symmetry?

But, I am not going to prove it, if I did I would get the banana and have to bathe.
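If the most irrational number is the golden ratio, the statement has a concrete finite-precision face: its continued fraction is all ones, so its convergents are ratios of consecutive Fibonacci numbers, and they approach the limit more slowly than any other number's convergents. A sketch:

```python
# Convergents of the golden ratio (the "most irrational number" under
# my assumption) as Fibonacci ratios, with the error at each precision.
import math

phi = (1 + math.sqrt(5)) / 2
a, b = 1, 1
for step in range(12):
    a, b = b, a + b            # next Fibonacci pair
    convergent = b / a         # F(n+1)/F(n) -> phi, slowest convergence
    print(step, convergent, abs(convergent - phi))
```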

My approach to the CMB correlation function

I am starting to look at the papers written about how the cosmic radiation is correlated with some assumed (or not) process. I may not get far, I may just say, who cares in the end. But here is my approach, the same as always:

I assume the CMB we see is the result of maximum entropy encoders via a series of phase changes during the arrival process. But the result is still a maximum entropy encoding, and is still the most orthogonal set of lines of symmetry necessary for local telescopes to collect the measurements. It is, in essence, the best set of axes available to describe the processes that delivered the stuff.

That symmetry should look like a group of Planck radiators of various orders, easily modelled. But at least it is the best orthogonal set of axes we have, and the most likely orthogonal set that can describe a series of phase changes along its path. I think Weinberg is coming to the same conclusion.

Now, the Big Bangers want that orthogonal set of axes to converge to a single radiator, crammed by space curvature. Then they want to discover the moment in which this single radiator split into multiple radiators, all of the same temperature, because that tells them the distribution of matter at some phase change in their theory. But they can do no better than to reverse the processes in units of symmetry presented by the CMB today, assuming the laws of physics trend toward maximum entropy.

It seems to me the Big Bangers are in worse shape if they find the CMB predicts a uniform expansion regardless of matter density, as that would violate a whole bunch of their laws of physics. My best advice is to take it as it comes; they are likely to show a more reasonable set of expansion phases if the laws of physics are maintained along the way.

Saturday, May 24, 2014

So, once again, this stuff about the Big Bang

The claim is that there is excess space curvature, and somewhere that space curvature is being buried, somehow, so free space is flatter and needs fewer lines of symmetry to hold units of spectra. Where is the excess space curvature going? Space curvature is potential energy; it is conserved. Where does it go? It causes each galaxy to move along a path mutually orthogonal to every other galaxy; we are all moving away from each other. To create that line of symmetry, gravity has to be curving up between galaxies and flattening out everywhere else. But that is impossible, because they tell us that light from each and every galaxy is following a flatter path, hence the red shift. Actually, just the opposite should be occurring: light arriving at our galaxy from any other galaxy should be lensed by the gravity bulge in between. They have some explaining to do.

There is no theory I know of that hypothesizes a spacetime field, sorry.

The Schwarzschild relativity metric

We have to do relativity again; the misconceptions about it are enormous. Here is what Schwarzschild wants to know:
The Schwarzschild radius (sometimes historically referred to as the gravitational radius) is the radius of a sphere such that, if all the mass of an object is compressed within that sphere, the escape speed from the surface of the sphere would equal the speed of light.
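For reference, the standard textbook line element behind that definition, with r_s the Schwarzschild radius; this is the formula whose t, theta, phi, and tau I am talking about below:

$$c^2\,d\tau^2 = \left(1-\frac{r_s}{r}\right)c^2\,dt^2 - \left(1-\frac{r_s}{r}\right)^{-1}dr^2 - r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right), \qquad r_s = \frac{2GM}{c^2}$$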

When this condition happens, the curvature of the gravitational field perpendicular to the surface of a sphere will not support one fixed quant of light spectra, so the light spectra must spread out over the surface of the sphere, or curve below it. The curvature perpendicular to the sphere surface happens because an infinitely compressible thing called M makes space curved, and the sphere is filled with M.

What does t represent? That is one unit of spectral activity. The theta and phi angles simply calculate the derivative of a unit of spherical surface. The Greek tau is simply the amount of spectra the curved space supports. c is one unit of a linear line that can hold the one unit of spectra, in flat space. c^2 is simply the power spectra of the one unit of spectra; it is variance in statistical terms. All Schwarzschild wants to know is the curvature of space at the Higgs bandwidth. The relationship between the compressible M inside the sphere and the unit of light spectra is mainly the Compton relationship. This whole episode has nothing to do with time, space or any of that other crap; it is simply finding the relationship between the Compton wavelength and the light spectra, relative to some liquid M thing that has a specified relationship to curvature.

The stretching of 'spacetime' is simply the reverse process. When some specified Auntie M causes the curvature to flatten, then how do the lines of symmetry disappear? Same spectral process in reverse. The flattening of free space curvature has nothing to do with time and distance; those units are for the sake of humans. The vacuum has no spacetime field, sorry.

But Schwarzschild has a much bigger problem. How is it that one unit of spectra, t, is indivisible, but this thing called M is not? He's got some explaining to do.

Correlation functions of the cosmic background wave distribution

I listened to a lecture by Weinberg on his work regarding the cosmic background. I say to myself: self, isn't this the same thing we are trying with the atom, finding the 'box' that makes the wave numbers?

Now I have to spend time reading some of the papers, hopefully avoiding the papers that use the fake variables and the stretching of flat space. Somebody must be working the problem as a finite sample space, after all, there aren't too many wave numbers needed to get an accurate model.

My issue is that I want the curve to correlate with a general spherical distribution of free protons organized in separable groups that approximate a Planck curve, with separation of wave numbers proportional to the general separation function of free protons we see in nearby space. It did, more or less, and I was happy. But I will read some of the work anyway, learn something.

When quarks play the game

If we look at the ratio of proton to electron mass, 1836 = 17*2*2*3*3*3, or 17*108, we are talking more spectral modes than Higgs allows. Physicists have somehow compared proton and electron by spectral modes; do not ask me how at the moment.

Let b be the most irrational number, the gyron rate in the center of the proton, then:

(1/b + b) defines both Avogadro and the high frequency band limit. So, it is OK for the gyro machine to do a power of 6, because 17*6 = 102, less than 107. If the gyro machine wants to do a 17*7, it needs another axis of symmetry where modes are separated and add. Who does this? The quarks: they quantize up separably, adjust their relative relationship and create the new axis of symmetry. They are likely the operators that generate the various L quants. They rotate the magnetic field. Look at their wave numbers, including the fraction of wave that generates charge. They have some 3*2 degrees of freedom, as long as they remain an isosceles triangle. Kinetic movement of the packed quark Nulls makes addition possible. The proton is like a computer; it computes solutions to the Laplace equation.
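The arithmetic in that paragraph is easy to check mechanically; this is just the bookkeeping, no physics assumed beyond the numbers above:

```python
# Check the proton/electron mode counting: the mass ratio rounded to
# 1836, its factorization, and the 17*n products against the 107 limit.
assert 1836 == 17 * 2 * 2 * 3 * 3 * 3 == 17 * 108

for n in range(1, 8):
    modes = 17 * n
    print(n, modes, "fits" if modes <= 107 else "needs a new axis of symmetry")
# 17*6 = 102 fits under 107; 17*7 = 119 does not, so (per the post) a
# new axis of symmetry has to come from the quarks.
```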

What about the Compton condition?
Wavelength inverse to mass seems correct to the first order, but it needs to be corrected for maximum entropy decomposition; I am looking at that again.

How do bubbles add?
They hit the Higgs and, within the precision of the most irrational number, the only minimum phase path is along the perpendicular axis, divided by the coefficient of the polynomial term. It's all about the finite number line again, like the general relativity thing. Do the math and you find everything is computed with rational fractions to the nearest one half. Any small error and some of the bubbles do the gamma, and split. The whole process will equilibrate with the motion of the quarks, I am pretty sure.

See, it's right here, the puzzle completed thanks to Wiki:
 Compton scattering is an inelastic scattering of a photon by a free charged particle, usually an electron. It results in a decrease in energy (increase in wavelength) of the photon (which may be an X-ray or gamma ray photon), called the Compton effect. Part of the energy of the photon is transferred to the recoiling electron. Inverse Compton scattering also exists, in which a charged particle transfers part of its energy to a photon.

The way I put it: all motion is a result of hitting the Higgs, whether baseballs or electrons. Both the quarks and the electron hit the Higgs, and equilibrate to balanced motion.

So does the electron do the relativistic thing and act like spread-out mass?
I am sure it does. The traditional Schrodinger method estimates where a unit of charge is because it estimates relative to the engineering approximation. This method may or may not include the two vacuum modes; we are one Nobel prize away from understanding that. We may find the packed electron stays packed, but the packing structure changes a bit, and it is just spiraling and twisting away inside the distribution, keeping charge stable with small changes to its packed structure.

A systematic way to construct the power spectra of the atom?

Probably an ordered matrix power series, and probably something for which I have no real answer. Now that I mention it, I will likely look around and borrow or steal ideas. But it is one of those things where I am not the expert. I am not sure how far I want to push this. It seems to me we are counting power spectra up from the ground, and generating an axis of symmetry for each count. Those axes should appear as the coefficients of a symmetric matrix raised to the power defined by the exponent separations. I do not yet understand it all, but it is a bit like placing the digits of a numbering system in the proper location of a spherical number line.

My spectral chart is a good starting point, it is to the right in one of the pages on this blog. The Laplace guy from ancient France worked part of the problem.
In mathematics, spherical harmonics are the angular portion of a set of solutions to Laplace's equation. Represented in a system of spherical coordinates, Laplace's spherical harmonics Y_ℓ^m are a specific set of spherical harmonics that forms an orthogonal system, first introduced by Pierre Simon de Laplace in 1782.[1]
Our boundary condition is the Higgs bandwidth around the atom, and includes all phase values generated by the polynomial. Our condition should be Gauss^2. Follow the references for iterative methods that work with sparse matrices, generated by our hyperbolic power series, would be my guess. The bubbles already computed the maximum entropy modes, so we are working the reverse Planck curve, finding the 'box' that generates the Planck curve. We are decoding, not encoding. Adding the Bosons is optional; it increases the resolution. But we should see the quark and chromodynamics modes. What about the precision of the most irrational number? That should increase with energy.
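For the decode direction, the off-the-shelf starting point is projecting onto the Y_ℓ^m basis. A minimal sketch with a stand-in field on the sphere, not the atom's actual spectrum:

```python
# Decompose a function on the sphere into Laplace's spherical harmonics,
# the orthogonal system quoted above. The sample field is an arbitrary
# stand-in; only the projection machinery is the point here.
import numpy as np
from scipy.special import sph_harm

# crude quadrature grid over the sphere
theta = np.linspace(0, 2 * np.pi, 200)          # azimuth
phi = np.linspace(0, np.pi, 100)                # polar angle
T, P = np.meshgrid(theta, phi)
dA = np.sin(P) * (theta[1] - theta[0]) * (phi[1] - phi[0])

field = np.cos(3 * P) * np.sin(2 * T)           # stand-in data on the sphere

for ell in range(4):
    for m in range(-ell, ell + 1):
        Y = sph_harm(m, ell, T, P)               # scipy: sph_harm(m, n, az, polar)
        coeff = np.sum(field * np.conj(Y) * dA)  # projection onto Y_l^m
        if abs(coeff) > 0.01:                    # keep the dominant modes
            print(ell, m, coeff)
```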
Pretty Bubbles

Friday, May 23, 2014

The Higgs mechanism, for example


quant    fraction
91    9.22769139322099E-005
92 0.1575005387
93 0.3149088006
94 0.4723170624
94 0.3702746758
95 0.212866414
96 0.0554581521
97 0.1019501097
98 0.2593583715
99 0.4167666333
100 0.268416843
101 0.1110085812
102 0.0463996806
103 0.2038079424
104 0.3612162043
104 0.4813755339
105 0.3239672721
106 0.1665590103
107 0.0091507484
108 0.1482575134
109 0.3056657752
Wiki explains:
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group SU(2) × U(1): three out of its four components would ordinarily amount to Goldstone bosons, if they were not coupled to gauge fields. However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons (W+, W− and Z), and are only observable as spin components of these weak bosons, which are now massive; while the one remaining degree of freedom becomes the Higgs boson—a new scalar particle.

In spectral theory, this is all about finding room in the spectrum for sidelobes in an undersampled system. So, you can see from my spectral chart, 107-91 is 16, a number the Higgs mechanism created, not me. Higgs left the numbers 104 and 94 for spectral power, and the band limit is 107 with a significant guard band.

Why and how?
It broke symmetry with wave 90: 90+4 = 94, 94+10 = 104. And oddly, the matching mass numbers match: 108 + 4 matches 94, and 112 + 10 matches 104; and they leave much more wave than mass, so they will not pack.

Did the bubbles do a rotation on the group structure? No, they just went to minimum phase quantization, or flew the coop, and what was left was the best minimum phase power spectrum up to the bandwidth of the vacuum, 107.

On the mass number, 122+5 makes 127, and no group of five is being packed anywhere in a 3/2 universe. The numbers ended up there because the numbers that did not fit flew the coop.

So how did charge do the 2/3 power? I am a little short on knowledge here, but the vacuum has two modes in its 2/3 curve, and one mode did a phase shift relative to the other.

The power spectrum ended up back in symmetry, if you include the proton.

Did the photon end up massless? Well, you can bet there are unpacked Nulls in the orbital slots. And you can bet that any free wave exchanges with Nulls. But the question really is, does a travelling EM wave move Nulls on net, along its forward direction? Not much, and beyond that I do not know.

This whole thing was about restoring a symmetric power spectrum in a 3/2 space. The wave number for the electron is 15+1/6 (charge asymmetry) below the 90, and the band limit is 17 above the 90. It works, and is damn near symmetric. For the neutron, it is 16 and 16, I think. If the proton is 90, then it is 5*18, the quarks taking the five as a 1+1+3, and the 18 broken into a dandy little Nyquist minimum phase sampler with modes made of 2*3*3.

What did special relativity have to do with this?
In my model, nothing, because this is already a sampled data model. In the model using Isaac's rules of grammar, general relativity simulates a sampled data system by adding terms.

What does "acquire the vacuum expectation" mean?

It means letting the bubbles hit the 107 number and bounce back. It is an impedance mismatch reflectivity, or sidelobes from undersampling in spectral theory. The Higgs mechanism describes an anti-aliasing filter, or an impedance bridge matching network.
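The undersampling picture is ordinary sampled-data theory. Here is the textbook demonstration, offered only to illustrate the analogy: a tone above the Nyquist limit folds back into the band as an alias:

```python
# Undersample a tone and watch its energy show up elsewhere in the band.
import numpy as np

f_tone = 70.0      # Hz, tone above the Nyquist limit
f_sample = 100.0   # Hz sampling rate -> Nyquist is 50 Hz
n = 1024

t = np.arange(n) / f_sample
x = np.sin(2 * np.pi * f_tone * t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))   # windowed magnitude spectrum
freqs = np.fft.rfftfreq(n, 1 / f_sample)
peak = freqs[np.argmax(spectrum)]
print("tone at", f_tone, "Hz aliases to about", peak, "Hz")   # ~30 Hz alias
```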

Why don't they use spectral theory in the Standard Model?
They do, actually. All those group matrices are done under the generic name spectral theory. Sampled data engineers use them too, for power spectra. I don't, because I am lazy.