Wednesday, April 30, 2014

Make a perfect sphere with a couple of up and down quarks and three gluons

That is the puzzle we have to solve.  The proton, and the neutron, are the gold standard; they have to be near-perfect spheres.

The gluons come in threes, but they sum to color white, so I bet there is an isosceles triangle involved here.

There has to be a spherical wave mode and two angular modes supported by the gluons.

The sphere has to be very flexible in radius for the proton to be so perfect.

Is it possible for a bunch of stupid bubbles to make a fixed isosceles triangle in perfect rotation?

Well, just take the top wave number, 91, or 13*7. What is the beat frequency? 13 - 7 = 6; make that two wave modes of 2 and 3, all in powers. There is the key, the perfect spherical multiply system: one wave counting out area, the other counting volume, and they always match.

Multiply efficiently.  The vacuum only does power series, very simply because it makes Bosons, which move.  So it is dealing with:

r^[13*7], so it makes a Boson that packs in a 13-digit number, then uses that to make a 7-digit number.  Each digit sequence is a power series.  Multiply becomes very efficient, and it can do big things at the limited speed of bubble.  But if it gets a natural interference term like 13 - 7 = 6 = 2*3, then more accuracy. It needs to cover the total accuracy of the proton, 9e-4; it has to make wave modes that construct a sphere to that accuracy in volume/area.  If the combination exists, it will naturally equilibrate to it.
So, (7+6)*7 = 7*7 + 42

What is 7*7? A seven-digit wave counting by powers of two. But that is no problem for the Bosons; that is their thing, moving about with two adjacent wave numbers, like the EM wave. That is your radius squared, a spherical wave mode.  The 42 is 7*6: the radius, with the 6 used to support quark orbits.

This is why it is so simple even a bubble can do it. The first wave mode counts up with the 7 powers; the second keeps with the quarks at 7*6, using the radius as a base.  The quarks are on the surface, counting out a 2-3 pattern in the null points of this surface wave.
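The small-number bookkeeping above is easy to verify; a minimal sketch (the wave numbers 91, 13, and 7 are this blog's, not standard physics):

```python
# Checking the arithmetic of the text: the top wave number 91 factors
# as 7 * 13, its beat 13 - 7 = 6 splits into the wave modes 2 and 3,
# and the decomposition (7 + 6) * 7 = 7*7 + 42 recovers 91.
top = 7 * 13
assert top == 91
beat = 13 - 7
assert beat == 2 * 3
assert (7 + 6) * 7 == 7 * 7 + 42 == top
print(top, beat)   # 91 6
```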

So, I need to look on my spectral chart and find two wave numbers, one looking like 7*7*j and another like 7*k, that match two adjacent null quants that make dandy bosons. Hopefully j*k has six as a factor.  Tune in tomorrow for the mystery quarkathon!

I bet there is someone out there who is going to crack this case within the next few months.

Waves in space are ellipsoidal?

Yep, along their line of travel, because they do not have an efficient multiply. The electron wave number and the magnetic wave number seem to disagree on the proper radius of a sphere.  The one almost has it in one direction, but the other has been redoing the radius transversally. It almost makes a mockery of the idea that Bosons get along.

When fermion nulls stabilize phase to their mass number, there is a slight error.  But since the phase can pack in two directions, the error is split, and the -iLog(i) for each packing is closer than any single packing can do.

A boson usually comes from adjacent mass numbers, and can stabilize phase with one of two radii, thus supporting two wave numbers.  That lets the photon travel. Bosons have higher bandwidth.

Why does 3/2 show up everywhere in physics and quant ratios?

The ratio comes from the volume of a sphere versus its surface area, log(r^3)/log(r^2), so that number will be closely related to the quant ratios you need for maximum entropy; and in signal to noise, a ratio of energies, you will see something that looks like r^3/r^2 everywhere. So maximum entropy theory will tell us that at some point we can do just as well by treating a collection of packed bubbles as one spherical bubble: namely, when the rate is log2(1+r) and r > .5, you can do better by bumping your quant rate and carrying fractions.

The exponent ratio log(r^3)/log(r^2) = 3/2 for any r. So 3/2 is the optimum quant ratio to make a digit system for spheres. That means breaking up a sphere along the radius by units of (3/2)^n will give you the minimum number of digits you need, so multiply is the most efficient.
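That exponent ratio can be checked directly; a small sketch, with nothing assumed beyond the logarithm identity itself:

```python
import math

# log(r^3) / log(r^2) = 3*log(r) / (2*log(r)) = 3/2 for any r > 1,
# independent of the radius -- the quant ratio proposed above.
for r in (1.5, 2.0, 10.0, 1e6):
    ratio = math.log(r ** 3) / math.log(r ** 2)
    assert abs(ratio - 1.5) < 1e-12
print("ratio is 3/2 for all tested radii")
```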

The topologist who discovers the maximum packing theorem for three types of bubbles will tell us that story when he gives speeches to the Swedish Banana Society.

Let's do a Bose-Einstein critical temperature.

We start with something called the Riemann zeta function, the funny swirl function in the denominator of the first term. It is this:
That number at the bottom is the sum of n^(-3/2), the first fraction, for all possible systems quantized at 3/2. That number is the largest fraction possible before the 3/2 ratio that makes a Shannon boundary. It is not a simple sum of powers because it has to allow duplicates. So the series:

e^[-log(n)*3/2] has many overlaps, the log(n), for n = 1...Nmax, not being integer multiples. However, since we know the precision we are working toward (four digits in 3.3125), we are free to scale up log(n) and work a power series. When we do this, we find some log(n), log(n+1), ..., log(n+k) are all about the same value, and we can treat them as log(N+1) and be accurate to our precision.
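For reference, the zeta value in the Bose-Einstein critical-temperature denominator can be summed directly; this sketch also recovers the four-digit coefficient 3.3125 mentioned above, which is the standard prefactor 2*pi / zeta(3/2)^(2/3):

```python
import math

# Direct partial sum of n^(-3/2), the zeta(3/2) in the denominator of
# the Bose-Einstein critical-temperature formula, plus an integral-tail
# correction of 2/sqrt(N) for the terms beyond N.
N = 1_000_000
zeta_32 = sum(n ** -1.5 for n in range(1, N + 1)) + 2 / math.sqrt(N)

# The standard T_c prefactor, which matches the 3.3125 quoted above.
coef = 2 * math.pi / zeta_32 ** (2 / 3)
print(round(zeta_32, 4), round(coef, 4))   # about 2.6124 and 3.3125
```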

Einstein wants to lower the temperature sampling rate until that quant is reached, given the normal SNR for a collection of Planck motions. Since he has n particles, he can normalize that quant to each particle, because he is doing only one stochastic subchannel and does not worry about overlap of subchannels. Orthogonality is not a problem.

Bosons and fermions

I was decoding the Bose-Einstein condensate function and that led me to the spin-statistics theorem, which then introduced the nonexistent time variable, which then appears eight times on one side of the Lorentz equations.

Now, let me confess, I have introduced a nonexistent variable, the sign of phase, which I know is fake because the vacuum does not subtract, though it can degenerate a packing. The vacuum knows about three bubbles, each with a slightly different surface area. Two of the bubbles will exchange with the middle-sized bubble, and the middle-sized bubble does not initiate exchanges. The intent of the bubbles is to minimize the total surface area around them, at the speed of bubble, which is constant. Instead of the spin-statistics theorem we need the maximum packing theory, for which someone will get a Swedish banana when they prove it.

Here is how bubbles make fermions and bosons.  Fermions result when there are enough nulls that small and large bubbles penetrate and reach a point where no exchange reduces surface area. Excess phase then leaves the region.  The maximum packing theorem will tell us that the configuration comprises large bubbles spiraling in one direction and small bubbles spiraling in the other.  Hence the spin thing: two stabilized 'phases' can have opposite spirals relative to any other fermion.

Bosons result when too many small and large bubbles penetrate and reach a point where they oversample the null bubble, constantly moving the little nulls around. But there are no Nulls in the region to create a stable fermion.

There, I think that does it.  It is sampling theory. When phase bubbles slightly oversample, causality cannot be maintained at the speed of bubble. The way to see this is to restructure the statistics functions for the fermions and bosons: you will see the fermions have SNR closer to one, the bosons have SNR closer to 1/2, and this is the vacuum carrying a fractional component of entropy. Fermions pack such that the -Log(i) is always within one, with maximum deviation less than 1/2; Bosons have -log(i) deviating by more than 3/2.  The system continues to make bosons, at the light quantization ratio, until it reaches a null-phase quant that realizes a fermion solution. The relativity thing is embedded in the oversampling rate of bosons.

OK, then what is the Bose-Einstein condensate?  The black body is a relationship that explains the steady-state energy balance. Bosons phase in, Bosons out, at the Boson quantization rate, the fractions. If you quit sending bosons in, then the only boson remaining is the boson that maintains the half-integer error between adjacent fermions. The ground-state fermion remains; otherwise the vacuum is nonexistent and the world disappears, or fermions disintegrate.

Bosons are not force carriers but entropy managers. They allow the vacuum to find close Shannon matches and thus create a Euclidean system where the speed of bubble is constant, with slight phase shift. The multiply is available, and the total number of exchanges to minimize surface area is minimized.

Thus, on the macroscopic scale, quantum mechanics and classical physics converge at the classical limit. Nevertheless, it is impossible, as Planck discovered, to explain some phenomena without accepting the fact that action is quantized.

Planck was on to something, the universe is a sequence of actions. Why we introduced time and space I have no idea.

Tuesday, April 29, 2014

The Nat and temperature

The nat is the term I should be using generally for natural-log digits, according to physics standards. Temperature should be bandwidth, and subunits of temperature are the powers of your digits.

I am looking through gas laws and trying to eliminate units.

 Here we have the Boltzmann constant removed, yielding Planck temperature.   Great, the physicists are fixing stuff up for me!

Compton Scattering

Light must behave as if it consists of particles to explain the low-intensity Compton scattering. Compton's experiment convinced physicists that light can behave as a stream of particle-like objects (quanta) whose energy is proportional to the frequency.
Not either-or, but both-and. The key is that the vacuum has both nulls and phase, and for matter or wave, both are involved; the difference is which one is the whole part and which is the fractional. Waves count whole phase quants and partial null quants; mass counts whole null quants and partial phase quants.

Now, one might say that I just played another trick, converting everything to sample space.  But it matters not, there is an equivalence between sample space and elastic space, as long as the sample space is smaller than any effect observable.

Folks who believe in elastic space have a very serious problem in deriving an elastic computation of the sine function.  Once you believe in elastic space, you will spend your life trying to derive an elastic integral for that function. String theory gets into this problem, as does general relativity.  Each one has to add dimensionality to some kernel as physicists get more accurate.  Quantum theory has a simple solution: as physicists get more accurate, it must be a better set of quants being executed by nature. So they just rescale. All sampled-data space (and associated group theory) does is add a uniform method of rescaling to normalize potential and kinetic energy separation. We get signal to noise instead of a Hamiltonian; and that includes tunneling and just about everything else.

Black Holes

  Neutron stars are not a problem, they are baryonic matter.  Black Holes, with curvature beyond the Planck limit, should not exist. The only thing inside a Black Hole should be a 'Higgs' standing wave, for which I use the wave number 107, beyond the proton wave number of 91. There would be Nulls at the corresponding mass ratio, but not enough to form matter, and the Higgs standing wave is still finite in curvature.

Quasars would be where we find proton production, at the edge of the event horizon, and this would be where Higgs does its job. The center source of the beam from a quasar should be about 1364 times the Compton frequency of a proton, and the spectrum of light should be all the peaks in my spectral chart, up to the proton, and the only matter exiting would be baryonic. Quasars should be producing neutrons and protons, mainly nulls pulled into the system by light. This:
In this formula, ΔV is the rms velocity of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines; 
Should end up as my spectral chart, on the page to the right of this blog.
The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole that will cause the matter to collect into an accretion disc.
Correct, no matter is in the black hole, only a standing Higgs wave.
However, this assumes the quasar is radiating energy in all directions, but the active galactic nucleus is believed to be radiating preferentially in the direction of its jet. 
Correct again, and the only matter in that jet is baryonic.  This is an ongoing process, not something from the big bang. If the Milky Way were emitting this beam, it would be a finely separated spectrum, with little or no beam spread until well above the event horizon. We, being at right angles, would not even see it.  Consider that we are viewing the universe with a microscope, but do not know it; we interpret everything as being bigger, redder, and more spread out than it really is.
Quasars were much more common in the early universe. This discovery by Maarten Schmidt in 1967 was early strong evidence against the Steady State cosmology of Fred Hoyle, and in favor of the Big Bang cosmology.

Partly correct.  The universe has gone through untold cycles, each time the proton becoming more accurate, and each time the number of Quasars needed being reduced.
The release of gravitational energy[13] by matter falling towards a massive black hole is the only process known that can produce such high power continuously.

A proton gradient at the edge of the gravitational field fine-tunes the gravitational gradient.  These proton gradients form all the way out to flat regions of space, where the protons dissipate to form cosmic radiation. Ask yourself: do we get more bubbles, the same bubbles recycled, or the same bubbles organized with more complexity? I dunno.

Einstein crosses and rings

In sampled-data systems these just become the beat frequency of the wave front when it is undersampled because of significant phase shift. The radius of curvature along the minimum phase path is near the frequency of the light. The sample rate remains the same, but the sample phase changes along the curvature. The vacuum simply falls behind and cannot form the wave front before the next wave change appears.  Spatially, two wave fronts end up getting grouped.

So relativity simply falls out from sampling theory, much less exotic. The way to compare the two methods is to think of sampled data systems as doing numerical summations, rather than trying to stretch out some continuous variable of integration.

Is this fair, or am I cheating the math model?  Even if you believe the universe is infinitely stretchable, the observer's instruments are not. So one can always model the process as a discrete system, up to the SNR of the user's instrument.  So it is perfectly fair.

Schwarzschild radius is the same thing. The gravity density changes the sampling phase along the gradient, and the sampling phase transverse simply reforms the wave transverse to gravity; it goes nowhere.

Monday, April 28, 2014

Planck accuracy and the Proton as a unit of account

The physicists claim that Planck is accurate, to the edge of the atom, to 9e-6, roughly.  The proton vacuum has enough capacity up to 9e-5.  So the proton, in the atom, is using all of its precision to manage the atom.

But out of the atom, as a free proton, it gets all of that precision back. It is likely using all of that precision to maintain gravity and energy balance across space. This can only happen if the proton has fine-tuned the impedance match between the atom and free space. The central principle is that the proton tries to view the world as the standard atom.

This could not have happened all at once, in one Big Bang. This has to be an extremely fine tuned system happening over many cycles, each cycle going from 1 - 13 billion years. We should be able to compute the number of cycles.

Take the cosmic background chart.  Let the blue be signal and the rest be noise.  Break it up into successively smaller units, being sure to compute energy ratio, not quant size.  Then find the quant ratios as units of space on that chart, until the quantization noise is greater than the information added by the next subdivision. Or, just find the Nyquist spatial frequency.

But, unless the vacuum finds more bubbles, it has reached its limit and may have been on some untold number of cycles, never gaining more accuracy.

Microwave dots from space

I consider the possibility that the background process is reversed from what we think.

The peaks could be the tie points of space, places where the gradient is very flat; protons go there to die, their density resonant at 160 GHz.  The flat spots in space return microwave, which keeps backward pressure on the upward flow of Protons, maintaining a balance. The picture is likely from 13 billion years ago.  Such a process would simply be, yet again, the vacuum just equilibrating and finding group structure.

A gradient in curvature, maintained by proton flow, would ultimately go to zero at the tie points. The blue spots in the picture are regions of active stars and galaxies.  If a gradient were maintained around active stars, the inhabitants would think the world is expanding, because they expect a flat field and instead get a field focused toward their telescopes. And if we misinterpret the relationship between curvature and proton decay, we would be fooled.

The flat spots have available energy the protons extract, so we would expect an energy balance as these spots orbit over the balance cycle of the universe. Yet again, this is the ability of the vacuum to maintain both wave and partial matter, on a larger scale.

Does sparse vacuum really work?

I keep thinking, as long as the vacuum gradient is within the precision of the proton, it will not notice if the vacuum bubbles are bigger. In a sampled-data system, the relativity thing is handled because everything is in units of relative samples. The proton could bloom in size, but relative to the vacuum of which it is made, everything is stable.

What does cause proton instability is the lack of the proper gradient, not the sparsity of space.  My assumption here is that the stability of the proton is due to its ability to adapt to the equilibrated gradient of space, which the neutron cannot do.  The adaptation process is the result of free protons creating that gradient by stabilizing gravity, to the level of proton precision, out to the edge of space.

 There should be a region of high noise, much greater than nominal vacuum noise, where protons lose stability and dissipate. Can the proton, at the edge of space, cause more vacuum energy than that which created it? The assumption is that the proton was created in a phase gradient to which it was suited, and the energy being released is in a phase gradient that is suddenly unbalanced and has the wrong slope. It would require a sudden, explosive dissipation.

The process would be the reverse of the big bang. But it would only cause more noise in the vacuum at the fringes, where the vacuum density gradient changed abruptly, and that should dissipate at light speed, except to the point that phase has to disentangle itself from the cloud of Nulls. If you had other protons nearby, on the edge of instability, you could get a chain reaction. But I do not see how this pulls excess energy from the vacuum, even though its noise level rises.

Can we assume an infinite supply of vacuum beyond the known universe? 

Note on relativity: This theory has only one relative moment, the density of the bubbles from the source to the user relative to the destination, which is the number of samples needed to make the trip. That would be total samples for anything that the observer thinks affects the measurement.

I am still not sure how to make a proton

 If protons recirculate, then they have to be recreated on an ongoing basis. It still seems that at some point we need the Higgs mechanism.  But what is the probability that the vacuum randomly has a phase imbalance for a Higgs to form? In terms of energy, they count that number down to the electron, and it is too big to happen at random. Maybe quasars do it.  There may be just enough of these to keep the supply going. But where does the excess energy come from?

Going down the quant chain, we would look for wave/null pairs that make a good Euclidean sphere, sum them up, and see if they are enough to flip the bit at wave 90 to 91. It should be easier than we think, because the precision of the proton makes it endothermic, by 20 orders. This is the huge conversion of kinetic to potential at the magic integer.

But, it does take positive energy; one needs to find a net energy source for the universe.

The background radiation

Imagine if this chart were the density of free protons, going right to left, from the atom to the edge of the universe.  At the edge of the universe, there is no vacuum density to support the proton, so it dissipates. What would the light wave numbers look like on the way back? Light will not readily form, right away, but as soon as the protons are near instability, we get light emission returning, the returning radiation matching the free proton density on the way down.

Why would free protons be migrating outward? They help stabilize far-field gravity?  How would a bunch of, not quite so stupid, protons figure that out? They didn't; they just happened to match their returning light and phase-locked in one of those usual equilibriums the vacuum does.

This one bothers me

E = hν
In 1905 the value (E), the energy of a charged atomic oscillator, was theoretically associated with the energy of the electromagnetic wave itself, representing the minimum amount of energy required to form an electromagnetic field (a "quantum"). Further investigation of quanta revealed behaviour associated with an independent unit ("particle") as opposed to an electromagnetic wave and was eventually given the term photon.

Great, the atomic orbitals carry energy and emit light.  And that means the energy of the orbital is proportional to the frequency of emitted light. It is not incorrect; it is just that the physicist is letting the atom compute the energy, and the energy is not decomposed.  The orbital of the atom is not devoid of free nulls, in my theory. The kinetic energy is a partial fraction of free nulls and wave motion.  So the physicist can easily be confused if he cannot decompose these two and back out the virtual Compton wave/null ratio for the orbit. Later, when the light hits a region of space where the original conditions are not met, he has to do the General Relativity thing, to decompose and renormalize.

The other term inside the Planck joules/sec is the de Broglie relation, p*wavelength = Planck. So, real energy should be:


Compton mass times Compton frequency squared gives the mass-energy equivalence at the speed of light.
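As a numerical check of this mass-energy bookkeeping, here is a sketch computing the proton's Compton frequency from the standard relation m*c^2 = h*nu_C, using CODATA constants:

```python
# Mass-energy equivalence written as "Compton mass times Compton
# frequency": m*c^2 = h * nu_C, so nu_C = m*c^2 / h.
h   = 6.62607015e-34      # Planck constant, J s
c   = 2.99792458e8        # speed of light, m/s
m_p = 1.67262192369e-27   # proton mass, kg

nu_C = m_p * c**2 / h     # proton Compton frequency, Hz
assert abs(h * nu_C - m_p * c**2) < 1e-25   # E = h*nu recovers m*c^2
print(f"{nu_C:.4e}")      # about 2.2687e+23 Hz
```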

When the wave leaves the atom, it is still pushing and pulling null, not enough to make matter, but enough to cause Signal-to-Noise issues relative to free space. Light is still constant in free space, but without the other term, we can only approximate the wave spread. So light curves in gravity; it makes a different entropy relation.  It is not either particle or wave; it is particle and fractional wave, or wave and fractional particle. That is why we get action at a distance. That is why we do general relativity, that is why there is inherent noise in the vacuum, and that is why quantum mechanics need not be perfectly Euclidean in its groups. That is why the proton is so flexible: it can alter, within some bound, its embedded phase and exposed phase, and that is why you cannot find the mass in the quark. And that is why we have Heisenberg uncertainty. Also, look how Einstein added the second term to Planck's model of the vacuum energy.

Sunday, April 27, 2014

Multiply is simply efficient

It all boils down to the simple fact that the vacuum of space can conserve sample space if it can multiply accurately. So whenever two potential barriers arise that block off a Euclidean group, the vacuum of space settles in.  We end up with a step-function world.

There seems to be a limit, from the noise of the universe fringe to the Higgs wave.  I dunno why. Likely we reach a natural point where divide becomes more efficient than the next level up, so the universe cycles.

Time is an operation count

Time is the number of calculations the vacuum of a wristwatch and the vacuum of free space need to move the second hand one tick.  It is always a constant. We can scale down and make that operation count meet the limits of the noise induced by kinetic energy. So, in the Lorentz transformation, pick a decent inaccuracy, and where time appears (eight times in one equation!), just replace it with an integer constant.  Within that inaccuracy, time remains the same.

That is all Lorentz is doing, scaling down to make time equivalent. The speed of light appears because there is the Nyquist limit: you cannot exceed the density of free space in velocity; the vacuum 'multiply' count exceeds the vacuum 'add' count and free-space arithmetic no longer works. Wristwatches require many more operations per second than does an atomic clock; they have more vacuum samples involved in their operation. So the vacuum basically starts rounding off to larger units.

So, take the Lorentz equation, replace t everywhere with N, and it all cancels on the right side; on the left you are left with something like N*prime, where prime is some operator.  All the other ratios on the right are of the form distance relative to speed-of-light distance, which is nothing more than the density of your wristwatch, in units of vacuum bubbles, relative to the maximum allowable density of the vacuum.  The prime, then, is the position of the decimal point for N operation counts.  The vacuum bubbles are rearranged to make your wristwatch tick, but some of the operations will end up as fractions, otherwise called kinetic energy, and your wristwatch, by definition, only measures whole numbers of ticks.

Someone with enterprise put the concept into a graph made of hyperbolic lines of equilibrium. What is a hyperbola?
 Here, a hyperbola.  In the first form, there are two symmetrical parts: the one has x and the other -x in the exponent.  The -x are the fractions, the x the whole numbers. In the diagram, that Greek-looking angle is 1/2 the relative density of your wristwatch to the Planck density of the vacuum. That comes directly from the Lorentz transformation, where the ratio is converted to an angle, and the hyperbolic identities work nicely.
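The two symmetrical parts can be checked numerically; a sketch of the standard hyperbolic identities behind the Lorentz diagram (reading the angle as a bubble-density ratio is this blog's interpretation, not the standard one):

```python
import math

# cosh packs e^x and e^(-x) added together, sinh packs them
# subtracted; the Lorentz identity cosh^2 - sinh^2 = 1 holds for
# any rapidity angle theta.
for theta in (0.1, 0.5, 1.0, 2.0):
    cosh = (math.exp(theta) + math.exp(-theta)) / 2
    sinh = (math.exp(theta) - math.exp(-theta)) / 2
    assert abs(cosh ** 2 - sinh ** 2 - 1.0) < 1e-9
    beta = sinh / cosh        # v/c = tanh(theta)
    assert 0 < beta < 1       # always below the light-speed limit
print("hyperbolic identities check out")
```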

 The vacuum always operates to the same Signal to Noise. So it sets the decimal point such that a whole number at light speed = log2(1+SNR).

The ion cyclotron

An electric excitation signal having a frequency f will therefore resonate with ions having a mass-to-charge ratio m/z given by m/z = B/(2πf), for a given magnetic field strength B.
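As a worked example of that cyclotron relation, here is the resonance frequency for a bare proton; the 1 tesla field is an arbitrary illustrative choice:

```python
import math

# Ion-cyclotron resonance for a bare proton, using the standard
# relation f = qB / (2*pi*m).
q   = 1.602176634e-19     # elementary charge, C
m_p = 1.67262192369e-27   # proton mass, kg
B   = 1.0                 # magnetic field, T (illustrative)

f = q * B / (2 * math.pi * m_p)
print(f"{f/1e6:.2f} MHz")   # about 15.25 MHz
```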

Not if m absorbs and releases phase imbalance. Put a proton in the thing, and assume the proton has a large, stable mass of nulls. What happens? Much of the exposed proton charge settles into the proton, happily content to take a free ride.

Once again, the free space impedance is not the issue, the proton mass has its own impedance, an ability to absorb a phase imbalance and hold it. The physicist thinks the mass is separate, so he is in effect measuring a quant under the assumption that the SNR of his signal is constant.  The proton just drops SNR down by absorbing phase, the physicist measures smaller values.  

Correct using relativity, and the conclusion is that the mass has gotten heavier than it should; but in fact the field action B*z around the cyclotron has gotten weaker than it should.

Measuring the mass of the proton, rather than theoretically calculating it, is a difficult problem that I am thinking on.  Anything that stable will have a huge margin for adapting to altered phase environments. If you smash a proton, much of its mass moves away, pushed along by the gluon wave motion. This is a difficult issue.

My best guess, right now, would be to construct the equations of the muon atom and the hydrogen, together, using an assumption of charge/mass ratio between the two. The deviation in size measured is an indication of the ability of the proton to absorb and release phase imbalance, and is likely the best indicator of phase/null for the proton.

The asymmetry of that quark matrix is .003, max. Square that and get .00009; the inverse is the capacity to hold imbalance, to change size, actually.  I get 9.3e-5 at the peak of the wave/null spectrum, the same number. This is correct: the proton is balanced wave and null, so it has a large capacity to change its mass/phase ratio.  The proton can actually track the curvature of space.

Saturday, April 26, 2014

Cycle time of the universe

OK, I am going to be explicit, as a lesson to the mathematicians.  Please mathematicians, butt into physics, be insistent.

I see two estimates for the expanse of the universe, 40 billion and 80 billion. I see one estimate for the Big Bang time, 13 billion.

The ratios are Pi and 2*Pi.  Please note, it took our little dumb vacuum bubble many cycles of chaos to compute these numbers.

The proton has only one percent Nulls? Not any more.

When physicists measure the mass of the high-order particles, they measure momentum, M*V, so one has to work backwards to get the phase vs null. They say the proton is only one percent matter (null)! That cannot be; it has the same phase imbalance as the electron, which would make the electron have negative mass!

 The mixing angle determines the phase/null ratio for the proton and the electron. Their numbers are right, except they are mixing phase and null, and using the same space impedance yet again, which becomes a common term. That is why they do so much work, I think. Their Compton pairs are shifted, but otherwise they are fine. I can correct that.

Charge ratios are following the 3/2 line, all the way down. It is what sets the EM impedance. Their mixing angle:
The 2004 best estimate of sin²θW, at Q = 91.2 GeV/c, in the MS scheme is 0.23120 ± 0.00015
Comes to 30 degrees, or 180/6. That is the number I got by using the expected null ratio between the adjacent magnetic-electron null pairs. The atom just bumped the magnetic null by 3/2 and filled one half of that null ratio with one half of the positive phase, thus exposing a one-sixth negative charge.
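For reference, converting the quoted sin²θW to an angle directly gives a value close to, though not exactly, 30 degrees:

```python
import math

# theta_W = arcsin(sqrt(sin^2 theta_W)) for the quoted 2004 estimate.
sin2 = 0.23120
theta_W = math.degrees(math.asin(math.sqrt(sin2)))
print(round(theta_W, 2))   # 28.74, close to the 30 degrees used here
```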

I never bothered to fix this because the physicists got all their dimensionality right, and I assumed they knew what Compton pairing was all about. So work backwards, and the proton phase ratio is likely 1/1836th of that. It is no wonder Heisenberg was so uncertain.

Otherwise, what is going on up at these levels is that phase has run out of vacuum density to maintain light speed, so there is no chance of phase running off with higher levels of null, and all the multiples show up (2,2,3,3,3), etc.

Those multiples are a result of a few cycles of the universe. Those multiples stabilized, and humans created a number system out of them. It was a gift from the vacuum to us; we did not invent the number system. The group theorists should be ashamed for not noticing this and correcting the physicists. The group theorists got the limit on light speed simply from Euclidean considerations; they should have gotten the theory of counting long ago.

So, go finish the job, what say? You can have the Swedish Banana.

General relativity and magnetic confusion

In the post on tallying up the quant chain I mentioned some problems with the magnetic.  There is no a priori reason to think the magnetic lines of phase around the Sun are the same order as those around the atom.  We call them the same only because of bias in the way they were discovered, but the free electrons in the Sun and the spinning atoms in the planets might both be entirely different orders than the magnetic dipole of the proton. Only group theory can sort that out.

We need to compute all the 'general relativity' smoothers along the quant chain, including free protons in space.  But it is not too difficult in a sampled data system. Here is the algorithm:

1) Take your best guess at the lifetime of the proton, then take the ratio of Big Bang time to Universe span, divide that out to get cycle time. Then you get the phase gradient caused by free protons in space. Free protons going up and out are signal, vacuum going back down is noise. The proton density, going as r cubed, will phase lock to the reverse flow.

2) For the entire quant chain, from the fringes to the Higgs, do the following
  a) Find the Shannon matches for order, say four.  That is signal energy.
  b) Compute the noise between Shannon barriers as kinetic energy.
  c) Add in the proton phase gradient, and see if the SNR is smooth thru the path. Then repeat, adding in the next Shannon match, and look again.

At some point the SNR, Shannon matches, proton lifetime, standard model, and universe cycle period will all match pretty well. Then you get a Swedish banana.
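Step 2a can at least be sketched. Nothing below is standard physics code; `shannon_mismatch` and `best_matches` are stand-ins for my reading of a "Shannon match": a null order n on the (3/2) tower whose nearest wave order w on the golden-ratio tower lands almost exactly on top of it.

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # wave quantization rate used throughout these posts
NULL_RATE = 3 / 2             # null (mass) quantization rate

def shannon_mismatch(null_order, wave_order):
    """Log-space gap between (3/2)^null_order and PHI^wave_order.
    A small gap is a 'Shannon match'; the residual is treated as noise."""
    return abs(null_order * math.log(NULL_RATE) - wave_order * math.log(PHI))

def best_matches(max_null, count):
    """Step 2a: the `count` best (null, wave) Shannon matches up the chain."""
    pairs = []
    for n in range(1, max_null + 1):
        w = round(n * math.log(NULL_RATE) / math.log(PHI))  # nearest wave order
        pairs.append((shannon_mismatch(n, w), n, w))
    return sorted(pairs)[:count]

# Order four, over a chain running up to null order 108 (the Higgs limit):
for err, n, w in best_matches(108, 4):
    print(f"null {n:3d}  wave {w:3d}  mismatch {err:.2e}")
```

The search lands on the same spots quoted in these posts: 108/91 by a wide margin, then 19/16 and 89/75.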

The accuracy of the vacuum, including the vacuum that makes up our mathematicians and physicists, is quite amazing. I doubt we all got this accurate and are still looking at aftershocks. This is not the first time through.

Proton lifetime was a puzzler for me too

Current work on GUT suggests the existence of another force-carrier particle that causes the proton to decay. Such decays are extremely rare; a proton's lifetime is more than 10^32 years.
Then I realized that nothing exists except to the extent the vacuum has the SNR. Everything is vacuum; nothing escapes the necessity of SNR to maintain quants. SNR is a ratio: we need either a strong signal or less noise. Physicists focus on the less noise, but lack of a strong signal works to undo the proton. Signal is ultimately the measure of space curvature. If the proton cannot see space curvature, it will go look for it, in parts. The proton charge is impedance matched to the necessary space curvature; it is an antenna for phase imbalance in the vacuum. Without the curvature, it comes apart.

What kind of causality made space curvature match proton charge? Only one thing, the cyclic behavior of the universe. If we think the proton lasts 10^20 years, and the universe is 10^10 years big, then it took the universe 10^10 cycles to get the curvature to match. But it could have been many simultaneous cycles in the beginning, the universe slowly becoming connected.

Compare the rate of space expansion you think is happening, with the precision of the proton.

Multi dimensional theory not needed

In theoretical physics, M-theory is an extension of string theory in which 11 dimensions of spacetime are identified as 7 higher-dimensions plus the 4 common dimensions (11D st = 7 hd + 4D). Proponents believe that the 11-dimensional theory unites all five 10-dimensional string theories (10D st = 6 hd + 4D) and supersedes them. Though a full description of the theory is not known, the low-entropy dynamics are known to be supergravity interacting with 2- and 5-dimensional membranes.
Theories like this, and like general relativity, result from the starting-point concept. To start from scratch, the universe needs complete knowledge. If you assume the universe can cycle through until it balances, then you only need one dimension and two quantization rates. We still need bubbles, and we still ponder where they come from, but that is a separate story.

Friday, April 25, 2014

Let's do a quant tally, with suggestions and uncertainty

Of the proton limit of 108 null ratios:

36 or 19? Proton
15 or 18 ? for the atomic orbitals
18 or 15 ? for the range of magnetic dipole
20 for gravity out to the galaxy level

89 Total, with wave number 75

That leaves 19 for vacuum noise and protons in free space.
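The arithmetic of the tally, taking the first alternative on each line:

```python
# Quant tally from the list above, first alternative on each line
tally = {
    "proton": 36,
    "atomic orbitals": 15,
    "magnetic dipole": 18,
    "gravity to galaxy level": 20,
}
total = sum(tally.values())
print(total, 108 - total)  # 89 used, 19 left for vacuum noise and free protons
```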

On the electron/magnetic: the mass of the proton puts the electron at 18, but I do not think we count embedded phase, so I have it at 15. Otherwise the two should be swapped. I still have the magnetic below the electron so it can couple with gravity. But there is no reason to think that the magnetic we have was the same as the magnetic we might have had.

Physics may still have some confusion on that. Here is the catch: there is no reason to think the magnetic from the Sun and planets is the same order as the atomic magnetic. If the electron-generating motion is computed using just the presumed space impedance, then the numbers need to be rechecked. The way to check things is to have our computer go thru the chain and compute the Shannon matches for each run of the quants, and from that compute SNR over the whole chain. The one you want is the one where SNR is uniform over the chain.

I gave all the quarks and gluons 36, but that might just be 19, because that number puts us at 89, the spot to which Planck measures, and seems more likely. But that left three spots for quarks, and they would need a hell of a lot of color to make that six. That would put the electron at 70, 19 down, the next sweet spot. The next spot down then becomes 38, 32 spots below, way too much for the magnetic, I think.

I am sort of approximating at 89 because that is a point where light and mass are awfully close. I also notice that Planck divides between null and wave into orders 89, 75, and I would expect Planck to measure out to the accuracy of gravity, or the proton, but not farther. The 19 comes about because that is another hot spot. The Shannon error at 19 and 89, that is, is 9.3E-4, the same error I get at null 127 (having wave 107), which I consider to be the location of the Higgs wave. If gravity counts down from 89, and that is the Planck limit, then the residual error, in units of bubbles, is remarkably low. I hesitate to give the number because it is so low that I don't believe it, but it is in the range of Mercury's orbital error, as a percent.

The protons are cycling thru the universe and keeping these quants accurate. When the protons reach the fringe, there is not enough SNR to maintain them and they dissipate, keeping the volume of space cycling down toward the center.

What happens when the gradient exceeds the balanced curvature at some point in space? Gravity couples with the magnetic to make a Shannon match. All the free nulls in the magnetic layer get quantized, the atomic layer is crushed, we compress and make a surplus of protons, restoring balance in curvature.

The sample rate of light comes out as that ratio simply because primes block the Shannon groupings, and the primes are 1, 2, 3, 5, 7, 13; 7*13 = 91, the number of light quants. And the prime 11 was reserved for God and a bunch of PhD students. But my guess is that free protons got the prime 13, and the 6 remaining are vacuum noise. If you want to do this with GR, you are going to need some 13 of those +--+++-- marks you put on your tensors.

Protons climb curves too steep and roll down curves too shallow, keeping their own density matched to necessary space curvature.

Seriously, go to a sampled data system, use the theory of counting up.

John Moffat, can I take a vacation now?

University of Toronto

Am I done? Is this good enough? The vacuum is not going to execute all those equations behind you. It is just going to find the best Taylor series expansions it can, at the sample rate of light.

Scalar–tensor–vector gravity theory

Oh no, not another one!

These modifications to accommodate quantum error will continue up to the Planck volume of the vacuum. Each correction will modify the previous to account for a split in the quantum number for very low quantization rates. The vacuum, at the speed of light, simply creates a new quantum set that outperforms the previous, until the ultimate SNR is reached. Wiki, yet again:

far from a source gravity is stronger than the Newtonian prediction, but at shorter distances, it is counteracted by a repulsive fifth force due to the vector field.

Translation: when the gradient of space drops closer to the SNR, the vacuum will utilize two quantum numbers and compute its Taylor series approximations with a little bit of error correction. One quant (3/2)^j, with matching wave number n, goes to two quants, (3/2)^k and (3/2)^(k-1), where k + (k-1) = j, and (1/2 + sqrt(5)/2)^(n-1) fits between them. The vacuum, at the sample rate of light, simply found a better pair of mass quants.
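A literal reading of that splitting rule can be checked by brute force. This is a sketch of one reading, not established code: split (3/2)^j into exponents k and k-1 with k + (k-1) = j, then ask whether some golden-ratio power lands in the gap between them. Since one (3/2) step (log about 0.405) is narrower than one golden-ratio step (log about 0.481), a fitting wave exponent only sometimes exists:

```python
from math import log, sqrt

phi = 0.5 + sqrt(5) / 2

def split(j):
    """Split the null quant (3/2)^j into the pair (3/2)^k, (3/2)^(k-1),
    with k + (k - 1) = j (so j must be odd)."""
    k = (j + 1) // 2
    return k, k - 1

def wave_in_gap(k):
    """Return a wave exponent m with (3/2)^(k-1) < phi^m < (3/2)^k, if any."""
    lo, hi = (k - 1) * log(1.5), k * log(1.5)
    m = int(hi / log(phi))
    return m if lo < m * log(phi) < hi else None

# Try the null orders that recur in these posts
for j in (19, 89, 107):
    k, k1 = split(j)
    print(j, (k, k1), wave_in_gap(k))
```

For j = 19 and 107 a wave exponent fits the gap; for j = 89 none does, so whatever the vacuum is doing there, it is not this literal version of the rule.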

Thus those little devil vacuum bubbles extend the quant chain down, retain a multiply function and are still accurate to their SNR. The vacuum creates a few more packed nulls and computes a new gradient that is relatively more accurate. It is likely getting a bit more help from protons out there in the fringes.

Causal Entropic Force!

From the New Yorker.
A Grand Unified Theory of Everything
The paper's central notion begins with the claim that there is a physical principle, called "causal entropic forces," that drives a physical system toward a state that maximizes its options for future change. For example, a particle inside a rectangular box will move to the center rather than to the side, because once it is at the center it has the option of moving in any direction. Moreover, argues the paper, physical systems governed by causal entropic forces exhibit intelligent behavior. Therefore, the principle of causal entropic forces will be a valuable, indeed revolutionary, tool in developing A.I. systems.

The inventors have a start-up company to capitalize on the idea. They call themselves Entropica, what a great name. The New Yorker authors are non-believers because they notice too many stochastic algebras in the world, where energy seems probabilistic on a small scale.

What the Entropica start-up discovered was that stochastic algebras appear between Shannon algebras, the potential energy barriers. That idea is not new, as my readers have noticed over the years. On the web, the search engines need to organize data along their Shannon points, as linguists are learning to do. Geneticists are starting this as well, I have noticed. It's the next big thing.

I am starting to believe in Bubble cycles

What convinces me is the extraordinary simplicity of physics under the Theory of Counting Up. The vacuum must, at some point, get out of balance when doing everything with such simplicity. The bubbles have to reach the Higgs limit, where the next group is bigger than the density of bubbles and they have no subtraction operation.

I dunno how this happens; I must think like a proton, a most difficult task, having trained myself to keep an empty head of vacuum. But protons likely drift to the fringes, as the centers reach their Higgs.

Protons show a positive charge because they bury negative phase; it is the opposite of what we think. They would disintegrate rapidly at the fringe where large positive bubbles dominate, causing a recycling, somehow.

To us it would look like interstellar space is expanding, but what really happens is a slight density gradient, the volume of space constantly rebalancing by moving protons from the center out. The circulation time is determined by the density gradient which supports the proton.

We would be less than rectangular on the larger scale, the lack of perfect symmetry explained, and the vacuum always having just enough SNR to make the round trip. Protons would disintegrate in interstellar space at just that radius where the balance of {-1, 0, +1} bubbles drops below the Planck volume. That SNR must include the density of protons needed for the circulation.

Since we can calculate the density of the vacuum, we can compute the SNR needed to climb the quant chain and thus compute the difference in volume size of the three bubbles, and then compute the size of the universe.

The size of the universe is that point at which the little bubbles do not have the precision to compute sin(theta) to the third power in their Taylor series. That precision should match the precision of the Higgs number:


And physicists would add the isotropic adjustment term to the Einstein equation, thus making the world static and slightly curved. The cycle time would be relative to the Higgs limit, so if energy was lost, the cycle time decreases, but who would notice? We are not looking at galaxies receding at some velocity, but at galaxies at some curvature times cycle time.

How would this effect show up in space?
The red shift in space would match the red shift in the sin(theta) error, and when those match, the velocity of space expansion equals the velocity of galaxy compaction. The cycle is connected. Proton recirculation is continually replacing the volume lost from compaction with the volume gained from expansion.

The universe need only have enough supernova explosions to keep the proton balance constant. In any region of space where the gradient exceeds the cycle time, matter packs to the Higgs limit and new protons are made. Regions where the gradient is too small make small matter, which dissipates toward compact regions.

 So, physicists are correct about the Big Bang time, just incorrect in its interpretation, it is the cycle time. The universe would look exactly like it began at an instant, having no way to differentiate.

I will take a wild guess. The apparent size of the universe and the apparent big bang distance are related by Pi.

No, it's strictly a flooding problem

String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations.[13]
There is a reason the world looks 3/2: the summation problem. Two groups are unique in the most simple form when they 1) are not zero, 2) differ in one dimension only, and 3) differ by the minimal amount. It's about a simple, stupid grouping process.

If something is grouping, then something is in a race to do it as simply as possible. The three postulates above produce addition along one dimension, and the two things added have values of one and two; all the time, in all places. The world looks like 1 + 2 = 3, so each next group will be 3/2 times as large as the previous group.

The everywhere, every-time result comes about because the world is made of samplers, all at the same rate. Samplers at the same rate conserve addition operations, and thus make the most groups with the least energy. One has to empty one's mind and think like a vacuum; that is my expertise.

It's the Theory of Counting Things Up

Thursday, April 24, 2014

The Weinberg angle or weak mixing angle

My theory of counting has to explain this. I tell you the truth, I would just go thru the various matrices until I found some combination of 2, 2, 3, 3, 7, 11, in some form where I would expect a ratio of nulls or waves, between the electron, the photon (as they call it), and the bosons they make fractions with. Those numbers are the factors of the powers at (3/2)^108 and (1/2 + sqrt(5)/2)^91.

That would be cheating; that would be me searching a finite set of numbers generated by physicists until I found the match. There are few choices: all the choices have to make the Compton identity true, with the corresponding wave having maximum kinetic energy less than half. Then I would look down the spectral chart and make sure it bounds the atomic orbitals by the best wave/null ratio match. Then I would verify that it did indeed knock off the magnetic mass ratio and packed the electron with one sixth of the next wave number, making sure the charge per atom matches the actual count of vacuum bubbles. I would intelligently 'guess' and find the right answer, because Weinberg already did the work. But the mixing angle should be closely related to the derived value of space impedance for an EM plane wave, since they are using sine and cosine.
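Whatever one makes of the search, the near-coincidence of the two towers quoted above is easy to verify: (3/2)^108 and (1/2 + sqrt(5)/2)^91 agree to a few parts in 10^5.

```python
from math import sqrt, log, exp

phi = 0.5 + sqrt(5) / 2   # the wave quantization rate used throughout

# Compare the two towers in log space: 108 steps of 3/2 vs 91 steps of phi
gap = abs(108 * log(1.5) - 91 * log(phi))
print(f"{gap:.1e}")            # about 4e-5 in log space
print(f"{exp(gap) - 1:.1e}")   # same thing as a relative size difference
```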

But my theory will get the answer, it is the theory of counting things up.

OK, let's do a sampled data SpaceTime expansion

Vacuum bubbles expanding, sample rate dropping. Why? Either momentum left over from the bang, or it is adjusting to light passage. Source and destination are already normalized, it is only interstellar space that has the problem.

So this is the same model, except we have Nyquist to guide us. Light starts out blue; it is sampled faster. Along the way, the sample rate slows. What happens? Beam spread; undersampling is spatial. What is the difference? My space impedance adjusts along with the vacuum. My model, as a vacuum sampled data system, has no other properties, except sampling everywhere, at about the same rate, producing the same hyperbolic spread of phase imbalance as required by phase minimization. The effect shows up in any line of symmetry the observer can view.

We solve it the same way: take one wave of light at the source, assume the expansion is isotropic, and count up the vacuum expansion, proportional to r cubed scaled by the reduced sample rate. Wait! times the sample rate? Either that or time. For any local observer, the sample rate is the same with respect to matter, so it matters not. What matters is the r cubed, and I include it: expansion in all directions, and the light wave is not assumed to be a ray, infinitely small.

Wait, you say, the wave spread is minimal transverse to the direction of travel. Maybe it is. But it has to be minimal relative to your aperture. And you have to assume that space impedance is isotropic relative to the dropping sample rate. Then you have to account for lower x,y bandwidth on your focal plane, and make sure the lower bandwidth is less than the red shift you are measuring.

Collecting light for longer intervals does not overcome the lower transverse bandwidth loss of your focal plane, unless you assume light is quantized and can be treated as a small-diameter ray. But then the effect of spacetime expansion on space impedance is gone. If you want to eliminate the space impedance problem, then you have to show the vacuum has enough SNR to perform the sin(theta) = theta calculation to the third power. The vacuum normally does that operation by renormalizing the beam over some surface area of the wave front. That is a space impedance function.

OK, now gravity wave.

Gravity cannot really outrun light.

With exponentially expanding space, two nearby observers are separated very quickly; so much so, that the distance between them quickly exceeds the limits of communications. The spatial slices are expanding very fast to cover huge volumes. Things are constantly moving beyond the cosmological horizon, which is a fixed distance away, and everything becomes homogeneous very quickly.
This can happen, a huge jump in the volume of space, at the lowering sample rate of space. But you end up with a wave crest, a high sample on one side, a low sample on the other. I am not sure what happens. If not, then you have to speculate on something other than vacuum. But we are running out of 'order': the Higgs at 91 (wave), the vacuum at zero, maybe four accounting for noise. I dunno what is beneath.

Why general relativity?

It really all starts with the topology: the vacuum needs to keep volume uniform without renormalizing groups already at volume equilibrium. That gets you the Euclidean system, r1^2 + r2^2 = r3^2, for the three vacuum types. Antisymmetry would result if the two phase types had differing volumes, now or before. That may be what the vacuum is trying to do, fix that problem. Antiparticles are a real problem for simple counting systems, because the sign bit is only known locally. The reason for black holes, if they exist, may be the scarcity of negative particles. Contained systems seem to be negative phase in, positive out.

 The vacuum has to count in powers to keep digit systems whole, allowing groups to remain stable and thus minimize sample rates. That gets you the stable multiply and the stabilization of the sample rate of light. From there, light quantization has to count up in integer groups to the Higgs density of the vacuum, and that gets you normalized units of acceleration.

If spacetime expanded, it is because the volume normalization system got screwed up, and that may be possible, I dunno. When Einstein said we cannot distinguish acceleration from gravity, he really meant that we are counting down the same digit system. Both us and the vacuum we are made of have to count with the same digits. So the instruction to the engineers is simply this: even though they are using a right-angle coordinate system, they have to renormalize units if their powers do not match the vacuum, especially near a small-number divide in their series expansions.

All of the particles we see are digits. All of the kinetic energy we see is fractional units of the speed of light, between integer units of particles, counting in powers. When energy and momentum warp spacetime, we really mean that it is time to normalize our units.

The universe may be cycling thru alternations of negative and positive phase. If so, there would be nearly invisible dark holes of reverse-phase packed vacuum. These holes would cause a slight phase alignment in free space. Galaxies would cluster there and be swallowed, and a violent phase change, a Big Bang, would cause the vacuum to renormalize.

If the vacuum is renormalizing volume, the sample rate of light would still be constant with respect to local nulls, but would be different than the sample rate for a far away observer.

Humans should just adopt the 3/2 counting system, everything would be much more efficient.

Special relativity is more fun

Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc2, where c is the speed of light in vacuum

Applies to nature, can't go faster than nature, we are band limited.  Here is Einstein:

The insight fundamental for the special theory of relativity is this: The assumptions relativity and light speed invariance are compatible if relations of a new type ("Lorentz transformation") are postulated for the conversion of coordinates and times of events... The universal principle of the special theory of relativity is contained in the postulate: The laws of physics are invariant with respect to Lorentz transformations (for the transition from one inertial system to any other arbitrarily chosen inertial system). This is a restricting principle for natural laws...[7]

What he says is that any invented coordinate system that has nothing to do with nature must obey some simple rules of translation. Space and time are engineering units, so this is all about telling engineers to be careful, and make sure some rules of symmetry and orthogonality are maintained.

Experiments suggest that this speed is the speed of light in vacuum
This statement came from considerations of homogeneity: any coordinate system that has right angles must have a light speed, or an equivalent. They get this in a fairly easy manner by showing one has to have an invertible multiply in your coordinate system, because reversing things must be possible.

None of this has anything to do with nature; it is all about instructions to engineers using coordinates with a right angle. E = MC^2 simply states that the engineer has to count uniformly, the C being the counter, the E being a 'divide by zero' if his counter does not work.

But, here is a hint: these rules apply to nature as well. Nature has to count uniformly, nature needs a simple multiply, and nature has to have a speed of light. There is one simple proof of this, the proton with a lifetime of 10^35 years. For all practical purposes, that means that nature is a counting system.

Lorentz contraction is nutty

Tells me nothing.

Any observer co-moving with the observed object cannot measure the object's contraction
This was in Wiki under experimental verification of the equation. Here is a clue: one has to interact with something to count it.

Yes, this equation says you cannot measure something if you travel faster than your measuring device. What this has to do with the speed of light, I have no idea, except that one cannot measure faster than the speed of light. Put this all in sample rate; it simplifies things. You are, after all, counting in integers. If you were not counting in integers, then the first person to do the experiment, some 100 years ago, would still be out there, rattling off an infinite digit number.

Does this tell me that the object contracted? No, it just tells me when my sample rate error is noticeable.
It turns out that the proper length remains unchanged

Thank god for small miracles. I agree with special relativity, I just dunno what the big deal is. Matter did not go away, energy was conserved, no miracles, just a simple application of the Nyquist theorem. And, at the end of the day, Nyquist is all you have unless you like infinite-length integers.

Harry Nyquist (Harry Theodor Nyqvist; /ˈnaɪkwɪst/, Swedish: [nʏːkvɪst]; February 7, 1889 – April 4, 1976) was an important contributor to communication theory.[1]
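Since Nyquist does the heavy lifting in this argument, a minimal aliasing demonstration may help (the rates here are arbitrary, nothing physical): sampled above the Nyquist limit, a 9 Hz sine is indistinguishable from a 3 Hz one.

```python
import math

# Sample a 9 Hz sine at 12 samples/s (Nyquist limit 6 Hz). The samples are
# indistinguishable from a 3 Hz sine: the alias is |f - fs| = |9 - 12| = 3 Hz,
# and at these rates the 9 Hz samples are exactly minus the 3 Hz samples.
fs, f = 12.0, 9.0
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(48)]
alias = [math.sin(2 * math.pi * (fs - f) * n / fs) for n in range(48)]

max_diff = max(abs(a + b) for a, b in zip(samples, alias))
print(max_diff)  # effectively zero: the two sampled sequences cancel
```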

I am missing something in the space curvature thing

For an observer observing the crest of a light wave at a position r = 0 and time t = t_now, the crest of the light wave was emitted at a time t = t_then in the past and a distant position r = R. Integrating over the path in both space and time that the light wave travels yields:

c ∫_{t_then}^{t_now} dt/a = ∫_R^0 dr/√(1 − kr²)

In general, the wavelength of light is not the same for the two positions and times considered due to the changing properties of the metric. When the wave was emitted, it had a wavelength λ_then. The next crest of the light wave was emitted at a time

This is wiki explaining general relativity with regard to distant galaxies. There is something missing, I keep thinking.

The statement should read:

For an observer observing the crest of a light wave at position r = 0 and angle of incidence equal to 0 at t = t_now, the light wave was emitted at a time t = t_then in the past, at a distant position r = R and at some angle. Where is the angle of incidence in a universe expanding in all directions? Then it should read: integrating over the path in space, angle, and time.

The geodesic curvature explains:
This is also the idea of general relativity where particles move on geodesics and the bending is caused by the gravity.
OK, particles do not bend when space does. If I were standing and counting particles arriving at my aperture, then sure, but I am not. There should be a double integral in that equation. The R^2 comes from the expansion of R along the point curve toward the source, as near as I can tell.

They are saying that over the path, energy density was stretched, so it takes more time, from our point of view, for the energy to arrive; the energy we collect arrives at a lower rate, and we measure lower energy density. Correct, but both along the line and along the area of our projected aperture. So, if distance expanded by a third, then our aperture shrank by 80%.

Something seems odd, but it may just be me.

A unified theory based on Fibonacci! I am not alone

Sounds OK, but it is clearly incomplete. It is not the Fibonacci ratio per se, but the summation problem that naturally causes the vacuum to assume the Golden ratio for quantization. The whole key is that the vacuum assumes a natural quantization rate that, relative to matter quantization, leads to simple flood-based algorithms for computing hyperbolic power series. Seriously, it is all about counting things up. You need Shannon to make it all work, and the vacuum gets that part too; the Shannon points appear as potential energy wells.

But the deal with the Fibonacci rate is that it is the minimum sample rate needed to group two sets of Nulls. So, if Nulls are grouped somewhat to begin with, this is the rate phase settles into. And you find that when the null and wave quants match, the hyperbolic circle is obeyed because cosh(x)^2 - sinh(x)^2 = 1, with x = (w - n)/2, the two quant ratios. Work the Taylor series, I have not, but you will find they are computed naturally.

 This identity, I think, lets groups come apart and reform and increase the accuracy of the corresponding Taylor expansion.
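The identity itself is easy to confirm numerically. Note that it holds for every x, including x = (w - n)/2 for the (wave, null) pairs used in these posts, which is presumably the point: groups can shift between pairs without leaving the hyperbolic circle.

```python
import math

# cosh(x)^2 - sinh(x)^2 = 1 for any x; check it at x = (w - n)/2
# for the (wave, null) pairs that recur in these posts.
values = {(w, n): math.cosh((w - n) / 2) ** 2 - math.sinh((w - n) / 2) ** 2
          for w, n in [(91, 108), (75, 89), (16, 19)]}
for (w, n), v in values.items():
    print(w, n, v)  # 1.0 each time, up to floating-point rounding
```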

Wednesday, April 23, 2014

Still bugged by Einstein's thermal property of the vacuum

Energy radiated seems to be limited by the energy-mass equivalence. Energy is quantized in Einstein's formula, which was originally from Planck.

In the numerator of the exponential is the Planck quantization of light, and that includes a factor of 1/2 when you reduce units. It simply means that space will hit the Nyquist frequency when your signals are quantized as a fixed proportion of the sample rate. In the vacuum of free space, when the signal hits the Nyquist, the vacuum will simply disperse the energy. It will still radiate the energy, it just won't radiate over a straight line. As near as I can tell, the vacuum still has 10^17 more samplers than Planck had wavelength units; the wave form simply disperses.

The mass/energy equivalence is correct; it just isn't what limits free space. The mass-energy equivalence is simply running out of vacuum density to maintain mass; it is the Higgs limit. The Higgs limit tells us that we only have about a few hundred bubbles of vacuum to stabilize the next unit of mass. That is when we reach the bandwidth of the vacuum, and the impedance of the vacuum comes into effect. But we still have stable gluon wave numbers, about 20 orders above atomic wave numbers.

The Compton effect rules, which is the bandwidth-limit effect of quantization, or momentum, and it will appear up and down the quant chain way before we hit the bandlimit of space. Experience with relativity bears this out: we always see velocity effects way before square-velocity effects. Even in the electron orbitals, the corrections seem limited to momentum, and we generally assume the electron mass stays whole. Here is an experiment, by the way: chop the electron into eighths, then skip the spin quant, and see if you still get good results.

So when do we have to make the 'SpaceTime' correction for red shift? Well, the vacuum has to do a Taylor series expansion of sin(x); it has no TI calculator. When delta sin(x) is around Planck * 1E-16, you have problems: you run out of vacuum multiprocessors to do a Taylor series.

I am serious about the Taylor expansions.
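To make "Taylor series to the third power" concrete: truncating sin(x) after the x^3 term leaves an error of about x^5/120, so the required precision fixes how small x must be.

```python
import math

# sin(x) ~ x - x^3/6; the first neglected term is x^5/120, which bounds the error
for x in (0.3, 0.1, 0.03):
    approx = x - x ** 3 / 6
    err = abs(math.sin(x) - approx)
    print(x, f"{err:.1e}", f"{x ** 5 / 120:.1e}")  # error tracks x^5/120
```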

Relative to nulls, light quantizes in powers, and above the noise of the vacuum, the vacuum is grouping and ungrouping horizontal wave in units of (1/2 + sqrt(5)/2)^n, n being around 5, 6, 7; a few orders above noise. The vacuum does this everywhere; that is why it can compute hyperbolic values to great accuracy.
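A small illustration of why grouping in these units is cheap for a counting system (the Fibonacci connection is standard mathematics): every power of the golden ratio reduces to one multiply and one add with integer Fibonacci coefficients, phi^n = F(n)*phi + F(n-1).

```python
from math import sqrt

phi = 0.5 + sqrt(5) / 2   # the quantization rate quoted above

# phi^n = F(n)*phi + F(n-1) with Fibonacci numbers F, so 'grouping and
# ungrouping' in these units is pure integer bookkeeping.
fib = [0, 1, 1, 2, 3, 5, 8, 13]
for n in (5, 6, 7):
    direct = phi ** n
    via_fib = fib[n] * phi + fib[n - 1]
    print(n, round(direct, 6), round(via_fib, 6))  # the two agree exactly
```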

Characteristic impedance and bandwidth

The weak interaction has a very short range (around 10^−17–10^−16 m[8]).[7] At distances around 10^−18 meters, the weak interaction has a strength of a similar magnitude to the electromagnetic force; but at distances of around 3×10^−17 m the weak interaction is 10,000 times weaker than the electromagnetic.[9]

The impedance of free space, Z0, is a physical constant relating the magnitudes of the electric and magnetic fields of electromagnetic radiation travelling through free space. That is, Z0 = |E|/|H|, where |E| is the electric field strength and |H| magnetic field strength. It has an exact irrational value, given approximately as 376.73031... ohms.[1]

OK, I use the Maxwell definition to mean the rate of change of one frequency band to another: the ratio of wave numbers that bound the free wave in that mode. Do I have the definition right? Who knows. I will take the inverse or not.

The weak force is bound between the gluon wave number and the upper boundary of the atomic orbitals, just above the proton, likely. Probably a 6-order difference. The atomic orbitals occupy some 14 slots between their Shannon wave numbers. Whatever the definition of impedance, you can be sure the weak force will run into bandwidth issues very quickly.

The electroweak force

If you look at how particle theorists describe the force, and its relationship to electromagnetism, the description is all group theory.

Calling groups by some name like electro or magnetic only works after you have found the group separation points that were coupled Shannon points. It's all phase and null anyway. What happens when the wave and mass quants match nearly perfectly? SNR goes way up. Noise, kinetic energy, results from deviations from the Compton wavelength. Then you get a lot of wave energy needed to balance phase. But this force seems to mediate between the neutrons and the protons, while the strong force mediates between quarks. So the naming convention seems off to me.

A force is strong when its Shannon barriers are high; it can move many nulls without quantizing them. When the weak force separated from the electromagnetic, it made its own group, but its group is weaker than the nuclear force.

So, near the proton level, where wave and null ratios match, we have many possible ways to combine kinetic and mass rates to match the wave number. So, apply a phase imbalance to a partially complete system. Like all good Fibonacci adders, it will attempt to combine nulls into a stable set. This weak force likely came about within a mass of semi-stable heavy leptons, and found a null group that involved a charged electron and part of a neutron. Thus the electron orbitals as we know them. The way the physicists find these stable combinations, I think, is simply finding the nominal wave/null match, then comparing the various statistical and integer algebras. Basically the same way a stupid vacuum would do it.

Gravity, around these parts, does not have much of a Shannon barrier; it requantizes mass at many small quant ratios and occupies many quant slots. The weak Shannon barrier of gravity is why we always think it continuous.

Physicists talk about the hierarchy problem: why is gravity so much weaker than a subatomic force like the electroweak? It is not a hierarchy problem, it is a multiples-of-threes problem. There is room up near the proton for many multiples of twos and threes, simple as that. We are dealing with a vacuum that must do simple arithmetic. Group separation makes its life much easier. Supersymmetry is all about combining multiples of twos and threes in groups and subgroups to allow some precision for this simple, fairly stupid vacuum. When you have this super integer system up near the proton, then adding in a few weaker subgroups is fairly easy.

Gravity is way down the order, around order ten. Signal to noise sucks, and gravity is likely stuck with a few primes and not many multiplies.

The power of groups can be understood by looking at the ability of the proton to count out some 120 different types of atoms and their orbitals. The proton is near that magic number, 108. Factor that number and you will see its strength.
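To see the point, just factor 108. A throwaway sketch (`factor` is my own helper, not from any library):

```python
def factor(n):
    """Trial-division factorization, returned as {prime: exponent}."""
    out, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

print(factor(108))  # → {2: 2, 3: 3}: nothing but twos and threes
```

108 = 2^2 * 3^3, which is exactly the multiples-of-twos-and-threes structure argued for above.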

Go back to group theory and add Shannon, the world will be much simpler.

A better way to ask this question

Why are there three generations of quarks and leptons? Is there a theory that can explain the masses of particular quarks and leptons in particular generations from first principles (a theory of Yukawa couplings)?

Is there another mass ratio that would be as good a near perfect match with the Compton wave number at the level of the proton? Clearly the stability of the Proton and the nature of the quark/gluon interaction are related. Also clear is that the dimensionality of the quark (spin, charge, mass, up, down, charm) and the gluon color matching all are part of the stability.

It is also clear that quark/gluon stability allows the atomic realm to be diverse and stable, and is the reason we have no magnetic quantization.

So, ask another question. What other Null quantization ratio schemes would do better? Go look. And then, since this all results from the nature of the vacuum, why is the vacuum the way it is?

The answer, I think, is that no other structure of the vacuum can exist. Topology should tell us: any other structure of the vacuum will immediately evolve into the one we have.

Another related question: why does phase imbalance in the proton make it so stable, even in free space? And if the neutron is so much less stable, by 10e36 years, then why does it take ten minutes to decay? The process inside the neutron is ultra high frequency, and internal oscillations would seem to cause immediate decay, not a ten-minute delay.

Why do particles come in families, some physicists ask

Because the Compton wavelength rule is fundamental, and waves and nulls quantize to different ratios. It comes from the properties of the vacuum. Understanding how a simple vacuum can count is simple; understanding where it came from is hard.

Higgs field equivalence

The Higgs wave I believe in. Its interpretation is subject to debate, since one could write the standard model from the ground up or the top down. From the top down, the Higgs wave prevents any higher order mass quantization; it is a blockage, not a delivery vehicle. Both positions require the Higgs, for the same reason, and both positions have problems. Matter can be packed from Nulls, counting up from the bottom. But how does null packing obtain and constrain the force needed to make the quark? It needs the Higgs.

From the ground up, having wave action push Nulls around is not a problem.

The vacuum fluid

Liquid Spacetime - The Fluid Flow Of General Relativity?

"If we follow up the analogy with fluids it doesn't make sense to expect these types of changes only," explains Liberati. "If spacetime is a kind of fluid, then we must also take into account its viscosity and other dissipative effects, which had never been considered in detail."
Liberati and Maccione cataloged these effects and showed that viscosity tends to rapidly dissipate photons and other particles along their path. "And yet we can see photons travelling from astrophysical objects located millions of light years away!" says Liberati. "If spacetime is a fluid, then according to our calculations it must necessarily be a superfluid. This means that its viscosity value is extremely low, close to zero."
"We also predicted other weaker dissipative effects, which we might be able to see with future astrophysical observations. Should this happen, we would have a strong clue to support the emergent models of spacetime. With modern astrophysics technology the time has come to bring quantum gravity from a merely speculative view point to a more phenomenological one. One cannot imagine a more exciting time to be working on gravity".

They are not alone; Einstein said the vacuum had inherent energy. So, how much of a fluid is the vacuum? Well, how about (3/2)^-108 of the fluidity of the proton, as near as I can tell. That tiny number seems to agree with both Max Planck and Albert when they postulated the energy of the vacuum. What effect would that cause when viewing light from 60 million light years? Energy spread of light, and red shift, as a matter of fact.
The bigger problem is whether the energy of the vacuum is dissipating. And if not, what replenishes it? And if so, where does it go?
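For what it's worth, that fluidity figure is easy to put a number on; a one-liner, taking the (3/2)^-108 ratio above at face value:

```python
# Fluidity of the vacuum relative to the proton, (3/2)^-108,
# taking the ratio quoted above at face value.
fluidity = 1.5 ** -108
print(f"{fluidity:.2e}")  # → 9.60e-20
```

So "tiny" here means roughly one part in 10^19.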

I have decided to make quantum pie

I have rules for making Quantum pie. It is to be filled by machines that must fill the pie dish evenly. The pie dish will rotate on a spindle, thus the pie dish has a hole in its center. And the pie has an outside edge. I call these my Shannon pie boundaries. The rotation of the pie is constant, I call this the speed of pie. The pie will be filled by four machines, each of them will use one of four spoons. But no machine may use the same spoon size as its neighbor within one unit of speed of pie. My machines must be able to calculate pie filling to within their integer limits, so my pie quants must obey some group pie theory rules. The goal is to minimize the variation of pie density during one rotation of the quantum pie. Wish me luck, it will take a day or so. I can get this on my R code system pretty quick.
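A minimal sketch of those rules in Python (my own reading of them; the bin counts, spoon sizes, and radial placement are all made up). One thing the sketch shows: with the four machines spaced a quarter-turn apart, each angular slice collects every spoon exactly once per rotation, so the angular density variance comes out zero no matter what the machines do radially.

```python
import random

random.seed(0)
n_r, n_theta = 8, 36               # radial and angular bins of the dish
spoons = [1, 2, 3, 4]              # four spoon sizes, neighbors all distinct
pie = [[0] * n_theta for _ in range(n_r)]

for step in range(n_theta):        # one full rotation at the "speed of pie"
    for m, spoon in enumerate(spoons):
        # machine m sits a quarter-turn ahead of machine m-1
        theta = (step + m * n_theta // 4) % n_theta
        r = random.randrange(n_r)  # radial drop point, chosen arbitrarily
        pie[r][theta] += spoon

# filling per angular slice, summed over radius
density = [sum(row[t] for row in pie) for t in range(n_theta)]
mean = sum(density) / n_theta
variance = sum((d - mean) ** 2 for d in density) / n_theta
print(density[0], variance)        # → 10 0.0
```

The interesting variance, then, is the radial one, which is where the group pie theory rules would have to do their work.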

 Nope, this is not it.  This just shows I have 3d graphics up and running!

Tuesday, April 22, 2014

The Null was discovered in 1995, I am not alone

Nature: In 1995, Ted Jacobson, a physicist at the University of Maryland in College Park, combined these two findings, and postulated that every point in space lies on a tiny 'black-hole horizon' that also obeys the entropy–area relationship. From that, he found, the mathematics yielded Einstein's equations of general relativity — but using only thermodynamic concepts, not the idea of bending space-time1.

The Null is a black hole?

Sure, why not. It does nothing; it is the best sphere that nature can make. There are about 0.3e17 of them in a Planck length, as near as I can tell. We are made of them.

It is not surprising that physicists have thought of these ideas before; after all, they have to count things up, so they are likely to consider the unit thing counted up. One of the physicists actually thought that everything was made of the smallest thing, including gravity! He almost nailed it.

Fractions and stochastic algebra

The atomic orbitals are not quantized to Shannon, meaning they do not make a base-two digit system, but a digit system in the natural log. They do not have SNR greater than 1/5. So a wave function of the orbitals will have forms looking like:

e^(k) + e^(k-1) + e^(k-2) ...

A perfectly fine digit system; the k are integers, the quantum numbers of the orbitals. And we can treat them like a digit system: add, subtract, multiply and so on. But when we draw them, we convert to the twos system of our computers. If we do that anyway, then just use Shannon with a high enough sample rate and find all the orbital quantum numbers. Essentially what we would be doing is breaking the electron mass into fine granularity, applying a bit of special relativity.

Look what happens:
1) Convert e to base two: log2(e) = 1.4427 = b, and we get:

2^(b*k) + 2^(b*(k-1)) + 2^(b*(k-2)) ...

Nice, but the b*k, b*(k-1), ... are not integer, and we really do not have a nice digit system until we scale b up to an integer like 144. It still works, but we are dealing with a 144*k*j digit number. So, if we have some 20 quantum numbers total, we would apply Shannon across the spherical phase density using 20 * 144 integers, computing all the integers to quantize that phase density.

Why not? Good question, why not. You end up with three variables: one for radius squared, one for the theta angle and one for the beta angle, and the pretty picture would be a series of binary numbers added up, each binary number ranging from about 1 to 1000 digits. So what? I dunno, why not.
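The base change above is easy to check numerically; a small sketch, with the 144/100 scaling treated as the approximation it is:

```python
import math

b = math.log2(math.e)    # 1.4427...: exponent scale from base e to base 2
k = 5                    # an arbitrary quantum number for the check

# Exact change of base: e^k == 2^(b*k)
assert abs(math.e**k - 2 ** (b * k)) < 1e-9 * math.e**k

# Scaling b up to the integer 144 (i.e. using 144/100 in place of b)
# gives a digit system, at the cost of a small error per term.
approx = 2 ** (144 * k / 100)
rel_err = abs(approx - math.e**k) / math.e**k
print(round(rel_err, 4))  # under one percent for this k
```

So the 144-digit scheme trades a sub-percent error per term for integer exponents.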

All we are doing in atomic physics is minimizing the variance of {-1,0,1} within the proton. We know the wave/mass numbers that define the orbital boundaries, so we know the total phase. We should know the relative amount of phase in a unit of charge, so we initialize the proton to that. We know the number of Nulls in a proton. We have chopped up the electron enough to accommodate relativity. In finding the Shannon boundaries of the orbitals we have accommodated magnetism. Maximum entropy is minimum phase, to the precision of 1000 digits, over the sphere of the proton. We ignore the quarks; they just give us an axis of symmetry. The orbitals are simply the paths of uniform phase, so we map the proton phase function onto the orbitals.

I simply cannot find fault.

Five years ago, I was not alone

Physics of the Shannon Limits
We provide a simple physical interpretation, in the context of the second law of thermodynamics, to the information inequality (a.k.a. the Gibbs' inequality, which is also equivalent to the log-sum inequality), asserting that the relative entropy between two probability distributions cannot be negative. Since this inequality stands at the basis of the data processing theorem (DPT), and the DPT in turn is at the heart of most, if not all, proofs of converse theorems in Shannon theory, it is observed that conceptually, the roots of fundamental limits of Information Theory can actually be attributed to the laws of physics, in particular, to the second law of thermodynamics, and at least indirectly, also to the law of energy conservation. By the same token, in the other direction: one can view the second law as stemming from information-theoretic principles.
Entropy means optimally matching with a countable set.
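The inequality that abstract leans on, relative entropy being nonnegative, is easy to demonstrate; a small sketch with two made-up distributions:

```python
import math

def relative_entropy(p, q):
    """D(p||q) in bits; Gibbs' inequality says this is never negative."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]   # two arbitrary distributions over three outcomes
q = [0.25, 0.25, 0.5]
print(relative_entropy(p, q) >= 0.0)  # → True
print(relative_entropy(p, p) == 0.0)  # → True
```

Zero only when the two distributions match, which is the "optimally matching with a countable set" reading above.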

Decoding Einstein

A popular exercise.

It seems clear what was going on in his development. He discovers the quantization of light in the atom, discovers the quantization of vacuum energy. Naturally setting the speed of light constant, he still wants to avoid quantization issues. So he sets space impedance to a constant, though he knows that space impedance and gravitation are related, both of them a function of the vacuum Null. Hence, gravitation * impedance becomes a constant, everything is still continuous, and thus spacetime becomes the independent variable.

Einstein simply did not accept the quantization premise. Now we look at the vacuum and discover that it is the quantization ratios of matter that are constant.  Light, though a constant, is simply set to a sample rate that maximizes the Compton rule in a quantized environment.   Then things are not so continuous, and we have to have both space impedance and gravitation written with respect to the mass quantization ratios.

The impedance of space, the general term including both gravity and light impedance, is simply the signal to noise ratio and the structure of the vacuum, which Einstein did not like.

So, the simple theory of counting things up:
  • The structure of the vacuum
  • The derived SNR of the vacuum 
  • The theory of entropy written in terms of phase equalization
  • Including both stochastic and integer entropy makes the decimal point.
This is about as simple as it gets.

Good idea here

These authors work with the complete sequence:
It's bigger on the inside: Tardis regions in spacetime and the expanding universe. Professors Rasanen and Szybka, of the University of Helsinki and the Jagellonian University at Krakow, together with Rasanen's graduate student Mikko Lavinto, decided to investigate another possibility.
The "standard cosmological model," which is the framework within which accelerated expansion requires dark energy, was developed in the 1920s and 1930s. The FLRW metric (named for Friedmann, Lemaître, Robertson and Walker, the major contributors) is an exact solution to Einstein's equations. It describes a strictly homogeneous, isotropic universe that can be expanding or contracting.
Strict homogeneity and strict isotropy means that the universe described by an FLRW metric looks the same at a given time from every point in space, at whatever distance or orientation you look. This is a universe in which galaxies, clusters of galaxies, sheets, walls, filaments, and voids do not exist. Not, then, very much like our own Universe, which appears to be rather homogeneous and isotropic when you look at distances greater than about a gigaparsec, but closer in it is nothing of the sort.

Physicists know a sequence of things happened in a particular order. They do not know, yet, what the clock rates for those events were, and for that they need to know how the vacuum quantizes events, how the vacuum counts. The vacuum would count differently if the environment at any one place were a bit different. So until we know the complete sequence, we do not know what the units of maximum entropy were along the path.

So, SpaceTime does travel faster than the speed of light!

Science Line:
So, while the speed of light remains an unbreakable barrier for those of us within the universe, it can’t limit the expansion of space-time itself. The universe keeps right on expanding, but the speed of light limits how much of it we can see, and how fast we can move. It may not be fair, but that’s physics.

Physicists are thinking the relative rate of gravity speed and light speed is constant. That's an improvement. That means:

 space impedance * gravity = constant.

Not a bad assumption, it means that mass is now the universal constant, and the structure of the vacuum does imply that fact. But it also means the red shift is subject to differing interpretations, I think. This assumption also implies the connectivity of the universe.

Joel R. Primack says the vacuum bubbles expanded

UC Santa Cruz professor:
"According to modern cosmological theory, based on Einstein's General Relativity (our modern theory of gravity), the big bang did not occur somewhere in space; it occupied the whole of space. Indeed, it created space. Distant galaxies are not traveling at a high speed through space; instead, just like our own galaxy, they are moving relatively slowly with respect to any of their neighboring galaxies. It is the expansion of space, between the time when the stars in these distant galaxies emitted light and our telescopes receive it, that causes the wavelength of the light to lengthen (redshift). Space is itself infinitely elastic; it is not expanding into anything."

Likely a better explanation, and one that I still consider. But his interpretation casts much doubt: if the vacuum inflated, so did the impedance of space or the gravitational constant. We cannot have it both ways; the vacuum expanded but its properties were unaffected? Impossible. He once again assumes a structure for the vacuum with no method to define that structure.

The idea that the vacuum bubbles expanded is a bit far fetched; they most likely just changed their relative shapes, thus changing the dimensionality of symmetry. But topological considerations make even that suspect.