## Monday, March 10, 2014

### Does this theory correspond well to transitions in the standard model?

Well, I have an order N mode region holding its own field, propagating in order N+1 modes, and quantizing in N+2 modes.

From there things look close: charge, wave and quantization. Spin is relative curvature or phase imbalance across quantized samples. But some changes are predicted. Mass still seems like a relative concept.

The unit of energy is one unit of phase equalization by Nyquist. Adding an axis of simultaneity squares the number of phase equalizations. So mass in order N+2 does indeed have energy mc², where c is the Pauli speed. Mass is the wavelength ratio between orders. Wave propagation exists in every order except Nyquist + 1. Nyquist propagates in Nyquist + 1, but Nyquist + 1 has no mass. From there, where is Higgs? Working on that, but I see Higgs quantizing the leptons and propagating in the nuclear, unless there is another order down there. The physicists think there is; I have not done the work yet, except that my quick spreadsheet here does not indicate that to be true.

### Revisiting gravity

It is wave motion of a quantized sample set. In the gravitational field it is a simultaneity. This is true of any field two orders longer than the originating field. Two orders, because fields are not compact and remain unquantized with the neighboring long field. So phase equalization is moving wave motion from the center of compaction and phase adjusting the quantized field down. Is it moving? The containing vacuum is phase equalized to the bounce, in one quantization. Phase pushing back from the center wave equalizes very fast with the quant, giving it acceleration down. Quantized fields keep kinetic energy in the enclosing vacuum. OK, the motion is real, but motion relative to the opposing wavelengths.

### Hey, Steve! Did I get it right?

The Unified Field Theory, tell me where I am wrong, I need to double check for any errors. Start here, and work backward.

Says Wiki: Most scientists, though not Einstein, eventually abandoned classical theories. Current mainstream research on unified field theories focuses on the problem of creating quantum gravity and unifying such a theory with the other fundamental theories in physics, which are quantum theories. (Some programs, most notably string theory, attempt to solve both of these problems at once.) With four fundamental forces now identified, gravity remains the one force whose unification proves problematic.

Steve, been there done that.

## Sunday, March 9, 2014

### Vacuum Phase Equalizer, part 2

My physics readers will notice I got stuck on the Compton wavelength ratio. I made a couple of errors describing the vacuum phase equalizer and was off on the electron-proton mass ratio.

1) It will not quantize (stop forever) until it sees the triplet [0,0,0] at order one, two triplets [0,0,0,0,0,0] at order two, and so on.
2) The vacuum works on triplets at Nyquist, which is 1.5 * wave speed, more or less.
3) The vacuum will not see a simultaneity when working on 5 samples, two triplets overlapped.
4) The vacuum undersampler has a slight positive phase curve in its measurements. Remember the values [-1,0,2], which result in a slight curvature of the universe.
5) That curvature ensures its work will be completed, and ensures the order level is maintained. The slight curvature removes the steepest gradient in phase, so second-order gradients, smoother gradients with finer granularity, will not overlap the first order, and so on. Thus, except at the Nyquist noise level, there will never be sequences like [-1,-1,-1]. The system only quantizes at the triplet level, and the sub-quant levels are always ensured to be fine granularity levels of 1, 2, 4, 8, etc.

With these conditions the order allows mass ratios between density levels of 64, minus gaps that result from non-compact groups.
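Condition 1 can be sketched in a few lines. This is a minimal reading, with `quantized` as a hypothetical helper name, assuming order n requires n consecutive zero triplets:

```python
def quantized(samples, order):
    """Rule 1, as I read it: the sampler stops (quantizes) only after
    seeing `order` consecutive zero triplets, i.e. 3 * order zeros."""
    zeros_needed = 3 * order
    run = 0
    for s in samples:
        run = run + 1 if s == 0 else 0
        if run >= zeros_needed:
            return True
    return False
```

For example, `quantized([1, 0, 0, 0, 2], 1)` is true, while order two would need six zeros in a row.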

So, no group theory for me, just an approximation, and indeed this system does work within the round numbers of the standard model and traditional quantum physics.

Here is my approximation. I took the full number of quants available and estimated a gap ratio that results from non-compact groups, then computed the mass ratio. The proton seems to be order two, the electron order three, and the mass ratio fits.

Missing-gap ratio: 0.6

| Order | Allowable | Used quants | Mass ratio | States |
|---|---|---|---|---|
| 1 | 8 | 4.8 | 4.8 | 38.4 |
| 2 | 64 | 38.4 | 184 | 307.2 |
| 3 | 512 | 307.2 | 56623 | 2457.6 |
| 4 | 4096 | 2457.6 | 139156940 | 19660.8 |
| 5 | 32768 | 19660.8 | 2735936773627 | 157286.4 |
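The table can be regenerated under one reading of its columns: allowable quants are 8^order, used quants are the 0.6 gap ratio times allowable, and the mass ratio is the running product of used quants. The column interpretation is my assumption; the numbers come out matching the table:

```python
def order_table(gap=0.6, orders=5):
    # allowable = 8**order; used = gap * allowable; the mass ratio is
    # the running product of used quants; states = gap * 8**(order + 1).
    rows, mass_ratio = [], 1.0
    for order in range(1, orders + 1):
        allowable = 8 ** order
        used = gap * allowable
        mass_ratio *= used
        states = gap * 8 ** (order + 1)
        rows.append((order, allowable, used, mass_ratio, states))
    return rows

for row in order_table():
    print(row)
```

Order 2 gives a mass ratio of 184.32 and order 5 gives about 2.736e12, in line with the table entries.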

This needs refinement; my magnetron/electron wavelengths are not enough to explain the pulsar. But, for an amateur, not bad: a complete unified field theory.

### What are the simple mechanics of the Vacuum sampler?

Really, the basic principle is to remove simultaneity and order samples so as to minimize phase. Very simple, a few lines of software actually. The sampler looks at three samples and exchanges the two that minimize phase. The quant levels at Nyquist are +2, +1, 0, -1, -2, and the curved triplet is -1, 0, +2. The universe has a slight curvature because Pauli is enforced.
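A minimal sketch of that exchange rule, using my own toy phase measure (the sum of absolute successive differences; the measure itself is an assumption):

```python
def phase(triplet):
    # Hypothetical phase measure: sum of absolute successive differences.
    return abs(triplet[1] - triplet[0]) + abs(triplet[2] - triplet[1])

def exchange(triplet):
    # Try every pairwise swap of the three samples and keep the
    # arrangement with the least phase.
    best = list(triplet)
    for i in range(3):
        for j in range(i + 1, 3):
            cand = list(triplet)
            cand[i], cand[j] = cand[j], cand[i]
            if phase(cand) < phase(best):
                best = cand
    return best
```

With this measure, `exchange([-1, 2, 0])` settles on the curved triplet `[-1, 0, 2]`.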

At any point in the universe, the vacuum only operates where there is no simultaneity. When there is simultaneity, Pauli would be broken, and the vacuum simply treats the simultaneous samples as one. How does it do this? Simple: the two simultaneous samples are left as they are, and eventually the ordering process evacuates nearby samples, and the two that break Pauli are left alone where they are not a bother. Not a bother until the compaction process again crowds them with variation such that the vacuum again sees them as different.

Wave motion operates on the principle of a simple ordering process. On the wave front, the few samples in motion are simply exchanged, in order of phase reduction, with the next sample that reduces phase; this is all we need.

I think this gets us the big bang process. The original disturbance is considered an unordered, unsampled disturbance. The vacuum collects signals up to Nyquist noise, thus making all samples distinguishable.

OK, the big bang starts with a disturbance having some unordered grouping by density. The vacuum treats this initially as a simple reordering issue without simultaneity. But the sampler assumes no simultaneity from any angle, and attacks the problem from many different angles, two at a time, until it reaches two samples that provide no exchange that minimizes phase.

Dense regions get sequential samples like -1,-1,-1,0,0,+2,+2, and so on, always a slight imbalance. A pair of -1s is not separable, and the sorting process simply gets hung switching between the two. But the -1,0,+1 sequences get sorted away. Finally we have a batch of similar numbers, the size of the batch being the gradient level, or density. The disturbance is sorted by density; the number of indistinguishable samples together is the batch density, and that gets quantized in the sort process. Density regions become optimally separated, the gradient actually quantized in the value of the batch size, which is an integer. They will be numbered 1, 2, 3, 5, 8; a Fibonacci, I think, as the process completes.
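The batching described above amounts to a sort followed by grouping indistinguishable samples into runs. A sketch, with `batch_by_density` as a hypothetical name:

```python
from itertools import groupby

def batch_by_density(samples):
    # Sorting separates distinguishable samples; runs of identical
    # (inseparable) samples stay together, and each run's length is
    # that region's batch density.
    ordered = sorted(samples)
    return [(value, len(list(run))) for value, run in groupby(ordered)]
```

For instance, `batch_by_density([2, -1, 0, -1, 2, -1, 0])` gives the batches `[(-1, 3), (0, 2), (2, 2)]`.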

The red shift phenomenon is simply the batching process: long waves have finer granularity, and the vacuum is simply increasing sparsity in free space at the expense of density elsewhere, by the sorting process. All wave action is the sorting process. Quantization points, like electrons or magnetrons, are simply density peaks that have established the region's batching level.

The batching level explains why light is energy. The higher frequencies need more sample pairs, each sample pair requires its own sampler, and sampling by Nyquist is energy.

If you follow this approach then you end up with the group theory behind Hilbert spaces. Each 'batching' level becomes a closed and complete group (not quite complete, not quite closed; they call this the binding energy). Thus waves never requantize within a particular group. But you might have to jump two groups to the longer end to get matter from light; I have not worked this out. But a wave from the shorter group enters the longer group, and there is no sorting solution, so the wave freezes and becomes Pauli bound; it becomes mass with respect to the longer wave group.

Let's assume separable groups and estimate the order of the electron and the nuclear. The electron has a mass that is approximately 1/1836 that of the proton.

Note the overlap between orders; that means the groups are not closed. That is wavelength to me and corresponds to quantization levels. The Nyquist gets three levels. Taken two at a time, and assuming closed, I get five levels. Each pairwise step increases the number of states by 2. But I can mix and match the three-group and the five-group, no?
So 3, 3*5, 3*5*7, 3*5*7*9, 3*5*7*9*11, 3*5*7*9*11*13 gets me:

3; 15; 105; 945; 10,395; 135,135. These numbers are an overestimate, but I get that the electron is order five, the proton order four. True? We have:
bosons, hadrons, fermions. Notice the overlap? That is what I mean when I say a wave won't become simultaneous until it jumps two orders.
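The running products of odd factors above can be checked mechanically:

```python
def state_counts(n):
    # Running products 3, 3*5, 3*5*7, ... of the odd numbers from 3 up.
    counts, product = [], 1
    for k in range(n):
        product *= 2 * k + 3
        counts.append(product)
    return counts

print(state_counts(6))  # [3, 15, 105, 945, 10395, 135135]
```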

Two missing orders, if I calculated right. My numbers are likely screwed up, but then maybe there is dark subatomic mass? Where am I going wrong?

$\lambda = \frac{h}{m c}$
This has wavelength * mass = my Pauli sample rate times the Planck constant. Hmm... I need to fix this. Wavelength * mass (proton) = wavelength * mass (electron). I have 1836 * number of levels (proton) = number of levels (electron). So my ratio is correct. Hmm...
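Since h and c are shared constants in the Compton relation, wavelength times mass is the same for every particle, so the wavelength ratio equals the inverse mass ratio. A quick check with rounded standard constants:

```python
h = 6.62607e-34    # Planck constant, J s
c = 2.99792458e8   # speed of light, m/s
m_e = 9.10938e-31  # electron mass, kg
m_p = 1.67262e-27  # proton mass, kg

lambda_e = h / (m_e * c)    # electron Compton wavelength, ~2.43e-12 m
lambda_p = h / (m_p * c)    # proton Compton wavelength, ~1.32e-15 m
print(lambda_e / lambda_p)  # ~1836, the proton-electron mass ratio
```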

## Saturday, March 8, 2014

### Pulsars

A pulsar (portmanteau of pulsating star) is a highly magnetized, rotating neutron star that emits a beam of electromagnetic radiation. This radiation can only be observed when the beam of emission is pointing toward the Earth, much the way a lighthouse can only be seen when the light is pointed in the direction of an observer, and is responsible for the pulsed appearance of emission. Neutron stars are very dense, and have short, regular rotational periods. This produces a very precise interval between pulses that range from roughly milliseconds to seconds for an individual pulsar.
How do we know this thing is rotating?

The pulse period is a millisecond. Wave speed, 3e8 m/sec, times 1e-3 sec = 3e5 m, the wavelength of the pulse. About the wavelength of a quantized magnetic region. The magnetrons would be phase reversed, with very stable phase gradients. Very conducive to RF transmissions.
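The arithmetic, as a one-line sketch:

```python
c = 3e8            # wave speed, m/s
period = 1e-3      # pulse period, s
wavelength = c * period
print(wavelength)  # 3e5 m, about 300 km
```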

A pulsar could either be a rotating body, or a body that has transferred its rotation energy into a stable  magnetic field gradient. Would we know the difference?

So we construct a model in which the electron field is dense near the pulsar, and unstable in phase. It would be bright in the radio frequencies, and compressed by fairly numerous and sparse magnetrons orbiting about 0.3e6 to 10e6 meters out. These magnetrons are not much bigger than electrons, maybe two orders of magnitude larger. They have a phase gradient, 360 degrees, but very stable. They are phase unbalanced, reversed from what we think, and have a very strong field. So it rotates RF, adding a longer pulse mode.

The pulsar is simply a bright object transmitting in RF. The rotation is static in the magnetic field, kinetic energy transferred to a very strong field gradient. We would never know which one. The wavelength of the charge field would have expanded slightly, as the proton charge was transferred. Electrons in this world are a bit smaller. The neutron field is actually less dense, a better density balance between the two transferred to a sparse imbalance in the magnetron field. The magnetron field pushes back against the gravity field, which is now a bit denser. We would likely misinterpret the shape of the gas cloud outside the magnetosphere.

If we mapped the spectra, we would see the rotating beam of the RF wave and the longer waves of the gravity field and magneto field: separate waves, and separate spectral lines.

### Black holes?

There is only one that we will ever see, our universe, and it is not yet compacted. When we are compacted, we will be surrounded by the Nyquist layer, the unquantized field with no phase gradient.
If the vacuum knew the standard model, we would be black holed. We are not, I assume. So the standard model is a convergent series, for us and for the vacuum.  We know it to the extent we can measure relative densities, everywhere we can look.

This approach says that to the extent we observe apparent centers of compaction, there must be small quantized matter in sparse orbits beyond the surface of compaction, with wavelengths on the order of the total region. But the order we observe in space seems too small to match any density at the atomic level and below. Either we are way early in the compaction process, or we fail to observe mid-level orders, or our estimated density ratios are not correct. Take magnetism. Either it comes from phase-balanced charge and is never quantized, or charge is too sparse and the vacuum will quantize the magnetron. If charge is too sparse, then the subatomic order is not nearly finalized. But if they exist, then where are the short waves in the megahertz range?

### My guess on dark matter

The galaxies the scientist looks at are cooling with long-wave emissions unobservable to him. The phase gradient in these galaxies is quantized, and it is not all a smooth gravity gradient. Gravity, likely more dense, has greater curvature, but that is because an outer layer of quantization is compressing gravity.

The cooling emissions are multimodal because they travel out through quantized phase gradient levels. The short modes requantize back into matter, only the longest wavelengths escaping. So the scientist sees matter recirculating many times and thinks there is greater luminosity per matter than there really is.

We would see the same phenomena in quasars, with much higher energy. The wave emission is very high energy and very multimodal. The matter dropping back into the hole is just requantized short-mode waves, but some of it on a galactic scale. The quasar is just doing mini big bangs.

In this model the magnetron, for example, is only a couple of orders larger in apparent mass than an electron, but a couple of orders longer in wavelength. Being mostly sparse, with high kinetic energy, their phase polarization is extreme, the long-lobe field tightly looped and the short-lobe field mostly straight. They would be orbiting hundreds of thousands of kilometers away from the central surface of compaction.

Around galaxies, far out on the perimeter, would be a quant level I call the galactitron: very sparse, with a wavelength of galactic scale. But they would be crowding the gravitrons; being less sparse, the gravitron field has a higher phase gradient, more curvature.

It is all about the relative packing density between quantization levels. It is hard to judge unless one has a good approximation of the standard model for that region of space.

### Still working the theory of undersampling (revision 4)

This is an ongoing page. I will stick here for a bit and get some details straightened out. Call this page a work in progress. (Revision 4 ongoing)

This sampling method needs a name, let's call it the Pauli sampler because it is sampling to make Pauli exclusion true.

Pauli says the probability of two quants arriving and the probability of one quant not arriving must be equal. As a ratio of that normal curve, we want the tails left over from two quants equal to the size of the quant.
Q must be equal to the tails of 2*Q.
100 - 2*Q = Q, so Q = 33%. For Nyquist that would be 50%. The effective quant rate is .33 per sample. Nyquist samples 2 times per Q. 2 * .33/.5 = 1.32 is the Pauli rate; the Nyquist rate is 2. The system is undersampled at the Pauli rate and gets more Q, but they are deliberately erroneous. So in any complete sequence that equalizes phase, the Nyquist sampler gets 33%. These are the extra samples the vacuum keeps to maintain Pauli exclusion. However, the Pauli samples are taken at the Nyquist rate, which means two samples get one Pauli sample, or, using Shannon,
1.32/2 = .66 = log2(1+SNR), or SNR = .58. This is the accuracy of all the mass measurements, including the vacuum, all taken at Nyquist and meeting the Pauli exclusion. Any sample that is weak, the electron for example, will be quantized down to meet this accuracy.
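The chain of numbers above, with Q rounded to 33% as in the text (a check of the arithmetic, not a derivation):

```python
Q = 33.0                          # quant fraction, from 100 - 2*Q = Q
pauli_rate = 2 * (Q / 100) / 0.5  # 1.32; the Nyquist rate is 2
bits = pauli_rate / 2             # 0.66 = log2(1 + SNR)
snr = 2 ** bits - 1               # ~0.58
print(pauli_rate, round(snr, 2))
```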

We break the system up into the vacuum, the proton and the electron. If the total system requires 150 samples, then the vacuum gets 50, the proton might get 30, and the electron 2. Convert these to rates: Vacuum, 1/3, proton, 1/5, and electron 1/75. Wavelength is the inverse of rate.

Quantization levels:

The electron has the longest wavelength: 75 samples are needed to complete one wave. So 1/75 is the rate, taken at the Pauli SNR; how many partial quants?

(B/75) = .66, or B = 75 * .66 ≈ 49. The electron has to be measured to 49 quantization levels. For the proton: 5 * .66 = 3.3. This means that the uncertainty of the electron field has to be much less than the uncertainty of the proton field. The vacuum handles this by creating more samples of the field, namely 49 more. So the electron, with the longest wavelength, ends up with a greater number of samples in its side lobes, the advanced and delayed fields.
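The sample-budget arithmetic of this section, sketched; the 150/50/30/2 split is the example above, and 0.66 is the Pauli bit rate:

```python
total = 150
shares = {"vacuum": 50, "proton": 30, "electron": 2}
rates = {k: v / total for k, v in shares.items()}   # 1/3, 1/5, 1/75
wavelengths = {k: 1 / r for k, r in rates.items()}  # 3, 5, 75 samples
# Quantization levels at the 0.66 Pauli bit rate per sample:
levels = {k: w * 0.66 for k, w in wavelengths.items()}
print(levels["electron"], levels["proton"])  # ~49.5 and ~3.3
```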

Energy states

This is the ground state. The number of Nyquist samples is sufficient to balance phase, if I did the numbers right. Increasing the energy state is equivalent to removing Nyquist samples. I would think the energy states can be increased by removing the Nyquist samples three at a time. As they are removed, at some point a phase inversion occurs and a dimension of simultaneity is introduced. At that point, the system equilibrates back to a state where the Nyquist rate is again 1/3, and the excess Nyquist samples are the kinetic energy of the emitted particle.

But since the length of the sequence was measured in the dimensionality of the scientist, the system reaches a limit which should be two more dimensions, in our case. When that limit is reached, the wave contribution from the proton collapses, and the proton particle goes back to its ground state, it requantizes from wave to particle.

As long as phase inversion is not breached, the modeler is free to add chunks of inert vacuum along an axis of dimensionality to distribute the wave. The  modeler introduces simultaneity back, within the energy holding capacity of the volume from which the sequence was taken.

Time and space dilation

Time is the ratio of Nyquist samples to the length of the sequence. Space dilation is the samples along the axis of dimensionality.

Low SNR fields need greater numbers of field samples to balance phase.

Phase delay in sampling and field curvature.

This is important. Why do most quantizations end up with fewer field samples holding more phase advance and more field samples holding more phase delay? The electron seems phase imbalanced, always negative, but that cannot be so; phase rebalancing still occurs at Nyquist. The number of sub-quantized samples around the electron is fixed by the wavelength, so the balance is in fewer field samples having large phase advance and more field samples having less delay. The answer is relative sparsity: the electron is relatively sparse compared to the proton, and of longer wavelength, so it extends more samples to the proton, all of one phase sign, and fewer in the other direction. Hence the delayed phase is smaller per sample than the phase advance. The delayed field is the short end and the advanced field the long end, mainly because the wavelength of the sparse is longer relative to the dense.

Sparse regions have stable quants, unbalanced phase, more simultaneity, more kinetic energy and more vacuum.
In our compressed sun center we should expect a slightly more dense electron region, a slightly less dense nuclear region and a slightly more dense boson region. On the outer shell, the magnetic region is dense, the gravitational region sparse. The vacuum has been redistributed to provide the balance, the nuclear getting a bit more, the electron getting a bit less.

Wave action between the electron and magnetic would be slightly multimodal, containing shorter nuclear modes. An unstable quasar wave would requantize into components, including galaxy-size quants near the edge. Galaxies would be dispersed around the quasar, kept in orbit by far-flung blackholetrons, and compressed by the random appearance of galaxy-size quants near the quasar that have requantized momentarily.

Compaction is the process of moving the vacuum out from the center as the center cools. Thermal energy increases the density mismatch between orders; radiation restores the balance. The shorter the wavelength of emission, the larger the phase mismatch, and so the greater the density mismatch.

### Vacuum is conserved by quantization

Relativity, special or otherwise, is just confusing. Skip time and space and move to sample rates. The vacuum quantizes at longer wavelengths in order to remove simultaneity, and thus needs fewer vacuum samples at the Nyquist rate to remove phase imbalance. Hence the ordering levels of the universe.

Wave motion is too complex in standard theory. The vacuum is simply removing phase imbalance at the Nyquist rate. The wave appears to travel along the axis of phase imbalance. Waves move at the Pauli sample rate.

Curvature of the wave is not gravity-specific; it occurs when wave propagation enters a longer-wavelength region of space. The vacuum simply adds a second moment to phase as it removes the imbalance at Nyquist. If the phase imbalance is greater than the vacuum samples available, the wave becomes simultaneous and is quantized. This occurs mostly when a wave jumps two orders of simultaneity from the short to the long.

The Chandrasekhar limit: same thing. That limit occurs when curvature exceeds the Nyquist noise and simultaneity is not supported.

Magnetars, if stable when they are formed, will have a steeper phase gradient, and the gravity field outside the region has a less steep gradient. The total phase imbalance remains; the vacuum has simply quantized the gradient into two regions of simultaneity. Light will travel through a steep magnetic gradient, with greater curvature, and fool the scientist who thinks gravity caused it.

Use Shannon, because Shannon minimizes the amount of compaction for a given level of SNR; it is the maximum packing theory.