Monday, March 31, 2014

One more time on maximum quant size matching

I took another try at getting the common constants: the largest accurate integers representing the size of mass and the wavelength of light. That is, the largest accurate integers known to the vacuum.

I started with 5.39106(32) × 10^-44 s, the Planck time in seconds. I am assuming that there are two quantization ratios, and these define the quantum number accurate to Planck; the error should be within the range of the Planck measurement error (0.000032), which I take to be 0.000032/5.39106 ≈ 5.9e-6 relative, the error of 5.39106(32) × 10^-44 s.

To understand these numbers: they say that, within the Planck measurement error, the vacuum is accurate to 1/[(3/2)**108] nulls. That would be the distance, in nulls, over which the phases adjust their volume sizes to set their sample rates.

So I am looking for the two integers, within the error, that make the quantizations of the two ratios meet the assumption. The closest error I could get was 9.228e-5, compared to the Planck measurement error of 5.9e-6.

I have the largest accurate volume of mass = (3/2)**108,
and the largest accurate wavelength of light = (1/2+sqrt(5)/2)**91, in units of vacuum samples.

I simply computed the results in decimal using logs. Setting

r2**N2 / r1**N1 = 1,

write r1**N1 = r2**N1 * (r1/r2)**N1, so that

r2**(N2-N1) = (r1/r2)**N1
Take logs:
N2 = N1 - [log(r2/r1) * N1]/log(r2), which is just N2 = N1 * log(r1)/log(r2)

Then I searched for the N1, N2 closest to integers. I doubt there is a closer integer set.

(3/2)**108 ≈ (1/2+sqrt(5)/2)**91 ≈ 1.042e19
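The integer search above is easy to reproduce. Here is a minimal sketch (the function name `closest_pairs` and the scan limit of 200 are my own choices, not from the spreadsheet): scan N1, compute N2 = N1*log(r1)/log(r2), and keep the pairs where N2 lands nearest an integer.

```python
import math

def closest_pairs(r1, r2, n_max, top=5):
    """Scan N1 and compute N2 = N1*log(r1)/log(r2); keep the pairs
    where N2 lands nearest an integer, i.e. r1**N1 ~ r2**N2."""
    ratio = math.log(r1) / math.log(r2)
    hits = []
    for n1 in range(1, n_max + 1):
        n2 = n1 * ratio
        hits.append((abs(n2 - round(n2)), n1, round(n2)))
    return sorted(hits)[:top]

phi = (1 + math.sqrt(5)) / 2          # 1/2 + sqrt(5)/2, the Fibonacci ratio
for err, n1, n2 in closest_pairs(1.5, phi, 200):
    print(f"N1={n1:4d}  N2={n2:4d}  err={err:.2e}")
# the top hit is the (108, 91) pair quoted above
```

The (108, 91) pair falls out as the best match below 200 because 91/108 is a continued-fraction convergent of log(3/2)/log(phi).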

This is not completely tested. But the vacuum count per Planck distance is still huge.

I make no guarantees, and I generally take three or four trials to get it right, so beware. As you can see, this says there are about 6.0e15 of the largest vacuum quants in a Planck meter. I searched for integers using a spreadsheet and could not find any more accurate integers up to 10e75, which I am sure is the limit of the spreadsheet's accuracy.

Anyway, we will find out soon enough after I get my longest-sequence spreadsheet working. I figure two or three 30-bit quantizers and a 180-long sequence should make a dandy quark. I am calibrating the thing; it needs to be able to count itself along the sequence, and so forth, so I have calibrators in it, and some basic macros.

Sunday, March 30, 2014

Phase needs more separation than nulls

Nulls are already at minimum phase, so they can pack together. They will be underpacked, with extra nulls interspaced with phase to stabilize the wave. For phase, in any unbalanced situation, all the phase elements will swap with a null. The two phases being unequal, N not equal to M, N+M nulls are moved, and one or two nulls will sit between them in a straight line, for instance. Hence the Fibonacci quantization. That also explains the half-integer spin: the remainder when the nulls haven't quite gotten to the full power of two. Phase, still fractionally unbalanced, will swap the null in place, giving the motion we call angular motion. It is likely simply being swapped constantly by phase. The quantization of nulls obeys the 3/2 quantization rule regardless of the original density. Hence the proton is stable, because the quarks and gluons complete the rule when bound.

Charge is another of those things that often comes in 3/2. There is phase embedded in a packed null set, though I do not completely understand it. So I am pretty sure I have this almost right.

Working so far

The goal is to lay out a force over a complete sequence. The multiple quantizers decompose the force on a per-sample basis. Since the quantizers find the nearest match to the force, they already get the nearest integer to each other. The residual after quantizing is requantized, again and again, until it is gone. I make the force with whatever basis I want, and change the variation, or frequency, from large to small. I use sine, and set a sequence of sine functions that match, essentially, the points where the quantizers line up. So it naturally picks up the 'beat' frequency of the quantizers. Ultimately, I am going to get groups and subgroups, and so on.
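The residual-requantization loop described here can be sketched as follows, assuming each quantizer simply rounds to the nearest integer power of its ratio (the names `quantize` and `decompose` are mine, not from the spreadsheet):

```python
import math

def quantize(value, ratio):
    """Round a positive value to the nearest integer power of ratio."""
    n = round(math.log(value) / math.log(ratio))
    return ratio ** n, n

def decompose(force, ratios, tol=1e-9, max_rounds=32):
    """Greedily peel off the nearest quant of each ratio in turn,
    requantizing the residual until it is (almost) gone."""
    terms, residual = [], force
    for _ in range(max_rounds):
        if abs(residual) < tol * force:
            break
        for r in ratios:
            if abs(residual) < tol * force:
                break
            q, n = quantize(abs(residual), r)
            signed_q = math.copysign(q, residual)
            terms.append((r, n, signed_q))
            residual -= signed_q
    return terms, residual
```

Each rounding step shrinks the residual by at least a fixed factor (for ratio 3/2 the residual drops to under 23% of its size per quant), so the loop converges quickly.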

Saturday, March 29, 2014

Have to compress a Higgs to make a proton

My spreadsheet works with thermal energy; everything flies away. I have to add a compression force, which means converting small amounts of energy at a time.

The plan is to test my theory that everything is mostly quantized by the Fibonacci ratios, so I need to let energy into the system in units of tiny Plancks; or I can insert energy up to the first quant, then my counters should quantize in sequence and make the quark matrix.

Compression force. OK, thermal energy is redundancy; the phase can see that easily, it is not noise. Add force, the phase quantizes, part of the thermal becomes potential energy, and that is noise. The more quantization, the harder it is to find redundancy. That works, up to a point.
A spreadsheet full of bit computation. Making lots of mass, few waves, but I still have to scale and fix errors.

But it works. I have four digit systems, each currently based on the Fibonacci ratios, starting at 3/2 and ending at the exact Fibonacci ratio, and a large unquantized number that is converted into a 96-bit digit sequence, each digit quantized with the closest ratio. I then let the solver determine the force needed to complete the conversion. It's not yet a quark, but closer than yesterday.
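One plausible reading of converting a large unquantized number into a digit sequence at a ratio like 3/2 is a greedy expansion in a non-integer base. This is a sketch under that assumption (the function name `expand` is mine); for any base between 1 and 2 the greedy digits come out as 0s and 1s:

```python
import math

def expand(x, ratio, ndigits):
    """Greedy digit expansion of x in the non-integer base `ratio`:
    x ~ sum(d_k * ratio**(n-k)) + remainder, digits d_k in {0, 1}."""
    n = math.floor(math.log(x) / math.log(ratio))   # highest power needed
    digits = []
    for k in range(ndigits):
        p = ratio ** (n - k)
        if x >= p:
            digits.append(1)
            x -= p
        else:
            digits.append(0)
    return n, digits, x   # leading power, digit string, remainder

# the largest mass quant from earlier, expanded at the 3/2 ratio:
n, digits, rem = expand(1.042e19, 1.5, 96)
print(n, "".join(map(str, digits)))
```

Note the leading power comes out at 108, matching the (3/2)**108 figure from the earlier post.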

I can eliminate thermal and potential energy because I know the force required to compress, since I know the original size of the thing. That made it simpler still. Now I just let the spreadsheet solve for the force variation that maximally quantizes force.

Then I find the coefficients:
log(force) = a*log(wave) + b*log(kinetic) + c*log(mass), each term held to within an integer, where the wave, kinetic and mass logs are the quant ratios and a, b, c are integers. I go in the order that the phase quantizes first. If the ratios are right, and set to the order of the quarks, then minimizing force should make a bit set, each bit composed of the three quantizers.

Friday, March 28, 2014

Quark spread sheet and universal counter

Simple thing. It quantizes at the golden rate if the SNR is high enough, otherwise the Pauli rate, otherwise Nyquist, which is kinetic. It quantizes the largest thing first. I am testing it; I think it might make quarks and gluons if I get the energy input right.

I corrected the big numbers for the biggest thing, and I am close enough for testing. The idea is that we start with nulls in motion, kinetic energy, which gets quantized at Nyquist. As Nyquist climbs the ladder, kinetic noise is reduced and we reach a point where Pauli can quantize; that reduces noise further so Fibonacci can quantize. Whenever something quantizes, it induces noise in the previously quantized rates and stops, releasing the spreadsheet back to the lower rate, and so on. I think this is the best way to get groups; I am sure it will work.

So how does a dumb vacuum make a multi-rate encoder on a spreadsheet?

Phase mainly finds resting spots where it is not perturbed. So we have all these vacuum nulls running loose; the phase gets away and balances itself in the open areas. That reduces noise and leaves channels for these nulls to run loose. That is a quantization level at Nyquist. But noise is reduced, and other phases start to get better balanced, leaving groups of nulls trapped in packs. That reduces noise and causes a quantization at the Pauli rate. Then the Fibonacci rate starts waves, standing or not, mainly to keep them out of harm's way. Really, it is just bunches of phase vacuum leaving specific marks by their position. When induced quantization noise is too high for the Fibonacci rate, the phases rebalance and make another level of packed nulls.

This goes on until there is nothing left to do and we have three counters with matching bits. Each time the phase leaves a channel, wave, or packed null set, noise mostly reduces, but they also induce quantization noise on each other.

So I will just get a bunch of energy values from the physics sites, plug them in, with plenty of errors, and watch the thing make quarks and gluons. The density of the thing, and the energy input to confine the nulls, will force the other quantizations into action sooner, making quarks and such. Later I will add a valence charge quantizer that lets phase embed into safe spots in quark masses, eventually making a hydrogen atom on my spreadsheet.

The phase vacuum should automatically obey the Shannon condition, and make groups and subgroups. In the sheet, I actually compute the Shannon signal-to-noise. I have to add an "if" statement in my Open Calc; I hope that is enough.

If the system has no noise, the proton is a 16-bit mass of nothing. That is how far I have tested it. But I am pretty good at this stuff and will get it working.

Is this the BitStream version of physics?

Close; it is three bitstream versions: separate quantization rates and counters for the three types of things going on, kinetic, packing, and waving.

Will it make the Quark matrix?

It should. If I get the input energies and the densities right, then some of the counter bits should line up as values for the matrix. Shannon is maximum entropy; it will compute to the accuracy of me, which is not too good, but improving.

What other things will it count?

Put in a counter for orbitals, electrons, planets, or yoyos. Count the parameters you need to draw waves. It should count anything in life, because nothing in life is countable unless it meets Shannon.

After testing the thing and writing a little manual, I will release it.

Thursday, March 27, 2014

Still working the problem (update mode) and getting digits straightened out

It is a numbering system with two different fractional precisions. One counts fractions, the other whole numbers, but they have to be compact, operating within a certain range until the fractional part exceeds its limit, then a new group forms.

Pauli counts whole numbers, and light counts fractions. But the match must remain within the group, or order. Fractions can have trailing zeros (nulls) until the trailing nulls make a whole. Pauli can have leading phase until the leading phase makes the first fraction.

Remember, waves are packed at the Fibonacci quantization, the twos binary system is an approximation that Fibonacci uses, and the packed nulls are quantized at the Pauli level. It's a mess to sort out and is going to take some time. But here is a better start, and the numbers I get are all within range of the 96, 48 levels, or small multiples of them.

A Pauli of order M can count r digits such that:

P**M > P**(M-1) + P**(M-2) + ... + P**(M-r)

which must be greater than light in the same group, having order N with k digits, such that:

P**(M-r) > L**(1-N) + L**(2-N) + ... + L**(k-N)

Two quant ratios, counting whole numbers and fractions in the same group, then we go up one group. My mistakes last time were 1) math errors, but I always make them, and 2) I just wanted to get the relative ordering right and leave the digit sequence till later. That was why I wasn't worried about partial results. But I glommed onto those original numbers, (98, 49); they are close.

The entire atom should count down, Higgs as the significant bits, then the other groups hanging within as less significant bits, without interference. Groups contained within groups. This works because light is faster than the Pauli and contains the current group; it literally acts as a border region. Going down in the groups, contained within groups, the quantum numbers are the digit counts. It should result in the quark transformation matrix; that is where I am headed. And I am closing in.

So, we can fill in the entire 'bit sequence',
starting with the maximum mass, which is unstable, I presume:

P**Nmax ..|.. P**QuarkMax ... P**(QuarkMax-r) ... P**(QuarkMax-p) ...
And between each quark group should fit the gluons, at the R rate. Then there is room there for the leptons, each group separated, and meeting maximum entropy to an integer with fraction.

The general arrangement of quantization levels down to the magnetic is estimated, and the atom should be contained within a range of:

P**100 ... P**60, or somewhere like that, going down in quantization powers of P.
Light quantizes larger, and should fit within this range, matching group for group. Light need not count at the same rate as Pauli; it just needs to match the range and be separable between groups. There will be peer groups in Pauli and peer groups in light, each subdividing into matching subgroups and so on.

Light packs the atom as a Fibonacci, I am pretty sure, or close enough to start.  It can be  modelled  as a binary digit sequence with three huge gaps:


The leading zero is the unsustainable Higgs. The sequence should be about 48 bits long, or thereabouts. And the spans contain a matching Pauli sequence counting in smaller quant ratios.

We know from the mass ratio of proton to electron that each of those gaps is about P**14 or P**16, in that range. And the P ranges are matched in pairs. 16 is 1/3 of 48, so these kinds of numbers get me excited.

So we want to find the binary number, in 2**N, and match it to sets of Pauli numbers at (3/2)**M, and meet the Shannon in each set and subset.

So 2**N, N near 48, will be Shannon-close to 1/2 of P**M, with P counting in (3/2), over the whole atom. There is likely a third rate, the kinetic rate, but I ignore that for now; it will count up to Pauli groups and subgroups. But the ratios seem to go:
1/2 Nyquist, 3/2 mass, 5/3 kinetic, and all packed at the light rate, Fibonacci.

So 2**Nmax < (3/2)**Pmax = 3**Pmax * 2**(-Pmax)
2**(Nmax + Pmax) = 3**Pmax
or Nmax + Pmax = log2(3) * Pmax, each Pmax and Nmax to the nearest integer.

Nmax = Pmax * [log2(3) - 1], or Nmax/Pmax = .585,
and closer to zero than any other combination.
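The slope above is a one-line sanity check, nothing more:

```python
import math

# From 2**(Nmax + Pmax) = 3**Pmax it follows that Nmax/Pmax = log2(3) - 1
ratio = math.log2(3) - 1
print(round(ratio, 3))         # ~0.585, as quoted

# the Pmax = 246, Nmax = 144 pair mentioned later sits near this slope:
print(246 * ratio)             # ~143.9, close to 144
```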

I keep getting numbers in the range Pmax = 106 down to 62, but I have to double-check the maximum order I expect, going back to Planck and doing the numbers right. Tp = 5.39106(32) × 10^-44 s should be the seconds per sample, or close to it, about 2**143 samples per second. But Plancks are units of packed nulls, aren't they? A Planck is P**(-Pmax).

If so, a Planck is a quantization at the 3/2 rate, giving me Pmax = 246 = 123 * 2 and Nmax = 144 = 48 * 3. The numbers are in the right range, if I can make up my mind on the right matching ratio. The Pmax is the largest prime times 2 in the range. This is likely not a coincidence.

But waves are packed at the Fibonacci ratio, and if that is the case then the Planck is F**-207, or F**-208 if we add one.

I am picking the Pmax and Nmax that most closely match integers. 48, 96, and now 123 keep coming up as common multiples. I am trying to sort this out. There are three quantities here: the binary quantization Fibonacci uses to pack waves, the Pauli quantization that measures packed nulls, and the twos binary system that Fibonacci assumes when packing waves.

Gluons and quarks make the color white. In the binary approximation, white is a large (high order) 7-bit number made by adding up three 10-bit numbers, a good starting range. The gluons are likely exchanging free nulls with the quarks; they 'superconduct' mass. But the color white is close to the Higgs mass.

Pmax is our quantization of the binary Higgs at Pauli (3/2) levels. So a binary 7 bits of that, quantized at Pauli to make a proton, is composed of three wave/quark combinations, quantized as Pauli occupying 11 bits each. And each of these comes out as 6 bits in the binary approximation; they are the quark/gluon combinations. Each of these, in the binary approximation, has about 3 in Pauli and 2 in wave, to a rough approximation.

Remember, the Higgs order is only about 40 bits, out to the electron, in a universe which is like 300 bits, just to give scale ranges. But it is the highest order; it is counting huge numbers. I am working from binary, approximating that with (3/2), then getting that into (1/2+root(5)/2), where the Pauli masses are the most significant digits, in (3/2), of the (1/2+root(5)/2), the wave counting up to the nearest half of each Pauli mass. The wave is packed, meaning it will carry free nulls without quantizing them, and can still bind the packed nulls. The digit system must be able to divide out the groups and subgroups with ease; we are, after all, simulating a dumb vacuum.

The question, now, is how to make the bit system uniform so I can minimize the back and forth, something I am not really qualified to do.

The vacuum can only do addition within its quant levels, so addition becomes:

q**n + q**m = q**k + remainder, where k is the nearest integer. Which means we can use Fibonacci as the unified basis, as long as we are careful with rounding and remainders.
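In code, that addition rule might look like this (a sketch; `quant_add` is my name). With the Fibonacci ratio the remainder can even vanish exactly, since phi**n + phi**(n+1) = phi**(n+2):

```python
import math

def quant_add(q, n, m):
    """Add q**n + q**m, then re-express the sum as q**k + remainder,
    with k the nearest integer power of q."""
    total = q ** n + q ** m
    k = round(math.log(total) / math.log(q))
    return k, total - q ** k

phi = (1 + math.sqrt(5)) / 2
k, rem = quant_add(phi, 4, 5)
print(k, rem)   # the Fibonacci identity: phi**4 + phi**5 = phi**6, so rem ~ 0
```

For a generic ratio like 3/2 the remainder is nonzero and has to be carried along, which is exactly the rounding bookkeeping the post warns about.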

Start with a disorganized set of free nulls and phase in balanced pairs. The vacuum, everywhere, quantizes these in units of 2 (one unit being a null and two phases), then sets of 3/2, then sets of 5/3, and so on until there is no more balancing. Whenever a set is quantized at order n, and the next order would reduce entropy, it has to skip the order. It will find these points because there is no exchange that balances phase any better. The phase and nulls are bound in the region. But there is a remainder that keeps on balancing, at the edges, surrounding the existing sets that have grouped.

If the vacuum can measure fractional phase to infinity, and keep swapping, then it will count to the largest value in 14 digits, the largest value being 1.82e43, the Planck speed.

OK, now I need to model density, the idea that quantization stops at a small number of bits because there is too much phase imbalance to solve at light speed. I will use sensitivity to get that: the more sensitive the measure of phase imbalance, the more dense. This is the Shannon condition; phase imbalance is noise, and it lowers the quantization rate. As the vacuum packs nulls and balances waves, the SNR increases. The quantization level steps up.

Noise is horrific; the vacuum just barely separates itself into negative, null, and positive positions. The SNR then is:

Q/2 = log2(1+SNR). Signal is Planck energy organized; noise is density. We start with the number of Plancks in phase, which is 2/3, and that is squared. Then each time some phase separates itself, the signal in the system goes up by the Shannon condition and the noise drops.
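That condition is one line of code (a sketch; `quant_bits` is my name), and it shows the stepping behavior: as SNR rises, the supported quantization level Q climbs.

```python
import math

def quant_bits(snr):
    """Shannon condition Q/2 = log2(1 + SNR): solve for Q,
    the quantization level the current noise floor supports."""
    return 2 * math.log2(1 + snr)

# as phase separates itself, SNR rises and the level steps up:
for snr in (1, 3, 10, 100):
    print(snr, round(quant_bits(snr), 2))
```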

We should find noise drops fastest when wave and null quantize at nearly the same level. Let's put all the noise on loose nulls having kinetic energy. The more they are packed, the less noise. The vacuum stops when everything is quantized up to the light quantization. At that point, the 1/2+root(5)/2 value does no better than packing by the integer ratio in steps. Since Nyquist is causal, it can work, by definition. When it has reduced noise sufficiently, the higher quantization kicks in until it can no longer work, and it drops back to Nyquist.

The sampling rates should go as: Nyquist, Pauli, kinetic, then light. Looking at quantizing the largest thing, Pauli stops at about an 11-bit integer; that is way too compact, and I do not account for noise yet. Noise comes from the original energy plus quantization variance, the amount one quantization rate overlaps as the system settles. We end up with balanced packed nulls, kinetic energy, and wave, co-quantized. I am not really sure about kinetic energy, and I assume it stays at Nyquist, that is, unpacked.

So I start with enough noise to step the thing, but not stop it. There should be an energy level that makes a proton, or quarks.

Superconductivity (Updated)

Rice University: High-temperature superconductivity is one of the greatest unsolved mysteries of modern physics. In the mid-1980s, experimental physicists discovered several compounds that could conduct electricity with zero resistance. The effect happens only when the materials are very cold, but still far above the temperatures required for the conventional superconductors that were discovered and explained earlier in the 20th century. In searching for a way to explain high-temperature superconductivity, physicists discovered that the phenomenon was one of a larger family of behaviors called “correlated electron effects.” In correlated electron processes, the electrons in a superconductor behave in lockstep, as if they were a single entity rather than a large collection of individuals. These processes bring about tipping points called “quantum critical points” at which materials change phases. These phase changes are similar to thermodynamic phase changes that occur when ice melts or water boils, except they are governed by quantum mechanics.
Supercooling packed nulls unpacks them, and they can move inside, mostly standing, wave motion. Gluons contain a ton of unpacked nulls. Unpacked nulls maintain themselves because, as we have definitely shown, wave is sampling a bit higher than packed nulls.

The question is why supercooling a material unpacks electron nulls. Well, kinetic energy drops, and the positive phase containment goes away. Phase imbalance is energy; there is no imbalance to disable the nulls, and the electron still has phase balance in its own positive/negative phase. Packed nulls carry a minimum of balanced phase; the electron wrapped in the atom will not show this. But that is the reason the vacuum packs nulls: to separate imbalanced phase, and it leaves some balanced phase to contain the packed null.

Just to note, the charge of the electron is embedded in its mass. Its positive and negative balance are a partial of that charge, at the Fibonacci rate. This is the same way that quark/gluon pairs work. The superconducting electron should be heavier, though that is my initial speculation. When you put the electron back, it gains kinetic energy and gives up its phase balance to the atomic phase container.

Mass is not some unique substance; it is simply pieces of vacuum.

More paradoxes on the motion of light in free space

New Scientist: Just that phenomenon has already been seen, starting with some unusual observations made by a telescope in the Canary Islands in 2005 (New Scientist, 15 August 2009, p 29). The effect has since been confirmed by NASA's Fermi gamma-ray space telescope, which has been collecting light from cosmic explosions since it launched in 2008. "The Fermi data show that it is an undeniable experimental fact that there is a correlation between arrival time and energy - high-energy photons arrive later than low-energy photons," says Amelino-Camelia.
And this:
"It has been obvious for a long time that the separation between space-time and energy-momentum is misleading when dealing with quantum gravity," says physicist João Magueijo of Imperial College London. In ordinary physics, it is easy enough to treat space-time and momentum space as separate things, he explains, "but quantum gravity may require their complete entanglement". Once we figure out how the puzzle pieces of space-time and momentum space fit together, Born's dream will finally be realised and the true scaffolding of reality will be revealed.

Why would they differ, high-energy gamma rays and ordinary light? For one thing, their quantization ratios are different, so their impedance in free space is different. What do we know about impedance and curvature? If there is no phase alignment in free space, then they do not matter. If there is some phase alignment, then it matters a whole lot. It is whatever phase alignment there is, and it might not just be gravity.

Scientists make the mistake of assuming space impedance is a function of space, when it is a function of the relative quantization ratios of the two phase modes doing the swirl. The second mistake scientists make is assuming that only gravity causes phase alignment. Galaxies cause phase alignment on top of gravity; they are semi-independent of gravity.

The relative quantization ratios within the atom come from the positive phase boundary caused by the electron in motion, the phase boundary caused just by the electron charge, which is negative, and the negative phase quantization of gluons, which is massive.

So space impedance, the difference in quantization, is huge for gamma waves compared to EM waves, likely a 50-times difference. Any mild phase alignment causes the high impedance to curve more; it will loop a bit along its minimum path.

Wednesday, March 26, 2014

Quantum entanglement is a slight phase imbalance

Quantum entanglement is a physical phenomenon that occurs when pairs or groups of particles are generated or interact in ways such that the quantum state of each particle cannot be described independently – instead, a quantum state may be given for the system as a whole.

  1. Alice measures 0, and the state of the system collapses to |0>_A |1>_B.
  2. Alice measures 1, and the state of the system collapses to |1>_A |0>_B.
If the former occurs, then any subsequent measurement performed by Bob, in the same basis, will always return 1. If the latter occurs, (Alice measures 1) then Bob's measurement will return 0 with certainty. Thus, system B has been altered by Alice performing a local measurement on system A. This remains true even if the systems A and B are spatially separated. This is the foundation of the EPR paradox.

What do the two particles have in common? They share the difference between an integer quantum separation and a near-integer quantum separation. The system balances between Pauli compaction and wave quantization. The different quantization rates do not yield perfectly separable quantum states. When they leave the conjoined state, one takes half the quantum error and the other takes the other half, split left and right. It will be a phase imbalance, negative in one and positive in the other. The imbalance is not enough to disturb the state on its own, but measure it and the imbalance uses the measurement to rebalance. So disturb one and it leans left, disturb the other and it leans right, because the quantum error was split in balanced fashion.

Current quantum equations need to be modified slightly, to include probability in a phase side channel, computed as phase imbalance. The new system includes fractional error.
As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has a large number of copies of the same system, then the state of this ensemble is described by a density matrix, which is a positive matrix, or a trace class when the state space is infinite-dimensional, and has trace 1. Again, by the spectral theorem, such a matrix takes the general form: \rho = \sum_i w_i |\alpha_i\rangle \langle\alpha_i|, where the positive valued w_i's sum up to 1, and in the infinite-dimensional case, we would take the closure of such states in the trace norm.

Unity in the matrix, but generate a side matrix that contains the fractional error. It can be set at +1/3 and -1/3, which will likely be the minimum error the system will keep. Note: the fraction may be [+-(1/2-sqrt(5)/2)]/2, split between each pair. This needs thought. The error is kept in wave form, and moves with the particle.

What does this mean for the quarks and gluons?

Likely we have a solution with mass containing 96 orders, which is Higgs, divided into three sub-orders of 32 each, split between mass types, or 16 orders for each of six. These are managed by wave, taken from likely 48 orders; split six ways, that gives 8 wave quantizations for each of eight mass quantizations. Your total null count is Higgs, but they can be split between free nulls in wave and packed nulls in mass. The quant ratio for mass is 3/2; the quant ratio for wave is (1+root(5))/2. Free nulls and packed nulls are swapped for stability at each color match.

Damn close, considering we started with Planck and three vacuum shapes.

Grocery stores work just like atoms, to within .001%

5.39106(32) × 10^-44 s: the time it takes light to travel one Planck unit. How is it so accurate? The inverse of that number is the longest complete sequence using the bitstream math. The inverse is the density of the vacuum.

The first thing to notice is that the speed of light is defined. That is, current theory claims light is not subject to Planck but is a quality of the vacuum everywhere. In our region that has basically been confirmed; light remains more constant than Planck, with a precision of, well, Planck. The correlation coefficient is .9, meaning the speed of light is about 9 times more accurate than Planck, that is, constant.

But in this theory, the sample rate of the vacuum and the density of the vacuum have some settling time: the time to keep relative vacuum sizes constant. The sample rate is nearly constant because there is a minimum ratio between phase and nulls that keeps the volumes and rates consistent everywhere we look, in our region.

Remember, the only real constants in this theory are the existence of three distinguishable vacuum units, and the necessity of balancing phase, which is equivalent to balancing volume. So in our region we know the density of the vacuum: it is 1/(5.4e-44), measured at the sampling rate of light. That must be the order of a 2x Higgs wave? The sampling rate that counts 1/(5.4e-44) to the nearest integer is the sampling rate of light. Well, unless there is a bigger Higgs, but I doubt it. And that is the biggest quant the vacuum can make. There are two numbers, the rate and the exponent. rate**N is known; the r and the N separately are not, unless you believe some conjecture about rate and optimum packing. But since light is less than ten percent, we can factor that out and get a much smaller number, 1.1902e17, when inverted. So the N and r we look for have the form: N = logr(1.1902) + 17*logr(10). Find the r, using very accurate logarithms, which makes N an integer. And the closest reasonable number I get is 3/2, whaddya know. N = 96.97. It is within .001%. Not a complete search, but I suspect it is the answer. It is not the light rate, mainly because we are really measuring a mass of nulls. But I think we might have cracked this case; let's give it some thought.

I never believed it would be this close. I started two months ago with the idea that physics works just like grocery stores, with only one or two in the queue. Now here I am with the same result for physics, to within .001% of what Planck says. I think this proves my case: it's all about counting, everything. Group theorists, the world is yours.

Look over my numbers, please find my error:

rate     log_r(1.1902)   log_r(10)       17*log_r(10)     N
1.1      1.8268915302    24.1588579281   410.7005847776   412.5274763079
1.2      0.9550234393    12.6292531365   214.6973033207   215.65232676
1.3      0.6636626394    8.7762908476    149.1969444099   149.8606070492
1.499    0.4301435925    5.6882293059    96.699898201     97.1300417936
1.5      0.4294361136    5.6788735873    96.5408509835    96.9702870972
1.5001   0.4293655196    5.6779400502    96.5249808529    96.9543463725
1.618    0.3618551175    4.7851808551    81.3480745369    81.7099296544
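The table above can be regenerated in a few lines, which makes it easy to check (the function name `order` is mine; the 1.1902 mantissa and exponent 17 are the inputs given in the paragraph above):

```python
import math

def order(rate, mantissa=1.1902, exponent=17):
    """N = log_r(mantissa) + exponent * log_r(10), evaluated
    via natural logs."""
    lr = math.log(rate)
    return math.log(mantissa) / lr + exponent * math.log(10) / lr

# reproduce the N column for each candidate rate:
for rate in (1.1, 1.2, 1.3, 1.499, 1.5, 1.5001, 1.618):
    print(f"{rate:<7} {order(rate):.4f}")
```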
The other thing: when I started grabbing scale numbers and putting the world into orders, I sort of got the Higgs at order 100, and here it is at 97, using the same 3/2 rate. I am pretty sure this is the theory of counting.

What wave modes make gamma rays?

Gamma rays typically have frequencies above 10 exahertz (or >10^19 Hz), and therefore have energies above 100 keV and wavelengths less than 10 picometers (less than the diameter of an atom).

I doubt they are electromagnetic if they are shorter than the atomic radius. How would they be quantized to EM levels if that were so? The wave mode is most likely nuclear/electro. They would make mass in our gravity field; their wavelength is so short they would be captured with the null density we have. These are of the same intensity as gluons; in fact, they are gluons coupled to charge in free space. They carry a large group of nulls. Higgs, according to my last reading, are simply waves loaded with free nulls; they don't get far before the wave becomes quark and gluon. So these gamma rays should be the highest frequency of wave sustainable in neutral free space.

So, just get the relative quantization ratio between the electro wave and the gluon, compute the space impedance relative to the speed of light, and you have it. The logic here is that if we strip the atom of electron kinetic energy, then the remaining balance of phase is based simply on electron charge, so we get the nuclear/electro wave mode. If the atom emits, then the residual, stable magnetic field in the region would have some quantization value relative to total electron energy. It's a tricky subject and I need to work on separating out energy levels, but being a simple counter of things, I need more things to count.

Relative probability of state changes

That would be the difference between an integer solution to Shannon and the closest solution to an integer. It says that, over the incomplete sequence, the quantization levels have some overlap. So in the quark system, the mass states and the wave states make the best match at the current generation. They cannot perfectly match because the wave quantization ratio differs from the mass quantization ratio. The sequence is incomplete because of the balance between Plank, the wave quantization ratio, and the mass quantization ratio. The point where balance is acceptable is the point where Plank is closest to its value.

So, if we knew the relative quantization ratios (between wave and mass), and we knew Plank, then we can count down from Higgs and pick the points where the difference between perfect entropy and actual entropy is minimum, and thus lay out the entire mass/wave set thru the proton.

My conjecture that the wave quantizations go as the Fibonacci angle seems to bear out, because the maximum entropy points occur when the Shannon index is a multiple of three. My quantization angle for mass is a bit more suspicious, but comes close to the quark system of taking three, two at a time. This model is more right than wrong, especially since it works from the vacuum up to the Black Hole. Theoretically, if we believe in maximum entropy, and we think Shannon has nailed that, then, adjusting for quantization ratios, it must work.

Facebook still sucks

They still bounce the screen around, makes it impossible.

Does light push Nulls, Part 2

It is the perplexing problem. So let's break the problem into components.
Light has a phase gradient, Dg, which gives it frequency.  Light also has intensity, Dg/Nd, where Nd is the density of Nulls at the source. The speed of light is given by the rate at which gradient can be equalized, and that is the sample rate of phase, and sets frequency.

Nulls then move forward or backward depending on the Ng of free space where phase is maximally balanced. If free space has fewer Nulls, then Nulls get pushed forward. The presumption is that sources of light have denser Null regions, so the general answer is that light pushes Nulls along its path, and its path is mostly determined by the right angle between the quantization rates that make up the light wave.

So, that makes sense, but we have to include the topology of the vacuum. The sample rate of phase, I think, is determined by the relative size of the two phase samples, and that is determined by the relative set of vacuum over which vacuum density is equalized. Over that set, Nulls and both phases have the same density. Thus empty space is required to have some minimum of Nulls to maintain the balance, hence the quantization effect. Free space is the density of Nulls required to balance the sample rate. That determines Plank, the sample rate, and quantization.

Everything is either balanced, or not.  Light transmission would restore the density of Nulls in free space. But, if Null packing is sparse, free space slows down as the density of Nulls exceeds the density of phase. In the adjustment, the sampling rate of phase drops.  If Null packing is dense, then free space becomes sparse of Nulls, and the sampling rate increases. That translates, once again, to the issue of Plank.  Has it changed lately?

Then doesn't the speed of light depend on Null density? No, it depends on Plank being constant. The minimum phase direction is always the direction that restores the sampling rate; the two are interchangeable when Plank is constant. It is still the same issue: is Plank constant, and why would that be? Plank is the distance over which vacuum density is equalized. One way out of the problem is to prove that Plank always seeks to have the vacuum organized in triplets: -phase, 0, and +phase. And that is again a topology issue.

Tuesday, March 25, 2014

Does light push along free Nulls?

This is another issue I left behind. The answer is likely yes; I am not sure yet, but I suspect yes. I dropped the issue because I began to think about the topology of the vacuum. The idea that the vacuum samples at Nyquist, and that under sampling resulted from overlap, did not get me anywhere; it was much more likely that under sampling came from topology. I left Nyquist as a reference point, the apparent sampling rate when phase is balanced. Then I changed my views on the Null, figuring it had no need to sample at all. At that point I never went back to light, and need to rework the issue.

Second, the thing about Nulls. They see phase balance, but a phased sample will exchange with them. But, with respect to light, this is a moving wave, and I will end up needing to do integral equations to approximate the motion of Nulls, and I am not anxious to do integrals. I would need to negate Maxwell, so to speak, see the movement of anti-light, and find out what happens. I am looking for a simpler method.

Anyway, the intuition is that field intensity is not infinite, and the intensity is the ratio of phase to Null. It is maintained, ignoring free space losses, because the path forward is the minimum path. So, waves do carry Nulls and maintain field intensity. Looking at the raw form of the Maxwell equations of wave motion, they simply give the wave conditions and say nothing about how the conditions were met. We write the conditions with respect to the kinetic energy of the electron because that is the only charged mass we can move around. Positive and negative is a convention, to write equations around the middle point. We do not have positrons around these parts. Magnetism is generated positive phase, real positive phase, in response to negative phase. In a steady current, it has no circular direction, but with a changing negative density, both fields will rotate against each other, because the changing phase is changing pretty fast and each field swirls trying to catch the other and minimize imbalance. So, we always have to remember the point of view; mainly, everything is with respect to the electron.

Fields and Quantum scale

Purists on fields do not believe the world is made of the same substance, just phase. But it is a matter of scale. We normalize charge to the atom, but the gluons and quarks have about 10,000 or more times the phase embedded within those huge masses. The electron is way down the scale, and gravity is a million times lower than that when it comes to phase density. It is not destabilizing for phase values from gravity to pass right thru the proton, and it is not destabilizing to pack electron phase into quarks. Those relative phase densities are way below anything that would destabilize. The quantum orders are that way for one reason: the packings that weren't way different in scale have long gone; they weren't stable. That scale guarantees a right angle for EM propagation over long distances, and even that angle is not perfect. The opposite is much harder to work; the idea that the vacuum can support four or more field functions is absurd.

Super fluid vacuum theory!

I don't think so. Ultimately one has causality, and that is determined in the vacuum, and setting a Nyquist rate there does the trick. Beyond that, we need only determine the minimal requirements to exchange the basic thing, whatever the basic thing is. But whatever the basic thing is, you can bet that it is a zero and two opposites, each distinguishable. Beyond that, the simple act of counting things sets up the quantum agglomeration function. And if nature is anything, it certainly seems to count well.
As an alternative to the better known string theories, a very different theory by Friedwardt Winterberg proposes instead that the vacuum is a kind of superfluid plasma compound of positive and negative Planck masses, called a Planck mass plasma.
Since then, several theories have been proposed within the SVT framework. They share the main idea but differ in what the structure and properties of the background superfluid must look like. In the absence of observational data which would rule out some of them, these theories are being pursued independently.

Making groups

Equal is when -i*Log(i) = -j*Log(j) and i + j is the complete sequence. Then we have equality.

So, when i/(S+1) * f**N = 1/(S+1) * f**M, and

i = f**M/f**N = f**(M-N), and i nearest integer, we have a set in which the larger sequence can be composed of the smaller.

This gives the Fibonacci sequence, naturally. It tells us what the complete sequence is when we can break it up into a subset which is M-N orders down.

Then compute the error from rounding i. The error decreases, but the error is positive and locally minimum when i is a multiple of 3, and negative on either side. This is mainly because multiples of 3 differ the most from the natural ratio, so the rounding step is minimal. These points, I claim, form the sets into groups with a largest common factor. They have the maximum entropy basis set.
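As an illustration of watching the error from rounding (my own example, not the post's exact computation): with f equal to the golden ratio, the powers f**k land near whole numbers (the Lucas numbers), and the rounding error shrinks while alternating in sign:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio, ~1.618

# Powers of phi land near whole numbers (the Lucas numbers), and the
# rounding error shrinks while alternating in sign:
# phi**k - round(phi**k) = -(-1/phi)**k for k >= 2.
for k in range(2, 12):
    x = phi ** k
    print(k, round(x), f"{x - round(x):+.6f}")
```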

If we have two ratios, f and p, what happens?

i =  f**M/p**N, and M > N. Change exponents: f**M = (p**logp(f))**M

i = p**(M*logp(f) - N), and i nearest integer; i is the sequence that makes a set of p ratios within f ratios.

One can find maximum entropy basis sets by watching the error in i.
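A sketch of that search (function and bounds are my own): for ratios f and p, scan exponents M, pick the N that puts i = f**M/p**N nearest a whole number, and watch the error:

```python
import math

def best_matches(f, p, max_m=15):
    """For each exponent M, find the N making i = f**M / p**N closest
    to a whole number, and report the rounding error."""
    out = []
    for m in range(1, max_m + 1):
        best = None
        max_n = int(m * math.log(f) / math.log(p)) + 1
        for n in range(max_n + 1):
            i = f ** m / p ** n
            err = abs(i - round(i))
            if best is None or err < best[3]:
                best = (m, n, i, err)
        out.append(best)
    return out

# Sanity check: with f = 4 and p = 2 every i = 4**M / 2**N is an exact
# power of two, so the best error at each M is exactly zero.
for m, n, i, err in best_matches(4.0, 2.0, max_m=5):
    print(m, n, round(i), err)
```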

If p is a ratio of f, then the groups are aligned. For example, making p = 3/2*f, I get groupings every 2 and 3 steps, as we expect. They actually go 1,3,2,3,1,1, then the pattern diverges. But I reference all the sampling relative to Nyquist, the common denominator of 2. This is a work in progress.

The least common denominators are the energy levels, so the matching goes as:

i = M/N, and one should start with these, really locating the i that have integer ratios.

Plank and the magnetic

Plank is a measure of the density of Nulls to phase in the universe. Plank is small when there are more Nulls than phase by a large margin. When Plank is small, then packing in large groups happens at the nuclear level. The magnetic is the Nth order down in which stable group arrangements grow exponentially, and the quantum system is not observable. That point is simply what we call magnetism. It's on the boundary. It is quantum in the early stages, when phase has not been minimized on a small scale. Time, on the Universe scale, is really the rate of formation of stable groups. Stable groups are separated when the Pauli mass, which grows faster, is a Shannon composition of a scaled Fibonacci sequence. We get complete groupings over the entire range, with 'multiply' defining the separation between stable groups. At the magnetic, we get small scale differences as we get more 'multiplies'.

The problem astrophysicists need to solve is the density of Nulls in free space. They need to find discrepancies that indicate a different Plank, and that tells us the ultimate density of Nulls, and thus the rate of group formation and the eventual fate of the Universe.

Group theory is back in the ball game in physics. Really, the cornerstone.

Lattices, vacuum, Plank, Pauli, Higgs and Shannon

Lattice scientists need to use Shannon. Let's try it out.

We have three vacuum shapes, distinguishable up to Plank. What is the density?

In the complete sequence, our SNR is Plank. Assume the vacuum units are spheres of radius r1, r2, r3; r1 and r3 occur with the same frequency, they are negative and positive phase. We do not know the density of free Nulls, the r2.

There is some density over which the -2i*Log(i) and -j*Log(j) are within an integer; that makes a counting system. The -Log(x) gives you the volume of each sphere when the condition is met. Plug them into the Shannon condition, using the Plank SNR, find the j (number of Nulls), and you have it: the density of free Nulls in space, and you can compute the relative vacuum sizes, in units of Plank. You need, then, to adjust the Log(i) into two volumes that occupy the same space. The number of vacuum units that meet the condition gives you the Higgs size, at the Pauli under sampling angle.

Let -Log(i) = r**2, then dividing up the phase by volume gets: r1**2 + r3**2 = r**2, the Euclidean right angle. 

Pauli is the nearest exact 'multiply' in the system, I think. It tells us how to under pack Nulls so that a multiply moves from the Shannon condition at one scale to another. Work the problem at two different scales and see if underpacking results.

This method should work on most lattice structures. For example, the problem of finding the integer set used for Fourier analysis given the separation of samples. That is exactly what the vacuum does, that number is given by the golden angle.

Work the problem for quarks and gluons the same way. I need to set up dual counting systems and find the common multiples, within Shannon, from Higgs on down. These are the stable groups. They are the sets where waves and mass coexist. Kinetic energy is still a work in progress for me at these huge quant levels.

At the range of gravity, the quant sizes are so small that one can use a single counting system and ignore Pauli packing of gravitons. But at Black Hole compaction ranges, that approximation breaks down, because all the packed masses get squeezed. The same approximation distance holds for Newton physics. The difference is when 'multiply' becomes stable over a wide range. In our world, the magnetic seems to be a sharp boundary where that occurs.

Shannon, Pauli and Fibonacci are at the heart of any counting system, including group theory, because together they define the maximum entropy solution for a given spatial separation.

Do the quarks match the theory?

Here is how I am working the problem.  First, I am pretty sure the charge numbers got embedded when atoms got made. For now I skip that.

Second, the golden rate quantizes wave modes, fewer wave modes than the mass states the Pauli rate quantizes. But it generates wave modes in opposite phases.

The colors are mass exchanges (or mass balances) across the wave.  The system is in balance because there is some generational difference in which wave quantization matches mass quantization, to a whole number. At that point, there are three mass transfers possible that match wave quantization. The proper balance creates the colors.

The Gell-Mann–Okubo mass formula is the thing to look at here. I think we can get at a closer model for that.

Up/down, top/bottom, and charm/strange are the Pauli generated masses that most match the golden generated quantization. The mass differences are huge; they are about (3/2)**10 apart. The largest, top and bottom, have this huge spread in mass. My guess is that the Hadron is really a mass balancing act, part packed Nulls and part free Nulls.
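A rough arithmetic check of the "(3/2)**10 apart" claim, using midpoints of the mass ranges quoted in the quark table later in this post (my own numbers and rounding): the generation ratios land in the same ballpark as (3/2)**10 ≈ 57.7, not exactly on it:

```python
# Midpoints of the quark mass ranges (MeV) quoted in the table below
masses = {"up": 2.4, "down": 4.75, "strange": 100.0,
          "charm": 1250.0, "bottom": 4250.0, "top": 171200.0}

step = 1.5 ** 10                                                 # ~57.7
print("bottom/strange =", masses["bottom"] / masses["strange"])  # 42.5
print("top/charm      =", masses["top"] / masses["charm"])       # ~137
print("(3/2)**10      =", step)
```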

In short, what I am doing goes to the heart of the matter. We are dealing with two cross coupling sampling rates, and free nulls move at light speed, they are not mass. The condition when quarks are stable is the Shannon result:

The -j*log(j) and the -i*log(i) for the golden and the Pauli match; that is maximum entropy. There will be a set of three j, and some j*0.82 of i, all scaled to the generation of masses you are dealing with.

Three colors, and wave quants in plus or minus,  give us about eight or nine quantization matches in which wave modes make mass balances.

Here is the thing. Free Nulls are not mass; they are empty spheres of vacuum. They sit still, but phase vacuum exchanges with them at the golden rate. Packed mass gains inertia because the packed Nulls are indistinguishable; they have no Pauli path. Standing and moving waves can handle free Nulls. So, what we see in the rest case is much free Null in the wave modes at these high quantization levels. They are not weighable. But when the color matches are completed, you get captured Nulls, you get this semi particle. This all happens because the duality is real. Free Nulls are not mass, not wave really, but wave modes don't mind them a bit.

Monday, March 24, 2014

Take note, if you dare

First this

The relationship between these two conditions is as follows. A topological space is Hausdorff if and only if it is both preregular (i.e. topologically distinguishable points are separated by neighbourhoods) and Kolmogorov (i.e. distinct points are topologically distinguishable). A topological space is preregular if and only if its Kolmogorov quotient is Hausdorff.
Then this
In physics, the principle of least action – or, more accurately, the principle of stationary action – is a variational principle that, when applied to the action of a mechanical system, can be used to obtain the equations of motion for that system. The principle led to the development of the Lagrangian and Hamiltonian formulations of classical mechanics.
So I am almost done. The first defines how to make a vacuum; the second limits us to maximum entropy. The vacuum is topologically distinguishable and has neighborhoods. The packed Nulls become indistinguishable within. Going from the one to the other is the packing process. Being indistinguishable is the process of looser packing: the density of space drops inside packed Nulls, fewer Nyquist per vacuum, and that would be Pauli; I think we get it automatically.

I would not be surprised if nuclear theory gets the two modes mixed, calling waves particles at the subatomic level, and making things more complex. An anti particle may be the scientists thinking packed Nulls should have a symmetric transformation. But the anti wave would exist.

A quick review: Hausdorff gets me the vacuum and both Fibonacci sampling and congested Pauli packing. Minimum action gets me Shannon and maximum entropy; Fibonacci makes Shannon true, and gets me the hyperbolic basis functions. The hyperbolics meet the Shannon condition, and handle negative and positive forms separately.

But Hausdorff tells us that without foreknowledge, F codes are Pauli packed, because they become indistinguishable (they cannot be the best packing, as they make Nulls indistinguishable before the total sequence is known). That should lead to a multi-level quantization scheme composed of multiples of 2 and 3.

The two thirds comes from the vacuum triplet, and Hausdorff would look at the boundary between distinguishable or not, and conclude the ratio in packing. At the boundary of packed Nulls, the Pauli packing ratio becomes the norm.

Light goes at the golden rate but follows the Pauli path; still, that is the path where Nulls will not be packed. Nyquist is a given because of cause and effect. Pauli packing makes things indistinguishable, and allows negative or positive phase to be embedded. We use Shannon to find the quantization order anywhere. But we break up quantization levels into multiples of wave mode, under the 2/3 rule; everything quantized is a multiple of 2 and 3 at any given level. Light has the proper impedance because it comes from the atom, which set the quantization levels.

Whew!! I hope this matches the gluons.

The topology of rational fractions?

The golden ratio, below, is the separation between whole numbers that breaks a collection into its smallest set of whole numbers. Root five is the first root with a single repeating continued fraction term, [2; 4, 4, 4, ...]. It is the first number that can break up any sequence into whole numbered fractions, I presume. Pauli adds the third shape, the Null, and gets three shapes: the golden counter, which is then multiplied by 2/3, and gives the continued fraction that can make fractions at integer-three separation.

Divide the golden ratio by 3/2: (1 + root(5))/3;

root(5) = [2; 4, 4, ...]; root(5)/3 = [2/3, 3, 3, 3, 3, 3, 3],
which becomes the continued fraction 3, 3, 3, 3, 3, 3.
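The integer continued fraction of root five can be checked directly (a floating-point sketch of my own; note that the [2/3, 3, 3, ...] form above is not a conventional integer continued fraction, so only the root-five expansion is checked here):

```python
import math

def continued_fraction(x, terms=6):
    """Leading integer continued-fraction terms of x (floating-point sketch)."""
    out = []
    for _ in range(terms):
        a = math.floor(x)
        out.append(a)
        frac = x - a
        if frac < 1e-12:
            break
        x = 1 / frac
    return out

print(continued_fraction(math.sqrt(5)))   # [2, 4, 4, 4, 4, 4]
```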

The golden vacuum shapes, then, differ by .61803...; they are optimally packed and can hold two numbers. Add the third and count whole fractions by 2/3, with whole number (rational) fractions.

The two vacuum shapes for minus and positive are not equal, according to golden.

Vacuum shapes, topology, Nyquist, Golden and Pauli

If the samples were Nyquist, they would overlap by one half. But they optimally pack by the golden ratio, and that determines their relative shape ratios. They simply push into a shape that packs space as densely with vacuum as possible. Then phase imbalance is pushed to the optimum congestion ratio, the ratio at which unpacked and packed Nulls are most stable and make symmetric groups. That gives us three curvatures: the unused Nyquist, the golden and the Pauli. The latter is apparent, the middle is real, and the Nyquist is a relative basis.

How does the vacuum do this? The relaxation ratio is the ratio by which one sample changes shape relative to another. That always maintains balance, within the precision of vacuum density. Free Nulls do not exchange. So we get this ratio of obliqueness: the Null, and two antisymmetric shapes. They are in balance at maximum packing to the extent vacuum density is measured to the golden ratio, which would be the density distance over which misshapen vacuum samples squeeze each other.

The optimum congestion rate is simply the rate where arithmetic works, determined by symmetrical group formations. That number should be related to Nyquist and the golden, a natural outcome, though I am a bit over my head; but let's look:

Nyquist .5, Golden .618, Pauli .6666; that is .5, +.118, +.048. See any pattern? Neither do I; call the pattern expert. But the Pauli rate should be fixed to Nyquist, and the golden rate fixed by the measurable density ratio within that. If the vacuum shaped itself past the range where Pauli is untrue, then we get constant shape distortion as gaps appear now and then. Pauli is the first stable group integer, determined by the density of space, in units of vacuum. And when that ratio computes to the first integer, we have Pauli. The shape ratio adjusted to make the first integer a workable 'counter', and it becomes a close approximation to golden. The first integer is the density/vacuum size, computed in Nyquist 2s, without remainder. Where's my calculator?
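Since the post asks "Where's my calculator?", here is one (a trivial check of my own) for the three rates and their spacings:

```python
import math

nyquist = 0.5
golden = 2 / (1 + math.sqrt(5))   # 1/phi = 0.618034...
pauli = 2 / 3                     # 0.666666...

print(f"golden - nyquist = {golden - nyquist:.4f}")   # 0.1180
print(f"pauli  - golden  = {pauli - golden:.4f}")     # 0.0486
```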

So anywhere and everywhere, the vacuum equalizes phase up to the first integer, and Pauli is set to handle the remainder. The numbers are scaled to make Pauli the first integer.

That makes a universal constant. Free Nulls go at the golden rate, the speed of light; packed Nulls go at the Pauli rate. E = mc**2 is off a bit because the packing of free Nulls is less than golden.

So now we have it. As matter is packed, it leaves more excess free Nulls relative to vacuum density. Phase alignment is always a bit less than can support free Nulls. But free Nulls travel at golden, so scientists think there is more energy hidden up there, and there is. The free Nulls get pushed down in order (make smaller mass), and that excess eventually becomes Black Holetrons, the teeny weeny things the farthest out. But Black Holetrons are measuring the common mode of the Universe, in terms of excess free Nulls; they make phase alignment, and thus we will eventually go Black Hole, and blow up.

The ratio of Golden curvature to Pauli curvature is constant at each quantization level: waves at Golden, mass at Pauli. Matter is stable, within a level, because the phase alignment which contains matter adjusts slightly faster (Golden) than mass can disintegrate (Pauli). Because the ratio is constant, I can define the right angle and use complex quantization levels. The ratio between the two is .928. So energy is Emin * sqrt(1 + .928**2), or k*Emin = 1.36*Emin. At each level I break up phase and mass into sin and cos components of k. The easier way to do this is to pack all mass at Pauli, then go back and pack all the wave at Golden.
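Checking the arithmetic (my own sketch; I take golden as 1/phi and Pauli as 2/3, per the rates quoted earlier):

```python
import math

golden = 2 / (1 + math.sqrt(5))   # golden curvature, ~0.618
pauli = 2 / 3                     # Pauli curvature, ~0.667
ratio = golden / pauli            # ~0.927; the post rounds this to .928
k = math.sqrt(1 + ratio ** 2)     # ~1.36, the quoted energy factor
print(round(ratio, 3), round(k, 3))   # 0.927 1.364
```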

That means the Universe is closer than we think; we measure light at mass speed, according to E = mc**2. It also means that the Null gets a bit more space than the phases when the vacuum adjusts to the Golden rate. These ratios should show up when an expert in topology works the problem. The universe is defined by the yet unproven theory of how two antisymmetric shapes pack relative to their symmetric counterpart. Race me to that theory, you topologists everywhere!

That would be physics' counting system, defined.

An amateur doing group theory (updating)

I start with the idea that symmetrical grouping by charge came in the 1/3 multiple. At that point, the kinetic energy of the electron in hydrogen was balanced, likely with the kinetic energy of the containing magnetism. The impedance of free space is the kinetic ratio between the inner and outer, computed in units of 1/3 charge. Then that ratio becomes the mismatch between phase and Null in the region when the first three levels of quantization take place. That symmetry gives me the apparent curvature (the unit of quantization ratio), computed in units of stable group packing. The quark forms a stable group; it is the first point where arithmetic works for group theory. Then atoms match kinetic energy in those units. My real Pauli rate is less, and must be computed in units of the original 2/3 arithmetic. In other words, the ratios were created because the real Pauli rate makes an outside containment field faster than the nuclear force can stabilize. That delta must be computed in 2/3 arithmetic, the radius of the hydrogen, with a remainder equal to the region of stability for hydrogen.

But the matter is complicated by a prior fundamental unit, made of up-downness, top-bottomness, strangeness and charmness. Mass, or packed Nulls, was captured in a way to make a whole number out of all those at the nuclear level; it is the unit being calculated, actually.

How? Still working the issue. Any expert, please stand up, now.

The optimum packing ratio assumes Nyquist, and that sets the apparent curvature to the golden angle. So all the ratios in the nuclear have the common multiple of 2. And with that, it is computing the whole number that makes stable groupings.

The ratio between the golden angle and the Nyquist angle is the redundancy in twos arithmetic.  That must be the ultimate Plank distance. The real rate must be quantized in units of those. But that is just a hand wave on my part.

Well, up/down have the best mass ratio, likely came first, and are the smallest common multiple? The charm and strange are in the next multiples of that. Then top/bottom, and finally, those common multiples arranged into multiplicands to make 1/3 units. Reconstruct those ratios and look at the whole number that packs them all, at the golden angle? That whole number should be packed as 11111101, of that form. It is not. The unpackedness must be determined by the real Pauli ratio being different from the golden.

I should be able to recompute the nuclear in Nyquist 2s arithmetic, then, with that arithmetic, compute the partially packed whole number of the nuclear; the packing ratio is the minimum unit by which Pauli rates can be set. So I find the closest Pauli rate?

The angle 137.5° is related to the golden ratio (55/144 of a circular angle, where 55 and 144 are Fibonacci numbers) and gives a close packing of florets.
That golden ratio is the apparent angle when the nuclear is packed. The ratio 55/144 is for sunflowers. We need that ratio for the nuclear. Except that the apparent ratio must be the optimum congestion ratio, 2/3; that is the group separation where packed Nulls are most stable and most accurate at the same time.
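A quick check of the quoted angle (my own arithmetic): the golden angle is the smaller golden-ratio fraction of a full circle, and 55/144 of a circle is the Fibonacci approximation to it:

```python
import math

phi = (1 + math.sqrt(5)) / 2

golden_angle = 360 * (1 - 1 / phi)   # ~137.5078 degrees
fib_approx = 360 * 55 / 144          # 137.5 degrees exactly

print(round(golden_angle, 4), fib_approx)
```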

Maybe the world is Nyquist, the packing is golden, but stability of group separation establishes groups because of excess balance in Nulls and phase. The world may simply be flat, packing set to ratios close to golden, but Pauli curvature is established by imbalance. My next step would be to just use the golden angle, based on the limit of the golden ratio. That would be correct if high density got packed first, as the vacuum would have packed as closely to the golden as required by the highest density.

This is it: Golden, and it is a faster sub-sampler, relative to optimum congestion. The world is made of units of these?

Here are the properties. Try to find common fractions in the groups.

Name     Symbol   Antiparticle   Charge   Mass (MeV/c2)
up       u        u-bar          +2/3     1.5–3.3
down     d        d-bar          −1/3     3.5–6.0
charm    c        c-bar          +2/3     1,160–1,340
strange  s        s-bar          −1/3     70–130
top      t        t-bar          +2/3     169,100–173,300
bottom   b        b-bar          −1/3     4,130–4,370

Measured masses do not conform to the same groupings as charge, phase and antiness. And these:

Name               Symbol   Antiparticle   Charge   Mass (MeV/c2)
Electron           e        e+             −1       0.511
Electron neutrino  νe       -              0        < 0.0000022
Muon               μ        μ+             −1       105.7
Muon neutrino      νμ       -              0        < 0.170
Tau                τ        τ+             −1       1,777
Tau neutrino       ντ       -              0        < 15.5

All the charges are relative to the electron. The mass is set as the energy equivalent. The charges balance, mainly because they balance across the atom, and set the principal value, relative to the electron. It looks like charge, not mass, set up the groups. These were not forged in gravity; they were forged with a strong, negative phase, magnetic monopole. The density at creation did not match an efficient packing, and the magnetron formed at the normal curvature, which we still have, and that packed the subatomics, which are now inefficiently packed.

My Pauli curvature should be (-1/3,0,+1/3).

Name         Symbol   Antiparticle   Charge (e)   Spin   Mass (GeV/c2)   Interaction mediated   Existence
Photon       γ        Self           0            1      0               Electromagnetism       Confirmed
W boson      W−       W+             −1           1      80.4            Weak interaction       Confirmed
Z boson      Z        Self           0            1      91.2            Weak interaction       Confirmed
Gluon        g        Self           0            1      0               Strong interaction     Confirmed
Higgs boson  H0       Self           0            0      125.3           Mass                   Confirmed
Graviton     G        Self           0            2      0               Gravitation            Unconfirmed


The modified theory of SpaceTime

The curvature of space determines the density of free Nulls. Free Nulls do not exchange. In free space, if the free Nulls have all gone downhill, then the curvature hits the golden angle and light is about 20% faster. If the wave front spreads to half density, then its bandwidth equals the vacuum rest period. It goes at Nyquist, impedance drops, but light is half dim.

The Nyquist limits the vacuum speed; Nyquist is symmetry, and we cannot go beyond symmetry and still have causality. The golden rate is the rate when everything gets optimally packed at the optimum speed. The Pauli rate is the assumption that the vacuum is perfectly congested, and we over pack. If we are all at the golden rate, then we don't get supernova explosions.

The only thing I see changing the speed of light is the density of Nulls, or the curvature is related to the density of Nulls. SpaceTime theory sets light at Nyquist, then stretches the vacuum size so curvature and free Null density match. SpaceTime fails because the world appears quantized. But quantization ratios change if Null density changed curvature. So curvature itself becomes quantized. In this case, each region consumes and packs Nulls, and the curvature decreases in jumps. We get a world of packed matter, and light speed is Nyquist. Does that make a black hole? If free Null density causes curvature, and G were constant, then Einstein would not distinguish the two. Still, the curvature would equilibrate in each dense region; no Black Hole.

Sorry, Stephen Hawking, though you are still a great physicist.

 Something is not quite right.

Sunday, March 23, 2014

Almost! Anyway, I am not the only believer

Millette from Canada! Welcome to phase theory and Pauli sampling.

The derivation of the Heisenberg Uncertainty Principle (HUP) from the Uncertainty Theorem of Fourier Transform theory demonstrates that the HUP arises from the dependency of momentum on wave number that exists at the quantum level. It also establishes that the HUP is purely a relationship between the effective widths of Fourier transform pairs of variables (i.e. conjugate variables). We note that the HUP is not a quantum mechanical measurement principle per se. We introduce the Quantum Mechanical equivalent of the Nyquist-Shannon Sampling Theorem of Fourier Transform theory, and show that it is a better principle to describe the measurement limitations of Quantum Mechanics. We show that Brillouin zones in Solid State Physics are a manifestation of the Nyquist-Shannon Sampling Theorem at the quantum level. By comparison with other fields where Fourier Transform theory is used, we propose that we need to discern between measurement limitations and inherent limitations when interpreting the impact of the HUP on the nature of the quantum level. We further propose that while measurement limitations result in our perception of indeterminism at the quantum level, there is no evidence that there are any inherent limitations at the quantum level, based on the Nyquist-Shannon Sampling Theorem.

Getting Planck from Higgs, and curvature too high

I realized I missed the Planck factor, the fundamental energy level. But if Higgs knows a 100 bit digit, then the smallest energy is 1/[(3/2)**Nmax]. Then 1+SNR becomes (energy + k*energy)/energy for each order, independently, I think. But that comes out too even; working on that. That means order counts basic energy times k+1, and that may be right. Jumping one energy level does not jump an order. Maybe I have mixed digit size with order, and I have to straighten that out.

The order jumps at k = (3/2)**order. But quantization levels go as k*e, I think. Signal is power, energy per sample.
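If I read the jump rule right, a quantity k sits in the order given by floor(log k / log(3/2)), and the jump happens exactly when k crosses a power of 3/2. A tiny Python sketch of that reading (the function name and the epsilon guard are mine, not part of the model):

```python
import math

LOG32 = math.log(1.5)

def order_of(k, eps=1e-12):
    """Order of a quantity k under the rule 'order jumps at k = (3/2)**order'.
    eps guards against floating point landing just under an exact power."""
    return math.floor(math.log(k) / LOG32 + eps)

print(order_of(1.5 ** 5))   # 5: exactly at a power of 3/2
print(order_of(7.0))        # 4: between (3/2)**4 = 5.06 and (3/2)**5 = 7.59
```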

My curvature is way too high to make black holes. We would have needed a curvature change. But the curvature likely matches the atom and the subatomics. Also, leave proton mass out of the picture when doing atomic orbitals; just use electron mass, charge, kinetic energy, and proton charge. But there are no Black Holes unless we had the miraculous curvature change, well after the atom was made. Or else Higgs is way too massive. I don't see things connecting; they go below Planck. 3/2 covers most stars, I think. So why would the ratio change after star mass was grouped?

Look ahead size is way too small to get galaxies, unless we have the dark matter thing going on. If we reached the magical curvature of perfect G, then Planck everywhere would suddenly get more accurate. It didn't, near as I can tell. The alternative is that we have two curvatures: the curvature depends on another variable, and that variable modifies vacuum shape. But the vacuum cannot change everywhere faster than Pauli, yet Pauli speeds up. Is that the case? If Pauli were faster in intergalactic space, the red shift happens. But then it hits our curvature and looks normal. The thing that is constant is Nyquist. The uncongested sampling rate is the Fibonacci golden angle; we have the congested rate. Perhaps there is a flip between Pauli and Fibonacci?

The answer would be in the Quasars, look for more of them and get a better picture.

Planck distance looks close to a 3/2 world. I dunno, someone check my numbers.
1.616199(97) × 10−35 metres.
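Since the post asks someone to check the numbers: one quick arithmetic check is counting how many factors of 3/2 fit between the Planck length quoted above and one metre. Whether that count is the right "digit" measure for the model is the open question; the arithmetic itself is just a change of base:

```python
import math

PLANCK_LENGTH = 1.616199e-35   # metres, the value quoted above

# How many factors of 3/2 span from the Planck length up to 1 metre?
n = math.log(1.0 / PLANCK_LENGTH) / math.log(1.5)
print(n)   # roughly 197.6
```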

Anyway, the Universe is somewhere between a 50 digit and 100 digit F code.

Van Allen and asteroids

Negative phase and positive phase, likely contained by gravity. This is what planets do when gravity makes them phase aligned: positive outside, with an inside layer caused by gravity phase alignment in the protons.

Look here for gravitrons,  between Mars and the asteroids.

Our local star cluster has a field, which meets gravity here in the Kuiper belt.

If the explosion of magnetron stars was common in the mid-life epoch, then most cosmic radiation would have been way out of proportion.

Wide ranging magnetism

There is a huge group separation between the atomics and gravity. Anywhere in that range, Nulls are free game, and magnetic field alignment is not forming over those ranges. So in the atom, magnetism quantizes at the usual impedance; in the Sun, magnetism has a wavelength the radius of the Sun. When the free spinning charge in the center of the Sun approaches stability, the magnetic field gets very aligned, the gravitational and magnetic fields have huge positive phase alignment, and the Sun wants to split apart. It will not; instead a gravitational/magnetic wave coupling emits most of the magnetic field up and away, where it likely stabilizes around gravitrons, gravity-trapped Nulls of tiny size, just outside the inner planets. The charge spin becomes chaotic and the magnetic field collapses. This is the Sun delivering alignment information to the gravity, the normal process.

Gravity is likely a double half wave: Nulls at the inner planets and fewer Nulls beyond the gas giants. The positive half loops nearly vertical, and there is a hyperbolic function for that. We hardly notice, but incoming light would be polarized, twice.

That low order for magnetic/electron interaction was not always there, especially before Hydrogen. Nor, really, was gravity. After the origination, the subatomics were partially encoded, and magnetism beat gravity to the punch; we were a magnetron star. A very strong magnetron standing wave, somewhere just inside the inner planets, and the magnetic fusion did some tricks, gravity blocked. Then the sudden compaction of the atom, an explosion, and the magnetron never returned.

Most scientists think that the Sun (along with the rest of the solar system) is about 4.6 billion years old, which means it would have exhausted approximately half its 'life'.

And we think the Big Bang was 13 billion years ago. Under the supernova plan, there was an intermediate atomic creation nearby, or else heavy metal travelled 4 billion light years. Or SpaceTime is faster than Nyquist, where it sort of broke symmetry. You tell me.

Optimal communication systems

Digital or wireless. Change order to adapt to channel usage. Do lexical analysis on the go for information packing. Make optimum codes when the lexicon groups are known beforehand.
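"Make optimum codes when the lexicon is known beforehand" is exactly what Huffman coding does: given the symbol frequencies, it builds the shortest possible prefix code. A minimal Python sketch of the standard algorithm (generic, not specific to this blog's model):

```python
import heapq

def huffman_codes(freq):
    """Build an optimal prefix code for symbols with known frequencies."""
    # Each heap entry: [total_weight, [symbol, code], [symbol, code], ...]
    heap = [[weight, [sym, ""]] for sym, weight in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # the two lightest subtrees...
        hi = heapq.heappop(heap)
        for pair in lo[1:]:        # ...merge, extending every code inside
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

codes = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
print(codes)   # "a" is most frequent, so it gets the shortest code
```

Frequent symbols get short codes, rare ones long codes, which is the "information packing" above.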

Add this to the universal semantic kernel, to locate optimal matches between searches and data organization. Teach the web to read better and respond better. Make the optimum database.

Oh yes, find optimum planting ratios in agriculture to manage higher profits.

Great tool for neuroscience. Nearly perfect for encryption, it is maximum entropy. Transmit a key through another channel.

Pauli separation paths in the hyperbolic, group structure and making atoms

Look at the hyperbolic functions. They talk about an angle that defines the containment of a hyperbolic field. That makes groups, or sets in the F codes which really do not match. The extra-terrestrials from gravity up, the subatomics, electron and magnetic: about four large groups, minimum. The Higgs look ahead is its largest packing. And the electron phase and proton phase meet that look ahead, so they can be treated as a subgroup in the atom. So, negative phase to the electron, positive to the proton. The positive curls around the atom, like a planet in motion. Add more electrons and more atoms, and the groups subdivide in the algebra, into almost complete and compact subgroups. In the atom, these are the orbital paths available. The hyperbolic sub-functions should construct orbitals.

This picture is for fun and entertainment, but it has Pauli separation planes.

The proton region can hold almost as much positive phase as Nulls, huge. The negative half of that phase gets split into partial packing, somehow, and lots of Nulls for free orbits. The standard model is normalized to the electron because that is the group boundary, nothing more special.

The magnetic is within one look ahead span of the electron. And the gravity and magnetic should be one look ahead apart, but I cannot find that distance; I have not worked the model accurately. But long wave EM on Earth and short wave visible light at the periphery of the solar system should be the same effect. So the group algebra of F codes is a really important clue.

The magnetic is ill-formed, and its quantization rate is determined by the difference in charge of a free electron with the positive phase of the conducting atom, usually copper. That quantization distance belongs to the magnetic phase motion, the impedance of space. The magnetic positive phase is like the phase equalization in any other order. Impedance is a quantization result at the source. It is limited at low wavelength because the look ahead makes gravity/magnetic subgroups, in the model. The EM quantization difference is their relative quant rates. The moving quants can carry free Nulls in the environment, as near as I can tell. Much of classical physics is built around free Nulls.

The distance between groups shows why the gravitation effect in the proton is weak, but it also means packed protons can carry gravitational field alignment and positive valence without breaking up. This should come out in the group theory.

Convert the Lorentz to hyperbolic. Look at group overlap in the F code algebra. Follow up on these hints from Wiki:
There are many ways to derive the Lorentz transformations utilizing a variety of mathematical tools, spanning from elementary algebra and hyperbolic functions, to linear algebra and group theory.
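The hyperbolic route the Wiki passage mentions is the rapidity form: a boost with velocity v is a hyperbolic rotation by the angle phi = atanh(v/c), gamma = cosh(phi), and composing collinear boosts just adds the angles. A quick numeric check in Python (units with c = 1; standard special relativity, nothing model-specific):

```python
import math

def rapidity(v):
    """Hyperbolic angle of a boost with velocity v (units where c = 1)."""
    return math.atanh(v)

def compose_boosts(v1, v2):
    """Compose two collinear boosts by adding their rapidities."""
    return math.tanh(rapidity(v1) + rapidity(v2))

v = compose_boosts(0.5, 0.6)
print(v)                          # matches (0.5 + 0.6)/(1 + 0.5*0.6)
print(math.cosh(rapidity(0.6)))   # gamma = 1/sqrt(1 - 0.6**2) = 1.25
```

Adding angles instead of multiplying matrices is what makes boosts a one-parameter group, which is the group-theory hint.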
Look up the residue theorem for hyperbolic functions; it tells us something about groups.

Phase transitions in matter are treated like the ratio changing over the mode. Whenever we make it colder, or hotter, we are changing Null density. Follow up on this:
The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or −1). Consider a square lattice; on each lattice site i there is a variable spin, which interacts with the others through an exchange interaction, J, which favors parallel alignment, and thus we define a Hamiltonian.
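The Hamiltonian the quote ends on is H = −J Σ s_i s_j over nearest-neighbour pairs. A minimal Python version on a periodic square lattice, just to make the follow-up concrete (Metropolis dynamics would then flip spins based on the energy change this function measures):

```python
def ising_energy(spins, J=1.0):
    """H = -J * sum of s_i * s_j over nearest-neighbour pairs,
    on an N x N lattice with periodic boundaries."""
    n = len(spins)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            # count each bond once: right and down neighbours only
            energy -= J * spins[i][j] * (spins[(i + 1) % n][j] + spins[i][(j + 1) % n])
    return energy

aligned = [[1] * 3 for _ in range(3)]
print(ising_energy(aligned))   # -18.0: 18 bonds, all parallel
```

Parallel alignment minimizes the energy, which is the sense in which J "favors parallel alignment".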

Another source on hyperbolic algebra here. And here are functions to compute gradient flow in mixed systems; they may help. This paper on isolating monopoles in hyperbolic layers. Anyway, an entire twenty careers, all on this model: Shannon, F codes and hyperbolics. Google, Apple, many universities, big industry: have lots of fun. If you know this, you have a good job.

Digits, SNR, the algorithm; Signal is now energy, SNR power

Angles determine curvature of the model.
I looked at three angles. I have worked the solution for the 2/3 angle down below. The F codes assume the first row, but it is quantized to the 2/3.

-2/3*Pi, 2/3*Pi : my first choice

-Pi/2, Pi/2 : Nyquist packer

Here I convert to the 3/2 numbering system:

The base-conversion formula.

These are the bases of the numbering system:

These forms are used in Shannon:

Using the first form and breaking it in two for negative and positive signal energy:

In two halves: (1/2)(e**x − 1) − (1/2)(e**−x − 1) = sinh(x)

I have, with c = log(3/2):
log3(e**x) = log(e**x)/log(3/2) = x/c
log3(e**-x) = log(e**-x)/c = -x/c
and with SNR + 1 = e**-x, log3(SNR + 1) = -x/c.
Signal Power = (Nyquist Energy * k) = dE*k.
Noise is any lower order, including Nyquist.
The system is contained, so SNR + 1 = (Signal + all of lower order)/(all of lower order).
Higgs is the top. Heavy mass has a higher order number: more dE, larger quant size, bigger F codes, highest frequency, smallest wavelength. I got it!

The various orders build a hyperbolic basis set of optimum curvature, i = Nmax down to 1 = Nyquist. There, that is the standard. Heavy has large N.

Any number x is a function of (3/2)**i, where i goes from Nmax down to 1, in both negative and positive. A change of base is a different c and a different ratio. Noise is anything below in order. Two of each, negative and positive. These quant sizes are the available F codes, the local group of F code packing available.
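The claim that any number is built from powers of 3/2 can be tried directly with a greedy expansion: digits 0 or 1 on the powers (3/2)**i, largest first. This is my sketch of that idea (a standard beta-expansion with beta = 3/2), not necessarily the post's exact construction:

```python
import math

BETA = 1.5

def expand(x, n_digits=60):
    """Greedy digits d_i in {0, 1} so that x ~ sum of d_i * BETA**i."""
    top = math.floor(math.log(x) / math.log(BETA))   # largest power needed
    digits = []
    rem = x
    for i in range(top, top - n_digits, -1):
        p = BETA ** i
        if rem >= p:
            digits.append((i, 1))
            rem -= p
        else:
            digits.append((i, 0))
    return digits

def reconstruct(digits):
    return sum(d * BETA ** i for i, d in digits)

digits = expand(10.0)
print(reconstruct(digits))   # very close to 10.0
```

After each greedy step the remainder is below the current power, so the error shrinks geometrically and the digit string pins the number down.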

Signal is energy. Take note, that is a change. Signal to Noise is a force. Noise is the low orders. These are quant sizes.

Higgs is about (3/2)**100 as near as I can tell.

Proton about (3/2)**95

Lepton about (3/2)**70

Magnetic (3/2)**55

Gravity (3/2)**8
Quasar (3/2)**4
Black hole (3/2)**2
Nyquist (3/2)**1, normalized.
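For scale, the decimal magnitudes of that ladder (just arithmetic on the listed exponents; the assignments themselves are the post's guesses):

```python
ladder = [
    ("Higgs", 100), ("Proton", 95), ("Lepton", 70), ("Magnetic", 55),
    ("Gravity", 8), ("Quasar", 4), ("Black hole", 2), ("Nyquist", 1),
]
for name, n in ladder:
    # (3/2)**100 is about 4.07e17, so the top of the ladder is very sparse
    print(f"{name:10s} (3/2)**{n:<3d} = {1.5 ** n:.3e}")
```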

Very sparse means very curved, stable mass. The hyperbolic set runs from the extremely deep Higgs wells down to the flat black holes, which have the smallest mass and a wavelength the width of the universe. Free Nulls, which are usable energy, are hard to contain.

The Higgs has about one packed mass and almost as many free Nulls in the two boson waves. Huge Null count, (3/2)**100, and a shallow well means the Bosons free up and it breaks to quarks. Quarks bind in three groups, which would come from F code algebra, and have embedded charge; then another group with free Null density, a deep well, stable. The electron: much smaller mass with embedded charge and many free Nulls, negative phase inside, and kinetic energy positive phase polar to the proton, wrapping back? And so on.

The fewer the available quant positions, the more free Nulls, and the more curved the space. The free Nulls make more light modes available for the electron and magnetic. Powerful light with many free Nulls makes long distance travel.

Black Holes are the tiniest mass made, but have wavelengths that span the Universe. The more tiny black holes made, the more the ever-slight effect on curvature occurs.