Sunday, April 6, 2014

Summary and discussion of proof and warnings

The going assumption of this model is that the vacuum is a triplet of {-phase, null, +phase}, and has two quantization ratios, 3/2 and (1+sqrt(5))/2, the golden ratio. The vacuum equalizes phase to the nearest integer match it can reach, which is (3/2)**108 against ((1+sqrt(5))/2)**91, an error of about 10e-5. The equalization process is phase exchanging with null to minimize phase in its vicinity. Phase in this sense is simply the volume of a vacuum sample: think of the samples as coming in three spheres, where the two largest always exchange with the smallest. But the relaxation time of the exchange process sets the sample rate, so in minimizing the volume of space the phase samples become locked into the Fibonacci sample rate. The entire system is a sampled-data process.
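To make the exponent match concrete, here is a minimal Python check of the claim that (3/2)**108 and the golden ratio to the 91st power nearly coincide. The comparison is done in the log domain so the huge powers don't overflow; the log mismatch it prints is on the order of the 10e-5 error quoted above.

```python
import math

# Check the central claim: (3/2)**108 nearly equals phi**91,
# where phi = (1 + sqrt(5))/2 is the golden ratio.
# Compare in the log domain so the huge powers don't overflow.
log_a = 108 * math.log(3 / 2)
log_b = 91 * math.log((1 + math.sqrt(5)) / 2)

print(f"log (3/2)**108 = {log_a:.6f}")
print(f"log phi**91    = {log_b:.6f}")
print(f"mismatch       = {abs(log_a - log_b):.2e}")  # ~4e-5, the order of the quoted 10e-5
```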

If this model were correct, then its error should be about 10e-6, the Planck measurement error, and Planck should be measuring an integer multiple of this constant. Planck and the model mismatch by (3/2)**0.02 - 1 ≈ 8e-3. So, this model is within 2% of Planck. However, the integer in my model is assumed to be Planck when the vacuum is maximally sparse, no packed nulls, and there may be an imbalance in the universe which accounts for the 2%. I dunno yet.
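The mismatch arithmetic can be checked the same way; a one-line sketch:

```python
# The model misses the measured Planck value by about 0.02 in the
# (3/2) exponent; as a multiplicative factor that is (3/2)**0.02.
print(f"{1.5 ** 0.02 - 1:.1e}")  # ~8e-3, the mismatch quoted above
```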

Otherwise the first paragraph above is the complete model and should describe all of physics to within 2% of what the old model does.

The main computing method is matching the two quantization ratios to Shannon maximum entropy over the complete sequence of 108 + 91 exponents. This is the volume of vacuum over which the two ratios can multiply and maintain integer quantization; the largest group the vacuum can form.
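One way to reproduce the 108 and 91 exponents is a plain search over integer pairs for the closest multiplicative match between the two ratios. This is my reconstruction of the matching step, not the actual spreadsheet method; it treats the matching simply as the best integer fit within the group size.

```python
import math

# Hypothetical reconstruction: scan exponents m of 3/2 and pick the
# nearest exponent n of phi, tracking the record-best matches. The
# pair (108, 91) appears as the best match in this range.
log_r = math.log(3 / 2)
log_phi = math.log((1 + math.sqrt(5)) / 2)

best = float("inf")
for m in range(1, 120):
    n = round(m * log_r / log_phi)      # nearest phi exponent for this m
    err = abs(m * log_r - n * log_phi)  # mismatch in the log domain
    if err < best:
        best = err
        print(f"m = {m:3d}, n = {n:3d}, log mismatch = {err:.2e}")
```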

Other than the 2% error, the warning to readers is that I am often late correcting errors in these posts. Most of my errors are sign and inversion dyslexia: large often should be small, constants often should be inverted, and signs often flipped. But the spreadsheet mostly has these corrected.

And this chart shows the groupings that would be Shannon separated, mostly to within the 2% error. They are the set of quantization ratios, sorted, which make up one Planck integer in the system. They have been Shannon matched to a base-two digit sequence, scaled so that each quantization ratio gets a separated bit; thus there are 'gaps' where fractions are counted. The groups form where the fractions are small, mostly at the points around 20 in this chart, which is the scale factor.
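Since the chart's underlying ratio list isn't in this post, here is a hedged sketch of how I read its construction: scale each ratio's base-two log by the scale factor and flag the ones whose fractional leftover is small. The ratio set below is an illustrative stand-in, not the actual set.

```python
import math

# Sketch of the chart's construction as described above: each ratio's
# base-2 log is scaled (scale factor ~20), and ratios whose scaled
# value falls near an integer bit form a group; the rest are 'gaps'.
# The ratio set here is an illustrative stand-in, not the post's list.
scale = 20
ratios = sorted((3 / 2) ** k for k in range(1, 11))

for r in ratios:
    bits = scale * math.log2(r)
    frac = abs(bits - round(bits))       # fractional leftover for this ratio
    tag = "group" if frac < 0.1 else "gap"
    print(f"ratio = {r:9.3f}, scaled bits = {bits:8.3f}, frac = {frac:.3f}  {tag}")
```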

The X axis counts as 2**N, giving the ratio in powers of two, starting with the gravity stuff on the left and moving right up to the Higgs, which evidently has a degenerate group near 200. This chart, again, is built under the assumption of compaction: no free-space nulls. The ordering count goes as one half the X axis, I think, because the wave and mass quants match and make a unit. So I expect the last six groups, from 200 down to 120 or so, to make the quarks. Then electrons get another count of 14*2, 30 or so; then magnetism another 36; and the rest goes to gravity and beyond.

A sparse compression would allow these groups space to recombine, but the space for recombination will be severely limited at the quark level. The quantization process in the spreadsheet under different compression forces will tell us more. I think I have a quantizer macro working: the vacuum naturally balances phase, and the compression force enters as a phase offset. The macro is recursive, so I will be dropping the scale factor when I start.
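For readers without the spreadsheet, here is a toy version of what I take the quantizer macro to be doing. The exchange rule and the way the compression offset enters are my guesses, and it is written iteratively rather than recursively, so treat it as a sketch of the balancing idea, not the actual macro.

```python
def balance(sample, offset=0.0, steps=100):
    """Toy phase balancer: the two largest volumes of a {-phase, null,
    +phase} triplet exchange with the smallest until phase equalizes.
    The compression force enters as a phase offset on each exchange;
    both rules are guesses at the spreadsheet macro, not its code."""
    s = list(sample)
    for _ in range(steps):
        s.sort()                      # s[0] is the smallest volume
        x1 = (s[1] - s[0]) / 2        # exchange from the middle volume
        x2 = (s[2] - s[0]) / 2        # exchange from the largest volume
        if x1 + x2 < 1e-12:
            break                     # relaxed: no phase left to move
        s[1] -= x1
        s[2] -= x2
        s[0] += x1 + x2 - offset      # compression bleeds off a little phase
    return s

print(balance([-1.0, 0.0, 1.0]))        # relaxes toward equal phase
print(balance([-1.0, 0.0, 1.0], 0.01))  # compression leaves a residual imbalance
```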

