Monday, June 30, 2014

Housing boom in California!

325 Laurel Ave, Arcadia, CA 91006
2 beds, 1.5 baths, 1,522 square feet
For Sale:               $960,000
For almost $1 million, you can actually buy this home with only 2 bedrooms. You don’t get two full bathrooms but hey, you’re living in Arcadia so stop mouthing off. This place was built in 1942 smack in the heat of World War II when we were full on battling the Axis Powers. Take on a piece of history for only $960,000. More at Doctor Housing Bubble

Economists have the natural log problem

They want to integrate quantities as if they were infinitely divisible. Physics and engineering models can do this because the bubbles of the vacuum are so much smaller than any of their approximations. Quantum physicists learned how to avoid the problem. But economics is all quants; none of it is infinitely divisible. There is no such thing as 10.56 eggs in a carton, or 4.73 of an automobile.

So their supply and demand function is a poor approximation, while the engineers' charge model is quite accurate. Using a poor approximation is OK, if the economist knows what he is doing. I use logs often, but I know that in the real world it will be a finite log in some integer base.
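A minimal sketch of the discreteness gap, my toy example and nobody's model: compare the discrete sum over integer quantities with the continuous integral the log approximation assumes. The gap converges to about 0.5772 (the Euler-Mascheroni constant) and never goes away.

    import math

    # Discrete world: sum 1/k over integer quantities 1..n.
    # Continuous approximation: integral of dx/x from 1 to n = ln(n).
    for n in (10, 100, 1000, 10000):
        discrete = sum(1.0 / k for k in range(1, n + 1))
        print(n, round(discrete - math.log(n), 6))  # gap -> 0.577216, never zero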

More error fixing in entropy

Entropy.  The probability of a price for any given transaction is 1/P, P being the price.  Small prices happen more often. So in the entropy sum I missed a sign:

-(1/P)*Log(P), or (1/P)*Log(1/P), either way.

This was all about the maximum efficiency version of the quantity theory of money. The economy is at minimum redundancy when the terms of the entropy sum, -(1/P)*Log(P), are all within an integer of each other. I was thinking about this when I was looking at the Phillips curve, which estimates inflation over time. Inflation is bad; high inflation means there are lots of price changes and something is wrong. How bad is inflation? Less than half the growth rate, I would go with that.
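A minimal sketch of that minimum redundancy check, assuming prices drawn with probability proportional to 1/P as above; the toy numbers are mine.

    import math

    prices = [4.0, 55.0, 600.0, 6000.0]          # assumed transaction prices
    z = sum(1.0 / p for p in prices)             # normalize 1/P into probabilities
    terms = [(1.0 / (p * z)) * math.log(p * z) for p in prices]  # -prob*log(prob)
    print(terms, max(terms) - min(terms) < 1.0)  # all within an integer?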

George Selgin had it right on inflation

Unemployment goes down during periods of stable pricing. He e-mailed me once on the topic, and I sort of took the normal view, that prices rise during growth. But he was right, so give him credit. Look at the chart: the green unemployment line mostly goes down when prices are stable.

Efficient pricing means stable prices, meaning the economy is growing adiabatically, with no one jerking at the levers. My excursion into physics renewed my sense of orientation after listening to some economists for too long.

Generalized entropy

Rényi entropy. Interesting, and it brings up memories from some classroom I was in once.

Sunday, June 29, 2014

Lagrange number and economics

I am not quite the expert I should be, but they derive from the Markov numbers, and they are real; they are actually learned by trial and error in the economy. They bound the degrees of freedom, and the Markov triples go as:
(1,1,1), (1,1,2), (1,2,5). Look at the resulting Lagrange numbers as band stops. The degrees of freedom go as 1, 2, 3; and that is the base. 2 and 3 are the integer logarithm bases that bound the natural log, and they are the most efficient integer bases in terms of minimizing redundancy.
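For the record, the standard construction (a known fact about Markov and Lagrange numbers, not my invention): the Lagrange number for Markov number m is sqrt(9 - 4/m^2). A quick check:

    import math

    markov = [1, 2, 5, 13, 29]                      # Markov numbers from the triples
    for m in markov:
        lagrange = math.sqrt(9.0 - 4.0 / (m * m))   # Lagrange number for m
        print(m, lagrange)                          # 1 -> sqrt(5), 2 -> sqrt(8), 5 -> sqrt(221)/5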

So, I think we are going to see DC going linear, and the next level out, the small states and the Fed, for example, will be square noise. But farther out, the Californians, Texans, and Central Americans have to triangulate; they have to watch the square variance of all the parties in and around DC. That is why we see these shrewd moves by Jerry Brown and company to strike a deal with DC just in the nick of time. And that is why we often have these rotating presidents coming from California and Texas. To triangulate is to keep the bounds absolutely separate, then look to make moves in phases of three relative to the squares.

Stiglitz screening

Economically, I think that is just taking the log of the representative samples. It allows the screener to keep an orthogonal list of separable types of transactions and people. It is one of the bases for Kling and PSST.

How did Ben do?

I assume the effective federal funds rate means active trading, so Ben is in the market throughout this series on inflation and the federal funds rate. Ben is a 100% interventionist; he thinks that is his job.

Ben got that one blip in 2003 when he lowered rates a quarter and got 1.5 points of inflation. But otherwise, throughout the series, as you see, it is the central bank breaking the pricing mechanism, with the sign reversed from current theory.

Ben drops the rate and inflation drops a point, in 2003. He starts to exit the market and rates rise, as does inflation, from late 2004 to mid 2006. Then he gets back into the lending market, and rates drop again. By late 2007, he knows something is wrong, and he re-enters the market heavily. Inflation levels off a quarter later. Then, after a quick peak in oil, it's all downhill for both, with inflation dropping after a lag now. So Ben has everyone about six months confused, but he is still driving down inflation as he actively drives down rates.


So what is cause and effect? The oil peak in 2009 is definitely the economy trying to get oil rationed, rates or not, so the pricing mechanism is dealing with severe shortage by then. Then the economy drives prices down as Ben drives down rates, the pricing mechanism running a little late. Look at the level spot at zero inflation in late 2009. Notice that it follows the exact path Ben took some six months earlier. These are people thinking: does Ben know what's going on?

So the suppression of the pricing mechanism is taking longer and longer; Ben cannot get inflation down without heavy intervention to suppress rates. Then, as Ben hits bottom, he is the only player in the market, and cannot go lower.

Once growth picks up a bit, the pricing mechanism is back working. So Ben mostly screwed with the pricing mechanism, and it failed to set the proper allocation for oil, instead being reset three times by Ben's monopoly market intervention. Pricing and allocation in the economy were fairly screwed up during the process because Ben either got the sign wrong or deliberately made it fail. It really wasn't until 2010 that Ben finally exited the market and pricing and allocation began to work. But basically, the sign I have is correct, and whoever has the price puzzle thing has the sign reversed. No puzzle: monopoly banks really foul up resource allocation, driving down inflation as they drive down rates. They generally do it in service of a fraudulent central government, so the 'price puzzle' term may just be political cover.

Pricing of oil.

At any point in the process, whoever is transacting in oil simply wants to match the entropy of oil loans with oil purchases and oil sales. They want to stay current and avoid getting queued up in the deliveries in and out of their business. Especially with oil we generally run a tight network, because oil gets slammed all the time by central bankers, so the industry is pretty good at keeping the money queues matched to the oil queues.

But in all these networks, once there is an empty queue, or an overfull queue, there is an integer quant shift in the pLog(p) for the good. That happened with the peak in 2009. After the shift, the oil network had space to adjust to the central banker. These pricing metrics are within an integer, and they shift by an integer, moving up or down in the sequence when the business or firm thinks a new pLog(p) is needed. It's a queuing deal, all on integer boundaries.
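A toy sketch of how I read that re-queuing rule, entirely my own illustration: integer shifts of the quant level whenever the queue empties or overflows.

    import math

    quant = 5                                  # integer quant level for the good
    queue = 3                                  # deliveries waiting in the queue
    for arrivals, departures in [(2, 1), (0, 4), (6, 1)]:
        queue += arrivals - departures
        if queue <= 0:                         # empty queue: drop one integer quant
            quant, queue = quant - 1, 1
        elif queue > 8:                        # overfull queue: add one integer quant
            quant, queue = quant + 1, 8
        p = 2.0 ** quant                       # price at this quant; base-2 fanout assumed
        print(quant, queue, (1.0 / p) * math.log(p))   # the good's new pLog(p)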

How do these networks re-queue?
 
It depends on the logarithm base, which is the line of symmetry for the industry. Their fanout comes in multiples of two or three, and that is set by the Lagrange number. Critical networks have high value added and get the base-three log; they are watched carefully, and shift as the cube root of three. I think that is how it works, but you all better check me on that, as usual. They do a rotation in phase and size, each rotation a power of the previous.
Jim Hamilton calls it the peak effect: queuing changes when the peak is higher than the previous one. I call it 'carry the 1/3' in the rotation of the digits that count through the network, like a Shannon decoder with a cubic variance and a three-way fanout. I know it's a cubic variance; it is very high value, and the oil people watch the economic square variance and key off that as their signal, making their noise cubic.

Is this price puzzle then a cubic thing?

A very good question. The correlation between the Fed and inflation is likely hard to observe during normal times. It is simply that economists watch the Fed when things are cyclic and volatile, and that is right when critical networks will watch the Fed. It is hard to tell. The Fed likely has only two degrees of freedom, and during uncertain times its square variance is the real thing to watch; the central bank will be all over the map. So I think that makes the observer's noise spectrum cubic and asymmetrical. In the pricing of oil, look: does oil require three swings at the anvil before it acquires Ben's noise? That is the key, how long before the cubics get a solid integration of the Fed variance. Narrative studies? They are cubic in noise. Romer and Romer are separating out the second moment in noise when they extract narration, as was Uncle Milt, god bless his soul. And the bankers at the Great Depression moment had to do the same; it is when they get a good look at the central banker's square noise that they find stabilization. Asymmetric cubic noise explains the sign reversal.

This Great Depression thing

We have monetarists, like Uncle Milt, bless his soul, claiming the Fed screwed up. How? Well, just at the stock market peak it restricted some speculative loan activity. Then the speculative loan activity ran the call rate up four points.

This is not a major blunder.
This is a normal central banker being off by a year with a minor adjustment. Central bankers do that all the time, have been, will continue to, and may never stop. They are bounded in the information they have. But more importantly, why were stock traders willing to bid up the call rate at the peak of a bull market? That seems weird, and certainly not the banker's fault.

Discount rate.
After the crash, the central banker lowered the discount rate and entered the market with everyone else, according to the rate chart. The market rate and the discount rate chased down the short term rate as fast as they could. Deflation was the result, exactly what we would expect when the monopoly fiat banker is intent on lower prices. It is Uncle Milt, bless his soul, who had the sign wrong, and called it the price puzzle ever since. But I never found any evidence in 1930, 1974, 1981, or 2003 that anything other than the standard happened. When the monopoly banker is determined to lower prices by lowering rates, or to raise prices by exiting the market, we get the same result. Short term rates follow the price level; they both go up, they both go down.

Money Aggregates.

We are well into the depression, and the central bank does not have the control it needs. State banking was obsolete; banking was agglomerating in NYC. It was a technology change having nothing to do with banking; all the major corporations were agglomerating.

Who started the Great Depression?

Hoover, in 1928. I can give you the week, the time, the four or five people, and what Hoover had for lunch. Hoover met with the broadcast executives and created the modern FCC, allocated the bands, and made the networks agglomerate. The economy needed more chaos in the radio markets. Premature agglomeration of the networks caused the Great Depression.

Uncle Milt got a sign wrong somewhere; so did Ben. Central bankers who fumble a bit do not cause Great Depressions. Look at the minimum redundancy equation for prices and quantity. What happens when, overnight, all the transaction costs are dramatically lowered for nationally agglomerated companies? The new networks lowered sales costs for national companies almost to zero relative to local retailers.

What happens when financing costs drop?

The retailer uses debt to cover inventory. He wants the size of his debt and the price of that debt, at the sales frequency, to vary about the same as the price of his shoe purchases and the size of those purchases. He does not want either of them to queue up; that is, his store inventory and his bank account should vary about the same. P1*Log(P1) = P2*Log(P2), within about one. He will lower prices on his shoe sales, especially if he is under pressure from the nationally syndicated department store. A banker in 1930 who still had the sign correct would have intended this to happen.
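A minimal sketch of that within-about-one test; the numbers for the two queues are made up.

    import math

    def balanced(p1, p2, tol=1.0):
        # the P1*Log(P1) = P2*Log(P2), within about one, test from above
        return abs(p1 * math.log(p1) - p2 * math.log(p2)) < tol

    debt, shoes = 2.0, 3.0       # assumed relative variations of the two queues
    while not balanced(debt, shoes):
        shoes -= 0.01            # the retailer lowers shoe prices to rebalance
    print(round(shoes, 2))       # about 2.54: the two queues now vary about the same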

Can the fiat monopoly banker adiabatically lower prices?

Assume an otherwise sound economy with no price distortions. The monopoly banker enters the short term lending market, slightly undercutting an otherwise sound banking network, reducing rates. He can do this for a while, systematically lowering prices across consumer goods. But the adiabatic cycle is a cycle; he has to stop and reverse sometime to let the banking system recover.

Can the banker get work out of the deal?

Not unless he has energy to put into the process. If he has bright economists who are slightly idle, and these economists can use their skills to help find pricing advantages over the economy, then yes, that is work out of the economy. If his economists are good, and the bank has excess efficiency, then the fiat banker can move money in and out of the vault, adiabatically; on each cycle the economists will find and communicate pricing advantages that are Pareto efficient.

A Theory of money in a maximally efficient economy

I think we just need to make q = -log(p), where 1/p means small prices happen more often. When that condition is met, we are at minimum redundancy; that is, the distribution network is a minimum spanning tree. Each term of the set -1/p*log(p) is equally variant. So this assumes package sizes are set such that the purchase price for one collection of things is as variable as the purchase price for any other set of things. I.e., the purchase of a carton of eggs for $4 should be as uncertain as the purchase of a used truck for $6000, for example. Uncertainty is tricky: it means that the next transaction, counted over the aggregate, would be as unpredictable as any other. So we expect more cartons of eggs purchased than used trucks. But if the price of eggs for some transaction were $6, we would be more surprised than if the truck were $6200.
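A minimal sketch of that surprise comparison, treating the log price ratio as the surprisal under the 1/p rule; this is my reading of the paragraph above, not a standard result.

    import math

    # Under q = -log(p) with p ~ 1/P, the surprisal of a price move is the log ratio.
    def surprise(expected, observed):
        return abs(math.log(observed / expected))

    print(surprise(4.0, 6.0))        # eggs: $4 -> $6, about 0.41
    print(surprise(6000.0, 6200.0))  # truck: $6000 -> $6200, about 0.03
    # The egg move is the bigger shock, exactly as argued above.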

Then M*Vt is the entropy of the system. See Shannon entropy. I think I have that, but as usual, double check me.

Entropy does not decrease in a closed economy, they say. An open economy can decrease its entropy with redundant transactions. How do we manage to increase our transaction rate (Vt) over time? More categories of goods make for a more complex network. Where are transaction costs? They are a multiplier on P, split between P and -log(P). If you have high transaction costs, then you have to raise price and increase quantity. If you give all the transactions an efficiency shock, then they will rearrange sizes and prices, and that restructuring is a recession. Rearranging transaction sequences is OK as long as you mostly keep the inter-arrival separation; don't make them bunch up. Keeping them mostly separated while rearranging is called adiabatic.

For some reason
We have this urge to encourage the central banker to suppress the pricing mechanism, on purpose. We call it the 'price puzzle', meaning that the federal banker acting constantly against its boundaries should not crowd out the flow of money. But it does. It forces a slant across the pricing, and the economy gets redundant. We do this mostly to suppress the equalization of energy prices, and other imports, across the economy. It's a form of price control, and we call it a 'price puzzle'. Then when the Fed quits lending money and allows equalization, we get sudden inflation and price equalization. It's no puzzle.

The process is sort of logical.

I mean, energy prices are too high, so why not lower all prices by suppressing the price of short term money and extracting money from the economy? It's as logical as any other price control that government tries. The real puzzle is why economists call it a price puzzle. It's an attempt by government to suppress prices, and government does it all the time. No reason to be puzzled.

If the Fed wants to raise prices.
It should borrow money, borrow short term, borrow a lot of it. The interest it pays will fill the economy with free cash, and the principal is returned back to the economy, a net flow from the printing press to the economy.

Reverse, confusing, magic from economists

Here it is. Volcker raised rates and inflation went right up with rates. What's the problem? When the central banker quits making money by lending to idiots, there is less money extracted from the economy; price is the numerator in the economy. So with the Fed extracting less money, prices stabilize, generally higher. There are fewer piles of money near the Fed than when the Fed was lending like crazy, and those piles of money go into the economy and reset prices. Why is this so complicated?

Why did Volcker do this?
We had a real oil shortage; the price of oil, relative to other goods, was way too low and out of balance. Bankers wanted prices to reset higher, to match the oil shortage. No banker in his right mind is going to lend money during a severe oil shortage. They convinced Volcker to get the hell out of the lending market. He quit lending; he quit extracting money from the economy. Interest rates reset higher, prices reset higher, and all was normal. Bernanke did the same thing. Who had this bozo idea that less lending by the Fed reduces inflation? It never happens. It is mostly the Fed lending too much that causes the Fed to get rich and the economy to have a money shortage; that is disinflation in the economy. Where did the reverse magical thinking ever come from?

Saturday, June 28, 2014

More on the central banking in and out thing

Mainly to help the economists.
When the central banker lends money, here is the flow:
1) The central banker earns interest and the principal is returned, a net flow from the banking market to the Fed.
2) The Fed subtracts the funds it needs to operate, and returns the rest to the central government.
3) The consumer gets disinflation, the government purchasers get inflation.
4) The net result is the recession expansion cycle which moves with presidential elections.

So inflation moves between the consumer and the central government. The result is cycling, not counter-cycling. In 2003, the Fed began lending less money to the market. Inflation and rates both rose. Same thing happened with Volcker.

Do central bankers have a sign wrong?

I have often wondered why central banks think raising interest rates reduces inflation; I never saw the connection. When the central bank lends to the economy, it drains money from the economy: disinflation. When the central bank borrows from the market, it adds money to the economy: inflation.

I notice this in the data. The bank loans less money; rates rise along with inflation. Volcker had that problem, as did Bernanke. I often think bankers are using the bizarre expectation function and have no real clue what they are doing.

When was the last time raising rates worked to reduce inflation? When Volcker let the rates rise? No, inflation rose right along. How did Volcker raise rates? He simply loaned less money. It was the market that pushed rates up, not Volcker. Volcker simply quit creating the disequilibrium. The market threw itself into the recession; Volcker had nothing to do with it. This is a recent subject in economics.

Why do economists think that running the central bank up against the bounds of the market is equilibrating? Why do they think that central bankers should always be causing volatility and cycles by avoiding the neutral position? Economists simply have never studied stability theory; if they did, they would figure out that they are simply causing economic disruption with this ridiculous doctrine.

As near as I can tell, economists who are clueless about the flow of water started using this thing called the expectation operator, taken from linear least squares statistics. They used it incorrectly, misplacing the time function, not accumulating the variance, and a bunch of other crap. It took 50 years to teach the bozos about spectral theory. Taking less money from the economy raises rates and inflation. If the Fed were neutral, its effect on the economy would be a Gaussian minimum phase effect; it would have very little effect, and the economy would stabilize. The more I think about it, the stupider economists seem to be, and they always confuse me. I have to go think for hours about how economists got up and down mixed up. It is more a study into the psychology of the stupid.

So why do we have no inflation if the Fed is simply letting Congress draw on its seigniorage? We likely do, in Congressional expenditures where they spend the money; that is why they are going broke, Congress keeps raising prices on themselves. Anyway, I no longer listen to economists until they learn some basic spectral and stability theory. I keep assuming they know the obvious, and then I am constantly confused for a moment, thinking they discovered some secret, and then I get it: they are simply uneducated in the art of aggregate statistics.

I had to get up off the couch and think this again.

Just a week on Mark Thoma's site listening to the upside down has totally confused me. OK, bozo economists: taking money from the economy is less money in the economy; the central banker is causing disinflation with lending. Try to get this right, you idiots. What you guys see is a 180 degree phase shift, mainly from yapping all the time that the banker should cause cycles. Try having the banker cause nothing in the market once in a while.

Statistical fraud in action

Is this the real cost of borrowing by the Treasury? Maybe, but it is only an estimate of current growth, and this interest rate will not perform much better than the standard rate. They are both estimated by the same folks.

So what does the declining rate tell us? Nothing, really. We have no way to compare past, present, and future from this chart. So there is no method to tell us if this is a good or bad rate.

Krugman uses this to say rates are cheap because they will go up sometime. Based on what analysis? Rates have been headed down for thirty years; these may be expensive rates, or these may just be rock bottom rates.
But in the post-2008 economy we’ve been awash in unemployed labor and capital with no place to go. This is an ideal time to be doing a lot about climate!

OK, what makes climate equipment more expensive in good times? The value is still relative to alternative goods, at the time of purchase.

Friday, June 27, 2014

Obama knew about the mass deportation plot

Months ago, Obama, in a public discussion, pointed out that mass migration of Central Americans will not help because California only has two senators anyway.

But John Perez, California House Speaker, the last of the Jim Crows in California, took his Obamacare money and organized this whole crime against humanity. He should be arrested. Jerry Brown went along with the plot, Nancy Pelosi a co-conspirator.

A negative GDP print of 2.9%
 
Why? Jerry and John Perez hid the Obamacare budget, the CBO had no way to get the data, and Obama was unable to perform his usual corrections. Now Obama blames the Republicans!  

Cycling Through the California Morass
Where did the recession start? California, after 20 years of budget shenanigans, most of the budget horrors committed by Gray Davis and John Perez. When did it end? When Jerry finally negotiated a tax deal with the DC boneheads. How is it restarting? California using Obamacare as a weapon of fraud. High speed rail? A bogus fraud by Jim Costa. Is anyone getting the idea that California has trashed democracy in America?

Bogus Keynesian science
For 80 years statisticians have pointed out that Keynes was statistically incompetent and his idea of counter-cyclical policy was in fact pro-cyclical. We have had thirty years of Keynesian recession/expansion cycles, all of them landing on presidential boundaries at lame duck time. Larry Summers of Harvard knew this and covered it up. Now with the debt doubled, on the eve of the Obamacare recession, the bonehead comes out with the same idiotic multiplier story that Christina Romer of Berkeley used. I mean, get a clue.

Crimes against humanity, racism in California, scientific fraud, and cowardice: the same old racist Democratic Party we have known for 200 years. Nancy Pelosi and her racist patronage would fit right in in the antebellum South. California shames us all.

Monday, June 23, 2014

Slow convergence


This number gives the natural exponent in the limit. Most of the Lagrange numbers converge slowly, still having 3% typical error when N is 150. The proton has to have large error bounds and plenty of wave action. The inverses converge much faster. Around the unit sphere there is fast convergence, and the exponential approximation works. As soon as the first spectral moment traps light, all of those Lagrange solutions are trapped at the current ratio.

The approach is correct, my skills are lacking

The proton is not solving cubic roots. It is arranging a counting system for maximum entropy, and that means optimum separation so rational remainders are bounded. And maximal group separation almost certainly means prime roots. It can really only manipulate two variables, quant separation and cycle length, all integer. Dimensionality is fixed to the prime numbers, with no primes above 17; they are most likely 1, 2, 3, and maybe 5. The electron likely does not have a root system of three, and its unit sphere is odd shaped. Beyond that, the theory is incomplete and above my head at the moment.

Sunday, June 22, 2014

What about quant exclusion?

Only n quants can occupy a spectrum with n roots at a given energy level. So is the electron excluded from the quark cubic solution? Not unless it occupies a different energy level. But doesn't that make the electron decoupled? Maybe not, if the electron occupied a root-three quant one integer power below the quarks. What is the Efimov number? 22.7. The electron has to be 22.7 times farther from the quarks than the quarks are from each other. What did we say about rest mass ratios between electron and quarks? About 21 or 22, I think. The 2.7 was times 3 pi; the 3 comes from area over volume. I always get Pi messed up because the proton does not compute that. 1836/81 = 22.666 is more likely what happened, rather than computing Pi.

But quant exclusion means the bands do not overflow the roots. We should be able to treat the system as one digit system.
Seems like a simple issue.

Using 3/2 as the base, the electron is 18 exponents from the center of the quarks, and about 6 exponents from the first quark, and the total of all the exponents is about 108. ((3/2)^18.54 is about 1836.) Then readjust the base, if necessary, so you get a complete sequence without overflow. In terms of bandwidth, that means:
b = (1+1/n)^n, for some maxbaud n, is the finite natural base, and the total rounding error accumulated for the rational approximations. The system is essentially doing Taylor series in cycles, and has to keep accumulated rounding below any overflow for some element in the power series.

So the finite base, b, says no quant can accumulate more than b amount of rounding in one completion of its cycle. In my system, if quant q has dimension d, then
q^(1/d) < b, I think.

Finite Log(x) should indicate maximum multiply error


Our natural finite base must be the finite sum of 1/n!, n from 1 to Maxbaud. Then it conforms to the natural log of the infinite number line. The limit of (1+1/n)^n, as n goes to infinity, is e^1. For finite systems, some (1+1/n)^n is the maximum fraction in multiply for Maxbaud n, so this should all work, somehow. The natural base for finite maxbaud is maxbaud+1 taken to the maxbaud power. The error we tolerate in finite systems is the natural e^(1) for n at infinity minus the finite x^(1).
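A quick numerical sketch of the two finite stand-ins for e mentioned above; the factorial series closes its gap almost immediately, while the gap left by (1+1/n)^n is the rounding tolerance being described here.

    import math

    for n in (4, 16, 64, 256):
        compound = (1.0 + 1.0 / n) ** n     # the finite natural base
        series = sum(1.0 / math.factorial(k) for k in range(n + 1))  # sum of 1/k!
        print(n, compound, math.e - compound, math.e - series)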

In other words, Maxbaud has to be large enough to take the multiply error through one cycle without overflow. This allows for the zero function. In my system, the set of whole parts must be prime, I think. That means each prime must execute the zero function once per cycle. The zero function is the minimum phase function; it evenly distributes the accumulated error for all digits with the same prime.


Saturday, June 21, 2014

Some corrections on Log(x)

That is defined as the sum of path lengths (length times length from 0 to x), for each baud on the number line. I misspoke a few times on the issue. This is still not completely clear, but I continue.

Log(1) in the discrete system is the sum of path lengths from zero to one, and should be less than e, the multiply precision. The zero function clears error, mainly by just bumping the fractional baud counter. So, within one complete sequence, fractional error needs to stay below the system allowance. Each quant goes to complex angle zero once in a cycle and will clear errors at that quant level. At maximum entropy, there is a one-to-one match between the ordered complex numbers and the baud index in a complete sequence.
Let's try something, again:
Finite log(1 + y^X/(1+e)^2) = d, where e is the multiply error. This tells you how much error can be supported by dimension X, at maximum entropy over one complete sequence, before it has to go to zero. d is the supported error at X, and:
X^d - 1 is the total entropy supported by X, and includes the zero function. X is the dimension of the Riemann discrete complex plane at X.

I am still working on all this, but the problem being solved is this. When a complex number goes to 3+2k, and carries, that is the multiply. Then it goes to 0+0k, and stays there until its accumulated error is less than 1/3, and then goes to 3+1k. The dimension of X is X-1. The rule of conservation ensures we are finite, cyclic, and do not slip counts. All of the irrational roots become rational ratios in the finite system, and need error bounds. And we need to determine stability by bandwidth functions. This will all work out, and likely has already been done.

Consider 2^(1/2) and 3^(1/3). 2, a complex number of dimension 2, has to carry to 3, of dimension 3. It cannot overflow. This is the stuff we need.

Entropy is a big concept because if the system is not at maximum entropy, then we have to worry about incompleteness; some quants may not connect up. And, at maximum entropy, the spectrum is divided between less than zero and greater than zero. Ergodic, I think, means minimum phase: the power spectrum continually rises up toward the center of the band.

Counting quants in the Efimov root

I think this is how it works. Given spheres with three charges, take the sphere with the small charge. It counts three angles of pi/6, each with offset pi/12. These are all stable points, but it counts only once per carry from its left digits. When it has counted to two, its next count is zero and it carries. At zero it accumulates up to 1/3 of the phase error from all unit spheres sharing the quant, and these errors are added to its fractional part. If we do that, then I think we meet all consistency requirements, and remain at maximum entropy. We are sticking with one energy level at the moment. Here is wiki:

The Efimov effect is an effect in the quantum mechanics of few-body systems predicted by the Russian theoretical physicist V. N. Efimov[1][2] in 1970. Efimov’s effect refers to a scenario in which three identical bosons interact, with the prediction of an infinite series of excited three-body energy levels when a two-body state is exactly at the dissociation threshold.

I held off on reading this because the puzzle of 22.7 was too fun to extinguish. The 22.7 I use is (1/2 + sqrt(3)) + 1/(1/2 + sqrt(3)). I missed the 1/f in the previous post on Efimov, and almost never get things right the first time, so always beware. If we are stable to the sample rate of multiply, then the accumulated error from the total of all fractions at the Efimov root should never exceed 1. It is accumulated from the fractional digits of all unit spheres that share the quant. I think this works. The power spectrum in the fractional error gives you the probability of state transition. The Efimov number they give, 21, probably includes the uncertainty of multiply, the coupling constant, which likely is (1/2 + sqrt(3)) - 1/(3/2 + 3*sqrt(3)). It is related to the full spectrum,
F + 1/F, which includes spin.

I think this will all work. There is a metric on baud:
M(e^baud) = some r + i*theta.
And the orbitals at each baud should be given by the direct sum of each unit sphere wave equation: U1(baud) + U2(baud)... Baud will count the length of the number line, one whole complete sequence. The zero function insures we are a complete Shannon space; it gives space for baud uncertainty to do imprecise multiply.

Making a quark matrix

That list of numbers from my spreadsheet of the 1/2 + sqrt(3) factors in the spectral chart is the quark matrix. Quarks are finite complex polynomials in discrete Riemann space with b = log(1+Q) being the scalar exponent of the generator. Then q = base^b - 1 generates the matrix. So, q is the set of indices, {0,1,...,k}, such that q^i/baud, for i in k, is the ith value of the complex polynomial in discrete space. Baud is the set of all ordered indices in the finite number line.


So, a quick review. The Efimov state completes our picture, more or less. We have a ring of yardsticks, Y, comprising the characteristic roots:
1/2 + sqrt(5)/2, 1 + sqrt(2), and 1/2 + sqrt(3), over the finite number line baud; baud = Log(Y+1) are indices of the finite number line. The number line is of dimension 3, having three spectral modes. The fourth spectral mode, curvature of the unit sphere, is 3/2 + 3*sqrt(221)/10, which makes motion.

We can use 0 = Null, and we are orthogonal; or we can say 0 > small e, and we have coupling constants. Thus the sample rate of multiply, c, is 1/e.
A conic on the number line baud obeys the rules of hyperbolics. So, sinh(q) + cosh(q) is the bandwidth of q, and sinh(q)^2 + cosh(q)^2 is the power spectrum, I think (check me on this). The quarks are pi/6, at 1/2 + sqrt(3). How that works as a function, I am not sure, but it counts three times, n*pi/3*k + root(1/2 + sqrt(3)); I have not worked all this out. The professional mathematicians will finish up the theorems. Many thanks to Weinberg, Higgs, Yang-Mills, and the whole team of the standard model.
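Checking myself numerically on that: sinh(q) + cosh(q) collapses to e^q exactly, and sinh(q)^2 + cosh(q)^2 is cosh(2q). A two-line verification:

    import math

    for q in (0.5, 1.0, 2.0):
        bw = math.sinh(q) + math.cosh(q)             # equals e^q exactly
        power = math.sinh(q)**2 + math.cosh(q)**2    # equals cosh(2q)
        print(q, bw, math.exp(q), power, math.cosh(2 * q))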

The counting pattern of the quark factors, without the 1+sqrt(2). Leaving out the spinners makes this chart a bit distorted, but these are the spectral points created at the peak of the proton by the root 1/2 + sqrt(3). Putting the spinners back in removes about half those peaks, but leaves more room for the remaining. The numbers after 30 are the gluon spectral space, I am sure.



This is how we do it with the ring of yardsticks.

Groups and finite bandwidth

Given D, Q1, Q2 complex, finite numbers in the discrete Riemann plane, then:

1 < Q1 < Q2 <= D, and log*(Q1) = 0.
So, using the indexes on the number line, there is no quant path from 1 -> Q1, in the region Q1 to Q2, that gets from 1 to Q1 to Q2, or vice versa. That region is band limited at Q1.

Efimov State! What a great physicist.

I love that number, right out of the blue, after mucking about now and then.
Numbers, exponents of 1/2 plus sqrt(3). How close are they to integer? These numbers came from the proton peak of the spectrum, the land of 2 and 3. Who would have expected this? Anyone, you say. You are right, really. But the ones marked with the yyy are where I always knew they would be, after some floundering. Am I being fooled by the microprocessor? Look at that base quant count by wholes and halves. I am nominating my microprocessor a banana.




3.8840506217
4.3890382921
4.8940259625 I
5.3990136329
5.9040013033
6.4089889737
yyy6.9139766441
7.4189643146
yyyyy7.923951985
8.4289396554
yyyyy8.9339273258
9.4389149962
yyyyy9.9439026666
10.448890337
yyyyy10.9538780074
11.4588656779
yyyyy11.9638533483
12.4688410187
yyyyy12.9738286891
13.4788163595
yyyyy13.9838040299
14.4887917003
yyyyy14.9937793707
15.4987670411
yyyyy16.0037547116
16.508742382
yyyyy17.0137300524
17.5187177228
yyyyy18.0237053932
18.5286930636
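If you want to reproduce the column, it is an arithmetic walk in the exponent; the start and step below are copied from the list, and the 0.1 cutoff for the yyy flags is my guess at what my spreadsheet did.

    start, step = 3.8840506217, 0.5049876704   # copied from the column above
    for k in range(30):
        x = start + k * step
        near = abs(x - round(x)) < 0.1         # assumed cutoff for the yyy marks
        print(("yyy" if near else "   ") + f"{x:.10f}")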

Log of complex value in Riemann discrete numbers.

The grammar is Riemann's current complex space, except that this is discrete. All other rules of grammar are followed.

What is 1/D, D complex and discrete? I assume D finite, so this is the path from complex and discrete 1 to value D. If D is zero, then that is a walk to zero, and offset phase has been lost. To be lost is to have D = 0 + 0i, which causes a skipped cycle. Little i is an index in the ordered set D, values d = 0 to D.

Otherwise, just remember finite log: the sum of all backwards paths from the number line to D. And have a nice base^(Pi*k), Pi scalar and k in the Riemann complex, discrete plane. OK, I guess Pi*n means? One half cycle through the finite angle line, at the complex point k? I dunno, but keep all the rules of grammar.

Have we seen him yet? We are using his exact same grammar.

Compulsive bubble did a root three? The inverse of spin!

How on earth? In the Efimov state, the official name, and it's a perfect name.

How do bubbles do a root three! This spectral mode, and this bandwidth mode, is now official. This mode needs a quant gear. Each of the three quarks gets on that gear.

But how do bubbles do a sqrt(3)?

We have a world, let's pretend, made of spinners, the leptons. It's crowded; they start having bandwidth issues with spin. Three getting stuck, plus bandwidth interference, will make the 15 degrees, I am sure. Efimov can do that work; he is getting the banana. Boy, we are moving right up the quant X axis. We know about gravitons, likely what a bunch of leptons might look like; we know the quarks, and we will soon work the spectrum of the gluon.
Spinners have the same stability as the Efimovs. They are undercounted, and their phases are right angle freaks; they solve the simplest quadratic, and sit across a right angle and chase each other's tails, warping the body of the electron.
So spinners have Q, a complex variable in discrete space, as a quant, way down at the fast end, just above the fuzzies. D + i counts the quant. D + i*0 = 0. Zero is in the cycle.

The pace is fascinating, and I have no doubt that we will know the complete root system for the proton, likely in a few months. The reason this system is so stable is the nulls, markers on the spherical worm gear.

The angle in the mystery number, corrected for 1/f

Take three point charges in free space. One has 1/3 charge; the other two have 1/3, opposite charge. They have an L1 spot, where all forces balance. They never meet, but they never escape. Move them Cos^2(theta) apart, and they have a new L1 spot in space. The angle between the small charge is theta. That particle has also split the triangle, and the angle is 15 degrees, or pi/12.
2*Cos(Pi/12)^2 - 1, which is 15 degrees, for the single splits his degrees of separation. Cos(15 degrees) + 1/Cos(15 degrees) = 2.268, with 1/2 + sqrt(3) + 1/(1/2 + sqrt(3)). So they say 22.7, the rotten rounders.

Anyway, I love it and it explains a lot, whether you like bubbles or rubber bands. It makes the quarks work, so naturally I plugged it in as a wave quant ratio, and sure enough it went quark wild, opening up six wave slots across the peak in my spectrum. So, yes, let me repeat: it doesn't matter if you believe in rubber bands or bubbles; the only difference is management of the grid, so the barbarian bubbles can bash from quant to quant in space.

Now my prediction for 1+sqrt(2) fell short; it sort of did what I expected, but nothing special came out, except it likes quants in half units sometimes. But the sqrt(2) quant will get stuck at the electron level; it's just too redundant, and traps at the electron. So, Markov five, the sqrt(221)/5, made it to the peak with one great number. The bigger the Lagrange, the farther they travel, but the match probability drops.

OK, puzzle. Which came first, the 221 or the sqrt(3)?

So, Mr. Guy with the 22.7, nice work, great puzzle. And I did not buy the article, sorry; the puzzle was too fun.

Anyway, those roots get a z = e^(i*theta), in the Riemann discrete spiral system. i is complex; z is two dimensional, having D and d, d = D+1. i is cyclic over two Pi. D + i*k is a valid location in the map. We can make the complex log work as well as we want; multiply, add, subtract, all within their bounds. It is discrete, quantized.

 He looks too serious, dishevel that hair, be nutty.

Friday, June 20, 2014

The problem is simpler than I thought

We have already orthogonalized the power series for four different unit spheres. That means we have solved the roots, made the map, set the quants, and incorporated the noise bounds. And we included the 'do nothing' count to accommodate maximum bandwidth.

For any given quant N, then, all we need to do is find the up/down direction for four different counters to minimize phase at that quant. Sounds much easier; very little hassle except finding all the roots. None of the roots are changing, so we have no partials to worry about except the partial of one quant to another in a different counter.

The geometry is greatly simplified with a discrete Riemann surface. The polynomial recursive roots are the eigenvalues of the system. I am pretty good at this!

A discrete Riemann z function for maps

We want to take logs and exponents of directed graphs. Like the Riemann surface, except discrete.

e^(d*a), d an imaginary map, a an angle, gets z + d*b, the zth dimension, bth variable. So, the Riemann surface is like a screw counting up the multi-base digit system. All integer, all discrete and finite. As you crank the lower digits, they count through quantum angles fairly fast, but as you count up to higher digits, it takes more cranks to pass a dimension.

P(d + d*b) computes the third power differential, bth root; Tanh can handle the rest, generating the nth dimension, kth angle.
The discrete worm gear will count through any M dimensional, geared map, as long as you do the log function first.
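A minimal sketch of the worm gear as I read it, a mixed-radix counter where the low digits turn fast and the high digits need many cranks; the bases here are chosen arbitrarily.

    def crank(digits, bases):
        # Advance the mixed-radix odometer by one: low digits turn fast,
        # higher digits only move when the lower ones carry.
        for i in range(len(digits)):
            digits[i] += 1
            if digits[i] < bases[i]:
                return digits
            digits[i] = 0        # carry into the next dimension
        return digits

    bases = [2, 3, 3, 5]         # assumed per-dimension bases
    digits = [0, 0, 0, 0]
    for _ in range(12):
        print(crank(digits, bases))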

We still have to work the issue of superposition of synchronous counters, and make sure we obey the laws of recursion. That one might be tough.

Working the bitstream equations

I want the complete sequence, bit stream version.

I start with the minimum redundancy model, ln(1+SNR). The signal, in this case, is a vector delta/baud rate of degree k, {k=1,2,3,4}, which points toward the minimum phase. A product of these vectors will pass through the decoder tree and generate an R, Theta, Alpha, a point in the complete space of places, computed from the surface of the unit circle, at the u vector for a set of polynomials at the tangent to the conic.

All of these vectors come from forms like tanh(P(baud)^n), a function of some recursive polynomial; there is one for each Lagrange. The noise is derived from the Lagrange. The value one is the sum of fractional vectors that make up the empty space, but that is also derived from the Lagrange. They should all be incorporated in the polynomial P for each Lagrange.

So we get their product:
log(tanh(P1(baud)^n) * tanh(Pk(baud)^(n-1)) ...).

Bauds are the marks in the complete sequence; it counts the complete set of quantized solutions. There will be some 100 or so solutions, and the decoding tree has that many leaves.

The decoding tree operates on the vectors N1, N2, N3... from root to leaf, each vector generated from the digit system. Each of the product forms becomes a sum after the log, and each is converted to the base corresponding to the dimension, so they generate the appropriate degrees of freedom in symmetric root indices.

Every kth solution occupies a portion of the digit system up to the digits supported by its Lagrange. The kth digit counts through the vectors in a rotation through Pi, generating the number of dV1..dVk as appropriate. They perform carry and borrow, and are reversible, but generally cyclic.

Will this work? Sure, if one can get the map. But isn't the map just a hierarchical grid function? It just finds the relative quantized points in sphere space relative to the preceding branch. The tanh function includes both fraction and whole number; it is hyperbolic. The bases are all multiples of baud. There are multiple order differentials on the unit surface. The polynomials generate p and q. The error is against the light. But lower powers, even above 17, will still count, though their effect is slight as their fuzzies are overrun by the tanh vector of higher orders. p and q are always taken at the unit sphere surface, and fractions are chained in to count.

Given four unit circles, the four digit systems are counted in sequence. The actual quant can mostly be computed from the various Lagrange ratios.

What about the static polynomial solution?

Take the tanh functions of the polynomials, set up the matrices, go ahead; likely useful. This solution does include motion of bandwidth limits; it assumes minimum redundancy. Higgs is a stable baud rate. We meet all the requirements for strictly hyperbolic differential equations.

Are the quarks fourth order?
Good question; 10e30 years is a long stable life.

What about phase other than the unit circle? Everything is phase stable in this solution, and at maximum entropy. The quants in the counters are set, and the bases are set; carry and borrow work. If you want different results, alter the map.

Riemann z function for complex roots?
You know your maximum bandwidth and minimum; the gluon has this. The count should cover the whole width, and do one complete phase circle, I think. It should count as the order of the polynomial going up the radial. Try to find the boundaries and functions.

Dr. Markov

He was a rebel, refused to obey the bosses at the university, opposed a lot of officialdom in 1908. From Wiki:
In 1896, Markov was elected an ordinary member of the academy as the successor of Chebyshev. In 1905, he was appointed merited professor and was granted the right to retire, which he did immediately. Until 1910, however, he continued to lecture in the calculus of differences.
In connection with student riots in 1908, professors and lecturers of St. Petersburg University were ordered to monitor their students. Markov refused to accept this decree, and he wrote an explanation in which he declined to be an "agent of the governance". Markov was removed from further teaching duties at St. Petersburg University, and hence he decided to retire from the university.
He was about bounding the problem. Great work. I will try his triples on recursion relationships. They will mostly be polynomials in differentials of tanh, yielding curved surfaces, trig and hyperbolic, parallel to the unit circle. It gives the directional slope toward minimum phase in N powers, quantized, as a three-vector on radial, angle, and z.
My Maxima algebra system is great.

I propose the anti-gravity machine

Go to a good L1 spot nearby for some gravity balance. Go out there and bring your MIT absolute zero blast cooler. Your jet freezes ahead, blasts heat from behind. The fuzzies will form a fuzzy condensate. Hmm. That equation should be fun. The rocket ship uses the fuzzy freezer to circle at higher speeds; then, with the momentum, it jumps to another L spot, little skips from center point to center point.

Light speed is a rational number?

Wow, says this theory on the measure of irrationality:
Let x be a real number, and let R be the set of positive real numbers mu for which
0 < |x - p/q| < 1/q^mu
has (at most) finitely many solutions p/q for p and q integers. Then the irrationality measure, sometimes called the Liouville-Roth constant or irrationality exponent, is defined as the threshold at which Liouville's approximation theorem kicks in and x is no longer approximable by rational numbers:
mu(x) = inf of mu in R.
If the set R is empty, then mu(x) is defined to be infinity, and x is called a Liouville number. There are three possible regimes for nonempty R:
mu(x) = 1 if x is rational; mu(x) = 2 if x is algebraic of degree > 1; mu(x) >= 2 if x is transcendental.

Well, it seems we can approximate light with a p/q corresponding to one of the prime ratios of Phi: Phi^17 or Phi^19, or Phi^13; 17 is correct, I think. Phi itself is algebraic of degree 2.
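A minimal sketch of the measure in action on Phi itself, using its continued-fraction convergents (ratios of Fibonacci numbers); the effective exponent drifts down toward mu = 2, as the degree-2 regime above requires.

    import math

    phi = (1.0 + math.sqrt(5.0)) / 2.0
    p, q = 2, 1                      # Fibonacci convergents p/q of Phi
    for _ in range(14):
        p, q = p + q, p              # next convergent
        err = abs(phi - p / q)
        # effective exponent mu, with |x - p/q| ~ 1/q^mu
        print(f"{p}/{q}", err, -math.log(err) / math.log(q))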

That makes light as constant as the bubble sizes are constant. How constant are the bubble sizes? Dunno.
 
If noise falls like a step function, as with a rational number, then noise can be contained by Gauss, because it seems that when the bubbles reach the p/q limit they form a Shannon digit system. But, overall, mixing entropy and irrationality is not completely worked out.

Ok, here is the entropy theory:

The error for Phi goes as e^(-log(q)*2), Gauss. That means one of two things. If the approximations of Phi need to be re-encoded, then the bandwidth must include the periodic map, and Phi is not a repeating decimal. If the error is small enough, then Phi is a repeating decimal and does not need a map. What is small enough? Work the theory. I am not sure; every time I work the theory I get Phi as periodic in its approximations. But the theory above says Phi has at most a finite set of p/q, and I do not get that.


What does all this mean?

Light still has a very high center frequency, way up at the Higgs level. But its band limits (first moment, deviation) seem to be Phi^17 to 1/Phi^17. This is a high frequency, narrow bandwidth carrier signal. I actually guessed its error to be about 10e3 or 10e4 once. I was not far off.

Thursday, June 19, 2014

I hedge and flounder all the time

Astronomers Hedge on Big Bang Detection Claim
A group of astronomers who announced in March that they had detected space-time disturbances — gravitational waves — from the beginning of the Big Bang reaffirmed their claim on Thursday but conceded that dust from the Milky Way galaxy might have interfered with their observations.
The original announcement, heralding what the astronomers said could be “a new era” in cosmology, astounded and exhilarated scientists around the world. At a splashy news conference on March 17 at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., the talk quickly turned to multiple universes and Nobel Prizes.
But even as reporters and scientists were gathering there, others convened on Facebook and elsewhere to pick apart the findings. What ensued was a rare example of the scientific process — sharp elbows, egos and all — that played out the last three months.
If the findings are indeed true, the detection of those gravitational waves would confirm a theory that the universe began with a violent outward antigravitational swoosh known as inflation — a notion that would explain the uniformity of the heavens, among other mysteries, and put physicists in touch with quantum forces that prevailed when the universe was only a trillionth of a trillionth of a second old. The idea once seemed like science fiction, but the astronomers’ findings put it almost in reach...

Floundering and hedging is all part of the game of puzzles. Think nothing of it; it is still great work. The Big Bang rests on the idea that a flat vacuum becomes Higgs almost immediately. But immediately is a short time for the sudden appearance of a flat vacuum. One has to imagine that the universe managed to get stuck in a giant, spectral-mode-one fuzz ball of a graviton. But that is still relative. Any fuzz ball is still sphere packing, no matter that it is a Gaussian ball. And that means either they do not have three spheres, or they are at Phi^17, a total of 3700 bubbles or so. There is no other mode below Lagrange one; noise falls linearly with signal, and you have no containment. So they need at least three bubble sizes to start.

The other solution is to assume a fourth order moment inside the bubbles themselves, and this gives us the multiverse, a complex wave motion between universes. But we are running out of Lagrange; they have a limit at three. But given the 10E34 year lifetime of the proton, one can imagine they have obtained it, and enough slip over the horizon to start the new universe. These escapees are enough to just flatten our universe, allowing more to escape until we are just a quant 17 bunch of fuzz balls.

A cube root counter

Representing cube, and square, root solutions of polynomials requires complex values of a Z plane, in general. However, in a system where the set of solutions is limited and quantized, a decoder works much better. Consider:

ax^3 + bx^2 + cx + d = 0. Being cube roots, they count out solutions by three. If I know my solution set is limited to, say, 27 solutions, I can make a decoder tree and step through my roots in sequence. Is this fair? Sure; when we have hyperbolic differentials with local solutions, they have phase shift, and the solutions will come in sequence.

I also know that I can orthogonalize my solution set so each of log3(b) + log3(x), where the b are coefficients, can be accurate to my required precision. And, for 27 solutions, I get a three-level tree with three branches. I also claim that if we are limited to 27 solutions, the equation above is over specified, and will be reduced at minimum redundancy; they are not wave equations.
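A minimal sketch of that decoder, nothing more than a three-level, three-branch enumeration of the 27 quantized roots, visited in sequence:

    from itertools import product

    # Three levels, three branches per level: 27 leaves, one per quantized root.
    for path in product(range(3), repeat=3):
        leaf = path[0] * 9 + path[1] * 3 + path[2]   # base-3 leaf index
        print(path, leaf)                            # 27 paths, indices 0..26 in order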

The bubbles are simply stepping through a spiral staircase, each stopping point separated by a noise band. Cube roots and square roots and linears are all separable. I further claim that there will always be a 'baud' path that identifies the next solution in sequence. And there should exist two dual solutions marching in tandem, one inside the unit spheres and one outside.

I make a bold prediction in spectral theory

Good luck to me, but here it is:

1) Starting with the counts at Phi^17, begin using the Silver Ratio, the second Lagrange, and recompute the quants going up to Phi^107. Use the ratio p/q; you do not need the computed value of the Silver. Match the counts against the (3/2)^N series.
2) You will find: a) the peak of the proton will split into two symmetrical peaks splitting the Phi^91 point. This will happen at the electron, Phi^75, and the Higgs, Phi^107. That will be fermion spin showing up. It will be in absolute value, as we have not carried the plus and minus shifts separately.
3) At about Phi^63 or so, you will find that the p/q accurately matches the accuracy of light. At that point, light bulbs will click in your head as you discover some theoretical particle, a particle with fermion spin but no charge.
4) Repeat the same procedure from that point, using those counts as the starting point, then bump up to the third Lagrange. Each of your previous peaks will split into three; you will be creating all the orbital quants in their specific pattern.

This works because we are all sphere packers: the vacuum, the physicists, and Markov. This is not forever; above the physics level, ellipsoids begin packing, because I still have five fingers.
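A minimal sketch of the matching step, my reconstruction rather than the spreadsheet itself: hunt for exponents where a power of Phi lands near a power of 3/2. With the 0.02 tolerance below, the matches land at n = 16, 75, 91, and 107, which at least rhymes with the exponents above.

    import math

    phi = (1.0 + math.sqrt(5.0)) / 2.0
    ratio = math.log(phi) / math.log(1.5)   # exponents of 3/2 per exponent of Phi
    for n in range(1, 108):                 # the Phi^17..Phi^107 territory
        m = ratio * n                       # (3/2)^m closest to Phi^n
        if abs(m - round(m)) < 0.02:        # near-integer match; my tolerance
            print(n, round(m))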

How do we carry both phase and magnitude in p^3 + p^2 + p + n polynomial space? How about a compound Riemann space?

Make this thing do a second swirl on top of the first. Compound swirl, I think the bubbles do that.

So Phi^17 must be the graviton?

Evidently so; it is the first, and barely stable, loosely packed Gaussian ball of Nulls the vacuum makes. It fits with gravitational lensing: light would hit these things and go to the second Lagrange and split with two root solutions. Hit another one, and split again. These fuzzies would hang around, tens of meters apart or so, and form real L1 spots in gravitational fields. All 3500 bubbles of Nulls and their phase balls. Light would severely disperse these gravitons, but if space is otherwise stable, they would reform. These fuzzies have short wavelengths, in the third second moment, so the absolute zero experimenters at MIT can cool gravity. So, yes indeed, MIT did indeed freeze gravity.

It is also the first point at which phase difference begins to matter. Up to this point, phase difference allowed the vacuum to maintain a slight space curvature with a slight swirl moment, to define straight and to define the direction to the nearest quasar, but not much else.

Wednesday, June 18, 2014

Exchange phase, light and charge

Can't avoid the topic any more. I always wanted to get by with just the absolute value of bandwidth, and ignore the phase shift across the spectrum. But it now matters, because the bubbles solved a cubic root with complex numbers; that means phase delay, and that means charge.

A minimum quant of Phi^17 is one wave of phase delay in the electron, compared to the quarks (or is it 2/3 of a wave, as Mr. Planck claims). It is the length of the spectrum of the orbitals. It must be both the fundamental unit of mass and the unit of charge. Phase delay is the quants counting with relative delay between them. That is how the Higgs worked: making a sphere slightly odd-shaped is associated with charge and motion. Somehow they discovered that a sphere with different quants of the two phase types would be misshapen and move, move just fast enough to escape the phase-balancing mechanism. The whole idea of a difference in the exchange delay between the two bubbles is what makes light work; it provides a line of symmetry.

Is the electron negative inside the sphere or outside?

They never said in physics class; they just tell me it's negatively charged. I interpret that to mean that it looks, from the outside, like it has a surplus of negative bubbles (small bubbles?), so therefore it must have a surplus of positive (big?) bubbles inside. No?

Anyway, these numbers need straightening. Phi^17 is very small relative to the center frequency. I have a scale factor to deal with; there is a limit on the size of anything counted, a kind of Planck. The very top of the spectrum, Phi^N, is the size of a quant, and that is huge, but it conforms to one cycle of a gamma ray, more or less, measured in seconds. I never figured out what a second was, in units of bubbles, but the inverse of that shouldn't be greater than Phi^17, I think. That is only about one wave number at that level, so it still fits.

The rest mass of quarks is 20 times the mass of the electron, they say. And they hold about the same amount of charge. But in raw numbers, the electron and the gluon are about 10^19 bubbles apart, by subtraction. So there is likely a rescaling going on; the first upgrade to quant size happened early, just above the vacuum.

When I checked to see if Planck scaled the same as my two quant ratios, I noticed that my quant ratios counted Phi^18 as many counts as Planck, which is a ratio. So this all fits.

And, when I scaled the longest wavelength that the physicists have in their Planck curve, and the shortest, at the end of my spectrum, they matched. So clearly we all think that bubbles are the right size (physicists think space is flat). Thus something happened to change quant ratios at about quant Phi^17, definitely, a change which happened above the vacuum layer.

Physicists have the Planck charge at about 10^-18. That is on the order of the raw difference between the gluon and the electron, on my chart. So the 3500 bubbles, at Phi^17, are within range; that is, 10^-18 seems to be one unit of Phi^17 things.

The exchange rate of phase

Phase quantization seems to crap out around Phi^17, according to my spreadsheet. That is about 3500 null bubbles. So the phase error of light must be about 1/3500 units of phase per bubble, at the quant rate of Phi, in the first Lagrange. Thus, I think, this is linear, and phase error is quadratic.

There are Phi^74 of these globs of null bubbles in the proton, times two if you like; that is 1836 times the mass of the electron. So at least we know the phase error of light relative to a unit of engineering mass, and we have the standard unit of mass. What this number really is, is the number of counts required for 3/2 to power through one period of round-off error and match Phi. It is sort of the beat frequency between Phi and the ratio 3/2.

In reality, what would have happened is a stationary Gaussian distribution of phase and Nulls about Phi^17. The ratio of the first and second Lagrange is about 1.28. That is about a quarter of a standard deviation away from 3500 bubbles. The old and new quant rates separate because the old quants have mutual interference. The new quants are root(2) plus or minus 3.5 (thereabouts); their noise separation has doubled from 1/2.
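For the record, that ratio comes from the standard formula for the Lagrange number of a Markov number m, L(m) = sqrt(9 - 4/m^2); the formula actually gives about 1.26, which I am rounding to 1.28:

# Lagrange numbers from the first Markov numbers 1, 2, 5, using the
# standard formula L(m) = sqrt(9 - 4/m^2), and the second-to-first ratio.
import math

def lagrange(m):
    return math.sqrt(9 - 4 / (m * m))

L1, L2, L3 = lagrange(1), lagrange(2), lagrange(5)
print(L1, L2, L3)       # sqrt(5)~2.236, sqrt(8)~2.828, sqrt(221)/5~2.973
print(L2 / L1)          # ~1.265, close to the 1.28 quoted above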

This does not last, of course, eventually the second Lagrange runs its course and the third moment is added when the third Lagrange takes over.  That is when we get charge. The part I do not quite get is that the phase error does not go away. At least, not unless the bubbles change characteristics. The noise separation can change by increasing quant size, or adding degrees of freedom, but the phase error stays, I would think.


Computing propagation functions

I recommend we don't; mainly we need only determine the quant sequence in the digit systems of each body. Motion of a unit circle is simply the quantized change in its surface shape, which I know nothing about at the moment. But that motion should go as fourth order, I think, though do not quote me.

After the quant sequence is determined, just go through some calculation on baud rate relative to any other baud rate and we have propagation. In our case, Higgs is the baud rate.

Motion and Compton Bandwidth

If we define Compton bandwidth as the ratio of Nulls to phase in the unit circle, then we have a definition of mass.  Mass being the resistance to motion, and it increases as the third moment of that ratio. The more internal phase imbalance that is tolerated, the more spherical the body remains and the less motion.  Wave shift raises the quantization ratio of wave and makes the unit circle larger.

Here the hyperbolic and trig functions are multiplied by root(2).

But why would there be a higher null/phase ratio as a result? Because of this:
r^2 = r + 2, whose positive root is r = 2: two units of separation with two degrees of freedom with the large quants. Larger separation between wave quants, inside the unit circle, means less interference.


The quarks have 4 to 10 times the mass of the electron, and overall the three have about 21 times the mass of the electron. The gluon brackets the bandwidth of the whole system, but likely has three quants of spectrum of the form 1/f and f. But they are really functions of the action in the three quarks, so the quarks are wide band. Somehow the system has managed to dump much of the quark phase into the orbitals and into the gluon. The wave shift for the quarks seems larger, and it seems they have a different cubic root system than the electron.


Is there unquantized empty space?

The behavior of bubbles makes sense if flat space is nonexistent. There is no such thing as empty? The universe behaves as if empty is an impossibility. Or, empty can only exist in bubble form. Light seems to have the job of removing empty, in any possible form.

Bubbles jump up the Markov tree in order to have enough Nulls to keep light noise contained in the unit sphere. That is what mass means in physics terms. So don't we get a deficit of Nulls in free space? I dunno; it seems to me that if free space is so bad, then packing Nulls does not likely help. A deficit of Nulls in free space means empty space might appear. What then? More bubbles?

Thinking like a bubble in 3D is hard

I am just now starting to train my brain, but being a bi-symmetric being, it is hard. I have five fingers, but that is DNA, and it only gives me a fifth-root decoder. Not enough CPU units in me.

But, so far: the bubbles have a thing called r which adjusts the unit sphere. Each r must be placed one noise unit away from any other r, so it becomes an r+1 thing. In 3D, any other r will be one volume away, so I have r^3 = 1 + r. So I am training myself to place little spheres just inside the unit sphere, tangent to its surface and tangent to each other.

The separation by one unit, by the way, is minimum redundancy, or optimum noise separation: entropy theory. What I mean is that this thing is already encoded; I need the decoder. So I know the form of the values will be (1+r)^(1/3), which gives me r. When the decoder is orthogonal to noise, its decoding graph is given by log3((1+r)^(1/3)). That gives me a counting system that counts three things for each digit and has the quant log3(1+r).
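A quick sketch of that arithmetic, assuming nothing beyond the equation itself: iterate r = (1+r)^(1/3) to get the real root of r^3 = 1 + r (the plastic number, about 1.3247), then read off the quant log3(1+r):

# Solve r^3 = 1 + r by iterating r <- (1 + r)^(1/3); the real root is
# the plastic number, ~1.3247, so (1+r)^(1/3) reproduces r exactly.
import math

r = 1.0
for _ in range(60):
    r = (1 + r) ** (1 / 3)
print(r)                    # ~1.3247179572
print(math.log(1 + r, 3))   # the quant log3(1 + r), ~0.768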

What happened to pi?
Good question; I have not gotten that far! They are likely ellipsoids. Wait, won't Markov give me ellipsoids? Likely, and I might discover that with a little work. Are the Markov sums of squares really the differential of ellipsoids? Good point, maybe.

What about this equation? Doesn't it give me what I want? The hyperbolic function tanh is a solution to this, likely a good spot to look. But I wonder, what are the ellipsoids that correspond to the Markov triples? Do they follow some form like this?

Wiki has a section on hyperbolic differential equations and says:

The solutions of hyperbolic equations are "wave-like." If a disturbance is made in the initial data of a hyperbolic differential equation, then not every point of space feels the disturbance at once. Relative to a fixed time coordinate, disturbances have a finite propagation speed.
These sound like fixed bandwidth solutions to me. They have a finite propagation bandwidth. I always eliminate the t thing.

The tanh gives the slope of the unit circle where the conic has touched. When that value changes, it points in the direction of minimum phase, toward some other point on the unit circle. There is only one quantized solution in that direction because we are at maximum entropy in a fixed-bandwidth system. The motion will be a rotation about some center radial of symmetry.

Tuesday, June 17, 2014

So the bubbles need charge to make spheres?

Evidently. They cannot pack spheres until they have three degrees of freedom. The two phase chains are different in size; they have a slight difference in exchange rate, not enough difference to slip a cycle, but enough to make sharp turns nearly impossible. So, in Lagrange mode one, they can make large bracelets, but not pack spheres. Hence, they alternate and become a bipolar clock. This makes for the first spin mode but not much else. So the correct name for charge is a sphere-packing shift in wave number to compute cubic roots.

What about Lagrange mode 2? Make cylinders? Probably. They are part of the equation. Relative to every third cubic root in space, the wave has two choices of where to place Nulls. This is likely part of the quarks' strange and charm, and likely the cause of fermion spin. There are two types of spin, and I have not sorted them out.

OK, then, what about the fine structure constant?

It looks like it is defined as the square root of the free-space impedance, which is the power spectrum of the orbitals. That square root is the first moment, the bandwidth in units of quants, of deviation, which seems to be about 19. They tell me the fine structure is about 137 = 1/alpha, or about 0.36 of the total bandwidth in units of power, or 30% of the total width in quants (first moment). That indicates to me, using my simple statistics, that fine structure simply tells us how much of the bandwidth (in units of quants) is outside the standard one-deviation point. That is, assume the power spectrum is Gaussian; one standard deviation covers about 65% of it, leaving 35% for the sqrt of the fine structure. The Planck charge must be the standard deviation of the bandwidth, therefore.

So, how much of the bandwidth is really caused by charge? Probably about 65% of it, or one Planck charge. The total available is determined by the bandwidth (in quants) of the gluon, 2*(Phi^n - Nulls^m), in the gluon wave. I mean, the gluon is the center of action; it sets the center quant and quant span for orbital kinetic energy.

So take Mr. Planck's 11 bits of quant deviation, divide by 3 (it has three degrees of freedom), and we get about 3.5 units of phase shift over the whole 19 units of wave. Of that, about 1 is likely due to spin, motion, and other quark modes, leaving us with about 2.5 units of wave shift for charge, a number that seems reasonable. It gives me the original 17 plus my 2.5, and squaring that I get a power spectrum at about 377.
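To keep myself honest, here is the raw arithmetic of the last two paragraphs, nothing more; the 137, 377, 17, and 2.5 are the figures quoted above, not derivations:

# The bare arithmetic behind the fine-structure paragraphs above.
import math

print(137 / 377)              # ~0.36 of the total bandwidth, in power
print(math.sqrt(137))         # ~11.7, the "11 bits" of quant deviation
print(math.sqrt(137) / 3)     # ~3.9 per degree of freedom, I said ~3.5
print((17 + 2.5) ** 2)        # 380.25, near the ~377 power spectrum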

The notable differences: this is not a Gaussian spectrum; it is a spectrum in a third-degree polynomial, composed of one, two, and three degrees of freedom. The whole system seems to be a 16-bit wave bandwidth (in quants), counting relative to the center frequency, with packing gain due to the third-order polynomial. Motion, then, seems to be the fourth-order polynomial product of the surface anomaly in the unit one times the third-order bandwidth spectrum. It should be treated as noise in the channel. I do not think it will show up in the spectrum of EM light.

Degrees of freedom in the quarks

This is color, the bandwidth between the quarks and gluons essentially. I see three degrees of freedom; I am not including the anti-quarks. Then baryon number and angular momentum are fixed for all, but charge has three degrees of freedom, including the electron. Then iso-spin, but that only appears in up and down quarks, the ones that make protons and neutrons. Nor do the up and down have charm and strangeness. Hmm...

They say the mass is relativistic, but what they mean is that the gluon is a tub of Nulls that is 1 to 3 waves short of a packed Null, so no unit circle. It makes waves with a few bits of high-frequency motion. The various masses, then, must be composed of three separate bits, each with one degree of freedom. Right? The unit circle is the first Lagrange, no? Especially since I do not think we trade quark pairs until we break something and have to renormalize. So, for each normal configuration, all the bits are centered about the one mass bit.

So the gluons likely set the bandwidth of the combined orbitals to something like:

Phi^17 is the largest quant, and they can be decomposed into units of 1/Phi^17, guaranteed never to make a unit circle. They would be somewhat smaller than or equal to the smallest unit of Null, at 1/Phi^17.

In base 2, the proton has about 109 units of variation devoted to mass, and it varies little so they are likely the most significant of our 16 total bits. They are something near the top four bits at Lagrange 1 and define the four unit circles. The electron seems to be about 1/17 of the rest masses of the quarks.

So what do we make of the unit of charge in wave motion? It is one value that can appear in three locations, all rotations about a radial from the center of the unit circle. But the relative angle of that radial is determined by motion of the unit circle. The entire system is four gear systems, comprising 16 bits, 4 each (I think), designed to move the four unit circles, all centered within the bandwidth of the gluon center.

What causes motion?
Deformation of the unit spheres by the action of the fractional bits, the 1/2^16 (I think), within the unit spheres; the inverse of the hyperbolic wave motion outside. These fractional changes of phase variation inside the unit spheres are actually designed to restore curvature. They will have the same degrees of freedom.

I am not sure about much of this, like: are there four yardsticks with one unit one each, or one yardstick with four unit ones? We only have about 16-19 total, so consider the electron. One for mass at Lagrange 1, one for spin at Lagrange 2, and one for charge at Lagrange 3. Including degrees of freedom (which are bits, really), it has 6 bits. Higher orders of Lagrange are closer to minimum redundancy; you get more bit action per degree than the lower orders.

Other clues:
Treated as four independent rulers, we have two identical quarks, which therefore must be positioned mostly orthogonal to each other. Second, all three spheres have the same degrees of freedom, and must have the same number of equivalent bits (entropy). And they are all equally subject to bandwidth restrictions, both in limits and in separation of bits. So I am at least at 20 bits, but some of that is compaction because of Lagrange optimization. It is four, including all bases; then we fit into the 156 bit system and still fill the bandwidth.

What about motion?
That seems to be another amount of entropy, doesn't it? I have not figured that one out.

Monday, June 16, 2014

Graphs and entropy

The idea that a signal of the form x^(1/b) can always be represented in a base-b system of finite-size digits should be easy to prove. Let b be an integer, to make it simple, and make logb(x) finite in base b. Then there exists a finite, balanced decoding graph with b branches at each node. From there it is simple to show a base-b counter can represent all paths through the tree, up to the precision of my finite log. Shannon did no differently, except he started with noise.

Computing the quantization error is simple. If the number of base-b digits is n, then add up the inverses over the finite set 1/b^n to get the finite log base b, then compare that to the Taylor series of the real log. The last term is the error term.

Also, there will exist an encoding that is minimum redundancy. I know there is a general information theory about this in Wiki. If b is not integer, then make it so.
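Here is one way to make that concrete, my own construction rather than anything from Shannon or Wiki: build the finite base-b log digit by digit; the truncation error after n digits is below b^-n, which is the error term I mean.

# A digit-by-digit finite logarithm in integer base b: each pass raises
# the residue to the b-th power and counts how many factors of b come
# out. After n digits the truncation error is below b**-n.
import math

def finite_log(x, b, n):
    """Base-b log of x in [1, b), as n base-b fractional digits."""
    assert 1 <= x < b
    total, y = 0.0, x
    for k in range(1, n + 1):
        y = y ** b
        d = 0
        while y >= b:
            y /= b
            d += 1
        total += d / b ** k
    return total

print(finite_log(2.0, 3, 12))   # ~0.6309
print(math.log(2, 3))           # 0.63093, agreement within 3**-12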

Why do I care? Because we are going to deal with cubic roots. So instead of 0, 1, 2 as my coefficients, I can use r1, r2, r3, I think, which would be degrees of freedom, say for the l numbers in atomic orbitals. I can rotate through the roots as I count the digits. The l numbers in the orbitals seem to be base three. I think the 'one' spots on the unit oval will rotate about by cubic roots.
What about digits for cyclic graphs? Dunno, I just thought about it. Digits on the interior of a sphere? They would not be ordered, necessarily. But more likely three independent wave actions, since the quarks are three.

The quants on the orbitals, principal, angular, and magnetic, all seem to be the Lagrange degrees of freedom, in bases 1, 2, and 3; so they are interleaved in any digit system. The principal quant has but one value per digit; it is the first Lagrange. But it does a wave shift with energy, shortening and leaving the charge to be counted by l in 2 degrees. Then they all shorten for the magnetic; it is almost a shift of the decimal point. More degrees of freedom are added out from the electron as energy increases. But those quantum numbers are not completely orthogonal, I can tell. They also do not include the quarks.

Also, I notice, when the principal quant is at its lowest energy level, my picture of the hyperbola says the radial charge is a straight line, and the electron mass is flat. No: small Lagrange, and small q in the denominator of the minimum error. So the error cone from the electron is wide, the packed Nulls nearly spherical. But for principal quant 2, there are two spheres; where did the extra degree of freedom come from? Spin? No. The first few bits must be powers of the first Lagrange. Still a bit confused, I am. But it looks like they assign Markov numbers straight from the list, starting with the least significant bit. I guess the spin took all of the first Lagrange.

Light and its accuracy

So why is the relative sample rate of light not constant?

The accuracy of the null bubbles, as powers of 3/2, in matching powers of Phi stops somewhere around Phi^107. Mainly because, I think, the ratio that matches Phi better comes from:

Phi^n = a * Phi + b

At some point, that ratio gets better faster than the error term from (3/2)^m can cycle.  But does that mean the light is not constant? Light rate is based on the exchange rate, and those are Null bubbles, so I would not expect light to be more accurate.

There is a point where the rational approximation of Phi, at Phi^17, does indeed drop to the approximation error of Nulls at (3/2)^127. So there are packings too small and packings too large. Three points seem to match. Phi, estimating itself, needs a quant of (3/2)^17, which more or less matches the electron matching Phi with (3/2)^89, which matches the error at Phi^107, and in between is the nearly perfect match at the proton peak. So (3/2)^17 seems to be the smallest thing Higgs can handle, and something like 1/(3/2)^17 looks like the smallest fraction. The numbers I throw out are seriously rounded; I have not worked the spreadsheet on this. But (2^17 - 1) makes a 16-bit system at the peak of the proton, so I am merely guessing that that is the number. The smallest thing is packed Nulls; that does not include wave kinetic energy.
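The matching points fall out of the continued fraction of log(Phi)/log(3/2); the a and b in Phi^n = a*Phi + b above are just consecutive Fibonacci numbers, so the approximation costs nothing. A sketch of the arithmetic only, no physics claimed:

# Where (3/2)^m runs closest to Phi^n: the convergents m/n of
# log(Phi)/log(3/2). Note 108/91 lands right on the Phi^91 proton peak.
import math

PHI = (1 + math.sqrt(5)) / 2
x = math.log(PHI) / math.log(1.5)       # ~1.186814

p0, q0, p1, q1 = 1, 0, int(x), 1        # continued-fraction recurrence
frac = x - int(x)
for _ in range(4):
    a = int(1 / frac)
    frac = 1 / frac - a
    p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
    err = q1 * math.log(PHI) - p1 * math.log(1.5)
    print(f"(3/2)^{p1} vs Phi^{q1}: log mismatch {err:+.1e}")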

Bubble size?
There must be a difference, but the two wave bubbles would not change size to allow better packing at low quants. Trying to make the light rate variable would remove all the stability of packing. I would guess that the phase error in exchanges corresponds to the minimum packing ratio, simply on minimum-redundancy grounds of how the bubbles are set in the quasar.