Monday, September 25, 2017

Neural networks from (my) memory

Consider a two-dimensional array, with X containing a large set of observed features and Y containing a smaller set of features that might uniquely identify some objects, like kitchen tools.  Every cross point gives a measure of how well the observed feature matches the expected feature.  Then add reinforcement: repeatedly flash one single object at the camera, and the cross points that activate get a bit of amplification; they learn.  Whole bunches of cross points get excluded, and we are left with a set of coefficients for that particular object.
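
A minimal sketch of that cross-point array, assuming a Hebbian-style rule (amplify cells where the observed and expected features fire together, then exclude the weak ones); the feature counts, learning rate, and exclusion floor here are all invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    n_observed = 64        # observed features (the X side)
    n_identity = 8         # identifying features (the Y side)

    # Cross-point array: every cell starts equally likely.
    W = np.full((n_observed, n_identity), 1.0 / n_observed)

    def flash(W, observed, identity, rate=0.1, floor=1e-3):
        """One reinforcement step: amplify cross points where the observed
        and expected features fire together, then exclude weak cells."""
        W = W + rate * np.outer(observed, identity)   # activated nodes learn a bit
        W[W < floor] = 0.0                            # whole bunches get excluded
        return W / W.sum(axis=0, keepdims=True)       # keep each column a distribution

    # Repeatedly flash one single object at the camera (its pattern is made up).
    obj_observed = (rng.random(n_observed) > 0.7).astype(float)
    obj_identity = np.eye(n_identity)[3]              # object #3's identity feature

    for _ in range(50):
        W = flash(W, obj_observed, obj_identity)

    coeffs = W[:, 3]                                  # coefficients for that object
    print(f"surviving cross points for object 3: {np.count_nonzero(coeffs)}")

After enough flashes, only the cross points that keep firing for object 3 survive; that column is the learned coefficient set.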

This is what we will do in the sandbox, except we know the learning formula: we drive the bit error toward zero while asynchronous bids and asks queue up.  Neural nets and optimum S&L tech are the same thing; by organizing by significance, then normalizing both queues, we get something like a fast transform for neural nets, and we can learn and track at the same time.
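
I can only sketch the two-queue version.  Assuming "organize by significance, then normalize" means sorting each book by queued size and converting it to a probability distribution, one plausible bit error is the symmetrized KL divergence between the two sides; the price levels and sizes below are hypothetical:

    import math
    from collections import Counter

    def normalize(queue):
        """Organize by significance (largest queued size first) and normalize."""
        total = sum(queue.values())
        return dict(sorted(((k, v / total) for k, v in queue.items()),
                           key=lambda kv: -kv[1]))

    def bit_error(bids, asks):
        """Mismatch between the normalized queues, in bits: symmetrized KL
        divergence over the price levels both sides quote."""
        p, q = normalize(bids), normalize(asks)
        shared = set(p) & set(q)            # sketch only: ignore one-sided levels
        kl_pq = sum(p[k] * math.log2(p[k] / q[k]) for k in shared)
        kl_qp = sum(q[k] * math.log2(q[k] / p[k]) for k in shared)
        return (kl_pq + kl_qp) / 2

    # Hypothetical books: price level -> queued size.
    bids = Counter({100: 8, 99: 5, 98: 2})
    asks = Counter({101: 7, 100: 6, 99: 1})

    print(f"bit error between the queues: {bit_error(bids, asks):.3f} bits")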

Why a fast neural net?

In the setup I suggested, just give us all the samples and sort them by the significance of their endogenous appearance, how significant each is to the data itself.   That gives us the compression graph, and we can directly compare it to the prior set of models, computing the 'bit error' and selecting the model with the lowest bit error.
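
A concrete reading of that step, assuming "significance of endogenous appearance" just means empirical frequency: rank the samples by how often they occur, then score each prior model by the excess bits it would spend encoding that empirical distribution (the KL divergence), and keep the model with the lowest bit error.  The sample stream and the two prior models are invented:

    import math
    from collections import Counter

    def empirical(samples):
        """Sort samples by endogenous significance (frequency) and normalize."""
        counts = Counter(samples)
        total = sum(counts.values())
        return dict(sorted(((s, c / total) for s, c in counts.items()),
                           key=lambda kv: -kv[1]))

    def bit_error(data, model):
        """Excess bits per symbol when the model's code is used on the data
        (the KL divergence of the empirical distribution from the model)."""
        return sum(p * math.log2(p / model.get(s, 1e-12)) for s, p in data.items())

    samples = list("aaaabbbccd")                 # hypothetical observed stream
    data = empirical(samples)                    # the compression graph, roughly

    models = {                                   # hypothetical prior models
        "uniform": {s: 0.25 for s in "abcd"},
        "skewed":  {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1},
    }

    scores = {name: bit_error(data, m) for name, m in models.items()}
    print(scores)
    print(f"lowest bit error: {min(scores, key=scores.get)}")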

There is no intervening reference point; it is double-sided.  Better to say: two- and three-colored trades with a stated bit-error process.  There never was a time reference, just some insurance companies betting the semi-random sequence against the clock.   Then, if it is entropy maximizing, the encoding graph is queued properly and should be isonormal to the mechanical process making the data.  We can infer how big and crowded the grocery store is without knowing its physical size.
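
One way to cash out the grocery-store claim, assuming the encoder really is entropy maximizing: the entropy H of the observed stream bounds the effective number of distinct states at 2^H (the perplexity), so "how big and crowded" falls out of the code lengths alone.  The checkout stream here is made up:

    import math
    from collections import Counter

    def entropy_bits(stream):
        """Shannon entropy of the observed stream, in bits per symbol."""
        counts = Counter(stream)
        n = len(stream)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # Hypothetical checkout stream: which aisle each customer came out of.
    stream = list("112233445566112234")
    H = entropy_bits(stream)
    print(f"entropy: {H:.2f} bits; effective size: about {2 ** H:.1f} aisles")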

This is not real news; Coinbase has been auto-trading using matching since it began. They apply the same matcher, trading loaned money against borrowed money, with a stated bit error (the house risk).

Here, in Quanta Magazine, they have an article describing our savings-and-loan tech: Wienerized hyperbolics. In the end it comes down to sphere packing with a fictional center.

My concept of bit error

In neural nets it is the noise between viewed features and required features.  When you start learning, all the cross points are equally likely. If probability is maintained, then the probabilities of node firings are quantized by maximally separating the finite set of objects the net needs to learn. You are driving variance into the quantization bins defined by the encoding paths.   We reach a finite limit on how many nodes are erased, and the remainder is organized as the minimal matching graph.
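
A toy version of that variance-driving step, under my reading of it: start with all firing probabilities equally likely, pull each one toward the nearest of a few maximally separated bin centers, and read the bit error off as the residual in-bin variance.  The bin count, learning rate, and erasure rule are all assumptions:

    import numpy as np

    rng = np.random.default_rng(1)

    n_nodes = 256
    k_objects = 4                                  # the finite set of objects to learn

    # Maximally separated quantization bins on [0, 1].
    centers = (np.arange(k_objects) + 0.5) / k_objects

    # Start of learning: all cross points equally likely.
    p = rng.random(n_nodes)

    for _ in range(100):
        # Pull each firing probability toward its nearest bin center,
        # driving the variance into the quantization bins.
        nearest = centers[np.abs(p[:, None] - centers[None, :]).argmin(axis=1)]
        p += 0.2 * (nearest - p)

    nearest = centers[np.abs(p[:, None] - centers[None, :]).argmin(axis=1)]
    residual = float(np.var(p - nearest))                # bit error: leftover in-bin noise
    erased = int(np.isclose(nearest, centers[0]).sum())  # pinned to the lowest bin
    print(f"residual bit error: {residual:.2e}; erased nodes: {erased} of {n_nodes}")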

This is all sandbox; the exchange techies know this, they are on the job.
