Monday, January 14, 2019

I suspected so

The original 1954 paper by Paul Morris Fitts proposed a metric to quantify the difficulty of a target selection task. The metric was based on an information analogy, where the distance to the center of the target (D) is like a signal and the tolerance or width of the target (W) is like noise. The metric is Fitts's index of difficulty (ID, in bits):

ID = log2(2D / W)
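As a quick sanity check on the formula, here is the index of difficulty for two targets (a minimal sketch; the distances and widths are made-up pixel values):

```python
import math

def index_of_difficulty(D, W):
    """Fitts's 1954 index of difficulty in bits: ID = log2(2D / W)."""
    return math.log2(2 * D / W)

# A far, narrow target is harder than a near, wide one.
print(index_of_difficulty(512, 16))  # 6.0 bits
print(index_of_difficulty(64, 32))   # 2.0 bits
```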
This is a blurb from theory regarding the selection of a virtual target. Again, the problem is that the user and the filter both try to remove measurement bias error such that, at equilibrium, no move of the mouse results in no move of the cursor. At equilibrium the user has the cursor where he wants it; otherwise the computer ends up in broken pieces.

How well does the human communicate with the filter? The human ability to transmit via gesture is a fixed information channel; this is a sphere packing issue. The channel is fixed for the reason in bold above: it is the mouse, it should have low information flow and work properly. So the human allocates a fixed mental channel, a tiny one, for the mouse.
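One way to state the fixed-channel claim is as Fitts-style throughput: the user transmits a roughly constant number of bits per second, so selection time scales with the index of difficulty. The throughput value below is an assumption for illustration (measured values for mouse pointing are typically a few bits per second, not this exact number):

```python
import math

# Hypothetical fixed-channel throughput for mouse pointing (bits/second).
# The value 5.0 is an assumption, not a measured constant.
THROUGHPUT_BPS = 5.0

def movement_time(D, W, tp=THROUGHPUT_BPS):
    """If the channel is fixed at tp bits/s, selection time scales with ID."""
    ID = math.log2(2 * D / W)  # Fitts's index of difficulty, in bits
    return ID / tp

print(round(movement_time(512, 16), 2))  # 6 bits / 5 bits/s = 1.2 s
```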

So the human really generates a packet of information, a complete sequence which is a coherent message. If the filter reads a series of very smooth dx and dy, with no change in acceleration compared to the past, then the user is fine tuning, has lifted the mouse to take out bias, or has an office chair with wheels. If the filter instead sees a sudden burst of large dx and dy, then a stop, the user is sending a vector, almost like Morse code, dot dash dot: it means take a large jump in this direction. When the user makes short repetitive moves, that means calibrate; the user is stepping regularly, likely along some menu. In this last case, the user's velocity is calibrated to the unfolding time of the menu, a separate and more accurate reference point.

The Huffman tree shows these state changes. Thus it becomes a two color problem: look at the innovation tree built from the user's most recent dx and dy values, then try to match them in scale. The filter will both move to target and self calibrate, but it develops lag.
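One way to read the "packet" idea is as a tiny classifier over a recent window of (dx, dy) deltas. The thresholds and labels below are illustrative assumptions, not the actual filter described in the post:

```python
def classify_window(deltas, big=20.0, small=2.0):
    """Guess user intent from a recent window of (dx, dy) mouse deltas.

    Illustrative thresholds only:
      - one large burst ending in stillness -> 'jump' (a vector, dot-dash style)
      - small, smooth, steady deltas        -> 'fine-tune' (calibration)
      - regular repeated medium steps       -> 'stepping' (e.g. along a menu)
    """
    mags = [(dx * dx + dy * dy) ** 0.5 for dx, dy in deltas]
    if max(mags) >= big and mags[-1] <= small:
        return "jump"
    if all(m <= small for m in mags):
        return "fine-tune"
    return "stepping"

print(classify_window([(1, 0), (1, 1), (0, 1)]))      # fine-tune
print(classify_window([(30, 25), (15, 10), (0, 0)]))  # jump
print(classify_window([(8, 0), (8, 0), (8, 0)]))      # stepping
```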

In short, this is a maximum entropy filter, not a least squares one. We maintain a 'model' of the user's click sequences, then, assuming fixed precision, we restructure the model as innovations come in and respond with counter moves to match. It is not as easy as matching dx per dx; the complete sequence is gathered and sorted by the significance of the move. This can result in dropping insignificant measures, which a least squares will not do. The maximum entropy model is always a finite generator that best matches the recent history, in entropy. It is not a complete weighted history, like least squares. Key assumption: fixed channel bandwidth, meaning the history is chosen that best fits the channel.
That is what connects Fitts and Shannon: the fixed bandwidth assumption, especially low bandwidth.
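A rough sketch of the "finite generator" idea: keep only the most significant recent moves that fit a fixed bit budget, and drop the rest outright, which a least-squares fit, weighting every sample, would never do. The budget and the significance measure here are assumed for illustration:

```python
import math

def maxent_history(moves, budget_bits=12.0):
    """Keep the most significant recent moves that fit a fixed channel budget.

    Significance of a move is taken as log2(1 + |move|) bits (an assumption);
    moves that do not fit the budget are dropped entirely.
    """
    scored = sorted(moves, key=lambda m: -abs(m))  # most significant first
    kept, used = [], 0.0
    for m in scored:
        cost = math.log2(1 + abs(m))
        if used + cost > budget_bits:
            break  # channel is full: remaining moves are insignificant
        kept.append(m)
        used += cost
    return kept

# The two big jumps survive; the tiny calibration wiggles are dropped.
print(maxent_history([40, 1, 0.5, 35, 2]))  # [40, 35]
```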
