Monday, November 23, 2020

Not true of a sandbox pit.

The Way We Train AI Is Fundamentally Flawed

Artificial intelligence training is a 2-step process. You start by showing an algorithm a dataset. As it goes through that data, it “learns” to identify images, voices, or whatever you’re trying to teach it by subtly altering the weights of the criteria it is coded to evaluate. Once that’s done, you test it on data it hasn’t seen before. When you get a satisfactory outcome to that test, you’re done.
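For reference, a minimal sketch of the two-step procedure the quoted passage describes: fit once on one split of the data, score once on the held-out split, and call it done. The toy dataset and model choice (scikit-learn's digits set and logistic regression) are illustrative assumptions, not anything from the article.

# Minimal sketch of the standard two-step procedure:
# 1) fit the model on training data, 2) check it once on held-out data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Step 1: "learn" by adjusting weights against the training set.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

# Step 2: test once on data the model has not seen; if the score looks
# satisfactory, the procedure declares the model finished.
print("held-out accuracy:", model.score(X_test, y_test))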

Not your basic S/L automatic pit boss. The bot is the model until it goes bankrupt; then it is not. So model, training, actuality, and theory are all one. Think of a flash loan: over and done in a single transaction, model and theory matching.

Consider the same bot learning to drive from a human. It minimizes the paths between sensors and actions by selecting sensors and actions in small increments, while the human does the same. If it is optimally thinning the tree, it becomes symbiotic with the human and the two gradually partition duties. The car still works fine with a pit boss sitting between sensor and action. The two have co-learned; it is a self-sampling process.
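One toy reading of that co-learning loop, sketched below: two controllers each keep a weight on every sensor, update it in small increments, and cede a sensor to whichever side handles it better, thinning the tree into a partition of duties. The sensor names, skill numbers, and update rule are all assumptions for illustration, not the bot described above.

# Toy sketch of co-learning: bot and human each keep weights on sensors;
# whoever handles a sensor better takes it over, so duties are gradually
# partitioned ("thinning the tree"). Everything here is illustrative.
import random

SENSORS = ["camera", "lidar", "speed", "wheel"]
# Pretend skill levels, purely illustrative: the bot is better on some
# sensors, the human on others.
SKILL = {
    "bot":   {"camera": 0.9, "lidar": 0.8, "speed": 0.4, "wheel": 0.3},
    "human": {"camera": 0.4, "lidar": 0.3, "speed": 0.9, "wheel": 0.8},
}

weights = {c: {s: 0.5 for s in SENSORS} for c in ("bot", "human")}
duties = {}  # sensor -> controller, filled in as the tree is thinned

for step in range(2000):
    sensor = random.choice(SENSORS)
    # A sensor not yet ceded is still sampled by either side (self-sampling).
    controller = duties.get(sensor, random.choice(["bot", "human"]))
    outcome = 1.0 if random.random() < SKILL[controller][sensor] else 0.0
    # Small incremental update -- "selecting sensors and actions in small amounts".
    weights[controller][sensor] += 0.1 * (outcome - weights[controller][sensor])
    # Thin the tree: once one side is clearly better on a sensor,
    # partition that duty to it and stop exploring the other branch.
    if sensor not in duties:
        gap = weights["bot"][sensor] - weights["human"][sensor]
        if abs(gap) > 0.3:
            duties[sensor] = "bot" if gap > 0 else "human"

print("partitioned duties:", duties)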

Google missed a point years ago: the search engine co-organizes its search terms along with the incoming queries; the two mutually thin the tree, and it works because the process is ongoing, model and reality all one.
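A toy sketch of that ongoing loop, with no separate train and test phase: every query both uses and reshapes the term index, so the model never freezes. The pruning rule and example queries below are assumptions for illustration, not Google's actual machinery.

# Toy sketch: each query is answered from the current index and, in the
# same step, re-weights it; stale terms decay away ("thinning the tree").
from collections import Counter

index = Counter()  # term -> weight the engine currently assigns it

def serve(query):
    terms = query.lower().split()
    # Answer with whatever the index currently favors...
    ranked = sorted(terms, key=lambda t: -index[t])
    # ...and in the same step let the query re-weight the index.
    index.update(terms)
    # Thin the tree: decay everything a little and drop terms that stop
    # recurring, so the index keeps tracking ongoing reality.
    for t in list(index):
        index[t] *= 0.95
        if index[t] < 0.5:
            del index[t]
    return ranked

for q in ["self driving car", "self driving truck", "used car prices"]:
    serve(q)
print(index.most_common(3))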
