Monday, November 16, 2015

Nice trick at MIT but CalTech does it better

To work with computational models is to work in a world of unknowns: models that simulate complex physical processes, from Earth’s changing climate to the performance of hypersonic combustion engines, are staggeringly complex, sometimes incorporating hundreds of parameters, each of which describes a piece of the larger process. Parameters are often question marks within their models, their contributions to the whole largely unknown. Estimating each unknown parameter requires plugging in hundreds, if not thousands, of values and running the model each time to narrow in on an accurate value, a computation that can take days, and sometimes weeks.

Now MIT researchers have developed a new algorithm that vastly reduces the computation of virtually any computational model. The algorithm may be thought of as a shrinking bull’s-eye that, over several runs of a model, and in combination with some relevant data points, incrementally narrows in on its target: a probability distribution of values for each unknown parameter. With this method, the researchers were able to arrive at the same answer as classic computational approaches, but 200 times faster.

Youssef Marzouk, an associate professor of aeronautics and astronautics, says the algorithm is versatile enough to apply to a wide range of computationally intensive problems. “We’re somewhat flexible about the particular application,” Marzouk says. “These models exist in a vast array of fields, from engineering and geophysics to subsurface modeling, very often with unknown parameters. We want to treat the model as a black box and say, ‘Can we accelerate this process in some way?’ That’s what our algorithm does.”
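The "shrinking bull's-eye" idea can be sketched in miniature. This is not the MIT group's actual algorithm, just a toy illustration under assumptions of my own: a hypothetical one-parameter black-box model, Gaussian observation noise, and a grid of candidate parameter values whose likelihood against the data decides where the search interval shrinks next.

```python
import numpy as np

# Hypothetical black-box model standing in for an expensive simulation:
# it maps one unknown parameter to a predicted observation.
def black_box_model(theta):
    return np.exp(0.5 * theta)

rng = np.random.default_rng(0)
true_theta = 1.3                      # unknown in a real problem
noise_sigma = 0.1
data = black_box_model(true_theta) + rng.normal(0.0, noise_sigma, size=20)

# "Shrinking bull's-eye": start with a wide interval for the parameter,
# score a coarse grid of candidates by likelihood against the data,
# then shrink the interval around the high-probability region and repeat.
lo, hi = -5.0, 5.0
estimate = 0.0
for _ in range(6):
    grid = np.linspace(lo, hi, 50)
    preds = black_box_model(grid)
    loglik = np.array([-0.5 * np.sum((data - p) ** 2) / noise_sigma ** 2
                       for p in preds])
    w = np.exp(loglik - loglik.max())  # normalized weights over the grid
    w /= w.sum()
    estimate = np.sum(w * grid)        # weighted (posterior) mean on this grid
    std = np.sqrt(np.sum(w * (grid - estimate) ** 2))
    # Shrink gradually: never collapse the interval in a single step.
    half = max(3 * std, (hi - lo) / 8)
    lo, hi = estimate - half, estimate + half

print(round(estimate, 2))
```

Each pass re-runs the model only on the current, smaller interval, which is where the savings would come from if `black_box_model` were genuinely expensive.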

They compare the task to determining an approximation of the system, as if learning a board game with an unknown grammar of probable moves. They draw samples from the data to get an idea of the 'lattice' (the game board). Then, with that approximation in hand, they re-draw a set of samples and refine the picture of the 'lattice'.

This is self-adapting statistics; it is what nature does. But Marcolli at CalTech and Keevosh at Oxford have taken this approach farther. MIT still remains the center of Basket Weaving technology.
