I have this model in index space: we turn the index crank a finite number of times, and a set of recursive algorithms accumulates and dispenses round-off error. After some finite number of cranks, the collection of algorithms ends up with about the same set of round-off errors it started with. They all share the same algorithmic complexity.
Now I want to crank my indices twice as much, so my set of recursive algorithms accumulates round-off error over a larger window and gains finer resolution. I need to renormalize my algorithms, get them all back up to a similar complexity level.
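To make the round-off part concrete, here is a minimal toy sketch of my own, not the model above: a recursive float32 running sum whose error grows as the window of cranks doubles, with Kahan-style compensation standing in as one hypothetical "renormalization" that keeps the error at a similar level. All names and numbers here are illustrative assumptions.

```python
# Toy sketch (illustrative only, not the model described in the post):
# a recursive update s_{k+1} = s_k + x_k in float32 accumulates round-off
# error as the window of "cranks" doubles; compensated summation is one
# hypothetical way to "renormalize" the recursion back to a similar error level.
import numpy as np

def naive_crank(values):
    """Plain recursive accumulation in float32; error tends to grow with the window."""
    s = np.float32(0.0)
    for x in values:
        s = np.float32(s + np.float32(x))
    return s

def compensated_crank(values):
    """Same recursion with a Kahan compensation term carried alongside the state."""
    s = np.float32(0.0)
    c = np.float32(0.0)  # running compensation for lost low-order bits
    for x in values:
        y = np.float32(x) - c
        t = np.float32(s + y)
        c = np.float32((t - s) - y)
        s = t
    return s

rng = np.random.default_rng(0)
for n in (10_000, 20_000):  # a window of cranks, then the window doubled
    xs = rng.random(n)
    exact = np.sum(xs, dtype=np.float64)
    print(n,
          abs(naive_crank(xs) - exact),        # error grows with the window
          abs(compensated_crank(xs) - exact))  # stays at a similar level
```

Running it shows the naive recursion's error roughly tracking the window size while the compensated one stays flat, which is the flavor of "renormalize to a similar level" I have in mind, nothing more.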
What have I described? Either relativity or renormalization, a fielder's choice. I am having a hard time separating the two concepts.