On 2016-06-22 03:05, John Regehr wrote:
But anyway I was thinking that most of the benefit could be gotten back by doing caching at the level of passes instead of individual transformations. So when we're about to invoke a pass, check the cache to see whether this pass has already seen this test input; if so, replace the input with the cached output and move on to the next pass. Again, this'll only speed up the last round of execution, but that would be nice sometimes.
I think it's a good idea. To keep memory consumption under control, perhaps we could start caching only once the size of the input test drops below some threshold (e.g. 10k)?
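A minimal sketch of how such a pass-level cache with a size threshold might look (the names `PassCache`, `run_pass`, and the threshold value are illustrative assumptions, not code from the actual reducer):

```python
import hashlib

def fingerprint(text: str) -> str:
    # Stable key for a test-case variant.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

class PassCache:
    """Maps (pass name, input fingerprint) -> pass output."""

    def __init__(self, max_cacheable_size: int = 10_000):
        self.table = {}
        # Only cache once the test case has shrunk below this size,
        # to bound memory use.
        self.max_cacheable_size = max_cacheable_size

    def run_pass(self, pass_name, pass_fn, test_input):
        if len(test_input) > self.max_cacheable_size:
            return pass_fn(test_input)  # too large: don't cache
        key = (pass_name, fingerprint(test_input))
        if key in self.table:
            return self.table[key]  # cache hit: skip re-running the pass
        output = pass_fn(test_input)
        self.table[key] = output
        return output

# Toy "pass" that strips blank lines from the test case.
def strip_blank_lines(text):
    return "\n".join(line for line in text.splitlines() if line.strip())

cache = PassCache()
src = "int x;\n\nint y;\n"
out1 = cache.run_pass("strip-blanks", strip_blank_lines, src)
out2 = cache.run_pass("strip-blanks", strip_blank_lines, src)  # second call hits the cache
assert out1 == out2 == "int x;\nint y;"
```

On the last round of execution, most passes would see inputs they have already processed, so the second and later lookups return immediately instead of re-running the transformation.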
- Yang