Each prediction operation uses a cache for merging prediction contexts.
Don't keep it around, as it wastes huge amounts of memory. DoubleKeyMap
isn't synchronized, but we're OK since two threads shouldn't reuse the
same parser/atnsim object: it can only handle one input at a time.
This maps graphs a and b to the merged result c, i.e. (a,b)→c. We can
avoid the merge if we ever see a and b again. Note that (b,a)→c should
also be examined during cache lookup.
@uml
@__gshared
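
A minimal sketch of the symmetric lookup described above, assuming a plain
nested associative array in place of the real DoubleKeyMap; PredictionContext,
doMerge, mergeWithCache, and mergeCache are illustrative stand-ins, not the
antlr-d API. A merge result is stored once under (a,b), and lookups check both
(a,b) and (b,a).

module merge_cache_sketch;

class PredictionContext { }

// Hypothetical merge routine; the real one builds the combined graph.
PredictionContext doMerge(PredictionContext a, PredictionContext b)
{
    return new PredictionContext();
}

// Cache: outer key a, inner key b, value is the merged result c.
PredictionContext[PredictionContext][PredictionContext] mergeCache;

PredictionContext mergeWithCache(PredictionContext a, PredictionContext b)
{
    // Look up (a,b) and, since merging is symmetric, (b,a) as well.
    if (auto inner = a in mergeCache)
        if (auto hit = b in *inner)
            return *hit;
    if (auto inner = b in mergeCache)
        if (auto hit = a in *inner)
            return *hit;

    auto merged = doMerge(a, b);
    mergeCache[a][b] = merged; // stored once; lookup covers both orderings
    return merged;
}

unittest
{
    auto a = new PredictionContext();
    auto b = new PredictionContext();
    auto c1 = mergeWithCache(a, b);
    auto c2 = mergeWithCache(b, a); // hits the cache via the symmetric check
    assert(c1 is c2);
}

Storing under one ordering and probing both on lookup keeps the cache half the
size it would be if every pair were inserted twice, at the cost of one extra
lookup on a miss.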