When fed large chunks of pre-initialized data (in this case parser tables), the type solver fell over. It was taking on the order of minutes to solve about 1000 lines of parser table (which is nowhere near the size of the full parser).
The problem was that the split/unname phase of the solver doesn't kick in until the type of a particular node is completely solved - and this type was running to over 10,000 type variables. This was triggering a lot of nasty N² behavior in the solver, which shunts equations in and out of the unifier as it fuses/decomposes them.
The fix I found is to trigger an 'extra' unname phase whenever 1000 tvars have been added to an s_let() frame. This cut the time down from 1:20 to 8 seconds.
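The counting scheme is simple enough to sketch. This is only an illustration, not the solver's actual code: the class and method names (`LetFrame`, `add_tvar`, `unname`) are invented stand-ins, and the real unname pass does the work of splitting solved equations out of the unifier rather than just bumping a counter.

```python
# Hypothetical sketch of the early-unname trigger. All names here are
# invented for illustration; only the counting scheme matches the fix.

UNNAME_THRESHOLD = 1000  # tvars added to a frame before an extra pass

class LetFrame:
    """Stands in for the solver's s_let() frame."""
    def __init__(self):
        self.tvars = []          # type variables owned by this frame
        self.since_unname = 0    # tvars added since the last unname pass
        self.unname_passes = 0   # extra passes triggered so far

    def add_tvar(self, tv):
        self.tvars.append(tv)
        self.since_unname += 1
        if self.since_unname >= UNNAME_THRESHOLD:
            # Don't wait for the node's type to be completely solved;
            # run split/unname eagerly to keep the unifier small.
            self.unname()
            self.since_unname = 0

    def unname(self):
        # The real pass would split solved equations out of the unifier;
        # here we just record that a pass happened.
        self.unname_passes += 1

frame = LetFrame()
for tv in range(2500):
    frame.add_tvar(tv)
print(frame.unname_passes)  # → 2 (passes at 1000 and 2000 tvars)
```

The point is that the cost of each unname pass stays proportional to the 1000 tvars accumulated since the last one, instead of the full 10,000+ tvar type, which is what breaks the N² blowup.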
Another thing I'm considering, in order to avoid the 'overly specific type' problem (which really goes crazy when fed literal lists): I may try typing variant constructors as funcalls instead.
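Roughly, the idea is that a variant literal typed in place pins down one overly specific row, while a constructor typed as a function gets a fresh open row variable at each call site. A toy sketch of the instantiation side, with all names invented:

```python
# Hypothetical illustration: typing a variant constructor as a funcall
# means each use instantiates the constructor's type scheme with a fresh
# row variable, instead of baking one closed row into the literal.
import itertools

_fresh = itertools.count()

def instantiate_constructor(tag, arg_ty):
    # Result type is an open row: {tag: arg_ty | rowN}, rowN fresh per
    # call, so different use sites remain free to unify differently.
    row_var = 'row%d' % next(_fresh)
    return ('arrow', arg_ty, ('variant', tag, arg_ty, row_var))

a = instantiate_constructor('some', 'int')
b = instantiate_constructor('some', 'int')
# Each instantiation carries its own row variable:
print(a[2][3] != b[2][3])  # → True
```

Whether this actually tames the literal-list case depends on how the solver generalizes constructor schemes, which this sketch doesn't model.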
One last possibility: add datatype declarations back, and just give up on the row-based variants. I don't want to do that, because I think row-based variants have a very pythonic feel to them.
p.s. I threw up a new tarball a few minutes ago.