The main reason that holds me back from using an LLM to automatically make legal or tax decisions (without a human in the loop) is that I cannot correctly quantify the uncertainty of its decisions.
Hence, many decisions still need human review, and that is the main blocker preventing me from using these models for legal decision-making at scale (determining VAT, identifying permanent establishments, determining debit/credit accounts, etc.).
In order to quantify uncertainty, the decision-making process needs to be modelled. If I know the uncertainty in a decision, I can act on it (e.g. let a human decide if uncertainty is high, or reduce uncertainty by providing more information).
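One crude way to operationalize "act on uncertainty" is to sample the model several times and use the agreement rate as an uncertainty proxy, escalating to a human when agreement is low. This is only a sketch: the function names, the 0.8 threshold, and the VAT labels are illustrative assumptions, not part of any real tax engine, and self-sampling agreement is not a calibrated probability.

```python
from collections import Counter

def route_decision(samples, confidence_threshold=0.8):
    """Route a decision based on agreement among repeated model queries.

    `samples` is a list of decisions (e.g. VAT classifications) drawn by
    querying the model several times on the same case. The frequency of
    the majority answer serves as a rough confidence proxy (an
    illustrative heuristic, not a calibrated uncertainty estimate).
    """
    counts = Counter(samples)
    decision, n = counts.most_common(1)[0]
    confidence = n / len(samples)
    route = "auto" if confidence >= confidence_threshold else "human_review"
    return decision, confidence, route

# Hypothetical example: 10 sampled classifications for one invoice line
agreed = ["standard_rate"] * 9 + ["reduced_rate"]
split = ["standard_rate"] * 5 + ["reduced_rate"] * 5
print(route_decision(agreed))  # strong agreement -> auto-apply
print(route_decision(split))   # low agreement -> escalate to a human
```

The design point is the routing itself: the threshold lets you trade review workload against error tolerance, which a deterministic engine cannot do because it has no notion of confidence.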
All the systems I have seen in tax that fully automate a decision (e.g. tax engines) are deterministic: given the same (limited) input, they will always return the same output. These systems cannot quantify uncertainty either. Why have we nevertheless relied on them for 50 years? Does the fact that an expert can explain the decision-making process give us more confidence? Or is it because we have compliance systems in place that monitor the decision-making? Or simply because we did not have the technology?
