The precision threshold represents the tolerance for false positives (errors that don't actually exist). A false positive is a scenario reported as TRUE when, in fact, it is not. For example, a 95% precision threshold allows up to 5% false positive results. The precision threshold is included in the request as a decimal and defaults to 0.95 (95%). This default allows for 5% false positives and may predict fewer claim denials.
Based on your business model, you can adjust the threshold value. Raising it makes the AI evaluations of professional claims more conservative, predicting fewer denials. Lowering the threshold to 0.85 causes the API to predict more denials and allows up to 15% false positives.
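As a rough illustration of how the threshold might be supplied in a request, here is a minimal Python sketch. The field name `precisionThreshold` and the request shape are assumptions for illustration only, not the actual API contract:

```python
def build_request(claims, precision_threshold=0.95):
    """Build a claim-evaluation request body.

    precision_threshold is sent as a decimal; it defaults to 0.95,
    which equates to a 95% precision threshold (5% false positives).
    NOTE: the "precisionThreshold" and "claims" field names below are
    hypothetical, used only to illustrate the parameter.
    """
    if not 0.0 < precision_threshold <= 1.0:
        raise ValueError("precision_threshold must be a decimal in (0, 1]")
    return {
        "claims": claims,
        "precisionThreshold": precision_threshold,
    }

# Default threshold (0.95): more conservative, fewer predicted denials.
conservative = build_request([{"claimId": "example-1"}])

# Lowered threshold (0.85): predicts more denials, allows up to 15% false positives.
aggressive = build_request([{"claimId": "example-1"}], precision_threshold=0.85)
```

Choosing between these settings is a business decision: a higher threshold favors precision (fewer wrongly flagged denials), while a lower one favors coverage (more denials caught, at the cost of more false positives).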