Basware launched agentic AI agents on February 24, 2026, promising 100% touchless invoicing through digital “teammates” that handle everything from PO matching to natural language queries. But the “autonomy gates” required to make those agents enterprise-safe could create the exact bottleneck finance teams thought AI would eliminate.
The pitch is seductive: agentic AI agents that don’t just automate invoice processing; they autonomously decide which invoices to approve, which exceptions to escalate, and which vendors to flag.
No human review. No manual matching.
Just AI making calls in real time, trained on 2.2 billion invoices and capable of 90%+ automation regardless of format or PO status. AP specialists join the growing list of high-skill roles in AI’s sights, where the question isn’t if AI replaces humans, but when.
The reality? Even with that massive training dataset, one in five PO-based invoices still arrives broken: wrong line items, mismatched quantities, missing approval codes. This isn’t an AI capability problem. It’s a vendor data quality problem, and no amount of machine learning fixes upstream chaos from suppliers who can’t format a CSV correctly.
Autonomy gates: the new compliance bottleneck
Here’s where Basware’s pitch collides with enterprise reality. The AI can process invoices autonomously, but it can’t decide autonomously, not without what Basware calls “autonomy gates.” These are business rules and risk thresholds configured by finance teams before the AI executes any action. Strict fraud thresholds? The AI waits for human approval. Complex multi-entity approval workflows? The AI escalates. Currency mismatch on a $50 invoice? Depends on your gate configuration.
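In code terms, an autonomy gate is just a policy function the system evaluates before the agent acts. A minimal sketch of the idea, assuming a simplified two-rule policy (every name, field, and threshold here is hypothetical, not Basware’s actual configuration schema):

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    currency: str            # currency on the invoice
    expected_currency: str   # currency on the matching PO
    fraud_score: float       # 0.0 (clean) to 1.0 (high risk); illustrative

def evaluate_gate(inv: Invoice,
                  fraud_threshold: float = 0.3,
                  currency_mismatch_floor: float = 100.0) -> str:
    """Return the action the agent is allowed to take under this policy."""
    # Rule 1: strict fraud threshold -> the AI waits for human approval.
    if inv.fraud_score >= fraud_threshold:
        return "escalate"
    # Rule 2: currency mismatch only matters above a configured floor,
    # so a $50 invoice may still auto-approve depending on the gate.
    if inv.currency != inv.expected_currency and inv.amount >= currency_mismatch_floor:
        return "escalate"
    return "auto_approve"

# Same $50 currency-mismatch invoice, two different gate configurations:
print(evaluate_gate(Invoice(50.0, "EUR", "USD", 0.1)))  # floor at 100 -> auto_approve
print(evaluate_gate(Invoice(50.0, "EUR", "USD", 0.1),
                    currency_mismatch_floor=0.0))       # floor at 0 -> escalate
```

The point of the sketch is the trade-off the article describes: every threshold is a human policy decision, and tightening any one of them shifts work from the agent back to the approval queue.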
The catch? Policy configuration hell.
Finance teams trade manual invoice matching for manual policy design. And unlike invoice matching, which AP clerks handle daily, policy configuration requires the same finance expertise AI was supposed to replace. Set gates too strict, and your “autonomous” system grinds to a halt waiting for approvals. Set them too loose, and you’ve got a compliance incident waiting to happen. CEO Jason Kurtz told reporters that “autonomy without trust is just risk,” which is corporate-speak for “we built guardrails because our customers demanded them.”
Astrid Baetsen, AP Manager at Belden, framed it optimistically: “We’re not just automating; we’re improving accuracy, reducing risk, and gaining insights.” But that statement assumes the gates work as designed. Similar autonomy tensions in other enterprise AI deployments show the same pattern: the more control you demand, the less autonomous the system becomes.
Who this isn’t for
If your finance team operates in pharma, defense, or financial services, industries where compliance requirements demand heavy customization, autonomy gates become a deployment delay mechanism. Basware’s rollout timeline is deliberately vague: “capabilities rolling out throughout 2026.” That’s vendor-speak for “we’re still figuring out how to make this work at scale.”
No named case studies. No 2026 ROI data. No competitor benchmarks showing how SAP or Coupa handle the same autonomy-versus-control tension. Basware’s cautious approach reflects broader enterprise agent control challenges: 40% of apps may run agents by year-end, but governance frameworks lag behind.
The companies that succeed with agentic AP won’t be the ones with the smartest AI. They’ll be the ones willing to simplify their policies to match what AI can govern. Everyone else waits.
The AI can process 2.2 billion invoices. It can match POs in real time. But can it decide which exceptions matter and which don’t? That’s still a human call โ and until it’s not, “autonomous” AP is a marketing term, not an architecture.