
The promise of Artificial Intelligence in business finance is often painted as a universally benevolent force, democratizing sophisticated tools and leveling the playing field. Adfin’s recent $18 million Series A funding round, bringing their total raised to over $30 million, fuels this narrative. Their platform aims to bring AI-powered cash flow management and money movement automation to businesses of all sizes. However, beneath the gleaming surface of efficiency gains and automated workflows lies a critical vulnerability: the potential for AI to embed and amplify historical financial inequalities, leading to biased lending decisions and exclusionary practices.
This isn’t a hypothetical concern; it’s an inherent risk in any AI system trained on real-world data. Financial institutions have a long, documented history of disparate impact, where policies that appear neutral on their face disproportionately disadvantage certain groups. If an AI model learning from this historical data isn’t meticulously scrutinized and corrected, it will inevitably replicate those same patterns. For Adfin, and any FinTech embracing AI for critical financial operations, understanding and mitigating this “invisible bias” is not just good practice; it’s a foundational requirement for ethical and sustainable growth. This post will delve into Adfin’s technical architecture, explore its potential, and critically examine the failure scenarios, particularly focusing on how AI can inadvertently perpetuate unfair financial outcomes.
Adfin’s core proposition hinges on its “agentic” AI, a sophisticated system designed to proactively manage and automate complex financial tasks. At its heart lies a proprietary payment infrastructure that orchestrates various money movement methods, from direct debits and Open Banking to card payments and traditional bank transfers. This multi-modal approach aims to offer a unified solution, reducing the need for businesses to cobble together disparate tools.
The intelligence powering this infrastructure is where Adfin aims to differentiate itself, layering agentic AI decision-making on top of the payment rails rather than treating automation as an afterthought.
The platform supports a range of payment methods, including Direct Debit, Open Banking, Apple Pay, Google Pay, and standard bank transfers. This comprehensive support is vital for businesses operating across different geographies and customer preferences.
Security is understandably paramount in financial technology. Adfin employs standard, robust security measures: data is encrypted in transit via TLS/SSL and at rest. Two-factor authentication (2FA) adds another layer of protection for user accounts. Infrastructure is managed using Terraform, enabling reproducible and secure deployments, and regular external penetration testing aims to identify and address vulnerabilities. Integrations with popular accounting software like Xero, QuickBooks, and professional services automation (PSA) tools like Engager.app and Actionstep further enhance its appeal by fitting into existing business workflows.
While the technical sophistication is evident, it’s essential to remember the inherent complexities of AI model development. A critical area where hidden risks emerge is in the data used to train these models. Silent data leakage is a prime example: if data normalization statistics are computed before splitting data into training and testing sets, models can inadvertently “learn” from future data, leading to misleadingly high validation performance that crumbles in real-world application. This is especially dangerous in finance, where predictive accuracy directly impacts financial decisions.
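A toy illustration of that trap, using entirely synthetic numbers (the distributions, sizes, and drift are invented for the example, not drawn from any real platform):

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=800)  # historical data
test = rng.normal(loc=5.0, scale=1.0, size=200)   # "future" data that has drifted

# Leaky: the normalization statistic is computed over ALL data,
# so it quietly absorbs information about the test set.
leaky_mean = np.concatenate([train, test]).mean()

# Correct: the statistic is fit on the training split only.
clean_mean = train.mean()

# After leaky normalization the test set looks closer to the training
# distribution than it really is -- validation metrics come out rosy.
leaky_shift = abs((test - leaky_mean).mean())
clean_shift = abs((test - clean_mean).mean())
```

The leaky pipeline shrinks the apparent gap between training and test data, which is exactly why its validation numbers flatter the model.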
Adfin’s platform promises to liberate finance professionals from tedious, time-consuming tasks. Customer testimonials often highlight significant time savings – up to 150 hours per month – and demonstrably faster payment cycles, with invoices settling up to 73% quicker than the UK average. This level of automation, powered by agentic AI, is a game-changer for SMBs that often lack dedicated finance teams. Accountants and bookkeepers also stand to gain immensely, automating accounts receivable processes end-to-end.
However, the reliance on AI, especially for nuanced financial decisions, introduces a distinct set of challenges. The “AI Hallucination” phenomenon is a significant concern. AI models, particularly LLMs, can generate plausible-sounding but factually incorrect or entirely fabricated information. In a financial context, this could manifest as erroneous invoice details, incorrect payment amounts, or misleading credit assessments. Without rigorous human oversight, these hallucinations could lead to significant financial errors and misguided business strategies.
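A standard mitigation is to wrap LLM output in deterministic checks before anything touches a ledger. A minimal sketch of that guardrail pattern (the field names, schema, and tolerance here are illustrative assumptions, not Adfin’s actual API):

```python
import json

def validate_extracted_invoice(raw_json: str) -> dict:
    """Reject LLM-extracted invoice data that fails basic consistency checks.

    Illustrative schema only: required fields and the 0.01 rounding
    tolerance are assumptions for the example.
    """
    invoice = json.loads(raw_json)
    required = {"invoice_id", "currency", "line_items", "total"}
    missing = required - invoice.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    computed = sum(item["amount"] for item in invoice["line_items"])
    # A hallucinated total will not reconcile with its own line items.
    if abs(computed - invoice["total"]) > 0.01:
        raise ValueError("total does not match line items")
    return invoice
```

The point is not that the check is clever; it is that arithmetic the model can fabricate is re-derived deterministically before any money moves.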
Consider the anecdote of an FP&A platform that accurately identified budget variances but failed to articulate the underlying business drivers. This highlights a crucial trade-off: AI can excel at pattern recognition and data processing, but it often lacks the contextual understanding and business acumen that a human expert possesses. Translating raw data analysis into actionable strategic insights requires a level of qualitative judgment that current AI systems struggle to replicate.
This is where Adfin’s MCP server, while enabling natural language interaction, also presents potential pitfalls. While interacting via LLMs offers unprecedented ease of use, the underlying authentication mechanisms can be a bottleneck. Adfin’s APIs currently rely on user bearer tokens, which can pose integration challenges for LLMs requiring more sophisticated authentication flows. This might require additional layers of security or custom solutions to ensure seamless and secure LLM-driven interactions.
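In practice, the plain bearer-token scheme looks something like the sketch below (the base URL is a placeholder, not Adfin’s real endpoint). The token is a long-lived user credential provisioned out of band, which is exactly the friction an autonomous LLM agent runs into: there is no delegated, consent-driven flow it can complete on its own.

```python
from urllib.request import Request

API_BASE = "https://api.example.com"  # placeholder, not a real endpoint

def authed_request(path: str, token: str) -> Request:
    """Build a request carrying a user bearer token.

    The token must already exist -- unlike an OAuth authorization-code
    flow, nothing here lets an agent obtain scoped credentials itself.
    """
    req = Request(API_BASE + path)
    req.add_header("Authorization", f"Bearer {token}")
    return req
```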
Furthermore, there are hard limits to consider. For instance, Direct Debit collections have a default maximum of £5,000, requiring direct support contact for increases. This indicates areas where automated processes still require manual intervention, underscoring the current limitations of full AI autonomy in sensitive financial operations.
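A cap like this is worth encoding as an explicit guard, so over-limit collections get routed to a manual path instead of failing mid-run (the function and constant names are illustrative, not part of any real API):

```python
DEFAULT_DD_LIMIT_GBP = 5_000  # default cap cited above; increases require a support request

def can_collect_via_direct_debit(amount_gbp: float,
                                 limit_gbp: float = DEFAULT_DD_LIMIT_GBP) -> bool:
    """Return True only for amounts an automated Direct Debit run may take."""
    return 0 < amount_gbp <= limit_gbp
```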
Adfin’s $18 million funding is a strong indicator of market confidence in its AI-driven approach to business finance. The platform’s ability to consolidate payment methods, automate complex workflows, and integrate with existing business tools positions it as a compelling alternative to fragmented solutions. The promise of democratizing sophisticated financial management is real.
However, the most significant challenge for Adfin, and indeed for the entire FinTech sector embracing AI, is not technical complexity, but ethical responsibility. The potential for AI models to perpetuate and even amplify historical financial inequalities is a grave concern that cannot be overstated. If the training data for Adfin’s credit control or lending assessment AI reflects past discriminatory lending practices, the system will inevitably learn to discriminate. This could result in certain businesses, particularly those from underrepresented groups, being unfairly denied credit or offered less favorable terms, simply because the AI has learned that “historically, businesses like yours have been riskier.”
This is the essence of the “invisible bias.” It’s not programmed malice; it’s the unintended consequence of training on flawed historical data. Mitigating it requires more than just ensuring data is clean; it demands active bias detection and correction throughout the AI development lifecycle: auditing training data for representativeness, measuring outcomes across groups with fairness metrics such as demographic parity and equalized odds, keeping a human in the loop on adverse decisions, and monitoring deployed models for drift in approval rates.
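A first-pass audit can be sketched as a disparate-impact check, borrowing the “four-fifths rule” of thumb from US employment law. The approval data below is entirely synthetic, and the groups and rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic decisions: True = credit approved. Groups "A"/"B" are illustrative.
group = np.array(["A"] * 500 + ["B"] * 500)
approved = np.concatenate([
    rng.random(500) < 0.60,  # group A approved ~60% of the time
    rng.random(500) < 0.35,  # group B approved ~35% of the time
])

rates = {g: approved[group == g].mean() for g in ("A", "B")}
# Four-fifths rule of thumb: flag the model if the lower selection
# rate falls below 80% of the higher one.
impact_ratio = min(rates.values()) / max(rates.values())
flagged = impact_ratio < 0.8
```

A check like this is cheap to run on every model release; the hard part is deciding what to do when it fires, which is a governance question, not an engineering one.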
While Adfin’s focus on automation and efficiency is commendable, the true measure of its success will lie in its commitment to fairness and equity. The AI powering business finance must not only be intelligent but also just. The journey from sophisticated automation to truly democratized and equitable financial tools is still ongoing, and vigilance against the subtle but pervasive threat of AI bias is paramount.