Using Blockchain to Build Customer Trust in AI
Authors: Scott Zoldi and Jordan T. Levine

In a remarkably short period of time, organizations across industries have deployed artificial intelligence (AI) to produce decisions that affect people’s daily lives. Since AI can be characterized as “a mirror that reflects our biases and moral flaws back to us,” sometimes this practice results in unfortunate and even tragic mistakes. And bias is just one of a multitude of reasons why AI is considered a “black box” with a trust problem. Last year Pew Research found that 52% of Americans are more concerned than excited about AI in daily life, compared with just 10% who say they are more excited than concerned.
Clearly, AI needs to prove itself as a trustworthy technology. To do this, companies that use AI must ensure the interpretability, auditability, and enforceability of decisions these analytic models make. Interpretability enables the technology to be understood. Auditability enables accountability. Finally, enforceability assuages doubt, leaving trust in its wake.
If organizations want to reap real business benefits from their investments in AI, customers need to trust it. Systemic social mistrust in AI can be dissolved only when questions about how this technology works—from customers, regulators, and other appropriate parties—can be answered. Blockchain-based governance provides an attainable, operational path to that accountability and enforceability.
At FICO, we’re using blockchain technology to build the trust of consumers and the financial industry at large in AI. Blockchain creates an immutable record of every aspect of AI model development and ensures that every action taken adheres to corporate requirements and standards for responsible AI. Rather than signaling mistrust in data scientists, a blockchain system for AI model management illustrates that trust is not a personal issue—there’s a reason that important things in life have contracts. The blockchain doesn’t serve to pinpoint blame; it’s designed to keep everyone honest, efficient, safe, and on-standard. With proper governance and accountability structures in place, AI innovation has a safe and wide-open space to thrive.
This article presents a case study on how FICO came to adopt blockchain-based AI model development management, how it benefits the business, and how other organizations can adopt and gain from this approach.
When Blockchain Met AI
In 2021 the FICO data science team responsible for AI and analytic innovation began using blockchain for model development governance, a move that has since provided demonstrable value. This team provides core technology for FICO’s software platforms, including fraud detection and solutions for credit card management, and is separate from the analytic organization that develops analytics for the FICO Score. The team has found that this approach has not only sped FICO’s time to market with AI and analytic innovation but has also helped keep new models in production; blockchain has reduced support issues and model recalls by over 90%. It has done so by helping to automate the process of keeping tabs on the rapidly multiplying details of model development.
The seeds of this approach were developed over more than a decade of work, as the team worked to document and manage the myriad incremental decisions that go into the complex process of developing a model: the model’s variables, model design, algorithms, training and test data utilized, the model’s raw latent features, ethics testing, and stability testing. This process also includes an enormous human element: the scientists who build different portions of the variable sets, participate in model creation, and perform model testing. Each tiny change can impact model performance, responsible use, and decision outcomes.
The initial solution the data science team came up with was to start using an analytic tracking document (ATD) to guide its development process. Originally contained in a pages-long Word document, this approach detailed every aspect of a model’s requirements, development, and testing. The ATD informed a set of very specific requirements linked to FICO’s AI model development standard. Once all elements of the build were negotiated, it became the document by which the team defined the entire model development process.
Using the ATD was a game changer, but handling hundreds of voluminous ATD model documents, and holding dozens of meetings to confirm each model’s adherence to the standard, generated too much administrative overhead. So, in 2021, FICO put the entire ATD process onto a private blockchain, providing a much easier way to create an immutable trail of decision-making for every model. The blockchain eliminates any confusion about requirements, algorithms used, and success criteria to be met, as all are committed to the chain before development starts. It also permanently links to assets that demonstrate adherence to standards, exposes latent features, and determines if these introduce bias into the model, as well as identifying who worked on the latent features, which tests were done, the approving manager, and management sign-off.
Importantly, the blockchain produces not just a checklist of positive outcomes; it also includes mistakes, corrections, and improvements made along the way.
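The idea of an immutable development trail can be illustrated with a minimal hash-chained ledger sketch. The class, field names, and record contents below are hypothetical and are not FICO’s actual implementation; they only show why a committed decision cannot later be altered without detection.

```python
import hashlib
import json
import time

class ModelDevLedger:
    """Minimal sketch of a hash-chained ledger for model-development
    decisions. Names and fields are illustrative, not FICO's schema."""

    def __init__(self):
        self.entries = []

    def commit(self, record: dict) -> dict:
        """Append a decision record, chaining it to the previous entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "index": len(self.entries),
            "timestamp": time.time(),
            "record": record,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "hash": digest}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any tampered entry breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("index", "timestamp", "record", "prev_hash")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each entry embeds the previous entry’s hash, retroactively editing a recorded mistake (rather than committing a correction as a new entry) invalidates every subsequent entry, which is what makes the record of mistakes and corrections trustworthy.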
Why Blockchain Works
The ATD has come a long way from lengthy Word documents. FICO’s blockchain-based approach abstracts each task into an easy-to-use interface that is integrated into data scientists’ daily work. A maverick scientist who doesn’t want to use this method simply can’t opt out: committing each development decision to the blockchain is the way the work gets done and a requirement for models to get released.
FICO has found the business value of blockchain’s immutable record to be enormous. We achieve consistency in a large global data science organization; model development across hundreds of production analytic assets each year is uniform, minimizing confusion and waste.
Reducing waste matters, given the sky-high opportunity costs of lost innovation and the very tangible costs of AI development talent and related computing resources. It’s an open secret in financial services that only a fraction of internally developed AI models is put into production because no one is quite sure what’s in them or how they will perform. A 2019 McKinsey & Co. survey of the financial services sector found that only 25% to 36% of respondents had deployed AI in various use cases within their companies. Anecdotally, we’ve seen those numbers improve, but across the industry, unused AI assets still translate into hundreds of millions of dollars of wasted effort.
Ultimately, FICO knows why blockchain works because of what doesn’t happen. Models are not held back from production because of uncertainty about their risk or lack of artifacts demonstrating adherence to the company’s Responsible AI standards. Scientists don’t inadvertently tap production models for research projects or, worse, release data science experiments “into the wild.” And that maverick data scientist? Time isn’t wasted in rejecting work that goes rogue, intentionally or not; the blockchain keeps teams cohesive, on-standard, and meeting requirements, efficiently producing models that meet FICO’s quality and safety standards. In addition to seeing model support issues drop to nearly zero, we achieve absolute adherence to, and enforceability of, AI model development standards even at high velocity.
All of this is the operational key to building trust in AI. It helps FICO produce output that 100% complies with our standards, backed with hard assets of proof of work. This means that consumers’ experience of these tools is consistent with our own Responsible AI standards.
How FICO Made Blockchain Work for AI
Getting this system up and running was, first and foremost, not a technology problem but an organization and people problem. Design and technological hurdles followed, of course, but the first part was the hardest.
Here’s what we learned through this process.
Standards first, tech second
Without an AI model development standard to adhere to, using blockchain to record every detail of model development is futile. Unfortunately, this first step can be the hardest part of the journey: To establish corporate standards around responsible AI, hard decisions will need to be made on what will and will not be done, including approvals of which algorithms can and can’t be used, model interpretability, ethical AI testing methodologies, and meeting regulatory requirements. At FICO, this involved some evangelism around the goal of ensuring consistent analytic outcomes for all clients independent of individual data scientists’ artistry and appointing a committee that would define the development standards and educate the entire team on associated methods.
User friendly is nonnegotiable
At FICO, getting the data scientists on board with the idea of using the system wasn’t that hard. Most of them appreciated the structure it offers, automatic alignment of their work with AI model development standards, and formalized approaches to responsible AI, all of which protect them and their work products.
What was hard was developing the user interface (UI) between the data scientists and the blockchain. UI development was more than figuring out which fields and buttons the user would click and when; it had to help data scientists make the mental shift from individual-centric, linear waterfall development to a team mindset in which multiple developers and testers could do their work and validate others’ in an efficient, automated way.
Ultimately, we invested significant time and resources to make it easy for scientists to use the system in a way that emphasizes intellectual engagement instead of cumbersome oversight, integrating a slick UI into the way work gets done.
To achieve that end state, organizations should prepare to go through a formal process with business requirements and product requirements documents (BRDs and PRDs). This aligns software designers on how users will interact with the system and how the process will operate. It was important to ensure that everyone felt heard and that development didn’t start until expectations on form, functionality, and operation aligned.
Most critically for adoption, friction was not an acceptable outcome, which forced creative software development from the get-go. For example, we had to balance data scientists’ not needing or wanting to know blockchain technology with mandated use of the tool based on it. Similarly, we needed instant reporting on the state of all model development and testing tasks without requiring data scientists to build reports.
Iterate on quick wins
Next, prepare to do proofs of concept of early designs to get a quick version of the system working. Start navigating use cases such as establishing requirements, development updates, testing updates, validation updates, rejection and approval, and resetting the state of the requirements when a model is rejected. Focus strongly on how completeness is measured, since no model is released until all requirements are met by the developer, tester, and validator: how will that status be parsed from the blockchain and presented to stakeholders?
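A completeness measure of this kind might be sketched as a pass over the chain’s entries, checking that every committed requirement has sign-offs from all three roles. The entry schema and role names here are illustrative assumptions, not the actual format used by FICO.

```python
# Hypothetical sketch: decide whether a model is releasable by checking
# that every requirement committed to the chain carries sign-offs from
# developer, tester, and validator. Field names are assumptions.
REQUIRED_ROLES = {"developer", "tester", "validator"}

def completeness(entries):
    """Return (fraction of requirements fully signed off, unmet req ids)."""
    signoffs = {}
    for e in entries:
        if e["type"] == "requirement":
            # A requirement entry registers the requirement on the chain.
            signoffs.setdefault(e["req_id"], set())
        elif e["type"] == "signoff":
            # A sign-off entry records one role approving one requirement.
            signoffs.setdefault(e["req_id"], set()).add(e["role"])
    unmet = [r for r, roles in signoffs.items() if not REQUIRED_ROLES <= roles]
    done = len(signoffs) - len(unmet)
    frac = done / len(signoffs) if signoffs else 0.0
    return frac, sorted(unmet)
```

A dashboard built on such a function could show stakeholders, at a glance, which requirements block a release, without data scientists having to assemble reports by hand.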
Keep everything together, forever
No analytic application is ever truly done; it’s constantly evolving, and processes (particularly for dependencies) cannot be forgotten. So, as a key technical point, it’s important to think about the repositories that will hold large AI assets in alternate storage, with hashes, checksums, and other mechanisms that will confirm the asset referenced in the blockchain is appropriate and uncorrupted. Any alternate storage must be actively monitored and alerts issued if there are updates or migrations to other tech stacks within the organization.
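The checksum verification described above can be sketched in a few lines: stream a large off-chain asset, compute its SHA-256 digest, and compare it with the hash committed on the chain. This is a generic illustration, not FICO’s storage integration.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a large AI asset (e.g., a model binary) in 1 MB chunks
    and compute its SHA-256 digest without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_asset(path, expected_hash):
    """Check that an off-chain asset still matches the hash recorded
    on the blockchain; a mismatch signals corruption or tampering."""
    return sha256_of_file(path) == expected_hash
```

Running such a check on a schedule, and after any storage migration, is one way to raise the alerts the paragraph above calls for.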
Maintenance hits different
Finally, the overarching reality of a blockchain-based AI model development management system is that it is software—software with requirements for security and vulnerability management, maintenance, and upgrades. IT software teams handle these issues every day for enterprise applications, but with AI development applications, software developers need to cultivate new expertise or partner with other resources to get it.
Trust can be elusive when working with AI. Understandably, these new powerful tools need to clear a high bar. But systemic mistrust in AI can only be dispelled when customers, regulators, and other appropriate parties are confident in how the technology works and that they can rely on specific models working the way they’re supposed to. That’s what this blockchain-based approach can provide: accountability, transparency, and enforceability. By keeping everyone honest, it gives users reason to trust these powerful new tools.
TAKEAWAYS
AI’s increasing influence on daily life has led to growing concerns about its transparency, fairness, and reliability. Blockchain technology offers a solution by creating an immutable record of AI model development, ensuring accountability and adherence to ethical standards. FICO used this approach, and this article shares its learnings.
Enhanced AI transparency. Blockchain tracks every stage of FICO’s AI model development, reducing uncertainty and bias.
Improved accountability. An immutable record prevents disputes and ensures adherence to responsible AI standards.
Simplified governance. Automating model tracking streamlines compliance and reduces administrative burdens.
Usability must be prioritized. Making these systems work is a people challenge as well as a technology challenge; a well-designed user interface fosters adoption among data scientists without adding friction.