AI Has a Trust Problem. Can Blockchain Help?
Business uses of blockchain, like supply-chain tracking, have floundered. But some say the technology could find relevance as a tool to vet AI algorithms
By Isabelle Bousquette
Jan. 11, 2024 7:00 am ET
The difficulty of ensuring that artificial intelligence algorithms are safe, unbiased and accurate is prompting some companies to turn to a technology once hyped as transformative but that has failed to catch on in business: blockchain.
Known for underpinning bitcoin and other cryptocurrencies, blockchain is a data structure that makes it possible to create a digital ledger of transactions and share it among a network of computers. It uses cryptography to allow each participant in the network to add to the ledger in a secure way without the need for a central authority. Once a transaction is made, the blockchain remains an immutable record of it.
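The core mechanism described above, a ledger whose entries cannot be quietly altered, can be illustrated with a minimal sketch. This is not any vendor's actual implementation; it simply shows how chaining each record to the hash of the previous one makes tampering detectable.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash, so
    altering any earlier entry invalidates every later hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class Ledger:
    """Append-only chain of records; tampering is detectable."""
    def __init__(self):
        self.blocks = []  # list of (record, hash) pairs

    def append(self, record: dict):
        prev = self.blocks[-1][1] if self.blocks else "genesis"
        self.blocks.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = "genesis"
        for record, h in self.blocks:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True
```

A real blockchain adds cryptographic signatures and a consensus protocol across many computers, but the immutability the article refers to comes from this same chaining idea: rewriting history requires recomputing, and getting the network to accept, every subsequent block.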
With that in mind, tech vendors FICO, a data-analytics firm, and the blockchain-focused startup Casper Labs say they are applying the technology to track the process of building and training AI algorithms.
It’s a timely and urgent application, they say, as AI proliferates across business sectors and companies find they can’t always trust its output. At the same time, government regulators are putting more pressure on companies to increase the transparency and auditability of their algorithms.
Previous business uses of blockchain, including supply-chain tracking, failed to gain traction in the market. Whether the rapid uptake of AI can give blockchain a second life remains to be seen.
Blockchain can be used to track exactly what data an algorithm was trained on, when and by whom, as well as what steps were taken to vet and verify that data, said Scott Zoldi, FICO’s chief analytics officer. Companies building AI models typically try to follow that data trail anyway, but using blockchain would make it easier to end up with shared, consistent, trustworthy records, he said.
Blockchain won’t stop algorithms from going off the rails or exhibiting bias—but it would offer an auditable record that could show why, he said.
“We find that the blockchain can break things down into bite-sized little mini-contracts. We can validate that each of those steps are done at a very high-grain level,” Zoldi said. “There’s a level of transparency and honesty in having that immutability.”
Blockchain isn’t the answer to explainability, or the “black-box” problem in AI models. On its own, it won’t determine why an AI model spit out a particular answer. What it does promise are better records to help answer that question, Zoldi said.
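The kind of record Zoldi describes, who did what to which data, and when, can be sketched as a hash-chained provenance log. The field names below are illustrative assumptions, not FICO's actual schema; the point is that each development step becomes a verifiable, ordered entry.

```python
import hashlib
import json
from datetime import datetime, timezone

class ProvenanceLog:
    """Hash-chained log of model-development steps: what data was
    used, when, by whom, and what validation was performed."""
    def __init__(self):
        self.entries = []      # list of (entry, hash) pairs
        self._prev = "genesis"

    def record(self, step: str, actor: str, details: dict):
        entry = {
            "step": step,      # e.g. "ingest-data", "validate", "train"
            "actor": actor,
            "details": details,
            "time": datetime.now(timezone.utc).isoformat(),
            "prev": self._prev,  # link to the previous entry's hash
        }
        h = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append((entry, h))
        self._prev = h

    def audit_trail(self, step: str):
        """Return all recorded entries for a given development step."""
        return [e for e, _ in self.entries if e["step"] == step]
```

An auditor asking why a model behaved a certain way could then query the trail for, say, every "validate" entry, which is the auditable record, rather than the explanation itself, that the article distinguishes.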
Some CIOs are skeptical. “We’re all for greater AI governance, but using blockchain is a little like a hammer looking for a nail,” said Scott duFour, chief information officer of payments company Fleetcor Technologies. Blockchain could help increase trust in AI systems, he said, but only if it complemented other tools already in use that help designers understand and interpret predictions in AI models.
At present, FICO uses the tool internally, and the company plans to roll it out to customers later this year.
Elsewhere, Switzerland-based Casper Labs is working with IBM to develop its own tool, which will offer “version control”: It records which data and parameters influence a given model at a given time, and if companies detect bias or inaccuracies in their models, they can revert to an earlier version, said Chief Executive Mrinal Manohar.
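The “version control” Manohar describes can be sketched as an append-only version history, where rolling back means re-committing an earlier state rather than erasing later ones. This is an illustrative sketch of the concept, not Casper Labs' product.

```python
class ModelVersionLog:
    """Append-only history of model versions: each entry records the
    data and parameters that produced it, so a team can roll back when
    a later version shows bias or inaccuracy."""
    def __init__(self):
        self.versions = []  # never edited, only appended to

    def commit(self, data_ref: str, params: dict) -> int:
        """Record a new version; returns its version id."""
        self.versions.append({"data": data_ref, "params": dict(params)})
        return len(self.versions) - 1

    def revert(self, version_id: int) -> int:
        """Re-commit an earlier version as the newest entry,
        leaving the full history intact for auditing."""
        old = self.versions[version_id]
        return self.commit(old["data"], old["params"])

    def current(self) -> dict:
        return self.versions[-1]
```

Because the history is append-only, a revert itself leaves a trace: auditors can see not only that a biased version existed, but when the team rolled it back and to what.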