‘Moral responsibility’: Can blockchain really improve trust in AI?


Experts are still divided on the real impact blockchain tech can have on solving some of the problems that ail AI.

Most technological revolutions come with an unforeseen darker side.

When Austrian-born physicists Lise Meitner and Otto Frisch first explained the splitting of the atom in the late 1930s, they probably didn't anticipate that their discovery would lead, a few years later, to the atomic bomb. The artificial intelligence (AI) revolution is arguably no different.

AI algorithms have been around for decades. The first artificial neural network, the perceptron, was invented in 1958. But the recent pace of development has been breathtaking, and with voice assistants like Alexa and chatbots like ChatGPT, AI has entered a new phase of public awareness.

On the positive side, AI could dramatically raise the planet's general education level and help to find cures for devastating diseases like Alzheimer's. But it could also displace jobs and bolster authoritarian states that can use it to surveil their populations. Moreover, if machines ever achieve general intelligence, they might even be trained to overturn elections and prosecute wars, AI pioneer Geoffrey Hinton recently warned.

"Enormous potential and enormous danger" is how United States President Joe Biden recently described AI. This followed an open letter in March from more than 1,000 tech leaders, including Elon Musk and Steve Wozniak, calling for a moratorium on AI developments like ChatGPT. The technology, they said, presents "profound risks to society and humanity."

Already, some countries are lining up against OpenAI, the developer of ChatGPT. Italy temporarily banned ChatGPT in March, and Canada's privacy commissioner is investigating OpenAI for allegedly collecting and utilizing personal information without consent. The EU is negotiating new rules for AI, while China is demanding that AI developers henceforth abide by strict censorship rules. Some amount of regulation seems inevitable.

An antidote to what ails AI?

With this as a backdrop, a question looms: Can blockchain technology remedy the problems that afflict artificial intelligence — or at least some of them? Decentralized ledger technology, after all, is arguably everything that AI is not: transparent, traceable, trustworthy and tamper-free. It could help to offset some of the opaqueness of AI's black-box solutions.

Anthony Day, head of strategy and marketing at Midnight, a side-chain of Cardano, wrote in April on LinkedIn with respect to blockchain technology: "We DO need to create a way to enable traceable, transparent, uncensorable, automated TRUST in where and what AIs will do for (or to) our world."

At a minimum, blockchains could be a repository for AI training data. Or as IBM's Jerry Cuomo wrote several years back, in an observation that still rings true today:

"With blockchain, you can track the provenance of the training data as well as see an audit trail of the evidence that led to the prediction of why a particular fruit is considered an apple versus an orange."

"Users of centralized AI models are often unaware of the biases inherent in their training," Neha Singh, co-founder of Tracxn Technologies, an analytics and market intelligence platform, tells Magazine. "Increased transparency for AI models can be made possible using blockchain technology."

Many agree that something must be done before AI goes more heavily mainstream. "In order to trust artificial intelligence, people must know and understand exactly what AI is, what it's doing, and its impact," said Kay Firth-Butterfield, head of artificial intelligence and machine learning at the World Economic Forum. "Leaders and companies must make transparent and trustworthy AI a priority as they implement this technology."

Interestingly, some work along these lines is underway. In February, U.S.-based fintech firm FICO received a patent for "Blockchain for Data and Model Governance," officially registering a process it has been using for years to ensure responsible AI practices.

FICO uses an Ethereum-based ledger to track "end-to-end provenance of the development, operationalization, and monitoring of machine learning models in an immutable manner," according to the company, which has more than 300 data scientists and works with many of the world's largest banks. Notably, there are subtle differences between the terms "AI" and "machine learning," but the two are often used interchangeably.

"Using a blockchain enables auditability and furthers model and corporate trust," Scott Zoldi, chief analytics officer of FICO, wrote in an AI publication earlier this year.

Importantly, the blockchain provides a trail of decision-making. It shows whether a variable is acceptable, whether it introduces bias into the model and whether it is used properly. It records the entire journey of building these models, including their mistakes, corrections and improvements.

"AI tools need to be well-understood, and they need to be fair, equitable and transparent for a just future," Zoldi said, adding, "And that's where I think blockchain technology will find a marriage potentially with AI."

Separating artifice from truth

Model development is one key area where blockchain can make a difference, but there are others. Some anticipate that tools like ChatGPT might have a deleterious effect on social media and news platforms, for instance, making it difficult to sort artifice from what is real or true.

"This is one of the places where blockchain can be most useful in emerging platforms: to prove that person X said Y at a particular date/time," Joshua Ellul, associate professor and director of the Centre for Distributed Ledger Technologies at the University of Malta, tells Magazine.

Indeed, a blockchain can help to build a sort of framework for accountability where, for instance, individuals and organizations can emerge as trusted sources. "If person X is on record saying Y, and it is undeniable," Ellul continued, "then that becomes a reference point, so in the future, individuals could build their own trust ratings for other people based upon what they said in the past."
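Ellul's "person X said Y at a particular date/time" idea boils down to a hash commitment: publish a short digest of the statement to an immutable ledger, and anyone can later check a claimed statement against it. The sketch below is a minimal, standard-library-only illustration; the function names and record fields are hypothetical, and a real system would also sign the statement and write the digest to an actual chain rather than return it.

```python
import hashlib
import json


def commit_statement(author: str, statement: str, timestamp: str) -> str:
    """Produce a SHA-256 commitment to (author, statement, timestamp).

    In practice this digest would be written to a blockchain, making
    the record tamper-evident; here we simply return it.
    """
    payload = json.dumps(
        {"author": author, "statement": statement, "timestamp": timestamp},
        sort_keys=True,  # canonical field order so the digest is reproducible
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def verify_statement(author: str, statement: str, timestamp: str, on_chain_digest: str) -> bool:
    """Check a claimed (author, statement, timestamp) against the stored digest."""
    return commit_statement(author, statement, timestamp) == on_chain_digest


# Record person X's statement Y at a given time.
digest = commit_statement("person_x", "Y", "2023-05-01T12:00:00Z")

# Later, anyone can verify the claim -- and any altered wording fails.
assert verify_statement("person_x", "Y", "2023-05-01T12:00:00Z", digest)
assert not verify_statement("person_x", "Y (edited)", "2023-05-01T12:00:00Z", digest)
```

Note that the commitment proves the statement existed unaltered at recording time; establishing that "person_x" really authored it additionally requires a digital signature tied to their key.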


"At the very least, a blockchain solution could be used to track data, training, testing, auditing and post-mortem events in a manner that ensures a party cannot change some events that happened," adds Ellul.

Not all agree that blockchain can get to the root of what really ails AI, however. "I am somewhat skeptical that blockchain can be considered an antidote to AI," Roman Beck, a professor at IT University of Copenhagen and head of the European Blockchain Center, tells Magazine.

"We have already today some challenges in tracking and tracing what smart contracts are really doing, and even though blockchain should be transparent, some of the activities are hard to audit."

Elsewhere, the European Commission has been looking to create "a transatlantic space for trustworthy AI." But when asked if blockchain technology could help offset AI's opaqueness, a European Commission official was doubtful, telling Magazine:

"Blockchain enables the tracking of data sources and protects people's privacy but, by itself, does not address the black-box problem in AI neural networks, the most common approach, also used in ChatGPT, for instance. It will not help AI systems to provide explanations on how and why a given decision was taken."

When algos go crazy

Maybe blockchain can't save AI, but Beck still envisages ways the two technologies can bolster one another. "The most likely area where blockchain can help AI is the auditing aspect. If we want to avoid AI being used to cheat or engage in any other unlawful activity, one could ask for a record of AI results on a ledger. One would be able to use AI, but in case the results are used in a malicious or unlawful way, one would be able to trace back when and who has used AI, as it would be logged."

Or consider autonomous vehicles developed with AI technology, in which sensors, algorithms and blockchain would provide an autonomous operating system for inter-machine communication and coordination, adds Beck. "We still may not be able to explain how the AI has decided, but we can secure accountability and thus governance." That is, the blockchain could help to trace who or what was really at fault when an algo went crazy.


Even the aforementioned EU official can foresee blockchain providing benefits, even if it can't solve AI's black-box problem: "Using blockchain, it might be possible to create a transparent and tamper-proof record of the data used to train AI models. However, blockchain by itself does not address the detection and reduction of bias, which is challenging and still an open research question."

Implementing a blockchain to track AI modeling

In the corporate sector, many companies are still struggling to achieve trustworthy AI. FICO and Corinium recently surveyed some 100 North American financial services firms and found that 43% of respondents said they struggle with responsible AI governance structures that meet regulatory requirements. At the same time, only 8% reported that their AI strategies were fully mature, with model development standards consistently scaled.

Founded in 1956 as Fair, Isaac and Company, FICO has been a pioneer in the use of predictive analytics and data science for operational business decisions. It builds AI models that help businesses manage risk, combat fraud and optimize operations. 

Asked how the firm came to employ a permissioned Ethereum blockchain in 2017 for its analytics work, Zoldi explained that he had been having conversations with banks around that time. He learned that something on the order of 70% to 80% of all AI models being developed never made it into production.


One key problem was that data scientists, even within the same organization, were building models in different ways. Many were also failing governance checks after the models were completed. A post hoc test might reveal that an AI-powered tool for fraud detection was inadvertently discriminating against certain ethnic groups, for example. 

There had to be a better way, Zoldi recalls thinking, "than having Sally build a model and then find six months later, after she's already left the company, that she didn't record the information correctly or she didn't follow governance protocols appropriate for the bank."

FICO set about developing a responsible AI governance standard, using a blockchain to enforce it. Developers are informed in advance of which algorithms may be used, the ethics testing protocols to be followed, the thresholds for unbiased models and other required processes.

Meanwhile, the blockchain records the entire journey of every model's development, including errors, fixes and innovations. For each scientist who develops a model, another checks the work, and a third approves that it has all been done appropriately. "Three scientists have reviewed the work and verified that it's met the standard," says Zoldi.

What about blockchain's oft-cited scaling issues? Does everything fit on a single digital ledger? It's not much of a problem. "We'll store [on the blockchain] a hash of, let's say, a software asset, but the software asset itself will be stored elsewhere, in something else like a git repository. We don't literally have to put 10 megabytes worth of data on the blockchain."
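The hash-plus-pointer pattern Zoldi describes (only a fixed-size digest goes on-chain, while the bulky artifact lives in off-chain storage such as a git repository) can be sketched in a few lines. This is a minimal, standard-library-only illustration of the general technique, not FICO's actual implementation:

```python
import hashlib


def artifact_digest(data: bytes) -> str:
    """SHA-256 digest of a model artifact (code, weights, training log).

    Only this 64-character digest is written to the ledger; the
    multi-megabyte artifact itself stays in off-chain storage.
    """
    return hashlib.sha256(data).hexdigest()


# Simulated 10 MB model artifact held off-chain (e.g., in a git repo).
artifact = b"\x00" * (10 * 1024 * 1024)

# The on-chain record is 64 hex characters, regardless of artifact size.
record = artifact_digest(artifact)
assert len(record) == 64

# Auditors re-hash the off-chain copy and compare against the ledger:
# a match confirms integrity, while any tampering changes the digest.
assert artifact_digest(artifact) == record
assert artifact_digest(artifact + b"tampered") != record
```

The immutability guarantee thus extends to the off-chain artifact: as long as the on-chain digest is trusted, any after-the-fact edit to the stored model or its documentation is detectable.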

Commercial developers would be well served to heed experiences like FICO's because political leaders are clearly waking up to the risks presented by AI. The private sector has "an ethical, moral and legal responsibility to ensure the safety and security of their products," said U.S. Vice President Kamala Harris in a statement, and "every company must comply with existing laws to protect the American people."

The concerns are global, too. As the EU official tells Magazine: "To ensure AI is beneficial to society, we need a two-pronged approach: First, further research in the field of trustworthy AI is necessary to improve the technology itself, making it transparent, understandable, accurate, safe and respectful of privacy and values. Second, proper regulation of AI models must be established to guarantee their responsible and ethical use, as we propose in the [EU] AI Act."

The private sector should weigh the benefits of self-regulation. It could prove a boon for an enterprise's developers, for one. "Data scientists sometimes feel like they have been placed in a difficult situation," Zoldi says. "The ethics of how they build their models and the standards used are often not specified, and this makes them uncomfortable."

The makers of AI tools don't want to do harm to people, but they're often not provided with the necessary means to ensure that doesn't happen. A blockchain can help, though in the end, it may be only one of several self-regulatory or jurisdictional guardrails needed to ensure a trustworthy AI future.

"You talk to experts and they say, 'We're smart enough to be able to generate this technology. We're not smart enough to be able to regulate it or understand it or explain it,' and that's very scary," Zoldi tells Magazine.

All in all, blockchain's potential to support responsible AI has yet to be widely recognized, but that could soon change. Some, like Anthony Day, are even betting on it: "I'm not sure if blockchain truly will save the world, but I'm certain it can save AI."
