The Revolutionary Convergence of AI and Blockchain in the Web3 Ecosystem — Potential Use Cases

--

Artificial Intelligence (AI) is receiving a lot of attention at the moment. The current wave of generative AI tools has demonstrated proficiency in performing mundane tasks like data reclassification and reorganization. However, what truly propelled these tools into the limelight is their capability to generate text, compose music, and craft digital art. These extraordinary skills have not only captured the headlines but also motivated individual consumers and households to explore and experiment with these technologies on their own. Particularly notable are the lightweight user experience and the media attention surrounding these tools: the technology feels tangible and easy to understand, and people quickly recognise its many possibilities. In contrast, blockchain is still waiting for its own “ChatGPT moment”, but there are already many ideas on how artificial intelligence and blockchain technology could be brought together. In this article, we take a closer look at some of these concepts and give an outlook on the exciting developments that could await us in the future.

Authors: Hannes Detlefsen, Valentin Kalinov

Definition of Blockchain and AI

What is Blockchain?
Blockchain technology is a registry in which data is stored in blocks that are linked together by cryptographic mechanisms. This register is stored and verified on many computers simultaneously. Actors in the network are coordinated through monetary incentives that reward them for keeping the network secure. Once recorded on the blockchain, data is immutable and publicly viewable. Users act under a pseudonym that usually does not reveal their true identity.

What is Artificial Intelligence?
Artificial intelligence (AI) refers to the ability of computers or machines to perform tasks that normally require human intelligence. It is a field of computer science that develops algorithms and techniques to enable computers to learn and recognise patterns, make decisions and solve problems. AI can be used in a variety of areas, such as speech recognition, image processing, automated decision making and robotics. The goal of artificial intelligence is to enable machines to develop human-like intelligence and behaviours and to perform tasks efficiently and accurately.

What is Machine Learning?
Machine Learning (ML) is a subset of AI and refers to algorithms and models that enable computers to learn from experience and data, recognise patterns and make predictions. These models are improved through training on the data and can then be used continuously to analyse new data and make predictions.

What is Deep Learning?
Deep Learning is a type of ML (computer decision-making based on learning patterns in past data) that uses a specific type of algorithmic structure called a deep neural network. This technique is especially good at learning hard-to-find patterns in data but requires a large amount of data to do so.
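
To make this more concrete, here is a minimal, illustrative sketch of a small deep neural network using scikit-learn's MLPClassifier on a synthetic dataset; the dataset and the layer sizes are purely illustrative:

```python
# A minimal sketch of a deep neural network: several hidden layers learn
# increasingly abstract patterns in the data. Dataset and layer sizes are illustrative.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Synthetic, non-linearly separable data stands in for a real training set
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Three hidden layers of 32 neurons each form the "deep" part of the network
net = MLPClassifier(hidden_layer_sizes=(32, 32, 32), max_iter=2000, random_state=0)
net.fit(X, y)

print(f"Training accuracy: {net.score(X, y):.2f}")
```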

In a nutshell:
AI is the umbrella concept that refers to the development of intelligent systems, while ML is a subfield of AI that focuses on learning from data.

So why is AI such a big deal right now?
AI isn’t a recent phenomenon; it has a history dating back to the 1950s. Its current surge in popularity is largely attributed to advancements in technology, including Graphics Processing Units (GPUs), which have significantly boosted the computing power and data-handling capacity of computers. This exponential increase in processing capabilities has made deep learning more cost-effective and expedient than ever before.

Enhancing Data Quality with Blockchain

Machine Learning uses mathematical algorithms to find patterns in data and consists of three components:

Training data
Training is the process by which the algorithm learns the patterns in the data. The training data consists of large data sets that are used to train the ML algorithm, classify data and make predictions. The files can come in very different formats, e.g. images, text or audio.

Model architecture
Different ML models often differ in their architecture depending on the problem to be solved, and the model architecture is adapted accordingly. Different types of ML algorithms answer different types of questions.

Model parameters
Model parameters are values or weights that the model iteratively adjusts and improves during the training process to minimise the error between the predicted and actual results.

Furthermore, the life cycle of an ML model can be divided into two phases:

In the training phase, the parameters of the model are adjusted to minimize prediction errors. During training, the algorithm is exposed to the different features that make up the training dataset as well as the associated outcome values, and it learns by adjusting its parameters to best fit the data it sees. In the inference phase, on the other hand, the trained model is used to make predictions on new data. In ML, a dataset is typically split into a training set and a test set; once training is finished, the algorithm can be evaluated on the test set.
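
The two phases can be illustrated with a short sketch using scikit-learn and a synthetic dataset; all names and numbers below are illustrative:

```python
# A minimal sketch of the training and inference phases of an ML model,
# using scikit-learn on a synthetic dataset (all values are illustrative).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Training data: features (X) and the associated outcome values (y)
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

# Split the dataset into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Training phase: model parameters are adjusted to minimise prediction errors
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Inference phase: the trained model makes predictions on data it has not seen
predictions = model.predict(X_test)
print(f"Accuracy on the test set: {accuracy_score(y_test, predictions):.2f}")
```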

Quality of Data

Since artificial intelligence and machine learning models are built on large data sets, the first step is acquiring those data sets. Machine learning relies heavily on data: the larger the dataset, the more material the algorithm has to learn from. However, the size of the dataset is only one part of the equation. Its quality is equally, if not more, crucial. Training AI on low-quality data can lead to several negative outcomes. It may result in inaccurate predictions or recommendations, which could be detrimental in sectors like healthcare or finance. Moreover, low-quality or biased data can inadvertently perpetuate and amplify existing societal biases. Ensuring high-quality datasets involves cleaning the data, removing irrelevant or redundant information, filling in missing data, and validating its accuracy.
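
What such data cleaning can look like in practice is sketched below with pandas; the file name, column names and validation rules are purely illustrative:

```python
# An illustrative sketch of typical data-cleaning steps; the file,
# column names, and validation rules are hypothetical.
import pandas as pd

df = pd.read_csv("transactions.csv")  # hypothetical raw dataset

# Remove irrelevant or redundant information
df = df.drop(columns=["internal_note"], errors="ignore")
df = df.drop_duplicates()

# Fill in missing data (here: median imputation for a numeric column)
df["amount"] = df["amount"].fillna(df["amount"].median())

# Validate accuracy with simple sanity checks before using the data for training
df = df[(df["amount"] >= 0) & (df["timestamp"].notna())]
```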

Data Marketplaces

Web3 projects are currently working on decentralized data marketplaces to give individuals and organizations the ability to share, sell, and purchase data in a secure and privacy-preserving manner. Enabling everyone to publish their data sets would provide wider access to training data; standardising these data sets is then necessary to enable fast queries across multiple datasets and to gain insights from combining them. On one hand, large amounts of data are generated daily around the world from different actions and transactions and could potentially be used for various purposes. On the other hand, there are data-driven companies that want to make business processes more efficient or analyse customer behaviour.

Figure 1: Big Data Statistics (source: techjury.net)

Therefore, a decentralised infrastructure for trading standardised data gives private individuals and companies the chance to monetise their data. One of the protocols that has addressed this challenge is AllianceBlock.

AllianceBlock is developing the “Data Tunnel”, a platform on which every user, whether private or institutional, can easily upload, standardise, improve and monetise their data. The Data Tunnel comprises three different groups of actors:

Contributors
They upload generalised data sets and receive monetary remuneration each time their data sets are used. By making their data available, private individuals, SMEs and even large corporations can generate an additional income stream.

Suppliers
They evaluate existing datasets and combine them to gain new insights and to create new applications for the data. They also add metadata and links between existing datasets.

Customers
They can use datasets for a variety of purposes, such as training AI/ML models, analysing different markets, analysing consumer behaviour, etc. Importantly, users should be able to create synthetic copies of a dataset that share the same statistical and demographic characteristics but exclude sensitive or personal information such as names, contact details or national insurance numbers. In addition, AI-driven tools will help improve biased or incomplete datasets. Such a structure enables data sharing while remaining compliant with privacy regulations.
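
A simplified sketch of what creating such a privacy-preserving copy could look like is shown below; the column handling is illustrative, and real synthetic-data tools are considerably more sophisticated:

```python
# A simplified, illustrative sketch: personal identifiers are dropped and the
# remaining columns are resampled so the copy keeps similar statistical
# properties. Real synthetic-data generation is far more sophisticated.
import numpy as np
import pandas as pd

def synthetic_copy(df: pd.DataFrame, personal_columns: list) -> pd.DataFrame:
    rng = np.random.default_rng(seed=0)
    public = df.drop(columns=personal_columns)  # exclude names, contact details, etc.
    synthetic = pd.DataFrame()
    for column in public.columns:
        if pd.api.types.is_numeric_dtype(public[column]):
            # Draw new values from a normal distribution with the same mean and std
            synthetic[column] = rng.normal(public[column].mean(),
                                           public[column].std(),
                                           size=len(public))
        else:
            # Resample categorical values with their original frequencies
            synthetic[column] = rng.choice(public[column].to_numpy(), size=len(public))
    return synthetic
```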

Protocols like this, or the Ocean Protocol, which we have covered in a previous article, create open markets that can operate more independently of large data providers and also include SMEs and private individuals.

“The aim of Ocean Protocol is to give individuals and organizations the ability to share, sell, and purchase data in a secure and privacy-preserving manner. The project went through a few iterations and it is currently at stage V4. The current structure of Ocean focuses on three main features: The Ocean Data Market, veOCEAN staking, and Data Farming. By using blockchain Ocean offers an alternative way of tracking ownership of data. The acts of publishing data, purchasing data, and consuming data are all recorded on the blockchain to make a tamper-proof audit trail.”, ITSA Blog

In addition to marketplaces for the data required by AI models, there are many fields of application in which the blockchain in combination with artificial intelligence appears interesting:

Figure 2: Potential use cases for blockchain and AI

Use cases for the combination of these two technologies can be envisioned both in the traditional world and in decentralized ecosystems.

In the traditional world, for example, blockchain could enable transparent and secure health record management, while AI is used for diagnosis, drug discovery or patient care. Similarly, supply chain management could benefit from blockchain providing traceability and transparency while AI analyses data and predicts supply chain events, which could lead to efficiency gains. The use of both technologies in education is also conceivable. It is already known that ChatGPT and other Large Language Models (LLMs) can be quite useful, and with further developments, prompts and knowledge, it will become increasingly easy to learn and understand with the help of artificial intelligence. Providers of online courses and certification programmes could issue their certificates as non-fungible tokens on a blockchain, making them verifiably and securely the property of the learner.

However, the interplay of both technologies is also interesting in the context of cryptocurrencies.

Disclaimer: It is worth mentioning beforehand that careful research should be carried out, especially for tokens from the AI sector. In the past, we have observed critical market movements, especially during the hype cycles of individual sectors, in which projects without finished products and with dubious white papers have raised money or launched tokens. Nevertheless, there are certainly interesting projects with real benefits and their own token.

In the following, we would like to highlight various possible applications of AI and ML models in the context of blockchain and smart contracts:

dApps consist of smart contracts, which are usually deployed immutably on the blockchain but do not necessarily have to be. With ML components, it would be possible to create dynamic smart contracts that react to changes in virtual economies via off-chain ML processes.
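
One possible pattern, sketched below under several assumptions, is an off-chain process that computes a new parameter with an ML model and writes it into a contract exposing an update function; the RPC endpoint, contract address, ABI, the setRewardRate function and the model stub are all hypothetical:

```python
# A hedged sketch of an off-chain ML process updating a smart contract parameter
# via web3.py. Endpoint, address, ABI, function, and model are placeholders.
from web3 import Web3

RPC_URL = "https://rpc.example.org"                               # placeholder node endpoint
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder contract
OPERATOR = "0x0000000000000000000000000000000000000001"           # authorised updater account
CONTRACT_ABI = [{"name": "setRewardRate", "type": "function",
                 "inputs": [{"name": "newRate", "type": "uint256"}],
                 "outputs": [], "stateMutability": "nonpayable"}]  # hypothetical ABI

def predict_reward_rate(market_data) -> float:
    """Stub for the off-chain ML model that forecasts an appropriate reward rate."""
    raise NotImplementedError

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Off-chain: run the model and scale the result to the contract's fixed-point format
new_rate = int(predict_reward_rate(market_data={}) * 10**18)

# On-chain: call the contract's update function (account and signing setup omitted)
tx_hash = contract.functions.setRewardRate(new_rate).transact({"from": OPERATOR})
```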

Lending protocols such as Aave, for example, require information about specific LTV (loan-to-value) thresholds for the respective collateral assets and rely on central service providers for this information. Machine learning could replace the central service provider here and provide unbiased forecasts and recommendations. Asset management protocols that execute different yield strategies simultaneously could also automate data-driven decisions and execute actions to maximise the investor’s ROI.
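
As an illustration of the idea (and not of how Aave actually sets its parameters), a regression model could map market features of a collateral asset to a recommended LTV threshold; the features and training values below are invented:

```python
# An illustrative sketch: a regression model maps market features of a
# collateral asset to a recommended LTV threshold. Features and training
# values are invented and not taken from any real protocol.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical historical observations: [volatility, liquidity score, max drawdown]
X = np.array([
    [0.45, 0.90, 0.30],
    [0.80, 0.40, 0.65],
    [0.20, 0.95, 0.15],
    [0.60, 0.60, 0.45],
])
# LTV ratios that proved safe for comparable assets (invented values)
y = np.array([0.70, 0.40, 0.82, 0.55])

model = GradientBoostingRegressor(random_state=0).fit(X, y)

new_asset = np.array([[0.55, 0.70, 0.40]])
recommended_ltv = model.predict(new_asset)[0]
print(f"Recommended LTV threshold: {recommended_ltv:.0%}")
```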

Similarly, ML components could be interesting for dynamically adjusting a protocol’s token issuance, burning, fees, etc. depending on changes in certain variables within the model. In general, tokens within dApps or blockchains serve to coordinate different, mutually unknown parties, and this coordination mechanism usually has a monetary character that is represented by tokens. If the tokenomics of a protocol are poorly designed, certain events can destabilise its digital economy. ML models can react quickly to developments within the protocol and make the adjustments needed to bring incentives back into balance.
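
A toy version of such a feedback rule is sketched below; all variable names and numbers are invented and merely illustrate the idea of nudging incentives back towards a target:

```python
# A toy sketch of a feedback rule: emissions rise when a model forecasts
# demand below target and fall when demand is forecast above target.
# All names and numbers are invented.
def adjust_emission(current_emission: float,
                    forecast_demand: float,
                    target_demand: float,
                    sensitivity: float = 0.1) -> float:
    """Return the next period's token emission rate."""
    gap = (target_demand - forecast_demand) / target_demand
    return max(0.0, current_emission * (1.0 + sensitivity * gap))

# Example: demand is forecast 20% below target, so emissions are raised slightly
print(adjust_emission(current_emission=1_000.0,
                      forecast_demand=80_000.0,
                      target_demand=100_000.0))   # -> 1020.0
```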

On the other hand, ML can be used to audit the smart contracts of various projects to ensure consumer protection. Such a model could determine the degree of centralisation and detect backdoors or exploitable vulnerabilities. Audits are often tedious and sometimes flawed; such a tool could automate parts of this process and report in an unbiased way.
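
As a hedged illustration, a classifier could be trained on static features extracted from previously audited contracts and used to flag new contracts for closer human review; the features, labels and data below are entirely hypothetical:

```python
# An illustrative sketch of ML-assisted contract screening: a classifier
# trained on static features of previously audited contracts flags new
# contracts for human review. Features, labels, and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per contract:
# [owner can mint, uses delegatecall, has selfdestruct, share of code covered by tests]
X_train = np.array([
    [1, 1, 1, 0.2],
    [0, 0, 0, 0.9],
    [1, 0, 1, 0.4],
    [0, 1, 0, 0.8],
])
y_train = np.array([1, 0, 1, 0])   # 1 = issues were found in a manual audit

clf = LogisticRegression().fit(X_train, y_train)

new_contract = np.array([[1, 1, 0, 0.3]])
risk = clf.predict_proba(new_contract)[0, 1]
print(f"Estimated probability that a manual audit would find issues: {risk:.0%}")
```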

Likewise, ML can be used from a regulatory perspective to monitor blockchains, identify conspicuous payment flows and connections, and make this information available to the regulatory authorities. On common public blockchains, the information recorded on-chain is transparent and analysable by everyone. Among other things, PwC Germany uses ML in AML transaction monitoring to enable crypto service providers to meet compliance requirements. Wallets receive a uniform risk score, and in addition to on-chain data, personal data and information from third-party sources are used. ML helps to detect patterns, classify them according to FATF guidelines and identify illicit funds.
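
A strongly simplified sketch of wallet risk scoring is shown below; the feature names, weights and thresholds are invented for illustration and are not PwC’s actual methodology:

```python
# A strongly simplified sketch of wallet risk scoring for AML monitoring.
# Feature names, weights, and thresholds are invented for illustration and
# do not reflect any real provider's methodology.
def wallet_risk_score(features: dict) -> float:
    """Combine on-chain exposure features into a 0-100 risk score (higher = riskier)."""
    score = 0.0
    score += 40.0 * features.get("share_from_mixers", 0.0)       # funds traced to mixing services
    score += 30.0 * features.get("share_from_sanctioned", 0.0)   # exposure to sanctioned addresses
    if features.get("hops_to_darknet_market", 99) <= 2:          # close proximity to illicit markets
        score += 20.0
    score += 10.0 * features.get("burst_activity", 0.0)          # sudden spikes in transaction volume
    return min(score, 100.0)

example_wallet = {
    "share_from_mixers": 0.25,
    "share_from_sanctioned": 0.0,
    "hops_to_darknet_market": 5,
    "burst_activity": 0.4,
}
print(wallet_risk_score(example_wallet))   # -> 14.0
```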

The classification of Ocean Protocol (OCEAN) according to the ITC

The International Token Classification (ITC) is a multi-dimensional, expandable framework for the classification of tokens. In this example we will classify the OCEAN token according to the latest version of our ITC.

Figure 3: The classification of OCEAN token by ITSA (source: https://itin.itsa.global/FT6BGXK56 )

Economic Purpose (EEP): OCEAN is listed as a Settlement and Governance Token (EEP22TU03) due to its design as a means of payment combined with governance functionality through veOCEAN.

Industry Type (EIN): The issuer of OCEAN is active in the field of Other Software, Data Storage and Processing (EIN05DA05).

Technological Setup (TTS): OCEAN is an Ethereum ERC-20 Standard Token (TTS42ET01).

Legal Claim (LLC): OCEAN does not entitle its holder to any legal claim or rights against the issuing organization; therefore, it is listed as a No-Claim Token (LLC31).

Issuer Type (LIT): The dimension “Issuer Type” provides information on the nature of the issuer of the token. OCEAN’s platform is built by a private company; its Issuer Type is therefore Private Sector Legal Entity (LIT61PV).

Regulatory Framework (EU) (REU): The dimension “Regulatory Status EU” provides information on the potential classification of a token according to the European Commission’s proposal for a Regulation on Markets in Crypto Assets (MiCA, Regulation Proposal COM/2020/593 final). OCEAN qualifies as a Utility Token (REU52) according to the definition provided in Article 3 (5) of Regulation Proposal COM/2020/593 final.

Consensus Mechanism (TCM): The dimension describes the mechanism that is deployed to achieve consensus on the token’s distributed ledger. OCEAN tokens are issued on Ethereum; therefore, they are listed as Proof-of-Stake (TCM71PS).

Type of Maximum Supply (EMS): The dimension describes the token’s type of maximum supply. The supply of OCEAN is capped at 1.41 billion tokens. Therefore, OCEAN’s supply is listed as Fixed (EMS81).

Primary Mode of Origination (EMO): The OCEAN Token Sale ended in May 2019, raising around $30,000,000. Therefore, the token is listed as Sale (EMO92).

Taxes (RTA): One common distinction can be drawn between crypto-assets: those that resemble ‘conventional’ assets, like securities, and are merely recorded on DLT systems (Conventional Asset Tokens, RTA71), and those assets and activities that raise new regulatory challenges, such as virtual currencies (New Asset Tokens, RTA72; OECD 2020). OCEAN is listed in the Tokenbase as a New Asset Token (RTA72).

The International Token Standardization Association (ITSA) e.V.

The International Token Standardization Association (ITSA) e.V. is a not-for-profit association under German law that aims at promoting the development and implementation of comprehensive market standards for the identification, classification, and analysis of DLT- and blockchain-based cryptographic tokens. As an independent industry membership body, ITSA unites over 100 international associated founding members from various interest groups. In order to increase transparency and safety on global token markets, ITSA currently develops and implements the International Token Identification Number (ITIN) as a market standard for the identification of cryptographic tokens and the International Token Classification (ITC) as a standard framework for the classification of cryptographic tokens according to their inherent characteristics. ITSA then adds the identified and classified tokens to the world’s largest register for tokens, our Tokenbase.

  • The International Token Identification Number (ITIN) is a 9-digit alphanumeric technical identifier for both fungible and non-fungible DLT-based tokens. Thanks to its underlying Uniform Token Locator (UTL), ITIN presents a unique and fork-resilient identification of tokens. The ITIN also allows for the connecting and matching of other media and data to the token, such as legal contracts or price data, and increases safety and operational transparency when handling these tokens.
  • The International Token Classification (ITC) is a multi-dimensional, expandable framework for the classification of tokens. Current dimensions include technological, economic, legal, and regulatory dimensions with multiple sub-dimensions. By mid-2021, there will be at least two new dimensions added, including a tax dimension. So far, our classification framework has been applied to 99% of the token market according to the market capitalization of classified tokens.
  • ITSA’s Tokenbase currently holds data on over 4000 tokens. Tokenbase is a holistic database for the analysis of tokens and combines our identification and classification data with market and blockchain data from external providers. Third-party data of several partners is already integrated, and API access is also in development.

Remarks

If you like this article, we would be happy if you forward it to your colleagues or share it on social networks. More information about the International Token Standardization Association can be found on the Internet, on Twitter, or on LinkedIn.

Hannes Detlefsen is a Community Manager at the International Token Standardization Association (ITSA) e.V. and has been active in the blockchain field for several years. Currently he is studying Business Administration at the Christian-Albrechts University in Kiel. Besides his experience in the field of digital assets, his main focus is on decentralized finance. You can contact him via detlefsen@blockchain-sh.de and connect with him on LinkedIn.

Valentin Kalinov is an Executive Director at International Token Standardization Association (ITSA) e.V., working to create the world’s largest token database, including a classification framework and unique token identifiers and locators. He has over five years of experience working at BlockchainHub Berlin in content creation and token analysis, as a project manager at the Research Institute for Cryptoeconomics at the Vienna University of Economics and token analyst at Token Kitchen. You can contact Valentin via valentin.kalinov@itsa.global and connect on Linkedin if you would like to further discuss ITSA e.V. or have any other open questions.
