Axonum enshrines AI into blockchain to build a decentralized supercomputer powered by global collective intelligence.
We are building Axonum, an AI optimistic rollup, with the world’s first AI EVM.
We aim to democratize access to AI-powered DApps, making AI model inference both accessible and user-friendly.
Axonum is an optimistic rollup with enshrined AI, powered by opML and the AI EVM. It enables users to seamlessly employ AI models natively within smart contracts without being encumbered by the intricacies of the underlying technologies.
To enable native ML inference in smart contracts, we need to modify the execution layer of the layer-2 chain. Specifically, we add a precompiled contract, inference, to the EVM to build the AI EVM.
The AI EVM conducts ML inference in native execution and returns deterministic execution results. When a user wants to use an AI model to process data, all the user needs to do is call the precompiled contract inference with the model address and model input; the user then obtains the model output and can use it natively in the smart contract.
pragma solidity ^0.8.0;

import "./AILib.sol";

contract AIContract {
    // Event declaration is required for the emit below to compile.
    event Inference(bytes32 model_address, bytes input_data, uint256 output_size, bytes output);

    function inference(bytes32 model_address, bytes memory input_data, uint256 output_size) public {
        bytes memory output = AILib.inference(model_address, input_data, output_size);
        emit Inference(model_address, input_data, output_size, output);
    }
}
The models are stored in the model data availability (DA) layer. All models can be retrieved from the DA layer using the model address. We assume data availability for all models.
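One way to picture model retrieval is content addressing: if the model address is the hash of the model bytes, any node can verify what it fetched from the DA layer. Below is a minimal Python sketch under that assumption; the in-memory dictionary is a hypothetical stand-in for the real DA layer, not Axonum's actual implementation.

```python
import hashlib

# Hypothetical stand-in for the model DA layer: models are content-addressed,
# so the "model address" is simply the SHA-256 hash of the model bytes.
MODEL_DA = {}

def publish_model(model_bytes: bytes) -> str:
    """Store a model in the DA layer and return its content address."""
    address = hashlib.sha256(model_bytes).hexdigest()
    MODEL_DA[address] = model_bytes
    return address

def fetch_model(address: str) -> bytes:
    """Retrieve a model by address and verify it against its own hash,
    so a faulty or malicious DA response is detected locally."""
    model_bytes = MODEL_DA[address]
    if hashlib.sha256(model_bytes).hexdigest() != address:
        raise ValueError("DA layer returned corrupted model data")
    return model_bytes
```

Content addressing makes the "we assume data availability" trust assumption explicit: availability must be assumed, but integrity of whatever is fetched can always be checked.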
The core design principle of the precompiled contract inference follows that of opML: we separate execution from proving. We provide two implementations of the precompiled contract inference. One is compiled for native execution and is optimized for speed. The other is compiled for the fraud-proof VM and is used to prove the correctness of the opML results.
For the execution implementation, we reuse the ML engine in opML. We first fetch the model from the model hub using the model address and then load it into the ML engine. The ML engine takes the user’s input to the precompiled contract as the model input and executes the ML inference task. The ML engine guarantees the consistency and determinism of the ML inference results using quantization and soft float.
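To see why quantization yields determinism, note that once weights and inputs are mapped to integers, every node performs exact integer arithmetic and gets bit-identical results regardless of its floating-point hardware. The following Python sketch illustrates the idea with a made-up fixed-point scale and a toy dot product; it is not opML's actual engine.

```python
SCALE = 1 << 8  # illustrative fixed-point scale: 8 fractional bits

def quantize(x: float) -> int:
    """Map a real value onto the integer fixed-point grid."""
    return int(round(x * SCALE))

def dequantize(q: int) -> float:
    return q / SCALE

def quantized_dot(weights, inputs):
    """Dot product computed entirely in integer arithmetic.
    Floats only appear at the quantize/dequantize boundary, so the
    accumulated result is identical on every machine."""
    qw = [quantize(w) for w in weights]
    qx = [quantize(x) for x in inputs]
    acc = sum(w * x for w, x in zip(qw, qx))  # exact integer math
    return dequantize(acc // SCALE)
```

The same reasoning motivates soft float: implementing floating-point in software removes dependence on hardware-specific rounding behavior, at the cost of speed.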
Besides the current AI EVM design, an alternative approach to enabling AI in the EVM is adding machine learning-specific opcodes to the EVM, with corresponding changes to the virtual machine’s resource and pricing model as well as the implementation.
opML (Optimistic Machine Learning) and optimistic rollup (opRollup) are both based on a similar fraud-proof system, making it feasible to integrate opML into the Layer 2 (L2) chain alongside the opRollup system. This integration enables the seamless utilization of machine learning within smart contracts on the L2 chain.
Just like existing rollup systems, Axonum is responsible for “rolling up” transactions by batching them before publishing them to the L1 chain, usually through a network of sequencers. A single rollup batch can include thousands of transactions, increasing the throughput of the combined L1 and L2 system.
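The batching step can be sketched as follows: pending L2 transactions are grouped into fixed-size batches, and each batch is reduced to a single commitment that would be published to L1. This is a simplified Python illustration; a real sequencer also orders, compresses, and signs batches, and the batch size and hashing scheme here are assumptions.

```python
import hashlib
import json

def batch_transactions(txs, batch_size):
    """Group pending L2 transactions into batches and compute the
    commitment (here, a SHA-256 digest of the serialized batch) that
    each batch would publish to the L1 chain."""
    batches = [txs[i:i + batch_size] for i in range(0, len(txs), batch_size)]
    commitments = [
        hashlib.sha256(json.dumps(batch, sort_keys=True).encode()).hexdigest()
        for batch in batches
    ]
    return batches, commitments
```

Publishing one small commitment per batch, rather than every transaction individually, is what lets a rollup amortize L1 costs across thousands of L2 transactions.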
Axonum, as an optimistic rollup, is an interactive scaling method for L1 blockchains. We optimistically assume that every proposed transaction is valid by default. Unlike a traditional L2 optimistic rollup, a transaction in Axonum can include AI model inferences, which makes smart contracts on Axonum “smarter” with AI.
To mitigate potentially invalid transactions, like other optimistic rollups, Axonum introduces a challenge period during which participants may challenge a suspect rollup. A fraud-proving scheme is in place to allow several fraud proofs to be submitted; those proofs can establish the rollup as valid or invalid. During the challenge period, state changes may be disputed and resolved, or finalized if no challenge is presented (and the required proofs are in place).
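The challenge-period logic above can be modeled as a small state machine: a posted state root is presumed valid, may be challenged only inside the window, and finalizes once the window closes with no open challenge. The Python sketch below uses an assumed seven-day window and omits the actual dispute-resolution game.

```python
CHALLENGE_PERIOD = 7 * 24 * 3600  # assumed window length, in seconds

class RollupClaim:
    """Toy model of an optimistically posted state root."""

    def __init__(self, state_root: str, posted_at: int):
        self.state_root = state_root
        self.posted_at = posted_at
        self.challenged = False

    def challenge(self, now: int) -> bool:
        """A challenge is only accepted while the window is open."""
        if now < self.posted_at + CHALLENGE_PERIOD:
            self.challenged = True
        return self.challenged

    def is_final(self, now: int) -> bool:
        """Finalized once the window passes with no open challenge."""
        return not self.challenged and now >= self.posted_at + CHALLENGE_PERIOD
```

In the real system, an accepted challenge would trigger the interactive fraud-proof game rather than simply flagging the claim.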
Here’s the essential workflow of Axonum, without considering mechanisms such as pre-confirmation or force exit:
The core design principle of Axonum’s fraud-proof system is that we separate the fraud-proof process of Geth (the Golang implementation of the Ethereum client on layer 2) from that of opML. This design ensures a robust and efficient fraud-proof mechanism. Here’s a breakdown of the fraud-proof system and our separation design:
Axonum is the first AI optimistic rollup that enables AI on Ethereum natively, trustlessly, and verifiably.
Axonum leverages optimistic ML and optimistic rollup, and introduces the AI EVM to add intelligence to Ethereum as a Layer 2.
We enshrine AI into blockchain to build a decentralized supercomputer powered by global collective intelligence.