The Casper Network is a decentralized computation platform. In this chapter we describe aspects of the computational model we use.
Measuring computational work
Computation is all done in a WebAssembly (wasm) interpreter, allowing any programming language which compiles to wasm to become a smart contract language for the Casper blockchain. Similar to Ethereum, we use `Gas` to measure computational work in a way which is consistent from node to node in the Casper Network. Each wasm instruction is assigned a `Gas` value, and the runtime tracks the amount of gas spent as the interpreter executes each instruction. All executions are finite because each has a finite gas limit specifying the maximum amount of gas that can be spent before the runtime terminates the computation. The payment amount specified within the Deploy sets the gas limit for the Deploy's execution; how this limit is determined is discussed in more detail below.
Although computation is measured in `Gas`, we still take payment for computation in motes. Therefore, there is a conversion rate between `Gas` and motes; how this conversion rate is determined is discussed elsewhere.
Please note that Casper does not refund any amount of unused gas. This decision supports the Casper runtime economics by allocating computational resources efficiently: the consensus-before-execution model encourages users to optimize their gas consumption and prevents poorly constructed deploys from overusing block space.
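As an illustration, the metering described above can be sketched as follows. The per-instruction costs, the conversion rate, and the `OutOfGas` handling are hypothetical stand-ins, not Casper's actual cost schedule:

```python
MOTES_PER_GAS = 10  # hypothetical conversion rate between motes and gas

# Hypothetical per-instruction gas costs for a few wasm instructions.
GAS_COSTS = {"i32.add": 1, "i32.mul": 3, "memory.grow": 100}

class OutOfGas(Exception):
    pass

class GasMeter:
    def __init__(self, payment_motes: int):
        # The payment amount in motes fixes the gas limit for the deploy.
        self.gas_limit = payment_motes // MOTES_PER_GAS
        self.gas_spent = 0

    def charge(self, instruction: str) -> None:
        # Gas is tracked with each instruction the interpreter executes.
        self.gas_spent += GAS_COSTS[instruction]
        if self.gas_spent > self.gas_limit:
            # The runtime terminates the computation; unused gas is
            # never refunded in any case.
            raise OutOfGas(f"limit {self.gas_limit} exceeded")

meter = GasMeter(payment_motes=50)  # gas limit = 5
meter.charge("i32.add")             # 1 gas spent
meter.charge("i32.mul")             # 4 gas spent
```

A subsequent `meter.charge("memory.grow")` would exceed the limit and raise `OutOfGas`, terminating execution.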
A deploy represents a request from a user to perform computation on our platform. It has the following information:
- Body: containing payment code and session code (more details on these below)
- Header: containing
  - the identity key of the account the deploy will run in
  - the timestamp when the deploy was created
  - a time to live, after which the deploy expires and cannot be included in a block
  - the blake2b256 hash of the body
- Deploy hash: the blake2b256 hash of the header
- Approvals: the set of signatures which have signed the deploy hash; these are used in the account permissions model
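The hashing scheme above can be sketched with the Python standard library's blake2b. The serialized field bytes below are placeholders; real deploys use Casper's binary serialization format:

```python
import hashlib

def blake2b256(data: bytes) -> bytes:
    # 32-byte (256-bit) blake2b digest, as used for deploy hashing.
    return hashlib.blake2b(data, digest_size=32).digest()

# Placeholder serialized body: payment code plus session code.
body = b"payment-code-bytes" + b"session-code-bytes"
body_hash = blake2b256(body)

# Placeholder serialized header: identity key, timestamp, TTL, and the
# hash of the body.
header = (
    b"account-identity-key"
    + b"2024-01-01T00:00:00Z"  # timestamp
    + b"30m"                   # time to live
    + body_hash
)

# The deploy hash is the blake2b256 hash of the header; approvals are
# signatures over this hash.
deploy_hash = blake2b256(header)
```

Because the header embeds the body hash, signing the deploy hash transitively commits the signer to the body as well.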
Each deploy is an atomic piece of computation: either all the effects a deploy has on the global state are included in a block, or the deploy is not included in the block at all.
A deploy goes through the following phases on Casper:
- Deploy Received
- Deploy Gossiped
- Block Proposed
- Block Gossiped
- Consensus Reached
- Deploy Executed
The client sending the deploy submits it to one or more nodes via their JSON-RPC servers. The deploy acceptor, the component responsible for receiving deploys from the JSON-RPC server or from another node, runs validity checks on the deploy and either allows the lifecycle to continue or returns an appropriate error. Once accepted, the deploy hash is returned to the client to indicate that the deploy has been enqueued for execution. The deploy may expire while waiting to be gossiped; whenever this happens, a `DeployExpired` event is emitted by the event stream servers of all nodes which have expired the deploy.
After a node accepts a new deploy, it gossips the deploy to all other nodes. A validator node puts the deploy into its block proposer buffer, from which the validator leader picks deploys to create a new block for the chain. This mechanism is efficient and ensures that all nodes in the network eventually hold the given deploy. Each node which accepts a gossiped deploy also emits a `DeployAccepted` event on its event stream server. The deploy may expire while waiting to be added to a block; whenever this happens, a `DeployExpired` event is emitted.
The validator leader for this round will propose a block that includes as many deploys from the block proposer buffer as can fit in a block.
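A minimal sketch of block proposal, assuming a greedy first-fit fill of the block by deploy size; the real proposer also respects gas limits, TTLs, and other constraints not modeled here:

```python
def propose_block(buffer, max_block_size):
    # Greedily take buffered deploys in order while they fit in the block.
    block, used = [], 0
    for deploy_hash, size in buffer:
        if used + size <= max_block_size:
            block.append(deploy_hash)
            used += size
    return block

# Hypothetical buffer of (deploy hash, serialized size) pairs.
buffer = [("d1", 400), ("d2", 700), ("d3", 300)]
print(propose_block(buffer, max_block_size=1000))  # ['d1', 'd3']
```

Here "d2" is skipped because it would overflow the block, while the smaller "d3" still fits.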
The proposed block is propagated to all other nodes.
Once the other validators reach consensus that the proposed block is valid, all deploys in the block are executed, and this block becomes the final block added to the chain. Whenever consensus is reached, a `BlockAdded` event is emitted by the event stream server, and `FinalitySignature` events are emitted shortly thereafter as finality signatures for the new block arrive from the validators.
A deploy is executed in distinct phases to accommodate flexible payment for computation. The phases of a deploy are payment, session, and finalization. During the payment phase, the payment code is executed; if it is successful, the session code is executed during the session phase. Independently of the session code's outcome, the finalization phase does some bookkeeping around payment. Once the deploy is executed, committed, and forms part of the given block, a `DeployProcessed` event is emitted by the event stream server.
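The three execution phases can be sketched as follows, assuming `payment_code` and `session_code` are plain callables returning a success flag; the real runtime tracks gas and global-state effects, not booleans:

```python
def execute_deploy(payment_code, session_code) -> str:
    # Payment phase: payment code runs first.
    if not payment_code():
        # Session code never runs; the cost of the failed computation is
        # still covered from the offending account's main purse.
        return "payment failed"
    # Session phase: reached only after successful payment.
    session_ok = session_code()
    # Finalization phase: bookkeeping around payment happens here,
    # independently of the session code's outcome.
    return "executed" if session_ok else "session reverted"

print(execute_deploy(lambda: True, lambda: True))   # executed
print(execute_deploy(lambda: False, lambda: True))  # payment failed
```

Note that a session failure still leaves payment finalized, which is why payment and session effects are tracked separately.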
Payment code provides the logic used to pay for the computation the deploy will do. Payment code is allowed to include arbitrary logic, providing maximal flexibility in how a deploy can be paid for (e.g., the simplest payment code could use the account's main purse, while an enterprise application may require deploys to pay via a multi-sig application accessing a corporate purse). We restrict the gas limit of the payment code execution, based on the current conversion rate between gas and motes, such that no more than `MAX_PAYMENT_COST` motes (a constant of the system) are spent. To ensure payment code will pay for its own computation, we only allow accounts with a main purse balance greater than or equal to `MAX_PAYMENT_COST` to execute deploys.
Payment code ultimately provides its payment by performing a token transfer into the Handle Payment contract's payment purse. If payment is not given or not enough is transferred, then payment execution is not considered successful. In this case the effects of the payment code on the global state are reverted and the cost of the computation is covered by motes taken from the offending account's main purse.
Session code provides the main logic for the deploy. It is only executed if the payment code is successful. The gas limit for this computation is determined based on the amount of payment given (after subtracting the cost of the payment code itself).
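The gating and gas-limit rules above amount to simple arithmetic. The `MAX_PAYMENT_COST` value and conversion rate below are hypothetical placeholders, not the system's actual constants:

```python
MAX_PAYMENT_COST = 10_000  # hypothetical system constant, in motes
MOTES_PER_GAS = 10         # hypothetical conversion rate

def may_execute(main_purse_balance: int) -> bool:
    # Only accounts that can cover payment-code execution may run deploys.
    return main_purse_balance >= MAX_PAYMENT_COST

def session_gas_limit(payment_motes: int, payment_cost_motes: int) -> int:
    # The session phase gets whatever payment remains after subtracting
    # the payment code's own cost, converted from motes to gas.
    remaining = max(payment_motes - payment_cost_motes, 0)
    return remaining // MOTES_PER_GAS

print(may_execute(15_000))             # True
print(session_gas_limit(5_000, 500))   # 450
```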
Specifying payment code and session code
The user-defined logic of a deploy can be specified in a number of ways:
- a wasm module in binary format representing a valid contract (Note: the named keys do not need to be specified because they come from the account the deploy is running in)
- a 32-byte identifier representing the hash or URef where a contract is already stored in the global state
- a name corresponding to a named key in the account, where a contract is stored under the key
Each of payment and session code are independently specified, so different methods of specifying them may be used (e.g. payment could be specified by a hash key, while session is explicitly provided as a wasm module).
Deploys as functions on the global state
To enable concurrent modification of the global state (either by parallel deploys in the same block or parallel blocks on different forks of the chain), we view each deploy as a function taking the global state as input and producing a new global state as output. It is safe to execute two such functions concurrently if they do not interfere with each other, which formally means the functions commute (i.e., if they were executed sequentially, the final result would be the same for a given input regardless of order). Whether two deploys commute is determined by the effects they have on the global state, i.e., which operations (read, write, add) they perform on each key in the key-value store. How this is done is described in Appendix C.
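One plausible encoding of the commutativity check, assuming two operations on a shared key commute only when both are reads or both are adds; the authoritative rules are given in Appendix C:

```python
READ, WRITE, ADD = "read", "write", "add"

def ops_commute(op_a: str, op_b: str) -> bool:
    if op_a == READ and op_b == READ:
        return True   # reads never interfere with each other
    if op_a == ADD and op_b == ADD:
        return True   # additions are order-independent
    return False      # any write, or a read/add mix, conflicts

def deploys_commute(effects_a: dict, effects_b: dict) -> bool:
    # Each effects map records the operation a deploy performs per key;
    # only keys touched by both deploys can cause a conflict.
    shared = effects_a.keys() & effects_b.keys()
    return all(ops_commute(effects_a[k], effects_b[k]) for k in shared)

d1 = {"balance/alice": ADD, "config": READ}
d2 = {"balance/alice": ADD, "other": WRITE}
d3 = {"config": WRITE}

print(deploys_commute(d1, d2))  # True: add/add on the shared key
print(deploys_commute(d1, d3))  # False: read vs write on "config"
```

Deploys that commute under this check could safely run in parallel, since executing them in either order yields the same final state.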
The Casper Network runtime
A wasm module is not natively able to create any effects outside of reading / writing from its own linear memory. To enable other effects (e.g. reading / writing to the Casper global state), wasm modules must import functions from the host environment they are running in. In the case of contracts on the Casper blockchain, this host is the Casper runtime.
Here, we briefly describe the functionalities provided by imported functions. All these features are conveniently accessible via functions in the Casper Rust library. For a more detailed description of the functions available for contracts to import, see Appendix A.
- Reading / writing from global state
  - `read` / `write` / `add` functions allow working with existing `URef`s
  - `new_uref` allows creating a new `URef` initialized with a given value (see the section below about how `URef`s are generated)
  - `store_function` allows writing a contract under a hash key
  - functions such as `remove_uref` allow working with the named keys of the current context (account or contract)
- Account functionality
- Runtime flow and properties
  - `call_contract` allows executing a contract stored under a key (hash or `URef`), including passing arguments and getting a return value
  - `ret` is used by contracts to return a value to their caller (i.e., enables return values from `call_contract`)
  - `get_named_arg` allows getting arguments passed to the contract (either to session code as part of the deploy, or arguments passed via `call_contract`)
  - `revert` exits the entire executing deploy, reverting any effects it caused, and returns a status code that is captured in the block
  - `get_caller` returns the public key of the account for the current deploy (can be used for control flow based on specific users of the blockchain)
  - `get_phase` returns the current phase of the deploy execution
  - `get_blocktime` gets the timestamp of the block this deploy will be included in
- Mint functionality
  - `create_purse` creates a new empty purse, returning the `URef` to the purse
  - `get_balance` reads the balance of a purse
  - `transfer_to_account` transfers from the present account's main purse to the main purse of a specified account (creating the account if it does not exist)
  - `transfer_from_purse_to_account` transfers from a specified purse to the main purse of a specified account (creating the account if it does not exist)
  - `transfer_from_purse_to_purse` is an alias for the mint's transfer function
`URef`s are generated using a cryptographically secure random number generator based on the ChaCha algorithm. The random number generator is seeded with the blake2b256 hash of the deploy hash concatenated with an index representing the current phase of execution (to prevent collisions between `URef`s generated in different phases of the same deploy).
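A sketch of the seed derivation in Python; the phase indices and their byte encoding are assumptions for illustration, and the ChaCha-based generator that consumes the seed is not shown (it is not in the Python standard library):

```python
import hashlib

# Hypothetical phase indices; the real encoding may differ.
PHASES = {"payment": 0, "session": 1, "finalization": 2}

def uref_rng_seed(deploy_hash: bytes, phase: str) -> bytes:
    # Seed = blake2b256(deploy hash || phase index), so URefs generated
    # in different phases of the same deploy cannot collide.
    data = deploy_hash + PHASES[phase].to_bytes(1, "little")
    return hashlib.blake2b(data, digest_size=32).digest()

deploy_hash = bytes(32)  # placeholder 32-byte deploy hash
payment_seed = uref_rng_seed(deploy_hash, "payment")
session_seed = uref_rng_seed(deploy_hash, "session")
```

Because the deploy hash itself commits to the whole deploy, the resulting `URef` stream is deterministic per deploy and phase yet unpredictable in advance.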