vRAM

vRAM is a caching layer for EOSIO RAM that ultimately reduces the total RAM footprint required to run a dApp.

Introduction

On EOSIO, RAM must be purchased with the chain's system token; take EOS as an example. Checking https://bloks.io/wallet/ram/buy shows how much RAM 100 EOS buys: at the time of writing, 2,875,626 bytes, or almost 3 MB.
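At that snapshot, RAM works out to roughly 100 / 2.88 ≈ 35 EOS per MB, and that figure moves with the RAM market.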

A user-based blockchain application that wishes to cover the RAM costs of users interacting with the platform would, under normal circumstances, need to acquire enough RAM to cover its users' needs now and in the future. The application must also monitor pricing: as demonstrated above, RAM is a variable-cost asset whose price can change and potentially spike.

vRAM enables a developer to reduce the total RAM footprint needed for an application. It does so by shrinking what the application must cover from all users to only the active ones. This limits the amount of RAM needed as well as the exposure to price changes.

Technical introduction

vRAM is a caching solution that enables DAPP Service Providers (specialized EOS nodes) to load data between RAM and vRAM on demand. Data is evicted from RAM and stored in vRAM after the transaction has run. This works much like the way data is paged between a regular computer's RAM and its hard drive: as on EOSIO, a computer's RAM is used because it is a faster storage medium, but it is also scarce. For more information on the technical details of the transaction lifecycle, please read the vRAM Guide For Experts article and/or the whitepaper.

vRAM requires a certain amount of data to be stored in RAM permanently in order for the vRAM system to be trustless. This data is stored in a regular eosio::multi_index table with the same name as the dapp::multi_index vRAM table defined by the smart contract. Each row in the regular eosio::multi_index table represents the Merkle root of a partition of the sharded data, with the root hash being vector<char> shard_uri and the partition id being uint64_t shard. Note that this is equivalent to having a single Merkle root with the second layer of the tree written to RAM for faster access. The default number of shards (which is proportional to the maximum amount of permanent RAM required) is 1024, meaning that the total amount of RAM a dapp::multi_index table will need to use permanently is 1024 * (sizeof(vector<char> shard_uri) + sizeof(uint64_t id)).
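For orientation, here is a minimal sketch of how such a table is declared, following the dapp::multi_index examples in the dapp-services repository; the table name and fields are illustrative, and the declaration is assumed to sit inside the contract body (CONTRACT_START()/CONTRACT_END() in the SDK's macro style):

```cpp
#include "../dappservices/multi_index.hpp" // dapp::multi_index from the DSP SDK

// Illustrative vRAM table: rows live in vRAM, while a regular
// eosio::multi_index table of the same name holds the shard Merkle roots.
TABLE testentry {
  uint64_t id;
  uint64_t value;
  uint64_t primary_key() const { return id; }
};

// Declared like a regular eosio::multi_index table; the shard count
// defaults to 1024, which bounds the permanent RAM footprint as above.
typedef dapp::multi_index<"test"_n, testentry> testentries_t;
```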

In order to access or modify vRAM entries, certain data may need to be loaded into RAM to prove (via the Merkle root) that an entry exists in the table. This temporary data (the "cache") is stored in the ipfsentry table. The DAPP Service Provider is responsible for removing this data after the transaction's lifecycle. If the DSP does not perform this action, the ipfsentry table will continue to grow until the account's RAM supply has been exhausted or the DSP resumes its services.
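From the contract's point of view, reading and writing a dapp::multi_index table looks just like a regular eosio::multi_index table; the warmup into the ipfsentry cache and the later eviction happen around the action. A hedged sketch using the illustrative table above (the action name and fields are assumptions):

```cpp
// Sketch of an action that writes to the vRAM table defined above.
// The DSP loads any required shard data into the ipfsentry cache
// before this executes, and evicts it after the transaction lifecycle.
[[eosio::action]]
void testset(uint64_t id, uint64_t value) {
  testentries_t entries(_self, _self.value);
  auto itr = entries.find(id);
  if (itr == entries.end())
    entries.emplace(_self, [&](auto& row) { row.id = id; row.value = value; });
  else
    entries.modify(itr, _self, [&](auto& row) { row.value = value; });
}
```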

Resource consumption in QUOTA

Each vRAM action requires 3 xwarmup events and 1 xcommit event; this can be trimmed down to a single xwarmuprow action (see the advanced features section). Each action is charged in QUOTA. The default for a package is 0.0001 QUOTA per action, though a DSP may increase that amount at any time.
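As a rough worked example, assuming each of those events is billed as one action at the default price, a single vRAM action costs about 4 * 0.0001 = 0.0004 QUOTA, or 0.0001 QUOTA if collapsed into one xwarmuprow action.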

vRAM in the context of user-based applications

vRAM requires a short period of time for the DSP to load the required data from IPFS and push it into RAM so that the user can use it. The vRAM multi-index table has a field, delay_sec, for delaying the cleanup action for the associated data.

The current delay_sec logic operates on deferred transactions, which are deprecated; however, LiquidScheduler's scheduling service is a good fit for scheduling the commit of the user's data in the future.

If this delayed cleanup is set to, say, an hour, and a load action is called at the user's login, the DSP can keep all of that user's associated data loaded until the user becomes inactive.

The delay period can be pushed back each time the user performs an action, creating a cache-like system: the existing timer is cancelled and a new timer is created, as in the sketch below.
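A hypothetical sketch of that pattern using the LiquidScheduler (CRON) service: schedule_timer and timer_callback follow the CRON service examples, but treating a re-schedule under the same timer name as replacing the previous timer is an assumption to verify against your SDK version, and on_user_action is an illustrative helper, not an SDK call:

```cpp
#include "../dappservices/cron.hpp" // LiquidScheduler (CRON) service helpers

// Hypothetical helper: refresh a per-user cleanup timer on every user action.
void on_user_action(name user) {
  std::vector<char> payload; // could encode which entries to commit
  // (Re)start the user's inactivity timer: fire in one hour.
  schedule_timer(user, payload, 3600);
}

// Called by the DSP when the timer fires; returning false stops rescheduling.
bool timer_callback(name timer, std::vector<char> payload, uint32_t seconds) {
  // Timer name identifies the user: commit/evict that user's cached
  // vRAM rows here, since they are now considered inactive.
  return false;
}
```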

This allows the developer to reduce the application's total RAM footprint without compromising transaction speed.

PostgreSQL Alternative Storage Option

IPFS is still in beta, so the DAPP Network has implemented an alternative storage option using PostgreSQL, which is scalable, reliable, and easier for devops teams to manage. Data is still stored in IPFS to allow for peering, but DSPs can be configured to also add to and fetch from PostgreSQL.
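The exact configuration keys vary by DSP node version; purely as an illustration of the idea, a DSP's .env might point at a PostgreSQL instance along these lines (the variable names here are assumptions, so check the DSP setup documentation for your version):

```
# Hypothetical DSP .env sketch; variable names are illustrative only.
DATABASE_URL=postgres://user:pass@localhost:5432/dsp
DATABASE_NODE_ENV=production
```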
