KVS is an Erlang abstraction over several native Erlang key-value databases, such as Mnesia. Its meta-schema consists of just one concept: iterators (persisted linked lists) guarded by containers (list head pointers). All write operations to a list are serialized through a single Erlang process, which provides sequential consistency. The application that starts one Erlang process per container is called feeds.
The best use case for KVS and key-value stores in general is storing operational data. This data can later be fed to SQL data warehouses for analysis. Operational data stores should be scalable, secure, fault-tolerant and available, which is why we keep work-in-progress data in key-value stores.
KVS also supports queries that require secondary indexes, although not all backends provide them. Currently KVS includes the following storage backends: Mnesia, Riak and KAI.
All data in KVS is represented by regular Erlang records. As usual, the first element of the tuple names the bucket, and the second element usually corresponds to the index key field.
An iterator is the sequence of fields that serves as the interface for all tables represented as doubly-linked lists. It defines the id, next, prev and feed_id fields. These fields must come first in the user's record, because the KVS core accesses them by relative position (like #iterator.next) with the setelement/element BIFs, e.g.
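A minimal sketch of that positional access. The iterator record here is trimmed to the fields named above; the real record may carry more fields, and the helper names are illustrative:

```erlang
%% Iterator fields must sit at the front of every chained record.
-record(iterator, {id, next, prev, feed_id}).

%% KVS-core-style access by field position. Because #iterator.next is
%% just a tuple index, the same code works for any record whose layout
%% begins with the iterator fields.
next_of(Rec)        -> element(#iterator.next, Rec).
set_next(Rec, Next) -> setelement(#iterator.next, Rec, Next).
```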
All records can be chained into doubly-linked lists in the database, so you can inherit from the ITERATOR record like this:
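For example, assuming a ?ITERATOR macro that expands to the iterator fields (the trailing field names are illustrative):

```erlang
%% The ?ITERATOR(feed) macro places the iterator fields first and
%% points the record at the feed container.
-record(user, {?ITERATOR(feed), email, name}).
```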
This means your table will support the linked-list add and remove operations.
Read the chain (undefined means read all entries):
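A sketch of a chain traversal, assuming a feed container keyed by the table name and an entries/3 call that takes the container, the record name, and an entry count (undefined for all):

```erlang
%% Fetch the feed container for user records, then walk its chain.
kvs:entries(kvs:get(feed, user), user, undefined).
```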
Read flat values by all keys from a table:
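Assuming an all/1 call that folds over every key of a table:

```erlang
%% Return every user record, ignoring chain order.
kvs:all(user).
```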
If you use iterator records, you automatically use containers. Containers are just boxes that store the tops (heads) of the linked lists. Here is the layout of containers:
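A sketch of the container layout. The ?CONTAINER macro, like ?ITERATOR, is assumed to expand to the shared container fields; real records may carry extra bookkeeping fields:

```erlang
%% A container holds the head of a chain and its length.
-record(container, {id, top, entries_count=0}).
%% A feed is a concrete container.
-record(feed, {?CONTAINER}).
```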
Usually you only need to specify custom Mnesia indexes and table tuning; the Riak and KAI backends don't need them. Group your tables into table packages, represented as modules with the handle_notice API.
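A sketch of such a table package module. The #schema{}/#table{} metainfo records, the include path, and the chosen fields are assumptions for illustration:

```erlang
-module(kvs_user).
-include_lib("kvs/include/metainfo.hrl").
-export([metainfo/0]).

%% Declare the user table, its container and its secondary keys.
metainfo() ->
    #schema{name=kvs, tables=[
        #table{name=user, container=feed,
               fields=record_info(fields, user),
               keys=[email]} ]}.
```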
Then plug it into the schema section of sys.config:
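For example, listing the table package modules under the kvs application environment (module names are illustrative):

```erlang
{kvs, [{schema, [kvs_user, kvs_feed]}]}
```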
After startup you can create the schema on the local node with:
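Assuming the zero-arity join call initializes the local node:

```erlang
kvs:join().
```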
This will create your custom schema.
System functions for starting and stopping the service:
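The lifecycle API sketched as specs (return types are assumptions):

```erlang
-spec start() -> ok | {error, any()}.
-spec stop() -> stopped.
```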
This API allows you to create, initialize and destroy the database schema. The format and/or feature set may differ between databases. The join/1 function initializes the database by replicating it, along with its schema, from a remote node.
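A sketch of the schema API as specs (signatures and return types are assumptions):

```erlang
-spec destroy() -> ok.
-spec join() -> ok | {error, any()}.
%% join/1 replicates the database and schema from the named node.
-spec join(Node :: string()) -> ok | {error, any()}.
```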
This API allows you to build forms from table metainfo. You can also use it for metainfo introspection.
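The introspection calls sketched as specs (the #table{} record and return shapes are assumptions):

```erlang
-spec tables() -> list(#table{}).
-spec table(Name :: atom()) -> #table{} | false.
```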
This API allows you to modify the chained data lists. Use create/1 to create a container; you can then add and remove nodes from lists.
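The chain operations sketched as specs (signatures are assumptions):

```erlang
%% create/1 makes a new container and returns its id.
-spec create(ContainerName :: atom()) -> integer().
%% add/1 links a record into its container's chain.
-spec add(Record :: tuple()) -> {ok, tuple()} | {error, any()}.
%% remove/2 unlinks a record from its chain.
-spec remove(RecordName :: atom(), Id :: any()) -> ok | {error, any()}.
```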
These functions patch the Erlang record inside the database.
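Sketched as specs (signatures are assumptions); put/1 overwrites a record in place without touching the chain links:

```erlang
-spec put(Record :: tuple()) -> ok | {error, any()}.
-spec delete(RecordName :: atom(), Id :: any()) -> ok | {error, any()}.
```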
This API allows you to read a value by key and to list records by a given secondary index. The get/3 variant lets you specify a default value.
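The read API sketched as specs (signatures and error atoms are assumptions):

```erlang
-spec get(RecordName :: atom(), Id :: any()) ->
          {ok, tuple()} | {error, any()}.
%% get/3 returns Default when the key is absent.
-spec get(RecordName :: atom(), Id :: any(), Default :: any()) ->
          {ok, tuple()}.
%% index/3 lists records whose secondary-index Field equals Value.
-spec index(RecordName :: atom(), Field :: any(), Value :: any()) ->
          list(tuple()).
```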
You can use this API to dump the whole database to a single file, where the backend supports it. This is fine for development, but not recommended for production.
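Sketched as specs (function names and signatures are assumptions):

```erlang
-spec save_db(Path :: string()) -> ok | {error, any()}.
-spec load_db(Path :: string()) -> list(ok | {error, any()}).
```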