Database services are complicated. To use them properly, we need to learn data structures and algorithms and understand how the different database services work.

dynamo database

Dynamo database is the most widely applicable database among aws database services. It is a wide-column, key-value store, no-sql database.

Dynamo database provides streams, which capture changes to a table. We can use them to track rapidly changing data, even data that changes thousands of times every second. Normally, we use a stream to trigger a lambda function or feed it into kinesis for analysis, similar to what we usually use kafka for.

Dynamo database also provides a join syntax in its query language, partiQL, to link with other tables or rows, similar to relational database services except that there are no foreign keys.

So, if we want to use only one database service for what we are building, or if we have just started learning databases and want to learn one of them and put it into a project as soon as possible, dynamo database should be an option.

However, even though dynamo database can be used widely, it can't work as well as another database built for a specific task.

how dynamo database works

Dynamo database stores items in partitions selected by a partition key, and sorts the items in each partition by a sort key.

When we add a new item to dynamo database, it finds a partition based on the item's partition key and puts the item there. When we query, dynamo database uses the sort key to sort the result. The pair of partition key and sort key is the key of an item. The partition key is required; the sort key is optional.

Dynamo database is a key-value store no-sql database. For the key part, dynamo database uses a pair of a partition key and a sort key, called the primary partition key and the primary sort key. As in other key-value store no-sql databases, each pair of primary partition key and primary sort key must be unique in a table. We can only set the primary partition key and primary sort key when we create the table. The pair of primary partition key and primary sort key is called the primary key.
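As a sketch of how the primary key is declared, here is the shape of a create table request as AWS SDKs such as boto3 accept it (the blueprint nodes take equivalent inputs; the table and attribute names are hypothetical examples):

```python
# Sketch of a CreateTable request shape (boto3-style). The table "players",
# its partition key "player_id" and sort key "login_time" are hypothetical.
create_table_request = {
    "TableName": "players",
    "AttributeDefinitions": [
        {"AttributeName": "player_id", "AttributeType": "S"},   # string attribute
        {"AttributeName": "login_time", "AttributeType": "N"},  # number attribute
    ],
    "KeySchema": [
        {"AttributeName": "player_id", "KeyType": "HASH"},    # primary partition key
        {"AttributeName": "login_time", "KeyType": "RANGE"},  # primary sort key
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
# with boto3: boto3.client("dynamodb").create_table(**create_table_request)
```

Once the table exists, this key schema can't be changed; a different key layout needs a new table or an index.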

Dynamo database is also a wide-column store no-sql database: its value has multiple columns. Dynamo database also supports indexes, and there are two types of index.

A global index is a copy of the whole table. Each global index has its own partition key and sort key. Each time we add a new item to the table, each of its global indexes also gets a copy of the item, so each global index costs as much as a new table. We can add or delete a table's global indexes when we create the table or after we create it.

Global indexes are pricey. Sometimes we just want to query with the primary partition key but a different sort key; we can use a local index in that case. We can only add local indexes when we create the table.
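The two index types can be sketched as boto3-style definitions inside a create table request (all names here are hypothetical examples, not part of the plugin):

```python
# A global index carries its own partition key and sort key, independent of
# the table's primary key.
global_index = {
    "IndexName": "by_guild",
    "KeySchema": [
        {"AttributeName": "guild_id", "KeyType": "HASH"},   # new partition key
        {"AttributeName": "score", "KeyType": "RANGE"},     # new sort key
    ],
    "Projection": {"ProjectionType": "ALL"},  # copy every attribute into the index
}

# A local index keeps the table's partition key and only swaps the sort key.
local_index = {
    "IndexName": "by_score",
    "KeySchema": [
        {"AttributeName": "player_id", "KeyType": "HASH"},  # same as the table
        {"AttributeName": "score", "KeyType": "RANGE"},     # alternative sort key
    ],
    "Projection": {"ProjectionType": "KEYS_ONLY"},
}

# These definitions go under "GlobalSecondaryIndexes" / "LocalSecondaryIndexes"
# in the CreateTable request.
```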

create dynamodb object

To access the dynamo database service, we need to create a dynamodb object on the client side.

To use the dynamodb service on the server side, we should use graphql instead, or enable server mode and use it with imds.

Access key id and secret key credentials are required by all dynamodb object nodes. We can get an access key id and secret key from a pre-created IAM user or have them allocated by a cognito identity pool.

data operation

There are six types of basic data operations in dynamo database: put, delete, update, get, query and scan.

Put item is used to add a new item or replace an old item with a new one. We call put item with a table name and an item (a map with column names as keys and attribute values as values).

We need to provide at least the primary partition key, and the primary sort key if the table has one, in the item map. If the primary key already exists in the table, dynamo database replaces the old item with the new one.
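A put item call can be sketched as the request shape used by AWS SDKs such as boto3, where every value is tagged with its type ("S" string, "N" number, and so on); the table and attribute names are hypothetical:

```python
# Sketch of a PutItem request (boto3-style shape, hypothetical names).
put_item_request = {
    "TableName": "players",
    "Item": {
        "player_id": {"S": "player-001"},   # primary partition key (required)
        "login_time": {"N": "1700000000"},  # primary sort key (if the table has one)
        "nickname": {"S": "alice"},         # any other columns of the item
    },
}
# with boto3: boto3.client("dynamodb").put_item(**put_item_request)
# If an item with the same primary key already exists, it is replaced.
```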

Delete item is used to delete an item by its primary key in a table. We call delete item with a table name and a key (a map with column names as keys and attribute values as values).

The key map holds the primary key. We need to provide the primary partition key, and the primary sort key if the table has one, in the key map.

Update item is used to update an item, or add a new one, by its primary key in a table. We call update item with a table name, a key (a map with column names as keys and attribute values as values) and an update expression.

The key map holds the primary key. We need to provide the primary partition key, and the primary sort key if the table has one, in the key map.

The update expression tells dynamo database how to update the item for us.

We can also add a condition expression to do a conditional put, delete or update. Dynamo database then only performs the operation when the condition expression evaluates to true; otherwise it returns a "The conditional request failed" error.
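A conditional update can be sketched as a boto3-style request (hypothetical names; the condition here only lets the update run if the item already exists):

```python
# Sketch of a conditional UpdateItem request (boto3-style shape, hypothetical names).
update_item_request = {
    "TableName": "players",
    "Key": {
        "player_id": {"S": "player-001"},   # primary partition key
        "login_time": {"N": "1700000000"},  # primary sort key
    },
    # how to update the item:
    "UpdateExpression": "SET nickname = :new_name",
    # only update when this holds; otherwise the request fails with
    # "The conditional request failed":
    "ConditionExpression": "attribute_exists(player_id)",
    "ExpressionAttributeValues": {":new_name": {"S": "bob"}},
}
# with boto3: boto3.client("dynamodb").update_item(**update_item_request)
```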

Get item is used to get the exact item with a given primary key in a table. We call get item with a table name and a key (a map with column names as keys and attribute values as values).

Because the primary key is unique in a dynamo database table, this node returns at most one item.

Query is used to search for items in a table by primary key or index. We call query with a table name, an index name (if we want to search an index) and a key condition expression.

The key condition expression is the condition expression for the primary key, or for the index's key if we are searching an index. Because items in a dynamo database table are partitioned, the key condition expression lets dynamo database search only the matching partition and ignore the others.
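A query can be sketched as a boto3-style request (hypothetical names); the key condition pins the partition key exactly and can add a range condition on the sort key:

```python
# Sketch of a Query request (boto3-style shape, hypothetical names).
query_request = {
    "TableName": "players",
    # "IndexName": "by_guild",  # uncomment to search a global or local index instead
    "KeyConditionExpression": "player_id = :pid AND login_time > :t",
    "ExpressionAttributeValues": {
        ":pid": {"S": "player-001"},   # the partition to search
        ":t": {"N": "1690000000"},     # range condition on the sort key
    },
}
# with boto3: boto3.client("dynamodb").query(**query_request)
```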

Scan is another way to search for items in a table. We call scan with a table name and an index name (if we want to search an index).

Scan searches the whole dynamo database table or index. So, it's much slower than query.

We can use a projection expression to tell dynamo database that we only want some of the items' attributes in get, query and scan operations.

A filter expression filters the result so that dynamo database returns fewer items.

In condition, update, projection, filter and key condition expressions, we can use "#" to mark a placeholder for an attribute name and ":" to mark a placeholder for an attribute value. We then use expression attribute names to specify what each attribute name should be and expression attribute values to specify what each attribute value should be.
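The placeholder mechanism can be sketched with a boto3-style scan request (hypothetical names; "#st" stands in for the attribute name "status", which is a dynamo database reserved word, and ":v" stands in for a value):

```python
# Sketch of a Scan request using "#" name placeholders and ":" value
# placeholders together with filter and projection expressions.
scan_request = {
    "TableName": "players",
    "FilterExpression": "#st = :v",            # return fewer items
    "ProjectionExpression": "player_id, #st",  # return fewer attributes
    "ExpressionAttributeNames": {"#st": "status"},
    "ExpressionAttributeValues": {":v": {"S": "online"}},
}
# with boto3: boto3.client("dynamodb").scan(**scan_request)
```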

We can also use partiQL to operate on the data. To use partiQL, we call execute statement with the partiQL statement and its parameters.

In the statement, we can use "?" as a placeholder for a parameter. We then use the parameters to specify, in order, what each placeholder should be.
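An execute statement call can be sketched as a boto3-style request (the table name and value are hypothetical); each "?" is filled from the parameters list, in order:

```python
# Sketch of a partiQL ExecuteStatement request (boto3-style shape, hypothetical names).
execute_statement_request = {
    "Statement": 'SELECT * FROM "players" WHERE player_id = ?',
    "Parameters": [{"S": "player-001"}],  # fills the single "?" placeholder
}
# with boto3: boto3.client("dynamodb").execute_statement(**execute_statement_request)
```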

batch operation

Unreal doesn't support multiple levels of nested TArray and TMap, so we need to use raw json strings to write batch operations in blueprint. A batch operation is also equivalent to calling multiple data operations. So, let's ignore it.

transact operation

A transact operation means that either all data operations in it succeed or none of them do.

When we use a transact operation, we need to make sure there is no conflict; otherwise the conflicting request returns a transaction canceled exception.

We can use transact operations to get items, write items or execute partiQL statements.
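A transactional write can be sketched as a boto3-style request (all names hypothetical); both operations below are applied together, or neither is:

```python
# Sketch of a TransactWriteItems request (boto3-style shape, hypothetical names):
# add a player and bump a guild counter atomically.
transact_write_request = {
    "TransactItems": [
        {"Put": {
            "TableName": "players",
            "Item": {"player_id": {"S": "player-002"},
                     "login_time": {"N": "1700000001"}},
        }},
        {"Update": {
            "TableName": "guilds",
            "Key": {"guild_id": {"S": "guild-9"}},
            "UpdateExpression": "SET member_count = member_count + :one",
            "ExpressionAttributeValues": {":one": {"N": "1"}},
        }},
    ],
}
# with boto3: boto3.client("dynamodb").transact_write_items(**transact_write_request)
```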

data streams

Normally, we use a dynamo database stream to trigger a lambda function or feed it into kinesis for analysis instead of using it in blueprint. Blueprint can't get notified by a dynamo database stream and can only loop over the records in a table's stream. So, it's not useful in blueprint.

relational database

Relational databases are the most well-known databases among aws database services. Most of the databases we hear about frequently, for example mysql, sqlite and oracle, are relational databases.

In aws, we use RDS to set up our relational database services.

Aws will end the rds data api on February 28, 2023. We need to use graphql to access rds databases instead.

quantum ledger database

Quantum ledger database is a fully managed ledger database. A ledger database is one of the least known database types. Its distinctive feature is that every statement we execute must be confirmed with a signature.

Compared with dynamo database, quantum ledger database doesn't have a key or primary key. Quantum ledger database has indexes, but each index uses a single field and works like a local index in dynamo database. Because it provides fewer features for querying data in tables, quantum ledger database is much cheaper than dynamo database.

|                                 | dynamo | quantum ledger |
|---------------------------------|--------|----------------|
| Write I/Os (1 million requests) | $1.25  | $0.70          |
| Read I/Os (1 million requests)  | $0.25  | $0.136         |
| Storage (GB-month)              | $0.25  | $0.03          |

create qldb session object

To access data in the quantum ledger database service, we need to create a qldb session object on the client side.

To use the qldb service on the server side, we should use graphql with lambda instead, or enable server mode and use it with imds.

Access key id and secret key credentials are required by all qldb session object nodes. We can get an access key id and secret key from a pre-created IAM user or have them allocated by a cognito identity pool.

data operation

We can only use partiQL to operate on the data in quantum ledger database.

We call start session with a ledger name to start a session with the quantum ledger database server.

Start session returns a session token.

Then we call start transaction with the session token.

Start transaction returns a transaction id.

After we get the transaction id, we can call create ion helper with the transaction id; the helper calculates the signature when we commit the transaction.

Now we can execute statements.

Each time we are going to execute a statement, we need to call add statement on the ion helper with the statement and its parameters.

We can use "?" as a placeholder in the statement and put the values in the parameter array. The add statement node translates the parameters into ion binary form for use by the execute statement node.

Then we call execute statement with the session token, transaction id, statement and parameters.

If the statement is a select statement, the result will be in the first page of the execute statement result. The first page also contains a next page token; if it's not empty, we can call fetch page with the session token, transaction id and next page token to get the next page.

The page in the fetch page result also contains a next page token, which we can pass to fetch page again to retrieve the page after it.

After we have called a series of execute statements, we can call commit transaction to commit them all. Before we call commit transaction, we need to call get digest on the ion helper to calculate the signature for the commit transaction node.

Then we call commit transaction with the session token, transaction id and commit digest to commit the executed statements in the transaction.

We can also abort a transaction by calling abort transaction with session token.

After calling commit transaction or abort transaction, we call end session with the session token to end the session.
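The whole session flow above can be sketched as follows; the Python functions here are stand-ins that only record the order of the qldb session calls (the ledger name, statement and digest values are hypothetical, and a real client would send each call to the qldb session endpoint):

```python
# Sketch of the qldb session call sequence: start session -> start transaction
# -> execute statement(s) -> commit transaction -> end session.
calls = []  # records the order of calls for illustration

def start_session(ledger_name):
    calls.append(("StartSession", ledger_name))
    return "session-token"            # the service returns a session token

def start_transaction(session_token):
    calls.append(("StartTransaction", session_token))
    return "transaction-id"           # the service returns a transaction id

def execute_statement(session_token, txn_id, statement, parameters):
    calls.append(("ExecuteStatement", statement))  # parameters go as ion binary

def commit_transaction(session_token, txn_id, commit_digest):
    calls.append(("CommitTransaction", txn_id))    # digest comes from get digest

def end_session(session_token):
    calls.append(("EndSession", session_token))

token = start_session("my-ledger")   # hypothetical ledger name
txn = start_transaction(token)
execute_statement(token, txn, "SELECT * FROM players WHERE id = ?", ["player-001"])
commit_transaction(token, txn, b"digest-from-ion-helper")
end_session(token)
```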

in memory database

coming soon…

other database

coming soon…

© 2019, multiplayscape. All Rights Reserved.
