
Adaptive Capacity Isn't a Silver Bullet

2018-10-06 / 664 words / 4 minutes

Amazon DynamoDB can provide consistent, single-digit-millisecond latency and effectively unlimited storage capacity at a relatively low price, but improper setup leads to poor performance and high cost. One particularly common issue is poor partition key choice, though it is not the only cause of performance and cost problems. This post discusses why Adaptive Capacity isn't a silver bullet and won't compensate for a poor understanding of DynamoDB.

DynamoDB Partitions and Capacity Units

To deliver that storage capacity and performance, DynamoDB employs a highly distributed architecture. The partition key determines how data is split across partitions. Each partition can store up to 10 GB of data and support up to 1,000 Write Capacity Units (WCU) and 3,000 Read Capacity Units (RCU).

If you have 10 GB of data and need 2,000 WCU and 6,000 RCU, DynamoDB splits the data across two partitions with roughly 5 GB in each and, critically, splits the WCU and RCU across the two partitions, with each partition getting the maximum 1,000 WCU and 3,000 RCU.
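To make that arithmetic concrete, here is a rough Python sketch of the partitioning model this post uses. The estimate_partitions helper is illustrative only; DynamoDB's actual partitioning is an internal implementation detail and may differ.

```python
import math

# Back-of-the-envelope partition estimate, per the simplified model above:
# each partition holds up to 10 GB and serves up to 1,000 WCU / 3,000 RCU.
def estimate_partitions(size_gb: float, wcu: int, rcu: int) -> int:
    return max(
        math.ceil(size_gb / 10),   # storage: 10 GB per partition
        math.ceil(wcu / 1_000),    # writes: 1,000 WCU per partition
        math.ceil(rcu / 3_000),    # reads: 3,000 RCU per partition
    )

# The example above: 10 GB of data, 2,000 WCU, 6,000 RCU -> 2 partitions,
# each holding ~5 GB with 1,000 WCU and 3,000 RCU.
print(estimate_partitions(10, 2_000, 6_000))  # 2
```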

This scheme works well when data access is distributed evenly across the two partitions. When the partition key choice is poor and access is biased towards a small number of keys, however, partitions become "hot". While the table's total RCU and WCU might be sufficient for the workload, the share allocated to the hot partition can be insufficient, which leads to ProvisionedThroughputExceededException errors.
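For illustration, here is a minimal boto3 sketch of retrying a write that is throttled by a hot partition. The table name and item are hypothetical, and in practice the AWS SDKs already apply their own retry-with-backoff to throttled requests, so treat this as a demonstration of the error, not a recommended pattern.

```python
import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("my-table")  # hypothetical table

def put_with_backoff(item: dict, max_attempts: int = 5) -> None:
    for attempt in range(max_attempts):
        try:
            table.put_item(Item=item)
            return
        except ClientError as e:
            # Only retry throttling caused by exceeding provisioned throughput.
            if e.response["Error"]["Code"] != "ProvisionedThroughputExceededException":
                raise
            time.sleep((2 ** attempt) * 0.1)  # exponential backoff
    raise RuntimeError("write still throttled after retries")

put_with_backoff({"pk": "hot-key", "payload": "..."})
```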

How Adaptive Capacity Helps

DynamoDB Adaptive Capacity allows excess WCU and RCU from underutilized partitions to be transferred to hot partitions. The table's total WCU and RCU remain the same; instead of an equal spread across the partitions, the hot partition gets more than its equal share and all other partitions get less.

The AWS documentation regarding Adaptive Capacity is very clear. It states:

To better accommodate uneven access patterns, DynamoDB adaptive capacity enables your application to continue reading and writing to hot partitions without being throttled, provided that traffic does not exceed your table’s total provisioned capacity or the partition maximum capacity. Adaptive capacity works by automatically increasing throughput capacity for partitions that receive more traffic.

Adaptive Capacity is a solution for hot partitions. It does not increase WCU or RCU beyond the total provisioned for the table, nor does it raise a single partition above 3,000 RCU or 1,000 WCU. So why isn't Adaptive Capacity a silver bullet?
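The ceiling the documentation describes can be expressed in a few lines. This is a sketch of the rule as quoted above, not an AWS API:

```python
# Even with Adaptive Capacity, a boosted partition can never exceed the
# table's total provisioned capacity, nor the per-partition hard limits.
PARTITION_MAX_RCU = 3_000
PARTITION_MAX_WCU = 1_000

def adaptive_ceiling(table_rcu: int, table_wcu: int) -> tuple[int, int]:
    return (
        min(table_rcu, PARTITION_MAX_RCU),
        min(table_wcu, PARTITION_MAX_WCU),
    )

# A table provisioned at 1,000 RCU / 500 WCU: a single hot partition
# still tops out at 1,000 RCU and 500 WCU.
print(adaptive_ceiling(1_000, 500))  # (1000, 500)
```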

Why Adaptive Capacity isn’t a Silver Bullet

Imagine that you have 100 GB of data to load into DynamoDB. The load must run quickly, so you allocate 100,000 WCU. Since each partition can only support 1,000 WCU, DynamoDB creates 100 partitions and allocates 1,000 WCU to each. The data loads quickly, but each partition stores only 1 GB! Further, there is no way to shrink the number of partitions once DynamoDB has created them.
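Running this scenario through the same simplified partitioning model shows where the partitions come from:

```python
import math

# The bulk-load scenario, using the simplified model from earlier
# (10 GB / 1,000 WCU per partition; illustrative only).
size_gb, load_wcu = 100, 100_000
partitions = max(math.ceil(size_gb / 10), math.ceil(load_wcu / 1_000))
print(partitions)             # 100 -- driven by the WCU, not the data size
print(size_gb / partitions)   # 1.0 -- only 1 GB stored in each partition
```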

The expected production usage of this DynamoDB table is 1,000 RCU. That means 10 RCU for each partition. If access is spread absolutely evenly across the table, this may not be a problem. If access isn't evenly spread amongst the partitions, the 10 RCU limit will be exceeded. Wouldn't Adaptive Capacity solve this problem for us? This brings us to another quote from the AWS documentation:

There is typically a 5-minute to 30-minute interval between the time throttling of a hot partition begins and the time that adaptive capacity activates.

In this scenario, the RCU requirements increase because access is slightly uneven across the partitions, and the 5-to-30-minute delay in moving capacity from a slightly cooler partition to a marginally warmer one is far too long to provide any value. The problem is that there is no hot partition; there are just too many partitions, or too few WCU/RCU spread across them.

In this scenario, the solution is to load the data more slowly: provisioning fewer WCU up front means DynamoDB creates fewer partitions in the first place.
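Under the same simplified model, a hypothetical load at 10,000 WCU (roughly ten times slower) illustrates the difference:

```python
import math

# The same 100 GB load provisioned at 10,000 WCU instead of 100,000
# (hypothetical numbers): the partition count is now driven by the data
# size rather than the write capacity.
size_gb, load_wcu, prod_rcu = 100, 10_000, 1_000
partitions = max(math.ceil(size_gb / 10), math.ceil(load_wcu / 1_000))
print(partitions)             # 10 partitions instead of 100
print(prod_rcu / partitions)  # 100.0 RCU per partition instead of 10
```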

While useful, Adaptive Capacity isn’t a silver bullet.


Tags:  aws  database  dynamo  db  adaptive  capacity  provisioned  wcu  rcu  units  iops
Categories:  AWS  Database  Amazon DynamoDB

