Controlling an Application’s Throughput Consumption in Cosmos DB with ThroughputControlGroup

David
7 min read · Nov 17, 2022

Sometimes you want to hold the leash on a process to stop it from eating all your Cosmos DB throughput… now we can do exactly that with the Cosmos DB Java SDK.

A Little Bit of Background on Cosmos DB Request Units

Cosmos DB is an awesome database platform: it offers low latency, high throughput, global geo-replication and dynamic, flexible scaling. It’s also a true Database as a Service, which means no VM clusters to manage and configure.

Instead of sizing the database by estimating the number and size of VMs you’ll need to host it, you provision the database using a throughput metric called Request Units (RUs) per second: this essentially controls the amount of work the database can process in any given second. You can have a fixed budget per second, or a budget that dynamically scales between a low and a high water mark, and of course you can adjust these boundaries at any time without downtime. In terms of cost, what you pay for the month is driven by the RUs your database has consumed.
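For illustration, here’s a minimal sketch of how the two budget styles look when creating a container with the Java SDK’s ThroughputProperties. This isn’t from the example repo; the imports are omitted as in the other snippets here, and the container name, partition key and the `database` instance are placeholders:

```java
// Fixed budget: a constant 4,000 RU/s.
ThroughputProperties manual = ThroughputProperties.createManualThroughput(4000);

// Autoscale budget: scales automatically between 10% of the maximum and the
// maximum itself (here 400 - 4,000 RU/s).
ThroughputProperties autoscale = ThroughputProperties.createAutoscaledThroughput(4000);

// Applied when creating a container ("Customers" and "/id" are placeholder values,
// and 'database' is assumed to be an existing CosmosDatabase instance).
database.createContainerIfNotExists(
        new CosmosContainerProperties("Customers", "/id"), autoscale);
```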

Cosmos DB’s budget of Request Units (RUs) per second can scale dynamically in real-time

With the RU-based budget, processes that need to talk to the database each consume that budget on a first-come, first-served basis. In the life of a database, though, there are times when some process needs to do heavy lifting which, left to its own devices, will hurt performance for more time-critical work by “hogging the throughput” of the database: the classic scenarios are batch processes that load data, or long-running data manipulation jobs.

In Cosmos DB this type of behaviour can show up as RU exhaustion: one process grabs too many RUs, so other processes can’t get the RUs they need and have to wait for the RU budget to be replenished, which typically surfaces as latency spikes and rate-limiting (HTTP 429) exceptions in the backend logs (see: Troubleshoot Azure Cosmos DB request rate too large exceptions). One solution is to increase the number of RUs available to the system, but a cheaper one is to limit the RU usage of the processes that aren’t high priority.
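To make that concrete, here’s a hedged sketch of how rate limiting surfaces to a client. The `container`, `id` and `Customer` names are placeholders (and the SDK already retries many 429s internally), but a 429 status code on a CosmosException is how an exhausted RU budget shows up:

```java
try {
    container.readItem(id, new PartitionKey(id), Customer.class);
} catch (CosmosException e) {
    if (e.getStatusCode() == 429) {
        // The request was rate-limited because the RU budget for this second
        // was exhausted; the SDK tells us how long to back off before retrying.
        System.out.println("Throttled, retry after " + e.getRetryAfterDuration());
    }
}
```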

Using ThroughputControlGroups to Control RU Consumption

To stop the sort of “throughput hogging” behaviour discussed above, the Cosmos DB SDK has a feature that lets us control, from the client side, how many RUs our processes consume.

Currently this feature is only available in the Java SDK. In this post I’m going to show some example code of this feature in action, and we’ll see how it effectively controls the RU consumption of selected processes.

Local Throughput Control vs Global Throughput Control

The basic idea of the throughput control functionality is that the number of RUs being consumed by a client process is monitored on the client side, and the client holds back operations to try to keep within a limit that we configure in the code.

To do this we configure a Throughput Control Group, give it a name and set an RU limit for the group. We can then set the control group as an option on client-side operations, and those operations must respect the limit set in the control group. You can also share a control group’s budget across multiple different operations by making them all part of the same group.

How Throughput Control Groups work conceptually

There are two forms of the throughput control functionality in the SDK:

  • Local Throughput Control is used to limit the RU consumption in the context of a single client connection instance, so for example you can apply it to different operations within a single microservice, or maybe to a single data loading program.
  • The more complex Global Throughput Control uses a container on the server side to manage state across multiple client instances so they can share a throughput budget (a rough sketch of this follows the list).
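For completeness, here’s a rough sketch of what the global variant looks like, based on the SDK’s global throughput control API. The database name, the control container name and the `client` and `container` instances are assumptions (with `client` being a CosmosAsyncClient):

```java
ThroughputControlGroupConfig groupConfig =
        new ThroughputControlGroupConfigBuilder()
                .groupName("globalControlGroup")
                .targetThroughputThreshold(0.25) // share 25% of the container's RU/s
                .build();

// A dedicated control container ("ThroughputControl" is just a placeholder name)
// holds the shared state that lets multiple client instances split the budget.
GlobalThroughputControlConfig globalControlConfig =
        client.createGlobalThroughputControlConfigBuilder("MyDatabase", "ThroughputControl")
                .setControlItemRenewInterval(Duration.ofSeconds(5))
                .setControlItemExpireInterval(Duration.ofSeconds(11))
                .build();

container.enableGlobalThroughputControlGroup(groupConfig, globalControlConfig);
```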

I’m going to focus on the Local Throughput Control, which shows the simplest way to implement throughput budget control for processes that we believe may over-consume our RU budget.

A Throughput Control Example

With that background let’s step through a worked example of Throughput Control, in this case focussing on a data loading process — a common cause of “high RU consumption” issues. You can access a copy of the example code here: dgpoulet/cosmos-throughput-control: A demo of Cosmos DB throughput control using the Java SDK (github.com)

The example code in the repo demonstrates using the Bulk Executor in Java to create a large amount of load against the database (see Use bulk executor Java library in Azure Cosmos DB to perform bulk import and update operations | Microsoft Learn for more information on the Bulk Executor).

NOTE: Although I’m using the bulk execution operation here to demonstrate throughput control, it’s not limited to that. Every operation in the Java SDK can be controlled using Throughput Control Groups.
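As a hedged example of that (not taken from the repo), a point read or a query can be routed through a control group by setting the group name on the corresponding request options. The `customerId` and `Customer` names are placeholders, and the group is assumed to have already been enabled on the container, as shown in the next section:

```java
// Point read limited by the control group.
CosmosItemRequestOptions readOptions = new CosmosItemRequestOptions()
        .setThroughputControlGroupName("localControlGroup");
container.readItem(customerId, new PartitionKey(customerId), readOptions, Customer.class);

// Query limited by the same control group, so the two operations share the budget.
CosmosQueryRequestOptions queryOptions = new CosmosQueryRequestOptions();
queryOptions.setThroughputControlGroupName("localControlGroup");
container.queryItems("SELECT * FROM c", queryOptions, Customer.class);
```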

We’re going to create a Throughput Control Group which sets a limit on the number of RUs that can be used by members of the group, and then configure the bulk execution operation to be part of that group.

First, we have to create our Throughput Control Group configuration:

/*
 * Set up the Throughput Control Group definition
 */
ThroughputControlGroupConfig groupConfig =
        new ThroughputControlGroupConfigBuilder()
                .groupName("localControlGroup")
                .targetThroughput(throughputLimit)
                .defaultControlGroup(false)
                .build();

/*
 * Enable throughput control
 */
container.enableLocalThroughputControlGroup(groupConfig);

Breaking down the configuration element first:

  • groupName("group name") can be anything; it’s used to refer to the group later on, and makes it possible to have multiple groups with different configurations active at once.
  • targetThroughput(throughputLimit) defines how many RUs this control group will be limited to. It’s also possible to use targetThroughputThreshold(ratio) to define the limit as a fraction of the maximum RUs allocated to the container (a number between 0.0 and 1.0); a short sketch of this follows the list.
  • defaultControlGroup(false) defines whether this control group is the default, i.e. whether its restrictions apply to all operations against the container that DON’T have a specific control group nominated. It defaults to false.
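As a small sketch of the targetThroughputThreshold variant mentioned above (the group name here is hypothetical), a group capped at 25% of the container’s provisioned RU/s would look something like this:

```java
ThroughputControlGroupConfig ratioGroupConfig =
        new ThroughputControlGroupConfigBuilder()
                .groupName("lowPriorityGroup")    // hypothetical group name
                .targetThroughputThreshold(0.25)  // 25% of the container's RU/s
                .build();

container.enableLocalThroughputControlGroup(ratioGroupConfig);
```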

Once we’ve created the configuration, we then attach it to the instance of the container object that we’ll use for the subsequent operations using:

container.enableLocalThroughputControlGroup(groupConfig);

Remember this is just a client-side control: the container itself in the back end doesn’t know anything about what we’re doing. Note also that you could attach multiple control group configurations to the same container instance, and so give different restrictions to different operations within your client.

Now, when we’re ready to run our bulk execution operation, we have to pass the name of the control group we configured earlier as an option to that operation. This is shown below:

/*
 * Create an instance of bulk executor options and
 * set the throughput control group to be the one we defined earlier
 */
CosmosBulkExecutionOptions bulkoptions =
        new CosmosBulkExecutionOptions()
                .setThroughputControlGroupName("localControlGroup");

[ ...other code... ]

/*
 * Generate a batch of create operations to execute
 */
Flux<CosmosItemOperation> cosmosItemOperations =
        customers.map(customer -> CosmosBulkOperations
                .getCreateItemOperation(customer,
                        new PartitionKey(customer.getId())));

/*
 * Send the batch of operations to the bulk executor, passing our
 * bulk execution options
 */
container.executeBulkOperations(cosmosItemOperations, bulkoptions)
        .blockLast();

We need to pass the control group as part of the options to the bulk execution, so the first thing we do here is create an instance of CosmosBulkExecutionOptions and then call setThroughputControlGroupName("group name") to pass in the name of the control group we created earlier.

Then, when we’re ready to execute our bulk operation, we just have to pass this options object as a parameter to the bulk exec:

container.executeBulkOperations(cosmosItemOperations, bulkoptions)

What this does is monitor the RU consumption of this bulk process on the client side and restrict the progress of those operations when we are getting close to our RU throughput limit defined in the control group.

What’s nice about this approach is that you don’t get any server-side throttling (the 429 rate-limiting exceptions mentioned earlier), because throughput is being held back at the client side.

Seeing It In Operation

To test the example I set it up to run batches of 100,000 create operations, and varied the targetThroughput setting on my control group config to show the impact. I ran the example three times, restricting it to 2,000, 4,000 and 8,000 RUs.

Then, using the Diagnostics Logs, I ran the following query and graphed the result.

CDBPartitionKeyRUConsumption
| summarize sum(RequestCharge)/5 by bin(TimeGenerated,5s)

It’s a little hard to see the text on the axes, but what you have below is RUs consumed over time:

Bulk data load runs, restricted to 2000, 4000 and 8000 RUs.

The results are aggregated into 5-second bins to make the chart clearer (summing the request charge over each bin and dividing by 5 gives an approximate RU/s figure), but you can see the impact of restricting RU consumption at the client side with the control groups: the lower the RU limit, the longer the job takes and the fewer RUs are consumed per second.

NOTE: The control of RUs is defined as “best effort”. As you can see from the graph it’s not held precisely at the RU limit: at 2,000 RUs (the first column) the actual usage shifts up and down around the 2,000 mark as the client adjusts the load dynamically. But it’s consistently close enough to let you use this method to control processes that would otherwise accelerate to consume as many RUs as they can get their hands on.

Final Thoughts

What I’ve shown is a method that can be used to implement a sort of “Quality of Service” approach to managing RU consumption in Cosmos DB. It’s simple, reliable and easy to plug into existing processes. If you are in a situation where your RU costs are high because you are having to add capacity to accommodate lower priority but data intensive processes, this could be a good way to reduce cost and improve overall performance.
