Azure Cosmos DB Cost


We just configured Squidex in Azure using the guide on your website. This was an easy process to follow and we got up and running quickly.

However, after running it in dev for a few days, we were surprised to see that the Cosmos DB cost was $600 and climbing by $100/day.

This seems an extraordinary amount for a DB that's currently only 3 MB; using the Azure pricing calculator, we were only expecting it to cost in the region of $50/month.

Are you able to shed any light on where this cost could originate, or where we can find information to help identify the cause?


You are right, that is strange. Unfortunately I have no experience running Cosmos DB in production. There must be something that queries the database very often. Do you have any stats from the Azure portal?

I actually came across this myself just now. The Cosmos DB was populated with 16 collections, all of which were provisioned with 1,000 RU/s, which works out to roughly $900/month. For a database that currently contains 385 KB, that's borderline insane. I'm currently reducing each collection's throughput to 100 RU/s.

Update: the minimum allowed is 400 RU/s per collection, which is going to be a problem in terms of costs.
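For reference, the provisioned-throughput math can be sanity-checked with a quick calculation. Assuming Azure's published single-region rate of $0.008 per 100 RU/s per hour and ~730 hours per month (verify against current pricing for your region), 16 collections at 1,000 RU/s each lands close to the $900/month figure above, and even the 400 RU/s minimum still adds up:

```shell
# Estimate monthly Cosmos DB cost for per-collection provisioned throughput.
# Assumed rate: $0.008 per 100 RU/s per hour, ~730 hours/month (verify against
# current Azure pricing for your region).
estimate() {
  collections=$1
  ru_per_collection=$2
  awk -v ru=$((collections * ru_per_collection)) \
    'BEGIN { printf "%.2f\n", ru / 100 * 0.008 * 730 }'
}

estimate 16 1000   # default provisioning -> 934.40
estimate 16 400    # at the 400 RU/s minimum -> 373.76
```

So even at the minimum, per-collection throughput across 16 collections costs several hundred dollars a month regardless of how little data is stored.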

I know this might not be what you're looking for, but here are the stats of Cosmos DB for a brand-new deployment of 2 x Squidex on AKS (so no data and no API calls):


This is due to the clustering, but 600 RU/s should not cost several hundred dollars per month.

Re-reading OP's question, I realised that OP's issue is with the consumption and usage of resources, while mine is with the default provisioned reserved units per collection when Squidex creates its 16 collections on Cosmos DB. As such, they are different issues and I feel that I have hijacked the discussion, so apologies @aritchie. I shall create a separate thread for this.

Just for the sake of testing, I tried to create the Squidex and SquidexContent Cosmos DB databases manually (since Cosmos DB allows me to define the RU/s per database on creation rather than per collection), but once I started Squidex, it threw this error:

```json
{"logLevel":"Error","message":"Caught and ignored exception: MongoDB.Driver.MongoCommandException with message: Command update failed: Shared throughput collection should have a partition key\r\nActivityId: 0f561721-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/ thrown from timer callback GrainTimer. TimerCallbackHandler:Squidex.Domain.Apps.Entities.Rules.UsageTracking.UsageTrackerGrain->System.Threading.Tasks.Task b__4_0(System.Object)","eventId":{"id":101413},"exception":{"type":"MongoDB.Driver.MongoCommandException","message":"Command update failed: Shared throughput collection should have a partition key\r\nActivityId: 0f561721-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/","stackTrace":" at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ProcessReply(ConnectionId connectionId, ReplyMessage`1 reply)\n at MongoDB.Driver.Core.WireProtocol.CommandUsingQueryMessageWireProtocol`1.ExecuteAsync(IConnection connection, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Servers.Server.ServerChannel.ExecuteProtocolAsync[TResult](IWireProtocol`1 protocol, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.RetryableWriteOperationExecutor.ExecuteAsync[TResult](IRetryableWriteOperation`1 operation, RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkUnmixedWriteOperationBase`1.ExecuteBatchesAsync(RetryableWriteContext context, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteBatchAsync(RetryableWriteContext context, Batch batch, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Operations.BulkMixedWriteOperation.ExecuteAsync(IWriteBinding binding, CancellationToken cancellationToken)\n at MongoDB.Driver.OperationExecutor.ExecuteWriteOperationAsync[TResult](IWriteBinding binding, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.ExecuteWriteOperationAsync[TResult](IClientSessionHandle session, IWriteOperation`1 operation, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.BulkWriteAsync(IClientSessionHandle session, IEnumerable`1 requests, BulkWriteOptions options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionBase`1.UpdateOneAsync(FilterDefinition`1 filter, UpdateDefinition`1 update, UpdateOptions options, Func`3 bulkWriteAsync)\n at Squidex.Infrastructure.MongoDb.MongoExtensions.UpsertVersionedAsync[T,TKey](IMongoCollection`1 collection, TKey key, Int64 oldVersion, Int64 newVersion, Func`2 updater) in /src/src/Squidex.Infrastructure.MongoDb/MongoDb/MongoExtensions.cs:line 114\n at Squidex.Infrastructure.States.MongoSnapshotStore`2.WriteAsync(TKey key, T value, Int64 oldVersion, Int64 newVersion) in /src/src/Squidex.Infrastructure.MongoDb/States/MongoSnapshotStore.cs:line 60\n at Squidex.Infrastructure.States.Persistence`2.WriteSnapshotAsync(TSnapshot state) in /src/src/Squidex.Infrastructure/States/Persistence{TSnapshot,TKey}.cs:line 135\n at Squidex.Domain.Apps.Entities.Rules.UsageTracking.UsageTrackerGrain.CheckUsagesAsync() in /src/src/Squidex.Domain.Apps.Entities/Rules/UsageTracking/UsageTrackerGrain.cs:line 107\n at Orleans.Runtime.GrainTimer.ForwardToAsyncCallback(Object state)"},"app":{"name":"Squidex","version":"","sessionId":"3525e6ca-376a-4615-afd8-d6012977173e"},"timestamp":"2019-04-17T17:09:42Z","category":"Orleans.Runtime.GrainTimer"}
```

The docs say that you can define the reserved RUs either per collection or per database. I guess the solution would be to define it per database and then let Squidex and the MongoDB driver create the collections.
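As a sketch of that approach (untested here; the account, resource group, and collection names are placeholders, and the flags should be verified against your `az` CLI version), shared throughput can be set at the database level, or an existing per-collection allocation can be lowered:

```shell
# Sketch: provision shared throughput at the database level instead of
# per collection (names are placeholders).
az cosmosdb mongodb database create \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --name Squidex \
  --throughput 400

# Alternatively, lower the throughput of an already-created collection:
az cosmosdb mongodb collection throughput update \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --database-name Squidex \
  --name Events \
  --throughput 400
```

Note that, as the exception above shows, collections created under shared database throughput must have a partition (shard) key, so Squidex's collections may still fail to initialise without changes on that side.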

Thanks for the response. Yes, that is exactly what I tried, and that's when I got the above exception as soon as I started the containers.

FYI, as an alternative we're going to use the cloud version of Squidex for the time being, even though we lose our custom OIDC single sign-on this way. However, this is still a better option than Azure's alarming costs.


Thanks for your updates (the notification emails were being sent to SPAM and I only just found them!)

Rather than playing around with Cosmos DB settings, we have switched to a plain MongoDB setup. We're unlikely to require the enhanced performance of Cosmos DB, and plain MongoDB is more than adequate for our requirements and considerably cheaper.
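For anyone following the same route, a minimal sketch of a plain MongoDB setup (the environment variable names and image tags are assumptions to verify against the official Squidex docker-compose samples; further required settings such as the base URL and identity secrets are omitted):

```shell
# Sketch: run plain MongoDB and point Squidex at it.
docker network create squidex-net

docker run -d --name mongo --network squidex-net \
  -v mongo-data:/data/db mongo:4.2

docker run -d --name squidex --network squidex-net -p 8080:80 \
  -e "STORE__MONGODB__CONFIGURATION=mongodb://mongo:27017" \
  -e "EVENTSTORE__MONGODB__CONFIGURATION=mongodb://mongo:27017" \
  squidex/squidex
```

Because Squidex talks to Cosmos DB through the MongoDB wire protocol anyway, switching the connection string to a plain MongoDB instance requires no application-level changes.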

