In this tutorial, we’ll build a Web API in C# using Azure Functions that stores data in Azure Cosmos DB via the MongoDB API.

Azure Cosmos DB is a globally distributed, multi-model NoSQL database service that allows us to build highly available and scalable applications. Cosmos DB supports applications that use the document data model through its SQL and MongoDB APIs.

I’ve been meaning to produce more content on Cosmos DB’s Mongo API, so in this article, I’m going to develop a serverless API in Azure Functions that uses a Cosmos DB MongoDB API account. This article is loosely based on this fantastic tutorial on creating a Web API with ASP.NET Core and MongoDB.
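Before diving in, here’s a minimal sketch of the shape we’re aiming for: an HTTP-triggered function that reads from a Cosmos DB MongoDB API account using the MongoDB .NET driver. The Book type, the BookstoreDB/Books names and the MongoConnectionString app setting are hypothetical placeholders:

```csharp
// A minimal sketch, assuming the MongoDB.Driver NuGet package.
// The Book type, database/collection names and the
// "MongoConnectionString" app setting are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using MongoDB.Driver;

public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
}

public static class GetBooks
{
    // Reuse a single MongoClient across function invocations.
    private static readonly MongoClient client =
        new MongoClient(Environment.GetEnvironmentVariable("MongoConnectionString"));

    [FunctionName("GetBooks")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "books")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Fetching all books");

        var collection = client.GetDatabase("BookstoreDB")
                               .GetCollection<Book>("Books");

        // Fetch every document in the collection.
        List<Book> books = await collection.Find(_ => true).ToListAsync();
        return new OkObjectResult(books);
    }
}
```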

By the end of this article, you’ll know…


Azure Cosmos DB now provides us with the ability to continuously back up our data, allowing more granular control over our backups.

The ability to perform backups of your data is essential to ensure that you can recover in the event of any data failure, such as data corruption, human error or datacenter failure.

Azure Cosmos DB takes backups of our data automatically at regular intervals without affecting the performance or availability of our database operations. This is great for when we encounter any of the data failures mentioned above.

There are two options available to us when we need to perform backups in Azure Cosmos DB:

  • Periodic backup mode — the default method of backing up our data
  • Continuous backups or Point-In-Time…


Provisioning Autoscale containers and databases in Azure Cosmos DB is simple and helps our apps perform better.

Back in January 2020, I wrote an article on Azure Cosmos DB’s ‘Autopilot’ mode, which was released in November 2019 and was still in preview at the time of writing.

Not only has this feature gone GA (Generally Available), it also has a much better name (in my opinion): Autoscale!

In this article, I will cover the following topics:

  • How Throughput works in Azure Cosmos DB.
  • Life before Autoscale Throughput.
  • What is Autoscale Throughput and how does it work?
  • The benefits of Autoscaled Throughput in Azure Cosmos DB.
  • When would you opt for Autoscale over manually provisioned throughput?
  • Creating…
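To give a flavour of that last point, here’s a hedged sketch of creating an autoscale container with the .NET SDK, assuming Microsoft.Azure.Cosmos v3; the endpoint, key, database and container names are placeholders:

```csharp
// A minimal sketch, assuming the Microsoft.Azure.Cosmos v3 SDK.
// Endpoint, key, database and container names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class AutoscaleSetup
{
    public static async Task CreateAutoscaleContainerAsync()
    {
        using var client = new CosmosClient("<account-endpoint>", "<account-key>");

        Database database = await client.CreateDatabaseIfNotExistsAsync("TasksDB");

        var properties = new ContainerProperties(
            id: "Tasks", partitionKeyPath: "/taskId");

        // Autoscale: the container scales between 10% of the max
        // (400 RU/s here) and the 4000 RU/s ceiling, based on demand.
        await database.CreateContainerIfNotExistsAsync(
            properties,
            ThroughputProperties.CreateAutoscaleThroughput(4000));
    }
}
```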


We can implement simple but powerful event routing in Azure Event Grid thanks to subject filtering.

I’m currently working on a personal project that reads various CSV files (downloaded from my Fitbit dashboard) from a local directory, uploads them to Azure Blob Storage and then persists each file’s records into Azure Cosmos DB.

At a high level, the architecture looks like this: files are uploaded to Blob Storage, Event Grid publishes the blob-created events, and subject filtering routes each event to the right handler for persisting into Cosmos DB.
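Here’s a hedged sketch of the consuming end, assuming the Azure Functions Event Grid extension. The subject filter itself (e.g. a subjectEndsWith of “.csv”) is configured on the Event Grid subscription, not in code, and the names below are placeholders:

```csharp
// A minimal sketch of an Event Grid triggered Azure Function.
// The subscription delivering events here would carry a subject
// filter (e.g. subjectEndsWith = ".csv") so only CSV blob events
// reach this function. Names below are placeholders.
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class ProcessCsvBlob
{
    [FunctionName("ProcessCsvBlob")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        // For blob events the subject looks like:
        // /blobServices/default/containers/{container}/blobs/{blob-name}
        log.LogInformation("Event subject: {subject}", eventGridEvent.Subject);
        log.LogInformation("Event type: {type}", eventGridEvent.EventType);

        // Parse eventGridEvent.Data (a BlobCreated payload) and persist
        // the file's records to Cosmos DB here.
    }
}
```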


Azure Cache for Redis provides us with a powerful in-memory data store that can be used for distributed data, session stores or even message brokering.

Azure provides us with its own implementation of Redis called Azure Cache for Redis. This is an in-memory data store that helps us improve the performance and scalability of our applications. We’re able to process a large number of application requests by keeping the most frequently accessed data in server memory, where it can be written to and read from quickly.

Azure Cache for Redis is a managed service that provides secure Redis server instances with full Redis API compatibility. We can use it for the following scenarios:

Caching our data

We wouldn’t…
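As a taste of what’s ahead, here’s a minimal cache-aside sketch, assuming the StackExchange.Redis package; the connection string, key format and the database-lookup helper are placeholders:

```csharp
// A minimal cache-aside sketch, assuming the StackExchange.Redis
// NuGet package. Connection string and key names are placeholders.
using System;
using StackExchange.Redis;

public static class CacheExample
{
    // ConnectionMultiplexer is designed to be shared and reused.
    private static readonly Lazy<ConnectionMultiplexer> connection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True"));

    public static string GetCustomerName(string customerId)
    {
        IDatabase cache = connection.Value.GetDatabase();
        string cacheKey = $"customer:{customerId}";

        // Try the cache first...
        string name = cache.StringGet(cacheKey);
        if (name == null)
        {
            // ...fall back to the source of truth on a miss,
            // then cache the result with a five-minute TTL.
            name = LoadNameFromDatabase(customerId); // hypothetical helper
            cache.StringSet(cacheKey, name, TimeSpan.FromMinutes(5));
        }
        return name;
    }

    private static string LoadNameFromDatabase(string customerId) =>
        "Example Customer"; // stand-in for a real database call
}
```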


Writing basic unit tests for Azure Functions triggered by the Change Feed is straightforward with xUnit.

In the process of refactoring some of our microservices at work, I came across a service that didn’t have any unit tests! This service uses the Azure Cosmos DB Change Feed to listen to one of our write-optimized containers related to customers. When a new customer is created in that container, we pick up that Customer document and insert it into a read-optimized container (acting as an aggregate store) which has a read-friendly partition key value.

This read-optimized container is then utilized by other services within our pipeline when we need to query the aggregate for…
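To sketch the idea (with a hypothetical Customer type and function body standing in for our real services): a Change Feed triggered function is just a method that receives a batch of changed documents, so xUnit can build that batch and call the method directly:

```csharp
// A hedged sketch, assuming xUnit. The Customer type and the
// function body are hypothetical stand-ins for the real services.
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;
using Xunit;

public class Customer
{
    public string Id { get; set; }
    public string Name { get; set; }
}

// The Change Feed trigger simply hands the function a batch of
// changed documents, so the method itself is plain C#.
public static class CustomerFunction
{
    public static List<Customer> Run(IReadOnlyList<Customer> input, ILogger log)
    {
        log.LogInformation("Processing {count} changes", input.Count);
        // Real code would upsert each document into the read-optimized
        // container; here we just return what we'd write.
        return input.ToList();
    }
}

public class CustomerChangeFeedTests
{
    [Fact]
    public void Run_WithNewCustomer_ReturnsDocumentToPersist()
    {
        // Arrange: build the batch the trigger would deliver.
        var changes = new List<Customer> { new Customer { Id = "1", Name = "Jane" } };

        // Act: invoke the function directly with a no-op logger.
        var result = CustomerFunction.Run(changes, NullLogger.Instance);

        // Assert
        Assert.Single(result);
        Assert.Equal("1", result[0].Id);
    }
}
```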


With Azure Synapse Link for Azure Cosmos DB, we can now gain insights over our transactional data seamlessly without having to develop our own ETL pipelines.

In the past, performing traditional analytical workloads with Azure Cosmos DB has been a challenge. Workarounds do exist, such as manipulating the amount of provisioned throughput, or using the Change Feed or another ETL mechanism to migrate data from Cosmos DB to platforms better suited to analytics, but they are a challenge to develop and maintain.

Azure Synapse Link for Cosmos DB addresses the need to perform analytics over our transactional data without impacting our transactional workloads. This is made possible through the Azure Cosmos DB Analytical store, which allows us to sync our transactional data into…
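As a hedged sketch of the container-level switch, assuming the Microsoft.Azure.Cosmos v3 SDK, enabling the analytical store looks something like this (account credentials and names are placeholders):

```csharp
// A hedged sketch, assuming the Microsoft.Azure.Cosmos v3 SDK:
// creating a container with the analytical store enabled so
// Synapse Link can sync it. Names are placeholders.
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class AnalyticalStoreSetup
{
    public static async Task CreateAnalyticalContainerAsync()
    {
        using var client = new CosmosClient("<account-endpoint>", "<account-key>");
        Database database = await client.CreateDatabaseIfNotExistsAsync("RetailDB");

        var properties = new ContainerProperties("Orders", "/customerId")
        {
            // -1 keeps documents in the analytical store indefinitely;
            // a positive value is a TTL in seconds.
            AnalyticalStoreTimeToLiveInSeconds = -1
        };

        await database.CreateContainerIfNotExistsAsync(properties);
    }
}
```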


We can now provision ‘Serverless’ Cosmos DB accounts, only paying for Request Units when we use them!

At Build 2020, the Azure Cosmos DB team announced that they were working on a ‘Serverless’ preview for Cosmos DB, allowing developers to provision Cosmos DB accounts that consume throughput only when operations are performed against them, instead of provisioning throughput at a constant rate.

Out of absolutely nowhere (it’s been a hell of a week!), the Cosmos DB team announced today that the Serverless preview has been released for the Core (SQL) API! You can head to Azure right now and provision a Cosmos DB account without having to provision any throughput.

With Serverless…


Storing unstructured data such as documents, images or videos is simple with Azure Blob Storage

If we want to store information such as documents, images or videos, Azure Blob Storage is a great storage solution that provides developers with high availability at a low price.

For the AZ-204 exam, we’ll need to know how to do the following:

  • Move items in Blob Storage between accounts and containers.
  • Set and retrieve Storage properties and metadata.
  • Interact with data using code (we’ll be using C# in this post; a short sketch follows this list).
  • Implement data archiving and retention.
  • Implement hot, cool and archive storage.
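As a preview of the C# involved, here’s a hedged sketch assuming the Azure.Storage.Blobs SDK; the connection-string setting, container and blob names are placeholders:

```csharp
// A hedged sketch, assuming the Azure.Storage.Blobs SDK.
// The "StorageConnectionString" setting, container and blob
// names are placeholders.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static class BlobExamples
{
    public static async Task UploadAndManageBlobAsync()
    {
        var container = new BlobContainerClient(
            Environment.GetEnvironmentVariable("StorageConnectionString"),
            "documents");
        await container.CreateIfNotExistsAsync();

        BlobClient blob = container.GetBlobClient("report.pdf");

        // Upload a local file (overwrite if it already exists).
        await blob.UploadAsync("report.pdf", overwrite: true);

        // Set metadata on the blob...
        await blob.SetMetadataAsync(new Dictionary<string, string>
        {
            ["department"] = "finance"
        });

        // ...and move it to the Cool tier for cheaper storage.
        await blob.SetAccessTierAsync(AccessTier.Cool);
    }
}
```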

But first we’ll need to set up a couple of storage accounts in Azure to follow…


We can develop message-based solutions using either Azure Service Bus or Azure Queue Storage queues.

In four weeks’ time, I plan to take my AZ-204 exam. I’ve been working with Azure in my day-to-day job for almost two years now, but looking at the exam skills outline, there are a few technologies on there that I’ve never touched before.

So in order to help my preparation for the exam, I plan to write a series of blog posts on each skill measured. Hopefully this will not just reinforce my learning, but also help others who may be taking AZ-204 themselves, or just want to learn something new.

In this article, I’m going to…
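As a first taste of the Queue Storage side, here’s a hedged sketch assuming the Azure.Storage.Queues SDK; the connection-string setting and queue name are placeholders:

```csharp
// A hedged sketch, assuming the Azure.Storage.Queues SDK.
// The "StorageConnectionString" setting and queue name are placeholders.
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

public static class QueueExample
{
    public static async Task SendAndReceiveAsync()
    {
        var queue = new QueueClient(
            Environment.GetEnvironmentVariable("StorageConnectionString"),
            "orders");
        await queue.CreateIfNotExistsAsync();

        // Enqueue a message...
        await queue.SendMessageAsync("order-12345");

        // ...then dequeue it, and delete it once processed.
        QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 1);
        foreach (QueueMessage message in messages)
        {
            Console.WriteLine($"Received: {message.MessageText}");
            await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
        }
    }
}
```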

Will Velida

Microsoft Data Platform MVP. Software Engineer trying to build cool stuff using .NET and Azure. GitHub: https://github.com/willvelida
