We can generate the refresh and access tokens required to call the Fitbit API programmatically with a simple Timer trigger function.

As part of my personal development, I’m building my own personal health platform in Azure. I like to keep track of a variety of different health metrics, such as daily activity, food intake and sleep patterns. To collect this data, I use a Fitbit Ionic.

In the past, I downloaded a monthly CSV file and did some basic analysis on it. This was a bit tedious, as I'd have to scrub the data manually before I could do anything with it. …
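The Timer trigger implementation itself is cut off above, but the heart of it is Fitbit's standard OAuth 2.0 refresh flow: POST to the token endpoint with HTTP Basic auth and a `refresh_token` grant. A minimal sketch of building that request (Python here rather than the C# the function would use; the function name is mine, not from the article):

```python
import base64

# Fitbit's OAuth 2.0 token endpoint, per Fitbit's Web API documentation.
FITBIT_TOKEN_URL = "https://api.fitbit.com/oauth2/token"


def build_refresh_request(client_id: str, client_secret: str, refresh_token: str):
    """Build the headers and form body for a Fitbit token-refresh call.

    Fitbit expects HTTP Basic auth with the base64-encoded
    "client_id:client_secret" pair, and a form-encoded body naming
    the refresh_token grant.
    """
    credentials = base64.b64encode(
        f"{client_id}:{client_secret}".encode()
    ).decode()
    headers = {
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }
    return headers, body
```

A timer-triggered function would POST this to the token endpoint on a schedule and persist the returned pair somewhere safe (Fitbit rotates the refresh token on every refresh, so you must store the new one each time).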


We can publish NuGet packages to internal feeds hosted in Azure Artifacts easily via pipelines defined in YAML files.

Using Azure Artifacts, we can publish NuGet packages to a private (or public) NuGet feed. These feeds can be scoped in Azure DevOps at either an organization level or at a project level.

Creating a private NuGet feed in Azure DevOps is really simple. The article below shows how you can set one up. If you're following along and you haven't set up an internal feed yet, stop reading this article, work through the one below, and then return here.

This post will show you how we can use a YAML build file to…
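The YAML itself is cut off above, but a minimal pipeline of this shape would pack and push on every commit to main. The task names (`NuGetToolInstaller@1`, `DotNetCoreCLI@2`, `NuGetCommand@2`) are standard Azure Pipelines tasks; the project and feed names are placeholders you'd swap for your own:

```yaml
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'windows-latest'

steps:
  # Make sure nuget.exe is available on the agent
  - task: NuGetToolInstaller@1

  # Pack every project into a .nupkg (output defaults to the
  # artifact staging directory)
  - task: DotNetCoreCLI@2
    displayName: 'Pack NuGet packages'
    inputs:
      command: 'pack'
      packagesToPack: '**/*.csproj'

  # Push the packages to the internal Azure Artifacts feed
  - task: NuGetCommand@2
    displayName: 'Push to internal feed'
    inputs:
      command: 'push'
      packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
      publishVstsFeed: 'MyProject/MyInternalFeed'
```

Project-scoped feeds use the `Project/FeedName` form for `publishVstsFeed`; organization-scoped feeds take just the feed name.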


In this tutorial, we’ll build a Web API using Azure Functions that stores data in Azure Cosmos DB using the MongoDB API, all in C#.

Azure Cosmos DB is a globally distributed, multi-model, NoSQL database service that allows us to build highly available and scalable applications. Cosmos DB supports applications that use document-model data through its SQL API and MongoDB API.

I’ve been meaning to produce more content on Cosmos DB’s MongoDB API, so in this article I’m going to develop a serverless API in Azure Functions that uses a Cosmos DB MongoDB API account. This article is loosely based on this fantastic tutorial on creating a Web API with ASP.NET Core and MongoDB.

By the end of this article, you’ll know…


Azure Cosmos DB now provides us with the ability to continuously back up our data, allowing more granular control over our backups.

The ability to perform backups of your data is essential to ensure that you can recover in the event of any data failure, such as data corruption, human error or datacenter failure.

Azure Cosmos DB takes backups of our data automatically at regular intervals without affecting the performance or availability of our database operations. This is great for when we encounter any of the data failures mentioned above.

There are two options available to us when we need to perform backups in Azure Cosmos DB:

  • Periodic backup mode — the default method of backing up our data
  • Continuous backups or Point-In-Time…

Provisioning Autoscale containers and databases in Azure Cosmos DB is simple and helps our apps perform better.

Back in January 2020, I wrote an article on Azure Cosmos DB’s ‘Autopilot’ mode, released in November 2019, which was still in preview at the time of writing.

Not only has this feature gone GA (Generally Available), but it also has (in my opinion) a much better name: Autoscale!

In this article, I will cover the following topics:

  • How Throughput works in Azure Cosmos DB.
  • Life before Autoscale Throughput.
  • What is Autoscale Throughput and how does it work?
  • The benefits of Autoscaled Throughput in Azure Cosmos DB.
  • When would you opt for Autoscale over manually provisioned throughput?
  • Creating…
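Before the detail above (which is cut off), the billing model is worth sketching. Assuming the documented behaviour — autoscale moves instantly between 10% of the configured maximum RU/s and the maximum itself, and each hour is billed at the highest RU/s reached, with that 10% figure as a floor — a rough illustration (the function is mine, not from the article):

```python
def autoscale_billed_rus(max_rus: int, observed_peak_rus: int) -> float:
    """Estimate the RU/s billed for one hour under autoscale throughput.

    Autoscale scales between 10% of the configured maximum and the
    maximum itself; each hour is billed at the highest RU/s the system
    scaled to, never below the 10% floor.
    """
    floor = max_rus * 0.1
    return max(floor, min(observed_peak_rus, max_rus))
```

So with a 4,000 RU/s maximum, a quiet hour peaking at 150 RU/s still bills the 400 RU/s floor, while a busy hour peaking at 2,500 RU/s bills exactly that — which is the appeal for spiky, unpredictable workloads.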

We can implement simple but powerful event routing in Azure Event Grid thanks to subject filtering.

I’m currently working on a personal project that reads various CSV files from a local directory, uploads them to Azure Blob Storage and then persists each file’s records into Azure Cosmos DB. The files are downloads from my Fitbit dashboard.

At a high level, the architecture looks like this:
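The filtering itself is just prefix/suffix matching on the event’s subject (for blob events, the blob path). Conceptually — this is an illustration of the check, not the Event Grid SDK — the routing decision looks like:

```python
def matches_subject_filter(subject: str,
                           subject_begins_with: str = "",
                           subject_ends_with: str = "") -> bool:
    """Mimic Event Grid's subjectBeginsWith / subjectEndsWith filters:
    an event is delivered only if its subject passes both checks."""
    return (subject.startswith(subject_begins_with)
            and subject.endswith(subject_ends_with))


# Route blob-created events for CSVs in a hypothetical 'sleep' container only:
event_subject = "/blobServices/default/containers/sleep/blobs/2020-09.csv"
matches_subject_filter(
    event_subject,
    subject_begins_with="/blobServices/default/containers/sleep/",
    subject_ends_with=".csv",
)
```

Each event subscription carries its own filter, so one topic can fan different containers’ events out to different handlers.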


Azure Cache for Redis provides us with a powerful in-memory data store that can be used for distributed data, session stores or even message brokering.

Azure provides us with its own implementation of Redis called Azure Cache for Redis. This is an in-memory data store that helps us improve the performance and scalability of our applications. We’re able to process a large number of application requests by keeping the most frequently accessed data in server memory, where it can be written to and read from quickly.

Azure Cache for Redis is a managed service that provides secure Redis server instances with full Redis API compatibility. We can use Azure Cache for Redis in the following scenarios:

Caching our data

We wouldn’t…
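The text is cut off, but the caching scenario usually means the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache for the next reader. A self-contained sketch (a dict stands in for the Redis instance here; against the real service you’d use a Redis client’s GET/SET with a TTL):

```python
cache = {}  # stands in for Redis: redis_client.get / redis_client.set


def get_customer(customer_id: str, load_from_db) -> dict:
    """Cache-aside: try the cache first, fall back to the database,
    then populate the cache for subsequent reads."""
    key = f"customer:{customer_id}"
    if key in cache:                        # cache hit: no DB round trip
        return cache[key]
    record = load_from_db(customer_id)      # cache miss: hit the slow store
    cache[key] = record                     # populate for next time
    return record
```

With real Redis you’d also set an expiry on the key (e.g. SETEX) so stale entries age out rather than living in the cache forever.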


Writing basic unit tests for Azure Functions triggered by the Change Feed is straightforward with xUnit.

In the process of refactoring some of our microservices at work, I came across a service that didn’t have any unit tests at all! This service uses the Azure Cosmos DB Change Feed to listen to one of our write-optimized containers related to customers. When a new customer is created in that container, we pick up that Customer document and insert it into a read-optimized container (acting as an aggregate store) that has a read-friendly partition key value.

This read-optimized container is then utilized by other services within our pipeline when we need to query the aggregate for…


With Azure Synapse Link for Azure Cosmos DB, we can now gain insights over our transactional data seamlessly without having to develop our own ETL pipelines.

In the past, performing traditional analytical workloads with Azure Cosmos DB has been a challenge. Workarounds do exist, such as manipulating the amount of provisioned throughput, or using the Change Feed or some other ETL mechanism to migrate data from Cosmos DB to platforms better suited to analytics, but they are a challenge to develop and maintain.

Azure Synapse Link for Cosmos DB addresses the need to perform analytics over our transactional data without impacting our transactional workloads. This is made possible through the Azure Cosmos DB analytical store, which allows us to sync our transactional data into…


We can now provision ‘Serverless’ Cosmos DB accounts, only paying for Request Units when we use them!

At Build 2020, the Azure Cosmos DB team announced that they were working on a ‘Serverless’ preview for Cosmos DB, allowing developers to provision Cosmos DB accounts that only use throughput when operations are performed on that Cosmos DB account, instead of having to provision throughput at a constant rate.

Out of absolutely nowhere (it’s been a hell of a week!), the Cosmos DB team announced today that the Serverless preview has been released for the Core (SQL) API! You can head to Azure right now and provision Cosmos DB accounts without having to provision any throughput.

With Serverless…

Will Velida

Microsoft Data Platform MVP. Software Engineer trying to build cool stuff using .NET and Azure. GitHub: https://github.com/willvelida
