As part of my personal development, I’m building my own personal health platform in Azure. I like to keep track of a variety of different health metrics, such as daily activity, food intake and sleep patterns. To collect this data, I use a Fitbit Ionic.
In the past, I would download a monthly CSV file and do some basic analysis on it. This was a bit tedious, as I'd have to scrub the data manually before I could do anything with it. …
Using Azure Artifacts, we can publish NuGet packages to a private (or public) NuGet feed. These feeds can be scoped in Azure DevOps at either an organization level or at a project level.
Creating a private NuGet feed in Azure DevOps is really simple, and the article below shows how you can set one up. If you're following along and you haven't set up an internal feed yet, stop reading this article and check out that one first. Once you're done with that, you can return here.
This post will show you how we can use a YAML build file to…
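To give a flavour of what such a YAML build file can look like, here is a minimal pack-and-push sketch against an internal Azure Artifacts feed. The feed name `MyProject/MyFeed` and the project glob are placeholders, not values from this post, and the exact task inputs you use may differ:

```yaml
trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
# Pack every project into .nupkg files in the artifact staging directory
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    packagesToPack: '**/*.csproj'
    outputDir: '$(Build.ArtifactStagingDirectory)'

# Push the packages to our internal Azure Artifacts feed
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'MyProject/MyFeed'
```

Because the feed is internal to Azure DevOps, the pipeline authenticates with the feed automatically; no API key handling is needed in the YAML itself.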
Azure Cosmos DB is a globally distributed, multi-model, NoSQL database service that allows us to build highly available and scalable applications. Cosmos DB supports applications that use document-model data through its SQL API and MongoDB API.
I’ve been meaning to produce more content on Cosmos DB’s Mongo API, so in this article, I’m going to be developing a Serverless API in Azure Functions that uses a Cosmos DB MongoDB API account. This article has been loosely based on this fantastic tutorial on creating a Web API with ASP.NET Core and MongoDB.
By the end of this article, you’ll know…
The ability to perform backups on your data is essential to ensuring that you can recover from any data failure, such as data corruption, human error or datacenter failure.
Azure Cosmos DB takes backups of your data automatically at regular intervals without affecting the performance or availability of your database operations. This is great for when you encounter any of the data failures mentioned above.
There are two options available to us when we need to perform backups in Azure Cosmos DB:
Back in January 2020, I wrote an article on Azure Cosmos DB's 'Autopilot' mode, which was released in November 2019 and was still in preview at the time of writing.
Not only has this feature gone GA (Generally Available), but it also has a much better name (in my opinion): Autoscale!
In this article, I will cover the following topics:
I’m currently working on a personal project that reads various CSV files from a local directory, uploads the files to Azure Blob Storage and then persists the records of each file into Azure Cosmos DB. These are files downloaded from my Fitbit dashboard.
At a high level, the architecture looks like this:
Azure provides us with its own implementation of Redis called Azure Cache for Redis. This is an in-memory data store that helps us improve the performance and scalability of our applications. We can handle a large number of application requests by keeping the most frequently accessed data in server memory, where it can be written to and read from quickly.
Azure Cache for Redis is a managed service that provides secure Redis server instances with full Redis API compatibility. We can use Azure Cache for Redis for the following scenarios:
Caching our data
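That first scenario, caching our data, usually takes the cache-aside shape: check the cache first, and only hit the slower primary store on a miss. Here is a minimal runnable sketch of that pattern — a plain dict stands in for the Redis client so the example runs anywhere, and all names are illustrative; against the real service you'd create a client from the `redis` package pointed at your cache's hostname and access key instead:

```python
import time

# Stand-in for a Redis client. In production this would be something like
# redis.Redis(host="<cache-name>.redis.cache.windows.net", ssl=True, ...).
cache = {}

def slow_database_lookup(customer_id: str) -> dict:
    """Pretend this is an expensive query against the primary data store."""
    time.sleep(0.1)  # simulate database latency
    return {"id": customer_id, "name": f"Customer {customer_id}"}

def get_customer(customer_id: str) -> dict:
    """Cache-aside: try the cache first, fall back to the database on a miss."""
    key = f"customer:{customer_id}"
    if key in cache:                                # cache hit: serve from memory
        return cache[key]
    record = slow_database_lookup(customer_id)      # cache miss: query the database
    cache[key] = record                             # populate the cache for next time
    return record

first = get_customer("42")   # miss: goes to the "database", then caches the result
second = get_customer("42")  # hit: served straight from the cache
```

With a real Redis client you'd also set an expiry on each key (e.g. via the `ex` argument to `set`) so cached entries don't go stale indefinitely.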
In the process of refactoring some of our microservices at work, I came across a service that didn’t have any unit tests at all! This service uses the Azure Cosmos DB Change Feed to listen to one of our write-optimized containers related to customers. When a new customer is created in that container, we pick up that Customer document and insert it into a read-optimized container (acting as an aggregate store) that has a read-friendly partition key value.
This read-optimized container is then utilized by other services within our pipeline when we need to query the aggregate for…
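The core of that handler can be sketched independently of the SDK. In the snippet below, plain dicts stand in for Cosmos DB documents and a list stands in for the read-optimized container; in the real service this logic would run inside a Change Feed processor delegate, and the choice of country as the read-friendly partition key is purely illustrative:

```python
# In-memory stand-in for the read-optimized container (the aggregate store).
read_optimized_container: list = []

def handle_change(document: dict) -> None:
    """Invoked for each new document the Change Feed delivers.

    Re-shapes the write-optimized Customer document so it carries a
    read-friendly partition key before inserting it into the
    read-optimized container.
    """
    aggregate = {
        "id": document["id"],
        # Hypothetical read-friendly partition key: partition by country
        # rather than by the write-side key (e.g. the customer id).
        "partitionKey": document["address"]["country"],
        "customer": document,
    }
    read_optimized_container.append(aggregate)

# A Customer document as it might land in the write-optimized container.
handle_change({
    "id": "cust-001",
    "name": "Ada Lovelace",
    "address": {"city": "London", "country": "GB"},
})
```

Keeping the re-shaping logic in a small pure function like `handle_change` is also what makes it straightforward to unit test, without standing up a Cosmos DB emulator.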
In the past, performing traditional analytical workloads with Azure Cosmos DB has been a challenge. Workarounds do exist, such as manipulating the amount of provisioned throughput, or using the Change Feed or some other ETL mechanism to migrate data from Cosmos DB to platforms better suited to analytics, but they are a challenge to develop and maintain.
Azure Synapse Link for Cosmos DB addresses the need to perform analytics over our transactional data without impacting our transactional workloads. This is made possible through the Azure Cosmos DB analytical store, which allows us to sync our transactional data into…
At Build 2020, the Azure Cosmos DB team announced that they were working on a ‘Serverless’ preview for Cosmos DB, allowing developers to provision Cosmos DB accounts that only consume throughput when operations are performed against them, instead of having to provision throughput at a constant rate.
Out of absolutely nowhere (it’s been a hell of a week!), the Cosmos DB team announced today that the Serverless preview has been released for the Core (SQL) API! You can head to Azure right now and provision Cosmos DB accounts without having to provision any throughput.