In this article I’m going to share a simple Jupyter notebook that can be used to easily load data into a Cosmos DB Graph database.
I’ve been doing a lot of work with Cosmos DB’s Graph API recently. It’s a graph engine that makes use of Cosmos DB’s scalable backend to provide a fully managed graph database that can be queried with the open-source Gremlin API. Gremlin is the graph query language of the Apache TinkerPop graph database project (see Apache TinkerPop).
At the time of writing, Azure Data Factory does not support loading data into the Cosmos DB Graph API…
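To give a flavour of what the notebook does, here is a minimal sketch of the core idea: building Gremlin `addV` upsert queries from plain Python dicts, which could then be submitted to the Cosmos DB Gremlin endpoint with the `gremlinpython` client. The `vertex_query` helper is a hypothetical name of my own, not part of any library, and it assumes string-valued properties (real data would need escaping and type handling).

```python
def vertex_query(label, props):
    """Build a Gremlin addV query string from a dict of properties.

    Assumes all values are simple strings; production code should
    escape quotes and handle numeric/boolean types properly.
    """
    query = f"g.addV('{label}')"
    for key, value in props.items():
        query += f".property('{key}', '{value}')"
    return query

query = vertex_query('person', {'id': 'p1', 'name': 'Alice'})
# In the notebook, this string would be submitted to Cosmos DB with
# gremlinpython, roughly: client.submit(query) against the wss:// Gremlin endpoint.
```

Looping a helper like this over the rows of a DataFrame is essentially all the notebook needs to do to bulk-load vertices.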
A basic exploration of Apache beam with Azure, using it to connect to storage and transform and move data into Cosmos DB.
Something I’ve been working on recently is using Apache Beam in Azure to move and transform data. The documentation on this is sparse to the point of being non-existent, so hopefully I can fill some of the gaps with what I discovered while successfully implementing a small Apache Beam pipeline running on Azure, moving data from Azure Storage to Cosmos DB (MongoDB API).
First off, what is Apache Beam? Well basically it’s a framework for developing data processing…
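As a taste of the shape of such a pipeline, here is a sketch of the per-element transform it revolves around: turning one CSV line read from Azure Storage into a document ready for a MongoDB-API insert. The function name `csv_line_to_doc` and the field names are my own illustrative choices, and the commented pipeline wiring is only an outline of how it would plug into Beam, not the exact code from the article.

```python
import csv
import io

def csv_line_to_doc(line, fieldnames):
    """Parse a single CSV line into a dict suitable for a MongoDB insert."""
    reader = csv.DictReader(io.StringIO(line), fieldnames=fieldnames)
    return next(reader)

# Roughly how this slots into a Beam pipeline (sketch, not verbatim):
#   import apache_beam as beam
#   with beam.Pipeline() as p:
#       (p
#        | beam.io.ReadFromText('azfs://account/container/data.csv')
#        | beam.Map(csv_line_to_doc, ['id', 'name'])
#        | beam.io.WriteToMongoDB(uri=cosmos_connection_string, db='mydb', coll='mycoll'))
```

Keeping the transform a plain function like this also makes it trivial to unit-test outside of any runner.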
I’ve been on a quest for a light, portable laptop that’s powerful enough to do what I want but cheap enough not to make me worry about travelling with it (whenever travelling happens again).
There are a lot of options in the Ultrabook category but, for me, most of them are not light enough, too large, or just too expensive. Ideally I didn’t want to spend more than £500.
A pretty tough set of criteria, so I was really excited to read about the Star Labs Lite MkIII, which looked to have the perfect specs:
Roving NoSQL specialist @Microsoft