I’ve had this blog post in my mind for quite some time, but, as is sometimes the case with these things, it did not feel ready to meet the public quite yet. Now it is. The bedrock of my career (and in many ways, my life) is community. I was a lonely child, sometimes by choice, sometimes not, and I didn’t quite see the point of people until a rather late age. It took me quite some time to get to grips with social interaction in general, and it is in large part due to my wonderful wife that I’ve managed to decode “people” and learn to see all the wonderful things that happen when people interact and do things together.
Tomorrow I’m heading to Oslo and the Nordic Infrastructure Conference (NICCONF) – one of my favorite conferences! I’ve been invited to deliver three sessions this year: “The force awakens – Azure SQL Server for the on-prem DBA” which is an introduction to Azure SQL Server in its different shapes, “Azure Machine Learning for the absolute beginner” which is an introduction to Azure Machine Learning, its capabilities and what can be done with machine learning, and finally “Learning to swim – an introduction to Azure Data Lake”, a quick overview of the what, how and when with Azure Data Lake. Now, I’ve been running around trying to find some props for these sessions, and so far I’ve got fun stuff for two of them.
The first month of the new year is more than halfway done. Time flies, but I’ve already had time to go to SQL Saturday Linz in beautiful Austria. I delivered “Headless chicken - calming the sysadmin-turned-DBA” to a full room, and it was 60 minutes of fun, shenanigans and failing to use a flipchart properly - all while having an excellent discussion about the intricacies of waking up as a DBA. Tomorrow I’m leaving for a quick trip to Mechelen in Belgium and the first-ever Power BI Days conference! I have it on good authority that there will be a good crowd, and I’m more than happy to be a part of Europe’s newest Power BI-focused event.
I’ve been thinking about this blog post a lot these last few days. It’s the classic “end-of-year” post that most everyone does, but this one has turned out to be rather special for me. At the same time it is kind of scary, as when I look back on what I’ve done and accomplished this year, I realize how much it actually is - and how much I have actually chosen *not* to do. 2018 was the year I decided to step up my speaking game for real. I had spent 2016 and 2017 polishing my skills and sending abstracts to what felt like every conference there was.
It’s time for me to go on the road again, and this time I’m headed to London and the UK Cloud Infrastructure User Group, where I will be delivering a brand new session on self-service BI from an infrastructure perspective. This session is not only brand new, it is also a bit of a presentation-style experiment. I will be delivering the session in no less than three different voices - as in, three differing points of view. The subtitle for this session is “arguing with myself” for a reason… My goal with the 60-minute session is to provide a walk-through of what self-service BI is and what makes it so potentially awesome, how it can (and will!
There are several more use cases for a dataflow, but one that is very useful is the ability to share a dataset between apps. Previously we had to duplicate the dataset to each and every app that needed to use it, increasing the risk that one dataset was ignored, not refreshed properly or otherwise out of sync with reality. By using dataflows we can have several apps rely on the same dataflow (via a dataset), and thus it is quite possible to have a “master dataset”. Here is one way to do it:

1. Create an app workspace to keep all your dataflows that you are planning on sharing to different app workspaces.
With Power BI Dataflows out in public preview and everyone exclaiming how absolutely amazing they are, I decided to put together a bit of an example of how you might use them to “hide” some of the data prep work that goes into cleaning a dataset. To do this I will build on a blog post Erik Svensen wrote a few years ago, in which he uses data from Statistics Sweden to work through a JSON API. Exactly what data I use here isn’t quite as important as how I use it, but the idea is to provide a prepared dataset, ready to use for an analyst.
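The actual shaping happens in Power Query inside the dataflow, but the kind of cleanup being hidden - turning a key/value JSON response into flat rows - can be sketched in plain Python. Note that the response shape, field names and figures below are assumptions for illustration, not the actual Statistics Sweden schema:

```python
import json

# Hypothetical sample response in a PXWeb-style key/values shape.
# The field names and numbers here are made up for illustration.
sample = json.loads("""
{
  "columns": [
    {"code": "Region", "text": "region"},
    {"code": "Tid", "text": "year"},
    {"code": "BE0101N1", "text": "population"}
  ],
  "data": [
    {"key": ["01", "2023"], "values": ["2440027"]},
    {"key": ["03", "2023"], "values": ["400682"]}
  ]
}
""")

def flatten(response):
    """Turn the nested key/values structure into flat, analyst-ready rows."""
    headers = [col["text"] for col in response["columns"]]
    return [dict(zip(headers, rec["key"] + rec["values"]))
            for rec in response["data"]]

rows = flatten(sample)
# Each row is now a plain dict, e.g.:
# {"region": "01", "year": "2023", "population": "2440027"}
```

This is roughly the transformation a dataflow lets you do once, centrally, so that every analyst downstream sees only the tidy table.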
I’m on a train heading to Stockholm and Microsoft TechDays, where I’ll be delivering “Azure SQL Server for the on-prem DBA”. This session outlines what’s available in Azure, what is automatic, what is not quite automatic and what is idiosyncratic, and explores some of the hard questions one should ask whenever the topic of databases in the cloud comes up. This is the second time I’ve been selected to speak at TechDays, and I find it to be a very nice conference: a good venue, a lot of people and a great sponsor area. This year I’ll apparently hold court in one of the larger rooms - rather exciting!