DevOps Management for Azure Synapse Analytics - Part 2
- Abhijith Nair

- Feb 27, 2022
- Updated: Mar 7, 2022

In the last blog we discussed how to set up a Git repository and use CI/CD techniques to promote Synapse pipeline artifacts to other workspaces. However, this method does not promote any database objects. Most organizations create all their databases first within a development Synapse environment, but might not have a standard practice for promoting those objects to a test or production Synapse environment. This blog will showcase how this can be done effectively. But before we go over the steps for promotion, where do these database objects reside? Let me explain in simple terms.
The Azure Synapse Analytics service includes Synapse SQL, which allows users to generate insights from data using SQL queries. There are two consumption models that can be used:
Serverless Pool: An auto-scaling compute resource where you pay only for the queries you execute, billed by the amount of data processed. A workspace can have only a single serverless pool.
Dedicated Pool: A Massively Parallel Processing (MPP) based compute resource with dedicated compute nodes to run our workloads. The performance of this pool is measured in data warehouse units, or DWUs. The pool can be paused to cut down on cost. A workspace can have multiple dedicated pools.
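To make DWUs concrete, the compute level of a dedicated pool can be changed with a single T-SQL statement run against the master database of the workspace's dedicated SQL endpoint. A minimal sketch, assuming a hypothetical pool named InvoicePool:

```sql
-- Run against the master database; 'InvoicePool' is a hypothetical pool name.
-- 'DW200c' is one of the Gen2 service levels (DW100c, DW200c, DW300c, ...).
ALTER DATABASE InvoicePool
MODIFY (SERVICE_OBJECTIVE = 'DW200c');
```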
What to expect?
This blog will explain how to promote database objects (tables, stored procedures, views, logins, etc.) created within a dedicated pool from one Synapse workspace to another.
Prerequisites
Please follow the prerequisites and procedure mentioned in the last blog, titled DevOps Management for Azure Synapse Analytics - Part 1.
Part Two - Promoting Synapse Dedicated Pool Database Objects using CI/CD
Step 1: Create an Azure Repo within Azure DevOps project
Log in to your Azure DevOps account and create a new repository named Synapse DACPAC using the Repos option in the left menu.

Step 2: Create Database Objects in Synapse
In Part One, we created a pipeline which copied data from one data lake storage account to another. Let’s load this data into a table. I have created a staging table (dbo.InvoiceBill) and an intermediate table (int.InvoiceBill) for our data.
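A minimal sketch of what these two tables might look like; the columns are illustrative assumptions, and the distribution and index options are typical dedicated pool choices rather than the exact DDL used here:

```sql
-- Staging table: round-robin distributed heap, a common pattern for fast loads.
CREATE TABLE dbo.InvoiceBill
(
    InvoiceId  INT           NOT NULL,
    CustomerId INT           NOT NULL,
    BillAmount DECIMAL(18,2) NULL,
    BillDate   DATE          NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);
GO

-- The [int] schema must be bracketed because int is a reserved keyword.
CREATE SCHEMA [int];
GO

-- Intermediate table: hash distributed with a clustered columnstore index,
-- the default choice for analytical tables in a dedicated pool.
CREATE TABLE [int].InvoiceBill
(
    InvoiceId  INT           NOT NULL,
    CustomerId INT           NOT NULL,
    BillAmount DECIMAL(18,2) NULL,
    BillDate   DATE          NULL
)
WITH (DISTRIBUTION = HASH(InvoiceId), CLUSTERED COLUMNSTORE INDEX);
```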
Note: The tables must be created in a feature branch and then merged into master. To understand how this is done, please check step 3 of the last blog.

Next, copy the data from the data lake into the dbo staging table.
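One way to express this load in T-SQL is the COPY statement (the same COPY command referenced in the closing note of this blog); the storage URL, path, and credential below are placeholders:

```sql
COPY INTO dbo.InvoiceBill
FROM 'https://<storageaccount>.dfs.core.windows.net/<container>/invoices/*.csv'
WITH
(
    FILE_TYPE  = 'CSV',
    FIRSTROW   = 2,                               -- skip the header row
    CREDENTIAL = (IDENTITY = 'Managed Identity')  -- or a SAS/key credential
);
```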

Using a stored procedure, insert the data into the intermediate table.
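The procedure can be as simple as an INSERT...SELECT from the staging table; the procedure name below is a hypothetical one:

```sql
CREATE PROCEDURE dbo.usp_LoadInvoiceBill
AS
BEGIN
    -- Move the staged rows into the intermediate table.
    INSERT INTO [int].InvoiceBill (InvoiceId, CustomerId, BillAmount, BillDate)
    SELECT InvoiceId, CustomerId, BillAmount, BillDate
    FROM dbo.InvoiceBill;
END;
GO

EXEC dbo.usp_LoadInvoiceBill;
```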

Step 3: Create a Database Project in Visual Studio
To promote database objects between Synapse environments, we must create a data-tier application package, or dacpac. This is done by first creating a new database project in Visual Studio.


Next, enable a Git repository for the database project.

Provide the link to the Synapse DACPAC repository under the Existing remote section.

Step 4: Import the Dedicated Pool to Visual Studio
Once we have the database project connected to Azure Repos, let’s import the development Synapse dedicated pool. Right-click on the project and select Import -> Database.

Provide the required information for database connection.

Select Start to begin the import process. On completion the project would look something like the illustration below.

Now we have imported all the existing database objects from Synapse into the database project. Sync the new objects to the empty Azure Repos repository.

Next, let’s create a new Employee table in a feature branch – feature/dacpac. To connect to and query the dedicated pool, select SQL Server Object Explorer on the left and establish a connection by providing the login credentials.
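For illustration, the new table might be defined as follows; only the table name comes from this walkthrough, and the columns are assumptions:

```sql
CREATE TABLE dbo.Employee
(
    EmployeeId INT          NOT NULL,
    FirstName  VARCHAR(100) NULL,
    LastName   VARCHAR(100) NULL,
    HireDate   DATE         NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN, CLUSTERED COLUMNSTORE INDEX);
```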

Commit and push this new table to the remote feature branch. Once all the latest changes are on the remote, create a pull request to merge the feature branch into the master branch of our Synapse DACPAC repository in Azure Repos.

To compile all the artifacts, we need to build the solution. Right-click on the solution and select Build.

Once the changes are merged into Azure Repos, the repository will include the Employee.sql file along with the other object files. The solution file (InvoiceBill.sln) created with the project is also in the repository.

Step 5: Create a Build Pipeline
Although we can publish the solution directly from Visual Studio (right-click on the project and select Publish), standard DevOps best practice is to automate this, and as part of that we must first create a build pipeline.
Select Pipelines to create a new build pipeline named Synapse DACPAC Build. Create three tasks (a YAML sketch of the equivalent pipeline follows this list):
a) Visual Studio Build task: Select the solution file from the repository.

b) Copy Files: Select the source and target folder.

c) Publish Build Artifacts: This task publishes the build into the drop location.
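For readers who prefer YAML over the classic editor, here is a sketch of the same three tasks as a YAML pipeline. The task names and inputs are the standard Azure DevOps ones; the trigger branch, paths, and artifact name are assumptions:

```yaml
trigger:
  - master

pool:
  vmImage: 'windows-latest'

steps:
  # a) Build the database project solution to produce the dacpac.
  - task: VSBuild@1
    inputs:
      solution: 'InvoiceBill.sln'

  # b) Copy the generated dacpac into the artifact staging directory.
  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)'
      Contents: '**/*.dacpac'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'

  # c) Publish the staged files to the drop location.
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
```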

To execute the build pipeline, select Save & queue.

The build pipeline creates a dacpac file. To find its location, click on the published artifact as highlighted in the illustration above.

Step 6: Create a Release Pipeline
Select Releases to create a new release pipeline. Provide the information on the build pipeline that was executed earlier. Name the stage Test, as we are going to promote the objects into our test Synapse workspace, synapsedemo121test.

Under tasks, search for Azure SQL Data Warehouse deployment.

Provide all the necessary information. Store all the variables under pipeline variables or variable groups.
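As a rough YAML sketch of this deployment step, here is the closely related SqlAzureDacpacDeployment task, which can also push a dacpac to a dedicated pool's SQL endpoint; the classic Azure SQL Data Warehouse deployment task used above takes analogous inputs. The service connection, server, pool, and variable names are placeholders:

```yaml
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: '<service-connection-name>'
    ServerName: 'synapsedemo121test.sql.azuresynapse.net'  # dedicated SQL endpoint
    DatabaseName: '<dedicated-pool-name>'
    SqlUsername: '$(sqlUser)'      # stored as pipeline variables
    SqlPassword: '$(sqlPassword)'  # mark the secret variable as such
    deployType: 'DacpacTask'
    DacpacFile: '$(System.DefaultWorkingDirectory)/**/*.dacpac'
```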

Save and execute the release pipeline.

The pipeline ran successfully. Let's verify the changes in the test Synapse workspace.

Voila! The newly created Employee table, along with the other objects, is now part of our test Synapse environment.
Note: The dacpac only creates the schema for the tables in the target environment. To load data, either use the COPY command (as sketched earlier) or create a separate pipeline.
This way we can promote database objects within a dedicated pool to a target Synapse environment, without any manual intervention, by following standard CI/CD best practices.
Although this wraps up the DevOps implementation part of Azure Synapse Analytics, I will be posting more blogs on creating enterprise-grade infrastructure solutions in Azure.
Until then, Happy Blogging!




