Tag - SQL-Jobs

Quick CSV to SQL with Azure Databricks
Apr 14, 2023

In this blog, we will explore Azure Databricks, a cloud-based analytics platform, and how it can be used to parse a CSV file from Azure Storage and store the data in a database. We will also learn how to process stream data and use a Databricks notebook in an Azure Data Factory pipeline.

Azure Databricks Overview
Azure Databricks is an Apache Spark-based analytics platform that provides a collaborative workspace for data scientists, data engineers, and business analysts. It is a cloud-based service designed to handle big data and allows users to process data at scale. Databricks also provides tools for data analysis, machine learning, and visualization. With its integration with Azure Storage, Azure Data Factory, and other Azure services, Azure Databricks can be used to build end-to-end data processing pipelines.

Parsing a CSV File from Azure Blob Storage to a Database using Azure Databricks
Azure Databricks can be used to parse CSV files from Azure Storage and then store the data in a database. Here are the steps to accomplish this.

Configure the Azure components
1. Create an Azure Resource Group Image 1
2. Create an Azure Databricks resource Image 2
3. Create a SQL Server resource Image 3
4. Create a SQL Database resource Image 4
5. Create an Azure Storage account Image 5
6. Create an Azure Data Factory resource Image 6
7. Launch the Databricks workspace Image 7
8. Create a compute cluster Image 8
9. Create a new notebook Image 9

Parse the CSV file and load the data into the database
1. Create a cluster: First, create a cluster in Azure Databricks as above. A cluster is a group of nodes that work together to process data.
2. Import all the necessary modules in the Databricks notebook:

%python
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, generate_blob_sas, BlobSasPermissions
import pandas as pd
import pymssql
import pyspark.sql
Code 1

3. Configure the Azure Storage connection: Next, point the notebook at the storage account and container. A container client is created here because it is used to list the blobs in step 5:

# Configure blob connection
storage_account_name = "storage"
storage_account_access_key = "***********************************"
blob_container = "blob-container"
blob_service_client = BlobServiceClient(
    account_url="https://" + storage_account_name + ".blob.core.windows.net",
    credential=storage_account_access_key)
container_client = blob_service_client.get_container_client(blob_container)
Code 2

4. Establish the database connection:

# DB connection
conn = pymssql.connect(server='****************.database.windows.net', user='*****', password='*****', database='DataBricksDB')
cursor = conn.cursor()
Code 3

5. Parse the CSV files: Once the connection is configured, you can read every CSV file in the container into a pandas DataFrame. A SAS token is generated for each blob and appended to its URL so that pandas can read it directly:

# Get a list of all blobs in the container
blob_list = []
for blob_i in container_client.list_blobs():
    blob_list.append(blob_i.name)
# print(blob_list)

df_list = []
# Generate a SAS key for each file and load it into a DataFrame
for blob_i in blob_list:
    print(blob_i)
    sas_i = generate_blob_sas(account_name=storage_account_name,
                              container_name=blob_container,
                              blob_name=blob_i,
                              account_key=storage_account_access_key,
                              permission=BlobSasPermissions(read=True),
                              expiry=datetime.utcnow() + timedelta(hours=12))

    sas_url = 'https://' + storage_account_name + '.blob.core.windows.net/' + blob_container + '/' + blob_i + '?' + sas_i
    print(sas_url)

    df = pd.read_csv(sas_url)
    df_list.append(df)
Code 4
6. Transform and store the data in the database: Finally, combine the per-file DataFrames and store the data in SQL Server using the following code:

# Truncate table sales if it already exists
Truncate_Query = "IF EXISTS (SELECT * FROM sysobjects WHERE name='sales' and xtype='U') truncate table sales"
cursor.execute(Truncate_Query)
conn.commit()

# SQL query for table creation
create_table_query = "IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='sales' and xtype='U') CREATE TABLE sales (REGION varchar(max),COUNTRY varchar(max),ITEMTYPE varchar(max),SALESCHANNEL varchar(max),ORDERPRIORITY varchar(max),ORDERDATE varchar(max),ORDERID varchar(max),SHIPDATE varchar(max),UNITSSOLD varchar(max),UNITPRICE varchar(max),UNITCOST varchar(max),TOTALREVENUE varchar(max),TOTALCOST varchar(max),TOTALPROFIT varchar(max))"
cursor.execute(create_table_query)
conn.commit()

# Combine the per-file DataFrames into one
df_combined = pd.concat(df_list, ignore_index=True)

# Insert data from the combined DataFrame
for rows in df_combined.itertuples(index=False, name=None):
    row = str(list(rows))
    row_data = row[1:-1]
    row_data = row_data.replace("nan", "''")
    row_data = row_data.replace("None", "''")
    insert_query = "insert into sales (REGION,COUNTRY,ITEMTYPE,SALESCHANNEL,ORDERPRIORITY,ORDERDATE,ORDERID,SHIPDATE,UNITSSOLD,UNITPRICE,UNITCOST,TOTALREVENUE,TOTALCOST,TOTALPROFIT) values (" + row_data + ")"
    print(insert_query)
    cursor.execute(insert_query)
conn.commit()
Code 5

As shown here, the data from all the files is loaded into the SQL Server table. Image 10

An Azure Databricks notebook can also be used to process stream data in an Azure Data Factory pipeline. Here are the steps to accomplish this:
1. Create a Databricks notebook: First, create a notebook in Azure Databricks. A notebook is a web-based interface for working with code and data.
2. Create a job: Next, create a job in Azure Data Factory to execute the notebook. A job is a collection of tasks that can be scheduled and run automatically.
3. Configure the job: In the job settings, specify the Azure Databricks cluster and notebook that you want to use. Also specify the input and output datasets.
4. Write the code: In the Databricks notebook, write the code to process the stream data. Here is an example that counts records in 10-minute windows:

from pyspark.sql.functions import window

stream_data = spark.readStream \
    .format("csv") \
    .option("header", "true") \
    .schema("<schema>") \
    .load("/mnt/<mount-name>/<file-name>.csv")

stream_data = stream_data \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(window("timestamp", "10 minutes")) \
    .count()
Code 6

How to use the Azure Databricks notebook in an Azure Data Factory pipeline and configure the data flow pipeline using it: Image 11
1. Create the ADF pipeline Image 12
2. Configure the data pipeline Image 13
3. Add a trigger to the pipeline Image 14
4. Configure the trigger Image 15

These capabilities make Azure Databricks an ideal platform for building real-time data processing solutions. Overall, Azure Databricks provides a scalable and flexible solution for data processing and analytics, and it is definitely worth exploring if you are working with big data on the Azure platform.
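Going back to the batch-load example above: once the notebook has written to SQL Server, a quick T-SQL check confirms the load. This is a minimal sketch that assumes the sales table created by the script lives in the default dbo schema:

SELECT COUNT(*) AS LoadedRows FROM dbo.sales;
SELECT TOP (10) * FROM dbo.sales;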
With its powerful tools and easy-to-use interface, Azure Databricks is a valuable addition to any data analytics toolkit.

Parse JSON in SQL Server Versions Below 2016
Dec 21, 2022

Abstract
This article describes a T-SQL JSON parser and provides its source. It is also designed to illustrate a number of string-manipulation techniques and to eliminate the issues that arise when a JSON document contains special symbols such as "/" or "-" in T-SQL. With it you can extract data from a JSON file or document that contains noise and complexities.

Summary of the Implementation
The code for the JSON parser will run in SQL Server 2005, and even in SQL Server 2000 (note: some modifications are necessary). First, the function stores all strings in a temporary table, even the names of the elements, since they are escaped in a different way and may contain unescaped brackets and other special characters that denote objects or lists. These strings are replaced in the JSON text by tokens that represent them. The function then fetches all the JSON keys and values for further processing, using pattern matching, various string functions, and a set of SQL queries and variables to store the values for a particular object. Finally, the function returns a table of rows and columns whose values contain no noise, just like any other table in the database.

Figure 1:- Json Input
Figure 2:- Function Output

Background
T-SQL isn't really designed for complex string parsing involving special characters, particularly where strings represent nested data structures such as XML, JSON, or XHTML. You can do it, but it is not a pretty sight. If you ever want to do it anyway (note: you can now do this rather more easily using SQL Server 2016's built-in JSON support), or if your SQL Server version is older or does not support the built-in JSON functions, you can use this customized function to parse almost any JSON document. There are several reasons you might end up here: the DBA may not allow a CLR assembly, you may lack the necessary skills with procedural scripting, there may be no application layer at all, or you may want to run the code unobtrusively across databases or servers.

The traditional way of dealing with data like this is to let a separate business layer parse a JSON 'document' into some meaningful structure (such as a tree) and then update the database through a series of calls and many SQL procedures. This works, but it becomes complicated and painful if you need to ensure that the updates to the database are wrapped in one transaction, so that if anything goes wrong the whole transaction can be rolled back. This is why a T-SQL approach has advantages. Adjacency-list tables have the same structure whatever data is in them, which means you can define a single table-valued type and pass the data structures around between stored procedures. Converting the data to hierarchical table form will be different for each application, but it is easy in T-SQL. You can, alternatively, convert the hierarchical table into JSON and interrogate that with SQL.

JSON format
JSON is one of the most popular lightweight data-interchange formats, and is probably the best choice for transferring object data from a web page. JSON is designed to be as lightweight as possible, so it has only two structures. The first, delimited by curly brackets, is a collection of key/value pairs separated by commas, with each key followed by a colon. The second, delimited by square brackets, is an ordered list of values separated by commas.
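For illustration, the following hypothetical document shows both structures at once, an object of key/value pairs and an array, held in an NVARCHAR variable ready to be passed to the parser described below (the content itself is made up):

DECLARE @json NVARCHAR(MAX);
SET @json = N'{"Name":"MagnusMinds", "Tags":["sql","json","parser"], "Count":3}';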
The first snag for TSQL is that the curly or square brackets are not ‘escaped’ within a string, so that there is no way of partitioning a JSON ‘document’ simply. It is difficult to  differentiate a bracket used as the delimiter of an array or structure, and one that is within a string. The second complication is that, unlike YAML, the datatypes of values can’t be explicitly declared. You have to pass them out from applying the rules from the JSON Specification.   Implementation The JSON outputter is a great deal simpler, since one can be sure of the input, but essentially it does the reverse process, working from the root of the json document to the leaves. The only complication is working out the indent of the formatted output string. In the implementation, you’ll see a fairly heavy use of PATINDEX.This uses a RegEx. However, it is all we have, and can be pressed into service by chopping the string it is searching (if only it had an optional third parameter like CHARINDEX that specified the index of the start position of the search!). The STUFF function is also important for this sort of string-manipulation work. CREATE FUNCTION [Platform].[parseJSON] (@JSON NVARCHAR(MAX)) RETURNS @hierarchy TABLE ( Element_ID INT IDENTITY(1, 1) NOT NULL /* internal surrogate primary key gives the order of parsing and the list order */ ,SequenceNo [int] NULL /* the place in the sequence for the element */ ,Parent_ID INT NULL /* if the element has a parent then it is in this column. The document is the ultimate parent, so you can get the structure from recursing from the document */ ,[Object_ID] INT NULL /* each list or object has an object id. This ties all elements to a parent. Lists are treated as objects here */ ,[Name] NVARCHAR(2000) NULL /* the Name of the object */ ,StringValue NVARCHAR(MAX) NOT NULL /*the string representation of the value of the element. */ ,ValueType VARCHAR(10) NOT NULL /* the declared type of the value represented as a string in StringValue*/ ) AS BEGIN DECLARE @FirstObject INT --the index of the first open bracket found in the JSON string ,@OpenDelimiter INT --the index of the next open bracket found in the JSON string ,@NextOpenDelimiter INT --the index of subsequent open bracket found in the JSON string ,@NextCloseDelimiter INT --the index of subsequent close bracket found in the JSON string ,@Type NVARCHAR(10) --whether it denotes an object or an array ,@NextCloseDelimiterChar CHAR(1) --either a '}' or a ']' ,@Contents NVARCHAR(MAX) --the unparsed contents of the bracketed expression ,@Start INT --index of the start of the token that you are parsing ,@end INT --index of the end of the token that you are parsing ,@param INT --the parameter at the end of the next Object/Array token ,@EndOfName INT --the index of the start of the parameter at end of Object/Array token ,@token NVARCHAR(200) --either a string or object ,@value NVARCHAR(MAX) -- the value as a string ,@SequenceNo INT -- the sequence number within a list ,@Name NVARCHAR(200) --the Name as a string ,@Parent_ID INT --the next parent ID to allocate ,@lenJSON INT --the current length of the JSON String ,@characters NCHAR(36) --used to convert hex to decimal ,@result BIGINT --the value of the hex symbol being parsed ,@index SMALLINT --used for parsing the hex value ,@Escape INT --the index of the next escape character /* in this temporary table we keep all strings, even the Names of the elements, since they are 'escaped' in a different way, and may contain, unescaped, brackets denoting objects or lists. 
These are replaced in the JSON string by tokens representing the string */ DECLARE @Strings TABLE ( String_ID INT IDENTITY(1, 1) ,StringValue NVARCHAR(MAX) ) IF ISNULL(@JSON, '') = '' RETURN SELECT @characters = '0123456789abcdefghijklmnopqrstuvwxyz' --initialise the characters to convert hex to ascii ,@SequenceNo = 0 --set the sequence no. to something sensible. ,@Parent_ID = 0; /* firstly we process all strings. This is done because [{} and ] aren't escaped in strings, which complicates an iterative parse. */ WHILE 1 = 1 --forever until there is nothing more to do BEGIN SELECT @start = PATINDEX('%[^a-zA-Z]["]%', @json collate SQL_Latin1_General_CP850_Bin);--next delimited string IF @start = 0 BREAK --no more so drop through the WHILE loop IF SUBSTRING(@json, @start + 1, 1) = '"' BEGIN --Delimited Name SET @start = @Start + 1; SET @end = PATINDEX('%[^\]["]%', RIGHT(@json, LEN(@json + '|') - @start) collate SQL_Latin1_General_CP850_Bin); END IF @end = 0 --either the end or no end delimiter to last string BEGIN -- check if ending with a double slash... SET @end = PATINDEX('%[\][\]["]%', RIGHT(@json, LEN(@json + '|') - @start) collate SQL_Latin1_General_CP850_Bin); IF @end = 0 --we really have reached the end BEGIN BREAK --assume all tokens found END END SELECT @token = SUBSTRING(@json, @start + 1, @end - 1) --now put in the escaped control characters SELECT @token = REPLACE(@token, FromString, ToString) FROM ( SELECT '\b' ,CHAR(08) UNION ALL SELECT '\f' ,CHAR(12) UNION ALL SELECT '\n' ,CHAR(10) UNION ALL SELECT '\r' ,CHAR(13) UNION ALL SELECT '\t' ,CHAR(09) UNION ALL SELECT '\"' ,'"' UNION ALL SELECT '\/' ,'/' ) substitutions(FromString, ToString) SELECT @token = Replace(@token, '\\', '\') SELECT @result = 0 ,@escape = 1 --Begin to take out any hex escape codes WHILE @escape > 0 BEGIN SELECT @index = 0 --find the next hex escape sequence ,@escape = PATINDEX('%\x[0-9a-f][0-9a-f][0-9a-f][0-9a-f]%', @token collate SQL_Latin1_General_CP850_Bin) IF @escape > 0 --if there is one BEGIN WHILE @index < 4 --there are always four digits to a \x sequence BEGIN SELECT --determine its value @result = @result + POWER(16, @index) * (CHARINDEX(SUBSTRING(@token, @escape + 2 + 3 - @index, 1), @characters) - 1) ,@index = @index + 1; END -- and replace the hex sequence by its unicode value SELECT @token = STUFF(@token, @escape, 6, NCHAR(@result)) END END --now store the string away INSERT INTO @Strings (StringValue) SELECT @token -- and replace the string with a token SELECT @JSON = STUFF(@json, @start, @end + 1, '@string' + CONVERT(NCHAR(5), @@identity)) END -- all strings are now removed. Now we find the first leaf. WHILE 1 = 1 --forever until there is nothing more to do BEGIN SELECT @Parent_ID = @Parent_ID + 1 --find the first object or list by looking for the open bracket SELECT @FirstObject = PATINDEX('%[{[[]%', @json collate SQL_Latin1_General_CP850_Bin) --object or array IF @FirstObject = 0 BREAK IF (SUBSTRING(@json, @FirstObject, 1) = '{') SELECT @NextCloseDelimiterChar = '}' ,@type = 'object' ELSE SELECT @NextCloseDelimiterChar = ']' ,@type = 'array' SELECT @OpenDelimiter = @firstObject WHILE 1 = 1 --find the innermost object or list... 
BEGIN SELECT @lenJSON = LEN(@JSON + '|') - 1 --find the matching close-delimiter proceeding after the open-delimiter SELECT @NextCloseDelimiter = CHARINDEX(@NextCloseDelimiterChar, @json, @OpenDelimiter + 1) --is there an intervening open-delimiter of either type SELECT @NextOpenDelimiter = PATINDEX('%[{[[]%', RIGHT(@json, @lenJSON - @OpenDelimiter) collate SQL_Latin1_General_CP850_Bin) --object IF @NextOpenDelimiter = 0 BREAK SELECT @NextOpenDelimiter = @NextOpenDelimiter + @OpenDelimiter IF @NextCloseDelimiter < @NextOpenDelimiter BREAK IF SUBSTRING(@json, @NextOpenDelimiter, 1) = '{' SELECT @NextCloseDelimiterChar = '}' ,@type = 'object' ELSE SELECT @NextCloseDelimiterChar = ']' ,@type = 'array' SELECT @OpenDelimiter = @NextOpenDelimiter END ---and parse out the list or Name/value pairs SELECT @contents = SUBSTRING(@json, @OpenDelimiter + 1, @NextCloseDelimiter - @OpenDelimiter - 1) SELECT @JSON = STUFF(@json, @OpenDelimiter, @NextCloseDelimiter - @OpenDelimiter + 1, '@' + @type + CONVERT(NCHAR(5), @Parent_ID)) WHILE (PATINDEX('%[A-Za-z0-9@+.e]%', @contents collate SQL_Latin1_General_CP850_Bin)) <> 0 BEGIN IF @Type = 'object' --it will be a 0-n list containing a string followed by a string, number,boolean, or null BEGIN SELECT @SequenceNo = 0 ,@end = CHARINDEX(':', ' ' + @contents) --if there is anything, it will be a string-based Name. SELECT @start = PATINDEX('%[^A-Za-z@][@]%', ' ' + @contents collate SQL_Latin1_General_CP850_Bin) --AAAAAAAA SELECT @token = RTrim(Substring(' ' + @contents, @start + 1, @End - @Start - 1)) ,@endofName = PATINDEX('%[0-9]%', @token collate SQL_Latin1_General_CP850_Bin) ,@param = RIGHT(@token, LEN(@token) - @endofName + 1) SELECT @token = LEFT(@token, @endofName - 1) ,@Contents = RIGHT(' ' + @contents, LEN(' ' + @contents + '|') - @end - 1) SELECT @Name = StringValue FROM @strings WHERE string_id = @param --fetch the Name END ELSE SELECT @Name = NULL ,@SequenceNo = @SequenceNo + 1 SELECT @end = CHARINDEX(',', @contents) -- a string-token, object-token, list-token, number,boolean, or null IF @end = 0 --HR Engineering notation bugfix start IF ISNUMERIC(@contents) = 1 SELECT @end = LEN(@contents) + 1 ELSE --HR Engineering notation bugfix end SELECT @end = PATINDEX('%[A-Za-z0-9@+.e][^A-Za-z0-9@+.e]%', @contents + ' ' collate SQL_Latin1_General_CP850_Bin) + 1 SELECT @start = PATINDEX('%[^A-Za-z0-9@+.e][-A-Za-z0-9@+.e]%', ' ' + @contents collate SQL_Latin1_General_CP850_Bin) --select @start,@end, LEN(@contents+'|'), @contents SELECT @Value = RTRIM(SUBSTRING(@contents, @start, @End - @Start)) ,@Contents = RIGHT(@contents + ' ', LEN(@contents + '|') - @end) IF SUBSTRING(@value, 1, 7) = '@object' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,SUBSTRING(@value, 8, 5) ,SUBSTRING(@value, 8, 5) ,'object' ELSE IF SUBSTRING(@value, 1, 6) = '@array' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,SUBSTRING(@value, 7, 5) ,SUBSTRING(@value, 7, 5) ,'array' ELSE IF SUBSTRING(@value, 1, 7) = '@string' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,StringValue ,'string' FROM @strings WHERE string_id = SUBSTRING(@value, 8, 5) ELSE IF @value IN ('true', 'false') INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'boolean' ELSE IF @value = 'null' INSERT INTO 
@hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'null' ELSE IF PATINDEX('%[^0-9-]%', @value collate SQL_Latin1_General_CP850_Bin) > 0 INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'real' ELSE INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'int' IF @Contents = ' ' SELECT @SequenceNo = 0 END END INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT '-' ,1 ,NULL ,'' ,@Parent_ID - 1 ,@type RETURN END Code Snippet 1:- ParseJson Function   Closure The so-called ‘impedance-mismatch’ between applications and databases is an illusion. if the developer has understood the data correctly then there is less complexity  while processing it. But has been trickier with other formats such as JSON. By using techniques like this, it should be possible to liberate the application or website from having to do the mapping from the object model to the relational, and spraying the database with ad-hoc T-SQL  that uses the fact/dimension tables or updateable views.  If the database can be provided with the JSON, or the Table-Valued parameter, then there is a better chance of  maintaining full transactional integrity for the more complex updates. The database developer already has the tools to do the work with XML, but why not the simpler, and more practical JSON? I hope these routines get you started with experimenting with all this for your requirements.  
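For reference, a minimal call of the function above looks like this. The sample document is the illustrative one from the JSON format section, and the result set is simply the @hierarchy table defined in the function, with one row per parsed element:

DECLARE @json NVARCHAR(MAX);
SET @json = N'{"Name":"MagnusMinds", "Tags":["sql","json","parser"], "Count":3}';

SELECT Element_ID, SequenceNo, Parent_ID, [Object_ID], [Name], StringValue, ValueType
FROM Platform.parseJSON(@json);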

SQL Server Login and Permissions Setup
Aug 09, 2021

1. To create a login in SQL Server, navigate to Security > Logins.
2. On the next screen, enter:
   a. Login name
   b. Select SQL Server authentication
   c. Enter a password for the login
You can also create the login with a T-SQL command:
   CREATE LOGIN MyLogin WITH PASSWORD = 'MsSQL'
3. Give full access to the Demo login.
The login is now created; if we refresh Logins, we can view it.

How To Create a User?
You can use either of the following two ways:
   · Using T-SQL
   · Using SQL Server Management Studio

Providing limited access only to a certain database
You will be creating a user for the Events27_production database.
1. Connect to SQL Server to create a new user:
   a. Connect to SQL Server, then expand the Databases folder from the Object Explorer.
   b. Identify the database for which you need to create the user and expand it.
   c. Expand its Security folder.
   d. Right-click the Users folder, then choose "New User…".
2. Enter the user details on the following screen:
   a. Enter the desired username
   b. Enter the login name (created earlier)
The user is created for that specific database.

Create the user using T-SQL:
   create user <user-name> for login <login-name>
   create user DemoUser for login Demo

Assigning limited permissions to a user in SQL Server
Permissions are the rules that govern the level of access users have to secured SQL Server resources. SQL Server allows you to grant, revoke, and deny such permissions. There are two ways to give a SQL Server user permissions:
1. Connect to your SQL Server instance and expand the folders from the Object Explorer as shown below. Right-click on the name of the user.
2. On the next screen:
   a. Click the Securables option on the left.
   b. Click on Search.
3. In the next window:
   a. Select "All objects belonging to the schema."
   b. Select schema name "dbo".
4. Grant or revoke permission on a specific table or database object:
   a. Identify the table you want to grant permission on.
   b. Under Explicit permissions, select Grant.
The user DemoUser is granted SELECT permission on final_backup_tidx_sctionSponsors.

Grant permissions using T-SQL:
   use <database-name>
   grant <permission-name> on <object-name> to <username\principal>

   Use Events27_production
   Go
   Grant Select on final_backup_tidx_sctionSponsors to DemoUser
5. Providing a role to a specific user:
   a. In the Object Explorer, expand the database and its Security folder.
   b. Expand Roles and right-click on Database Roles.
   c. Click on New Database Role. A new pop-up window opens.
   d. On the General tab, enter the role name and click OK.
6. Refresh the roles. The screenshot below shows the role.

Remove a login from SQL Server:
1. To drop a login, navigate to Security > Logins.
2. Select the desired login and click on Delete.
Drop the login using T-SQL:
   DROP LOGIN Demo;
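The role steps in 5 and 6 above can also be scripted. The following is a minimal T-SQL sketch; the role name DemoRole is illustrative, and ALTER ROLE ... ADD MEMBER requires SQL Server 2012 or later (older versions can use sp_addrolemember instead):

   USE Events27_production;
   GO
   CREATE ROLE DemoRole;
   ALTER ROLE DemoRole ADD MEMBER DemoUser;
   GO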

Quick Setup: Kafka with ELK Integration
Aug 17, 2020

Apache Kafka is one of the most common buffer solutions deployed together with the ELK Stack. Kafka sits between the log shippers and the indexing units, acting as a segregation layer for the data being collected. In this blog, we'll see how to deploy all the components required to set up a resilient log pipeline with Apache Kafka and the ELK Stack:
Filebeat – collects logs and forwards them to a Kafka topic.
Kafka – brokers the data flow and queues it.
Logstash – aggregates the data from the Kafka topic, processes it, and ships it to Elasticsearch.
Elasticsearch – indexes the data.
Kibana – for analyzing the data.

My environment: To perform the steps below, I set up a single Ubuntu 18.04 VM on AWS EC2 using local storage. In real-life scenarios you will probably have all of these components running on separate machines. I started the instance in the public subnet of a VPC and then set up a security group to enable access from anywhere using SSH and TCP 5601 (for Kibana). The example logs used for the tutorial are Apache access logs; you could just as well use VPC Flow Logs, ALB access logs, and so on.

Log in to your Ubuntu system with sudo privileges. For a remote Ubuntu server, use ssh to access it; Windows users can use PuTTY or PowerShell to log in. Elasticsearch requires Java to run, so make sure your system has Java installed:
sudo apt install openjdk-11-jdk-headless
Check that the installation was successful with:
java -version
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)
Finally, I added a new Elastic IP address and associated it with the running instance.

Step 1: Installing Elasticsearch
We will start by installing the main component in the stack: Elasticsearch. Since version 7.x, Elasticsearch is bundled with Java, so we can jump right ahead with adding Elastic's signing key.
Download and install the public signing key:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
You may need to install the apt-transport-https package on Debian-based systems before proceeding:
sudo apt-get install apt-transport-https
Our next step is to add the repository definition to our system:
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
You can then install the Elasticsearch Debian package with:
sudo apt-get update && sudo apt-get install elasticsearch
Before we bootstrap Elasticsearch, we need to apply some basic configuration using the Elasticsearch configuration file at /etc/elasticsearch/elasticsearch.yml:
sudo su
nano /etc/elasticsearch/elasticsearch.yml
Since we are installing Elasticsearch on AWS, we will bind Elasticsearch to localhost.
Also, we need to define the private IP of our EC2 instance as a master-eligible node:
network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<InstancePrivateIP>"]
Save the file and run Elasticsearch with:
sudo service elasticsearch start
To confirm that everything is working as expected, point curl to http://localhost:9200, and you should see something like the following output (give Elasticsearch a minute or two before you start to worry about not seeing any response):
{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "W_Ky1DL3QL2vgu3sdafyag",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Installing Logstash
Next up, the "L" in ELK: Logstash. Since we already defined the Elastic repository on the system, and Java is already installed, installing Logstash only takes one command:
sudo apt-get install logstash -y
Next, we will configure a Logstash pipeline that pulls our logs from a Kafka topic, processes these logs, and ships them on to Elasticsearch for indexing. Let's create a new config file:
sudo nano /etc/logstash/conf.d/apache.conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "apache"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
As you can see, we're using the Logstash Kafka input plugin to define the Kafka host and the topic we want Logstash to pull from. We're applying some filtering to the logs and shipping the data to our local Elasticsearch instance.

Step 3: Installing Kibana
Let's move on to the next component in the ELK Stack: Kibana. As before, we will use a simple apt command to install Kibana:
sudo apt-get install kibana
We will then open up the Kibana configuration file at /etc/kibana/kibana.yml and make sure we have the correct configuration defined:
server.port: 5601
server.host: "<INSTANCE_PRIVATE_IP>"
elasticsearch.hosts: ["http://<INSTANCE_PRIVATE_IP>:9200"]
Then enable and start the Kibana service:
sudo systemctl enable kibana
sudo systemctl start kibana
We will also need Filebeat to ship the logs; install it with:
sudo apt install filebeat
Open up Kibana in your browser with http://<PUBLIC_IP>:5601. You will be presented with the Kibana home page.

Mastering SSIS Data Flow Task Package
Jul 27, 2020

In this article, we will review how to create a data flow task package of SSIS in Console Application with an example. Requirements Microsoft Visual Studio 2017 SQL Server 2014 SSDT Article  Done with the above requirements? Let's start by launching Microsoft Visual Studio 2017. Create a new Console Project with .Net Core.  After creating a new project, provide a proper name for it. In Project Explorer import relevant references and ensure that you have declared namespaces as below: using Microsoft.SqlServer.Dts.Pipeline.Wrapper; using Microsoft.SqlServer.Dts.Runtime; using RuntimeWrapper = Microsoft.SqlServer.Dts.Runtime.Wrapper;   To import above namespaces we need to import below refrences.   We need to keep in mind that, above all references should have same version.   After importing namespaces, ask user for the source connection string, destination connection string and table that will be copied to destination. string sourceConnectionString, destinationConnectionString, tableName; Console.Write("Enter Source Database Connection String: "); sourceConnectionString = Console.ReadLine(); Console.Write("Enter Destination Database Connection String: "); destinationConnectionString = Console.ReadLine(); Console.Write("Enter Table Name: "); tableName = Console.ReadLine();   After Declaration, create instance of Application and Package. Application app = new Application(); Package Mipk = new Package(); Mipk.Name = "DatabaseToDatabase";   Create OLEDB Source Connection Manager to the package. ConnectionManager connSource; connSource = Mipk.Connections.Add("ADO.NET:SQL"); connSource.ConnectionString = sourceConnectionString; connSource.Name = "ADO NET DB Source Connection";   Create OLEDB Destination Connection Manager to the package. ConnectionManager connDestination; connDestination= Mipk.Connections.Add("ADO.NET:SQL"); connDestination.ConnectionString = destinationConnectionString; connDestination.Name = "ADO NET DB Destination Connection";   Insert a data flow task to the package. Executable e = Mipk.Executables.Add("STOCK:PipelineTask"); TaskHost thMainPipe = (TaskHost)e; thMainPipe.Name = "DFT Database To Database"; MainPipe df = thMainPipe.InnerObject as MainPipe;   Assign OLEDB Source Component to the Data Flow Task. IDTSComponentMetaData100 conexionAOrigen = df.ComponentMetaDataCollection.New(); conexionAOrigen.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; conexionAOrigen.Name = "ADO NET Source";   Get Design time instance of the component and initialize it. CManagedComponentWrapper instance = conexionAOrigen.Instantiate(); instance.ProvideComponentProperties();   Specify the Connection Manager. conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connSource); conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManagerID = connSource.ID;   Set the custom properties. instance.SetComponentProperty("AccessMode", 0); instance.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");   Reinitialize the source metadata. instance.AcquireConnections(null); instance.ReinitializeMetaData(); instance.ReleaseConnections();   Now, Add Destination Component to the Data Flow Task. 
IDTSComponentMetaData100 conexionADestination = df.ComponentMetaDataCollection.New(); conexionADestination.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; conexionADestination.Name = "ADO NET Destination";   Get Design time instance of the component and initialize it. CManagedComponentWrapper instanceDest = conexionADestination.Instantiate(); instanceDest.ProvideComponentProperties();   Specify the Connection Manager. conexionADestination.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connDestination); conexionADestination.RuntimeConnectionCollection[0].ConnectionManagerID = connDestination.ID;   Set the custom properties. instanceDest.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");   Connect the source to destination component: IDTSPath100 union = df.PathCollection.New(); union.AttachPathAndPropagateNotifications(conexionAOrigen.OutputCollection[0], conexionADestination.InputCollection[0]);   Reinitialize the destination metadata. instanceDest.AcquireConnections(null); instanceDest.ReinitializeMetaData(); instanceDest.ReleaseConnections();   Map Source input Columns and Destination Columns foreach (IDTSOutputColumn100 col in conexionAOrigen.OutputCollection[0].OutputColumnCollection) {     for (int i = 0; i < conexionADestination.InputCollection[0].ExternalMetadataColumnCollection.Count; i++)     {         string c = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].Name;         if (c.ToUpper() == col.Name.ToUpper())         {             IDTSInputColumn100 column = conexionADestination.InputCollection[0].InputColumnCollection.New();             column.LineageID = col.ID;             column.ExternalMetadataColumnID = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].ID;         }     } }   Save Package into the file system. app.SaveToXml(@"D:\Workspace\SSIS\Test_DB_To_DB.dtsx", Mipk, null);   Execute package. Mipk.Execute(); Conclusion In this article, we have explained one of the alternatives for creating SSIS packages using .NET console application. In case you have any questions, please feel free to ask in the comment section below.   RELATED BLOGS: Basics of SSIS(SQL Server Integration Service)

SQL Server DELETE & UPDATE CASCADE
Jul 20, 2020

In this article, we will review the DELETE CASCADE and UPDATE CASCADE rules in SQL Server foreign keys with different examples.
DELETE CASCADE: When we create a foreign key using this option, deleting a referenced row in the parent table (the one with the primary key) also deletes the referencing rows in the child table.
UPDATE CASCADE: When we create a foreign key using UPDATE CASCADE, updating a referenced row in the parent table also updates the referencing rows in the child table.
We will be discussing the following topics in this article:
Creating DELETE CASCADE and UPDATE CASCADE rules in a foreign key using a T-SQL script
Triggers on a table with DELETE or UPDATE cascading foreign keys
Let us see how to create a foreign key with DELETE and UPDATE CASCADE rules, along with a few examples. (A consolidated sketch of the scripts referenced below appears at the end of this section.)

Creating a foreign key with DELETE and UPDATE CASCADE rules
Please refer to the below T-SQL script, which creates a parent table, a child table, and a foreign key on the child table with the DELETE CASCADE rule.
Insert some sample data using the below T-SQL script.
Now, check the records.
Now I delete a row in the parent table with CountryID = 1, which also deletes the rows in the child table that have CountryID = 1.
Please refer to the below T-SQL script to create a foreign key with the UPDATE CASCADE rule.
Now update CountryID in Countries for a row, which also updates the referencing rows in the child table States.
Following is the T-SQL script which creates a foreign key with both UPDATE and DELETE cascade rules. To see the update and delete actions defined on a foreign key, query the sys.foreign_keys view, replacing the constraint name in the script.
The below image shows that a DELETE CASCADE action and an UPDATE CASCADE action are defined on the foreign key.
Let's move forward and check the behavior of the delete and update rules when the foreign keys are on a child table which itself acts as a parent table to another child table. The below example demonstrates this scenario: "Countries" is the parent table of the "States" table, and the "States" table is the parent table of the "Cities" table.
We will now create a foreign key with the DELETE CASCADE rule on the States table, referencing CountryID in the parent table Countries.
Now, on the Cities table, create a foreign key without a DELETE CASCADE rule. If we try to delete a record with CountryID = 3, it will throw an error: the delete on the parent table "Countries" tries to delete the referencing rows in the child table States, but the Cities table has a foreign key constraint with no action for delete and the referenced value still exists in that table. The delete fails at the second foreign key.
When we create the second foreign key with the DELETE CASCADE rule as well, the above delete command runs successfully, deleting records in the child table "States", which in turn deletes records in the second child table "Cities".

Triggers on a table with DELETE CASCADE or UPDATE CASCADE foreign keys
An INSTEAD OF UPDATE trigger cannot be created on a table if a foreign key with UPDATE CASCADE already exists on it. It throws the error "Cannot create INSTEAD OF DELETE or INSTEAD OF UPDATE TRIGGER 'trigger name' on table 'table name'. This is because the table has a FOREIGN KEY with cascading DELETE or UPDATE." Similarly, we cannot create an INSTEAD OF DELETE trigger on a table when a foreign key with the DELETE CASCADE rule already exists on it.
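The scripts referenced above were shown as screenshots in the original post. The following is a minimal sketch of equivalent tables and foreign keys; only the table names Countries, States, Cities and the column CountryID come from the article, the remaining column names are assumed for illustration, and the Cities foreign key is shown in its final form with ON DELETE CASCADE:

CREATE TABLE Countries (
    CountryID INT PRIMARY KEY,
    CountryName VARCHAR(50)
);
CREATE TABLE States (
    StateID INT PRIMARY KEY,
    StateName VARCHAR(50),
    CountryID INT,
    CONSTRAINT FK_States_Countries FOREIGN KEY (CountryID)
        REFERENCES Countries (CountryID)
        ON DELETE CASCADE
        ON UPDATE CASCADE
);
CREATE TABLE Cities (
    CityID INT PRIMARY KEY,
    CityName VARCHAR(50),
    StateID INT,
    CONSTRAINT FK_Cities_States FOREIGN KEY (StateID)
        REFERENCES States (StateID)
        ON DELETE CASCADE
);

-- Inspect the cascade actions defined on a foreign key
SELECT name,
       delete_referential_action_desc,
       update_referential_action_desc
FROM sys.foreign_keys
WHERE name = 'FK_States_Countries';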
Conclusion In this article, we explored a few examples on DELETE CASCADE and UPDATE CASCADE rules in SQL Server foreign key. In case you have any questions, please feel free to ask in the comment section below.

Backup Scheduling in SQL Server Express
Sep 20, 2019

Have you ever attempted to set up an automated backup of your SQL Server Express Edition and found that there is no SQL Server Agent where you can schedule a job to take a backup of your database? Fortunately, the world does not end there, and you don't need to pay extra just to have backups run by SQL Server Agent, which is available only in the Standard and Enterprise editions. There are many options to automate a backup job so that it runs at a specific time and does not require manual intervention. Here, we will learn how to do it with a SQL command in a batch file and the built-in Windows Task Scheduler. Hope you find this useful.
Create a BAT (batch) file that executes the command to take a backup of the database and save it:

echo off
:: --------------------------------------------------
:: clear console
cls
:: --------------------------------------------------
:: Define variables
set SERVERNAME=YOUR_SERVER_NAME
set DATABASENAME=DATABASE_NAME
set MyTime=%TIME: =0%
set MyDate=%DATE:~-4%.%DATE:~7,2%.%DATE:~4,2%.%MyTime:~0,2%.%MyTime:~3,2%.%MyTime:~6,2%
set FileName=%DATABASENAME%_%MyDate%.bak
set BAK_PATH=DIRECTORY_PATH
set DEST_FILE=%BAK_PATH%%FileName%
:: --------------------------------------------------
:: BACKUP Database
sqlcmd -E -S %SERVERNAME% -d master -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK = N'%DEST_FILE%' WITH INIT , NOUNLOAD , NAME = N'%DATABASENAME% backup', NOSKIP , STATS = 10, NOFORMAT"
:: --------------------------------------------------
:: Optional Part
:: --------------------------------------------------
:: Zip file
7z a -tzip "%DEST_FILE%.zip" "%DEST_FILE%"
:: --------------------------------------------------
:: Delete unzipped file
DEL "%DEST_FILE%"

"SERVERNAME" is the name of the SQL Server machine (or instance).
"DATABASENAME" is the database that will be backed up.
"FileName" is the database name with the current date appended and a .bak extension.
"BAK_PATH" is the directory in which the database backup file will be saved.
"DEST_FILE" combines the backup path and file name.
After the variables are defined, the database backup is generated, saved as a zip file at the "DEST_FILE" path, and, at the end, the unzipped .bak file is deleted.
Now it's time to schedule the batch file:
1. Start Menu -> Task Scheduler -> Run as administrator
2. Click on Create Task... from the right-hand bar and configure it with Triggers and Actions.
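As an optional sanity check, you can verify that a generated .bak file is readable before relying on it. This is a minimal sketch; the file path below is illustrative and should match your BAK_PATH and generated file name:

RESTORE VERIFYONLY
FROM DISK = N'D:\Backup\DATABASE_NAME_2019.09.20.10.30.00.bak';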
