Tag - SQL

Effortless Data Migration Using MySQL Federated Engine
Mar 13, 2024

Scenario: If someone asks you, "Hey, can you transfer data from one MySQL server to another?" and your first thought is SSIS or some other heavyweight tool, then this article is for you; it will reduce your effort and save you time.

Introduction: In the dynamic landscape of database management, the need to seamlessly access and integrate data from multiple sources has become paramount. Whether it's consolidating information from disparate servers or synchronizing databases for backup and redundancy, MySQL offers a robust solution through its querying capabilities. In this guide, we delve into fetching data from one MySQL server to another using SQL queries. This method, often overlooked in favor of complex data-transfer mechanisms, provides a streamlined approach to data migration, enabling developers and database administrators to manage their resources efficiently. Through a combination of MySQL's versatile querying language and the FEDERATED storage engine, we'll explore how to establish connections between servers, replicate table structures, and transfer data across the network. From setting up the environment to executing queries and troubleshooting common challenges, this tutorial equips you with the knowledge and tools to handle cross-server data retrieval with ease.

Since we are going to use MySQL's FEDERATED engine, the first step is to check whether our server supports it. Open MySQL Workbench and run:

show engines;

This lists all storage engines; check whether FEDERATED is shown as supported. If it is not, don't worry, we can enable it. Open the folder where the MySQL server configuration file lives. In my case it is on the C drive: C > ProgramData > MySQL > MySQL Server 8.0 > my.ini. Open my.ini in Notepad++ (or any editor you prefer) and add the line federated under the [mysqld] section. Now restart MySQL: press Windows+R, type services.msc, press OK, find the MySQL service and restart it. Go back to Workbench and run show engines; again; FEDERATED should now be listed as supported.

The same process must be applied on the destination side as well, because both servers (source and destination) need to support the FEDERATED engine.

Next, make sure we have permission to access the source server. For that, create a user on the source server and grant it access to the database and tables:

CREATE USER 'hmysql'@'192.168.1.173' IDENTIFIED BY 'Hardik...';
GRANT ALL PRIVILEGES ON *.* TO 'hmysql'@'192.168.1.173' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Now create a connection for that user (created above on the source server) on the destination server (our system). In Workbench, click the plus (+) icon next to MySQL Connections and fill in the connection details. Once saved, the new connection (named hardikmysql here) appears in the connection list.

Open that connection and find the table you want to pull data from. Here I am taking the 'actor' table from the 'sakila' database.

Now run the FEDERATED table definition on our system (the destination server), including the connection URL string:

CREATE TABLE `actor` (
  `actor_id` smallint unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

The important part is:

ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

Here 'mysql' is mandatory in the connection string; you cannot use any other word. 'hmysql' is the user name, 'Hardik...' is the password for that user, '192.168.1.173' is the server address, '3306' is the port number, 'sakila' is the database name, and 'actor' is the table name.

Run the CREATE TABLE statement above and the source data becomes available on our system (the destination server).
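From here, finishing the migration is just another query. A minimal hedged sketch, assuming you want the rows materialised on the destination server; actor_local is a hypothetical table name and the default engine is used for it:

-- Copy the remote rows (read through the FEDERATED table) into a local table.
-- Note: CREATE TABLE ... SELECT copies columns and data, not keys or AUTO_INCREMENT,
-- so add indexes afterwards if you need them.
CREATE TABLE actor_local ENGINE=InnoDB AS SELECT * FROM actor;

-- Or, if a local table with the same structure already exists:
-- INSERT INTO actor_local SELECT * FROM actor;

-- Verify the row count matches the source.
SELECT COUNT(*) FROM actor_local;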

List all constraints of a particular table - SQL Server
Jan 28, 2024

If you want to have a list of constraints applied on a particular table in the SQL server, this will help you to get it in one go.   DECLARE @TABLENAME VARCHAR(50) = '<table_name>' SELECT ObjectName     ,TypeOfObject     ,TypeOfConstraint     ,ConstraintName     ,ConstraintDescription FROM (     SELECT schema_name(t.schema_id) + '.' + t.[name] AS ObjectName         ,CASE              WHEN t.[type] = 'U'                 THEN 'Table'             WHEN t.[type] = 'V'                 THEN 'View'             END AS [TypeOfObject]         ,CASE              WHEN c.[type] = 'PK'                 THEN 'Primary key'             WHEN c.[type] = 'UQ'                 THEN 'Unique constraint'             WHEN i.[type] = 1                 THEN 'Unique clustered index'             WHEN i.type = 2                 THEN 'Unique index'             END AS TypeOfConstraint         ,ISNULL(c.[name], i.[name]) AS ConstraintName         ,SUBSTRING(column_names, 1, LEN(column_names) - 1) AS [ConstraintDescription]     FROM sys.objects t     LEFT OUTER JOIN sys.indexes i ON t.object_id = i.object_id     LEFT OUTER JOIN sys.key_constraints c ON i.object_id = c.parent_object_id         AND i.index_id = c.unique_index_id     CROSS APPLY (         SELECT col.[name] + ', '         FROM sys.index_columns ic         INNER JOIN sys.columns col ON ic.object_id = col.object_id             AND ic.column_id = col.column_id         WHERE ic.object_id = t.object_id             AND ic.index_id = i.index_id         ORDER BY col.column_id         FOR XML path('')         ) D(column_names)     WHERE is_unique = 1         AND t.name = @TABLENAME         AND t.is_ms_shipped <> 1          UNION ALL          SELECT schema_name(fk_tab.schema_id) + '.' + fk_tab.name AS foreign_table         ,'Table'         ,'Foreign key'         ,fk.name AS fk_ConstraintName         ,cols.[name] + ' REFERENCES ' + schema_name(pk_tab.schema_id) + '.' + pk_tab.name + ' (' + c2.[name] + ')'     FROM sys.foreign_keys fk     INNER JOIN sys.tables fk_tab ON fk_tab.object_id = fk.parent_object_id     INNER JOIN sys.tables pk_tab ON pk_tab.object_id = fk.referenced_object_id     INNER JOIN sys.foreign_key_columns fk_cols ON fk_cols.constraint_object_id = fk.object_id     INNER JOIN sys.columns cols ON cols.object_id = fk_cols.parent_object_id AND cols.column_id = fk_cols.parent_column_id     INNER JOIN sys.columns c2 ON c2.object_id = fk_cols.referenced_object_id AND c2.column_id = fk_cols.referenced_column_id     WHERE fk_tab.name = @TABLENAME         OR pk_tab.name = @TABLENAME          UNION ALL          SELECT schema_name(t.schema_id) + '.' + t.[name]         ,'Table'         ,'Check constraint'         ,con.[name] AS ConstraintName         ,con.[definition]     FROM sys.check_constraints con     LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id     LEFT OUTER JOIN sys.all_columns col ON con.parent_column_id = col.column_id         AND con.parent_object_id = col.object_id     WHERE t.name = @TABLENAME          UNION ALL          SELECT schema_name(t.schema_id) + '.' + t.[name]         ,'Table'         ,'Default constraint'         ,con.[name]         ,col.[name] + ' = ' + con.[definition]     FROM sys.default_constraints con     LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id     LEFT OUTER JOIN sys.all_columns col ON con.parent_column_id = col.column_id         AND con.parent_object_id = col.object_id     WHERE t.name = @TABLENAME     ) a ORDER BY ObjectName     ,TypeOfConstraint     ,ConstraintName   Output: Enjoy.!
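If you want a throwaway table to test the script against, here is a hedged sketch that creates one hypothetical table carrying each constraint type the query reports on (all names are illustrative only):

-- Hypothetical test objects.
CREATE TABLE dbo.Customers (CustomerID INT PRIMARY KEY);

CREATE TABLE dbo.Invoices (
    InvoiceID  INT PRIMARY KEY,
    CustomerID INT NOT NULL
        CONSTRAINT FK_Invoices_Customers REFERENCES dbo.Customers (CustomerID),
    InvoiceNo  VARCHAR(20) NOT NULL
        CONSTRAINT UQ_Invoices_InvoiceNo UNIQUE,
    Amount     DECIMAL(10, 2) NOT NULL
        CONSTRAINT CK_Invoices_Amount CHECK (Amount >= 0),
    CreatedOn  DATETIME NOT NULL
        CONSTRAINT DF_Invoices_CreatedOn DEFAULT (GETDATE())
);

Then set @TABLENAME = 'Invoices' at the top of the script above and execute it; the primary key, foreign key, unique, check and default constraints should each appear as one row.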

How to Check performance of the SPROC
Jan 27, 2024

Stored procedures are an essential part of database management systems. They are used to execute frequently used queries and reduce the load on the database server. However, if not optimized correctly, they can cause performance issues. In this blog, we will discuss how to check the performance of a stored procedure.

Steps to Check Performance of a SPROC

Identify the SPROC: The first step is to identify the stored procedure that needs to be optimized. You can use SQL Server Management Studio (SSMS) to locate it.

Check Execution Time: Once you have identified the stored procedure, check its execution time. You can use the SET STATISTICS TIME ON command to measure it.

Check Query Plan: The next step is to check the query plan of the stored procedure. You can use the SET SHOWPLAN_TEXT ON command to view it.

Check Indexes: Indexes play a crucial role in the performance of a stored procedure. You can use the sp_helpindex command to check the indexes on the tables the procedure reads.

Check for Blocking: Blocking can cause performance issues in a stored procedure. You can use the sp_who2 command to check for blocking.

Check for Deadlocks: Deadlocks can also cause performance issues in a stored procedure. You can use the DBCC TRACEON(1204) command, which writes deadlock details to the SQL Server error log.

Examples: Here are some examples to help you understand how to check the performance of a stored procedure. To try these queries yourself, the table, data, and stored-procedure scripts are shared below so you can run the examples directly (a bonus check using a DMV is shown at the end of this post):

-- Step 1: Create a dummy table
CREATE TABLE dbo.Orders (
    OrderID INT PRIMARY KEY,
    CustomerID NVARCHAR(10),
    OrderDate DATETIME,
    ProductID INT,
    Quantity INT
);

-- Step 2: Insert dummy data into the table
INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate, ProductID, Quantity)
VALUES
    (1, N'ALFKI', '2024-01-23', 101, 5),
    (2, N'ALFKI', '2024-01-24', 102, 3),
    (3, N'BONAP', '2024-01-25', 103, 7),
    (4, N'BONAP', '2024-01-26', 104, 2),
    (5, N'COSME', '2024-01-27', 105, 4);

-- Step 3: Create a stored procedure
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID NVARCHAR(10)
AS
BEGIN
    SELECT *
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;

Example 1: Check Execution Time

SET STATISTICS TIME ON
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = N'ALFKI'
SET STATISTICS TIME OFF

Example 2: Check Query Plan (SET SHOWPLAN_TEXT must be the only statement in its batch, so separate these with GO)

SET SHOWPLAN_TEXT ON
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = N'ALFKI'
SET SHOWPLAN_TEXT OFF

Example 3: Check Indexes (sp_helpindex reports on a table, so point it at the table the procedure queries, not at the procedure itself)

EXEC sp_helpindex 'dbo.Orders'

Example 4: Check for Blocking

EXEC sp_who2

Example 5: Check for Deadlocks

DBCC TRACEON(1204)

Conclusion

In conclusion, checking the performance of a stored procedure is essential to ensure that it runs efficiently. By following the steps mentioned above, you can identify performance issues and optimize the stored procedure. I hope this blog helps you in optimizing your stored procedures. If you have any questions or suggestions, please feel free to leave a comment below.
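Bonus: as mentioned above, SQL Server also keeps cumulative runtime statistics for cached procedures in the sys.dm_exec_procedure_stats DMV. A hedged sketch (it only covers procedures currently in the plan cache and requires VIEW SERVER STATE permission; the procedure name is the demo one created above):

-- Average runtime cost of the cached procedure; times are reported in microseconds.
SELECT OBJECT_NAME(ps.object_id) AS ProcedureName,
       ps.execution_count,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_time,
       ps.total_logical_reads / ps.execution_count AS avg_logical_reads,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.usp_GetOrdersByCustomer');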

List all indexes of a particular table - SQL Server
Jan 26, 2024

In recent work with a client, I was asked how to find the indexes applied on a particular table in SQL Server. If you want to list all the indexes on a particular table, just put your table name in the variable, execute the query below, and see the result.

DECLARE @TABLENAME VARCHAR(50) = '<table_name>'

SELECT '[' + s.name + '].[' + sObj.name + ']' AS 'TableName'
    ,ind.name AS 'IndexName'
    ,ind.type_desc AS 'IndexType'
    ,STUFF((
            SELECT ', [' + sc.name + ']' AS "text()"
            FROM syscolumns AS sc
            INNER JOIN sys.index_columns AS ic ON ic.object_id = sc.id
                AND ic.column_id = sc.colid
            WHERE sc.id = Obj.object_id
                AND ic.index_id = sind.indid
                AND ic.is_included_column = 0
            ORDER BY key_ordinal
            FOR XML PATH('')
            ), 1, 2, '') AS 'IndexedColumns'
FROM sysindexes AS sind
INNER JOIN sys.indexes AS ind ON ind.object_id = sind.id
    AND ind.index_id = sind.indid
INNER JOIN sysobjects AS sObj ON sObj.id = sind.id
INNER JOIN sys.objects AS Obj ON Obj.object_id = sObj.id
    AND is_ms_shipped = 0
INNER JOIN sys.schemas AS s ON s.schema_id = Obj.schema_id
WHERE ind.object_id = OBJECT_ID(@TABLENAME)
    AND ind.is_primary_key = 0
    AND ind.is_unique = 0
    AND ind.is_unique_constraint = 0
ORDER BY TableName
    ,IndexName;

Output:
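To try the script on a throwaway table, here is a hedged sketch. It assumes a hypothetical dbo.Orders table (the demo table from the stored-procedure post above has the right shape); because the script filters out primary-key and unique indexes, only the two non-unique nonclustered indexes should appear in its output:

-- Hypothetical test indexes; names are illustrative only.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_ProductID ON dbo.Orders (OrderDate, ProductID);

-- Then run the script above with:
-- DECLARE @TABLENAME VARCHAR(50) = 'dbo.Orders'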

Step-by-Step Guide to Configure Replication on SQL Servers
Jan 12, 2024

Setting up replication in SQL Server can be a powerful way to ensure data consistency and availability across multiple servers. In this step-by-step guide, we'll walk through the process of configuring replication on SQL Servers.   Step 1: Understand Replication Types Before diving into configuration, it's crucial to understand the types of replication available in SQL Server.  Snapshot Replication: Takes a snapshot of the data at a specific point in time. Transactional Replication: Replicates changes in real-time as they occur. Merge Replication: Allows bidirectional data synchronization between servers. Choose the replication type that aligns with your specific needs and database architecture.   Step 2: Prepare Your Environment Ensure that your SQL Server environment is ready for replication. This involves verifying that you have the necessary permissions and establishing proper connectivity between the SQL Server instances. Remember that replication involves three key components: Publisher, Distributor, and Subscribers. The Distributor can be on the same server as the Publisher or a separate server.   Step 3: Configure Distributor If a Distributor isn't already set up, proceed to configure one. This involves specifying the server that will act as the Distributor and setting up distribution databases. Use either SQL Server Management Studio (SSMS) or T-SQL scripts for this configuration.   Step 4: Enable Replication on the Publisher 1. Open SSMS and connect to the Publisher. 2. Right-click on the target database and choose "Tasks" > "Replication" > "Configure Distribution." 3. Follow the wizard, specifying the Distributor configured in Step 3.   Step 5: Choose Articles Define the articles by selecting the tables, views, or stored procedures you want to replicate. This step allows you to fine-tune your replication by specifying data filters, choosing columns to replicate, and configuring additional options based on your specific requirements.   Step 6: Configure Subscribers 1. Connect to the Subscribers in SSMS. 2. Right-click on the Replication folder and choose "Configure Distribution." 3. Follow the wizard, specifying the Distributor and configuring additional settings based on your chosen replication type.   Step 7: Configure Subscription With the Distributor and Subscribers configured, it's time to set up subscriptions. 1. In SSMS, navigate to the Replication folder on the Publisher. 2. Right-click on the Local Publications and choose "New Subscriptions." 3. Follow the wizard to configure the subscription, specifying the Subscribers and defining any additional settings.   Step 8: Monitor and Maintain Regular monitoring and maintenance are essential for a healthy replication environment. - Use the Replication Monitor in SSMS to view the status of publications, subscriptions, and any potential errors. - Implement routine maintenance tasks such as backing up and restoring the replication databases.   Conclusion Configuring replication in SQL Server involves a series of well-defined steps. By understanding your replication needs, preparing your environment, and carefully configuring each component, you can establish a robust and reliable replication setup. Regular monitoring and maintenance ensure the ongoing efficiency and performance of your replication environment.
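For reference, Step 3 (configuring the Distributor) can also be scripted instead of using the SSMS wizard. The following is a trimmed, hedged T-SQL sketch using the standard replication system procedures; server names, the password, and the working directory are placeholders, and the full parameter lists are in the documentation for sp_adddistributor, sp_adddistributiondb, sp_adddistpublisher and sp_replicationdboption:

-- Run on the server that will act as the Distributor.
USE master;
EXEC sp_adddistributor
     @distributor = N'MYDISTRIBUTOR',          -- placeholder Distributor server name
     @password    = N'StrongPasswordHere';     -- placeholder distributor_admin password

EXEC sp_adddistributiondb
     @database = N'distribution';

-- Register the Publisher with this Distributor.
EXEC sp_adddistpublisher
     @publisher         = N'MYPUBLISHER',      -- placeholder Publisher server name
     @distribution_db   = N'distribution',
     @working_directory = N'\\MYDISTRIBUTOR\repldata';  -- placeholder snapshot share

-- Run on the Publisher: enable the database for transactional publishing.
EXEC sp_replicationdboption
     @dbname  = N'MyDatabase',                 -- placeholder database name
     @optname = N'publish',
     @value   = N'true';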

Parse Json In Sql Server Below 2016
Dec 21, 2022

Abstract

This article describes a TSQL JSON parser and provides the source. It is also designed to illustrate a number of string-manipulation techniques and to eliminate the issues that arise when a JSON document contains special symbols such as "/" or "-" in T-SQL. With it you can extract data from a JSON file or document that contains noise and complexity.

Summary For Implementation

The code for the JSON parser will run in SQL Server 2005, and even in SQL Server 2000 (note: some modifications are necessary). First, the function stores all strings in a temporary table, even the names of the elements, since they are escaped in a different way and may contain, unescaped, brackets and special characters that denote objects or lists. These are replaced in the JSON string by tokens which represent the strings. After this, the function fetches all the JSON keywords and values for further processing, using regular expressions, various string functions, and a list of SQL queries and variables to store the values for a particular object. Finally, the function returns a table whose rows and columns carry the parsed values, free of noise, just like any other table in the database.

Figure 1: JSON Input
Figure 2: Function Output

Background

TSQL isn't really designed for complex string parsing involving special characters, particularly where strings represent nested data structures such as XML, JSON, or XHTML. You can do it, but it is not a pretty sight. Still, what if you want to do it anyway? (Note: you can now do this rather more easily using SQL Server 2016's built-in JSON support.) If your SQL Server version is older or not compatible with the built-in JSON support, you can use this customized function to get the desired output by parsing any kind of JSON document. There are plenty of reasons you might end up here: the DBA may not allow a CLR assembly, you may lack the necessary skills with procedural scripting, sometimes there isn't any application layer at all, or you may want to run code unobtrusively across databases or servers.

The traditional way of dealing with data like this is to let a separate business layer parse a JSON 'document' into some meaningful structure (such as a tree) and then update the database by making a series of calls and lots of SQL procedures. This works, but it gets more complicated, and becomes a headache, if you need to ensure that the updates to the database are wrapped into one transaction, so that if anything goes wrong the whole transaction can be rolled back. This is why a TSQL approach has advantages. Adjacency-list tables have the same structure whatever the data in them. This means that you can define a single table-valued type and pass data structures around between stored procedures. Converting the data to hierarchical table form will be different for each application, but is easy with TSQL. You can, alternatively, convert the hierarchical table into JSON and interrogate that with SQL.

JSON format

JSON is one of the most popular lightweight markup languages, and is probably the best choice for transferring object data from a web page. JSON is designed to be as lightweight as possible, so it has only two structures. The first, delimited by curly brackets, is a collection of key/value pairs separated by commas; the key is followed by a colon. The second, delimited by square brackets, is an ordered, comma-separated list of values.
The first snag for TSQL is that the curly or square brackets are not ‘escaped’ within a string, so that there is no way of partitioning a JSON ‘document’ simply. It is difficult to  differentiate a bracket used as the delimiter of an array or structure, and one that is within a string. The second complication is that, unlike YAML, the datatypes of values can’t be explicitly declared. You have to pass them out from applying the rules from the JSON Specification.   Implementation The JSON outputter is a great deal simpler, since one can be sure of the input, but essentially it does the reverse process, working from the root of the json document to the leaves. The only complication is working out the indent of the formatted output string. In the implementation, you’ll see a fairly heavy use of PATINDEX.This uses a RegEx. However, it is all we have, and can be pressed into service by chopping the string it is searching (if only it had an optional third parameter like CHARINDEX that specified the index of the start position of the search!). The STUFF function is also important for this sort of string-manipulation work. CREATE FUNCTION [Platform].[parseJSON] (@JSON NVARCHAR(MAX)) RETURNS @hierarchy TABLE ( Element_ID INT IDENTITY(1, 1) NOT NULL /* internal surrogate primary key gives the order of parsing and the list order */ ,SequenceNo [int] NULL /* the place in the sequence for the element */ ,Parent_ID INT NULL /* if the element has a parent then it is in this column. The document is the ultimate parent, so you can get the structure from recursing from the document */ ,[Object_ID] INT NULL /* each list or object has an object id. This ties all elements to a parent. Lists are treated as objects here */ ,[Name] NVARCHAR(2000) NULL /* the Name of the object */ ,StringValue NVARCHAR(MAX) NOT NULL /*the string representation of the value of the element. */ ,ValueType VARCHAR(10) NOT NULL /* the declared type of the value represented as a string in StringValue*/ ) AS BEGIN DECLARE @FirstObject INT --the index of the first open bracket found in the JSON string ,@OpenDelimiter INT --the index of the next open bracket found in the JSON string ,@NextOpenDelimiter INT --the index of subsequent open bracket found in the JSON string ,@NextCloseDelimiter INT --the index of subsequent close bracket found in the JSON string ,@Type NVARCHAR(10) --whether it denotes an object or an array ,@NextCloseDelimiterChar CHAR(1) --either a '}' or a ']' ,@Contents NVARCHAR(MAX) --the unparsed contents of the bracketed expression ,@Start INT --index of the start of the token that you are parsing ,@end INT --index of the end of the token that you are parsing ,@param INT --the parameter at the end of the next Object/Array token ,@EndOfName INT --the index of the start of the parameter at end of Object/Array token ,@token NVARCHAR(200) --either a string or object ,@value NVARCHAR(MAX) -- the value as a string ,@SequenceNo INT -- the sequence number within a list ,@Name NVARCHAR(200) --the Name as a string ,@Parent_ID INT --the next parent ID to allocate ,@lenJSON INT --the current length of the JSON String ,@characters NCHAR(36) --used to convert hex to decimal ,@result BIGINT --the value of the hex symbol being parsed ,@index SMALLINT --used for parsing the hex value ,@Escape INT --the index of the next escape character /* in this temporary table we keep all strings, even the Names of the elements, since they are 'escaped' in a different way, and may contain, unescaped, brackets denoting objects or lists. 
These are replaced in the JSON string by tokens representing the string */ DECLARE @Strings TABLE ( String_ID INT IDENTITY(1, 1) ,StringValue NVARCHAR(MAX) ) IF ISNULL(@JSON, '') = '' RETURN SELECT @characters = '0123456789abcdefghijklmnopqrstuvwxyz' --initialise the characters to convert hex to ascii ,@SequenceNo = 0 --set the sequence no. to something sensible. ,@Parent_ID = 0; /* firstly we process all strings. This is done because [{} and ] aren't escaped in strings, which complicates an iterative parse. */ WHILE 1 = 1 --forever until there is nothing more to do BEGIN SELECT @start = PATINDEX('%[^a-zA-Z]["]%', @json collate SQL_Latin1_General_CP850_Bin);--next delimited string IF @start = 0 BREAK --no more so drop through the WHILE loop IF SUBSTRING(@json, @start + 1, 1) = '"' BEGIN --Delimited Name SET @start = @Start + 1; SET @end = PATINDEX('%[^\]["]%', RIGHT(@json, LEN(@json + '|') - @start) collate SQL_Latin1_General_CP850_Bin); END IF @end = 0 --either the end or no end delimiter to last string BEGIN -- check if ending with a double slash... SET @end = PATINDEX('%[\][\]["]%', RIGHT(@json, LEN(@json + '|') - @start) collate SQL_Latin1_General_CP850_Bin); IF @end = 0 --we really have reached the end BEGIN BREAK --assume all tokens found END END SELECT @token = SUBSTRING(@json, @start + 1, @end - 1) --now put in the escaped control characters SELECT @token = REPLACE(@token, FromString, ToString) FROM ( SELECT '\b' ,CHAR(08) UNION ALL SELECT '\f' ,CHAR(12) UNION ALL SELECT '\n' ,CHAR(10) UNION ALL SELECT '\r' ,CHAR(13) UNION ALL SELECT '\t' ,CHAR(09) UNION ALL SELECT '\"' ,'"' UNION ALL SELECT '\/' ,'/' ) substitutions(FromString, ToString) SELECT @token = Replace(@token, '\\', '\') SELECT @result = 0 ,@escape = 1 --Begin to take out any hex escape codes WHILE @escape > 0 BEGIN SELECT @index = 0 --find the next hex escape sequence ,@escape = PATINDEX('%\x[0-9a-f][0-9a-f][0-9a-f][0-9a-f]%', @token collate SQL_Latin1_General_CP850_Bin) IF @escape > 0 --if there is one BEGIN WHILE @index < 4 --there are always four digits to a \x sequence BEGIN SELECT --determine its value @result = @result + POWER(16, @index) * (CHARINDEX(SUBSTRING(@token, @escape + 2 + 3 - @index, 1), @characters) - 1) ,@index = @index + 1; END -- and replace the hex sequence by its unicode value SELECT @token = STUFF(@token, @escape, 6, NCHAR(@result)) END END --now store the string away INSERT INTO @Strings (StringValue) SELECT @token -- and replace the string with a token SELECT @JSON = STUFF(@json, @start, @end + 1, '@string' + CONVERT(NCHAR(5), @@identity)) END -- all strings are now removed. Now we find the first leaf. WHILE 1 = 1 --forever until there is nothing more to do BEGIN SELECT @Parent_ID = @Parent_ID + 1 --find the first object or list by looking for the open bracket SELECT @FirstObject = PATINDEX('%[{[[]%', @json collate SQL_Latin1_General_CP850_Bin) --object or array IF @FirstObject = 0 BREAK IF (SUBSTRING(@json, @FirstObject, 1) = '{') SELECT @NextCloseDelimiterChar = '}' ,@type = 'object' ELSE SELECT @NextCloseDelimiterChar = ']' ,@type = 'array' SELECT @OpenDelimiter = @firstObject WHILE 1 = 1 --find the innermost object or list... 
BEGIN SELECT @lenJSON = LEN(@JSON + '|') - 1 --find the matching close-delimiter proceeding after the open-delimiter SELECT @NextCloseDelimiter = CHARINDEX(@NextCloseDelimiterChar, @json, @OpenDelimiter + 1) --is there an intervening open-delimiter of either type SELECT @NextOpenDelimiter = PATINDEX('%[{[[]%', RIGHT(@json, @lenJSON - @OpenDelimiter) collate SQL_Latin1_General_CP850_Bin) --object IF @NextOpenDelimiter = 0 BREAK SELECT @NextOpenDelimiter = @NextOpenDelimiter + @OpenDelimiter IF @NextCloseDelimiter < @NextOpenDelimiter BREAK IF SUBSTRING(@json, @NextOpenDelimiter, 1) = '{' SELECT @NextCloseDelimiterChar = '}' ,@type = 'object' ELSE SELECT @NextCloseDelimiterChar = ']' ,@type = 'array' SELECT @OpenDelimiter = @NextOpenDelimiter END ---and parse out the list or Name/value pairs SELECT @contents = SUBSTRING(@json, @OpenDelimiter + 1, @NextCloseDelimiter - @OpenDelimiter - 1) SELECT @JSON = STUFF(@json, @OpenDelimiter, @NextCloseDelimiter - @OpenDelimiter + 1, '@' + @type + CONVERT(NCHAR(5), @Parent_ID)) WHILE (PATINDEX('%[A-Za-z0-9@+.e]%', @contents collate SQL_Latin1_General_CP850_Bin)) <> 0 BEGIN IF @Type = 'object' --it will be a 0-n list containing a string followed by a string, number,boolean, or null BEGIN SELECT @SequenceNo = 0 ,@end = CHARINDEX(':', ' ' + @contents) --if there is anything, it will be a string-based Name. SELECT @start = PATINDEX('%[^A-Za-z@][@]%', ' ' + @contents collate SQL_Latin1_General_CP850_Bin) --AAAAAAAA SELECT @token = RTrim(Substring(' ' + @contents, @start + 1, @End - @Start - 1)) ,@endofName = PATINDEX('%[0-9]%', @token collate SQL_Latin1_General_CP850_Bin) ,@param = RIGHT(@token, LEN(@token) - @endofName + 1) SELECT @token = LEFT(@token, @endofName - 1) ,@Contents = RIGHT(' ' + @contents, LEN(' ' + @contents + '|') - @end - 1) SELECT @Name = StringValue FROM @strings WHERE string_id = @param --fetch the Name END ELSE SELECT @Name = NULL ,@SequenceNo = @SequenceNo + 1 SELECT @end = CHARINDEX(',', @contents) -- a string-token, object-token, list-token, number,boolean, or null IF @end = 0 --HR Engineering notation bugfix start IF ISNUMERIC(@contents) = 1 SELECT @end = LEN(@contents) + 1 ELSE --HR Engineering notation bugfix end SELECT @end = PATINDEX('%[A-Za-z0-9@+.e][^A-Za-z0-9@+.e]%', @contents + ' ' collate SQL_Latin1_General_CP850_Bin) + 1 SELECT @start = PATINDEX('%[^A-Za-z0-9@+.e][-A-Za-z0-9@+.e]%', ' ' + @contents collate SQL_Latin1_General_CP850_Bin) --select @start,@end, LEN(@contents+'|'), @contents SELECT @Value = RTRIM(SUBSTRING(@contents, @start, @End - @Start)) ,@Contents = RIGHT(@contents + ' ', LEN(@contents + '|') - @end) IF SUBSTRING(@value, 1, 7) = '@object' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,SUBSTRING(@value, 8, 5) ,SUBSTRING(@value, 8, 5) ,'object' ELSE IF SUBSTRING(@value, 1, 6) = '@array' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,SUBSTRING(@value, 7, 5) ,SUBSTRING(@value, 7, 5) ,'array' ELSE IF SUBSTRING(@value, 1, 7) = '@string' INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,StringValue ,'string' FROM @strings WHERE string_id = SUBSTRING(@value, 8, 5) ELSE IF @value IN ('true', 'false') INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'boolean' ELSE IF @value = 'null' INSERT INTO 
@hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'null' ELSE IF PATINDEX('%[^0-9-]%', @value collate SQL_Latin1_General_CP850_Bin) > 0 INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'real' ELSE INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,ValueType ) SELECT @Name ,@SequenceNo ,@Parent_ID ,@value ,'int' IF @Contents = ' ' SELECT @SequenceNo = 0 END END INSERT INTO @hierarchy ( [Name] ,SequenceNo ,Parent_ID ,StringValue ,[Object_ID] ,ValueType ) SELECT '-' ,1 ,NULL ,'' ,@Parent_ID - 1 ,@type RETURN END Code Snippet 1:- ParseJson Function   Closure The so-called ‘impedance-mismatch’ between applications and databases is an illusion. if the developer has understood the data correctly then there is less complexity  while processing it. But has been trickier with other formats such as JSON. By using techniques like this, it should be possible to liberate the application or website from having to do the mapping from the object model to the relational, and spraying the database with ad-hoc T-SQL  that uses the fact/dimension tables or updateable views.  If the database can be provided with the JSON, or the Table-Valued parameter, then there is a better chance of  maintaining full transactional integrity for the more complex updates. The database developer already has the tools to do the work with XML, but why not the simpler, and more practical JSON? I hope these routines get you started with experimenting with all this for your requirements.  
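Once the function is created (the [Platform] schema must exist, or adjust the schema name), calling it is straightforward. A minimal usage sketch; the sample document is arbitrary:

SELECT Element_ID, SequenceNo, Parent_ID, [Object_ID], [Name], StringValue, ValueType
FROM [Platform].[parseJSON](N'{"Person":{"Name":"Jane","Age":30,"Tags":["sql","json"],"Active":true}}')
ORDER BY Element_ID;

Each row is one parsed element; Parent_ID and Object_ID let you walk the document hierarchy, and ValueType tells you whether the StringValue column holds a string, int, real, boolean, null, object, or array.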

How to create Login, User, and Assign Permissions in SQL Server?
Aug 09, 2021

1. To create a login in SQL Server, navigate to Security > Logins.
2. In the next screen, enter:
   a. Login name
   b. Select SQL Server authentication
   c. Enter a password for the login
You can also create a login using the T-SQL command:

   CREATE LOGIN MyLogin WITH PASSWORD = 'MsSQL'

3. Give full access to the Demo login. Once the login is created, refresh the Logins folder and you can view it.

How To Create a User?
You can use either of the following two ways:
   · Using T-SQL
   · Using SQL Server Management Studio

Providing limited access only to a certain database
You will be creating a user for the Events27_production database.
1. Connect to SQL Server to create a new user:
   a. Connect to SQL Server, then expand the Databases folder from Object Explorer.
   b. Identify the database for which you need to create the user and expand it.
   c. Expand its Security folder.
   d. Right-click the Users folder, then choose "New User…".
2. Enter the user details on the next screen:
   a. Enter the desired user name
   b. Enter the login name (created earlier)
   The user is created for that specific database.

Create a user using T-SQL:

   create user <user-name> for login <login-name>
   create user DemoUser for login Demo

Assigning limited permission to a user in SQL Server
Permissions refer to the rules that govern the levels of access that users have on secured SQL Server resources. SQL Server allows you to grant, revoke and deny such permissions. There are two ways to give a SQL Server user permissions:
1. Connect to your SQL Server instance and expand the folders from Object Explorer as shown below. Right-click on the name of the user.
2. In the next screen:
   a. Click the Securables option on the left.
   b. Click on Search.
3. In the next window:
   a. Select "All Objects belonging to the Schema."
   b. Select the schema name "dbo".
4. Grant or revoke permission on a specific table or DB object:
   a. Identify the table you want to grant permission on.
   b. Under Explicit Permissions, select Grant.
   The user DemoUser is granted SELECT permission on final_backup_tidx_sctionSponsors.

Grant permissions using T-SQL:

   use <database-name>
   grant <permission-name> on <object-name> to <username\principal>

   Use Events27_production
   Go
   Grant Select on final_backup_tidx_sctionSponsors to DemoUser

5. Providing a ROLE to a specific user:
   a. In Object Explorer, expand the database and its Security folder.
   b. Expand Roles and right-click on Database Roles.
   c. Click on New Database Role; a new pop-up window opens.
   d. In the General tab, enter the role name and click OK.
6. Refresh the roles. The screenshot below shows the role.

Remove a login from SQL Server:
1. To drop a login, navigate to Security > Logins.
2. Select the desired login and click on Delete.

Drop a login using T-SQL:

   DROP LOGIN Demo;
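Putting the T-SQL pieces above together, here is a hedged end-to-end sketch. The names mirror the Demo examples in this post, the password is a placeholder, and the role step uses ALTER ROLE (available from SQL Server 2012 onward):

-- 1. Create the login at the server level.
CREATE LOGIN Demo WITH PASSWORD = 'StrongPassword@123';   -- placeholder password

-- 2. Create a user for that login inside the target database.
USE Events27_production;
CREATE USER DemoUser FOR LOGIN Demo;

-- 3. Grant object-level permission.
GRANT SELECT ON final_backup_tidx_sctionSponsors TO DemoUser;

-- 4. Add the user to a fixed database role (read access to all tables).
ALTER ROLE db_datareader ADD MEMBER DemoUser;

-- 5. Clean up when no longer needed.
-- DROP USER DemoUser;
-- DROP LOGIN Demo;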

Basics of SSIS(SQL Server Integration Service)
Sep 14, 2020

What is SSIS ? SSIS is a platform for data integration and workflow applications. It features a data warehousing tool used for data extraction, transformation, and loading (ETL). The tool may also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. SQL Server Integration Service (SSIS) is a component of the Microsoft SQL Server database software that can be used to execute a wide range of data migration tasks. SSIS is a fast & flexible data warehousing tool used for data extraction, loading and transformation like cleaning, aggregating, merging data, etc. It makes it easy to move data from one database to another database. SSIS can extract data from a wide variety of sources like SQL Server databases, Excel files, Oracle and DB2 databases, etc. SSIS also includes graphical tools & wizards for performing workflow functions like sending email messages, FTP operations, data sources, and destinations. Features of SSIS: Organized and lookup transformations Tight integration with other Microsoft SQL family Provides rich Studio Environments Provides a lot of data integration functions for better transformations High-speed data connectivity   Why SSIS? Extract, Transform, and Load (ETL) data from SQL Server to a file and also from file to SQL. Sending an email. Download the File from FTP. Rename ,Delete , Move File From Defined Path. It allows you to join tables from different databases (SQL, Oracle, etc...) and from potentially different servers.   How SSIS Works? SSIS consists of three major components, mainly: Operational Data: An operational data store (ODS) is a database designed to integrate data from multiple sources for additional operations on the data. This is the place where most of the data used in the current operation is housed before it’s transferred to the data warehouse for longer-term storage or archiving. ETL process: ETL is a process to Extract, Transform and Load the data. Extract, Transform and Load (ETL) is the process of extracting the data from various sources, transforming this data to meet your requirement and then loading into a target data warehouse. ETL provides a ONE STOP SOLUTION for all these problems. Extract: Extraction is the process of extracting the data from various homogeneous or heterogeneous data sources based on different validation points. Transformation: In transformation, entire data is analyzed and various functions are applied on it in order to load the data to the target database in a cleaned and general format. Load: Loading is the process of loading the processed data to a target data repository using minimal resources. Data Warehouse Data Warehouse captures the data from diverse sources for useful analysis and access. Data warehousing is a large set of data accumulated which is used for assembling and managing data from various sources for the purpose of answering business questions. Hence, helps in making decisions.   How to install SSDT(Sql Server Data Tools)? Prerequisite and environment Setup for SSIS Project For Starting SSIS we need 2 Studios SQL Server Data Tools (SSDT) for developing the Integration Services packages that a business solution requires. SQL Server Data Tools (SSDT) provides the Integration Services project in which you create packages. Installation Steps: Download SSDT setup from Microsoft website.   
URL: https://docs.microsoft.com/en-us/sql/ssdt/previous-releases-of-sql-server-data-tools-ssdt-and-ssdt-bi?view=sql-server-ver15 When you open the .exe file, you will be asked to restart the system before installation. So, restart first and Run Setup. And press Next. It will show the tools required and the features such as SQL Server Database, SSAS(SQL Server Analysis Services), SSRS(SQL Server Reporting Services) and SSIS(SQL Server Integration Services). Make sure you check SSIS and click the “install” button. Refer the below screenshot for the same.   We will see following contents In SSIS: Variables Connection Manager SSIS Toolbox Container Tasks Data Flow Task   Variable: Variables store values that a SSIS package and its containers, tasks, and event handlers can use at runtime.   System variables : Defined by Integration Services SSIS provides a set of system variables that store information about the running package and its objects. These variables can be used in expressions and property expressions to customize packages, containers, tasks, and event handlers.   User-Defined variables : Defined by Package Developers   How to create user - define  variable?   How to set expression for variable   Connection Manager: SSIS provides different types of connection managers that enable packages to connect to a variety of data sources and servers: There are built-in connection managers that Setup installs when you install Integration Services. There are connection managers that are available for download from the Microsoft website. You can create your own custom connection manager if the existing connection managers do not meet your needs.   Let's see how we can add Connection Manager. 1)Solution Explorer > Connection Managers > New Connection Manager . You can see the list of connection managers for different type of connections.   2)Add connection manager.   After adding your connection. you can see the all connection here.   SSIS Toolbox: Steps: Menu bar > SSIS > select SSIS Toolbox. now, you can see SSIS Toolbox on the left side. SSIS Toolbox have list of tasks and containers that you can perform.   List of Containers: For each Loop Container : Runs a control flow repeatedly by using an enumerator. For Loop Container : Runs a control flow repeatedly by testing a condition. Sequence Container : Groups tasks and containers into control flows that are subsets of the package control flow.   List of Task: Data Flow Task The task that runs data flows to extract data, apply column level transformations, and load data.   Data Preparation Tasks These tasks do the following processes: copy files and directories; download files and data; run Web methods; apply operations to XML documents; and profile data for cleansing.   Workflow Tasks The tasks that communicate with other processes to run packages, run programs or batch files, send and receive messages between packages, send e-mail messages, read Windows Management Instrumentation (WMI) data, and watch for WMI events.   SQL Server Tasks The tasks that access, copy, insert, delete, and modify SQL Server objects and data.   Scripting Tasks The tasks that extend package functionality by using scripts.   Analysis Services Tasks The tasks that create, modify, delete, and process Analysis Services objects.   Maintenance Tasks The tasks that perform administrative functions such as backing up and shrinking SQL Server databases, rebuilding and reorganizing indexes, and running SQL Server Agent jobs. 
You can add a task or container by dragging it from the SSIS Toolbox onto the design area.

Data Flow Task: Drag the Data Flow task from the SSIS Toolbox onto the design area and double-click it. You are now in the Data Flow tab, and the SSIS Toolbox shows a different set of components.

Types:
Source: where your data comes from.
Destination: where you want to move your data to.
Transformation: the operations that perform the ETL (Extract, Transform, Load) work.

Conclusion: SQL Server Integration Services provides tasks to transform and validate data during the load process, and transformations to insert data into your destination. Rather than creating a stored procedure with T-SQL to validate or change data, it is good to know about the different SSIS tasks and how they can be used.

RELATED BLOGS: Create SSIS Data Flow Task Package Programmatically

Kafka with ELK implementation
Aug 17, 2020

Apache Kafka is the most common buffer solution deployed together with the ELK Stack. Kafka sits between the log-delivery and indexing units, acting as a segregation layer for the data being collected. In this blog, we'll see how to deploy all the components required to set up a resilient logs pipeline with Apache Kafka and the ELK Stack:
Filebeat – collects logs and forwards them to a Kafka topic.
Kafka – brokers the data flow and queues it.
Logstash – aggregates the data from the Kafka topic, processes it and ships it to Elasticsearch.
Elasticsearch – indexes the data.
Kibana – for analyzing the data.

My environment: To perform the steps below, I set up a single Ubuntu 18.04 VM on AWS EC2 using local storage. In real-life scenarios you will probably have all these components running on separate machines. I started the instance in the public subnet of a VPC and then set up a security group to enable access from anywhere using SSH and TCP 5601 (for Kibana). I also added a new Elastic IP address and associated it with the running instance. The example logs used for this tutorial are Apache access logs; you can equally use VPC Flow Logs, ALB access logs, etc.

Log in to your Ubuntu system with sudo privileges. For a remote Ubuntu server, use ssh to access it; Windows users can use PuTTY or PowerShell to log in. Elasticsearch requires Java to run, so install OpenJDK 11 with the following command:

sudo apt install openjdk-11-jdk-headless

Check that the installation succeeded by printing the Java version:

~$ java -version
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)

Step 1: Installing Elasticsearch
We will start by installing the main component in the stack: Elasticsearch. Since version 7.x, Elasticsearch is bundled with Java, so we can jump right ahead with adding Elastic's signing key.

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

You may need to install the apt-transport-https package on Debian before proceeding:

sudo apt-get install apt-transport-https

Our next step is to add the repository definition to our system:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

You can then install the Elasticsearch Debian package with:

sudo apt-get update && sudo apt-get install elasticsearch

Before we bootstrap Elasticsearch, we need to apply some basic configuration in the Elasticsearch configuration file at /etc/elasticsearch/elasticsearch.yml:

sudo su
nano /etc/elasticsearch/elasticsearch.yml

Since we are installing Elasticsearch on AWS, we will bind Elasticsearch to localhost.
Also, we need to define the private IP of our EC2 instance as a master-eligible node:

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<InstancePrivateIP>"]

Save the file and run Elasticsearch with:

sudo service elasticsearch start

To confirm that everything is working as expected, point curl to http://localhost:9200, and you should see something like the following output (give Elasticsearch a minute or two before you start to worry about not seeing any response):

{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "W_Ky1DL3QL2vgu3sdafyag",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Installing Logstash
Next up is the "L" in ELK: Logstash. Since we already defined the Elastic repository on the system, installing Logstash is easy; just run:

sudo apt-get install logstash -y

Logstash requires Java as well, so verify it is installed:

java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.16.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

Next, we will configure a Logstash pipeline that pulls our logs from a Kafka topic, processes these logs, and ships them on to Elasticsearch for indexing. Let's create a new config file:

sudo nano /etc/logstash/conf.d/apache.conf

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "apache"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

As you can see, we're using the Logstash Kafka input plugin to define the Kafka host and the topic we want Logstash to pull from. We apply some filtering to the logs and ship the data to our local Elasticsearch instance.

Step 3: Installing Kibana
Let's move on to the next component in the ELK Stack: Kibana. As before, we will use a simple apt command to install Kibana:

sudo apt-get install kibana

We will then open up the Kibana configuration file at /etc/kibana/kibana.yml and make sure we have the correct configuration defined:

server.port: 5601
server.host: "<INSTANCE_PRIVATE_IP>"
elasticsearch.hosts: ["http://<INSTANCE_PRIVATE_IP>:9200"]

Then enable and start the Kibana service:

sudo systemctl enable kibana
sudo systemctl start kibana

We also need to install Filebeat. Use:

sudo apt install filebeat

Open up Kibana in your browser with http://<PUBLIC_IP>:5601. You will be presented with the Kibana home page.
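The pipeline above expects Filebeat to publish the Apache access log into the 'apache' Kafka topic that Logstash reads from. A hedged sketch of /etc/filebeat/filebeat.yml for this single-VM setup; the log path assumes Apache's default location on Ubuntu, and the Kafka broker address is an assumption:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/access.log

output.kafka:
  hosts: ["localhost:9092"]
  topic: "apache"

Then enable and start the service:

sudo systemctl enable filebeat
sudo systemctl start filebeat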