Category - SQL-Server

Transparent Data Encryption [TDE] In SQL Server
Feb 26, 2025

Data security is a top priority in today's digital landscape. With increasing threats of data breaches, protecting sensitive information stored in databases is essential. Transparent Data Encryption (TDE) is a built-in security feature in SQL Server, Oracle, MySQL, and other relational database management systems (RDBMS) that encrypts data at rest. It ensures that database files, including primary data files (.mdf), transaction log files (.ldf), and backups, remain secure even if they fall into the wrong hands.

Unlike encryption methods that require modifications to application code, TDE operates at the file level, seamlessly encrypting and decrypting data without impacting application functionality. This guide walks you through implementing TDE in SQL Server, including enabling encryption, verifying its status, and backing up encrypted databases.

How TDE Works?

TDE uses a hierarchical encryption architecture to secure database files:

Service Master Key (SMK): A root-level key stored in the master database, managed by the SQL Server instance.
Database Master Key (DMK): A symmetric key used to encrypt certificates and asymmetric keys within a database.
Certificate or Asymmetric Key: Used to encrypt the Database Encryption Key (DEK).
Database Encryption Key (DEK): A symmetric key that encrypts the actual database files.

The encryption hierarchy follows this order:

Database Encryption Key (DEK) → encrypted by the certificate
Certificate → encrypted by the Database Master Key (DMK)
DMK → encrypted by the Service Master Key (SMK)

Advantages / Why Use TDE?

Enhanced Data Security: Protects database files from unauthorized access, even if they are stolen.
Minimal Application Impact: Encrypts data at the storage level without requiring code changes.
Compliance: Helps meet regulatory standards such as GDPR, HIPAA, and PCI-DSS (the Payment Card Industry Data Security Standard).
Performance Efficiency: Uses minimal CPU overhead, since encryption and decryption occur at the I/O level.
Automatic Encryption: Data is automatically encrypted and decrypted for authorized users without manual intervention.

Disadvantages / What to Look Out For?

No Protection for Data in Transit: TDE only encrypts data at rest; data in transit must be secured separately (for example, with TLS).
Backup and Restore Complexity: Restoring encrypted backups on another server requires careful management of certificates and keys.
Performance Overhead: While minimal, TDE may slightly impact disk I/O performance.
Limited Granularity: Encrypts entire database files rather than specific columns or tables.
Key Management Challenges: Losing encryption keys or certificates can result in permanent data loss.

How to Implement TDE in SQL Server

Step 1: Create a master key in the master database if one does not exist. Replace the sample password with a strong password of your own.

USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword123!';
GO

Step 2: Create a certificate. TDE_Certificate is just a name (you can choose any name), and SUBJECT is a description of the certificate.

CREATE CERTIFICATE TDE_Certificate
WITH SUBJECT = 'Database Encryption Certificate';
GO

Step 3: Create a Database Encryption Key (DEK) in the database you want to encrypt.

USE YourDatabaseName;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_Certificate;
GO

Step 4: Enable encryption.

ALTER DATABASE YourDatabaseName
SET ENCRYPTION ON;
GO

Step 5: Verify the encryption status.

SELECT name, is_encrypted
FROM sys.databases
WHERE name = 'YourDatabaseName';

A value of 1 in the is_encrypted column confirms that encryption is enabled.
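Beyond sys.databases, the per-database encryption state and initial-encryption progress can be inspected with the sys.dm_database_encryption_keys dynamic management view. This is a standard query against that view; no custom object names are assumed:

```sql
-- Inspect TDE state and encryption-scan progress for all databases
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,        -- 2 = encryption in progress, 3 = encrypted
       percent_complete,        -- non-zero only while an encryption scan is running
       key_algorithm,
       key_length
FROM sys.dm_database_encryption_keys;
```

An encryption_state of 3 means the database and its transaction log are fully encrypted; 2 means the background encryption scan is still running.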
Step 6: Back up the certificate and its private key for future restores. Without these files, encrypted backups cannot be restored on another server.

BACKUP CERTIFICATE TDE_Certificate
TO FILE = 'C:\Backup\TDECert.cer'
WITH PRIVATE KEY (
    FILE = 'C:\Backup\TDECertKey.pvk',
    ENCRYPTION BY PASSWORD = 'StrongPassword123!'
);
GO

How to Disable TDE in SQL Server?

Step 1: Disable encryption.

ALTER DATABASE YourDatabaseName
SET ENCRYPTION OFF;
GO

Step 2: Drop the Database Encryption Key.

USE YourDatabaseName;
GO
DROP DATABASE ENCRYPTION KEY;
GO

Step 3: Drop the certificate and master key (optional).

USE master;
GO
DROP CERTIFICATE TDE_Certificate;
DROP MASTER KEY;
GO

How to Restore an Encrypted Database on Another Server?

Copy the certificate (.cer) and private key (.pvk) files exported in Step 6 to the destination server, then recreate the certificate in its master database before restoring the backup:

CREATE CERTIFICATE TDE_Certificate                 -- the name can be anything
FROM FILE = 'C:\Backup\TDECert.cer'                -- certificate file copied from the source server
WITH PRIVATE KEY (
    FILE = 'C:\Backup\TDECertKey.pvk',             -- private key file copied from the source server
    DECRYPTION BY PASSWORD = 'StrongPassword123!'  -- password used when the certificate was exported
);

Conclusion

Transparent Data Encryption (TDE) is an essential security feature in SQL Server that protects data at rest by encrypting database files. By implementing TDE, organizations can enhance data security without modifying applications. Following the steps outlined in this guide, you can enable, verify, disable, and back up TDE-encrypted databases efficiently. Properly backing up encryption keys and certificates is crucial to keeping data accessible while protecting it from unauthorized access. Secure your SQL Server today with TDE and strengthen your database security!
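Once the certificate exists on the destination server, the backup restores like any other database. A minimal sketch follows, with hypothetical backup paths and logical file names (check the actual logical names with RESTORE FILELISTONLY):

```sql
-- List the logical file names contained in the backup first
RESTORE FILELISTONLY FROM DISK = 'C:\Backup\YourDatabaseName.bak';

-- Restore, relocating the data and log files (all paths are placeholders)
RESTORE DATABASE YourDatabaseName
FROM DISK = 'C:\Backup\YourDatabaseName.bak'
WITH MOVE 'YourDatabaseName' TO 'C:\Data\YourDatabaseName.mdf',
     MOVE 'YourDatabaseName_log' TO 'C:\Data\YourDatabaseName_log.ldf';
GO
```

If the certificate was not created first, the restore fails with an error indicating the server cannot find the certificate that protects the Database Encryption Key.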

Always Encrypted In SQL Server
Feb 25, 2025

Always Encrypted is a security feature introduced by Microsoft in SQL Server 2016, designed to protect sensitive data by ensuring it remains encrypted both at rest and in transit. This functionality extends to Azure SQL Database and Azure SQL Managed Instance, providing a robust mechanism to safeguard confidential information from unauthorized access, including database administrators and cloud service providers.

Core Components

Column Master Key (CMK): A key-protecting key stored in a trusted key store, such as Azure Key Vault, the Windows Certificate Store, or a Hardware Security Module (HSM). The CMK encrypts one or more Column Encryption Keys.
Column Encryption Key (CEK): A key used to encrypt data within a specific database column. Each CEK is encrypted with a CMK, ensuring that the actual encryption keys are never exposed to the SQL Server instance.

Encryption Types

Deterministic Encryption: Generates the same encrypted value for any given plaintext, enabling operations like equality comparisons and joins on encrypted columns. However, it may reveal patterns in the data, potentially aiding unauthorized inference.
Randomized Encryption: Produces different encrypted values for the same plaintext, offering enhanced security at the cost of limiting query capabilities, as equality searches and joins are not supported.

To address limitations in processing encrypted data, Microsoft introduced Always Encrypted with secure enclaves. A secure enclave is a protected region of memory within the SQL Server process that allows computations on plaintext data inside the enclave, while keeping it encrypted outside. This enhancement enables operations such as pattern matching and range comparisons on encrypted data without exposing it to unauthorized users.
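The practical difference between the two encryption types shows up in the queries you can write. As a sketch, assume a table with an encrypted SSN column (similar to the Customers table created later in this guide); the column and parameter names are illustrative:

```sql
-- Equality search: allowed only when SSN uses DETERMINISTIC encryption.
-- @ssn must be a parameter supplied by an Always Encrypted-enabled client
-- driver (or SSMS with "Parameterization for Always Encrypted" turned on),
-- which encrypts the value before it reaches the server; a plaintext
-- literal in the WHERE clause would not match the stored ciphertext.
SELECT CustomerID, CustomerName
FROM Customers
WHERE SSN = @ssn;

-- With RANDOMIZED encryption the same predicate is rejected, because two
-- encryptions of the same plaintext produce different ciphertexts.
```

This is why the case studies below use deterministic encryption for columns involved in lookups and randomized encryption for columns that are only ever read back whole.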
Case Studies

Healthcare Industry

A healthcare provider implemented Always Encrypted to protect patient records, ensuring that sensitive information like social security numbers and medical histories remained confidential. By encrypting specific columns containing personal data, the organization maintained compliance with regulations such as HIPAA, while allowing authorized applications to perform necessary operations on the data.

Financial Sector

A financial institution adopted Always Encrypted to secure credit card information and transaction details. By utilizing deterministic encryption for columns involved in frequent queries and randomized encryption for highly sensitive data, the bank achieved a balance between security and functionality, reducing the risk of data breaches and unauthorized access.

Best Practices

Key Management: Store CMKs in secure, centralized key management systems like Azure Key Vault or HSMs to prevent unauthorized access.
Data Classification: Identify and categorize sensitive data to determine which columns require encryption, ensuring that only critical information is protected, thereby optimizing performance.
Application Configuration: Ensure that client applications are configured to support Always Encrypted, including the use of compatible drivers and proper handling of encrypted data.
Performance Considerations: Be aware that encrypting columns, especially with randomized encryption, can impact query performance. Plan and test accordingly to balance security needs with system efficiency.

Recent Developments

As of late 2024, Microsoft has enhanced Always Encrypted by integrating it more deeply with Azure services, providing seamless support for secure enclaves in Azure SQL Database. This advancement allows for more complex operations on encrypted data within the cloud environment, expanding the feature's applicability and performance in cloud-based applications.
Advantages of Always Encrypted

Data Confidentiality – Even database admins cannot access plaintext data.
Protection from Insider Threats – Encryption keys are managed externally.
Compliance Support – Helps meet GDPR, HIPAA, PCI-DSS, and SOX requirements.
Minimal Performance Overhead – Works at the column level, reducing processing load.
End-to-End Encryption – Data is encrypted in transit, at rest, and in use.

Limitations of Always Encrypted

Limited SQL Operations – Cannot perform LIKE, ORDER BY, or JOIN on encrypted columns (deterministic encryption allows equality operations only).
No Partial Encryption – The entire column must be encrypted.
Increased Storage – Encrypted data requires more storage due to ciphertext length.
Key Management Complexity – Securely storing and managing CMKs is critical.
Requires Application Changes – Client applications must use compatible drivers.

Implementation of Always Encrypted in SQL Server

Step 1: Create a Sample Table

CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    CustomerName NVARCHAR(100),
    SSN NVARCHAR(50) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        COLUMN_ENCRYPTION_KEY = CEK_Auto,
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    )
);

The collation must be a BIN2 collation such as Latin1_General_BIN2 for deterministically encrypted character columns. The encryption algorithm is AES-256 based.

Step 2: Create a Column Master Key (CMK)

CMKs are stored outside SQL Server in a secure location (e.g., the Windows Certificate Store).

Using SSMS (GUI)

Go to SSMS → expand Security > Always Encrypted Keys
Right-click "Column Master Keys" → click New Column Master Key
Enter a name (e.g., CMK_Auto)
Choose "Windows Certificate Store - Local Machine"
Click OK

Using T-SQL

CREATE COLUMN MASTER KEY CMK_Auto
WITH (
    KEY_STORE_PROVIDER_NAME = 'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = 'CurrentUser/My/1234567890ABCDEF1234567890ABCDEF12345678'
);

Replace KEY_PATH with your actual certificate thumbprint.
Step 3: Create a Column Encryption Key (CEK)

The CEK is stored inside SQL Server and is encrypted using the CMK.

Using SSMS (GUI)

Go to SSMS → expand Security > Always Encrypted Keys
Right-click "Column Encryption Keys" → click New Column Encryption Key
Choose CMK_Auto as the master key
Name it CEK_Auto
Click OK

Using T-SQL

CREATE COLUMN ENCRYPTION KEY CEK_Auto
WITH VALUES (
    COLUMN_MASTER_KEY = CMK_Auto,
    ALGORITHM = 'RSA_OAEP'
);

Note that when run manually, the T-SQL statement also requires an ENCRYPTED_VALUE clause containing the CEK encrypted by the CMK; SSMS and the SqlServer PowerShell module generate this value for you, which is why the GUI path is usually simpler.

Now we have:

CMK (CMK_Auto) → stored in the Windows Certificate Store
CEK (CEK_Auto) → stored inside SQL Server, encrypted with CMK_Auto

Step 4: Insert Encrypted Data

Use parameterized queries with Always Encrypted enabled in the connection string.

Using .NET (C#)

using System;
using System.Data.SqlClient;

class Program
{
    static void Main()
    {
        string connectionString = "Data Source=YourServer; Initial Catalog=YourDatabase; Integrated Security=True; Column Encryption Setting=Enabled";
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("INSERT INTO Customers (CustomerID, CustomerName, SSN) VALUES (@id, @name, @ssn)", conn);
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.Parameters.AddWithValue("@name", "John Doe");
            cmd.Parameters.AddWithValue("@ssn", "123-45-6789");
            cmd.ExecuteNonQuery();
        }
    }
}

Encryption happens automatically on the client side!

Step 5: Query Encrypted Data

SSMS cannot decrypt encrypted data unless "Column Encryption Setting = Enabled" is used.
Querying in SSMS (without decryption)

SELECT * FROM Customers;

The SSN column will appear as encrypted binary data.

Querying with Decryption (Using .NET)

string connectionString = "Data Source=YourServer; Initial Catalog=YourDatabase; Integrated Security=True; Column Encryption Setting=Enabled";
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand("SELECT CustomerID, CustomerName, SSN FROM Customers", conn);
    SqlDataReader reader = cmd.ExecuteReader();
    while (reader.Read())
    {
        Console.WriteLine(reader["CustomerID"] + " | " + reader["CustomerName"] + " | " + reader["SSN"]);
    }
}

The decrypted SSN is retrieved automatically for authorized applications.

Conclusion

Always Encrypted offers a robust solution for protecting sensitive data within SQL Server and Azure SQL environments. By encrypting data both at rest and in transit, and leveraging secure enclaves for in-place computations, organizations can maintain data confidentiality and comply with regulatory requirements. Implementing best practices in key management, data classification, and application configuration is essential to fully leverage the security benefits of Always Encrypted.

Understanding Different Types of Switching in Table Partitioning in Microsoft SQL Server
Jun 21, 2024

Introduction

With a focus on optimizing database performance and manageability, it's important to understand the nuances of table partitioning in SQL Server, including partition switching. Partition switching is a feature in SQL Server that allows for fast data movement between tables and partitions. This blog explores the different types of partition switching and their applications in SQL Server.

What is Partition Switching?

Partition switching involves moving data between partitions or between a partition and a non-partitioned table without physically copying the data. Instead, metadata pointers are updated, making the operation extremely fast and efficient. This is especially useful for data archiving, loading new data, and maintaining large datasets.

Types of Partition Switching

1. Switching Between Partitions in the Same Table

Switching data between partitions within the same table can be useful for reorganizing data or when performing operations that require temporary partition rearrangement. Note that the target partition must be empty for the switch to succeed.

Example: Suppose you have a table SalesData partitioned by month and you need to move data from one month's partition to another.

-- Switch data from partition 2 to partition 3 (partition 3 must be empty)
ALTER TABLE SalesData SWITCH PARTITION 2 TO SalesData PARTITION 3;

2. Switching Between a Table and a Partitioned Table

This type of switching is typically used for bulk loading or removing data. You can switch a non-partitioned table into a partition of a partitioned table (and vice versa) to quickly load or archive data.

Example: Loading new data into a partitioned table SalesData from a staging table StagingSalesData.

-- Ensure the staging table matches the schema of the partitioned table
CREATE TABLE StagingSalesData (
    SaleID int,
    SaleDate datetime,
    Amount money
);

-- Switch the staging table data into the partition
ALTER TABLE StagingSalesData SWITCH TO SalesData PARTITION 1;

3. Switching Between a Partitioned Table and Another Partitioned Table

This involves moving data between two different partitioned tables. It's useful when dealing with different data lifecycle management scenarios, such as archiving old data into a separate historical table.

Example: Switching data from a partition in CurrentSalesData to a partition in HistoricalSalesData.

-- Both tables should have the same structure and a compatible partition scheme
ALTER TABLE CurrentSalesData SWITCH PARTITION 2 TO HistoricalSalesData PARTITION 1;

4. Switching Data Out of a Partitioned Table

This is used to remove data from a partitioned table and move it into a non-partitioned table for further processing or archiving.

Example: Switching data from a partition in SalesData to a table OldSalesData.

-- Ensure the target table matches the schema of the partitioned table
CREATE TABLE OldSalesData (
    SaleID int,
    SaleDate datetime,
    Amount money
);

-- Switch the data out of the partition
ALTER TABLE SalesData SWITCH PARTITION 1 TO OldSalesData;

Guidelines for Partition Switching

To ensure smooth partition switching, consider the following guidelines:

Schema Matching: Ensure that the schemas of the source and target tables match exactly, including constraints and indexes.
Partition Alignment: The source and target partitions must align correctly based on the partition function.
Check Constraints: Check constraints on the tables must be consistent with the partition boundary conditions.
Empty Target: The target partition or table must be empty before the switch.
Minimal Indexes: Avoid non-aligned indexes on partitioned tables to ensure efficient switching.

Benefits of Partition Switching

Performance Efficiency: Since partition switching involves metadata operations rather than physical data movement, it is extremely fast and efficient.
Minimal Downtime: Enables quick data loading, archiving, and reorganization with minimal downtime.
Data Management Flexibility: Facilitates flexible data management strategies, allowing for efficient data lifecycle management.
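Putting the guidelines together, a typical switch-in of staged data looks like the sketch below. The boundary value and constraint name are hypothetical; the trusted CHECK constraint is what proves to SQL Server that every staged row belongs in the target partition:

```sql
-- The staging table needs a trusted CHECK constraint matching the
-- boundary of target partition 1 (the boundary value is illustrative)
ALTER TABLE StagingSalesData WITH CHECK
ADD CONSTRAINT CK_StagingSalesData_SaleDate
CHECK (SaleDate IS NOT NULL AND SaleDate < '2024-01-01');

-- With matching schemas, aligned indexes, and the constraint in place,
-- the switch itself is a metadata-only operation
ALTER TABLE StagingSalesData SWITCH TO SalesData PARTITION 1;
```

If the constraint is missing or untrusted, the SWITCH statement fails with an error explaining that SQL Server cannot verify the staged rows fit the partition's range.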
Conclusion

Partition switching is a powerful feature in SQL Server that enhances the performance and manageability of large datasets. Understanding the different types of partition switching and their applications allows you to implement efficient data loading, archiving, and maintenance strategies. By leveraging partition switching, you can ensure that your SQL Server environment remains robust, responsive, and well-organized, ultimately supporting your organization's data management goals.

Enhancing Performance and Manageability: Table Partitioning in Microsoft SQL Server
Jun 14, 2024

Introduction

For teams overseeing data management and performance optimization, implementing table partitioning in Microsoft SQL Server is a strategic decision that enhances database performance and manageability. Table partitioning is a powerful technique that allows large tables to be divided into smaller, more manageable pieces, improving query performance and simplifying maintenance tasks. In this blog, we'll explore the concept of table partitioning, its benefits, and a step-by-step guide to implementing it in SQL Server.

Understanding Table Partitioning

Table partitioning involves dividing a large table into smaller, more manageable segments called partitions. Each partition can be managed and accessed independently, which can significantly improve query performance and simplify maintenance tasks. Partitioning is especially beneficial for large tables with millions or billions of rows, where operations such as data loading, archiving, and querying can become cumbersome.

Key Concepts

Partition Function: Defines how data is distributed across partitions based on a specified column.
Partition Scheme: Maps the partitions defined by the partition function to specific filegroups within the database.
Aligned Indexes: Indexes that are partitioned in the same way as the table, ensuring that queries using these indexes benefit from partitioning.

Benefits of Table Partitioning

Improved Query Performance: Queries that target specific partitions can avoid scanning the entire table, resulting in faster response times. Parallel processing of partitions can enhance performance for complex queries.
Simplified Maintenance: Partition-level operations such as loading, archiving, and deleting data can be performed independently, reducing the impact on overall database performance. Large tables become easier to manage, as partitions can be individually managed and optimized.
Enhanced Data Management: Partitioning facilitates better data organization, such as separating historical data from current data, and enables efficient handling of data purging and archiving processes.

Types of Table Partitions in SQL Server

1. Range Partitioning

Range partitioning is the most common type of partitioning in SQL Server. It involves dividing a table based on a range of values in a specified column, often a date or numerical column. Each partition holds data that falls within a specific range.

Use Cases: Partitioning data by date to manage historical data efficiently. Improving query performance for range-based queries.

Example:

CREATE PARTITION FUNCTION rangePartitionFunction (datetime)
AS RANGE LEFT FOR VALUES ('2021-01-01', '2022-01-01', '2023-01-01');

CREATE PARTITION SCHEME rangePartitionScheme
AS PARTITION rangePartitionFunction TO (fg1, fg2, fg3, fg4);

CREATE TABLE SalesData (
    SaleID int,
    SaleDate datetime,
    Amount money
) ON rangePartitionScheme (SaleDate);

2. List Partitioning

List partitioning divides a table based on a list of discrete values, often used for categorizing data by values such as regions or departments. Note that SQL Server has no dedicated list partition type; list-style partitioning is simulated with a range partition function whose boundary values are the discrete category values.

Use Cases: Partitioning data by specific categories (e.g., regions, product types). Enhancing query performance for category-based queries.

Example:

CREATE PARTITION FUNCTION listPartitionFunction (nvarchar(20))
AS RANGE LEFT FOR VALUES ('East', 'North', 'South', 'West');

-- Four boundary values create five partitions, so five filegroups are needed
CREATE PARTITION SCHEME listPartitionScheme
AS PARTITION listPartitionFunction TO (fg1, fg2, fg3, fg4, fg5);

CREATE TABLE SalesRegionData (
    SaleID int,
    Region nvarchar(20),
    Amount money
) ON listPartitionScheme (Region);

3. Composite Partitioning

Composite partitioning combines two or more partitioning strategies, most commonly range-list or range-hash. SQL Server allows only one partition function per table, so composite partitioning is typically approximated, for example by partitioning on a persisted computed column that combines two keys. This approach allows for more complex and flexible data distribution strategies.
Use Cases: Managing large datasets with multiple logical divisions. Enhancing performance and manageability for complex queries.

Example:

-- Range-list partitioning example: two partition functions are defined,
-- but remember that a single table can use only one of them
CREATE PARTITION FUNCTION rangePartitionFunction (datetime)
AS RANGE LEFT FOR VALUES ('2021-01-01', '2022-01-01', '2023-01-01');

CREATE PARTITION FUNCTION listPartitionFunction (nvarchar(20))
AS RANGE LEFT FOR VALUES ('North', 'South', 'East', 'West');

Choosing the Right Partitioning Strategy

Selecting the appropriate partitioning strategy depends on several factors, including data characteristics, query patterns, and maintenance requirements. Here are some guidelines to help you choose:

Range Partitioning: Best for time-series data or data with natural ranges. Ideal for scenarios where you frequently query specific ranges of data.
List Partitioning: Suitable for categorical data with a limited number of discrete values. Useful for scenarios where queries target specific categories.
Composite Partitioning: Best for complex data structures that require multiple partitioning dimensions. Ideal for large datasets with varied query patterns and maintenance needs.

Implementing Table Partitioning in SQL Server

Step 1: Planning and Design

Identify Candidate Tables: Analyze your database to identify large tables that will benefit from partitioning. Consider factors such as table size, query patterns, and data lifecycle.
Choose a Partitioning Column: Select a column that will be used to distribute data across partitions, often based on date or range values. Ensure the column has enough distinct values to distribute data evenly.

Step 2: Creating a Partition Function

Define the Partition Function: Create a partition function that specifies the boundaries for each partition.
CREATE PARTITION FUNCTION myPartitionFunction (int)
AS RANGE LEFT FOR VALUES (1000, 2000, 3000);

Step 3: Creating a Partition Scheme

Map Partitions to Filegroups: Create a partition scheme that maps each partition to a specific filegroup.

CREATE PARTITION SCHEME myPartitionScheme
AS PARTITION myPartitionFunction TO (fg1, fg2, fg3, fg4);

Step 4: Creating a Partitioned Table

Create the Table Using the Partition Scheme: Create the partitioned table and specify the partition scheme.

CREATE TABLE myPartitionedTable (
    id int,
    data nvarchar(100),
    partition_column int
) ON myPartitionScheme (partition_column);

Step 5: Managing Indexes on Partitioned Tables

Create Aligned Indexes: Ensure indexes are partitioned in the same way as the table.

CREATE INDEX idx_myPartitionedTable
ON myPartitionedTable (partition_column)
ON myPartitionScheme (partition_column);

Step 6: Maintaining Partitioned Tables

Data Management: Use partition-level operations for data loading, archiving, and purging. Utilize partition switching to efficiently move data between tables.
Monitoring and Optimization: Regularly monitor partition performance and manage storage distribution. Rebuild or reorganize partitions as needed to maintain optimal performance.

Conclusion

Implementing table partitioning in Microsoft SQL Server is a powerful strategy for improving database performance and manageability, especially for large tables. Careful planning and implementation of partitioning can lead to significant performance gains and simplified maintenance processes. By following the steps outlined in this blog, you can ensure a successful partitioning implementation that enhances your organization's data management capabilities. Table partitioning is not just a technical enhancement; it's a strategic move towards better data management and performance optimization. Embrace this powerful feature to keep your SQL Server environment robust and responsive.

Seamless Migration: On-Premise Report Server to Azure SQL Managed Instance
Apr 22, 2024

Introduction

Migrating an on-premise report server to Azure SQL Managed Instance requires strategic planning and meticulous execution. This transition offers numerous benefits, including scalability, reliability, and reduced maintenance overhead. In this blog, we'll explore the essential steps involved in migrating an on-premise report server to Azure SQL Managed Instance, ensuring a seamless transition for your organization.

Understanding Azure SQL Managed Instance

Before diving into the migration process, let's briefly understand Azure SQL Managed Instance. It is a fully managed platform as a service (PaaS) offering from Microsoft Azure, providing near-complete compatibility with on-premise SQL Server. Managed Instance offers features like automatic patching, automated backups, and built-in high availability, making it an attractive option for hosting SQL Server workloads in the cloud.

Pre-Requisites

1. Azure SQL Managed Instance
2. SQL Server user account – used to connect to Azure SQL Managed Instance
3. Azure Virtual Machine

Configure Azure SQL Managed Instance

1. Go to the Azure Portal and search for Azure SQL Managed Instance.
2. Set up the username and password; they will be required to connect from SSMS and SSRS later.
3. Set up the required configuration.
4. Create the Azure SQL Managed Instance.
5. Create a new database (optional).
6. Open SSMS and verify the instance connection with SQL Server Authentication by entering the username and password from step 2.
7. If it connects successfully, the Azure SQL Managed Instance is configured correctly.

Configure Azure Virtual Machine

1. Go to the Azure Portal and search for Virtual Machine.
2. Select the Windows operating system and set up the required configurations.
3. Create the Virtual Machine and connect to it via RDP.

Install SSRS (SQL Server Reporting Services) in the Azure VM

1. Connect to your Azure VM using RDP.
2. Download the SQL Server 2022 Reporting Services installer from Microsoft's download site.
3. Launch the SSRS 2022 installer.
4. Choose Install Reporting Services and click Next.
5. Choose the appropriate edition to match your licensing, then click Next.
6. Accept the license and click Next.
7. Choose Install Reporting Services Only and click Next.
8. Change the installation location to a path of your choice, if you would like, then click Install.
9. Open Report Server Configuration Manager and click Connect.
10. Start the Report Service if it is not already started.

Connect the On-Premises SQL Server

1. Connect to your on-premises SQL Server.
2. Take a backup of your ReportServer and ReportServerTempDB databases.
3. After successfully backing up both databases, upload them to Azure Blob Storage.

Connect to Azure SQL Managed Instance in SSMS

1. Connect to your Azure SQL Managed Instance with your credentials.
2. Generate a SAS token to access the Azure Blob Storage account.
3. Create a new credential in the SQL Managed Instance.

CREATE CREDENTIAL [AZURE BLOB URL WITH CONTAINER/FOLDER]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'SAS TOKEN';
GO

4. Restore the ReportServer and ReportServerTempDB databases.

RESTORE DATABASE ReportServer
FROM URL = 'AZURE BLOB URL OF DATABASE BACKUP FILE';
GO
RESTORE DATABASE ReportServerTempDB
FROM URL = 'AZURE BLOB URL OF DATABASE BACKUP FILE';
GO

5. Delete the old record from the ReportServer.dbo.Keys table based on MachineName or InstanceName.

DELETE ReportServer.[dbo].[Keys] WHERE MachineName = 'OLD MACHINE NAME';

6. To reassign all subscriptions to the new server's user, execute the query below.

DECLARE @OldUserID uniqueidentifier
DECLARE @NewUserID uniqueidentifier
SELECT @OldUserID = UserID FROM dbo.Users WHERE UserName = 'OLD SERVER NAME WITH USER'
SELECT @NewUserID = UserID FROM dbo.Users WHERE UserName = 'NEW SERVER NAME WITH USER'
UPDATE dbo.Subscriptions SET OwnerID = @NewUserID WHERE OwnerID = @OldUserID

7. Restart the SQL Server Reporting Services service.
8. Open the Report Server in the browser to verify all the reports and subscriptions.
Configure SSRS (SQL Server Reporting Services) in the Azure VM

1. Connect to your Azure VM using RDP.
2. Open Report Server Configuration Manager and click Connect.
3. Start the Report Service if it is not already started.
4. Go to Database and click Change Database.
5. Choose the existing database option and click Next.
6. Enter the database connection information of the Azure SQL Managed Instance, test the connection, and click Next. – IMPORTANT
7. For credentials, choose the SQL Server Credentials option, enter the username and password of the Azure SQL Managed Instance, and click Next.
8. Verify the SQL Server instance name and other details in the Summary and click Next.
9. Click Finish.
10. In Report Server Configuration Manager, select Web Service URL, then click Apply.
11. Go to Web Portal URL, then click Apply.
12. Go to E-mail Settings and update your email settings to send report subscription emails.
13. Open a browser and enter your report server Web Portal URL.
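As a final sanity check after repointing SSRS, you can query the restored catalog from SSMS. The ReportServer schema is internal to SSRS and not officially supported for direct querying, so treat this only as a quick verification sketch:

```sql
USE ReportServer;
GO
-- Rough counts of migrated content: Catalog holds folders, reports, and
-- data sources; Subscriptions holds report subscriptions
SELECT COUNT(*) AS CatalogItems FROM dbo.[Catalog];
SELECT COUNT(*) AS SubscriptionCount FROM dbo.Subscriptions;
```

Compare the counts against the source server to confirm that no reports or subscriptions were lost during the restore.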

Difference Between LINQ and Stored Procedures
Mar 20, 2024

Introduction

In the world of database management and querying, two commonly used methods are Language Integrated Query (LINQ) and Stored Procedures. Both serve the purpose of retrieving and manipulating data from databases, but they differ significantly in their approach and implementation. In this blog post, we'll delve into the disparities between LINQ and Stored Procedures to help you understand when to use each.

1. Conceptual Differences:

- LINQ Example:

var query = from p in db.Products
            where p.Category == "Electronics"
            select p;

foreach (var product in query)
{
    Console.WriteLine(product.Name);
}

In this LINQ example, we're querying a collection of products from a database context (`db.Products`). The LINQ query selects all products belonging to the "Electronics" category.

- Stored Procedures Example:

CREATE PROCEDURE GetElectronicsProducts
AS
BEGIN
    SELECT * FROM Products WHERE Category = 'Electronics'
END

Here, we've created a Stored Procedure named `GetElectronicsProducts` that retrieves all products in the "Electronics" category from the `Products` table.

2. Performance:

- LINQ: LINQ queries are translated into SQL queries at runtime by the LINQ provider. While LINQ provides a convenient and intuitive way to query data, the performance might not always be optimal, especially for complex queries or large datasets.

- Stored Procedures: Stored Procedures are precompiled and optimized on the database server, leading to potentially better performance compared to dynamically generated LINQ queries. They can leverage indexing and caching mechanisms within the database, resulting in faster execution times.

3. Maintenance and Deployment:

- LINQ: LINQ queries are embedded directly within the application code, making them easier to maintain and deploy alongside the application itself.
However, changes to LINQ queries often require recompilation and redeployment of the application.    - Stored Procedures: Stored Procedures are maintained separately from the application code and are stored within the database. This separation of concerns allows for easier maintenance and updates to the database logic without impacting the application code. Additionally, Stored Procedures can be reused across multiple applications. 4. Security:    - LINQ: LINQ queries are susceptible to SQL injection attacks if proper precautions are not taken. Parameterized LINQ queries can mitigate this risk to some extent, but developers need to be vigilant about input validation and sanitation.    - Stored Procedures: Stored Procedures can enhance security by encapsulating database logic and preventing direct access to underlying tables. They provide a layer of abstraction that can restrict users' access to only the operations defined within the Stored Procedure, reducing the risk of unauthorized data access or modification. Conclusion: In summary, both LINQ and Stored Procedures offer distinct advantages and considerations when it comes to querying databases. LINQ provides a more integrated and developer-friendly approach, while Stored Procedures offer performance optimization, maintainability, and security benefits. The choice between LINQ and Stored Procedures depends on factors such as application requirements, performance considerations, and security concerns. Understanding the differences between the two methods can help developers make informed decisions when designing database interactions within their applications.
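As a minimal sketch of the parameterized approach mentioned in the security discussion, the procedure from section 1 can take the category as a parameter instead of having user input concatenated into SQL text. The procedure name here is illustrative, not from the original post.

```sql
-- Parameterized stored procedure: the category value is passed as data,
-- never concatenated into the SQL text, so it cannot inject new syntax.
CREATE PROCEDURE GetProductsByCategory
    @Category NVARCHAR(50)
AS
BEGIN
    SELECT * FROM Products WHERE Category = @Category;
END
GO

-- Usage:
EXEC GetProductsByCategory @Category = N'Electronics';
```

The same principle applies on the LINQ side: letting the provider bind `p.Category == userInput` as a SQL parameter, rather than building query strings by hand, is what keeps injection at bay.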

Database Migration to Microsoft SQL Server
Feb 07, 2024

Introduction

Migrating Microsoft SQL Server databases from one server to another is a critical task that requires careful planning and execution. If you are overseeing such a migration project, it's essential to have a detailed checklist to ensure a smooth and successful transition. In this blog, we explore the key steps involved in migrating SQL Server databases and provide a comprehensive checklist to guide you through the process.

Checklist for SQL Server Database Migration

1. Assessment and Planning
- Database Inventory: Identify all databases to be migrated. Document database sizes, configurations, and dependencies.
- Compatibility Check: Verify the compatibility of SQL Server versions. Check for deprecated features or components.
- Backup Strategy: Ensure full backups of all databases are taken before migration. Confirm the backup and restore processes are working correctly.

2. Server Environment Preparation
- Server Infrastructure: Verify that the new server meets hardware and software requirements. Install the necessary SQL Server version on the new server.
- Security Considerations: Plan for server-level security, including logins and permissions. Transfer relevant security configurations from the old server.
- Firewall and Networking: Update firewall rules to allow communication between the old and new servers. Confirm network configurations to avoid connectivity issues.

3. Database Schema and Data Migration
- Schema Scripting: Generate scripts for the database schema (tables, views, stored procedures, etc.). Validate the scripts in a test environment.
- Data Migration: Choose an appropriate method for data migration (Backup and Restore, Detach and Attach, or SQL Server Integration Services - SSIS). Perform a trial data migration to identify and address potential issues.
- Restore Strategy: Ensure full backups of all databases are available on the new server. Restore the databases and confirm the processes are working correctly.

4. Application and Dependency Testing
- Application Compatibility: Test the application with the new SQL Server to ensure compatibility. Address any issues related to SQL Server version changes.
- Dependency Verification: Confirm that linked servers, jobs, database mail, and maintenance plans are updated. Test connectivity to other applications relying on the database.

5. Post-Migration Validation
- Data Integrity Check: Execute DBCC CHECKDB to ensure the integrity of the migrated databases. Address any issues identified during the integrity check.
- Performance Testing: Conduct performance testing to ensure the new server meets performance expectations. Optimize queries or configurations if needed.
- User Acceptance Testing (UAT): Involve end users in testing to validate the functionality of the migrated databases. Address any user-reported issues promptly.

Conclusion

A successful Microsoft SQL Server database migration requires meticulous planning, thorough testing, and effective communication. Following this comprehensive checklist will help ensure a smooth transition from one server to another while minimizing disruptions to business operations. Regularly communicate with your team and stakeholders throughout the migration process to address any challenges promptly and ensure a successful outcome.

Download Checklist for MSSQL Server Migration
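For the "Backup and Restore" method referenced in step 3, a minimal T-SQL sketch of the core operations looks like this. The database name, file paths, and logical file names (`SalesDB`, `SalesDB_log`) are placeholders for your environment; find the real logical names with `RESTORE FILELISTONLY` before restoring.

```sql
-- On the old server: take a full, checksummed backup.
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB.bak'
WITH CHECKSUM, COMPRESSION, INIT;

-- On the new server: restore, relocating files if drive layout differs.
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB.bak'
WITH MOVE N'SalesDB'     TO N'E:\Data\SalesDB.mdf',
     MOVE N'SalesDB_log' TO N'F:\Logs\SalesDB_log.ldf',
     RECOVERY;

-- Post-restore integrity check (step 5 of the checklist).
DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;
```

Running the restore first against a test instance, as the checklist's trial-migration item suggests, is the safest way to flush out path and permission issues.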

Quick Guide: SQL Server Table Constraints
Jan 28, 2024

If you want a list of all constraints applied to a particular table in SQL Server, the following query will get them in one go. Replace `<table_name>` with the name of your table.

```sql
DECLARE @TABLENAME VARCHAR(50) = '<table_name>'

SELECT ObjectName
    ,TypeOfObject
    ,TypeOfConstraint
    ,ConstraintName
    ,ConstraintDescription
FROM (
    SELECT schema_name(t.schema_id) + '.' + t.[name] AS ObjectName
        ,CASE
            WHEN t.[type] = 'U' THEN 'Table'
            WHEN t.[type] = 'V' THEN 'View'
            END AS [TypeOfObject]
        ,CASE
            WHEN c.[type] = 'PK' THEN 'Primary key'
            WHEN c.[type] = 'UQ' THEN 'Unique constraint'
            WHEN i.[type] = 1 THEN 'Unique clustered index'
            WHEN i.type = 2 THEN 'Unique index'
            END AS TypeOfConstraint
        ,ISNULL(c.[name], i.[name]) AS ConstraintName
        ,SUBSTRING(column_names, 1, LEN(column_names) - 1) AS [ConstraintDescription]
    FROM sys.objects t
    LEFT OUTER JOIN sys.indexes i ON t.object_id = i.object_id
    LEFT OUTER JOIN sys.key_constraints c ON i.object_id = c.parent_object_id
        AND i.index_id = c.unique_index_id
    CROSS APPLY (
        SELECT col.[name] + ', '
        FROM sys.index_columns ic
        INNER JOIN sys.columns col ON ic.object_id = col.object_id
            AND ic.column_id = col.column_id
        WHERE ic.object_id = t.object_id
            AND ic.index_id = i.index_id
        ORDER BY col.column_id
        FOR XML PATH('')
        ) D(column_names)
    WHERE is_unique = 1
        AND t.name = @TABLENAME
        AND t.is_ms_shipped <> 1

    UNION ALL

    SELECT schema_name(fk_tab.schema_id) + '.' + fk_tab.name AS foreign_table
        ,'Table'
        ,'Foreign key'
        ,fk.name AS fk_ConstraintName
        ,cols.[name] + ' REFERENCES ' + schema_name(pk_tab.schema_id) + '.'
            + pk_tab.name + ' (' + c2.[name] + ')'
    FROM sys.foreign_keys fk
    INNER JOIN sys.tables fk_tab ON fk_tab.object_id = fk.parent_object_id
    INNER JOIN sys.tables pk_tab ON pk_tab.object_id = fk.referenced_object_id
    INNER JOIN sys.foreign_key_columns fk_cols ON fk_cols.constraint_object_id = fk.object_id
    INNER JOIN sys.columns cols ON cols.object_id = fk_cols.parent_object_id
        AND cols.column_id = fk_cols.parent_column_id
    INNER JOIN sys.columns c2 ON c2.object_id = fk_cols.referenced_object_id
        AND c2.column_id = fk_cols.referenced_column_id
    WHERE fk_tab.name = @TABLENAME
        OR pk_tab.name = @TABLENAME

    UNION ALL

    SELECT schema_name(t.schema_id) + '.' + t.[name]
        ,'Table'
        ,'Check constraint'
        ,con.[name] AS ConstraintName
        ,con.[definition]
    FROM sys.check_constraints con
    LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id
    LEFT OUTER JOIN sys.all_columns col ON con.parent_column_id = col.column_id
        AND con.parent_object_id = col.object_id
    WHERE t.name = @TABLENAME

    UNION ALL

    SELECT schema_name(t.schema_id) + '.' + t.[name]
        ,'Table'
        ,'Default constraint'
        ,con.[name]
        ,col.[name] + ' = ' + con.[definition]
    FROM sys.default_constraints con
    LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id
    LEFT OUTER JOIN sys.all_columns col ON con.parent_column_id = col.column_id
        AND con.parent_object_id = col.object_id
    WHERE t.name = @TABLENAME
    ) a
ORDER BY ObjectName
    ,TypeOfConstraint
    ,ConstraintName
```

The result lists primary keys, unique constraints and indexes, foreign keys, check constraints, and default constraints for the table. Enjoy!

Quick Guide: SPROC Performance Check
Jan 27, 2024

Stored procedures are an essential part of database management systems. They are used to execute frequently run queries and reduce the load on the database server. However, if not optimized correctly, they can cause performance issues. In this blog, we will discuss how to check the performance of a stored procedure.

Steps to Check the Performance of a Stored Procedure

1. Identify the stored procedure: The first step is to identify the stored procedure that needs to be optimized. You can use SQL Server Management Studio (SSMS) for this.
2. Check execution time: Once you have identified the stored procedure, check its execution time with the SET STATISTICS TIME ON command.
3. Check the query plan: Next, inspect the query plan of the stored procedure with the SET SHOWPLAN_TEXT ON command.
4. Check indexes: Indexes play a crucial role in the performance of a stored procedure. Use the sp_helpindex command to inspect the indexes on the tables the procedure queries.
5. Check for blocking: Blocking can cause performance issues in a stored procedure. Use the sp_who2 command to check for blocking sessions.
6. Check for deadlocks: Deadlocks can also cause performance issues. Enabling trace flag 1204 with DBCC TRACEON writes deadlock details to the SQL Server error log.
Examples

Here are some examples to help you understand how to check the performance of a stored procedure. To try these queries yourself, first run the following table, data, and stored procedure scripts:

```sql
-- Step 1: Create a dummy table
CREATE TABLE dbo.Orders (
    OrderID INT PRIMARY KEY,
    CustomerID NVARCHAR(10),
    OrderDate DATETIME,
    ProductID INT,
    Quantity INT
);

-- Step 2: Insert dummy data into the table
INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate, ProductID, Quantity)
VALUES
    (1, N'ALFKI', '2024-01-23', 101, 5),
    (2, N'ALFKI', '2024-01-24', 102, 3),
    (3, N'BONAP', '2024-01-25', 103, 7),
    (4, N'BONAP', '2024-01-26', 104, 2),
    (5, N'COSME', '2024-01-27', 105, 4);
GO

-- Step 3: Create a stored procedure
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerID NVARCHAR(10)
AS
BEGIN
    SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;
END;
```

Example 1: Check execution time

```sql
SET STATISTICS TIME ON;
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = N'ALFKI';
SET STATISTICS TIME OFF;
```

Example 2: Check the query plan (SET SHOWPLAN_TEXT must be the only statement in its batch, hence the GO separators)

```sql
SET SHOWPLAN_TEXT ON;
GO
EXEC dbo.usp_GetOrdersByCustomer @CustomerID = N'ALFKI';
GO
SET SHOWPLAN_TEXT OFF;
GO
```

Example 3: Check indexes (sp_helpindex takes a table or view, so point it at the table the procedure queries rather than the procedure itself)

```sql
EXEC sp_helpindex 'dbo.Orders';
```

Example 4: Check for blocking

```sql
EXEC sp_who2;
```

Example 5: Check for deadlocks

```sql
DBCC TRACEON (1204); -- writes deadlock details to the SQL Server error log
```

Conclusion

In conclusion, checking the performance of a stored procedure is essential to ensure that it runs efficiently. By following the steps mentioned above, you can identify performance issues and optimize the stored procedure. I hope this blog helps you in optimizing your stored procedures. If you have any questions or suggestions, please feel free to leave a comment below.
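Beyond the session-level SET options shown above, aggregate runtime statistics for a procedure can also be read from the plan cache. This is an additional sketch, not part of the original steps, using the sys.dm_exec_procedure_stats DMV (available in SQL Server 2008 and later; it only reflects executions whose plans are still cached):

```sql
-- Execution count and average CPU / elapsed time (in microseconds)
-- for cached executions of the sample procedure.
SELECT OBJECT_NAME(ps.object_id) AS ProcedureName,
       ps.execution_count,
       ps.total_worker_time  / ps.execution_count AS avg_cpu_time,
       ps.total_elapsed_time / ps.execution_count AS avg_elapsed_time,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.object_id = OBJECT_ID('dbo.usp_GetOrdersByCustomer');
```

This gives a quick, low-overhead view of how the procedure has been behaving in production without re-running it under SET STATISTICS TIME.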
