Introduction

Migrating Microsoft SQL Server databases from one server to another is a critical task that requires careful planning and execution. If you are overseeing such a migration project, a detailed checklist is essential to ensure a smooth and successful transition. In this blog, we will explore the key steps involved in migrating SQL Server databases and provide a comprehensive checklist to guide you through the process.

Checklist for SQL Server Database Migration

1. Assessment and Planning
- Database Inventory: Identify all databases to be migrated. Document database sizes, configurations, and dependencies.
- Compatibility Check: Verify the compatibility of SQL Server versions. Check for deprecated features or components.
- Backup Strategy: Ensure full backups of all databases are taken before migration. Confirm the backup and restore processes are working correctly (see the sketch after this checklist).

2. Server Environment Preparation
- Server Infrastructure: Verify that the new server meets hardware and software requirements. Install the necessary SQL Server version on the new server.
- Security Considerations: Plan for server-level security, including logins and permissions. Transfer relevant security configurations from the old server.
- Firewall and Networking: Update firewall rules to allow communication between the old and new servers. Confirm network configurations to avoid connectivity issues.

3. Database Schema and Data Migration
- Schema Scripting: Generate scripts for the database schema (tables, views, stored procedures, etc.). Validate the scripts in a test environment.
- Data Migration: Choose an appropriate method for data migration (Backup and Restore, Detach and Attach, or SQL Server Integration Services - SSIS). Perform a trial data migration to identify and address potential issues.
- Restore Strategy: Ensure full backups of all databases are available on the new server. Restore the databases and confirm the processes are working correctly.

4. Application and Dependency Testing
- Application Compatibility: Test the application with the new SQL Server to ensure compatibility. Address any issues related to SQL Server version changes.
- Dependency Verification: Confirm that linked servers, jobs, database mail, and maintenance plans are updated. Test connectivity to other applications relying on the database.

5. Post-Migration Validation
- Data Integrity Check: Execute DBCC CHECKDB to ensure the integrity of the migrated databases. Address any issues identified during the integrity check.
- Performance Testing: Conduct performance testing to ensure the new server meets performance expectations. Optimize queries or configurations if needed.
- User Acceptance Testing (UAT): Involve end-users in testing to validate the functionality of the migrated databases. Address any user-reported issues promptly.

Conclusion

A successful Microsoft SQL Server database migration requires meticulous planning, thorough testing, and effective communication. Following this comprehensive checklist will help ensure a smooth transition from one server to another while minimizing disruptions to business operations. Communicate regularly with your team and stakeholders throughout the migration process to address any challenges promptly and ensure a successful outcome.
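As a concrete illustration of the Backup and Restore migration method and the step 5 integrity check, here is a minimal T-SQL sketch. The database name, file paths, and logical file names are placeholders, not values from your environment.

-- On the old server: take a full, checksummed backup (placeholder names).
BACKUP DATABASE [SalesDB]
TO DISK = N'\\FileShare\Migration\SalesDB.bak'
WITH CHECKSUM, INIT, STATS = 10;

-- On the new server: restore the backup, relocating files as needed.
-- The logical names 'SalesDB' and 'SalesDB_log' are assumptions; check
-- them first with RESTORE FILELISTONLY.
RESTORE DATABASE [SalesDB]
FROM DISK = N'\\FileShare\Migration\SalesDB.bak'
WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB.mdf',
     MOVE N'SalesDB_log' TO N'E:\Logs\SalesDB_log.ldf',
     STATS = 10;

-- Post-migration validation (step 5): verify database integrity.
DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;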
If you want a list of the constraints applied to a particular table in SQL Server, this query will get them in one go. Set your table name in the variable and execute:

DECLARE @TABLENAME VARCHAR(50) = '<table_name>';

SELECT ObjectName,
       TypeOfObject,
       TypeOfConstraint,
       ConstraintName,
       ConstraintDescription
FROM (
    -- Primary keys, unique constraints, and unique indexes
    SELECT schema_name(t.schema_id) + '.' + t.[name] AS ObjectName,
           CASE
               WHEN t.[type] = 'U' THEN 'Table'
               WHEN t.[type] = 'V' THEN 'View'
           END AS [TypeOfObject],
           CASE
               WHEN c.[type] = 'PK' THEN 'Primary key'
               WHEN c.[type] = 'UQ' THEN 'Unique constraint'
               WHEN i.[type] = 1 THEN 'Unique clustered index'
               WHEN i.type = 2 THEN 'Unique index'
           END AS TypeOfConstraint,
           ISNULL(c.[name], i.[name]) AS ConstraintName,
           SUBSTRING(column_names, 1, LEN(column_names) - 1) AS [ConstraintDescription]
    FROM sys.objects t
    LEFT OUTER JOIN sys.indexes i ON t.object_id = i.object_id
    LEFT OUTER JOIN sys.key_constraints c
        ON i.object_id = c.parent_object_id
       AND i.index_id = c.unique_index_id
    CROSS APPLY (
        SELECT col.[name] + ', '
        FROM sys.index_columns ic
        INNER JOIN sys.columns col
            ON ic.object_id = col.object_id
           AND ic.column_id = col.column_id
        WHERE ic.object_id = t.object_id
          AND ic.index_id = i.index_id
        ORDER BY col.column_id
        FOR XML PATH('')
    ) D(column_names)
    WHERE is_unique = 1
      AND t.name = @TABLENAME
      AND t.is_ms_shipped <> 1

    UNION ALL

    -- Foreign keys (referencing or referenced by the table)
    SELECT schema_name(fk_tab.schema_id) + '.' + fk_tab.name AS foreign_table,
           'Table',
           'Foreign key',
           fk.name AS fk_ConstraintName,
           cols.[name] + ' REFERENCES ' + schema_name(pk_tab.schema_id) + '.' + pk_tab.name + ' (' + c2.[name] + ')'
    FROM sys.foreign_keys fk
    INNER JOIN sys.tables fk_tab ON fk_tab.object_id = fk.parent_object_id
    INNER JOIN sys.tables pk_tab ON pk_tab.object_id = fk.referenced_object_id
    INNER JOIN sys.foreign_key_columns fk_cols ON fk_cols.constraint_object_id = fk.object_id
    INNER JOIN sys.columns cols
        ON cols.object_id = fk_cols.parent_object_id
       AND cols.column_id = fk_cols.parent_column_id
    INNER JOIN sys.columns c2
        ON c2.object_id = fk_cols.referenced_object_id
       AND c2.column_id = fk_cols.referenced_column_id
    WHERE fk_tab.name = @TABLENAME
       OR pk_tab.name = @TABLENAME

    UNION ALL

    -- Check constraints
    SELECT schema_name(t.schema_id) + '.' + t.[name],
           'Table',
           'Check constraint',
           con.[name] AS ConstraintName,
           con.[definition]
    FROM sys.check_constraints con
    LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id
    LEFT OUTER JOIN sys.all_columns col
        ON con.parent_column_id = col.column_id
       AND con.parent_object_id = col.object_id
    WHERE t.name = @TABLENAME

    UNION ALL

    -- Default constraints
    SELECT schema_name(t.schema_id) + '.' + t.[name],
           'Table',
           'Default constraint',
           con.[name],
           col.[name] + ' = ' + con.[definition]
    FROM sys.default_constraints con
    LEFT OUTER JOIN sys.objects t ON con.parent_object_id = t.object_id
    LEFT OUTER JOIN sys.all_columns col
        ON con.parent_column_id = col.column_id
       AND con.parent_object_id = col.object_id
    WHERE t.name = @TABLENAME
) a
ORDER BY ObjectName,
         TypeOfConstraint,
         ConstraintName;

The query returns one row per constraint, with the object name and type, the constraint type and name, and a short description (the key columns, referenced columns, or constraint definition). Enjoy!
In recent work with a client, I was asked how to find the indexes applied to a particular table in SQL Server. If you want to list the indexes on a particular table, just set your table name in the variable and execute the query below. Note that the WHERE clause intentionally excludes primary keys, unique constraints, and unique indexes, so the result contains only the non-unique indexes.

DECLARE @TABLENAME VARCHAR(50) = '<table_name>';

SELECT '[' + s.name + '].[' + sObj.name + ']' AS TableName,
       ind.name AS IndexName,
       ind.type_desc AS IndexType,
       STUFF((
           -- Concatenate the key columns of each index, in key order.
           SELECT ', [' + sc.name + ']' AS "text()"
           FROM syscolumns AS sc
           INNER JOIN sys.index_columns AS ic
               ON ic.object_id = sc.id
              AND ic.column_id = sc.colid
           WHERE sc.id = Obj.object_id
             AND ic.index_id = sind.indid
             AND ic.is_included_column = 0
           ORDER BY key_ordinal
           FOR XML PATH('')
       ), 1, 2, '') AS IndexedColumns
FROM sysindexes AS sind
INNER JOIN sys.indexes AS ind
    ON ind.object_id = sind.id
   AND ind.index_id = sind.indid
INNER JOIN sysobjects AS sObj ON sObj.id = sind.id
INNER JOIN sys.objects AS Obj
    ON Obj.object_id = sObj.id
   AND Obj.is_ms_shipped = 0
INNER JOIN sys.schemas AS s ON s.schema_id = Obj.schema_id
WHERE ind.object_id = OBJECT_ID(@TABLENAME)
  AND ind.is_primary_key = 0
  AND ind.is_unique = 0
  AND ind.is_unique_constraint = 0
ORDER BY TableName,
         IndexName;

The query returns the schema-qualified table name, the index name and type, and the indexed columns.
Are you grappling with performance issues in your project? Look no further: Application Insights is here to help! In this blog post, I'll guide you through the process of configuring and implementing Application Insights to supercharge your application's performance monitoring.

Step 1: Installing the Application Insights Package

The first crucial step is to integrate the Application Insights package into your project. Simply add the following PackageReference to your project file:

<PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.22.0" />

Then register the service in Program.cs or Startup.cs (DependencyTrackingTelemetryModule lives in the Microsoft.ApplicationInsights.DependencyCollector namespace):

builder.Services.AddApplicationInsightsTelemetry();
builder.Services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, o) =>
{
    // Capture the full SQL command text for dependency calls.
    module.EnableSqlCommandTextInstrumentation = true;
});

Add the instrumentation key (or, in newer setups, a connection string) in appsettings.json:

"ApplicationInsights": {
    "InstrumentationKey": ""
}

This sets the stage for a seamless integration of Application Insights into your application.

Step 2: Unleashing the Power of Application Insights

Now that the package is part of your project, let's dive into the benefits it brings to the table:

1. Identify Performance Bottlenecks
Application Insights allows you to track the execution time of individual stored procedures, queries, and API calls. This invaluable information helps you pinpoint areas that require optimization, paving the way for improved performance.

2. Monitor Database Interactions
Efficiently analyze the database calls made by specific APIs within your application. With this visibility, you can optimize and fine-tune database interactions for enhanced performance.

3. Comprehensive Error and Exception Tracking
Application Insights goes beyond performance monitoring by providing detailed information about errors, traces, and exceptions. This level of insight is instrumental in effective troubleshooting, allowing you to identify and resolve issues swiftly.

Step 3: Integration with Azure for Data Collection and Analysis

To maximize the benefits of Application Insights, consider integrating it with Azure for comprehensive data collection and analysis. This step amplifies your ability to make informed decisions regarding performance optimization and problem resolution.

In conclusion, Application Insights equips you with the tools needed to elevate your application's performance. By identifying bottlenecks, monitoring database interactions, and offering comprehensive error tracking, it becomes a cornerstone for effective troubleshooting and optimization. Stay tuned for more tips and insights on how to harness the full potential of Application Insights for a high-performing application!
Setting up replication in SQL Server can be a powerful way to ensure data consistency and availability across multiple servers. In this step-by-step guide, we'll walk through the process of configuring replication on SQL Servers.

Step 1: Understand Replication Types

Before diving into configuration, it's crucial to understand the types of replication available in SQL Server.

- Snapshot Replication: Takes a snapshot of the data at a specific point in time.
- Transactional Replication: Replicates changes in real-time as they occur.
- Merge Replication: Allows bidirectional data synchronization between servers.

Choose the replication type that aligns with your specific needs and database architecture.

Step 2: Prepare Your Environment

Ensure that your SQL Server environment is ready for replication. This involves verifying that you have the necessary permissions and establishing proper connectivity between the SQL Server instances. Remember that replication involves three key components: Publisher, Distributor, and Subscribers. The Distributor can be on the same server as the Publisher or a separate server.

Step 3: Configure Distributor

If a Distributor isn't already set up, proceed to configure one. This involves specifying the server that will act as the Distributor and setting up distribution databases. Use either SQL Server Management Studio (SSMS) or T-SQL scripts for this configuration (a T-SQL sketch follows at the end of this post).

Step 4: Enable Replication on the Publisher

1. Open SSMS and connect to the Publisher.
2. Right-click on the target database and choose "Tasks" > "Replication" > "Configure Distribution."
3. Follow the wizard, specifying the Distributor configured in Step 3.

Step 5: Choose Articles

Define the articles by selecting the tables, views, or stored procedures you want to replicate. This step allows you to fine-tune your replication by specifying data filters, choosing columns to replicate, and configuring additional options based on your specific requirements.

Step 6: Configure Subscribers

1. Connect to the Subscribers in SSMS.
2. Right-click on the Replication folder and choose "Configure Distribution."
3. Follow the wizard, specifying the Distributor and configuring additional settings based on your chosen replication type.

Step 7: Configure Subscription

With the Distributor and Subscribers configured, it's time to set up subscriptions.

1. In SSMS, navigate to the Replication folder on the Publisher.
2. Right-click on Local Publications and choose "New Subscriptions."
3. Follow the wizard to configure the subscription, specifying the Subscribers and defining any additional settings.

Step 8: Monitor and Maintain

Regular monitoring and maintenance are essential for a healthy replication environment.

- Use the Replication Monitor in SSMS to view the status of publications, subscriptions, and any potential errors.
- Implement routine maintenance tasks such as backing up and restoring the replication databases.

Conclusion

Configuring replication in SQL Server involves a series of well-defined steps. By understanding your replication needs, preparing your environment, and carefully configuring each component, you can establish a robust and reliable replication setup. Regular monitoring and maintenance ensure the ongoing efficiency and performance of your replication environment.
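Step 3 mentions that the Distributor can be configured with T-SQL as well as with SSMS. Here is a minimal sketch of such a script, run on the Publisher, assuming the Distributor is the same server. The password and database names are placeholders, and real environments typically need additional parameters (see the sp_adddistributor family of procedures in the documentation).

USE master;

DECLARE @dist sysname = @@SERVERNAME;  -- Distributor on the same server as the Publisher

-- Register the local server as its own Distributor.
EXEC sp_adddistributor
     @distributor = @dist,
     @password = N'StrongP@ssw0rd';    -- distributor_admin password (placeholder)

-- Create the distribution database with its default name.
EXEC sp_adddistributiondb
     @database = N'distribution';

-- Register the Publisher against this Distributor.
EXEC sp_adddistpublisher
     @publisher = @dist,
     @distribution_db = N'distribution';

-- Enable a database for transactional publishing (placeholder name).
EXEC sp_replicationdboption
     @dbname = N'SalesDB',
     @optname = N'publish',
     @value = N'true';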
SSRS, or SQL Server Reporting Services, is one of the tools available in Microsoft SQL Server Data Tools. It is a server-based reporting platform that you can use to create and manage tabular, matrix, graphical, and free-form reports that contain data from relational and multidimensional data sources. SSRS is a set of ready-made tools that help you create, deploy, and manage reports. SSRS allows reports to be exported in various formats (Excel, PDF, Word, CSV, XML, etc.) and delivered via email or dropped to a shared location.

Advantages of using SSRS

- Supports a variety of file formats
- Facility to drill down to different levels of data
- Helpful and perceptive reporting
- Access to enterprise-level features
- Simple implementation owing to a centralized server

Why SSRS?

Here are the prime reasons for using the SSRS tool:

- SSRS is an enhanced tool compared to Crystal Reports
- Faster processing of reports on both relational and multidimensional data
- Allows better and more accurate decision-making for users
- Allows users to interact with information without involving IT professionals
- It provides a web-based connection for deploying reports, so reports can be accessed over the internet
- SSRS allows reports to be exported in different formats
- You can deliver SSRS reports using emails
- SSRS provides a host of security features, which help you control who can access which report

Working and Architecture

The main components of SSRS are the following:

- Report Builder: A drag-and-drop utility used to pick any functionality or tables and drag them as per usage. It runs on the client computer.
- Report Designer: Used to develop reports; complex reports can be developed with ease using this component. It is a publishing tool hosted in SSDT (SQL Server Data Tools) or Visual Studio.
- Report Manager: Used to access web-based reports.
- Report Server: The core server component, which stores its metadata using the SQL Server engine.
- Report Server Database: Stores security settings, report definitions, metadata, delivery data, etc.
- Data Sources: The reporting service components retrieve data from data sources such as multidimensional, relational, or traditional data sources.

Reporting Life Cycle

- Report Authoring: In this phase, the report author defines the layout and syntax of the data. The tools used in this process are SQL Server Development Studio and the SSRS tool.
- Management: This phase involves managing a published report, which is mostly part of websites. In this stage, you need to consider access control over report execution.
- Delivery: In this phase, you need to understand when the reports need to be delivered to the customer base. Delivery can be on-demand or on a pre-defined schedule. You can also add an automated subscription feature that creates reports and sends them to customers automatically.

What is RDL?

Report Definition Language (RDL) is an XML representation of a SQL Server Reporting Services report definition. A report definition contains data retrieval and layout information for a report.

Create RDL Report

You can create an RDL report using any of the following reporting tools:

Syncfusion Web Report Designer: Provides an intuitive user interface to create or edit reports online.

Microsoft Report Builder: You can create an RDL report using the Microsoft stand-alone Report Builder.
Visual Studio Report Server project template: To create an RDL report in Visual Studio, a Report Server project is required where you can save your report definition (.rdl) file.

How to create a Report Server project

1. From the File menu, select New > Project.
2. In the left-most column under Installed, select Reporting Services.
3. Select the Report Server Project icon.

Creating a report definition file (RDL)

1. In the Solution Explorer pane, right-click on the Reports folder. If you don't see the Solution Explorer pane, select View menu > Solution Explorer.
2. Select Add > New Item.
3. In the Add New Item window, select the Report icon.
4. Type "PatientDetail.rdl" into the Name text box.
5. Select the Add button on the lower right side of the Add New Item dialog box to complete the process. Report Designer opens and displays the PatientDetail report file in Design view.

Setup Connection

1. In the Report Data pane, select New > Data Source. If the Report Data pane isn't visible, select View menu > Report Data (or Ctrl+Alt+D). The Data Source Properties dialog box opens with the General section displayed.
2. In the Name text box, type "PatientDetail".
3. Select the Embedded connection radio button.
4. In the Type dropdown selection box, select "Microsoft SQL Server".
5. In the Connection string text box, type the following string: Data source=Magnusminds; initial catalog=Patient
6. Select the Credentials tab, and under the section "Change the credentials used to connect to the data source", select the "Use Windows Authentication (integrated security)" radio button.
7. Select OK to complete the process.

Define a Dataset for the Table Report

1. In the Report Data pane, select New > Dataset.... The Dataset Properties dialog box opens with the Query section displayed.
2. In the Name text box, type "GetPatientDetails".
3. Below that, select the "Use a dataset embedded in my report" radio button.
4. From the Data source dropdown box, select PatientDetail.
5. For the Query type, select the Text radio button and type your query into the Query text box (a sample query is sketched after this post).
6. Select OK to exit the Dataset Properties dialog box. The Report Data pane displays the GetPatientDetails dataset and its fields.

Add a Table to the Report

1. Select the Toolbox tab in the left pane of the Report Designer. If you don't see the Toolbox tab, select View menu > Toolbox.
2. With your mouse, select the Table object and drag it to the report design surface. Report Designer draws a table data region with three columns in the center of the design surface.
3. In the Report Data pane, expand the GetPatientDetails dataset to display the fields.
4. Drag the fields from the Report Data pane to the columns in the table.

Preview Your Report

Select the Preview tab. Report Designer runs the report and displays it in Preview view.

Deployment of an RDL Report File in SQL Report Server

Deploy by uploading the RDL file to the Report Server:

1. Open the SSRS server from the web portal URL. There, you will see the Upload button.
2. Click the Upload option and browse to the .rdl file of the report. This uploads your report to the report server.
3. Click on the uploaded file; it runs the report in the browser, so you can view it there.
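Step 5 of the dataset setup leaves the query text open. Here is a hypothetical query for the GetPatientDetails dataset; the dbo.Patient table and its columns are assumptions inferred from the report's name, not part of the original article.

-- Hypothetical dataset query; table and column names are assumptions.
SELECT PatientID,
       FirstName,
       LastName,
       AdmissionDate
FROM dbo.Patient;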
In this article, we will review DELETE and UPDATE CASCADE rules in SQL Server foreign keys with different examples.

DELETE CASCADE: When we create a foreign key using this option, deleting the referenced row in the parent table (which has a primary key) also deletes the referencing rows in the child table.

UPDATE CASCADE: When we create a foreign key using UPDATE CASCADE, updating the referenced row in the parent table also updates the referencing rows in the child table.

We will be discussing the following topics in this article:

- Creating DELETE CASCADE and UPDATE CASCADE rules in a foreign key using a T-SQL script
- Triggers on a table with DELETE or UPDATE cascading foreign keys

Let us see how to create a foreign key with DELETE and UPDATE CASCADE rules, along with a few examples.

Creating a foreign key with DELETE and UPDATE CASCADE rules

The T-SQL scripts referenced in this section create a parent table, a child table, and a foreign key on the child table with the DELETE CASCADE rule, then insert some sample data (a consolidated sketch of these scripts appears at the end of this post). After checking the records, deleting the row in the parent table with CountryID = 1 also deletes the rows in the child table that have CountryID = 1.

A foreign key with the UPDATE CASCADE rule is created the same way. Updating a CountryID in the Countries table then also updates the referencing rows in the child table States. A foreign key can also be created with both UPDATE and DELETE cascade rules.

To know the update and delete actions defined on a foreign key, query the sys.foreign_keys view, replacing the constraint name in the script. The result shows whether DELETE CASCADE and UPDATE CASCADE actions are defined on the foreign key.

Let's move forward and check the behavior of delete and update rules on foreign keys where a child table acts as a parent table to another child table. The following example demonstrates this scenario: "Countries" is the parent table of the "States" table, and the "States" table is the parent table of the "Cities" table.

We first create a foreign key with the DELETE CASCADE rule on the States table, referencing CountryID in the parent table Countries. On the Cities table, we then create a foreign key without a DELETE CASCADE rule. If we try to delete a record with CountryID = 3, it throws an error: the delete on the parent table "Countries" tries to delete the referencing rows in the child table States, but the Cities table has a foreign key constraint with no action for delete, and the referenced value still exists in that table. The delete fails at the second foreign key.

When we recreate the second foreign key with the DELETE CASCADE rule, the same delete command runs successfully, deleting records in the child table "States", which in turn deletes records in the second child table "Cities".

Triggers on a table with DELETE CASCADE or UPDATE CASCADE foreign keys

An INSTEAD OF UPDATE trigger cannot be created on a table if a foreign key with UPDATE CASCADE already exists on it. It throws the error "Cannot create INSTEAD OF DELETE or INSTEAD OF UPDATE TRIGGER 'trigger name' on table 'table name'. This is because the table has a FOREIGN KEY with cascading DELETE or UPDATE." Similarly, we cannot create an INSTEAD OF DELETE trigger on a table when a foreign key with the DELETE CASCADE rule already exists on it.

Conclusion

In this article, we explored a few examples of DELETE CASCADE and UPDATE CASCADE rules in SQL Server foreign keys.
In case you have any questions, please feel free to ask in the comment section below.
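For reference, here is a consolidated sketch of the scripts this post walks through. The Countries/States table and column names follow the article; the sample data values and the constraint name are placeholders.

-- Parent table.
CREATE TABLE dbo.Countries
(
    CountryID   INT PRIMARY KEY,
    CountryName VARCHAR(50)
);

-- Child table with DELETE and UPDATE CASCADE rules on the foreign key.
CREATE TABLE dbo.States
(
    StateID   INT PRIMARY KEY,
    StateName VARCHAR(50),
    CountryID INT
        CONSTRAINT FK_States_Countries
        FOREIGN KEY REFERENCES dbo.Countries (CountryID)
        ON DELETE CASCADE
        ON UPDATE CASCADE
);

-- Placeholder sample data.
INSERT INTO dbo.Countries VALUES (1, 'India'), (2, 'USA'), (3, 'Germany');
INSERT INTO dbo.States VALUES (10, 'Gujarat', 1), (20, 'Texas', 2), (30, 'Bavaria', 3);

-- DELETE CASCADE in action: this also removes the Gujarat row from dbo.States.
DELETE FROM dbo.Countries WHERE CountryID = 1;

-- UPDATE CASCADE in action: this also updates CountryID in dbo.States.
UPDATE dbo.Countries SET CountryID = 5 WHERE CountryID = 2;

-- Inspect the rules defined on a foreign key (replace the constraint name).
SELECT name,
       delete_referential_action_desc,
       update_referential_action_desc
FROM sys.foreign_keys
WHERE name = 'FK_States_Countries';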
What is table partitioning in SQL?

Table partitioning is a way to divide a large table into smaller, more manageable parts without having to create separate tables for each part. Data in a partitioned table is physically stored in groups of rows called partitions, and each partition can be accessed and maintained separately. Partitioning is not visible to end users; a partitioned table behaves like one logical table when queried.

Data in a partitioned table is partitioned based on a single column, the partition column, often called the partition key. Only one column can be used as the partition column, but it is possible to use a computed column. The partition scheme maps the logical partitions to physical filegroups. It is possible to map each partition to its own filegroup, or all partitions to one filegroup (see the sketch below).
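As a minimal illustration, here is a sketch of a date-partitioned table. The table, the yearly boundary values, and the single-filegroup mapping are placeholder choices, not prescriptions.

-- Partition function: two boundary values define three partitions.
CREATE PARTITION FUNCTION pfOrderYear (date)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

-- Partition scheme: maps every logical partition to one filegroup here;
-- each partition could be mapped to its own filegroup instead.
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- The table is created on the scheme; OrderDate is the partition column.
CREATE TABLE dbo.Orders
(
    OrderID   INT   NOT NULL,
    OrderDate DATE  NOT NULL,
    Amount    MONEY NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
) ON psOrderYear (OrderDate);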
Have you ever attempted to set up an automated backup of your SQL Server Express Edition, only to find that there's no SQL Server Agent where you can schedule a job to take a backup of your database? Alas, the world does not end there, and you don't need to pay extra bucks just to have backups via SQL Server Agent, which is available only in the Standard and Enterprise editions. There are many options to automate a backup job that runs at a specific time and does not require manual intervention. Here, we will learn how to do it via a SQL command using a batch file and the Windows built-in Task Scheduler. Hope you find this useful.

Create a BAT (batch) file to execute the command that takes a backup of the database, and save it:

@echo off
:: --------------------------------------------------
:: clear console
cls
:: --------------------------------------------------
:: Define variables
set SERVERNAME=YOUR_SERVER_NAME
set DATABASENAME=DATABASE_NAME
set MyTime=%TIME: =0%
set MyDate=%DATE:~-4%.%DATE:~7,2%.%DATE:~4,2%.%MyTime:~0,2%.%MyTime:~3,2%.%MyTime:~6,2%
set FileName=%DATABASENAME%_%MyDate%.bak
set BAK_PATH=DIRECTORY_PATH
set DEST_FILE=%BAK_PATH%%FileName%
:: --------------------------------------------------
:: BACKUP Database
sqlcmd -E -S %SERVERNAME% -d master -Q "BACKUP DATABASE [%DATABASENAME%] TO DISK = N'%DEST_FILE%' WITH INIT , NOUNLOAD , NAME = N'%DATABASENAME% backup', NOSKIP , STATS = 10, NOFORMAT"
:: --------------------------------------------------
:: Optional Part
:: --------------------------------------------------
:: Zip file
7z a -tzip "%DEST_FILE%.zip" "%DEST_FILE%"
:: --------------------------------------------------
:: Delete unzipped file
DEL "%DEST_FILE%"

- "SERVERNAME" is the name of the SQL Server machine.
- "DATABASENAME" is the database to back up.
- "MyDate" builds a timestamp from %DATE% and %TIME%; note that the substring offsets depend on your Windows regional date format, so adjust them if needed.
- "FileName" is the database name with the date appended and a .bak extension.
- "BAK_PATH" is the directory in which the database backup file will be saved.
- "DEST_FILE" combines the backup path and file name.

After defining all the variables, the database backup is generated and saved as a zip file at the "DEST_FILE" path, and at the end the unzipped file is deleted from "DEST_FILE".

Now it's time to schedule the created batch file:

1. Start Menu -> Task Scheduler -> Run as administrator.
2. Click on "Create Task..." from the right bar and configure it with Triggers and Actions.
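As an optional check (a suggestion, not part of the original batch job), you can confirm from SSMS that a generated backup file is readable; the path below is a placeholder matching the naming pattern the script produces.

-- Verify that the backup file is complete and readable (placeholder path).
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\DATABASE_NAME_2024.01.15.02.30.00.bak';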