Parshwa Kapadia

Project Lead at MagnusMinds IT Solution

A seasoned project lead with extensive experience in Database Management, SSIS, SSRS, PowerBI, .NET Development, Cloud Services, Version Control, CI/CD, Setup of Azure/AWS accounts, Infrastructure Setup, and Server Configuration. I have had the privilege of leading and contributing to a wide array of projects, ranging from database optimization and architecture design to cloud migration and software development. With a passion for technology and a commitment to excellence, I am excited to share my insights, experiences, and expertise with you. Join me on this journey as we explore the dynamic world of IT together.

Posts by Parshwa Kapadia

Seamless Migration: On-Premise Report Server to Azure SQL Managed Instance
Apr 22, 2024

Introduction

Migrating an on-premise report server to Azure SQL Managed Instance requires strategic planning and meticulous execution. The move offers numerous benefits, including scalability, reliability, and reduced maintenance overhead. In this blog, we'll walk through the essential steps involved in migrating an on-premise report server to Azure SQL Managed Instance, ensuring a seamless transition for your organization.

Understanding Azure SQL Managed Instance

Before diving into the migration process, let's briefly look at Azure SQL Managed Instance. It is a fully managed platform-as-a-service (PaaS) offering from Microsoft Azure that provides near-complete compatibility with on-premise SQL Server. Managed Instance offers automatic patching, automated backups, and built-in high availability, making it an attractive option for hosting SQL Server workloads in the cloud.

Pre-Requisites

1. Azure SQL Managed Instance
2. SQL Server user account, used to connect to the Azure SQL Managed Instance
3. Azure Virtual Machine

Configure Azure SQL Managed Instance

1. Go to the Azure Portal and search for Azure SQL Managed Instance.
2. Set up the username and password; these credentials will be needed later to connect from SSMS and SSRS.
3. Complete the remaining required configuration.
4. Create the Azure SQL Managed Instance.
5. Create a new database (optional).
6. Open SSMS and verify the connection to the instance using SQL Server Authentication with the username and password from step 2.
7. If the connection succeeds, the Azure SQL Managed Instance is configured correctly.

Configure Azure Virtual Machine

1. Go to the Azure Portal and search for Virtual Machine.
2. Select the Windows operating system and set up the required configuration.
3. Create the virtual machine and connect to it via RDP.

Install SSRS (SQL Server Reporting Services) in the Azure VM

1. Connect to your Azure VM using RDP.
2. Download the SQL Server 2022 Reporting Services installer.
3. Launch the SSRS 2022 installer.
4. Choose Install Reporting Services and click Next.
5. Choose the edition that matches your licensing, then click Next.
6. Accept the license terms and click Next.
7. Choose Install Reporting Services Only and click Next.
8. Change the installation location if you wish, then click Install.
9. Open Report Server Configuration Manager and click Connect.
10. Start the Report Service if it is not already running.

Connect On-Premises SQL Server

1. Connect to your on-premises SQL Server.
2. Take backups of the ReportServer and ReportServerTempDB databases.
3. After both backups complete successfully, upload them to Azure Blob Storage (see the sketch below for one way to do this).
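The post does not prescribe a specific way to get the backup files into Blob Storage. As a minimal sketch, assuming the Azure CLI is installed and that the storage account, container, and file names below are hypothetical placeholders you replace with your own, the upload and SAS generation could look like this:

# Minimal sketch (hypothetical names): upload the two backup files with the Azure CLI,
# then generate the SAS token that the CREATE CREDENTIAL step will use.
ACCOUNT="mystorageaccount"          # hypothetical storage account name
CONTAINER="reportserver-backups"    # hypothetical container name
ACCOUNT_KEY="<storage account key>" # from the storage account's Access keys blade

az storage blob upload --account-name "$ACCOUNT" --account-key "$ACCOUNT_KEY" \
  --container-name "$CONTAINER" --name ReportServer.bak --file ./ReportServer.bak
az storage blob upload --account-name "$ACCOUNT" --account-key "$ACCOUNT_KEY" \
  --container-name "$CONTAINER" --name ReportServerTempDB.bak --file ./ReportServerTempDB.bak

# Read/list SAS token, valid for 24 hours, for the restore step
az storage container generate-sas --account-name "$ACCOUNT" --account-key "$ACCOUNT_KEY" \
  --name "$CONTAINER" --permissions rl \
  --expiry "$(date -u -d '+24 hours' '+%Y-%m-%dT%H:%MZ')" --output tsv

The SAS value printed by the last command is what goes into the SECRET of the CREATE CREDENTIAL statement in the next section.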
Connect Azure SQL Managed Instance in SSMS

1. Connect to your Azure SQL Managed Instance with your credentials.
2. Generate a SAS token for the Azure Blob Storage account.
3. Create a new credential on the SQL Managed Instance:

CREATE CREDENTIAL [AZURE BLOB URL WITH CONTAINER/FOLDER]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'SAS TOKEN';
GO

4. Restore the ReportServer and ReportServerTempDB databases:

RESTORE DATABASE ReportServer
FROM URL = 'AZURE BLOB URL OF DATABASE BACKUP FILE';
GO

RESTORE DATABASE ReportServerTempDB
FROM URL = 'AZURE BLOB URL OF DATABASE BACKUP FILE';
GO

5. Delete the old record from the ReportServer.dbo.Keys table, matching on MachineName or InstanceName:

DELETE ReportServer.[dbo].[Keys] WHERE MachineName = 'OLD MACHINE NAME';

6. To make the migrated subscriptions visible under the new server's user, reassign their owner by executing the query below:

DECLARE @OldUserID uniqueidentifier;
DECLARE @NewUserID uniqueidentifier;

SELECT @OldUserID = UserID FROM dbo.Users WHERE UserName = 'OLD SERVER NAME WITH USER';
SELECT @NewUserID = UserID FROM dbo.Users WHERE UserName = 'NEW SERVER NAME WITH USER';

UPDATE dbo.Subscriptions SET OwnerID = @NewUserID WHERE OwnerID = @OldUserID;

7. Restart the SQL Server Reporting Services service.
8. Open the Report Server in the browser to verify all reports and subscriptions.

Configure SSRS (SQL Server Reporting Services) in the Azure VM

1. Connect to your Azure VM using RDP.
2. Open Report Server Configuration Manager and click Connect.
3. Start the Report Service if it is not already running.
4. Go to Database and click Change Database.
5. Choose the existing database option and click Next.
6. Enter the database connection information for the Azure SQL Managed Instance, test the connection, and click Next. (IMPORTANT)
7. Under credentials, choose the SQL Server Credentials option, enter the username and password for the Azure SQL Managed Instance, and click Next.
8. Verify the SQL Server instance name and other details on the Summary page and click Next.
9. Click Finish.
10. In Report Server Configuration Manager, select Web Service URL and click Apply.
11. Go to Web Portal URL and click Apply.
12. Go to E-mail Settings and update your email settings so report subscription emails can be sent.
13. Open a browser and navigate to your report server Web Portal URL to confirm everything works.

A Comprehensive Guide to Microsoft SQL Server Database Migration
Feb 07, 2024

Introduction

Migrating Microsoft SQL Server databases from one server to another is a critical task that requires careful planning and execution. When overseeing such a migration project, it is essential to have a detailed checklist to ensure a smooth and successful transition. In this blog, we will explore the key steps involved in migrating SQL Server databases and provide a comprehensive checklist to guide you through the process.

Checklist for SQL Server Database Migration

1. Assessment and Planning
- Database Inventory: Identify all databases to be migrated. Document database sizes, configurations, and dependencies.
- Compatibility Check: Verify the compatibility of SQL Server versions. Check for deprecated features or components.
- Backup Strategy: Ensure full backups of all databases are taken before migration. Confirm that the backup and restore processes are working correctly.

2. Server Environment Preparation
- Server Infrastructure: Verify that the new server meets hardware and software requirements. Install the necessary SQL Server version on the new server.
- Security Considerations: Plan for server-level security, including logins and permissions. Transfer relevant security configurations from the old server.
- Firewall and Networking: Update firewall rules to allow communication between the old and new servers. Confirm network configurations to avoid connectivity issues.

3. Database Schema and Data Migration
- Schema Scripting: Generate scripts for the database schema (tables, views, stored procedures, etc.). Validate the scripts in a test environment.
- Data Migration: Choose an appropriate method for data migration (Backup and Restore, Detach and Attach, or SQL Server Integration Services - SSIS). Perform a trial data migration to identify and address potential issues.
- Restore Strategy: Ensure full backups of all databases are available on the new server. Restore the databases and confirm the processes are working correctly.

4. Application and Dependency Testing
- Application Compatibility: Test the application against the new SQL Server to ensure compatibility. Address any issues related to SQL Server version changes.
- Dependency Verification: Confirm that linked servers, jobs, database mail, and maintenance plans are updated. Test connectivity to other applications relying on the database.

5. Post-Migration Validation
- Data Integrity Check: Execute DBCC CHECKDB to ensure the integrity of the migrated databases (a scripted sketch follows at the end of this post). Address any issues identified during the integrity check.
- Performance Testing: Conduct performance testing to ensure the new server meets performance expectations. Optimize queries or configurations if needed.
- User Acceptance Testing (UAT): Involve end users in testing to validate the functionality of the migrated databases. Address any user-reported issues promptly.

Conclusion

A successful Microsoft SQL Server database migration requires meticulous planning, thorough testing, and effective communication. Following this comprehensive checklist will help ensure a smooth transition from one server to another while minimizing disruptions to business operations. Communicate regularly with your team and stakeholders throughout the migration process to address any challenges promptly and ensure a successful outcome.

Download Checklist for MSSQL Server Migration
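To make the Data Integrity Check step concrete, here is a minimal sketch that runs DBCC CHECKDB against each migrated database with sqlcmd. The server name, login, and database names are hypothetical placeholders, not part of the checklist itself:

# Hedged sketch: run DBCC CHECKDB on every migrated database via sqlcmd.
# NEW_SERVER, the login, and the database names are placeholders.
NEW_SERVER="new-sql-server"
DATABASES="SalesDB InventoryDB ReportingDB"

for DB in $DATABASES; do
  echo "Checking integrity of $DB ..."
  sqlcmd -S "$NEW_SERVER" -U migration_admin -P "$SQL_PASSWORD" -b \
         -Q "DBCC CHECKDB ([$DB]) WITH NO_INFOMSGS, ALL_ERRORMSGS;" \
    || echo "Integrity check reported errors for $DB"
done

Capturing the output of each run makes it easy to attach the results to the migration sign-off.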

Kafka with ELK implementation
Aug 17, 2020

Apache Kafka is the most common buffer solution deployed together with the ELK Stack. Kafka sits between the log shippers and the indexing tier, acting as a buffer for the data being collected. In this blog, we'll see how to deploy all the components required to set up a resilient logs pipeline with Apache Kafka and the ELK Stack:

Filebeat – collects logs and forwards them to a Kafka topic.
Kafka – brokers the data flow and queues it.
Logstash – aggregates the data from the Kafka topic, processes it, and ships it to Elasticsearch.
Elasticsearch – indexes the data.
Kibana – for analyzing the data.

My environment

To perform the steps below, I set up a single Ubuntu 18.04 VM on AWS EC2 using local storage. In real-life scenarios you will probably have all these components running on separate machines. I started the instance in the public subnet of a VPC and then set up a security group to enable access from anywhere using SSH and TCP 5601 (for Kibana). This tutorial uses Apache access logs for the pipeline, but you could just as well use VPC Flow Logs, ALB access logs, and so on.

Log in to your Ubuntu system with sudo privileges. For a remote Ubuntu server, use SSH to access it; Windows users can use PuTTY or PowerShell to log in.

Elasticsearch requires Java to run. Install OpenJDK 11 with:

sudo apt install openjdk-11-jdk-headless

Check that the installation succeeded by printing the current Java version:

$ java -version
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)

Finally, I added a new Elastic IP address and associated it with the running instance. The example logs used for this tutorial are Apache access logs.

Step 1: Installing Elasticsearch

We will start by installing the main component in the stack: Elasticsearch. Since version 7.x, Elasticsearch is bundled with Java, so we can jump right ahead with adding Elastic's signing key.

Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

On Debian, you may need to install the apt-transport-https package before proceeding:

sudo apt-get install apt-transport-https

Our next step is to add the repository definition to our system:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

You can then install the Elasticsearch Debian package with:

sudo apt-get update && sudo apt-get install elasticsearch

Before we bootstrap Elasticsearch, we need to apply some basic configuration in the Elasticsearch configuration file at /etc/elasticsearch/elasticsearch.yml:

sudo nano /etc/elasticsearch/elasticsearch.yml

Since we are installing Elasticsearch on AWS, we will bind Elasticsearch to localhost.
Also, we need to define the private IP of our EC2 instance as a master-eligible node:

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<InstancePrivateIP>"]

Save the file and run Elasticsearch with:

sudo service elasticsearch start

To confirm that everything is working as expected, point curl at http://localhost:9200 and you should see something like the following output (give Elasticsearch a minute or two before you start to worry about not seeing any response):

{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "W_Ky1DL3QL2vgu3sdafyag",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Installing Logstash

Next up, the "L" in ELK: Logstash. Since we already defined the Elastic repository on the system, installing Logstash is as simple as:

sudo apt-get install logstash -y

Next, we will configure a Logstash pipeline that pulls our logs from a Kafka topic, processes them, and ships them on to Elasticsearch for indexing. Let's create a new config file:

sudo nano /etc/logstash/conf.d/apache.conf

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "apache"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

As you can see, we're using the Logstash Kafka input plugin to define the Kafka host and the topic we want Logstash to pull from. We apply some filtering to the logs and ship the data to our local Elasticsearch instance.

Step 3: Installing Kibana

Let's move on to the next component in the ELK Stack: Kibana. As before, we will use a simple apt command to install Kibana:

sudo apt-get install kibana

We will then open the Kibana configuration file at /etc/kibana/kibana.yml and make sure the following settings are defined:

server.port: 5601
server.host: "<INSTANCE_PRIVATE_IP>"
elasticsearch.hosts: ["http://<INSTANCE_PRIVATE_IP>:9200"]

Then enable and start the Kibana service:

sudo systemctl enable kibana
sudo systemctl start kibana

We also need to install Filebeat, which will ship the Apache access logs into the Kafka topic (a sketch of the Kafka and Filebeat setup follows below):

sudo apt install filebeat

Open up Kibana in your browser at http://<PUBLIC_IP>:5601 and you will be presented with the Kibana home page.
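The Logstash pipeline above reads from a Kafka topic named "apache" and expects Filebeat to feed it, but those two pieces are not shown being set up. As a minimal sketch, assuming Kafka 2.5.0 (check kafka.apache.org for the current release) and the default Apache access log location, it could look like this:

# Hedged sketch: stand up a single-node Kafka broker and create the "apache" topic.
# The Kafka version, paths, and log location are assumptions.
wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz
tar -xzf kafka_2.12-2.5.0.tgz && cd kafka_2.12-2.5.0

# Start ZooKeeper and the Kafka broker (in separate terminals, or backgrounded as here)
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &

# Create the topic Filebeat writes to and Logstash reads from
bin/kafka-topics.sh --create --topic apache \
  --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1

# Point Filebeat at the Apache access log and the Kafka topic
sudo tee /etc/filebeat/filebeat.yml > /dev/null <<'EOF'
filebeat.inputs:
  - type: log
    paths:
      - /var/log/apache2/access.log   # assumed location of the Apache access log
output.kafka:
  hosts: ["localhost:9092"]
  topic: "apache"
EOF

sudo systemctl enable filebeat
sudo systemctl start filebeat

Note that Filebeat publishes events to Kafka as JSON by default, so the Logstash kafka input above may need codec => "json" for the grok filter to see the original message field; verify this against your Filebeat and Logstash versions.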