Tag - Business-Intelligence-Development

Effortless Data Migration Using MySQL Federated Engine
Mar 13, 2024

Scenario: If someone asks you, "Hey, can you transfer data from one MySQL server to another?" and your first thought is SSIS or some other heavyweight tool, then this article is for you — it will reduce your effort and save you time.

Introduction: In the dynamic landscape of database management, the need to seamlessly access and integrate data from multiple sources has become paramount. Whether it's consolidating information from disparate servers or synchronizing databases for backup and redundancy, MySQL offers a robust solution through its querying capabilities. In this guide, we delve into the art of fetching data from one MySQL server to another using SQL queries. This method, often overlooked in favor of complex data transfer mechanisms, provides a streamlined approach to data migration, enabling developers and database administrators to manage their resources efficiently. Through a combination of MySQL's versatile query language and the FEDERATED storage engine, we'll explore how to establish connections between servers, replicate table structures, and transfer data across the network. From setting up the environment to executing queries and troubleshooting common challenges, this tutorial equips you with the knowledge and tools to handle cross-server data retrieval with ease.

Step 1: Check FEDERATED engine support
We are going to use MySQL's FEDERATED storage engine, so first we need to check whether our server supports it. Open MySQL Workbench and run:

show engines;

This lists all available engines; check whether FEDERATED is supported. If your system does not support it, don't worry — we will enable it.

Step 2: Enable the FEDERATED engine
Open the folder where your MySQL server configuration file is stored. In my case it is on the C drive: C:\ProgramData\MySQL\MySQL Server 8.0\my.ini. Open it in Notepad++ or your preferred editor and add the keyword federated on its own line under the [mysqld] section.

Now restart MySQL: press Windows+R, type services.msc and press OK, then find the MySQL service and restart it. Go back to Workbench and run show engines; again — FEDERATED should now show as supported.

The same process needs to be applied on the destination side, because both servers (source and destination) must support the FEDERATED engine.

Step 3: Create a user on the source server
Now we make sure we have permission to access the source server. For that we create a user and grant it privileges on the required databases and tables:

CREATE USER 'hmysql'@'192.168.1.173' IDENTIFIED BY 'Hardik...';
GRANT ALL PRIVILEGES ON *.* TO 'hmysql'@'192.168.1.173' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Step 4: Connect to the source server from the destination
Now create a connection for that user (created above on the source side) on the destination server (our system). In Workbench, click the plus (+) icon and fill in the connection details; the user connection (hardikmysql) is then added.

Open that connection and identify the table you want to pull data from. Here I am taking the 'actor' table from the 'sakila' database.

Step 5: Create the FEDERATED table on the destination
Now run a FEDERATED CREATE TABLE statement on our system (the destination server) with a connection string pointing at the source:

CREATE TABLE `actor` (
  `actor_id` smallint unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

The key part is:

ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

In the connection string:
'mysql' is mandatory as the scheme; you cannot use any other word.
'hmysql' is the user name.
'Hardik...' is the user's password.
'192.168.1.173' is the source server address.
'3306' is the port number.
'sakila' is the database name.
'actor' is the table name.

Run the CREATE TABLE statement above and you can query the source data from our system (the destination server).
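Note that the FEDERATED table only proxies the remote rows over the network; if you want to materialize a true local copy on the destination server, you can script that step as well. Below is a minimal Python sketch using mysql-connector-python (not part of the original article); the destination credentials and the target table name actor_local are hypothetical placeholders.

import mysql.connector  # pip install mysql-connector-python

# Connect to the destination server, where the FEDERATED `actor` table was created.
conn = mysql.connector.connect(
    host="localhost",          # destination MySQL server (placeholder)
    user="root",               # a user with CREATE/INSERT rights (placeholder)
    password="your_password",  # placeholder
    database="sakila",
)
cur = conn.cursor()

# Reading from the FEDERATED table pulls rows from the source server over the network;
# writing them into an InnoDB table materializes a local copy on the destination.
cur.execute("CREATE TABLE actor_local ENGINE=InnoDB AS SELECT * FROM actor")

cur.execute("SELECT COUNT(*) FROM actor_local")
print("rows copied:", cur.fetchone()[0])

cur.close()
conn.close()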

Revolutionizing Business Intelligence: A Step-by-Step Guide to Implementing a BI ChatBot in Domo
Jan 05, 2024

In the ever-evolving landscape of business intelligence (BI), the need for seamless interaction with data is paramount. Imagine a world where you could effortlessly pose natural language questions to your datasets and receive insightful answers in return. Welcome to the future of BI, where the power of conversational interfaces meets the robust capabilities of Domo. This blog post serves as your comprehensive guide to implementing a BI ChatBot within the Domo platform, a revolutionary step towards making data exploration and analysis more intuitive and accessible than ever before.

Gone are the days of wrestling with complex queries or navigating through intricate dashboards. With the BI ChatBot in Domo, users can simply articulate their questions in plain language and navigate through datasets with unprecedented ease. Join us on this journey as we break down the process into manageable steps, allowing you to harness the full potential of BI ChatBot integration within the Domo ecosystem. Whether you're a seasoned data analyst or a business professional seeking data-driven insights, this guide will empower you to unlock the true value of your data through natural language interactions. Get ready to elevate your BI experience and transform the way you interact with your datasets. Let's dive into the future of business intelligence with the implementation of a BI ChatBot in Domo.

Prerequisites:
ChatGPT API Key: Prepare for the integration of natural language to SQL conversion by obtaining a ChatGPT API key. This key will empower your system to translate user queries in natural language into SQL commands.
DOMO Access: Ensure that you have the necessary access rights to create a new application within the Domo platform. This step is crucial for configuring and deploying the BI ChatBot effectively within your Domo environment.

1: Integrate the HTML Easy Bricks App.
Begin by incorporating the HTML Easy Bricks app into your project. Navigate to the AppStore, add HTML Easy Bricks to your collection, and save it to your dashboard for easy access. When you open the app for the first time it has a default appearance; customize it with your HTML and CSS code to achieve the refined look illustrated below.

Image 1: DOMO HTML Easy Brick UI

2: Map/Connect the Dataset to the Card.
In this phase, establish a connection between the dataset and the card where users will pose their inquiries. Refer to the image below, where the "Key" dataset is linked to "dataset0". You can extend this mapping to accommodate up to three datasets. If your project involves more datasets, consider using the DDX-TEN-DATASETS app instead of HTML Easy Bricks for a more scalable solution. This ensures seamless integration and accessibility for users interacting with various datasets within your Domo environment.

Image 2: Attach Dataset With Card

3: Execute the Query on the Dataset for Results.
In this phase, you implement the code that executes a query on the dataset and fetches the desired results. Before doing so, call the ChatGPT API to dynamically generate a SQL query from the user's natural language question. Note that the code below is designed to accept only valid column names in the query, adhering strictly to MySQL syntax. To get accurate queries back from ChatGPT, create a prompt that includes the dataset schema and gives clear guidance on the SQL you expect; a sketch of one way to assemble such a prompt follows below.
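The brick itself runs JavaScript, so the Python sketch below is only an illustration of how a schema-aware prompt payload could be assembled for the chat completions call; the dataset description, column names, and question are hypothetical and not from the original post.

# Hypothetical schema description for the card's mapped dataset ("dataset0"); adjust to your own columns.
schema = "dataset0 columns: region (text), country (text), order_date (date), total_revenue (number)"
question = "What was total revenue by region in 2023?"

messages = [
    {
        "role": "system",
        "content": (
            "You translate business questions into SQL using MySQL syntax. "
            "Use only the columns listed in the schema and return only the SQL statement."
        ),
    },
    {"role": "user", "content": "Schema: " + schema + "\nQuestion: " + question},
]

# This messages list is the shape of what the JavaScript call below sends in its request body.
payload = {"model": "gpt-3.5-turbo", "messages": messages, "temperature": 0.5, "max_tokens": 100}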
Here is a call to the ChatGPT API to get the SQL query:

var GPTKEY = 'key';
var Prompt = [{ role: 'user', content: 'Write an effective prompt here' }];

$.ajax({
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
        'Authorization': 'Bearer ' + GPTKEY,
        'Content-Type': 'application/json'
    },
    method: 'POST',
    data: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: Prompt,
        max_tokens: 100,
        temperature: 0.5,
        top_p: 1.0,
        frequency_penalty: 0.0,
        presence_penalty: 0.0
    }),
    success: function (response) {
        // Write code to store the query into a variable.
    }
});

Refer to the code snippet below for executing the query on Domo and retrieving the results.

var domo = window.domo;
var datasets = window.datasets;
domo.post('/sql/v1/' + 'dataset0', SQLQuery, { contentType: 'text/plain' }).then(function (data) {
    // Write your JavaScript or jQuery code to print the data.
});

The above code accepts the SQL queries generated by ChatGPT. It's important to highlight that the code hard-codes every query to run against the dataset mapped as 'dataset0'; it's advisable to customize this part based on user selection. The code is designed to accept datasets with names such as 'dataset0', 'dataset1', and so forth. Ensure that any modifications align with the chosen dataset for optimal functionality. You can also use the domo.get method to get data; for more information, visit here. The outcome is presented in JSON format, offering flexibility for further processing. You can seamlessly transfer this data to a table format and display or print it as needed.

Conclusion
Incorporating a BI ChatBot in Domo revolutionizes data interaction, seamlessly translating natural language queries into actionable insights. The guide's step-by-step approach simplifies integration, offering both analysts and business professionals an intuitive and accessible data exploration experience. As datasets effortlessly respond to user inquiries, this transformative synergy between ChatGPT and Domo reshapes how we extract value from data, heralding a future of conversational and insightful business intelligence. Dive into this dynamic integration to propel your decision-making processes into a new era of efficiency and accessibility.

Azure Databricks CSV to SQL
Apr 14, 2023

In this blog, we will explore Azure Databricks, a cloud-based analytics platform, and how it can be used to parse a CSV file from Azure Storage and then store the data in a database. Additionally, we will learn how to process stream data and use a Databricks notebook in an Azure Data Factory pipeline.

Azure Databricks Overview
Azure Databricks is an Apache Spark-based analytics platform that provides a collaborative workspace for data scientists, data engineers, and business analysts. It is a cloud-based service designed to handle big data and allows users to process data at scale. Databricks also provides tools for data analysis, machine learning, and visualization. With its integration with Azure Storage, Azure Data Factory, and other Azure services, Azure Databricks can be used to build end-to-end data processing pipelines.

Parsing a CSV File from Azure Blob Storage to a Database using Azure Databricks
Azure Databricks can be used to parse CSV files from Azure Storage and then store the data in a database. Here are the steps to accomplish this.

Configure the various Azure components:
1. Create an Azure Resource Group (Image 1)
2. Create an Azure Databricks resource (Image 2)
3. Create a SQL Server resource (Image 3)
4. Create a SQL Database resource (Image 4)
5. Create an Azure Storage Account (Image 5)
6. Create an Azure Data Factory resource (Image 6)
7. Launch the Databricks workspace (Image 7)
8. Create a compute cluster (Image 8)
9. Create a new notebook (Image 9)

1. Create a cluster: First, create a cluster in Azure Databricks as shown above. A cluster is a group of nodes that work together to process data.

2. Import all the necessary modules in the Databricks notebook:

%python
from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, generate_blob_sas, BlobSasPermissions
import pandas as pd
import pymssql
import pyspark.sql

Code 1

3. Configure the Azure Storage connection: Next, configure access to the storage account in Databricks as follows:

#Configure Blob Connection
storage_account_name = "storage"
storage_account_access_key = "***********************************"
blob_container = "blob-container"

#Create the service and container clients used below to list and read blobs
blob_service_client = BlobServiceClient(
    account_url="https://" + storage_account_name + ".blob.core.windows.net",
    credential=storage_account_access_key)
container_client = blob_service_client.get_container_client(blob_container)

Code 2

4. Establish the database connection:

#DB connection
conn = pymssql.connect(server='****************.database.windows.net',
                       user='*****', password='*****', database='DataBricksDB')
cursor = conn.cursor()

Code 3

5. Parse the CSV files: Once the storage connection is configured, you can parse the CSV files using the following code:

#Get a list of all blobs in the container
blob_list = []
for blob_i in container_client.list_blobs():
    blob_list.append(blob_i.name)
# print(blob_list)

df_list = []
#Generate a SAS key for each file and load it into a dataframe
for blob_i in blob_list:
    print(blob_i)
    sas_i = generate_blob_sas(account_name=storage_account_name,
                              container_name=blob_container,
                              blob_name=blob_i,
                              account_key=storage_account_access_key,
                              permission=BlobSasPermissions(read=True),
                              expiry=datetime.utcnow() + timedelta(hours=12))

    #Append the SAS token so pandas can read the private blob directly
    sas_url = 'https://' + storage_account_name + '.blob.core.windows.net/' + blob_container + '/' + blob_i + '?' + sas_i
    print(sas_url)

    df = pd.read_csv(sas_url)
    df_list.append(df)

Code 4
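Step 6 below inserts rows from a single combined dataframe named df_combined, but the post does not show how the per-file dataframes are merged. A minimal sketch, assuming the df_list built in step 5:

import pandas as pd

# Stack the per-file dataframes loaded in step 5 into one dataframe for loading.
df_combined = pd.concat(df_list, ignore_index=True)
print(df_combined.shape)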
6. Transform and store the data in a database: Finally, you can store the data in the database using the following code:

#Truncate the sales table if it already exists
Truncate_Query = "IF EXISTS (SELECT * FROM sysobjects WHERE name='sales' and xtype='U') truncate table sales"
cursor.execute(Truncate_Query)
conn.commit()

#SQL query for table creation
create_table_query = "IF NOT EXISTS (SELECT * FROM sysobjects WHERE name='sales' and xtype='U') CREATE TABLE sales (REGION varchar(max),COUNTRY varchar(max),ITEMTYPE varchar(max),SALESCHANNEL varchar(max),ORDERPRIORITY varchar(max),ORDERDATE varchar(max),ORDERID varchar(max),SHIPDATE varchar(max),UNITSSOLD varchar(max),UNITPRICE varchar(max),UNITCOST varchar(max),TOTALREVENUE varchar(max),TOTALCOST varchar(max),TOTALPROFIT varchar(max))"
cursor.execute(create_table_query)
conn.commit()

#Insert data from the combined dataframe
for rows in df_combined.itertuples(index=False, name=None):
    row = str(list(rows))
    row_data = row[1:-1]
    row_data = row_data.replace("nan", "''")
    row_data = row_data.replace("None", "''")
    insert_query = "insert into sales (REGION,COUNTRY,ITEMTYPE,SALESCHANNEL,ORDERPRIORITY,ORDERDATE,ORDERID,SHIPDATE,UNITSSOLD,UNITPRICE,UNITCOST,TOTALREVENUE,TOTALCOST,TOTALPROFIT) values (" + row_data + ")"
    print(insert_query)
    cursor.execute(insert_query)
conn.commit()

Code 5

As shown here, the data from all the files is loaded into the SQL Server table (Image 10).

Processing Stream Data with a Databricks Notebook
An Azure Databricks notebook can be used to process stream data in an Azure Data Factory pipeline. Here are the steps to accomplish this:
1. Create a Databricks notebook: First, create a Databricks notebook in Azure Databricks. A notebook is a web-based interface for working with code and data.
2. Create a job: Next, create a job in Azure Data Factory to execute the notebook. A job is a collection of tasks that can be scheduled and run automatically.
3. Configure the job: In the job settings, specify the Azure Databricks cluster and notebook that you want to use. Also specify the input and output datasets.
4. Write the code: In the Databricks notebook, write the code to process the stream data. Here is an example:

from pyspark.sql.functions import window

stream_data = spark.readStream \
    .format("csv") \
    .option("header", "true") \
    .schema("<schema>") \
    .load("/mnt/<mount-name>/<file-name>.csv")

#Example aggregation: count rows per 10-minute event-time window
stream_data = stream_data \
    .withWatermark("timestamp", "10 minutes") \
    .groupBy(window("timestamp", "10 minutes")) \
    .count()

Code 6
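The snippet above only defines the streaming aggregation; the streaming query still has to be started against a sink before anything runs. A minimal sketch — the console sink and output mode here are assumptions for testing, not part of the original post:

# Start the streaming query; in a real pipeline this would typically write to Delta or a database.
query = (stream_data.writeStream
         .outputMode("update")   # emit updated window counts as they change
         .format("console")      # console sink for quick testing only
         .start())
query.awaitTermination()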
How to use an Azure Databricks notebook in an Azure Data Factory pipeline and configure the data flow pipeline using it (Image 11):
1. Create the ADF pipeline (Image 12)
2. Configure the data pipeline (Image 13)
3. Add a trigger to the pipeline (Image 14)
4. Configure the trigger (Image 15)

These capabilities make Azure Databricks an ideal platform for building real-time data processing solutions. Overall, Azure Databricks provides a scalable and flexible solution for data processing and analytics, and it's definitely worth exploring if you're working with big data on the Azure platform. With its powerful tools and easy-to-use interface, Azure Databricks is a valuable addition to any data analytics toolkit.

POWER BI Implement Row Level Security
Dec 15, 2022

While working on an embedded Power BI report, I got a requirement to implement Row Level Security, e.g. region heads can see data of their region only. This report is accessible from a .Net MVC application, and locations are assigned to users from that application. The report is hosted on Power BI Server and authenticated by the .Net application with predefined credentials.

Sample Data: There are two tables: Finance and Users.

Figure 1: finance table
Figure 2: User table

Multiple options to achieve this with Power BI
We found the following options to implement it:
1. RLS (Row Level Security)
2. Through a query string
3. Control report filters used in the embed code
There are certain limitations with option #1 and option #2 (I will describe them later), so we are moving on with option #3, "Control report filters".

Implementation with "Control report filters used in the embed code"
When you embed a Power BI report, you can apply filters automatically during the loading phase, or you can change filters dynamically after the report is loaded. For example, you can create your own custom filter pane and automatically apply those filters to reports to show user-specific insights. You can also create a button that allows users to apply filters to the embedded report.

Step 1: First, identify the filter based on your requirement
There are five types of filters available:
Basic - IBasicFilter
Advanced - IAdvancedFilter
Top N - ITopNFilter
Relative date - IRelativeDateFilter
Relative time - IRelativeTimeFilter
Based on my requirement I have used a basic filter; you can use the others based on your needs. If you want to know more about the others, click here. In this blog I am continuing with the #1 basic filter method.

Step 2: Determine the table and column that you want to filter
In my case, I want to filter the table 'finance' and the column 'country', because when a particular user logs in, the country column must be filtered. For example, when user G1 logs in, country = 'Germany'. So the table is 'finance' and the column is 'country'.

Step 3: Put these two things into the filter code
A sketch of the filter object follows below.
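The original post shows this filter as a screenshot, so here is a minimal sketch of the basic filter (IBasicFilter) shape, written as a Python dict that mirrors the JSON object passed to the Power BI JavaScript embed API; the table and column come from Step 2 and the hard-coded values "USA" and "Canada" are the ones referenced in Step 4.

# Basic filter (IBasicFilter) for the embedded report, expressed as a dict
# mirroring the JSON the JavaScript embed API expects.
basic_filter = {
    "$schema": "http://powerbi.com/product/schema#basic",
    "target": {
        "table": "finance",   # table to filter
        "column": "country",  # column to filter
    },
    "operator": "In",
    "values": ["USA", "Canada"],  # hard-coded here; made dynamic in Step 4
    "filterType": 1,              # models.FilterType.Basic
}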
Step 4: Make it dynamic
You can see in the code above that 'values' is where I pass "USA" and "Canada"; that is hard-coded, but we want dynamic values that change based on which user is logged in. That's why I made a variable that contains the country names assigned to the particular logged-in user. For example, the variable name is 'Array': if user G1 logs in, this variable returns the value ["Germany"], and that variable is assigned to the filter's values.

Note: if the logged-in person has multiple locations, the variable should return values like ["Germany", "USA", "India"].
Note: if the logged-in person is a manager or CEO who can see data for all countries, the variable has to return null instead of an empty array [].

Step 5: Put this code into the embed code
Step 5.1: Identify your method:
User-owns-data (Embed for your organization)
App-owns-data (Embed for your customer)
Step 5.2: According to your method, put the code in the given place.
For method #1, add the code to the [EmbedReport.aspx] file.
Figure 3: User-owns-data
For method #2, add the code to the [EmbedReport.cshtml] file.
Figure 4: App-owns-data
Figure 5: Sample

And that's it. Now try to run the report from the application and it should work as expected. Feel free to reach out to me if you face any issues.

Conclusion
I hope I've shown you how easy it is to implement Row Level Security. We have learned the step-by-step process: we started by identifying the filter, then found the actual table and column that we want to filter on and generated the code, and last but not least we put that code into our embed code.

Create SSIS Data Flow Task Package Programmatically
Jul 27, 2020

In this article, we will review how to create a data flow task package of SSIS in a console application, with an example.

Requirements
Microsoft Visual Studio 2017
SQL Server 2014
SSDT

Article
Done with the above requirements? Let's start by launching Microsoft Visual Studio 2017. Create a new Console Project with .Net Core. After creating the new project, provide a proper name for it. In Project Explorer, import the relevant references and ensure that you have declared the namespaces as below:

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime;
using RuntimeWrapper = Microsoft.SqlServer.Dts.Runtime.Wrapper;

To use the above namespaces we need to add the corresponding assembly references (typically Microsoft.SqlServer.ManagedDTS, Microsoft.SqlServer.DTSPipelineWrap, and Microsoft.SqlServer.DTSRuntimeWrap). Keep in mind that all of these references should have the same version.

After importing the namespaces, ask the user for the source connection string, the destination connection string, and the table that will be copied to the destination.

string sourceConnectionString, destinationConnectionString, tableName;
Console.Write("Enter Source Database Connection String: ");
sourceConnectionString = Console.ReadLine();
Console.Write("Enter Destination Database Connection String: ");
destinationConnectionString = Console.ReadLine();
Console.Write("Enter Table Name: ");
tableName = Console.ReadLine();

After the declarations, create an instance of Application and Package.

Application app = new Application();
Package Mipk = new Package();
Mipk.Name = "DatabaseToDatabase";

Add the ADO.NET source connection manager to the package.

ConnectionManager connSource;
connSource = Mipk.Connections.Add("ADO.NET:SQL");
connSource.ConnectionString = sourceConnectionString;
connSource.Name = "ADO NET DB Source Connection";

Add the ADO.NET destination connection manager to the package.

ConnectionManager connDestination;
connDestination = Mipk.Connections.Add("ADO.NET:SQL");
connDestination.ConnectionString = destinationConnectionString;
connDestination.Name = "ADO NET DB Destination Connection";

Insert a data flow task into the package.

Executable e = Mipk.Executables.Add("STOCK:PipelineTask");
TaskHost thMainPipe = (TaskHost)e;
thMainPipe.Name = "DFT Database To Database";
MainPipe df = thMainPipe.InnerObject as MainPipe;

Assign the ADO NET source component to the data flow task.

IDTSComponentMetaData100 conexionAOrigen = df.ComponentMetaDataCollection.New();
conexionAOrigen.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
conexionAOrigen.Name = "ADO NET Source";

Get a design-time instance of the component and initialize it.

CManagedComponentWrapper instance = conexionAOrigen.Instantiate();
instance.ProvideComponentProperties();

Specify the connection manager.

conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connSource);
conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManagerID = connSource.ID;

Set the custom properties.

instance.SetComponentProperty("AccessMode", 0);
instance.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");

Reinitialize the source metadata.

instance.AcquireConnections(null);
instance.ReinitializeMetaData();
instance.ReleaseConnections();

Now, add the destination component to the data flow task.
IDTSComponentMetaData100 conexionADestination = df.ComponentMetaDataCollection.New();
conexionADestination.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
conexionADestination.Name = "ADO NET Destination";

Get a design-time instance of the component and initialize it.

CManagedComponentWrapper instanceDest = conexionADestination.Instantiate();
instanceDest.ProvideComponentProperties();

Specify the connection manager.

conexionADestination.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connDestination);
conexionADestination.RuntimeConnectionCollection[0].ConnectionManagerID = connDestination.ID;

Set the custom properties.

instanceDest.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");

Connect the source component to the destination component.

IDTSPath100 union = df.PathCollection.New();
union.AttachPathAndPropagateNotifications(conexionAOrigen.OutputCollection[0], conexionADestination.InputCollection[0]);

Reinitialize the destination metadata.

instanceDest.AcquireConnections(null);
instanceDest.ReinitializeMetaData();
instanceDest.ReleaseConnections();

Map the source output columns to the destination columns.

foreach (IDTSOutputColumn100 col in conexionAOrigen.OutputCollection[0].OutputColumnCollection)
{
    for (int i = 0; i < conexionADestination.InputCollection[0].ExternalMetadataColumnCollection.Count; i++)
    {
        string c = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].Name;
        if (c.ToUpper() == col.Name.ToUpper())
        {
            IDTSInputColumn100 column = conexionADestination.InputCollection[0].InputColumnCollection.New();
            column.LineageID = col.ID;
            column.ExternalMetadataColumnID = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].ID;
        }
    }
}

Save the package to the file system.

app.SaveToXml(@"D:\Workspace\SSIS\Test_DB_To_DB.dtsx", Mipk, null);

Execute the package.

Mipk.Execute();

Conclusion
In this article, we have explained one of the alternatives for creating SSIS packages using a .NET console application. In case you have any questions, please feel free to ask in the comment section below.

RELATED BLOGS: Basics of SSIS (SQL Server Integration Service)

What is meant by MSBI?
Oct 16, 2019

MSBI stands for Microsoft Business Intelligence. This powerful suite is composed of tools that help in providing the best solutions for Business Intelligence and data mining queries. It uses Visual Studio along with SQL Server. It empowers users to gain access to accurate and up-to-date information for better decision making in an organization. It offers different tools for the different processes that are required in Business Intelligence (BI) solutions.

MSBI is divided into 3 categories:

SSIS – SQL Server Integration Services – integration tool. This tool is used for integration tasks such as copying data from one database to another, for example from Oracle to SQL Server or from Excel to SQL Server. It is also used for bulk operations on the database, such as inserting lakhs (hundreds of thousands) of records at once. We can create integration services packages that do the job for us.

SSAS – SQL Server Analysis Services – analysis tool. This tool is used to analyze the performance of SQL Server in terms of load balancing, heavy data, transactions, etc., so it is more or less related to the administration of SQL Server. It is a very powerful tool, and through it we can analyze the data being inserted into the database, such as how many transactions happen in a second.

SSRS – SQL Server Reporting Services – reporting tool. This is a very efficient tool as it is platform-independent. We can generate reports using this tool and use them in any type of application. Nowadays it is very popular in the market.

"A visual always helps better to understand any concept."
The above diagram broadly defines "Microsoft Business Intelligence (MSBI)".

RELATED BLOGS: POWER BI Report Integration