
Basics of SSIS (SQL Server Integration Services)
Sep 14, 2020

What is SSIS?
SSIS is a platform for data integration and workflow applications. It features a data warehousing tool used for data extraction, transformation, and loading (ETL), and it can also be used to automate maintenance of SQL Server databases and updates to multidimensional cube data. SQL Server Integration Services (SSIS) is a component of the Microsoft SQL Server database software that can be used to execute a wide range of data migration tasks. SSIS is a fast and flexible data warehousing tool used for data extraction, loading, and transformations such as cleaning, aggregating, and merging data. It makes it easy to move data from one database to another, and it can extract data from a wide variety of sources such as SQL Server databases, Excel files, Oracle and DB2 databases, etc. SSIS also includes graphical tools and wizards for performing workflow functions such as sending email messages, running FTP operations, and configuring data sources and destinations.

Features of SSIS:
- Organized and lookup transformations
- Tight integration with the rest of the Microsoft SQL Server family
- Rich studio environments
- A large set of data integration functions for better transformations
- High-speed data connectivity

Why SSIS?
- Extract, Transform, and Load (ETL) data from SQL Server to a file, and from a file back into SQL Server.
- Send email messages.
- Download files from FTP.
- Rename, delete, and move files from a defined path.
- Join tables from different databases (SQL Server, Oracle, etc.) and from potentially different servers.

How SSIS works
SSIS work revolves around three major components:

Operational data: An operational data store (ODS) is a database designed to integrate data from multiple sources for additional operations on the data. This is where most of the data used in current operations is housed before it is transferred to the data warehouse for longer-term storage or archiving.

ETL process: Extract, Transform, and Load (ETL) is the process of extracting data from various sources, transforming it to meet your requirements, and then loading it into a target data warehouse. ETL provides a one-stop solution for these steps.
- Extract: extraction is the process of pulling data from various homogeneous or heterogeneous data sources based on different validation points.
- Transform: during transformation the data is analyzed and various functions are applied to it so that it can be loaded into the target database in a clean, consistent format.
- Load: loading is the process of writing the processed data to a target data repository using minimal resources.

Data warehouse: A data warehouse captures data from diverse sources for useful analysis and access. Data warehousing assembles and manages a large, accumulated set of data from various sources for the purpose of answering business questions, and hence helps in making decisions.

How to install SSDT (SQL Server Data Tools)?
Prerequisites and environment setup for an SSIS project: to develop SSIS packages you need SQL Server Data Tools (SSDT), which is used to build the Integration Services packages that a business solution requires. SSDT provides the Integration Services project type in which you create packages.

Installation steps:
Download the SSDT setup from the Microsoft website.
URL: https://docs.microsoft.com/en-us/sql/ssdt/previous-releases-of-sql-server-data-tools-ssdt-and-ssdt-bi?view=sql-server-ver15
When you open the .exe file, you will be asked to restart the system before installation, so restart first, then run the setup and press Next. The installer shows the required tools and features, such as SQL Server Database, SSAS (SQL Server Analysis Services), SSRS (SQL Server Reporting Services), and SSIS (SQL Server Integration Services). Make sure you check SSIS and click the "Install" button.

We will look at the following parts of SSIS:
- Variables
- Connection Manager
- SSIS Toolbox
- Containers
- Tasks
- Data Flow Task

Variables
Variables store values that an SSIS package and its containers, tasks, and event handlers can use at runtime.

System variables are defined by Integration Services. SSIS provides a set of system variables that store information about the running package and its objects. These variables can be used in expressions and property expressions to customize packages, containers, tasks, and event handlers.

User-defined variables are defined by package developers. To create one, open the Variables window (SSIS menu > Variables) and add a variable with a name, data type, and value. To set an expression for a variable, set its EvaluateAsExpression property to True and supply an expression; the variable is then evaluated at runtime (a small expression example appears after the task list below).

Connection Manager
SSIS provides different types of connection managers that enable packages to connect to a variety of data sources and servers:
- Built-in connection managers that Setup installs when you install Integration Services.
- Connection managers that are available for download from the Microsoft website.
- Custom connection managers you can create yourself if the existing connection managers do not meet your needs.

Let's see how we can add a connection manager:
1) Solution Explorer > Connection Managers > New Connection Manager. You will see the list of connection managers for the different connection types.
2) Add the connection manager. After adding your connection, all connections appear under Connection Managers.

SSIS Toolbox
From the menu bar, choose SSIS > SSIS Toolbox; the toolbox then appears on the left side. The SSIS Toolbox lists the tasks and containers you can use.

List of containers:
- Foreach Loop Container: runs a control flow repeatedly by using an enumerator.
- For Loop Container: runs a control flow repeatedly by testing a condition.
- Sequence Container: groups tasks and containers into control flows that are subsets of the package control flow.

List of tasks:
- Data Flow Task: runs data flows to extract data, apply column-level transformations, and load data.
- Data Preparation Tasks: copy files and directories, download files and data, run Web methods, apply operations to XML documents, and profile data for cleansing.
- Workflow Tasks: communicate with other processes to run packages, run programs or batch files, send and receive messages between packages, send e-mail messages, read Windows Management Instrumentation (WMI) data, and watch for WMI events.
- SQL Server Tasks: access, copy, insert, delete, and modify SQL Server objects and data.
- Scripting Tasks: extend package functionality by using scripts.
- Analysis Services Tasks: create, modify, delete, and process Analysis Services objects.
- Maintenance Tasks: perform administrative functions such as backing up and shrinking SQL Server databases, rebuilding and reorganizing indexes, and running SQL Server Agent jobs.
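As mentioned above under setting an expression for a variable, here is a small illustration of the SSIS expression language. The variable name and folder path are hypothetical; the expression simply builds a dated file name at runtime:

```
@[User::FolderPath] + "\\Sales_" + (DT_WSTR, 4) YEAR(GETDATE()) + ".csv"
```

Such an expression can be assigned to a user-defined variable (with EvaluateAsExpression set to True) or used in a property expression, for example on a Flat File connection manager's ConnectionString property.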
You can add a task or container by dragging it from the SSIS Toolbox onto the design area.

Data Flow Task
Drag the Data Flow Task from the SSIS Toolbox onto the design area and double-click it. You are now on the Data Flow tab, and the SSIS Toolbox shows a different set of components, of three types:
- Source: where your data comes from.
- Destination: where you want to move your data.
- Transformation: the operations that perform the ETL (Extract, Transform, Load) work.

Conclusion
SQL Server Integration Services provides tasks to transform and validate data during the load process and transformations to insert data into your destination. Rather than always creating a stored procedure with T-SQL to validate or change data, it is good to know about the different SSIS tasks and how they can be used.

Test Automation using Robot Framework
Aug 31, 2020

In this article, I will explain how we can prepare and manage automation test cases in a better way with Robot Framework. As an example, consider a list of manual test cases in which the first column is the test case number, the second column is the page name, and the third column is the actual test case that we need to automate. Suppose we need to write automation scripts for the Payer - Order details page.

The very first step is to prepare a page-wise resource file in Robot Framework. For our example, that file is Order.robot. In this file, we define keywords for the different functionalities of the Order page. For example, one keyword opens the Order menu item, i.e. Open Order Menu – Payer. We can define another keyword to search an order number using the filter. Similarly, we can define further keywords, such as opening an order from the search results (a minimal sketch of such a resource file and a test case that uses it appears at the end of this article).

In this way we can define keywords for different pages following the page object model. For our actual test cases, we then simply call these keywords as and when required, after importing the respective resource files. This lets us convert manual test steps directly into automation steps. We can add verification points based on the test case requirements and group all of these test cases under one test suite. When we run the test suite, on successful completion of the run we get a report summarising the results.

This is the fastest way to implement test automation using Robot Framework. I hope this information will be useful to you if you are starting a new automation project and analyzing how to manage automation scripts in a better way.
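As a concrete illustration of the structure described above, here is a minimal sketch of an Order.robot resource file and a test case that uses it. It assumes SeleniumLibrary is used for browser interaction; the element locators and the order number are hypothetical placeholders.

```robotframework
*** Settings ***
Library    SeleniumLibrary

*** Keywords ***
Open Order Menu – Payer
    # The locator is a hypothetical placeholder for the real menu element
    Click Element    id=payer-order-menu
    Wait Until Element Is Visible    id=order-list    timeout=10s

Search Order Number
    [Arguments]    ${order_number}
    Input Text    id=order-filter    ${order_number}
    Click Element    id=apply-filter

Open Order From Search Results
    Click Element    xpath=//table[@id='order-list']//tr[1]
```

A test suite file then imports the resource and calls these keywords as plain steps:

```robotframework
*** Settings ***
Resource    Order.robot

*** Test Cases ***
Payer Can Open An Order From The Order Details Page
    Open Order Menu – Payer
    Search Order Number    ORD-1001
    Open Order From Search Results
    # Add verification points as required by the test case, e.g.:
    Page Should Contain    Order Details
```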

Why Software Testing Is Important in the Software Development Life Cycle
Aug 24, 2020

Testing is important because it discovers defects and errors before delivery to the customer, ensuring the quality of the software. It makes the software more reliable and easier to use, and thoroughly tested software performs reliably. For example, assume you are using an online shopping application to place an order and pay for it through an online payment platform. You initiate the transaction, get a successful-transaction message, and the amount is deducted from your account. However, your order is never placed, and no reversed transaction shows up in your account. This will surely upset you and leave you an unsatisfied customer.

Software testing contributes to the overall success of the project:
- During requirement review and project planning, the testers' involvement helps identify some of the requirement defects even before they are implemented, which saves project cost and time.
- During the design phase, testers work with the system designers, which increases the quality of the design, helps reduce design flaws, and makes the product more user friendly.
- During development, testers work with developers. Testers are aware of the areas the developers consider risky, so they can adjust their priorities accordingly; in turn, the developers get the testers' insights. This helps recreate errors there and then, without going through a lengthy defect management process.
- Before the software release, testers verify and validate the software. This helps detect errors that might otherwise go unnoticed and supports the process of removing the defects that caused the failures. Running tests at multiple levels increases the likelihood that the software will have fewer bugs and will meet customer needs.

Reasons why software testing is important

1. Identification of bugs and defects
The testing phase is the phase that uncovers the bugs and errors in the application. These bugs can be at the unit level or the system level. With so many testing phases, you keep the application clear of bugs that would otherwise affect it.

2. Improvement in product quality
Testing is the phase that compares the actual outcome with the expected outcome, so it naturally helps you improve product quality. With proper testing done, you eliminate errors and deliver a better software application to the users.

3. Technical importance
In any SDLC, the testing phase is important from a technical standpoint, as it has to produce the most technically sound application possible.

4. Improved quality
Properly tested software gives you more confidence that you are delivering great software. It also improves the quality of your application, because continuous testing of all kinds results in a secure and safe application that end users can rely on.

5. Verification and validation
One of the main aims of the testing phase in the SDLC is verification and validation. Testing serves as a metric for the V&V process, and based on the results you can compare quality between different products.

6. Reliability estimation
This is yet another important factor determined by the testing phase. If your software application has gone through all the smaller levels, such as unit testing, as well as major testing such as regression testing, then surely it is a reliable application.
Hence testing determines the reliability of your software application. Testing can serve as a statistical method in which operational testing of the application yields a measure of how reliable the software product is.

7. Prove usability and operability
One very important aim of software testing is to prove that the software is both usable and operable. In usability testing, the software is released to a select group of users and their work with the product is observed. All aspects of a user's interaction with the software, such as ease of use and where users face problems, are recorded and analyzed.

Conclusion
To conclude, software testing is imperative. It is a crucial component of software product development because it improves consistency and performance. The main benefit of testing is the identification and subsequent removal of errors, but testing also helps developers and testers compare actual and expected results in order to improve quality. If software goes into production without being tested, it could be useless or even dangerous for customers. So a tester should wear a unique hat, one that protects the reliability of the software and makes it safe to use in real-life scenarios.

Kafka with ELK implementation
Aug 17, 2020

Apache Kafka is one of the most common buffer solutions deployed together with the ELK Stack. Kafka sits between the log shippers and the indexing tier, acting as a buffer for the data being collected. In this blog, we'll see how to deploy all the components required to set up a resilient logging pipeline with Apache Kafka and the ELK Stack:
- Filebeat – collects logs and forwards them to a Kafka topic.
- Kafka – brokers the data flow and queues it.
- Logstash – aggregates the data from the Kafka topic, processes it, and ships it to Elasticsearch.
- Elasticsearch – indexes the data.
- Kibana – analyzes the data.

My environment
To perform the steps below, I set up a single Ubuntu 18.04 VM on AWS EC2 using local storage. In real-life scenarios you will probably have all of these components running on separate machines. I started the instance in the public subnet of a VPC and then set up a security group to enable access from anywhere using SSH and TCP 5601 (for Kibana). The example logs used for this tutorial are Apache access logs; you could just as well use VPC Flow Logs, ALB access logs, etc.

Log in to your Ubuntu system with sudo privileges; for a remote Ubuntu server, use SSH to access it (Windows users can use PuTTY or PowerShell). Elasticsearch requires Java to run, so make sure your system has Java installed:

sudo apt install openjdk-11-jdk-headless

Check that the installation succeeded by printing the current Java version:

$ java -version
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)

Finally, I added a new Elastic IP address and associated it with the running instance.

Step 1: Installing Elasticsearch
We will start by installing the main component in the stack — Elasticsearch. Since version 7.x, Elasticsearch is bundled with Java, so we can jump right ahead and add Elastic's signing key. Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

On Debian you may need to install the apt-transport-https package before proceeding:

sudo apt-get install apt-transport-https

Our next step is to add the repository definition to our system:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

You can then install the Elasticsearch Debian package with:

sudo apt-get update && sudo apt-get install elasticsearch

Before we bootstrap Elasticsearch, we need to apply some basic configuration in the Elasticsearch configuration file at /etc/elasticsearch/elasticsearch.yml:

sudo su
nano /etc/elasticsearch/elasticsearch.yml

Since we are installing Elasticsearch on AWS, we will bind Elasticsearch to localhost.
Also, we need to define the private IP of our EC2 instance as a master-eligible node:

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<InstancePrivateIP>"]

Save the file and run Elasticsearch with:

sudo service elasticsearch start

To confirm that everything is working as expected, point curl to http://localhost:9200, and you should see something like the following output (give Elasticsearch a minute or two before you start to worry about not seeing any response):

{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "W_Ky1DL3QL2vgu3sdafyag",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Installing Logstash
Next up, the "L" in ELK — Logstash. Installing Logstash is easy: since we already defined the Elastic repository on the system, all we have to do is run:

sudo apt-get install logstash -y

Logstash requires Java as well, so verify that Java is installed:

java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-2ubuntu0.16.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)

Next, we will configure a Logstash pipeline that pulls our logs from a Kafka topic, processes these logs, and ships them on to Elasticsearch for indexing. Let's create a new config file:

sudo nano /etc/logstash/conf.d/apache.conf

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "apache"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

As you can see, we're using the Logstash Kafka input plugin to define the Kafka host and the topic we want Logstash to pull from. We apply some filtering to the logs and ship the data to our local Elasticsearch instance.

Step 3: Installing Kibana
Let's move on to the next component in the ELK Stack — Kibana. As before, we will use a simple apt command to install Kibana:

sudo apt-get install kibana

We will then open the Kibana configuration file at /etc/kibana/kibana.yml and make sure we have the correct configuration defined:

server.port: 5601
server.host: "<INSTANCE_PRIVATE_IP>"
elasticsearch.hosts: ["http://<INSTANCE_PRIVATE_IP>:9200"]

Then enable and start the Kibana service:

sudo systemctl enable kibana
sudo systemctl start kibana

We also need to install Filebeat:

sudo apt install filebeat

Open up Kibana in your browser at http://<PUBLIC_IP>:5601. You will be presented with the Kibana home page.
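The pipeline described at the start of this post also needs Filebeat configured to ship the Apache access logs into the Kafka topic that Logstash reads from, and it assumes a Kafka broker is already running at localhost:9092 with an "apache" topic (matching the Logstash input above). As a minimal sketch, /etc/filebeat/filebeat.yml could look like the following; the log path and the codec choice are assumptions:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/access.log   # assumed location of the Apache access log

output.kafka:
  # Broker address and topic match the Logstash kafka input above
  hosts: ["localhost:9092"]
  topic: "apache"
  codec.format:
    string: '%{[message]}'            # ship the raw log line so the grok filter can parse it
```

After saving the file, start Filebeat with sudo systemctl start filebeat; once events start flowing through Kafka and Logstash, the parsed Apache logs should become searchable from Kibana.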

Easy jQuery Tricks
Aug 10, 2020

1. Back to top button

$('a.top').click(function () {
  $(document.body).animate({ scrollTop: 0 }, 800);
  return false;
});

// Create an anchor tag
<a class="top" href="#">Back to top</a>

As you can see, using the animate and scrollTop functions in jQuery we don't need a plugin to create a simple scroll-to-top animation. By changing the scrollTop value we can change where we want the scrollbar to land; in my case I used a value of 0 because I want it to go to the very top of the page, but if I wanted an offset of 100px I could simply use 100 in the function. So all we are really doing is animating the body of our document over the course of 800 ms until it scrolls all the way to the top.

2. Checking if images are loaded

$('img').on('load', function () {
  console.log('image load successful');
});

Sometimes you need to check whether your images are fully loaded in order to continue with your scripts. This three-line jQuery snippet can do that for you easily. You can also check whether one particular image has loaded by replacing the img selector with an ID or class.

3. Fix broken images automatically

$('img').on('error', function () {
  $(this).attr('src', 'img/broken.png');
});

Occasionally we have broken image links on our website, and replacing them one by one isn't easy, so adding this simple piece of code can save you a lot of headaches. Even if you don't have any broken links, adding it does no harm.

4. Toggle class on hover

$('.btn').hover(function () {
  $(this).addClass('hover');
}, function () {
  $(this).removeClass('hover');
});

We usually want to change the visual state of a clickable element when the user hovers over it, and this jQuery snippet does just that: it adds a class to your element while the user is hovering and removes it when the hover ends, so all you need to do is add the necessary styles in your CSS file.

5. Disabling input fields

$('input[type="submit"]').attr("disabled", true);

On occasion you may want the submit button of a form, or even one of its text inputs, to be disabled until the user has performed a certain action (checking the "I've read the terms" checkbox, for example), and this line of code accomplishes that: it adds the disabled attribute to your input so you can enable it when you want to. To re-enable the input, run the removeAttr function on it with disabled as the parameter:

$('input[type="submit"]').removeAttr("disabled");

6. Stop the loading of links

$('a.no-link').click(function (e) {
  e.preventDefault();
});

Sometimes we don't want links to navigate to a page or reload the current one; we want them to do something else, such as trigger another script, and in that case this piece of code does the trick of preventing the default action.

7. Toggle fade/slide

// Fade
$('.btn').click(function () {
  $('.element').fadeToggle('slow');
});

// Slide
$('.btn').click(function () {
  $('.element').slideToggle('slow');
});

Fades and slides are something we use plenty in our jQuery animations. Sometimes we just want to show an element when we click something, and for that the fadeIn and slideDown methods are perfect, but if we want that element to appear on the first click and then disappear on the second, this piece of code works just fine.

8. Make two divs the same height

$('.div').css('min-height', $('.main-div').height());
9. Self-closing elements
When inserting empty elements into the DOM, you can use self-closing tags:

$('#el1').append('<table class="holly" />');
$('#el2').before('<p class="user-message note" />');

10. Inserting content — before, after, prepend or append?
When inserting elements into an element, use .prepend() or .append(). The difference is that prepend inserts the new element at the start (i.e. as the first child), while append does the exact opposite, inserting the element as the last child:

$('div').prepend('<p>Hello!</p>').append('<p>Goodbye!</p>');

11. Loopy doopy do!
Very often people wonder how to loop through each element individually and then target the current element. For this purpose, we can use .each() and $(this). Let's say we want to add a class to all paragraphs based on their index. If you are working on a static page, you might be tempted to hardcode it:

$('p').each(function (i) {
  $(this).addClass('para-' + (i + 1));
});

Remember that i is a zero-based index, i.e. it starts from 0 for the first encountered element.

12. Automatic vendor prefixes
jQuery automatically adds vendor prefixes when it sees fit, saving you the headache of adding them yourself:

$('.box').css({
  'transform': 'translate(100px, 100px) scale(2)'
});

13. Selectors
As previously mentioned, one of the core concepts of jQuery is to select elements and perform an action. jQuery has done a great job of making the task of selecting an element, or elements, extremely easy by mimicking CSS. On top of the general CSS selectors, jQuery supports all of the unique CSS3 selectors, which work regardless of which browser is being used.

$('.feature');           // Class selector
$('li strong');          // Descendant selector
$('em, i');              // Multiple selector
$('a[target="_blank"]'); // Attribute selector
$('p:nth-child(2)');     // Pseudo-class selector

14. 'this' Selection Keyword
When working inside a jQuery function you may want to select the element that was referenced in the original selector. In this event, the this keyword may be used to refer to the element selected in the current handler.

$('div').click(function (event) {
  $(this);
});

15. jQuery Selection Filters
Should CSS selectors not be enough, there are also custom filters built into jQuery to help out. These filters are an extension to CSS3 and provide more control over selecting an element or its relatives.

$('div:has(strong)');

16. Slide Demo

// HTML
<div class="panel">
  <div class="panel-stage"></div>
  <a href="#" class="panel-tab">Open <span>&#9660;</span></a>
</div>

// Function
$('.panel-tab').on('click', function (event) {
  event.preventDefault();
  $('.panel-stage').slideToggle('slow', function () {
    if ($(this).is(':visible')) {
      $('.panel-tab').html('Close <span>&#9650;</span>');
    } else {
      $('.panel-tab').html('Open <span>&#9660;</span>');
    }
  });
});
17. Tabs Demo

// HTML
<ul class="tabs-nav">
  <li><a href="#tab-1">Features</a></li>
  <li><a href="#tab-2">Details</a></li>
</ul>
<div class="tabs-stage">
  <div id="tab-1">...</div>
  <div id="tab-2">...</div>
</div>

// Show the first tab by default
$('.tabs-stage div').hide();
$('.tabs-stage div:first').show();
$('.tabs-nav li:first').addClass('tab-active');

// Change tab class and display content
$('.tabs-nav a').on('click', function (event) {
  event.preventDefault();
  $('.tabs-nav li').removeClass('tab-active');
  $(this).parent().addClass('tab-active');
  $('.tabs-stage div').hide();
  $($(this).attr('href')).show();
});

Power BI Report Integration
Aug 04, 2020

Here we will see how we can access Power BI embedded reports and dashboards in an ASP.NET MVC application. Let's start with a small introduction to Power BI. Power BI is a reporting tool provided by Microsoft, available in two forms: a Windows version (Power BI Desktop) and a web version. Using the Windows version we design reports and publish them to the Power BI server, and the web version is used to view the published reports.
Power BI server URL: https://powerbi.microsoft.com/en-us/

Prerequisites to start with Power BI:
- Visual Studio 2017 (https://visualstudio.microsoft.com/vs/older-downloads/)
- Power BI Desktop (https://powerbi.microsoft.com/en-us/downloads/)
- An Azure subscription to register the application (http://portal.azure.com/)

Please follow the steps below to set up the environment before starting with the integration:
- Create an Active Directory on the Azure portal.
- Register an application on the Azure portal. Note: copy the User Id, Password, ClientID, and Client Secret Key somewhere safe.
- Grant all access permissions to the application. Later on, you can remove extra permissions as per your requirements.
- Register on the Power BI server to host reports, and create a workspace.
- Create a sample report and deploy it to the workspace we created on the Power BI server.

We could access this report directly, without any authorization, with the help of its URL, but in that case there is no data security. That is why we use Power BI Embedded: the report is loaded with the help of its embed URL, and we use an authorization token for authentication.

Power BI integration steps:
1. Generate a bearer token to authenticate your application against Azure Active Directory with the help of the User Id, Password, ClientID, and Client Secret Key from the Azure portal. We are going to generate this with the help of Postman. This bearer token will be used to access all the Power BI-related APIs, e.g. getting the workspace list inside the Active Directory, getting the items in a workspace, etc.
API URL for generating the bearer token: https://login.microsoftonline.com/common/oauth2/token

2. Now that we have a bearer token, we get the workspace (groups) list with its help. The API below returns the workspace details, such as the workspace name, id, IsReadOnly, etc.
API URL: https://api.powerbi.com/v1.0/myorg/groups/

3. With the bearer token and group details, we fetch the details of the report we hosted for integration. There are different API methods for fetching report details; the one below returns the report id, name, report type, Web URL, Embed URL, dataset id, etc.
Web URL: can be accessed without authentication and is open to all; you can view the report by browsing the URL directly.
Embed URL: a secure URL which we are going to use for the secure integration.
API URL: https://api.powerbi.com/v1.0/myorg/groups/{GroupID}/reports

4. Now we have all the details required to access the Power BI report, but we still cannot access it: it requires an authentication token for that specific report. So we generate a token for it.
API URL: https://api.powerbi.com/v1.0/myorg/groups/{GroupID}/reports/{ReportId}/generatetoken

5. We use the Embed URL and the report authentication token to integrate the report with an existing application:
- Create a sample MVC application.
- Create all the API methods explained in steps 1 to 4.
- Add the code below in the controller and in the view (a sketch of both follows).
- Run the project; it will display the sample report in a web browser.
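As a minimal sketch of what the controller described above might look like: the PowerBiService helper methods are hypothetical names standing in for the API calls from steps 1 to 4, and the group and report identifiers are placeholders.

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

// Models/ReportViewModel.cs — carries the values the view needs to embed the report
public class ReportViewModel
{
    public string ReportId { get; set; }
    public string EmbedUrl { get; set; }
    public string EmbedToken { get; set; }
}

// Controllers/ReportController.cs
public class ReportController : Controller
{
    public async Task<ActionResult> Index()
    {
        // Step 1: bearer token from Azure AD (hypothetical wrapper around the /oauth2/token call)
        string accessToken = await PowerBiService.GetAzureAdAccessTokenAsync();

        // Steps 2-3: report details (report id and embed URL) from the workspace
        var report = await PowerBiService.GetReportAsync(accessToken, "<GroupID>", "<ReportId>");

        // Step 4: embed token generated for this specific report
        string embedToken = await PowerBiService.GenerateReportEmbedTokenAsync(accessToken, "<GroupID>", "<ReportId>");

        return View(new ReportViewModel
        {
            ReportId = report.Id,
            EmbedUrl = report.EmbedUrl,
            EmbedToken = embedToken
        });
    }
}
```

In the view, the report can then be embedded with the powerbi-client JavaScript library; a sketch, assuming the powerbi.js script is referenced on the page:

```html
<!-- Views/Report/Index.cshtml -->
<div id="reportContainer" style="width:100%;height:600px;"></div>
<script>
    var models = window['powerbi-client'].models;
    var config = {
        type: 'report',
        tokenType: models.TokenType.Embed,
        accessToken: '@Model.EmbedToken',
        embedUrl: '@Model.EmbedUrl',
        id: '@Model.ReportId'
    };
    // Embed the report into the container div
    powerbi.embed(document.getElementById('reportContainer'), config);
</script>
```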
Conclusion: After reading this blog, we have learned about the Power BI APIs and how to integrate them into an ASP.NET MVC application.

Create SSIS Data Flow Task Package Programmatically
Jul 27, 2020

In this article, we will review how to create an SSIS data flow task package programmatically from a console application, with an example.

Requirements
- Microsoft Visual Studio 2017
- SQL Server 2014
- SSDT

Done with the above requirements? Let's start by launching Microsoft Visual Studio 2017 and creating a new Console Project with .NET Core. After creating the new project, give it a proper name. In Solution Explorer, add the relevant references and ensure that you have declared the namespaces as below:

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime;
using RuntimeWrapper = Microsoft.SqlServer.Dts.Runtime.Wrapper;

To use these namespaces we need to add the corresponding assembly references (Microsoft.SqlServer.ManagedDTS, Microsoft.SqlServer.DTSPipelineWrap, and Microsoft.SqlServer.DTSRuntimeWrap), and we need to keep in mind that all of these references should have the same version.

After importing the namespaces, ask the user for the source connection string, the destination connection string, and the table that will be copied to the destination:

string sourceConnectionString, destinationConnectionString, tableName;
Console.Write("Enter Source Database Connection String: ");
sourceConnectionString = Console.ReadLine();
Console.Write("Enter Destination Database Connection String: ");
destinationConnectionString = Console.ReadLine();
Console.Write("Enter Table Name: ");
tableName = Console.ReadLine();

After the declarations, create an instance of the Application and Package classes:

Application app = new Application();
Package Mipk = new Package();
Mipk.Name = "DatabaseToDatabase";

Add the ADO.NET source connection manager to the package:

ConnectionManager connSource;
connSource = Mipk.Connections.Add("ADO.NET:SQL");
connSource.ConnectionString = sourceConnectionString;
connSource.Name = "ADO NET DB Source Connection";

Add the ADO.NET destination connection manager to the package:

ConnectionManager connDestination;
connDestination = Mipk.Connections.Add("ADO.NET:SQL");
connDestination.ConnectionString = destinationConnectionString;
connDestination.Name = "ADO NET DB Destination Connection";

Insert a data flow task into the package:

Executable e = Mipk.Executables.Add("STOCK:PipelineTask");
TaskHost thMainPipe = (TaskHost)e;
thMainPipe.Name = "DFT Database To Database";
MainPipe df = thMainPipe.InnerObject as MainPipe;

Add the ADO.NET source component to the data flow task:

IDTSComponentMetaData100 conexionAOrigen = df.ComponentMetaDataCollection.New();
conexionAOrigen.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
conexionAOrigen.Name = "ADO NET Source";

Get a design-time instance of the component and initialize it:

CManagedComponentWrapper instance = conexionAOrigen.Instantiate();
instance.ProvideComponentProperties();

Specify the connection manager:

conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connSource);
conexionAOrigen.RuntimeConnectionCollection[0].ConnectionManagerID = connSource.ID;

Set the custom properties:

instance.SetComponentProperty("AccessMode", 0);
instance.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");

Reinitialize the source metadata:

instance.AcquireConnections(null);
instance.ReinitializeMetaData();
instance.ReleaseConnections();

Now, add the destination component to the data flow task.
IDTSComponentMetaData100 conexionADestination = df.ComponentMetaDataCollection.New();
conexionADestination.ComponentClassID = "Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=14.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";
conexionADestination.Name = "ADO NET Destination";

Get a design-time instance of the component and initialize it:

CManagedComponentWrapper instanceDest = conexionADestination.Instantiate();
instanceDest.ProvideComponentProperties();

Specify the connection manager:

conexionADestination.RuntimeConnectionCollection[0].ConnectionManager = DtsConvert.GetExtendedInterface(connDestination);
conexionADestination.RuntimeConnectionCollection[0].ConnectionManagerID = connDestination.ID;

Set the custom properties:

instanceDest.SetComponentProperty("TableOrViewName", "\"dbo\".\"" + tableName + "\"");

Connect the source to the destination component:

IDTSPath100 union = df.PathCollection.New();
union.AttachPathAndPropagateNotifications(conexionAOrigen.OutputCollection[0], conexionADestination.InputCollection[0]);

Reinitialize the destination metadata:

instanceDest.AcquireConnections(null);
instanceDest.ReinitializeMetaData();
instanceDest.ReleaseConnections();

Map the source output columns to the destination columns:

foreach (IDTSOutputColumn100 col in conexionAOrigen.OutputCollection[0].OutputColumnCollection)
{
    for (int i = 0; i < conexionADestination.InputCollection[0].ExternalMetadataColumnCollection.Count; i++)
    {
        string c = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].Name;
        if (c.ToUpper() == col.Name.ToUpper())
        {
            IDTSInputColumn100 column = conexionADestination.InputCollection[0].InputColumnCollection.New();
            column.LineageID = col.ID;
            column.ExternalMetadataColumnID = conexionADestination.InputCollection[0].ExternalMetadataColumnCollection[i].ID;
        }
    }
}

Save the package to the file system:

app.SaveToXml(@"D:\Workspace\SSIS\Test_DB_To_DB.dtsx", Mipk, null);

Execute the package:

Mipk.Execute();

Conclusion
In this article, we have explained one of the alternatives for creating SSIS packages using a .NET console application. In case you have any questions, please feel free to ask in the comment section below.

DELETE and UPDATE CASCADE in SQL Server foreign key
Jul 20, 2020

In this article, we will review the DELETE and UPDATE CASCADE rules in SQL Server foreign keys with different examples.

DELETE CASCADE: when we create a foreign key using this option, deleting a referenced row in the parent table (the one with the primary key) also deletes the referencing rows in the child table.

UPDATE CASCADE: when we create a foreign key using UPDATE CASCADE, updating a referenced row in the parent table also updates the referencing rows in the child table.

We will be discussing the following topics in this article:
- Creating DELETE CASCADE and UPDATE CASCADE rules in a foreign key using a T-SQL script
- Triggers on a table with a DELETE or UPDATE cascading foreign key

Let us see how to create a foreign key with DELETE and UPDATE CASCADE rules, along with a few examples.

Creating a foreign key with DELETE and UPDATE CASCADE rules
A T-SQL script creates a parent table, a child table, and a foreign key on the child table with the DELETE CASCADE rule, then inserts some sample data and checks the records (a consolidated example script is shown at the end of this section). If I now delete the row in the parent table with CountryID = 1, the rows in the child table with CountryID = 1 are deleted as well.

A foreign key created with the UPDATE CASCADE rule works the same way for updates: updating a CountryID in the Countries table also updates the referencing rows in the child table States.

To know the update and delete actions defined on a foreign key, query the sys.foreign_keys view, replacing the constraint name in the script; the delete_referential_action_desc and update_referential_action_desc columns show whether a DELETE CASCADE and an UPDATE CASCADE action are defined on the foreign key.

Let's move forward and check the behavior of the delete and update rules when the child table itself acts as a parent table to another child table. The example below demonstrates this scenario: "Countries" is the parent table of the "States" table, and the "States" table is the parent table of the "Cities" table.

We first create a foreign key with the DELETE CASCADE rule on the States table which references CountryID in the parent table Countries. Then, on the Cities table, we create a foreign key without a DELETE CASCADE rule. If we now try to delete a record with CountryID = 3, it throws an error: the delete on the parent table "Countries" tries to delete the referencing rows in the child table States, but the Cities table has a foreign key constraint with no action for delete and the referenced value still exists in that table. The delete fails at the second foreign key. When we recreate the second foreign key with the DELETE CASCADE rule, the same delete command runs successfully, deleting records in the child table "States", which in turn deletes records in the second child table "Cities".

Triggers on a table with a DELETE CASCADE or UPDATE CASCADE foreign key
An INSTEAD OF UPDATE trigger cannot be created on a table if a foreign key with UPDATE CASCADE already exists on it. It throws the error "Cannot create INSTEAD OF DELETE or INSTEAD OF UPDATE TRIGGER 'trigger name' on table 'table name'. This is because the table has a FOREIGN KEY with cascading DELETE or UPDATE." Similarly, we cannot create an INSTEAD OF DELETE trigger on a table when a foreign key with the DELETE CASCADE rule already exists on it.
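The following is a consolidated sketch of the scripts described above. The table and column names (Countries, States, CountryID) follow the article, while the exact data types and sample values are assumptions:

```sql
-- Parent and child tables with DELETE CASCADE and UPDATE CASCADE on the foreign key
CREATE TABLE dbo.Countries
(
    CountryID   INT PRIMARY KEY,
    CountryName VARCHAR(50)
);

CREATE TABLE dbo.States
(
    StateID   INT PRIMARY KEY,
    StateName VARCHAR(50),
    CountryID INT,
    CONSTRAINT FK_States_Countries FOREIGN KEY (CountryID)
        REFERENCES dbo.Countries (CountryID)
        ON DELETE CASCADE
        ON UPDATE CASCADE
);

-- Sample data
INSERT INTO dbo.Countries (CountryID, CountryName) VALUES (1, 'India'), (2, 'USA'), (3, 'Germany');
INSERT INTO dbo.States (StateID, StateName, CountryID) VALUES (1, 'Gujarat', 1), (2, 'Texas', 2), (3, 'Bavaria', 3);

-- DELETE CASCADE: removing the parent row also removes its referencing child rows
DELETE FROM dbo.Countries WHERE CountryID = 1;

-- UPDATE CASCADE: changing the referenced key value updates the child rows as well
UPDATE dbo.Countries SET CountryID = 20 WHERE CountryID = 2;

-- Inspect the delete/update actions defined on the foreign key
SELECT name, delete_referential_action_desc, update_referential_action_desc
FROM sys.foreign_keys
WHERE name = 'FK_States_Countries';
```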
Conclusion
In this article, we explored a few examples of the DELETE CASCADE and UPDATE CASCADE rules in SQL Server foreign keys. In case you have any questions, please feel free to ask in the comment section below.

Terraform: Infrastructure as Code
Jul 13, 2020

You may have heard of infrastructure as code (IaC), but do you know what infrastructure is? Why do we need infrastructure as code? What are its benefits? Is it safe and secure?

What is Infrastructure as Code (IaC)?
Infrastructure as code (IaC) means managing and upgrading your environments as infrastructure defined in configuration files. Terraform provides infrastructure as code for provisioning, compliance, and management across any public cloud, private data center, and third-party service. It enables teams to write, share, manage, and automate any infrastructure using version control, with automated policy enforcement for security, compliance, and operational best practices, and it lets developers provision their desired infrastructure from within their own workflows. IaC has a high impact from a business perspective by providing increased productivity, reduced risk, and reduced cost.

Why do we use Infrastructure as Code (IaC)?
- Configuration language: Terraform uses a simple, human-readable configuration language to define the desired topology of infrastructure resources.
- VCS integration: write, version, review, and collaborate on Terraform code using your preferred version control system.
- Workspaces: workspaces decompose monolithic infrastructure into smaller components, or "micro-infrastructures". These workspaces can be aligned to teams for role-based access control.
- Variables: granular variables allow easy reuse of code and enable dynamic changes to scale resources and deploy new versions.
- Runs: Terraform uses two-phased provisioning: a plan (dry run) and apply (execution). Plans can be inspected before execution to ensure expected behavior and safety.
- Infrastructure state: the state file is a record of the currently provisioned resources. State files enable a versioned history of the infrastructure and are encrypted at rest. Versions can be inspected to see incremental changes.
- Policy as code: Sentinel is a policy-as-code framework to automate multi-cloud governance.

What are the benefits of Infrastructure as Code (IaC)?
- Infrastructure as code enables infrastructure teams to test applications in staging or development environments early in the development cycle.
- Infrastructure as code saves you time and money. We get a version history of when the infrastructure was upgraded and who did it from the code itself, instead of asking the infrastructure admin to dig through logs, which is very time-consuming. We can check the code into version control and get versioning, so we can see an incremental history of who changed what.
- Use infrastructure as code to build, update, and manage any cloud, infrastructure, or service.
- Terraform makes it easy to re-use configurations for similar infrastructure, helping you avoid mistakes and save time. We can use the same configuration code for the different staging, production, and development environments.
- Terraform supports many providers with just a few lines of code. Major providers include AWS, Azure, GitHub, GitLab, Google Cloud Platform, VMware, Docker, and 200+ more.

A simple example that creates an EC2 instance with just a few lines of code:

resource "aws_instance" "ec2_instance" {
  ami                    = "ami-*******"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.*****.id}"]
  key_name               = "${aws_key_pair.****.id}"
  tags = {
    Name = "New-EC2-Instance"
  }
}

But first, we have to declare which provider we are writing our code for.
To do so, here is the simple basic code to assign a provider:

provider "aws" {
  region = "us-west-2"
  ## PROVIDE CREDENTIALS
}

Now, to create your EC2 instance in AWS, we have to run the commands. Terraform has four main commands to check and apply infrastructure changes: init, plan, apply, and destroy.

1. Init

$ terraform init

As the name of the command suggests, it initializes something: here Terraform is initialized in our code, which creates the basic backend and state (tfstate) files in the folder for internal use.

2. Plan

$ terraform plan

Just as we compile in some programming languages, this checks for configuration errors and plans what is going to happen when we run the script that generates the infrastructure. It shows you which resources are going to be created and what their configuration will be.

3. Apply

$ terraform apply

It is time to run the script and check what is generated from it. This command executes the script and applies the changes to our infrastructure, creating the resources we have described in the code.

4. Destroy

$ terraform destroy

This command is used when we want to remove or destroy a resource. When, after some time, we no longer need that resource, we just run this command, which destroys the resource, and your money is saved.
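The Variables feature mentioned above is what makes the same configuration reusable across development, staging, and production. As a minimal sketch (the variable name and values are illustrative, and the interpolation style matches the EC2 example above):

```hcl
# variables.tf — declare an input variable with a sensible default
variable "instance_type" {
  description = "EC2 instance size for this environment"
  default     = "t2.micro"
}

# main.tf — reference the variable instead of hard-coding the value
resource "aws_instance" "ec2_instance" {
  ami           = "ami-*******"
  instance_type = "${var.instance_type}"
}
```

The same code can then be applied with a different value per environment, for example terraform apply -var="instance_type=t2.large" for a larger production instance.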