
Import Mode vs DirectQuery In Power BI
Mar 22, 2024

Understanding Import and DirectQuery Modes in Power BI

Power BI empowers users to analyze data from various sources. This post dives into two key connection modes: Import and DirectQuery. Each offers distinct advantages depending on your data analysis needs.

Import Mode: Power and Flexibility

Import mode brings your data directly into Power BI's internal memory. This creates a copy of the data, allowing for:
- Faster Performance: Since the information is readily available, visualizations and calculations happen swiftly.
- Enhanced Data Manipulation: Transform and mold the data to your liking before loading, offering greater control compared to DirectQuery.
- Offline Accessibility: Reports built with Import mode function flawlessly even without an internet connection.

However, there are limitations to consider:
- Resource Demands: Importing large datasets strains system resources such as RAM and disk space.
- Data Refresh: Changes made to the source data won't be reflected until you refresh the import. Setting up automatic refreshes can help, but large datasets can lead to slow refresh times.

DirectQuery Mode: Real-Time Insights

DirectQuery mode bypasses internal storage. Instead, it sends queries directly to the original data source. This approach offers several benefits:
- Real-Time Analysis: Always see the latest data without manual refreshes; changes in the source database are reflected in your reports.
- Large Dataset Efficiency: DirectQuery handles massive datasets effectively, avoiding the memory constraints encountered in Import mode.
- Up-to-Date Accuracy: Reports always showcase the most current information in the source.

However, DirectQuery comes with its own limitations:
- Limited Functionality: Certain features, such as some calculated columns and complex data models, are restricted due to the reliance on live data.
- Potential Performance Lag: Queries travel back and forth between Power BI and the source system, impacting response times compared to Import mode.

Let's take a look at how the Import and DirectQuery modes work.

One of the main advantages of Power BI is its ability to import data from various online sources. To import data from your database directly into your Power BI reports and dashboards, you first need to connect to the database. Here are the steps to follow:
1. Open Power BI and click the "Get Data" button.
2. In the "Get Data" window, select the "Database" option.
3. Choose the SQL Server option.
4. Enter the server name and credentials to connect to the database.
5. Select the specific tables or views you want to import data from.
6. Click the "Load" button to import the data into Power BI, or choose "Transform Data" if there are any transformations or filters you want to apply first.

Setting Up a DirectQuery Connection

Apart from the connectivity-mode prompt, the steps for configuring a DirectQuery connection are the same: choose the DirectQuery option when prompted for the data connectivity mode.
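For reference, below is a minimal Power Query (M) sketch of the kind of query the Get Data wizard produces for a SQL Server source. The server, database, schema, and table names are placeholder assumptions; whether the table ends up in Import or DirectQuery mode is decided by the connectivity mode chosen in the dialog (or the table's storage mode), not by the M code itself.

let
    // Connect to the SQL Server instance (placeholder server and database names)
    Source = Sql.Database("MyServer", "SalesDB"),
    // Navigate to the table picked in the Navigator step
    Sales = Source{[Schema = "dbo", Item = "Sales"]}[Data]
in
    Sales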
Choosing the Right Mode

The optimal mode hinges on your specific needs:

Import Mode: When speed, offline access, and intricate data manipulation are paramount, Import mode shines. It fosters a responsive environment for in-depth analysis, ideal for creating reports and dashboards that can be explored without an internet connection. This mode is particularly advantageous for small to medium-sized datasets, where refresh times remain manageable.

DirectQuery Mode: This mode is your go-to for real-time insights. It ensures you're always basing your decisions on the freshest data available, minimizing the risk of outdated information influencing critical choices. For very large datasets, DirectQuery eliminates the memory limitations of Import mode, making it a powerful tool for handling massive volumes of information.

By understanding the strengths and weaknesses of each mode, you can leverage Power BI effectively to make informed decisions based on your unique data analysis requirements.

Understanding the Difference Between LINQ and Stored Procedures
Mar 20, 2024

Introduction

In the world of database management and querying, two commonly used methods are Language Integrated Query (LINQ) and Stored Procedures. Both serve the purpose of retrieving and manipulating data from databases, but they differ significantly in their approach and implementation. In this blog post, we'll delve into the disparities between LINQ and Stored Procedures to help you understand when to use each.

1. Conceptual Differences:

   - LINQ Example:

var query = from p in db.Products
            where p.Category == "Electronics"
            select p;

foreach (var product in query)
{
    Console.WriteLine(product.Name);
}

In this LINQ example, we're querying a collection of products from a database context (`db.Products`). The LINQ query selects all products belonging to the "Electronics" category.

   - Stored Procedure Example:

CREATE PROCEDURE GetElectronicsProducts
AS
BEGIN
    SELECT * FROM Products WHERE Category = 'Electronics'
END

Here, we've created a Stored Procedure named `GetElectronicsProducts` that retrieves all products in the "Electronics" category from the `Products` table.

2. Performance:

   - LINQ: LINQ queries are translated into SQL queries at runtime by the LINQ provider. While LINQ provides a convenient and intuitive way to query data, performance might not always be optimal, especially for complex queries or large datasets.
   - Stored Procedures: Stored Procedures are precompiled and optimized on the database server, leading to potentially better performance than dynamically generated LINQ queries. They can leverage indexing and caching mechanisms within the database, resulting in faster execution times.

3. Maintenance and Deployment:

   - LINQ: LINQ queries are embedded directly within the application code, making them easier to maintain and deploy alongside the application itself. However, changes to LINQ queries often require recompilation and redeployment of the application.
   - Stored Procedures: Stored Procedures are maintained separately from the application code and are stored within the database. This separation of concerns allows the database logic to be updated without impacting the application code. Additionally, Stored Procedures can be reused across multiple applications.

4. Security:

   - LINQ: LINQ queries are susceptible to SQL injection attacks if proper precautions are not taken. Parameterized LINQ queries can mitigate this risk to a large extent, but developers need to remain vigilant about input validation and sanitization. (A sketch of a parameterized stored-procedure call appears at the end of this post.)
   - Stored Procedures: Stored Procedures can enhance security by encapsulating database logic and preventing direct access to the underlying tables. They provide a layer of abstraction that restricts users to the operations defined within the Stored Procedure, reducing the risk of unauthorized data access or modification.

Conclusion:

In summary, both LINQ and Stored Procedures offer distinct advantages and considerations when it comes to querying databases. LINQ provides a more integrated and developer-friendly approach, while Stored Procedures offer performance optimization, maintainability, and security benefits. The choice between LINQ and Stored Procedures depends on factors such as application requirements, performance considerations, and security concerns. Understanding the differences between the two methods helps developers make informed decisions when designing database interactions within their applications.
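To tie the security discussion above to concrete code, here is a minimal, hedged sketch of calling a stored procedure from C# with ADO.NET. It assumes a hypothetical variant of the procedure above, named GetProductsByCategory, that takes a @Category parameter; the connection string and database name are placeholders to adapt to your environment.

using System;
using System.Data;
using Microsoft.Data.SqlClient;

class Program
{
    static void Main()
    {
        // Hypothetical connection string; replace with your own.
        var connectionString = "Server=.;Database=Shop;Trusted_Connection=True;TrustServerCertificate=True;";

        using var connection = new SqlConnection(connectionString);

        // GetProductsByCategory is a hypothetical variant of GetElectronicsProducts
        // that accepts the category as a parameter.
        using var command = new SqlCommand("dbo.GetProductsByCategory", connection)
        {
            CommandType = CommandType.StoredProcedure
        };

        // The value travels as a typed parameter and is never concatenated into SQL text.
        command.Parameters.Add("@Category", SqlDbType.NVarChar, 50).Value = "Electronics";

        connection.Open();
        using var reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine(reader["Name"]);
        }
    }
}

Because the category value is passed as a parameter rather than embedded in the SQL text, this pattern avoids the string-concatenation mistakes that typically lead to SQL injection.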

Implementing Facebook Authentication in ASP.NET: A Step-by-Step Guide
Mar 18, 2024

Introduction:
Integrating Facebook authentication into your .NET project offers a user-friendly login option, allowing users to sign in with their Facebook credentials. This guide will walk you through the steps to implement Facebook login, enhancing user convenience and trust and providing access to user data.

Creating a Demo for Facebook Authentication in .NET

Step 1: Set Up the .NET Project
1. Create a new ASP.NET MVC project using Visual Studio or your preferred IDE.

Step 2: Create a Facebook Developer App
2. Go to the Facebook Developer Portal: https://developers.facebook.com/
3. Create a new app.
4. Configure the app details and obtain the App ID and App Secret.

Step 3: Configure Facebook Authentication in the .NET Project
5. In your .NET project, open `Startup.cs`.
6. Configure Facebook authentication:

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = FacebookDefaults.AuthenticationScheme;
})
.AddCookie()
.AddFacebook(options =>
{
    options.AppId = "Your-Facebook-App-ID";
    options.AppSecret = "Your-Facebook-App-Secret";
    options.CallbackPath = new PathString("/Auth/FacebookCallback");
});

Step 4: Create the AuthController
7. Create an `AuthController` with actions for the Facebook login and callback:

public class AuthController : Controller
{
    public IActionResult Index()
    {
        return View();
    }

    [HttpGet]
    [Route("signin-facebook")]
    public async Task<IActionResult> FacebookCallback()
    {
        var result = await HttpContext.AuthenticateAsync("Facebook");
        if (result.Succeeded)
        {
            // Authentication succeeded. Add your logic here.
            return RedirectToAction("Index", "Home");
        }
        // Authentication failed. Handle the error.
        return RedirectToAction("Login", "Account");
    }

    public IActionResult FacebookLogin()
    {
        var properties = new AuthenticationProperties
        {
            // Builds the /Auth/FacebookCallback URL to return to after the challenge.
            RedirectUri = Url.Action("FacebookCallback", "Auth"),
        };
        return Challenge(properties, FacebookDefaults.AuthenticationScheme);
    }
}

Step 5: Implement the Facebook Login Button
8. In your `Index.cshtml` or another appropriate view, add a button for Facebook login:

<h1>Facebook Authentication</h1>
<button class="btn btn-primary"><a style="color:white" asp-controller="Auth" asp-action="FacebookLogin">Login with Facebook</a></button>

Step 6: Update App Settings
9. In the Facebook Developer Portal, go to Facebook Login > Settings and add `https://localhost:7135/Auth/FacebookCallback` to the "Valid OAuth Redirect URIs".

Step 7: Run and Test
10. Run your .NET project and test the Facebook authentication by clicking the "Login with Facebook" button. Click Login with Facebook > Continue. You can add your own redirect logic for a successful login.
You can also use the JavaScript SDK to authenticate users in your project; in our case we will continue with MVC. Here we will use the same app we already created; we just replace the controller action with the JS function provided by the "Meta Developer" Quick Start. Add this JavaScript code in the view where your login button is available:

<button class="btn btn-primary"><a style="color:white" onclick="loginWithFacebook()">Login with Facebook</a></button>

<script>
    window.fbAsyncInit = function () {
        FB.init({
            appId: '1438230313570431',
            xfbml: true,
            version: 'v19.0'
        });
        FB.AppEvents.logPageView();
    };

    (function (d, s, id) {
        var js, fjs = d.getElementsByTagName(s)[0];
        if (d.getElementById(id)) { return; }
        js = d.createElement(s);
        js.id = id;
        js.src = "https://connect.facebook.net/en_US/sdk.js";
        fjs.parentNode.insertBefore(js, fjs);
    }(document, 'script', 'facebook-jssdk'));

    function loginWithFacebook() {
        FB.login(function (response) {
            if (response.authResponse) {
                // User is logged in and authorized your app
                console.log('Successful login for: ' + response.authResponse.userID);
                console.log(response);
                debugger;
                window.location = "https://localhost:44304/Auth/SuccesfullLogin";
            } else {
                // User cancelled login or did not authorize your app
                console.log('Login cancelled');
            }
        }, { scope: 'public_profile,email' }); // Specify the required permissions
    }
</script>

Now the js.src link used in the JS function above needs to be added in the Meta developer app. In our case it is https://connect.facebook.net/en_US/sdk.js. Go again to Use cases > Customize > Settings and add the link in the "Allowed Domains for the JavaScript SDK" section. Make sure the "Login with the JavaScript SDK" toggle is set to "Yes".

Now you have a comprehensive guide for creating a demo of Facebook authentication in a .NET project. Share this guide, and users can follow each step to implement Facebook login functionality in their ASP.NET applications.

Understanding and Creating Sitemaps for Your Website
Mar 14, 2024

Introduction:
A sitemap is a crucial element in optimizing your website for search engines. It serves as a roadmap, guiding search engine crawlers through the various pages and content on your site. In this blog post, we'll delve into what sitemaps are, how they are used, and provide step-by-step guidance on creating one. Additionally, we'll explore an alternative method using online sitemap generators.

What is a Sitemap?
A sitemap is essentially a file that provides information about the structure and content of your website to search engines. It lists URLs and includes additional metadata such as the last modification date, change frequency, and priority of each page. The primary purpose is to help search engine bots crawl and index your site more efficiently.

How is a Sitemap Used?
1. Improved Crawling: Search engines use sitemaps to discover and understand the organization of your website. This aids in more efficient crawling, ensuring that no important pages are missed.
2. Enhanced Indexing: By providing metadata like the last modification date and change frequency, sitemaps help search engines prioritize and index pages based on their relevance and importance.
3. SEO Benefits: Having a well-structured sitemap can positively impact your site's search engine optimization (SEO), potentially leading to better visibility in search results.

How to Create a Sitemap:
1. Understand Your Website Structure: Before creating a sitemap, familiarize yourself with your site's structure, including main pages, categories, and any dynamic content.
2. Choose a Sitemap Generation Method:
   - Manual Method: Use a text editor to create an XML file, including URLs, last modification dates, etc.
   - CMS Plugin: If you use a content management system (CMS) like WordPress, leverage plugins such as Yoast SEO or Google XML Sitemaps.
   - Online Sitemap Generator: Use online tools like XML-sitemaps.com or Screaming Frog to automatically generate a sitemap.
3. Include Relevant Information: For each URL, include the `<loc>` (URL), `<lastmod>` (last modification date), `<changefreq>` (change frequency), and `<priority>` (priority) tags (see the sample sitemap at the end of this post).
4. Save and Upload: Save the XML file with a ".xml" extension (e.g., "sitemap.xml"). Upload it to your website's root directory using FTP or your hosting provider's file manager.
5. Submit to Search Engines: Submit your sitemap to search engines using their webmaster tools (e.g., Google Search Console, Bing Webmaster Tools).

Alternative Method: Using an Online Sitemap Generator:
1. Choose a Tool: Select an online sitemap generator such as XML-sitemaps.com or Screaming Frog.
2. Enter Your Website URL: Input your website's URL into the generator.
3. Generate and Download: Click the "Generate" or "Crawl" button to initiate the process. Once complete, download the generated sitemap file.
4. Upload and Submit: Upload the downloaded file to your website's root directory and submit it to search engines.

Conclusion:
Creating and submitting a sitemap is a fundamental step in optimizing your website for search engines. Whether you opt for manual creation or use online generators, a well-structured sitemap can significantly contribute to better search engine visibility and improved SEO. Regularly update and submit your sitemap to ensure that search engines stay informed about changes to your site's content.
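As referenced in step 3 above, here is a minimal sample sitemap.xml; the domain, dates, and metadata values are placeholders to adapt to your own site.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-03-14</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/blog/</loc>
    <lastmod>2024-03-10</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>

Each additional page on your site gets its own <url> entry, and only <loc> is strictly required; the other three tags are optional hints for crawlers.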

Effortless Data Migration Using MySQL Federated Engine
Mar 13, 2024

Scenario: If someone asks you, "Hey, can you transfer data from one MySQL server to another?" and your first thought is SSIS or some other heavyweight tool, then this article was made for you; it will reduce your effort and save your time.

Introduction: In the dynamic landscape of database management, the need to seamlessly access and integrate data from multiple sources has become paramount. Whether it's consolidating information from disparate servers or synchronizing databases for backup and redundancy, MySQL offers a robust solution through its querying capabilities. In this guide, we delve into the art of fetching data from one MySQL server to another using SQL queries. This method, often overlooked in favor of complex data transfer mechanisms, provides a streamlined approach to data migration, enabling developers and database administrators to efficiently manage their resources. Through a combination of MySQL's versatile querying language and the innovative use of the FEDERATED storage engine, we'll explore how to establish connections between servers, replicate table structures, and effortlessly transfer data across the network. From setting up the environment to executing queries and troubleshooting common challenges, this tutorial equips you with the knowledge and tools to navigate the intricacies of cross-server data retrieval with ease.

Since we are going to use MySQL's FEDERATED feature, we first need to check whether our server supports the FEDERATED engine. Simply open MySQL Workbench and run the code below:

show engines;

It lists all engines; check whether our system supports FEDERATED or not.

If your system does not support it, don't worry, we are going to enable it. Open the folder where your MySQL server configuration file is saved. In my case it is on the C drive: C:\ProgramData\MySQL\MySQL Server 8.0\my.ini. Open it in Notepad++ or your preferred editor and insert the federated keyword in the configuration (under the [mysqld] section) as shown below.

Now we need to restart MySQL. Press Windows+R, paste services.msc, press OK, find the MySQL service, and restart it. Now go back to Workbench and run the show engines; code again. Your FEDERATED engine is now supported, and it shows as such.

The same process needs to be applied on the destination side as well, because both servers (the source and the destination) need to support the FEDERATED engine.

Now we make sure we have permission to access the source server. For that we need to create a user and grant it permission on the database and tables. The code below demonstrates creating the user and granting the permissions:

CREATE USER 'hmysql'@'192.168.1.173' IDENTIFIED BY 'Hardik...';
GRANT ALL PRIVILEGES ON *.* TO 'hmysql'@'192.168.1.173' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Now create a connection for that user (created above on the source side) on the destination server (our system). Click on the plus (+) icon as shown in the image and fill in all the details; the next image shows the details of the user connection. After filling in the details, our user connection is added as shown in the image below.

Go to the user connection (hardikmysql) and find the table you want to take data from using a MySQL query. Here I am taking the 'actor' table from the 'sakila' database, which looks like below.

Now we need to run the FEDERATED query on our system (the destination server) with the connection URL string. Our MySQL query looks like below:

CREATE TABLE `actor` (
  `actor_id` smallint unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

The main part here is the following:

ENGINE=FEDERATED DEFAULT CHARSET=utf8mb4
CONNECTION='mysql://hmysql:Hardik...@192.168.1.173:3306/sakila/actor';

In this connection string:
- 'mysql' is mandatory; you cannot use another word.
- 'hmysql' is the user name.
- 'Hardik...' is the password for the user.
- '192.168.1.173' is the server address.
- '3306' is the port number.
- 'sakila' is the database name.
- 'actor' is the table name.

Now run the CREATE TABLE statement above and you get the data on our system (the destination server).
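Once the federated actor table exists on the destination server, querying it transparently fetches rows from the source server over the network. A small sketch is below; the local copy table name actor_local is a hypothetical example.

-- Rows are fetched from the remote sakila.actor table on the source server
SELECT actor_id, first_name, last_name
FROM actor
LIMIT 10;

-- Optionally materialise a plain local (non-federated) copy on the destination server
CREATE TABLE actor_local ENGINE = InnoDB AS
SELECT * FROM actor;

Materialising a local copy keeps the data available on the destination server even if the link to the source server is later removed.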

Connect Json File to Power BI and Normalize Json File In Power Query
Mar 12, 2024

Introduction: Welcome to our guide on normalizing JSON files using Power Query! JSON (JavaScript Object Notation) has become one of the most popular formats for storing and exchanging data due to its simplicity and flexibility. However, working with JSON data in its raw form can sometimes be challenging, especially when dealing with nested structures or arrays. In this blog post, we'll delve into the process of normalizing JSON files using Power Query. Normalization refers to the process of organizing data into a tabular format, making it easier to analyze and manipulate.

First we need to import the data into Power BI Desktop through a JSON connection:
1. Click Get Data on the Home tab of Power BI Desktop.
2. Click the More tab to find the JSON file connector.
3. Click the Connect button, which opens a file browser on your system; choose your JSON file.

You are redirected to the Power Query Editor, whose interface looks like below. Here we can see that Tables.Columns and Tables.Rows are in the form of lists, not in the form of the row data that we want. Click on one of the lists to see what is inside; the image shows the data inside one list.

There are around 121 rows inside one list, and we need to expand it row-wise. The first idea is to expand the list columns by clicking the icon on the right side of each column and choosing "Expand to new rows" for every column that contains a list. But you will notice that one row then produces multiple sub-rows, which is not appropriate, as shown below. For the extract time (first row) only one sub-row is valid; the other sub-rows are duplicates. This happens for every row, so the resulting row count for the table is not acceptable, and this approach is not valid; we missed something.

In the earlier image (image 5), both Tables.Rows and Tables.Columns hold a list in each row. If you expand both lists one at a time, you will notice that both lists contain data with the same row count (in some cases the same rows, in other cases not), which indicates that the data in the two lists is connected. So we need to create a new column that combines both lists into a table.

Go to the Add Column tab and select Custom Column. It opens the Custom Column dialog; write code like below:

Table.FromColumns({[Tables.Columns],[Tables.Rows]})

This adds a new column to the existing table, as shown below. When you click on a row of the Customcolumn column, you are taken to the expanded version of the table, which looks like below.

Now we can normalize the data.
Now we can split the table into multiple tables to make data modeling easier: create duplicate queries and filter each one as needed. Then close your Power Query tab and you get your desired output.

Below is the Advanced Editor code, in case you don't want to apply all the steps we implemented above one by one:

let
    Source = Json.Document(File.Contents("C:\Users\MagnusMinds\Downloads\Order_CRHESTHASH_ET_FD2024-03-11_PID184_ORD686_MRK2024-03-08 09 14 44.410_T00638455040948575166.json")),
    #"Converted to Table" = Table.FromRecords({Source}),
    #"Expanded Tables" = Table.ExpandListColumn(#"Converted to Table", "Tables"),
    #"Expanded Tables1" = Table.ExpandRecordColumn(#"Expanded Tables", "Tables", {"Name", "QueryStartTimeUtc", "QueryEndTimeUtc", "QueryElapsedTimeMs", "CurrentRow", "Columns", "Rows", "OriginalRowCount", "Hashes", "MasterRecordIsDuplicate"}, {"Tables.Name", "Tables.QueryStartTimeUtc", "Tables.QueryEndTimeUtc", "Tables.QueryElapsedTimeMs", "Tables.CurrentRow", "Tables.Columns", "Tables.Rows", "Tables.OriginalRowCount", "Tables.Hashes", "Tables.MasterRecordIsDuplicate"}),
    #"Changed Type" = Table.TransformColumnTypes(#"Expanded Tables1", {{"Tables.Name", type text}, {"Tables.QueryStartTimeUtc", type datetime}, {"Tables.QueryEndTimeUtc", type datetime}, {"Tables.QueryElapsedTimeMs", Int64.Type}, {"Tables.CurrentRow", Int64.Type}, {"Tables.Columns", type any}, {"Tables.Rows", type any}, {"Tables.OriginalRowCount", Int64.Type}, {"Tables.Hashes", type any}, {"Tables.MasterRecordIsDuplicate", type logical}}),
    #"Expanded Tables.Rows" = Table.ExpandListColumn(#"Changed Type", "Tables.Rows"),
    #"Added Custom" = Table.AddColumn(#"Expanded Tables.Rows", "Customcolumn", each Table.FromColumns({[Tables.Columns],[Tables.Rows]})),
    #"Expanded Customcolumn" = Table.ExpandTableColumn(#"Added Custom", "Customcolumn", {"Column1", "Column2"}, {"Customcolumn.Column1", "Customcolumn.Column2"}),
    #"Expanded Customcolumn.Column1" = Table.ExpandRecordColumn(#"Expanded Customcolumn", "Customcolumn.Column1", {"Name"}, {"Customcolumn.Column1.Name"})
in
    #"Expanded Customcolumn.Column1"

Creating Custom Calendars for Accurate Working Day Calculation in Power BI
Mar 08, 2024

Introduction

Have you ever needed to add the holidays that fall between today (or any date) and a target date, or handle a similar scenario? If yes, this article will help you.

Scenario

Recently, I faced a situation where I needed to add days to the current date (today) depending on the total number of hours left for a particular employee. For example, employee A has 150 hours left and spends only 6 hours per working day, so we divide the hours left by the daily hours to find the date (the forecast date) on which employee A will complete the task, excluding holidays and weekends.

Let's consider employee X, who gets 15 forecast days on the basis of the hours left. Those 15 days must not include weekends and holidays, which we need to add to the current date (today). If employee X hits 3 days of weekend or public holiday within those 15 days, they also need to be added to the forecast days, so the 15 days become 18 days. If employee X's forecast date itself falls on a weekend or a holiday, that date is also not acceptable.

We take some sample data for solving this scenario, including a Holiday table.

The logic for solving this problem is to build a separate date table that will be used to derive the forecast date, and add a new Holiday column to it. After that we add another column that gives a flag of 0 if the date falls on a weekend or a holiday and 1 if it is a working day, and then we rank the dates by that flag, excluding flag 0.

We create a new date table used specifically for the forecast date, using the DAX code below (here I am taking only 2024 dates):

fore-cast Datetable =
ADDCOLUMNS(
    CALENDAR( TODAY(), DATE(2024,12,31) ),
    "weekday", WEEKDAY([Date],2),
    "Dayname", FORMAT([Date],"dddd")
)

Join the Holiday table and the fore-cast Date table using the Date field.

Create a new column (holiday) in the fore-cast Date table, which brings only the holiday dates from the Holiday table into the existing table:

holiday = RELATED('Holiday Table'[holiday date])

Create another new column with a condition that gives flag 0 for dates that are weekends or holidays and 1 for working days:

workingday =
IF(
    NOT('fore-cast Datetable'[weekday]) in {6,7},
    IF(
        'fore-cast Datetable'[holiday]<>BLANK(),
        0,
        1
    ),
    0
)

Add a column that gives the day number from today, ignoring weekends and holidays:

Day Number =
RANKX(
    FILTER(
        ALL('fore-cast Datetable'[Date],'fore-cast Datetable'[workingday]),
        'fore-cast Datetable'[workingday]<>0
    ),
    'fore-cast Datetable'[Date],,ASC
)

Our main goal is to take the number of days derived from the hours and find the date equivalent to the days left from today, ignoring weekends and holidays; the Day Number column gives us that ability.

The fore-cast date table looks like below. Now create a new column in the employee table:

Forecast date column =
var a = FLOOR((Employee[Hour left]/6),1)
return
MAXX(
    FILTER(
        ALL('fore-cast Datetable'[Date],'fore-cast Datetable'[Day Number]),
        'fore-cast Datetable'[Day Number]=a
    ),
    'fore-cast Datetable'[Date]
)

We can create a measure with slight changes:

fore-cast date using measure =
var a = FLOOR(SUM(employee[hour left])/6,1)
return
MAXX(
    FILTER(
        ALL('fore-cast Datetable'[Date],'fore-cast Datetable'[Day Number]),
        'fore-cast Datetable'[Day Number]=a
    ),
    'fore-cast Datetable'[Date]
)

Make a table visual.

If you don't want to add a new table to your existing data model, we can also achieve this without the fore-cast date table by creating it on the fly.
Use the DAX code below for the column:

fore-cast date without table refrence =
var a = FLOOR(('Employee'[Hour left]/6),1)
var b = NETWORKDAYS(TODAY(),TODAY()+a,1)
var c =
ADDCOLUMNS(
    CALENDAR(TODAY(),TODAY()+a+b),
    "weekday", WEEKDAY([Date],2),
    "dayname", FORMAT([Date],"dddd"),
    "publikholiday", LOOKUPVALUE('Holiday Table'[Holiday Date],[holiday date],[Date]),
    "holidaycondition",
        var a1 =
        IF(
            not(WEEKDAY([Date],2)) in {6,7},
            IF(
                LOOKUPVALUE('Holiday Table'[Holiday Date],[holiday date],[Date])<>BLANK(),
                0,
                1
            ),
            0
        )
        RETURN a1
)
var f = ADDCOLUMNS(c,"rank",RANKX(FILTER(c,[holidaycondition]<>0),[Date],,ASC))
var e = MINX(FILTER(f,[rank]=a),[Date])
return e

With slight changes to the code above, use the code below for a measure:

fore-cast date measurewithout table refrence =
var a = FLOOR(sum(employee[hour left])/6,1)
var b = NETWORKDAYS(TODAY(),TODAY()+a,1)
var c =
ADDCOLUMNS(
    CALENDAR(TODAY(),TODAY()+a+b),
    "weekday", WEEKDAY([Date],2),
    "dayname", FORMAT([Date],"dddd"),
    "publikholiday", LOOKUPVALUE('Holiday Table'[Holiday Date],'Holiday Table'[holiday date],[Date]),
    "holidaycondition",
        var a1 =
        IF(
            not(WEEKDAY([Date],2)) in {6,7},
            IF(
                LOOKUPVALUE('Holiday Table'[Holiday Date],'Holiday Table'[holiday date],[Date])<>BLANK(),
                0,
                1
            ),
            0
        )
        RETURN a1
)
var f = ADDCOLUMNS(c,"rank",RANKX(FILTER(c,[holidaycondition]<>0),[Date],,ASC))
var e = MINX(FILTER(f,[rank]=a),[Date])
return e

Output without using the table reference.

Year-to-Date & Year-over-Year Calculation using DAX in Power BI
Feb 29, 2024

Introduction to Power BI and Year-to-Date (YTD) & Year-over-Year (YoY) Calculations

Power BI is a data visualization and business intelligence tool that allows users to connect to different data sources, transform data, and create insightful reports and dashboards. With Power BI, users can easily perform complex calculations such as a YTD calculation, which provides a way to view data from the beginning of the year up to a given point in time. YoY growth is the change in a metric compared to the same period one year prior. There are several approaches to YTD and YoY calculation using DAX in Power BI; let's use one of them.

What is Year-to-Date (YTD)?
Imagine you're in February, looking back at all the data from the beginning of the year (January 1st) until today. That's YTD. It's like a running total of your performance throughout the current year.

How to Calculate Year-to-Date (YTD)?
Assume we have a calendar table and a sales table with a column for the sales amount. Now use DAX to develop a measure that computes the current year's YTD revenue.

Previous Year-to-Date (PYTD)
Now, rewind to the same day in the previous year. The data from January 1st of that year up to that day is PYTD. It's your benchmark, a reference point to compare your current year's progress against.

How to Calculate Previous Year-to-Date (PYTD)?
Using the SAMEPERIODLASTYEAR function, we can get the same dates of the previous year.

Year-over-Year (YoY) Growth
This is where things get exciting! YoY is the change between your current YTD and the PYTD for the same day. It tells you how much you've grown (or shrunk) compared to the same period last year.

How to calculate YoY growth?
Subtract PYTD (YTD Rev LY) from YTD Revenue (YTD Rev). (A sketch of these measures appears after the conclusion of this post.)

The DAX functions I utilized for these calculations:
- LASTDATE(Dates): Returns the last non-blank date.
- STARTOFYEAR(Dates): Returns the start of the year.
- SAMEPERIODLASTYEAR(Dates): Returns a set of dates in the current selection shifted back by one year.
- CALCULATE(Expression, Filter, Filter, …): Evaluates an expression in a context modified by filters.
- DATESBETWEEN(Dates, StartDate, EndDate): Returns the dates between two given dates.

Conclusion:
Calculating YTD and YoY growth in Power BI using DAX is a valuable technique for analyzing financial performance and identifying trends. Furthermore, it's important to note that this comprehensive approach leverages only pre-defined DAX functions. By understanding and practicing these versatile functions, you can unlock the ability to perform a wide range of complex calculations within Power BI, ultimately transforming your data into actionable insights.
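Since the measure screenshots are not reproduced here, below is a minimal sketch of the three measures using only the functions listed above. It assumes a Dates calendar table related to a Sales table with an Amount column; the table and column names are assumptions, and the measure names follow the labels (YTD Rev, YTD Rev LY) used in the text. Each definition is created as a separate measure in the model.

// Running total from the start of the year to the last date in the current context
YTD Rev =
CALCULATE(
    SUM(Sales[Amount]),
    DATESBETWEEN(
        Dates[Date],
        STARTOFYEAR(Dates[Date]),
        LASTDATE(Dates[Date])
    )
)

// The same YTD calculation shifted back one year
YTD Rev LY =
CALCULATE(
    [YTD Rev],
    SAMEPERIODLASTYEAR(Dates[Date])
)

// Year-over-year growth: current YTD minus the previous year's YTD
YoY Growth = [YTD Rev] - [YTD Rev LY]

Dropped onto a table or line chart by month, these three measures give the running totals for both years side by side along with their difference.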

How to Store/Archive Data in AWS S3 Bucket Using AWS Glue Script
Feb 29, 2024

In today's data-driven world, efficient storage and management of data are paramount for businesses of all sizes. There are now many data sources available in the market and the need for analytics has vastly increased, so having reliable and scalable data management is essential. Amazon Web Services (AWS) offers a robust set of tools for data storage, including the Simple Storage Service (S3), a highly durable and scalable object storage solution, and AWS Glue, a fully managed extract, transform, and load (ETL) service.

In this blog, we'll discuss the process of storing and archiving data using an AWS S3 bucket and an AWS Glue script. We'll explore the benefits of this approach and provide a step-by-step guide to help you set up your data storage and archiving solution.

Setting Up Data Storage and Archiving with AWS S3 and AWS Glue:
Let's learn how to create the S3 bucket and the AWS Glue script.

Step 1) Create an AWS S3 Bucket:
Create an S3 bucket in the AWS Management Console. Choose a unique bucket name, remember to select the appropriate region and access settings according to your requirements, and configure the necessary permissions. This bucket will be used to store the archived data.

Step 2) Configure Life-cycle Policies:
Once the bucket is created, we can manage the life-cycle and replication policies of the stored objects. Open the created bucket and go to the Management tab. Configure the rules for life-cycle and replication policies as per your requirements. We can define rules to transition objects to different storage classes or delete them after a certain period.

Step 3) Develop an AWS Glue Script:
Now that we have created a storage location for the archived data, the next step is to develop an AWS Glue script to perform the necessary ETL operations on your data. This may include extracting data from various sources, transforming it into the desired format, and loading it into your S3 bucket. AWS Glue supports Python as the scripting language for defining ETL jobs, making it flexible and easy to use for developers and data engineers.

Here's a detailed breakdown of how to develop an AWS Glue script:

Create a Glue Job: In the AWS Glue console, navigate to the "Jobs" section and click "Add job". Provide a name for your job and select the IAM role that grants Glue the necessary permissions to access your data sources and write to the S3 bucket. AWS also provides a visual interface that allows users to create, run, and monitor data integration jobs in AWS Glue; it offers a graphical, no-code interface for building AWS Glue jobs in easy steps.

Define Data Sources & Destination: Identify the data sources you'll be working with. These can include various types of data repositories such as relational databases, data lakes, or even streaming data sources like Amazon Kinesis. AWS Glue supports a wide range of data sources, allowing you to extract data from diverse platforms.

We just have to configure the source, the transformation if needed, and the destination with the proper connection string. For example, for a relational database source, we have to provide a JDBC connection to the server or a Data Catalog table. Once the connection is successful, we can enter the schema and object name and preview the data in AWS.

After successfully configuring the source and destination locations, AWS automatically generates the ETL script, which we can review in the Script tab.
Write Glue Script:
In the script editor, we can also directly write the Python code that defines our ETL operations. Let's understand it with an example: we will connect to a SQL Server database and execute a stored procedure that transfers the data into a table, and from this table we will archive the data into our S3 bucket.

Please find the structure of it below:

import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

from py4j.java_gateway import java_import

source_jdbc_conf = glueContext.extract_jdbc_conf('ConnectionName')

java_import(sc._gateway.jvm, "java.sql.Connection")
java_import(sc._gateway.jvm, "java.sql.DatabaseMetaData")
java_import(sc._gateway.jvm, "java.sql.DriverManager")
java_import(sc._gateway.jvm, "java.sql.SQLException")

conn = sc._gateway.jvm.DriverManager.getConnection(
    source_jdbc_conf.get('url') + ";databaseName=DB_NAME",
    source_jdbc_conf.get('user'),
    source_jdbc_conf.get('password')
)

cstmt = conn.prepareCall("{call dbo.sptoGetthedatandtransferintotable(?)}")
results = cstmt.execute()

# Script generated for node SQL Server table
SQLServertable_node1 = glueContext.create_dynamic_frame.from_options(
    connection_type="sqlserver",
    connection_options={
        "useConnectionProperties": "true",
        "dbtable": "Data_Table",
        "connectionName": "Connection_name"
    },
    transformation_ctx="SQLServertable_node1",
)

# Script generated for node S3 bucket
S3bucket_node3 = glueContext.write_dynamic_frame.from_options(
    frame=SQLServertable_node1,
    connection_type="s3",
    format="glueparquet",
    connection_options={"path": "s3://s3_newcreatedbucket//"},
    format_options={"compression": "snappy"},
    transformation_ctx="S3bucket_node3",
)

conn.close()
job.commit()

Step 4) Version Control:
AWS also provides version control of the job script through Git, so we can track and manage changes.

Step 5) Run the Glue Job:
Once you're satisfied with the script's functionality, you can run the Glue job either on demand or schedule it to run at specific intervals. AWS Glue will execute the script, extract data from the defined sources, perform the transformations, and load the transformed data into the specified S3 bucket.

Step 6) Monitor Job Execution:
Monitor the job execution in the AWS Glue console or via Amazon CloudWatch. You can track metrics such as job run time, success/failure status, and resource utilization to ensure that your ETL processes are running smoothly.

After following these steps, you should be able to efficiently store and archive data in an S3 bucket using an AWS Glue script. Before wrapping up, let's look at the benefits of the AWS S3 and AWS Glue services.

Benefits of Using AWS S3 and AWS Glue:

Scalability: AWS S3 provides virtually unlimited storage capacity (individual objects can be up to 5 TB in size), allowing you to scale your storage resources seamlessly as your data grows.
Durability: S3 is designed for 99.999999999% durability of stored objects; roughly speaking, if you stored 100 billion objects in S3, you could expect to lose only about one object per year on average. This ensures that your data is highly resilient and protected against loss.
Cost-effectiveness: With AWS S3, you pay only for the storage you use, making it a cost-effective solution for businesses of all sizes.
Simplified Management: AWS Glue automates the process of data discovery, transformation, and loading, streamlining the data management process and reducing the need for manual intervention.
Integration: Both AWS S3 and AWS Glue integrate seamlessly with other AWS services, such as Amazon RDS, Amazon Redshift, Amazon Athena, and Amazon EMR, allowing you to build comprehensive data pipelines and analytics workflows.
Availability: Amazon S3 replicates data across multiple devices, so even if one of them fails, customers can still access their data with no downtime. This ensures that your data is available whenever you require it.

To summarize, by leveraging AWS S3 and AWS Glue you can build a robust data storage and archiving solution that is scalable, durable, and cost-effective. Whether you're dealing with large volumes of data or need to automate the process of archiving historical data, AWS provides the tools and services you need to streamline your data management workflows. Start exploring the possibilities today and unlock the full potential of your data with AWS.

Thank you for your visit. We hope this blog was helpful and that you found what you were looking for. Best of luck!