What is an Azure VM?

Azure virtual machines (VMs) are a cloud-based computing service that lets you run applications on the Microsoft Azure platform. VMs are on-demand, scalable computing resources that offer a number of benefits, including:

Security: Azure VMs offer a secure way to run applications.
Affordability: You pay for extra VMs only when you need them and can shut them down when you don't.
Flexibility: You can choose from various operating systems, including Windows and Linux.
Scalability: You can scale up to thousands of VMs based on demand or schedules.
Performance: You can enhance network and storage performance with custom hardware.

Azure VMs can be created through the Azure portal, which provides a browser-based user interface for creating VMs and their associated resources. In this blog, I'll show you how to use the Azure portal to deploy a virtual machine (VM) in Azure.

Sign in to the Azure portal.

Create Virtual Machine
Enter "Virtual machines" in the search box. Under Services, select Virtual machines.
On the Virtual machines page, select Create and then Azure virtual machine. The Create a virtual machine page opens.
Under Project details, select the resource group.
Under Instance details, enter the virtual machine name and choose "Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2" for the Image. Leave the other defaults. You can also choose a different image based on your requirements.
Under Administrator account, provide a username, such as azureuser, and a password.
Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP (80) from the drop-down.
Leave the remaining defaults and then select the Review + create button at the bottom of the page.
After validation runs, select the Create button at the bottom of the page.
After deployment is complete, select Go to resource.

Connect to Virtual Machine
On the overview page for your virtual machine, select Connect.
Download the RDP file.
Open the downloaded RDP file and click Connect when prompted.
Click More choices, enter the username and password you set while creating the VM, and click OK.
You can see that your VM is now running.

Here are some other things to know about Azure VMs:
Maintenance: You still need to maintain the VM by configuring it, patching it, and installing software.
Cost: The cost of an Azure VM depends on the size and type of VM, as well as the other services used with it.
Security: You should take steps to secure your data and applications, such as identity management, encryption, and network protection.
Virtual machine selector: You can use the virtual machine selector to find the right VMs for your needs and budget.

Conclusion: An Azure virtual machine gives you the flexibility of virtualization without buying and maintaining the physical hardware that runs it. However, you still need to maintain the virtual machine by performing tasks such as configuring, patching, and installing the software that runs on it.
Simplifying API Responses with AutoWrapper.Core in .NET Core

Handling API responses effectively is a crucial aspect of building robust and user-friendly applications. In .NET Core applications, the AutoWrapper.Core library comes to the rescue, providing a streamlined way to structure and standardize API responses. In this blog post, we'll explore how to use AutoWrapper.Core to create fixed responses for different status codes in your API.

First, you'll need to install the AutoWrapper.Core NuGet package. Add the following line to your project's .csproj file:

<PackageReference Include="AutoWrapper.Core" Version="4.5.1" />

This package simplifies the process of handling API responses and ensures a consistent format for success, error, and data messages.

Example: Login Method
Let's consider a common scenario, the login method, where we want to ensure fixed responses for both successful and unsuccessful attempts.

[HttpPost("Login")]
public async Task<ApiResponse> Login([FromBody] Login model)
{
    var user = await _userService.GetUserByName(model.UserName);
    if (user != null && await _userService.CheckUserPassword(user, model.Password))
    {
        var userResponse = await _tokenService.GenerateToken(user);
        return new ApiResponse(message: "Login Successfully.", result: userResponse, statusCode: 200);
    }
    return new ApiResponse(message: "Invalid Credential.", result: null, statusCode: 401);
}

In this example, we're using AutoWrapper.Core's ApiResponse class to encapsulate our responses. For a successful login attempt (status code 200), we return a positive message along with the user response. In case of invalid credentials (status code 401), an appropriate error message is provided.

ApiResponse Class
Now, let's take a closer look at the ApiResponse class from AutoWrapper.Core:

namespace AutoWrapper.Wrappers;

public class ApiResponse
{
    public string Version { get; set; }

    [JsonProperty(DefaultValueHandling = DefaultValueHandling.Ignore)]
    public int StatusCode { get; set; }

    public string Message { get; set; }

    [JsonProperty(DefaultValueHandling = DefaultValueHandling.Ignore)]
    public bool? IsError { get; set; }

    public object ResponseException { get; set; }
    public object Result { get; set; }

    [JsonConstructor]
    public ApiResponse(string message, object result = null, int statusCode = 200, string apiVersion = "1.0.0.0")
    {
        StatusCode = statusCode;
        Message = message;
        Result = result;
        Version = apiVersion;
    }

    public ApiResponse(object result, int statusCode = 200)
    {
        StatusCode = statusCode;
        Result = result;
    }

    public ApiResponse(int statusCode, object apiError)
    {
        StatusCode = statusCode;
        ResponseException = apiError;
        IsError = true;
    }

    public ApiResponse() { }
}

The ApiResponse class provides flexibility in constructing responses with different components such as the message, result, and status code. It helps maintain a standardized format for all API responses.
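To make use of these response types end to end, AutoWrapper's middleware also needs to be registered in the request pipeline. The snippet below is a minimal sketch, assuming the standard AutoWrapper.Core registration extension (UseApiResponseAndExceptionWrapper) and its ApiException type for error cases; the ProductsController, its route, and the messages are illustrative placeholders only, not part of the original post.

using AutoWrapper;
using AutoWrapper.Wrappers;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;

public partial class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // Register AutoWrapper early so it can wrap both successful responses
        // and unhandled exceptions into the standardized ApiResponse format.
        app.UseApiResponseAndExceptionWrapper();

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}

[ApiController]
[Route("api/[controller]")]
public class ProductsController : ControllerBase
{
    [HttpGet("{id}")]
    public ApiResponse GetById(int id)
    {
        if (id <= 0)
        {
            // ApiException is translated by AutoWrapper into the standard error shape.
            throw new ApiException("Product not found.", 404);
        }

        return new ApiResponse(message: "Product retrieved.", result: new { Id = id }, statusCode: 200);
    }
}

With this in place, both the returned ApiResponse objects and any thrown ApiException are serialized in the same consistent envelope.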
Create a Custom Wrapper
AutoWrapper allows you to create a custom wrapper by implementing the IApiResponse interface. You can create a class that implements this interface to customize the fixed response. Here's an example:

using AutoWrapper.Wrappers;

public class CustomApiResponse<T> : ApiResponse<T>
{
    public string CustomProperty { get; set; }

    public CustomApiResponse(T result, string customProperty) : base(result)
    {
        CustomProperty = customProperty;
    }
}

Configure AutoWrapper
In your Startup.cs file, configure AutoWrapper to use your custom wrapper. You can do this in the ConfigureServices method:

services.AddAutoWrapper(config =>
{
    config.UseCustomSchema<CustomApiResponse<object>>();
});

Replace CustomApiResponse<object> with the custom wrapper class you created.

Use the Custom Wrapper in Controller Actions
Now you can use your custom wrapper in your controller actions. For example:

[ApiController]
[Route("api/[controller]")]
public class MyController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // Your logic here
        var data = new { Message = "Hello, World!" };

        // Use the custom wrapper
        var response = new CustomApiResponse<object>(data, "CustomProperty");
        return Ok(response);
    }
}

Customize the CustomApiResponse according to your needs and use it in your controller actions. This way, you can integrate AutoWrapper with other packages and customize the fixed response format in your .NET application.

In conclusion, by incorporating AutoWrapper.Core into your .NET Core applications, you can simplify the handling of API responses, making your code more readable, maintainable, and user-friendly. Consider adopting this approach to enhance the overall developer experience and ensure consistency in your API communication.
To set up AWS Cognito for the registration/login flow, follow these steps:

First Flow: User Registration in Cognito

1. Install the following NuGet packages in your .NET project:

<PackageReference Include="Amazon.AspNetCore.Identity.Cognito" Version="3.0.1" />
<PackageReference Include="Amazon.Extensions.Configuration.SystemsManager" Version="5.0.0" />
<PackageReference Include="AWSSDK.SecretsManager" Version="3.7.101.27" />

Declare the AWS configuration values in appsettings:

"Region": "me-south-1",
"UserPoolClientId": "UserPoolClientId",
"UserPoolClientSecret": "UserPoolClientSecret",
"UserPoolId": "me-south-pool"

Additional configuration: add authentication in the program/startup files to enable sign-in with Cognito (a minimal registration sketch is shown at the end of this post).

2. Create a CognitoUserPool with a unique ID in the controller:

private readonly CognitoUserPool _pool;
private readonly CognitoUserManager<CognitoUser> _userManager;

var user = _pool.GetUser(registerUserRequest.LoginId);

3. Add user attributes (email, phone number, custom attributes) using user.Attributes.Add():

user.Attributes.Add(CognitoAttribute.Email.AttributeName, registerUserRequest.Email);
user.Attributes.Add(CognitoAttribute.PhoneNumber.AttributeName, registerUserRequest.Mobile);
user.Attributes.Add("custom:branch_code", registerUserRequest.BranchCode);
user.Attributes.Add("custom:preferred_mode", preferedMode);

4. Create the user:

cognitoResponse = await _userManager.CreateAsync(user, registerUserRequest.Password);

Check cognitoResponse.Succeeded to determine whether the user was created successfully.

Second Flow: User Login with Cognito

1. Search for the user in Cognito using the login ID:

var cognitoUser = await _userManager.FindByIdAsync(loginUserRequest.LoginId);

2. Set the password on the SRP authentication request:

var authRequest = new InitiateSrpAuthRequest
{
    Password = loginUserRequest.Password
};

3. Use StartWithSrpAuthAsync to get the session ID:

var authResponse = await cognitoUser.StartWithSrpAuthAsync(authRequest);

4. Add an MFA method and validate using MFA auth if needed. For MFA validation, set the MFA settings in Cognito:

var authRequest = new RespondToMfaRequest
{
    SessionID = validateLoginUserRequest.SessionId,
    MfaCode = validateLoginUserRequest.Otp,
    ChallengeNameType = ChallengeNameType.SMS_MFA
};
authResponse = await cognitoUser.RespondToMfaAuthAsync(authRequest);

Extract the tokens from Cognito:

authResponse.AuthenticationResult.IdToken
authResponse.AuthenticationResult.RefreshToken

Forgot Password Flow

1. Search for the user with the LoginId in Cognito and call ForgotPasswordAsync:

var user = await _userManager.FindByIdAsync(loginUserRequest.LoginId);
await user.ForgotPasswordAsync();

2. Optionally, call the ConfirmForgotPassword method in Cognito:

_userManager.ConfirmForgotPassword(userId, token, newPassword, cancellationToken)

This covers the main AWS Cognito authentication methods; understand them and use whichever your flow requires.
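The "additional configuration" step mentioned in the first flow can be wired up roughly as follows. This is a minimal sketch, assuming the Amazon.AspNetCore.Identity.Cognito package's AddCognitoIdentity registration and that the Region/UserPoolClientId/UserPoolClientSecret/UserPoolId values shown earlier are available in configuration (typically under an "AWS" section); adapt it to your own pipeline and authentication scheme.

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers CognitoUserPool, CognitoUserManager<CognitoUser> and the
        // ASP.NET Core Identity plumbing backed by the user pool declared in
        // configuration (Region, UserPoolClientId, UserPoolClientSecret, UserPoolId).
        services.AddCognitoIdentity();

        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();

        // Authentication must run before authorization so the Cognito-backed
        // identity is available to downstream middleware and controllers.
        app.UseAuthentication();
        app.UseAuthorization();

        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}

With this registration in place, CognitoUserPool and CognitoUserManager<CognitoUser> can be injected into the controllers used in the flows above.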
Hosting an Angular application on IIS involves a few straightforward steps. Follow this step-by-step guide to seamlessly deploy your Angular project on IIS.

Step 1: Open Your Angular Project in Visual Studio Code
Review the build command in the package.json file. By default, it's usually set to ng build.

Step 2: Run the Build Command
Execute the ng build command in the terminal to compile your Angular application. This command creates a 'dist' folder, typically located at the output path specified in the angular.json file.

Step 3: Install IIS
Ensure that IIS is installed on your machine. You can install it through the "Turn Windows features on or off" option in the Control Panel.

Step 4: Create a New Site in IIS
Open the IIS Manager. In the Connections pane, right-click the "Sites" node and select "Add Website." Fill in the required information, such as the site name and the physical path (point it to the 'dist' folder produced in Step 2), and choose a port.

Step 5: Configure URL Rewrite (Optional)
If your Angular application uses routing, consider configuring URL Rewrite so route URLs resolve correctly. Create a 'web.config' file in your 'dist' folder with the appropriate configuration. Here's a simple example of a web.config file for an Angular application with routing. This file helps configure how the server handles URL requests. Note that the rewrite rule requires the IIS URL Rewrite module to be installed.

----------------------------------------------------------------------------------------------------------------
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Angular Routes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="/" />
        </rule>
      </rules>
    </rewrite>
    <staticContent>
      <remove fileExtension=".json" />
      <mimeMap fileExtension=".json" mimeType="application/json" />
    </staticContent>
  </system.webServer>
</configuration>
----------------------------------------------------------------------------------------------------------------

Step 6: Restart IIS
After making these changes, restart IIS to apply the configuration.

Step 7: Access Your Angular Application
Open a web browser and navigate to http://localhost:yourport (replace 'yourport' with the port specified in Step 4).

Your Angular application is now hosted on IIS and can be accessed through the specified port. If any issues arise, check the IIS logs for more information. Customize these instructions based on your specific requirements and environment. Thanks!
In the ever-evolving landscape of business intelligence (BI), the need for seamless interaction with data is paramount. Imagine a world where you could effortlessly pose natural language questions to your datasets and receive insightful answers in return. Welcome to the future of BI, where the power of conversational interfaces meets the robust capabilities of Domo. This blog post serves as your comprehensive guide to implementing a BI ChatBot within the Domo platform, a revolutionary step towards making data exploration and analysis more intuitive and accessible than ever before.

Gone are the days of wrestling with complex queries or navigating through intricate dashboards. With the BI ChatBot in Domo, users can now simply articulate their questions in plain language and navigate through datasets with unprecedented ease. Join us on this journey as we break down the process into manageable steps, allowing you to harness the full potential of BI ChatBot integration within the Domo ecosystem. Whether you're a seasoned data analyst or a business professional seeking data-driven insights, this guide will empower you to unlock the true value of your data through natural language interactions. Get ready to elevate your BI experience and transform the way you interact with your datasets. Let's dive into the future of business intelligence with the implementation of a BI ChatBot in Domo.

Prerequisites:
ChatGPT API Key: Prepare for the integration of natural-language-to-SQL conversion by obtaining a ChatGPT API key. This key will empower your system to seamlessly translate user queries in natural language into SQL commands.
DOMO Access: Ensure that you have the necessary access rights to create a new application within the Domo platform. This step is crucial for configuring and deploying the BI ChatBot effectively within your Domo environment.

1: Integrate the HTML Easy Bricks App
Begin the process by incorporating the HTML Easy Bricks App into your project. Navigate to the AppStore, add HTML Easy Bricks to your collection, and save it to your dashboard for easy access. Upon opening the app for the first time, it will have a default appearance. To enhance its visual appeal and functionality, customize it by incorporating the HTML and CSS code. This transformation will result in the refined look illustrated below.

Image 1: DOMO HTML Easy Brick UI

2: Map/Connect the Dataset to the Card
In this phase, establish a connection between the dataset and the card where users will pose their inquiries. Refer to the image below, where the "Key" dataset is linked to "dataset0." Extend this mapping to accommodate up to three datasets. If your project involves more datasets, consider using the DDX-TEN-DATASETS App instead of HTML Easy Bricks for a more scalable solution. This ensures seamless integration and accessibility for users interacting with various datasets within your Domo environment.

Image 2: Attach Dataset With Card

3: Execute the Query on the Dataset for Results
In this phase, you'll implement the code to execute a query on the dataset and fetch the desired results. Before this, initiate a call to the ChatGPT API to dynamically generate an SQL query based on the user's natural language question. It's essential to note that the code below is designed to accept only valid column names in the query, adhering strictly to MySQL syntax. To help ChatGPT generate accurate queries, create a prompt that includes the dataset schema and provides clear guidance for obtaining precise SQL queries.
Here is a call to the ChatGPT API to get the SQL query.

var GPTKEY = 'key'; // Your ChatGPT API key
var Prompt = 'Write effective prompt'; // Prompt containing the dataset schema and query instructions

$.ajax({
    url: 'https://api.openai.com/v1/chat/completions',
    headers: {
        'Authorization': 'Bearer ' + GPTKEY,
        'Content-Type': 'application/json'
    },
    method: 'POST',
    data: JSON.stringify({
        model: 'gpt-3.5-turbo',
        messages: [{ role: 'user', content: Prompt }], // the chat completions API expects an array of role/content messages
        max_tokens: 100,
        temperature: 0.5,
        top_p: 1.0,
        frequency_penalty: 0.0,
        presence_penalty: 0.0
    }),
    success: function (response) {
        // Store the generated SQL query from the response in a variable
    }
});

Refer to the code snippet below for executing the query on Domo and retrieving the results.

var domo = window.domo;
var datasets = window.datasets;

domo.post('/sql/v1/' + 'dataset0', SQLQuery, { contentType: 'text/plain' }).then(function (data) {
    // Write your JavaScript or jQuery code to print the data.
});

The above code accepts the SQL queries generated by ChatGPT. It's important to highlight that the code hardcodes 'dataset0' as the dataset every query is applied to; it's advisable to customize this part based on the user's selection. The code is designed to accept datasets with names such as 'dataset0', 'dataset1', and so forth. Ensure that any modifications align with the chosen dataset for optimal functionality. You can also use the domo.get method to retrieve data; see the Domo developer documentation for more information. The outcome is returned in JSON format, offering flexibility for further processing. You can seamlessly transfer this data to a table format and display or print it as needed.

Conclusion
Incorporating a BI ChatBot in Domo revolutionizes data interaction, seamlessly translating natural language queries into actionable insights. The guide's step-by-step approach simplifies integration, offering both analysts and business professionals an intuitive and accessible data exploration experience. As datasets effortlessly respond to user inquiries, this transformative synergy between ChatGPT and Domo reshapes how we extract value from data, heralding a future of conversational and insightful business intelligence. Dive into this dynamic integration to propel your decision-making processes into a new era of efficiency and accessibility.
Here, I will explain how to restrict users from using expired tokens in a .NET Core application. Token expiration checks are crucial for ensuring the security of your application. Here's a general outline of how you can achieve this:

1. Configure Token Expiration
When generating a token, such as a JWT, set an expiration time for the token. This is typically done during token creation. For example, when using JWTs, you can specify the expiration claim (a fuller token-creation sketch is shown at the end of this post):

var tokenDescriptor = new SecurityTokenDescriptor
{
    Expires = DateTime.UtcNow.AddMinutes(30) // Set expiration time (JWT expiry is expressed in UTC)
};

2. Token Validation Middleware
Create middleware in your application to validate the token on each request. This middleware should verify the token's expiration time. You can register it in the startup or program file on the .NET side.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseMiddleware<TokenExpirationMiddleware>();
}

3. Token Expiration Middleware
Develop middleware to validate the token's expiration time. Take note of the following points:

ValidateIssuerSigningKey: Set to true, indicating that the system should validate the issuer signing key.
IssuerSigningKey: The byte array represents the secret key used for both signing and verifying the JWT token.
ValidateIssuer and ValidateAudience: Set to false, indicating that validation of the issuer and audience is skipped.
ClockSkew: Setting it to TimeSpan.Zero specifies no tolerance for clock differences. If the current time on the server or client is not precisely within the token's validity period, the token is considered expired.

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.IdentityModel.Tokens;

public class TokenExpirationMiddleware
{
    private readonly RequestDelegate _next;

    public TokenExpirationMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Check if the request carries a bearer token in the Authorization header
        var token = context.Request.Headers["Authorization"].FirstOrDefault()?.Split(" ").Last();
        if (token != null)
        {
            var tokenHandler = new JwtSecurityTokenHandler();
            var key = Encoding.ASCII.GetBytes("YourSecretKey"); // Replace with your issuer's actual secret key

            var tokenValidationParameters = new TokenValidationParameters
            {
                ValidateIssuerSigningKey = true,
                IssuerSigningKey = new SymmetricSecurityKey(key),
                ValidateIssuer = false,
                ValidateAudience = false,
                ClockSkew = TimeSpan.Zero
            };

            try
            {
                // Validate the token
                var principal = tokenHandler.ValidateToken(token, tokenValidationParameters, out var securityToken);

                // Check if the token is expired (ValidTo is in UTC)
                if (securityToken is JwtSecurityToken jwtSecurityToken)
                {
                    if (jwtSecurityToken.ValidTo < DateTime.UtcNow)
                    {
                        // Token is expired
                        context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
                        return;
                    }
                }
            }
            catch (SecurityTokenException)
            {
                // Token validation failed (including expired tokens)
                context.Response.StatusCode = (int)HttpStatusCode.Unauthorized;
                return;
            }
        }

        await _next(context);
    }
}

This works fine when the token is within its validity period. Here is an example: if I provide an expired token, the request results in a 401 Unauthorized status. You can also inspect a token at https://jwt.io/ to check its expiration (exp) claim.

By following these steps, you can effectively ensure that users cannot use expired tokens within your .NET Core application.
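For completeness, here is the token-creation side referenced in step 1, shown as a minimal sketch. The key string, the single name claim, and the 30-minute lifetime are placeholder assumptions; the signing key must match the one the middleware validates against and should be long enough for HMAC-SHA256 (at least 16 bytes, ideally 32 or more).

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenFactory
{
    public static string CreateToken(string userName)
    {
        // Placeholder key; must match the key used in TokenExpirationMiddleware.
        var key = Encoding.ASCII.GetBytes("YourSecretKeyWithAtLeast32Chars!");
        var tokenHandler = new JwtSecurityTokenHandler();

        var tokenDescriptor = new SecurityTokenDescriptor
        {
            Subject = new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, userName) }),
            Expires = DateTime.UtcNow.AddMinutes(30), // Expiration checked by the middleware
            SigningCredentials = new SigningCredentials(
                new SymmetricSecurityKey(key),
                SecurityAlgorithms.HmacSha256Signature)
        };

        var token = tokenHandler.CreateToken(tokenDescriptor);
        return tokenHandler.WriteToken(token);
    }
}

Any token issued this way and presented after its Expires time will be rejected by the middleware above with a 401 response.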
You may have heard of infrastructure as code (IaC). But do you know what infrastructure is? Why do we need infrastructure as code? What are its benefits? Is it safe and secure?

What is Infrastructure as Code (IaC)?
Infrastructure as code (IaC) means managing and upgrading your environments as infrastructure using configuration files. Terraform provides infrastructure as code for provisioning, compliance, and management across any public cloud, private data center, and third-party service. It enables teams to write, share, manage, and automate any infrastructure using version control, with automated policy enforcement for security, compliance, and operational best practices, and it enables developers to provision their desired infrastructure from within their own workflows. IaC also has a high business impact: increased productivity, reduced risk, and reduced cost.

Why do we use Infrastructure as Code (IaC)?
Terraform uses a simple, human-readable configuration language to define the desired topology of infrastructure resources.

VCS Integration: Write, version, review, and collaborate on Terraform code using your preferred version control system.
Workspaces: Workspaces decompose monolithic infrastructure into smaller components, or "micro-infrastructures". These workspaces can be aligned to teams for role-based access control.
Variables: Granular variables allow easy reuse of code and enable dynamic changes to scale resources and deploy new versions.
Runs: Terraform uses two-phased provisioning: a plan (dry run) and apply (execution). Plans can be inspected before execution to ensure expected behavior and safety.
Infrastructure State: The state file is a record of currently provisioned resources. State files enable a versioned history of the infrastructure and are encrypted at rest. Versions can be inspected to see incremental changes.
Policy as Code: Sentinel is a policy-as-code framework to automate multi-cloud governance.

What are the benefits of Infrastructure as Code (IaC)?
Infrastructure as code enables infrastructure teams to test applications in staging or development environments early in the development cycle.
Infrastructure as code saves you time and money. We get a version history of when the infrastructure was upgraded and who did it from the code itself; otherwise we have to ask the infrastructure admin to dig through logs, which is very time-consuming. We can check the code into version control and get versioning, with an incremental history of who changed what.
Use infrastructure as code to build, update, and manage any cloud, infrastructure, or service.
Terraform makes it easy to re-use configurations for similar infrastructure, helping you avoid mistakes and save time. We can use the same configuration code for different staging, production, and development environments.
Terraform supports many providers, and resources can be built with just a few lines of code. Major providers include: AWS, Azure, GitHub, GitLab, Google Cloud Platform, VMware, Docker, and 200+ more.

Here is a simple example that creates an EC2 instance with just a few lines of code:

resource "aws_instance" "ec2_instance" {
  ami                    = "ami-*******"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.*****.id}"]
  key_name               = "${aws_key_pair.****.id}"

  tags = {
    Name = "New-EC2-Instance"
  }
}

But first, we have to declare which provider we are writing our code for.
To do so, here is the basic code to configure a provider:

provider "aws" {
  region = "us-west-2"
  ## PROVIDE CREDENTIALS
}

Now, to create your EC2 instance in AWS, we have to run the Terraform commands. Terraform has four core commands to check and apply infrastructure changes: init, plan, apply, and destroy.

1. Init
$ terraform init
As the name suggests, this command initializes something. Here, Terraform is initialized for our code, creating some basic backend and state-related files in the folder for internal use.

2. Plan
$ terraform plan
Much like compiling in other languages, this checks the configuration for errors and plans what is going to happen when we run the script to generate the infrastructure. It shows you which resources are going to be created and what their configuration will be.

3. Apply
$ terraform apply
Now it is time to run the script and see what it generates. This command executes the script and applies the changes to our infrastructure, creating the resources we described in the code.

4. Destroy
$ terraform destroy
This command is used when we want to remove or destroy resources. When we no longer need a resource, we just run this command, which destroys the resource, and your money is saved.