In this blog, I will walk you through the power of CI/CD in GitHub with a step-by-step guide. Learn to set up automated workflows, boost project efficiency, and streamline development processes for better code quality and faster deployments.
I recently ran into a challenge in my development workflow when deploying minor code changes. The existing process involved manually publishing the code from Visual Studio, creating a backup of the current code on the server, and then replacing it with the new build. To address this, it makes sense to transition to an automated solution using a Continuous Integration/Continuous Deployment (CI/CD) pipeline.
By implementing a CI/CD pipeline, you can streamline and automate the deployment process, making it more efficient and reducing the risk of manual errors. The CI/CD pipeline will handle tasks such as code compilation, testing, and deployment automatically, ensuring that the latest changes are seamlessly deployed to the desired environment.
This transition will not only save time but also enhance the reliability of your deployment process, allowing your team to focus more on development and less on manual deployment tasks.
For additional information, refer to the steps outlined below for guidance.
Step 1:
Go to your repository and click on the Actions tab
Step 2:
Now, select the workflow that matches your technology stack. Here, I am using the .NET workflow.
Step 3:
You can now see the default pipeline, as shown below. In it, you can change the trigger branch as per your requirement.
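For reference, the default .NET starter workflow that GitHub generates looks roughly like this (the .NET version and action versions depend on when you create it, so treat the exact numbers as illustrative):

```yaml
name: .NET

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: 6.0.x
      - name: Restore dependencies
        run: dotnet restore
      - name: Build
        run: dotnet build --no-restore
      - name: Test
        run: dotnet test --no-build --verbosity normal
```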
Step 4:
You can now add the three new steps outlined below to build the code and publish the output folder as an artifact.
```yaml
- name: Build and publish
  run: |
    dotnet restore
    dotnet build
    dotnet publish -o publish

- name: Zip output
  run: |
    cd publish
    zip -r ../output .

- name: Upload zip archive
  uses: actions/upload-artifact@v2
  with:
    name: test
    path: ./publish
```
Upon integrating this code, your YAML file will now appear as follows.
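As a sketch, assuming the default .NET starter workflow as the base, the combined file might look roughly like this (runner image and action versions are illustrative; keep whatever GitHub generated for you):

```yaml
name: .NET

on:
  push:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup .NET
        uses: actions/setup-dotnet@v3
        with:
          dotnet-version: 6.0.x
      - name: Build and publish
        run: |
          dotnet restore
          dotnet build
          dotnet publish -o publish
      - name: Zip output
        run: |
          cd publish
          zip -r ../output .
      - name: Upload zip archive
        uses: actions/upload-artifact@v2
        with:
          name: test
          path: ./publish
```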
In the code above, you have the flexibility to rename the zip file or the publish folder according to your preferences.
Build and Publish: This step builds and publishes the code.
Commands:
dotnet restore: Restores the project's dependencies.
dotnet build: Compiles the project.
dotnet publish -o publish: Publishes the project output to the 'publish' folder.
Zip Output: This step compresses the contents of the 'publish' folder into a zip file.
Commands:
cd publish: Changes the working directory to the 'publish' folder.
zip -r ../output .: Creates a zip file named 'output.zip' (zip appends the extension) containing the contents of the 'publish' folder.
Upload Zip Archive: This step uploads an archive to the workflow run as an artifact.
Using: The actions/upload-artifact@v2 GitHub Action.
Configuration:
name: test: Specifies the name of the artifact as 'test'.
path: ./publish: Indicates the path of the folder to be archived and uploaded.
With this workflow in place, you get a finalized published folder ready for deployment to the server. However, deploying it to the server still requires manual intervention.
To access the published folder, navigate to the "Actions" tab, open the latest workflow run, and download the "test" artifact from there.
Step 5:
The steps above still end with a manual deployment; the following steps transition it to an automatic one.
To automate the process, you'll need to install a self-hosted runner on the virtual machine where your application is hosted.
What is a self-hosted runner?
A self-hosted runner is a system that you deploy and manage to execute jobs from GitHub Actions on GitHub.com.
To install the self-hosted runner, follow these basic steps:
Select the operating system image and architecture of your self-hosted runner machine.
Open a shell on your self-hosted runner machine and run each shell command in the order shown.
For more details, visit https://docs.github.com/en/enterprise-cloud@latest/actions/hosting-your-own-runners/managing-self-hosted-runners/adding-self-hosted-runners
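On a Windows VM, the registration commands that GitHub shows you look roughly like the following sketch. The runner version, download URL, OWNER/REPO, and the --token value here are placeholders; always copy the exact commands (with a fresh registration token) from your repository's Settings > Actions > Runners page:

```powershell
# Create a folder for the runner and switch into it
mkdir actions-runner; cd actions-runner

# Download the runner package (version and URL are illustrative)
Invoke-WebRequest -Uri https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-win-x64-2.311.0.zip -OutFile actions-runner-win-x64.zip

# Extract the package
Expand-Archive -Path actions-runner-win-x64.zip -DestinationPath .

# Register the runner against your repository (token comes from the GitHub UI)
./config.cmd --url https://github.com/OWNER/REPO --token YOUR-REGISTRATION-TOKEN

# Start the runner
./run.cmd
```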
Step 6:
To automate the process, you can remove the last two steps, "Zip Output" and "Upload Zip Archive," and replace them with a deployment step built from the following commands.
Commands:
$datestamp = Get-Date -Format "yyyyMMddHHmmss": Retrieves the current date and time in the specified format.
cd publish: Changes the working directory to the 'publish' folder.
Remove-Item web.config: Deletes the 'web.config' file.
Remove-Item appsettings.json: Deletes the 'appsettings.json' file.
Remove-Item appsettings.Development.json: Deletes the 'appsettings.Development.json' file.
Stop-WebSite 'DemoGitHubPipeline': Stops the website with the specified name.
Compress-Archive D:\Published\DemoGitHubPipeline D:\Published\Backup\Backup_$datestamp.zip: Creates a compressed archive (zip) of the existing deployment with a timestamp in its name.
Copy-Item * D:\Published\DemoGitHubPipeline -Recurse -Force: Copies all contents from the 'publish' folder to the deployment directory.
Start-WebSite 'DemoGitHubPipeline': Restarts the website with the specified name.
Note:
Ensure that the paths and folder structures match the actual locations in your setup.
Adjust the website name and paths based on your specific configuration.
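Putting the commands above together, the replacement deployment step might look roughly like this. This is a sketch: it assumes the job runs on the self-hosted runner (runs-on: self-hosted), and the site name 'DemoGitHubPipeline' and the D:\Published paths come from the example setup, so adjust them to your environment:

```yaml
- name: Deploy to IIS
  shell: powershell
  run: |
    $datestamp = Get-Date -Format "yyyyMMddHHmmss"
    cd publish
    Remove-Item web.config
    Remove-Item appsettings.json
    Remove-Item appsettings.Development.json
    Stop-WebSite 'DemoGitHubPipeline'
    Compress-Archive D:\Published\DemoGitHubPipeline D:\Published\Backup\Backup_$datestamp.zip
    Copy-Item * D:\Published\DemoGitHubPipeline -Recurse -Force
    Start-WebSite 'DemoGitHubPipeline'
```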
Conclusion:
In summary, implementing a CI/CD pipeline in GitHub is a pivotal step towards achieving efficiency, reliability, and accelerated development cycles. The integration of CI/CD streamlines the software delivery process by automating testing, building, and deployment, leading to consistent and high-quality releases.
GitHub Actions, with its native CI/CD capabilities, provides a powerful and flexible platform for orchestrating workflows. Leveraging its features, development teams can not only automate repetitive tasks but also ensure rapid feedback on code changes, enabling early detection of issues and facilitating collaboration.
What is an Azure VM?
Azure virtual machines (VMs) are a cloud-based computing service that allows users to run applications on the Microsoft Azure platform. VMs are an on-demand, scalable computing resource that offers a number of benefits, including:
- Security: Azure VMs offer a secure way to run applications.
- Affordability: Users can pay for extra VMs when needed and shut them down when not.
- Flexibility: Users can choose from various operating systems, including Windows and Linux.
- Scalability: Users can scale up to thousands of VMs based on demand or schedules.
- Performance: Users can enhance network and storage performance with custom hardware.
Azure VMs can be created through the Azure portal, which provides a browser-based user interface to create VMs and their associated resources. In this blog, I'll show you how to use the Azure portal to deploy a virtual machine (VM) in Azure.
Sign in to the Azure portal.
Create a Virtual Machine
- Enter "Virtual Machine" in the search box. Under Services, select Virtual machines.
- On the Virtual machines page, select Create and then Azure virtual machine. The Create a virtual machine page opens.
- Under Project details, select the resource group.
- Under Instance details, enter the virtual machine name and choose "Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2" for the image. Leave the other defaults. You can also choose a different image based on your requirements.
- Under Administrator account, provide a username, such as azureuser, and a password.
- Under Inbound port rules, choose Allow selected ports and then select RDP (3389) and HTTP (80) from the drop-down.
- Leave the remaining defaults and then select the Review + create button at the bottom of the page.
- After validation runs, select the Create button at the bottom of the page.
- After deployment is complete, select Go to resource.
Connect to the Virtual Machine
- On the overview page for your virtual machine, select Connect.
- Download the RDP file.
Open the downloaded RDP file and click Connect when prompted. Click on "More choices," enter the username and password that you set while creating the VM, and click OK. You can now see your VM running.
Here are some other things to know about Azure VMs:
- Maintenance: Users still need to maintain the VM by configuring, patching, and installing software.
- Cost: The cost of an Azure VM depends on the size and type of VM, as well as other services used with it.
- Security: Users should take steps to ensure the security of their data and applications, such as identity management, encryption, and network protection.
- Virtual machine selector: Users can use the virtual machine selector to find the right VMs for their needs and budget.
Conclusion:
An Azure virtual machine gives you the flexibility of virtualization without buying and maintaining the physical hardware that runs it. However, you still need to maintain the virtual machine by performing tasks such as configuring, patching, and installing the software that runs on it.
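As an alternative to the portal walkthrough above, the same kind of VM can be created from the Azure CLI. The following is a rough sketch; the resource group, VM name, image alias, and password are placeholders, and you can list available image aliases with az vm image list:

```shell
# Create a resource group (name and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create a Windows Server 2022 VM with an admin account
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2022Datacenter \
  --admin-username azureuser \
  --admin-password '<YourStr0ngP@ssword>'

# Open the RDP and HTTP ports, matching the portal's inbound port rules
az vm open-port --resource-group myResourceGroup --name myVM --port 80,3389
```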
Serverless computing is a widely adopted approach and an extension of the cloud computing model, where customers can focus solely on building logic while the server infrastructure is completely managed by third-party cloud service providers. In Microsoft Azure, serverless computing can be implemented in various ways, one of which is by using Azure Functions. In this blog, we will discuss how to use Azure Functions for serverless computing. First, let us understand the following terms.
What Is Serverless Computing?
Serverless computing, also known as the Function-as-a-Service (FaaS) approach to building software applications, eliminates the need for the consumer to manage server hardware and software; that is taken care of by third-party vendors.
What Are Azure Functions?
Azure Functions is a serverless solution that provides all the necessary resources to carry out tasks with minimal lines of code, infrastructure, and cost. An Azure Function is a combination of code and an event, and it allows us to write the code in any supported language.
A Step-by-Step Approach for Creating an Azure Function
- Go to the Azure portal, search for Function App, and select Function App.
- Create a new Function App and fill in the details accordingly.
- Basics tab: Select the runtime stack and version based on your requirements. Here, I am selecting .NET with version 8 and the Windows operating system.
- Storage: You may leave the default values or configure them according to your project requirements. Storage account: you may use an existing storage account or create a new one to store your function app.
- Monitoring: Enable Application Insights to monitor activity.
- Deployment tab: To enable Continuous Integration and Continuous Deployment (CI/CD), you may connect your function app to a repository by authorizing it to GitHub.
These are the important things to focus on while creating your function app; you may leave the remaining details as default or customize them according to your requirements. Once you finish configuring your app, click the "Create" button at the bottom of the page. Your app will start the deployment process. Once deployment is done, click Go to resource, and you will see that your function app was created successfully.
Now we need to create a function in our function app. We have various options to choose from: Visual Studio, VS Code, other editors, or the CLI. Choose an environment to create your function. I've chosen Visual Studio to create my function app.
Create an Azure Function with Visual Studio
- From the Visual Studio menu, select File > New > Project.
- In Create a new project, enter "functions" in the search box, choose the Azure Functions template, and then select Next.
- Select the function type based on your requirements. Here, I am selecting the Timer trigger function.
- Click the Create button to create the project. You will see that a default Timer trigger function is created. I have also created one more function called "HTTPTrigger".
Here, you can see two JSON files: host.json and local.settings.json. The local.settings.json file stores app settings and settings used by local development tools. Settings in local.settings.json are used only when you're running your project locally. When you publish your project to Azure, be sure to also add any required settings to the app settings for the function app.
Publish to Azure
Use the following steps to publish your project to a function app in Azure.
- In Solution Explorer, right-click the project and select Publish.
- In Target, select Azure, then Next.
- Select Azure Function App (Windows) for the specific target, which creates a function app that runs on Windows, and then select Next.
In the Functions instance, select the function app that you created in the Azure portal and then click the Finish button. You can see that the publish profile has been added. Now, click the Publish button to publish the function to Azure.
Once the function is published, go to the Azure portal and search for Application Insights. You can find the Application Insights instance with the same name as the function app. On the left-hand side, go to the Transaction search tab under Investigate and click See all data in the last 24 hours. In the logs, you can see that your function is working properly.
Conclusion
In a nutshell, Azure Functions provides a focused environment for developers, allowing them to concentrate on coding rather than managing infrastructure. This plays a key role in building scalable and responsive applications at low cost.
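For reference, the default Timer trigger function that Visual Studio scaffolds looks roughly like the following sketch. It assumes the in-process worker model; the isolated worker model template differs slightly, and the class and namespace names here are illustrative:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

namespace FunctionAppDemo // hypothetical project namespace
{
    public class TimerTriggerFunction
    {
        // NCRONTAB expression: {second} {minute} {hour} {day} {month} {day-of-week}
        // "0 */5 * * * *" runs the function every five minutes.
        [FunctionName("TimerTriggerFunction")]
        public void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
        {
            log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
        }
    }
}
```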
Introduction
In the ever-evolving landscape of web development, simplicity is key. Enter Minimal APIs in ASP.NET Core, a lightweight and streamlined approach to building web applications. In this detailed blog, we'll explore the concept of Minimal APIs, understand why they matter, and walk through their implementation in ASP.NET Core.
When to Use Minimal APIs?
Minimal APIs are well-suited for small to medium-sized projects, microservices, or scenarios where a lightweight and focused API is sufficient. They shine in cases where rapid development and minimal ceremony are top priorities. You can find how to create a minimal API in this blog <link>. Here, I am directly showing the comparison between Minimal APIs and controllers.
Controllers: Structured and Versatile
Controllers, deeply rooted in the MVC pattern, have been a cornerstone of ASP.NET API development for years. They provide a structured way to organize endpoints, models, and business logic within dedicated controller classes. Let's consider an example:

```csharp
using Microsoft.AspNetCore.Mvc;

namespace MinimalAPI.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        private static readonly string[] Summaries = new[]
        {
            "Freezing", "Bracing", "Chilly", "Cool", "Mild",
            "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
        };

        private readonly ILogger<WeatherForecastController> _logger;

        public WeatherForecastController(ILogger<WeatherForecastController> logger)
        {
            _logger = logger;
        }

        [HttpGet(Name = "GetWeatherForecast")]
        public IEnumerable<WeatherForecast> Get()
        {
            return Enumerable.Range(1, 5).Select(index => new WeatherForecast
            {
                Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)),
                TemperatureC = Random.Shared.Next(-20, 55),
                Summary = Summaries[Random.Shared.Next(Summaries.Length)]
            })
            .ToArray();
        }
    }
}
```

Advantages of Controllers in Action
Structure and Organization: Controllers offer a clear structure, separating concerns and enhancing maintainability.
Flexibility: They enable custom routes, complex request handling, and support for various HTTP verbs.
Testing: Controllers facilitate unit testing of individual actions, promoting a test-driven approach.
Minimal APIs: Concise and Swift
With the advent of .NET 6, Minimal APIs emerged as a lightweight alternative, aiming to minimize boilerplate code and simplify API creation. Here's an example showcasing Minimal APIs:

```csharp
using MinimalAPI;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

app.MapGet("/GetWeatherForecast", () =>
{
    var rng = new Random();
    var summaries = new[]
    {
        "Freezing", "Bracing", "Chilly", "Cool", "Mild",
        "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
    };
    var weatherForecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
    {
        Date = DateTime.Now.AddDays(index).Date,
        TemperatureC = rng.Next(-20, 55),
        Summary = summaries[rng.Next(summaries.Length)]
    }).ToArray();
    return Results.Ok(weatherForecasts);
});

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();

app.Run();
```

Advantages of Minimal APIs in Focus
Simplicity: Minimal APIs drastically reduce code complexity, ideal for smaller projects or rapid prototyping.
Ease of Use: They enable quick API creation with fewer dependencies, accelerating development cycles.
Potential Performance Boost: The reduced overhead might lead to improved performance, especially in smaller applications.
Which Should You Choose: Minimal APIs or Controllers?
Choosing between controllers and Minimal APIs hinges on various factors.
Project Scale: Controllers offer better organization and structure for larger projects with intricate architectures.
Development Speed: Minimal APIs shine when speed is crucial, suitable for rapid prototyping or smaller projects.
Team Expertise: Consider your team's familiarity with MVC patterns versus readiness to adopt Minimal APIs.
Conclusion
The decision between controllers and Minimal APIs for .NET APIs isn't about one being superior to the other. Rather, it's about aligning the choice with the project's specific needs and constraints. Controllers offer robustness and versatility, perfect for larger, complex projects. On the other hand, Minimal APIs prioritize simplicity and rapid development, ideal for smaller, more straightforward endeavours.
Experienced .NET developer proficient in various technologies with a passion for continuous learning. Over 2 years of hands-on experience in software development across multiple domains. Enthusiastic about technology and adept at adapting to new challenges.