Authentication and authorization are essential components of any web application, ensuring the security and proper access control for users. In .NET Core, these concepts play a crucial role in protecting resources and determining user permissions. Authentication is the process of verifying the identity of a user, ensuring they are who they claim to be. This is typically done by presenting credentials, such as a username and password, and validating them against a trusted source, such as a database or an external authentication provider. Once authenticated, the user is assigned an identity, which is then used for subsequent authorization checks. Authentication in .NET Core Authentication in .NET Core revolves around the concept of authentication schemes. An authentication scheme represents a specific method or protocol used to authenticate users. .NET Core supports various authentication schemes out of the box, including cookie authentication, JWT bearer authentication, and external authentication providers like OAuth and OpenID Connect. Understanding Authentication Schemes Authentication schemes are registered in the application’s startup class using the AddAuthentication method. This method allows you to specify one or more authentication schemes and their respective options. For example, to enable cookie authentication, you can use the AddCookie extension method: services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) .AddCookie(options => { // Configure cookie authentication options }); Configuring Cookie Authentication To configure cookie authentication, you need to specify the authentication scheme as CookieAuthenticationDefaults.AuthenticationScheme and provide the necessary options, such as the cookie name and login path. Here's an example: services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) .AddCookie(options => { options.Cookie.Name = "MyCookie"; options.LoginPath = "/Admin/Login"; }); In this example, the cookie authentication middleware is configured to issue a cookie named “MyCookie” and to redirect users to the “/Admin/Login” page if they try to access a protected resource without being authenticated. The options object allows you to customize various aspects of cookie authentication, such as cookie expiration and sliding expiration. Implementing Claim-Based Authentication A claim represents a piece of information about the user, such as their name, email address, or role. By using claims, you can easily extend the user’s identity with additional data and make authorization decisions based on these claims. In .NET Core, claim-based authentication is implemented using the ClaimsIdentity and ClaimsPrincipal classes. The ClaimsIdentity represents a collection of claims associated with a user, while the ClaimsPrincipal represents the user's identity as a whole. When a user is authenticated, their claims are stored in a ClaimsPrincipal, which is then attached to the current request's HttpContext.User property. To implement claim-based authentication, you need to create and populate a ClaimsIdentity object with the relevant claims. This can be done during the authentication process, typically in a custom authentication handler. Here's an example of how to create a ClaimsIdentity with a username claim: var claims = new List<Claim> { new Claim(ClaimTypes.Name, "Himanshu") }; var identity = new ClaimsIdentity(claims, "MyAuthenticationScheme"); var principal = new ClaimsPrincipal(identity); await HttpContext.SignInAsync(principal);
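Once the user is signed in, those claims travel with every request and can be read back from HttpContext.User wherever a decision needs to be made. The following is a minimal sketch (the controller name, view names, and claim values are illustrative, not from the original article) of reading claims inside an MVC controller:

```csharp
using System.Security.Claims;
using Microsoft.AspNetCore.Mvc;

public class ProfileController : Controller
{
    public IActionResult Index()
    {
        // HttpContext.User (exposed here as User) is the ClaimsPrincipal
        // that was populated when the user signed in.
        string? userName = User.FindFirst(ClaimTypes.Name)?.Value;

        // Role claims can drive simple authorization decisions in code.
        bool isAdmin = User.IsInRole("Admin");

        ViewData["UserName"] = userName;
        ViewData["IsAdmin"] = isAdmin;
        return View();
    }
}
```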
External Authentication Providers External authentication allows users to sign in to your application using their existing accounts from popular platforms like Google, Facebook, Twitter, and Microsoft. To enable external authentication, you need to configure the desired authentication provider and register it in your application’s startup class. services.AddAuthentication() .AddGoogle(options => { options.ClientId = "YOUR_GOOGLE_CLIENT_ID"; options.ClientSecret = "YOUR_GOOGLE_CLIENT_SECRET"; }); Securing APIs with JWT Bearer Authentication .NET Core provides built-in support for securing APIs using JSON Web Tokens (JWT) and the JWT bearer authentication scheme. JWTs are self-contained tokens that contain information about the user and their permissions. By validating the integrity and authenticity of a JWT, you can trust the claims it contains and authenticate API requests. To enable JWT bearer authentication, you need to configure the authentication scheme and provide the necessary options, such as the token validation parameters and the issuer signing key. Here’s an example of configuring JWT bearer authentication: services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddJwtBearer(options => { options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = true, ValidateAudience = true, ValidateIssuerSigningKey = true, ValidIssuer = "YOUR_ISSUER", ValidAudience = "YOUR_AUDIENCE", IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("YOUR_SIGNING_KEY")) }; }); In this example, the AddJwtBearer extension method is used to configure JWT bearer authentication. The TokenValidationParameters object is set with the necessary validation rules, such as validating the issuer, audience, and the issuer's signing key. You need to replace the placeholder values with your own values specific to your JWT setup. With JWT bearer authentication enabled, API endpoints can be protected by applying the [Authorize] attribute to the corresponding controller or action. This ensures that only requests with valid and authenticated JWTs are allowed access to the protected resources. Maintaining Secure Authorization Authorization in .NET Core is primarily controlled through the use of the [Authorize] attribute. This attribute can be applied at the controller or action level to restrict access to specific components of your application. By default, the [Authorize] attribute allows only authenticated users to access the protected resource. The Role of the Authorize Attribute: For example, you can use the [Authorize(Roles = "Admin")] attribute to restrict access to administrators only. This ensures that only users with the "Admin" role can access the protected resource. Restricting Access with Policies: While the [Authorize] attribute provides a simple way to restrict access, ASP.NET Core also supports more advanced authorization policies. Authorization policies allow you to define fine-grained rules for determining whether a user is authorized to perform a specific action. To use authorization policies, you need to define them in your application’s startup class using the AddAuthorization method. Here's an example: services.AddAuthorization(options => { options.AddPolicy("AdminOnly", policy => { policy.RequireRole("Admin"); }); });
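Once a policy is registered, it is applied the same way as role checks: via the [Authorize] attribute. The sketch below uses an illustrative controller (not from the original article) to show both styles side by side, a role-restricted action and an action guarded by the "AdminOnly" policy defined above:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    // Only authenticated users in the "Admin" role may call this endpoint.
    [Authorize(Roles = "Admin")]
    [HttpGet("sales")]
    public IActionResult GetSalesReport() => Ok("Sales report");

    // The same restriction expressed through the "AdminOnly" policy.
    [Authorize(Policy = "AdminOnly")]
    [HttpGet("audit")]
    public IActionResult GetAuditReport() => Ok("Audit report");
}
```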
Role-based authorization can be implemented using the built-in role-based authentication system or by integrating with an external identity provider, such as Active Directory or Azure AD. Implementing Two-Factor Authentication Two-factor authentication (2FA) adds an extra layer of security to the authentication process by requiring users to provide additional verification, typically in the form of a one-time password or a biometric factor. Implementing 2FA can significantly reduce the risk of unauthorized access, especially for sensitive applications or those handling confidential information. To implement two-factor authentication, you need to configure the desired authentication providers, such as SMS, email, or authenticator apps, and register them in your application’s startup class. You also need to configure the necessary options, such as the message templates and the token providers. By enabling two-factor authentication, you provide an additional layer of security that can help protect user accounts from unauthorized access, even if their credentials are compromised. Protecting Against Common Security Vulnerabilities When implementing authentication and authorization in your application, it’s crucial to be aware of common security vulnerabilities and take appropriate measures to prevent them. By understanding these vulnerabilities and following security best practices, you can ensure the integrity and confidentiality of user data. Some common security vulnerabilities to consider when implementing authentication and authorization include: Cross-Site Scripting (XSS): Protect against XSS attacks by properly encoding user input and validating data before rendering it in HTML or JavaScript. Cross-Site Request Forgery (CSRF): Implement CSRF protection mechanisms, such as anti-forgery tokens, to prevent attackers from executing unauthorized actions on behalf of authenticated users. Brute-Force Attacks: Implement account lockout policies and rate limiting to protect against brute-force attacks that attempt to guess user credentials. Session Management: Use secure session management techniques, such as session timeouts, secure cookie attributes, and session regeneration, to prevent session hijacking or session fixation attacks. Password Storage: Store passwords securely by using strong hashing algorithms, salting, and iteration counts to protect against password cracking attempts. By addressing these vulnerabilities and following security best practices, you can minimize the risk of unauthorized access, data breaches, and other security incidents. Conclusion: Authentication and authorization are critical components of building secure and robust web applications in .NET Core. By understanding the concepts and leveraging the powerful features provided by .NET Core, developers can implement robust security measures to protect their applications and ensure that users access resources securely and efficiently.
The software development industry is rapidly changing, with key trends shaping the landscape in 2025. Staying informed on these trends is important for professionals and businesses to stay competitive and adapt to technological advancements. Despite financial pressures from inflation, businesses continue to invest in digital transformation initiatives to drive growth and efficiency. In our blog, we explore the top 10 software development trends in 2025, from AI advancements to emerging technologies. Native app development is being replaced by progressive web apps, and low-code and no-code platforms are gaining popularity. Technologies like IoT, augmented reality, blockchain, and AI are leading the way in software advancements. Stay updated with MagnusMinds blogs to learn about generative AI, quantum computing, and other industry innovations. Keep up with the latest trends in software development to stay ahead in the market. Discover how custom software development can benefit companies and explore upcoming industry developments. Stay informed and explore the top software industry trends for 2025. Generative AI Transforms Development Practices Generative AI, such as OpenAI's GPT-4, is transforming modern IT development by revolutionizing code generation, debugging, and design. It is no longer just limited to chatbots but has become an essential tool for enhancing development processes. These advanced models are enhancing natural language processing, automating repetitive tasks, creating complex algorithms, and even generating codebases from simple descriptions. With the integration of generative AI into everyday development tasks, developers can streamline workflows, focus on higher-level problem-solving, and make significant strides in the field of IT development. OpenAI's GPT-4 and similar technologies are at the forefront of this AI-powered development revolution. Example: GitHub Copilot, powered by GPT-4, speeds up development by suggesting code snippets and automating repetitive tasks. For example, a developer writing a Python script for data analysis can use Copilot to create complex functions or handle API integrations with minimal manual effort. Tools like Copilot are changing how code is written, as they can suggest entire functions or snippets based on the code context. This feature expedites development, reduces coding errors, and allows developers to focus on high-level design. OpenAI's Codex is another powerful tool that translates natural language descriptions into code, making it easier to create web forms and other applications quickly. Quantum Computing: Practical Implications on the Horizon Quantum computing is advancing rapidly, promising to revolutionize problem-solving methods across industries. While widespread use of full-scale quantum computers is not yet common, progress is evident in quantum algorithms and hybrid models. The year 2025 is expected to bring significant advancements in quantum computing, with practical applications becoming more prominent. Developers will need to learn quantum programming languages to stay ahead of developments. Despite still being experimental, quantum computing is beginning to make a tangible impact in fields such as cryptography and simulations. Transitioning from theoretical research to practical use, quantum computing is on the brink of major breakthroughs. Example: IBM’s 127-qubit Eagle processor is pioneering practical quantum computing for drug discovery and material science.
By simulating molecular interactions at a quantum level, breakthroughs in creating new pharmaceuticals or materials are on the horizon. On the other hand, D-Wave’s Advantage, a quantum annealing system, is being utilized by companies like Volkswagen to optimize traffic flow in urban areas. Leveraging quantum computing to process complex traffic patterns, Volkswagen aims to enhance city traffic management and overall transportation efficiency. Cybersecurity: Advanced Threat Detection and Response Cybersecurity is a top priority in IT development due to the growing sophistication of cyber threats. In 2025, we expect to see more emphasis on advanced threat detection, zero-trust security models, and comprehensive encryption techniques. Companies are investing in AI-powered systems for detecting threats, while developers are integrating robust security measures and staying informed about the latest practices and compliance requirements. With cyber threats constantly evolving, cybersecurity measures are also advancing to keep up. Regulatory compliance will drive the need for stronger security measures across all development levels to protect against these threats. Example: Google's BeyondCorp is a zero-trust security model that eliminates traditional perimeter-based security measures by continuously verifying user and device identity before granting access. This approach improves security by considering threats from both inside and outside the organization. Meanwhile, Darktrace's Antigena is an autonomous response technology using machine learning to detect and respond to cybersecurity threats in real-time. For example, it can identify unauthorized network activity and promptly act, like isolating affected systems, to prevent further damage. Edge Computing Enhances Real-Time Data Processing Edge computing is gaining traction by moving computational power closer to data sources, reducing latency and improving real-time processing. It is essential for applications needing fast data processing by shortening data travel distance. This technology enhances performance for IoT, autonomous vehicles, and smart cities. To adapt to this shift, developers should focus on optimizing software for edge environments and efficiently managing distributed data. Edge computing is transforming data processing by bringing computation closer to the source, benefiting applications that require real-time data processing. As more companies embrace this trend, developers must optimize applications for decentralized environments and manage data across distributed systems effectively. Example: Edge computing is used in smart cities to analyze data from surveillance cameras in real-time, enabling quick responses to traffic violations or security threats. For example, Cisco's Edge Intelligence platform helps businesses deploy edge computing solutions for real-time analysis of data from IoT sensors, such as predicting equipment failures in manufacturing settings to prevent downtime and improve efficiency. Low-Code and No-Code Platforms Foster Rapid Development Low-code and no-code platforms are revolutionizing application development, allowing non-developers to easily create functional software. These platforms are democratizing the process, empowering users with limited coding skills to build their own applications. As we look ahead to 2025, these platforms will continue to evolve, offering more advanced features and integrations. 
This advancement will streamline development processes and enable a wider range of individuals to contribute to IT solutions. Developers may increasingly collaborate with these platforms to enhance their capabilities and create tailored solutions for businesses. Example: Low-code/no-code platforms like Microsoft PowerApps, Bubble, and AppGyver empower business users to create custom applications without advanced programming skills. For instance, PowerApps and Bubble enable a marketing team to develop a tailored CRM solution without IT support. AppGyver offers a no-code environment for building complex mobile and web apps, such as a healthcare provider designing a custom patient management system for better service delivery and streamlined information handling. check full details about PowerApps in our Detailed Guide. Green IT: Driving Sustainable Practices Sustainability is becoming a key priority in IT development, with a particular emphasis on green IT practices to reduce environmental impact. This includes energy-efficient data centers, sustainable hardware, and eco-friendly coding techniques gaining popularity. Companies are placing a greater importance on incorporating sustainability into their IT strategies to decrease their carbon footprint and uphold environmental responsibility. As a result, developers are being urged to consider the ecological implications of their work and integrate sustainable practices into their projects. This shift towards green IT is essential for minimizing environmental impact and promoting eco-friendly operations in the IT industry. Example: Tech giants like Google and Microsoft are leading the way in adopting energy-efficient technologies in data centers. Google has committed to operating all data centers on renewable energy, setting a high standard for the industry. Microsoft's Project Natick is developing underwater data centers that use natural cooling properties, reducing energy consumption. These efforts are reducing carbon footprints and creating a more sustainable IT infrastructure. 5G and Emerging 6G Technologies The roll out of 5G networks is boosting connectivity, speeding up data transfer, and introducing new applications. Research is already in progress for 6G technology, which is expected to bring further advancements. In 2025, we can anticipate significant progress in 5G technology and exploration of 6G possibilities. These advancements will fuel innovation in augmented reality (AR), virtual reality (VR), and the Internet of Things (IoT). The expansion of 5G networks is revolutionizing connectivity by supporting fast data speeds and reducing latency. This year, we are witnessing wider acceptance of 5G, driving innovations in AR, VR, and IoT. Additionally, ongoing research into 6G technology is likely to lead to even more advanced connectivity solutions. Developers should stay informed about these developments to harness new opportunities and create applications that can fully utilize next-generation networks. Example: The deployment of 5G networks has led to the rise of real-time interactive augmented reality (AR) applications like gaming and remote assistance. Researchers are now looking into 6G technology to achieve even faster speeds and lower latency, potentially transforming fields like autonomous driving and immersive virtual reality experiences. Additionally, Qualcomm's Snapdragon X65 5G modem allows for high-speed data transfer and low latency, enabling applications such as high-definition live streaming and AR experiences. 
The development of 6G may further advance technologies like holographic communication and immersive VR environments. Enhanced User Experience (UX) with AI and Personalization User experience (UX) is vital, focusing on personalized and intuitive interfaces. The evolution of UX emphasizes personalization and intelligent design, aided by AI advancements. In 2025, IT development will prioritize creating personalized experiences across digital platforms. AI-driven insights will enable developers to customize applications and services based on individual user preferences and behaviors. Enhancing engagement and satisfaction, developers are increasingly tailoring experiences to user preferences. UX design is becoming more data-driven, emphasizing understanding user behavior to create meaningful interactions. Exceptional user experiences, focusing on personalization, remain a top priority in the industry. Example: Streaming services like Netflix utilize machine learning algorithms to analyze user preferences and habits, offering personalized content recommendations for an improved user experience. Similarly, Adobe Experience Cloud employs AI technology to personalize content and optimize user experiences on various platforms, enhancing user engagement and satisfaction through tailored recommendations and targeted marketing strategies. Blockchain Applications Beyond Financial Transactions Blockchain technology is expanding beyond cryptocurrency into various industries. By 2025, it will be prominently used in supply chain management, identity verification, and smart contracts. The transparency and security features of blockchain make it a valuable tool for businesses. Blockchain developers need to understand its principles and explore its potential in different scenarios outside of financial transactions. Example: Blockchain is utilized in supply chain management to trace product origins, enhance transparency, and mitigate fraud. IBM and Walmart employ blockchain to monitor goods from production to consumption, improving food safety. Everledger, on the other hand, utilizes blockchain to track diamonds and high-value items, creating an unchangeable record of their journey. This ensures transparency and helps in preventing fraud within the diamond supply chain, offering consumers accurate information regarding their purchases. Advancements in Remote Work and Collaboration Tools The remote work trend is advancing with upgraded tools for collaboration and project management. Companies are investing in enhanced tools for productivity and teamwork. Developers are creating more integrated, secure, and efficient solutions like virtual workspaces, collaborative coding environments, and project management tools. The goal is to design solutions that enable seamless communication and productivity, regardless of location. Example: Virtual workspaces, collaborative coding environments, and integrated project management platforms illustrate this shift, allowing distributed teams to communicate and deliver work as effectively as co-located ones.
Conclusion The software development landscape in 2025 is characterized by rapid advancements and transformative technologies such as generative AI, edge computing, cybersecurity, and sustainability. Staying informed about these trends is crucial for IT professionals and organizations to leverage new technologies effectively and remain competitive in a rapidly evolving industry. Adapting to these changes will be key for developers to push the boundaries of what's possible and shape the future of IT. By embracing innovations like generative AI, quantum computing, and advanced cybersecurity, the industry is presented with new opportunities for growth and progress. Keeping an eye on these trends throughout the year will ensure that you stay current and position yourself for future success. Stay tuned for more insights and updates as we navigate these exciting developments together.
Why API Versioning? API versioning allows developers to: Introduce new API features without breaking existing clients. Deprecate older API versions in a controlled manner. Provide clear communication about supported versions. With .NET 8.0, setting up API versioning is straightforward and efficient. Let’s explore how to implement it. In the Program.cs file, configure services for controllers and API versioning: using Microsoft.AspNetCore.Mvc; var builder = WebApplication.CreateBuilder(); // Add services for controllers and API versioning builder.Services.AddControllersWithViews(); builder.Services.AddApiVersioning(o => { o.ReportApiVersions = true; // Include version information in responses }); var app = builder.Build(); // Map default controller route app.MapDefaultControllerRoute(); app.Run(); NuGet Package Name: Microsoft.AspNetCore.Mvc.Versioning Implementing a Versioned Controller Define a versioned controller to handle API requests. Use the ApiVersion attribute to specify the API version and the route. [ApiVersion("1.0")] [ApiVersion("2.0")] [Route("api/v{version:apiVersion}/[controller]")] [ApiController] public class HelloWorldController : ControllerBase { [HttpGet] public IActionResult Get(ApiVersion apiVersion) => Ok(new { Controller = GetType().Name, Version = apiVersion.ToString(), Message = "This is version 1 of the API" }); [HttpGet, MapToApiVersion("2.0")] public IActionResult GetV2(ApiVersion apiVersion) => Ok(new { Controller = GetType().Name, Version = apiVersion.ToString(), Message = "This is version 2 of the API" }); } Key Points in the Code ApiVersion("1.0"): Specifies that this controller handles API version 1. Route("api/v{version:apiVersion}/[controller]"): Dynamically includes the API version in the route. ApiVersion parameter: Captures the requested version and includes it in the response. Endpoint : GET http://localhost:51346/api/v1/HelloWorld Response : { "Controller": "HelloWorldController", "Version": "1", "Message": "This is version 1 of the API" } Endpoint : GET http://localhost:51346/api/v2/HelloWorld Response : { "Controller": "HelloWorldController", "Version": "2", "Message": "This is version 2 of the API" } Conclusion API versioning in .NET 8.0 is a simple yet powerful feature for managing evolving APIs. By integrating AddApiVersioning and leveraging attributes like ApiVersion and Route, developers can efficiently support multiple API versions without sacrificing maintainability. If you have further questions or insights, feel free to share them in the comments!
Before diving into optimization techniques, it’s important to identify the areas of your code that require improvement. By measuring and profiling your application’s performance, you can pinpoint the exact bottlenecks and focus your optimization efforts where they matter the most (Measure and Identify Bottlenecks). In this blog, I’ll explain effective strategies for handling memory and reducing garbage collection overhead in your C# applications. Memory management and garbage collection are essential aspects of performance tuning in C#, so these best practices will help you optimize your code for maximum efficiency. Here are 8 tips that will help with performance optimization. 1. Use the IDisposable interface: Utilizing the IDisposable interface is a crucial C# performance tip. It helps you properly manage unmanaged resources and ensures that your application’s memory usage is efficient. Bad way: public class ResourceHolder { private Stream _stream; public ResourceHolder(string filePath) { _stream = File.OpenRead(filePath); } // Missing: IDisposable implementation } Good way: public class ResourceHolder : IDisposable { private Stream _stream; public ResourceHolder(string filePath) { _stream = File.OpenRead(filePath); } public void Dispose() { _stream?.Dispose(); // Properly disposing the unmanaged resource. } } By implementing the IDisposable interface, you ensure that unmanaged resources will be released when no longer needed, preventing memory leaks and reducing pressure on the garbage collector. This is a fundamental code optimization technique in C# that developers should utilize. 2. Asynchronous Programming with async/await Asynchronous programming is a powerful technique for improving C# performance in I/O-bound operations, allowing you to enhance your app’s responsiveness and efficiency. Here, we’ll explore some best practices for async/await in C#. Limit the number of concurrent operations Bad way: public async Task ProcessManyItems(List<string> items) { var tasks = items.Select(async item => await ProcessItem(item)); await Task.WhenAll(tasks); } Good way: public async Task ProcessManyItems(List<string> items, int maxConcurrency = 10) { using (var semaphore = new SemaphoreSlim(maxConcurrency)) { var tasks = items.Select(async item => { await semaphore.WaitAsync(); // Limit concurrency by waiting for the semaphore. try { await ProcessItem(item); } finally { semaphore.Release(); // Release the semaphore to allow other operations. } }); await Task.WhenAll(tasks); } } Without limiting concurrency, many tasks will run simultaneously, which can lead to heavy load and degraded overall performance. Instead, use a SemaphoreSlim to control the number of concurrent operations. 3. Use ConfigureAwait(false) when possible ConfigureAwait(false) is a valuable C# performance trick that can help prevent deadlocks in your async code and improve efficiency by not forcing continuations to run on the original synchronization context. public async Task<string> DataAsync() { var data = await ReadDataAsync().ConfigureAwait(false); // Use ConfigureAwait(false) to avoid potential deadlocks. return ProcessData(data); } 4. Parallel Computing and the Task Parallel Library This harnesses the power of multicore processors to speed up CPU-bound operations. Bad way: private void Data(List<int> data) { for (int i = 0; i < data.Count; i++) { PerformExpensiveOperation(data[i]); } } Good way: private void Data(List<int> data) { Parallel.ForEach(data, item => PerformExpensiveOperation(item)); } Parallel loops can considerably accelerate processing of large collections by distributing the workload among multiple CPU cores. Switch from regular for and foreach loops to their parallel counterparts whenever it’s feasible and safe. 5. Importance of Caching Data Utilizing in-memory caching can drastically reduce time-consuming database fetches and speed up your application. A sketch of this approach is shown below: frequently requested product data is cached in memory and re-read from the database only when the cached entry expires.
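The following is a minimal sketch of that caching pattern using IMemoryCache from Microsoft.Extensions.Caching.Memory (registered with builder.Services.AddMemoryCache()); the Product type, IProductRepository, and cache key are illustrative placeholders rather than code from the original article.

```csharp
using Microsoft.Extensions.Caching.Memory;

public record Product(int Id, string Name);

public interface IProductRepository
{
    Task<List<Product>> GetAllAsync(); // e.g. an EF Core query in the real application
}

public class ProductService
{
    private readonly IMemoryCache _cache;
    private readonly IProductRepository _repository;

    public ProductService(IMemoryCache cache, IProductRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public async Task<List<Product>> GetProductsAsync()
    {
        // Serve the cached list when present; only hit the database when
        // the entry is missing or has expired.
        var products = await _cache.GetOrCreateAsync("products:all", async entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return await _repository.GetAllAsync();
        });

        return products ?? new List<Product>();
    }
}
```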
6. Optimizing LINQ Performance Force immediate execution using ToList() or ToArray() when needed. Use the AsParallel() extension method to parallelize queries whose operations are independent and thread-safe. Selecting a HashSet instead of a List offers faster look-up times and greater performance for membership checks. 7. Task and ValueTask for reusing asynchronous code Use ValueTask to reduce heap allocations: public async ValueTask<string> DataAsync() { var data = await ReadFromStreamAsync(_stream); return ProcessData(data); } By switching from Task<TResult> to ValueTask<TResult>, you can reduce heap allocations and ultimately improve your C# performance. 8. Use HttpClientFactory to manage HttpClient instances private readonly HttpClient _httpClient; public MyClass(HttpClient httpClient) { _httpClient = httpClient; } public async Task GetDataAsync() { var response = await _httpClient.GetAsync("http://himashu.com/data"); } This approach manages the lifetimes of your HttpClient instances more efficiently, preventing socket exhaustion. - Use null-coalescing operators (??, ??=) string dataInput = NullableString() ?? "default"; - Using Span and Memory for efficient buffer management // Using Span<T> avoids additional memory allocation and copying byte[] data = GetData(); Span<byte> dataSpan = data.AsSpan(); ProcessData(dataSpan); - Use StringComparison options for efficient string comparison bool equal = string.Equals(string1, string2, StringComparison.OrdinalIgnoreCase); - Use StringBuilder over string concatenation in loops StringBuilder sb = new StringBuilder(); for (int i = 0; i < 1000; i++) { sb.AppendFormat("Iteration: {0}", i); } string result = sb.ToString(); This has been a collection of just a few things I’ve found useful for enhancing the performance of my C# .NET code. Remember that the key to successful development is a balance between code quality and performance optimizations. By employing these techniques, you’ll be able to build high-performing C# applications that deliver a seamless user experience.
In the world of software development and testing, having access to realistic and diverse data sets is crucial. That's why we are thrilled to introduce IndiGen, a powerful and versatile package designed to generate realistic Indian data with ease. Why IndiGen? IndiGen is a comprehensive tool that caters specifically to the needs of developers and testers who require authentic Indian data for their projects. Whether you are working on unit tests, creating sample data, or validating functionality, IndiGen has got you covered. Key Features Realistic Indian Names: Generate complete names, first names, last names, middle names, prefixes, and suffixes. var fullName = India.Faker.Name.FullName(); // Example: Ramesh Babu var firstName = India.Faker.Name.First(); // Example: Amitabh var lastName = India.Faker.Name.Last(); // Example: Kapoor var middleName = India.Faker.Name.Middle(); // Example: Hrutvik var prefix = India.Faker.Name.Prefix(); // Example: Shri var suffix = India.Faker.Name.Suffix(); // Example: Bhai, Kumar Valid Phone Numbers: Generate realistic Indian phone numbers. var phoneNumber = India.Faker.Phone.Number(); // Example: +91-9988776655, 9998887770, 079-27474747 Authentic Vehicle Number Plates: Generate vehicle number plates in Indian formats. var vehicleNumberPlate = India.Faker.VehicleNumberPlate.Number(); // Example: GJ 01 AA 7777, 24 BH 9999 AA Valid PAN Card Numbers: Generate PAN card numbers that conform to Indian standards. var panCardNumber = India.Faker.PanCardNumber.Number(); // Example: AABBB8888A Aadhaar Card Numbers: Generate Aadhaar card numbers. var aadhaarCardNumber = India.Faker.AadharCardNumber.Number(); // Example: 2222 4444 2222 Supported Versions IndiGen is compatible with a wide range of .NET versions, ensuring flexibility and ease of integration into your projects: .NET Framework 4.5, 4.6, 4.7, 4.8 .NET Standard 2.0, 2.1 .NET Core 3.0, 3.1 .NET 5.0, 6.0 Get Started with IndiGen Getting started with IndiGen is simple. Visit our NuGet package page and integrate it into your projects to start generating realistic Indian data today. How to Install Installing IndiGen is straightforward. You can add it to your project using the NuGet Package Manager, .NET CLI, or by editing your project file. Using NuGet Package Manager Open your project in Visual Studio. Go to Tools > NuGet Package Manager > Manage NuGet Packages for Solution. Search for IndiGen. Select the package and click Install. Using .NET CLI Run the following command in your terminal:\ dotnet add package IndiGen Editing Your Project File Add the following line to your .csproj file: <PackageReference Include="IndiGen" Version="8.0.1" /> Replace "8.0.1" with the latest version of IndiGen. NuGet Package: IndiGen Contribute to IndiGen We welcome contributions from the community. If you have suggestions, improvements, or new features in mind, please open an issue or submit a pull request. Together, we can make IndiGen even better! IndiGen is here to simplify your development and testing process by providing realistic Indian data. Try it out and let us know your thoughts. Happy coding!
Introduction In the ever-evolving landscape of web development, simplicity is key. Enter Minimal APIs in ASP.NET Core, a lightweight and streamlined approach to building web applications. In this detailed blog, we'll explore the concept of Minimal APIs, understand why they matter, and walk through their implementation in ASP.NET Core. When to Use Minimal APIs? Minimal APIs are well-suited for small to medium-sized projects, microservices, or scenarios where a lightweight and focused API is sufficient. They shine in cases where rapid development and minimal ceremony are top priorities. You can find in this blog <link> how to create minimal api. I am directly showing the comparison between MinimalAPI and controller. Controllers: Structured and Versatile Controllers, deeply rooted in the MVC pattern, have been a cornerstone of ASP.NET API development for years. They provide a structured way to organize endpoints, models, and business logic within dedicated controller classes. Let's consider an example using Microsoft.AspNetCore.Mvc; namespace MinimalAPI.Controllers { [ApiController] [Route("[controller]")] public class WeatherForecastController : ControllerBase { private static readonly string[] Summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; private readonly ILogger<WeatherForecastController> _logger; public WeatherForecastController(ILogger<WeatherForecastController> logger) { _logger = logger; } [HttpGet(Name = "GetWeatherForecast")] public IEnumerable<WeatherForecast> Get() { return Enumerable.Range(1, 5).Select(index => new WeatherForecast { Date = DateOnly.FromDateTime(DateTime.Now.AddDays(index)), TemperatureC = Random.Shared.Next(-20, 55), Summary = Summaries[Random.Shared.Next(Summaries.Length)] }) .ToArray(); } } } Advantages of Controllers in Action Structure and Organization: Controllers offer a clear structure, separating concerns and enhancing maintainability. Flexibility: They enable custom routes, complex request handling, and support various HTTP verbs. Testing: Controllers facilitate unit testing of individual actions, promoting a test-driven approach Minimal APIs: Concise and Swift With the advent of .NET 6, Minimal APIs emerged as a lightweight alternative, aiming to minimize boilerplate code and simplify API creation. Here's an example showcasing Minimal APIs. using MinimalAPI; var builder = WebApplication.CreateBuilder(args); // Add services to the container. builder.Services.AddControllers(); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); app.MapGet("/GetWeatherForecast", () => { var rng = new Random(); var summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; var weatherForecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast { Date = DateTime.Now.AddDays(index).Date, TemperatureC = rng.Next(-20, 55), Summary = summaries[rng.Next(summaries.Length)] }).ToArray(); return Results.Ok(weatherForecasts); }); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); app.UseAuthorization(); app.MapControllers(); app.Run(); Advantages of Minimal APIs in Focus Simplicity: Minimal APIs drastically reduce code complexity, ideal for smaller projects or rapid prototyping. 
Ease of Use: They enable quick API creation with fewer dependencies, accelerating development cycles. Potential Performance Boost: The reduced overhead might lead to improved performance, especially in smaller applications. What should you choose: Minimal APIs or Controllers? Choosing between Controllers and Minimal APIs hinges on various factors. Project Scale: Controllers offer better organization and structure for larger projects with intricate architectures. Development Speed: Minimal APIs shine when speed is crucial, suitable for rapid prototyping or smaller projects. Team Expertise: Consider your team's familiarity with MVC patterns versus readiness to adopt Minimal APIs. Conclusion The decision between Controllers and Minimal APIs for .NET APIs isn't about one being superior to the other. Rather, it's about aligning the choice with the project's specific needs and constraints. Controllers offer robustness and versatility, perfect for larger, complex projects. On the other hand, Minimal APIs prioritize simplicity and rapid development, ideal for smaller, more straightforward endeavours.
What is Authorization? Authorization verifies whether a user has permission to use specific applications or services. While authentication and authorization are distinct processes, authentication must precede authorization, ensuring the user's identity is confirmed before determining their access rights. When logging into a system, a user must provide credentials like a username and password to authenticate. Next, the authorization process grants rights. For example, an administrative user can create a document library to add, edit, and delete documents, while a non-administrative user can only read documents in the library. Types of Authorization: Simple Authorization Role-Based Authorization Claim-Based Authorization Policy-Based Authorization I have implemented an example of role-based authorization in .NET. Step 1: Create a new MVC Web Application with the Authentication type “Individual Account”. Step 2: Register Identity with DefaultTokenProvider in the program.cs file. builder.Services.AddIdentity<IdentityUser, IdentityRole>(options => options.SignIn.RequireConfirmedAccount = false) .AddEntityFrameworkStores<ApplicationDbContext>() .AddDefaultTokenProviders(); For better understanding, I have added one page to add a new role. Create a new method in the controller and add the following code. [HttpGet] public IActionResult Admin() { return View(); } Create a new model Role.cs. namespace Authorization.Models { public class Role { public string RoleName { get; set; } } } Create an HTML page for adding a role. @model Role @{ ViewData["Title"] = "Admin"; } <h1>Admin</h1> <div class="row"> <div class="col-md-12"> <form method="post" action="@Url.Action("Admin","Home")"> <div class="form-group"> <label>Role Name</label> <input type="text" class="form-control" style="width:30%;" asp-for="RoleName" placeholder="Role name" required> </div> <br /> <button class="btn btn-success" type="submit">Add</button> </form> </div> </div> I have created a simple page; you can modify the page as per your requirements. Add a new method in the controller with the following code, declare a RoleManager<IdentityRole> field, and inject it in the constructor. private readonly RoleManager<IdentityRole> _roleManager; public HomeController(RoleManager<IdentityRole> roleManager) { _roleManager = roleManager; } [HttpPost] public async Task<IActionResult> Admin(Role role) { var result = await _roleManager.RoleExistsAsync(role.RoleName); if (!result) { await _roleManager.CreateAsync(new IdentityRole(role.RoleName)); } return RedirectToAction("Admin"); } Add a new tab in the _Layout.cshtml file to redirect to the Add role page. <li class="nav-item"> <a class="nav-link text-dark" asp-area="" asp-controller="Home" asp-action="Admin">Add new role</a> </li> Run the project and you will see the output. Here, you can add a new role. Add a new field in register.cshtml using the following code to assign a role to the user. <div class="form-floating mb-3"> <select asp-for="Input.Role" class="form-control" aria-required="true"> <option value="">Select role</option> @foreach (var item in Model.RoleList) { <option value="@item.Name">@item.Name</option> } </select> <span asp-validation-for="Input.Role" class="text-danger"></span> </div> To get the list of roles, add the following code in your register.cshtml.cs file, and also set RoleList = _roleManager.Roles in the OnPostAsync method.
public IQueryable<IdentityRole> RoleList { get; set; } public async Task OnGetAsync(string returnUrl = null) { ReturnUrl = returnUrl; RoleList = _roleManager.Roles; ExternalLogins = (await _signInManager.GetExternalAuthenticationSchemesAsync()).ToList(); } Run the project and see the output. Now, Assign the role to the user, and for that add the following code in OnPostAsync after the user is created. await _userManager.AddToRoleAsync(user, Input.Role); Full code of register.cshtml.cs file. using Microsoft.AspNetCore.Authentication; using Microsoft.AspNetCore.Identity; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; using Microsoft.AspNetCore.WebUtilities; using System.ComponentModel.DataAnnotations; using System.Text; namespace Authorization.Areas.Identity.Pages.Account { public class RegisterModel : PageModel { private readonly SignInManager<IdentityUser> _signInManager; private readonly UserManager<IdentityUser> _userManager; private readonly RoleManager<IdentityRole> _roleManager; private readonly IUserStore<IdentityUser> _userStore; private readonly IUserEmailStore<IdentityUser> _emailStore; private readonly ILogger<RegisterModel> _logger; //private readonly IEmailSender _emailSender; public RegisterModel( UserManager<IdentityUser> userManager, IUserStore<IdentityUser> userStore, SignInManager<IdentityUser> signInManager, ILogger<RegisterModel> logger, RoleManager<IdentityRole> roleManager ) { _userManager = userManager; _userStore = userStore; _emailStore = GetEmailStore(); _signInManager = signInManager; _roleManager = roleManager; _logger = logger; } [BindProperty] public InputModel Input { get; set; } public IQueryable<IdentityRole> RoleList { get; set; } public string ReturnUrl { get; set; } public IList<AuthenticationScheme> ExternalLogins { get; set; } public class InputModel { [Required] [EmailAddress] [Display(Name = "Email")] public string Email { get; set; } [Required] [Display(Name = "Role")] public string Role { get; set; } [Required] [StringLength(100, ErrorMessage = "The {0} must be at least {2} and at max {1} characters long.", MinimumLength = 6)] [DataType(DataType.Password)] [Display(Name = "Password")] public string Password { get; set; } [DataType(DataType.Password)] [Display(Name = "Confirm password")] [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")] public string ConfirmPassword { get; set; } } public async Task OnGetAsync(string returnUrl = null) { ReturnUrl = returnUrl; RoleList = _roleManager.Roles; ExternalLogins = (await _signInManager.GetExternalAuthenticationSchemesAsync()).ToList(); } public async Task<IActionResult> OnPostAsync(string returnUrl = null) { returnUrl ??= Url.Content("~/"); RoleList = _roleManager.Roles; ExternalLogins = (await _signInManager.GetExternalAuthenticationSchemesAsync()).ToList(); if (ModelState.IsValid) { var user = CreateUser(); await _userStore.SetUserNameAsync(user, Input.Email, CancellationToken.None); await _emailStore.SetEmailAsync(user, Input.Email, CancellationToken.None); var result = await _userManager.CreateAsync(user, Input.Password); if (result.Succeeded) { _logger.LogInformation("User created a new account with password."); await _userManager.AddToRoleAsync(user, Input.Role); var userId = await _userManager.GetUserIdAsync(user); var code = await _userManager.GenerateEmailConfirmationTokenAsync(user); code = WebEncoders.Base64UrlEncode(Encoding.UTF8.GetBytes(code)); if (_userManager.Options.SignIn.RequireConfirmedAccount) { return 
RedirectToPage("RegisterConfirmation", new { email = Input.Email, returnUrl = returnUrl }); } else { await _signInManager.SignInAsync(user, isPersistent: false); return LocalRedirect(returnUrl); } } foreach (var error in result.Errors) { ModelState.AddModelError(string.Empty, error.Description); } } return Page(); } private IdentityUser CreateUser() { try { return Activator.CreateInstance<IdentityUser>(); } catch { throw new InvalidOperationException($"Can't create an instance of '{nameof(IdentityUser)}'. " + $"Ensure that '{nameof(IdentityUser)}' is not an abstract class and has a parameterless constructor, or alternatively " + $"override the register page in /Areas/Identity/Pages/Account/Register.cshtml"); } } private IUserEmailStore<IdentityUser> GetEmailStore() { if (!_userManager.SupportsUserEmail) { throw new NotSupportedException("The default UI requires a user store with email support."); } return (IUserEmailStore<IdentityUser>)_userStore; } } } Now, register one new user and assign a role to them. For example, I have created one user and assign “Admin” to them. I have added two new methods in the controller and added a default view for that. [Authorize(Roles = "User")] public IActionResult UserRoleCheck() { return View(); } [Authorize(Roles = "Admin")] public IActionResult AdminRoleCheck() { return View(); } I have set the Authorize attribute with the role name on both authorization methods. Now, I am running the project and clicking on Admin Role, It will open the page of admin because the logged user role and method role both are the same. If I click on User Role, It will give an Access denied error. Because logged user role and method role both are different. Here, I am using by default access denied page of identity. You can use custom page also, Just set this path to program.cs file. builder.Services.ConfigureApplicationCookie(options => { options.AccessDeniedPath = "/Identity/Account/AccessDenied"; // Customize this path as per your application's structure }); Using this way you will implement the role-based authorization in your application. Conclusion By properly implementing authorization in your applications, you can ensure that resources and sensitive information are accessible only to authorized users. Remember to choose the appropriate authorization technique based on your application’s requirements and complexity.
To utilize custom fonts from your .NET codebase in HTML or PDF documents, follow these steps: Add the fonts you intend to use for your PDF or HTML documents. Ensure they are in the .ttf extension format. Include the necessary package by adding the following line to your project file: <PackageReference Include="Polybioz.HtmlRenderer.PdfSharp.Core" Version="1.0.0" /> Initialize the IServiceCollection to utilize the CustomFontResolver class. You can achieve this by adding the following extension method: public static class IServicesCollectionExtension { public static IServiceCollection InitializeDocumentProcessor(this IServiceCollection services) { GlobalFontSettings.FontResolver = new CustomFontResolver(); return services; } } Initialize the class in your program file: builder.Services.InitializeDocumentProcessor(); Specify the DefaultFontName you wish to use. You can also manage bold and italic styles. public class CustomFontResolver : IFontResolver { string IFontResolver.DefaultFontName => "Rubik"; public FontResolverInfo ResolveTypeface(string familyName, bool isBold, bool isItalic) { if (isBold) { if (isItalic) { return new FontResolverInfo("Rubik#bi"); } return new FontResolverInfo("Rubik#b"); } if (isItalic) return new FontResolverInfo("Rubik#i"); return new FontResolverInfo("Rubik"); } public byte[] GetFont(string faceName) { switch (faceName) { case "Rubik": return CustomFontHelper.Rubik; case "Rubik#b": return CustomFontHelper.RubikBold; case "Rubik#bi": return CustomFontHelper.RubikBoldItalic; case "Rubik#i": return CustomFontHelper.RubikItalic; } return CustomFontHelper.Rubik; } } Define a helper class CustomFontHelper to facilitate loading font data. Ensure you have added the fonts for all the types you intend to use. public static class CustomFontHelper { public static byte[] Rubik { get { return LoadFontData("Rubik-Light.ttf"); } } public static byte[] RubikBold { get { return LoadFontData("Rubik-SemiBold.ttf"); } } public static byte[] RubikBoldItalic { get { return LoadFontData("Rubik-SemiBoldItalic.ttf"); } } public static byte[] RubikItalic { get { return LoadFontData("Rubik-Italic.ttf"); } } static byte[] LoadFontData(string name) { using (Stream stream = File.OpenRead("Fonts/" + name)) { if (stream == null) throw new ArgumentException("No resource with name " + name); int count = (int)stream.Length; byte[] data = new byte[count]; stream.Read(data, 0, count); return data; } } } By following these steps, you can seamlessly integrate custom fonts into your HTML and PDF documents from your .NET codebase, without needing to specify the font-family in the HTML directly. You can also pass font styles directly through code.
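As a usage illustration, the sketch below renders an HTML snippet to PDF so the resolver kicks in. It assumes the Polybioz HtmlRenderer.PdfSharp.Core port exposes the same PdfGenerator.GeneratePdf entry point as the original HtmlRenderer.PdfSharp library; the namespaces, HTML markup, and output path are illustrative and may need adjusting for your setup.

```csharp
// Namespaces may differ slightly depending on the HtmlRenderer/PdfSharp port in use.
using PdfSharpCore;
using TheArtOfDev.HtmlRenderer.PdfSharp;

public static class PdfSample
{
    public static void Render()
    {
        // "Rubik" resolves through CustomFontResolver, so no @font-face
        // declaration is required in the HTML itself.
        var html = "<h1 style=\"font-family:Rubik\">Invoice</h1><p><b>Thank</b> <i>you</i>!</p>";

        var document = PdfGenerator.GeneratePdf(html, PageSize.A4);
        document.Save("invoice.pdf");
    }
}
```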
What is a Minimal API? Minimal APIs are a simplified way of building web APIs in ASP.NET Core. They are designed for scenarios where you need a quick and minimalistic approach to expose endpoints without the overhead of a full-fledged MVC application. Why Minimal APIs? Efficiency: Write less, do more. A mantra for the modern developer. Performance: They’re lean, mean, and fast, perfect for high-performance scenarios. Ease of Use: New to .NET? No problem! Minimal APIs are accessible and easy to grasp. Flexibility: Simplicity doesn’t mean limited. From microservices to large-scale applications, they’ve got you covered. How Do Minimal APIs Work? Minimal APIs leverage the WebApplication class to define routes and handle HTTP requests. They rely on a functional approach, allowing developers to define endpoints using lambda expressions. Limitations of Minimal APIs No support for filters: For example, no support for IAsyncAuthorizationFilter, IAsyncActionFilter, IAsyncExceptionFilter, IAsyncResultFilter, and IAsyncResourceFilter. No support for model binding, i.e. IModelBinderProvider, IModelBinder. Support can be added with a custom binding shim. No support for binding from forms. This includes binding IFormFile. We plan to add support for IFormFile in the future. No built-in support for validation, i.e. IModelValidator No support for application parts or the application model. There's no way to apply or build your own conventions. No built-in view rendering support. We recommend using Razor Pages for rendering views. No support for JsonPatch No support for OData How to create a Minimal API? Creating a Minimal API closely mirrors the traditional approach, so you should encounter no significant challenges. It is a straightforward procedure that can be accomplished in just a few easy steps. Let's get started: Step 1: Open Visual Studio and select the ASP.NET Core Web API template. Provide a preferred name for your project and select the location where you wish to store it. For the final step, choose the targeted framework, ensure that the "Configure for HTTPS" and "Enable OpenAPI support" checkboxes are checked, and, most importantly, leave the checkbox "Use controllers (uncheck to use Minimal API)" unchecked. Then, click the "Create" button. Step 2: Create a class with two fields and a static list class with some sample values. namespace MinimalAPI { public class Student { public int Id { get; init; } public string Name { get; set; } } public static class StudentList { public static List<Student> student = new List<Student>() { new Student() { Id = 1, Name = "Test1", }, new Student() { Id = 2, Name = "Test2", }, new Student() { Id = 3, Name = "Test3", } }; } } Now register a new endpoint in the Program.cs file. app.MapGet("GetAllStudent", () => StudentList.student); Run the project and see the output. I have added Create, Update, and Delete student endpoints. See the full code below. using MinimalAPI; var builder = WebApplication.CreateBuilder(args); // Add services to the container.
builder.Services.AddControllers(); // Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle builder.Services.AddEndpointsApiExplorer(); builder.Services.AddSwaggerGen(); var app = builder.Build(); // GetAll app.MapGet("GetAllStudent", () => StudentList.student); // GetById app.MapGet("GetByStudentId/{id}", (int id) => StudentList.student.FirstOrDefault(user => user.Id == id)); // Create app.MapPost("CreateStudent", (Student student) => StudentList.student.Add(student)); // Update app.MapPut("UpdateStudent/{id}", (int id, Student student) => { Student currentStudent = StudentList.student.FirstOrDefault(user => user.Id == id); currentStudent.Name = student.Name; }); // Delete app.MapDelete("DeleteStudent/{id}", (int id) => { var student = StudentList.student.FirstOrDefault(user => user.Id == id); StudentList.student.Remove(student!); }); // Configure the HTTP request pipeline. if (app.Environment.IsDevelopment()) { app.UseSwagger(); app.UseSwaggerUI(); } app.UseHttpsRedirection(); app.UseAuthorization(); app.MapControllers(); app.Run(); Run the code and see the output. Conclusion: Minimal APIs in ASP.NET Core, introduced in .NET 6, offer a simplified and concise approach to building lightweight HTTP services, reducing boilerplate and emphasizing convention-based routing. While ideal for rapid development of small to medium-sized APIs, they lack advanced features found in traditional ASP.NET Core applications and may not be suitable for complex scenarios.