
Quick Setup: Kafka with ELK Integration
Aug 17, 2020

Apache Kafka is one of the most common buffer solutions deployed together with the ELK Stack. Kafka sits between the log delivery and the indexing units, decoupling the data being collected from the components that index it. In this blog, we'll see how to deploy all the components required to set up a resilient logs pipeline with Apache Kafka and the ELK Stack:

Filebeat – collects logs and forwards them to a Kafka topic.
Kafka – brokers the data flow and queues it.
Logstash – aggregates the data from the Kafka topic, processes it, and ships it to Elasticsearch.
Elasticsearch – indexes the data.
Kibana – for analyzing the data.

My environment: To perform the steps below, I set up a single Ubuntu 18.04 VM on AWS EC2 using local storage. In real-life scenarios, you will probably have all of these components running on separate machines. I started the instance in the public subnet of a VPC, set up a security group to enable access from anywhere using SSH and TCP 5601 (for Kibana), and added a new Elastic IP address associated with the running instance. The example logs used for this tutorial are Apache access logs; you could just as well use VPC Flow Logs, ALB access logs, etc.

Log in to your Ubuntu system as a user with sudo privileges. For a remote Ubuntu server, use SSH to access it; Windows users can use PuTTY or PowerShell. Elasticsearch requires Java to run, so make sure your system has Java installed:

sudo apt install openjdk-11-jdk-headless

Check that the installation was successful with the command below, which shows the current Java version:

$ java --version
openjdk 11.0.3 2019-04-16
OpenJDK Runtime Environment (build 11.0.3+7-Ubuntu-1ubuntu218.04.1)
OpenJDK 64-Bit Server VM (build 11.0.3+7-Ubuntu-1ubuntu218.04.1, mixed mode, sharing)

Step 1: Installing Elasticsearch

We will start by installing the main component in the stack, Elasticsearch. Since version 7.x Elasticsearch is bundled with Java, so we can jump right ahead with adding Elastic's signing key. Download and install the public signing key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

On Debian you may need to install the apt-transport-https package before proceeding:

sudo apt-get install apt-transport-https

Our next step is to add the repository definition to our system:

echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

You can then install the Elasticsearch Debian package with:

sudo apt-get update && sudo apt-get install elasticsearch

Before we bootstrap Elasticsearch, we need to apply some basic configuration in /etc/elasticsearch/elasticsearch.yml:

sudo nano /etc/elasticsearch/elasticsearch.yml

Since we are installing Elasticsearch on AWS, we will bind Elasticsearch to localhost.
Also, we need to define the private IP of our EC2 instance as a master-eligible node:

network.host: "localhost"
http.port: 9200
cluster.initial_master_nodes: ["<InstancePrivateIP>"]

Save the file and run Elasticsearch with:

sudo service elasticsearch start

To confirm that everything is working as expected, point curl at http://localhost:9200. You should see something like the following output (give Elasticsearch a minute or two before you start to worry about not seeing any response):

{
  "name" : "elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "W_Ky1DL3QL2vgu3sdafyag",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Installing Logstash

Next up is the "L" in ELK: Logstash. Since we already defined the Elastic repository on the system, installing Logstash is easy. Just run:

sudo apt-get install logstash -y

Logstash also requires Java; you can verify it is available with java -version.

Next, we will configure a Logstash pipeline that pulls our logs from the Kafka topic, processes them, and ships them on to Elasticsearch for indexing. Let's create a new config file:

sudo nano /etc/logstash/conf.d/apache.conf

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => "apache"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

As you can see, we are using the Logstash Kafka input plugin to define the Kafka host and the topic we want Logstash to pull from. We apply some filtering to the logs and ship the data to our local Elasticsearch instance.

Step 3: Installing Kibana

Let's move on to the next component in the ELK Stack: Kibana. As before, we will use a simple apt command to install Kibana:

sudo apt-get install kibana

We will then open the Kibana configuration file at /etc/kibana/kibana.yml and make sure the correct settings are defined:

server.port: 5601
server.host: "<INSTANCE_PRIVATE_IP>"
elasticsearch.hosts: ["http://localhost:9200"]

Then enable and start the Kibana service:

sudo systemctl enable kibana
sudo systemctl start kibana

We also need to install Filebeat, which will collect the Apache access logs and forward them to the Kafka topic (a sample Filebeat configuration is sketched below):

sudo apt install filebeat

Open Kibana in your browser at http://<PUBLIC_IP>:5601 and you will be presented with the Kibana home page.
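The steps above assume a Kafka broker is already running on localhost:9092 with an "apache" topic, but that part is not shown. As a minimal sketch (the Kafka version and download URL below are only examples; adjust them to the release you want), you could stand up a single-node broker with the scripts bundled in the Kafka distribution:

# Download and extract a Kafka binary release (version shown is only an example)
wget https://archive.apache.org/dist/kafka/2.5.0/kafka_2.12-2.5.0.tgz
tar -xzf kafka_2.12-2.5.0.tgz
cd kafka_2.12-2.5.0

# Start ZooKeeper and the Kafka broker with the bundled default configs
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties

# Create the "apache" topic that Filebeat will write to and Logstash will read from
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic apache

A single partition and a replication factor of 1 are enough for this single-VM setup.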
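Filebeat also needs to be told to read the Apache access logs and ship them to that topic instead of its default Elasticsearch output. A minimal sketch of /etc/filebeat/filebeat.yml, assuming Apache writes its access log to the default Ubuntu location (comment out the default output.elasticsearch section if it is enabled):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/access.log   # default Apache access log path on Ubuntu

output.kafka:
  hosts: ["localhost:9092"]
  topic: "apache"                     # must match the topic Logstash reads from
  codec.format:
    string: '%{[message]}'            # ship only the raw log line

After saving the file, restart Filebeat for the change to take effect.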
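Finally, the Logstash service itself is never started in the steps above. Assuming the Debian packages installed their usual systemd units, a quick way to start everything and confirm that data is flowing is:

# Start Logstash and Filebeat
sudo systemctl start logstash
sudo systemctl start filebeat

# After a minute or two, check that Logstash has created an index in Elasticsearch
curl "http://localhost:9200/_cat/indices?v"

By default the Logstash Elasticsearch output writes to indices whose names start with logstash-, so defining a logstash-* index pattern in Kibana will let you explore the Apache logs.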

Quick Tips: Terraform Infrastructure as Code
Jul 13, 2020

You may have heard of infrastructure as code (IaC). But do you know what infrastructure is? Why do we need infrastructure as code? What are the benefits of infrastructure as code? Is it safe and secure?

What is Infrastructure as Code (IaC)?

Infrastructure as code (IaC) means managing and upgrading your environments as infrastructure using configuration files. Terraform provides infrastructure as code for provisioning, compliance, and management across any public cloud, private data center, and third-party service. It enables teams to write, share, manage, and automate any infrastructure using version control, provides automated policy enforcement for security, compliance, and operational best practices, and lets developers provision their desired infrastructure from within their own workflows. From a business perspective, IaC has a high impact by providing increased productivity, reduced risk, and reduced cost.

Why do we use Infrastructure as Code (IaC)?

Terraform is a simple, human-readable configuration language for defining the desired topology of infrastructure resources. Its key capabilities include:

VCS Integration – Write, version, review, and collaborate on Terraform code using your preferred version control system.
Workspaces – Workspaces decompose monolithic infrastructure into smaller components, or "micro-infrastructures". These workspaces can be aligned to teams for role-based access control.
Variables – Granular variables allow easy reuse of code and enable dynamic changes to scale resources and deploy new versions (see the variables sketch after the EC2 example below).
Runs – Terraform uses two-phased provisioning: a plan (dry run) and an apply (execution). Plans can be inspected before execution to ensure expected behavior and safety.
Infrastructure State – The state file is a record of currently provisioned resources. State files enable a versioned history of the infrastructure and are encrypted at rest. Versions can be inspected to see incremental changes.
Policy as Code – Sentinel is a policy-as-code framework to automate multi-cloud governance.

What are the benefits of Infrastructure as Code (IaC)?

Infrastructure as code enables infrastructure teams to test applications in staging or development environments early in the development cycle. It also saves you time and money: the code itself gives us a version history of when the infrastructure was upgraded and who did it. Otherwise we would have to ask the infrastructure admin to dig through logs, which is very time-consuming. By checking the configuration into version control we get versioning, so we can see an incremental history of who changed what. Use infrastructure as code to build, update, and manage any cloud, infrastructure, or service.

Terraform makes it easy to reuse configurations for similar infrastructure, helping you avoid mistakes and save time. We can use the same configuration code for staging, production, and development environments. Terraform supports many providers with just a few simple lines of code. Major providers include AWS, Azure, GitHub, GitLab, Google Cloud Platform, VMware, Docker, and 200+ more.

A simple example to create an EC2 instance with just a few lines of code:

resource "aws_instance" "ec2_instance" {
  ami                    = "ami-*******"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.*****.id}"]
  key_name               = "${aws_key_pair.****.id}"

  tags = {
    Name = "New-EC2-Instance"
  }
}
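The variables feature mentioned above is what makes an example like this reusable across environments. As a minimal, hypothetical sketch (the variable names and defaults are illustrative, the resource is trimmed to the essentials, and the var.* syntax assumes Terraform 0.12 or later), the same EC2 resource could be parameterized like this:

# variables.tf (hypothetical): inputs that change between environments
variable "environment" {
  type    = string
  default = "staging"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

# main.tf: the same resource, now driven by variables
resource "aws_instance" "ec2_instance" {
  ami           = "ami-*******"
  instance_type = var.instance_type

  tags = {
    Name = "ec2-${var.environment}"
  }
}

Running terraform apply -var="environment=production", or supplying a per-environment .tfvars file with -var-file, then reuses exactly the same code for another environment.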
Before any of these resources can be created, we have to tell Terraform which provider we are writing the code for. Here is the basic code to configure a provider:

provider "aws" {
  region = "us-west-2"
  ## PROVIDE CREDENTIALS
}

Now, to create the EC2 instance in AWS, we have to run a few commands. Terraform has four main commands to check and apply infrastructure changes: init, plan, apply, and destroy.

1. Init

$ terraform init

As the name suggests, this command initializes Terraform in our working directory. It creates the basic backend and state-related files and folders that Terraform uses internally.

2. Plan

$ terraform plan

Much like a compile step in other languages, this checks the configuration for errors and plans what is going to happen when we apply it. It shows which resources are going to be created and what their configuration will be.

3. Apply

$ terraform apply

Now it is time to run the configuration and see what it generates. This command executes the configuration and applies the changes to our infrastructure, creating the resources we have described in the code.

4. Destroy

$ terraform destroy

This command is used when we want to remove or destroy resources. When a resource is no longer needed, we just run this command to destroy it, and your money is saved.
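One optional addition worth placing alongside the provider block is pinning the Terraform and provider versions, so that terraform init always installs a known-good AWS provider. The version constraints below are only examples, and the required_providers syntax shown assumes Terraform 0.13 or later:

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

With this in place, terraform init downloads the pinned provider into the local .terraform directory before plan and apply run.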
