Wednesday, December 20, 2017

AWS CLI important commands

Describe a load balancer and extract its registered instance IDs

aws elb describe-load-balancers --load-balancer-names lbname --output text|grep INSTANCES|awk '{print $2}' > filename.txt

Describe instances with a particular instance type

aws ec2 describe-instances --filters "Name=instance-type,Values=m5.large"

aws ec2 describe-instances

Describe all volumes in the account and their information

aws ec2 describe-volumes --query 'Volumes[*].{ID:VolumeId,Tag:Tags}'

Get the instance ID from instance metadata
wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

Get an instance's public IP (filtering out private 10.x addresses)

aws ec2 describe-instances --instance-ids $p | grep PublicIpAddress | grep -o -P "\d+\.\d+\.\d+\.\d+" | grep -v '^10\.' >> filename.txt
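
A simpler alternative is to let the CLI do the filtering with --query (i-0123456789abcdef0 here is just a placeholder instance ID):

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].PublicIpAddress' --output text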










Enable the Apache (httpd) server-status module and configure it to listen on port 800.

Then create a shell script that uses the AWS tools and schedule it in cron so it pushes the data to CloudWatch every few minutes.
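
For example, a crontab entry like this would run it every 5 minutes (the path /opt/scripts/apache-cloudwatch.sh is just an assumed location for the script below):

# push Apache worker counts to CloudWatch every 5 minutes
*/5 * * * * /opt/scripts/apache-cloudwatch.sh >> /var/log/apache-cloudwatch.log 2>&1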

#!/bin/bash

logger "Apache Status Started"

export AWS_CREDENTIAL_FILE=/opt/aws/credential-file-path.template
export AWS_CLOUDWATCH_HOME=/opt/aws/apitools/mon
export AWS_PATH=/opt/aws
export AWS_AUTO_SCALING_HOME=/opt/aws/apitools/as
export AWS_ELB_HOME=/opt/aws/apitools/elb
export AWS_RDS_HOME=/opt/aws/apitools/rds
export EC2_AMITOOL_HOME=/opt/aws/amitools/ec2
export EC2_HOME=/opt/aws/apitools/ec2
export JAVA_HOME=/usr/lib/jvm/jre
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin

SERVER=`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`
#echo SERVER=$SERVER
BUSYWORKERS=`wget -q -O - http://localhost:800/server-status?auto | grep BusyWorkers | awk '{ print $2 }'`
#echo BUSYWORKERS=$BUSYWORKERS
IDLEWORKERS=`wget -q -O - http://localhost:800/server-status?auto | grep IdleWorkers | awk '{ print $2 }'`
#echo IDLEWORKERS=$IDLEWORKERS

/opt/aws/bin/mon-put-data --metric-name httpd-busyworkers --namespace "EC2: HTTPD" --dimensions "InstanceId=$SERVER" --unit Count --value $BUSYWORKERS

/opt/aws/bin/mon-put-data --metric-name httpd-idleworkers --namespace "EC2: HTTPD" --dimensions "InstanceId=$SERVER" --unit Count --value $IDLEWORKERS

logger "Apache Status Ended with $SERVER $BUSYWORKERS $IDLEWORKERS"


We can set up an alarm to scale the server up once BusyWorkers stays at or above MinSpareServers for more than 5 minutes, and to scale down once BusyWorkers stays below MinSpareServers for more than 5 minutes.
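
A rough sketch of such an alarm with the modern AWS CLI (the threshold of 20, the instance ID, and the SNS/scaling action ARN are all placeholders to replace with your own values):

aws cloudwatch put-metric-alarm \
  --alarm-name httpd-busyworkers-high \
  --namespace "EC2: HTTPD" \
  --metric-name httpd-busyworkers \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 20 --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:scale-up-topic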

Detailed information can be found at

http://blog.domenech.org/2012/11/aws-cloudwatch-custom-metric-for-apache.html
While trying to change a C4, M4, T2, or R3 instance to a C5 instance, we basically get the error below.

Error starting instances
Enhanced networking with the Elastic Network Adapter (ENA) is required for the 'c5.xlarge' instance type. Ensure that your instance 'i-0de56fe4bb5f3ba27' is enabled for ENA.

So we have to follow the steps below.

Run modinfo ena to check whether the ENA driver is available. If it is not, run yum update and reboot the instance.

Run ethtool -i eth0 to check whether the ena module is the active driver.

Configure the AWS CLI on another instance; from that instance you can query your instance's ENA status:

aws ec2 describe-instances --instance-ids i-040b1236aXXXXX --query 'Reservations[].Instances[].EnaSupport'

Configure the AWS CLI on another instance; from that instance you can query your AMI's ENA status:

aws ec2 describe-images --image-ids ami-2XXXXX --query 'Images[].EnaSupport'

Command to enable ENA support (the instance must be stopped first):

aws ec2 modify-instance-attribute --instance-id i-040bXXXX --ena-support

Back up your instance and create an AMI to be on the safe side.

First run yum update on your instance, which installs the ENA driver, then shut down the instance and query for ENA support. If the value comes back null, the instance still needs to be modified to enable ENA support; if the value comes back true, you can change the instance type to C5.

For an ENA-enabled AMI, first launch an instance, enable ENA support on it, and then create a new AMI from it.
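
Putting the flow together, a rough sketch (the instance ID is the placeholder used above, and the instance must be stopped before the attribute changes):

# on the instance itself
modinfo ena            # is the ENA driver available?
ethtool -i eth0        # is the ena module the active driver?

# from a machine with the AWS CLI configured
aws ec2 stop-instances --instance-ids i-040b1236aXXXXX
aws ec2 modify-instance-attribute --instance-id i-040b1236aXXXXX --ena-support
aws ec2 modify-instance-attribute --instance-id i-040b1236aXXXXX --instance-type "{\"Value\": \"c5.xlarge\"}"
aws ec2 start-instances --instance-ids i-040b1236aXXXXX
aws ec2 describe-instances --instance-ids i-040b1236aXXXXX --query 'Reservations[].Instances[].EnaSupport'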
When we get the error /bin/bash^M: bad interpreter, it means the script was created on a Windows system with a Windows editor.

(This is the classic "Windows-made script run in Linux: /bin/bash^M: bad interpreter: No such file or directory" problem.)

In that case run the command below to strip the carriage returns.

Run the following command in a terminal
sed -i -e 's/\r$//' scriptname.sh
Then try
./scriptname.sh
It should work.
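
If the dos2unix utility is installed (it may need a yum install dos2unix first), the same fix can be done with:

dos2unix scriptname.sh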
Wednesday, September 27, 2017

Schedule Automated Amazon EBS Snapshots Using CloudWatch Events

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/TakeScheduledSnapshot.html

Step 1: Create a Rule

Create a rule that takes snapshots on a schedule. You can use a rate expression or a cron expression to specify the schedule. For more information, see Schedule Expressions for Rules.
To create a rule
  1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
  2. In the navigation pane, choose Events, Create rule.
  3. For Event Source, do the following:
    1. Choose Schedule.
    2. Choose Fixed rate of and specify the schedule interval (for example, 5 minutes). Alternatively, choose Cron expression and specify a cron expression (for example, every 15 minutes Monday through Friday, starting at the current time).
  4. For Targets, choose Add target and then select EC2 Create Snapshot API call.
  5. For Volume ID, choose an EBS volume.
  6. Choose Configure details.
  7. For Rule definition, type a name and description for the rule.
  8. For AWS permissions, choose the option to create a new role. This opens the IAM console in a new tab. The new role grants the built-in target permissions to access resources on your behalf. Choose Allow. The tab with the IAM window closes.
  9. Choose Create rule.     
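
The schedule itself (step 3) can also be created from the AWS CLI; a minimal sketch (the rule names are arbitrary, and the snapshot target from steps 4 and 5 still has to be attached, e.g. via the console):

aws events put-rule --name daily-ebs-snapshot --schedule-expression "rate(1 day)"
# or, every 15 minutes Monday through Friday:
aws events put-rule --name weekday-ebs-snapshot --schedule-expression "cron(0/15 * ? * MON-FRI *)"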
Friday, August 18, 2017

AWS S3 Bucket access policy

AWS ELB access log S3 bucket policy (127311923021 is the AWS Elastic Load Balancing account ID; use the ID for your region)

{
    "Version": "2012-10-17",
    "Id": "AWSConsole-AccessLogs-Policy-1503036723495",
    "Statement": [
        {
            "Sid": "AWSConsoleStmt-1503036723495",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::127311923021 (127 is AWS Loadbalancer Account ID):root"
            },
            "Action": "s3:PutObject",
         "Resource": ["arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*",
                         "arn:aws:s3:::S3 Bucket Name/foldername/AWSLogs/AWS Your Account ID/*"
          ]       
        }
    ]
}

http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html
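
Once the bucket policy is in place, access logging itself can be enabled from the console (see the link above) or, as a sketch with the classic ELB CLI (the load balancer name, bucket name, and prefix are placeholders):

aws elb modify-load-balancer-attributes --load-balancer-name my-loadbalancer --load-balancer-attributes "{\"AccessLog\":{\"Enabled\":true,\"EmitInterval\":60,\"S3BucketName\":\"S3 Bucket Name\",\"S3BucketPrefix\":\"foldername\"}}"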





AWS S3 bucket public read-only access policy

{
    "Version": "2008-10-17",
    "Id": "Policy1380877762691",
    "Statement": [
        {
            "Sid": "Stmt1380877761162",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::Bucketname/*"
        }
    ]
}
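
To apply such a policy from the CLI instead of the console, assuming the JSON above is saved as public-read-policy.json:

aws s3api put-bucket-policy --bucket Bucketname --policy file://public-read-policy.json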


S3 bucket policy for copying a bucket from one account to another

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::remote aws account number:root",
                    "arn:aws:iam::working aws account number :user/aws user"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::Bucketname",
                "arn:aws:s3:::Bucketname/*"
            ]
        }
    ]
}
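
With this policy on the bucket, the user granted access above can then copy the data, for example with a sync (the destination bucket name is a placeholder):

aws s3 sync s3://Bucketname s3://destination-bucket --acl bucket-owner-full-control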
Open your VPC dashboard, go to the “Network ACLs” view, and choose the ACL's rules.

         1.    Select the subnet of your EC2 instances
         2.    Click “Inbound Rules”
         3.    Click “Edit”
         4.    Add a rule to block the traffic

   While setting up the rules, keep in mind the points below, which explain how they work:

         1.    Rule: Use any number less than 100, 100 is the number of the default accept-all rule. This is important because rules are evaluated in order, and your rule needs to come before the default.
         2.    Type: Select “All traffic”
         3.    Protocol: Locked to “ALL”
         4.    Source: The CIDR you want to block. To match a single IP address, enter it here and append /32. For example, I blocked 49.212.52.94/32
         5.    Select “DENY”

    Now click Save and you should see the updated rules.
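
    The same deny rule can be added from the AWS CLI; a sketch (the ACL ID is a placeholder, and rule number 90 keeps it ahead of the default rule 100):

    aws ec2 create-network-acl-entry --network-acl-id acl-0123456789abcdef0 --ingress --rule-number 90 --protocol -1 --rule-action deny --cidr-block 49.212.52.94/32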
    
While searching for a way to block traffic through the AWS web portal, you will find lots of articles saying it isn't possible because security group rules in AWS only support whitelisting. Network ACLs do support deny rules, however, so this level of control may be a relatively recent addition to AWS.
At the bucket level, click on Properties, expand Permissions, then select Add bucket policy. Paste the generated policy (shown below) into the editor and hit Save.


{
    "Version": "2008-10-17",
    "Id": "Policy1380877762691",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::Bucketname/*"
        }
    ]
}
Thursday, July 20, 2017

How to upgrade Nagios 4.2.2 to Nagios 4.3.2 step by step

Back up the current nagios directory

cd /usr/local
tar -zcvf nagios-backup.tar.gz nagios

cd /tmp
wget https://assets.nagios.com/downloads/nagioscore/releases/nagios-4.3.2.tar.gz
tar -zxvf  nagios-4.3.2.tar.gz
cd nagios-4.3.2

./configure --with-command-group=nagcmd
make all
make install

Once the above process is completed run the verification check on the upgrade to make sure there are no errors reported.

# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

 It may give the warnings below; resolve each one by executing the command that follows it.

WARNING: The normal_check_interval attribute is deprecated and will be removed in future versions. Please use check_interval instead.

To resolve, execute: cd /usr/local/nagios/etc/
sed -i -e 's/normal_check_interval/check_interval/g' *.cfg

WARNING: The retry_check_interval attribute is deprecated and will be removed in future versions. Please use retry_interval instead.

To resolve, execute: cd /usr/local/nagios/etc/
sed -i -e 's/retry_check_interval/retry_interval/g' *.cfg

Verify once again via /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Restart Nagios; it will show the updated version.

If you face any issue, just remove the /usr/local/nagios directory, restore the tar backup, and restart Nagios.
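
A possible restore sequence, assuming the backup archive created at the start and that Nagios is managed with the usual service command:

cd /usr/local
rm -rf nagios
tar -zxvf nagios-backup.tar.gz
service nagios restart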

In this scenario you have upgraded the plugin on the Nagios server but the client has not been upgraded (hostname in this example).

You execute this command on the nagios server:
/usr/local/nagios/libexec/check_nrpe -H hostname

This is the result from running the command:
NRPE v2.15

On the NRPE v2 Client you will see the following logged per connection attempt:
Jul  5 08:16:22 hostname nrpe[11030]: Error: Request packet type/version was invalid!
Jul  5 08:16:22 hostname nrpe[11030]: Client request was invalid, bailing out...
Jul  5 08:17:21 hostname nrpe[11613]: Error: Request packet type/version was invalid!
Jul  5 08:17:21 hostname nrpe[11613]: Client request was invalid, bailing out...
Jul  5 08:17:44 hostname nrpe[11808]: Error: Request packet type/version was invalid!
Jul  5 08:17:44 hostname nrpe[11808]: Client request was invalid, bailing out...


On the Nagios server with the v3 plugin you will see the following logged per connection attempt:
Jun 24 16:42:04 fbsd01 check_nrpe: Remote 10.25.13.30 does not support Version 3 Packets
Jun 24 16:42:06 fbsd01 check_nrpe: Remote 10.25.13.30 accepted a Version 2 Packet

When the NRPE v3 plugin first establishes a connection, it tries with a v3 packet, which the older client rejects. Upon receiving the rejection, the plugin retries with a v2 packet. This request succeeds; however, errors are produced in the logs on both the client and the Nagios server.
The options you have to stop the errors are:
    Upgrade the client to v3
        This will stop the errors
    Force the plugin to send v2 packets
        Using the -2 argument will force the plugin to connect with v2 packets
        /usr/local/nagios/libexec/check_nrpe -2 -H hostname

On the Nagios server, you will need to update your Nagios command and service definitions to include -2 so the plugin and the client can communicate.

In /usr/local/nagios/etc/checkcommands.cfg, update all check_nrpe invocations to check_nrpe -2.
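
A typical updated definition in checkcommands.cfg might look like this (assuming the standard $USER1$ plugin path macro):

define command {
    command_name check_nrpe
    command_line $USER1$/check_nrpe -2 -H $HOSTADDRESS$ -c $ARG1$
}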
Thursday, May 25, 2017

How to revert yum update

How to revert a yum update if you face any issue

yum history will show when the last yum update was done, along with each transaction's ID, date & time, and action (E = erase, I = install, U = update).

To undo the changes:

yum history undo id number
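
For example (42 here is just a sample transaction ID taken from the yum history listing):

yum history                # list transactions with ID, date/time, and action
yum history info 42        # inspect what transaction 42 changed
yum history undo 42        # roll back transaction 42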
Tuesday, April 11, 2017

Git helpful commands



What is sparse checkout? You basically tell Git to exclude a certain set of files from the working tree. Those files will still be part of the repository, but they won't show up in your working directory.

Internally, sparse checkout uses the skip-worktree flag to mark all the excluded files as always updated. 
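
To see which files a repository currently has marked with skip-worktree, a quick check like this should work (git ls-files -v lists skip-worktree entries with an uppercase S):

git ls-files -v | grep '^S'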


From then on we can pull only the required directory onto that particular server via git pull. For example, we will pull only the web/ directory on the web server by executing the commands below.


mkdir configurations
cd  configurations
git init
git remote add -f origin https://github.com/configurations.git
git config core.sparseCheckout true
echo "web/" > .git/info/sparse-checkout
git pull origin master
git add files
git commit -m 'your comment'
git push --set-upstream origin master
git push


To merge remote changes when you have local changes (stash first, then pull and reapply):
git stash
git pull
git stash apply
git add
git commit
git push

To keep the server's version of a file (discard local changes):
git checkout filename

git pull        fetch and merge data from the remote
git diff        check the difference between files
git push        push data (commits) to the repo
git log         show all commits