Blog

Creating an AMI Image as a backup with Python

We all know the importance of having current backups. Let’s take a look at programmatically selecting a server based on its Name tag (in my case I decided to back up the private Git server we set up previously).

We can also use a similar setup to create load-balanced servers for our web apps, or to build reusable base images, much as we would with Docker.

Let’s do our imports. I decided to use boto (version 2) because it makes sorting through instance tags straightforward.

#!/usr/bin/python
# -*- coding: utf-8 -*-
# import our dependencies
import os
import uuid

from boto.ec2 import connect_to_region


class AMICreation(object):

    def __init__(self):
        # this section is for Windows users storing their keys in environment variables
        # os.environ['AWS_ACCESS_KEY_ID']
        # os.environ['AWS_SECRET_ACCESS_KEY']
        # os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'

        self.connection = connect_to_region('us-west-2')
        # now we gather every instance from every reservation
        self.instances = [i for r in self.connection.get_all_instances()
                          for i in r.instances]
        self.has_error = 'no'
        self.instance = None

    # create an AMI from the given instance
    def create_image_id(self, instance=None, description=None,
                        no_reboot=False, ami_name=None):
        # name our image
        image_id = instance.create_image(ami_name, description=description,
                                         no_reboot=no_reboot)

        if 'ami' in image_id:
            print image_id
            return 'success'
        else:
            return 'failure'

    def find_instance_id_and_create(self, servername, description, no_reboot):
        # iterate through our instances and find the Name tag matching "gitserver"
        for i in self.instances:
            if 'Name' in i.tags:
                state = i.state
                name = i.tags['Name']
                instance_id = str(i.id)
                print name, state, instance_id

                if name.lower() == servername.lower():
                    ami_name = servername.lower() + '-' \
                        + str(uuid.uuid4().fields[-1])[:5]
                    status = self.create_image_id(i, str(description),
                                                  no_reboot, str(ami_name))

                    if status == 'success':
                        self.has_error = 'no'
                    else:
                        self.has_error = 'yes'
                    return self
                else:
                    self.has_error = 'no instances named %s' \
                        % servername.lower()


AMICreator = AMICreation()

AMICreator.find_instance_id_and_create('gitserver',
    'this is a git server backup base image', False)

if str(AMICreator.has_error) == 'no':
    print 'success'
else:
    print AMICreator.has_error

We have now created a backup image of our EC2 instance, so we can easily return the server to this point in time whenever we need to.
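
If we ever need to restore, we launch a new instance from that AMI. Here is a minimal sketch using the same boto connection; the image ID, key pair name, and security group are placeholders you would replace with your own values:

from boto.ec2 import connect_to_region

connection = connect_to_region('us-west-2')
# launch a fresh instance from the backup AMI (placeholder values below)
reservation = connection.run_instances(
    'ami-xxxxxxxx',              # the image ID printed by the backup script
    key_name='my-keypair',       # an existing key pair in the region
    instance_type='t2.micro',
    security_groups=['default'],
)
print reservation.instances[0].id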


List of Linux Bash Commands for your reference

a
  alias    Create an alias •
  apropos  Search Help manual pages (man -k)
  apt-get  Search for and install software packages (Debian/Ubuntu)
  aptitude Search for and install software packages (Debian/Ubuntu)
  aspell   Spell Checker
  awk      Find and Replace text, database sort/validate/index
b
  basename Strip directory and suffix from filenames
  bash     GNU Bourne-Again SHell 
  bc       Arbitrary precision calculator language 
  bg       Send to background
  bind     Set or display readline key and function bindings •
  break    Exit from a loop •
  builtin  Run a shell builtin
  bzip2    Compress or decompress named file(s)
c
  cal      Display a calendar
  case     Conditionally perform a command
  cat      Concatenate and print (display) the content of files
  cd       Change Directory
  cfdisk   Partition table manipulator for Linux
  chgrp    Change group ownership
  chmod    Change access permissions
  chown    Change file owner and group
  chroot   Run a command with a different root directory
  chkconfig System services (runlevel)
  cksum    Print CRC checksum and byte counts
  clear    Clear terminal screen
  cmp      Compare two files
  comm     Compare two sorted files line by line
  command  Run a command - ignoring shell functions •
  continue Resume the next iteration of a loop •
  cp       Copy one or more files to another location
  cron     Daemon to execute scheduled commands
  crontab  Schedule a command to run at a later time
  csplit   Split a file into context-determined pieces
  curl     Transfer data  from or to a server
  cut      Divide a file into several parts
d
  date     Display or change the date & time
  dc       Desk Calculator
  dd       Convert and copy a file, write disk headers, boot records
  ddrescue Data recovery tool
  declare  Declare variables and give them attributes •
  df       Display free disk space
  diff     Display the differences between two files
  diff3    Show differences among three files
  dig      DNS lookup
  dir      Briefly list directory contents
  dircolors Colour setup for `ls'
  dirname  Convert a full pathname to just a path
  dirs     Display list of remembered directories
  dmesg    Print kernel & driver messages 
  du       Estimate file space usage
e
  echo     Display message on screen •
  egrep    Search file(s) for lines that match an extended expression
  eject    Eject removable media
  enable   Enable and disable builtin shell commands •
  env      Environment variables
  ethtool  Ethernet card settings
  eval     Evaluate several commands/arguments
  exec     Execute a command
  exit     Exit the shell
  expect   Automate arbitrary applications accessed over a terminal
  expand   Convert tabs to spaces
  export   Set an environment variable
  expr     Evaluate expressions
f
  false    Do nothing, unsuccessfully
  fdformat Low-level format a floppy disk
  fdisk    Partition table manipulator for Linux
  fg       Send job to foreground 
  fgrep    Search file(s) for lines that match a fixed string
  file     Determine file type
  find     Search for files that meet a desired criteria
  fmt      Reformat paragraph text
  fold     Wrap text to fit a specified width.
  for      Expand words, and execute commands
  format   Format disks or tapes
  free     Display memory usage
  fsck     File system consistency check and repair
  ftp      File Transfer Protocol
  function Define Function Macros
  fuser    Identify/kill the process that is accessing a file
g
  gawk     Find and Replace text within file(s)
  getopts  Parse positional parameters
  grep     Search file(s) for lines that match a given pattern
  groupadd Add a user security group
  groupdel Delete a group
  groupmod Modify a group
  groups   Print group names a user is in
  gzip     Compress or decompress named file(s)
h
  hash     Remember the full pathname of a name argument
  head     Output the first part of file(s)
  help     Display help for a built-in command •
  history  Command History
  hostname Print or set system name
  htop     Interactive process viewer
i
  iconv    Convert the character set of a file
  id       Print user and group id's
  if       Conditionally perform a command
  ifconfig Configure a network interface
  ifdown   Stop a network interface 
  ifup     Start a network interface up
  import   Capture an X server screen and save the image to file
  install  Copy files and set attributes
  ip       Routing, devices and tunnels
j
  jobs     List active jobs •
  join     Join lines on a common field
k
  kill     Kill a process by specifying its PID
  killall  Kill processes by name
l
  less     Display output one screen at a time
  let      Perform arithmetic on shell variables •
  link     Create a link to a file 
  ln       Create a symbolic link to a file
  local    Create variables •
  locate   Find files
  logname  Print current login name
  logout   Exit a login shell •
  look     Display lines beginning with a given string
  lpc      Line printer control program
  lpr      Off line print
  lprint   Print a file
  lprintd  Abort a print job
  lprintq  List the print queue
  lprm     Remove jobs from the print queue
  ls       List information about file(s)
  lsof     List open files
m
  make     Recompile a group of programs
  man      Help manual
  mkdir    Create new folder(s)
  mkfifo   Make FIFOs (named pipes)
  mkisofs  Create an hybrid ISO9660/JOLIET/HFS filesystem
  mknod    Make block or character special files
  more     Display output one screen at a time
  most     Browse or page through a text file
  mount    Mount a file system
  mtools   Manipulate MS-DOS files
  mtr      Network diagnostics (traceroute/ping)
  mv       Move or rename files or directories
  mmv      Mass Move and rename (files)
n
  nc       Netcat, read and write data across networks
  netstat  Networking information
  nice     Set the priority of a command or job
  nl       Number lines and write files
  nohup    Run a command immune to hangups
  notify-send  Send desktop notifications
  nslookup Query Internet name servers interactively
o
  open     Open a file in its default application
  op       Operator access 
p
  passwd   Modify a user password
  paste    Merge lines of files
  pathchk  Check file name portability
  ping     Test a network connection
  pkill    Kill processes by a full or partial name.
  popd     Restore the previous value of the current directory
  pr       Prepare files for printing
  printcap Printer capability database
  printenv Print environment variables
  printf   Format and print data •
  ps       Process status
  pushd    Save and then change the current directory
  pv       Monitor the progress of data through a pipe 
  pwd      Print Working Directory
q
  quota    Display disk usage and limits
  quotacheck Scan a file system for disk usage
r
  ram      ram disk device
  rar      Archive files with compression
  rcp      Copy files between two machines
  read     Read a line from standard input •
  readarray Read from stdin into an array variable •
  readonly Mark variables/functions as readonly
  reboot   Reboot the system
  rename   Rename files
  renice   Alter priority of running processes 
  remsync  Synchronize remote files via email
  return   Exit a shell function
  rev      Reverse lines of a file
  rm       Remove files
  rmdir    Remove folder(s)
  rsync    Remote file copy (Synchronize file trees)
s
  screen   Multiplex terminal, run remote shells via ssh
  scp      Secure copy (remote file copy)
  sdiff    Merge two files interactively
  sed      Stream Editor
  select   Accept keyboard input
  seq      Print numeric sequences
  set      Manipulate shell variables and functions
  sftp     Secure File Transfer Program
  shift    Shift positional parameters
  shopt    Shell Options
  shutdown Shutdown or restart linux
  sleep    Delay for a specified time
  slocate  Find files
  sort     Sort text files
  source   Run commands from a file '.'
  split    Split a file into fixed-size pieces
  ssh      Secure Shell client (remote login program)
  stat     Display file or file system status 
  strace   Trace system calls and signals
  su       Substitute user identity
  sudo     Execute a command as another user
  sum      Print a checksum for a file
  suspend  Suspend execution of this shell •
  sync     Synchronize data on disk with memory
t
  tail     Output the last part of file
  tar      Store, list or extract files in an archive
  tee      Redirect output to multiple files
  test     Evaluate a conditional expression
  time     Measure Program running time
  timeout  Run a command with a time limit
  times    User and system times
  touch    Change file timestamps
  top      List processes running on the system
  tput     Set terminal-dependent capabilities, color, position
  traceroute Trace Route to Host
  trap     Run a command when a signal is set(bourne)
  tr       Translate, squeeze, and/or delete characters
  true     Do nothing, successfully
  tsort    Topological sort
  tty      Print filename of terminal on stdin
  type     Describe a command •
u
  ulimit   Limit user resources •
  umask    Users file creation mask
  umount   Unmount a device
  unalias  Remove an alias •
  uname    Print system information
  unexpand Convert spaces to tabs
  uniq     Uniquify files
  units    Convert units from one scale to another
  unrar    Extract files from a rar archive 
  unset    Remove variable or function names
  unshar   Unpack shell archive scripts
  until    Execute commands (until error)
  uptime   Show uptime
  useradd  Create new user account
  userdel  Delete a user account
  usermod  Modify user account
  users    List users currently logged in
  uuencode Encode a binary file 
  uudecode Decode a file created by uuencode
v
  v        Verbosely list directory contents (`ls -l -b')
  vdir     Verbosely list directory contents (`ls -l -b')
  vi       Text Editor
  vmstat   Report virtual memory statistics
w
  wait     Wait for a process to complete •
  watch    Execute/display a program periodically
  wc       Print byte, word, and line counts
  whereis  Search the user's $path, man pages and source files for a program
  which    Search the user's $path for a program file
  while    Execute commands
  who      Print all usernames currently logged in
  whoami   Print the current user id and name (`id -un')
  wget     Retrieve web pages or files via HTTP, HTTPS or FTP
  write    Send a message to another user 
x
  xargs    Execute utility, passing constructed argument list(s)
  xdg-open Open a file or URL in the user's preferred application.
  xz       Compress or decompress .xz and .lzma files
y
  yes      Print a string until interrupted
z
  zip      Package and compress (archive) files.
  .        Run a command script in the current shell
  !!       Run the last command again
  ###      Comment / Remark

Commands marked • are bash built-ins
Many commands, particularly the Core Utils, are also available under alternate shells (C shell, Korn shell, etc.).

Setting up a Private Git Server

Git is a version control system used by millions of people around the world. Created by Linus Torvalds in April 2005, Git is now used for over 21.8 million repositories.

"Why not just use GitHub?" was the first question I asked myself when considering whether to write this article. GitHub, along with other hosted repository services, usually allows only a few private repositories. This poses a dilemma for the little guy: should we pay for more private repos, spread our repositories across multiple services, or host our own private Git server?

There are benefits to hosting your own Git server: unlimited private repositories and finer control over user and group privileges, to name a couple. Now that we have looked at the available options and weighed the pros and cons, maybe you have decided to host your own Git server.

First things first: which open source Git server should we use? I decided on GitLab, which is open source, readily available, and comes with a web-based GUI.

Before we install GitLab, I recommend installing Postfix and setting up an SMTP mail server so that GitLab can send emails when needed.
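
If you have not installed Postfix yet, the base install on Ubuntu is a single package; a minimal sketch for Ubuntu 14.04 (choose the "Internet Site" option when the configuration prompt appears):

sudo apt-get update
sudo apt-get install -y postfix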

Assuming you have already installed and set up Postfix, let’s move on to GitLab.

Download the packages using wget. Then install the package:

wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.9.4-omnibus.1-1_amd64.deb
sudo dpkg -i gitlab_7.9.4-omnibus.1-1_amd64.deb

Now we need to configure GitLab:

sudo nano /etc/gitlab/gitlab.rb

Edit the ‘external_url’ setting to point at your server’s domain name and save the file, then apply the configuration:

sudo gitlab-ctl reconfigure


In your web browser, open your GitLab site and log in with ‘root’ as the username and ‘5iveL!fe’ as the password. Change the password after your first login, for obvious security reasons.
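
With the server up, you can point an existing local repository at it and push. A minimal sketch, assuming a hypothetical project created under the root account at git.example.com:

git remote add origin git@git.example.com:root/myproject.git
git push -u origin master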

Thank you for using this quick and simple guide to installing and setting up your own private Git server.

Automating ELK Stack Installation

Last time we installed an ELK stack on AWS. Today let’s write an automation script in Python 2.7 that handles the installation of an ELK server for us.

Let’s import the necessary modules.

import os
import boto3

We set up our access keys using environment variables so we don’t accidentally publish this information to a public repository. Then we set the region where we want our AWS EC2 instance to run.

os.environ["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"]
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

Let’s create the bash script that we will pass to our instance once it is created. Our bash commands need to install Java, add the repositories for Elasticsearch, Logstash, and Kibana, install each component, start the services, and edit the configuration files.

#Bash commands for installing the ELK stack
userdata = """#!/bin/bash
sudo su
cd ~
wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u73-b02/jdk-8u73-linux-x64.rpm"
yum -y localinstall jdk-8u73-linux-x64.rpm
rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
#create a new repo for elasticsearch, using rawgit.com to create a downloadable link to the file needed
wget "https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/elasticsearch.repo" -P /etc/yum.repos.d/
#install elasticsearch
yum -y install elasticsearch
#edit the elasticsearch config (append network.host to the end of the file)
sed -i '$a network.host: localhost' /etc/elasticsearch/elasticsearch.yml
service elasticsearch start
#enable elasticsearch at boot
chkconfig elasticsearch on
#kibana
#add the kibana repo, using rawgit.com to create a downloadable link to the file needed
wget 'https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/kibana.repo' -P /etc/yum.repos.d/
#install kibana
yum -y install kibana
#edit the kibana config (append server.host to the end of the file)
sed -i '$a server.host: "localhost"' /opt/kibana/config/kibana.yml
#start kibana
service kibana start
#logstash
#add the logstash repo
wget 'https://cdn.rawgit.com/lanerjo/aws_ELK_stack_launcher/master/logstash.repo' -P /etc/yum.repos.d/
#install logstash
yum -y install logstash
service logstash start
"""

Finally, we create and start our instance, pass it our bash script, and print the information we need about our server.

#creating the ec2 instance on AWS using a predefined security group, t2.micro size, and an Amazon Linux machine image
ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-7172b611',
    InstanceType='t2.micro',
    KeyName='AWS_Testing',
    MinCount=1,
    MaxCount=1,
    SecurityGroups=['Jenkins'],
    UserData=userdata,
)
#start the instance and print the instance id, state, public dns, and public ip
for instance in instances:
    print("Waiting until running...")
    instance.wait_until_running()
    instance.reload()
    print((instance.id, instance.state, instance.public_dns_name,
           instance.public_ip_address))

Running this script from the command line will kick off our automated ELK stack installation on a new AWS EC2 instance.
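
When you are finished experimenting, you can tear the test server down from the same session so it stops accruing charges; a minimal sketch using the boto3 Instance objects returned above:

for instance in instances:
    # stop billing for the test server once we are done with it
    instance.terminate()
    instance.wait_until_terminated()
    print((instance.id, 'terminated'))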

Up next: creating our own private Git server.

Setting up an ELK Stack on AWS

"ELK stack" was a new term to me before I undertook this process, and like any new task it seemed overwhelming at first.

ELK stands for Elasticsearch, Logstash, and Kibana. Elasticsearch is a NoSQL data store that allows NRT (near real time) queries. Kibana offers a nice interactive interface for analyzing the data stored in Elasticsearch. Logstash is the pipeline that collects and processes log data and ships it into Elasticsearch.

ELK has a large open source community, which makes this set of utilities quite popular. There are plenty of guides out there and the documentation is helpful. This article will not cover using an ELK stack in a production environment; we will be setting up a test stack and getting familiar with the process. Adapting this process for a production environment, however, would not require many changes.

Getting Started:

Every component of our ELK stack requires Java. Let’s get busy and start setting up Java on an Ubuntu AWS instance via SSH and shell commands. Make sure you have root access: sudo su

Installing Java:


  1. apt-get update
  2. apt-get upgrade
  3. apt-get install openjdk-7-jre-headless

Installing Elasticsearch:


  1. wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  2. echo "deb http://packages.elastic.co/elasticsearch/1.7/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-1.7.list
  3. apt-get update
  4. apt-get install elasticsearch
  5. service elasticsearch restart
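
To confirm Elasticsearch came up, query its HTTP API on port 9200; assuming the default configuration it returns a small block of JSON with the cluster name and version:

curl http://localhost:9200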

Installing Logstash:


  1. echo "deb http://packages.elasticsearch.org/logstash/1.5/debian stable main" | sudo tee -a /etc/apt/sources.list
  2. apt-get update
  3. apt-get install logstash
  4. service logstash start

Create a config file for Logstash:


vi /etc/logstash/conf.d/10-syslog.conf

input {
  file {
    type => "syslog"
    path => [ "/var/log/messages", "/var/log/*.log" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    host => "localhost" # use the internal IP of your Elasticsearch server for production
  }
}

Save and quit (:wq), then restart Logstash:

service logstash restart
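
Once Logstash is running, you can check that events are actually reaching Elasticsearch by listing its indices; with this configuration the Logstash-created indices normally show up with a logstash- prefix (assuming Elasticsearch is on the same host):

curl 'http://localhost:9200/_cat/indices?v'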


Kibana Installation:


  1. wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
  2. tar -xzf kibana-4.1.1-linux-x64.tar.gz
  3. mkdir -p /opt/kibana
  4. mv kibana-4.1.1-linux-x64/* /opt/kibana
  5. cd /etc/init.d && sudo wget https://raw.githubusercontent.com/akabdog/scripts/master/kibana4_init -O kibana4
  6. chmod +x /etc/init.d/kibana4
  7. service kibana4 start

Testing our installs:

Point your browser to http://YOUR_ELASTIC_IP:5601 once Kibana has started.

Using Python to Automate Jenkins Install on AWS EC2 Instance

One of the main goals for a DevOps professional is automation. This week I was given a “simple” task: write a script that would log in to AWS, create an instance, and install Jenkins.

Why would I want to do all that work when there are GUIs to assist with this process?

Automation is the key: when you are faced with repetitive tasks, automation just makes sense.

For the purposes of this tutorial it is assumed that you already have an AWS account, an access key ID and secret access key, a security group already set up to accept incoming traffic on port 8080, Python 2.7, and Boto3 installed.


import os

import boto3


First we need to make our imports. Creating an instance requires the os module and a Boto module; I decided to use boto3.


# note, later I made these system environment calls so they
# aren't accidentally published in a public repository.
os.environ["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"]
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"

This sets the access key, secret key, and default region. You can change the region to whatever you need.
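
If you would rather keep the keys out of the script entirely, you can export them in your shell before running it; boto3 picks these variables up automatically (placeholder values shown):

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-west-2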


userdata = """#!/bin/bash
yum update -y
wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
rpm --import http://pkg.jenkins-ci.org/redhat-stable/jenkins-ci.org.key
yum install jenkins -y
service jenkins start
chkconfig jenkins on

"""

Now we pass our bash script through userdata. We start by updating yum. The next two lines add the Jenkins repository. Next we install Jenkins. Finally we start Jenkins as a service and enable it at boot with chkconfig.
Important note: any command that would normally prompt for user input needs that input supplied up front, e.g. yum requires -y.


ec2 = boto3.resource('ec2')
instances = ec2.create_instances(
    ImageId='ami-7172b611',
    InstanceType='t2.micro',
    KeyName='AWS_Testing',
    MinCount=1,
    MaxCount=1,
    SecurityGroups=['Jenkins'],
    UserData=userdata
)

Now we define the instance to be created. You may use a different AMI, key name, or security group; if you are not sure which AMI ID is current in your region, the lookup sketch below shows one way to find it.
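
A minimal sketch of that lookup using the boto3 EC2 client; the Amazon Linux name filter is an assumption based on Amazon's naming convention and may need adjusting for your region and image family:

import boto3

client = boto3.client('ec2')
response = client.describe_images(
    Owners=['amazon'],
    Filters=[{'Name': 'name', 'Values': ['amzn-ami-hvm-*-x86_64-gp2']}],
)
# sort by creation date and take the newest image
latest = sorted(response['Images'], key=lambda image: image['CreationDate'])[-1]
print((latest['ImageId'], latest['Name']))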


for instance in instances:
    print("Waiting until running...")
    instance.wait_until_running()
    instance.reload()
    print((instance.id, instance.state, instance.public_dns_name,
           instance.public_ip_address))

Now we put everything together and return information about our instance.

Put everything together and test it out. Don’t forget to SSH into the instance and get the Jenkins default password.
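
For example, something along these lines; the key file name matches the KeyName used above, ec2-user is the default Amazon Linux user, and the password file path applies to Jenkins 2.x and later, so treat it as an assumption for older releases:

ssh -i AWS_Testing.pem ec2-user@<instance-public-dns>
# once logged in, print the initial admin password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword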

The next task given to me is to set up an ELK Stack… Stay tuned.

Developing A Basic Understanding

The IT industry is a constantly changing environment; "adapt and survive, or else" should be the motto. When I started in computers and technology, a state-of-the-art machine was a Franklin PC 5000, which included dual 5.25" floppy drives, 64K of RAM, and a VGA monitor, and ran the Disk Operating System (DOS). BASIC was the language to learn.

It is truly amazing how fast computers have changed since then. Recently, I decided to take up learning Python 2.7 in pursuit of a career in the DevOps field. I have done this through http://www.codeacademy.com and http://learnpythonthehardway.org. Both of these resources are excellent for learning the fundamentals.

DevOps is a newer discipline that applies Agile and Lean methodologies to bring development and operations together. There is the "CAMS" (Culture, Automation, Measurement and Sharing) acronym popularized by John Willis and Damon Edwards. When you think DevOps, think continuous, think automated, think security, think network.

After much research, I have put together a list of tools that I will show you how to install, set up, and use. Follow along and maybe you will learn something useful along the way.