Tuesday 18 July 2017

Jenkins -5

                                                                Jenkins Slave Setup-2
Jenkins slave setup can be done in the following way:

Click Manage Jenkins --> Manage Nodes --> New Node --> give the node a name, select the "Dumb Slave" option, and click OK.
Now we have to give more details about the slave: its name and description, and the number of executors, i.e. the number of jobs the slave can run in parallel (the default is 1, and we can change it).
Set the remote root directory to /home/jenkins, then fill in Labels, Usage, Launch method, etc. as below.





After selecting "Launch slave agents via SSH" as the launch method, you will get a few more options: enter the slave host, choose the credentials (the SSH key from the master's .ssh directory), then click Add and save.
Your node will be added to Jenkins.
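
Before adding the node, you can do a quick sanity check from the master that the slave's remote root directory exists and is reachable over SSH. This is just a sketch, assuming the slave host is server2.abc.com (as in Jenkins Slave Setup-1) and the jenkins user already has key-based SSH access:

# run on the master as the jenkins user
ssh jenkins@server2.abc.com 'mkdir -p /home/jenkins && echo "remote root directory ready"'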

Friday 7 July 2017

Puppet-2


Puppet-2

Puppet, from Puppet Labs, is a configuration management tool that helps system administrators automate the provisioning, configuration, and management of a server infrastructure. Planning ahead and using config management tools like Puppet can cut down on time spent repeating basic tasks, and help ensure that your configurations are consistent and accurate across your infrastructure. Once you get the hang of managing your servers with Puppet and other automation tools, you will free up time which can be spent improving other aspects of your overall setup.

Puppet comes in two varieties, Puppet Enterprise and open source Puppet. It runs on most Linux distributions, various UNIX platforms, and Windows.

In this tutorial, we will cover how to install open source Puppet in an Agent/Master setup. This setup consists of a central Puppet Master server, where all of your configuration data will be managed and distributed from, and all your remaining servers will be Puppet Agent nodes, which can be configured by the puppet master server.

Prerequisites


To follow this tutorial, you must have root access to all of the servers that you want to configure Puppet with. You will also be required to create a new Ubuntu 14.04 VPS to act as the Puppet master server. If you do not have an existing infrastructure, feel free to recreate the example infrastructure (described below) by following the prerequisite DNS setup tutorial.

Create Puppet Master Server


Create a new Ubuntu 14.04 x64 VPS, using "puppet" as its hostname. Add its private network to your DNS with the following details:
Hostname    Role             Private FQDN
puppet      Puppet Master    puppet.nyc2.example.com
If you just set up your DNS and are unsure how to add your host to DNS, refer to the Maintaining DNS Records section of the DNS tutorial. Essentially, you need to add an "A" and "PTR" record, and allow the new host to perform recursive queries. Also, ensure that you configure your search domain so your servers can use short hostnames to look up each other.

Using "puppet" as the Puppet master's hostname simplifies the agent setup slightly, because it is the default name that agents will use when attempting to connect to the master.

Now we need to set up NTP.
Install NTP

Because it acts as a certificate authority for agent nodes, the puppet master server must maintain accurate system time to avoid potential problems when it issues agent certificates--certificates can appear to be expired if there are time discrepancies. We will use Network Time Protocol (NTP) for this purpose.

First, do a one-time time synchronization using the ntpdate command:
sudo ntpdate pool.ntp.org
Your system time will be updated, but you need to install the NTP daemon to automatically update the time to minimize time drift. Install it with the following apt command:
sudo apt-get update && sudo apt-get -y install ntp
It is common practice to update the NTP configuration to use "pool zones" that are geographically closer to your NTP server. In a web browser, go to the NTP Pool Project and look up a pool zone that is geographically close to the datacenter that you are using. We will go with the United States pool (http://www.pool.ntp.org/zone/us) in our example, because the servers are located in a New York datacenter.

Open ntp.conf for editing:
sudo vi /etc/ntp.conf
Add the time servers from the NTP Pool Project page to the top of the file:
server 0.us.pool.ntp.org
server 1.us.pool.ntp.org
server 2.us.pool.ntp.org
server 3.us.pool.ntp.org
Save and exit. Restart NTP to add the new time servers.
sudo service ntp restart
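
You can optionally verify that NTP is now talking to the pool servers (the exact peer list in the output will differ from system to system):

ntpq -p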
Now that our server is keeping accurate time, let's install the Puppet master software.

Install Puppet Master

There are a variety of ways to install open source Puppet. We will use the Debian package called puppetmaster-passenger, which is provided by Puppet Labs. The puppetmaster-passenger package includes the Puppet master plus a production-ready web server (Passenger with Apache), which eliminates a few configuration steps compared to using the basic puppetmaster package.

Download the Puppet Labs package:

cd ~; wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb


Install the package:
sudo dpkg -i puppetlabs-release-trusty.deb


Update apt's list of available packages:
sudo apt-get update

Install the puppetmaster-passenger package:
sudo apt-get install puppetmaster-passenger
The Puppet master, Passenger, Apache, and other required packages are now installed. Because we are using Passenger with Apache, the Puppet master process is controlled by Apache, i.e. it runs when Apache is running.
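
As a quick, optional check that everything came up correctly, verify that Apache is running and that the puppet binary is on the path (exact version numbers will vary):

sudo service apache2 status
puppet --version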

Devops Interview Questions-2

Devops interview questions

Here I am posting some more questions for the DevOps role, hoping they are helpful to you. Below are two sets of questions; try to read every question carefully and make sure you understand it.


SET-1
1. Explain your understanding and expertise on both the software development side and the technical operations side of an organisation you’ve worked for in the past.
2. Explain what would you check If a Linux-build-server suddenly starts getting slow.
3. How would you make software deployable?
4. How have you used SSH?
5. What are the important aspects of a system of continuous integration and deployment?
6. Describe Puppet master agent architecture. How have you implemented it in your project?
7. What testing is necessary to ensure that a new service is ready for production?
8. How does DNS work? What happens at each layer of the OSI model when a URL is entered in the browser? How does a system fork a child process?
9. Tell us about the CI tools that you are familiar with.
10. What DevOps tools have you worked with?
11. What different types of testing need to be carried out on a software system, and what tools would you use to achieve this testing?
12. How much have you interacted with cloud-based software development?
13. Discuss your experience building bridges between IT Ops, QA, and development.
14. Are you familiar with just Linux or have you worked with Windows environments as well?
15. Did you get a chance to work on Amazon tools?
16. What are some DevOps projects you’ve worked on in the past using systems automation and configuration?
17. What was your greatest achievement on a recent project?
18. What problems did you face and how did you solve them?
19. What’s your career objective in your role as a DevOps engineer?

20. Explain the achievements and technology establishments achieved by you in your previous organization.
SET-2
  1. How to scale a database without just increasing capacity of a single machine while maintaining ACID? 
  2. How to choose between relational database and noSQL? 
  3. What advantages does a NoSQL database like MongoDB have compared to MySQL? 
  4. How to manage API versions? 
  5. How to reduce load time of a dynamic website? 
  6. How to reduce load time of a static website?
  7. Difference between authorization and authentication? 
  8. Describe two-factor authentication 
  9. Describe how would you secure a web application 
  10. HTTP vs HTTPS 
  11. Talk about PKI/your experience with SSL/Certificates
  12. Difference between RAID 0, 1 and 5? 
  13. What’s the advantage of one RAID over another? 
  14. Alternative to init.d in Linux? 
  15. How to view running processes in Linux? 
  16. How to check DNS records in Linux? 
  17. Describe your experience with scripting
  18. Explain what is DevOps?
  19.  Explain which scripting language is most important for a DevOps engineer?
  20.  Explain how DevOps is helpful to developers?
  21.  List out some popular tools for DevOps?
  22.  Mention at what instance have you used the SSH?
  23.  Explain how would you handle revision (version) control?
  24.  Mention what are the types of Http requests?
  25.  Mention what are the key aspects or principle behind DevOps?
  26.  What are the core operations of DevOps with application development and with infrastructure?
This is a small set of questions on DevOps. We will add more in future posts. If you know the answers to the questions above, write them in the comments along with the question number, and we will update this blog so everyone can access them.

Docker-2

Docker Diary-2
Why do we need Docker?

The list of benefits is the following:
Faster development process. There is no need to install third-party services like PostgreSQL, Redis, or Elasticsearch locally; they can be run in containers.
Handy application encapsulation (you can deliver your application in one piece).
Same behaviour on local machine / dev / stage / production servers.
Easy and clear monitoring.
Easy to scale (if you’ve designed your application well, it will be ready to scale, and not only in Docker).
Supported platforms

Docker’s native platform is Linux, as it’s based on features provided by Linux kernel. However, you can still run it on macOS and Windows. The only difference is that on macOS and Windows Docker is encapsulated into a tiny virtual machine. At the moment Docker for macOS and Windows has reached a significant level of usability and feels more like a native app.

Moreover, there are a lot of supplementary apps, such as Kitematic or Docker Machine, which help to install and operate Docker on non-Linux platforms.
Terminology
Container — running instance that encapsulates required software. Containers are always created from images.
Container can expose ports and volumes to interact with other containers or/and outer world.
Container can be easily killed / removed and re-created again in a very short time.
Image — basic element for every container. When you create an image, every step is cached and can be reused (copy-on-write model). Depending on the image, it can take some time to build. Containers, on the other hand, can be started from images right away.
Port — a TCP/UDP port in its original meaning. To keep things simple let’s assume that ports can be exposed to the outer world (accessible from host OS) or connected to other containers — accessible only from those containers and invisible to the outer world.
Volume — can be described as a shared folder. Volumes are initialized when a container is created. Volumes are designed to persist data, independent of the container’s lifecycle.
Registry — the server that stores Docker images. It can be compared to Github — you can pull an image from the registry to deploy it locally, and you can push locally built images to the registry.
Docker hub — a registry with a web interface provided by Docker Inc. It stores a lot of Docker images with different software. Docker Hub is a source of the “official” Docker images made by the Docker team or made in cooperation with the original software manufacturer (it doesn’t necessarily mean that these “original” images are from the official software manufacturers). Official images list their potential vulnerabilities. This information is available to any logged-in user. There are both free and paid accounts available. You can have one private image per account and an unlimited number of public images for free.
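
To make the port and volume terminology concrete, here is a small, hypothetical example (the nginx image and the web_data volume name are just placeholders) that publishes a container port to the host and mounts a named volume:

# run nginx in the background, map host port 8080 to container port 80,
# and mount a named volume at the web root
docker run -d --name web -p 8080:80 -v web_data:/usr/share/nginx/html nginx

# the container is now reachable from the host
curl http://localhost:8080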

Example 1: hello world

It’s time to run your first container:
docker run ubuntu /bin/echo 'Hello world'


Console output:
Unable to find image 'ubuntu:latest' locally  
latest: Pulling from library/ubuntu  
d54efb8db41d: Pull complete  
f8b845f45a87: Pull complete  
e8db7bf7c39f: Pull complete  
9654c40e9079: Pull complete  
6d9ef359eaaa: Pull complete  
Digest: sha256:dd7808d8792c9841d0b460122f1acf0a2dd1f56404f8d1e56298048885e45535  
Status: Downloaded newer image for ubuntu:latest  
Hello world


docker run is a command to run a container.
ubuntu is the image you run, for example, the Ubuntu operating system image. When you specify an image, Docker looks first for the image on your Docker host. If the image does not exist locally, then the image is pulled from the public image registry — Docker Hub.
/bin/echo ‘Hello world’ is the command that will run inside a new container. This container simply prints Hello world and stops the execution.
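
Since the container exits as soon as echo finishes, it will not show up in docker ps; you can still see it (and the downloaded image) with:

docker ps -a      # lists all containers, including stopped ones
docker images     # lists the images pulled to this host, e.g. ubuntu:latest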


AWS-5


AWS-S3-2


Key Features

Storage and Security


• Amazon S3 stores data as objects within resources called "buckets." You can store as many objects as you want within a bucket, and write, read, and delete objects in your bucket. Objects can be up to 5
terabytes in size.
• You can control access to the bucket (who can create, delete, and retrieve objects in the bucket for
example), view access logs for the bucket and its objects, and choose the AWS region where a bucket
is stored to optimize for latency, minimize costs, or address regulatory requirements
Cross-Region Replication
• Cross-region replication (CRR) provides automated, fast, reliable data replication across AWS regions.
Every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a
different AWS region that you choose
Event Notifications
• Amazon S3 event notifications can be sent when objects are uploaded to or deleted from Amazon
S3
Versioning
• Amazon S3 allows you to enable versioning so you can preserve, retrieve, and restore every
version of every object stored in an Amazon S3 bucket.
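
As an illustration (not part of the original notes), versioning can be switched on for an existing bucket with the AWS CLI; my-example-bucket is a placeholder name:

aws s3api put-bucket-versioning --bucket my-example-bucket --versioning-configuration Status=Enabled
aws s3api get-bucket-versioning --bucket my-example-bucket    # confirm it reports "Enabled"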
Lifecycle Management
• Amazon S3 provides a number of capabilities to manage the lifecycle of your data, including
automated migration of older data from S3 Standard to S3 Standard - Infrequent Access and
Amazon Glacier
Encryption
• Amazon S3 encrypts data in transit via SSL-encrypted endpoints and can also encrypt data at rest
with three options for managing encryption keys: directly by S3, through AWS Key Management
Service (AWS KMS), or you can provide your own keys
Security and Access Management
• Amazon S3 provides several mechanisms to control and monitor who can access your data as well as how, when, and where they can access it. VPC endpoints allow you to create a secure connection
without a gateway or NAT instances
Cost Monitoring and Controls
• Amazon S3 has several features for managing and controlling your costs, including bucket
tagging to manage cost allocation and integration with Amazon CloudWatch to receive billing
alerts
Flexible Storage Options
• Amazon S3 is designed for 99.999999999% durability and up to 99.99% availability of objects over a given year.
• In addition to S3 Standard, there is a lower-cost Standard - Infrequent Access option for
infrequently accessed data, and Amazon Glacier for archiving cold data at the lowest possible cost
Time-limited Access to Objects
• Amazon S3 supports query string authentication, which allows you to provide a URL that is
valid only for a length of time that you define. This time-limited URL can be useful for scenarios such as software downloads or other applications where you want to restrict the length of time users
have access to an object
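
For example, a time-limited (pre-signed) URL can be generated with the AWS CLI (assuming a reasonably recent CLI version); the bucket and key below are placeholders, and the URL expires after the number of seconds you pass:

aws s3 presign s3://my-example-bucket/downloads/installer.zip --expires-in 3600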
Data Lifecycle Management
• Lifecycle management of data refers to how your data is managed and stored from creation and
initial storage to when it’s no longer needed and deleted
Transferring Large Amounts of Data
• AWS Import/Export accelerates moving large amounts of data into and out of AWS. AWS transfers
your data directly onto and off of storage devices using Amazon’s high-speed internal network and
bypassing the Internet.
• You can use AWS Import/Export for migrating data into the cloud, distributing content to your
customers, sending backups to AWS, and disaster recovery.
You can also use AWS Direct Connect to transfer large amounts of data to Amazon S3.

AWS-4

AWS-S3-1


Introduction to Amazon S3
Amazon S3
• Amazon Simple Storage Service (Amazon S3), provides secure, durable, highly-scalable object storage.
• Amazon S3 stores data as objects within resources called "buckets." You can store as many objects as you want within a bucket, and write, read, and delete objects in your bucket. Objects can be up to 5
terabytes in size.
• Amazon S3 is easy to use, with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
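
As a quick illustration of that web service interface, here is a minimal AWS CLI sketch (bucket and file names are placeholders) that creates a bucket, uploads an object, and lists it:

aws s3 mb s3://my-example-bucket                  # create ("make") a bucket
aws s3 cp backup.tar.gz s3://my-example-bucket/   # upload an object
aws s3 ls s3://my-example-bucket/                 # list objects in the bucket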
S3 Benefits
Durable
• Amazon S3 provides durable infrastructure to store important data and is designed for durability of
99.999999999% of objects.
Low Cost
• Amazon S3 allows you to store large amounts of data at a very low cost.
Available
• Amazon S3 Standard is designed for up to 99.99% availability of objects over a given year and is
backed by the Amazon S3 Service Level Agreement, ensuring that you can rely on it when needed.
Secure
• Amazon S3 supports data transfer over SSL and automatic encryption of your data once it is uploaded.
You can also configure bucket policies to manage object permissions and control access to your data
using AWS Identity and Access Management (IAM).
Scalable
• With Amazon S3, you can store as much data as you want and access it when needed. You can stop
guessing your future storage needs and scale up and down as required, dramatically increasing
business agility.
Send Event Notifications
• Amazon S3 can send event notifications when objects are uploaded to Amazon S3. Amazon S3 event notifications can be delivered using Amazon SQS or Amazon SNS, enabling you to trigger workflows, alerts, or other processing.
For example, you could use Amazon S3 event notifications to trigger processing of media files when they are uploaded, processing of data files when they become available, or synchronization of Amazon S3 objects with other data stores.
S3 integrations include Amazon CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon RDS, Amazon Glacier, Amazon EBS, Amazon DynamoDB, Amazon Redshift, Amazon Route 53, Amazon EMR, Amazon VPC, Amazon KMS,
and AWS Lambda
High Performance
Easy to use
Amazon S3 is easy to use with a web-based management console and mobile app and full REST APIs and SDKs for easy integration with third party technologies.
Use cases
Backup and Archiving
• Amazon S3 offers a highly durable, scalable, and secure solution for backing up and archiving your
critical data.
Big Data Analytics
• Whether you’re storing pharmaceutical or financial data, or multimedia files such as photos and videos, Amazon S3 can be used as your big data object store. Amazon Web Services helps with reducing costs, scaling to meet demand, and increasing the speed of innovation.
Static Website Hosting
• You can host your entire static website on Amazon S3 for a low-cost, highly available hosting
solution that can scale automatically to meet traffic demands.
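
A rough sketch of how that looks with the AWS CLI (the bucket name and local folder are placeholders; the objects must also be made publicly readable, for example via a bucket policy or ACLs):

aws s3 website s3://my-example-bucket/ --index-document index.html --error-document error.html
aws s3 sync ./site s3://my-example-bucket/ --acl public-read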
Disaster Recovery

• We can implement failover and failback scenarios.

AWS-3

AWS(EC2-2)

Inexpensive
Amazon EC2 passes on to you the financial benefits of Amazon’s scale. You pay a very low
rate for the compute capacity you actually consume. See Amazon EC2 Instance Purchasing
Options for a more detailed description.
• On-Demand Instances – On-Demand Instances let you pay for compute capacity by the hour
with no long-term commitments.
• This frees you from the costs and complexities of planning, purchasing, and maintaining
hardware and transforms what are commonly large fixed costs into much smaller variable costs.
Reserved Instances
• Reserved Instances provide you with a significant discount (up to 75%) compared to On-
Demand Instance pricing.
• There are three Reserved Instance payment options (No Upfront, Partial Upfront, All Upfront)
that enable you to balance the amount you pay upfront with your effective hourly price.
• The Reserved Instance Marketplace is also available, which provides you with the
opportunity to sell Reserved Instances if your needs change.
Examples:
You want to move instances to a new AWS Region, change to a new instance type, or sell
capacity for projects that end before your Reserved Instance term expires.
Spot Instances
• Spot Instances allow customers to bid on unused Amazon EC2 capacity and run those
instances for as long as their bid exceeds the current Spot Price.
To use Amazon EC2, you simply:
• Select a pre-configured, template Amazon Machine Image (AMI) to get up and running immediately. Or create an AMI containing your applications, libraries, data, and associated configuration settings.
• Configure security and network access on your Amazon EC2 instance.
• Choose which instance type(s) you want, then start, terminate, and monitor as many instances
of your AMI as needed, using the web service APIs or the variety of management tools
provided.
• Determine whether you want to run in multiple locations, utilize static IP endpoints, or attach
persistent block storage to your instances.
• Pay only for the resources that you actually consume, like instance-hours or data transfer.
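
The same workflow can be driven from the AWS CLI. The sketch below uses placeholder AMI, key pair, security group, and instance IDs; substitute your own:

# launch one instance from an AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro \
    --key-name my-key --security-group-ids sg-0123456789abcdef0

# monitor running instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"

# terminate an instance when it is no longer needed
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0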
Features 
Amazon EC2 provides a number of powerful features for building scalable, failure resilient,
enterprise class applications.
Amazon Elastic Block Store
• Amazon Elastic Block Store (EBS) offers persistent storage for Amazon EC2 instances.
• Amazon EBS volumes are network-attached, and persist independently from the life of an instance.
• Amazon EBS volumes are highly available, highly reliable volumes that can be leveraged as an Amazon EC2 instance’s boot partition or attached to a running Amazon EC2 instance as a standard block device
EBS-Optimized Instances
• For an additional, low, hourly fee, customers can launch selected Amazon EC2 instances
types as EBS-optimized instances
Multiple Locations
• Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of Regions and Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location. Regions consist of one or more Availability Zones, are geographically dispersed, and will be in separate geographic areas or countries
Elastic IP Addresses
• Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP
address is associated with your account not a particular instance, and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to any instance in your account
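
For illustration, allocating an Elastic IP and remapping it to an instance looks like this with the AWS CLI (the instance and allocation IDs are placeholders):

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0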

AWS-2

AWS(EC2 -1)


Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable
compute capacity in the cloud
• Amazon EC2’s simple web service interface allows you to obtain and configure capacity with
minimal friction.
• It provides you with complete control of your computing resources and lets you run on
Amazon’s proven computing environment.
• Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
• Amazon EC2 changes the economics of computing by allowing you to pay only for capacity
that you actually use.
• Amazon EC2 provides developers the tools to build failure resilient applications and isolate
themselves from common failure scenarios.

Amazon EC2 Benefits
Elastic Web-Scale Computing
Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days. You can
commission one, hundreds, or even thousands of server instances simultaneously. Because this is all controlled with web service APIs, your application can automatically scale itself up and down depending on its needs.
• Completely Controlled
You have complete control of your instances. You have root access to each one, and you can interact
with them as you would any machine. You can stop your instance while retaining the data on your boot partition and then subsequently restart the same instance using web service APIs. Instances can be rebooted remotely using web service APIs. You also have access to console output of your instances.
• Flexible Cloud Hosting Services
You have the choice of multiple instance types, operating systems, and software packages.
Amazon EC2 allows you to select a configuration of memory, CPU, instance storage, and the boot
partition size that is optimal for your choice of operating system and application. For example, your
choice of operating systems includes numerous Linux distributions, and Microsoft Windows Server.
• Designed for use with other Amazon Web Services
Amazon EC2 works in conjunction with Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon SimpleDB and Amazon Simple Queue Service (Amazon SQS) to provide a complete solution for computing, query processing and storage across a wide range of applications.
• Reliable
Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and
predictably commissioned. The service runs within Amazon’s proven network infrastructure and data centers. The Amazon EC2 Service Level Agreement commitment is 99.95% availability for each Amazon EC2 Region.
Secure
• Amazon EC2 works in conjunction with Amazon VPC to provide security and robust
networking functionality for your compute resources.
• Your compute instances are located in a Virtual Private Cloud (VPC) with an IP range that you
specify. You decide which instances are exposed to the Internet and which remain private.
• Security Groups and network ACLs allow you to control inbound and outbound network
access to and from your instances.
You can connect your existing IT infrastructure to resources in your VPC using industry-standard
encrypted IPsec VPN connections.
• You can provision your EC2 resources as Dedicated Instances. Dedicated Instances are Amazon
EC2 Instances that run on hardware dedicated to a single customer for additional isolation.
• If you do not have a default VPC you must create a VPC and launch instances into that VPC
to leverage advanced networking features such as private subnets, outbound security group
filtering, network ACLs, Dedicated Instances, and VPN connections.

Wednesday 5 July 2017

Git-2

                                                                              
                                                                                GitHub

GitHub is a web-based Git or version control repository and Internet hosting service. It is mostly used for code. It offers all of the distributed version control and source code management functionality of Git as well as adding its own features.

There are many ways it can be set up and configured, but at my job, here's how we use it: when a new employee starts, he downloads all the files from Github, which is an online server we're all connected to.

So he has his local version of the files, I have my local version, our boss has his local version, etc.

Before working with GitHub, I need to create a GitHub account, then connect my local Git repository to GitHub by adding it as a remote and authenticating over an SSH connection.
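
A minimal sketch of that one-time setup (the username and repository name are placeholders) looks like this:

 > git remote add origin git@github.com:your-username/your-repo.git
 > git push -u origin master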

When I make a change to some files, I go through the following process in the Terminal. (There are GUI clients for Git, but I prefer working on the command line.)


 > git pull

That pulls the latest changes down from github. If there are conflicts between those and my local ones, it tells me what they are, file-by-file, line-by-line, and I now have a chance to reconcile those differences.

After editing the files or creating new ones, I run this command:

 > git add .

Which adds all of my local changes to git, so that git knows about them. The dot after add specifically means to add all the changes I've made, e.g. new files I've added to my local folder or changes I've made to existing files. If I want, I can add only specific files, e.g.

 > git add myNewFile.js

I now write a commit message describing the changes I just staged.

 > git commit -m "Fixed a major bug which stopped reports from printing."

Finally, I upload my changes to the server.

 > git push

Now, when my colleagues do a ...

 > git pull

... they will get my changes, and they will be notified if any of them conflict with their local versions.

There are all kinds of cool, useful commands for rolling back changes to a particular time or state. But probably the most useful thing about Git is branching. Let's say my team is working on code for an Asteroids game, and I get the idea for making spinning asteroids. This will involve making some major changes to the existing asteroids code, and I'm a little scared to do that. No worries, I can just make a branch.

First of all, I'll check which branches exist:

 > git branch
master*


So there's currently only one branch on my local machine, called master. The star by it means that's the branch I'm currently working in. I'll go ahead and create a new one:

 > git branch spinningAsteroids

That creates a new branch based on the current state of master. I'll now switch into that branch.

 > git checkout spinningAsteroids
> git branch
master
spinningAsteroids*


I now spend a couple of hours in spinningAsteroids, doing whatever coding I need to do, not worrying about messing things up, because I'm in a branch. Meanwhile, I get a tech support call. They've found a critical bug and I need to fix it asap. No worries...

 > git checkout master

... fix bug ...

 > git pull
> git add .
> git commit -m "Fixed critical bug with high scores."
> git push


Now I can resume my work with spinningAsteroids.

 > git checkout spinningAsteroids
> git branch
master
spinningAsteroids*


... work, work, work ...

Okay, I'm now happy with my spinning asteroids, and I want to merge that new code into the main code base, so...

 > git checkout master
> git branch
master*
spinningAsteroids


 > git merge spinningAsteroids

Now the code from my branch is merged into the main code-base. I can now upload it.

 > git pull
> git add .
> git commit -m "added new cool feature! Spinning asteroids!!!"
> git push
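
Once the branch has been merged and pushed, I can optionally delete it, since its history now lives in master:

 > git branch -d spinningAsteroids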

Devops Interview Questions-1

  1. How do you expect you would be required to multitask as a DevOps professional?
  2. What testing is necessary to ensure that a new service is ready for production?
  3. What’s a PTR in DNS?
  4. Describe two-factor authentication?
  5. What are the advantages of NoSQL database over RDBMS?
  6. What is an MX record in DNS?
  7. What is DevOps?
  8. Where can DevOps needed in the organization?
  9. What are the core operations of DevOps?
  10. What are the principles of DevOps?
  11. How would you prepare for a migration?
  12. Explain your understanding and expertise on both the software development side and the technical operations side of an organization you’ve worked for in the past.
  13. What is the difference between RAID 0 and RAID 1?
  14. How much have you interacted with cloud based software development?
  15. How can you reduce load time of a dynamic website?
  16. How would you ensure traceability?
  17. Describe the advantages/disadvantages of using CloudFormation to manage your resources
  18. Would you use CloudFormation to create a RDS database?
  19. Describe EC2 spot instances and which use cases it can be used to reduce costs
  20. Talk about IAM roles
  21. Talk about VPCs, Subnets, Internet Gateways, NAT, NACLs, and VPN/VPC Peering
  22. Have you used Puppet, Chef, Salt or Ansible?
  23. How long have you used it for? 
  24. Have you used it in production?
  25. Describe the size of the environment that you automated (how many servers, small scale or large scale) 
  26.  What is the difference between Linux and Unix ?
  27.  What's a KVM ?
  28.  How would you make sure a service starts on an OS of your choice ?
  29.  Here's a terminal. What flavor of Linux is running ?
  30.  Write a command to delete all empty file under a directory.
  31.  Kill all the procs by a particular user without using pkill
  32.  What is Active Directory ? How do you make a server join a domain ?
  33.  What is the difference between TCP and UDP ? 
  34.  What is ICMP ? Why should you block it ? 
  35.  What is IPv6 ? Why should we care ? 
  36.  In a corporate environment users from London can ping a particular server but users from New York cannot, what steps will you take to troubleshoot the problem ? 
  37.  What steps are needed to change the hostname on a Linux machine ? 
  38.  Where is the hostname file on a Windows server ? 
  39.  How is a hostname resolved on a Linux machine ? 
  40.  What's a SSL tunnel ? 
  41.  What's a SDN ?
  42.  What is your favorite scripting language ? Why ?
  43.  What are design patterns ?
  44.  Describe some scripts you have written/automation you have done/ programs you have written. Justify your choice of scripting language and design patterns.
  45.  Can you port the same script to another language ? On another OS ?
  46.  How long would it take you to learn another language ?
  47. Have you used AWS or other cloud platforms? 
  48. How long for? 
  49. In production or just at home on personal projects? 
  50. How to keep logs on servers or containers with ephemeral storage? 
  51. Where to look when trying to reduce cloud costs without reducing capacity? 





This is a small set of questions on DevOps. We will add more in future posts. If you know the answers to the questions above, write them in the comments along with the question number, and we will update this blog so everyone can access them.




Thanks
Devops Desk Team

Monday 3 July 2017

Jenkins-4

                                                            Jenkins Slave Setup-1

As we all know, Jenkins is a CI/CD tool used to schedule jobs. We can schedule multiple jobs using Jenkins, which puts a bit of a burden on the Jenkins master. To reduce that burden we use slave nodes, which take over some of the jobs from the master.

One master can have any number of slaves.

→ In my environment I am using 3 VMs:
1.server1.abc.com
2.server2.abc.com
3.server3.abc.com

→ I am using server1.abc.com as the Jenkins master, and I want to make server2.abc.com a slave of server1.abc.com.

To make it a slave, I don't need to install Jenkins on server2.abc.com.

→ In this case I installed Jenkins on server1.abc.com only. When you install Jenkins, it creates a user named jenkins on server1. You can't log in as that jenkins user by default; to log in with that user, we do the following.

Go to 


[root@server1 ~]# vi /etc/passwd

jenkins:x:500:500:jenkins:/home/jenkins:/bin/false

Change /bin/false to /bin/bash:

jenkins:x:500:500:jenkins:/home/jenkins:/bin/bash
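
Editing /etc/passwd by hand works, but the same change can be made with usermod if you prefer (run as root on server1; this is just an alternative, not an extra step):

[root@server1 ~]# usermod -s /bin/bash jenkins
[root@server1 ~]# grep jenkins /etc/passwd     # confirm the shell is now /bin/bash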


→ Then give sudo permissions to the jenkins user using the visudo command as follows:



[root@server1 ~]# visudo



## Allow root to run any commands anywhere

root    ALL=(ALL)          ALL

test    ALL=(ALL) NOPASSWD:ALL
dhoni   ALL=(ALL) NOPASSWD:ALL
jenkins ALL=(ALL) NOPASSWD:ALL

Press Esc, then type :wq to save and exit.

→ Then go to server2.abc.com, create a user named jenkins, and give it sudo permissions, as below.

[root@server2 ~]# useradd jenkins
[root@server2 ~]# passwd jenkins
Changing password for user jenkins.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

→ Give sudo permissions to the jenkins user on server2 as well:

[root@server2 ~]# visudo



## Allow root to run any commands anywhere

root    ALL=(ALL)          ALL



jenkins ALL=(ALL) NOPASSWD:ALL


Press Esc, then type :wq to save and exit.


→ Now log in on server1.abc.com as jenkins using the switch user (su) command below:


[root@server1 ~]# su jenkins



bash-4.1$ ssh-keygen
                                                          Then press Enter, accepting the defaults; you will get output like the text below:
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/jenkins/.ssh/id_rsa):
Created directory '/var/lib/jenkins/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/jenkins/.ssh/id_rsa.
Your public key has been saved in /var/lib/jenkins/.ssh/id_rsa.pub.
The key fingerprint is:
47:39:ed:50:b5:ab:1c:08:24:a8:5a:da:83:5e:dd:89 jenkins@server1.abc.com
The key's randomart image is:
+--[ RSA 2048]----+
|     .. .   ...  |
|    .  o   +   . |
|   .    . = . .  |
|  o      o =   . |
| *  . o S o o .  |
|+ o. E o . . o   |
|. ..        o    |
| .               |
|                 |
+-----------------+




→ Then copy that key to server2.abc.com:

bash-4.1$ ssh-copy-id server2.abc.com
The authenticity of host 'server2.abc.com (192.168.33.11)' can't be established.
RSA key fingerprint is 90:d8:41:6f:c5:39:1d:54:0d:43:4e:34:dc:f1:d2:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'server2.abc.com,192.168.33.11' (RSA) to the list of known hosts.
jenkins@server2.abc.com's password:
Now try logging into the machine, with "ssh 'server2.abc.com'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.




→ Check whether it's working with the command below:

bash-4.1$ ssh server2.abc.com

[jenkins@server2 ~]$ exit

logout
Connection to server2.abc.com closed.



→ To exit from the jenkins user's shell:

bash-4.1$ exit
exit
[root@server1 ~]#


If you face any problem while practicing, feel free to leave a comment, and bookmark this blog for quick reference. We will try to help you.

Thanks
Devops Desk Team


Git

                                       Git Introduction

It is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

Having a distributed architecture, Git is an example of a DVCS (hence Distributed Version Control System). Rather than have only one single place for the full version history of the software as is common in once-popular version control systems like CVS or Subversion (also known as SVN), in Git, every developer's working copy of the code is also a repository that can contain the full history of all changes.

Git allows a team of people to work together, all using the same files. And it helps the team cope with the confusion that tends to happen when multiple people are editing the same files.

Using Git, we can easily move between different versions of our files.
--------------------------------------------------------------------------------------------------------------------------
Creating a directory and making it as git repository using git init command.

[test@server1 ~]$ mkdir g1

[test@server1 ~]$ cd g1

[test@server1 g1]$ git init                      ----> makes the directory a Git repository

Initialized empty Git repository in /home/test/g1/.git/

------------------------------------------------------------------------------------------

After that, first-time users should give their name and email address to Git in the following way:


[test@server1 g1]$ git config --global user.name "test"

[test@server1 g1]$ git config --global user.email "test@localhost"


[test@server1 g1]$ ls

[test@server1 g1]$ ls -la

total 12
drwxrwxr-x  3 test test 4096 Mar 28 16:28 .
drwx------ 12 test test 4096 Mar 28 16:28 ..
drwxrwxr-x  7 test test 4096 Mar 28 16:28 .git

------------------------------------------------------------------------------------------

Here we will see how Git works with a simple example, as below.

Create an index.html file, then add it to the Git repository so Git tracks the file.

[test@server1 g1]$ vi index.html                         ---> creating a file inside the git repository

[test@server1 g1]$ git add index.html                 ---> add it to the repository so it is tracked

[test@server1 g1]$ git status             --> This command shows the status of the git repository, e.g. which files are being tracked and which changes are staged

# On branch master
#
# Initial commit
#
# Changes to be committed:
#   (use "git rm --cached <file>..." to unstage)
#
#       new file:   index.html
#

[test@server1 g1]$ git commit              ---> This command saves the staged changes; Git opens an editor so you can write a commit message.
[master (root-commit) c097834] updating
 1 files changed, 1 insertions(+), 0 deletions(-)
 create mode 100644 index.html

---> git log shows the commit history for the repository: the commit ID, author, date, and message of each commit.

[test@server1 g1]$ git log

commit c097834da5f9f54aa4f08cb253aae41f5557e77b
Author: test <test@localhost>
Date:   Tue Mar 28 16:29:36 2017 +0530

    updating
---------------------------------------------------------------------------
Now change the content of the index file, add it to the repository, and commit it, as below:

[test@server1 g1]$ vi index.html                           --Change the content 

[test@server1 g1]$ git add index.html


[test@server1 g1]$ git commit -m "update second time"

[master b0e8a8f] update second time
 1 files changed, 2 insertions(+), 0 deletions(-)

-------------------------------------------------------------------------

Check the git log to find the previous version of the file. It shows the commit IDs, which are helpful for going back to a previous version of the file.

[test@server1 g1]$ git log

commit b0e8a8ffb442dcb86b4cc4481e29e3ab42e256f2
Author: test <test@localhost>
Date:   Tue Mar 28 16:31:29 2017 +0530

    update second time

commit c097834da5f9f54aa4f08cb253aae41f5557e77b

Author: test <test@localhost>
Date:   Tue Mar 28 16:29:36 2017 +0530

    updating

---------------------------------------------------
We can get a required version of the file using a commit ID from git log with the git checkout command, as follows. First, look at the current content of the index file:

[test@server1 g1]$ more index.html

This is the git file

This is a modification doing second time

-----------------------------------------------------------------------------------------------------------------
Now check out the file as it was at the first commit, and observe the content of the index file:

[test@server1 g1]$ git checkout c097834da5f9f54aa4f08cb253aae41f5557e77b -- index.html

[test@server1 g1]$ more index.html

This is the git file

---------------------------------------------------------------------------------------------------------------------
By using commit IDs we can move between different versions of the file, as below:

[test@server1 g1]$ git checkout b0e8a8ffb442dcb86b4cc4481e29e3ab42e256f2 -- index.html

[test@server1 g1]$ more index.html

This is the git file

This is a modification doing second time
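
To see exactly what changed between the two versions, you can also diff the two commits (the short IDs below are the ones from the git log output above):

[test@server1 g1]$ git diff c097834 b0e8a8f -- index.html
[test@server1 g1]$ git log --oneline          # compact one-line view of the history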

If you face any problem while practicing, feel free to leave a comment, and bookmark this blog for quick reference. We will try to help you.

Thanks
Devops Desk Team


Puppet

Puppet-1
Puppet is a Configuration Management tool that is used for deploying, configuring and managing servers. There are a couple of different tools that PuppetLabs provides (PuppetDB, Puppet Enterprise, Facter, Hiera), as well as models for using Puppet (client/server, where a server is referred to as a master, and clients check in to gather any updates to configuration and apply them to themselves; also master-less). Finally, there is a robust community of open-source tools around Puppet, such as r10k, and a number of community built modules that can be found on the Puppet Forge.


It performs the following functions: 
Defining distinct configurations for each and every host, and continuously checking and confirming whether the required configuration is in place and is not altered (if altered Puppet will revert back to the required configuration) on the host. 
Dynamic scaling-up and scaling-down of machines. 
Providing control over all your configured machines, so a centralized (master-server or repo-based) change gets propagated to all, automatically.



From the ground up, Puppet is organized as follows:

- Manifests, which contain Puppet code (a Ruby DSL), describing the desired configuration, contents, or execution for files, services, scripts, and other pieces of infrastructure 
- Modules, which group common manifests together to achieve some goal (e.g. an elasticsearch module could be used to install, configure, and start elasticsearch on a node) 
- Nodes, which are the systems to be configured by one or more modules 
- Environments, which are user-defined logical groupings, typically representing different versions of the total body of Puppet code, and typically aligned along SDLC states, like 'qa' or 'production'
At runtime, the Puppet executable applies (runs) the given modules for a particular matching node in the desired environment and prints its progress to stdout; if all goes well, at the end, your node should be configured as your Puppet code describes. 
Puppet code is composed primarily of resource declarations. A resource describes something about the state of the system, such as a certain user or file should exist, or a package should be installed. 
Here is an example of a user resource declaration:


user { 'mitchell':
  ensure     => present,
  uid        => '1000',
  gid        => '1000',
  shell      => '/bin/bash',
  home       => '/home/mitchell'
}
Here is the general form of a resource declaration:
resource_type { 'resource_name':
  attribute => value
  ...
}
Therefore, the previous resource declaration describes a user resource named 'mitchell', with the specified attributes.
To list all of the default resource types that are available to Puppet, enter the following command:
puppet resource --types
We will cover a few more resource types throughout this tutorial.
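
You can also inspect the current state of a single resource on the system; for example, the following prints the root user as a Puppet resource declaration:

puppet resource user root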

Manifests

Puppet programs are called manifests. Manifests are composed of puppet code and their filenames use the .pp extension. The default main manifest in Puppet installed via apt is /etc/puppet/manifests/site.pp.
If you have followed the prerequisite Puppet tutorial, you have already written a manifest that creates a file and installs Apache. We will also write a few more in this tutorial.

Classes

In Puppet, classes are code blocks that can be called in code elsewhere. Using classes allows you to reuse Puppet code, and can make manifests easier to read.

Class Definition

A class definition is where the code that composes a class lives. Defining a class makes the class available to be used in manifests, but does not actually evaluate anything.
Here is how a class definition is formatted:
class example_class {
  ...
  code
  ...
}
The above defines a class named "example_class", and the Puppet code would go between the curly braces.

Class Declaration

A class declaration occurs when a class is called in a manifest. A class declaration tells Puppet to evaluate the code within the class. Class declarations come in two different flavors: normal and resource-like.
A normal class declaration occurs when the include keyword is used in Puppet code, like so:
include example_class
This will cause Puppet to evaluate the code in example_class.
A resource-like class declaration occurs when a class is declared like a resource, like so:
class { 'example_class': }
Using resource-like class declarations allows you to specify class parameters, which override the default values of class attributes. If you followed the prerequisite tutorial, you have already used a resource-like class declaration ("apache" class) when you used the PuppetLabs Apache module to install Apache on host2:
node 'host2' {
  class { 'apache': }             # use apache module
  apache::vhost { 'example.com':  # define vhost resource
    port    => '80',
    docroot => '/var/www/html'
  }
}
Now that you know about resources, manifests, and classes, you will want to learn about modules.

Modules

A module is a collection of manifests and data (such as facts, files, and templates), and they have a specific directory structure. Modules are useful for organizing your Puppet code, because they allow you to split your code into multiple manifests. It is considered best practice to use modules to organize almost all of your Puppet manifests.
To add a module to Puppet, place it in the /etc/puppet/modules directory.
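
As a rough sketch (the module name mymodule and the managed file are made up), a minimal module and a masterless test run with puppet apply could look like this:

sudo mkdir -p /etc/puppet/modules/mymodule/manifests
sudo sh -c 'cat > /etc/puppet/modules/mymodule/manifests/init.pp <<EOF
class mymodule {
  file { "/tmp/hello":
    ensure  => present,
    content => "managed by puppet\n",
  }
}
EOF'
sudo puppet apply -e "include mymodule"    # evaluates the class on the local node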


If you face any problem while practicing, feel free to leave a comment, and bookmark this blog for quick reference. We will try to help you.

Thanks
Devops Desk Team

Chef


Chef-1

Whether you have five or five thousand servers, Chef lets you manage them all by turning infrastructure into code. Infrastructure described as code is flexible, versionable, human-readable, and testable. Whether your infrastructure is in the cloud, on-premises or in a hybrid environment, you can easily and quickly adapt to your business’s changing needs with Chef.

  • Chef uses Ruby as the configuration language, rather than a custom DSL.
  • Chef is Apache licensed & Opscode maintains CLAs for all Contributors, which means that Chef is safe to include in your software.
  • Chef is designed from the ground up to integrate with other tools, or to make that integration as simple as possible. Chef is not the canonical representation of your infrastructure - it is a service that exposes certain parts of your infrastructure.
  • Chef applies resources in the order they are specified in your Recipes - there is no dependency management. This means multiple Chef runs will always apply the Resources under management in the same order, every time.
  • Chef Resources have Actions, which can be signaled.
  • Resources can appear more than once in Chef, and they inherit the attributes of the earlier resource (i.e. you can tell Apache to start and stop in a recipe by specifying the resource twice, with the second one only changing the action attribute). 
Chef has the following major components:
Workstation
One (or more) workstations are configured to allow users to author, test, and maintain cookbooks. Cookbooks are uploaded to the Chef server from the workstation. Some cookbooks are custom to the organization and others are based on community cookbooks available from the Chef Supermarket.
Ruby is the programming language that is the authoring syntax for cookbooks. Most recipes are simple patterns (blocks that define properties and values that map to specific configuration items like packages, files, services, templates, and users). The full power of Ruby is available for when you need a programming language.
Often, a workstation is configured to use the Chef Development Kit as the development toolkit. The Chef Development Kit is a package from Chef that provides a recommended set of tooling, including Chef itself, the chef command line tool, Test Kitchen, ChefSpec, Berkshelf, and more.

Node
A node is any machine—physical, virtual, cloud, network device, etc.—that is under management by Chef.
A chef-client is installed on every node that is under management by Chef. The chef-client performs all of the configuration tasks that are specified by the run-list and will pull down any required configuration data from the Chef server as it is needed during the chef-client run.

Chef server
The Chef server acts as a hub of information. Cookbooks and policy settings are uploaded to the Chef server by users from workstations. (Policy settings may also be maintained from the Chef server itself, via the Chef management console web user interface.)
The chef-client accesses the Chef server from the node on which it’s installed to get configuration data, performs searches of historical chef-client run data, and then pulls down the necessary configuration data. After the chef-client run is finished, the chef-client uploads updated run data to the Chef server.
Chef management console is the user interface for the Chef server. It is used to manage data bags, attributes, run-lists, roles, environments, and cookbooks, and also to configure role-based access for users and groups.

Chef Supermarket
Chef Supermarket is the location in which community cookbooks are shared and managed. Cookbooks that are part of the Chef Supermarket may be used by any Chef user. How community cookbooks are used varies from organization to organization.
Chef management console, chef-client run reporting, high availability configurations, and Chef server replication are available as part of Chef Automate.
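
As an aside (not from the original notes), once a workstation has knife configured against your Chef server, attaching a new node typically looks something like the sketch below; the hostname, SSH user, node name, and run-list are all placeholders:

# bootstrap a node: installs chef-client, registers the node, and applies the given run-list
knife bootstrap node1.example.com -x ubuntu --sudo -N node1 -r 'recipe[nginx]'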

If you face any problem while practicing, feel free to leave a comment, and bookmark this blog for quick reference. We will try to help you.

Thanks
Devops Desk Team