
Puppet Tutorial for Linux part 2: Client and Server


In Part 1 of this Puppet tutorial we saw how to install Puppet on a Linux machine from source, and create a basic manifest which controls the NTP service. In this episode we’ll cover setting up a Puppet server and then using it to control multiple client machines.

Installing servers with Puppet

In the first part of the tutorial, we installed Puppet directly from the source. This is a great way to learn and experiment, but for production purposes we would like to use a standard package - in this case, an Ubuntu package we can install via apt-get. This way, you can ensure that the same version of Puppet is present on your servers and update it automatically if need be. The Ubuntu package also has a few bonus features that the source package doesn’t.

If you want to follow along with the tutorial using your existing Puppet source install, that’s fine. It’s a good idea to make sure you’re using the same version of Puppet on both client and server, or you may run into obscure and annoying problems, so either install from the same source package on your client machine, or set up two fresh machines with the Ubuntu package as described below.

The version of Puppet in the standard Ubuntu repositories is a little old, so add Mathias Gug’s puppet-backports repository to your machine so that you can get a more recent version:

# add-apt-repository ppa:mathiaz/puppet-backports
# apt-get update
# apt-get install puppet puppetmaster

You can now make the Puppetmaster service persistent across reboots and start the daemon:

# update-rc.d puppetmaster defaults
# puppet master --mkusers

If there is a firewall between the Puppetmaster and its clients (eg an iptables firewall on the Puppetmaster server itself) you will need to open TCP port 8140 to the clients.
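
For example, with an iptables firewall running on the Puppetmaster itself, a rule along these lines will let the clients in (just a sketch; adapt it to your existing ruleset, and remember to save the rules so they survive a reboot):

# iptables -A INPUT -p tcp --dport 8140 -j ACCEPT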

Create a client manifest

In Part 1 of the Puppet Tutorial we created a file /etc/puppet/manifests/nodes.pp, which lists the nodes (machines) that Puppet knows about, and what configuration they should have. Currently it looks like this:


node myserver {
    include ntp
}

As we’re adding a new node to the system, we need to modify the file so that it reads like this:


node myserver {
    include ntp
}

node myclient {
    include ntp
}

Authorising a client

Puppet uses SSL (Secure Sockets Layer), an encrypted protocol, to communicate between master and clients. This means that only a client with a correctly signed SSL certificate can access the Puppetmaster and receive its configuration. To exchange certificates between the master and client, follow this procedure.

Configuring the client to contact the server

Edit your /etc/puppet/puppet.conf file to tell the client where to find the Puppetmaster:

server = myserver.mydomain.com
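
This setting normally lives in the [main] section, which applies to all the Puppet programs on the machine, so a minimal client puppet.conf might look something like this sketch (keep whatever other settings your existing file already has):

[main]
server = myserver.mydomain.com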

Generate a certificate request

On the client, run:

# puppet agent --test
info: Creating a new certificate request for myclient.mydomain.com
info: Creating a new SSL key at /etc/puppet/ssl/private_keys/myclient.mydomain.com.pem
warning: peer certificate won't be verified in this SSL session
notice: Did not receive certificate
notice: Set to run 'one time'; exiting with no certificate

Sign the certificate

On the master:

# puppetca -l
myclient.mydomain.com

There is a certificate request pending for myclient. To sign it, run:

# puppetca -s myclient.mydomain.com
notice: Signed certificate request for myclient.mydomain.com
notice: Removing file Puppet::SSL::CertificateRequest myclient.mydomain.com at 
  '/var/lib/puppet/ssl/ca/requests/myclient.mydomain.com.pem'

If there is no certificate request listed, the client was not able to contact the server for some reason. Check that you have the right server address set in puppet.conf on the client, and that TCP port 8140 is open on both the master and client firewalls. The puppet master daemon also needs to be running on the master.
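
A quick way to test connectivity from the client is to try the port directly, for example with telnet (substituting your own Puppetmaster address):

# telnet myserver.mydomain.com 8140

If the connection is refused or times out, revisit the firewall rules and check that the puppetmaster daemon is actually listening.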

Run Puppet for the first time

On the client, you should now be able to run:

# puppet agent --test
notice: Got signed certificate
info: Caching catalog at /var/puppet/state/localconfig.yaml
notice: Starting catalog run
notice: //ntp/Service[ntp]/ensure: ensure changed 'stopped' to 'running'
notice: Finished catalog run in 0.92 seconds

Just as on the server in the first part of the tutorial, Puppet is now applying the ntp class to myclient. At the moment all this class does is ensure the ntp service is running, but of course it could do many more interesting things. And that’s what we’ll look at next time in Part 3.
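
For reference, the ntp class from Part 1 is about as simple as a Puppet class gets; it looks something like this (a minimal sketch - your version from Part 1 may differ slightly):

class ntp {
    service { "ntp":
        ensure => running,
    }
}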

Puppet books

If you’re excited about Puppet and want to learn more, may I modestly recommend the Puppet 2.7 Cookbook? The Puppet 2.7 Cookbook takes you beyond the basics to explore the full power of Puppet, showing you in detail how to tackle a variety of real-world problems and applications. At every step it shows you exactly what commands you need to type, and includes full code samples for every recipe.

My job description

http://dilbert.com/strips/comic/2010-04-07

I bet a lot of sysadmins are putting this comic up on their office wall today. It neatly encapsulates the management information problem that goes with the job.

What's wrong with this image?

The problem

Is this a familiar nightmare? “The server you built yesterday is great,” your boss enthuses. “Now we need a hundred more just like it. And when you’ve done that, could you take a look at my laptop before lunch? I can’t seem to get into my email.”

It’s common nowadays for sysadmins to manage very large numbers of servers - often virtual servers created on-demand in cloud platforms like Amazon EC2. How do you build so many servers and keep them updated - automatically, reliably, and quickly?

One common strategy is the golden image pattern: set up the server the way you want it, and then copy the disk image and use that to boot as many clone machines as you need. Another is the automation pattern: start with a generic server, and then apply automatic configuration tools such as Puppet to bring the machine into a specified state where it is ready for production.

The golden image

The golden image is attractive to many because it’s easy and cheap. It’s quick to create images, and you just need a big disk to store them.

Imaged machines also have low latency: they’re ready for production within a few minutes of startup. Finally, the approach is platform-agnostic: you can image virtually any kind of machine, even Windows (which most configuration automation tools can’t handle very well).


So what’s the problem with the golden image? Luke Kanies, creator of Puppet, prefers to call it the “foil ball” approach: your images can quickly become an unmanageable, crumpled mess. Firstly, how many images do you need? Well, as many as you have different types of machines: web servers, databases, load balancers, DNS servers, and so on.

The number of images multiplies again when you take history into account. What happens when you need to recreate the state of the database server last month to troubleshoot a problem? You need to keep old images around, which means a new image every time you make a change.

And here’s the thing: if you do need to make a change (for example a security upgrade for a critical package), you’ll have to rebuild the golden image and then re-image and reboot all affected machines - which means lots of bandwidth and possibly downtime. And, unfortunately, even imaged machines usually need some final tweaking (eg IP addresses) before they’re ready for use.

Automation tools

By comparison, building your machines with an automatic configuration tool such as Puppet, Chef or even cfengine gives you many advantages. The configuration is self-documenting: it’s easy to see what should be on each machine and, more importantly, why it’s there. And because the configuration is what actually gets applied, it stays in sync with reality.

Although your server types will be different, they will also likely have a lot in common. With images, these supposedly common settings could easily drift apart over time. With automation tools, you can put these settings into modules or classes, just as a software developer re-uses a module or library in another project.
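
For instance, settings shared by every machine type can live in a single class which each node then includes alongside its own role. This is just a hypothetical sketch - the ssh, users, apache and mysql classes are placeholders for your own modules:

class base {
    include ntp
    include ssh
    include users
}

node webserver {
    include base
    include apache
}

node dbserver {
    include base
    include mysql
}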

In fact, all the advantages of software modules are available: you can share code with others, download public recipes and manifests, and maintain a complete version history of your network.

Automated configuration is intrinsically multi-platform. Puppet and Chef can abstract away some of the differences between operating systems, and you can add minor tweaks of your own. For example, you might need a slightly different MySQL config setting for MySQL 4.x as opposed to 5.x. A one-line switch in the configuration can address this problem, instead of creating two disk images.
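
As a hypothetical sketch of what that one-line switch might look like in Puppet (the $mysql_version fact and the file names here are assumptions, not part of any standard module):

class mysql::config {
    # Pick a config file based on the installed MySQL major version
    $my_cnf = $mysql_version ? {
        /^4\./  => "puppet:///modules/mysql/my.cnf-4.x",
        default => "puppet:///modules/mysql/my.cnf-5.x",
    }

    file { "/etc/mysql/my.cnf":
        source => $my_cnf,
    }
}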

Of course, everything comes with a downside. It is more difficult and usually more time-consuming to build an automated configuration, especially from scratch and when you’re still learning to use the tools. And some things are hard to automate. Some software just wasn’t written with automated installation and management in mind.

Also, not all software will be packaged for your platform, or you may need special compilation options; in these cases you will have to package the software yourself.

Finally, configuration management doesn’t always mix well with human intervention. In particular, someone may make a config change on a machine which is then automatically reversed by Puppet. At best this can be a nasty surprise; at worst it can result in a site outage. Managing desktops automatically can create similar problems. This needs to be addressed at a business level, with a proper change control process and by making it clear to everyone where the boundaries of automatic management lie.

Combined strategies

So both imaging and automation have their place; neither is a magic bullet. The best strategy for scaling your deployments is often a judicious combination of these two approaches. You can, for example, start with a ‘stem cell’ image containing an approved base build, security fixes and settings which don’t often change, which is then configured by Puppet or Chef. Or you could use provisioning tools such as Kickstart and Cobbler and then hand over to Puppet. Or even build the machine with Puppet and then create a golden image from this.

One thing is certain: ignoring the problem is not an option. Moving to an automated network might seem like a big project, but the key is to start small. Set aside a block of time every week to spend on improving your infrastructure, and automate one small thing at a time as part of a gradual continuous process. The more you invest in this, the more benefits you’ll see - and you can bet your competitors will be doing the same.

Further reading

An earlier version of this article appeared on the Agile Sysadmin blog.

Chicken Techa Masala

A good time was had by all at the 4th monthly Devops Curry Night in London - a chance for technically-minded folk to get together over a curry and a few beers, and talk about servers and software and science fiction. This month, sysadmin entrepreneur David Mytton from Boxed Ice dropped in to talk about Server Density, what makes a good server monitoring product, blogging, SEO, and other fascinating topics. Also in attendance were sysadmin/devop Will Pink from jobsgopublic, Lisp hacker Donald Fisk, Python hacker Andy Nelis and special guest, information designer Matt Gourd.

If you couldn’t make it this time, don’t worry - it’s happening again next month. Watch this space for details!

Can you master Puppet?

Pay attention at the back. Reductive Labs just announced Puppet training dates for London:

Puppet Training London: March 29 - April 2

There are two courses available. Becoming a Puppet Master (3 days) covers Puppet and Puppetmaster configuration, classes, modules, definitions, resource types, the resource abstraction layer, virtual resources, exported resources, stored configs, metaparameters, dependencies, events, tags, environments, Puppet language patterns and best practices. There’ll be a mix of teaching and interactive class exercises using your own laptop and virtual machines. It costs £1595 for the 3-day course.

If you’re a seasoned old Puppet hand and you know all that stuff by heart, then the Puppet Developer course (2 days) is probably for you. You’ll get an introduction to Ruby for Puppet, as well as learning how to craft your own custom resource types and providers, develop custom functions and Facter facts. In addition there’s a segment on testing practices and RSpec for Puppet. The cost is £995 for two days.

Register for the training (get £100 off each course if you register before March 15th) or contact Scott Campbell to find out more. And don’t forget to bring an apple for the teacher.
