gitea-pages/admin-guide/legacy/misc/projectpsi-puppet1.rst
2021-05-05 14:24:27 +02:00

Project psi-puppet1

Introduction

This document describes the relaunch of the puppet service infrastructure at PSI.

The whole project can be divided into two parts: the setup of the puppet server and the setup of the puppet clients.

Objectives

  • To get a stable, scalable and easy to manage puppet service infrastructure.
  • To gain a better overview of the various client configurations managed by puppet.
  • To keep a clear and up-to-date documentation.
  • To keep the different configurations of the different SL releases separated from each other, e.g. SL 5.1 does not overlap with SL 5.3.
  • To allow users from AIT and GFA other than the puppet administrator to use puppet to configure their hosts.
  • The different client configurations of the different puppet users must not interfere with each other.
  • To manage the changes to manifests and client configuration files.
  • Easy recovery of files in case of data loss.
  • Easy and fast reinstallation of an identical puppet server in case of an irreparable server crash.

Description of the Basic Server Setup

Procedure

First make the directory in the SL51 installation tree:

# mkdir /afs/psi.ch/software/linux/dist/scientific/51/puppet-0247

Add the following RPMS to this repository and run `createrepo`:

puppet-server-0.24.7-4.el5.noarch.rpm
augeas-libs-0.3.5-1.el5.i386.rpm
facter-1.5.2-2.el5.noarch.rpm
puppet-0.24.7-4.el5.noarch.rpm
ruby-augeas-0.2.0-1.el5.i386.rpm
ruby-shadow-1.4.1-7.el5.i386.rpm

# cd /afs/psi.ch/software/linux/dist/scientific/51/puppet-0247
# createrepo .

To enable access to this repo, create the yum repo file /etc/yum.repos.d/puppet-0247.repo on the puppet server:

[puppet-0247] 
name=puppet-0247 for SL5
baseurl=http://linux.web.psi.ch/dist/scientific/5/puppet-0247/
enabled=1
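The repo file above can be written and checked in one step; a minimal sketch (contents exactly as listed above; the `mkdir -p` is only there so the snippet is self-contained, the directory normally exists on any yum-based system):

```shell
# Create the yum repo file for the puppet-0247 repository and sanity-check it
mkdir -p /etc/yum.repos.d
cat > /etc/yum.repos.d/puppet-0247.repo <<'EOF'
[puppet-0247]
name=puppet-0247 for SL5
baseurl=http://linux.web.psi.ch/dist/scientific/5/puppet-0247/
enabled=1
EOF
# Verify that the expected section header and baseurl are present
grep -c '^baseurl=' /etc/yum.repos.d/puppet-0247.repo
```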

Setup The Puppet Server

Basic Server Installation

Install SL51, class Server, via PXE boot and kickstart.

Puppet-Server Installation

Install puppet-server with yum. This will also pull in the required dependencies:

# [root@psi-puppet1]
# yum install puppet-server  

...
Finished Kernel Module Plugin

Dependencies Resolved

=============================================================================
 Package         Arch       Version      Repository    Size
=============================================================================
Installing:
 puppet-server       noarch     0.24.7-4.el5     puppet-0247    25 k
Installing for dependencies:
 augeas-libs         i386       0.3.5-1.el5      puppet-0247       151 k
 facter          noarch     1.5.2-2.el5      puppet-0247    41 k
 puppet          noarch     0.24.7-4.el5     puppet-0247       548 k
 ruby            i386       1.8.5-5.el5_2.6  sl5update     279 k
 ruby-augeas         i386       0.2.0-1.el5      puppet-0247    17 k
 ruby-libs           i386       1.8.5-5.el5_2.6  sl5update     1.6 M
 ruby-shadow         i386       1.4.1-7.el5      puppet-0247       9.5 k

Transaction Summary
=============================================================================
Install      8 Package(s)
Update       0 Package(s)
Remove       0 Package(s)
...

Configure The Puppet Server

The configuration files of the puppet server, directory /etc/puppet/, are stored locally.

The puppet client configuration files are stored on AFS. The mountpoint on psi-puppet1 is /var/puppet/environments, thus create the directory /var/puppet/environments.

# mkdir -p /var/puppet/environments

For how to mount the AFS volumes see section Mount AFS Volumes on Puppet Server below.

The client configuration files in /var/puppet/environments are described at Puppet Manifests For SL53.

The log is on the local disk in /var/log/puppet. To set the logfile, edit the PUPPETMASTER_OPTS line in /etc/rc.d/init.d/puppetmaster. For testing, the debug option -d is also enabled:

PUPPETMASTER_OPTS="-v -d -l /var/log/puppet/puppetmaster.log"
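With -v and -d enabled this log grows quickly. A hypothetical logrotate snippet could keep it in check (the file name /etc/logrotate.d/puppetmaster and all values below are assumptions, not part of the original setup):

```
/var/log/puppet/puppetmaster.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```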

Config file `puppet.conf`:

###########################################################################
# $Header: /etc/puppet/RCS/puppet.conf,v 1.3 2009/09/07 18:11:17 root Exp root $
#
# Puppetmaster Environments
# =========================
#
# Ref.: http://reductivelabs.com/trac/puppet/wiki/UsingMultipleEnvironments
#
# Marc Gasser, PSI
# last modified 2011-11-18
#
############################################################################
[main]
    # Where Puppet stores dynamic and growing data.
    # The default value is '/var/puppet'.
    vardir = /var/puppet

    # The Puppet log directory.
    # The default value is '$vardir/log'.
    # logdir = /afs/psi.ch/service/linux/puppet/var/log
    logdir = /var/log/puppet

    # Where Puppet PID files are kept.
    # The default value is '$vardir/run'.
    rundir = /var/run/puppet

    # Where SSL certificates are kept.
    # The default value is '$confdir/ssl'.
    ssldir = $vardir/ssl

    # Whether log files should always flush to disk.
    # The default value is false
    autoflush = true



[puppetmasterd]
    reports = store
    #reports = store , tagmail, rrdgraph

    # tagmap = $confdir/tagmail.conf

    #rrddir = $vardir/rrd
    #rrdinterval = $runinterval
    #rrdgraph = true

[puppetd]
    # The file in which puppetd stores a list of the classes
    # associated with the retrieved configuration.  Can be loaded in
    # the separate ``puppet`` executable using the ``--loadclasses``
    # option.
    # The default value is '$confdir/classes.txt'.
    classfile = $vardir/classes.txt

    # Where puppetd caches the local configuration.  An
    # extension indicating the cache format is added automatically.
    # The default value is '$confdir/localconfig'.
    localconfig = $vardir/localconfig

    # Note: The port that the client daemon listens on, defaults to
    #       8139. However, at PSI we run puppetd via the psi-puppet
    #       script with run onetime option enabled.
    #       psi-puppet is triggered by cron.

#########################
#######   SL 5   ########
#########################

### begin{ SL 5 (SL54), gasser_m
[DesktopSL5Unstable]
    manifest   = /var/puppet/environments/DesktopSL5Unstable/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL5Unstable/modules

[ServerSL5Unstable]
    manifest   = /var/puppet/environments/ServerSL5Unstable/manifests/site.pp
    modulepath = /var/puppet/environments/ServerSL5Unstable/modules

[DesktopSL5Testing]
    manifest   = /var/puppet/environments/DesktopSL5Testing/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL5Testing/modules

[DesktopSL5Stable]
    manifest   = /var/puppet/environments/DesktopSL5Stable/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL5Stable/modules

[CPT]
    manifest   = /var/puppet/environments/CPT/manifests/site.pp
    modulepath = /var/puppet/environments/CPT/modules

###}end SL 5 (SL54), gasser_m

### V.M. for sl53-c-ks.cfg 
[CnodeSL5]
    manifest   = /var/puppet/environments/CnodeSL5/manifests/site.pp
    modulepath = /var/puppet/environments/CnodeSL5/modules

[PHServerSL5]
    manifest   = /var/puppet/environments/PHServerSL5/manifests/site.pp
    modulepath = /var/puppet/environments/PHServerSL5/modules

[EdgarDevelopment]
    manifest   = /var/puppet/environments/EdgarDevelopment/manifests/site.pp
    modulepath = /var/puppet/environments/EdgarDevelopment/modules

[DerekDevelopment]
    manifest   = /var/puppet/environments/DerekDevelopment/manifests/site.pp
    modulepath = /var/puppet/environments/DerekDevelopment/modules

[cray]
    manifest   = /var/puppet/environments/cray/manifests/site.pp
    modulepath = /var/puppet/environments/cray/modules


### begin Heiner{
[HeinerDevelopment]
    manifest   = /var/puppet/environments/HeinerDevelopment/manifests/site.pp
    modulepath = /var/puppet/environments/HeinerDevelopment/modules
[HeinerDevelopment54]
    manifest   = /var/puppet/environments/HeinerDevelopment54/manifests/site.pp
    modulepath = /var/puppet/environments/HeinerDevelopment54/modules
[GFA]
    manifest   = /var/puppet/environments/GFA/manifests/site.pp
    modulepath = /var/puppet/environments/GFA/modules
### }end Heiner

### begin Rene{
[GFADesktopSL5]
    manifest   = /var/puppet/environments/GFADesktopSL5/manifests/site.pp
    modulepath = /var/puppet/environments/GFADesktopSL5/modules

[GFADesktopSL6]
    manifest   = /var/puppet/environments/GFADesktopSL6/manifests/site.pp
    modulepath = /var/puppet/environments/GFADesktopSL6/modules
### }end Rene

### Services
[Web]
    manifest   = /var/puppet/environments/Web/manifests/site.pp
    modulepath = /var/puppet/environments/Web/modules

[Virtual]
    manifest   = /var/puppet/environments/Virtual/manifests/site.pp
    modulepath = /var/puppet/environments/Virtual/modules

[News]
    manifest   = /var/puppet/environments/News/manifests/site.pp
    modulepath = /var/puppet/environments/News/modules

[MySQL]
    manifest   = /var/puppet/environments/MySQL/manifests/site.pp
    modulepath = /var/puppet/environments/MySQL/modules

[Loadbalancer]
    manifest   = /var/puppet/environments/Loadbalancer/manifests/site.pp
    modulepath = /var/puppet/environments/Loadbalancer/modules

[LlcLoadbalancer]
    manifest   = /var/puppet/environments/LlcLoadbalancer/manifests/site.pp
    modulepath = /var/puppet/environments/LlcLoadbalancer/modules

[License]
    manifest   = /var/puppet/environments/License/manifests/site.pp
    modulepath = /var/puppet/environments/License/modules

[FTP]
    manifest   = /var/puppet/environments/FTP/manifests/site.pp
    modulepath = /var/puppet/environments/FTP/modules

[Elog]
    manifest   = /var/puppet/environments/Elog/manifests/site.pp
    modulepath = /var/puppet/environments/Elog/modules

[Cups]
    manifest   = /var/puppet/environments/Cups/manifests/site.pp
    modulepath = /var/puppet/environments/Cups/modules

[Archive]
    manifest   = /var/puppet/environments/Archive/manifests/site.pp
    modulepath = /var/puppet/environments/Archive/modules


#########################
#######   SL 6   ########
#########################

### begin{ SL 6 (gasser_m)
[DesktopSL6Unstable]
    manifest   = /var/puppet/environments/DesktopSL6Unstable/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL6Unstable/modules

[DesktopSL6Testing]
    manifest   = /var/puppet/environments/DesktopSL6Testing/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL6Testing/modules

[DesktopSL6Stable]
    manifest   = /var/puppet/environments/DesktopSL6Stable/manifests/site.pp
    modulepath = /var/puppet/environments/DesktopSL6Stable/modules

###}end SL 6 (gasser_m)

### Markushin
[CnodeSL6]
    manifest   = /var/puppet/environments/CnodeSL6/manifests/site.pp
    modulepath = /var/puppet/environments/CnodeSL6/modules
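Each bracketed section above defines one puppet environment. A client selects its environment via the environment setting in its own /etc/puppet/puppet.conf; a sketch, using one of the environment names defined above:

```
[main]
    environment = DesktopSL5Stable
```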

Config file `fileserver.conf`:

# This file consists of arbitrarily named sections/modules
# defining where files are served from and to whom

# Define a section 'files'
# Adapt the allow/deny settings to your needs. Order
# for allow/deny does not matter, allow always takes precedence
# over deny
# [files]
#  path /var/lib/puppet/files
#  allow *.example.com
#  deny *.evil.example.com
#  allow 192.168.0.0/24


#[facts]
# path /etc/puppet/facts
# allow *.psi.ch

[GFA5]
 path /afs/psi.ch/project/slscomp/puppet/gfa5
 allow *.psi.ch

[GFA6]
 path /afs/psi.ch/project/slscomp/puppet/gfa6
 allow *.psi.ch
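A manifest can then pull files from these mounts via puppet:// URLs; a hypothetical example (the target path /etc/motd is an illustration only, not part of the original setup):

```puppet
file { "/etc/motd":
    source => "puppet://psi-puppet1.psi.ch/GFA5/etc/motd",
}
```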

Mount AFS Volumes on Puppet Server

The puppet manifests for clients are located on AFS:

/afs/psi.ch/service/linux/puppet/var/puppet/environments/

AFS is already mounted as /afs in this default SL5 server installation:

# mount
...
AFS on /afs type afs (rw)

Now, we want to remount /afs/psi.ch/service/linux/puppet/var/puppet/environments on /var/puppet/environments. Therefore the mount option bind is used, which allows remounting parts of already mounted filesystems at an alternative location in the file hierarchy.

The server also needs permission on AFS to mount the environments directory. Add the new server to the AFS group svc.linux:puppet_hosts:

# pts ad -u <IP_ADDRESS> -g svc.linux:puppet_hosts
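The membership can be verified afterwards (group name as above):

```
# pts membership svc.linux:puppet_hosts
```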

As shown below we do the remount in /etc/rc.local, which is executed after all the other init scripts:

#!/bin/sh

touch /var/lock/subsys/local

# Puppet
mount -o bind /afs/psi.ch/service/linux/puppet/etc/puppet/environments /var/puppet/environments

# Restart Services depending on afs mounts
/etc/init.d/puppetmaster restart

Before the rc.local script can be applied, the proper AFS permissions have to be set to make the files readable for psi-puppet1. This was already done before (see topic Puppet Server Psi Puppet 2 For SL51), so we only have to add the IP address of psi-puppet1 to the AFS group `svc_linux:puppet_hosts`:

# pts adduser 129.129.190.174 svc_linux:puppet_hosts

Configuring Puppet Reporting

There are a number of different report processors available on the puppetmaster. The default report, store, simply stores the report file on the disk.

By default, each client is configured not to report back to the master. It has to be enabled either by the report option in puppet.conf or using --report on the command line.

`/etc/puppet/puppet.conf`:

[puppetd]
    report = true 

Command line:

# puppetd --report

Store Report Processor

Enable the store reports by using the reports configuration option in the puppetmasterd section of the puppet.conf file on the master.

`/etc/puppet/puppet.conf`:

[puppetmasterd]
    reports = store

The default reports directory is $vardir/reports.

Tagmail Report Processor

Enable the tagmail reports by using the reports configuration option in the puppetmasterd section of the puppet.conf file on the master. The tagmail.conf file contains a list of tags and email addresses. The special tags all and err are defined implicitly.

`/etc/puppet/puppet.conf`:

[puppetmasterd]
    reports = tagmail
    tagmap = $confdir/tagmail.conf

`/etc/puppet/tagmail.conf`:

all:    marc.gasser@psi.ch
err:    marc.gasser@psi.ch

Rrdgraph Report Processor

To enable the rrdgraph reports, rrdtool and rrdtool-ruby packages have to be installed.

Download the packages from the EPEL repository, configured in `/etc/yum.repos.d/epeli386.repo`:

[epeli386]
name=epel i386
baseurl=http://download.fedora.redhat.com/pub/epel/5/i386/
enabled=0


# yumdownloader --enablerepo=epeli386  rrdtool.i386 rrdtool-ruby.i386
# yum install rrdtool-1.2.27-3.el5.i386.rpm
# yum install rrdtool-ruby-1.2.27-3.el5.i386.rpm

You might want to add them to your local repository, too.

Note: For the time being, put them into psi-beta, because they break dependencies in the other repositories.

Then, configure puppet.conf by adding the lines shown below to the corresponding section. Here store, tagmail and rrdgraph are all enabled.

`/etc/puppet/puppet.conf`:

[puppetmasterd]
    reports = store, tagmail, rrdgraph

    rrddir = $vardir/rrd
    rrdinterval = $runinterval
    rrdgraph = true

Install The Ganglia Monitor Daemon

Install ganglia-gmond-3.0.6-4.slp5 and add the configuration file /etc/gmond.conf as shown below:

/* This configuration is as close to 2.5.x default behavior as possible 
   The values closely match ./gmond/metric.h definitions in 2.5.x */ 
globals {            
  daemonize = yes          
  setuid = yes         
  user = nobody          
  debug_level = 0           
  max_udp_msg_len = 1472    
  mute = no         
  deaf = no         
  host_dmax = 0 /*secs */ 
  cleanup_threshold = 300 /*secs */ 
  gexec = no         
} 

/* If a cluster attribute is specified, then all gmond hosts are wrapped inside 
 * of a <CLUSTER> tag.  If you do not specify a cluster tag, then all <HOSTS> will 
 * NOT be wrapped inside of a <CLUSTER> tag. */ 
cluster { 
  name = "puppet" 
  owner = "unspecified" 
  latlong = "unspecified" 
  url = "unspecified" 
} 

/* The host section describes attributes of the host, like the location */ 
host { 
  location = "unspecified" 
} 

/* Feel free to specify as many udp_send_channels as you like.  Gmond 
   used to only support having a single channel */ 
udp_send_channel { 
  mcast_join = 239.129.190.89 
  port = 8649 
} 

/* You can specify as many udp_recv_channels as you like as well. */ 
udp_recv_channel { 
  mcast_join = 239.129.190.89 
  port = 8649 
  bind = 239.129.190.89 
} 

# udp_recv_channel { 
#  host = "puppet"
#  port = 8649 
# } 

/* You can specify as many tcp_accept_channels as you like to share 
   an xml description of the state of the cluster */ 
tcp_accept_channel { 
  port = 8649 
} 


/* The old internal 2.5.x metric array has been replaced by the following 
   collection_group directives.  What follows is the default behavior for 
   collecting and sending metrics that is as close to 2.5.x behavior as 
   possible. */

/* This collection group will cause a heartbeat (or beacon) to be sent every 
   20 seconds.  In the heartbeat is the GMOND_STARTED data which expresses 
   the age of the running gmond. */ 
collection_group { 
  collect_once = yes 
  time_threshold = 20 
  metric { 
    name = "heartbeat" 
  } 
} 

/* This collection group will send general info about this host every 1200 secs. 
   This information doesn't change between reboots and is only collected once. */ 
collection_group { 
  collect_once = yes 
  time_threshold = 1200 
  metric { 
    name = "cpu_num" 
  } 
  metric { 
    name = "cpu_speed" 
  } 
  metric { 
    name = "mem_total" 
  } 
  /* Should this be here? Swap can be added/removed between reboots. */ 
  metric { 
    name = "swap_total" 
  } 
  metric { 
    name = "boottime" 
  } 
  metric { 
    name = "machine_type" 
  } 
  metric { 
    name = "os_name" 
  } 
  metric { 
    name = "os_release" 
  } 
  metric { 
    name = "location" 
  } 
} 

/* This collection group will send the status of gexecd for this host every 300 secs */
/* Unlike 2.5.x the default behavior is to report gexecd OFF.  */ 
collection_group { 
  collect_once = yes 
  time_threshold = 300 
  metric { 
    name = "gexec" 
  } 
} 

/* This collection group will collect the CPU status info every 20 secs. 
   The time threshold is set to 90 seconds.  In honesty, this time_threshold could be 
   set significantly higher to reduce unneccessary network chatter. */ 
collection_group { 
  collect_every = 20 
  time_threshold = 90 
  /* CPU status */ 
  metric { 
    name = "cpu_user"  
    value_threshold = "1.0" 
  } 
  metric { 
    name = "cpu_system"   
    value_threshold = "1.0" 
  } 
  metric { 
    name = "cpu_idle"  
    value_threshold = "5.0" 
  } 
  metric { 
    name = "cpu_nice"  
    value_threshold = "1.0" 
  } 
  metric { 
    name = "cpu_aidle" 
    value_threshold = "5.0" 
  } 
  metric { 
    name = "cpu_wio" 
    value_threshold = "1.0" 
  } 
  /* The next two metrics are optional if you want more detail... 
     ... since they are accounted for in cpu_system.  
  metric { 
    name = "cpu_intr" 
    value_threshold = "1.0" 
  } 
  metric { 
    name = "cpu_sintr" 
    value_threshold = "1.0" 
  } 
  */ 
} 

collection_group { 
  collect_every = 20 
  time_threshold = 90 
  /* Load Averages */ 
  metric { 
    name = "load_one" 
    value_threshold = "1.0" 
  } 
  metric { 
    name = "load_five" 
    value_threshold = "1.0" 
  } 
  metric { 
    name = "load_fifteen" 
    value_threshold = "1.0" 
  }
} 

/* This group collects the number of running and total processes */ 
collection_group { 
  collect_every = 80 
  time_threshold = 950 
  metric { 
    name = "proc_run" 
    value_threshold = "1.0" 
  } 
  metric { 
    name = "proc_total" 
    value_threshold = "1.0" 
  } 
}

/* This collection group grabs the volatile memory metrics every 40 secs and 
   sends them at least every 180 secs.  This time_threshold can be increased 
   significantly to reduce unneeded network traffic. */ 
collection_group { 
  collect_every = 40 
  time_threshold = 180 
  metric { 
    name = "mem_free" 
    value_threshold = "1024.0" 
  } 
  metric { 
    name = "mem_shared" 
    value_threshold = "1024.0" 
  } 
  metric { 
    name = "mem_buffers" 
    value_threshold = "1024.0" 
  } 
  metric { 
    name = "mem_cached" 
    value_threshold = "1024.0" 
  } 
  metric { 
    name = "swap_free" 
    value_threshold = "1024.0" 
  } 
} 

collection_group { 
  collect_every = 40 
  time_threshold = 300 
  metric { 
    name = "bytes_out" 
    value_threshold = 4096 
  } 
  metric { 
    name = "bytes_in" 
    value_threshold = 4096 
  } 
  metric { 
    name = "pkts_in" 
    value_threshold = 256 
  } 
  metric { 
    name = "pkts_out" 
    value_threshold = 256 
  } 
}

/* Different than 2.5.x default since the old config made no sense */ 
collection_group { 
  collect_every = 1800 
  time_threshold = 3600 
  metric { 
    name = "disk_total" 
    value_threshold = 1.0 
  } 
}

collection_group { 
  collect_every = 40 
  time_threshold = 180 
  metric { 
    name = "disk_free" 
    value_threshold = 1.0 
  } 
  metric { 
    name = "part_max_used" 
    value_threshold = 1.0 
  } 
}

# /etc/init.d/gmond start

See puppet at http://129.129.190.27/ganglia/. For the ganglia server configuration ask Valeri Markushin.

Install The Networker Backup Client (Legato)


Install the Networker client packages: the client itself and the manual pages. By default yum pulls in a lot of dependencies required for the Networker GUI, which facilitates restores. However, a restore can also be done via the command line interface, so the whole X installation can be skipped. To do so, the packages have to be installed without dependencies.

Because yum does not provide an installation without dependencies, yumdownloader is used to fetch the packages and rpm -i --nodeps to install them.

First install `yumdownloader`:

# yum install yum-utils

Install the rest:

# yumdownloader  --enablerepo=psi-beta lgtoclnt.i686 lgtoman.i686
# rpm -ivh --nodeps  lgtoclnt-7.4.2-1.i686.rpm lgtoman-7.4.2-1.i686.rpm

Start the Networker daemon:

# service networker start

The /nsr directory is created automatically. Add the string bs1.psi.ch to the file /nsr/res/server.
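A small idempotent sketch for this step (path and server name as above; the `mkdir -p` is only there so the snippet stands alone, since /nsr is normally created by the networker startup):

```shell
# Add bs1.psi.ch to /nsr/res/server unless it is already listed
mkdir -p /nsr/res
grep -qxF 'bs1.psi.ch' /nsr/res/server 2>/dev/null || echo 'bs1.psi.ch' >> /nsr/res/server
cat /nsr/res/server
```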

Restart the Networker daemon:

# service networker stop
# service networker start

Now, contact the backup server administrator, Marco Kohler, so he can add the host and the directories of interest to the backup service.

The next steps facilitate the task of the backup server administrator.

Create the file ~/nsradmin74_x.txt with the following three lines:

update administrator:"isroot,host=psi-puppet1","isroot,host=localhost","isroot,host=bs1","user=root,host=localhost","user=administrator,host=bs1"
. type: NSR System Port Ranges
update administrator:"isroot,host=psi-puppet1","isroot,host=localhost","isroot,host=bs1","user=root,host=localhost","user=administrator,host=bs1"

Then execute the command below and check the output:

# nsradmin -i ~/nsradmin74_x.txt -p nsrexec

updated resource id 3.0.104.17.41.235.57.74.129.129.190.174(7)
updated resource id 9.0.104.17.41.235.57.74.129.129.190.174(2)
updated resource id 8.0.168.18.5.236.57.74.129.129.190.174(2)
updated resource id 9.0.168.18.5.236.57.74.129.129.190.174(2)
Current query set
updated resource id 7.0.104.17.41.235.57.74.129.129.190.174(2)

Finally, test if the installation was successful:

# service networker stop
# service networker start
# service networker status
+--o nsrexecd (5995)

Note: Open files will not necessarily be considered during the backup run. It depends on their locking state.

How To Update the Networker Backup Client

Because the Networker RPM is not cleanly packaged, updating the client requires uninstalling the old package and installing the new one.

First the old /nsr directory has to be deleted. Then repeat the whole procedure shown in the previous section.

The Networker Administration Program

To start the Networker administration shell type the following command:

# nsradmin -p nsrexec

The Networker Recover Tool

Check out the manpage of `recover`:

# man recover
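A command-line restore session might look roughly like this (a sketch only; the server name is from this guide, the restored path is a hypothetical example):

```
# recover -s bs1.psi.ch
recover> cd /etc/puppet
recover> add puppet.conf
recover> recover
```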

Setup The Puppet Client

At this time the only difference between the old and the new client configuration is the name of the puppet server in the file /etc/puppet/puppet.conf, psi-puppet1 instead of pxeserv01.

File /etc/puppet/puppet.conf on `vmmarctest1.psi.ch`:

[main]
    vardir = /var/puppet
    logdir = /var/log/puppet
    rundir = /var/run/puppet
    ssldir = $vardir/ssl
    environment = development

[puppetd]
    classfile = $vardir/classes.txt
    localconfig = $vardir/localconfig
    factsync = true
    server = psi-puppet1.psi.ch

Because the new puppet server refers to the same sources (files) as the current production server, we set the immutable flag on the file above; otherwise, the next time puppetd runs, the server entry would be changed back to pxeserv01.
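On an ext3 filesystem the immutable flag can be set and inspected with chattr/lsattr (a sketch; clear the flag again with chattr -i before making intentional changes):

```
# chattr +i /etc/puppet/puppet.conf
# lsattr /etc/puppet/puppet.conf
```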

The sources are located at /afs/psi.ch/software/linux/dist/scientific/51/puppet/files/ on AFS. This path is set in the file /etc/puppet/fileserver.conf on the puppet server.
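The corresponding fileserver.conf entry on the puppet server would look roughly like this (the mount name `files` and the allow pattern are assumptions for illustration; only the path is from this guide):

```
[files]
 path /afs/psi.ch/software/linux/dist/scientific/51/puppet/files
 allow *.psi.ch
```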

Make First Tests

Start the puppetmaster:

# /etc/init.d/puppetmaster start

Test it with a client (the options mean: keep the process in the foreground, run once, and be verbose):

# [root@vmmarctest1 ~]
# puppetd --no-daemonize -o -v

Or run the client in no-op mode, i.e. a dry run without actually applying the configuration:

# puppetd --noop --no-daemonize -o -v

info: Loading fact sysconfig_psi
info: Loading fact sysconfig_psi-gfa
info: Creating a new certificate request for vmmarctest1.psi.ch
info: Creating a new SSL key at /var/puppet/ssl/private_keys/vmmarctest1.psi.ch.pem
warning: peer certificate won't be verified in this SSL session
notice: Got signed certificate
info: Retrieving facts
info: Loading fact sysconfig_psi
info: Loading fact sysconfig_psi-gfa
info: Caching catalog at /var/puppet/localconfig.yaml
notice: Starting catalog run
notice: //Node[default]/psi_localadmin/Exec[/usr/bin/psi-fix_file_permission >/dev/null]/returns: executed successfully
info: Filebucket[/var/puppet/clientbucket]: Adding /usr/share/texmf/dvips/config/config.ps(1611c4bb4b35341f1945059ff774c6df)
notice: //Node[default]/psi_base/File[/usr/share/texmf/dvips/config/config.ps]: Filebucketed to  with sum 1611c4bb4b35341f1945059ff774c6df
notice: //Node[default]/psi_base/File[/usr/share/texmf/dvips/config/config.ps]/source: replacing from source puppet://psi-puppet1.psi.ch/51/Desktop/usr/share/texmf/dvips/config/config.ps with contents {md5}b265606dc098a5414f3acd71a8831ef1
notice: //Node[default]/psi_puppet/File[/etc/puppet/puppet.conf]/checksum: checksum changed '{md5}f2944bb81bfbe22b2a2ac4c9197563f3' to '{md5}be67850ccad5409063a56de9d5a516d3'
notice: //Node[default]/psi_puppet/File[/etc/puppet/puppet.conf]: Filebucketed to  with sum be67850ccad5409063a56de9d5a516d3
err: //Node[default]/psi_puppet/File[/etc/puppet/puppet.conf]: Could not rename tmp /etc/puppet/puppet.conf for replacing: Operation not permitted - /etc/puppet/puppet.conf.puppettmp or /etc/puppet/puppet.conf
notice: //Node[default]/psi_puppet/File[/etc/puppet/puppet.conf]/source: replacing from source puppet://psi-puppet1.psi.ch/51/Desktop/etc/puppet/puppet.conf.testing with contents {md5}f2944bb81bfbe22b2a2ac4c9197563f3
info: Filebucket[/var/puppet/clientbucket]: Adding /etc/sysctl.conf(d5716d328f5b840eb4e13ae1d2896fe9)
notice: //Node[default]/psi_base/File[/etc/sysctl.conf]: Filebucketed to  with sum d5716d328f5b840eb4e13ae1d2896fe9
notice: //Node[default]/psi_base/File[/etc/sysctl.conf]/source: replacing from source puppet://psi-puppet1.psi.ch/51/Desktop/etc/sysctl.conf with contents {md5}d576ff606d3f93df26965e7ef364bd07
notice: //Node[default]/psi_yum/Exec[/usr/sbin/psi-get-yumconf]/returns: executed successfully
notice: Finished catalog run in 6.22 seconds

So, this looks promising. It seems the client could get its configuration from the new puppet server.

Only the file /etc/puppet/puppet.conf could not be changed, which is OK because the immutable flag was set.

Next Steps

  • Verify migration order (server, client or vice versa)
  • Finalize the basic server setup (verify that no configuration agents compromise the system, e.g. a puppetd run triggered by cron or at boot time); check whether it makes sense to use a DNS alias for the hostname.
  • Shall server configuration files be stored locally or mounted from AFS?

psi-puppet1:/etc/rc.d/rc.local has been prepared (not activated yet) for the AFS mount:

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

# Puppet
#mount -o bind /afs/psi.ch/service/linux/puppet/etc/puppet-0.24.7-4 /etc/puppet

# Restart Services depending on afs mounts
#/etc/init.d/puppetmaster restart

The whole current puppetserver configuration from /etc/puppet/ was copied to /afs/psi.ch/service/linux/puppet/etc/puppet-0.24.7-4.

  • If mounted from AFS the question remains how root@psi-puppet1 gets the permission to mount the mentioned AFS directory.
  • Shall the client configuration manifests be stored locally or on AFS?

Locally: /var/puppet/environments/

AFS: /afs/psi.ch/service/linux/puppet/etc/puppet-0.24.7-4/environments/

  • Run the puppetmaster on hardware or vmware? Hardware.
  • When the server is going to production the IP has to be changed, see Static IP for Production Server above. Done.
  • When the server is going to production the PSI firewall has to be adjusted. (Refer to Tobias)
  • Test an old client against the new server, and a new client against the old server.
  • Test with a limited number of new clients against the new server.