2010-06-14

Data Protector on Ubuntu LTS 10.04

We use Data Protector for backups at our organization. To be quite honest, HP is not among the best when it comes to packaging for Linux distributions.

For earlier versions of Ubuntu, the provided omnisetup.sh has been able to install the software, as long as we took care of a few prerequisites first (a rough sketch as shell commands follows the list):

  1. Install rpm: apt-get install rpm
  2. Install xinetd: apt-get install xinetd
  3. Comment out (or remove) the rplay entry in /etc/services, as it claims port 5555, which Data Protector uses
  4. Allow TCP port 5555 from the cell server to the host
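
Roughly, as shell commands (the iptables rule is only one way to open the port, and cell.server.tld is a placeholder for your cell server):


# apt-get install rpm xinetd
# sed -i 's/^rplay/#rplay/' /etc/services
# iptables -A INPUT -p tcp -s cell.server.tld --dport 5555 -j ACCEPT
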
When we tried to do this on Ubuntu LTS 10.04, we ran into the new "--force-debian" switch for RPM, which breaks the install. We did some trial and error, and ended up extracting the RPMs from the files distributed by HP. If you just want to continue using omnisetup.sh, the quick fix is to move /usr/bin/rpm aside and replace it with a wrapper script:


# mv /usr/bin/rpm /usr/bin/rpm.old
# cat > /usr/bin/rpm
#!/bin/sh
exec /usr/bin/rpm.old --force-debian "$@"
^D
# chmod +x /usr/bin/rpm


We located a file called DP_A0611_UXia64_IS.sd_depot; this file is in fact a tar archive:


# file DP_A0611_UXia64_IS.sd_depot
DP_A0611_UXia64_IS.sd_depot: POSIX tar archive


Unpack this file (I did this in /tmp):


/tmp# tar xvf /local/dataprotector/hpux_ia/DP_DEPOT/DP_A0611_UXia64_IS.sd_depot


You will find an unpacked directory called DATA-PROTECTOR.

On our clients, we need three packages: core, da (disk agent) and ma (media agent). These packages are located in OMNI-CORE-IS (core) and OMNI-OTHUX-P (da and ma) under the DATA-PROTECTOR directory:


OMNI-CORE-IS/opt/omni/databases/vendor/omnicf/gpl/i386/linux-x86/A.06.11/packet.Z
OMNI-CORE-IS/opt/omni/databases/vendor/omnicf/gpl/x86_64/linux-x86-64/A.06.11/packet.Z
OMNI-OTHUX-P/opt/omni/databases/vendor/da/gpl/i386/linux-x86/A.06.11/packet.Z
OMNI-OTHUX-P/opt/omni/databases/vendor/da/gpl/x86_64/linux-x86-64/A.06.11/packet.Z
OMNI-OTHUX-P/opt/omni/databases/vendor/ma/gpl/i386/linux-x86/A.06.11/packet.Z
OMNI-OTHUX-P/opt/omni/databases/vendor/ma/gpl/x86_64/linux-x86-64/A.06.11/packet.Z


You only need to uncompress these files and rename them to RPM files.

To speed this up, you can use a script:

#!/bin/sh

VERSION="A.06.11"

for PACKAGE_DIR in OMNI-CORE-IS/opt/omni/databases/vendor/omnicf/gpl \
                   OMNI-OTHUX-P/opt/omni/databases/vendor/da/gpl \
                   OMNI-OTHUX-P/opt/omni/databases/vendor/ma/gpl
do
    for ARCH in i386/linux-x86 x86_64/linux-x86-64
    do
        # packet.Z is compress(1) data; uncompress leaves a file named "packet"
        uncompress "$PACKAGE_DIR/$ARCH/$VERSION/packet.Z"
        # Pick the RPM package name out of the file(1) description
        RPM=$(file "$PACKAGE_DIR/$ARCH/$VERSION/packet" | cut -d: -f2- | cut -d' ' -f6-)
        if [ "$ARCH" = "i386/linux-x86" ]; then
            RPM="${RPM}.i386.rpm"
        else
            RPM="${RPM}.x86_64.rpm"
        fi
        cp -v "$PACKAGE_DIR/$ARCH/$VERSION/packet" "../$RPM"
    done
done


I converted these RPMs to .deb packages using alien. Note that you will have to convert the x86_64 packages on a 64-bit distro, and the i386 packages on a 32-bit one.


for RPM in *.rpm; do alien -c -k -d --fixperms "$RPM"; done

Whether you use the RPMs or the DEBs, install the core package first. To configure which cell server the client belongs to, just put its FQDN or IP in the file /etc/opt/omni/client/cell_server (the file is not created by the packages):


# echo "my.cell.server.tld" > /etc/opt/omni/client/cell_server


We installed these deb packages and everything seemed OK, but the client never returned any partitions to the cell manager. We found the solution: the problem was due to our use of ext4, which Data Protector does not know about.

To fix this, you will have to add "-t ext4" to the file /opt/omni/lbin/.util. I have made a patch you can download and apply from /:

/# patch -p0 < /tmp/util.patch
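
If you would rather make the patch yourself, the idea is simply to save the original, add "-t ext4" next to the other -t filesystem options in .util, and diff the two. Exactly where the option goes depends on your .util, so treat this as a sketch (run from / so the relative paths match patch -p0):


/# cp /opt/omni/lbin/.util /opt/omni/lbin/.util.orig
/# vi /opt/omni/lbin/.util
/# diff -u opt/omni/lbin/.util.orig opt/omni/lbin/.util > /tmp/util.patch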


To simplify the install process on Ubuntu, I made an install script. This script installs all needed packages, fixes rplay in /etc/services, adds a rule to shorewall (if found), installs core, da and ma, and applies the patch.
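
The script itself is not reproduced here, but a condensed sketch of what it might look like follows; the package handling, the shorewall rule format and the ordering are assumptions reconstructed from the transcript further down:


#!/bin/sh
# Sketch of install.sh; not the real script.
CELL_SERVER="my.cell.server.tld"
ARCH=$(dpkg --print-architecture)        # amd64 or i386

echo "Removing rplay from /etc/services"
sed -i '/^rplay/d' /etc/services

for DEP in xinetd ksh; do
    echo "Installing depending package $DEP"
    apt-get -y install "$DEP" > /dev/null
done

# Open TCP port 5555 in shorewall, if it is installed
if [ -d /etc/shorewall ]; then
    if ! grep -q 5555 /etc/shorewall/rules; then
        echo "ACCEPT net \$FW tcp 5555" >> /etc/shorewall/rules
        /etc/init.d/shorewall restart
    fi
fi

for PKG in ob2-core ob2-da ob2-ma; do
    echo "Installing ${PKG}_A.06.11-1_${ARCH}.deb"
    dpkg -i "${PKG}_A.06.11-1_${ARCH}.deb"
done

echo "Post-configuring client to use $CELL_SERVER"
mkdir -p /etc/opt/omni/client
echo "$CELL_SERVER" > /etc/opt/omni/client/cell_server

# Patch /opt/omni/lbin/.util if the machine uses ext4
if grep -q ext4 /proc/mounts; then
    echo "Detected ext4... patching /opt/omni/lbin/.util"
    ( cd / && patch -p0 < "$OLDPWD/util.patch" )
fi

/etc/init.d/xinetd restart
echo "Finished!"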

To use this, copy your deb packages, the util.patch and the install.sh script to the same folder:

/dataprotector/DEBS# ls
install.sh                    ob2-da_A.06.11-1_i386.deb
ob2-core_A.06.11-1_amd64.deb  ob2-ma_A.06.11-1_amd64.deb
ob2-core_A.06.11-1_i386.deb   ob2-ma_A.06.11-1_i386.deb
ob2-da_A.06.11-1_amd64.deb    util.patch


We share the dataprotector directory using NFS:

# cat /etc/exports
/local/dataprotector *(ro,sync,no_subtree_check)


The process of installing Data Protector on an Ubuntu host is then quite easy:
user@host:~$ sudo -s
[sudo] password for user:
root@host:~# mount -t nfs server:/local/dataprotector /mnt
root@host:~# cd /mnt/DEBS/
root@host:/mnt/DEBS# ./install.sh
Removing rplay from /etc/services
Installing depending package xinetd
Installing depending package ksh
Shorewall detected... checking for port 5555... missing... adding rule...
Restarting "Shorewall firewall": done.
Installing ob2-core_A.06.11-1_amd64.deb
Installing ob2-da_A.06.11-1_amd64.deb
Installing ob2-ma_A.06.11-1_amd64.deb
Post-configuring client to use my.cell.server.tld
Detected ext4... patching /opt/omni/lbin/.util
patching file /opt/omni/lbin/.util
* Stopping internet superserver xinetd [ OK ]
* Starting internet superserver xinetd [ OK ]
Finished!
root@host:/mnt/DEBS# cd
root@host:~# umount /mnt
root@host:~# exit

2010-05-20

Monitoring power state on virtual machines

We have a VMware datacenter and a cluster that hosts a number of virtual hosts that are not connected to the rest of our network. As we already have a Nagios solution monitoring most of our infrastructure, we wanted to monitor at least the power state of these network-unreachable hosts.

One way of doing this might be to use the VMware Infrastructure Toolkit, but alas, the running time of such scripts has been too long (first and foremost a problem within Munin monitoring). The other solution (there might be more that I do not know about) is to use SNMP.

At least ESX 3 and 3.5 ship with an SNMP agent that provides a set of VMware OIDs (1.3.6.1.4.1.6876 / enterprises.6876). From vSphere 4, VMware ships two SNMP daemons: one based on ucd-snmp without the VMware OIDs, and one built into the vmware-hostd process that provides these OIDs.

The first branch of this OID contains version info for the ESX server.

$ snmpwalk -Os -Cc -c public -v 2c kaffe enterprises.6876.1
enterprises.6876.1.1.0 = STRING: "VMware ESX"
enterprises.6876.1.2.0 = STRING: "4.0.0"
enterprises.6876.1.4.0 = STRING: "208167"

The second branch contains status and info for the ESX server's virtual machines.
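
For example, to list the display names of all virtual machines on a host, you can walk the vmwVmTable. A caveat: the sub-OIDs (.2.1.1 is the table entry, column .2 the display name) are from my reading of the VMWARE-VMINFO-MIB and should be verified against your ESX version:


$ snmpwalk -Os -Cc -c public -v 2c kaffe enterprises.6876.2.1.1.2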

Using Perl and Net::SNMP, it was fairly simple to make a Nagios plugin that can query an ESX server and check whether a named virtual host is running. As we are using a cluster with DRS and HA, I extended the script so it can query any number of ESX servers.

$ /usr/lib/nagios/plugins/check_vmware_status -N kamuf -N emba -N kaffe -H provis-ts
provis-ts is on (vmwaretools running) on node kaffe

As one can see, this also provides information about the current state of VMware Tools.
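
If you want to poke at this before deploying the plugin, a rough equivalent can be put together with the net-snmp command-line tools. This is a minimal sketch, assuming the vmwVmTable layout from VMWARE-VMINFO-MIB (column .2 is the display name, column .6 the power state); the column numbers are assumptions and should be verified against your ESX version:


#!/bin/sh
# Sketch: report the power state of a VM across a list of ESX nodes.
# Usage: vm_powerstate <vmname> <node> [node ...]
VM="$1"; shift
VMTABLE="1.3.6.1.4.1.6876.2.1.1"

for NODE in "$@"; do
    # Find the table index of the VM by matching its display name (column .2)
    IDX=$(snmpwalk -On -c public -v 2c "$NODE" "$VMTABLE.2" 2>/dev/null |
          grep "\"$VM\"" | sed 's/ =.*//' | awk -F. '{print $NF}')
    [ -n "$IDX" ] || continue
    # Read the power state for that index (column .6)
    STATE=$(snmpget -Ovq -c public -v 2c "$NODE" "$VMTABLE.6.$IDX")
    echo "$VM is $STATE on node $NODE"
    exit 0
done

echo "$VM not found on any node"
exit 2    # CRITICAL in Nagios terms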

A quick guide to enabling the SNMP service in vmware-hostd:
First, log on to the ESX server as a privileged account (root).
Query the runlevel startup info for snmpd:


# chkconfig --list snmpd
snmpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off


If this contains no "on" in any runlevel, you can skip to the firewall configuration.

If snmpd is enabled, follow these two steps:


# service snmpd stop


(If it says FAILED, do not worry, but continue to the next step)


# chkconfig snmpd off


The last step to complete on the ESX server is to make sure the firewall will allow incoming traffic to the SNMP service:


# esxcfg-firewall -o 161,udp,in,SNMP


If you do not have the VMware vSphere Command-Line Interface installed, download and install it on a machine that can reach the ESX servers.

(If running Ubuntu, you might want to install libuuid-perl and libclass-methodmaker-perl)
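
That is:


$ sudo apt-get install libuuid-perl libclass-methodmaker-perl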

Use the vicfg-snmp command to configure and enable the service:


$ vicfg-snmp --server kaffe --username root --communities public --enable


To check, use some SNMP tool (I prefer snmpwalk):


$ snmpwalk -Os -Cc -c public -v 2c kaffe enterprises.6876.1.1
enterprises.6876.1.1.0 = STRING: "VMware ESX"


To use this in Nagios, you will have to download check_vmware_status. I prefer to place it within the Nagios plugin directory (our installation uses /usr/lib/nagios/plugins). Make sure it is executable (chmod +x).
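
For example:


# cp check_vmware_status /usr/lib/nagios/plugins/
# chmod +x /usr/lib/nagios/plugins/check_vmware_status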

I've added a command to the Nagios config (commands.cfg):

define command{
        command_name    check_vmware_status
        command_line    $USER1$/check_vmware_status -N kamuf -N emba -N kaffe -H $HOSTNAME$
}

The -N arguments are all our ESX servers (nodes, if you like) that are part of the VMware cluster.

I've made a host group for all the virtual machines we want to monitor and made this service config (services_nagios2.cfg):

define service{
        use                     generic-service         ; Name of service template to use
        hostgroup_name          virtual-machines
        service_description     VMWARE-ON
        is_volatile             0
        check_period            24x7
        max_check_attempts      3
        normal_check_interval   5
        retry_check_interval    1
        contact_groups          vm-admins
        notification_interval   120
        notification_period     workhours
        notification_options    c,r
        check_command           check_vmware_status
}

An example of a host configuration:

define host{
        use             generic
        host_name       provis-ts
        alias           Terminal server
        address         127.0.0.1
}

An example of a hostgroup configuration:

define hostgroup{
        hostgroup_name  virtual-machines
        alias           Virtual machines
        members         provis-ts, provis-dc
}

2010-04-16

kurl.no for gwibber

Some simple steps for adding http://kurl.no/ support to gwibber.

  1. Download http://folk.ntnu.no/runesk/gwibber/kurlno.p_
  2. Copy kurlno.p_ to gwibber/microblog/urlshorter/kurlno.py (in /usr/share/pyshared on Ubuntu)
  3. Link kurlno.py to /usr/lib/python2.6/dist-packages/gwibber/microblog/urlshorter/kurlno.py (on Ubuntu, other distros might not need this; steps 1 to 3 are spelled out as shell commands at the end of this post)
  4. Add kurlno to gwibber/microblog/urlshorter/__init__.py
Example:
import cligs, isgd, tinyurlcom, trim, ur1ca, kurlno

PROTOCOLS = {
  "cli.gs": cligs,
  "is.gd": isgd,
  #"snipurl.com": snipurlcom,
  "tinyurl.com": tinyurlcom,
  "tr.im": trim,
  "ur1.ca": ur1ca,
  #"zi.ma": zima,
  "kurl.no": kurlno,
}
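
For reference, steps 1 to 3 as shell commands on Ubuntu (using the paths given above):


$ wget http://folk.ntnu.no/runesk/gwibber/kurlno.p_
$ sudo cp kurlno.p_ /usr/share/pyshared/gwibber/microblog/urlshorter/kurlno.py
$ sudo ln -s /usr/share/pyshared/gwibber/microblog/urlshorter/kurlno.py \
    /usr/lib/python2.6/dist-packages/gwibber/microblog/urlshorter/kurlno.py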