PandoraFMS: Configure Email to Send to Local Mail Server

This will guide you through setting up Pandora to use an on-premise email server. If you’d like to use Gmail as your relay instead, follow Pandora’s setup here.

Install postfix:

apt-get install postfix

In the configuration process, I set mine to “Satellite Server” and set the FQDN of pandora to pf1.hlmn.co.

Next, I edited the pandora_server.conf (located in /etc/pandora/) to reflect the below:

mta_address localhost 
#mta_port 25
#mta_user myuser@mydomain.com
#mta_pass mypassword
#mta_auth LOGIN
mta_from PandoraFMS <pandora@pf1.hlmn.co>

If you skip the mta_from option, your mail may be rejected by your local mail server, since it expects an FQDN email address.
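
To sanity-check that, here’s a small sketch with a hypothetical check_mta_from helper that greps the config for a fully qualified mta_from address:

```shell
# check_mta_from FILE: report whether mta_from carries a fully qualified
# address (user@domain.tld). Many MTAs reject envelope senders without one.
check_mta_from() {
  if grep -Eq '^mta_from[[:space:]].*@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' "$1"; then
    echo "ok"
  else
    echo "missing-or-unqualified"
  fi
}
# Usage: check_mta_from /etc/pandora/pandora_server.conf
```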

Restart Pandora and you should be good to go:

service pandora_server restart

You can test your settings by going to one of your agents that has an email alert set up and manually firing the alert.

If you haven’t configured an email alert yet, go here.

A Tale of Two Default Gateways, Two NICs and Two Subnets on Ubuntu

Wow, this was surprisingly simple, yet incredibly difficult to figure out what I was doing wrong.

Situation:

Server X has two NICs, one in a DMZ VLAN (192.168.1.0/24 on eth0) and one in a private VLAN (192.168.2.0/24 on eth1). With default settings in /etc/network/interfaces, traffic will only route through one interface; no matter what you do, you won’t be able to ping 192.168.2.0/24.

Solution:

In the interfaces config, add a metric for each interface. This is what /etc/network/interfaces should look like:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
    metric 0
    address 192.168.1.29
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8

auto eth1
iface eth1 inet static
    metric 1
    address 192.168.2.31
    netmask 255.255.255.0
    gateway 192.168.2.1
    dns-nameservers 192.168.2.1
    dns-search default.net

Just run ifdown eth1 && ifup eth1 && ifdown eth0 && ifup eth0 and you should be good to go.

Running route -n should now list both gateways, and both subnets should be pingable. No need for any fancy routing, port forwarding or iproute2 tricks.
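
Why the metrics matter: with two default routes installed, the kernel prefers the one with the lower metric. A quick illustration against canned route -n style output (a sample table, not a live system):

```shell
# Simulate picking the preferred default gateway from route -n style output.
# Columns: Destination Gateway Genmask Flags Metric Ref Use Iface
routes='0.0.0.0 192.168.1.1 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 192.168.2.1 0.0.0.0 UG 1 0 0 eth1'
# Keep only default routes, sort by metric, take the lowest-metric gateway.
preferred=$(printf '%s\n' "$routes" \
  | awk '$1=="0.0.0.0" {print $5, $2}' \
  | sort -n | head -1 | cut -d' ' -f2)
echo "Preferred default gateway: $preferred"
```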

SQL Join Command Resulting in Duplicate Rows

If you find your JOIN command returning duplicate rows and you’ve eliminated the usual suspects, check whether the table being joined has a primary key. I had a 4-table JOIN statement where everything was fine with the first 3 JOINs, but once I added the 4th table, the results started returning multiple duplicate rows. It turned out that table was missing a primary key.

After adding a primary key to the table, the query began functioning as expected.
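
If you want a quick way to spot the duplicates while debugging, sorting the exported query output and piping it through uniq -d prints only the repeated rows (the sample rows below are made up):

```shell
# uniq -d prints only lines that occur more than once in sorted input,
# making duplicate rows in exported query output easy to spot.
rows='1,alice,2014-11-01
2,bob,2014-11-02
2,bob,2014-11-02
3,carol,2014-11-03'
dupes=$(printf '%s\n' "$rows" | sort | uniq -d)
echo "Duplicate rows:"
echo "$dupes"
```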

Notes on Setting Up a Virtual IP with ucarp

ucarp allows you to have two hosts that share the same virtual IP. When one becomes unresponsive, the other assumes the virtual IP and responds on its behalf. Once the primary comes back, it takes the virtual IP back. It’s a very simple version of Heartbeat, except Heartbeat also manages the init.d scripts, starting and stopping services.

Setup Virtual IP with ucarp:

  1. Install ucarp to set up a virtual IP address
    $ sudo apt-get install ucarp
  2. Edit network interfaces:
    $ sudo nano /etc/network/interfaces
  3. Add this to the interfaces config on Server1 (zm1a):
    # The loopback network interface
    auto lo
    iface lo inet loopback
    
    # The primary network interface
    auto eth0
    iface eth0 inet static
     ################################
     # standard network configuration
     ################################
     address 192.168.2.23
     netmask 255.255.255.0
     gateway 192.168.2.1
     dns-nameservers 192.168.2.1
     dns-search hlmn.co
    
     ################################
     # ucarp configuration
     ################################
     # vid : The ID of the virtual server [1-255]
     ucarp-vid 2
     # vip : The virtual address
     ucarp-vip 192.168.2.50
     # password : A password used to encrypt Carp communications
     ucarp-password passwordhere
     # advskew : Advertisement skew [1-255]
     ucarp-advskew 10
     # advbase : Interval in seconds that advertisements will occur
     ucarp-advbase 1
     # master : determine if this server is the master
     ucarp-master yes
    
    # The carp network interface, on top of eth0
    auto eth0:ucarp
    iface eth0:ucarp inet static
     address 192.168.2.50
     netmask 255.255.255.0
  4. Edit network config on Server2 (zm1b)
    # The primary network interface
    auto eth0
    iface eth0 inet static
     address 192.168.2.24
     netmask 255.255.255.0
     gateway 192.168.2.1
     dns-nameservers 192.168.2.1
     dns-search hlmn.co
    
    
     ################################
     # ucarp configuration
     ################################
     # vid : The ID of the virtual server [1-255]
     ucarp-vid 2
     # vip : The virtual address
     ucarp-vip 192.168.2.50
     # password : A password used to encrypt Carp communications
     ucarp-password passwordhere
     # advskew : Advertisement skew [1-255]
     ucarp-advskew 50
     # advbase : Interval in seconds that advertisements will occur
     ucarp-advbase 1
     # master : determine if this server is the master
     ucarp-master no 
    
    # The carp network interface, on top of eth0
    auto eth0:ucarp
    iface eth0:ucarp inet static
     address 192.168.2.50
     netmask 255.255.255.0
  5. Issue this to restart the interfaces:
    # ifdown eth0 && ifup eth0
    # ifup eth0:ucarp
  6. Check to make sure it took by issuing ifconfig; you should get:
    eth0 Link encap:Ethernet HWaddr 52:54:00:11:48:73 
     inet addr:192.168.2.24 Bcast:192.168.2.255 Mask:255.255.255.0
     inet6 addr: fe80::5054:ff:fe11:4873/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
     RX packets:683652 errors:0 dropped:176 overruns:0 frame:0
     TX packets:733875 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000 
     RX bytes:643258992 (643.2 MB) TX bytes:316883387 (316.8 MB)
    
    eth0:ucarp Link encap:Ethernet HWaddr 52:54:00:11:48:73 
     inet addr:192.168.2.50 Bcast:192.168.2.255 Mask:255.255.255.0
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    
    lo Link encap:Local Loopback 
     inet addr:127.0.0.1 Mask:255.0.0.0
     inet6 addr: ::1/128 Scope:Host
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:3480 errors:0 dropped:0 overruns:0 frame:0
     TX packets:3480 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0 
     RX bytes:5747148 (5.7 MB) TX bytes:5747148 (5.7 MB)

Zimbra High Availability Setup with GlusterFS

WARNING: Before you embark on this, please read this disclaimer:

Although this technically works, GlusterFS needs some serious fine-tuning of read speed to work; otherwise, the mailbox service will “think” it failed to start, since startup takes over 60s and effectively times out. This, in turn, causes the init.d script to return a failed status, which Heartbeat sees, telling the resources to be turned over to the failover node. Problems abound. If you can get Gluster to perform fast enough that the mailbox service start doesn’t return a failure, please let me know. Until then, I’m going to work on a Round 2 where I only put the redo logs and ldap folder on Gluster. This should accomplish effectively the same thing while keeping the impact of Gluster’s slow read performance to a minimum.

Credits go to:

Gaurav Kohli’s Blog Post on setting up GlusterFS with Heartbeat

Philip Lawlor’s Post on setting up Zimbra for High Availability

Overview of Setup

zm1a.hlmn.co – 192.168.2.23

zm1b.hlmn.co – 192.168.2.24

zm1.hlmn.co – 192.168.2.50

Edit Hosts Files

On zm1a:

127.0.0.1 localhost.hlmn.co localhost
127.0.1.1 zm1.hlmn.co zm1a
192.168.2.23 zm1a zm1.hlmn.co
192.168.2.24 zm1b
192.168.2.50 zm1.hlmn.co

On zm1b:

127.0.0.1       zm1.hlmn.co localhost.hlmn.co localhost
192.168.2.23    zm1a 
192.168.2.24    zm1b zm1.hlmn.co

Update the hostname on both servers:

nano /etc/hostname

zm1a


Setup Heartbeat

  1. Install heartbeat:
    apt-get install heartbeat
  2. On both servers, add this config:
    nano /etc/heartbeat/ha.cf
    logfacility local0
    logfile /var/log/ha-log
    keepalive 2
    deadtime 20 # timeout before the other server takes over
    bcast eth0
    node zm1a
    node zm1b 
    auto_failback on # very important or auto failover won't happen
  3. edit /etc/heartbeat/haresources for Server1:
    zm1a IPaddr::192.168.2.50/24 zimbra
  4. edit /etc/heartbeat/haresources for Server2:
    zm1a IPaddr::192.168.2.50/24 zimbra
  5. Notice that both point to zm1a. That sets zm1a as the primary. Failure to do that will result in each trying to take over from the other, which just becomes a huge mess.
  6. Create /etc/heartbeat/authkeys on both servers
    auth 3
    3 md5 yourrandommd5string

    Protect the permissions of authkeys file on both servers:

    chmod 600 /etc/heartbeat/authkeys
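
One way to generate that random md5 string for authkeys (a sketch; any random hex string works):

```shell
# Hash 32 bytes of kernel randomness to get a random md5-style hex string,
# then print the two authkeys lines ready to paste into the file.
key=$(head -c 32 /dev/urandom | md5sum | cut -d' ' -f1)
printf 'auth 3\n3 md5 %s\n' "$key"
```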

Disable Upstart for Zimbra Services

On both machines, issue the below command to remove the startup services since Heartbeat will be handling them:

# update-rc.d -f zimbra remove

Final Comments:

Again, Heartbeat thinks Zimbra failed to start because the service takes so long to read from GlusterFS. If you can figure out a way to improve that, the above proof of concept should work well.


Notes on Installing GlusterFS on Ubuntu

Overview of Setup

Primary Gluster Server

Hostname: gf1.hlmn.co

IP Address: 192.168.2.26

OS: Ubuntu 14.04

Memory: 1GB

Secondary Gluster Server

Hostname: gf2.hlmn.co

IP Address: 192.168.2.27

OS Ubuntu 14.04

Memory: 1GB


Prepare the Virtual Machines

  1. Create a new clean, base Ubuntu 14.04 install
  2. Name it gf1 and set up the hosts file and hostname file to match, along with the domain information.
  3. Add a raw VirtIO disk to be used by Gluster as the brick. We’ll call this gf1_brick1.img
  4. Repeat for the second machine, naming it gf2.
  5. Once they’re set up, make sure they’re both updated:
    sudo apt-get update && sudo apt-get upgrade

Install Gluster on Both Nodes

  1. Install python-software properties:
    $ sudo apt-get install python-software-properties
  2. Add the PPA:
    $ sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
    $ sudo apt-get update
  3. Then install Gluster packages:
    $ sudo apt-get install glusterfs-server
  4. Add both hosts to your DNS server so that they can see each other by hostname
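
If you don’t run internal DNS, static hosts entries work too. A sketch using the IPs and names from the overview above; HOSTS_FILE is parameterized here so the snippet is easy to dry-run, but on the real nodes you’d point it at /etc/hosts:

```shell
# Append hosts entries so gf1 and gf2 can resolve each other by name.
# HOSTS_FILE defaults to a temp file for a dry run; use /etc/hosts for real.
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}
echo '192.168.2.26 gf1.hlmn.co gf1' >> "$HOSTS_FILE"
echo '192.168.2.27 gf2.hlmn.co gf2' >> "$HOSTS_FILE"
```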

Configure GlusterFS

We’ll set up gf1 as the primary server. Many of the Gluster commands only need to be run on one node and take effect across all servers.

  1. Drop into root user
  2. Configure the Trusted Pool on gf1:
    gluster peer probe gf2.hlmn.co
  3. Check to make sure it works by typing this on gf2 as root user:
    # gluster peer status

    The output should be:

    Number of Peers: 1
    
    Hostname: 192.168.2.26
    Uuid: 8aadbadf-8498-4674-8b42-a561d63b2e3d
    State: Peer in Cluster (Connected)
  4. It’s time to set up the disks to be used as bricks. If you’re using KVM and you set up the second disk as a raw VirtIO device, it should be listed as /dev/vd[a-z]. Mine is vdb.
  5. We can double check to make sure it’s the right disk by issuing:
    # fdisk -l /dev/vdb

    And we should get something like this:

    Disk /dev/vdb: 21.0 GB, 20971520000 bytes
    16 heads, 63 sectors/track, 40634 cylinders, total 40960000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/vdb doesn't contain a valid partition table
    
  6. Once we ID the disk, issue:
    # fdisk /dev/vdb
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-40959999, default 2048): 
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-40959999, default 40959999): 
    Using default value 40959999
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks
  7. Install xfs:
    apt-get install xfsprogs
  8. Format the partition:
     mkfs.xfs -i size=512 /dev/vdb1
  9. Mount the partition as a Gluster Brick:
    mkdir -p /export/vdb1 && mount /dev/vdb1 /export/vdb1 && mkdir -p /export/vdb1/brick
  10. Add entry into fstab:
     echo "/dev/vdb1 /export/vdb1 xfs defaults 0 0"  >> /etc/fstab
  11. Repeat Steps 4-10 on gf2.
  12. Now it’s time to set up a replicated volume. On gf1:
    gluster volume create gv0 replica 2 gf1.hlmn.co:/export/vdb1/brick gf2.hlmn.co:/export/vdb1/brick

    An explanation of the above, from Gluster documentation:

    Breaking this down into pieces, the first part says to create a gluster volume named gv0 (the name is arbitrary, gv0 was chosen simply because it’s less typing than gluster_volume_0). Next, we tell it to make the volume a replica volume, and to keep a copy of the data on at least 2 bricks at any given time. Since we only have two bricks total, this means each server will house a copy of the data. Lastly, we specify which nodes to use, and which bricks on those nodes. The order here is important when you have more bricks…it is possible (as of the most current release as of this writing, Gluster 3.3) to specify the bricks in a such a way that you would make both copies of the data reside on a single node. This would make for an embarrassing explanation to your boss when your bulletproof, completely redundant, always on super cluster comes to a grinding halt when a single point of failure occurs.
  13. The above should output:
    volume create: gv0: success: please start the volume to access data
  14. Now, to make sure everything is set up correctly, issue this on both gf1 and gf2; the output should be the same on both servers:
    gluster volume info

    Expected Output:

    Volume Name: gv0
    Type: Replicate
    Volume ID: 064499be-56db-4e66-84c7-2b6712b10fa6
    Status: Created
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gf1.hlmn.co:/export/vdb1/brick
    Brick2: gf2.hlmn.co:/export/vdb1/brick
  15. Status of the above shows “Created”, which means it hasn’t been started yet. Trying to mount the volume at this point would fail, so we have to start it first by issuing this on gf1:
    gluster volume start gv0

    You should see this:

    volume start: gv0: success

Mount Your Gluster Volume on the Host Machine

Now that you have your Gluster volume set up, you can access it using the glusterfs-client on another host.

Source: GlusterHacker

  1. Install the GlusterFS client on a remote host:
    apt-get install glusterfs-client
  2. Create a config location for gluster:
    mkdir /etc/glusterfs
  3. Create a volume config file:
    nano /etc/glusterfs/gfvolume1.vol
  4. Fill in the following:
    volume gv0-client-0
     type protocol/client
     option transport-type tcp
     option remote-subvolume /export/vdb1/brick
     option remote-host gf1.hlmn.co
    end-volume
    
    volume gv0-client-1
     type protocol/client
     option transport-type tcp
     option remote-subvolume /export/vdb1/brick
     option remote-host gf2.hlmn.co
    end-volume
    
    volume gv0-replicate
     type cluster/replicate
     subvolumes gv0-client-0 gv0-client-1
    end-volume
    
    volume writebehind
     type performance/write-behind
     option window-size 1MB
     subvolumes gv0-replicate
    end-volume
    
    volume cache
     type performance/io-cache
     option cache-size 512MB
     subvolumes writebehind
    end-volume

    Gluster reads the above starting at the bottom of the file and working its way up. So it first creates the cache volume, then adds layers for writebehind and replication, and finally the remote volumes.

  5. Add it through fstab (nano /etc/fstab) and add the following:
    /etc/glusterfs/gfvolume1.vol /mnt/gfvolume1 glusterfs rw,allow_other,default_permissions,_netdev 0 0

    This tells fstab about both bricks so that if one goes down, it can connect to the other.

That’s pretty much all it takes to at least get it working.

The performance, on the other hand, will need a lot more looking into, since I’m getting 50 MB/s writes on Gluster where the host can do 250 MB/s. Small file performance is also abysmal.

Fix: LibreOffice Crashing on Attempt to Recover Document

I shut down LibreOffice in a way it didn’t like the other day (or so I can only assume), resulting in LO crashing every time I’d try to open a document. It’d flash the recovery screen for a split second before vanishing.

The quick fix for this is to update your registrymodifications.xcu file to remove the document recovery lines.

In Ubuntu 14.04 for LO v4, this can be found in ~/.config/libreoffice/4/user/registrymodifications.xcu (don’t forget to save a copy as a backup before making changes!)

Search for “RecoveryList” inside registrymodifications.xcu and delete the entire <item>stuffinsidehere</item> entry for recovery.

Save and restart LO, everything should be working fine now.
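
The edit can also be scripted. A sketch with a hypothetical strip_recovery helper, assuming the RecoveryList items sit on their own lines; check with grep first and keep the backup in case your file is formatted differently:

```shell
# strip_recovery FILE: back up registrymodifications.xcu, then delete
# every line mentioning RecoveryList (the stale document-recovery entries).
strip_recovery() {
  cp "$1" "$1.bak"             # keep a backup before editing
  sed -i '/RecoveryList/d' "$1"
}
# Usage: strip_recovery ~/.config/libreoffice/4/user/registrymodifications.xcu
```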


This was slightly modified from here:

http://ask.libreoffice.org/en/question/6376/message-recovery-failed/?answer=6383#post-id-6383

Pandora FMS: Send Alert When User Logs in From Unknown (Untrusted) IP

Documentation on setting up this type of alert was sparse and not very clear. This is what I did to get alerts whenever a user logs in from a source not explicitly identified. It took a little creativity, since my original method ran into some issues: if I set the alert to go off only once, I would only ever be notified once that someone logged in from an unknown address; if I set it to unlimited notifications, I would get an email every time the agent updates.

Overview

  1. Create a custom module
  2. Create a template with a regular expression criteria
  3. Then create an alert.

Steps to Follow:

  1. Log into your agent server via ssh.
  2. Edit the config:
    nano /var/lib/pandorafms/agent/pandora_agent.conf

    Or, if you have pandora_agent_daemon instead (run ls /etc/init.d/ to see whether you have pandorafms-agent (above) or pandora_agent_daemon (below)):

    nano /etc/pandora/pandora_agent.conf
  3. This is my custom module:
    #UnknownIP
    module_begin
    module_name LastLoginUnkIP
    module_type async_string
    module_exec last | grep -v 'host1\|192.168\|host2' | head -1
    module_description Monitor last user login from Unk IP
    module_end
    1. Basically, the above is a modified version of Last Login
    2. It filters out known hosts (the grep -v part) and any IP address containing 192.168.
  4. Restart the pandora agent; depending on your version, it’s either:
    service pandorafms-agent restart
    service pandora_agent_daemon restart
  5. Go to Administration->Manage Alerts->Templates
  6. Create a new template and name it something like LastLoginUnkIPChange. [Screenshot from 2014-11-21 09:06:24]
  7. I set the priority to Informational. I’m not sure of the difference; my guess is that it may affect the color of the alert when it fires.
  8. In Step 2, you can configure it like below: [Screenshot from 2014-11-21 14:00:26]
    1. Default action is Mail to Ryan. If you don’t have that configured, see this article.
    2. Condition type is set to On Change, which means that whenever the value changes, it will send a notification.
    3. Check off Trigger When Matches.
    4. Press next to go to Advanced Fields. This is where we set the message information.
  9. Leave the first few fields blank (depending on how many your Mail To action uses). If you use Field1 and Mail To is set to use Field1, your text won’t be transmitted. Here’s what I have in Field 3:
    Hello, this is an automated email coming from Pandora FMS
    
    This alert has been fired because the last user login is from an unknown address:
    
    Agent : _agent_
    Module: _module_
    Module description: _moduledescription_
    Timestamp _timestamp_
    Current value: _data_
    
    Thanks for your time.
    
    Best regards
    Pandora FMS
    
  10. Press Finish and now we need to create an alert.
  11. Go back to Administration->Manage Alerts and press Create
  12. Fill out like below: [Screenshot from 2014-11-21 14:02:47]
    1. Agent: Choose your agent you’d like to apply to.
    2. Module: Choose LastLoginUnkIP since that’s our custom module.
    3. Template: Choose your template you just made.
    4. Action: should be able to leave it at default action for the template.
  13. Press add alert and test to confirm.
  14. Everything should be done; if it’s working, you should get an email like so: [Screenshot from 2014-11-21 09:34:10]
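
The module_exec filter from step 3 can be dry-run against canned last output (the hostnames and IPs in the sample are made up):

```shell
# Simulated `last` output: an unknown source, a 192.168 address, and host1.
sample='ryan pts/0 203.0.113.9 Fri Nov 21 09:00 still logged in
ryan pts/1 192.168.2.15 Fri Nov 21 08:00 - 08:30
ryan pts/2 host1 Thu Nov 20 22:00 - 23:00'
# Same pipeline as the module_exec line: drop known hosts/subnets,
# keep the most recent remaining login.
first_unknown=$(printf '%s\n' "$sample" | grep -v 'host1\|192.168\|host2' | head -1)
echo "$first_unknown"
```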

Pandora FMS: Create an Alert Based on a Regular Expression (String Match)

Documentation on setting up this type of alert was sparse and not very clear. The below is an example of an alert based on a string match: basically, whenever the data from a certain module matches a string that we specify, it will fire an alert. This was originally created for LastLogin, but I updated that here to address Pandora’s lack of multiple criteria (e.g. On Change and RegEx match).

Overview

  1. Create a template with a regular expression criteria
  2. Then create an alert.

Steps to Follow:

  1. Go to Administration->Manage Alerts->Templates
  2. Create a new template and name it. [Screenshot from 2014-11-21 09:06:24]
  3. I set the priority to Informational. I’m not sure of the difference; my guess is that it may affect the color of the alert when it fires.
  4. In Step 2, you can configure it like below: [Screenshot from 2014-11-21 09:11:13]
    1. Default action is Mail to Ryan. If you don’t have that configured, see this article.
    2. Condition type is set to “Regular Expression” which means RegEx format. That wasn’t very clear in the documentation.
    3. Leave Trigger When Matches unchecked, so that we can create what is basically an exclusion list of domains/hosts that should not fire an alert.
    4. The value to set if you want multiple hosts excluded from the alert is:
      1. (hostname1|hostname2|internalip|etc…)
      2. What the above says is if in the data field from LastLogin there is a match (no need for wildcards) for hostname1 OR hostname2 OR 192.168 OR …, don’t send an alert. If it’s anything else, send an alert.
      3. Max number of alerts sets how many times it will be fired before it stops letting you know.
      4. TIP: check your agents to see what they show in the data field for Last Login. I noticed that long hostnames were truncated, so instead of typing in “ryanhallman.com,” I had to put in “ryanhallman”.
  5. Press next to go to Advanced Fields. This is where we set the message information.
  6. Leave the first few fields blank (depending on how many your Mail To action uses). If you use Field1 and Mail To is set to use Field1, your text won’t be transmitted. Here’s what I have in Field 3:
    Hello, this is an automated email coming from Pandora FMS
    
    This alert has been fired because the last user login is from an unknown address:
    
    Agent : _agent_
    Module: _module_
    Module description: _moduledescription_
    Timestamp _timestamp_
    Current value: _data_
    
    Thanks for your time.
    
    Best regards
    Pandora FMS
    
  7. Press Finish and now we need to create an alert.
  8. Go back to Administration->Manage Alerts and press Create
  9. Fill out like below: [Screenshot from 2014-11-21 09:29:12]
    1. Agent: Choose your agent you’d like to apply to.
    2. Module: Choose LastLogin since that’s what we created our template for.
    3. Template: Choose your template you just made.
    4. Action: should be able to leave it at default action for the template.
    5. Number of alerts to match: this can be less than what’s specified in the template, but not greater than.
  10. Press add alert and test to confirm.
  11. Everything should be done; if it’s working, you should get an email like so: [Screenshot from 2014-11-21 09:34:10]
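
You can sanity-check the exclusion pattern with grep -E, which is a rough approximation of how the template’s Regular Expression condition matches (the hostnames below are placeholders):

```shell
# Alert fires only when the data does NOT match the exclusion pattern.
pattern='(hostname1|hostname2|192.168)'
for data in 'ryan pts/0 192.168.2.15' 'ryan pts/0 203.0.113.9'; do
  if printf '%s' "$data" | grep -Eq "$pattern"; then
    echo "known source, no alert: $data"
  else
    echo "ALERT: $data"
  fi
done
```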

Pandora FMS: Install Agent on Ubuntu

  1. Drop into root account
    apt-get update && apt-get install pandorafms-agent
  2. Pandora will install from the repositories and start itself upon completion of install. We need to reconfigure it to point to our PandoraFMS server.
  3. Stop the pandora agent service
    # service pandorafms-agent stop
  4. If it gives an error, or can’t find the service, type in:
    # ls /etc/init.d/ | grep pandora
    
  5. Based on the output, that’s the name of the service. One of my previous installs was pandora_agent_daemon.
  6. Edit the config file to point it to your Pandora server, if not localhost:
    # nano /etc/pandorafms/pandora_agent.conf
  7. Go to the server_ip section and change it from localhost to the IP of your pandora server:
    # General Parameters
    # ==================
    
    server_ip       192.168.XX.XXX
    
  8. Restart the pandora service and you should now see it in your Agent List on the Pandora Server.
    # service pandorafms-agent start
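
Steps 6-7 can also be scripted. A sketch with a hypothetical set_server_ip helper; the config path varies by package, as noted above, and the IP in the usage line is just an example:

```shell
# set_server_ip FILE IP: rewrite the server_ip line in the agent config
# so the agent reports to the given Pandora server instead of localhost.
set_server_ip() {
  sed -i "s/^server_ip[[:space:]].*/server_ip       $2/" "$1"
}
# Usage: set_server_ip /etc/pandorafms/pandora_agent.conf 192.168.2.10
```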