Notes on Installing GlusterFS on Ubuntu

Overview of Setup

Primary Gluster Server

Hostname: gf1.hlmn.co

IP Address: 192.168.2.26

OS: Ubuntu 14.04

Memory: 1GB

Secondary Gluster Server

Hostname: gf2.hlmn.co

IP Address: 192.168.2.27

OS: Ubuntu 14.04

Memory: 1GB

 

Prepare the Virtual Machines

  1. Create a new clean, base Ubuntu 14.04 install
  2. Name it gf1 and set up the hosts file and hostname file to match, along with the domain information.
  3. Add a raw VirtIO disk to be used by Gluster as the brick. We’ll call this gf1_brick1.img (see the sketch after this list).
  4. Repeat for the second machine, naming it gf2.
  5. Once they’re set up, make sure they’re both updated:
    sudo apt-get update && sudo apt-get upgrade
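
If you’re creating the brick disk by hand on a KVM host, something like the sketch below works. This is a hedged example; the image path, size, and target device name (vdb) are assumptions you’ll want to adapt:

    # On the KVM host: create a 20GB raw image and attach it to the gf1 VM as a VirtIO disk
    qemu-img create -f raw /var/lib/libvirt/images/gf1_brick1.img 20G
    virsh attach-disk gf1 /var/lib/libvirt/images/gf1_brick1.img vdb --subdriver raw --targetbus virtio --persistent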

Install Gluster on Both Nodes

  1. Install python-software-properties:
    $ sudo apt-get install python-software-properties
  2. Add the PPA:
    $ sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.5
    $ sudo apt-get update
  3. Then install Gluster packages:
    $ sudo apt-get install glusterfs-server
  4. Add both hosts to your DNS server (or each machine’s /etc/hosts file) so that they can resolve each other by hostname (see the sketch after this list).
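
If you don’t manage DNS for these hosts, entries in /etc/hosts on both nodes (and on any client that will mount the volume) work just as well. A minimal sketch using the addresses from the overview above:

    # /etc/hosts on gf1, gf2, and any client
    192.168.2.26    gf1.hlmn.co    gf1
    192.168.2.27    gf2.hlmn.co    gf2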

Configure GlusterFS

We’ll set up gf1 as the primary server. Many of the Gluster commands only need to be run on one node; they take effect across both (or all) servers in the pool.

  1. Drop into root user
  2. Configure the Trusted Pool on gf1:
    gluster peer probe gf2.hlmn.co
  3. Check to make sure it works by typing this on gf2 as root user:
    # gluster peer status

    The output should be:

    Number of Peers: 1
    
    Hostname: 192.168.2.26
    Uuid: 8aadbadf-8498-4674-8b42-a561d63b2e3d
    State: Peer in Cluster (Connected)
  4. It’s time to set up the disks to be used as bricks. If you’re using KVM and you set up the second disk as a raw VirtIO device, it should be listed as /dev/vd[a-z]. Mine is vdb.
  5. We can double-check that it’s the right disk by issuing:
    # fdisk -l /dev/vdb

    And we should get something like this:

    Disk /dev/vdb: 21.0 GB, 20971520000 bytes
    16 heads, 63 sectors/track, 40634 cylinders, total 40960000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/vdb doesn't contain a valid partition table
    
  6. Once we ID the disk, issue:
    # fdisk /dev/vdb
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-40959999, default 2048): 
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-40959999, default 40959999): 
    Using default value 40959999
    
    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    Syncing disks
  7. Install xfs:
    apt-get install xfsprogs
  8. Format the partition:
     mkfs.xfs -i size=512 /dev/vdb1
  9. Mount the partition as a Gluster Brick:
    mkdir -p /export/vdb1 && mount /dev/vdb1 /export/vdb1 && mkdir -p /export/vdb1/brick
  10. Add entry into fstab:
     echo "/dev/vdb1 /export/vdb1 xfs defaults 0 0"  >> /etc/fstab
  11. Repeat steps 4-10 on gf2 (a consolidated sketch of these brick-prep commands follows this list).
  12. Now it’s time to setup a replicated volume. On gf1:
    gluster volume create gv0 replica 2 gf1.hlmn.co:/export/vdb1/brick gf2.hlmn.co:/export/vdb1/brick

    An explanation of the above, from Gluster documentation:

    Breaking this down into pieces, the first part says to create a gluster volume named gv0 (the name is arbitrary, gv0 was chosen simply because it’s less typing than gluster_volume_0). Next, we tell it to make the volume a replica volume, and to keep a copy of the data on at least 2 bricks at any given time. Since we only have two bricks total, this means each server will house a copy of the data. Lastly, we specify which nodes to use, and which bricks on those nodes. The order here is important when you have more bricks…it is possible (as of the most current release as of this writing, Gluster 3.3) to specify the bricks in such a way that you would make both copies of the data reside on a single node. This would make for an embarrassing explanation to your boss when your bulletproof, completely redundant, always on super cluster comes to a grinding halt when a single point of failure occurs.
  13. The above should output:
    volume create: gv0: success: please start the volume to access data
  14. Now, to make sure everything is setup correctly, issue this on both gf1 and gf2, output should be the same on both servers:
    gluster volume info

    Expected Output:

    Volume Name: gv0
    Type: Replicate
    Volume ID: 064499be-56db-4e66-84c7-2b6712b10fa6
    Status: Created
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: gf1.hlmn.co:/export/vdb1/brick
    Brick2: gf2.hlmn.co:/export/vdb1/brick
  15. The status above shows “Created”, which means the volume hasn’t been started yet. Trying to mount the volume at this point would fail, so we have to start it first by issuing this on gf1:
    gluster volume start gv0

    You should see this:

    volume start: gv0: success
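
For reference, here’s a consolidated, non-interactive sketch of the brick preparation in steps 4-10, run as root on each node. It uses parted instead of the interactive fdisk session above and assumes the brick disk really is /dev/vdb:

    # Partition the brick disk (equivalent to the fdisk session above)
    apt-get install -y xfsprogs parted
    parted -s /dev/vdb mklabel msdos
    parted -s -a optimal /dev/vdb mkpart primary 0% 100%

    # Format the partition and mount it where the brick will live
    mkfs.xfs -i size=512 /dev/vdb1
    mkdir -p /export/vdb1
    mount /dev/vdb1 /export/vdb1
    mkdir -p /export/vdb1/brick
    echo "/dev/vdb1 /export/vdb1 xfs defaults 0 0" >> /etc/fstab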

Mount Your Gluster Volume on the Host Machine

Now that you have your Gluster volume set up, you can access it using the glusterfs-client package on another host.

Source: GlusterHacker

  1. Install the GlusterFS client on a remote host:
    apt-get install glusterfs-client
  2. Create a config location for gluster:
    mkdir /etc/glusterfs
  3. Create a volume config file:
    nano /etc/glusterfs/gfvolume1.vol
  4. Fill in the following:
    volume gv0-client-0
     type protocol/client
     option transport-type tcp
     option remote-subvolume /export/vdb1/brick
     option remote-host gf1.hlmn.co
    end-volume
    
    volume gv0-client-1
     type protocol/client
     option transport-type tcp
     option remote-subvolume /export/vdb1/brick
     option remote-host gf2.hlmn.co
    end-volume
    
    volume gv0-replicate
     type cluster/replicate
     subvolumes gv0-client-0 gv0-client-1
    end-volume
    
    volume writebehind
     type performance/write-behind
     option window-size 1MB
     subvolumes gv0-replicate
    end-volume
    
    volume cache
     type performance/io-cache
     option cache-size 512MB
     subvolumes writebehind
    end-volume

    Gluster reads the above starting at the bottom of the file and working its way up. So it first creates the cache volume, then adds layers for write-behind and replication, and finally the remote client volumes.

  5. Add it through fstab (nano /etc/fstab) and add the following:
    /etc/glusterfs/gfvolume1.vol /mnt/gfvolume1 glusterfs rw,allow_other,default_permissions,_netdev 0 0

    Because the volume file lists both bricks, the client can fail over to the other server if one goes down. (An alternative native mount is sketched below.)
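
As an alternative to the hand-written volume file, the glusterfs-client package can also mount the volume natively by pointing at one of the servers. A hedged sketch; the backupvolfile-server mount option (which lets the client fall back to gf2 if gf1 is unreachable at mount time) may vary by GlusterFS version:

    mkdir -p /mnt/gfvolume1
    mount -t glusterfs gf1.hlmn.co:/gv0 /mnt/gfvolume1

    # Or via /etc/fstab:
    gf1.hlmn.co:/gv0 /mnt/gfvolume1 glusterfs defaults,_netdev,backupvolfile-server=gf2.hlmn.co 0 0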

That’s pretty much all it takes to get it working.

Performance, on the other hand, will need a lot more looking into, since I’m getting 50 MB/s writes on Gluster where the host can do 250 MB/s. Small-file performance is also abysmal.

Fix: LibreOffice Crashing on Attempt to Recover Document

I shut down LibreOffice in a way it didn’t like the other day (or I can only assume), resulting in LO crashing every time I’d try to open a document. It’d flash the recovery screen for a split second before vanishing.

The quick fix for this is to update your registrymodifications.xcu file to remove the document recovery lines.

In Ubuntu 14.04 for LO v4, this can be found in ~/.config/libreoffice/4/user/registrymodifications.xcu (don’t forget to save a copy as a backup before making changes!)

Search for “RecoveryList” inside registrymodifications.xcu and delete the entire <item>stuffinsidehere</item> entry for recovery.
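
If you prefer doing this from a terminal, here’s a minimal sketch of the same fix. It assumes each <item> element sits on its own line in the file, which is how LibreOffice normally writes it; check the backup if anything looks off:

    # Back up the file, then strip every line that mentions RecoveryList
    cp ~/.config/libreoffice/4/user/registrymodifications.xcu ~/registrymodifications.xcu.bak
    grep -v 'RecoveryList' ~/registrymodifications.xcu.bak > ~/.config/libreoffice/4/user/registrymodifications.xcu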

Save and restart LO, everything should be working fine now.

 

This was slightly modified from here:

http://ask.libreoffice.org/en/question/6376/message-recovery-failed/?answer=6383#post-id-6383

Pandora FMS: Send Alert When User Logs in From Unknown (Untrusted) IP

Documentation on setting up this type of alert was sparse, and not very clear. This is what I did to get alerts whenever a user logs in from a source not explicitly identified. This took a little creativity since my original method ran into some issues. If I set the alert to go off only once, I would only be notified once (ever) that someone logged in from an unknown address. If I set it to unlimited notifications, every time the agent updates, I would get an email.

Overview

  1. Create a custom module
  2. Create a template with a regular expression criteria
  3. Then create an alert.

Steps to Follow:

  1. Log into your agent server via ssh.
  2. Edit the config:
    nano /var/lib/pandorafms/agent/pandora_agent.conf

    Or, if you have pandora_agent_daemon instead (run ls /etc/init.d/ to see whether you have pandorafms-agent, above, or pandora_agent_daemon, below):

    nano /etc/pandora/pandora_agent.conf
  3. This is my custom module:
    #UnknownIP
    module_begin
    module_name LastLoginUnkIP
    module_type async_string
    module_exec last | grep -v 'host1\|192.168\|host2' | head -1
    module_description Monitor last user login from Unk IP
    module_end
    1. Basically, the above is a modified version of Last Login
    2. It filters out known hosts (the grep -v part) and any IP address containing 192.168. You can test the command from a shell first; see the sketch after this list.
  4. Restart the pandora agent, depending on your version, it’s either:
    service pandorafms-agent restart
    service pandora_agent_daemon restart
  5. Go to Administration->Manage Alerts->Templates
  6. Create a new template and name it something like LastLoginUnkIPChange. [Screenshot from 2014-11-21 09:06:24]
  7. I set the priority to Informational. I’m not sure what difference it makes; my guess is that it affects the color of the alert when it fires.
  8. In Step 2, you can configure it like below. [Screenshot from 2014-11-21 14:00:26]
    1. Default action is Mail to Ryan. If you don’t have that configured, see this article.
    2. Condition type is set to On Change, which means that whenever the value changes, it will send a notification.
    3. Check off Trigger When Matches.
    4. Press next to go to Advanced Fields. This is where we set the message information.
  9. Leave the first few fields blank (depending on how many your Mail To action uses). If you use Field1 and Mail To is set to use Field1, your text won’t be transmitted. Here’s what I have in Field 3:
    Hello, this is an automated email coming from Pandora FMS
    
    This alert has been fired because the last user login is from an unknown address:
    
    Agent : _agent_
    Module: _module_
    Module description: _moduledescription_
    Timestamp _timestamp_
    Current value: _data_
    
    Thanks for your time.
    
    Best regards
    Pandora FMS
    
  10. Press Finish and now we need to create an alert.
  11. Go back to Administration->Manage Alerts and press Create
  12. Fill it out like below. [Screenshot from 2014-11-21 14:02:47]
    1. Agent: Choose your agent you’d like to apply to.
    2. Module: Choose LastLoginUnkIP since that’s our custom module.
    3. Template: Choose your template you just made.
    4. Action: you should be able to leave it at the default action for the template.
  13. Press add alert and test to confirm.
  14. Everything should be done; if it’s working, you should get an email like this. [Screenshot from 2014-11-21 09:34:10]
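
To sanity-check the custom module’s command from step 3 before waiting for an alert, you can run it by hand on the monitored host; it should print the most recent login that is not from one of your excluded hosts or a 192.168.x.x address:

    last | grep -v 'host1\|192.168\|host2' | head -1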

Pandora FMS: Create an Alert Based on a Regular Expression (String Match)

Documentation on setting up this type of alert was sparse, and not very clear. The below is an example of an alert based on a string match — basically, whenever the data from a certain module matches a string that we specify, it will fire an alert. This was originally created for LastLogin, but I updated that here to address Pandora’s lack of multiple criteria (e.g. On Change and RegEx match).

Overview

  1. Create a template with a regular expression criteria
  2. Then create an alert.

Steps to Follow:

  1. Go to Administration->Manage Alerts->Templates
  2. Create a new template and name it. [Screenshot from 2014-11-21 09:06:24]
  3. I set the priority to Informational. I’m not sure what difference it makes; my guess is that it affects the color of the alert when it fires.
  4. In Step 2, you can configure it like below. [Screenshot from 2014-11-21 09:11:13]
    1. Default action is Mail to Ryan. If you don’t have that configured, see this article.
    2. Condition type is set to “Regular Expression” which means RegEx format. That wasn’t very clear in the documentation.
    3. Leave Trigger When Matches unchecked, so that we can basically create an exclusion list of domains/hosts that should not fire an alert.
    4. The value to set if you want multiple hosts excluded from the alert is:
      1. (hostname1|hostname2|internalip|etc…)
      2. What the above says is if in the data field from LastLogin there is a match (no need for wildcards) for hostname1 OR hostname2 OR 192.168 OR …, don’t send an alert. If it’s anything else, send an alert.
      3. Max number of alerts sets how many times it will be fired before it stops letting you know.
      4. TIP: check your agents to see what they show in the data field for Last Login. I noticed that long hostnames were truncated, so instead of typing in “ryanhallman.com,” I had to put in “ryanhallman”.
  5. Press next to go to Advanced Fields. This is where we set the message information.
  6. Leave the first few fields blank (depending on how many your Mail To action uses). If you use Field1 and Mail To is set to use Field1, your text won’t be transmitted. Here’s what I have in Field 3:
    Hello, this is an automated email coming from Pandora FMS
    
    This alert has been fired because the last user login is from an unknown address:
    
    Agent : _agent_
    Module: _module_
    Module description: _moduledescription_
    Timestamp _timestamp_
    Current value: _data_
    
    Thanks for your time.
    
    Best regards
    Pandora FMS
    
  7. Press Finish and now we need to create an alert.
  8. Go back to Administration->Manage Alerts and press Create
  9. Fill it out like below. [Screenshot from 2014-11-21 09:29:12]
    1. Agent: Choose your agent you’d like to apply to.
    2. Module: Choose LastLogin since that’s what we created our template for.
    3. Template: Choose your template you just made.
    4. Action: you should be able to leave it at the default action for the template.
    5. Number of alerts to match: this can be less than what’s specified in the template, but not greater than it.
  10. Press add alert and test to confirm.
  11. Everything should be done; if it’s working, you should get an email like this. [Screenshot from 2014-11-21 09:34:10]

Pandora FMS: Install Agent on Ubuntu

  1. Drop into root account
    apt-get update && apt-get install pandorafms-agent
  2. Pandora will install from the repositories and start itself upon completion of install. We need to reconfigure it to point to our PandoraFMS server.
  3. Stop the pandora agent service
    # service pandorafms-agent stop
  4. If it gives an error, or can’t find the service, type in:
    # ls /etc/init.d/ | grep pandora
    
  5. Based on the output, that’s the name of the service. One of my previous installs was pandora_agent_daemon.
  6. Edit the config file to point it to your Pandora server, if not localhost:
    # nano /etc/pandorafms/pandora_agent.conf
  7. Go to the server_ip section and change it from localhost to the IP of your pandora server (a one-liner sketch follows this list):
    # General Parameters
    # ==================
    
    server_ip       192.168.XX.XXX
    
  8. Restart the pandora service and you should now see it in your Agent List on the Pandora Server.
    # service pandorafms-agent start
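
If you’d rather script step 7, here’s a hedged one-liner sketch; the config path varies between versions (see step 6), and the IP below is the placeholder from above:

    sed -i 's/^server_ip.*/server_ip 192.168.XX.XXX/' /etc/pandorafms/pandora_agent.conf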
    

Pandora FMS: Send Email Alert on Low Disk Space

To configure PandoraFMS to email you if your available disk space falls below a certain threshold, follow these steps:

    1. Select the Agent and view its modules. [Screenshot from 2014-11-19 22:18:04]
    2. I want to be notified if Disk_/raid/media01 and Disk_/raid/media02 fall below 10%. So, click the wrench to see what the Warning and Critical conditions are set at. [Screenshot from 2014-11-19 22:19:46]
    3. My Warning status is set to Max 10 and Min 5, which means that if Disk Space Available drops below 10%, it will be in the Warning status. It gets pushed to Critical if it drops below 5%, all the way down to 0%.
    4. Now, I’m going to set up an email alert if it hits the Warning status.
    5. Go to Manage Alerts->Create Alert
    6. Fill it in like below to send an email at the Warning status (or Critical). [Screenshot from 2014-11-19 22:23:38]
    7. And you’re done!

Pandora FMS: Create an Alert for a Non-Responding Agent (Monitor Downtime)

Assuming you’ve set up email alerts, one of the first things I wanted to monitor was whether my agent had stopped updating.

To do this:

  1. On the left Nav bar, go to Administration->Manage Monitoring->Manage Agents.
  2. Find your agent, and click the link for Modules below the agent name. [Screenshot: Agent]
  3. At the top, press Create (leave it on the default “Create a new data server module”). [Screenshot from 2014-11-19 21:53:20]
  4. Most of this can be left at the default settings; just call the module what you’d like (e.g. MyKeepAlive), choose KeepAlive as the Type, then press Save.
  5. Now that we have a module to monitor the status of the agent, we can create an alert for it.
  6. Follow these steps and choose MyKeepAlive as the module name with a status of Critical and you’ll get an email when the agent is no longer updating.

NOTE: You may notice a little yellow triangle stating it’s a non-initialised module. This is normal; refresh the screen and you should be good.

Pandora FMS: Setting Up Email Alerts

Mostly because of my stubbornness in not reading the manual (RTFM…), it took a bit to figure out how to get emails working. I’m going to assume the Pandora host machine has postfix set up correctly; if not, follow this setup in Pandora’s documentation.

Overview

  1. Create an action
  2. Create an alert

Steps to Follow

Here’s the quick and dirty on getting email alerts setup:

  1. Go to Administration on the left Nav bar and click on Manage Alerts, then Actions.
  2. Then click Create on the right
    [Screenshot: PandoraFMS Alerts 1]
  3. Next, we need to configure our action:
    1. Create a name like Mail to Me
    2. Specify a group (or set it to all)
    3. Set command to eMail
    4. Command Preview = Internal Type
    5. Destination Address = your email address
    6. Subject = [PANDORA] Alert from agent _agent_ on module _module_ [Screenshot: PandoraFMS Alert Email 2]
  4. After that, we’ll need to go back to Manage Alerts and create a new Alert. [Screenshot: PandoraFMS Alerts 4]
  5. Choose the agent you want to apply the alert to.
  6. Select the module that you’d like to trigger the alert (e.g. Free Memory)
  7. Select Template (I chose Critical Condition)
  8. Actions = Choose your action you created above (Mail to Ryan).
  9. Number of Alerts to Match
    1. I set mine to 0 to 1, so the first time it happens (0), it will trigger an alert, and it will do so only once. You could set it to 10-20, which would require the condition to be critical 10 times before it triggers an alert, and it would then keep alerting for the next 10.
  10. Threshold of time before it triggers the alert.
  11. Press Add Alert and you’re done.

Testing the Alert

  1. Go to Agent Details and select your agent that you created an alert for.
  2. Scroll to the bottom and you should see something like this. [Screenshot: PandoraFMS Alert5]
  3. Press the green circle to force the alert to be sent.
  4. Check your inbox, and you should have something like this. [Screenshot: Pandora EMail]

Notes on Setting Up a Central Log Management Server (Logstash, Elasticsearch & Kibana)

REVISED on February 23, 2015 due to several minor changes with the new packages.

Overview of Setup:

Logstash Server:

Ubuntu 14.04 LTS with 4 GB of RAM

Part 1 > Install OpenJDK

  1. Install OpenJDK
    $ sudo apt-get update 
    $ sudo apt-get install openjdk-7-jre-headless

Part 2 > Install Logstash (The Indexer)

This is on the log server side. It indexes the logs and pipes them into Elasticsearch.

  1. Download Logstash & install the files
    wget https://download.elasticsearch.org/logstash/logstash/packages/debian/logstash_1.4.2-1-2c0f5a1_all.deb
    dpkg -i logstash_1.4.2-1-2c0f5a1_all.deb
  2. Generate SSL Certs
    mkdir -p /etc/pki/tls/certs
    mkdir /etc/pki/tls/private
  3. Add the Logstash server’s IP address as a subjectAltName under the [v3_ca] section of the OpenSSL config:
    nano /etc/ssl/openssl.cnf
    subjectAltName = IP:ipaddressofhost
    
  4. Next, generate the certificate and private key:
    cd /etc/pki/tls; sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
  5. Later, we’ll copy that certificate to each server that will be forwarding logs to Logstash.
  6. Next, we’ll configure Logstash. Config files should be placed in /etc/logstash/conf.d/
  7. First, create an input config. We’ll name it 01-lumberjack-input.conf; the 01 prefix places it first in line to be read by Logstash.
    nano /etc/logstash/conf.d/01-lumberjack-input.conf
  8. Place this in the lumberjack input conf:
    input {
      lumberjack {
        port => 5000
        type => "logs"
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }
  9. Next, let’s create a filter for syslog messages:
    nano /etc/logstash/conf.d/10-syslog.conf
    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
  10. Grok will parse the messages based on the above patterns, which makes the logs structured and searchable inside Kibana.
  11. For the last component, we’ll create the lumberjack output config file:
    nano /etc/logstash/conf.d/30-lumberjack-output.conf
    output {
      elasticsearch { host => localhost }
      stdout { codec => rubydebug }
    }
  12. Additional filters need to be created for each type of log (e.g. Apache). You can create them later, with a filename numbered between 01 and 30 so that it sorts between the input and output configuration files (an example Apache filter is sketched after this list).
  13. Restart logstash
    service logstash restart
  14. Disable Logstash’s built-in web frontend:
    service logstash-web stop
    update-rc.d -f logstash-web remove
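
As an example of the additional filters mentioned in step 12, an Apache access-log filter could look like the sketch below, using the stock COMBINEDAPACHELOG grok pattern. The apache-access type name is an assumption; it has to match whatever type your forwarders assign to those log files:

    nano /etc/logstash/conf.d/11-apache.conf
    filter {
      if [type] == "apache-access" {
        grok {
          match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
      }
    }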

 

Part 3 > Install Elasticsearch

  1. Download and install elasticsearch
    wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.2.deb
    dpkg -i elasticsearch-1.4.2.deb
  2. Edit the Elasticsearch config to allow Kibana to talk to it; add this at the end of /etc/elasticsearch/elasticsearch.yml:
    http.cors.enabled: true
    http.cors.allow-origin: "/.*/"
    script.disable_dynamic: true
  3. Restart Elasticsearch (a quick sanity check is sketched after this list):
    service elasticsearch restart
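
To confirm Elasticsearch came back up after the restart, query it directly; it should answer with a small JSON blob containing the node name and version, and the cluster health endpoint should report a status:

    curl http://localhost:9200
    curl http://localhost:9200/_cluster/health?pretty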

Part 4 > Install Kibana (Web Frontend)

  1. Download the Kibana package, unpack it, and move it to the /var/www folder:
    wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.2.tar.gz
    tar xvf kibana*
    mv kibana-3.1.2 /var/www/kibana
    
  2. Edit config.js in /var/www/kibana/ and replace port 9200 with 80:
     elasticsearch: "http://"+window.location.hostname+":80",
  3. Create a virtual host file for Kibana in Apache2 pointing at /var/www/kibana (a minimal sketch follows).
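
A minimal sketch of that virtual host, assuming the stock Apache 2.4 layout on Ubuntu 14.04; the file name and ServerName are placeholders. Note that with config.js pointed at port 80 you’ll also need Apache to proxy Elasticsearch requests (mod_proxy), or you can leave config.js at 9200 and make that port reachable from your browser instead:

    # /etc/apache2/sites-available/kibana.conf
    <VirtualHost *:80>
        ServerName kibana.example.com
        DocumentRoot /var/www/kibana
        <Directory /var/www/kibana>
            Require all granted
        </Directory>
    </VirtualHost>

    # Enable the site and reload Apache
    a2ensite kibana
    service apache2 reload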

Part 5 > Install Logstash Forwarder

UPDATE: As of 2/16/2015, the deb repo was taken down. I needed to figure out the steps to compile from the master branch, since that seems to be the only way. Full discussion here.

UPDATE (02/23/2015): Here are the steps to compile from master. 

Do these steps on each server:

  1. Copy crt from Logstash server to each forwarding machine:
    scp /etc/pki/tls/certs/logstash-forwarder.crt username@remoteip:/tmp
  2. Compile from source:
    1. Download the zip file from GitHub: https://github.com/elasticsearch/logstash-forwarder
    2. Unzip and cd to the directory.
    3. Make sure you have the build tools; if not:
      1. apt-get install gccgo-go
    4. # go build
      # mkdir -p /opt/logstash-forwarder/bin/
    5. # mv logstash-forwarder-master /opt/logstash-forwarder/bin/logstash-forwarder
  3. Install the init script to get Logstash Forwarder to start on bootup:
    cd /etc/init.d/; sudo wget https://raw.github.com/elasticsearch/logstash-forwarder/master/logstash-forwarder.init -O logstash-forwarder
    sudo chmod +x logstash-forwarder
    sudo update-rc.d logstash-forwarder defaults
  4. Copy the certs over:
    mkdir -p /etc/pki/tls/certs
    cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/
  5. Create and edit the logstash forwarder config file:
    nano /etc/logstash-forwarder
    
     {
       "network": {
         "servers": [ "logstashserverip:5000" ],
         "timeout": 15,
         "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
       },
       "files": [
         {
           "paths": [
             "/var/log/syslog",
             "/var/log/auth.log"
           ],
           "fields": { "type": "syslog" }
         }
       ]
     }
    
  6. Restart the service on each forwarding machine and check Kibana to see that they are successfully shipping their logs (a quick query sketch follows).
    service logstash-forwarder restart
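
Besides looking in Kibana, a quick way to verify logs are actually arriving is to query Elasticsearch on the Logstash server for the newest syslog-typed event:

    curl 'http://localhost:9200/_search?q=type:syslog&size=1&pretty'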