
How To: Play Oculus Rift Games On HTC Vive

If you don't have a VR headset yet, you may be wondering which to get. There seems to be ample support for both, but which one is the best? Now you don't have to worry; thanks to some dedicated developers, the answer is easy. Get the HTC Vive. Why? Because the HTC Vive is the more versatile of the two. You can play either sitting or standing (stationary) or within a specific set of bounds (room scale). Now you can also play Oculus Rift only games on the HTC Vive, and here's how.

First, if you haven't already, install SteamVR. You've more than likely already done this, but it's worth mentioning just in case.

Next, install the Oculus Rift Setup software:  http://www.oculus.com/setup

Finally, install the latest version of Revive: https://github.com/LibreVR/Revive/releases

Once those steps are done, launch SteamVR and you'll have a new button at the bottom of the Steam overlay called "Revive".

Now that you have Revive installed, there are several free games from Oculus that you can play. At the time of this publication they included Farlands and a few others. Personally, I suggest Farlands; it's a very cool Viva Piñata meets discovery game. Enjoy the new set of games in your HTC Vive library.

Arch Linux Hydra Build

For a long time now I've been wanting to build my ideal computer: a powerful Linux machine with surrounding monitors. Recently I had the opportunity to do so. The following is a time-lapse video of the entire build, as well as the configuration files and a list of components and where to buy them. This way, if anyone else wants to construct the same build or something along the same lines, they'll have some footsteps to follow if they need the help.

During the trial and error process there were little milestones along the way.

Finally I got everything working correctly and the Hydra came to life.

Build Components

The following is a list of all the components used in this build along with links to purchase.

#archlinux #hydra the calm before the storm. #Linux

A photo posted by Jason Graves (@godlikemouse)

On to part three of the Linux Hydra setup, the heads. 24” monitors all around.


Part 2 wire maintenance of the Linux Hydra complete.


Physical Configuration

I went with dual NVIDIA Quadro K620 video cards because they were very inexpensive and could each drive up to four monitors via daisy chaining over DisplayPort 1.2. The inexpensive monitors I found, however, didn't support direct daisy chaining, so I purchased two multi-monitor display adapters which essentially did the daisy chaining for me. Physically, this is the setup.

GPU 1 -> DisplayPort Splitter 1 -> Monitors 1, 2, 3, 4
GPU 2 -> DisplayPort Splitter 2 -> Monitors 5, 6, 7, 8

 

Xorg Configuration

My chosen distribution for this build was Arch Linux. Yours may be different; however, the Xorg configuration should be similar, if not identical, if you're using the same components. If not, then perhaps this will still help you along the way.

My first setup was using the nvidia proprietary driver, installed by the following:

user@computer:$ yaourt nvidia-beta-all

This driver set didn't have Base Mosaic or Xinerama working. Every time either was turned on, the screen would black out and I would have to hard reset the machine and manually remove the settings.

Visually, what I was setting up was two screens: one for the first GPU and another for the second. The use of nvidia-settings made this very painless; I suggest you use it as well if you're using NVIDIA cards.

user@computer:$ sudo pacman -S nvidia-settings
user@computer:$ sudo nvidia-settings

Once you save the changes from nvidia-settings, you should see the resulting file located at /etc/X11/xorg.conf.

Here’s a visual representation of my screen Xorg setup.

Screen0 Screen1
Monitor 1 Monitor 2 Monitor 5 Monitor 6
Monitor 3 Monitor 4 Monitor 7 Monitor 8

Hopefully this helps in understanding what I was trying to achieve. This gave great results and allowed me to get all the monitors working; however, there were two issues with this setup.

The first: getting a graphical environment to support eight monitors was a bit troublesome. I started off using Gnome 3, tried KDE, etc., yet none of them supported anything other than a dual monitor setup. Also, xrandr did nothing to help here and even reported that the second GPU wasn't available, which was absolutely not the case. The solution was to install a different desktop that supported multiple monitors. I went with XFCE4.

The second: windows could only be moved within the screen in which they were created. In other words, a window created in Screen0 could only move through monitors 1-4. This became more of a problem over time and was resolved later. Here's my first working Xorg.conf file.

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 370.28 (buildmeister@swio-display-x64-rhel04-17) Thu Sep 1 20:21:47 PDT 2016
 
Section "ServerLayout"
 Identifier "Layout0"
 Screen 0 "Screen0" 0 0
 Screen 1 "Screen1" 3840 0
 InputDevice "Keyboard0" "CoreKeyboard"
 InputDevice "Mouse0" "CorePointer"
 Option "Xinerama" "0"
EndSection
 
Section "Files"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Mouse0"
 Driver "mouse"
 Option "Protocol" "auto"
 Option "Device" "/dev/psaux"
 Option "Emulate3Buttons" "no"
 Option "ZAxisMapping" "4 5"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Keyboard0"
 Driver "kbd"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor0"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor1"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Device"
 Identifier "Device0"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:1:0:0"
EndSection
 
Section "Device"
 Identifier "Device1"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:2:0:0"
EndSection
 
Section "Screen"
 Identifier "Screen0"
 Device "Device0"
 Monitor "Monitor0"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "nvidiaXineramaInfoOrder" "DFP-2.2.1"
 Option "metamodes" "DP-1.2.1: nvidia-auto-select +1920+1080, DP-1.2.2: nvidia-auto-select +0+1080, DP-1.1.1: nvidia-auto-select +1920+0, DP-1.1.2: nvidia-auto-select +0+0"
 Option "SLI" "Off"
 Option "MultiGPU" "Off"
 Option "BaseMosaic" "off"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection
 
Section "Screen"
 Identifier "Screen1"
 Device "Device1"
 Monitor "Monitor1"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "metamodes" "DP-1.2.1: nvidia-auto-select +0+0, DP-1.2.2: nvidia-auto-select +1920+0, DP-1.1.1: nvidia-auto-select +0+1080, DP-1.1.2: nvidia-auto-select +1920+1080"
 Option "SLI" "Off"
 Option "MultiGPU" "Off"
 Option "BaseMosaic" "off"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection

The above configuration eventually became a bit problematic, so I reached out to NVIDIA to find out if there was anything that could be done to get Base Mosaic and Xinerama working in their driver. The response I received was to try using the following driver:

http://www.nvidia.com/download/driverResults.aspx/106780/en-us

This turned out to be very helpful. Once the driver was installed I once again turned on Base Mosaic and, to my surprise, the entire panel of screens was being utilized under a single Screen0 entry. This made it possible to drag windows from Monitor 1 all the way across to Monitor 8.

Screen0
Monitor 1 Monitor 2 Monitor 5 Monitor 6
Monitor 3 Monitor 4 Monitor 7 Monitor 8

This final Xorg.conf file is what I decided to keep and use going forward.

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 367.44 (buildmeister@swio-display-x86-rhel47-01) Wed Aug 17 22:53:32 PDT 2016
 
Section "ServerLayout"
 Identifier "Layout0"
 Screen 0 "Screen0" 0 0
 InputDevice "Keyboard0" "CoreKeyboard"
 InputDevice "Mouse0" "CorePointer"
 Option "Xinerama" "0"
EndSection
 
Section "Files"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Mouse0"
 Driver "mouse"
 Option "Protocol" "auto"
 Option "Device" "/dev/psaux"
 Option "Emulate3Buttons" "no"
 Option "ZAxisMapping" "4 5"
EndSection
 
Section "InputDevice"
 # generated from default
 Identifier "Keyboard0"
 Driver "kbd"
EndSection
 
Section "Monitor"
 # HorizSync source: edid, VertRefresh source: edid
 Identifier "Monitor0"
 VendorName "Unknown"
 ModelName "Ancor Communications Inc VE248"
 HorizSync 30.0 - 83.0
 VertRefresh 50.0 - 76.0
 Option "DPMS"
EndSection
 
Section "Device"
 Identifier "Device0"
 Driver "nvidia"
 VendorName "NVIDIA Corporation"
 BoardName "Quadro K620"
 BusID "PCI:1:0:0"
EndSection
 
Section "Screen"
 Identifier "Screen0"
 Device "Device0"
 Monitor "Monitor0"
 DefaultDepth 24
 Option "Stereo" "0"
 Option "nvidiaXineramaInfoOrder" "DFP-2.2.1"
 Option "metamodes" "GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.2.1: nvidia-auto-select +1920+1080, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.2.2: nvidia-auto-select +0+1080, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.1.1: nvidia-auto-select +1920+0, GPU-05f6316a-a480-b2c4-7618-d19e2bd555aa.GPU-0.DP-1.1.2: nvidia-auto-select +0+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.2.1: nvidia-auto-select +3840+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.2.2: nvidia-auto-select +5760+0, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.1.1: nvidia-auto-select +3840+1080, GPU-0cc1f3f6-5d22-fe69-aa54-cb7c8c051daa.GPU-1.DP-1.1.2: nvidia-auto-select +5760+1080"
 Option "MultiGPU" "Off"
 Option "SLI" "off"
 Option "BaseMosaic" "on"
 SubSection "Display"
 Depth 24
 EndSubSection
EndSection
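After saving the config, a quick grep is a low-tech way to confirm the option actually landed in the file. This is just a sketch; the path assumes the default /etc/X11/xorg.conf location mentioned earlier, and the fallback echo keeps the command from failing silently on a machine without one.

```shell
# Check that BaseMosaic is enabled in the generated config.
# Path assumes the default location written by nvidia-settings.
grep -i 'Option "BaseMosaic"' /etc/X11/xorg.conf 2>/dev/null \
  || echo 'BaseMosaic not found in /etc/X11/xorg.conf'
```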

I hope this helps anyone who gets stuck trying to get multiple monitors working under Linux using Nvidia video cards.  Please feel free to drop me a line if you get stuck and I’ll see what I can do to help.

Running Graboid on Linux

Want to run Graboid on Linux? Cool, here's how you do it. First you'll need to install Wine to run Graboid.

user@computer:$ sudo pacman -S wine

Great, next we'll need to install Graboid. The good news is I've made it very easy for you: simply download my premade version containing everything you'll need.

Download: Graboid for Linux

Next unpack the tgz file.

user@computer:$ tar xpf graboid.tgz

Under the Graboid directory you'll notice two files. The first is the graboid.png file, which you can copy to your shared icons directory (on Arch Linux this is /usr/share/pixmaps). The second is a simple bash script which you can use to launch Graboid from the command line. Feel free to modify this file to your heart's content. Also, if for some reason you can't execute the graboid file, be sure to change its permissions.

user@computer:$ chmod +x graboid
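For reference, the launcher script in the tarball is just a thin Wine wrapper. A minimal sketch of such a script might look like the following; it's written out to a file here so you can compare it against the one shipped in the tarball, and the Graboid.exe path is an assumption, so adjust it to match your layout.

```shell
# Sketch of a minimal Graboid launcher script (Graboid.exe path is assumed).
cat > graboid <<'EOF'
#!/bin/sh
# Run Graboid under Wine from the directory this script lives in.
cd "$(dirname "$0")"
exec wine "Graboid.exe" "$@"
EOF
chmod +x graboid
```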

Lastly, run Graboid.

user@computer:$ ./graboid

I hope this helps you get up and running with Graboid on Linux.  Enjoy!

New Rubik’s Cube jQuery ThreeJS WebGL Plugin

I previously released a jQuery Rubik's cube plugin built with only HTML, JavaScript and CSS. Although pretty nifty, it had a few issues with regard to support in some browsers, most notably Firefox and Internet Explorer. I decided to recreate the plugin using WebGL. The result is my jquery.cube.threejs.js plugin, as seen below.

The plugin code is available here: https://github.com/godlikemouse/jquery.cube.threejs.js

See the Pen jQuery ThreeJS Rubik’s Cube by Jason Graves (@godlikemouse) on CodePen.

This plugin utilizes standard cube notation, both modern and old, to execute moves. For example, the moves executed by the 3×3 cube above use the following notation:

x (R’ U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 R2 x’

This algorithm is being passed to the cube plugin by way of the execute() method or data-moves attribute.

This plugin currently supports a number of configuration options that can be passed at instantiation time, including cube face colors and animation delay time, with more to come shortly.

This incarnation of the plugin also supports cube sizes of 2×2 up to 9×9 with no issues.  Cubes larger than 9×9 may see some degradation in frame rates depending on the speed of the client devices.

For more information on this plugin or to view the source please visit: https://github.com/godlikemouse/jquery.cube.threejs.js

New Rubik’s Cube jQuery Plugin

I consider myself an avid speed cuber. Not world class, but fast enough and not too obsessed with it. That being said, all the sites I've seen in the past with references to algorithms seem to use really clunky client-side Java or Flash plugins to handle the demonstration of moves. I wanted to create a simple-to-implement jQuery plugin that utilizes the browser's built-in features to do the same thing. The result is my jquery.cube.js plugin, as seen below.

The plugin code is available here: https://github.com/godlikemouse/jquery.cube.js

 

See the Pen jQuery and CSS Implemented Rubik’s Cube by Jason Graves (@godlikemouse) on CodePen.

This plugin utilizes standard cube notation, both modern and old, to execute moves. For example, the moves executed above use the following notation:

x (R’ U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 (R U R’) D2 (R U’ R’) D2 R2 x’

This algorithm is being passed to the cube plugin by way of the execute() method.

This plugin currently supports a number of configuration options that can be passed at instantiation time, including cube face colors and animation delay time, with more to come shortly.

For more information on this plugin or to view the source please visit: https://github.com/godlikemouse/jquery.cube.js

Edit Over SSH Using Any Locally Installed Editor

Ever find yourself needing to edit remotely over SSH and wishing you could use your favorite locally installed editor? Yes, vi and Emacs are great, but sometimes don't you just wish you could have your full workspace open without having to SCP/FTP files around? Well, you can, and here's how you do it.

Mac/Linux Instructions:

Install the sshfs client according to your specific Linux version

CentOS/Fedora

root@computer:$ yum install sshfs

Debian

root@computer:$ apt-get install sshfs

Mac OS X

This command assumes that you have MacPorts installed. If you do not, install from here: Install MacPorts

user@computer:$ sudo port install sshfs

 

Next create a directory which will be used to access the remote files:

root@computer:$ mkdir remote_files

 

Finally mount the remote server and directory to our local directory “remote_files” replacing “username”, “remote-server” and “some-directory” with your real world values.  This syntax is very similar to that used by scp:

root@computer:$ sshfs username@remote-server:/some-directory remote_files

 

Assuming that the credentials supplied to the sshfs command were correct, you should now have a remotely mounted drive by the name of “remote_files”.

Now open your favorite editor and browse to “remote_files”, all your remote files are available to you as if they were local.  Code on!
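The Mac/Linux steps above can be collected into one small convenience script. This is just a sketch: the user, host and directory values are placeholders to substitute, and the mount and unmount lines are left commented out since they need real credentials (on Linux you unmount a FUSE filesystem with fusermount -u; on Mac OS X, plain umount works).

```shell
# Placeholder values; substitute your own.
REMOTE_USER=username
REMOTE_HOST=remote-server
REMOTE_DIR=/some-directory
MOUNTPOINT="$HOME/remote_files"

# Create the local mount point.
mkdir -p "$MOUNTPOINT"

# Mount the remote directory (uncomment with real values):
# sshfs "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR" "$MOUNTPOINT"

# ...edit away in your local editor...

# When finished, unmount:
# fusermount -u "$MOUNTPOINT"   # Linux
# umount "$MOUNTPOINT"          # Mac OS X
```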

Windows Instructions:

Begin by downloading win-sshfs at: https://code.google.com/p/win-sshfs/downloads/list

Install and be aware that a reboot may be required (you may want to bookmark this page so you can get back to it after the reboot).  You can just leave the default install options the way they are.

After it has been fully installed and your machine possibly rebooted, launch win-sshfs if it hasn't been launched already. Next click on the win-sshfs icon in the task tray.


Next you’ll be presented with a dialog, click the “Add” button and fill out the details of the remote server to connect to.  Ensure you select the correct Drive Letter to use for this remote connection.


Next click “Save”, then click “Mount”.

Assuming you didn't have any credential errors, you will have a newly mapped drive available to you. Open your favorite editor and begin editing remote files on the drive as if they were local. Code On!

PHP Best Practices: Models and Data Mining

Most PHP developers utilizing MVC frameworks would argue that the best approach for models are classes with a varied number of getters and setters.  After a long time working with PHP and especially MVC frameworks and data mining/retrieval, I am of a completely different opinion and I will discuss why.

First let’s start with the traditional approach which I will refer to as a “heavy model”.  Consider the following:

class Person {
    private $id = 0;
    private $first_name = null;
    private $last_name = null;
 
    public function setId($id){
        $this->id = $id;
    }
 
    public function getId(){
        return $this->id;
    }
 
    public function setFirstName($first_name){
        $this->first_name = $first_name;
    }
 
    public function getFirstName(){
        return $this->first_name;
    }
 
    public function setLastName($last_name){
        $this->last_name = $last_name;
    }
 
    public function getLastName(){
        return $this->last_name;
    }
}

Admittedly, this is a very rudimentary data model, however it does follow the standard approach for most MVC frameworks with regard to model layout and corresponding getters and setters for privatized properties.  Its usage might then follow something along the lines of:

$p = new Person();
$p->setId(1);
$p->setFirstName('Bob');
$p->setLastName('Dobalina');

Usability-wise, the above approach is very clean and makes perfect sense from an object-oriented modeling standpoint, even if it is rather remedial. However, having been living in PHP for quite some time now, I have two problems with this approach. The first comes from data retrieval, or to be more precise, retrieving data from a database and populating models for use. The other is flexibility and code maintenance. For example, let's say we're using PDO to connect to a MySQL database:

$conn = /* get a database connection */;
$sql = 'SELECT * FROM person';
$stmt = $conn->prepare($sql);
$stmt->execute();
 
$people = array();
while($result = $stmt->fetch(PDO::FETCH_ASSOC)){
    $p = new Person();
    $p->setId($result['id']);
    $p->setFirstName($result['first_name']);
    $p->setLastName($result['last_name']);
    $people[] = $p;
}

So if we examine the above we see an array of Person objects being populated from a database result set. Now ignoring the semantics of the above, this is a pretty common way to retrieve and populate models. Sure there are different ways, methods of the model, etc. But in essence they are all pretty much performing this kind of loop somewhere. What if I told you there was a more efficient way to do the above, that executed faster, was more flexible and required less code?

Now consider this example, a modification to the above model which I will refer to as “light model”.

class Person {
    public $id = 0;
    public $first_name = null;
    public $last_name = null;
}

Now I know a lot of developers who see this are currently cringing, just stay with me for a minute. The above acts more like a structure than the traditional model, but it has quite a few advantages. Let me demonstrate with the following data mining code:

$conn = /* get a database connection */;
$sql = 'SELECT * FROM person';
$stmt = $conn->prepare($sql);
$stmt->execute();
 
$people = array();
while($result = $stmt->fetch(PDO::FETCH_ASSOC)){
    $p = new Person();
    foreach($result as $field_name=>$field_value)
        $p->{$field_name} = $field_value;
    $people[] = $p;
}

If you're unfamiliar with the foreach notation within the while loop, all it is doing is using each result set field name to populate the model's matching property with the corresponding value. Here's why I find the light model a much better practice, especially when combined with the above while/foreach mining pattern. First, the light model populates faster, since no functions are being invoked; each function call in the heavy model takes an additional performance hit, which can be validated quite easily using timestamps and loops. Second, the mining pattern paints our models with whatever values come out of the database directly, which means that if the table changes going forward, only the properties of the model change; the data mining pattern keeps working with no code changes. If the database changed under the previous heavy model example, both the model and all mining procedures would have to be updated with the new result field names and respective set methods, or at the very least updated in some model method.

Finally, to come full circle, what about the CRUD methods usually attached to models, such as save() or get()? Instead of creating instance methods on models, which carry the overhead of object instantiation, how about static methods on companion objects, which I'll term "business objects"? For example:

class PersonBO{
 
    public static function save($person){
        /* do save here */
    }
 
    public static function get($person){
        /* do get here */
    }
 
}

This example performs the same functionality that is usually attached to the model; however, it makes use of static methods and executes faster, with less overhead than its heavy model counterpart. It adds an additional layer of abstraction between the model and its functional business object counterpart and lends itself to clean, easily maintainable code.

In summary, light models used in conjunction with the data mining pattern demonstrated above reduce quite a bit of retrieval code and add to the codebase's overall flexibility. This pattern is currently in use within several very large enterprise codebases and has been nothing but a pleasure to work with.

Special thanks to Robert McFrazier for his assistance in developing this pattern.

Get Rid of Scrollbars In Facebook IFrame Applications

Foreword

So I was developing a Facebook application and realized that once the content was rendered inside of Facebook, I kept getting scrollbars. This was terribly annoying, since I wanted to take up more vertical space but didn't want the iframe to have scrollbars. Instead, I wanted the main browser window to have the scrollbars. Here's how to get rid of scrollbars in Facebook iframe applications.

Getting Started

The first thing you're going to need to do is modify your application settings under On Facebook -> Canvas Settings.

Canvas Settings - On Facebook -> Canvas Settings


Once under the Canvas Settings, change the Canvas Width and Canvas Height settings to “Fluid”.

Canvas Settings - Fluid


Once you’ve done that click “Save Changes” and go to your actual Facebook application code that resides on your server.

The Code

On your side of things, in the actual application that you’re displaying on Facebook, you’ll need to add the following code:

<script src="http://connect.facebook.net/en_US/all.js"></script>
<script language="javascript">
    FB.Canvas.setAutoResize();
</script>

Now go to where you can actually view your application in Facebook and lo and behold…No scrollbars! Yea!…Now go tell your friends at the nerdery how pimp you is.

How To Cluster Rabbit-MQ

Foreword

This explanation of clustering Rabbit-MQ assumes that you've had some experience with Rabbit-MQ, at least to the point of being able to get it up and running and processing messages. For this explanation I will be using CentOS Linux; other Linux distributions may or may not require slight modifications to the setup process. You will need at least 2 machines or virtual instances up and running, with Rabbit-MQ installed on both.

Overview

Clustering Rabbit-MQ is actually very simple once you understand what's going on and how it works. There is no need for a load balancer or any other hardware/software component, and the idea is simple: send all messages to the master queue and let the master distribute the messages down to the slaves.

Create Short Names

First, we need to change the host names and host entries of our machines to something short. Rabbit-MQ has trouble clustering queues with fully qualified DNS names; we'll need a single short-word host and route. For now, let's use the name "master" for the master head, then "slave1", "slave2" … "slaveN" respectively for the rest.

Set the master host name to “master”

user@computer:$ echo "master" > /proc/sys/kernel/hostname

Next we need to set the entries in the /etc/hosts file to allow the short names to be aliased to machine or instance IPs.  Open the /etc/hosts file in your favorite editor and add the following lines:

user@computer:$ cat /etc/hosts
127.0.0.1   localhost   localhost.localdomain

192.168.0.100 master master.localdomain
192.168.0.101 slave1 slave1.localdomain
192.168.0.102 slave2 slave2.localdomain

Please note: your particular /etc/hosts file will look different than the above. You'll need to substitute your actual IP and domain suffix for each entry.

Make sure each slave you plan to add has an entry in the /etc/hosts file of the master. To verify your settings for each of the entries you provide, try pinging them by their short name.

user@computer:$ ping master
PING master (192.168.0.100) 56(84) bytes of data.
64 bytes from master (192.168.0.100): icmp_seq=1 ttl=61 time=0.499 ms
64 bytes from master (192.168.0.100): icmp_seq=2 ttl=61 time=0.620 ms
64 bytes from master (192.168.0.100): icmp_seq=3 ttl=61 time=0.590 ms
64 bytes from master (192.168.0.100): icmp_seq=4 ttl=61 time=0.494 ms

If you get something like the above, you’re good to go. If not, take a good look at your settings and adjust them until you do.

Once your short names are set up in the master /etc/hosts file, copy the /etc/hosts file to every slave so that all machines have the same hosts file entries, or to be more specific, so that each machine has the master and slave routes. If you're familiar with routing, feel free to just add the missing routes.

Then for each slave update the host name.

user@computer:$ echo "slave1" > /proc/sys/kernel/hostname
user@computer:$ echo "slave2" > /proc/sys/kernel/hostname
user@computer:$ echo "slave3" > /proc/sys/kernel/hostname
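Note that writing to /proc/sys/kernel/hostname only lasts until the next reboot. On CentOS the persistent value lives in /etc/sysconfig/network. The sketch below edits a scratch copy (network.tmp) rather than the real file, so substitute the actual path and your own short name when doing it for real:

```shell
# Make the short host name persistent on CentOS. This edits a scratch
# copy (network.tmp); the real file is /etc/sysconfig/network.
cp /etc/sysconfig/network network.tmp 2>/dev/null || touch network.tmp
sed -i '/^HOSTNAME=/d' network.tmp
echo 'HOSTNAME=slave1' >> network.tmp
```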

Synchronize Erlang Cookie

Next we need to synchronize our Erlang cookie. Rabbit-MQ needs this to be the same on all machines for them to communicate properly. The file we need is located on the master at /var/lib/rabbitmq/.erlang.cookie; we'll cat this value, then update all the cookies on the slaves.

user@computer:$ cat /var/lib/rabbitmq/.erlang.cookie
DQRRLCTUGOBCRFNPIABC

Copy the value displayed by the cat.

Please notice that the file stores the value without a carriage return or line feed. This value needs to go into the slaves the same way. Do so by executing the following command on each slave. Make sure you use the "-n" flag.

First let’s make sure we stop the rabbitmq-server on the slaves before updating the ERlang cookie.

user@computer:$ service rabbitmq-server stop
user@computer:$ service rabbitmq-server stop
user@computer:$ service rabbitmq-server stop

Next let’s update the cookie and start the service back up.

user@computer:$ echo -n "DQRRLCTUGOBCRFNPIABC" > /var/lib/rabbitmq/.erlang.cookie
user@computer:$ service rabbitmq-server start
user@computer:$ echo -n "DQRRLCTUGOBCRFNPIABC" > /var/lib/rabbitmq/.erlang.cookie
user@computer:$ service rabbitmq-server start
user@computer:$ echo -n "DQRRLCTUGOBCRFNPIABC" > /var/lib/rabbitmq/.erlang.cookie
user@computer:$ service rabbitmq-server start

Once again, substitute the "DQRRLCTUGOBCRFNPIABC" value with your actual Erlang cookie value.

Create The Cluster

Now we cluster the queues together. Starting with the master, issue the following commands:

user@computer:$ rabbitmqctl stop_app
user@computer:$ rabbitmqctl reset
user@computer:$ rabbitmqctl start_app

Next we cluster the slaves to the master. For each slave execute the following commands:

user@computer:$ rabbitmqctl stop_app
user@computer:$ rabbitmqctl reset
user@computer:$ rabbitmqctl cluster rabbit@master
user@computer:$ rabbitmqctl start_app

These commands actually do the clustering of the slaves to the master. To verify that everything is in working order issue the following command on any master or slave instance:

user@computer:$ rabbitmqctl status
Status of node rabbit@master ...
[{running_applications,[{rabbit,"RabbitMQ","1.7.2"},
{mnesia,"MNESIA CXC 138 12","4.4.3"},
{os_mon,"CPO CXC 138 46","2.1.6"},
{sasl,"SASL CXC 138 11","2.1.5.3"},
{stdlib,"ERTS CXC 138 10","1.15.3"},
{kernel,"ERTS CXC 138 10","2.12.3"}]},
{nodes,[rabbit@slave1,rabbit@slave2,rabbit@slave3,rabbit@master]},
{running_nodes,[rabbit@slave1,rabbit@slave2,rabbit@slave3,rabbit@master]}]
...done.

Notice the lines containing nodes and running_nodes. They should list out all of the master and slave entries. If they do not, you may have done something wrong; go back and try executing all the steps again. Otherwise, you're good to go. Start sending messages to the master and watch as they are distributed to each of the slave nodes.

You can always dynamically add more slave nodes. To do this, update the /etc/hosts file of all the machines with the new entry, copy the master Erlang cookie value to the new slave, then execute the commands to cluster the slave to the master and verify.
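To keep that repeatable, the join sequence can be wrapped in a small helper script to drop on each new slave. This is just a sketch: the script name is mine, rabbit@master assumes your master's short host name is "master" as above, and the script is only written to a file here rather than run, since the commands need a live RabbitMQ node.

```shell
# Write a helper that joins the local node to the master's cluster
# (sketch; assumes the master's short host name is "master").
cat > join-cluster.sh <<'EOF'
#!/bin/sh
set -e
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@master
rabbitmqctl start_app
rabbitmqctl status
EOF
chmod +x join-cluster.sh
```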

Troubleshooting

If you accidentally update the cookie value before you’ve stopped the service, you could get strange errors the next time you start the rabbitmq-server. If this happens just issue the following command:

user@computer:$ rm /var/lib/rabbitmq/mnesia/rabbit/* -f

This removes the mnesia storage and allows you to restart the rabbitmq-server without errors.