How to configure a writable upload folder on OpenShift for a PHP cartridge

When creating an OpenShift gear for a GNU social deployment I found that files could not be uploaded: I needed a folder that is
writable by the webserver so it can store the uploads. OpenShift lets you update the code via a git repository, but I did not want to
commit the upload folders to that repository, because that would cause problems every time I updated the code. OpenShift
also offers a separate directory for user data, and the uploaded files are preferably stored there. After some tinkering I found a solution.
There are some specific notes for Windows users at the end of the post.

The GNU social package uses two folders for storing uploaded files, /avatar and /file. I will explain how to do this for
both folders, but you can apply the same approach to any package, not only GNU social; the principle is the same.

The solution involves two separate steps: preparation and scripting. Some of the paths in the explanation start with /var/lib/openshift/[your app id]/; replace
[your app id] with the identifier of your app. If you don't know it, you can find it in the path printed by the following
command when signed in to a shell on the server:

echo $OPENSHIFT_DATA_DIR
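As an aside, the app id is simply part of the printed path, since $OPENSHIFT_DATA_DIR expands to /var/lib/openshift/[your app id]/app-root/data/. A small sketch of extracting it; the value abc123 below is a made-up stand-in:

```shell
# OPENSHIFT_DATA_DIR looks like /var/lib/openshift/[app id]/app-root/data/
# The app id is the fifth slash-separated field of that path.
OPENSHIFT_DATA_DIR="/var/lib/openshift/abc123/app-root/data/"   # stand-in value for illustration
APP_ID=$(echo "$OPENSHIFT_DATA_DIR" | cut -d/ -f5)
echo "$APP_ID"   # prints: abc123
```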

Preparation

Make sure that the folder(s) you want to make world-writable for the uploads are not in version control. If they are, you might run into
problems, because the scripts below would then be modifying version-controlled folders. There is probably a way around that, but I tried to stay away from such a solution.

First, ssh into your server and execute the commands below. This creates the upload folders in the data directory, where the files can be stored:

mkdir $OPENSHIFT_DATA_DIR/avatar
mkdir $OPENSHIFT_DATA_DIR/file

Second, create the links in the repository and make them world-writable. I used the full absolute paths here rather than the
environment variables.

ln -s /var/lib/openshift/[your app id]/app-root/data/avatar /var/lib/openshift/[your app id]/app-root/runtime/repo/avatar
ln -s /var/lib/openshift/[your app id]/app-root/data/file /var/lib/openshift/[your app id]/app-root/runtime/repo/file
chmod -R o+rw $OPENSHIFT_REPO_DIR/avatar
chmod -R o+rw $OPENSHIFT_REPO_DIR/file

This completes the manual preparation.
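If you want to convince yourself the setup works, you can dry-run the same layout in a scratch directory: create a stand-in data directory, symlink it into a stand-in repo directory, and write a file through the link. This is only a local simulation; all paths below are made up:

```shell
# Simulate the OpenShift layout in a scratch directory (paths are made up).
TMP=$(mktemp -d)
mkdir -p "$TMP/data/avatar" "$TMP/repo"          # stand-ins for $OPENSHIFT_DATA_DIR and $OPENSHIFT_REPO_DIR
ln -s "$TMP/data/avatar" "$TMP/repo/avatar"      # the same link as in the preparation step
chmod -R o+rw "$TMP/repo/avatar"
echo probe > "$TMP/repo/avatar/probe.txt"        # write "through" the repo, as the webserver would
ls "$TMP/data/avatar"                            # probe.txt ends up in the data dir
```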

Scripts

When the code is updated via git push, we need to temporarily remove the links and recreate them afterwards. OpenShift can execute scripts
at the different stages of a deployment. In your local clone of the git repository there is a folder .openshift/action_hooks. Two
files must be created there, pre_build and post_deploy. For more background information, read the manual.

On Linux, execute chmod +x .openshift/action_hooks/* to make the scripts executable. For Windows, see the special note at the end of the post.

pre_build

Remove the links:

#!/bin/bash
rm -f $OPENSHIFT_REPO_DIR/avatar
rm -f $OPENSHIFT_REPO_DIR/file

post_deploy

Re-create the links and make sure that they are world-writable:

#!/bin/bash
ln -s /var/lib/openshift/[your app id]/app-root/data/avatar /var/lib/openshift/[your app id]/app-root/runtime/repo/avatar
ln -s /var/lib/openshift/[your app id]/app-root/data/file /var/lib/openshift/[your app id]/app-root/runtime/repo/file
chmod -R o+rw $OPENSHIFT_REPO_DIR/avatar
chmod -R o+rw $OPENSHIFT_REPO_DIR/file
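The two scripts above hard-code the app id. As an alternative sketch (not what I used), the hooks run with the OpenShift environment variables set, so the paths can also be derived from $OPENSHIFT_DATA_DIR and $OPENSHIFT_REPO_DIR; ln -sfn replaces an existing link, which makes the script safe to re-run. The fallback values exist only to make the sketch self-contained:

```shell
#!/bin/bash
# Hedged variant of post_deploy: derive the paths from the environment
# instead of hard-coding the app id. The fallbacks are for local testing only.
DATA_DIR=${OPENSHIFT_DATA_DIR:-$(mktemp -d)/data}
REPO_DIR=${OPENSHIFT_REPO_DIR:-$(mktemp -d)/repo}
mkdir -p "$DATA_DIR/avatar" "$DATA_DIR/file" "$REPO_DIR"
for d in avatar file; do
    ln -sfn "$DATA_DIR/$d" "$REPO_DIR/$d"   # -sfn: replace the link if it already exists
    chmod -R o+rw "$REPO_DIR/$d"
done
```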

Notes for Windows

The scripts in the repository need the correct line endings and the execution bit set. The manual
explains how to do this on Windows. In short, after creating the action hooks, execute the following in the root of your local clone of the git repository:

git config core.autocrlf input # use `true` on Windows
git config core.safecrlf true
git update-index --chmod=+x .openshift/action_hooks/pre_build
git update-index --chmod=+x .openshift/action_hooks/post_deploy

Error message: .openshift/action_hooks/*: does not exist and --remove not passed

.openshift/action_hooks/*: does not exist and --remove not passed

I got this message when attempting to run the update-index command with a wildcard. Providing the individual filenames worked.
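In Git Bash the wildcard can be expanded by the shell instead, with one update-index call per file. A self-contained sketch in a throwaway repository (the real commands are just the for loop):

```shell
# Self-contained demo in a throwaway repo (the workaround itself is the for loop).
REPO=$(mktemp -d); cd "$REPO"
git init -q .
mkdir -p .openshift/action_hooks
printf '#!/bin/bash\n' > .openshift/action_hooks/pre_build
printf '#!/bin/bash\n' > .openshift/action_hooks/post_deploy
git add .openshift/action_hooks
# Expand the wildcard in the shell and pass each hook file to git individually:
for f in .openshift/action_hooks/*; do
    git update-index --chmod=+x "$f"
done
git ls-files -s .openshift/action_hooks   # mode 100755 means the exec bit is set
```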


Ruby Gem install fails on Windows because of SSL

While installing the OpenShift rhc client I needed to install a Ruby gem on Windows. This resulted in an error message:

C:\Users\user>gem install rhc
ERROR:  Could not find a valid gem 'rhc' (>= 0), here is why:
          Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://api.rubygems.org/latest_specs.4.8.gz)

This happens because the Ruby network client is unable to verify the SSL Certificate Authority. Via Google I found a Gist
which explains how this can be fixed. The manual way was the fastest for me, since I
will not be using gem for other projects.

In short I did this. First, download http://curl.haxx.se/ca/cacert.pem to c:\temp. Then execute in a command prompt:

set SSL_CERT_FILE=c:\temp\cacert.pem

After this, the command to install the client tools worked:

C:\temp>gem install rhc
Fetching: net-ssh-2.9.2.beta.gem (100%)
Successfully installed net-ssh-2.9.2.beta
Fetching: net-scp-1.2.1.gem (100%)
Successfully installed net-scp-1.2.1
Fetching: net-ssh-gateway-1.2.0.gem (100%)
Successfully installed net-ssh-gateway-1.2.0
Fetching: net-ssh-multi-1.2.0.gem (100%)
Successfully installed net-ssh-multi-1.2.0
Fetching: archive-tar-minitar-0.5.2.gem (100%)
Successfully installed archive-tar-minitar-0.5.2
Fetching: highline-1.6.21.gem (100%)
Successfully installed highline-1.6.21
Fetching: commander-4.2.1.gem (100%)
Successfully installed commander-4.2.1
Fetching: httpclient-2.5.3.3.gem (100%)
Successfully installed httpclient-2.5.3.3
Fetching: open4-1.3.4.gem (100%)
Successfully installed open4-1.3.4
Fetching: rhc-1.33.4.gem (100%)
===========================================================================

If this is your first time installing the RHC tools, please run 'rhc setup'

===========================================================================
Successfully installed rhc-1.33.4
Parsing documentation for archive-tar-minitar-0.5.2
Installing ri documentation for archive-tar-minitar-0.5.2
Parsing documentation for commander-4.2.1
Installing ri documentation for commander-4.2.1
Parsing documentation for highline-1.6.21
Installing ri documentation for highline-1.6.21
Parsing documentation for httpclient-2.5.3.3
Installing ri documentation for httpclient-2.5.3.3
Parsing documentation for net-scp-1.2.1
Installing ri documentation for net-scp-1.2.1
Parsing documentation for net-ssh-2.9.2.beta
Installing ri documentation for net-ssh-2.9.2.beta
Parsing documentation for net-ssh-gateway-1.2.0
Installing ri documentation for net-ssh-gateway-1.2.0
Parsing documentation for net-ssh-multi-1.2.0
Installing ri documentation for net-ssh-multi-1.2.0
Parsing documentation for open4-1.3.4
Installing ri documentation for open4-1.3.4
Parsing documentation for rhc-1.33.4
Installing ri documentation for rhc-1.33.4
Done installing documentation for archive-tar-minitar, commander, highline, httpclient, net-scp, net-ssh, net-ssh-gateway, net-ssh-multi, open4, rhc after 27 seconds
10 gems installed

Home drive

After fixing the SSL issue I encountered a second problem during the setup command: after the SSH key step
the client attempted to write a file to the H:\ drive, assuming that was my home drive, even though that drive does not exist.

Generating an authorization token for this client ... C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:250:in `mkdir': No such file or directory @ dir_s_mkdir - H: (Errno::ENOENT)
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:250:in `fu_mkdir'
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:224:in `block (2 levels) in mkdir_p'
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:222:in `reverse_each'
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:222:in `block in mkdir_p'
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:208:in `each'
        from C:/Ruby21-x64/lib/ruby/2.1.0/fileutils.rb:208:in `mkdir_p'
        from C:/Ruby21-x64/lib/ruby/gems/2.1.0/gems/rhc-1.33.4/lib/rhc/auth/token_store.rb:34:in `[]='
        from C:/Ruby21-x64/lib/ruby/gems/2.1.0/gems/rhc-1.33.4/lib/rhc/auth/token_store.rb:14:in `put'
        from C:/Ruby21-x64/lib/ruby/gems/2.1.0/gems/rhc-1.33.4/lib/rhc/auth/token.rb:50:in `save'
        from C:/Ruby21-x64/lib/ruby/gems/2.1.0/gems/rhc-1.33.4/lib/rhc/wizard.rb:243:in `block in login_stage'
        from C:/Ruby21-x64/lib/ruby/gems/2.1.0/gems/rhc-1.33.4/lib/rhc/highline_extensions.rb:190:in `call'

It turns out that two environment variables together define the HOME folder on Windows: HOMEDRIVE and HOMEPATH. Set
the combination of the two to a writable folder, for example your user directory. These environment variables can be
set in two ways: via the command line (where they are forgotten after closing the shell window), or permanently via the
Advanced system settings of the System properties dialog.

The simplest way, via the command line, is:

    SET HOMEPATH=\users\user
    SET HOMEDRIVE=C:


Install Unbound for local network lookups

I am running a local server for some private websites. The problem is that from within the local network I cannot
look up the public DNS entries that are set for these websites: my router does not understand where to route
the requests. I used to solve this by creating a separate DNS entry prefixed with l. for every domain name. Recently
I found that you can run a DNS server that is only used locally and that resolves these lookups to the local IP
address instead of the public one. Unbound is a DNS server which can provide this: it proxies all
DNS requests and only alters the ones that are configured to be resolved locally. Below I have described manual installation
and installation using the apt-get package manager on Raspbian.

Installing

Manually from source

I am running a server with Archlinux, which did not provide a package at the time, so I had to install it manually. I used the following commands:

cd /tmp
wget https://unbound.net/downloads/unbound-latest.tar.gz
tar -xzf unbound-latest.tar.gz
cd unbound-*
./configure --prefix=/usr --sysconfdir=/etc
make
make install

This compiles unbound, installs the binary in /usr/bin and places its configuration in /etc/unbound.

Service on Archlinux

With the manual installation I also needed to define a service to start and stop unbound. I created the file /usr/lib/systemd/system/unbound.service:

[Unit]
Description=Unbound DNS Resolver
After=network.target

[Service]
PIDFile=/run/unbound.pid
ExecStart=/usr/bin/unbound -d
ExecReload=/bin/kill -HUP $MAINPID
Restart=always

[Install]
WantedBy=multi-user.target

I also needed to add a user to run unbound as:

useradd unbound

Using apt-get

apt-get install unbound

Configuration

I placed two configuration files in the /etc/unbound folder. They configure the unbound server to listen on all bound IP addresses and
to allow DNS requests from the local network (in my case 192.168.1.*) and from localhost. The main file also includes a file that defines the
static internal IP addresses for the domain names that are hosted locally.

The local-zone line defines that requests for the zone example.com are answered from local data when an entry exists, and forwarded
to the actual DNS server otherwise (that is what transparent means). Each local-data line defines such a local entry.

/etc/unbound/unbound.conf
server:
    # The following line will configure unbound to perform cryptographic
    # DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

    include: "/etc/unbound/localnetwork.conf"
    interface: 0.0.0.0
    access-control: 192.168.1.0/24 allow
    access-control: 127.0.0.0/8 allow

/etc/unbound/localnetwork.conf
local-zone: "example.com." transparent
local-data: "foo.example.com. IN A 192.168.1.1"

In order for the server itself to also use these IP addresses I updated /etc/resolv.conf to point at this DNS server:

nameserver 192.168.1.1


Use plowshare on Linux to upload to mega

I encrypt the backups I make on my Linux installations and upload them to cloud services. Mega.co.nz is such a service, offering 50GB
for free. plowshare is a Linux command-line tool which offers an interface for uploading to and downloading from a lot of free file hosting services.
This post explains how to install plowshare on a Linux host, install the mega module and upload a backup.

Install plowshare

root@web01:~# git clone https://code.google.com/p/plowshare/ plowshare4
Cloning into 'plowshare4'...
remote: Counting objects: 16977, done.
Receiving objects: 100% (16977/16977), 4.75 MiB | 167 KiB/s, done.
Resolving deltas: 100% (12960/12960), done.
root@web01:~# cd plowshare4/
root@web01:~/plowshare4# make install

Install mega.co.nz module

Execute the following commands to install the mega plugin for plowshare. Compiling it requires the OpenSSL development headers: the package
is named libssl-dev on Debian, Ubuntu and similar distributions, and openssl-devel on Fedora, CentOS and RHEL.
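If you script the installation, the distribution-to-package mapping from the paragraph above can be sketched as a small helper (the distro ids follow the /etc/os-release naming):

```shell
# Map a distro id (as found in the ID field of /etc/os-release)
# to the name of the OpenSSL development package.
ssl_dev_pkg() {
    case "$1" in
        debian|ubuntu|raspbian) echo libssl-dev ;;
        fedora|centos|rhel)     echo openssl-devel ;;
        *)                      echo unknown ;;
    esac
}
ssl_dev_pkg raspbian   # prints: libssl-dev
ssl_dev_pkg centos     # prints: openssl-devel
```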

git clone https://code.google.com/p/plowshare.plugin-mega plowshare.plugin-mega
cd plowshare.plugin-mega/
apt-get install libssl-dev
make install

root@web01:~# git clone https://code.google.com/p/plowshare.plugin-mega plowshare.plugin-mega
Cloning into 'plowshare.plugin-mega'...
remote: Counting objects: 150, done.
Receiving objects: 100% (150/150), 56.40 KiB, done.
Resolving deltas: 100% (69/69), done.
root@web01:~# cd plowshare.plugin-mega/
root@web01:~/plowshare.plugin-mega# apt-get install libssl-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libssl-doc
The following NEW packages will be installed:
  libssl-dev libssl-doc
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,709 kB of archives.
After this operation, 6,229 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-dev armhf 1.0.1e-2+rvt+deb7u13 [1,504 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-doc all 1.0.1e-2+rvt+deb7u13 [1,205 kB]
Fetched 2,709 kB in 2s (1,226 kB/s)
Selecting previously unselected package libssl-dev.
(Reading database ... 75102 files and directories currently installed.)
Unpacking libssl-dev (from .../libssl-dev_1.0.1e-2+rvt+deb7u13_armhf.deb) ...
Selecting previously unselected package libssl-doc.
Unpacking libssl-doc (from .../libssl-doc_1.0.1e-2+rvt+deb7u13_all.deb) ...
Processing triggers for man-db ...
Setting up libssl-dev (1.0.1e-2+rvt+deb7u13) ...
Setting up libssl-doc (1.0.1e-2+rvt+deb7u13) ...
root@web01:~/plowshare.plugin-mega# make install
gcc -Wall -O3 -s src/crypto.c -o mega -lcrypto
install -d /usr/local/share/plowshare4/modules
install -d /usr/local/share/plowshare4/plugins
install -m 755 mega /usr/local/share/plowshare4/plugins/mega
install -m 644 module/mega.sh /usr/local/share/plowshare4/modules

After this we need to register the mega module to the plowshare module registry:

 echo "mega            | download | upload |        |      |       |" >> /usr/local/share/plowshare4/modules/config

After this you can execute the command plowup mega to validate that the installation was a success. The output should look similar to:

plowup: you must specify a filename.
plowup: try `plowup --help' for more information.

Encrypt and upload

The file backup takes place in three steps (assuming there is already one folder containing all the backed-up data):

  1. Create a tar.gz archive (tar -czf backup.tar.gz ./backupfolder)
  2. Encrypt the archive with openssl, based on this post (openssl aes-256-cbc -in backup.tar.gz -out backup.tar.gz.aes -pass file:pass.txt)
  3. Upload the archive with plowshare (plowup mega --auth=username:password --folder="Backups" backup.tar.gz.aes)
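The three steps chain together into a small script. Everything below is a sketch with placeholder file and folder names; the plowup call is guarded so the script still completes when plowshare is not installed, and a round-trip decryption at the end checks the encryption step:

```shell
# Sketch of the backup pipeline: archive, encrypt, upload (placeholder names).
WORK=$(mktemp -d); cd "$WORK"
mkdir backupfolder && echo "important data" > backupfolder/data.txt
echo "s3cret" > pass.txt                                   # placeholder passphrase file

tar -czf backup.tar.gz ./backupfolder                      # 1. archive
openssl aes-256-cbc -in backup.tar.gz \
    -out backup.tar.gz.aes -pass file:pass.txt             # 2. encrypt
# 3. upload (skipped when plowshare is not installed)
command -v plowup >/dev/null \
    && plowup mega --auth=username:password --folder="Backups" backup.tar.gz.aes \
    || echo "plowup not installed, skipping upload"

# Round-trip check: decrypting yields the original archive again.
openssl aes-256-cbc -d -in backup.tar.gz.aes -out check.tar.gz -pass file:pass.txt
cmp backup.tar.gz check.tar.gz && echo "encrypt/decrypt round-trip OK"
```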


Fixing a pacman error

When running pacman on my Archlinux installation it failed to start, with the following error:

failed to initialize alpm library

It turned out that after a previously failed update command the folder /var/lib/pacman/ was missing. A simple command to recreate it
was enough to fix the error:

mkdir /var/lib/pacman/


Use node-red to monitor pages

This flow checks the contents of an HTTP or HTTPS URL. If a certain string is found (in this example 'A visual tool for wiring',
as it checks the Node-RED website) the page is considered available. The page is considered down if the string is not found or
the URL could not be retrieved.

The check function has two outputs: one for failure and one for success once the page is back up.
This allows throttling of failure messages via a rate limiter, while the message that the page is back up is sent immediately.

There is currently one downside to this flow: if a page goes down, a message is sent; when the page comes back up and goes down again
a little later, no new message is sent. This is because the rate limiter does not get reset when the check is back up, so the next
notification is only forwarded after a day.

Flow

Copy this flow JSON to your clipboard and then import into Node-RED using the Import From > Clipboard (Ctrl-I) menu option.

[{
    "id": "3fc2ea78.549ace",
    "type": "http request",
    "name": "Nodered site check",
    "method": "GET",
    "url": "http://nodered.org",
    "x": 261.2499694824219,
    "y": 135,
    "z": "1dca54.d12be5ac",
    "wires": [["29e68ad0.cc2c0e"]]
},
{
    "id": "29e68ad0.cc2c0e",
    "type": "function",
    "name": "String check",
    "func": "var title = 'Node-red check', result;\n\nif(msg.payload.indexOf('A visual tool for wiring') !== -1) {\n\tif(context.lastWasError) {\n\t\tcontext.lastWasError = false;\n\t\tresult = {\n\t\t\tpayload: 'Check is back up!',\n\t\t\ttopic: title\n\t\t};\n\t\treturn [null, result];\n\t}\n\treturn;\n}\n\nresult = {\n\tpayload: msg.payload,\n\ttopic: title\n};\ncontext.lastWasError = true;\nreturn [result, null];",
    "outputs": "2",
    "x": 456.7499694824219,
    "y": 136.25,
    "z": "1dca54.d12be5ac",
    "wires": [["cba65a05.00c3"],
    ["3839cb5d.1c34ac"]]
},
{
    "id": "cba65a05.00c3",
    "type": "delay",
    "name": "Rate limiter",
    "pauseType": "rate",
    "timeout": "5",
    "timeoutUnits": "seconds",
    "rate": "1",
    "rateUnits": "day",
    "randomFirst": "1",
    "randomLast": "5",
    "randomUnits": "seconds",
    "drop": true,
    "x": 624.2499694824219,
    "y": 91,
    "z": "1dca54.d12be5ac",
    "wires": [["bb03cc6a.16d6b8"]]
},
{
    "id": "ff61d2c5.281c58",
    "type": "inject",
    "name": "",
    "topic": "Trigger",
    "payload": "",
    "payloadType": "none",
    "repeat": "1800",
    "crontab": "",
    "once": true,
    "x": 81.25,
    "y": 135,
    "z": "1dca54.d12be5ac",
    "wires": [["3fc2ea78.549ace"]]
},
{
    "id": "bb03cc6a.16d6b8",
    "type": "debug",
    "name": "Check failure",
    "active": true,
    "console": "false",
    "complete": "false",
    "x": 817.5,
    "y": 91.25,
    "z": "1dca54.d12be5ac",
    "wires": []
},
{
    "id": "3839cb5d.1c34ac",
    "type": "debug",
    "name": "Check up",
    "active": true,
    "console": "false",
    "complete": "false",
    "x": 593.75,
    "y": 170,
    "z": "1dca54.d12be5ac",
    "wires": []
}]


Install Node-Red on Openshift

Node-RED is a visual tool for wiring the Internet of Things. It allows you to define flows, triggers and outputs
that process data. This can be used in all kinds of applications, such as home automation or security. I wanted to use
it to implement remote monitoring of the private server I have installed at home. To be able to do this I created a free gear
at openshift.com; the reason for selecting OpenShift was basically the three free gears they offer.

Create the gear in OpenShift

For this howto I assume that you have created an account at openshift.com and used the following webpage to create a
Node.js instance: Node.js Application Hosting @ OpenShift. In short, you can use the command:

rhc app create MyApp nodejs-0.10

or via a webflow, as explained in this blogpost.

Clone via git

In order to check out the code that OpenShift will execute when the application starts, you first need to configure your SSH
key in the settings.

After the SSH public key is set up, use the git clone ssh://xxx@appname-openshiftname.rhcloud.com command to clone the
source files; you can find this URL in the app details.

Your repository will have the following format:

node_modules/            Any Node modules packaged with the app 
deplist.txt              Deprecated.
package.json             npm package descriptor.
.openshift/              Location for OpenShift specific files
    action_hooks/        See the Action Hooks documentation 
    markers/             See the Markers section below
server.js                The default node.js execution script.

File updates

We will update two files: package.json and server.js. First replace the contents of package.json to:

package.json

{
  "name": "Node-Red",
  "version": "1.0.0",
  "description": "Node RED on Openshift",
  "keywords": [
    "OpenShift",
    "Node.js",
    "application",
    "node-red"
  ],
  "engines": {
    "node": ">= 0.6.0",
    "npm": ">= 1.0.0"
  },

  "dependencies": {
    "express": "4.x",
    "node-red": ">= 0.9
    "atob": "1.1.2",
    "basic-auth-connect": "1.0.0"
  },
  "devDependencies": {},
  "bundleDependencies": [],

  "private": true,
  "main": "server.js"
}

The author and homepage fields are provided by default in the example, but I left them out. This file defines the dependencies for running
the server. The node-red dependency points to the latest stable release. Since Node-RED is still in beta, you might sometimes want to use the
latest version from GitHub; more on that at the end of the article.

server.js

The default server.js file needs to be replaced with a version that will run node-red.

var http = require('http');
var express = require("express");
var RED = require("node-red");
var atob = require('atob');

var MyRed = function() {

    //  Scope.
    var self = this;


    /*  ================================================================  */
    /*  Helper functions.                                                 */
    /*  ================================================================  */

    /**
     *  Set up server IP address and port # using env variables/defaults.
     */
    self.setupVariables = function() {
        //  Set the environment variables we need.
        self.ipaddress = process.env.OPENSHIFT_NODEJS_IP;
        self.port      = process.env.OPENSHIFT_NODEJS_PORT || 8000;

        if (typeof self.ipaddress === "undefined") {
            //  Log errors on OpenShift but continue w/ 127.0.0.1 - this
            //  allows us to run/test the app locally.
            console.warn('No OPENSHIFT_NODEJS_IP var, using 127.0.0.1');
            self.ipaddress = "127.0.0.1";
        }



        // Create the settings object
        self.redSettings = {
            httpAdminRoot:"/",
            httpNodeRoot: "/api",
            userDir: process.env.OPENSHIFT_DATA_DIR
        };

        if (typeof self.redSettings.userDir === "undefined") {
            console.warn('No OPENSHIFT_DATA_DIR var, using ./');
            self.redSettings.userDir = "./";
        }
    };

     /**
     *  terminator === the termination handler
     *  Terminate server on receipt of the specified signal.
     *  @param {string} sig  Signal to terminate on.
     */
    self.terminator = function(sig){
        if (typeof sig === "string") {
           console.log('%s: Received %s - terminating app ...',
                       Date(Date.now()), sig);
            RED.stop();
           process.exit(1);
        }
        console.log('%s: Node server stopped.', Date(Date.now()) );
    };

    /**
     *  Setup termination handlers (for exit and a list of signals).
     */
    self.setupTerminationHandlers = function(){
        //  Process on exit and signals.
        process.on('exit', function() { self.terminator(); });

        // Removed 'SIGPIPE' from the list - bugz 852598.
        ['SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGILL', 'SIGTRAP', 'SIGABRT',
         'SIGBUS', 'SIGFPE', 'SIGUSR1', 'SIGSEGV', 'SIGUSR2', 'SIGTERM'
        ].forEach(function(element, index, array) {
            process.on(element, function() { self.terminator(element); });
        });
    };

    /*  ================================================================  */
    /*  App server functions (main app logic here).                       */
    /*  ================================================================  */

    /**
     *  Create the routing table entries + handlers for the application.
     */
    self.createRoutes = function() {
        self.routes = { };

        self.routes['/asciimo'] = function(req, res) {
            var link = "http://i.imgur.com/kmbjB.png";
            res.send("<html><body><img src='" + link + "'></body></html>");
        };
    };

    /**
     *  Initialize the server (express) and create the routes and register
     *  the handlers.
     */
    self.initializeServer = function() {
        self.createRoutes();

        // Create an Express app
        self.app = express();

        // Create a server
        self.server = http.createServer(self.app);

        //setup basic authentication
        var basicAuth = require('basic-auth-connect');
        self.app.use(basicAuth(function(user, pass) {
            return user === 'test' && pass === atob('dGVzdA==');
        }));

        // Initialise the runtime with a server and settings
        RED.init(self.server, self.redSettings);
        console.log('%s is the userDir for RED', self.redSettings.userDir);

        // Serve the editor UI from /red
        self.app.use(self.redSettings.httpAdminRoot,RED.httpAdmin);

        // Serve the http nodes UI from /api
        self.app.use(self.redSettings.httpNodeRoot,RED.httpNode);

        // Add a simple route for static content served from 'public'
        //self.app.use("/",express.static("public"));

        //  Add handlers for the app (from the routes).
        for (var r in self.routes) {
            self.app.get(r, self.routes[r]);
        }
    };

    /**
     *  Initializes the sample application.
     */
    self.initialize = function() {
        self.setupVariables();
        self.setupTerminationHandlers();

        // Create the express server and routes.
        self.initializeServer();
    };

    /**
     *  Start the server (starts up the sample application).
     */
    self.start = function() {
        //  Start the app on the specific interface (and port).
        self.server.listen(self.port,self.ipaddress, function() {
            console.log('%s: Node server started on %s:%d ...',
                        Date(Date.now() ), self.ipaddress, self.port);
        });

        // Start the runtime
        RED.start();
    };
}

/**
 *  main():  Main code.
 */
var red = new MyRed();
red.initialize();
red.start();

This is a variation on the default server.js from OpenShift that initializes the RED server; node-red is started in initializeServer.
I have added Basic Authentication to keep unauthorized users out. To avoid having the password in plain text it is base64 encoded;
note that base64 is only obfuscation, not encryption. Via base64encode.org you can encode your password and place it in the server.js file.
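The same encoding can also be done from a shell instead of via base64encode.org; dGVzdA== is the encoding of the placeholder password test used in the listing:

```shell
# Encode a password for use in server.js (printf %s avoids a trailing newline).
printf %s 'test' | base64            # prints: dGVzdA==
# And the decode step that atob() performs server-side:
printf %s 'dGVzdA==' | base64 -d     # prints: test
```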

Git push

Once you have the files ready, commit the changes via git and issue a git push to the remote
repository at OpenShift. This triggers some hooks on the server side and starts the node server. The output will end in
something similar to:

remote: npm info ok 
remote: Preparing build for deployment
remote: Deployment id is d46d7d66
remote: Activating deployment
remote: Starting NodeJS cartridge
remote: Tue Sep 09 2014 03:09:49 GMT-0400 (EDT): Starting application 'red' ...
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://xxx@red-xxx.rhcloud.com/~/git/red.git/
   6317488..2724c91  master -> master

You can now access your node-red instance at http://red-[openshift namespace].rhcloud.com:8000. The explicit port 8000
is required because the OpenShift proxy currently does not support forwarding WebSockets on port 80. If you open
the website on port 80, it will show an error message that the connection to the server is lost. See below for a workaround.

Workaround Openshift Websockets

To be able to access node-red on port 80 (and save typing the :8000 suffix) I have created a workaround.
The port number used by the editor is hard-coded in the file public/red/comms.js, where it is taken from location.port. I have created
a fork with a separate branch in which the port number is fixed to 8000; it is available on GitHub. To use it,
update the node-red dependency to the following line:

"node-red": "git://github.com/matueranet/node-red.git#067028b59917e98615c87985c02810c4828a25fa"

This also references a specific commit, so that the server is not automatically updated until I decide to do so.


How to determine the version of a memcache server

In a shell (Linux/Mac) or command prompt (Windows) type the following command:

telnet localhost 11211 

Where localhost can be replaced by the server's IP address and the port by a different one; 11211 is the default. A prompt will appear when the connection is successful. Type the following command:

version

Depending on your telnet client, you might not see the text you are typing. Memcache will respond with its version, for example:

VERSION 1.2.4
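The same check can be scripted with nc instead of an interactive telnet session. A sketch: -w 2 sets a two-second timeout, and the fallback message covers the case where nothing is listening on localhost:11211:

```shell
# Query the memcached version non-interactively (sketch; assumes the
# default localhost:11211 and falls back to a message otherwise).
OUT=$(printf 'version\r\nquit\r\n' | nc -w 2 localhost 11211 2>/dev/null \
      || echo "no memcached reachable on localhost:11211")
echo "$OUT"
```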


Cache headers in Apache for Javascript development

During development of a JavaScript webapp I found that updated JavaScript files were not picked up by my browser; even Ctrl+F5 would not always refresh them. To make development easier I added the following cache headers to the Apache VirtualHost configuration, so that the browser always fetches the latest version of a file from my development server.

<FilesMatch "\.(html|htm|js|css)$">
    FileETag None
    <ifModule mod_headers.c>
        Header unset ETag
        Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        Header set Pragma "no-cache"
        Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
    </ifModule>
</FilesMatch>
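To verify that the headers are actually being sent, you can inspect a response with curl. A sketch, where the URL is a placeholder for a file on your development server:

```shell
# Fetch only the response headers for a JS file (placeholder URL).
URL="http://localhost/app.js"
if command -v curl >/dev/null; then
    HDRS=$(curl -sI --max-time 2 "$URL" || echo "server not reachable at $URL")
else
    HDRS="curl not installed"
fi
echo "$HDRS"
# A correctly configured server includes: Cache-Control: max-age=0, no-cache, no-store, must-revalidate
```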


Find the largest files in subdirectories

I regularly find myself with limited disk space and unsure which files to remove. Normally the largest files
I no longer use are the best candidates. On Linux I use the command find:

find . -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}

This command lists the ten largest files. In order to find the largest directories, replace -type f with -type d.
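On systems with GNU coreutils there is, I believe, a shorter alternative: let du walk the tree itself and sort the human-readable sizes with sort -h (the -h flag to sort is a GNU extension):

```shell
# List the 10 largest files and directories under the current directory.
du -ah . | sort -rh | head -10
```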
