Generate a public key from a private key using ssh-keygen

If you find yourself with a private key, for SSH password-less login for example, and need the matching public key, there is a simple command to generate it. It requires OpenSSH's ssh-keygen, so it is available on Linux and on other platforms with OpenSSH installed.

Simply execute:

ssh-keygen -y

It will ask you for the location of the private key (~/.ssh/id_rsa by default) and will then output the public key derived from it.
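
For example, to skip the prompt and write the public key straight to the conventional location (a sketch assuming the default key path):

ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub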

VirtualBox Alt+Tab in Guest

Normally, when you press Alt+Tab to switch to a different window in the Guest OS, the Host switches windows instead, so you lose focus on the VirtualBox window. This happens even in fullscreen mode.

You can change this behavior by pressing the Host key once (Right Ctrl by default). Alt+Tab will then remain “inside” the Guest. Pressing the Host key again toggles the behavior back.

Unfortunately there is no way to see the current status of this behavior in fullscreen mode. In windowed mode the status is shown in the bottom right corner of the window: there is a downward-pointing arrow icon, and when it is lit the Guest receives the key presses; otherwise the Host receives them. When you leave the Guest window the status is always reset to off, so when entering the Guest again you need to press the Host key to re-enable it.

Make image clickable without using jQuery

For a secondcrack blog I found myself needing to make the images in a post clickable so that they could be opened at a bigger size. It was a simple use case, so I did not feel like including external libraries (jQuery or an image library) to accomplish it. As it turns out, all images on a page are available via the document attribute images. The solution below iterates over all the images, filters them on whether they have /media in their source (only those needed to be clickable) and sets an onclick handler.

// navigate to the full-size image when a post image is clicked
var images = document.images;
for (var i = 0, imagesLength = images.length; i < imagesLength; i++) {
    if (images[i].src.indexOf('/media') !== -1) {
        images[i].onclick = function() {
            window.location.href = this.src;
        };
    }
}

Two ways to delete files based on age in Linux

I have found two different ways to clean up a folder by removing older files. One method keeps a fixed number of recent files, regardless of their age; the other removes files or folders based on their age, regardless of how many there are. I use the latter, for example, to delete backup folders that have passed a certain age.

Remove using tail

This method assumes that there is one file for every day, as is typical for backups. The command below lists the files newest first and removes everything from the 30th entry onward, which with one file per day amounts to removing files older than roughly 30 days. It is important that there is only one file per day, because otherwise the tail results would not be correct. The advantage is that if the backup job stops producing new files, the older backups are not removed purely because of their age.

ls -1c ./ | tail -n +30 | xargs rm -rf

Remove using find

This crontab entry runs find daily at 15:00, selecting all files with a modification time older than 2 days and removing them:

0 15 * * * find ~/files/* -mtime +2 -exec rm {} \;
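
If you have GNU find, a variant lets find delete the matches itself instead of spawning rm for each file (a sketch, assuming the same ~/files location):

find ~/files -type f -mtime +2 -delete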

PHP Curl SSL on Windows

When developing on Windows I regularly found myself using the lines below to circumvent SSL errors when using curl in PHP:

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);

If I do not set these curl options, the request fails with the message: Details: SSL3_GET_SERVER_CERTIFICATE:certificate verify failed. This is an error you normally only expect when connecting to a host with a self-signed certificate or a similarly unusual setup. On Windows, however, it also happens for valid certificates, because the certificate chain cannot be established. This can be solved in two steps:

  1. Download the file with root certificates from http://curl.haxx.se/docs/caextract.html
  2. Add the lines below to your php.ini file (where the path points to the file you downloaded in step 1):

    [PHP_CURL]
    curl.cainfo=c:\apps\php\cacert.pem
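
If you cannot edit php.ini, the CA bundle can also be set per request; a minimal sketch, assuming the bundle was saved to the same path as above:

// Point curl at the downloaded CA bundle instead of disabling verification
curl_setopt($ch, CURLOPT_CAINFO, 'c:\apps\php\cacert.pem');
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);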

This post is based on a solution in a stackoverflow post.

MySQL: remove the ON UPDATE CURRENT_TIMESTAMP property

The MySQL management tool I use automatically creates tables with columns of the type ‘timestamp’ carrying the property “ON UPDATE CURRENT_TIMESTAMP”. This property means that when a record is updated, the column is automatically set to the current time. This behavior can be unwanted. You can check whether a column has this property by issuing a ‘desc’ command:

DESC `tag`

Which in this example will result in:

Field         Type              Null  Key  Default  Extra
id            int(10) unsigned  NO    PRI  NULL     auto_increment
label         varchar(25)       NO         NULL
key           varchar(25)       NO         NULL
specificDate  date              YES        NULL     ON UPDATE CURRENT_TIMESTAMP

The solution to this problem is to redefine the column manually without this special property. It is also possible to specify a different default value than ‘CURRENT_TIMESTAMP’.

ALTER TABLE `tag`
    CHANGE `specificDate` `specificDate` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;
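
For example, to use a fixed default instead of CURRENT_TIMESTAMP (a sketch; any valid timestamp literal works):

ALTER TABLE `tag`
    CHANGE `specificDate` `specificDate` TIMESTAMP NOT NULL DEFAULT '2000-01-01 00:00:00';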

Based on stackoverflow.

How to configure a writable upload folder on OpenShift for a PHP cartridge

When creating an OpenShift gear for a GNU Social deployment I found that files could not be uploaded. I needed to create a writable folder for the webserver to place the files in. OpenShift lets you update the code via a git repository, but I did not want to upload the files directly into the git repository on the server, because that would introduce problems every time I updated the code. OpenShift also offers a separate directory for user data files, and preferably the files are stored there. After some tinkering I found a solution. There are some specific notes for Windows users at the end of the post.

The GNU Social package has two folders it uses for storing uploaded files: /avatar and /file. I will explain how to perform this for both folders, but you can apply it to any type of package you’d like, not only GNU Social; the principle is the same.

The solution involves two separate steps: preparation and scripting. Some of the paths in the explanation start with /var/lib/openshift/[your app id]/; in these cases you need to replace [your app id] with the identifier of your app. If you don’t know it, you can find the value by executing the following command when signed in to the console on the server:

echo $OPENSHIFT_DATA_DIR

Preparation

Make sure that the folder(s) you want to make world-writable for the uploads are not in version control. If they are, you might run into problems, because the scripts would then be modifying version-controlled folders (there is probably a way around that, but I tried to stay away from such a solution).

First, SSH into your server and execute the commands below. This creates the upload folders in the data directory, where the files can be uploaded:

mkdir $OPENSHIFT_DATA_DIR/avatar
mkdir $OPENSHIFT_DATA_DIR/file

Second, create the links into the repository and make them world-writable. Absolute paths are used for the link targets here so that the symlinks are unambiguous.

ln -s /var/lib/openshift/[your app id]/app-root/data/avatar /var/lib/openshift/[your app id]/app-root/runtime/repo/avatar
ln -s /var/lib/openshift/[your app id]/app-root/data/file /var/lib/openshift/[your app id]/app-root/runtime/repo/file
chmod -R o+rw $OPENSHIFT_REPO_DIR/avatar
chmod -R o+rw $OPENSHIFT_REPO_DIR/file
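
To verify that the links were created as expected, you can list the repository directory; avatar and file should show up as symlinks pointing into the data directory:

ls -l $OPENSHIFT_REPO_DIR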

This completes the manual preparation.

Scripts

When performing a code update via git push, we need to temporarily remove the links and recreate them afterwards. OpenShift offers the possibility to have scripts executed at the different steps of deploying the code. In your local cloned git repository there is a folder .openshift/action_hooks. Two files must be created there: pre_build and post_deploy. For more background information, read the manual.

On Linux, execute chmod +x .openshift/action_hooks/* to make the scripts executable. For Windows, see the special note at the end of the post.

pre_build

Remove the links:

#!/bin/bash
rm -f $OPENSHIFT_REPO_DIR/avatar
rm -f $OPENSHIFT_REPO_DIR/file

post_deploy

Re-create the links and make sure that they are world-writable:

#!/bin/bash
ln -s /var/lib/openshift/[your app id]/app-root/data/avatar /var/lib/openshift/[your app id]/app-root/runtime/repo/avatar
ln -s /var/lib/openshift/[your app id]/app-root/data/file /var/lib/openshift/[your app id]/app-root/runtime/repo/file
chmod -R o+rw $OPENSHIFT_REPO_DIR/avatar
chmod -R o+rw $OPENSHIFT_REPO_DIR/file

Notes for Windows

The scripts in the repository need to have the correct line endings and the execution bit set. The manual explains how to do this for Windows. In short, execute the following in the root of your local clone of the git repository, after creating the build hooks:

git config core.autocrlf input # use `true` on Windows
git config core.safecrlf true
git update-index --chmod=+x .openshift/action_hooks/pre_build
git update-index --chmod=+x .openshift/action_hooks/post_deploy

A possible error message

.openshift/action_hooks/*: does not exist and --remove not passed

I got this message when attempting to execute the update-index command with a wildcard. When providing the individual filenames it worked.
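
You can check that the execution bit was actually recorded with git ls-files; the mode in the first column should read 100755 for both hooks:

git ls-files --stage .openshift/action_hooks/pre_build .openshift/action_hooks/post_deploy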

Install Unbound for local network lookups

I am running a local server for some private websites. The problem is that from within the local network I cannot look up the public DNS entries that are set for these websites: my router does not understand where to route the requests to. I used to solve this by creating a separate DNS entry prefixed with l. for every domain name. Recently I found that you can run a DNS server that is only used locally, which can resolve the lookups to the local IP address instead of the public one. Unbound is a DNS server which provides this: it proxies all DNS requests and only alters the ones that are configured to be redirected locally. Below I describe manual installation and installation using the apt-get package manager on Raspbian.

Installing

Manually from source

I am running a server with Arch Linux, which did not provide a package, so I had to install it manually. I used the following commands:

cd /tmp
wget https://unbound.net/downloads/unbound-latest.tar.gz
tar -xzf unbound-latest.tar.gz
cd unbound-*
./configure --prefix=/usr --sysconfdir=/etc
make
make install

This will compile and install unbound in /usr/bin and its configuration to /etc/unbound.

Service on Arch Linux

With the manual installation I also needed to define a service to start and stop unbound. I created the file /usr/lib/systemd/system/unbound.service:

[Unit]
Description=Unbound DNS Resolver
After=network.target

[Service]
PIDFile=/run/unbound.pid
ExecStart=/usr/bin/unbound -d
ExecReload=/bin/kill -HUP $MAINPID
Restart=always

[Install]
WantedBy=multi-user.target

I also needed to add a user for unbound to run as:

useradd unbound
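
With the unit file and user in place, the service can be enabled and started as usual:

systemctl enable unbound
systemctl start unbound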

Using apt-get

apt-get install unbound

Configuration

I placed two configuration files in the /etc/unbound folder. They configure the unbound server to listen on all bound IP addresses and to allow DNS requests from the local network (in my case 192.168.1.*) and from localhost. They also include a file that defines the static internal IP addresses for the domain names which are hosted locally.

The first line, local-zone, defines that for the zone example.com all requests are forwarded to the actual DNS server unless an exception is defined. local-data defines such an exception for a specific entry.

/etc/unbound/unbound.conf
server:
    # The following line will configure unbound to perform cryptographic
    # DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

    include: "/etc/unbound/localnetwork.conf"
    interface: 0.0.0.0
    access-control: 192.168.1.0/24 allow
    access-control: 127.0.0.0/8 allow

/etc/unbound/localnetwork.conf
local-zone: "example.com." transparent
local-data: "foo.example.com. IN A 192.168.1.1"

In order for the server itself to also use these IP addresses, I updated /etc/resolv.conf to point at this DNS server:

nameserver 192.168.1.1
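
To verify the local override, query the server directly (assuming unbound listens on 192.168.1.1); the answer should contain the internal address from localnetwork.conf:

dig @192.168.1.1 foo.example.com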

Use plowshare on Linux to upload to mega

I encrypt the backups I make on my Linux installations and upload them to cloud services. Mega.co.nz is such a service, offering 50GB for free. plowshare is a Linux command-line tool which offers an interface for uploading to and downloading from a lot of free file hosting services. This post explains how to install plowshare on a Linux host, install the mega module and upload a backup.

Install plowshare

root@web01:~# git clone https://code.google.com/p/plowshare/ plowshare4
Cloning into 'plowshare4'...
remote: Counting objects: 16977, done.
Receiving objects: 100% (16977/16977), 4.75 MiB | 167 KiB/s, done.
Resolving deltas: 100% (12960/12960), done.
root@web01:~# cd plowshare4/
root@web01:~/plowshare4# make install

Install mega.co.nz module

Execute the following commands to install the mega plugin for plowshare. Compiling it requires the OpenSSL development headers; the package is called libssl-dev on Debian, Ubuntu and similar distributions, and openssl-devel on Fedora, CentOS and RHEL. The transcript below shows the full installation:

root@web01:~# git clone https://code.google.com/p/plowshare.plugin-mega plowshare.plugin-mega
Cloning into 'plowshare.plugin-mega'...
remote: Counting objects: 150, done.
Receiving objects: 100% (150/150), 56.40 KiB, done.
Resolving deltas: 100% (69/69), done.
root@web01:~# cd plowshare.plugin-mega/
root@web01:~/plowshare.plugin-mega# apt-get install libssl-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libssl-doc
The following NEW packages will be installed:
  libssl-dev libssl-doc
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,709 kB of archives.
After this operation, 6,229 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-dev armhf 1.0.1e-2+rvt+deb7u13 [1,504 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-doc all 1.0.1e-2+rvt+deb7u13 [1,205 kB]
Fetched 2,709 kB in 2s (1,226 kB/s)
Selecting previously unselected package libssl-dev.
(Reading database ... 75102 files and directories currently installed.)
Unpacking libssl-dev (from .../libssl-dev_1.0.1e-2+rvt+deb7u13_armhf.deb) ...
Selecting previously unselected package libssl-doc.
Unpacking libssl-doc (from .../libssl-doc_1.0.1e-2+rvt+deb7u13_all.deb) ...
Processing triggers for man-db ...
Setting up libssl-dev (1.0.1e-2+rvt+deb7u13) ...
Setting up libssl-doc (1.0.1e-2+rvt+deb7u13) ...
root@web01:~/plowshare.plugin-mega# make install
gcc -Wall -O3 -s src/crypto.c -o mega -lcrypto
install -d /usr/local/share/plowshare4/modules
install -d /usr/local/share/plowshare4/plugins
install -m 755 mega /usr/local/share/plowshare4/plugins/mega
install -m 644 module/mega.sh /usr/local/share/plowshare4/modules

After this we need to register the mega module to the plowshare module registry:

 echo "mega            | download | upload |        |      |       |" >> /usr/local/share/plowshare4/modules/config

After this you can execute the command plowup mega to validate that the installation was successful. The output should look similar to:

plowup: you must specify a filename.
plowup: try `plowup --help' for more information.

Encrypt and upload

Backing up the files takes three steps (assuming there is already one folder with all the information to back up); the sketch after this list puts them together:

  1. Create a tar.gz archive (tar -czf backup.tar.gz ./backupfolder)
  2. Encrypt the archive with openssl, based on this post (openssl aes-256-cbc -in backup.tar.gz -out backup.tar.gz.aes -pass file:pass.txt)
  3. Upload the archive with plowshare (plowup mega --auth=username:password --folder="Backups" backup.tar.gz.aes)
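
Put together, a minimal sketch of the whole backup run (assuming pass.txt holds the encryption passphrase and real Mega credentials are substituted):

#!/bin/bash
# Archive, encrypt and upload a backup folder to Mega
tar -czf backup.tar.gz ./backupfolder
openssl aes-256-cbc -in backup.tar.gz -out backup.tar.gz.aes -pass file:pass.txt
plowup mega --auth=username:password --folder="Backups" backup.tar.gz.aes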

Install Node-Red on Openshift

Node-Red is a visual tool for wiring the Internet of Things. It allows you to define flows, triggers and outputs that process data. This can be used in all kinds of applications, such as home automation or security. I wanted to use it for remote monitoring of the private server I have installed at home. To be able to do this I created a free gear at openshift.com. The reason for selecting OpenShift was basically the three free gears they offer.

Create the gear in OpenShift

For this howto I assume that you have created an account at openshift.com and have used the following webpage to create a
Node.js instance: Node.js Application Hosting @ Openshift. In short, you can use the command:

rhc app create MyApp nodejs-0.10

or via the web flow, as explained in this blogpost.

Clone via git

In order to check out the code that OpenShift will execute when the application starts, you first need to configure your SSH key in the settings.

After the SSH public key is set up, use the git clone ssh://xxx@appname-openshiftname.rhcloud.com command to clone the source files; you can find this URL in the app details.

Your repository will have the following format:

node_modules/            Any Node modules packaged with the app 
deplist.txt              Deprecated.
package.json             npm package descriptor.
.openshift/              Location for OpenShift specific files
    action_hooks/        See the Action Hooks documentation 
    markers/             See the Markers section below
server.js                The default node.js execution script.

File updates

We will update two files: package.json and server.js. First, replace the contents of package.json with:

package.json

{
  "name": "Node-Red",
  "version": "1.0.0",
  "description": "Node RED on Openshift",
  "keywords": [
    "OpenShift",
    "Node.js",
    "application",
    "node-red"
  ],
  "engines": {
    "node": ">= 0.6.0",
    "npm": ">= 1.0.0"
  },

  "dependencies": {
    "express": "4.x",
    "node-red": ">= 0.9
    "atob": "1.1.2",
    "basic-auth-connect": "1.0.0"
  },
  "devDependencies": {},
  "bundleDependencies": [],

  "private": true,
  "main": "server.js"
}

The author and homepage fields are provided by default in the example, but I left them out. This file defines the different dependencies for running the server. The node-red dependency points to the latest stable release. Since node-red is still in beta, you might sometimes want to use the latest version from github. More on that at the end of the article.

server.js

The default server.js file needs to be replaced with a version that will run node-red.

var http = require('http');
var express = require("express");
var RED = require("node-red");
var atob = require('atob');

var MyRed = function() {

    //  Scope.
    var self = this;


    /*  ================================================================  */
    /*  Helper functions.                                                 */
    /*  ================================================================  */

    /**
     *  Set up server IP address and port # using env variables/defaults.
     */
    self.setupVariables = function() {
        //  Set the environment variables we need.
        self.ipaddress = process.env.OPENSHIFT_NODEJS_IP;
        self.port      = process.env.OPENSHIFT_NODEJS_PORT || 8000;

        if (typeof self.ipaddress === "undefined") {
            //  Log errors on OpenShift but continue w/ 127.0.0.1 - this
            //  allows us to run/test the app locally.
            console.warn('No OPENSHIFT_NODEJS_IP var, using 127.0.0.1');
            self.ipaddress = "127.0.0.1";
        }



        // Create the settings object
        self.redSettings = {
            httpAdminRoot:"/",
            httpNodeRoot: "/api",
            userDir: process.env.OPENSHIFT_DATA_DIR
        };

        if (typeof self.redSettings.userDir === "undefined") {
            console.warn('No OPENSHIFT_DATA_DIR var, using ./');
            self.redSettings.userDir = "./";
        }
    };

    /**
     *  terminator === the termination handler
     *  Terminate server on receipt of the specified signal.
     *  @param {string} sig  Signal to terminate on.
     */
    self.terminator = function(sig){
        if (typeof sig === "string") {
            console.log('%s: Received %s - terminating app ...',
                        Date(Date.now()), sig);
            RED.stop();
            process.exit(1);
        }
        console.log('%s: Node server stopped.', Date(Date.now()) );
    };

    /**
     *  Setup termination handlers (for exit and a list of signals).
     */
    self.setupTerminationHandlers = function(){
        //  Process on exit and signals.
        process.on('exit', function() { self.terminator(); });

        // Removed 'SIGPIPE' from the list - bugz 852598.
        ['SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGILL', 'SIGTRAP', 'SIGABRT',
         'SIGBUS', 'SIGFPE', 'SIGUSR1', 'SIGSEGV', 'SIGUSR2', 'SIGTERM'
        ].forEach(function(element, index, array) {
            process.on(element, function() { self.terminator(element); });
        });
    };

    /*  ================================================================  */
    /*  App server functions (main app logic here).                       */
    /*  ================================================================  */

    /**
     *  Create the routing table entries + handlers for the application.
     */
    self.createRoutes = function() {
        self.routes = { };

        self.routes['/asciimo'] = function(req, res) {
            var link = "http://i.imgur.com/kmbjB.png";
            res.send("<html><body><img src='" + link + "'></body></html>");
        };
    };

    /**
     *  Initialize the server (express) and create the routes and register
     *  the handlers.
     */
    self.initializeServer = function() {
        self.createRoutes();

        // Create an Express app
        self.app = express();

        // Create a server
        self.server = http.createServer(self.app);

        //setup basic authentication
        var basicAuth = require('basic-auth-connect');
        self.app.use(basicAuth(function(user, pass) {
            return user === 'test' && pass === atob('dGVzdA==');
        }));

        // Initialise the runtime with a server and settings
        RED.init(self.server, self.redSettings);
        console.log('%s is the userDir for RED', self.redSettings.userDir);

        // Serve the editor UI from /red
        self.app.use(self.redSettings.httpAdminRoot,RED.httpAdmin);

        // Serve the http nodes UI from /api
        self.app.use(self.redSettings.httpNodeRoot,RED.httpNode);

        // Add a simple route for static content served from 'public'
        //self.app.use("/",express.static("public"));

        //  Add handlers for the app (from the routes).
        for (var r in self.routes) {
            self.app.get(r, self.routes[r]);
        }
    };

    /**
     *  Initializes the sample application.
     */
    self.initialize = function() {
        self.setupVariables();
        self.setupTerminationHandlers();

        // Create the express server and routes.
        self.initializeServer();
    };

    /**
     *  Start the server (starts up the sample application).
     */
    self.start = function() {
        //  Start the app on the specific interface (and port).
        self.server.listen(self.port,self.ipaddress, function() {
            console.log('%s: Node server started on %s:%d ...',
                        Date(Date.now() ), self.ipaddress, self.port);
        });

        // Start the runtime
        RED.start();
    };
};

/**
 *  main():  Main code.
 */
var red = new MyRed();
red.initialize();
red.start();

This is a variation on the default server.js from OpenShift that initializes the RED server. In initializeServer, node-red is started. I have added Basic Authentication to prevent unauthorized access. To avoid having the password in plain text it is base64 encoded; note that base64 only obscures the password, it does not encrypt it. Via base64encode.org you can encode your password and place it in the server.js file.
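
You can also encode the password locally; for example, the dGVzdA== value used in server.js above is produced by:

echo -n 'test' | base64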

Git push

Once you have the files ready, commit the changes via git and issue a git push to update the remote repository at OpenShift. This will trigger some hooks on the server side and start the node server. The output should end in something similar to:

remote: npm info ok 
remote: Preparing build for deployment
remote: Deployment id is d46d7d66
remote: Activating deployment
remote: Starting NodeJS cartridge
remote: Tue Sep 09 2014 03:09:49 GMT-0400 (EDT): Starting application 'red' ...
remote: -------------------------
remote: Git Post-Receive Result: success
remote: Activation status: success
remote: Deployment completed with status: success
To ssh://xxx@red-xxx.rhcloud.com/~/git/red.git/
   6317488..2724c91  master -> master

You can now access your node-red instance at http://red-[openshift namespace].rhcloud.com:8000. The explicit port 8000 is required because the OpenShift proxy currently does not support forwarding WebSockets via port 80. If you open the website at port 80, it will show an error message saying that the connection to the server is lost. See below for a workaround.

Workaround Openshift Websockets

In order to access node-red at port 80 (and save typing the :8000 suffix) I have created a workaround. The port number for the WebSocket connection is taken from location.port in the file public/red/comms.js. I have created a fork with a separate branch in which the port number is fixed to 8000; it is available at github. In order to use it you need to update the node-red dependency to the following line:

"node-red": "git://github.com/matueranet/node-red.git#067028b59917e98615c87985c02810c4828a25fa"

This also references a specific commit, so that the server is not automatically updated to a newer version before I decide to do so.
