Tag: linux

MySQL drop all tables in database using queries

During development I sometimes need to drop all tables in a database. The snippet below drops every existing table in the current database, so be careful when executing it.

SET FOREIGN_KEY_CHECKS = 0;
SET GROUP_CONCAT_MAX_LEN=32768;
SET @tables = NULL;
SELECT GROUP_CONCAT('`', table_name, '`') INTO @tables
  FROM information_schema.tables
  WHERE table_schema = (SELECT DATABASE());
SELECT IFNULL(@tables,'dummy') INTO @tables;

SET @tables = CONCAT('DROP TABLE IF EXISTS ', @tables);
PREPARE stmt FROM @tables;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET FOREIGN_KEY_CHECKS = 1;
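
The GROUP_CONCAT call builds one comma-separated, backtick-quoted table list, so the prepared statement ends up as a single multi-table DROP. For hypothetical tables t1, t2 and t3 the generated statement looks like this (sketched in shell just to show the resulting string):

```shell
# Hypothetical table list, as GROUP_CONCAT would produce it
tables='`t1`,`t2`,`t3`'
# The statement that PREPARE/EXECUTE ends up running
echo "DROP TABLE IF EXISTS $tables"
# → DROP TABLE IF EXISTS `t1`,`t2`,`t3`
```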

Via Stack Overflow

Convert Windows line endings to Linux line endings using vim

Windows uses the CR/LF character pair (Carriage Return and Line Feed) to indicate a line ending. Under Linux this is a single character, LF. You might therefore see strange characters at line endings when editing a file on Linux that was saved on Windows.

To convert these line endings under Linux perform the following steps:

  • Edit the file with vim
  • Give the command :set ff=unix
  • Save the file
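
If you prefer a non-interactive conversion (for scripts or many files at once), the same result can be had with GNU sed; a small sketch, where the file name is just an example:

```shell
# Create a sample file with Windows (CRLF) line endings
printf 'line one\r\nline two\r\n' > /tmp/crlf_demo.txt
# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' /tmp/crlf_demo.txt
```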

Solution Source

Dump all Apache requests

On an AWS EC2 instance I needed to find the contents of the X-Forwarded-For header of incoming requests, sent by CloudFront. The easiest way was to dump all incoming traffic on port 80:

sudo tcpdump -s 0 -w dump.pcap 'tcp dst port 80'

I then copied the dump.pcap to my local machine and loaded it into Wireshark to read its contents.
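
Wireshark is not strictly required: tcpdump itself can read the capture back in ASCII with `tcpdump -A -r dump.pcap`, and the header can then be grepped out. The grep step is shown below on a sample request, since it works on any ASCII dump (the header value is made up):

```shell
# Reading the capture back would be: tcpdump -A -r dump.pcap
# Extracting the header from the ASCII output:
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nX-Forwarded-For: 203.0.113.7\r\n\r\n' \
  | grep -i 'x-forwarded-for'
```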

Solution source

Free up space on the /boot path in Linux

When executing apt-get upgrade on my Linux machine I got the error that it could not run because the /boot folder was full. And indeed, when executing df -h there was only 3% of space left on /boot.

As it turns out, old kernels are stored in /boot, and they were preventing me from updating. Since I only use the latest kernel, the older versions can be removed.

This link explains in simple steps how to find and remove the older kernel versions.

The summary of the page above is to execute the commands below (at your own risk!). First execute the dry run to check what would be removed:

dpkg -l linux-* | awk '/^ii/{ print $2}' | grep -v -e `uname -r | cut -f1,2 -d"-"` | grep -e [0-9] | grep -E "(image|headers)" | xargs sudo apt-get --dry-run remove

If everything checks out run:

dpkg -l linux-* | awk '/^ii/{ print $2}' | grep -v -e `uname -r | cut -f1,2 -d"-"` | grep -e [0-9] | grep -E "(image|headers)" | xargs sudo apt-get -y purge
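
The pipeline hinges on the backticked uname expression, which extracts the running kernel's version so it is excluded from removal; a quick sketch with a made-up kernel release:

```shell
# cut keeps the first two dash-separated fields of the release string,
# i.e. the version of the currently running kernel
echo "3.13.0-24-generic" | cut -f1,2 -d"-"
# → 3.13.0-24
```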

Two ways to delete files based on age in Linux

I have found that there are two different ways to clean up a folder by removing its older files. One method keeps a fixed number of the most recent files, regardless of their age; the other removes everything older than a given age. I use these, for example, to prune backup folders.

Remove using tail

This method assumes that there is one file for every day, for example daily backups. The command below keeps only the newest 29 files (tail -n +30 prints from the 30th entry onward) and removes the rest; with one file per day, that is roughly a month of history. It is important that there is only one file per day, because otherwise the tail result would not be correct. The advantage is that if the backup job fails, older backup files are not removed merely because of their age.

ls -1c ./ | tail -n +30 | xargs rm -rf

Remove using find

The second method is a cron entry that runs daily at 15:00: it selects all files under ~/files with a modification time older than two days and removes them.

0 15 * * * find ~/files/* -mtime +2 -exec rm {} \;
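
The find part of that entry can be tried safely on a throwaway directory first; a sketch with made-up file names, using GNU touch -d to give one file an old modification time:

```shell
mkdir -p /tmp/age_demo
touch /tmp/age_demo/fresh.txt
touch -d "5 days ago" /tmp/age_demo/stale.txt
# Remove files modified more than 2 days ago
find /tmp/age_demo -type f -mtime +2 -exec rm {} \;
ls /tmp/age_demo
# → fresh.txt
```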

Install Unbound for local network lookups

I am running a local server for some private websites. The problem is that from within the local network I cannot look up the public DNS entries that are set for these websites; my router does not know where to route the requests. I used to solve this by creating a separate DNS entry prefixed with l. for every domain name. Recently I found that you can run a DNS server that is only used locally and resolves lookups to the local IP address instead of the public one. Unbound is a DNS server that can provide this: it proxies all DNS requests and only alters the ones configured to be redirected locally. Below I describe manual installation, and installation using the apt-get package manager on Raspbian.

Installing

Manually from source

I am running a server with Archlinux which did not provide a package, so I had to install it manually. I used the following commands:

cd /tmp
wget https://unbound.net/downloads/unbound-latest.tar.gz
tar -xzf unbound-latest.tar.gz
cd unbound-*
./configure --prefix=/usr --sysconfdir=/etc
make
make install

This will compile and install unbound in /usr/bin, with its configuration in /etc/unbound.

Service on Archlinux

With the manual installation I also needed to define a service to start and stop unbound. I created the file /usr/lib/systemd/system/unbound.service:

[Unit]
Description=Unbound DNS Resolver
After=network.target

[Service]
PIDFile=/run/unbound.pid
ExecStart=/usr/bin/unbound -d
ExecReload=/bin/kill -HUP $MAINPID
Restart=always

[Install]
WantedBy=multi-user.target

I also needed to add a user for unbound to run as:

useradd unbound

Using apt-get

apt-get install unbound

Configuration

I placed two configuration files in the /etc/unbound folder. They configure the unbound server to listen on all bound IP addresses and to allow DNS requests from the local network (in my case 192.168.1.*) and from localhost. The main file also includes a second file that defines the static internal IP addresses for the domain names hosted locally.

The local-zone line declares that requests for the zone example.com are forwarded to the actual DNS server unless an exception is defined; each local-data line defines such an exception for a specific entry.

/etc/unbound/unbound.conf
server:
    # The following line will configure unbound to perform cryptographic
    # DNSSEC validation using the root trust anchor.
    auto-trust-anchor-file: "/var/lib/unbound/root.key"

    include: "/etc/unbound/localnetwork.conf"
    interface: 0.0.0.0
    access-control: 192.168.1.0/24 allow
    access-control: 127.0.0.0/8 allow

/etc/unbound/localnetwork.conf
local-zone: "example.com." transparent
local-data: "foo.example.com. IN A 192.168.1.1"
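
Additional local overrides are simply more local-data lines in the same file; the host names and addresses below are made-up examples:

```
local-data: "bar.example.com. IN A 192.168.1.2"
local-data: "nas.example.com. IN A 192.168.1.10"
```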

In order for the server itself to also use these addresses, I updated /etc/resolv.conf to point at this DNS server:

nameserver 192.168.1.1

Use plowshare on Linux to upload to mega

I encrypt the backups I make on my Linux installations and upload them to cloud services. Mega.co.nz is such a service, offering 50GB for free. plowshare is a Linux command-line tool that provides an interface for uploading to and downloading from many free file-hosting services. This post explains how to install plowshare on a Linux host, install the mega module and upload a backup.

Install plowshare

root@web01:~# git clone https://code.google.com/p/plowshare/ plowshare4
Cloning into 'plowshare4'...
remote: Counting objects: 16977, done.
Receiving objects: 100% (16977/16977), 4.75 MiB | 167 KiB/s, done.
Resolving deltas: 100% (12960/12960), done.
root@web01:~# cd plowshare4/
root@web01:~/plowshare4# make install

Install mega.co.nz module

Execute the following commands to install the mega plugin for plowshare. Compiling it requires the OpenSSL development headers: the package is called libssl-dev on Debian, Ubuntu or similar distributions, and openssl-devel on Fedora, CentOS or RHEL.

git clone https://code.google.com/p/plowshare.plugin-mega plowshare.plugin-mega
cd plowshare.plugin-mega/
apt-get install libssl-dev
make install

root@web01:~# git clone https://code.google.com/p/plowshare.plugin-mega plowshare.plugin-mega
Cloning into 'plowshare.plugin-mega'...
remote: Counting objects: 150, done.
Receiving objects: 100% (150/150), 56.40 KiB, done.
Resolving deltas: 100% (69/69), done.
root@web01:~# cd plowshare.plugin-mega/
root@web01:~/plowshare.plugin-mega# apt-get install libssl-dev
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libssl-doc
The following NEW packages will be installed:
  libssl-dev libssl-doc
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,709 kB of archives.
After this operation, 6,229 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-dev armhf 1.0.1e-2+rvt+deb7u13 [1,504 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ wheezy/main libssl-doc all 1.0.1e-2+rvt+deb7u13 [1,205 kB]
Fetched 2,709 kB in 2s (1,226 kB/s)
Selecting previously unselected package libssl-dev.
(Reading database ... 75102 files and directories currently installed.)
Unpacking libssl-dev (from .../libssl-dev_1.0.1e-2+rvt+deb7u13_armhf.deb) ...
Selecting previously unselected package libssl-doc.
Unpacking libssl-doc (from .../libssl-doc_1.0.1e-2+rvt+deb7u13_all.deb) ...
Processing triggers for man-db ...
Setting up libssl-dev (1.0.1e-2+rvt+deb7u13) ...
Setting up libssl-doc (1.0.1e-2+rvt+deb7u13) ...
root@web01:~/plowshare.plugin-mega# make install
gcc -Wall -O3 -s src/crypto.c -o mega -lcrypto
install -d /usr/local/share/plowshare4/modules
install -d /usr/local/share/plowshare4/plugins
install -m 755 mega /usr/local/share/plowshare4/plugins/mega
install -m 644 module/mega.sh /usr/local/share/plowshare4/modules

After this we need to register the mega module to the plowshare module registry:

 echo "mega            | download | upload |        |      |       |" >> /usr/local/share/plowshare4/modules/config

After this you can execute the command plowup mega to validate that the installation succeeded. The output should look similar to:

plowup: you must specify a filename.
plowup: try `plowup --help' for more information.

Encrypt and upload

The file backup takes place in three steps (assuming there is already one folder containing all the backed-up information):

  1. Create a tar.gz archive (tar -czf backup.tar.gz ./backupfolder)
  2. Encrypt the archive with openssl, based on this post (openssl aes-256-cbc -in backup.tar.gz -out backup.tar.gz.aes -pass file:pass.txt)
  3. Upload the archive with plowshare (plowup mega --auth=username:password --folder="Backups" backup.tar.gz.aes)
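
Steps 1 and 2 can be tried end-to-end on throwaway data; the paths and passphrase file below are examples, and the plowup upload itself is omitted. A decrypt round-trip verifies that the archive survives intact:

```shell
mkdir -p /tmp/bdemo/backupfolder
echo "hello" > /tmp/bdemo/backupfolder/data.txt
echo "s3cret" > /tmp/bdemo/pass.txt
cd /tmp/bdemo
# Step 1: archive the backup folder
tar -czf backup.tar.gz ./backupfolder
# Step 2: encrypt with a passphrase read from a file
openssl aes-256-cbc -in backup.tar.gz -out backup.tar.gz.aes -pass file:pass.txt
# Round-trip check: decrypt and compare against the original archive
openssl aes-256-cbc -d -in backup.tar.gz.aes -out check.tar.gz -pass file:pass.txt
cmp backup.tar.gz check.tar.gz && echo OK
```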

Find the largest files in subdirectories

I regularly find myself with limited disk space, unsure which files to remove. Normally the largest files I no longer use are the best candidates. On Linux I use the command find:

find . -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}

This command searches for the largest files. In order to find the largest directories, replace -type f with -type d.
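
A quick way to see the command at work is on a scratch directory with files of known sizes (names and sizes are examples); the largest file is printed last:

```shell
mkdir -p /tmp/sizedemo
head -c 1K /dev/zero > /tmp/sizedemo/small.bin
head -c 1M /dev/zero > /tmp/sizedemo/big.bin
cd /tmp/sizedemo
find . -type f -print0 | xargs -0 du | sort -n | tail -10 | cut -f2 | xargs -I{} du -sh {}
```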
