
Posts for year 2019

Use sendmail over mail

mail (from mailutils) is a common sysadmin tool for sending system mail; however, it brings in unneeded/undesired packages (e.g. mysql/mariadb tools). sendmail is much lighter and can usually get the job done.

Typical mail Scenario

The following cron entry runs deborphan every Monday at 14:33, emailing the results to the drad user.

33 14 * * Mon   deborphan --guess-all | mail -s "Weekly deborphan Report" drad

Using sendmail Instead

sendmail does not have a -s (subject) parameter. You need to preface your message with "Subject: {your subject}", followed by a blank line, and then the body, as follows:

33 14 * * Mon   ( echo "Subject: Weekly deborphan Report"; echo; deborphan --guess-all ) | /usr/sbin/sendmail drad

The sendmail route is a little more verbose but gets the job done and now you do not need mailutils!
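
If you send system mail from more than one script, the subject-line dance can be wrapped in a small helper. A minimal sketch (sendmail_s is our name, not a standard tool, and the SENDMAIL override is an assumption added for testability):

```shell
# sendmail_s: emulate `mail -s` on top of sendmail.
# usage: some_command | sendmail_s "Subject here" recipient [recipient...]
sendmail_s() {
  subject=$1
  shift
  # prepend the Subject header and a blank line, then pass through the piped-in body
  { printf 'Subject: %s\n\n' "$subject"; cat; } | "${SENDMAIL:-/usr/sbin/sendmail}" "$@"
}
```

With the helper sourced in a wrapper script, the cron entry above could read: deborphan --guess-all | sendmail_s "Weekly deborphan Report" drad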


mail (mailutils) is a common and useful mail tool but depends on several packages which may not be desirable. You can fully replace mail with sendmail to avoid the dependencies and yet provide the same system mail functionality.

System Resource Monitoring

We often need to monitor a system during load testing to determine how heavily it is being taxed (e.g. when choosing the proper AWS instance type) and to determine what is being hit (memory, CPU, etc.). Often the target system has monitoring in place (e.g. Prometheus); however, there are times when we do not have a monitoring tool and, more often, times when we don't want the overhead of the monitoring tool itself impacting the load test.

The need is typically to spin up a server, add the application, and hit the server with a load test (jmeter, ab, etc.). Adding node_exporter to an instance takes a few seconds but is useless if you do not have Prometheus set up or available to the instance. This typically leads to running the load test and periodically checking the resource's metrics by hand as the test runs - not an optimal solution.

Our quest to automate this task had two key requirements:

  • easy to install (low dependencies)
  • small footprint (if the monitor itself takes 50% of memory/CPU it's not very helpful)


Our first attempt to address this issue is basic and meets the requirements. sarmore uses sar, which is already on most Linux instances and may already be running.

sarmore is very basic: it simply starts 3 sar instances (one each to monitor CPU, memory, and load) which run for a given amount of time and log to the specified LOG_DIR.

Quick Summary: no install, may already be running

  • typically no install
  • may already be running
  • output format is not the best to work with
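
A rough sketch of what sarmore does (the variable names are assumptions). For safety this version only prints the three collector commands it would background; drop the echo to actually start them:

```shell
# sketch of sarmore's core: one sar collector per metric (cpu, memory, load),
# each sampling every INTERVAL seconds until DURATION is reached.
LOG_DIR=${LOG_DIR:-/tmp/sarmore}
INTERVAL=5                      # seconds between samples
DURATION=300                    # total monitoring time in seconds
COUNT=$((DURATION / INTERVAL))  # number of samples for sar to take
mkdir -p "$LOG_DIR"
for spec in "u:cpu" "r:mem" "q:load"; do
  flag=${spec%%:*}              # sar flag: -u cpu, -r memory, -q run-queue/load
  name=${spec#*:}
  echo "sar -$flag $INTERVAL $COUNT > $LOG_DIR/$name.log &"
done
```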


Our second attempt to address this issue was a python-based solution, given python is typically available on most Linux instances. pymore runs in a loop, polling system resources at a given interval and storing the results to log files.

Quick Summary: clone repo and run (no dependencies)

  • clone repo, no app dependencies
  • more detailed, customizable metrics can be captured
  • requires git/cloning
  • memory usage is relatively high
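
pymore's polling loop can be sketched as follows (here in shell rather than python, purely for brevity; the names and log format are assumptions, and pymore itself captures more metrics):

```shell
# poll available memory and 1-minute load every INTERVAL seconds,
# appending one timestamped line per sample to LOG (Linux /proc assumed).
LOG=${LOG:-/tmp/pymore-demo.log}
INTERVAL=${INTERVAL:-1}
SAMPLES=${SAMPLES:-3}
: > "$LOG"                      # start with an empty log
i=0
while [ "$i" -lt "$SAMPLES" ]; do
  mem_kb=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
  load1=$(cut -d' ' -f1 /proc/loadavg)
  echo "$(date +%s) mem_avail_kb=$mem_kb load1=$load1" >> "$LOG"
  i=$((i + 1))
  sleep "$INTERVAL"
done
```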


Our third attempt is a rust app which ships/runs as a single binary. rumore runs much as pymore does: in a loop, writing results to log files.

Quick Summary: put binary on server and run

  • single binary (no dependencies)
  • low memory usage
  • more detailed, customizable metrics can be captured
  • requires putting binary on server


All three solutions have benefits/drawbacks which should be considered for each application. We typically use rumore where possible due to its light resource usage and ease of install.

Resource Utilization

Resource utilization of the resource monitors themselves is as follows:

Application   Bin Size   VIRT     RES      SHR     CPU%
sarmore       n/a        16,140    2,292   2,076   0.0
pymore        n/a        19,992   11,628   6,028   0.0
rumore        1.8M        2,928      888     768   0.0

Note that the above stats for sarmore are for running it directly; if you already have sar running, you would not need sarmore at all - simply query sar.


We welcome feedback and enhancements to any of the three applications, keeping in mind the two key requirements. Feel free to fork one (or all) and submit a PR. We would also be interested if you are willing to write similar logic in another language (such as Java, C/C++, etc.) to compare against sarmore, pymore, and rumore.

Run SweetHome3D Without Install


SweetHome3D is a free interior design application. You can install it from most repos; however, the install often pulls in a lot of unneeded libraries and/or may not work well for your situation. Several articles suggest you can/should run it directly, but none gave the steps needed, so I thought I would pull together a quick post on it.

It's not difficult or glorious, hopefully the info will be of use to others. Here are the steps:

  1. download SweetHome3D
  2. extract the files: tar -xzf SweetHome3D-*.tgz
  3. run the launcher script in the extracted directory: ./SweetHome3D
  4. note: you will need a JRE - openjdk works just fine

Lessons Learned

I did manage to produce a working docker image of SweetHome3D, which keeps you from installing anything (other than docker); however, the java GUI blacks out sections of the app from time to time, and it wasn't worth the effort given I already have java installed for dbeaver. If you are interested in getting it to run in docker, feel free to start from what I produced.

boi Tutorial

This tutorial will show how to take an existing application (this website itself) and deploy it to a kubernetes cluster using boi, a lightweight tool for building and pushing docker images. Deploying an application to k8s can be a daunting task depending on the complexity of your application and the needs therein; however, as this post shows, it can be quick (~30 minutes) and relatively easy if you have the needed components in place.

Key Components

  • an application (ready to go)
  • a k8s cluster
  • an image repository

The Application

For this tutorial I am using this website itself as the example. Its source is on gitlab. The app is a nikola-based static site. To generate the site you simply run nikola build; the built site lands in the output/ directory. This part was easy as it already existed!

The k8s Cluster

Easy, I already had one. If you don't, I suggest looking at Rancher as you can get a k8s cluster set up in minutes.

The Image Repository

I have spent countless hours setting up internal registries, registries inside of k8s, and integrating with external/public registries. If your app is closed-source this task will take more time and be more difficult. Of late I have started using the registry which comes with a gitlab project, as it is free and I do not need to manage it. If you want to see one, check out the dradux-site registry.

Dockerizing Your App (if needed)

This is an art in itself and can take on a life of its own. I suggest a little planning before doing this; keep things simple. Your application/code should do the heavy lifting and docker should be a light wrapper to bundle it all up, though some things (old java apps) just won't stay light in my experience. If you need help dockerizing, feel free to contact us!

Dockerizing dradux-site was simple as the site is static - just html/javascript/css. No server-side scripting, database, etc. All we need is a lighttpd instance (we opted for it over nginx), add the app code, and we are ready to go!
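
For reference, a minimal Dockerfile for a setup like this might look as follows (the base image, paths, and omission of the rsyslogd pieces are assumptions on my part; the project's actual Dockerfile is in the repo):

```dockerfile
# hypothetical sketch: serve the nikola-generated static site with lighttpd
FROM alpine:3.9
RUN apk add --no-cache lighttpd
# nikola build writes the site to output/
COPY output/ /var/www/localhost/htdocs/
COPY lighttpd.conf /etc/lighttpd/lighttpd.conf
EXPOSE 80
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]
```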

Feel free to take a look at the Dockerfile along with its startup script (used to run rsyslogd, which ships my web logs to a graylog server) and the lighttpd.conf.

That's all it took to dockerize the app!

Add boi Integration

As mentioned before, boi is lightweight. To add boi to the project I created the .boi.toml configuration file. That is pretty much it; now you can build your image and push it to your image registry.

Build & Push Your First Image

Use boi to build and push an image of the application: boi build --version=1.0.0. If all goes well, you will have version 1.0.0 of your application in the specified container registry.

Deploy to k8s

Before you can deploy to k8s you need to set up your deployment. This can be done in several different ways in k8s; we will use a standard namespaced deployment as an example.

First, create your namespace and then create the deployment:

apiVersion: v1
kind: List
items:
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: dradux-site
    namespace: dradux-site
  spec:
    template:
      metadata:
        labels:
          run: dradux-site
      spec:
        containers:
        - name: dradux-site
          image: registry.gitlab.com/<your-group>/dradux-site:latest
          resources:
            limits:
              memory: 128Mi
              cpu: 100m
          ports:
          - containerPort: 80
- apiVersion: v1
  kind: Service
  metadata:
    name: dradux-site
    namespace: dradux-site
    labels:
      run: dradux-site
  spec:
    ports:
    - port: 80
    selector:
      run: dradux-site

Notice that the image for the container is set to where we push with boi. Also note we are pulling the 'latest' tag, and in boi we have it set to apply the 'latest' tag on build.

Next, create an ingress to route traffic into the service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default
  namespace: dradux-site
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: dradux-site
          servicePort: 80

A standard ingress; we will terminate SSL at a LoadBalancer in front of k8s.


You should now have the service up and an ingress to catch the traffic inside of k8s. You still have the task of managing DNS to route to the k8s cluster (and SSL termination at a LB if needed), but other than that you should be able to hit your URL and see your site!


Web Server SSL Certificate Layouts


All of the following in one .pem file:

  1. The Certificate for your domain
  2. The intermediates in ascending order to the Root CA
  3. A Root CA, if any (usually none)
  4. Private Key
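
Assembling the single-file layout is just concatenation in the order listed. A sketch using throwaway placeholder files (substitute your real certificate and key files):

```shell
# placeholder contents so the ordering is visible; real files will differ
printf 'DOMAIN-CERT\n'  > example.com.crt
printf 'INTERMEDIATE\n' > intermediate.crt
printf 'ROOT-CA\n'      > rootca.crt
printf 'PRIVATE-KEY\n'  > example.com.key
# order matters: domain cert, intermediates (ascending), root CA, private key
cat example.com.crt intermediate.crt rootca.crt example.com.key > example.com.pem
```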


Two separate files:

  1. The Certificate for your domain, the intermediates, and root CA are in one file
  2. The private key