
Posts about devops

System Resource Monitoring

We often need to monitor a system during load testing to determine how heavily the system is being taxed (e.g. to choose the proper AWS instance type) and which resources are being hit (memory, CPU, etc.). Often the target system has monitoring in place (e.g. Prometheus); however, there are times when we do not have a monitoring tool and, more often, times when we don't want the overhead of the monitoring tool itself impacting the load test.

The typical need is to spin up a server, add the application, and hit the server with a load test (locust.io, jmeter, ab, etc.). Adding node_exporter to an instance takes a few seconds but is useless if you do not have Prometheus set up or available to the instance. This typically leads to running the load test and periodically checking the instance's metrics as the test runs - not an optimal solution.

Our quest to automate this task had two key requirements:

  • easy to install (low dependencies)
  • small footprint (if the monitor itself takes 50% of memory/CPU it's not very helpful)

sarmore

Our first attempt to address this issue is basic and meets the requirements. sarmore uses sar, which is already present on most Linux instances and may already be running.

sarmore is very basic: it simply starts three sar instances (one each to monitor CPU, memory, and load), which run for a given amount of time and log to the specified LOG_DIR.
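The idea above can be sketched as follows. This is a minimal illustration, not sarmore's actual code: the log directory, sample interval/count, and log file names are assumptions, though the sar flags (-u for CPU, -r for memory, -q for load) are standard sysstat options.

```python
import os
import subprocess

# Hypothetical settings; sarmore's real configuration may differ.
LOG_DIR = "/tmp/sarmore"   # stand-in for LOG_DIR
INTERVAL = 1               # seconds between samples
COUNT = 60                 # number of samples (INTERVAL * COUNT = total runtime)

# One sar invocation per resource: -u = cpu, -r = memory, -q = load average.
MONITORS = {
    "cpu.log": ["sar", "-u", str(INTERVAL), str(COUNT)],
    "mem.log": ["sar", "-r", str(INTERVAL), str(COUNT)],
    "load.log": ["sar", "-q", str(INTERVAL), str(COUNT)],
}

def start_monitors(log_dir=LOG_DIR):
    """Launch one sar process per resource, each writing to its own log file."""
    os.makedirs(log_dir, exist_ok=True)
    procs = []
    for log_name, cmd in MONITORS.items():
        log = open(os.path.join(log_dir, log_name), "w")
        procs.append(subprocess.Popen(cmd, stdout=log))
    return procs
```

Each sar process exits on its own once COUNT samples have been taken, so there is nothing to tear down after the load test finishes.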

Quick Summary: no install, may already be running

Benefits
  • typically no install
  • may already be running
Drawbacks
  • output format is not the best to work with
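Since the output format is the main drawback, post-processing helps. Below is a hypothetical sketch of parsing `sar -u` text output into records; the column layout is assumed from typical sysstat reports (24-hour timestamps, no AM/PM column).

```python
def parse_sar_cpu(text):
    """Parse `sar -u` text output into a list of dicts.

    Assumes the usual sysstat layout:
    timestamp, CPU, %user, %nice, %system, %iowait, %steal, %idle
    """
    rows = []
    fields = None
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] == "Average:":
            continue
        if "%user" in parts:                 # header row names the columns
            fields = parts[1:]               # drop the timestamp column
            continue
        if fields and len(parts) == len(fields) + 1:
            row = {"time": parts[0], "cpu": parts[1]}
            row.update(zip(fields[1:], map(float, parts[2:])))
            rows.append(row)
    return rows
```

From here the records can be written as CSV/JSON or fed straight into a plotting tool.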

pymore

Our second attempt to address this issue was a Python-based solution, since Python is available on most Linux instances. pymore runs in a loop, polling system resources at a given interval and writing the results to log files.
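A minimal sketch of the polling-loop idea follows. This is not pymore's actual code: it reads /proc directly so no packages are needed, and the metrics chosen and log format are illustrative only.

```python
import time

def parse_loadavg(text):
    """Parse /proc/loadavg content into 1/5/15-minute load averages."""
    one, five, fifteen = text.split()[:3]
    return float(one), float(five), float(fifteen)

def parse_meminfo(text):
    """Parse /proc/meminfo content into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])
    return info

def poll(interval=5, duration=60, log_path="metrics.log"):
    """Poll load and memory every `interval` seconds for `duration` seconds."""
    end = time.monotonic() + duration
    with open(log_path, "a") as log:
        while time.monotonic() < end:
            with open("/proc/loadavg") as f:
                load1, _, _ = parse_loadavg(f.read())
            with open("/proc/meminfo") as f:
                mem = parse_meminfo(f.read())
            log.write(f"{time.time():.0f} load1={load1} "
                      f"mem_available_kb={mem['MemAvailable']}\n")
            log.flush()
            time.sleep(interval)
```

Because the interpreter itself stays resident for the whole run, this approach carries the memory overhead noted in the drawbacks below.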

Quick Summary: clone repo and run (no dependencies)

Benefits
  • clone repo, no app dependencies
  • more detailed, customizable metrics can be captured
Drawbacks
  • requires git/cloning
  • memory usage is relatively high

rumore

Our third attempt is a Rust app which ships and runs as a single binary. rumore works much like pymore: it polls in a loop and writes to log files.

Quick Summary: put binary on server and run

Benefits
  • single binary (no dependencies)
  • low memory usage
  • more detailed, customizable metrics can be captured
Drawbacks
  • requires putting binary on server

Summary

All three solutions have benefits and drawbacks which should be weighed for each application. We typically use rumore where possible due to its light resource usage and ease of installation.

Resource Utilization

Utilization of the monitors themselves is as follows:

Application   Bin Size   VIRT (KiB)   RES (KiB)   SHR (KiB)   CPU%
sarmore       n/a        16,140        2,292       2,076      0.0
pymore        n/a        19,992       11,628       6,028      0.0
rumore        1.8M        2,928          888         768      0.0

Note that the above stats for sarmore are for running it directly; if you already have sar running, you would not need sarmore and could simply query sar's existing data.

Feedback

We welcome feedback and enhancements to any of the three applications, keeping in mind the two key requirements. Feel free to fork one (or all) and submit a PR. We would also be interested if you are willing to write similar logic in another language (such as Java, C/C++, etc.) to compare against sarmore, pymore, and rumore.

The boi Primer

This primer will show how to take an existing application (the dradux.com website itself) and deploy it to a kubernetes cluster using boi, a lightweight tool for building and pushing docker images. Deploying an application to k8s can be a daunting task depending on the complexity of your application and its needs; however, as this post shows, it can be quick (~30 minutes) and relatively easy if you have the needed components in place.

Key Components

  • an application, ready to go
  • a k8s cluster
  • an image repository

The Application

For this primer I am using the dradux.com website itself as an example. Its source is on gitlab. The app is a nikola-based static site. To 'generate' the site you simply run nikola build; the built site is placed in the output/ directory. This part was easy as it already existed!

The k8s Cluster

Easy, I already had one. If you don't, I suggest looking at rancher, as you can get a k8s cluster set up in minutes.

The Image Repository

I have spent countless hours setting up internal registries, registries inside of k8s, and integrating with external/public registries. If your app is closed-source this task will take more time and be more difficult. Lately I have started using the registry which comes with a gitlab project, as it is free and I do not need to manage the registry. If you want to see one, check out the dradux-site registry - you can even pull from it if you want to run your own version of this site!

Dockerizing Your App (if needed)

This is an art in itself and can take on a life of its own. I suggest planning carefully before diving in, and keeping things simple. Your application/code should do the heavy lifting, with docker as a light wrapper to bundle it all up; that said, some things (old java apps) just won't stay light in my experience. If you need help in dockerizing feel free to contact us!

Dockerizing dradux-site was simple as the site is static - just html/javascript/css. No server-side scripting, database, etc. All we need is an nginx instance with the app code added and we are ready to go! Here's the Dockerfile:

# https://hub.docker.com/_/nginx/
FROM nginx:1.17.2-alpine

# add nginx conf.
ADD nginx_default.conf /etc/nginx/conf.d/default.conf
# copy the site.
COPY output/ /app/

RUN apk add --no-cache                                                  \
    tzdata                                                              \
  # set timezone                                                        \
  && cp "/usr/share/zoneinfo/Europe/London" /etc/localtime              \
  # cleanup                                                             \
  && apk del tzdata                                                     \
  && echo "Setup complete!"

# start nginx.
CMD nginx -g 'daemon off;'

I also created the nginx_default.conf file which is:

server {
    listen 80;
    server_name _;
    root /app;
    client_max_body_size 55M;
    access_log   /var/log/nginx/access.log;
    error_log    /var/log/nginx/error.log;
    gzip on;
    gzip_http_version 1.0;
    gzip_proxied any;
    gzip_min_length 500;
    gzip_disable "MSIE [1-6]\.";
    # text/html included by default
    gzip_types text/plain text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml;
    try_files $uri/index.html $uri.html $uri @app;
}

That's all it took to dockerize the app!

Add boi Integration

As mentioned before boi is lightweight. To add boi to the project I created the .boi.toml configuration file as:

# boi project config file.

[build]
    [build.image]
        value = true
    #[build.version]
    #  value = "1.0.0"
    [build.base_path]
        value = "."
    [build.dockerfile]
        value = "Dockerfile"
    [build.repository]
        value = "registry.gitlab.com/drad/dradux-site"
    [build.args]
        arglist = [ "DEPLOY_ENV:prd" ]

[push]
    [push.image]
        value = true
    [push.registry.url]
        value = "registry.gitlab.com"
    [push.registry.username]
        value = "drad"
    # this is disabled as we use the .dockercfg
    #[push.registry.password]
    #  value = ""

[source]
    [source.tag]
        value = true

The build.repository and push.registry.url values are key, as they specify the repository images are pushed to.

Build & Push Your First Image

Use boi to build and push an image of the application: boi --build-version=1.0.0. If all goes well you will have version 1.0.0 of your application in the container registry specified.

Deploy to k8s

Before you can deploy to k8s you need to set up your deployment. This can be done in several different ways in k8s; we will use a standard namespaced deployment as an example.

First, create your namespace (e.g. kubectl create namespace dradux-site), then create the deployment:

apiVersion: v1
kind: List
items:
- apiVersion: apps/v1beta1
  kind: Deployment
  metadata:
    name: dradux-site
    namespace: dradux-site
  spec:
    template:
      metadata:
        labels:
          run: dradux-site
      spec:
        containers:
          - name: dradux-site
            image: registry.gitlab.com/drad/dradux-site:latest
            resources:
              limits:
                memory: 128Mi
                cpu: 100m
            ports:
              - containerPort: 80
- apiVersion: v1
  kind: Service
  metadata:
    name: dradux-site
    namespace: dradux-site
    labels:
      run: dradux-site
  spec:
    ports:
    - port: 80
    selector:
      run: dradux-site

Notice that the container image is set to the repository we push to with boi. Also note that we are pulling the 'latest' tag, and in boi we have it set to apply the 'latest' tag on build.

Next, create an ingress to route traffic into the service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: default
  namespace: dradux-site
spec:
  rules:
  - host: dradux.com
    http:
      paths:
      - path: /
        backend:
          serviceName: dradux-site
          servicePort: 80

This is a standard ingress; we will SSL-terminate at a LoadBalancer in front of k8s.

Summary

You should now have the service up and an ingress to catch traffic inside of k8s. You still have the task of managing DNS to route dradux.com to the k8s cluster (and SSL termination at a LB if needed), but other than that you should be able to hit your URL and see your site!