Hello Postgres and REST in Crystal

Tagged crystal, postgres, rest  Languages bash, crystal, yaml

NOTE: This does not work with Crystal 0.25.2.

Install Crystal

brew install crystal-lang
mkdir -p projects/hello
cd projects/hello
shards init

Set up dependencies

Add the dependencies to shard.yml:

dependencies:
  pg:
    github: will/crystal-pg
    version: "~> 0.5"
  kemal:
    github: sdogruyol/kemal
    version: "~> 0.16.1"

Install the dependencies:

shards install

Write a simple REST API

require "kemal"
require "json"
require "pg"

PG_URL = "postgres://postgres@localhost:5432/xxx"
DB     = PG.connect PG_URL

get "/" do |env|
  env.response.content_type = "application/json"
  users = DB.exec("SELECT * FROM users")
  users.to_hash.map do |user|
    {first_name: user["first_name"].as(String), last_name: user["last_name"].as(String)}
  end.to_json
end

Kemal.run(4567)
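
Assuming the users table exists and contains data, you can smoke-test the endpoint with curl (the response shown here is hypothetical):

$ curl http://localhost:4567/
[{"first_name":"Jane","last_name":"Doe"}]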

How to debug Ansible variables

Tagged ansible, debug, hostvars, variables  Languages bash, yaml

Print all variables for all hosts from the command line:

$ ansible -i inventory/local -m debug -a "var=hostvars" all

Replace hostvars with any of the following to print:

  • ansible_local
  • groups
  • group_names
  • environment
  • vars

Print all variables for all hosts from a playbook:

- hosts: all
  tasks:
    - debug:
        var: hostvars[inventory_hostname]
        # Run ansible-playbook with -vvv to see the output, or set:
        # verbosity: 4

Print local facts (ansible_local) from a playbook:

- name: print ansible_local
  debug: var=ansible_local
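
ansible_local contains custom facts read from /etc/ansible/facts.d on the target host. As a sketch, a hypothetical /etc/ansible/facts.d/app.fact file in INI format:

[general]
version=1.2.3

would show up in the debug output as ansible_local.app.general.version.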

Ansible: How to find the IP address of a specific network interface when there are multiple network interfaces

Tagged ansible, ip address, network  Languages yaml

Here’s how to find the IP address of a specific network interface that matches a given CIDR:

- hosts: all
  gather_facts: true # ansible_all_ipv4_addresses comes from gathered facts
  tasks:
    - set_fact:
        prod_ip_addr: "{{ item }}"
      when: "item | ipaddr('192.168.10.0/24')"
      with_items: "{{ ansible_all_ipv4_addresses }}"
    - debug: var=prod_ip_addr

This is useful for example when you have a separate management network interface.

Reading the YAML is almost as easy as explaining in plain English what the task does.
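
Once set, the fact can be used like any other variable. A minimal sketch, using a hypothetical config file and port:

- name: Bind the app to the production network interface
  lineinfile:
    path: /etc/app/app.conf   # hypothetical config file
    regexp: '^listen'
    line: "listen {{ prod_ip_addr }}:8080"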

How to set the setgid flag with Ansible

Tagged ansible, setgid, setuid  Languages yaml

This is how to set the setgid flag with Ansible. Tested with Ansible version 2.4.

- name: Create directories and set the setgid flag
  file:
    path: "{{item}}"
    state: directory
    owner: www-data
    group: www-data
    # NOTE: 2 = setgid flag. The leading 0 is required by Ansible
    mode: 02770
    recurse: true
  with_items:
    - /var/www/
  become: true

Reference

Setting the setgid permission on a directory (“chmod g+s”) causes new files and subdirectories created within it to inherit its group ID, rather than the primary group ID of the user who created the file (the owner ID is never affected, only the group ID).

See https://en.wikipedia.org/wiki/Setuid
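
A quick shell demonstration of the same effect (assuming a www-data group exists):

$ mkdir /tmp/shared
$ chgrp www-data /tmp/shared
$ chmod g+s /tmp/shared       # same as the 2 in mode 02770
$ touch /tmp/shared/file.txt
$ ls -l /tmp/shared/file.txt  # group is www-data, not the creator's primary group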

Detecting software version with Ansible

Tagged ansible, version  Languages yaml

With Ansible, detecting the version of, for example, Redis or Racket can be done like this:

- name: Detect Redis version
  # Input: Redis server v=3.2.1 sha=00000000:0 malloc=libc bits=64 build=62a67eec83b28403
  # Output: 3.2.1
  shell: redis-server -v | awk '{print $3}' | sed -e 's/v=//'
  changed_when: False
  register: redis_installed_version

- name: Detect racket versions
  # Input: Welcome to Racket v6.6.
  # Output: 6.6
  shell: "racket -v | rev | cut -d ' ' -f1 | rev | sed 's/.$//' | sed 's/^v//'"
  changed_when: False
  register: racket_installed_version

Example: download the source only if the installed version does not match:

- get_url:
    url: http://download.redis.io/releases/redis-{{redis_version}}.tar.gz
    dest: /usr/local/src/
    sha256sum: "{{redis_sha256}}"
  register: get_redis_result
  when: redis_installed_version.stdout | version_compare(redis_version, '!=')
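
The registered version can also be used to fail fast. A minimal sketch using the same filter:

- name: Fail if the installed Redis version is unexpected
  assert:
    that:
      - redis_installed_version.stdout | version_compare(redis_version, '==')
    msg: "Expected Redis {{ redis_version }}, found {{ redis_installed_version.stdout }}"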

Database diff

Tagged database, diff, postgres  Languages bash, ruby, yaml

This script will connect to two databases, named old and new, and print:

  • the names of the tables that cannot be found in the new database
  • the per-table difference in row count, for tables where the counts differ

db_diff.rb:

#
# Postgres database diff script. Prints the names of missing tables and the
# per-table difference in row count.
#
require 'sequel'
require 'pg'
require 'yaml'
require 'json'
require 'pry'

list_tables_sql = "SELECT tablename from pg_tables where schemaname = 'public';"
OLD_DB = Sequel.connect(YAML.load(File.read('old_database.yml')))
NEW_DB = Sequel.connect(YAML.load(File.read('new_database.yml')))
OLD_TABLES = OLD_DB[list_tables_sql].map{ |x| x.fetch(:tablename) }
NEW_TABLES = NEW_DB[list_tables_sql].map{ |x| x.fetch(:tablename) }

# Compare tables
def diff_tables
  OLD_TABLES - NEW_TABLES
end

# Compare row count
def diff_row_count
  OLD_TABLES.sort.reduce({}) do |hash, table|
    sql = "SELECT count(*) FROM %{table}"
    # Sequel's count method does not work.
    diff = OLD_DB[sql % {table: table}].first[:count].to_i - NEW_DB[sql % {table: table}].first[:count].to_i
    hash[table] = diff if diff != 0
    hash
  end
end

puts JSON.pretty_generate(tables: diff_tables, rows: diff_row_count)
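
With hypothetical data, the output might look like this (one table missing from the new database, and one table where the old database has 42 more rows):

{
  "tables": [
    "legacy_audit_log"
  ],
  "rows": {
    "users": 42
  }
}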

Gemfile:

source 'https://rubygems.org'

gem 'sequel'
gem 'pg'
gem 'pry'

new_database.yml and old_database.yml:

adapter: postgres
host: localhost
port: 5432
encoding: unicode
database: x_production
username: x
password: x

To run the script:

gem install bundler
bundle
bundle exec ruby db_diff.rb

Other migration and diff tools

migra is a schema diff tool for PostgreSQL. Use it to compare database schemas or autogenerate migration scripts.

Given two database connections, it outputs a file that represents the difference between their schemas: if you run the output file against the from database, it will end up with the same schema as the to database.

Compares the PostgreSQL schema between two databases and generates SQL statements that can be run manually against the second database to make their schemas match.
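
A minimal sketch of diffing two local databases with migra (check the migra docs for details; the --unsafe flag is needed when the diff contains destructive statements such as drops):

$ pip install migra
$ migra postgresql:///old_db postgresql:///new_db > migration.sql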

How to deploy an app on a specific node/server in a kubernetes cluster using labels

Tagged kubernetes, label, node  Languages yaml

This is an example of how to deploy an app on a specific node/server in a kubernetes cluster using labels:

# # Label one node in k8s cluster
#
# $ kubectl label nodes 48.234.16.45 country=finland server=node1
#
# # Deploy 1 nginx pod in k8s cluster on node1 in Finland
#
# $ kubectl apply -f nginx.yml
#
# # View deployment
#
# $ kubectl describe deployment nginx-app
#
# # View pods
#
# $ kubectl get pods -l app=nginx
#
# # Find port the service is listening on
#
# $ kubectl describe service nginx-app | grep NodePort
# > NodePort:                 http   31796/TCP
# > NodePort:                 https  31797/TCP
#
# # Find the node the pod is deployed to
#
# $ kubectl describe pods nginx-app | grep Node
# > Node:           48.234.16.45/10.0.0.2
#
# # Access deployment using node ip and port
#
# http://<node ip>:<node port>
#
# # Export service to YAML
#
# $ kubectl get service nginx-app -o yaml --export
#
# # Delete
#
# $ kubectl delete deployments,services -l country=finland
#
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
  labels:
    run: nginx-app
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.12
        ports:
        - containerPort: 80
        - containerPort: 443
      nodeSelector:
        country: finland
        server: node1
---
#
# Expose the deployment on <node ip>:<node port>. The node and a random port are assigned by k8s.
#
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx-app
  name: nginx-app
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx
  type: NodePort
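
Before applying, you can verify that the labels are in place; a pod whose nodeSelector matches no node will stay in Pending:

$ kubectl get nodes -l country=finland,server=node1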

How to debug CrashLoopBackOff

Tagged crashloopbackoff, kubectl, kubernetes  Languages bash, yaml

One day, you see CrashLoopBackOff in the kubectl output:

$ kubectl get pod
NAME                               READY   STATUS             RESTARTS   AGE
app-548c9ddc46-z2fng               0/1     CrashLoopBackOff   79         6h26m

You already know that executing bash in the container is not possible because the container has crashed:

$ kubectl exec -ti app-548c9ddc46-z2fng bash
error: unable to upgrade connection: container not found ("app")

Option 1: Analyze the container logs

You can view the container’s logs with kubectl logs:

$ kubectl logs -f app-548c9ddc46-z2fng
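
If the container has already restarted, add the --previous flag to see the logs of the previous, crashed instance:

$ kubectl logs --previous app-548c9ddc46-z2fng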

Option 2: Modify Dockerfile’s CMD

There is no need to modify the Dockerfile itself: the CMD can be overridden through the command field in the Pod’s YAML configuration:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: company/app
      # DEBUG env variables
      command: [ "/bin/sh", "-c", "env" ]

The overridden command prints the environment variables to the container’s logs. To view the output, run:

$ kubectl logs -f app-548c9ddc46-z2fng

Option 3: Start a shell and sleep (CMD)

Sleeping usually helps: replace the container’s command with a long sleep so the container stays up long enough for you to get a shell inside it:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: xxx-service
spec:
  replicas: 1
  template:
    ...
    spec:
      containers:
      - image: yyy/xxx:1.0.0
        name: xxx-service
        ...
        command:
          - "sh"
          - "-c"
          - "sleep 10000"

The container will start and run “sleep 10000” in a shell, giving you exactly 10000 seconds to debug the issue by connecting to the “sleeping” container:

$ kubectl exec -ti app-548c9ddc46-z2fng bash
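
It’s also worth running kubectl describe: the pod’s last state, exit code, and recent events often reveal why the container keeps crashing:

$ kubectl describe pod app-548c9ddc46-z2fng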

How to set up a Digital Ocean Kubernetes cluster

Tagged digitalocean, kubernetes, setup  Languages bash, yaml

How to set up a Digital Ocean Kubernetes cluster (Q1 2019).

Introduction

Overview of how a request gets from a browser to a Docker container managed by Kubernetes:

Internet[Browser => DigitalOcean LoadBalancer] => Kubernetes[Ingress => Service => Deployment => Pod] => Docker[Container]

Prerequisites

Install kubectl:

$ brew install kubectl

1. Create a cluster in the DigitalOcean dashboard

Download the cluster configuration (YAML file) and put it in the ~/.kube folder.

2. Set the KUBECONFIG environment variable

$ export KUBECONFIG=/Users/christian/.kube/staging-kubeconfig.yaml
$ kubectl config current-context
$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
xxx-staging-qv72   Ready    <none>   70m   v1.14.1
xxx-staging-qv7p   Ready    <none>   71m   v1.14.1
xxx-staging-qv7s   Ready    <none>   71m   v1.14.1

3. Install the ingress controller

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Get the EXTERNAL-IP of the load balancer:

$ kubectl get services --namespace=ingress-nginx

4. Create the ingress resource

ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
        - path: /
          backend:
            serviceName: app-service
            servicePort: 80

Apply it:

$ kubectl apply -f ingress.yaml

5. Create the deployment

deployment.yaml:

apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: app
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  selector:
    matchLabels:
      app: app
  replicas: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo
        args:
        - "-text=Hello. This is your Digital Ocean k8s cluster."
        ports:
        - containerPort: 5678

Apply it:

$ kubectl apply -f deployment.yaml

6. Curl your app

$ curl https://<EXTERNAL-IP> --insecure
Hello. This is your Digital Ocean k8s cluster.
