performance snippets

Log file analysis with AWK - Calculating the sum and average

Tagged awk, shell, performance, average, sum  Languages bash

This AWK script is useful when you want to calculate the average and sum for a set of values found in a log file.

awk '{ s += $1 } END { print "sum: ", s, " average: ", s/NR, " samples: ", NR }' rails_production_log_or_whatever.log

Note that $1 refers to the first column, which contains the values you want to sum. NR is the total number of rows read. As an example, let's say you have this log file:

1
2
3
4
5

The output would then be:

sum:  15  average:  3  samples:  5

Combine it with grep or sed to do more advanced log file analysis. For example, you can calculate the average time it took to render action xyz in Rails on the 21st of July at 21:00.
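A sketch of that idea, using a made-up log format where column one holds the render time in milliseconds:

```shell
# Hypothetical log file: column 1 is the render time in ms (made-up format).
printf '12 render xyz\n30 render xyz\n99 render abc\n' > /tmp/example.log

# Filter with grep first, then let awk average the matching lines.
grep 'render xyz' /tmp/example.log | awk '{ s += $1 } END { print "average:", s/NR }'
# prints: average: 21
```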

Note that by default the column values must be whitespace separated. Use the -F, switch to parse CSV (comma-separated) files.
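A quick sketch of the same calculation on comma-separated input (the sample data here is made up):

```shell
# -F, tells awk to split fields on commas instead of whitespace.
printf '1,foo\n2,bar\n3,baz\n' | awk -F, '{ s += $1 } END { print "sum:", s }'
# prints: sum: 6
```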

Find the CPU time and total runtime of Oracle SQL queries

Tagged oracle, performance  Languages sql

There's no equivalent of the MySQL slow query log in Oracle, but these queries and Statspack reports are helpful when trying to find slow queries:

-- Order queries by how many cpu seconds they consume
SELECT   hash_value, executions, ROUND (elapsed_time / 1000000, 2) total_time,
         ROUND (cpu_time / 1000000, 2) cpu_seconds
    FROM (SELECT   *
              FROM v$sql
          ORDER BY elapsed_time DESC)
ORDER BY cpu_seconds DESC;

-- Find the total cpu seconds consumed
SELECT SUM (cpu_seconds)
  FROM (SELECT   hash_value, executions,
                 ROUND (elapsed_time / 1000000, 2) total_time,
                 ROUND (cpu_time / 1000000, 2) cpu_seconds
            FROM (SELECT   *
                      FROM v$sql
                  ORDER BY elapsed_time DESC)
        ORDER BY cpu_seconds DESC);


-- Find the total time queries have taken
SELECT SUM (total_time)
  FROM (SELECT   hash_value, executions,
                 ROUND (elapsed_time / 1000000, 2) total_time,
                 ROUND (cpu_time / 1000000, 2) cpu_seconds
            FROM (SELECT   *
                      FROM v$sql
                  ORDER BY elapsed_time DESC)
        ORDER BY cpu_seconds DESC);

-- Find execution plan for the damned query you should fix
SELECT *
  FROM v$sql_plan
 WHERE hash_value = 2967942512;

How to benchmark your Ruby code

Tagged benchmark, performance, ruby  Languages ruby

You can easily benchmark your Ruby code like this:

require 'benchmark'
seconds = Benchmark.realtime do
    sleep 1
end
print "#{seconds} elapsed..."

The output should be close to 1 second.

How to debug memory allocation problems in PHP with Xdebug

Tagged xdebug, php, memory, error, performance  Languages php

Xdebug is a good tool for finding the root cause of memory allocation problems such as the one shown here:

Fatal error: Allowed memory size of X bytes exhausted (tried to allocate X bytes)

First install Xdebug by following the Xdebug installation instructions.

Next surround the code you suspect is causing the problem with the following function calls:

xdebug_start_trace('/tmp/mytrace');
...
Bad bad PHP code
...
xdebug_stop_trace();

Read the blog entry by splitbrain.org for more details.

How to log request processing times with Apache

Tagged mod_headers, apache, performance, x-request-received, x-request-processing-time  Languages apacheconf

To log the time it takes for Apache to process a request, add this to your Apache configuration file (and make sure a CustomLog directive references the combined format):

LogFormat "%h %D %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

%D means you'll see the time it took Apache to process the request in the access log. The time is measured in microseconds.

To make it even easier to debug page load times, add this to your configuration file:

Header set X-Request-Received "%t"
Header set X-Request-Processing-Time "%D"

Remember to enable mod_headers first, for example by running a2enmod headers on Debian/Ubuntu.

Now you should see these headers in the response:

X-Request-Received  t=1286995673038485
X-Request-Processing-Time   D=251
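If you want the processing time in milliseconds, you can parse the header value on the command line. A small sketch, assuming the value looks like D=251 as above:

```shell
# Split on '=' and convert microseconds to milliseconds.
echo "D=251" | awk -F= '{ printf "%.1f ms\n", $2/1000 }'
# prints: 0.3 ms
```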

Postgres Performance Tuning

Tagged database, performance, postgres, tuning  Languages sql

Tuning memory

To tune memory it helps to understand how much data fits into memory:

SELECT 
  sum(heap_blks_read) as heap_read,
  sum(heap_blks_hit)  as heap_hit,
  sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio
FROM 
  pg_statio_user_tables;

If the cache hit rate is below 99%, try giving PG more memory.
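In practice, "giving PG more memory" usually means raising shared_buffers in postgresql.conf. The values below are a sketch for a hypothetical dedicated 16 GB server, not a recommendation:

```
# postgresql.conf (sketch, adjust to your RAM)
shared_buffers = 4GB          # commonly around 25% of RAM
effective_cache_size = 12GB   # a planner hint about OS cache, not an allocation
```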

Finding slow queries

Enabling the slow query log with log_min_duration_statement is one option. Telling Postgres to collect statistics with pg_stat_statements is easier:

CREATE EXTENSION pg_stat_statements;

-- View slowest queries
SELECT 
  (total_time / 1000 / 60) as total_minutes, 
  (total_time/calls) as average_time, 
  query 
FROM pg_stat_statements 
ORDER BY 1 DESC 
LIMIT 100;

Finding tables that are not indexed

SELECT 
  relname, 
  100 * idx_scan / (seq_scan + idx_scan) percent_of_times_index_used, 
  n_live_tup rows_in_table
FROM 
  pg_stat_user_tables
WHERE 
    seq_scan + idx_scan > 0 
ORDER BY 
  n_live_tup DESC;
