

Many of the good systems administrators and database administrators I know take notes. I have in the past myself, filling several physical notebooks. However, I didn't do nearly as well at finding those notes later. Certainly, the act of writing my notes down helped me remember them, but if I couldn't fully remember a note, it took another level of effort to find it in my notebook (or notebooks, if I'd misplaced the original or had used a different notebook for one reason or another).

For a while I used a personal wiki tool (didiwiki) on my local computer. (This was obviously a Linux workstation, as there isn't an easy way to run didiwiki on Microsoft Windows.) I was able to search through my notes, but I had no easy way to share them between my work computer and my home computer and keep them consistent.

Soon after this, I found Zim. This is a desktop wiki and only runs locally, but it does have a built-in web server that you can run on any port so that people can connect to it. This way I can easily "share" my notes (though mostly with myself). Zim became my main productivity tool for taking notes.

Zim allows only a limited amount of formatting, which makes for a less distracting interface. I have bold, italic, and underline (plus a verbatim style in a monospaced font that works well for computer code), but I don't have a lot of fonts to select from. This lets me concentrate on what I'm writing and not be distracted by formatting. Another advantage, in my mind, is that I'm never waiting on Zim. Sometimes Windows applications, especially "large" programs like a word processor, just freeze up while they do things in the background. I've never seen this happen with Zim (though of course that also depends on the CPU and memory in your computer). Overall, Zim is very lightweight and runs without a significant amount of computing resources.

One of the biggest advantages of Zim is the ability to search across multiple notes. I could easily save my notes in a text file (or multiple text files) without any formatting, but I still wouldn't be able to search through them easily. I know some people will argue that their note application has a FIND (or SEARCH) command, and that's true; it may meet their needs. With Zim, however, I can build separate notes and link them together with one or more hyperlinks. That gives me an organizational structure for my notes, and I can still search across all of them for a piece of code or some unique (or hopefully unique) word and find the note or notes I'm looking for. I've reorganized my links several times to get an optimal structure for finding the information later.

Earlier I mentioned that Zim has a limited number of formatting options: the standard bold, italic, underline, and verbatim (a monospaced font that works well with computer code, or any time you want to emphasize something in a monospaced font). It also offers several heading styles; if you're familiar with HTML (Hyper Text Markup Language, or web page layout), you'll recognize them as H1, H2, and so on. Really, I don't do much with these other than break my notes into sections; I rarely need more than levels 1-3 (and one of the plugins for Zim lets you put a Table of Contents on each page, or in each note).

The last really powerful thing Zim does for me is export a complete notebook directly to HTML pages. I frequently share my notebooks at work with my co-workers so they have them as a reference, and because the result is a static web page that I copy over to a web server, they can't make any updates to my notes. I can also get to my notes from someone else's computer at work if I need to. And the best part is that since the pages are static, I don't have to worry about anyone making updates, either accidentally or maliciously.

Maybe Zim will meet your needs and maybe it won't, but it solves a huge problem for me, and I use multiple notebooks on a daily basis: one for my personal notes and one for the specific things I do for my employer. I also have a friend who is a writer, and he's been using Zim to outline some of his books. So there is a lot of flexibility in this note-taking tool.

Block Devices

Recently I was looking at some performance data with sar on one of my systems at home. While the system was performing well, I didn't quite understand why there wasn't much data being written to a particular device (compared to a couple of other volumes, where I was doing about 50 tps for my database).

sar -d -s 19:35:00 -e 19:45:00 -f /var/log/sa/sa09 | egrep 'DEV|8-3'
07:35:01 PM     DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
07:36:01 PM  dev8-3      0.84      0.00      6.71      8.00      0.00      0.80      0.74      0.06
07:37:01 PM  dev8-3      1.05      0.00      8.43      8.00      0.00      0.75      0.70      0.07
07:38:01 PM  dev8-3      0.98      0.00      7.84      8.00      0.00      0.71      0.71      0.07
07:39:01 PM  dev8-3      1.02      0.00      8.20      8.00      0.00      0.74      0.69      0.07
07:40:01 PM  dev8-3      0.92      0.00      7.38      8.00      0.00      0.87      0.87      0.08
07:41:01 PM  dev8-3      1.11      0.00      8.87      8.00      0.00      0.79      0.70      0.08
07:42:01 PM  dev8-3      0.98      0.00      7.87      8.00      0.00      0.83      0.78      0.08
07:43:01 PM  dev8-3      1.16      0.00      9.26      8.00      0.00      0.70      0.58      0.07
07:44:01 PM  dev8-3      1.09      0.00      8.72      8.00      0.00      0.88      0.68      0.07
Average:     dev8-3      1.02      0.00      8.14      8.00      0.00      0.78      0.71      0.07

While the drive wasn't really busy at this particular hour, I wanted to see which drive it actually was, and where it was mapped on my system.

Running the following command, I was able to map out all the devices on my system. (In the example below, I'm grepping on the 8 from dev8-3 above, which corresponds to sda3 below: major device ID 8 in column 5 and minor device ID 3 in column 6.)

timb@pc015:~$ ls -l /dev/ | grep -E ' 8, '
brw-rw---- 1 root disk      8,   0 2016-05-21 13:45 sda
brw-rw---- 1 root disk      8,   1 2016-05-21 13:45 sda1
brw-rw---- 1 root disk      8,   2 2016-05-21 13:45 sda2
brw-rw---- 1 root disk      8,   3 2016-05-21 13:45 sda3
brw-rw---- 1 root disk      8,   4 2016-05-21 13:45 sda4
brw-rw---- 1 root disk      8,   5 2016-05-21 13:45 sda5
brw-rw---- 1 root disk      8,   6 2016-05-21 13:52 sda6
brw-rw---- 1 root disk      8,  16 2016-05-21 13:45 sdb
brw-rw---- 1 root disk      8,  17 2016-05-21 13:45 sdb1
brw-rw---- 1 root disk      8,  32 2016-05-21 13:45 sdc
brw-rw---- 1 root disk      8,  48 2016-05-21 13:45 sdd
brw-rw---- 1 root disk      8,  64 2016-05-21 13:45 sde
brw-rw---- 1 root disk      8,  80 2016-05-21 13:45 sdf

And now I was able to see that device 8-3 maps to sda3.
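The same major/minor mapping can also be read straight from /proc/partitions, which lists the major number, minor number, block count, and device name on each line. A minimal sketch (the sample lines below stand in for the real file on a live system):

```shell
# Look up the device name for a given major/minor pair in
# /proc/partitions-style input (columns: major, minor, #blocks, name).
map_dev() {
  awk -v M="$1" -v m="$2" '$1 == M && $2 == m {print $4}'
}

# Sample lines standing in for /proc/partitions:
printf '%s\n' \
  '   8        0  488386584 sda' \
  '   8        3   10485760 sda3' | map_dev 8 3
# → sda3
```

On a real system you would feed it the live file instead: map_dev 8 3 < /proc/partitions.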

Now, I could look at my /etc/fstab file (or even just run mount to see what is mounted where) and see the following:

timb@pc015:~$ mount
/dev/sda3 on / type ext3 (rw,relatime,errors=remount-ro)
proc on /proc type proc (rw)
none on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
/dev/sdb1 on /media/sdb1 type ext3 (rw,relatime)
/dev/sda1 on /boot type ext3 (rw,relatime)
/dev/sda6 on /home type ext3 (rw,relatime)

So now I can see that the activity is low because the mount point (/) is just my main root file system; of course I'm not going to be reading and writing a lot of data there. But this helped me track down the disk performance characteristics that sar was showing me.

I could probably write a small script to give me more data and map this out for me, but sometimes showing the steps will help someone else understand their own system.


I'm a heavy user of the program screen, particularly at work, where I connect to some database servers and automatically start screen as part of my .bash_profile or .bashrc files. (My .bashrc is actually pretty complicated: I share my startup script between a variety of Linux containers and systems at home and at work, so it calls other bash scripts depending on the environment and host type.)
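A minimal sketch of that kind of dispatching .bashrc, with hypothetical directory, file names, and host patterns (the real ones are whatever your environment uses):

```shell
# Pick a host-specific startup script based on the short hostname.
# The ~/.bashrc.d/* names below are hypothetical examples.
host=$(hostname -s 2>/dev/null || echo unknown)
case "$host" in
  db*) rc="$HOME/.bashrc.d/database"    ;;  # database servers
  pc*) rc="$HOME/.bashrc.d/workstation" ;;  # desktops
  *)   rc="$HOME/.bashrc.d/default"     ;;  # everything else
esac
[ -f "$rc" ] && . "$rc"   # source it only if it exists
```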

I'm also aware that a lot of people view tmux as a better alternative, but I've discovered it's not on all the systems and operating systems I tend to jump around on (although that is slowly changing). At some point in the near future I might switch, but it will take me some time to learn how to use it.

As an example, for connecting to a PostgreSQL database server, my .screenrc file looks like this:

    term xterm

    hardstatus alwayslastline
    hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{=kw}%?%-Lw%?%{r}(%{W}%n*%f%t%? (%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B}%Y-%m-%d %{W}%c %{g}]'

    startup_message off
    screen -T xterm -ln -t top       top -c
    screen -T xterm -ln -t messages  sudo tail -f /var/log/messages
    #screen -T xterm -ln -t pgtop     pgtop
    screen -T xterm -ln -t pglog     sudo tail -f /var/log/postgres

    logfile $HOME/logs/screen/screen_%H_%Y-%m-%d_%0c-%n

If you need help understanding my .screenrc file, please see the screen documentation.

But I've also had the problem where I couldn't reconnect to an existing screen session, usually because I'd hosed my terminal session, or because I needed to connect to a screen session I'd started under a different username on one of my test systems.

I received the following message:

Cannot open your terminal '/dev/pts/1' - please check.

I found an answer to this by executing the following code:

    $ sudo su _user_
    $ script /dev/null
    $ screen -r _screen session id_

This allowed me to connect and then terminate that user's session. (And yes, I should be running more things under my own userid rather than another user's, but sometimes I have to, and that's the beauty of a test environment!)

ASCII Lineart

I have a tendency to create documentation locally using Markdown. The reasons why are complicated and beyond the scope of this post, but this is a conscious choice I've made when I'm creating posts locally before I upload them to Confluence (or to my personal site using Nikola).

However, even when doing this, I've found incompatibilities between some of the tools on my system. One example: I created some documentation, but then wanted to include a list of directories (to further explain the documentation). I wrote my notes in ReText, then used the tree -d command to get a list of directories, redirecting the output to a text file to be included in my text.

This worked well; in ReText I just had to mark the directory listing as preformatted text so it would render in a monospaced font in my documentation.

However, when I ran the file through pandoc (using the command line pandoc -o readme.html), the line-drawing characters came through as Unicode and higher-order characters that made the documentation unreadable in a web browser.

Looking carefully at the HTML, it appeared the browser was interpreting the output as straight ASCII, so the line-drawing characters from tree -d were rendered as some other higher-order characters.

On a whim, I checked to see if I could change the characterset for the HTML file by adding the following line into the HTML:

<meta http-equiv="Content-Type" content="text/html;charset=utf-8">

Opening an editor on the HTML file and adding that line at the top made my HTML, with the pasted-in ASCII line art, render correctly (or at least close enough that I was satisfied). Then I could grab the rendered HTML from the file and paste it into a Confluence page. Problem solved!
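The hand edit can be scripted. This is just a sketch: the printf line stands in for pandoc's actual output, and sed splices the meta tag in right after the head tag:

```shell
# Stand-in for pandoc's output (the real file came from pandoc):
printf '<html><head></head><body>|-- some tree art\n</body></html>\n' > readme.html

# Insert the charset declaration immediately after <head>:
sed -i 's|<head>|<head><meta http-equiv="Content-Type" content="text/html;charset=utf-8">|' readme.html

grep -c 'charset=utf-8' readme.html
# → 1
```

(The sed -i form shown is GNU sed; BSD/macOS sed wants sed -i ''. Alternatively, pandoc's -s/--standalone flag produces a complete HTML document that already declares a charset.)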



Migrating to Nikola

I decided I wanted to migrate away from Drupal, but I still wanted something similar to manage my blog content. I also wanted something that could format code for the times I publish source code: SQL, shell scripts, Python, or whatever.

My first stop was Markdown (used in places like GitHub). There were a variety of products that supported Markdown (aka MD), including Hyde, Jekyll, Octopress, Pelican, Sphinx, and Nikola.

I was in the middle of learning Python, so I chose to focus on Pelican and Nikola, since both are written in Python. I felt that if there were any issues, I could at least debug the generator's source code and fix it myself. Both also had a series of plug-ins that would give me the flexibility to extend their capabilities.

As I looked into Markdown, I was impressed with how, using a simple text editor, I could create content for my blog and still make it somewhat "pretty" with some limited formatting (things like italics, lists, and bold text) as well as the ability to include pictures or diagrams. Having settled on either Pelican or Nikola, I started with Pelican (it seemed more popular), created some blog posts, and built the blog to view it. It looked pretty good, but I couldn't display the "gallery" of pictures I wanted. This was probably user error on my part, but I could never figure out how to render the sample pictures. So I tried the same thing with Nikola, and after a quick test it worked better for me.

As I played with Nikola, before rolling it out on my primary web site, I discovered that Nikola supports reStructuredText (RST). This is very similar to Markdown, but it allows extensions that support things like syntax highlighting (color-coding source code to make it more readable). This really sounded like what I was looking for.
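For example, RST's code-block directive is what provides the highlighting. A sketch of what a post fragment might look like (the query here is just an illustration):

```rst
A query I wanted to share, with SQL highlighting:

.. code-block:: sql

   SELECT relname, n_tup_ins
   FROM pg_stat_user_tables
   ORDER BY n_tup_ins DESC;
```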

It took some time, but I finally converted most of my blog over to RST and published it to my web server. My main page now loads in about 300ms and my formatting is a lot simpler.

Leaving Drupal

Once upon a time, I used Drupal for my blog. One of the reasons is that I subscribe to the philosophy of "eating my own dogfood": one of the organizations I volunteered with needed a website and, after some discussion, the ability to manage their own content, and Drupal seemed to meet those needs at the time. Back then, there weren't a lot of mature tools (other than WordPress) for managing blogs and content.

So I used Drupal for several blogs and helped some friends set up their own. I used it through Drupal versions 5, 6, and 7, with different modules as my needs changed. However, I started to notice that some of my newer content looked slightly different from older content. I had used both FCKeditor and TinyMCE for formatting, and had edited content under several different themes, any of which may have affected the formatting. I really have no clue why some of the content was slightly off, but it made editing the existing content more challenging.

I was also spending more time upgrading Drupal (fixes and security patches for the modules and for Drupal itself) than I was creating content or doing work or research. It wasn't a lot of time, but it did add up.

None of this is to blame Drupal; this is just an explanation of two of the reasons I decided to leave it. These may be MY issues, and not problems with Drupal, as I altered the modules, configuration, and content over time. There was also a third reason I chose to migrate away: I had started monitoring my website for outages, which had the added benefit of giving me page load times. My web server also ran an SMTP relay and a database server, so I wasn't expecting huge performance, but I discovered that my website was taking up to 1.5 seconds (and sometimes longer) to load.

While looking at alternatives, I discovered that converting to a static website would significantly reduce the load time for my content. (In simple terms, serving straight HTML is faster than dynamically generating content from a database and formatting it on the fly.)

All these things combined, at just the right time, for me to migrate away from Drupal to a static site generator. And that will be another blog post.


I was able to add my personal account to the local sudoers file by making the following two changes:

I modified the sudoers file and uncommented the wheel entry (so I could execute commands without a password):
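The uncommented entry looks like this (a sketch; the exact spacing in the stock file may differ):

```
%wheel  ALL=(ALL) NOPASSWD: ALL
```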


and then I ran the following two commands (assuming that my Windows Active Directory Domain was HOME):

/usr/sbin/dseditgroup -o edit -a 'HOME\timothy.bruce' -t user wheel
/usr/sbin/dseditgroup -o edit -a 'HOME\timothy.bruce' -t user admin

(The quotes keep the shell from eating the backslash in the DOMAIN\user name.)

At some point I'd really like to go back and fix this: create a specific group, add myself to it so I can run a limited set of commands without a password, and comment back out the line that lets users in the wheel group run ALL commands without a password (not very safe, even if I'm the only user, or there are a limited number of users, in the wheel group).

The only reason I'd want the new group is that the current configuration allows me to do ANYTHING without requiring a password, which weakens security even on my local system.
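The more restrictive setup would look something like this sketch, with a hypothetical group name and command list:

```
# Hypothetical: members of localadmin may run only these two
# commands without a password; everything else still prompts.
%localadmin ALL=(ALL) NOPASSWD: /usr/bin/killall, /usr/sbin/shutdown
```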

Running man From X

Running commands from a command line seems to cause fear in a lot of people. Since I spend most of my day at the command line (and I started using computers with a Command Line Interface, or CLI), that's a foreign concept for me.

Even the prospect of opening a terminal (or xterm) session just to run a single command makes some people uneasy. However, having the on-line documentation available to you can be a real asset when you ARE using a command line, or if you just want to explore the on-line manuals installed on your computer.

If you don't have man installed, you can install a variety of documentation packages using your favorite GUI (Graphical User Interface) package manager. On Ubuntu, this is Synaptic. But it could be yumex under Fedora or some other tool.

On Ubuntu, I realized that xman is installed by default. However, the interface leaves a little to be desired; it looks like something written in the late 1980s or early 1990s, and for good reason, as it was written in that time frame. (Look up when v1.0 of xman was released.) Not only that, but there are times when you can't (or don't want to) start a web browser to look up the syntax for a particular command. And looking online may not help much anyway, since the version documented online may differ from the version on the system you are using. In these cases you may still want a local, graphical man page viewer. A few options:


  • gman

  • seetxt

  • khelpcenter

  • tkman

  • xman



Convert VMWare to KVM

I used to run VMware (specifically VMware Workstation), but I had to move away from it for a variety of reasons. It was fairly stable (I did run into a few issues, but always found a way around them after some work), so that wasn't the reason. (For those who care: every time I received a new Linux kernel, I had to recompile some modules and load them into VMware, and I just got tired of the hassle of doing that.)

So I wanted to move almost a dozen VMware machines to another virtualization stack. I looked at both KVM and VirtualBox and settled on KVM. I found some basic instructions on the KVM website, but they were so cryptic that I had trouble following them!

So this is an attempt to remedy that problem.

Assuming your VMware Windows machines are on a Linux host, the following worked for me (change VM name as appropriate).

And note that you'll need lots of disk space, whether on the server, the network, or an external drive, during the conversion process. So make sure it's available before you begin!

1. Make a backup of all the VMware virtual machine files in case you need to go back to VMware.

2. Remove VMware Tools.

3. Disable the screen saver (on the computer you'll do the conversion on).

4. If the guest (virtual machine) is Windows, download mergeide.reg.

NOTE: The file is no longer available for download. Looking at the Microsoft website, I was able to figure out how to build my own. Here are the key contents (abridged; the original also listed a series of CriticalDeviceDatabase entries):

Windows Registry Editor Version 5.00

;Add driver for Atapi (requires Atapi.sys in Drivers directory)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
"Group"="SCSI miniport"
"DisplayName"="Standard IDE/ESDI Hard Disk Controller"

;Add driver for intelide (requires intelide.sys in drivers directory)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IntelIde]
"Group"="System Bus Extender"

;Add driver for Pciide (requires Pciide.sys and Pciidex.sys in Drivers directory)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\PCIIde]
"Group"="System Bus Extender"

5. Install mergeide.reg (again, only for Microsoft Windows guests).

6. Convert the original VMDK file to a growable VMDK file, using the following as a template (note: that's dash t, space, zero):

sudo vmware-vdiskmanager -r win2003.vmdk -t 0 win2003-grow.vmdk

7. Convert the growable VMDK image into a QEMU qcow2 file, using the following as a template:

sudo qemu-img convert -f vmdk win2003-grow.vmdk -O qcow2 win2003.img

8. Follow the standard procedure to import the QEMU image into Virtual Machine Manager, or run it from the command line.



File Transfer

Often we need to transfer a file over the network and stumble on the question of which tool to use. There are numerous methods available, like FTP, SCP, SMB, and so on. But is it really worth the effort to install and configure such software, and to set up a server on your machine, when you only need to transfer one file, once? For that, plain netcat (nc) is enough.

Suppose you want to transfer a file "file.txt" from machine A to machine B.

Either end can act as the server or the client; let's make A the server and B the client.


On A (the server):

$ nc -l 1567 < file.txt

On B (the client), substituting A's IP address for <server-ip>:

$ nc -n <server-ip> 1567 > file.txt

Here we have created a server on A and redirected netcat's input from the file file.txt, so when any connection succeeds, netcat sends the contents of the file.

At the client, we redirect netcat's output to file.txt. When B connects to A, A sends the file's contents and B saves them to file.txt.

It is not necessary to make the source of the file the server; we can work in the opposite direction as well. In the case below we are sending the file from B to A, but the server is still created at A. This time we only need to redirect the output of netcat to the file at A, and the input from the file at B.

On A (the server, this time receiving):

$ nc -l 1567 > file.txt

On B (the client, sending; again substitute A's IP address for <server-ip>):

$ nc <server-ip> 1567 < file.txt
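One caveat: netcat does no integrity checking, so it's worth comparing checksums on both ends once the transfer finishes. A small sketch (the printf line stands in for the transferred file):

```shell
# Compare checksums after the transfer (run the same command on A and B;
# the hashes should match).
printf 'hello\n' > file.txt      # stands in for the transferred file
sha256sum file.txt | awk '{print $1}'
# → 5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03
```

(On systems without sha256sum, such as macOS, shasum -a 256 does the same job.)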