
LG Touch

I picked up a new phone, but was having trouble getting it to connect to any computer. The phone is great, but I was hoping to find a tool to manage the information on it - my MP3s, my pictures, etc.

Finally I found BitPim (www.bitpim.org) to manage it. The tool works great on Windows, but not so well on Linux (at least I couldn't get it to work at first). However, to get it to work on Windows, I needed a USB/CDMA driver, which had been removed from the LG website (bad LG!). I finally found it on SMSCaster.com (http://www.smscaster.com/download_driver_usb_data_cable.htm). Once I installed that, I was able to get it working with Windows XP.

While I no longer use this phone (I've since replaced it with a Droid X running Android), I'm still publishing this because BitPim was a great tool for backing up contacts and my calendar. I did finally get BitPim working on Linux (under Ubuntu 11.04) and was very happy with it.

Fixing SQL Server Logins

Sometimes when you restore a database, the Microsoft SQL Server login SIDs don't match (especially if you restore the database from another server). So when you try to add a user to the database, you get the dreaded error....

user or role already exists in the current database

The following command will fix the problem for user timothy in the current database (NOTE: This must be the current database!):

EXEC SP_Change_Users_Login 'Auto_Fix','timothy'

This re-links the database user timothy in the current database with the SQL Server login timothy defined at the server level (in the master database).
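
If you're not sure which users are orphaned, the same procedure can report them first (run it in the restored database):

EXEC SP_Change_Users_Login 'Report'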

Customizing PostgreSQL

The default prompt in PostgreSQL's psql client is very basic. It shows only the current database name and a # sign.

Changing the prompt can be done with psql's \set meta-command. So I could set the prompt to:

\set PROMPT1 '%n@%m-%/=%# '

which would give me:

timb@pdx01aut031-mail=#

Additional information can be found at: http://www.postgresql.org/docs/8.1/static/app-psql.html#APP-PSQL-PROMPTING

Many people get tired of re-entering these commands every time they start the psql client. Luckily, there is a way to set this and execute other commands automatically when connecting to PostgreSQL: the .psqlrc file. NOTE: On Linux this file resides in your home directory (~/.psqlrc). On Windows the file is called psqlrc.conf and lives in the Application Data\postgresql directory (echo %APPDATA%), so there can be only one per user.
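
For example, a minimal .psqlrc might look like this (a sketch; the prompt is the one from above, and \timing and \pset are standard psql meta-commands):

-- user@host-database prompt
\set PROMPT1 '%n@%m-%/=%# '
-- report how long each query takes
\timing
-- make NULL values visible
\pset null '(null)'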

MSSQL Permissions

Installation of Microsoft SQL Server requires that the service account have certain permissions in order to run on the local system. Some people choose to run it under a Domain Account with Administrative access, and at one point (especially with SQL Server v6.5 and v7.0) it was much easier to install if you used an account with Administrative access (i.e., in the local Administrators group). I even believed at one point that using a Domain account made it easier for SQL Agent jobs to export data to remote file shares. Changes in SQL Server, SQL Server Integration Services (SSIS), and SQL Agent have fixed (or at least eased) some of these issues. So I can now say to a former co-worker (Bill): you were right, and we could have converted SQL Server to use a local account instead of a Domain Account (although this may still introduce some security concerns for some enterprises).

In any case, instead of running under an account with Administrator rights, the account that Microsoft SQL Server uses needs the following rights: 'Access this computer from the network', 'Allow log on locally', 'Allow log on through Terminal Services', 'Bypass traverse checking', 'Force shutdown from a remote system', 'Perform volume maintenance tasks', 'Profile single process', 'Profile system performance', and 'Shut down the system'. I do need to research this more, as newer versions of SQL Server (this is all based on SQL Server 2005 and Windows 2003) may require fewer or more permissions, especially as further changes are made to the Windows security model.

If you do use a Domain Account (or even a local account on the server), double-check what other permissions the account has and which directories it can access. Too much access could lead to additional exploits where users access data without accountability. There is even the potential for a denial-of-service attack that would disrupt the system for all the other users and/or applications.

A separate concern that still occurs today is software or roles being granted more permissions than they really need. Applications may be granted something like db_datareader in all databases, which could cause information "leakage": an account/user/application with rights to one database can read data in other databases that it shouldn't. Other roles like 'db_ssisoperator', 'SQLAgentOperatorRole', and 'ServerGroupReaderRole' in msdb, 'mdw_reader' in your Management Data Warehouse, and server permissions such as 'Alter trace', 'View any database', 'View any definition', and 'View server state' (for SQL 2005, substitute 'db_dtsoperator' for 'db_ssisoperator' and leave out 'ServerGroupReaderRole') could lead to more data leakage or even data tampering. And 'SQLAgentOperatorRole' can have even more serious repercussions: a member can craft a job that restarts SQL Server, or even the whole server, if it's created correctly. In other words, think about the permissions you're granting to accounts and what access they might have.
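
If you want to audit who already holds roles like these, a query along the following lines can help (a sketch using the standard catalog views; run it in each database you care about, including msdb):

SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id
ORDER BY r.name, m.name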

Some of my reading on the web inspired this, and one article specifically caught my eye. Read this article (http://www.sqlservercentral.com/blogs/brian_kelley/archive/2009/05/28/why-i-say-something-about-running-as-administrator.aspx) for further information about other security concerns.

File Extensions

I've always wanted a list of file extensions so I could easily look up which application is associated with a particular file extension. Years ago I started putting a list together, but I never felt I had enough extensions to publish it as a reference. Now I've discovered that someone has beaten me to putting one on the web. http://www.webopedia.com/quick_ref/fileextensions.asp is a great starting point for a list of file extensions. And if you just want the complete list, that's available here: http://www.webopedia.com/quick_ref/fileextensionsfull.asp

Linux Commands for the Command Line

I have previously posted a few basic Linux terminal commands that I think are essential for newbies to know, and I've also shared some deadly ones that should be avoided at all costs. This time, I'm going to show you several terminal commands that may be unfamiliar to many new-to-Linux users but can be really handy when used properly. Here's a list of 10 rather unknown yet useful Linux terminal commands:

1. Kill a running application by its name: killall [app_name]

2. Display disk space usage: df -h

3. Locate the installation directories of a program: whereis [app]

4. Mount an .iso file: mount /path/to/file.iso /mnt/cdrom -oloop

5. Record or capture a video of your desktop: ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg

6. Find out the Universally Unique Identifier (UUID) of your partitions: ls /dev/disk/by-uuid/ -alh

7. Show the top ten running processes (sorted by memory usage): ps aux | sort -nrk 4 | head

8. Make an audible alarm when an IP address goes online: ping -i 60 -a IP_address

9. Run the last command as root: sudo !!

10. Make a whole directory tree with one command: mkdir -p tmp/a/b/c

http://www.junauza.com/2009/05/10-unknown-but-useful-linux-terminal.html

Second Address on the same NIC

Sometimes you need to bind a second IP address to a Network Interface Card. Maybe you need access to another network temporarily, or you just need to reach a device that ships with a default address that isn't on your network.

From Linux, you can execute the following command:

ifconfig eth0:1 192.168.5.16 netmask 255.255.255.0 up

NOTE: This will require ROOT (superuser) permissions!

This will add a second IP Address to your eth0 network interface.
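
On newer distributions where the iproute2 tools are replacing ifconfig, the equivalent command (a sketch, using the same address, netmask, and label as above) is:

ip addr add 192.168.5.16/24 dev eth0 label eth0:1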

Multiple IP Addresses on a Single NIC

In the previous section "Determining Your IP Address" you may have noticed that there were two interfaces: eth0 and wlan0. Since I'm running two networks at home (192.168.8.x and 192.168.5.x, as well as the virtual network 192.168.230.x), I can assign a second IP Address of 192.168.5.16 to my primary NIC, which normally sits on the 192.168.8.x network.

The process for creating an IP alias is very similar to the steps outlined for the real interface in the previous section, "Changing Your IP Address":

  • First ensure the parent real interface exists
  • Verify that no other IP alias already exists with the name you plan to use. In this example we want to create interface eth0:1.
  • Create the virtual interface with the ifconfig command
root@pc016:~# ifconfig eth0:1 192.168.5.16 netmask 255.255.255.0 up
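
To verify the alias came up, ask ifconfig for it by name (a quick check, using the alias created above):

root@pc016:~# ifconfig eth0:1

The output should show inet addr 192.168.5.16 on the eth0:1 interface.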

Getting Started with Citadel

The appliance obtains its IP via DHCP. The virtual console will display it at the top of the screen:

Citadel virtual appliance - (C) 2007 http://www.citadel.org
To log in to Citadel, point your web browser to http://192.168.1.22

Simply point your web browser there and get started. The default administrator has the username Citadel and the password citadel. Obviously you will want to change the default once you log in. After you have logged in once with WebCit, you can connect to that account with your favorite mail client via POP, IMAP, or SMTP. A more thorough Getting Started guide may give you additional information.

http://www.citadel.org/doku.php/installation:appliance

32-bit or 64-bit Hardware

How do you tell if you're running on 32-bit or 64-bit hardware?

While the command uname -a will show whether you are running a 32-bit or 64-bit operating system (as will, possibly, looking at /etc/*release), it won't tell you whether your hardware is actually 64-bit or not.

Instead, look at /proc/cpuinfo by running cat /proc/cpuinfo and check for lm in the flags section. If this flag is present, the CPU supports 64-bit (long mode).

An easy way to do this is to type the following command. Nothing will be printed if the flag is absent (i.e., not 64-bit); otherwise it will print one or more lines (depending on the number of CPUs and cores in your computer).

$ cat /proc/cpuinfo | grep lm
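
Since a plain substring match can also hit flags such as lahf_lm, a slightly tighter variant (a sketch using grep's whole-word matching; it prints the number of flags lines containing lm, or 0 on a CPU without 64-bit support) is:

$ grep -cw lm /proc/cpuinfo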

How DNS Can Help in a Disaster Recovery Solution

Murphy walks among us. You know Murphy, the famous "optimist" who helps make every bad situation even worse? Well, after a disaster has occurred is not the time to figure out how and where you need to recover your data. Sure, we practice (OK, hopefully we practice!) recovering our database(s) from tape or disk. And during our testing we restore into a test database or, if we're lucky enough, onto a test server. Great, but how does the application connect to the recovered database if we've run into a massive hardware (server) failure?

Before I describe how Domain Name Services (DNS) can help, take a look at a typical database environment like the one shown in Figure 1. This diagram shows three database servers, named dbServer1, dbServer2, and dbServer3, with three databases on each. This particular design gives the Database Administrator (DBA) a lot of flexibility to move databases around for system maintenance, upgrades, and recovery. It can also provide a level of security by isolating "sensitive" data from less sensitive data under normal circumstances (think of separating account data from the regular user databases all employees can access, to comply with Sarbanes-Oxley requirements). Of course, if a disaster were to occur, many processes and procedures change until the situation is rectified (the disaster is over and the original database server(s) are brought back on-line).

Figure 1: Database Servers

So now, I'll describe how DNS can be used as a tool to help point applications to the correct database server.

As a refresher, DNS responds to requests for an IP Address based upon a server name; this forward name-to-address mapping is the A record, and there is one authoritative name for each address. Most people also realize that the reverse is possible: we can send an IP Address to the DNS server and get back the authoritative name of the device assigned to that address (this reverse lookup uses the PTR record).

That's nice, but it doesn't really help us. Wouldn't it be nice if there were a way to have more than one name assigned to a single server? There is, and it's called the Canonical Name alias, often referred to as the CNAME record by DNS Administrators. You can have an effectively unlimited number of CNAME entries pointing at each name. (NOTE: Remember that a DNS zone is still just a data file, and while it can be "unlimited", having too many records or entries could cause performance problems on your DNS server or network.)
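
You can try both record types with a quick lookup (a sketch using dig; the server and alias names are hypothetical):

$ dig +short dbserver1.example.com A      # returns the server's IP Address
$ dig +short salesdb.example.com CNAME    # returns the name the alias points at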

How does that help with the recovery of a database? One part of the answer involves application configuration when the application is installed, whether it's a fat client on a desktop or a web service. Usually the installer asks for the name of the server where the database is stored, and many people enter the physical name of the database server (the name associated with the A record in DNS). When that database server fails, the application fails with it. And even though the data can be recovered to another server from the backups and log files, the connection information has to be re-entered to point to the new server, whether that requires a reinstall or just a change in the application's configuration. Even more worrisome, for a fat client this could require touching hundreds or thousands of desktop systems to update their configuration. Even with tools to push updates, this could be a lengthy and error-prone process, and the downtime could lead to significant lost revenue or revenue opportunities.

Now you're probably thinking, "I know where this is going. I can just drop my A record for DBServer2 and recreate it as a CNAME record pointing to DBServer1." And that will work, provided all the databases on DBServer2 are restored to DBServer1. However, it is very rare that DBServer1 has sufficient free resources to recover all the databases from DBServer2. Instead, you end up recovering some databases to DBServer1 and some to DBServer3. (See Figure 2.)

Figure 2: Failure of DB Server 2

At this point you may wonder, "How do you recover some databases to DBServer1 and others to DBServer3?" One solution is to create a CNAME alias for each database stored on each server. While this is more work to manage and track, it provides the most flexibility when you need to move databases from server to server.
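
In a BIND-style zone file, those per-database aliases might look something like this (a sketch; the database and server names are hypothetical):

; one alias per database, pointing at whichever server currently hosts it
salesdb    IN  CNAME  dbserver1.example.com.
hrdb       IN  CNAME  dbserver1.example.com.
financedb  IN  CNAME  dbserver3.example.com.

When a database moves, only its CNAME record changes; the clients keep connecting to the same alias name.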

While this helps with recovery, it also helps in environments where replication is involved, since you already have the database copied to a secondary (or standby) server. Updating the CNAME alias points all the clients to the secondary (now primary) database server.

While this discussion has focused on recovery, a side benefit of the technique is that it also gives you an easy way to migrate databases from one server to another. If you're running into performance issues, or need to do a database upgrade or maintenance, you can move the database with minimal impact on client systems.

The final step is to clear the local DNS cache on all the client systems that connect to the database, so they look up the IP Address for the new server name. Under Microsoft Windows, the command "ipconfig /flushdns" clears the local DNS cache. Under Linux, restarting the "nscd" daemon (Name Service Cache Daemon) clears the cache ("/etc/init.d/nscd restart"). Both should take no more than a couple of seconds to complete.
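
On Linux distributions that have moved to systemd, the restart looks like this instead (an assumption about your init system; nscd still has to be installed and enabled):

systemctl restart nscd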

In short, using a CNAME alias makes recovery easier to manage should a critical failure occur. Besides providing a way to easily recover a database to a different server, it offers the secondary benefit of abstracting away the physical server name, which helps with database engine upgrades or with moving databases between servers for performance reasons. And it reduces the number of other things Murphy can cause to go wrong while you're trying to recover your databases.