« July 2006 | Main | September 2006 »

August 31, 2006

There's No Place Like Fenway Park

Heidi and I spent the evening tonight at Fenway Park (home of the Boston Red Sox) soaking in the atmosphere. It was, as always, a real treat and I can't help making a note of it (even though it hasn't been too long since the last time we went).

The moment of the evening came in the 8th inning. With 2 outs and a 2-run lead, the Sox let the other team put a runner on 1st and watched as the tying run came up to the plate. Since we'd lost 12 of the last 14, many of them games where we gave up a lead late, the fans were nervous.

Then Francona (the manager) steps onto the field to make a pitching change and calls for Jonathan Papelbon from the bullpen. The place goes crazy. Papelbon is amazing when it comes to closing games. He's not perfect; there have been times when the other team has gotten to him, but for Red Sox fans there's no one better to have on the mound in a tight situation. The crowd is on its feet screaming, clapping, and whistling as Papelbon makes his way across the outfield to the mound. There's nothing like that feeling at Fenway Park: the fans, full of energy and emotion, attempting to give the players a jolt of electricity to pull it off.

And they didn't disappoint tonight. Papelbon delivered with the help of some excellent fielding. A great game.

Now if they could just repeat that for the next few weeks to get back in the running for the playoffs.

Posted by mike at 11:45 PM

August 29, 2006

Regular Expression Problem Solved

Just this morning I posted a description of a problem I was trying to solve using regular expressions. Writing it out and then taking a break was just what I needed. I got some inspiration shortly thereafter and have a much better solution than I was headed toward.

As a refresher, I was trying to use a template like this:

$template = "Approval: {{user}} approved the {{object}} for {{project}} on {{date}}";

to convert a message like this:

$sentence = "Approval: Jim Johnson approved the invoice for new computers on 01/01/2006";

Into this (text marked up to be translated, but keywords preserved):

Approval: {{Jim Johnson}} approved the {{invoice}} for {{new computers}} on {{01/01/2006}}

I had a fairly long chunk of code that was attempting to split up the template and then apply it to the sentence when it occurred to me that in all cases the keywords will either be surrounded by two regular words or will sit at the end of the sentence, prefixed by a regular English word. Rather than attempting to parse and understand the template and then apply it to the sentence, I got the idea to use a global regex that just looks at a preword, the keyword, and a postword.

So here it is, using the variables as defined above:

## loop through template text, grab a preword, the keyword, and a postword (or end of line)
while ($template =~ /(\S+\ ){{(.+?)}}(\ \w+|$)/g) {
    # move the position back one word in case there's only one word separating keywords
    # (do this before the s/// below, since s/// clobbers the @- offsets)
    pos($template) = $-[3];
    # replace the preword, keyword, and postword with themselves, wrapping the keyword
    # in braces; \Q...\E keeps any regex metacharacters in the prewords literal
    $sentence =~ s/(\Q$1\E)(.+)(\Q$3\E)/$1\{\{$2\}\}$3/;
}

Hopefully the comments are good enough to help it make sense. With a regex you can cram a ton of functionality into a few extremely cryptic, compact lines. It feels great to have come up with a concise solution to the problem, but I'm sure it will take some work to get my mind back around it down the road.
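For reference, here's the whole thing as a self-contained sketch using the example template and sentence from above (the \Q...\E escapes are my addition, to keep any regex metacharacters in the prewords literal):

```perl
my $template = 'Approval: {{user}} approved the {{object}} for {{project}} on {{date}}';
my $sentence = 'Approval: Jim Johnson approved the invoice for new computers on 01/01/2006';

# for each keyword in the template, grab its preword and postword, then wrap
# the matching words in the sentence with {{ }}
while ($template =~ /(\S+\ ){{(.+?)}}(\ \w+|$)/g) {
    # back up to the postword in case it's the only word before the next keyword
    pos($template) = $-[3];
    $sentence =~ s/(\Q$1\E)(.+)(\Q$3\E)/$1\{\{$2\}\}$3/;
}

print "$sentence\n";
# Approval: {{Jim Johnson}} approved the {{invoice}} for {{new computers}} on {{01/01/2006}}
```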

Posted by mike at 12:45 PM

Battling with Complex Regular Expression Problem

I've had this problem on the burner (moving between front and back) for a few days now. I've been letting it simmer while working on other things to see if some brilliant light goes on as to how best to solve it. Not yet, although I'm close.

We have history messages stored in a database that need to be translated into a variety of languages. They are a controlled set of statements that share some common sentence fragments but also have specific pieces of information. Something like:

Jim Johnson approved the invoice for new computers on 01/01/2006.
The schedule request was rejected by Fred Alewife.

The statements aren't going to be changed, so my job is to find a way to have the code grab the statements, figure out which pieces should be translated and which should not, and mark them accordingly to run through the translator.

The problem is that in statements like the ones above certain pieces of the data should be kept from translation, and I don't know for sure where they are in the sentence. The solution that has been proposed (and that I'm attempting to implement) is to create a set of templates based on the known formats for the messages. The templates would be used when looking at a sentence to determine which pieces to translate and which to preserve.

So I created a simple template markup, which looks like this for the two statements above:

{{user}} approved the {{object}} for {{project}} on {{date}}
The {{object}} was {{status}} by {{user}}

And let's say the final data that needs to be sent to the translator is something like this for the two statements:

{{Jim Johnson}} approved the {{invoice}} for {{new computers}} on {{01/01/2006}}
The {{schedule request}} was {{rejected}} by {{Fred Alewife}}

You get the idea: you find a matching template and then use it to designate the words or phrases in the sentence that should be preserved from translation.

The part that finds a matching template is simple: I use a regular expression to replace each {{.+}} with a .+? and turn the statement into something like:

$match_statement = '.+? approved the .+? for .+? on .+?'
if ($sentence =~ /$match_statement/) {
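That template-to-matcher conversion is itself a one-line substitution; a minimal sketch (variable names as above):

```perl
my $template = '{{user}} approved the {{object}} for {{project}} on {{date}}';

# turn each {{keyword}} into a non-greedy wildcard to build the matcher
(my $match_statement = $template) =~ s/\{\{.+?\}\}/.+?/g;

print "$match_statement\n";
# .+? approved the .+? for .+? on .+?
```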

Once I have a template that matches the sentence it's a little more tricky. I can't do a piece-by-piece replacement because you have to consider the entire statement to figure out which words match up to the template.

So what I've resorted to is building a replacement regular expression. I have a piece of code that works through the template and finds the parts where word preservation is required. In the end I end up with an array of statements that I can join together to form something like:

$match = '(.+?)( approved the )(.+?)( for )(.+?)( on )(.+?)';

The idea was to create the match portion of the regular expression and then use it to do a replacement like this:

$sentence =~ s/$match/$1$2$3$4$5$6$7/;

Unfortunately I don't always know how many variables there will be to match. More importantly, that doesn't put in the necessary markup; it needs to be more like:

$sentence =~ s/$match/{{$1}}$2{{$3}}$4{{$5}}$6{{$7}}/;

So it seems the replacement part of the regular expression needs to be built dynamically, since you don't know exactly where the preserved words will appear. That's where I'm at: attempting to build a variable to stick in the replacement. Perl doesn't seem to like having a string variable filled with regex variables.
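For what it's worth, one way to sidestep the unknown-variable-count problem is to skip the s/// replacement entirely: match the sentence against the built pattern in list context, collect the captures into an array, and wrap every other piece. This is only a sketch, and it assumes the pattern always starts with a preserved piece (which isn't true for the second template):

```perl
my $sentence = 'Jim Johnson approved the invoice for new computers on 01/01/2006';
my $match    = '(.+?)( approved the )(.+?)( for )(.+?)( on )(.+)';

if (my @parts = $sentence =~ /^$match$/) {
    # even-numbered captures are the preserved pieces, odd-numbered are fixed text
    $sentence = join '', map { $_ % 2 == 0 ? "{{$parts[$_]}}" : $parts[$_] } 0 .. $#parts;
}

print "$sentence\n";
# {{Jim Johnson}} approved the {{invoice}} for {{new computers}} on {{01/01/2006}}
```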

While I'm close with this approach, I do continue to wonder if I should be looking at this from a completely different angle. Perhaps there's a pattern to solving this that I'm not seeing.

Putting it on the back burner again for a little bit to see if something emerges.

Posted by mike at 7:43 AM

August 25, 2006

No Digital Audio for a Month

It's been almost a month since I've listened to music from a digital source. I've been sticking close to the vinyl ever since the day I brought the record player up to the office. It seems significant to me because I listen to upwards of 6 hours of music a day while I work away, which has given me a chance to hear a lot of good music I haven't heard in years. Do the math and I've logged somewhere around 120 hours of vinyl listening this month.

Did I mention recently how much I love working from home?

It gets more serious than just listening to a lot of music. After burning through the 100 records I have on hand (the rest are at Pete's) I got invited to take a spin through my golf buddy's collection of vinyl. I came home with a loaner collection of old funk/jazz/reggae/rock. Some pretty awesome stuff in that stack, nothing I would have in my own collection.

Now that I'm through those I've resorted to poking around on eBay and adding to the collection. I'm finding that for $2 or $3 ($7-$10 with shipping) I can grab good-quality LPs from a huge assortment of sellers. So far my policy is to only buy stuff I don't already have on CD, and I try to stick to stuff I loved back in the day but never owned myself.

I'm not sure I can keep up the eBay purchasing; it's addictive because the music seems inexpensive compared to the value I feel I'm getting. Fortunately it looks like Pete will be coming to Boston next week and will be able to pack another cart of records from our larger collection stored at his house.

I still stand behind the statement I made last year about leaving vinyl behind.

Posted by mike at 5:53 PM

August 18, 2006

Log Buffer #6: a Carnival of the Vanities for DBAs

It's time to turn our attention to the sixth edition of Log Buffer, a Carnival of the Vanities for the DBA community.

This week, like every week, DBAs all around the world have been hard at work writing about their experiences, many of them providing detailed instructions on new and interesting ways to use and manage a database.

For folks wondering if it's worth all the work, Eddie Awad's Blog now has the latest results from his Unofficial Oracle Developer/DBA Salary Survey. The results are interesting, and hopefully encouraging. While the data is why we all rush there to see where we stand, there's also a good summary of the process Eddie went through to run and process the results from the survey.

As a DBA you work pretty hard, do you think it's time to take that vacation you keep putting off? Yes, you deserve to get away for awhile. To help in your preparation, Beth Breidenbach offers eight things to think about before heading off.

In some instances a vacation is actually a time to get more work done, as is the case over on Andrew Dunstan's Postgres Blog. It seems that Andrew and his son will attempt to complete a Postgres Enum project while his son is en-route to a working holiday. Sounds like most of the vacation is already filled with work to do. Hopefully your vacation will be less work.

If you do manage to get away, your vacation might be more enjoyable if you go back to Eddie Awad's Blog and consider his observation on comments. Eddie reminds us that including even simple comments can make a huge difference in how easy it will be for other folks to understand what's going on in the database while you're away. Knowing someone else can figure it out should make for a much better vacation. Right?

And while we're talking about understanding one another, Chris Foot's Oracle 10g weblog has an excellent article on Application Design Review Meetings. Chris provides a detailed outline of the things to consider when designing an application. Having representatives from all groups involved in defining, building, and supporting an application makes a huge difference in the outcome of a project.

Now, you might be thinking differently about this whole vacation thing if you're the DBA who gets stuck in the office while everyone else goes away. If you find yourself wanting to make sure everyone gets stung for abandoning you maybe you could learn how to make Oracle send email alerts and overload inboxes for the returning vacationers. Howard Rogers of Dizwell Informatics provides detailed instructions for pulling this together. (While you're there he also has some good thoughts about documentation.)

And if you're using SQL Server you can learn to programmatically fill up inboxes by reading Muthusamy Anantha Kumar's instructions for sending mail from SQL Server over at the Database Journal. Whatever your choice, hopefully your non-vacation will be more enjoyable as you think about the growing inboxes across the organization.

Now most of the time being a DBA doesn't mean going on vacation or playing tricks like flooding the mail server. More often it requires wrestling with and debating complex issues, which aren't always easy to win in terms of what's best from the database perspective. Among the issues that tend to get folks riled up is a good discussion on application-level database abstraction layers. Over at Xaprb, Baron Schwartz summarizes what he's seen in his post about database abstraction layers. As expected, the discussion in the post's comments is lively.

How, where, and when to use SQL hints is another issue that can get people going. Tom Kyte provides a nice summary of the key points in Words of Wisdom on The Tom Kyte Blog. Quoting from a recent article from Jonathan Lewis entitled Hinted SQL, Tom emphasizes key points of agreement. Comments on the post are remarkably agreeable.

We'll wrap up today's hot-topics discussion with a pointer to Oracle Musings where Dominic Delmolino writes about his experience playing with MySQL, and ways to convert MySQL databases to Oracle. Dominic has nice manners and is quite friendly when talking about other databases. I like the idea of being able to understand and accept the reasons why different people choose different databases.

While we've got MySQL on the mind, this week the latest beta of solidDB for MySQL was released. Jonathan Cheyer makes the announcement with a link to downloads over at blog.cheyer.biz. For folks who have been waiting to see the new storage engine in action now might be the time to head over and give it a whirl.

Moving on to calmer waters, what would life as a DBA be like without periodically having some work that doesn't seem like work at all? You know, the project that makes you forget you do this for a living. Maybe you're exploring some feature you never knew existed as in the case of Sue Harper over at Sue's Blog... again.... Sue takes time out of a busy Friday to play with a newly found dialog for creating external tables.

Or perhaps you had a clever idea to try storing your database data on a flash drive for ultimate portability. Peter Laursen documents his experience getting it up and running over at Blogck out .. by peterlaursen. Must have been a great feeling when he happened upon the idea and was able to put it into action.

At the core, a DBA is about making data storage and retrieval work accurately and quickly (or as we say in Boston, "wicked fast"). It might be something simple like the inaugural Mini-Tip over at An Expert's Guide to Oracle Technology where Lewis Cunningham offers a solution for Getting rid of spaces after TO_CHAR. Mini-Tip #2 follows close behind with a look at Oracle's nvl2() function.

More often though, the DBA is required to wrap his or her mind around more complex data manipulation and retrieval. Fred, over at DB2 News & Tips demonstrates multi-dimensional clustering and materialized query tables in DB2Express-C. Fred's purpose is not to teach us how to use them, but to prove they can run in the community edition of DB2Express. But it's worth reading for either reason.

Performing year-to-date calculations is another fun problem to tackle. On The Oracle Sponge, David Aldridge compares using a dimension table and a transformation table, concluding that there are significant performance benefits using the transformation table that are probably worth considering.

Pete-s random notes provides some additional discussion on year-to-date calculations and proposes an alternative to the transformation table method from David Aldridge. However, within a day, Pete posts part 2 of the alternative approach confessing that his tests for the alternative approach weren't up to snuff and the alternative isn't actually viable.

Sometimes DBAs get frustrated, tired, and sick of having to be responsible for keeping track of everyone's data. It's at those pivotal points that you wish you could just send the data into a black hole and never have to think about it again. Daniel Schneller got to do just that when he set up MySQL replication using the blackhole storage engine, which discards any and all incoming data. Why would you ever want to do that? Well, besides doing it out of annoyance or frustration, Daniel Schneller's Blog is the place to go to find out how sending data into the deep abyss was just what they needed for building their replication architecture.

And with that we conclude Log Buffer #6. Thanks for listening. As always, tune in next week to see how the database world continues to evolve.

Posted by mike at 12:00 PM

solidDB beta for MySQL

Where was I when the folks at Solid announced the beta release of solidDB for MySQL? Apparently it was announced at OSCON, where I wasn't, and it slipped through my sensors unnoticed.

Am excited to try it out. Falcon should also be coming shortly, right?

Update: I see that the most recent beta release announcement was in my aggregator. Missed that too.

Posted by mike at 12:25 AM

August 17, 2006

Sendmail Internal Error - Digging in Sendmail Source Code

Yesterday we got a notification from a customer that a few email messages had been returned undeliverable and after some discussion with their mail folks they thought we might be able to troubleshoot the issue.

The message stuck into the returned email was something like:

554 5.3.5 deliver: mci=947ee74 rcode=0 errno=9 state=0
sig=[]: Bad file descriptor
554 5.3.0 Internal error

I've been hacking away at the problem for almost a day now and am getting deeper and deeper into the depths of SMTP and sendmail. Normally a few minutes of trying different Google search terms will turn up other folks who've tackled the same problem, but it appears either nobody's ever dealt with this (unlikely) or that it happens infrequently and nobody has gotten around to writing about it.

For awhile I was sure the message was being returned with the error from the destination, but after looking through our logs I now think it's a problem in Sendmail's attempt to deliver the message when it's finally time to send it off (these errors were preceded by a period of very heavy greylisting).

Our mail logs have entries similar to this:

Aug 16 13:14:39 webserver sendmail[18093]: k7GH0uLW017508: SYSERR(root): deliver: mci=6a9408
rcode=0 errno=9 state=0 sig=mxone.remoteserver.com.:mxtwo.remoteserver.com.: Bad file descriptor
Aug 16 13:14:39 webserver sendmail[18093]: k7GH0uLW017508: to=<user1@clientdomain.com>,
<user2@clientdomain.com>, ctladdr= (500/100),
delay=00:13:43, xdelay=00:00:00, mailer=esmtp, pri=240521,relay=mxtwo.remoteserver.com.,
dsn=5.3.0, stat=Internal error

I'm definitely not a Sendmail guru, although I'm capable of getting it running and fiddling with the configuration files to make small changes. But since I can't find any information anywhere, there's nowhere to go except the Sendmail source code to see where this message is being generated. A little digging and I discover that the only system error generated in the above format is in deliver.c, line 3208. This snip shows the surrounding logic:

if (mci->mci_state != MCIS_OPEN)
{
        /* couldn't open the mailer */
        rcode = mci->mci_exitstat;
        errno = mci->mci_errno;
        if (rcode == EX_OK)
        {
                /* shouldn't happen */
                syserr("554 5.3.5 deliver: mci=%lx rcode=%d errno=%d state=%d sig=%s",
                       (unsigned long) mci, rcode, errno,
                       mci->mci_state, firstsig);
                rcode = EX_SOFTWARE;
        }
        else if (nummxhosts > hostnum)
        {
                /* try next MX site */
                goto tryhost;
        }
}

mci is a reference to the Mail Connection Information Caching Module, a module that caches open connections as well as the status of all hosts (whether or not there is an open connection to that host). What seems to be happening is that the mci structure is no longer usable because it's gotten into an error state. What's not clear is why a new mci isn't instantiated when the existing one has gone bad. I'm missing something here.

From the comments in mci.c, it appears there are a few things that affect how the mci caches the host connections:

**      There should never be too many connections open (since this
**      could flood the socket table), nor should a connection be
**      allowed to sit idly for too long.
**      MaxMciCache is the maximum number of open connections that
**      will be supported.
**      MciCacheTimeout is the time (in seconds) that a connection
**      is permitted to survive without activity.
**      We actually try any cached connections by sending a NOOP
**      before we use them; if the NOOP fails we close down the
**      connection and reopen it.  Note that this means that a
**      server SMTP that doesn't support NOOP will hose the
**      algorithm -- but that doesn't seem too likely.

So maybe it's a problem with the NOOP, or maybe it's the cache size or timeout. Looking back to the mail logs I stumble into this log entry from mci:

Aug 16 13:14:39 webserver sendmail[18093]: k7GH0uLW017508: MCI@0x6a9408: flags=6006c, errno=9, herrno=1, exitstat=0, state
=0, pid=0, maxsize=67000000, phase=client DATA 354, mailer=esmtp, status=(null), rstatus=(null), 
host=mxone.remoteserver.com., lastuse=Wed Aug 16 13:14:37 2006

If I grep for that specific mci (6a9408) it is tied to mxone.remoteserver.com. and is responsible for all messages that were aborted with an internal error on that webserver. Interestingly enough, there are places in the log where messages run into this mci and spew information into the logs, but when they are delivered they end up using the mci for mxtwo and get through just fine.

Upon further inspection the only difference between the mxone and mxtwo mci is the status and rstatus being (null). Both mxone and mxtwo have an errno of 9 during this time.

So it seems like the mci gets into a state that really warrants dropping it from the cache, but it hangs around and causes internal errors until it finally does get dropped. The other possibility is that the mail servers at the destination agree to accept the mail but aren't really ready, causing the mci to fail.

For now I've summarized all of this information for the support folks with a question about whether I should keep digging or whether we should wait and see if it happens again (to my knowledge it hasn't happened before).

Posted by mike at 10:03 AM

August 7, 2006

What is Professional Services Automation (PSA)?

Back in March I started a new job at OpenAir and I think it's finally settled in that (1) I really did change jobs and (2) it was a really good move for me for a number of reasons.

Before I can really talk about what I'm doing I should start with some information about the industry I've moved into, Professional Services Automation or PSA. I was in academia for many years so it's been interesting to shift to an industry with a completely different vocabulary, set of standards, history etc. It's been a gradual process, but I think I'm starting to fit in and am finding myself more and more excited to be doing something new and different.

The OpenPSA project has a nice summary of PSA:

Professional Services Automation (PSA) is a term invented by market analyst companies to describe a wide field of software products - from extended financial management packages to specialized ERP systems.

Professional Services Automation software enables services organizations to manage and improve their operations. Potential users of PSA software include consultancies, corporate IT departments, R&D teams and other service groups who need a better way to manage their projects, client relationships and documentation.

What does PSA do for you? I like this little bit on our homepage . . . I tend to use pieces of this description when I tell someone where I'm working and they haven't heard of OpenAir or PSA (99% of the cases):

Improve utilization—keep your team on the right projects.

Improve cash flow—accelerate billing and collections.

Our full suite of on-demand, integrated applications, from timesheets and expense reports to complex project and resource management, requires nothing more than a browser, and supports a variety of mobile devices, including Blackberry.

My understanding is that Professional Services Automation is considered a fairly new sector in the industry and is just starting to get recognized.

There are several companies going head to head to be positioned as the top player in this space, I like to think OpenAir has the best shot (currently the number one Google result for "Professional Services Automation"). More on that later . . .

Posted by mike at 10:09 AM

August 4, 2006

Open Services: An Alternative to Open Sourcing Code?

With Matt and Jeremy going back and forth on what it means to be a good open source citizen I can't help but throw out something that's been on my mind. I wasn't at OSCON and didn't catch the full conversation, so maybe this came up (or has been said in the past).


So here's my question: Is open sourcing code the pinnacle achievement and is it always the best an organization can offer to the open source community?

This is mostly about Google, because I'm more familiar with their services (I only use Yahoo! for TV listings). I think it applies to many of the companies in and around the open source arena, particularly those moving toward Web 2.0.

Even if it were possible and made business sense, I'm not sure Google's best contribution to the community would be to open source their codebase. I'd argue that their open services are a compelling alternative.

Yes, there are secrets in the code that people would kill to get their eyes on, and there's a lot to be said for the community being able to learn from/use the work Google is doing and perhaps even participate in development.

But for argument's sake let's suppose that management had a choice between allocating resources to organizing, cleaning, and packaging up code for public consumption, and developing open (or registration-required) web services and other APIs. As an average Joe I might like the idea of being able to grab, modify, and run the Google codebase, but the more practical thing is to let them run it for me and give me umpteen different ways to use their code by exposing it as a service. The network is the computer, right? Dare I suggest that for a majority of the folks out there interested in Google's source code, using the Google-provided services is an easier and better alternative than actually having the source code? And open services like the Google Maps API provide the code and the data. Having just the source code makes no sense for the hundreds (or is it thousands?) of sites that have leveraged Google Maps using the API.

I know, there are some people and organizations that would get a lot more out of having the source code from Google. Running an application that relies on Google's services has some flaws. There's always uncertainty about controlling functionality changes and data availability. And many folks aren't interested in using code that runs somewhere out of their control or provides no assurance of longevity. All important questions, but do they negate the contribution of open services? I don't think so.

So does making your code available by opening it up as a service count as open source? I'm not sure, but if I worked at Google or Yahoo! and was being pressured to talk about the company's contribution to open source I would definitely spend a little time highlighting the services efforts. It's not a replacement for opening the source code, but it is a very valuable contribution to the open source and development community.

Update: Glad to see in some of the stuff Tim is saying that this post isn't way off base. Tim has been pushing for an open services definition and gives credit to folks developing open services.

Posted by mike at 8:21 AM

August 2, 2006

Bose and Vinyl - Multi-Generational Audio

A few months ago I added an auxiliary input to my Bose SoundDock. Some folks thought it foolish, but the risk has been well worth it. I can cart it around with me and either use the iPod, or hook up any other audio source. That was the original idea, to get something portable that has excellent sound. I'll reiterate that I spent months considering the options, both reading and making numerous visits to listen to different units.

A few weeks ago I got to thinking about the record player and vinyl sitting collecting dust in the basement. Since I now have an official office in our house I figured it might be worth a few minutes to drag it upstairs. So this week I grabbed the record player and my old tape deck and set them up in the office.

The sound is pretty amazing. I'm not going to pretend that the Bose has better sound than my headphones, or a full-blown amp/speaker setup, but the record player hooked up to the SoundDock produces great, full-bodied sound. And the sound quality from the record player comes through; it sounds better than digital audio from the iPod or other digital sources.

I'm not a serious audiophile (haven't ever spent thousands of dollars for a piece of audio equipment), but throughout the day yesterday I put on a wide variety of records and had a few "moments" where I couldn't help just sitting back and enjoying the sound. I think what happens is that I've listened to these songs so many times on the iPod in a compressed mp3 format that it's a bit startling to hear the full-on sound from the record.

Two in particular caught me off guard, Love Vigilantes from New Order's Low-Life album and Is It Really So Strange? from The Smiths' Louder than Bombs. I went back to those tracks a few times to double check that it wasn't just a fluke.

So it's a multi-generational audio setup. To be complete I guess I need to throw in an 8-track and a CD player. That would capture all of the audio format choices I've seen actively used (8-track is a stretch, was something my parents had). Never did much with reel-to-reel, although we toyed with one to make recordings of ourselves.

Posted by mike at 7:20 AM

About Me

Name: Mike Kruckenberg
Location: Boston, MA
Career: Technical Management/Software Engineering (resume)
Current employer: OpenAir
Email: mike@kruckenberg.com (not afraid of spammers)

Hobbies & Interests: family, music (listen and play), writing, photography, video, baseball (watch), golf (play)

Posted by mike at 6:56 AM

August 1, 2006

Golf Hack #20 - Watch Yourself Swing

Last Friday night as I was gathering the equipment, beverages, etc for the weekly golf outing I threw the digital camera in the bag. I've done this in the past to take a snapshot or two, but this time I had this idea that we'd use it to capture the swing. The digital camera can capture 30 frames/second, which is more than adequate to capture the gist of a golf stroke.

After making sure it was OK with the other guys I took the camera out on the 4th hole and took footage of each golfer taking their tee shot. Since I couldn't take footage of myself I asked one of the guys who ended up turning it on right after my swing ended. I'd put it up but you'd be looking at a camera pointed at grass as we're walking off the tee box. Hopefully next time.

Anyhow, the footage of the other three is interesting to watch. In my Quicktime player I can arrow back and forth frame by frame to study the position and motion of the head, shoulders, arms, legs and golf club. At 30 fps you get a pretty good idea of where the club head is, how it strikes the ball, and where the ball goes. Fascinating. At my level of golf I'm not sure how easy it will be to make changes in the swing based on what I see in the video, but it can't hurt to watch and try to learn.

The image is a combination of each of the three golfers on the first video frame after the club hits the ball. Of the three, one is a very nice shot down the middle of the fairway, another is straight but topped so stays very low and gets caught up in the grass, and the third is a slice. Any guesses which is which? If mine was on there you'd have two slices to choose from (although neither of the slices were unplayable).

I have never been close to one of those "analyze your golf swing" machines. I'm not ready for someone else to tell me how horrid my swing looks, but this technique of bringing a camera along seems like a cheap and potentially more private way to get a general sense of what the swing looks like.

We did get some guff from the guys behind us, who were telling us to move it along and put the camera away. Next time I'm thinking I'll mount the camera on the bag so it's not so obvious. Then I can set the bag behind me and start/stop it with ease and without notice. That will have to be golf hack #19.

Maybe I should suggest the driving range as a better location for this kind of study.

Posted by mike at 7:58 AM