May 29, 2003
Network Speed through Switch
There was a major flaw in my previous network speed testing: I was using scp to move a large file between machines, and of course a chunk of that time is spent doing encryption. Don't ask why; it was a quick way to get a sense of how fast I could copy something across the wire.
I'm getting a transfer rate through the switch close to the speed of the crossover cable (~42 seconds for the 81,474 KB file).
After a very small amount of research I found netest, a small tool to test throughput. You run the tool as a listener on one machine and as a sender on the other. Results of a TCP test show . . .
TCP transf rate:min 75.5641/avg 82.8163/max 89.6347 Mb/s
I think I can be happy with that, until we decide we want to move up to Gb connections between our machines.
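Some back-of-the-envelope arithmetic makes the encryption overhead concrete (assuming the ~42 second scp time above, and using decimal units, 1000 Kb per Mb): the scp-derived rate is barely a fifth of what netest measures.

```shell
# Rough throughput math: scp-derived rate vs. the netest TCP average.
awk 'BEGIN {
    kb = 81474                                  # file size in KB
    printf "scp:    %.1f Mb/s\n", kb * 8 / 42 / 1000
    printf "netest: %.1f Mb/s\n", 82.8          # avg from the TCP test
}'
```

In other words, even with the switch running at wire speed, timing scp mostly measures ssh, not the network.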
Posted by mike at 1:09 PM
Cisco Catalyst 2950 Switch Installed
After not having touched it since January, I dusted off the Cisco Catalyst I had hoped to put in over four months ago. Got the configuration set to force full duplex and a speed of 100 Mb/s (the Cisco defaults for both are "auto", and Solaris was misnegotiating).
1) It's interesting to think that I attempted the install in January but couldn't find another window until last night (four months later) to take another shot at getting the switch in. Funny how some things move so slowly, or just aren't important enough. I'm finding more and more with our systems that there is very little time when it's convenient for everyone to have the system down. More than ever I am prioritizing a plan to move us toward complete redundancy for all services.
2) I've never been one to shy away from tackling something new, but I think I've gotten as close as I want to get to networking equipment. Some people dedicate their careers to designing and optimizing networks, and even though I'm getting traffic through the switch at close to wire speed, I cringe at the thought of something going wrong with the switch and having to go in and troubleshoot. Most likely I would call in someone else to look at it.
Posted by mike at 11:59 AM
May 28, 2003
Adding New Disks to Sun StorEdge A1000
The theory with SCA SCSI drives is that they can be added or swapped without machine downtime. I've done it before, but I always double-check the documentation before putting in or pulling drives on live machines:
When adding drives to a Sun StorEdge A1000, add the drives while the system is up and running. Do not reboot the system. Doing so may cause a loss of configuration information on the new drives, and a loss of data and logical unit (LUN) configuration on the existing drives. sun docs
Makes adding more disk space pretty straightforward. With a little configuration I've got a badly needed mount point; I'll probably move all the MySQL tables and backups to the new disks.
Posted by mike at 12:08 AM
May 27, 2003
Data Transfer Speed on Crossover Cable
I'm going to be making some changes in our system architecture in the next day or two and want to do some measurements to know for sure that I'm actually improving things. The (informal) benchmark I'm looking at is how fast data transfers on a crossover cable between two of our production machines. The crossover cable is being replaced with a switch.
On the crossover an 81,474 KB file takes an average of 38.5 seconds, around 2117 KB/sec.
Will be interesting to see how that compares to the switch once it's installed.
Posted by mike at 11:37 PM
Methods for Maintaining Software Installation
Last week I wrote about the process of gathering the list of Perl modules. Pete writes in response:
I assume you'll be recommending the use of CPAN.pm (ie "perl -MCPAN -e shell") for downloading, compiling, testing and installing the Perl modules. In fact, could even script/automate detection and installation of the necessary modules using CPAN.pm.
The paradigm we've operated under is to compile and build packages on one machine and then distribute them across all machines as Solaris packages. Makes it easy to control exactly what is installed on each machine, and to quickly determine package versions, etc. It is more work initially to build the packages, but the ease and assurance that machines are in sync is worth it in the long run. I've been questioning this approach.
Some questions in deciding how best to manage software across machines:
1) Our machines are jumpstarted, using Solaris packages (another reason we put our stuff into packages). Is there a method for executing the CPAN make/make test/make install process from a package install? I believe part of the package install includes the ability to execute a script. What if the machine isn't online when that package is installed?
2) Does CPAN, or the make install process, make it easy to remove the pieces of a particular install? If I need to pull out one module and replace it with a new one (maybe it's been renamed or relocated), is it possible to completely pull out what was installed during make install? Typically I just make install over the older version, which would probably work but could lead to a less synchronized set of machines. I guess with good documentation the install process across machines could be more controlled.
3) Initially we were using the Sun compiler to build our packages because gcc's 64-bit support was still in development. We only had the compiler on one machine, which limited builds to that machine. We now use gcc in 32-bit mode (even though it now supports 64-bit), so this is no longer an issue. I wonder how much of the decision to build packages was based on having the one compiler.
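Pete's CPAN.pm idea could be scripted along these lines. A minimal sketch, not a finished tool: the module list is illustrative (No::Such::Module is deliberately bogus), and the actual install line is left commented out since it needs network access.

```shell
#!/bin/sh
# Detect missing Perl modules, then hand the list to CPAN.pm.
if command -v perl >/dev/null 2>&1; then
    missing=""
    for mod in strict File::Spec No::Such::Module; do
        if perl -M"$mod" -e1 2>/dev/null; then
            echo "ok      $mod"
        else
            echo "MISSING $mod"
            missing="$missing $mod"
        fi
    done
    # To actually install the missing ones (uses the network):
    # [ -n "$missing" ] && perl -MCPAN -e "install(qw($missing))"
else
    echo "perl not found; skipping"
fi
```

Something like this could run from a package postinstall script, which is exactly where question 1 above (what if the machine is offline?) starts to bite.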
It seems like more and more I've been coming up against questions or procedures that are simply part of deciding what kind of sysadmin practices I choose, and not so much a matter of what is right. Something feels good about researching all the options, deciding what path works best for our needs, and adding it to the Unix toolbelt.
Posted by mike at 3:03 PM
How Good it Feels to Branch CVS
Last night I branched the modules in our CVS tree after a long period without a branch. I forgot how good it feels to tag the code . . .
We've been living for the past few months with production running off the main trunk of our CVS tree, no branching (yeah, it was not good). It started in December when a good chunk of code needed to move into production, but there was going to be a significant amount of tweaking of XSL stylesheets, and we anticipated that the administrative overhead of working in a branch and merging changes back to the main trunk was more than anyone had time to manage.
Something so reassuring about having a snapshot to fall back on. It was also somewhat insane to think we'd ship off code from the main trunk to other schools.
Posted by mike at 1:34 PM
May 23, 2003
End of Year Spending
Until I came to Tufts I had always been critical of organizations that make a rush at the end of the year to blow money; most of the stories I'd heard demonstrated wasting taxpayer funds.
I've seen two approaches to end-of-year funds:
1) blow them on junk for the sake of preserving the budget amounts next year
2) make a plan and use the funds for significant improvements in the organization
Most of the stories I've heard fall under number 1. After having seen our director's use of year-end funds I have come to a new appreciation for how this system can work. With a little planning the end-of-year funds become a method for both performing upgrades the regular budget wouldn't allow and pushing the envelope with new technology.
Three things contribute to having surplus funds at the end of a year:
1) Our director is regularly finding pockets of grants which get applied in some way to what we're working on, freeing up previously allocated funds
2) Staff changes which leave positions vacant for periods of time, freeing up salary money
3) Living frugally through the year in case of emergency
The gist of what's happened over the past 2-3 weeks is we've been able to dream a little and pick up some fairly significant pieces of hardware (and some software) which will enhance the speed and abilities of our service.
The biggest addition is a dual 1 GHz Sun Fire 280R, which will replace an older dual-450 MHz U60, a huge boost in processing speed for the end users. We also purchased a bundle of SCSI drives which will either double our storage space or allow us to do RAID striping (instead of straight mirroring). We replaced a number of less expensive machines (desktops and internal servers). I think in total there were between 30 and 40 items that ended up being approved.
The improvement I'm most excited to see is Andy's addition of a triple-head video card and two new monitors to match his current 19" Dell Trinitron. I'm interested to see three monitors in action (I use dual, but have never seen triple). Also, something about having Linux driving that kind of a development environment makes it more exciting.
Spending end-of-year funds seems acceptable if it's done responsibly; of course, it is much easier to come to terms with spending excess money if you are a decision maker in how that money gets spent.
Posted by mike at 12:19 PM
May 22, 2003
Brought 802.11g Wireless Router Home
The access point hasn't been used once since I set it up. I guess I was the only candidate, and I keep my laptop on my desk 99.999% of the time I'm in the office. Kind of annoying, since I'm locked into a wire at home where I really wanted to be moving around (802.11g won't work with my existing 802.11b gear).
So I brought the access point home, at least until someone else decides it's important to have in the office.
Aaaahhhh, it is nice to be free again.
Posted by mike at 1:47 PM
May 21, 2003
Purchasing InnoDB Hot Backup
Bought a copy of InnoDB Hot Backup today. Don't know if I should be ashamed or proud that it was my first transaction in another currency (not counting Canada).
Went just as smoothly as any transaction I've made with a US company; I was glad that the site offers instructions in English.
The InnoDB folks were fast. I made the payment through Luottokunta at 2:45pm, sent a corresponding email with more details to InnoDB at 2:49pm, and received the confirmation email with the attached license at 3:25pm.
Have been using my own script to automate the backup process each night (using some of the included mysql tools). Works great for MyISAM tables, but using InnoDB table types changes everything about the backup and restore process. Over the summer we're converting all our tables to InnoDB.
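For reference, the nightly script is basically a cron-driven mysqldump wrapper. This is a sketch of the idea, not the actual script; the backup path is made up, and the InnoDB caveat in the comment is exactly why Hot Backup was worth buying.

```shell
#!/bin/sh
# Nightly dump sketch. BACKUP_DIR is illustrative; fall back to a
# temp dir so the sketch runs anywhere.
BACKUP_DIR=${BACKUP_DIR:-$(mktemp -d)}
STAMP=$(date +%Y%m%d)

if command -v mysqldump >/dev/null 2>&1; then
    # --opt locks tables, which is fine for MyISAM; a consistent
    # InnoDB copy wants --single-transaction or a tool like
    # InnoDB Hot Backup.
    mysqldump --opt --all-databases > "$BACKUP_DIR/all-$STAMP.sql" \
        || echo "dump failed (is mysqld running?)"
else
    echo "mysqldump not found; nothing to do"
fi
```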
Posted by mike at 4:57 PM
Finding the List of Used Perl Modules
We're in the process of shipping our code off to another school. It's been over a year since we did it last, so I wanted to make sure our list of Perl modules is up to date.
I decided to work backwards from the code, developing a command-line pipeline of find, grep, sed, awk, sort, and uniq to generate a list of use statements found anywhere in our Perl modules and scripts. That gives me around 72 use statements.
Some of the modules are included in the default perl/mod_perl install, so the next step was to determine how many of the use statements required additional modules. I happen to have a new machine that I've only gotten as far as installing perl/mod_perl on, with no additional modules. I looped over the 72 use statements and got a list of 36 errors where modules couldn't be found.
Many of the 36 use statements pulled in different modules within a bundle. I started at the top of the list and checked off each use statement once I located the bundle or package tarball on CPAN. Ended up with a list of 32 items to download, compile, and install, including a handful of packages we don't use yet but might soon, and a package or two that satisfy dependencies.
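The pipeline was along these lines (a reconstruction, not the exact command). The demo below runs it against a throwaway tree so it's self-contained; point find at the real source tree instead.

```shell
#!/bin/sh
# Build a throwaway tree with one module for demonstration.
SRC=$(mktemp -d)
cat > "$SRC/Foo.pm" <<'EOF'
package Foo;
use strict;
use DBI;
use DBI;
EOF

# Distinct modules pulled in by top-level 'use' statements.
find "$SRC" \( -name '*.pm' -o -name '*.pl' \) -print \
    | xargs grep -h '^use ' \
    | sed 's/^use //; s/[; ].*$//' \
    | sort | uniq
```

Against the demo tree this prints DBI and strict; the duplicate use of DBI collapses via sort | uniq, which is how 72 scattered statements shrink to a manageable list.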
Kind of a pain, but worth being up to date on exactly what we're using. For reference the list of all packages that go on a new machine is here.
Posted by mike at 8:18 AM
May 19, 2003
Going to CAMP in Boulder, CO
Was asked today to attend the Campus Architecture Middleware Planning (CAMP) meeting in Boulder, CO. Had mixed feelings about it, and had to weigh it out carefully. Many of the presentations address services we either use or hope to use, but that are on the plate of a different group at Tufts.
Finally decided to go, the primary reason being we're applying for a grant to specifically enable our software to share information with other institutions and this conference is geared toward forwarding services that will enable cross-institution exchanges. Even if I'm not building the machines or running the services it's good to have someone on our team that is familiar with the options and issues.
Posted by mike at 9:03 AM
May 16, 2003
Hanging at Home during Conference Call to Durban, South Africa
Today I had one of those moments where you sit back and marvel at technology and what it enables us to do.
I was at home (I typically work longer days Mon-Thurs and stay home Friday) playing with the kids and got an IM that I was needed on a conference call at 10am. Within 15 minutes I had the kids in the playroom and was on a conference call with three folks in Durban, South Africa (University of Natal) and Susan (project director) at the office in Boston.
As we conferenced about getting them set up with a snapshot of our code from CVS and providing information on server configuration, there were several moments where I was helping the kids (getting a snack, finding a toy, moving one away from a trash can) or chatting with Susan on IM. Was on the phone for 20-30 minutes.
When it was all over I sat back and thought about how wonderful it is to be able to be at home, using the internet and phone to conduct business while being able to hang out with the kids. A little hard to explain the feeling . . .
Posted by mike at 8:33 AM
May 15, 2003
Today I Lifted Over 6 Tons
Never thought I'd be one to "lift weights," but today I started a muscle "toning" program at the YMCA. I went really light, only setting each of the machines between 40 and 60 pounds.
The Y has something called Fitlinxx, which keeps track of the pounds/reps/sets on each machine. I was quite surprised when I checked out to see that the total amount I had lifted was 12,480 pounds in the 45-minute workout. That's over 6 tons! Seems like a lot of weight, and I know on several of the machines I'll at least double the weight next time around.
Fitlinxx is pretty cool, makes workout information available online.
Posted by mike at 9:06 AM
May 14, 2003
Server Hosed with syslog Attempts
Today we had a machine serving HTTP/HTTPS go down, but not really. The machine would not respond to SSH/HTTP/HTTPS, yet ping requests were successful. I was able to connect via the console and determine there was no obvious problem with the machine (poking through the logs, looking at CPU and memory).
Would have suspected network issues but other machines in the same cabinet/router were fine.
After 10 minutes of determining nothing looked wrong I was asked to reboot the machine, and the problem went away. Troubling to a person who believes that with Unix/Linux machines executing shutdown is only for moving machines or installing hardware. My faith was shaken.
Spent the evening going through the logs and found that in the 30-minute downtime our firewall blocked 18,400 attempts to syslog to another machine over the network . . . something I wasn't aware was in our syslog.conf. In that time the machine did get some SSH/HTTP/HTTPS traffic out, but very little.
The working theory is that the overwhelming number of outbound log packets seriously hampered the ability of other packets to get out.
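For reference, a single selector/action line in /etc/syslog.conf is all it takes to forward messages to another machine over the network (UDP port 514). Something like this, with a hypothetical host name, is the kind of line that was hiding in ours:

```
# anything at err level (plus kernel/auth notices) goes to loghost
*.err;kern.notice;auth.notice        @loghost
```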
Reading up on syslog turned up some interesting information:
the messages are unauthenticated and there is no mechanism to provide verified delivery and message integrity - IETF
Posted by mike at 3:40 PM
May 12, 2003
Building Apache with mod_perl and mod_ssl
Note: this method has been replaced by a shell script.
The need to build and package Apache comes in waves; sometimes I'll do it three times in a month, and then not touch it for 6 months. Each time I've forgotten some piece of the process and end up relearning it. Searches around the web turn up umpteen different methods for getting mod_perl and mod_ssl compiled into Apache, all with a different twist. So I've worked out *my* steps for compiling.
Obviously the source needs to be downloaded and untarred.
1. config and make Apache - necessary for USE_APACI in mod_perl make
- cd apache_<version>
- ./configure --prefix=/usr/local/apache
2. configure SSL, add to Apache
- cd ../mod_ssl-<version>
- ./configure --with-apache=../apache_<version>
3. configure mod_perl, including the option to build apache along with mod_perl make
- cd ../mod_perl-<version>
- perl Makefile.PL DO_HTTPD=1 EVERYTHING=1 APACHE_SRC=../apache_<version>/src USE_APACI=1 SSL_BASE=/usr/local/openssl APACHE_PREFIX=/usr/local/apache APACI_ARGS='--enable-module=ssl,--enable-module=rewrite'
4. enable existing SSL certificate
- cd ../apache_<version>
- make certificate TYPE=existing CRT=<path to cert>
5. install the whole thing
- cd ../mod_perl-<version>
- make test
- sudo make install
Then I use my handy packaging process to grab it all and put it in a package.
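As the note at the top says, these steps eventually got rolled into a shell script. A sketch of what that looks like; the version numbers and cert path are examples, not our actual values, and it bails out politely if the source trees aren't unpacked in the current directory.

```shell
#!/bin/sh
# Apache + mod_ssl + mod_perl build, following the five steps above.
# Versions and CRT path are illustrative; adjust to the tarballs in use.
APACHE=apache_1.3.27
MODSSL=mod_ssl-2.8.14-1.3.27
MODPERL=mod_perl-1.27
CRT=/path/to/server.crt

build_apache() {
    ( cd "$APACHE"  && ./configure --prefix=/usr/local/apache ) &&
    ( cd "$MODSSL"  && ./configure --with-apache="../$APACHE" ) &&
    ( cd "$MODPERL" && perl Makefile.PL DO_HTTPD=1 EVERYTHING=1 \
          APACHE_SRC="../$APACHE/src" USE_APACI=1 \
          SSL_BASE=/usr/local/openssl APACHE_PREFIX=/usr/local/apache \
          APACI_ARGS='--enable-module=ssl,--enable-module=rewrite' ) &&
    ( cd "$APACHE"  && make certificate TYPE=existing CRT="$CRT" ) &&
    ( cd "$MODPERL" && make test && sudo make install )
}

if [ -d "$APACHE" ]; then
    build_apache
else
    echo "$APACHE source tree not found; unpack the tarballs first"
fi
```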
Posted by mike at 2:23 PM
May 10, 2003
How Many Steps do You Take in a Day
For the past three years my mother-in-law and grandmother-in-law have come and stayed with us for a week. This is the third year in a row where they've come with stories of some new product that has revolutionized their lives. Two years ago it was the Swiffer, which we now use solely to sweep the hardwood. Last year it was a bread machine, we bought one before they left and were baking like crazy.
This year both of them are pushing a pedometer, a little pager-sized box that clips to your belt and counts the number of steps you take. The pedometer can be calibrated to convert the number of steps into distance in miles. From reading a few sites it seems like a healthy number of steps each day is somewhere between 6,000 and 10,000.
So I tried it for a day; with my morning run and a walk at lunch I ended up with 18,002 steps by the end of the day.
Was surprised to find some interesting sites for pedometer users. Web Walking USA offers a way to enter the data from a pedometer and do a virtual walk across the USA.
Still not convinced I'd want to wear one every day . . .
Posted by mike at 10:25 AM
May 8, 2003
New Approach to Building Solaris Packages
Have been carefully thinking about the process for building Solaris packages over the past day or so, and have taken a new slant on my problem. Didn't spend a ton of time trying to track down how other people have dealt with this; I want to get moving.
I should note, the problem isn't always applicable. Sometimes the application is intended to be installed in a single new directory, which keeps a 'make install' tidy and easy to package up. Also, in some instances you can tell 'make install' to put the files in a different location from the configured one, which also keeps the dirs and files in an autonomous location for easy packaging.
If the above is not true, and configure/'make install' absolutely need to use a common directory where files will be mixed in, I have successfully used the method below, which essentially limits the build of the prototype file with a -cmin option so that anything modified in the last 10 minutes becomes part of the package prototype.
Note: This will not work seamlessly if you are installing into a place where other users are actively making changes. Ensure the system, or at least the install dir, is quiet through the process. If the system isn't quiet you will get additional files in the package prototype and will have to manually go through and remove the unwanted ones.
1. find . -print > ~/before_install.log - create a log of dir tree before install
2. Install the application/library with 'make install'
3. find . -print > ~/after_install.log - create a log of dir tree after the install
4. find . -cmin -10 -print | pkgproto > prototype - put all files modified in the last 10 minutes in the package prototype - make sure this is run within 10 minutes of the install
5. create pkginfo file, make package
6. install package - shouldn't install any files, because they are already there
7. remove package - should remove all package files
8. find . -print > ~/after_pkg_remove.log
9. compare the after to the before - there should be no differences. if there are, determine what needs to be changed in the prototype file and rebuild the package
10. install package
11. to be completely sure you got everything, run a diff between a fresh dir tree listing and the after_install.log file - they should be identical, with all files and dirs from the new app/lib in place
I'm not saying this is the best solution, but it allowed me to get perl 5.8.0 installed correctly with a package that can be used on all our machines.
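The before/after bookkeeping in steps 1-4 can be sketched portably; pkgproto itself is Solaris-only, so that part stays a comment, and a touch stands in for 'make install' so the sketch is self-contained.

```shell
#!/bin/sh
# Snapshot a tree before and after an "install", then diff the lists.
TREE=$(mktemp -d); BEFORE=$(mktemp); AFTER=$(mktemp)

find "$TREE" -print | sort > "$BEFORE"
touch "$TREE/newfile"        # stands in for 'make install'
find "$TREE" -print | sort > "$AFTER"

# Lines only in AFTER are the freshly installed files -- on Solaris
# these (or 'find . -cmin -10 -print') get piped into pkgproto.
comm -13 "$BEFORE" "$AFTER"
```

The comm -13 output is exactly the file list step 9 checks against: if package remove puts the tree back to the BEFORE state, the prototype was right.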
Posted by mike at 10:37 AM
May 7, 2003
Solaris Package Creation Flaw
There is a fundamental flaw in creating packages for Solaris (using the standard pkgmk/pkgadd/pkgrm toolchain). I'm attempting to build a perl 5.8.0 package right now (which should go in /usr/local).
Creation of packages is done by installing the software into an autonomous place. The recommendation on sunfreeware is to temporarily point /usr/local at an empty directory when installing, so when 'make install' puts files in the local subdirs you can easily package up what was installed.
However . . .
If you point /usr/local at an empty dir for 'make install', then existing /usr/local apps/libs (like necessary gcc files) are unavailable.
If you configure to have installation happen in another dir (like ~) many applications then won't work properly when moved into /usr/local.
If you make install into the default dirs, the installed files get mixed in with existing files, which makes it difficult to create the package (and ensure that all, and only, the right files get into the package). Plus, to install the package you then need to clean out the non-package installed files.
How to resolve this?
Ideas . . .
- configure packages to go into /usr/local-pkg, pointing that at an empty dir for the build and then pointing it at /usr/local after the package is installed. This works, but I can't bring myself to add a new link that would render so much useless if removed.
- 'make install' into /usr/local and figure out a good method to determine which files have been added. possibly a find with a -cmin (hoping nothing else would have changed in /usr/local).
Haven't poked around online much . . . maybe other people are facing this same issue.
Posted by mike at 10:16 PM
What Makes a Linux Distribution Cool?
Recently I've taken a bit of verbal abuse because my preference in Linux distributions is either Yellow Dog or Red Hat. It's somewhat in fun, but I also think there is a genuine feeling of superiority around certain distributions (my coworkers claim that to be a real Linux user I must switch to Gentoo).
So what makes one Linux distribution cooler than the next? It seems to me (based on this interaction) that the more user-friendly the installation and maintenance, the less credible the distribution.
To me an OS is a means to an end, not the end itself (although there is a small amount of satisfaction derived from the install and getting the applications running). I don't want to spend days (or even hours, for that matter) fiddling with settings, compiling code, and recompiling the kernel to get my OS installed.
Can't we respect a person's right to choose whatever distribution they like, and be happy they are a part of the larger Linux community?
Posted by mike at 3:32 PM
May 6, 2003
Credit Card Purchases Trigger Phone Call from Bank
Some time ago a credit card company ran commercials with a bunch of feel-good marketing about how the company would notice when purchases on a card seemed strange and call the customer to verify they were OK. I laughed pretty hard at that . . .
Monday we get home from Vermont and there is a message from our local bank, Wainwright Bank, asking us to call them back due to some strange activity on my card.
I call back and the woman says they've noticed some strange purchases on our card and want to verify them. She then goes through the last 5 purchases, all made in the past two days in Burlington (one of them for some tunes at the Apple store).
I am impressed. Yes, the purchases were different from normal transactions (based on location), but the speed with which they detected the odd behavior and contacted me was most surprising.
I would love to get the rundown on the algorithm used to determine when the flag goes up to make a call.
Posted by mike at 11:19 AM
May 5, 2003
Apple Battery Life Saves Weekend
Well, it made the computing part of the weekend complete.
Finishing up a vacation to Burlington, Vermont. Forgot to bring my battery charger so was counting on the battery life of my 12" PowerBook to last. Sure enough, I've been able to squeeze in almost 4 hours of uptime (extremely dim screen).
Down to 37 minutes now and getting ready to check out.
Posted by mike at 8:59 AM
Travelocity Changes Fare Watcher Alert Format
For many years now I've used Travelocity as my primary destination for researching airfare. I've tweaked their Fare Watcher email alerts to give me just what I want (as far as destinations and price changes).
In the last week or so they've changed their email messages from text to HTML, or at least moved to a more designed HTML alert. For personal mail I use pine, which isn't so hot at making sense of HTML, so the once friendly and easily understood messages from Travelocity are now somewhat cryptic and difficult to decipher.
Do I stop using pine? Do I find another fare alert system?
Posted by mike at 8:54 AM
May 4, 2003
The Internet Craze of the 1940s
Am a few chapters into Eric Schlosser's Fast Food Nation and am intrigued by the similarities between the internet craze of the 1990s and the fast food wars back in the late 40s:
Entrepreneurs from all over the country went to San Bernardino, visited the new McDonald's, and built imitations of the restaurant in their hometowns.
America's fast food chains were not launched by large corporations relying upon focus groups and market research. They were started by door-to-door salesmen, short-order cooks, orphans, and dropouts, by eternal optimists looking for a piece of the next big thing. The start-up costs of a fast food restaurant were low, the profit margins promised to be high, and a wide assortment of ambitious people were soon buying grills and putting up signs.
For every fast food idea that swept the nation, there were countless others that flourished briefly--or never had a prayer. There were chains with homey names, like Sandy's, Carrol's, Henry's, Winky's, and Mr. Fifteen's. There were chains with futuristic names, like the Satellite Hamburger System and Kelly's Jet System. Most of all there were chains named after their main dish: Burger Chefs, Burger Queens, Burgerville USAs, Yumy Burgers, Twitty Burgers, Whataburgers, Dundee Burgers, Biff-Burgers, O.K. Big Burgers, and Burger Boy Food-O-Ramas.
The fast food wars in Southern California were especially fierce. One by one, most of the old drive-ins closed, unable to compete against the less expensive, self-service burger joints.
Interesting to think about the origins of fast food, having recently lived through a similar boom with the internet. I wonder if someday, a generation or two down the road, people will look back and think of the internet as something started by a small group of large companies, as opposed to the hundreds of attempts we've seen trying to dominate each space in the online market.
Posted by mike at 3:57 PM
May 3, 2003
Burlington, Vermont - DSL Hotel
On vacation in Burlington, Vermont this weekend. Came last year in the spring and enjoyed it so much we decided to come back again, possibly making it a yearly trip. It's kind of a summer kick-off trip.
We stay at the Smart Suites, which are fairly inexpensive but have a full kitchen, pool, separate living and bed rooms, and DSL.
Speed reports from dslreports indicate:
Download speed: 1386 kbps
Upload speed: 102 kbps
Not too bad. Of course I forgot to bring my charger, so I'll hope that with the screen brightness low I'll get the full 3:20 left on the battery.
Posted by mike at 3:13 PM
May 1, 2003
802.11g Access Point Installed
I set up an 802.11g wireless access point/router in the office today and took all the measures I was aware of to "secure" the wireless network. Pete recommended turning off broadcast of the wireless network name so only machines that were looking for it would find it, but I didn't see any obvious place to turn the broadcast off.
Physical availability: The wireless signal should only be available within our office space. My office is next to a library, and I don't want the signal available in the library where someone could sit for hours unnoticed. I believe parts of the library have wireless available; I want to make sure I'm not overlapping any of those areas. With the antennas on the access point the signal was quite strong throughout the entire library, so I took the antennas off and for the most part limited the signal to within our office space.
MAC address control: I understand that MAC addresses can be spoofed/cloned, but I think it's a good measure to only allocate IP addresses to approved MAC addresses. Keeps the honest honest.
WEP encryption: Another measure that I understand isn't hard to circumvent, but a good measure against the average innocent passer-by. I chose to use a 128-bit ASCII key, although I must confess I'm not up on what is recommended. Am glad to see that new WiFi security is coming.
Hoping the three-tier approach will be enough of a deterrent.
Posted by mike at 11:28 AM
Wacky Woman with Dead Cats
I'm sure there are ample instances where you just wonder what another person was/is thinking. This is a pretty good one: a woman who was storing dead cats in her apartment.
The woman accused of storing 60 dead cats in her Beacon Hill apartment has tormented previous neighbors, leaving animal parts in a yard, painting a Nazi symbol on a home, and tying up people with legal actions in state and federal courts, according to legal documents and a former neighbor.
. . .
Though what she was doing with the cats is not clear, court filings in Middlesex Superior Court say she once told a customer she was ''very busy breeding the imperfections'' out of Persian cats. She also said she performed autopsies on them, the papers said.
Read the Globe Article
Posted by mike at 10:38 AM