Install 2 of N. Continue?

This is a follow-up to the first entry in the series.

After an encouraging comment on my last OpenSolaris blog entry, I decided to do what was necessary to make a patched ZFS boot installer. I used the Netinstall script/procedure from Lori Alt. The name is a bit misleading: I was able to run the install from a DVD, the way I'm usually most comfortable doing it.
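
For the curious, what drives the ZFS-root install is a small JumpStart profile. The snippet below is a sketch in the later, documented ZFS-root syntax (the netinstall prototype's profile was along similar lines); the pool name, boot-environment name, and device slices are placeholders of mine:

    # zfsroot.profile - a sketch only; pool/BE names and slices are made up
    install_type  initial_install
    cluster       SUNWCXall
    pool          rpool auto auto auto mirror c0t0d0s0 c0t1d0s0
    bootenv       installbe bename zfsbootBE

A profile can be dry-run against the machine's actual disk layout with pfinstall before you commit to it (whether the prototype's ZFS keywords pass its checks, I can't promise):

    /usr/sbin/install.d/pfinstall -D /var/tmp/zfsroot.profile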

The actual install was much easier than expected, and – because of the pfinstall procedure – much more efficient than my usual rigmarole. Creating the modified boot DVD was the hard part, and really, that was mostly the logistics of moving DVD images back and forth. I would (the messier middle steps are sketched in shell after the list)

  • download the DVD segments to my desktop,
  • assemble the pieces (and really, Sun's download policy deserves a complaint: it gets squarely in the way of exactly this kind of work),
  • upload to the Solaris box, (and try again with newly split files because of a 2GB limit),
  • mount the ISO image,
  • copy the image to a working directory,
  • apply the patch, and include a draft install profile on disk,
  • create a bootable ISO image, and
  • move it (in pieces) from the server room to the MacBook Pro so I can actually burn it.
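
In rough shell terms – with image names, paths, and the patch file all stand-ins, and the mkisofs flags taken from the usual Solaris x86 DVD-remastering recipes rather than anywhere authoritative – the middle steps come down to:

    # Reassemble the downloaded segments (names are placeholders)
    cat sol-nv-x86.iso.a sol-nv-x86.iso.b > /export/isos/sol-nv-x86.iso
    # Attach the ISO to a lofi device and mount it read-only
    LOFI=`lofiadm -a /export/isos/sol-nv-x86.iso`
    mount -F hsfs -o ro $LOFI /mnt
    # Copy the image out to a writable working directory
    mkdir -p /export/dvd
    ( cd /mnt && find . -print | cpio -pdum /export/dvd )
    umount /mnt && lofiadm -d $LOFI
    # Apply the ZFS boot patch (-p level per the patch's own
    # instructions) and drop in the draft install profile
    ( cd /export/dvd && patch -p0 < /var/tmp/zfsboot.patch )
    cp /var/tmp/zfsroot.profile /export/dvd
    # Rebuild a bootable (El Torito/GRUB) ISO from the working tree
    mkisofs -o /export/isos/zfsboot.iso \
        -b boot/grub/stage2_eltorito -c .catalog \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -N -l -R -relaxed-filenames -V ZFSBOOT /export/dvd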

Google led me to Tom Haynes’ blog, and the linked entry, along with its follow-ups read in sequence, gives enough magic sauce to get a bootable/installable ISO image. With the disc burned, installation was a snap. And a ZFS boot disk… simply works. (I do need to dig deeper into arranging a ZVOL or some such as a dump volume. At the moment, it’s on one of the unused array disks.)
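
If a ZVOL does turn out to work as a dump device – I haven't confirmed it's supported in this build, so treat this as a sketch with made-up pool and volume names – it should amount to no more than:

    # Carve a 4GB volume out of the pool and point dumpadm at it.
    # NB: dump-to-zvol may not be supported in this build yet.
    zfs create -V 4g tank/dump
    dumpadm -d /dev/zvol/dsk/tank/dump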

When I last saw the machine, I had started running Bonnie-64 on it, and it looked good so far. I hope to have comprehensive test results to post soon.
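
For reference, the run is nothing more exotic than the following (scratch directory and machine label are placeholders); the -s figure is in megabytes, set to double the 8GB of RAM so the cache can't flatter the numbers:

    # Bonnie-64: 16GB working file, well past 2x RAM
    Bonnie -d /tank/scratch -s 16384 -m zfsbox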

In my previous entry, I vowed to give more details about the hardware.

My requirements were high-density, high-throughput, easy-to-manage storage on a budget. This box will be doing multimedia streaming all the way up to (potentially uncompressed) Hi-Def; it’s not your typical mailserver, in other words. My interest in ZFS has been in its reliability, ease of administration, and conceptual simplicity: disks are dumb and all too prone to failure. If an all-software solution lets me stop worrying about the disks without putting the workload onto yet another point of failure between the application and the hardware (read: RAID cards die, age, and go obsolete), then I’m all for it.

A local systems integrator – Steve, with whom my department has had a long relationship – provided the system and collaborated a fair bit on the specification (though he readily admits that Solaris is not his speciality).

We started with the chassis: Steve works fairly exclusively with PCICase, and recommended their 16× SATA drive chassis in a 3U rackmount. It’s a bit anonymously black, but it certainly looks like it will do the job of high-density storage on a budget.

After going back and forth, we settled on a motherboard from Tyan. The S2892 seemed just the ticket, and got a thumbs-up from the ZFS list. Unfortunately, Steve couldn’t find the board at any of his suppliers; it has apparently been end-of-lifed. He suggested the S3892 (Tyan Thunder K8HM) in its place. Having seen some measure of HCL support (thanks to Paul Richards) for the southbridge controllers on a kissing-cousin of the new motherboard, I agreed.

Originally, Steve proposed a 3.0GHz dual-core Opteron. I wanted something cheaper with more cores (cos Solaris tends to handle that rather well), so we ended up agreeing on two dual-core AMD Opteron 275 (2.2GHz) processors.

Combine this with 8GB of RAM, dual boot drives (internal to the case, connected to the motherboard), two of Supermicro’s AOC-SAT2-MV8 8-port SATA controllers, and 16× 500GB Seagate SATA hard drives, and we have a pretty serious 8-terabyte system for under £5000. The question at this point is just how serious; I’m waiting on the benchmark results to tell me.
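
How those 16 drives get carved up is still an open question. One plausible layout – purely illustrative, with made-up device names that depend on how the controllers enumerate – is two 8-disk raidz2 groups, one per Supermicro card:

    # Illustrative only: two raidz2 vdevs of 8 drives each
    zpool create tank \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0

That would give roughly 6TB usable, with two-disk redundancy in each group.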