Thursday, April 10, 2014

My standard configuration

I use Arch GNU/Linux. I know how to use it very well, and I love it dearly.
One of the amazing things about Arch is that you get to build your system from the ground up (but without waiting for things to compile). Therefore, the system that I run is exactly what I want. It has no defaults; I've carefully chosen everything and I've put together configurations and scripts to work with the exact environment that I prefer. My system is truly mine, and you would only get that on a do-it-yourself distribution like Arch.
It's occurred to me that I've never actually bothered to write down what, exactly, my preferred environment is. This would be useful for two reasons:

  • Other people may be interested
  • There are too many things that I configure, and I can't actually remember them when I go to install a new system. Therefore, there are a bunch of inconsistencies between my systems that really just shouldn't be there (and that annoy me when I run into them).

So, I have decided to publish a list of my standard setup. Here goes:
The standalone programs that I use regularly and always have installed are git, the Z Shell, Emacs (along with emacs-pkgbuild-mode), Firefox (along with all the GStreamer 0.10 plugins that it can use, and an English language pack), OpenSSH, sudo, pacmatic (with html2text), and Aura. I also always have Steam, Terminator, Pidgin, LibreOffice (Writer, Draw, Impress, the English language pack and the GNOME integration) and Nuvola Player. For that last one, I turn on the Dock Manager, Last.fm/Libre.fm, Lyrics Fetching, Media Keys, Notifications, Tray Icon, and Remote Player Interface (MPRIS) extensions. I also keep base-devel installed.
Utilities that I have installed but only use semi-regularly include strace, the Lynx web browser, rsync, cowsay, traceroute, nmap (with PyGTK for Zenmap), GNOME Tweak Tool, GNUCash, abs and pkgfile. I also install btrfs-progs (because I prefer btrfs for everything but /boot - that gets the ext4 treatment), and parted. I have Deja Dup, Bitcoin Core (aka bitcoin-qt), Brasero, devhelp (from GNOME), File Roller (from GNOME), Four-in-a-Row (also from GNOME), GNOME Software (GNOME's PackageKit frontend), GNOME Activity Journal (frontend to Zeitgeist), Anjuta (the GNOME development environment), Cheese, GNOME Chess, GNOME Clocks, GNOME Documents, GNOME Disk Utility, GNOME Mines, GNOME Music, GNOME Nettool, GNOME Nibbles, GNOME Robots, Quadrapassel (basically GNOME Tetris), GNOME Weather, and last but not least from the GNOME department, Seahorse. I also have the Android SDK installed from the AUR, and finally, while it's not strictly a utility, I should mention that I have pkgstats installed, to help out with Arch development.
I also have a small amount of extra documentation and other static files installed - namely, gnome-devel-docs, an offline copy of the Arch Wiki and the archlinux-wallpaper package.
If given the choice, I install using a GPT partition table. As stated above, I create a /boot partition formatted ext4, usually 256MB. I always create a swap partition - this is generally about 2GB, but it varies by installation. I create a /home partition, formatted btrfs inside a LUKS container, that's usually around 30GB - although I am looking to change this number, due to needing to store the Bitcoin block chain (and wanting room for some VMs). The rest of the hard drive, I generally fill with /. (I'm thinking about making / a fixed amount, and filling the rest of the drive with /home. But I'm not sure yet.) For the kernel, I use the stock Arch kernel, but I keep the linux-lts package installed, just in case an upgrade to the regular kernel breaks. I have an extremely standard mkinitcpio.conf: the only differences to note are that I have the encrypt hook added, and the keyboard hook moved earlier. For a bootloader, I use GRUB 2. I use the standard configuration, except that I turn on the blue colors in /etc/default/grub.
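For the curious, the resulting HOOKS line looks roughly like this - a sketch from memory, so treat the exact hook order as approximate rather than copied from my config:

```shell
# /etc/mkinitcpio.conf (excerpt) - roughly the stock Arch hook list,
# with "keyboard" moved before "encrypt" so the LUKS passphrase prompt
# has a working keyboard. Exact order here is approximate.
HOOKS="base udev autodetect modconf block keyboard encrypt filesystems fsck"
```

The important bit is just that keyboard comes before encrypt; without that, you can end up staring at a passphrase prompt you can't type into.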
All right, so we've covered filesystems. We've covered applications. Time to tackle the elephant in the room: desktop environment. I use the GNOME Shell as my desktop environment, so I have all of GNOME core installed (but I hand-pick extra applications, as you've probably guessed by the massive list of GNOME applications above). The extensions I have installed vary a bit, but I always have Dash to Dock (set to have autohide/intellihide on, set to switch workspaces when scrolling over any region of the dock, set to have an opacity of 50 and set to launch a new window when clicking on an icon), Caffeine, Drop Down Terminal (configured to use F1, so that I can use it with my Happy Hacking Keyboard), Messaging Menu (with microblog statuses turned on), Media Player Indicator (integrated with the volume menu), Advanced Volume Mixer (set to "aggregated menu") and Topicons. I also always have Systemmonitor, Places Status Indicator, Removable Drive Menu, and User Themes - all of these come bundled with GNOME - turned on. Additionally, I have a couple extra backends installed: I have the libpurple backend for Telepathy installed, and I have GVFS backends for SMB, AFP, MTP and most importantly, a backend for GNOME Online Accounts. I also have a couple things installed to make stuff in GNOME Control Center work: Rygel, gnome-user-share, system-config-printer, vino, and (as I'll mention later) ntpd.
Because I use GNOME as my desktop environment, I always have the NetworkManager systemd service enabled - however, I make an important addition: I add nohook resolv.conf to /etc/dhcpcd.conf so that I can apply custom DNS settings without the DHCP daemon overwriting them. I then use this ability to set my /etc/resolv.conf to use DNS servers from the OpenNIC project. Actually, it turns out you can do this simply by creating /etc/resolv.conf.head - this file is prepended to the final /etc/resolv.conf, which has the advantage of automatically falling back on the DHCP-configured DNS if OpenNIC DNS fails for some reason. I also have ntpd, Avahi, and GDM enabled. I also have the server component of OpenSSH turned on - the only configuration change I make is to disable root access, as the default configuration is actually pretty secure. Finally, in the GNOME Control Center, I've turned on screen sharing (password-protected). I really should configure printing, but honestly, I can't be bothered.
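To be concrete, the whole trick is two tiny files. The addresses below are documentation placeholders, not real servers - grab current ones from opennic.org:

```text
# /etc/dhcpcd.conf - stop dhcpcd from rewriting resolv.conf entirely
# (the resolv.conf.head approach below makes this optional)
nohook resolv.conf

# /etc/resolv.conf.head - prepended to the generated /etc/resolv.conf,
# so OpenNIC is tried first and DHCP-provided DNS stays as a fallback.
# Placeholder addresses; pick real servers from opennic.org.
nameserver 192.0.2.1
nameserver 192.0.2.2
```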
As a final miscellaneous note, I configure DNSSEC validation everywhere I can. There are a couple key differences between some of my machines that I haven't covered here, but those are all due to machine-specific needs (e.g. one of my machines - my iMac - has a Bluetooth mouse, so I have the Bluez utilities installed). But those are boring to write about, so I'm not going to write about them.
I'm also planning to add ZeroTier One to my default configuration. However, I haven't gotten it to work quite right yet, so that'll have to wait until later.
That's all I can think of for now (except my shell, of course, but the configurations for that are already public). I'll be posting new blog posts when I make configuration changes from now on, so you'll hear about this again... sometime.

Monday, April 7, 2014

I [redacted] hate Apple

If you know me in real life, you know that it's no secret that I am not a fan of Apple's mobile products. They are disgusting:
  • They won't let you install apps not from the App Store without jailbreaking the device, and the criteria for getting apps into the App Store are not well-defined and are rather arbitrary
  • They won't let you experiment with custom firmware
  • They push DRM
  • They're relatively tied to iTunes (which doesn't work on GNU/Linux)
  • You can't do anything real with them due to the highly restrictive security model (and the fact that guidelines for getting into the App Store are draconian)
  • They don't have an equivalent of Android Intents (so you can't e.g. change the default browser from Safari)
Those last two are obviously just my personal taste talking. However, the rest still stands. I hate Apple mobile products, and I would never buy one these days.
That all being said, I used to say that Apple computers aren't really all that bad. Yes, they do have problems:
  • OS X is the buggiest Unix on the block
  • Even though Darwin is open-source, the vast majority of Apple's desktop stack is closed-source (to name just one example: Quartz, the display server, is closed-source).
  • It's impossible to properly customize them (this is in large part due to the fact that the stack is closed, but Apple at its core doesn't really like users to customize all that much, IMHO).
Those days are done. I can now say with certainty that I hate Apple computers, too. The first reason is obvious: the OS is annoying; I've just covered this above. However, it's not annoying enough that I can't use it on a daily basis (like Windows is). If I wasn't using GNU/Linux and didn't want to try a BSD, I'd be using OS X. It has problems, but overall, it's a pretty dang good operating system. So let's talk about all the other reasons I now hate Apple computing products.

The wireless
I can't tell you why. I bet no one but Apple could tell you why. But it seems like every bloody Mac has a Broadcom chip. And, well, Broadcom is not known for excellent GNU/Linux support. It could be worse, but the Broadcom drivers are of the class of drivers that require firmware to operate. I dislike firmware in general, because it's non-free and I don't trust anything but free software, but to make matters even worse, Broadcom's licensing terms disallow redistributing their firmware. That means that on most Macs, in order to get on the internet using something other than OS X, you need to already be on the internet. Think about that for a minute. Sucks, right?
I'm lucky enough to own a MacBook Pro that has a built-in ethernet port. People who have bought newer MacBook Pros don't have this luxury. In fact, probably the only way that I can think of to get an internet connection for these poor souls (besides screwing around with USB tethering or something) is to mount a live ISO, find the firmware directory (which may be quite hard depending on how the ISO is built, ahem Ubuntu), and put the Broadcom firmware in there. It's a nightmare.

The EFI firmware
Now, EFI in general is kind of weird. To name just one example, EFI drivers often remain loaded after the kernel is booted. And while Secure Boot is undoubtedly a good thing in the right circumstances, Restricted Boot is not. But overall, I don't have a problem with EFI or UEFI (EFI 2.0, the version that's widely deployed). I do have a problem with Apple's EFI, though. Here's the reason:
It is screwed up in every imaginable way.
Let's start with the basics: what it's supposed to be. The answer? No one knows. Apple's EFI is a weird mix between EFI and UEFI; it is both and neither at the same time. Well, that's a great start.
The second thing that annoys me about Apple EFI is the fact that it has no EFI shell. Now, obviously shells are kind of ugly, and Apple (being Apple) needs to make its firmware pretty. I have no problem with this, but it would have been nice if they could have included a shell behind a keyboard shortcut or something. But no, you won't find an EFI shell in Macs. Now that I think about it, it's probably for the better. The Frankenstein-esque mess that you'd find there, given the mix of EFI and UEFI, would probably be horrifying.
And that brings us neatly to the last part of the mess that is EFI on Macs: bless. How can I phrase this? What the hell, Apple. I can't even fathom what moon-man black magic bless does. Basically, you have to mount your EFI partition, then use bless to "bless" a file, a directory, a mountpoint (probably your EFI partition) or... something else. But I have no idea which one of these you do. I've done it a couple times, and it's awful. There's absolutely no documentation on exactly how the options affect the firmware. The manpage is lacking. It's dismal, because without properly "blessing" your chosen bootloader, the firmware won't boot it. It's so opaque that I still don't understand how it's supposed to work.


The graphics configuration
This one, I suppose, is not really Apple's fault. But it's still bloody annoying. Basically, Macs nowadays have two GPUs: one is an integrated Intel (which sucks, because it is Intel and it is integrated) and the other is a discrete AMD (which is relatively good, because it's AMD and it's discrete). The way that OS X handles this is to use the Intel GPU normally, but turn on the AMD GPU if you're using something like OpenGL or OpenCL. Clever, right?
Well, the GNU/Linux community hasn't quite caught up. Support for hybrid graphics, as this setup is called, is still early. And on an unrelated note, when I booted my brand-new Arch setup on my MacBook, the kernel hung due to KMS not working properly. And because I have to turn off KMS to boot, I can't get graphics. I've wasted about 8 hours trying to get this to work. It's a bloody nightmare.

The touchpad
This one I will again admit is not really Apple's fault. Only a little.
I'll keep this section short, but basically, the trackpads in Mac computers are (presumably) very complex in order to support the kinds of things that Apple does with them in OS X-land (think: natural scrolling, multitouch gestures, right-click can be either left-click or right-click, there isn't really a clear separation between left and right-click - it feels as though the trackpad is physically all one button - etc.). This unfortunately means that there really isn't a good trackpad driver for Macs. Back when I was using Debian, I came across a decent one, but even that one was sub-par - I never got right-click to work. It was a real problem, trust me.

AGH, I HATE APPLE! Anyway...

Friday, March 14, 2014

Plans for the new steevie (and a short personal update)

steevie
Replacement & encryption

So a week or so ago I ordered a super nice hard drive - a 120 GB Intel SSD (note: that links to the 80 GB version but I did get the 120 GB version). Unfortunately, there's been snail-mail problems and the drive hasn't come yet, but I'll make sure I get it eventually. I've been working on the new server in a VM, so when the drive does arrive, I can basically just image it with CloneZilla, expand a couple partitions (because I didn't want to take up a 120 GB slice of my hard drive space for the virtual hard drive image), and voila! New server.
A lot of things aren't worked out yet, due to time constraints and the fact that I could never really get VirtualBox's networking to work properly - I could never initiate a connection from the host to the guest, and screwing with the settings sometimes messed up the guest's outbound networking, too. However, the basic system is installed and functioning. The new steevie is architected from the ground up to resist the NSA. Every partition (except for /boot) is encrypted at the block level. My current plan is to have the partitions automatically unlocked with a GPG key in /boot, but I'm considering requiring a passphrase to unlock that key (yes, this does work, even with system encryption) - both of these are currently unimplemented; in the VM I just type the password manually. In addition, backups (I'm getting to my plan for these) will be encrypted with a separate GPG key that I'll keep in a secure physical storage location (password-protected, of course). Whenever possible, I will endeavor to encrypt your data server-side at the application level (as opposed to the block level) - this is because even though block-level encryption is absolutely essential for resisting data compromise attacks while the system is off, it does nothing (repeat: nothing!) when the system is on.
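For illustration, auto-unlocking root from a plain keyfile on /boot looks something like this on Arch. Device names and the key path are invented, and the GPG-wrapped-key variant would need a custom initcpio hook on top of this:

```text
# /etc/default/grub (excerpt) - hypothetical device names.
# cryptdevice= names the LUKS container holding /; cryptkey= tells the
# encrypt hook where to find the keyfile, as device:filesystem:path.
GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda3:cryptroot cryptkey=/dev/sda1:ext4:/root.key"
```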
It's worth taking a sidebar right now to explain: why encrypt at all if you're just going to unlock it automatically? The reason is that it makes the data easier to destroy. For example, if I have an unencrypted server system and I need to get rid of all the data on it to protect my users, I have to overwrite every sector of the drive with random data - that means that the amount of time it takes for data removal is directly proportional to the size of the drive. With an encrypted system, the amount of data that you need to overwrite is fixed and small. (And, as mentioned above, it adds the possibility of unlocking using a GPG key, which can then be password-protected and used over the network, getting rid of the automatic aspect.)
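Here's a toy demonstration of the point, simulated on an ordinary file so nothing real gets hurt - on an actual system you'd overwrite the LUKS header region of the block device (or use cryptsetup's own header-erase facility) instead:

```shell
# Why encrypted wipes are fast: only the fixed-size header region needs
# destroying, not the whole drive. Simulated here on a plain file.
demo=$(mktemp)
dd if=/dev/zero of="$demo" bs=1M count=16 2>/dev/null   # pretend 16 MB "drive"
before=$(head -c 2097152 "$demo" | md5sum | cut -d' ' -f1)
# Overwrite just the first 2 MB (a generous bound for a LUKS header):
dd if=/dev/urandom of="$demo" bs=1M count=2 conv=notrunc 2>/dev/null
after=$(head -c 2097152 "$demo" | md5sum | cut -d' ' -f1)
echo "header changed: $([ "$before" != "$after" ] && echo yes || echo no)"
# prints: header changed: yes
rm -f "$demo"
```

The cost of the dd is constant no matter how big the drive is; without the encryption layer, you'd be overwriting every sector.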
In addition, the contents of the /boot partition will be checksummed and compared with known good values every boot. I'm planning to deploy HTTPS (or an equivalent, like SSH tunneling) for all services on steevie, major or minor, and specifically for web-facing services, I will turn on Strict Transport Security, which means that once you've visited that service once, your browser will insist on HTTPS for every future visit - so an attacker can't silently downgrade you to plain HTTP and MitM you. This is because I'm a strong believer in the philosophy that anything that can be encrypted should be encrypted. It's not exactly incredibly expensive (you can get basic SSL/TLS certs for free, and it's not computationally expensive, either).
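The /boot check amounts to something like the following - sketched against a temp directory standing in for /boot, with file names and the manifest location invented for the demo:

```shell
# Record known-good checksums of the boot files once, keep the manifest
# somewhere an attacker can't reach, then verify on every boot.
bootdir=$(mktemp -d)
manifest=$(mktemp)
printf 'kernel-bits' > "$bootdir/vmlinuz-linux"
printf 'initrd-bits' > "$bootdir/initramfs-linux.img"
# One-time: record the known-good values.
( cd "$bootdir" && sha256sum vmlinuz-linux initramfs-linux.img ) > "$manifest"
# Every boot: verify. A non-zero exit status means /boot changed.
( cd "$bootdir" && sha256sum --quiet -c "$manifest" )
verified=$?
[ "$verified" -eq 0 ] && echo "boot files OK"
# prints: boot files OK
rm -rf "$bootdir" "$manifest"
```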

Filesystems
Stepping away from all the crypto, btrfs is used as the filesystem inside the LUKS containers instead of LVM (because compared to the management of btrfs, I hated LVM). (/boot is ext4.) /home and / are separate partitions, and there's something like 2 GB of swap (I forget). The drive is GPT. Honestly, there's not much else to say about filesystems.
Edit April 9: I forgot to mention that /usr is on a separate, unencrypted partition. This is because /usr is basically data managed by the package manager, so there's no need to encrypt it - and by not encrypting it, we can achieve a performance benefit.

Backups
And that brings us to our final topic - backups. I have a much better/more defined backup plan this time (i.e. it actually exists). Here it is:
First off, the internal drive will be regularly snapshotted with btrfs snapshots. What I'm thinking of doing is keeping yearly, permanent snapshots (i.e. they never go away, ever), and then keeping monthly snapshots that I rotate to make room.
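As a sketch, the schedule could be driven by cron. Everything here - paths, timing, the existence of a /.snapshots subvolume - is hypothetical, not my actual setup:

```text
# Hypothetical root crontab - assumes a /.snapshots subvolume exists.
# (% must be escaped in crontab command fields, hence the backslashes.)
# Permanent yearly snapshot:
0 4 1 1 * /usr/bin/btrfs subvolume snapshot -r / /.snapshots/yearly-$(date +\%Y)
# Monthly snapshot; a companion script would prune ones older than N months:
0 3 1 * * /usr/bin/btrfs subvolume snapshot -r / /.snapshots/monthly-$(date +\%Y-\%m)
```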
Of course, that doesn't protect against the physical drive failing. For that, I'm going to do monthly backups to an external drive with a nice tool (I'm currently looking at rdiff-backup as the probable solution). Said tool would perform incremental backups - ideally, it would produce diffs between files, but I'd also be okay with copying the entire file. (However, copying the entire directory structure even if it hasn't changed is unacceptable, and it will become clear why in a moment.)
As a second layer of protection, I plan to backup the entire directory hierarchy (no diffs) at specific intervals. Currently I'm thinking 6 months to a year, but I may change that to 1 month if I decide to take rotatable btrfs snapshots more often (like every day or something). I'm still deciding whether I will manually cp files for this, or use a tool (possibly the same tool used to make incremental backups).
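The space-saving idea behind tools like rdiff-backup can be sketched with nothing but coreutils: hardlink the previous snapshot, then replace only what changed. Directory names here are invented for the demo, and rdiff-backup itself does this better (with real per-file diffs):

```shell
# Snapshot-style incremental backup using hardlinks.
src=$(mktemp -d); dst=$(mktemp -d)
echo v1 > "$src/notes.txt"
cp -a "$src" "$dst/backup.0"               # first run: a full copy
echo v2 > "$src/notes.txt"                 # the file changes upstream
cp -al "$dst/backup.0" "$dst/backup.1"     # hardlink "copy": nearly free
# Replace only the changed file. Remove the hardlink first - otherwise
# cp would write through it and corrupt the old snapshot too.
rm "$dst/backup.1/notes.txt"
cp -a "$src/notes.txt" "$dst/backup.1/"
cat "$dst/backup.0/notes.txt"   # prints: v1
cat "$dst/backup.1/notes.txt"   # prints: v2
```

Unchanged files cost almost nothing per snapshot, which is exactly why re-copying the whole directory structure every time would be unacceptable.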
As stated above, backups will be encrypted and stored in a secure, off-site location (i.e. my mom's house - the server is at my dad's). As stated earlier, the new steevie is designed from the ground up to be NSA-resistant, and because of that, I will never use something like Amazon S3 to store backups.

Miscellaneous
I'm thinking of getting a UPS for steevie, just in case. I'm still pondering this; they're very expensive.
This was basically an unwritten rule with the way I administered the first steevie, but I'm making it written this time: with the possible exception of firmware, I will never put closed-source applications or services on steevie.
I'm also going to offer steevie's services to friends this time around. Maybe I can get them to ditch Facebook and Twitter if I provide a nice enough alternative.
If you have any questions, let me know in the comments or tweet me, either @strugee2 or @strugee_dot_net.

Small update on my life
I've been super busy with robotics and homework. At the beginning of the robotics season, I was like, "oh man, I gotta blog about the start of robotics" and then I pushed it off until the end of the season. It's now the off-season, and we're doing some work but not at as intense a pace as during the season, which is nice. We placed fifth at states.
Also, I don't go on IRC a lot anymore due to not having a bouncer. Instead, I go on the Stack Exchange network chat. I like to hang out in the Unix & Linux main room, The DMZ (main room for IT Security), and The Comms Room (main room for Server Fault). Even though they don't receive a lot of noise, I'm also usually in the Bitcoin Lounge (main room for Bitcoin - duh), and finally, The Exit Node (main room for Tor).
Speaking of which, I've been running a Tor relay on Amazon EC2 for a while now, named strugees. I'm having some problems with it exhausting the bandwidth quota in like 2 days, then sitting there idling for 5 days, but I'm hoping to work it out eventually.
I've started using an online hosted instance of OwnCloud as a placeholder until I get OwnCloud set up on steevie. It's super nice - the main thing I use it for right now is the built-in cloud RSS reader. It's awesome.
Finally, I'm going to LinuxFest Northwest again this year, and I will actually be giving a talk on Arch GNU/Linux. I'm super, super pumped for that.

Have some extra money?
No, it's not for me. If you have some money that you want to donate, the Pitivi project, which is trying to make an entirely free and open source video editing suite, is running a fundraiser, and they deserve your support. So does the MediaGoblin fundraiser - MediaGoblin is trying to make an entirely federated replacement for media-sharing platforms like YouTube and Flickr. Again, go check them out, they're awesome.

Wednesday, October 16, 2013

Update on steevie's downtime

So you all probably deserve an update on steevie. That update is this update.
steevie has been down for approximately a month. Here's what happened:
  1. I upgraded steevie.
  2. I rebooted steevie, due to systemd cgroup hierarchy changes.
  3. steevie refused to boot (he failed to mount the root partition).
So basically, here's what's supposed to happen on a normal boot:
  1. GRUB loads.
  2. GRUB loads the Linux kernel.
  3. GRUB loads the initial ramdisk.
  4. LVM, in the initial ramdisk, in userspace, searches for Volume Groups.
  5. LVM creates the device nodes that represent the LVM Logical Volumes in /dev.
  6. systemd mounts (or swapons) the created devices as filesystems: one as /home, one as /, and one as swap.
  7. The initial ramdisk exits, the Linux kernel moves the mounts over to the real root, and the system boots.
The problem is that somehow, the system cannot properly complete step 5. This means that the boot process "completes" like this:
  1. Steps 1-4 above complete normally.
  2. LVM tries to create the device nodes. For some reason, this hangs forever.
  3. Eventually, something (possibly systemd, I'm not sure) times out waiting for the device to be created, and kicks you back to a ramdisk shell (which means Busybox).
  4. The shell waits for you to do something to fix the boot attempt.
This is extremely unfortunate. Right now, it's looking like the LVM problem is being caused by a hard drive failure.
You can read all the gory details at this Stack Exchange question, and then this followup, but the tl;dr is that there isn't much I can do. There's still a little more to try, but I don't hold out much hope.
Worst case, I have to completely wipe the drive. Any data in your home directory will be preserved, because there are no problems mounting the /home partition. But if you have any data anywhere else, it will probably be lost. I'll run data recovery tools, of course, but I don't hold out much hope. Unfortunately, this also means that my beautiful README will be lost. :(
I'm not sure what I'll end up doing once the drive is wiped. It's possible I'll use btrfs on the new root, since it seems to be pretty resistant to this kind of stuff (and at the filesystem level instead of the block level, so it will probably be more effective).
Sorry for the downtime! If you have any questions or any concerns, feel free to reach out to me in the comments or on Twitter (mention either @strugee2 or @strugee_dot_net).

Sunday, August 25, 2013

I'm back, y'all!

So I've been away for a while, doing things in Outside, aka Not The Internet. Scary.
Also, I haven't really had a lot of internet access when I haven't been outside, so I haven't been able to blog or do anything interesting.
But I got back a week ago... and then dived straight into robotics. We've been doing a lot of cool stuff (among other things, I2C bus programming and my favorite revision control system, Git), preparing for the season. I left yesterday for a robotics retreat and got back today, which was awesome.
However, I've had some free time and I've been doing some stuff.
First, I'm ditching Debian (in the words of my mom, "well, that was a short romance"). And here's why. Debian installs a lot of things by default for you. It is graphics-oriented: the default network connection daemon is NetworkManager running in GNOME. It installs a desktop environment at installation. And not only that, but it's way, way, way too liberal with dependencies. When I booted my Debian system, I found the xul-ext-adblock-plus package installed. And when I tried to uninstall it, it also removed the GNOME metapackage due to the AdBlock package being a dependency of GNOME. Not a suggests. Not a recommends. A required dependency. In other words: I couldn't remove the AdBlock Plus extension without removing all of GNOME. The way I eventually solved it? I created an empty package. Someone please explain to me why the hell I had to create a useless, empty package to keep my desktop environment but get rid of a XUL extension. And someone please explain to me what idiot decided that AdBlock Plus should be a part of GNOME and why.
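For anyone stuck in the same spot: the equivs tool automates the empty-package trick. A control file along these lines (version and wording invented for illustration) builds a contentless .deb that apt treats as the real thing:

```text
# dummy.ctl - feed to equivs-build, then install the result with dpkg -i.
# The inflated epoch (99:) keeps apt from "upgrading" back to the real
# package.
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: xul-ext-adblock-plus
Version: 99:1.0
Description: empty stand-in for xul-ext-adblock-plus
 Satisfies GNOME's dependency without installing the extension.
```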
That's ridiculous. Not only that, but I don't understand the composition of my system. Sometimes, my WiFi will disconnect and when I go to reconnect, it doesn't show anything until I turn the network card off and on again in the GNOME Control Center. But I can't figure out where to start diagnosing this issue, because I have no idea what's installed on my system and affecting the wireless. Not only that, but Debian patches things so. Freakin. Much. I hate that.
My GDM has a Debian background. I don't want a Debian background, but that's too bad because some Debian developer has helpfully added branding. I have a Debian menu in my Awesome menu (with a couple of screensaver options that don't work anymore, no less, due to GNOME Screensaver getting merged into gnome-shell or some shtick like that). I don't want a Debian menu in my Awesome menu, I just want Awesome. But ooooh noo, the Debian menu is "helpful", so someone added it. Even if I figured out the things that were affecting my wireless, I still wouldn't understand the whole picture, because the upstream documentation doesn't cut it. I'd also have to go look at Debian's documentation to see what ridiculous things they've added or changed.
Plus, despite the fact that I'm on Debian Sid - the unstable branch that's supposed to be more like a rolling distro because it's the development branch, where updated packages land first - I still get moldy packages. Even though Sid is where new things land, they're still developing for a non-rolling distro. So even though Emacs 24 is in the package pool, and has been for about a year (since I remember seeing it back when I used Ubuntu), I still get Emacs 23 when I install the Emacs package, because Debian isn't ready to move to 24 on stable, and unstable is ultimately going to become stable. Not just Emacs. The other program I use every day - my web browser - is also moldy. Because it turns out that Debian uses the Firefox/Iceweasel ESR releases instead of regular releases. So I had the dubious pleasure of pulling a newer package from Debian Experimental. I mean, seriously. The mold is clear. In the words of the Linux Action Show, when I'm using Arch, I feel closer to upstream.
In the end, Debian is not KISS. So I'm leaving it for Arch.
Edit: Debian also uses SysV init, which is old and bugs me, especially since I've grown up on the relative speed and feeling of cleanness of Upstart (from back when I used Ubuntu), and now the awesomeness that is systemd on Arch. It's possible to install systemd (or Upstart) in Debian, but it's impossible to effectively replace the init system, because the SysV init package is marked as essential, which means it gets automagically reinstalled when you do a system upgrade. Or you could patch the GRUB files, which I don't want to do. (In short, SysV init bugs me, and it bugs me that I can only half-switch to systemd.)
Update about steevie: X11 forwarding has been theoretically turned on, but my cursory attempt to launch gedit failed. I think I had some client-side things configured wrong, so I'm not sure if it actually works.
Also, files will be served from ~/public_html automagically by Apache. They'll show up under people.strugee.net/~[your username]/ - just make sure that the folder is readable by the httpd user. Details in the README (although I think there are currently some half-written parts).
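"Readable by the httpd user" boils down to permissions like these - demonstrated on a throwaway directory standing in for your home directory, so it's safe to run anywhere:

```shell
# The web server needs traverse (x) on each directory in the path and
# read (r) on the files themselves; 755/644 is the usual arrangement.
home=$(mktemp -d)
mkdir "$home/public_html"
echo '<h1>hi</h1>' > "$home/public_html/index.html"
chmod 755 "$home" "$home/public_html"
chmod 644 "$home/public_html/index.html"
stat -c '%a' "$home/public_html/index.html"   # prints: 644
```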

Saturday, August 3, 2013

Traveling

I've been traveling all day today and yesterday, so I haven't been able to blog.
Also, I'm going somewhere (rural Michigan) with little to no internet, so blogging will remain sporadic. Blech!

Friday, August 2, 2013

Update on the new server

tl;dr, here is what's done:
  • SSH (kind of)
  • LVM
I haven't had a lot of time to do server stuff for today and yesterday, because I've been hanging out with people IRL *gasp*
However, the new server lives, albeit weirdly. Yesterday I spent a lot of time trying to fix the filesystem on the server before finally giving up and just making a tarball - that took up like 6 hours of just waiting. Ugh! With the tarball made and backed up, I proceeded to install Arch Linux. Funny story: I had to bring the server into my bathroom because it is the only room in the house that a. provides grounded sockets and b. is reachable with an Ethernet cable from the router (since the new server doesn't have a WiFi card), which I needed because Arch is a netinst distro these days. Then I had to go to bed. However, since LVM is part of the install process, I got that done.
Today I had very little time as I've been packing for a trip tomorrow. Therefore, I wasn't able to get a perfect setup, but it is workable for remote administration (so I can get most stuff done while traveling). The major flaw that you will notice in the current configuration is that if you have an existing account, you will end up back in alex-ubuntu-server. This is because something is wrong with my router and it is still forwarding connections to alex-ubuntu-server (which is still plugged in via Ethernet to allow for remote file migration). Therefore, if you previously had an account on alex-ubuntu-server, you will need to ssh to 192.168.0.19 from the Ubuntu console. Then you'll end up at steevie (which is the new server's hostname, btw).
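If the double hop gets old, OpenSSH can do it in one command with a ProxyCommand stanza. Hostnames and the username below are placeholders, not the real ones:

```text
# ~/.ssh/config on your own machine (all names hypothetical).
# "ssh steevie-hop" then tunnels through the old server in one step.
Host steevie-hop
    HostName 192.168.0.19
    User yourname
    ProxyCommand ssh old-server.example.org nc %h %p
```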
Note: if you have a new account, you don't have to worry about this. I've put together some hackery on alex-ubuntu-server to allow you to login to steevie automagically. The only difference is you will have to type a very bad, very weak password that doesn't matter before you type your real password.
Other things will be done or turned on in the coming days, e.g. X11 forwarding, mail, etc.
9P will not be turned on, because I will need physical access to install Plan 9 and to reconfigure the router again. Anything external won't be turned on properly because, again, I'll need to reconfigure the router. For example, internal mail will be turned on but SMTP won't.
Anyway, I have to go pack.