Monday, July 29, 2013

Upcoming changes to alex-ubuntu-server

Recently a friend offered me a new server with much better specs than the 15-plus-year-old computer that I use now. It has 4GB of RAM (compared with the 256MB that the current server has) and a dual-core AMD processor running at 2800MHz. I'm not sure what the processor specs are for the current server, but honestly, I'm sure they're just as crappy as the RAM.

Getting this new server will open up a lot of possibilities, so here are some important changes coming to the server, if you're the one person who uses it.
  • X11 forwarding will be installed and turned on for SSH connections
    • This means that if you have an account (i.e. you're able to SSH into the current server), you will be able to log in remotely to a graphical environment. Among other things, you can carry your graphical application settings around with you (or at least it will seem that way; in reality you'll be loading them from my server, which will require internet access). There's a rough sketch of the setup after this list.
    • I don't know yet whether I'll offer GNOME. Any lightweight window manager such as awesome, Openbox, Fluxbox, twm, etc. I'll offer without further thought; GNOME, however, will depend on what system load looks like with it installed. I'll start with GNOME, but be aware that it could eventually be removed again.
  • LVM will be turned on and partitions will be reconfigured
    • This won't affect you in any measurable way if you use the server. It just means that if there's ever a need for more storage, there won't have to be server downtime to install and use it (see the LVM sketch after this list). If you don't know what LVM is, read the Wikipedia article on it.
    • /home will become a separate partition. This is mostly to allow for easier backups (currently there is zero backup policy) and easier transitions in the event of another server move.
  • There will be a fresh installation. I will not just be dd'ing or rsyncing files over to the new install.
    • There are several reasons for this. First and foremost, I installed and set up this server a couple of years ago, back when I was around 11 or 12, and thus didn't know exactly what I was doing or have a very good idea of how to be a sysadmin. Because of this I didn't keep a record of the changes I'd made, so I don't know exactly how the system is structured and can't effectively make changes or run diagnostics (because I don't know how a change would affect the system).
    • I may or may not transition to Arch Linux as the distribution of choice for my server, and that requires a reinstall. At first blush this may seem like a bad idea, since Arch is rolling and you need stability for a server (this is why Debian and Debian derivatives are so good for servers: they're stable and don't change often). However, with Arch you can deal with problems as they come along, instead of all at once every 6 months. That's actually pretty useful, because you can tell exactly which package change may have broken something, instead of 5-10 things potentially breaking all at once. In short, problems are isolated. Note that if I do run Arch on my server, I will of course do my utmost to maximize stability - for example, I'll use an LTS kernel instead of the latest. Another reason I'm considering Arch is that it makes it easy to understand exactly what's going on. Ubuntu and Debian both come with batteries included, which is generally a Good Thing™ but can be unfortunate if you want to understand the exact composition of your system (which you should, if you want to be a good sysadmin). In particular, Ubuntu and Debian are very generous when installing optional things (not helped by the fact that installing Recommends is turned on by default in the APT configuration). It gets to the point where the GNOME metapackage in Debian depends (not recommends - depends) on the AdBlock Plus XUL extension. What?? Finally, I just like Arch better than Ubuntu. pacman vs. apt-get, apt-cache, apt-mark, apt-cdrom, apt-<5 other things here>, anyone?
    • LVM (see above) is much easier to set up with a fresh install.
    • Services operation will not be impacted. Anything that works on the server now will work on the new server. Primarily, this means mail and SSH access. I'll also ensure that a lot of currently-installed packages are still available (for example, Emacs). If you encounter something that you could do before and can't on the new server, I will consider it a configuration bug and fix it.
    • Note that the two exceptions to this are /home and /etc.
      • /home I will transfer over for obvious reasons: I don't want you to lose data. That said, be cautious, because configuration formats may change if I move to Arch.
      • /etc is version-controlled with etckeeper, so I'll just add a remote and git push (see the sketch after this list), though I may take the opportunity to do some pruning.
  • I will overwrite the current server setup with an installation of Plan 9 from Bell Labs, and I will set up that installation as a private 9P server.
    • The new server will be set up to forward all incoming traffic directed at 9p.strugee.net to the Plan 9 server (a possible mechanism is sketched after this list).
    • The Plan 9 server will run a Fossil filesystem backed by Venti, allowing rewinds, etc.
    • If you have an account on the main server, you will have an account on the Plan 9 server (I'll either set up a script to make this happen or just create each user by hand on both machines).
  • Note: this means downtime.
    • Most likely the switchover will happen in the coming weeks or even months. The downtime itself shouldn't take long, since I'll basically just be swapping out machines (the new server will have been configured while the old one was still running), but be aware, just in case it's extended.
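For the X11 forwarding item above, here's roughly what the setup looks like - a minimal sketch, with the hostname as a placeholder:
# on the server, in /etc/ssh/sshd_config:
X11Forwarding yes
# then restart sshd and, from your machine (hostname is a placeholder):
ssh -X alex@alex-ubuntu-server.example
xterm    # any graphical program now displays on your local screen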
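And here's the sort of thing the LVM item is about: growing storage with no downtime. A sketch with made-up volume group names and sizes:
# carve a logical volume for /home out of the volume group (names made up)
sudo lvcreate -L 50G -n home vg0
sudo mkfs.ext4 /dev/vg0/home
# later, when more space is needed - while the filesystem stays mounted:
sudo lvextend -L +20G /dev/vg0/home
sudo resize2fs /dev/vg0/home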
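The /etc move, meanwhile, should be about this small, since etckeeper already keeps /etc in git (the remote URL is a placeholder):
cd /etc
sudo git remote add newserver alex@new-server:/etc    # placeholder URL
sudo git push newserver master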
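As for forwarding 9p.strugee.net traffic to the Plan 9 box, I haven't settled on a mechanism yet; the obvious sketch is an iptables DNAT rule (the internal address is made up; 564 is the registered 9P port):
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 564 -j DNAT --to-destination 192.168.1.9
sudo iptables -A FORWARD -d 192.168.1.9 -p tcp --dport 564 -j ACCEPT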
In order to prepare, please rack your brains and figure out whether you have any files outside your home folder. If you do, please either move them into your home folder or make backups. (There's a find one-liner below that can help.)
If you lose data, I will be able to recover it, but I don't relish the thought, as I'll probably have to mess around with loops and mounts and stuff (see the second paragraph). Assume that there will be no backups.
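Here's the find one-liner mentioned above; it lists files you own that live outside /home (skipping the virtual filesystems - adjust the pruned paths to taste):
sudo find / -path /home -prune -o -path /proc -prune -o -path /sys -prune -o -user $USER -print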

I FIXED EVERYTHING YESS

I forgot to blog yesterday! So this is for two days. Also, I'm back in Seattle as of about 5 hours ago.
I've spent the last two days mostly working with Debian, although two days ago, I was out for most of the day listening to music at the music festival that Mom was attending in Port Townsend.
So here's what I've done: I made Debian work! I realized that we actually did have an Ethernet cable in the house, so I plugged it in, because Ethernet cables are more likely to Just Work(tm), and sure enough, I got internet, which was enough to download stuff.
However, realizing this, I ended up reinstalling with the Ethernet cable plugged in, to do it The Right Way. I shaved off a ton of time by not randomizing my crypto disks again, since that had already been done on the previous pass, and I saved another ton of time by not downloading GNOME. Of course, I still wanted GNOME, so I downloaded gdisk and used it to find the exact boundaries of the partitions I'd created in my VM. Then I used losetup to create a loopback device for each partition, and finally mounted those loopbacks as filesystems. Then I just ran "cp /media/virtual/var/cache/apt/archives/*.deb /var/cache/apt/archives/", and presto! A much more populated APT cache. Then I installed GNOME. And then zsh, awesome, etc.
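For the curious, the loopback dance looked roughly like this - a sketch, assuming gdisk reported the partition starting at sector 2048 of the raw image:
# offset is start sector times the 512-byte sector size
sudo losetup -o $((2048 * 512)) /dev/loop0 debian.img
sudo mount /dev/loop0 /media/virtual
sudo cp /media/virtual/var/cache/apt/archives/*.deb /var/cache/apt/archives/
sudo umount /media/virtual && sudo losetup -d /dev/loop0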
I thought that NetworkManager would solve my wireless problems, but it didn't. The solution turned out to be simple, though: upgrade from WEP to WPA2 (although this did require a router firmware upgrade). I also installed a better driver for Apple's trackpad, which makes basically everything but right-click Just Work(tm). The only thing that isn't working is 3D acceleration, for which the solution is to install firmware-linux-nonfree. Unfortunately, installing that hung my initial ramdisk while it waited for /dev to populate, so I had to chroot in, get rid of the package, and overwrite the dirty ramdisk image. I spent a long time working on it before I found that information - I even tinkered with xorg.conf.d (which I had to do for the trackpad driver too, IIRC) - before finally finding the "solution" that then hung my system. At that point I realized that even though GNOME Shell had kicked me into fallback mode with the mesa driver, the gears GLX demo still worked (so clearly things weren't completely broken), and I wasn't going to be using GNOME Shell anyway. Then I went and configured Awesome GNOME.
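The chroot repair went something like this (a sketch - the root partition's device name is an assumption):
sudo mount /dev/sda2 /mnt                  # the installed root filesystem
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
apt-get remove firmware-linux-nonfree      # get rid of the offending package
update-initramfs -u                        # overwrite the dirty ramdisk image
exit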
There are big changes coming to my server, and there's been a lot of downtime recently, but I'll blog about that tomorrow.
Back to reading Plan 9 papers.

Saturday, July 27, 2013

I've given up again

Today I continued my attempt to get Debian to work on a real partition.
I got the firmware working by manually installing it instead of using the Debian package. However, because of the unfortunate limitations of the Debian "essential" environment, I couldn't actually connect to a network. That didn't stop me from trying for hours on end, though. Ugh.
The sad thing is that I could easily fix this with a better Live CD, but the Debian netinst environment just doesn't cut it.
Finally, I got so frustrated that I had to stop and go play Kentucky Route Zero, which I bought a couple of days ago. It's freakin amazing. You should go check it out right now.

Friday, July 26, 2013

irssi proxy sux!

Today I switched to ZNC, a real IRC bouncer. New opinion: ZNC rules, irssi proxy sux! Plus, now I don't have to worry about screen problems.
I spent a while looking into Diaspora* and Tor again. They're both amazing projects; you should go check them out.
Finally, I installed Debian on a real partition (which is apparently possible, if only barely, even from a netinst CD) in preparation for migrating from the VM. I still have to wrestle wireless into working, though, so we'll see how that goes. I spent a good portion of the installation procedure waiting for it to randomize disk space, because I set up an encrypted /home and swap. Pretty boring.

(Note: I cannot be bothered to properly link to things in this post. Just Google them, OK?)

Thursday, July 25, 2013

Debian week, day 5 (I think)

Done today: made it about a third of the way through the Debian Policy Manual. I also spent a lot of time waiting: waiting for my incremental GPG-over-Tor key refresh tool (I forget exactly what it's called - parcimonie, I think) to build dependencies, waiting for bitcoind to sync up with the network (it's still going and has, AFAICT, at least 5 more hours to go), and waiting for my Debian VirtualBox hard drive to convert into raw format in preparation for the move to a real partition. I spent a lot of that time reading the Debian Policy Manual, but I also spent a lot of it on Freenode in #archlinux.
Hopefully tomorrow I can make the switch to a real partition!
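The conversion itself is one command, for the record (the disk filenames here are assumptions):
VBoxManage clonehd debian.vdi debian.img --format RAW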

Wednesday, July 24, 2013

GPG and Sid

I want to go to bed but here's a quick update on what I've done today:
I have moved past steps 1 and 2 on my checklist. That is, I have installed Debian and upgraded to Sid. I spent a long time waiting for things to download and did various things during that time (like browsing Unix & Linux). I spent a fair amount of time getting the feel of my new Debian system (although I already had some experience from Ubuntu) and customizing it to my liking. I still have a bit more to do, notably installing the awesome window manager (I couldn't do that during the day because build-essential and the Sid updates were downloading and holding the APT lock). But overall I'm pretty satisfied. I've got my Emacs, I've got my Firefox, and soon I'll have my awesome - what more could you want?
Anyway, the second thing that I've done today is I've generated a GPG key for myself to use. It's GPG key 0xA8DA10C057F65FA7, with the fingerprint B105 3164 B6C8 F4F7 C2B4 356F A8DA 10C0 57F6 5FA7. I have uploaded it to keys.gnupg.org and keyserver.ubuntu.com. You can also find this information on strugee.net.
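For reference, generating and publishing a key basically boils down to this (a sketch; my exact invocations may have differed):
gpg --gen-key                                          # interactive; pick key type and size
gpg --keyserver keyserver.ubuntu.com --send-keys 0xA8DA10C057F65FA7
gpg --fingerprint 0xA8DA10C057F65FA7                   # double-check the fingerprint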

Tuesday, July 23, 2013

So I eventually gave up on installing Debian on a real partition and began installing it in a VirtualBox VM. My eventual plan is to move the VirtualBox partitions to real partitions, but we'll see.
Anyway, installation is proceeding smoothly, except that I'm waiting for 1402 packages to download on a network with 600ms ping times (which IIRC is super slow - and even if I'm wrong and the ping times are OK, everything else is slow, sooo...). This problem is exacerbated by the fact that I chose the "desktop environment" task in the installer.
So, I have been searching for things to do in the past couple hours. One, I have fixed ALL OF THE DNS PROBLEMS! I've no idea how, but somehow, I made it work.
Also, I started (and then stopped, to not hog bandwidth) downloading the Armory Bitcoin client. Because I (as of about a month ago) do Bitcoin. Yay! I'm also going to do mining on my MacBook soon (I'll join a pool).
Other than that I've just been messing around, mostly with my irssi proxy to add the server that #debian is on.

Monday, July 22, 2013

Debian week, day 2

It's Debian week, day 2! I am still stuck on step 1.
The Internet access is super terrible here, so I can only get some stuff done at certain times. Like blogging. Anyway, I tested whatever I said I'd test last night and it failed. So I have one more thing to try: I have made a full backup of my Arch flash drive and I'm about to wipe it and reformat it as VFAT (roughly the procedure sketched below). Hopefully I can then put the firmware on there and it will finally work, as this is exactly the setup the Debian installer expects. Hopefully.
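The backup-and-reformat procedure amounts to something like this (the device name is an assumption - check yours with lsblk first!):
sudo dd if=/dev/sdb of=arch-flash-backup.img bs=4M    # full image of the flash drive
sudo mkfs.vfat -F 32 /dev/sdb1                        # reformat the partition as VFAT
# then copy the firmware files onto the fresh filesystem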
Also: the Ubuntu Edge is a new phone with desktop-class specs, custom-made by Canonical. Much as I have grown bitter towards Ubuntu and Canonical on a personal level, they, along with the Ubuntu Phone project, are our best bet for making GNU/Linux succeed in the consumer market. And to be perfectly honest, the Ubuntu Edge looks like a really amazing piece of hardware. So I actually chose to back it with $20, and you should too.
Anyway, back to Debian.

Misadventures in firmware- and cabin-land

Today I drove with my mom to Port Townsend, where we have a cabin. I ended up building a fire because Mom was out, which was an interesting experience. It took me four tries but eventually I got it. Yay!
[photo: Try #3]

[photo: Try #3, burning]

Also, I bought the Humble Weekly Sale with Jim Guthrie, because Jim Guthrie is freaking awesome.
So it's now Debian Week, the week where I become a Debian maintainer. Here is my approximate plan:
  1. Install Debian Jessie. Fairly easy, except for the fact that I'm doing it on a MacBook Pro that a. has Apple's moon-man of an EFI implementation and b. has a Broadcom chip that needs firmware.
  2. Upgrade to Debian Sid.
  3. Read Debian Policy Manual.
  4. Write Debian package.
  5. Submit Debian package.
Currently I am stuck on step 1. I've tried putting firmware in a FAT partition on my Mac, on the EFI system partition of my Arch flash drive (which, being the EFI system partition, is FAT), and on the ext4 partition of my Arch flash drive, all for autodetection by the Debian installer. Nothing. Next I tried downloading the firmware installer package from the Debian package archive, but that uses a wget script, which obviously won't work since I can't get wireless and don't have an Ethernet cable. So I extracted the .deb archive, extracted the control files inside it with tar, and modified the script to copy the firmware from /mnt instead of using wget (assuming I've previously mounted the needed partitions manually). Then I rearchived the whole thing back into a new .deb file (roughly the procedure sketched below). However, when I rebooted into the installer recovery environment again, it turned out that the shell doesn't have dpkg. Very frustrating.
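The .deb surgery, sketched out (the package and script names are made up; a .deb is just an ar archive with debian-binary first, then the control and data tarballs):
ar x firmware-installer.deb                 # yields debian-binary, control.tar.gz, data.tar.gz
mkdir ctrl && tar -xzf control.tar.gz -C ctrl
$EDITOR ctrl/postinst                       # swap the wget calls for cp from /mnt; script name is my guess
tar -czf control.tar.gz -C ctrl .
ar rc firmware-modified.deb debian-binary control.tar.gz data.tar.gz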
It now looks like Debian distributes its own firmware bundle, and that may work. I will try that after blogging, but I swear, I'm this close to just burning the unofficial image with the firmware already on the disk. Assuming I can find a CD in the cabin.
I am going to attempt to blog every day that I can this week, and if possible, the rest of the summer. We'll see how it goes.
On an unrelated note, a little while ago I started version-controlling my dotfiles with Git. This sounds ridiculous, but it's actually pretty common - just search for "dots", or even better, "dotfiles", on GitHub; you can find a ton of interesting stuff that way. I have now merged configurations from my server, from my MacBook (just did this today!), from my Arch install on my flash drive (where the initial commit originated), and from my Arch install on my Acer laptop. I've made every file in the repository portable across each of these systems, so I don't do anything funky with branches or the like to differentiate between system-specific configs. For example, at the top of my .zshrc (excerpted below), you can clearly see OS detection that sets the DISTRO environment variable to either "DARWIN" or "ARCH" (because those are the two OSes I use with zsh). Exciting!
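The detection at the top of my .zshrc looks something like this (paraphrased from memory):
# set DISTRO so the rest of the config can branch on OS
case "$(uname -s)" in
  Darwin) export DISTRO=DARWIN ;;
  Linux)  export DISTRO=ARCH ;;   # Linux means one of my Arch installs
esac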

Thursday, July 18, 2013

Remember that time I said we need the V8 of rendering engines?

That would be this time. Anyway, I wanted to write a quick post to state that I lied, because I forgot about Mozilla's Servo project. It's written in the Rust programming language, which is interesting because Rust is designed specifically for writing browser engines. It's also created by Mozilla. So actually, someone already is working on the V8 of rendering engines.

Goings-on

It's summer! Yay!
I've been to Ultimate Camp and the interwebs. And my room. Hmm.
Actually though, last Wednesday through Friday, a couple of people from the SAAS robotics team were prepping for a camp that we're doing for middle schoolers. And this week, we get to actually be counselors for those middle schoolers. It's very exciting and very fun!
Also, I've switched to Arch Linux. Ubuntu just makes me too angry these days, and I no longer recommend it for GNU/Linux newbies (I'm recommending Mint now). Canonical is making more and more proprietary decisions - for example, Unity cannot be used on any distribution besides Ubuntu without serious effort. Or take Mir: Mir fragments the already little-used GNU/Linux desktop, and it doesn't even do anything new. Developers already put GNU/Linux behind Windows and Mac - and now they potentially have to think about two display servers, making the platform look even less attractive. Not only that, but none of the concerns the Mir team had about Wayland hold up - in fact, a Mir developer showed that he in fact knew nothing about how Wayland worked. Canonical's plan is insane: they want to take on the burden of porting all the upstream toolkits themselves (oh, except for old ones like GTK+2 - even though, as we all know, GTK+2 is still in wide use). IMHO, this is crazy. It's a waste of resources. Canonical cannot play with others, and that's extremely frustrating. For example, Canonical thought that their upstream Wayland contributions wouldn't be accepted - they even offered that as a justification for Mir - but they never even tried. That's simply ridiculous, and not only that, it's selfish. As the vendor of the most widely-used GNU/Linux distribution on the planet, Canonical has a responsibility not to do things that screw over the ecosystem. But recently it seems like they're getting Not Invented Here syndrome more and more, and they're willing to do almost anything to indulge it, even at the cost of the rest of the ecosystem. It's saddening.
Anyway, I'm going to stop talking about that because it makes me angry. Other miscellaneous things that I'm doing: I'm planning to fully install and try Gentoo, NetBSD, Linux from Scratch, and finally, Plan 9 from Bell Labs (note that this is the only one that isn't a UNIX).
Yesterday (Tuesday) I attended a LibrePlanet Washington meeting, which was really fun. Among other things I am now into PGP/GPG and will be doing stuff with it soon.
Also, I am thinking of doing dev work on my favorite AUR wrapper, Yaourt. I'm also thinking I might work on grive, since Insync is no longer free (as in free beer).
I also attended GSLUG last Saturday, which was really cool.
I'm also getting into IRC again. I usually hang out in #archlinux, #plan9, #gnome, #gslug and (just recently - we only created it yesterday!) #libreplanet-wa, all on Freenode. Especially cool is the fact that I set up an irssi proxy on my server (which is now on the live internet, although strugee.net is still hosted on GitHub Pages). The only problem is that it interferes with byobu/screen.
Also, I set up Postfix, so mail between local system users is enabled on my server (but external mail @strugee.net is not).
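I believe the relevant main.cf settings amount to something like this (a guess at my own config, frankly - the hostname is a placeholder):
# /etc/postfix/main.cf
inet_interfaces = loopback-only                  # don't accept mail from outside
mydestination = localhost, alex-ubuntu-server    # deliver only for local names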
Anyway, I have to go to bed. There's probably more that I want to talk about, but whatever.
Oh, one last thing: I'm using Emacs now. Yay!

Thursday, July 11, 2013

Firewall configuration on alex-ubuntu-server

So about a month or two ago, in preparation for putting my server live on the internet, I configured my firewall, which was an interesting process that I want to document.
I had previously searched for "firewall" in aptitude and installed the first result, apf-firewall, which gave me a lovely error on service init telling me that I needed to edit /etc/apf-firewall/firewall.conf and set something-or-other to true. Obviously, I mostly ignored said error.
So I went looking for documentation, but it turns out that Ubuntu already comes with a firewall, ufw. Therefore I got rid of apf-firewall. Then I ran sudo ufw enable.
Now, I've read "The Six Dumbest Ideas in Computer Security". And of course, number one is default allow. Luckily, ufw was written by people smart enough to put a default deny policy in place out of the box:
alex@alex-ubuntu-server:~$ sudo ufw status verbose
[sudo] password for alex:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)
New profiles: skip
alex@alex-ubuntu-server:~$
So that was covered. I decided, however, to also institute a default deny policy for outgoing traffic, on the basis of "why not" - meaning I might as well unless it becomes a huge issue. So far, it's actually been OK. An interesting thing that happened on my first pass, though, was that while I had port 80 open, I didn't have port 53 open. So I could have fetched web pages, but I couldn't actually resolve any addresses, which caused connection problems. (The rough set of commands is below.)
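In ufw terms, the whole thing boils down to roughly this (my best reconstruction):
sudo ufw default deny outgoing
sudo ufw allow out 53          # DNS - skipping this is what broke name resolution
sudo ufw allow out 80/tcp      # HTTP
sudo ufw status verbose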
Anyway, the last thing I have to do is figure out ping. It's supposed to work automagically, but it doesn't. So I'll look into that.
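If you're wondering where to look: ufw doesn't manage ICMP through the usual allow/deny commands; the echo-request ACCEPT lines live in /etc/ufw/before.rules. My guess is that with a default deny on outgoing traffic, the outbound side needs an explicit rule too - something like this in before.rules (untested):
# incoming ping is handled by lines like this, which ship with ufw:
-A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT
# my guess: outbound pings need a matching rule in the output chain:
-A ufw-before-output -p icmp --icmp-type echo-request -j ACCEPT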