Wednesday, October 16, 2013

Update on steevie's downtime

So you all probably deserve an update on steevie. That update is this update.
steevie has been down for approximately a month. Here's what happened:
  1. I upgraded steevie.
  2. I rebooted steevie, due to systemd cgroup hierarchy changes.
  3. steevie refused to boot (he failed to mount the root partition).
So basically, here's what's supposed to happen on a normal boot:
  1. GRUB loads.
  2. GRUB loads the Linux kernel.
  3. GRUB loads the initial ramdisk.
  4. LVM, in the initial ramdisk, in userspace, searches for Volume Groups.
  5. LVM creates the device nodes that represent the LVM Logical Volumes in /dev.
  6. systemd mounts (or swapons) the created devices as filesystems: one as /home, one as /, and one as swap.
  7. The initial ramdisk exits, the Linux kernel changes all the mounts to be mounted on the real root, and the system boots.
The problem is that somehow, the system cannot properly complete step 5. This means that the boot process "completes" like this:
  1. Steps 1-4 above complete normally.
  2. LVM tries to create the device nodes. For some reason, this hangs forever.
  3. Eventually, something (possibly systemd, I'm not sure) times out waiting for the device to be created, and kicks you back to a ramdisk shell (which means Busybox).
  4. The shell waits for you to do something to fix the boot attempt.
This is extremely unfortunate. Right now, it's looking like the LVM problem is being caused by a hard drive failure.
You can read all the gory details at this Stack Exchange question, and then this followup, but the tl;dr is that there isn't much I can do. There's still a little more to try, but I don't hold out much hope.
Worst case, I have to completely wipe the drive. Any data in your home directory will be preserved, because there are no problems mounting the /home partition. But if you have any data anywhere else, it will probably be lost. I'll run data recovery tools, of course, but I don't hold out much hope. Unfortunately, this also means that my beautiful README will be lost. :(
I'm not sure what I'll end up doing once the drive is wiped. It's possible I'll use btrfs on the new root, since it seems to be pretty resistant to this kind of stuff (and works at the filesystem level instead of the block level, so it will probably be more effective).
Sorry for the downtime! If you have any questions or any concerns, feel free to reach out to me in the comments or on Twitter (mention either @strugee2 or @strugee_dot_net).

Sunday, August 25, 2013

I'm back, y'all!

So I've been away for a while, doing things in Outside, aka Not The Internet. Scary.
Also, I haven't really had a lot of internet access when I haven't been outside, so I haven't been able to blog or do anything interesting.
But I got back a week ago... and then dived straight into robotics. We've been doing a lot of cool stuff (among other things, I2C bus programming and my favorite revision control system, Git), preparing for the season. I left yesterday for a robotics retreat and got back today, which was awesome.
However, I've had some free time and I've been doing some stuff.
First, I'm ditching Debian (in the words of my mom, "well, that was a short romance"). And here's why. Debian installs a lot of things by default for you. It is graphics-oriented: the default network connection daemon is NetworkManager running in GNOME. It installs a desktop environment at installation. And not only that, but it's way, way, way too liberal with dependencies. When I booted my Debian system, I found the xul-ext-adblock-plus package installed. And when I tried to uninstall it, it also removed the GNOME metapackage due to the AdBlock package being a dependency of GNOME. Not a suggests. Not a recommends. A required dependency. In other words: I couldn't remove the AdBlock Plus extension without removing all of GNOME. The way I eventually solved it? I created an empty package. Someone please explain to me why the hell I had to create a useless, empty package to keep my desktop environment but get rid of a XUL extension. And someone please explain to me what idiot decided that AdBlock Plus should be a part of GNOME and why.
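(For the record, the standard way to conjure an empty package like that is the equivs tool; roughly, with the package name from my situation:)

```shell
# Build an empty .deb that satisfies a dependency without shipping any
# files (assumes the equivs package is installed).
equivs-control xul-ext-adblock-plus   # writes a template control file
# Edit the template: set "Package: xul-ext-adblock-plus" and a Version:
# higher than the real package's, so APT considers it installed.
equivs-build xul-ext-adblock-plus     # produces the empty .deb
sudo dpkg -i xul-ext-adblock-plus_*.deb
```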
That's ridiculous. Not only that, but I don't understand the composition of my system. Sometimes, my WiFi will disconnect and when I go to reconnect, it doesn't show anything until I turn the network card off and on again in the GNOME Control Center. But I can't figure out where to start diagnosing this issue, because I have no idea what's installed on my system and affecting the wireless. Not only that, but Debian patches things so. Freakin. Much. I hate that.
My GDM has a Debian background. I don't want a Debian background, but that's too bad because some Debian developer has helpfully added branding. I have a Debian menu in my Awesome menu (with a couple of screensaver options that don't work anymore, no less, due to GNOME Screensaver getting merged into gnome-shell or some shtick like that). I don't want a Debian menu in my Awesome menu, I just want Awesome. But ooooh noo, the Debian menu is "helpful", so someone added it. Even if I figured out the things that were affecting my wireless, I still wouldn't understand the whole picture, because the upstream documentation doesn't cut it. I'd also have to go look at Debian's documentation to see what ridiculous things they've added or changed.
Plus, despite the fact that I'm on Debian Sid - the unstable branch that's supposed to be more like a rolling distro because it's the development branch, where updated packages land first - I still get moldy packages. Even though Sid is where new things land, they're still developing for a non-rolling distro. So even though Emacs 24 is in the package pool, and has been for about a year (since I remember seeing it back when I used Ubuntu), I still get Emacs 23 when I install the Emacs package, because Debian isn't ready to move to 24 on stable, and unstable is ultimately going to become stable. And it's not just Emacs. The other program I use every day - my web browser - is also moldy, because it turns out that Debian ships the Firefox/Iceweasel ESR releases instead of the regular releases. So I had the dubious pleasure of pulling a newer package from Debian Experimental. I mean, seriously. The mold is clear. In the words of the Linux Action Show, when I'm using Arch, I feel closer to upstream.
In the end, Debian is not KISS. So I'm leaving it for Arch.
Edit: Debian also uses SysV init, which is old and bugs me, especially since I've grown up on the relative speed and feeling of cleanness of Upstart (from back when I used Ubuntu), and now the awesomeness that is systemd on Arch. It's possible to install systemd (or Upstart) in Debian but it's impossible to effectively replace the init system, because the SysV init package is marked as essential, which means it gets automagically reinstalled when you do a system upgrade. Or you could patch the GRUB files, which I don't want to do. (In short, SysV init bugs me and it bugs me that I can half-switch to systemd, but not really).
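(The GRUB route, for reference, is just a kernel parameter; in /etc/default/grub it would look something like this - the exact binary path varies between Debian versions, so treat it as a sketch:)

```shell
# /etc/default/grub - boot systemd as PID 1 without removing the
# essential sysvinit package (path may be /bin/systemd on older Debian):
GRUB_CMDLINE_LINUX="init=/lib/systemd/systemd"
# then regenerate grub.cfg with: sudo update-grub
```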
Update about steevie: X11 forwarding has been theoretically turned on, but my cursory attempt to launch gedit failed. I think I had some client-side things configured wrong, so I'm not sure if it actually works.
Also, files will be served from ~/public_html automagically by Apache. They'll show up under people.strugee.net/~[your username]/ - just make sure that the folder is readable by the httpd user. Details in the README (although I think there are currently some half-written parts).
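If you're unsure about the "readable by the httpd user" part, something like this should cover it (standard userdir permissions; adjust to taste):

```shell
# Let Apache (running as an unprivileged user) serve ~/public_html.
# It needs search (x) permission on your home directory to get in, and
# read permission on the files themselves.
mkdir -p ~/public_html
chmod o+x ~                     # allow traversal into your home dir
chmod -R o+rX ~/public_html     # world-readable files, searchable dirs
echo '<h1>hello</h1>' > ~/public_html/index.html
```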

Saturday, August 3, 2013

Traveling

I've been traveling all day today and yesterday, so I haven't been able to blog.
Also, I'm going somewhere (rural Michigan) with little to no internet, so blogging will remain sporadic. Blech!

Friday, August 2, 2013

Update on the new server

tl;dr, here is what's done:
  • SSH (kind of)
  • LVM
I haven't had a lot of time to do server stuff for today and yesterday, because I've been hanging out with people IRL *gasp*
However, the new server lives, albeit weirdly. Yesterday I spent a lot of time trying to fix the filesystem on the server before finally giving up and just making a tarball. So that took up like 6 hours of just waiting. Ugh! However then, as I said, I made a tarball and backed it up, and then proceeded to install Arch Linux. Funny story: I had to bring the server into my bathroom because it is the only room in the house that a. provides grounded sockets and b. is reachable with an Ethernet cable from the router (since the new server doesn't have a WiFi card), which I needed because Arch is a netinst distro these days. Then I had to go to bed. However, since LVM is part of the install process, I got that done.
Today I had very little time as I've been packing for a trip tomorrow. Therefore, I wasn't able to get a perfect setup, but it is workable for remote administration (so I can get most stuff done while traveling). The major flaw that you will notice in the current configuration is that if you have an existing account, you will end up back in alex-ubuntu-server. This is because something is wrong with my router and it is still forwarding connections to alex-ubuntu-server (which is still plugged in via Ethernet to allow for remote file migration). Therefore, if you previously had an account on alex-ubuntu-server, you will need to ssh to 192.168.0.19 from the Ubuntu console. Then you'll end up at steevie (which is the new server's hostname, btw).
Note: if you have a new account, you don't have to worry about this. I've put together some hackery on alex-ubuntu-server to allow you to login to steevie automagically. The only difference is you will have to type a very bad, very weak password that doesn't matter before you type your real password.
Other things will be done or turned on in the coming days, e.g. X11 forwarding, mail, etc.
9P will not be turned on, because I will need physical access to install Plan 9 and to reconfigure the router again. Anything external won't be turned on properly because, again, I'll need to reconfigure the router. For example, internal mail will be turned on but SMTP won't.
Anyway, I have to go pack.

Monday, July 29, 2013

Upcoming changes to alex-ubuntu-server

Recently a friend offered me a new server with much better specs than the 15+-year-old computer that I use now. It has 4GB of RAM (compared with the 256 MB that the current server has), and it has a dual-core AMD processor running at 2800MHz. I'm not sure what the processor specs are for the current server, but honestly, I'm sure they're just as crappy as the RAM.

Getting this new server will open up a lot of possibilities, so here's some important changes that are coming to the server, if you are the one person that uses it.
  • X11 forwarding will be installed and turned on for SSH connections
    • This means that if you have an account (i.e. you're able to SSH into the current server), you'll be able to remotely log in to a graphical environment. That means you can e.g. carry your graphical application settings around with you (or at least, it will seem like that; in reality you'll be loading them from my server, which will require internet access).
    • It's unknown if I will offer GNOME. I will be open to any lightweight window manager such as awesome, Openbox, Fluxbox, twm, etc., without further thought. However, I will have to experiment with what system load looks like with GNOME installed. Therefore, I'll start with GNOME, but you should be aware that GNOME could eventually be removed again.
  • LVM will be turned on and partitions will be reconfigured
    • This won't affect you in any measurable way if you use the server. It just means that if there's ever a need for more storage, there won't have to be server downtime in order to install and use it. If you don't know what LVM is, read the Wikipedia article on it.
    • /home will become a separate partition. This is mostly to allow for easier backups (currently there is zero backup policy) and easier transitions in the event of another server move.
  • There will be a fresh installation. I will not just be dding or rsyncing files over to the new install.
    • There are several reasons for this. The first and foremost is that I installed and set up this server a couple of years ago, back when I was around 11 or 12, and thus didn't know exactly what I was doing, and I didn't have a very good idea of how to be a sysadmin. Because of this I didn't really keep a record of changes that I'd made, and thus, I don't know exactly how the system is structured and cannot effectively perform changes or diagnostics (because I don't know how changes would affect the system).
    • I may or may not transition to Arch Linux as the distribution of choice for my server, and this requires a reinstall. At first blush this may seem like a bad idea, since Arch is rolling and you need stability for a server (this is why Debian and Debian derivatives are so good for servers - they're stable and don't change often). However, it's worth noting that with Arch, you can deal with problems as they come along, instead of all at once every 6 months. This is actually pretty useful, because you can tell exactly which package changes may have broken something, instead of 5-10 things potentially breaking all at once. In short, problems are isolated. Note that if I do run Arch on my server, I will of course do my utmost to maximize stability - for example, I'll use an LTS kernel instead of the latest. Another reason that I'm thinking of Arch is that it makes it easy for me to understand exactly what's going on. Ubuntu and Debian both come with batteries included, which is generally a Good Thing™ but can be unfortunate if you want to understand the exact composition of your system (which you should if you want to be a good sysadmin). In particular, Ubuntu and Debian are very generous when installing optional things (not helped by the fact that installing Recommends is turned on by default in the APT configuration). It gets to the point where the GNOME metapackage in Debian depends (not recommends - depends) on the AdBlock Plus XUL extension. What?? Finally, I just like Arch better than Ubuntu. pacman vs. apt-get, apt-cache, apt-mark, apt-cdrom, apt-<5 other things here>, anyone?
    • LVM (see above) is much easier to set up with a fresh install.
    • Services operation will not be impacted. Anything that works on the server now will work in the new server. Primarily, this means mail and SSH access. I'll also ensure that a lot of currently-installed packages are still available (for example Emacs). If you encounter something that you could do before and can't with the new server, I will consider it a configuration bug and will fix it.
    • Note that the two exceptions to this are /home and /etc.
      • /home I will transfer over for obvious reasons: I don't want you to lose data. That being said, be cautious because configuration formats may change if I move to Arch.
      • /etc is version-controlled with etckeeper. Therefore I'll just add a remote and git push, but I may take the opportunity to do some pruning.
  • I will overwrite the current server setup with an installation of Plan 9 From Bell Labs, and I will set up that installation to be a private 9P server.
    • The new server will be set up to forward all incoming traffic directed towards 9p.strugee.net to the new Plan 9 server.
    • The Plan 9 server will run a Fossil filesystem backed by Venti, allowing rewinds, etc.
    • If you have an account on the main server you will have an account on the Plan 9 server (I'll either set up a script to make this happen or I'll just go into each server and create a new user twice).
  • Note: this means downtime.
    • Most likely this will happen in the coming weeks or even months. It won't take that long, especially because I'll basically need to swap out machines (I'll have configured the new server while the old server was running), but just in case of extended downtime, be aware.
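Incidentally, the LVM setup from the list above is only a handful of commands at install time. A rough sketch (device name and sizes are invented, not the real layout):

```shell
# One physical volume, one volume group, three logical volumes:
pvcreate /dev/sda2                 # mark the partition as an LVM PV
vgcreate vg0 /dev/sda2             # build a volume group on it
lvcreate -L 20G -n root vg0        # root filesystem
lvcreate -L 2G  -n swap vg0        # swap
lvcreate -l 100%FREE -n home vg0   # /home gets the rest
mkfs.ext4 /dev/vg0/root
mkfs.ext4 /dev/vg0/home
mkswap /dev/vg0/swap
```

Growing a filesystem later is then just lvextend plus a resize, with no downtime - which is the whole point.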
In order to prepare, please rack your brains to figure out if you have any files outside your home folder. If you do, please either move them to your home folder or make backups.
If you lose data, I will be able to recover it, but I don't relish the thought, as I'll probably have to mess around with loops and mounts and stuff (see the second paragraph). Assume that there will be no backups.

I FIXED EVERYTHING YESS

I forgot to blog yesterday! So this is for two days. Also, I'm back in Seattle as of about 5 hours ago.
I've spent the last two days mostly working with Debian, although two days ago, I was out for most of the day listening to music at the music festival that Mom was attending in Port Townsend.
So here's what I've done: I made Debian work! I realized that we actually did have an Ethernet cable in the house, so I plugged it in, because Ethernet cables are more likely to Just Work(tm), and sure enough, I got internet, which was enough to download stuff.
However, realizing this, I ended up reinstalling with the Ethernet cable to do it The Right Way. I was able to shave off a ton of time by not randomizing my crypto disks again, because that'd already been done on the previous pass, and I saved a ton of time by not downloading GNOME. Of course I still wanted GNOME, so I downloaded gdisk and used it to find the exact boundaries of the partitions I'd created in my VM. Then I was able to use losetup to create a loopback device for each partition, and finally, mount those loopbacks as filesystems. Then I just ran "cp /media/virtual/var/cache/apt/archives/*.deb /var/cache/apt/archives/", and presto! Much more populated APT cache. Then I installed GNOME. And then zsh, awesome, etc.
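The gdisk-and-losetup dance, for the curious, boils down to multiplying the partition's start sector by the sector size and handing losetup the byte offset (numbers here are made up):

```shell
# Mount one partition out of a raw disk image. `gdisk -l debian.img`
# (or fdisk -l) prints each partition's start sector; multiply by the
# sector size to get the byte offset losetup wants.
START_SECTOR=206848
SECTOR_SIZE=512
OFFSET=$((START_SECTOR * SECTOR_SIZE))   # 105906176
sudo losetup -o "$OFFSET" /dev/loop0 debian.img
sudo mount /dev/loop0 /media/virtual
```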
I thought that NetworkManager would solve my wireless problems, but it didn't. The solution turned out to be simple, though: upgrade from WEP to WPA2 (although this did require a router firmware upgrade). I installed a better driver for Apple's trackpad, which basically makes everything but right-click Just Work(tm). The only thing that isn't working is 3D acceleration, for which the solution is to install firmware-linux-nonfree. Unfortunately, installing that hung my initial ramdisk while waiting for /dev to populate, so I had to chroot and get rid of it, then overwrite the dirty ramdisk image. I spent a long time working on this before I found that information, and even tinkered with xorg.conf.d (I had to do this for the trackpad driver too, IIRC), before finally finding the solution, which then hung my system. At that point I realized that even though GNOME Shell kicked me into fallback mode with the mesa driver, the gears GLX demo still worked (so clearly it wasn't completely broken), and I wasn't going to be using GNOME Shell anyway. Then I went and configured Awesome GNOME.
There are big changes coming to my server, and there's been a lot of downtime recently, but I'll blog about that tomorrow.
Back to reading Plan 9 papers.

Saturday, July 27, 2013

I've given up again

Today I continued my attempt to get Debian to work on a real partition.
I got the firmware working by manually installing it instead of using the Debian package. However, because of the unfortunate limitations of the Debian "essential" environment, I couldn't actually connect to a network. That didn't stop me from trying for hours on end, though. Ugh.
The sad thing is that I could easily fix this with a better Live CD, but the Debian netinst environment just doesn't cut it.
Finally, I got so frustrated that I had to stop and go play Kentucky Route Zero, which I bought a couple of days ago. It's freakin amazing. You should go check it out right now.

Friday, July 26, 2013

irssi proxy sux!

Today I switched to ZNC, a real IRC bouncer. New opinion: ZNC rules, irssi proxy sux! Plus, now I don't have to worry about screen problems.
I spent a while looking into Diaspora* and Tor again. They're both amazing projects; you should go check them out.
Finally, I installed Debian on a real partition (which is apparently minimally possible even on a netinst CD) in preparation for migrating from the VM. I still have to wrestle wireless into working, though, so we'll see how that goes. I spent a good portion of the installation procedure waiting for it to randomize disk space, because I set up an encrypted /home and swap. Pretty boring.

(Note: I cannot be bothered to properly link to things in this post. Just Google them, OK?)

Thursday, July 25, 2013

Debian week, day 5 (I think)

Done today: made it about a third of the way through the Debian Policy Manual. I also spent a lot of time waiting: waiting for my GPG incremental Tor key refresh tool (I forget exactly what it's called; parcimonie, I think) to build dependencies, waiting for bitcoind to sync up with the network (it's still going and has, AFAICT, at least 5 more hours to go), and waiting for my Debian VirtualBox hard drive to convert into a raw format in preparation for the move to a partition. I spent a lot of that time reading the Debian Policy Manual, but I also spent a lot of it on Freenode in #archlinux.
Hopefully tomorrow I can make the switch to a real partition!

Wednesday, July 24, 2013

GPG and Sid

I want to go to bed but here's a quick update on what I've done today:
I have moved past steps 1 and 2 on my checklist. That is, I have installed Debian and upgraded to Sid. I spent a long time waiting for things to download, and I did various things during that time (like browsing Unix & Linux). I spent a fairly long amount of time getting the feel of my new Debian system (although I already had some experience from Ubuntu) and customizing it to my liking. I still have a bit more to do, notably installing the awesome window manager (I couldn't do that during the day because build-essential and the Sid updates were downloading and locking the APT cache). But overall I'm pretty satisfied. I've got my Emacs, I've got my Firefox, and soon I'll have my awesome - what more could you want?
Anyway, the second thing that I've done today is I've generated a GPG key for myself to use. It's GPG key 0xA8DA10C057F65FA7, with the fingerprint B105 3164 B6C8 F4F7 C2B4 356F A8DA 10C0 57F6 5FA7. I have uploaded it to keys.gnupg.org and keyserver.ubuntu.com. You can also find this information on strugee.net.
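Publishing the key is just a couple of gpg invocations (the key ID is the one above; keyservers as I used them):

```shell
# Verify the key's fingerprint locally, then push the public key to
# the keyservers (they sync with each other eventually).
gpg --fingerprint 0xA8DA10C057F65FA7
gpg --keyserver keys.gnupg.org       --send-keys 0xA8DA10C057F65FA7
gpg --keyserver keyserver.ubuntu.com --send-keys 0xA8DA10C057F65FA7
```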

Tuesday, July 23, 2013

So I eventually gave up on installing Debian on a real partition and began installing it in a VirtualBox. My eventual plan is to move the VirtualBox partitions to real partitions, but we'll see.
Anyway, installation is proceeding smoothly except for the fact that I'm waiting for 1402 packages to download on a network with 600 ms ping times (which IIRC is super slow - even if I'm wrong and the ping times are OK, everything else is slow, sooo...). This problem is exacerbated by the fact that I chose the "desktop environment" task in the installer.
So, I have been searching for things to do in the past couple hours. One, I have fixed ALL OF THE DNS PROBLEMS! I've no idea how, but somehow, I made it work.
Also, I started (and then stopped, to not hog bandwidth) downloading the Armory Bitcoin client. Because I (as of about a month ago) do Bitcoin. Yay! I'm also going to do mining on my MacBook soon (I'll join a pool).
Other than that I've just been messing around, mostly with my irssi proxy to add the server that #debian is on.

Monday, July 22, 2013

Debian week, day 2

It's Debian week, day 2! I am still stuck on step 1.
The Internet access is super terrible here, so I can only get some stuff done at certain times. Like blogging. Anyway, I tested whatever I said I'd do last night and it failed. So I have one more thing to try: I have made a full backup of my Arch flash drive and I'm about to wipe it and reformat as VFAT. Hopefully I can then put the firmware on there and it will finally work, as this is an exact setup that the Debian installer expects. Hopefully.
Also: the Ubuntu Edge is a new phone that has desktop system specs, custom-made by Canonical. Much as I have grown bitter towards Ubuntu and Canonical on a personal level, they, along with the Ubuntu Phone project, are our best bet for making GNU/Linux succeed in the consumer market. And to be perfectly honest, the Ubuntu Edge looks like a really amazing piece of hardware. Therefore I actually chose to back it with $20, and you should too.
Anyway, back to Debian.

Misadventures in firmware- and cabin-land

Today I drove with my mom to Port Townsend, where we have a cabin. I ended up building a fire because Mom was out, which was an interesting experience. It took me four tries but eventually I got it. Yay!
Try #3

Try #3, burning

Also, I bought the Humble Weekly Sale with Jim Guthrie, because Jim Guthrie is freaking awesome.
So it's now Debian Week, the week where I become a Debian maintainer. Here is my approximate plan:
  1. Install Debian Jessie. Fairly easy except for the fact that I'm doing it on a MacBook Pro that a. has Apple's moon-man of an EFI implementation and b. has a Broadcom chip that needs firmware.
  2. Upgrade to Debian Sid.
  3. Read Debian Policy Manual.
  4. Write Debian package.
  5. Submit Debian package.
Currently I am stuck on step 1. I've tried to put firmware in a FAT partition on my Mac, on the EFI system partition on my Arch flash drive (which, because it is the EFI system partition, is FAT), on the ext4 partition on my Arch flash drive, all for autodetection by the Debian installer. Nothing. Next I tried downloading the installer from the Debian package archive, but that uses a wget script that obviously won't work, since I can't get wireless and don't have an Ethernet cable. Therefore I extracted the .deb archive, extracted the control files in it with tar, and modified the script to copy the firmware from /mnt instead of using wget (this is assuming that I've previously mounted the needed partitions manually). Then I rearchived the whole thing back into a new .deb file. However, when I rebooted into the installer recovery environment again, it turns out that the shell doesn't have dpkg. Very frustrating.
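(On a full system the unpack-edit-repack cycle is two dpkg-deb calls, for what it's worth - the ar/tar route does the same thing by hand. The filename here is invented:)

```shell
# Unpack a .deb including its control files, edit, and repack.
dpkg-deb -R firmware-installer.deb work/   # data + DEBIAN/ control files
# ...edit work/DEBIAN/postinst, or wherever the wget script lives...
dpkg-deb -b work/ firmware-installer-patched.deb
```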
It now looks like Debian distributes its own firmware bundle, and that may work. I will try that after blogging, but I swear, I'm this close to just burning the unofficial image with the firmware already on the disk. Assuming I can find a CD in the cabin.
I am going to attempt to blog every day that I can this week, and if possible, the rest of the summer. We'll see how it goes.
On an unrelated note, a little while ago I started version-controlling my dotfiles with Git. This seems ridiculous but it's actually pretty common - just search for "dots", or even better, "dotfiles" on GitHub. You can find a ton of interesting stuff that way. I have now merged configurations from my server, from my MacBook (just did this today!), my Arch install on my flash drive (from which the initial commit originated), and my Arch install on my ACER laptop. I have made every file in the repository portable across each of these systems, so I don't do anything funky with branches, or anything like that, to differentiate between system-specific configs. For example, at the top of my .zshrc, you can clearly see OS detection that sets the DISTRO environment variable to either "DARWIN" or "ARCH" (because those are the two that I use with zsh). Exciting!
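The detection itself is nothing fancy - a case on uname, something like:

```shell
# OS detection near the top of a shared .zshrc (plain-sh compatible).
# Collapsing all of Linux to "ARCH" is fine when Arch is the only
# Linux these dotfiles ever see.
case "$(uname -s)" in
    Darwin) DISTRO=DARWIN ;;
    Linux)  DISTRO=ARCH ;;
    *)      DISTRO=UNKNOWN ;;
esac
export DISTRO
```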

Thursday, July 18, 2013

Remember that time I said we need the V8 of rendering engines?

That would be this time. Anyway, I wanted to write a quick post to state that I lied, because I forgot about Mozilla's project Servo. It's written in the Rust programming language, which is interesting because it [Rust] is designed specifically for writing browser engines. It's also created by Mozilla. So actually, someone already is working on the V8 of rendering engines.

Goings-on

It's summer! Yay!
I've been to Ultimate Camp and the interwebs. And my room. Hmm.
Actually though, from last Wednesday to last Friday, a couple of people from the SAAS robotics team have been prepping for a camp that we're doing for middle schoolers next week. And this week, we get to actually be counselors for middle schoolers. It's very exciting and very fun!
Also, I've switched to Arch Linux. Ubuntu just makes me too angry these days, and I no longer recommend it for GNU/Linux newbies (I'm recommending Mint now). Canonical is making more and more proprietary decisions - for example, Unity cannot be used on any distribution besides Ubuntu without serious effort. Or take Mir - Mir fragments the already little-used GNU/Linux desktop, and it doesn't even do anything new. Developers already put GNU/Linux behind Windows and Mac - and now they potentially have to think about two display servers, meaning that the platform will look even less attractive. Not only that, but none of the concerns that the Mir team had about Wayland hold up - in fact, a Mir developer showed that he knew nothing about how Wayland worked. Canonical's insane - they want to take on the burden of porting all the upstream toolkits themselves (oh, except for old ones like GTK+2 - but as we all know, GTK+2 is still in wide use). IMHO, this is crazy. It's a waste of resources. Canonical cannot play with others, and that's extremely frustrating. For example, Canonical thought that their upstream Wayland contributions wouldn't be accepted. They even offered that as a justification for Mir. But they never even tried. That's simply ridiculous, and not only that, but it's selfish. As the vendor of the most widely-used GNU/Linux distribution on the planet, Canonical has a responsibility not to do things that screw over the ecosystem. But recently it seems like they're getting Not Invented Here syndrome more and more, and they're willing to do almost anything to indulge that feeling, even at the cost of the rest of the ecosystem. It's saddening.
Anyway, I'm going to stop talking about that because it makes me angry. Other miscellaneous things that I'm doing: I'm planning to fully install and try Gentoo, NetBSD, Linux from Scratch, and finally, Plan 9 from Bell Labs (note that this is the only one that isn't a UNIX).
Yesterday (Tuesday) I attended a LibrePlanet Washington meeting, which was really fun. Among other things I am now into PGP/GPG and will be doing stuff with it soon.
Also, I am thinking of doing dev work on my favorite AUR wrapper, Yaourt. I'm also thinking I might work on grive, since Insync is no longer free (as in free beer).
I also attended GSLUG last Saturday, which was really cool.
I'm also getting into IRC again. I usually hang out in #archlinux, #plan9, #gnome, #gslug and (just recently - we only created it yesterday!) #libreplanet-wa, all on Freenode. Especially cool is the fact that I set up an irssi proxy on my server (which is now on the live internet, although strugee.net is still hosted on GitHub Pages). The only problem is that it interferes with byobu/screen.
Also, I set up Postfix, so mail between local system users is enabled on my server (but external mail @strugee.net is not).
Anyway, I have to go to bed. There's probably more that I want to talk about, but whatever.
Oh, one last thing: I'm using Emacs now. Yay!

Thursday, July 11, 2013

Firewall configuration on alex-ubuntu-server

So about a month or two ago, in preparation for putting my server live on the internet, I configured my firewall, which was an interesting process that I want to document.
I had previously searched for "firewall" in aptitude and installed the first result, which gave me a lovely error on service init telling me that I needed to edit /etc/apf-firewall/firewall.conf, and set something-or-other to true. Obviously I generally ignored said error.
So I went looking for documentation but it turns out that Ubuntu already comes with a firewall. Therefore I got rid of apf-firewall. Then I ran sudo ufw enable.
Now, I've read the six dumbest ideas in computer security. And of course, number one is default allow. Luckily, ufw was written by people smart enough to put a default deny policy in place by default:
alex@alex-ubuntu-server:~$ sudo ufw status verbose
[sudo] password for alex:
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing)
New profiles: skip
alex@alex-ubuntu-server:~$
So that was covered. I decided, however, to also institute a default deny policy for outgoing traffic, on the basis of "why not" - meaning that I might as well unless it became a huge issue. So far, it's actually been fine. An interesting thing that happened on my first pass, though, was that while I had port 80 open, I didn't have port 53 open. So I could talk to web servers, but I couldn't resolve any hostnames, which broke pretty much every connection anyway.
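For the record, the whole policy comes down to a handful of ufw commands. A sketch of roughly what I mean - the exact set of allowed ports depends on what the server runs; port 22 is assumed here for SSH:

```
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow 22/tcp        # inbound SSH, so you don't lock yourself out
sudo ufw allow out 53        # outbound DNS - without this, nothing resolves
sudo ufw allow out 80/tcp    # outbound HTTP
sudo ufw enable
```

Note that the `allow out` rules only exist because of the default deny on outgoing; with the stock allow-outgoing policy they'd be unnecessary.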
Anyway, the last thing I have to do is figure out ping. It's supposed to work automagically, but it doesn't. So I'll look at that.
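From what I've read, the reason ping doesn't follow the usual ufw allow rules is that ufw handles ICMP in /etc/ufw/before.rules rather than through the command-line interface. With a default-deny outgoing policy, outbound echo requests need a rule in the output chain - something like the following sketch (the chain name matches what stock Ubuntu ships; I haven't verified this fixes my particular setup):

```
# /etc/ufw/before.rules, inside the *filter section:
# allow outbound ping; replies come back via the existing
# RELATED,ESTABLISHED rule in ufw-before-input
-A ufw-before-output -p icmp --icmp-type echo-request -j ACCEPT
```

followed by a `sudo ufw reload` to pick up the change.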

Saturday, June 15, 2013

Thoughts about things #1

So I was just thinking about the tornado in Oklahoma and it's been devastating and my mom and I are listening to coverage of the cleanup on NPR

They measure the devastation in dollars because we have no scale for things that we as a race have no control over

And I was just thinking, it's kind of amazing that something outside of our control can cause so much devastation 

Not because I want people to be hurt or sad, but because I think it's amazing that there is still something outside of our grasp as a race that can have huge implications while there's nothing we can do about it

I just think it's important that humans don't become the end-all-be-all, and things like this are a reminder that we are intensely small in comparison to what is possible

That said, I hope that everyone affected by the tornado recovers and that we are able to help rebuild Oklahoma back to what it was.

We measure the recovery in dollars because we have no measure of things outside our control. But this is perfect, because we are recovering in a way that we are completely in control of. We may be devastated in ways that we have no control over, but we repair on our terms.

Wednesday, April 10, 2013

WOW LOOK

I felt the need to contribute again.

Hello blogosphere
nerdosphere
Alex
Nobody

Basically I'm procrastinating from doing homework, but I figured, why not write to nobody while procrastinating? So that's where I'm at right now.

I'm tired and that could be fixed if I did my homework but no so I will remain tired. There are two more days until spring break and I am positive that those are the two days that are going to break me until all my bones are powder and then spring break won't matter anymore because I will be dead.

I should be writing more but I'm not and that makes me sad but then I don't write because I'm sad and it spirals. That in itself is sad because I am causing the spiral on my own; it's caused by my own passivity.

I swear I'm not depressed or anything, this post just sounds really sad and mopey. I should go.

-Alex's Friend

Sunday, April 7, 2013

Thoughts on Blink, Facebook Home, and other things

BLINK
Blink is Google's new rendering engine. Sort of.
Quite frankly, this situation is so complicated I can't even keep it straight. Here's what's happening, as I understand it:
In the beginning, there was WebKit. Okay, not the very beginning. More like the beginning of the middle, as KHTML was the beginning. But anyway, in the beginning of this mess, WebKit was forked from KHTML by Apple. Originally, WebKit was closed-source, but Apple would give back large amounts of code to KHTML. This had its problems, which were worked out when Apple open-sourced WebKit. At this time, there was Firefox (Gecko), IE (Trident), and Opera (Presto). There was then WebKit, which wasn't used in anything yet. Safari did not yet exist. Neither did Chrome.
Then, Safari was launched, becoming the first browser to use WebKit. So Apple web technologies, primarily Safari and iOS (UIWebView), used WebKit. At this point it becomes necessary to specify what parts different browsers were using, and actual names, especially WebKit, become confusing and less relevant. So now there's Safari (WebKit - WebCore + JavaScriptCore), Firefox (Gecko), IE (Trident), and Opera (Presto). Chrome did not exist yet.
Then the next thing happened: Chrome was born. Google used the same WebKit codebase, but they did screwy things to it (not like patches, but in the way that they embedded it) in order to enforce their process model. They also got rid of JavaScriptCore, and used V8 instead. So Apple worked on the WebCore and JavaScriptCore components of WebKit, and Google only worked on WebCore. Unfortunately, we also have to name WebCore more specifically. We'll call it TraditionalWebCore. For simplicity's sake, let's call TraditionalWebCore + JavaScriptCore "TraditionalWebKit", and let's call TraditionalWebCore + V8 "GoogleWebKit". So TraditionalWebKit and GoogleWebKit share TraditionalWebCore, which both Google and Apple work on (Google effectively contributes upstream). However, Apple works on JavaScriptCore, while Google works on V8. TraditionalWebKit and GoogleWebKit aren't "real" technologies, they're just names for collections of "real" technologies.
Confused? So am I. Let's see what browsers are now available: Firefox (still Gecko), IE (still Trident), and Opera (still Presto). Then there's all of Apple's web stuff, including Safari, which use TraditionalWebKit - that is, TraditionalWebCore paired with JavaScriptCore. Chrome (and when I say Chrome I really mean Chrome, Chromium, Chrome OS, Chromium OS, and Chrome Frame) uses GoogleWebKit, that is, TraditionalWebCore paired with V8.
Getting close to now, Opera announces that they're ditching Presto and using WebKit instead. Which WebKit? Well, they're using V8, which means that they're using GoogleWebKit. So now we've got Firefox (Gecko), IE (Trident), Safari (AppleWebKit - TraditionalWebCore + JavaScriptCore), Chrome (GoogleWebKit - TraditionalWebCore + V8), and Opera (GoogleWebKit, same as Chrome).
Finally, we get to now (yesterday?) when Google has officially forked "WebKit" and created Blink. So what does that mean? It's not as straightforward, IMHO, as "fork" would imply, because of the way that Google used WebKit, but also didn't. Essentially, Google already had a partial fork of TraditionalWebKit, which was GoogleWebKit. I say partial fork because TraditionalWebCore was shared, but the JS engine was not. So while V8 was not technically a fork, as it was written from scratch, it's essentially a modular component that does the same thing as JavaScriptCore, just with a different implementation - and this is exactly what a technical fork would be like.
Getting back to Google "forking WebKit": because of the aforementioned partial fork, we can't say that Google actually forked WebKit without spending a while explaining ourselves properly. What really happened here was that Google forked TraditionalWebCore. So previously we had TraditionalWebKit - TraditionalWebCore + JavaScriptCore - and GoogleWebKit - TraditionalWebCore + V8. These were previously connected through TraditionalWebCore, but now that Google has effectively forked TraditionalWebCore, they've replaced TraditionalWebCore in GoogleWebKit with GoogleWebCore. Thus, we now have Blink, which was previously GoogleWebKit, which is the pairing of GoogleWebCore and V8. Then, we can go back to calling TraditionalWebKit just "WebKit", because there are no different versions of WebKit anymore. Thank God, that was so confusing.
Also, because Opera was previously going to use GoogleWebKit, they've now officially confirmed that they'll be using Blink.
Then there's WebKit2. No one uses that yet, so, eh.
Anyway, my personal thoughts on Blink? Same as WebKit2: that is, eh. I don't really think that Blink will change web development all that much. Right now, it's basically the same as WebKit, with a different JavaScript engine. However, I predict that in the same way that V8 was better than JavaScriptCore, Blink will become better than WebKit - it will be faster in every sense, although I don't think it will be as dramatic a change.
All this makes me feel, though, that the web needs a new rendering engine. Not a new Blink, but a new KHTML. A new Gecko. Because think about it:
  • Trident was originally made for Internet Explorer 4 in 1997.
  • Gecko was also born in 1997. It was going to replace Netscape's engine.
  • Presto development started in 2003. Not as bad, but still.
  • Blink is based on WebKit...
  • WebKit is based on KHTML, which was originally written in 1999.
This means that Internet Explorer, Firefox, Chrome, Safari, and future versions of Opera are/will be running code that originates from around 1998. The current version of Opera is a little better, from 2003. But that's still an entire decade ago.
Now keep in mind that I don't know a ton about the internals of rendering engines. But it seems like we're due for a new rendering engine made from scratch. It seems like no matter how many performance optimizations you perform, there will always be things that you can't do simply because the architecture of the engine forbids it. V8 was made for the modern web, when the other JavaScript engines were slightly too old for it. And wow, did V8 make a difference. It's still the fastest engine today, although these days SpiderMonkey (Gecko/Firefox) is neck and neck with it. We need the V8 of rendering engines.

FACEBOOK HOME
I want to skip this and write about other stuff, because that's more interesting to me at the moment. If I forget about this section, sorry.


MY LIFE!
So a couple months back I used Wubi to install Ubuntu on my school laptop. Epic.
Of course, I did screw it up by installing MATE, which pushed me over the hard drive capacity. That meant dpkg failed to do anything, so I had no idea how to free up space; I didn't know about recovery mode either, so I had to wipe it. I also tried to do a full install, and ended up getting my hard drive erased and reset. The end result, though, is that I'm really into Ubuntu again. I'm running the devel version, Raring, right now. One thing I ended up doing was writing a script to automate installing my favorite packages and setting up some configurations that I like, which was interesting. I'm going to write an application soon to automate the creation of this type of script. I'll get to that in a minute.
I also made my own custom desktop configuration file, which was super fun. It's based on Compiz and GNOME Panel, although I'm thinking of switching to MATE Panel, since GNOME Panel isn't maintained anymore(?). I went the fancy route (as usual) and instead of just making an xsession conf file, I made a gnome-session conf file too, so the session is gnome-session based. I tend to use GNOME Do to launch things.
I'm learning Python 3 and Ruby. Mostly I'm learning the Ruby so I can use Ruby on Rails.
Right now (today & yesterday) I have/had/am doing something called CodeDay Classic, which is a 24-hour hackathon for students. I AM SO. FREAKING. EXCITED and also extremely tired so I basically just wrote the GNOME section of this post, and then this sentence, and published it because I really need to get this post done. During CodeDay, I'm going to write the aforementioned script creation application. It'll be a Python CLI tool at first, but I think that'll only take an hour or two. Therefore, I'll then write a/some frontend/s for it. I think I might do one for GTK+ and one for Qt. And since I don't think even those will take even close to the whole time, I'll spend the rest of the time on gridcontrol, which I've now decided should be Rails-based (where previously I thought it should be node.js-based).
I've been to only one other CodeDay, which was CodeDay Seattle, around the time of winter break. I tried to work on forked there, but it didn't really work, although I did start to learn stuff about how Yeoman works. Although now that I've discovered Brunch, it seems like that might be the superior tool. I had to leave early, at 1 AM. :(
Last thing, I think: I NOW OWN A DOMAIN NAME WHEEEEE! strugee.net is now registered to yours truly. My domain provider was originally going to make me pay for web hosting to get subdomains, but I was like, screw that. So I went to afraid.org and got free DNS - including subdomains - through them. Wow, that really sounded like something someone would say in a stupid marketing thing. My bad. Apparently I already blogged about this but forgot about it, probably because I was tired.
Side note about my experience writing parts of this post: https://posts.app.net/4485209
I have some stuff about Mozilla and the status of the GNOME project, but I'll talk about those in different sections.

MOZILLA
It's Mozilla's 15th birthday this year. Happy birthday!
If you want to get a limited-edition Mozilla Dino plushie, you can if you donate $35 or more. Actually, they're all sold out due to huge demand. But please still donate! I'd love to tell you how epic mine is, but they take a super long time to ship. :(

GNOME
I hate GNOME 3. I actually like it a lot - except for the fact that it doesn't do Compiz plugins - except I hate the way the GNOME Foundation is going, and I feel like GNOME 3 is the manifestation of that. I mean, for god's sakes, look at their mission statement. It's unbelievably unspecific and generic: how are you supposed to define what "making great software" is? I don't really want to talk about this more, because it's 4:40 AM at CodeDay and I'm tired.
Everyone suffers from GNOME 2 withdrawal, right?

Saturday, March 2, 2013

strugee.net

Today I bought my first domain name, strugee.net, using Namecheap.
The first thing I did was move DNS away from them and to afraid.org, which is awesome. Because of this, I get subdomains without having to pay for Namecheap web hosting.
Also, I updated my website! It sucked before, and now it sucks less. Now it looks kind of ugly (still), but in an elegant kind of way that I really like and was aiming for.
Anyway, back to coding.