M.C. Pantz

Something Bloglike

Moving From Escreen to Workgroups

| Comments

I use Emacs to manage multiple projects simultaneously. Every professional engagement I’ve had so far has had me working on multiple codebases at once. Add to these my elisp and Org Mode files, and that’s a lot of frames to have open!

My first attempt at managing multiple projects had the unintended consequence of holding me back from realizing the full potential of Emacs. I ran emacsclient sessions in GNU Screen for each project I worked on. I could flip between different frame configurations, each one set up for an individual project. This worked well enough that it kept me from running Emacs as a graphical interface.

Eventually, Jon Rockway convinced me that I should abandon command-line Emacs and use Emacs as it was intended: as a graphical application. However, I had developed a dependence on multiplexing in my workflow that was hard to break. I first tried Elscreen, which allowed me to multiplex my projects natively in Emacs. However, the default configuration took up screen real estate, and I didn’t have the patience to try to remove it, so I quickly started to look for alternatives.


The next option I came across was escreen. This was better suited to my workflow, and it used existing resources to manage workspace switching. Specifically, it embedded the window configurations into the buffer used to store frame configuration information, which I thought was ingenious. I was able to configure it so that my transitions between my constantly open GNU Screen terminal and Emacs were seamless. I combined this with matching up the screen numbers per project, which let me navigate between projects with very little mnemonic overhead.

Escreen is an excellent package, but it still left me wanting a few features. Specifically, I wanted:

  • Named Workgroups - While matching up terminal numbers goes a long way, it would be much better if I could give my window configurations meaningful names.
  • Persistence - I set my windows up in the same configuration and order, and reconfiguring Emacs takes a while. The more I can automate that process, the better.
  • Active Development - escreen hasn’t seen any development in a few years, and I do not have the time and/or experience to round off the sharp edges.


A bit of research led me to check out workgroups. Puns aside, this is an excellent piece of software for managing window configurations. I was able to install the package from MELPA and configure it so closely to my escreen setup that I can’t functionally tell the difference. Here is what I have so far:

;; workgroups
(require 'workgroups)
(setq wg-prefix-key (kbd "C-z")
      wg-no-confirm t
      wg-file (concat elisp-dir "/workgroups")
      wg-use-faces nil
      wg-switch-on-load nil)

(defun wg-load-default ()
  "Run `wg-load' on `wg-file'."
  (wg-load wg-file))

(defun wg-save-default ()
  "Run `wg-save' on `wg-file'."
  (when wg-list
    (with-temp-message ""
      (wg-save wg-file))))

(define-key wg-map (kbd "g") 'wg-switch-to-workgroup)
(define-key wg-map (kbd "C-l") 'wg-load-default)
(define-key wg-map (kbd "C-s") 'wg-save-default)
(define-key wg-map (kbd "<backspace>") 'wg-switch-left)
(workgroups-mode 1)
(add-hook 'auto-save-hook 'wg-save-default)
(add-hook 'kill-emacs-hook 'wg-save-default)

Most of the code here was provided by this Stack Overflow post.

So what do I have now?

  • I can now switch to workgroups by name! This makes it much easier for me to name projects and completely forget about matching up numbers. I can still match up numbers if I want, but why bother?

  • I can restore my workgroup list after a restart. This makes it much easier to reboot and get back to where I left off.

  • Re-ordering workgroups is easy and awesome. Using my configuration, C-z C-, and C-z C-. will move a workgroup left and right, respectively. This makes it much easier to arrange things in case something gets out of whack.

  • All my key bindings are still the same. I don’t have to worry about altering any of my muscle memory to accommodate for a new workflow; it is just the same plus more features.

The Ugly Remains

workgroups is definitely the more mature and polished product. Despite this, by relying on Emacs’s own built-in abstractions (hosting its workgroups in the stored frame configuration), escreen dodged a few rough edges that I still run into with workgroups:

  • Per-workgroup switch-buffer lists - With escreen, each workgroup had its own list of visited buffers. This meant that my buffer visiting activities bled into each other less frequently. With workgroups, the visited buffer list does not appear to be localized to the workgroup, which is frustrating.

Final Steps

Despite the progress, there are a few things left on my to-do list for workgroups.

  • Persistent buffers - I haven’t yet managed this. The only things that get saved and restored right now are my buffer names.

  • Default new workgroups - I would like some default behavior when creating new windows. I name my windows after my projects; if I open a workgroup foo and there exists a directory $HOME/git/foo, I would like the window to open a magit buffer for that project, waiting for me. If I could get this to work in conjunction with eproject or projectile, that would be icing on the cake.
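To sketch what I mean, here is one hypothetical way it could be wired up. Nothing here exists yet: the helper name, the ~/git layout, and the advice on wg-create-workgroup are all my own invention, and this is untested.

```elisp
;; Hypothetical sketch: show a magit status buffer for project NAME
;; if ~/git/NAME exists. None of this is wired up yet.
(defun my-wg-project-buffer (name)
  "If ~/git/NAME exists, open a magit status buffer for it."
  (let ((dir (expand-file-name name "~/git/")))
    (when (file-directory-p dir)
      (magit-status dir))))

;; One possible hook point: run the helper after creating a workgroup,
;; so every new workgroup starts out looking at its project.
(defadvice wg-create-workgroup (after my-wg-default-contents activate)
  (my-wg-project-buffer (ad-get-arg 0)))
```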

Shell Paths on Mountain Lion Are a Confusing Mess

| Comments

I am setting up an OS X 10.8 (Mountain Lion) machine as a development machine. For my own personal use I prefer a Linux box, but between my own nostalgia for Gentoo and the peculiar yet undeniable strength of homebrew, I am perfectly happy to use OS X as a day-to-day development machine.

In order to use homebrew to its fullest, I edited the /etc/paths file to make things right:
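The edit amounts to reordering that file so that homebrew’s directories come first. A typical result (assuming homebrew’s stock /usr/local prefix) looks like this, with the homebrew paths shadowing the system versions:

```text
/usr/local/bin
/usr/local/sbin
/usr/bin
/bin
/usr/sbin
/sbin
```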


So far, no problems. Will report back later.

Iterators Make You Lazy

| Comments

One of the most productive “A-Ha!” moments I’ve had as a developer came when I learned how to use iterators, and saw the dramatic effect they had on my coding style. When we introduced the concept in one layer of code to handle a specific problem, the pattern proved infectious for similar workflows, and it made our code much more succinct and understandable to newcomers.

At a past place of $WORK, I worked on a project where we wanted to add a REST-ful API to our data, but we had to deal with blocking SQL queries whose completion took longer than any sane HTTP timeout allows. The result was that we needed to stream the formatted output of the query.

It turns out that most REST-type plugins for Catalyst don’t handle streaming output, so we couldn’t use something off the shelf. However, we did discover a module called Iterator::Simple. This allowed us to construct on-the-fly formatters that we could then stream to the users. Now all of our code stopped looking like ugly while loops and temporary variables, and turned into this:

use Iterator::Simple qw(iflatten imap);

sub format_nodes {
    my ($nodes_rs) = @_;

    iflatten [
        imap { format_node($_) } $nodes_rs,
    ];
}

The fun didn’t stop there. It started becoming clear that chaining and passing around iterators was a great way to quickly set up the construction of data sets, then pass those constructions along to consumers. A small library of macros had managed to invade our code and make it very, very lazy.

It turned out that many of our data preparation routines were expressible as computations that didn’t need to live in memory. Anywhere we were running a loop and processing data, we could refactor that region with imap and a code block, and have the code execute on an as-needed basis.
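The shape of that refactor looks roughly like this. It is a sketch, not our actual code: process_row stands in for whatever per-record work is being done, and relies on Iterator::Simple’s documented behavior that iter() accepts an array reference and an iterator returns undef when exhausted.

```perl
use strict;
use warnings;
use Iterator::Simple qw(imap iter);

# Stand-in for whatever per-record formatting you actually need.
sub process_row { my ($row) = @_; return uc $row }

# Eager version: builds the whole result list in memory up front.
sub processed_rows_eager {
    my ($rows) = @_;
    return [ map { process_row($_) } @$rows ];
}

# Lazy version: nothing runs until a consumer pulls a value.
sub processed_rows_lazy {
    my ($rows) = @_;
    return imap { process_row($_) } iter($rows);
}

# The consumer drives the computation one element at a time.
my $it = processed_rows_lazy([qw(a b c)]);
while (defined(my $v = $it->())) {
    print "$v\n";
}
```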

The only gripe that I have with this module is that its file handle emulation is incomplete. The module emulates the getline syntax (i.e., the <> operator), but not the getline method. Right now, Catalyst::Engine::PSGI assumes that the getline method is implemented on the resource to be streamed. This is true of file handles, but not of Iterator::Simple::Iterator objects. If the latter provided that interface, this module would increase dramatically in utility. I have filed a bug and submitted a patch, so we’ll see how it plays out.
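In the meantime, a thin adapter can paper over the gap. This is an untested sketch (the package name is invented); it relies on the fact that an Iterator::Simple iterator is a callable code ref that returns undef when exhausted, which matches getline returning undef at EOF.

```perl
package Iterator::GetlineAdapter;
use strict;
use warnings;

sub new {
    my ($class, $iter) = @_;
    return bless { iter => $iter }, $class;
}

# Catalyst::Engine::PSGI calls getline repeatedly until it returns
# undef; calling the wrapped iterator behaves the same way.
sub getline {
    my ($self) = @_;
    return $self->{iter}->();
}

1;
```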

Old Domain, New Blog Taste!

| Comments

Within the next 24 hours, my GitHub Page should be accessible any- and everywhere from my preferred vanity domain, mcpantz.org. Right now I am having it redirect as a top-level domain, mostly for shortened URL length. I would love to hear of any downsides, other than some sort of fragility in long-term resource names.

Also, I need to find a decent image host, as many of my WordPress-era posts had embedded URLs to images that are now broken. I’m not too keen on imgur.com, and the last time I used ImageShack was the early ’00s. Does anyone have a good, modern suggestion for long-term image hosting, particularly for hot-linking into websites?

YAPC::NA Post-Mortem

| Comments

I attended my fourth YAPC::NA this past week, and just as usual, I had a blast. Conferences are valuable experiences that allow one to step back and re-evaluate one’s work compared to the Zeitgeist of the community.

There were a lot of excellent talks this year, and even more that I didn’t see. The one that left everyone’s jaw on the floor was Martin Holste’s Perl For Big Data. There were a lot of talks on “big data,” but the sheer audacity that this project displayed was impressive. Holste demonstrated lightning-fast search, management-pleasing data analysis, and massive distributed throughput on some absurd amount of real-time data. The amazing part of it all was that his team accomplished this not by scaling up their hardware, but by pushing standard open source software to its limits.

Another general trend that I’m quite happy to see is that Perl hackers are thieving magpies. Stevan Little of Moose fame demonstrated Web::Machine, a clone of Basho’s Webmachine; I am merely looking for an excuse to use it now. Genehack also unveiled HiD, a Jekyll clone. I was surprised that Miyagawa, one of the first Perl hackers to be notorious for rifling the pocket change of other languages’ libraries, did not show off any new libraries, but he gave a great talk comparing his experiences with the Python and Ruby ecosystems he has plumbed in the past, and how to bring the good parts back to Perl.

Beyond the technical, it was fun to connect and reconnect with a lot of people I’ve interacted with on IRC, Twitter, and in past lives at past jobs. I am definitely looking forward to going back again in the future, and I hope to entice more folks to YAPC, both those new to Perl and those who toil in the shadows (you know who you are!).

Old Blog Content Converted

| Comments

I converted all my old published content from mcpantz.org to this new GitHub page. Once I confirm that everything else is in place I am going to cancel the other page, and save a few bucks on hosting. In the meantime, enjoy any content that you may not have seen before.

If you are redirected to this page after the domain name switch, apologies for the broken link. Please use one of the newer, more stable permalinks that my site now provides.

Building DBD::Pg for Perlbrew on Ubuntu

| Comments

I get a lot of benefit from managing my own Perl via perlbrew, but I’m fine using Ubuntu’s packaged Postgres server. However, Ubuntu strips down a lot of its applications into components, and you have to install development versions and header files separately if you want to compile everything. I’m putting this here for anyone who might be trying to install DBD::Pg on top of perlbrew. I’m not going to claim this will work in all circumstances, and it assumes you have already installed Postgres, but it definitely worked for me.

sudo apt-get install libpq5 postgresql-server-dev-9.1
POSTGRES_HOME='/usr/lib/postgresql/9.1' POSTGRES_INCLUDE='/usr/include/postgresql' cpanm Bundle::DBD::Pg

Perl Jails and You: Local::lib, Perlbrew, and Friends

| Comments

In the Ruby world, many people are aware of rvm for Ruby version management. Perl is not generally thought of as a virtual machine with an isolable environment, because of its frequent deep intertwining with the host operating system. However, there are tools that make this possible.

The problem with using Perl in this fashion is not that the solution does not occur to people, but that it occurs to many people simultaneously. Most of the hacker luminaries in the Perl world have each invented their own solution: brian d foy has written of his, there is a rubygems implementation in the wild, and the packaging problem has had multiple attempts made on it. Today, I’d like to review the best-in-class applications and how to use them.


local::lib

local::lib creates the configuration information for a Perl package directory without affecting the system perl’s library paths. Before worrying about your base perl installation, you should know local::lib. If you have a trustworthy sysadmin or are a junior dev on a team, this is most likely your situation.

local::lib lets you try installing modules locally, before installing them in a shared environment, to experiment with new features, or to install bleeding-edge versions of Perl-based command-line apps like App::GitGot or App::Ack for personal use.
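The bootstrap is short. This is one common recipe (the ~/perl5 target is local::lib’s default, not a requirement):

```shell
# Install local::lib into ~/perl5, then load its environment settings
# into the current shell so future installs land there too.
cpanm --local-lib=~/perl5 local::lib
eval "$(perl -I ~/perl5/lib/perl5 -Mlocal::lib)"

# Subsequent installs now go into ~/perl5, not the system tree.
cpanm App::Ack
```

Adding the eval line to your shell profile makes the local library path persistent across sessions.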


perlbrew

Are you developing in an environment where you have to test against multiple perl binaries? Then you should probably give perlbrew a try.

This application is a fully-featured suite for bootstrapping a perl environment. The perlbrew script allows you to install multiple versions of Perl and maintain library directories much more conveniently. In this regard, it is much more like rvm.

Another feature that perlbrew shares with rvm is its management of library assets. The perlbrew binary includes a command to install cpanminus, which hopefully you are already using (if you are not, I will have to write another blog post!). This copy coordinates with the built-in lib functionality to install libraries to the currently chosen perl, minimizing fuss and conflict.
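A typical session looks like this (the version number and library name are just examples):

```shell
# Build and switch to a fresh perl.
perlbrew install perl-5.16.0
perlbrew switch perl-5.16.0

# Install the bundled cpanminus mentioned above.
perlbrew install-cpanm

# Give this perl its own named library, then activate it.
perlbrew lib create perl-5.16.0@work
perlbrew use perl-5.16.0@work

# Modules now install into the @work library only.
cpanm Moose
```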

Combinations Thereof

If you are a solo developer who does not need to coordinate the usage of your development machine with other users, then use perlbrew. It provides the solutions of all the applications above directly, in a reproducible, scriptable manner.

If you are in a situation where setting up your own perl is a troublesome wreck, use local::lib. You will have a clean library path to experiment with, and you can feel no guilt whatsoever when you muck up a dependency and need to blow it away.

Regardless of anything, use the right tool for the right job, and make sure you know that you are getting what you need. If you aren’t, complain on IRC.

Large Files in Emacs

| Comments

I have started work on a new project where a lot of the source files are huge; files are commonly over 1,000 lines in length. While this is a massive code smell that needs to be dealt with, it doesn’t preclude the necessity of working with the file in the first place.

Emacs has no problem navigating the files, but editing large files is a royal pain in the rear end. It appears that the only reasonable solution is to shut off font-lock-mode.
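One way to automate that is a find-file hook. The helper name and the 1 MB threshold here are arbitrary choices of mine, not anything Emacs prescribes:

```elisp
(defun my-large-file-precautions ()
  "Disable `font-lock-mode' in buffers larger than 1MB."
  (when (> (buffer-size) (* 1024 1024))
    (font-lock-mode -1)))

(add-hook 'find-file-hook 'my-large-file-precautions)
```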

In addition, I have been using outline-minor-mode to narrow the focus of my attention. Having a top-down view of the file which I am working with really helps.

I wonder how much hacking would be involved to make comments associate with their own header (or, in the case of indented comments, with the level above them) as a sort of “floating” header level rather than a specific one.

Setting Up Rxvt-unicode on Ubuntu 11.10

| Comments

I’ve been an Ubuntu & Mint user for the past few years. I am not particularly focused on usability or community, but it is nice using an operating system where most of the kinks have been ironed out for me by the efforts of others.

Lately, I decided to abandon Terminator in favor of rxvt-unicode, a.k.a. urxvt. Already I’ve found it to be a lot snappier, but installation was not simple. There are a few hoops to jump through yet.

Install ncurses-term

Apparently the Ubuntu packagers haven’t gotten around to bundling the terminfo entry for rxvt-unicode into the package itself, as they probably don’t think you need programs like screen or tmux. Rather, Ubuntu wants you to install yet another package, which has no related dependencies. Be sure to install the ncurses-term package, which seems like a good idea anyway. And if you’d like to tell the Ubuntu packagers that they should fix this, you can go check out this bug report.
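The fix itself is a one-liner; the infocmp check afterwards confirms the terminfo entry is actually present:

```shell
sudo apt-get install ncurses-term

# Verify the rxvt-unicode terminfo entry now exists.
infocmp rxvt-unicode >/dev/null && echo "terminfo OK"
```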

Installing rxvt-unicode

This part is relatively easy, just install the rxvt-unicode package and its dependencies.

Setting up the environment

Configuring rxvt-unicode is done through the ~/.Xdefaults configuration file, which is the historic location of all X11 application data. Since it is a grab-bag of a file, it is a little harder to find examples. However, I have munged through a few and posted my end product in this gist.

If you take nothing else from this configuration, please change the default colors. Your eyes will thank me for it, since the blue will be legible.
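For illustration, here is the sort of thing the gist contains; the color values below are arbitrary examples, not my exact scheme:

```text
! ~/.Xdefaults excerpt (illustrative values only)
URxvt.background: #1c1c1c
URxvt.foreground: #d0d0d0
! Tone down the unreadable default blues
URxvt.color4:  #7aa6da
URxvt.color12: #81a2be
```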

Where From Here

By no means is this journey over. It turns out that my console only supports 88 colors, and that there are others who have had the same problem.