Why I Buy Moleskine Journals

I keep a diary and also a programming journal. Because I want to record all of my ideas for future reference, I prefer to use nice journals for them (no spiral notebooks or ones with perforated pages). For these purposes, I turn to the Moleskine brand of journals — more specifically, their Folio line of A4-sized notebooks.

Yes, the Moleskine line of journals is quite overpriced. Yes, they are not made in Europe as the name implies (they are made in China). However, Moleskine is quite possibly the only company in the world that makes journals that fit all of the following criteria:

  • Sewn binding (pages can lie flat for sane writing)
  • A4 or larger size
  • Blank/lined/graph pages
  • Hardcover
  • Regular paper (not sketch paper)

I think some of Clairefontaine’s journals come close, but I believe all of their A4-sized journals are lined. Plus, as great as their paper quality is (it really is fantastic to write on), it is way too bright for my taste.

By the way, if you are a programmer I strongly recommend that you keep your notes in a physical notebook. If you are learning a new programming language or starting to sketch the design goals of a new programming project, keeping a notebook for it is quite possibly one of the best decisions you will make.

Oh, by the way, you should purchase a good pen to go along with your journal(s). Don’t suffer through horrible, cheap pens that make writing a chore.

Easy Solutions to Hard Problems?

I’ve been trudging along knee-deep in tree manipulation algorithms recently, all in Haskell. I admit, calling them “algorithms” makes me sound much smarter than I really am… what I actually worked on was converting one type of tree into another type of tree (functionA) and then back again (functionB), such that I would get back the original tree — all in a purely functional, stateless, recursive way. My brain hurt a lot. I hit roadblock after roadblock; still, I managed to clear each hurdle through sheer effort (I never studied computer science in school).

See, I have a habit of solving typical Haskell coding problems in a very iterative fashion. Here are the steps I usually follow:

  1. Research the problem domain (Wikipedia, StackOverflow, etc.)
  2. Take lots of loose notes on paper (if drawing is required) or on the computer (using Emacs’ Org-mode).
  3. Write temporary prototype functions to get the job done.
  4. Test said functions with GHCi.
  5. Repeat Steps 1 through 4 until I am satisfied with the implementation.

The steps usually work after one or two iterations. But for hard problems, I would end up going through many more (failed) iterations. Over and over again, hundreds of lines of Haskell would get written, tested, and ultimately abandoned, because the flaws in their design would dawn on me halfway through Step 3. Then I would get burned out, and spend the rest of the day away from the computer screen, doing something completely different. But on the following day, I would cook up a solution from scratch in an hour.

It’s such a strange feeling. You try really hard for hours, days, weeks even, failing again and again. Then, after a brief break, you just figure it out, with less mental effort than all the effort you put in previously.

What can explain this phenomenon? The biggest factor is obviously the learning process itself — that is, it takes lots of failures to familiarize yourself intimately with the problem at hand. But the way in which the solution comes to me only after a deliberate pause, a complete separation from the problem domain, fascinates me. I call it the “Pause Effect” (PE), because I’m too lazy to dig up the proper psychological term for it (psychologists probably call it something like “incubation”).

So, here’s my new guideline for solving really hard problems:

  1. Try to solve the problem in a “brute-force” manner. Don’t stop until you burn out. I call this the “Feynman Step”, after the televised interview “Take the World from Another Point of View” (1973), where, toward the end, Feynman describes a good conversation partner — someone who has gone as far as he can go in his field of study. Every time I’m in this step, I think of how animated Feynman gets in that interview, and it fires me up again.
  2. Rest one or two days, then come back to it. This is the “PE Step”.

The best part is when you ultimately find the solution — it feels incredible. You get this almost intoxicating feeling of empowerment in yourself and in your own abilities.

Xorg: Using the US International (altgr-intl variant) Keyboard Layout

TL;DR: If you want to sanely type French, German, Spanish, or other European languages on a standard US keyboard (101 or 104 keys), your only real option is the us layout with the (relatively new) altgr-intl variant.

I like to type in French and German sometimes because I study these languages for fun. For some time, I used the fr and de X keyboard layouts to input special characters from these languages. However, it wasn’t until recently that I realized that most European layouts, such as fr and de, assume you have a 105-key keyboard. A 105-key keyboard looks exactly like the standard, full-sized IBM-style 104-key keyboard (i.e., the IBM Model “M” 101-key keyboard plus two Windows keys and one “Menu” key on the bottom row), except that it has one extra key on the row with the Shift keys (and also a “tall” Enter key, with some of the existing keys on the right side rearranged a bit).

I personally use a 104-key keyboard (a Unicomp Space Saver 104, because that’s how I roll). Now, when I switched to the fr layout the other day, I wanted to type the word âme. Unfortunately, the circumflex dead key was not where it was supposed to be. This was because I only had 104 keys instead of 105: forcing the fr layout onto a smaller keyboard leaves some characters unreachable (rightfully so).

Now, the choice was to either buy a 105-key keyboard or find an alternative 104-key layout that had dead keys. I didn’t want to buy a new 105-key keyboard because (1) the extra key results in a squished, “square”-looking left Shift key, (2) the tall Enter key would make it harder to keep my fingers happy on the home row, since I’d have to stretch my pinky quite a bit to reach it, and (3) I just didn’t feel like spending another ~$100 on a keyboard (my Unicomp is still in excellent condition!).

So, I had to find a new keyboard layout that could handle accented/non-English Latin characters better. The Canadian Multilingual Standard keyboard layout (http://en.wikipedia.org/wiki/Keyboard_layout#Canadian_Multilingual_Standard) looked very, very cool, but it too was designed for a 105-key keyboard. I then discovered a layout called the US International keyboard layout, which enables you to type all the cool accented letters from French, German, and other European languages while still retaining a QWERTY, 101/104-key design. This sounded great, until I realized that the tilde (~), backtick (`), single quote (') and other keys were dead keys. That is, to type ‘~’ in this layout, you have to press ‘~’ followed by a space. This is horrible for programmers in Linux, because the ~ character comes up everywhere (it’s a shell shortcut for the $HOME directory). To type a single quote character, you have to press (‘) followed by a space! Ugh. I’m sorry, but whoever designed this layout was obviously a European non-programmer who wanted to make typing accented characters easier at the expense of everything else.

But all hope was not lost. While browsing the choices for the us keyboard layout variants under X (/usr/share/X11/xkb/rules/base.lst), I found a curious variant called altgr-intl. A quick Google search turned up this page, an email from the creator of this layout to the X developers: http://lists.x.org/archives/xorg/2007-July/026534.html. Here was a person whose needs fit perfectly with my own! Here’s a quote:

I regularly write four languages (Dutch, English, French and German)
on a US keyboard (Model M – © IBM 1984).

I dislike the International keyboard layout. Why do I have to press
two keys for a single quote (‘ followed by the spacebar) just because the
‘ key is a dead-key that enables me to type an eacute (é)?

I decided to ‘hide’ the dead-keys behind AltGr (the right Alt key).
AltGr+’ followed by e gives me eacute (é). It also gives me
m² (AltGr+2).

Excellent! Apparently, this layout was so useful that it was eventually included in the upstream X codebase, as the altgr-intl variant for the standard us (QWERTY) keyboard layout. The most compelling feature of this layout is that all of the non-US keys are hidden behind a single key: the right Alt key. If you don’t use the right Alt key, this layout behaves exactly the same as the regular plain us layout. How cool is that?! And what’s more, this is what makes the layout compatible with the standard 101-key and 104-key IBM-style keyboards!

This variant deserves much more attention. Unfortunately, there seems to be little or no information about it other than the email quoted above, and there does not seem to be a simple keyboard layout picture on the internet showing all the keys. I’ve also noticed that it is capable of generating Nordic characters as well, like þ and ø. Anyway, I’ve been using it recently and it really does work great. There is no need for me to switch between the us, fr, and de layouts to get what I want. I just use this layout and that’s it. Take that, Canadian Multilingual Standard!

Here’s how to set it up: either use the setxkbmap program like this:

setxkbmap -rules evdev -model evdev -layout us -variant altgr-intl

in a script called by ~/.xinitrc, or set it directly in your Xorg keyboard configuration:

Section "InputClass"
    Identifier "Keyboard Defaults"
    MatchIsKeyboard "yes"
    Option  "XkbLayout" "us"
    Option  "XkbVariant" "altgr-intl"
EndSection

I personally prefer the script method because you can tweak it easily and reload it without restarting X. I had to use the -rules evdev -model evdev options because -model pc104 would mess things up (probably due to an interaction with xmodmap and friends in my ~/.xinitrc). Whatever you do, make sure everything works by testing out the AltGr (right Alt) key. For example, AltGr+a should produce ‘á’.
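To double-check that the variant actually took effect, a couple of standard X utilities help. This is just a sketch of what I do (run it inside an X session; the expected values in the comments assume you applied the setup above):

```shell
# Ask the X server which layout/variant is active right now.
setxkbmap -query        # expect: layout: us, variant: altgr-intl

# Watch raw key events; pressing AltGr+a should report the 'aacute' keysym.
xev -event keyboard
```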

A couple of caveats: some keys like â and è (which are AltGr+6+a and AltGr+`+e, i.e., they use dead keys) do not show up at all in urxvt. I’m guessing that the dead keys are broken in urxvt (or maybe it’s a bug in this layout; who knows). Luckily, I can just use a non-console app (like GVIM) to do my Euro-language typing, so it’s OK (although, eventually, I’ll run into this bug again when I start typing French emails from within mutt inside urxvt 20 years later… but by then it will be fixed, I’m sure). Also, the nifty xkbprint utility can generate nice pictures of keyboard layouts (just do a Google image search on it), but it’s currently missing (https://bugs.archlinux.org/task/17541) from Arch Linux’s xorg-xkb-utils package. So, if you’re on Arch, you’ll have to experiment a bit to figure out the various AltGr+ and AltGr+Shift+ key combinations.

See also: Xorg: Switching Keyboard Layouts Consistently and Reliably from Userspace

UPDATES:

  • October 30, 2011: I just found out that the dead keys bug in urxvt is actually a bug in ibus/SCIM (I use ibus, which relies on SCIM, to enter Japanese/Korean characters when I need to). I tried out uim, which is a bit more complicated (much less user friendly, although there are no technical defects, from what I can tell), and with uim, the dead keys work properly in urxvt. The actual bug report for ibus is here.
    So, use uim if you want to use the altgr-intl layout flawlessly, while still retaining CJK input functionality. Of course, if you never needed CJK input methods in the first place, this paragraph shouldn’t concern you at all.
  • December 7, 2011: Fix typo.
  • January 24, 2012: Fix grammar.

Things I Like About Git

Ever since around 2009-2010, developers have been engaging in an increasingly vocal debate about version control systems. I attribute this to the hugely popular rise of the distributed version control systems (DVCSs), namely Git and Mercurial. From my understanding, DVCSs in general are more powerful than nondistributed VCSs, because DVCSs can act just like nondistributed VCSs, but not vice versa. So, ultimately, all DVCSs give you extra flexibility, if you want it.

For various reasons, there is still an ongoing debate as to why one should use, for example, Git over Subversion (SVN). I will not address why there are still adamant SVN (or, gasp, CVS) users in the face of the rising tidal wave of DVCS adherence. Instead, I will talk about things I like about Git, because I’ve been using it almost daily for nearly three years now. My intention is not to add more flames to the ongoing debate, but to give the curious, version control virgins out there (these people do exist!) a brief rundown of why I like using Git. Hopefully, this post will help them ask the right questions before choosing a VCS to roll out on their own machines.

1. Git detects corrupt data.

Git stores the entire history of a repo as content-addressed objects (blobs for file contents, plus trees and commits), and each object is named by the SHA-1 hash of its contents. If even a single byte gets corrupted (e.g., by mechanical disk failure), the recomputed hash no longer matches the object’s name, and Git will know immediately. And, in turn, you will know immediately.
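This is easy to see firsthand. Here’s a throwaway sketch (it assumes git is installed; the repo, file name, and demo identity are all made up): overwrite a few bytes of a loose object on disk, and git fsck immediately complains.

```shell
# Build a disposable repo in a temporary directory.
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'hello' > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial commit'
git fsck --full                 # clean repo: fsck finds nothing to complain about

# Simulate disk corruption: overwrite a few bytes of one loose object.
obj=$(find .git/objects -type f | head -n 1)
chmod u+w "$obj"
printf 'XXXX' | dd of="$obj" bs=1 seek=10 conv=notrunc 2>/dev/null
git fsck --full || echo 'corruption detected'
```

The exact error message varies by Git version, but the second fsck fails loudly instead of silently handing you bad data.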

Check out this quote from Linus Torvalds’ Git talk back in 2007:

“If you have disc corruption, if you have RAM corruption, if you have any kind of problems at all, git will notice them. It’s not a question of if. It’s a guarantee. You can have people who try to be malicious. They won’t succeed. You need to know exactly 20 bytes, you need to know 160-bit SHA-1 name of the top of your tree, and if you know that, you can trust your tree, all the way down, the whole history. You can have 10 years of history, you can have 100,000 files, you can have millions of revisions, and you can trust every single piece of it. Because git is so reliable and all the basic data structures are really really simple. And we check checksums. And we don’t check some UDP packet checksums that is a 16-bit sum of all the bytes. We check checksums that is considered cryptographically secure.

[I]t’s really about the ability to trust your data. I guarantee you, if you put your data in git, you can trust the fact that five years later, after it is converted from your harddisc to DVD to whatever new technology and you copied it along, five years later you can verify the data you get back out is the exact same data you put in. And that is something you really should look for in a source code management system.”

(BTW, Torvalds, opinionated as he is, has a very high signal-to-noise ratio and I highly recommend all of his talks.)

2. It’s distributed.

Because Git is based on a distributed model of development, merging is easy. In fact, it is automatic, if there are no conflicting changes between the two branches being merged. In practice, merge conflicts only occur as a result of poor planning. Sloppy developers, beware!
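To make this concrete, here’s a small sketch (a throwaway repo again; branch and file names are invented for the demo) where two branches each add a different file, and Git merges them with zero manual intervention:

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'root'

git checkout -qb feature                 # branch off and do some work
echo 'feature work' > feature.txt
git add feature.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add feature.txt'

git checkout -q -                        # back to the original branch
echo 'mainline work' > mainline.txt
git add mainline.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add mainline.txt'

# The two branches touched different files, so the merge is fully automatic.
git -c user.name=demo -c user.email=demo@example.com merge -q --no-edit feature
ls          # both feature.txt and mainline.txt are now present
```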

Another benefit of its distributed model is that it naturally lends itself to the task of backing up content across multiple machines.

3. It’s fast.

I can ask Git whether any tracked files in a repo have been changed with just one command: git diff. And it needs but a split second, even if my $PWD is not the repo’s root directory, or if there are hundreds or thousands of tracked files scattered everywhere (Git tracks content snapshots, not individual files).
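For instance (a throwaway repo once more; paths and names invented), git diff finds changes instantly even when run from deep inside the tree:

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir -p deeply/nested/subdir
echo 'version 1' > deeply/nested/subdir/notes.txt
git add .
git -c user.name=demo -c user.email=demo@example.com commit -qm 'initial commit'

echo 'version 2' > deeply/nested/subdir/notes.txt
cd deeply/nested/subdir     # $PWD is nowhere near the repo root...
git diff --stat             # ...but Git reports the edited file immediately
```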

4. It gives me surgical precision before and after committing changes.

Several things help me keep my commits small and sane. The biggest factor is the index concept. Apparently, Git is the only VCS that gives you this tool! After editing your files, you go back and select only those chunks you want in the commit with git add -p. This way, you are free to change whatever you think is necessary in your files, without a nagging voice in the back of your mind asking, “Hey, is this change exactly what you need/intended for your next commit?”

The other big factor is the rebase command. With rebase, I can do pretty much anything I want with my existing commits. I can reorder them. I can change their commit messages (known as amending). I can change the commits themselves (i.e., change the diffs). I can combine four tiny commits into a single commit (known as squashing). I can even delete a commit (as long as the later commits do not rely on it). Essentially, you can rewrite your commits in any way you like. This way, you can sanitize your commits into a logical sequence, regardless of your actual work history.
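As a concrete taste of squashing, here’s a sketch using git reset --soft, the non-interactive equivalent of an interactive-rebase squash (the repo and commit messages are made up for the demo):

```shell
repo=$(mktemp -d)
cd "$repo"
git init -q
for i in 1 2 3 4; do
  echo "change $i" > notes.txt
  git add notes.txt
  git -c user.name=demo -c user.email=demo@example.com commit -qm "tiny commit $i"
done
git rev-list --count HEAD     # prints 4
git reset --soft HEAD~3       # rewind HEAD three commits, keeping their changes staged
git -c user.name=demo -c user.email=demo@example.com commit -qm 'commits 2-4, squashed'
git rev-list --count HEAD     # prints 2
```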

Other Thoughts

I could go on, but the remaining points don’t have as much “oomph” as the ones listed already. I fear that I am unable to see many of the “problems” with Git’s methodology and workflow, because I had the (un?)fortunate experience of learning Git as my first and only VCS. I learned concepts like the index, rebasing, committing, amending, branching, merging, pulling, and pushing all for the first time from Git. I also learned how to use Git by typing the core Git commands into a terminal (since I’m in there all the time anyway), so I have not been biased in favor of GUI-only operation (these days, the text-mode tig is the only visual tool I use, and only as a brief sanity check at that). Then again, I’ve never suffered data corruption, lost branches, or anything like that, so I’m probably doing things the right way in this whole VCS thingamajig.

Oh, and here are some aliases I use for Git:

alias g='git'
alias gdf="[[ \$(git diff | wc -l) -gt 0 ]] && git diff || echo No changes"
alias gdfc="[[ \$(git diff --cached | wc -l) -gt 0 ]] && git diff --cached || echo Index empty"
alias gst='git status'
alias gbr='git branch'
alias gcm='git commit'
alias gco='git checkout'
alias glg='git log'
alias gpl='git pull'
alias gps='git push'

Some thoughts on Emacs and Vim

This post shows a couple differences between Emacs and Vim.

Now, before I begin, I’d like to say that the only reason why I recently started using Emacs along with Vim is because of Emacs’ org-mode. It is a very powerful outlining platform that really takes advantage of Emacs’ Lisp architecture (well, all Emacs extensions, by design, enjoy the powerful Lisp backend). I’ve started using org-mode recently to act as my digital planner.

Also, I’m a die-hard Vim user, so I am obviously biased.

Cool things about Emacs:

  • Lisp backend (it makes Emacs verrry powerful, perhaps too powerful (see “cons” below)): Emacs is not a text editor. It is a Lisp interpreter that happens to be used primarily as a text editor. Org-mode, as I mentioned above, is an Emacs extension that really makes Emacs shine.
  • Command-based keybindings: In Vim, keybindings resemble macros, like this:
nmap <F1> :up<CR>

where “:up” is the literal sequence of keys pressed by the user to achieve the effect you want (here, typing :up and pressing ENTER saves the document). In Emacs, instead of specifying the literal key presses, you describe which commands are invoked by the keys. So for me, I have

(global-set-key "\C-cl" 'org-store-link)

which maps “CTRL-c, l” to the org-store-link function. You can of course do the same thing in Vim by calling a Vimscript function from a particular keybinding, but check this out: in Emacs you can easily compose multiple-function hotkeys by defining one anonymous function (a “lambda expression”) that strings together smaller regular functions, like so:

(global-set-key "\C-cl" (lambda ()
                          (interactive) ; required so the lambda can be run as a command
                          (function1)
                          (function2)
                          (function3)
                          (function4)
                          ...
                          (functionN)))

How cool is that? This is yet another example where the Lisp backend makes things possible, and easy. I actually prefer Emacs’ way of calling the functions/commands directly like this, as it makes keybindings much easier to understand and sticks to a simple, uniform model: a key invokes one or more functions.

  • Better speed: By speed, I’m judging 3 common areas: startup time, parenthesis matching, and syntax highlighting. Emacs wins 2 out of 3. Here is the breakdown:
      • Startup time: Vim is faster (GVim starts faster than Emacs, and console vim starts *instantly*, whereas console emacs takes a couple seconds). CORRECTION (Feb 17, 2011): Emacs’ startup time depends on two factors: how you write your ~/.emacs file, and whether you run it as a client of a long-running emacs daemon.
      • Parenthesis matching: Emacs is lightyears ahead. Sadly, Vim slows down to a deathly crawl whenever I edit non-trivial LaTeX files, because whenever I move my cursor around (e.g., by holding down the h/l keys), it re-calculates the matching parenthesis/bracket/brace for every cursor position. Emacs handles matching-parenthesis highlighting in realtime, even for my slowest (on Vim) LaTeX documents.
      • Syntax highlighting: Emacs’ syntax highlighting is much faster, and does not slow down *at all* even with my heaviest LaTeX files. Vim slows down to a crawl on some of my LaTeX documents when I hold down the h/l/j/k keys for incremental navigation.
  • Of course, Vim’s Normal mode makes editing files a breeze compared to Emacs’ clunky CTRL CTRL CTRL META META hotkey insanity. But Emacs is not a modal editor, so I didn’t include this in the speed breakdown above.

Some Emacs cons:

  • It’s not modal. This is a huuuuuuuuuge issue for me. The *best* complaint against modal editing, from what I gather from the endless Emacs vs. Vim debate, is: “Vim users have to keep reaching up to the ESC key with their left pinky every 5 seconds, and the ESC key is too far away from the home row — the same home row that Vimmers proudly call their home!” To this complaint, I have two practical answers:
  1. Use the built-in, default CTRL-[ shortcut instead of ESC. (For some reason, this is a big mystery to Emacs and Vim users alike.)
  2. Map “jk” to ESC. These two keys are on the home row (and if you learned to touch type, your fingers already rest on them), they are faster to type than “jj” or “kk” (you roll two fingers instead of tapping one finger twice), and the sequence “jk” almost never comes up in English or in code. This is what I do; I never use the ESC key.
  • Emacs’ Lisp backend makes it too powerful for its own good. Because “extending Emacs” really means “writing Lisp programs”, Emacs has become something of an OS. Many Emacs users out there use it as their text editor, as well as an email/news reader, a PDF viewer, a web browser, a version control frontend, an encryption/decryption mechanism, etc. This bizarre agglomeration of programs inside Emacs makes it very bulky, and I suspect it is this bulkiness that slows down its startup time (although, as stated above, its parenthesis matching and syntax highlighting are lightning fast). CORRECTION (Feb 17, 2011): Actually, the startup time is very fast if my ~/.emacs config file is blank; it’s my modest, relatively new ~/.emacs file, which loads various plugins (Vimpulse and a color theme, among others), that slows the startup down by a couple seconds.

That’s it! If Emacs were modal, I would probably switch to it immediately. Currently, I use Emacs + Vimpulse whenever I need Org-mode (the fact that Vimpulse and Org-mode, two independent Emacs projects, work seamlessly side by side emphasizes Emacs’ rich versatility). For all real text editing needs, I use Vim. Vim’s Normal mode lets me fly around a document like one of those crazy Harry Potter kids in a Quidditch game.

What I don’t like about Vim:

  • Vimscript: Vimscript is just painful for me to use. It’s a scripting language, but it doesn’t even have case/switch statements, and all user-defined functions must start with a capital letter. Yuck! Customizing/extending Emacs with Emacs Lisp is a dream compared to working with Vimscript.

That’s about it, apart from the parenthesis/syntax speed issues noted above. I am a die-hard Vim user, by choice.

UPDATE Feb 17, 2011: Wow, apparently this post got over 3k views today, because of referrals from (among others) HackerNews and Reddit. I’ll look over the discussions at those sites and post my thoughts on here, to “fill in the holes” in my original post, so to speak. Stay tuned.

UPDATE Feb 17, 2011: So here’s the update after reflecting on some of the comments from HN and Reddit:

HN:

  • slysf says “The biggest con about emacs that’s not mentioned is that it’s just not _everywhere_.” Yeah, I’m just a hobbyist programmer so I have not had to deal with sysadmin/remote client issues. But it’s obviously a big bonus if you can use your editor of choice everywhere you go. Emacs’ saving grace is that it IS very portable, just that it’s not installed on every UNIX-y box out there by default…
  • d0mine mentions that Python doesn’t have case/switch statements either. Holy cow, I did not know that! And apparently, you can just use Python’s dictionary type to achieve what case/switch statements do. Ruby has case/when statements (same thing as switch/case), and I had used them many times before, so I just found it strange when I tried to use Vimscript and there was no such construct. However, I’ve learned that Vimscript also has a dictionary type, so, like Python’s, I’m sure it can be used to emulate switch/case statements. Still, in the grand scheme of things, Vimscript’s power will always be dwarfed by larger, more mature languages that get beaten, abused, and tested by armies of professional developers; and this is why Vimscript will always be inferior: there aren’t that many users who pound on it daily and work to develop it. Vimscript is sort of an island on its own, while Emacs Lisp has excellent lineage (it’s a Lisp dialect).

Reddit:

  • jaybee mentions that startup time really has to do with what’s in your init file. This is very true, and has already been incorporated into my original post as a correction. The advice that you can also run emacs as a server on boot and launch clients instantaneously is very valuable. I will heed it!
  • sping says that he just uses one emacs window (frame) and works within it. As an XMonad user on a dual-head setup, I like multiple windows so much that I use multiple windows, shells, etc. all the time. If you’re on Gnome or KDE, with no tiling window manager, then sping’s method would be superior in terms of speed (no mouse dependency). I think there are legions of Emacs users out there who started using it before the advent of popular tiling window managers, and got so used to the notion of “do everything within Emacs” that they never bothered to try out tiling window managers (and the benefits they bring — lightning-fast window navigation/management, which makes using multiple shells/programs a breeze).
  • sping also says “I have never yet read someone talking about vim who has actually made a good case for any superiority over Emacs in any significant area.” And my answer is: modal editing. Modal editing is a killer feature that Emacs is sorely missing. sping also says “I’m not actually against its modal nature, but I’m still waiting to find out exactly how it’s so f’ing awesome in ways which Emacs isn’t.” Hmm, what part of “modal editing makes navigation verrry fast” did I make unclear? I even made a Harry Potter reference, for goodness’ sake! Well, if you don’t care about fast cursor movement inside a file, then Vim has very little, if *any*, advantage over vanilla Emacs (the only “neat” thing I can think of is Visual Block mode, but that’s it). To be fair, sping is a former vi user, so he finds the whole modal editing thing not very impressive. To each his own.
  • Just to expand more on sping’s comment about how he has yet to find anything in Vim that is so great over Emacs, I think he’s asking fundamentally the wrong question. Vim will never really have more “awesome ways” that make it superior over Emacs, because Vim is just a text editor. And Emacs is waaaay more than a text editor. So feature-wise, Emacs will always win. However, if you’re only comparing text editing vs. text editing, then I will re-iterate that Vim’s (only real) killer feature over Emacs is modal editing (and super-fast intra-file navigation).
  • zck asks “Wouldn’t [pressing “jk”] just move the cursor up and then down?” Well, no. First, we are talking about going from Insert mode to Normal mode. If you’re already in Normal mode, then duh: your cursor will move down and then up. But since we are in Insert mode, and have “jk” insert-mode-mapped to ESC (imap jk <ESC>), every time you press j, Vim starts a little timer and sees if you press k next; if you do indeed press k next, you escape into Normal mode. This timer is customizable, and I have it at 1 second.

Other thoughts:

On navigation speed: There are still people asking: “Is Vim really faster to navigate around a file than Emacs?” And my answer is: the hotkeys speak for themselves. (Vim is lightyears ahead.) Oh, and I also mapped the Space and Backspace keys in Normal mode like this:

nnoremap <space> 10jzz
nnoremap <backspace> 10kzz

to make navigation even faster. (You should all customize Space/Backspace keys because by default they just do what l/h do.) UPDATE March 2, 2011: Oh, and if you are using vimpulse.el (version 0.5), you can do:

; simulate vim's "nnoremap <space> 10jzz"
(vimpulse-map " " (lambda ()
                    (interactive)
                    (next-line 10)
                    (viper-scroll-up-one 10)))
; simulate vim's "nnoremap <backspace> 10kzz"
(define-key viper-vi-global-user-map [backspace]
  (lambda ()
    (interactive)
    (previous-line 10)
    (viper-scroll-down-one 10)))

to get the same behavior.

UPDATE Feb 17, 2011: Oh, and another thing: for years some people have wanted a “breakindent” feature in Vim (i.e., make it so that a very long, soft-wrapped line gets a virtual indent for all successive screen “lines”, such that a long line that starts at an indent ends up as a neat, clean rectangle). See this page and this page. Among other things, a breakindent feature would make outlines look very clean, with neat rectangular blocks for lines that start at a deep indent. Sadly, I don’t imagine breakindent becoming a feature in Vim in the foreseeable future. In Emacs’ org-mode, breakindent is available! Just put “#+STARTUP: indent” at the top of the .org file you are working on, and voilà: sane line wrapping for outlines.

UPDATE August 5, 2011: I’ve recently switched to using the emacs --daemon and emacsclient commands. Basically, I have emacs --daemon & in my ~/.xinitrc, and then I use emacsclient -nw and emacsclient -c instead of emacs -nw and emacs, respectively. The startup time of emacsclient is near-instant — very impressive! The only drawbacks are that the emacs daemon process takes up 37MB on my system (but then again, with machines using gigabytes of RAM these days, it’s a nonissue), and you have to fiddle a bit with your ~/.emacs file to load the correct font and color scheme (since emacs --daemon does not read the ~/.emacs file the way a regular standalone emacs does with regard to the X11 options; see http://www.tychoish.com/rhizome/getting-emacs-daemon-to-work-right/ and http://old.nabble.com/--daemon-vs.-server-start-td21594587.html).

I’ve also dropped Vimpulse in favor of Evil, since the core developer for it (Vegard Øye) decided to join forces with another Vim-minded emacs extension author (Frank Fischer, of vim-mode) to create a newer, better Vim emulation layer earlier in February this year (just 10 days after the date of my original post!). See http://news.gmane.org/find-root.php?group=gmane.emacs.vim-emulation&article=692. Anyway, the official Evil repo is at http://gitorious.org/evil/ (more info at http://gitorious.org/evil/pages/Home). The only drawback right now is that Evil does not currently emulate Vi’s Ex mode (in Vim, when you type the colon ‘:’ character from Normal mode). If you really want this feature right now, too, then I suggest cloning from Frank Fischer’s personal branch, at https://bitbucket.org/lyro/evil/overview, which includes Ex mode. I suspect that Fischer’s Ex-mode code will be merged into Evil sometime in the future since Fischer keeps his branch in sync with the official Evil repo.

The best part about using Evil is that it does not depend on Viper, the old Vi extension for emacs.

Anyway, here’s how I’ve set up Evil in my ~/.emacs (many of these keymaps only make sense if you’re in org-mode, but whatever):

; evil, the Extensible VI Layer! see http://gitorious.org/evil/pages/Home
(add-to-list 'load-path "~/.emacs.d/script/evil")
(require 'evil)
(evil-mode 1)
; some keymaps from ~/.vimrc
(define-key evil-insert-state-map [f1] 'save-buffer) ; save
(define-key evil-normal-state-map [f1] 'save-buffer) ; save
(define-key evil-normal-state-map ",w" 'save-buffer) ; save
(define-key evil-normal-state-map ",q" 'kill-buffer) ; quit (current buffer; have to press RETURN)
(define-key evil-normal-state-map ",x" 'save-buffers-kill-emacs) ; save and quit
; make "kj" behave as ESC key ,adapted from http://permalink.gmane.org/gmane.emacs.vim-emulation/684
; you can easily change it to map "jj" or "kk" or "jk" to ESC)
(defun escape-if-next-char (c)
  "Watch the next letter.  If it is C, switch to Evil's normal mode;
otherwise insert a k and forward the unread key to unread-command-events."
  (self-insert-command 1)
  (let ((next-key (read-event)))
    (if (= c next-key)
        (progn
          (delete-backward-char 1)
          (evil-esc))
      (setq unread-command-events (list next-key)))))

(defun escape-if-next-char-is-j (arg)
  (interactive "p")
  (if (= arg 1)
      (escape-if-next-char ?j)
    (self-insert-command arg)))

(define-key evil-insert-state-map (kbd "k") 'escape-if-next-char-is-j)
; simulate vim's "nnoremap <space> 10jzz"
(define-key evil-normal-state-map " " (lambda ()
                     (interactive)
                     (next-line 10)
                     (evil-scroll-line-down 10)
                     ))
; simulate vim's "nnoremap <backspace> 10kzz"
(define-key evil-normal-state-map [backspace] (lambda ()
                     (interactive)
                     (previous-line 10)
                     (evil-scroll-line-up 10)
                     ))

; make evil work for org-mode!
(define-key evil-normal-state-map "O" (lambda ()
                     (interactive)
                     (end-of-line)
                     (org-insert-heading)
                     (evil-append nil)
                     ))

(defun always-insert-item ()
  (interactive)
  (if (not (org-in-item-p))
      (insert "\n- ")
    (org-insert-item)))

(define-key evil-normal-state-map "O" (lambda ()
                     (interactive)
                     (end-of-line)
                     (org-insert-heading)
                     (evil-append nil)
                     ))

(define-key evil-normal-state-map "o" (lambda ()
                     (interactive)
                     (end-of-line)
                     (always-insert-item)
                     (evil-append nil)
                     ))

(define-key evil-normal-state-map "t" (lambda ()
                     (interactive)
                     (end-of-line)
                     (org-insert-todo-heading nil)
                     (evil-append nil)
                     ))
(define-key evil-normal-state-map (kbd "M-o") (lambda ()
                     (interactive)
                     (end-of-line)
                     (org-insert-heading)
                     (org-metaright)
                     (evil-append nil)
                     ))
(define-key evil-normal-state-map (kbd "M-t") (lambda ()
                     (interactive)
                     (end-of-line)
                     (org-insert-todo-heading nil)
                     (org-metaright)
                     (evil-append nil)
                     ))
(define-key evil-normal-state-map "T" 'org-todo) ; mark a TODO item as DONE
(define-key evil-normal-state-map ";a" 'org-agenda) ; access agenda buffer
(define-key evil-normal-state-map "-" 'org-cycle-list-bullet) ; change bullet style

; allow us to access org-mode keys directly from Evil's Normal mode
(define-key evil-normal-state-map "L" 'org-shiftright)
(define-key evil-normal-state-map "H" 'org-shiftleft)
(define-key evil-normal-state-map "K" 'org-shiftup)
(define-key evil-normal-state-map "J" 'org-shiftdown)
(define-key evil-normal-state-map (kbd "M-l") 'org-metaright)
(define-key evil-normal-state-map (kbd "M-h") 'org-metaleft)
(define-key evil-normal-state-map (kbd "M-k") 'org-metaup)
(define-key evil-normal-state-map (kbd "M-j") 'org-metadown)
(define-key evil-normal-state-map (kbd "M-L") 'org-shiftmetaright)
(define-key evil-normal-state-map (kbd "M-H") 'org-shiftmetaleft)
(define-key evil-normal-state-map (kbd "M-K") 'org-shiftmetaup)
(define-key evil-normal-state-map (kbd "M-J") 'org-shiftmetadown)

(define-key evil-normal-state-map (kbd "<f12>") 'org-export-as-html)

UPDATE August 12, 2011: Great news! Frank Fischer’s Ex mode branch was merged into upstream just two days ago (commit 46447065ed5a374452504bd311b91902b8ce56d4), so now Evil has Ex mode!

Also, I noticed that Evil uses (kbd "DEL") instead of [backspace] for binding the Backspace key (commit 8fc3c4065c7ed50a6033a7f1c68deba5a9270043), so I recommend changing the Backspace binding I posted previously to:

; simulate vim's "nnoremap <backspace> 10kzz"
(define-key evil-normal-state-map (kbd "DEL") (lambda ()
                     (interactive)
                     (previous-line 10)
                     (evil-scroll-line-up 10)
                     ))

I tested the change, and indeed, the previous binding with [backspace] was broken for console emacs, but it works with (kbd "DEL").

UPDATE November 17, 2011: Tom (comment #19) pointed out that you can define hotkeys in a mode-specific way from within evil, as follows:

(evil-declare-key 'normal org-mode-map "T" 'org-todo)
(evil-declare-key 'normal org-mode-map "-" 'org-cycle-list-bullet)

This way, your hotkeys will not collide if you change modes. (Not a problem for me because I only use emacs for org-mode, but there you have it.)

Oh, and the old “kj” escape key function no longer works; here is the new version:

(define-key evil-insert-state-map "k" #'cofi/maybe-exit)

(evil-define-command cofi/maybe-exit ()
  :repeat change
  (interactive)
  (let ((modified (buffer-modified-p)))
    (insert "k")
    (let ((evt (read-event (format "Insert %c to exit insert state" ?j)
                           nil 0.5)))
      (cond
       ((null evt) (message ""))
       ((and (integerp evt) (char-equal evt ?j))
        (delete-char -1)
        (set-buffer-modified-p modified)
        (push 'escape unread-command-events))
       (t (setq unread-command-events (append unread-command-events
                                              (list evt))))))))

The original source is at http://article.gmane.org/gmane.emacs.vim-emulation/980. What’s great about this version is that it behaves almost exactly like the original vim mapping; once you type “k”, you have a 0.5-second window to type in “j” to escape — otherwise, you just get “k” on the screen.

UPDATE March 7, 2012: For a long time, I did not realize that the “10j10<c-e>” keybinding for <space> was not what I wanted, as it moved the cursor more than 10 lines down (<c-e> is an “impure” way to scroll). The proper way is to use “zz”, which recenters the screen so that the cursor sits in the middle (vertically), without changing the current line position.
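Translated into the Evil setup above, the corrected <space> and Backspace bindings would look something like the following sketch (untested on my part; recenter is the built-in Emacs command that does what landing a vim “zz” does):

```
; simulate vim's "nnoremap <space> 10jzz" with a true "zz" (recenter)
(define-key evil-normal-state-map " " (lambda ()
                     (interactive)
                     (next-line 10)
                     (recenter)))
; simulate vim's "nnoremap <backspace> 10kzz"
(define-key evil-normal-state-map (kbd "DEL") (lambda ()
                     (interactive)
                     (previous-line 10)
                     (recenter)))
```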

On Exercise

I started exercising regularly again. I’ve taken a new approach to exercise:

  • Exercise is not about losing weight.
  • Exercise is not about toning one’s physique.
  • Exercise is not about staying healthy.

Although exercise does help one lose weight, tone one’s physique, and stay healthy, these things are mere side effects. What exercise really does is strengthen one’s mental fortitude. Real, tough, in-the-guts exercise requires a truckload of willpower. Each time we exercise, we train on a mental level.

Take elite athletes as an example. Day in and day out, they train. With time, they gain the ability to maintain an incredible amount of mental focus under an incredible amount of stress. And when they compete, they are able to perform what their sport requires of them, even when they’re hurting.

Everyone hopes to do X, Y, and Z to make improvements in their lives. Without mental discipline, though, they are going to waste a lot of time. Maybe this realization, on a subconscious level, is what gives them an incentive to procrastinate.

In short, exercise strengthens the mind. It is the simplest, yet most important, thing that I can do to improve all areas of my life.

LaTeX: Saner Source Code Listings, No Starch Press Style

I’ve lately gotten in the habit of writing aesthetically pleasing ebooks with LaTeX for myself on new subjects that I come across. This way, I can (1) learn more about TeX (always a huge bonus) and (2) create documents that are stable, portable (PDF output), beautiful, and printer-friendly. (I also put all the .tex files and accompanying makefiles into version control (git) and sync them across all of my computers, to ensure that they last forever.)

One of those new subjects for me right now is the Haskell programming language. I started copying down small code snippets from various free resources on the web, with the help of the listings package (which provides the lstlisting environment).

The Problem

Unfortunately, I got tired of referencing line numbers manually to explain the code snippets, like this:

\documentclass[twoside]{article}
\usepackage{fontspec} % provides \addfontfeature (requires XeLaTeX)
\usepackage[x11names]{xcolor} % for a set of predefined color names, like LemonChiffon1

\newcommand{\numold}[1]{{\addfontfeature{Numbers=OldStyle}#1}}
\lstnewenvironment{csource}[1][]
    {\lstset{basicstyle=\ttfamily,language=C,numberstyle=\numold,numbers=left,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

\begin{document}
\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource}
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}
\end{csource}

At line 1, we include the \texttt{stdio.h} header file. We begin the \texttt{main} function at line 3. We print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) at line 5. At line 6, we return value 0 to let the caller of this program know that we exited safely without any errors.
\end{document}

Output (compiled with the xelatex command):
Plain lstlisting style

This is horrible. Any time I add or delete a single line of code, I have to manually go back and change all of the numbers. On the other hand, I could instead use the \label{labelname} command inside the lstlisting environment on any particular line, and get the line number of that label with \ref{labelname}, like so:

\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource}
#include <stdio.h>(*@\label{include}@*)

int main()(*@\label{main}@*)
{
    printf("Hello, world!\n");(*@\label{world}@*)
    return 0;(*@\label{return}@*)
}
\end{csource}
At line \ref{include}, we include the \texttt{stdio.h} header file. We begin the \texttt{main} function at line \ref{main}. We print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) at line \ref{world}. At line \ref{return}, we return value 0 to let the caller of this program know that we exited safely without any errors.

Output:

(The only difference is that the line number references are now red. This is because I also used the hyperref package with the colorlinks option.)

But I felt that this was not a good solution at all (even though it is the one suggested by the official listings manual, Section 7, “How tos”). For one thing, I have to think of a new label name for every line I want to address. Furthermore, I have to compile the document twice, because that’s how the \label command works. Yuck.

Enlightenment

So again, with my internet-research hat on, I tried to figure out a better solution. The first thing that came to mind was how the San Francisco-based publisher No Starch Press displayed source code listings in their newer books. Their stylistic approach to source code listings is probably the most sane (and beautiful!) one I’ve come across.

After many hours of tinkering, trial-and-error approaches, etc., I finally got everything working smoothly. Woohoo! From what I can tell, my rendition looks picture-perfect, and dare I say, even better than their version (because I use colors, too). Check it out!

\documentclass{article}
\usepackage{fontspec} % provides \addfontfeature (requires XeLaTeX)
\usepackage{ifthen} % provides \ifthenelse
\usepackage[x11names]{xcolor} % for a set of predefined color names, like LemonChiffon1
\newcommand{\numold}[1]{{\addfontfeature{Numbers=OldStyle}#1}}
\usepackage{libertine} % for the pretty dark-circle-enclosed numbers

% Allow "No Starch Press"-like custom line numbers (essentially, bulleted line numbers for only those lines the author will address)
\newcounter{lstNoteCounter}
\newcommand{\lnnum}[1]
    {\ifthenelse{#1 =  1}{\libertineGlyph{uni2776}}
    {\ifthenelse{#1 =  2}{\libertineGlyph{uni2777}}
    {\ifthenelse{#1 =  3}{\libertineGlyph{uni2778}}
    {\ifthenelse{#1 =  4}{\libertineGlyph{uni2779}}
    {\ifthenelse{#1 =  5}{\libertineGlyph{uni277A}}
    {\ifthenelse{#1 =  6}{\libertineGlyph{uni277B}}
    {\ifthenelse{#1 =  7}{\libertineGlyph{uni277C}}
    {\ifthenelse{#1 =  8}{\libertineGlyph{uni277D}}
    {\ifthenelse{#1 =  9}{\libertineGlyph{uni277E}}
    {\ifthenelse{#1 = 10}{\libertineGlyph{uni277F}}
    {\ifthenelse{#1 = 11}{\libertineGlyph{uni24EB}}
    {\ifthenelse{#1 = 12}{\libertineGlyph{uni24EC}}
    {\ifthenelse{#1 = 13}{\libertineGlyph{uni24ED}}
    {\ifthenelse{#1 = 14}{\libertineGlyph{uni24EE}}
    {\ifthenelse{#1 = 15}{\libertineGlyph{uni24EF}}
    {\ifthenelse{#1 = 16}{\libertineGlyph{uni24F0}}
    {\ifthenelse{#1 = 17}{\libertineGlyph{uni24F1}}
    {\ifthenelse{#1 = 18}{\libertineGlyph{uni24F2}}
    {\ifthenelse{#1 = 19}{\libertineGlyph{uni24F3}}
    {\ifthenelse{#1 = 20}{\libertineGlyph{uni24F4}}
    {NUM TOO HIGH}}}}}}}}}}}}}}}}}}}}}
\newcommand*{\lnote}{\stepcounter{lstNoteCounter}\vbox{\llap{{\lnnum{\thelstNoteCounter}}\hskip 1em}}}
\lstnewenvironment{csource2}[1][]
{
    \setcounter{lstNoteCounter}{0}
    \lstset{basicstyle=\ttfamily,language=C,numberstyle=\numold,numbers=right,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}
}
{}

\begin{document}
\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource2}
(*@\lnote@*)#include <stdio.h>

/* This is a comment. */
(*@\lnote@*)int main()
{
(*@\lnote@*)    printf("Hello, world!\n");
(*@\lnote@*)    return 0;
}
\end{csource2}

We first include the \texttt{stdio.h} header file \lnnum{1}. We then declare the \texttt{main} function \lnnum{2}. We then print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) \lnnum{3}. Finally, we return value 0 to let the caller of this program know that we exited safely without any errors \lnnum{4}.
\end{document}

Output (compiled with the xelatex command):

Ah, much better! This is pretty much how No Starch Press does it, except that we’ve added two things: the line number count on the right-hand side, and a colored background to make the code stand out a bit from the page (easier on the eyes when quickly skimming). These options are easily adjusted or removed to suit your needs (see the \lstset options in the csource2 definition). For a direct comparison, download some of their sample chapter offerings from Land of Lisp (2010) and Network Flow Analysis (2010) and see for yourself.

Explanation

If you look at the code, you can see that the only real change in the listing code itself is the use of a new \lnote command. The \lnote command basically spits out a symbol: in this case, whatever the \lnnum command produces for the current value of the lstNoteCounter counter. The \lnnum command is basically a long if/else chain (I tried using the switch construct from here, but then extra spaces would get added in), and it produces a nice glyph, but only up to 20; for anything higher, it just displays the string “NUM TOO HIGH”. This is because it uses the Linux Libertine font to create the glyph (with \libertineGlyph), and Libertine’s black-circle-enclosed numerals only go up to 20. A caveat: Linux Libertine’s LaTeX commands are subject to change without notice, as the package is currently in an “alpha” stage of development (e.g., the \libertineGlyph command actually used to be called \Lglyph, if I recall correctly).

The real star of the show is the \llap command. I did not know about this command until I stumbled on this page yesterday. For something like an hour I toiled over \marginpar and \marginnote (from the marginnote package) trying to get the same effect, without success; for one thing, it is impossible to keep margin notes on one side (e.g., always on the left, or always on the right) if your document class is using the twoside option.
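In short, \llap (“left overlap”) typesets its argument in a zero-width box that sticks out to the left, so the circled numbers hang in the margin without disturbing the listing’s layout. A minimal, self-contained illustration:

```latex
\documentclass{article}
\begin{document}
% \llap{X\hskip 1em} puts "X" (plus 1em of space) in a zero-width box
% extending leftward, so the X hangs into the margin:
\noindent\llap{X\hskip 1em}This line still starts at the normal margin.
\end{document}
```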

The custom \numold command (used here to typeset the line numbers with old-style figures) is actually a resurrection of a command of the same name from an older, deprecated Linux Libertine package. The cool thing about how it’s defined is that you can use it with or without an argument. Because \numold relies on \addfontfeature (from the fontspec package), you have to use the XeTeX engine (i.e., compile with the xelatex command).

In all of my examples above, the serif font is Linux Libertine, and the monospaced font is DejaVu Sans Mono.

Other Thoughts

You may have noticed that I have chosen to use my own custom lstlisting environment (with the \lstnewenvironment command). The only reason I did this is that I can specify a custom command that runs at the start of each listing. In my case, it’s \setcounter{lstNoteCounter}{0}, which resets the note counter back to 0 for each listing.

Feel free to use my code. If you make improvements to it, please let me know! By the way, Linux Libertine’s LaTeX package supports other enumerated glyphs, too (e.g., white-circle-enclosed numbers, or even circle-enclosed alphabet characters). Or, you could even use plain letters enclosed inside a \colorbox command, if you want. You can also put the line numbers right next to the \lnote numbers (we just add numbers=left to the lstnewenvironment options, and change \lnote’s hskip to 2.7em; I also changed the line numbers to basic italics):

\lstnewenvironment{csource2}[1][]
    {\setcounter{lstNoteCounter}{0}%
     \lstset{basicstyle=\ttfamily,language=C,numberstyle=\itshape,numbers=left,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

\newcommand*{\lnote}{\stepcounter{lstNoteCounter}\llap{{\lnnum{\thelstNoteCounter}}\hskip 2.7em}}

Output:

Or we can swap their positions (by adding the numbersep= option to the lstnewenvironment declaration, and leaving \lnote as-is with just 1em of \hskip):

\lstnewenvironment{csource2}[1][]
    {\setcounter{lstNoteCounter}{0}%
     \lstset{basicstyle=\ttfamily,language=C,numberstyle=\itshape,numbers=left,numbersep=2.7em,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

Output:

The possibilities are yours to choose.

In case you’re wondering, the unusual indenting of the section number into the left margin in my examples is achieved as follows:

\usepackage{titlesec}
\titleformat{\section}
    {\Large\bfseries} % formatting
    {\llap{
       {\thesection}
       \hskip 0.5em
    }}
    {0em}% horizontal sep
    {}% before

It’s a simplified version of the code posted on the StackOverflow link above.

NB: The \contentsname command (used to render the “Contents” text if you have a Table of Contents) inherits the formatting from the \chapter command (i.e., if you use \titleformat{\chapter}). Thus, if you format the \chapter command like the \section command above with \titleformat, the only way to prevent this inheritance (if it is not desired) is by doing this:

\renewcommand\contentsname{\normalfont Contents}

This way, your Table of Contents title will stay untouched by any \titleformat command.

UPDATE December 7, 2010: Some more variations and screenshots.

Mutt: Multiple Gmail IMAP Setup

NOTE: Read the August 10, 2011 update below for a more secure ~/.muttrc regarding passwords!
NOTE: Read the December 22, 2011 update below for a secure and ultra-simple way to handle passwords!
NOTE: Keep in mind that the “my_” prefix in “my_gpass1” is mandatory! (See January 3, 2012 embedded update.)

Since Alpine is no longer maintained, I decided to just move on to Mutt. Apparently Mutt has a large user base, is actively developed, and is used by friendly programmers like Greg Kroah-Hartman (Linux kernel core dev; skip the video to 25:44). Although I need to take care of two separate Gmail accounts through IMAP, I got it working thanks in part to the nice guides here. In particular, I used this guide and this one. However, the configuration from the second link contains a couple of outright errors (no equal sign after imap_user, and no semicolons between unset commands). I figured things out by trial and error and over 8 hours of googling, tweaking, and fiddling (granted, my problem was very unusual). But finally, I got things working (I can log in, view emails, etc.). I haven’t tested everything, however; I suspect some more settings are needed to let Mutt handle multiple accounts properly (setting the correct From/To fields for forwarding/replying), but let us focus on the basics for now.

Before I get started, I’m using Mutt 1.5.21, and it was compiled with these options (all decided by Arch Linux’s Mutt maintainer; see here):

./configure --prefix=/usr --sysconfdir=/etc \
  --enable-pop --enable-imap --enable-smtp \
  --with-sasl --with-ssl=/usr --without-idn \
  --enable-hcache --enable-pgp --enable-inodesort \
  --enable-compressed --with-regex \
  --enable-gpgme --with-slang=/usr

The great thing is that Mutt comes with SMTP support built in this way (via the --enable-smtp flag), so you don’t have to rely on external mail-sending programs like msmtp.

Now, here is my ~/.muttrc:

#-----------#
# Passwords #
#-----------#
set my_tmpsecret=`gpg2 -o ~/.sec/.tmp -d ~/.sec/pass.gpg`
set my_gpass1=`awk '/Gmail1/ {print $2}' ~/.sec/.tmp`
set my_gpass2=`awk '/Gmail2/ {print $2}' ~/.sec/.tmp`
set my_del=`rm -f ~/.sec/.tmp`

#---------------#
# Account Hooks #
#---------------#
account-hook . "unset imap_user; unset imap_pass; unset tunnel" # unset first!
account-hook        "imaps://user1@imap.gmail.com/" "\
    set imap_user   = user1@gmail.com \
        imap_pass   = $my_gpass1"
account-hook        "imaps://user2@imap.gmail.com/" "\
    set imap_user   = user2@gmail.com \
        imap_pass   = $my_gpass2"

#-------------------------------------#
# Folders, mailboxes and folder hooks #
#-------------------------------------#
# Setup for user1:
set folder          = imaps://user1@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user1@imap.gmail.com/ "\
    set folder      = imaps://user1@imap.gmail.com/ \
        spoolfile   = +INBOX \
        postponed   = +[Gmail]/Drafts \
        record      = +[Gmail]/'Sent Mail' \
        from        = 'My Real Name <user1@gmail.com> ' \
        realname    = 'My Real Name' \
        smtp_url    = smtps://user1@smtp.gmail.com \
        smtp_pass   = $my_gpass1"

# Setup for user2:
set folder          = imaps://user2@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user2@imap.gmail.com/ "\
    set folder      = imaps://user2@imap.gmail.com/ \
        spoolfile   = +INBOX \
        postponed   = +[Gmail]/Drafts \
        record      = +[Gmail]/'Sent Mail' \
        from        = 'My Real Name <user2@gmail.com> ' \
        realname    = 'My Real Name' \
        smtp_url    = smtps://user2@smtp.gmail.com \
        smtp_pass   = $my_gpass2"

#--------#
# Macros #
#--------#
macro index <F1> "y12<return><return>" # jump to mailbox number 12 (user1 inbox)
macro index <F2> "y6<return><return>"  # jump to mailbox number 6 (user2 inbox)
#-----------------------#
# Gmail-specific macros #
#-----------------------#
# to delete more than 1 message, just mark them with "t" key and then do "d" on them
macro index d ";s+[Gmail]/Trash<enter><enter>" "Move to Gmail's Trash"
# delete message, but from pager (opened email)
macro pager d "s+[Gmail]/Trash<enter><enter>"  "Move to Gmail's Trash"
# undelete messages
macro index u ";s+INBOX<enter><enter>"         "Move to Gmail's INBOX"
macro pager u "s+INBOX<enter><enter>"          "Move to Gmail's INBOX"

#-------------------------#
# Misc. optional settings #
#-------------------------#
# Check for mail in the currently open IMAP mailbox every 1 min
set timeout         = 60
# Check for new mail in ALL mailboxes every 2 min
set mail_check      = 120
# keep imap connection alive by polling intermittently (time in seconds)
set imap_keepalive  = 300
# allow mutt to open new imap connection automatically
unset imap_passive
# store message headers locally to speed things up
# (the ~/.mutt folder MUST exist! Arch does not create it by default)
set header_cache    = ~/.mutt/hcache
# sort mail by threads
set sort            = threads
# and sort threads by date
set sort_aux        = last-date-received

A couple of things are not so obvious.
The passwords are stored in a file called pass.gpg, formatted like this:

Gmail1 user1-password
Gmail2 user2-password

However, it’s encrypted, so you have to decrypt it every time you want to access it. This way, you don’t store your passwords in plaintext! The only drawback is that you have to manually type in your GnuPG password every time you start Mutt, but hey, you only have to type in one password to get the passwords of all your accounts! So it’s not that bad. You can see that the passwords are fed to the variables $my_gpass1 and $my_gpass2. (UPDATE January 3, 2012: Mallik has kindly pointed out in the comments that the “my_” prefix is actually a mutt-specific way of defining custom variables! You must use this syntax — unless you want to get bitten by mysterious “unknown variable” error messages!) One HUGE caveat here: if your password contains a dollar sign ($), say “$quarepant$”, then you have to escape it like this:

\\\$quarepant\\\$

in the pass.gpg file. Yes, those are three backslashes for each instance of “$”. This little-known fact caused me hours of pain; if it weren’t for my familiarity with escape sequences and shell variables, I would never have figured it out. Now, I don’t know what other characters have to be escaped, but here’s a way to find out: change your Gmail password to contain one weird character like “#”, then manually type in the password like “set my_gpass1 = #blahblah” and see if using $my_gpass1 works. If not, keep adding backslashes (“\#”, then “\\#”, etc.) until it does work. Repeat this procedure for all the other punctuation characters, if necessary. Then change your Gmail password back to its original form, and use the knowledge you gained to put the correctly-escaped password inside your pass.gpg file.
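To see why the dollar signs are such a problem, here is a rough model of what one layer of shell expansion does to such a password (the password and variable names are made up for illustration; mutt’s actual processing may involve more layers of expansion, which is presumably why three backslashes end up being needed):

```shell
# A password containing dollar signs, exactly as written in the file:
secret='$quarepant$'
# Simulate one extra round of expansion: '$quarepant' is read as a
# (nonexistent, hence empty) variable, so most of the password vanishes.
expanded=$(eval "echo $secret")
# With the dollar signs backslash-escaped, the password survives one round:
escaped='\$quarepant\$'
survives=$(eval "echo $escaped")
echo "$expanded"  # prints only: $
echo "$survives"  # prints: $quarepant$
```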

You can see that I use awk to get the correct region of text from pass.gpg. (I just copied this usage from the guide here.) If you don’t like awk, you can use these equivalent lines instead (stdin redirection, grep, and cut):

...
set my_gpass1=`<~/.sec/.tmp grep Gmail1 | cut -d " " -f 2`
set my_gpass2=`<~/.sec/.tmp grep Gmail2 | cut -d " " -f 2`
...

The <F1> and <F2> macros are just there to let you quickly switch between the two accounts. The numbers 12 and 6 happen to be the INBOX folders of my two Gmail accounts. To check which numbers are right for you, press “y” to list all folders, with their corresponding numbers on the left.

As for the other macros, these are only required because of Gmail’s funky (nonstandard) way of handling IMAP email. Without these macros, if you just press “d” to delete an email, Gmail will NOT send it to the Trash folder, but to the “All Mail” folder. This behavior is strange, because if you use the web interface on Gmail, clicking on the “Delete” button moves that email to the Trash folder. I guess Google wants you to use their web interface, or something. Anyway, the delete and undelete macros here make Gmail’s IMAP behave the same way as Gmail’s web interface. The only drawback is that you get a harmless error message (“Mailbox was externally modified. Flags may be wrong.”) at the bottom every time you use them. Anyway, I’ve tested these macros, and they work for me as expected.

The optional settings at the bottom are pretty self-explanatory, and aren’t strictly required. The most important one is the header_cache value. It is a must if you have thousands of emails in your inboxes, because otherwise Mutt will fetch the headers of every email each time it starts up.

Here are some not-so-obvious shortcut keys from the “index” menu (i.e., the list of emails, as opposed to looking at a single email, which is the “pager”; see my macros above) to get you started:

c   (change folder inside the current active account; press ? to bring up a menu)
y   (look at all of your accounts and folders)
*   (go to the very last (newest) email)
TAB (go to the next unread email)

The other shortcut keys that are displayed on the top line will get you going. To customize colors in Mutt, you can start with this 256-color template (just paste it into your .muttrc):

# Source: http://trovao.droplinegnome.org/stuff/dotmuttrc
# Screenshot: http://trovao.droplinegnome.org/stuff/mutt-zenburnt.png
#
# This is a zenburn-based muttrc color scheme that is not (even by far)
# complete. There's no copyright involved. Do whatever you want with it.
# Just be aware that I won't be held responsible if the current color-scheme
# explodes your mutt.

# general-doesn't-fit stuff
color normal     color188 color237
color error      color115 color236
color markers    color142 color238
color tilde      color108 color237
color status     color144 color234

# index stuff
color indicator  color108 color236
color tree       color109 color237
color index      color188 color237 ~A
color index      color188 color237 ~N
color index      color188 color237 ~O
color index      color174 color237 ~F
color index      color174 color237 ~D

# header stuff
color hdrdefault color223 color237
color header     color223 color237 "^Subject"

# gpg stuff
color body       color188 color237 "^gpg: Good signature.*"
color body       color115 color236 "^gpg: BAD signature.*"
color body       color174 color237 "^gpg: Can't check signature.*"
color body       color174 color237 "^-----BEGIN PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----BEGIN PGP SIGNATURE-----"
color body       color174 color237 "^-----END PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----END PGP SIGNATURE-----"
color body       color174 color237 "^Version: GnuPG.*"
color body       color174 color237 "^Comment: .*"

# url, email and web stuff
color body       color174 color237 "(finger|ftp|http|https|news|telnet)://[^ >]*"
color body       color174 color237 "<URL:[^ ]*>"
color body       color174 color237 "www\\.[-.a-z0-9]+\\.[a-z][a-z][a-z]?([-_./~a-z0-9]+)?"
color body       color174 color237 "mailto: *[^ ]+\(\\i?subject=[^ ]+\)?"
color body       color174 color237 "[-a-z_0-9.%$]+@[-a-z_0-9.]+\\.[-a-z][-a-z]+"

# misc body stuff
color attachment color174 color237 #Add-ons to the message
color signature  color223 color237

# quote levels
color quoted     color108 color237
color quoted1    color116 color237
color quoted2    color247 color237
color quoted3    color108 color237
color quoted4    color116 color237
color quoted5    color247 color237
color quoted6    color108 color237
color quoted7    color116 color237
color quoted8    color247 color237
color quoted9    color108 color237

Make sure your terminal emulator supports 256 colors. (On Arch, you can use the rxvt-unicode package.)
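As a quick sanity check that your terminal really does render 256 colors, you can print one of the scheme’s color pairs directly (this snippet is my addition, not part of the muttrc):

```shell
# Print a sample using the "color normal" pair from the scheme above:
# foreground 188 on background 237, via 256-color SGR escape sequences.
printf '\x1b[38;5;188m\x1b[48;5;237m zenburn 188/237 \x1b[0m\n'
```

If you see grayish text on a dark gray background (rather than garbage or plain text), you are good to go.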

UPDATE April 22, 2011: Tiny update regarding rxvt-unicode (it has shipped with 256-color support by default for a few months now).

UPDATE August 10, 2011: I did some password changes the other day, and through trial and error, figured out which special (i.e., punctuation) characters you need to escape with backslashes for your Gmail passwords in pass.gpg as discussed in this post. Here is a short list of them:

    ~ ` # $ \ ; ' "

I’m quite confident that this covers all the single-character cases (e.g., “foo~bar” or “a#b`c”). The overall theme here is to prevent the shell from expanding certain special characters (e.g., the backtick ‘`’ starts a command-substitution region, so it must be escaped here). Some characters, like the exclamation mark ‘!’, probably need to be escaped only if you have two of them in a row (i.e., \\\!! instead of !!), since ‘!!’ is expanded by most shells to refer to the last entered command. Again, it’s all very shell-specific, and because (1) I don’t really know which shell mutt uses internally, and (2) I don’t know all the nooks and crannies of shell expansion in general, I am unable to create a definitive list of which characters you must escape.
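To see the backtick problem concretely, here is a tiny demo you can run in any Bourne-style shell (nothing to do with mutt itself):

```shell
# Inside double quotes, a backtick starts command substitution and must
# be escaped with a backslash; inside single quotes it stays literal.
# Both assignments below store the exact same 7-character string.
literal='foo`bar'
escaped="foo\`bar"
[ "$literal" = "$escaped" ] && echo "identical"
```

Had the backslash been omitted in the double-quoted version, the shell would have tried to run `bar` as a command, silently mangling the password.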

Another nontrivial note: I realized that storing the just-decrypted pass.gpg file (the file ~/.sec/tmp above) is still not very secure, because deleting a regular file only unlinks it; the contents remain on disk afterward. That is, a clever individual could run some generic “undelete” program to recover the passwords after gaining control of your machine. I found a very simple solution to the problem of where to store the decrypted temporary file: use a RAM disk partition! This way, when you power off the machine, any traces of your temporarily decrypted pass.gpg file are lost.

All it takes is a one-liner /etc/fstab entry:

none    /mnt/r0    tmpfs    rw,size=4K    0    0
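With that entry in /etc/fstab, you only need to create the mount point once; you can then mount it right away instead of rebooting (a sketch, using the same /mnt/r0 path as above):

```shell
sudo mkdir -p /mnt/r0   # create the mount point once
sudo mount /mnt/r0      # mount now, using the options from /etc/fstab
findmnt /mnt/r0         # verify: should show a 4K tmpfs
```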

The size limit of 4K is the absolute minimum (I tried 1K, for 1 KiB, but it was set to 4 KiB regardless, probably because tmpfs allocates space in page-sized, i.e. 4 KiB, units). You can use the following ~/.muttrc portion to decrypt safely and securely:

#-----------#
# Passwords #
#-----------#
set my_tmpsecret=`gpg2 -o /mnt/r0/.tmp -d ~/.sec/pass.gpg`
set my_gpass1=`awk '/Gmail1/ {print $2}' /mnt/r0/.tmp`
set my_gpass2=`awk '/Gmail2/ {print $2}' /mnt/r0/.tmp`
set my_del=`shred -zun25 /mnt/r0/.tmp`

Easy, and secure. Just for fun and extra security, we use the shred utility to really make sure that the decrypted file does get erased in case (1) the machine has a long uptime (crackers, loss of control of the machine should someone forcefully prevent you from turning off the machine, etc.) or (2) the machine’s RAM somehow remembers the file contents even after a reboot. As a wise man once said, “It is better to bow too much than to bow not enough.” So it is with security measures.
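For reference, here is what that shred invocation actually does, demonstrated on a throwaway file (the flags match the muttrc line above):

```shell
# shred -zun25 explained:
#   -n25  overwrite the contents 25 times with random data
#   -z    finish with a pass of zeros to hide the shredding
#   -u    truncate and remove the file afterwards
echo "hunter2" > /tmp/demo.secret
shred -zun25 /tmp/demo.secret
[ ! -e /tmp/demo.secret ] && echo "file gone"
```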

UPDATE December 22, 2011: Erwin from the comments gave me a tip about a simpler way to handle passwords. It is so simple that I am utterly embarrassed about my previous approach using /mnt/r0 and all that, and I’ve already changed my setup to use this method! Apparently, mutt comes with the source command, which reads in mutt commands from a file (or, if the filename ends in a pipe ‘|’ character, from the standard output of that command). So, the idea is: instead of storing just the passwords in an encrypted file, store the entire set commands in a file and encrypt that. Then you can simply source it upon decryption.

~/.sec/pass.gpg

set my_gpass1="my super secret password"
set my_gpass2="my other super secret password"

Then, you can source the above upon decryption in its entirety, like this:

source "gpg --textmode -d ~/.sec/pass.gpg |"

The unusual and welcome benefit of this approach, apart from its simplicity, is that you don’t need to change how your passwords were quoted/escaped in the previous approach! I only found this out the hard way, after trying to remove the various backslashes in the belief that I needed to change the quotation level. You just use the same password strings as before, and you’re set! Also, the double quotes around the argument to source are not a typo: mutt sees that the last character is a pipe ‘|’ and interprets the string as a command to run, not a static filename.

Many thanks to Erwin for pointing this out to me.
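For completeness, here is one way (a sketch, not from the original post) to create the encrypted command file in the first place:

```shell
# Put the mutt commands in a plaintext file first...
cat > /tmp/pass.txt <<'EOF'
set my_gpass1="my super secret password"
set my_gpass2="my other super secret password"
EOF
# ...encrypt it (symmetric mode shown; gpg prompts for a passphrase --
# use -e -r <key-id> instead of -c for public-key encryption), then
# destroy the plaintext.
gpg --textmode -o ~/.sec/pass.gpg -c /tmp/pass.txt
shred -zu /tmp/pass.txt
```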

UPDATE April 16, 2012: Typo and grammar fixes.

Improved Autocall Script

I’ve updated the Autocall Ruby script that I’ve mentioned various times before, porting it to Zsh and adding a TON of features. The following posts are now obsolete:

Autocall: A Script to Watch and “Call” Programs on a Changed Target Source File
Updated Autolily Script
Auto-update/make/compile script for Lilypond: Autolily

If you haven’t read my previous posts on this topic, basically, Autocall is a script that I wrote to automate executing an arbitrary command (from the shell) any time some file is modified. In practice, it is very useful for any small/medium project that needs to compile/build something out of source code. Typical use cases are for LilyPond files, LaTeX/XeTeX, and C/C++ source code.

Anyway, the script is now very robust and much more intelligent. It now parses options and flags instead of stupidly looking at ARGV one at a time. Perhaps the best new feature is the ability to kill processes that hang. Also, you can now specify up to 9 different commands with 9 instances of the “-c” flag (additional instances are ignored). Commands 2-9 are accessible manually via the numeric keys 2-9 (command 1 is the default). This is useful if you have, for instance, different build targets in a makefile. E.g., you could do

$ autocall -c "make" -c "make -B all" -c "make clean" -l file_list

to make things a bit easier.

I use this script all the time — mostly when working with XeTeX files or compiling source code. It works best in situations where you have to do something X whenever files/directories Y change in any way. Again, the command to be executed is arbitrary, so you could use it to call some script X whenever a change is detected in a file/directory. If you use it with LaTeX/XeTeX/TeX, use the “-halt-on-error” option so that you don’t have to have autocall kill it (killing hung commands is available via the -t and -k flags). The copious comments should help you get started. Like all my stuff, it is not licensed at all — it’s released into the PUBLIC DOMAIN, without ANY warranties whatsoever in any jurisdiction (use at your own risk!).

#!/bin/zsh
# PROGRAM: autocall
# AUTHOR: Shinobu (https://zuttobenkyou.wordpress.com)
# LICENSE: PUBLIC DOMAIN
#
#
# DESCRIPTION:
#
# Autocall watches (1) a single file, (2) directory, and/or (3) a text file
# containing a list of files/directories, and if the watched files and/or
# directories become modified, runs the (first) given command string. Multiple
# commands can be provided (a total of 9 command strings are recognized) to
# manually execute different commands.
#
#
# USAGE:
#
# See msg("help") function below -- read that portion first!
#
#
# USER INTERACTION:
#
# Press "h" for help.
# Pressing a SPACE, ENTER, or "1" key forces execution of COMMAND immediately.
# Keys 2-9 are hotkeys to extra commands, if there are any.
# Press "c" for the command list.
# To exit autocall gracefully, press "q".
#
#
# DEFAULT SETTINGS:
#
# (-w) DELAY    = 5
# (-x) FACTOR   = 4
#
#
# EXAMPLES:
#
# Execute "pdflatex -halt-on-error report.tex" every time "report.tex" or "ch1.tex" is
# modified (line count is watched for report.tex via -F, timestamp for ch1.tex
# via -f; modification checked every 5 seconds by default):
#    autocall -c "pdflatex -halt-on-error report.tex" -F report.tex -f ch1.tex
#
# Same, but only look at "ch1.tex" (useful, assuming that report.tex includes
# ch1.tex), and automatically execute every 4 seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 4
#       (-x 0 or -x 1 here would also work)
#
# Same, but also automatically execute every 20 (5 * 4) seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 4
#
# Same, but automatically execute every 5 (5 * 1) seconds (-w is 5 by default):
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 1
#
# Same, but automatically execute every 1 (1 * 1) second:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 1
#
# Same, but automatically execute every 17 (1 * 17) seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its byte size, not line count:
#    autocall -c "pdflatex -halt-on-error report.tex" -b ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its timestamp instead (i.e., every time
# this file is saved, the modification timestamp will be different):
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -w 1 -x 17
#
# Same, but also look at the contents of directory "images/ocean":
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -w 1 -x 17
#
# Same, but also look at the contents of directory "other" recursively:
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -D other -w 1 -x 17
#
# Same, but look at all files and/or directories (recursively) listed in file
# "watchlist" instead:
#    autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -w 1 -x 17
#
# Same, but also look at "newfile.tex":
#    autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -f newfile.tex -w 1 -x 17
#
# Same, but also allow manual execution of "make clean" with hotkey "2":
#    autocall -c "pdflatex -halt-on-error report.tex" -c "make clean" -l watchlist -f newfile.tex -w 1 -x 17
#
###############################################################################
###############################################################################

#-----------------#
# Local functions #
#-----------------#

msg () {
    case $1 in
        "help")
echo "
autocall: Usage:

autocall [OPTIONS]

Required parameter:
-c COMMAND      The command to be executed (put COMMAND in quotes). Note that
                COMMAND can be a set of multiple commands, e.g. \"make clean;
                make\". You can also specify multiple commands by invoking
                -c COMMAND multiple times -- the first 9 of these are set to
                hotkeys 1 through 9, if present. This is useful if you want to
                have a separate command that is available and can only be
                executed manually.

One or more required parameters (but see -x below):
-f FILE         File to be watched. Modification detected by time.
-F FILE         File to be watched. Modification detected by line-size.
-b FILE         File to be watched. Modification detected by bytes.
-d DIRECTORY    Directory to be watched. Modification detected by time.
-D DIRECTORY    Directory to be watched, recursively. Modification
                detected by time.
-l FILE         Text file containing a list of files/directories (each on
                its own line) to be watched (directories listed here are
                watched recursively). Modification is detected with 'ls'.

Optional parameters:
-w DELAY        Wait DELAY seconds before checking on the watched
                files/directories for modification; default 5.
-t TIMEOUT      If COMMAND does not finish execution after TIMEOUT seconds,
                send a SIGTERM signal to it (but do nothing else afterwards).
-k KDELAY       If COMMAND does not finish execution after TIMEOUT seconds,
                then wait KDELAY seconds and send SIGKILL to it if COMMAND is
                still running. If only -k is given without -t, then -t is
                automatically set to the same value as KDELAY.
-x FACTOR       Automatically execute the command repeatedly every DELAY *
                FACTOR seconds, regardless of whether the watched
                files/directories were modified. If FACTOR is zero, it is set
                to 1. If -x is set, then -f, -d, and -l are not required (i.e.,
                if only the -c and -x options are specified, autocall will
                simply act as a while loop, executing COMMAND every DELAY *
                FACTOR seconds). Since the interval is (DELAY * FACTOR)
                seconds, setting DELAY to 1 makes FACTOR itself the
                interval in seconds.
-a              Same as \`-x 1'
-h              Show this page and exit (regardless of other parameters).
-v              Show version number and exit (regardless of other parameters).
"
            exit 0
            ;;
        "version")
            echo "autocall version 1.0"
            exit 0
            ;;
        *)
            echo "autocall: $1"
            exit 1
            ;;
    esac
}

is_number () {
    if [[ $(echo $1 | sed 's/^[0-9]\+//' | wc -c) -eq 1 ]]; then
        true
    else
        false
    fi
}

autocall_exec () {
    timeout=$2
    killdelay=$3
    col=""
    case $4 in
        1) col=$c1 ;;
        2) col=$c2 ;;
        3) col=$c3 ;;
        4) col=$c4 ;;
        5) col=$c5 ;;
        6) col=$c6 ;;
        *) col=$c1 ;;
    esac
    echo "\nautocall:$c2 [$(date --rfc-3339=ns)]$ce$col $5$ce"
    if [[ $# -eq 7 ]]; then
        diff -u0 -B -d <(echo "$6") <(echo "$7") | tail -n +4 | sed -e "/^[@-].\+/d" -e "s/\(\S\+\s\+\S\+\s\+\S\+\s\+\S\+\s\+\)\(\S\+\s\+\)\(\S\+\s\+\S\+\s\+\S\+\s\+\)/\1$c1\2$ce$c2\3$ce/" -e "s/^/  $c1>$ce /"
        echo
    fi
    echo "autocall: calling command \`$c4$1$ce'..."
    # see the "z" flag under PARAMETER EXPANSION in "man zshexpn" for more info
    if [[ $tflag == true || $kflag == true ]]; then
        # the 'timeout' command gives nice exit statuses -- it gives 124 if
        # command times out, but if the command exits with an error of its own,
        # it gives that error number (so if the command doesn't time out, but
        # exits with 4 or 255 or whatever, it (the timeout command) will exit
        # with that number instead)

        # note: if kflag is true, then tflag is always true
        com_exit_status=0
        if [[ $kflag == true ]]; then
            eval timeout -k $killdelay $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
            com_exit_status=$pipestatus[1]
        else
            eval timeout $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
            com_exit_status=$pipestatus[1]
        fi
        if [[ $com_exit_status -eq 124 ]]; then
            echo "\n${c6}autocall: command timed out$ce"
        elif [[ $com_exit_status -ne 0 ]]; then
            echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
        else
            echo "\n${c1}autocall: command executed successfully$ce"
        fi
    else
        eval $1 2>&1 | sed "s/^/  $col>$ce /"
        com_exit_status=$pipestatus[1]
        if [[ $com_exit_status -ne 0 ]]; then
            echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
        else
            echo "\n${c1}autocall: command executed successfully$ce"
        fi
    fi
}

#------------------#
# Global variables #
#------------------#

# colors
c1="\x1b[1;32m" # bright green
c2="\x1b[1;33m" # bright yellow
c3="\x1b[1;34m" # bright blue
c4="\x1b[1;36m" # bright cyan
c5="\x1b[1;35m" # bright purple
c6="\x1b[1;31m" # bright red
ce="\x1b[0m"

coms=()
delay=5
xdelay_factor=4
f=()
F=()
b=()
d=()
D=()
l=()
l_targets=()
wflag=false
xflag=false
tflag=false
kflag=false
timeout=0
killdelay=0

tstampf="" # used to DISPLAY modification only for -f flag
linestamp="" # used to DETECT modification only for -f flag
tstampF="" # used to detect AND display modifications for -F flag
tstampb="" # used to DISPLAY modification only for -b flag
bytestamp="" # used to DETECT modification only for -b flag
tstampd="" # used to detect AND display modifications for -d flag
tstampD="" # used to detect AND display modifications for -D flag
tstampl="" # used to detect AND display modifications for -l flag

tstampf_new=""
linestamp_new=""
tstampF_new=""
tstampb_new=""
bytestamp_new=""
tstampd_new=""
tstampD_new=""
tstampl_new=""

#----------------#
# PROGRAM START! #
#----------------#

#---------------#
# Parse options #
#---------------#

# the leading ":" in the opstring silences getopts's own error messages;
# the colon after a single letter indicates that that letter requires an
# argument

# first parse for the presence of any -h and -v flags (while silently ignoring
# the other recognized options)
while getopts ":c:w:f:F:b:d:D:l:t:k:x:ahv" opt; do
    case "$opt" in
    h)  msg "help" ;;
    v)  msg "version" ;;
    *) ;;
    esac
done
# re-parse from the beginning again if there were no -h or -v flags
OPTIND=1
while getopts ":c:w:f:F:b:d:D:l:t:k:x:a" opt; do
    case "$opt" in
    c)
        com_binary=$(echo "$OPTARG" | sed 's/ \+/ /g' | sed 's/;/ /g' | cut -d " " -f1)
        if [[ $(which $com_binary) == "$com_binary not found" ]]; then
            msg "invalid command \`$com_binary'"
        else
            coms+=("$OPTARG")
        fi
        ;;
    w)
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                wflag=true
                delay=$OPTARG
            else
                msg "DELAY must be greater than 0"
            fi
        else
            msg "invalid DELAY \`$OPTARG'"
        fi
        ;;
    f)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            f+=("$OPTARG")
        fi
        ;;
    F)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            F+=("$OPTARG")
        fi
        ;;
    b)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            b+=("$OPTARG")
        fi
        ;;
    d)
        if [[ ! -d "$OPTARG" ]]; then
            msg "directory \`$OPTARG' does not exist"
        else
            d+=("$OPTARG")
        fi
        ;;
    D)
        if [[ ! -d "$OPTARG" ]]; then
            msg "directory \`$OPTARG' does not exist"
        else
            D+=("$OPTARG")
        fi
        ;;
    l)
        if [[ ! -f $OPTARG ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            l+=("$OPTARG")
        fi
        ;;
    t)
        tflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                timeout=$OPTARG
            else
                msg "TIMEOUT must be greater than 0"
            fi
        else
            msg "invalid TIMEOUT \`$OPTARG'"
        fi
        ;;
    k)
        kflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                killdelay=$OPTARG
            else
                msg "KDELAY must be greater than 0"
            fi
        else
            msg "invalid KDELAY \`$OPTARG'"
        fi
        ;;
    x)
        xflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                xdelay_factor=$OPTARG
            elif [[ $OPTARG -eq 0 ]]; then
                xdelay_factor=1
            else
                msg "invalid FACTOR \`$OPTARG'"
            fi
        fi
        ;;
    a) xflag=true; xdelay_factor=1 ;; # same as `-x 1'
    :)
        msg "missing argument for option \`$OPTARG'"
        ;;
    *)
        msg "unrecognized option \`$OPTARG'"
        ;;
    esac
done

#-----------------#
# Set misc values #
#-----------------#

if [[ $kflag == true && $tflag == false ]]; then
    tflag=true
    timeout=$killdelay
fi

#------------------#
# Check for errors #
#------------------#

# check that the given options are in good working order
if [[ -z $coms[1] ]]; then
    msg "help"
elif [[ (-z $f && -z $F && -z $b && -z $d && -z $D && -z $l) && $xflag == false ]]; then
    echo "autocall: see help with -h"
    msg "at least one of the (1) -f, -F, -b, -d, -D, or -l parameters, or (2) the -x parameter, is required"
fi

#-------------------------------#
# Record state of watched files #
#-------------------------------#

if [[ -n $F ]]; then
    if [[ $#F -eq 1 ]]; then
        linestamp=$(wc -l $F)
    else
        linestamp=$(wc -l $F | head -n -1) # remove the last "total" line
    fi
    tstampF=$(ls --full-time $F)
fi
if [[ -n $f ]]; then
    tstampf=$(ls --full-time $f)
fi
if [[ -n $b ]]; then
    if [[ $#b -eq 1 ]]; then
        bytestamp=$(wc -c $b)
    else
        bytestamp=$(wc -c $b | head -n -1) # remove the last "total" line
    fi
    tstampb=$(ls --full-time $b)
fi
if [[ -n $d ]]; then
    tstampd=$(ls --full-time $d)
fi
if [[ -n $D ]]; then
    tstampD=$(ls --full-time -R $D)
fi
if [[ -n $l ]]; then
    for listfile in $l; do
        if [[ ! -f $listfile ]]; then
            msg "file \`$listfile' does not exist"
        else
            while read line; do
                if [[ ! -e "$line" ]]; then
                    msg "\`$listfile': file/path \`$line' does not exist"
                else
                    l_targets+=("$line")
                fi
            done < $listfile # read contents of $listfile!
        fi
    done
    tstampl=$(ls --full-time -R $l_targets)
fi

#----------------------#
# Begin execution loop #
#----------------------#
# This is like Russian Roulette (where "firing" is executing the command),
# except that all the chambers are loaded, and that on every new turn, instead
# of picking the chamber randomly, we look at the very next chamber. After
# every chamber is given a turn, we reload the gun and start over.
#
# If we detect file/directory modification, we pull the trigger. We can also
# pull the trigger by pressing SPACE or ENTER. If the -x option is provided,
# the last chamber will be set to "always shoot" and will always fire (if the
# trigger hasn't been pulled by the above methods yet).

if [[ $xflag == true && $xdelay_factor -le 1 ]]; then
    xdelay_factor=1
fi
com_num=1
for c in $coms; do
    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
    let com_num+=1
done
echo "autocall: press keys 1-$#coms to execute a specific command"
if [[ $wflag == true ]]; then
    echo "autocall: modification check interval set to $delay sec"
else
    echo "autocall: modification check interval set to $delay sec (default)"
fi
if [[ $xflag == true ]]; then
    echo "autocall: auto-execution interval set to ($delay * $xdelay_factor) = $(($delay*$xdelay_factor)) sec"
fi
if [[ $tflag == true ]]; then
    echo "autocall: TIMEOUT set to $timeout"
    if [[ $kflag == true ]]; then
        echo "autocall: KDELAY set to $killdelay"
    fi
fi
echo "autocall: press ENTER or SPACE to execute manually"
echo "autocall: press \`c' for command list"
echo "autocall: press \`h' for help"
echo "autocall: press \`q' to quit"
key=""
while true; do
    for i in {1..$xdelay_factor}; do
        #------------------------------------------#
        # Case 1: the user forces manual execution #
        #------------------------------------------#
        # read a single key from the user
        read -s -t $delay -k key
        case $key in
            # note the special notation $'\n' to detect an ENTER key
            $'\n'|" "|1)
                autocall_exec $coms[1] $timeout $killdelay 4 "manual execution"
                key=""
                continue
                ;;
            2|3|4|5|6|7|8|9)
                if [[ -n $coms[$key] ]]; then
                    autocall_exec $coms[$key] $timeout $killdelay 4 "manual execution"
                    key=""
                    continue
                else
                    echo "autocall: command slot $key is not set"
                    key=""
                    continue
                fi
                ;;
            c)
                com_num=1
                echo ""
                for c in $coms; do
                    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
                    let com_num+=1
                done
                key=""
                continue
                ;;
            h)
                echo "\nautocall: press \`c' for command list"
                echo "autocall: press \`h' for help"
                echo "autocall: press \`q' to exit"
                com_num=1
                for c in $coms; do
                    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
                    let com_num+=1
                done
                echo "autocall: press keys 1-$#coms to execute a specific command"
                echo "autocall: press ENTER or SPACE or \`1' to execute first command manually"
                key=""
                continue
                ;;
            q)
                echo "\nautocall: exiting..."
                exit 0
                ;;
            *) ;;
        esac

        #------------------------------------------------------------------#
        # Case 2: modification is detected among watched files/directories #
        #------------------------------------------------------------------#
        if [[ -n $f ]]; then
            tstampf_new=$(ls --full-time $f)
        fi
        if [[ -n $F ]]; then
            if [[ $#F -eq 1 ]]; then
                linestamp_new=$(wc -l $F)
            else
                linestamp_new=$(wc -l $F | head -n -1) # remove the last "total" line
            fi
            tstampF_new=$(ls --full-time $F)
        fi
        if [[ -n $b ]]; then
            if [[ $#b -eq 1 ]]; then
                bytestamp_new=$(wc -c $b)
            else
                bytestamp_new=$(wc -c $b | head -n -1) # remove the last "total" line
            fi
            tstampb_new=$(ls --full-time $b)
        fi
        if [[ -n $d ]]; then
            tstampd_new=$(ls --full-time $d)
        fi
        if [[ -n $D ]]; then
            tstampD_new=$(ls --full-time -R $D)
        fi
        if [[ -n $l ]]; then
            tstampl_new=$(ls --full-time -R $l_targets)
        fi
        if [[ -n $f && "$tstampf" != "$tstampf_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampf" "$tstampf_new"
            tstampf=$tstampf_new
            continue
        elif [[ -n $F && "$linestamp" != "$linestamp_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampF" "$tstampF_new"
            linestamp=$linestamp_new
            tstampF=$tstampF_new
            continue
        elif [[ -n $b && "$bytestamp" != "$bytestamp_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampb" "$tstampb_new"
            bytestamp=$bytestamp_new
            tstampb=$tstampb_new
            continue
        elif [[ -n $d && "$tstampd" != "$tstampd_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampd" "$tstampd_new"
            tstampd=$tstampd_new
            continue
        elif [[ -n $D && "$tstampD" != "$tstampD_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampD" "$tstampD_new"
            tstampD=$tstampD_new
            continue
        elif [[ -n $l && "$tstampl" != "$tstampl_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampl" "$tstampl_new"
            tstampl=$tstampl_new
            continue
        fi

        #-----------------------------------------------------#
        # Case 3: periodic, automatic execution was requested #
        #-----------------------------------------------------#
        if [[ $xflag == true && $i -eq $xdelay_factor ]]; then
            autocall_exec $coms[1] $timeout $killdelay 3 "commencing auto-execution ($(($delay*$xdelay_factor)) sec)"
        fi
    done
done

# vim:syntax=zsh

Zsh: univ_open() update (new dir_info() script)

I’ve been heavily using the univ_open() shell function (a universal file/directory opener from the command line) that I wrote for many months now, and have made some slight changes to it. It’s all PUBLIC DOMAIN like my other stuff, so you have my blessing if you want to make it better (if you do, send me a link to your version or something).

What’s new? The code that prettily lists all directory contents after entering a directory has been split off into its own function, called “dir_info”. So there are 2 functions now: univ_open(), which opens any file/directory from the command line according to predefined preferred apps (dead simple to do, as you can see in the code), and dir_info(), a prettified “ls” replacement that also lists helpful info like “largest file” and the number of files in the directory.

univ_open() has not changed much — the only new stuff are the helpful error messages.
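To give an idea of typical usage, here is a sketch of ~/.zshrc aliases for dir_info()’s numbered modes (only the “l” mapping is from this post; the other mode assignments are my own guesses):

```shell
# "l" -> "dir_info 1" is from the post; the rest are hypothetical.
alias l='dir_info 1'    # verbose listing
alias ll='dir_info 0'   # simple listing
alias lk='dir_info 2'   # simple, sorted by size
alias lj='dir_info 4'   # simple, sorted by time
```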

dir_info() should be used by itself from a shell alias (I’ve aliased all its 8 modes to quick keys like “ll”, “l”, “lk”, “lj”, etc. for quick access in ~/.zshrc). For example, “l” is aliased to “dir_info 1”. Here is the code below for dir_info():

#!/bin/zsh

# dir_info(), a function that acts as an intelligent "ls". This function is
# used by univ_open() to display directory contents, but it should additionally
# be used by itself. By default, you can call dir_info() without any arguments,
# but there are 8 presets that are hardcoded below (presets 0-7). Thus, you
# could do "dir_info [0-7]" to use those modes. You should alias these modes to
# something easy like "ll" or "l", etc. from your ~/.zshrc. For a detailed
# explanation of the 8 presets, see the code below.

dir_info() {
    # colors
    c1="\x1b[1;32m" # bright green
    c2="\x1b[1;33m" # bright yellow
    c3="\x1b[1;34m" # bright blue
    c4="\x1b[1;36m" # bright cyan
    c5="\x1b[1;35m" # bright purple
    c6="\x1b[1;31m" # bright red
    # only pre-emptively give newline to prettify listing of directory contents
    # if the directory is not empty
    [[ $(ls -A1 | wc -l) -ne 0 ]] && echo
    countcolor() {
        if [[ $1 -eq 0 ]]; then
            echo $c4
        elif [[ $1 -le 25 ]]; then
            echo $c1
        elif [[ $1 -le 50 ]]; then
            echo $c2
        elif [[ $1 -le 100 ]]; then
            echo $c3
        elif [[ $1 -le 200 ]]; then
            echo $c5
        else
            echo $c6
        fi
    }

    sizecolor() {
        case $1 in
            B)
                echo "" # bytes: no highlighting
                ;;
            K)
                echo $c1
                ;;
            M)
                echo $c2
                ;;
            G)
                echo $c3
                ;;
            T)
                echo $c5
                ;;
            *)
                echo $c4
                ;;
        esac
    }
    ce="\x1b[0m"

    ctag_size=""

    # only show information if the directory is not empty
    if [[ $(ls -A1 | wc -l) -gt 0 ]]; then
        size=$(ls -Ahl | head -n1 | head -c -2)
        suff=$(ls -Ahl | head -n1 | tail -c -2)
        size_num=$(echo -n $size | cut -d " " -f2 | head -c -1)
        ctag_size=$(sizecolor $suff)
        simple=false
        # show variation of `ls` based on given argument
        case $1 in
            0) # simple
                ls -Chs -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            1) # verbose
                ls -Ahl --color | tail -n +2
                ;;
            2) # simple, but sorted by size (biggest file on bottom with -r flag)
                ls -ChsSr -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            3) # verbose, but sorted by size (biggest file on bottom with -r flag)
                ls -AhlSr --color | tail -n +2
                ;;
            4) # simple, but sorted by time (newest file on bottom with -r flag)
                ls -Chstr -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            5) # verbose, but sorted by time (newest file on bottom with -r flag)
                ls -Ahltr --color | tail -n +2
                ;;
            6) # simple, but sorted by extension
                ls -ChsX -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            7) # verbose, but sorted by extension
                ls -AhlX --color | tail -n +2
                ;;
            *)
                simple=true
                ls --color
                ;;
        esac

        # show number of files or number of shown vs hidden (as a fraction),
        # depending on which version of `ls` was used
        denom=$(ls -A1 | wc -l)
        numer=$denom
    # redefine numer to be a smaller number if we're in simple mode (and
    # only showing non-dotfiles/non-dotdirectories)
        $simple && numer=$(ls -1 | wc -l)
        ctag_count=$(countcolor $denom)

        if [[ $numer != $denom ]]; then
            if [[ $numer -gt 1 ]]; then
                echo -n "\nfiles $numer/$ctag_count$denom$ce | "
            else
                dotfilecnt=$(($denom - $numer))
                s=""
                [[ $dotfilecnt -gt 1 ]] && s="s"
                echo -n "\nfiles $numer/$ctag_count$denom$ce ($dotfilecnt dotfile$s) | "
            fi
        else
            echo -n "\nfiles $ctag_count$denom$ce | "
        fi

        if [[ $suff != "0" ]]; then
            echo -n "size $ctag_size$size_num $suff$ce"
        else
            echo -n "size$ctag_size nil$ce"
        fi

        # Find the biggest file in this directory.
        #
        # First, use 'ls' to list all contents sorted by size (the -r flag
        # puts the biggest entry last); then strip all non-regular entries
        # (such as directories and symlinks), which leaves blank lines in
        # their place; then squeeze the repeated newlines with 'tr' --
        # otherwise, if the biggest entries were directories or symlinks, the
        # output would end in blank lines and the upcoming 'tail -n 1' would
        # return an empty line even though a valid file exists; then fetch
        # the last line of this list, which is the biggest file; finally,
        # replace all runs of contiguous spaces with single spaces so that we
        # can safely call 'cut' with the single space " " as its delimiter to
        # extract the filename.
        big=$(ls -lSr | sed 's/^[^-].\+//' | tr -s "\n" | tail -n 1 | sed 's/ \+/ /g' | cut -d " " -f9-)
        if [[ -f "$big" ]]; then
            # since the display needs a file size suffix (B, K, M, G, etc.),
            # we reassign $big_num here from pure block size to
            # human-readable notation; if the file is smaller than 4096
            # bytes, we use its exact byte count instead, which is more
            # "accurate" (not in terms of disk space usage, but in terms of
            # the actual number of bytes inside the file)
            suff=""
            if [[ $(du -b "$big" | cut -f1) -lt 4096 ]]; then
                big_num="$(du -b "$big" | cut -f1)"
                suff="B"
            else
                big_num=$(ls -hs "$big" | cut -d " " -f1 | sed 's/[a-zA-Z]//')
                suff=$(ls -hs "$big" | cut -d " " -f1 | tail -c -2)
            fi
            ctag_size=$(sizecolor "$suff")
            echo " | \`$big' $ctag_size$big_num $suff$ce"
        else
            echo
        fi
    fi
}

Here is the updated univ_open() shell function:

#!/bin/zsh

# univ_open() is intended to be passed a SINGLE valid FILE or DIRECTORY. For
# illustrative purposes, we assume that "d" is aliased to univ_open() in
# ~/.zshrc. If optional flags are desired, either prepend or append them as
# appropriate. E.g., if you have jpg's to be opened by eog, then "d -f
# file.jpg" or "d file.jpg -f" is the same as "eog -f file.jpg" or "eog
# file.jpg -f", respectively. The only requirement when passing flags is that
# either the first word or the last word must be a valid filename.

# univ_open() requires the custom shell function dir_info() (`ls` with saner
# default args) to work properly.

univ_open() {
    if [[ $# -eq 0 ]]; then
        # no arguments given: go to the home directory ("cd" without any
        # arguments defaults to $HOME)
        cd && dir_info
    elif [[ -f $1 || -f ${=@[-1]} ]]; then
        # if we're here, the user either (1) provided a single valid file
        # name, or (2) a number of command-line arguments PLUS a single valid
        # file name; use of the $@ variable ensures that we preserve all the
        # arguments the user passed to us

        # $1 is the first arg; ${=@[-1]} is the last arg (i.e., if the user
        # passes "-o -m FILE" to us, then the last arg is the filename)
        #
        # we use && and || as a simple ternary operation (like ? and : in C)
        [[ -f $1 ]] && file=$1 || file=${=@[-1]}
        case $file:e:l in
            (doc|odf|odt|rtf)
                soffice -writer $@ &>/dev/null & disown
            ;;
            (pps|ppt)
                soffice -impress $@ &>/dev/null & disown
            ;;
            (htm|html)
                firefox $@ &>/dev/null & disown
            ;;
            (eps|pdf|ps)
                evince -f $@ &>/dev/null & disown
            ;;
            (bmp|gif|jpg|jpeg|png|svg|tga|tiff)
                eog $@ &>/dev/null & disown
            ;;
            (psd|xcf)
                gimp $@ &>/dev/null & disown
            ;;
            (aac|flac|mp3|ogg|wav|wma)
                mplayer $@
            ;;
            (mid|midi)
                timidity $@
            ;;
            (asf|avi|flv|ogm|ogv|mkv|mov|mp4|mpg|mpeg|rmvb|wmv)
                smplayer $@ &>/dev/null & disown
            ;;
            (djvu)
                djview $@
            ;;
            (exe)
                wine $@ &>/dev/null & disown
            ;;
            *)
                vim $@
            ;;
        esac
    elif [[ -d $1 ]]; then
        # if the first argument is a valid directory, just cd into it and
        # ignore any trailing arguments (in zsh, '#' is the same as ARGC and
        # denotes the number of arguments passed to the script, so '$#' is
        # the same as $ARGC)
        if [[ $# -eq 1 ]]; then
            cd $@ && dir_info
        else
            # if the first argument was a valid directory, but there was more than 1 argument, then we ignore these
            # trailing args but still cd into the first given directory
            cd $1 && dir_info
            # i.e., show arguments 2 through the last one (the last one has
            # index -1 in the argument array)
            echo "\nuniv_open: argument(s) ignored: \`${=@[2,-1]}'"
            echo "univ_open: went to \`$1'\n"
        fi
    elif [[ ! -e $@ ]]; then
        [[ $# -gt 1 ]] && head=$1:h || head=$@:h
        # if the given path does not exist, go to the nearest valid parent
        # directory; we use a while loop to find the closest existing
        # directory, in case the user gave a butchered-up path
        while [[ ! -d $head ]]; do head=$head:h; done
        cd $head && dir_info
        echo "\nuniv_open: path \`$@' does not exist"
        [[ $head == "." ]] && echo "univ_open: stayed in same directory\n" || echo "univ_open: relocated to nearest parent directory \`$head'\n"
    else
        # possible error -- should re-program the above if this ever happens,
        # but it seems unlikely
        echo "\nuniv_open: error -- exiting peacefully"
    fi
}

Put these two functions inside your ~/.zsh/func folder with filenames “dir_info” and “univ_open” and autoload them. I personally do this in my ~/.zshrc:

fpath=(~/.zsh/func $fpath)
autoload -U ~/.zsh/func/*(:t)

to have them autoloaded. Then, simply alias “d” to “univ_open”, “ll” to “dir_info 0”, and “l” to “dir_info 1”, and you’re set (the aliases to dir_info are optional, but highly recommended). The real star of the show in this post is dir_info() and how it displays various information about the directory after the “ls” output is printed on the screen. I hope you enjoy them as much as I do.
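Concretely, the aliases mentioned above might look like this in your ~/.zshrc (the names “d”, “ll”, and “l” are just my personal choices; adjust to taste):

```shell
# optional, but highly recommended aliases for the two functions above
alias d='univ_open'   # "d FILE-OR-DIR" opens or cd's as appropriate
alias ll='dir_info 0' # simple listing
alias l='dir_info 1'  # verbose listing
```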