Xmonad.hs: The Joy of Refactoring

The more I use Haskell, the more I learn to appreciate its beauty. The thesis of this post is simple: Haskell code is easy to refactor. What follows is an account of my experience the other day extending my XMonad configuration file, and how easy and fun it was. It’s written in a newbie-friendly way for XMonad users who know little Haskell, if any.

The other day, I ended up writing something like the following into my xmonad.hs file:

myManageHook :: ManageHook
myManageHook = composeAll
    [
    ...
    , resource  =? "atWorkspace0" --> doShift "0"
    , resource  =? "atWorkspace1" --> doShift "1"
    , resource  =? "atWorkspace2" --> doShift "2"
    , resource  =? "atWorkspace3" --> doShift "3"
    , resource  =? "atWorkspace4" --> doShift "4"
    , resource  =? "atWorkspace5" --> doShift "5"
    , resource  =? "atWorkspace6" --> doShift "6"
    , resource  =? "atWorkspace7" --> doShift "7"
    , resource  =? "atWorkspace8" --> doShift "8"
    , resource  =? "atWorkspace9" --> doShift "9"
    , resource  =? "atWorkspaceF1" --> doShift "F1"
    , resource  =? "atWorkspaceF2" --> doShift "F2"
    , resource  =? "atWorkspaceF3" --> doShift "F3"
    , resource  =? "atWorkspaceF4" --> doShift "F4"
    , resource  =? "atWorkspaceF5" --> doShift "F5"
    , resource  =? "atWorkspaceF6" --> doShift "F6"
    , resource  =? "atWorkspaceF7" --> doShift "F7"
    , resource  =? "atWorkspaceF8" --> doShift "F8"
    , resource  =? "atWorkspaceF9" --> doShift "F9"
    , resource  =? "atWorkspaceF10" --> doShift "F10"
    , resource  =? "atWorkspaceF11" --> doShift "F11"
    , resource  =? "atWorkspaceF12" --> doShift "F12"
    ]

Even if you don’t know Haskell, you can tell that the above is very repetitive. It looks a bit stupid. Vim’s visual block mode makes editing these lines in bulk pretty easy, but your xmonad.hs file is a program’s source code, and source code should obey the Don’t Repeat Yourself (DRY) rule. It will make life easier down the road.

Let’s first consider what the code above means. First, we observe that myManageHook is a value of type ManageHook. The composeAll function’s type signature is as follows:

composeAll :: [ManageHook] -> ManageHook

(I loaded up XMonad from GHCi to figure this out.) So composeAll merely “compresses” a list of ManageHook types into a single ManageHook. Great! Now we know exactly what each element in the list really means:

myManageHook = composeAll
    [ a ManageHook
    , a ManageHook
    , a ManageHook
    , a ManageHook
    ]

So to convert the various “atWorkspace0”, “atWorkspace1”, etc. lines, we just need to create a function that generates a list of ManageHooks, and append that list to the one that already exists. Like this:

myManageHook :: ManageHook
myManageHook = composeAll $
    [ ... existing ManageHook items
    ]
    ++ workspaceShifts
    where
        workspaceShifts = another list of ManageHooks

Notice how we had to add in a “$” symbol, because:

myManageHook = composeAll [list 1] ++ [list 2]

means

myManageHook = (composeAll [list 1]) ++ [list 2]

which means

myManageHook = (a ManageHook) ++ [list 2]

which is definitely not what we want. We want this:

myManageHook = composeAll ([list 1] ++ [list 2])

which is

myManageHook = composeAll [lists 1 and 2 combined]

and we can do this with the “$” dollar symbol. We could just use the explicit parentheses instead; ultimately it is a matter of style/taste.
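To see the precedence issue concretely, here is a tiny standalone sketch; nothing XMonad-specific, with sum standing in for composeAll and Ints standing in for ManageHooks:

```haskell
-- `collapse` stands in for composeAll: it folds a list into one value.
collapse :: [Int] -> Int
collapse = sum

-- With `$`, the `++` happens first, so collapse sees the whole list.
good :: Int
good = collapse $ [1, 2] ++ [3, 4]

-- The same thing with explicit parentheses; purely a matter of taste.
alsoGood :: Int
alsoGood = collapse ([1, 2] ++ [3, 4])
```

(The version without `$`, collapse [1, 2] ++ [3, 4], would not even type-check, since it tries to ++ a plain Int onto a list; this is exactly the kind of mistake the type checker catches for us.)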

So, let’s keep going. workspaceShifts needs to be a list of ManageHooks. From the first code listing, it’s clear that the generated ManageHooks need only vary slightly from each other:

    [
    ...
    , resource  =? "atWorkspace0" --> doShift "0"
    , resource  =? "atWorkspace1" --> doShift "1"
    , resource  =? "atWorkspace2" --> doShift "2"
    , resource  =? "atWorkspace3" --> doShift "3"
    , resource  =? "atWorkspace4" --> doShift "4"
    , resource  =? "atWorkspace5" --> doShift "5"
    , resource  =? "atWorkspace6" --> doShift "6"
    , resource  =? "atWorkspace7" --> doShift "7"
    , resource  =? "atWorkspace8" --> doShift "8"
    , resource  =? "atWorkspace9" --> doShift "9"
    , resource  =? "atWorkspaceF1" --> doShift "F1"
    , resource  =? "atWorkspaceF2" --> doShift "F2"
    , resource  =? "atWorkspaceF3" --> doShift "F3"
    , resource  =? "atWorkspaceF4" --> doShift "F4"
    , resource  =? "atWorkspaceF5" --> doShift "F5"
    , resource  =? "atWorkspaceF6" --> doShift "F6"
    , resource  =? "atWorkspaceF7" --> doShift "F7"
    , resource  =? "atWorkspaceF8" --> doShift "F8"
    , resource  =? "atWorkspaceF9" --> doShift "F9"
    , resource  =? "atWorkspaceF10" --> doShift "F10"
    , resource  =? "atWorkspaceF11" --> doShift "F11"
    , resource  =? "atWorkspaceF12" --> doShift "F12"
    ]

So the only thing that really changes is the suffix after “atWorkspace”; it goes from “0” to “F12”. Let’s express this idea in code:

myManageHook :: ManageHook
myManageHook = composeAll $
    [ ...
    ]
    ++ workspaceShifts
    where
        workspaceShifts = genList (s1 ++ s2)
        genList :: [String] -> [ManageHook]
        genList ss = map (\s -> resource =? ("atWorkspace" ++ s) --> doShift s) ss

We create a new genList function, which takes one argument: a [String] (a “list of strings”), creatively named ss in the code above. The map function applies the same transformation to each item in a list. So here, we transform every string (s) into a ManageHook, by giving map this first argument:

(\s -> resource =? ("atWorkspace" ++ s) --> doShift s)

The backslash introduces a lambda (an anonymous, one-off function): each string from the ss list is bound to the variable s, and the right arrow (->) separates the argument from the function body. Do you see the resemblance?

    ...
    , resource  =? "atWorkspace0" --> doShift "0"
    , resource  =? "atWorkspace1" --> doShift "1"
    , resource  =? "atWorkspace2" --> doShift "2"
    , resource  =? "atWorkspace3" --> doShift "3"
    ...

    vs.

    resource =? ("atWorkspace" ++ s) --> doShift s

Nice, clean, and succinct.
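If you want to play with this shape outside of xmonad.hs, here is a sketch you can load in GHCi; it uses plain Strings instead of ManageHooks (which need XMonad to be useful), but the map-plus-lambda structure is identical:

```haskell
-- Same structure as genList, but producing readable Strings instead of
-- ManageHooks, so it runs anywhere. (genDescriptions is a made-up name
-- for illustration only.)
genDescriptions :: [String] -> [String]
genDescriptions ss =
    map (\s -> "atWorkspace" ++ s ++ " --> shift to " ++ s) ss
```

For example, genDescriptions ["0", "F1"] yields ["atWorkspace0 --> shift to 0", "atWorkspaceF1 --> shift to F1"].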

Now we just need to define those strings to feed into genList. Here’s s1:

        s1 :: [String] -- a list of strings
        s1 = map show [0..9]

Again, we use the map function. It’s such a handy little function! Haskell is very much built around such useful little named primitives (by convention, most things that look like “operators”, such as multi-punctuation arrows, ampersands, and colons, are part of some niche library, not Haskell the core language). The show function converts each Int into its String representation, so the

s1 = map show [0..9]

part really means: ["0","1","2","3","4","5","6","7","8","9"].

So that’s s1. The second list, s2, is almost identical:

s2 = map (("F" ++) . show) [1..12]

The only difference is that it adds “F” as a prefix, so that we get “F1” instead of “1”, “F2” instead of “2”, and so on.
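Both definitions are easy to check in GHCi; here is a standalone sketch (with explicit Int annotations to pin down the number type, but otherwise identical to the xmonad.hs code):

```haskell
s1, s2 :: [String]
s1 = map show [0..9 :: Int]               -- ["0","1",...,"9"]
s2 = map (("F" ++) . show) [1..12 :: Int] -- ["F1","F2",...,"F12"]
```

Together they give the 22 suffixes we need: ten digits plus twelve function keys.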

So, that’s it! The complete code is as follows:

myManageHook :: ManageHook
myManageHook = composeAll $
    [ ...
    ]
    ++ workspaceShifts
    where
        workspaceShifts = genList (s1 ++ s2)
        genList :: [String] -> [ManageHook]
        genList ss = map (\s -> resource =? ("atWorkspace" ++ s) --> doShift s) ss
        s1, s2 :: [String]
        s1 = map show [0..9]
        s2 = map (("F" ++) . show) [1..12]

We can actually shorten it even more:

myManageHook :: ManageHook
myManageHook = composeAll $
    [ ...
    ]
    ++ map (\s -> resource =? ("atWorkspace" ++ s) --> doShift s) (s1 ++ s2)
    where
        s1 = map show [0..9]
        s2 = map (("F" ++) . show) [1..12]

In the end, we’ve reduced 22 lines of stupid, repetitive code prone to human error and typos down to 4 lines of smart, modular code. I really enjoy this kind of refactoring: reduction of human-error-prone code.

What’s more, the whole experience was quite easy and enjoyable. I did not need to know what exactly a ManageHook type represented; instead, all I needed to know were the type signatures of the various existing parts. And that’s why Haskell is so fun to refactor: type signatures, combined with industrial-strength type checking, make things safe. Plus, if you look at the code, everything matters. It’s really hard to write spaghetti code in Haskell (even for relative newbies like myself).

For the curious, here is a list of type signatures for composeAll, resource, doShift, (-->), and (=?):

Prelude XMonad> :t composeAll
composeAll :: [ManageHook] -> ManageHook
Prelude XMonad> :t resource
resource :: Query String
Prelude XMonad> :t doShift
doShift :: WorkspaceId -> ManageHook
Prelude XMonad> :t (-->)
(-->) :: Query Bool -> ManageHook -> ManageHook
Prelude XMonad> :t (=?)
(=?) :: Eq a => Query a -> a -> Query Bool

UPDATE July 3, 2011: Hendrik mentions a more concise approach using list comprehensions (arguably more Haskell-y, and simpler):

myManageHook :: ManageHook
myManageHook = composeAll $
    [ ...
    ]
    ++ [resource =? ("atWorkspace" ++ s) --> doShift s 
       | s <- map show [0..9] ++ map (('F':) . show) [1..12] ]
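The comprehension builds exactly the same list as the map-based version. Here is a standalone sketch (plain Strings again, with hypothetical names, since real ManageHooks need XMonad) checking that equivalence:

```haskell
-- The 22 workspace suffixes, "0".."9" and "F1".."F12".
suffixes :: [String]
suffixes = map show [0..9 :: Int] ++ map (('F':) . show) [1..12 :: Int]

-- map with a lambda...
viaMap :: [String]
viaMap = map (\s -> "atWorkspace" ++ s) suffixes

-- ...versus a list comprehension: same result, arguably easier to read.
viaComprehension :: [String]
viaComprehension = ["atWorkspace" ++ s | s <- suffixes]
```

Note also that ('F':) and ("F" ++) are interchangeable here: consing the character 'F' onto a string is the same as prepending the one-character string "F".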

UPDATE November 8, 2011: Fixed typo.

Xorg: Switching Keyboard Layouts Consistently and Reliably from Userspace

For some time now, the XkbLayout option (set to “us, fr, de”) in my /etc/X11/xorg.conf.d/10-keyboard.conf has been broken for me. This probably has something to do with my recent xmodmap hacks in ~/.xinitrc, and possibly the interaction with the /etc/X11/xorg.conf.d/10-evdev.conf, which includes an entry to catch all keyboards. I decided to fix this problem today, and with a healthy dose of serendipity, got everything working smoothly again. Sources for my custom scripts are included, as usual.

By accident, I discovered the setxkbmap userland program. It does everything you could possibly do with keyboard layouts. It is maintained by the Xorg project, so it’s pretty much guaranteed to work. Here’s a sample run (using Arch’s xorg-setxkbmap 1.2.0-2 and xorg-server 1.10.1-1):

$ setxkbmap -help
Usage: setxkbmap [args] [<layout> [<variant> [<option> ... ]]]
Where legal args are:
-?,-help            Print this message
-compat <name>      Specifies compatibility map component name
-config <file>      Specifies configuration file to use
-device <deviceid>  Specifies the device ID to use
-display <dpy>      Specifies display to use
-geometry <name>    Specifies geometry component name
-I[<dir>]           Add <dir> to list of directories to be used
-keycodes <name>    Specifies keycodes component name
-keymap <name>      Specifies name of keymap to load
-layout <name>      Specifies layout used to choose component names
-model <name>       Specifies model used to choose component names
-option <name>      Adds an option used to choose component names
-print              Print a complete xkb_keymap description and exit
-query              Print the current layout settings and exit
-rules <name>       Name of rules file to use
-symbols <name>     Specifies symbols component name
-synch              Synchronize request w/X server
-types <name>       Specifies types component name
-v[erbose] [<lvl>]  Sets verbosity (1..10).  Higher values yield
                    more messages
-variant <name>     Specifies layout variant used to choose component names

And here’s a sample query on my machine:

$ setxkbmap -query
rules:      base
model:      pc105
layout:     us

And here’s how to change layouts.

To switch to the “us” keyboard layout:

$ setxkbmap us

To switch to the “fr” layout:

$ setxkbmap fr

To switch to the “de” layout:

$ setxkbmap de

And so on. Very useful, indeed. Here are some things to keep in mind:

  1. The layout change is X server-wide. This means that your layout change will take effect on all windows. Of course, you can limit your layout change with the various options like -display or -geometry, but to me that’s impractical.
  2. You can run this command from within a script, in a sub-shell (backgrounded), and it will still work! This is a nice feature which allows one to script/automate the whole thing (which is what I did).
  3. Unfortunately, running this command clears any xmodmap settings that were present before. For Xmonad/vim/etc. users who like to change up their Caps Lock key, for example, you need to re-run the xmodmap commands to get them back. I’m in this boat.

So, here’s my new setup, taking into account the xmodmap issue noted above:

  • layout_switch.sh: This script simply changes up the layout. It’s a dead-simple zsh script that you can easily port to Bash.
  • xmodmap.sh: This script simply does all the xmodmap-editing business that I need to do to get my Xmonad setup to work.
  • ~/.xmonad/xmonad.hs: Here, I have set up a hotkey sequence to call the layout_switch.sh script. I have to be careful to choose a hotkey that will remain consistent even in different layouts (e.g., I first tried the “\” backslash key, but this key’s location is different in the French layout). This means that I have to depend on keys like Escape, or the F1-F12 function keys. Since I already have F1-F12 mapped to additional workspaces, I chose the Escape key.
  • ~/.xinitrc: Here, I call the layout_switch.sh script to set my keyboard layout explicitly on startup. You could do fancy things, for example, of selecting a different layout on a different day (maybe on Sundays you want to start typing in French only, so you could just script that in here).

And here are the relevant source files for these scripts:

layout_switch.sh:

#!/bin/zsh
# LICENSE: PUBLIC DOMAIN
# switch between us, fr, and de layouts

# If an explicit layout is provided as an argument, use it. Otherwise, select the next layout from
# the set [us, fr, de].
if [[ -n "$1" ]]; then
    setxkbmap $1
else
    layout=$(setxkbmap -query | awk 'END{print $2}')
    case $layout in
        us)
            setxkbmap fr
            ;;
        fr)
            setxkbmap de
            ;;
        *)
            setxkbmap us
            ;;
    esac
fi

# remap Caps_Lock key to Xmonad's exclusive 'mod' key
~/syscfg/script/sys/xmodmap.sh

xmodmap.sh:

#!/bin/zsh
# LICENSE: PUBLIC DOMAIN
# remap Caps_Lock key to Xmonad's exclusive 'mod' key

xmodmap -e "remove Lock = Caps_Lock"
xmodmap -e "add mod3 = Caps_Lock"
xmodmap ~/.xmodmap

~/.xmodmap:

remove Lock = Caps_Lock
remove mod3 = Super_L
keysym Caps_Lock = Super_L
add mod3 = Super_L

~/.xmonad/xmonad.hs:

myKeys :: String -> XConfig Layout -> M.Map (KeyMask, KeySym) (X ())
myKeys hostname conf@(XConfig {XMonad.modMask = modm}) = M.fromList $
...
    -- change keyboard layouts
    , ((modm              , xK_Escape), spawn "/home/shinobu/syscfg/script/sys/layout_switch.sh")
...

~/.xinitrc:

...
# explicitly choose "us" (English) keyboard layout (and set Caps Lock key to
# xmonad's modmap key)
/home/shinobu/syscfg/script/sys/layout_switch.sh us &
...

There are a couple big advantages to this setup:

  • Because everything is handled in userspace (we didn’t touch any Xorg file!), we no longer have to restart the X server to diagnose any problem we encounter with changing the keyboard layout. Gone are the days of fiddling with an “XkbLayout” option (or any other Xorg option) and restarting X just to see if the changes worked.
  • Because everything is handled in userspace, we can use any number of custom shell scripts, programs, etc. to customize how the layout switch is made. For me, since I use Xmonad, I can use Xmonad’s extensive, consistent hotkey definitions to choose which hotkeys to do the layout switch. To be honest, I disliked the options available in the base file (/usr/share/X11/xkb/rules/base) when choosing which hotkeys to do my keyboard layout switching. And, my choice of modm + Escape (i.e., (dead) Caps Lock key + Escape) was impossible to realize with the base file.

Finally, a caveat: the only flaw with the setup above is that if I execute the xmonad.hs layout-switching hotkey too many times too quickly, there’s a chance of two independent layout_switch.sh processes being spawned at nearly the same time. This means that the calls to xmodmap.sh within those processes might get interleaved! For me, this has resulted in my Caps Lock key becoming permanently turned on, for instance. However, you can always execute the xmodmap.sh script a second or two afterwards to clear this up.

I strongly recommend that everyone do their keyboard switching this way: not only is everything made explicit, but everything is transparent, too. Keyboard layout switching isn’t a mysterious process anymore.

UPDATE August 18, 2011: Cleaned up some typos.

UPDATE August 24, 2011: See this post — I no longer use the fr and de layouts, but rather, just the us layout with variant altgr-intl. It’s much, much simpler this way.

A Summary of Linux vs. Windows XP

As you may have figured out by now, I don’t use Windows XP full-time any more. I only use it for one or two games. Anyway, I don’t think I’ve written a post yet comparing Linux (Arch Linux to be specific) to Windows XP. I keep referring to XP, because that’s the last Windows OS I’ve actually used on my own machines. But the points that follow still pertain to the latest incarnation of Windows (Windows 7), and perhaps all future versions of it.

I just had to get these annoyances about XP out of my system, and let you, my dear reader, know about them. So the following is my very biased point of view as a relatively new, SELF-IMPOSED Linux convert:

Linux pros:

  • Unlimited, sane configurability: Everything you can imagine about Linux can be configured with TEXT FILES (or simple TEXT based interfaces), containing HUMAN-READABLE text. There are no strange “registry” files like in Windows, and hence, no need to invoke strange programs like “regedit.exe” to do your bidding. If you know how to use a text editor (what you Windows folks hideously understand as “Notepad”), then nothing can stand in your way. EVERYTHING is exposed by Linux. An extremely simple root vs. non-root user dichotomy ensures sane, “system files” vs. “user files” separation. Windows is living in the stone age when it comes to user rights/management.
  • Few, if any, viruses: Because the consumer-level Linux ecosystem is so small, evil hackers don’t spend time writing virus programs for it. More on this virus/malware/spyware discussion below.
  • Does not slow down or get unstable with uptime: If you leave your Windows computer on for more than a day, it’s bound to slow down. Linux has never done this to me. Ever. Even with uptimes past 5 days.
  • Simplified, unified, and standardized software installation/upgrade/removal: On Windows, individual programs have to check whether newer versions of themselves are available. There is no standard for this: some programs do it automatically if you’re connected to the internet, some programs ask for your approval (incessantly), some programs have a “Check for updates…” button that you have to manually click on, and some are just standalone .exe files that you need to manually remove and reinstall with the newer version! This is insanity. On Arch (or any other desktop-user distro), you just type in your shell “sudo pacman -Syu” to do a system-wide upgrade. With a single command, you upgrade every single package that you have installed (after transparently downloading them from Arch’s world-wide network of package “repositories”). Here, “package” means either a program or system files — even the OS kernel! You can optionally choose to upgrade only certain packages, or to ignore some of them, to customize your upgrade. Some Archers upgrade every day, and others wait a month or more to upgrade. There are no nag screens, popups, or “automatic” upgrades/installs that break things behind your back should you choose to shut down your computer.
  • Tons of free, industrial-strength software: Actively-developed open source programs are, quite simply, the most reliable, robust programs around. Take for example the Linux Kernel: this project is one of the finest achievements of software engineering to date. Legions of high-profile companies rely on the Linux Kernel to power their servers (and many thousands of Linux desktop users, like myself, benefit from this same, state-of-the-art reliability). And because free, open source software (“FOSS”) developers are humans too, there are mountains of open source programs for your use. Things like text editors, web browsers, email clients, office suites, image editors, movie editors, etc. all exist for free in FOSS-land. And many of these programs are the very best in their field, because they are extremely actively developed (where new releases come in weeks or months, instead of years). The Arch repositories are packed with FOSS programs. Packed, I say!
  • No malware/spyware: In Arch, there are two types of repositories: official and unofficial. The official repositories only have packages that have been examined, tested, and uploaded by sanctioned members of the Arch Linux Team. The unofficial repositories have packages uploaded by anyone. The biggest and most popular unofficial repository is the Arch User Repository (“AUR”), which has thousands of packages. As time progresses, popular packages from the AUR collect votes from users, and the most popular ones receive the blessings of the Arch Linux Team and get adopted into the official repositories, where they undergo regular, official maintenance. In short, the room for spyware/malware to creep into any of the official repositories is virtually zero. Even if you install an AUR package that has 0 votes, the de facto AUR client yaourt will still tell you to examine the contents of all installation files (these are, predictably enough in the Linux world, TEXT files) before executing them. Of course, you could go out of your way to download a suspicious-looking executable file and run it, but such a course of action is rare and remote enough (and downright stupid enough) to ignore.
  • No annoyances: Two things: (1) Because 95+% of software you use will be FOSS in Linux, this means that there will be no annoyances from programs. By “annoyances” I mean things like unrequested popups, tooltips, and screen-real-estate-plundering dialog boxes. This is because FOSS is community-driven — if a majority of users find some feature in a FOSS program annoying, it will get removed soon enough. This feature of FOSS does not get enough visibility, but it is one of the most satisfying: the absence of half-assed work by imminent-deadline-driven, exhausted developers. (2) Because Linux is so configurable, you can configure-away lesser annoyances by editing HUMAN-readable TEXT files 99% of the time. The other 1% of the time, you have to use TEXT-driven interactive sessions, which is just as easy to do.
  • Fast, easy problem resolution: Let’s say you did a system upgrade, and something broke as a result. Because there are hundreds, if not thousands, of users who all run the same system (see “Simplified, unified, and standardized software installation/upgrade/removal” above), your problem will be voiced in your distro’s forum soon enough. Windows users have an instinctive distrust of other Windows users: you have no idea what programs they’ve installed, and which anti-virus programs they have running (if at all!). This is why fixing a Windows problem takes quite a bit of skill and specialized expertise (and why, if you’re a geek, others tend to ask you to fix their Windows problems). Not so with Linux. If you encounter an error, all you have to do is copy/paste the error message (human-readable, SAVE-able error messages are part of FOSS culture) into a search engine and add in the name of your distro and the word “forum”. You are now one or two clicks away from reading the latest word (in the world) on the problem.
  • Gentle, but eventual, path to a true understanding of your operating system’s ecosystem: Here, I define “OS ecosystem” as understanding how and why your operating system behaves the way it does. Windows is a horrible platform to learn how your OS behaves. This is because it actively hides many important concepts from you. For example, do you have any idea how and why a program shows up on the “Add/Remove Programs” list on XP, even though you just removed it? (Answer: NO.) Such mystical voids of system-level confusion are rather rare in the Linux world.

Windows pros:

  • Industry-leading games: Almost all commercial games are released for Windows.

Seriously, after all of the extremely positive things that I’ve said about Linux, and how Windows fails in each of those areas, what can I say?

Misunderstandings about Linux (unfortunate from the Linux/FOSS community’s perspective):

  • It’s only for power users: No, it’s for people who don’t mind learning why their computer behaves a certain way. If you enjoy driving your car blindfolded, you are the right type of person to use Windows (actually, Mac OS X might be a better candidate).
  • It’s 100% secure: Yes, after all the overwhelmingly positive things I’ve said about the rarity of viruses/spyware/malware on Linux, nothing’s perfect. However, I will at least argue that Linux is more secure than Windows, because Linux does not hide things as much. I mean, there are dozens of sites out there designed to answer the question, “Is svchost.exe a safe process? How about wuauclt.exe?”, all because of how Windows likes to hide things from you. In Linux, pretty much everything is exposed. In Arch, specifically, you can do a “pacman -Qo XYZ” to see which package is responsible for that file. If no package is responsible for it, then that means either (1) a program created that file (e.g., an optional configuration file) or (2) you created it yourself!
  • If you use Linux, you’re an evil hacker: Congratulations, you’ve been brainwashed.

All non-brain-damaged people get sick and tired of Windows’ limitations and shortcomings every day. There are no secrets in the Windows world when it comes to user dissatisfaction. I used to be one of these people. Then one day, I started using Linux. And the difference is night and day. It’s just unfortunate that legions of PC users have been brainwashed to accept non-configurability and “the Windows way” of doing the simplest of tasks: making the computer serve their needs. I used to be brainwashed that way. Here are some things that opened my eyes to just how horrible Windows is out of the box (Windows’ default “feature” vs. Linux’s superior equivalent):

  • “shortcuts” vs. symlinks (downright superior in every way)
  • “drives” (C:\, D:\, … Z:\) vs. mount points (unlimited and customizable)
  • “cmd.exe” vs. zsh/bash/etc (it’s like comparing the speed of a clown’s unicycle with a Formula-1 race car)
  • “Notepad” vs. vim/emacs (I’m at a loss for words)
  • “MS Word” vs. latex/xetex (imagine if MS Word did everything transparently: enter latex)
  • “I need Adobe program XYZ to export to PDF documents” vs. “Not in Linux.” (need I say more?)
  • “CPU-Z” vs. “cat /proc/cpuinfo” (no wonder there’s no CPU-Z for Linux; you don’t need it!)
  • none vs. “sleep 10m; echo ‘alarm message’; mplayer alarm.mp3” (super-simple, customizable alarm)
  • none vs. “sleep 1h5m3s; sudo shutdown -t 1 -hP now & disown; exit” (which means, “turn off the computer 1 hour, 5 minutes, 3 seconds from now”; the long command that starts with “sudo …” is actually aliased to just “of” for me, so I actually only type “sleep 1h5m3s; of”)
  • none vs. “sleep 1h5m3s; sudo shutdown -t 1 -r now & disown; exit” (which means “restart the computer 1 hour, 5 minutes, 3 seconds from now”; I use an alias here as well)

And here are some precious programs/tools that I had zero knowledge of until I entered the Linux/FOSS community:

  • vim (see above)
  • the latex toolchain (see above)
  • ssh (it’s zsh/bash/etc, but on a remote machine)
  • scp (transfer files over to a remote machine, but encrypt it during the transmission for state-of-the-art security)
  • rsync (copy files over to a remote machine; if a similar file already exists there, rsync will take less time to complete the copy)
  • git (source code management system; GREAT for backing up configuration files, text files, etc.)
  • mutt (text-based, no-nonsense email client)
  • gnupg (cutting-edge encryption tool to encrypt your most important files)
  • cron daemon (another TEXT based, no-nonsense way to control which programs/commands are regularly run at which intervals (minutes, hours, days… even years!))

Perhaps the most rewarding feature of using Linux is that there is always something to learn from it. In other words: the longer you use it, the more you learn to control and customize your computing experience (versus “the more I use Windows, the better I get fixing it”).

I must reiterate that I am a SELF-IMPOSED Linux convert. I didn’t get pushed into Linux, or had to learn it for some school/work requirement. I did it first as an adventure, and later on, just fell in love with it. No one held hands with me, and the help that I did get came from the internet. Linux reached critical mass years ago, and there are enough forum/blog posts out there to help you if you ever get stuck.

Use Linux!

UPDATE January 3, 2012: I just re-read this post (as I do from time to time with my other posts) because Antaku commented on it, and wanted to list some awesome FOSS software that I forgot to mention:

  • diff/colordiff: This is probably the second-most important tool after text editors when it comes to dealing with text files: it tells you the difference between two text files. English teachers: when students turn in their drafts, you can immediately tell which parts they changed in 0.01 seconds. You don’t have to re-read half of it! There are tons of other useful things you can do with diff, any time text files are involved.
  • sha1sum: State-of-the-art file integrity verifier (using the very famous SHA-1 hash algorithm). It generates a 40-character (20-byte) hexadecimal fingerprint, called a “hash”, of any file. You use this hash in a “reverse” fashion to see if your file is bit-perfect (i.e., does it still have the same hash that it had last month? last year?). (BTW, git uses SHA-1 for all of the data it tracks.) You can also use sha1sum as a crude substitute for diff, because two similar files that differ by even a single byte will have different hashes.
  • gcc/clang: Open source compilers for the C programming language — great if you want to learn to program in C and need a compiler. Since C will be around in the lifetime of everyone reading this post, I suggest you learn it. You can interface with Linux in a very, very tight way with C, which is very cool.
  • dd: Great utility whenever you want to move around bytes. Yes, this is a very generic statement, but that’s just how powerful (and useful!) dd can be. If you want to benchmark your new SSD, you can use dd to generate thousands of files filled with random bytes.
  • xxd/hexdump: Hex viewers! Vim uses xxd to turn itself into a hex editor.
  • cat: Short for concatenate. I.e., if you do “cat foo bar quux > ash”, all of the contents of foo, bar, and quux (their bytes) will be joined up into a single file, ash. It comes in handy lots of times, whenever you’re using a shell.
  • dmesg: Kernel-level log messages (what the Kernel detected, did) for your browsing pleasure. Great for understanding your computer’s behavior at its lowest levels. Does Windows even have an equivalent???
  • iftop: Excellent network traffic viewer. It tells you which IP addresses you are downloading/uploading from. Great for troubleshooting LAN setups in your home.
  • htop: Like Windows’ famous “Task Manager”, but it actually tells you EVERY SINGLE running process, not just what you launched yourself.

Ugly Xmodmap Caps Lock Workaround

So, I decided to do a system upgrade (including all xorg packages) the other day, and suddenly xmodmap isn’t working any more. Specifically, these lines in my .xinitrc no longer do the job (of making Caps Lock my mod3 key, for use in Xmonad):

xmodmap -e "remove Lock = Caps_Lock"
xmodmap -e "add mod3 = Caps_Lock"

So after tons of googling, I found this page: http://forums.gentoo.org/viewtopic-t-857625.html.

Basically, to get the same behavior back, I have to have

remove Lock = Caps_Lock
remove mod3 = Super_L
keysym Caps_Lock = Super_L
add mod3 = Super_L

in my (new) ~/.xmodmap file, and then do this in my ~/.xinitrc:

xmodmap -e "remove Lock = Caps_Lock"
xmodmap -e "add mod3 = Caps_Lock"
xmodmap ~/.xmodmap

This is a very, very ugly hack, but even after spending a solid hour trying manual xmodmap commands and this and that and all sorts of combinations, this is the only method that works.

I have no idea which package exactly broke xmodmap’s former 2-liner functionality, but it was sometime in the past couple months, probably. If the old 2-liner starts working again, I’ll let you all know…

LaTeX: Saner Source Code Listings: No Starch Press Style

I’ve lately gotten in the habit of writing aesthetically pleasing ebooks with LaTeX for myself on new subjects that I come across. This way, I can (1) learn more about TeX (always a huge bonus) and (2) create documents that are stable, portable (PDF output), beautiful, and printer-friendly. (I also put all the .tex and accompanying makefiles into version control (git) and sync it across all of my computers, to ensure that they last forever.)

One of those new subjects for me right now is the Haskell programming language. I started copying down small code snippets from various free resources on the web, with the help of the listings package (which provides the lstlisting environment).

The Problem

Unfortunately, I got tired of referencing line numbers manually to explain the code snippets, like this:

\documentclass[twoside]{article}
\usepackage{listings} % provides \lstnewenvironment and \lstset
\usepackage{fontspec} % provides \addfontfeature (requires XeLaTeX)
\usepackage[x11names]{xcolor} % for a set of predefined color names, like LemonChiffon1

\newcommand{\numold}[1]{{\addfontfeature{Numbers=OldStyle}#1}}
\lstnewenvironment{csource}[1][]
    {\lstset{basicstyle=\ttfamily,language=C,numberstyle=\numold,numbers=left,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

\begin{document}
\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource}
#include <stdio.h>

int main()
{
    printf("Hello, world!\n");
    return 0;
}
\end{csource}

At line 1, we include the \texttt{stdio.h} header file. We begin the \texttt{main} function at line 3. We print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) at line 5. At line 6, we return value 0 to let the caller of this program know that we exited safely without any errors.
\end{document}

Output (compiled with the xelatex command):
[Screenshot: plain lstlisting style]

This is horrible. Any time I add or delete a single line of code, I have to go back and manually update all of the numbers. On the other hand, I could instead use the \label{labelname} command inside the lstlisting environment on any particular line, and get the line number of that label with \ref{labelname}, like so:

\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource}
#include <stdio.h>(*@\label{include}@*)

int main()(*@\label{main}@*)
{
    printf("Hello, world!\n");(*@\label{world}@*)
    return 0;(*@\label{return}@*)
}
\end{csource}
At line \ref{include}, we include the \texttt{stdio.h} header file. We begin the \texttt{main} function at line \ref{main}. We print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) at line \ref{world}. At line \ref{return}, we return value 0 to let the caller of this program know that we exited safely without any errors.

Output:

(The only difference is that the line number references are now red. This is because I also used the hyperref package with the colorlinks option.)

But I felt that this was not a good solution at all (it is the one suggested by the listings package’s manual, Section 7, “How tos”). For one thing, I have to think of a new label name for every line I want to address. Furthermore, I have to compile the document twice, because that’s how the \label command works. Yuck.

Enlightenment

So again, with my internet-research hat on, I tried to figure out a better solution. The first thing that came to mind was how the San Francisco-based publisher No Starch Press displayed source code listings in their newer books. Their stylistic approach to source code listings is probably the most sane (and beautiful!) one I’ve come across.

After many hours of tinkering, trial-and-error approaches, etc., I finally got everything working smoothly. Woohoo! From what I can tell, my rendition looks picture-perfect, and dare I say, even better than their version (because I use colors, too). Check it out!

\documentclass{article}
\usepackage{listings} % provides \lstnewenvironment and \lstset
\usepackage{ifthen}   % provides \ifthenelse
\usepackage{fontspec} % provides \addfontfeature (requires XeLaTeX)
\usepackage[x11names]{xcolor} % for a set of predefined color names, like LemonChiffon1
\newcommand{\numold}[1]{{\addfontfeature{Numbers=OldStyle}#1}}
\usepackage{libertine} % for the pretty dark-circle-enclosed numbers

% Allow "No Starch Press"-like custom line numbers (essentially, bulleted line numbers for only those lines the author will address)
\newcounter{lstNoteCounter}
\newcommand{\lnnum}[1]
    {\ifthenelse{#1 =  1}{\libertineGlyph{uni2776}}
    {\ifthenelse{#1 =  2}{\libertineGlyph{uni2777}}
    {\ifthenelse{#1 =  3}{\libertineGlyph{uni2778}}
    {\ifthenelse{#1 =  4}{\libertineGlyph{uni2779}}
    {\ifthenelse{#1 =  5}{\libertineGlyph{uni277A}}
    {\ifthenelse{#1 =  6}{\libertineGlyph{uni277B}}
    {\ifthenelse{#1 =  7}{\libertineGlyph{uni277C}}
    {\ifthenelse{#1 =  8}{\libertineGlyph{uni277D}}
    {\ifthenelse{#1 =  9}{\libertineGlyph{uni277E}}
    {\ifthenelse{#1 = 10}{\libertineGlyph{uni277F}}
    {\ifthenelse{#1 = 11}{\libertineGlyph{uni24EB}}
    {\ifthenelse{#1 = 12}{\libertineGlyph{uni24EC}}
    {\ifthenelse{#1 = 13}{\libertineGlyph{uni24ED}}
    {\ifthenelse{#1 = 14}{\libertineGlyph{uni24EE}}
    {\ifthenelse{#1 = 15}{\libertineGlyph{uni24EF}}
    {\ifthenelse{#1 = 16}{\libertineGlyph{uni24F0}}
    {\ifthenelse{#1 = 17}{\libertineGlyph{uni24F1}}
    {\ifthenelse{#1 = 18}{\libertineGlyph{uni24F2}}
    {\ifthenelse{#1 = 19}{\libertineGlyph{uni24F3}}
    {\ifthenelse{#1 = 20}{\libertineGlyph{uni24F4}}
    {NUM TOO HIGH}}}}}}}}}}}}}}}}}}}}}
\newcommand*{\lnote}{\stepcounter{lstNoteCounter}\vbox{\llap{{\lnnum{\thelstNoteCounter}}\hskip 1em}}}
\lstnewenvironment{csource2}[1][]
{
    \setcounter{lstNoteCounter}{0}
    \lstset{basicstyle=\ttfamily,language=C,numberstyle=\numold,numbers=right,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}
}
{}

\begin{document}
\section{Hello, world!}
The following program \texttt{hello.c} simply prints ``Hello, world!'':

\begin{csource2}
(*@\lnote@*)#include <stdio.h>

/* This is a comment. */
(*@\lnote@*)int main()
{
(*@\lnote@*)    printf("Hello, world!\n");
(*@\lnote@*)    return 0;
}
\end{csource2}

We first include the \texttt{stdio.h} header file \lnnum{1}. We then declare the \texttt{main} function \lnnum{2}. We then print ``Hello, world!'' to standard output (a.k.a., \textit{STDOUT}) \lnnum{3}. Finally, we return value 0 to let the caller of this program know that we exited safely without any errors \lnnum{4}.
\end{document}

Output (compiled with the xelatex command):

Ah, much better! This is pretty much how No Starch Press does it, except that we’ve added two things: the line number count on the right-hand side, and a colored background to make the code stand out a bit from the page (easier on the eyes when quickly skimming). These options are easily adjustable/removable to suit your needs (see the \lstset options in the csource2 definition). For a direct comparison, download some of their sample chapter offerings from Land of Lisp (2010) and Network Flow Analysis (2010) and see for yourself.

Explanation

If you look at the code, you can see that the only real change in the listing code itself is the use of a new \lnote command. The \lnote command spits out a symbol: whatever the \lnnum command produces for the current value of the given counter, lstNoteCounter. The \lnnum command is basically a long if/else chain (I tried using the switch statement construct from here, but then extra spaces would get added in). It can produce a nice glyph, but only up to 20; for anything higher, it just displays the string “NUM TOO HIGH”. This is because it uses the Linux Libertine font to create the glyph (with \libertineGlyph), and Libertine’s black-circle-enclosed numerals only go up to 20. A caveat: Linux Libertine’s LaTeX commands are subject to change without notice, as the package is currently in an “alpha” stage of development (e.g., the \libertineGlyph command actually used to be called \Lglyph, if I recall correctly).

The real star of the show is the \llap command. I did not know about this command until I stumbled on this page yesterday. For something like an hour I toiled over trying to use \marginpar or \marginnote (the marginnote package) to get the same effect, without success (for one thing, it is impossible to create margin notes on only one side (e.g., always on the left, or always on the right) if your document class is using the twoside option).

The custom \numold command (used here to typeset the line numbers with old style figures) is actually a resurrection of a command of the same name from an older, deprecated Linux Libertine package. The cool thing about how it’s defined is that you can use it with or without an argument. Because of this \numold command and how it’s defined, you have to use the XeTeX engine (i.e., compile with the xelatex command).

In all of my examples above, the serif font is Linux Libertine, and the monospaced font is DejaVu Sans Mono.

Other Thoughts

You may have noticed that I have chosen to use my own custom lstlisting environment (with the \lstnewenvironment command). The only reason I did this is because I can specify a custom command that starts up each listing environment. In my case, it’s \setcounter{lstNoteCounter}{0}, which resets the notes back to 0 for the listing.

Feel free to use my code. If you make improvements to it, please let me know! By the way, Linux Libertine’s LaTeX package supports other enumerated glyphs, too (e.g., white-circle-enclosed numbers, or even circle-enclosed alphabet characters). Or you could even use plain letters enclosed inside a \colorbox command, if you want. You can also put the line numbers right next to the \lnote numbers (we just add numbers=left to the lstnewenvironment options, and change \lnote’s \hskip to 2.7em; I also changed the line numbers to basic italics):

\lstnewenvironment{csource2}[1][]
    {\setcounter{lstNoteCounter}{0}%
     \lstset{basicstyle=\ttfamily,language=C,numberstyle=\itshape,numbers=left,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

\newcommand*{\lnote}{\stepcounter{lstNoteCounter}\llap{{\lnnum{\thelstNoteCounter}}\hskip 2.7em}}

Output:

Or we can swap their positions (by adding the numbersep= option to the lstnewenvironment declaration, and leaving \lnote as-is with just 1em of \hskip):

\lstnewenvironment{csource2}[1][]
    {\setcounter{lstNoteCounter}{0}%
     \lstset{basicstyle=\ttfamily,language=C,numberstyle=\itshape,numbers=left,numbersep=2.7em,frame=lines,framexleftmargin=0.5em,framexrightmargin=0.5em,backgroundcolor=\color{LemonChiffon1},showstringspaces=false,escapeinside={(*@}{@*)},#1}}
    {}

Output:

The possibilities are yours to choose.

In case you’re wondering, the unusual indenting of the section number into the left margin in my examples is achieved as follows:

\usepackage{titlesec}
\titleformat{\section}
    {\Large\bfseries} % formatting
    {\llap{
       {\thesection}
       \hskip 0.5em
    }}
    {0em}% horizontal sep
    {}% before

It’s a simplified version of the code posted in a StackOverflow answer.

NB: The \contentsname command (used to render the “Contents” text if you have a Table of Contents) inherits the formatting from the \chapter command (i.e., if you use \titleformat{\chapter}). Thus, if you format the \chapter command like the \section command above with \titleformat, the only way to prevent this inheritance (if it is not desired) is by doing this:

\renewcommand\contentsname{\normalfont Contents}

This way, your Table of Contents title will stay untouched by any \titleformat command.

UPDATE December 7, 2010: Some more variations and screenshots.

Linux: Highlighting + Middle Mouse Click vs Copy & Paste

I finally learned the difference between highlighting+middle mouse clicking (or SHIFT+INSERT if you’re on the keyboard) and copying/pasting text in Linux today, all thanks to the manpage for the nifty xsel program:

The X server maintains three selections, called PRIMARY, SECONDARY and CLIPBOARD. The PRIMARY selection is conventionally used to implement copying and pasting via the middle mouse button. The SECONDARY and CLIPBOARD selections are less frequently used by application programs.


There is no X selection buffer. The selection mechanism in X11 is an interclient communication mediated by the X server each time any program wishes to know the selection contents, eg. to perform a middle mouse button paste.

That is the most useful explanation about highlighted text vs. copied text I’ve seen since entering the Linux world a few years ago. The PRIMARY selection (highlight + middle mouse click) is, as far as I can tell, available in pretty much every Linux program. I’ve never encountered the SECONDARY selection type. The CLIPBOARD selection is somewhat rare; I’ve only encountered it when I explicitly highlight some text in Firefox and then do right-click -> Copy (before I knew about PRIMARY selections).

As for xsel itself, you can use it as a very simple buffer. The best use scenario would be a case where you want to use the output of some command for a while, but where creating a temporary file would be overkill. Here is a random example of xsel:

$ <file.txt | tail -n1 | xsel -p

Now the last line of file.txt is in your PRIMARY selection buffer. Then, after some other miscellaneous work in your shell, you can recall what’s in the PRIMARY buffer like so:

$ xsel | less

The -p flag makes the piped text go into the PRIMARY selection buffer. However, the -p flag is the default one, so you can omit it. Interestingly, the -x flag can be given to swap the PRIMARY and SECONDARY selections.

Mutt: Multiple Gmail IMAP Setup

NOTE: Read the August 10, 2011 update below for a more secure ~/.muttrc regarding passwords!
NOTE: Read the December 22, 2011 update below for a secure and ultra-simple way to handle passwords!
NOTE: Keep in mind that the “my_” prefix in “my_gpass1” is mandatory! (See January 3, 2012 embedded update.)

Since Alpine is no longer maintained, I decided to just move on to Mutt. Apparently Mutt has a large user base, is actively developed, and is used by friendly programmers like Greg Kroah-Hartman (Linux kernel core dev; skip the video to 25:44). Although I need to take care of two separate Gmail accounts through IMAP, I got it working thanks in part to the nice guides here. In particular, I used this guide and this one. However, the configuration from the second link contains a couple of typos (no equal sign after imap_user, and no semicolons between unset commands). I figured things out by trial and error and over 8 hours of googling, tweaking, and fiddling (granted, my problem was quite unusual). But finally, I got things working: I can log in, view emails, and so on. I haven’t tested everything, though; I suspect some more settings are needed to let Mutt handle multiple accounts properly (setting the correct From/To fields for forwarding/replying), but let us focus on the basics for now.

Before I get started, I’m using Mutt 1.5.21, and it was compiled with these options (all decided by Arch Linux’s Mutt maintainer; see here):

./configure --prefix=/usr --sysconfdir=/etc \
  --enable-pop --enable-imap --enable-smtp \
  --with-sasl --with-ssl=/usr --without-idn \
  --enable-hcache --enable-pgp --enable-inodesort \
  --enable-compressed --with-regex \
  --enable-gpgme --with-slang=/usr

The great thing is that Mutt comes with SMTP support built in this way (the --enable-smtp flag), so you don’t have to rely on external mail-sending programs like msmtp.

Now, here is my ~/.muttrc:

#-----------#
# Passwords #
#-----------#
set my_tmpsecret=`gpg2 -o ~/.sec/.tmp -d ~/.sec/pass.gpg`
set my_gpass1=`awk '/Gmail1/ {print $2}' ~/.sec/.tmp`
set my_gpass2=`awk '/Gmail2/ {print $2}' ~/.sec/.tmp`
set my_del=`rm -f ~/.sec/.tmp`

#---------------#
# Account Hooks #
#---------------#
account-hook . "unset imap_user; unset imap_pass; unset tunnel" # unset first!
account-hook        "imaps://user1@imap.gmail.com/" "\
    set imap_user   = user1@gmail.com \
        imap_pass   = $my_gpass1"
account-hook        "imaps://user2@imap.gmail.com/" "\
    set imap_user   = user2@gmail.com \
        imap_pass   = $my_gpass2"

#-------------------------------------#
# Folders, mailboxes and folder hooks #
#-------------------------------------#
# Setup for user1:
set folder          = imaps://user1@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user1@imap.gmail.com/ "\
    set folder      = imaps://user1@imap.gmail.com/ \
        spoolfile   = +INBOX \
        postponed   = +[Gmail]/Drafts \
        record      = +[Gmail]/'Sent Mail' \
        from        = 'My Real Name <user1@gmail.com> ' \
        realname    = 'My Real Name' \
        smtp_url    = smtps://user1@smtp.gmail.com \
        smtp_pass   = $my_gpass1"

# Setup for user2:
set folder          = imaps://user2@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user2@imap.gmail.com/ "\
    set folder      = imaps://user2@imap.gmail.com/ \
        spoolfile   = +INBOX \
        postponed   = +[Gmail]/Drafts \
        record      = +[Gmail]/'Sent Mail' \
        from        = 'My Real Name <user2@gmail.com> ' \
        realname    = 'My Real Name' \
        smtp_url    = smtps://user2@smtp.gmail.com \
        smtp_pass   = $my_gpass2"

#--------#
# Macros #
#--------#
macro index <F1> "y12<return><return>" # jump to mailbox number 12 (user1 inbox)
macro index <F2> "y6<return><return>"  # jump to mailbox number 6 (user2 inbox)
#-----------------------#
# Gmail-specific macros #
#-----------------------#
# to delete more than 1 message, just mark them with "t" key and then do "d" on them
macro index d ";s+[Gmail]/Trash<enter><enter>" "Move to Gmail's Trash"
# delete message, but from pager (opened email)
macro pager d "s+[Gmail]/Trash<enter><enter>"  "Move to Gmail's Trash"
# undelete messages
macro index u ";s+INBOX<enter><enter>"         "Move to Gmail's INBOX"
macro pager u "s+INBOX<enter><enter>"          "Move to Gmail's INBOX"

#-------------------------#
# Misc. optional settings #
#-------------------------#
# Check for mail in the currently selected IMAP mailbox every minute
set timeout         = 60
# Check for new mail in ALL mailboxes every 2 min
set mail_check      = 120
# keep imap connection alive by polling intermittently (time in seconds)
set imap_keepalive  = 300
# allow mutt to open new imap connection automatically
unset imap_passive
# store message headers locally to speed things up
# (the ~/.mutt folder MUST exist! Arch does not create it by default)
set header_cache    = ~/.mutt/hcache
# sort mail by threads
set sort            = threads
# and sort threads by date
set sort_aux        = last-date-received

A couple things that are not so obvious:
The passwords are stored in a file called pass.gpg like this:

Gmail1 user1-password
Gmail2 user2-password

However, it’s encrypted, so you have to decrypt it every time you want to access it. This way, you don’t store your passwords in plaintext! The only drawback is that you have to manually type in your GnuPG passphrase every time you start Mutt; but hey, you only have to type in one password to get the passwords of all your multiple accounts, so it’s not that bad. You can see that the passwords are fed to the variables $my_gpass1 and $my_gpass2. (UPDATE January 3, 2012: Mallik has kindly pointed out in the comments that the “my_” prefix is actually a mutt-specific way of defining custom variables! You must use this syntax, unless you want to get bitten by mysterious “unknown variable” error messages!) One HUGE caveat here: if your password contains a dollar sign ($), say “$quarepant$”, then you have to escape it like this:

\\\$quarepant\\\$

in the pass.gpg file. Yes, those are three backslashes for each instance of “$”. This little-known fact caused me hours of pain, and if it weren’t for my familiarity with escape sequences and shell variables, I would never have figured it out. Now, I don’t know what other characters have to be escaped, but here’s a way to find out: change your Gmail password to contain one weird character like “#”, then manually set the password (e.g., “set my_gpass1 = #blahblah”) and see if using $my_gpass1 works. If not, keep adding backslashes (“\#”, then “\\#”, etc.) until it does work. Repeat this procedure for all the other punctuation characters, if necessary. Then change your Gmail password back to its original form, and use the knowledge you gained to put the correctly-escaped password inside your pass.gpg file.
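To see why the backslashes are needed at all, here is a minimal sketch (plain shell, outside of mutt, with a made-up password) of what command substitution does to an unescaped dollar sign:

```shell
# Made-up password: pw-$quarepant-end
# Unescaped: the shell treats $quarepant as an (unset) variable and
# silently deletes it during command substitution.
mangled=$(echo "pw-$quarepant-end")
# Escaped: the backslash preserves the literal dollar sign.
intact=$(echo "pw-\$quarepant-end")
echo "$mangled"   # pw--end
echo "$intact"    # pw-$quarepant-end
```

Mutt’s backtick settings pass through a shell in a similar way; the triple backslashes in pass.gpg account for the additional layers of expansion the string goes through before mutt sees it.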

You can see that I use awk to get the correct region of text from pass.gpg. If you don’t like awk (I just copied the awk snippet from the guide here), you can use these equivalent lines instead (stdin redirection, grep, and cut):

...
set my_gpass1=`<~/.sec/.tmp grep Gmail1 | cut -d " " -f 2`
set my_gpass2=`<~/.sec/.tmp grep Gmail2 | cut -d " " -f 2`
...
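To convince yourself that the two extraction styles agree, you can run both against a throwaway plaintext file in the same two-column format (the file path and passwords below are made up; no gpg involved):

```shell
# Sample two-column secrets file, as described above.
printf 'Gmail1 secret-one\nGmail2 secret-two\n' > /tmp/passdemo
# awk style: match the line, print the second field.
a=$(awk '/Gmail1/ {print $2}' /tmp/passdemo)
# grep/cut style: stdin redirection, filter, then cut on spaces.
b=$(</tmp/passdemo grep Gmail1 | cut -d " " -f 2)
echo "$a"   # secret-one
echo "$b"   # secret-one
```

Both styles print the second whitespace-separated field of the matching line, so either works in the .muttrc.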

The <F1> and <F2> macros are just there to let you quickly switch between the two accounts. The numbers 12 and 6 happen to be the two INBOX folders on my two Gmail accounts. To check which numbers are right for you, press “y” to list all folders, with their corresponding numbers on the left.

As for the other macros, these are only required because of Gmail’s funky (nonstandard) way of handling IMAP email. Without them, if you just press “d” to delete an email, Gmail will NOT send it to the Trash folder, but to the “All Mail” folder. This behavior is strange, because in Gmail’s web interface, clicking on the “Delete” button moves that email to the Trash folder. I guess Google wants you to use their web interface, or something. Anyway, the delete and undelete macros here make Gmail’s IMAP behave the same way as Gmail’s web interface. The only drawback is that you get a harmless error message, “Mailbox was externally modified. Flags may be wrong.”, at the bottom every time you use them. Anyway, I’ve tested these macros, and they work for me as expected.

The optional settings at the bottom are pretty self-explanatory, and aren’t required. The most important one in there is the header_cache value. This is definitely required if you have thousands of emails in your inboxes, because otherwise Mutt will fetch all the email headers for all emails every time it starts up.

Here are some not-so-obvious shortcut keys from the “index” menu (i.e., the list of emails, not when you’re looking at a single email (that’s called “pager”; see my macros above)) to get you started:

c   (change folder inside the current active account; press ? to bring up a menu)
y   (look at all of your accounts and folders)
*   (go to the very last (newest) email)
TAB (go to the next unread email)

The other shortcut keys displayed on the top line will get you going. To customize colors in Mutt, you can start with this 256-color template (just paste it into your .muttrc):

# Source: http://trovao.droplinegnome.org/stuff/dotmuttrc
# Screenshot: http://trovao.droplinegnome.org/stuff/mutt-zenburnt.png
#
# This is a zenburn-based muttrc color scheme that is not (even by far)
# complete. There's no copyright involved. Do whatever you want with it.
# Just be aware that I won't be held responsible if the current color-scheme
# explodes your mutt.

# general-doesn't-fit stuff
color normal     color188 color237
color error      color115 color236
color markers    color142 color238
color tilde      color108 color237
color status     color144 color234

# index stuff
color indicator  color108 color236
color tree       color109 color237
color index      color188 color237 ~A
color index      color188 color237 ~N
color index      color188 color237 ~O
color index      color174 color237 ~F
color index      color174 color237 ~D

# header stuff
color hdrdefault color223 color237
color header     color223 color237 "^Subject"

# gpg stuff
color body       color188 color237 "^gpg: Good signature.*"
color body       color115 color236 "^gpg: BAD signature.*"
color body       color174 color237 "^gpg: Can't check signature.*"
color body       color174 color237 "^-----BEGIN PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----BEGIN PGP SIGNATURE-----"
color body       color174 color237 "^-----END PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----END PGP SIGNATURE-----"
color body       color174 color237 "^Version: GnuPG.*"
color body       color174 color237 "^Comment: .*"

# url, email and web stuff
color body       color174 color237 "(finger|ftp|http|https|news|telnet)://[^ >]*"
color body       color174 color237 "<URL:[^ ]*>"
color body       color174 color237 "www\\.[-.a-z0-9]+\\.[a-z][a-z][a-z]?([-_./~a-z0-9]+)?"
color body       color174 color237 "mailto: *[^ ]+\(\\i?subject=[^ ]+\)?"
color body       color174 color237 "[-a-z_0-9.%$]+@[-a-z_0-9.]+\\.[-a-z][-a-z]+"

# misc body stuff
color attachment color174 color237 #Add-ons to the message
color signature  color223 color237

# quote levels
color quoted     color108 color237
color quoted1    color116 color237
color quoted2    color247 color237
color quoted3    color108 color237
color quoted4    color116 color237
color quoted5    color247 color237
color quoted6    color108 color237
color quoted7    color116 color237
color quoted8    color247 color237
color quoted9    color108 color237

Make sure your terminal emulator supports 256 colors. (On Arch, you can use the rxvt-unicode package.)

UPDATE April 22, 2011: Tiny update regarding rxvt-unicode (it has 256 colors by default since a few months ago).

UPDATE August 10, 2011: I did some password changes the other day, and through trial and error, figured out which special (i.e., punctuation) characters you need to escape with backslashes for your Gmail passwords in pass.gpg as discussed in this post. Here is a short list of them:

    ~ ` # $ \ ; ' "

I’m quite confident that this covers all the single-character cases (e.g., “foo~bar” or “a#b`c”). The overall theme here is to prevent the shell from expanding on certain special characters (e.g., the backtick ‘`’ is used to define a shell command region, which we must escape here). Some characters, like the exclamation mark ‘!’ probably need to be escaped if you have two of them in a row (i.e., \\\!! instead of !!) since ‘!!’ is usually expanded by most shells to refer to the last entered command. Again, it’s all very shell-specific, and because (1) I don’t really know which shell mutt uses internally, and (2) I don’t really know all that much about all the nooks and crannies of shell expansion in general, I am unable to create a definitive list as to which characters you must escape.

Another, nontrivial note I would like to add: I realized that storing the just-decrypted pass.gpg file (the ~/.sec/.tmp file above) is still not very secure, because it is a regular file, and its contents will still be on disk after removing it. That is, a clever individual could easily run some generic “undelete” program to recover the passwords after gaining control of your machine. I found a very easy, simple solution to the problem of where to store the decrypted temporary file: use a tmpfs RAM-disk partition! This way, when you power off the machine, any traces of your temporarily decrypted pass.gpg file will be lost.

All it takes is a one-liner /etc/fstab entry:

none    /mnt/r0    tmpfs    rw,size=4K    0    0

The size limit of 4K is the absolute minimum (I tried 1K, for 1 KiB, but it got set to 4 KiB regardless, most likely because tmpfs allocates space in page-sized (4 KiB) chunks). You can use the following ~/.muttrc portion to decrypt safely and securely:

#-----------#
# Passwords #
#-----------#
set my_tmpsecret=`gpg2 -o /mnt/r0/.tmp -d ~/.sec/pass.gpg`
set my_gpass1=`awk '/Gmail1/ {print $2}' /mnt/r0/.tmp`
set my_gpass2=`awk '/Gmail2/ {print $2}' /mnt/r0/.tmp`
set my_del=`shred -zun25 /mnt/r0/.tmp`

Easy, and secure. Just for fun and extra security, we use the shred utility to really make sure that the decrypted file does get erased in case (1) the machine has a long uptime (crackers, loss of control of the machine should someone forcefully prevent you from turning off the machine, etc.) or (2) the machine’s RAM somehow remembers the file contents even after a reboot. As a wise man once said, “It is better to bow too much than to bow not enough.” So it is with security measures.
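You can watch the shred step in action on a throwaway file (the path and contents here are made up, and the muttrc’s -n25 is trimmed to -n3 to keep it quick):

```shell
# Create a fake secrets file, then shred it:
printf 'Gmail1 fake-password\n' > /tmp/sec-demo
# -z: final zero pass, -u: unlink afterwards, -n3: 3 overwrite passes
shred -zu -n3 /tmp/sec-demo
test -e /tmp/sec-demo || echo "gone"
```

Note that on a tmpfs mount the overwrite passes are mostly symbolic (the data lives in RAM), but -u still guarantees the file is unlinked.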

UPDATE December 22, 2011: Erwin from the comments gave me a tip about a simpler way to handle passwords. It is so simple that I am utterly embarrassed by my previous approach using /mnt/r0 and all that, and I’ve already changed my setup to use this method! Apparently, mutt comes with the source command, which can read in chunks of mutt commands from a command’s standard output. So, the idea is: instead of storing just the passwords in an encrypted file, store the entire set commands in a file and encrypt that. Then you can just source it on decryption.

~/.sec/pass.gpg

set my_gpass1="my super secret password"
set my_gpass2="my other super secret password"

Then, you can source the above upon decryption in its entirety, like this:

source "gpg --textmode -d ~/.sec/pass.gpg |"

The unusual and welcome benefit of this approach, apart from its simplicity, is that you don’t need to change how your password was quoted/escaped from the previous approach! I only found this out the hard way, after trying to remove the various backslashes, thinking that I needed to change the quotation level. You just use the same password strings as before, and you’re set! Also, the trailing pipe inside the double quotes is not a typo: when mutt sees that the last character is a ‘|’, it interprets the string as a command to run, not as a static filename.

Many thanks to Erwin for pointing this out to me.

UPDATE April 16, 2012: Typo and grammar fixes.

Improved Autocall Script

I’ve updated the Autocall Ruby script I’ve mentioned various times before by porting it to Zsh and adding a TON of features. The following posts are now obsolete:

Autocall: A Script to Watch and “Call” Programs on a Changed Target Source File
Updated Autolily Script
Auto-update/make/compile script for Lilypond: Autolily

If you haven’t read my previous posts on this topic, basically, Autocall is a script that I wrote to automate executing an arbitrary command (from the shell) any time some file is modified. In practice, it is very useful for any small/medium project that needs to compile/build something out of source code. Typical use cases are for LilyPond files, LaTeX/XeTeX, and C/C++ source code.

Anyway, the script is now very robust and much more intelligent. It now parses options and flags instead of stupidly looking at ARGV one at a time. Perhaps the best new feature is the ability to kill processes that hang. Also, you can now specify up to 9 different commands with 9 instances of the “-c” flag (additional instances are ignored). Commands 2-9 are accessible manually via the numeric keys 2-9 (command 1 is the default). This is useful if you have, for instance, different build targets in a makefile. E.g., you could do

$ autocall -c "make" -c "make -B all" -c "make clean" -l file_list

to make things a bit easier.

I use this script all the time, mostly when working with XeTeX files or compiling source code. It works best in any situation where you have to do some action X whenever files/directories Y change in any way. Again, the command to be executed is arbitrary, so you could just as well use it to call some script X whenever a change is detected in a file or directory. If you use it with LaTeX/XeTeX/TeX, pass the “-halt-on-error” option to the TeX compiler so that autocall doesn’t have to kill it when it hangs on an error (killing is only available with the -t/-k flags). The copious comments should help you get started. Like all my stuff, it is not licensed at all: it’s released into the PUBLIC DOMAIN, without ANY warranties whatsoever in any jurisdiction (use at your own risk!).
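The core change-detection trick is simple: a file’s “ls --full-time” line changes whenever its modification time does, so snapshotting and comparing that line tells you when to fire. A minimal, self-contained sketch of the idea (assuming GNU coreutils; the real script below adds line-count and byte-count checks, timeouts, and hotkeys on top of this):

```shell
#!/bin/sh
# Minimal sketch of autocall's change detection (not the real script):
# snapshot a file's "ls --full-time" line, then compare after a change.
f=$(mktemp)
before=$(ls --full-time "$f")
touch -d "2001-01-01 00:00:00" "$f"   # simulate saving the file (new mtime)
after=$(ls --full-time "$f")
if [ "$before" != "$after" ]; then
    echo "change detected -- this is where autocall would run COMMAND"
fi
rm -f "$f"
```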

#!/bin/zsh
# PROGRAM: autocall
# AUTHOR: Shinobu (https://zuttobenkyou.wordpress.com)
# LICENSE: PUBLIC DOMAIN
#
#
# DESCRIPTION:
#
# Autocall watches (1) a single file, (2) directory, and/or (3) a text file
# containing a list of files/directories, and if the watched files and/or
# directories become modified, runs the (first) given command string. Multiple
# commands can be provided (a total of 9 command strings are recognized) to
# manually execute different commands.
#
#
# USAGE:
#
# See msg("help") function below -- read that portion first!
#
#
# USER INTERACTION:
#
# Press "h" for help.
# Pressing a SPACE, ENTER, or "1" key forces execution of COMMAND immediately.
# Keys 2-9 are hotkeys to extra commands, if there are any.
# Press "c" for the command list.
# To exit autocall gracefully, press "q".
#
#
# DEFAULT SETTINGS:
#
# (-w) DELAY    = 5
# (-x) FACTOR   = 4
#
#
# EXAMPLES:
#
# Execute "pdflatex -halt-on-error report.tex" every time "report.tex" (watched
# by line count) or "ch1.tex" (watched by timestamp) is modified; modification
# is checked every 5 seconds by default:
#    autocall -c "pdflatex -halt-on-error report.tex" -F report.tex -f ch1.tex
#
# Same, but only look at "ch1.tex" (useful, assuming that report.tex includes
# ch1.tex), and automatically execute every 4 seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 4
#       (with -w 1, -x 0 or -x 1 would auto-execute every second instead)
#
# Same, but also automatically execute every 20 (5 * 4) seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 4
#
# Same, but automatically execute every 5 (5 * 1) seconds (-w is 5 by default):
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 1
#
# Same, but automatically execute every 1 (1 * 1) second:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 1
#
# Same, but automatically execute every 17 (1 * 17) seconds:
#    autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its byte size, not line count:
#    autocall -c "pdflatex -halt-on-error report.tex" -b ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its timestamp instead (i.e., every time
# this file is saved, the modification timestamp will be different):
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -w 1 -x 17
#
# Same, but also look at the contents of directory "images/ocean":
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -w 1 -x 17
#
# Same, but also look at the contents of directory "other" recursively:
#    autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -D other -w 1 -x 17
#
# Same, but look at all files and/or directories (recursively) listed in file
# "watchlist" instead:
#    autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -w 1 -x 17
#
# Same, but also look at "newfile.tex":
#    autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -f newfile.tex -w 1 -x 17
#
# Same, but also allow manual execution of "make clean" with hotkey "2":
#    autocall -c "pdflatex -halt-on-error report.tex" -c "make clean" -l watchlist -f newfile.tex -w 1 -x 17
#
###############################################################################
###############################################################################

#-----------------#
# Local functions #
#-----------------#

msg () {
    case $1 in
        "help")
echo "
autocall: Usage:

autocall [OPTIONS]

Required parameter:
-c COMMAND      The command to be executed (put COMMAND in quotes). Note that
                COMMAND can be a set of multiple commands, e.g. \"make clean;
                make\". You can also specify multiple commands by invoking
                -c COMMAND multiple times -- the first 9 of these are set to
                hotkeys 1 through 9, if present. This is useful if you want to
                have a separate command that is available and can only be
                executed manually.

One or more required parameters (but see -x below):
-f FILE         File to be watched. Modification detected by time.
-F FILE         File to be watched. Modification detected by line-size.
-b FILE         File to be watched. Modification detected by bytes.
-d DIRECTORY    Directory to be watched. Modification detected by time.
-D DIRECTORY    Directory to be watched, recursively. Modification
                detected by time.
-l FILE         Text file containing a list of files/directories (each on
                its own line) to be watched (directories listed here are
                watched recursively). Modification is detected with 'ls'.

Optional parameters:
-w DELAY        Wait DELAY seconds before checking on the watched
                files/directories for modification; default 5.
-t TIMEOUT      If COMMAND does not finish execution after TIMEOUT seconds,
                send a SIGTERM signal to it (but do nothing else afterwards).
-k KDELAY       If COMMAND does not finish execution after TIMEOUT,
                then wait KDELAY seconds and send SIGKILL to it if COMMAND is
                still running. If only -k is given without -t, then -t is
                automatically set to the same value as KDELAY.
-x FACTOR       Automatically execute the command repeatedly every DELAY *
                FACTOR seconds, regardless of whether the watched
                files/directories were modified. If FACTOR is zero, it is set
                to 1. If -x is set, then -f, -F, -b, -d, -D, and -l are not
                required (i.e., if only the -c and -x options are specified,
                autocall will simply act as a while loop, executing COMMAND
                every (DELAY * FACTOR) seconds -- 20 seconds with the
                defaults). Since the formula is (DELAY * FACTOR) seconds, if
                DELAY is 1, then FACTOR itself, if greater than 0, is the
                interval in seconds.
-a              Same as \`-x 1'
-h              Show this page and exit (regardless of other parameters).
-v              Show version number and exit (regardless of other parameters).
"
            exit 0
            ;;
        "version")
            echo "autocall version 1.0"
            exit 0
            ;;
        *)
            echo "autocall: $1"
            exit 1
            ;;
    esac
}

is_number () {
    if [[ $(echo $1 | sed 's/^[0-9]\+//' | wc -c) -eq 1 ]]; then
        true
    else
        false
    fi
}

autocall_exec () {
    timeout=$2
    killdelay=$3
    col=""
    case $4 in
        1) col=$c1 ;;
        2) col=$c2 ;;
        3) col=$c3 ;;
        4) col=$c4 ;;
        5) col=$c5 ;;
        6) col=$c6 ;;
        *) col=$c1 ;;
    esac
    echo "\nautocall:$c2 [$(date --rfc-3339=ns)]$ce$col $5$ce"
    if [[ $# -eq 7 ]]; then
        diff -u0 -B -d <(echo "$6") <(echo "$7") | tail -n +4 | sed -e "/^[@-].\+/d" -e "s/\(\S\+\s\+\S\+\s\+\S\+\s\+\S\+\s\+\)\(\S\+\s\+\)\(\S\+\s\+\S\+\s\+\S\+\s\+\)/\1$c1\2$ce$c2\3$ce/" -e "s/^/  $c1>$ce /"
        echo
    fi
    echo "autocall: calling command \`$c4$1$ce'..."
    # see the "z" flag under PARAMETER EXPANSION in "man zshexpn" for more info
    if [[ $tflag == true || $kflag == true ]]; then
        # the 'timeout' command gives nice exit statuses -- it gives 124 if
        # command times out, but if the command exits with an error of its own,
        # it gives that error number (so if the command doesn't time out, but
        # exits with 4 or 255 or whatever, it (the timeout command) will exit
        # with that number instead)

        # note: if kflag is true, then tflag is always true
        com_exit_status=0
        if [[ $kflag == true ]]; then
            eval timeout -k $killdelay $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
            com_exit_status=$pipestatus[1]
        else
            eval timeout $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
            com_exit_status=$pipestatus[1]
        fi
        if [[ $com_exit_status -eq 124 ]]; then
            echo "\n${c6}autocall: command timed out$ce"
        elif [[ $com_exit_status -ne 0 ]]; then
            echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
        else
            echo "\n${c1}autocall: command executed successfully$ce"
        fi
    else
        eval $1 2>&1 | sed "s/^/  $col>$ce /"
        com_exit_status=$pipestatus[1]
        if [[ $com_exit_status -ne 0 ]]; then
            echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
        else
            echo "\n${c1}autocall: command executed successfully$ce"
        fi
    fi
}

#------------------#
# Global variables #
#------------------#

# colors
c1="\x1b[1;32m" # bright green
c2="\x1b[1;33m" # bright yellow
c3="\x1b[1;34m" # bright blue
c4="\x1b[1;36m" # bright cyan
c5="\x1b[1;35m" # bright purple
c6="\x1b[1;31m" # bright red
ce="\x1b[0m"

coms=()
delay=5
xdelay_factor=4
f=()
F=()
b=()
d=()
D=()
l=()
l_targets=()
wflag=false
xflag=false
tflag=false
kflag=false
timeout=0
killdelay=0

tstampf="" # used to DISPLAY modification only for -f flag
linestamp="" # used to DETECT modification only for -f flag
tstampF="" # used to detect AND display modifications for -F flag
tstampb="" # used to DISPLAY modification only for -b flag
bytestamp="" # used to DETECT modification only for -b flag
tstampd="" # used to detect AND display modifications for -d flag
tstampD="" # used to detect AND display modifications for -D flag
tstampl="" # used to detect AND display modifications for -l flag

tstampf_new=""
linestamp_new=""
tstampF_new=""
tstampb_new=""
bytestamp_new=""
tstampd_new=""
tstampD_new=""
tstampl_new=""

#----------------#
# PROGRAM START! #
#----------------#

#---------------#
# Parse options #
#---------------#

# the leading ":" in the opstring silences getopts's own error messages;
# the colon after a single letter indicates that that letter requires an
# argument

# first parse for the presence of any -h and -v flags (while silently ignoring
# the other recognized options)
while getopts ":c:w:f:F:b:d:D:l:t:k:x:ahv" opt; do
    case "$opt" in
    h)  msg "help" ;;
    v)  msg "version" ;;
    *) ;;
    esac
done
# re-parse from the beginning again if there were no -h or -v flags
OPTIND=1
while getopts ":c:w:f:F:b:d:D:l:t:k:x:a" opt; do
    case "$opt" in
    c)
        com_binary=$(echo "$OPTARG" | sed 's/ \+/ /g' | sed 's/;/ /g' | cut -d " " -f1)
        if [[ $(which $com_binary) == "$com_binary not found" ]]; then
            msg "invalid command \`$com_binary'"
        else
            coms+=("$OPTARG")
        fi
        ;;
    w)
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                wflag=true
                delay=$OPTARG
            else
                msg "DELAY must be greater than 0"
            fi
        else
            msg "invalid DELAY \`$OPTARG'"
        fi
        ;;
    f)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            f+=("$OPTARG")
        fi
        ;;
    F)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            F+=("$OPTARG")
        fi
        ;;
    b)
        if [[ ! -f "$OPTARG" ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            b+=("$OPTARG")
        fi
        ;;
    d)
        if [[ ! -d "$OPTARG" ]]; then
            msg "directory \`$OPTARG' does not exist"
        else
            d+=("$OPTARG")
        fi
        ;;
    D)
        if [[ ! -d "$OPTARG" ]]; then
            msg "directory \`$OPTARG' does not exist"
        else
            D+=("$OPTARG")
        fi
        ;;
    l)
        if [[ ! -f $OPTARG ]]; then
            msg "file \`$OPTARG' does not exist"
        else
            l+=("$OPTARG")
        fi
        ;;
    t)
        tflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                timeout=$OPTARG
            else
                msg "TIMEOUT must be greater than 0"
            fi
        else
            msg "invalid TIMEOUT \`$OPTARG'"
        fi
        ;;
    k)
        kflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                killdelay=$OPTARG
            else
                msg "KDELAY must be greater than 0"
            fi
        else
            msg "invalid KDELAY \`$OPTARG'"
        fi
        ;;
    x)
        xflag=true
        if $(is_number "$OPTARG"); then
            if [[ $OPTARG -gt 0 ]]; then
                xdelay_factor=$OPTARG
            elif [[ $OPTARG -eq 0 ]]; then
                xdelay_factor=1
            else
                msg "invalid FACTOR \`$OPTARG'"
            fi
        fi
        ;;
    a) xflag=true ;;
    :)
        msg "missing argument for option \`$OPTARG'"
        ;;
    *)
        msg "unrecognized option \`$OPTARG'"
        ;;
    esac
done

#-----------------#
# Set misc values #
#-----------------#

if [[ $kflag == true && $tflag == false ]]; then
    tflag=true
    timeout=$killdelay
fi

#------------------#
# Check for errors #
#------------------#

# check that the given options are in good working order
if [[ -z $coms[1] ]]; then
    msg "help"
elif [[ (-z $f && -z $F && -z $b && -z $d && -z $D && -z $l) && $xflag == false ]]; then
    echo "autocall: see help with -h"
    msg "at least one of the (1) -f, -F, -b, -d, -D, or -l parameters, or (2) the -x parameter, is required"
fi

#-------------------------------#
# Record state of watched files #
#-------------------------------#

if [[ -n $F ]]; then
    if [[ $#F -eq 1 ]]; then
        linestamp=$(wc -l $F)
    else
        linestamp=$(wc -l $F | head -n -1) # remove the last "total" line
    fi
    tstampF=$(ls --full-time $F)
fi
if [[ -n $f ]]; then
    tstampf=$(ls --full-time $f)
fi
if [[ -n $b ]]; then
    if [[ $#b -eq 1 ]]; then
        bytestamp=$(wc -c $b)
    else
        bytestamp=$(wc -c $b | head -n -1) # remove the last "total" line
    fi
    tstampb=$(ls --full-time $b)
fi
if [[ -n $d ]]; then
    tstampd=$(ls --full-time $d)
fi
if [[ -n $D ]]; then
    tstampD=$(ls --full-time -R $D)
fi
if [[ -n $l ]]; then
    for listfile in $l; do
        if [[ ! -f $listfile ]]; then
            msg "file \`$listfile' does not exist"
        else
            while read line; do
                if [[ ! -e "$line" ]]; then
                    msg "\`$listfile': file/path \`$line' does not exist"
                else
                    l_targets+=("$line")
                fi
            done < $listfile # read contents of $listfile!
        fi
    done
    tstampl=$(ls --full-time -R $l_targets)
fi

#----------------------#
# Begin execution loop #
#----------------------#
# This is like Russian Roulette (where "firing" is executing the command),
# except that all the chambers are loaded, and that on every new turn, instead
# of picking the chamber randomly, we look at the very next chamber. After
# every chamber is given a turn, we reload the gun and start over.
#
# If we detect file/directory modification, we pull the trigger. We can also
# pull the trigger by pressing SPACE or ENTER. If the -x option is provided,
# the last chamber will be set to "always shoot" and will always fire (if the
# trigger hasn't been pulled by the above methods yet).

if [[ $xflag == true && $xdelay_factor -le 1 ]]; then
    xdelay_factor=1
fi
com_num=1
for c in $coms; do
    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
    let com_num+=1
done
echo "autocall: press keys 1-$#coms to execute a specific command"
if [[ $wflag == true ]]; then
    echo "autocall: modification check interval set to $delay sec"
else
    echo "autocall: modification check interval set to $delay sec (default)"
fi
if [[ $xflag == true ]]; then
    echo "autocall: auto-execution interval set to ($delay * $xdelay_factor) = $(($delay*$xdelay_factor)) sec"
fi
if [[ $tflag == true ]]; then
    echo "autocall: TIMEOUT set to $timeout"
    if [[ $kflag == true ]]; then
        echo "autocall: KDELAY set to $killdelay"
    fi
fi
echo "autocall: press ENTER or SPACE to execute manually"
echo "autocall: press \`c' for command list"
echo "autocall: press \`h' for help"
echo "autocall: press \`q' to quit"
key=""
while true; do
    for i in {1..$xdelay_factor}; do
        #------------------------------------------#
        # Case 1: the user forces manual execution #
        #------------------------------------------#
        # read a single key from the user
        read -s -t $delay -k key
        case $key in
            # note the special notation $'\n' to detect an ENTER key
            $'\n'|" "|1)
                autocall_exec $coms[1] $timeout $killdelay 4 "manual execution"
                key=""
                continue
                ;;
            2|3|4|5|6|7|8|9)
                if [[ -n $coms[$key] ]]; then
                    autocall_exec $coms[$key] $timeout $killdelay 4 "manual execution"
                    key=""
                    continue
                else
                    echo "autocall: command slot $key is not set"
                    key=""
                    continue
                fi
                ;;
            c)
                com_num=1
                echo ""
                for c in $coms; do
                    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
                    let com_num+=1
                done
                key=""
                continue
                ;;
            h)
                echo "\nautocall: press \`c' for command list"
                echo "autocall: press \`h' for help"
                echo "autocall: press \`q' to exit"
                com_num=1
                for c in $coms; do
                    echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
                    let com_num+=1
                done
                echo "autocall: press keys 1-$#coms to execute a specific command"
                echo "autocall: press ENTER or SPACE or \`1' to execute first command manually"
                key=""
                continue
                ;;
            q)
                echo "\nautocall: exiting..."
                exit 0
                ;;
            *) ;;
        esac

        #------------------------------------------------------------------#
        # Case 2: modification is detected among watched files/directories #
        #------------------------------------------------------------------#
        if [[ -n $f ]]; then
            tstampf_new=$(ls --full-time $f)
        fi
        if [[ -n $F ]]; then
            if [[ $#F -eq 1 ]]; then
                linestamp_new=$(wc -l $F)
            else
                linestamp_new=$(wc -l $F | head -n -1) # remove the last "total" line
            fi
            tstampF_new=$(ls --full-time $F)
        fi
        if [[ -n $b ]]; then
            if [[ $#b -eq 1 ]]; then
                bytestamp_new=$(wc -c $b)
            else
                bytestamp_new=$(wc -c $b | head -n -1) # remove the last "total" line
            fi
            tstampb_new=$(ls --full-time $b)
        fi
        if [[ -n $d ]]; then
            tstampd_new=$(ls --full-time $d)
        fi
        if [[ -n $D ]]; then
            tstampD_new=$(ls --full-time -R $D)
        fi
        if [[ -n $l ]]; then
            tstampl_new=$(ls --full-time -R $l_targets)
        fi
        if [[ -n $f && "$tstampf" != "$tstampf_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampf" "$tstampf_new"
            tstampf=$tstampf_new
            continue
        elif [[ -n $F && "$linestamp" != "$linestamp_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampF" "$tstampF_new"
            linestamp=$linestamp_new
            tstampF=$tstampF_new
            continue
        elif [[ -n $b && "$bytestamp" != "$bytestamp_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampb" "$tstampb_new"
            bytestamp=$bytestamp_new
            tstampb=$tstampb_new
            continue
        elif [[ -n $d && "$tstampd" != "$tstampd_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampd" "$tstampd_new"
            tstampd=$tstampd_new
            continue
        elif [[ -n $D && "$tstampD" != "$tstampD_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampD" "$tstampD_new"
            tstampD=$tstampD_new
            continue
        elif [[ -n $l && "$tstampl" != "$tstampl_new" ]]; then
            autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampl" "$tstampl_new"
            tstampl=$tstampl_new
            continue
        fi

        #-----------------------------------------------------#
        # Case 3: periodic, automatic execution was requested #
        #-----------------------------------------------------#
        if [[ $xflag == true && $i -eq $xdelay_factor ]]; then
            autocall_exec $coms[1] $timeout $killdelay 3 "commencing auto-execution ($(($delay*$xdelay_factor)) sec)"
        fi
    done
done

# vim:syntax=zsh

Zsh: univ_open() update (new dir_info() script)

I’ve been heavily using the univ_open() shell function (a universal file/directory opener for the command line) for many months now, and have made some slight changes to it. It’s all PUBLIC DOMAIN like my other stuff, so you have my blessing if you want to make it better (if you do, send me a link to your version or something).

What’s new? The code that prettily lists all directory contents after entering a directory has been split out into its own function, called “dir_info”. So there are 2 functions now: univ_open(), which opens any file/directory from the command line according to predefined preferred apps (dead simple to configure, as you can see in the code), and dir_info(), a prettified “ls” replacement that also lists helpful info like the largest file and the number of files in the directory.

univ_open() has not changed much — the only new stuff are the helpful error messages.

dir_info() should be used by itself from a shell alias (I’ve aliased all its 8 modes to quick keys like “ll”, “l”, “lk”, “lj”, etc. for quick access in ~/.zshrc). For example, “l” is aliased to “dir_info 1”. Here is the code below for dir_info():

#!/bin/zsh

# dir_info(), a function that acts as an intelligent "ls". This function is
# used by univ_open() to display directory contents, but it should additionally
# be used by itself. By default, you can call dir_info() without any arguments,
# but there are 8 presets that are hardcoded below (presets 0-7). Thus, you
# could do "dir_info [0-7]" to use those modes. You should alias these modes to
# something easy like "ll" or "l", etc. from your ~/.zshrc. For a detailed
# explanation of the 8 presets, see the code below.

dir_info() {
    # colors
    c1="\x1b[1;32m" # bright green
    c2="\x1b[1;33m" # bright yellow
    c3="\x1b[1;34m" # bright blue
    c4="\x1b[1;36m" # bright cyan
    c5="\x1b[1;35m" # bright purple
    c6="\x1b[1;31m" # bright red
    # only pre-emptively give newline to prettify listing of directory contents
    # if the directory is not empty
    [[ $(ls -A1 | wc -l) -ne 0 ]] && echo
    countcolor() {
        if [[ $1 -eq 0 ]]; then
            echo $c4
        elif [[ $1 -le 25 ]]; then
            echo $c1
        elif [[ $1 -le 50 ]]; then
            echo $c2
        elif [[ $1 -le 100 ]]; then
            echo $c3
        elif [[ $1 -le 200 ]]; then
            echo $c5
        else
            echo $c6
        fi
    }

    sizecolor() {
        case $1 in
            B)
                echo $c0
                ;;
            K)
                echo $c1
                ;;
            M)
                echo $c2
                ;;
            G)
                echo $c3
                ;;
            T)
                echo $c5
                ;;
            *)
                echo $c4
                ;;
        esac
    }
    ce="\x1b[0m"

    ctag_size=""

    # only show information if the directory is not empty
    if [[ $(ls -A1 | wc -l) -gt 0 ]]; then
        size=$(ls -Ahl | head -n1 | head -c -2)
        suff=$(ls -Ahl | head -n1 | tail -c -2)
        size_num=$(echo -n $size | cut -d " " -f2 | head -c -1)
        ctag_size=$(sizecolor $suff)
        simple=false
        # show variation of `ls` based on given argument
        case $1 in
            0) # simple
                ls -Chs -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            1) # verbose
                ls -Ahl --color | tail -n +2
                ;;
            2) # simple, but sorted by size (biggest file on bottom with -r flag)
                ls -ChsSr -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            3) # verbose, but sorted by size (biggest file on bottom with -r flag)
                ls -AhlSr --color | tail -n +2
                ;;
            4) # simple, but sorted by time (newest file on bottom with -r flag)
                ls -Chstr -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            5) # verbose, but sorted by time (newest file on bottom with -r flag)
                ls -Ahltr --color | tail -n +2
                ;;
            6) # simple, but sorted by extension
                ls -ChsX -w $COLUMNS --color | tail -n +2
                simple=true
                ;;
            7) # verbose, but sorted by extension
                ls -AhlX --color | tail -n +2
                ;;
            *)
                simple=true
                ls --color
                ;;
        esac

        # show number of files or number of shown vs hidden (as a fraction),
        # depending on which version of `ls` was used
        denom=$(ls -A1 | wc -l)
        numer=$denom
        # redefine numer to be a smaller number if we're in simple mode (and
        # only showing non-dotfiles/non-dotdirectories)
        $simple && numer=$(ls -1 | wc -l)
        ctag_count=$(countcolor $denom)

        if [[ $numer != $denom ]]; then
            if [[ $numer -gt 1 ]]; then
                echo -n "\nfiles $numer/$ctag_count$denom$ce | "
            else
                dotfilecnt=$(($denom - $numer))
                s=""
                [[ $dotfilecnt -gt 1 ]] && s="s" || s=""
                echo -n "\nfiles $numer/$ctag_count$denom$ce ($dotfilecnt dotfile$s) | "
            fi
        else
            echo -n "\nfiles $ctag_count$denom$ce | "
        fi

        if [[ $suff != "0" ]]; then
            echo -n "size $ctag_size$size_num $suff$ce"
        else
            echo -n "size$ctag_size nil$ce"
        fi

        # Find the biggest file in this directory.
        #
        # We first use ls to list all contents, sorted by size; then, we strip
        # all non-regular file entries (such as directories and symlinks);
        # then, we truncate our result to kill all newlines with 'tr' (e.g., if
        # there is a tiny file (say, 5 bytes) and there are directories and
        # symlinks, it's likely that the file is NOT the biggest "file"
        # according to 'ls', which means that the output up to this point will
        # have trailing whitespaces (thus making the next command 'tail -n 1'
        # fail, even though there is a valid file!)); we then fetch the last
        # line of this list, which is the biggest file, then make it so that
        # all multiple contiguous spaces are replaced with a single space --
        # and using this new property, we can safely call 'cut' by specifying
        # the single space " " as a delimiter to finally get our filename.
        big=$(ls -lSr | sed 's/^[^-].\+//' | tr -s "\n" | tail -n 1 | sed 's/ \+/ /g' | cut -d " " -f9-)
        if [[ -f "$big" ]]; then
            # since $suff needs a file size suffix (K,M,G, etc.), we reassign
            # $big_size here from pure block size to human-readable notation
            # make $big_size more "accurate" (not in terms of disk space usage,
            # but in terms of actual number of bytes inside the file) if it is
            # smaller than 4096 bytes
            suff=""
            if [[ $(du -b "$big" | cut -f1) -lt 4096 ]]; then
                big_num="$(du -b "$big" | cut -f1)"
                suff="B"
            else
                big_num=$(ls -hs "$big" | cut -d " " -f1 | sed 's/[a-zA-Z]//')
                suff=$(ls -hs "$big" | cut -d " " -f1 | tail -c -2)
            fi
            ctag_size=$(sizecolor "$suff")
            echo " | \`$big' $ctag_size$big_num $suff$ce"
        else
            echo
        fi
    fi
}

Here is the updated univ_open() shell function:

#!/bin/zsh

# univ_open() is intended to be used to pass either a SINGLE valid FILE or
# DIRECTORY. For illustrative purposes, we assume "d" to be aliased to
# univ_open() in ~/.zshrc. If optional flags are desired, then either prepend
# or append them appropriately. E.g., if you have jpg's to be opened by eog,
# then doing "d -f file.jpg" or "d file.jpg -f" will be the same as "eog -f
# file.jpg" or "eog file.jpg -f", respectively. The only requirement when
# passing flags is that either the first word or the last word must be a valid
# filename.

# univ_open requires the custom shell function dir_info() (`ls` with saner
# default args) to work properly

univ_open() {
    if [[ -z $@ ]]; then
        # if we do not provide any arguments, go to the home directory
        cd && dir_info # ("cd" w/o any arguments goes to the home directory)
    elif [[ -f $1 || -f ${=@[-1]} ]]; then
        # if we're here, it means that the user either (1) provided a single valid file name, or (2) a number of
        # commandline arguments PLUS a single valid file name; use of the $@ variable ensures that we preserve all the
        # arguments the user passed to us

        # $1 is the first arg; ${=@[-1]} is the last arg (i.e., if user passes "-o -m FILE" to us, then obviously the
        # last arg is the filename)
        #
        # we use && and || for simple ternary operation (like ? and : in C)
        [[ -f $1 ]] && file=$1 || file=${=@[-1]}
        case $file:e:l in
            (doc|odf|odt|rtf)
                soffice -writer $@ &>/dev/null & disown
            ;;
            (pps|ppt)
                soffice -impress $@ &>/dev/null & disown
            ;;
            (htm|html)
                firefox $@ &>/dev/null & disown
            ;;
            (eps|pdf|ps)
                evince -f $@ &>/dev/null & disown
            ;;
            (bmp|gif|jpg|jpeg|png|svg|tga|tiff)
                eog $@ &>/dev/null & disown
            ;;
            (psd|xcf)
                gimp $@ &>/dev/null & disown
            ;;
            (aac|flac|mp3|ogg|wav|wma)
                mplayer $@
            ;;
            (mid|midi)
                timidity $@
            ;;
            (asf|avi|flv|ogm|ogv|mkv|mov|mp4|mpg|mpeg|rmvb|wmv)
                smplayer $@ &>/dev/null & disown
            ;;
            (djvu)
                djview $@
            ;;
            (exe)
                wine $@ &>/dev/null & disown
            ;;
            *)
                vim $@
            ;;
        esac
    elif [[ -d $1 ]]; then
        # if the first argument is a valid directory, just cd into it -- ignore
        # any trailing arguments (in zsh, '#' is the same as ARGC and denotes
        # the number of arguments passed to the function, so '$#' is the same
        # as $ARGC)
        if [[ $# -eq 1 ]]; then
            cd $@ && dir_info
        else
            # if the first argument was a valid directory, but there was more than 1 argument, then we ignore these
            # trailing args but still cd into the first given directory
            cd $1 && dir_info
            # i.e., show arguments 2 ... all the way to the last one (last one has an index of -1 argument array)
            echo "\nuniv_open: argument(s) ignored: \`${=@[2,-1]}\`"
            echo "univ_open: went to \`$1'\n"
        fi
    elif [[ ! -e $@ ]]; then
        local head
        [[ $# -gt 1 ]] && head=$1:h || head=$@:h
        # if we're given just 1 argument, and that argument does not exist,
        # then go to the nearest valid parent directory; we use a while loop to
        # find the closest valid directory, just in case the user gave a
        # butchered-up path
        while [[ ! -d $head ]]; do head=$head:h; done
        cd $head && dir_info
        echo "\nuniv_open: path \`$@' does not exist"
        [[ $head == "." ]] && echo "univ_open: stayed in same directory\n" || echo "univ_open: relocated to nearest parent directory \`$head'\n"
    else
        # possible error -- should re-program the above if this ever happens,
        # but, it seems unlikely
        echo "\nuniv_open: error -- exiting peacefully"
    fi
}
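For readers who don't speak zsh: the whole dispatch hinges on the `$file:e:l` modifier, which takes the filename's extension (`:e`) and lowercases it (`:l`). Here is a portable sketch of the same idea in plain sh (the sample filename is made up):

```sh
# zsh's $file:e:l means "extension, lowercased"; the portable equivalent:
file="Report.PDF"                           # made-up sample filename
ext=${file##*.}                             # strip through the last dot -> "PDF"
ext=$(printf '%s' "$ext" | tr 'A-Z' 'a-z')  # lowercase -> "pdf"
echo "$ext"                                 # -> pdf
```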

Put these two functions inside your ~/.zsh/func folder with filenames “dir_info” and “univ_open” and autoload them. I personally do this in my ~/.zshrc:

fpath=(~/.zsh/func $fpath)
autoload -U ~/.zsh/func/*(:t)

to have them autoloaded. Then simply alias “d” to “univ_open”, “ll” to “dir_info 0”, and “l” to “dir_info 1”, and you’re set (the aliases to dir_info are optional, but highly recommended). The real star of the show in this post is dir_info() and how it displays various info for the directory after the “ls” output is printed on the screen. I hope you enjoy them as much as I do.
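Concretely, the aliases are just one-liners in your ~/.zshrc:

```zsh
# ~/.zshrc: alias the functions once they are autoloaded
alias d='univ_open'
alias ll='dir_info 0'
alias l='dir_info 1'
```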

Generic Screen Backlight-Toggling Shell Script

I’ve written a shell script to toggle my LCD backlight (aka, screen brightness). Currently it works on a Dell D505 laptop and Fujitsu V1040. Here it is:

#!/bin/zsh
# change backlight settings based on system
# symlink to /usr/bin/brightness, and call with "sudo brightness"
# make sure to disable password prompt for it with "sudo visudo"

case $HOST in
    aether) # Fujitsu V1040
        # since xbacklight always returns a single floating point number, we need to
        # convert it to an integer
        float=$(xbacklight)
        int=${float/\.*}
        if [[ $int -eq 0 ]]; then
            xbacklight -set 100
        else
            xbacklight -set 0
        fi
    ;;
    luxion) # Dell Latitude D505
        b=$(cat /sys/class/backlight/dell_backlight/actual_brightness)
        let "bi=$b"
        if [[ $b -lt 7 ]]; then
            # gradually raise backlight, just like xbacklight
            while [[ $bi -lt 7 ]]; do
                let "bi=$bi+1"
                echo $bi > /sys/class/backlight/dell_backlight/brightness
            done
        else
            while [[ $bi -gt 0 ]]; do
                let "bi=$bi-1"
                echo $bi > /sys/class/backlight/dell_backlight/brightness
            done
        fi
        ;;
    *)
        ;;
esac

# vim:syntax=zsh

The script is pretty simple. It looks at the name of the computer (the $HOST environment variable) and then chooses the appropriate way to change the backlight (gradually setting it to either 100 or 0). Since the neat xbacklight command doesn’t work on my Dell laptop, I’ve written my own method. Echo-ing the desired brightness number into a system file (yes, Linux is cool like that) seems to be the trend in the various Linux forums out there. The trick is to find the right “brightness” file. Some systems use /proc/acpi/video/…/brightness, while others, like my Dell Latitude D505, use /sys/class/backlight/…/brightness (though, on my Dell, I have to read separately from the file “actual_brightness,” but you get the idea). The script is very straightforward and easily modifiable to work on your own laptop.
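The gradual ramp on the Dell is just a loop that writes successive integers into the sysfs file. Here is the same loop sketched against a temporary file standing in for the sysfs node, so you can run it without root and without a Dell:

```sh
# Sketch of the Dell ramp loop; a temp file stands in for
# /sys/class/backlight/dell_backlight/brightness (no root needed here).
node=$(mktemp)
echo 2 > "$node"            # pretend the current brightness reading is 2

bi=$(cat "$node")
while [ "$bi" -lt 7 ]; do
    bi=$((bi + 1))
    echo "$bi" > "$node"    # on real hardware this write needs root
done
final=$(cat "$node")
echo "$final"               # -> 7
rm -f "$node"
```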

The only requirement is that, at least when echo-ing anything into /proc or /sys, you need root privileges. So you have to run “sudo path/to/brightness.sh”. I’ve simplified the process as follows:

  • sudo ln -s path/to/brightness.sh /usr/bin/brightness -> This makes the command “brightness” become available from anywhere in the system (since /usr/bin is in your $PATH environment variable).
  • sudo visudo -> Then add the line “your_username your_hostname=NOPASSWD: /usr/bin/brightness” -> This makes it so that typing “sudo brightness” does not require any password.
  • bind “sudo brightness” to a hotkey (consult your window manager) -> For me, I use SHIFT + CAPSLOCK + Backslash (capslock is my modkey in Xmonad).
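The first two steps, condensed (the sudoers line is a template; substitute your own username and hostname, and note that the script path is whatever you chose):

```zsh
# one-time setup
sudo ln -s path/to/brightness.sh /usr/bin/brightness

# then run `sudo visudo` and add a line of this shape:
#   your_username your_hostname = NOPASSWD: /usr/bin/brightness
```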

Again, all of this “sudo visudo” and “sudo brightness” business is only required because my aging Dell laptop doesn’t play well with xbacklight.

So there you have it. A simple script that relies on either xbacklight, or a customized method, depending on the system. If you have multiple laptops that are supported by xbacklight, simply replace “aether” in the script above with “system1|system2|system3|system4” to save yourself from having to copy/paste the entire block of code for each system.
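For example, a single case branch can cover several xbacklight-capable machines via alternation. The hostnames below are made up, and $HOST is set by hand so the sketch runs anywhere (on a real system zsh sets $HOST for you):

```sh
# One branch per *method*, not per machine: alternation in the pattern
# lets several hosts share the xbacklight code path.
HOST=system2                         # faked for the sketch
case $HOST in
    aether|system1|system2|system3)
        msg="xbacklight path"        # the xbacklight toggle would go here
    ;;
    luxion)
        msg="sysfs path"             # the echo-into-sysfs ramp would go here
    ;;
    *)
        msg="unsupported host"
    ;;
esac
echo "$msg"                          # -> xbacklight path
```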