# Why I Buy Moleskine Journals

I keep a diary and also a programming journal. Because I want to keep all of my ideas down for future reference, I prefer to use nice journals for them (no spiral notebooks or ones with perforated pages). For these purposes, I turn to the Moleskine brand of journals — more specifically, their Folio line of A4-sized notebooks.

Yes, the Moleskine line of journals is quite overpriced. Yes, they are not made in Europe as the name implies (they are made in China). However, Moleskine is quite possibly the only company in the world that makes journals that meet all of the following criteria:

• Sewn binding (pages can lie flat for sane writing)
• A4 or larger size
• Blank/lined/graph pages
• Hardcover
• Regular paper (not sketch paper)

I think some of Clairefontaine’s journals come close, but all of their A4-sized journals are lined, I believe. Besides, as great as their paper quality is (it really is fantastic to write on), it is way too bright for my taste.

By the way, if you are a programmer I strongly recommend that you keep your notes in a physical notebook. If you are learning a new programming language or starting to sketch the design goals of a new programming project, keeping a notebook for it is quite possibly one of the best decisions you will make.

Oh, and one more thing: purchase a good pen to go along with your journal(s). Don’t torture yourself with horrible, cheap pens that make writing a chore.

# Easy Solutions to Hard Problems?

I’ve been trudging along knee-deep in tree manipulation algorithms recently, all in Haskell. I admit, calling them “algorithms” makes me sound much smarter than I really am… what I actually worked on was converting one type of tree into another type of tree (functionA) and then back again (functionB), such that I would get back the original tree, all in a purely functional, stateless, recursive way. My brain hurt a lot. I hit roadblock after roadblock; still, I managed to clear each hurdle by sheer effort (I never studied computer science in school).

See, I have a habit of solving typical Haskell coding problems in a very iterative fashion. Here are the steps I usually follow:

1. Research the problem domain (Wikipedia, StackOverflow, etc.)
2. Take lots of loose notes on paper (if drawing is required), or on the computer (using Emacs’ Org-mode).
3. Write temporary prototype functions to get the job done.
4. Test said functions with GHCi.
5. Repeat Steps 1 through 4 until I am satisfied with the implementation.

The steps usually work after one or two iterations. But for hard problems, I would end up going through many more (failed) iterations. Over and over again, hundreds of lines of Haskell would get written, tested, then ultimately abandoned because their flawed design would dawn on me halfway through Step 3. Then I would get burned out, and spend the rest of the day away from the computer screen, doing something completely different. But on the following day, I would cook up a solution from scratch in an hour.

It’s such a strange feeling. You try really hard for hours, days, weeks even, failing again and again. Then, after a brief break, you just figure it out, with less mental effort than all the effort you put in previously.

What can explain this phenomenon? The biggest factor is obviously the learning process itself: it takes lots of failures to familiarize yourself intimately with the problem at hand. But the way the solution comes to me only after a deliberate pause, a complete separation from the problem domain, fascinates me. I call it the “Pause Effect” (PE), because I’m too lazy to dig up the proper psychological term for it, which probably already exists.

So, here’s my new guideline for solving really hard problems:

1. Try to solve the problem in a “brute-force” manner. Don’t stop until you burn out. I call this the “Feynman Step”, after the televised interview “Take the World from Another Point of View” (1973), in which, toward the end, he describes a good conversation partner: someone who has gone as far as he can go in his field of study. Every time I’m in this step, I think of how animated Feynman gets in that interview, and it fires me up again.
2. Rest one or two days, then come back to it. This is the “PE Step”.

The best part is when you ultimately find the solution — it feels incredible. You get this almost intoxicating feeling of empowerment in yourself and in your own abilities.

# Xorg: Using the US International (altgr-intl variant) Keyboard Layout

TL;DR: If you want to sanely type French, German, Spanish, or other European languages on a standard US keyboard (101 or 104 keys), your only real option is the us layout with the (relatively new) altgr-intl variant.

I like to type in French and German sometimes because I study these languages for fun. For some time, I used the fr and de X keyboard layouts to input special characters from these languages. However, it wasn’t until recently that I realized that most European layouts, such as fr and de, require a 105-key keyboard. A 105-key keyboard looks exactly like the standard, full-sized IBM-style 104-key keyboard (i.e., the IBM Model “M” 101-key keyboard plus 2 Windows keys and 1 “Menu” key on the bottom row), except that it has one extra key on the row with the Shift keys (and also a “tall” Enter key, with some of the existing keys on the right side rearranged a bit).

I personally use a 104-key keyboard (a Unicomp Space Saver 104, because that’s how I roll). Now, when I switched to the fr layout the other day, I wanted to type the word âme. Unfortunately, the circumflex dead key was not where it was supposed to be. This was because I only had 104 keys instead of 105, so forcing the fr layout onto my keyboard left some keys mapped incorrectly (understandably so).

Now, the choice was to either buy a 105-key keyboard or find an alternative 104-key layout that had dead keys. I didn’t want to buy a new 105-key keyboard because (1) the extra key results in a squished, “square”-looking left Shift key, (2) the tall Enter key would make it harder to keep my fingers happy on the home row, since I’d have to stretch my pinky quite a bit to reach it, and (3) I just didn’t feel like spending another ~$100 on a keyboard (my Unicomp is still in excellent condition!). So, I had to find a keyboard layout that could handle accented/non-English Latin characters better. The Canadian Multilingual Standard keyboard layout (http://en.wikipedia.org/wiki/Keyboard_layout#Canadian_Multilingual_Standard) looked very, very cool, but it was only for 105-key keyboards. I then discovered a layout called the US International keyboard layout, which enables you to type all the cool accented letters from French or German (and other European languages) while still retaining a QWERTY, 101/104-key design. This sounded great, until I realized that the tilde (~), backtick (`), single quote (') and other keys were dead keys. I.e., to type ‘~’ in this layout, you have to type ‘~’ followed by a space. This is horrible for programmers in Linux because the ~ key comes up everywhere (it’s a shell shortcut for the $HOME directory). To type a single quote character, you have to type (') followed by a space! Ugh. I’m sorry, but whoever designed this layout was obviously a European non-programmer who wanted an easier way to type English.

But then, all hope was not lost. While browsing all of the variants for the us keyboard layout under X (/usr/share/X11/xkb/rules/base.lst), I found a curious variant called altgr-intl. A quick Google search turned up this page, an email from the creator of this layout to the X developers: http://lists.x.org/archives/xorg/2007-July/026534.html. Here was a person whose desired usage fit my own needs perfectly! Here’s a quote:

I regularly write four languages (Dutch, English, French and German)
on a US keyboard (Model M – © IBM 1984).

I dislike the International keyboard layout. Why do I have to press
two keys for a single quote (‘ followed the spacebar) just because the
‘ key is a dead-key that enables me to type an eacute (é)?

I decided to ‘hide’ the dead-keys behind AltGr (the right Alt key).
AltGr+’ followed by e gives me eacute (é). It also gives me
m² (AltGr+2).

Excellent! Apparently, this layout was so useful that it was eventually included in the upstream X codebase, as the altgr-intl variant of the standard us (QWERTY) keyboard layout. The most compelling feature of this layout is that all of the non-US keys are hidden behind a single key: the right Alt key. If you don’t use the right Alt key, this layout behaves exactly like the regular plain us layout. How cool is that?! What’s more, this makes the layout compatible with the standard 101-key and 104-key IBM-style keyboards!

This variant deserves much more attention. Unfortunately, there seems to be little or no information about it other than the above-quoted email message. I’ve also noticed that it is capable of generating Nordic characters, like þ and ø. There does not seem to be a simple keyboard layout picture on the internet that shows all the keys. Anyway, I’ve been using it recently and it really does work great. There is no need for me to switch between the us, fr, and de layouts to get what I want. I just use this layout, and that’s it. Take that, Canadian Multilingual Standard!

Here’s how to set it up: either call the setxkbmap program like this:

setxkbmap -rules evdev -model evdev -layout us -variant altgr-intl


in a script called by ~/.xinitrc, or set it directly in your Xorg keyboard settings:

Section "InputClass"
    Identifier      "Keyboard Defaults"
    MatchIsKeyboard "yes"
    Option          "XkbLayout"  "us"
    Option          "XkbVariant" "altgr-intl"
EndSection


I personally prefer the script method because you can tweak it easily and reload it without restarting X. In my case, I had to use the -rules evdev -model evdev options because -model pc104 would mess things up (probably due to an interaction with xmodmap and such in my ~/.xinitrc). Whatever you do, make sure everything works by testing the AltGr (right Alt) key. For example, AltGr+a should produce ‘á’.

A couple of caveats: some keys, like â and è (which are AltGr+6+a and AltGr+`+e, i.e., they use dead keys), do not show up at all in urxvt. I’m guessing that the dead keys are broken for urxvt (or maybe it’s a bug with this layout; who knows). Luckily, I can just use a non-console app (like GVIM) to do my Euro-language typing, so it’s OK (although, eventually, I’ll run into this bug again when I start typing French emails from within mutt inside urxvt 20 years later… but by then it will be fixed, I’m sure). Also, the nifty xkbprint utility can generate nice pictures of keyboard layouts (just do a Google image search on it), but it’s currently missing (https://bugs.archlinux.org/task/17541) from Arch Linux’s xorg-xkb-utils package. So, if you’re on Arch, you’ll have to experiment a bit to figure out the various AltGr+key and AltGr+Shift+key combinations.

• October 30, 2011: I just found out that the dead keys bug in urxvt is actually a bug in ibus/SCIM (I use ibus, which relies on SCIM, to enter Japanese/Korean characters when I need to). I tried out uim, which is a bit more complicated (much less user friendly, although there are no technical defects, from what I can tell), and with uim, the dead keys work properly in urxvt. The actual bug report for ibus is here.
So, use uim if you want to use the altgr-intl layout flawlessly, while still retaining CJK input functionality. Of course, if you never needed CJK input methods in the first place, this paragraph shouldn’t concern you at all.
• December 7, 2011: Fix typo.
• January 24, 2012: Fix grammar.

# Things I Like About Git

Ever since around 2009-2010, developers have been engaging in an increasingly vocal debate about version control systems. I attribute this to the hugely popular rise of the distributed version control systems (DVCSs), namely Git and Mercurial. From my understanding, DVCSs in general are more powerful than nondistributed VCSs, because DVCSs can act just like nondistributed VCSs, but not vice versa. So, ultimately, all DVCSs give you extra flexibility, if you want it.
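That flexibility is easy to see in practice: a Git repository is fully self-contained, so it can behave exactly like a local, nondistributed VCS with no server involved at all. A minimal sketch (assuming git is installed; all names and paths here are throwaway examples):

```shell
# Create a repository, commit to it, and read its history,
# entirely locally -- no central server ever enters the picture.
tmp=$(mktemp -d)
cd "$tmp"
git init -q standalone
cd standalone
echo "hello" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first commit"
git log --oneline
```

Adding a remote later (with git remote add) is purely optional, which is exactly the extra flexibility described above.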

For various reasons, there is still an ongoing debate as to why one should use, for example, Git over Subversion (SVN). I will not address why there are still adamant SVN (or, gasp, CVS) users in the face of the rising tidal wave of DVCS adoption. Instead, I will talk about things I like about Git, because I’ve been using it almost daily for nearly three years now. My intention is not to add more flames to the ongoing debate, but to give the curious version control virgins out there (these people do exist!) a brief rundown of why I like using Git. Hopefully, this post will help them ask the right questions before choosing a VCS to roll out on their own machines.

## 1. Git detects corrupt data.

Git stores everything in its repository as content-addressed objects (blobs, trees, and commits), each named by the SHA-1 hash of its contents. If a single byte suddenly gets corrupted (e.g., by mechanical disk failure), the stored hash no longer matches the content, and Git will know immediately. And, in turn, you will know immediately.
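Here is a minimal sketch of that guarantee (assuming git is installed; the repository and file names are throwaway examples): commit a file, corrupt a few bytes of the stored object on disk, and let git fsck notice.

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
echo "important data" > a.txt
git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "first"
# Every object is named by the SHA-1 hash of its contents:
blob=$(git rev-parse HEAD:a.txt)
obj=".git/objects/$(echo "$blob" | cut -c1-2)/$(echo "$blob" | cut -c3-)"
# Corrupt a few bytes of the stored (loose) blob on disk...
chmod u+w "$obj"
printf 'XXXX' | dd of="$obj" bs=1 seek=8 conv=notrunc 2>/dev/null
# ...and the checksum mismatch is flagged immediately:
git fsck 2>&1 | head -n 3
```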

Check out this quote from Linus Torvalds’ Git talk back in 2007:

“If you have disc corruption, if you have RAM corruption, if you have any kind of problems at all, git will notice them. It’s not a question of if. It’s a guarantee. You can have people who try to be malicious. They won’t succeed. You need to know exactly 20 bytes, you need to know 160-bit SHA-1 name of the top of your tree, and if you know that, you can trust your tree, all the way down, the whole history. You can have 10 years of history, you can have 100,000 files, you can have millions of revisions, and you can trust every single piece of it. Because git is so reliable and all the basic data structures are really really simple. And we check checksums. And we don’t check some UDP packet checksums that is a 16-bit sum of all the bytes. We check checksums that is considered cryptographically secure.

[I]t’s really about the ability to trust your data. I guarantee you, if you put your data in git, you can trust the fact that five years later, after it is converted from your harddisc to DVD to whatever new technology and you copied it along, five years later you can verify the data you get back out is the exact same data you put in. And that is something you really should look for in a source code management system.”

(BTW, Torvalds, opinionated as he is, has a very high signal-to-noise ratio and I highly recommend all of his talks.)

## 2. It’s distributed.

Because it is based on a distributed model of development, merging is easy. In fact, it is automatic, if there are no conflicting changes between the two commits to be merged. In practice, merge conflicts only occur as a result of poor planning. Sloppy developers, beware!

Another benefit of its distributed model is that it naturally lends itself to the task of backing up content across multiple machines.
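As a sketch of that backup use case (everything below is local and hypothetical; in real life the mirror would sit on another disk or machine, e.g. reachable over ssh): keep a bare mirror of the repository and push to it periodically.

```shell
tmp=$(mktemp -d)
cd "$tmp"
git init -q work
cd work
echo "v1" > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "v1"
# A bare mirror carries every branch and tag -- a complete backup:
git clone -q --mirror . ../backup.git
# After more work, refreshing the backup is a single command:
echo "v2" >> file.txt
git -c user.name=demo -c user.email=demo@example.com commit -qam "v2"
git push -q --mirror ../backup.git
# The mirror now holds the full, identical history:
cd ../backup.git
git log --oneline
```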

set my_gpass2=`awk '/Gmail2/ {print $2}' ~/.sec/.tmp`
set my_del=`rm -f ~/.sec/.tmp`

#---------------#
# Account Hooks #
#---------------#
account-hook . "unset imap_user; unset imap_pass; unset tunnel" # unset first!
account-hook        "imaps://user1@imap.gmail.com/" "\
set imap_user   = user1@gmail.com \
imap_pass   = $my_gpass1"
account-hook        "imaps://user2@imap.gmail.com/" "\
set imap_user   = user2@gmail.com \
imap_pass   = $my_gpass2"

#-------------------------------------#
# Folders, mailboxes and folder hooks #
#-------------------------------------#

# Setup for user1:
set folder          = imaps://user1@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user1@imap.gmail.com/ "\
set folder      = imaps://user1@imap.gmail.com/ \
spoolfile   = +INBOX \
postponed   = +[Gmail]/Drafts \
record      = +[Gmail]/'Sent Mail' \
from        = 'My Real Name <user1@gmail.com>' \
realname    = 'My Real Name' \
smtp_url    = smtps://user1@smtp.gmail.com \
smtp_pass   = $my_gpass1"

# Setup for user2:
set folder          = imaps://user2@imap.gmail.com/
mailboxes           = +INBOX =[Gmail]/Drafts =[Gmail]/'Sent Mail' =[Gmail]/Spam =[Gmail]/Trash
set spoolfile       = +INBOX
folder-hook         imaps://user2@imap.gmail.com/ "\
set folder      = imaps://user2@imap.gmail.com/ \
spoolfile   = +INBOX \
postponed   = +[Gmail]/Drafts \
record      = +[Gmail]/'Sent Mail' \
from        = 'My Real Name <user2@gmail.com>' \
realname    = 'My Real Name' \
smtp_url    = smtps://user2@smtp.gmail.com \
smtp_pass   = $my_gpass2"

#--------#
# Macros #
#--------#
macro index <F1> "y12<return><return>"  # jump to mailbox number 12 (user1 inbox)
macro index <F2> "y6<return><return>"   # jump to mailbox number 6 (user2 inbox)

#-----------------------#
# Gmail-specific macros #
#-----------------------#
# to delete more than 1 message, just mark them with "t" key and then do "d" on them
macro index d ";s+[Gmail]/Trash<enter><enter>" "Move to Gmail's Trash"
# delete message, but from pager (opened email)
macro pager d "s+[Gmail]/Trash<enter><enter>" "Move to Gmail's Trash"
# undelete messages
macro index u ";s+INBOX<enter><enter>" "Move to Gmail's INBOX"
macro pager u "s+INBOX<enter><enter>" "Move to Gmail's INBOX"

#-------------------------#
# Misc. optional settings #
#-------------------------#
# Check for mail in the current IMAP mailbox every 1 min
set timeout = 60
# Check for new mail in ALL mailboxes every 2 min
set mail_check = 120
# keep imap connection alive by polling intermittently (time in seconds)
set imap_keepalive = 300
# allow mutt to open new imap connection automatically
unset imap_passive
# store message headers locally to speed things up
# (the ~/.mutt folder MUST exist! Arch does not create it by default)
set header_cache = ~/.mutt/hcache
# sort mail by threads
set sort = threads
# and sort threads by date
set sort_aux = last-date-received

A couple of things here are not so obvious. The passwords are stored in a file called pass.gpg like this:

Gmail1 user1-password
Gmail2 user2-password

However, that file is encrypted, so you have to decrypt it every time you want to access it. This way, you don’t store your passwords in plaintext! The only drawback is that you have to manually type in your GnuPG password every time you start Mutt. But hey, you only have to type in one password to get the passwords for all of your accounts, so it’s not that bad. You can see that the passwords are fed to the variables $my_gpass1 and $my_gpass2.
(UPDATE January 3, 2012: Mallik has kindly pointed out in the comments that the “my_” prefix is actually a mutt-specific way of defining custom variables! You must use this syntax, unless you want to get bitten by mysterious “unknown variable” error messages!)

One HUGE caveat here: if your password contains a dollar sign ($), say “$quarepant$”, then you have to escape it like this:

\\\$quarepant\\\$


in the pass.gpg file. Yes, those are three backslashes for each instance of “$”. This little-known fact caused me hours of pain, and if it weren’t for my familiarity with escape sequences and shell variables, I would never have figured it out. Now, I don’t know what other characters have to be escaped, but here’s a way to find out: change your Gmail password to contain one weird character like “#”, then manually type in the password like “set my_gpass1 = #blahblah” and see if using $my_gpass1 works. If not, keep adding backslashes (“\#”, then “\\#”, etc.) until it does work. Repeat this procedure for all the other punctuation characters, if necessary. Then change your Gmail password back to its original form, and use the knowledge you gained to put the correctly escaped password inside your pass.gpg file.
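To see why the dollar sign is so troublesome, here is a tiny sketch of the underlying shell behavior (using a throwaway string, not a real password). Mutt’s backtick expansion adds yet another layer of interpretation on top of this, which is where the extra backslashes come from:

```shell
unset quarepant
# Unescaped, the shell treats $quarepant as a (nonexistent) variable
# and expands it to nothing, silently mangling the "password":
echo "unquoted: $quarepant$"    # prints "unquoted: $"
# Escaped, the literal characters survive intact:
echo "escaped: \$quarepant\$"   # prints "escaped: $quarepant$"
```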

You can see that I use awk to get the correct region of text from pass.gpg. If you don’t like awk (I just copied the guide from here), you can use these equivalent lines instead (stdin redirection, grep, and cut):

...
set my_gpass1=`<~/.sec/.tmp grep Gmail1 | cut -d " " -f 2`
set my_gpass2=`<~/.sec/.tmp grep Gmail2 | cut -d " " -f 2`
...


The <F1> and <F2> macros are just there to let you quickly switch between the two accounts. The numbers 6 and 12 happen to be the two INBOX folders on my two Gmail accounts. To check which numbers are right for you, press “y” to list all folders, with their corresponding numbers on the left.

As for the other macros, these are only required because of Gmail’s funky (nonstandard) way of handling IMAP email. Without these macros, pressing “d” to delete an email will NOT send it to the Trash folder, but to the “All Mail” folder. This behavior is strange, because clicking the “Delete” button in Gmail’s web interface moves the email to the Trash folder. I guess Google wants you to use their web interface, or something. Anyway, the delete and undelete macros here make Gmail’s IMAP behave the same way as Gmail’s web interface. The only drawback is that you get a harmless error message, “Mailbox was externally modified. Flags may be wrong.”, at the bottom every time you use them. Anyway, I’ve tested these macros, and they work for me as expected.

The optional settings at the bottom are pretty self-explanatory, and aren’t required. The most important one in there is the header_cache value. This is definitely required if you have thousands of emails in your inboxes, because otherwise Mutt will fetch all the email headers for all emails every time it starts up.

Here are some not-so-obvious shortcut keys from the “index” menu (i.e., the list of emails, not when you’re looking at a single email (that’s called “pager”; see my macros above)) to get you started:

c   (change folder inside the current active account; press ? to bring up a menu)
y   (look at all of your accounts and folders)
*   (go to the very last (newest) email)
TAB (go to the next unread email)


The other shortcut keys that are displayed on the top line will get you going. To customize colors in Mutt, you can start with this 256-color template (just paste it into your .muttrc):

# Source: http://trovao.droplinegnome.org/stuff/dotmuttrc
# Screenshot: http://trovao.droplinegnome.org/stuff/mutt-zenburnt.png
#
# This is a zenburn-based muttrc color scheme that is not (even by far)
# complete. There's no copyright involved. Do whatever you want with it.
# Just be aware that I won't be held responsible if the current color-scheme

# general-doesn't-fit stuff
color normal     color188 color237
color error      color115 color236
color markers    color142 color238
color tilde      color108 color237
color status     color144 color234

# index stuff
color indicator  color108 color236
color tree       color109 color237
color index      color188 color237 ~A
color index      color188 color237 ~N
color index      color188 color237 ~O
color index      color174 color237 ~F
color index      color174 color237 ~D

color hdrdefault color223 color237

# gpg stuff
color body       color188 color237 "^gpg: Good signature.*"
color body       color115 color236 "^gpg: BAD signature.*"
color body       color174 color237 "^gpg: Can't check signature.*"
color body       color174 color237 "^-----BEGIN PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----BEGIN PGP SIGNATURE-----"
color body       color174 color237 "^-----END PGP SIGNED MESSAGE-----"
color body       color174 color237 "^-----END PGP SIGNATURE-----"
color body       color174 color237 "^Version: GnuPG.*"
color body       color174 color237 "^Comment: .*"

# url, email and web stuff
color body       color174 color237 "(finger|ftp|http|https|news|telnet)://[^ >]*"
color body       color174 color237 "<URL:[^ ]*>"
color body       color174 color237 "www\\.[-.a-z0-9]+\\.[a-z][a-z][a-z]?([-_./~a-z0-9]+)?"
color body       color174 color237 "mailto: *[^ ]+\\(\\i?subject=[^ ]+\\)?"
color body       color174 color237 "[-a-z_0-9.%$]+@[-a-z_0-9.]+\\.[-a-z][-a-z]+"

# misc body stuff
color attachment color174 color237 # Add-ons to the message
color signature  color223 color237

# quote levels
color quoted     color108 color237
color quoted1    color116 color237
color quoted2    color247 color237
color quoted3    color108 color237
color quoted4    color116 color237
color quoted5    color247 color237
color quoted6    color108 color237
color quoted7    color116 color237
color quoted8    color247 color237
color quoted9    color108 color237

Make sure your terminal emulator supports 256 colors. (On Arch, you can use the rxvt-unicode package.)

UPDATE April 22, 2011: Tiny update regarding rxvt-unicode (it has supported 256 colors by default for a few months now).

UPDATE August 10, 2011: I did some password changes the other day and, through trial and error, figured out which special (i.e., punctuation) characters you need to escape with backslashes for your Gmail passwords in pass.gpg, as discussed in this post. Here is a short list of them:

` ~ # $ \ ; ' "


I’m quite confident that this covers all the single-character cases (e.g., “foo~bar” or “a#bc”). The overall theme here is to prevent the shell from expanding certain special characters (e.g., the backtick ‘`’ is used to define a shell command region, so we must escape it here). Some characters, like the exclamation mark ‘!’, probably need to be escaped only if you have two of them in a row (i.e., \\\!! instead of !!), since ‘!!’ is usually expanded by most shells to refer to the last entered command. Again, it’s all very shell-specific, and because (1) I don’t really know which shell mutt uses internally, and (2) I don’t really know all that much about the nooks and crannies of shell expansion in general, I am unable to create a definitive list of exactly which characters you must escape.

Another nontrivial note: I realized that storing the just-decrypted pass.gpg file (the ~/.sec/.tmp file above) is still not very secure, because it is a regular file on disk, and its contents can still be recovered after the file is removed. That is, a clever individual could easily run some generic “undelete” program to recover the passwords after gaining control of your machine. I found a very easy, simple solution to this problem of where to store the decrypted temporary file: use a RAM disk partition! This way, when you power off the machine, any traces of your temporarily decrypted pass.gpg file will be lost.

All it takes is a one-liner /etc/fstab entry:

none    /mnt/r0    tmpfs    rw,size=4K    0    0


The size limit of 4K is the absolute minimum (I tried 1K, for 1 KiB, but it was set to 4 KiB regardless, probably because allocations are rounded up to the 4 KiB page size). You can use the following ~/.muttrc portion to decrypt safely and securely:

#-----------#
# Passwords #
#-----------#
set my_tmpsecret=`gpg2 -o /mnt/r0/.tmp -d ~/.sec/pass.gpg`
set my_gpass1=`awk '/Gmail1/ {print $2}' /mnt/r0/.tmp`
set my_gpass2=`awk '/Gmail2/ {print $2}' /mnt/r0/.tmp`
set my_del=`shred -zun25 /mnt/r0/.tmp`


Easy, and secure. Just for fun and extra security, we use the shred utility to really make sure that the decrypted file gets erased, in case (1) the machine has a long uptime (crackers, or losing control of the machine should someone forcibly prevent you from turning it off, etc.) or (2) the machine’s RAM somehow retains the file contents even after a reboot. As a wise man once said, “It is better to bow too much than to bow not enough.” So it is with security measures.

UPDATE December 22, 2011: Erwin from the comments gave me a tip about a simpler way to handle passwords. It is so simple that I am utterly embarrassed about my previous approach using /mnt/r0 and all that, and I’ve already changed my setup to use this method! Apparently, mutt comes with the source command, which reads in chunks of mutt commands (and, if the argument ends in a pipe character, takes them from that command’s standard output). So, the idea is: instead of storing just the passwords in an encrypted file, store the entire set commands in the file and encrypt it. Then you can simply source it upon decryption.

~/.sec/pass.gpg

set my_gpass1="my super secret password"
set my_gpass2="my other super secret password"


Then, you can source the above upon decryption in its entirety, like this:

source "gpg --textmode -d ~/.sec/pass.gpg |"


The unusual and welcome benefit of this approach, apart from its simplicity, is that you don’t need to change how your password was quoted/escaped in the previous approach! I only found this out the hard way, after trying to remove the various backslashes, thinking that I needed to change the quotation level. You just use the same password strings as before, and you’re set! Also, the double quotes used with the source command are not a typo: mutt will see that the last character is a pipe ‘|’ and interpret the string as a command to run, not a static file name.

Many thanks to Erwin for pointing this out to me.

UPDATE April 16, 2012: Typo and grammar fixes.

# Improved Autocall Script

I’ve updated the Autocall Ruby script I’ve mentioned various times before by porting it to Zsh and adding a TON of features. My previous posts on it are now obsolete.

If you haven’t read my previous posts on this topic, basically, Autocall is a script that I wrote to automate executing an arbitrary command (from the shell) any time some file is modified. In practice, it is very useful for any small/medium project that needs to compile/build something out of source code. Typical use cases are for LilyPond files, LaTeX/XeTeX, and C/C++ source code.

Anyway, the script is now very robust and much more intelligent. It now parses options and flags instead of naively looking at ARGV one element at a time. Perhaps the best new feature is the ability to kill processes that hang. Also, you can now specify up to 9 different commands with 9 instances of the “-c” flag (additional instances are ignored). Commands 2-9 are accessible manually via the numeric keys 2-9 (command 1 is the default). This is useful if you have, for instance, different build targets in a makefile. E.g., you could do

$ autocall -c "make" -c "make -B all" -c "make clean" -l file_list

to make things a bit easier.

I use this script all the time — mostly when working with XeTeX files or compiling source code. It works best in a situation where you have to do something X whenever files/directories Y change in any way. Again, the command to be executed is arbitrary, so you could use it to call some script X whenever a change is detected in a file/directory. If you use it with LaTeX/XeTeX/TeX, use the "-halt-on-error" option so that you don’t have to have autocall kill it (killing is only available with the -k flag). The copious comments should help you get started. Like all my stuff, it is not licensed at all: it’s released into the PUBLIC DOMAIN, without ANY warranties whatsoever in any jurisdiction (use at your own risk!).

#!/bin/zsh
# PROGRAM: autocall
# AUTHOR: Shinobu (https://zuttobenkyou.wordpress.com)
# LICENSE: PUBLIC DOMAIN
#
#
# DESCRIPTION:
#
# Autocall watches (1) a single file, (2) a directory, and/or (3) a text file
# containing a list of files/directories, and if the watched files and/or
# directories become modified, runs the (first) given command string. Multiple
# commands can be provided (a total of 9 command strings are recognized) to
# manually execute different commands.
#
#
# USAGE:
#
# See msg("help") function below -- read that portion first!
#
#
# USER INTERACTION:
#
# Press "h" for help.
# Pressing a SPACE, ENTER, or "1" key forces execution of COMMAND immediately.
# Keys 2-9 are hotkeys to extra commands, if there are any.
# Press "c" for the command list.
# To exit autocall gracefully, press "q".
#
#
# DEFAULT SETTINGS:
#
# (-w) DELAY = 5
# (-x) FACTOR = 4
#
#
# EXAMPLES:
#
# Execute "pdflatex -halt-on-error report.tex" every time "report.tex" or
# "ch1.tex" is modified (if line count changes in either file; modification
# checked every 5 seconds by default):
# autocall -c "pdflatex -halt-on-error report.tex" -F report.tex -f ch1.tex
#
# Same, but only look at "ch1.tex" (useful, assuming that report.tex includes
# ch1.tex), and automatically execute every 4 seconds:
# autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 4
# (-x 0 or -x 1 here would also work)
#
# Same, but also automatically execute every 20 (5 * 4) seconds:
# autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 4
#
# Same, but automatically execute every 5 (5 * 1) seconds (-w is 5 by default):
# autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -x 1
#
# Same, but automatically execute every 1 (1 * 1) second:
# autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 1
#
# Same, but automatically execute every 17 (1 * 17) seconds:
# autocall -c "pdflatex -halt-on-error report.tex" -F ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its byte size, not line count:
# autocall -c "pdflatex -halt-on-error report.tex" -b ch1.tex -w 1 -x 17
#
# Same, but for "ch1.tex", watch its timestamp instead (i.e., every time
# this file is saved, the modification timestamp will be different):
# autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -w 1 -x 17
#
# Same, but also look at the contents of directory "images/ocean":
# autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -w 1 -x 17
#
# Same, but also look at the contents of directory "other" recursively:
# autocall -c "pdflatex -halt-on-error report.tex" -f ch1.tex -d images/ocean -D other -w 1 -x 17
#
# Same, but look at all files and/or directories (recursively) listed in file
# "watchlist" instead:
# autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -w 1 -x 17
#
# Same, but also look at "newfile.tex":
# autocall -c "pdflatex -halt-on-error report.tex" -l watchlist -f newfile.tex -w 1 -x 17
#
# Same, but also allow manual execution of "make clean" with hotkey "2":
# autocall -c "pdflatex -halt-on-error report.tex" -c "make clean" -l watchlist -f newfile.tex -w 1 -x 17
#
###############################################################################
###############################################################################

#-----------------#
# Local functions #
#-----------------#
msg () {
case $1 in
"help")
echo "
autocall: Usage:

autocall [OPTIONS]

Required parameter:
-c COMMAND      The command to be executed (put COMMAND in quotes). Note that
COMMAND can be a set of multiple commands, e.g. \"make clean;
make\". You can also specify multiple commands by invoking
-c COMMAND multiple times -- the first 9 of these are set to
hotkeys 1 through 9, if present. This is useful if you want to
have a separate command that is available and can only be
executed manually.

One or more required parameters (but see -x below):
-f FILE         File to be watched. Modification detected by time.
-F FILE         File to be watched. Modification detected by line-size.
-b FILE         File to be watched. Modification detected by bytes.
-d DIRECTORY    Directory to be watched. Modification detected by time.
-D DIRECTORY    Directory to be watched, recursively. Modification
detected by time.
-l FILE         Text file containing a list of files/directories (each on
its own line) to be watched (directories listed here are
watched recursively). Modification is detected with 'ls'.

Optional parameters:
-w DELAY        Wait DELAY seconds before checking on the watched
files/directories for modification; default 5.
-t TIMEOUT      If COMMAND does not finish execution after TIMEOUT seconds,
send a SIGTERM signal to it (but do nothing else afterwards).
-k KDELAY       If COMMAND does not finish execution after TIMEOUT,
then wait KDELAY seconds and send SIGKILL to it if COMMAND is
still running. If only -k is given without -t, then -t is
automatically set to the same value as KDELAY.
-x FACTOR       Automatically execute the command repeatedly every DELAY *
FACTOR seconds, regardless of whether the watched
files/directories were modified. If FACTOR is zero, it is set
to 1. If -x is set, then -f, -d, and -l are not required (i.e.,
if only the -c and -x options are specified, autocall will
simply act as a while loop, executing COMMAND every DELAY *
FACTOR seconds). Since the interval is (DELAY * FACTOR)
seconds, if DELAY is 1, then FACTOR itself (if greater
than 0) is the interval in seconds.
-a              Same as \`-x 1'
-v              Show version number and exit (regardless of other parameters).
"
exit 0
;;
"version")
echo "autocall version 1.0"
exit 0
;;
*)
echo "autocall: $1"
exit 1
;;
esac
}

is_number () {
if [[ $(echo $1 | sed 's/^[0-9]\+//' | wc -c) -eq 1 ]]; then
true
else
false
fi
}

autocall_exec () {
timeout=$2
killdelay=$3
col=""
case $4 in
1) col=$c1 ;; 2) col=$c2 ;;
3) col=$c3 ;; 4) col=$c4 ;;
5) col=$c5 ;; 6) col=$c6 ;;
*) col=$c1 ;;
esac
echo "\nautocall: $c2[$(date --rfc-3339=ns)]$ce $col$5$ce"
if [[ $# -eq 7 ]]; then
diff -u0 -B -d <(echo "$6") <(echo "$7") | tail -n +4 | sed -e "/^[@-].\+/d" -e "s/\(\S\+\s\+\S\+\s\+\S\+\s\+\S\+\s\+\)\(\S\+\s\+\)\(\S\+\s\+\S\+\s\+\S\+\s\+\)/\1$c1\2$ce$c2\3$ce/" -e "s/^/  $c1>$ce /"
echo
fi
echo "autocall: calling command \`$c4$1$ce'..."
# see the "z" flag under PARAMETER EXPANSION under "man zshexpn" for more info
if [[ $tflag == true || $kflag == true ]]; then
# the 'timeout' command gives nice exit statuses -- it gives 124 if the
# command times out, but if the command exits with an error of its own,
# it gives that error number (so if the command doesn't time out, but
# exits with 4 or 255 or whatever, it (the timeout command) will exit
# with that number instead)
# note: if kflag is true, then tflag is always true
com_exit_status=0
if [[ $kflag == true ]]; then
eval timeout -k $killdelay $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
com_exit_status=$pipestatus[1]
else
eval timeout $timeout $1 2>&1 | sed "s/^/  $col>$ce /"
com_exit_status=$pipestatus[1]
fi
if [[ $com_exit_status -eq 124 ]]; then
echo "\n${c6}autocall: command timed out$ce"
elif [[ $com_exit_status -ne 0 ]]; then
echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
else
echo "\n${c1}autocall: command executed successfully$ce"
fi
else
eval $1 2>&1 | sed "s/^/  $col>$ce /"
com_exit_status=$pipestatus[1]
if [[ $com_exit_status -ne 0 ]]; then
echo "\n${c6}autocall: command exited with error status $com_exit_status$ce"
else
echo "\n${c1}autocall: command executed successfully$ce"
fi
fi
}

#------------------#
# Global variables #
#------------------#

# colors
c1="\x1b[1;32m" # bright green
c2="\x1b[1;33m" # bright yellow
c3="\x1b[1;34m" # bright blue
c4="\x1b[1;36m" # bright cyan
c5="\x1b[1;35m" # bright purple
c6="\x1b[1;31m" # bright red
ce="\x1b[0m"

coms=()
delay=5
xdelay_factor=4
f=()
F=()
b=()
d=()
D=()
l=()
l_targets=()
wflag=false
xflag=false
tflag=false
kflag=false
timeout=0
killdelay=0

tstampf="" # used to DISPLAY modification only for -f flag
linestamp="" # used to DETECT modification only for -f flag
tstampF="" # used to detect AND display modifications for -F flag
tstampb="" # used to DISPLAY modification only for -b flag
bytestamp="" # used to DETECT modification only for -b flag
tstampd="" # used to detect AND display modifications for -d flag
tstampD="" # used to detect AND display modifications for -D flag
tstampl="" # used to detect AND display modifications for -l flag

tstampf_new=""
linestamp_new=""
tstampF_new=""
tstampb_new=""
bytestamp_new=""
tstampd_new=""
tstampD_new=""
tstampl_new=""

#----------------#
# PROGRAM START! #
#----------------#

#---------------#
# Parse options #
#---------------#

# the leading ":" in the opstring silences getopts's own error messages;
# the colon after a single letter indicates that that letter requires an
# argument

# first parse for the presence of any -h and -v flags (while silently ignoring
# the other recognized options)
while getopts ":c:w:f:F:b:d:D:l:t:k:x:ahv" opt; do
case "$opt" in
h) msg "help" ;;
v) msg "version" ;;
*) ;;
esac
done

# re-parse from the beginning again if there were no -h or -v flags
OPTIND=1
while getopts ":c:w:f:F:b:d:D:l:t:k:x:a" opt; do
case "$opt" in
c)
com_binary=$(echo "$OPTARG" | sed 's/ \+/ /g' | sed 's/;/ /g' | cut -d " " -f1)
if [[ $(which $com_binary) == "$com_binary not found" ]]; then
msg "invalid command \`$com_binary'"
else
coms+=("$OPTARG")
fi
;;
w)
if $(is_number "$OPTARG"); then
if [[ $OPTARG -gt 0 ]]; then
wflag=true
delay=$OPTARG
else
msg "DELAY must be greater than 0"
fi
else
msg "invalid DELAY \`$OPTARG'"
fi
;;
f)
if [[ ! -f "$OPTARG" ]]; then
msg "file \`$OPTARG' does not exist"
else
f+=("$OPTARG")
fi
;;
F)
if [[ ! -f "$OPTARG" ]]; then
msg "file \`$OPTARG' does not exist"
else
F+=("$OPTARG")
fi
;;
b)
if [[ ! -f "$OPTARG" ]]; then
msg "file \`$OPTARG' does not exist"
else
b+=("$OPTARG")
fi
;;
d)
if [[ ! -d "$OPTARG" ]]; then
msg "directory \`$OPTARG' does not exist"
else
d+=("$OPTARG")
fi
;;
D)
if [[ ! -d "$OPTARG" ]]; then
msg "directory \`$OPTARG' does not exist"
else
D+=("$OPTARG")
fi
;;
l)
if [[ ! -f $OPTARG ]]; then
msg "file \`$OPTARG' does not exist"
else
l+=("$OPTARG")
fi
;;
t)
tflag=true
if $(is_number "$OPTARG"); then
if [[ $OPTARG -gt 0 ]]; then
timeout=$OPTARG
else
msg "TIMEOUT must be greater than 0"
fi
else
msg "invalid TIMEOUT \`$OPTARG'"
fi
;;
k)
kflag=true
if $(is_number "$OPTARG"); then
if [[ $OPTARG -gt 0 ]]; then
killdelay=$OPTARG
else
msg "KDELAY must be greater than 0"
fi
else
msg "invalid KDELAY \`$OPTARG'"
fi
;;
x)
xflag=true
if $(is_number "$OPTARG"); then
if [[ $OPTARG -gt 0 ]]; then
xdelay_factor=$OPTARG
elif [[ $OPTARG -eq 0 ]]; then
xdelay_factor=1
else
msg "invalid FACTOR \`$OPTARG'"
fi
fi
;;
a) xflag=true ;;
:)
msg "missing argument for option \`$OPTARG'"
;;
*)
msg "unrecognized option \`$OPTARG'"
;;
esac
done

#-----------------#
# Set misc values #
#-----------------#

if [[ $kflag == true && $tflag == false ]]; then
tflag=true
timeout=$killdelay
fi

#------------------#
# Check for errors #
#------------------#

# check that the given options are in good working order
if [[ -z $coms[1] ]]; then
msg "help"
elif [[ (-z $f && -z $F && -z $b && -z $d && -z $D && -z $l) && $xflag == false ]]; then
echo "autocall: see help with -h"
msg "at least one or more of the (1) -f, -F, -b, -d, -D, or -l parameters, or (2) the -x parameter, required"
fi

#-------------------------------#
# Record state of watched files #
#-------------------------------#

if [[ -n $F ]]; then
if [[ $#F -eq 1 ]]; then
linestamp=$(wc -l $F)
else
linestamp=$(wc -l $F | head -n -1) # remove the last "total" line
fi
tstampF=$(ls --full-time $F)
fi
if [[ -n $f ]]; then
tstampf=$(ls --full-time $f)
fi
if [[ -n $b ]]; then
if [[ $#b -eq 1 ]]; then
bytestamp=$(wc -c $b)
else
bytestamp=$(wc -c $b | head -n -1) # remove the last "total" line
fi
tstampb=$(ls --full-time $b)
fi
if [[ -n $d ]]; then
tstampd=$(ls --full-time $d)
fi
if [[ -n $D ]]; then
tstampD=$(ls --full-time -R $D)
fi
if [[ -n $l ]]; then
for listfile in $l; do
if [[ ! -f $listfile ]]; then
msg "file \`$listfile' does not exist"
else
while read line; do
if [[ ! -e "$line" ]]; then
msg "\`$listfile': file/path \`$line' does not exist"
else
l_targets+=("$line")
fi
done < $listfile # read contents of $listfile!
fi
done
tstampl=$(ls --full-time -R $l_targets)
fi

#----------------------#
# Begin execution loop #
#----------------------#
# This is like Russian Roulette (where "firing" is executing the command),
# except that all the chambers are loaded, and that on every new turn, instead
# of picking the chamber randomly, we look at the very next chamber. After
# every chamber is given a turn, we reload the gun and start over.
#
# If we detect file/directory modification, we pull the trigger. We can also
# pull the trigger by pressing SPACE or ENTER. If the -x option is provided,
# the last chamber will be set to "always shoot" and will always fire (if the
# trigger hasn't been pulled by the above methods yet).

if [[ $xflag == true && $xdelay_factor -le 1 ]]; then
xdelay_factor=1
fi
com_num=1
for c in $coms; do
echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
let com_num+=1
done
echo "autocall: press keys 1-$#coms to execute a specific command"
if [[ $wflag == true ]]; then
echo "autocall: modification check interval set to $delay sec"
else
echo "autocall: modification check interval set to $delay sec (default)"
fi
if [[ $xflag == true ]]; then
echo "autocall: auto-execution interval set to ($delay * $xdelay_factor) = $(($delay*$xdelay_factor)) sec"
fi
if [[ $tflag == true ]]; then
echo "autocall: TIMEOUT set to $timeout"
if [[ $kflag == true ]]; then
echo "autocall: KDELAY set to $killdelay"
fi
fi
echo "autocall: press ENTER or SPACE to execute manually"
echo "autocall: press \`c' for command list"
echo "autocall: press \`h' for help"
echo "autocall: press \`q' to quit"
key=""
while true; do
for i in {1..$xdelay_factor}; do

#------------------------------------------#
# Case 1: the user forces manual execution #
#------------------------------------------#

# read a single key from the user
read -s -t $delay -k key
case $key in
# note the special notation $'\n' to detect an ENTER key
$'\n'|" "|1)
autocall_exec $coms[1] $timeout $killdelay 4 "manual execution"
key=""
continue
;;
2|3|4|5|6|7|8|9)
if [[ -n $coms[$key] ]]; then
autocall_exec $coms[$key] $timeout $killdelay 4 "manual execution"
key=""
continue
else
echo "autocall: command slot $key is not set"
key=""
continue
fi
;;
c)
com_num=1
echo ""
for c in $coms; do
echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
let com_num+=1
done
key=""
continue
;;
h)
echo "\nautocall: press \`c' for command list"
echo "autocall: press \`h' for help"
echo "autocall: press \`q' to exit"
com_num=1
for c in $coms; do
echo "autocall: command slot $com_num set to \`$c4$coms[$com_num]$ce'"
let com_num+=1
done
echo "autocall: press keys 1-$#coms to execute a specific command"
echo "autocall: press ENTER or SPACE or \`1' to execute first command manually"
key=""
continue
;;
q)
echo "\nautocall: exiting..."
exit 0
;;
*) ;;
esac

#------------------------------------------------------------------#
# Case 2: modification is detected among watched files/directories #
#------------------------------------------------------------------#
if [[ -n $f ]]; then
tstampf_new=$(ls --full-time $f)
fi
if [[ -n $F ]]; then
if [[ $#F -eq 1 ]]; then
linestamp_new=$(wc -l $F)
else
linestamp_new=$(wc -l $F | head -n -1) # remove the last "total" line
fi
tstampF_new=$(ls --full-time $F)
fi
if [[ -n $b ]]; then
if [[ $#b -eq 1 ]]; then
bytestamp_new=$(wc -c $b)
else
bytestamp_new=$(wc -c $b | head -n -1) # remove the last "total" line
fi
tstampb_new=$(ls --full-time $b)
fi
if [[ -n $d ]]; then
tstampd_new=$(ls --full-time $d)
fi
if [[ -n $D ]]; then
tstampD_new=$(ls --full-time -R $D)
fi
if [[ -n $l ]]; then
tstampl_new=$(ls --full-time -R $l_targets)
fi
if [[ -n $f && "$tstampf" != "$tstampf_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampf" "$tstampf_new"
tstampf=$tstampf_new
continue
elif [[ -n $F && "$linestamp" != "$linestamp_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampF" "$tstampF_new"
linestamp=$linestamp_new
tstampF=$tstampF_new
continue
elif [[ -n $b && "$bytestamp" != "$bytestamp_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampb" "$tstampb_new"
bytestamp=$bytestamp_new
tstampb=$tstampb_new
continue
elif [[ -n $d && "$tstampd" != "$tstampd_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampd" "$tstampd_new"
tstampd=$tstampd_new
continue
elif [[ -n $D && "$tstampD" != "$tstampD_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampD" "$tstampD_new"
tstampD=$tstampD_new
continue
elif [[ -n $l && "$tstampl" != "$tstampl_new" ]]; then
autocall_exec $coms[1] $timeout $killdelay 1 "change detected" "$tstampl" "$tstampl_new"
tstampl=$tstampl_new
continue
fi

#-----------------------------------------------------#
# Case 3: periodic, automatic execution was requested #
#-----------------------------------------------------#
if [[ $xflag == true && $i -eq $xdelay_factor ]]; then
autocall_exec $coms[1] $timeout $killdelay 3 "commencing auto-execution ($(($delay*$xdelay_factor)) sec)"
fi
done
done

# vim:syntax=zsh

# Zsh: univ_open() update (new dir_info() script)

For many months now I've been heavily using the univ_open() shell function (a universal file/directory opener for the command line) that I wrote, and I have made some slight changes to it. It's all PUBLIC DOMAIN like my other stuff, so you have my blessing if you want to make it better (if you do, send me a link to your version or something).

What's new? The code that prettily lists all directory contents after entering a directory has been split off into its own function, called "dir_info". So there are 2 functions now: univ_open(), which opens any file/directory from the command line according to predefined preferred apps (dead simple to do, as you can see in the code), and dir_info(), a prettified "ls" replacement that also lists helpful info like the largest file and the number of files in the directory. univ_open() has not changed much — the only new stuff is the helpful error messages.

dir_info() should be used by itself from a shell alias (I've aliased all 8 of its modes to quick keys like "ll", "l", "lk", "lj", etc. in my ~/.zshrc for quick access). For example, "l" is aliased to "dir_info 1". Here is the code for dir_info():

#!/bin/zsh
# dir_info(), a function that acts as an intelligent "ls". This function is
# used by univ_open() to display directory contents, but it can additionally
# be used by itself. By default, you can call dir_info() without any arguments,
# but there are 8 presets that are hardcoded below (presets 0-7). Thus, you
# could do "dir_info [0-7]" to use those modes. You should alias these modes to
# something easy like "ll" or "l", etc. from your ~/.zshrc. For a detailed
# explanation of the 8 presets, see the code below.
dir_info() {
# colors
c1="\033[1;32m" # bright green
c2="\033[1;33m" # bright yellow
c3="\033[1;34m" # bright blue
c4="\033[1;36m" # bright cyan
c5="\033[1;35m" # bright purple
c6="\033[1;31m" # bright red

# only pre-emptively give a newline to prettify the listing of directory
# contents if the directory is not empty
[[ $(ls -A1 | wc -l) -ne 0 ]] && echo
countcolor() {
if [[ $1 -eq 0 ]]; then
echo $c4
elif [[ $1 -le 25 ]]; then
echo $c1
elif [[ $1 -le 50 ]]; then
echo $c2
elif [[ $1 -le 100 ]]; then
echo $c3
elif [[ $1 -le 200 ]]; then
echo $c5
else
echo $c6
fi
}

sizecolor() {
case $1 in
B)
echo $c0
;;
K)
echo $c1
;;
M)
echo $c2
;;
G)
echo $c3
;;
T)
echo $c5
;;
*)
echo $c4
;;
esac
}
ce="\033[0m"

ctag_size=""

# only show information if the directory is not empty
if [[ $(ls -A1 | wc -l) -gt 0 ]]; then
size=$(ls -Ahl | head -n1 | head -c -2)
suff=$(ls -Ahl | head -n1 | tail -c -2)
size_num=$(echo -n $size | cut -d " " -f2 | head -c -1)
ctag_size=$(sizecolor $suff)
simple=false

# show variation of ls based on given argument
case $1 in
0) # simple
ls -Chs -w $COLUMNS --color | tail -n +2
simple=true
;;
1) # verbose
ls -Ahl --color | tail -n +2
;;
2) # simple, but sorted by size (biggest file on bottom with -r flag)
ls -ChsSr -w $COLUMNS --color | tail -n +2
simple=true
;;
3) # verbose, but sorted by size (biggest file on bottom with -r flag)
ls -AhlSr --color | tail -n +2
;;
4) # simple, but sorted by time (newest file on bottom with -r flag)
ls -Chstr -w $COLUMNS --color | tail -n +2
simple=true
;;
5) # verbose, but sorted by time (newest file on bottom with -r flag)
ls -Ahltr --color | tail -n +2
;;
6) # simple, but sorted by extension
ls -ChsX -w $COLUMNS --color | tail -n +2
simple=true
;;
7) # verbose, but sorted by extension
ls -AhlX --color | tail -n +2
;;
*)
simple=true
ls --color
;;
esac

# show number of files or number of shown vs hidden (as a fraction),
# depending on which version of ls was used
denom=$(ls -A1 | wc -l)
numer=$denom
# redefine numer to be a smaller number if we're in simple mode (and
# only showing non-dotfiles/non-dotdirectories)
$simple && numer=$(ls -1 | wc -l)
ctag_count=$(countcolor $denom)

if [[ $numer !=$denom ]]; then
if [[ $numer -gt 1 ]]; then
echo -n "\nfiles $numer/$ctag_count$denom$ce | "
else
dotfilecnt=$(($denom - $numer))
s=""
[[ $dotfilecnt -gt 1 ]] && s="s" || s=""
echo -n "\nfiles $numer/$ctag_count$denom$ce ($dotfilecnt dotfile$s) | "
fi
else
echo -n "\nfiles $ctag_count$denom$ce | "
fi

if [[ $suff != "0" ]]; then
echo -n "size $ctag_size$size_num$suff$ce"
else
echo -n "size $ctag_size nil$ce"
fi

# Find the biggest file in this directory.
#
# We first use ls to list all contents, sorted by size; then, we strip
# all non-regular file entries (such as directories and symlinks);
# then, we truncate our result to kill all newlines with 'tr' (e.g., if
# there is a tiny file (say, 5 bytes) and there are directories and
# symlinks, it's likely that the file is NOT the biggest "file"
# according to 'ls', which means that the output up to this point will
# have trailing whitespace (thus making the next command 'tail -n 1'
# fail, even though there is a valid file!)); we then fetch the last
# line of this list, which is the biggest file, then make it so that
# all multiple-contiguous spaces are replaced with a single space --
# and using this new property, we can safely call 'cut' by specifying
# the single space " " as a delimiter to finally get our filename.
big=$(ls -lSr | sed 's/^[^-].\+//' | tr -s "\n" | tail -n 1 | sed 's/ \+/ /g' | cut -d " " -f9-)
if [[ -f "$big" ]]; then
# since $suff needs a file size suffix (K, M, G, etc.), we reassign
# $big_size here from pure block size to human-readable notation;
# make $big_size more "accurate" (not in terms of disk space usage,
# but in terms of actual number of bytes inside the file) if it is
# smaller than 4096 bytes
suff=""
if [[ $(du -b "$big" | cut -f1) -lt 4096 ]]; then
big_num="$(du -b "$big" | cut -f1)"
suff="B"
else
big_num=$(ls -hs "$big" | cut -d " " -f1 | sed 's/[a-zA-Z]//')
suff=$(ls -hs "$big" | cut -d " " -f1 | tail -c -2)
fi
ctag_size=$(sizecolor "$suff")
echo " | \`$big' $ctag_size$big_num$suff$ce"
else
echo
fi
fi
}

Here is the updated univ_open() shell function:

#!/bin/zsh
# univ_open() is intended to be used to pass either a SINGLE valid FILE or
# DIRECTORY. For illustrative purposes, we assume "d" to be aliased to
# univ_open() in ~/.zshrc. If optional flags are desired, then either prepend
# or append them appropriately. E.g., if you have jpg's to be opened by eog,
# then doing "d -f file.jpg" or "d file.jpg -f" will be the same as "eog -f
# file.jpg" or "eog file.jpg -f", respectively. The only requirement when
# passing flags is that either the first word or the last word must be a valid
# filename.
#
# univ_open() requires the custom shell function dir_info() (ls with saner
# default args) to work properly.

univ_open() {
if [[ -z $@ ]]; then
# if we do not provide any arguments, go to the home directory
cd && dir_info # ("cd" w/o any arguments goes to the home directory)
elif [[ -f $1 || -f ${=@[-1]} ]]; then
# if we're here, it means that the user either (1) provided a single
# valid file name, or (2) a number of commandline arguments PLUS a
# single valid file name; use of the $@ variable ensures that we
# preserve all the arguments the user passed to us
#
# $1 is the first arg; ${=@[-1]} is the last arg (i.e., if the user
# passes "-o -m FILE" to us, then obviously the last arg is the
# filename)
#
# we use && and || for simple ternary operation (like ? and : in C)
[[ -f $1 ]] && file=$1 || file=${=@[-1]}
case $file:e:l in
(doc|odf|odt|rtf)
soffice -writer $@ &>/dev/null & disown
;;
(pps|ppt)
soffice -impress $@ &>/dev/null & disown
;;
(htm|html)
firefox $@ &>/dev/null & disown
;;
(eps|pdf|ps)
evince -f $@ &>/dev/null & disown
;;
(bmp|gif|jpg|jpeg|png|svg|tga|tiff)
eog $@ &>/dev/null & disown
;;
(psd|xcf)
gimp $@ &>/dev/null & disown
;;
(aac|flac|mp3|ogg|wav|wma)
mplayer $@
;;
(mid|midi)
timidity $@
;;
(asf|avi|flv|ogm|ogv|mkv|mov|mp4|mpg|mpeg|rmvb|wmv)
smplayer $@ &>/dev/null & disown
;;
(djvu)
djview $@
;;
(exe)
wine $@ &>/dev/null & disown
;;
*)
vim $@
;;
esac
elif [[ -d $1 ]]; then
# if the first argument is a valid directory, just cd into it -- ignore
# any trailing arguments (in zsh, '#' is the same as ARGC, and denotes
# the number of arguments passed to the script, so '$#' is the same as
# $ARGC)
if [[ $# -eq 1 ]]; then
cd $@ && dir_info
else
# if the first argument was a valid directory, but there was more than 1 argument, then we ignore these
# trailing args but still cd into the first given directory
cd $1 && dir_info
# i.e., show arguments 2 ... all the way to the last one (the last one
# has an index of -1 in the argument array)
echo "\nuniv_open: argument(s) ignored: \`${=@[2,-1]}'"
echo "univ_open: went to \`$1'\n"
fi
elif [[ ! -e $@ ]]; then
[[ $# -gt 1 ]] && head=$1:h || head=$@:h
# if we're given just 1 argument, and that argument does not exist,
# then go to the nearest valid parent directory; we use a while loop to
# find the closest valid directory, just in case the user gave a
# butchered-up path
while [[ ! -d $head ]]; do head=$head:h; done
cd $head && dir_info
echo "\nuniv_open: path \`$@' does not exist"
[[ $head == "." ]] && echo "univ_open: stayed in same directory\n" || echo "univ_open: relocated to nearest parent directory \`$head'\n"
else
# possible error -- should re-program the above if this ever happens,
# but it seems unlikely
echo "\nuniv_open: error -- exiting peacefully"
fi
}

Put these two functions inside your ~/.zsh/func folder with the filenames "dir_info" and "univ_open" and autoload them. I personally do this in my ~/.zshrc:

fpath=(~/.zsh/func $fpath)
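A minimal sketch of what the full setup could look like, assuming the two function files live in ~/.zsh/func as described (the alias line is just one example mapping, not a prescription):

```shell
# ~/.zshrc sketch: make the function files findable, then autoload them
# (assumes ~/.zsh/func contains files named "dir_info" and "univ_open")
fpath=(~/.zsh/func $fpath)
autoload -U dir_info univ_open

# example aliases, as mentioned above
alias l="dir_info 1"
alias d="univ_open"
```

With `autoload`, zsh defers reading each function file until the first time you actually call it, which keeps shell startup fast.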
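Incidentally, univ_open()'s dispatch hinges on zsh's `:e` (extension) and `:l` (lowercase) modifiers. The same idea can be written in portable shell; this is just an illustration, and `ext_of` is a hypothetical helper name, not part of the scripts above:

```shell
#!/bin/sh
# Portable-shell sketch of the extension dispatch that univ_open() gets
# for free from zsh's $file:e:l. `ext_of' is a hypothetical helper.

# print the lowercased extension of a filename (empty if there is none)
ext_of() {
    base=${1##*/}                       # strip any directory part
    case $base in
        *.*) printf '%s' "${base##*.}" | tr 'A-Z' 'a-z' ;;
        *)   printf '' ;;               # no dot, no extension
    esac
}
```

Usage would then be an ordinary `case $(ext_of "$file") in pdf) ... ;; esac`, which is exactly the shape of univ_open()'s big case statement.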