Sunday, November 11, 2012

Keyboard and Monitor Efficiency-Boosts

In online discussions, I've often seen two claims made about efficiency:

  • Two monitors are better than one
  • Master gurus work so fast because they never take their fingers off the keyboard.

Over the past month or so, I've been exploring these for myself.

First, I bought an affordable 19" ViewSonic monitor for about $100 from Amazon. I also picked up a desk clamp, and so I was able to mount it directly above my 15" laptop screen. The two are the same resolution, and, if I slide the monitor back 3 or 4 inches, they're about the same size in my visual field. So I now have double the screen real estate. I also installed a copy of UltraMon.

So far, the second monitor has been fun, but not all that useful. It's handy when I want documentation or my streaming radio up at the same time I'm working on some code. However, I find that two screenfuls of documents with white backgrounds are pretty bright at night, even though I cranked the brightness down pretty low. Also, even though the monitor is straight ahead of me, it makes my neck a little stiff if I work on it for too long. I guess I've gotten used to that slightly downward sightline of a laptop screen. Still, I don't regret the purchase and I'm fine with flipping the second monitor on whenever it's useful. I think (or at least hope) I'll gradually put it to more use over time.

The second notion I've been exploring is that keyboard-based computing is faster than mouse-based. This is often an argument for using emacs or (even more so) for vi. As mentioned previously, I've been playing with AutoHotKey. Here are a few of the interesting settings from my AutoHotKey script file:

#KeyHistory 0   ; don't record keypresses

I added this when I realized all my keystrokes--including the title of the current window--were being logged. Don't really need that info lying around.


My script also has bindings that turn right Alt and right Control into Shift+Alt and Shift+Ctrl. I've found this very handy with various macro and other keybindings.

This next setup was quite interesting:

CoordMode ToolTip, Screen   ;(at top of file)

SetCapsLockState AlwaysOff  ;;CAPS button does nothing alone

CapsLock & Enter::
  if GetKeyState("ScrollLock", "T") {
    ToolTip  ;turn it off
  } else {
    ToolTip NavMode, 0, 0
  }
  Send {ScrollLock}
return

#If GetKeyState("ScrollLock", "T")
  u::Send {PgUp}
  Space::Send {PgDn}
  h::Send {Home}
  `;::Send {End}
  a::Send {Alt Down}
  a UP::Send {Alt Up}
  s::Send {Shift Down}
  s UP::Send {Shift Up}
  d::Send {Ctrl Down}
  d UP::Send {Ctrl Up}
  f::return  ;reserved for... mouse controls?
#If

CapsLock & i::Send {Up}
CapsLock & j::Send {Left}
CapsLock & k::Send {Down}
CapsLock & ,::Send {Down}
CapsLock & l::Send {Right}
CapsLock & u::Send {PgUp}
CapsLock & Space::Send {PgDn}
CapsLock & h::Send {Home}
CapsLock & `;::Send {End}

The idea I had here was that the different keyboard modes (ScrollLock, NumLock, CapsLock, and InsertMode) go largely unused, but each could allow the keyboard to be completely remapped. My laptop keyboard doesn't even have a ScrollLock or NumLock key, and lacks any keyboard indicator lights. I thought I'd start with the least used--ScrollLock. Hitting CapsLock+Enter (a double-pinky-tap) puts me into ScrollLock and, since I don't have an indicator light, pops a very small "NavMode" tooltip in the upper-left corner of the screen. The home keys (jkli,) become arrow keys, with h and ; as Home and End and u and Space as PageUp and PageDown. a, s, and d become Alt, Shift, and Ctrl, respectively. You could of course take this further.

The homekeys-to-arrow-keys conversion would be handy enough to use without switching modes, so the second part means I can hold down CapsLock to turn the homekeys into arrow keys.

So, there we go: no reason to ever leave the homekeys again!

I found I hardly ever used this. I'm already trained on the arrow keys and the mouse, so that's where I go. If I'm moving my cursor far, I grab the mouse. If I'm going up a line or two, just a couple arrow-key taps are good enough. Sliding my right hand over 3 inches to use either of these wasn't costing me that much time, but trying to remember to use CapsLock instead was.

I still think it's a cool idea, though, which is why I'm sharing it here. Also, an approach like this would be very handy to implement a working number pad on my laptop, since it doesn't provide one.

In the end, I reverted to:

CapsLock::Ctrl       ;CapsLock acts as Ctrl
+CapsLock::CapsLock  ;Shift+CapsLock toggles CapsLock mode

which just turns CapsLock into Control, with Shift+CapsLock preserving the original toggle.

I found I wasn't using this combo much a month ago, but these past couple of days I've been exploring the fancier features of Eclipse. Ctrl+Space in particular is much handier with CapsLock as Ctrl.

Overall, my foray into keyboard efficiency has been disappointing. Supposedly the learning curve is worth it, but I doubt the few milliseconds saved will ever add up to the time it would take me to remap and retrain. This old article suggests that my feelings on this are perhaps not so far off after all: that the speed advantage of keyboarding is actually illusory.

This got me thinking of the QWERTY vs DVORAK debate. Supposedly, for experts, DVORAK is 10% faster (though this claim is contested). But, even if it is, I'm not hitting the QWERTY limit. That is, other people can type way faster on QWERTY than I can. I don't think it's primarily the keyboard layout that's slowing me down; it's my physical and mental processes that are too slow.

Bringing this back to the keyboard-vs-mouse debate, I see the same thing at play. I hardly ever mouse through a menu during my normal work, which I think is the main source of the "GUIs are slow" argument. I do use common keyboard shortcuts (like Ctrl+Z/X/C/V), which are very fast when combined with mouse selection. When I don't use a keyboard combo, I generally hit a toolbar button, which is about as fast as a mouse gesture. And even heading into a well-known and shallow menu doesn't really take that long. Striving for micro-improvements in this area is not really worth it to me. (Your mileage may vary, though, which is fine.)

This brings me in turn to my recent Eclipse experience. I'm realizing that Eclipse--with its Ctrl+Space and Alt+/ completions and various Content Assists and Code Templates--can drastically improve code generation times because it writes large sections of the code for you. I suspect that this is what may really be at work with older editors like emacs and vi: not that they are keyboard-only but that they are so full of useful tools. It's the intelligent and useful tools that make you faster, not the milliseconds you saved in selecting the tool.

So I've decided to give up on this idea of converting to entirely keyboard use. Instead, I'll be embracing my mouse and toolbar use. (For example, I mapped WinKey+ScrollUp to switch screens between monitors.) I'll see what I can do about speeding those up. I'll still use the keyboard and good bindings where appropriate, of course, but writing good macros is going to save me way more time than not moving my hand 3 inches over to the arrow keys.

I have one last insight on this, which is two-fold. First, keyboard sequences (such as Alt+F, S for Save on Windows or Ctrl+X, Ctrl+S in emacs) are about as fast as a more complicated finger-spreading key-combo (such as Ctrl+Alt+S). Secondly, menus actually provide and scaffold such combos. Alt+F gets you the File menu, and then every element there has a corresponding letter. You can get to any menu option in 2 or 3 keypresses--just like an Emacs key-combo. The difference is that the menu's always there to remind you of your options, and the menu path actually informs the keybinding. For example, in Eclipse, Ctrl+F11 is Run. I find this hard to remember. Why F11 and not some other function key? But consider: Alt+R, R, Enter. Not much longer, more compact on the keyboard, and memorable because it corresponds to: Run Menu, Run, (Execute). (The Enter is required because there is more than one R-bound menu option in the Run menu.) Sequences like this also give you more keybinding options at a combinatorial rate. For example, if you make Alt+J the start of all your Java-based macros, you get 26 three-key bindings (or many more, given all the other non-letter keys) with that short and relevant intro mnemonic.

Anyway, I'm sure I'll continue to think on this, but I'll be putting more effort now into intelligent macro design and Eclipse tool use than trying to speed-up my keyboard use. So, if you're still a coder putting your mouse to good use, you're not alone!

Thursday, August 16, 2012

A quick fix to improve Python3 startup time

My web server is very low-end. Dating from the mid-90s, it has a 200MHz Pentium and 96MB of RAM. It was running Debian 2.2 (potato), but I recently upgraded to the most recent Debian 6.0 (squeeze). I'm impressed that it even runs.

I also upgraded to Python3 in order to handle the recent overhaul of my Tamarin automated grading system. I compiled Python3 from source to do this, since Debian doesn't include it yet. They're still shipping Python 2.6.

Tamarin is little more than a bunch of CGI scripts. I expect it to run a little slowly on this machine, but, after the upgrade, any CGI request is taking about 4 seconds, which is fairly intolerable. Static webpages are still responsive enough, though. So I started up the Python3 interpreter... and waited. Yep, that's where the lag is. See for yourself:

ztomasze@tamarin:~$ time python3 -c 'pass'

real    0m2.493s
user    0m2.336s
sys     0m0.160s

Here, I'm just starting python3 to execute a single 'pass' statement that does nothing. This takes 2.5 seconds.

I read somewhere that not loading the local site libraries by using the -S option can give a performance boost. Since Tamarin uses only standard modules, I gave it a shot:

ztomasze@tamarin:~$ time python3 -S -c 'pass'

real    0m0.465s
user    0m0.392s
sys     0m0.072s

A more than fivefold improvement! I even installed the default Python 2.6, just to compare times:

ztomasze@tamarin:~$ time python -c 'pass'

real    0m0.448s
user    0m0.348s
sys     0m0.060s
ztomasze@tamarin:~$ time python -S -c 'pass'

real    0m0.185s
user    0m0.148s
sys     0m0.036s

So Python3 is significantly slower for me than Python 2 was, but using the -S option at least gets me back to standard Python2 times.

This savings didn't really translate directly to improved CGI performance, though. Running two of my scripts from the command line, I experienced the following:

                       script 1   script 2
original time          4 sec      6 sec
adding -S to #! line   3 sec      5 sec
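For reference, applying the flag to a CGI script just means adding it to the interpreter line. The sketch below shows the idea; the interpreter path is from my source install (yours may differ), and remember that Linux passes only a single argument on the #! line.

```python
#!/usr/local/bin/python3 -S
# -S skips 'import site' (and the site-packages path scan) at startup.
# Safe here because this script uses only standard-library modules.
import sys

print("Content-Type: text/plain\r\n")
print("Python", sys.version.split()[0])
```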

A delay this long is still fairly intolerable. And I don't think the lag is inherent to Tamarin, since the delays weren't this long with Python2 and Debian 2.2 on the same machine.

I know I could probably shave off some more time for Tamarin by using FastCGI, SCGI, or mod_python. SCGI looks most useful to me given my existing codebase. Whenever I get some free time, I'll look into that.

Friday, June 29, 2012

Efficiency through Keyboard Shortcuts

One of the things I learned from the discussion of my recent Lisp posting was the value of delegating work to a good editor and the possible speed gain of good keyboard shortcuts. While I did not agree that the best path to these goals was necessarily through Emacs, I did decide to try to use my keyboard a bit more efficiently than I have been.

First, I reviewed the various keybindings already used by my OS, Windows 7. Hey, that Windows Logo key actually does have a few valuable uses! I also realized that I hardly ever use my function keys.

I also installed AutoHotKey and tried a few simple useful bindings, including:

  • Capslock is now Ctrl. (Shift+Capslock acts as normal Capslock)
  • Right Alt and Right Ctrl are now equal to Shift+Alt and Shift+Ctrl.
  • Win+q quits a program (like Alt+F4) and Win+c opens a command prompt.
  • The right-click menu button (AppsKey) is now a special function key. For example, I use it with various letters as shortcuts to certain directories when I'm in Windows Explorer.

I'm still working on actually using some of these on a regular basis, though. (Old habits die hard.) I'm also trying to use my alt keys with my thumbs without taking my fingers off the home keys, and using tab (replaced appropriately with spaces) more often when coding.

Anyway, it's all rather nerdy, but the possibilities are exciting. If you work on Windows, check out AutoHotKey. You may want to customize your system across all your different applications--especially if you start thinking about all the hundreds of possible key combos currently going unused on your keyboard!

Unit-testing a Python CGI script

This summer I'm overhauling Tamarin, my automated grading system. Under the hood, Tamarin is little more than a bunch of Python CGI scripts. However, as I overhaul it and convert it from Python 2 to 3, I also wanted to build a proper unit test framework for it.

It's been a dozen years or so since I last used Perl, but I recall running my CGI scripts on the command line and manually specifying key=value pairs. So, I was somewhat surprised to find no comparable way to test my CGI scripts in Python. The official Python cgi module documentation suggests the only way to test a CGI script is in a web server environment. That's an unnecessarily complex environment for quick tests during development, and it precludes any simple separate unit tests.

In general, I'm not very impressed with the cgi module docs. In fact, browsing around revealed that there are a number of parameter options undocumented in the official docs.

Using this found information, I was able to build my own cgifactory module. Depending on the function called, it allows you to build a cgi object based on either a GET or POST query. For example:

  form = cgifactory.get(key1='value1', key2='v2')

If you then write your CGI script's main function to take an optional CGI object, you can easily build a CGI query, pass it to your script, and then run string matching on the (redirected) output produced by your script. Of course, most of your unit tests will probably be of component functions used by your script, but sometimes you want to test or run your script as a whole unit. cgifactory will help you there.
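I won't restate cgifactory's internals here, but the core trick is presumably synthesizing the environment a web server would hand a CGI script. A minimal standalone sketch of that idea (function and variable names below are my own, not cgifactory's):

```python
import urllib.parse

def make_get_environ(**params):
    """Fake the environment a web server would provide a CGI script
    for a GET request with the given key=value parameters."""
    return {
        'REQUEST_METHOD': 'GET',
        'QUERY_STRING': urllib.parse.urlencode(params),
    }

env = make_get_environ(key1='value1', key2='v2')
# A CGI script would then parse the query string back into form fields:
form = urllib.parse.parse_qs(env['QUERY_STRING'])
```

The real module feeds an environment like this to cgi.FieldStorage, which is what produces the familiar form object you can pass into your script's main function.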

The cgifactory code is available here, where you'll always find the most recent version. The code itself is actually quite short; most of the file is documentation and doctests showing how to use it. I don't guarantee it's right, but it's worked for me so far. Hopefully it might be of use to someone else too! Feel free to copy, modify, and/or redistribute.

(Oh, and if you really need a command line version, it shouldn't be too hard to write a main that parses key=value pairs into a dictionary and then calls cgifactory.get(**pairs) to build the CGI object.)
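Such a main might look something like this sketch (hypothetical helper names; argument handling kept minimal):

```python
import sys

def pairs_to_dict(args):
    """Turn command-line style arguments like key1=value1 into a dict,
    splitting only on the first '=' so values may themselves contain '='."""
    return dict(arg.split('=', 1) for arg in args)

# e.g. for: python3 myscript.py key1=value1 key2=v2
# pairs = pairs_to_dict(sys.argv[1:])
# form = cgifactory.get(**pairs)   # then hand form to the script's main()
pairs = pairs_to_dict(['key1=value1', 'key2=v2'])
```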

Saturday, March 10, 2012

Bug: C enum variables stored as unsigned ints

I read in K&R that enum values are basically int constants (like #defines in that way), and so enum variables are equivalent to ints. However, in C (not C++ though), you may assign any int value to an enum variable--even if that int value is not one of the listed values in the enum definition. You can do this without even raising a compiler warning.

In a program I was working on, I took advantage of that. I had an enum of values 0 through 7:

  enum direction { N, NE, E, SE, S, SW, W, NW};

In a particular function, I was scanning a map for a target in different directions and decided to return -1 if nothing interesting was found in any direction. However, this led to a strange bug.

The following program shows this bug clearly:

#include <stdio.h>

enum nums {zero, one, two, three};

int main(void) {
  //using an enum as normal
  enum nums myNum = zero;
  printf("zero == %d\n", myNum);
  //assigning an int value to an enum
  myNum = -1;
  printf("-1 == %d\n", myNum);
  if (myNum >= 0) {
    printf("%d >= 0\n", myNum);
  } else {
    printf("%d < 0\n", myNum);
  }
  return 0;
}

This program prints:

 zero == 0
 -1 == -1
 -1 >= 0

I'm using GCC, and the manual itself says: "By default, these values are of type signed int" and "Although such variables are considered to be of an enumeration type, you can assign them any value that you could assign to an int variable".

However, further research shows that gcc will store an enum variable as an unsigned int if you have no negative values in your defined enum. For example, if I add neg = -1 as an extra value to my enum nums above, the output of the program changes to what I expect: -1 < 0.

Apparently the relevant section of the C99 standard (in its freely available draft version) clarifies that this is allowed: the particular int type used for an enum is implementation-defined. An official version of the C90 standard is not freely available for comparison. -std=c90 doesn't change gcc's behavior on this issue.

Monday, February 13, 2012

Lisp: First Impressions

I learned the basics of Lisp 10 years ago when I was the TA for ICS313: Programming Language Theory. This semester, I returned to Lisp after a decade away, again in the context of TAing for ICS313. After so long away, I basically had to start over from scratch again. Though it certainly goes much faster the second time around, it's still a lot like learning it the first time. So these are my "first" impressions.

We are learning Common Lisp. So far, I'm not very impressed.

First of all, I find that Common Lisp is bloated. There are usually about 6 different ways to do something. For example, want to compare two items? Consider =, eq, eql, equal, equalp... and that's just getting started. Want to print something? Consider print, princ, prin1, pprint, format, etc. The reason that so many options exist is because each is subtly different. I'm sure that, once mastered, all these options increase your programming power, since you can pick exactly the right tool for the job. And, in many cases, some of the options seem to be now-unused holdovers that persist from earlier days in Lisp's development. So eventually you learn which ones you can ignore. But it all makes Common Lisp rather tedious to learn.

My next stumbling block is the formatting. As evidenced by both my students' code and my own inclinations, it seems natural as a programmer to try to format the parentheses to make matching them up much easier. The parentheses are subtle, and it's easy to get one out of place. This can produce code that compiles or loads but then fails at runtime because the meaning is subtly different from what you intended. This well-discussed blog posting sums it up fairly well. In general, I agree with the author there. Parentheses in Lisp mark the start and end of various expressions in the same way that braces mark the start and end of blocks in C-like languages. In C-like languages, we have proponents of such extremes as the Allman style of indenting. These Allman proponents feel that such formatting is so essential to readability that every brace deserves its own line! Yet the Lisp community advocates the exact opposite: that no parenthesis should be given prominent placement. Instead, they should all just be tucked away at the start or end of a line of code. Supposedly some day you get to the point where you can "see past the parentheses". But this seems to me like a convention that makes code unnecessarily hard to read.

I find most of the arguments for this "parenthesis stacking" format weak at best. One that irks me is that your editor will help you. First of all, you're not always reading code in an editor. Secondly, I shouldn't have to move my cursor around, press keys, or rely on fancy color highlighting to make quick sense of the code on the screen. It's called "readability", not "navigability". A third argument is that you can ignore the parentheses because the indenting shows you the structure. But the indenting is not what actually determines the code structure--the parentheses do! So I need to be able to quickly spot when the parentheses are wrong even though the indenting is correct.

This formatting thing bugs me because the problem seems to come from an asinine coding convention choice. And that's the one reason I format this way rather than the way that makes sense to me: because "that's how it is done in Lisp." Like the choice to drive on the right side of the road, it's hard for a late-comer to buck the community on such a choice! Yet, because the convention makes the code harder to work with for no good reason that I can see, it feels a little like hazing: "This is the way we all had to deal with it when we learned, so you do too."

That brings me to the Lisp community in general. First of all, I don't care to worship at the altar of Emacs. As I mentioned above, I shouldn't have to have a special editor to write code. Don't get me wrong: an IDE is great at increasing productivity and I definitely want one. But the code should be both readable and writable without one. Then I'm free to choose the IDE that meets my own requirements. (On that note, after trying Emacs and LispWorks, I settled on Cusp. It's a little tricky to get up and running, and a bit quirky at first, but it works pretty well. The highlighting of parentheses structure is very helpful.)

Secondly, there's just this "Lisp is so much better!" vibe in the community. Now, obviously you've got to wave the flag for your favorite language. I have no problems there. But, as others have pointed out, if Lisp is so wonderful, how come we're not all using it? Just about every computer science major has to learn a Lisp dialect at some point, so it's not just an issue of exposure. Is it because it lacks good libraries for modern tasks? Is it because, while powerfully writable, its dynamic re-writability makes it hard for someone else to read or maintain? Is it because, while a pioneer of so many cool ideas, most of those ideas have now been imported from Lisp by other languages? Is it because Lisp has built up over 50 years of cruft, but each new Lisp project to simplify and overhaul it fractures the small but fanatical Lisp community, leading to inter-dialect derision and flamewars? Hard to tell. What I can tell is that the fanatical belief in Lisp's supremacy over all other programming languages is a little hard to swallow.

My general conclusion is that Lisp is still worth learning for the history of it. However, I don't think I'll be taking it up as my day-to-day language. I'll be looking elsewhere for more modern implementations of Lisp's contributions that are useful to me.

Still, the code-as-data idea is largely still unique to Lisp. That would be fun to explore more, so finding a Lisp dialect that fits me better might still be rewarding. I've considered Scheme a bit, but I think either Clojure or newLisp would be even better. (I found newLisp because I thought: "Why doesn't someone clean up and simplify Lisp back to its glorious essence? If I did that, I'd call it 'new Lisp'." I searched... and behold! Already done 20 years ago by someone else.) Both seem to have newer, friendlier, more open communities. Clojure has the advantage of the JVM and the entire Java API behind it. newLisp is targeting what I think is a great niche for Lisp: scripting. This is where powerful writability at the expense of readability and maintainability is a viable tradeoff.

These are my current impressions of Lisp. Perhaps they'll change with time. If so, I'll let you know!