Thursday, August 16, 2012

A quick fix to improve Python3 startup time

My web server is very low-end. Dating from the mid-90s, it has a 200MHz Pentium and 96MB of RAM. It was running Debian 2.2 (potato), but I recently upgraded to the most recent Debian 6.0 (squeeze). I'm impressed that it even runs.

I also upgraded to Python3 in order to handle the recent overhaul of my Tamarin automated grading system. I compiled Python3 from source to do this, since Debian doesn't include it yet. They're still shipping Python 2.6.

Tamarin is little more than a bunch of CGI scripts. I expect it to run a little slowly on this machine, but, after the upgrade, any CGI request is taking about 4 seconds, which is fairly intolerable. Static webpages are still responsive enough, though. So I started up the Python3 interpreter... and waited. Yep, that's where the lag is. See for yourself:

ztomasze@tamarin:~$ time python3 -c 'pass'

real    0m2.493s
user    0m2.336s
sys     0m0.160s

Here, I'm just starting python3 to execute a single 'pass' statement that does nothing. This takes 2.5 seconds.

I read somewhere that not loading the local site libraries by using the -S option can give a performance boost. Since Tamarin uses only standard modules, I gave it a shot:

ztomasze@tamarin:~$ time python3 -S -c 'pass'

real    0m0.465s
user    0m0.392s
sys     0m0.072s

That's better than a 5x speedup! I even installed Debian's packaged Python 2.6, just to compare times:

ztomasze@tamarin:~$ time python -c 'pass'

real    0m0.448s
user    0m0.348s
sys     0m0.060s
ztomasze@tamarin:~$ time python -S -c 'pass'

real    0m0.185s
user    0m0.148s
sys     0m0.036s

So Python3 is significantly slower for me than Python 2 was, but using the -S option at least gets me back to standard Python2 times.

These savings didn't translate directly into improved CGI performance, though. Running two of my scripts from the command line, I saw the following times:

                       status.py   upload.py
original time          4 sec       6 sec
adding -S to #! line   3 sec       5 sec

A delay this long is still hard to tolerate. And I don't think the lag is inherent to Tamarin, since the delays weren't this long with Python2 and Debian 2.2 on the same machine.
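
For reference, the "adding -S to #! line" change is just the first line of each script. Here's a minimal sketch of such a CGI script (the interpreter path and form field are hypothetical; Tamarin's real scripts are more involved):

#!/usr/local/bin/python3 -S

# -S tells Python not to import the site module at startup, which also
# means site-packages never gets added to sys.path. That's fine here,
# since only standard library modules are used.
import cgi

form = cgi.FieldStorage()
print("Content-Type: text/html")
print()
print("<p>Hello, " + (form.getfirst("name") or "world") + "</p>")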

I know I could probably shave off some more time for Tamarin by using FastCGI, SCGI, or mod_python. SCGI looks most useful to me given my existing codebase. Whenever I get some free time, I'll look into that.


Friday, June 29, 2012

Efficiency through Keyboard Shortcuts

One of the things I learned from the discussion of my recent Lisp posting was the value of delegating work to a good editor and the possible speed gain of good keyboard shortcuts. While I did not agree that the best path to these goals was necessarily through Emacs, I did decide to try to use my keyboard a bit more efficiently than I have been.

First, I reviewed the various keybindings already used by my OS, Windows 7. Hey, that Windows Logo key actually does have a few valuable uses! I also realized that I hardly ever use my function keys.

I also installed AutoHotKey and tried a few simple but useful bindings, including:

  • Capslock is now Ctrl. (Shift+Capslock acts as normal Capslock)
  • Right Alt and Right Ctrl are now equal to Shift+Alt and Shift+Ctrl.
  • Win+q quits a program (like Alt+F4) and Win+c opens a command prompt.
  • The right-click menu button (AppsKey) is now a special function key. For example, I use it with various letters as shortcuts to certain directories when I'm in Windows Explorer.

I'm still working on actually using some of these on a regular basis, though. (Old habits die hard.) I'm also trying to hit my Alt keys with my thumbs without taking my fingers off the home row, and to use tab (replaced appropriately with spaces) more often when coding.

Anyway, it's all rather nerdy, but the possibilities are exciting. If you work on Windows, check out AutoHotKey. You may want to customize your system across all your different applications--especially if you start thinking about all the hundreds of possible key combos currently going unused on your keyboard!


Unit-testing a Python CGI script

This summer I'm overhauling Tamarin, my automated grading system. Under the hood, Tamarin is little more than a bunch of Python CGI scripts. However, as I overhaul it and convert it from Python 2 to 3, I also wanted to build a proper unit test framework for it.

It's been a dozen years or so since I last used Perl and CGI.pm, but I recall running my scripts on the command line and manually specifying key=value pairs. So I was somewhat surprised to find no comparable way to test my CGI scripts in Python. The official Python cgi module documentation suggests the only way to test a CGI script is in a web server environment. That's an unnecessarily complex environment for quick tests during development, and it precludes any simple separate unit tests.

In general, I'm not very impressed with the cgi module docs. In fact, browsing around revealed a number of parameter options that they leave undocumented.

Using what I found, I was able to build my own cgifactory module. Depending on the function called, it builds a cgi object from either a GET or POST query. For example:

  form = cgifactory.get(key1='value1', key2='v2')
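
For the curious, the essential trick is that cgi.FieldStorage can be handed a fake environ mapping instead of reading the real CGI environment. A minimal sketch of the idea behind the GET case (this is the gist, not cgifactory's actual code):

import cgi
import urllib.parse

def get(**params):
    """Build a cgi.FieldStorage as though the given key=value
    pairs had arrived in a GET query string."""
    environ = {'REQUEST_METHOD': 'GET',
               'QUERY_STRING': urllib.parse.urlencode(params)}
    return cgi.FieldStorage(environ=environ)

POST works similarly, except the pairs get encoded into a file-like body passed as the fp parameter, with REQUEST_METHOD, CONTENT_TYPE, and CONTENT_LENGTH set to match.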

If you then write your CGI script's main function to take an optional CGI object, you can easily build a CGI query, pass it to your script, and then run string matching on the (redirected) output produced by your script. Of course, most of your unit tests will probably be of component functions used by your script, but sometimes you want to test or run your script as a whole unit. cgifactory will help you there.
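
Concretely, such a whole-script test might look something like this (the script name, entry point, and field names are all hypothetical):

import io
import sys

import cgifactory
import status   # hypothetical: a CGI script exposing main(form=None)

form = cgifactory.get(user='jdoe', assignment='A05')
saved, sys.stdout = sys.stdout, io.StringIO()
try:
    status.main(form)
    output = sys.stdout.getvalue()
finally:
    sys.stdout = saved
assert 'A05' in output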

The cgifactory code is available here, where you'll always find the most recent version. The code itself is actually quite short; most of the file is documentation and doctests showing how to use it. I don't guarantee it's right, but it's worked for me so far. Hopefully it might be of use to someone else too! Feel free to copy, modify, and/or redistribute.

(Oh, and if you really need a command line version, it shouldn't be too hard to write a main that parses key=value pairs into a dictionary and then calls cgifactory.get(**pairs) to build the CGI object.)
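
Something like this sketch would probably do, assuming get() accepts keyword arguments as in the example above:

import sys

import cgifactory

# Parse command-line key=value arguments into a dict, then build the
# form. Each argument is expected to look like key=value.
pairs = dict(arg.split('=', 1) for arg in sys.argv[1:])
form = cgifactory.get(**pairs)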


Saturday, March 10, 2012

Bug: C enum variables stored as unsigned ints

I read in K&R that enum values are basically int constants (like #defines in that way), and so enum variables are equivalent to ints. However, in C (not C++ though), you may assign any int value to an enum variable--even if that int value is not one of the listed values in the enum definition. You can do this without even raising a compiler warning.

In a program I was working on, I took advantage of that. I had an enum of values 0 through 7:

  enum direction { N, NE, E, SE, S, SW, W, NW};

In a particular function, I was scanning a map for a target in different directions and decided to return -1 if nothing interesting was found in any direction. However, this led to a strange bug.

The following program shows this bug clearly:

#include <stdio.h>

enum nums {zero, one, two, three};

int main(void) {
  //using an enum as normal
  enum nums myNum = zero;
  printf("zero == %d\n", myNum);

  //assigning an int value to an enum
  myNum = -1;
  printf("-1 == %d\n", myNum);
  if (myNum >= 0) {
    printf("%d >= 0\n", myNum);
  } else {
    printf("%d < 0\n", myNum);
  }
  return 0;
}

This program prints:

 zero == 0
 -1 == -1
 -1 >= 0

I'm using GCC, and the manual itself says: "By default, these values are of type signed int" and "Although such variables are considered to be of an enumeration type, you can assign them any value that you could assign to an int variable".

However, further research shows that gcc will store an enum variable as an unsigned int if you have no negative values in your defined enum. For example, if I add neg = -1 as an extra value to my enum nums above, the output of the program changes to what I expect: -1 < 0.

Apparently section 6.7.2.2 of the C99 standard (draft version) clarifies that this is allowed--the particular int format used is implementation-dependent. An official version of the C90 standard is not freely available for comparison, and -std=c90 doesn't change gcc's behavior on this issue.

Monday, February 13, 2012

Lisp: First Impressions

I learned the basics of Lisp 10 years ago when I was the TA for ICS313: Programming Language Theory. This semester, I returned to Lisp after a decade away, again in the context of TAing for ICS313. After so long away, I basically had to start over from scratch again. Though it certainly goes much faster the second time around, it's still a lot like learning it the first time. So these are my "first" impressions.

We are learning Common Lisp. So far, I'm not very impressed.

First of all, I find that Common Lisp is bloated. There are usually about 6 different ways to do something. For example, want to compare two items? Consider =, eq, eql, equal, equalp... and that's just getting started. Want to print something? Consider print, princ, prin1, pprint, format, etc. The reason that so many options exist is because each is subtly different. I'm sure that, once mastered, all these options increase your programming power, since you can pick exactly the right tool for the job. And, in many cases, some of the options seem to be now-unused holdovers that persist from earlier days in Lisp's development. So eventually you learn which ones you can ignore. But it all makes Common Lisp rather tedious to learn.

My next stumbling block is the formatting. As evidenced by both my students' code and my own inclinations, it seems natural as a programmer to try to format the parentheses to make matching them up easier. The parentheses are subtle, and it's easy to get one out of place. This can produce code that compiles or loads but then fails at runtime because the meaning is subtly different from what you intended. This well-discussed blog posting sums it up fairly well. In general, I agree with the author there. Parentheses in Lisp mark the start and end of various expressions in the same way that braces mark the start and end of blocks in C-like languages. In C-like languages, we have proponents of such extremes as the Allman style of indenting. These Allman proponents feel that such formatting is so essential to readability that every brace deserves its own line! Yet the Lisp community advocates the exact opposite: that no parenthesis deserves a line of its own. Instead, they should all just be tucked away at the start or end of a line of code. Supposedly some day you get to the point where you can "see past the parentheses". But this seems to me like a convention that makes code unnecessarily hard to read.

I find most of the arguments for this "parenthesis stacking" format weak at best. One of them that irks me is that your editor will help you. First of all, you're not always reading code in an editor. Secondly, I shouldn't have to move my cursor around, press keys, or rely on fancy color highlighting to make quick sense of the code on the screen. It's called "readability", not "navigability". A third argument is that you can ignore the parentheses because the indenting should show you the structure. But the indenting is not what actually determines the code structure--the parentheses do! So I need to be able to quickly spot when the parentheses are wrong even though the indenting is correct.

This formatting thing bugs me because it seems the problem comes from an asinine coding convention choice. And that's the one reason that has me formatting this way rather than the way that makes sense to me: because "that's how it is done in Lisp." Like the choice to drive on the right side of the road, it's hard to buck the community on such a choice as a late-comer! Yet, because it makes the code harder to work with for no good reason that I can see, it feels a little like hazing: "This is the way we all had to deal with it when we learned, so you do too."

That brings me to the Lisp community in general. First of all, I don't care to worship at the altar of Emacs. As I mentioned above, I shouldn't have to have a special editor to write code. Don't get me wrong: an IDE is great at increasing productivity and I definitely want one. But the code should be both readable and writable without one. Then I'm free to choose the IDE that meets my own requirements. (On that note, after trying Emacs and LispWorks, I settled on Cusp. It's a little tricky to get up and running, and a bit quirky at first, but it works pretty well. The highlighting of parentheses structure is very helpful.)

Secondly, there's just this "Lisp is so much better!" vibe in the community. Now, obviously you've got to wave the flag for your favorite language. I have no problems there. But, as others have pointed out, if Lisp is so wonderful, how come we're not all using it? Just about every computer science major has to learn a Lisp dialect at some point, so it's not just an issue of exposure. Is it because it lacks good libraries for modern tasks? Is it because, while powerfully writable, its dynamic re-writability makes it hard for someone else to read or maintain? Is it because, while a pioneer of so many cool ideas, most of those ideas have now been imported from Lisp by other languages? Is it because Lisp has built up over 50 years of cruft, but each new Lisp project to simplify and overhaul it fractures the small but fanatical Lisp community, leading to inter-dialect derision and flamewars? Hard to tell. What I can tell is that the fanatical belief in Lisp's supremacy over all other programming languages is a little hard to swallow.

My general conclusion is that Lisp is still worth learning for the history of it. However, I don't think I'll be taking it up as my day-to-day language. I'll be looking elsewhere for more modern implementations of Lisp's contributions that are useful to me.

Still, the code-as-data idea largely remains unique to Lisp. That would be fun to explore more, so finding a Lisp dialect that fits me better might still be rewarding. I've considered Scheme a bit, but I think either Clojure or newLisp would be even better. (I found newLisp because I thought: "Why doesn't someone clean up and simplify Lisp back to its glorious essence? If I did that, I'd call it 'new Lisp'." I searched... and behold! Already done 20 years ago by someone else.) Both seem to have newer, friendlier, more open communities. Clojure has the advantage of the JVM and the entire Java API behind it. newLisp is targeting what I think is a great niche for Lisp: scripting. This is where powerful writability at the expense of readability and maintainability is a viable tradeoff.

These are my current impressions of Lisp. Perhaps they'll change with time. If so, I'll let you know!

Tuesday, December 13, 2011

Group Programming: Some Adjustment Required

After last week's technical review, Jeff and I swapped projects with the grads group. That is, we took over on the grads' code, and they started working on ours. Both groups added three more commands to the existing codebase that they had just inherited.

This was an interesting experience taking over on an existing software project. Of course, there was some initial overhead getting familiar with the layout of the new project's code. This wasn't too onerous, though, since the overall structure was fairly logical.

Once Jeff and I started working, the work went smoothly. We had already completed one project together. I think Jeff and I communicated well, and each of us made some significant contributions. We agreed on all of the major design issues.

Still, for all of the harmony, I'd say that my most interesting learning experience this week was the old "pick your battles" experience. If you've ever shared a living space with someone as an adult--whether a college roommate or a romantic partner--you probably know what I mean. There's a period of adjustment to the person's habits and quirks. For example, maybe your new roommate already put the silverware in the left drawer when you feel that it is obviously more natural to have it in the right drawer. Or maybe your new lover seems to somehow constantly drop loose change; eventually stray coins lie scattered throughout the house.

For each new quirk you discover, you have to decide whether it bothers you or not. If it does, then you need to figure out if you're willing to just overlook it, constantly clean up after them, or make an issue of it. If you make an issue of every little thing, you can easily become an unpleasant nag. Depending on how it's done, cleaning up behind someone can come across as a passive-aggressive show of disapproval. But, if something really does bother you and you don't speak up, you can find that your living space is not really your own. That's what I mean by picking your battles.

First, Jeff and I had to come to terms with the existing code. As we settled in, we found we had to "move the silverware" in a few places to meet our tastes. In particular, we overhauled the look and behavior of the user interface. We left the reflection-based loading of classes in place, though.

Then there were the minor differences in working style that I noticed between me and Jeff. For the most part, these were so minor as to border on petty: "If I'd done it, I wouldn't have spaced the output that way" or "I usually write one class at a time rather than mock up all the classes and then fill in the details later."

A couple weeks ago, I mentioned the motivating aspect of working in a group. My point this week is that, whenever you work with someone else, they are not going to do things exactly the same way you do. That is to be expected. But, even when you realize this consciously, each specific difference you discover can still cause a short pause: "Oh... that's not how I would have done that... but that doesn't mean it's wrong... Am I going to accept how this has been done and move on, or do I want to say something and change it?" Overall, I discovered that I could just let these things go. It was just a new experience for me to adjust to someone else's presence in my coding project space, both in terms of the old code and the new.

Jeff and I got everything finished up before the deadline. I think our code is pretty solid, though some of our line-spacing might be a little off between different commands.

Testing code that connects to a server continued to be a pain. I considered making some sort of mock client object, but there proved to be too many methods that would need to be overridden to make it worthwhile.

I also found that, when verifying someone else's code, it's easier to run through a manual test rather than plod through all their JUnits and make sure they cover all the necessary cases. While I think it's good to get some manual testing in there occasionally--especially since that lets you spot things like typos and weird formatting that a JUnit test is not going to catch--I think this is still not quite ideal. Manual testing can find bugs that the person's JUnits missed. But, if the code is correct, manual testing won't reveal that an appropriate JUnit test is missing altogether. This means that changes to the code could cause an undetected failure later. So I guess I should work on using the person's JUnit tests as a loose guide to manual testing in order to spot missing/poor tests as well as any existing defects.

I'm still a fan of Issue-based Project Management. I'm also glad we kept our habit of having the other person mark completed tasks as Verified. It's an extra step, but it's a good feeling knowing that everything has been double-checked by someone else.

This is the last project for my software engineering course, so it may mean a blog hiatus for a while. But I should be back occasionally with news on other new software projects!

Friday, December 2, 2011

Technical Review: hale-aloha-cli-grads

As I mentioned last week, Jeff and I recently finished up a small command line interface program that allows a user to poll a WattDepot server for accumulated energy-use data. Since this was done as part of a software engineering course, there were a number of other groups developing their own versions of the same program.

This week, I'm going to perform a structured technical review of one of those other projects: hale-aloha-cli-grads. This is partly for the practice of reviewing another software project in detail before contributing to it. By comparing the two, this will also hopefully shed some light on what we did well and what we could have done better on our own project. Finally, the feedback might be helpful to the grads group members.

A good technical review needs to be structured by a series of pre-determined issues to examine. More generally, I'll be evaluating whether the grads project meets the Three Prime Directives of open-source software engineering. (This is not the first time I've done this sort of review.) I'll be covering the prime directives (PDs) in the order I encountered them as an interested developer.

PD2: An external user can successfully install and use the system.

The project's homepage at Google Project Hosting was sufficiently detailed, giving a basic overview of what the program does and who wrote it. It also contains a link to a wiki UserGuide. I liked that the authors included a screenshot there. The UserGuide also documented a feature not obvious from within the program itself: you can poll the physical sources (such as Ilima-04-telco) for energy data, as well as polling the towers and lounges of the dorm.

Since there was only one link available under the project's Downloads tab, downloading was pretty self-explanatory. The brief Installation Guide explained how to extract the necessary executable .jar from the downloaded zip file and then run the program.


PD1: The system successfully accomplishes a useful task.

The program starts with a paragraph-long help message describing the possible commands:

--Available commands are:
energy-since: [tower | lounge] [Start]
Returns the energy used since the date (yyyy-mm-dd) to now.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.
daily-energy: [tower | lounge] [Date]
Returns the energy in kWh used by the tower or lounge for the specified date (yy
yy-mm-dd).
rank-towers:  [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)
--Enter a command (type 'quit' to exit):
 

This was useful, though the help message is a little mangled when viewed in a conventional 80-character-wide console window. (Our own project's output suffered this same issue.) The last line shown here is the prompt for input, though any input entered then appears on the next blank line. This is a little unconventional for a command line prompt.

Although covered in the UserGuide, there was no in-program list of valid tower and lounge names that could be polled for data. I found a work-around for this: The rank-towers command gives the list of towers, and a user could probably remember how to generate lounge names from that list. But it really would be helpful to have this information explicitly available from within the program itself.

Although the help message was useful when first starting the program, it gets printed every turn. This means that the prompt is effectively 12 lines long every input turn, even when using the program correctly! This quickly became annoying, since it clutters the output, which is usually only a line or two for most commands. Also, this behavior did not match the UserGuide screenshot mentioned earlier.

The program correctly supported all of the commands requested by the "client" of this software project. The error messages were generally fine, shielding the user from any technical details. (One quibble: The program complains "Not enough arguments" when it is given too many arguments.) Most importantly, the program did not crash at any point. (See the transcript at the end of this review for the full details.)

In general, testing this program reminded me how tedious any CLI--including our own project--can be to use.


PD3: An external developer can successfully understand and enhance the system.

Once I knew the program itself worked, it was time to take a look at what it would take to extend or contribute to it.

Development protocol. The grads project DeveloperGuide wiki page provides a brief summary of the required development practices: the coding standards used by the project, how to follow Issue-Driven Project Management, how to ensure that new code passes verification by all of the QA tools used, and a link to the continuous integration Jenkins server used by the project. This was sufficient for me, but the DeveloperGuide may be a little too brief for someone not already familiar with these terms--such as "Elements of Java Style", "Issue-Driven Project Management", and "CI".

QA. The quality assurance (QA) script verify.build.xml failed for me after the initial check-out. The JUnit tests intermittently failed due to WattDepot server connection problems. Also, the tests printed a lot of details to the screen, which suggests that perhaps the developers were doing manual verification of all of these tests. Checkstyle initially crashed due to a problem with one of its component classes, but running ant reallyclean and then ant -f verify.build.xml again fixed the problem. So, after repeatedly running verify, it eventually passed with no code changes required. In getting through this, it helped a lot that I was already familiar with this build system, since we used the same setup for our project.

Documentation. The DeveloperGuide did not explicitly mention how to generate JavaDocs. However, this is fairly obvious once one inspects the available *.build.xml files. The JavaDocs are generated automatically by verify.build.xml, and they can easily be generated directly from the code through Eclipse.

Examining the generated JavaDocs, all the packages and methods were documented. However, frequently the descriptions of the methods and parameters--particularly in the processor package--added very little information that wasn't already obvious from the name of the method, parameters, or return type. At least most of these names were fairly descriptive. Some classes--such as FakeCommand and ReallyFakeCommand--were not documented as to why they are needed.

Use of reflection. The project uses reflection to dynamically find the implementations of the various CLI commands. While this "drop in" modularity is very sexy in theory, the execution leaves something to be desired. First of all, reflection is always verbose. This is compounded here because two different approaches are needed to cover two possible situations: the classes are normal files in a filesystem or the classes are packaged in a jar file.

The jar file approach is brittle and will break if the containing jar file is renamed. Also, this approach cannot (easily) be tested by a JUnit test while the project is not yet packaged in a jar.

A JUnit test exists for the filesystem case. However, it is poorly written in that it imports and uses the very class (FakeCommand) that it is trying to find and load through reflection. Thus, if that command class is actually missing, the JUnit test doesn't just fail, it crashes. Also, the class discovery code itself makes assumptions about the state of the filesystem that don't always hold. For example, I am using both SVN and Eclipse. When I compile the source code in the src/ directory, Eclipse also copies the corresponding .svn directories from src/ into the bin/ folder with the generated .class files. Then, when I run the JUnit tests, the command-loading code chokes on the extra .svn directories. (The JUnit tests run fine if executed through Ant because this instead executes the class files in build/classes/, which does not contain any copied .svn folders.)

Finally, special classes, such as FakeCommand and ReallyFakeCommand, were constructed for test purposes. Normally, this is a very good practice. However, these classes are now visible to the entire system, not only to the testing code. Therefore, they then have to be filtered out of the list of available commands at multiple points in the regular code, which is messy and error-prone. (ReallyFakeCommand is never actually used, so it's just clutter.)

I think the grads programmers did fairly well handling all of the technical difficulties brought on by this reflection-based approach. Kudos to them for getting it working at all! I just question whether the approach itself is worth all this clutter and overhead. In our own project, we decided it wasn't worth it.

Code Readability and Maintainability. In general, the code is readable. As mentioned previously, some more descriptive/informative JavaDocs would help though.

Some data could be centralized to a single location. For example, every test class includes its own copy of the URL of the WattDepot server. This would make maintenance a headache if this address changes.

Unlike our program, the grads project does not include a list of towers or lounges. This means they cannot print a list of valid input values for the user. Also, it means any error in a source name must be sent to the server to be discovered, which puts more strain on the server. However, on the plus side, it means the CLI program would not need to be updated as source names change on the server. I think a compromise between these two approaches might be best: have the program query the server for the current list of valid sources at start up, and then use that list to inform the user and to weed out invalid queries before sending them to the server.

Testing. The code for the commands was somewhat verbose, but it was fairly well-tested, with an amount of test code roughly equal to the amount of code being tested (100 to 200 lines per command). Jacoco test coverage results showed that most of the important code was being tested--usually 60 to 80% coverage. Untested code included such things as multiple catch blocks for the different kinds of WattDepot exceptions that might arise. This is fair, since testing all of these catch blocks would be more of a test of WattDepot's exception-throwing than of the CLI program itself.

As a design choice, all commands print directly to the screen when executed. This was the source of all the extra output from the JUnit tests.

Development history. From the @author tags in the JavaDocs and the Issue history on the project hosting page, it was easy to see who worked on which part of the system. (It's also interesting to see some different programming styles evident in the code, even though all the developers formatted their code according to the same standards.) The work was evenly divided between the three developers.

The developers also followed their own development protocols. After the first handful of commits, later code commits are clearly linked to the specific issues they resolved. The team practiced continuous integration by committing code regularly (nearly every day). Except for the period during which the WattDepot server was down, there was only one commit that broke the build and this was resolved in less than an hour.


Conclusion

This project satisfies the three prime directives. Excusing the minor warts and blemishes that crop up in any large project, the program 1) accomplishes the task it was designed to do, 2) is sufficiently easy to download, install, and use, and 3) is documented and structured clearly enough that a new developer could contribute code to it without too much trouble.


Appendix: Testing Transcript

The following is a transcript of some of the tests I ran on this program. I cleaned up the output a bit. As mentioned above, the original output included a dozen lines of help message before each prompt.

C:\Files\Downloads>java -jar hale-aloha-cli-grads.jar
Server: http://server.wattdepot.org:8190/wattdepot/

Welcome to hale-aloha-cli-grads
Looking for commands....
Found command: energy-since
Found command: current-power
Found command: daily-energy
Found command: rank-towers

help

--Enter a command (type 'quit' to exit):
current power lounge
'current' is not a command!


--Enter a command (type 'quit' to exit):
current-power

Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.

--Enter a command (type 'quit' to exit):
current-power lounge

lounge is not a valid source name.

--Enter a command (type 'quit' to exit):
current-power Lehua

Lehua's power as of 2011-11-29 11:13:44 was 22.0kW

--Enter a command (type 'quit' to exit):
current-power Lehua with extra arguments

Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.

--Enter a command (type 'quit' to exit):
current-power Mokihana-C

Mokihana-C's power as of 2011-11-29 11:13:44 was 4.0kW

--Enter a command (type 'quit' to exit):
current-power Ilima-04-lounge

Ilima-04-lounge's power as of 2011-11-29 11:13:44 was 2.0kW

--Enter a command (type 'quit' to exit):
current-power Ilima-04-telco

Ilima-04-telco's power as of 2011-11-29 11:18:45 was 1.9kW


--Enter a command (type 'quit' to exit):
rank-towers yesterday today

Argument "yesterday" is invalid.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26

For the interval 2011-11-24 to 2011-11-26, energy consumption by tower was:
Lehua                            985 kWh
Mokihana                         1029 kWh
Ilima                            1098 kWh
Lokelani                         1181 kWh


--Enter a command (type 'quit' to exit):
rank-towers 2011-11-19 2011-11-26

The tower ranking could not be retrieved for the dates given.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-30 2011-11-26

End date must be greater than start date.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26 and more

Invalid arguments for rank-towers.
rank-towers:  [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)


--Enter a command (type 'quit' to exit):
energy-since Lehua-F 2011-11-27

No such source Lehua-F

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-Nov-27

Argument "2011-Nov-27" is invalid :2011-Nov-27

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27

Total energy consumption by Lehua-E from 2011-11-27 00:00:00 to 2011-11-29 11:06
:30 is: 264.1 kWh

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27 more

Not enough arguments.


--Enter a command (type 'quit' to exit):
daily-enegy Mokihana-B 2011-11-25
'daily-enegy' is not a command!

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25

Mokihana-B's energy consumption for 2011-11-25 was: 92.1 kWh

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B

Not enough arguments.

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25 more

Not enough arguments.

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 25-Nov-2011

Argument "25-Nov-2011" is invalid :25-Nov-2011

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-F 2011-11-25

No such source Mokihana-F


--Enter a command (type 'quit' to exit):
quit
quitting...