As I mentioned last week, Jeff and I recently finished up a small command line interface program that allows a user to poll a WattDepot server for accumulated energy-use data. Since this was done as part of a software engineering course, there were a number of other groups developing their own versions of the same program.
This week, I'm going to perform a structured technical review of one of those other projects: hale-aloha-cli-grads. This is partly for the practice of reviewing another software project in detail before contributing to it. Comparing the two projects should also shed some light on what we did well and what we could have done better on our own. Finally, the feedback might be helpful to the grads group members.
A good technical review needs to be structured around a series of pre-determined issues to examine. More generally, I'll be evaluating whether the grads project meets the Three Prime Directives of open-source software engineering. (This is not the first time I've done this sort of review.) I'll be covering the prime directives (PDs) in the order I encountered them as an interested developer.
PD2: An external user can successfully install and use the system.
The project's homepage at Google Project Hosting was sufficiently detailed, giving a basic overview of what the program does and who wrote it.
It also contains a link to a wiki UserGuide. I liked that the authors included a screenshot there. The UserGuide also documented a feature not obvious from within the program itself: you can poll the physical sources (such as Ilima-04-telco) for energy data, as well as polling the towers and lounges of the dorm.
Since there was only one link available under the project's Downloads tab, downloading was pretty self-explanatory. The brief Installation Guide explained how to extract the necessary executable .jar from the downloaded zip file and then run the program.
PD1: The system successfully accomplishes a useful task.
The program starts with a paragraph-long help message describing the possible commands:
--Available commands are:
energy-since: [tower | lounge] [Start]
Returns the energy used since the date (yyyy-mm-dd) to now.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.
daily-energy: [tower | lounge] [Date]
Returns the energy in kWh used by the tower or lounge for the specified date (yy
yy-mm-dd).
rank-towers: [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)
--Enter a command (type 'quit' to exit):
This was useful, though the help message is a little mangled when viewed in a conventional 80-character-wide console window. (Our own project's output suffered this same issue.) The last line shown here is the prompt for input, though any input entered then appears on the next blank line. This is a little unconventional for a command line prompt.
Although covered in the UserGuide, there was no in-program list of valid tower and lounge names that could be polled for data. I found a work-around for this: The rank-towers command gives the list of towers, and a user could probably remember how to generate lounge names from that list. But it really would be helpful to have this information explicitly available from within the program itself.
Although the help message was useful when first starting the program, it gets printed every turn. This means that the prompt is effectively 12 lines long on every input turn, even when using the program correctly! This quickly became annoying, since it clutters the output, which is usually only a line or two for most commands. Also, this behavior did not match the UserGuide screenshot mentioned earlier.
The program correctly supported all of the commands requested by the "client" of this software project. The error messages were generally fine, shielding the user from any technical details. (One quibble: the program complains "Not enough arguments" when it is given too many arguments.) Most importantly, the program did not crash at any point. (See the transcript at the end of this review for the full details.)
In general, testing this program reminded me how tedious any CLI--including our own project--can be to use.
PD3: An external developer can successfully understand and enhance the system.
Once I knew the program itself worked, it was time to take a look at what it would take to extend or contribute to it.
Development protocol. The grads project DeveloperGuide wiki page provides a brief summary of the required development practices: the coding standards used by the project, how to follow Issue-Driven Project Management, how to ensure that new code passes verification by all of the QA tools used, and a link to the continuous integration Jenkins server used by the project. This was sufficient for me, but the DeveloperGuide may be a little too brief for someone not already familiar with these terms--such as "Elements of Java Style", "Issue-Driven Project Management", and "CI".
QA. The quality assurance (QA) script verify.build.xml failed for me after the initial check-out. The JUnit tests intermittently failed due to WattDepot server connection problems. Also, the tests printed a lot of details to the screen, which suggests that perhaps the developers were doing manual verification of all of these tests. Checkstyle initially crashed due to a problem with one of its component classes, but running ant reallyclean and then ant -f verify.build.xml again fixed the problem. So, after repeatedly running verify, it eventually passed with no code changes required. In getting through this, it helped a lot that I was already familiar with this build system, since we used the same setup for our project.
Documentation. The DeveloperGuide did not explicitly mention how to generate JavaDocs. However, this is fairly obvious once one inspects the available *.build.xml files. They are generated automatically by verify.build.xml, and they can easily be generated directly from the code through Eclipse.
Examining the generated JavaDocs, I found that all the packages and methods were documented. However, the descriptions of the methods and parameters--particularly in the processor package--frequently added very little information that wasn't already obvious from the name of the method, parameters, or return type. At least most of these names were fairly descriptive. Some classes--such as FakeCommand and ReallyFakeCommand--were not documented as to why they are needed.
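To illustrate the difference (with hypothetical code of my own, not taken from the grads project), compare a JavaDoc comment that merely restates the signature with one that tells a caller something the names don't:

public interface EnergyQueries {

  /**
   * Gets the energy.
   *
   * @param source the source.
   * @return the energy.
   */
  double getEnergy(String source); // adds nothing beyond the names themselves

  /**
   * Returns the energy in kWh consumed by the named source since midnight,
   * as reported by the WattDepot server.
   *
   * @param source a tower, lounge, or physical source, e.g. "Ilima-04-telco"
   * @return the accumulated energy in kWh
   */
  double energySinceMidnight(String source);
}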
Use of reflection. The project uses reflection to dynamically find the implementations of the various CLI commands. While this "drop in" modularity is very sexy in theory, the execution leaves something to be desired. First of all, reflection code is verbose. This is compounded here because two different approaches are needed to cover two possible situations: the classes are normal files in a filesystem, or the classes are packaged in a jar file.
The jar file approach is brittle and will break if the containing jar file is renamed. Also, this approach cannot (easily) be tested by a JUnit test while the project is not yet packaged in a jar.
A JUnit test exists for the filesystem case. However, it is poorly written in that it imports and uses the very class (FakeCommand) that it is trying to find and load through reflection. Thus, if that command class is actually missing, the JUnit test doesn't just fail, it crashes. Also, the class discovery code itself makes assumptions about the state of the filesystem that don't always hold. For example, I am using both SVN and Eclipse. When I compile the source code in the src/ directory, Eclipse also copies the corresponding .svn directories from src/ into the bin/ folder with the generated .class files. Then, when I run the JUnit tests, the command-loading code chokes on the extra .svn directories. (The JUnit tests run fine if executed through Ant because this instead executes the class files in build/classes/, which does not contain any copied .svn folders.)
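For illustration, here is a minimal sketch of the filesystem half of such a discovery mechanism. The Command interface, class names, and package layout are my own placeholders, not the grads project's actual code; note the explicit check needed to skip non-class entries such as stray .svn directories.

import java.io.File;
import java.util.ArrayList;
import java.util.List;

/** Placeholder for the project's command abstraction. */
interface Command {
  String getName();
  void execute(String[] args);
}

public class CommandLoader {

  /** Scans a compiled-classes directory for classes implementing Command. */
  public static List<Command> loadCommands(File classDir, String packageName)
      throws Exception {
    List<Command> commands = new ArrayList<Command>();
    File packageDir = new File(classDir, packageName.replace('.', File.separatorChar));
    for (File file : packageDir.listFiles()) {
      String fileName = file.getName();
      // Skip anything that is not a compiled top-level class,
      // e.g. .svn directories copied into the output folder.
      if (file.isDirectory() || !fileName.endsWith(".class") || fileName.contains("$")) {
        continue;
      }
      String className =
          packageName + "." + fileName.substring(0, fileName.length() - ".class".length());
      Class<?> candidate = Class.forName(className);
      if (Command.class.isAssignableFrom(candidate) && !candidate.isInterface()) {
        commands.add((Command) candidate.newInstance());
      }
    }
    return commands;
  }
}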
Finally, special classes, such as FakeCommand and ReallyFakeCommand, were constructed for test purposes. Normally, this is a very good practice. However, these classes are now visible to the entire system, not only to the testing code. Therefore, they then have to be filtered out of the list of available commands at multiple points in the regular code, which is messy and error-prone. (ReallyFakeCommand is never actually used, so it's just clutter.)
I think the grads programmers did fairly well handling all of the technical difficulties brought on by this reflection-based approach. Kudos to them for getting it working at all! I just question whether the approach itself is worth all this clutter and overhead. In our own project, we decided it wasn't worth it.
Code Readability and Maintainability. In general, the code is readable. As mentioned previously, some more descriptive/informative JavaDocs would help though.
Some data could be centralized in a single location. For example, every test class includes its own copy of the URL of the WattDepot server, which would make maintenance a headache if that address ever changes.
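A minimal sketch of one way to do this, using a hypothetical constants class (the name is mine, not the project's):

public final class TestConstants {

  /** The WattDepot server polled by all of the tests. */
  public static final String SERVER_URL = "http://server.wattdepot.org:8190/wattdepot/";

  /** Prevents instantiation; this class only holds shared constants. */
  private TestConstants() {
  }
}

Each test class would then reference TestConstants.SERVER_URL, so a server move touches exactly one file.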
Unlike our program, the grads project does not include a list of towers or lounges. This means they cannot print a list of valid input values for the user. Also, it means any error in a source name must be sent to the server to be discovered, which puts more strain on the server. However, on the plus side, it means the CLI program would not need to be updated as source names change on the server. I think a compromise between these two approaches might be best: have the program query the server for the current list of valid sources at start up, and then use that list to inform the user and to weed out invalid queries before sending them to the server.
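As a rough sketch of that compromise, something like the following could run once at start-up. It assumes the WattDepot client exposes a getSources() call returning Source objects with getName(); the class itself is my own illustration, not part of either project, and the imports may need adjusting to the actual client API.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.wattdepot.client.WattDepotClient;
import org.wattdepot.resource.source.jaxb.Source;

public class SourceCache {

  private final List<String> sourceNames = new ArrayList<String>();

  /** Queries the server once and remembers every valid source name. */
  public SourceCache(WattDepotClient client) throws Exception {
    // Assumes the client's getSources() call; adjust if the API differs.
    for (Source source : client.getSources()) {
      sourceNames.add(source.getName());
    }
  }

  /** True if the given name is a source the server knows about. */
  public boolean isValidSource(String name) {
    return sourceNames.contains(name);
  }

  /** The cached names, e.g. for listing valid inputs to the user. */
  public List<String> getSourceNames() {
    return Collections.unmodifiableList(sourceNames);
  }
}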
Testing. The code for the commands was somewhat verbose, but it was fairly well-tested, with an amount of test code roughly equal to the amount of code being tested (100 to 200 lines per command). Jacoco test coverage results showed that most of the important code was being tested--usually 60 to 80% coverage. Untested code included such things as multiple catch blocks for the different kinds of WattDepot exceptions that might arise. This is fair, since testing all of these catch blocks would be more of a test of WattDepot's exception-throwing than of the CLI program itself.
As a design choice, all commands print directly to the screen when executed. This was the source of all the extra output from the JUnit tests.
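A small sketch of how the tests could capture that output instead of letting it spill onto the console (the test class and the printed line are illustrative placeholders, not the grads project's code):

import static org.junit.Assert.assertTrue;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class TestCommandOutput {

  private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
  private PrintStream originalOut;

  @Before
  public void redirectStdout() {
    // Swap System.out for an in-memory stream before each test.
    originalOut = System.out;
    System.setOut(new PrintStream(captured));
  }

  @After
  public void restoreStdout() {
    System.setOut(originalOut);
  }

  @Test
  public void commandOutputIsCapturedNotPrinted() {
    // A real test would execute one of the project's commands here;
    // this stand-in just prints the kind of line a command would produce.
    System.out.println("Lehua's power as of ... was 22.0kW");
    assertTrue(captured.toString().contains("kW"));
  }
}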
Development history. From the @author tags in the JavaDocs and the Issue history on the project hosting page, it was easy to see who worked on which part of the system. (It's also interesting to see some different programming styles evident in the code, even though all developers formatted their code according to the same standards.) The work was evenly divided between the three developers.
The developers also followed their own development protocols. After the first handful of commits, later code commits are clearly linked to the specific issues they resolved. The team practiced continuous integration by committing code regularly (nearly every day). Except for the period during which the WattDepot server was down, there was only one commit that broke the build, and it was resolved in less than an hour.
Conclusion
This project satisfies the three prime directives. Excusing the minor warts and blemishes that crop up in any large project, the program 1) accomplishes the task it was designed to do, 2) is sufficiently easy to download, install, and use, and 3) is documented and structured clearly enough that a new developer could contribute code to it without too much trouble.
Appendix: Testing Transcript
The following is a transcript of some of the tests I ran on this program. I cleaned up the output a bit. As mentioned above, the original output included a dozen lines of help message before each prompt.
C:\Files\Downloads>java -jar hale-aloha-cli-grads.jar
Server: http://server.wattdepot.org:8190/wattdepot/
Welcome to hale-aloha-cli-grads
Looking for commands....
Found command: energy-since
Found command: current-power
Found command: daily-energy
Found command: rank-towers
help
--Enter a command (type 'quit' to exit):
current power lounge
'current' is not a command!
--Enter a command (type 'quit' to exit):
current-power
Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.
--Enter a command (type 'quit' to exit):
current-power lounge
lounge is not a valid source name.
--Enter a command (type 'quit' to exit):
current-power Lehua
Lehua's power as of 2011-11-29 11:13:44 was 22.0kW
--Enter a command (type 'quit' to exit):
current-power Lehua with extra arguments
Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.
--Enter a command (type 'quit' to exit):
current-power Mokihana-C
Mokihana-C's power as of 2011-11-29 11:13:44 was 4.0kW
--Enter a command (type 'quit' to exit):
current-power Ilima-04-lounge
Ilima-04-lounge's power as of 2011-11-29 11:13:44 was 2.0kW
--Enter a command (type 'quit' to exit):
current-power Ilima-04-telco
Ilima-04-telco's power as of 2011-11-29 11:18:45 was 1.9kW
--Enter a command (type 'quit' to exit):
rank-towers yesterday today
Argument "yesterday" is invalid.
--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26
For the interval 2011-11-24 to 2011-11-26, energy consumption by tower was:
Lehua 985 kWh
Mokihana 1029 kWh
Ilima 1098 kWh
Lokelani 1181 kWh
--Enter a command (type 'quit' to exit):
rank-towers 2011-11-19 2011-11-26
The tower ranking could not be retrieved for the dates given.
--Enter a command (type 'quit' to exit):
rank-towers 2011-11-30 2011-11-26
End date must be greater than start date.
--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26 and more
Invalid arguments for rank-towers.
rank-towers: [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)
--Enter a command (type 'quit' to exit):
energy-since Lehua-F 2011-11-27
No such source Lehua-F
--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-Nov-27
Argument "2011-Nov-27" is invalid :2011-Nov-27
--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27
Total energy consumption by Lehua-E from 2011-11-27 00:00:00 to 2011-11-29 11:06
:30 is: 264.1 kWh
--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27 more
Not enough arguments.
--Enter a command (type 'quit' to exit):
daily-enegy Mokihana-B 2011-11-25
'daily-enegy' is not a command!
--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25
Mokihana-B's energy consumption for 2011-11-25 was: 92.1 kWh
--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B
Not enough arguments.
--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25 more
Not enough arguments.
--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 25-Nov-2011
Argument "25-Nov-2011" is invalid :25-Nov-2011
--Enter a command (type 'quit' to exit):
daily-energy Mokihana-F 2011-11-25
No such source Mokihana-F
--Enter a command (type 'quit' to exit):
quit
quitting...