Tuesday, December 13, 2011

Group Programming: Some Adjustment Required

After last week's technical review, Jeff and I swapped projects with the grads group. That is, we took over the grads' code, and they started working on ours. Each group then added three more commands to the codebase it had just inherited.

Taking over an existing software project was an interesting experience. Of course, there was some initial overhead in getting familiar with the layout of the new project's code. This wasn't too onerous, though, since the overall structure was fairly logical.

Once Jeff and I got started, the work went smoothly. We had already completed one project together, and I think we communicated well. Each of us made some significant contributions, and we agreed on all of the major design issues.

Still, for all of the harmony, I'd say that my most interesting learning experience this week was the old "pick your battles" experience. If you've ever shared a living space with someone as an adult--whether a college roommate or a romantic partner--you probably know what I mean. There's a period of adjustment to the person's habits and quirks. For example, maybe your new roommate already put the silverware in the left drawer when you feel that it is obviously more natural to have it in the right drawer. Or maybe your new lover seems to somehow constantly drop loose change; eventually stray coins lie scattered throughout the house.

For each new quirk you discover, you have to decide whether it bothers you or not. If it does, then you need to figure out if you're willing to just overlook it, constantly clean up after them, or make an issue of it. If you make an issue of every little thing, you can easily become an unpleasant nag. Depending on how it's done, cleaning up behind someone can come across as a passive-aggressive show of disapproval. But, if something really does bother you and you don't speak up, you can find that your living space is not really your own. That's what I mean by picking your battles.

First, Jeff and I had to come to terms with the existing code. As we settled in, we found we had to "move the silverware" in a few places to meet our tastes. In particular, we overhauled the look and behavior of the user interface. We left the reflection-based loading of classes in place, though.

Then there were the small differences in working style that I noticed between Jeff and me. For the most part, these were so minor as to border on petty: "If I'd done it, I wouldn't have spaced the output that way" or "I usually write one class at a time rather than mock up all the classes and then fill in the details later."

A couple weeks ago, I mentioned the motivating aspect of working in a group. My point this week is that, whenever you work with someone else, they are not going to do things exactly the same way you do. That is to be expected. But, even when you realize this consciously, each specific difference you discover can still cause a short pause: "Oh.. that's not how I would have done that... but that doesn't mean it's wrong... Am I going to accept how this has been done and move on, or do I want to say something and change it?" Overall, I discovered that I could just let these things go. It was just a new experience for me to adjust to someone else's presence in my coding project space, both in terms of the old code and the new.

Jeff and I got everything finished up before the deadline. I think our code is pretty solid, though some of our line-spacing might be a little off between different commands.

Testing code that connects to a server continued to be a pain. I considered making some sort of mock client object, but there proved to be too many methods that would need to be overridden to make it worthwhile.
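
For the record, here's roughly the idea I was toying with, sketched with made-up names: rather than subclassing the real WattDepot client, you could hide the handful of calls the CLI actually uses behind a small interface and write a trivial fake for tests. Everything below is hypothetical, not code from either project.

// Hypothetical sketch: wrap only the calls the CLI needs behind a small
// interface so a test double doesn't have to override dozens of methods.
// All names here are made up for illustration.
public interface EnergyDataSource {
  /** Current power (kW) for the named source. */
  double getCurrentPower(String sourceName);

  /** Energy (kWh) consumed by the source on the given date (yyyy-mm-dd). */
  double getDailyEnergy(String sourceName, String date);
}

/** In-memory stand-in for tests; no server connection needed. */
class FakeEnergyDataSource implements EnergyDataSource {
  @Override
  public double getCurrentPower(String sourceName) {
    return 22.0; // canned value
  }

  @Override
  public double getDailyEnergy(String sourceName, String date) {
    return 92.1; // canned value
  }
}

The catch, of course, is that you have to commit to that wrapper early; bolting it on after the fact is exactly the kind of work that didn't seem worthwhile here.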

I also found that, when verifying someone else's code, it's easier to run through a manual test than to plod through all of their JUnit tests and make sure they cover all the necessary cases. While I think it's good to get some manual testing in there occasionally--especially since that lets you spot things like typos and weird formatting that a JUnit test is not going to catch--I think this is still not quite ideal. Manual testing can find bugs that the person's JUnit tests missed. But, if the code is correct, manual testing won't reveal that an appropriate JUnit test is missing altogether, which means that later changes to the code could cause an undetected failure. So I should work on using the other person's JUnit tests as a loose guide to my manual testing, in order to spot missing or poor tests as well as any existing defects.

I'm still a fan of Issue-based Project Management. I'm also glad we kept our habit of having the other person mark completed tasks as Verified. It's an extra step, but it's a good feeling knowing that everything has been double-checked by someone else.

This is the last project for my software engineering course, so it may mean a blog hiatus for a while. But I should be back occasionally with news on other new software projects!

Friday, December 2, 2011

Technical Review: hale-aloha-cli-grads

As I mentioned last week, Jeff and I recently finished up a small command line interface program that allows a user to poll a WattDepot server for accumulated energy-use data. Since this was done as part of a software engineering course, there were a number of other groups developing their own versions of the same program.

This week, I'm going to perform a structured technical review of one of those other projects: hale-aloha-cli-grads. This is partly for the practice of reviewing another software project in detail before contributing to it. Comparing the two projects will also, I hope, shed some light on what we did well and what we could have done better on our own. Finally, the feedback might be helpful to the grads group members.

A good technical review needs to be structured around a series of predetermined issues to examine. More generally, I'll be evaluating whether the grads project meets the Three Prime Directives of open-source software engineering. (This is not the first time I've done this sort of review.) I'll be covering the prime directives (PDs) in the order I encountered them as an interested developer.

PD2: An external user can successfully install and use the system.

The project's homepage at Google Project Hosting was sufficiently detailed, giving a basic overview of what the program does and who wrote it. It also contained a link to a wiki UserGuide. I liked that the authors included a screenshot there. The UserGuide also documented a feature not obvious from within the program itself: you can poll the physical sources (such as Ilima-04-telco) for energy data, as well as the towers and lounges of the dorm.

Since there was only one link available under the project's Downloads tab, downloading was pretty self-explanatory. The brief Installation Guide explained how to extract the necessary executable .jar from the downloaded zip file and then run the program.


PD1: The system successfully accomplishes a useful task.

The program starts with a paragraph-long help message describing the possible commands:

--Available commands are:
energy-since: [tower | lounge] [Start]
Returns the energy used since the date (yyyy-mm-dd) to now.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.
daily-energy: [tower | lounge] [Date]
Returns the energy in kWh used by the tower or lounge for the specified date (yy
yy-mm-dd).
rank-towers:  [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)
--Enter a command (type 'quit' to exit):
 

This was useful, though the help message is a little mangled when viewed in a conventional 80-character-wide console window. (Our own project's output suffered from this same issue.) The last line shown here is the prompt for input, though any input entered then appears on the next blank line, which is a little unconventional for a command line prompt.

Although covered in the UserGuide, there was no in-program list of valid tower and lounge names that could be polled for data. I found a work-around for this: The rank-towers command gives the list of towers, and a user could probably remember how to generate lounge names from that list. But it really would be helpful to have this information explicitly available from within the program itself.

Although the help message was useful when first starting the program, it gets printed every turn. This means that the prompt is effectively 12 lines long on every input turn, even when using the program correctly! This quickly became annoying, since it clutters the output, which is usually only a line or two for most commands. This behavior also did not match the UserGuide screenshot mentioned earlier.

The program correctly supported all of the commands requested by the "client" of this software project. The error messages were generally fine, shielding the user from any technical details. (One quibble: the program complains "Not enough arguments" when it is given too many arguments.) Most importantly, the program did not crash at any point. (See the transcript at the end of this review for the full details.)

In general, testing this program reminded me how tedious any CLI--including our own project--can be to use.


PD3: An external developer can successfully understand and enhance the system.

Once I knew the program itself worked, it was time to take a look at what it would take to extend or contribute to it.

Development protocol. The grads project's DeveloperGuide wiki page provides a brief summary of the required development practices: the coding standards used by the project, how to follow Issue-Driven Project Management, how to ensure that new code passes verification by all of the QA tools used, and a link to the project's continuous integration Jenkins server. This was sufficient for me, but the DeveloperGuide may be a little too brief for someone not already familiar with terms such as "Elements of Java Style", "Issue-Driven Project Management", and "CI".

QA. The quality assurance (QA) script verify.build.xml failed for me after the initial check-out. The JUnit tests intermittently failed due to WattDepot server connection problems. Also, the tests printed a lot of detail to the screen, which suggests that the developers may have been manually verifying the output of these tests. Checkstyle initially crashed due to a problem with one of its component classes, but running ant reallyclean and then ant -f verify.build.xml again fixed the problem. So, after running verify repeatedly, it eventually passed with no code changes required. It helped a lot that I was already familiar with this build system, since we used the same setup for our own project.

Documentation. The DeveloperGuide did not explicitly mention how to generate the JavaDocs. However, this is fairly obvious once one inspects the available *.build.xml files: they are generated automatically by verify.build.xml, and they can easily be generated directly from the code through Eclipse.

Examining the generated JavaDocs, all of the packages and methods were documented. However, the descriptions of the methods and parameters--particularly in the processor package--frequently added very little information beyond what was already obvious from the method name, parameters, or return type. At least most of these names were fairly descriptive. Some classes--such as FakeCommand and ReallyFakeCommand--gave no explanation of why they are needed.

Use of reflection. The project uses reflection to dynamically find the implementations of the various CLI commands. While this "drop-in" modularity is very sexy in theory, the execution leaves something to be desired. First of all, reflection code tends to be verbose. This is compounded here because two different approaches are needed to cover two possible situations: either the classes are normal files in a filesystem, or the classes are packaged in a jar file.
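
To give a rough idea of the filesystem flavor of this technique, here is my own sketch of it (this is not the grads' code; the Command interface and the package name are hypothetical):

// Rough sketch of filesystem-based command discovery via reflection.
// The Command interface and the "cli.command" package are hypothetical.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

interface Command {
  void execute(String[] args);
}

public class CommandLoader {
  public static List<Command> loadCommands() throws Exception {
    List<Command> commands = new ArrayList<Command>();
    String packageName = "cli.command";
    // Locate the directory holding the compiled classes of the package.
    File dir = new File(Thread.currentThread().getContextClassLoader()
        .getResource(packageName.replace('.', '/')).toURI());
    File[] files = dir.listFiles();
    if (files == null) {
      return commands;
    }
    for (File file : files) {
      String name = file.getName();
      // Skip anything that isn't a compiled class (subdirectories, etc.).
      if (!name.endsWith(".class")) {
        continue;
      }
      Class<?> clazz = Class.forName(packageName + "." + name.replace(".class", ""));
      if (Command.class.isAssignableFrom(clazz) && !clazz.isInterface()) {
        commands.add((Command) clazz.newInstance());
      }
    }
    return commands;
  }
}

Even this simplified version has to worry about what else might be sitting in that directory, which hints at the problems described below.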

The jar file approach is brittle and will break if the containing jar file is renamed. Also, this approach cannot (easily) be tested by a JUnit test while the project is not yet packaged in a jar.

A JUnit test exists for the filesystem case. However, it is poorly written in that it imports and uses the very class (FakeCommand) that it is trying to find and load through reflection. Thus, if that command class is actually missing, the JUnit test doesn't just fail, it crashes. Also, the class discovery code itself makes assumptions about the state of the filesystem that don't always hold. For example, I am using both SVN and Eclipse. When I compile the source code in the src/ directory, Eclipse also copies the corresponding .svn directories from src/ into the bin/ folder with the generated .class files. Then, when I run the JUnit tests, the command-loading code chokes on the extra .svn directories. (The JUnit tests run fine if executed through Ant because this instead executes the class files in build/classes/, which does not contain any copied .svn folders.)

Finally, special classes, such as FakeCommand and ReallyFakeCommand, were constructed for test purposes. Normally, this is a very good practice. However, these classes are now visible to the entire system, not only to the testing code. Therefore, they then have to be filtered out of the list of available commands at multiple points in the regular code, which is messy and error-prone. (ReallyFakeCommand is never actually used, so it's just clutter.)

I think the grads programmers did fairly well handling all of the technical difficulties brought on by this reflection-based approach. Kudos to them for getting it working at all! I just question whether the approach itself is worth all this clutter and overhead. In our own project, we decided it wasn't worth it.

Code Readability and Maintainability. In general, the code is readable. As mentioned previously, some more descriptive/informative JavaDocs would help though.

Some data could be centralized to a single location. For example, every test class includes its own copy of the URL of the WattDepot server. This would make maintenance a headache if this address changes.
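
A minimal sketch of what I mean, assuming a shared constants class (the class name is my own invention):

// Hypothetical shared constants class so the server URL lives in one place.
public final class TestConfig {
  /** The WattDepot server polled by all of the tests. */
  public static final String SERVER_URL = "http://server.wattdepot.org:8190/wattdepot/";

  private TestConfig() {
    // Utility class; not meant to be instantiated.
  }
}

Each test class would then refer to TestConfig.SERVER_URL, and a server move becomes a one-line change.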

Unlike our program, the grads project does not include a list of towers or lounges. This means they cannot print a list of valid input values for the user. Also, it means any error in a source name must be sent to the server to be discovered, which puts more strain on the server. However, on the plus side, it means the CLI program would not need to be updated as source names change on the server. I think a compromise between these two approaches might be best: have the program query the server for the current list of valid sources at start up, and then use that list to inform the user and to weed out invalid queries before sending them to the server.
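
Sketching that compromise (again with made-up names, and leaving out the actual WattDepot call that would fetch the list):

// Hypothetical sketch of the start-up compromise: cache the valid source
// names once, then validate user input locally before hitting the server.
import java.util.Collections;
import java.util.Set;

public class SourceCatalog {
  private final Set<String> validSources;

  /** The set is assumed to be filled from the server when the program starts. */
  public SourceCatalog(Set<String> sourcesFromServer) {
    this.validSources = Collections.unmodifiableSet(sourcesFromServer);
  }

  /** Check a user-supplied name without a round trip to the server. */
  public boolean isKnownSource(String name) {
    return validSources.contains(name);
  }

  /** Lets the CLI print the list of valid names for the user. */
  public Set<String> getSources() {
    return validSources;
  }
}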

Testing.
The code for the commands was somewhat verbose, but it was fairly well-tested with an amount of test code roughly equal to the amount of code being tested (100 to 200 lines per command). Jacoco test coverage results showed that most of the important code was being tested--usually 60 to 80% coverage. Untested code included such things as multiple catch blocks for the different kinds of WattDepot exceptions that might arise. This is fair, since testing all of these catch blocks would be more of a test of WattDepot's exception-throwing than of the CLI program itself.

As a design choice, all commands print directly to the screen when executed. This was the source of all the extra output from the JUnit tests.
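
For what it's worth, a test can still keep that output off the console by temporarily redirecting System.out. A quick sketch of the idea (the CurrentPowerCommand name is hypothetical, standing in for whatever the project's command class is actually called):

// Sketch: capture a command's printed output in a JUnit test instead of
// letting it spill to the console. CurrentPowerCommand is a made-up name.
import static org.junit.Assert.assertTrue;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import org.junit.Test;

public class TestCommandOutput {
  @Test
  public void outputMentionsTheSource() {
    PrintStream originalOut = System.out;
    ByteArrayOutputStream captured = new ByteArrayOutputStream();
    System.setOut(new PrintStream(captured));
    try {
      new CurrentPowerCommand().execute(new String[] {"Lehua"});
    } finally {
      System.setOut(originalOut); // always restore the real console
    }
    assertTrue(captured.toString().contains("Lehua"));
  }
}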

Development history.
From the @author tags in the JavaDocs and the Issue history on the project hosting page, it was easy to see who worked on which part of the system. (It's also interesting to see the different programming styles evident in the code, even though all of the developers formatted their code according to the same standards.) The work was evenly divided between the three developers.

The developers also followed their own development protocols. After the first handful of commits, later code commits are clearly linked to the specific issues they resolved. The team practiced continuous integration by committing code regularly (nearly every day). Except for the period during which the WattDepot server was down, there was only one commit that broke the build, and it was resolved in less than an hour.


Conclusion

This project satisfies the three prime directives. Excusing the minor warts and blemishes that crop up in any large project, the program 1) accomplishes the task it was designed to do, 2) is sufficiently easy to download, install, and use, and 3) is documented and structured clearly enough that a new developer could contribute code to it without too much trouble.


Appendix: Testing Transcript

The following is a transcript of some of the tests I ran on this program. I cleaned up the output a bit; as mentioned above, the original output included a dozen lines of help message before each prompt.

C:\Files\Downloads>java -jar hale-aloha-cli-grads.jar
Server: http://server.wattdepot.org:8190/wattdepot/

Welcome to hale-aloha-cli-grads
Looking for commands....
Found command: energy-since
Found command: current-power
Found command: daily-energy
Found command: rank-towers

help

--Enter a command (type 'quit' to exit):
current power lounge
'current' is not a command!


--Enter a command (type 'quit' to exit):
current-power

Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.

--Enter a command (type 'quit' to exit):
current-power lounge

lounge is not a valid source name.

--Enter a command (type 'quit' to exit):
current-power Lehua

Lehua's power as of 2011-11-29 11:13:44 was 22.0kW

--Enter a command (type 'quit' to exit):
current-power Lehua with extra arguments

Invalid arguments for current-power.
current-power [tower | lounge]
Returns the current power in kW for the associated tower or lounge.

--Enter a command (type 'quit' to exit):
current-power Mokihana-C

Mokihana-C's power as of 2011-11-29 11:13:44 was 4.0kW

--Enter a command (type 'quit' to exit):
current-power Ilima-04-lounge

Ilima-04-lounge's power as of 2011-11-29 11:13:44 was 2.0kW

--Enter a command (type 'quit' to exit):
current-power Ilima-04-telco

Ilima-04-telco's power as of 2011-11-29 11:18:45 was 1.9kW


--Enter a command (type 'quit' to exit):
rank-towers yesterday today

Argument "yesterday" is invalid.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26

For the interval 2011-11-24 to 2011-11-26, energy consumption by tower was:
Lehua                            985 kWh
Mokihana                         1029 kWh
Ilima                            1098 kWh
Lokelani                         1181 kWh


--Enter a command (type 'quit' to exit):
rank-towers 2011-11-19 2011-11-26

The tower ranking could not be retrieved for the dates given.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-30 2011-11-26

End date must be greater than start date.

--Enter a command (type 'quit' to exit):
rank-towers 2011-11-24 2011-11-26 and more

Invalid arguments for rank-towers.
rank-towers:  [start date] [end date]
Returns a list in sorted order from least to most energy consumed between the [s
tart] and [end] date (yyyy-mm-dd)


--Enter a command (type 'quit' to exit):
energy-since Lehua-F 2011-11-27

No such source Lehua-F

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-Nov-27

Argument "2011-Nov-27" is invalid :2011-Nov-27

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27

Total energy consumption by Lehua-E from 2011-11-27 00:00:00 to 2011-11-29 11:06
:30 is: 264.1 kWh

--Enter a command (type 'quit' to exit):
energy-since Lehua-E 2011-11-27 more

Not enough arguments.


--Enter a command (type 'quit' to exit):
daily-enegy Mokihana-B 2011-11-25
'daily-enegy' is not a command!

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25

Mokihana-B's energy consumption for 2011-11-25 was: 92.1 kWh

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B

Not enough arguments.

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 2011-11-25 more

Not enough arguments.

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-B 25-Nov-2011

Argument "25-Nov-2011" is invalid :25-Nov-2011

--Enter a command (type 'quit' to exit):
daily-energy Mokihana-F 2011-11-25

No such source Mokihana-F


--Enter a command (type 'quit' to exit):
quit
quitting...