Thursday, October 20, 2011

Collaborative Review

The students in my software engineering class are reviewing for the midterm.
To contribute to the group study session, here are 5 review questions (and answers)
of my own:



1) Name three features often provided by an IDE.

The most common features include:

  • Language-aware editing (syntax-highlighting)
  • Integrated compilation (and execution)
  • Integrated debugger
  • Project definition facilities

Extra features may also include: wizards (class creation, etc.), content assistance, refactoring, diagramming, a project file manager, and plugins (for tools such as configuration management, QA testing, JUnit, etc.).


2) Give an example of an annotation tag commonly used in Java.

The Java compiler recognizes: @Override, @Deprecated, and @SuppressWarnings.

JUnit uses: @Test, @Before, @After, @BeforeClass, @AfterClass, and @Ignore.

Annotations can also take parameters, as in: @Test(expected=UnsupportedOperationException.class).
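
For instance, here's a quick sketch of that parameterized form in a JUnit 4 test (the unmodifiable list here is just an illustration I picked because it reliably throws):

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import org.junit.Test;

  public class UnmodifiableListTest {
    // The test passes only if the expected exception is actually thrown.
    @Test(expected = UnsupportedOperationException.class)
    public void addToUnmodifiableListThrows() {
      List<String> list = Collections.unmodifiableList(new ArrayList<String>());
      list.add("nope");  // throws UnsupportedOperationException
    }
  }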


3) Which two methods in class Object should you override if you suspect that instances of your class might someday need to be stored in a Collection?

equals and hashCode.
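
As a minimal sketch (with a made-up Point class), a consistent pair looks something like this:

  public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
      this.x = x;
      this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
      if (this == obj) { return true; }
      if (!(obj instanceof Point)) { return false; }
      Point other = (Point) obj;
      return this.x == other.x && this.y == other.y;
    }

    @Override
    public int hashCode() {
      return 31 * x + y;  // equal objects must produce equal hash codes
    }
  }

The key contract: if two objects are equals, they must return the same hashCode, or hash-based collections like HashSet and HashMap will silently misbehave.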


4) How do you use @Override in Java? What does it do?

@Override is an annotation that you should add at the beginning of a method definition if that method is intended to override a method in a superclass or interface. Example:

  @Override
  public String toSting() { ...

Thanks to the @Override tag, I would now get a compile-time error that toSting() does not actually override a method. (This is because I misspelled String.)

Using @Override when overriding interface methods is only supported with Java 1.6+.


5) Ant is a build system. Name another one.

Maven and Make are also build systems.

Project Hosting: Hosted for the Very First Time

This week I added another tool to my programming toolbelt: project hosting and software configuration management.

When working on my dissertation project, I used Subversion (SVN) locally to keep track of different versions of the project's source code. This meant I didn't have to worry about making a big implementation mistake because I could always roll back to a previous version. Since I was the only programmer, I only committed weekly. The commit notes made for a good log of each week's work.

But this week, I set up a test project at Google Project Hosting for my BusyWeek robocode robot.

Setup went pretty smoothly. Google provides a pretty standard and simple interface to all their services, so nothing was too surprising there.

The built-in wiki and bug-tracking tools are pretty nice. (I haven't had a chance to explore the bug-tracking yet, though.) The wiki markup can be annoying--but isn't that true of every wiki system? Personally, I found the requirement to indent every * list bullet by two spaces rather tedious. And I couldn't seem to get my numbered list to continue if I included a code block within one of the points. While all the various wiki markups are supposedly easier, I find I prefer HTML in these cases because it gives me clear and explicit control. But this was all pretty minor stuff. The Preview button was a lot of help here, so I didn't fill the revision history with minor edits.

It's also a little strange that editing a wiki page changes the revision number on your software. However, I suppose the contents of the wiki and the contents of the code base are highly related, so this approach probably provides a clearer project history.

I did get side-tracked in selecting a license, though. I still haven't found a simple license that I really like, especially among the OSI licenses. Since my Ant build files and one of my test files are based on a DaCruzer robot project, I figured I had better use the same license. (This is fine, since I like the Apache 2.0 License.) However, neither my project nor DaCruzer actually includes any of that license information in the source code itself. I should probably look into that at some point.

Anyway, it's exciting to actually have a project hosted somewhere now! Fun stuff.

Tuesday, October 18, 2011

BusyWeek: Programming Under Pressure

It's been a busy three weeks! I completed two programming projects in that time (as well as a lot of other work). The contrast between how those two projects went gave me some interesting insights into testing and general best practices.

Three weekends ago, I completed a hefty implementation project for an algorithms course I'm taking. I had to implement and gather runtime data on 4 abstract data types: a sorted doubly-linked list, a skip list, a binary search tree, and a red-black tree. All of the implementations adhered to the same interface. I then had to write a text-based user interface to allow a user to run performance tests as well as call methods across all 4 implementations. All told, the project took me about 30 hours over 3 or 4 weeks--though half of that time was put in over the last two days. But the project was a success. (I had one known bug at the midnight due date, but I was able to fix it and resubmit an hour later.)

I think part of that success was in how I spent my time. I started early and spent that time designing the common interface, implementing the easiest data type (linked list), and writing a single battery of JUnit tests that I could then run across all 4 implementations. When crunch-time came, I was confident that my tests were robust. I could just focus on the implementation. When all my tests passed, I felt I was on pretty solid ground. In short, test-driven development eased that last-minute stress.
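
The pattern was an abstract test class with one small subclass per implementation. Roughly (SortedList, insert, and get here are hypothetical stand-ins for my actual interface):

  import static org.junit.Assert.assertEquals;
  import org.junit.Test;

  // One battery of tests, shared by all four implementations.
  public abstract class SortedListTest {

    /** Each implementation's test subclass supplies its own instance. */
    protected abstract SortedList<Integer> createList();

    @Test
    public void insertKeepsElementsSorted() {
      SortedList<Integer> list = createList();
      list.insert(5);
      list.insert(2);
      list.insert(9);
      assertEquals(Integer.valueOf(2), list.get(0));
      assertEquals(Integer.valueOf(9), list.get(2));
    }
  }

  // In its own file: bind the whole battery to one implementation.
  public class SkipListTest extends SortedListTest {
    @Override
    protected SortedList<Integer> createList() {
      return new SkipList<Integer>();
    }
  }

Write the tests once, and every new implementation gets the full battery for the price of one factory method.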

My other recent project was to design and test a robocode robot.

My original design for this was named JiggersTheFuzz. (The name is an obscure reference from an old Sierra game named Police Quest.) I planned to use linear targeting, which means anticipating a target bot's location by leading their current position by an amount relative to their velocity. I planned to scan enemies for energy drops that mean they'd just fired and then try to dodge their bullets (the "jigging" of the name). If robocode bots want to "mark" another bot, they often use the getName() method. So I wanted to dig into Robocode and the runtime stack a bit to see whether I could override my own getName() to return a different String each time, but in such a way that it didn't mess up the robocode simulation itself. This would be like a radar jammer (the "Fuzz" of the name): other bots would still see me, but they wouldn't be able to identify me between turns.

However, it was not to be.

Two weekends ago, I had a conference in California. While I managed to get caught up on everything else before I left, this robocode project followed me onto the plane. I tried working on it in my hotel at 9:30pm after two sleep-deprived nights and a long conference day. I tried again at 5:30am. It just wasn't happening.

I had a plan sketched out, but the details still involved too many unknowns. In short, I hadn't started early enough to be on solid ground come crunch-time. I started paring away features as the deadline neared. The first to go was the radar "fuzzing" (so I never got to look into that one). The second to go was scanning other bots to track their energy level. (This is actually very hard to do in robocode when extending Robot rather than AdvancedRobot, since a Robot can only do one thing at a time: turn its radar, move, or fire.) I was down to just moving regularly in the hopes of dodging bullets and generally being unpredictable... but I wasn't handling wall collisions at all. There was little left to pare away and still have a functional bot.

At this point, I called it: This project is going to be late.

So I renamed it BusyWeek. I kept the linear targeting and the naive bullet-dodging, and, once I got back to my normal life, I sat down to do it right. Last weekend, I finally finished my bot.

Movement: BusyWeek assumes 1-on-1 combat. Once I scan an enemy, I turn to be perpendicular to him. Then I move 20px at a time, which is a touch more than the width of my bot. Thus, if he happens to shoot just before I move, I'll be out of the way by the time the bullet arrives. Of course, this assumes he's at least 140px away and firing small, fast shots. (He can be only 77px away if firing big, slow shots.)

To ensure I'm this optimal distance away, my bot tries to stay between 140px and 180px away. (Farther than that, and it's too hard to make my own shots reliably.) If I'm currently outside of this optimal range, I'll turn to 45 degrees (instead of 90) from my enemy's heading and move that way. I handle walls by moving to the opposite side of the enemy's heading to avoid wall collisions. If I've been backed into a corner and can't avoid a wall collision in either direction, then I try to ram the opponent instead.
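
The core of the perpendicular sidestep, in basic Robot-API terms, is just a couple of calls. Here's a sketch; my actual bot layers the range and wall logic on top:

  // Inside my Robot subclass. e.getBearing() is the enemy's bearing
  // relative to my own heading, in degrees.
  public void onScannedRobot(ScannedRobotEvent e) {
    turnRight(e.getBearing() + 90);  // face perpendicular to the enemy
    ahead(20);                       // sidestep roughly a bot-width
  }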

Targeting: As mentioned, I used a form of linear targeting to lead the enemy based on his current heading, velocity, and distance from me. I decided to include a "fudge factor" of treating his velocity as one less than it actually is. This was because it seems that stops and turns are more common behaviors than continuing along in a straight path.
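
In code, the prediction works out to something like this sketch (the method and parameter names are mine; the 20 - 3 * power bullet-speed rule is Robocode's):

  /**
   * A sketch of the linear-targeting idea with the velocity fudge factor.
   * Angles are in radians. Estimates the bullet's travel time from the
   * current distance, then leads the target by that much.
   */
  static double[] predictPosition(double enemyX, double enemyY,
      double enemyHeading, double enemyVelocity,
      double distance, double firePower) {
    double bulletSpeed = 20 - 3 * firePower;  // Robocode's bullet-speed rule
    double time = distance / bulletSpeed;     // rough bullet travel time, in turns
    // Fudge factor: treat the enemy as moving one unit slower than reported,
    // since stops and turns are more common than straight-line travel.
    double fudged = Math.signum(enemyVelocity)
        * Math.max(0, Math.abs(enemyVelocity) - 1);
    // Robocode headings run clockwise from north, so sin gives x and cos gives y.
    double futureX = enemyX + Math.sin(enemyHeading) * fudged * time;
    double futureY = enemyY + Math.cos(enemyHeading) * fudged * time;
    return new double[] { futureX, futureY };
  }

Aiming is then just turning the gun toward that predicted point.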

Firing: BusyWeek shoots bullets with a power proportional to its own current energy level. This way I don't shoot to the point of becoming disabled. Since smaller bullets move faster, it also means I hopefully become more accurate as things get more desperate. Also, I don't even aim at an enemy if my gun is still too hot to fire. This saves me some time for scanning and moving instead.
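
As a sketch of the idea (the 1/20 divisor is my illustrative stand-in, not BusyWeek's actual constant):

  // Fire with power proportional to my remaining energy,
  // and don't bother aiming while the gun is still hot.
  if (getGunHeat() == 0) {
    double power = Math.min(3.0, getEnergy() / 20);  // never go all-in
    if (power >= 0.1) {  // 0.1 is Robocode's minimum bullet power
      fire(power);
    }
  }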

This design worked fairly well against the standard sample bots. Here are my win rates over 100 non-deterministic rounds each:

vs sample.SittingDuck: 100%
vs sample.Tracker: 79%
vs sample.Corners: 85%
vs sample.Fire: 99%
vs sample.Crazy: 100%
vs sample.SpinBot: 67%
vs sample.RamFire: 60%
vs sample.Walls: 76%

Testing this bot proved to be a trial, though. I used a number of computational methods--such as finding a point on the board based on a starting point, a heading, and a distance. Testing these using JUnit tests went fine. I only wish I'd written the tests first. I actually wrote most of these methods for some of my kata bots weeks ago, and so I was already confident they worked. This takes all the fun out of test-writing.
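
For example, the point-projection helper and its test looked something like this sketch (assuming Robocode's convention of headings in degrees, clockwise from north):

  import static org.junit.Assert.assertEquals;
  import java.awt.geom.Point2D;
  import org.junit.Test;

  public class ProjectionTest {

    /** Project from (x, y) along a heading for a given distance. */
    static Point2D.Double project(double x, double y,
        double heading, double distance) {
      double radians = Math.toRadians(heading);
      // With headings clockwise from north, sin gives x and cos gives y.
      return new Point2D.Double(x + distance * Math.sin(radians),
                                y + distance * Math.cos(radians));
    }

    @Test
    public void projectsDueNorth() {
      Point2D.Double p = project(0, 0, 0, 100);  // heading 0 = straight up
      assertEquals(0.0, p.x, 0.001);
      assertEquals(100.0, p.y, 0.001);
    }
  }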

Testing the behavior of my bot during the simulation proved to be trickier. For example, early in my design I decided that it'd be easy to test that my proportional-bullet-power feature was working correctly. (Testing this for bugs was a bit like only looking for lost keys under a streetlight--not because I expect to find them there but simply because searching in the light is much easier!) I simply paired BusyWeek with a bot that always fires shots of power 3. Then I used the RobotTestBed to snapshot all the bullets in the air each turn. I figured that if some of the bullets decreased in power as the battle progressed, then my algorithm was working. The test passed. However, when I changed my own bot to always fire shots of power 3 as well, I couldn't make the test fail! After some digging, it turns out that a bot--even a bot like SittingDuck that never fires--effectively fires a single shot of power 1 when it explodes. So I figured I'd just exclude all bullets of power 1 from my test. But then I realized that sometimes the other bot would fire lower-powered shots near the end of the match when it didn't have enough energy left to make a full-powered shot. Sorting through all these details took over an hour--effectively debugging the flickering of the streetlight rather than looking for my keys. Finally, I gave up: this was all to test a single line of code that I could just eyeball!
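
For the curious, the snapshot-based check was shaped roughly like this. (I'm reconstructing the RobotTestBed hooks and control-API names from memory here, so treat the details as approximate; the package name for my bot is also just a placeholder.)

  import static org.junit.Assert.assertTrue;
  import robocode.control.events.BattleCompletedEvent;
  import robocode.control.events.TurnEndedEvent;
  import robocode.control.snapshot.IBulletSnapshot;

  public class TestBusyWeekBulletPower extends RobotTestBed {

    private boolean sawReducedPower = false;

    @Override
    public String getRobotNames() {
      // Placeholder package name; the opponent always fires power 3.
      return "myrobots.BusyWeek,sample.Fire";
    }

    @Override
    public void onTurnEnded(TurnEndedEvent event) {
      for (IBulletSnapshot bullet : event.getTurnSnapshot().getBullets()) {
        // Power-1 bullets are ambiguous: death explosions and low-energy
        // shots both show up there, which is what made this test a trap.
        double power = bullet.getPower();
        if (power > 1.0 && power < 3.0) {
          sawReducedPower = true;
        }
      }
    }

    @Override
    public void onBattleCompleted(BattleCompletedEvent event) {
      assertTrue("Expected some reduced-power bullets", sawReducedPower);
    }
  }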

So, in conclusion, I learned the following:

  • Start early and get the design and necessary research out of the way. This will let you make a more realistic estimate of how much time the rest will take you.
  • Write tests before you write code. (Or, at the very least, write them together.) Writing tests after the fact is drudgery.
  • Write code so that as much as possible can be unit-tested. Trying to test emergent or complex behavior through a simulation or a user interface is much harder.
  • Make sure your test can fail before you rely on it when it passes.
  • When you're at a conference, you're not going to have either the time or the energy to work on anything else.

Admittedly, these lessons are all very standard software engineering tips. And I knew most of them, at least consciously. But it's nice to learn viscerally that they really are valid!