An Obsession with Everything Else

Monday, November 02, 2009

Well, Which Is It?

Yesterday I spent some time at the library, researching a particular aspect of Julia Child's career. I had an idea for a piece — which may or may not work out — and I needed to do some initial investigation.

Reading through a number of her biographies, side by side, I was struck by the inconsistencies among them. For instance, Laura Shapiro's slim book Julia Child says that Mastering the Art of French Cooking only sold 16,000 copies in its first year, not taking off until a year after its release. Noel Riley Fitch's detailed Appetite For Life says, "By August, less than a year since publication, Mastering had sold 100,000 copies … and was in its fifth printing."

When writing of Louisette Bertholle's royalties, Fitch says that her share was 18 percent (versus 41 percent each for Beck and Child), granted for conceiving the idea. (Bertholle did very little work on the book itself.) Joan Reardon, in an article about Mastering for the Summer 2005 issue of Gastronomica, says that it was 10 percent.

These are not books about days of yore, with archivists and researchers piecing together scattered, weathered scraps of data. Some of the participants in the Julia Child story are still alive. Child herself was when Fitch's book came out in 1999. And I imagine Knopf, the publisher, still has records from that time. Shouldn't they be more consistent?

My inclination is to trust Fitch's account, if only because of its extensive detail. (You could make the case that Reardon's 10 percent is a typo; the rest of her piece lines up with Fitch's account, at least for the parts I focused on.) But Shapiro says she used Fitch quite a bit. Does she have new information about initial sales? Or is this an editing issue: Did Shapiro mean that the book sold only 16,000 copies in 1961 (it came out in October of that year)? Or perhaps her note that sales didn't take off until the fall of 1962, which might have been before October, actually lines up with Fitch, who merely lumps the entire first year's sales together without giving a breakdown.

Sunday, November 01, 2009

Creative Nonfiction Workshop

The wine class I teach didn't happen this semester. Poor economy, low enrollment, class canceled.

I could have just taken a break, I suppose, but I decided to improve my writing with a class, Creative Nonfiction Workshop, that meets at the same time I normally teach and that I therefore couldn't have taken before. I'm decent at research and the mechanics of writing, but the features that resonate with readers are the ones that tell stories, and I lack the tools to write those pieces consistently. I've tried to develop them on my own — I've studied Gay Talese and read through Janet Burroway's Writing Fiction and William Blundell's Art and Craft of Feature Writing — but the techniques didn't stick.

My professor divides the class into two conceptual halves: heavy on lecture for the first five weeks and heavy on discussion for the next five. The lecture part helped us craft scenes, paint useful characters, present meaningful dialogue, show — not tell — details, and evoke a feeling in the reader. These are all tools for fiction writers, of course, but they have useful roles in nonfiction as well. In the discussion portion, we spend most of the class critiquing our fellow students' pieces, discussing what works and what needs improving.

I noticed right away that my fellow students generally have more serious topics than I. Most of them have a story to tell and they want to get it out there — the young theater major with the passionate but unconsummated love affair, the woman whose adult son died while in his 20s, the woman writing about a Vietnam vet. I just want to be a better writer. So the class inevitably wanders into topics I'm comfortable with: how to get started, how to finish, and how to revise. But I still find new nuggets in the professor's discussion of these subjects.

I can see the change in my writing. I compare the second piece I'm writing to the first, and I see richer details, scenes that establish place and time, and better dialogue. I come home from class inspired about new topics and ways to present older ones. I take extensive notes in class.

I'm ready to go back to the wine teaching circuit, but I wish I had the ability to take more writing classes like this one.

Wednesday, December 31, 2008

Unit Testing The UI

The conventional wisdom on unit tests is that they stop at the UI. There are, after all, QA-centric tools that automate the tasks of pressing buttons, choosing menu items, and putting text into forms.

But a lot of what I do at work is writing code that generates web pages. It's been relatively easy to introduce unit tests into utility classes or the service layer, but I wanted to give myself the same stability in the UI universe. Since my team is good about separating models, views, and controllers, I figured out an easy way to bring unit tests up against the UI: Execute methods in the controller, and then inspect the resulting model.

We use the Spring framework, which makes this concept pretty easy. It even includes mock objects for setting up HTTP requests and responses. The generic Spring flow enforces good separation: Your controller's handle method gets called and is expected to return a ModelAndView object. Your unit test can call the same method with appropriately configured mock request objects and then peer into the model object to make sure that values are set properly.
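
A sketch of what such a test can look like, using Spring's stock mock classes; RecipeController and its model keys are hypothetical stand-ins for our real code:

import junit.framework.TestCase;

import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;
import org.springframework.web.servlet.ModelAndView;

public class RecipeControllerTest extends TestCase {

    public void testShowRecipePopulatesModel() throws Exception {
        // simulate GET /recipe.htm?id=42 without a servlet container
        MockHttpServletRequest request =
            new MockHttpServletRequest("GET", "/recipe.htm");
        request.setParameter("id", "42");
        MockHttpServletResponse response = new MockHttpServletResponse();

        // RecipeController stands in for one of our Controller implementations
        RecipeController controller = new RecipeController();
        ModelAndView mav = controller.handleRequest(request, response);

        // peer into the model; the view never has to render
        assertEquals("recipeDetail", mav.getViewName());
        assertNotNull(mav.getModel().get("recipe"));
    }
}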

Since our views are usually JSPs, which require a running web container, I don't think there's a good way to write unit tests for that layer. But since the view relies on the model being correct, and the unit tests can look at the model, I think it's a pretty good stopping point. I don't want to rewrite those automated button-pushing tools, after all.

Trying to incorporate this idea into my Mac/iPhone programming, I realized I could go a little further than I could in the web container world. I'm working on an iPhone app to enable crazy Derrick meal planning, and one of the controllers is responsible for showing me things I have to do on any one day. I started with that as an experiment. To enable my "UI-ish" unit tests, I split Apple's default template for this controller into a definite Model object and a definite "ViewLogic" object. The ViewLogic object handles formatting the date, determining whether the table is present, how many sections it has, and what the title of each section is. All of these vary depending on the state of the model, because there are different categories of things to worry about for any given day, and you could have zero to three of those categories. My test invokes methods on my controller that update the model object. Then I inspect that model and execute various methods on the ViewLogic object to make sure they return correct values based on the model.
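
Here is a rough sketch of one of those tests. The class and method names are invented stand-ins for my real ones, and the assertions use OCUnit's SenTestingKit macros:

#import <SenTestingKit/SenTestingKit.h>
#import "DayModel.h"      // hypothetical model object
#import "DayViewLogic.h"  // hypothetical ViewLogic object

@interface DayViewLogicTest : SenTestCase
@end

@implementation DayViewLogicTest

- (void)testSectionsTrackModelState {
    // put the model into a known state: one category of to-dos today
    DayModel *model = [[DayModel alloc] init];
    [model addItem:@"defrost the stock" toCategory:@"prep"];

    DayViewLogic *logic = [[DayViewLogic alloc] initWithModel:model];

    // no UITableView needed: the ViewLogic methods read only the model
    STAssertEquals([logic numberOfSections], (NSInteger)1,
                   @"one category should yield one section");
    STAssertEqualObjects([logic titleForSection:0], @"prep",
                         @"section title should come from the category");

    [logic release];
    [model release];
}

@end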

Now I can refactor the methods in ViewLogic, some of which have relatively high cyclomatic complexity, with high confidence that I won't break the code that relies on them or introduce new bugs.

There are a couple of consequences to this, though. One is a bit of class bloat. I can re-use the model object for at least one other view, but the ViewLogic object is tailored to that one view. The other consequence is that the ViewLogic has become an adapter, converting the methods that the framework expects to find (which have context about views and such that don't exist in the unit test world) into methods I define. Rather than doing the normal thing, then:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // do calculations about number of rows
}


I do this:

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [self rowsInSection:section];
}


Not necessarily bad, but it does make those methods a little anemic. I could pass nil as the table view, but I could see my code at some point needing to execute a method on the table view. So better to isolate "number of rows" logic in a method where I can call it even if I don't have a runtime UITableView object.

Sunday, December 28, 2008

Annotation Processors

Java 1.5 introduced the concept of annotations, markup that you can attach to classes, fields, and methods. We didn't use them a lot at my last job, but in my last round of interviews the subject came up, and I spent some time researching them. At my new job, we — or, rather, the frameworks we use — use them heavily.

I've come to really like them. Like any technology, they can become a hammer that makes everything look like a nail, but they offer a powerful layer on top of code. In particular, they can be used to add cross-cutting functionality that shouldn't be shoehorned into every object. To take one example, you could set up an "Audited" annotation that would fire every time a method is called and would write out the invoking information to a log file. The audited method wouldn't know anything about it and wouldn't need to support the auditing infrastructure: The framework around the method would see the annotation and do the work.
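
To make that concrete, here is a minimal sketch of the idea, using a JDK dynamic proxy as the "framework around the method." The @Audited annotation and AccountService interface here are hypothetical:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Audited {}

interface AccountService {
    @Audited
    void closeAccount(String accountId);
}

class AuditingHandler implements InvocationHandler {

    private final Object target;

    AuditingHandler(Object target) {
        this.target = target;
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new AuditingHandler(target));
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        // the audited method knows nothing about this; the wrapper does the work
        if (method.isAnnotationPresent(Audited.class)) {
            // a real framework would write to a log file, not the console
            System.out.println("AUDIT: " + method.getName() + " invoked");
        }
        return method.invoke(target, args);
    }
}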

Because they are treated as code, they serve as more-correct documentation. If you the programmer see an @Audited annotation, you know that the given method gets audited. Contrast that with a normal Javadoc comment, which might reflect the method's behavior when it was first written (not audited) but not the current behavior (audited).

Working with annotations in running code is fairly easy. Java's reflection layer, which lets you inspect and manipulate runtime objects, has simple methods that tell you if a given structural element has a given annotation.
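
For example, sticking with the hypothetical @Audited annotation from the sketch above (it has RUNTIME retention, so it survives into the running program), the check takes two lines:

import java.lang.reflect.Method;

public class AuditCheck {
    public static void main(String[] args) throws Exception {
        Method method =
            AccountService.class.getMethod("closeAccount", String.class);
        // true if the method carries the annotation
        System.out.println(method.isAnnotationPresent(Audited.class));
        // the annotation instance itself, or null if absent
        System.out.println(method.getAnnotation(Audited.class));
    }
}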

But I was intrigued by the concept of annotation processors, compile-time parsers that work off of the annotations in the source. Sun's primary use case for an annotation processor is generating additional files based on an annotation (classes to support XML marshalling and unmarshalling of a given object, for instance). I wondered if you could write an annotation processor that would inspect the source code looking for issues. The earlier you catch problems, the cheaper they are to fix: If I could use annotations to do extra checks on certain code, I could turn certain mistakes into compile-time errors that would never make it into a release.

I had a particular scenario in mind. At my work, we have some JavaScript code that invokes some of our system's Java objects through indirection, JSON, and reflection. Java supports method overloading (two methods with the same name but different arguments) but JavaScript does not. I inadvertently discovered how this could be a problem when I added an overloaded method to a Java object used by our JavaScript, and it broke our website (in development) for a couple of hours as we tracked down the problem. The JavaScript layer was invoking the new method, not the old method. Now I know this and avoid method overloading in the relevant classes. But some new programmer some day won't know and may make the same mistake. Or I might forget. Every time you have a process that people need to remember, you guarantee that one day someone will forget.

Unfortunately, documentation and examples for this system are sparse. So in the interest of helping others who have similar goals, I've attached the source code below. (There is a Visitor pattern implementation as well, but that is even more poorly documented. I need to revisit my code and figure that out at some point.) The code could use some touch-up, but it gets the idea across. Note that the code uses the Java 1.6 APIs, not the markedly different and unsupported Java 1.5 APIs.

My custom annotation is called RequireUniqueMethodNames and is defined as a class-level annotation. If present, my annotation processor (which you fire by adding a -processor argument to javac) will inspect the class to ensure that there are no overloaded methods. It is smart enough — and the annotation processing system is powerful enough — to distinguish overridden methods (allowed) from overloaded methods (not allowed).
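
The invocation looks something like this (the paths and the source file are illustrative; the -proc:only flag runs annotation processing without generating class files):

javac -cp build/classes \
      -processor com.ea.sp.community.annotation.UniqueMethodNamesProcessor \
      -proc:only \
      src/com/example/SomeJsonExposedService.java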

I also allow you to exempt certain method names and to skip private methods. These attributes support the reality of the legacy code, in which there are overloaded methods that aren't exposed to JavaScript and private methods (which nothing outside the class can call anyway) with duplicate names. A refactoring task in my queue will make these attributes irrelevant, but they're there for the moment.

First the annotation declaration:

package com.ea.sp.community.annotation;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Documented
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.CLASS) // the processor runs at compile time, so RUNTIME retention isn't needed
@Inherited
public @interface RequireUniqueMethodNames {

    /** Whether or not to enforce the unique-name check on private methods */
    boolean enforceOnPrivates() default false;

    /** Method names exempted from the check */
    String[] exemptedMethodNames() default {};
}


And now the processor code:

package com.ea.sp.community.annotation;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.AnnotationMirror;
import javax.lang.model.element.AnnotationValue;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.ExecutableElement;
import javax.lang.model.element.Modifier;
import javax.lang.model.element.TypeElement;
import javax.lang.model.type.TypeKind;
import javax.lang.model.util.Elements;
import javax.lang.model.util.Types;
import javax.tools.Diagnostic;

// tell javac which annotation this processor handles
@SupportedAnnotationTypes("com.ea.sp.community.annotation.RequireUniqueMethodNames")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class UniqueMethodNamesProcessor extends AbstractProcessor {

// stateful! reset for each annotated class
private boolean enforcePrivates = false;

// stateful! reset for each annotated class
private List<String> exemptedMethodNames;

@Override
public boolean process(Set<? extends TypeElement> annotations,
RoundEnvironment curEnv) {

// each element in the annotations set is a single Annotation
// (and since we only support one, it's always RequireUniqueMethodNames)
for (TypeElement te : annotations) {

// just in case we're passed a stray
if (!te.getQualifiedName().toString().equals("com.ea.sp.community.annotation.RequireUniqueMethodNames")) {
notice("Sent a stray annotation: " + te.getQualifiedName().toString());
continue;
}

for (Element element : curEnv.getElementsAnnotatedWith(te)) {
enforcePrivates = false; // reset each time
exemptedMethodNames = new ArrayList<String>();

// double-check to ensure that each is a class
if (!(element instanceof TypeElement)) {
error("Sent the wrong type of element: " +
element.getKind().name());
continue;
}

TypeElement clazz = (TypeElement)element;

// figure out the annotation instance used on this class
// note that class might have many annotations, but we know ours is
// one of them
// use getAllAnnotationMirrors() because the annotation is @Inherited:
// subclasses of an annotated class carry it too, and we want to catch them
for (AnnotationMirror am : elemUtils().getAllAnnotationMirrors(clazz)) {
if (am.getAnnotationType().asElement().getSimpleName()
.toString().equals("RequireUniqueMethodNames")) {

// todo-dfs: is there a better way to construct the
// ExecutableElement to use for direct lookup?
Map<? extends ExecutableElement,? extends AnnotationValue>
methodMap = elemUtils().getElementValuesWithDefaults(am);
for (ExecutableElement curMethod : methodMap.keySet()) {
AnnotationValue value = methodMap.get(curMethod);
if (curMethod.getSimpleName().toString().
equals("enforceOnPrivates")) {
enforcePrivates =
((Boolean)value.getValue()).booleanValue();
}

if (curMethod.getSimpleName().toString().equals("exemptedMethodNames")) {

List<? extends AnnotationValue> exemptValues =
(List<? extends AnnotationValue>)value.getValue();
for (AnnotationValue arrayValue : exemptValues) {
exemptedMethodNames.add(arrayValue.toString());

}
}

}

}
}

List<TypeElement> classChain = calculateClassHierarchy(clazz);

// collect the methods declared by every superclass in the chain;
// we check uniqueness within one class's inheritance chain,
// not across the entire source base
List<ExecutableElement> methods =
calculateAllSuperMethods(classChain.subList(0,(classChain.size() - 1)));

screenDuplicateMethods(methods,clazz);

}
}
return false;
}

/** Collate all the methods for every incoming TypeElement. This does not do
* a duplicate check because non-annotated classes are allowed to have
* duplicate names.
*/
private List<ExecutableElement>
calculateAllSuperMethods(List<TypeElement> superclasses) {

List<ExecutableElement> retVal = new ArrayList<ExecutableElement>();
for (TypeElement curClass : superclasses) {

for (Element classMember : curClass.getEnclosedElements()) {
if (!classMember.getKind().equals(ElementKind.METHOD)) {
continue;
}
retVal.add((ExecutableElement)classMember);
}
}
return retVal;
}


/** Looks for duplicate methods in the aggregate list of methods that
* has been accrued as we walk the superclass chain.
* @param methods the list of ExecutableElements from all the superclasses
* @param curClass the current TypeElement
*/
private void screenDuplicateMethods(List<ExecutableElement> methods,
TypeElement curClass) {

for (Element classMember : curClass.getEnclosedElements()) {


if (!classMember.getKind().equals(ElementKind.METHOD)) {
continue;
}

ExecutableElement method = (ExecutableElement)classMember;
if (method.getModifiers().contains(Modifier.PRIVATE) &&
!this.enforcePrivates) {
continue;
}

// oddly, AnnotationValue.toString() returns string values wrapped in
// quotes, so compare against the quoted form
if (this.exemptedMethodNames.
contains("\"" +method.getSimpleName().toString()+ "\"")) {
continue;
}

if (!methodNameExists(method,methods)) {
methods.add(method);
continue;
}

// method exists, but is it an override? overrides are okay
if (methodIsOverride(method,methods)) {
continue;
}

error(method.getSimpleName().toString() + " is a duplicate (not an override) of a method elsewhere in class " + method.getEnclosingElement().getSimpleName().toString() + " or in a superclass.");
}
}

/** Recursively constructs a List of TypeElements that form the superclass
* chain for the given class.
* @param startClass the starting class
* @return List of superclasses, with Object at 0 and the passed-in class
* at the end.
*/
private List<TypeElement> calculateClassHierarchy(TypeElement startClass) {
TypeElement superclass =
(TypeElement)typeUtils().asElement(startClass.getSuperclass());
if (superclass == null ||
superclass.equals(typeUtils().getNoType(TypeKind.NONE)) ) {
// we're at Object, so make a list and unrecurse
List<TypeElement> retVal = new ArrayList<TypeElement>();
retVal.add(startClass);
return retVal;
} else {
List<TypeElement> retVal = calculateClassHierarchy(superclass);
retVal.add(startClass);
return retVal;
}
}

/** Does the given ExecutableElement have the same name as something in list
*/
private boolean methodNameExists(ExecutableElement method,
List<ExecutableElement> methods) {
for (ExecutableElement existingMethod : methods) {
if (method.getSimpleName().equals(existingMethod.getSimpleName())) {
return true;
}
}
return false;
}

/** Determines if the passed in method is an override of a method in the list
* Overrides are generally okay even if they have the same name.
*
*/
private boolean methodIsOverride(ExecutableElement method,
List<ExecutableElement> methods) {
for (ExecutableElement superMethod : methods) {
if (elemUtils().overrides(method,superMethod,
(TypeElement)method.getEnclosingElement()) ) {
return true;
}
}

return false;
}

private void notice(String message) {
print(Diagnostic.Kind.NOTE,message);
}

private void error(String message) {
print(Diagnostic.Kind.ERROR,message);
}

private void print(Diagnostic.Kind kind,String message) {
processingEnv.getMessager().printMessage(kind,message);
}

private Types typeUtils() {
return processingEnv.getTypeUtils();
}

private Elements elemUtils() {
return processingEnv.getElementUtils();
}
}

More Unit Testing Revelations

Since I started making unit tests an active part of my development tasks at home and at work, I've noticed a benefit beyond the promise of stability and support for refactoring.

Unit testing has made me a better programmer. More precisely, it has made me practice what I preach about good programming practice.

One well-known best practice in the OO world — and probably in the procedural programming world, too — is to keep chunks of code small, focused, and free of side effects. This is partly for code maintenance purposes: A large method is cumbersome for a new programmer to understand. But this kind of modularity also allows other programmers (or yourself) to more easily subclass a class and add new behavior to suit new needs.

But in the heat of coding a new feature, I don't always take the time to think about the best way to break it apart. I figure I'll fix it in a refactoring pass. Fifteen years as a professional programmer, and you'd think I'd know that that chance will never come. Bugs need fixing, or the powers-that-be move me on to a new feature.

With unit tests as part of my task, I'm forced to think about testable chunks, and those are usually exactly the level of size and functionality that makes sense from an object-oriented perspective. I find myself thinking about the tests I will write as I code the class, and that in turn forces me to realize that a given method is too big, has too many side effects, or is otherwise not testable. This is especially true because I have made the decision that our team shouldn't write unit tests for retrieving objects from a database, since that is all handled by the well-vetted Hibernate library. When I write a method that retrieves objects from the database and does calculations with them, the "you'll have to test this" part of my mind realizes that I need to move the calculations into their own method (or methods) so that I can test them even without a database connection. In other words, I move specific functionality into its own method, exactly where it belongs.
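
The shape of that refactoring, in a hypothetical sketch (the Order entity and the query are invented): the retrieval method stays a thin wrapper around Hibernate, and the calculation lives in a method that a test can feed hand-built objects:

import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;

import org.hibernate.Session;

public class OrderReportService {

    private final Session session;

    public OrderReportService(Session session) {
        this.session = session;
    }

    // thin retrieval wrapper: not unit tested; Hibernate is well vetted
    @SuppressWarnings("unchecked")
    public BigDecimal averageOrderTotal(Long customerId) {
        List<Order> orders = session
                .createQuery("from Order o where o.customer.id = :id")
                .setParameter("id", customerId)
                .list();
        return computeAverageTotal(orders);
    }

    // pure calculation: unit testable with hand-built Order objects,
    // no database connection required
    BigDecimal computeAverageTotal(List<Order> orders) {
        if (orders.isEmpty()) {
            return BigDecimal.ZERO;
        }
        BigDecimal sum = BigDecimal.ZERO;
        for (Order order : orders) {
            sum = sum.add(order.getTotal());
        }
        return sum.divide(BigDecimal.valueOf(orders.size()),
                2, RoundingMode.HALF_UP);
    }
}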

Over and over again, every time I code with unit tests in mind, I see myself doing more on-the-fly refactoring as I write a feature. I don't wait for the magical refactoring time that never comes: I do it as I go.

Saturday, September 27, 2008

Testing Testing

Since I claim to advocate Agile development practices, I recently decided to give unit testing, the writing of simple test suites that exercise an application's basic functionality, a solid try.

I've been in various organizations that have tried to do it, but the effort always founders and fails. Everyone knows unit tests are a good idea; no one ever gets any in place. When the subject comes up in job interviews, other companies seem to have had the same problem. New features always creep ahead in priority, and once the code base grows too big, writing unit tests becomes too daunting a task. Plus, the new feature is working, more or less, so the motivation to write tests for it fades.

As I started my Mac/iPhone programming, however, I realized I had the perfect environment for unit testing: no deadline and no legacy code. So I wrote new code and the test cases to exercise it at the same time. (I didn't do test-driven development, where you write the tests first and then write the code to make them work.)

I haven't since become a unit testing zealot — at least I don't think so — but I have a renewed enthusiasm. In a normal development scenario, I would have written all my support classes and then a UI so that I could test them. If there was a bug, I'd have to step through all the layers I had written to find it. With unit testing, I found several bugs long before I had a UI. When I did attach a UI, the fact that it worked smoothly out of the gate was anticlimactic: I knew the underlying structure was solid, after all.

Recently, I thought of some code reorganization I could do on the project as well as some significant performance improvements. I spent an hour or so doing the work, shuffling this here and that there. I set up my unit tests to run, and they passed: I could assert that my changes hadn't broken anything major. That's a really good feeling; there was a bit of a rush when I saw "Build Succeeded" flicker by in the log.

Even with my short experiment, I realized the two major benefits of unit testing: catching bugs earlier ("The earlier you catch bugs, the cheaper they are to fix," runs one programmer mantra) and making sure that major code changes don't affect the core functionality.

But how to bring that to work? At Maxis, we have a lot of pressure to deliver features, an always-busy team, and a big ball of legacy code even for the website. Just like every other tech company. One of my co-workers and I have talked about it, and the only answer seems to be: just start writing them, and keep them running. The first few will be tough, because you need a support structure for them (especially in the form of mock objects, layers that conform to the contract of the real layer, but deliver static, simplistic data), but once they get going, hopefully it becomes easier to add new ones. I have a squirrely bit of logic in one piece of code: That's my first candidate.
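
As a hypothetical sketch of that mock layer: if the code under test talks to an interface, the mock just conforms to the contract and returns canned answers:

// the contract that the real, database-backed layer implements
public interface PlayerStatsService {
    int achievementCount(String playerId);
}

// conforms to the contract but delivers static, simplistic data
class MockPlayerStatsService implements PlayerStatsService {
    public int achievementCount(String playerId) {
        return "testPlayer".equals(playerId) ? 12 : 0;
    }
}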

The problem is selling the concept to the business side. I think in a lot of companies, the extra time spent writing unit tests is seen as time that could be spent writing a new feature. In a way that's true, but I now see that unit tests actually are a feature: stability. No one wants an unstable system, but the investment in stability doesn't obviously pay off for a long time. In fact, if you've done it right, the investment never seems to pay off because the system doesn't have as many problems. You can't measure how many problems there would have been if the investment hadn't been made. But you can measure the number of users using a new feature. So you get instant feedback that it was a good addition, and you provide a metric. How does unit testing compete against that?

Friday, September 26, 2008

A Server Programmer On The Client

I have been teaching myself Objective-C over the last few months. I'm starting with some Mac programs, but let's be honest: The end goal is iPhone applications. I have some already in mind, and I'm actively working on one right now.



I have been a server-side programmer for the last 10 years. And certainly my current senior-programmerness has been forged in the universe of databases, web servers, and application servers. Not even browsers, though I know enough JavaScript and HTML to get by.



A couple months into my client side hobby, I realize how much my experience influences my thinking. A part of me keeps worrying about scalability and threading issues as I write my first application, and I have to remind myself to not overdesign. These are big issues on web servers: You are almost guaranteed to have multiple threads running through every piece of code in your site, especially on my current project, spore.com, where we average 40,000 people logged in at any given moment using 200 active threads on each of our 20 application servers. HTTP requests, complete with database queries and code-level processing, need to move through the system as fast as possible, and you need to be aware of the pieces that modify state outside of that request so that you can prevent multiple threads from changing the same piece of data at the same time. But for my client app, only one person runs it at a time. And Mac OS X keeps threading issues to a minimum. (Though they can exist.)



On the other hand, there are facets of client-side programming that I've all but forgotten. Memory management? Who still worries about that? It's possible to have memory leaks in Java, but the native garbage collection takes care of all but the most esoteric memory management tasks. (Of course, it can also introduce its own performance problems, as anyone who's watched their web app stop dead for a fraction of a second every few seconds while the garbage collector kicks in can attest.) Sure, Objective-C 2.0 has garbage collection, but using it limits your application to Mac OS X 10.5. The primary book I'm using, Cocoa Programming For Mac OS X, still follows the reference counting pattern, a fragile system if ever there was one. I haven't yet looked at the autorelease pool pattern that other tutorials use. (On the other hand, being able to discard the memory for objects whenever you want is a nice side effect. You never know when Java's garbage collection scythe will come a-calling, which renders the language's finalizers useless for cleanup.)



And linking befuddles me after 13 years with Java (3 on the client side). In Java, you just put the class file where the virtual machine can find it, and it's available to your code. Application servers introduce a bit more complexity, but not much. For my application, in order to integrate regular expressions (an aside: Really? They're not built in?), I have to pass an argument to the package linker and deal with cryptic error messages when it doesn't work.



But overall I'm enjoying this diversion into fat client programming. I think in the end it's making me a more well-rounded developer. As someone whose resume loudly proclaims "All-Purpose Programmer," this is probably a good thing.

Saturday, July 05, 2008

Puzzle Quest

I shouldn’t like Puzzle Quest: Challenge of the Warlords.



For one thing, it’s a role-playing game, and I tend not to like computerized versions of the genre. I find them dull: full of tedious dialogue, boring storylines, random battles, and the inevitable “leveling up” stretches that require you to fight a lot of small battles just to gain enough hit points to take on the next Big Bad.



For another, Puzzle Quest’s mechanic is novel for RPGs but trite in the larger universe: You fight each battle by playing Bejeweled, the ubiquitous puzzle game from PopCap, against your opponent. It’s a ridiculous conceit if you think about it, though no more so than other video game mechanics, I guess. As you clear colored gems off the board, you build up enough mana, or ingredients, to cast offensive and defensive spells. Clear a set of skulls, and you do direct damage to your opponent. Clear gold coins and you earn money. Clear a set of four and you get an extra turn. Rinse and repeat ad infinitum.



So why am I so addicted to an RPG that uses one of the most overexposed casual games as its sole means of battle? An equally addicted friend and I have chatted about this, and we can’t figure it out. I’ve done every little side quest. I’ve collected every possible companion. I’ve trained up my mount. I’ve forged items from runes I have found throughout the kingdom. I have, in short, opted to do all the things that I dislike about RPGs. Melissa has even quipped that she should buy me a d20. (As an aside, this game cost me $15 on Xbox Live, and I’ve poured hours into it, giving it the best gameplay-to-price ratio in my collection.)



I think one key factor is that it transforms the solo game of Bejeweled into a two-person strategy game. You can’t clear any row you want: You have to consider how your opponent will be able to use the board. And the different strengths and weaknesses of the various monsters create different strategies. I favored green and red jewels in any fight against trolls, because the combination would let me drain the troll’s blue mana, which he can and does use constantly to regenerate health. I favored green gems in battles against elves just to keep those gems out of their hands, since elves need very few to cast some nasty spells.



There are other facets of the game that make it appealing — the scant dialogue you find is often humorous, and the music is at least good enough not to get tiring — but the heart of the game is the “strategic Bejeweled” component.



There was one thing about the game I didn’t like. By the time I got to the final boss, I had hit the top level for my character, I had forged some impressive items, and I had done virtually every side quest. But with him I was massively outclassed. Within minutes, he had obliterated me, and I had barely made a dent in his health. I expected an epic match, but this felt way out of balance with respect to the rest of the game. It felt disheartening, not challenging. On the other hand, I did beat him after a dozen or so attempts, so maybe it wasn’t as unbalanced as I thought. But I wonder if the casual player who doesn’t go to the lengths I did will find the final boss truly impossible. Perhaps the game adjusts his levels based on the player.



Maybe there’s a little RPG fan in me after all, just waiting for a game like Puzzle Quest to draw it out. I doubt I’ll be digging into Final Fantasy any time soon, but I am now one of the countless hordes waiting for Puzzle Quest’s forthcoming Plague Lord expansion.