Tornado Outbreak Wiki

Note: The following was taken from an old defunct website containing Scott Bilas's blog. Sections may be incomplete due to not being saved via the Internet Archive.

November 18, 2008[]

I’ve been trying to get the time to do a series of posts and I think I’m finally ready to get started. The high-level theme is “Loose Cannon’s workflow”. Like any mid-size team working on a multi-year title, we’ve built up a large set of processes and tools to support development. Some are very carefully planned and based on past experience, and that’s where I want to start first: talking about mature process and tools that I know work well. Well, at least for the types and sizes of teams I tend to work on.

Note that I’m only going to talk about the areas that I’ve either designed or been directly involved in designing. We have a large infrastructure for building and managing assets that I think works pretty well. Maya, exporting, plugins, that sort of thing. But I’m not too familiar with all that. At least not yet – looks like my task while in Peru will very much involve our content pipeline.

So anyway, within this theme, the topic for the next set of articles is how we use Perforce at LCS. Which means this series will be fly-over territory for most people Google sends here. I’ve only met a few people in my life that have actually been interested in things like depot design and the workflow around it. Well, I really love this stuff! Helping the team become more efficient and accurate is something that I enjoy more than any other kind of work. Whether it’s creating API’s or tools, or setting up bugbases, or building an ecosystem for code reviewing, I’m happy doing it all. So I can’t help but post about this.

This is the third company where I’ve set up a Perforce depot and been responsible for its design, maintenance and tool infrastructure. With input in particular from Matt Scott I think we’ve got the best source control setup I’ve ever worked with. We’ve solved a lot of nagging problems that have plagued me at previous gigs. There are still some issues, but overall things are very smooth. I plan to cover things like the design of the depot, standards and requirements, supporting tools, and server maintenance. Perforce, out of the box, is far from a complete system. I’m pretty annoyed with those people, actually. All that money we give them every year and they’re mostly standing still, too busy rolling around naked in cash to fix basic and ancient problems with their design. But that will have to wait for a future post.

November 26, 2008[]

It would be an obvious understatement to say that modern game development has a lot of data.

Our hard drives fill up with all kinds of bits: tools, source code, code reviews, source and object assets, bug reports, concept art, docs, email, game builds and debug data, CD images, gameplay analysis data, SDK’s, error logs, and on and on. New types every day. Some you have initially, and some you add over the course of development. It all depends on the type and size of game.

In this next series of posts I am going to focus primarily on code and assets, and the tools and processes related to producing and managing them. That’s still a lot of data! During development, new classifications of data come up all the time, and we need to be able to answer the question of “ok great, now where does that stuff go?”

For example, let’s say that the powers-that-be decide that we’re going to add an hour of video cut-scenes to the game. This means a lot of new source assets need to be created, and then some truly huge output video files (made much worse if multiple platforms are involved).

Should everything go into revision control? Or get stored on a server file share? What about just keeping it on the video editing machine? The decision could have serious implications to team productivity, server space management, complaints from the IT department, automated builds and testing, and so on. So I want to provide some simple rules to help decide where data can and should go.

I originally was going to talk just about Perforce, but in starting to write this article I realized I needed to back up a bit and talk about how we decide what even goes into revision control in the first place. In later posts I’ll talk about the particular environment we have at Loose Cannon: the specific types of data that flow around our network and how we organize them.

December 9, 2008[]

In this upcoming series of posts, I’m going to catalog most of the types of data stores we use at Loose Cannon, along with features that would make you want to choose one versus another.

Sometimes the lines are blurred a bit, because data stores continue to add interesting features each year. For example, modern wikis are starting to get some decent revision control features for binary data. And some revision control systems such as Subversion have the ability to instrument the file system with metadata, like a database.

Sometimes tools can bridge the gap among these data stores as well. For example, a file system has no ability to notify via email of changes, but you can pretty easily write a file scanner that watches for changes and emails team members based on a spec. Overall, though, while tools like these are useful and can enhance the underlying data store, they can’t really change its basic nature. So I’ll be reviewing each with that in mind.
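
To give a concrete idea of what I mean by a file scanner, here’s a minimal sketch of one in Python. To be clear, this isn’t a tool we actually run; the share path, mail server, and recipient list are all made up for illustration.

  # watch_share.py - sketch of a scanner that emails the team when files change.
  # WATCH_ROOT, SMTP_HOST, and RECIPIENTS are illustrative placeholders.
  import os
  import smtplib
  import time
  from email.message import EmailMessage

  WATCH_ROOT = r"\\server\concept_art"     # hypothetical share to watch
  RECIPIENTS = ["art-team@example.com"]    # hypothetical notification spec
  SMTP_HOST = "mail.example.com"

  def snapshot(root):
      """Map every file under root to its last-modified time."""
      files = {}
      for dirpath, _, names in os.walk(root):
          for name in names:
              path = os.path.join(dirpath, name)
              try:
                  files[path] = os.path.getmtime(path)
              except OSError:
                  pass  # file vanished mid-scan; skip it
      return files

  def notify(changed):
      msg = EmailMessage()
      msg["Subject"] = "%d file(s) changed on the share" % len(changed)
      msg["From"] = "file-watcher@example.com"
      msg["To"] = ", ".join(RECIPIENTS)
      msg.set_content("\n".join(sorted(changed)))
      with smtplib.SMTP(SMTP_HOST) as server:
          server.send_message(msg)

  previous = snapshot(WATCH_ROOT)
  while True:
      time.sleep(300)  # poll every five minutes
      current = snapshot(WATCH_ROOT)
      changed = {p for p, t in current.items() if previous.get(p) != t}
      changed |= set(previous) - set(current)  # deletions count as changes too
      if changed:
          notify(changed)
      previous = current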

This overview of data stores is probably going to be very basic for most, and you could even question why I’m bothering with such obvious stuff. Well, that’s what Ctrl-Yawn-W is for. But this is how I think, and so I can’t help it. I always start with the basic foundation and work my way up to the top. It’s slow, but regularly revisiting old assumptions is critical. As I write these upcoming articles I’ll probably end up reconsidering some of the things I do now and will make notes for changes to the next version of the tool chain for late 2009.

Anyway, back to the post. What we’re trying to do is answer the question “where does this go?” We’ll start with the first, most obvious option.

The Local Workstation Hard Drive[]

This is the fastest and easiest to use data store. Everyone stores at least some of their data on their local workstation’s hard drive.

For files that need to be “shared” with other people, you can copy or email files around to other people you work with. There are services that make it easier to do this for distributed groups (Groove comes to mind), but at that point we’re outside the scope of local storage and into the realm of services. Let’s stick with local.

What Gets Stored Locally?[]

I see local storage used a lot with spreadsheets, concept art, in-progress design docs, audio samples, task lists, quickie test projects and scripts, and so on. Private and secure data is often stored locally as well: budgets with salary information, employee reviews, private emails, and so on. I myself keep loads of private notes, tasks, and research progress in OneNote and, occasionally, email (in recent years I’ve found email to be increasingly painful and hardly use it much for work).

Ownership and management of local data is clear and simple: you own the files and you organize them however you want! Nobody messes with them unless you explicitly share a location out and, even then, you control the permissions. You can send updates to whomever you want when you want. Operating system shells are pretty good at making this as simple and powerful as possible. You can add metadata and tags, create virtual search folders, organize using apps like Picasa and iTunes, and so on. It’s great!

Those same nice shells often store irritating hidden files like Thumbs.db, Picasa.ini, .DS_Store, and so on – files that are not meant to be shared and often clutter up shared data stores. We have a special trigger in Perforce to prevent people checking in these files by accident (there’s also .user, .bak, .~, .svn\* and a hundred other siblings in this temporary family).
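
For reference, a trigger like that boils down to a tiny script plus one line in the Perforce triggers table. This is just a sketch of the general shape, not our actual trigger; the depot path, script location, and blocked-extension list are placeholders.

  # junkcheck.py - sketch of a change-submit trigger that rejects junk files.
  # Wired into the triggers table with something like (paths are illustrative):
  #   junkcheck change-submit //depot/... "python /p4/triggers/junkcheck.py %changelist%"
  import re
  import subprocess
  import sys

  # Extensions we never want in the depot (placeholder list).
  BLOCKED = re.compile(r"\.(db|ini|ds_store|user|bak|tmp)$", re.IGNORECASE)

  def main(changelist):
      # 'p4 describe -s' on the in-flight changelist lists the files being submitted.
      out = subprocess.run(["p4", "describe", "-s", changelist],
                           capture_output=True, text=True, check=True).stdout
      offenders = []
      for line in out.splitlines():
          if not line.startswith("... //"):
              continue                          # only the "affected files" lines
          depot_path = line[4:].split("#")[0]   # strip the "... " prefix and "#rev action"
          if BLOCKED.search(depot_path):
              offenders.append(depot_path)
      if offenders:
          print("Submit rejected, junk files in changelist:")
          for path in offenders:
              print("  " + path)
          return 1  # non-zero exit makes the server refuse the submit
      return 0

  if __name__ == "__main__":
      sys.exit(main(sys.argv[1]))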

Local Storage Can Hurt[]

All of this lovely freedom can make things easy for you and a freaky mess for everyone else on the team.

Say someone leaves the company. Now you have a computer filled with files and nobody knows where anything is. Their desktop is invariably going to be chaotic, with files like “sword_large_02.psd”, “sword-marketing2008.psd”, “Copy of” files and “New Folder (2)” folders and so on. And more files stored in C:\ and My Documents and other seemingly random locations, maybe even burned to piled-up CD’s. I’ve known too many people that work like this. They run their hard drives down to 0 bytes free and then just delete random stuff to free up space. It’s Concentrated Crazy! Yet they still manage to kick out final PNG’s to the right spot in the Perforce depot, so nobody notices the mess until it’s too late.

At Sierra we had a policy of burning to CD the full hard drive of anyone who left the team. With the turnover we had on Gabriel Knight 3, of course, that gave us a mighty big stack of CD’s. Once or twice some poor intern had to go through it to (fail to) find this or that odd piece of marketing art we needed. I think all the CD’s ended up in an offsite vault eventually. All that effort archiving, storing, and searching that mess was such a ridiculous waste of resources.

Beyond the hopefully rare “developer leaves team” story, local files are simply the opposite of good communication. They’re by definition private. Even if you share a folder out, it’s still your machine. If you want to send an update of a file to someone you have to send an email or copy it to a share and IM them about it. And that’s private too, so you have to know who might need to know about these updates. This also assumes you even remember to do the update, and send the right file, and all that. I’ve lost track of the number of times I’ve gotten emails from remote workers, who attached the wrong version of the file. Oops! Sorry about that dude, let me re-send!

It’s a lot of seemingly optional responsibility to attach to a person who is probably too busy to keep it organized. At least, organized as much as the rest of the team might need.

Don’t Mess With the Crazy[]

I used to strongly believe that if someone leaves the team you should be able to flatten their machine and give it to someone else without worrying about losing important data. Nobody should have to decode the Crazy. So of course I was often frustrated by people on my team who insisted on working this way.

Well, I still believe this, I’ve just given into reality. Apps are written with the assumption of either working locally, or working through some kind of custom sharing service (like Office + SharePoint and even that isn’t done very well). It’s an uphill battle to try to enforce a structure and process on what amounts to an inherently personalized and optimized development experience. So many things are just easier locally. Like working with folders! You can rename them, or move them wherever you like, instantly. In contrast, doing the same through source control is a tedious pain.

There’s nothing necessarily wrong with using local storage for ad-hoc, brainstorm-y, temporary, or private work. The trick is knowing when to move things off the local drive to another, more team-friendly data store. One where people can subscribe to changes made, get their own copy, get a history, and so on.

So how to know where it goes and when to start maintaining it there instead? In writing this I’m trying to figure out some basic rules for what can stay local and what should get promoted. But it depends so much on the team, discipline, type of data, and so on. Some things are easy and absolute: all source required to build the game and its content go into source control. Some things are more gray: where does that intermediate concept art go? Where do those little prototype gameplay modules go?

For the easy ones, look to the upcoming articles. For the more gray issues, I’m going to fall back on the “make sure it’s backed up” solution. Hope for the best, and if things go bad, go to backup for recovery. But we want to avoid the expensive problem of backing everything up and having a big disorganized mess of Crazy, right?

The solution to that I think is in a detour article I’m posting next on backup options. One great way to have your team members separate out the signal from the noise on their hard drive is through a partial backup solution. More on this next!

January 6, 2009[]

I previously posted about using the local workstation as a data store. I concluded that local storage for some important data is an inevitability, and that’s ok as long as it’s backed up.

And I have yet to work at a studio where more than a few workstations had a formal local backup process in place. I believe this is mostly due to how rarely we lose work to catastrophic failures. There’s usually plenty of advance warning from a hard drive before it barfs, giving you time to move over to a new drive. If it ain’t broke often, no reason to fix it in advance. Even in the case of a total sudden failure, it only takes a day or two of re-imaging the OS and reinstalling apps to get back to being productive.

I tend to agree. I don’t think the occasional hard drive failure is worth a general studio-wide full workstation backup policy. It’s just too much work, storage, and time, and is going to cost far more than it will save. Instead, let’s have a more targeted approach. For most people in the studio, I prefer one or more of the options I’m going to enumerate below.

Local-Based Backups[]

This is the method I use. Even though I just said it’s not a huge deal to have a total failure, some people (like me) are ultra paranoid about losing even the smallest amount of work or time. And for that, nothing beats a local full-drive USB backup.

How?[]

Big fat USB drives are cheap. You can trade size and price for the performance you don’t need. I’d also get one around twice the size of the main workstation drive array if possible. That way two or three backup rotations plus incrementals can be kept going at once (assuming backup compression).
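
Just to make the math concrete (with made-up numbers): say the workstation array holds 500 GB and the backup software compresses roughly 2:1, so each full image is around 250 GB. A 1 TB external drive then holds three full rotations with about 250 GB left over for incrementals.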

For software I’d get something that uses the shadow volume service to do copies so it can pick up the in-use files. I particularly like Ghost even though I loathe most things Symantec makes. It’s important to set up frequent (two to four times a day) incremental backups. Set them to low priority to avoid bothering you while you work. And set up automatic replacement of old backups when the drive is full. The system has to be fully automated and easy to use so you can forget about it until you need it when something goes wrong. Ghost even has a nice feature to let you use Google Desktop to search your various backups, a little reminiscent of Apple’s Time Machine.

I’d pick this over a mirrored RAID setup any day because you get incremental backups, and you get an easy way to restore data if your workstation dies and you need to move to a new machine quickly.

But Why?[]

Why am I so paranoid about this? A couple reasons. The less important one actually is the total loss issue that I mentioned earlier. The main reason I have frequent incremental full machine backups going is that I frequently make incremental boneheaded mistakes in my work.

The most common scenario is where I’m working on some small task, and in the middle it explodes into a giant task. Before I know it I’m four days in with 50 files checked out and crap I just accidentally reverted a bunch because I thought I was working on my other client for that quick bug fix someone needed. Or a day later I confidently go down some new mental path and redo a bunch of work without realizing that this is the wrong way, and now I need to undo back to the previous day. This is especially useful when coding a little drunk after lunch. But if the incremental is only half a day old at worst, you’re not in too much trouble.

This problem is so common for me that I’ve started building some tools to fix it on top of the incremental backup. Right now I’m eyeing git for ideas because it has features like stash and bisect. I’m also considering adding on a continuous backup system (like Norton GoBack or perhaps I’ll mess with Previous Versions a bit more). Like I said, I’m paranoid.

Incidentally, this would be a lot less of a problem for me if P4 had a local repository concept like git and mercurial and other more modern source control systems have. Then I could check in as often as I wanted for backups and revision history as I worked, and get all my favorite source control tools to help develop.

The P4 way to fake local repositories is to do private branches, but on a large project it’s surprisingly time-consuming to manage. Perforce does not provide any tools to help out with this so I’ll have to roll my own here as well. I’ll probably write about this in a future post. It’s getting increasingly frustrating working with P4. Especially considering its obnoxious price and how much time I’ve put into building my own tools for it.
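
For anyone who hasn’t done it, a private branch in Perforce boils down to a handful of commands; the depot paths here are made up, and a real setup also needs branch specs or mapping conventions to keep it manageable:

  # create the private branch off of main and submit it
  p4 integrate //depot/main/... //depot/private/scott/anim-rewrite/...
  p4 submit -d "Open private branch for animation work"

  # ...check in to //depot/private/scott/anim-rewrite/... as often as you like...

  # when the work is ready, merge it back and resolve
  p4 integrate //depot/private/scott/anim-rewrite/... //depot/main/...
  p4 resolve -am
  p4 submit -d "Merge animation work back to main"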

Avoid the Network[]

For full-drive backups, I’d stick with a nice simple local USB drive instead of a fancy server-based backup over the network. Doing it from a master server means you have to deal with the IT department every time you want a file back, and the network will get saturated at unpredictable times. This negates a lot of the benefit of using this system in the first place.

Now, the IT department should at least remotely monitor all of the workstations using this method to make sure that backups are working properly. Someone could easily have a USB hub failure or accidentally kick the connector out and not necessarily notice that backups were failing. People can’t be trusted to pay attention to balloon popups in the tray, they’ll just ignore them forever!

But like I said, this method is for the truly paranoid. Not many of those on the team, and so most can use the next method: server-based backups.

Server-Based Backups[]

Ok so you’ve got a tape backup and you’re not afraid to use it. You want to roll some local data into the server based backup. But you don’t want to overwhelm the network and your tape storage with enormous full drive backups. Here are some other options. They aren’t mutually exclusive, either.

User Folders on a Server Share[]

Give each user a public and/or private folder on the server (where the private folder has permissions only for them), and map it to a local drive or folder using a domain login script, perhaps P: for public and X: for private. Tell people to keep, copy, or sync files there that they want backed up. Simple, and easy to manage.

There are a couple big problems with this system. First is that it puts the burden on the user. They have to manage their files that are going to be backed up or not. So people may forget or not want to bother with this. Or once they start they may not want to keep it up and then you have outdated files sitting in the store (though this can have its own slight advantages). The other problem is that people tend to get lazy and copy large folder trees over (temporary .obj’s and so on included) and not worry about size or duplication much.

Ultimately certain types of users will use the system effectively for local workstation backup and others will not. It’s partly a matter of education but also a reason why (ideally) we want to provide multiple options. But overall this is the main (and usually only) method of local data backup I’ve seen in use at studios I’ve worked at.

Remapped User Folders to the Server[]

This is a variation on the public/private folder idea. Windows lets you remap local folders (such as Documents, Images, Contacts, etc.) to a remote server share. You can just right-click on the folder, go to Properties, and change the folder’s Location to point wherever you want, like a private share on the server, set up per-user.

This is the method we use at home, actually. Ally mapped her Documents folder to our ReadyNAS in the closet. She doesn’t have to do anything special, and her stuff is mirrored, snapshotted every two hours, and backed up offsite. Works great. You can also manually set up NTFS junction points if the OS shell doesn’t permit remapping a particular folder (such as the desktop).

The down side of course is that all these files are mapped directly over the network. On our home network, which is very fast over the wire and only has two people using it, it’s not noticeable. But at a game studio with typical minimal IT investment and lots of people simultaneously working with large files, this could be a big problem. Another possible problem is that if the server goes down, people can’t work with these files locally because copies are not kept locally.

Offline files are a potential option here that I haven’t explored much. I haven’t heard good things about the sync performance, but it’s only anecdotal. Also, it sounds like you need Vista to make it work well, and our industry is really dragging its feet on moving to Vista from XP (sounds like we’re all just going to skip it and jump straight to Windows 7). Other options like rsync are available but we might as well use one of the other options below instead.

Server Pull of Specific Folders[]

This option is where the backup server periodically reaches into each workstation and backs up files from a standard set of folders. Perhaps the Desktop and Documents folders, or maybe a special “Backed Up” folder kept on the desktop or in the root of C:\ on each machine (or all of the above). With this option people continue to work with local files so performance for them is high.

As with the Remapped User Folders option, educate the team about which folders on their machine get backed up. Then they will know that anything they copy to the desktop or save to their documents folder or whatever (which is the default on so many programs) will be safe.

Now, the big downside with this option is that the server has to back up a lot of files through a relatively slow network. It’s more targeted than a full drive backup but still is going to be quite a lot of data. With all of the other options, where the primary data is kept on the server, the overall network cost is minimal because users are accessing files on demand. The server can back up things locally through ultra high speed links in the server room. But with server-pull from workstations, the server must access every file that is a candidate for backup over the studio’s more ordinary network, which takes a lot longer.

Also, it must deal with individual workstation problems. Backups are typically a serial process, so if one workstation is having issues (perhaps it’s running super slow due to an overnight render or batch process it’s doing) then the whole system is bottlenecked.

Exclusions[]

With all of the above methods, we will have a big problem with users dumping things in their Documents folder that they really don’t need backing up such as movie trailers, music, game demos and so on. You can nag people but it’s easier to tell the server to exclude large junk file types like exe, pdb, mov, avi, mp3, m4a, iso, and so on from any of the local workstation data it backs up.

Roaming profiles[]

A quick note on roaming profiles. Don’t use them.

Roaming profiles keep the user’s profile on the server. This includes their entire Users\Username folder minus local-only settings and temp and cache files. The profile is synced with the server on user login and logout. We used these at Loose Cannon before I pushed hard to get them killed.

Roaming profiles are interesting in theory, but in practice, they are a terrible, awful idea. I admit, it’s tempting to have a system that shares all your settings so that you can log into any machine on the domain and have the same setup. You can log into Bob’s machine and all your Visual Studio keyboard shortcuts come with you. Doesn’t that sound nice? But the reality is that sync is ultra slow and makes logging in/out or rebooting frustrating beyond belief. Why is it slow? Well, profiles tend to get really amazingly huge. Everything goes into the user’s profile. And if it hurts to log out/in then it will be even harder to get people to keep their systems patched, which is already a big problem.

Roaming profiles are bad enough with the latest versions of Vista and Windows Server. But they’re even worse on a Linux-managed “domain” (Samba), which is what we had at Loose Cannon way back. Any bugs or compatibility problems in the way the sync happens get propagated down to your local machine, potentially damaging your profile forever. Our Linux-based server was apparently incompatible with Vista and I got my roaming profile horribly mangled a couple of times until I just forced it to go local. Since then, we’ve been on a nice, easy-to-admin Windows Server with local profiles only and haven’t looked back.

We may have been able to get roaming profiles working right after a lot of work and tuning, but is it really worth the trouble? Is it really all that useful to log into someone else’s machine and have all your settings be the same? Nope! Depends on some studio workflow policies (pair programming, standards on supported editors, etc.) but I still say no! Windows is simply not any good at this, and gets worse at it every year. Perhaps with a massive investment of custom tools, but…bah. Better things to work on.

Ok let’s move on. Next up: back to the series. Data Store 2: Server Shares.


February 8, 2009[]

This has been on my mind a lot lately so I wanted to take a detour and write it up.

Ally and I decided to move to Peru to live for six months (her blog talks about who/what/when/where/why/how). I would have preferred a year but Ally’s got to go to school in the fall, so six months it is.

My boss Matt at Loose Cannon is super awesome and instead of firing my ass he decided to try to make it work. He gave me a very long term mostly research and prototyping project to create our next gen animation system. Intimidating as hell but cool. That means I don’t absolutely need to be online all the time.

Matt does want me to continue doing code reviews on our game if possible. This requires a fast turnaround to avoid blocking checkins for too long, especially as we get close to shipping. So it’s preferable if I can be online during normal work hours or at least a window of time during each weekday. Peru is 3 hours ahead of Seattle so that means I can go see pyramids in the morning and still get online and working by normal Seattle core office hours.

Anyway, I’ll be working remotely for six months. We’ve already passed through four hostels in three towns, and have finally gotten a real apartment in the comfy Yanahuara district of gorgeous Arequipa. We’ll do extended-weekend trips every couple weeks (which means working on buses to/from) and then move up to Trujillo in a few months. Lots of mobility required and it can’t interfere with my work. I really do have to keep a regular schedule.

This brings up several super important requirements, which I’ve spent a lot of time thinking about and working on.

  1. Data and hardware must both be secure. If I can’t work because my laptop was stolen or I had a hard drive crash, then I lose a lot of time.
    • Our personal data must also be secure. As we’re managing all of our finances online now, we can’t afford passwords and credit card info and such to get out.
    • I also have to figure out a way to work with Ally’s, um, let’s say “lack of interest” in security.
  2. I need at least periodic access to the internet so I can check in at work through the VPN. This means stealing wi-fi, buying cell-based broadband, plugging in direct, or using sketchy public computers at locutorios.
  3. For longer term, I need a comfortable work environment. It’s still 40-50 hours a week, remote or not. Can’t work hunched over every day at the carpal tunnel festival. But I’m not bringing a whole office with me, it’s got to be small and light.

June 28, 2009[]

Peer Code Reviews At Loose Cannon[]

Now that the game is basically done (though weirdly still being kept under wraps by Konami), I plan to write about some of the things we did, both good and bad. I’ve learned a lot from my experience at Loose Cannon so far and need to write about this while it’s still fresh! I’ll start out with something I think went really well, all things considered: Peer Code Reviews.

Now, we haven’t had our postmortem meetings yet to discuss things like this. And I know that people on the team have issues with the process and our standards, but I feel like overall this has been one of the best things we’ve done at our studio.

Before Loose Cannon, I had never worked anywhere with a review process of any kind. We never even had a coding standard in place at any of those. When I joined, Matt already had reviews going and wanted to add coding standards too. I was skeptical at first, mainly out of ignorance, but Matt convinced me to try it. After a little time optimizing the process and building and writing some tools, I became a believer and evangelist.

The next couple posts are about what I learned, what we implemented, and how it all works.

What Is A Code Review?[]

A code review is simple: get at least one other person to thoroughly review your changes before you check in. They need to (a) understand the code and (b) make sure anybody else could understand it too. Like, in six months when nobody remembers how it works any more.

Simple in concept. We’ll get to the details later.

Notice that I didn’t say “find bugs” above. Our reviewers usually do look for possible bugs, and they’ll often find some, but this isn’t an important goal for us. Bugs are simple things, really. Architectural problems, API design problems, lack of safety, and so on – those are the real dragons to slay.

What follows are the rough goals of our code review process, in descending order of importance.

Share Knowledge[]

If the graphics engineer gets hit by an SUV on their way to work (or quits the company, takes a sabbatical, etc.), you don’t want to be scrambling to figure out how all their stuff works.

Code reviews really help with this. In an individual review you won’t grok an entire system. It’s not really necessary to try, either, because over time, after many reviews, you not only get a feel for what’s going on in a system, but also how that particular engineer thinks. This is invaluable when you come back later to work on that system yourself.

I’ve experienced this directly recently as I’ve been bouncing around the project in the final hours, trying to make myself useful, investigating bugs in systems I’ve barely touched but have often reviewed. It’s a weird feeling, feeling things come into focus, wandering through familiar functions…

Anyway, this means that reviewers need to try to understand the code they’re reviewing. Not just skim through it and look for nitpicky easy things. If they don’t understand it, they need to comment with questions. If it’s too big for a simple answer, then they need to stop and find the reviewee, bring them over to a computer, and get them to explain it.

The more eyes on your code, the better.

Catch And Correct System Misuse[]

Every system tends to have a maintainer, even if you have a “no code owners” policy at the studio. There’s always that one person who works with it more than anybody else and knows all the hidden rules about it. Even if it’s heavily documented, there will always be knowledge that only exists in that person’s head, and you definitely want them to be included if you modify “their” systems. That way they are made aware of what’s being changed, and can think about possible repercussions.

They can comment about possible issues, and in the process not only do we get corrections made, but everybody learns something. Not just about what’s being changed, but about rules in the system being modified or used. “Don’t assume that the game object coming back is valid, could be deferred-add” and so on.

In every nontrivial review, the reviewer and reviewee should learn something.

I especially like this aspect of code reviews. In past projects, I’ve subscribed to Perforce review emails for changes to systems I’ve worked with. And as I got the checkin notices I’d run a diff tool and review everything. All after the fact, all optional, all dropped when things got rough. And only in the areas I was paying attention. As a result, I’d miss a lot of things that I couldn’t afford to miss.

In our code review process, I’m often included on things that need my attention, and I have an opportunity to provide pre-checkin feedback to make course corrections on things I think are important. I love this aspect.

Raise The General Quality Of Code[]

This one is easy. When you know that somebody on the team will be reviewing your code, you’re a little more careful about what you write. You’re a little more deliberate in your choices, a little less hacky, a little more likely to think things through. Yes, we should all be professionals 100% of the time. It’s what we get paid to do. Well, code reviews help to reinforce that.

Peer pressure is a great tool for improving behavior. If you think someone will make fun of you for some embarrassing code that you’ve written, maybe you’ll think twice about doing it. And, well, when it turns out that you don’t think twice, and your ridiculous embarrassing code is caught in review, you’ll have to fix it anyway. Then we make fun of you.

Want to know the pet peeve I’m thinking of right now? Code turds: when someone just comments out some code. Doesn’t explain why it’s commented out, just leaves it in. This has all but disappeared with this new review process. I love you, Mr. Review Process.

Mentoring Junior Engineers[]

At Loose Cannon, like most studios, we have a wide range of experience in our engineering staff. For me, at past jobs we’ve always thrown the junior folks to the wolves, hoped for the best, and forgotten about them until ship time. And of course, this usually leads to some bad results. Game Developer magazine postmortems are filled with stories about this. (By the way, does anyone read Game Developer any more? I totally forgot about that magazine until just now..)

The problem is that we hire the junior engineers with the best of intentions. They seem sharp but green, we figure we’ll mentor them, and they’ll learn and deliver. A good deal! But what always happens is the seniors get busy, and they just don’t put in the time. The juniors slowly create monstrosities and we manage to ship the game anyway.

With a code review process, more senior members of the team are regularly reviewing junior-level code. There’s just no way around it. They can make course corrections more often, as well as do some mentoring right there. “You should do it like X” “Why?” “Well because…” is such an easy conversation to have in the context of a review. Outside of a review you have…brown bag lunch events? Regular meetings? Send them to training? Never happens. Code reviews are perfect places to spread knowledge and best practices.

And in some cases, reviews are where we’d identify people that just aren’t getting it, and need some more attention (or, possibly, to find another place to work). You’ll find out about it a lot sooner if you’ve got some people paying close attention to their code. You’d think that this would always be the case with junior level people assigned large tasks, but I have yet to see it in any of my jobs before Loose Cannon.

Educate About And Enforce Standards[]

We have a coding standard at Loose Cannon. Not everyone is happy with it, and most people have a problem with at least some part of it. We’ll be revising it. But warts and all, it is something that has helped. I remember how things were before the standards and it really was a mess.

But I don’t want to discuss coding standards here yet. Future post, perhaps. It’s a controversial topic, no doubt. How far to go with it? Tab/space settings? Curly brace placement? Naming of temporaries? Block headers in cpp/h files? Do we have a standard for Lua scripts too? What about batch files? And so on.

Suffice it to say I’m a big believer in coding standards as tools for increasing readability and comprehensibility. I also believe that even a bad one is better than none at all. It’s important to us that we check for standards compliance in our code reviews. As a small side effect, the consistency makes the code review process a ton easier.

As a simple example, consider a standard that bool-returning functions should “sound” like a boolean operation. So DebugMode() and LoadLevel() are incorrect (they sound like mutable actions, don’t they?), but IsDebugMode() and ShouldLoadLevel() are correct – they are clearly boolean queries. Well, this would get caught in a code review. This ends up helping to standardize API design. Anyone calling an API, looking at the functions involved, can quickly get a rough idea of how the thing works, based on the way the functions are named.

A review is also a good place to ask people to comment or rename things to be more clear. I tend to add a lot of “what’s this member variable for? needs better name” and “this load routine is complicated, needs an overview comment on how it works” type comments to reviews.

Yeah, ideally all code is self-documenting, yadda yadda. But when there’s a particular order of functions you have to call to get Bink to shut down and then the audio system in order to go to the Wii’s home menu, it needs a comment.

“Big Deal, We Do This In Agile”[]

Agile people are probably thinking “we already do this, what’s the big deal?” Pair programming inherently includes peer review, after all.

Well, we don’t do Agile development at Loose Cannon. Not yet, anyway. I haven’t really looked into it yet, actually. I was waiting for the hype to die down, and that now seems to be the case. Now that teams I respect a lot (like the good folks at Atlassian) are pushing it I’m starting to look into it more.

We don’t do a lot of things that maybe we should. We don’t do unit testing. We don’t have a continuous integration build server. We have information stored in 10 different places and it’s not searchable. So, basically the same as everywhere else I’ve worked.

We have a lot of things to improve. Agile development is just one more thing on the list of things to research and consider integrating into our process.

But overall, I’m very happy that we’ve gotten Peer Code Reviews as an integral part of our daily process.

Up Next: The Details[]

So I’ve talked about why we do this. Lots of generalities, fuzzy discussion. But how do we do it, specifically? Next post.

July 4, 2009[]

In a previous posting, I talked about what a peer code review is, and why we want to do them. Now let’s start getting into specifics about how we actually do them at Loose Cannon Studios.

As I’m writing this I’m discovering it’s kind of a huge article, so I’m breaking it down into a few pieces.

First Attempt: In-Person Peer Reviews[]

When I joined the studio, the process was working roughly like this:

  1. Write and test code.
  2. Find an available engineer.
  3. Sit down side by side and walk through changes, discuss, make small fixes as you go.
  4. Check in if everything is ok, or redo code and go back to step 1.

Simple, no? It was working well in the beginning, too. There weren’t many engineers, and no crazy deadlines. Life was good. In an ideal world, walking through code is probably the best way to do a code review. Nothing beats that in-person discussion.

Problems We Ran Into[]

Unfortunately, it slowly lost its effectiveness for us. Around the time that I joined, we started running into some big problems.

Problem: Mini-Cliques[]

People tended to pick the same person again and again to do reviews, often someone already sitting close by.

It’s just easier with the same person over and over. You already know each other’s styles, probably work in the same area, and so on.

And it’s just so much more of a pain to go get someone from across the room. Even with instant messaging. Does 20 feet really make that much of a difference? In practice, it sure does. This is a big reason I dislike individual offices.

Problem: Lack of Simultaneous Availability[]

Finding someone available to do a review at the same time you’re ready for it is surprisingly difficult. Especially when a deadline is coming up.

People are always busy, or at least on different mental schedules. Some people want to do their reviews in the morning when they’re waking up, some people do their best coding then and want a review in the afternoon.

And I can’t remember how many times I’d overhear (or participate in) discussions that went like “Can you do a review?” “Sure, oh wait, gimme five minutes” … “Ok I’m ready now” “Sorry, I just noticed some more changes I wanted to make, can we do it in an hour?” “Ok, but I’m going to lunch” and so on.

It seems like it ought to be easy to work this out, but we had a hard time with it. Few people are able to interrupt what they’re working on to do a review, then go back to their work without an expensive context switch. It’s really frustrating on both ends.

Problem: The Blind Leading the Blind[]

In many cases, we had junior engineers reviewing other junior engineers’ work. This followed directly from the lack of availability. What else are you supposed to do when nobody more senior is available, anyway?

I saw a lot of bad code going in because of this. It was technically “reviewed”, but it should not have been checked in. Our codebase is still haunted by bad architectural decisions checked in during this time.

Eventually We Just Gave Up[]

People started checking in code without getting reviews because it was frustrating and there was no perceived value. And because nothing was tracked, nothing could be enforced. So of course people just slowly stopped asking for reviews. I even stopped getting reviews for my own code.

Ultimately, it got to the point where it felt like the review process was being done just for the sake of doing it. This is always a sign of a process that needs to be reexamined and possibly discarded. People start thinking the team leadership is out of touch and we start running into morale problems.

So we decided to do something about it.

Second Attempt: Crucible[]

Of all the problems with our process, I figured that the main problem was the side-by-side (in-person) review requirement. A lot of the above problems came straight out of that.

If a review can be done offline, you solve the simultaneous availability problem. It’s equally easy to have anybody in the studio do a review. And you can simply require that more senior people must do the reviewing. At least that’s what my thinking was.

So I started looking around for tools to help us out. Maybe with the right tool and some changes to our process we could solve this thing…

Enter the Crucible[]

The first place I looked for tools was Atlassian. I’m a big fan of those folks, having used Jira since 2003, and Confluence since it first came out. Great support and community involvement. I watched them buy Cenqua a couple years ago and knew they had this Crucible tool for assisting code reviews. So I gave it a try.

At first glance it appeared to be almost perfect. I looked around at some competitors and found them either insanely overpriced or simply weak. Crucible’s method of managing reviews and making comments and replies was slick and fast. Its conversation notification system was poor (email-only with limited controls) but I figured we could work around it.

Note: in Crucible v2.0, which was just released, they’ve done some serious work on conversation updates and timelines and so on that looks great. I haven’t tried it yet, but I will soon. After we ship!

Unfortunately, Crucible out of the box just could not do what we needed.

Requirement: Pre-Checkin Reviews[]

In coming up with a replacement for our old process, we obviously wanted to keep the parts that worked. Matt was very firm in requiring that, no matter what we did, it had to remain a pre-checkin process. Code must always be reviewed before committing it to the depot.

Why? Once code is committed, you’ve suddenly got a huge barrier to making meaningful changes to it. There are a few reasons for this:

  • The incentive to do a good review is diminished. It’s easier for reviewers to fall back on “Well, if it’s in there, and it works…I guess it’s ok…” and do a poor review.
  • When we get near a milestone, the first thing to get dropped will be those reviews. They’ll just pile up in the inbox as “stuff to review after I fix this bug”. Eventually, we’ll be putting off all reviews till after the milestone. A dangerous time to put the blinders on.
  • People often start using code immediately after it’s checked in, especially if they’re actively waiting for some change (“can you export that class for me?”). So now if a reviewer is requiring significant changes to be made post-commit, particularly when involving architecture, you have to go fix all the code that uses the code you want to change as well. That’s a big barrier.

At the time we were implementing this process (mid-2007), this is where Crucible sadly failed us. It was designed for post-commit reviews only! I went back and took another look at Smart Bear’s tools, which did support pre-commit reviews, but the price was just out of our range. I wonder who their target market is.

Anyway, we put our code review process on ice. I saw on an Atlassian forum somewhere a posting about how they were going to implement patch-based reviews, and figured we’d come back to it then. After all, they were only on 1.0!

So we continued with our unreviewed checkins for a couple more months. It was getting really bad.

July 15, 2009[]

This is the third in a series of posts on our peer code review process at Loose Cannon. In the first, I talked about what code reviews are and why we do them. The second documented our initial attempts at implementing a code review process.

In this post I’ll cover how our final process turned out. We’ve been doing this for perhaps the last year or so, through many milestones and the final product release, so it has been battle-tested!

Of course, as with all processes, what works depends on the team and culture already in place. It took us a while to settle on this one and I’m sure it would need adjusting at other studios.

Third Attempt: Crucible + Tool + Process[]

I kept watching Atlassian for updates, and it wasn’t long before they released a version of Crucible that supported patch-based reviews (I think it was 1.1 or 1.2). Yay!

Support for patches wasn’t great; changes coming from patches were not first-class citizens like reviews created from a submitted changelist were (note that this is something they have been improving in recent versions). But it was good enough. You could take a patch file of local changes, upload it, and select it as part of a new review. Awesome.

New Process Requirements[]

With everything we needed from a collaborative peer review tool in place, a few of the seniors on the team met and designed a new process for our rebooted code reviews. We had the following requirements:

  • Every change to game or engine C++ code must be reviewed. Tools and game scripts were not included in this process yet. We wanted to start slowly by narrowing the scope of what got reviewed, and expand it later on.
  • Code reviews must precede checkins. We would continue the previous process’s requirement of review-before-checkin, for the reasons stated earlier. I ran periodic queries at first to make sure of this until everyone got in the rhythm of the new process. People got it pretty quickly.
  • Reviews must include a “primary reviewer”. There would be a core group of three primary reviewers (made up of Matt, Andy, and me to start), and one of us had to be part of every review. We would expand this group over time at Matt’s discretion.

Initial Concerns[]

We were at first a little worried about the review load among the three of us. I did a query of Perforce to get a feel for how frequent and large the checkins tended to be. We estimated that 20% of our time would go into reviewing code, and had to think hard about whether or not we felt the benefits were worth this cost. We also decided to bring new people into the core group as quickly as possible to help with the load, perhaps within a few weeks.
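
If you’re curious, the query itself doesn’t need to be anything fancy. Something like this little Python sketch gets you checkin counts and rough sizes (the depot path and date range are made up, and a real pass would also bucket by week and by area of the code):

  # p4stats.py - sketch of the kind of query used to estimate the review load.
  import collections
  import subprocess

  def p4(*args):
      return subprocess.run(("p4",) + args, capture_output=True,
                            text=True, check=True).stdout

  # One line per submitted change: "Change 12345 on 2007/06/01 by user@client '...'"
  changes = p4("changes", "-s", "submitted", "//depot/game/...@2007/05/01,@now")
  per_user = collections.Counter()
  files_per_change = []
  for line in changes.splitlines():
      parts = line.split()
      change, user = parts[1], parts[5].split("@")[0]
      per_user[user] += 1
      describe = p4("describe", "-s", change)
      files_per_change.append(sum(1 for l in describe.splitlines()
                                  if l.startswith("... //")))

  print("changes per user:", dict(per_user))
  if files_per_change:
      print("average files per change: %.1f"
            % (sum(files_per_change) / len(files_per_change)))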

As it turns out, we didn’t have any trouble handling the load, which was far lower than we had expected. A review or two a day was probably what we each averaged, maybe 5-30 minutes total, depending on complexity. So we ended up keeping the 3-person core group for a while. We pretty much forgot about adding more people until a couple other members of the team asked to be included and we expanded the group.

Another concern was that the process of creating patches and uploading them by hand would be a pain in the ass and error prone. The technical part of the review process had to be really simple or it would be harder to get people on board with the whole thing. Clearly a tool was needed. Luckily, Crucible has a SOAP interface! Well, it had a SOAP interface. They recently switched to REST, so I lost my convenient WSDL and now have to maintain my .NET wrapper of their API by hand (ah well). But whatever.

To solve this, I spent a couple days adding a new crucreate command to my p4x tool to automate creation of reviews from pending changelists in Perforce. This tool has gone through many revisions since, and I will do a full post on what we’ve got now and how it works in a future post.
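
I’ll save the full tool for that post, but the core trick is simple enough to sketch here: turn a pending changelist into a unified-diff patch that the review tool can ingest. The upload half (the Crucible API calls) is left out of this sketch, and the handling of adds and deletes is glossed over:

  # makepatch.py - sketch of the patch-generation half of a crucreate-style tool.
  import subprocess
  import sys

  def p4(*args):
      return subprocess.run(("p4",) + args, capture_output=True,
                            text=True, check=True).stdout

  def make_patch(changelist):
      # Files opened in the pending changelist, e.g. "//depot/foo.cpp#7 - edit ..."
      opened = p4("opened", "-c", changelist)
      depot_paths = [line.split("#")[0]
                     for line in opened.splitlines()
                     if " - edit " in line]  # adds/deletes need separate handling
      if not depot_paths:
          return ""
      # 'p4 diff -du' emits unified diffs of the opened files against the have revision.
      return p4("diff", "-du", *depot_paths)

  if __name__ == "__main__":
      sys.stdout.write(make_patch(sys.argv[1]))  # pipe or upload this to the review tool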

The New Review Process[]

Given our requirements, and the new tool, we created the following three-stage process for peer code reviews. All documented in Confluence of course!

Stage 1: Create Review[]

This part is pretty quick to do, rarely more than a few minutes and usually under 30 seconds:

  1. Prep a pending changelist as if you were about to check in: isolate it to the minimum required for the change, give it a good changelist description, build all platforms and configurations, etc.
  2. Right-click the changelist in P4Win/V and select “Create Crucible review from changelist…”. This runs p4x crucreate, which puts up a dialog box with available reviewers listed. You can also do this from a command line.
  3. A primary reviewer is pre-selected at random but you can change it (*) if you want someone specific.
  4. Select secondary reviewers (*), using your judgment. Who is the de facto owner of the code? Who else is affected by this change? Who has domain expertise that would be useful? Who will be upset if they see your checkin and weren’t part of the review?
  5. Hit OK! A review is automatically created by p4x and comes up in the web browser in the Draft state, reviewers set up, patch uploaded, and so on.
  6. Make any necessary comments on your own review (**). This is also a good time to review the diffs and look for bonehead mistakes (like temp/hack/test code you didn’t intend to check in).
  7. Consider abandoning the review and going back to the code for another pass. (***)
  8. Hit the Start Review button! All the reviewers will get notified by Crucible via email that they have a job to do…

Some notes:

(*) In choosing reviewers, it’s a good idea to instant-message people first to ask if they’re available to do a review, especially if you’re really itching for feedback or you want to check in ASAP. You never know what schedule other folks are on, mentally or physically! I usually turn around reviews fast when I’m looking for a distraction, but other times I need to focus and will put off reviews until the next day. It’s always good to check.

(**) For example: noting what your intention is in different areas, calling out chunks of code that need special attention, or asking questions like “this part is a mess, is there a better way to do this?” And if you’ve added people to a review for their domain expertise, you can save their time with a comment like “Joe – I added you just for the graphicsmgr changes – did I do this goofy GL state setting stuff right?”

(***) Well this is a little odd, eh? Perhaps it’s just me, but prepping a review always makes me think about the problem from a different perspective, and I often realize I screwed up something deep, or didn’t solve the problem fully, or forgot to test a few things, and so on. So I’ll abandon the review and head back to the code for another pass. It’s a lot less embarrassing to figure this out when a review is in Draft and not started.

Stage 2: Perform Review[]

This stage is very fluid and could take minutes to days of time to get through, depending on the type of change involved. It has these main tasks:

  • Reviewer Task: Make Comments. Here we go line by line through the diffs and make comments and ask questions. This is where the big win is in doing reviews! Right here, in the middle of this giant post! Unfortunately, what to comment on and how is itself a really big topic, so I have to cover that in a later post.
  • General Task: Discuss. Questions need answers obviously. And comments that the reviewee doesn’t agree with or understand will need some discussion. Most discussions we’ve had are very short, but some can get large and end up needing in-person discussion or possibly escalation to the team lead for a call to resolve.
  • Reviewee Task: Address Comments. “Address” does not mean “Do What Reviewer Says”. The reviewer is not the boss. “Address” means that the reviewee needs to do something with every comment a reviewer makes. So that means: making the requested change, answering the question, arguing the point, raising new questions and so on. Note that ignoring the comment is not a valid choice.
  • Reviewer Task: Mark Complete. Once a reviewer has made all their comments and has had their questions answered to their satisfaction, they (a) mark it complete in Crucible, and (b) add one final comment. It’s a general comment saying if they’re ok with things going forward. For example, “check in after addressing comments”, “send back with an incremental review”, “this whole change needs to be reverted, I’ll come talk with you” (perhaps by a team lead), and so on.
  • Reviewer Task: Talk In Person. Sometimes it’s just not going to work through Crucible or any tool, and you need to sit down with the reviewee and have them explain how it all works to you. Especially if it’s a huge or weird change that just doesn’t work well in diff form. Afterwards, the reviewer can go back and make informed comments, or maybe the reviewee will be able to bypass the rest of the review entirely if it all came out in the 1-on-1 discussion.

Now, while a change is going through this review process, what is a reviewee to do besides replying to comments and making fixes as they are addressed? They should be working on other things, by using a separate Perforce client, a private branch, or by working on files that do not conflict. Whatever works. They shouldn’t be sitting on their hands waiting for the review to be done so they can check in.

Of course, sometimes you have a high priority fix that needs to go in. Maybe some people are waiting on the edge of their seats for your change. And sometimes people who can’t manage their inbox don’t see or ignore the review notification email. Nagging via instant messaging does a good job of solving this problem.

Stage 3: Close Review[]

This final stage takes just a few seconds. Finished reviews go to one of several places:

  • Submit And Close! This happens when everybody is satisfied. It is the result of most of our reviews, depending on the stage of the project and the scale, risk, impact, etc. of the change. We’ll go here if all the comments result in trivial changes and the reviewers have said they don’t need to see a new review. So check in the code, take the Perforce changelist number, and paste it into the Crucible review (like “checked in as #37132”) when summarizing it, then close (there’s a small sketch of this step after the list).
  • Add Incremental Review. This happens when reviewers want to review new code changes based on their comments. Crucreate can “diff the diff” and add incremental updates to existing reviews with only what has changed since the comments (more on how this works in a future post). After doing this, notify the reviewers that more info is available and they’ll go back and do another pass on the update. Repeat as necessary. We’ll rarely have more than one or two incrementals tacked onto a review.
  • Create New Review. A totally new review is required when the changes based on comments are really hard to review incrementally. No big deal – just crucreate a new review, then summarize the old review with the ID of the new one, and we start the process fresh. Now, if a change goes through more than a few separate reviews, we probably need to have a whiteboard discussion to sort it all out before the next review is created.
  • Abandon! Head back to the drawing board. This happens when there is just too much work to do to make things right. Create a new review when the code is ready again. Perhaps 5% of our reviews end up this way.
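
And since I mentioned the “checked in as #NNNNN” bookkeeping above, here’s a rough sketch of how that last step could be scripted. This is not our actual tooling: the Crucible server, credentials, review ID, and changelist number are all made up, and the REST path is just how I remember Crucible’s reviews-v1 API being shaped, so verify it against your own install.

    # Sketch: submit the pending changelist, then paste the submitted change
    # number into the Crucible review as a general comment. Server, credentials,
    # review ID, and changelist are hypothetical; the REST path is an assumption.
    import re
    import subprocess
    import requests

    CRUCIBLE  = "http://crucible.example.com"   # hypothetical server
    AUTH      = ("reviewbot", "secret")         # hypothetical credentials
    REVIEW_ID = "CR-123"                        # hypothetical review key
    PENDING   = "37130"                         # hypothetical pending changelist

    # p4 prints "Change NNNNN submitted." on success. (For simplicity this
    # ignores the case where the changelist gets renumbered on submit.)
    out = subprocess.run(["p4", "submit", "-c", PENDING],
                         capture_output=True, text=True, check=True).stdout
    submitted = re.search(r"Change (\d+) submitted", out).group(1)

    # Add the summary comment to the review via Crucible's REST API (path from
    # memory of reviews-v1; check your version's docs).
    requests.post(
        f"{CRUCIBLE}/rest-service/reviews-v1/{REVIEW_ID}/comments",
        json={"message": f"checked in as #{submitted}"},
        auth=AUTH,
    ).raise_for_status()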

And that’s the step-by-step of how we do reviews at Loose Cannon!

I’ve gone into a lot of detail that may make it all sound pretty laborious, but that’s the level I like to write at. In practice, our reviews tend to go fast and smooth. It really feels like a natural part of our process now.

Coming Up[]

Well, it turns out I have a lot more to say about our review process than I thought, so I’ve had to continue to break this down into multiple topics. Maybe I should have a list of what I intend to hit in future posts, and hope it won’t break down further:

  • How reviewers make comments. What we comment on and why!
  • How this whole process worked out for us. Did we solve the problems we set out to solve? What new ones arose? What does the future hold?
  • How the crucreate tool works in detail. With action screenshots!

And with our visas expiring soon, Ally and I are nomads again, so it’s been really hard to find time outside of work to write. But I’m on an 18 hour bus ride right now, so I’m going to see if I can queue up some stuff to post when we get to Máncora. We’ll be there for a week, then it’s on to Quito, Sydney, Brisbane, and back home-home in Seattle on August 9th. I’ve heard we’re missing a really nice summer. To be honest, what I’m really missing is Boar’s Head pickles.

Until next time!

July 26, 2009[]

This is the fourth in a series of posts on our peer code review process at Loose Cannon.

Somewhere in the middle of the third post, I started to talk about the “make comments” part of the process, but it’s a big subject, deserving its own entire post. So here we go.

Comments in a review are where the real goods are delivered. This is where we get all the benefits that I had talked about way back in the first post.

What We Don’t Expect From Reviewers[]

First let’s talk about what reviewers aren’t expected to do when they’re reviewing a change.

Reviewers aren’t expected to catch everything.

It’s impractical and arguably a waste of time. Knowledge-spreading, mentoring, and so on are more of a “seeping” process than a hard core lesson plan. The idea is that, eventually, with enough reviews and shuffling of reviewers, the knowledge will spread throughout the entire team. There’s just no need to focus on catching every single thing in every single review.

Reviewers aren’t expected to catch deep or systemic design problems.

A changelist is a snapshot of a small part of the game. It’s really hard to try to see the big picture through a pinhole. Reviewers will often open up their editor and browse around in code outside of the change during a review, to get more context. FishEye’s browsers and search (and blame!), and Perforce’s time lapse view help here a lot too. But this only goes so far.

At some point, you’ve got to throw up your hands and say “we’ve got to talk, I can’t see what’s going on here”, head to the whiteboard, and discuss it in person.
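
For reviewers who live on the command line, plain p4 can dig up some of that same context. A tiny sketch, with a made-up depot path:

    # Sketch: pull a little history/context for a file that shows up in a review.
    # The depot path is hypothetical.
    import subprocess

    FILE = "//depot/game/audio/WindVacuum.cpp"   # hypothetical file from the review

    def p4(*args):
        return subprocess.run(["p4"] + list(args),
                              capture_output=True, text=True, check=True).stdout

    # Recent revision history: who changed the file, when, and (hopefully) why.
    print(p4("filelog", "-m", "5", FILE))

    # Per-line annotation, the command-line cousin of FishEye's blame view.
    print(p4("annotate", FILE))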

The reviewer is not (necessarily) the boss.

Reviewers do not necessarily have the authority to enforce changes, nor should they have extra responsibility for the quality of code they didn’t write. The reviewee maintains responsibility for their own changes.

We are tapping into reviewers’ brains and schedules to help make the entire project better. This is a service they provide, not an opportunity for dictatorship. While it is a requirement of our process that all reviewers’ comments must be resolved, this does not necessarily mean “do what the reviewer says”. Ultimate responsibility and authority remains with the reviewee’s lead.

Now, it’s pretty easy to end up in dictator-speak mode when you’re in the zone, ripping through reviews, making comments. It helps to soften more subjective comments with phrases like “I suggest”, “are you sure this is the best way?”, and “this is totally optional and my opinion, but…”.

What Reviewers Seek[]

Ok, now on to what reviewers are actually commenting on. Reviewers are looking for the following kinds of things, in no particular order of priority.

Are There Architectural and Domain-Specific Issues?[]

Every project has experts in different problem domains. You want these people reviewing changes in areas in which they have expertise. Graphics, scripting, debugging, architecture, assembly, tuning, you name it.

Here are some examples I picked out at random from our reviews.


All we’re doing here is looking for “gotcha’s”: the hidden rules in systems that you know well, which are especially important to catch. Or perhaps better, more efficient ways to do things. Or how new code should fit into old code and interoperate with other systems.

This isn’t just for immediate course correction. With each review comment in a specific problem domain, the reviewee learns how to do things better in the future, so we don’t hit this again. And, often, the reviewee will run through their other code where the same mistake was made (but not caught yet) and fix it.
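
That kind of sweep doesn’t need anything fancy, either. Something like the following is usually enough to find the other spots; the pattern and source root here are placeholders, not a real gotcha from our code.

    # Sketch: after a review comment flags a mistake, grep the rest of the code
    # for the same pattern. Pattern and source root are placeholders.
    import os
    import re

    SOURCE_ROOT = "src"                            # hypothetical source tree
    PATTERN = re.compile(r"memset\(\s*this\b")     # hypothetical flagged mistake

    for root, _dirs, files in os.walk(SOURCE_ROOT):
        for name in files:
            if not name.endswith((".cpp", ".h")):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if PATTERN.search(line):
                        print(f"{path}:{lineno}: {line.rstrip()}")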

This is a wonderful way to spread knowledge while improving the code in the immediate changelist.

Are They Following Best Practices?[]

Here we are drawing heavily on the unique career experience of the reviewers, who are looking for things like:

  • Good comments, well-placed, relevant, etc.
  • Good naming of variables – descriptive, not named after the type…
  • Good flow in a function
  • Avoiding code duplication
  • Hoisting common code out to utilities/systems (there’s a small sketch of this after the list)
  • Calling out when someone is being lazy (in a bad way)
  • Flagging unnecessary changes that are just personal taste
  • General readability concerns
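
Here’s a contrived illustration of the duplication/hoisting items above. It’s not code from our game (and it’s Python just to keep it short), but it shows the kind of before/after a reviewer is pushing for:

    # Before: the same clamp-and-format logic copy-pasted in two places.
    def hud_health_text(health):
        pct = max(0, min(100, int(health * 100)))
        return f"HP {pct}%"

    def hud_fuel_text(fuel):
        pct = max(0, min(100, int(fuel * 100)))
        return f"Fuel {pct}%"

    # After: hoist the shared piece into one well-named utility, so a future fix
    # (say, rounding instead of truncating) only has to happen once.
    def as_percent(value):
        return max(0, min(100, int(value * 100)))

    def hud_health_text(health):
        return f"HP {as_percent(health)}%"

    def hud_fuel_text(fuel):
        return f"Fuel {as_percent(fuel)}%"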

A lot of this can be really subjective and often results in spirited debate. But this is a good thing – everybody learns something! And often we’ll agree to disagree and move on. However, if the same issue comes up again and again, then we can add the lead to the review and ask them to make a call to resolve it.

To the right is a sample clipping with a best practices discussion that will lead into an offline meeting.

Are There Any Opportunities to Mentor?[]

Teams are often made up of people with a wide range of experience levels. Review comments can be a great place to mentor a more junior engineer. If they’re sharp and fearless, they’ll challenge you on comments they don’t agree with or understand. Instead of getting mad and replying with a “just do it, this is the best way” – take advantage of the opportunity!

If you instead take the time to really give them a good explanation of why you made the comment, a couple things may happen. First, they will learn something. Great. But another possibility (this happens to me a lot) is that by forcing yourself to explain why it must be done that way, you find out that you actually don’t have a good reason. Maybe your reasoning is based on religion, or outdated techniques, or wasn’t completely thought-out. I like when this happens because it makes me a better engineer. Make sure the team knows that as a reviewer you expect to be challenged.

Comments in Crucible (our code review tool of choice) have permalinks as well, so it’s easy enough to link to the discussion from the team’s wiki for spreading the word.

This has been one of the more successful parts of our code review process. People will ask questions and say things in text form that would never happen in person. It just doesn’t come up as often in casual conversation to talk about a lot of seemingly minor things like why a particular naming convention exists. In text, when you’re directly reviewing code, it’s a natural part of the process to say “why does it need to be this way?” and can easily be done in a non-confrontational manner.

Are They Adhering to Our Coding Standards?[]

Not long after we started the new code review process at Loose Cannon, we sat down to hammer out an initial set of coding standards. We were planning on having coding standards anyway, but it became clear right away when we started reviews again that we needed standards immediately. It’s very hard to review code where every person has their own weird style and habits.

Comments about coding standards are simple and easy, and should be short. We have a Confluence doc with our standards, so as you notice things that don’t adhere, flag them. This is a good way to teach new engineers our existing standards, as well as updating everyone on the occasional new standard.

On our current game, we backed off from most of this when we were very close to shipping. At that point you’re writing a lot of junk (particularly certification compliance) that you don’t intend to carry forward to future games, just to get the game done. More nitpicky stuff like coding standards is just not a priority in the final weeks.

Did They Write A Good Changelist Description?[]

This is a relatively new thing we added to reviews. It is really a best practice, but I wanted to call it out as a special item because good changelist descriptions are so important when debugging problems with unfamiliar code. And so many people do it wrong. [I need to write up a post on how to write good changelist notes. People always make the mistake of commenting on the ‘what’ they changed, and missing the ‘why’. Any fool can do a diff and find out what changed, but six months later remembering why it was changed? That needs good changelist notes.]

When our crucreate tool tells Crucible to create a review, it automatically sets the Objectives of the review to the contents of the Perforce changelist description. This is important for a couple reasons.

First, obviously, it helps guide the reviewers by answering questions in advance about the point of the change. Reviewers should always check the objectives to get background on what they’re about to review. Saves a lot of time asking questions in comments that are already answered in the objectives.

And second, as it is a part of the review, it can and should be commented on (with a general review comment). No more lazy, useless changelist descriptions!

It’s for this reason that we modified our Crucible installation to pre-expand the objectives (normally collapsed by default) any time a review is viewed. Otherwise people never saw the objectives because they never bothered to expand them. [Thanks to the nice Atlassian support folks for their help here.]
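
For the curious, the p4 side of this is pretty simple. Here’s a sketch of pulling the Description out of a pending changelist so it could be handed to Crucible as the Objectives; the changelist number is made up, and the real crucreate does considerably more than this.

    # Sketch: grab the Description block from a pending changelist so it can be
    # used as the review's Objectives. The changelist number is hypothetical.
    import subprocess

    CHANGE = "37130"    # hypothetical pending changelist

    spec = subprocess.run(["p4", "change", "-o", CHANGE],
                          capture_output=True, text=True, check=True).stdout

    # In a p4 spec, the Description field is the tab-indented block that follows
    # the "Description:" line; collect those lines until the next field starts.
    desc_lines = []
    in_desc = False
    for line in spec.splitlines():
        if line.startswith("Description:"):
            in_desc = True
        elif in_desc and line.startswith("\t"):
            desc_lines.append(line.lstrip("\t"))
        elif in_desc and line and not line.startswith("\t"):
            in_desc = False    # hit the next field (e.g. Files:)

    objectives = "\n".join(desc_lines)
    print(objectives)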

The “Final Comment”[]

The last comment a reviewer makes, as documented in the last post, is to say what should be done with the review. Here are some samples I picked at random.

  • The most common final comment is something like “looks good, check in”.
  • This one is a little more complicated. Obviously some in-person discussion has happened in between, as well as an incremental UPDATE 1 that was attached.
  • Sometimes the last comment is the first comment as well. I found a great example of sharing knowledge using reviews. Here, a reviewer got added specifically so they could learn about JSFL in Flash.

And that’s a wrap! Just in time too – I’m getting on a plane in a couple hours (the first of three) to leave Quito and head on over to Sydney.

November 15, 2009[]

Well obviously it’s been really quiet here lately. A lot has been going on in my personal life, but the end is in sight. One of the things I’ve been doing is working on a talk for Audiokinetic for their 2009 Wwise Tour. I presented it over at Microsoft last week as part of a tag-team with Robert Ridihalgh of OMNI Interactive (he was the principal audio engineer on Tornado Outbreak).

Oh, did I forget to post on here that our game shipped? Wow, I totally did! Well, we shipped in September on 360, PS3, and Wii. Become a fan on Facebook!

The reviews have generally been in the B range, though Metacritic’s average is a little depressing at ~70. Nearly every review uses the words “Katamari” and “clone” which makes me think they didn’t really give the game a chance. The similarities with Katamari begin and end at the concept of “grow bigger”. Might as well call Call of Duty a Doom clone because they both involve shooting things to get to the end of the level.

But whatever… I’m proud of our effort, and I think it’s a super fun game. Hell of a job for a small team with a tech base built from scratch, to ship on time and on budget on three platforms.

Wwise Tour[]

One of the reasons we were able to ship on time was Audiokinetic’s Wwise. I’ve been evangelizing this excellent sound engine to everybody I meet. I just can’t say enough good things about our friends up in Montreal. I’d use Wwise on every future game if I could.

On Tornado Outbreak, I did most of the engineering and the initial audio rig design and prototyping. Robert and his team did the actual audio work, and took over management of the Wwise project. They probably did 95% of the audio-related work on the project, which is awesome! As everybody knows, engineers are really slow and are pulled in 20 directions at once, so the more I could step out of the way, the better.

Audiokinetic asked me to put together a talk for the tour they’re doing right now to promote the product, particularly the new features they’ve been adding. I invited Robert to join me and we presented the two halves of our audio solution for Tornado Outbreak. We split the presentation roughly along the lines of our responsibilities.

The event was recorded, so at some point we may see video clips showing up online. That will be necessary to get Robert’s part of the talk, because he was exclusively walking through the Wwise project, using elements of it to tell his story. So no slides, you’ll have to get the video if it comes out. Anyone using or considering Wwise should try to get a hold of that – his talk was really interesting and includes some great tricks on saving memory without sacrificing variety.

Now, all I know how to do is present slides, so here they are! Tornado Outbreak + Wwise = Love.

Our Project[]

I also got approval from the bosses to release the Tornado Outbreak project file for Wwise. It was really generous of them to agree! Here it is. Note that this doesn’t include any content (wav files) but is only the project itself. That should be plenty though.

The point of releasing the project is to help out other studios who are integrating Wwise, in hopes that the favor will be returned. Everybody benefits from information sharing like this. With Wwise, in order to get a good rig set up you really need to have experience and good examples to draw from. As I say in my presentation, Wwise is different. Every other SDK out there is a “play samples + DSP” library. Getting the Wwise rig right is hard, and it’s not going to be right the first time. Just as if you were to build a Maya rig and had no experience with it before. You’ll screw it up for sure.

Audiokinetic provides some synthetic examples to help get started, but it’s not from a real, shipping game, and besides, every game is different. Tornado Outbreak has an enormous amount of unique objects that produce audio (over 400) and all those crashes and shakes and panics can become very difficult to manage. It would have saved a lot of time if, when I started building the initial rig, I had some examples to draw from. To that end, we’re releasing our particular solution for this situation. I hope that it inspires even better solutions and ideas on how to tackle these kinds of problems in the future.