A few years back, I wrote an article called My Experience Running Development At A Startup, and one neat thing I mentioned that got a good deal of comments was the concept of “Tracers over Prototypes”. I discussed some problems with prototypes and spikes and how “tracers” are a better alternative.
Three years later, I’m now here to expand on this and discuss why prototypes and spikes are foot-guns rather than solutions and how tracer code is a better approach to problem-solving, especially in a fast-paced environment.
If you’ve read The Pragmatic Programmer, you’ll find this concept familiar; that’s where I learned about it as well.
What’s a prototype? What’s a spike?
Let’s get definitions for these first. The idea behind both a prototype and a spike is to write disposable code in order to learn and to test out an idea/library/whatever.
For example, you might write a “prototype” rewrite of your product in another language. Fancy Elm? You might rewrite a difficult section of your current product in Elm as a test and then analyze the results:
- Does it perform just as well?
- What is the developer experience like?
- Does it have community support necessary for long-term development?
And so on. Based on your “test” results, you might either commit to that decision and write a permanent solution based on lessons learned during the prototype development/spike, or you might realize that this is not the right decision at this time.
A spike is a similar idea but might require less development and more research. Basically, it has more to do with spending time on an idea/decision and investigating it rather than creating a functional prototype.
What’s great about prototypes?
Prototypes can be fantastic. Why? Prototypes encourage taking hard shortcuts in order to get a working idea out the door and tested.
For example, if you wanted to create a prototype for an image gallery app, you might heavily rely on off-the-shelf products in order to get you to the finish line as quickly as possible. You might cut corners on testing, cut corners on the development experience and so on.
You’re testing the image app idea and the only thing that matters is having a final product to show users in order to get feedback. Do people want a gallery app? Does it have any advantages over existing solutions? How does it differentiate? What technical challenges do you foresee after building the prototype?
The biggest advantage over regular development is that you don’t have to care about the details; you only care about the idea you’re testing and getting there. This means you get to develop faster and spend less time yak shaving.
And then you throw it out.
You take the knowledge you’ve acquired from development and feedback, and then you craft a long-term solution from scratch based on that information.
What’s terrible about them?
That last part, the “throw it out” part. Let me show you something:
I’ve been working on a “prototype” for 4 years. There are nearly 14,000 commits, nearly a dozen contributors over the years, 1300 releases. We’re on major version 9.x (we don’t follow semver but each major bump is a HUGE feature or huge rewrite).
The problems with prototypes have more to do with “culture” than anything else but the problems (in my head) are insurmountable in most cases:
- prototypes often work and look “good enough” to encourage people to keep them instead of rewriting them
- deadlines often push against the idea of rewriting prototypes
- management sometimes doesn’t understand the idea of a “prototype” fully. If they see a functional product, they’re expecting “the final product” to take less time than the prototype took to develop.
The reason I stuck with a “prototype” for 4 years is that when it was time to do a rewrite, it no longer made sense. Things were “good enough”, we were a startup, we were burning bootstrapped money, and I made the development decision not to rewrite but instead focus on delivering features to clients.
I see developers trapped in this time and time again. “I’m testing out an idea” more often than not turns into crap code with cut corners making its way to production.
So my rule is:
Expect a prototype to become the basis for your final product. Expect your prototype to never be thrown away even though it should be.
An aside: what developers do to make prototypes more likely to be thrown away
This is semi-funny, but after talking to other developers, reading articles, and listening to podcasts, I’ve found an interesting pattern. Developers will create a prototype in a specific way so that it’s more likely to be thrown out, using methods such as:
- naming variables horrible things so that no one wants to work on the code base
- using an unknown library/framework incompatible with existing products
- writing it in a language no one else is comfortable using — thus making the prototype useless long term
- naming the repository silly names (guess what? Naming something “prototype” apparently doesn’t work! 😭)
- using jQuery for a single page application prototype
- using PHP in a non-PHP shop for server/back-end prototype
Don’t try the last one on me because I like PHP.
An exception to my depressing rule
If you have a culture centered around prototype development, then the downsides of prototypes don’t exist. I know some developers who are used to doing prototypes and spikes without the drawback of having to use them in production.
This can be fantastic. But it takes time, energy, and investment to set up that culture.
A better alternative — Tracer code
There’s a decent short discussion on tracer bullets on StackOverflow but the gist is this:
Tracer code is not disposable: you write it for keeps. It contains all the error checking that any piece of production code has. It simply is not fully functional.
To break this down to the important components:
- a tracer is not disposable
- a tracer is written similarly to production code
- a tracer doesn’t have to be fully functional
- a tracer is meant to be the base for the final product
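To make those components concrete, here’s a minimal sketch of what a tracer might look like. This is a hypothetical example (the photo-tagging feature and all names are mine, not from the article): the public interface is written for keeps and does real error checking, but the implementation behind it is deliberately thin.

```typescript
// Hypothetical tracer for a photo-tagging feature.
// The interface below is the part written "for keeps".
interface PhotoTag {
  photoId: string;
  label: string;
}

interface TagStore {
  addTag(photoId: string, label: string): PhotoTag;
  tagsFor(photoId: string): PhotoTag[];
}

// In-memory stand-in for the eventual database-backed store.
// It validates input like production code should, but it isn't
// fully functional: nothing is persisted yet.
class InMemoryTagStore implements TagStore {
  private tags: PhotoTag[] = [];

  addTag(photoId: string, label: string): PhotoTag {
    if (!photoId || !label.trim()) {
      throw new Error("photoId and label are required");
    }
    const tag = { photoId, label: label.trim() };
    this.tags.push(tag);
    return tag;
  }

  tagsFor(photoId: string): PhotoTag[] {
    return this.tags.filter((t) => t.photoId === photoId);
  }
}
```

Because callers only ever see `TagStore`, swapping `InMemoryTagStore` for a persistent implementation later doesn’t touch any of the code built on top of the tracer.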
You can already spot the differences and advantages.
Tracers are great for testing out ideas that you plan on having in the production environment. They’re different from prototypes in that you should be more “sure” in your idea. Tracers aren’t disposable.
In my mind, they’re supposed to be one of the final steps in building out a solution. A tracer ensures that your idea for a solution will work in a production environment. This comes after you decide on architecture, after you ensure that your ideas are sound, and after you plan things out, but before you write the rest of the code.
Good Tracer practices
You wouldn’t use a tracer just to test out a framework or a new language in general. You’d use one to ensure that the framework/language works for one specific purpose in your application, and then use that tracer as the breaking ground for further development.
But here is my list of Tracer practices:
- stubbing out functionality is okay as long as you have a solid plan for extending that stub
- you don’t necessarily have to have tests, but write code as if you did
- the most important code you write is the interfaces/endpoints that other code interacts with
- don’t leave major changes for “later”
The last one is especially important. When working on prototypes, it’s easy to leave things for “later”. If you’re planning on writing code that’ll live on in your codebase, you need to recognize when your code has gone awry and already needs a rewrite.
This is hard, but these small rewrites pay off in the end. I’ve had this happen several times recently, and while I was kicking myself in the butt for making an assumption that led me to write code I had to rewrite, it was worth it because it saved me from shipping legacy/bad/broken code.
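The stubbing practice above is worth illustrating. Here’s a hypothetical sketch (the gallery-export feature and names are made up for this example): the function’s contract is real and fully typed, one branch works, and the stubbed branch fails loudly with a typed error instead of silently doing nothing, so there’s a solid, visible plan for extending it.

```typescript
// Hypothetical sketch: stubbing one code path while keeping the contract real.
type ExportFormat = "html" | "pdf";

interface ExportResult {
  ok: boolean;
  output?: string;
  error?: string;
}

// The signature is the part other code will depend on long term.
function exportGallery(photoNames: string[], format: ExportFormat): ExportResult {
  if (photoNames.length === 0) {
    return { ok: false, error: "nothing to export" };
  }
  if (format === "html") {
    const items = photoNames.map((n) => `<li>${n}</li>`).join("");
    return { ok: true, output: `<ul>${items}</ul>` };
  }
  // Stub: the plan is to render via a PDF library later. Until then,
  // callers get an honest, typed failure rather than a half-working path.
  return { ok: false, error: "pdf export not implemented yet" };
}
```

The point is that the stub is part of the contract: callers already handle the failure case today, so filling in the PDF branch later changes no call sites.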
Bad Tracer Practices
Every practice has its issues and every practice has footguns. A footgun is essentially a “thing” that is seemingly designed to help you fail as badly or as quickly as possible.
So if you’re getting used to the idea of writing tracers, here’s what to watch out for:
Tight coupling with existing code
Tracers should ideally be semi-isolated systems that expose some interface or endpoint for other code to use. For example, if you’re writing a “tracer” focused on building out a helper toolbar for your app, it shouldn’t dive too deeply into existing code to modify store values, state management, or how services work in ways that would impact the rest of your app.
It can rely on existing services, but you shouldn’t be making major changes to your main codebase.
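One way to keep that isolation is dependency inversion: the tracer defines the narrow interface it needs, and a thin adapter wires it to the existing app. This is a hypothetical sketch of the toolbar example above (the `SelectionReader` interface and the fake store are my own illustrative names):

```typescript
// The only thing the toolbar tracer asks of the existing app.
interface SelectionReader {
  selectedPhotoIds(): string[];
}

// The tracer itself: pure logic behind a small surface area.
// It never mutates app state; it only computes what to display.
function toolbarLabel(selection: SelectionReader): string {
  const count = selection.selectedPhotoIds().length;
  if (count === 0) return "No photos selected";
  return count === 1 ? "1 photo selected" : `${count} photos selected`;
}

// A thin adapter bridges to whatever store the app already has
// (a fake store here). If the tracer is ever cut, you delete the
// adapter instead of untangling the app's state management.
const fakeStore = { selection: ["p1", "p2"] };
const adapter: SelectionReader = {
  selectedPhotoIds: () => fakeStore.selection,
};
```

The existing services stay untouched; the coupling lives entirely in the adapter, which is cheap to rewrite or remove.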
Lack of focus
Whilst prototypes encourage exploration, tracers are more focused. If you can’t identify the end product of your effort, you shouldn’t be writing a tracer. You should probably do more research.
For example, a tracer shouldn’t be something like “give management the ability to increase conversion by changing things on the site”. No, it should be “create a feature wherein a manager can create an A/B test via simple CSS that is tracked via our analytics suite”.
Without a focus, you don’t really know what you’re writing. If you don’t know what you’re coding, you’re not writing a tracer.
Fear of throwing away bad code
This is that footgun I was referring to. Tracers are not meant to be thrown away, but that doesn’t mean you can’t adjust course. Or bend the rules.
It’s happened to me several times before that scope changed over the course of coding a new feature. It’s not meant to happen (you should be pretty sure of what you’re trying to do beforehand), but that doesn’t mean you should either finish a feature you no longer need or avoid a rewrite just because this article says so. 🤷♀️
A personal example
This whole article is fairly vague. It’s hard to actually write code snippets that demonstrate the differences or to describe an exact feature where you’d apply one concept over another.
However, I have quite a bit of experience writing both tracers and prototypes, both in my personal projects and at work.
I have a couple of personal projects that I’d like to share some info about to demonstrate the difference between a prototype and a tracer.
OMEN was my markdown editing environment. Think IDE but for writers and in markdown, similar to Scrivener if you know it. I took the prototype approach, mainly because it made sense. I was working with a brand new stack: Angular 2 (which I had limited experience in), Electron (again, limited experience), and CodeMirror for editing (no experience).
All three technologies were new and before fully testing them out and learning about them, I took my limited experience and ran with it. I wanted to get something out.
So, I tirelessly worked on getting an MVP out. I picked an approach and immediately used it just to get that end-product that I could use to write a book (and I wrote a good 30K words in it!). I took really bad shortcuts. The developer experience sucked (and made me not want to return). My store had an architecture where I coded myself into a corner. And on top of it, CodeMirror wasn’t exactly what I needed.
I abandoned the project but learned SO MUCH about writing an editor. I really loved it, and I took what I knew to my next project. I used my new-found Angular 2 knowledge to push our product at work to Angular 2. I learned enough about Electron that my next Electron project was pretty straightforward.
Skok was a recent project that I’m still working on. It’s a desktop photo manager app. I used a slightly different stack than with OMEN but I stuck to Electron.
And I did everything right because I took the time to research, plan it, and drill into my head that this isn’t to throw away. It made me acutely aware of places where I wanted to cut corners just to get that pre-Alpha out. Instead, I planned for the long-term.
I built a great architecture around Electron <-> React interactions, and I focused on the developer experience so that the project was pleasant to work with.
At the same time, I had a very tight focus on a minimal setup. My plan was: scan, display photos, and setup for more features. And I did. I got the initial photo indexing and photo viewing down very quickly and when I started working on more features, it was easy to plug them in.