One thing you shouldn’t do is just grab everything you can find. Third-party code isn’t free, even when there’s no money involved. There’s a cost in terms of reduced flexibility and increased vulnerability to the whims of people outside your project. Tools are useful and valuable, but they need to pull their weight.
I have three stringent criteria:

The tools in this list are universal. They apply to nearly any JavaScript project, regardless of what you're building.

The tools in this list are valuable. They would take you more than a few days of programming to replicate.

The tools in this list are mature. They've stood the test of time, and they're maintained by people who take quality software development seriously. You won't have to spend a lot of time keeping up with changes.
- Task automation: Jake
- Linting: JSHint
- Dependency versioning: npm --ignore-scripts
- Continuous integration: test before merging
- Node.js testing*: Mocha and Chai
- Cross-browser testing: Karma, Mocha, and Expect.js
- Smoke testing*: CasperJS
- Front-end modules: Browserify and the Karma-CommonJS Bridge
*These recommendations have changed since their introduction in the screencast.
Task Automation: Jake
(Task automation is introduced in Chapter 1, “Continuous Integration.”)
Task automation, also called “build automation,” is the first thing I put into place on any new project. It’s essential for fast, repeatable workflow. As you can see in the screencast, I constantly run the build as I work. A good task automation tool supports my work by being fast, powerful, flexible, and staying out of the way.
My preferred tool for task automation is Jake. It’s mature, has a nice combination of simplicity and robustness, and it’s code-based rather than configuration-based.
That said, Grunt is the current king of the hill for task automation and it has a much better plugin ecosystem than Jake. Personally, I think Jake is a slightly better tool overall, but Grunt’s plugins make it easier to get started. If you’re interested in Grunt, I review it in The Lab #1, “The Great Grunt Shootout.”
Another up-and-coming build tool is gulp.js. At the time I reviewed it, it was still pretty immature, and I don’t recommend it yet. The maintainers also had a “my way or the highway” attitude that doesn’t bode well for future improvements. You can read my review of gulp here.
We cover installing Jake and creating a Jakefile in the second half of episode 1, “WeeWikiPaint.” I also have a pre-configured example on GitHub in the automatopia repository. For examples of Grunt and Gulp builds, see the code for Lab #1.
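If you haven't seen one before, here's a minimal sketch of a Jakefile. The task names and bodies are placeholders of my own, not the screencast's actual build:

```javascript
/* Jakefile.js -- a minimal sketch. The desc() and task() globals
   are provided by the Jake CLI; run with `jake` or `jake lint`. */
"use strict";

task("default", ["lint", "test"]);

desc("Lint everything");
task("lint", function() {
  console.log("Linting...");   // call your lint tool here
});

desc("Run the tests");
task("test", function() {
  console.log("Testing...");   // call your test runner here
});
```

Because a Jakefile is ordinary JavaScript rather than configuration, you can refactor your build with functions and modules just like any other code.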
"use strict"; at the top of your modules. it’s a simple, smart way to make sure that you don’t have any obvious mistakes in your code.
Episode 2, “Build Automation & Lint,” shows how to install and configure JSHint with Jake. I’ve since packaged that code up into a module called simplebuild-jshint. You can use that module for any of your JSHint automation needs. See the module for details.
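As a quick illustration of what "use strict"; buys you (this example is mine, not the screencast's): in strict mode, assigning to a variable you never declared throws an error instead of silently creating a global.

```javascript
"use strict";

function demo() {
  "use strict";   // the directive also works per-function
  try {
    misspelledVariable = 1;   // oops -- never declared
    return "no error";
  } catch (e) {
    return e.name;
  }
}

console.log(demo());   // "ReferenceError"
```

Without the directive, that typo would quietly create a global variable, and the bug might not surface until much later.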
Dependency Versioning: npm --ignore-scripts

(Dependency management is introduced in Chapter 1, "Continuous Integration.")
I’m a big proponent of keeping everything you need to build your code in a single, versioned repository. It’s the simplest, most reliable way to share changes with your team and ensure you can build old versions when you need to. (I talk about this further in the “Ten-Minute Build” section of The Art of Agile Development.)
As a result, unless you're actually creating an npm module, I recommend that you install npm modules locally (in other words, don't use the -g option, even for tools) and check them into source control. This will isolate you from undesired upstream changes and hiccups.
There is a catch: some npm modules create build artifacts when you install them, and those shouldn't be checked into source control. Fortunately, npm has recently added an --ignore-scripts option that makes it easy to see which files to ignore. See this blog entry for details.
In the Live channel, we install our tools locally, use scripts to run them, and check them into git. You can see an example of this in the second half of episode 1 when we set up Jake. The automatopia repository also demonstrates this approach.
Continuous Integration: Test Before Merging
I’m known for saying, “Continuous integration is an attitude, not a tool.” Continuous integration isn’t about having a build server—it’s about making sure your code is ready to ship at any time. The key ingredients are known-good code and frequent whole-project integration.
I recommend using a synchronous integration process that guarantees your code is known-good before you make it available to the rest of your team. (For more information about why synchronous integration is a good idea, see the “Continuous Integration” section of The Art of Agile Development.) The simplest way to do this is to test your code before you merge it into the integration branch.
You can do this with a manual process or an automated tool. If you use an automated tool, be careful: most CI tools default to asynchronous integration, not synchronous, and most test the code after publishing it to the integration branch, not before. These flaws tend to result in slower builds and more time wasted on integration errors.
(That said, asynchronous integration is necessary when you have a slow build, so it’s best to use a CI tool in that case. Try to find one that rejects failed integrations before publishing them to the integration branch.)
I demonstrate how to set up a basic CI process starting in the second half of episode 3, “Preparing for Continuous Integration.” I show how to automate that process and make it work with a team of developers in Lessons Learned #1, “Continuous Integration with Git.” The automatopia repository also includes an up-to-date version of that CI script. See the “Continuous Integration” section of the README for details.
The process I describe above is for Git, but it should also translate to other distributed version control systems. If you’re using a centralized version control system, such as Subversion, you can use a rubber chicken instead. (Really! It works great.)
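At its heart, synchronous integration is just an ordering rule: prove the build is good, then publish. This sketch captures that rule; the function and helper names are hypothetical, and a real script shells out to git and your build tool:

```javascript
"use strict";

// Synchronous integration: the build must pass on a private
// branch BEFORE the code reaches the integration branch.
// runBuild and mergeToIntegration are hypothetical callbacks.
function integrate(runBuild, mergeToIntegration) {
  if (!runBuild()) {
    return "rejected: a failed build never reaches the team";
  }
  mergeToIntegration();
  return "integrated";
}
```

Note the order: merging happens only after the build passes, which is what keeps the integration branch known-good at all times.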
Node.js Testing: Mocha and Chai
When I started the screencast, Mocha was my first choice of testing tools, but I had some concerns about its long-term viability. We spent some time in episode 7 discussing those concerns and considering how to future-proof our use of it, but eventually we decided to go with NodeUnit instead. Mocha has held up well since then, and it's once again my recommendation.
I recommend combining Mocha with Chai. Mocha does an excellent job of running tests, handling asynchronous code, and reporting results. Chai is an assertion library that you use inside your tests. It’s mature with support for both BDD and TDD assertion styles.
See episode 34, “Cross-Browser and Cross-Platform,” (starting around the eight-minute mark) for an example of using Mocha and Chai. That example is for front-end code, not Node.js, but it works the same way. The only difference is how you run the tests. To run Mocha from Jake, copy lines 22-41 of this file.
For a step-by-step guide to server-side testing, start with episode 7, “Our First Test.” It covers NodeUnit rather than Mocha, but the concepts are transferable. If you need help figuring out how to use Mocha instead, leave a comment here or on episode 7 and I’ll be happy to help out.
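For reference, a Mocha test using Chai's expect style looks something like this. The test subject is made up; run the file with your locally-installed Mocha (node_modules/.bin/mocha):

```javascript
"use strict";

// A sketch of a Node.js test file. Mocha provides describe()
// and it(); Chai provides the assertions.
var expect = require("chai").expect;

describe("a wiki page", function() {
  it("starts out blank", function() {
    var page = { content: "" };          // hypothetical object
    expect(page.content).to.equal("");
  });
});
```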
Cross-Browser Testing: Karma, Mocha, and Expect.js
I recommend Karma for automated cross-browser testing. It’s fast and reliable. In the screencast, we use it to test against Safari, Chrome, Firefox, multiple flavors of IE, and Mobile Safari running in the iOS simulator. I’ve also used it to test real devices, such as my iPad.
Karma’s biggest flaw is its results reporting. If a test fails while you’re testing a lot of browsers, it can be hard to figure out what went wrong.
An alternative tool that does a much better job of reporting is Test’em Scripts. It’s superior to Karma in nearly every way, in fact, except the most important one: it doesn’t play well with build automation. As a result, I can’t recommend it. For details, see The Lab #4, “Test Them Test’em.”
I combine Karma with Mocha and Expect.js. Expect.js has a lot of flaws—most notably, its failure messages are weak and can’t be customized—but it’s the best assertion library I’ve found that works well with IE 8. (Chai doesn’t.) If you don’t care about IE 8, I recommend using Chai instead.
We cover Karma in depth in Chapter 7, “Cross-Browser Testing,” and Lessons Learned #6, “Cross-Browser Testing with Karma.” For details about the new config file format that was added in Karma 0.10, see episode 133, “More Karma.” The automatopia repository is also set up with a recent version of Karma.
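For orientation, here's a minimal sketch of the configuration file format introduced in Karma 0.10. The frameworks, browsers, and file patterns are examples only, and each framework and browser needs its corresponding Karma plugin installed:

```javascript
// karma.conf.js -- a minimal sketch of the Karma 0.10+ config
// format. All values shown are examples.
module.exports = function(config) {
  config.set({
    frameworks: ["mocha"],                   // needs karma-mocha
    files: ["src/**/*.js", "test/**/*.js"],  // what to load
    browsers: ["Chrome", "Firefox"],         // needs launchers
    autoWatch: true                          // re-run on change
  });
};
```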
Smoke Testing: CasperJS
(Smoke testing is introduced in Chapter 5, “Smoke Test,” and Lessons Learned #4, “Smoke Testing a Node.js Web Server.” Front-end smoke testing is covered in Chapter 15, “Front-End Smoke Tests,” and Lessons Learned #13, “PhantomJS and Front-End Smoke Testing.”)
Even if you do a great job of test-driven development at the unit and integration testing levels, it’s worth having a few end-to-end tests that make sure everything works properly in production. These are called “smoke tests.” You’re turning on the app and seeing if smoke comes out.
I recommend CasperJS for smoke testing. CasperJS is a friendly API for PhantomJS (a friendly ghost, get it?), which is a tool for scripting the WebKit browser engine. You can use CasperJS to interact with a web page in the same way that a user would.
One disadvantage of CasperJS (and PhantomJS) is that, although it’s running a real browser engine, that’s not quite the same thing as running a real browser. It’s great for smoke tests, but CasperJS isn’t a substitute for proper cross-browser testing against real browsers.
A popular alternative is Selenium. I prefer CasperJS because it's more self-contained, but Selenium has the advantage of running against real browsers. If you aren't using Karma to get good cross-browser coverage, Selenium is probably a better choice than CasperJS.
We cover PhantomJS starting with episode 95, “PhantomJS,” and also in Lessons Learned #13. (Because CasperJS uses PhantomJS under the covers, it’s useful to know how PhantomJS works.) We investigate and review CasperJS itself in The Lab #5. The sample code for CasperJS is available on GitHub.
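The overall shape of a CasperJS script looks like this (the URL is hypothetical; run the file with the casperjs executable, not Node):

```javascript
// smoke.js -- a sketch of a CasperJS smoke test.
var casper = require("casper").create();

casper.start("http://localhost:8080/", function() {
  // If the server is down, start() fails and we never get here.
  this.echo("Loaded page with title: " + this.getTitle());
});

casper.run();
```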
Front-End Modules: Browserify and the Karma-CommonJS Bridge
I prefer Browserify for front-end modules. It brings the Node.js module approach to the browser. It’s simple, straightforward, and if you’re using Node, consistent with what you’re using on the server.
Another popular tool is RequireJS, which uses the Asynchronous Module Definition (AMD) approach. I prefer Browserify because it’s simpler, but some people like the flexibility and power AMD provides. I discuss the trade-offs in Lessons Learned #14.
In Chapter 17, “The Karma-CommonJS Bridge,” we create a tool to solve these problems. It enables Karma to load CommonJS modules without running Browserify first. That tool has since been turned into an official Karma plugin. Recent versions of Karma also support source maps, which makes the Karma-CommonJS bridge less necessary, but the bridge is still nice for avoiding the Browserify build step during TDD.
We show how to use Browserify starting with episode 103, “Browserify.” We demonstrate the official Karma-CommonJS plugin in episode 134, “CommonJS in Karma 0.10.” There’s a nice summary of Karma, Browserify, and the Karma-CommonJS bridge at the end of Lessons Learned #15. You can find sample code on GitHub.
There are a few categories that I’ve intentionally left out.
Spies, Mocks, and Other Test Doubles
I prefer to avoid test doubles in my code. They're often convenient, and I'll turn to them when I have no other choice, but I find that my designs are better when I work to eliminate them. As a result, I don't use any tools for creating test doubles; I need so few that it's easy to create them by hand. It only takes a few minutes.
I explain test doubles and talk about their trade-offs in Lessons Learned #9, “Unit Test Strategies, Mock Objects, and Raphaël.” We create a spy by hand in chapter 21, “Cross-Browser Incompatibility,” then figure out how to get rid of it later in the same chapter. A simpler example of creating a spy appears in episode 185, “The Nuclear Option.”
If you need a tool for creating test doubles, I’ve heard good things about Sinon.JS.
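To show how little you need a tool, here's a hand-rolled spy. The details are illustrative, not the screencast's exact code:

```javascript
"use strict";

// A spy is just a fake function that records how it was called.
function createSpy() {
  function spy() {
    spy.callCount += 1;
    spy.lastArgs = Array.prototype.slice.call(arguments);
  }
  spy.callCount = 0;
  spy.lastArgs = null;
  return spy;
}

// Usage: pass the spy wherever a real callback is expected,
// then assert on what it recorded.
var save = createSpy();
save("draft", 42);
console.log(save.callCount);   // 1
console.log(save.lastArgs);    // [ 'draft', 42 ]
```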
Front-End Frameworks

This topic is still changing too rapidly to make a solid long-term recommendation. There seems to be a new "must use" framework every year. My recommendation is to delay the decision as long as you can. To use Lean terminology, wait until the last responsible moment. (That doesn't mean "wait forever!" That wouldn't be responsible.) The longer you wait, the more information you'll have, and the more likely it is that a stable, mature tool will have floated to the top.
When it’s time to make the decision, TodoMVC is a great resource. Remember that “no framework” can also be the right answer, especially if your needs are simple and you understand the design principles involved.
We demonstrate “not using a framework” throughout the screencast. Okay, okay, that’s not hard—the important thing is that we also demonstrate how to structure your application and create a clean design without a framework. This is an ongoing topic, but here are some notable chapters that focus on it:
- Chapter 13, “Design, Objects, & Abstraction”
- Chapter 18, “Drag and Drop” (starting with episode 123, “A Question of Design.”)
- Chapter 22, “Fixing Bad Code”
- Chapter 26, “Refactoring”
There’s Room for More
Is there a particular tool or category that I should have included? Add your suggestions in the comments! Remember, we’re looking for tools that are universal, valuable, and mature, so be sure to explain why your suggestion fits those categories.
Thanks for reading!