Let’s Code JavaScript is Now Free

When I first launched Let’s Code JavaScript in 2012, I didn’t expect it to turn into such a huge project. We had a great run. But the last video was published in 2018. It’s time for me to stop accepting subscriptions.

I’m not going to close the site, though. I’m just making it free to everyone. So, although it’s no longer possible to subscribe to Let’s Code JavaScript, all the content is still available.

I also have a bunch of other content you might want to check out:

I won’t be publishing any new content to this site, but my main site, jamesshore.com, sees regular updates. You can keep track of everything I’m doing by subscribing to my RSS feed or following me on Twitter (@jamesshore).

TDD Lunch & Learn

I’m pleased to announce the launch of my new weekly livestream, TDD Lunch & Learn!

Although Let’s Code JavaScript has come to an end, there’s plenty more to learn. Every week in this fast and focused livestream, I’ll use Node.js to explore one technical skill for software developers. I’ll present a challenge, provide a mental framework for solving it, then code a solution live on stream. Each session will include source code so you can follow along. As it’s a livestream, there will also be opportunities for discussion and Q&A.

It’s free! Sessions will be every Tuesday at noon-1pm Pacific time. (3-4pm Eastern; 20-21 UK; 21-22 Central Europe.) Learn more here.

“New” Video Player

When I first launched Let’s Code JavaScript back in 2012, support for video on the web was incredibly iffy. Built-in video players were wonky and some browsers didn't support MP4 (H.264) files.

To get around these issues, I used a third-party video player. It provided a standardized experience on all major browsers. This compatibility came at a cost: as browsers got better at playing video, my old player stayed stuck in the past.

Recently, that old player stopped working on Chrome. It was actually due to Chrome blocking mixed content: my site uses HTTPS, but my CDN used HTTP. But at first, I thought the problem was the video player. So I finally retired it. Now your browser’s built-in player handles everything. This means you can take advantage of modern browser features, like picture-in-picture and variable playback speeds. (They’re usually available on the right-click menu.)

In my testing, the browser with the most video player features was Firefox. I’ve tested on all the other major browsers, too. Let me know if you have any trouble, and enjoy!

The Final Chapter

Six years ago, in May 2012, I launched the Let’s Code JavaScript Kickstarter campaign. Since then, I’ve published 686 episodes.

That’s two or three episodes a week for the past six years. Thousands of subscribers. Uncountable episode views and downloads. I’ve never missed a publication date. And now… it’s time for me to move on.

Episode 610, to be released on April 11th, will be our final episode.

Update: I ended up producing a full wrap-up chapter. Episode 614 is the final episode and it will be released April 23rd.

Although I won’t be releasing any new videos after April 11th, the site will stay up. There’s over 170 hours of content here, so there’s a good chance you haven’t seen it all. Stick around as long as you like. JavaScript tooling changes all the time, but the show’s mindset and approach to development are timeless.

As for me, I’m focusing on the Agile Fluency™ Project with Diana Larsen (famous for Agile Retrospectives: Making Good Teams Great). There are a lot of different ways of doing agile development. We’d like companies to adopt approaches that truly work for them—not just what’s easy or common. You can learn more in our Agile Fluency Model article on Martin Fowler’s site.

Thanks for joining me on this journey! It’s been an amazing six years. Thank you for all the comments, views, and emails. If you see me at a conference or other Agile event, come say hi.

You can start watching our final chapter here.

Let’s Code JavaScript is Free All Weekend

Spread the word! I’m celebrating the 500th episode of our “Recorded Live” series by throwing open the gates and inviting everyone inside. Let’s Code JavaScript is 100% free through the weekend! No login required. It starts now!

Update: The free weekend is over! But you can still sign up for our seven-day free trial!

Start Here. Free Trial Here.

About Let’s Code JavaScript

Let’s Code JavaScript is a show about professional web development. We don’t teach you how to code, and we don’t binge on the latest fads. Instead, we show you how to create reliable, maintainable code, so your code becomes cheaper and cheaper to change over time, not more expensive. It really works!

We have four channels and hundreds of episodes. The main channels are:

  • How To Channel. Perfect for junior developers! A scripted, step-by-step introduction to professional development.

  • Live Channel. Real-world programming for intermediate and senior developers. You see everything—false starts, mistakes, and successes—which gives you what you need to reduce costs on your own real-world projects.

Start with our welcome page for more links and suggestions.

Favorite Videos

Here are some good places to start:

  • Welcome to Let’s Code JavaScript. Newly updated for our 500th episode, this has everything you need to know to get started.

  • The Definitive Guide to Object-Oriented JavaScript: A perennial favorite from the Lessons Learned channel and one of the most popular videos on this site. People say it’s the best description of JavaScript prototypes and inheritance they’ve seen.

  • Gather Your Tools: The start of our How To series for junior developers.

  • Season 5 Recap: A look back at our fifth season, which was about real-time web. We steadily cut the cost of delivering new network behaviors throughout the season. See how we did it in this episode.

Please Respect My Boundaries

Celebrate with us and enjoy as many videos as you like this weekend! With over 130 hours of content here, there’s more than enough to last you all weekend long.

If you still want more after the weekend is over, please subscribe. Because I’m committed to providing DRM-free downloads for subscribers, it is technically possible to download videos without paying for them. It’s not even hard. But please don’t. Without subscribers, I can’t make new videos. Your support makes the series possible in a very real way.

Thanks for understanding, and enjoy the show!

Login Changes Coming

Update: It’s done! If you have any trouble logging in, please let me know.

Persona, the authentication service from Mozilla that we use for login here on Let’s Code JavaScript, is shutting down in November. We’re currently working on replacing it with a different provider. Don’t worry, this won’t impact our release schedule. We’ll continue releasing videos well into the future.

In fact, part of the reason the transition isn’t done yet is that I’m recording my investigation for upcoming episodes of The Lab. Watch this space for an announcement when those are ready. (Or subscribe to the Let’s Code JavaScript RSS feed.)

In case you’re curious: We’ve investigated several auth services. The most promising are Stormpath and Auth0. Neither is perfect, unfortunately, and they have distinct strengths and weaknesses. The short version is that Stormpath is reliable but hard to use, and Auth0 is easy to use but less reliable. (We ended up going with Auth0. Six months later, reliability hasn’t been a problem.)

For security, we never store your password, not even in encrypted form. When we transition to the new service, you’ll get an email with a link to reset your password. Expect that sometime before the end of November.

Thanks for your continued support of Let’s Code JavaScript!

Let’s Code JavaScript Season 5!

I’m proud to announce that season five of the Let’s Code JavaScript “Recorded Live” series launched today. We’re celebrating with free episodes!

Job Opening for Let’s Code JavaScript Viewers in London

A subscriber in London wrote with an interesting request:

Hey James - loving your work! Very refreshing to see XP being brought so cogently into the .js world.

...

Any chance I could post a job ad on your site for paid up individuals?

Anyone who's in London and paid out to get into CD, TDD etc. I would love to add to my hiring pipeline.

I agree—if you’re investing in yourself, particularly in important software engineering concepts like TDD, you’re the kind of person hiring managers should be on the lookout for.

And if you’re in London, VouchedFor is looking for you. They’re open to both grads and veterans.

Read about the job here.

What’s Next for Let’s Code JavaScript Season 5?

We’re getting close to the end of Let’s Code JavaScript’s latest season! There are several options for what we do next. If you’re a subscriber, former subscriber, or interested observer, I’d like your feedback.

The Live Channel

The Live channel is Let’s Code JavaScript’s flagship series. With over 300 episodes and dozens of hours of video, we’re able to get into a level of depth and nuance that isn’t possible with simple tutorials.

To date, we’ve had four seasons.

What should we do for Season 5? As always, everything we do will be test-driven.

  • Responsive Design: Design for multiple device sizes, more in-depth CSS code design, handling window size changes in JavaScript, more CSS testing work.

  • Deployment and DevOps: Continuous deployment, performance optimization, monitoring, scaling, reliability, and so forth.

  • Real-Time Web: Communicating between front-end and back-end, dealing with network issues, synchronizing multiple users, handling race conditions, reliability tradeoffs, and so forth.

  • Persistence and Databases: Interfacing with a database from Node.js, refactoring database schema, testing and maintainability, design issues and object-relational mapping, etc.

Of these, I’m leaning toward Real-Time Web. It’s a nice change of pace from what we’ve done so far and it will raise a lot of interesting code design questions. It also makes sense, from a product perspective, to do that before persistence.

Which of these would you like to see next? Or would you like something else entirely? Place your vote in the comments.

The Monthly Specials

The monthly specials are one of my favorite aspects of the site. It’s my chance to cover topics that don’t fit into the ongoing story of the Recorded Live series.

We recently finished a huge set of series on front-end frameworks (React, AngularJS, Ember.js) and I’ve just launched the new How To channel with a beginner-focused season. This channel isn’t just for beginners, though: each season stands alone. A future season could cover a topic for more advanced developers, such as testing and design with AJAX.

Specials come in three channels.

When the current How To season is done, what would you like to see next?

  • Another How To Season: There are many topics we could cover: databases, AJAX, Node, React, more.

  • More Front-End Frameworks: We’ve only scratched the surface. I’m a bit sick of them, but if the demand is there, I’ll do it. I’ve had requests for Knockout, Backbone, Polymer, and more.

  • Other Lab Topics: An in-depth look at ES2015 changes might be useful. Static typing tools are growing in popularity, with TypeScript and Flow both looking interesting. There’s lots of other tools we could cover, too.

  • Lessons Learned Improvements: The current Lessons Learned episodes are getting a bit long in the tooth. I’d love to update and reorganize them. There are also some topics we’ve covered, such as Selenium and CSS testing, that don’t have Lessons Learned videos yet.

Which of these options is most exciting to you? Do you have another idea? Place your vote in the comments.

Site Improvements

I spend nearly all my time on producing content and other “keep the lights on” activities, but I can always replace a monthly special with a site update project. Is that something you’d want? Possibilities include:

  • Episode Tracking: Keep track of which episodes you’ve watched. This is one of my most commonly-requested features.

  • Episode RSS: An RSS feed just for new videos.

  • Online Account Management: I take care of most account-related needs manually. I’m very responsive to requests, but it might be nice to have more self-service options.

  • Site Liveness: There are various ways I can make the site more convenient to use. The most requested feature is to allow switching to the comments tab without interrupting the video.

  • Video Player Updates: Keyboard shortcuts and playback speed controls would be nice.

  • Search/Tags: Altogether, there are over 350 episodes on the site, and dozens of blog entries too. A tag system would make it easier to find and cross-reference content.

Are any of these worth delaying a special? Is there anything else you’d like to see?

Tell Me What You Think

I want to hear from you! If I don’t get feedback, I’ll just have to fall back to my childhood dream: 24/7 He-Man Marathon. Don’t make me do it.

  • What should we focus on for Season 5 of the Live channel?
  • What specials should come next?
  • Which site improvements could take the place of a special?

Thanks for your help!

Open Source Tools from Let’s Code JavaScript

Let’s Code JavaScript is coming up on its third anniversary, and in that time, we’ve released some useful open source tools. In general, these are modest tools that I’ve used to solve genuine problems. I hope you find them useful as well.

In order of popularity:

karma-commonjs

Test your CommonJS code in Karma without running Browserify, Webpack, or a similar tool. This results in faster test runs and better stack traces.

Our JavaScript Workflow 2015 video demonstrates how to use karma-commonjs, and the Front-End Modules Lessons Learned episode goes into more detail about the what and why of CommonJS.
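To give a sense of the setup, here’s roughly what the Karma configuration looks like with karma-commonjs. (A sketch; the file patterns are illustrative, and the module’s README has the authoritative details.)

// karma.conf.js
module.exports = function(config) {
    config.set({
        frameworks: [ "mocha", "commonjs" ],
        files: [ "src/**/*.js", "test/**/*.js" ],
        preprocessors: {
            "**/*.js": [ "commonjs" ]
        }
    });
};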

Object Playground

This online visualization tool runs real JavaScript and draws a diagram showing all your objects and how they relate to each other. There’s also a much-lauded video about how objects work in JavaScript.

automatopia

A seed project for JavaScript applications. Clone this repository and get an automated build, automated continuous integration support (the real deal), and automated deployment to Heroku. The automated build includes linting with JSHint, Node.js testing with Mocha, front-end CommonJS modules with Browserify and karma-commonjs, and cross-browser testing with Karma and Mocha. The automation scripts use Jake.

Our JavaScript Workflow 2015 video discusses several of the ideas supported by automatopia.

test-console

Sometimes, you just need to test console output. This npm module is a simple and useful way of doing it.

var assert = require("assert");
var stdout = require("test-console").stdout;

var output = stdout.inspectSync(function() {
    console.log("foo");
});
assert.deepEqual(output, [ "foo\n" ]);

simplebuild-jshint

If you want to run JSHint from your own code and there’s no JSHint plug-in available, use this npm module instead. It’s easier than spawning a process or using the JSHint API. (I also think the output is prettier, but that’s me.)

var jshint = require("simplebuild-jshint");

jshint.checkFiles({
    files: [ "*.js", "src/**/*.js", "test/**/*.js" ],
    options: {
        bitwise: true,
        curly: false,
        eqeqeq: true
        // etc
    }
}, callback, errback);

quixote

CSS testing, for real. This npm module is basically an assertion library for your front-end design. It’s fairly young but already works very well for testing layout and responsive design. Its most distinctive feature is that you can make assertions about how elements relate to each other on the page, so your tests are robust to changes in design. It’s fast and has great assertion failure messages.

menu.assert({
  top: logo.bottom.plus(10)   // menu is 10px below logo
});

Quixote uses a lot of deep magic to work and we figured a lot of it out right here on the Let’s Code JavaScript Recorded Live channel. The work started with chapter 31, “The Test-Driven CSS Experiment,” and you can see us put the production version of Quixote into practice in chapter 43, “Quixote.”

big-object-diff

When you need it, you need it. This npm module compares two objects (or arrays) and only displays the differences. I used it for comparing DOM trees in the Legacy Code Challenge series. Normal “deepEqual” assertion libraries don’t work well with objects that large.

Free “How To” Episodes Now Available!

Our How To channel has launched! Every Friday will see the release of a new episode.

The How To channel is FREE! For at least the next month, each new episode will be free. Watch our first episode today and check back every Friday for a new episode! New episodes are listed on the home page under the “Latest Specials” headline.

The How To channel is a bridge for people who have learned the fundamentals of programming but aren't yet experienced professionals. In the series, we explore professional programming techniques as we build a simple JavaScript application from scratch. Topics include:

  • Version control
  • Reproducible builds
  • Static code analysis (linting)
  • Cross-browser testing
  • JavaScript modules
  • Test-driven development (of course!)
  • The Document Object Model
  • Design and refactoring

Today’s episode: Gather Your Tools. We introduce the new channel, then set up our four core tools: a code editor, Node.js, git, and the command line.

Coming Fridays: A New Channel for Beginners

This Friday, Let’s Code JavaScript launches its fourth screencast channel! The How To channel is for programmers starting out in their professional career. It uses the same immersive approach viewers love, with material and pacing specifically designed for beginning programmers.

The How To channel is a bridge for people who have learned the fundamentals of programming but aren't yet experienced professionals. In the series, we explore professional programming techniques as we build a simple JavaScript application from scratch. Topics include:

  • Version control
  • Reproducible builds
  • Static code analysis (linting)
  • Cross-browser testing
  • JavaScript modules
  • Test-driven development (of course!)
  • The Document Object Model
  • Design and refactoring

Episodes are free for the first month, so tell your friends! The first episode comes out this Friday, June 5th, and new episodes will come out every following Friday.

Retiring the TDD Distilled Channel

With the advent of our new channel, I’m retiring the venerable TDD Distilled channel. The channel was a repackaging of my original TDD screencast, which was written in Java rather than JavaScript, and it tended to confuse more than it helped. It’s still available if you want it, but it’s no longer linked in the navigation bar.

The new How To channel joins the Recorded Live, Lessons Learned, and The Lab channels to provide in-depth JavaScript videos suitable for any level of experience. Thanks for watching, and enjoy!

Let’s Code JavaScript Is Free All Weekend

Spread the word! I’m celebrating the 300th episode of our “Recorded Live” series by throwing open the gates and inviting everyone inside. Let’s Code JavaScript is 100% free all weekend! It starts now and continues until midnight Sunday evening / Monday morning eastern time (GMT-5).

Update: The free weekend is over! But you can still sign up for our free trial and see everything you missed!

How to Watch

There are three main channels. Follow these links to see the table of contents for each channel:

  • Recorded Live celebrates its 300th episode this week. It’s forty-one chapters of real-world programming, warts and all, on topics ranging from Node.js, to refactoring and design, to performance, to build automation, to test-driven CSS, and more, and more, and more. I don’t think any live programming series has ever lasted so long or gone into so much depth.

  • Lessons Learned provides concise, illustrated summaries of important topics. Test-driven development, object-oriented programming, workflow, modules, and other core concepts are covered here. These are high-impact videos that cover important topics quickly.

  • The Lab explores tools and advanced ideas. The series on front-end frameworks, including React and AngularJS, are the most popular, and the four-part series on working with legacy code is a must-see if your front-end JavaScript is getting out of control.

Favorite Videos

Here are some good places to start:

  • The Definitive Guide to Object-Oriented JavaScript: A perennial favorite from the Lessons Learned channel and one of the most popular videos on this site. People say it’s the best description of JavaScript prototypes and inheritance they’ve seen.

  • WeeWikiPaint: The first episode of my massive 300-episode live programming series. Viewers praise the quality and depth of the series, and they particularly like that I share warts and puzzles along with successes.

  • Front-End Frameworks: AngularJS: The beginning of my three-part AngularJS review. AngularJS is a hugely popular framework—but is it good? I put Angular through its paces by building a real application (and not a to-do list!). My Front-End Frameworks series are among my most popular, and this one most of all.

Please Respect My Boundaries

Celebrate with us and enjoy as many videos as you like this weekend! With close to 100 hours of content here, there’s more than enough to last you all weekend long.

If you still want more after the weekend is over, please subscribe. Because I’m committed to providing DRM-free downloads for subscribers, it is technically possible to download videos without paying for them. It’s not even hard. But please don’t. Producing this content is my full-time job and without subscribers I’d have to go make my living some other way. Your support makes the series possible in a very real way.

Thanks for understanding, and enjoy the show!

300 Episodes!

The 300th episode of Let’s Code JavaScript’s “Recorded Live” series went up today. Forty-one chapters on topics ranging from Node.js, to refactoring and design, to performance, to build automation, to test-driven CSS, and more, and more, and more. I don’t think any live programming series has ever lasted so long or gone into so much depth.

Add in The Lab and Lessons Learned, and it’s something like 100 hours of content.

To celebrate, I’m giving it all away! This weekend, from midnight Friday evening to midnight Sunday evening, east coast time, everything will be 100% free. Tell your friends! Tell your neighbors! Tell your dog! (And if your dog can program JavaScript, tell everyone!)

Update: The free weekend is over! But you can still sign up for our free trial and see everything you missed!

Thanks for watching, everyone, and for making this show a success.

Free AngularJS Video Available

I’ve updated the Let’s Code JavaScript sampler page with new videos, including the full-length version of my first AngularJS lab and my ever-popular guide to object-oriented JavaScript.

Watch it here.

JavaScript Tooling 2015

Here’s my list of must-have JavaScript tools and modules, updated for 2015. These are the tools I use on every project. They are:

  • Universal. These tools make sense for nearly every JavaScript project.

  • Valuable. You’ll get noticeable, ongoing benefits from using them.

  • Mature. They’ve stood the test of time. You won’t have to spend a lot of time keeping up with changes.

See JavaScript Workflow 2015 for a video describing how to set up a front-end project using these tools. To get started quickly, see my automatopia seed project on GitHub.

tl;dr

Changes since the 2014 edition are called out in each section below.

Build Automation: Jake

(Build automation is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)

Build automation is the first thing I put into place on any new project. It’s essential for fast, repeatable workflow. I constantly run the build as I work. A good build automation tool supports my work by being fast, powerful, flexible, and staying out of the way.

My preferred tool for build automation is Jake. It’s mature, has a nice combination of simplicity and robustness, and it’s code-based rather than configuration-based.

That said, Grunt is the current king of the hill and it has a much better plugin ecosystem than Jake. Grunt’s emphasis on configuring plugins rather than writing code tends to get messy over time, though, and it lacks classic build automation features such as dirty file checking. I think Jake is a better tool overall, but Grunt’s plugins make it easier to get started. If you’re interested in Grunt, I review it in The Lab #1, “The Great Grunt Shootout.”

Another popular build tool is Gulp. It uses an asynchronous, stream-based approach that’s fast and avoids the need for temporary files. But that stream-based approach can also make debugging difficult. Gulp’s pretty minimalistic, too, lacking useful features such as task documentation and command-line parameters. You can read my review of gulp here.

We cover installing Jake and creating a Jakefile in the second half of episode 1, “WeeWikiPaint.” I also have a pre-configured example on GitHub in the automatopia repository. For examples of Grunt and Gulp builds, see the code for Lab #1.
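If you haven’t seen Jake before, here’s a taste of the code-based approach. This is a minimal sketch; the task names and output are mine, not from the episodes.

// Jakefile.js
"use strict";

desc("Default build");
task("default", [ "lint" ], function() {
    console.log("\n\nBUILD OK");
});

desc("Lint everything");
task("lint", function() {
    // run your linter here; see the "Linting" section below
    console.log("Linting...");
});

Because the build is just JavaScript, you can refactor it like any other code.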

Dependency Versioning: Check ’em in

(Dependency management is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)

I’m a big proponent of keeping everything you need to build your code in a single, versioned repository. It’s the simplest, most reliable way to share changes with your team and ensure you can build old versions when you need to.

As a result, unless you’re actually creating an npm module, I prefer to install npm modules locally (in other words, don’t use the -g option, even for tools) and check them into source control. This will isolate you from undesired upstream changes and hiccups.

To do this, you need to ensure that you don’t check in build artifacts. Here’s how to do it with git:

npm install <package> --ignore-scripts --save   # Install without building
git add . && git commit -a                      # Check in the module
npm rebuild                                     # Build it
git status                                      # Display files created by the build
### If there are any build files, add them to .gitignore and check that in.

In the Live channel, we install our tools locally, use scripts to run them, and check them into git. You can see an example of this in the second half of episode 1 when we set up Jake. The automatopia repository also demonstrates this approach. My essay, “The Reliable Build,” goes into more detail.

Continuous Integration: Test before merging

(Continuous integration is introduced in Chapter 1, “Continuous Integration,” and LL1, “Continuous Integration with Git.” It’s also discussed in LL16, “JavaScript Workflow 2015”.)

I’m known for saying, “Continuous integration is an attitude, not a tool.” Continuous integration isn’t about having a build server—it’s about making sure your code is ready to ship at any time. The key ingredients are:

  1. Integrate every few hours.
  2. Ensure the integrated code works.

The most effective way to do this is to use a synchronous integration process that prevents integration build failures.

“Synchronous integration” means that you don’t start a new task until you’ve confirmed that the integration succeeded. This ensures that problems are fixed right away, not left to fester.

Preventing integration build failures is a simple matter of testing your integration before you share it with the rest of the team. This prevents bad builds from disrupting other people’s work. Surprisingly, most CI tools don’t support this approach.

I use git branches to ensure good builds. I set up an integration machine with an integration branch and one dev branch for each development workstation. Development on each workstation is done on that workstation’s dedicated branch.

### Develop on development workstation
  git checkout <dev>              # Work on this machine's dev branch
  # work work work
  <build>  # optional             # Validate your code before integrating

### Integrate on development workstation
  git pull origin integration     # Integrate latest known-good code
  <build>  # optional             # Only fails when integration conflicts

### Push to integration machine for testing
  git push origin <dev>

### Validate on integration machine
  git checkout <dev>              # Get the integrated code
  git merge integration --ff-only # Confirm changes have been integrated
  <build>  # mandatory            # Make sure it really works
  git checkout integration
  git merge <dev> --no-ff         # Make it available to everyone else

You can do this with a manual process or an automated tool. I prefer a lightly-scripted manual approach, as seen in the automatopia repository, because it’s lower maintenance than using a tool.

If you use an automated tool, be careful: most CI tools default to asynchronous integration, not synchronous, and most test the code after publishing it to the integration branch, not before. These flaws tend to result in slower builds and more time wasted on integration errors.

I demonstrate how to set up a basic CI process starting in the second half of episode 3, “Preparing for Continuous Integration.” I show how to automate that process and make it work with a team of developers in Lessons Learned #1, “Continuous Integration with Git.” The automatopia repository also includes an up-to-date version of that CI script. See the “Continuous Integration” section of the README for details.

The process I describe above is for Git, but it should also translate to other distributed version control systems. If you’re using a centralized version control system, such as Subversion, you can use a rubber chicken instead. (Really! It works great.)

Linting: JSHint

(Linting is introduced in Chapter 1, “Continuous Integration,” and discussed in LL16, “JavaScript Workflow 2015”.)

Static code analysis, or “linting,” is crucial for JavaScript. It’s right up there with putting "use strict"; at the top of your modules. It’s a simple, smart way to make sure that you don’t have any obvious mistakes in your code.
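For illustration, here’s the kind of slip-up a linter catches. (A contrived sketch of mine; with JSHint’s undef option enabled, the undeclared loop variable is flagged.)

"use strict";

var total = 0;
for (i = 0; i < 10; i += 1) {    // "i" was never declared with var; JSHint complains
    total += i;
}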

I prefer JSHint. It’s based on Douglas Crockford’s original JSLint but offers more flexibility in configuration.

Another tool that’s been attracting attention lately is ESLint. Its main benefit seems to be a pluggable architecture. I haven’t tried it, and I’ve been happy enough with JSHint’s built-in options, but you might want to check ESLint out if you’re looking for more flexibility than JSHint provides.

Episode 2, “Build Automation & Lint,” shows how to install and configure JSHint with Jake. I’ve since packaged that code up into a module called simplebuild-jshint. You can use that module for any of your JSHint automation needs. See the module for details.

Node.js Testing: Mocha and Chai

(Node.js testing tools are introduced in Chapter 2, “Test Frameworks,” and Lessons Learned #2, “Test-Driven Development with NodeUnit”.)

When I started the screencast, Mocha was my first choice of testing tools, but I had some concerns about its long-term viability. We spent some time in episode 7 discussing those concerns and considering how to future-proof it, but eventually, we decided to go with NodeUnit instead.

It turns out that those concerns were unfounded. Mocha’s stood the test of time and it’s a better tool than NodeUnit. NodeUnit isn’t bad, but it’s no longer my first choice. The test syntax is clunky and limited and even its “minimal” reporter setting is too verbose for big projects.

I recommend combining Mocha with Chai. Mocha does an excellent job of running tests, handling asynchronous code, and reporting results. Chai is an assertion library that you use inside your tests. It’s mature with support for both BDD and TDD assertion styles.
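Here’s what a minimal Mocha + Chai test looks like in the BDD style. (A sketch; the add module is hypothetical.)

var expect = require("chai").expect;
var add = require("./add.js");    // hypothetical module under test

describe("add()", function() {
    it("adds two numbers", function() {
        expect(add(1, 2)).to.equal(3);
    });
});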

See episode 34, “Cross-Browser and Cross-Platform,” (starting around the eight-minute mark) for an example of using Mocha and Chai. That example is for front-end code, not Node.js, but it works the same way. The only difference is how you run the tests. To run Mocha from Jake, you can use mocha_runner.js from the automatopia repository.

For a step-by-step guide to server-side testing, start with episode 7, “Our First Test.” It covers NodeUnit rather than Mocha, but the concepts are transferable. The automatopia repository shows how to use Mocha instead. If you need help figuring out how to use Mocha, leave a comment here or on episode 7 and I’ll be happy to help out.

Cross-Browser Testing: Karma, Mocha, and Chai

(Cross-browser testing is introduced in Chapter 7, “Cross-Browser Testing,” and Lessons Learned #6, “Cross-Browser Testing with Karma.” It’s also discussed in LL16, “JavaScript Workflow 2015”.)

Even today, there are subtle differences in JavaScript behavior across browsers, especially where the DOM is concerned. It’s important to test your code inside real browsers. That’s the only way to be sure your code will really work in production.

I use Karma for automated cross-browser testing. It’s fast and reliable. In the screencast, we use it to test against Safari, Chrome, Firefox, multiple flavors of IE, and Mobile Safari running in the iOS simulator. I’ve also used it to test real devices, such as my iPad.

Karma’s biggest flaw is its results reporting. If a test fails while you’re testing a lot of browsers, it can be hard to figure out what went wrong.

An alternative tool that does a much better job of reporting is Test’em Scripts. It’s superior to Karma in nearly every way, in fact, except the most important one: it doesn’t play well with build automation. As a result, I can’t recommend it. For details, see The Lab #4, “Test Them Test’em.”

I combine Karma with Mocha and Chai. Chai doesn’t work with IE 8, so if you need IE 8 support, try Expect.js. Expect.js has a lot of flaws—most notably, its failure messages are weak and can’t be customized—but it’s the best assertion library I’ve found that works well with IE 8.

We cover Karma in depth in Chapter 7, “Cross-Browser Testing,” and Lessons Learned #6, “Cross-Browser Testing with Karma.” For details about the new config file format that was added in Karma 0.10, see episode 133, “More Karma.” The automatopia repository is also set up with a recent version of Karma.

Smoke Testing: Selenium WebdriverJS

(Smoke testing is introduced in Chapter 5, “Smoke Test,” and Lessons Learned #4, “Smoke Testing a Node.js Web Server.” Front-end smoke testing is covered in Chapter 15, “Front-End Smoke Tests,” and Lessons Learned #13, “PhantomJS and Front-End Smoke Testing.”)

Even if you do a great job of test-driven development at the unit and integration testing levels, it’s worth having a few end-to-end tests that make sure everything works properly in production. These are called “smoke tests.” You’re turning on the app and seeing if smoke comes out.

I used to recommend CasperJS for smoke testing, but it uses PhantomJS under the covers, and PhantomJS has been going through some growing pains lately. Now I’m using Selenium WebdriverJS instead. It’s slower but more reliable.

(In fairness, PhantomJS just came out with a new version 2, which may have fixed its problems. I haven’t had a chance to try it yet.)

We cover Selenium WebdriverJS in chapter 39, “Selenium.” PhantomJS is covered starting with episode 95, “PhantomJS,” and also in Lessons Learned #13. We investigate and review CasperJS in The Lab #5.
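For flavor, a minimal WebdriverJS smoke test looks roughly like this. (A sketch of mine; the URL and expected title are illustrative, not from the screencast’s actual tests.)

var assert = require("assert");
var webdriver = require("selenium-webdriver");

var driver = new webdriver.Builder().forBrowser("firefox").build();

driver.get("http://localhost:8080");                // hit the running server
driver.getTitle().then(function(title) {
    assert.equal(title, "WeeWikiPaint home page");  // did smoke come out?
});
driver.quit();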

Front-End Modules: Browserify and karma-commonjs

(Front-end modules are introduced in Chapter 16, “Modularity,” and Lessons Learned #14, “Front-End Modules.” It’s also discussed in LL16, “JavaScript Workflow 2015”.)

Any non-trivial program needs to be broken up into modules, but JavaScript doesn’t have a built-in way of doing that. Node.js provides a standard approach based on the CommonJS Modules specification, but no equivalent standard has been built into browsers. You need to use a third-party tool.

I prefer Browserify for front-end modules. It brings the Node.js module approach to the browser. It’s simple, straightforward, and if you’re using Node, consistent with what you’re using on the server.
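In case you haven’t seen it, the CommonJS style is exactly what Node uses. (The file names here are illustrative.)

// circle.js: export a function, just as you would in Node
module.exports = function area(radius) {
    return Math.PI * radius * radius;
};

// app.js: require it; Browserify bundles both files for the browser
var area = require("./circle.js");
console.log(area(2));

Run browserify app.js -o bundle.js and load bundle.js with an ordinary script tag.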

Another popular tool is RequireJS, which uses the Asynchronous Module Definition (AMD) approach. I prefer Browserify because it’s simpler, but some people like the flexibility and power AMD provides. I discuss the trade-offs in Lessons Learned #14.

A disadvantage of Browserify is that the CommonJS format is not valid JavaScript on its own. You can’t load a single module into a browser, or into Karma, and have it work. Instead, you must run Browserify and load the entire bundle. That can be slow and it changes your stack traces, which is particularly annoying when doing test-driven development.

In Chapter 17, “The Karma-CommonJS Bridge,” we create a tool to solve these problems. It enables Karma to load CommonJS modules without running Browserify first. That tool has since been turned into karma-commonjs, a Karma plugin.

One limitation of karma-commonjs is that it only supports the CommonJS specification. Browserify does much more, including allowing you to use a subset of the Node API in your front-end code. If that’s what you need, the karma-browserify plugin might be a better choice than karma-commonjs. It’s slower and has uglier stack traces, but it runs the real version of Browserify.

We show how to use Browserify starting with episode 103, “Browserify.” We demonstrate karma-commonjs in episode 134, “CommonJS in Karma 0.10.” There’s a nice summary of Karma, Browserify, and the Karma-CommonJS bridge at the end of Lessons Learned #15. You can find sample code in the automatopia repository.

Notably Missing

These aren’t all the tools you’ll use in your JavaScript projects, just the ones I consider most essential. There are a few categories that I’ve intentionally left out.

Spies, Mocks, and other Test Doubles

I prefer to avoid test doubles in my code. They’re often convenient, and I’ll turn to them when I have no other choice, but I find that my designs are better when I work to eliminate them. So I don’t use any tools for test doubles. I have so few that it’s easy to just create them by hand. It only takes a few minutes.

I explain test doubles and talk about their trade-offs in Lessons Learned #9, “Unit Test Strategies, Mock Objects, and Raphaël.” We create a spy by hand in chapter 21, “Cross-Browser Incompatibility,” then figure out how to get rid of it later in the same chapter. A simpler example of creating a spy appears in episode 185, “The Nuclear Option.”
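If you’re curious what a hand-rolled spy looks like, here’s a sketch. (Not the episodes’ exact code; renderPage is a hypothetical function under test.)

var assert = require("assert");

function createSpy() {
    var spy = function() {
        spy.called = true;
        spy.args = Array.prototype.slice.call(arguments);
    };
    spy.called = false;
    return spy;
}

var onRender = createSpy();
renderPage(onRender);        // pass the spy in place of a real dependency
assert(onRender.called);     // then assert on what happened to it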

If you need a tool for creating test doubles, I’ve heard good things about Sinon.JS.

Front-End Frameworks

One of the most active areas of JavaScript development is client-side application frameworks and libraries. Examples include React, Ember and AngularJS.

This topic is still changing too rapidly to make a solid long-term recommendation. There seems to be a new “must use” framework every year. My suggestion is to delay the decision as long as you can. To use Lean terminology, wait until the last responsible moment. (That doesn’t mean “wait forever!” That wouldn’t be responsible.) The longer you wait, the more information you’ll have, and the more likely that a stable and mature tool will float to the top.

If you need a framework now, my current favorite is React. I have a review of it here and an in-depth video in The Lab.

When you’re ready to choose a framework, TodoMVC is a great resource. Remember that “no framework” can also be the right answer, especially if your needs are simple and you understand the design principles involved.

We demonstrate “not using a framework” throughout the screencast. Okay, okay, that’s not hard—the important thing is that we also demonstrate how to structure your application and create a clean design without a framework. This is an ongoing topic, and several notable chapters of the screencast focus on it.

We’re also investigating front-end frameworks in The Lab. At the time of this writing, React has a review and a video series and so does AngularJS (review, video series). Ember is coming next.

Promises

Promises are a technique for making asynchronous JavaScript code easier to work with. They flatten the “pyramid of doom” of nested callbacks you tend to get in Node.js code.
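To illustrate the difference (the parseConfig, startServer, handleError, and *Async helpers here are all hypothetical):

// Nested callbacks: the "pyramid of doom"
var fs = require("fs");

fs.readFile("config.json", function(err, data) {
    if (err) return handleError(err);
    parseConfig(data, function(err, config) {
        if (err) return handleError(err);
        startServer(config, function(err) {
            if (err) return handleError(err);
        });
    });
});

// The same flow, flattened with promise-returning versions of those functions
readFileAsync("config.json")
    .then(parseConfigAsync)
    .then(startServerAsync)
    .catch(handleError);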

Promises can be very helpful, but I’ve held off on embracing them fully because upcoming changes in JavaScript may make their current patterns obsolete. The co and task libraries use ES6 generators for some beautiful results, and there’s talk of an await/async syntax in ES7, which should solve the problem once and for all.

The newer libraries use promises under the covers, so promises look like they’re a safe bet, but the newer ES6 and ES7 approaches have a different syntax than promises do. If you switch existing code to use promises, you’ll probably want to switch it again for ES6, and again for ES7.

As a result, I’m in the “adopt cautiously” camp on promises. I’ll consider them when dealing with complex asynchronous code. For existing callback code that’s not causing problems, I’ll probably just keep using callbacks. There’s no point in doing a big refactoring to promises when that code will just need to be refactored again to one of the newer styles.

ES6 is supposed to have native support for promises. If you need a promise library in the meantime, I’ve heard that Bluebird is good. For compatibility, be sure to stick to the ES6 API.

There’s Room for More

Is there a particular tool or category that I should have included? Add your suggestions in the comments! Remember, we’re looking for tools that are universal, valuable, and mature, so be sure to explain why your suggestion fits those categories.

An Unconventional Review of AngularJS

AngularJS is everything I expect from a framework. That’s not a good thing.

In November, December, and January, I reviewed AngularJS for Let’s Code JavaScript’s “front-end frameworks” series. All together, I spent forty hours researching, coding, and problem-solving. As usual, my goal was to explore and critique AngularJS by creating a real application.

Angular is probably the most popular front-end framework currently available. It’s produced by a team at Google, which gives it instant credibility, and it’s in high demand by employers. It’s so popular, it has its own acronym. It’s part of the “MEAN” stack: MongoDB, Express, AngularJS, Node.js. A who’s-who of cutting-edge technology.

Angular describes itself as a toolkit for enhancing HTML. It lets you extend HTML with new vocabulary—in the form of “directives”—that turn a static HTML document into a dynamic template. Directives can appear as attributes or tags (or even comments or classes, but that’s unusual) and they turn a static HTML page into something that lives and breathes, seemingly without added JavaScript.

The best example of this is Angular’s famous two-way binding. Your HTML template can include variables, as with most templating languages, but in Angular’s case, your page automatically updates whenever the variables change.

For example, the application I produced for the review has a spreadsheet-like table that changes whenever certain configuration fields change. Here’s the code that renders a row of that table. Notice that there’s no event handling or change monitoring… just a template that describes the cells in the row. Angular automatically ensures that the cells update whenever their values change.

// Copyright (c) 2014-2015 Titanium I.T. LLC. All rights reserved. For license, see "README" or "LICENSE" file.
(function() {
  "use strict";

  var StockMarketCell = require("./stock_market_cell.js");

  var stockMarketRow = module.exports = angular.module("stockMarketRow", [StockMarketCell.name]);

  stockMarketRow.directive("stockMarketRow", function() {
    return {
      restrict: "A",
      transclude: false,
      scope: {
        value: "="
      },
      template:
        '<tr>' +
          '<td stock-market-cell value="value.year()"></td>' +
          '<td stock-market-cell value="value.startingBalance()"></td>' +
          '<td stock-market-cell value="value.startingCostBasis()"></td>' +
          '<td stock-market-cell value="value.totalSellOrders().flipSign()"></td>' +
          '<td stock-market-cell value="value.capitalGainsTaxIncurred().flipSign()"></td>' +
          '<td stock-market-cell value="value.growth()"></td>' +
          '<td stock-market-cell value="value.endingBalance()"></td>' +
        '</tr>',
      replace: true
    };
  });
})();

Magic.

With examples like this, it’s easy to see why Angular is popular. It makes hard problems seem trivial. But will it stand the test of time?

An Unconventional Review

Too many frameworks fall into an all-too-common trap: they make it easy to get started quickly, which is great, and then make it very hard to maintain and extend your code over time. That part’s not so great.

So when I review a framework, I don’t look at the common criteria of performance, popularity, or size. (It’s good to know these things, but you can easily find that information elsewhere.) No, I want to know the answer to a simpler and more vital question:

Over the 5-10+ years I’ll be supporting my product, will this code cause me more trouble than it’s worth?

Most frameworks are designed to save you time when you initially create a product. But that time is trivial in comparison to the cost of maintaining your application for years. Before I can recommend a framework, I need to know that it will stand the test of time. Will it grow and change along with me? Or will I be shackled by a barely-maintainable legacy application in three years?

I look at five common pitfalls.

  1. Lock-In. When I decide to upgrade to a new version, or switch to a different framework, how hard will it be?

  2. Opinionated Architecture. Can I do things in the way that best fits the needs of my app, or do I have to conform to the framework’s pre-canned approach?

  3. Accidental Complexity. Do I spend my time working on my application, or do I waste it on figuring out how to make the framework do what I need?

  4. Testability. Can I test my code using small, fast unit tests, using standard off-the-shelf tools, without excessive mocking?

  5. Server-Side Rendering. Will users have to wait for JavaScript to execute before they see anything useful? Will I have to jump through ridiculous hoops to get search engines to index my site?

I rated Angular in each category with a ☺ (yay!), ☹ (boo!), or ⚇ (it’s a toss-up).

1. Lock-In: ☹ (Boo!)

There’s no question: Angular locks you in. You define your UI with Angular-specific directives, in Angular-specific HTML templates, using Angular-specific jargon and code. There’s no way to abstract it. It will all have to be rewritten when you switch to a different tool.

This isn’t unusual. It’s so usual, in fact, that this level of lock-in normally warrants a “meh” toss-up face. Angular works hard for its frown.

First, Angular wants to own all your client-side code. Writing your app the Angular way means writing validation logic using Angular-specific validators, putting business logic in Angular-specific services, and connecting to the back-end via Angular’s built-in services.

Second, the Angular team has shown that maintenance costs aren’t a priority for them. Angular 1.3 dropped support for IE 8. Angular 2 is a major rewrite of the framework that eliminates several core concepts in the current version. It’s likely to require a rewrite of your app.

This bears repeating: Your entire front-end is locked in, and even staying current will likely require a rewrite. Rewrites are a terrible idea; you’ll spend buckets of money and time just reproducing what you already have. A framework that has a rewrite built into its roadmap is unacceptable, and that’s what AngularJS appears to have.

2. Opinionated Architecture: ⚇ (It’s a toss-up.)

Angular wants you to build your application in a particular way, but it’s not very explicit about it. Call it “passive-aggressive architecture.”

Opinionated architecture is one of those “short-term good, long-term bad” deals. In the short term, an opinionated framework can help you get started quickly by showing you how to structure your application. In the long term, though, an overly-opinionated framework will limit your options. As your needs grow, the opinions of the framework become a straitjacket requiring increasingly complex contortions to overcome.

Angular’s passive-aggressive architecture provides the worst of both worlds. It makes assumptions about your application design, but it doesn’t guide you towards those assumptions. I’m not sure I fully understand it even now, but this is what I’ve gleaned so far:

Fundamentally, Angular assumes you use stateless “service” objects for logic and dumb data-structure objects (objects without methods) for state. Services are effectively global variables; most functions can use any service by referencing its name in a particular way. Data structure objects are stored in the “$scope” associated with templates and directives. The data structure objects are manipulated by “controllers” (glue code associated with templates and directives) and services.
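To make that concrete, here’s a rough sketch of the pattern. (Hypothetical names and logic of mine, not code from my sample app.)

var app = angular.module("portfolio", []);

// A stateless "service" holds the logic...
app.factory("taxService", function() {
    return {
        capitalGains: function(salePrice, costBasis) {
            return (salePrice - costBasis) * 0.15;
        }
    };
});

// ...and the controller puts dumb data objects on $scope for the template.
app.controller("PortfolioController", function($scope, taxService) {
    $scope.sale = { price: 100, basis: 80 };
    $scope.tax = taxService.capitalGains($scope.sale.price, $scope.sale.basis);
});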

I’m not a big fan of this architecture. By separating state and business logic, Angular breaks encapsulation and splits apart tightly coupled concepts. Rather than putting logic alongside the data it operates on, Angular wants you to spread the logic around your application. It risks the “shotgun surgery” code smell: any change requires making lots of little edits.

Angular’s tutorial application demonstrates the problem. The application displays a list of smart phones, and “phone” objects are a core concept. Ideally, a change to the internal structure of the phone objects wouldn’t affect anything else. But they’re just dumb data objects, so a change would require edits throughout the application: the phone-list template, the phone-detail template, and both controllers for those templates.

I prefer rich domain objects that encapsulate state and business logic. That allows me to make changes without breaking things. For my sample app, I used a rich domain layer that relied on immutable value objects. Angular’s passive-aggressive architecture didn’t support that approach—there were times that I had to contort my code to work around Angular’s assumptions—but it wasn’t impossible, either. It could have been worse, and that’s the best I can say about it.

3. Accidental Complexity: ☹ (Boo!)

Angular is known for having a steep learning curve and poor documentation. I think these are symptoms of a bigger problem. It’s not the documentation that’s at fault; it’s Angular. It’s just poorly designed. Here are a few of the flaws I discovered:

  • Leaky abstractions. To use Angular for a non-trivial project, you have to understand, at a deep level, how it works under the covers. You’ll need to understand scopes and how they relate to prototypal inheritance; the digest loop; $watch, $watchCollection, and $apply; and much more.

  • Magic strings as a workaround for poor cohesion. You’ll often have code that’s closely related but spread among different files. They’re connected by using the same string in both places.

  • Obscure sigils everywhere. Angular has multiple tiny languages that you’ll embed into various strings in your application. Be prepared to understand the difference between "=", "&", "=*", and "@"; "E", "A", and "EA"; the "|" operator; and more.

  • Subtle incompatible differences. Problems can be solved in multiple ways, each with small but vital incompatibilities. For example, the way you define a controller will determine the syntax you use in your template and how variables are stored on Angular’s $scope.

  • Bias toward silent failure. It’s easy to do something wrong, have your app not work, and get no indication of why. Did you use "E" where you meant to use "A"? Your application just stopped working.

When I built the sample application for the first time in my React review, it took me 28¾ hours. Doing the same thing with Angular took me 39½ hours, despite having done it once before and being able to reuse some of the React code. That’s more than ten extra hours. The extra time can be laid firmly at the feet of Angular’s excessive complexity.

4. Testability: ⚇ (It’s a toss-up.)

Angular makes a big deal about testing. One of its major features, dependency injection, is specifically intended to make testing easier.

Given this focus, I was surprised how poor Angular’s testing story is. It emphasizes testing logic in controllers and services, but it has poor to non-existent support for testing UI behavior. There’s no support for simulating browser events and it’s flat-out impossible to unit test HTML templates. Custom directives can be tested, but it’s ugly to test a directive that contains another.

Angular focuses on allowing you to unit test business logic. But it only needs to do that because its architecture encourages putting business logic in the UI (specifically, in controllers and services). A better architecture would put business logic in objects that are independent of the UI, rendering the whole thing moot.

A lot of Angular feels like this. Band-aids over self-inflicted wounds.

Once you take out the business logic, as my sample app did, you’re left with testing how Angular renders HTML in reaction to events, and Angular didn’t support me in that. The Angular team recommends using their purpose-built end-to-end testing framework, Protractor, instead.

End-to-end tests are slow and brittle. They should be kept to a minimum, not relied upon as the centerpiece of your testing strategy. Fortunately, by putting my application UI in custom directives, I was able to unit test Angular. It wasn’t pretty, but it worked, so Angular barely slides by with a “meh” face. If you look closely, you can see a single tear sliding down.

5. Server-Side Rendering: ☹ (Boo!)

AngularJS is not meant to run on the server. This isn’t a surprise, nor is it unusual, but it’s something to be aware of.

Summary: Avoid.

Working with Angular was a real slog. Every step exposed a new quirk or challenge to figure out, and by the end of my review, I was well and truly sick of it. If I had done things the Angular way, rather than sticking with my own design, I might have found it easier going, but my purpose was to understand Angular’s long-term maintainability prospects, not get done as quickly as possible.

And those prospects are poor. Angular is a complex framework that’s grown awkwardly. It’s popular, but not good, and I suspect it will quickly fade as better options rise in prominence. With Angular 2 on the horizon, embracing Angular today means you’re likely to need to rewrite in a couple of years. Although the next version may fix its flaws, Angular as it exists today is a poor choice. Avoid it.

If you liked this essay, you’ll probably like:

The Reliable Build

If you look at my WeeWikiPaint codebase, you’ll notice something strange… and a little off-putting. All kinds of crap is checked into the repository: throwaway code experiments, npm modules, and even… IDE settings‽

Given that WeeWikiPaint is the Recorded Live channel’s ongoing example of professional & rigorous software development, what gives? Why the mess?

There’s a reason.

Real-World Software Development

In the Recorded Live series, I’m acting the same way I would if I were on a team developing a real-world software product. In that environment, coordination between team members is important, and it can be surprisingly difficult to maintain. One of the easiest mistakes that can occur is for various development machines to get out of sync. Then you have the dreaded “it worked on my machine” problem.

A variant of this problem is an inability to reproduce old builds. You’re chasing down a bug or something, so you check out an old commit, but it no longer builds, or it fails in a strange way, despite working perfectly in production. Let’s call that the “it no longer works on my machine” problem.

And then there’s the all-too-common case of getting a new development machine and having to spend days to weeks getting everything configured and set up. Also known as the “why doesn’t this #$@%! work on my machine” problem.

These are problems that you never see in a classroom setting, but if you’ve worked in a team environment for more than a few years, you’re sure to have encountered them. They’re painful, annoying, and an utter waste of time. At best! I’ve heard of projects that were irrecoverable because no one could get them to run any more.

The Reliable Reference

Of all the things in your programming environment, there’s only one thing that you can count on to give you the same answer every time: your source repository. When you put something into the repository, you can rely on being able to get exactly that thing back out at some future date. (Nitpicky exceptions aside.)

When else is that true? Your development database changes all the time. Dependencies get updated at the whims of others. Package managers change versions and storage strategies. Your network infrastructure changes and once-vital services are retired. Even your OS changes over time as patches are applied and versions upgraded.

Nothing about your programming environment is the same today as it was 10 years ago… but if you have a code repo that old, you can still get the exact code you had 10 years ago.

But will it run?

Creating a Reliable Build

How do you solve the “it worked on my machine,” “it no longer works on my machine,” and “why doesn’t this #$@%! work on my machine” problems?

My way is to take advantage of the reliable repository. I consciously use it for as much coordination as possible. My ideal is to be able to buy a new computer, clone the repo, run one command, and have everything work exactly as it does on every other development machine.

Even more ideally, I’d like that build to work even when the network cable is disconnected. That way I know that all the state I need is stored in the repo, which means we can always reproduce a previous build with 100% fidelity.

I’m not able to achieve that ideal in every case. There are often big-ticket items, such as Node, that can’t be checked in; in WeeWikiPaint’s current setup, for example, you have to install Node.js manually. But even when I can’t achieve the ideal, I still want the automated build to tell me when I’ve gotten out of sync with the build’s expectations.

Theory in Practice

That’s the underlying philosophy. You can see it play out in the WeeWikiPaint codebase in a variety of small ways.

  • Jake, and all the command-line tools the build uses, are installed locally rather than globally.

  • When you run the “jake” script for the first time, it automatically builds all the npm binaries, including Jake itself.

  • The build fails if the installed version of Node isn’t the version the build expects. (See the sketch just after this list.)

  • The Karma tests confirm that the expected browser and OS versions are being tested.

  • Cross-team IDE settings are stored in the repository (but machine-specific settings are in the .gitignore file).

  • All dependencies, including node_modules, are stored in the repository (but compiled binaries are .gitignore’d).
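
For example, a version check along these lines is all it takes. This is a minimal sketch of the idea, not WeeWikiPaint’s actual build code, and the expected version string is made up:

var EXPECTED_NODE_VERSION = "v0.10.26";   // made-up version; pin your own

if (process.version !== EXPECTED_NODE_VERSION) {
  console.log("Expected Node " + EXPECTED_NODE_VERSION +
    " but found " + process.version + ". Aborting build.");
  process.exit(1);   // fail the build, loudly
}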

The code’s not perfect, and there are some things that I still don’t know how to store reliably, but I get as close as I can. Those node_modules and .idea folders may look like a mess, or a waste of space, but they’re actually a vital part of a reliable build.
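
To give a flavor of that split, a .gitignore along these lines is what I mean. These are hypothetical patterns for illustration, not WeeWikiPaint’s actual file:

# Machine-specific IDE state stays out of the repo
# (the rest of .idea/ is checked in)
.idea/workspace.xml

# Compiled binaries inside checked-in node_modules get rebuilt
# by the build rather than stored
*.node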

(Thanks to “Madara Uchiha” for asking the questions that inspired this essay.)

AngularJS Design & Architecture Livestream: Nov 18, 20, 25

My investigation into AngularJS continues! Over the next few weeks, I’m livestreaming Part II of my AngularJS review. In Part I, we looked at AngularJS fundamentals: controllers, directives, modularity, and testing. Now, in Part II, we’ll be investigating how Angular influences design and architecture.

  • Tuesday, Nov 18th @ 10am PST. Value objects and integrating with an external domain model.

  • Thursday, Nov 20th @ 10am PST. Form fields, cross-directive communication, and events.

  • Tuesday, Nov 25th @ 10am PST. Cross-application coordination and managing application state.

Watch at http://www.hitbox.tv/jamesshore. Each episode starts at 10am PST (GMT-8) and will last as long as needed, probably about two to four hours each. I’ll announce start and end times on Twitter (@jamesshore) as well.

As always, the livestream is unedited and unfiltered. If you like being part of the process, this can be great! You can interact with me and other viewers in the chat and you’ll have a chance to influence our direction. You might even get a mention in the final review.

On the other hand, if you don’t have patience for hours of research and coding, you’re better off waiting for the edited video. It will condense everything down into about 60 tightly-edited minutes. The edited video comes out on December 5th in The Lab.

It should be fun! I hope to see you there.

Quixote 0.6: Test Responsive Designs

Quixote 0.6 is up today. You can download it from npm or view the code on GitHub.

(Quixote is my library for unit testing CSS. It’s based on work we did on the Live channel of the screencast. It’s very fast, very expressive, and very cool, if I do say so myself.)

I learned a lot about determining page and viewport sizes in this release. Even if you’re not interested in Quixote, you’ll probably find that useful. Skip down to the bottom for more.

Testing Responsive Designs

Go to the QuirksMode blog. Shrink the browser window down and scroll to the right. See how the “QuirksMode” logo breaks out of the header? That’s a simple and classic CSS bug. (And, in the case of QuirksMode, I’d bet it’s totally intentional.) Although the width of the header is 100%, the logo and sidebar are positioned outside the body, so the header glitches when the window is narrower than the page.

[Image: a header that doesn’t extend the full width of the browser window.]

Now Quixote can test it.

it("has a header that extends the full width of the page", function() {
  // the "frame" variable is part of our Quixote setup code
  var page = frame.page();
  var header = frame.get(".pageHeader");

  header.assert({
    top: page.top,      // header is flush with top of page
    width: page.width   // header extends entire width of page
  });
});

Of course, that test passes just fine: the bug only shows up when the window is narrow. So Quixote has a new method, frame.resize(), that lets you change the size of your test frame.

it("does not break header when page is narrow", function() {
  // assume 'page' and 'header' are defined in our setup code

  frame.resize(500, 1000);
  header.assert({
    width: page.width   // header is still entire width of page
  });
});

And that leads to this beauty:

Differences found:
width of '.pageHeader' was 475px smaller than expected.
  Expected: 975px (width of page)
  But was:  500px

The QuirksMode blog isn’t responsive, or even fluid, but this illustrates the point. Now that we have the ability to resize the window and compare elements to page sizes, we can test any responsive design. Just use frame.resize() to match your breakpoints and assert that everything lines up the way you want.
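
For example, a breakpoint test might look like this for a design whose sidebar stacks below the content on small screens. The .sidebar and .content elements and the 320x568 size are made up for illustration; ‘sidebar’ and ‘content’ would be retrieved with frame.get() in the setup code, just like ‘header’ above:

it("stacks the sidebar below the content on narrow screens", function() {
  frame.resize(320, 568);   // a typical small-phone size
  sidebar.assert({
    top: content.bottom,    // sidebar has dropped below the content...
    width: page.width       // ...and stretched to the full page width
  });
});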

Other Things You Can Test With 0.6

In addition to comparing elements to the page, you can also compare elements to the viewport. (The viewport is the part of the page you can see in the browser window or frame.) This lets us test all sorts of useful scenarios:
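
(In the examples below, the viewport variable is assumed to come from frame.viewport() in the setup code, just as page came from frame.page() earlier.)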

Test that a lightbox is centered in the window:

lightbox.assert({
  center: viewport.center,
  middle: viewport.middle,
  width: viewport.width.times(2/3),
  height: viewport.height.times(2/3)
});

Test that a cookie disclaimer sticks to the bottom of the window:

disclaimer.assert({
  bottom: viewport.bottom,
  width: viewport.width
}, "cookie disclaimer should be at bottom of window");

frame.scroll(0, 100);
disclaimer.assert({
  bottom: viewport.bottom
}, "scrolling should not affect cookie disclaimer");

Test that a sidebar extends the entire height of the page:

sidebar.assert({
  left: page.left,
  height: page.height
});

Test that the content area takes up the whole page, except the sidebar, and starts below the navigation bar:

content.assert({
  top: navbar.bottom,
  right: page.right,
  width: page.width.minus(sidebar.width)
});

The Internals: How We Determine Viewport and Page Size

Determining the viewport and page size was a major hassle. My final solution is nice and simple, but the process of getting there… oy. I won’t go into all the dead ends, but check out our 100 lines of comments or 273 lines of tests if you’re curious.

The viewport size was the easier of the two. It’s actually standardized, and even IE 8 supports the standard when it’s running in standards mode.

var html = document.documentElement;

var viewportWidth = html.clientWidth;
var viewportHeight = html.clientHeight;

Normally, clientWidth and clientHeight return the width (or height) of an element, including padding, but not including border and margin. But if the element is the root node (our html variable above), they’re specified to return the size of the viewport, as long as you’re not in quirks mode.

Obvious, right? Thanks to Peter-Paul Koch of QuirksMode for the essay that finally pointed this out to me.
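
If you want to be defensive about that quirks mode caveat, you can check document.compatMode before trusting those values: it reports “CSS1Compat” in standards mode and “BackCompat” in quirks mode. Here’s a sketch of the check; I’m not claiming it’s what Quixote actually does:

if (document.compatMode !== "CSS1Compat") {
  // In quirks mode, html.clientWidth/clientHeight don't report the viewport size.
  throw new Error("Can't determine viewport size: page is in quirks mode");
}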

Now for the page size. In his excellent essay series, PPK said it isn’t possible to find the document width. I knew I had my work cut out for me. Many, many test runs later, I had it.

var html = document.documentElement;
var body = document.body;

var pageWidth = Math.max(html.scrollWidth, body.scrollWidth);
var pageHeight = Math.max(html.scrollHeight, body.scrollHeight);

This works on all the browsers I tested. (Firefox, Chrome, Safari, Mobile Safari, and IE 8-11.) I can’t say for sure that it will work everywhere, though, and there may be some test cases I didn’t think of.

Why does it work? Well, it turns out that Firefox and IE behave one way, and Safari and Chrome behave another.

  • On Firefox and IE, html.scrollWidth returns the width of the page. This matches the current working draft standard. On Safari and Chrome, though, html.scrollWidth leaves out the <html> element’s border.

  • On Safari and Chrome, body.scrollWidth returns the width of the page. Firefox and IE correctly return just the width of the body element.

So neither value gives the right answer on all browsers, but by taking the larger of the two, we get something that works. So far.
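
Put together, the technique fits in a tiny helper. This is a sketch of the approach described above, not Quixote’s actual source:

// Page dimensions: take the larger of the two scroll sizes so the
// result is correct on both browser families described above.
function pageSize() {
  var html = document.documentElement;
  var body = document.body;
  return {
    width: Math.max(html.scrollWidth, body.scrollWidth),
    height: Math.max(html.scrollHeight, body.scrollHeight)
  };
}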

What’s Next

With this release, Quixote has all the layout assertions I originally planned. It’s ready for field trials. I’m going to slow the pace of new features for a while and see how Quixote works on real-world projects. I plan to release a steady stream of small patches as issues are found. Then, after it’s had at least a few months to bake, I’ll decide on the next major features.

As it is, the new viewport, page, and resizing features make Quixote a robust solution for unit testing layout for responsive and non-responsive sites. It’s solid, fast, and has great documentation. Give it a try.

Quixote 0.6 is available to download now from npm and GitHub.
