
Testing is a vital part of development. Even though techniques such as linting can help to spot and solve issues, they have their limitations. Testing can be applied to code and applications on many different levels.

You can unit test a specific piece of code, or you can look at the application from the user's point of view through acceptance testing. Integration testing fits between these ends of the spectrum and is concerned with how separate units of code operate together.

You can find a lot of testing tools for JavaScript. The most popular options work with webpack after you configure it right. Even though test runners work without webpack, running them through it allows you to process code the test runners do not understand while having control over the way modules are resolved. You can also use webpack's watch mode instead of relying on one provided by a test runner.

Mocha#

Mocha is a popular test framework for Node. While Mocha provides test infrastructure, you have to bring your own asserts to it. Even though Node assert can be enough, there are good alternatives such as power-assert, Chai, or Unexpected.

mocha-loader allows running Mocha tests through webpack. mocha-webpack is another option that aims to provide more functionality. You'll learn the basic mocha-loader setup next.

Configuring mocha-loader with Webpack#

To get started, include Mocha and mocha-loader to your project:

npm install mocha mocha-loader --save-dev

To make ESLint aware of Mocha globals, tweak as follows:


module.exports = {
  env: {
    mocha: true,
  },
  ...
};

Setting Up Code to Test#

To have something to test, set up a function:


module.exports = (a, b) => a + b;

Then, to test that, set up a small test suite:


const assert = require('assert');
const add = require('./add');

describe('Demo', () => {
  it('should add correctly', () => {
    assert.equal(add(1, 1), 2);
  });
});

Configuring Mocha#

To run Mocha against the test, add a script:


"scripts": {
  "test:mocha": "mocha tests",
  ...
},

If you execute npm run test:mocha now, you should see output:

Demo
  should add correctly


1 passing (10ms)

Mocha also provides a watch mode which you can activate through npm run test:mocha -- --watch. It runs the test suite as you modify the code.

--grep <pattern> can be used to constrain the run if you want to focus only on a particular set of tests.

Configuring Webpack#

Webpack can provide similar functionality through a web interface. The hard parts of the problem have been solved earlier in this book; what remains is combining those solutions through configuration.

To tell webpack which tests to run, they need to be imported somehow. The Dynamic Loading chapter discussed require.context, which allows you to aggregate files based on a rule. It's ideal here. Set up an entry point as follows:


// Skip execution in Node
if (module.hot) {
  const context = require.context(
    'mocha-loader!./', // Process through mocha-loader
    false, // Skip recursive processing
    /\.test\.js$/ // Pick only files ending with .test.js
  );

  // Execute each test suite
  context.keys().forEach(context);
}
To allow webpack to pick this up, a small amount of configuration is required:


const path = require('path');
const merge = require('webpack-merge');

const parts = require('./webpack.parts');

module.exports = merge([
  parts.devServer(),
  parts.page({
    title: 'Mocha demo',
    entry: {
      tests: path.join(__dirname, 'tests'),
    },
  }),
]);
See the Composing Configuration chapter for the full devServer setup. The page setup is explained in the Multiple Pages chapter.
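As a rough sketch, the parts referenced in the configuration could look along these lines, assuming html-webpack-plugin is used for page generation (the actual definitions in webpack.parts.js differ in detail):

```javascript
// webpack.parts.js - a simplified sketch, not the exact code
const HtmlWebpackPlugin = require('html-webpack-plugin');

exports.devServer = () => ({
  devServer: {
    // Keep the terminal output focused on errors
    stats: 'errors-only',
  },
});

exports.page = ({ title, entry } = {}) => ({
  entry,
  plugins: [new HtmlWebpackPlugin({ title })],
});
```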

Add a helper script to make it convenient to run:


"scripts": {
  "test:mocha:watch": "webpack-dev-server --hot --config webpack.mocha.js",
  ...
},
If you want to understand better what --hot does, see the Hot Module Replacement appendix.

If you execute the server now and navigate to http://localhost:8080/, you should see the test:

Mocha in browser

Adjusting either the test or the code should lead to a change in the browser. You can grow your specification or refactor the code while seeing the status of the tests.

Compared to the vanilla Mocha setup, configuring Mocha through webpack comes with a couple of advantages:

  • It's possible to adjust module resolution. Webpack aliasing and other techniques work now, but this would also tie the code to webpack.
  • You can use webpack's processing to compile your code as you wish. With vanilla Mocha that would imply more setup outside of it.

On the downside, now you need a browser to examine the tests. mocha-loader is at its best as a development helper. The problem can be solved by running the tests through a headless browser.

Karma and Mocha#


Karma is a test runner that allows you to run tests against real devices and PhantomJS, a headless browser. karma-webpack is a Karma preprocessor that allows you to connect Karma with webpack. The same benefits as before still apply. This time around, however, there is more control over the test environment.

To get started, install Karma, Mocha, karma-mocha, and karma-webpack:

npm install karma mocha karma-mocha karma-webpack --save-dev

Like webpack, Karma relies on configuration as well. Set up a file as follows to make it pick up the tests:


module.exports = (config) => {
  const tests = 'tests/*.test.js';

  config.set({
    frameworks: ['mocha'],

    files: [
      {
        pattern: tests,
      },
    ],

    // Preprocess through webpack
    preprocessors: {
      [tests]: ['webpack'],
    },

    singleRun: true,
  });
};
The file has to be named exactly karma.conf.js. Otherwise Karma doesn't pick it up automatically.
The setup generates a bundle per test. If you have a large number of tests and want to improve performance, set up require.context as for Mocha above. See karma-webpack issue 23 for more details.

Add an npm shortcut:

"scripts": {
  "test:karma": "karma start",
  ...
},

If you execute npm run test:karma now, you should see terminal output:

webpack: Compiled successfully.
...:INFO [karma]: Karma v1.5.0 server started at

This means Karma is in a waiting state, and you have to visit that URL to run the tests. As per the configuration (singleRun: true), Karma terminates execution after that:

...:INFO [karma]: Karma v1.5.0 server started at
...:INFO [Chrome 57...]: Connected on socket D...A with id manual-73
Chrome 57...): Executed 1 of 1 SUCCESS (0.003 secs / 0 secs)

Given running tests this way can become annoying, it's a good idea to configure alternative ways. Using PhantomJS is one option.

You can point Karma to specific browsers through the browsers field. Example: browsers: ['Chrome'].

Running Tests Through PhantomJS#

Running tests through PhantomJS requires a couple of dependencies:

npm install karma-phantomjs-launcher phantomjs-prebuilt --save-dev

To make Karma run tests through Phantom, adjust its configuration as follows:


module.exports = (config) => {
  ...

  config.set({
    ...

    browsers: ['PhantomJS'],
  });
};

If you execute the tests again (npm run test:karma), you should get output without having to visit a URL:

webpack: Compiled successfully.
...:INFO [karma]: Karma v1.5.0 server started at
...:INFO [launcher]: Launching browser PhantomJS with unlimited concurrency
...:INFO [launcher]: Starting browser PhantomJS
...:INFO [PhantomJS ...]: Connected on socket 7...A with id 123
PhantomJS ...: Executed 1 of 1 SUCCESS (0.005 secs / 0.001 secs)

Given running tests after every change can get boring after a while, Karma provides a watch mode.

PhantomJS does not support ES6 features yet, so you have to preprocess the code for tests using them. The webpack setup is done later in this chapter. ES6 support is planned for PhantomJS 2.5.

Watch Mode with Karma#

Accessing Karma's watch mode is possible as follows:


"scripts": {
  "test:karma:watch": "karma start --auto-watch --no-single-run",
  ...
},

If you execute npm run test:karma:watch now, you should see watch behavior.

Generating Coverage Reports#

To know how much of the code the tests cover, it can be a good idea to generate coverage reports. Doing this requires code-level instrumentation, and the gathered information has to be reported somehow. This can be done through HTML and LCOV reports.

LCOV integrates well with visualization services. You can send coverage information to an external service through a continuous integration environment and track the status in one place.

isparta is a popular, ES6 compatible code coverage tool. Connecting it with Karma requires configuration. Most importantly, the code has to be instrumented through babel-plugin-istanbul. Doing this requires a small amount of webpack configuration as well due to the setup. karma-coverage is required for the reporting portion of the problem.

Install the dependencies first:

npm install babel-plugin-istanbul karma-coverage --save-dev

Connect the Babel plugin so that the instrumentation happens when Karma is run:


"env": {
  "karma": {
    "plugins": [
      [
        "istanbul",
        { "exclude": ["tests/*.test.js"] }
      ]
    ]
  }
}

Make sure to set the Babel environment, so it picks up the plugin:


module.exports = (config) => {
  process.env.BABEL_ENV = 'karma';

  config.set({
    ...
  });
};
If you want to understand the env idea, see the Loading JavaScript chapter.

On the Karma side, reporting has to be set up, and the Karma configuration has to be connected with webpack. karma-webpack provides two fields for this purpose: webpack and webpackMiddleware. You should use the former in this case to make sure the code gets processed through Babel.


const path = require('path');
...

module.exports = (config) => {
  ...

  config.set({
    ...

    webpack: require('./webpack.parts').loadJavaScript({
      include: path.join(__dirname, 'tests'),
    }),

    reporters: ['coverage'],

    coverageReporter: {
      dir: 'build',
      reporters: [
        { type: 'html' },
        { type: 'lcov' },
      ],
    },
  });
};
If you want to emit the reports to specific directories below dir, set subdir for each report.

If you execute Karma now (npm run test:karma), you should see a new directory containing coverage reports. The HTML report can be examined through the browser.

Coverage in browser

LCOV requires specific tooling to work. You can find editor plugins such as lcov-info for Atom. A properly configured plugin can give you coverage information while you are developing using the watch mode.

Jest#

Facebook's Jest is an opinionated alternative that encapsulates functionality, including coverage and mocking, with minimal setup. It can capture snapshots of data making it valuable for projects where you have the behavior you would like to record and retain.

Jest tests follow Jasmine test framework semantics, and it supports Jasmine-style assertions out of the box. In particular, the suite definition is close enough to Mocha's that the current test should work without any adjustments to the test code itself. Jest provides jest-codemods for migrating more complex projects to Jest semantics.

Install Jest first:

npm install jest --save-dev

Jest captures tests through package.json configuration. By default, it detects tests within a tests directory, which also happens to match the naming pattern the project is using:


"scripts": {
  "test:jest:watch": "jest --watch",
  "test:jest": "jest",
  ...
},

Now you have two new commands: one to run tests once and another to run them in a watch mode. To capture coverage information, you have to set "collectCoverage": true in the "jest" settings of package.json or pass the --coverage flag to Jest. It emits the coverage reports below the coverage directory by default.
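For example, always-on coverage collection could be enabled through a fragment like this (a minimal sketch; the field name follows Jest's documented configuration):

```json
{
  "jest": {
    "collectCoverage": true
  }
}
```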

Given generating coverage reports comes with a performance overhead, enabling the behavior through the flag can be a good idea. This way you can control exactly when to capture the information.

Porting a webpack setup to Jest requires more effort, especially if you rely on webpack specific features. The official guide covers quite a few of the common problems. You can also configure Jest to use Babel through babel-jest, as it allows you to use Babel plugins like babel-plugin-module-resolver to match webpack's functionality.
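To sketch the idea, babel-plugin-module-resolver could replicate a webpack alias roughly like this in Babel configuration (the alias name and paths are hypothetical):

```json
{
  "plugins": [
    [
      "module-resolver",
      {
        "root": ["./src"],
        "alias": {
          "components": "./src/components"
        }
      }
    ]
  ]
}
```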

AVA#

AVA is a test runner that has been designed to take advantage of parallel execution. It comes with a test suite definition of its own. webpack-ava-recipe covers how to connect it with webpack.

The main idea is to run both webpack and AVA in watch mode to push the problem of processing code to webpack while allowing AVA to consume the processed code. The require.context idea discussed with Mocha comes in handy here as you have to capture tests for webpack to process somehow.

Mocking#

Mocking is a technique that allows you to replace the objects your code under test depends on with fake implementations. Consider the solutions below:

  • Sinon provides mocks, stubs, and spies. As of version 2.0, it works well with webpack.
  • inject-loader allows you to inject code to modules through their dependencies making it valuable for mocking.
  • rewire-webpack allows mocking and overriding module globals. babel-plugin-rewire implements rewire for Babel.
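To illustrate what a spy does, here is a hand-rolled sketch; Sinon provides a far more complete implementation of the same idea:

```javascript
// A minimal spy: a function that records its calls
// so a test can assert on how it was used.
const createSpy = () => {
  const spy = (...args) => {
    spy.calls.push(args);
  };
  spy.calls = [];
  return spy;
};

// Replace a real dependency with the spy while testing.
const logger = { log: createSpy() };
const add = (a, b) => {
  logger.log(`adding ${a} and ${b}`);
  return a + b;
};

add(1, 2);
console.log(logger.log.calls); // [ [ 'adding 1 and 2' ] ]
```

A test can then assert both on the return value and on how the dependency was called, without a real logger being involved.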

Conclusion#

Webpack can be configured to work with a large variety of testing tools. Each tool has its sweet spots, but they also have quite a bit of common ground.

To recap:

  • Running testing tools through webpack allows you to benefit from its module resolution mechanism.
  • Sometimes the test setup can be quite involved. Tools like Jest remove most of the boilerplate and allow you to develop tests with minimal setup.
  • You can find multiple mocking tools for webpack. They allow you to shape the test environment. Sometimes you can avoid mocking through design, though.

You'll learn to deploy applications using webpack in the next chapter.

