Migrating to Lerna - Part 2

Posted on May 11, 2019 - 👩🏿‍💻 9 min read

In this post we’ll see how to update your development environment to support all of the changes we made in the previous post. This includes building the packages, testing them and showcasing them in a demo site.

Supporting the development environment

This is part 2 of a 3-part series


In the previous post, we talked about how we “refactored” our code in such a way that each component is now an individual package and can (theoretically) work on its own.

This is not the case yet, though; we’re still in a monorepo infrastructure, and we still have tools and scripts that apply to all of the components as a whole.

We need to update our development environment so that it knows each component is now a package: it should build each of these packages, watch for changes, update our demo site, and run the tests correctly.

Let’s start with the scripts.

The obvious scripts

What scripts should go in the package.json of each package?

The obvious one is build, because the build step of each component is independent of all the other components. If we follow our rule of keeping each package independent, we want each component to be an autonomous package - meaning it can build itself.

For the same reason we’ll have a test script as well.
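To make this concrete, here’s a minimal sketch of what each package’s package.json could look like (the babel-based build and the es output folder are assumptions based on this series’ conventions):

{
  "name": "@my-scope/my-button",
  "version": "1.0.0",
  "main": "es/index.js",
  "scripts": {
    "build": "babel src --out-dir es",
    "test": "jest"
  }
}

Whatever tools you use, the point is the same: running npm run build or npm run test inside a package’s directory should just work, with no knowledge of its siblings.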

The missing obvious scripts

That’s about it for the common scripts. Now let’s talk about a script that you’d expect to see but isn’t there - watch.

Like every package that’s built, you would also expect to be able to watch it, so you can change files and have them rebuilt automatically - so why aren’t we adding it? Two reasons:

  • Too many processes.
  • We use webpack-dev-server (or something similar) for our demo site.

What does “too many processes” mean exactly? Well, think about it this way: we started all of this because we couldn’t scale 100+ components, right? That means we now have 100+ packages, each with a watch script that needs to run indefinitely - this chokes up lerna exec. Even if we run everything with --parallel, each “cpu” (assuming we’re all running 8 cores) would still need to constantly switch between 12+ processes.

This doesn’t scale very well.
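For reference, the naive approach we’re ruling out would look roughly like this (assuming each package had its own watch script):

lerna exec --parallel -- npm run watch

With 100+ packages, that’s 100+ long-running build processes fighting over 8 cores.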

Also, you probably already have a way to “demo” your components, where you’re probably using storybook or styleguidist or some homemade solution. Either way, you already have a running “server” that listens to changes in each of the files it’s serving, which means you probably don’t need to create your own watch script.

So, when do you actually need a watch script then? When you want to allow contributors to test their apps with the changes they made to one of the components. That way they can simply npm link the package they need and the watch script will take care of building everything for them.

How would we do it then? We already agreed we can’t create a watch script in each package - so let’s create a watch script in the root package! Although it means each package won’t be fully autonomous, we probably won’t need this script if and when we extract a package out of the monorepo.

That watch script will have just one process listening to all the changes in all packages - whenever a package changes, it rebuilds it. You can achieve that either by implementing it yourself with chokidar & babel or by simply using the --watch flag that comes built into babel - though bear in mind there are several issues with --watch.
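Here’s a minimal sketch of the chokidar & babel route (the packages/*/src layout and the per-package build script are assumptions):

// watch.js - one process at the monorepo root
const chokidar = require('chokidar');
const { execSync } = require('child_process');
const path = require('path');

chokidar
  .watch('packages/*/src/**/*.js', { ignoreInitial: true })
  .on('all', (event, filePath) => {
    // the first directory under packages/ is the package that changed
    const packageDir = filePath.split(path.sep)[1];
    console.log(`${event}: ${filePath} - rebuilding ${packageDir}`);
    // rebuild only the changed package, using its own build script
    execSync('npm run build', {
      cwd: path.join('packages', packageDir),
      stdio: 'inherit',
    });
  });

One process for the whole monorepo, and it only rebuilds the package that actually changed.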

The not so obvious scripts

Lastly, let’s talk about a script you wouldn’t expect to see at all - prepare. This script is really similar to a watch script, but instead of building, it just copies everything from the source folder to the build folder (it won’t really copy, but it’s close enough).

You’re probably asking yourself “why?” and you should. It took me some time to figure this one out myself. Think about it this way, let’s say you have a component called IconButton that composes Button, something like this:

import React from 'react';
import Button from '../Button';
import Icon from '../Icon';

export default ({ icon, ...props }) => (
  <Button>
    <Icon icon={icon} />
  </Button>
);

This means that after the migration script we ran in the previous post, it’ll look like:

import React from 'react';
import Button from '@my-scope/my-button';
import Icon from '@my-scope/my-icon';

export default ({ icon, ...props }) => (
  <Button>
    <Icon icon={icon} />
  </Button>
);

The change is very subtle but important: since these are no longer relative paths, the bundler will now try to resolve the modules from the node_modules directory. We need a way to tell our demo site to load these imports from the right place (and that’s not node_modules). That’s why we have the prepare script!

There’s just one small catch here: we’re copying the source folder to the build folder, but how do the packages end up in node_modules, where we said they’d be resolved from? That’s where yarn comes into the picture.

We use yarn here because it has a deeper integration with lerna. That integration basically means that it’ll link¹ together all the inter-dependencies and hoist all of the dependencies up to the root².

¹ - linking means that we create a symbolic link between two directories in the file system, so that if one of the directories changes, the other one reflects that change; they’re linked.

² - the root is the directory containing the package.json where lerna and all other dev dependencies are installed.
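For reference, enabling this integration looks something like this (a sketch - the packages glob is an assumption):

// lerna.json
{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*"]
}

// package.json (at the root)
{
  "private": true,
  "workspaces": ["packages/*"]
}

With this in place, running yarn install at the root links the local packages together and hoists the shared dependencies.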

This is what the folder structure would look like:

  monorepo-root
    |
    |--- packages
    |      |
    |      |--- Button
    |      |--- IconButton
    |      |--- DatePicker
    |      ...
    |--- node_modules
    |      |
    |      |---@my-scope
    |      |      |
    |      |      |--- my-button ( -> ../../packages/Button)
    |      |      |--- my-icon-button ( -> ../../packages/IconButton)
    |      |      |--- my-date-picker ( -> ../../packages/DatePicker)
    |      |      ...

As you can see, what yarn & lerna will do together is link our local dependencies to the root node_modules.

If you’re not familiar with the node module resolution algorithm, you might wonder - how come this works? I have import Button from '@my-scope/my-button'; in packages/IconButton/IconButton.js - shouldn’t it look for this package in the node_modules folder next to it?

It does look for it there, but that’s not all - it goes up the tree and searches each directory for a node_modules folder; if there is one, it looks for the package there. It keeps going up until it finds your dependency.

So, what will happen in our case? It’ll look for node_modules in IconButton and won’t find it - let’s go up! Now we’re in packages, and still no node_modules - up again! Lastly, we arrive at the root of our monorepo, and there is a node_modules there! It also contains @my-scope/my-button - great! Since it’s a link, we’ll actually end up in packages/Button, which is exactly what we want!
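You don’t have to take my word for it - Node will happily print the exact directories it’s going to search:

// run from within packages/IconButton
console.log(require.resolve.paths('@my-scope/my-button'));
// [
//   '<monorepo-root>/packages/IconButton/node_modules',
//   '<monorepo-root>/packages/node_modules',
//   '<monorepo-root>/node_modules',  <- our symlink lives here
//   ...
// ]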

Now all of our packages look for the right dependencies in the right place, and they all point to the right directories. We’re now left with the matter of the build process.

If you remember, we set es/index.js as the main field in the package.json of each component.

We don’t necessarily have this path yet, or it might not be up to date for all the packages - we fix that in the prepare script, where we copy everything from the source directory to the build directory.

One last thing about prepare: at the beginning, when we talked about prepare, we mentioned that it won’t really copy - that’s because it’ll symlink! That’s right, symlinks again. That way, whenever something changes in the source directory, it automatically gets picked up and “copied” to the build directory. As I said at the start, this script is really similar to watch for a reason.
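A minimal sketch of such a prepare script, assuming each package keeps its source in src/ and points its main at es/index.js:

// prepare.js - run inside each package, e.g. with `lerna exec -- node ../../prepare.js`
const fs = require('fs');
const path = require('path');

const src = path.join(process.cwd(), 'src');
const es = path.join(process.cwd(), 'es');

// instead of copying, link the build folder to the source folder, so every
// change under src/ instantly shows up under es/
if (!fs.existsSync(es)) {
  fs.symlinkSync(src, es, 'junction'); // 'junction' so it also works on Windows
}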

After this, your demo site should be able to run smoothly and use the correct dependencies and update accordingly.

Unit tests

Let’s circle back to unit tests; there are some very subtle things that I think are important to talk about.

The unit tests suffer from the same problem the demo site did. This time, though, you don’t want a script that “patches” things to fit the folder structure - you just want things to work.

The next section is specific to Jest, but the concept should be the same for other test runners.

Jest has a configuration option for a custom resolver. From the docs:

This option allows the use of a custom resolver. This resolver must be a node module that exports a function expecting a string as the first argument for the path to resolve and an object with the following structure as the second argument:

{
  "basedir": string,
  "browser": bool,
  "extensions": [string],
  "moduleDirectory": [string],
  "paths": [string],
  "rootDir": [string]
}

The function should either return a path to the module that should be resolved or throw an error if the module can’t be found.

This means that we can define a file that gets the paths of the original imports and tells Jest where to find them.

Luckily for us, the mapping we created in the previous post is reversible: we turned a folder name into a package name with @my-scope/my-${kebabCase(componentName)}, so pascalCase(packageName.replace('@my-scope/my-', '')) gives us the folder name back - and we’re done!
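Here’s a minimal sketch of such a resolver (the scope name and the packages/<Name>/src layout are assumptions based on this series’ conventions):

// jest-resolver.js
const path = require('path');

// the reverse of the kebabCase we used earlier: icon-button -> IconButton
const pascalCase = (name) =>
  name
    .split('-')
    .map((word) => word[0].toUpperCase() + word.slice(1))
    .join('');

module.exports = (request, options) => {
  // only rewrite the imports we own in the monorepo
  if (request.startsWith('@my-scope/my-')) {
    const componentName = pascalCase(request.replace('@my-scope/my-', ''));
    return path.join(options.rootDir, 'packages', componentName, 'src', 'index.js');
  }
  // everything else resolves normally, relative to the importing module
  return require.resolve(request, { paths: [options.basedir] });
};

Then point Jest at it with resolver: '<rootDir>/jest-resolver.js' in your Jest configuration.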

Note: just make sure you’re only fixing the paths of imports that you own in the monorepo; the rest should remain the same.

At this point, your local environment should be good to go! You should be thrilled if you’ve made it this far! Keep in mind, getting here took me a few weeks of trial & error until I arrived at this solution that I’m pretty pleased with - reading & implementing it should definitely take you less than that.

And now, for the fun part - publishing.