Migrating to Lerna - Part 2
Posted on May 11, 2019 - 👩🏿‍💻 9 min read

In this post we'll see how to update your development environment to support all of the changes we made in the previous post. This includes building the packages, testing them, and showcasing them in a demo site.
Supporting the development environment
This is part 2 of a 3-part series:
- Part 1 - Where do we begin?
- Part 2 - Supporting the development environment
- Part 3 - Publish all of the components!
In the previous post, we "refactored" our code in such a way that each component is now an individual package and can (theoretically) work on its own.
This is not quite the case though; we're still in a monorepo infrastructure, and we still have tools and scripts that apply to all of the components as a whole.
We need to update our development environment so that it knows each component is now a package: it should build each of these packages, watch for changes, update our demo site, and run the tests correctly.
Let's start with the scripts.
The obvious scripts
What scripts should go in the package.json of each package?
The obvious one is build, because the build step of each component is independent of all other components. If we follow our rule of keeping each package independent, we want each component to be an autonomous package - which means it can build itself.
For the same reason we'll have a test script as well.
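A minimal package.json for one such package might look like this (a sketch - the package name is hypothetical, and babel and jest are assumptions based on the tooling mentioned throughout this series):

```json
{
  "name": "@my-scope/my-button",
  "version": "1.0.0",
  "main": "es/index.js",
  "scripts": {
    "build": "babel src --out-dir es",
    "test": "jest"
  }
}
```

From the root, `lerna run build` (or `lerna run test`) will then run the matching script in every package.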
The missing obvious scripts
That's about it for the common scripts. Now let's talk about a script that you'd expect to see but isn't there - watch.
Like every package that's built, you would also expect to be able to watch it, so you can change things and they're automatically rebuilt - so why aren't we adding it? Two reasons:
- Too many processes.
- We use webpack-dev-server (or something similar) for our demo site.
What does "too many processes" mean exactly? Well, think about it this way: we started all of this because we couldn't scale 100+ components, right? That means we now have 100+ packages, each of them with a watch script that needs to run indefinitely - this chokes up lerna exec. Even if we run everything with --parallel, each CPU core (assuming we're all running 8 cores) would still need to constantly switch between 12+ processes.
This doesnât scale very well.
Also, you probably already have a way to "demo" your components, using storybook or styleguidist or some homemade solution. Either way, you already have a running "server" that listens to all of the changes in each of the files it's serving, which means you probably don't need to create your own watch script.
So, when do you actually need a watch script then? If you want to allow contributors to test their apps with the changes they made to one of the components. That way they can simply npm link the package they need, and the watch script will take care of building everything for them.
How would we do it then? We already agreed we can't create a watch script in each package - so let's create a watch script in the root package! Although this means each package won't be fully autonomous, we probably won't need this script if we ever extract a package out of the monorepo.
That watch script will have just one process listening to all the changes in all packages - whenever a package changes, it'll rebuild it. You can achieve that either by implementing it yourself with chokidar & babel, or by simply using the --watch flag that comes built in with babel - though bear in mind there are several issues with --watch.
The not so obvious scripts
Lastly, let's talk about a script you wouldn't expect to see at all - prepare. This script is really similar to a watch script, but instead of building, it just copies everything from the source folder to the build folder (it won't really copy, but it's close enough).
You're probably asking yourself "why?" - and you should. It took me some time to figure this one out myself. Think about it this way: let's say you have a component called IconButton that composes Button, something like this:
import Button from '../Button';
import Icon from '../Icon';

export default ({ icon, ...props }) => (
  <Button {...props}>
    <Icon icon={icon} />
  </Button>
);
This means that after the migration script we ran earlier it'll look like:
import Button from '@my-scope/my-button';
import Icon from '@my-scope/my-icon';

export default ({ icon, ...props }) => (
  <Button {...props}>
    <Icon icon={icon} />
  </Button>
);
The change is very subtle but important: the bundler will now try to resolve your module from the node_modules directory, since it's not a relative path anymore. We need a way to tell our demo site to load these imports from the right place (and that's not node_modules). That's why we have the prepare script!
There's just one small catch here: we're copying the source folder to the build folder, so how does any of that end up in the node_modules these imports resolve from? That's where yarn comes into the picture.
We use yarn here because it has a deeper integration with lerna. That integration basically means that it'll link¹ together all the inter-dependencies and hoist all of the dependencies up to the root².
¹ - linking means that we create a symbolic link between two directories in the file system, so that if one of the directories changes, the other one will also reflect that change; they're linked.
² - the root is the directory containing the package.json where lerna and all other dev dependencies are installed.
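Wiring this up is mostly configuration (a sketch - the exact fields assume the classic lerna + yarn workspaces setup of this era). In lerna.json:

```json
{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "packages": ["packages/*"]
}
```

The root package.json also needs `"private": true` and `"workspaces": ["packages/*"]` so yarn knows which folders to link and hoist.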
This is what the folder structure would look like:
monorepo-root
|
|--- packages
| |
| |--- Button
| |--- IconButton
| |--- DatePicker
| ...
|--- node_modules
| |
| |---@my-scope
| | |
| | |--- my-button ( -> ../../packages/Button)
| | |--- my-icon-button ( -> ../../packages/IconButton)
| | |--- my-date-picker ( -> ../../packages/DatePicker)
| | ...
As you can see, what yarn & lerna do together is link our local dependencies into the root node_modules.
If you're not familiar with the node module resolution algorithm, you might wonder - how come this works? I have import Button from '@my-scope/my-button'; in packages/IconButton/IconButton.js - shouldn't it look for this package in the node_modules folder near it?
It does look for it there, but that's not all - it goes up the entire tree and searches each directory for a node_modules folder; if there is one, it looks for the package there. It'll keep going until it finds your dependency.
So, what will happen in our case? It'll look for node_modules in IconButton - it won't find one, so let's go up! Now we're in packages, and still no node_modules - up we go again! Lastly, we've arrived at the root of our monorepo, and there is a node_modules there! We also have @my-scope/my-button there - great! Since it's a link, we'll actually end up in packages/Button, which is exactly what we want!
Now all of our packages look for the right dependencies in the right place, and they all point to the right directories. We're left with the matter of the build process.
If you remember, we put es/index.js as the main field in the package.json of each component.
We don't necessarily have this path, or it might not be up to date for all the packages - we fix that in the prepare script, where we copy everything from the source to the build directory.
One last thing about prepare: at the beginning, when we talked about prepare, we mentioned that it won't really copy - that's because it'll symlink! That's right, symlinks again. That way, whenever something changes in the source directory, it'll automatically get picked up and "copied" to the build directory. As I said at the start, this script is really similar to watch for a reason.
After this, your demo site should be able to run smoothly, use the correct dependencies, and update accordingly.
Unit tests
Let's circle back to unit tests; there are some very subtle things that I think are important to talk about.
The unit tests will suffer from the same problem the demo site did. This time, though, you don't want a script to "patch" things for the folder structure - you just want things to work.
The next section will be specific to Jest, but the concept should be the same for other test runners.
Jest has a configuration option for a resolver. From the docs:
This option allows the use of a custom resolver. This resolver must be a node module that exports a function expecting a string as the first argument for the path to resolve and an object with the following structure as the second argument:
{ "basedir": string, "browser": bool, "extensions": [string], "moduleDirectory": [string], "paths": [string], "rootDir": [string] }
The function should either return a path to the module that should be resolved or throw an error if the module can't be found.
This means that we can define a file that receives the paths of the original imports and tells Jest where to find them.
Luckily for us, the mapping we made in the previous post - @my-scope/my-${kebabCase(componentName)} - is reversible: we can recover the folder name with pascalCase(packageName.replace('@my-scope/my-', '')), and we're done!
Note: just make sure you're fixing the paths of imports that you own in the monorepo; the rest should remain the same.
At this point your local environment should be good to go! You should be thrilled if you've made it this far! Keep in mind, getting here took me a few weeks of trial & error until I arrived at this solution that I'm pretty pleased with - reading and implementing it should definitely take you less than that.
And now, for the fun part - publishing.