@pralkarz
Last active April 20, 2025 14:39
Case study: investigating and improving an ESLint rule's (`eslint-plugin-n/prefer-node-protocol`) performance

Note

My approach involves some guesswork and might lack depth in a few areas. If you notice any mistakes or opportunities for improvement, the easiest way to reach me is either to comment under this Gist or to write to me directly on Discord, where I have the same nickname as here.

The problem

After profiling ESLint rules' performance in the Svelte repository, Ben McCann noticed that the n/prefer-node-protocol rule takes almost 50% of the total linting time: eslint-community/eslint-plugin-n#404. As with most issues I tackle, I wanted to start by reproducing the result, so I cloned the Svelte repository, installed the dependencies, and ran pnpm build.

Initially, I tried to check the performance in a more granular way with the --stats option (available in ESLint 9+, which is what this particular project uses). This proved infeasible, though: even when outputting JSON and redirecting stdout to a file, the resulting object was so big that my editor hung when trying to format it. The repository is decently large, and the option captures statistics per file per rule (of which there are many), so that's understandable.

In the end, I opted for the same approach as Ben, with the TIMING environment variable. I changed the script in package.json to "lint": "set TIMING=10 && eslint" (TIMING=10 eslint for Unix environments) and ran pnpm lint. Indeed, the results were what I expected, but the process took several seconds, so I attempted to run only that single rule, which ended up being trickier than expected. ESLint's documentation suggests combining the --no-eslintrc and --rule options for that, but disabling the configuration file is not so straightforward in modern projects, as the linter depends on the language options set there, some custom ignores, plugins, etc. I found eslint-nibble as a potential solution, but eventually decided not to bother and ran the whole suite every time, since the runtime was on the order of seconds rather than minutes.
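As an aside, the two forms differ because set VAR=value && command is cmd.exe syntax, while in Unix shells a VAR=value prefix places the variable directly into that one command's environment (a bare TIMING=10 && eslint would only set an unexported shell variable). A quick sanity check:

```shell
# On Unix shells, prefixing the assignment exports the variable to just
# this one command's environment -- no `set` or `&&` required.
TIMING=10 sh -c 'echo "$TIMING"'   # prints 10
```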

Here are the initial results:

Rule                                            | Time (ms) | Relative
:-----------------------------------------------|----------:|--------:
n/prefer-node-protocol                          |   902.571 |    51.8%
no-redeclare                                    |   193.443 |    11.1%
@typescript-eslint/prefer-promise-reject-errors |    90.864 |     5.2%
@typescript-eslint/await-thenable               |    46.521 |     2.7%
@stylistic/quote-props                          |    34.386 |     2.0%
no-misleading-character-class                   |    20.995 |     1.2%
svelte/no-unknown-style-directive-property      |    20.845 |     1.2%
lube/svelte-naming-convention                   |    20.601 |     1.2%
no-loss-of-precision                            |    18.458 |     1.1%
constructor-super                               |    17.821 |     1.0%

The relative time spent mostly matches Ben's, and the differences in absolute time most likely stem from my processor being slower than his. Before delving further into the investigation, I noticed that the rule's code in the repository was slightly outdated compared to eslint-plugin-n's master branch, so I plugged the latest version into node_modules (by literally copying and pasting the lib folder from a freshly cloned plugin repository; there's no TypeScript or any other build step involved, which is why it's so simple to "plug and play" in this case) and ran the benchmark again. The results were pretty much the same, so the problem hadn't been fixed in the meantime.

After syncing with the master branch, my development workflow stayed the same from then on: I would make some changes, copy the file(s) manually to node_modules/eslint-plugin-n/lib/rules, run pnpm lint, and compare. I didn't bother setting up anything more sophisticated, as it wasn't needed for my case.

Caching/memoization

The first solution I attempted without any further profiling was suggested by one of the maintainers in the issue: eslint-community/eslint-plugin-n#404 (comment). It was trivial to implement and seemed like a solid lead, but eventually yielded no significant improvements, both with a simple Object-based caching, and with flru. I shared these findings and the related benchmarks in the issue: eslint-community/eslint-plugin-n#404 (comment).

Further profiling

In search of a pragmatic way to find the slow code, I started blasting console.log("before calling X: ", Date.now()) at various points in the rule, but since Date.now() returns the time in milliseconds, it proved not granular enough. For that reason, I switched to Node's perf_hooks, which supports sub-millisecond accuracy.

The usage is quite simple, especially for people experienced with other Observer-style APIs, e.g. MutationObserver.

  1. Start by initializing the PerformanceObserver object. The provided callback gets triggered every time the observer is notified about a new entry. Afterwards, we configure it with .observe. I used entryTypes: ["measure"] as I'm interested in using the .measure function later on. Details on the other types can be found in the documentation.

```js
const { PerformanceObserver, performance } = require("node:perf_hooks");

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.duration > 0.5) {
      console.log(entry);
    }
  }
});
observer.observe({ entryTypes: ["measure"] });
```
  2. Whenever there's some code that we'd like to measure, we put performance.mark calls before and after it, like so.

```js
performance.mark("before-someFunc");
someFunc();
performance.mark("after-someFunc");
```
  3. And we use performance.measure to notify the observer about a new entry. Since I used 0.5 ms as the arbitrary threshold for logging entries, we also pass the filename (from ESLint's context) as the detail, so we can identify the problematic files that take the longest to lint.

```js
performance.measure("someFunc", {
  detail: context.filename,
  start: "before-someFunc",
  end: "after-someFunc",
});
```

This approach could have been very fruitful, but after the initial draft I had to step away from the computer for a while, so I didn't dig as deep as I'd have liked to, and I didn't conclusively find the root cause of the performance issues.

The solution

The next day, while enjoying my coffee, I decided to look at how other plugins handle imports (e.g. eslint-plugin-import) before continuing the profiling. The approach was quite different from eslint-plugin-n's, so when I got back home, I was eager to test it out. I went ahead and did just that, guessing that it would improve the performance at least a little.

Indeed, it did improve it more than a little (eslint-community/eslint-plugin-n#406). The runtime went from 900 ms all the way down to 50 ms. Now, I haven't rigorously checked what was slower in the previous approach, but I've got two guesses:

  1. Rather than handling estree's CallExpression nodes as ESLint reports them, the plugin essentially duplicated the linter's work by iterating over the global references, parsing them, and creating intermediate objects that would later be iterated over on Program:exit (which, from my understanding, ESLint invokes after it's done traversing the file and goes back up again).
  2. The intermediate objects were also created for ESM imports/exports, and the additional memory overhead could cause slowdowns too, especially for files with a large number of imports.

I rewrote the logic a little bit, got the tests to pass, and performed a smoke test in the Svelte repository – both reporting and fixing (with ESLint's --fix option) turned out to work as intended. At the time of this writing (January 23, 2025), the PR hasn't been merged yet, but assuming I haven't missed any edge cases, it should be quite a performance gain!

Conclusion

Some of the techniques used here can be adapted to other plugins; some cannot. This write-up's goal isn't to be the single source of truth when it comes to ESLint plugins' performance, but rather an inspiration for people who'd like to tackle such issues in the future. Also, this should go without saying, but I'm gonna mention it just in case: it's not my intention to criticize anybody's code or design decisions – mistakes happen, bugs happen, performance problems happen. I managed to spot (and potentially fix) this one (largely thanks to Ben and the entire e18e community), but I'm sure there are many more in the wild. Such is software, whether open- or closed-source, and it's on us to work on them while ensuring mutual respect for one another.
