It's no secret that Reviewable's performance is not exactly best-in-class. While it can deal with large reviews better than GitHub—mainly by not trying to precompute all the diffs and stuff them onto one page—the page load latency and lack of smoothness can quickly get annoying.
The TL;DR is that as of now improving performance is priority #1. It won't happen overnight but my aim is to reach a point where developers choose Reviewable because of its performance, not in spite of it. The rest of this post lays out the technical details of my plan for how to get there—with the important proviso that no plan survives contact with the code.
1. Profile, investigate, and optimize
My first pass is the usual one: profile the system (adding instrumentation where necessary), investigate any anomalies, and optimize where possible. For example, I recently noticed that permission checks were slower than expected. It turns out that i) a server instance dedicated to permission checks was accidentally disabled, so they were being handled by the general pool, ii) a stronger-than-necessary encryption key was adding 300ms of decryption overhead to the process, and iii) permission tickets were expunged sooner than necessary, requiring frequent re-check requests. (All of these are now fixed.)
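The ticket-expiry part of that fix is essentially a cache-TTL tuning problem. As a generic illustration of the pattern (class, method, and TTL names here are hypothetical, not Reviewable's actual code):

```javascript
// Generic sketch of a TTL cache for permission tickets. Expunging tickets
// too aggressively (too small a TTL) turns every permission check into a
// fresh round trip, which is the failure mode described above.
class TicketCache {
  constructor(ttlMillis, now = Date.now) {
    this.ttlMillis = ttlMillis;
    this.now = now;            // injectable clock, handy for testing
    this.tickets = new Map();  // key -> {value, expires}
  }

  set(key, value) {
    this.tickets.set(key, { value, expires: this.now() + this.ttlMillis });
  }

  get(key) {
    const entry = this.tickets.get(key);
    if (!entry) return undefined;
    if (entry.expires <= this.now()) {
      // Expired: drop it, forcing a fresh permission check upstream.
      this.tickets.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```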
I've also been digging into page load latency. There's a new `?debug=latency` query param you can add to get some metrics logged to the console on every page transition. If you feel that a review is loading particularly slowly for you, please open an issue with those numbers and I'll be able to quickly tell if something unusual is going on and where to dig. These metrics are sampled into an analytics system as well so I can look for long-term trends, and I've been searching for more tools that might help spot performance issues and prevent regressions. I'm evaluating SpeedCurve, but any other suggestions you might have would be appreciated.
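The general shape of such a debug gate looks something like this (the `?debug=latency` flag is real, but the helper names and metric format below are illustrative, not Reviewable's actual code):

```javascript
// Hypothetical sketch: gate extra latency logging behind a ?debug=latency
// query param. Accepts a comma-separated list, e.g. ?debug=latency,digest.
function parseDebugFlags(search) {
  const value = new URLSearchParams(search).get('debug');
  return new Set(value ? value.split(',') : []);
}

const debugFlags = parseDebugFlags(
  typeof location === 'undefined' ? '' : location.search);

function logLatency(label, startTime) {
  if (!debugFlags.has('latency')) return;
  // performance.now() is a monotonic, sub-millisecond timestamp, so it's
  // safer for measuring durations than Date.now().
  console.log(
    `[latency] ${label}: ${(performance.now() - startTime).toFixed(1)}ms`);
}
```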
Other things I'm trying include lightening up the session resumption process by taking full control of authentication instead of relying on Firebase, and spinning up the diff worker earlier during page load so it can initialize concurrently. There's lots more to investigate, but one thing I won't do is try to optimize individual UI elements (such as the file matrix, which is known to slow things down when there are a lot of files) because...
2. Goodbye Angular, hello Vue
Reviewable's front end is built on Angular. Angular was pretty amazing when it was released in 2010, and certainly let me get Reviewable off the ground quickly, but it hasn't aged well. More often than not, I've ended up needing to bypass its digest mechanism to eke out performance, heaping incidental complexity onto the code. While there's still scope left for optimization, fighting the framework is tiring and not sustainable in the long term.
Hence I've decided to replatform onto Vue.js. Vue.js is friendly, well-documented, similar to Angular, and v2 shows promise in performance benchmarks when matched against heavyweights like React, Angular v2, et al. Even so, porting some 32k lines of JS, HTML, and CSS—plus finding replacements for the many libraries that Reviewable depends on—is going to be a major endeavor.
To reduce risk, I first plan to extract all the “model” code into a clean object-oriented structure that maps directly onto the Firebase schema, leaving less code to port. This will require yet another new framework that I'm building (since it needs to work with both Angular and Vue) but it's the key to a much better designed system that will be able to take on the next layer of features without collapsing. I'm especially excited about the super-efficient system for updating derived properties without needing to explicitly declare dependencies! This should improve performance in and of itself, and also get rid of a whole category of pernicious “data race” bugs that are very hard to track down.
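The no-declared-dependencies idea is along the lines of this minimal sketch. It shows the general auto-tracking technique (the same one behind Vue's computed properties), not the actual framework code:

```javascript
// Minimal sketch of automatic dependency tracking: while a derived
// property recomputes, every reactive value it reads quietly subscribes
// it to future changes, so dependencies never need to be declared.
let activeComputation = null;

function reactive(initialValue) {
  let value = initialValue;
  const subscribers = new Set();
  return {
    get() {
      // Whoever is computing right now becomes a subscriber.
      if (activeComputation) subscribers.add(activeComputation);
      return value;
    },
    set(newValue) {
      value = newValue;
      for (const update of subscribers) update();
    },
  };
}

function derived(compute) {
  const result = reactive(undefined);
  const update = () => {
    activeComputation = update;
    try { result.set(compute()); } finally { activeComputation = null; }
  };
  update();  // initial computation also records the dependencies
  return { get: () => result.get() };
}
```

Because recomputation happens eagerly and in one place, readers can never observe a derived value that's out of sync with its inputs, which is exactly the class of "data race" bug this eliminates.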
3. Always be ~~closing~~ connected
Besides Angular, a major source of latency is the initial connection to Firebase and data fetch. Firebase is nice and fast once the connection is established, though, which gave me an idea: keep the Firebase connection in a `SharedWorker` so multiple tabs can share a single, persistent web socket. (Admittedly, only Chrome and Firefox implement `SharedWorker`, but that accounts for 92% of the userbase, and other browsers can fall back to an unshared one.)
This will be somewhat tricky to implement since the Firebase library wasn't designed to run inside a web worker, but if successful it will significantly lower page startup overhead in the average case and enable sharing of the data cache. It won't hurt performance either to have a second thread pick up some of the load, especially for Enterprise deployments (about which more anon!) that turn on the extra encryption layer. For optimal speedup you'll need to always keep a Reviewable tab open—e.g., the review list—or perhaps install a Chrome extension if I can get it to play nice with web workers.
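The worker-side fan-out might look roughly like this. It's a sketch under stated assumptions: the class, message shapes, and cache keying are hypothetical, and the hard part (running the Firebase library itself inside the worker) is omitted entirely:

```javascript
// Hypothetical sketch of the hub inside a SharedWorker: one web socket,
// many tabs. In a real SharedWorker script this would be driven by the
// onconnect event, e.g.  onconnect = (e) => hub.addPort(e.ports[0]);
// here the hub is a plain class so the fan-out logic stands on its own.
class ConnectionHub {
  constructor() {
    this.ports = new Set();   // one MessagePort per connected tab
    this.cache = new Map();   // shared data cache, keyed by path
  }

  addPort(port) {
    this.ports.add(port);
    // A newly opened tab is primed from the cache immediately, skipping
    // the usual connection + initial fetch latency.
    for (const [path, value] of this.cache) {
      port.postMessage({ path, value });
    }
  }

  removePort(port) {
    this.ports.delete(port);
  }

  // Called when the single underlying connection delivers an update.
  handleUpdate(path, value) {
    this.cache.set(path, value);
    for (const port of this.ports) port.postMessage({ path, value });
  }
}
```

On the tab side, a `typeof SharedWorker === 'undefined'` check would select the unshared fallback for browsers that lack support.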
And those are my ideas so far! No promises on how quickly I can get everything done, but it's going to be my main focus for the next little while, so hopefully sooner rather than later.