Perfume is a JavaScript library for measuring First Paint (FP), First Contentful Paint (FCP), Time to Interactive (TTI) and Component First Paint (CFP), annotating them in the DevTools timeline and reporting the results to Google Analytics.
When a user navigates to a web page, they're typically looking for visual feedback to reassure them that everything is going to work as expected.
Is it happening? Is it useful? Is it usable? Is it delightful? To understand when a page delivers this feedback to its users, we've defined several new metrics:
- First Paint (FP)
- First Contentful Paint (FCP)
- Time to Interactive (TTI)
- Component First Paint (CFP)
Luckily, with the addition of a few new browser APIs, measuring these metrics on real devices is finally possible without a lot of hacks or workarounds that can make performance worse.
Perfume leverages these new APIs for measuring performance that matters! ⚡️
npm (https://www.npmjs.com/package/perfume.js):
npm install perfume.js --save
You can import the generated bundle to use the whole library:
import Perfume from 'perfume.js';
Additionally, you can import the transpiled modules from dist/es
in case you have a modular library:
import Perfume from 'node_modules/perfume.js/dist/es/perfume';
Universal Module Definition:
import Perfume from 'node_modules/perfume.js/perfume.umd.js';
First Contentful Paint (FCP)
This metric marks the point, immediately after navigation, when the browser renders pixels to the screen. This is important to the user because it answers the question: is it happening?
FCP marks the point when the browser renders the first bit of content from the DOM, which may be text, an image, SVG, or even a <canvas> element.
const perfume = new Perfume({
firstContentfulPaint: true
});
// ⚡️ Perfume.js: First Contentful Paint 2029.00 ms
Time to Interactive (TTI)
The metric marks the point at which your application is both visually rendered and capable of reliably responding to user input. An application could be unable to respond to user input for a couple of reasons:
- The JavaScript needed to make the components on the page work hasn't yet loaded;
- There are long tasks blocking the main thread. The TTI metric identifies the point at which the page's initial JavaScript is loaded and the main thread is idle (free of long tasks).
const perfume = new Perfume({
timeToInteractive: true
});
// ⚡️ Perfume.js: Time to interactive 2452.07 ms
Annotate metrics in the DevTools
Performance.mark (User Timing API) is used to create an application-defined performance entry in the browser's performance entry buffer.
perfume.start('fibonacci');
fibonacci(400);
perfume.end('fibonacci');
// ⚡️ Perfume.js: fibonacci 0.14 ms
Component First Paint (CFP)
This metric marks the point, immediately after creating a new component, when the browser renders pixels to the screen.
perfume.start('togglePopover');
$(element).popover('toggle');
perfume.endPaint('togglePopover');
// ⚡️ Perfume.js: togglePopover 10.54 ms
Save the duration and print it out exactly the way you want it.
const perfume = new Perfume({
logPrefix: "🍻 Beerjs:"
});
perfume.start('fibonacci');
fibonacci(400);
const duration = perfume.end('fibonacci');
perfume.log('Custom logging', duration);
// 🍻 Beerjs: Custom logging 0.14 ms
To enable Perfume to send your measures to Google Analytics User Timing, set the option enable:true and a custom user timing variable timingVar:"name".
const perfume = new Perfume({
googleAnalytics: {
enable: true,
timingVar: "userId"
}
});
Default options provided to the Perfume.js constructor.
const options = {
firstPaint: false,
firstContentfulPaint: false,
googleAnalytics: {
enable: false,
timingVar: "name",
},
logging: true,
logPrefix: "⚡️ Perfume.js:",
timeToInteractive: false
};
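Any options you pass to the constructor override these defaults. A small sketch of that behavior, with a shortened option set and assuming a simple shallow merge (not Perfume's actual implementation):

```javascript
// Sketch of how user options override the defaults above (shortened set;
// assumption: a simple shallow merge, not Perfume's actual implementation).
const defaults = {
  firstContentfulPaint: false,
  logging: true,
  logPrefix: '⚡️ Perfume.js:',
};
const userOptions = { firstContentfulPaint: true, logPrefix: '🍻 Beerjs:' };
const config = { ...defaults, ...userOptions };

console.log(config.logPrefix); // '🍻 Beerjs:'
```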
- npm t: Run test suite
- npm start: Run npm run build in watch mode
- npm run test:watch: Run test suite in interactive watch mode
- npm run test:prod: Run linting and generate coverage
- npm run build: Generate bundles and typings
- npm run lint: Lints code
- npm run commit: Commit using conventional commit style (husky will tell you to use it if you haven't 😉)
Made with ☕️ by @zizzamia and I want to thank some friends and projects for the work they did:
- Leveraging the Performance Metrics that Most Affect User Experience for documenting these new user-centric performance metrics;
- Performance Timeline Level 2 for the definition of PerformanceObserver in that specification;
- The Contributors for their much appreciated Pull Requests and bug reports;
- you for the star you'll give this project 😉 and for supporting me by giving my project a try 😄
Code and documentation copyright 2018 Leonardo Zizzamia. Code released under the MIT license. Docs released under Creative Commons.