Tag Archives: reflow

The Year of Business Metrics – Don’t make your users run away!

Performance at Velocity Conference

A marked change has occurred since the first Velocity Conference a year ago, and while the effects are not yet obvious, they will be. The web is still slow, but we now have something we didn't have a year ago: business metrics. This was the year we quantified the impact of performance choices on our businesses, and the results were astounding.

Those of us who worked in the field had a gut feeling that users want a fast web experience, but most of the studies done previously were lacking something, either in experiment design or reliability of the data. They were all strong indicators that more research needed to be done, but they weren't damning enough to provide real certainty. This year we found a real correlation between a website's speed and its ability to establish and keep relationships with visitors. Not everyone could attend, so I'd like to share with you some of the key moments of an amazing conference. Please feel free to add others in the comments.

David Artz at AOL

David Artz from AOL presented findings from a study which measured page views per visit against performance. They divided users into buckets based on response time and plotted each bucket against page views. The results were startling: across six AOL sites there was a clear inverse correlation.

The Take Away: AOL

Users who had a slower experience viewed far fewer pages.

AOL PV-speed correlation

Goog and Bing sitting in a tree, K-I-S…

Goog and Bing got together (whoa!) to do a study looking at search behavior when performance is worsened over very narrow increments. This study was unique particularly because it followed the same users over a period of time. The data can be used to determine the threshold at which clicks, refined searches, revenue, satisfaction, and time to click are likely to be impacted by features which slow a website. Their methodologies were a bit different, but the conclusions were remarkably similar. A 50 ms delay seemed to have no impact, but as little as 200-500 ms changed user behavior across the board. Revenue, clicks, and time to first click were most profoundly impacted.

The Take Away: Bing

One key point was that users seem to lose their focus if you make them wait too long. Progressive rendering and flushing the header (which are also recommended by Yahoo!) can help. Bing had this to say:

Notice that as the delays get longer the Time To Click increases at a more extreme rate (1000ms increases by 1900ms). The theory is that the user gets distracted and unengaged in the page. In other words, they’ve lost the user’s full attention and have to get it back.

~ Google & Bing

We’ve all experienced that. We open a new tab and run a search. Multitasking fools that we are, we flip to a new tab or open our email if the results take too long to load.

The Take Away: Google

The most interesting data to come out of the Google tests took place long after the experiment had finished. As much as five weeks later, some users, especially those who saw delays greater than 400 ms, were still searching less than before. Performance is a feature users want. Fail them, and they may never come back.

The percentage changes recorded were very small. For instance, a half-second delay caused a 1.2% loss of revenue per user. What does that mean? We need to think big, and simultaneously work on incremental and profound ways to make the web faster.

Shopzilla – Profound improvements

Shopzilla also presented their (profound) performance improvements. They decreased their response time by around 3.5 seconds, and the data showed their conversions increased by 7-9% while their page views skyrocketed 25%. This is good stuff. This is how we go to the business and make the case that performance is an important feature that deserves attention, not a band-aid that you stick on afterwards. Dave Artz has more details.

JavaScript versus CSS versus Network Latency: Which is killing our sites?

In a separate session, Mike Belshe from the Chrome team discussed his experiments, which measured the total time spent executing JavaScript, rendering the page (CSS), and waiting on the network. He found that the vast majority of the time is spent on network latency. There was a subtle flaw in his methodology, though: his rendering time included only one full rendering and painting, and because all resources were already in cache and no JavaScript was used, there would be no unnecessary reflows.

The Take Away: Reflows

This got me thinking about images and other fixed dimension media. We should always set height and width of images to avoid reflows being caused when the resource is finally downloaded and available.
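For instance, reserving the space in the markup lets the browser lay out the page once, instead of reflowing when the file finally arrives (the file name and dimensions here are illustrative):

```html
<!-- Explicit dimensions reserve the image's space before it
     downloads, so its arrival triggers no reflow. -->
<img src="logo.png" width="200" height="100" alt="Company logo">
```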

I agree with him that, except in extreme cases (and a lot of selector/reflow experiments have been too extreme to really reflect reality), rendering will be much less important than network latency. It is much more important to keep page weight and HTTP requests as low as possible. Over-complicating our CSS selectors to reduce render time would be a mistake. Browsers are really good at parsing selectors; we need to be really good at writing the minimum number we actually need. This is clearly not handled correctly in the current suite of testing tools, such as Page Speed.

My talk included (not yet released) suggestions for coding performant selectors. More on that later. ;)

Further Reading

  • Aladdin Nassir spoke about linking performance and business metrics via Performance-Based Design.
  • Lindsey Simon spoke about reflows and an open source tool he is building to better measure these things. The methods for accurately measuring reflows are still a WIP, and the numbers are fuzzy, but that makes this a really interesting project to get involved in.

Reflows & Repaints: CSS Performance making your JavaScript slow?

I’ve been tweeting and posting to delicious about reflows and repaints, but hadn’t mentioned either in a talk or blog post yet.

I first started thinking about reflows and repaints after a fiery exchange with Mr. Glazman at ParisWeb. I may be stubborn, but I did actually listen to his arguments. :) Stoyan and I began discussing ways to quantify the problem.

Going forward the performance community needs to partner more with browser vendors in addition to our more typical black box experiments. Browser makers know what is costly or irrelevant in terms of performance. Opera lists repaint and reflow as one of the three main contributors to sluggish JavaScript, so it definitely seems worth a look.

Let’s start with a little background information. A repaint occurs when changes are made to an element's skin that change its visibility but do not affect its layout. Examples include changing the outline, visibility, or background color. According to Opera, repaints are expensive because the browser must verify the visibility of all other nodes in the DOM tree. A reflow is even more critical to performance because it involves changes that affect the layout of a portion of the page (or the whole page). Reflowing an element causes the subsequent reflow of all its child and ancestor elements, as well as any elements following it in the DOM.

For example:

<div class="error">
	<h4>My Module</h4>
	<p><strong>Error:</strong> Description of the error…</p>
	<h5>Corrective action required:</h5>
	<ol>
		<li>Step one</li>
		<li>Step two</li>
	</ol>
</div>

In the HTML snippet above, a reflow on the paragraph would trigger a reflow of the strong because it is a child node. It would also cause a reflow of the ancestors (div.error and body – depending on the browser). In addition, the h5 and ol would be reflowed simply because they follow that element in the DOM. According to Opera, most reflows essentially cause the page to be re-rendered:

Reflows are very expensive in terms of performance, and is one of the main causes of slow DOM scripts, especially on devices with low processing power, such as phones. In many cases, they are equivalent to laying out the entire page again.

So, if they’re so awful for performance, what causes a reflow?

Unfortunately, lots of things. Among them some which are particularly relevant when writing CSS:

  • Resizing the window
  • Changing the font
  • Adding or removing a stylesheet
  • Content changes, such as a user typing text in an input box
  • Activation of CSS pseudo classes such as :hover (in IE the activation of the pseudo class of a sibling)
  • Manipulating the class attribute
  • A script manipulating the DOM
  • Calculating offsetWidth and offsetHeight
  • Setting a property of the style attribute
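The last two items on that list are where scripts most often get into trouble: interleaving style writes with reads of offsetWidth or offsetHeight forces the browser to reflow before every read. Here is a minimal sketch of the pattern to avoid; a stub object stands in for a real DOM node so it runs outside a browser.

```javascript
// A stub object stands in for a DOM node so this sketch runs
// outside a browser; in a real page it would be a live element.
const box = { offsetWidth: 100, style: {} };

// Bad: each read of offsetWidth forces the browser to flush the
// pending style change and reflow before answering.
function thrash(el) {
  el.style.width = '50px';
  const w = el.offsetWidth;   // forces a reflow
  el.style.height = '50px';
  const h = el.offsetWidth;   // forces another reflow
  return w + h;
}

// Better: batch all reads before all writes, so pending changes
// are flushed at most once.
function batched(el) {
  const w = el.offsetWidth;   // read phase
  el.style.width = '50px';    // write phase
  el.style.height = '50px';
  return w;
}

batched(box);
```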

Mozilla has an article about reflows that outlines their causes and when they can be reduced.

How to avoid reflows or at least minimize their impact on performance?

Note: I’m limiting myself to discussing the CSS impact of reflows. If you are a JavaScripter, I’d definitely recommend reading my reflow links; there is some really good stuff there that isn’t directly related to CSS.

  1. Change classes on the element you wish to style (as low in the DOM tree as possible)
  2. Avoid setting multiple inline styles
  3. Apply animations to elements that are position fixed or absolute
  4. Trade smoothness for speed
  5. Avoid tables for layout
  6. Avoid JavaScript expressions in the CSS (IE only)

Change classes as low in the DOM tree as possible

Reflows can be top-down or bottom-up as reflow information is passed to surrounding nodes. Reflows are unavoidable, but you can reduce their impact. Change classes as low in the DOM tree as possible, and thus limit the scope of the reflow to as few nodes as possible. For example, you should avoid changing a class on wrapper elements to affect the display of child nodes. Object-oriented CSS always attempts to attach classes to the object (DOM node or nodes) they affect, but in this case it has the added performance benefit of minimizing the impact of reflows.
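A quick sketch of the idea; stub objects stand in for real DOM nodes, and the class names and structure are illustrative.

```javascript
// Stub nodes standing in for a wrapper element and a status element
// nested somewhere inside it.
const status = { className: 'status' };
const wrapper = { className: 'wrapper', children: [status] };

// Bad: restyling the wrapper reflows it and every node inside it.
function markErrorHigh() {
  wrapper.className = 'wrapper error-state';
}

// Better: change the class on the innermost node you actually want
// to restyle, so the reflow touches as few nodes as possible.
function markErrorLow() {
  status.className = 'status error';
}

markErrorLow();
```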

Avoid setting multiple inline styles

We all know interacting with the DOM is slow, so we group changes in an invisible DOM tree fragment and cause only one reflow when the entire fragment is applied to the DOM. Similarly, setting styles via the style attribute causes reflows. Avoid setting multiple inline styles, each of which would cause its own reflow; instead, combine the styles in an external class, which will cause only one reflow when the class attribute of the element is manipulated.
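In code, the difference looks like this; a stub stands in for a DOM node, and the class name is illustrative.

```javascript
// Stub standing in for a DOM element.
const panel = { style: {}, className: '' };

// Bad: three separate inline style writes, each a potential reflow.
function expandSlow(el) {
  el.style.width = '100px';    // reflow
  el.style.height = '200px';   // reflow
  el.style.margin = '10px';    // reflow
}

// Better: one class change, one reflow; the .expanded rules live
// in an external stylesheet.
function expandFast(el) {
  el.className = 'expanded';
}

expandFast(panel);
```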

Apply animations with position fixed or absolute

Apply animations to elements that are positioned fixed or absolute. They don’t affect other elements’ layout, so they will only cause a repaint rather than a full reflow. This is much less costly.
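For example (the class name is illustrative), an element taken out of the normal flow can be animated without disturbing its neighbors:

```css
/* The toast is out of the normal flow, so animating its position
   repaints it without reflowing the surrounding content. */
.toast {
  position: fixed;  /* or absolute */
  top: 0;
  left: -300px;     /* animated via script toward left: 0 */
}
```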

Trade smoothness for speed

Opera also advises that we trade smoothness for speed. What they mean by this is that you may want to move an animation 1 pixel at a time, but if the animation and subsequent reflows use 100% of the CPU the animation will seem jumpy as the browser struggles to update the flow. Moving the animated element by 3 pixels at a time may seem slightly less smooth on very fast machines, but it won’t cause CPU thrashing on slower machines and mobile devices.
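A sketch of the arithmetic behind the trade-off: this hypothetical helper lists the positions an animation would visit, and each position means a style write and a reflow.

```javascript
// Positions an animation visits when moving `distance` pixels in
// increments of `step`. Each frame costs a style write and a reflow,
// so larger steps mean fewer reflows.
function animationFrames(distance, step) {
  const frames = [];
  for (let x = step; x < distance; x += step) {
    frames.push(x);
  }
  frames.push(distance); // always land exactly on the target
  return frames;
}

animationFrames(300, 1).length; // 300 reflows for a 300px move
animationFrames(300, 3).length; // only 100 reflows for the same move
```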

Avoid tables for layout (or set table-layout fixed)

Avoid tables for layout. As if you needed another reason to avoid them, tables often require multiple passes before the layout is completely established, because they are one of the rare cases where elements can affect the display of other elements that came before them in the DOM. Imagine a cell at the end of the table with very wide content that causes the column to be completely resized. This is why tables are not rendered progressively in all browsers (thanks to Bill Scott for this tip), and yet another reason why they are a bad idea for layout. According to Mozilla, even minor changes will cause reflows of all other nodes in the table.

Jenny Donnelly, the owner of the YUI data table widget, recommends using a fixed layout for data tables to allow a more efficient layout algorithm. Any value for table-layout other than "auto" will trigger a fixed layout and allow the table to render row by row according to the CSS 2.1 specification. Quirksmode shows that browser support for the table-layout property is good across all major browsers.

In this manner, the user agent can begin to lay out the table once the entire first row has been received. Cells in subsequent rows do not affect column widths. Any cell that has content that overflows uses the ‘overflow’ property to determine whether to clip the overflow content.

Fixed layout, CSS 2.1 Specification

This algorithm may be inefficient since it requires the user agent to have access to all the content in the table before determining the final layout and may demand more than one pass.

Automatic layout, CSS 2.1 Specification
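In CSS, the fix is one declaration (the selector here is illustrative):

```css
/* Lets the browser lay the table out row by row, fixing the column
   widths from the first row instead of waiting for all content. */
table.data {
  table-layout: fixed;
  width: 100%;  /* fixed layout needs an explicit table width */
}
```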

Avoid JavaScript expressions in the CSS

This rule is an oldie but goodie. The main reason these expressions are so costly is because they are recalculated each time the document, or part of the document, reflows. As we have seen from all the many things that trigger a reflow, it can occur thousands and thousands of times per second. Beware!
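For anyone who hasn't run into one, this is the kind of rule to avoid (the selector and values are illustrative):

```css
/* IE only: re-evaluated on every reflow, potentially thousands of
   times per second. Avoid. */
#sidebar {
  width: expression(document.body.clientWidth > 800 ? "800px" : "auto");
}
```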

Further study

The Yahoo! Exceptional Performance team ran an experiment to determine the optimal method to include an external stylesheet. We recommended putting a link tag in the head because, while it was one second slower (6.3 versus 7.3 seconds), all the other methods blocked progressive rendering. While progressive rendering is non-negotiable (users hate staring at a blank screen), it does make me curious about the effects of rendering, repaints, reflows, and the resulting CPU thrashing on component download and overall response time. If we could reduce the number of reflows during loading, could we maybe gain back a tenth of the lost time (100ms)? What if it was as much as half?

At SXSW I was trying to convince Steve that reflows are important by telling him about an experiment I’ve been meaning to run for a long time, but just haven’t had time for. I do hope someone can pick up where I left off (hint! hint!). While loading the page, I’d like to intentionally trigger reflows at various rates. This could perhaps be accomplished by toggling a class name on the body (experiment) versus the last child of the body with no descendants (control). By comparing the two, and increasing the number of reflows per second, we could correlate reflows with response time. Measuring the impact of reflows on JavaScript responsiveness will be harder, because anything we do to trigger the reflows will likely impact the experiment.
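A rough sketch of what that experiment harness might look like; the class name is illustrative, and a stub stands in for document.body so the toggle logic runs outside a browser.

```javascript
// Toggle a class (with no associated styles) on a target node; in a
// browser each toggle dirties the node and triggers a reflow.
function toggleClass(target) {
  target.className = target.className === 'reflow-probe' ? '' : 'reflow-probe';
}

// In a browser: toggle at a given rate while the page loads. Pass
// document.body for the experiment, or a childless last node for
// the control, then compare response times.
function triggerReflows(target, ratePerSecond, durationMs) {
  const timer = setInterval(function () {
    toggleClass(target);
  }, 1000 / ratePerSecond);
  setTimeout(function () { clearInterval(timer); }, durationMs);
}

// Stub so the sketch runs outside a browser.
const stub = { className: '' };
toggleClass(stub); // stub.className is now 'reflow-probe'
```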

In the end, quantifying the impact is only mildly interesting, because browser vendors are telling us it matters. Perhaps more interesting is to focus on what causes reflows and how to avoid them. That will require better tools, so I challenge browser vendors and the performance community to work together to make it a reality!

See it in action

Perhaps you are a visual person? These videos are a really cool visualization of the reflow process.

  1. http://www.youtube.com/watch?v=nJtBUHyNBxs
  2. http://www.youtube.com/watch?v=ZTnIxIA5KGw
  3. http://www.youtube.com/watch?v=dndeRnzkJDU

Reflow gone amok

In order to improve performance, browser vendors may try to prevent reflows from affecting adjacent nodes, or combine several reflows into one larger change, as Mozilla does with dirty reflows. This can improve performance, but sometimes it can also cause display problems. You can use what we’ve learned about reflows to trigger them when necessary and correct related display problems.

For example, when toggling between tabs on our image optimization site, http://smush.it, the height of the content is variable from tab to tab. Occasionally the shadow gets left behind as it is several ancestor nodes above the content being toggled and its container may not be reflowed. This image is simulated because the bug is difficult to catch on camera as any attempts to shoot it cause the reflow that corrects it. If you find yourself with a similar bug, move the background images to DOM elements below the content being toggled.

Smush.it! with un-reflowed shadows

The bug occurs on tab change, in Firefox only.

Another example is dynamically adding items to an ordered list. As you increase from 9 to 10 items, or 99 to 100 items, the numbers in the list will no longer line up properly across all browsers. When the total number increases by an order of magnitude and the browser doesn’t reflow the siblings, the alignment is broken. Quickly toggling the display of the entire list, or adding a class (even one with no associated styles), will cause a reflow and correct the alignment.
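A sketch of that class-toggle fix; a stub stands in for the list element so it runs outside a browser, and the class name is illustrative.

```javascript
// Stub standing in for the <ol> whose numbers are misaligned.
const list = { className: 'steps', offsetHeight: 0 };

// Toggle a class with no associated styles; in a browser, reading
// offsetHeight in between flushes the pending change, forcing the
// reflow that realigns the list numbers.
function forceReflow(el) {
  el.className += ' reflow-fix';
  void el.offsetHeight;  // this read forces the reflow
  el.className = el.className.replace(' reflow-fix', '');
}

forceReflow(list);  // className ends up back at 'steps'
```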


A few tools have made waves lately. Stoyan Stefanov and I have been looking for decent ways to measure reflows and repaints and there are a few tools which show promise (despite being very early alpha). Beware, some of these seriously destroyed my browser before I got them working correctly. In most cases you’ll need to have installed the latest nightly builds.

When Mozilla announced the MozAfterPaint Firefox API, the internets were abuzz.

Has anyone else seen any cool tools for evaluating reflows? Please send them my way!


Ultimately, we need a cross browser tool to quantify and reduce reflows and repaints. I’m hoping that the performance community can partner with browser vendors to make this tool a reality. The browser vendors have been telling us for a while that this was where we needed to look next, the ball is in our court.