Category Archives: Latest Happenings

Welcoming Nicholas Zakas to the Team

I am so very pleased to announce that Nicholas Zakas and I are joining forces to form a consulting duo. Nicholas has spent the last five years defining what it meant to be a client-side engineer at Yahoo!. He consistently raised the client-side glass ceiling with his commitment to good code and practical solutions. He also literally wrote the book on JavaScript performance. Like me, Nicholas cares deeply about performance and scalability. And, most importantly, we share a love of finding elegant solutions to hard problems, which we feel makes us a good match.

I’ll let Nicholas speak for himself:

I’ll be … teaming up with my friend (and former Yahoo) Nicole Sullivan to do consulting work. Nicole and I have talked off and on about working together on outside projects after having fun working together on a couple of projects at Yahoo!. Between the two of us, we hope to provide a wide range of front-end consulting services including performance evaluations, general architecture, and of course, JavaScript and CSS. If you’re interested in hiring us, please email projects (at) stubbornella.org.
Nicholas Zakas

Please come see us at Velocity Conference on June 14.

The Year of Business Metrics – Don’t make your users run away!

Performance at Velocity Conference

A marked change has occurred since the first Velocity Conference a year ago, and while the effects are not yet obvious, they will be. The web is still slow, but we now have something we didn't have a year ago: business metrics. This was the year we quantified the impact of performance choices on our businesses, and the results were astounding.

Those of us who worked in the field had a gut feeling that users want a fast web experience, but most of the studies done previously were lacking something, either in experiment design or in reliability of the data. They were all strong indicators that more research needed to be done, but they weren't damning enough to provide real certainty. This year we found a real correlation between a website's speed and its ability to establish and keep relationships with visitors. Not everyone could attend, so I'd like to share with you some of the key moments of an amazing conference. Please feel free to add others in the comments.

David Artz at AOL

David Artz from AOL presented findings from a study which measured page views per visit against performance. They divided users into buckets based on response time and plotted page views (PV) for each bucket. The results were startling: across six AOL sites there was a clear inverse correlation.

The Take Away: AOL

Users who had a slower experience viewed far fewer pages.

AOL PV-speed correlation

Goog and Bing sitting in a tree, K-I-S…

Goog and Bing got together (whoa!) to do a study looking at search behavior when performance is degraded in very narrow increments. This study was unique particularly because it followed the same users over a period of time. The data can be used to determine the threshold at which clicks, refined searches, revenue, satisfaction, and time to click are likely to be impacted by features which slow a website. Their methodologies were a bit different, but the conclusions were remarkably similar. A 50ms delay seemed to have no impact, but as little as 200-500ms changed user behavior across the board. Revenue, clicks, and time to first click were most profoundly impacted.

The Take Away: Bing

One key point was that users seem to lose their focus if you make them wait too long. Progressive rendering and flushing the header (which are also recommended by Yahoo!) can help. Bing had this to say:

Notice that as the delays get longer the Time To Click increases at a more extreme rate (1000ms increases by 1900ms). The theory is that the user gets distracted and unengaged in the page. In other words, they’ve lost the user’s full attention and have to get it back.

~ Google & Bing

We’ve all experienced that. We open a new tab and run a search. Multitasking fools that we are, we flip to a new tab or open our email if the results take too long to load.

The Take Away: Google

The most interesting data to come out of the Google tests appeared long after the experiment had finished. As much as five weeks later, some users, especially those who saw delays greater than 400ms, were still searching less than before. Performance is a feature users want. Fail them, and they may never come back.

The percentage changes recorded were very small. For instance, a half-second delay caused a 1.2% loss of revenue per user. What does that mean? We need to think big, and simultaneously work on incremental and profound ways to make the web faster.

Shopzilla – Profound improvements

Shopzilla also presented their (profound) performance improvements. They decreased their response time by around 3.5 seconds, and the data showed their conversions increased by 7-9% while their page views skyrocketed 25%. This is good stuff. This is how we go to the business and make the case that performance is an important feature that deserves attention, not a band-aid that you stick on afterwards. Dave Artz has more details.

JavaScript versus CSS versus Network Latency: Which is killing our sites?

In a separate session, Mike Belshe from the Chrome team discussed his experiments, which measured the total time spent executing JavaScript, rendering the page (CSS), and waiting on network latency. He found that the vast majority of the time is spent on network latency. There was a subtle flaw in his methodology, though: rendering time included only a single full render and paint. Because all resources were already in cache and no JavaScript was used, there were no unnecessary reflows.

The Take Away: Reflows

This got me thinking about images and other fixed-dimension media. We should always set the height and width of images to avoid the reflow caused when the resource finally downloads and becomes available.
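As a minimal sketch of the idea (the class name is hypothetical), declaring the image's dimensions up front lets the browser reserve the space during initial layout:

```css
/* Hypothetical thumbnail style: because width and height are known
   before the image downloads, the browser lays the page out once
   and no reflow fires when the resource finally arrives. */
.thumbnail img {
    width: 100px;
    height: 75px;
}
```

The same effect can be achieved with the height and width attributes directly on the img tag.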

I agree with him that, except in extreme cases (and a lot of selector/reflow experiments have been too extreme to really reflect reality), rendering will be much less important than network latency. It is much more important to keep page weight and HTTP requests as low as possible. Over-complicating our CSS selectors to reduce render time would be a mistake. Browsers are really good at parsing selectors; we need to be really good at writing the minimum number we actually need. This is clearly not handled correctly in the current suite of testing tools, such as Page Speed.
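For illustration (these selectors are hypothetical), compare an over-qualified selector with the minimal one that matches the same elements:

```css
/* Over-qualified: every extra ancestor part is more for the browser
   to match, and worse, it ties the rule to one page location. */
div#content div.sidebar ul.nav li a { color: #036; }

/* Minimal: one class matches the same links anywhere they appear. */
.nav a { color: #036; }
```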

My talk included (not yet released) suggestions for coding performant selectors. More on that later. ;)

Further Reading

  • Aladdin Nassir spoke about linking performance and business metrics via Performance-Based Design.
  • Lindsey Simon spoke about reflows and an open source tool he is building to better measure these things. The methods for accurately measuring reflows are still a WIP, and the numbers are fuzzy, but that makes this a really interesting project to get involved in.

Object Oriented CSS video on YDN

Yahoo! Developer Network has released a video of my Object Oriented CSS talk at Web Directions North just in time for Ada Lovelace day. I’ve also been included in a feature on Women in Technology. I’m absolutely flattered to be included among these fantastic technical women. Wow.

Object Oriented CSS: for high performance websites and web applications.

Find out more about Object Oriented CSS:

  1. Open source project on github (GitHub is having some DNS issues, be patient)
  2. Follow along with the slides on slideshare
  3. Join the OOCSS google group

Thanks to Havi, Julie, Ricky, Yahoo! Developer Network, and the whole Web Directions North team for their hard work putting this together!

Object Oriented CSS, Grids on Github

How do you scale CSS for millions of visitors or thousands of pages? Object Oriented CSS allows you to write fast, maintainable, standards-based front end code. It adds much needed predictability to CSS so that even beginners can participate in writing beautiful websites.

I recently presented Object Oriented CSS for high performance web applications and sites at Web Directions North 2009. If you didn’t attend my talk, you are probably asking yourself “what in the world is OO-CSS?”

Object Oriented CSS: Two main principles

1. Separate structure and skin
2. Separate container and content

I’m writing a framework to demonstrate the technique, but more than anything, Object Oriented CSS is a different way of approaching CSS and the cascade. It draws on traditional software engineering concepts like extending objects, modularity, and predictability. Solutions are judged based on their complexity, in other words, “what happens to the size of the CSS file as more pages and modules are added?”
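A minimal sketch of the first principle, with hypothetical class names: structure (layout properties) lives in one class, skin (purely visual properties) in another, and the two are combined in the HTML:

```css
/* Structure: layout properties shared by every module */
.mod {
    overflow: hidden;
    margin: 10px;
}

/* Skins: visual variations mixed in as a second class,
   e.g. <div class="mod highlight"> */
.highlight { background: #ffc; border: 1px solid #cc9; }
.quiet     { background: #eee; border: 1px solid #ccc; }
```

Because the skin is a separate class rather than a copy of the whole rule, adding a new visual treatment extends the object instead of duplicating its layout code.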

The answer, for most sites, is that it grows out of control and becomes an unmaintainable tangle of spaghetti code. People often complain about CSS, and rightly so; it has even inspired a rant or two, and I understand the frustration.

Current methods for writing CSS require expert-level ability just to get started. To become a CSS expert, you need to spend a couple of years coding away in your basement by yourself before you are remotely useful. Front-end engineering needs to accommodate entry-level, mid-level, and architect-level developers, but our sites are too brittle. You may have a perfectly accessible or high-performance website, and then the first newbie to touch it ruins it. Our code should be robust enough that newbies can contribute while maintaining the standards we've set.

We don’t trust each other’s code

Imagine a JavaScript developer wrote a function to return area, and every now and then it randomly returned the diameter instead. The function would never make it through a code review, and yet we tolerate the same thing from CSS, as if it were immune to normal programming best practices. This is why CSS code reuse is almost nonexistent. An object should behave predictably no matter where you place it on the page, which is why Object Oriented CSS avoids location-dependent styles.

What not to do

#myModule h2{...}
#myModule span{...}
#myModule #saleModule{...}
#myOtherModule h3{...}
#myOtherModule span{...}

Developers have tried to sandbox their CSS into individual modules to protect against the cascade, but in doing so we’ve ended up with a mess.
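A hedged sketch of the alternative (class names are hypothetical): style small reusable classes instead of sandboxing by module ID, so the same rules apply wherever the object appears:

```css
/* Instead of #myModule h2 and #myOtherModule h3, give module
   headings and bodies reusable classes and style the class; the
   object now behaves the same in any container. */
.mod .hd { font-size: 125%; font-weight: bold; }
.mod .bd { padding: 10px; }
```

Two modules that share these classes get identical, predictable styling, and adding a third module requires no new CSS at all.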

Object Oriented CSS Grids on github

My Object Oriented CSS grids and templates are open sourced on github. They have all the functionality of YUI grids plus some important features.

  • Only 4kb, half the size of YUI grids. (I was totally happy when I checked the final size!)
  • They allow infinite nesting and stacking.
  • The only change required to use any of the objects is to place it in the HTML; there are no changes elsewhere in the DOM and no location-dependent styling. This eases back-end development and makes the grids a lot easier for newbies to manage.
  • Solution for sub-pixel rounding errors.

http://wiki.github.com/stubbornella/oocss

Check out template.css and grids.css and the docs on the github wiki.

My prediction is that you’ll be writing complex layouts in less than 24 hours without adding a line to the CSS file.

What’s up next?

Templates and grids are ready to rock and roll. Please be my alpha testers and put them through their paces. Let me know if you find bugs or want additional functionality. I’m also hoping to contribute some of this back to YUI since they now have a github repository. How cool is that?

Rounded Corner Boxes and Tabs

Next up, modules. There are a million cool ways to create rounded corner boxes. I’m going to take several of my favorites (like CSS Mojo and Arnaud Gueras’ blocks) and convert them to OO-CSS. This will make it super easy for newbies to create their own modules, without needing to understand the minutiae of browser differences.

Video / Podcasts

YDN will publish a video of my talk, and Web Directions North is putting out podcasts. I’ll tweet and post when that happens. The audio contains a lot more detail than the slides, so check them out as they become available.

New tool: Easy image optimization with Smush it

Download the Smush it Firefox extension to follow along with my post.

I’m at Ajax Experience this week with my teammate, Stoyan Stefanov. This morning we did a lightning demo of our new tool, SmushIt.com. Smush it lets you automate image optimization, using the best of the open source algorithms to achieve the smallest possible images.

Image Optimization Lightning

Smush it comes in different flavors:

  • You can upload a bunch of pictures in your browser
  • You can provide us with a list of image URLs
  • You can get a Firefox extension to optimize the images found on any web page

Our fundamental principle was that the images we produce must be 100% pixel-for-pixel faithful to the original. That means our techniques are completely lossless. We decided to let designers decide what quality level is necessary; then, given that quality, we use the best open source compression algorithms to make the image as small as possible.

Smush it also generates a zip so that you can easily download and replace all of the images in your page. The tool smushes your images in several ways and outputs the best result, or gives you a bravo if your images are already optimized. Some of the options we test are:

  • Crush PNG
  • Convert GIF to PNG
  • Convert JPG to progressive JPG
  • Remove Metadata
  • Compress animated GIF

We would love feature requests, bug reports, or suggestions so that we can improve the tool. I am nicole at my domain. At Ajax Experience I showed the tool on Yahoo! Korea, BBC News, and Barack Obama’s site. Can you guess who had over 300K of useless image bloat?

Modern sites are doing more than they ever have before; this tool will help keep them lean, mean, and (of course) fast.

Christian Heilmann recorded a video of our demo for YDN. The audio is a bit wonky; I’ll link to the official AE recordings as they become available.