The 3 Most Surprising Insights From a 200 Website Eye-Tracking Study

At EyeQuant, we do a lot of eye-tracking as part of our mission to teach computers to see the web like humans do. The main purpose of our studies is to find the statistical patterns that power our attention models (which you can use to instantly test your websites!). Today, we’re sharing 3 of the most surprising insights we found.

A lot of you have asked us about general rules of thumb around what drives (and doesn’t drive) attention – in this post you’ll learn why rules of thumb are difficult to establish and how a lot of the common ideas we have about human attention are more complicated than they seem. In fact, what you’re about to read is going to be rather surprising and we’re hoping to dispel some common myths about attention and web design with data. 🙂

METHOD: We’re looking at data from one of our recent eye-tracking studies with 46 subjects who were purchasing products on 200 AdWords eCommerce pages. We recorded 261,150 fixations in total, and users were looking at each webpage for 15 sec (+/- 6 sec) on average. The study was conducted in the Neurobiopsychology Lab at the University of Osnabrueck, Germany.
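For readers curious how raw fixation data turns into the heatmaps shown throughout this post, here’s a minimal sketch. This is not EyeQuant’s actual pipeline – just the standard recipe: count fixations per pixel, then blur with a Gaussian so each fixation covers roughly a fovea-sized patch. The function name, the sigma value, and the toy fixation coordinates are all illustrative assumptions.

```python
import numpy as np

def fixation_heatmap(fixations, width, height, sigma=30):
    """Aggregate (x, y) fixation points into a smoothed attention map.

    Counts fixations per pixel, then applies a separable Gaussian blur
    so each fixation spreads over a fovea-sized neighbourhood.
    """
    counts = np.zeros((height, width))
    for x, y in fixations:
        if 0 <= x < width and 0 <= y < height:
            counts[int(y), int(x)] += 1

    # Build a 1-D Gaussian kernel, truncated at 3 sigma.
    radius = 3 * sigma
    kernel = np.exp(-np.arange(-radius, radius + 1) ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()

    # Separable blur: convolve each row, then each column.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, counts)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

    # Normalise to [0, 1] so maps from different pages are comparable.
    return blurred / blurred.max() if blurred.max() > 0 else blurred

# Toy example: three fixations, two clustered near a call-to-action.
heatmap = fixation_heatmap([(100, 80), (102, 82), (300, 200)],
                           width=400, height=300)
print(heatmap.shape)  # (300, 400)
```

Note that, as the author explains in the comments below, these heatmaps reflect the total *number* of fixations per area across all 46 users, not fixation duration.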

DISCLAIMER: Since the purpose of this study was to further expand EyeQuant’s predictive capabilities, we’re also providing EyeQuant’s results for comparison next to the empirical data. Please note that these predictions are based on a new EyeQuant model that’s currently in early testing, but they’re already quite close to the real thing: this model currently achieves over 75% predictive accuracy (AUC, warning: math), whereas our standard model achieves over 90%.
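The AUC figure above can be made concrete with a small sketch. One common way to score a saliency model (the exact variant EyeQuant uses isn’t specified here) is to treat pixels that received a fixation as positives and all other pixels as negatives, then ask: how often does the model score a fixated pixel above a non-fixated one? The function name and toy arrays below are illustrative assumptions.

```python
import numpy as np

def attention_auc(prediction, fixation_mask):
    """ROC AUC of a predicted attention map against observed fixations.

    Fixated pixels are positives, the rest negatives; AUC is the
    probability that a randomly chosen fixated pixel gets a higher
    predicted score than a randomly chosen non-fixated one.
    """
    scores = prediction.ravel()
    fixated = fixation_mask.ravel().astype(bool)
    pos = scores[fixated]
    neg = scores[~fixated]
    # Pairwise formulation, counting ties as half a win. O(P*N) is fine
    # for a sketch; real pipelines use a rank-sum computation instead.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: the model ranks both fixated pixels above both others.
pred = np.array([[0.9, 0.2],
                 [0.8, 0.1]])
mask = np.array([[1, 0],
                 [1, 0]])
print(attention_auc(pred, mask))  # 1.0
```

An AUC of 0.5 means the map is no better than chance at separating fixated from non-fixated pixels; 1.0 means perfect separation.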

Myth #1: “Faces always & instantly draw attention.”

This is probably one of the most universal design assumptions about human attention you’ll find on the internet: “as humans, we’re naturally wired to always seek out and look at any available faces first.”

Roughly correct – except for when it isn’t. The truth is that, as humans, we really do like faces. We’ll look at them sometimes. We probably even have a dedicated brain area involved in processing faces. However, we look at them much less often than you would typically believe.

The data (click images to open a large version in a new tab):


Example: a Levi’s landing page. Left: eye-tracking heatmap of users visiting the page – users are almost completely ignoring the faces. EyeQuant’s prediction on the right puts a bit more emphasis on the logo than the empirical data, but the big winner on this one is clearly the headline copy, not the faces.


Example: a hotel search website, featuring an incredibly happy couple with clearly visible faces. Yet users only seem to care about the search box and the call to action in the center. EyeQuant’s new model provides a very similar result but gets a bit distracted by the wooden texture.

Not convinced? Below you’ll find a lot more examples – from a beautifully designed eCommerce shop to a web 1.0 wall of text. We’re not saying faces don’t attract attention at all and are never looked at. Our data just shows that faces aren’t the powerful attention-grabbers one usually thinks they are.

What about guiding user attention through faces? 

This is another popular assumption which seems to make a lot of sense: we’re social beings and user gaze follows the gaze of faces on a website. Again, that’s true, except for when it isn’t:


Example: A Hilton Hotel landing page. Users go straight for the search form and check the offers below, but aren’t paying too much attention to the woman or the headline she’s staring at.

What’s going on here? Our careful, explorative hypothesis is this: looking at a face does provide a sort of emotional buzz, so we may remember looking at faces more than we remember looking at other things. This might lead to wrong conclusions about general viewing behaviour.

Watercooler conclusion: “Faces are emotionally powerful, but they don’t always attract as much attention as we think they do.”

Myth #2: “Large text instantly draws a lot of attention.”

“Large text is a great way to attract user attention” is another rather popular idea about how attention works online.
However, our data shows that it usually doesn’t work that way. In a lot of cases, big fonts even seem to have a negative effect on attracting attention:


Example: English Proof Read landing page. Big typography doesn’t work nearly as well as you’d think it would. The winners on this one are the three descriptive areas below.


Example: Canadian Railways. Users were tasked with purchasing a rail ticket deal – and promptly ignored the advertised one, which is USING AMAZINGLY BIG FONTS. Note how this result includes another example of how gaze doesn’t always guide attention (see Myth #1).

What’s going on here? Our careful, explorative hypothesis is this: there may be an element of “banner blindness” involved. At the same time, extremely large letters might also be less readable for the human eye.

Watercooler conclusion: “Big typography is visually loud, but not at all a safe way to grab user attention. We need to look into other ways as well.”

Myth #3: “The magical word ‘FREE’ always pops out.”

It’s true: economically, nothing beats ‘FREE’. But does this also mean that the word pops out to users immediately when they’re visiting a page? Our data says otherwise.



Note how EyeQuant’s automatic prediction (on the left) does pick up a little bit on the copy that contains “free”, whereas users in the empirical study on the right completely ignored it. Both study and prediction place almost all the attention on the product description and the model. 

Watercooler conclusion: “‘Free’ is a powerful semantic tool. We shouldn’t rely on it as our main attention grabber, though!”

Conclusions: don’t rely on rules of thumb. Testing always beats guessing.

Rules of thumb are fun. They’re simple. And the more complex the thing they’re trying to explain, the more appealing they become. Alas, that’s also where they often fail – and visual attention is a rather complex, extremely context-driven system that cannot be captured in a set of simple rules.

What we’re doing at EyeQuant is combining large amounts of data like the study above into lightning-fast computer models. As you’ve seen, our predictions come close to what you’d get from a real study, so if you’re curious to get results for your own website, just test it for free in our web app.

If you found this article interesting, you should talk to us on Twitter!



10 thoughts on “The 3 Most Surprising Insights From a 200 Website Eye-Tracking Study”

  1. Zach

    Assuming these heatmaps indicate how long a user focuses on an area, it seems reasonable to suggest that big, bold type doesn’t appear to hold attention simply because it’s quickly digested. I don’t need to focus on “fast, free shipping” for more than a second to gather the info. But I do need to focus on smaller, more complex headlines.

    1. fabstelzer Post author

      Thanks for your comment, Zach! The heatmaps don’t indicate fixation duration, but the total number of fixations an area got from the 46 users. This is what makes the result a bit surprising!

      1. Tim

        I’m still unsure how that disproves Zach’s point. If a paragraph is being read, couldn’t a fixation point be close to or overlap a point that was used to read the previous word or line? Also, what about cases where people re-read sentences to better understand them? That would count as another fixation, correct?

  2. Daniel James

    I agree with Zach – surely big text means you can pick up the whole meaning with a glance. Having to read smaller text means focussing on each word, or every other word, which may well appear like more attention. But staring at the separate individual pieces of a jigsaw puzzle doesn’t mean that you’re spending more time understanding the whole picture.

    Also in the Levi’s example, the biggest heat spot is exactly where the gaze of the guy on the left directs you – just below his face. The guy on the right points you up to the text above him, but less directly and there’s definitely a slightly warmer spot there.

    Then the gaze of the woman in the Hilton page sweeps your gaze to the title on the right and this is reflected by a slight warming in the heat map. Obviously there’s no way to interact with the title, it’s just information, so it’s not going to compete attention wise with a search box where you have to concentrate and actually type and click on things.

    I think if you redo the study with your test subjects sitting on their hands, unable to interact with the web-page, but just able to look at it for five seconds before the next page comes up, you’d get very different results. I think you’d find that the heatmaps then would come out exactly as predicted by common sense – we go for big text, key phrases like FREE, human faces, gazes, human bodies…

    The problem with your piece is that it compares (the straw man of) common-sense assumptions of how we statically look at something – assumptions that are broadly correct in that context – with your ‘raw data testing’ of how we *interact* with something.

    If I’m just looking, I’d choose to look at Beyoncé wearing a wet t-shirt with the words “FREE” emblazoned on them – over Mike Tyson slobbering at the mouth as he bites someone’s ear off. However if I was in the boxing ring with Beyoncé – even if she was taking that t-shirt off – if Mike Tyson was looking hungry for another ear… my attention would obviously be on Mike Tyson, because *interaction* is the key here.

    Our attention is completely changed by the context of how we imagine the possibilities of interacting with the world. That’s why there seems to be so much less space in a city in the Winter when it’s raining. You can’t sit on wet benches. You don’t want to hang about in the rain. You don’t want to walk across the grass. So there’s no space (read ‘space you can interact with) apart from the odd shop between your home and your work. In the summer however, all the dry benches, parks and streets are alive with possibility of interaction.

    Does that make any sense to you? I’d be interested to hear what direction your research has been going since you wrote this.

  3. fabstelzer Post author

    Love your Heideggerian approach to attention, Daniel! 🙂 It’s true that context matters quite a lot – you could also conceive of this as expectations meeting certain conventions. For this study, we did however choose a context and task (purchasing a product when coming from an AdWords landing page) that is very specific to the context for which the websites we’ve tested were designed, which hopefully gives us a representative idea of how users will then visually interact with the specific designs. Now, while the results are definitely different under different task conditions, the results that matter are the ones that reflect the specific task condition that is most relevant for the designer – in this case, purchase intent! Thanks for your thoughtful comment!

    1. Liam

      Great read. The web is often a task-oriented medium where a user’s goal for a given session is priority number one. This is not to imply that there is no value gained from big, bold headlines or human-oriented imagery. This study simply reveals that in a task-oriented context users will naturally gravitate to the tools and functions that seem best suited to completing their task. Kinda makes sense.

      Big, bold headlines and imagery allow a designer to maintain focus and clarity in their composition. For a new user to a site or service, these elements might also serve to offer an appropriate first impression that reveals a brand’s value proposition among other benefits. I would love to see this data augmented with user stats indicating how familiar the participants were with the sites visited!

      Thanks for this study, I’ve bookmarked it for future reference with my team.

  4. Dan

    Nice blog article – it goes to show that while heatmaps are useful as one element of a usability study, you can never rely solely on them for your findings.

    Digging further into additional eye-tracking metrics, such as gaze plots, areas of interest, time to first fixation, fixation to click, etc., will help explain why these particular “rules of thumb” don’t apply.

    It could well be, for example, that with the large text, additional metrics would reveal that yes, the user looked at this first, quickly and subconsciously “got it”, and filtered this information as part of the task at hand. Heatmaps alone won’t reveal this.

    As for the “powerful” facial imagery: users are specifically tasked to go purchase something, which in itself is partially artificial, as we’re looking at how quickly and effectively a user purchases a product/brand they already know (i.e. from the AdWords ad), without looking at the emotional side of things (i.e. “why would I purchase from brand x?”). Again, this is where context and additional metrics come in to see whether someone “gets it”. To put it another way: if the emotional imagery were *not* there, would the user, in their own time and at their own expense, still purchase the product?

    And finally, the “FREE” in your cases above all appears within banners. “Banner blindness” has been well studied and documented in eye-tracking research by numerous sources (including none other than Jakob Nielsen). Perhaps if the word were used contextually within the main body copy, it might elicit a different response?

  5. Ralph Hinderberger

    On a lot of the pages quoted to dispel the “faces draw attention” myth, the faces are either small and/or the models’ eyes are not directly facing the observer. This predictably reduces the motivation to look at their faces: the observer is not strongly compelled to check the models’ eyes in order to detect something interesting, promising or threatening in them (which our animal brain usually tells us to do when somebody is obviously looking at us). On the Victoria’s Secret page, the visual attraction is located where the “myth” predicts it: female face and breast. No wonder, because these areas stimulate the primal instincts of male and female observers alike (though for different reasons). So yes, the headline of this blog is correct, although for my taste not precise enough (sorry for being so German): showing “heads” does not automatically draw attention. It depends on the equilibration of primal instincts and the urge to reach objectives: if our amygdala is triggered (which is particularly achieved by direct looks combined with specific facial expressions), we are motivated to pay attention. If not – or if the primal stimulus is not stronger than the wish to complete a task on the website – we don’t. BTW, what was the gender quota of the sample? 😉

  6. Urs

    If you’d like to know whether “Large text instantly draws a lot of attention”, then a heatmap over a 15 sec duration is probably not giving the best answer. Using AOIs (Areas of Interest) and measuring, for example, Time to First Fixation is much more effective for this type of question, is it not?

    If users are spending an average of 15 sec on a website, then the number of fixations on brand logos, faces and big text is definitely much less than on the details they are asked to look for: the “context and task: purchasing a product”.

    Just a few calculations based on eye tracking practice:
    – In 15 seconds a user makes approx. 30–60 fixations;
    – Recognizing a brand logo takes 1–2 fixations (which creates a blurry pink)
    – Looking at an “uninviting face” takes 1–6 fixations (which creates a light blue)

    But it is clear that users who have to make a decision about “purchasing a product” spend a lot of fixations on searching for and reading details about the products, or searching for the path where those details can be found.

    Have you done an AOI analysis as well?

    1. fabstelzer Post author

      Thanks for your comment Urs!

      We don’t do a classical AOI analysis, as the purpose of our studies is purely data acquisition for our attention models.

      Absolutely agreed that the task at hand makes a big difference. Still, the portrayed myths are very real and persistent and we simply wanted to show that testing > guessing.

