
Why The New Google Search Ads Design Is a Subtle Work of Genius

It’s official – Google has rolled out a major redesign of their search results and search ads. The company line, as outlined by Jon Wiley – Google’s lead designer for search – is that the new design improves “readability and creates an overall cleaner look”, while the redesign of the ads is “making the multi-device experience more consistent”.

Google’s desktop ads now match the design of their mobile versions, and multi-device consistency is certainly a good reason for the change. Still, we’d like to take a data-informed guess at what other good reasons Google might have had for this major revamp of its most important interface.

Readers of this blog will know that our mission is to teach computers to see designs like humans do – using neuroscience and machine learning. In this article we’ll make use of our EyeQuant technology to better understand how Google’s new ad design affects viewers, and thereby, clicks.

First things first, some of our assumptions for the analysis:

1. It’s unlikely that Google would roll out a relatively major design change without first testing its effect on AdWords CTR (i.e. Google’s main source of revenue). It’s pretty safe to assume that the new design performs at least as well as (and most likely better than) the old one in terms of CTR.

2. Google is still committed to its ‘Don’t Be Evil’ motto, which in particular affects any conflicts of interest in serving their two most important stakeholders: search users and advertising customers.

Now, as the headline already gives away, we do believe that the redesign is a subtle work of design genius. Here’s why.

Google’s new Ads now very much ‘blend in’ with the organic results, all while attracting MORE attention, and all without being clearly evil. 

Yep, Google somehow squared the circle here. When designing ads, one is usually torn between:

  • aligning the ad’s design to the content in order to battle banner blindness and drive, ahem, involuntary clicks
  • making the ads stick out visually to attract involuntary eye-movements (the most extreme measure being the use of motion).

The first strategy usually means sacrificing the ad’s direct visibility and its ‘pop-out’ effect – and, worse, moral principles.

Yet, Google’s new ad design manages to achieve the following (besides its officially announced goals):

1. It does clearly label the ads as such – one could even argue that the labeling is clearer than before, as now every ad is explicitly labeled. Not evil. 

2. At the same time, it makes the ads blend in much more with the organic results. Ethically, this is a bit of a tricky move, particularly since Google had explicitly banned AdSense publishers from using a similar type of design. But hey – every ad is labeled as such.

[Screenshot: the redesigned Google search ads]

3. The best part? The new layout attracts MORE design-driven attention to the ads than the old one did. 

We ran EyeQuant tests on both the old (~Q1 2013) and new SERP layout to see in which version the upper ad unit would generate more attention – based solely on its design and common viewing behaviours and patterns. EyeQuant results are 90% equivalent to what an empirical study with >25 subjects would provide. The following heatmaps of the old and new ad unit designs show the respective probability of an area being fixated in the first few seconds of exposure – the top 3 ads are marked up in pink:

[Heatmaps: old vs. new ad unit design, top 3 ads marked in pink]

As it stands, the results show that the new design generates significantly more attention for the top 3 ads than before – especially for the desirable (and expensive) #1 ad. The attentional pull from the organic search results remains unchanged.

It probably goes without saying WHY all of this is good for Google.

But HOW did they do it? There are several factors involved, and we can reverse-engineer them with a bit of knowledge on how human visual attention works:

  • They increased the font size, which typically drives more attention (but not always and only to a certain extent!)
  • The luminance contrast between text and white background is now higher (blue on white) than it used to be in the old version (blue on the tinted AdWords background) – luminance contrast is one of the most basic and most powerful drivers of visual attention (see the sketch after this list).
  • The yellow “Ad” logo provides additional color contrast without being overtly aggressive (low luminance contrast between yellow and white), and it does so in every single ad! 
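
To make the luminance-contrast point concrete, here is a minimal sketch – our own illustration, not Google’s or EyeQuant’s tooling – that computes WCAG-style relative luminance and contrast ratios for blue text on a plain white background versus the same blue on a pale yellow tint. The hex values (Google’s link blue as #1A0DAB, the old ad background as #FFF8E7) are assumptions for illustration only:

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.0 relative luminance of an sRGB colour given as '#RRGGBB'."""
    linear = []
    for i in (1, 3, 5):
        c = int(hex_color[i:i + 2], 16) / 255.0
        # Linearise each sRGB channel before weighting.
        linear.append(c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4)
    r, g, b = linear
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio (from 1:1 to 21:1) between two colours."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Assumed colours: Google's link blue, plain white, and a pale yellow ad tint.
BLUE, WHITE, OLD_AD_TINT = "#1A0DAB", "#FFFFFF", "#FFF8E7"

print(f"blue on white:   {contrast_ratio(BLUE, WHITE):.1f}:1")
print(f"blue on ad tint: {contrast_ratio(BLUE, OLD_AD_TINT):.1f}:1")
```

With these assumed values, blue on white comes out slightly higher than blue on the tint, consistent with the argument above – though the real colours and the size of the effect would have to be measured from the actual pages.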

In conclusion, this is a supremely clever, subtle redesign, and there are lots of reasons to believe that it will effectively drive eyeballs, clicks and ad revenue.

Full disclosure: While several Google teams are using EyeQuant as customers, EyeQuant was not involved in any part of the redesign process.

And here is our own ad: you can try EyeQuant for free on your own designs – all it takes is an email, a URL and 7 seconds of your time. Test it out now!

If you’d like to get a more detailed look into how you can use design to influence visual attention in a systematic and predictable way, check out our quick 8 minute introduction right here:

The 3 Most Surprising Insights From a 200 Website Eye-Tracking Study

At EyeQuant, we do a lot of eye-tracking as part of our mission to teach computers to see the web like humans do. The main purpose of our studies is to find the statistical patterns that power our attention models (which you can use to instantly test your websites!). Today, we’re sharing 3 of the most surprising insights we found.

A lot of you have asked us about general rules of thumb around what drives (and doesn’t drive) attention – in this post you’ll learn why rules of thumb are difficult to establish and how a lot of the common ideas we have about human attention are more complicated than they seem. In fact, what you’re about to read is going to be rather surprising and we’re hoping to dispel some common myths about attention and web design with data. 🙂

METHOD: We’re looking at data from one of our recent eye-tracking studies with 46 subjects who were purchasing products on 200 AdWords eCommerce pages. We recorded 261,150 fixations in total, and users were looking at each webpage for 15 sec (+/- 6 sec) on average. The study was conducted in the Neurobiopsychology Lab at the University of Osnabrueck, Germany.

DISCLAIMER: Since the purpose of this study was to further expand EyeQuant’s predictive capabilities, we’re also providing EyeQuant’s results for comparison next to the empirical data. Please note that these predictions are based on a new EyeQuant model that’s currently in early testing but already quite close to the real thing: this model currently provides over 75% predictive accuracy (AUC, warning: math), whereas our standard model achieves over 90%.
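
For readers wondering what “predictive accuracy (AUC)” means in practice: a common way to score a saliency prediction against real fixation data is to treat the predicted heatmap values as a classifier score for “fixated vs. not fixated” pixels and compute the ROC AUC. Here is a minimal sketch of that idea – our own illustration, not EyeQuant’s internal evaluation code – where `predicted_heatmap` and `fixation_map` are hypothetical inputs of the same size:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def saliency_auc(predicted_heatmap: np.ndarray, fixation_map: np.ndarray) -> float:
    """ROC AUC of a predicted attention heatmap against a binary fixation map.

    predicted_heatmap: 2D array of attention scores (any scale).
    fixation_map: 2D array of the same shape, 1 where a fixation landed, else 0.
    """
    scores = predicted_heatmap.ravel()
    labels = fixation_map.ravel().astype(int)
    # AUC = probability that a fixated pixel gets a higher score than a
    # non-fixated one; 0.5 is chance level, 1.0 is a perfect prediction.
    return roc_auc_score(labels, scores)


# Toy example with random data, purely to show the call signature.
rng = np.random.default_rng(0)
pred = rng.random((60, 80))
fixations = (rng.random((60, 80)) > 0.98).astype(int)
print(f"AUC: {saliency_auc(pred, fixations):.2f}")  # ~0.5 for unrelated random maps
```

In a real evaluation the fixation map would come from recorded gaze data rather than random numbers; the snippet only demonstrates the metric.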

Myth #1: “Faces always & instantly draw attention.”

This is probably one of the most universal design assumptions about human attention you’ll find on the internet: “as humans, we’re naturally wired to always seek out and look at any available faces first.”

Roughly correct – except for when it isn’t. The truth is that we humans really do like faces, and we’ll look at them sometimes. We probably even have a dedicated brain area involved in processing faces. However, we look at them much less often than you would typically believe.

The data:


Example: a Levi’s landing page. Left: eye-tracking heatmap of users visiting the page – users are almost completely ignoring the faces. EyeQuant’s prediction on the right puts a bit more emphasis on the logo than the empirical data, but the big winner on this one is clearly the headline copy, not the faces.


Example: a hotel search website, featuring an incredibly happy couple with clearly visible faces. Yet users only seem to care about the search box and the call to action in the center. EyeQuant’s new model provides a very similar result but gets a bit distracted by the wooden texture.

Not convinced? Below you’ll find a lot more examples – from a beautifully designed eCommerce shop to a web 1.0 wall of text. We’re not saying faces don’t attract attention at all and are never looked at. Our data just shows that faces aren’t the powerful attention-grabbers one usually thinks they are.

[More examples: eye-tracking heatmaps of landing pages featuring faces]
What about guiding user attention through faces? 

This is another popular assumption which seems to make a lot of sense: we’re social beings and user gaze follows the gaze of faces on a website. Again, that’s true, except for when it isn’t:


Example: A Hilton Hotel landing page. Users go straight for the search form and check the offers below, but aren’t paying too much attention to the woman or the headline she’s staring at.

What’s going on here? Our careful, explorative hypothesis is this: looking at a face does provide a sort of emotional buzz, so we may remember looking at them more than we do remember looking at other things. This might lead to wrong conclusions about general viewing behaviour.

Watercooler conclusion: “Faces are emotionally powerful, but they don’t always attract as much attention as we think they do.”

Myth #2: “Large text instantly draws a lot of attention.”

“Large text is a great way to attract user attention” is another rather popular idea about how attention works online.
However, our data shows that it usually doesn’t work. In a lot of cases big fonts even seem to have a negative effect on attracting attention:


Example: an English Proof Read landing page. Big typography doesn’t work nearly as well as you’d think it would – the winners on this one are the three descriptive areas below.


Example: Canadian Railways. Users were tasked with purchasing a rail ticket deal – and promptly ignored the advertised one, which is USING AMAZINGLY BIG FONTS. Note how this result also includes another example of how gaze doesn’t always guide attention (see Myth #1).

What’s going on here? Our careful, explorative hypothesis is this: there may be an element of “banner blindness” involved. At the same time, extremely large letters might also be less readable for the human eye.

Watercooler conclusion: “Big typography is visually loud, but not at all a safe way to grab user attention. We need to look into other ways as well.”

Myth #3: “The magical word ‘FREE’ always pops out.”

It’s true: economically, nothing beats ‘FREE’. But does this also mean that the word pops out to users immediately when they’re visiting a page? Our data says otherwise.


Note how EyeQuant’s automatic prediction (on the left) does pick up a little bit on the copy that contains “free”, whereas users in the empirical study on the right completely ignored it. Both study and prediction place almost all the attention on the product description and the model. 

Watercooler conclusion: “‘Free’ is a powerful semantic tool. We shouldn’t rely on it as our main attention grabber, though!”

Conclusions: don’t rely on rules of thumb. Testing always beats guessing.

Rules of thumb are fun. They’re simple. And the more complex the thing they’re trying to explain, the more appealing they become. Alas, that’s also where they often fail – and visual attention is a rather complex, extremely context-driven system that cannot be captured in a set of simple rules.

What we’re doing at EyeQuant is combining large amounts of data like the study above into lightning-fast computer models. As you’ve seen, our predictions come close to what you’d get from a real study, so if you’re curious to get results for your own website, just test it for free in our web app.

If you found this article interesting, you should talk to us on Twitter!

 

The Secret Reason behind a 262% Conversion Lift


MarketingExperiments recently posted a fantastic landing page success story on their blog that piqued our interest: Phonebooth, a company offering VoIP services for businesses, achieved a whopping 262% increase in conversions. Here are the two versions they tested:
[Screenshot: the two Phonebooth landing page versions that were tested]

MarketingExperiments credits this success to minimizing friction and reducing user anxiety by implementing trust seals and improving the page’s copy with added elements like ‘product of the year’, ‘no contracts’, and ‘zero setup fees’.

These are all important aspects of Phonebooth’s conversion process, but we noticed something else.

Here at EyeQuant, we believe in attention-driven design. The premise is simple: When a user arrives on a landing page, their attention should be immediately directed to the most important content.

As we’ll see shortly, Version B (the winner) does a much better job of this than its rival. But what is the most important content? To answer this question, EyeQuant recommends a method we like to call the 3 W’s.

The 3 W’s framework is simple. When a user appears on your landing page, they should immediately see what the page is about, why they should care, and where they should go next.

Using EyeQuant, we can quickly evaluate exactly what users see in the first few seconds of their visit, and judge these pages based on the 3 W’s. The quantitative Regions of Interest (ROI) feature within EyeQuant allows us to directly compare the design of these two pages. Let’s see how they did!

Establishing Relevance (“What”):

In Version A, the headline does not explain what Phonebooth does; however, the sub-heading indicates that it offers “Business Phone Plans”. This phrase yields an EyeQuant ROI value of 54. In other words, the highlighted area is 54% more attention-grabbing than the average pixel on the page.
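
We don’t know EyeQuant’s internal formula, but a score defined as “X% more attention-grabbing than the average pixel” can be sketched straightforwardly: average the heatmap values inside the region of interest, divide by the page-wide average, and express the difference as a percentage. A hypothetical illustration (the heatmap values and region coordinates below are made up):

```python
import numpy as np


def roi_score(heatmap: np.ndarray, region: tuple) -> float:
    """How much more (or less) attention a region attracts than the page average.

    Returns e.g. +54.0 for a region whose mean heat is 54% above the
    page-wide mean, or -24.0 for one that is 24% below it.
    """
    region_mean = heatmap[region].mean()
    page_mean = heatmap.mean()
    return 100.0 * (region_mean / page_mean - 1.0)


# Toy heatmap with one hot band where a headline might sit.
heatmap = np.ones((600, 1000))
heatmap[80:140, 100:700] = 2.0  # the "headline" band attracts extra heat
headline = (slice(80, 140), slice(100, 700))
print(f"ROI value: {roi_score(heatmap, headline):+.0f}")
```

Read this way, every ROI value quoted in this post is a percentage relative to the page-wide average.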

 

Version B’s headline is much more explicit about what Phonebooth offers: “VoIP Business Phone Systems”. This headline has an ROI value of 111 – that’s 111% more attention-grabbing than the page average and, more importantly, more than double the score of the control page’s “what” statement.

One could argue that Version B’s ability to establish relevance may be the single greatest improvement over Version A – especially when you consider that the “what” statement also benefits from better copy, changing from “Business Phone Plans” to “VoIP Business Phone Systems”.

Core benefits (“Why”):

Version A’s headline expresses its main benefit: “Only $20 a month per user”. This headline has an ROI value of +105.

 


Version B moves that statement to a seal on the left, and instead uses the phrase “Up to 60% less than other leading providers” as the main benefit. This line has an ROI value of 95. In this respect Version A actually out-performs Version B from an attentional standpoint.

 


Call to action (“Where Next”):
Version A uses a red “Get Quote” button as its call to action, while Version B uses a green “Get Started” button. The ROI value for Version A’s button is -24, meaning it’s 24% less attention-grabbing than the average pixel on the page.



The green button in Version B has an ROI value of -3, suggesting it’s about as attention-grabbing as the average pixel on the page.

 


Neither of these are great scores for such an important landing page element, but Version B does significantly improve upon Version A.

The great thing about conversion optimization is that no matter how good your landing page is, it can always get better – even after a 262% boost. EyeQuant suggests this is absolutely the case for Phonebooth as well. The perception map below shows what users see within the first 3 seconds of landing on the page (Version B).

 

[Perception map: what users see within the first 3 seconds on Version B]

While the “what” and “why” elements are visible right away, the call to action (“where next”) is still somewhat hidden. Our 3 W’s model tells us that by further tweaking the design of this page to draw more attention to the CTA, Phonebooth could increase conversions even further.

If you’d like to try out EyeQuant on your own website, head over to http://www.eyequant.com. Your first test is free!

What’s in a Heatmap? Predictive Eye-Tracking versus Click-Tracking

Here at EyeQuant, we are often asked what the difference is between EyeQuant and click-tracking tools such as ClickTale or Crazy Egg. Many marketers assume they are the same thing – after all, they’re all heatmaps of your webpage, right?

Indeed they are, but a heatmap is just a means of visualizing data. The insight you gain from any particular heatmap is based entirely on what kind of data it actually represents. For example, you can have a heatmap showing where heat (literally) is escaping from inside your home, and you can also have a heatmap representing the volume of “tweets” from a given geographic area:


Just as the energy-efficiency heatmap conveys completely different information than the Twitter heatmap, EyeQuant conveys completely different information from a click-tracking heatmap because the underlying data is very different.

Click-Tracking Heatmaps:

A CrazyEgg click map courtesy of EyeQuant partner Wholegrain Digital.

What it is: products like CrazyEgg allow you to place code on your website and record data on where users clicked on your webpage. The insight is represented in heatmap form by assigning colours to different elements of the page based on how often they were clicked.

What it tells you: where people click the most (and least) on your webpage.

Why it’s useful: Click-Tracking gives you insight into the conversion funnel. What path do your visitors take from the landing page to the page where they ultimately process an order? It also helps you identify usability issues. For example, are people clicking on elements that aren’t actually links?

EyeQuant Heatmaps:

An EyeQuant Attention Map for the Wholegrain Digital page.

What it is: EyeQuant uses a patented model of the human visual attention system to assign a kind of “attention score” to each element of your webpage screenshot. These values are then translated into colours; areas with lots of “heat” are the ones visitors look at first.

The model was created by performing actual eye-tracking studies on hundreds of participants looking at landing pages for 3 seconds each. This data was used in combination with a sophisticated machine-learning process to create a model that could accurately reproduce the real eye-tracking results – almost instantly. The eye-tracking studies took place in the lab of EyeQuant Co-Founder Professor Peter Koenig at the University of Osnabrueck.

What it tells you: when someone comes to your website, what do they see (and miss) right away? What grabs their attention – before users have done anything meaningful with their mouse?

Why it’s useful: EyeQuant allows you to ensure that users see the most important content right away: what is the offer, why is it interesting, and what should the visitor do next? If these 3 critical pieces of information are visible immediately you can expect lower bounce rates, higher engagement and ultimately a higher conversion rate. See our Groupon Case Study for an example.

Which should I be using – Click tracking or EyeQuant?

As we’ve seen, these two powerful tools are very different in terms of the insight they provide, so a serious online marketer should have both in his/her arsenal. EyeQuant should be used for instant analysis of a page before it even goes live. Based on the results, you can iteratively optimize the page and improve it without any traffic. When you’re happy with the first impression that your page will make, you can go live and use click-tracking to get a better idea of how people are actually interacting with the page and how they move around on your website. Make sure to use EyeQuant again any time you update the content on the page. Any change on the page affects how people will look at it. Something as trivial as changing a picture can dramatically affect the attentional landscape of the page.

To Interpret a Heatmap, you Need a Wishmap:

Whenever you analyze and interpret either an EyeQuant heatmap or a click heatmap, it’s important to keep your own internal “wishmap” in mind: that’s the heatmap that represents which areas of your website users SHOULD see AND click on – value propositions, keywords, call to action buttons etc.

Whenever you find that your own “wishmap” deviates too much from both your attention map and your click map, you know you’ve found one of the best optimization levers around: people aren’t clicking because they can’t see the good stuff right away!
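
There’s no “wishmap” export in the tools mentioned here as far as we know, but the comparison described above is easy to approximate yourself: mark the regions users should see, then check how much of the (predicted or measured) attention actually lands inside them. A hypothetical sketch, with made-up page dimensions and a made-up CTA location:

```python
import numpy as np


def wishmap_coverage(attention_map: np.ndarray, wishmap: np.ndarray) -> float:
    """Share of total attention that falls on the regions you want users to see.

    attention_map: 2D array of attention/heat values for the page.
    wishmap: binary mask of the same shape, 1 over value propositions,
             CTAs and other must-see elements, 0 everywhere else.
    """
    return float(attention_map[wishmap.astype(bool)].sum() / attention_map.sum())


# Toy example: a small CTA region on an otherwise uniform attention map.
attention = np.random.rand(400, 800)
wishmap = np.zeros_like(attention)
wishmap[300:340, 600:760] = 1  # where the CTA button sits
share = wishmap_coverage(attention, wishmap)
print(f"{share:.1%} of attention falls on must-see elements")
# A low share for a high-priority element is exactly the optimization lever
# described above: people aren't clicking because they can't see the good stuff.
```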

Want to try EyeQuant? Sign up here for a free analysis!

“50% of all users don’t buy because they can’t find what they’re looking for” – Shane Cassells, Google


If you’re interested in conversion optimization, you must check out Google Conversion Expert Shane Cassells’ excellent presentation in its entirety – his points are very much in line with our mission at EyeQuant: help your users find what they are looking for on your websites as quickly as possible.

…which is closely related to what he calls “Cassells’ Law”:

Every two years the amount of content we add to our websites doubles, while our users’ patience with our websites halves.

So true.

You can view all presentations of Jellyfish’s Digital Journey event here. Highly recommended!

 

Caltech STUDY: What you see is what you buy – how a simple design trick affects consumer choices

When consumers are busy, under time pressure or slightly indifferent towards a product’s branding they tend to choose products that visually pop out, even overriding their personal preferences.

As the old adage goes, “seeing is believing”. World-famous behavioural economist Daniel Kahneman specified the principle as “What You See Is All There Is”, pointing out how most of our decisions are based on quick, intuitive and ultimately flawed perceptual processes. A simple example of this is the universally flabbergasting Müller-Lyer illusion, in which two lines of equal length appear to differ once outward- and inward-pointing arrows are added.

Of course you KNOW that the lines are of the same length, yet you can’t “unsee” the presumed difference. Measuring it would certainly help, but our perceptual apparatus has a tendency to laziness. We’re a notoriously visual species – what you see is all there is.

Now, two of the vision researchers behind EyeQuant’s technology have just published a study together with their colleagues in the Journal of Consumer Psychology that goes a little further: simply put, “seeing is buying” – or: what you see is what you buy.

The study’s experiment, conducted by Caltech researchers and EyeQuant Advisory Board members Professor Mili Milosavljevic and Professor Christof Koch and their colleagues, is fairly straightforward: the scientists presented 7 hungry subjects with 15 food items, such as candy bars and chips of different brands. Subjects were first asked to rank those items in order of their preferences – e.g. some love a certain brand of chips but hate Snickers, while also having a slight preference for a brand of coconut candies, and so on. The subjects then had to make multiple rounds of choices between any two of those items – a little bit like you would when rushing through a supermarket.

This could be as simple as the most preferred item vs. the least preferred one, but obviously could also involve a choice between products that were ranked as similarly attractive. To emulate a more crowded, supermarket-like environment, the researchers surrounded each of the displayed products with 8 other items.

Now, if you’d assume that people stick by their preferences, you’d be absolutely right.

After all, companies spend billions of dollars on marketing to establish those preferences in the first place. Unsurprisingly, the researchers found that the simple choices were pretty much completely in line with the initial ranking given by the participants.

If someone indicated that they preferred snack A over snack B, they’d also choose accordingly.


Unless those preferences are overridden by a simple design tweak.

Milosavljevic and Koch found that they could significantly affect the choices of participants by altering the brightness of an item, which causes it to visually stick out a bit more. That’s right: improving the visual “pop-out effect” of a product (or demoting the pop-out effect of all other items) made it more likely for people to choose it over another. Specifically, they found that manipulating the visual impact had the strongest effect under three circumstances:

1. Time pressure: the effect was strongest when subjects had to make a superquick decision (200 ms of exposure). In fact, the impact of visual saliency for these quick choices was 200% stronger than the effect of liking one product more than the other in the ranking. At the same time, the effect was still significant and sizeable at longer viewing durations of 1500 ms.  Example: if you’re in a rush, you’re more likely to choose products that stick out visually.

2. Similarly attractive products: when subjects didn’t have a particularly strong preference between the two presented products, they tended to choose the one with a higher attentional impact. This is a particularly powerful finding, as most brands do suffer from “sameness” in cluttered marketplaces – improving the visual impact of a product’s design could be a cost-effective way to increase sales without any additional spend on branding – the very definition of conversion optimization. Example: if you’re mostly indifferent towards a range of products, you’re likely to choose the ones that are visually different.

3. Cognitive load / multi-tasking: if subjects had to perform an additional task while making their choices the visual impact bias was even stronger. Example: if you’re distracted while shopping, you might end up buying products with a stronger visual appearance.
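
As a rough illustration of the kind of manipulation used in the study (the exact stimuli and parameters aren’t described in this post), here is a sketch that brightens one product image relative to its neighbours to increase its visual pop-out. The file names and the brightness factor are placeholders, not values from the study:

```python
from PIL import Image, ImageEnhance


def boost_pop_out(path: str, factor: float = 1.3) -> Image.Image:
    """Return a brightened copy of a product image to increase its visual saliency.

    factor > 1.0 brightens the image; 1.3 is an arbitrary choice for
    illustration, not the manipulation strength used by the researchers.
    """
    img = Image.open(path).convert("RGB")
    return ImageEnhance.Brightness(img).enhance(factor)


# Hypothetical usage: brighten the "hero" product, leave its competitors untouched.
hero = boost_pop_out("product_hero.jpg", factor=1.3)
hero.save("product_hero_salient.jpg")
```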

Of course, every marketer knows that attention plays a huge role in how consumers perceive and purchase their products – yet our understanding of the underlying principles is still somewhat obscure and “alchemistic”. This study is one of the first to illuminate the role of attention in how consumers make decisions: often, what you see is what you buy.

“Attention Hacks” in eCommerce and online marketing: what they see is what they click?

In many ways, the study’s findings bear super-interesting implications for online marketers: with millions of offers competing for attention, we all spend less and less time to decide whether a freshly opened website is worth our time, often juggling between different tabs, chats and other distractions. Time pressure: check. Multi-tasking: check.

And let’s face it: most of the world’s brands aren’t exactly on par with Apple. Most products in most online shops are by definition of similar brand value – they’re often similarly attractive. 

This all means that the results of the study are quite relevant for online marketers – managing user attention in an effective way does provide a great and cost-effective way to increase online sales. If your online shop or website communicates its core value propositions in a visually effective way, it can make all the difference.

At EyeQuant we’re providing tools for just that: you simply upload a screenshot or URL and EyeQuant tells you within seconds what your website’s visitors are most likely to see and miss right away. This way, you can test and optimize your design’s visual impact step by step and within minutes –  test it for free with your website.

We pride ourselves on working with the original neuroscientists behind attention modeling and currently achieve over 90% accuracy when compared to actual eye-tracking – so you can tell we’re very excited about these recent findings.

Firstly, the study’s findings confirm what we see on a daily basis with plenty of our clients: what you see is what you click. Making sure that the important parts of your website stick out will almost inevitably increase conversions.

At the same time, we’re about to add automatic and actionable recommendations on how to improve your website – stay tuned and make sure to test your own website at EyeQuant!

Image credit: smemon

Let’s talk about CRO #1: Stephen Pavlovich from Conversion Factory

At EyeQuant we’re working on the best tools for conversion optimization experts. “Let’s talk about CRO” is our brand new interview column dedicated to those experts. We kick things off with Stephen Pavlovich, CEO of Conversion Factory in London! 

“Conversion optimization makes people more likely to do what we want them to do.”


Stephen, how did you get into conversion optimization and what do you enjoy most about it?

My big mouth got me into conversion optimization. I was working in educational publishing and was rude about a friend’s ecommerce site. Long story short, he offered me a job, and six months later, we’d increased sales 6x.

How would you explain Conversion Optimization to a child, in a tweet? 🙂

Conversion optimization makes people more likely to do what we want them to do.


What do you consider your biggest success story?

We’ve worked with all kinds of sites: some just starting out, others doing $5 billion per year online. Our highlight has been helping a finance client add $1 billion in funds within 12 months. We used some unconventional tactics, and were hugely fortunate to be working with a client that was open to try this. The results are phenomenal, and we’re thrilled to have been a part of it.

CRO is still a pretty new field. What’s your take on what our industry will look like in 10 years?

For us, CRO is actually an old art-form with a new medium. People have been optimising adverts and direct-response letters for decades – and the best optimizers are using the techniques they’ve developed. At the moment, websites doing CRO properly are getting a huge advantage over their competitors. They’re not just increasing their conversion rate; they’re building market share.
Over time, more companies will wake up to the massive potential CRO gives them. That’ll mean optimizers will have to get more advanced too, and will need to shift their focus more to personalisation.

You’ve tested and optimized hundreds, if not thousands of websites and landing pages – is it possible at all to develop a certain ‘hunch’ about which variation will yield the best results, and if so, have your intuitions improved over the years?

Our hunches get better over time, but there are always tests that surprise us. We’ve tested adding a “Free Delivery” message to an ecommerce client’s site… and it decreased the conversion rate.
Of course, this shows the importance of testing. Most sites would just push these changes live, not knowing when they’re decreasing the conversion rate. And those who do measure can then turn the dial up on the successful changes, getting even higher conversion rates.

Conversion Optimization is a rather technical term. I sometimes think that an easier (or even sexier) label for what we do would potentially benefit our industry – what would you call it if we could magically change CRO’s name overnight?

“Conversion optimization” isn’t ideal… I’m not keen on the focus on conversion rates – they’re a misleading metric 99% of the time (i.e. when two elements aren’t being tested in parallel). That said, if we had to choose another name… I think we’d have to split-test that.

Thanks Stephen!

Stephen Pavlovich runs Conversion Factory, a London-based agency specialising in conversion rate optimisation for highly competitive niches. Working with clients in finance, gaming, lead-gen and ecommerce, they’ve helped clients add millions of dollars in annual revenue. Clients range from $5 million/yr start-ups to web giants making over $5 billion/yr online. Increases typically start at 20%, with some clients generating $xxx million in additional revenue.