Topic: Search

Reputation Search

Case Study: How We Removed a Massive Manual Google Penalty in 5 Steps

Posted by Anna_of_PSD2HTML

When I joined the PSD2HTML team in November 2014, the site had been suffering from a manual penalty related to spammy backlinks for over a year. They’d tried everything to recover, but nothing worked.

They were ready to admit defeat.

The penalty resulted in the loss of over 80% of their organic traffic.

The story of how this happened is interesting. PSD2HTML was one of the first companies to market PSD to HTML conversions, back in 2005. At the peak of their success, they turned to an SEO company to promote the site. In 2013, their relationship with a well-known agency resulted in a manual spam penalty.

The following is a screenshot of what this looked like:

Recovering from the penalty was a painful process. Two in-house marketing departments, along with several hired agencies, analyzed over 2,500 linking domains. They spent a year and a half trying to remove the penalty, submitting numerous reconsideration requests that all failed.

Now that we’ve finally had the penalty revoked, we want to share our experience with other companies, website owners, and SEOs who might be suffering from the same problem, along with some useful tactics for removing penalties.

It is also important to give credit to our outstanding consultant who helped us overcome the disaster.

Step 1: Create a master penalty removal sheet

The first thing we did was to collect all incoming links pointing to PSD2HTML.com. We then created a master spreadsheet that we could work through to identify possible artificial links. This process involves the following steps:

  • Download all links from Google Search Console, using the process described here
  • Supplement with links from Majestic.com
  • Sort them by root domain using a simple Excel process from Distilled that can be found here

This may sound simple, but it’s not. It’s important to start with the right data. While it is helpful to have multiple link sources, Google Search Console data is key.
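For those who prefer to script this step, here is a minimal sketch in Python of how the combined link exports might be de-duplicated and grouped by root domain. The filenames and the simple hostname-based grouping are assumptions for illustration; a public-suffix-aware library would separate root domains more accurately than this does.

import csv
from urllib.parse import urlparse

def load_urls(path):
    # One linking URL per line; skip blank lines.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Combine both exports and de-duplicate (hypothetical filenames).
all_links = set(load_urls("gsc_links.txt")) | set(load_urls("majestic_links.txt"))

# Group by hostname as a rough stand-in for "root domain".
# Note: this keeps sub.example.com separate from example.com;
# a public-suffix-aware library would merge them properly.
by_domain = {}
for url in all_links:
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    by_domain.setdefault(host, []).append(url)

# Write the master sheet: one row per linking URL, sorted by domain.
with open("master_penalty_sheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["root_domain", "linking_url"])
    for domain in sorted(by_domain):
        for url in sorted(by_domain[domain]):
            writer.writerow([domain, url])

The resulting CSV, sorted by domain, makes it easier to work through one linking site at a time in the review described in Step 2.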

Step 2: Identify links that Google sees as artificial

Typical unnatural links include articles, link directories, bookmarks, blog comments, malware, guest posts, and scrapers: anything where the content exists primarily to influence rankings rather than to offer genuine value. PSD2HTML.com had very few of these types of links, which is possibly one reason the penalty had been around for so long. There were some potentially unreliable link directories in their link profile, but very few keyword-stuffed submissions or guest posts with links placed for rankings. It was important to identify which links really were artificial.

The theory of primary intention

Penalty write-ups typically list types of links that need to be removed. However, as link building methods continue to evolve, the potential types of unnatural links are unlimited. It is very helpful to identify the common denominator for all artificial links, which is the primary intention. If the primary intention of a link is to influence rankings, then it is artificial.

We went through all linking domains in order to develop a list of some interesting link types that fit this description. Despite our previous attempts, we could not access previous responses to reconsideration requests. As a result, no sample links from Google were available. Therefore, we had to start from scratch.

Giveaways

PSD2HTML had a number of links from giveaway promotions, where users could leave comments in return for a chance to win paid services. Paid reviews of services have a long history of being seen as unnatural by the Google Webspam Team. However, it is not necessarily artificial if a company wants to run a giveaway. Since these giveaways didn’t have the primary intention of gaining links to influence Google, we decided to keep them.

Again, it wasn’t that simple.

In the case of at least one giveaway, the page had a genuine intention and also contained specific links with artificial elements. For example, here are two links to PSD2HTML.com that appeared on the same page:

– “The world’s first and finest PSD to HTML conversion company, PSD2HTML®, is giving away $400, $300 and $200 worth of services!”
– The leading PSD to HTML slicing service has made outstanding changes to the way they do business and provide services.

These two links are bolded here, but not linked, as one of them was artificial. I’m sure that you can guess which one. The first one was a genuine reference to the company name. The second one was a keyword phrase. Therefore, the giveaway was not artificial, but the keyword link was.

In order to deal with this situation, we kept all of the giveaway links and the domains they were featured on. We drilled down to any pages that also had artificial keyword links and disavowed them individually. When Google denied our first request, none of the sample links were giveaways. We therefore inferred that we’d gotten this one right.
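To make the approach concrete, here is an example of the disavow file pattern we’re describing: whole domains are disavowed with domain: lines, while an individual giveaway page carrying an artificial keyword link is disavowed by URL. The domains and URLs below are invented purely for illustration.

# Spammy directories and scrapers: disavow the whole domain
domain:spammy-directory.example
domain:scraper-site.example

# Genuine giveaway domain kept; only the page carrying an
# artificial keyword link is disavowed at the URL level
http://giveaway-blog.example/psd-to-html-giveaway/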

Keyword footer links

PSD2HTML had some sites where they’d done conversion work and gained a keyword link at the footer of the site. This brought up the question as to what degree design firms can legitimately place footer links on client sites. John Mueller talks about this here. The intention idea proved useful here.

In one instance we noticed, the actual brand wasn’t linked, but the keywords were linked, so they were assumed to be artificial.

Sponsor and advertisement links

Sponsor links turned out to be absolutely fine. We initially thought sponsor links (not quite the same thing as sponsored links) could be seen as artificial, and wondered whether they’d be identified as such. However, these were genuine sponsors, so we left them, and it worked out fine.

We also found that image ads were fine. However, they usually only showed up in Majestic data and not in Search Console. Therefore, there wasn’t a problem.

Keyword articles, link directories, bookmarks, malware, and scrapers

Some submission sites with keyword links, along with the usual sludge of scrapers, were added to the disavow file. The rest of the profile looked clean, so it was submitted.

Step 3: Submission to and response from Google

Our penalty removal consultant had a proven record of eight penalties getting revoked on the first try.

Unfortunately for us, after our submission the Webspam Team returned the following three sample links:

  • http://www.tuicool.com/articles/UV3QZf
  • https://www.campaignmonitor.com/forums/topic/5542/html-dev-required/
  • http://ibartolome.blogspot.com/2012_01_01_archive.html

Interpreting Google sample links

The sample links that Google provides in response to reconsideration requests aren’t just samples. The Webspam Team shows you specific link types that still need to be removed. If you can identify the underlying link types provided by Google, it is possible to look through the link data again and find those link types.

Step 4: Identify links that Google sees as artificial from sample links

We thought the three sample links from Google were unusual. This penalty was interesting because the Webspam Team seemed to be highlighting possible new variations of artificial links. These link types appeared regularly and it looked like they were here to stay.

Sample link type #1: Chinese duplicate translation links

The first such link was a Chinese news site. In the past, it had been possible to clear penalties without similar foreign sites causing problems. These sites were posting verbatim articles from SmashingHub. Even though the URL included the source (http://smashinghub.com/10-best-online-resources-to-convert-psd-to-xhtmlcss.htm?utm_source=tuicool), Google had identified it as an artificial link. We went through the links again looking for duplicate Chinese pages. This was easy to do with the English ones; however, most of them were not duplicate translations.

This article was a duplicate of http://creativeoverflow.net/top-15-psd-to-html-services-to-use/ that was translated into Chinese. This made it more difficult to identify. Although it is tempting to judge a link only based on a language, countries with non-Western scripts form a massive part of the web, and can also offer genuine links to a site. We wanted to keep any genuine links using the primary intention idea, regardless of country or language. This is an example of a genuine link with no duplicate issues: http://www.rysos.com/bbs/redirect.php?fid=49&tid=7586&goto=nextoldset.

I identified duplicate Chinese links by searching for English keyword phrases. For example, most of the duplicates featured English website names. Therefore, by Googling “PSD2HTML” “CSS Chopper” “Direct Basing,” it was possible to identify the original English post.
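As a rough illustration of how that check can be automated across many suspect pages, here is a small Python sketch that looks for known English service names in a page’s text and builds the corresponding quoted Google query. The list of names, the sample text, and the two-name threshold are all assumptions; plug in the brand names that actually appear in your own link profile.

from urllib.parse import quote_plus

# English service names that tend to appear in the original list posts
# (illustrative; use the names relevant to your own niche).
KNOWN_NAMES = ["PSD2HTML", "CSS Chopper", "Direct Basing"]

def duplicate_check_query(page_text):
    # Return a Google search URL for the names found in the page text,
    # or None if too few names are present to suggest a translated copy.
    found = [name for name in KNOWN_NAMES if name.lower() in page_text.lower()]
    if len(found) < 2:
        return None
    query = " ".join('"%s"' % name for name in found)
    return "https://www.google.com/search?q=" + quote_plus(query)

# Example usage with the text of a suspect page (already fetched):
sample_text = "... PSD2HTML ... CSS Chopper ... Direct Basing ..."
print(duplicate_check_query(sample_text))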

Sample link type #2: Brand name used as keywords with genuine intention

Seeing https://www.campaignmonitor.com/forums/topic/5542/html-dev-required/ marked as artificial was frustrating. This is a forum post link that was given completely genuinely.

However, the link text was “PSD to HTML.” It was used legitimately as a brand name, simply rewritten with the number 2 in PSD2HTML spelled out as the word “to.” As noted in the giveaway examples above, some links did use “PSD to HTML” as a keyword phrase to influence Google, and those links were artificial. Here, though, the use of “PSD to HTML” was not artificial, since the underlying primary intention was completely genuine.

How do you deal with a genuine link that is marked as artificial due to brands listed as keywords? In order to solve this problem, we called Google out on artificial links. In our second reconsideration request message, we argued that while the link text consisted of keywords, they were used as a completely genuine reference to a brand. Therefore, the act of identifying the link as artificial was in itself artificial, since the link itself was entirely genuine.

The frustration of having a genuine link marked as artificial became a tool to add weight to our argument and conversation with the Webspam Team. They regularly respond with sample links that can be argued to be genuine, or are already in the disavow file. These sample links are very important. They can be used in your next presentation to the Webspam Team.

Sample link type #3: Financial offer to influence links

This link was very obscure. However, it could be seen as artificial. It was impressive how the Webspam Team isolated this one link type.

This link was in Spanish: http://ibartolome.blogspot.kr/2012_01_01_archive.html

The writer reported receiving an email offering a Christmas promotion: they would receive $50 off their next order from “our friends P2H” (P2H.com redirects to PSD2HTML.com).

There was a subtle, but important, difference between this link and the giveaways. Although the giveaways were promotional, they did not appear to be directly created with the intention of gaining a link. In the email, PSD2HTML offered $50 on their next order to bloggers with whom they had no previous relationship, which raised questions about their motivation. The “influenced-recommendation-tone” became clearer as the post continued. This indicated that the emails were sent in order to gain links.

We searched for similar links and found one more. This type of task can be difficult when there are many easy-to-spot, low-quality links. The standard artificial link types were largely irrelevant. The process of searching for the links that matched the exact link types implied by the Webspam Team’s three sample links took real precision.

Step 5: Second submission and response from Google

We submitted this work with our explanation to Google and received a very quick response.


Image credit: Comics Alliance

Euphoria. Penalty revoked.

A note on emails and outreach

We didn’t send any emails to get the penalties removed.

There are a number of differences between using the disavow tool alone and also doing manual outreach. Our main finding about email outreach is simple: it is not required to revoke a penalty. Clients and providers often feel they must use outreach to revoke a penalty, which can significantly add to the costs and timeframe.

You can certainly commit resources to email outreach, but it is not required on top of the disavow file to get a manual action removed. Once you realize this, you have more control and can save time and money. Google penalties are also a psychological phenomenon, so getting the “No manual webspam actions found” message to show up quickly is very important.

Source: Moz

Getting on the Map: The Intro to Local SEO for SABs

Posted by ImprezzioMarketing

Local SEO can be confusing for those businesses that don’t have a physical store for customers to walk into.

Unlike businesses with a brick-and-mortar storefront, service-area businesses (or SABs) go out to meet their customers, as opposed to their customers coming to see them. This often means they service multiple cities, which can be problematic: the #1 ranking factor in local SEO is the physical address of the business. In addition, business owners are usually concerned about privacy, since many of them use their home address, and they can’t utilize some of the features that Google offers small businesses (like Indoor Street View).

This guide will show you how you can maximize your presence on Google and reach more people in your local market.

1. Figure out which address you’re going to use.

As a service-area business, you only have a couple options. Here are some best practices:

  • If you have an office, use that for your business address everywhere online.
  • If you have no office but you have one or more business partners, use the home address of the partner who lives closest to the main area you service.
  • Use the address you registered with for your business everywhere. Think of the address you put on your bank business loan, the address you used when registering your business telephone line/cell phone, and the address you provided when you bought a business vehicle or equipment. These are the addresses that will eventually populate online via data providers, and they’ll give you a headache later if they don’t match what you used as your address in Google My Business (GMB).

2. Decide if you need to hide your address or not.

Hiding your address means that Google will know where you are (for verification), but users will not see your address publicly on Google.

You should always hide your address if you’re using your home address (unless customers actually show up there). If your customers do visit your home address, it needs to be blatantly obvious on your website. You should:

  • List driving directions,
  • Invite people to come visit, and
  • Include photos of your home office.

You should hide your address if you have an office, but no one is actively staffing it during the day. If a person walked in at 2pm during a work day, would your door be locked with no one there? If so, hide your address.

If you have an office that is actually staffed, you should leave it unhidden.

Flash from the MapMaker Top Contributor team wrote up a great guide that shows you how to hide your address and the rules that Google has about this.

3. Decide if a public address is okay elsewhere online.

If you fall into the majority of SABs that need to hide their address, decide if you’re okay publicly listing your address on other websites.

My advice is to always list your full address everywhere else online (other than Google My Business), including your website, Facebook page, Yellowpages listing, and so on. If you insist on not listing your home address anywhere, that’s okay, but know that you will run into some missed opportunities. There are still many local directories that require an address to be listed. Phil Rozek wrote a great summary of places you can list your business with a hidden address.

4. Think about how you should list the area you service.

In Google My Business, you can select which areas you service. You can do this by adding either zip codes or the names of the cities you service. It’s good to note that what you select here will determine how your business radius and marker will show up on Google Maps. If you choose a ton of cities and zip codes, Google will attempt to find the center of them and put your marker there. The result isn’t always ideal.

Keep in mind that the service areas you select have no impact on your ranking there. It’s extremely unlikely that you will rank in the local pack outside the town your address is in.


5. Do a thorough check for duplicate listings on Google.

  • The best option for a service-area business is to head to Google and type in this query (with the quotation marks included). Replace the dummy phone number with your actual one:

“plus.google.com” “999-999-9999” “about” “review”

  • Go to the end of the URL string in your browser (it starts with “google.com…”) and add &filter=0
  • Record a list of all the listings you find (they will all start with “plus.google.com”), and repeat those two steps for every phone number that might be associated with your business; a quick sketch for generating these queries appears after this list. Make sure you check your home phone & cell phone.
  • Once you have your list of existing listings, make sure you deal with all the duplicates appropriately.
  • If your duplicate listings had inconsistencies and used different phone numbers, websites, or addresses than the one you have provided to Google, make sure you search Google for other online references to that information and update it there, as well.
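If several phone numbers need checking, a short script can generate the exact queries for you. This is just a convenience sketch in Python; the query pattern is the one described above, and the phone numbers are placeholders.

from urllib.parse import quote_plus

# Every phone number that might be associated with the business
# (placeholders; substitute your real office, home, and cell numbers).
phone_numbers = ["999-999-9999", "888-888-8888"]

for number in phone_numbers:
    # The duplicate-listing query described above, quotation marks included.
    query = '"plus.google.com" "%s" "about" "review"' % number
    # &filter=0 keeps Google from collapsing near-duplicate results.
    url = "https://www.google.com/search?q=" + quote_plus(query) + "&filter=0"
    print(url)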

6. Do a local search on Google for a few keywords in the town your address is in and see who your competitors are.

Look for competitors that either have multiple listings (which is not allowed) or that are using keyword stuffing in their business name. Typically, more spam exists for service-area businesses than for businesses with storefronts. Locksmiths are known in the local SEO world as being the most-spammed business category.


Submit an edit for these listings through Google Maps to remove the keyword stuffing.


If the competitor is a service-area business with multiple listings, you can report the duplicates through Google Maps. As per the guidelines, a service-area business is not allowed to have multiple listings. The only exception would be if they had multiple offices where customers could actually show up.

7. Consider expanding your open hours.

Service-area businesses with hidden addresses have the advantage of listing the hours that they’re available to answer the phone. Businesses with storefronts are supposed to list the actual hours that customers can show up at their front door and get service. If they have a 24-hour call center, they are still not allowed to list themselves that way unless they’re someone like McDonald’s, with a 24-hour drive-through.

Service-area businesses avoid this rule because they have no physical storefront, so their open hours are the equivalent of the hours that they answer the phone. With Google’s new hours display in the search results, having longer open hours could result in a lot more calls.


8. Come up with a really great content strategy for the areas you target outside of the city your address is in.

Generally, you will only rank in the local pack for the city that your address is in. If your home address isn’t in the city that your primary book of business is in, this can be concerning. Other than setting up offices in different cities (real ones, not virtual ones), your best option is to target long-tail keywords & the organic section of Google using really great content.

Here are some tips for ways to generate good content:

  • Create pages/articles about the different jobs you do. If you are a home remodeler in the Denver, CO area but do jobs in the entire metro area, you could create a page for different jobs you did in Parker, CO. On that page, you could put before & after pictures of the job, a description of the job, details about the neighborhood you did it in, a testimonial from the customer, and so on and so forth.
  • Create how-to videos for your industry. If you’re a tree service business, you could create a video on how to prune a maple tree (think long-tail and get specific). Post the video on YouTube and use their transcription service to transcribe the entire thing as well. In the description, include the full name, address & phone number of your business along with a link back to your website.
  • Use a service like Nearby Now to help automate this process.
  • If you’re a contractor, create a useful page on your site for each town with safety information, emergency contacts or places to get permits.

Technically, I could continue to add a hundred more items to this list—for now, I wanted to focus on the major starting points that will help a service-area business start out on the right track. If you have questions, please let me know in the comments!

Source: Moz

How to Write for the Web—a New Approach for Increased Engagement – Whiteboard Friday

Posted by Dan-Petrovic

We tend to put a lot of effort into writing great content these days. But what’s the point of all that hard work if hardly anybody actually reads it through to the end?

In this week’s Whiteboard Friday, Dan Petrovic illustrates a new approach to writing for the web to increase reader engagement, and offers some tools and tips to help along the way.

How to Write for the Web - a New Approach for Increased Engagement Whiteboard

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

G’day, Moz fans, Dan Petrovic from DEJAN here. Today we’re talking about how to write for the web.

How much of an article will people actually read?

This year we did an interesting study involving 500 people. We asked them how they read online. We found that the proportion of people who actually read everything word-for-word is 16%. Amazingly, this is exactly the same percentage that Nielsen came up with in 1997. It’s been nearly two decades, and we still haven’t learned how to write for the web.

I don’t know about you guys, but I find this to be a huge opportunity, something we can do with our blogs and with our content to change and improve how we write in order to provide better user experience and better performance for our content. Essentially, what happens is four out of five people that visit your page will not actually read everything you wrote. The question you have to ask yourself is: Why am I even writing if people are not reading?

I went a little bit further with my study, and I asked those same people: Why is it that you don’t read? How is it that there are such low numbers for the people who actually read? The answer was, “Well, I just skip stuff.” “I don’t have time for reading.” “I mainly scan,” or, “I read everything.” That was 80 out of 500 people. The rest said, “I just read the headline and move on,” which was amazing to hear.

Further study showed that people are after quick answers. They don’t want to be on a page too long. They sometimes lose interest halfway through reading the piece of content. They find the bad design to be a deterrent. They find the subject matter to be too complex or poorly written. Sometimes they feel that the writing lacks credibility and trust.

I thought, okay, there’s a bunch of people who don’t like to read a lot, and there’s a bunch of people who do like to read a lot. How do I write for the web to satisfy both ends?

Here was my dilemma. If I write less, the effort for reading my content is very low. It satisfies a lot of people, but it doesn’t provide the depth of content that some people expect and it doesn’t allow me to go into storytelling. Storytelling is very powerful, often. If I write more, the effort will be very high. Some people will be very satisfied, but a lot of people will just bounce off. It’ll provide the depth of content and enable storytelling.

Actually, I ended up finding out something I didn’t know about, which was how journalists write. This is a very old practice called “inverted pyramid.”

The rules are, you start off with a primary piece of information. You give answers straight up. Right after that you go into the secondary, supporting information that elaborates on any claims made in the first two paragraphs. Right after that we go into the deep content.

I thought about this, and I realized why this was written in such a way: because people used to read printed stuff, newspapers. They would go read the most important thing, and if they drop off at this point, it’s not so bad because they know actually what happened in the first paragraph. The deep content is for those who have time.

But guess what? We write for the web now. So what happens is we have all this technology to change things and to embed things. We don’t really have to wait for our users to go all the way to the bottom to read deep information. I thought, “How can I take this deep information and make it available right here and right there to give those interested extra elaboration on a concept while they’re reading something?”

This is when I decided I’ll dive deeper into the whole thing. Here’s my list. This is what I promised myself to do. I will minimize interruption for my readers. I will give them quick answers straight in the first paragraph. I will support easy scanning of my content. I will support trust by providing citations and references. I will provide in-depth content to those who want to see it. I will enable interactivity, personalization, and contextual relevance to the piece of content people want to retrieve in that particular time.

I took one of my big articles and I did a scroll test on it. This was the cutoff point where people read everything. At this point it drops to 95, 80, 85. You keep losing audience as your article grows in size. Eventually you end up at about 20% of people who visit your page towards the bottom of your article.

My first step was to jump on the Hemingway app—a very good online app where you can put in your content and it tells you basically all the unnecessary things you’ve actually put in your words—to actually take them out because they don’t really need to be there. I did that. I sized down my article, but it still wasn’t going to do the trick.

Enter the hypotext!

This is where I came up with an idea of hypotext. What I did, I created a little plugin for WordPress that enables people to go through my article, click on a particular piece, kind of like a link.

Instead of going to a new website, which does interrupt their reading experience, a block of text opens within the paragraph of text they’re reading and gives them that information. They can click if they like, or if they don’t want to look up this information, they don’t have to. It’s kind of like links, but injected right in the context of what they’re currently reading.

This was a nerve-wracking exercise for me. I did 500 revisions of this article until I got it right. What used to be a 5,000-word article turned into a 400-word article, which can then be expanded to its original 5,000-word form. People said, “That’s great. You have a nice hypothesis, nice theory, but does this really work?”

So I decided to put everything I did to a test. An old article, which takes about 29 minutes to read, was attracting people to the page, but they were spending 6 minutes on average—which is great, but not enough. I wanted people to spend way more time. If I put the effort into writing, I wanted them to digest that content properly. The bounce rate was quite high, meaning they were quite tired with my content, and they just wanted to move on and not explore anything else on my website.

Test Results

After implementing the compressed version of my original article, giving them a choice of what they will read and when, I expanded the average time on page to 12 minutes, which is extraordinary. My bounce rate was reduced to 60%, which meant that people kept browsing for more of my content.

We did a test with a content page, and the results were like this:

Basically, the engagement metrics on the new page were significantly higher than on the old when implemented in this way.

On a commercial landing page, we had a situation like this:

We only had a small increase in engagement. It was about 6%. Still very happy with the results. But what really, really surprised me was on my commercial landing page—where I want people to actually convert and submit an inquiry—the difference was huge.

It was about a 120% increase in the inquiries in comparison to the control group when I implemented this type of information. I removed the clutter and I enabled people to focus on making the inquiry.

I want you all to think about how you write for the web, what is a good web reading experience, and how content on the web should be, because I think it’s time to align how we write and how we read on the web. Thank you.

Video transcription by Speechpad.com

A few notes:

There are a few things to note here. First, for an example of an implementation of hypotext, take a look at this post on user behavior data.

Next, keep in mind that Google does devalue hidden content, as they take issue with its usability. You can read more about this on the DEJAN blog—there are further tips there on the dangers of hidden content and how you can combat them.

One solution is to reverse how hypotext works in an article. Rather than defaulting to the shorter piece, you can start by showing the full text and offer a “5-minute-read” link (example here) for those inclined to skim or not interested in the deep content.

Share your thoughts in the comments below, and thanks for listening!

Source: Moz

Click-Through Rate Isn’t Everything: 8 Ways to Improve Your Online Display Ads

Posted by rMaynes1

You are exposed to an average of 362 online display ads a day. How close are you to buying anything when you see those ads?

Online display ads have been around for over 20 years. They’re nothing new. But over the past 2 decades, the content, format, and messaging of display ads have changed dramatically—because they have had to!

The click-through rate of that first banner ad in 1994 was 44%. CTRs have steadily declined, and were sitting at around 0.1% in 2012 for standard display ads (video and rich media excluded), according to DoubleClick. Advertisers had to do something to ensure that their ads were seen, and engaged with—ads had to be a useful resource, and not an annoying nuisance.

It’s important, however, that the focus is not firmly fixed on CTRs. Yes, online display ads have largely been considered a tool for direct response advertising, but more recently, advertisers are understanding the importance of reaching the right person, in the right mindset, with an ad that can be seen. This ad may not be clicked on, but does that mean it wasn’t noticed and remembered? Advertisers are increasingly opting to pay for performance as opposed to clicks and/or impressions. Advertisers want their ad to drive action that leads to purchase—and that isn’t always in the form of a click.

Mediative recently conducted and released a research study that looks at how display ads can drive purchase behaviour. If someone is browsing the web and sees an ad, can it influence a purchase decision? Are searchers more responsive to display ads at different stages in the buying cycle? What actions do people take after seeing an ad that captures their interest? Ultimately, Mediative wanted to know how indicative of purchase behaviour a click on an ad was, and if clicks on display ads even matter anymore when it comes to driving purchase behaviour and measuring campaign success. The results from an online survey are quite interesting.

1. The ability of online display ads to influence people increases as they come closer to a purchase decision.

In fact, display ads are 39% more likely to influence web users when they are researching a potential purchase versus when they have no intent to buy.

Advertiser action item #1:

Have different ad creatives with different messaging that will appeal to the researcher and the purchaser of your product or service separately. Combined with targeted impressions, advertisers are more likely to reach and engage their target audience when they are most receptive to the particular messaging in the ad.

Here are a few examples of Dell display ads and different creatives that have been used:

This creative is focusing on particular features of the product that might appeal more to researchers.

This ad injects the notion of “limited time” to get a deal, which might cause people who are on the fence to act faster—but it doesn’t mention pricing or discounts.

These creatives introduce price discounts and special offers which will appeal to those in the market to buy.

2. The relevancy of ads cannot be overstated.

40% of people took an action (clicked the ad, contacted the advertiser, searched online for more information, etc.) from seeing an ad because it was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #2:

Use audience data or lookalike modeling in display campaigns to ensure ads will be targeted to searchers who have a higher likelihood of being interested in the product or service. Retargeting ads to people based on their past activity or searches is valuable at this stage, as potential customers can be reached all over the web while they comparison shop.

An established Canadian charitable organization ran an awareness campaign in Q2 2015 using retargeting, first- and third-party data lookalike modeling, and contextual targeting to help drive both existing and new users to their website. The goal was to drive donations while reducing the effective cost per action (eCPA) of the campaign. This combination added granularity to the targeting, enabling the most efficient spending possible. The result was an eCPA of $76 against a goal of $600, nearly eight times better than the target (reported in the study as a 689% improvement).

3. Clicks on ads are not the only actions taken after seeing ads.

53% of people said they were likely to search online for the product featured in the ad (the same as those who said they would click on the ad). Searching for more information online is just as likely as clicking the ad after it captures attention, just not as quickly as a click (74% would click on the ad immediately or within an hour, 52% would search online immediately or within an hour).

Advertiser action item #3:

It is critical not to measure the success of a display campaign by clicks alone. Advertisers can get caught up in CTRs, but it’s important to remember that ads will drive other behaviours in people, not just a click. Website visits, search metrics, etc. must all be taken into consideration.

A leading manufacturer of PCs, laptops, tablets, and accessories wanted to increase sales in Q2 of 2014, with full transparency on the performance and delivery of the campaign. The campaign was run against specific custom audience data focusing on people of technological, educational, and business interest, and was optimized using various tactics. The result? The campaign achieved a post-view ROI revenue (revenue from target audiences who were presented with ad impressions, yet did not necessarily click through at that time) that was 30x the amount of post-click revenue.

4. Clicks on ads are not the only actions that lead to purchase.

33% of respondents reported making a purchase as a direct result of seeing an ad online. Of those, 61% clicked and 44% searched (multiple selections were allowed), which led to a purchase.

Advertiser action item #4:

Revise the metrics you measure. Measuring “post-view conversions” takes into account the fact that people may see an ad but act later—the ad triggers an action, whether it be a search, a visit, or a purchase—just not immediately, and not in a way that a click alone can capture.

5. The age of the target audience can impact when ads are most likely to influence them in the buying cycle.

  • Overall, 18–25 year olds are most likely to be influenced by online advertising.
  • At the beginning of the buying cycle, younger adults aged 18–34 are likely to notice and be influenced by ads much more than people aged over 35.
  • At the later stages of the buying cycle, older adults aged 26–54 are 12% more likely than 18–25 year olds to have made a purchase as a result of seeing an ad.

Advertiser action item #5:

If your target audience is older, multiple exposures of an ad might be necessary in order to increase the likelihood of capturing their attention. Integrated campaigns could be more effective, where offline campaigns run in parallel with online campaigns to maximize message exposure.

6. Gender influences how much of an impact display ads have.

More women took an online action that led to a purchase in the last 30 days, whereas more men took an offline action that led to a purchase.

  • 76% more women than men visited an advertiser’s website without clicking on the ad.
  • 47% more women than men searched online for more information about the advertiser, product, or service.
  • 43% more men than women visited the advertiser’s location.
  • 33% more men than women contacted the advertiser.

Advertiser action item #6:

Ensure you know as much about your target audience as possible. What is their age, their average income? What sites do they like to visit? What are their interests? The more you know about who you are trying to reach, the more likely you will be to reach them at the right times when they will be most responsive to your advertising messages.

7. Income influences how much of an impact display ads have.

  • Web users who earned over $100k a year were 35% more likely to be influenced by an ad when exposed to something they hadn’t even thought about than those making under $50k a year.
  • When ready to buy, people who earned under $20K were 12.5% more likely to be influenced by ads than those making over $100K.

Advertiser action item #7:

Lower earners (students, part-time workers, etc.) are more influenced by ads when ready to buy, so will likely engage more with ads offering discounts. Consider income differences when you are trying to reach people at different stages in the buying cycle.

8. Discounts don’t influence people if they are not relevant.

We were surprised that the results of the survey indicated that discounts or promotions in ads did not have more of an impact on people—but it’s likely that the ads with coupons were irrelevant to the searcher’s needs or wants, therefore would have no impact. We asked people what their reasons were behind taking action after seeing an online ad. 40% of respondents took an action from seeing an ad for a more purchase-related reason than simply being interested—they took the action because the ad was relevant to a need or want, or relevant to something they were doing at the time.

Advertiser action item #8:

Use discounts strategically. Utilizing data in campaigns can ensure ads reach people with a high intent to buy and a high likelihood of being interested in your product or service. Turn interest into desire with coupons and/or discounts—it will have more of an impact if directly tied to something the searcher is already considering.

In conclusion, to be successful, advertisers need to ensure their ads are providing value to online web users—to be noticed, remembered, and engaged with, relevancy of the ad is key. Serving relevant ads related to a searcher’s current need or want is far more likely to capture attention than a “one-size-fits-all” approach.

Advertisers will be rewarded for their attention to personalization with more interaction with ads and a higher likelihood of a purchase. Analyzing lower funnel metrics, such as post-view conversions, rather than simply concentrating on the CTR will allow advertisers to have a far better understanding of how their ads are performing, and the potential number of consumers that have been influenced.

Rebecca Maynes, Manager of Content Marketing and Research with Mediative, was the major contributor on this whitepaper. The full research study is available for free download at Mediative.com.

Source: Moz

Why All SEOs Should Unblock JavaScript & CSS… And Why Google Cares

Posted by jenstar

If you’re a webmaster, you probably received one of those infamous “Googlebot cannot access CSS and JS files on example.com” warning letters that Google sent out to seemingly every SEO and webmaster. This was a brand new alert from Google, although we had been hearing from the search engine for some time about the need to ensure all resources are unblocked—including both JavaScript and CSS.

There was definite confusion around these letters, compounded by some of the reporting in Google Search Console. Here’s what you need to know about Google’s desire to see these resources unblocked, and how you can easily unblock them to take advantage of the associated ranking boosts.

Why does Google care?

One of the biggest complaints about the warning emails lay in the fact that many felt there was no reason for Google to see these files. This was especially true because it was flagging files that, traditionally, webmasters blocked—such as files within the WordPress admin area and WordPress plugin folders.

Here’s the letter in question that many received from Google. It definitely raised plenty of questions and concerns:

Of course, whenever Google does anything that could devalue rankings, the SEO industry tends to freak out. And the confusing message in the warning didn’t help the situation.

Why Google needs it

Google needs to render these files for a couple of key reasons. The most visible and well known is the mobile-friendly algorithm. Google needs to be able to render the page completely, including the JavaScript and CSS, to ensure that the page is mobile-friendly and to apply both the mobile-friendly tag in the search results and the associated ranking boost for mobile search results. Unblocking these resources was one of the things that Google was publicly recommending to webmasters to get the mobile-friendly boost for those pages.

However, there are other parts of the algorithm that rely on using it, as well. The page layout algorithm, the algorithm that looks at where content is placed on the page in relation to the advertisements, is one such example. If Google determines a webpage is mostly ads above the fold, with the actual content below the fold, it can devalue the rankings for those pages. But with the wizardry of CSS, webmasters can easily make it appear that the content is front and center, while the ads are the most visible part of the page above the fold.

And while it’s an old-school trick and not very effective, people still use CSS and JavaScript to hide things like keyword stuffing and links—including, in the case of a hacked site, hiding them from the actual website owner. By crawling the CSS and JavaScript, Googlebot can determine whether they are being used deceptively.

Google also has hundreds of other signals in their search algo, and it is very likely that a few of those use data garnered from CSS and JavaScript in some fashion as well. And as Google changes things, there is always the possibility that Google will use it for future signals, as well.

Why now?

While many SEOs had their first introduction to the perils of blocking JavaScript and CSS when they received the email from Google, Matt Cutts was actually talking about it three-and-a-half years ago in a Google Webmaster Help video.

Then, last year, Google made a significant change to their webmaster guidelines by adding it to their technical guidelines:

Disallowing crawling of Javascript or CSS files in your site’s robots.txt directly harms how well our algorithms render and index your content and can result in suboptimal rankings.

It still got very little attention at the time, especially since most people believed they weren’t blocking anything.

However, one major issue was that some popular SEO WordPress plugins were blocking some JavaScript and CSS. Since most WordPress users weren’t aware this was happening, it came as a surprise to learn that they were, in fact, blocking resources.

It also began showing up in a new “Blocked Resources” section of Google Search Console in the month preceding the mobile-friendly algo launch.

How many sites were affected?

In usual Google fashion, they didn’t give specific numbers about how many webmasters received these blocked-resources warnings. But Gary Illyes from Google did confirm that the number sent was equal to 18.7% of the mobile-friendly warnings sent out earlier this year.

Finding blocked resources

The email that Google sent to webmasters alerting them to the issue of blocked CSS and JavaScript was confusing. It left many webmasters unsure of what exactly was being blocked and what was blocking it, particularly because they were receiving warnings for JavaScript and CSS hosted on other third-party sites.

If you received one of the warning letters, the suggestion for how to find blocked resources was to use the Fetch tool in Google Search Console. While this might be fine for checking the homepage, for sites with more than a handful of pages, this can get tedious quite quickly. Luckily, there’s an easier way than Google’s suggested method.

There’s a full walkthrough here, but for those familiar with Google Search Console, you’ll find a section called “Blocked Resources” under “Google Index,” which will tell you which JavaScript and CSS files are blocked and on which pages they’re found.

You also should make sure that you check for blocked resources after any major redesign or when launching a new site, as it isn’t entirely clear if Google is still actively sending out these emails to alert webmasters of the problem.

Homepage

There’s been some concern about those who use specialized scripts on internal pages and don’t necessarily want to unblock them for security reasons. John Mueller from Google said that they are looking primarily at the homepage—both desktop and mobile—to see what JavaScript and CSS are blocked.

So at least for now, while it is certainly a best practice to unblock CSS and JavaScript from all pages, at the very least you want to make it a priority for the homepage, ensuring nothing on that page is blocked. After that, you can work your way through other pages, paying special attention to pages that have unique JavaScript or CSS.

Indexing of Javascript & CSS

Another reason many sites give for not wanting to unblock their CSS and JavaScript is that they don’t want those files to be indexed by Google. But neither JavaScript nor CSS is a file type that Google will index, according to their long list of supported file types for indexing.

All variations

It is also worth remembering to check both the www and the non-www versions for blocked resources in Google Search Console. This is often overlooked by webmasters who only tend to look at the version they prefer to use for the site.

Also, because the blocked resources data shown in Search Console is based on when Googlebot last crawled each page, you could find additional blocked resources when checking them both. This is especially true for sites that are older or not updated as frequently, and therefore not crawled daily (as a more popular site is).

Likewise, if you have both a mobile version and a desktop version, you’ll want to ensure that both are not blocking any resources. It’s especially important for the mobile version, since it impacts whether each page gets the mobile-friendly tag and ranking boost in the mobile search results.

And if you serve different pages based on language and location, you’ll want to check each of those as well. Don’t just check the “main” version and assume it’s all good across the entire site. It’s not uncommon to discover surprises in other variations of the same site. At the very least, check the homepage for each language and location.

WordPress and blocking Javascript & CSS

If you use one of the “SEO for WordPress”-type plugins on a WordPress-based site, chances are you’re blocking JavaScript and CSS because of that plugin. It used to be an out-of-the-box default setting for some of them to block everything in the /wp-admin/ folder.

When the mobile-friendly algo came into play, because those admin pages were not being individually indexed, the majority of WordPress users left that robots block intact. But this new Google warning does require that all WordPress-related JavaScript and CSS be unblocked, and Google will show an error if you block them.

Yoast, creator of the popular Yoast SEO plugin (formerly WordPress SEO), also recommends unblocking all the JavaScript and CSS in WordPress, including the /wp-admin/ folder.

Third-party resources

One of the ironies of this was that Google was flagging third-party JavaScript, meaning JavaScript hosted on a third-party site that was called from each webpage. And yes, this includes Google’s own Google AdSense JavaScript.

Initially, Google suggested that website owners contact those third-party sites to ask them to unblock the JavaScript being used, so that Googlebot could crawl it. However, not many webmasters were doing this; they felt it wasn’t their job, especially when they had no control over what a third-party site blocks from crawling.

Google later said that they were not concerned about third-party resources because of that lack of control webmasters have. So while it might come up on the blocked resources list, they are truly looking for URLs for both JavaScript and CSS that the website owner can control through their own robots.txt.

John Mueller revealed more recently that they were planning to reach out to some of the more frequently cited third-party sites in order to see if they could unblock the JavaScript. While we don’t know which sites they intend to contact, it was something they planned to do; I suspect they’ll successfully see some of them unblocked. Again, while this isn’t so much a webmaster problem, it’ll be nice to have some of those sites no longer flagged in the reports.

How to unblock your JavaScript and CSS

For most users, it’s just a case of checking the robots.txt and ensuring you’re allowing all JavaScript and CSS files to be crawled. For Yoast SEO users, you can edit your robots.txt file directly in the admin area of WordPress.

Gary Illyes from Google also shared some detailed robots.txt changes on Stack Overflow. You can add these directives to your robots.txt file in order to allow Googlebot to crawl all Javascript and CSS.

To be doubly sure you’re unblocking all JavaScript and CSS, you can add the following to your robots.txt file, provided you don’t have any directories being blocked in it already:

User-Agent: Googlebot
Allow: .js
Allow: .css

If you have a more specialized robots.txt file, where you’re blocking entire directories, it can be a bit more complicated.

In these cases, you also need to allow the .js and .css files for each of the directories you have blocked.

For example:

User-Agent: Googlebot
Disallow: /deep/
Allow: /deep/*.js
Allow: /deep/*.css

Repeat this for each directory you are blocking in robots.txt.

This allows Googlebot to crawl those files while still disallowing other crawlers (if you’ve blocked them). However, chances are the bots you’re most concerned about keeping away from your JavaScript and CSS aren’t the ones that honor robots.txt files anyway.

You can change the User-Agent to *, which would allow all crawlers to crawl it. Bing does have its own version of the mobile-friendly algo, which requires crawling of JavaScript and CSS, although they haven’t sent out warnings about it.
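Once you’ve edited robots.txt, it’s worth sanity-checking that Googlebot can actually fetch your scripts and stylesheets. Below is a minimal Python sketch using the standard library’s robots.txt parser; the domain and file paths are placeholders. Note that Python’s parser doesn’t understand Google’s wildcard (*) extensions, so it may report patterns like Allow: /deep/*.js more strictly than Google applies them; treat the Fetch tool and the Blocked Resources report in Search Console as the final word.

from urllib.robotparser import RobotFileParser

# Placeholder site; point this at your own robots.txt.
parser = RobotFileParser("https://www.example.com/robots.txt")
parser.read()

# A few representative resources to test (placeholder paths).
test_urls = [
    "https://www.example.com/wp-includes/js/jquery/jquery.js",
    "https://www.example.com/wp-content/themes/mytheme/style.css",
]

for url in test_urls:
    allowed = parser.can_fetch("Googlebot", url)
    print(("allowed " if allowed else "BLOCKED ") + url)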

Bottom line

If you want to rank as well as you possibly can, unblocking JavaScript and CSS is one of the easiest SEO changes you can make to your site. This is especially important for those with a significant amount of mobile traffic, since the mobile ranking algorithm does require they both be unblocked to get that mobile-friendly ranking boost.

Yes, you can continue blocking Googlebot from crawling either of them, but your rankings will suffer if you do so. And in a world where every position gained counts, it doesn’t make sense to sacrifice rankings in order to keep those files private.

Source: Moz