Topic: Search

Reputation Search

Creating Influencer-Targeted Content to Earn Links + Coverage – Whiteboard Friday

Posted by randfish

Most SEO campaigns need three kinds of links to be successful; targeting your content to influencers can get you 2/3 of the way there. In this Whiteboard Friday, Rand covers the tactics that will help your content get seen and shared by those with a wide and relevant audience.

How to create influencer-targeted content - Whiteboard Friday

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about how to create content that is specifically influencer-targeted in order to earn the links and attention and amplification that you often need.

Most SEO campaigns need 3 types of links:

So it’s the case that most SEO campaigns, as they’re trying to earn the rankings they’re seeking, are trying to do a few things. You’re trying to grow your overall Domain Authority, and you’re trying to get specific pages on your site ranking for targeted keyword terms and phrases.

So you need kind of three kinds of links. This is most campaigns.

1. Links from broad, high-Domain Authority sites that are pointing — you kind of don’t care — anywhere on your site, the home page, internal pages, to your blog, to your news section. It’s totally fine. So a common one that we use here would be like the New York Times. I want the New York Times to link to me so that I have the authority and influence of a link from that domain and, hopefully, lots of domains like them, very high-Domain Authority domains.

2. Links to specific high-value keyword-targeted pages, hopefully, hopefully with specific anchor text, and that’s going to help me boost those individual URLs’ rankings. So I want this page over here to link to me and say “hairdryers,” to my page that is keyword targeted for the word “hairdryers.” Fingers crossed.

3. Links to my domain from other sites, in my sector or niche, that provide some of that topical authority and influence to help tell Google and the other search engines that this is what my site is about, that I belong in this sphere of influence, that I’m semantically and topically related to words and phrases like this. So I want appliancegal.com to link to my site if I’m trying to rank in the world of hairdryers and other kinds of appliances.

So of these, for one and three, we won’t talk about two today, but for one and three, much of the time the people that you’re trying to target are what we call in the industry influencers, and these influencers are going to be lots of people. I’ve illustrated them all here — mostly looking sideways at each other, not exactly sure why that is — but bloggers, and journalists, and authors, and conference organizers, and content marketers, and event speakers, and researchers, and editors, and podcasters, and influencers of a wide, wide variety. We could fill up the whole board with the types of people who are in the influencer world or have that title specifically, but they tend to share a few things in common. They are trying to produce content of one kind or another. They’re not dissimilar from us. They’re trying to produce things on the web, and when they do, they need certain elements to help fill in the gap. When they’re looking for those gap-filling elements, that is your opportunity to earn these kinds of links.

Content tactics

So a few tactics for that. First off, one of the most powerful ones, and we’ve talked about this a little bit here on Whiteboard Friday, but probably not in depth, is…

A. Statistics and data. The reason this is such a powerful tool is that when you create data, especially data that’s unique to you (because you gathered it and no one else can, or because you’ve curated it together from many disparate sources), everyone on this list needs those statistics and data to support or challenge their arguments, their assertions, or their coverage of the industry, whatever it is.

  • Why this works: This works well because this fills that gap. This gives them the relevant stats that they’re looking for. Because numbers are easy to use and easy to cite, and you can say, “Feel free to link to this. You’re welcome to copy this graph. You’re welcome to embed this chart.” All those kinds of things. That can make it even easier, but much of the time, just by having these statistics, you can do it.
  • The key is that you have to be visible at the time these people are looking, and that usually means ranking for long-tail terms that are hard to discover through normal keyword research: terms that use words like “stats,” “data,” “charts,” “graphs,” and question formats like when, how much, how many, number of, etc.

It’s tough because you will not see many of those in your keyword research; relatively few of these people are searching in any given month for this type of gap-filling data, so you often have to intuit what to title those things. Put yourself in these people’s shoes and start Googling around for “What would I need if I had to write some industry coverage around this?” Then you’ll come up with these types of things, and you can try modifying your keyword research queries or doing some Google Suggest work with these words and phrases.
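One lightweight way to brainstorm these gap-filling queries is to programmatically combine your seed topics with the modifier words above, then check the results against your keyword tool or Google Suggest by hand. A minimal sketch, where the seed terms are illustrative assumptions:

```python
# Sketch: generate candidate long-tail queries by combining seed topics
# with the "gap-filling" modifiers and question formats Rand mentions.
# The seed list is a hypothetical example, not a fixed recipe.

SEEDS = ["hairdryers", "hair dryer sales"]
MODIFIERS = ["stats", "data", "charts", "graphs"]
QUESTIONS = ["how many", "how much", "number of"]

def candidate_queries(seeds, modifiers, questions):
    """Yield modifier and question-format permutations for each seed."""
    for seed in seeds:
        for mod in modifiers:
            yield f"{seed} {mod}"
        for q in questions:
            yield f"{q} {seed}"

queries = list(candidate_queries(SEEDS, MODIFIERS, QUESTIONS))
print(len(queries))  # 2 seeds x (4 modifiers + 3 questions) = 14
```

From there, each candidate can be dropped into a suggest box or keyword tool to see which versions real searchers actually type.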

B. Visual content. Visual content is exceptionally valuable in this case because, again, it fills a gap that many of these folks have. When you are a content marketer, or when you’re a speaker at an event, or when you’re an author or a blogger, you need visual content that will help catch the eye, that will break up the writing that you’ve done, and it’s often much easier to get someone else’s visual content and simply cite your source and link to it than it is to create visual content of your own. These people often don’t have the resources to create their own visual content.

  • Why this works: So, for everyone who’s doing posts, and articles, and slide decks, and even videos, they say, “Why not let someone else do the work,” and you can be that someone else and fill these gaps.
  • Key: To do this well, you’re going to want to appear in a bunch of visual content search mediums that these folks are going to use. Those are places like…
    • Google Images most obviously, but also
    • Pinterest
    • SlideShare, meaning take your visuals, put them up in some sort of slide format, give some context to them and upload them to SlideShare. The nice thing about SlideShare, SlideShare actually reproduces each individual slide as a visual, and then Google Images can search those, and so you’ll often see SlideShare’s results inside Google Images. So this can be a great end around for that.
    • Instagram search, many folks are using that especially if you’re doing photos. You can see I’ve illustrated my own hair drying technique right here. This is clearly Rand. Look at me. I’ve got more hair than I know what to do with.
    • Flickr, still used by many searchers, particularly because it offers Creative Commons search. That brings up a broader point: using a Creative Commons commercial-use license that requires attribution with a link is your best bet across all of these platforms. It means you can also get onto lots of other Creative Commons visual and photography search engines, which can expose you to more of these types of people as they’re doing their searches.

C. Contrarian/counter-opinions. The last one I’ll cover here is contrarian or counter-opinions to the prevailing wisdom. So you might have an opinion like, “In the next three years, hairdryers will be completely obsolete because of X.”

  • Why it works: This works well because modern journalism (and modern content in general) operates on the idea that it’s supposed to create conflict and cover both sides of an issue. In many industry-specific fields, that’s often a gap that goes unfilled. By being the challenger to conventional wisdom or conventional thinking, you can fill that gap.
  • The key here is that you want to rank in Google for some of those mid- or long-tail research-type queries. These can be competitive, so this is challenging, but presenting contrarian opinions is often great link bait and a good way to earn links of all kinds.
  • Second, I would also urge you to do a little comment marketing and participate on some social media platforms, because what you want to build is a brand where you’re known for having this contrarian opinion on a conventional topic in your space, so that people point these influencers to you when they’re asked about it. You’re trying to build up this branding of, “Well, I don’t agree with the conventional wisdom around hairdryers.” Hairdryers might be a tough topic for that one, but these other two tactics can certainly work really well.

So using these tactics, I hope that you can go reach out, fill some gaps for these influencers, and, as a result, earn two of the three exact kinds of links that you need in order to rank well in the search results.

And we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Source: Moz

The Case For & Against Attending Marketing Conferences

Posted by randfish

I just finished reading Jan Schaumann’s short post on Why Companies Should Pay for Their Employees to Attend Conferences. I liked it. I generally agree with it. But I have more to add.

First off, I think it’s reasonable for managers and company leaders to be wary of conferences and events. It is absolutely true that if your employees attend them, there will be costs associated, and it’s logical for businesses to seek a return on investment.

What do you sacrifice when sending a team member to an event?

Let’s start by attempting to tally up the costs:

  • Lost productivity – Usually on the order of 1 to 4 days depending on the length of the event, travel distance, tiredness from travel, whether the team member does some work at the event or makes up with evenings/weekends, etc. Given marketing salaries ranging from $40K–$100K, this could be as little as $150 (~1 day’s cost at the lower end) to $1,900 (a week’s cost on the high end).
  • Cost of tickets – In the web marketing world, the range of events is fairly standard, between ~$1,000 and $2,000, with discounts of 20–50% off those prices for early registration (or with speaker codes). Some examples:
    • CTAConf in Vancouver is $999 ($849 if you’re an Unbounce customer)
    • Content Marketing World in Cleveland is $1,195 (early rate) or $1,395 later
    • Pubcon Las Vegas is $1,099 (early rate); not sure what it goes up to
    • HubSpot’s INBOUND is $1,299 (or $1,899 for a VIP pass)
    • SMX East is $1,795 (or $2,595 for all access)
    • SearchLove London is $890 (or $1,208 for VIP)
    • MozCon in Seattle is $1,549 (or $1,049 for Moz subscribers)
  • Cost of travel and lodging – Often between $1,000–$3,000/person depending on location, length, and flight+hotel costs.
  • Potential loss of employee through recruitment or networking – It’s a thorny one, but it has to be addressed. I know many employers who fear sending their staff to events because they worry that the great networking opportunities will yield a higher-paying or more exciting offer in the future. Let’s say that for every 30 employees you send (or every 30 events you send an employee to), you’ll lose one to an opportunity that otherwise wouldn’t have had them considering a departure. I think that’s way too high (not because marketers don’t leave their jobs but because they almost always leave for reasons other than an opportunity that came through a conference), but we’ll use it anyway. On the low end, that might cost you $10K (if you’ve lost a relatively junior person who can be replaced fairly quickly) and on the high end, might be as much as $100K (if you lose a senior person and have a long period without rehiring + training). We’ll divide that cost by 30 using our formula of one lost employee per thirty events.

Total: $4,630–$10,230
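If you want to adapt this tally to your own situation, the arithmetic is easy to parameterize. Here is a rough sketch using the component figures above; the specific low/high pairings are my own choices, so the endpoints won't exactly match the rounded range given:

```python
# Sketch: parameterize the per-event cost tally so you can plug in your
# own salary, ticket, travel, and attrition figures. Component ranges
# follow the article; the attrition cost is amortized per event using
# the "one lost employee per thirty events" assumption.

def event_cost(productivity, ticket, travel, attrition_cost, events_per_loss=30):
    """Total cost of sending one person to one event, with the expected
    attrition cost spread across events (1 loss per `events_per_loss`)."""
    return productivity + ticket + travel + attrition_cost / events_per_loss

low = event_cost(productivity=150, ticket=1_000, travel=1_000, attrition_cost=10_000)
high = event_cost(productivity=1_900, ticket=2_000, travel=3_000, attrition_cost=100_000)
print(round(low), round(high))  # 2483 10233
```

Swapping in your own team's salaries and typical travel costs gives a range tailored to your business rather than a generic industry figure.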

That’s no small barrier. For many small businesses or agencies, it’s a month or two of their marketing expenses or the salary for an employee. There needs to be significant return on those dollars to make it worthwhile. Thankfully, in all of my experience across hundreds of marketing events over the last 12 years, there is.

What do you gain by sending a team member to an event?

Nearly all the benefits of events come from three sources: the growth (in skills, relationships, exposure to ideas, etc) of the attendee(s), applicable tactics & strategies (including all the indirect ones that come from serendipitous touch points), and the extension of your organization’s brand and network.

In the personal growth department, we see benefits like:

  • New skills, often gained through exposure at events and then followed up on through individual research and effort. It’s absolutely true that few attendees will learn enough at a 30-minute talk to excel at some new tactic. But what they will learn is that tactic’s existence, and a way to potentially invest in it.
  • Unique ideas, undiscoverable through solo work or in existing team structures. I’ve experienced this benefit myself many times, and I’ve seen it on Moz’s team countless times.
  • The courage, commitment, inspiration, or simply the catalyst for experimentation or investment. Sometimes it’s not even something new, or something you’ve never talked about as a team. You might even be frustrated to find that your coworker comes back from an event, puts their head down for a week, and shows you a brilliant new process or meaningful result that you’ve been trying to convince them to do for months. Months! The will to do new things strikes whenever and however it strikes. Events often deliver that strike. I’ve sat next to engineers whom I’ve tried to convince for years to make something happen in our tools, but when they see a presenter at MozCon show off another tool that does it or bemoan the manual process currently required, they suddenly set their minds to it and deliver. That inspiration and motivation are priceless.
  • New relationships that unlock additional skill growth, amplification opportunities, business development or partnership possibilities, references, testimonials, social networking, peer validation, and all the other myriad advancements that accompany human connections.
  • Upgrading the ability to learn, to process data and stories and turn them into useful takeaways.
  • Alongside that, upgraded abilities to interact with others, form connections, learn from people, and form or strengthen bonds with colleagues. We learn, even in adulthood, through observation and imitation, and events bring people together in ways that are more memorable, more imprinted, and more likely to resonate and be copied than our day-to-day office interactions.

A gentleman at SearchLove London 2016 gives me an excellent (though slightly blurry) thumbs up

In the applicable tactics & strategies, we get benefits like:

  • New tools or processes that can speed up work, or make the impossible possible.
  • Resources for advancing skills and information on a topic that’s important to one’s job or to a project in particular.
  • Actionable ideas to make an existing task, process, or result easier to achieve or more likely to produce improved results.
  • Bigger-picture concepts that spur an examination of existing direction and can improve broad, strategic approaches.
  • People & organizations who can help with all above, formally or informally, paid as consultants, or just happy to answer a couple questions over email or Twitter.

Purna Virji at SMX Munich 2017

In the extension of organizational brand/network, we get benefits like:

  • Brand exposure to people you meet and interact with at conferences. Since we know the world of sales & marketing is multi-touch, this can have a big impact, especially if either your customers or your amplification targets include anyone in your professional field.
  • Contacts at other companies that can help you reach people or organizations (this benefit has grown massively thanks to the proliferation of professional social networks like those on LinkedIn and Twitter)
  • Potential media contacts, including the more traditional (journalists, news publications) and the emerging (bloggers, online publishers, powerful social amplifiers, etc)
  • A direct introduction point to speakers and organizers (e.g. if anyone emails me saying “I saw you speak at XYZ and wanted to follow up about…” the likelihood of an invested reply goes way up vs. purely online outreach)

But I said above that these three included “nearly all” the benefits, didn’t I? 🙂

Daisy Quaker at MozCon Ignite

It’s true. There are more intangible forms of value events provide. I think one of the biggest is the trust gained between a manager and their team or an employer and their employees. When organizations offer an events budget, especially when they offer it with relative freedom for the team member to choose how and where to spend it, a clear message is sent. The organization believes in its people. It trusts its people. It is willing to sacrifice short-term work for the long-term good of its people. The organization accepts that someone might be recruited away through the network they gain at an event, but is willing to make the trade-off for a more trusting, more valuable team. As the meme goes:

CFO: What if we invest in our people and they leave?
CEO: What if we don’t and they stay?

Total: $A Lot?

How do you measure the returns?

The challenge comes in because these are hard things for which to calculate ROI. In fact, any number I throw out for any of these above will absolutely be wrong for your particular situation and organization. The only true way to estimate value is through hindsight, and that means having faith that the future will look like the past (or rigorous, statistically sound models with large sample sizes, validated through years of controlled comparison… which only a handful of the world’s biggest and richest companies do).

It’s easy to see stories like “The biggest deals I’ve ever done, mostly (80%) came from meeting people at conferences” and “I’ve had the opportunity to open the door of conversations previously thought locked” and “When I send people on my team I almost always find they come back more inspired, rejuvenated, and full of fire” and dismiss them as outliers or invent reasons why the same won’t apply to you. It’s also easy to explain away past successes gained through events as not necessarily requiring the in-person component.

I see this happen a lot. I’m embarrassed to say I’ve seen it at Moz. Remember last summer, when we did layoffs? One of the benefits cut was the conference and events budget for team members. While I think that was the right decision, I’m also hopeful & pushing for that to be one of the first benefits we reinstate now that we’re profitable again.

Lexi Mills at Turing Festival in Edinburgh

Over the years of my event participation, first as an attendee, and later as a speaker, I can measure my personal and Moz’s professional benefits, and come up with some ballpark range. It’s harder to do with my team members because I can’t observe every benefit, but I can certainly see every cost in line-item format. Human beings are pretty awful in situations like these. We bias to loss aversion over potential gain. We rationalize why others benefit when we don’t. We don’t know what we’re missing so we use logic to convince ourselves it’s ROI negative to justify our decision.

It’s the same principle that often makes hard-to-measure marketing channels the best ROI ones.

Some broader discussions around marketing event issues

Before writing this post, I asked on Twitter about the pros and cons of marketing conferences that folks felt were less often covered. A number of the responses were insightful and worthy of discussion follow-ups, so I wanted to include them here, with some thoughts.

If you’re a conference organizer, you know how tough a conversation this is. Want to bring in outside food vendors (which are much more affordable and interesting than what venues themselves usually offer)? 90% of venues have restrictions against it. Want to get great food for attendees? That same 90% are going to charge you on the order of hundreds of dollars per attendee. MozCon’s food costs are literally 25%+ of our entire budget, and considering we usually break even or lose a little money, that’s huge.

If you’re a media company and you run events for profit, or you’re a smaller business that can’t afford to have your events be a money-losing endeavor, you’re between a rock and a hard place. At places like MozCon and CTAConf, the food is pretty killer, but the flip side is there’s no margin at all. Many conferences simply can’t afford to swing that.

Totally agree with Ross — interesting one, and pros/cons to each. At smaller shows, I love the more intimate connections, but I’m also well aware that for most speakers, it’s a tough proposition to ask for a new presentation or to bring their best stuff. It’s also hard to get many big-name speakers. And, as Ross points out, the networking can be deeper, but with a smaller group. If you’re hoping to meet someone from company X or run into colleagues from the past, small size may inhibit.

For years prior to MozCon, I’d only ever been to events with a couple keynotes and then panels of 3–6 people in breakout sessions the rest of the day. I naively thought we’d invented some brilliant new system with the all-keynote-style conference (it had obviously been around for decades; I just wasn’t exposed to it). It also became clear over time that many other marketing conferences had the same idea and today, it’s an even split between those that do all-keynotes vs. those with a hybrid of breakouts, panels, and keynotes.

Personally, my preference is still all-keynote. I agree with Greg that, on occasion, a speaker won’t do a great job, and sitting through those 20–40 minutes can be frustrating. But I can count on a single hand the number of panel sessions I’ve ever found value in, and I strongly dislike being forced to choose between sessions and not sharing the same experience with other attendees. Even when the session I’ve chosen is a good one, I have FOMO (“what if that other session around the corner is even better?!”) and that drives my quality of experience down.

This, though, is personal preference. If you like panels, breakouts, and multi-track options, stick to SMX, Content Marketing World, INBOUND, and others like them. If you’re like me and prefer all keynotes, single track, go for CTAConf, SearchLove, Inbounder, MozCon, and their ilk.

I agree this is a real problem. Being a conference organizer, I get to see a lot of the feedback and requests, and I think that’s where the issue stems from. For example, a few years back, Brittan Bright, who now does sales at Google in New York, gave a brilliant talk about the soft skills of selling and client relations. It scored OK in the lineup, but a lot of the feedback overall that year was from people who wanted more “tactical tips” and “technical tricks” and less “soft skills” content. Every conference has to deal with this demand and supply issue. You might respond (as my friend Wil Reynolds often does) with “who cares what people say they want?! Give them what they don’t know they need!”

That’s how conferences go broke, my friends. 🙂 Every year, we try to include at least a few sessions that focus on these softer skills (in numerous ways), and every year, there’s pushback from folks who wish we’d just show them how to get more easy links, or present some new tool they haven’t heard of before. It’s a tough give and take, but I’m empathetic to both sides on this issue. Actionable tactics matter, and they make for big, immediate wins. Soft skills are important, too, but there’s a significant portion of the audience who’ll get frustrated seeing talks on these topics.

Hrm… I think I agree more with Freja than with Herman, but it’s entirely a personal preference. If you know yourself well enough to know that you’ll benefit more (or less) by attending with others from your team, make the call. This is one reason I love the idea of businesses offering the freedom of choice on how to use their event budget.

There were a number of these conflicting points-of-view in reply to my tweet, and I think they indicate the challenge for attendees and organizers. Opinions vary about what makes for a great conference, a great speaker or session, or the best way to get value from them.

Which marketing conferences do I recommend?

I get this question a lot (which is fair, I go to *a lot* of events). It really depends what you like, so I’ll try to break down my recommendations in that format.

Big, industry-wide events with many thousands of attendees, big name keynotes, famous musical acts, and hundreds of breakout session options:

  • INBOUND by Hubspot (Boston, MA 9/25–9/28) is a clear choice here. If you craft your experience well, you can get an immense amount of value.
  • Content Marketing World (Cleveland, OH 9/5–9/8) is always a good show, and they’ve recently focused on getting more gender-diverse.
  • Dreamforce by Salesforce (San Francisco, CA 11/6–11/9) has a similar feel to INBOUND in size and format, though it’s generally more classic sales & marketing focused, and has less programming that overlaps with our/my world of SEO, social media, content marketing, etc.
  • Web Summit (Lisbon, Portugal 11/6–11/9) is even broader, focusing on technology, startups, entrepreneurship, and sales+marketing. If you’re looking to break out of the marketing bubble and get a chance to see some “where are we going” and “what’s driving innovation” content, this is a good one.
  • SMX Munich (Munich, Germany 3/20–3/21 2018) is one of the best produced and best attended shows in Europe. This event consistently delivers great presentations. Because of its location on the calendar, it’s also where many speakers debut their theses and tactics each year, and since it’s in Germany (or, more probably because it’s run by the amazing Sandra & Matthew Finlay), everything is executed to perfection.

Mid-tier events with 1,000–1,500 attendees:

  • MozCon by Moz (Seattle, WA 7/17–7/19) I’m obviously biased, but I also get to see the survey data from attendees. The ratings of “excellent” or “outstanding” and the high number of people who buy tickets for the following year within a few days of leaving give me confidence that this is still one of the best events in the web marketing world.
  • CTAConf by Unbounce (Vancouver, BC 6/25–6/27) Oli Gardner, who’s become an exceptional speaker himself, works directly with every presenter (all invitation-only, like MozCon) to make sure the decks are top notch. In addition, the setting in Vancouver, the food trucks, the staging, the networking, and the kindness of Canada are all wonderful.
  • Inbounder (Valencia, Spain 5/2018) This event only happens every other year, but if 2016 was anything to judge by, it’s one of Europe’s best. Certainly, you won’t find a more incredible city or a better location. The conference hall is inside a spaceship that’s landed on a grassy park surrounding an ancient walled city. Even Seattle’s glacier-ringed beauty can’t top that.
  • ConversionXL Live (Austin, TX 3/28–3/30) Peep Laja and crew put on a terrific event with a lovely venue and clear attention paid to the actionable, tactical value of takeaways. I came back from the few sessions I attended with all sorts of suggestions for the Moz team to try (if only webdev resources weren’t so difficult to wrangle).
  • SMX Advanced (Seattle, WA TBD 2018) I haven’t been in a couple years, but many search marketers rave about this show’s location, production quality, panels, and speakers. It’s one of the few places that still attracts the big-name representatives from Google & Bing, so if you want to hear directly from the horse’s mouth a few seconds before it’s broadcast and analyzed a million ways on Twitter, this is the spot.

Outside The Inbounder Conference in Valencia, Spain

Smaller, local, & niche events with a few hundred attendees and a more intimate setting:

  • SearchLove (San Diego, Boston, & London 10/16–10/17) It’s somewhat extraordinary that this event remains small, like a hidden secret in the web marketing world. The quality of content and presentations is on par with MozCon (as are the ratings, and I know from other events how rare those are), but the settings are more intimate, with only 200–300 participants in San Diego & Boston and a larger but still convivial crowd of 400–600 in London. I personally learn more at SearchLove than any other show.
  • Engage (formerly Searchfest) The SEMPDX crew has always had a unique, wonderful event, and Portland, OR is one of my favorite cities to visit.
  • MNSearch (Minneapolis 6/23) One of the exciting up-and-coming local events in our space. The MNSearch folks have brought together great speakers in fun venues at a surprisingly affordable price, and with some killer after-hours events, too. I’ve been twice and was very impressed both times.

This list is by no means exhaustive, and I’m certain there are many other events that give great value. I can only speak from my own experiences, which are going to carry the bias of what I’ve seen and what I like.

Help us better understand the value of conferences to you

Two years ago, I ran a survey about marketing conferences and received, analyzed, then published the results. I’d like to repeat that again, and see what’s changed. Please contribute and tell us what matters to you:

Take the survey here

I look forward to the discussion in the comments. If the Twitter thread was any indication, there’s a lot of passion and interest around this topic, one that I share. And of course, if you’d like to chat in person about this and see how we’re doing things at Moz, I hope you’ll consider MozCon in just a few weeks in Seattle.


Roger’s note: *beep* Rogerbot here! I think Rand forgot an important benefit of one conference: At MozCon, you can hug a robot. If you’re considering joining us in Seattle this July, we’re over 75% sold out! Be sure to grab your ticket while you can.




Moz Local Report: Who’s Winning Wealth Management?

Posted by Dr-Pete

As more people look for financial advice online, brick-and-mortar wealth management firms and financial advisors are competing harder than ever for search customers. More than 70% of millennials use search engines for research, and 15% of 18–34 year-olds are turning directly to search engines for financial advice. As consumers in their 20s and 30s grow their wealth, have families, and begin planning for the future, who is best situated to capture their attention online?

This turns out to be a more difficult question than you might think. Focusing on Google, there are three major areas where financial service providers can compete: organic results, local results, and paid results (ads). Even organic results are increasingly localized, with top rankings varying wildly from city to city, and traditional organic results are often pushed below both ads and the local 3-pack. Local packs command a large amount of screen real estate. Here’s a local pack for “financial planner” in my own suburban Chicago neighborhood:

In partnership with Hearsay Systems, which provides Advisor Cloud solutions for the financial services industry, we decided to find out who’s leading the pack (no pun intended) in 2017 for wealth management and financial advisory searches across organic, local, and paid results.

Get the full report

Research methodology

For the purposes of this study, we decided to target five keyphrases related to wealth management and financial advisory services:

  1. financial advisor
  2. financial planning
  3. financial planner
  4. financial consultant
  5. wealth management

For each keyword, we looked at page one of Google results across 5,000 cities (the 5K largest cities in the contiguous 48 states, according to US census data). We then captured URLs and ranking positions across organic, local, and paid results.

To aggregate the data, we weighted each result by the population of the corresponding city and the estimated click-through rate (CTR) of its ranking position. We used a fairly conservative CTR curve, weighting top results a bit heavier, but not too dramatically:

For the final analysis across all five keywords, we weighted each keyword by its estimated search volume (according to Google AdWords) in the United States. By far, “financial advisor” was the most popular keyword, scooping up about 55% of search share across the keyword set.
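The aggregation described above can be sketched in code. Note that the CTR curve values and the flat tail weight below are invented purely for illustration; they are not the study’s actual curve.

```javascript
// Illustrative sketch of the aggregation model described above.
// The CTR values and the tail weight are assumptions, not the study's data.
const ctrCurve = [0.25, 0.15, 0.10, 0.08, 0.06]; // positions 1-5 (assumed)

function weightedClickShare(results) {
  // results: [{ domain, position, cityPopulation }, ...]
  const scores = {};
  let total = 0;
  for (const r of results) {
    const ctr = ctrCurve[r.position - 1] || 0.02; // conservative tail (assumed)
    const weight = ctr * r.cityPopulation;        // weight by population x CTR
    scores[r.domain] = (scores[r.domain] || 0) + weight;
    total += weight;
  }
  for (const domain of Object.keys(scores)) {
    scores[domain] /= total; // normalize to a share of all available clicks
  }
  return scores;
}
```

Per-keyword scores would then be combined the same way, weighted by search volume.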

Since some large brands use multiple websites (domains), we consolidated their numbers across those domains. So, for example, morganstanley.com and morganstanleybranch.com were grouped together in the final analysis. Quite a few brands have separate domains for their corporate site and local/branch locations. We’re interested in the strength of the brands themselves, not the particulars of how they divvy up their websites.

Top 5 organic leaders

The Top 5 for organic results were dominated by informational and news sites. The following graph compares the total “Click Share” based on all available clicks across all sites:

Investopedia led the way, scoring almost one-fifth of all clicks in our aggregate model, across more than 4,000 ranking domains. Among major players in the financial services space, only Edward Jones made it into the Top 5.

This is consistent with the idea that people are seeking general financial advice, and may not always be looking to organic results to find local service providers. Google’s results can often tell us a lot about how they’re interpreting search intent.

Curious case of keyword #4

Across the five keywords, we generally saw similar patterns. There were ranking variations, of course, but most of the top sites for one keyword performed well across the other keywords in organic results. The notable exception was keyword #4, “financial consultant.”

The Top 10 organic competitors for “financial consultant” included Monster.com (#1), Indeed.com (#4), Glassdoor (#5), and Robert Half (#7). Google seems to be interpreting this search as a job-hunting search and not a search for a service provider. This goes to show how important it is to make sure you’re targeting the right terms.

Top 5 local leaders

Applying the same analysis to the local pack, we came up with the following Top 5…

Traditional wealth management players performed much better in local pack results. Across our data, though, Edward Jones dominated the competitors in local rankings, consuming almost 40% of the total Click Share.

Interestingly, there was more overall diversity in local pack results, even with one dominant player and only three ranking positions per page. While just over 4,000 different domains ranked across organic results, local packs in our data set sampled from almost 7,000 different domains.

Top 5 paid/ad leaders

Morgan Stanley led the way in paid positioning, capturing just under 20% of Click Share. The rest of the Top 5 paid players were a bit more well-rounded, consuming roughly equal shares…

Interesting to note that relative newcomer SoFi seems to be spending pretty heavily in the space. SoFi (“Social Finance”) is an online finance community clearly aimed at the digital generation.

Given that this is a competitive space with relatively high costs-per-click (CPC), only 366 domains appeared in paid listings in our study. This was not due to a lack of ads — over 99% of the search results we examined displayed ads, and almost every search had a full complement of seven ads.

Non-traditional players

In addition to SoFi, a couple of newcomers fared pretty well in our data relative to their size and spend. Betterment.com appeared in 25th place in organic and 16th in paid. NerdWallet came in 46th in organic results and 22nd in paid. Credio.com took 20th place in organic overall but had no paid presence.

The one advantage traditional players clearly still have is in local results, where none of these newcomers ranked. Big brands with multiple brick-and-mortar presences still dominate local pack results, for obvious reasons, and online-only players can’t compete in local/map results. This makes performing well in local results even more important for big brands with a strong, nationwide physical presence.

Big winner: Edward Jones

Squeezing a lot of data into one graph can be a little dangerous, but let’s take a peek at what happens when we aggregate across all three types of listings (organic, local, and paid). Here are the Top 5 across all of the data in our study…

The combination of their dominant #1 position in our local data, #5 in organic, and a solid #25 in paid makes Edward Jones the clear overall winner, grabbing just over 14% of total Click Share in our study. Industry powerhouse Morgan Stanley comes in at #2, thanks primarily to their #1 paid ranking and #5 local position.

What’s the secret to Edward Jones’ success? Despite what the Internet wants you to believe, there’s almost never just one weird trick to search marketing success in 2017. One significant factor may be that Edward Jones has gone all-in on hyper-local pages. Their dominant local presence was made up of over 7,000 unique URLs representing their individual advisors.

Each advisor page has a clear, consistent Name, Address, and Phone number (or “NAP,” to use local search lingo), office hours, and other essential information. While the pages aren’t particularly unique, Edward Jones has done a good job of making sure that local offices are well represented and have a consistent, structured page.

It’s worth noting that even local rankings are very keyword specific. While Edward Jones ranked #1 overall in local packs for all four keyphrases starting with “financial…”, they fell to #23 for “wealth management.” Edward Jones has clearly carved out their niche.

The Wall Street Journal, on the other hand, maintains their dominant organic position with just a single page: a guide to choosing a financial planner. This page clearly benefits from WSJ’s overall authority, and it shows just how different ranking for organic and local search has become these days.

A few tactical takeaways

Based on this research, what advice would we give to financial players (big and small) who hope to be competitive in Google search?

Brick-and-mortar should focus on local

The big financial players with physical offices need to capitalize on that fact, because online-only players won’t be able to compete in local results (at least for now). While a hyper-local approach (to the tune of thousands of pages) is a big undertaking and not without risk, I’d highly recommend testing it if you’re a big player in the space. Edward Jones’ success with this approach can’t be ignored.

For local, focus attention on key markets

You don’t have to compete in every market (you’re probably not even physically in every market). Across even five keywords and 5,000 cities, there were roughly 7,000 domains ranking in the local 3-pack. That means that the winners for any given market varied wildly. Invest your hyper-local resources in key markets with the highest potential ROI.

Online-only should invest in content

Sure, the Wall Street Journal is a huge player, but the fact that they ranked across thousands of cities and highly competitive keywords with a single piece of content is still pretty amazing. Google seems to be interpreting these keywords as informational, and so online-only players need to invest heavily in content that hits the research phase of the buyer cycle. If big financial players hope to compete for organic, they may have to do the same.

You may have to pay for placement

I’ve worked in paid search in a former life, and I believe a balanced approach to search marketing has to be an eyes-wide-open approach. Right now, ads have prominent placement on these searches, often with a full seven ads per page (including four at the top). If you have the money and want to compete against organic and local pack results, you have to at least run the numbers on advertising.

Get the full report

Special thanks to our partners at Hearsay Systems for their industry expertise and contributions to planning this project and analyzing the data. Hearsay provides Advisor Cloud solutions for the financial services and insurance industries.


JavaScript & SEO: Making Your Bot Experience As Good As Your User Experience

Posted by alexis-sanders

Understanding JavaScript and its potential impact on search performance is a core skillset of the modern SEO professional. If search engines can’t crawl a site or can’t parse and understand the content, nothing is going to get indexed and the site is not going to rank.

The most important questions for an SEO relating to JavaScript: Can search engines see the content and grasp the website experience? If not, what solutions can be leveraged to fix this?


Fundamentals

What is JavaScript?

When creating a modern web page, there are three major components:

  1. HTML – Hypertext Markup Language serves as the backbone, or organizer of content, on a site. It is the structure of the website (e.g. headings, paragraphs, list elements, etc.) and defines its static content.
  2. CSS – Cascading Style Sheets are the design, glitz, glam, and style added to a website. It makes up the presentation layer of the page.
  3. JavaScript – JavaScript is the interactivity and a core component of the dynamic web.

Learn more about webpage development and how to code basic JavaScript.


JavaScript is either placed in the HTML document within <script> tags (i.e., it is embedded in the HTML) or linked/referenced. There are currently a plethora of JavaScript libraries and frameworks, including jQuery, AngularJS, ReactJS, EmberJS, etc.
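For instance, both placements might look like this in the page source (the file path and element id here are illustrative):

```html
<!-- Embedded: JavaScript placed directly inside <script> tags -->
<script>
  document.getElementById("greeting").textContent = "Howdy, Moz fans!";
</script>

<!-- Linked/referenced: JavaScript loaded from an external file -->
<script src="/js/app.js"></script>
```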


What is AJAX?

AJAX, or Asynchronous JavaScript and XML, is a set of web development techniques combining JavaScript and XML that allows web applications to communicate with a server in the background without interfering with the current page. Asynchronous means that other functions or lines of code can run while the async script is running. XML used to be the primary language to pass data; however, the term AJAX is used for all types of data transfers (including JSON; I guess “AJAJ” doesn’t sound as clean as “AJAX” [pun intended]).

A common use of AJAX is to update the content or layout of a webpage without initiating a full page refresh. Normally, when a page loads, all the assets on the page must be requested and fetched from the server and then rendered on the page. However, with AJAX, only the assets that differ between pages need to be loaded, which improves the user experience as they do not have to refresh the entire page.

One can think of AJAX as mini server calls. A good example of AJAX in action is Google Maps. The page updates without a full page reload (i.e., mini server calls are being used to load content as the user navigates).
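A minimal sketch of one of these "mini server calls": fetch data in the background and update a single element, without a full page reload. The endpoint and the dependency-injection style here are illustrative assumptions.

```javascript
// AJAX-style partial update: only one element changes; the rest of the
// page stays put. fetchFn and element are passed in so the logic can be
// exercised outside a browser.
async function refreshSummary(fetchFn, element) {
  const resp = await fetchFn("/api/summary"); // the "mini server call"
  const data = await resp.json();
  element.textContent = data.summary;         // only this node changes
  return data.summary;
}

// In a browser you would call:
//   refreshSummary(fetch, document.querySelector("#summary"));
```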

Related image

Image source

What is the Document Object Model (DOM)?

As an SEO professional, you need to understand what the DOM is, because it’s what Google is using to analyze and understand webpages.

The DOM is what you see when you “Inspect Element” in a browser. Simply put, you can think of the DOM as the steps the browser takes after receiving the HTML document to render the page.

The first thing the browser receives is the HTML document. After that, it will start parsing the content within this document and fetch additional resources, such as images, CSS, and JavaScript files.

The DOM is what forms from this parsing of information and resources. One can think of it as a structured, organized version of the webpage’s code.

Nowadays the DOM is often very different from the initial HTML document, due to what’s collectively called dynamic HTML. Dynamic HTML is the ability for a page to change its content depending on user input, environmental conditions (e.g. time of day), and other variables, leveraging HTML, CSS, and JavaScript.

Simple example with a <title> tag that is populated through JavaScript:

(Screenshots: the empty <title> in the HTML source vs. the populated <title> in the DOM.)
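A sketch of what those two states might look like (the title text is invented for illustration):

```html
<!-- HTML source: the <title> is empty until JavaScript runs -->
<html>
  <head>
    <title></title>
    <script>
      /* After this executes, the DOM's <title> contains the text below,
         even though the raw HTML source does not. */
      document.title = "Hairdryers | Example Store";
    </script>
  </head>
  <body>
    <p>Page content</p>
  </body>
</html>
```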

What is headless browsing?

Headless browsing is simply the action of fetching webpages without the user interface. It is important to understand because Google, and now Baidu, leverage headless browsing to gain a better understanding of the user’s experience and the content of webpages.

PhantomJS and Zombie.js are scripted headless browsers, typically used for automating web interaction for testing purposes, and rendering static HTML snapshots for initial requests (pre-rendering).


Why can JavaScript be challenging for SEO? (and how to fix issues)

There are three primary reasons to be concerned about JavaScript on your site:

  1. Crawlability: Bots’ ability to crawl your site.
  2. Obtainability: Bots’ ability to access information and parse your content.
  3. Perceived site latency: AKA the Critical Rendering Path.

Crawlability

Are bots able to find URLs and understand your site’s architecture? There are two important elements here:

  1. Whether you’re blocking search engines from crawling your JavaScript (even accidentally).
  2. Proper internal linking, not leveraging JavaScript events as a replacement for HTML tags.

Why is blocking JavaScript such a big deal?

If search engines are blocked from crawling JavaScript, they will not be receiving your site’s full experience. This means search engines are not seeing what the end user is seeing. This can reduce your site’s appeal to search engines and could eventually be considered cloaking (if the intent is indeed malicious).

Fetch as Google and TechnicalSEO.com’s robots.txt and Fetch and Render testing tools can help identify resources that Googlebot is blocked from accessing.

The easiest way to solve this problem is by providing search engines access to the resources they need to understand your user experience.

!!! Important note: Work with your development team to determine which files should and should not be accessible to search engines.

Internal linking

Internal linking should be implemented with regular anchor tags within the HTML or the DOM (using an HTML tag) versus leveraging JavaScript functions to allow the user to traverse the site.

Essentially: Don’t use JavaScript’s onclick events as a replacement for internal linking. While end URLs might be found and crawled (through strings in JavaScript code or XML sitemaps), they won’t be associated with the global navigation of the site.
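For example (the URL is illustrative):

```html
<!-- Recommended: a plain anchor tag that search engines treat as a link -->
<a href="/hairdryers/">Hairdryers</a>

<!-- Not recommended: a JavaScript event standing in for a link; the URL may
     still be discovered, but it won't carry the same internal linking
     signals as a real anchor tag -->
<span onclick="window.location.href='/hairdryers/'">Hairdryers</span>
```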

Internal linking is a strong signal to search engines regarding the site’s architecture and importance of pages. In fact, internal links are so strong that they can (in certain situations) override “SEO hints” such as canonical tags.

URL structure

Historically, JavaScript-based websites (aka “AJAX sites”) were using fragment identifiers (#) within URLs.

  • Not recommended:

    • The Lone Hash (#) – The lone pound symbol is not crawlable. It is used to identify anchor links (aka jump links), which let users jump to a piece of content within a page. Anything after the lone hash portion of the URL is never sent to the server, and the page will automatically scroll to the first element with a matching ID (or the first <a> element with a matching name attribute). Google recommends avoiding the use of “#” in URLs.
    • Hashbang (#!) (and escaped_fragment URLs) – Hashbang URLs were a hack to support crawlers (Google now wants to avoid them, and only Bing still supports them). Many a moon ago, Google and Bing developed a complicated AJAX solution, whereby a pretty (#!) URL for the user experience co-existed with an equivalent escaped_fragment, HTML-based experience for bots. Google has since backtracked on this recommendation, preferring to receive the exact user experience. With escaped fragments, there are two experiences:
      • Original Experience (aka Pretty URL): This URL must either have a #! (hashbang) within the URL to indicate that there is an escaped fragment, or a meta element indicating that an escaped fragment exists (<meta name="fragment" content="!">).
      • Escaped Fragment (aka Ugly URL, HTML snapshot): This URL replaces the hashbang (#!) with “_escaped_fragment_” and serves the HTML snapshot. It is called the ugly URL because it’s long and looks like (and for all intents and purposes is) a hack.

  • Recommended:

    • pushState History API – pushState is navigation-based and part of the History API (think: your web browsing history). Essentially, pushState updates the URL in the address bar, and only what needs to change on the page is updated. It allows JavaScript sites to leverage “clean” URLs, and it is currently supported by Google for browser navigation in client-side or hybrid rendering.

      • A good use of pushState is for infinite scroll (i.e., as the user hits new parts of the page the URL will update). Ideally, if the user refreshes the page, the experience will land them in the exact same spot. However, they do not need to refresh the page, as the content updates as they scroll down, while the URL is updated in the address bar.
      • Example: A good example of a search engine-friendly infinite scroll implementation, created by Google’s John Mueller (go figure), can be found here. He technically leverages replaceState(), which doesn’t include the same back-button functionality as pushState.
      • Read more: Mozilla PushState History API Documents
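The infinite scroll pattern above can be sketched as follows. The URL pattern and the dependency-injected history object are illustrative assumptions; in a browser you would pass window.history.

```javascript
// pushState-driven infinite scroll: as new sections load, the address-bar
// URL updates without a page reload, and the back button steps through
// previously viewed sections.
function onSectionVisible(historyApi, sectionNumber) {
  const cleanUrl = `/articles/page-${sectionNumber}`; // no hashbang needed
  historyApi.pushState({ section: sectionNumber }, "", cleanUrl);
  return cleanUrl;
}

// In a browser: onSectionVisible(window.history, 2);
```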

Obtainability

Search engines have been shown to employ headless browsing to render the DOM to gain a better understanding of the user’s experience and the content on page. That is to say, Google can process some JavaScript and uses the DOM (instead of the HTML document).

At the same time, there are situations where search engines struggle to comprehend JavaScript. Nobody wants a Hulu situation to happen to their site or a client’s site. It is crucial to understand how bots are interacting with your onsite content. When you aren’t sure, test.

Assuming we’re talking about a search engine bot that executes JavaScript, there are a few important elements for search engines to be able to obtain content:

  • If the user must interact for something to fire, search engines probably aren’t seeing it.

    • Google is a lazy user. It doesn’t click, it doesn’t scroll, and it doesn’t log in. If the full UX demands action from the user, special precautions should be taken to ensure that bots are receiving an equivalent experience.
  • If the JavaScript occurs after the load event fires plus ~5 seconds*, search engines may not be seeing it.
    • *John Mueller mentioned that there is no specific timeout value; however, sites should aim to load within five seconds.
    • *Screaming Frog tests show a correlation to five seconds to render content.
    • *The load event plus five seconds is what Google’s PageSpeed Insights, Mobile Friendliness Tool, and Fetch as Google use; check out Max Prin’s test timer.
  • If there are errors within the JavaScript, both browsers and search engines may be unable to execute the rest of the code, potentially missing entire sections of the page.

How to make sure Google and other search engines can get your content

1. TEST

The most popular solution to resolving JavaScript is probably not resolving anything (grab a coffee and let Google work its algorithmic brilliance). Providing Google with the same experience as searchers is Google’s preferred scenario.

Google first announced being able to “better understand the web (i.e., JavaScript)” in May 2014. Industry experts suggested that Google could crawl JavaScript well before this announcement. The iPullRank team offered two great pieces on this in 2011: Googlebot is Chrome and How smart are Googlebots? (thank you, Josh and Mike). Adam Audette’s 2015 experiments confirmed that Google can crawl JavaScript and leverages the DOM. Therefore, if you can see your content in the DOM, chances are it’s being parsed by Google.

adamaudette - I don't always JavaScript, but when I do, I know google can crawl the dom and dynamically generated HTML

Recently, Bartosz Góralewicz performed a cool experiment testing a combination of various JavaScript libraries and frameworks to determine how Google interacts with the pages (e.g., is it indexing the URLs and content? How does GSC interact? etc.). It ultimately showed that Google is able to interact with many forms of JavaScript, and highlighted certain frameworks as perhaps more challenging. John Mueller even started a JavaScript search group (from what I’ve read, it’s fairly therapeutic).

All of these studies are amazing and help SEOs understand when to be concerned and take a proactive role. However, before you determine that sitting back is the right solution for your site, I recommend being actively cautious by experimenting with small sections. Think: Jim Collins’s “bullets, then cannonballs” philosophy from his book Great by Choice:

“A bullet is an empirical test aimed at learning what works and meets three criteria: a bullet must be low-cost, low-risk, and low-distraction… 10Xers use bullets to empirically validate what will actually work. Based on that empirical validation, they then concentrate their resources to fire a cannonball, enabling large returns from concentrated bets.”

Consider testing and reviewing through the following:

  1. Confirm that your content is appearing within the DOM.
  2. Test a subset of pages to see if Google can index your content.
    • Manually check quotes from your content.
    • Fetch with Google and see if content appears.
    • Fetch with Google supposedly occurs around the load event or before timeout. It’s a great way to check whether Google will be able to see your content and whether you’re blocking JavaScript in your robots.txt. Although Fetch with Google is not foolproof, it’s a good starting point.
    • Note: If you aren’t verified in GSC, try TechnicalSEO.com’s Fetch and Render As Any Bot tool.

After you’ve tested all this, what if something’s not working and search engines and bots are struggling to index and obtain your content? Perhaps you’re concerned about alternative search engines (DuckDuckGo, Facebook, LinkedIn, etc.), or maybe you’re leveraging meta information that needs to be parsed by other bots, such as Twitter summary cards or Facebook Open Graph tags. If any of this is identified in testing or presents itself as a concern, an HTML snapshot may be the only option.

2. HTML SNAPSHOTS
What are HTML snapshots?

HTML snapshots are a fully rendered page (as one might see in the DOM) that can be returned to search engine bots (think: a static HTML version of the DOM).

Google introduced HTML snapshots in 2009, deprecated (but still supported) them in 2015, and awkwardly mentioned them as an element to “avoid” in late 2016. HTML snapshots are a contentious topic with Google. However, they’re important to understand, because in certain situations they’re necessary.

If search engines (or sites like Facebook) cannot grasp your JavaScript, it’s better to return an HTML snapshot than not to have your content indexed and understood at all. Ideally, your site would leverage some form of user-agent detection on the server side and return the HTML snapshot to the bot.
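A hedged sketch of that server-side user-agent detection is below. The bot pattern is illustrative and far from exhaustive, and (as discussed under the cloaking considerations) the snapshot content must match what users see.

```javascript
// Decide whether a request should receive the pre-rendered HTML snapshot
// or the live JavaScript app, based on the User-Agent header.
// The pattern is an illustrative assumption, not a complete bot list.
const BOT_PATTERN = /googlebot|bingbot|facebookexternalhit|twitterbot/i;

function chooseResponse(userAgent) {
  // Bots get the static snapshot; everyone else gets the JS experience.
  return BOT_PATTERN.test(userAgent || "") ? "snapshot" : "app";
}
```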

At the same time, one must recognize that Google wants the same experience as the user (i.e., only provide Google with an HTML snapshot if the tests are dire and the JavaScript search group cannot provide support for your situation).

Considerations

When considering HTML snapshots, you must remember that Google has deprecated this AJAX recommendation. Although Google technically still supports it, it recommends avoiding it. Yes, Google changed its mind and now wants to receive the same experience as the user. This direction makes sense, as it allows the bot to receive an experience more true to the user’s.

A second consideration factor relates to the risk of cloaking. If the HTML snapshots are found to not represent the experience on the page, it’s considered a cloaking risk. Straight from the source:

“The HTML snapshot must contain the same content as the end user would see in a browser. If this is not the case, it may be considered cloaking.”
Google Developer AJAX Crawling FAQs

Benefits

Despite the considerations, HTML snapshots have powerful advantages:

  1. Knowledge that search engines and crawlers will be able to understand the experience.

    • Certain types of JavaScript may be harder for Google to grasp (cough… Angular (also colloquially referred to as AngularJS 2) …cough).
  2. Other search engines and crawlers (think: Bing, Facebook) will be able to understand the experience.
    • Bing, among other search engines, has not stated that it can crawl and index JavaScript. HTML snapshots may be the only solution for a JavaScript-heavy site. As always, test to make sure that this is the case before diving in.
"It's not just Google understanding your JavaScript. It's also about the speed." -DOM - "It's not just about Google understanding your Javascript. it's also about your perceived latency." -DOM

Site latency

When browsers receive an HTML document and create the DOM (although there is some level of pre-scanning), most resources are loaded as they appear within the HTML document. This means that if you have a huge file toward the top of your HTML document, a browser will load that immense file first.

The concept of Google’s critical rendering path is to load what the user needs as soon as possible, which can be translated to → “get everything above-the-fold in front of the user, ASAP.”

Critical Rendering Path – Optimized Rendering Loads Progressively ASAP:

progressive page rendering

Image source

However, if you have unnecessary resources or JavaScript files clogging up the page’s ability to load, you get “render-blocking JavaScript.” Meaning: your JavaScript is blocking the page’s potential to appear as if it’s loading faster (also called: perceived latency).

Render-blocking JavaScript – Solutions

If you analyze your page speed results (through tools like Page Speed Insights Tool, WebPageTest.org, CatchPoint, etc.) and determine that there is a render-blocking JavaScript issue, here are three potential solutions:

  1. Inline: Add the JavaScript directly in the HTML document.
  2. Async: Make the JavaScript asynchronous (i.e., add the “async” attribute to the script tag).
  3. Defer: Defer the JavaScript by placing it lower within the HTML document (or by adding the “defer” attribute).
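As a sketch, the three approaches look like this in markup (the file names and the inline snippet are illustrative):

```html
<!-- 1. Inline: small, critical JavaScript embedded directly in the HTML -->
<script>
  document.documentElement.className = "js"; /* illustrative critical snippet */
</script>

<!-- 2. Async: fetched in parallel, executed as soon as it arrives -->
<script async src="/js/analytics.js"></script>

<!-- 3. Defer: fetched in parallel, executed only after the document is parsed -->
<script defer src="/js/widgets.js"></script>
```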

!!! Important note: It’s important to understand that scripts must be arranged in order of precedence. Scripts that are used to load the above-the-fold content must be prioritized and should not be deferred. Also, any script that references another file can only be used after the referenced file has loaded. Make sure to work closely with your development team to confirm that there are no interruptions to the user’s experience.

Read more: Google Developer’s Speed Documentation


TL;DR – Moral of the story

Crawlers and search engines will do their best to crawl, execute, and interpret your JavaScript, but it is not guaranteed. Make sure your content is crawlable and obtainable, and isn’t creating site latency obstructions. The key: every situation demands testing. Based on the results, evaluate potential solutions.

Thanks: Thank you Max Prin (@maxxeight) for reviewing this content piece and sharing your knowledge, insight, and wisdom. It wouldn’t be the same without you.


When and How to Use Domain Authority, Page Authority, and Link Count Metrics – Whiteboard Friday

Posted by randfish

How can you effectively apply link metrics like Domain Authority and Page Authority alongside your other SEO metrics? Where and when does it make sense to take them into account, and what exactly do they mean? In today’s Whiteboard Friday, Rand answers these questions and more, arming you with the knowledge you need to better understand and execute your SEO work.

When and how to use Domain Authority, Page Authority, and link count metrics.

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about when and how to use Domain Authority and Page Authority and link count metrics.

So many of you have written to us at Moz over the years and certainly I go to lots of conferences and events and speak to folks who are like, “Well, I’ve been measuring my link building activity with DA,” or, “Hey, I got a high DA link,” and I want to confirm when is it the right time to be using something like DA or PA or a raw link count metric, like number of linking root domains or something like Spam Score or a traffic estimation, these types of metrics.

So I’m going to walk you through kind of these three — Page Authority, Domain Authority, and linking root domains — just to get a refresher course on what they are. Page Authority and Domain Authority are actually a little complicated. So I think that’s worthwhile. Then we’ll chat about when to use which metrics. So I’ve got sort of the three primary things that people use link metrics for in the SEO world, and we’ll walk through those.

Page Authority

So to start, Page Authority is basically — you can see I’ve written a ton of different little metrics in here — linking URLs, linking root domains, MozRank, MozTrust, linking subdomains, anchor text, linking pages, followed links, nofollowed links, 301s, 302s, new versus old links, TLD, domain name, branded domain mentions, Spam Score, and many, many other metrics.

Basically, what PA is, is it’s every metric that we could possibly come up with from our link index all taken together and then thrown into a model with some training data. So the training data in this case, quite obviously, is Google search results, because what we want the Page Authority score to ultimately be is a predictor of how well a given page is going to rank in Google search results assuming we know nothing else about it except link data. So this is using no on-page data, no content data, no engagement or visit data, none of the patterns or branding or entity matches, just link data.

So this is everything we possibly know about a page from its link profile and the domain that page is on, and then we insert that in as the input alongside the training data. We have a machine learning model that essentially learns against Google search results and builds the best possible model it can. That model, by the way, throws away some of this stuff, because it’s not useful, and it adds in a bunch of this stuff, like vectors or various attributes of each one. So it might say, “Oh, anchor text distribution, that’s actually not useful, but Domain Authority ordered by the root domains with more than 500 links to them.” I’m making stuff up, right? But you could have those sorts of filters on this data and thus come up with very complex models, which is what machine learning is designed to do.

All we have to worry about is that this is essentially the best predictive score we can come up with based on the links. So it’s useful for a bunch of things. If we’re trying to say how well do we think this page might rank independent of all non-link factors, PA, great model. Good data for that.

Domain Authority

Domain Authority builds on that. Once you have the PA model in your head, “Okay, got it, machine learning against Google’s results to produce the best predictive score for ranking in Google,” DA is just the PA model at the root domain level. So not subdomains, just root domains, which means it’s got some weirdness. It can’t, for example, say that randfishkin.blogspot.com is different from www.blogspot.com. But obviously, a link from www.blogspot.com is way more valuable than one from my personal subdomain at Blogspot or Tumblr or WordPress or any of these hosted subdomains. So that’s an edge case that, unfortunately, DA doesn’t do a great job of supporting.

What it’s good for is it’s relatively well-suited to be predictive of how a domain’s pages will rank in Google. So it removes all the page-level information, but it’s still operative at the domain level. It can be very useful for that.

Linking Root Domain

Then linking root domains is the simplest one. This is basically a count of all the unique root domains with at least one link on them that point to a given page or a site. So if I tell you that this URL A has 410 linking root domains, that basically means that there are 410 domains with at least one link pointing to URL A.
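Since this one is just a count, it’s easy to sketch. The caveat is root domain extraction itself: the naive approach below (strip “www.”, keep the last two labels) is only an approximation, and real tools use the Public Suffix List to handle cases like example.co.uk correctly.

```python
# Naive linking-root-domain count. Assumption: last two host labels
# approximate the root domain (real tools use the Public Suffix List).
from urllib.parse import urlparse

def root_domain(url):
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return ".".join(host.split(".")[-2:])

def linking_root_domains(backlink_urls):
    """Count unique root domains with at least one link to the target."""
    return len({root_domain(u) for u in backlink_urls})

links = [
    "https://www.example.com/post/1",
    "https://blog.example.com/post/2",   # same root domain as above
    "https://news.site.org/story",
]
print(linking_root_domains(links))  # → 2
```

Note how the two example.com links collapse into one root domain, which is exactly why linking root domains is a harder-to-inflate count than raw linking pages.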

What I haven’t told you is whether those links are followed or nofollowed. Usually this count is a combination of the two unless it’s specified, so even a nofollowed link can count toward linking root domains, which is why you should always double-check. If you’re using Ahrefs or Majestic or Moz, hover on the little question mark icon next to any given metric, and it will tell you what it includes and what it doesn’t.

When to use which metric(s)

All right. So how do we use these?

Well, for month-over-month link building performance, which is something that a lot of folks track, I would actually not suggest making DA your primary metric. This is for a few reasons. Moz’s index, which is currently the only one among the major link data toolsets that calculates DA or a similar machine learning-based model, only updates about once a month. So if you are doing your report before the DA has updated from the last link index, that can be quite frustrating.

Now, I will say we are only a few months away from a new index that’s going to replace Mozscape that will calculate DA and PA and all these other things much, much more quickly. I know that’s been something many folks have been asking for. It is on its way.

But in the meantime, what I recommend using is:

1. Linking root domains: the count of unique root domains linking to you and how that’s grown over time.

2. Organic rankings for your targeted keywords. I know this is not a direct link metric, but it really helps tell you how those links have performed. If you’re measuring month to month, any links you’ve earned in a 20 or 30-day period Google has probably counted and recognized within a few days of finding them, and Google is pretty good at crawling nearly the whole web within a week or two. So this is a reasonable proxy for how your link building campaign has helped your organic search campaign.

3. The distribution of Domain Authority. So I think, in this case, Domain Authority can be useful. It wouldn’t be my first or second choice, but I think it certainly can belong in a link building performance report. It’s helpful to see the high DA links that you’re getting. It’s a good sorting mechanism to sort of say, “These are, generally speaking, more important, more authoritative sites.”

4. Spam Score I like as well, because if you’ve been doing a lot of link building, Domain Authority doesn’t penalize a site, doesn’t lower its score, for a high Spam Score. Spam Score will show you, “Hey, this is an authoritative site with a lot of DA and good-looking links, but it also looks quite spammy to us.” So, for example, you might see that something has a DA of 60 but a Spam Score of 7 or 8, which might be mildly concerning. I start to really worry when you get to 9, 10, or 11.
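Pulled together, a monthly report along these lines might look like the sketch below. The numbers and domain names are made up, and the Spam Score thresholds just encode the rule of thumb above (7–8 mildly concerning, 9+ worrying).

```python
# Sketch of a monthly link report: linking root domain growth plus a
# spam flag on new high-DA links. All figures are illustrative.

monthly_lrd = {"2018-01": 410, "2018-02": 455}

new_links = [
    {"domain": "bigsite.com",  "da": 72, "spam_score": 2},
    {"domain": "sketchy.info", "da": 61, "spam_score": 9},
]

growth = monthly_lrd["2018-02"] - monthly_lrd["2018-01"]
print(f"Linking root domains gained this month: {growth}")

# Thresholds per the rule of thumb: 7-8 mildly concerning, 9+ worrying.
for link in new_links:
    if link["spam_score"] >= 9:
        flag = "worrying"
    elif link["spam_score"] >= 7:
        flag = "mildly concerning"
    else:
        flag = "ok"
    print(f'{link["domain"]}: DA {link["da"]}, spam {link["spam_score"]} -> {flag}')
```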

Second question: which links provide the most value?

I think this is something that folks ask. They look at their own links and say, “All right, we have these links, or our competitor has these links. Which ones are providing the most value for me?” In that case, the best metric, if you can get it, for example when the link is pointing to your own site, is of course going to be…

1. Real traffic sent. If a link from a site or page is sending traffic to you, that is clearly of value, and it’s likely to be interpreted positively by the search engines as well.

You can also use…

2. PA

3. DA. These metrics are pretty good and pretty well correlated with a link’s relative value, especially if you can’t get at a metric like real traffic because the link is on someone else’s site.

4. Linking root domains, the count of those to a page or a domain.

5. A rankings rise. In the case where a page is ranking position four, a new link coming to it is the only thing that’s changed, or the only thing you’re aware of that’s changed in the last few days or weeks, and you see the page move up a few positions, that’s a pretty good proxy for, “All right, that is a valuable link.” But it’s rare that you can control the other variables well enough to really believe in that signal.

6. I like Spam Score for this as well, because then you can start to see, “Well, are these sketchier links, or are these links that I can likely trust more?”
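One way to combine these signals is a simple composite sort: real traffic counts most, PA and DA add to the score, and spammy links get discounted. The weights below are arbitrary illustrations, not any formula from Moz.

```python
# Hedged sketch: rank existing links by estimated value. Traffic is
# weighted heaviest, PA/DA contribute, and a high Spam Score discounts
# the score. All weights and data are made up for illustration.

links = [
    {"url": "a.com/page", "traffic": 120, "pa": 45, "da": 70, "spam": 1},
    {"url": "b.net/page", "traffic": 0,   "pa": 60, "da": 80, "spam": 8},
    {"url": "c.org/page", "traffic": 5,   "pa": 30, "da": 40, "spam": 2},
]

def link_value(link):
    score = link["traffic"] * 2 + link["pa"] + link["da"]
    if link["spam"] >= 7:          # sketchy links get heavily discounted
        score *= 0.3
    return score

ranked = sorted(links, key=link_value, reverse=True)
print([l["url"] for l in ranked])  # → ['a.com/page', 'c.org/page', 'b.net/page']
```

Notice that b.net has the best PA/DA but sinks to the bottom once its Spam Score discount kicks in, which is exactly the sanity check Spam Score is for.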

Last one: which link prospects should you prioritize?

So I think this is one that many, many SEOs do. We have a big list of links. We’ve got 50 links that we’re thinking about, “Should I get these or not and which ones should I go after first and which ones should I not go after?” In this case…

1. DA is really quite a good metric, and that is because it’s relatively predictive of the domain’s pages’ performance in Google, which is a proxy, but a decent proxy for how it might help your site rank better.

It is the case that folks will say, “Hey, it tends to be the case that when I go out and build lots of DA 70, DA 80, DA 90+ links, I often get ranking credit. Why DA and not PA, Rand?” Well, in the case where you’re getting links, it’s very often from new pages on a website, which have not yet been assigned PA or may not have inherited all the link equity from the internal pages that link to them.

Over time, as those pages themselves get more links, their PA will rise as well. But the reason I generally recommend DA for link outreach is both because of that PA/DA timing issue and because oftentimes you don’t know which page on a domain is going to give you the link. It could be a new page they haven’t created yet. It could be one you never thought they would add you to. It might be exactly the page you were hoping for, but it’s hard to say.

2. I think linking root domains is a very reasonable metric for this as well. It’s closely correlated with DA and with rankings, though not quite as well correlated as DA itself.

3. Spam Score, like we’ve talked about.

4. I might use something like SimilarWeb's traffic estimates, especially if real traffic sent is something I’m very interested in. If I’m pursuing nofollowed links or affiliate links, or I just care about traffic more than rank-boosting ability, SimilarWeb has what I think is the best traffic prediction system, and so that would be the metric I’d look at.

So, hopefully, you now have a better understanding of DA and PA and link counts and when and where to apply them alongside which other metrics. I look forward to your questions. I’ll be happy to jump into the comments and answer. And we’ll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com



Source: Moz