Innovate not imitate!

Interested in the latest Growth hacks?

Welcome to our blog!

We want to help you start, manage, and grow your business using innovative strategies and implementation. We have a passion for helping businesses and companies of various sizes see the same success that we have achieved.

Our skillsets are wide and varied, spanning business strategy, marketing, and online strategy. An increasing number of companies are turning to the internet and online media as a means of maximising their marketing reach and exposure. This is a special area of focus for us, and we do more than simple SEO strategies.

See our website for more: www.innovatetoaccelerate.com

Monday 31 August 2015

Traffic and Engagement Metrics and Their Correlation to Google Rankings

Posted by Royh

When Moz undertook this year’s Ranking Correlation Study (Ranking Factors), there was a desire to include data points never before studied. Fortunately, SimilarWeb had exactly what was needed. For the first time, Moz was able to measure ranking correlations with both traffic and engagement metrics.

Using Moz’s ranking data on over 200,000 domains, combined with multiple SimilarWeb data points—including traffic, page views, bounce rate, time on site, and rank—the Search Ranking Factors study was able to measure how these metrics corresponded to higher rankings.

These metrics differ from the traditional SEO parameters Moz has measured in the past in that they are primarily user-based metrics. This means that they vary based on how users interact with the individual websites, as opposed to static features such as title tag length. These user-based metrics matter because search engines may use them to rank webpages, as illustrated in this excellent post by Dan Petrovic.

Every marketer and SEO professional wants to know if there is a correlation between web search ranking results and the website’s actual traffic. Here, we’ll examine the relationship between website rankings and traffic engagement to see which metrics have the biggest correlation to rankings.

You can view the results below:

Traffic correlated to higher rankings

For the study, we examined both direct and organic search visits over a three-month period. SimilarWeb’s traffic results show that there is generally a high correlation between website visits and Google’s search rankings.

Put simply, the more traffic a site received, the higher it tended to rank. Practically speaking, this means that you would expect to see sites like Amazon and Wikipedia higher up in the results, while smaller sites tend to rank slightly worse.

This doesn't mean that Google uses traffic and user engagement metrics as an actual ranking factor in its search algorithm, but it does show that a relationship exists. Hypothetically, we can think of many reasons why this might be the case:

  • A "brand" bias, meaning that Google may wish to treat trusted, popular, and established brands more favorably.
  • Possible user-based ranking signals (described by Dan here) where users are more inclined to choose recognizable brands in search results, which in theory could push their rankings higher.
  • Which came first, the chicken or the egg? It could simply be that high-ranking websites become popular because they already rank highly.

Regardless of the exact cause, it seems logical that the more you improve your website’s visibility, trust, and recognition, the better you may perform in search results.

Engagement: Time on site, bounce rate, and page views

While not as large as the traffic correlations, we also found a positive correlation between a website’s user engagement and its rank in Google search results. For the study, we examined three different engagement metrics from SimilarWeb.

  • Time on site: 0.12 is not considered a strong correlation by any means within this study, but it does suggest there may be a slight relationship between how long a visitor spends on a particular site and its ranking in Google.
  • Page views: Similar to time on site, the study found a small correlation of 0.10 between the number of pages a visitor views and higher rankings.
  • Bounce rate: At first glance, the -0.08 correlation between bounce rate and rankings may seem backwards, but it isn't. Keep in mind that a lower bounce rate is often a good indication of user engagement. As bounce rates rise (something we often try to avoid), rankings tend to drop, and vice versa.

This means that sites with lower bounce rates, longer time-on-site metrics, and more page views—some of the data points that SimilarWeb measures—tend to rank higher in Google search results.

While these individual correlations aren’t large, collectively they do lend credence to the idea that user engagement metrics can matter to rankings.
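For readers who want to poke at this kind of analysis themselves, here is a minimal sketch (not the study's actual code) of how a mean Spearman rank correlation is typically computed: for each query, correlate result position against an engagement metric, then average across queries. The tiny DataFrame and its column names are hypothetical.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical data: one row per (query, ranked result), with the result's
# Google position and its SimilarWeb-style engagement metrics.
df = pd.DataFrame({
    "query":        ["q1", "q1", "q1", "q2", "q2", "q2"],
    "position":     [1, 2, 3, 1, 2, 3],
    "time_on_site": [180, 95, 60, 240, 150, 30],      # seconds
    "bounce_rate":  [0.35, 0.55, 0.70, 0.30, 0.50, 0.80],
})

def mean_spearman(df: pd.DataFrame, metric: str) -> float:
    """Average per-query Spearman correlation between a metric and rank.

    Position 1 is the best rank, so a metric that rises with better
    rankings correlates *negatively* with position; we flip the sign so
    a positive number reads as "more of this metric, higher rankings".
    """
    rhos = []
    for _, group in df.groupby("query"):
        rho, _ = spearmanr(group["position"], group[metric])
        rhos.append(-rho)
    return sum(rhos) / len(rhos)

print("time on site:", mean_spearman(df, "time_on_site"))  # positive
print("bounce rate: ", mean_spearman(df, "bounce_rate"))   # negative
```

On real data you would average across tens of thousands of queries, which is why even small values like 0.10 or -0.08 are measurable and meaningful.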

To be clear, this doesn’t imply that Google or other search engines use metrics like bounce rate or click-through rate directly in their algorithm. Instead, a better way to think of this is that Google uses a number of user inputs to measure relevance, user satisfaction, and quality of results.

This is exactly the same argument the SEO community is currently debating over click-through rate and its possible use by Google as a ranking signal. For an excellent, well-balanced view of the debate, we highly recommend reading AJ Kohn’s thoughts and analysis.

It could be that Google is using Panda-like engagement signals. Because bounce rate correlates negatively with rankings, a healthy, higher-ranking site should tend to have a lower bounce rate. Similarly, sites where users spend more time and view more pages tend to appear higher in Google SERPs.

Global Rank correlations

SimilarWeb’s Global Rank is calculated by data aggregation, and is based on a combination of website traffic from six different sources and user engagement levels. We include engagement metrics to make sure that we’re portraying an accurate picture of the market.

A website with a lower Global Rank on SimilarWeb will generally have more visitors and stronger user engagement.

As Global Rank is a combination of traffic and engagement metrics, it’s no surprise that it was one of the highest-correlated features of the study. Even though the correlation is negative at -0.24, a low Global Rank is actually a good thing: a website with a Global Rank of 1 would be the highest-rated site on the web, so the lower a site's Global Rank, the better it tended to rank in Google.

As a side note, SimilarWeb’s Website Ranking provides insights for estimating any website’s value and benchmarking your site against it. You can use its tables to find out who’s leading per industry category and/or country.

Methodology

The Moz Search Engine Ranking Factors study examined the relationship between web search results and links, social media signals, visitor traffic and usage signals, and on-page factors. The study compiled datasets and conducted search result queries in English with Google’s search engine, focusing exclusively on US search results.

The dataset included a list of 16,521 queries taken from 22 top-level Google AdWords categories. Keywords were taken from head, middle, and tail queries. The searches ranged from infrequent (fewer than 1,000 queries per month), to frequent (more than 20,000 per month), to enormously frequent, with keywords searched more than one million times per month!

The top 50 US search results for each query were pulled from the datasets in a location- and personalization-agnostic manner.

SimilarWeb checked the traffic and engagement stats of more than 200,000 websites, and we have analytics on more than 90% of them. After we pulled the traffic data, we checked for a correlation using keywords from the Google AdWords tool to see what effect metrics like search traffic, time on site, page views, and bounce rates—especially with organic searches—have upon Google’s rankings.

Conclusion

We found a positive correlation between highly engaging user traffic metrics on SimilarWeb’s digital measurement platform and higher placement on Google search engine results pages. SimilarWeb also found that a brand’s popularity correlates with higher placement in Google search results.

With all the recent talk of user engagement metrics and rankings, we’d love to hear your take. Have you observed any relationship, improvement, or drop in rankings based on engagement? Share your thoughts in the comments below.


from Moz Blog http://ift.tt/1PGlJ84
via IFTTT

Friday 28 August 2015

How to Get Content into the Hands of Influencers Who Can Help Amplify It - Whiteboard Friday

Posted by randfish

Step 1: Create 10x content. Step 2: ??? Step 3: Massive flood of traffic.

There's a bigger gap in step 2 than many marketers anticipate, and one of the best ways to fill it is getting your content in front of influential people who can help spread the word. You'll have to make it worth their while, though, and in today's Whiteboard Friday, Rand explains how to go about that.

[Whiteboard image: How to Get Content into the Hands of Influencers Who Can Help Amplify It]

Click on the whiteboard image above to open a high resolution version in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about a problem that many of you have mentioned in comments, in tweets, in questions, and emails to me and to other folks here at Moz when we talk about content marketing and specifically content amplification like, "Okay, I made some great content. But how do I actually get people to share it? In particular, how do I get content into the hands of the influencers who might amplify it?"

Look, this is very frustrating, right? If you're a small brand, a small site, have a small social account, content amplification often feels like this.

You're like, "I just made the most amazing thing ever." It sucks, and I get that pain. I totally get that pain. We have all been there.

Moz has the wonderful privilege and opportunity of having this great content amplification channel. But when I started out, when I was making the blog in 2004/2005, nobody was listening. It was a very frustrating experience, and it took years before that content amplification lifecycle got to the point where... I remember one year, I think it was 2008, when Greg Boser, who is legendary in the SEO world, was on a panel at a search engine conference. He's there and he says, "Well, Rand just cheats. All he has to do is hit Publish."

I was like, "Oh yeah, all I have to do now is hit Publish." But it takes a long time to build that. Those folks who already have a following, a following on their blog, on their social account, on an email list, on a news site, whatever it is, have this outsized ability to spark virality, to help something that might be incredible, that you've made, become seen by a wide audience who will actually appreciate it.

But it can be a frustrating process. Here's Ann Handley's account.

Ann Handley, one of the best, most followed folks in the online marketing world, @MarketingProfs is her handle, and she tweets stuff all the time that gets a lot of retweets, a lot of engagement, and folks are thinking like, "How do I reach her? I have something amazing that I want the marketing world to see. How could I get Ann to share my content?"

I have a few tactics for you that I hope will work.

First off, this is going to sound tough, but... it is tough.

1) The simple nudge

This is the thing that I think you should probably be using 80% or more of your time.

The simple nudge is just like what it sounds like. "Hi, Ann. I'm a big fan. I'm a long-time follower. We made this thing we think you're going to love. Let us know if you have any suggestions. We're going to be updating it for the next few weeks. Thanks, Rand."

That could be through email. It could be a LinkedIn message. It could even be a Facebook message. It could be a tweet. That's a little bit long for a tweet, maybe a long series of DMs. But the thing that is going on here is the content and relevance have to be outstanding. It has to be something so remarkable that, as soon as you share it, as soon as you give the title of it, Ann's like, "Oh wait, I have not seen one of those. I am super interested in that."

How are you going to find something that you can nudge that influencer with, where they will think, "That's so remarkable that even though I have never heard the name of this person before, I'm going to check it out and then I'm going to share it"? That's hard to do. It's a very, very high bar. But that's the same high bar that you have for creating what I have been calling 10X content, the kind of content that people will actually amplify and share.

I talk about this a lot. But I always urge folks to ask the question, "Who will help amplify this and why?" If you have a great answer to that question, the nudge should be all you need 80% of the time.

Now, I do have two tools that I'm going to recommend. They are both used for email outreach, specifically finding folks' emails or getting connected to folks through email.

Voila Norbert is a great tool for finding emails that I have talked about on Whiteboard Friday before, and Conspire is a wonderful tool that will show you which of the people you have emailed have also emailed that person. So if I hadn't emailed Ann directly, I could look in Conspire and see, "Oh look, Cyrus Shepard has emailed Ann previously. Great, let me reach out to Cyrus and ask him for an introduction. He'll connect me up."

So some good tools that can be helpful there. I'm reluctant to promote it, but Followerwonk is also very useful for this discovery process, figuring out who those influencers are in the first place.

2) The inclusion mention

This tactic can work, and it's a nice, subtle way to get folks involved, especially if you're not too frustrated when it doesn't work out. For example, let's say Ann had tweeted something around CRM software. Well, she did send a tweet around CRM software that I copied in there, but maybe she has expressed some frustration around CRM software. She is like, "I don't know which vendors to choose. I wish there was a great resource."

I can say, "@MarketingProfs, your tweet about CRM frustration inspired us to make this." Cool. Now I'm sharing with her something that she has directly expressed an interest in, and I'm including her in there, again through Twitter. I could reach out through email. I could do it through LinkedIn. It could be through a lot of things.

I think any time you have content that's inspired by or particularly inclusive of a person, brand, or a place, let them know. There's no harm in letting them know. It could be that this is ignored 9 times out of 10, but 1 time out of 10 you're going to get that extra amplification and that's a wonderful thing.

I have found, by the way, that many times when it comes to longer-form content responding to a blog post, a tweet, or something that's been shared, professional, respectful, well-thought-out pieces that advance the conversation can work well even if we disagree about things.

So if Ann's expressed something and I go, "Hmm, I disagree with that. Let me explain why and the thinking behind it." But I'm going to be very professional, very respectful. I'm going to advance the conversation, like bring things forward, include non-obvious stuff that is helpful that makes this a better dialogue.

I can write up that piece, and then I can share it with her. This can be especially effective if you share it with the person before you hit Publish. Again, a good reason for pre-content amplification outreach.

3) The review

Well, the review is a tough one. I don't love it all the time. I especially don't love it when it's been done to death, which it has very often. But this is something like, "We reviewed the latest guide from @MarketingProfs here." So we go check out MarketingProfs, and we download one of their guides. We really like it. We write up a review.

This works when it's positive and when that positivity is clearly authentic and not just designed to get amplification. One of the things that happens that I see all the time, especially in the web marketing, but even in the technology world and a lot in the travel world is folks doing things like this with no intention other than hopefully getting a link or a retweet. They clearly have not put any authentic thought into it. It's not a quality piece. It's just designed to get that link. It's very transparent to the vast majority of influencers who get targeted with this stuff all the time.

So if it's been done to death, probably don't bother. But the review system can work for other kinds of things, and if it's positive and authentic, and that's not why you did it, great.

4) The network effect

It's frustrating because it's not always as effective. However, it can still be a small win even when you don't get the big win. So the idea would be I go and I check out, maybe potentially use Followerwonk. Or Little Bird is another one. It's paid and expensive, but very, very good.

These four accounts tend to share things that major influencers later pick up. Hmm, interesting. So at such-and-such and at so-and-so, these folks tend to be much easier to target and to reach out to: much higher response rate, much more likely to reply to you and engage with you. Say these folks are followed by the seven influencers we're really going after. Well, you know what, even if they share and none of the seven influencers do, that's okay. It's still a win.

So I hope that with these in your pocket you can go and do a little more successful content outreach and content amplification. If you have some tactics that you would like to share that have worked well for you, I would love to hear about them in the comments. I'm sure everyone else would as well, and you'll get lots of thumbs up.

So I look forward to that and to seeing you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


from Moz Blog http://ift.tt/1JBmRtn
via IFTTT

Thursday 27 August 2015

Moz's Acquisition of SERPscape, Russ Jones Joining Our Team, and a Sneak Peek at a New Tool

Posted by randfish

Today, it's my pleasure to announce some exciting news. First, if you haven't already seen it via his blog post, I'm thrilled to welcome Russ Jones, a longtime community member and great contributor to the SEO world, to Moz. He'll be joining our team as a Principal Search Scientist, joining the likes of Dr. Pete, Jay Leary, and myself as a high-level individual contributor on research and development projects.

If you're not familiar with Mr. Jones' work, let me embarrass my new coworker for a minute. Russ:

  • Was Angular's CTO after having held a number of roles with the company (previously known as Virante)
  • Is the creator of not just SERPscape, but the keyword data API, Grepwords, too (which Moz isn't acquiring—Russ will continue operating that service independently)
  • Runs a great Twitter profile sharing observations & posts about some of the most interesting, hardcore-nerdy stuff in SEO
  • Operates The Google Cache, a superb blog about SEO that's long been on my personal must-read list
  • Contributes regularly to the Moz blog through excellent posts and comments
  • Was, most recently, the author of this superb post on Moz comparing link indices (you can bet we're going to ask for his help to improve Mozscape)
  • And, perhaps most impressively, replies to emails almost as fast as I do :-)

Russ joins the team in concert with Moz's acquisition of a dataset and tool he built called SERPscape. SERPscape contains data on 40,000,000 US search results and includes an API capable of querying loads of interesting data about what appears in those results (e.g. the relative presence of a given domain, keywords that particular pages rank for, search rankings by industry, and more). For now, SERPscape is remaining separate from the Moz toolset, but over time, we'll be integrating it with some cool new projects currently underway (more on that below).

I'm also excited to share a little bit of a sneak preview of a project that I've been working on at Moz that we've taken to calling "Keyword Explorer." Russ, in his new role, will be helping out with that, and SERPscape's data and APIs will be part of that work, too.

In Q1 of this year, I pitched our executive team and product strategy folks for permission to work on Keyword Explorer and, after some struggles (welcome to bigger company life and not being CEO, Rand!), got approval to tackle what I think remains one of the most frustrating parts of SEO: effective, scalable, strategically-informed keyword research. Some of the problems Russ, I, and the entire Keyword Explorer team hope to solve include:

  • Getting more accurate estimates around relative keyword volumes when doing research outside AdWords
  • Having critical metrics like Difficulty, Volume, Opportunity, and Business Value included alongside our keywords as we're selecting and prioritizing them
  • A tool that lets us build lists of keywords, compare lists against one another, and upload sets of keywords for data and metrics collections
  • A single place to research keyword suggestions, uncover keyword metrics (like Difficulty, Opportunity, and Volume), and select keywords for lists that can be directly used for prioritization and tactical targeting

You can see some of this early work in Dr. Pete's KW Opportunity model, which debuted at Mozcon, in our existing Keyword Difficulty & SERP Analysis tool (an early inspiration for this next step), and in a few visuals below:

BTW: Please don't hold the final product to any of these; they're not actual shots of the tool, but rather design comps. What's eventually released almost certainly won't match these exactly, and we're still working on features, functionality, and data. We're also not announcing a release date yet. That said, if you're especially passionate about Keyword Explorer, want to see more, and don't mind giving us some feedback, feel free to email me (rand at moz dot com), and I'll have more to share privately in the near future.

But, new tools aren't the only place Russ will be contributing. As he noted in his post, he's especially passionate about research that helps the entire SEO field advance. His passion is contagious, and I hope it infects our entire team and community. After all, a huge part of Moz's mission is to help make SEO more transparent and accessible to everyone. With Russ' addition to the team, I'm confident we'll be able to make even greater strides in that direction.

Please join me in welcoming him and SERPscape to Moz!


from Moz Blog http://ift.tt/1VcDlfn
via IFTTT

Wednesday 26 August 2015

The SEO Professional's Guide to Waterfall Diagrams

Posted by Zoompf

As we know well by now, the speed of a web page is very important from both an SEO and a user experience perspective. Faster pages earn higher search engine rankings, and users visit more pages and convert at a higher rate on a fast-performing website. In short, the smart SEO professional needs to think about optimizing for performance as well as content.

As we discussed in our last article, WebPageTest is a great free tool you can use to optimize your website performance. One of the most useful outputs of the WebPageTest tool is a graphic known as the waterfall diagram. A waterfall diagram is a graphical view of all the resources loaded by a web browser to present your page to your users, showing both the order in which those resources were loaded and how long it took to load each resource. Analyzing how those resources are loaded can give you insight into what's slowing down your webpage, and what you can fix to make it faster.

Waterfall diagrams are a lot like Microsoft Excel: they are simple in concept and can be very powerful, yet most people aren't using them to their fullest potential. In this article, we will show how an SEO professional can use waterfall diagrams created by tools like WebPageTest to identify and improve their site's performance and user experience.

How to read a waterfall diagram

If you haven't done so already, go to WebPageTest and run a test of your site. When the results are finished, click into the first test result to see the waterfall. Below is a sample waterfall chart (click for a larger version).

[Image: sample waterfall diagram]

As mentioned above, waterfall diagrams are cascading charts that show how a web browser loads and renders a web page. Every row of the diagram is a separate request made by the browser. The taller the diagram, the more requests that are made to load the web page. The width of each row represents how long it takes for the browser to request a resource and download the response.

For each row, the waterfall chart uses a multi-colored bar to show where the browser spent its time loading that resource, for example:

[Image: the phases of a single request row]

It's important to understand each phase of a request, since you can improve the speed of your site by reducing the amount of time spent in each of these phases. Here is a brief overview (a small measurement sketch follows the list):

  • DNS Lookup [Dark Green] - Before the browser can talk to a server it must do a DNS lookup to convert the hostname to an IP Address. There isn't much you can do about this, and luckily it doesn't happen for all requests.
  • Initial Connection [Orange] - Before the browser can send a request, it must create a TCP connection. This should only happen on the first few rows of the chart, otherwise there's a performance problem (more on this later).
  • SSL/TLS Negotiation [Purple] - If your page is loading resources securely over SSL/TLS, this is the time the browser spends setting up that connection. With Google now using HTTPS as a search ranking factor, SSL/TLS negotiation is more and more common.
  • Time To First Byte (TTFB) [Green] - The TTFB is the time it takes for the request to travel to the server, for the server to process it, and for the first byte of the response to make it back to the browser. We will use this measurement to determine if your web server is underpowered or you need to use a CDN.
  • Downloading [Blue] - This is the time the browser spends downloading the response. The longer this phase is, the larger the resource. Ideally you can control the length of this phase by optimizing the size of your content.
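If you want to pull these same phase timings outside of WebPageTest, libcurl exposes them directly. Here is a small sketch using the pycurl bindings (the URL is a placeholder); libcurl reports cumulative times from the start of the request, so each phase is the difference between neighbouring values.

```python
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, "https://www.example.com/")  # placeholder URL
c.setopt(c.WRITEDATA, buf)
c.perform()

# Cumulative timings, in seconds since the request started.
dns        = c.getinfo(pycurl.NAMELOOKUP_TIME)
connect    = c.getinfo(pycurl.CONNECT_TIME)             # + TCP connection
tls        = c.getinfo(pycurl.APPCONNECT_TIME) or connect  # 0 for plain HTTP
first_byte = c.getinfo(pycurl.STARTTRANSFER_TIME)       # + TTFB
total      = c.getinfo(pycurl.TOTAL_TIME)               # + download
c.close()

print("DNS lookup:     %.3fs" % dns)
print("TCP connection: %.3fs" % (connect - dns))
print("SSL/TLS:        %.3fs" % (tls - connect))
print("TTFB:           %.3fs" % (first_byte - tls))
print("Download:       %.3fs" % (total - first_byte))
```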

You will also notice a few other lines on the waterfall diagram. There is a green vertical line which shows when "Start Render" happens. As we discussed in our last article, until Start Render happens, the user is looking at a blank white screen. A large Start Render time will make the user feel like your site is slow and unresponsive. There are some additional data points in the waterfall, such as "Content Download", but these are more advanced topics beyond the scope of this article.

Optimizing performance with a waterfall diagram

So how do we make a webpage load more quickly and create a better user experience? A waterfall chart provides us with 3 great visual aids to assist with this goal:

  1. First, we can optimize our site to reduce the amount of time it takes to download all the resources. This reduces the width of our waterfall. The skinnier the waterfall, the faster your site.
  2. Second, we can reduce the number of requests the browser needs to make to load a page. This reduces the height of our waterfall. The shorter your waterfall, the better.
  3. Finally, we can optimize the ordering of resource requests to improve rendering time. This moves the green Start Render line to the left. The further left this line, the better.

Let's now dive into each of these in more detail.

Reducing the width of the waterfall

We can reduce the width of the waterfall by reducing how long it takes to download each resource. We know that each row of the waterfall uses color to denote the different phases of fetching a resource. How often you see different colors reveals different optimizations you can make to improve the overall speed.

  • Is there a lot of orange? Orange is the initial TCP connection made to your site. Only the first 2-6 requests to a specific hostname should need to create a TCP connection; after that, the existing connections get reused. If you see a lot of orange on the chart, it means your site isn't using persistent connections. Below you can see a waterfall diagram for a site that isn't using persistent connections; note the orange section at the start of every request row. [Image: waterfall with a new connection on every request] Once persistent connections are enabled, the width of every request row will be cut in half because the browser won't have to make a new connection with every request.
  • Are there long, purple sections? Purple is the time spent performing an SSL/TLS negotiation. If you are seeing a lot of purple over and over again for the same site, it means you haven't optimized for TLS. In the snippet of the diagram below, we see 2 HTTPS requests. One server has been properly optimized, whereas the other has a bad TLS configuration: [Image: optimized vs. unoptimized TLS negotiation] To optimize TLS performance, see our previous Moz article.
  • Are there any long blue sections? Blue is the time spent downloading the response. If a row has a big blue section, it most likely means the response (the resource) is very large. A great way to speed up a site is to simply reduce the amount of data that has to be sent to the client. If you see a lot of blue, ask yourself "Why is that resource so large?" Chances are you can reduce its size through HTTP compression, minification, or image optimization. As an example, in the diagram below, we see a PNG image that is taking a long time to download. We can tell because of the long blue section. [Image: request row with a long download phase] Further research revealed that this image is nearly 1.1 MB in size! Turns out the designer forgot to export it properly from Photoshop. Using image optimization techniques reduced this row and made the overall page load faster.
  • Is there a lot of green? Chances are there is a lot of green. Green is the browser just waiting to get content. Many times you'll see a row where the browser is waiting 80 or 90 ms, only to spend 1 ms downloading the resource! The best way to reduce the green section is to move your static content, like images, to a content delivery network (CDN) closer to your users. More on this later.

Reducing the height of the waterfall

If the waterfall diagram is tall, the browser is having to make a large number of requests to load the page. The best way to reduce the number of requests is to review all the content your page is including and determine if you really need all of it. For example:

  • Do you see a lot of CSS or JavaScript files? Below is a snippet of a waterfall diagram from an AOL site which, I kid you not, requests 48 separate CSS files! [Image: waterfall showing dozens of separate CSS requests] If your site is loading a large number of individual CSS or JavaScript files, you should try combining them with a CMS plugin or as part of your build process (see the sketch after this list). Combining files reduces the number of requests made, improving your overall page speed.
  • Do you see a lot of "small" (less than 2 KB) JavaScript or CSS files? Consider including the contents of those files directly in your HTML via inline <script> or <style> tags.
  • Do you see a lot of 302 redirects? Redirects appear as yellow-highlighted rows and usually represent links on your page that are outdated or mistakenly entered. Each one creates an unnecessary request that needlessly increases the height of your waterfall. Replace those links with direct links to the new URLs.
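As a concrete sketch of the "combine your files" advice above: a trivial build step that concatenates several stylesheets (hypothetical file names) and strips comments and extra whitespace, so the browser makes one request instead of many. Real projects would use their CMS plugin or build tool instead.

```python
import re
from pathlib import Path

# Hypothetical stylesheets; in practice your CMS or build tool lists these.
css_files = ["reset.css", "layout.css", "theme.css"]

combined = "\n".join(Path(name).read_text() for name in css_files)
combined = re.sub(r"/\*.*?\*/", "", combined, flags=re.S)  # strip comments
combined = re.sub(r"[ \t]+", " ", combined)                # collapse spaces

Path("site.min.css").write_text(combined)
print("Wrote site.min.css, replacing %d requests with 1" % len(css_files))
```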

Improving rendering time

Recall that the Start Render time represents when the user first sees something on the page other than a blank white page.

What is your Start Render time? If it's longer than 1.5 seconds, you should try to improve it. To do so, first take a look at all the resources "above and to the left" of the Start Render line. This represents everything that should be considered for optimization to improve your render time.

Here are some tips:

  • Do you see any calls to load JavaScript libraries? JavaScript includes can block page rendering; move these lower in your page if possible.
  • Do you see a lot of requests for separate CSS items? Browsers wait until all the CSS is downloaded before they start rendering the page. Can you combine or inline any of those CSS files?
  • Do you see external fonts? When using an external font, the browser won't draw anything until it downloads that font. If possible, try to avoid using externally loaded fonts. If that is not possible, make sure you are eliminating any unnecessary 302 redirects to load that font, or (even better) consider hosting a copy of that font locally on your own webserver.

As an example, here is the top of a waterfall diagram:

[Image: top of a waterfall diagram showing the Start Render line]

The green start render line is just over 1 second which is pretty good. However, if you look to the left of the line, you can see some optimizations. First, there are multiple JS files. With the exception of jQuery, these can probably be deferred until later. There are also multiple CSS files. These could be combined. These optimizations would improve the start render time.

You may need to coordinate with your designers and your developers to implement these optimizations. However, the results are well worth it. No one likes looking at an empty white screen!

Other factors

Is my server fast enough?

We know that the time-to-first-byte from your server is a factor in search engine rankings. Luckily, a waterfall diagram gives you this metric. Simply look at the first row of the diagram, which should show timing information for how the browser downloads the base HTML page. Look at the TTFB measurement. If it is longer than about 500 ms, your server may be underpowered or unoptimized. Talk with your hosting provider to improve your server capabilities. Below is an example of a waterfall diagram where the server was taking nearly 10 seconds to respond! That's a slow server!

[Image: waterfall diagram with a nearly 10-second TTFB]

Do I need a CDN?

Latency can be a big source of delay for a website, and it has to do with the geographic distance between your server and your website visitors. As we have discussed, latency is driven by distance and the speed of light; a high-speed internet connection alone doesn't fix the problem. Content Delivery Networks (CDNs) speed up your website by storing copies of your static assets (images, CSS, JavaScript files, etc.) all over the world, reducing the latency for your visitors.

Waterfalls reveal how latency is affecting the speed of your site, and whether you should use a CDN. We can do this by looking at the TTFB measurements for the requests the browser makes to your server for static assets. The TTFB is composed of the time it takes for your request to travel to the server, for the server to process it, and for the first byte of the response to come back. For static assets, the server doesn't have to do any real processing of the request, so the TTFB measurement really just tells us how long a round trip takes between a visitor and your server. If you are getting high round-trip numbers, it means your content is too far away from your visitors.

To determine if you need a CDN, you first need to know the location of your server. Next, use WebPageTest and run a test from a location that is far away from your server. If your site is hosted in the US, run a test from Asia or Europe. Now, find the rows for requests for several images or CSS files on your server and look at the TTFB measurement. If you are getting a TTFB for static content that is more than 150 ms, you should consider a CDN. For commercial sites, you might want to look at the enterprise-grade capabilities of Akamai. For a cheaper option, check out CloudFlare, which offers free CDN services.

Summary

Believe it or not, we have only scratched the surface of the performance insights you can learn from a waterfall chart. However, this should be more than enough to start understanding how to read a chart and use it to detect the most basic and impactful performance issues that are slowing down your site.

You can reduce the width of the chart by optimizing your content and ensuring that each resource is received as quickly as possible. You can reduce the height of the waterfall by removing unneeded requests. Finally, you can speed up how quickly your users first see your page by optimizing all the content before the Start Render line.

If you're still not sure where to start, check out Zoompf's Free Performance Report to analyze your site and prioritize those fixes that will make the biggest impact on improving your page speed and waterfall chart metrics.


from Moz Blog http://ift.tt/1EhSS9n
via IFTTT

Tuesday 25 August 2015

The True Cost of Local Business Directories

Posted by kristihines

If you're a local business owner, you've likely heard that you should submit your business to local business directories like Yelp, Merchant Circle, Yellow Pages, and similar networks in order to help boost your local search visibility on Google. It sounds easy at first: you think you'll just go to a few websites, enter your contact information, and you'll be set. After all, you really just want to get some links to your website from these profiles.

But the truth is, there are a lot of local business listings to obtain if you go the DIY route. There are local business directories that offer free listings, paid listings, and package listings on multiple networks. There are also local data providers that aren’t necessarily directories themselves, but they push your information out to other directories.

In this post, we’re going to look at the real cost of getting local business listings for your local business.

Finding the right directories

Since one of a business owner’s most important commodities is time, it’s important to note the time investment that you must make to individually create and manage local business listings. Here's what you'll need to do to find the right directories for your business.

Directories ranking for your business

You can start by looking your business up on Google by name to see where you already have listings that need to be claimed.

These are the first directories you'll want to tackle, as they're the ones that people are viewing when they search for your business by name. This is especially important for local businesses that don't have their own website or social media presence. Updating these directories will help customers get to know your business, your hours, and what you have to offer.

These are going to be the easiest, in many cases, because the listing is already there. Most local business directories offer a link to help you start the process.

Depending on the directory, you'll need to look in several places to find the link to claim your business. Sometimes it can be found near the top of your listing. Other times, it may be hidden in the directory's header or footer.

It's important to claim your listings so you can add your website link, business hours, and photos to help your listing stand out from others. Claiming your listing will also help make sure you're notified about any reviews or public updates your business receives.

Directories ranking for your competitors

Once you've claimed the listings you already have, you'll want to start finding new ones. Creating listings on local business directories where your competitors have listings will help you get in front of your target audience. If you notice your competitors have detailed profiles on some networks, but not others, that should clue you in to which ones are going to be most effective.

To find these directories, search for your competitors by name on Google. You should be able to spot which ones you haven't claimed for yourself already and go from there.

Directories ranking for your keywords

What keywords and phrases does your business target in search? Do a quick search for them to see which local directories rank in the top ten search results. Most keyword searches related to local businesses will lead you to your website, your competitors' websites, specific business listings in local business directories, and categories on local business directories.

You should make sure you have a listing on the local business directories that rank for your competitors, as well as the ones whose categories rank. For the latter, you may even want to consider doing paid advertising or sponsorship to make sure your business is first for the category, since that page is likely receiving traffic from your target customers.

Directories ranking in mobile search

After you've looked for the directories that rank for your business name, your competitors, and your target keywords, you'll want to do the same research on mobile search. This will help you find additional directories that are favorites for mobile users. Considering the studies showing that 50% of mobile searchers end up visiting a local store to make a purchase, getting your business in local business directories that rank well in mobile is key to business success.

Claiming and creating local business directory listings

If you think finding the right local business directories is time-consuming, wait until you start to claim and create them. Some directories make it simple and straightforward. Others have a much more complicated process.

Getting your business listing verified is usually the toughest part. Some networks will not require any verification past confirming your email address. Some will have an automated call or texting system for you to use to confirm your phone number. Some will have you speak to a live representative in order to confirm your listing and try to sell you paid upgrades and advertising.

The lengthiest ones from start to finish are those that require you to verify your business by postal mail. It means that you will have to wait a couple of days (or weeks, depending on the directory) to complete your listing.

In the event that you're trying to claim a listing for your business that needs the address or phone number updated, you'll need to invest additional time to contact the directory's support team directly to get your information updated. Otherwise, you won't be able to claim your business by phone or mail.

The cost of local business listings

Now that you know the time investment of finding, claiming, and creating local business directory listings, it's time to look at the actual cost. While some of the top local business directories are free, others require payment if you want more than the basic listing, such as the addition of your website link, a listing in more than one category, removal of ads from your listing, and the ability to add media.

Pricing for local directory listings can range from $29 to $499 per year. You will find some directories that sell listings for their site alone, while others are grouped under plans like this one where you can choose to pay for one directory or a group of directories annually.

With the above service, you're looking at a minimum of $199 per year for one network, or $999 per year for dozens of networks. While it might look like a good deal, in reality, you are paying for listings that you could have gotten for free (Yahoo, Facebook, Google+, etc.) in addition to ones that have a paid entry.

So how can you decide what listings are worth paying for? If they are not listings that appear on the first page of search results for your business name, your competitors, or your keywords, you can do some additional research in the following ways:

Check the directory's search traffic

You can use SEMrush for free (10 queries prior to registering + 10 after entering your email address) to see the estimated search traffic for any given local business directory. For example, you can check Yelp's traffic by searching for their domain name:

Then, compare it with other local business directories you might not be familiar with, like this one:

This can help you decide whether or not it's worth upgrading to an account at $108 per month to get a website link and featured placement.

Alternatively, you can use sites like Alexa to estimate traffic through seeing which site has a lower Alexa ranking. For example, you can check Yelp's Alexa ranking:

Then compare it with other local business directories, like this one:

Instantly, you can see that between the two sites, Yelp is more popular in the US, while the other directory is more popular in India. You can scroll down further through the profile to see what countries a local business directory gets the majority of their traffic from to determine if they are getting traffic from your target customer base.

If you have a business in the US, and the directory you're researching doesn't get a lot of US traffic, it won't be worth getting a listing there, and certainly not worth paying for one.

Determine the directory's reputation

The most revealing search you can do for any local business directory you are considering paying for is the directory's name plus the word "scam." If the directory is a scam, you'll find out pretty quickly. Even if it's not a scam, you will find out what businesses and customers alike find unappealing about the directory's service.

The traffic a directory receives may trump a bad reputation, however. If you look at Yelp's Better Business Bureau page, you will find over 1,700 complaints. It goes to show that while some businesses have a great experience on Yelp, others do not.

If you find a directory with little traffic and bad reviews or complaints, it's best to steer clear, regardless of whether they want payment for your listing.

Look for activity in your category

Are other businesses in your category getting reviews, tips, or other engagement? If so, that means there are people actually using the website. If not, it may not be worth the additional cost.

The "in your category" part is particularly important. Photography businesses may be getting a ton of traffic, but if you have an air conditioning repair service, and none of the businesses in that category have reviews or engagement, then your business likely won't, either.

This also goes for local business directories that allow you to create a listing for free, but make you pay for any leads that you get. If businesses in your category are not receiving reviews or engagement, then the leads you receive may not pan out into actual paying customers.

See where your listing would be placed

Does paying for a listing on a specific local business directory guarantee you first-page placement? In some cases, that will make the listing worth it—if the site is getting enough traffic from your target customers.

This is especially important for local business directories whose category pages rank on the first page for your target keyword. For these directories, it's essential that your business gets placed in the right category and at the top of the first page, if possible.

Think of that category page as search results—the further down the page you are, the less likely people are to click through to your business. If you're on the second or third page, those chances go down even further.

In conclusion

Local business directories can be valuable assets for your local business marketing. Be sure to do your due diligence in researching the right directories for your business. You can also simplify the process and see what Moz Local has to offer. Once your listings are live, be sure to monitor them for new reviews, tips, and other engagement. Also be sure to monitor your analytics to determine which local business directory is giving you the most benefit!


from Moz Blog http://ift.tt/1JgiPmw
via IFTTT

Monday 24 August 2015

User Behaviour Data as a Ranking Signal

Posted by Dan-Petrovic

Question: How does a search engine interpret user experience?
Answer: They collect and process user behaviour data.

Types of user behaviour data used by search engines include click-through rate (CTR), navigational paths, time, duration, frequency, and type of access.

Click-through rate

Click-through rate analysis is one of the most prominent search quality feedback signals in both commercial and academic information retrieval papers. Both Google and Microsoft have made considerable efforts towards development of mechanisms which help them understand when a page receives higher or lower CTR than expected.

Position bias

CTR values are heavily influenced by position because users are more likely to click on top results. This is called “position bias,” and it’s what makes it difficult to accept that CTR can be a useful ranking signal. The good news is that search engines have numerous ways of dealing with the bias problem. In 2008, Microsoft found that the "cascade model" worked best in bias analysis. Despite slight degradation in confidence for lower-ranking results, it performed really well without any need for training data and it operated parameter-free. The significance of their model is in the fact that it offered a cheap and effective way to handle position bias, making CTR more practical to work with.
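To make the cascade idea concrete (my own toy illustration, not Microsoft's implementation): assume the user scans results top to bottom and stops at the first click. A result is only "examined" if every result above it was skipped, so you can estimate a largely position-bias-free attractiveness as clicks divided by examinations. A sketch with made-up click logs:

```python
from collections import Counter

# Hypothetical click log for one query: the position clicked in each
# session (the cascade model assumes a single click per session).
clicked_positions = [1, 1, 2, 1, 3, 2, 1, 1, 2, 1]

clicks = Counter(clicked_positions)

# A click at position k implies positions 1..k were all examined.
examinations = Counter()
for k in clicked_positions:
    for pos in range(1, k + 1):
        examinations[pos] += 1

for pos in sorted(examinations):
    attractiveness = clicks[pos] / examinations[pos]
    print(f"position {pos}: examined {examinations[pos]} times, "
          f"attractiveness = {attractiveness:.2f}")
```

Raw CTR would make position 3 look far worse than position 1; dividing by examinations removes most of that position bias, which is what made the model cheap, parameter-free, and practical to work with.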

Result attractiveness

Good CTR is a relative term. A 30% CTR for a top result in Google wouldn't be a surprise, unless it’s a branded term; then it would be a terrible CTR. Likewise, the same value for a competitive term would be extraordinarily high if nested between “high-gravity” search features (e.g. an answer box, knowledge panel, or local pack).

I've spent five years closely observing CTR data in the context of its dependence on position, snippet quality and special search features. During this time I've come to appreciate the value of knowing when deviation from the norm occurs. In addition to ranking position, consider other elements which may impact the user’s choice to click on a result:

  • Snippet quality
  • Perceived relevance
  • Presence of special search result features
  • Brand recognition
  • Personalisation

Practical application

Search result attractiveness is not an abstract academic problem. When done right, CTR studies can provide a lot of value to a modern marketer. Here's a case study where I take advantage of CTR average deviations in my phrase research and page targeting process.

Google's title bolding study

Google is also aware of additional factors that contribute to result attractiveness bias, and they've been busy working on non-position click bias solutions.

[Image: Google CTR study]

They show strong interest in finding ways to improve the effectiveness of CTR-based ranking signals. In addition to solving position bias, Google's engineers have gone one step further by investigating SERP snippet title bolding as a result attractiveness bias factor. I find it interesting that Google recently removed bolding in titles for live search results, likely to eliminate the bias altogether. Their paper highlights the value in further research focused on the bias impact of specific SERP snippet features.

URL access, duration, frequency, and trajectory

Logged click data is not the only useful user behaviour signal. Session duration, for example, is a high-value metric if measured correctly: a user could navigate to a page and leave it idle while they go out for lunch. This is where active user monitoring systems become useful.

There are many assisting user-behaviour signals which, while not indexable, aid measurement of engagement time on pages. This includes various types of interaction via keyboard, mouse, touchpad, tablet, pen, touch screen, and other interfaces.

Google's John Mueller recently explained that user engagement is not a direct ranking signal, and I believe this. Kind of. John said that this type of data (time on page, filling out forms, clicking, etc) doesn't do anything automatically.

At this point in time, we're likely looking at a sandbox model rather than a live listening and reaction system when it comes to the direct influence of user behaviour on a specific page. That said, Google does acknowledge limitations of quality-rater and sandbox-based result evaluation. They’ve recently proposed an active learning system, which would evaluate results on the fly with a more representative sample of their user base.

"Another direction for future work is to incorporate active learning in order to gather a more representative sample of user preferences."

Google's result attractiveness paper was published in 2010. In early 2011, Google released the Panda algorithm. Later that year, Panda went into flux, indicating an implementation of one form of an active learning system. We can expect more of Google's systems to run on their own in the future.

The monitoring engine

Google has designed and patented a system in charge of collecting and processing of user behaviour data. They call it "the monitoring engine", but I don't like that name—it's too long. Maybe they should call it, oh, I don't know... Chrome?

The actual patent describing Google's monitoring engine is a truly dreadful read, so if you're in a rush, you can read my highlights instead.

MetricsService

Let's step away from patents for a minute and observe what's already out there. Chrome's MetricsService is a system in charge of the acquisition and transmission of user log data. Transmitted histograms contain very detailed records of user activities, including opened/closed tabs, fetched URLs, maximized windows, et cetera.

Enter this in Chrome: chrome://histograms/
(Click here for technical details)

Here are a few external links with detailed information about Chrome's MetricsService, reasons and types of data collection, and a full list of histograms.

Use in rankings

Google can process duration data in an eigenvector-like fashion using nodes (URLs), edges (links), and labels (user behaviour data). Page engagement signals, such as session duration value, are used to calculate weights of nodes. Here are the two modes of a simplified graph comprised of three nodes (A, B, C) with time labels attached to each:

[Image: three-node graph (A, B, C) with time labels, shown as undirected and directed]

In an undirected graph model (undirected edges), the weight of the node A is directly driven by the label value (120 second active session). In a directed graph (directed edges), node A links to node B and C. By doing so, it receives a time-label credit from the nodes it links to.

In plain English, if you link to pages that people spend a lot of time on, Google will add a portion of that “time credit” towards the linking page. This is why linking out to useful, engaging content is a good idea. A “client behavior score” reflects the relative frequency and type of interactions by the user.

What's interesting is that the implicit quality signals of deeper pages also flow up to higher-level pages.
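Here is a toy sketch of that up-flow (my reading of the patent, not Google's actual code): a fixed-point iteration where each URL's score mixes its own dwell-time label with a fraction of the scores of the pages it links to.

```python
# Toy graph: observed active-session time per page (seconds) and links.
# The pages, times, and the 0.3 credit factor are all made up.
dwell = {"A": 120, "B": 40, "C": 200}
links = {"A": ["B", "C"], "B": [], "C": []}

total = sum(dwell.values())
score = {page: seconds / total for page, seconds in dwell.items()}

CREDIT = 0.3  # fraction of a linked page's score credited to the linker

def updated(page, score):
    base = dwell[page] / total  # the page's own time label
    if not links[page]:
        return base
    # Credit flows *up*: linking to engaging pages lifts this page.
    return base + CREDIT * sum(score[q] for q in links[page]) / len(links[page])

for _ in range(20):  # iterate until the scores settle
    score = {page: updated(page, score) for page in dwell}

for page, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {s:.3f}")
```

In this toy run, page A ends up scoring above its raw dwell time alone would suggest, because it links to the highly engaging page C.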

Reasonable surfer model

“Reasonable surfer” is the random surfer's successor. The PageRank dampening factor reflects the original assumption that after each followed link, our imaginary surfer is less likely to click on another random link, resulting in an eventual abandonment of the surfing path. Most search engines today work with a more refined model encompassing a wider variety of influencing factors.

For example, the likelihood of a link being clicked on within a page may depend on:

  • Position of the link on the page (top, bottom, above/below fold)
  • Location of the link on the page (menu, sidebar, footer, content area, list)
  • Size of anchor text
  • Font size, style, and colour
  • Topical cluster match
  • URL characteristics (external/internal, hyphenation, TLD, length, redirect, host)
  • Image link, size, and aspect ratio
  • Number of links on page
  • Words around the link, in title, or headings
  • Commerciality of anchor text

In addition to perceived importance from on-page signals, a search engine may judge link popularity by observing common user choices. A link users click more often within a page can carry more weight than one with fewer clicks. Google in particular mentions user click behaviour monitoring in the context of balancing out traditional, more easily manipulated signals (e.g. links).

In the following illustration, we can see two outbound links on the same document (A) pointing to two other documents: (B) and (C). On the left is what would happen in the traditional "random surfer model,” while on the right we have a link which sits on a more prominent location and tends to be a preferred choice by many of the pages' visitors.

[Image: random surfer vs. reasonable surfer link weighting]

This method can be used on a single document or in a wider scope, and is also applicable to both single users (personalisation) and groups (classes) of users determined by language, browsing history, or interests.

Pogo-sticking

One of the most telling signals for a search engine is when users perform a query and quickly bounce back to search results after visiting a page that didn't satisfy their needs. The effect was described and discussed a long time ago, and numerous experiments show its effect in action. That said, many question the validity of SEO experiments largely due to their rather non-scientific execution and general data noise. So, it's nice to know that the effect has been on Google's radar.

Address bar

URL data can include whether a user types a URL into an address field of a web browser, or whether a user accesses a URL by clicking on a hyperlink to another web page or a hyperlink in an email message. So, for example, if users type in the exact URL and hit enter to reach a page, that represents a stronger signal than when visiting the same page after a browser autofill/suggest or clicking on a link.

  • Typing in full URL (full significance)
  • Typing in partial URL with auto-fill completion (medium significance)
  • Following a hyperlink (low significance)

Login pages

Google monitors users and maps their journey as they browse the web. They know when users log into something (e.g. social network) and they know when they end the session by logging out. If a common journey path always starts with a login page, Google will add more significance to the login page in their rankings.

"A login page can start a user on a trajectory, or sequence, of associated pages and may be more significant to the user than the associated pages and, therefore, merit a higher ranking score."

I find this very interesting. In fact, as I write this, we're setting up a login experiment to see if repeated client access and page engagement impacts the search visibility of the page in any way. Readers of this article can access the login test page with username: moz and password: moz123.

The idea behind my experiment is to have all the signals mentioned in this article ticked off:

  • URL familiarity, direct entry for maximum credit
  • Triggering frequent and repeated access by our clients
  • Expected session length of 30-120 seconds
  • Session length credit up-flow to home page
  • Interactive elements add to engagement (export, chart interaction, filters)

Combining implicit and traditional ranking signals

Google treats various user-generated data with different degrees of importance. Combining implicit signals such as day of the week, active session duration, visit frequency, or type of article with traditional ranking methods improves reliability of search results.

[Image: page quality metrics combining implicit and traditional signals]
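As a crude sketch of what such a combination might look like (illustrative weights and caps only; the patent publishes no numbers): blend a traditional relevance score with normalised engagement signals.

```python
def blended_score(relevance: float, session_seconds: float,
                  visits_per_week: float) -> float:
    """Blend a traditional relevance score with implicit user signals.

    The weights (0.7/0.2/0.1) and the normalisation caps are made up
    purely for illustration.
    """
    time_signal = min(session_seconds / 300.0, 1.0)  # cap at 5 minutes
    freq_signal = min(visits_per_week / 10.0, 1.0)   # cap at 10 visits
    return 0.7 * relevance + 0.2 * time_signal + 0.1 * freq_signal

# A relevant page with strong engagement edges out a slightly more
# relevant page that users barely engage with.
print(blended_score(0.80, session_seconds=240, visits_per_week=6))  # 0.78
print(blended_score(0.85, session_seconds=20,  visits_per_week=1))  # ~0.62
```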

Impact on SEO

The fact that behaviour signals are on Google's radar stresses the rising importance of user experience optimisation. Our job is to incentivise users to click, engage, convert, and keep coming back. This complex task requires a multidisciplinary mix, including technical, strategic, and creative skills. We're being evaluated by both users and search engines, and everything users do on our pages counts. The evaluation starts at the SERP level and follows users during the whole journey throughout your site.

"Good user experience"

Search visibility will never depend on subjective user experience, but on search engines' interpretation of it. Our most recent research into how people read online shows that users don't react well when facing large quantities of text (this article included) and will often skim content and leave if they can't find answers quickly enough. This type of behaviour may send the wrong signals about your page.

My solution was to present all users with a skeletal content form, with supplementary content available on demand through the use of hypotext. As a result, our test page (~5,000 words) increased the average time per user from 6 to 12 minutes and reduced the bounce rate from 90% to 60%. The very article where we published our findings shows click, hover, and scroll-depth activity at double or triple the values of the rest of our content. To me, this was convincing enough.

[Image: click, hover, and scroll activity]

Google's algorithms disagreed, however, devaluing the content not visible on the page by default. Queries contained within unexpanded parts of the page aren't bolded in SERP snippets and currently don't rank as well as pages which copied that same content but made it visible. This is ultimately something Google has to work on, but in the meantime we have to be mindful of this perception gap and make calculated decisions in cases where good user experience doesn't match Google's best practices.



from Moz Blog http://ift.tt/1JMnHFP
via IFTTT
