Innovate not imitate!

Interested in the latest Growth hacks?

Welcome to our blog!

We want to help you start, manage, and grow your business using innovative strategies and implementation. We have a passion for helping businesses and companies of various sizes see the same success that we have achieved.

Our skillsets are wide and varied, spanning business strategy, marketing, and online strategy. An increasing number of companies are turning to the internet and online media as a means of maximising their marketing reach and exposure. This is a special area of focus for us, and we do more than simple SEO strategies.

See our website for more: www.innovatetoaccelerate.com

Friday 29 June 2018

Most common, yet troubling WordPress errors and their solutions

Running a WordPress website or blog is exciting. The thrill of being able to share your content with your audience with ease is the driving factor behind why WordPress powers over 30% of all websites. A people’s platform, WordPress is a popular Content Management System (CMS) for new and experienced users alike.

WordPress, however, does have its fair share of issues that trouble its users. Some of these issues are generic and can be addressed with small amendments. Other complications demand a technical learning curve to solve. This article highlights the most common issues and how to solve them.

Issues with themes and plugins

Themes and plugins are essentially the structures that support WordPress’ framework. Users often have to deal with issues related to them.

Theme issues:

  • Theme installation failed
  • Missing stylesheet
  • Sample data import errors
  • Homepage not matching the theme demo, etc.

The root cause of such theme-related issues is usually that something is missing from the theme’s zip file, or that you simply missed uploading the root theme folder.

For sample data import errors, you can try any of these solutions:

  • Once you have activated the theme, check that it includes the required custom post types and taxonomies
  • If media fails to import, open the sample data file in a text editor, locate one of the referenced files, and test its link in your browser
  • Alternatively, you can get in touch with the theme developer and share your issues if you are unable to address them successfully.

Plugin issues:

Regularly updating and ensuring that you download plugins from reliable sources can reduce risk. However, some errors still creep in which can be dealt with in the following manner.

  • Some plugin updates go along with the latest update of your WordPress version. Make sure you don’t miss them.
  • Plugins can be complex to set up and require careful configuration. Make sure that you are meticulous with the plugin documentation and follow instructions.
  • Always upload your plugins to the right folder: wp-content/plugins
  • If everything else fails, get in touch with the Plugin developer to seek your answers.

Lost WordPress admin password

Losing your WordPress site’s login password can cause real issues.

If you can successfully retrieve it through the emailed reset link, you are one of the lucky ones. A lot of WordPress admins never receive these emails in their inbox.

You can try resetting the password through phpMyAdmin. To do so, log in to your cPanel, open phpMyAdmin, and select your WordPress website’s database.

  • Open the wp_users table, find your user row, and enter a new password in the user_pass field
  • In the ‘Function’ column, select the MD5 option so that the new password is stored as a hash
  • Save the changes and you will be able to access your site’s admin dashboard with the new password.

Another way around this is to edit your theme’s functions.php file. Add the following line, save the file, and upload it. Once you are able to log in to the dashboard, remove the line and upload the file again.

wp_set_password( 'DesiredNewPassword', 1 );
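As a rough illustration (the password is a placeholder, and 1 is the ID of the user to reset, usually the first admin account created), the temporary addition to functions.php might look like this:

// Added temporarily at the end of the active theme's functions.php
// (the file already opens with <?php, so no new opening tag is needed).
// 'DesiredNewPassword' is a placeholder; 1 is the ID of the user whose password is being reset.
wp_set_password( 'DesiredNewPassword', 1 );
// Remove this line again as soon as you have logged back in.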

A hacked WordPress site

A hacked WordPress website is, unfortunately, a common issue. It is best dealt with by implementing robust website security monitoring and keeping a WordPress security plugin in place. You can also hide your site’s login page or add two-factor authentication so that you have ample time to act before an attack succeeds.

The white screen of death

The most common WordPress error is the ‘white screen of death’. To get things back to normal, start by checking whether your active theme or installed plugins have compatibility issues. This can be a lengthy process, as it requires you to deactivate all the plugins and reactivate them one by one to identify the one causing the trouble.

If you have been locked out of your dashboard, you can go the FTP way.

The other way of fixing the white screen of death is to increase the PHP memory limit by editing your wp-config.php file over FTP. All you need to do is add the following code snippet to wp-config.php:

define( 'WP_MEMORY_LIMIT', '256M' );
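If you are unsure where in the file to put it, a commonly used placement (assuming a standard wp-config.php) is just above the closing comment, for example:

// In wp-config.php, above the comment that reads something like
// /* That's all, stop editing! Happy publishing. */
define( 'WP_MEMORY_LIMIT', '256M' );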

Dealing with spam

Spam is a recurrent issue faced by many new WordPress site owners.

The most effective way to deal with spam is to install an anti-spam plugin such as Akismet. You should also moderate user-generated content on your site’s groups or forums to keep the situation under control. Cutting out spam comments also reduces clutter in your database, which helps keep your WordPress site fast.

Error 404

One of the most irritating WordPress errors is when posts return a 404 error because your website is unable to locate the page being requested. To fix posts returning a 404, you can regenerate the .htaccess rewrite rules by navigating to Settings > Permalinks. Just remember to click Save Changes.
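If WordPress cannot write the file itself (for example, because of file permissions), you can recreate it manually. Below is a sketch of the standard WordPress rewrite rules for an Apache .htaccess file in the site root; if your installation lives in a subdirectory, the paths will differ.

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress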

Error establishing a database connection

If your website has been hacked, or if there is an issue at your web host’s end, your website might display a message saying there is an error establishing a database connection.

To fix the issue, check your wp-config.php file and make sure that the database name, username, password, and host are all present and correct.

If the error continues, you can try adding the following line to wp-config.php to enable WordPress’s built-in database repair. Just be sure to remove this code from the file afterwards, as the repair screen is publicly accessible while it is enabled.

define( 'WP_ALLOW_REPAIR', true );

However, if everything is intact and the error persists, seek assistance from your host provider, as the problem may be occurring at their end.

Conclusion

There are, unfortunately, dozens of other WordPress errors that could claim a space in this article, but we have captured the most common ones that can be dealt with through easy tweaks. Treat these errors as prompts to make sure that all your website’s elements are in their right places before a small issue turns into one that takes your site down entirely.

Pawan Sahu is a digital marketer and passionate blogger at MarkupTrend.


from SEO – Search Engine Watch https://ift.tt/2Mwrnyk
via IFTTT

What Do SEOs Do When Google Removes Organic Search Traffic? - Whiteboard Friday

Posted by randfish

We rely pretty heavily on Google, but some of their decisions of late have made doing SEO more difficult than it used to be. Which organic opportunities have been taken away, and what are some potential solutions? Rand covers a rather unsettling trend for SEO in this week's Whiteboard Friday.

What Do SEOs Do When Google Removes Organic Search?

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're talking about something kind of unnerving. What do we, as SEOs, do as Google is removing organic search traffic?

So for the last 19 years or 20 years that Google has been around, every month Google has had, at least seasonally adjusted, not just more searches, but they've sent more organic traffic than they did that month last year. So this has been on a steady incline. There's always been more opportunity in Google search until recently, and that is because of a bunch of moves, not that Google is losing market share, not that they're receiving fewer searches, but that they are doing things that make SEO a lot harder.

Some scary news

Things like...

  • Aggressive "answer" boxes. So you search for a question, and Google provides not just necessarily a featured snippet, which can earn you a click-through, but a box that truly answers the searcher's question, that comes directly from Google themselves, or a set of card-style results that provides a list of all the things that the person might be looking for.
  • Google is moving into more and more aggressively commercial spaces, like jobs, flights, products, all of these kinds of searches where previously there was opportunity and now there's a lot less. If you're Expedia or you're Travelocity or you're Hotels.com or you're Cheapflights and you see what's going on with flight and hotel searches in particular, Google is essentially saying, "No, no, no. Don't worry about clicking anything else. We've got the answers for you right here."
  • We also saw for the first time a seasonally adjusted drop, a drop in total organic clicks sent. That was between August and November of 2017. It was thanks to the Jumpshot dataset. It happened at least here in the United States. We don't know if it's happened in other countries as well. But that's certainly concerning because that is not something we've observed in the past. There were fewer clicks sent than there were previously. That makes us pretty concerned. It didn't go down very much. It went down a couple of percentage points. There's still a lot more clicks being sent in 2018 than there were in 2013. So it's not like we've dipped below something, but concerning.
  • New zero-result SERPs. We absolutely saw those for the first time. Google rolled them back after rolling them out. But, for example, if you search for the time in London or a Lagavulin 16, Google was showing no results at all, just a little box with the time and then potentially some AdWords ads. So zero organic results, nothing for an SEO to even optimize for in there.
  • Local SERPs that remove almost all need for a website. Then local SERPs, which have been getting more and more aggressively tuned so that you never need to click the website, and, in fact, Google has made it harder and harder to find the website in both mobile and desktop versions of local searches. So if you search for Thai restaurant and you try and find the website of the Thai restaurant you're interested in, as opposed to just information about them in Google's local pack, that's frustratingly difficult. They are making those more and more aggressive and putting them more forward in the results.

Potential solutions for marketers

So, as a result, I think search marketers really need to start thinking about: What do we do as Google is taking away this opportunity? How can we continue to compete and provide value for our clients and our companies? I think there are three big sort of paths — I won't get into the details of the paths — but three big paths that we can pursue.

1. Invest in demand generation for your brand + branded product names to leapfrog declines in unbranded search.

The first one is pretty powerful and pretty awesome, which is investing in demand generation, rather than just demand serving, but demand generation for brand and branded product names. Why does this work? Well, because let's say, for example, I'm searching for SEO tools. What do I get? I get back a list of results from Google with a bunch of mostly articles saying these are the top SEO tools. In fact, Google has now made a little one box, card-style list result up at the top, the carousel that shows different brands of SEO tools. I don't think Moz is actually listed in there because I think they're pulling from the second or the third lists instead of the first one. Whatever the case, frustrating, hard to optimize for. Google could take away demand from it or click-through rate opportunity from it.

But if someone performs a search for Moz, well, guess what? I mean we can nail that sucker. We can definitely rank for that. Google is not going to take away our ability to rank for our own brand name. In fact, Google knows that, in the navigational search sense, they need to provide the website that the person is looking for front and center. So if we can create more demand for Moz than there is for SEO tools, which I think there's something like 5 or 10 times more demand already for Moz than there is tools, according to Google Trends, that's a great way to go. You can do the same thing through your content, through your social media, and through your email marketing. Even through search you can search and create demand for your brand rather than unbranded terms.

2. Optimize for additional platforms.

Second thing, optimizing across additional platforms. So we've looked, and YouTube and Google Images account for about half of the overall volume that goes to Google web search. So between these two platforms, you've got a significant amount of additional traffic that you can optimize for. Images has actually gotten less aggressive. Right now they've taken away the "view image directly" link so that more people are visiting websites via Google Images. YouTube, obviously, this is a great place to build brand affinity, to build awareness, to create demand, this kind of demand generation to get your content in front of people. So these two are great platforms for that.

There are also significant amounts of web traffic still on the social web — LinkedIn, Facebook, Twitter, Pinterest, Instagram, etc., etc. The list goes on. Those are places where you can optimize, put your content forward, and earn traffic back to your websites.

3. Optimize the content that Google does show.

Local

So if you're in the local space and you're saying, "Gosh, Google has really taken away the ability for my website to get the clicks that it used to get from Google local searches," the answer is to go into Google My Business and optimize the information you provide so that people who perform that query are satisfied by Google's result. Yes, they won't get to your website, but they will still come to your business, because you've optimized the content Google shows through Google My Business so that those searchers want to engage with you. I think this sometimes gets lost in the SEO battle. We're trying so hard to earn the click to our site that we're forgetting that a lot of the search experience ends right at the SERP itself, and we can optimize there too.

Results

In the zero-results sets, Google was still willing to show AdWords, which means if we have customer targets, we can use remarketing lists for search ads (RLSA), or we can run paid ads and still optimize for those. We could also try and claim some of the data that might show up in zero-result SERPs. We don't yet know what that will be after Google rolls it back out, but we'll find out in the future.

Answers

For answers, the answers that Google is giving, whether that's through voice or visually, those can be curated and crafted through featured snippets, through the card lists, and through the answer boxes. We have the opportunity again to influence, if not control, what Google is showing in those places, even when the search ends at the SERP.

All right, everyone, thanks for watching for this edition of Whiteboard Friday. We'll see you again next week. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2KvxMcF
via IFTTT

Thursday 28 June 2018

How UX fits into SEO

There is a common misconception that SEO simply involves link building and including relevant keywords in content. While these are two important strategies, search engines consider a lot more than this when ranking websites.

Elements of user experience (UX) have been rolled into SEO practices. Is your site fast, secure and mobile-friendly? Do you have quality content that engages users and encourages them to stay on your website? Is your site quick to load and easy to navigate?

These are all elements that are considered by Google and other search engines when determining how to rank your website. With that in mind, read on to discover more about how UX fits into search engine optimization.

Why UX is important for SEO

Google has changed considerably over the years. The search engine giant constantly updates its algorithms to ensure users are provided with the best possible results. Each and every update that Google has made has been geared towards providing more user-focused and user-friendly results. We have seen changes to SERPs, like knowledge panels and rich snippets, and algorithm updates that have shown just how critical UX has become to Google. You only need to look at RankBrain to see that this is the case.

The introduction of RankBrain

RankBrain was introduced in 2015, and it was considered the third most significant factor in determining the SEO value of your website, only falling below links and content. RankBrain is driven by behavior metrics, including pages per session, bounce rate, dwell time, and organic CTR. Essentially, these metrics inform the search engine as to whether users enjoy their experience on your website.

After all, if a user visits your website again and again, spends a good chunk of time on it, and moves through the website with ease, it tells Google that you provide a good UX, and as a consequence, your ranking will improve. On the flip side, if someone leaves your website as soon as they enter it, returning back to the search results, it indicates that they did not find relevant information, and this can cause a drop in your ranking.

UX and SEO share common goals

UX also fits into SEO because they both share common goals. If you have been following SEO over the past few years, you will know that it has moved away from solely ranking for search terms. Now, it seeks to provide searchers with information that answers their queries. This is where UX and SEO start to interact. Both share the goal of helping users to complete their tasks by providing them with relevant information. SEO will lead a person to the content they need, and the UX answers their queries once a user ends up on the webpage.

Important SEO practices that influence UX

It is important to understand the common SEO and content practices that influence UX:

  • Both image alt tags and headings are critical. Alt tags provide details when images do not load, ensuring the user has a similar experience whether or not the picture appears. Headers help structure page content and improve page readability.
  • Creating page copy over 600 words is important to ensure it is in-depth and answers user questions.
  • Page speed also plays a vital role. No one wants to wait two seconds for a page to load. The Internet is supposed to be about convenience. How often have you had to wait for a page to load and you have ended up hitting the refresh button several times in frustration?

How to get the UX right

Hopefully, you now have a better understanding regarding the importance of UX in terms of search engine optimization. So, where should you begin in terms of improving your website’s UX so that it has a positive impact on your ranking?

  • Align your UX and SEO strategies – The first thing you need to do is make sure both strategies are integrated, rather than working in separate lanes. After all, one of the main objectives of your website should be to generate more leads and conversions. Both UX and SEO play a critical part in achieving this goal, but they need to work together if you are to have success.
  • Focus on designs that fit SEO principles – This includes providing focused product names and descriptions, creating a clear navigation path, optimising menu names and functionalities, maximizing H1 and H2 titles, and creating content that resonates with both visitors and search engines.
  • Invest in responsive web design – There is no excuse for having a website that is not optimized for use across all platforms in 2018. It is projected that by 2020 there will be 2.87 billion smartphone users. Just think of how many potential customers you are missing out on by failing to optimise your website. Not only this, but your search engine ranking will be suffering too. If someone enters your website via mobile phone and it is difficult to read, some of the buttons don’t work, and/or it is slow to load, you will never be able to reach one of the top spots on Google.
  • Simplify navigation – Website navigation is a key factor when it comes to UX and consequently your search engine ranking. Your homepage should feature clear and easy navigation. Users should be able to use your website intuitively – they shouldn’t have to think about their next step. One effective method for helping Google to understand and index your pages is including a sitemap on your website.
  • Focus on quality – Navigation is not the only factor that Google considers when determining whether your website is of a high quality or not. Other factors you need to work on include page layout, content relevance, content originality, internal link structure, and page speed.

When it comes to ranking your website on the search engine result pages, there is no denying that UX is one of the most critical factors. If you want to increase your online visibility and ultimately boost your conversion rate, you need to align your UX and SEO strategy.

Use the advice that has been provided as a starting point, but make sure you continue to test your website and make improvements. After all, if you remain stagnant, you will only get left behind.



from SEO – Search Engine Watch https://ift.tt/2Mr8ehl
via IFTTT

The Minimum Viable Knowledge You Need to Work with JavaScript & SEO Today

Posted by sergeystefoglo

If your work involves SEO at some level, you’ve most likely been hearing more and more about JavaScript and the implications it has on crawling and indexing. Frankly, Googlebot struggles with it, and many websites utilize modern-day JavaScript to load in crucial content today. Because of this, we need to be equipped to discuss this topic when it comes up in order to be effective.

The goal of this post is to equip you with the minimum viable knowledge required to do so. This post won’t go into the nitty gritty details, describe the history, or give you extreme detail on specifics. There are a lot of incredible write-ups that already do this — I suggest giving them a read if you are interested in diving deeper (I’ll link out to my favorites at the bottom).

In order to be effective consultants when it comes to the topic of JavaScript and SEO, we need to be able to answer three questions:

  1. Does the domain/page in question rely on client-side JavaScript to load/change on-page content or links?
  2. If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?
  3. If not, what is the ideal solution?

With some quick searching, I was able to find three examples of landing pages that utilize JavaScript to load in crucial content.

I’m going to be using Sitecore’s Symposium landing page through each of these talking points to illustrate how to answer the questions above.

We’ll cover the “how do I do this” aspect first, and at the end I’ll expand on a few core concepts and link to further resources.

Question 1: Does the domain in question rely on client-side JavaScript to load/change on-page content or links?

The first step to diagnosing any issues involving JavaScript is to check if the domain uses it to load in crucial content that could impact SEO (on-page content or links). Ideally this will happen anytime you get a new client (during the initial technical audit), or whenever your client redesigns/launches new features of the site.

How do we go about doing this?

Ask the client

Ask, and you shall receive! Seriously though, one of the quickest/easiest things you can do as a consultant is contact your POC (or developers on the account) and ask them. After all, these are the people who work on the website day-in and day-out!

“Hi [client], we’re currently doing a technical sweep on the site. One thing we check is if any crucial content (links, on-page content) gets loaded in via JavaScript. We will do some manual testing, but an easy way to confirm this is to ask! Could you (or the team) answer the following, please?

1. Are we using client-side JavaScript to load in important content?
2. If yes, can we get a bulleted list of where/what content is loaded in via JavaScript?”

Check manually

Even on a large e-commerce website with millions of pages, there are usually only a handful of important page templates. In my experience, it should only take an hour max to check manually. I use the Web Developer Chrome extension to disable JavaScript, and manually check the important templates of the site (homepage, category page, product page, blog post, etc.)

In the example above, once we turn off JavaScript and reload the page, we can see that we are looking at a blank page.

As you make progress, jot down notes about content that isn’t being loaded in, is being loaded in wrong, or any internal linking that isn’t working properly.

At the end of this step we should know if the domain in question relies on JavaScript to load/change on-page content or links. If the answer is yes, we should also know where this happens (homepage, category pages, specific modules, etc.)

Crawl

You could also crawl the site (with a tool like Screaming Frog or Sitebulb) with JavaScript rendering turned off, and then run the same crawl with JavaScript turned on, and compare the differences with internal links and on-page elements.

For example, it could be that when you crawl the site with JavaScript rendering turned off, the title tags don’t appear. In my mind this would trigger an action to crawl the site with JavaScript rendering turned on to see if the title tags do appear (as well as checking manually).

Example

For our example, I went ahead and did a manual check. As we can see from the screenshot below, when we disable JavaScript, the content does not load.

In other words, the answer to our first question for this page is “yes, JavaScript is being used to load in crucial parts of the site.”

Question 2: If yes, is Googlebot seeing the content that’s loaded in via JavaScript properly?

If your client is relying on JavaScript on certain parts of their website (in our example they are), it is our job to try and replicate how Google is actually seeing the page(s). We want to answer the question, “Is Google seeing the page/site the way we want it to?”

In order to get a more accurate depiction of what Googlebot is seeing, we need to attempt to mimic how it crawls the page.

How do we do that?

Use Google’s new mobile-friendly testing tool

At the moment, the quickest and most accurate way to try and replicate what Googlebot is seeing on a site is by using Google’s new mobile friendliness tool. My colleague Dom recently wrote an in-depth post comparing Search Console Fetch and Render, Googlebot, and the mobile friendliness tool. His findings were that most of the time, Googlebot and the mobile friendliness tool resulted in the same output.

In Google’s mobile friendliness tool, simply input your URL, hit “run test,” and then once the test is complete, click on “source code” on the right side of the window. You can take that code and search for any on-page content (title tags, canonicals, etc.) or links. If they appear here, Google is most likely seeing the content.

Search for visible content in Google

It’s always good to sense-check. Another quick way to check if GoogleBot has indexed content on your page is by simply selecting visible text on your page, and doing a site:search for it in Google with quotations around said text.

In our example there is visible text on the page that reads…

"Whether you are in marketing, business development, or IT, you feel a sense of urgency. Or maybe opportunity?"

When we do a site:search for this exact phrase, for this exact page, we get nothing. This means Google hasn’t indexed the content.

Crawling with a tool

Most crawling tools have the functionality to crawl JavaScript now. For example, in Screaming Frog you can head to configuration > spider > rendering > then select “JavaScript” from the dropdown and hit save. DeepCrawl and SiteBulb both have this feature as well.

From here you can input your domain/URL and see the rendered page/code once your tool of choice has completed the crawl.

Example:

When attempting to answer this question, my preference is to start by inputting the domain into Google’s mobile friendliness tool, copying the source code, and searching for important on-page elements (think title tag, <h1>, body copy, etc.). It’s also helpful to use a tool like diff checker to compare the rendered HTML with the original HTML (Screaming Frog also has a function where you can do this side by side).

For our example, here is what the output of the mobile friendliness tool shows us.

After a few searches, it becomes clear that important on-page elements are missing here.

We also did the second test and confirmed that Google hasn’t indexed the body content found on this page.

The implication at this point is that Googlebot is not seeing our content the way we want it to, which is a problem.

Let’s jump ahead and see what we can recommend the client.

Question 3: If we’re confident Googlebot isn’t seeing our content properly, what should we recommend?

Now that we know the domain is using JavaScript to load in crucial content, and that Googlebot is most likely not seeing that content, the final step is to recommend an ideal solution to the client. Key word: recommend, not implement. It’s 100% our job to flag the issue to our client, explain why it’s important (as well as the possible implications), and highlight an ideal solution. It is 100% not our job to try to do the developer’s job of figuring out an ideal solution with their unique stack/resources/etc.

How do we do that?

You want server-side rendering

The main reason why Google is having trouble seeing Sitecore’s landing page right now is that Sitecore’s landing page is asking the user (us, Googlebot) to do the heavy work of loading the JavaScript on their page. In other words, they’re using client-side JavaScript.

Googlebot is literally landing on the page, trying to execute JavaScript as best as possible, and then needing to leave before it has a chance to see any content.

The fix here is to instead have Sitecore’s landing page load on their server. In other words, we want to take the heavy lifting off of Googlebot, and put it on Sitecore’s servers. This will ensure that when Googlebot comes to the page, it doesn’t have to do any heavy lifting and instead can crawl the rendered HTML.

In this scenario, Googlebot lands on the page and already sees the HTML (and all the content).
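As a rough sketch only (it assumes a Node/Express server and a React app; the App component, port, and bundle path are invented for illustration, not taken from Sitecore's actual stack), server-side rendering might look something like this:

// server.js — minimal server-side rendering sketch (Express + React assumed)
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App'); // hypothetical root component of the site

const app = express();
app.get('*', (req, res) => {
  // Render the full HTML on the server so crawlers receive the content directly.
  const markup = renderToString(React.createElement(App, { url: req.url }));
  res.send(
    '<!DOCTYPE html><html><head><title>Example</title></head><body>' +
    '<div id="root">' + markup + '</div>' +
    '<script src="/bundle.js"></script>' + // the client-side bundle can still take over afterwards
    '</body></html>'
  );
});
app.listen(3000);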

There are more specific options (like isomorphic setups)

This is where it gets to be a bit in the weeds, but there are hybrid solutions. The best one at the moment is called isomorphic.

In this model, we're asking the client to load the first request on their server, and then any future requests are made client-side.

So Googlebot comes to the page, the client’s server has already executed the initial JavaScript needed for the page, sends the rendered HTML down to the browser, and anything after that is done on the client-side.

If you’re looking to recommend this as a solution, please read this post from the Airbnb team which covers isomorphic setups in detail.

AJAX crawling = no go

I won’t go into details on this, but just know that Google’s previous AJAX crawling solution for JavaScript has since been discontinued and will eventually not work. We shouldn’t be recommending this method.

(However, I am interested to hear any case studies from anyone who has implemented this solution recently. How has Google responded? Also, here’s a great write-up on this from my colleague Rob.)

Summary

At the risk of severely oversimplifying, here's what you need to do in order to start working with JavaScript and SEO in 2018:

  1. Know when/where your client’s domain uses client-side JavaScript to load in on-page content or links.
    1. Ask the developers.
    2. Turn off JavaScript and do some manual testing by page template.
    3. Crawl using a JavaScript crawler.
  2. Check to see if GoogleBot is seeing content the way we intend it to.
    1. Google’s mobile friendliness checker.
    2. Doing a site:search for visible content on the page.
    3. Crawl using a JavaScript crawler.
  3. Give an ideal recommendation to client.
    1. Server-side rendering.
    2. Hybrid solutions (isomorphic).
    3. Not AJAX crawling.

Further resources

I’m really interested to hear about any of your experiences with JavaScript and SEO. What are some examples of things that have worked well for you? What about things that haven’t worked so well? If you’ve implemented an isomorphic setup, I’m curious to hear how that’s impacted how Googlebot sees your site.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2KuqaqL
via IFTTT

Wednesday 27 June 2018

Google Testing Large Product Knowledge Panel With Paid & Organic Features

Google is testing a new large product based knowledge panel in their search results.  This new style is a cross between a product knowledge panel and a Product Listing Ad, something they first began testing several years ago, with a combination of both paid and organic search features included. Dr. Pete Meyers from Moz spotted […]

The post Google Testing Large Product Knowledge Panel With Paid & Organic Features appeared first on The SEM Post.



from The SEM Post https://ift.tt/2Msz7S9
https://ift.tt/eA8V8J via IFTTT

Keyword Stuffing Will Not Cause Sites to be Removed from Google

Keyword stuffing is an SEO trick that dates back many, many years – predating Google – and it is still periodically seen by sites that are ranking well in the search results.  But how bad is it to do it on a site? The question came up on Twitter and John Mueller from Google clarified […]

The post Keyword Stuffing Will Not Cause Sites to be Removed from Google appeared first on The SEM Post.



from The SEM Post https://ift.tt/2Mr2cNz
https://ift.tt/eA8V8J via IFTTT

Tuesday 26 June 2018

The ultimate guide to meta tags: why they matter and how to optimize them for impact

Whether you work in an agency or in-house, SEO success has a lot to do with influencing other functions, for example, web development, site merchandising, content marketing, PR, etc. As SEO professionals, we do have our own secret sauce to cook with: meta tags.

Although meta tags are read by search engines rather than displayed to visitors, they are still an essential part of how Google’s core algorithm understands a page and must not be ignored. We will go through the most common meta tags and highlight their usefulness so you can easily check whether you’re spending enough time where it counts.

Meta tags defined

Meta tags are snippets of HTML code that help search engines and website visitors better understand the content found on a website page. Meta tags are not the actual content that is featured on the page.

The purpose of meta tags is instead to describe the content. Therefore, these HTML elements are found in the <head> section of the HTML page, not within the <body> section. Since meta tags need to be written in the HTML code, you may or may not be the one implementing the tags, but knowing what’s most essential will set you up for success.

Why is it still important?

We know that SEO is evolving and the importance of keywords has changed, but let’s keep in mind the impact of the actual query that is being searched for. A search query is formulated in words, and search engine users are essentially scanning the SERPs for the words they entered into the search bar.

Search engines understand that their users are expecting to see results containing the exact words they entered. Let’s say I’m thinking of starting a business and run a search for the query “how to come up with a business name.” As I scan through the SERPs, my eye is looking for pages that contain the words “come up with business name.” While search engines may indulge in semantic search and latent semantic indexing, serving up results that contain the exact words of the search query will remain a strong asset.

Must have meta tags

Title tags and meta descriptions are the bread and butter of SEO. These are essential HTML elements that are needed for a page to rank well organically. As a refresher, let’s look more closely at them and why they are on the list of must-haves.

Title tags

A title tag is an HTML element that describes the topic of a page. It is displayed at the top of the browser in the title bar and as the listing title on a search engine results page. The presence of a search-friendly term in the title tag is still a strong relevancy signal for search engines. Search engines will also bold keywords from the user’s query in the title, which helps attract a higher click-through rate because users scan search results looking for their search term. If they don’t see it, they are less likely to click on the listing, which reduces CTR.

Title tags must be relevant to the content on the page. The main keyword should be the first word in the page title, and the closer to the start of the title tag a keyword is, the more helpful it will be for ranking purposes.
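For illustration (the store name and keyword below are invented), a title tag placed in the page’s <head> might look like this:

<head>
  <title>Blue Running Shoes for Men | Example Store</title>
</head>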

Meta descriptions

A meta description is an HTML meta tag that provides a brief description of the page. Although it is not visible to users on the site, search engines often use the meta description as the brief snippet of text underneath a title tag in the search engine results. Well-written meta description tags, while not a direct ranking factor, are extremely important in promoting user click-through from search engine result pages.

Meta descriptions should be written using compelling copy. Since the meta description serves as advertising copy in search results, this is your chance to draw searchers in. Describe the page clearly and use a friendly marketing voice to create an appealing description that will attract a higher click-through rate.
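A hypothetical meta description (the copy and store name are placeholders), placed in the <head> alongside the title tag:

<meta name="description" content="Shop lightweight blue running shoes for men. Free shipping, 30-day returns and expert fitting advice from Example Store.">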

Alt text for images

Alt text is an attribute added to an image tag in HTML to help search engines understand what an image is about. Although search engines cannot see the images we post on our websites, they can read what is featured in the alt attribute. While most searches are not image related, there is still a strong opportunity to acquire organic search engine visitors and boost brand recognition through impressions earned for images.

Alt text should be written clearly and contain text that describes the image. If your image is of an object, consider using adjectives like the color or the size of the object to provide more detail on exactly what the image is displaying. Moreover, alt text is not for search engines only: it is a necessary element for meeting basic accessibility standards, providing a clear text alternative of the image for screen reader users.
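For example (the product and file name are invented), descriptive alt text on an image tag could look like this:

<img src="/images/blue-running-shoe.jpg" alt="Men's lightweight blue running shoe with white sole, side view">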

No follow tags

Google defines “nofollow” as a way for webmasters to tell search engines not to follow links on a specific page. The rel=”nofollow” attribute can be quite beneficial in ensuring that PageRank is not being transferred across links found on your site. Nofollow tags are essential if you are participating in any kind of paid sponsorship with the intent of earning links.
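A sponsored link carrying the nofollow attribute might be marked up like this (the URL and anchor text are placeholders):

<a href="https://example.com/partner-offer" rel="nofollow">Check out our partner's offer</a>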

No index tags

The “noindex” tag is used to notify search engine crawlers not to include a particular page in its search results. These tags are essential if there is content on your website that you would like to keep out of the search results. Noindex can be implemented either as a meta tag or as an HTTP response header.
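As a generic example, the meta tag version goes in the <head> of the page to be excluded:

<meta name="robots" content="noindex">

The HTTP response header version is sent by the server for the URL to be excluded:

X-Robots-Tag: noindex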

Nice to have meta tags

In a highly competitive organic search landscape, “nice to have” meta tags, while not as essential as those listed above, should not be overlooked.

Canonical links

The canonical link element is used when a page’s content is available through multiple URLs, creating duplicates. In order to consolidate the duplicate entries and help the search engine select the best URL, we recommend using a canonical link to indicate which URL should be indexed.

Simply identify a single preferred URL (generally the simplest one), and add the rel=”canonical” link element, using that preferred URL, to every variant of the page. When Google crawls the site, it will consolidate duplicates within its index to the preferred URL.
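For instance, if the same product page can be reached with tracking parameters (the URLs below are placeholders), every variant would carry the same canonical link element in its <head>:

<!-- On https://example.com/shoes/blue-running-shoe and on
     https://example.com/shoes/blue-running-shoe?utm_source=newsletter -->
<link rel="canonical" href="https://example.com/shoes/blue-running-shoe">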

HTML heading tags (H1-H6)

HTML heading tags are a key component of semantic search, as they provide key contextual clues to the search engines and help them better understand both a page’s content and its overall structure. Search engine bots use the order of heading tags (<h1>, <h2>, etc.) to better understand the structure and relevance of a page’s content. Therefore, HTML heading tags should be ordered on the page by their importance (h1 is considered the highest, h6 is the lowest). In the absence of sectioning content tags, the presence of a heading tag will still be interpreted as the beginning of a new content section.
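A simplified page outline (headings invented for illustration) showing the hierarchy search engines expect:

<!-- One h1 per page, followed by h2 sections and h3 subsections -->
<h1>The Ultimate Guide to Running Shoes</h1>
<h2>Choosing the Right Fit</h2>
<h3>Measuring Your Feet at Home</h3>
<h2>Caring for Your Shoes</h2>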

Meta robots attribute

The meta robots attribute is a piece of code used to instruct search engines on how to interact with a web page. Similar to a robots.txt file that informs search engines on how to crawl a web page, the meta robots attribute provides parameters to search engines on whether they should crawl or index a page’s content.

The “only if” meta tags

Meta keywords

Only necessary if you want to provide your competitors with a list of the keywords you are targeting. In the earlier days of SEO, the meta keyword tag was an element used to describe the keywords that the web page was focused on. Until 2002, the meta keywords tag was used by some search engines in calculating keyword relevance. It was abandoned because it was too difficult for many website owners to identify appropriate keywords to describe their content, and because unscrupulous marketers stuffed the tag with unrelated keywords in an attempt to attract more organic search traffic. All modern search engines ignore the meta keywords tag.

Social meta tags (open graph and Twitter cards)

Social meta tags are used when you want to control how the content of a page shows up when it is shared on social media sites. Open graph tags are a set of meta tags that can be added to any page of a website, and help define the content of the page, such as the title, description and image via social media.

Such information is expressed via two protocols: Open Graph (for Facebook, Google+ and Pinterest) and Twitter Cards (for…you can easily guess), and is used by the respective social media platforms to present the snippet of the pages that users share. Through social meta tags you can, for instance, use a title, description, or image specifically targeted at social media audiences in order to boost CTR from this channel.
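A minimal sketch of both protocols in a page’s <head> (titles, descriptions, URLs, and image paths are placeholders):

<!-- Open Graph (Facebook, Google+, Pinterest) -->
<meta property="og:title" content="The Ultimate Guide to Meta Tags">
<meta property="og:description" content="Why meta tags matter and how to optimize them for impact.">
<meta property="og:image" content="https://example.com/images/meta-tags-guide.png">
<meta property="og:url" content="https://example.com/blog/meta-tags-guide">

<!-- Twitter Cards -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="The Ultimate Guide to Meta Tags">
<meta name="twitter:description" content="Why meta tags matter and how to optimize them for impact.">
<meta name="twitter:image" content="https://example.com/images/meta-tags-guide.png">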

Hreflang attribute (commonly referred to as Hreflang tag)

Only if…you have a global website with multiple countries and languages being featured. Google recommends using hreflang tags to specify language and regional variations of your pages (regardless of where they are hosted: subfolders, subdomains or separate domains).

The objective of having Hreflang tags on your site is to provide Google with the most accurate information on localized pages, so that the search engine can serve the relevant language version in search results. There are two ways you can implement the Hreflang tags: directly in the HTML document or in your sitemap.
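As a sketch of the HTML approach (domains and locales are examples), each language version references itself and every alternate in its <head>:

<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/">
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/">
<link rel="alternate" hreflang="fr-fr" href="https://example.com/fr-fr/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">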

As you’ll see, meta tags come in many forms and some are more critical than others. But they truly are easy wins that provide great ROI, simply because they require few resources and still have a high impact. We hope you’ll use this ultimate guide to meta tags as the foundation of your SEO strategy for continued success.

Johann Godey is SEO director at Vistaprint.


from SEO – Search Engine Watch https://ift.tt/2yImsrV
via IFTTT

Monday 25 June 2018

Google Adds New Hotel Carousel Features to Search Results

Google is testing a new carousel style feature to local packs for resorts and hotels.  In addition to the usual information, it has added an additional carousel before the listings. Dr. Pete Meyers spotted the test.  Here is what it looks like: As you can see in the screenshot, there are new buttons in a […]

The post Google Adds New Hotel Carousel Features to Search Results appeared first on The SEM Post.



from The SEM Post https://ift.tt/2tBnC2Z
https://ift.tt/eA8V8J via IFTTT

Friday 22 June 2018

Laying the foundations of good SEO: the most important tasks (part 1)

Nobody ever said SEO was easy. It not only requires a myriad of different methods that evolve over time and follow no particular pattern, but is also impacted by ever-changing search engine policies.

Yet SEO is actually quite methodical. While you will need to mix and combine multiple on-page, off-page, local and other factors to come up with an effective SEO strategy, you can’t just start anywhere. You must prioritize tasks — from basic to advanced SEO — to succeed.

If you do not begin by laying a foundation, you will end up spending a lot of time without achieving the results you need to support your bottom line.

Set up and check SEO tools

SEO deals with data, so your first priority should be to make sure your tools to collect and analyze that data are working properly. The most important are:

  • Google Search Console. You will not be able to track a site’s performance in Google search without this. It is also useful for keyword analysis, implementing and fixing technical SEO, and analyzing UX factors, for example
  • Bing Webmaster Tools. While not as popular as Google, around one quarter of all searches in the US are performed using Bing, and it does have some useful features that enable users to analyze keywords, inbound links, traffic and more
  • Google Analytics. Make sure that your Google Analytics account is properly connected to Google Search Console, then set up specific reports and goals to track your website’s performance stats (e.g. traffic, top-performing pages, page views, bounce rate, CTR)
  • Yoast SEO for WordPress. Since WordPress is one of the most popular CMS systems on the Web, chances are you will be using the Yoast SEO plugin. Intuitive and user-friendly, it helps with titles, meta descriptions, URLs, keywords, and content quality. More technical elements such as sitemaps and robots.txt are also covered.

Keyword research

Keyword research is the foundation of all SEO activity. Once you have ensured that your SEO tools do their jobs, figure out which keywords you need to optimize for and which errors you need to fix to avoid penalties. There are three key areas to keep in mind:

  • Over-optimization. Keyword stuffing will quickly put you on the wrong side of Google, so you should ensure that keywords are placed naturally (you will notice if over-optimization is an issue). On average, you want to have up to five ‘required’ keywords and keyword phrases per page.
  • Long-tail keywords. It’s important not to use one keyword repeatedly, so to optimize for user intent placing long-tail keywords in your content is a must. Use Google Suggest, Google Keyword Planner and Keyword Tool to research the long-tail keywords your customers are searching for.
  • Synonyms and LSIs. Another way to show Google that you cater for your audience is to include multiple variations of keyword synonyms and LSI (Latent Semantic Indexing) phrases in your content. As a rule, these are low-competition keywords and you can rank for them pretty easily. Carry out some research using Quora, Reddit and other forums to figure out which keywords your customers use in searches. Tools such as KWFinder, LSIGraph and Answer The Public may also help.

On-site optimization

To improve your site’s rankings in search engines, you must provide clear signals that your pages are better than those of your competitors. In other words, you need to excel at on-page SEO. Here are some key areas to focus on:

  • URLs. According to Brian Dean’s search engine rankings research, shorter URLs featuring one keyword rank better than longer URLs. Since Google prefers this format, it naturally makes sense to shorten them and place your target keyword in the URL to make it more descriptive.
  • Tags and descriptions. Titles, subtitles, alt tags and meta descriptions are important on-page SEO factors. Ensure that:
    • They all feature your targeted keyword
    • The title does not exceed 70 characters
    • h1, h2, and h3 tags are scannable (i.e. allow users to get a post’s meaning without reading it)
    • The alt tag allows users to figure out the image’s meaning if it is not displayed on the page
    • Meta descriptions are descriptive and feature LSIs for user intent.
  • External links. Links to trusted, authoritative websites are indicators that a piece of content is well-researched and well-referenced. Furthermore, they provide additional value to users. Use between five and eight external links in your content pieces.
  • Internal links. You should link your pages together to create crawling paths for Google bots and conversion funnels for your users. Place between two and five internal links per content piece.
  • Website structure, navigation, and UX factors. According to the three-click rule, users should be able to find any information on a website within three mouse clicks. Whether or not you follow this rule to the letter, it comes down to the fact that any website must be easy to navigate and use, and its structure simple and cohesive.

Conclusions

In this article the author has shared his perspectives on the most important SEO tasks with regard to SEO tools, keyword research, and on-page optimization factors.

These three areas are the foundation of any SEO campaign as they will allow you to efficiently collect and analyze data, optimize the keywords your customers search for (and thus drive targeted traffic), and enhance your website by optimizing URLs, tags, descriptions, structure, navigation and UX.

Other areas to keep in mind are technical SEO (specifically, the factors related to mobile-friendliness and loading speed), content, and off-page optimization. These will be discussed in the next article.



from SEO – Search Engine Watch https://ift.tt/2tuiAVB
via IFTTT

The Goal-Based Approach to Domain Selection - Whiteboard Friday

This summary is not available. Please click here to view the post.

Thursday 21 June 2018

Google Adds Location & Location Icon to Search Results

Google is testing a new icon in the search results to show the specific location of a search result, in addition to showing a location icon, similar to the map icon used on Google Maps.  It seems to be used primarily for news results in the regular organic search results. Here is what it looks […]

The post Google Adds Location & Location Icon to Search Results appeared first on The SEM Post.



from The SEM Post https://ift.tt/2JSwP1A
https://ift.tt/eA8V8J via IFTTT

Tuesday 19 June 2018

The take-over of augmented reality and the future possibilities

The vast potential to create, interact and educate with augmented reality (AR) is quickly gaining popularity. In the past, AR gained media attention for simply existing, but recently, companies have been applying the strategy to their marketing campaigns and reaping the rewards.

As we move further into the digital world, the benefits of implementing AR are staggering. For instance, AR has an average dwell time of 75 seconds – affording companies an unprecedented chance to appeal to their consumers. Flow Digital, a Newcastle-based digital marketing company, are sharing why 2018 marks the takeover of the media channel, and what it means for the future.

The statistics driving AR

In the past two years, the AR industry has experienced unprecedented growth. We can largely attribute the early success to the pioneer of AR, Pokémon Go, which became the most downloaded app in 2016 and has over 750 million downloads to date.

By 2020, the number of AR users is expected to surpass one billion and by 2021, the market for AR, and VR, is estimated to reach $215bn. The benefits of implementing AR are reason enough in these statistics – particularly for e-commerce, marketing and automotive brands which are the industries that experience the largest growth with the communication tool.

E-commerce uses

Ikea Place demonstrated the potential for the natural partnership of AR and retail. Since launching in 2017 – using Apple’s ARKit tech – the Ikea Place has been downloaded two million times. The potential for allowing users to actually see what items look like in their home will significantly boost revenue.

Similarly, AR provides companies with the opportunity to target impulse shoppers. If you can showcase how their life can vastly improve with this cactus plant on their new coffee table (no doubt that it will), you can catch them before they have even realised the need for such a product.

Estée Lauder recently rolled out AR into their marketing campaign – adopting the ‘try before you buy’ method. Users could ‘try’ various makeup products using their Facebook Messenger chatbot, with the company experiencing a rise in social media engagement.

However, it’s important to note the limitations in an AR world for both e-commerce and marketing. While we can certainly appeal to more consumers and provide the ‘wow factor’ so many prospects look for, we must take into account the lack of adverts. Marketing ads and header bidding do not have a place in augmented reality, so companies will have to get creative.

Take the example of Pepsi, turning the average bus shelter into a fake window. Relying on a camera to capture people and vehicles in the street, they showcased images of crashing comets, a rogue cheetah and a man flying away while holding onto balloons. While it may not have been your ‘typical’ advertisement for the drink, the ad certainly proved engaging.

Future of video content

Video content has certainly seen a boom – particularly because of an increasing number of Facebook, Twitter, Snapchat and Instagram users. Today, there are more than 22m daily views on Facebook, Snapchat and YouTube, with the number continuing to grow.

360-degree views are universally appealing, enabling users to go behind-the-scenes with the brand. If there’s anything we can guarantee, it’s that consumers love a nosy. Typically, videos afford companies 2.5 seconds to catch the attention of their prospects. However, AR provides brands with an average of 75 seconds dwell time, offering a staggering amount of time to share relevant content.


Implementing AR

We have touched on implementing AR above, and the reasons for doing so are almost endless. Essentially, you are bringing your products and services to life. A static digital advert becomes an interactive catalogue or brochure. In doing so, you are improving the experience of communicating with your brand, leaving more information at their disposal and helping them to make informed decisions. In return, you should see a substantial lift in consumers trusting your company, word-of-mouth sales and potentially ROI.

Social media will only benefit from AR. It’s likely that consumers will share their interactions with your brand on their social platforms – particularly with a specific hashtag – and thus build your following. There is also the opportunity for partnerships with social media platforms. For example, Fanta partnered with Snapchat for their Halloween campaign, offering users a unique Snapchat filter if they scan their limited edition cans.

In simple terms, using AR helps to build transparency. All successful relationships start with trust, and you can even take your customer behind-the-scenes with this communication channel. Share how the product was made, guide them through the delivery process and we can guarantee you will see an increase in interaction.

Partnership of AR and PR

There is a natural partnership between AR and the PR industry, and we could well see increased use of the channel for events. Something as small as including a QR code on your event invite – producing a unique illustration or even an animated brand logo – creates a layer of interest. Similarly, product launches can benefit. If you can take your audience into the augmented world, highlighting the key features of your product, you will likely see results. Perhaps, rather than share the product in detail, you could leave a trail of breadcrumbs: each time a QR code is scanned, more is revealed about the product.

AR transforming other industries

E-commerce and marketing are industries experiencing a boom due to AR, but the medical sector is also benefiting from the technology. Go Surgery, from the team behind Touch Surgery, offers step-by-step guides to performing surgical procedures, as if in real time, with the procedure holographically projected onto a screen. Likewise, Microsoft’s HoloLens AR glasses have been used to aid reconstructive surgery.

One industry in particular that should reap the benefits of the rise of video content is hospitality. For example, guests can explore the rooms before booking, and companies can even go so far as to allow guests to review the room when using the app. Likewise, restaurants can share the experience of dining with them through AR.

Companies such as WayRay are offering Navion, a system that directs you while you drive. Essentially, it’s like Google Maps on the road, but you don’t have to keep looking at the sat nav: Navion shows exactly where you want to go, continually adjusting to whatever is in front of the car.

Ultimately, AR spells the dawn of a different age. The companies that embrace and adapt to it will certainly see the rewards, especially if they come to be regarded as pioneers of the channel.



from SEO – Search Engine Watch https://ift.tt/2M4DySC
via IFTTT

An 8-Point Checklist for Debugging Strange Technical SEO Problems

Posted by Dom-Woodman

Occasionally, a problem will land on your desk that's a little out of the ordinary. Something where you don't have an easy answer. You go to your brain and your brain returns nothing.

These problems can’t be solved with a little bit of keyword research and basic technical configuration. These are the types of technical SEO problems where the rabbit hole goes deep.

The very nature of these situations defies a checklist, but it's useful to have one for the same reason we have them on planes: even the best of us can and will forget things, and a checklist will provide you with places to dig.


Fancy some examples of strange SEO problems? Here are four to mull over while you read. We’ll answer them at the end.

1. Why wasn’t Google showing 5-star markup on product pages?

  • The pages had server-rendered product markup and they also had Feefo product markup, including ratings being attached client-side.
  • The Feefo ratings snippet was successfully rendered in Fetch & Render, plus the mobile-friendly tool.
  • When you put the rendered DOM into the structured data testing tool, both pieces of structured data appeared without errors.

2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?

  • The review pages of client & competitors all had rating rich snippets on Google.
  • All the competitors had rating rich snippets on Bing; however, the client did not.
  • The review pages had correctly validating ratings schema on Google’s structured data testing tool, but did not on Bing.

3. Why were pages getting indexed with a no-index tag?

  • Pages with a server-side-rendered no-index tag in the head were being indexed by Google across a large template for a client.

4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?

  • A website was randomly returning 302 responses.
  • This never happened in the browser, only in crawlers.
  • User agent made no difference; location or cookies also made no difference.

Finally, a quick note. It’s entirely possible that some of this checklist won’t apply to every scenario. That’s totally fine. It’s meant to be a process for everything you could check, not everything you should check.

The pre-checklist check

Does it actually matter?

Does this problem only affect a tiny amount of traffic? Is it only on a handful of pages and you already have a big list of other actions that will help the website? You probably need to just drop it.

I know, I hate it too. I also want to be right and dig these things out. But in six months' time, when you've solved twenty complex SEO rabbit holes and your website has stayed flat because you didn't re-write the title tags, you're still going to get fired.

But hopefully that's not the case, in which case, onwards!

Where are you seeing the problem?

We don’t want to waste a lot of time. Have you heard this wonderful saying: “If you hear hooves, it’s probably not a zebra”?

The process we’re about to go through is fairly involved and it’s entirely up to your discretion if you want to go ahead. Just make sure you’re not overlooking something obvious that would solve your problem. Here are some common problems I’ve come across that were mostly horses.

  1. You’re underperforming from where you should be.
    1. When a site is under-performing, people love looking for excuses. Weird Google nonsense can be quite a handy thing to blame. In reality, it’s typically some combination of a poor site, higher competition, and a failing brand. Horse.
  2. You’ve suffered a sudden traffic drop.
    1. Something has certainly happened, but this is probably not the checklist for you. There are plenty of common-sense checklists for this. I’ve written about diagnosing traffic drops recently — check that out first.
  3. The wrong page is ranking for the wrong query.
    1. In my experience (which should probably preface this entire post), this is usually a basic problem where a site has poor targeting or a lot of cannibalization. Probably a horse.

Factors that make it more likely you’ve got a more complex problem, and that require you to don your debugging shoes:

  • A website that has a lot of client-side JavaScript.
  • Bigger, older websites with more legacy.
  • Your problem is related to a new Google property or feature where there is less community knowledge.

1. Start by picking some example pages.

Pick a couple of example pages to work with — ones that exhibit whatever problem you're seeing. No, this won't be representative, but we'll come back to that in a bit.

Of course, if it only affects a tiny number of pages then it might actually be representative, in which case we're good. It definitely matters, right? You didn't just skip the step above? OK, cool, let's move on.

2. Can Google crawl the page once?

First we’re checking whether Googlebot has access to the page, which we’ll define as a 200 status code.

We’ll check in four different ways to expose any common issues:

  1. Robots.txt: Open up Search Console and check in the robots.txt validator.
  2. User agent: Open Dev Tools and verify that you can open the URL with both Googlebot and Googlebot Mobile.
    1. To get the user agent switcher, open Dev Tools.
    2. Check the console drawer is open (the toggle is the Escape key)
    3. Hit the … and open "Network conditions"
    4. Here, select your user agent!

  3. IP Address: Verify that you can access the page with the mobile testing tool. (This will come from one of the IPs used by Google; any checks you do from your computer won't.)
  4. Country: The mobile testing tool will visit from US IPs, from what I've seen, so we get two birds with one stone. But Googlebot will occasionally crawl from non-American IPs, so it’s also worth using a VPN to double-check whether you can access the site from any other relevant countries.
    1. I’ve used HideMyAss for this before, but whatever VPN you have will work fine.

We should now have an idea whether or not Googlebot is struggling to fetch the page once.
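If you want to script the first two of those checks, a minimal sketch along the following lines can help. It assumes Python with the requests library installed; the URL is a placeholder and the user-agent string is only a spoof, so this covers the robots.txt and user-agent checks, not the IP or country ones.

        import requests
        from urllib import robotparser

        URL = "https://www.example.com/some-page/"  # hypothetical page to test
        GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                        "+http://www.google.com/bot.html)")

        # 1. Is the URL blocked by robots.txt for Googlebot?
        rp = robotparser.RobotFileParser("https://www.example.com/robots.txt")
        rp.read()
        print("Allowed by robots.txt:", rp.can_fetch("Googlebot", URL))

        # 2. What status code do we get when we identify as Googlebot?
        resp = requests.get(URL, headers={"User-Agent": GOOGLEBOT_UA}, timeout=30)
        print("Status code as Googlebot:", resp.status_code)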

Have we found any problems yet?

If we can re-create a failed crawl with one of the simple checks above, then it’s likely that Googlebot is failing consistently to fetch our page, and it’s typically for one of those basic reasons.

But it might not be. Many problems are inconsistent because of the nature of technology. ;)

3. Are we telling Google two different things?

Next up: Google can find the page, but are we confusing it by telling it two different things?

This is most commonly seen, in my experience, because someone has messed up the indexing directives.

By "indexing directives," I’m referring to any tag that defines the correct index status or page in the index which should rank. Here’s a non-exhaustive list:

  • No-index
  • Canonical
  • Mobile alternate tags
  • AMP alternate tags

An example of providing mixed messages would be:

  • No-indexing page A
  • Page B canonicals to page A

Or:

  • Page A has a canonical in a header to A with a parameter
  • Page A has a canonical in the body to A without a parameter

If we’re providing mixed messages, then it’s not clear how Google will respond. It’s a great way to start seeing strange results.

Good places to check for the indexing directives listed above are as follows (a quick extraction sketch comes after this list):

  • Sitemap
    • Example: Mobile alternate tags can sit in a sitemap
  • HTTP headers
    • Example: Canonical and meta robots can be set in headers.
  • HTML head
This is where you’re probably looking; you’ll need this one for comparison.
  • JavaScript-rendered vs hard-coded directives
    • You might be setting one thing in the page source and then rendering another with JavaScript, i.e. you would see something different in the HTML source from the rendered DOM.
  • Google Search Console settings
    • There are Search Console settings for ignoring parameters and country localization that can clash with indexing tags on the page.
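Here’s the rough extraction sketch promised above. It assumes Python with requests and beautifulsoup4, pulls the canonical and robots signals from the HTTP headers and the raw HTML, and deliberately ignores sitemaps and anything JavaScript might change later.

        import requests
        from bs4 import BeautifulSoup

        def indexing_signals(url):
            """Collect canonical/robots signals from the HTTP headers and the raw HTML."""
            resp = requests.get(url, timeout=30)
            signals = {
                "x-robots-tag header": resp.headers.get("X-Robots-Tag"),
                "link header": resp.headers.get("Link"),  # may carry rel="canonical"
            }
            soup = BeautifulSoup(resp.text, "html.parser")
            canonical = soup.find("link", rel="canonical")
            robots = soup.find("meta", attrs={"name": "robots"})
            signals["html canonical"] = canonical.get("href") if canonical else None
            signals["html meta robots"] = robots.get("content") if robots else None
            return signals

        print(indexing_signals("https://www.example.com/some-page/"))  # hypothetical URL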

A quick aside on rendered DOM

This page has a lot of mentions of the rendered DOM on it (18, if you’re curious). Since we’ve just had our first, here’s a quick recap about what that is.

When you load a webpage, the first request is the HTML. This is what you see in the HTML source (right-click on a webpage and click View Source).

This is before JavaScript has done anything to the page. This didn’t use to be such a big deal, but now so many websites rely heavily on JavaScript that most people quite reasonably won’t trust the initial HTML.

Rendered DOM is the technical term for a page, when all the JavaScript has been rendered and all the page alterations made. You can see this in Dev Tools.

In Chrome you can get that by right clicking and hitting inspect element (or Ctrl + Shift + I). The Elements tab will show the DOM as it’s being rendered. When it stops flickering and changing, then you’ve got the rendered DOM!
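If you’d rather capture the rendered DOM programmatically than copy it out of Dev Tools, a headless browser can do it. A minimal sketch, assuming Python with Playwright installed (and its Chromium download); bear in mind that Googlebot’s renderer won’t behave identically to your local Chromium.

        from playwright.sync_api import sync_playwright

        def rendered_dom(url):
            """Return the post-JavaScript HTML of a page using headless Chromium."""
            with sync_playwright() as p:
                browser = p.chromium.launch()
                page = browser.new_page()
                page.goto(url, wait_until="networkidle")  # wait for JS/XHR to settle
                html = page.content()  # the rendered DOM, serialised back to HTML
                browser.close()
            return html

        with open("rendered.html", "w") as f:  # save it so you can diff it later
            f.write(rendered_dom("https://www.example.com/"))  # hypothetical URL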

4. Can Google crawl the page consistently?

To see what Google is seeing, we're going to need to get log files. At this point, we can check to see how it is accessing the page.

Aside: Working with logs is an entire post in and of itself. I’ve written a guide to log analysis with BigQuery; I’d also really recommend trying out Screaming Frog Log Analyzer, which does a great job of handling a lot of the complexity around logs.

When we’re looking at crawling, there are three useful checks we can do (a small log-parsing sketch for the first one follows this list):

  1. Status codes: Plot the status codes over time. Is Google seeing different status codes than you when you check URLs?
  2. Resources: Is Google downloading all the resources of the page?
    1. Is it downloading all your site-specific JavaScript and CSS files that it would need to generate the page?
  3. Page size follow-up: Take the max and min of all your pages and resources and diff them. If you see a difference, then Google might be failing to fully download all the resources or pages. (Hat tip to @ohgm, where I first heard this neat tip).
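Here’s the small log-parsing sketch mentioned above, covering the first check. It assumes Python (standard library only) and a combined-format access log at a placeholder path; adjust the regex to whatever your servers actually write, and remember that filtering on "Googlebot" in the line will also catch spoofers unless you verify IPs separately.

        import re
        from collections import Counter, defaultdict

        LOG_PATH = "access.log"  # hypothetical path to a combined-format access log
        # e.g. 66.249.66.1 - - [29/Jun/2018:10:15:32 +0000] "GET /page HTTP/1.1" 200 5123 "-" "...Googlebot/2.1..."
        LINE_RE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):[^\]]+\] "[^"]*" (\d{3}) ')

        status_by_day = defaultdict(Counter)
        with open(LOG_PATH) as handle:
            for line in handle:
                if "Googlebot" not in line:  # crude filter; verify IPs separately to exclude spoofers
                    continue
                match = LINE_RE.search(line)
                if match:
                    day, status = match.groups()
                    status_by_day[day][status] += 1

        for day in sorted(status_by_day):  # lexicographic sort; good enough for a quick look
            print(day, dict(status_by_day[day]))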

Have we found any problems yet?

If Google isn't getting 200s consistently in our log files, but we can access the page fine when we try, then there are clearly still some differences between Googlebot and ourselves. What might those differences be?

  1. It will crawl more than us
  2. It is obviously a bot, rather than a human pretending to be a bot
  3. It will crawl at different times of day

This means that:

  • If our website is doing clever bot blocking, it might be able to differentiate between us and Googlebot.
  • Because Googlebot will put more stress on our web servers, it might behave differently. When websites have a lot of bots or visitors visiting at once, they might take certain actions to help keep the website online. They might turn on more computers to power the website (this is called scaling), they might also attempt to rate-limit users who are requesting lots of pages, or serve reduced versions of pages.
  • Servers run tasks periodically; for example, a listings website might run a daily task at 01:00 to clean up all its old listings, which might affect server performance.

Working out what’s happening with these periodic effects is going to be fiddly; you’re probably going to need to talk to a back-end developer.

Depending on your skill level, you might not know exactly where to lead the discussion. A useful structure for a discussion is often to talk about how a request passes through your technology stack and then look at the edge cases we discussed above.

  • What happens to the servers under heavy load?
  • When do important scheduled tasks happen?

Two useful pieces of information to enter this conversation with:

  1. Depending on the regularity of the problem in the logs, it is often worth trying to re-create the problem by attempting to crawl the website with a crawler at the same speed/intensity that Google is using to see if you can find/cause the same issues. This won’t always be possible depending on the size of the site, but for some sites it will be. Being able to consistently re-create a problem is the best way to get it solved.
  2. If you can’t, however, then try to provide the exact periods of time where Googlebot was seeing the problems. This will give the developer the best chance of tying the issue to other logs to let them debug what was happening.

If Google can crawl the page consistently, then we move onto our next step.

5. Does Google see what I can see on a one-off basis?

We know Google is crawling the page correctly. The next step is to try and work out what Google is seeing on the page. If you’ve got a JavaScript-heavy website, you’ve probably banged your head against this problem before, but even if you don’t, this can still sometimes be an issue.

We follow the same pattern as before. First, we try to re-create it once. The following tools will let us do that:

  • Fetch & Render
    • Shows: Rendered DOM in an image, but only returns the page source HTML for you to read.
  • Mobile-friendly test
    • Shows: Rendered DOM and returns rendered DOM for you to read.
    • Not only does this show you rendered DOM, but it will also track any console errors.

Is there a difference between Fetch & Render, the mobile-friendly testing tool, and Googlebot? Not really, with the exception of timeouts (which is why we have our later steps!). Here’s the full analysis of the difference between them, if you’re interested.

Once we have the output from these, we compare them to what we ordinarily see in our browser. I’d recommend using a tool like Diff Checker to compare the two.
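If you’d rather diff locally, or across lots of pages, Python’s built-in difflib does the same job as Diff Checker. A minimal sketch; the two HTML strings are simply whatever you saved from your browser and from the testing tools.

        import difflib

        def show_diff(browser_html, tool_html):
            """Print a unified diff between two HTML snapshots of the same page."""
            diff = difflib.unified_diff(
                browser_html.splitlines(),
                tool_html.splitlines(),
                fromfile="browser",
                tofile="testing-tool",
                lineterm="",
            )
            for line in diff:
                print(line)

        # show_diff(open("browser.html").read(), open("mobile-friendly-tool.html").read())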

Have we found any problems yet?

If we encounter meaningful differences at this point, then in my experience it’s typically either from JavaScript or cookies.

Why?

We can isolate each of these by:

  • Loading the page with no cookies. This can be done simply by loading the page with a fresh incognito session and comparing the rendered DOM here against the rendered DOM in our ordinary browser.
  • Using the mobile testing tool to see the page with Chrome 41 and comparing against the rendered DOM we normally see with Inspect Element.

Yet again we can compare them using something like Diff Checker, which will allow us to spot any differences. You might want to use an HTML formatter to help line them up better.

We can also see the JavaScript errors thrown using the Mobile-Friendly Testing Tool, which may prove particularly useful if you’re confident in your JavaScript.

If, using this knowledge and these tools, we can recreate the bug, then we have something that can be replicated and it’s easier for us to hand off to a developer as a bug that will get fixed.

If we’re seeing everything is correct here, we move on to the next step.

6. What is Google actually seeing?

It’s possible that what Google is seeing is different from what we recreate using the tools in the previous step. Why? A couple of main reasons:

  • Overloaded servers can have all sorts of strange behaviors. For example, they might be returning 200 codes, but perhaps with a default page.
  • JavaScript is rendered separately from pages being crawled and Googlebot may spend less time rendering JavaScript than a testing tool.
  • There is often a lot of caching in the creation of web pages and this can cause issues.

We’ve gotten this far without talking about time! Pages don’t get crawled instantly, and crawled pages don’t get indexed instantly.

Quick sidebar: What is caching?

Caching is often a problem if you get to this stage. Unlike JS, it’s not talked about as much in our community, so it’s worth some more explanation in case you’re not familiar. Caching is storing something so it’s available more quickly next time.

When you request a webpage, a lot of calculations happen to generate that page. If you then refreshed the page when it was done, it would be incredibly wasteful to re-run all those same calculations. Instead, servers will often save the output and serve it to you again without re-running them. Saving the output is called caching.

Why do we need to know this? Well, we’re already well out into the weeds at this point and so it’s possible that a cache is misconfigured and the wrong information is being returned to users.

There aren’t many caching resources that are both beginner-friendly and go into some depth. However, I found this article on caching basics to be one of the friendlier ones. It covers some of the basic types of caching quite well.
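One quick, shallow check is to look at the cache-related response headers a URL returns and whether they differ between requests or user agents. A sketch assuming Python with requests; the header names beyond Cache-Control, Age, Expires and Vary are CDN-specific guesses, and the URL is a placeholder.

        import requests

        CACHE_HEADERS = ["Cache-Control", "Age", "Expires", "Vary", "X-Cache", "CF-Cache-Status"]

        def cache_snapshot(url, user_agent="Mozilla/5.0"):
            """Return the cache-related response headers for a URL."""
            resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
            return {header: resp.headers.get(header) for header in CACHE_HEADERS}

        url = "https://www.example.com/some-page/"  # hypothetical URL
        print("As a browser:", cache_snapshot(url))
        print("As Googlebot:", cache_snapshot(url, "Googlebot/2.1 (+http://www.google.com/bot.html)"))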

How can we see what Google is actually working with?

  • Google’s cache
    • Shows: Source code
    • While this won’t show you the rendered DOM, it is showing you the raw HTML Googlebot actually saw when visiting the page. You’ll need to check this with JS disabled; otherwise, on opening it, your browser will run all the JS on the cached version.
  • Site searches for specific content
    • Shows: A tiny snippet of rendered content.
    • By searching for a specific phrase on a page, e.g. inurl:example.com/url “only JS rendered text”, you can see if Google has managed to index a specific snippet of content. Of course, it only works for visible text and misses a lot of the content, but it's better than nothing!
    • Better yet, do the same thing with a rank tracker, to see if it changes over time.
  • Storing the actual rendered DOM
    • Shows: Rendered DOM
    • Alex from DeepCrawl has written about saving the rendered DOM from Googlebot. The TL;DR version: Google will render JS and post to endpoints, so we can get it to submit the JS-rendered version of a page that it sees. We can then save that, examine it, and see what went wrong.

Have we found any problems yet?

Again, once we’ve found the problem, it’s time to go and talk to a developer. The advice for this conversation is identical to the last one — everything I said there still applies.

The other knowledge you should go into this conversation armed with: how Google works and where it can struggle. While your developer will know the technical ins and outs of your website and how it’s built, they might not know much about how Google works. Together, this can help you reach the answer more quickly.

The obvious sources for this are resources and presentations given by Google themselves; of the various resources that have come out, a couple stand out as particularly useful for giving insight into first principles.

But there is often a difference between statements Google will make and what the SEO community sees in practice. All the SEO experiments people tirelessly perform in our industry can also help shed some insight; there are far too many to list here.

7. Could Google be aggregating your website across others?

If we’ve reached this point, we’re pretty happy that our website is running smoothly. But not all problems can be solved just on your website; sometimes you’ve got to look to the wider landscape and the SERPs around it.

Most commonly, what I’m looking for here is:

  • Similar/duplicate content to the pages that have the problem.
    • This could be intentional duplicate content (e.g. syndicated content) or unintentional (competitors scraping your content, or accidentally indexed sites).

Either way, they’re nearly always found by doing exact searches in Google, i.e. taking a relatively specific piece of content from your page and searching for it in quotes.
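If you want to semi-automate picking that specific piece of content, the sketch below (Python with requests and beautifulsoup4; the URL is a placeholder) grabs a long, distinctive sentence from a page and builds the quoted Google query for you to run by hand; scraping the results themselves is deliberately left out.

        import requests
        from bs4 import BeautifulSoup
        from urllib.parse import quote_plus

        def exact_match_query(url):
            """Build a quoted-phrase Google query from a long sentence on the page."""
            html = requests.get(url, timeout=30).text
            text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
            sentences = [s.strip() for s in text.split(".") if len(s.strip()) > 60]
            phrase = (max(sentences, key=len) if sentences else text)[:150]
            return "https://www.google.com/search?q=" + quote_plus('"' + phrase + '"')

        print(exact_match_query("https://www.example.com/some-page/"))  # hypothetical URL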

Have you found any problems yet?

If you find a number of other exact copies, then it’s possible they might be causing issues.

The best description I’ve come up with for “have you found a problem here?” is: do you think Google is aggregating together similar pages and only showing one? And if it is, is it picking the wrong page?

This doesn’t just have to be on traditional Google search. You might find a version of it on Google Jobs, Google News, etc.

To give an example, if you are a reseller, you might find content isn’t ranking because there's another, more authoritative reseller who consistently posts the same listings first.

Sometimes you’ll see this consistently and straightaway, while other times the aggregation might be changing over time. In that case, you’ll need a rank tracker for whatever Google property you’re working on to see it.

Jon Earnshaw from Pi Datametrics gave an excellent talk on the latter (around suspicious SERP flux) which is well worth watching.

Once you’ve found the problem, you’ll probably need to experiment to find out how to get around it, but the easiest factors to play with are usually:

  • De-duplication of content
  • Speed of discovery (you can often improve this by putting up a 24-hour RSS feed of all the new content that appears)
  • Lowering syndication

8. A roundup of some other likely suspects

If you’ve gotten this far, then we’re sure that:

  • Google can consistently crawl our pages as intended.
  • We’re sending Google consistent signals about the status of our page.
  • Google is consistently rendering our pages as we expect.
  • Google is picking the correct page out of any duplicates that might exist on the web.

And your problem still isn’t solved?

And it is important?

Well, shoot.

Feel free to hire us…?

As much as I’d love for this article to list every SEO problem ever, that’s not really practical. To finish off, let’s go through two more common gotchas and principles that didn’t really fit in elsewhere, and then the answers to the four problems we listed at the beginning.

Invalid/poorly constructed HTML

You and Googlebot might be seeing the same HTML, but it might be invalid or wrong. Googlebot (and any crawler, for that matter) has to provide workarounds when the HTML specification isn't followed, and those can sometimes cause strange behavior.

The easiest way to spot it is either by eye-balling the rendered DOM tools or using an HTML validator.

The W3C validator is very useful, but will throw up a lot of errors/warnings you won’t care about. The closest I can give to a one-line summary of which ones are useful is:

  • Look for errors
  • Ignore anything to do with attributes (won’t always apply, but is often true).

The classic example of this is breaking the head.

An iframe isn't allowed in the head code, so Chrome will end the head and start the body. Unfortunately, it takes the title and canonical with it, because they fall after it — so Google can't read them. The head code should have ended in a different place.

Oliver Mason wrote a good post that explains an even more subtle version of this in breaking the head quietly.
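You can see this behaviour for yourself with a spec-compliant parser. The sketch below assumes Python with beautifulsoup4 and html5lib installed (html5lib follows the same HTML5 parsing rules browsers do) and shows a meta robots tag and canonical being pushed out of the head by a preceding iframe; the markup and URLs are illustrative.

        from bs4 import BeautifulSoup  # pip install beautifulsoup4 html5lib

        BROKEN = """<html><head>
        <title>Example</title>
        <iframe src="https://ads.example.com/frame"></iframe>
        <meta name="robots" content="noindex">
        <link rel="canonical" href="https://www.example.com/page">
        </head><body><p>Hello</p></body></html>"""

        soup = BeautifulSoup(BROKEN, "html5lib")  # parses roughly the way a browser would
        print("head:", [tag.name for tag in soup.head.find_all(True)])  # just the title...
        print("body:", [tag.name for tag in soup.body.find_all(True)])  # ...the iframe, meta robots and canonical end up here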

When in doubt, diff

Never underestimate the power of trying to compare two things line by line with a diff from something like Diff Checker. It won’t apply to everything, but when it does it’s powerful.

For example, if Google has suddenly stopped showing your featured markup, try to diff your page against a historical version either in your QA environment or from the Wayback Machine.


Answers to our original 4 questions

Time to answer those questions. These are all problems we’ve had clients bring to us at Distilled.

1. Why wasn’t Google showing 5-star markup on product pages?

Google was seeing both the server-rendered markup and the client-side-rendered markup; however, the server-rendered side was taking precedence.

Removing the server-rendered markup meant the 5-star markup began appearing.

2. Why wouldn’t Bing display 5-star markup on review pages, when Google would?

The problem came from the references to schema.org.

        <div itemscope itemtype="https://schema.org/Movie">
          <h1 itemprop="name">Avatar</h1>
          <span>Director: <span itemprop="director">James Cameron</span> (born August 16, 1954)</span>
          <span itemprop="genre">Science fiction</span>
          <a href="../movies/avatar-theatrical-trailer.html" itemprop="trailer">Trailer</a>
        </div>

We diffed our markup against our competitors’ and the only difference was that we’d referenced the HTTPS version of schema.org in our itemtype, which caused Bing not to support it.

C’mon, Bing.
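A quick way to audit for this is to list every itemtype a page uses and flag which scheme it references. A sketch assuming Python with requests and beautifulsoup4; the URL is a placeholder.

        import requests
        from bs4 import BeautifulSoup

        def itemtype_schemes(url):
            """List every itemtype URL used on a page so you can spot http vs https references."""
            soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
            return sorted({tag["itemtype"] for tag in soup.find_all(attrs={"itemtype": True})})

        for itemtype in itemtype_schemes("https://www.example.com/reviews/"):  # hypothetical URL
            scheme = "https" if itemtype.startswith("https://") else "http/other"
            print(scheme, itemtype)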

3. Why were pages getting indexed with a no-index tag?

The answer for this was in this post. This was a case of breaking the head.

The developers had installed some ad-tech in the head and inserted a non-standard tag, i.e. not one of:

  • <title>
  • <style>
  • <base>
  • <link>
  • <meta>
  • <script>
  • <noscript>

This caused the head to end prematurely and the no-index tag was left in the body where it wasn’t read.

4. Why did any page on a website return a 302 about 20–50% of the time, but only for crawlers?

This took some time to figure out. The client had an old legacy website with two servers, one for the blog and one for the rest of the site. The issue started occurring shortly after a migration of the blog from a subdomain (blog.client.com) to a subdirectory (client.com/blog/…).

At surface level everything was fine; if a user requested any individual page, it all looked good. A crawl of all the blog URLs to check they’d redirected was fine.

But we noticed a sharp increase of errors being flagged in Search Console, and during a routine site-wide crawl, many pages that were fine when checked manually were causing redirect loops.

We checked using Fetch and Render, but once again, the pages were fine.

Eventually, it turned out that when a non-blog page was requested very quickly after a blog page (which, realistically, only a crawler is fast enough to achieve), the request for the non-blog page would be sent to the blog server.

These would then be caught by a long-forgotten redirect rule, which 302-redirected deleted blog posts (or other duff URLs) to the root. This, in turn, was caught by a blanket HTTP to HTTPS 301 redirect rule, which would be requested from the blog server again, perpetuating the loop.

For example, requesting https://www.client.com/blog/ followed quickly enough by https://www.client.com/category/ would result in:

  • 302 to http://www.client.com - This was the rule that redirected deleted blog posts to the root
  • 301 to https://www.client.com - This was the blanket HTTPS redirect
  • 302 to http://www.client.com - The blog server doesn’t know about the HTTPS non-blog homepage and it redirects back to the HTTP version. Rinse and repeat.

This caused the periodic 302 errors and it meant we could work with their devs to fix the problem.
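For what it’s worth, the way to catch something like this yourself is to fire the two requests in quick succession and walk the redirect chain hop by hop, rather than letting your client follow it silently. A rough sketch assuming Python with requests, reusing the placeholder client.com URLs from the example above.

        import requests
        from urllib.parse import urljoin

        def redirect_chain(url, max_hops=10):
            """Follow redirects by hand so every hop and its status code is visible."""
            hops = []
            for _ in range(max_hops):
                resp = requests.get(url, allow_redirects=False, timeout=30)
                hops.append((resp.status_code, url))
                if resp.status_code not in (301, 302, 303, 307, 308):
                    break
                url = urljoin(url, resp.headers["Location"])
            return hops

        # Hit a blog URL, then immediately a non-blog URL, as only a crawler realistically would.
        requests.get("https://www.client.com/blog/", timeout=30)
        for status, url in redirect_chain("https://www.client.com/category/"):
            print(status, url)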

What are the best brainteasers you've had?

Let’s hear them, people. What problems have you run into? Let us know in the comments.

Also credit to @RobinLord8, @TomAnthonySEO, @THCapper, @samnemzer, and @sergeystefoglo_ for help with this piece.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!



from The Moz Blog https://ift.tt/2lfAXtQ
via IFTTT

Social Media Today