Summary

This guide explains how to run a practical technical SEO audit by prioritizing fixes that affect crawling, indexation, security, site structure, and Core Web Vitals. It helps small-site owners focus on high-impact issues first instead of wasting time on low-priority audit-tool warnings.


 

What is a technical SEO audit?

A technical SEO audit is the process of analyzing the technical aspects of your website to make sure search engines like Google can crawl, index, and rank it and all of its pages.

When you perform a technical SEO audit, you check whether your website is optimized for search engines.

Most technical SEO audits produce a list of 200 issues and no clear direction. You fix the easiest things, ignore the rest, and wonder why rankings didn't move.

The problem isn't the audit. It's the order.

This guide is built around a single principle: severity determines sequence. You'll learn what to check, what to fix first, and (just as importantly) what to safely ignore.

Scope note: This guide is calibrated for sites under 500 pages, managed by one or two people, using free or freemium tools. If you run a large-scale or JavaScript-heavy site, some recommendations will need to be adapted.

 

The Severity Filter: Your Triage Model

Before running a single tool, understand how to read what it tells you. Not all audit flags are equal, and treating them as equal is the fastest way to waste time on something Google doesn't care about.

Use this three-tier model throughout every section that follows:

Tier   | Category             | Action
Tier 1 | Ranking blockers     | Fix immediately: these actively suppress visibility
Tier 2 | Crawl inefficiencies | Fix this sprint: these limit reach without hard blocking
Tier 3 | Low-priority flags   | Schedule or ignore: these rarely affect rankings for small sites

If a tool flags something and you can't place it in Tier 1 or Tier 2, it belongs in Tier 3 until proven otherwise.

 

Pre-Audit Setup: Tools and Baseline

Let's talk about the tools you need for a technical SEO audit.

Free stack for this audit:

  • Google Search Console (GSC): indexation, sitemaps, and Core Web Vitals reports
  • Screaming Frog SEO Spider: crawls up to 500 URLs for free
  • PageSpeed Insights: Core Web Vitals field and lab data
  • Why No Padlock: mixed content checks
  • redirect-checker.org: redirect chains and status codes
  • Google Rich Results Test: structured data validation

If your site exceeds 500 pages, prioritize your highest-traffic and highest-converting URLs for the Screaming Frog crawl. Don't try to audit everything at once.

Recommended articles: Google Search Console: The Ultimate Guide for 2026

 

Domain, DNS, and Security Health

This is the audit category every competitor skips. It belongs in Tier 1 — not because these issues are common, but because when they exist, they block everything downstream. A broken SSL certificate or a misconfigured redirect at the domain level can quietly suppress your entire site before a single piece of content is even evaluated.

Most site owners jump straight to keywords and content. This section is about the foundation those things sit on.

Check these in order:

 

1. Is your site running on HTTPS?

When you visit a website, your browser and the server are constantly exchanging information. HTTPS is the secure version of that connection: it encrypts the data so it can't be intercepted. HTTP (without the S) is the older, unencrypted version.

You can tell which one your site is using by looking at your browser's address bar. A padlock icon means HTTPS is active. A "Not secure" warning means it isn't, and both Google and your visitors will notice.

Why it matters for SEO: Google has confirmed HTTPS as a ranking signal. More practically, browsers like Chrome actively warn visitors away from non-secure sites. If any page on your site still loads over HTTP, fix it before anything else.

What to check
 

It's not enough for your homepage to be secure. Every page, and every asset on those pages (images, fonts, scripts, stylesheets), needs to load over HTTPS. When a secure page loads an insecure asset, it's called a mixed content error. The page technically has HTTPS, but the browser flags it as partially insecure.

Use Why No Padlock: paste in any URL and it will tell you exactly which assets are loading insecurely and where they're coming from.
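
If any page or asset still loads over HTTP, a server-level redirect is the cleanest sitewide fix. Here's a minimal sketch, assuming an Apache server with mod_rewrite enabled; nginx and most managed hosts have equivalents:

# .htaccess: send every HTTP request to its HTTPS equivalent with a 301
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]

After deploying, re-run Why No Padlock to confirm nothing is still requested over HTTP.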

 

2. Does your domain have a consistent address?

Your website can technically be reached at two different addresses: www.yourdomain.com and yourdomain.com (without the www). To a browser, these are two separate locations. To Google, they can look like two separate websites publishing identical content.

This is called a www vs. non-www conflict, and it's one of the most common domain-level issues on small sites.

The fix is simple
 

Pick one version (either www or non-www) as your canonical (official) address. Then set up a 301 redirect from the other version. A 301 redirect is a permanent instruction that tells browsers and search engines: "This address has moved here for good. Follow this link and don't come back."

Once set up, anyone typing either version will land in the same place, and Google will treat your site as one unified entity rather than two duplicates.

How to check: Type both versions of your domain into a browser and watch what happens in the address bar. If one redirects cleanly to the other, you're fine. If both load independently (showing the same content), you have a duplicate content problem to fix. You can also use redirect-checker.org to confirm the redirect is a true 301 and not a softer, temporary redirect.
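
You can also verify the redirect from the command line. A quick sketch with curl, assuming you chose the www version as canonical:

curl -sI http://yourdomain.com/ | grep -iE "^(HTTP|location)"
# Expected output for a healthy setup:
# HTTP/1.1 301 Moved Permanently
# location: https://www.yourdomain.com/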

3. Are your staging or test sites visible to Google?

When developers build or update a website, they typically work on a separate version of the site first, often at an address like staging.yourdomain.com or dev.yourdomain.com. This is called a staging environment or test subdomain. It's meant to be invisible to the public.

The problem: if nobody explicitly tells Google to stay out, Googlebot will find it and crawl it. Now Google has two versions of your site (the live one and the staging one) with identical content. This confuses indexation and wastes crawl budget on pages that should never appear in search results.

The fix
 

Staging and test subdomains should be blocked from crawlers using a robots.txt directive, or better yet, password-protected entirely so only your team can access them. If you're not sure whether your staging environment is exposed, type staging.yourdomain.com (and common variations like dev., test., beta.) directly into a browser. If it loads without a password prompt, it's publicly accessible.
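
If you need a stopgap while password protection is being set up, the robots.txt served from the staging host itself (staging.yourdomain.com/robots.txt, not your live site's file) would look like this:

User-agent: *
Disallow: /

Keep in mind robots.txt only blocks crawling: a staging URL that's already linked from somewhere can still end up indexed, which is why password protection remains the stronger fix.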

 

4. Is your SSL certificate valid and current?

An SSL certificate is what makes HTTPS work. It's a small digital file installed on your server that verifies your site is who it claims to be and enables the encrypted connection. SSL certificates expire, and if yours lapses, the consequences are immediate.

When an SSL certificate expires, browsers display a full-screen warning to visitors: "Your connection is not private." Most people leave immediately. An expired or invalid certificate blocks users, damages trust, removes the padlock, and can harm crawling and page experience signals.

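To check your certificate's expiry from the command line, here's a sketch using openssl (preinstalled on macOS and most Linux systems; the dates in the output below are illustrative):

echo | openssl s_client -servername yourdomain.com -connect yourdomain.com:443 2>/dev/null | openssl x509 -noout -dates
# Output shows the validity window:
# notBefore=Jan 15 00:00:00 2025 GMT
# notAfter=Apr 15 23:59:59 2025 GMT

Most modern hosts auto-renew certificates, but lapses still happen, so verify rather than assume.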

5. Are there unnecessary detours in your domain's redirect path?

A redirect is an instruction that sends a visitor (or a search engine bot) from one URL to another. One redirect is normal: for example, redirecting http:// to https://, or www to non-www. The problem starts when redirects stack on top of each other.

Redirect chains happen when one redirect leads to another redirect before finally reaching the destination. For example: a visitor goes to Page A, which redirects to Page B, which redirects to Page C, which is the actual page. Each extra hop adds loading time and increases the chance that Google's crawler gives up before reaching the final destination. Chains like this often build up silently after a site migration, a domain change, or an HTTPS upgrade that wasn't fully cleaned up.

Redirect loops are more serious. This is when redirects form a closed cycle: Page A redirects to Page B, which redirects back to Page A. Neither users nor crawlers can ever land anywhere. The browser displays an error and Google can't index either page. This is a Tier 1 fix.

How to check
 

Use redirect-checker.org: enter your domain and it will map out every hop in the redirect path. You're looking for a clean, single-step redirect. Anything with two or more hops needs to be collapsed so the first address redirects directly to the final destination.
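
You can also trace a chain from the command line. A sketch with curl (-L follows each redirect, -I fetches headers only; the /old-page and /new-page URLs are hypothetical examples):

curl -sIL http://yourdomain.com/old-page | grep -iE "^(HTTP|location)"
# Two or more 301s before the final 200 means you have a chain to collapse:
# HTTP/1.1 301 Moved Permanently
# location: https://yourdomain.com/old-page
# HTTP/1.1 301 Moved Permanently
# location: https://www.yourdomain.com/new-page/
# HTTP/1.1 200 OK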

 

Crawlability Audit

If Googlebot can't access a page, that page doesn't rank. Before Google can consider your content for search results, it needs to be able to find and read it. Crawlability is about removing the obstacles that get in the way of that, most of which are invisible until you look for them.

 

1. Your robots.txt file — the gatekeeper

Every website has (or should have) a file at yourdomain.com/robots.txt. It's a plain text file that tells search engine bots which pages they're allowed to crawl and which to skip. Type that URL directly into your browser to see yours.

The most damaging mistakes here aren't exotic; they're accidental. The three most common:

  • Blocking the entire site — a single line (Disallow: /) tells all crawlers to stay out completely. This can happen when a developer sets it during a build and forgets to remove it before launch.
  • Blocking CSS or JavaScript files — Google needs to load your site's styling and scripts to understand how your pages look and behave. Block these and Google may render your pages incorrectly or not at all.
  • Leaving old rules in place — staging-era instructions that made sense during development often get carried into production by accident, quietly restricting access to pages that should be indexed.

If you spot any of these, flag them as Tier 1 and get them corrected before proceeding.
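
For reference, a healthy robots.txt for a small site is usually only a few lines. A minimal sketch (the /wp-admin/ lines are a hypothetical WordPress example; only disallow paths that genuinely shouldn't be crawled):

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.yourdomain.com/sitemap.xml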

To learn more about robots.txt, watch the overview video from Google Search Central.

 

2. Your XML sitemap — the roadmap

A sitemap is a file that lists all the pages on your site you want Google to index. Think of it as handing Google a structured map of your site rather than making it discover everything by following links.

Tip
 

To check yours, go to Google Search Console → Sitemaps. GSC will show you how many URLs were submitted and how many were actually indexed. A significant gap between those two numbers is a signal worth investigating.

While you're there, look for three specific problems:

  • Pages returning 4xx errors — these are broken URLs listed in your sitemap, pointing Google toward dead ends.
  • Noindexed URLs included in the sitemap — a page can't be both "please index this" (sitemap) and "don't index this" (noindex tag) at the same time; one instruction will win, and the conflict wastes crawl budget.
  • Important pages missing entirely — if a key page isn't in your sitemap, Google may still find it, but you're leaving discovery to chance.

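For reference, here's the minimal structure of a valid XML sitemap (the URL and date are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.yourdomain.com/your-page/</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
</urlset>

Every <loc> should return a 200 status and carry no noindex directive; that single rule prevents the first two problems above.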

 

3. Crawl budget — only relevant at scale

Crawl budget refers to the number of pages Google will crawl on your site within a given period. For most small sites (under 500 pages), this is not a priority concern; Google will crawl everything it can access.

It becomes relevant when your site generates large numbers of low-value or near-duplicate URLs automatically. Common causes: filter and sorting combinations on product pages, session IDs appended to URLs, or pagination running into hundreds of near-identical pages.

If your Screaming Frog crawl returns a page count that's dramatically higher than your actual content count, investigate the URL patterns before assuming they're all intentional. You may have a crawl trap generating thousands of URLs that eat up crawl budget without contributing anything to rankings.

 

Indexation Audit

Crawlability and indexation are different problems. A page can be crawlable but excluded from the index (often by accident).

The site: operator check

Search site:yourdomain.com in Google. The number of results gives you a rough index count. A dramatic mismatch between this number and your actual page count signals an indexation problem.

Noindex audit

Accidental noindex tags are the most common self-inflicted ranking blocker. Run Screaming Frog and filter for pages returning a noindex directive. Cross-reference against pages you expect to rank. A noindex on your homepage or key landing pages is a Tier 1 emergency.
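
For reference, the directive you're hunting for looks like this, either as a tag in the page's <head> or as an HTTP response header (Screaming Frog detects both forms):

<meta name="robots" content="noindex">

X-Robots-Tag: noindex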

Canonical tag logic

A canonical tag is a small piece of code in a page's header that tells Google: "This is the official version of this page." It exists because the same content can often be reached at multiple URLs: with or without a trailing slash, with tracking parameters added, or through both HTTP and HTTPS versions. Without a canonical tag, Google has to guess which URL is the "real" one. Sometimes it guesses wrong.

The tag looks like this in a page's HTML:

<link rel="canonical" href="https://www.yourdomain.com/your-page/" />

There are two correct uses:

  • Self-referencing canonical — the page points to itself, confirming it's the primary version. This is the standard setup for most pages and simply tells Google "this URL is correct, index this one."
  • Consolidating canonical — a duplicate or near-duplicate page points to the preferred version. For example, if yourdomain.com/page?ref=email and yourdomain.com/page show identical content, the parameter URL should have a canonical pointing to the clean version.

Where it breaks down is when canonical tags point to the wrong place. The three most damaging mistakes:

  • Canonical pointing to a 404 page — you're telling Google the preferred version of this page is one that doesn't exist
  • Canonical pointing to a redirect — Google follows the redirect, sees the destination, and has to reconcile which URL you actually intended
  • Canonical pointing to the wrong page entirely — this can happen after migrations or CMS template errors, and it tells Google to suppress the very page you want to rank
Tip
 

To check yours: run Screaming Frog and look at the Canonicals report. It will show you each page's canonical URL and flag mismatches, missing tags, and canonicals pointing to non-200 pages. Any page where the canonical destination returns a 4xx or 3xx is Tier 1.

Parameter and trailing slash duplicates

/page, /page/, and /page?ref=email can all be treated as separate URLs by Googlebot. Confirm your server or CMS handles these consistently, or use canonical tags to consolidate them.
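
If your server handles trailing slashes inconsistently, a rewrite rule can enforce one form. A sketch assuming Apache with mod_rewrite; check your CMS first, since many platforms already normalize this:

# Append a trailing slash to URLs that aren't real files
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]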

 

On-Page Technical Signals

These are structural elements (distinct from copywriting) that affect how Google parses and represents your pages.

 

Title tags and meta descriptions

Title tags should stay under roughly 60 characters to avoid truncation in SERPs; meta descriptions under roughly 155. In Screaming Frog, filter the Page Titles report for entries flagged as "too long" or "missing." These won't cause ranking drops, but truncated titles reduce click-through rates.
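
For reference, both tags live in the page's <head> (the brand and copy here are placeholders):

<title>Technical SEO Audit Guide | YourBrand</title>
<meta name="description" content="A step-by-step technical SEO audit for small sites: crawling, indexation, and Core Web Vitals.">

That title is 37 characters, comfortably inside the truncation limit.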

 

Heading hierarchy

Each page should have exactly one H1. Multiple H1s don't directly harm rankings, but they signal unclear page structure. More damaging: pages with no H1, or H1 text that doesn't match the page's primary topic.

 

Broken internal links

Every internal 404 wastes crawl budget and creates a dead end for link equity. Screaming Frog surfaces these under Response Codes → 4xx. Fix by updating the link destination or redirecting the broken URL.

 

Image alt text

Alt text is a crawl signal, not just an accessibility feature. Images without alt text are invisible to Googlebot's text-based parsing. In Screaming Frog, check the Images report for missing alt attributes on images that carry content value.
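
For reference, alt text is a plain attribute on the image tag (file name and description are placeholders):

<img src="/images/blue-widget.jpg" alt="Blue ceramic widget with a stainless-steel handle">

Purely decorative images can use an empty alt="" so crawlers and screen readers skip them.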

 

Core Web Vitals (2025 Standards)

Core Web Vitals are three metrics Google uses to measure how a page actually feels to use. Not just whether it loads, but whether it loads fast, responds quickly, and stays visually stable while it does. They're part of how Google evaluates page quality, and they show up directly in PageSpeed Insights and Google Search Console.

There are currently three metrics. If your audit references First Input Delay (FID) anywhere, retire it — FID was officially replaced by INP on March 12, 2024.

 

INP: Is your page responsive when people click?

INP stands for Interaction to Next Paint. It measures how quickly your page visually responds after a user does something: clicks a button, opens a menu, types in a field. If there's a noticeable delay between the action and the page reacting, that's a poor INP score.

Thresholds:

  • Good = under 200ms
  • Needs improvement = 200–500ms
  • Poor = over 500ms


The most common causes on small sites: too much JavaScript running in the background, third-party scripts (chat widgets, analytics, ad tags) competing for the browser's attention, and slow server responses.
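
One common mitigation is keeping third-party scripts from blocking the main thread. A sketch (the script URL is a placeholder):

<!-- Blocking: parsed and executed immediately, competing with user input -->
<script src="https://widget.example.com/chat.js"></script>

<!-- Better: downloaded in parallel, executed only after the HTML is parsed -->
<script src="https://widget.example.com/chat.js" defer></script>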

 

LCP: Does your main content load quickly?

LCP stands for Largest Contentful Paint. It measures how long it takes for the biggest visible element on the page to fully load: usually a hero image, a large heading, or a featured photo. It's Google's way of asking: "How quickly does the page feel usable?"

Thresholds:

  • Good = under 2.5 seconds
  • Needs improvement = 2.5–4 seconds
  • Poor = over 4 seconds


The most common causes of a slow LCP: hero images that haven't been compressed, CSS or JavaScript that blocks the page from rendering, and slow hosting or server response times.
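
If a hero image is your LCP element, telling the browser to fetch it early often helps. A sketch with placeholder paths:

<!-- In the <head>: start downloading the hero image immediately -->
<link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">

<!-- And make sure the image itself isn't lazy-loaded -->
<img src="/images/hero.jpg" fetchpriority="high" width="1200" height="600" alt="Hero image">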

 

CLS: Does the page stay still while it loads?

CLS stands for Cumulative Layout Shift. It measures how much the page jumps around visually as it loads. You've experienced poor CLS before: you go to click something, and at the last second an image loads above it, pushing everything down and making you click the wrong thing.

Thresholds:

  • Good = under 0.1
  • Needs improvement = 0.1–0.25
  • Poor = over 0.25


The most common causes: images without defined dimensions (the browser doesn't know how much space to reserve), ads or embeds that load late and push content down, and web fonts that swap in after the page has already rendered.
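
The image fix is usually a one-line change: declare the dimensions up front so the browser reserves the space before the file arrives (values are placeholders):

<img src="/images/product.jpg" width="1200" height="600" alt="Product photo">

Modern browsers derive the aspect ratio from those two attributes, so the reserved box scales correctly even in responsive layouts.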

 

How to check all three

Go to PageSpeed Insights and enter your most important pages one at a time:

  • Your homepage
  • Your main product or service page
  • Your highest-traffic landing page

When the results load, look at the field data section at the top of the report. Field data reflects real visitors on real devices, and it's what Google actually uses when evaluating your pages; the lab data below it is a simulated run. If your site doesn't have enough traffic yet to generate field data, the lab scores are your best available proxy. Treat them as directional, not definitive.

GSC also has a dedicated Core Web Vitals report (under Experience) that groups your URLs by status:

  • Good
  • Needs Improvement
  • Poor

The report also shows which specific metric is failing on which pages.


 

Site Structure and Internal Linking

Link equity flows through internal links. If it's leaking or pooling in the wrong places, pages that should rank won't, even if everything else is correct.

 

Crawl depth audit

Any important page more than three clicks from the homepage is effectively buried. In Screaming Frog, check the Crawl Depth column. Pages at depth 4+ should either be promoted in the navigation or linked from higher-authority pages.

 

Orphan pages

An orphan page has no internal links pointing to it. Googlebot may find it via the sitemap, but without internal links, it receives no equity and signals low importance. Cross-reference your sitemap URLs against Screaming Frog's inlinks report.

Adding breadcrumb navigation to content-heavy sections of your site is an efficient way to simultaneously resolve orphan issues and improve crawl path clarity for both users and bots.
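
Breadcrumbs pair naturally with structured data (covered in the next section). A minimal sketch of breadcrumb markup in JSON-LD, with placeholder names and URLs:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.yourdomain.com/" },
    { "@type": "ListItem", "position": 2, "name": "Guides", "item": "https://www.yourdomain.com/guides/" },
    { "@type": "ListItem", "position": 3, "name": "Technical SEO Audit" }
  ]
}
</script>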


Redirect chains and loops

As covered in the Domain, DNS, and Security Health section, use redirect-checker.org (or the curl sketch shown there) to trace the full redirect path for any URL, and collapse multi-hop chains to a single step.

 

Mobile and Structured Data

Mobile usability

Google completed its transition to mobile-first indexing in July 2024; all sites are now crawled and indexed using Googlebot Smartphone. Google Search Console has retired its dedicated Mobile Usability report, so spot-check key pages with Lighthouse (built into Chrome DevTools) for the classic failures: text too small to read, clickable elements too close together, content wider than the screen. Any errors here are Tier 2 at minimum.

 

Structured data

Schema markup doesn't guarantee rich results, but it makes your content machine-readable. For most small sites, the highest-value schema types are: Article, FAQ, Breadcrumb, and LocalBusiness (if location-relevant). Validate your implementation using Google's Rich Results Test.
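
A minimal sketch of Article markup in JSON-LD, with placeholder values; expand it with the fields relevant to your content:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Run a Technical SEO Audit",
  "datePublished": "2025-01-15",
  "author": { "@type": "Person", "name": "Author Name" }
}
</script>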


Flag structured data errors as Tier 2 and structured data warnings as Tier 3: warnings don't prevent rich results from appearing, while errors can.

 

Verify the Fix

Most guides end at "fix this." That's where they fail you.

Every fix needs a confirmation step before you move on.

Fix type                     | Verification method                               | Timeline
Indexation / noindex removed | GSC URL Inspection → Request Indexing             | Days to weeks
Core Web Vitals improvements | PageSpeed Insights re-run + GSC CWV report        | 28-day field data lag
Crawl errors resolved        | Screaming Frog re-crawl; compare against baseline | Immediate
Structured data added        | Rich Results Test re-run                          | Immediate validation

Google re-evaluates some signals quickly (URL-level indexation) and others slowly (CWV field data reflects a rolling 28-day window of real user interactions).

Tip
 

Set a calendar reminder rather than checking daily.

 

When Should You Perform an SEO Audit?

Ideally, run one as often as you can, but at least every six months (the schedule below breaks this down). If you notice declines in your website rankings, that's a good signal for a new audit, even if one isn't scheduled.

 

When to Re-Audit

A technical SEO audit is not a one-time task. Run a full audit:

  • When launching a new website — establish a clean baseline before any traffic accumulates
  • Every 6 months as standard maintenance
  • After any major site migration (new CMS, new domain, new URL structure)
  • After a significant ranking drop that isn't explained by content changes
  • After adding new site sections or templates that could introduce new crawl patterns

Between audits, keep GSC's Coverage and Core Web Vitals reports open as a passive monitoring layer.

 

Audit Priority Checklist

Use this after completing each section. Every item maps to a section above.

Tier 1 (fix immediately):

  • Every page and asset loads over HTTPS, with no mixed content
  • SSL certificate is valid and not near expiry
  • www/non-www resolves with a single 301; no redirect loops
  • robots.txt isn't blocking the site, CSS, or JavaScript
  • No accidental noindex on pages you expect to rank
  • No canonical tags pointing to 4xx or 3xx URLs

Tier 2 (fix this sprint):

  • Redirect chains collapsed to single hops
  • Sitemap free of 4xx and noindexed URLs, with all key pages included
  • Staging and test subdomains blocked or password-protected
  • Important pages within three clicks of the homepage; no orphan pages
  • Core Web Vitals in "Good" for key pages; mobile usability errors fixed
  • Structured data errors resolved

Tier 3 (schedule or ignore):

  • Structured data warnings
  • Title and meta description length flags; multiple H1s
  • Missing alt text on decorative images

 

Conclusion

A technical SEO audit isn't just a one-time task—it's an ongoing process that helps your website stay competitive in search results. By regularly examining the technical aspects of your site, you can identify and fix issues before they impact your rankings.

Remember that technical SEO is just one piece of the puzzle. While it creates the foundation for success, you'll still need quality content and a strong backlink profile to achieve top rankings.

Technical SEO supports crawlability, indexation, performance, and user experience, which can improve search visibility when paired with strong content. Regular audits (every 6-12 months) are essential to mitigate risks and capitalize on emerging opportunities.

Stay ahead of the curve and subscribe to our newsletter for the latest trends and industry insights.

 

Frequently Asked Questions

 

What's the difference between a crawlability issue and an indexation issue?

Crawlability refers to whether Googlebot can access and read a page (it's blocked at the network or robots level). Indexation refers to whether Google has chosen to include that page in its search index. A page can be crawlable but still excluded from the index due to a noindex tag, a duplicate content signal, or a canonical pointing elsewhere. Audit them separately: crawlability first, indexation second.

 

Do redirect chains actually hurt rankings?

Google has stated that 301 redirects don't lose PageRank. The practical risk with redirect chains is latency (each hop adds load time) and the increased likelihood that Googlebot abandons the chain before fully resolving it, particularly on slower servers. Reduce chains to single hops as a crawl efficiency measure, not because each hop "loses" equity.

 

My audit tool flagged 200+ issues. Where do I actually start?

Start with the Tier 1 checklist in this guide: HTTPS enforcement, SSL validity, accidental noindex tags, and redirect loops. These are the issues most likely to be actively suppressing visibility right now. Ignore everything that doesn't fall into Tier 1 or Tier 2 until those are resolved. A clean, crawlable, indexable site with acceptable Core Web Vitals will outperform a technically "perfect" site with unresolved blocking issues.

 

How often should I run a technical SEO audit?

Every six months as standard maintenance. Additionally, run a focused audit after any site migration, significant platform change, or unexplained ranking drop. Between audits, GSC's Coverage report and Core Web Vitals dashboard give you enough passive signal to catch new issues before they compound.

AUTHOR
Natasa Vujovic
Marketing Specialist

Natasa is an SEO specialist and content writer at Dynadot, specializing in search optimization, keyword strategy, and domain industry trends. With a strong background in digital marketing, she helps domain investors, entrepreneurs, and businesses understand the critical intersection between SEO and domains. At Dynadot, she creates actionable guides on choosing SEO-friendly domain names and leveraging new TLDs to increase online visibility.