How to block inner pages using Robots.txt

kevinmax Offline referral

Posts: 72
Joined: Jun 2015
Reputation: 0

I need to block my website's inner pages using robots.txt. I don't want to block all pages; I want Google to deindex only the specific inner pages listed in the robots.txt file.

Examples please.
maya Offline referral

Posts: 141
Joined: Jul 2015
Reputation: 0

Let's say I want to block or disallow Google from crawling a directory called TEST and also a specific URL called TEST.php. Note that Disallow rules only take effect under a User-agent line:

User-agent: Googlebot
Disallow: /TEST.php
Disallow: /TEST/
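You can check rules like these before deploying them. A minimal sketch using Python's standard-library `urllib.robotparser` (the example.com URLs and TEST paths are just placeholders):

```python
import urllib.robotparser

# Hypothetical robots.txt content to verify
rules = """User-agent: *
Disallow: /TEST.php
Disallow: /TEST/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Blocked paths are reported as not fetchable
print(rp.can_fetch("*", "https://example.com/TEST.php"))       # False
print(rp.can_fetch("*", "https://example.com/TEST/page.html")) # False
# Everything else stays crawlable
print(rp.can_fetch("*", "https://example.com/index.html"))     # True
```

This only tells you what a compliant crawler would do with the rules; it does not fetch anything from your live site.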
salenaadam Offline referral

Posts: 86
Joined: Jan 2017
Reputation: 0

The robots exclusion protocol (REP), or robots.txt, is a text file webmasters create to instruct robots (typically search engine crawlers) which parts of a site they may visit, and it can block a specific web crawler from a specific web page. Unlike crawler directives, REP tags such as noindex are interpreted differently by each search engine.
rawatgoal Offline referral

Posts: 45
Joined: Jun 2017
Reputation: 0

In my opinion, you should not use robots.txt to hide your web pages from Google Search results. Other pages might link to your page, and your page could get indexed that way, bypassing the robots.txt file entirely. If you want to keep a page out of search results, use another method such as password protection or a noindex tag or directive.
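For reference, the noindex directive mentioned above can be set as a meta tag in the page itself (a minimal sketch; the page must remain crawlable, i.e. not blocked in robots.txt, for Google to see the tag):

```html
<!-- Inside the <head> of the page you want kept out of search results -->
<meta name="robots" content="noindex">
```

For non-HTML files such as PDFs, the same effect can be achieved by having the server send an `X-Robots-Tag: noindex` HTTP response header.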
maheshseo Offline referral

Posts: 158
Joined: May 2017
Reputation: 0

Disallow: /your-inner-page
sophiawatson175 Offline referral

Posts: 55
Joined: Aug 2018
Reputation: 0

You can simply write a Disallow rule for the inner page in your robots.txt file, and Google will stop crawling that page (though, as noted above, a disallowed page can still appear in results if other sites link to it). An example of such a rule:

Disallow: /your-inner-page
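Putting the thread's suggestions together, a complete robots.txt (served from the site root, e.g. https://example.com/robots.txt; the paths here are placeholders for your actual inner pages) would look like:

```text
User-agent: *
Disallow: /your-inner-page.html
Disallow: /your-inner-directory/
```

Remember that every Disallow line must sit under a User-agent line, or crawlers will ignore it.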
