Understanding the Next.js Robots Metadata Function

June 15, 2023

Next.js is a popular framework for building server-side rendered React applications. One of the many features it offers is the ability to control how search engine crawlers interact with your website using the robots.txt file. The robots.txt file is a widely supported standard (the Robots Exclusion Protocol) that tells search engine crawlers which pages on a website they may crawl.

The robots.txt file can be used to exclude certain pages from being crawled, or to ask crawlers to wait a certain amount of time between requests to your website.
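
For reference, a hand-written robots.txt covering both cases might look something like this; the path and delay value are only illustrative:

robots.txt
User-agent: *
Allow: /
Disallow: /private/
Crawl-delay: 86400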

The robots metadata function is a Next.js feature that lets you generate this robots.txt file programmatically. You can add it to your application by creating a robots.ts file in the app directory; Next.js uses the object returned from this function to serve a robots.txt file at the root of your site.

Here's an example of a robots.ts file that tells search engine crawlers to stay out of a particular section of the site:

app/robots.ts
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: '/private/',
    },
  }
}

In this example, the robots function returns an object whose rule applies to every crawler (userAgent: '*'), allows the site to be crawled in general, and sets the disallow property to '/private/', which tells search engine crawlers not to crawl any pages under that path.
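
With this file in the app directory, Next.js serves the result at /robots.txt. The generated output should look roughly like this:

robots.txt
User-Agent: *
Allow: /
Disallow: /private/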

The robots metadata function also lets you give search engine crawlers other instructions, such as how long they should wait between successive requests and where to find your sitemap. Here's an example of a robots.ts file that asks crawlers to wait a full day between requests and points them to the site's sitemap:

app/robots.ts
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      // Apply the crawl delay to all crawlers
      userAgent: '*',
      // Minimum delay between requests, in seconds (60 * 60 * 24 = one day)
      crawlDelay: 86400,
    },
    sitemap: 'https://example.com/sitemap.xml',
  }
}

In this example, the robots function returns an object with the crawlDelay property set to 86400, which asks search engine crawlers to wait 86,400 seconds (24 hours) between requests; note that crawl delay is a hint that some crawlers honor and others ignore. The sitemap property is set to 'https://example.com/sitemap.xml', which tells crawlers where to find the website's sitemap.
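
The corresponding output served at /robots.txt should look roughly like this:

robots.txt
User-Agent: *
Crawl-delay: 86400

Sitemap: https://example.com/sitemap.xml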

In addition to the disallow, crawlDelay, and sitemap properties, the robots metadata function also accepts other properties such as userAgent, allow, and host, and rules can be given as an array when different crawlers need different instructions. You can find more information about these properties in the Next.js documentation.
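
As a sketch of what that can look like, here is a robots.ts that uses an array of rules to target a specific crawler and declares a preferred host; the crawler name and URLs are placeholders:

app/robots.ts
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      {
        // Default rule for all crawlers
        userAgent: '*',
        allow: '/',
        disallow: '/private/',
      },
      {
        // A stricter rule for one specific crawler (placeholder name)
        userAgent: 'ExampleBot',
        disallow: '/',
      },
    ],
    sitemap: 'https://example.com/sitemap.xml',
    host: 'https://example.com',
  }
}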

Conclusion

The Next.js robots metadata function gives you granular control over how search engine crawlers interact with your website. By using it, you can ensure that your website is crawled and indexed in a way that fits your needs. And because the file is written in TypeScript and exports a typed function, your metadata is checked against the MetadataRoute.Robots type, so errors are caught early in the development process.
