Robots.txt not adding excluded pages?

I have five pages that are excluded from search engine indexing, i.e. in the Page tab, under Search Engines, I have set Indexing: Deny.

But robots.txt is missing these entirely. I am on Sparkle 3.0.8. Is anyone else having this problem?

Here is my robots.txt.

User-agent: *
Disallow:

Sitemap: https://curlewescape.com.au/sitemap.xml
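For what it's worth, a Disallow: line with an empty value blocks nothing, so this file allows crawlers everywhere. I expected per-page entries like the following instead (page paths made up for illustration):

User-agent: *
Disallow: /hidden-page.html
Disallow: /members-only.html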

Hi.

Correct. Same here.
BUT: in the head section of the specific page you should see this:
<meta name="robots" content="noindex,noarchive">

The robots meta tag above tells search engines not to show the page in search results (noindex) and not to serve a cached copy of it (noarchive). The name="robots" value means the directive applies to all crawlers.
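As a quick sketch, an excluded page's published source should contain something like this in its head (the title and surrounding markup are illustrative):

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Hidden page</title>
<meta name="robots" content="noindex,noarchive">
</head>

You can check by viewing the page source in a browser, or with something like curl -s https://curlewescape.com.au/your-page.html | grep -i robots (the page path is a placeholder).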

That’s the way Sparkle does it: excluded pages get the robots meta tag instead of a Disallow entry in robots.txt.

Mr. F.