I have a website that I would like to turn into a PDF. I am using Create > PDF from Web Page…, pointing it at the site's home page and capturing 2 levels, with "Stay on same path" and "Stay on same server" checked to limit the scope of the crawl.
Where the pages live at example.com/foo/ and example.com/foo/bar/, this works fine. However, where the pages are at example.com/foo/ and example.com/foo/?p=1, the page behind the query-string URL is not captured in the PDF.
This is a problem, given that the site I want to archive as a PDF uses query strings for most of its pages.
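To illustrate the kind of links being skipped, here is a minimal Python sketch (standard library only; the start URL is a placeholder for my actual site) that lists the query-string links a crawler would see on the start page:

```python
# List the query-string links present on the start page, to show they
# really are ordinary <a href> links that a crawler should follow.
# START is a placeholder; substitute the real site's home page.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

START = "http://example.com/foo/"

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect every href, resolved against the start URL.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(START, value))

parser = LinkCollector()
parser.feed(urlopen(START).read().decode("utf-8", errors="replace"))
for link in parser.links:
    if "?" in link:
        print(link)  # the query-string pages Acrobat is dropping
```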
I have been able to convert an individual query-string page into a PDF using this method, but doing that for every page would be impractical given the sheer number of pages on the site.
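Failing a fix inside Acrobat, the only fallback I can see is scripting the conversion outside it. A rough sketch of what I mean, assuming a hand-gathered urls.txt and the third-party pdfkit library (a wrapper around the wkhtmltopdf binary; neither is part of Acrobat):

```python
# Render each URL from a hand-gathered list to its own PDF, one file
# per page. Assumes pdfkit is installed and wkhtmltopdf is on the PATH;
# urls.txt is a hypothetical list of pages like example.com/foo/?p=1.
import pdfkit

with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for i, url in enumerate(urls):
    # One PDF per page; these would still need merging afterwards.
    pdfkit.from_url(url, f"page_{i:04d}.pdf")
```

But that loses the single-document capture (and linked bookmarks) I get from Acrobat, which is why I would prefer a workaround within the Create PDF from Web Page feature itself.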
Is this a known issue? Is there a workaround other than capturing each page separately (which would take prohibitive effort)?
I have tried this in both Acrobat Pro X and Acrobat Pro 9 for Mac, with the same results.