It's not a great option, but it might at least let you grab some kind of snapshot of the information before you lose access even to the archive.org copy.
One tool which does what you're asking about is wget, which can crawl an entire site over HTTP and download all the HTML documents and other linked resources.
What that gives you is essentially what a Google bot sees when crawling every link on your site: wget recursively walks every link within the site and downloads the HTML page that is generated for each thread and page.
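For reference, a typical recursive-mirror invocation looks something like this. The URL is a placeholder for whatever archive.org snapshot you'd be starting from, and the flags are standard wget options (worth double-checking against your wget version):

    # Recursively mirror the site, grabbing page requisites (CSS, images),
    # rewriting links so the copy browses offline, and adding .html
    # extensions so saved pages open cleanly from disk.
    wget --mirror \
         --page-requisites \
         --convert-links \
         --adjust-extension \
         --no-parent \
         "https://web.archive.org/web/20240101000000/http://example-forum.com/"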
"As HTML" isn't going to be the best or most fun to have to re-parse through later (manually, or with some scripting or macro prowess) to try and make new similar phpBB posts on a new site, if that's your intention. But it's at least "something", and would be "complete" in terms of knowledge on the site, and doesn't require any more access to create beyond "an ability to view the web site." You would have a complete offline HTML copy of whatever you were able to view through archive.org.
There is a GUI front-end for wget, WinWGet, available from https://sourceforge.net/projects/winwget/. That's probably a better starting point for people like me who are unfamiliar with wget than trying to download and use the Win32 port of the GNU original. But even with WinWGet, I would expect to spend a lot of time testing and getting the download settings right before you're finally able to "unleash" wget to walk the entire site.
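One cheap way to do that testing: restrict the recursion depth and download into a scratch directory, then inspect what came back before committing to a full crawl. A sketch (same placeholder URL as above):

    # Trial run: follow links only one level deep and save into ./test-run,
    # so you can sanity-check the output before crawling everything.
    wget -r -l 1 -P test-run \
         --page-requisites --convert-links --adjust-extension \
         "https://web.archive.org/web/20240101000000/http://example-forum.com/"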
Don't forget that archive.org may throttle you, or even cut you off after a certain amount of bandwidth, so include a crawl delay so that you're not being too greedy with archive.org's processing and bandwidth.
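wget has built-in options for exactly this kind of politeness. Something along these lines, where the specific delay and rate cap are guesses you'd tune to taste:

    # Wait ~2 seconds between requests (randomized, to smooth out the load)
    # and cap the download speed at 200 KB/s.
    wget --mirror --page-requisites --convert-links --adjust-extension \
         --no-parent \
         --wait=2 --random-wait --limit-rate=200k \
         "https://web.archive.org/web/20240101000000/http://example-forum.com/"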