author    | Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com> | 2023-12-24 19:11:05 +0100
committer | Bjørn Erik Pedersen <bjorn.erik.pedersen@gmail.com> | 2024-01-27 16:28:14 +0100
commit    | 7285e74090852b5d52f25e577850fa75f4aa8573 (patch)
tree      | 54d07cb4a7de2db5c89f2590266595f0aca6cbd6 /hugolib/page__meta.go
parent    | 5fd1e7490305570872d3899f5edda950903c5213 (diff)
all: Rework page store, add a dynacache, improve partial rebuilds, and some general spring cleaning
There are some breaking changes in this commit, see #11455.
Closes #11455
Closes #11549
This fixes a set of bugs (see the issue list below) and also pays down some technical debt accumulated over the years. We now build with Staticcheck enabled in the CI build.
The performance should be about the same as before for regular-sized Hugo sites, but it should perform and scale much better on larger data sets, as objects that use lots of memory (e.g. rendered Markdown, or big JSON files read into maps with transform.Unmarshal) will now be garbage collected automatically if needed. Performance of partial rebuilds when running the server in fast render mode should be the same, but change detection should be much more accurate.
A list of the notable new features:
* A new dependency tracker that covers (almost) all of Hugo's API and is used to do fine-grained partial rebuilds when running the server.
* A new and simpler tree document store which allows fast lookups and prefix-walking in all dimensions (e.g. language) concurrently.
* You can now configure an upper memory limit, allowing for much larger data sets and/or running on lower-specced PCs.
* We have lifted the "no resources in sub folders" restriction for branch bundles (e.g. sections).
Memory Limit
* Hugo will, by default, set aside a quarter of the total system memory, but you can set this via the OS environment variable HUGO_MEMORYLIMIT (in gigabytes). This is backed by a partitioned LRU cache used throughout Hugo, which is dynamically resized in low-memory situations, allowing Go's garbage collector to free the memory.
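The idea behind the memory limit can be sketched as a size-aware LRU cache: each entry carries a cost (e.g. bytes of rendered Markdown), and inserts evict the least recently used entries until the total cost fits the budget. This is a minimal illustration, not Hugo's actual dynacache; all names here are hypothetical.

```go
package main

import (
	"container/list"
	"fmt"
)

// memoryLRU is a hypothetical size-aware LRU cache. Entries carry a
// cost, and inserts evict from the least-recently-used end until the
// total cost fits the budget (e.g. derived from HUGO_MEMORYLIMIT).
type memoryLRU struct {
	budget int64                    // upper memory limit
	used   int64                    // current total cost
	ll     *list.List               // front = most recently used
	items  map[string]*list.Element // key -> list element
}

type entry struct {
	key  string
	val  any
	cost int64
}

func newMemoryLRU(budget int64) *memoryLRU {
	return &memoryLRU{budget: budget, ll: list.New(), items: map[string]*list.Element{}}
}

func (c *memoryLRU) Set(key string, val any, cost int64) {
	if el, ok := c.items[key]; ok {
		c.used -= el.Value.(*entry).cost
		c.ll.Remove(el)
		delete(c.items, key)
	}
	c.items[key] = c.ll.PushFront(&entry{key, val, cost})
	c.used += cost
	// Evict from the back until we fit the budget again.
	for c.used > c.budget && c.ll.Len() > 1 {
		back := c.ll.Back()
		e := back.Value.(*entry)
		c.ll.Remove(back)
		delete(c.items, e.key)
		c.used -= e.cost
	}
}

func (c *memoryLRU) Get(key string) (any, bool) {
	el, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.ll.MoveToFront(el) // mark as recently used
	return el.Value.(*entry).val, true
}

func main() {
	c := newMemoryLRU(100)
	c.Set("a", "A", 60)
	c.Set("b", "B", 60) // over budget: "a" is evicted
	_, okA := c.Get("a")
	_, okB := c.Get("b")
	fmt.Println(okA, okB) // false true
}
```

In the real implementation the budget is shared across partitions and eviction is also triggered by low-memory signals, so the Garbage Collector can reclaim the freed objects.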
New Dependency Tracker: Hugo has had a rule-based, coarse-grained approach to server rebuilds that has mostly worked well, but there have been some surprises (e.g. stale content). This is now revamped with a new dependency tracker that can quickly calculate the delta given a changed resource (e.g. a content file, template, JS file etc.). This handles transitive relations, e.g. $page -> js.Build -> JS import, or $page1.Content -> render hook -> site.GetPage -> $page2.Title, or $page1.Content -> shortcode -> partial -> site.RegularPages -> $page2.Content -> shortcode ..., and should also handle changes to aggregated values (e.g. site.Lastmod) effectively.
This covers all of Hugo's API with two known exceptions (a list that may not be fully exhaustive):
* Changes to files loaded with the template func os.ReadFile may not be handled correctly. We recommend loading resources with resources.Get.
* Changes to Hugo objects (e.g. Page) passed in the template context to lang.Translate may not be detected correctly. We recommend keeping i18n templates simple, without much data context passed in other than simple types such as strings and numbers.
Note that the cachebuster configuration (when A changes then rebuild B) works well with the above, but we recommend that you revise that configuration, as in most situations it should no longer be needed. One example where it is still needed is with TailwindCSS, using changes to hugo_stats.json to trigger new CSS rebuilds.
Document Store: Previously, somewhat simplified, we split the document store (where we store pages and resources) into one tree per language. This worked pretty well, but the structure made some operations harder than they needed to be. We have now restructured it into one Radix tree for all languages. Internally, the language is considered a dimension of that tree, and the tree can be viewed in all dimensions concurrently. This makes some language-related operations simpler (e.g. finding translations is just a slice range), and it should also be relatively inexpensive to add more dimensions if needed (e.g. role).
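The "one tree, language as a dimension" idea can be sketched as follows: documents are stored once per path, and each node holds a slice indexed by language, so finding all translations is a slice read and a single prefix walk covers every language at once. A real implementation would use a radix tree; a plain map stands in for it here, and all names are illustrative.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// docStore is a hypothetical document store with language as a
// dimension: one node per path, one value slot per language.
type docStore struct {
	langs []string            // dimension: language
	nodes map[string][]string // path -> value per language ("" = missing)
}

func newDocStore(langs ...string) *docStore {
	return &docStore{langs: langs, nodes: map[string][]string{}}
}

func (s *docStore) insert(path, lang, title string) {
	vals, ok := s.nodes[path]
	if !ok {
		vals = make([]string, len(s.langs))
		s.nodes[path] = vals
	}
	for i, l := range s.langs {
		if l == lang {
			vals[i] = title
		}
	}
}

// translations returns every language version of a path: just a slice
// read, with no per-language tree lookups.
func (s *docStore) translations(path string) []string {
	return s.nodes[path]
}

// walkPrefix visits all paths under prefix, across all dimensions at
// once. A radix tree would do this without scanning every key.
func (s *docStore) walkPrefix(prefix string) []string {
	var paths []string
	for p := range s.nodes {
		if strings.HasPrefix(p, prefix) {
			paths = append(paths, p)
		}
	}
	sort.Strings(paths)
	return paths
}

func main() {
	s := newDocStore("en", "nn")
	s.insert("/blog/a", "en", "A")
	s.insert("/blog/a", "nn", "A (nynorsk)")
	s.insert("/blog/b", "en", "B")
	fmt.Println(s.translations("/blog/a"))
	fmt.Println(s.walkPrefix("/blog/"))
}
```

Adding another dimension (e.g. role) would mean widening the per-node slot layout rather than growing a new tree, which is the scalability argument the commit makes.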
Fixes #10169
Fixes #10364
Fixes #10482
Fixes #10630
Fixes #10656
Fixes #10694
Fixes #10918
Fixes #11262
Fixes #11439
Fixes #11453
Fixes #11457
Fixes #11466
Fixes #11540
Fixes #11551
Fixes #11556
Fixes #11654
Fixes #11661
Fixes #11663
Fixes #11664
Fixes #11669
Fixes #11671
Fixes #11807
Fixes #11808
Fixes #11809
Fixes #11815
Fixes #11840
Fixes #11853
Fixes #11860
Fixes #11883
Fixes #11904
Fixes #7388
Fixes #7425
Fixes #7436
Fixes #7544
Fixes #7882
Fixes #7960
Fixes #8255
Fixes #8307
Fixes #8863
Fixes #8927
Fixes #9192
Fixes #9324
Diffstat (limited to 'hugolib/page__meta.go')
-rw-r--r-- | hugolib/page__meta.go | 520 |
1 file changed, 296 insertions, 224 deletions
diff --git a/hugolib/page__meta.go b/hugolib/page__meta.go index eb1559fb1..0ffdb0b84 100644 --- a/hugolib/page__meta.go +++ b/hugolib/page__meta.go @@ -14,28 +14,26 @@ package hugolib import ( + "context" "fmt" - "path" "path/filepath" "regexp" "strings" - "sync" "time" - "github.com/gohugoio/hugo/langs" - "github.com/gobuffalo/flect" + "github.com/gohugoio/hugo/identity" + "github.com/gohugoio/hugo/langs" "github.com/gohugoio/hugo/markup/converter" - - "github.com/gohugoio/hugo/hugofs/files" - - "github.com/gohugoio/hugo/common/hugo" + xmaps "golang.org/x/exp/maps" "github.com/gohugoio/hugo/related" "github.com/gohugoio/hugo/source" + "github.com/gohugoio/hugo/common/hugo" "github.com/gohugoio/hugo/common/maps" + "github.com/gohugoio/hugo/common/paths" "github.com/gohugoio/hugo/config" "github.com/gohugoio/hugo/helpers" @@ -50,79 +48,76 @@ import ( var cjkRe = regexp.MustCompile(`\p{Han}|\p{Hangul}|\p{Hiragana}|\p{Katakana}`) type pageMeta struct { - // kind is the discriminator that identifies the different page types - // in the different page collections. This can, as an example, be used - // to to filter regular pages, find sections etc. - // Kind will, for the pages available to the templates, be one of: - // page, home, section, taxonomy and term. - // It is of string type to make it easy to reason about in - // the templates. - kind string - - // This is a standalone page not part of any page collection. These - // include sitemap, robotsTXT and similar. It will have no pageOutputs, but - // a fixed pageOutput. - standalone bool - - draft bool // Only published when running with -D flag - buildConfig pagemeta.BuildConfig - - bundleType files.ContentClass + kind string // Page kind. + term string // Set for kind == KindTerm. + singular string // Set for kind == KindTerm and kind == KindTaxonomy. - // Params contains configuration defined in the params section of page frontmatter. 
- params map[string]any + resource.Staler + pageMetaParams - title string - linkTitle string + pageMetaFrontMatter - summary string + // Set for standalone pages, e.g. robotsTXT. + standaloneOutputFormat output.Format - resourcePath string + resourcePath string // Set for bundled pages; path relative to its bundle root. + bundled bool // Set if this page is bundled inside another. - weight int + pathInfo *paths.Path // Always set. This the canonical path to the Page. + f *source.File - markup string - contentType string - - // whether the content is in a CJK language. - isCJKLanguage bool - - layout string - - aliases []string - - description string - keywords []string - - urlPaths pagemeta.URLPath - - resource.Dates - - // Set if this page is bundled inside another. - bundled bool - - // A key that maps to translation(s) of this page. This value is fetched - // from the page front matter. - translationKey string + s *Site // The site this page belongs to. +} - // From front matter. - configuredOutputFormats output.Formats +// Prepare for a rebuild of the data passed in from front matter. +func (m *pageMeta) setMetaPostPrepareRebuild() { + params := xmaps.Clone[map[string]any](m.paramsOriginal) + m.pageMetaParams.params = params + m.pageMetaFrontMatter = pageMetaFrontMatter{} +} - // This is the raw front matter metadata that is going to be assigned to - // the Resources above. - resourcesMetadata []map[string]any +type pageMetaParams struct { + setMetaPostCount int + setMetaPostCascadeChanged bool - f source.File + params map[string]any // Params contains configuration defined in the params section of page frontmatter. + cascade map[page.PageMatcher]maps.Params // cascade contains default configuration to be cascaded downwards. - sections []string + // These are only set in watch mode. + datesOriginal pageMetaDates + paramsOriginal map[string]any // contains the original params as defined in the front matter. 
+ cascadeOriginal map[page.PageMatcher]maps.Params // contains the original cascade as defined in the front matter. +} - // Sitemap overrides from front matter. - sitemap config.SitemapConfig +// From page front matter. +type pageMetaFrontMatter struct { + draft bool // Only published when running with -D flag + title string + linkTitle string + summary string + weight int + markup string + contentType string // type in front matter. + isCJKLanguage bool // whether the content is in a CJK language. + layout string + aliases []string + description string + keywords []string + translationKey string // maps to translation(s) of this page. - s *Site + buildConfig pagemeta.BuildConfig + configuredOutputFormats output.Formats // outputs defiend in front matter. + pageMetaDates // The 4 front matter dates that Hugo cares about. + resourcesMetadata []map[string]any // Raw front matter metadata that is going to be assigned to the page resources. + sitemap config.SitemapConfig // Sitemap overrides from front matter. 
+ urlPaths pagemeta.URLPath +} - contentConverterInit sync.Once - contentConverter converter.Converter +func (m *pageMetaParams) init(preserveOringal bool) { + if preserveOringal { + m.paramsOriginal = xmaps.Clone[maps.Params](m.params) + m.cascadeOriginal = xmaps.Clone[map[page.PageMatcher]maps.Params](m.cascade) + } } func (p *pageMeta) Aliases() []string { @@ -144,8 +139,15 @@ func (p *pageMeta) Authors() page.AuthorList { return nil } -func (p *pageMeta) BundleType() files.ContentClass { - return p.bundleType +func (p *pageMeta) BundleType() string { + switch p.pathInfo.BundleType() { + case paths.PathTypeLeaf: + return "leaf" + case paths.PathTypeBranch: + return "branch" + default: + return "" + } } func (p *pageMeta) Description() string { @@ -160,7 +162,7 @@ func (p *pageMeta) Draft() bool { return p.draft } -func (p *pageMeta) File() source.File { +func (p *pageMeta) File() *source.File { return p.f } @@ -192,6 +194,9 @@ func (p *pageMeta) Name() string { if p.resourcePath != "" { return p.resourcePath } + if p.kind == kinds.KindTerm { + return p.pathInfo.Unmormalized().BaseNameNoIdentifier() + } return p.Title() } @@ -217,28 +222,11 @@ func (p *pageMeta) Params() maps.Params { } func (p *pageMeta) Path() string { - if !p.File().IsZero() { - const example = ` - {{ $path := "" }} - {{ with .File }} - {{ $path = .Path }} - {{ else }} - {{ $path = .Path }} - {{ end }} -` - p.s.Log.Warnln(".Path when the page is backed by a file is deprecated. We plan to use Path for a canonical source path and you probably want to check the source is a file. To get the current behaviour, you can use a construct similar to the one below:\n" + example) - - } - - return p.Pathc() + return p.pathInfo.Base() } -// This is just a bridge method, use Path in templates. 
-func (p *pageMeta) Pathc() string { - if !p.File().IsZero() { - return p.File().Path() - } - return p.SectionsPath() +func (p *pageMeta) PathInfo() *paths.Path { + return p.pathInfo } // RelatedKeywords implements the related.Document interface needed for fast page searches. @@ -256,31 +244,7 @@ func (p *pageMeta) IsSection() bool { } func (p *pageMeta) Section() string { - if p.IsHome() { - return "" - } - - if p.IsNode() { - if len(p.sections) == 0 { - // May be a sitemap or similar. - return "" - } - return p.sections[0] - } - - if !p.File().IsZero() { - return p.File().Section() - } - - panic("invalid page state") -} - -func (p *pageMeta) SectionsEntries() []string { - return p.sections -} - -func (p *pageMeta) SectionsPath() string { - return path.Join(p.SectionsEntries()...) + return p.pathInfo.Section() } func (p *pageMeta) Sitemap() config.SitemapConfig { @@ -309,79 +273,114 @@ func (p *pageMeta) Weight() int { return p.weight } -func (pm *pageMeta) mergeBucketCascades(b1, b2 *pagesMapBucket) { - if b1.cascade == nil { - b1.cascade = make(map[page.PageMatcher]maps.Params) - } - - if b2 != nil && b2.cascade != nil { - for k, v := range b2.cascade { +func (ps *pageState) setMetaPre() error { + pm := ps.m + p := ps + frontmatter := p.content.parseInfo.frontMatter + watching := p.s.watching() - vv, found := b1.cascade[k] - if !found { - b1.cascade[k] = v - } else { - // Merge - for ck, cv := range v { - if _, found := vv[ck]; !found { - vv[ck] = cv - } + if frontmatter != nil { + // Needed for case insensitive fetching of params values + maps.PrepareParams(frontmatter) + pm.pageMetaParams.params = frontmatter + if p.IsNode() { + // Check for any cascade define on itself. 
+ if cv, found := frontmatter["cascade"]; found { + var err error + cascade, err := page.DecodeCascade(cv) + if err != nil { + return err } + pm.pageMetaParams.cascade = cascade + } } + } else if pm.pageMetaParams.params == nil { + pm.pageMetaParams.params = make(maps.Params) } + + pm.pageMetaParams.init(watching) + + return nil } -func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, frontmatter map[string]any) error { - pm.params = make(maps.Params) +func (ps *pageState) setMetaPost(cascade map[page.PageMatcher]maps.Params) error { + ps.m.setMetaPostCount++ + var cascadeHashPre uint64 + if ps.m.setMetaPostCount > 1 { + cascadeHashPre = identity.HashUint64(ps.m.cascade) + ps.m.cascade = xmaps.Clone[map[page.PageMatcher]maps.Params](ps.m.cascadeOriginal) - if frontmatter == nil && (parentBucket == nil || parentBucket.cascade == nil) { - return nil } - if frontmatter != nil { - // Needed for case insensitive fetching of params values - maps.PrepareParams(frontmatter) - if p.bucket != nil { - // Check for any cascade define on itself. - if cv, found := frontmatter["cascade"]; found { - var err error - p.bucket.cascade, err = page.DecodeCascade(cv) - if err != nil { - return err + // Apply cascades first so they can be overriden later. + if cascade != nil { + if ps.m.cascade != nil { + for k, v := range cascade { + vv, found := ps.m.cascade[k] + if !found { + ps.m.cascade[k] = v + } else { + // Merge + for ck, cv := range v { + if _, found := vv[ck]; !found { + vv[ck] = cv + } + } } } + cascade = ps.m.cascade + } else { + ps.m.cascade = cascade } - } else { - frontmatter = make(map[string]any) } - var cascade map[page.PageMatcher]maps.Params + if cascade == nil { + cascade = ps.m.cascade + } - if p.bucket != nil { - if parentBucket != nil { - // Merge missing keys from parent into this. 
- pm.mergeBucketCascades(p.bucket, parentBucket) + if ps.m.setMetaPostCount > 1 { + ps.m.setMetaPostCascadeChanged = cascadeHashPre != identity.HashUint64(ps.m.cascade) + if !ps.m.setMetaPostCascadeChanged { + // No changes, restore any value that may be changed by aggregation. + ps.m.dates = ps.m.datesOriginal.dates + return nil } - cascade = p.bucket.cascade - } else if parentBucket != nil { - cascade = parentBucket.cascade + ps.m.setMetaPostPrepareRebuild() + } + // Cascade is also applied to itself. for m, v := range cascade { - if !m.Matches(p) { + if !m.Matches(ps) { continue } for kk, vv := range v { - if _, found := frontmatter[kk]; !found { - frontmatter[kk] = vv + if _, found := ps.m.params[kk]; !found { + ps.m.params[kk] = vv } } } + if err := ps.setMetaPostParams(); err != nil { + return err + } + + if err := ps.m.applyDefaultValues(); err != nil { + return err + } + + // Store away any original values that may be changed from aggregation. + ps.m.datesOriginal = ps.m.pageMetaDates + + return nil +} + +func (p *pageState) setMetaPostParams() error { + pm := p.m var mtime time.Time var contentBaseName string - if !p.File().IsZero() { + if p.File() != nil { contentBaseName = p.File().ContentBaseName() if p.File().FileInfo() != nil { mtime = p.File().FileInfo().ModTime() @@ -393,10 +392,12 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron gitAuthorDate = p.gitInfo.AuthorDate } + pm.pageMetaDates = pageMetaDates{} + pm.urlPaths = pagemeta.URLPath{} + descriptor := &pagemeta.FrontMatterDescriptor{ - Frontmatter: frontmatter, Params: pm.params, - Dates: &pm.Dates, + Dates: &pm.pageMetaDates.dates, PageURLs: &pm.urlPaths, BaseFilename: contentBaseName, ModTime: mtime, @@ -412,7 +413,7 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron p.s.Log.Errorf("Failed to handle dates for page %q: %s", p.pathOrTitle(), err) } - pm.buildConfig, err = pagemeta.DecodeBuildConfig(frontmatter["_build"]) + 
pm.buildConfig, err = pagemeta.DecodeBuildConfig(pm.params["_build"]) if err != nil { return err } @@ -420,7 +421,7 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron var sitemapSet bool var draft, published, isCJKLanguage *bool - for k, v := range frontmatter { + for k, v := range pm.params { loki := strings.ToLower(k) if loki == "published" { // Intentionally undocumented @@ -458,15 +459,6 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron if strings.HasPrefix(url, "http://") || strings.HasPrefix(url, "https://") { return fmt.Errorf("URLs with protocol (http*) not supported: %q. In page %q", url, p.pathOrTitle()) } - lang := p.s.GetLanguagePrefix() - if lang != "" && !strings.HasPrefix(url, "/") && strings.HasPrefix(url, lang+"/") { - if strings.HasPrefix(hugo.CurrentVersion.String(), "0.55") { - // We added support for page relative URLs in Hugo 0.55 and - // this may get its language path added twice. - // TODO(bep) eventually remove this. - p.s.Log.Warnf(`Front matter in %q with the url %q with no leading / has what looks like the language prefix added. In Hugo 0.55 we added support for page relative URLs in front matter, no language prefix needed. 
Check the URL and consider to either add a leading / or remove the language prefix.`, p.pathOrTitle(), url) - } - } pm.urlPaths.URL = url pm.params[loki] = url case "type": @@ -615,8 +607,8 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron if isCJKLanguage != nil { pm.isCJKLanguage = *isCJKLanguage - } else if p.s.conf.HasCJKLanguage && p.source.parsed != nil { - if cjkRe.Match(p.source.parsed.Input()) { + } else if p.s.conf.HasCJKLanguage && p.content.openSource != nil { + if cjkRe.Match(p.content.mustSource()) { pm.isCJKLanguage = true } else { pm.isCJKLanguage = false @@ -628,28 +620,39 @@ func (pm *pageMeta) setMetadata(parentBucket *pagesMapBucket, p *pageState, fron return nil } -func (p *pageMeta) noListAlways() bool { - return p.buildConfig.List != pagemeta.Always +// shouldList returns whether this page should be included in the list of pages. +// glogal indicates site.Pages etc. +func (p *pageMeta) shouldList(global bool) bool { + if p.isStandalone() { + // Never list 404, sitemap and similar. 
+ return false + } + + switch p.buildConfig.List { + case pagemeta.Always: + return true + case pagemeta.Never: + return false + case pagemeta.ListLocally: + return !global + } + return false +} + +func (p *pageMeta) shouldListAny() bool { + return p.shouldList(true) || p.shouldList(false) } -func (p *pageMeta) getListFilter(local bool) contentTreeNodeCallback { - return newContentTreeFilter(func(n *contentNode) bool { - if n == nil { - return true - } +func (p *pageMeta) isStandalone() bool { + return !p.standaloneOutputFormat.IsZero() +} - var shouldList bool - switch n.p.m.buildConfig.List { - case pagemeta.Always: - shouldList = true - case pagemeta.Never: - shouldList = false - case pagemeta.ListLocally: - shouldList = local - } +func (p *pageMeta) shouldBeCheckedForMenuDefinitions() bool { + if !p.shouldList(false) { + return false + } - return !shouldList - }) + return p.kind == kinds.KindHome || p.kind == kinds.KindSection || p.kind == kinds.KindPage } func (p *pageMeta) noRender() bool { @@ -660,17 +663,17 @@ func (p *pageMeta) noLink() bool { return p.buildConfig.Render == pagemeta.Never } -func (p *pageMeta) applyDefaultValues(n *contentNode) error { +func (p *pageMeta) applyDefaultValues() error { if p.buildConfig.IsZero() { p.buildConfig, _ = pagemeta.DecodeBuildConfig(nil) } - if !p.s.isEnabled(p.Kind()) { + if !p.s.conf.IsKindEnabled(p.Kind()) { (&p.buildConfig).Disable() } if p.markup == "" { - if !p.File().IsZero() { + if p.File() != nil { // Fall back to file extension p.markup = p.s.ContentSpec.ResolveMarkup(p.File().Ext()) } @@ -679,43 +682,26 @@ func (p *pageMeta) applyDefaultValues(n *contentNode) error { } } - if p.title == "" && p.f.IsZero() { + if p.title == "" && p.f == nil { switch p.Kind() { case kinds.KindHome: p.title = p.s.Title() case kinds.KindSection: - var sectionName string - if n != nil { - sectionName = n.rootSection() - } else { - sectionName = p.sections[0] - } + sectionName := p.pathInfo.Unmormalized().BaseNameNoIdentifier() 
if p.s.conf.PluralizeListTitles { sectionName = flect.Pluralize(sectionName) } p.title = p.s.conf.C.CreateTitle(sectionName) case kinds.KindTerm: - // TODO(bep) improve - key := p.sections[len(p.sections)-1] - p.title = strings.Replace(p.s.conf.C.CreateTitle(key), "-", " ", -1) + if p.term != "" { + p.title = p.s.conf.C.CreateTitle(p.term) + } else { + panic("term not set") + } case kinds.KindTaxonomy: - p.title = p.s.conf.C.CreateTitle(p.sections[0]) - case kinds.Kind404: + p.title = strings.Replace(p.s.conf.C.CreateTitle(p.pathInfo.Unmormalized().BaseNameNoIdentifier()), "-", " ", -1) + case kinds.KindStatus404: p.title = "404 Page not found" - - } - } - - if p.IsNode() { - p.bundleType = files.ContentClassBranch - } else { - source := p.File() - if fi, ok := source.(*fileInfo); ok { - class := fi.FileInfo().Meta().Classifier - switch class { - case files.ContentClassBranch, files.ContentClassLeaf: - p.bundleType = class - } } } @@ -734,12 +720,12 @@ func (p *pageMeta) newContentConverter(ps *pageState, markup string) (converter. 
var id string var filename string var path string - if !p.f.IsZero() { + if p.f != nil { id = p.f.UniqueID() filename = p.f.Filename() path = p.f.Path() } else { - path = p.Pathc() + path = p.Path() } cpp, err := cp.New( @@ -803,3 +789,89 @@ func getParam(m resource.ResourceParamsProvider, key string, stringToLower bool) func getParamToLower(m resource.ResourceParamsProvider, key string) any { return getParam(m, key, true) } + +type pageMetaDates struct { + dates resource.Dates +} + +func (d *pageMetaDates) Date() time.Time { + return d.dates.Date() +} + +func (d *pageMetaDates) Lastmod() time.Time { + return d.dates.Lastmod() +} + +func (d *pageMetaDates) PublishDate() time.Time { + return d.dates.PublishDate() +} + +func (d *pageMetaDates) ExpiryDate() time.Time { + return d.dates.ExpiryDate() +} + +func (ps *pageState) initLazyProviders() error { + ps.init.Add(func(ctx context.Context) (any, error) { + pp, err := newPagePaths(ps) + if err != nil { + return nil, err + } + + var outputFormatsForPage output.Formats + var renderFormats output.Formats + + if ps.m.standaloneOutputFormat.IsZero() { + outputFormatsForPage = ps.m.outputFormats() + renderFormats = ps.s.h.renderFormats + } else { + // One of the fixed output format pages, e.g. 404. + outputFormatsForPage = output.Formats{ps.m.standaloneOutputFormat} + renderFormats = outputFormatsForPage + } + + // Prepare output formats for all sites. + // We do this even if this page does not get rendered on + // its own. It may be referenced via one of the site collections etc. + // it will then need an output format. 
+ ps.pageOutputs = make([]*pageOutput, len(renderFormats)) + created := make(map[string]*pageOutput) + shouldRenderPage := !ps.m.noRender() + + for i, f := range renderFormats { + + if po, found := created[f.Name]; found { + ps.pageOutputs[i] = po + continue + } + + render := shouldRenderPage + if render { + _, render = outputFormatsForPage.GetByName(f.Name) + } + + po := newPageOutput(ps, pp, f, render) + + // Create a content provider for the first, + // we may be able to reuse it. + if i == 0 { + contentProvider, err := newPageContentOutput(po) + if err != nil { + return nil, err + } + po.setContentProvider(contentProvider) + } + + ps.pageOutputs[i] = po + created[f.Name] = po + + } + + if err := ps.initCommonProviders(pp); err != nil { + return nil, err + } + + return nil, nil + }) + + return nil +} |