Improving Performance of Your Sitecore Platform
Apr 27, 2022 • 36 Minute Read • Richard Cabral, Technical Director
Introduction
My name is Rick Cabral, known affectionately here in the Northeast US as "Sergeant Sitecore." I've been working with Sitecore for over 15 years starting with the release of Sitecore 5.0 back in 2005. In that time I've created and led dozens of top-notch Sitecore development teams. While my teams have worked on scores of "new" Sitecore projects, we've also "rescued" dozens of failed projects or faltering installations.
Here at Verndale, we're seeing a recurring theme among these rescue missions: "Sitecore is not performing well." It might be that the website is failing under significant traffic loads, or it might be that the website is simply slow. Because the website is being run through Sitecore, the brand name gets the ding, but it's seldom a problem with the Sitecore product itself. (We'll get to the exceptions later). Let's unpack how my team identifies and fixes performance problems.
Get Performance Analysis Tools in Place
Often we get two kinds of non-specific complaints about website performance:
- Users are complaining the site is slow.
- We are having regular site outages.
While subjective complaints are fine for a starting point, your Sitecore team is going to want to have an objective, scientific way to identify the site's current performance and measure real progress via testable KPIs.
Use Google Analytics to Identify High Priority Pages
Know what your visitors do on site. Just because a given page is slow doesn't mean you should move heaven and earth to fix it.
- Use the 80/20 rule and only address pages that are high-traffic and high-value to your conversion stream.
- Look for pages where visitors drop off and push those pages through Page Speed Insights (below) to see if performance is a factor in those exits.
- Ensure your entry pages are lightning fast (e.g. - home page, campaign pages, product pages, keyword optimized pages).
- Check that all the key pages in your conversion funnel are good performers.
Use Google Page Speed Insights for Individual Page Load Metrics
Google's Page Speed Insights, built on the same Lighthouse engine that ships with Chrome's DevTools, is a wonderful tool for both diagnosing site performance problems and giving you a wrapped-up KPI for measuring improvements. With Page Speed Insights you get:
- Objective analysis of how long it takes your pages to load over broadband and mobile networks.
- Objective analysis of a user's perceived page speed.
- HTML-specific recommendations that you can put directly in your defect queue.
Simply addressing the recommendations in a Page Speed Insights report will yield a visibly snappier site without touching Sitecore at all.
Use a Load Testing Tool for Infrastructure Capacity Metrics
If you've never run a load test on your production installation, you have no idea how many concurrent users it can support. Here at Verndale we use K6, https://k6.io, to simulate real-world visitor click paths, including loading all assets per page. Load testing tools can be as simple to use as recording a visitor session in your browser and uploading it to the bot network to test. K6 gives you outstanding real-time monitoring of your load test, and allows you to schedule capacity bumps in stages to help identify the effectiveness of caching strategies.
A load testing network will give you information on:
- Whether your existing environment can support your "normal" daily traffic volume
- Whether your environment can support a "peak day" event like a holiday sale, a marketing blitz, or an email campaign
- What kind of errors your environment throws when it reaches capacity
- How much traffic is needed to trigger any auto-scaling you have configured on cloud installations
- Which URLs in the click path are causing problems (e.g. - time to response, error rate, large numbers of downloads, overall payload size)
- Which URLs would benefit from further mitigation strategies
Here at Verndale we put every new website project through a load test before launch to ensure that the system can actually handle normal daily traffic, as well as the increased load caused by marketing the site on launch day.
Use These Tools to Measure the Impact of Mitigation Strategies
Whenever you deploy a fix, use the tool that indicated the performance problem to verify the problem is resolved. Track your statistics in Page Speed Insights over time to see if you're making progress. Re-running tests will make sure your improvements are effective and economical. It's possible (particularly with Page Speed Insights) to focus on problems that are identified as important but:
- Are intractable problems due to factors beyond your control - Google Page Speed Insights regularly dings Google Tag Manager as a poor performer, but your DM team certainly isn't going to turn that off.
- Are extremely difficult to fix, but are only good for 1/2 a point in your overall Page Speed score. Focus on the big wins. A CDN for Media Library items will give you more ROI than converting all your images to WebP format.
- Affect users that are already invested in the journey and can tolerate a slow performing page due to the value it provides (i.e. - low ROI to fix).
- Are a statistically significant problem that affects a statistically insignificant number of users (i.e. - low ROI to fix).
Whenever you make any change to your site, part of your DevOps strategy should involve performance analysis using the tools mentioned here. Marketing needs will change. The site will evolve. It's entirely possible to introduce something "new" that will have an adverse effect on site performance. Diagnostic tools can help prevent launch-day catastrophes.
Now that we can evaluate our current condition and benchmark improvements, let's get into the actual problems that we've encountered.
25 Common Performance Problems in Sitecore-Specific Installations
I'm going to list these problem areas in order of expense to fix. We're going to start with the low-hanging fruit and work our way up to major infrastructure changes.
Problem: Poor Media Library Management
My number-one cause of poor page performance on Sitecore-run sites is large image file sizes. I rank this one at the absolute top because it's 100% preventable if the developer guards against human nature from the start. Content authors are often not aware of what causes poor page performance, and they're also not necessarily masters of Photoshop. They pick a good picture for the task at hand, and upload it to Sitecore. If that picture came off a 30MP camera, it's going to be huge. There are a few ways to defend against this behavior:
- If you don't need to handle large media items (like 100MB technical PDFs), set the maximum media upload size in Sitecore to something aggressive enough to prevent large images from being added in the first place. This is the less kind option, but it's also the fastest to implement in a damaged installation.
- You can specify the image dimensions you need when you render the image on-page. Sitecore will resize the image on the server and only deliver the pixels you actually need to the browser. There are some implementation concerns, and this should be implemented as part of a defense-in-depth image management plan. See the sections on Image Transformation Services and CDNs below for more details.
- You can implement an Item Created event handler that processes images as they're added to Sitecore. (A sketch follows this list.)
- If you're using a Digital Asset Management (DAM) system like Sitecore's Content Hub, you can build resizing and optimization into your image export strategy. This way, images used in Sitecore are always web-ready.
Sitecore is a Content Management System (CMS), not a digital asset warehouse. The only digital assets that should be added to Sitecore are ones that are already optimized for web delivery. If you don't have an in-house plan for processing images for the web before they go to the content team, you need to address this in your content lifecycle immediately.
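As a sketch of the event handler option above: the handler below hooks item:saved (by the time a media item is saved, its blob is attached), downsizes oversized JPEGs with System.Drawing, and writes the smaller blob back. The class name, config wiring, and 1920px ceiling are assumptions, not a drop-in implementation; production code would also need format detection and error handling.

```csharp
using System.Drawing;
using System.IO;
using Sitecore.Data.Items;
using Sitecore.Events;

// Hypothetical handler, wired up via an include config patch:
// <event name="item:saved">
//   <handler type="MySite.Media.ImageShrinker, MySite" method="OnItemSaved" />
// </event>
public class ImageShrinker
{
    private const int MaxWidth = 1920; // assumption: the site never renders wider

    public void OnItemSaved(object sender, System.EventArgs args)
    {
        var item = Event.ExtractParameter(args, 0) as Item;
        // "Jpeg" is the media template name for .jpg/.jpeg uploads.
        if (item == null || !item.Paths.IsMediaItem || item.TemplateName != "Jpeg")
            return;

        var media = new MediaItem(item);
        using (var stream = media.GetMediaStream())
        using (var original = Image.FromStream(stream))
        {
            if (original.Width <= MaxWidth)
                return; // already web-ready - also prevents re-processing loops

            int newHeight = original.Height * MaxWidth / original.Width;
            using (var resized = new Bitmap(original, MaxWidth, newHeight))
            using (var buffer = new MemoryStream())
            {
                resized.Save(buffer, System.Drawing.Imaging.ImageFormat.Jpeg);
                buffer.Position = 0;

                // Suppress events so writing the blob doesn't re-fire item:saved.
                using (new Sitecore.Data.Events.EventDisabler())
                {
                    item.Editing.BeginEdit();
                    item.Fields["Blob"].SetBlobStream(buffer);
                    item.Editing.EndEdit();
                }
            }
        }
    }
}
```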
Problem: Not using Sitecore's Image Transformation Services
Sitecore itself has always offered on-the-fly image transformation via query string parameters. (The Sitecore UI makes heavy use of this feature). Used in conjunction with the <picture/> element, you can get right-sized images every time even before you factor in CDN-based image processing. Developers reaching for "Dianoga" would be better served using Sitecore's media URL transformation parameters.
Some tips for Sitecore's Image Transformation:
- Get familiar with the XML config settings Sitecore uses to handle image resize requests. Most of these settings map to the .NET System.Drawing namespace and are lightly documented in the Sitecore config files under "configuration/sitecore/settings." Look for setting keys that start with "Media." (See the sketch after this list.)
- There are settings for the resampling/resizing algorithms to use. Make sure you pick the most efficient algorithm. We use Media.InterpolationMode = Bicubic.
- By default Sitecore is set to resize files at maximum (100%) quality, which is not only the slowest operation, but also produces the largest file size. This setting will undo any pre-upload optimization you've done on your images. Make sure this value is set to match your in-house compression standards for images. We usually go with 80%.
- Aspect ratios matter. When re-scaling an image, it's wise to decide which dimension is "canonical" and only resize by width or height to ensure the image does not distort. With responsive design, the canonical dimension is usually width.
- Image resizing can be used to ease the content author burden of maintaining different images at different aspect ratios for different breakpoints, as long as your design uses the same aspect ratio at all breakpoints. If you need to crop, don't use Sitecore's Image API to crop. Just get the canonical dimension correct and mask the other dimension with CSS.
- As mentioned in the last tip, Sitecore cannot reliably "crop" images. If you need to change aspect ratios, consider using a CSS mask along with content fields that allow the content author to move the image within the mask to keep the image's focus visible.
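As an illustration, here is a minimal sketch of both forms of the transformation: the raw query-string syntax, and the same URL generated in code via MediaUrlOptions. The media path and width are hypothetical. Note that Sitecore 7.5+ media request protection requires resize parameters to be signed with a hash, which ProtectAssetUrl appends.

```csharp
using Sitecore.Data.Items;
using Sitecore.Resources.Media;

// Hand-written form (hypothetical path): Sitecore resizes server-side and
// caches the result, so the browser downloads only the pixels it needs:
//
//     /-/media/Images/hero-banner.jpg?mw=800
//
// (mw = max width; the Media.* settings above control quality/interpolation.)
public static class MediaUrls
{
    public static string ResizedUrl(MediaItem media, int maxWidth)
    {
        var options = new MediaUrlOptions { MaxWidth = maxWidth };
        string url = MediaManager.GetMediaUrl(media, options);

        // Sign the URL: media request protection (7.5+) rejects unsigned
        // resize parameters to prevent abuse via arbitrary sizes.
        return HashingUtils.ProtectAssetUrl(url);
    }
}
```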
A Note About "Dianoga"
Many Sitecore developers attempt to circumvent this problem by installing an open source "plugin" for Sitecore called "Dianoga." This oddly-named tool runs 3rd party optimization strategies on images as they're requested by your site's visitors. Here at Verndale we recommend against this plugin for a number of reasons:
- Because images are optimized on the fly, image-heavy pages can cause a lot of unplanned load on your content delivery servers. This can cause other kinds of performance problems that I would classify as "poor Sitecore development strategies."
- Large images remain in the Sitecore content databases, which can have an impact on your hosting costs as well as the performance of your Sitecore infrastructure.
- "Dianoga" can call 3rd party services for image compression, which can impact overall page request time as well as generate unusual server-side errors if those services become unavailable.
- A more responsible developer intervention is to build an Event Handler that processes images when they're uploaded into Sitecore (see the sketch in the Media Library section above). This straightforward development task ensures that all images are web-ready as they're uploaded. It keeps the content database small and prevents unnecessary problems on your Content Delivery servers.
Problem: Not Using Edge-Based Image Performance
If you are employing a CDN in any capacity (and you should be), your provider may offer image optimization services that require little-to-no developer intervention. Akamai Image Services, for example, takes an image of any source type (GIF/JPEG/PNG) and creates a WebP variant, which it adds to its cache. If the user's browser supports WebP, Akamai will serve the browser the WebP file, regardless of the file type mentioned in the URL. This seamless optimization means developers do not need to alter the media URLs provided by Sitecore, and don't need to specify all possible formats in their <picture/> elements. CDN-based image management can be an extremely low-effort page speed improvement. It can also be extremely cost effective. For example, Cloudflare's new "Polish" image processing service is included in Business level contracts.
Problem: Storing Video or Audio Files in the Media Library
Sitecore is not capable of streaming media assets like video. When a user stores a video file in the Media Library, the browser must download the entire file before starting playback. Since even a small video file is several megabytes in size, this can destroy a visitor's page performance immediately. If there are a significant number of users on your site, a single Media Library video on your home page can take down your website entirely by overloading the Content Delivery server's ability to respond to requests.
Use a 3rd party video streaming provider like Brightcove, Vimeo, or even YouTube to host your videos, and integrate them into Sitecore-hosted pages using their "embed" style players or JavaScript APIs. These 3rd party players can optimize the video's size based on the size of the player on page and the user's available bandwidth, ensuring visitors get the highest-performance experience.
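For reference, a minimal sketch of a standard YouTube iframe embed (VIDEO_ID is a placeholder); the same idea applies to Brightcove's and Vimeo's players.

```html
<!-- The hosted player streams at a bitrate matched to the viewer's bandwidth,
     and loading="lazy" defers the iframe until it nears the viewport. -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        title="Product overview video"
        loading="lazy"
        allow="autoplay; encrypted-media; picture-in-picture"
        allowfullscreen></iframe>
```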
Verndale Best Practice
Here at Verndale we use a "belt and suspenders" approach:
- Optimize images before or during uploading.
- Use Sitecore image resizing for responsive <picture/> element URLs.
- Push all media URLs through your CDN's edge-based image optimizer.
- Host video on a 3rd party service, not in the Media Library.
Media-based performance mitigation can be done in stages by implementing the above steps in any order.
Poor Quality HTML and JavaScript
Discussing the full nature of modern, quality, responsive HTML is beyond the scope of this article. Running Google Page Speed Insights on a page will also provide an incredible amount of advice on how to ensure an HTML document is organized for high performance. We'll take a moment to talk about a few key developer behaviors that can have a negative impact on your Sitecore installation:
Problem: Not Using the <picture/> Element
All responsive websites developed in the last 3-5 years should be using the <picture/> element instead of the traditional <img/> element. For each breakpoint on your website, HTML developers should be specifying the exact URL of the image to display in a given component. If you have 3 breakpoints, there should be 3 image URLs. Each image should be exactly the size needed for that breakpoint and optimized for that size. This strategy can shave megabytes off of your page load at the mobile breakpoint. Given that in 2022 most website views come through the smartphone rather than the desktop, optimizing that experience with right-sized images should be priority one.
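A minimal sketch of the pattern, assuming three breakpoints and Sitecore's max-width parameter producing each variant (paths and sizes are hypothetical):

```html
<!-- One image URL per breakpoint; the browser downloads exactly one of them.
     The mw query string lets Sitecore serve a right-sized rendition. -->
<picture>
  <source media="(min-width: 1200px)" srcset="/-/media/hero.jpg?mw=1600">
  <source media="(min-width: 768px)"  srcset="/-/media/hero.jpg?mw=1024">
  <!-- The fallback <img> doubles as the mobile variant; lazy-load it only
       if the image sits below the fold (see the next section). -->
  <img src="/-/media/hero.jpg?mw=480" alt="Hero banner" loading="lazy">
</picture>
```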
Problem: Not Lazy Loading Images (and Not Using the <picture/> Element)
We discussed using the <picture/> element previously. Ensure that the image that gets downloaded has just enough bytes to fill the space for a given breakpoint. We can optimize one step further by adding the loading="lazy" attribute to image tags, as in the sketch above. This tells the browser not to download the image immediately, but to wait until the image is about to scroll into the viewport. Adding loading="lazy" to your below-the-fold image tags can have a remarkable effect on performance (leave it off above-the-fold "hero" images, which should load immediately).
Problem: Excessive Font References and Font File Sizes
I once saw a home page that clocked in at 20MB! The largest contributor to size was a collection of 30 (thirty) font references in the HTML document header. Every possible variant of the given fonts had been loaded, although only 3 fonts were used and each only needed one variant.
- If you can't use a "web safe" font, make sure the website uses fonts in the WOFF2 (or at least WOFF) format, the most efficient and web-friendly formats for font files.
- Consider using a font from a public library like fonts.google.com for speed.
- Make sure that you only link to the font variants that are actually used on your page.
Problem: Poor JavaScript Organization
We recently encountered a client that had 85 script files referenced on every page of their site. Performance was predictably poor.
Each URL in your HTML document requires a request to the server to download, and (over HTTP/1.1) each request needs its own connection. It's much faster to download one large document than 20 small ones due to the overhead of managing those connections. Additionally, all browsers have a limited number of connections they can establish at any given time. As soon as the HTML author exceeds that limit, no more assets will be loaded until an existing connection is closed. For JavaScript files, this can be mitigated with the "async" or "defer" attributes. Their use is well documented and beyond the scope of this article.
Assuming deferred JavaScript files, another common sin is to load all JavaScript required for the entire site on every page in a given Sitecore installation. Considering this puts the largest burden on the first page a visitor encounters, it's far from ideal. Instead, JavaScript should only be loaded if an HTML component on page requires it.
Problem: Putting Too Much Data in the HTML Document
While this is also a Sitecore development strategy issue, the solution involves changes in the way the HTML document is constructed and thus it's relevant here.
Consider a contact form that exists in the header of every page of a site. The form is "hidden" behind a button and only rolls out if the user engages with it. The form includes a "country" and "state/province" selector, both of which are dropdowns. Between the two dropdowns you have more than 500 discrete <option/> elements. For a multi-language site, these options must be represented as content items within Sitecore. Generating this list involves a significant amount of CMS data processing and produces a large amount of HTML that dilutes the SEO relevance of the page by filling the top 500KB of the document with generic facts. Having all 500 of these options (indeed, having the form at all) included in the original page request is inefficient and hurts SEO.
The solution is to remove the form from the hosting page and only deliver it when the button to display it is clicked. This AJAX approach has the following benefits:
- The page's desired keyword density goes up.
- The page loses 0.5MB worth of dead weight.
- The page loads 100-500ms faster.
- Because the form is its own request, you can actually cache the result of the form request (JSON or HTML fragment) discretely either through Sitecore's cache system or better yet, through a CDN. This ensures the form loads quickly and doesn't burden your Content Delivery servers.
Verndale Best Practice
- Use "async" and "defer" in your <script/> elements to ensure the browser doesn't stop processing the page to grab a JavaScript file that's not needed immediately.
- Use a script bundler to merge individual module files into a single file that can be compressed and optimized for quick download.
- Use a module management tool like Antler or WebPack that only retrieves module-specific JavaScript if the module itself appears on page. (This is critical behavior for Sitecore websites.)
- If you must use individual script files for modules, consider having the <script/> element included within the HTML output of the Sitecore Rendering that requires the JavaScript. (See the sketch after this list.)
- Use AJAX calls to load heavy data that is:
- Not required for the page to display properly
- Appears on more than one page
- Is bulky enough to affect HTML document size
- Has a negative impact on SEO
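Here is a minimal sketch that combines the last two practices: a Rendering view that ships its own trigger markup and script, and fetches the heavy contact form fragment only on demand. The fragment URL, element IDs, and file name are hypothetical.

```cshtml
@* ContactFormTrigger.cshtml - a hypothetical View Rendering. The 500-option
   form is NOT in the page; it's fetched (and CDN-cacheable) on demand. *@
<div id="contact-form-slot">
  <button type="button" id="contact-form-open">Contact us</button>
</div>

<script>
  // This script ships with the component, so pages without this Rendering
  // never pay for it. (Use "defer" for external files; it's a no-op inline.)
  document.getElementById('contact-form-open').addEventListener('click', function () {
    fetch('/api/fragments/contact-form')   // hypothetical fragment URL
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('contact-form-slot').innerHTML = html;
      });
  });
</script>
```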
Poor Sitecore Rendering Strategies
We've been going through common performance problems in order of expense to fix, and we've only just started to talk about Sitecore development issues. Because Sitecore problems are a broad topic, we're going to start with the lowest hanging fruit: mistakes made by junior Sitecore developers that directly impact Rendering performance. These mistakes are the cheapest to fix, and are where I start every diagnostic of a poorly performing Sitecore installation.
Problem: Failure to Use the Sitecore Output Cache
If I encounter a poorly performing Sitecore installation, I can almost guarantee the developers didn't activate any of the built-in Rendering caching flags. The side effect of this is that every page component must ask the Sitecore Data layer for relevant Items on every request. The "average" page in Sitecore can reference 20 to 100 discrete content items. Depending on other settings, this can cause a large amount of memory churn and database access. Output cache management is generally thought of as an "end game" optimization, but in reality output caching is an architectural element that must be carefully planned into your installation.
Verndale Best Practice
- Any Rendering that references a Datasource Item should be cached, "vary by Datasource." (See the sketch after this list.)
- Any Rendering that defines its output based on the Context Item can also be cached "vary by Datasource."
- Any Rendering that retrieves its output based on the Context Database, Language, or Site should be reorganized to get that information from the Context Item and then see Rule #2.
- Renderings that rely on multiple disparate content Items should be broken up into separate Renderings that rely on exactly 1 of those Items, then see Rule #1.
- Any Rendering that retrieves its output based on XPATH needs a root Item, and that Item should be its Datasource, then see Rule #1.
- Any Rendering that retrieves its output from ContentSearch still needs some context. Get that context from an Item, name it the Datasource, then see Rule #1.
- Any data that absolutely, positively cannot be cached because it cannot be unique to a given URL and/or is dependent on User interaction should be handled via AJAX to reduce page assembly time.
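Cache flags normally live on the Rendering definition item (the Caching section: Cacheable, Vary By Data, and so on), but they can also be supplied where a Rendering is statically bound in a view. A minimal sketch of Rule #1, assuming a hypothetical PromoCard rendering:

```cshtml
@* One cached copy of the HTML per Datasource Item: every page that reuses
   this Datasource is served from the output cache instead of re-querying
   the data layer. The rendering item path is a hypothetical example. *@
@Html.Sitecore().Rendering(
    "/sitecore/layout/Renderings/Feature/PromoCard",
    new { Cacheable = true, Cache_VaryByData = true })
```

The same parameters (Cacheable, Cache_VaryByData, Cache_VaryByUser, etc.) correspond one-to-one to the Caching checkboxes on the Rendering definition item, which is the more common place to set them.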
Problem: Failure to Break the Header and Footer Up Into Small Enough Components to Take Advantage of the Output Cache
The visible "Header" of any modern website consists of many parts:
- Logo/Home Link
- Utility navigation such as Login, Register, or Change Language
- A Search feature
- Primary navigation, which often is a series of expanding buttons, and which often also conveys context by highlighting the link that defines what part of the site the visitor is viewing
- Stock tickers, user login status, shopping carts and other state indicators
Junior developers will often start programming a header as a single component. This causes all kinds of problems because you cannot cache the entire header:
- The language selector is bound to the current user's selection.
- The primary navigation will change state based on what page you're on.
- Any user-specific features will vary as the user interacts with them.
Historically, Primary Navigation is a demanding component from a data retrieval perspective. Not only does one have to interrogate a few hundred items for links, but the relationship of those links to the currently viewed page must also be divined. Caching the Primary Navigation on a page-by-page basis is critical to getting Sitecore performance where it needs to be. (This also applies to Fat Footers and the like.)
Instead of having a single Header, the Header should be broken out into a series of placeholders, each of which holds a Rendering that can be cached based on Sitecore's available cache parameters: Data, Context Item, Language, Querystring, User, etc...
The key development philosophy is to keep Sitecore page components as small as physically possible. This is usually determined by the uniqueness of the data they're displaying.
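A minimal sketch of that decomposition, with hypothetical placeholder keys; each placeholder hosts a small Rendering whose cache parameters match the uniqueness of its data:

```cshtml
@* Header.cshtml - the header itself is just a frame; every moving part is a
   small Rendering with its own cache rule. Placeholder keys are hypothetical. *@
<header>
  @Html.Sitecore().Placeholder("header-logo")    @* cacheable, no variation *@
  @Html.Sitecore().Placeholder("header-search")  @* cacheable, vary by Site *@
  @Html.Sitecore().Placeholder("header-nav")     @* cacheable per page (vary by Datasource) *@
  @Html.Sitecore().Placeholder("header-user")    @* vary by User, or fetch via AJAX *@
</header>
```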
Problem: Using MVC Partial Views
An MVC "Partial View" allows the developer to reference one HTML fragment from another. They can also pass information from the "parent" fragment to the "child" fragment. The problem with this technology is that it's completely invisible to Sitecore.
- You cannot set output cache directives on it.
- Content Authors cannot manage whether the partial view appears on page or not.
- Sitecore's debugger cannot provide information on the rendering of a partial view, preventing important diagnostics from being available to developers.
- Depending on how (or if) data is passed parent-to-child, the child view may not maintain the same "context" as the Parent, leading to problems where off-language or protected content bleeds through.
Developers should never use Partials in Sitecore. Instead, convert each Partial into a full-fledged Rendering bound to a Placeholder for maximum caching, personalization, and multivariate testing possibilities.
Problem: Using MVC Child Actions
Similar to "Partial Views," Child Actions are a way for developers to reference controller-hosted functionality from within a "parent" view. They have exactly the same problems as "Partial Views" in Sitecore, and cannot be cached, designed, debugged, as well as are susceptible to losing mission critical request context.
Developers should never use Child Actions in Sitecore. Instead, convert Child Actions to full-fledged Renderings bound to Placeholders for maximum caching, personalization, and multivariate testing possibilities.
Problem: Failure to Use the ContentSearch API for Bulk Item Retrieval
Sitecore's standard developer training does not cover extensive use of the ContentSearch API, which is the data retrieval API backed by Solr indexes. As such, junior Sitecore developers will instead lean on the older XPATH API to retrieve bulk Sitecore Items. The XPATH API is good for two purposes:
- Locating a very specific item at a specific place in the content tree relative to another item.
- Locating a collection of items at a specific depth in the content tree relative to another item.
When developers need to retrieve a series of Items that may be scattered throughout the content tree, or when they need to retrieve thousands of Items, the XPATH API lacks the performance to handle the task quickly and will rapidly consume available server power if the site is under load.
The solution is to move all bulk item retrieval to the ContentSearch API. However:
- Developers must take care to ensure that their query results don't require further processing. Don't loop through the result set or use LINQ-to-objects to further restrict it; LINQ is loops, and loops are slow (O(n) slow). Instead, refine your Solr filters to remove any unwanted results so that the results returned from the ContentSearch API are immediately accurate and sufficient for the task at hand.
- Developers must not "hydrate" ContentSearch API results into full-fledged Sitecore Items, which effectively defeats the purpose of the ContentSearch API.
Verndale Best Practice
- Utilize ContentSearch to locate groups of Items larger than 20, particularly if they're not direct relatives of each other.
- Ensure all ContentSearch queries are bounded by Context Site, Language, Database, Template, and location in the content tree before specifying fuzzy matches. This prevents looping through the results to weed out settings items or standard-values objects.
- Provide the ContentSearch API with a custom "POCO" that represents only the fields required for display and the fields explicitly required to refine the search (see the sketch after this list). This prevents looping through the results writing to more output-specific classes.
- Use EDISMAX search parameters when performing text search to ensure accurate results with appropriate slop, word stemming, and relevance priority by fields searched. This improves result set accuracy and reduces "garbage" items that do not appear to contain relevant matches.
- Define searchable Item types (Templates) such that all searchable fields are on the Item itself, and not merely referenced by a Rendering's Datasource. This improves result set accuracy and reduces the computational load of a search.
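Here is a minimal sketch of the POCO pattern under those constraints, assuming a stock Solr setup; the index field names, template ID, and root Item are hypothetical placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.SearchTypes;
using Sitecore.Data;

// Minimal POCO: only what's needed to filter and display. The index field
// names are hypothetical - map them to your own Solr schema.
public class ArticleSearchItem : SearchResultItem
{
    [IndexField("headline")]
    public string Headline { get; set; }

    [IndexField("publish_date")]
    public DateTime PublishDate { get; set; }
}

public static class ArticleFinder
{
    public static List<ArticleSearchItem> FindRecent(ID articleTemplateId, ID rootItemId)
    {
        // "sitecore_web_index" is the stock web index name; adjust per environment.
        var index = ContentSearchManager.GetIndex("sitecore_web_index");
        using (var context = index.CreateSearchContext())
        {
            // Bound by template, language, and tree location up front, so the
            // results need no post-filtering and no Item "hydration."
            return context.GetQueryable<ArticleSearchItem>()
                .Where(x => x.TemplateId == articleTemplateId)
                .Where(x => x.Language == Sitecore.Context.Language.Name)
                .Where(x => x.Paths.Contains(rootItemId))
                .OrderByDescending(x => x.PublishDate)
                .Take(20)
                .ToList();
        }
    }
}
```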
Problem: Using the Sitecore ItemService or Item REST API to Retrieve Item Data for AJAX Calls
Developers who became familiar with Sitecore during the version 7 series were introduced to "native" Sitecore APIs created for the "brand new" WYSIWYG Experience Editor. However, these APIs were never meant to be used for public content delivery. They present such a security risk that they're disabled by default on "Content Delivery" servers. They also lack the cache layers of the standard HttpRequest pipeline and do not scale. Sites with poor performance and a lot of AJAX calls tend to be suffering from inappropriate use of this particular Sitecore feature.
The ItemService API:
- Does not support output caching
- Does not support key Sitecore URL management features such as automatic Site, Language, Security and Version resolution
- Does not enforce Item security
- Allows access to content-in-progress
- Does not enforce Publishing restrictions
- Is a two-way protocol, enabling API consumers to write content back to Sitecore
- Does not participate in any XDB runtime processing of user behavior or content profiling
Verndale Best Practice
Rather than use the API "folder," the best approach is to use Item Controllers and treat any AJAX API calls as if they were page URLs within a given site (a sketch follows the list below). This has the following benefits:
- The AJAX call passes through Sitecore's HttpRequestPipeline, which ensures proper Site, Language, and Security resolution.
- Depending upon the strategy, the output of the API can be cached like any page fragment, allowing for very high performance responses to AJAX requests.
- The AJAX response can also be cached downstream in the CDN, and cache directives can be installed in a manner consistent with all other public Sitecore output.
- If a given page requires an AJAX call, a URL can be re-used via Sitecore's Device management strategy. This allows a URL that normally returns HTML to return JSON or XML based upon a simple querystring parameter.
- AJAX requests handled as above can also be subject to personalization rules, will fire goals, and will participate in content profiling.
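A minimal sketch of the approach, assuming a Controller Rendering whose action returns JSON; the controller name and field names are hypothetical.

```csharp
using System.Web.Mvc;
using Sitecore.Mvc.Controllers;
using Sitecore.Mvc.Presentation;

// A hypothetical Controller Rendering action. Because it runs inside a normal
// page request, the HttpRequest pipeline has already resolved Site, Language,
// Database, security, and the Context Item before this code executes.
public class OfficeLocationsController : SitecoreController
{
    public ActionResult AsJson()
    {
        // The Datasource Item Sitecore resolved for this Rendering.
        var item = RenderingContext.Current.Rendering.Item;

        return Json(new
        {
            name  = item["Office Name"],   // field names are hypothetical
            phone = item["Phone Number"]
        }, JsonRequestBehavior.AllowGet);
    }
}
```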
Poor Sitecore Architecture Decisions
In my experience, a few bad choices in Sitecore architectural design can render an installation hopeless. In this section, we'll again be looking at problems in order of expense to fix. If you need to start changing your Sitecore architecture, be aware that you're about to make a significant investment.
Keep in mind that I'm limiting the conversation to development problems that can negatively impact performance. This isn't an exhaustive list of Sitecore faux pas.
Problem: Glass Mapper and Code Generators
Between late versions of Sitecore 6 and the introduction of Sitecore 9 an ORM craze struck the Sitecore developer community. There was a desire to further abstract Sitecore's Item objects into a more class-like structure that closely mirrored the Template structure defined by developers. "Glass Mapper" became the de-facto open source solution to this problem and was implemented broadly. However, Glass Mapper and similar technologies have a number of performance downsides:
- Glass Mapper relies heavily on Inversion of Control and Dependency Injection to "infer" the right objects to create at runtime. This has a negative impact on Sitecore startup as well as realtime data retrieval.
- Glass Mapper was designed to be a read/write solution, as such, Glass objects consume a lot of memory and "hold on" to objects much longer than necessary, which can affect system performance.
- Because of the way Mapped classes need to be constructed, it's common for code using Glass Mapper to have de-centralized access to the Sitecore data layer. This makes it very hard to optimize expensive data operations.
- The "tree like" nature of Mapped classes encourage developer behavior that will degrade performance. It's extremely common for programmers to loop through collections of expensive objects rather than use the ContentSearch API to retrieve answers at a fraction of the time cost.
While out of the scope of performance problems, Glass Mapper introduces a number of compile-time and DevOps concerns as well. The Sitecore Developer community has almost universally moved away from Glass Mapper as best practice in favor of lighter-weight, high performing solutions.
Verndale Best Practice
- Do not use Glass Mapper or any ORM for Presentation Layer development tasks. (See the sketch after this list.)
- Do not use Glass Mapper or any ORM for any Sitecore data manipulation development tasks.
- Do not use Glass Mapper or any ORM with a code-generation system/T4 Templates requirement.
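For contrast, here is a minimal sketch of the ORM-free alternative for presentation work: Sitecore's stock RenderingModel plus the Field helper. Field names are hypothetical; the helper keeps output editable in the Experience Editor.

```cshtml
@* No mapper, no code generation: read fields straight off the resolved
   Datasource Item. Field output remains editable in the Experience Editor. *@
@model Sitecore.Mvc.Presentation.RenderingModel

<h2>@Html.Sitecore().Field("Title", Model.Item)</h2>
<div class="body-copy">@Html.Sitecore().Field("Body", Model.Item)</div>
```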
Problem: A Content Tree That Doesn't Support Efficient XPATH Navigation
When a Sitecore solution has performance issues getting data out of the database, the culprit is often bad content tree design. A junior school of thought tends to organize content into silos by content type. While this may work for a "product catalog," it breaks down quickly when organizing page fragments. The Information Architect designing the content tree needs to look at a given Page in the tree alongside the related Items it's likely to access. These should be stored as close to the page Item as practically possible, depending on whether the Page has exclusive access to that data or whether the data is shared by multiple pages. If a page consists of Renderings that reference Items scattered broadly throughout the content tree, it becomes very difficult to build queries that grab the appropriate data efficiently. Bad content tree design leads to inefficient or slow XPATH statements, or an over-use of the ContentSearch API that can overload your backing Solr installation.
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Verndale Best Practice
- Page fragments that are exclusive to a Page are stored as children of the Page.
- Page fragments that are shared between Pages are stored as children of the Site where Pages can reference them.
- "Catalogs" of Pages (Blogs, Press Releases, Products, People) are stored together, sitemap-style according to their best SEO URL.
- "Catalogs" of non-pages (Products, taxonomical systems, etc...) are stored next to the Items that use them.
Problem: A Lack of Semantic Page Templates
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Some developers over-embrace the concept of reusable components, or use an off-the-shelf framework like Sitecore SXA to build up a site from a single concept of "page." While this provides tremendous flexibility to the Content Author, who can put anything anywhere, it creates scenarios where it's extremely difficult to locate specific content when establishing lists, faceted search, taxonomies, or even simply "related content." Effectively a variant on "Bad Content Tree Design" above, getting data out of Sitecore becomes extremely resource intensive, which introduces performance problems.
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Verndale Best Practice
- Isolate the use of "generalist" page types to areas where content should be freeform and seldom directly referenced.
- Identify business concepts that require strong organization and classification and provide specialized Page templates to make them easy to retrieve via query.
- Identify specific page types for "hierarchy" purposes that make the tree easy to walk. Divide the site into Sections, with Landing Pages, Detail Pages, and List Pages, all of which can be used to anchor XPATH queries and to tighten up what should be returned in a ContentSearch query.
Problem: Using Sitecore to Manage SSL Requirements
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">All websites today need to support the "https" protocol. Developers that are unfamiliar with the way DNS resolves, or the way Windows IIS handles protocols, will frequently program "http to https" shunting within Sitecore renderings. Here are the causes:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- The developer has mis-configured Sitecore's Link Management system and the links produced by Sitecore lack the "https" protocol specification.
- The implementer has decided to offload "SSL/TLS" to the network boundary, so Sitecore processes unencrypted requests and has no concept of the fact that the site must support encrypted traffic. (With Google's new mandate for SSL and developing data privacy laws, this is less of an issue, but it does come up.)
- Links in Sitecore are managed inconsistently from component-to-component, causing some links to support "https" while others do not.
Using Sitecore to redirect unencrypted (http) traffic to encrypted (https) requests is a very slow, CPU-intensive process. A Sitecore implementation of this redirect structure also tends to be haphazard, which can produce difficult to replicate runtime errors. While this seems like a simple fix, developers tend to "hunt" for solutions to this problem and it may take a significant amount of time to untangle their efforts, particularly if they've ignored, bent, or replaced Sitecore's built in link management system.
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Verndale Best Practice
- Ensure that website requests are encrypted end-to-end, all the way to the Content Delivery server.
- Use Microsoft's IIS UrlRewrite 2.1 to create a universal http-to-https permanent redirect for all requests (see the sketch after this list). UrlRewrite executes before Sitecore and is much more efficient than any .NET Framework solution.
- If using a CDN that supports redirect rules, consider moving the rule above to the CDN.
- Configure Sitecore's Link Management settings to use the "https" Scheme. This will force all CMS-generated links to be prefixed with "https" and largely eliminate security-based redirects for page requests.
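A minimal sketch of that rule as it would appear in a Content Delivery server's web.config (it assumes the IIS URL Rewrite module is installed; nothing here is Sitecore-specific):

```xml
<!-- web.config fragment: permanent http -> https redirect, evaluated by IIS
     before the request ever reaches Sitecore's ASP.NET pipeline. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect HTTP to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```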
Problem: Over-Use of Sitecore's In-System Personalization Features
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">When it debuted in Sitecore 6.2, The "OMS" product (now "XDB" or simply "XP") added the ability for content authors to "personalize" any page component without programmer intervention. This highly desirable function unfortunately has a number of performance side effects, which is why in 2020 Sitecore purchased Boxever and now offers a Jamstack/SAAS based approach with their "Sitecore Personalize" product.
Sitecore XDB Performance Liabilities:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- Every component that requires personalization cannot participate in Sitecore's output cache system. This means a heavier server load on pages that have personalization, which means paying close attention to your server capacity.
- The more personalization rules you have per component, the slower page requests will become. A lean rule strategy is required.
- While it's possible to "customize" personalization with rules that connect to Back Office systems or CRM vendors, these connections need to be real-time and can also impact page performance.
Verndale Best Practice
- If you haven't started using Sitecore's built-in personalization rules, we strongly recommend using Sitecore Personalize or even Google services to achieve the same effect.
- If you must use the built-in Personalization rules, consider full-page personalization instead of component-by-component. This will significantly mitigate performance issues.
- If you are explicitly personalizing based on an authenticated visitor, consider custom programming built into "personalizable" page fragments rather than the personalization engine - you can achieve much higher performance.
- "Permanent" personalization rules should also be offloaded to either full-page personalization or custom programming rather than implemented with the off-the-shelf system.
- Use the personalization rules only to improve conversion. Don't use personalization rules for seasonality or other aspects that affect every user. Instead, consider using Publishing Restrictions to launch seasonally specific versions of your pages.
- Evaluate personalization rules frequently (e.g. - monthly) and remove rules that are not relevant to your current marketing agenda.
Problem: Over Accumulation of Sitecore Analytics Data
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">We see a lot of installations where use of Sitecore's in-system analytics data and Marketing Automation is a "phase II" item that never gets the attention it deserves. Developers are asked to "turn it on" but the system is not given any specific objectives, except possibly storing all form data from the Sitecore Forms module. On busy sites, this lights the fuse on a runtime problem that may rear its head a week, month, or year down the line, almost certainly during a peak traffic time, and without warning. Here's what breaks:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- Under significant traffic load, the XConnect service can become overloaded, depending upon whether you've configured any personalization or interactions. This can take your Content Delivery servers offline.
- Also under significant traffic load, Sitecore's Aggregation and Reporting services can place a heavy demand on your Solr servers while processing visitor data into reports. Maxing out your Solr servers renders them unreachable by your Content Delivery servers, which are using Solr to resolve visitors' search queries. Depending on where you're using the ContentSearch API, this can take your entire site offline.
- If Sitecore is not configured to ignore such traffic, "ping" type status monitors that execute page requests to determine server health can fill your analytics logs with garbage data. Because server health monitoring services tend to execute "critical" visitor paths, monitoring can provide so much data that the Sitecore infrastructure (XConnect, SQL Server, and Solr) choke to death on terabytes of useless data.
Verndale Best Practice
- If you're not actively using Sitecore's in-system Analytics reports and Marketing Automation, do not start, and turn off analytics entirely. Instead consider SaaS based products offered by Google, Sitecore, Optimizely, Salesforce and others.
- If you must use Sitecore's XDB features:
- Ensure that Robot Detection is enabled for every request. Make sure that any public health check systems are included on the list of Robots to ignore.
- Have a robust data retention policy and purge data regularly.
- Load test your production infrastructure not just to accommodate visitors, but to see what kind of load analytics processing puts on your system.
- Make sure your Solr servers are sized appropriately or can scale automatically.
- Make sure XConnect is isolated to its own server/process and can scale automatically.
- Isolate the Aggregation and Reporting features to their own server/process.
- Make sure that any scheduled aggregation and reporting processes do not occur too frequently. Ensure a given job has enough time to complete before the next scheduled event.
- Isolate Sitecore's Analytics databases from Sitecore's Content databases and ensure that the Analytics database pool has sufficient throughput to prevent outages under pressure.
Problem: Sitecore SXA
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">The promise of Sitecore SXA is to remove all custom server-side development from the platform in favor of a WIX/Squarespace style HTML/Design framework. This provides Content Authors with the ability to "wireframe" pages up and send them to HTML developers for styling. From a performance perspective, SXA introduces some challenges:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- Basically a framework on top of the Sitecore framework, there's a significantly higher processing load on the server for every page request. SXA pages don't perform as well as "traditional" Sitecore programming.
- Because SXA provides a generalized framework for page and site design, data access and search can be very challenging to get right.
- Because SXA has an opinion on how your team writes their HTML, you may have problems getting good Page Speed Insights scores without fighting SXA's core framework.
Verndale Best Practice
- If you need more than two Content Delivery servers to support your website's daily traffic, avoid using SXA.
- If you're planning on using Sitecore PAAS, avoid using SXA.
- If you must use SXA, a CDN is mandatory, and you must cache pages, not just media library and static assets.
Problem: Sitecore JSS
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">While "Headless" when typically implemented, Sitecore JSS is not 100% "Jamstack." Requests from visitors are processed on a server, and the page is assembled before being sent to the browser. A typical JSS installation replaces ASP.NET MVC with a Node.JS server within your Sitecore installation. This Node server is what responds to visitor requests. Behind Node, there is either a Content Delivery server or Experience Edge responsible for providing data to Node in real-time. Like any Sitecore installation, a JSS installation requires careful programming and sufficient infrastructure to handle your visitor load effectively.
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- NodeJS servers and cloud services are generally not as high performance as .NET servers/services due to the nature of NodeJS.
- Headless JSS is typically slower to respond than traditional Microsoft .NET Sitecore servers because JSS servers must make an additional "hop" to a content delivery API to get data. (In comparison, CD servers talk directly to the Data layer in memory.)
- Headless JSS lacks the Sitecore Output Cache technology that provides a significant performance boost to traditional Sitecore Delivery servers. This makes it very vulnerable to poor content tree architecture and data retrieval problems discussed previously.
Verndale Best Practice
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Getting high performance out of a JSS installation requires very specific approaches:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- A CDN must be implemented, and it must cache HTML pages, not just Media Library objects and static files.
- JSS should be paired with a Static Site Generator (e.g. - Vercel or Uniform) to guarantee high performance and to prevent real-time access to the Sitecore data layer.
- Both of the above approaches require 100% commitment to a Jamstack approach to website development. Contextual visitor data and real-time data access should be handled out-of-band by AJAX calls rather than during the initial page request.
Performance Problems on the Editor Side of Sitecore
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Often the complaint "Sitecore is slow" doesn't come from the page analytics team, but from the content authoring group. Ensuring that Sitecore is reliable and easy to use for content maintenance is absolutely key to the success of the installation. Let's look at the most common problems encountered:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Problem: Using Item Cloning or Language Fallback
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Sitecore Item Cloning was a unique solution to an intractable problem, but it's never the best solution to the problem. Cloning overrides Sitecore's default field value resolver to allow you to essentially copy one Item and maintain the reference back to the original, to keep the two in lock step. Aside from the Content Authoring challenges this system exposes, Cloning creates some very real performance problems:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- Distinguishing cloned content or modifying "parents" of clones in the Experience Editor can be a challenging experience for content authors.
- Content Authors attempting to edit content in a heavily cloned system may have to wait minutes for pages to load in the Content Editor or Experience Editor.
- Clones cause similar processing delays with Sitecore's publishing features.
If a Sitecore system was implemented with Cloning as a core strategy, the best solution is usually to start from scratch and re-implement the system without cloning.
Sitecore Language Fallback is a feature that allows a page component to display an alternate language should there be no data for the context language. This technology was released in Sitecore version 7 series, and pre-dates the idea of "Final Renderings" and language-specific page layout. Aside from a lack of compatibility with the more modern Presentation Details structure, Language Fallback causes performance problems during page response generation, as for each Item referenced by the page, it must walk through all installed System languages looking for the "best fit" content.
As for mitigation, Language Fallback can be "disabled" by bringing all in-system languages into full coverage, rendering fallback unnecessary. If 1:1 translation is not an option, significant content tree organization to separate language options and regionalized sites may be required. Extensive regression testing will also be required to ensure disabling Language Fallback will not introduce runtime errors.
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Verndale Best Practice
- Don't use the Item Cloning feature of Sitecore, particularly for sites with high traffic needs.
- Don't use the Language Fallback feature of Sitecore. Develop a sustainable strategy for regional content and translation rather than a stopgap measure.
Wrap Up
<h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">Sitecore is an incredibly flexible framework for designing enterprise websites. But the solution will only be as good as the implementer. All programming, from SQL to PHP, suffers from the same liabilities. A programmer building a website that will support a significant number of visitors needs to think about problems in a very different way than a programmer designing a single-user desktop application. Every aspect of getting data out of a database, formatting it for display, and delivering it to the browser needs to be tested against realistic traffic expectations. That said, optimization at every level can get expensive. Here's some guidance on how to attack the problem:
</h3id="poorlyperformingsitecoreinstallation?here'sthesolution.-problem:acontenttreethatdoesn'tsupportefficientxpathnavigation">- The industry-standard approach to scalability and performance is to concentrate on the "edge." Utilize a CDN in every way possible for Media Library, static files, and most importantly for HTML.
- Take a page from the Jamstack approach and avoid real-time connections to Sitecore. Factor out all "integration points" until they can be serviced asynchronously via JavaScript - preferably from a service bus like Mulesoft, or directly from the content's "source of truth." Avoid importing 3rd party content into Sitecore unless you need to reference it from native Sitecore Items.
- Put your web pages on a diet. Cut out every unnecessary pixel in images. Don't download any CSS or JS that you don't need to render a given page.
- Use Google Page Speed Insights to trim any fat you haven't already removed in the first 3 steps.
- Where you must access real-time content, make sure that the content is stored in a way that makes it fast to retrieve. Keep it semantic, and keep relevant Items together. Make sure your retrieval methods (XPATH or ContentSearch API) are as efficient as possible.
- Avoid performance-sucking complexity like SXA, Cloning, Language Fallback, and realtime JSS.
- Never use JSS without static site generation.
- Never use XDB/Personalization. Convert to the SaaS based Sitecore Personalize and Sitecore CDP.