Blazing Fast Edge-side Personalization for Sitecore
Why organizations struggle with activating personalization
Successfully activating the personalization capabilities of your Sitecore Experience Platform is something many organizations struggle with. Technical and business users need to collaborate closely. Non-functional requirements like performance and scalability must be defined and worked on from the earliest stages of the project.
There are many reasons why an organization may struggle with activating personalization. Some of these are business related and have nothing whatsoever to do with technology. That being said, understanding how to ensure the technology involved is ready to meet the modern challenges of global contextual content delivery is critical.
I - and many of my colleagues at Uniform - have spent many years working with some of the most ambitious teams around the world building cutting-edge solutions on top of Sitecore. We saw one common pattern emerge that explained why some of the biggest brands struggle with activating personalization: server-side personalization is slow, expensive and does not scale.
This post dives deep into our multi-year journey to what we call the Edge-side Personalization capability for Sitecore. I will walk through our discovery process, which will allow me to explain why we landed on exactly this approach.
I hope you feel like you are a part of this journey with us after reading this post!
Performance matters more than ever
Performance is the single most important factor that affects visitor engagement. It's a bigger factor than the quality or presentation of the actual content. A visitor will bounce before they even engage with your content if the site is slow. Google has started to emphasize site performance as a key factor in determining search results. Failing to consider site performance may have a drastic impact on your SEO strategy. It is essential to consider performance early and often.
It is a mistake to assume that the "enterprise grade" Digital Experience Platform (DXP) that you spent a lot of money on can meet these challenges. Customer and market expectations change faster than DXPs can adapt. The approaches we used in the past require rethinking. Cloud services and edge computing continue to become more powerful and accessible, opening new doors to delivering blazing fast personalization at scale, decoupled from the complex infrastructure footprint, skyrocketing costs, slow performance and limited scalability characteristics of your DXP.
In this post, I will share the results of years of our R&D into how to deliver the fastest possible personalized Sitecore sites. The team at Uniform is already helping Sitecore customers around the world to achieve page load times that are simply impossible regardless of how much scaling and caching you configure on your Sitecore content delivery (CD) instances.
Understanding how we accomplish this requires starting with a description of how Sitecore personalization works out of the box.
Fundamentals of Sitecore personalization
Sitecore personalization involves three moving parts:
Business users configure personalization using the Sitecore Rules Editor to define a set of rules that are associated with a specific component on a specific page. These rules determine how the component should be personalized for a visitor. The rules themselves are stored in a special field on the page that the component is assigned to.
When a visitor views a page with personalized components, the Sitecore CD instance runs the rules that are assigned to the personalized components. This is the personalization process and it is a part of the standard page rendering process.
The context is the data the personalization process uses to determine how to personalize a component for a visitor. This includes data about the current session (for example, which campaign the user landed on) and the current visitor (for example, the visitor id). The context is maintained by the Sitecore tracker, which runs on the Sitecore CD instance. This data is rehydrated from xDB when the visitor is returning to the site and is kept in the out-of-process session (typically backed by Redis).
All three of these components come together in order for personalization to happen: Configuration + Execution + Context = Personalization.
Personalization process explained
While the Sitecore CD instance plays the key role in the personalization process, it is not the only actor in the process. Other actors play critical roles in the execution of personalization rules.
The diagram (and description) below is simplified. In reality, there are a lot more moving parts involved. The purpose of this diagram is to highlight the components that are essential to the personalization process.
The browser issues an HTTP request to the Sitecore CD instance, which is running in a data center somewhere. If the visitor is a returning visitor, the browser includes a special cookie in the HTTP request to indicate the visitor has been to the site before. Other context data, such as the visitor's IP address, accompanies the request as well. This context data is available to the CD instance when the personalization rules are executed.
The CD instance creates a server-side data object that represents the page context. This data object is populated with context data from the HTTP request. The CD instance also initializes the session, which is where the visit data is stored. If the visitor is a returning visitor, the contact record must be retrieved from Sitecore xDB, after which the contact data is added to the session.
Next the personalization rules can be executed using the data in the context. The result of the personalization rules being executed is that the components and content that make up the requested page are identified so the Sitecore page assembly process can run. The assembly process runs on the server. It involves rendering page markup. After the assembly process finishes, the CD instance creates an HTTP response, includes the resulting markup and a tracking cookie (so the visitor can be identified as a returning visitor), and returns the HTTP response to the visitor's browser.
The browser renders the page for the visitor, who is delivered a personalized experience.
This is origin-based personalization
The process described above is called origin-based personalization. The hallmark of origin-based personalization is that the personalization process happens on an application server. In the case of Sitecore, the application server is the CD instance.
The word "origin" is a term that comes from architectures that use Content Delivery Networks (CDNs) that cache content. The CDN has to get the content it caches from somewhere. That "somewhere" is called the origin. The CDN is often called the "edge". If a request is made to the CDN for a page that is not already in the CDN cache, the CDN makes a request to the origin. From that point on, however, the CDN can handle all requests for the page. The CDN can deliver pages much more efficiently and faster than the origin can.
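The cache-miss-then-cache behavior described above can be sketched as a tiny model (illustrative only; real CDNs add TTLs, cache keys and invalidation, and all names here are made up):

```typescript
// Illustrative model of CDN edge caching: serve from cache when possible,
// otherwise fetch from the origin once and cache the result.
type Origin = (path: string) => string;

function createEdge(origin: Origin) {
  const cache = new Map<string, string>();
  let originHits = 0;

  return {
    handle(path: string): string {
      const cached = cache.get(path);
      if (cached !== undefined) return cached; // cache hit: origin not contacted
      originHits++;                            // cache miss: go to the origin
      const body = origin(path);
      cache.set(path, body);
      return body;
    },
    get originHits() {
      return originHits;
    },
  };
}
```

After the first request for a page, every subsequent request is served entirely from the edge, which is why the CDN can deliver pages so much faster than the origin.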
Architectural considerations for scaling
A seasoned Sitecore architect may wonder - how does one scale a solution that requires origin-based personalization?
Here lies the problem with origin-based personalization: CDN caching is not an option. Personalization only works if the origin can execute it. This means you cannot cache any personalized page on the CDN, because if you do, the origin never has a chance to start the personalization process. As a result, every request for any page that includes any personalization has to go through the CD instance. Every page, for every visitor, every time.
Scaling CD instances
CD instances are typically responsible for all sorts of work in addition to the page rendering process, including API requests, search queries, cache invalidations on publish, and background tasks.
The more work a CD instance has to do, the fewer requests it can handle. Adding visitor session state management and personalization on top of everything else the CD instance is responsible for inevitably leads to the need to scale CD instances. Horizontal and vertical scaling are both possible, but scaling isn't a silver bullet.
Scaling increases operational costs. You are paying to run additional CD instances. These costs can be unpredictable and can add up quickly.
Cold startup time
When scaling out (horizontal scaling), you are adding additional CD instances to your architecture. Even though this can happen when certain performance thresholds are met, it takes time for a new CD instance to be able to handle requests. This is called the "cold startup" time. The new CD instance is added to the mix, but visitors who are routed to the new CD instance will experience significant performance degradation while the instance warms up, which can take minutes.
Scaling only helps to a certain point. CD instances will run faster when you scale them up (vertical scaling), but you are limited by the hardware available to you. Scaling out does not, in general, improve performance. It allows you to handle more visitors. So by scaling out you can ensure your visitors are being served, but you cannot ensure they are being served quickly.
Scaling session state
Session state is responsible for storing the visitor state, so it is critical for personalization. State management is often the cause of scaling problems, and this is true of most systems, not just Sitecore CD instances.
Since session state is usually managed out of process (typically in Redis), it adds another scaling challenge. State management also adds extra I/O (input/output) operations, which are among the most time-consuming operations in computing.
Finally, this layer is typically quite "hot", meaning it is very active. You must ensure that enough resources are available (that adequate service tiers are provisioned) or else this layer will become a bottleneck. This adds more costs for both infrastructure and operations.
Network latency
Network latency is the amount of time it takes for data to physically move from one system to another. Bandwidth has a significant effect on network latency, but even with infinite bandwidth, the movement of data across a network is still limited by the laws of physics. When global distances are involved, this speed limit can have a noticeable effect on performance.
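To put a number on that speed limit: signals in optical fiber travel at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum). Here is a back-of-the-envelope lower bound for the Sydney-to-West-Europe scenario discussed later in this post, using an approximate great-circle distance:

```typescript
// Rough lower bound on network round-trip time imposed by physics alone.
const FIBER_SPEED_KM_PER_MS = 200; // ~200,000 km/s in optical fiber

function minRoundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_SPEED_KM_PER_MS;
}

// Sydney to a West Europe data center is roughly 16,500 km as the crow flies;
// real fiber routes are longer, so actual latency is higher still.
const sydneyToWestEuropeMs = minRoundTripMs(16500); // 165 ms, before any server work
```

That is 165ms of unavoidable round-trip time before the server has done a single millisecond of work.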
When personalization is involved, there are two places in particular where network latency can be an issue:
Between the visitor's browser and the CD instance.
Between the CD instances and the xDB infrastructure.
Historically, you were able to deploy clusters of CD instances in different regions and thereby reduce the latency between the visitor's browser and the CD instance that services the request. Even though this configuration is cost-prohibitive for many, some customers went down that road.
Things get more complex when you need to scale with Sitecore's origin-based personalization feature activated. Since xDB is a part of this effort, it is important to keep in mind that it was designed as a single-origin system, not distributed out of the box. Even though the recent release of Sitecore 10.1 makes it possible to scale out xConnect read replicas (thanks to the Always On availability group feature of SQL Server), scaling this out further increases the cost and bloat of the architecture, not to mention that it requires you to upgrade to 10.1. The writes still have to go to a centralized instance.
So the reality is that most implementations that require meeting the demands of global traffic will stay confined to a single region, at least on the XP side of things. Even if you geo distribute the CD infrastructure and assume the added cost and complexity, the dream of a fully geo distributed XP platform is likely to stay an academic topic.
How big of a deal is this?
Let's switch gears to something more practical and try to understand the effect better using the numbers. Consider the diagram below that illustrates the realities of single-origin deployment when serving a global audience. The single-origin deployment of the XP platform is performed within a West Europe region in Azure. Visitors make requests for personalized pages from around the world.
Measuring performance using Time to First Byte
All of the scaling and performance tuning you do on your Sitecore architecture is to reduce the time it takes for the CD instance to return a response to a visitor's browser. This is represented by a performance metric called Time to First Byte (TTFB).
To be more specific, TTFB is the amount of time it takes for the visitor's browser to receive the first byte of data from the CD instance. TTFB measures the latency of a round trip to the CD instance, in addition to the time it takes for the server to prepare and deliver the response.
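As a simple mental model (the components are standard, but every number below is hypothetical), TTFB is roughly the sum of connection setup, the request's trip to the origin, server-side processing, and the first byte's trip back:

```typescript
// Illustrative breakdown of Time to First Byte for a personalized page
// served from a distant origin. All timings are hypothetical.
interface TtfbBreakdown {
  dnsMs: number;             // DNS resolution
  connectMs: number;         // TCP + TLS handshakes
  requestTransitMs: number;  // request travels to the origin
  serverMs: number;          // session init, rule execution, page rendering
  responseTransitMs: number; // first byte travels back to the browser
}

function ttfb(b: TtfbBreakdown): number {
  return b.dnsMs + b.connectMs + b.requestTransitMs + b.serverMs + b.responseTransitMs;
}

// A distant, personalizing origin adds up quickly:
const example = ttfb({
  dnsMs: 30,
  connectMs: 120,
  requestTransitMs: 140,
  serverMs: 250,
  responseTransitMs: 140,
}); // 680 ms total
```

Note that only the `serverMs` term is under the control of your Sitecore tuning; the transit terms are dictated by where the origin sits relative to the visitor.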
As of 2021, TTFB is usually 100-500ms. Google, which uses a site's TTFB value as one factor in determining search results, recommends keeping it under 200ms.
TTFB & personalization
To be fair, TTFB is not the be-all and end-all metric. Most of your visitors don't care about TTFB itself (unless they are performance nerds glued to the network view of their browser's developer console). A fast TTFB alone does not guarantee great UX and great SEO.
However, TTFB is the foundational metric that impacts the other Core Web Vitals. More importantly, it is the one metric that is directly affected by origin-based personalization, as it takes into account both network latency and the cost of all the additional I/O involved in the personalization process: instantiating the personalization context, executing the personalization rules and rendering the page.
Understanding the impact on user experience
If you've ever navigated to a site only to see a blank page for a while, you have experienced a site with slow TTFB. The browser cannot do anything until the HTML document and other resources (CSS, JavaScript, images, etc.) are downloaded, parsed and executed.
To help you visualize this, I spun up a vanilla Sitecore XP Single deployment in Azure PaaS in the West Europe region. Since there will be no load on this CD instance, no scaling is needed. Then I installed a simple MVC site, configured campaign-based personalization and ran a session from Sydney, Australia on a broadband network using webpagetest.org. Though this represents one of the most extreme scenarios in terms of distance between the browser and the origin, it clearly shows the effect of a slow TTFB on the user experience. This is what it looks like:
This is a test conducted in a controlled environment. What happens under real world conditions?
Mobile networks will add new variables into the mix. The availability of fast mobile connections is not evenly distributed in the world.
This is a vanilla solution; it doesn't do anything beyond serving a simple page. Real-world solutions will incur the additional cost of executing request pipeline customizations, running more expensive data access and logic as part of the render, and maybe even performing additional I/O, like fetching data from a remote search index. You typically mitigate this with HTML caching, but that is not a silver bullet.
When new content is published, the corresponding Sitecore cache is invalidated. This has a significant effect on TTFB due to cache clearing, which in some cases (with HTML caching) ends up invalidating the whole cache. The more you publish, the bigger the effect of this is.
New deployments will recycle the state of your CD instances. On larger systems, it can take minutes for the CD instances to warm up. During this time, the CD instances can handle no traffic.
When you experience a traffic spike, two things can happen:
If you are not provisioned to handle the traffic volume, the users will experience slow page loading and possibly even timeouts.
If you have auto-scaling turned on, the scale-out process will be triggered, causing more instances to enter the pool to serve traffic. The effect is the same as when you deploy a new CD instance: it takes time for the additional instances to initialize and warm up, and during that time visitors are not being served by them.
So what does it mean if you are in pursuit of a sub-second page-load time under any load? If the HTML page cannot be delivered in under 1s, it is very unlikely that the rest of the waterfall completes in under 3s. Here is a picture that is likely familiar if you have ever had to troubleshoot slow response times from your CD servers. In this picture, the blank screen is shown for over 3 seconds:
All these data points clearly suggest that a different approach is needed in order to get personalization to meet modern performance expectations.
Exploring an alternate approach
This section will describe the results of our discovery for an alternative approach and a summary of why we landed on the edge-side personalization option.
When considering an alternate approach to solving this problem of origin-based personalization, the team at Uniform set out the following requirements:
1. No rebuild, upgrade or re-platform is required. Sitecore customers have already made significant investments in Sitecore XP. It is not fair to go back and tell them that they must rebuild their site, upgrade their environment, or worst of all, transition to another architecture or another platform.
2. Business users must not notice any changes. A large part of the value of Sitecore XP, and a big reason why customers buy Sitecore in the first place, is that it provides tools for business users. And these business users are trained to use these tools. It is important that any alternate approach preserves that business value. Business users must be able to configure personalization using the Sitecore Rules Editor without leaving their content authoring environment, regardless of whether they use Content Editor or Experience Editor.
3. Complex personalization scenarios can be supported. The approach must be compatible with more complex personalization scenarios, such as those that involve visitor behavior, customer data provided from outside of xDB (for example a DMP or CDP), as well as historical visitor activity.
Can Sitecore Headless Services/JSS be the answer?
Since our team is intimately familiar with Sitecore JSS - thanks to our prior product experience with this part of the product - this was the first option we looked into.
This option requires that your site be built in a certain way using Sitecore Headless Services (or JSS). It is not an option for sites built using MVC. In addition, JSS requires Sitecore 9+ and, depending on your situation, may require a separate license.
Sitecore Headless Services offers an interesting new pattern where the static portion of the page (the shell) can be pre-rendered ahead of time and delivered from your CDN's closest point of presence (PoP). Data for personalized components can be decoupled and retrieved later.
Assuming you have a license for JSS, you can implement this approach today by leveraging the Layout Service API to retrieve content from a specific placeholder (see the product documentation for details). The endpoint for this API looks like the following:
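Based on the Sitecore Headless Services documentation, a request for the components in a single placeholder takes roughly this shape (the placeholder name, item path and API key below are stand-ins for your own values):

```
GET /sitecore/api/layout/placeholder/jss?placeholderName=main&item=/home&sc_apikey={your-api-key}
```

See the official Layout Service documentation for the exact parameters supported by your Sitecore version.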
Using this endpoint retrieves data for all of the components within a given placeholder. The rehydration process used on the front-end - React or Vue - can retrieve this data and render the appropriate components on the client.
Origin-based personalization problems remain
Looking at the right-hand side of the sequence diagram, this approach is very similar to the standard personalization approach. While you get a faster TTFB for the static HTML page, that page does not contain the personalized content, which needs to be fetched client-side. So the same TTFB characteristics apply to the requests for the personalized content. Even though the act of getting personalized content is decoupled from the page rendering process, the same personalization process still runs when the Layout Service API is executed. This means that all of the limitations of the origin-based process still apply.
Beware of the negative side effects on Core Web Vitals
If you follow this approach, there are some unintended consequences to deal with that may affect your Core Web Vitals.
First/Largest Contentful Paint metrics
If you are personalizing a footer, it is not a big deal. But if you are personalizing an element above the fold, which is common because that is where personalization can have the greatest impact, this is an important consideration to take into account. The trade-off of ignoring this issue is being penalized with a high First Contentful Paint (FCP) or Largest Contentful Paint (LCP) metric, or both, depending on your site.
One technique to tackle this is to display a "content placeholder". You can read more about the technique here. This requires a developer to program a unique placeholder (loading) state per component, which increases developer effort and won't win you any UX awards in 2021.
Cumulative Layout Shift
Another challenge you will face by going down this road is doing battle with the Cumulative Layout Shift (CLS).
Cumulative Layout Shift (CLS) is an important user-centric metric for measuring visual stability. It quantifies how often users experience unexpected shifts in the page layout. If you have seen components jump around while a page is loading, this is what CLS measures. A low CLS helps ensure a delightful experience for the website visitor.
In addition, CLS is one of the "Core Web Vitals" mentioned a number of times already.
This is another key metric that is likely to be negatively affected by changing content on the page after the original content is downloaded. Since you are not able to predict the height of a personalized piece of content, which may vary outside of developer control, addressing this is especially tricky for content personalization scenarios.
Session locking & multi-placeholder personalization
Consider a case where you have three personalized components that are bound to different placeholders. With this approach, you will need to make three separate requests to the Layout Service API. Due to session locking on the CD instance, these requests will not be processed in parallel, as you might expect.
Instead, they are queued on the CD instance. This means that the request for placeholder A must complete before the CD instance will process the request for placeholder B. Similarly, the request for placeholder B must finish before the request for placeholder C is processed.
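The cost of this queueing is easy to model: with session locking, the total wait is the sum of the individual request times, whereas parallel processing would cost only as much as the slowest request (the timings below are hypothetical):

```typescript
// With session locking, Layout Service requests for the same session are
// processed one after another; without it, they could run concurrently.
function serializedMs(requestTimesMs: number[]): number {
  return requestTimesMs.reduce((total, t) => total + t, 0);
}

function parallelMs(requestTimesMs: number[]): number {
  return Math.max(...requestTimesMs);
}

// Three placeholder requests at 300 ms each:
const locked = serializedMs([300, 300, 300]);  // 900 ms before the last response arrives
const unlocked = parallelMs([300, 300, 300]);  // 300 ms if they could run in parallel
```

Every additional personalized placeholder pushes the last response further out, which matches the behavior in the demo below.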
This default behavior is not new and is not specific to JSS. It is a fairly well-known behavior within the Sitecore community. If you want to learn more about it, check out this great post by Jeroen de Groot and this Knowledge Base article by Sitecore.
Here is a demo of how it works on a vanilla local Sitecore XP 10.1 instance with a Sitecore JSS React app deployed. The network is not throttled in any way:
The Time to First Byte increases with each Layout Service API request; the third one takes over 1 second, and that's from a local instance. The more API/AJAX calls your solution makes, the more taxing this approach becomes.
This issue can be somewhat mitigated by making one request for the whole layout instead of individual requests. This, however, results in data over-fetching (fetching more data than your front-end actually needs), and the CD instance will do more work to process all the components from all the placeholders. You are not able to request personalized data for an individual component using out-of-the-box APIs.
There are also customizations you can put in place to make session state read-only on the API controllers; however, it is not clear what side effects these customizations have on personalization and on the overall stability of the solution.
This didn't turn out to be a good solution
Even though this approach paves the way for delivering personalization without the render-blocking that accompanies traditional Sitecore page delivery, the underlying architecture still depends on origin-based personalization.
This means that a high TTFB for the async request for personalized content has a good chance of not only hurting the user experience, but also of causing you to miss the opportunity to engage with a visitor who scrolled away, navigated away or closed the browser tab while waiting for your personalized content to load. In addition, this approach does nothing to address the scaling challenges inherent in the xDB architecture.
We turned to the exciting world of Serverless Edge Computing. If you are not familiar with the topic, make sure to check out these articles by Akamai and Cloudflare. There is a lot of good stuff in there!
Enter Edge-side Personalization
If the Sitecore CD instance is a bottleneck that cannot be improved, why not remove it from the equation? The obstacle is that the personalization instructions are executed on the CD instance. So why not move that execution off the CD instance? You are probably asking yourself two questions at this point:
How can we move it?
If we can move it, where would we move it to?
For the first question, remember the three layers of the personalization process I described at the beginning of this article:
The configuration of personalization
The execution of personalization
The context of personalization
In the requirements section I specifically explained that I don't want the business users to notice a change, so moving the configuration of personalization rules off of Sitecore is not an option. But what about the other two? Do the personalization rules really need to run on the origin? Does the context really need to be managed on the origin? No, they don't.
The next question is where to move them. And the answer is, "to the edge". The edge is your CDN. If you haven't kept up with innovation in the CDN space, you may be surprised to learn that CDNs do a lot more than just cache files. Today they offer an entire edge-compute layer that addresses all of the performance, scaling and network latency issues that are relevant to the Content Delivery part of the Sitecore XP platform.
This is exactly the approach we took when building the "Optimize" capability for our Uniform for Sitecore product. This product is the result of years of R&D and hands-on engagement with some of the largest and most complex Sitecore installations in the world. Optimize allows your business users to continue to configure personalization exactly as they always have, while offloading everything related to running personalization to the edge.
This is how it works:
Pages are cached on the CDN in a pre-rendered state that includes all personalization configuration that the business users assigned to each page using Sitecore. The context required to execute personalization is passed along with the HTTP request. Notice that no call to the origin (CD instance) is required in order to deliver a personalized page to the browser.
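To make the mechanics concrete, here is a minimal sketch of what edge-side variant selection boils down to. This is an illustration of the concept only, not Uniform's actual implementation, and every name in it is hypothetical:

```typescript
// Simplified sketch of edge-side personalization. A page is published with
// its default markup plus pre-rendered variants, each guarded by a condition
// that is evaluated against the request context (cookies, query string, geo).
interface RequestContext {
  country?: string;
  campaign?: string;
}

interface Variant {
  html: string;
  matches: (ctx: RequestContext) => boolean;
}

interface PublishedPage {
  defaultHtml: string;
  variants: Variant[];
}

function renderAtEdge(page: PublishedPage, ctx: RequestContext): string {
  // First matching rule wins, mirroring how Sitecore evaluates rules in order.
  const hit = page.variants.find((v) => v.matches(ctx));
  return hit ? hit.html : page.defaultHtml;
}
```

Conceptually, the personalization configuration that business users author in the Rules Editor is what gets compiled into these conditions at publish time, so the whole decision runs at the edge without contacting the origin.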
What about analytics?
But if no call is made to the CD instance, what about capturing analytics? Uniform includes a tracker that runs on the client. It captures all of the same activity that is captured in xDB when the CD instance handles tracking, including profile scoring, pattern matching and personalization events.
The Uniform tracker provides the ability to dispatch analytics data to different systems. Most customers choose to dispatch this analytics data to Google Analytics. This unlocks data that was previously only available in xDB, such as personalization activity. Customers who still want analytics captured in xDB can use the Uniform tracker to dispatch analytics to xDB. Uniform does this without requiring the CD instance to handle the entire page rendering process, thereby significantly reducing the load on the CD servers and lowering the cost of running them.
The benefits of decoupled personalization
There are multiple benefits to this approach:
Avoid all origin calls; greatly reduce your CD infrastructure. Pre-rendered pages can avoid all origin calls. For these sites, you can literally disable your entire CD infrastructure. The result is a TTFB that is typically in the 50-100ms range (2-4x faster than Google recommends).
Instant, automatic, reliable scalability. You depend on your CDN to handle scaling, which happens instantly and automatically. The elasticity of traffic spikes is handled by your CDN - a system that is designed to do this - rather than the Sitecore CD infrastructure, which is not.
Low network latency. Hundreds (or even thousands) of edge nodes (Points of Presence, or PoPs) around the world ensure content is served as close as possible to the visitor. Remember, the laws of physics still apply, even in cyberspace: it takes time for data to travel around the world. Akamai has 4,207 PoPs. Cloudflare's CDN covers more than 200 cities in more than 100 countries.
Leverage your existing Sitecore investment
This approach is compatible with sites using both MVC (with or without SXA) and JSS presentation layer technologies.
While this approach does not require you to have a Jamstack architecture in place, if that is something you are interested in, Uniform for Sitecore can help. Uniform for Sitecore has two main capabilities: Optimize, for decoupled tracking and personalization, and Deploy. The Deploy capability allows you to apply a Jamstack architecture to sites built using any of the presentation layer technologies listed above. Yes, you can actually get the benefits of Jamstack with your existing MVC sites. This is some pretty amazing stuff you really need to see to believe.
Personalization for Sitecore CM customers
Finally, this may not be obvious so I want to call it out explicitly: this approach does not require Sitecore XP. It is fully compatible with your Sitecore XM license and topology. This can significantly reduce the footprint of your Sitecore infrastructure, enabling you to go from the XP Scaled topology...
...to the XM Scaled topology:
Learn more about Sitecore topologies in the Sitecore product documentation.
Edge-side Personalization in Action
Let's take a sample MVC site for a spin and see what kind of performance we can get. In this test I used Akamai, but you can expect similar results on Cloudflare.
To get started, I will demonstrate a very simple use case: personalize based on the UTM campaign in the query string. This information is included in the HTTP request that the browser creates:
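For example, a visitor arriving from a campaign link carries the campaign name as a plain query-string parameter, which edge code can read with the standard URL API (the campaign value below is made up):

```typescript
// The UTM campaign arrives in the query string; no session or origin lookup
// is needed to read it at the edge.
function campaignFromUrl(url: string): string | null {
  return new URL(url).searchParams.get("utm_campaign");
}

const campaign = campaignFromUrl("https://www.example.com/?utm_campaign=summer-sale");
// campaign === "summer-sale"
```

The personalization rule then simply compares this value against the campaign configured by the business user.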
This is what the personalization rule looks like in Sitecore:
After the page is published and available on the CDN, I used webpagetest.org to make a request.
The TTFB component typically stays under 100ms, as you can see in the report below.
Here is a more detailed breakdown of the 91ms page response time:
What's even crazier is that when making the request from my own browser, I get even faster performance - less than 17ms (almost 12x faster than Google's recommendation).
Next I kicked it up a notch and added more personalization conditions into the mix. I added three more variants, using more interesting conditions:
GEO - Personalization based on the visitor's country.
Goal based - Personalization based on whether a specific Sitecore goal has been triggered during the current visit.
Pattern match - Personalization based on whether the visit matches a profile pattern.
Besides these conditions, you can target anything within the HTTP request (query string parameters and cookies), along with conditions that target visit and visitor data (like visit number, goals, page events, profile scores and pattern matches). The complete list of the supported conditions is available here. We are constantly adding support for additional conditions. In addition, custom conditions can be supported using the Uniform Optimize API. Our product docs guide you through the entire process.
Here is a quick 10-second demo showing edge-side personalization with these conditions in action. TTFB is shown on the left; the personalization in action is on the right. The hero component is personalized. Navigating to different pages causes the visitor profile to be updated, which creates the context that triggers different personalization variants.
Ok, what's the catch? What are the limitations?
Does this sound too good to be true? Here are some of the typical questions we get after we deliver a demo:
How does behavioral personalization work with this approach? Do you support Sitecore profiling and pattern matching?
Pattern-match conditions work the same way with the Uniform tracker as they do with the origin-based Sitecore tracker. The Uniform tracker implements profile scoring and pattern matching in the visitor's browser, eliminating the need for the CD instance to handle the request. The demo above shows an example of pattern-match personalization in action.
How do you personalize based on historic events?
The Uniform tracker captures visitor activity and persists it in browser storage. All of the activity captured by the tracker is available for personalization. You can also populate tracker state using data captured outside of the Uniform tracker (for example, from a DMP or CDP). Uniform for Sitecore also provides an endpoint that allows you to load historical data from Sitecore xDB into the tracker, in case you have existing visitor data you want to personalize on.
How does analytics work with this approach?
The Uniform tracker has a capability called "dispatch" that causes the tracker to send data to external systems as the tracker captures the data. The kind of activity that you see in the Sitecore Experience Profile can be sent to Google Analytics using our connector, no custom code required. Dispatch also enables us to ensure all visitor activity continues to get captured in xDB, for customers who want to continue to collect visitor data there. The Uniform tracker API enables you to create custom dispatchers, as well, so you can send tracker data anywhere you like.
Is A/B testing supported?
This is one place where we made a conscious decision to depart from standard Sitecore functionality. Uniform supports A/B testing, but it is not Sitecore's version of A/B testing. Our version of A/B testing is the result of the many years the Uniform team has spent working with customers to design and implement personalization systems. It is the kind of A/B testing our customers tell us they are looking for. We believe it is a simpler, more natural, and more usable way to handle A/B testing. It is automatically integrated with Google Analytics, the analytics tool that our customers tell us they prefer.
Is my CDN supported?
At this moment we support Akamai and Cloudflare CDNs. However, our approach is compatible with any CDN that supports programmatic control over the HTTP request handling process.
With the recently released v5 of our flagship Sitecore product - Uniform for Sitecore - Sitecore customers can offload 100% of Sitecore CD traffic to global CDNs without making any revolutionary changes to their existing solution. You now have a choice of using decoupled personalization with or without the Jamstack architecture.
This capability is compatible with Sitecore 9+. We support both XM and XP topologies. We do not require you to change the presentation technology of your site: MVC, SXA, or JSS - we support them all.
If you haven't seen the product in action, make sure to schedule a demo today!
And thanks for making it to the end of this post, cheers!