
Adobe Experience Manager as a headless back-end: GraphQL server(less)

Jasper Simon, Technical Consultant

For about a year now, I’ve been working on a GraphQL interface that serves content from Adobe Experience Manager and some other systems in a Back-end For Front-end (BFF) architecture.

Along the way, I’ve learned some best practices for moving to AEM as a headless back-end. In this blog, I go through how I tackled three challenges, so you can avoid the most common pitfalls in the process.

How to define your AEM headless architecture: exploring the GraphQL API

While the AEM GraphQL API is a great fit for exposing Content Fragments, out of the box that is all it does. And until recently, it was only available to AEM as a Cloud Service users.

You can provide additional code to expose pages and other structures, but when you need content from other APIs, such as personalized content, a dedicated GraphQL server is the go-to alternative to make this process smoother.

In this case, we introduced Apollo Server as our GraphQL implementation of choice, running in a serverless environment.
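To make that a bit more concrete, here is a minimal sketch of what such an Apollo Server setup could look like (using Apollo Server 4 and TypeScript). The schema, field names and stubbed resolver are placeholders for illustration, not our actual implementation.

```typescript
import { ApolloServer } from '@apollo/server';
import { startStandaloneServer } from '@apollo/server/standalone';

// Hypothetical schema: generic page content served from AEM plus a
// personalized field resolved from another back-end.
const typeDefs = `#graphql
  type Page {
    path: String!
    title: String
    recommendations: [String!]  # personalized, comes from another API
  }

  type Query {
    page(path: String!): Page
  }
`;

const resolvers = {
  Query: {
    // In the real setup this would call AEM and the other APIs; stubbed here.
    page: async (_: unknown, { path }: { path: string }) => ({
      path,
      title: 'Stubbed title',
      recommendations: [],
    }),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

// startStandaloneServer is convenient for local development; in the serverless
// deployment you would wire the server up with the integration for your platform.
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`GraphQL endpoint ready at ${url}`);
```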

To back it up on the AEM side, Sling Models with their Exporters are used to generate the appropriate JSON representations of pages and components. This had the added benefit of being able to re-use many of the Sling Models, as they were already used to serve JSON content.
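On the GraphQL side, a resolver (or data source) can then simply fetch the JSON these exporters produce. The sketch below assumes the common `.model.json` convention of the Sling Model Exporter; the host name and the typing of the response are placeholders.

```typescript
// Rough sketch of fetching a page's exported JSON from AEM.
// AEM_HOST and the PageModel shape are assumptions for illustration.
const AEM_HOST = 'https://aem-publish.example.com';

interface PageModel {
  ':type'?: string;
  ':items'?: Record<string, unknown>;
  ':itemsOrder'?: string[];
  [key: string]: unknown;
}

export async function fetchPageModel(pagePath: string): Promise<PageModel> {
  // The "jackson" Sling Model Exporter typically exposes pages at <page>.model.json.
  const response = await fetch(`${AEM_HOST}${pagePath}.model.json`);
  if (!response.ok) {
    throw new Error(`AEM returned ${response.status} for ${pagePath}`);
  }
  return (await response.json()) as PageModel;
}
```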

Some other external APIs are also in place, mainly used for personalized enrichment of the delivered content.

Setting up a cache mechanism compatible with your serverless environment and content personalization

With the serverless environment and personalized content, caching immediately comes to mind as a risk. As the personalized content is prone to change often, we can’t cache it. This effectively renders Apollo’s TTL-based caching mechanism useless, as nearly all requests will have a net TTL of zero. Since we can’t cache the entire GraphQL response we send back to the user, it becomes important to cache the generic data we need from the back-end and to make sure that all personalized, uncached content is delivered fast.
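To illustrate why the net TTL ends up at zero: Apollo’s cache control takes the most restrictive hint of all fields in a response, so a single personalized field with a maxAge of 0 makes the whole response uncacheable. The hints below are a hypothetical illustration, assuming Apollo Server’s @cacheControl directive.

```typescript
// Hypothetical cache hints. The generic page data would be cacheable for
// 5 minutes, but the personalized field has maxAge 0, and Apollo applies the
// most restrictive hint to the whole response, so nothing gets cached.
const typeDefs = `#graphql
  enum CacheControlScope {
    PUBLIC
    PRIVATE
  }

  directive @cacheControl(
    maxAge: Int
    scope: CacheControlScope
    inheritMaxAge: Boolean
  ) on FIELD_DEFINITION | OBJECT | INTERFACE | UNION

  type Page @cacheControl(maxAge: 300) {
    path: String!
    title: String
    recommendations: [String!] @cacheControl(maxAge: 0)  # personalized
  }

  type Query {
    page(path: String!): Page
  }
`;
```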

To cache the generic content, a caching layer that applies a stale-while-revalidate policy was added to the AEM stack, resulting in a hit rate of over 99% for the AEM-side shared cache.

Additionally, a smaller in-memory cache was added to the GraphQL implementation. This in-memory cache delivers a speed improvement as well as a cost reduction by decreasing the amount of network traffic.
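To give a rough idea of that layer: a small cache in front of the AEM calls can serve entries immediately (even when stale) and refresh them in the background, in the spirit of the stale-while-revalidate policy mentioned above. This is a simplified sketch, not the actual implementation; fetchPageModel is the hypothetical helper from earlier.

```typescript
// Simplified in-memory cache with stale-while-revalidate behaviour:
// entries are served right away, and stale ones are refreshed in the
// background so the next request gets fresh data.
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

const TTL_MS = 60_000;
const cache = new Map<string, CacheEntry<unknown>>();

export async function cachedFetch<T>(key: string, loader: () => Promise<T>): Promise<T> {
  const entry = cache.get(key) as CacheEntry<T> | undefined;

  if (entry) {
    if (entry.expiresAt < Date.now()) {
      // Stale: refresh in the background, but still serve the cached value.
      loader()
        .then((value) => cache.set(key, { value, expiresAt: Date.now() + TTL_MS }))
        .catch(() => { /* keep serving the stale value if the refresh fails */ });
    }
    return entry.value;
  }

  // Cache miss: fetch, store and return.
  const value = await loader();
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}

// Usage: only generic AEM content goes through the cache; personalized data
// is always fetched live.
// const page = await cachedFetch(pagePath, () => fetchPageModel(pagePath));
```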

Identify all your content sources

Now, once we had all of this set up, it was of course important to start thinking about how to offer all the content. While at first it seemed obvious to use AEM pages and serve the content as a whole, this came with some performance issues. It appears that not every device running our front-end application was keen on rendering lists with over 200 items – who would’ve thought, right?

Luckily, pagination is pretty straightforward in GraphQL. Whether you choose offset- or cursor-based pagination is up to you; both are equally simple to implement. The real catch is how this affects your data needs: you now need to be able to request the list component separately, which means you also need a way to identify and fetch individual AEM components.
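As an example of how simple this can be, the sketch below adds offset-based pagination to a hypothetical list field; the type and field names are illustrative, not our real schema.

```typescript
// Hypothetical offset-based pagination over a list component.
const typeDefs = `#graphql
  type ListItem {
    path: String!
    title: String
  }

  type ItemList {
    items: [ListItem!]!
    total: Int!
  }

  type Query {
    list(listId: ID!, offset: Int = 0, limit: Int = 20): ItemList!
  }
`;

// Placeholder for however the full list is obtained (AEM page, query, ...).
async function loadAllListItems(listId: string): Promise<{ path: string; title?: string }[]> {
  return [];
}

const resolvers = {
  Query: {
    list: async (
      _: unknown,
      { listId, offset, limit }: { listId: string; offset: number; limit: number }
    ) => {
      const allItems = await loadAllListItems(listId);
      return {
        items: allItems.slice(offset, offset + limit),
        total: allItems.length,
      };
    },
  },
};
```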

Given how Adobe Experience Manager works and how our caching system was set up, we decided to fetch the entire page and then extract the components in the GraphQL API. Each list item is itself a page in AEM, so fetching it also puts it in the cache. As a result, requesting a page after a click on a list item was very fast, as the page was already cached.
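To give an idea of the extraction step: once the full page model is fetched (which also warms the cache), a single component can be pulled out of the exported structure. The ':items' traversal below assumes the container structure the Sling Model Exporter typically produces, and is a sketch rather than our real code.

```typescript
// Sketch: extract one component from a fetched page model by walking the
// ':items' containers, as commonly found in AEM's model.json output.
type ComponentModel = Record<string, unknown> & {
  ':items'?: Record<string, ComponentModel>;
};

export function extractComponent(
  page: ComponentModel,
  componentPath: string[] // e.g. ['root', 'container', 'list']
): ComponentModel | undefined {
  let current: ComponentModel | undefined = page;
  for (const segment of componentPath) {
    current = current?.[':items']?.[segment];
  }
  return current;
}

// Usage, with the hypothetical fetchPageModel helper from earlier:
// const page = await fetchPageModel('/content/site/en/news');
// const list = extractComponent(page as ComponentModel, ['root', 'container', 'list']);
```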

One of the major difficulties here was identification. IDs needed to be constructed to allow the GraphQL application to fetch the appropriate page and extract the desired component. With the introduction of personalized lists, this identification system went through various revisions.
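One way to construct such IDs, sketched below, is to encode the page path and the component path into a single opaque string that the GraphQL application can decode again when resolving the component. This is just an illustration of the idea, not the identification scheme we ended up with.

```typescript
// Sketch: build an opaque component ID from a page path and a component path,
// and decode it again when the component needs to be fetched.
export function encodeComponentId(pagePath: string, componentPath: string[]): string {
  const raw = JSON.stringify({ page: pagePath, component: componentPath });
  return Buffer.from(raw, 'utf8').toString('base64url');
}

export function decodeComponentId(id: string): { page: string; component: string[] } {
  const raw = Buffer.from(id, 'base64url').toString('utf8');
  return JSON.parse(raw);
}

// Usage:
// const id = encodeComponentId('/content/site/en/news', ['root', 'container', 'list']);
// const { page, component } = decodeComponentId(id);
```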

Hopefully these tips will save you some time and some headaches when moving to AEM as a headless back-end. Still have questions? Our Adobe experts are available to help you through your AEM journey - contact us!