Powering Our Website's Evolution: Next.js App Router and Sanity CMS in Action

June 15, 2023
Carlos Kelly

Introduction

A company's public-facing website is important – it helps those outside of the company understand who the company is and what they do. From helping many clients build modular, performant, content-driven websites, we've learned a thing or two about how to build an authoring setup that is great for content authors and frontends that are great for end users. In terms of CMS-driven websites, we believe the following aspects are key:

  1. Non-technical stakeholders should be able to easily author content without developer intervention.
  2. For content that will end up on the website, content authors should be able to preview what the site will look like prior to publishing their content.
  3. Publishing is fast – it shouldn't take an hour for a content change to go live on the website.
  4. The site itself is usable, pleasant, and performant for end-users.
  5. Under the hood, the site is easy to make changes and improvements to.

We recently did an audit of our own site formidable.com and realized that our own setup only satisfied a couple of these points, and it was time to do something about it. We decided to re-architect formidable.com using one of the primary technical stacks that we recommend to our own clients, which includes:

  • A modern headless CMS – in our case, Sanity CMS – for providing content authors with a pleasant and flexible authoring experience, coupled with Cloudinary for managing image assets.
  • Next.js for the development of a modern frontend that leverages user-centric, performance-focused patterns like server rendering and caching.
  • Vercel for deploying our frontend (and CMS interface).

In this post, we'll walk through our tech stack choices, and how they fit together to provide a flexible authoring experience and a performant frontend.

Content and Asset Management with Sanity and Cloudinary

Sanity is a headless CMS that offers flexible and powerful data modeling capabilities that are defined in code. This allows for a high degree of customization and control over the content management process. In addition, Sanity offers APIs that enable developers to take total control of their content and integrate it with other tools and services. One of the key benefits of Sanity is its extensible CMS interface, which is written in React and allows for customization of the authoring interface to fit specific needs.

Page building

Many of our pages have their own fixed layout, but source their content from Sanity – such as blog posts, employee profiles, and so on. However, we have a subset of pages that are composed from a set of components we call "blocks", which allows content authors to build pages by choosing specific blocks to add to the page.

By creating a library of reusable UI components, we’re able to provide our marketing and design team with a more efficient and streamlined process for creating and managing content in Sanity Studio. In addition, by designing our blocks to be responsive and scalable, we’re able to ensure that our pages look great on all devices, while also ensuring consistency and brand cohesion across all pages.
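As an illustration, a block-based page document in Sanity can be sketched as a plain-object schema like the following (the block type names here are hypothetical, not our actual schema):

```typescript
// Hypothetical "blocks" field: authors compose the page by picking from a
// library of registered block types.
const blocksField = {
  name: "blocks",
  type: "array",
  of: [{ type: "heroBlock" }, { type: "ctaBlock" }, { type: "textBlock" }],
};

// A page document is then little more than a title plus its list of blocks.
const landingPage = {
  name: "landingPage",
  type: "document",
  fields: [{ name: "title", type: "string" }, blocksField],
};
```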

Previously, we committed all of our website images into our site's git repository and deployed them as part of our site's static assets. We've since taken a more scalable approach to our digital asset management and delivery by using Cloudinary, a cloud-based image and video management and delivery platform. This reduced our time to launch compared to building our own image processing and delivery infrastructure. Cloudinary offers a range of features that allow us to optimize our images, including automatic image resizing, compression, and format conversion. Cloudinary integrates seamlessly with Sanity, allowing for easy management and optimization of images within the CMS.
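Those optimizations are encoded directly in the delivery URL. Here's a minimal sketch of the idea (the `demo` cloud name and file path are placeholders, and Cloudinary's SDKs can build these URLs for you):

```typescript
// Hypothetical helper illustrating Cloudinary's URL-based transformations:
// f_auto picks the best format for the browser, q_auto picks a quality level,
// and w_* resizes the image – all on the fly, per request.
function cloudinaryUrl(
  publicId: string,
  opts: { width: number; format?: string; quality?: string }
): string {
  const { width, format = "auto", quality = "auto" } = opts;
  const transforms = [`f_${format}`, `q_${quality}`, `w_${width}`].join(",");
  return `https://res.cloudinary.com/demo/image/upload/${transforms}/${publicId}`;
}

// e.g. cloudinaryUrl("team/avatar.jpg", { width: 800 })
//   → "https://res.cloudinary.com/demo/image/upload/f_auto,q_auto,w_800/team/avatar.jpg"
```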

Custom Components and Plugins

To fetch data from Sanity, we built GROQD, an OSS library that simplifies the process of building queries for GROQ (Sanity's query language for querying content). GROQD wraps Zod (a TypeScript-friendly validation library) and provides a query-builder interface that generates both a GROQ query and a validation schema to validate your data with. This allows us to query for (fundamentally unstructured) content from Sanity's Content Lake and validate its shape and type for use in the frontend.
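To make the idea concrete, here's a self-contained sketch of the pattern – pairing a GROQ string with a validator – using a hand-rolled check in place of GROQD's actual Zod-based API:

```typescript
// Simplified illustration of the GROQD pattern (not its real API): a query
// travels with a validator, so unstructured Content Lake data is checked
// before the frontend ever touches it.
const postsQuery = {
  query: `*[_type == "post"]{ title, "slug": slug.current }`,
  validate(data: unknown): { title: string; slug: string }[] {
    if (!Array.isArray(data)) throw new Error("expected an array of posts");
    return data.map((item) => {
      if (typeof item?.title !== "string" || typeof item?.slug !== "string") {
        throw new Error("post is missing a title or slug");
      }
      return { title: item.title, slug: item.slug };
    });
  },
};
```

GROQD generates both halves for you from a single query-builder chain; this sketch just shows why coupling them is valuable.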

We also extended our CMS interface by building a number of custom components and dashboard plugins. For example, we've open-sourced a GROQD Playground plugin to experiment with GROQD queries and run them against our own dataset, as illustrated below.

Groqd Playgrounds

For content authors, we built a "social card" preview component that shows what a page's social cards will look like on social media platforms prior to publishing the page content.

Social Share

Furthermore, since we prefer authoring text content in Markdown, we built a custom Markdown component (using CodeMirror) within Sanity that allows us to upload or select an asset from Cloudinary and automatically integrate the asset URL into the Markdown content.

Markdown Editor

Overall, Sanity provides the flexibility for us to create the content authoring workflows that work for us, and Cloudinary allows us to manage digital assets in a performant way without worrying about infrastructure.

Live Previews

We believe that if CMS content is powering a webpage, it's key for content authors to be able to preview how changes to content will affect the associated webpage(s). As such, we set up "live previews" in our Sanity CMS interface that hook into our Next.js site.

See the Previews and Publishing section below for more technical details on how this draft/preview mode works under the hood!

Live Previews

Modern Frontend with Next.js and TailwindCSS

Our new site is built upon the latest Next.js patterns with App Router and React Server Components (RSCs). With RSCs, we're able to render the vast majority of our content on the server and send minimal excess JavaScript to the browser. This allows us to use the modern frontend tooling we know and love, but trim excess fat from our client-side bundle!

With RSCs, rendering is done on the server – so any CSS that is generated at runtime, typically found in CSS-in-JS solutions, will not be available during the initial render. This can result in a flash of unstyled content when the page loads, which can be jarring for users and negatively impact the user experience. To avoid this issue, and for its wealth of other benefits, we opted for TailwindCSS – a utility-first CSS framework. TailwindCSS generates all CSS styles/classes at build-time and serves them as a plain ol' .css file, which is easier for browsers to parse and cache and requires zero runtime JS. This approach ensures that all styles are available during the initial render and can help improve performance and the user experience, on top of making it easier for us to codify our design tokens and ensure a consistent design across the site.
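Codifying design tokens in Tailwind amounts to extending its theme in configuration. A hedged sketch (the color value and font names here are placeholders, not our actual palette):

```javascript
// Hypothetical tailwind.config.js fragment: tokens defined once here become
// utility classes everywhere, e.g. `bg-brand` or `font-display`.
module.exports = {
  content: ["./app/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: { brand: "#f04d21" }, // placeholder hex
      fontFamily: { display: ["Formidable Display", "sans-serif"] },
    },
  },
};
```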

Fetch Caching

Next.js App Router has an improved caching story, with a caching mechanism injected into the global fetch method. This modified fetch method allows you to cache responses and provide options such as stale/revalidation period and cache "tags" to use for cache revalidation. This allows us to cache our GROQD and GraphQL requests and revalidate those requests on demand as content changes in the CMS to ensure end-users are getting up-to-date content as quickly as possible.

Below is a code example illustrating our fetch usage for GraphQL and GROQ queries, where the consumer of this theoretical method would pass along details about the request and a tags array of strings to "tag" the cached response with:

```javascript
const fetchData = ({ url, query, params, tags }) => {
  return fetch(url, {
    method: "POST",
    body: JSON.stringify({ query, params }),
    headers: { "content-type": "application/json" },
    next: { tags: ["all", ...tags] },
  });
};
```

With this in place, Next.js will cache all of our responses indefinitely, allowing us to forego making requests to Sanity's servers on each page request. Only the first user needs to wait for a response from Sanity, and subsequent users will be served cached responses.

"Zones" with Redirects

We host our OSS project documentation sites under https://formidable.com/open-source, and each project’s documentation is its own standalone site. We use a “Multi Zone” approach using Next.js rewrites so that e.g. our Victory documentation can live at https://formidable.com/open-source/victory without building that into our Next.js site. This works great for us, as we primarily use Docusaurus for generating documentation sites – and then we can “stitch” those into our website using rewrites. The rewrites method of our next.config.js configuration then looks something like the following.

```javascript
const nextConfig = {
  // ...
  async rewrites() {
    const vercelDeployedSites = [
      ["/open-source/groqd", "https://groqd.vercel.app"],
      ["/open-source/react-native-owl", "https://react-native-owl.vercel.app"],
      /* ... */
    ];

    return [
      ...vercelDeployedSites.map(([pathBase, vercelUrl]) => ({
        source: `${pathBase}/:path*`,
        destination: `${vercelUrl}/:path*`,
      })),
    ];
  },
};
```

With this tiny bit of configuration, we can “stitch” these mini sites right into our primary formidable.com domain with effectively zero additional infrastructure work.
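One detail worth noting: for the stitching to work, each docs site must itself serve its assets under the rewritten sub-path. With Docusaurus, that's the `baseUrl` setting – a hedged sketch (the values are illustrative):

```javascript
// Hypothetical docusaurus.config.js fragment for a proxied docs site. If
// baseUrl doesn't match the rewrite's source path, asset URLs resolve to the
// wrong location once the site is served from the primary domain.
module.exports = {
  url: "https://formidable.com",
  baseUrl: "/open-source/victory/",
  // ...
};
```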

Deployment on Vercel

Vercel is the easiest place to deploy a Next.js application, and allows us to ship a fast website without worrying about infrastructure. We're using a pretty standard git-based workflow where Vercel deploys commits to our main git branch to production, and Vercel gives us staging previews on pull requests.

Beyond Vercel's standard Next.js hosting, we're also using Vercel's Edge runtime to render most of our site's pages. The Edge runtime, coupled with the Next.js fetch cache, means we can deploy an extremely fast server-rendered site without managing any infrastructure. This approach also fits in well with Next.js's "draft mode" to support live previews, which we'll outline below.

Previews, Publishing, and Cache Invalidation

Getting live previews, end-user caching, and publishing working efficiently was a learning experience with Next.js's latest APIs, but the result is quite elegant using Next.js's draftMode API, fetch cache tags, and Sanity webhooks.

Live Previews (under the hood)

When the author requests a preview in the CMS, the CMS makes a request to an API endpoint in our Next.js app; the endpoint validates that the request is "legit", sets a "draft mode" cookie, and returns the URL to use for the preview. Our Next.js app then uses this "draft mode" cookie to determine whether to show draft content or production content to the user. A simplified version of this endpoint is shown below.

```typescript
import { draftMode } from "next/headers";
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  // ... validate request
  const isValid = true;
  if (!isValid) return new NextResponse("Unauthorized", { status: 401 });

  // ... build path based on request parameters
  const path = "...";

  // enable draft mode!
  draftMode().enable();

  // and send the path back as part of the response.
  return NextResponse.json({ path });
}
```

When the CMS page receives this response, the “draft mode” cookie will be set and the CMS can use the path to point an iframe to.

On the Next.js frontend, we can use draftMode().isEnabled to determine if the requester is in draft mode or not, and “break through” the fetch cache accordingly. The fetchData method we showed above then gets modified to something like the following:

```typescript
const fetchData = ({ url, query, params, isDraftMode, tags }) => {
  return fetch(isDraftMode ? url + "#draft" : url, {
    method: "POST",
    body: JSON.stringify({ query, params }),
    headers: { "content-type": "application/json" },
    // `cache` is a top-level fetch option; `revalidate` and `tags` live under `next`.
    ...(isDraftMode && { cache: "no-store" }),
    next: {
      ...(isDraftMode && { revalidate: 0 }),
      tags: ["all", ...tags],
    },
  });
};

// ... and use with isDraftMode flag
const isDraftMode = draftMode().isEnabled;
fetchData({ isDraftMode, /* ... */ });
```

A few things of note here:

  • The fetch cache uses the URL as part of its cache key, so we'll append a #draft hash to the end of the url if we're in draft mode so that we can "break through" the cache.
  • In draft mode, we set some cache flags like revalidate: 0 to disable caching.
  • We pass through a tags array of strings to specify a list of cache tags, which we can later use to revalidate this response!

With these in place, and some modifications to our GROQD queries, we can set up a preview environment where content authors can see their changes in near-live time, while maintaining the fetch cache for end-users.

Publishing and Cache Invalidation

Our aggressive fetch caching is great for keeping our site nice and speedy for end users. However, when content is changed and published in the CMS, we want to invalidate the smallest amount of cache possible to start serving fresh content to users. Using Sanity webhooks and Next.js’s revalidateTag API, we can invalidate specific fetch responses by tag when content is published.

In our Sanity project dashboard, we have a webhook configured that points to a revalidation API endpoint in our Next.js application, with a custom “projection” (data payload to send when content changes) that looks like the following:

{ _id, _type, "slug": slug.current, "operation": delta::operation() }

When content changes, we’ll receive a payload with content type, id, slug, and what the operation was. Our API endpoint that handles this webhook payload looks something like the following:

```typescript
import { revalidateTag } from "next/cache";
import { NextResponse } from "next/server";
import { isValidSignature, SIGNATURE_HEADER_NAME } from "@sanity/webhook";

// Shared secret configured alongside the webhook in the Sanity dashboard
// (env var name is illustrative).
const secret = process.env.SANITY_WEBHOOK_SECRET!;

export async function POST(req: Request) {
  try {
    // Get signature header (sent with the webhook request)
    const signatureHeader = req.headers.get(SIGNATURE_HEADER_NAME) || "";
    const signature = Array.isArray(signatureHeader)
      ? signatureHeader[0]
      : signatureHeader;

    // Parse body stream, which we'll eventually JSON parse.
    const body = req.body && (await streamToString(req.body));
    if (!body) return new NextResponse("Bad Input", { status: 400 });

    // Validate signature
    if (!isValidSignature(body, signature, secret))
      return new NextResponse("Unauthorized", { status: 401 });

    // Add your own custom logic to choose what tags to invalidate
    const { _id, _type, slug, operation } = JSON.parse(body);
    const tagsToInvalidate = new Set<string>();
    // ...

    // Revalidate all of the appropriate tags
    tagsToInvalidate.forEach((tag) => {
      try {
        revalidateTag(tag);
      } catch {}
    });

    // And send back a 🤙 response
    return NextResponse.json({ success: true });
  } catch (err) {
    return NextResponse.json({
      success: false,
      message: err instanceof Error ? err.message : "Unknown error",
    });
  }
}

// util to parse a stream to a string
async function streamToString(stream: ReadableStream<Uint8Array>) {
  const chunks: Uint8Array[] = [];
  const reader = stream.getReader();
  let { done, value } = await reader.read();
  do {
    if (value !== undefined) chunks.push(value);
    ({ done, value } = await reader.read());
  } while (!done);
  return Buffer.concat(chunks).toString("utf8");
}
```

At this point, we’ve got a fine-grained cache invalidation workflow that allows us to “freshen” specific content as it changes.

The way forward

Sanity, Cloudinary, Next.js and Vercel are powerful tools that enable us to create scalable and efficient websites with a streamlined content management process. By leveraging Next.js App Router’s improved caching mechanism, web-standard Request/Response-based API routes, and support for React Server Components, we’re able to create websites that are optimized for performance and provide a seamless user experience. Moving forward, we plan to continue exploring new features and capabilities of these technologies and platforms, and to leverage our experience to deliver better solutions for our clients. We believe that by staying at the forefront of these technologies and continuing to innovate, we can help our clients achieve their goals and provide a better user experience for their customers.
