We on the INNOQ website team maintain and develop the INNOQ website and the underlying CMS — mostly part-time, because we have other responsibilities that take most of our time, like working as consultants for customers or leading the marketing team.

The status quo

Information technology is responsible for about four percent of global carbon emissions (as of 2019). That is twice the share of Germany, which is among the six biggest emitters.

Before we made our conscious decision to reduce the emissions of our website, we had already applied several techniques that turned out to support our goal of a more sustainable company website. Initially, though, these techniques were aimed at different quality goals.

When evaluating methods to reduce the carbon footprint we figured that, by implementing these measures, we had also made our website more sustainable.

Here are some of the strategies we have been maintaining, in more detail:

Minimal JavaScript

The HTTP Archive regularly crawls a huge number of popular web pages and analyzes them against various criteria, one of them being the amount of JavaScript served. According to the Web Almanac 2022 published by the HTTP Archive, the median JavaScript transfer size per web page in 2022 was 461 KB. This means that 50 percent of the analyzed web pages serve up to 461 KB of JavaScript. The tenth percentile is at 87 KB of JavaScript, while the 25th percentile is at 209 KB.

Our company website has a JavaScript transfer size of 35 KB per page. So when it comes to JavaScript, we were already below the 10th percentile in the Web Almanac 2022 report.

This is important because the amount of data transferred for a web page is one important factor contributing to the carbon emissions caused by a page view. The less JavaScript we need to transfer to the browser, the lower the emissions attributable to our web page. In addition, all this JavaScript has to be executed by the browser, which often requires more energy on the visitor’s machine than merely processing HTML and CSS and rendering images. This additional energy usage not only causes more carbon emissions related to the page view. If the visitor’s device is a laptop, tablet, or mobile phone, it also means that the battery built into the device needs to be recharged sooner, decreasing the overall lifetime of the battery, and thus of the device.

The INNOQ website uses minimal JavaScript. Our JavaScript bundle mainly consists of a handful of Web Components, and we follow the principle of progressive enhancement. The website should be fully functional even if JavaScript is disabled.
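To illustrate what progressive enhancement looks like in practice (the element name and links below are made up for this sketch, not our actual components), the markup is fully usable on its own, and a Web Component only upgrades it when JavaScript is available:

```html
<!-- Hypothetical example: the plain list works without any JavaScript.      -->
<!-- If the corresponding Web Component is defined, it enhances the existing -->
<!-- markup, for instance by adding client-side filtering.                   -->
<filterable-list>
  <ul>
    <li><a href="/blog/sustainable-website">Sustainable website</a></li>
    <li><a href="/blog/responsive-images">Responsive images</a></li>
  </ul>
</filterable-list>
```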

Using an image CDN

We have been using an image CDN for a while for user-contributed images, for example images that belong to a blog post. This has a few advantages:

For one, we can use content negotiation to serve images in modern formats like WebP to all browsers that support them, while older browsers are served alternative formats like JPEG or PNG. WebP or AVIF images are usually much smaller than JPEG or PNG images, leading to reduced carbon emissions. Without an image CDN, we could achieve the same thing, for example by pre-rendering all images in different formats and using the picture tag in our HTML templates. However, the image CDN makes it all quite a bit easier.
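Without content negotiation, the pre-rendering approach mentioned above could look roughly like this (the file paths are illustrative):

```html
<!-- The browser picks the first source format it supports and falls back to the JPEG. -->
<picture>
  <source srcset="/images/team.avif" type="image/avif">
  <source srcset="/images/team.webp" type="image/webp">
  <img src="/images/team.jpg" alt="The INNOQ website team">
</picture>
```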

Moreover, the image CDN uses fingerprinting on all images. This means that the URL of each image contains a hash that changes whenever there is a new version of the image. As a consequence, the image CDN can make each image resource immutable by sending appropriate directives in the Cache-Control response header. Again, you can do this yourself. In fact, we are also using fingerprinting for our static assets in the build process. But it would add quite a bit of complexity if we were to do this ourselves for user-provided images.

The image CDN also makes it a lot easier to serve responsive images, because all kinds of transformations can be applied to an image by adding the corresponding URL parameters. For responsive images, for example, our HTML references images in various sizes, and we don’t have to take care of pre-rendering all those different size variants in the build process.
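As a sketch (the URL and parameter names vary between image CDNs and are made up here), a size variant is simply the same image URL with different transformation parameters:

```html
<!-- The CDN renders each size variant on the fly, based on the width parameter. -->
<img
  src="https://images.example.com/posts/cover.jpg?width=600"
  srcset="https://images.example.com/posts/cover.jpg?width=600 1x,
          https://images.example.com/posts/cover.jpg?width=1200 2x"
  alt="Cover image of the blog post">
```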

Lazy loading of images

To take image optimisation one step further, we enabled lazy loading for our content images by leveraging native browser support via the loading="lazy" attribute. With lazy loading, images are only fetched once they are about to be scrolled into the user’s viewport. Until then, no data is requested from the CDN in the background and no weight is put on the network. Any refinement beyond the native lazy loading strategy would require custom JavaScript, though. So far there has been no need for that on our site.
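Enabling native lazy loading is a one-attribute change (the URL below is illustrative):

```html
<!-- The browser defers fetching the image until it is close to entering the viewport. -->
<img src="https://images.example.com/posts/diagram.png?width=800"
     loading="lazy"
     alt="Architecture diagram">
```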

The wall of consent

We use walls of consent for embedded third-party content. Obviously, this started out as a way to comply with the GDPR, but as it turned out, it has positive effects on reducing the carbon footprint as well!

If visitors do not actively confirm that they want to load additional content, a lot of JavaScript, CSS, and images will never be loaded. The network tab of the browser’s developer tools makes the amount of avoidable JavaScript very visible when loading a Twitter card or a YouTube video. In the latter case, the amount of JavaScript loaded after giving consent is almost 850 KB. This is more than twenty times as much as our own JavaScript. In the Web Almanac 2022, 75 percent of all analyzed pages deliver up to 850 KB of JavaScript.

In the case of a video, there are also constant requests in the background loading the next video chunk, which put load on the network and in turn increase the overall carbon emissions. Another thing to consider is video quality: higher quality translates into more and bigger data packets, which additionally strain the client’s battery and often do not equal more information or a better user experience. This also informed our decision to remove a video on our home page, which we will come back to later.
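A consent wall can be as simple as a lightweight placeholder in the markup; only after the visitor clicks does a small script swap in the actual embed. The following is just a sketch with invented element and attribute names, not our actual implementation:

```html
<!-- Nothing from the third party is loaded up front; a small script replaces this
     placeholder with the real iframe once the visitor gives consent. -->
<consent-wall data-embed-url="https://www.youtube-nocookie.com/embed/VIDEO_ID">
  <p>This video is hosted on YouTube. Loading it transfers data to a third party.</p>
  <button type="button">Load video</button>
</consent-wall>
```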

Podcast transcripts

The INNOQ podcast is an appreciated format for all kinds of curious people in tech, beginners and advanced listeners alike!

To make our podcast content inclusively available, we have offered transcripts from the start. From our web analytics we learned that even popular episodes are not only listened to but also read. During our status quo evaluation we realised that offering transcripts can also help reduce carbon emissions, since the transmission of large audio files can be avoided without loss of information. Another benefit is that users with a strictly limited mobile data volume can still discover the information spontaneously on the go.

Taking stock: emissions in the summer of 2022

The first thing we did when starting our decarbonisation efforts was to take stock: What are the carbon emissions caused by our website before implementing additional measures to reduce them?

Methodology

Precisely determining the carbon emissions of a website is almost impossible. That’s why there are a few estimation models out there, based on the research that is available. None of these estimation models claims to be exact; that is simply not possible, and when using them, you need to be aware of their limitations. These estimation models are still useful, though. They can give you a rough idea of the carbon emissions caused by your website, and you can use the tools applying these models to compare your own website with others and to observe whether you are on the right track when it comes to minimising the carbon footprint of your website.

We used the Website Carbon Calculator to estimate our emissions. It uses the SWD model for calculating digital emissions. Based on scientific research, it makes a few assumptions about how much energy is needed to serve and transfer a given amount of data (in data centres, in networks, and on end-user devices) and how carbon-intensive the electricity involved is.

It then uses the page weight of a first-time visit to determine the overall carbon emissions of a page view. This means that it does not take into account how well you make use of HTTP caching, how efficient your server-side code is, or what kind of data you transfer to the browser — for example, rendering a big image is usually less energy-intensive on the end-user device than executing tons of JavaScript code. What the SWD model does show you is how you can reduce your carbon emissions by reducing your page weight.
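In simplified terms, the estimate boils down to multiplying the page weight by an assumed energy intensity and an assumed carbon intensity of electricity. The symbols below are placeholders; the concrete coefficients are published with the SWD model:

```latex
% Simplified shape of the SWD estimate for a single page view (placeholder symbols):
%   D   : data transferred for a first-time visit, in GB (the page weight)
%   I_E : assumed energy intensity of serving and transferring the data, in kWh/GB
%   I_C : assumed carbon intensity of electricity, in g CO2eq/kWh
C_{\text{page view}} = D \cdot I_E \cdot I_C
```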

It would not be feasible to calculate the emissions for every single page on our website. Instead, we calculated the emissions for the ten most visited pages of the last twelve months. Among those most visited pages were pages of different types, e.g. the home page, blog posts, the podcast episodes overview page, the Working at INNOQ page, etc.

For the remaining page views of the last twelve months, we determined an assumed emissions value per page view. For this, we used the median of the emissions of all the page views of the ten most visited pages. Of course, this is a somewhat arbitrary heuristic, but we think it gives us a reasonable estimate of the emissions for pages that we didn’t explicitly measure. The median value was 0.2g CO2eq per page view. If you compare this with the carbon emissions by percentile published in the Web Almanac 2022, this is already pretty good. Only 25 percent of pages analyzed by the HTTP Archive cause emissions of 0.34g CO2eq or less.

We used this assumed value to estimate the emissions for the page views of all the pages we didn’t test with the Website Carbon Calculator.

Total emissions at the start of the project

Using this approach, we came up with an estimated 180kg CO2eq for all page views of the last twelve months, as of July 2022. Sometimes it helps to grasp how much this is by looking at what else would cause the same emissions. This is roughly the amount of CO2eq that you are accountable for when flying from Berlin to Valencia. Alternatively, you could eat 13.5kg of beef mince or boil 29,350 litres of water. All this is based on the article What is 1kg of CO2 equal to?.

Our decarbonisation strategy

We did not have a specific reduction goal that we wanted to achieve when we started the project of reducing the carbon footprint of our website. We wanted to see significant improvements, and we wanted all our pages to be better than the average of the pages tested with the Website Carbon Calculator.

We didn’t limit ourselves to measures that reduce the page weight and are thus taken into account by the SWD model. We were aware that some improvements we would make on the server side or on the client side would be invisible when we took stock again later.

We strove to pick the low-hanging fruit first. As a guideline, we used a matrix of emissions per page view and number of page views.

Decision matrix showing the impact of improvements depending on emissions per page view and number of page views

For example, for a page with very few page views and emissions that are already quite low, the impact of improving it further would be low. On the other hand, a page that is viewed a lot and has high emissions per page view is a high-impact opportunity and should be tackled first.

Removing the video on the home page

With our strategy in place, we optimised our start page by removing the hero video that had been elaborately created in cooperation with a media artist some time ago. We replaced the video with a static background image and re-evaluated the page weight with the carbon calculator.

The result speaks for a quick win: the weight of our start page, originally 8 MB in size, was reduced to 5.7 MB. That’s a decrease in size of almost 30% with simple measures!

On top of that, we made sure to convert the replacement background image from JPEG to WebP in order to push the savings in transferred data even further.

Optimising the podcast episodes overview

Coming back to podcasts for a moment, we realised that the way our episodes overview was constructed involved redundant duplication of media and, consequently, of HTTP requests.

Our podcast overview page uses a rhythmically tiled layout to present recent episodes. Alongside other textual information, each episode shows cropped portraits of the speakers in that episode. It was this portrait data that was duplicated: each episode requested the images from our CDN and persisted them to its own entry in the database. Even if a speaker had already appeared in earlier episodes and the portrait had not changed, it still travelled over the network, kept our CDN busy, and polluted the application’s database with the same information.

We solved this problem by redesigning the association: the speaker’s portrait now belongs to the StaffMember entity that models the speakers, which resolves the unnecessary duplication of data.

As a second improvement, we added pagination to our growing podcast overview page in order to relieve the client. As a trade-off, older episodes now require explicit user interaction, and therefore additional requests, to be loaded. In total, though, both measures combined cut the carbon emissions of the podcast episodes overview page by 88%.

Improving caching headers

We were already using fingerprinting on our static assets, e.g. JavaScript, CSS, font files, or images and specified a very long max-age for these resources in the Cache-Control response header. Due to fingerprinting, we know that these resources never change — a change leads to a new URL instead. However, when a user reloads the page, the browser typically requests all resources referenced in the page again, regardless of whether they are still fresh or not. We added the immutable directive to all our fingerprinted static assets to prevent these requests. Since we know that these resources never change, there is no need to download them again, ever.
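For a fingerprinted asset, the relevant response header then looks roughly like this (the max-age value is an illustrative one-year lifetime, not necessarily our exact configuration):

```http
Cache-Control: public, max-age=31536000, immutable
```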

In addition, we improved our Cache-Control headers for content pages like blog posts, articles, etc. Here, we added the s-maxage directive to allow caching proxies to cache these resources much longer than browsers, which look at the max-age directive. We use Cloudflare in front of our website, and we have a way to purge the Cloudflare cache if a resource changes before it becomes stale according to the s-maxage directive.
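For a content page, the header might look roughly like this (again, the values are illustrative): browsers respect max-age, while shared caches such as Cloudflare prefer s-maxage:

```http
Cache-Control: public, max-age=300, s-maxage=86400
```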

These small measures reduce the amount of transferred data, but this is not reflected in the test results of the Website Carbon Calculator. For one, it doesn’t know anything about data flowing from the origin server to Cloudflare. Moreover, it uses a static factor to determine the page weight for repeat visits.

Also, because Cloudflare has to fetch the current version of our content resources a lot less often than before, this leads to fewer requests to our origin server, where the responses are created dynamically and involve database queries and rendering HTML. While we certainly reduced the energy consumption on our servers, this is again not something we were able to measure using the SWD estimation model for carbon emissions.

Optimising responsive images

We had already been using responsive images for all of our content images. Ideally, this means that we can reduce the page weight depending on the device, because devices with small and low-resolution displays will get a much smaller version of an image than devices with high resolution displays. We were using resolution switching with display density descriptors. This means that the size of the image we deliver depends solely on the device pixel ratio of the display.

This was better than no responsiveness at all. However, we often delivered images that were much bigger than necessary. The reason is that devices do not only differ in their device pixel ratio, but also in the dimensions of the viewport. Depending on the viewport width, an image is displayed in a different size, regardless of the device pixel ratio of the display.

We went for a more complex way of doing resolution switching, using the sizes attribute and media conditions. It’s not easy to decide how many breakpoints you want to have and what they should be, though. The more breakpoints you have, the closer the image selected by the browser is to the ideal size for the respective display. However, the more breakpoints you have, the fewer cache hits you will have in the CDN. In the extreme case, all users viewing your page have displays that need a different version of your image.
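A sketch of what this looks like in markup (breakpoints, widths, and URLs are illustrative, not our actual values):

```html
<!-- The browser combines the sizes hint with the width descriptors and the
     device pixel ratio to pick the smallest sufficient variant. -->
<img
  src="https://images.example.com/speakers/portrait.jpg?width=800"
  srcset="https://images.example.com/speakers/portrait.jpg?width=400 400w,
          https://images.example.com/speakers/portrait.jpg?width=800 800w,
          https://images.example.com/speakers/portrait.jpg?width=1200 1200w"
  sizes="(min-width: 75em) 33vw,
         (min-width: 48em) 50vw,
         100vw"
  alt="Portrait of the speaker">
```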

Optimising responsive images is not a trivial problem, and we plan to go into more depth in a separate blog post.

Lazy loading slide viewer

Slides related to talks given by INNOQ staff are made available on our website and are displayed within a custom-built slide viewer element.

The browser’s default loading strategy is eager: content is loaded on page load, no matter whether it is visible within the viewport or not. But what if the user is not interested in browsing through the slides, or only in a part of them? Is it worth transmitting, say, 1 MB of data that potentially has no benefit, especially if there are competing media types available on the same page, for example a video of the entire talk? Depending on the length of the talk and the number of slides, the potential savings in transferred data vary.

Since slides are just images, we considered two strategies:

  1. Treat slides like any other image by using native lazy loading. Slides are then only loaded once the slide viewer enters the viewport, and further slides are loaded incrementally as the user scrolls within the viewer.
  2. Treat slides like third-party content. Put them behind a wall and only load them when explicitly requested by a click, to avoid unnecessary traffic.

As a quick win, we opted for the first strategy, since it is the least intrusive to the user experience while still effectively saving carbon emissions.
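Roughly, the resulting markup looks like this (the element name and file paths are made up for this sketch):

```html
<!-- Each slide is an ordinary image with native lazy loading, so slides are
     only fetched as the visitor scrolls through the viewer. -->
<slide-viewer>
  <img src="https://images.example.com/talks/slide-01.png" loading="lazy" alt="Slide 1">
  <img src="https://images.example.com/talks/slide-02.png" loading="lazy" alt="Slide 2">
  <img src="https://images.example.com/talks/slide-03.png" loading="lazy" alt="Slide 3">
</slide-viewer>
```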

What did we actually achieve?

After we implemented all these measures, we ran our calculations again against the new, lower-impact versions of our pages. We used the same page view statistics of the last twelve months as of July 2022. So now we know what our estimated emissions would have been if all our improvements had already been in place in those twelve months. We think this is the only reasonable approach, because if you want to see what the effect of your changes is, you cannot compare emissions based on page views from completely different time periods.

So, if all our improvements had already been in place, our emissions for the last twelve months as of July 2022 would have been 96kg of CO2eq. In other words, we would have avoided 84kg of CO2eq or 47 percent of our carbon emissions. There is not a single page left that is above the median of the carbon emissions reported in the Web Almanac 2022, and all our pages have lower carbon emissions than the average of all web pages measured by the Website Carbon Calculator.

Outlook

We think that cutting the estimated carbon emissions of our company website in half is already pretty good. But we don’t want to stop there. Here are just a few of the things we still want to do:

Corporate design versus sustainability

To date, our corporate design poses constraints on what changes we can implement in order to reduce our carbon footprint. For example, when it comes to communicating the work culture at INNOQ, high-resolution bitmap images are a conscious choice. But there is hope: the newer AVIF format has the potential to combine the best of both worlds, high detail despite strong compression. The format is not in widespread use yet, but we aim to support it via our CDN soon. Also, we were already able to optimise the file sizes of these images by switching from JPEG to WebP, which is widely supported.

Fonts are another important aspect of corporate design, as they contribute to the overall look and feel. Variable fonts can bundle all necessary characters and styles in one package, which limits the number of requests per font family to one and can also reduce the total transfer size for fonts.

We will have to evaluate which options we have here: Could our brand font family be transformed into a variable font? If not, how do we identify unused characters so we can subset the font and minimise its size? Should we keep it at all, or opt for a truly variable alternative?

We think that the corporate design optimisations will require more planning and negotiation in the future.

The eco mode

We have experimented with an eco mode. So far, it cannot be chosen via a UI control; instead, you need to activate it using the query parameter mode=eco. It also doesn’t change much yet. The only effect so far is that the images for podcast episodes are vectorized and delivered as SVG files. This leads to a considerably smaller page weight than the normal version, which uses the original photos of the people in the podcast. It also deviates from our current corporate design quite a bit, though.

We would like to explore additional changes that we can implement in the eco mode — changes that are difficult to do in the normal version of our website, because they are too radically different from our current corporate design. We also want to allow visitors to switch to the eco mode by means of a UI control instead of having to know a secret query parameter.

Raising awareness amongst co-workers

The quality content on our site is primarily shaped by the INNOQ staff. Not only do we offer podcasts and talks, but also technical articles and blog posts, some of which have become real classics and hence have been requested many times. So how do we sensitise our colleagues to plan their articles with web sustainability in mind?

First of all, this means abstaining from unnecessary imagery, video, or other media. Stock images, for example, that do not contribute to any deeper understanding of the topic and thus create no value beyond decoration, we consider unnecessary. But we do not intend to enforce any rules here. That would not make much sense, nor do we know of any metric that objectively evaluates how essential or non-essential a piece of media is. That’s why we have been thinking about how we might support our authors technically.

As an example: if we had left out two stock photos (130 KB) from a popular blog article from the start, we would have saved an estimated 2.7 kg of carbon emissions over the course of twelve months, roughly as much as is emitted by boiling 450 litres of water.

Conclusion

Avoiding an estimated 84kg of CO2eq emissions may be small compared to the total emissions caused by an IT company. But it’s something, and it was a quick win that we could achieve with about five to seven person-days of work, distributed over a couple of months.

We think that every gram of CO2eq that is not emitted is worth it, especially if the effort involved is so little — and as long as it’s not the only action you take to reduce your organization’s carbon footprint.

While our website has some popular content, it’s not really a high-traffic site. Of course, the amount of emissions you can avoid by implementing measures such as those presented in this article can be considerably higher for high-traffic websites. Nevertheless, we think it’s important to clean up our own backyard first.

What we did so far for our company website is certainly not the end of the story. Even bigger wins, especially on the server side, are conceivable but would involve a lot more effort and architectural changes — and are much more difficult to measure. Nevertheless, it’s also something we will have to face, even if it’s more challenging.

We hope that this case study inspires you to look for the low-hanging fruit that can reduce the carbon footprint of your own company website, and to take action by actually picking it.
