
“This web page is slowing down your browser” – this is the type of warning that Chrome, Firefox, and other browsers display when a script requires too much computing time, bringing the tab (in the old days, the entire browser) to a standstill. The cause is quickly explained: Both today and back when JavaScript first saw the light of day in Netscape Navigator, JavaScript code shares execution time in a tab with rendering and event handling. One result of this is that no clicks or keystrokes can be processed as long as a JavaScript function is still beavering away.

This may be surprising in an era when even cheap smartphones have four or more cores. Why don't JavaScript and the browser's own code simply distribute themselves across multiple threads? And if they don't, why don't browsers freeze far more frequently? Modern websites run all kinds of scripts, yet with good programming the browser doesn't freeze – and with bad programming, even more cores won't help.

More cores only offer more performance if the computing power can actually be leveraged. Most websites are not CPU-intensive but I/O-intensive. That means the computing units are bored while the page waits for a response from the server, interactions from the user, or data from the webcam.

This can be seen very clearly in everyday life. The lion’s share of the time until the finished web page is displayed passes while all necessary resources are retrieved, i.e., HTML, graphics, stylesheets, etc. With only rare exceptions, the execution of JavaScript is not the limiting factor. That’s why browsers allocate only one thread to each page, as prescribed by the HTML and JavaScript specifications. For the rare exceptions, there are “workers” (see box).

Parallelism with Workers

Although JavaScript code within a web page is always executed with only one thread, parallelism works anyway thanks to so-called web workers. Such a worker allows a page to start a JavaScript file in a separate thread:

const worker = new Worker("/prog.js");

The main script and the worker script are decoupled from one another. It is not possible to call functions defined in the worker from the main script or to access its variables. Conversely, a worker is also not allowed to access the Document Object Model (DOM) of the web page or use functions or variables of the main script.

The only communication path is by means of messages. Both scripts can send messages via postMessage(), which are handled by the receiving script using an event handler:

worker.addEventListener("message", event => console.log(event.data));

Threads with Loops

To ensure that a website always responds quickly to user inputs and doesn’t hang, more or faster CPUs are not the answer. Instead, the website needs a way to respond to user inputs even when it is waiting for something else, such as a file download or image data from the webcam.

Modern operating systems offer a solution for exactly this scenario: event-based input/output. In traditional I/O, a program requests the content of a file and then actively waits for it to become available. Then the program processes the file, and only then does it continue, possibly with a request for the next file. We call that “synchronous.” With event-based I/O, a program instead monitors a range of event sources and responds when an event occurs. Program execution depends on the order in which events occur, not the order in which the code is written. This is referred to as “asynchronous” programming. Queues ensure that events can be processed one after the other, even if they follow each other in quick succession.

Event-based I/O also allows a program to request the content of multiple files at the same time. It then tends to other things or simply does nothing (and uses no resources except memory). As soon as one of the files is available, the operating system appends an event to a queue to notify the program. The program can then react to the event, for example by processing the now-available file content.

In this way, the program doesn’t hang while waiting for access to the file. If the user clicks or types in the meantime, it can respond to these events even though the file is still not available. This greatly increases the responsiveness of an application. By decoupling “request file” from “file available,” the operating system’s I/O scheduler can furthermore rearrange the requests in the most efficient way or undertake other optimizations.

Event-based I/O is also used for web pages. Internally, each browser tab is assigned an event loop. The loop waits for messages from the operating system, such as mouse movements or clicks, timer events, network packets, etc. As soon as an event occurs, the event loop calls the associated routine for processing. Only when the processing is finished is it the event loop’s turn again. If another event has occurred in the meantime, it immediately calls the next routine to process it. Otherwise it just waits until the next event occurs.
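The principle can be sketched in a few lines – a toy model with invented event names, not how browser engines actually implement their event loop:

```javascript
// A toy event loop: take the next event, run its handler to completion, repeat.
const queue = []; // pending events
const log = [];   // record of processed events

// Handlers for two kinds of events (hypothetical names for this sketch).
const handlers = {
  click: () => log.push("handled click"),
  keydown: () => log.push("handled keydown"),
};

// Two events arrive in quick succession and are queued.
queue.push("click");
queue.push("keydown");

// The loop: nothing else runs until the current handler returns.
while (queue.length > 0) {
  const event = queue.shift();
  handlers[event]();
}
// log is now ["handled click", "handled keydown"]
```

The key property is visible in the `while` loop: each handler runs to completion before the next event is even looked at.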

Synchronous and Asynchronous

The event loop is, however, not a silver bullet that automatically ensures that the browser always reacts quickly. As before, only one thread is available, and if the processing of an event takes 30 seconds, no other events will be processed during this time and the tab will appear to hang.

JavaScript provides the necessary means to make optimal use of resources without blocking the browser tab. A rough distinction is made between two types of interfaces: synchronous and asynchronous. The difference can be illustrated very simply:

alert("Hello world");
document.addEventListener("click", () => alert("Hello world"));

The first line produces a dialog window with “Hello world.” This action is executed immediately. Put simply, this means that the browser stops executing the JavaScript code as long as the dialog is displayed. Only when the user closes the dialog is the next instruction executed. It is therefore a synchronous call.

The second line, in contrast, registers a routine that is executed when the user clicks somewhere on the page. Processing is asynchronous because the code does not wait at this point for the event to occur. The passed handler (in the example () => alert("Hello world")) is therefore often called a “callback” function. It only comes into play when the event occurs (a mouse click in the example) and no other JavaScript code is being executed at that moment. Without the click event the handler is never executed, and if other code is running, the handler cannot react to the click until that code has finished. This is also the reason for the warning mentioned at the beginning: If a website's code runs for a very long time, event processing doesn't get a look-in and the website doesn't react to anything.
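A small experiment makes this visible – a sketch using setTimeout as the event source instead of a mouse click:

```javascript
// Even a timer that is already due cannot fire while synchronous code runs.
const log = [];
setTimeout(() => log.push("timer fired"), 0); // queued as an asynchronous event

// Synchronous busy work – the event loop gets no chance to run meanwhile.
let sum = 0;
for (let i = 0; i < 1e7; i++) {
  sum += i;
}
log.push("synchronous code finished");
// At this point the timer callback has still not run:
// log contains only "synchronous code finished".
```

The timer callback only runs after the currently executing code has returned control to the event loop – no matter how long that takes.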

Lengthy actions, such as the loading of data, should therefore ideally not be programmed synchronously but should be handled via asynchronous events:

const req = new XMLHttpRequest();
req.open("GET", "/more_data.txt");
req.addEventListener("load", () => alert(req.responseText));
req.send();

This code first generates an HTTP request (XMLHttpRequest, also known as “XHR”) and initializes it with open(). The code then registers a callback handler, which processes the response. After that, send() submits the request and the program keeps running. For example, it could set up and send more requests. Only when the file more_data.txt has been loaded (and no more code is currently being executed) does the handler of the load event come into play. In the example, it outputs the file content via alert().


If you build the web application according to these principles, you will accumulate numerous callbacks. If the callbacks are not completely independent of each other, they must be nested within each other. This quickly makes the code confusing and it is difficult to keep track of the entire stack. Which part of the code is running synchronously? Which parts are triggered by which events? The term “callback hell” has been coined for this problem.

As a simple example, a file is to be loaded again. However, its URL is in another file, which must therefore first be requested:

const r1 = new XMLHttpRequest();
const r2 = new XMLHttpRequest();
r1.open("GET", "/url.txt");
r1.addEventListener("load", () => {
  const url = r1.responseText;
  r2.open("GET", url);
  r2.addEventListener("load", () => {
    const text = r2.responseText;
    alert(text);
  });
  r2.send();
});
r1.send();

Even in this simple example, the flow is hard to follow. More callbacks or more complex dependencies make it worse. The necessary nesting adds one level of indentation after another, producing the so-called “callback pyramid of doom.”

“In the old days ...” – for Mozilla, callback hell is apparently already ancient history (screenshot from MDN).

The JavaScript standard ES2015 (also known as ES6) introduced “promises” in 2015, aimed at banishing callback hell. Such a promise is an object that can be in one of three states: pending, fulfilled (with a value), or rejected (with an error). A newly created promise starts out pending, but is immediately available for use without stopping the program flow. It is, so to speak, an empty shell for a value. Later, when its inner value becomes available (or definitely will not become available), it enters the fulfilled (or rejected) state.

Promises offer a way out of callback hell because they can be chained: Methods like then(), which are used to respond to the fulfillment (or rejection) of a promise, return a promise themselves. This does away with nesting and indentation, although the code continues to consist of asynchronous callbacks.
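The chaining can be seen with an already fulfilled promise – a minimal sketch without any network access:

```javascript
// then() returns a new promise, so calls can be chained without nesting.
const results = [];
Promise.resolve(21)           // an already fulfilled promise with the value 21
  .then(value => value * 2)   // returns a new promise, fulfilled with 42
  .then(value => results.push(value));

// The callbacks have not run yet – they are executed asynchronously,
// only after the currently running code has finished.
// results is still empty at this point.
```

Even though the promise is fulfilled from the start, the then() callbacks do not run inline: directly after the snippet, `results` is still empty, which illustrates that promise callbacks are always deferred to the event loop.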

New browser interfaces use promises by default. For example, as a replacement for XHR there is the function fetch():

fetch("/url.txt")
  .then(f1 => f1.text())
  .then(url => fetch(url))
  .then(f2 => f2.text())
  .then(text => console.log(text));

This snippet performs the same function as the XHR example further above, but is almost as readable as synchronous code. In contrast to this however, the program continues to run directly after this snippet without waiting for the files to load. The then() chain is processed asynchronously each time a file is loaded and no other code is currently running.


A syntax extension for promises that the JavaScript language committee has cooked up makes it even easier:

async function loadFile() {
  const f1 = await fetch("/url.txt");
  const url = await f1.text();
  const f2 = await fetch(url);
  const text = await f2.text();
  console.log(text);
}

Thanks to the two keywords async and await, functions that internally rely on promises can be written in an ostensibly synchronous style. This gives the impression that calls such as await fetch() block, with execution continuing only after the call returns. In truth, callbacks are chained behind the scenes and the code is processed asynchronously. The browser engine treats this piece of code and the one above with the then() chain largely the same.
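The sugar is visible in a self-contained sketch that replaces fetch() with an already fulfilled promise:

```javascript
// An async function reads like synchronous code but returns a promise.
async function double(x) {
  const value = await Promise.resolve(x); // suspends, later resumes with x
  return value * 2;                       // fulfills the returned promise
}

const promise = double(21);
// The call returns immediately with a promise; the result arrives asynchronously.
promise.then(result => console.log(result)); // eventually logs 42
```

Calling double() does not block: it immediately hands back a pending promise, and the body after the await runs later, driven by the event loop.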

There is one restriction, however: await calls must be wrapped in an async function. This has technical reasons. Newer browser engines relax the restriction and allow top-level await, without an explicit async function, in modules.

Although await gives the illusion of a linear program flow, the browser can parallelize I/O operations. The following code triggers multiple HTTP requests at the same time and waits for all of them to complete:

const responses = await Promise.all([
  fetch("/file1.txt"), // illustrative URLs
  fetch("/file2.txt"),
  fetch("/file3.txt"),
]);

Here too, only one thread is running in the browser tab (control of the network I/O is delegated to the operating system). Nevertheless, the promise returned by Promise.all() is immediately available in the pending state, and execution outside the async function continues. Only when all three files have been loaded is the promise fulfilled; responses then receives the results, and code that has been waiting for this via then() or await is executed.
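The difference to sequential awaiting can be sketched with timers standing in for HTTP requests (delay() is a helper invented for this example):

```javascript
// Parallel awaiting with Promise.all – timers stand in for HTTP requests.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function loadAll() {
  // All three "requests" start immediately and run concurrently,
  // so the whole function takes roughly 50 ms instead of 150 ms.
  return Promise.all([delay(50, "a"), delay(50, "b"), delay(50, "c")]);
}

const pending = loadAll(); // immediately returns a pending promise
pending.then(values => console.log(values)); // eventually logs ["a","b","c"]
```

Awaiting the three delays one after the other would serialize them; handing them to Promise.all() lets all timers run at once while still using only one thread.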


Even though the code looks synchronous and addEventListener() or the like is nowhere to be seen, operations on promises are also handled by the event loop. To do so, the browser engine generates a kind of virtual event when a promise is fulfilled or rejected. Thus it bundles all events in a central location and processes them in a defined order.

The detailed process is somewhat more complicated, because browsers differentiate between tasks and microtasks, which accumulate in different queues. A task is – in simplified terms – a piece of JavaScript code that must be executed sequentially. For example, a click on a button leads to the browser scheduling the corresponding click callback as a task. Once the callback has run, the task is complete and control returns to the event loop. The browser may perform a new render between tasks, but doesn't have to. Browsers process tasks in the order in which they arise.

As the name suggests, microtasks are smaller tasks. A precise definition is difficult, because browsers differ in subtle ways. Roughly speaking, if a promise is fulfilled during the execution of a task, the browser calls the corresponding callbacks immediately after the task completes. This also means that it might process a function with multiple await instructions en bloc, without the event loop getting control in between. However, there is no guarantee of this. Events and promises remain asynchronous code that will eventually be executed, but usually only when no other code is running.
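The ordering can be made visible with the classic demonstration – a sketch combining a timer task with a promise microtask:

```javascript
// Microtasks drain before the next task gets its turn.
const order = [];
setTimeout(() => order.push("task: timer"), 0);                 // task queue
Promise.resolve().then(() => order.push("microtask: promise")); // microtask queue
order.push("synchronous");

// Once everything has run, the order is:
// "synchronous", "microtask: promise", "task: timer".
```

Directly after the snippet, only "synchronous" is in the array; the promise callback then runs as a microtask before the timer task, even though the timer was queued first.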

Modern browsers do use multiple processes – but usually to isolate tabs from each other rather than for parallelism within a tab.


The core idea of asynchronous programming is that optimal use can be made of the available resources – even though only a single thread is available. For this to work, however, one must not write code that waits for events itself. That would block the browser, or at least the browser tab. Instead, one should – explicitly via event or promise or implicitly via await – return control to the event loop and handle the expected event in a callback.

Since the first release of Chrome, it has been fashionable for browsers to start one process per tab. If a web page hangs due to bad programming, at least it doesn’t drag all the other tabs down with it. The disadvantage is that many processes require a lot of main memory. The question of which tabs have to share a process, and whether and when old tabs are put to bed, is something that every browser developer must answer for themselves. These areas are continuously being optimized in order to squeeze the last grain of performance out of modern hardware.