Application model

The Rerun distribution comes with numerous moving pieces:

  • The SDKs (Python, Rust & C++), for logging data and querying it back. These are libraries running directly in the end user's process.
  • The Native Viewer: the Rerun GUI application for native platforms (Linux, macOS, Windows).
  • The TCP server, which receives data from the SDKs and forwards it to the Native Viewer and/or WebSocket Server. The communication is unidirectional: clients push data into the TCP connection, never the other way around.
  • The Web Viewer, which packs the Native Viewer into a WASM application that can run on the Web and its derivatives (notebooks, etc).
  • The Web/HTTP Server, for serving the web page that hosts the Web Viewer.
  • The WebSocket server, for serving data to the Web Viewer. The communication is unidirectional: the server pushes data to the Web Viewer, never the other way around.
  • The CLI, which allows you to control all the pieces above as well as manipulate RRD files.
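The unidirectional push model described above can be sketched with plain TCP sockets. This is a toy illustration only, assuming nothing about Rerun's actual wire protocol: one side (the "SDK") opens a connection and pushes bytes, the other side (the "server") only ever reads.

```python
# Toy illustration of the push model: a client writes log data into a TCP
# connection and the server only ever reads. This is NOT Rerun's actual wire
# protocol; the message content and ephemeral port are made up for the sketch.
import socket
import threading

received = []

def server(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:  # client closed the connection
                break
            received.append(data)

# Bind to an ephemeral port so the sketch never collides with a real server
# (a real Rerun TCP server would default to port 9876).
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()

# The "SDK" side: open the connection and push data; it never reads back.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"Logging things...")

t.join()
listener.close()
message = b"".join(received).decode()
print(message)
```

The key property is the one called out above: data only ever flows from the client into the socket, so the logging process never blocks waiting for answers from the viewer.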

The Native Viewer always includes:

  • A Chunk Store: an in-memory database that stores the logged data.
  • A Renderer: a 3D engine that renders the contents of the Chunk Store.

What runs where?

This is a lot to take in at first but, as we'll see, these different pieces are generally deployed in only a handful of configurations that cover the most common cases.

The first thing to understand is which process each of these pieces runs in.

The CLI, Native Viewer, TCP server, Web/HTTP Server and WebSocket Server are all part of the same binary: rerun. Some of them can be enabled or disabled on demand using the appropriate flags, but no matter what, all these pieces are part of the same binary and execute in the same process. Keep in mind that even the Native Viewer can be disabled (headless mode).

The SDKs are vanilla software libraries and therefore always execute in the same context as the end-user's code.

Finally, the Web Viewer is a WASM application and therefore has its own dedicated .wasm artifact, and always runs in isolation in the end-user's web browser.

The best way to make sense of it all is to look at some of the most common scenarios:

  • Logging and visualizing data on native.
  • Logging data on native and visualizing it on the web.

Logging and visualizing data on native

There are two common sub-scenarios when working natively:

  • Data is being logged and visualized at the same time (synchronous workflow).
  • Data is being logged first to some persistent storage, and visualized at a later time (asynchronous workflow).

Synchronous workflow

This is the most common kind of Rerun deployment, and also the simplest: one or more SDKs, embedded into the user's process, log data directly to a TCP Server, which in turn feeds the Native Viewer. Both the Native Viewer and the TCP Server run in the same rerun process.

Logging script:

#!/usr/bin/env python3

import rerun as rr

rr.init("rerun_example_native_sync")

# Connect to the Rerun TCP server using the default address and
# port: localhost:9876
rr.connect_tcp()

# Log data as usual, thereby pushing it into the TCP socket.
while True:
    rr.log("/", rr.TextLog("Logging things..."))

Deployment:

# Start the Rerun Native Viewer in the background.
#
# This will also start the TCP server on its default port (9876, use `--port`
# to pick another one).
#
# We could also have just used `spawn()` instead of `connect_tcp()` in the
# logging script above, and we wouldn't have had to start the Native Viewer
# manually. `spawn()` does exactly this: it fork-execs a Native Viewer in the
# background using the first `rerun` binary available on your $PATH.
$ rerun &

# Start logging data. It will be pushed to the Native Viewer through the TCP link.
$ ./logging_script


Asynchronous workflow

The asynchronous native workflow is similarly simple: one or more SDKs, embedded into the user's process, are logging data directly to one or more files. The user will then manually start the Native Viewer at some later point, in order to visualize these files.

Note: the rerun process still embeds both a Native Viewer and a TCP Server. For each Native Viewer, there is always an accompanying TCP Server, no exception.

Logging script:

#!/usr/bin/env python3

import rerun as rr

rr.init("rerun_example_native_async")

# Open a local file handle to stream the data into.
rr.save("/tmp/my_recording.rrd")

# Log data as usual, thereby writing it into the file.
while True:
    rr.log("/", rr.TextLog("Logging things..."))

Deployment:

# Log the data into one or more files.
$ ./logging_script

# Start the Rerun Native Viewer and feed it the RRD file directly.
#
# This will also start the TCP server on its default port (9876, use `--port`
# to pick another one). Although it is not used yet, a client might want
# to connect in the future.
$ rerun /tmp/my_recording.rrd
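The decoupling at the heart of this workflow can be illustrated with a toy record file. The length-prefixed format below is invented purely for the sketch and is unrelated to the real RRD encoding: one pass appends records (the "logging script"), a later pass replays them (the "viewer").

```python
# Toy stand-in for "log now, visualize later": append length-prefixed records
# to a file, then read them back in a completely separate pass. This format is
# made up for illustration and has nothing to do with the actual RRD encoding.
import struct
import tempfile

def append_record(f, payload: bytes) -> None:
    f.write(struct.pack("<I", len(payload)))  # 4-byte little-endian length
    f.write(payload)

def replay(path: str):
    with open(path, "rb") as f:
        while header := f.read(4):
            (length,) = struct.unpack("<I", header)
            yield f.read(length)

# "Logging" pass: write a few records and close the file.
with tempfile.NamedTemporaryFile(suffix=".rrd", delete=False) as f:
    for i in range(3):
        append_record(f, f"Logging things... #{i}".encode())
    path = f.name

# "Viewing" pass: happens at any later time, in any process.
records = [r.decode() for r in replay(path)]
print(records)
```

Because the file is the only contract between the two passes, the logging process and the viewer never need to be running at the same time.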


FAQ

How can I use multiple Native Viewers at the same time (i.e. multiple windows)?

Every Native Viewer comes with a corresponding TCP Server -- always. You cannot start a Native Viewer without starting a TCP server.

The only way to have more than one Rerun window is to have more than one TCP server, by means of the --port flag.

E.g.:

# starts a new viewer, listening for TCP connections on :9876
rerun &

# does nothing, there's already a viewer session running at that address
rerun &

# does nothing, there's already a viewer session running at that address
rerun --port 9876 &

# logs the image file to the existing viewer running on :9876
rerun image.jpg

# logs the image file to the existing viewer running on :9876
rerun --port 9876 image.jpg

# starts a new viewer, listening for TCP connections on :6789, and logs the image data to it
rerun --port 6789 image.jpg

# does nothing, there's already a viewer session running at that address
rerun --port 6789 &

# logs the image file to the existing viewer running on :6789
rerun --port 6789 image.jpg &
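The "does nothing" cases above hinge on whether something is already listening on the requested port. A toy version of that check, a sketch of the general mechanism rather than Rerun's exact logic:

```python
# Toy sketch: decide between "start a new server" and "talk to the existing
# one" by probing the port. This is an assumption about the general mechanism,
# not Rerun's actual implementation.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # Try to connect: if the handshake succeeds, a server is already there.
    try:
        with socket.create_connection((host, port), timeout=0.2):
            return True
    except OSError:
        return False

# Simulate an already-running viewer by listening on an ephemeral port.
listener = socket.create_server(("127.0.0.1", 0))
busy_port = listener.getsockname()[1]

first_check = port_in_use(busy_port)   # a "viewer" is listening
listener.close()
second_check = port_in_use(busy_port)  # nothing listening anymore
print(first_check, second_check)
```

A process seeing the first result would hand its data to the existing session; one seeing the second would bind the port and start a fresh viewer.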