
Introduction and strategy

Multi-tenant CockroachDB is a new way to structure CockroachDB that achieves isolation between logical clusters. This is most useful when a common distributed storage layer is shared across competing customers.

(As an analogy, multi-tenant CockroachDB achieves the virtualization of CockroachDB SQL in a similar way that containers or VMs achieve a virtualization of hosted servers.)

Today (early 2022), the multi-tenant architecture is only available inside the CockroachCloud Serverless product. However, we eventually wish to evolve CockroachDB to serve all application traffic using the multi-tenant architecture, including in CockroachCloud Dedicated and in licensed self-hosted CockroachDB deployments.

In the words of our CTO, “Multi-tenant CockroachDB is the way CockroachDB should have been designed from the start.”

This also means that we are now focusing our development on maximizing the application developer experience on top of the multi-tenant architecture.

Care must be taken to distinguish the internal product architecture, discussed here, from the ability to actually run two or more tenants side-by-side:

  • Cockroach Labs would retain the exclusive right to define more than one tenant side-by-side on a shared storage cluster, via the Serverless product offering.

  • In CockroachCloud Dedicated and for self-hosted deployments, applications will be able to utilize a single pre-defined virtual cluster layered on top of the multi-tenant architecture, without the capability to define more tenants.

Overview of run-time components

Summary table

Deployment components: the deployment/SRE view

| Description | In-code abstraction | In-memory instance | Unix process | Running container |
| --- | --- | --- | --- | --- |
| Routes SQL clients to the right server | “SQL proxy” | “SQL proxy instance” | “SQL proxy server” | “SQL proxy pod” |
| Runs SQL queries | “SQL” or “SQL gateway” | “SQL instance” | “SQL server”, or “SQL-only server” to highlight that the server contains no KV instance | “SQL pod” (implies “SQL-only server”) |
| Runs KV queries | “KV components” (plural) | “KV instance” | “KV server”, but the term also covers mixed servers; we don't yet support KV-only servers | N/A, we don't currently run KV-only servers |
| Runs both SQL and KV queries | | | NEW: “Mixed SQL/KV servers” | NEW: “Mixed SQL/KV pods” |
| Stores data for multiple tenants, 1 unit | | | NEW: “Shared storage/DB server” | NEW: “Shared storage/DB pod” |
| Stores data for all tenants, fleet of all servers | | | NEW: “Shared storage cluster” | NEW: “Shared storage cluster” |

We also use the word “node” to designate either a unix process or Docker container, when the distinction does not matter.

Logical components: the account administrator's view

| What's virtualized | New name for the virtualized logical concept | Previous terminology | New name for the physical infrastructure |
| --- | --- | --- | --- |
| The CockroachDB cluster service, as a whole | “Virtual cluster” | “Cluster” | N/A: the underlying infrastructure is not visible to end-users any more |
| Run-time state for a (virtual) cluster | “Tenant servers/pods” | “Servers/pods” | NEW: “Shared storage/DB servers/pods” |
| On-disk state for a (virtual) cluster | “Tenant-specific data” | “CockroachDB data” | NEW: “Shared storage/DB data” |
| NEW: the virtual cluster used to administer other virtual clusters | “System cluster” | | |
| Ownership (not data) | “Tenant” | “User” | |

Beware of the difference between “Shared storage cluster” (the deployed system) and “System cluster” (the logical cluster an administrator connects to, to create additional virtual clusters).

Architectural terms

SQL Proxy

Role:

  • Accepts incoming connections from client apps

  • Determines which tenant the connection is for

  • Routes each connection to a SQL instance
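To make the routing role concrete, here is a minimal Go sketch of the per-connection decision the proxy has to make. It assumes the tenant is identified through a --cluster option in the client's connection string and that the proxy holds a directory of SQL pod addresses; these names, and the code itself, are illustrative assumptions rather than the actual SQL proxy implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// routeConnection sketches the decision the proxy makes for each incoming
// connection: extract a virtual-cluster identifier from the client's startup
// options, then look up the address of a SQL pod serving that tenant.
func routeConnection(startupOptions string, directory map[string]string) (string, error) {
	for _, opt := range strings.Fields(startupOptions) {
		if strings.HasPrefix(opt, "--cluster=") {
			cluster := strings.TrimPrefix(opt, "--cluster=")
			addr, ok := directory[cluster]
			if !ok {
				return "", fmt.Errorf("no SQL pod known for virtual cluster %q", cluster)
			}
			return addr, nil // the proxy then forwards pgwire traffic to this address
		}
	}
	return "", fmt.Errorf("connection did not identify a virtual cluster")
}

func main() {
	// Hypothetical directory mapping virtual-cluster identifiers to SQL pod addresses.
	directory := map[string]string{"acme-corp-42": "10.0.1.17:26257"}
	addr, err := routeConnection("--cluster=acme-corp-42", directory)
	fmt.Println(addr, err) // 10.0.1.17:26257 <nil>
}
```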

Segue: Instances, servers, pods and nodes

  • “Instance”: a run-time realization of a data structure in the source code. Think: class vs object.
    TCP/UDP ports are attached to instances.

  • “Server”: a unix process started from an executable file. Contains various instances.
    CPU/memory/IOPS accounting commonly happens here.

  • “Pod”: a container, a kind of reduced virtual machine that can be managed by Kubernetes.

    Usually contains one process, but can contain more.

    IP addresses and storage volumes are attached to containers.

For example: a mixed SQL/KV server is a single unix process that contains one KV instance and one SQL instance, while a SQL pod contains a single SQL-only server process.
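To illustrate the class-vs-object distinction between instances, servers and pods, here is a small hypothetical Go sketch. None of the type or field names are actual CockroachDB code; they only mirror the vocabulary above.

```go
package main

import "fmt"

// SQLInstance and KVInstance stand in for the in-memory realizations of the
// SQL and KV code (the "objects"). Network ports are attached to instances.
type SQLInstance struct {
	TenantID uint64
	SQLPort  int
}

type KVInstance struct {
	RPCPort int
}

// Server stands in for one unix process started from an executable file.
// A mixed SQL/KV server contains both kinds of instances; a SQL-only
// server carries a nil KV field.
type Server struct {
	SQL *SQLInstance
	KV  *KVInstance
}

// Pod stands in for a container managed by Kubernetes; IP addresses and
// storage volumes are attached at this level. It usually holds one process.
type Pod struct {
	IP      string
	Servers []*Server
}

func main() {
	sqlOnly := &Server{SQL: &SQLInstance{TenantID: 42, SQLPort: 26257}}
	sqlPod := Pod{IP: "10.0.1.17", Servers: []*Server{sqlOnly}}
	fmt.Printf("SQL pod %s runs %d process(es)\n", sqlPod.IP, len(sqlPod.Servers))
}
```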

We use the word “Node” when the distinction between “server” and “pod” does not matter.

SQL

NB: The name is just “SQL”.
Derived as “SQL instance”, “SQL server”, “SQL pod”, “SQL node” depending on the run-time properties of interest.

Role:

  • Accepts incoming connections from SQL proxy.

  • Responsible for SQL query execution for client apps.

  • Performs KV data requests to a shared storage cluster.

  • Also offers tenant-specific HTTP APIs.

  • Also known as “SQL-only server, pod, node” when the process only contains a SQL instance.
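As a rough illustration of these responsibilities, the hypothetical Go sketch below shows what a SQL-only server wires together at startup: a pgwire listener for connections arriving from the SQL proxy, the addresses of the shared storage cluster for KV requests, and an HTTP listener for the tenant-specific APIs. All names and addresses are illustrative assumptions, not actual CockroachDB code.

```go
package main

import (
	"fmt"
	"net"
)

// sqlOnlyServerConfig collects what a SQL-only server needs to start: which
// tenant it serves, where the shared storage cluster is, and the ports it
// listens on. All names here are illustrative assumptions.
type sqlOnlyServerConfig struct {
	TenantID     uint64   // the single tenant this server serves
	StorageAddrs []string // KV RPC addresses of the shared storage cluster
	SQLAddr      string   // pgwire listener; the SQL proxy forwards connections here
	HTTPAddr     string   // tenant-specific HTTP APIs (metrics, status, etc.)
}

func startSQLOnlyServer(cfg sqlOnlyServerConfig) error {
	sqlLn, err := net.Listen("tcp", cfg.SQLAddr)
	if err != nil {
		return err
	}
	httpLn, err := net.Listen("tcp", cfg.HTTPAddr)
	if err != nil {
		return err
	}
	// A real server would now dial cfg.StorageAddrs, authenticate as tenant
	// cfg.TenantID, and serve SQL on sqlLn and HTTP on httpLn.
	fmt.Printf("tenant %d: SQL on %s, HTTP on %s, KV via %v\n",
		cfg.TenantID, sqlLn.Addr(), httpLn.Addr(), cfg.StorageAddrs)
	return nil
}

func main() {
	_ = startSQLOnlyServer(sqlOnlyServerConfig{
		TenantID:     42,
		StorageAddrs: []string{"storage-0:26257", "storage-1:26257"},
		SQLAddr:      "127.0.0.1:0", // port 0: pick any free port for the demo
		HTTPAddr:     "127.0.0.1:0",
	})
}
```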

Shared storage cluster

Role (collective):

  • Accepts (KV) data requests from SQL instances.

  • Shared by many tenants.

  • Responsible for persisting (storing) data.

Abstract concept: KV-only server, pod, node

“KV instance”: Accepts and serves KV requests for SQL instances. This does exist.

“KV-only server”: This does not exist yet: we have not yet built the capability to run a process containing only a KV instance.

Storage server, pod, node

“Storage server”: a process that contains both a KV and SQL instance.

Alternatively: “mixed KV/SQL server”.

Multiple storage servers collectively make up a “shared storage cluster”.

The SQL component here is “System SQL”:

  • invisible to tenants.

  • used to administer tenants and KV.

Logical concepts

The essence of the multi-tenant architecture is to introduce logical boundaries inside a shared infrastructure, for the purposes of separate billing, running client apps side-by-side, avoiding interference, and so on. So we also need words to designate the things that have received logical boundaries.

These concepts exist on a different semantic level than the run-time “deployment” aspects covered above. Hence the need for a separate vocabulary.

Virtual CockroachDB clusters

To the extent that CockroachDB is perceived to serve a “database product” to end-users, the multi-tenant architecture creates a virtualization of this product.

This acknowledges a pattern already settled in our industry:

  • Datacenter hosting went from physical machines to virtual machines (VMs) running on a shared physical infrastructure.

  • Memory architectures have this same split between physical addressing (corresponding to hardware) and virtual addressing (multiple logical address spaces using shared hardware, coordinated by MMUs).

  • Operating systems enable sharing physical processing units (cores) to present virtual processing units (threads) to software.

Likewise, in multi-tenant CockroachDB:

The “per-tenant” product that end-users see is a virtual CockroachDB cluster.

The architecture shares a physical cluster (a set of interconnected shared storage servers) to produce the illusion of many virtual clusters for end-users.

Tenants: the owners of virtual clusters

There's a lot of different data associated with a CockroachDB cluster: its KV persistent state, its backups (stored elsewhere, e.g. in storage buckets), its authentication service for logins, etc. All of this is “owned” by an organization / customer / end-user, identified as a single entity in the control plane.

We're going to call the owner of a virtual cluster and its adjacent data a tenant.

This “owner” abstraction exists beyond the CC serverless infrastructure: when our self-hosted customers ask us to deploy multi-tenant in their infrastructure, it's because they want to split ownership of a physical cluster between multiple sub-organizations.

What's a virtual cluster made of: tenant-specific data

A single tenant does not own just a virtual CockroachDB cluster that can run SQL queries.

It really owns an adjacent constellation of data that is not shared with other tenants, including:

  • The tenant-specific keyspace, which defines the virtual CockroachDB cluster in KV; also called the virtual keyspace (see the key-prefix sketch below).

  • The tenant-specific log files.

  • The tenant-specific heap, profile and goroutine dumps.

  • The tenant-specific crash dumps.

  • The tenant-specific exported traces.

  • The tenant-specific debug zips.

  • The tenant-specific backups and exports.

  • The tenant-specific metrics.

The state of a virtual cluster is the collection of all the related tenant-specific data.
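To make the first item on that list concrete: in the shared storage cluster, every KV key belonging to a (non-system) virtual cluster carries a tenant prefix, so two tenants' data can never collide. The sketch below shows the idea with a deliberately simplified, hypothetical key encoding; the real CockroachDB encoding is binary and more involved.

```go
package main

import "fmt"

// tenantKey prefixes a SQL-level key with its tenant ID, in a simplified,
// hypothetical encoding. In the real storage cluster the prefix is a binary
// encoding of the tenant ID (and the system tenant's keys carry no prefix),
// but the isolation idea is the same: two tenants' keys never collide.
func tenantKey(tenantID uint64, sqlKey string) string {
	return fmt.Sprintf("/Tenant/%d%s", tenantID, sqlKey)
}

func main() {
	// The same logical table key, owned by two different virtual clusters,
	// lands in two disjoint portions of the shared keyspace.
	fmt.Println(tenantKey(2, "/Table/53/1/10")) // /Tenant/2/Table/53/1/10
	fmt.Println(tenantKey(3, "/Table/53/1/10")) // /Tenant/3/Table/53/1/10
}
```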

Tenant servers and pods

Mostly for security reasons, and additionally for billing reasons, we find it important to ensure that a single SQL server process does not serve instances on behalf of more than one tenant.

In other words, our architecture (currently) implies that a SQL-only server corresponds to exactly one tenant, the one that owns the virtual cluster served by that SQL server.

We are thus tempted to equate the phrases “tenant server” = “SQL-only server”.

However, consider that next to SQL nodes (servers and pods), a deployment would also run other pods that are specific to just one tenant; for example, a Prometheus pod and a log collector.

We'll name the fleet of run-time nodes (servers and pods) that serve just one tenant the tenant nodes (servers and pods). This includes SQL-only servers as well as the other tenant-specific services needed to serve a virtual cluster.

System cluster: the administrative environment

Currently, we have chosen to administer the creation/deletion of virtual clusters using SQL statements run in the context of a virtual cluster with special privileges.

This was not the only possible choice; we could have chosen to design an API separate from SQL, that exists “outside” of the virtual cluster APIs. But here we are.

So we need a word to designate that virtual cluster. To follow established terminology, we will call this the system cluster.
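As an illustration, administering virtual clusters today looks roughly like the Go sketch below: connect to the system cluster with a regular SQL client and call a tenant-management built-in. The connection string is a placeholder, and the crdb_internal.create_tenant built-in reflects the interface as of early 2022; the exact SQL surface may change as the architecture evolves.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL wire-protocol driver; CockroachDB speaks pgwire
)

func main() {
	// Connect to the system cluster (not to a tenant's SQL pod). The address
	// and credentials below are placeholders.
	db, err := sql.Open("postgres",
		"postgresql://root@storage-0:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create a new virtual cluster (tenant) with ID 2. This built-in is only
	// usable from the system cluster.
	if _, err := db.Exec("SELECT crdb_internal.create_tenant(2)"); err != nil {
		log.Fatal(err)
	}
	log.Println("virtual cluster 2 created")
}
```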

Today, the term “system cluster” largely overlaps with “shared storage cluster” because, implementation-wise, we have chosen to give SQL semantics to the keyspace that does not use a tenant prefix. However, this choice may be revisited in the future, such that we mandate a tenant prefix for all logical clusters including the system cluster. Should such plans materialize, the system cluster would become virtual too. It is thus useful to be disciplined about distinguishing the term “system cluster”, which designates a logical cluster which is possibly virtual, and “shared storage cluster”, which strictly designates the set of interconnected storage servers.

This system cluster and all its adjacent data also has an owner, which in the context of CC is Cockroach Labs itself. The owner of the system cluster is the system tenant.

System instances and servers

Currently, we have chosen to co-host the SQL instances that can serve queries for the system cluster together with the KV instances for the storage servers. 

That's what our current “mixed SQL/KV servers” are about. They contain:

  • the KV instances shared by all virtual clusters;

  • SQL instances specific to the system tenant, able to serve access to the system cluster.


However, this is not the only way we can do this. In fact, we could also enable running SQL instances for the system cluster in a separate SQL-only server.

Generally, we'll call any server that contains at least one SQL instance for the system cluster, a system server. Our current shared storage servers are also system servers; our future SQL-only servers with system cluster capability will be system servers too.

Our unit tests also run many SQL instances side-by-side, including multiple SQL instances that operate on system clusters; inside the context of tests, these are system instances.

Shared state in a multi-tenant deployment

In addition to tenant-specific state that defines virtual clusters, a multi-tenant deployment needs shared state too:

  • At run-time:

    • The SQL proxy node(s) (server(s) and pod(s)), which route SQL client apps to their own virtual clusters.

    • The shared storage/DB nodes (servers and pods).

    • The networked shared storage/DB cluster, as a fleet of nodes.

    • The run-time state of the system cluster.

  • On disk:

    • The aggregate state of all virtual clusters stored on a single storage cluster.
