TypeDB Cloud Self-hosted

This page is an installation guide for a self-hosted deployment of TypeDB Cloud. You can get a license for it from our sales team: contact us via e-mail or contact form.

TypeDB Cloud is also available as a fully managed cloud service, with no installation required, at cloud.typedb.com.

System Requirements

TypeDB Cloud runs on macOS, Linux, and Windows. The only requirement is Java 11 or later (OpenJDK or Oracle Java).

Download and Install

TypeDB Cloud is distributed and installed separately from TypeDB Core.

Request access to the TypeDB Cloud distribution repositories from Vaticle. You can do that via technical support, using the contact details from your contract, or through our Discord server.

Then you can install it with Docker or via manual download and installation.

You can deploy a cluster of TypeDB Cloud servers manually or via Kubernetes.

If you don’t have a license yet, you can try our TypeDB Cloud service or contact our sales team.

Docker

The TypeDB Cloud Docker image is stored in a private repository. Make sure to run docker login first to authenticate.
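
For example, assuming USERNAME is the Docker Hub account that was granted access (you will be prompted for the password or an access token):

$ docker login -u USERNAME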

To pull the TypeDB Cloud Docker image, run:

$ docker pull vaticle/typedb-cloud:latest

Use docker run to create a new container from the downloaded image. By default, a TypeDB Cloud server listens on port 1729. To ensure that data is preserved even when the container is killed or restarted, use a Docker volume:

$ docker run --name typedb -d -v $(pwd)/db/:/opt/ -p 1729:1729 vaticle/typedb-cloud:latest
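
To verify that the server booted correctly inside the container, you can check its logs (typedb matches the --name used above):

$ docker logs typedb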

Start and Stop TypeDB Cloud in Docker

To start the Docker container:

$ docker start typedb

To check which containers are running:

$ docker ps

To stop the Docker container:

$ docker stop typedb

Manual installation

Make sure you have access to the main repository: https://repo.typedb.com.

Download the latest release and unzip it in a location on your machine that is easily accessible from a terminal.

If there is no distribution for your platform, please open an issue on GitHub.

Start and Stop TypeDB Cloud manually

To start TypeDB Cloud manually:

  1. Navigate into the directory with the unpacked TypeDB Cloud files.

  2. Run:

    $ ./typedb server

Now TypeDB Cloud should show a welcome screen with its version, address, and bootup time.

To stop TypeDB Cloud:

Close the terminal or press Ctrl+C.

Connecting

To check whether TypeDB Cloud is working and interact with it, you can connect to it with any TypeDB Client.

You can use TypeDB Console from your TypeDB Cloud directory:

$ ./typedb console --cloud 127.0.0.1:1729 --username admin --password

To run TypeDB Console from a Docker container, run:

$ docker exec -ti typedb bash -c '/opt/typedb-cloud-all-linux-x86_64/typedb console --cloud 127.0.0.1:1729 --username admin --password'
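
Once connected, a quick way to confirm the deployment responds, assuming TypeDB Console's standard database commands, is to list and create databases from the console prompt:

> database list
> database create test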

Deploying manually

While it’s possible to run TypeDB Cloud in a single-server mode, a highly available, fault-tolerant, production-grade setup involves setting up multiple servers that connect to form a cluster. At any given point in time, one of those servers acts as a leader and the others are followers. Increasing the number of servers increases the cluster’s tolerance to failure: to tolerate N servers failing, a cluster needs to consist of 2N + 1 servers. This section describes how to set up a 3-server cluster (in this case, one server can fail with no data loss).

Each TypeDB Cloud server in a cluster binds to three ports: a client port that TypeDB drivers connect to (1729 by default) and two server ports (1730 and 1731) for server-to-server communication.

For this tutorial, it’s assumed that all three servers are on the same virtual network with the relevant ports open and no firewall interference, and that they have the IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3, respectively.
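
As a hypothetical illustration, on Linux servers using ufw the relevant ports could be opened like this (adjust for your firewall of choice):

$ sudo ufw allow 1729/tcp
$ sudo ufw allow 1730/tcp
$ sudo ufw allow 1731/tcp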

If you’re using a single machine to host all nodes, it may be easier to use the localhost or 127.0.0.1 address and prefix each port with the node number; this way the ports 1729, 1730, and 1731 turn into the following (a sketch of starting a server with these ports follows Example 3 below):

  • 11729, 11730, 11731;

  • 21729, 21730, 21731;

  • 31729, 31730, 31731.

Starting

TypeDB servers working in a cluster must be configured to know about all servers in the cluster (peers). This can be done through the peers key in the server config file or with command-line arguments when starting the server. Command-line arguments take priority over the config file.

The examples below show how a 3-server TypeDB Cloud cluster would be started on three separate machines.

Example 1. Server #1

The first machine with the IP address of 10.0.0.1.

To run TypeDB server #1:

$ ./typedb server \
    --server.address=10.0.0.1:1729 \
    --server.internal-address.zeromq=10.0.0.1:1730 \
    --server.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-1.address=10.0.0.1:1729 \
    --server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
    --server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-2.address=10.0.0.2:1729 \
    --server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
    --server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
    --server.peers.peer-3.address=10.0.0.3:1729 \
    --server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
    --server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
Example 2. Server #2

The second machine with the IP address of 10.0.0.2.

To run TypeDB server #2:

$ ./typedb server \
    --server.address=10.0.0.2:1729 \
    --server.internal-address.zeromq=10.0.0.2:1730 \
    --server.internal-address.grpc=10.0.0.2:1731 \
    --server.peers.peer-1.address=10.0.0.1:1729 \
    --server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
    --server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-2.address=10.0.0.2:1729 \
    --server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
    --server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
    --server.peers.peer-3.address=10.0.0.3:1729 \
    --server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
    --server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
Example 3. Server #3

The third machine with the IP address of 10.0.0.3.

To run TypeDB server #3:

$ ./typedb server \
    --server.address=10.0.0.3:1729 \
    --server.internal-address.zeromq=10.0.0.3:1730 \
    --server.internal-address.grpc=10.0.0.3:1731 \
    --server.peers.peer-1.address=10.0.0.1:1729 \
    --server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
    --server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-2.address=10.0.0.2:1729 \
    --server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
    --server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
    --server.peers.peer-3.address=10.0.0.3:1729 \
    --server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
    --server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
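
If you host all three servers on one machine using the prefixed localhost ports described earlier (server #1 on 11729-11731, server #2 on 21729-21731, server #3 on 31729-31731), the same flags apply. A minimal sketch for server #1, assuming each server runs from its own copy of the TypeDB Cloud directory so that data directories don't clash:

$ ./typedb server \
    --server.address=127.0.0.1:11729 \
    --server.internal-address.zeromq=127.0.0.1:11730 \
    --server.internal-address.grpc=127.0.0.1:11731 \
    --server.peers.peer-1.address=127.0.0.1:11729 \
    --server.peers.peer-1.internal-address.zeromq=127.0.0.1:11730 \
    --server.peers.peer-1.internal-address.grpc=127.0.0.1:11731 \
    --server.peers.peer-2.address=127.0.0.1:21729 \
    --server.peers.peer-2.internal-address.zeromq=127.0.0.1:21730 \
    --server.peers.peer-2.internal-address.grpc=127.0.0.1:21731 \
    --server.peers.peer-3.address=127.0.0.1:31729 \
    --server.peers.peer-3.internal-address.zeromq=127.0.0.1:31730 \
    --server.peers.peer-3.internal-address.grpc=127.0.0.1:31731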

The examples above assume the application (TypeDB Client) accessing TypeDB Cloud resides on the same private network as the cluster.

If this is not the case, TypeDB Cloud also supports using different IP addresses for client and server communication.

Example 4. Using a separate network for client connections

The relevant external hostnames should be passed as arguments using the --server.address and --server.peers flags, as below.

$ ./typedb server \
    --server.address=external-host-1:1729 \
    --server.internal-address.zeromq=10.0.0.1:1730 \
    --server.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-1.address=external-host-1:1729 \
    --server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
    --server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
    --server.peers.peer-2.address=external-host-2:1729 \
    --server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
    --server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
    --server.peers.peer-3.address=external-host-3:1729 \
    --server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
    --server.peers.peer-3.internal-address.grpc=10.0.0.3:1731

And so on for servers #2 and #3.

In this case, port 1729 would need to be open to the public, and clients would use the external-host-1, external-host-2, and external-host-3 hostnames to communicate with TypeDB Cloud; inter-server communication would be done over a private network using ports 1730 and 1731.

Stopping

Stopping TypeDB Cloud is done the same way as for a single server: close the terminal or press Ctrl+C. All nodes must be shut down independently in the same way.

Kubernetes

To deploy a TypeDB Cloud cluster with Kubernetes, we can use the Helm package manager.

Requirements

  • kubectl installed and configured for your Kubernetes deployment. You can use minikube for local testing.

  • helm installed.

Initial Setup

First, create a secret to access the TypeDB Cloud image on Docker Hub:

$ kubectl create secret docker-registry private-docker-hub --docker-server=https://index.docker.io/v2/ \
--docker-username=USERNAME --docker-password='PASSWORD' --docker-email=EMAIL

You can use an access token instead of a password.

Next, add the Vaticle Helm repo:

$ helm repo add vaticle https://repo.typedb.com/repository/helm/
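
To confirm the repository was added and see which chart versions are available, you can run:

$ helm repo update
$ helm search repo vaticle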

Encryption setup (optional)

This step is necessary only if you wish to deploy TypeDB Cloud with in-flight encryption support. Two certificates need to be configured: an external certificate (TLS) and an internal certificate (Curve). Both need to be generated and then added to Kubernetes Secrets.

An external certificate can either be obtained from trusted third-party providers such as Cloudflare or letsencrypt.org, or generated manually with a tool shipped with TypeDB Cloud in the tool directory:

$ java -jar encryption-gen.jar --ca-common-name=<x500-common-name> --hostname=<external-hostname>,<internal-hostname>

Please note that an external certificate is always bound to a domain name, not an IP address.

Ensure the external certificate (<external-hostname> in the command above) is bound to *.<helm-release-name>. For example, for a Helm release named typedb-cloud, the certificate needs to be bound to *.typedb-cloud.
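
As a hedged illustration only: for a Helm release named typedb-cloud exposed under a hypothetical domain example.com (see the Public Cluster section below), the invocation could look like the following; the CA common name and both hostnames are placeholders:

$ java -jar encryption-gen.jar \
    --ca-common-name=CN=typedb-cloud-ca \
    --hostname=*.typedb-cloud.example.com,*.typedb-cloud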

The encryption-gen.jar generates three directories:

  • internal-ca and external-ca — The CA keypairs stored only in case you want to sign more certificates in the future.

  • <external-hostname> — A directory with certificates to be stored on all servers in the cluster.

All files from the directory named after the external hostname must be copied to the proper directory on every server in the cluster. By default, they are stored in /server/conf/encryption inside the TypeDB Cloud main directory, for example, typedb-cloud-all-mac-x86_64-2.25.12/server/conf/encryption. The path to each file is configured in the encryption section of the TypeDB Cloud config file.
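
For example, a hypothetical copy of the generated files onto one server of a manually deployed cluster (the user, host, and install path are placeholders):

$ scp <external-hostname>/* typedb@10.0.0.1:/opt/typedb-cloud-all-linux-x86_64/server/conf/encryption/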

Once the external and internal certificates are generated, we can upload them to Kubernetes Secrets. Navigate into the directory with the cluster certificates and run:

$ kubectl create secret generic ext-grpc \
  --from-file ext-grpc-certificate.pem \
  --from-file ext-grpc-private-key.pem \
  --from-file ext-grpc-root-ca.pem
$ kubectl create secret generic int-grpc \
  --from-file int-grpc-certificate.pem \
  --from-file int-grpc-private-key.pem \
  --from-file int-grpc-root-ca.pem
$ kubectl create secret generic int-zmq \
  --from-file int-zmq-private-key \
  --from-file int-zmq-public-key
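
To confirm the three secrets were created:

$ kubectl get secrets ext-grpc int-grpc int-zmq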

Deploying a cluster via K8s

There are three alternative deployment modes that you can choose from:

  • Private Cluster — For applications (clients) that are located within the same Kubernetes network as the cluster.

  • Public Cluster — To access the cluster from outside the Kubernetes network.

  • Public Cluster (Minikube) — To deploy a development cluster on your local machine.

Deploying a Private Cluster

This deployment mode is preferred if your application is located within the same Kubernetes network as the cluster. In order to deploy in this mode, ensure that the exposed flag is set to false.

To deploy without in-flight encryption:

$ helm install typedb-cloud vaticle/typedb-cloud --set "exposed=false,encrypted=false"

To enable in-flight encryption for your private cluster, make sure the encrypted flag is set to true:

$ helm install typedb-cloud vaticle/typedb-cloud --set "exposed=false,encrypted=true" \
--set encryption.externalGRPC.secretName=ext-grpc,encryption.externalGRPC.content.privateKeyName=ext-grpc-private-key.pem,encryption.externalGRPC.content.certificateName=ext-grpc-certificate.pem,encryption.externalGRPC.content.rootCAName=ext-grpc-root-ca.pem \
--set encryption.internalGRPC.secretName=int-grpc,encryption.internalGRPC.content.privateKeyName=int-grpc-private-key.pem,encryption.internalGRPC.content.certificateName=int-grpc-certificate.pem,encryption.internalGRPC.content.rootCAName=int-grpc-root-ca.pem \
--set encryption.internalZMQ.secretName=int-zmq,encryption.internalZMQ.content.privateKeyName=int-zmq-private-key,encryption.internalZMQ.content.publicKeyName=int-zmq-public-key

The servers will be accessible via the internal hostname within the Kubernetes network, i.e., typedb-cloud-0.typedb-cloud, typedb-cloud-1.typedb-cloud, and typedb-cloud-2.typedb-cloud.
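
For example, a hedged way to connect from inside the Kubernetes network is to open a shell in one of the pods and run TypeDB Console against the internal hostname (this assumes the typedb binary is in the container's working directory, which may vary by image version):

$ kubectl exec --stdin --tty typedb-cloud-0 -- /bin/bash
$ ./typedb console --cloud typedb-cloud-0.typedb-cloud:1729 --username admin --password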

Deploying a Public Cluster

This deployment mode is preferred if you need to access the cluster from outside the Kubernetes network. For example, if you need to access the cluster from TypeDB Studio or TypeDB Console running on your local machine.

Deploying a public cluster can be done by setting the exposed flag to true.

Technically, the servers are made public by binding each one to a LoadBalancer instance which is assigned a public IP/hostname. The IP/hostname assignments are done automatically by the cloud provider that the Kubernetes platform is running on.

To deploy without in-flight encryption:

$ helm install typedb-cloud vaticle/typedb-cloud --set "exposed=true"

Once the deployment has completed, the servers are accessible via the public IPs/hostnames assigned to the Kubernetes LoadBalancer services. The addresses can be obtained with this command:

$ kubectl get svc -l external-ip-for=typedb-cloud \
-o='custom-columns=NAME:.metadata.name,IP OR HOSTNAME:.status.loadBalancer.ingress[0].*'

To enable in-flight encryption, the servers must be addressable by domain name. This restriction comes from the fact that external certificates must be bound to a domain name, not an IP address.

Given a domain name and a Helm release name, the server addresses follow this format:

<helm-release-name>-{0..n}.<domain-name>

This format must be taken into account when generating the external certificate so that it is properly bound to the server addresses. For example, you can generate an external certificate using a wildcard, i.e., *.<helm-release-name>.<domain-name>, that can be shared by all servers.

Once the domain name and external certificate have been configured accordingly, we can proceed with the deployment. Ensure that the encrypted flag is set to true and the domain flag is set accordingly:

$ helm install typedb-cloud vaticle/typedb-cloud --set "exposed=true,encrypted=true,domain=<domain-name>" \
--set encryption.externalGRPC.secretName=ext-grpc,encryption.externalGRPC.content.privateKeyName=ext-grpc-private-key.pem,encryption.externalGRPC.content.certificateName=ext-grpc-certificate.pem,encryption.externalGRPC.content.rootCAName=ext-grpc-root-ca.pem \
--set encryption.internalGRPC.secretName=int-grpc,encryption.internalGRPC.content.privateKeyName=int-grpc-private-key.pem,encryption.internalGRPC.content.certificateName=int-grpc-certificate.pem,encryption.internalGRPC.content.rootCAName=int-grpc-root-ca.pem \
--set encryption.internalZMQ.secretName=int-zmq,encryption.internalZMQ.content.privateKeyName=int-zmq-private-key,encryption.internalZMQ.content.publicKeyName=int-zmq-public-key

After the deployment has completed, we need to configure these addresses to point to the correct servers. This can be done by configuring an A record (for IPs) or a CNAME record (for hostnames) for each server with your DNS provider:

typedb-cloud-0.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-0 service>
typedb-cloud-1.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-1 service>
typedb-cloud-2.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-2 service>
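
Once the DNS records have propagated, you can spot-check the mapping, for example with dig:

$ dig +short typedb-cloud-0.typedb-cloud.example.com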

Deploying a Public Cluster with Minikube

Please note that in-flight encryption cannot be enabled in this configuration.

This deployment mode is primarily intended for development purposes as it runs a K8s cluster locally.

Ensure Minikube is installed and running.

Deploy, adjusting the parameters for CPU and storage to run on a local machine:

$ helm install typedb-cloud vaticle/typedb-cloud --set image.pullPolicy=Always,servers=3,singlePodPerNode=false,cpu=1,storage.persistent=false,storage.size=1Gi,exposed=true,javaopts=-Xmx4G --set encryption.enable=false

Once the deployment has been completed, enable tunneling from another terminal:

$ minikube tunnel

K8s cluster status check

To check the status of a cluster:

$ kubectl describe sts typedb-cloud

A few minutes after deploying a TypeDB Cloud cluster, the Pods Status field should show Running for all pods.

You can connect to a pod:

$ kubectl exec --stdin --tty typedb-cloud-0 -- /bin/bash

K8s cluster removal

To stop and remove a TypeDB Cloud cluster from Kubernetes, use helm uninstall with the Helm release name:

$ helm uninstall typedb-cloud

K8s troubleshooting

To see pod details for the typedb-cloud-0 pod:

$ kubectl describe pod typedb-cloud-0

The following are the common error scenarios and how to troubleshoot them.

All pods are stuck in ErrImagePull or ImagePullBackOff state

This means the secret to pull the image from Docker Hub has not been created. Make sure you’ve followed the Initial Setup instructions and verify that the pull secret is present by executing kubectl get secret/private-docker-hub. The correct state looks like this:

 $ kubectl get secret/private-docker-hub
 NAME                 TYPE                             DATA   AGE
 private-docker-hub   kubernetes.io/dockerconfigjson   1      11d

One or more pods of TypeDB Cloud are stuck in Pending state

This might mean the pods requested more resources than are available. To check if that’s the case, run the following for a stuck pod (e.g. typedb-cloud-0):

$ kubectl describe pod/typedb-cloud-0

An error message similar to 0/1 nodes are available: 1 Insufficient cpu. or 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. indicates that the cpu or storage.size settings need to be decreased.
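
One hedged way to lower these settings on an existing release is helm upgrade with --reuse-values; the values below are illustrative, and shrinking storage.size of already-provisioned persistent volumes may require recreating the release:

$ helm upgrade typedb-cloud vaticle/typedb-cloud --reuse-values --set cpu=1,storage.size=10Gi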

One or more pods of TypeDB Cloud are stuck in CrashLoopBackOff state

This might indicate a misconfiguration of TypeDB Cloud. Please check the logs:

$ kubectl logs pod/typedb-cloud-0

Helm configuration reference

Configurable settings for the Helm package are listed below with their default values:

name (default: null)
  Used for naming deployed objects. When not provided, the Helm release name is used instead.

image.repository (default: vaticle/typedb-cloud)
  The Docker Hub organization and repository from which to pull an appropriate image.

image.tag (default: 2.25.9)
  The version of TypeDB Cloud to use.

image.pullPolicy (default: IfNotPresent)
  Image pulling policy. For more information, see the image pull policy in Kubernetes documentation.

image.pullSecret (default: -)
  The name of a secret containing a container image registry key used to authenticate against the image repository.

exposed (default: false)
  Whether TypeDB Cloud supports connections via public IP/hostname (outside of the Kubernetes network).

serviceAnnotations (default: null)
  Kubernetes annotations to be added to the Kubernetes services responsible for directing traffic to the TypeDB Cloud pods.

tolerations (default: [])
  Kubernetes tolerations of taints on nodes. For more information, see the tolerations in Kubernetes documentation. Example:
    - key: "typedb-cloud-only"
      operator: "Exists"
      effect: "NoSchedule"

nodeAffinities (default: {})
  Kubernetes node affinities. For more information, see the node affinities in Kubernetes documentation.

podAffinities (default: {})
  Kubernetes pod affinities. For more information, see the pod affinities in Kubernetes documentation.

podAntiAffinities (default: {})
  Kubernetes pod anti-affinities. For more information, see the pod affinities in Kubernetes documentation.

singlePodPerNode (default: true)
  Whether each pod should run on its own node, without sharing nodes with other TypeDB Cloud instances from the same Helm installation.
  Warning: changing this to false without adding anti-affinities of your own will allow Kubernetes to place multiple cluster servers on the same node, negating the high-availability guarantees of TypeDB Cloud.

podLabels (default: {})
  Kubernetes pod labels. For more information, see the pod labels in Kubernetes documentation.

servers (default: 3)
  Number of TypeDB Cloud servers to run.

resources (default: {})
  Kubernetes resources specification. For more information, see the resource requests and limits in Kubernetes documentation.

storage.size (default: 100Gi)
  How much disk space should be allocated for each TypeDB Cloud server.

storage.persistent (default: true)
  Whether TypeDB Cloud should use a persistent volume to store data.

encryption.enabled (default: false)
  Whether TypeDB Cloud uses in-flight encryption.

encryption.externalGRPC
  Encryption settings for client-server communication.

encryption.internalGRPC
  Encryption settings for cluster management, e.g., creating a database on all replicas.

encryption.internalZMQ
  Encryption settings for data replication.

authentication.password.disallowDefault (default: false)
  Whether to disallow the default password for the admin account.

logstash.enabled (default: false)
  Whether TypeDB Cloud pushes logs into Logstash.

logstash.uri (default: "")
  Hostname and port of a Logstash daemon accepting log records.

Current Limitations

TypeDB Cloud doesn’t support dynamic reconfiguration of server count without restarting all the servers.
