TypeDB Enterprise
This page is an installation guide for TypeDB Enterprise, a self-hosted deployment of TypeDB. You can get a license for it from our sales team: contact us via e-mail or contact form.
TypeDB is available as a fully managed cloud service with no installation required at cloud.typedb.com.
System Requirements
TypeDB Enterprise runs on macOS, Linux, and Windows. The only requirement is Java v.11+ (OpenJDK or Oracle Java).
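To confirm that a suitable Java runtime is available on your PATH, you can check the version from a terminal (the exact output depends on your Java vendor and version):
java -version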
Download and Install
TypeDB Enterprise is distributed and installed separately from other editions of TypeDB.
Request access to the TypeDB Enterprise distribution repositories from the TypeDB team. You can do that via technical support, using the contact details from your contract, or through our Discord server.
Then install it using Docker or by downloading and installing it manually.
You can deploy a cluster of TypeDB Enterprise servers manually or via Kubernetes.
If you need a license, contact our sales team.
Docker
The TypeDB Enterprise image is hosted in our private Docker repository.
Make sure to use docker login first to authenticate.
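For example, using the credentials provided to you by the TypeDB team (the username below is a placeholder):
docker login -u your-typedb-username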
To pull the TypeDB Enterprise Docker image, run:
docker pull vaticle/typedb-cloud:latest
Use docker create to create a new container with the downloaded image.
By default, a TypeDB Enterprise server is expected to be running on port 1729.
To ensure that data is preserved even when the container is killed or restarted, use Docker volumes:
docker create --name typedb -p 1729:1729 \
-v $(pwd)/db/data:/opt/typedb-cloud-all-linux-x86_64/server/data \
-v $(pwd)/db/replication:/opt/typedb-cloud-all-linux-x86_64/server/replication \
-v $(pwd)/db/user:/opt/typedb-cloud-all-linux-x86_64/server/user \
vaticle/typedb-cloud:latest
The port number is configurable. For example, you could configure TypeDB Enterprise to listen on port 80 instead of the default port 1729.
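For instance, one way to expose the server on host port 80 is to remap the published port when creating the container; this minimal sketch omits the volume mounts shown above:
docker create --name typedb -p 80:1729 vaticle/typedb-cloud:latest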
Manual installation
Make sure you have access to the main repository: https://repo.typedb.com.
Download the latest release and unzip it in a location on your machine that is easily accessible from a terminal.
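For example, on Linux (the archive and directory names below are illustrative and depend on the release you downloaded):
unzip typedb-cloud-all-linux-x86_64-2.25.12.zip
cd typedb-cloud-all-linux-x86_64-2.25.12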
If TypeDB doesn’t have a distribution you need, please open an issue on GitHub.
Start and Stop TypeDB Enterprise manually
To start TypeDB Enterprise manually:
- Navigate into the directory with the unpacked files of TypeDB Enterprise.
- Run:
./typedb server
Now TypeDB Enterprise should show a welcome screen with its version, address, and bootup time.
To stop TypeDB Enterprise:
Close the terminal or press Ctrl+C.
Connecting
To check whether TypeDB Enterprise is working and interact with it, you can connect to it with any TypeDB Client.
You can use TypeDB Console from your TypeDB Enterprise directory:
./typedb console --cloud 127.0.0.1:1729 --username admin --password
To run TypeDB Console from a Docker container, run:
docker exec -ti typedb bash -c '/opt/typedb-cloud-all-linux-x86_64/typedb console --cloud 127.0.0.1:1729 --username admin --password'
Deploying manually
While it’s possible to run TypeDB Enterprise in single-server mode, a highly available, fault-tolerant, production-grade setup requires multiple servers connected to form a cluster.
At any given point in time, one of those servers acts as the leader, and the others are followers.
Increasing the number of servers increases the cluster’s tolerance to failure: to tolerate N servers failing, a cluster needs to consist of 2N + 1 servers.
This section describes how to set up a 3-server cluster, in which one server can fail with no data loss.
Each TypeDB Enterprise server in a cluster binds to three ports: a client port that TypeDB drivers connect to (1729 by default) and two server ports (1730 and 1731) for server-to-server communication.
For this tutorial, it’s assumed that all three servers are on the same virtual network, have the relevant ports open with no firewall interference, and have the IP addresses 10.0.0.1, 10.0.0.2, and 10.0.0.3 respectively.
If you’re using a single machine to host all nodes, it may be easier to give each server its own set of ports so that all of them can bind to the same address.
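As a minimal sketch, assuming each server is simply given its own port numbers (the --server.peers.* flags for all servers would be added in the same way as in the full examples below):
./typedb server --server.address=127.0.0.1:1729 --server.internal-address.zeromq=127.0.0.1:1730 --server.internal-address.grpc=127.0.0.1:1731
./typedb server --server.address=127.0.0.1:11729 --server.internal-address.zeromq=127.0.0.1:11730 --server.internal-address.grpc=127.0.0.1:11731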
Starting
TypeDB Enterprise servers working in a cluster must be configured to know about all servers in the cluster (their peers).
This can be done through the peers key in the server config file, or with command-line arguments when starting the server.
Command-line arguments take priority over the config file.
The example below shows how a 3-server TypeDB Enterprise cluster would be started on three separate machines.
The first machine has the IP address 10.0.0.1.
To run TypeDB Enterprise server #1:
./typedb server \
--server.address=10.0.0.1:1729 \
--server.internal-address.zeromq=10.0.0.1:1730 \
--server.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
The second machine has the IP address 10.0.0.2.
To run TypeDB Enterprise server #2:
./typedb server \
--server.address=10.0.0.2:1729 \
--server.internal-address.zeromq=10.0.0.2:1730 \
--server.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
The third machine has the IP address 10.0.0.3.
To run TypeDB Enterprise server #3:
./typedb server \
--server.address=10.0.0.3:1729 \
--server.internal-address.zeromq=10.0.0.3:1730 \
--server.internal-address.grpc=10.0.0.3:1731 \
--server.peers.peer-1.address=10.0.0.1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=10.0.0.2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=10.0.0.3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
The above example assumes the application (TypeDB Client) accessing TypeDB Enterprise resides on the same private network as the cluster.
If this is not the case, TypeDB Enterprise also supports using different IP addresses for client and server communication.
The relevant external hostname should be passed as an argument using the --server.address and --server.peers flags, as shown below.
./typedb server \
--server.address=external-host-1:1729 \
--server.internal-address.zeromq=10.0.0.1:1730 \
--server.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-1.address=external-host-1:1729 \
--server.peers.peer-1.internal-address.zeromq=10.0.0.1:1730 \
--server.peers.peer-1.internal-address.grpc=10.0.0.1:1731 \
--server.peers.peer-2.address=external-host-2:1729 \
--server.peers.peer-2.internal-address.zeromq=10.0.0.2:1730 \
--server.peers.peer-2.internal-address.grpc=10.0.0.2:1731 \
--server.peers.peer-3.address=external-host-3:1729 \
--server.peers.peer-3.internal-address.zeromq=10.0.0.3:1730 \
--server.peers.peer-3.internal-address.grpc=10.0.0.3:1731
And so on for servers #2 and #3.
In this case, port 1729 would need to be open to the public, and clients would use the external-host-1, external-host-2, and external-host-3 hostnames to communicate with TypeDB Enterprise; inter-server communication would be done over a private network using ports 1730 and 1731.
Kubernetes
To deploy a TypeDB Enterprise cluster with Kubernetes, we can use the Helm package manager.
Requirements
- kubectl installed and configured for your Kubernetes deployment. You can use minikube for local testing.
- helm installed (you can verify both installations with the commands below).
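To verify that both tools are available, check their versions:
kubectl version --client
helm version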
Initial Setup
First, create a secret to access the TypeDB Enterprise image on Docker Hub:
kubectl create secret docker-registry private-docker-hub --docker-server=https://index.docker.io/v2/ \
--docker-username=USERNAME --docker-password='PASSWORD' --docker-email=EMAIL
You can use an access token instead of a password.
Next, add the TypeDB Helm repo:
helm repo add typedb https://repo.typedb.com/repository/helm/
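After adding the repository, you can refresh the local chart index so the latest chart versions are visible:
helm repo update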
Encryption setup (optional)
This step is necessary if you wish to deploy TypeDB Enterprise with in-flight encryption support. Two certificates need to be configured: an external certificate (TLS) and an internal certificate (Curve). The certificates need to be generated and then added to Kubernetes Secrets.
An external certificate can be obtained from trusted third-party providers such as Cloudflare or letsencrypt.org.
Alternatively, you can generate it manually with a tool provided with TypeDB Enterprise in the tool directory:
java -jar encryption-gen.jar --ca-common-name=<x500-common-name> --hostname=<external-hostname>,<internal-hostname>
Please note that an external certificate is always bound to a URL address, not an IP address.
Ensure the external certificate (<external-hostname> in the command above) is bound to *.<helm-release-name>.
For example, for a Helm release named typedb-cloud, the certificate needs to be bound to *.typedb-cloud.
The encryption-gen.jar tool generates three directories:
- internal-ca and external-ca — the CA keypairs, stored only in case you want to sign more certificates in the future.
- <external-hostname> — a directory with certificates to be stored on all servers in the cluster.
All files from the directory named after the external domain must be copied to the proper directory on every server in the cluster.
By default, they are stored in /server/conf/encryption inside the TypeDB Enterprise main directory, for example, typedb-cloud-all-mac-x86_64-2.25.12/server/conf/encryption.
The path to each file is configured in the encryption section of the TypeDB Enterprise config file.
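For example, a hedged sketch of copying the generated files to each server with scp, assuming the manual deployment layout described earlier (the user, hosts, and installation path are placeholders for your own setup):
scp <external-hostname>/* user@10.0.0.1:/opt/typedb-cloud-all-linux-x86_64/server/conf/encryption/
scp <external-hostname>/* user@10.0.0.2:/opt/typedb-cloud-all-linux-x86_64/server/conf/encryption/
scp <external-hostname>/* user@10.0.0.3:/opt/typedb-cloud-all-linux-x86_64/server/conf/encryption/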
Once the external and internal certificates have been generated, we can upload them to Kubernetes Secrets. Navigate into the directory with the cluster certificates and run:
kubectl create secret generic ext-grpc \
--from-file ext-grpc-certificate.pem \
--from-file ext-grpc-private-key.pem \
--from-file ext-grpc-root-ca.pem
kubectl create secret generic int-grpc \
--from-file int-grpc-certificate.pem \
--from-file int-grpc-private-key.pem \
--from-file int-grpc-root-ca.pem
kubectl create secret generic int-zmq \
--from-file int-zmq-private-key \
--from-file int-zmq-public-key
Deploying a cluster via K8s
There are three alternative deployment modes that you can choose from:
- Private Cluster — For applications (clients) that are located within the same Kubernetes network as the cluster.
- Public Cluster — To access the cluster from outside the Kubernetes network.
- Public Cluster (Minikube) — To deploy a development cluster on your local machine.
Deploying a Private Cluster
This deployment mode is preferred if your application is located within the same Kubernetes network as the cluster.
In order to deploy in this mode, ensure that the exposed flag is set to false.
To deploy without in-flight encryption:
helm install typedb-cloud typedb/typedb-cloud --set "exposed=false,encrypted=false"
To enable in-flight encryption for your private cluster, make sure the encrypted flag is set to true:
helm install typedb-cloud typedb/typedb-cloud --set "exposed=false,encrypted=true" \
--set encryption.enable=true,encryption.externalGRPC.secretName=ext-grpc,encryption.externalGRPC.content.privateKeyName=ext-grpc-private-key.pem,encryption.externalGRPC.content.certificateName=ext-grpc-certificate.pem,encryption.externalGRPC.content.rootCAName=ext-grpc-root-ca.pem \
--set encryption.internalGRPC.secretName=int-grpc,encryption.internalGRPC.content.privateKeyName=int-grpc-private-key.pem,encryption.internalGRPC.content.certificateName=int-grpc-certificate.pem,encryption.internalGRPC.content.rootCAName=int-grpc-root-ca.pem \
--set encryption.internalZMQ.secretName=int-zmq,encryption.internalZMQ.content.privateKeyName=int-zmq-private-key,encryption.internalZMQ.content.publicKeyName=int-zmq-public-key
The servers will be accessible via their internal hostnames within the Kubernetes network, i.e., typedb-cloud-0.typedb-cloud, typedb-cloud-1.typedb-cloud, and typedb-cloud-2.typedb-cloud.
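As an illustration, a client running in a pod on the same Kubernetes network could reach the first server with TypeDB Console using one of those hostnames (this assumes the TypeDB Enterprise distribution, and therefore Console, is available in that pod):
./typedb console --cloud typedb-cloud-0.typedb-cloud:1729 --username admin --password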
Deploying a Public Cluster
This deployment mode is preferred if you need to access the cluster from outside the Kubernetes network. For example, if you need to access the cluster from TypeDB Studio or TypeDB Console running on your local machine.
Deploying a public cluster can be done by setting the exposed flag to true.
Technically, the servers are made public by binding each one to a LoadBalancer instance which is assigned a public IP/hostname. The IP/hostname assignments are done automatically by the cloud provider that the Kubernetes platform is running on.
To deploy without in-flight encryption:
helm install typedb-cloud typedb/typedb-cloud --set "exposed=true"
Once the deployment has completed, the servers are accessible via the public IPs/hostnames assigned to the Kubernetes LoadBalancer services.
The addresses can be obtained with this command:
kubectl get svc -l external-ip-for=typedb-cloud \
-o='custom-columns=NAME:.metadata.name,IP OR HOSTNAME:.status.loadBalancer.ingress[0].*'
To enable in-flight encryption, the servers must be assigned URL addresses. This restriction comes from the fact that external certificates must be bound to a domain name, and not an IP address.
Given a "domain name" and a "Helm release name", the address structure of the servers will follow the specified format:
<helm-release-name>-{0..n}.<domain-name>
This format must be taken into account when generating the external certificate for all servers, so that they’re properly bound to their addresses.
For example, you can generate an external certificate using a wildcard, i.e., *.<helm-release-name>.<domain-name>, that can be shared by all servers.
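For illustration, a hedged sketch of generating such a wildcard certificate with the bundled tool, assuming a Helm release named typedb-cloud and the domain example.com (adjust the common name and hostnames to your own setup):
java -jar encryption-gen.jar --ca-common-name=<x500-common-name> --hostname='*.typedb-cloud.example.com',<internal-hostname>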
Once the domain name and external certificate have been configured accordingly, we can proceed with the deployment.
Ensure that the encrypted flag is set to true and the domain flag is set accordingly.
helm install typedb-cloud typedb/typedb-cloud --set "exposed=true,encrypted=true,domain=<domain-name>" \
--set encryption.enable=true,encryption.externalGRPC.secretName=ext-grpc,encryption.externalGRPC.content.privateKeyName=ext-grpc-private-key.pem,encryption.externalGRPC.content.certificateName=ext-grpc-certificate.pem,encryption.externalGRPC.content.rootCAName=ext-grpc-root-ca.pem \
--set encryption.internalGRPC.secretName=int-grpc,encryption.internalGRPC.content.privateKeyName=int-grpc-private-key.pem,encryption.internalGRPC.content.certificateName=int-grpc-certificate.pem,encryption.internalGRPC.content.rootCAName=int-grpc-root-ca.pem \
--set encryption.internalZMQ.secretName=int-zmq,encryption.internalZMQ.content.privateKeyName=int-zmq-private-key,encryption.internalZMQ.content.publicKeyName=int-zmq-public-key
After the deployment has completed, we need to configure these URL addresses to point to the servers correctly.
This can be done by configuring an A record (for IPs) or a CNAME record (for hostnames) for all the servers in your trusted DNS provider:
typedb-cloud-0.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-0 service>
typedb-cloud-1.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-1 service>
typedb-cloud-2.typedb-cloud.example.com => <public IP/hostname of typedb-cloud-2 service>
Deploying a Public Cluster with Minikube
Please note that in-flight encryption cannot be enabled in this configuration.
This deployment mode is primarily intended for development purposes as it runs a K8s cluster locally.
Ensure Minikube is installed and running.
Deploy, adjusting the parameters for CPU and storage to run on a local machine:
helm install typedb-cloud typedb/typedb-cloud --set image.pullPolicy=Always,servers=3,singlePodPerNode=false,cpu=1,storage.persistent=false,storage.size=1Gi,exposed=true,javaopts=-Xmx4G --set encryption.enable=false
Once the deployment has been completed, enable tunneling from another terminal:
minikube tunnel
K8s cluster status check
To check the status of a cluster:
kubectl describe sts typedb-cloud
A few minutes after deploying a TypeDB Enterprise cluster, the Pods Status field should show Running for all the nodes.
You can connect to a pod:
kubectl exec --stdin --tty typedb-cloud-0 -- /bin/bash
K8s cluster removal
To stop and remove a K8s cluster from Kubernetes, use helm uninstall with the Helm release name:
helm uninstall typedb-cloud
K8s troubleshooting
To see pod details for the typedb-cloud-0 pod:
kubectl describe pod typedb-cloud-0
The following are the common error scenarios and how to troubleshoot them.
All pods are stuck in ErrImagePull or ImagePullBackOff state
This means the secret to pull the image from Docker Hub has not been created.
Make sure you’ve followed the Initial Setup instructions and verify that the pull secret is present by executing kubectl get secret/private-docker-hub. The correct state looks like this:
kubectl get secret/private-docker-hub
NAME TYPE DATA AGE
private-docker-hub kubernetes.io/dockerconfigjson 1 11d
One or more TypeDB Enterprise pods are stuck in Pending state
This might mean the pods requested more resources than are available.
To check whether that’s the case, run the following on a stuck pod (e.g., typedb-cloud-0):
kubectl describe pod/typedb-cloud-0
An error message similar to 0/1 nodes are available: 1 Insufficient cpu. or 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. indicates that the cpu or storage.size settings need to be decreased.
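For example, assuming the release is named typedb-cloud, you could lower those values with a Helm upgrade (the numbers are placeholders to adapt to your nodes' capacity; --reuse-values keeps the rest of the configuration unchanged):
helm upgrade typedb-cloud typedb/typedb-cloud --reuse-values --set cpu=1,storage.size=1Gi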
Helm configuration reference
Configurable settings for the Helm package include:
| Key | Default value | Description |
|---|---|---|
| | | Used for naming deployed objects. When not provided, the Helm release name will be used instead. |
| | | The Docker Hub organization and repository from which to pull an appropriate image. |
| | | The version of TypeDB Enterprise to use. |
| image.pullPolicy | | Image pulling policy. |
| | - | The name of a secret containing a container image registry key used to authenticate against the image repository. |
| exposed | | Whether TypeDB Enterprise supports connections via public IP/hostname (outside of the Kubernetes network). |
| | | Kubernetes annotations to be added to the Kubernetes services responsible for directing traffic to the TypeDB Enterprise pods. |
| | | Kubernetes tolerations of taints on nodes. |
| | | Kubernetes node affinities. |
| | | Kubernetes pod affinities. |
| | | Kubernetes pod anti-affinities. |
| singlePodPerNode | | Whether a pod should share nodes with other TypeDB Enterprise instances from the same Helm installation. Warning: changing this to false and adding no anti-affinities of your own will allow Kubernetes to place multiple cluster servers on the same node, negating the high-availability guarantees of TypeDB Enterprise. |
| | | Kubernetes pod labels. |
| servers | | Number of TypeDB Enterprise servers to run. |
| | | Kubernetes resources specification. |
| storage.size | | How much disk space should be allocated for each TypeDB Enterprise server. |
| storage.persistent | | Whether TypeDB Enterprise should use a persistent volume to store data. |
| encrypted | | Whether TypeDB Enterprise uses in-flight encryption. |
| encryption.externalGRPC | | Encryption settings for client-server communications. |
| encryption.internalGRPC | | Encryption settings for cluster management, e.g., creating a database on all replicas. |
| encryption.internalZMQ | | Encryption settings for data replication. |
| | | Check whether the |
| | | Whether TypeDB Enterprise pushes logs into Logstash. |
| | | Hostname and port of a Logstash daemon accepting log records. |