rm -r ~/yb_docker_data
mkdir ~/yb_docker_data
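The two commands above can be combined into one sketch that also pre-creates the per-node mount points used by the -v flags below (the node1/node2/node3 directory names are assumptions matching those mounts):

```shell
# Recreate the host directories that will be mounted into each node
# container as /home/yugabyte/yb_data.
rm -rf ~/yb_docker_data
mkdir -p ~/yb_docker_data/node1 ~/yb_docker_data/node2 ~/yb_docker_data/node3
ls ~/yb_docker_data
```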
docker network create custom-network
docker run -d --name yugabytedb_node1 --net custom-network \
-p 15433:15433 -p 7001:7000 -p 9000:9000 -p 5433:5433 \
-v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
yugabytedb/yugabyte:latest \
bin/yugabyted start --tserver_flags="ysql_sequence_cache_minval=1" \
--base_dir=/home/yugabyte/yb_data --daemon=false
docker run -d --name yugabytedb_node2 --net custom-network \
-p 15434:15433 -p 7002:7000 -p 9002:9000 -p 5434:5433 \
-v ~/yb_docker_data/node2:/home/yugabyte/yb_data --restart unless-stopped \
yugabytedb/yugabyte:latest \
bin/yugabyted start --join=yugabytedb_node1 --tserver_flags="ysql_sequence_cache_minval=1" \
--base_dir=/home/yugabyte/yb_data --daemon=false
docker run -d --name yugabytedb_node3 --net custom-network \
-p 15435:15433 -p 7003:7000 -p 9003:9000 -p 5435:5433 \
-v ~/yb_docker_data/node3:/home/yugabyte/yb_data --restart unless-stopped \
yugabytedb/yugabyte:latest \
bin/yugabyted start --join=yugabytedb_node1 --tserver_flags="ysql_sequence_cache_minval=1" \
--base_dir=/home/yugabyte/yb_data --daemon=false
To start node1 with auto-explain enabled, pass the tserver flags through a flag file instead:
docker run -d --name yugabytedb_node1 --net custom-network \
-p 15433:15433 -p 7001:7000 -p 9000:9000 -p 5433:5433 \
-v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
yugabytedb/yugabyte:latest \
bin/yugabyted start --tserver_flags="flagfile=/home/yugabyte/yb_data/tserver_flags" \
--base_dir=/home/yugabyte/yb_data --daemon=false
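The flag file referenced above is a plain gflags file with one --flag=value per line. A hypothetical /home/yugabyte/yb_data/tserver_flags that loads auto_explain through the ysql_pg_conf_csv flag might look like the sketch below; the specific auto_explain settings are assumptions to adjust for your workload:

```
--ysql_sequence_cache_minval=1
--ysql_pg_conf_csv=shared_preload_libraries=auto_explain,auto_explain.log_min_duration=0,auto_explain.log_analyze=true
```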
- Clone the Kine repo:
git clone https://github.com/k3s-io/kine
- Start a Kine instance connecting to the YugabyteDB cluster:
go run . --endpoint "postgres://yugabyte:yugabyte@127.0.0.1:5433/yugabyte"
- Connect to YugabyteDB with psql:
psql -h 127.0.0.1 -p 5433 -U yugabyte
- Check that the Kine schema is ready:
yugabyte=# \d
          List of relations
 Schema |    Name     |   Type   |  Owner
--------+-------------+----------+----------
 public | kine        | table    | yugabyte
 public | kine_id_seq | sequence | yugabyte
(2 rows)
- Stop the Kine instance and drop the schema on YugabyteDB end:
drop table kine cascade;
Now, experiment with the Kine version that supports the YugabyteDB backend. The YugabyteDB backend is an optimized version of Kine's original Postgres backend.
- Clone the Kine repo:
git clone https://github.com/dmagda/kine-yugabytedb.git
- Start a Kine instance connecting to the YugabyteDB cluster:
go run . --endpoint "yugabytedb://yugabyte:yugabyte@127.0.0.1:5433/yugabyte"
- Connect to YugabyteDB with psql:
psql -h 127.0.0.1 -p 5433 -U yugabyte
- Check that the Kine schema is ready:
yugabyte=# \d
         List of relations
 Schema |   Name   |   Type   |  Owner
--------+----------+----------+----------
 public | kine     | table    | yugabyte
 public | kine_seq | sequence | yugabyte
(2 rows)
- Make sure the batched nested loop batch size is set to 1024:
yugabyte=# show yb_bnl_batch_size;
 yb_bnl_batch_size
-------------------
 1024
(1 row)
- Stop the Kine instance and drop the schema on YugabyteDB end:
drop table kine cascade;
drop sequence kine_seq;
Now, start a k3s instance using the Kine version that supports the YugabyteDB backend. To do that, you need to build k3s from source: https://github.com/k3s-io/k3s/blob/master/BUILDING.md
- Clone k3s:
git clone --depth 1 https://github.com/k3s-io/k3s.git
- Remove artifacts from a previous release if any:
rm -r build
rm -r dist
- Open the go.mod file and add the following line to the end of the replace (...) section:
github.com/k3s-io/kine => github.com/dmagda/kine-yugabytedb v0.2.0
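For context, the replace section might end up looking like this sketch; the placeholder comment stands in for whatever replace directives your cloned go.mod already lists, which should stay as-is:

```
replace (
	// ...existing k3s replace directives...
	github.com/k3s-io/kine => github.com/dmagda/kine-yugabytedb v0.2.0
)
```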
- Add the private repo:
go env -w GOPRIVATE=github.com/dmagda/kine-yugabytedb
- Make sure the changes take effect:
go mod tidy
- Prepare for a k3s full release:
mkdir -p build/data && make download && make generate
- Build a full release:
SKIP_VALIDATE=true make
- Navigate to the directory with the k3s build artifacts:
cd dist/artifacts/
- Start a k3s server connecting it to a YugabyteDB backend:
sudo ./k3s server \
--token=sample_secret_token \
--datastore-endpoint="yugabytedb://yugabyte:yugabyte@127.0.0.1:5433/yugabyte"
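The datastore endpoint is an ordinary connection URL; the sketch below assembles it in shell, where the user, password, host, port, and database name are assumptions matching the local cluster started earlier:

```shell
# Assemble the k3s --datastore-endpoint URL for the YugabyteDB-backed
# Kine; the yugabytedb:// scheme selects the optimized backend.
user=yugabyte
pass=yugabyte
host=127.0.0.1
port=5433
db=yugabyte
endpoint="yugabytedb://${user}:${pass}@${host}:${port}/${db}"
echo "$endpoint"
# prints yugabytedb://yugabyte:yugabyte@127.0.0.1:5433/yugabyte
```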
- Make sure the server node is ready:
sudo ./k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
market-orders-app-vm Ready control-plane,master 11m v1.27.3+k3s-be442433
Let's deploy a sample application to confirm the cluster is usable. You can consider this one: https://github.com/digitalocean/kubernetes-sample-apps/tree/master/emojivoto-example
Deploy:
sudo ./k3s kubectl apply -k ~/kubernetes-sample-apps/emojivoto-example/kustomize
Confirm:
sudo ./k3s kubectl get all -n emojivoto
Uninstall script for k3s:
First this command:
and then this script: