Deploying an Accessible Harbor Service Inside a kind Cluster

Deploying Kubernetes and Required Services

  • Docker version 24.0.5

    $ docker version
    Client:
     Version:           24.0.5
     API version:       1.43
     Go version:        go1.20.3
     Git commit:        24.0.5-0ubuntu1~22.04.1
     Built:             Mon Aug 21 19:50:14 2023
     OS/Arch:           linux/amd64
     Context:           default

    Server:
     Engine:
      Version:          24.0.5
      API version:      1.43 (minimum version 1.12)
      Go version:       go1.20.3
      Git commit:       24.0.5-0ubuntu1~22.04.1
      Built:            Mon Aug 21 19:50:14 2023
      OS/Arch:          linux/amd64
      Experimental:     true
     containerd:
      Version:          1.7.2
      GitCommit:
     runc:
      Version:          1.1.0-0ubuntu1.1
      GitCommit:
     docker-init:
      Version:          0.19.0
      GitCommit:
  • kind version v0.22.0

    $ kind version
    kind v0.22.0 go1.21.1 linux/amd64
  • kind cluster configuration file

    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    containerdConfigPatches:
    - |-
      [plugins."io.containerd.grpc.v1.cri".registry]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["https://registry-1.docker.io"]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."gcr.io"]
            endpoint = ["https://gcr.m.daocloud.io"]

    networking:
      apiServerAddress: "0.0.0.0"
      apiServerPort: 6443

    nodes:
    - role: control-plane
      kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
      extraPortMappings:
      - containerPort: 31000
        hostPort: 21000
        protocol: TCP
  • Deploy the Kubernetes cluster (single node)

    $ kind create cluster --name harbor --config demo.yaml
    Creating cluster "harbor" ...
    ✓ Ensuring node image (kindest/node:v1.29.2) 🖼
    ✓ Preparing nodes 📦
    ✓ Writing configuration 📜
    ✓ Starting control-plane 🕹️
    ✓ Installing CNI 🔌
    ✓ Installing StorageClass 💾
    Set kubectl context to "kind-harbor"
    You can now use your cluster with:

    kubectl cluster-info --context kind-harbor

    Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
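To double-check that the containerdConfigPatches from the config file actually landed, the generated containerd config can be inspected inside the node (a kind node is just a Docker container named after the cluster):

$ docker exec harbor-control-plane grep -A 1 'registry.mirrors' /etc/containerd/config.toml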
  • Verify the cluster (the default kubectl context has already been switched to the newly created kind-harbor)

    $ kubectl get no
    NAME                   STATUS   ROLES           AGE   VERSION
    harbor-control-plane   Ready    control-plane   37s   v1.29.2

    $ kubectl get po -A
    NAMESPACE            NAME                                            READY   STATUS    RESTARTS   AGE
    kube-system          coredns-76f75df574-6zqwf                        1/1     Running   0          98s
    kube-system          coredns-76f75df574-kwtm7                        1/1     Running   0          98s
    kube-system          etcd-harbor-control-plane                       1/1     Running   0          112s
    kube-system          kindnet-6xcrd                                   1/1     Running   0          98s
    kube-system          kube-apiserver-harbor-control-plane             1/1     Running   0          113s
    kube-system          kube-controller-manager-harbor-control-plane    1/1     Running   0          113s
    kube-system          kube-proxy-sslv8                                1/1     Running   0          98s
    kube-system          kube-scheduler-harbor-control-plane             1/1     Running   0          113s
    local-path-storage   local-path-provisioner-7577fdbbfb-9zgqc         1/1     Running   0          98s
  • Fetch the cert-manager chart, version v1.14.3

    $ cat Chart.yaml
    name: cert-manager
    appVersion: v1.14.3
    version: v1.14.3
  • Deploy cert-manager

    $ helm install cert-manager --namespace cert-manager --create-namespace --kube-context kind-harbor .
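A quick way to confirm the release is up before moving on (note that, depending on how the chart is packaged, the cert-manager CRDs may need to be installed as well, e.g. via `--set installCRDs=true`):

$ kubectl --context kind-harbor -n cert-manager wait --for=condition=Ready pod --all --timeout=120s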
  • Fetch the traefik chart, version 26.1.0

    $ cat Chart.yaml
    name: traefik
    appVersion: v2.11.0
    version: 26.1.0
  • Deploy traefik

    $ helm install traefik --namespace kube-system --create-namespace --kube-context kind-harbor .
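Before editing anything, it helps to confirm the service exists and is of type LoadBalancer (in kind its external IP will stay pending, which is exactly why it is switched to NodePort below):

$ kubectl --context kind-harbor -n kube-system get svc traefik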
  • Modify the traefik service

  1. Change the service type from LoadBalancer to NodePort.
  2. Set the nodePort to 31000, matching the container port that the kind cluster config above maps to host port 21000.
spec:
  ports:
  - name: websecure
    nodePort: 31000
    port: 443
    protocol: TCP
    targetPort: websecure
  type: NodePort
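The same edit can also be applied non-interactively with a JSON patch; a sketch is below (the array index of the websecure entry under .spec.ports can differ between chart versions, so verify it first with `kubectl get svc traefik -o yaml`):

$ kubectl --context kind-harbor -n kube-system patch svc traefik --type=json \
    -p='[{"op":"replace","path":"/spec/type","value":"NodePort"},
         {"op":"replace","path":"/spec/ports/1/nodePort","value":31000}]'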
  • Modify local-path to support the ReadWriteMany access mode
    $ kubectl edit cm local-path-config -n local-path-storage

Update config.json as follows, replacing the nodePathMap block with sharedFileSystemPath (the first document below is the original, the second is the replacement):

apiVersion: v1
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/var/local-path-provisioner"]
        }
      ]
    }
---
apiVersion: v1
data:
  config.json: |-
    {
      "sharedFileSystemPath": "/opt/local-path-provisioner"
    }

Restart the local-path provisioner:

$ kubectl -n local-path-storage get po -o jsonpath="{.items[].metadata.name}" | xargs kubectl -n local-path-storage delete po 
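To verify that ReadWriteMany claims are now accepted, a throwaway PVC can be applied (the name is illustrative; with kind's default standard storage class the claim will stay Pending until a pod consumes it, since its binding mode is WaitForFirstConsumer):

$ cat <<EOF | kubectl --context kind-harbor apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-smoke-test
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: standard
  resources:
    requests:
      storage: 1Mi
EOF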

Deploying the Harbor Service

  • Fetch the harbor chart, version 19.9.0

https://github.com/bitnami/charts/tree/main/bitnami/harbor

$ cat Chart.yaml
name: harbor
apiVersion: v2
version: 19.9.0
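One way to pull that exact chart version locally, assuming the Bitnami OCI registry layout:

$ helm pull oci://registry-1.docker.io/bitnamicharts/harbor --version 19.9.0 --untar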
  • Edit values.yaml
externalURL: https://harbor.internal.com
adminPassword: "Passw0rd"

ingress:
  certProvider: "cert-manager"
  core:
    hostname: harbor.internal.com
  • Deploy harbor
    $ helm upgrade harbor -i --namespace harbor --create-namespace --set global.storageClass=standard --kube-context kind-harbor .

Check the deployed pods:

$ kubectl -n harbor get po
NAME                                 READY   STATUS    RESTARTS   AGE
harbor-core-75f79fb9f9-qpblp         1/1     Running   0          14m
harbor-jobservice-5677b948b8-sw87s   1/1     Running   0          14m
harbor-portal-67995b4946-mhvkn       1/1     Running   0          14m
harbor-postgresql-0                  1/1     Running   0          14m
harbor-redis-master-0                1/1     Running   0          14m
harbor-registry-5fb59b549-c86kj      2/2     Running   0          14m
harbor-trivy-0                       1/1     Running   0          14m
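It is also worth confirming that the ingress and the cert-manager Certificate were created (exact resource names can vary with the chart version):

$ kubectl --context kind-harbor -n harbor get ingress,certificate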
  • Configure the domain name
    Add an entry to /etc/hosts (192.168.2.110 is the address of the host running kind):

    192.168.2.110 harbor.internal.com
  • Visit the Harbor UI and log in with the admin password configured earlier

    https://harbor.internal.com:21000
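With the UI reachable, the registry API can be smoke-tested from the host. Because the certificate comes from cert-manager's self-signed chain, Docker has to trust it first; the secret name below is an assumption, so check `kubectl -n harbor get secret` for the actual TLS secret in your release:

# Place the CA where Docker looks for per-registry certs (secret name is assumed)
$ sudo mkdir -p /etc/docker/certs.d/harbor.internal.com:21000
$ kubectl --context kind-harbor -n harbor get secret harbor-ingress-tls \
    -o jsonpath='{.data.ca\.crt}' | base64 -d | \
    sudo tee /etc/docker/certs.d/harbor.internal.com:21000/ca.crt >/dev/null
$ docker login harbor.internal.com:21000 -u admin -p Passw0rd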

Changing the Harbor Workload Type

Convert the StatefulSet workloads to Deployments.

kind: StatefulSet => Deployment
spec:
  # serviceName          # must be removed (Deployments have no serviceName field)
  updateStrategy => strategy

  # The entire volumeClaimTemplates section becomes a standalone PersistentVolumeClaim,
  # and a matching volumes entry must be added to the pod spec:
  volumeClaimTemplates => PersistentVolumeClaim
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data
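For the volumeClaimTemplates replacement, a minimal sketch of the standalone claim is below (name, size, and storage class are illustrative; ReadWriteMany works here because of the local-path change made earlier):

$ cat <<EOF | kubectl --context kind-harbor -n harbor apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: standard
  resources:
    requests:
      storage: 8Gi
EOF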