Hi,
I am following the MOSIP Docs 1.2.0 deployment guide (On-Prem without DNS Installation Guidelines - MOSIP Docs 1.2.0) to set up a complete MOSIP environment in my test environment. I ran into a few issues during deployment and would appreciate your help. Thank you.
- Is it possible to skip WireGuard?
I have deployed the cluster on internal virtual machines and there will be no external access, so I skipped the WireGuard server and client installation steps. So far I have not hit any problems because of this.
- The MOSIP cluster cannot be registered with the observation cluster's Rancher.
The observation cluster's components are set up and I can reach the Rancher UI (domain name rancher.xyz.cn). I have completed the deployment of the MOSIP cluster and am now preparing to register it with the observation cluster's Rancher for management. My test environment is purely internal: I access the clusters by editing the local hosts file, and the certificate is self-signed. In the MOSIP cluster's CoreDNS I added a hosts entry that points the Rancher domain name to the observation cluster's nginx. However, the rancher agent pods do not run correctly, and the pod logs show the error "x509: certificate relies on legacy Common Name field, use SANs instead".
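For reference, the CoreDNS change is roughly the hosts block below, added inside the existing server block of the coredns ConfigMap in kube-system; the hostname and IP shown here are the ones the agent log further down reports, so treat them as illustrative:

# Added inside the ".:53 { ... }" server block of the coredns Corefile
# (kubectl -n kube-system edit configmap coredns), so that the Rancher
# hostname resolves to the observation cluster's nginx.
hosts {
    192.168.5.239 rancher.xyz.net
    fallthrough
}

With this in place DNS resolution works (the agent's ping check succeeds), but the agent pods still crash as shown below: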
root@V3console:/home/mosip/k8s-infra/rancher/on-prem# kubectl get pod -n cattle-system
NAME                                   READY   STATUS             RESTARTS     AGE
cattle-cluster-agent-5d8db9f65-mhpqh   0/1     CrashLoopBackOff   1 (8s ago)   11s
cattle-cluster-agent-5d8db9f65-x6mlh   0/1     CrashLoopBackOff   1 (7s ago)   11s
rancher-59fc94555d-8ptbp               1/1     Running            0            155m
rancher-59fc94555d-xlf9x               1/1     Running            0            155m
rancher-webhook-558f8df4f9-4jwnl       1/1     Running            0            150m
root@V3console:/home/mosip/k8s-infra/rancher/on-prem# kubectl logs -f cattle-cluster-agent-5d8db9f65-mhpqh -n cattle-system
INFO: Environment: CATTLE_ADDRESS=10.42.2.17 CATTLE_CA_CHECKSUM= CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://10.43.67.114:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://10.43.67.114:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=10.43.67.114 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://10.43.67.114:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=10.43.67.114 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=10.43.67.114 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=94d744bb-cbb6-4afe-9964-c923756b4000 CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-5d8db9f65-mhpqh CATTLE_RANCHER_WEBHOOK_MIN_VERSION= CATTLE_RANCHER_WEBHOOK_VERSION=2.0.5+up0.3.5 CATTLE_SERVER=https://rancher.xyz.net CATTLE_SERVER_VERSION=v2.7.5
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local miaxishz.com options ndots:5
INFO: https://rancher.xyz.net/ping is accessible
INFO: rancher.xyz.net resolves to 192.168.5.239
time="2023-07-25T08:36:00Z" level=info msg="Listening on /tmp/log.sock"
time="2023-07-25T08:36:00Z" level=info msg="Rancher agent version v2.7.5 is starting"
time="2023-07-25T08:36:00Z" level=info msg="Certificate details from https://rancher.xyz.net"
time="2023-07-25T08:36:00Z" level=info msg="Certificate #0 (https://rancher.xyz.net)"
time="2023-07-25T08:36:00Z" level=info msg="Subject: CN=*.xyz.net,OU=MIAXIS,O=ORG,L=HZ,ST=ZJ,C=CN"
time="2023-07-25T08:36:00Z" level=info msg="Issuer: CN=*.xyz.net,OU=MIAXIS,O=ORG,L=HZ,ST=ZJ,C=CN"
time="2023-07-25T08:36:00Z" level=info msg="IsCA: true"
time="2023-07-25T08:36:00Z" level=info msg="DNS Names: <none>"
time="2023-07-25T08:36:00Z" level=info msg="IPAddresses: <none>"
time="2023-07-25T08:36:00Z" level=info msg="NotBefore: 2023-07-21 01:12:18 +0000 UTC"
time="2023-07-25T08:36:00Z" level=info msg="NotAfter: 2042-09-19 01:12:18 +0000 UTC"
time="2023-07-25T08:36:00Z" level=info msg="SignatureAlgorithm: SHA256-RSA"
time="2023-07-25T08:36:00Z" level=info msg="PublicKeyAlgorithm: RSA"
time="2023-07-25T08:36:00Z" level=fatal msg="Get \"https://rancher.xyz.net\": x509: certificate relies on legacy Common Name field, use SANs instead"
- I encountered an error during the Istio installation.
The install script reports the following error:
'unable to recognize "istio-monitoring/PodMonitor.yaml": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1"'
I could not find anything about the Prometheus Operator in the guide. Do I need to install the Prometheus Operator manually, or did I miss a step? (I have sketched what I plan to try at the end of this post.) The full install output follows:
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# ./install.sh
Operator init
Installing operator controller in namespace: istio-operator using image: docker.io/istio/operator:1.15.0
Operator controller will watch namespaces: istio-system
✔ Istio operator installed
✔ Installation complete
Create ingress gateways, load balancers and istio monitoring
istiooperator.install.istio.io/istio-operators-mosip created
unable to recognize "istio-monitoring/PodMonitor.yaml": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "istio-monitoring/ServiceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Wait for all resources to come up
Waiting for deployment "istiod" rollout to finish: 0 of 1 updated replicas are available...
deployment "istiod" successfully rolled out
Error from server (NotFound): deployments.apps "istio-ingressgateway" not found
Error from server (NotFound): deployments.apps "istio-ingressgateway-internal" not found
Installing gateways, proxy protocol, authpolicies
Public domain: api.sandbox.xyz.net
Internal dome: api-internal.sandbox.xyz.net
NAME: istio-addons
LAST DEPLOYED: Mon Jul 24 20:18:33 2023
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
------ IMPORTANT ---------
If you already have pods running with envoy sidecars, restart all of them NOW. Check if all of them appear with command istioctl proxy-status
--------------------------
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# kubectl get deployment -A
NAMESPACE        NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
istio-operator   istio-operator                  1/1     1            1           11m
istio-system     istio-ingressgateway            1/1     1            1           9m17s
istio-system     istio-ingressgateway-internal   1/1     1            1           9m16s
istio-system     istiod                          1/1     1            1           10m
kube-system      calico-kube-controllers         1/1     1            1           132m
kube-system      coredns                         2/2     2            2           132m
kube-system      coredns-autoscaler              1/1     1            1           132m
kube-system      metrics-server                  1/1     1            1           132m
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# kubectl get svc -n istio-system
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                        AGE
istio-ingressgateway            NodePort    10.43.104.203   <none>        15021:30521/TCP,80:30080/TCP                                   10m
istio-ingressgateway-internal   NodePort    10.43.123.199   <none>        15021:31521/TCP,80:31080/TCP,61616:31616/TCP,5432:31432/TCP   10m
istiod                          ClusterIP   10.43.31.189    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                          11m
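For the PodMonitor/ServiceMonitor errors, my assumption is that the monitoring.coreos.com CRDs are simply not present on the MOSIP cluster because the Prometheus Operator is not installed there. If manual installation is indeed expected, I would try something like the sketch below and then re-apply the two monitoring manifests; please confirm whether this is the intended approach or whether the guide covers it in a step I missed:

# Hypothetical fix, not taken from the guide: install the Prometheus Operator
# CRDs (PodMonitor, ServiceMonitor, ...) via the kube-prometheus-stack Helm
# chart, then re-apply the Istio monitoring manifests that failed earlier.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace
kubectl apply -f istio-monitoring/PodMonitor.yaml -f istio-monitoring/ServiceMonitor.yaml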