MOSIP 1.2.0 deployment issues: skipping WireGuard, Rancher cluster import failure, and errors during Istio installation

Hi,
I am using the MOSIP Docs 1.2.0 deployment guide to set up a complete MOSIP environment in my testing environment. The document is: On-Prem without DNS Installation Guidelines - MOSIP Docs 1.2.0. During the deployment process I encountered some issues and would appreciate your help. Thank you.

  1. Is it possible to not use WireGuard?
    I have deployed the cluster on internal virtual machines, and there will not be any external access. I skipped the installation steps for the WireGuard server and client, and so far I have not encountered any issues.
  2. The MOSIP cluster cannot be imported into the observation cluster’s Rancher.
    The observation cluster’s components have been set up, and I can access the Rancher UI (the domain name is rancher.xyz.cn). I have completed the deployment of the MOSIP cluster and am preparing to register it with the observation cluster’s Rancher for management. My testing environment is purely internal, and I access the cluster by modifying the local hosts file; the certificate is self-signed. I added a hosts entry in the MOSIP cluster’s CoreDNS, pointing the rancher.xyz.cn domain name to the observation cluster’s nginx. However, the rancher agent pods cannot run correctly, and the pod logs show the error “x509: certificate relies on legacy Common Name field, use SANs instead.”
root@V3console:/home/mosip/k8s-infra/rancher/on-prem# kubectl get pod -n cattle-system
NAME                                   READY   STATUS             RESTARTS     AGE
cattle-cluster-agent-5d8db9f65-mhpqh   0/1     CrashLoopBackOff   1 (8s ago)   11s
cattle-cluster-agent-5d8db9f65-x6mlh   0/1     CrashLoopBackOff   1 (7s ago)   11s
rancher-59fc94555d-8ptbp               1/1     Running            0            155m
rancher-59fc94555d-xlf9x               1/1     Running            0            155m
rancher-webhook-558f8df4f9-4jwnl       1/1     Running            0            150m
root@V3console:/home/mosip/k8s-infra/rancher/on-prem# kubectl logs -f cattle-cluster-agent-5d8db9f65-mhpqh -n cattle-system
INFO: Environment: CATTLE_ADDRESS=10.42.2.17 CATTLE_CA_CHECKSUM= CATTLE_CLUSTER=true CATTLE_CLUSTER_AGENT_PORT=tcp://10.43.67.114:80 CATTLE_CLUSTER_AGENT_PORT_443_TCP=tcp://10.43.67.114:443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_ADDR=10.43.67.114 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PORT=443 CATTLE_CLUSTER_AGENT_PORT_443_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_PORT_80_TCP=tcp://10.43.67.114:80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_ADDR=10.43.67.114 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PORT=80 CATTLE_CLUSTER_AGENT_PORT_80_TCP_PROTO=tcp CATTLE_CLUSTER_AGENT_SERVICE_HOST=10.43.67.114 CATTLE_CLUSTER_AGENT_SERVICE_PORT=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTP=80 CATTLE_CLUSTER_AGENT_SERVICE_PORT_HTTPS_INTERNAL=443 CATTLE_CLUSTER_REGISTRY= CATTLE_INGRESS_IP_DOMAIN=sslip.io CATTLE_INSTALL_UUID=94d744bb-cbb6-4afe-9964-c923756b4000 CATTLE_INTERNAL_ADDRESS= CATTLE_IS_RKE=false CATTLE_K8S_MANAGED=true CATTLE_NODE_NAME=cattle-cluster-agent-5d8db9f65-mhpqh CATTLE_RANCHER_WEBHOOK_MIN_VERSION= CATTLE_RANCHER_WEBHOOK_VERSION=2.0.5+up0.3.5 CATTLE_SERVER=https://rancher.xyz.net CATTLE_SERVER_VERSION=v2.7.5
INFO: Using resolv.conf: nameserver 10.43.0.10 search cattle-system.svc.cluster.local svc.cluster.local cluster.local miaxishz.com options ndots:5
INFO: https://rancher.xyz.net/ping is accessible
INFO: rancher.xyz.net resolves to 192.168.5.239
time="2023-07-25T08:36:00Z" level=info msg="Listening on /tmp/log.sock"
time="2023-07-25T08:36:00Z" level=info msg="Rancher agent version v2.7.5 is starting"
time="2023-07-25T08:36:00Z" level=info msg="Certificate details from https://rancher.xyz.net"
time="2023-07-25T08:36:00Z" level=info msg="Certificate #0 (https://rancher.xyz.net)"
time="2023-07-25T08:36:00Z" level=info msg="Subject: CN=*.xyz.net,OU=MIAXIS,O=ORG,L=HZ,ST=ZJ,C=CN"
time="2023-07-25T08:36:00Z" level=info msg="Issuer: CN=*.xyz.net,OU=MIAXIS,O=ORG,L=HZ,ST=ZJ,C=CN"
time="2023-07-25T08:36:00Z" level=info msg="IsCA: true"
time="2023-07-25T08:36:00Z" level=info msg="DNS Names: <none>"
time="2023-07-25T08:36:00Z" level=info msg="IPAddresses: <none>"
time="2023-07-25T08:36:00Z" level=info msg="NotBefore: 2023-07-21 01:12:18 +0000 UTC"
time="2023-07-25T08:36:00Z" level=info msg="NotAfter: 2042-09-19 01:12:18 +0000 UTC"
time="2023-07-25T08:36:00Z" level=info msg="SignatureAlgorithm: SHA256-RSA"
time="2023-07-25T08:36:00Z" level=info msg="PublicKeyAlgorithm: RSA"
time="2023-07-25T08:36:00Z" level=fatal msg="Get \"https://rancher.xyz.net\": x509: certificate relies on legacy Common Name field, use SANs instead"

  3. I encountered an error during the installation of Istio.
    Here is the error message from the Istio installation:
    ‘unable to recognize “istio-monitoring/PodMonitor.yaml”: no matches for kind “PodMonitor” in version “monitoring.coreos.com/v1”’
    I could not find any mention of the Prometheus Operator in the guide. Do I need to install the Prometheus Operator manually, or did I miss a step?
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# ./install.sh
Operator init
Installing operator controller in namespace: istio-operator using image: docker.io/istio/operator:1.15.0
Operator controller will watch namespaces: istio-system
✔ Istio operator installed
✔ Installation complete
Create ingress gateways, load balancers and istio monitoring
istiooperator.install.istio.io/istio-operators-mosip created
unable to recognize "istio-monitoring/PodMonitor.yaml": no matches for kind "PodMonitor" in version "monitoring.coreos.com/v1"
unable to recognize "istio-monitoring/ServiceMonitor.yaml": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Wait for all resources to come up
Waiting for deployment "istiod" rollout to finish: 0 of 1 updated replicas are available...
deployment "istiod" successfully rolled out
Error from server (NotFound): deployments.apps "istio-ingressgateway" not found
Error from server (NotFound): deployments.apps "istio-ingressgateway-internal" not found
Installing gateways, proxy protocol, authpolicies
Public domain: api.sandbox.xyz.net
Internal dome: api-internal.sandbox.xyz.net
NAME: istio-addons
LAST DEPLOYED: Mon Jul 24 20:18:33 2023
NAMESPACE: istio-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
------ IMPORTANT ---------
If you already have pods running with envoy sidecars, restart all of them NOW. Check if all of them appear with command istioctl proxy-status
--------------------------
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# kubectl get deployment -A
NAMESPACE        NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
istio-operator   istio-operator                  1/1     1            1           11m
istio-system     istio-ingressgateway            1/1     1            1           9m17s
istio-system     istio-ingressgateway-internal   1/1     1            1           9m16s
istio-system     istiod                          1/1     1            1           10m
kube-system      calico-kube-controllers         1/1     1            1           132m
kube-system      coredns                         2/2     2            2           132m
kube-system      coredns-autoscaler              1/1     1            1           132m
kube-system      metrics-server                  1/1     1            1           132m
root@V3console:/home/mosip/k8s-infra/mosip/on-prem/istio# kubectl get svc -n istio-system
NAME                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE
istio-ingressgateway            NodePort    10.43.104.203   <none>        15021:30521/TCP,80:30080/TCP                                  10m
istio-ingressgateway-internal   NodePort    10.43.123.199   <none>        15021:31521/TCP,80:31080/TCP,61616:31616/TCP,5432:31432/TCP   10m
istiod                          ClusterIP   10.43.31.189    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                         11m

Hi @ryan

Thank you for reaching out and using the MOSIP deployment guide to set up your testing environment. We’re here to assist you with any issues you encounter, and regarding your questions, one of our team members will provide a resolution shortly.

Best Regards,
MOSIP Team

Hi @sanchi-singh24
Thank you very much for your response. I think I have an idea about the second issue. After replacing the certificate on the Rancher cluster’s nginx and adding hostAliases to the cattle-cluster-agent deployment in the MOSIP cluster, I was able to connect to the Rancher cluster. However, I ran into a new problem, and on investigation it appears to be a version compatibility issue: the Rancher installed by the 1.2.0-b3 script is version 2.7.5, while the Kubernetes cluster created with RKE is version 1.22, and these two may not be compatible. I plan to reinstall Rancher with version 2.6.x to see if that resolves the problem.
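For reference, this is roughly what the two changes looked like; it is only a sketch (it assumes OpenSSL 1.1.1+ for the -addext option, and uses the 192.168.5.239 address that rancher.xyz.net resolves to in the agent logs above):

# Re-issue the self-signed certificate with a SAN, since the CN-only
# certificate is what triggers the x509 error in the agent logs:
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout tls.key -out tls.crt \
  -subj "/CN=*.xyz.net" \
  -addext "subjectAltName=DNS:*.xyz.net,DNS:rancher.xyz.net"

# Add hostAliases so the agent pods resolve the Rancher hostname to the
# observation cluster's nginx without relying on external DNS:
kubectl -n cattle-system patch deployment cattle-cluster-agent \
  --type=strategic \
  -p '{"spec":{"template":{"spec":{"hostAliases":[{"ip":"192.168.5.239","hostnames":["rancher.xyz.net"]}]}}}}'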

  1. It is not mandatory to use WireGuard; WireGuard acts as a VPN to access the services. If you are able to reach the nodes directly, WireGuard is not required.
  2. The cluster import is failing because of the self-signed certificate. Use the curl --insecure variant of the registration command displayed on the Rancher UI to import the cluster (see the example after this list).
  3. Ensure monitoring and logging are deployed before proceeding with the remaining deployments.
    Follow the sequence provided in the k8s-infra repo.
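The registration command shown on the Rancher UI looks roughly like the one below; the token and cluster id in the URL are generated by Rancher for each imported cluster, so the values here are only placeholders:

curl --insecure -sfL https://rancher.xyz.net/v3/import/<token>_<cluster-id>.yaml | kubectl apply -f -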

Hi @syed.salman
Thank you very much for your response. However, I still have some doubts. The guide recommends the following order: 1. rancher cluster, 2. mosip cluster, and then monitoring and logging, as shown in the screenshot.
[screenshot: deployment sequence from the guide]

Currently, I am at the MOSIP cluster step, specifically the Istio for service discovery and Ingress section. Are you suggesting that after installing Longhorn I should first deploy monitoring and logging, and then come back to the Istio for service discovery and Ingress step?


@ryan we have never faced this issue while deploying Istio.

Try installing only monitoring first, then proceed with Istio.
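As a quick sanity check (not a step from the guide), you can confirm that the monitoring stack has registered the Prometheus Operator CRDs before re-running the Istio install script; the istio-monitoring manifests need both of them:

# Both CRDs must be present, otherwise PodMonitor.yaml and ServiceMonitor.yaml
# will fail again with "no matches for kind ...":
kubectl get crd podmonitors.monitoring.coreos.com servicemonitors.monitoring.coreos.com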

Alright, I re-deployed the cluster, and before installing Istio, I deployed monitoring. Thank you so much.


Hi @ryan

I hope your issue is resolved. If anything else comes up, do let us know and our team will help you out.