Load a sample dashboard to Kibana

I’ve been getting this error. How do I fix it?

And this is the error log on Kibana:

This is the error during installation:

Hi @paredescedric3

Thanks for sharing the log file with us!

I have asked one of my team members to look into this and guide you.

Best Regards,
Team MOSIP

Any updates on this, ma’am?

Dear @paredescedric3 ,

Requesting your update on the queries below:

  1. What is the purpose of deploying Kibana dashboards? Viewing pod logs, or designing dashboards using the database?
  2. Which database are you using? Postgres?

This issue is resolved, and we are now deploying on V3.

Can we follow up regarding our issue installing V3?

Kernel notifier crashloopback - General - All things MOSIP

Dear @paredescedric3

As per the above screenshot, I am assuming that you are deploying the MOSIP default dashboards using the MOSIP database.

Please find the approach here:

  1. Update the configuration in the Postgres override.conf file: set wal_level = 'logical'.
  2. Install the Cattle Logging System: https://github.com/mosip/k8s-infra/tree/release-1.2.0.1/logging
  3. Install the Reporting Module: GitHub - mosip/reporting at release-1.2.0.1

Note: Please select the correct branch or tag before starting the deployment, and follow the README for the deployment steps.
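Step 1 above amounts to the following configuration fragment (the exact file location depends on how your Postgres is deployed, e.g. a mounted override.conf or Helm chart values; Postgres must be restarted for this setting to take effect):

```
# Enable logical decoding, required by change-data-capture based reporting
wal_level = logical
```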

  1. It is already set to logical.
    (screenshot)
  2. This pod is constantly crash-looping, which makes the Rancher website inaccessible:


ts=2023-10-17T09:03:00.676Z caller=main.go:539 level=info msg="Starting Prometheus Server" mode=server version="(version=2.38.0, branch=HEAD, revision=818d6e60888b2a3ea363aee8a9828c7bafd73699)"
ts=2023-10-17T09:03:00.676Z caller=main.go:544 level=info build_context="(go=go1.18.5, user=root@e6b781f65453, date=20220816-13:23:14)"
ts=2023-10-17T09:03:00.676Z caller=main.go:545 level=info host_details="(Linux 5.13.0-1022-azure #26~20.04.1-Ubuntu SMP Thu Apr 7 19:42:45 UTC 2022 x86_64 prometheus-rancher-monitoring-prometheus-0 (none))"
ts=2023-10-17T09:03:00.676Z caller=main.go:546 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2023-10-17T09:03:00.676Z caller=main.go:547 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2023-10-17T09:03:00.679Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
ts=2023-10-17T09:03:00.679Z caller=main.go:976 level=info msg="Starting TSDB ..."
ts=2023-10-17T09:03:00.680Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1697500800000 maxt=1697508000000 ulid=01HCXS6W0B5YVQ10RP98KCZMD7
ts=2023-10-17T09:03:00.681Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1697508000000 maxt=1697515200000 ulid=01HCY02K8FPCMSZ701X0SS9KK7
ts=2023-10-17T09:03:00.681Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
ts=2023-10-17T09:03:00.681Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1697440068542 maxt=1697500800000 ulid=01HCY0F5ME18J0WSZX088745HQ
ts=2023-10-17T09:03:00.682Z caller=repair.go:56 level=info component=tsdb msg="Found healthy block" mint=1697515200000 maxt=1697522400000 ulid=01HCY71AS11HWFYJWK9ZYT2DT1
ts=2023-10-17T09:03:00.682Z caller=dir_locker.go:77 level=warn component=tsdb msg="A lockfile from a previous execution already existed. It was replaced" file=/prometheus/lock
ts=2023-10-17T09:03:00.761Z caller=head.go:495 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
ts=2023-10-17T09:03:01.242Z caller=head.go:538 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=481.191962ms
ts=2023-10-17T09:03:01.242Z caller=head.go:544 level=info component=tsdb msg="Replaying WAL, this may take a while"
ts=2023-10-17T09:03:09.331Z caller=head.go:580 level=info component=tsdb msg="WAL checkpoint loaded"
ts=2023-10-17T09:03:09.547Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=137 maxSegment=169
ts=2023-10-17T09:03:09.592Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=138 maxSegment=169
ts=2023-10-17T09:03:09.746Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=139 maxSegment=169
ts=2023-10-17T09:03:09.790Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=140 maxSegment=169
ts=2023-10-17T09:03:09.956Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=141 maxSegment=169
ts=2023-10-17T09:03:10.024Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=142 maxSegment=169
ts=2023-10-17T09:03:10.444Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=143 maxSegment=169
ts=2023-10-17T09:03:10.633Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=144 maxSegment=169
ts=2023-10-17T09:03:11.042Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=145 maxSegment=169
ts=2023-10-17T09:03:12.834Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=146 maxSegment=169
ts=2023-10-17T09:03:13.231Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=147 maxSegment=169
ts=2023-10-17T09:03:13.635Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=148 maxSegment=169
ts=2023-10-17T09:03:14.123Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=149 maxSegment=169
ts=2023-10-17T09:03:14.446Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=150 maxSegment=169
ts=2023-10-17T09:03:14.538Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=151 maxSegment=169
ts=2023-10-17T09:03:14.631Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=152 maxSegment=169
ts=2023-10-17T09:03:14.822Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=153 maxSegment=169
ts=2023-10-17T09:03:14.931Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=154 maxSegment=169
ts=2023-10-17T09:03:15.420Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=155 maxSegment=169
ts=2023-10-17T09:03:15.636Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=156 maxSegment=169
ts=2023-10-17T09:03:16.135Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=157 maxSegment=169
ts=2023-10-17T09:03:16.627Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=158 maxSegment=169
ts=2023-10-17T09:03:18.922Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=159 maxSegment=169
ts=2023-10-17T09:03:19.624Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=160 maxSegment=169
ts=2023-10-17T09:03:19.922Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=161 maxSegment=169
ts=2023-10-17T09:03:20.628Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=162 maxSegment=169
ts=2023-10-17T09:03:21.325Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=163 maxSegment=169
ts=2023-10-17T09:03:21.926Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=164 maxSegment=169
ts=2023-10-17T09:03:22.725Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=165 maxSegment=169
ts=2023-10-17T09:03:23.028Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=166 maxSegment=169
ts=2023-10-17T09:03:23.722Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=167 maxSegment=169
ts=2023-10-17T09:03:26.421Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=168 maxSegment=169
ts=2023-10-17T09:03:26.422Z caller=head.go:615 level=info component=tsdb msg="WAL segment loaded" segment=169 maxSegment=169
ts=2023-10-17T09:03:26.422Z caller=head.go:621 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=8.089238653s wal_replay_duration=17.090247296s total_replay_duration=25.660736312s
ts=2023-10-17T09:03:27.342Z caller=main.go:997 level=info fs_type=EXT4_SUPER_MAGIC
ts=2023-10-17T09:03:27.342Z caller=main.go:1000 level=info msg="TSDB started"
ts=2023-10-17T09:03:27.342Z caller=main.go:1181 level=info msg="Loading configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml
ts=2023-10-17T09:03:27.358Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.358Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.359Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.360Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.360Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.360Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.360Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.360Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.361Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.361Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.361Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.361Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.361Z caller=kubernetes.go:326 level=info component="discovery manager notify" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-10-17T09:03:27.742Z caller=main.go:1218 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/config_out/prometheus.env.yaml totalDuration=399.814738ms db_storage=900ns remote_storage=1.4µs web_handler=300ns query_engine=900ns scrape=233.306µs scrape_sd=3.33088ms notify=22.6µs notify_sd=225.606µs rules=380.114663ms tracing=6.9µs
ts=2023-10-17T09:03:27.742Z caller=main.go:961 level=info msg="Server is ready to receive web requests."
ts=2023-10-17T09:03:27.742Z caller=manager.go:941 level=info component="rule manager" msg="Starting rule manager..."
ts=2023-10-17T09:03:41.919Z caller=compact.go:519 level=info component=tsdb msg="write block" mint=1697522400000 maxt=1697529600000 ulid=01HCYE0HQNSMMWDPSKCTX45ZWN duration=8.810123788s
ts=2023-10-17T09:03:42.350Z caller=head.go:844 level=info component=tsdb msg="Head GC completed" duration=427.732552ms
ts=2023-10-17T09:03:42.358Z caller=checkpoint.go:100 level=info component=tsdb msg="Creating checkpoint" from_segment=137 to_segment=157 mint=1697529600000
ts=2023-10-17T09:03:46.831Z caller=head.go:1013 level=info component=tsdb msg="WAL checkpoint complete" first=137 last=157 duration=4.473104767s
ts=2023-10-17T09:04:39.588Z caller=compact.go:460 level=info component=tsdb msg="compact blocks" count=3 mint=1697500800000 maxt=1697522400000 ulid=01HCYE0Z4F369QDYRSG6MYYRWR sources="[01HCXS6W0B5YVQ10RP98KCZMD7 01HCY02K8FPCMSZ701X0SS9KK7 01HCY71AS11HWFYJWK9ZYT2DT1]" duration=52.757410394s
ts=2023-10-17T09:04:41.089Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01HCY02K8FPCMSZ701X0SS9KK7
ts=2023-10-17T09:04:41.095Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01HCY71AS11HWFYJWK9ZYT2DT1
ts=2023-10-17T09:04:41.102Z caller=db.go:1294 level=info component=tsdb msg="Deleting obsolete block" block=01HCXS6W0B5YVQ10RP98KCZMD7

  1. We don’t know what it is yet.

For the dashboards, you only need to check the following namespaces:

  1. cattle-logging-system
  2. reporting

If the pods belonging to these namespaces are working fine, then the dashboards will work.
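A quick way to run that check is to list the pods in both namespaces and flag anything that is not Running or Completed (a sketch assuming kubectl access to the cluster):

```shell
# Flag unhealthy pods in the two dashboard-related namespaces
for ns in cattle-logging-system reporting; do
  echo "--- $ns ---"
  kubectl get pods -n "$ns" --no-headers \
    | awk '$3 != "Running" && $3 != "Completed" {print $1, $3}'
done
```

If this prints nothing under either namespace, all pods there are healthy.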

Please confirm whether the dashboard issue is resolved and you are able to access the dashboard.


I’m having these errors, hence I can’t install the said application.

@paredescedric3

The current concern appears to be related to the cluster’s accessibility. Could you kindly share the details of the cluster’s status?

To inspect the status of the cluster nodes, please execute the command:

kubectl get nodes

Additionally, it is advisable to verify the operational status of the cluster agents by using the following command:

kubectl get pods -n cattle-system

Hello @syed.salman

Here is the output:

@paredescedric3

The cluster looks fine now. Are you still observing the same error?
(screenshot)

If not, then proceed with deploying monitoring, logging, and reporting as described in the documentation.

Hello @syed.salman

The error still exists.

@paredescedric3

Can you please check whether the pods are up and running in cattle-monitoring-system?
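For reference, that check can be done as follows; for a pod stuck in CrashLoopBackOff, the previous container’s logs usually reveal why it keeps restarting (the pod name below is taken from the log earlier in this thread and may differ in your cluster):

```shell
# List the monitoring stack's pods
kubectl get pods -n cattle-monitoring-system

# Inspect a crash-looping pod: events, then the previous container's logs
kubectl describe pod prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system
kubectl logs prometheus-rancher-monitoring-prometheus-0 -n cattle-monitoring-system --previous
```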
