Issue
The Fluentd component of logging-operator-logging forwards log data to the Loki datasource. Earlier versions of Loki drop logs that arrive "out of order" (delayed), which can be caused by several factors, such as highly verbose logging or a large volume of logs sourced from multiple workers.
The "entry out of order" error can be observed in the Fluentd logs:
kubectl exec -it logging-operator-logging-fluentd-0 -n kommander -- cat /fluentd/log/out
[...] 2022-12-19 07:39:51.692696116 +0000 fluent.warn: {"message":"[clusterflow:kommander:cluster-containers:clusteroutput:kommander:loki] failed to write post to http://grafana-loki-loki-distributed-gateway.kommander.svc.cluster.local/loki/api/v1/push (400 Bad Request entry with timestamp 2022-12-17 08:10:05.628786235 +0000 UTC ignored, reason: 'entry out of order' for stream:
Solution
Loki 2.4 added support for accepting out-of-order writes, removing the constraint that logs must arrive in perfect chronological order, as described in the Loki 2.4 release notes.
DKP 2.3 ships Loki 2.5. Upgrading to DKP 2.3 should therefore resolve the "entry out of order" errors experienced in logging-operator-logging.
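For reference, out-of-order ingestion in Loki 2.4+ is governed by the unordered_writes setting under limits_config (enabled by default upstream). A minimal sketch of the relevant Loki configuration fragment, shown here for illustration only; the surrounding configuration in a DKP deployment will differ:

```yaml
# Excerpt of a Loki configuration (Loki >= 2.4).
# unordered_writes controls whether out-of-order log entries
# are accepted instead of rejected with "entry out of order".
limits_config:
  unordered_writes: true
```

With this setting enabled, entries that arrive within the accepted ingestion window are reordered by Loki rather than dropped, so delayed batches from multiple Fluentd workers no longer trigger 400 Bad Request responses.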