Stackdriver logs don't always make it to the Google Cloud Console

I have a relatively large number of services running on Kubernetes / Google Container Engine, in a cluster created through the Google Cloud Console UI. The Java services in the containers log in JSON format.
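For reference, a minimal sketch of the kind of JSON-per-line logging described above, assuming the services write structured entries to stdout where the node-level GKE logging agent (fluentd) picks them up. The class name JsonStdoutLogger and the exact fields are illustrative rather than taken from the actual services; the agent generally recognizes fields such as "severity" and "message" in JSON payloads.

import java.time.Instant;

// Hypothetical example: one JSON object per line on stdout, which the
// logging agent on the node tails and forwards to Stackdriver Logging.
public class JsonStdoutLogger {

    static void log(String severity, String message) {
        // Build a single-line JSON entry; quotes in the message are escaped
        // so the line stays valid JSON.
        String line = String.format(
            "{\"severity\":\"%s\",\"timestamp\":\"%s\",\"message\":\"%s\"}",
            severity, Instant.now(), message.replace("\"", "\\\""));
        System.out.println(line);
    }

    public static void main(String[] args) {
        log("INFO", "service started");
        log("ERROR", "example error entry");
    }
}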

Most of the time this just works, but lately more and more of the logs "stop" at a seemingly random point in time, with the last entry (at any log level) dating from hours or days ago.

I am not changing any configuration and have not been able to isolate the cause. What is going on here? What can I do to improve and/or fix this problem?

google-container-engine stackdriver




No one has answered this question yet

Check out similar questions:

6
How does Stackdriver logging determine the severity of a log entry?
5
How do I log custom metrics from Google Container Engine to Stackdriver?
4
HTTPS on Google Cloud Container Engine
2
GCE container logs not showing in Cloud Logging
2
Filter Google Container Engine namespaces in Cloud Logging
1
Kubernetes with Google Cloud DNS
1
Stackdriver logging from Google Cloud Endpoints in a Kubernetes pod
0
Logging with the Stackdriver API on Kubernetes / Google Container Engine (GKE)
0
Stackdriver logging agent - log level not passed through to Google Cloud Logging from the Docker driver
0
How to view recent Google Stackdriver logs without scanning all logs


