Loggly - Refactoring Context Format - Unique Field Name Indexing Limit

Over the past few months, we have been logging to Loggly incorrectly. Our contexts have historically been a numeric array of strings:

['message1', 'message2', 'message3', ...]

We want to send Loggly an associative array of key/value pairs that uses fewer unique keys.

An example of the new Loggly payload:

['orderId' => 123, 'logId' => 456, 'info' => json_encode(SOMEARRAY)]
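
A minimal sketch of how such a context might be built (the buildContext helper and the field names inside $extra are hypothetical, for illustration only):

<?php

// Hypothetical helper: keep a fixed, small set of top-level context keys
// and collapse any free-form data into a single JSON-encoded string, so
// Loggly only ever sees a handful of unique field names.
function buildContext(int $orderId, int $logId, array $extra): array
{
    return [
        'orderId' => $orderId,
        'logId'   => $logId,
        // One field name ('info') no matter what $extra contains.
        'info'    => json_encode($extra),
    ];
}

$context = buildContext(123, 456, ['queue' => 'emails', 'demandId' => 789]);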

When testing the new, cleaner logging format, Loggly returns the following message:

2 out of 9 fields submitted in this event were not indexed because the maximum number of unique field names allowed for this account (100) was exceeded. The following fields were affected: [json.context.queue, json.context.demandId]

We are on a 30-day retention plan. Does this mean that, in order for our contexts to be indexed correctly, we need to wait 30 days for the old indexed logs to expire? Is there a way to rebuild the index to accommodate the new log format?





1 answer


You don't need to wait 30 days. Once you stop sending logs in the old format, the old field names drop out of the unique-field count, usually within a few hours or a couple of days at most, and you will be able to send data with the new fields. You can also contact support@loggly.com for help.
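
Once the old field names have aged out, sending the new-format context is straightforward. A minimal sketch, assuming Monolog's LogglyHandler (the question doesn't say which logging library is in use, and the token is a placeholder):

<?php

require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\LogglyHandler;

// Ship logs to Loggly using your customer token.
$logger = new Logger('app');
$logger->pushHandler(new LogglyHandler('YOUR_LOGGLY_TOKEN', Logger::INFO));

// New-format context: a small, fixed set of field names, with any
// variable data JSON-encoded into a single 'info' field.
$logger->info('Order processed', [
    'orderId' => 123,
    'logId'   => 456,
    'info'    => json_encode(['queue' => 'emails', 'demandId' => 789]),
]);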











