Set the gsutil component count of composite objects to 0 (rateLimitExceeded error)

While copying some log files (generated by the gsutil compose command):

gsutil -m cp -R gs://mybucket/PROD/ gs://mybucket/TEST/ 


we get a lot of errors like this:

"errors":[  
    {  
        "domain":"usageLimits",
        "reason":"rateLimitExceeded",
        "message":"The total number of compose requests for this bucket project exceeds the rate limit. Please reduce the rate of compose requests."
    }
],
"code":429,


By running gsutil stat on these objects, I can see that their Component-Count is 972, etc.
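For example, a stat call on one of these objects looks roughly like this (path and values are illustrative, output trimmed):

gsutil stat gs://mybucket/PROD/composite.log

gs://mybucket/PROD/composite.log:
    Content-Length:     52428800
    Content-Type:       text/plain
    Component-Count:    972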

We tried a shortcut:

gsutil setmeta -h "Component-Count:0" gs://mybucket/PROD/composite.log


but we get:

CommandException: Invalid or disallowed header (component-count).
Only these fields (plus x-goog-meta-* fields) can be set or unset:


In fact, the copying process does complete successfully, so it's just very annoying to see all these errors.

Does anyone know how to set the component count to 0?

+3




1 answer


You can safely ignore these errors. As you can see, the gsutil cp command is completing its task.

If you want to get rid of these errors, you might want to try this workaround: reset the Component-Count of the composite objects to 0 and essentially "decompose" them. To do this, you can send their bytes over the wire and back into Cloud Storage again.

An easy way to do this is with the "daisy chain" (-D) option of the cp command:

gsutil cp -D gs://mybucket/PROD/composite.log gs://mybucket/PROD/notcompositeanymore.log




It works great with the gsutil -m (multi-threaded) and cp -R (recursive) options!
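For example, applied to the copy from the question (same bucket names as above; note that daisy-chaining re-downloads and re-uploads every object, so it is slower than a normal cloud-side copy):

gsutil -m cp -D -R gs://mybucket/PROD/ gs://mybucket/TEST/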

If you are concerned about speeding this process up and reducing its cost, I would suggest doing it from a Compute Engine VM, preferably in a region close to your bucket.

Happy coding!

+5

