Configure a GCS bucket to allow public write but not overwrite

On Google Cloud Storage, I want PUBLIC (allUsers) to be able to upload new files and read existing files, but I don't want PUBLIC to be able to overwrite an existing file.

Background: upload and download URLs are normally generated by my own application, so under normal conditions there is no problem, because the application makes sure that URLs are always unique when writing. But an attacker who compromised my application could potentially upload files (bad) to my cloud storage and overwrite existing files (very bad).

I know I can solve this by proxying through App Engine or by using signed URLs, but I am trying to avoid both because of latency. My application processes files (almost) in real time, and the additional ~1000 ms it would take to handle two consecutive requests would be too long.

Is it possible to configure Cloud Storage so that an upload fails with an error if the target file already exists, for example:

  • Bucket: PUBLIC has WRITE access
  • Individual files: PUBLIC has READ access

Will this work? What happens in GCS when bucket and file ACLs are inconsistent like this? In the example above the bucket allows write access, but if an upload targets an already existing, read-only file, will GCS reject the request, or will it treat the write as allowed and replace the file with the new content?

Any other approach that might work would be much appreciated.



1 answer


You want to grant these IAM roles on the bucket:

  • roles/storage.objectCreator
  • roles/storage.objectViewer


https://cloud.google.com/storage/docs/access-control/iam-roles reports:

"objectCreator allows users to create objects. Does not grant permission to view, delete, or overwrite objects."
