Increasing the throughput of DynamoDB Streams + Lambda

I have a DynamoDB stream that triggers a Lambda function. I notice that a burst of thousands of records written into the DynamoDB table can take many minutes (the longest I've seen is 30 minutes) to be fully processed by Lambda. The average duration of each Lambda invocation with batch size 3 is around 2 seconds. These Lambdas perform heavy I/O work, so a small batch size and more concurrent invocations are preferred. However, the parallelism of these Lambdas is tied to the shard count of the DynamoDB stream, and I can't find a way to scale the shard count.

Is there a way to increase the throughput of these Lambdas, other than using a larger batch size and more optimized code?

+3




2 answers


I also don't see many configuration options.



You can separate the fan-out from the processing. If your change records are not too large, your inbound Lambda can simply split them into multiple small SNS messages. Each of these smaller SNS messages can then trigger a Lambda that does the actual processing. If the changes are larger, you can use SQS or S3 instead, and trigger the processing Lambdas on new messages via SNS or directly on new files.
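A minimal sketch of that fan-out pattern, assuming boto3 is available in the Lambda runtime; the topic ARN is a placeholder you would replace with your own:

```python
import json


def chunk(records, size):
    """Split a list of stream records into lists of at most `size` items."""
    return [records[i:i + size] for i in range(0, len(records), size)]


def handler(event, context):
    """Inbound Lambda: re-publish each small chunk of DynamoDB stream
    records as its own SNS message, so that many processing Lambdas
    can run in parallel instead of one per shard."""
    import boto3  # imported lazily so the module loads without AWS deps

    sns = boto3.client("sns")
    # Hypothetical topic ARN -- replace with your fan-out topic.
    topic_arn = "arn:aws:sns:us-east-1:123456789012:stream-fanout"
    for batch in chunk(event.get("Records", []), 3):
        sns.publish(TopicArn=topic_arn, Message=json.dumps(batch))
```

The inbound Lambda stays fast because it only re-publishes; the slow I/O happens in the SNS-triggered workers, whose concurrency is no longer bounded by the stream's shard count.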

+3




Each shard of the stream is associated with a partition in DynamoDB. If you increase the write throughput on your table enough to force a partition to split, you will end up with more shards. With more shards, more Lambda functions run in parallel.
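Short of forcing partition splits, the one knob you do control directly is the event source mapping itself. A sketch with the AWS CLI, assuming it is configured; the function name and mapping UUID are placeholders:

```shell
# Find the stream -> Lambda mapping for your function
# (function name below is a placeholder).
aws lambda list-event-source-mappings --function-name my-stream-processor

# Tune how many records each invocation receives.
aws lambda update-event-source-mapping \
    --uuid <mapping-uuid> \
    --batch-size 100
```

Newer Lambda releases also accept `--parallelization-factor` (1 to 10) on the same command, which runs several concurrent batches per shard and addresses the question more directly.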



+1








