11/7/2022

Request was throttled meaning

In the serverless world, we often get the impression that our applications can scale without limits. With the right design (and enough money), this is theoretically possible. But in reality, many components of our serverless applications DO have limits. Whether these are physical limits, like network throughput or CPU capacity, or soft limits, like AWS Account Limits or third-party API quotas, our serverless applications still need to be able to handle periods of high load. And more importantly, our end users should experience minimal, if any, negative effects when we reach these thresholds.

There are many ways to add resiliency to our serverless applications, but this post is going to focus on dealing specifically with quotas in third-party APIs. We'll look at how we can use a combination of SQS, CloudWatch Events, and Lambda functions to implement a precisely controlled throttling system. We'll also discuss how you can implement (almost) guaranteed ordering, state management (for multi-tiered quotas), and how to plan for failure. Let's get started!

FUN FACT: Third-party APIs can't scale infinitely either, so most (if not all) put some sort of quota on the number of API calls that you can make. Many use daily quotas, but most use a much more granular per-second or per-minute model. This allows them to throttle requests to their backend systems, but potentially leaves us with a bunch of 429 Too Many Requests errors.

Identifying the problem

Before we get to the solution, let's identify the root of the problem first. In more "traditional" applications, we can throttle calls to third-party APIs in a number of ways. A monolithic design would likely maintain state, allowing you to simply delay API calls until an internal counter indicated some passage of time.
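To make the monolithic approach concrete, here is a minimal sketch of the kind of stateful, in-process throttle described above: an internal counter tracks calls within a fixed time window, and the process simply sleeps until the window rolls over once the quota is exhausted. The `ApiThrottler` class name and its parameters are hypothetical illustrations, not something from this post or any specific library.

```python
import time


class ApiThrottler:
    """Hypothetical in-process throttle for a per-window API quota.

    Allows at most `max_calls` calls per `period` seconds; once the
    quota is exhausted, acquire() blocks until the window resets.
    """

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.window_start = time.monotonic()
        self.count = 0

    def acquire(self) -> None:
        now = time.monotonic()
        if now - self.window_start >= self.period:
            # The window has elapsed on its own: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count >= self.max_calls:
            # Quota exhausted: delay until the current window rolls over.
            time.sleep(self.period - (now - self.window_start))
            self.window_start = time.monotonic()
            self.count = 0
        self.count += 1


# Usage: call acquire() before each third-party API request, e.g.
# throttler = ApiThrottler(max_calls=10, period=1.0)  # ~10 calls/second
# throttler.acquire(); response = call_third_party_api()
```

This only works because a monolith has a single long-lived process holding the counter; as the post goes on to discuss, that assumption breaks down for short-lived, massively concurrent Lambda functions, which is why an external mechanism (SQS plus scheduled invocations) is needed instead.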