Error 429 in Python

As MRA said, you shouldn’t try to dodge a 429 Too Many Requests but instead handle it accordingly. You have several options depending on your use case:

1) Sleep your process. The server usually includes a Retry-After header in the response with the number of seconds you are supposed to wait before retrying. Keep in mind that sleeping a process might cause problems, e.g. in a task queue, where you should instead retry the task at a later time to free up the worker for other things.

2) Exponential backoff. If the server does not tell you how long to wait, you can retry your request using increasing pauses in between. The popular task queue Celery has this feature built right in.

3) Token bucket. This technique is useful if you know in advance how many requests you are able to make in a given time. Each time you access the API you first fetch a token from the bucket. The bucket is refilled at a constant rate. If the bucket is empty, you know you’ll have to wait before hitting the API again. Token buckets are usually implemented on the other end (the API) but you can also use them as a proxy to avoid ever getting a 429 Too Many Requests. Celery’s rate_limit feature uses a token bucket algorithm.
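A minimal client-side sketch of option 3, using only the standard library (the class and the rate/capacity numbers are illustrative assumptions, not any specific library's API):

```python
import time

class TokenBucket:
    """Minimal client-side token bucket: `rate` tokens are refilled per
    second, up to `capacity`; each request consumes one token."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)
        self.capacity = float(capacity)
        self.tokens = float(capacity)   # start with a full bucket
        self.last = time.monotonic()

    def acquire(self):
        """Consume one token; return 0.0, or the seconds to wait first."""
        now = time.monotonic()
        # Refill at a constant rate based on the time since the last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        return (1 - self.tokens) / self.rate

bucket = TokenBucket(rate=10, capacity=10)
delay = bucket.acquire()
if delay > 0:
    time.sleep(delay)  # wait before hitting the API again
```

Because the check happens on your side, the request never reaches the server when the bucket is empty, which is what avoids the 429 in the first place.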

Here is an example of a Python/Celery app using exponential backoff and rate-limiting/token bucket:

import requests
from requests.exceptions import ConnectTimeout

class TooManyRequests(Exception):
    """Too many requests"""

# `task` is your Celery app's task decorator (e.g. app.task)
@task(
    rate_limit='10/s',
    autoretry_for=(ConnectTimeout, TooManyRequests),
    retry_backoff=True)
def api(*args, **kwargs):
    r = requests.get('placeholder-external-api')

    if r.status_code == 429:
        raise TooManyRequests()

HTTP error 429 (Too Many Requests) is a common error that occurs when a client makes too many requests to a server within a certain time frame. APIs often return this error to prevent excessive usage and to protect the server from being overwhelmed. The error is usually accompanied by a "Retry-After" header, which tells the client the number of seconds to wait before making another request. When encountering this error in Python, it is important to handle it properly in order to maintain a stable connection to the server and avoid unnecessary disruption of the request-response cycle.
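As a sketch of honoring the Retry-After header with only the standard library (the helper names and the 1-second fallback are illustrative assumptions):

```python
import time
import urllib.request
import urllib.error

def retry_after_seconds(headers, default=1.0):
    """Parse a numeric Retry-After value (in seconds); fall back to
    `default` when the header is missing or not a number."""
    try:
        return float(headers.get("Retry-After"))
    except (TypeError, ValueError):
        return default

def get_with_retry(url, max_attempts=3):
    """Fetch url, sleeping on each 429 for the server-suggested delay."""
    for attempt in range(max_attempts):
        try:
            return urllib.request.urlopen(url)
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(retry_after_seconds(err.headers))
```

Note that Retry-After may also hold an HTTP date rather than a number of seconds; this sketch handles only the numeric form and falls back to the default otherwise.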

Method 1: Add Wait Time

If you are making too many requests to an API or a web server using Python, you may encounter HTTP Error 429 (Too Many Requests). This error occurs when the server enforces a rate limit and refuses to handle further requests for a while. One way to avoid this error is to add wait time between requests. In this tutorial, we will show you how to use Python’s time.sleep() function to add wait time between requests.

Step 1: Import the Required Modules

Before we can use the time.sleep() function, we need to import the time module. We also need to import the requests module to make HTTP requests.

import time
import requests

Step 2: Define the Function to Make HTTP Requests

Next, we will define a function that makes an HTTP request to a given URL. We will use the requests.get() function to make the request.

def make_request(url):
    response = requests.get(url)
    if response.status_code == 200:
        return response.text
    else:
        return None

Step 3: Add Wait Time Between Requests

To add wait time between requests, we will use the time.sleep() function. This function takes a number of seconds as an argument and suspends the execution of the current thread for that many seconds.

def make_requests_with_wait(urls, wait_time):
    results = []
    for url in urls:
        response = make_request(url)
        if response is not None:
            results.append(response)
        time.sleep(wait_time)
    return results

In this function, we take a list of URLs and a wait time as arguments. We then loop through the URLs, make a request to each URL using the make_request() function, and append the response to a list of results. After each request, we add a wait time using the time.sleep() function.

Step 4: Test the Function

To test the make_requests_with_wait() function, we can create a list of URLs and call the function with a wait time of 1 second.

urls = [
    'https://jsonplaceholder.typicode.com/posts/1',
    'https://jsonplaceholder.typicode.com/posts/2',
    'https://jsonplaceholder.typicode.com/posts/3',
    'https://jsonplaceholder.typicode.com/posts/4',
    'https://jsonplaceholder.typicode.com/posts/5'
]

results = make_requests_with_wait(urls, 1)
print(results)

This will make a request to each URL with a 1-second wait between requests. The results will be printed to the console.

That’s it! You now know how to avoid HTTP Error 429 (Too Many Requests) in Python by adding wait time between requests.

Method 2: Use Exponential Backoff

When making HTTP requests, it is common to encounter the HTTP Error 429 (Too Many Requests) response from the server. This error occurs when the server receives too many requests from the same client in a short period of time. To avoid this error, we can use the Exponential Backoff technique.

Exponential Backoff is a technique that increases the delay between retries exponentially. It starts with a small delay and doubles it after each retry until a maximum delay is reached. This approach allows the server to recover from a high load and reduces the likelihood of encountering the HTTP Error 429.

Here’s an example of how to implement Exponential Backoff in Python:

import requests
import time

def make_request(url):
    retries = 0
    max_retries = 5
    delay = 1
    while retries < max_retries:
        response = requests.get(url)
        if response.status_code == 200:
            return response
        elif response.status_code == 429:
            print(f"Too many requests, retrying in {delay} seconds")
            time.sleep(delay)
            delay *= 2
            retries += 1
        else:
            print(f"Unexpected error: {response.status_code}")
            return None
    print("Max retries reached, giving up")
    return None

In this example, we define a function make_request that takes a URL as an argument and makes an HTTP GET request to that URL. If the server responds with a status code of 200, we return the response. If the server responds with a status code of 429, we retry the request with an increasing delay between retries using the Exponential Backoff technique. If the server responds with any other status code, we return None.

We set the maximum number of retries to 5 and the initial delay to 1 second. The delay doubles after each retry, so with these settings the longest single wait is 16 seconds. Note that the code above caps the number of retries but not the delay itself; if you raise max_retries, you may also want to clamp each delay to a maximum value when doubling it.
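One way to add such a cap is to clamp the delay with min() when doubling it; a small sketch (the 8-second cap is an arbitrary example value):

```python
def backoff_delays(initial=1.0, max_delay=8.0, retries=5):
    """Yield the exponential backoff schedule, capped at max_delay."""
    delay = initial
    for _ in range(retries):
        yield delay
        delay = min(delay * 2, max_delay)

print(list(backoff_delays()))  # [1.0, 2.0, 4.0, 8.0, 8.0]
```

Precomputing the schedule like this also makes the retry behavior easy to unit-test without making any network requests.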

By using Exponential Backoff, we can avoid encountering the HTTP Error 429 (Too Many Requests) in our Python applications.

Method 3: Implement Rate Limiting

Step 1: Import necessary libraries

import time
import requests

Step 2: Define rate-limiting function

def rate_limited(max_per_second):
    """Decorator that limits calls to the wrapped function to
    max_per_second by sleeping between calls."""
    min_interval = 1.0 / float(max_per_second)
    last_time_called = [0.0]

    def decorate(func):
        def rate_limited_function(*args, **kwargs):
            # time.clock() was removed in Python 3.8; use a monotonic clock
            elapsed = time.monotonic() - last_time_called[0]
            left_to_wait = min_interval - elapsed
            if left_to_wait > 0:
                time.sleep(left_to_wait)
            ret = func(*args, **kwargs)
            last_time_called[0] = time.monotonic()
            return ret
        return rate_limited_function
    return decorate

Step 3: Decorate your API function with rate limiting

@rate_limited(2) # 2 requests per second
def make_api_request():
    response = requests.get('https://example.com/api')
    return response.json()

In the above example, the make_api_request() function is decorated with the rate_limited() decorator at a rate of 2 requests per second. This means that the function will be called at most twice per second; any extra calls are delayed with time.sleep() until the minimum interval has passed.

Method 4: Use Caching

To avoid HTTP Error 429 (Too Many Requests) in Python, we can use caching. Caching is a technique that stores the response of a request in a cache, so that the next time the same request is made, the response can be served from the cache instead of making a new request to the server.

Here are the steps to use caching to avoid HTTP Error 429 in Python:

  1. Import the required modules:
import requests
from requests_cache import CachedSession
  2. Create a CachedSession object, setting the cache expiration time (optional):
session = CachedSession(expire_after=3600)

This sets the cache expiration time to one hour (3600 seconds).

  3. Make a request using the CachedSession object:
response = session.get('https://api.example.com/data')
  4. Check the response status code:
if response.status_code == 429:
    print('Too many requests. Backing off.')
else:
    print('Response received.')
  5. Access the response content:
content = response.content

Here is the complete code:

import requests
from requests_cache import CachedSession

session = CachedSession(expire_after=3600)

response = session.get('https://api.example.com/data')

if response.status_code == 429:
    print('Too many requests. Backing off.')
else:
    print('Response received.')

content = response.content

This code checks whether the response status code is 429 (Too Many Requests). The real benefit, however, is that repeated requests for the same URL within the expiration window are answered from the local cache and never reach the server at all, which is what keeps you under the rate limit.

Using caching can significantly reduce the number of requests made to the server, and can help you avoid HTTP Error 429 (Too Many Requests) in Python.

Handling HTTP 429 errors in Python

This repo contains an example of how to handle HTTP 429 errors with Python.
If you get an HTTP 429, wait a while before trying the request again.

What is HTTP 429?

The HTTP 429 status code means "Too Many Requests", and it's sent when a server wants you to slow down the rate of requests.

The 429 status code indicates that the user has sent too many requests in a given amount of time ("rate limiting").

The response representations SHOULD include details explaining the
condition, and MAY include a Retry-After header indicating how long
to wait before making a new request.

For example:

HTTP/1.1 429 Too Many Requests
Content-Type: text/html
Retry-After: 3600

One way to handle an HTTP 429 is to retry the request after a short delay,
using the Retry-After header for guidance (if present).

How do you handle it in Python?

The tenacity library has some functions
for handling retry logic in Python. This repo has an example of how to use
tenacity to retry requests that returned an HTTP 429 status code.

The example uses urllib.request from the standard
library, but this approach can be adapted for other HTTP libraries.

There are two functions in handle_http_429_errors.py:

  • retry_if_http_429_error() can be used as the retry keyword of
    @tenacity.retry. It retries a function if the function makes an
    HTTP request that returns a 429 status code.
  • wait_for_retry_after_header(underlying) can be used as the wait keyword
    of @tenacity.retry. It looks for the Retry-After header in the HTTP response,
    and waits for that long if present. If not, it uses the supplied fallback strategy.
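handle_http_429_errors.py itself is not reproduced in this excerpt, but the core logic those two helpers would need can be sketched with plain standard-library code (this is an assumed reconstruction, not the repo's actual source):

```python
import urllib.error

def is_http_429(exc):
    """Predicate suitable for wrapping in tenacity's retry_if_exception():
    True only for an HTTPError carrying status code 429."""
    return isinstance(exc, urllib.error.HTTPError) and exc.code == 429

def seconds_from_retry_after(exc, fallback=1.0):
    """Read a numeric Retry-After header off the error; a tenacity wait
    callable would return this number of seconds before the next attempt."""
    headers = getattr(exc, "headers", None)
    value = headers.get("Retry-After") if headers is not None else None
    try:
        return float(value)
    except (TypeError, ValueError):
        return fallback
```

In the actual repo, these would presumably be wrapped in tenacity's retry and wait interfaces so they can be passed directly to @tenacity.retry, as the usage below shows.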

Example code

In the example below, the get_url() function tries to request a URL. If it gets an HTTP 429 response, it retries, making at most three attempts before erroring out, and waits either for the duration given by the server's Retry-After header, or 1 second between consecutive attempts if the header is absent.

import urllib.request

from tenacity import retry, stop_after_attempt, wait_fixed

from handle_http_429_errors import (
    retry_if_http_429_error,
    wait_for_retry_after_header
)


@retry(
    retry=retry_if_http_429_error(),
    wait=wait_for_retry_after_header(fallback=wait_fixed(1)),
    stop=stop_after_attempt(3)
)
def get_url(url):
    return urllib.request.urlopen(url)


if __name__ == "__main__":
    get_url(url="https://httpbin.org/status/429")

Reader caution

I wrote this as a proof-of-concept in a single evening. It’s not
rigorously tested, but hopefully gives an idea of how you might do this if you
wanted to implement it properly.

Until last week, I had been able to use the Python smopy package (https://pypi.org/project/smopy/) to display map tiles obtained from OpenStreetMap (https://www.openstreetmap.org/#map=4/-28.15/133.29) in a Python program.
But since last week, for some strange reason, smopy has been failing with the following error:

Traceback (most recent call last):
  File "/home/pi/Python Learning/smopyTest.py", line 7, in <module>
    m = smopy.Map((-36.5,-150.0), z=0)
  File "/home/pi/Python Learning/smopy.py", line 291, in __init__
    self.fetch()
  File "/home/pi/Python Learning/smopy.py", line 323, in fetch
    self.img = fetch_map(self.box_tile, self.z)
  File "/home/pi/Python Learning/smopy.py", line 64, in fetch_map
    img.paste(fetch_tile(x, y, z), (px, py))
  File "/home/pi/Python Learning/smopy.py", line 44, in fetch_tile
    png = BytesIO(urlopen(url).read())
  File "/usr/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.6/urllib/request.py", line 532, in open
    response = meth(req, response)
  File "/usr/lib/python3.6/urllib/request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python3.6/urllib/request.py", line 570, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.6/urllib/request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 429: Too Many Requests

I notice on Stack Overflow that many people have encountered the same HTTP error when trying to access other websites. The answers suggest waiting for some time before making multiple requests, etc. When I checked the code of smopy.py, it already limits the number of tiles to 16 to ensure the tile access policy is not violated.

Even when I changed the zoom level of the requested map to z=0, which should download only a single map tile, I get the same error. So it puzzles me why I get the too-many-requests error.

The python application I am trying to develop will display a number of fixed and mobile sensors on the map. So, once the map loads up, it remains mostly unchanged. Only occasionally, if one or more sensors move out of the map area, the application will have to reload the map to try to keep all sensors in view.

I found that the following website shows some alternative tile servers:
https://wiki.openstreetmap.org/wiki/Tile_servers.

I experimented by modifying the smopy.py module by replacing the line

TILE_SERVER = "https://tile.openstreetmap.org/{z}/{x}/{y}.png"

by either

TILE_SERVER = "http://c.tile.stamen.com/watercolor/{z}/{x}/{y}.jpg"

or

TILE_SERVER = "https://maps.wikimedia.org/osm-intl/{z}/{x}/{y}.png"

In both these cases, the map displayed as expected.

So, this suggests to me that it is a restriction specifically related to accessing the tiles from openstreetmap.org rather than something particularly wrong with smopy.

For my application, the watercolor maps from Stamen are not suitable, and the Wikimedia server is described as experimental, so I am not sure whether I can rely on it.

Therefore, can someone please show me the correct way to access OpenStreetMap to display a map in a Python program with smopy or a similar package?

I show below the minimal code I use to display a map. I am running this code on Ubuntu 18.04. I get an identical error whether I run it in IDLE or Thonny.

import matplotlib.pyplot as plt
import smopy

plt.ion()
fig, ax = plt.subplots(figsize=(8,8))
m = smopy.Map((-36.5,-150.0), z=0)
m.show_mpl(ax = ax)

plt.draw()

