
Retrying after exceptions, and handling Internet connection problems

If you request information from a remote web server, you should make sure that your program handles network problems and server failures gracefully. For some errors, such as a connection timeout or HTTP error 503 (Service Temporarily Unavailable), it makes sense to retry a few times if you think the failure was intermittent.

Here I discuss one way to approach this kind of problem using a for-else construct. The for loop in Python has a little-known else clause, which is executed if the loop finishes without hitting a break. Here is some boilerplate code:

for i in range(max_retries):
    try:
        ...  # do stuff that may raise SomeParticularException
    except SomeParticularException:
        continue  # retrying
    else:
        break  # success, stop retrying
else:
    ...  # network is down, act accordingly

In this example, when SomeParticularException is caught, the for loop moves on to the next attempt; if all max_retries attempts fail, the loop finishes without a break and the else clause runs. Note that it is really important to catch only the exceptions that indicate you need to retry what you were doing. If you catch everything with a bare except: (or even a broad except Exception:), you will swallow far more than the errors you can recover from; a bare except: even catches KeyboardInterrupt, which is definitely not what we want to achieve here.
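For instance, here is the same pattern applied directly to a single call made with the requests library. This is a minimal sketch: the URL, the number of retries and the exact exceptions to retry on are placeholders you would adapt to your case.

import requests

max_retries = 5

for i in range(max_retries):
    try:
        response = requests.get("https://example.com/api/data", timeout=5)
        response.raise_for_status()  # turn 4xx/5xx responses into HTTPError
    except (requests.exceptions.Timeout,
            requests.exceptions.ConnectionError,
            requests.exceptions.HTTPError):
        continue  # possibly intermittent problem: retry
    else:
        break  # the request succeeded
else:
    print("giving up after", max_retries, "attempts")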

If you send requests in many places across your program, from different functions, it is reasonable to create a decorator that makes it easy to give those functions the same “retrying behaviour”. Here is an example of how it may be implemented with the requests library. With requests, the underlying urllib3 library can also handle retries (I show a rough sketch of that configuration near the end of this post). However, it does not sleep between retries by default, and the simple form applies only to failed DNS lookups, socket connections and connection timeouts; if you want to retry on anything else, you'll need to roll your own. This is how I roll:

import requests
import time

class NetworkError(RuntimeError):
    pass

def retryer(func):
    # exceptions that indicate a possibly intermittent problem worth retrying
    retry_on_exceptions = (
        requests.exceptions.Timeout,
        requests.exceptions.ConnectionError,
        requests.exceptions.HTTPError
    )
    max_retries = 10
    timeout = 5
    def inner(*args, **kwargs):
        for i in range(max_retries):
            try:
                result = func(*args, **kwargs)
            except retry_on_exceptions:
                time.sleep(timeout)  # wait a bit before the next attempt
                continue
            else:
                return result
        else:
            # every attempt failed
            raise NetworkError
    return inner

Now, every time you write a function that should have this behaviour, just decorate it:

@retryer
def foo(stuff):
    ...  # do stuff that may raise one of the retryable exceptions

You can catch the NetworkError and handle the case when the network problems are not intermittent, for example by retrying again after a longer timeout, notifying the user, or performing some other task. If you use a different library to make requests (for instance, one of the many libraries that connect to a service via its REST API), just modify retry_on_exceptions to include the relevant errors.
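For example, the calling code might look roughly like this, building on the retryer and NetworkError defined above; fetch_data, the URL and the fallback are hypothetical placeholders:

@retryer
def fetch_data(url):
    response = requests.get(url, timeout=5)
    response.raise_for_status()  # 4xx/5xx becomes HTTPError, which retryer retries
    return response.json()

try:
    data = fetch_data("https://example.com/api/data")
except NetworkError:
    # the problem is not intermittent: fall back, notify the user, etc.
    print("The service is unreachable, please try again later.")
    data = None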

You can upgrade this example by creating a decorator that accepts arguments, which makes it possible to set a different number of retries and a different timeout for different functions:

import requests
import time

class NetworkError(RuntimeError):
    pass

def retryer(max_retries=10, timeout=5):
    def decorator(func):
        retry_on_exceptions = (
            requests.exceptions.Timeout,
            requests.exceptions.ConnectionError,
            requests.exceptions.HTTPError
        )
        def inner(*args, **kwargs):
            for i in range(max_retries):
                try:
                    result = func(*args, **kwargs)
                except retry_on_exceptions:
                    time.sleep(timeout)
                    continue
                else:
                    return result
            else:
                # every attempt failed
                raise NetworkError
        return inner
    return decorator

Now the decorator accepts arguments, which allows us to customise the behaviour of different functions:

@retryer(max_retries=7, timeout=12)
def foo(stuff):
    ...  # do stuff that may raise one of the retryable exceptions
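For comparison, here is the built-in approach mentioned earlier: requests lets you attach urllib3's retry machinery to a session via an HTTPAdapter. This is only a sketch; the numbers, the status_forcelist and the URL are arbitrary, and with a Retry object you also get a backoff between attempts and retries on specific HTTP status codes.

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=1, status_forcelist=[502, 503, 504])
adapter = HTTPAdapter(max_retries=retries)
session.mount("http://", adapter)
session.mount("https://", adapter)

response = session.get("https://example.com/api/data", timeout=5)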

In some cases it is reasonable to use increasing sleep intervals between retries: time.sleep(timeout * i). This way, the more attempts you make, the longer you wait between them (similar to how Gmail handles a lost Internet connection).
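Concretely, only the sleep line in the decorator's inner function changes; here is a sketch of that variant (linear growth, as in the formula above; an exponential timeout * 2 ** i would work similarly):

        def inner(*args, **kwargs):
            for i in range(max_retries):
                try:
                    result = func(*args, **kwargs)
                except retry_on_exceptions:
                    # i == 0 on the first retry, so there is no pause after the very first failure
                    time.sleep(timeout * i)
                    continue
                else:
                    return result
            else:
                raise NetworkError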


