Django Caching in Production: A Comprehensive Setup Guide

How to set up Django’s caching framework for a production environment, including different caching strategies
April 25, 2025 by Hamed Mohammadi

Caching is a critical technique for improving Django application performance in production. By storing the results of expensive computations (like rendered pages or database queries), Django can serve repeated requests faster and reduce load on your database and server. This guide covers how to set up Django’s caching framework for a production environment, including different caching strategies (per-site, per-view, template fragment, and low-level caching) and a comparison of Django’s supported cache backends (Memcached, Redis, database, and local-memory caching). We’ll include configuration steps, examples, best practices for Django 5.x+, and pointers to the official documentation.

Caching Strategies in Django

Django offers multiple levels of cache granularity to suit different needs. You can cache your entire site, individual views, or parts of a template, or use the low-level cache API for arbitrary data. Below we detail each strategy and how to configure it:

Per-Site Caching (Site-Wide Cache)

Overview: Per-site caching saves the entire output of each page (for all users) in the cache. Once enabled, the first request to any page will be computed normally and cached; subsequent requests to the same URL will be served from cache until the cache expires.

Setup: To enable site-wide caching, add Django’s cache middleware to your MIDDLEWARE settings and configure a few settings:

  1. Add Cache Middleware: Include UpdateCacheMiddleware and FetchFromCacheMiddleware in your MIDDLEWARE list. Order is important: UpdateCacheMiddleware should come before CommonMiddleware and FetchFromCacheMiddleware should come after it (usually at the top and bottom of the list). For example:

    MIDDLEWARE = [
        "django.middleware.cache.UpdateCacheMiddleware",   # saves responses to cache
        "django.middleware.common.CommonMiddleware",       # common middleware
        "django.middleware.cache.FetchFromCacheMiddleware" # fetches from cache if available
    ]
    

    (Having the update middleware first and fetch last ensures each request checks the cache on the way in and updates it on the way out: response middleware runs in reverse list order, so the first-listed UpdateCacheMiddleware saves the response last.)

  2. Configure Cache Settings: Add the following to your settings. For example:

    CACHE_MIDDLEWARE_ALIAS = "default"
    CACHE_MIDDLEWARE_SECONDS = 300  # cache pages for 5 minutes
    CACHE_MIDDLEWARE_KEY_PREFIX = "mysite"  # to avoid key collisions (use a unique value per site)
    

How it Works: With the above enabled, Django will cache each complete page (status code 200 responses to GET and HEAD requests) on the first access. Subsequent requests to the same URL return the cached page without executing the view. The caching is keyed by the full URL including query parameters, so /page?foo=1 and /page?foo=2 are cached separately. The middleware also adds appropriate HTTP headers (Expires and Cache-Control) to inform downstream caches or browsers of the caching policy.

Considerations: Site-wide caching is a blunt tool – it will cache everything rendered in a page. This is ideal for mostly-static sites but less so if pages include user-specific data or frequently changing content. You can use Vary headers or Django’s @vary_on_cookie / @cache_control decorators to fine-tune caching per view when using the site cache. For example, if a view sets its own Cache-Control: max-age header (via the cache_control() decorator), that specific max-age will override CACHE_MIDDLEWARE_SECONDS for that view. You can also mark certain views as never cacheable using the @never_cache decorator (for login pages, etc.). In a multilingual site, Django automatically varies the cache key by the active language (if USE_I18N=True), so each language version of a page is cached separately.
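For example, here is a minimal sketch of per-view overrides that cooperate with the site-wide cache (the view names are illustrative):

from django.http import HttpResponse
from django.views.decorators.cache import cache_control, never_cache

@cache_control(max_age=60)  # this view's max-age overrides CACHE_MIDDLEWARE_SECONDS
def news_feed(request):
    return HttpResponse("Frequently changing content")

@never_cache  # the cache middleware will never store this response
def login_view(request):
    return HttpResponse("Per-user content")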

Per-View Caching

Overview: Per-view caching (also called view-level caching) caches the output of individual view functions or class-based views. This is more granular than site-wide caching – you choose which views to cache and for how long.

Setup: Use the cache_page decorator on your view or wrap the view in the URL configuration:

  • Decorator usage: Import cache_page and apply it to a view function, specifying the timeout (in seconds) as the parameter. For example:

    from django.http import HttpResponse
    from django.views.decorators.cache import cache_page

    @cache_page(60 * 15)  # cache for 15 minutes
    def my_view(request):
        ...  # expensive operations
        return HttpResponse("Result")
    

    In this example, my_view will be cached for 900 seconds (15 minutes). The first call executes the view and stores its response; subsequent calls within 15 minutes return the cached response.

  • URL conf usage: If you prefer not to modify the view code, you can apply caching in the URL patterns. Wrap the view with cache_page() when referencing it in urlpatterns. For example:

    from django.urls import path
    from django.views.decorators.cache import cache_page

    from .views import reports_view  # the view to cache

    urlpatterns = [
        path("reports/<int:id>/", cache_page(300)(reports_view)),  # cache this URL for 5 minutes
        ...
    ]
    

    This decouples caching from the view definition (useful if you want to reuse the view without caching in other contexts).

Behavior: Like site caching, the cache key is based on the request URL, and each distinct URL gets its own cache entry. So if your view URL has dynamic segments (e.g., /reports/1/ vs /reports/2/), they will be cached separately, as expected. By default, the cached data goes into the "default" cache, but you can specify a different cache alias if you have multiple cache configurations. For instance, @cache_page(60, cache="special_cache") will use the cache configured under the alias "special_cache". You can also provide a key_prefix to differentiate cache entries on a per-view basis (this works like the global key prefix and is useful for avoiding collisions or grouping cache keys). For example, @cache_page(300, key_prefix="v2") could be used during a second version rollout to avoid hitting cached content from a previous version of the view.

Considerations: Per-view caching is useful for isolating heavy views. It gives more control than site-wide caching – you can choose which views benefit from caching and set different timeouts for each. It’s commonly used for expensive pages (e.g., reports, dashboards) while leaving simpler or user-specific views uncached. Remember that like site caching, it will cache the entire response of the view. If you need more fine-grained control (only parts of the page), consider template fragment caching instead.

Template Fragment Caching

Overview: Template fragment caching allows you to cache a portion of a template instead of the whole page. This is useful when a page is only partially dynamic – for example, a page with a slow-rendering sidebar or footer that you want to cache, while leaving the rest of the page uncached (or with a different cache policy).

Setup: Use the {% cache %} template tag in your Django templates:

  1. Load the cache template tag library at the top of your template: {% load cache %}.

  2. Wrap the fragment of the template you want to cache with {% cache timeout key %} ... {% endcache %}.

The {% cache %} tag requires at least two arguments: a timeout (in seconds) and a fragment name (a key to identify the fragment). Example:

{% load cache %}
{% cache 300 sidebar %}
    ... expensive sidebar rendering ...
{% endcache %}

In this example, the content inside the {% cache %} block is cached for 300 seconds (5 minutes) under the name "sidebar". The first time this template renders, Django will generate the sidebar normally and store it; for the next 5 minutes, any template that hits this cache tag (with the same name and same key arguments, see below) will use the cached content.

Varying by Dynamic Data: Often, you may want multiple versions of a fragment cache depending on some context (such as the logged-in user, or other parameters). You can supply additional arguments to the cache tag, which will be factored into the cache key. For example:

{% load cache %}
{% cache 500 sidebar request.user.username %}
    ... sidebar content personalized for user ...
{% endcache %}

Here, we pass request.user.username as an extra key. This means each user gets their own cached sidebar fragment (identified by the fragment name plus their username). You can pass multiple variables if needed to uniquely distinguish the fragment variant. If a timeout of None is used, the fragment is cached indefinitely (until manually invalidated).

Using a Specific Cache: By default, fragment caching uses a cache named "template_fragments" if one is defined; otherwise it falls back to the default cache. You can direct fragment caches to a specific backend by adding using="cache_alias" at the end of the tag. For example:

{% cache 300 sidebar request.user.username using="special_cache" %}
    ... 
{% endcache %}

This will store/retrieve the fragment in the "special_cache" cache configuration.

Invalidating Fragments: To manually invalidate a cached fragment (e.g., if the data changes and you want to bust the cache), you can compute its cache key in Python and delete it. Django provides a utility, django.core.cache.utils.make_template_fragment_key(fragment_name, vary_on=None), that returns the exact cache key used for a fragment. For example:

from django.core.cache import cache
from django.core.cache.utils import make_template_fragment_key

# username: the same value that was passed to the {% cache %} tag as a vary-on argument
key = make_template_fragment_key("sidebar", [username])
cache.delete(key)  # remove the cached fragment for this user

This generates the key for the "sidebar" fragment for a given username and deletes it.

Considerations: Fragment caching is very flexible – it lets you cache the complex parts of a page while still rendering other parts per request. It does add some complexity (you must manage multiple keys and ensure caches are invalidated appropriately when underlying data changes). Use descriptive fragment names and vary keys to avoid collisions. This method is great when full-page caching is too coarse (for instance, caching an entire page might cache navigation or user info that shouldn’t be cached for all users). Combining fragment caching with per-view or site caching is possible but rarely needed – typically you choose one approach based on needs.

Low-Level Cache API (Manual Caching)

Overview: The low-level cache API allows you to cache arbitrary data (not just view responses or template output) in Django. This is essentially a Python interface to your configured caches, letting you store and retrieve values by keys. Use this for caching things like computed values, query results, or any expensive operation that you want to reuse without recomputation.

Accessing the Cache: Django’s cache framework is configured via the CACHES setting (covered in the next section). Once configured, you can access a specific cache by name via django.core.cache.caches['alias']. For convenience, Django also provides a default cache accessible as django.core.cache.cache (an alias for caches['default']). For example:

from django.core.cache import cache  # default cache
# or alternatively:
from django.core.cache import caches
my_cache = caches["default"]

Storing and Retrieving Data: The cache objects (like cache above) have a simple API:

For example:

cache.set("my_key", "Hello, world!", timeout=60)  # cache the string for 60 seconds
value = cache.get("my_key")  # returns "Hello, world!" if within 60s

When fetched within the 60-second window, cache.get("my_key") returns "Hello, world!"; after the timeout it returns None. You can cache any picklable Python object (strings, numbers, dicts, even querysets or model objects) with this API – Django handles serialization for you.

Other useful methods include:

  • cache.add(key, value, timeout) which sets a value only if the key isn’t already in cache.

  • cache.get_or_set(key, default, timeout) which returns the cached value if present, or sets it to a default (e.g. result of a function) and returns it if not.

  • cache.delete(key) to remove a specific key, cache.clear() to clear the entire cache, and cache.delete_many([key1, key2, ...]) to delete a set of keys at once.

  • cache.touch(key, timeout) to reset the timeout on an existing cache item without changing its value (available since Django 2.1).
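As a quick illustration, a sketch exercising these methods (key names are arbitrary):

from django.core.cache import cache

cache.add("greeting", "hello", timeout=30)    # sets the key only if it is absent
cache.add("greeting", "hi", timeout=30)       # no-op: "greeting" already exists

cache.touch("greeting", timeout=120)          # extend the lifetime, keep the value
cache.delete_many(["greeting", "other_key"])  # remove several keys in one call
cache.clear()                                 # wipe the entire cache (use sparingly)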

Use Cases: Low-level caching is handy for caching pieces of data that are expensive to compute and are used in multiple places or requests. For example, if you have a complex database query whose result is needed in several views, you could cache the result with a key (perhaps based on query parameters or a model’s last-updated timestamp) and reuse it. It is also used internally by the higher-level caching (the site, view, and fragment caches use the same API under the hood). When using the low-level API, you are responsible for choosing sensible cache keys and invalidating them when needed. It’s good practice to namespace your keys (e.g., "products_list_page1" or "user_{id}_preferences") to avoid collisions. Django’s KEY_PREFIX (discussed below) can also help namespace all keys for a given site or project.
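As an illustration, a cache-aside sketch for an expensive query (the Product model and key naming are assumptions for this example):

from django.core.cache import cache

from myapp.models import Product  # hypothetical model

def popular_products(category_id):
    key = f"popular_products_{category_id}"  # namespaced, parameterized key
    products = cache.get(key)
    if products is None:
        # Cache miss: run the expensive query and materialize it before caching
        products = list(
            Product.objects.filter(category_id=category_id).order_by("-sales")[:20]
        )
        cache.set(key, products, timeout=600)
    return products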

Considerations: Always be cautious to avoid caching sensitive data unintentionally and be mindful of cache invalidation (the classic “cache invalidation” challenge). The low-level API gives you flexibility – you can implement custom caching logic like cache aside or write-through caching as needed. In production, ensure your cache store has enough memory and appropriate eviction policies (discussed next) so that your low-level cached data stays available and doesn’t overwhelm the cache.

With caching strategies covered, we will now explore the caching backends available in Django – i.e., where the cached data is stored. Django supports several cache backends, each with pros/cons suitable for different production scenarios.

Django Caching Backends and Configuration

Django’s cache framework is backend-agnostic – you can choose to store cache data in memory, on disk, or in a database, by adjusting the CACHES setting. The CACHES setting is a dictionary defining one or more cache configurations. For example, you might have a default cache and an alternate cache. Each cache config specifies a backend (the storage type/engine) and a location and settings for that backend. Here we focus on four common backends for production use: Memcached, Redis, Database, and Local-Memory. We’ll describe how to install and set up each, show a sample settings.py configuration, and discuss their pros, cons, and best use cases.

Memcached

About: Memcached is a high-performance, distributed memory caching system. It runs as a separate daemon process and stores all data in RAM for quick access. Memcached has been used by large sites like Facebook and Wikipedia to reduce database load and speed up dynamic websites. It’s a great choice for production when you need an ephemeral, fast cache that can be shared across multiple servers.

Installation: Memcached consists of two parts: the Memcached server, and a Python client library.

  • Server: Install Memcached on your server or ensure you have access to a Memcached service. On Ubuntu/Debian you can install via apt (sudo apt install memcached), on CentOS/RHEL via yum, on macOS via Homebrew (brew install memcached). Start the memcached daemon so it’s running (by default on port 11211).

  • Python client library: Django supports two Python bindings for Memcached: pylibmc and pymemcache. Install one of them in your Django project. For example:

    pip install pymemcache
    

    or

    pip install pylibmc
    

    (If both are installed, you can choose either backend in Django; pymemcache is used in Django’s example configs by default.)

Django Configuration: In settings.py, add a cache configuration using the Memcached backend. For example, to use Memcached on localhost:11211 with the pymemcache library:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}

This tells Django to use Memcached for the default cache. If you prefer pylibmc, use "BACKEND": "django.core.cache.backends.memcached.PyLibMCCache" instead (and make sure pylibmc is installed).

Advanced configuration:

  • Unix socket: If your Memcached is listening on a Unix domain socket (which can be faster than TCP on the same host), use the unix:/path/to/socket syntax for LOCATION. For example: "LOCATION": "unix:/tmp/memcached.sock".

  • Multiple servers: Memcached can be distributed across several servers, and Django will treat them as one cache (spreading keys across them). To configure this, provide a list of server addresses in LOCATION. For example:

    "LOCATION": [
        "memcache1.example.com:11211",
        "memcache2.example.com:11211",
    ]
    

    Memcached will automatically shard and manage the cache across both servers.

  • Options: You can pass options to the memcache client via an OPTIONS dict. For instance, with PyLibMCCache, you might set {"binary": True} to use the binary protocol, among other client options. (Refer to the Django docs or the client library docs for supported OPTIONS.)
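Putting these together, a sketch of a pylibmc-based configuration with multiple servers and options (hostnames are placeholders):

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyLibMCCache",
        "LOCATION": [
            "memcache1.example.com:11211",
            "memcache2.example.com:11211",
        ],
        "OPTIONS": {
            "binary": True,  # use the binary protocol (pylibmc-specific option)
        },
    }
}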

Pros (Memcached in Production):

  • Blazing fast reads/writes: All data is in memory (RAM), with very quick lookup by key. Retrieval is typically a sub-millisecond operation (plus network latency) – much faster than hitting a database or computing data.

  • Scalable and distributed: Memcached is designed to scale out. You can run multiple memcached servers and add them to the pool; the cache will distribute data among them, allowing you to increase total cache size and throughput linearly. This is great for high-traffic, multi-server web deployments.

  • Lightweight: Memcached has a simple design and low memory overhead per item. It uses an LRU (least-recently-used) eviction policy to automatically remove the least-recently-used items when memory is full, which keeps the cache from growing indefinitely.

  • Battle-tested: It’s a mature technology widely used in production, so it’s reliable and well-supported.

Cons/Considerations:

  • Volatile (no persistence): Memcached is purely in-memory; if the service restarts or the machine reboots, all cached data is lost. This means the cache is always ephemeral (usually fine for caching, but you must accept that a restart cold-starts your cache).

  • External service overhead: Running Memcached means an extra moving part in your architecture. In production, you need to maintain the memcached server(s) (ensure they have enough RAM, monitor their health, etc.). If the memcached server is down, your app will revert to computing everything (which could spike load). The cache is not critical in the sense of correctness, but it is a dependency for performance.

  • Data size limits: By default, memcached often has a limit of 1MB per cache item. Very large objects can’t be cached unless you reconfigure the server. Also, total cache size is limited to the memory you allocate to memcached; if you exceed it, older items will be evicted. This is usually fine (you want eviction), but it means cache entries might disappear if your usage grows unless you scale up memory.

  • Security: Memcached by default does not have authentication and expects to be in a trusted network. Ensure your memcached port is firewalled from the public. Alternatively, use SASL (if compiled in) or stunnel if you need authentication/encryption for memcached (not commonly done in simple setups).

Performance & Use Cases: Memcached excels in scenarios with heavy read traffic and where cached data can be regenerated or is not absolutely required to persist. Typical use cases:

  • Caching the rendered HTML of pages or fragments (to avoid re-rendering templates and hitting the DB).

  • Caching database query results or expensive computations (to avoid repeating complex queries on each request).

  • Session storage (some use memcached as the session backing store in Django for performance; see the settings sketch after this list).

  • Deduplicating repeated API calls or third-party data fetches by caching the responses.
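For the session-storage use case above, Django ships cache-backed session engines; a minimal settings sketch:

# settings.py
SESSION_ENGINE = "django.contrib.sessions.backends.cache"  # sessions live only in the cache
# or keep a database copy as a fallback, so sessions survive a cache flush:
# SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
SESSION_CACHE_ALIAS = "default"  # which CACHES alias to use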

In a production Django app, Memcached is a solid default for a shared cache when you have multiple web servers or processes. It’s simple, fast, and if the data is mostly transient (which cache data should be), its lack of persistence is not an issue. Make sure to allocate enough memory to memcached to hold your working set of cache data, and monitor the cache miss rates. A high miss rate might mean your cache is too small or your timeouts are too short.

Redis

About: Redis is another popular in-memory cache backend, often used as a more feature-rich alternative to Memcached. Redis is essentially an in-memory data structure server – it can act as a cache, database, and message broker. For caching, it offers similar speed to Memcached with additional capabilities like persistence and more data structure options. It’s well-suited for high-performance and distributed cache scenarios and is widely used in modern web apps.

Installation: Like memcached, you need to install and run the Redis server, and also install a Python client.

  • Server: Install Redis on your server (e.g., sudo apt install redis-server on Debian/Ubuntu, or use Docker or a cloud service like AWS ElastiCache). Ensure the Redis service is running and accessible. Default port is 6379.

  • Python client library: Django 4.0 and later (including 5.x) ship a built-in Redis cache backend that works with the redis-py library. Install the redis package from PyPI (which provides redis-py):

    pip install redis
    

    It’s also recommended to install hiredis (an optional C parser), which redis-py can use to improve performance:

    pip install hiredis
    

    (Alternatively, you can use the third-party django-redis package which provides a Django cache backend with some extras, but for Django 4+ the built-in backend is usually sufficient.)

Django Configuration: Configure the default cache to use Redis. Example:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379",
    }
}

This points to a Redis server on localhost, port 6379. Django’s RedisCache backend expects a URL in LOCATION: it can include the scheme (redis:// or rediss:// for SSL), and optional username, password, host, port, and database. For instance, if your Redis requires a password or a specific database:

"LOCATION": "redis://:password@127.0.0.1:6379/1"

This would use database index 1 and authenticate with the given password (a username is rarely needed for Redis, unless you use ACLs with multiple users). If you omit a database number, it defaults to 0.

Advanced configuration:

  • Multiple Redis nodes (replication/sharding): The Django RedisCache supports specifying multiple URLs in a list. If you provide multiple LOCATION entries, it will treat the first as the primary (for writes) and the others as replicas (for reads). This is useful if you have a primary-replica Redis setup for high availability – Django will read from the replicas and write to the primary. Example:

    "LOCATION": [
        "redis://master.redis.internal:6379",       # write + read
        "redis://replica1.redis.internal:6379",     # read-only
        "redis://replica2.redis.internal:6379",     # read-only
    ]
    

    Keys written will propagate to replicas, and reads may come from any, which can improve read throughput.

  • Redis-specific options: You can pass an OPTIONS dict with RedisCache as well. This can include things like TIMEOUT overrides or custom connection pool classes etc. (For example, you might specify a different CLIENT_CLASS if using django-redis, but with core RedisCache, redis-py handles connections; you can still pass a CONNECTION_POOL_KWARGS if needed, etc. Consult Django docs if doing custom setups.)

Pros:

  • Fast and memory-based: Like Memcached, Redis stores data in RAM, so reads and writes are extremely fast (sub-millisecond level for in-memory operations). It’s perfectly suited for caching use cases where low latency is important.

  • Persistence options: Unlike Memcached, Redis can optionally persist data to disk (snapshotting or append-only file). In a caching context, you might not need persistence (some people disable it for pure caching to avoid disk I/O), but it’s available. If the Redis server restarts, it can reload data from disk if persistence was enabled. This means your cache can survive reboots (with caveats of data possibly slightly stale depending on persistence frequency).

  • Rich data structures: Redis supports not just string keys/values, but also hashes, lists, sets, sorted sets, etc. Django’s cache framework will just treat it as a key-value store of pickled data, but having Redis in your stack means you could leverage it for other purposes (outside of the Django cache API). For instance, you might use Redis for caching and also for things like pub/sub, counters, or session storage. It can be a unified solution for several needs.

  • Supports large cache and advanced configs: Redis can handle very large datasets (limited by RAM). It also supports clustering (sharding data across nodes) and replication for high availability. In production, you can run Redis in a primary-replica setup or a cluster mode to scale out. This makes it suitable for very high load scenarios.

Cons/Considerations:

  • Operational overhead: Running Redis is an additional service to maintain, similar to Memcached. It’s arguably a bit more complex than Memcached (more configuration options, persistence settings, etc.). You need to monitor Redis memory usage and performance. If Redis goes down or becomes unreachable, your application’s caching will stall (it may hang on connection attempts or fall back to recomputation). So you should plan for failure (e.g., use try/except around cache access or ensure timeouts are set on the client).

  • Memory management: By default, Redis will not evict old data until it reaches a memory limit (if one is set). If no max memory is set and your cache fills up the server’s RAM, Redis will start to refuse writes (or crash) rather than evict data. Best practice: set maxmemory in the Redis config and an eviction policy (like allkeys-lru for least-recently-used) when using it as a cache. This way it behaves more like Memcached, evicting old entries when full. This is configured on the Redis server side – it is not a Django setting (see the sketch after this list).

  • Single-threaded performance: Redis handles requests in a single main thread (for CPU-bound tasks this could be a bottleneck, though it can serve a huge number of ops per second, often > 100k ops/s on a decent server). Memcached by contrast can utilize multiple threads. In practice, for web caching, Redis’s single-threaded nature is not usually a problem unless you have extremely high throughput or very CPU-intensive Lua scripts, etc., running. You can scale by running a Redis cluster if needed.

  • Network latency: This is the same consideration as Memcached – accessing Redis involves a network call (unless it’s local). For highest performance you’d run Redis on the same server or a fast network. Usually this latency is still very low (microseconds on localhost, maybe a fraction of a millisecond over network), but it’s there. Local memory cache avoids this at the cost of other trade-offs.
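For the memory-management point above, eviction is a Redis server setting rather than a Django one. It normally lives in redis.conf, but as a sketch it can also be applied at runtime with redis-py:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)
# Cap memory at 512 MB and evict least-recently-used keys when full
# (equivalent to `maxmemory 512mb` / `maxmemory-policy allkeys-lru` in redis.conf)
r.config_set("maxmemory", "512mb")
r.config_set("maxmemory-policy", "allkeys-lru")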

Performance & Use Cases: Redis and Memcached often overlap in use cases. Redis shines if you want a bit more than a simple key-value cache – for example, if you plan to use features like cache persistence (so that warm cache survives a reboot), or you want to share the service with other roles (e.g., using Redis for Celery queues, Django channel layers, or real-time pub/sub, in addition to caching). Many modern architectures use Redis as a multi-purpose in-memory datastore.

Typical use cases for Redis caching in Django:

  • Similar to Memcached: caching computed results, full pages, fragments, queries, etc., to speed up responses.

  • Scenarios where cache data is expensive to recompute and you prefer to have it survive restarts (by using Redis persistence). For example, caching a machine learning model output or a heavy report that’s generated nightly – if the server restarts, memcached would lose it, but Redis could be configured to keep it.

  • If you need to cache larger objects or a huge number of keys, Redis can be configured with more memory and has more robust memory management options.

  • Applications that are already using Redis (for example, for real-time features or as a primary datastore for some non-critical data) can use the same Redis for caching to reduce infrastructure complexity (one less service to run).

In terms of raw speed, both Redis and Memcached are very fast. Benchmarks vary, but for simple GET/SET, they are often comparable. Memcached might have a slight edge in some cases (as it’s purely optimized for cache and multi-threaded), whereas Redis might be within the same order of magnitude. The difference is usually not significant for web app caching – network latency will dominate anyway. So the choice often comes down to features and existing ecosystem: use Memcached if you want a straightforward, memory-only cache and possibly simpler setup; use Redis if you want the flexibility of persistence, advanced features, or if it aligns with your stack (many people choose Redis nowadays since it can do more).

Example: To use Redis in Django 4.0+, after configuring as above, you can call cache.set()/cache.get() and the data will be stored in Redis. If you open a Redis CLI, you will see keys like :1:my_key (the :1: is the cache key version, which Django builds into every key along with any KEY_PREFIX). The values are stored pickled, but you can inspect the keys and memory usage in Redis. It’s good practice to periodically monitor your Redis memory usage (the INFO memory command) and eviction stats to ensure your caching is operating as expected.

Database Caching

About: Django can use a SQL database table as a cache backend. This means cached entries are stored as rows in a table in your project’s database (or another database if you specify). This backend is convenient if you cannot or do not want to set up an external cache server – it leverages the existing database. However, database caching is generally slower than memory-based caches and is not as common for high-traffic production sites, since it puts load back on the database. It can be useful for small sites or for data that must persist and where consistency is more important than raw speed.

Installation: No separate service is needed beyond your database. However, you must create the cache table in the database before using it.

  • Configure the cache in settings.py (see below).

  • Run the management command to create the table:

    python manage.py createcachetable
    

    This will create a table (named as you specify in LOCATION) with columns for the cache key, value, and expiration, which Django will use to store cached entries. Ensure you run this command on your production database (you might include it in your deployment or migrations process).

Django Configuration: Example settings for database cache:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "my_cache_table",
    }
}

Here, "LOCATION" is the name of the database table to use for storing cache data (Django’s cache framework | Django documentation | Django) (Django’s cache framework | Django documentation | Django). You can choose any table name that isn’t already used. In our example, the cache table will be named my_cache_table. Once this is set, run createcachetable to actually create the table structure.

By default, this will use your default database (DATABASES['default']). If you want a dedicated database for caching, define it in the DATABASES setting and direct cache-table operations to it with a database router (this is not commonly needed – most projects just use the main DB).
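If you do dedicate a database to caching, a router along these lines (adapted from the Django docs; the "cache_db" alias is an assumption) sends cache-table operations there:

class CacheRouter:
    """Route all database-cache operations to a dedicated database."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == "django_cache":
            return "cache_db"
        return None

    def db_for_write(self, model, **hints):
        if model._meta.app_label == "django_cache":
            return "cache_db"
        return None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        if app_label == "django_cache":
            return db == "cache_db"
        return None

Register it in DATABASE_ROUTERS and run createcachetable --database cache_db so the table is created in the right place.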

Pros:

  • No extra service: It piggybacks on the existing database, so you don’t need to maintain a separate cache server. This can simplify deployment for small projects.

  • Persistent storage: Cache entries are stored on disk in the database, so they survive application restarts and even database restarts (unlike purely in-memory caches). This means if you cache something expensive, it remains cached until it expires or is invalidated, regardless of service restarts.

  • OK for small scale: If your site has light traffic, the database can handle the extra queries. Each cache get/set is just a database query, which might be fine if it’s not too frequent. Also, because the data is in a table, you could even inspect or manipulate it using SQL if needed (for debugging).

  • Security & transactions: The data is in your controlled database, which might be easier to secure (no open ports like a cache server) and can participate in transactions if needed (though Django’s cache API doesn’t automatically do that).

Cons:

  • Slower performance: Accessing the cache involves executing a SQL query every time. Even if the table is indexed and queries are fast, it’s significantly more overhead than an in-memory lookup. The whole point of caching is to avoid hitting the database; with this backend, you are hitting a database (albeit a different table) for the cache, so the latency improvement is smaller. It might still save time if the original computation was very slow or involved multiple queries, but the benefit is less than with a memory cache.

  • Increased DB load: Every cache get/set is load on the DB server. In a high-traffic scenario, this could contend with your normal database operations. The database might become a bottleneck or failure point for your caching as well as your data. If the DB is under heavy load, your cache could perform poorly or time out, negating the benefits.

  • No expiry without writes: The database cache does not purge expired entries on its own; culling happens only as a side effect of cache writes. When a new entry is added and the table exceeds MAX_ENTRIES, Django deletes expired rows (and, if needed, some unexpired ones). If your cache table grows large and your site goes idle (no new cache sets), expired rows just sit there, and over time the table can become very large if not managed. You may need to periodically clean up expired rows (e.g., a cron job running a SQL DELETE on expired ones), or rely on new cache sets eventually triggering culling.

  • Transactions/isolation: If you use transactions heavily, reads of the cache might be affected by transaction isolation level (e.g., a cache set in a transaction might not be visible to other transactions until commit if they’re on different connections). In most cases this is fine (cache can be slightly behind), but it’s a nuance to be aware of.

Performance & Use Cases: Database caching is best suited for small to medium sites or low QPS (queries per second) scenarios, where you cannot introduce an in-memory cache service. For example, a small corporate website hosted on a server where you only have MySQL/PostgreSQL available might use DB cache to store a few computed values to save recomputation. It can also make sense if the data you cache is not accessed super frequently, but when it is, you want to avoid recompute and you don’t mind a quick DB lookup.

Another use case: If you have data that must be consistent and you prefer it to always come from the primary database (to avoid any chance of stale data beyond the expiration you set), you might use short-timeout DB caching. However, this is edge – usually if consistency is critical, caching might be avoided altogether or carefully managed.

Tips for using DatabaseCache in production:

  • Make sure the cache table has proper indexes (the Django createcachetable command will create an index on the cache key, etc., as needed). The table structure typically has columns for key, value (perhaps as text or binary), and an expiration timestamp, with indexes on the key and the expiration.

  • Keep the timeout values reasonable. Don’t cache things for extremely long times unless they truly rarely change, to avoid a massive buildup of expired rows.

  • Monitor the size of the cache table. If it grows continuously, you might need to intervene with manual cleanup or increase cache writes to trigger culls. By default, upon adding a new cache entry, Django will delete a bunch of expired entries (and if the table grows too large, it will delete some oldest entries even if not expired, to cap the table size) – but this culling is not as proactive as memcached/redis eviction.

  • Because each get is a query, extremely frequent cache access (hundreds per second) could saturate your DB. If you anticipate high read rates on the cache, you should really consider using an in-memory cache instead.

Sample scenario: Suppose you have a report that takes 5 seconds to generate by crunching a lot of data. You decide to cache it for an hour. If you use the DB cache, the first request will compute and store the report in the my_cache_table. Subsequent requests will do a quick SELECT on that table to fetch the cached report. That SELECT might take, say, 5-10 milliseconds, which is a big improvement over 5 seconds. So it works – the cache is effective. Now, if you had used Redis or Memcached, those subsequent requests might take <1 millisecond. In both cases the user sees a fast response, but the CPU/IO load is different. With DB cache, your database did extra work; with Redis, your cache server did the work (and more efficiently). For a handful of requests, both are fine; at scale, the difference becomes critical.
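That scenario maps naturally onto get_or_set; a sketch (generate_report stands in for the 5-second computation):

from django.core.cache import cache

def generate_report():
    # Stand-in for the expensive 5-second computation
    return {"total_sales": 12345}

# First call computes and stores the report; calls within the next hour
# are a single cache lookup, whichever backend is configured.
report = cache.get_or_set("monthly_report", generate_report, timeout=60 * 60)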

In summary, use the database cache only if you must – typically when you cannot add a dedicated cache service. It’s better than nothing, but not as good as an in-memory solution for high throughput.

Local-Memory Cache (LocMemCache)

About: The local-memory cache backend (LocMemCache) stores cache data in memory within the Django process itself. This is the default cache backend if you don’t specify one in settings. It requires no setup and is extremely fast since data is kept in Python memory (effectively a thread-safe dictionary in the running process). However, it’s process-local: each process (or machine) has its own separate cache. This has important implications for production.

Installation: None required – it’s built into Django. You just need to configure it in CACHES if you want to adjust its behavior (or it will be used by default if no CACHES setting is provided).

Django Configuration: Example:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "unique-snowflake"
    }
}

Here we explicitly configured the locmem cache. The LOCATION is an arbitrary identifier for the memory store – caches with the same LOCATION in the same process share data. If you have only one locmem cache, you can leave LOCATION blank or set any string; it mainly matters if you define multiple locmem caches and want them isolated (different names) or shared (same name) within a process.

How it works: The local-memory cache keeps data in the Python runtime memory. It is thread-safe, so it works in multi-threaded environments. LocMemCache uses an LRU eviction strategy when it reaches its entry limit: by default MAX_ENTRIES is 300 and CULL_FREQUENCY is 3, meaning a third of the entries are culled when the limit is hit; both can be overridden via OPTIONS (see the sketch below). Since it’s in-memory, it’s extremely fast – accessing the cache is just a Python function call and dictionary lookup.
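A sketch raising those defaults via OPTIONS:

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        "LOCATION": "unique-snowflake",
        "OPTIONS": {
            "MAX_ENTRIES": 1000,  # allow more entries before culling (default 300)
            "CULL_FREQUENCY": 3,  # cull 1/3 of entries when the limit is reached
        },
    }
}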

Pros:

  • Blisteringly fast (in-process): Retrieving from a locmem cache is as fast as reading a Python dict in memory, since that’s essentially what it is. There’s no network overhead, no serialization cost beyond pickling the object in memory.

  • Zero setup: It works out of the box. If you don’t configure any caches, Django uses a locmem cache by default for the 'default' alias (with a capacity of 300 entries by default). This makes it great for development or testing – you get caching behavior without installing anything.

  • Good for single-server or single-process apps: If your Django app runs on one server with one process (or a few worker threads in one process), a locmem cache can serve as a simple, effective cache without any external dependencies.

  • Isolated per process: In some cases, you might want each process to have its own cache (though this is rare for web caching). This isolation can be a pro for certain non-web caching uses or tests to avoid interference.

Cons:

  • Not shared across processes or servers: Each process maintains its own cache data. In a typical production setup (e.g., multiple gunicorn/uWSGI workers or multiple Django instances behind a load balancer), this means each worker has a separate cache. Effect: one user’s request might be served by worker A and warm that cache, while another user’s request hits worker B and misses the cache, because worker B has its own empty cache. This greatly reduces the benefit of caching. Essentially, the cache is “per-process” rather than global, which is not efficient for scaling. There is no built-in synchronization between these caches.

  • Not persistent: Like other in-memory caches, locmem data is lost when the process exits. Deploying new code (which restarts processes) will flush the cache.

  • Memory inefficiency: If you have multiple processes, each one stores the same data in memory (if the same keys are accessed by each). This duplicates memory usage. For example, if you cache a 1MB object and you have 10 worker processes, you could end up using ~10MB across all processes to store that object once in each. An external shared cache would store it once for all to use.

  • Limited size: The default max entries (if not configured) is 300. If you cache more objects than the max, older ones will be evicted. You can increase this via OPTIONS, but since it’s in-process, you have to be mindful of memory usage – it’s eating into your Django process memory.

  • Not suitable for multi-process production: For the above reasons, Django’s documentation explicitly notes that the local-memory cache “isn’t particularly memory-efficient, so it’s probably not a good choice for production environments. It’s nice for development.”

Performance & Use Cases: The locmem cache is ideal for development and testing, where you want caching behavior without external infrastructure, and where you typically run a single process. It can also be useful in small deployments where your app runs on one server with one process (though even there, using something like Redis locally could allow scale later).

If you do use locmem in production (for example, in a very small app), consider that if you scale up to multiple processes or servers, you should switch to a shared cache backend to get real benefits.

Another use case is if you need a quick temporary cache in a management command or one-off script within Django – LocMemCache can be used since it doesn’t require any setup.

Key point: For production websites that run with multiple processes or multiple machines, do not use LocMemCache as your main cache, because each process will cache separately and users will see inconsistent caching performance. Instead, use Memcached or Redis so all processes share a single cache store.

Comparison of Cache Backends

Each cache backend has its niche. The table below summarizes the key differences and best-use scenarios for Django’s cache backends:

Backend Setup & Dependencies Shared Across Instances Persistent? Relative Speed Best Use Cases
Memcached Install memcached server + Python client (pymemcache/pylibmc). Config in CACHES. Yes (central cache server accessible by all app servers) No (in-memory only, data lost on restart) ⭐ Ultra-fast (in-memory, network call) High-read or high-load sites needing quick ephemeral caching of pages/query results; easy to scale out across multiple servers ([Django’s cache framework
Redis Install Redis server + Python client (redis/hiredis). Config with Redis URL. Yes (central server; supports replication/clustering for HA) Optional (can configure RDB/AOF persistence in Redis) ⭐ Ultra-fast (in-memory, network call) High-load applications where cache persistence is beneficial or a single cache store is used for multiple purposes (sessions, tasks, etc.). Supports advanced data types and large datasets.
Database Use existing DB, run createcachetable to set up table. Yes (shared via common database) Yes (stored on disk in DB) ◼︎ Moderate (SQL query per cache access) Small to medium sites or internal tools with low traffic, where adding a cache server isn’t possible. Cache data that must survive restarts. Not ideal for high traffic due to DB load (A Comprehensive Guide to Django Caching — SitePoint).
Local-Memory No external setup (built-in, default). No (cache is per-process only) No (in-memory per process) ⭐⭐ Extremely fast (function call) Development, testing, or single-process setups. Not recommended for production with multiple workers (A Comprehensive Guide to Django Caching — SitePoint) (each has separate cache).

(⭐ = very high speed, ◼︎ = moderate speed relative to others)

Note: File-based caching (not detailed above) is another option (caching data as files on disk). It is persistent and shared if all processes use the same directory, but it’s slower than memory and can be cumbersome to maintain; it’s generally used for simple low-throughput scenarios or development. Django also has a DummyCache backend that performs no caching (useful for disabling caching without changing code, e.g., in tests).

Best Practices for Production Caching (Django 5.x+)

  • Use a Shared Cache in Production: For a typical Django deployment with multiple processes or servers, configure a shared cache backend like Memcached or Redis for the best performance gains. The local-memory cache is not effective in such environments.

  • Tune Cache Timeout and Invalidation: Set appropriate timeouts for your cached data. In Django, you can use a short timeout for data that changes frequently and a longer timeout for expensive-to-generate data that changes rarely. Ensure that when underlying data changes, you explicitly invalidate or update the cache if the timeout is long (e.g., by using cache versioning keys or calling cache.delete on relevant keys).

  • Key Management: Use CACHE_MIDDLEWARE_KEY_PREFIX (for the site-wide cache) or a per-view key_prefix to segregate cache entries by site or by version, especially if you deploy new versions of your site that might have incompatible cached data; this prevents serving old content after a deploy. Also, be mindful of key length and uniqueness – Django builds composite keys for site/view caching, but for low-level caching, choose descriptive keys and include identifiers (e.g., the user ID if the data is per user).

  • Monitor your Cache: In production, monitor cache hit ratios and performance. Django doesn’t provide these stats out of the box, but for Memcached you can use its stats, and for Redis you can use INFO to see hits/misses. A high hit rate (e.g., >90%) means your caching is effective. If the hit rate is low, consider increasing timeouts or memory, or check if keys are being evicted too often.

  • Memory and Eviction Policy: If using Redis, configure maxmemory and an eviction policy suitable for caching (e.g., allkeys-lru) on your Redis server, so it behaves like a cache by evicting least used entries when full. For Memcached, allocate enough memory to hold your working set; memcached will evict LRU by default when needed. For database cache, keep an eye on the table size and consider periodic cleanup if necessary.

  • Security Considerations: Ensure your cache backend is secured. Memcached should be bound to localhost or a private network, or firewalled – never expose it to the public internet (there have been amplification attacks via open memcached). Redis should be secured with a strong password (if accessible outside localhost) or bound to private interfaces, and use TLS if crossing datacenter boundaries (use rediss:// in LOCATION for SSL). The database cache will use whatever security your database has.

  • Use Modern Django Features: Take advantage of the improvements introduced in Django 4.0 and refined since, such as the built-in RedisCache backend (no third-party package needed) and the ability to specify multiple Redis servers for read replicas. These can improve performance and reliability in production.

  • Test in Staging: Caching can sometimes introduce subtle bugs (e.g., stale data being shown). Always test your caching setup in a staging environment. Use tools like Django Debug Toolbar which can show cache hits/misses in the panel, or add logging to confirm when a view is using cached data. Ensure that pages that should vary by user (e.g., user’s dashboard) are either not cached or correctly varied (via Vary headers or fragment caching with user-specific keys).

  • Graceful Degradation: Design your application to handle cache outages gracefully. If the cache backend is down (e.g., Redis not available), Django’s cache calls might throw exceptions or time out. You can catch exceptions around critical cache access or configure a short timeout in OPTIONS for the cache client if possible. It’s often acceptable for the site to run a bit slower (no caching) rather than crash if the cache is unavailable.
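As a sketch of that last point, a small helper that treats the cache as optional (the broad exception handling is deliberate for illustration; narrow it to your client’s errors in real code):

import logging

from django.core.cache import cache

logger = logging.getLogger(__name__)

def cached_or_compute(key, compute, timeout=300):
    """Return a cached value, falling back to recomputation if the cache is down."""
    try:
        value = cache.get(key)
    except Exception:  # e.g., the Redis/Memcached server is unreachable
        logger.warning("Cache unavailable; computing %s directly", key)
        return compute()
    if value is None:
        value = compute()
        try:
            cache.set(key, value, timeout)
        except Exception:
            logger.warning("Cache unavailable; could not store %s", key)
    return value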

By following these best practices and using the appropriate caching strategies and backend, you can significantly improve your Django application’s response times and scalability. Caching in modern Django (4.x and 5.x) is robust and flexible – from full-site caching via middleware to low-level APIs for fine-grained control, with support for multiple backends to suit your infrastructure. For more details, refer to the official Django documentation on the caching framework and its backend-specific settings.
