Introduction
Redis is one of the most widely used in-memory data structure stores. This tutorial covers everything from basic installation to advanced patterns for caching, rate limiting, leaderboards, and real-time messaging.
Installation and Basic Setup
Install on Ubuntu: sudo apt install redis-server. Enable and start the service: sudo systemctl enable redis-server, then sudo systemctl start redis-server. Test the connection with redis-cli ping (it returns PONG). In /etc/redis/redis.conf, set supervised systemd so systemd can manage the process, bind 127.0.0.1 (or an internal IP) to limit network exposure, requirepass to require authentication, and maxmemory plus maxmemory-policy to control memory use and eviction.
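A minimal excerpt of /etc/redis/redis.conf covering the directives above might look like this (the password and memory limit are illustrative values, not recommendations):

```
# /etc/redis/redis.conf (excerpt)
supervised systemd          # let systemd manage the process
bind 127.0.0.1              # listen only on loopback
requirepass your-strong-password
maxmemory 2gb               # cap memory use
maxmemory-policy allkeys-lru  # evict least-recently-used keys at the cap
```

Restart with sudo systemctl restart redis-server for changes to take effect.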
Redis Data Structures
Strings: SET key value, GET key, INCR counter, MSET (set multiple keys). Lists: LPUSH/RPUSH (add to head/tail), LPOP/RPOP (remove), LRANGE (range query). Sets: SADD (add), SREM (remove), SINTER (intersection), SUNION (union). Hashes: HSET user:1000 name "John", HGET, HGETALL (all fields). Sorted Sets: ZADD leaderboard 100 "player1", ZRANGE (ordered by score), ZRANK (get position). HyperLogLog, Bitmaps, and Geospatial indexes cover specialized use cases.
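A short redis-cli session illustrating a few of the commands above (replies shown assume a fresh, empty database):

```
127.0.0.1:6379> SET greeting "hello"
OK
127.0.0.1:6379> INCR counter
(integer) 1
127.0.0.1:6379> LPUSH tasks "email" "report"
(integer) 2
127.0.0.1:6379> LRANGE tasks 0 -1
1) "report"
2) "email"
127.0.0.1:6379> HSET user:1000 name "John"
(integer) 1
127.0.0.1:6379> HGETALL user:1000
1) "name"
2) "John"
127.0.0.1:6379> ZADD leaderboard 100 "player1"
(integer) 1
127.0.0.1:6379> ZRANGE leaderboard 0 -1 WITHSCORES
1) "player1"
2) "100"
```

Note how LPUSH pushes to the head, so "report" (pushed last) comes out first in LRANGE.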
Caching Patterns with Redis
Cache-aside (the most common pattern): the application checks the cache first; on a miss it queries the database and stores the result with a TTL (expiration). In read-through, the cache layer itself loads missing data from the database; in write-through, writes update the cache and the database together. Implement cache invalidation strategies: TTLs (EXPIRE key seconds), explicit deletes (DEL key), or versioned keys (key:v2). With maxmemory-policy allkeys-lru, Redis itself behaves as an LRU cache.
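The cache-aside read path can be sketched in Python. FakeRedis below is a tiny in-memory stand-in so the example runs without a server; it mimics the get/setex methods that the redis-py client also exposes, so get_user would work unchanged against a real redis.Redis(...) connection. The names (get_user, the user:<id> key scheme) are illustrative, not from any particular codebase:

```python
import json
import time

class FakeRedis:
    """In-memory stand-in for a Redis client (get/setex only),
    used so this sketch runs without a live server."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() > expires_at:      # TTL elapsed: treat as a miss
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl, value):
        self._store[key] = (value, time.time() + ttl)

def get_user(cache, db, user_id, ttl=300):
    """Cache-aside read: try the cache first; on a miss, query the
    'database' and populate the cache with a TTL."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    row = db[user_id]                        # stand-in for a real DB query
    cache.setex(key, ttl, json.dumps(row))   # store with expiration
    return row
```

Until the TTL expires, repeated reads are served from the cache even if the database row changes, which is exactly the staleness trade-off cache-aside makes.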
Persistence Options
RDB (Redis Database file): point-in-time snapshots, taken after N seconds if at least M changes occurred. Configure with the save directive: save 900 1 (every 15 minutes if at least 1 change), save 300 10 (every 5 minutes if at least 10 changes). AOF (Append-Only File): logs every write operation, with a configurable fsync policy (always, everysec, or no). Best practice: enable both RDB and AOF to balance durability and performance. Use redis-check-rdb and redis-check-aof to verify file integrity.
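The persistence settings described above map to a redis.conf fragment like this (values illustrative):

```
# /etc/redis/redis.conf -- persistence (excerpt)
save 900 1            # RDB snapshot after 900 s if >= 1 change
save 300 10           # ... after 300 s if >= 10 changes
save 60 10000         # ... after 60 s if >= 10000 changes
appendonly yes        # enable the AOF
appendfsync everysec  # fsync once per second (good durability/speed balance)
```

With appendfsync everysec you can lose at most about one second of writes on a crash; always is safest but slowest, and no leaves flushing to the OS.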
Pub/Sub Messaging
Publish/subscribe pattern: SUBSCRIBE channel (client listens), PUBLISH channel message (producer sends). Pattern subscriptions: PSUBSCRIBE news.* (matches news.sports, news.tech). Use cases: real-time notifications, chat, streaming data. Note: Pub/Sub messages are not persisted; a message published while no subscriber is listening is simply lost. Use Streams (Redis 5+) instead when you need persistence and consumer groups.
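A minimal two-terminal redis-cli session (channel name and message are examples):

```
# terminal 1 (consumer): subscribe, then block waiting for messages
127.0.0.1:6379> SUBSCRIBE news.tech
1) "subscribe"
2) "news.tech"
3) (integer) 1
# ...after the PUBLISH below, this terminal prints:
1) "message"
2) "news.tech"
3) "build finished"

# terminal 2 (producer)
127.0.0.1:6379> PUBLISH news.tech "build finished"
(integer) 1
```

The (integer) 1 reply to PUBLISH is the number of subscribers that received the message; a 0 here means the message went nowhere and is gone.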
Redis Streams for Event Sourcing
Streams provide a log-like data structure with consumer groups. XADD stream * field value (append an entry with an auto-generated ID), XREAD (read new entries), XGROUP CREATE (create a consumer group), XREADGROUP (read as a member of a group). Use cases: message queues, event sourcing, activity feeds. Streams support blocking reads, pending entry lists (XPENDING), and XCLAIM/XAUTOCLAIM for taking over entries from failed consumers.
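A sketch of the consumer-group workflow in redis-cli (stream, group, and consumer names are examples; entry IDs are timestamp-based, so yours will differ):

```
127.0.0.1:6379> XADD events * type "signup" user "1000"
"1700000000000-0"
127.0.0.1:6379> XGROUP CREATE events workers 0
OK
127.0.0.1:6379> XREADGROUP GROUP workers consumer-1 COUNT 10 STREAMS events >
1) 1) "events"
   2) 1) 1) "1700000000000-0"
         2) 1) "type"
            2) "signup"
            3) "user"
            4) "1000"
127.0.0.1:6379> XACK events workers 1700000000000-0
(integer) 1
```

Until XACK is called, the entry sits in consumer-1's pending entry list, which is what lets another consumer later claim it if consumer-1 dies.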
Rate Limiting Implementation
Fixed window counter: SET user:rate:minute 0 NX EX 60, then INCR on each request and reject once the counter exceeds the limit. Sliding window with sorted sets: ZADD user:requests timestamp on each request, ZREMRANGEBYSCORE to drop entries older than the window, ZCARD to count recent requests. Token bucket: a Lua script stores the token count, decrements it on each request, and refills at a fixed rate. The RedisCell module (CL.THROTTLE) provides advanced rate limiting via the GCRA algorithm.
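The token-bucket logic is easiest to see in plain Python before moving it into a Lua script. This sketch keeps the state in an object; in Redis the same state (token count, last refill time) would live in a hash, with the whole check-and-decrement running atomically inside EVAL. The injectable clock is only there to make the sketch deterministic:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter sketch (per-process, not distributed)."""

    def __init__(self, capacity, refill_rate, now=time.monotonic):
        self.capacity = capacity          # maximum tokens in the bucket
        self.refill_rate = refill_rate    # tokens added per second
        self.tokens = float(capacity)     # start full
        self.now = now                    # clock, injectable for testing
        self.last = now()                 # time of the last refill

    def allow(self):
        """Refill based on elapsed time, then try to spend one token."""
        t = self.now()
        elapsed = t - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True     # request allowed
        return False        # bucket empty: reject
```

Bursts up to capacity are allowed, after which requests are admitted at refill_rate per second, which is the behavior that distinguishes token buckets from fixed windows.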
Leaderboards and Real-Time Rankings
Sorted sets are ideal for leaderboards: ZADD scores 1000 "player1", ZINCRBY (increment a score), ZREVRANGE (top 10), ZREVRANK (a player's position). Updates are atomic and O(log N), so rankings stay correct in real time. Implement social feeds with time-based scores (Unix timestamps). Combine with Lua scripting when one logical update spans several commands (e.g., award a point and read back the new rank in a single atomic operation).
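A small leaderboard session in redis-cli (player names and scores are examples):

```
127.0.0.1:6379> ZADD scores 1000 "player1" 850 "player2" 1200 "player3"
(integer) 3
127.0.0.1:6379> ZINCRBY scores 50 "player2"
"900"
127.0.0.1:6379> ZREVRANGE scores 0 9 WITHSCORES
1) "player3"
2) "1200"
3) "player1"
4) "1000"
5) "player2"
6) "900"
127.0.0.1:6379> ZREVRANK scores "player2"
(integer) 2
```

ZREVRANK is zero-based, so (integer) 2 means player2 is in third place; no separate "recalculate rankings" step is ever needed.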
Distributed Locks with Redlock
Distributed locks prevent race conditions across multiple app instances. Acquire: SET lock_key unique_value NX PX 30000 (set only if the key does not exist, with a 30 s expiration). Release: a Lua script that deletes the key only if its value still matches (so one client cannot remove another's lock). The Redlock algorithm (from Redis's creator) extends this to multiple independent Redis master instances; its safety guarantees under clock skew and pauses have been debated, so reserve Redlock for cases that genuinely need a cross-instance lock and understand its failure modes.
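The safe-release script is the pattern recommended in the Redis documentation for SET; the key name, token, and filename below are illustrative:

```
-- release.lua: run via EVAL with the lock key and this client's unique token
if redis.call("GET", KEYS[1]) == ARGV[1] then
    return redis.call("DEL", KEYS[1])   -- we still own the lock: release it
else
    return 0                            -- expired or owned by someone else
end
```

Invoke it as redis-cli EVAL "$(cat release.lua)" 1 lock_key unique_value. The GET-compare-DEL must run as one script: done as separate commands, the lock could expire and be re-acquired by another client between the GET and the DEL.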
Redis Cluster and Sentinel
Sentinel provides high availability: monitoring, automatic failover, and a configuration provider (clients ask Sentinel for the current master address). Run at least three Sentinel instances so failover decisions have a quorum. Redis Cluster shards data across multiple nodes (up to 1000), supports online resharding, and is supported by many client libraries. Data is distributed across 16384 hash slots, with MOVED/ASK redirections steering clients to the right node. For most use cases, a single instance with replication and Sentinel suffices until you reach very large scale.
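The hash-slot mapping is simple enough to sketch. Redis Cluster assigns each key to CRC16(key) mod 16384, using the CRC16-CCITT (XMODEM) variant, and a non-empty {hash tag} restricts the hash to the tagged substring so related keys land on the same node. This Python sketch follows that spec:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 hash slots. A non-empty {hash tag}
    limits hashing to the tagged substring so related keys share a slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:              # tag must be non-empty
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

This is why multi-key operations in Cluster require hash tags: {user1000}.following and {user1000}.followers hash identically and can be used together in one transaction.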
Monitoring and Performance Tuning
Use redis-cli --stat (real-time stats), INFO (full server info), and SLOWLOG GET (recent slow commands). Memory optimization: group small objects into hashes; small hashes and sorted sets automatically use a compact listpack (formerly ziplist) encoding, tunable via the corresponding max-entries thresholds. Monitor with RedisInsight (GUI), the Prometheus Redis exporter, or a Datadog integration. Tune the Linux OS: vm.overcommit_memory=1, disable THP (transparent huge pages), and increase somaxconn. Set maxmemory to roughly 70-80% of RAM to leave room for fork-based background save operations.
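The OS tuning above translates into a fragment like this (file path and values are illustrative; the THP change needs to be reapplied at boot, e.g. from a systemd unit):

```
# /etc/sysctl.d/99-redis.conf -- apply with: sysctl --system
vm.overcommit_memory = 1   # let fork() succeed for background RDB/AOF rewrites
net.core.somaxconn = 1024  # raise the listen backlog to match tcp-backlog

# run as root at boot to disable transparent huge pages:
#   echo never > /sys/kernel/mm/transparent_hugepage/enabled
```

Redis logs warnings at startup when these settings are missing, which is a quick way to confirm they took effect.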
Conclusion
Redis is remarkably versatile: start with simple key-value caching, then adopt richer data structures as your needs evolve. Use TTLs to keep memory bounded, handle connection failures gracefully in application code, and always match persistence settings to the durability requirements of your use case.