
🚀💕 Benchmarks for Go In-Memory Caches

This repository provides a comprehensive benchmark suite comparing nine custom cache implementations from jeffotoni/gocache (versions v1 through v9), plus two well-known open-source libraries: patrickmn/go-cache and coocood/freecache. (A separate comparison run below also includes ristretto and bigcache.)

All tests are run on an Apple M3 Max machine (Darwin/arm64) to measure both 1-second and 3-second benchmark performance (-benchtime=1s and -benchtime=3s).

Why Use an In-Memory Cache?

  • Faster Access: In-memory caches reduce latency by storing frequently accessed data directly in memory, avoiding repeated database or external service calls.
  • Reduced Load: Caching lowers the workload on databases and APIs, improving overall system throughput.
  • Quick Expiration: In-memory caches are best for ephemeral data where occasional staleness is tolerable, and items can expire quickly.

Cache Implementations Tested

We benchmarked nine versions from jeffotoni/gocache, each using different concurrency strategies, expiration approaches, and internal data structures. Additionally, we tested patrickmn/go-cache and coocood/freecache.

Below is a snippet showing how the caches are instantiated in our benchmarking suite:

var cacheV1 = v1.New(10 * time.Minute)
var cacheV2 = v2.New[string, int](10*time.Minute, 0)
var cacheV3 = v3.New(10*time.Minute, 1*time.Minute)
var cacheV4 = v4.New(10 * time.Minute)
var cacheV5 = v5.New(10 * time.Minute)
var cacheV6 = v6.New(10 * time.Minute)
var cacheV7 = v7.New(10 * time.Minute)
var cacheV8 = v8.New(10*time.Minute, 8)
var cacheV9 = v9.New(10 * time.Minute)

// Third-party libraries
var cacheGoCache = gocache.New(10*time.Second, 1*time.Minute)
var fcacheSize = 100 * 1024 * 1024 // 100MB cache
var cacheFreeCache = freecache.NewCache(fcacheSize)

Each version of gocache implements different optimizations (locking, sharding, ring buffers, etc.) to analyze performance trade-offs.
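
For illustration, here is a minimal sketch of one of those strategies: sharding the key space across independently locked maps so concurrent writers rarely contend for the same mutex. This is not gocache's actual code; the shard count, FNV hashing, and type names below are assumptions chosen only to make the idea concrete.

package main

import (
	"fmt"
	"hash/fnv"
	"sync"
	"time"
)

// Illustrative sketch of a sharded cache; NOT the gocache implementation.
// The shard count and FNV hashing are arbitrary choices for this example.
const shardCount = 16

type entry struct {
	value     any
	expiresAt time.Time
}

type shard struct {
	mu    sync.RWMutex
	items map[string]entry
}

// ShardedCache routes each key to one shard, so a write locks only
// 1/shardCount of the key space instead of a single global mutex.
type ShardedCache struct {
	shards [shardCount]*shard
}

func NewSharded() *ShardedCache {
	c := &ShardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{items: make(map[string]entry)}
	}
	return c
}

func (c *ShardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()%shardCount]
}

func (c *ShardedCache) Set(key string, value any, ttl time.Duration) {
	s := c.shardFor(key)
	s.mu.Lock()
	s.items[key] = entry{value: value, expiresAt: time.Now().Add(ttl)}
	s.mu.Unlock()
}

func (c *ShardedCache) Get(key string) (any, bool) {
	s := c.shardFor(key)
	s.mu.RLock()
	e, ok := s.items[key]
	s.mu.RUnlock()
	if !ok || time.Now().After(e.expiresAt) {
		return nil, false
	}
	return e.value, true
}

func main() {
	c := NewSharded()
	c.Set("answer", 42, time.Minute)
	if v, ok := c.Get("answer"); ok {
		fmt.Println(v) // prints 42
	}
}

A single-mutex design is simpler but serializes every writer; sharding trades a little hashing overhead for lower lock contention, which is exactly the kind of difference these benchmarks are meant to surface.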

Example Benchmark Function

Below is an example benchmark test used for Version 1 (v1), focusing on both Set() and Get():

// BenchmarkGcacheSet1 measures Set() throughput for gocache v1.
func BenchmarkGcacheSet1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		key := strconv.Itoa(i)
		cacheV1.Set(key, i, time.Minute)
	}
}

// BenchmarkGcacheSetGet1 measures a Set() followed by an immediate Get() on the same key for gocache v1.
func BenchmarkGcacheSetGet1(b *testing.B) {
	for i := 0; i < b.N; i++ {
		key := strconv.Itoa(i)
		cacheV1.Set(key, i, 10*time.Minute)
		if _, ok := cacheV1.Get(key); !ok {
			log.Printf("not found: %s", key)
		}
	}
}
...

Note: Similar benchmark functions are repeated for v2 through v9, plus go-cache and freecache.
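
For reference, here is a hedged sketch of how the third-party benchmarks can be written against each library's public API: patrickmn/go-cache stores interface{} values keyed by string, while coocood/freecache works with []byte keys and values and takes its TTL in seconds. The names below reuse the instantiation snippet above but are illustrative, not the repository's code verbatim.

// Hedged sketches of the third-party benchmarks; names and setup are
// illustrative, not copied verbatim from this repository.
func BenchmarkGoCacheSetGet(b *testing.B) {
	for i := 0; i < b.N; i++ {
		key := strconv.Itoa(i)
		cacheGoCache.Set(key, i, 10*time.Minute)
		if _, ok := cacheGoCache.Get(key); !ok {
			log.Printf("not found: %s", key)
		}
	}
}

func BenchmarkFreeCacheSetGet(b *testing.B) {
	for i := 0; i < b.N; i++ {
		key := []byte(strconv.Itoa(i))
		// freecache operates on []byte keys/values and takes the TTL in seconds.
		if err := cacheFreeCache.Set(key, key, 600); err != nil {
			log.Printf("set failed: %v", err)
		}
		if _, err := cacheFreeCache.Get(key); err != nil {
			log.Printf("not found: %s", key)
		}
	}
}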

Cache Benchmark in Go

Architecture: Apple M3 Max (arm64)
Package: benchmark-gocache

$ go test -bench=. -benchtime=5s -benchmem

📊 Go Cache Benchmark Comparison: Freecache, Ristretto, Bigcache, and Go-cache

This benchmark compares several Go in-memory caching libraries using go test -bench on an Apple M3 Max (arm64) CPU.
Each implementation is tested for raw Set performance and Set/Get combined performance.
The values reflect nanoseconds per operation, allocations, and bytes per op under high concurrency (GOMAXPROCS=16).


| Implementation | Set Ops | Set ns/op | Set/Get Ops | Set/Get ns/op | Observations |
|---|---|---|---|---|---|
| gocache V1 | 28,414,197 | 338.6 | 22,687,808 | 294.9 | Baseline version; decent speed, moderate allocs |
| gocache V8 | 26,022,742 | 364.5 | 15,105,789 | 393.6 | High memory cost, TTL enabled |
| gocache V9 | 44,026,141 | 265.4 | 23,528,972 | 270.0 | 🏆 Fastest write throughput |
| gocache V10 | 19,749,439 | 393.2 | 16,217,510 | 495.9 | ❌ Higher allocation and latency |
| gocache V11 (Short) | 39,719,458 | 264.2 | 23,308,189 | 265.4 | ⚡ Short TTL; very fast overall |
| gocache V11 (Long) | 22,334,095 | 348.8 | 18,338,124 | 319.7 | Balanced long-TTL setup |
| go-cache | 25,669,981 | 392.5 | 20,485,022 | 306.0 | Stable, but slower than newer gocache versions |
| freecache | 41,543,706 | 380.3 | 14,433,577 | 425.2 | 🚀 Fast writes, significantly slower reads |
| ristretto | 30,257,541 | 352.3 | 10,055,701 | 547.8 | 🧠 TinyLFU eviction, high allocation per op |
| bigcache | 30,260,250 | 320.6 | 14,382,721 | 354.6 | 🔥 Very consistent, low GC overhead |

🧠 Notes:

  • gocache is a custom in-memory cache optimized for concurrency, modularity, and optional TTL.
  • freecache, go-cache, bigcache, and ristretto are popular open-source libraries with different focuses (size control, expiration, LFU, etc.).
  • Set/Get benchmarks include an immediate Get() call after each Set().
  • Allocation and GC behavior differ drastically across libraries, especially with TTL and internal eviction mechanisms.
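
The benchmark functions shown above iterate serially over b.N; the GOMAXPROCS=16 suffix in go test output reflects the runtime's processor setting on this machine rather than parallel loops. To measure lock contention directly, Go's testing package also supports parallel benchmarks. The sketch below is illustrative only and is not part of this suite; it reuses the v1 Set/Get API from the earlier examples.

// Illustrative only: a parallel variant (not part of this suite) that
// exercises the v1 cache from multiple goroutines via b.RunParallel.
func BenchmarkGcacheSetGetParallel(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			key := strconv.Itoa(i)
			cacheV1.Set(key, i, 10*time.Minute)
			if _, ok := cacheV1.Get(key); !ok {
				b.Errorf("not found: %s", key)
			}
			i++
		}
	})
}

Each goroutine keeps its own counter, so keys overlap across goroutines and the cache receives concurrent writes to a mix of new and existing entries.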

🚀 1-Second Benchmarks

$ go test -bench=. -benchtime=1s
| Implementation | Set Ops | Set ns/op | Set/Get Ops | Set/Get ns/op | Observations |
|---|---|---|---|---|---|
| gocache V1 | 6,459,714 | 259.4 | 5,062,861 | 245.0 | Fast reads, decent writes |
| gocache V2 | 6,597,314 | 239.5 | 4,175,704 | 280.4 | Good write speed, average read |
| gocache V3 | 7,094,665 | 259.7 | 4,746,934 | 281.6 | Balanced performance |
| gocache V4 | 4,644,594 | 324.4 | 3,571,759 | 330.9 | ❌ Slower (sync.Map) |
| gocache V5 | 6,311,216 | 252.6 | 4,714,106 | 278.7 | Solid all-around |
| gocache V6 | 7,532,767 | 262.6 | 4,865,896 | 256.2 | 🔥 Great concurrency |
| gocache V7 | 8,026,825 | 222.4 | 4,978,083 | 244.3 | 🏆 Best write (1s), fast reads |
| gocache V8 | 4,708,249 | 309.3 | 2,513,566 | 399.7 | ❌ Slower overall |
| gocache V9 | 9,295,434 | 215.9 | 5,096,511 | 272.7 | 🏆 Fastest write (lowest ns/op) |
| go-cache | 6,463,236 | 291.6 | 4,698,109 | 290.7 | Solid library, slower than V7/V9 |
| freecache | 5,803,242 | 351.1 | 2,183,834 | 469.7 | 🚀 Decent writes, poor reads |

🚀 3-Second Benchmarks

$ go test -bench=. -benchtime=3s
| Implementation | Set Ops | Set ns/op | Get Ops | Get ns/op | Observations |
|---|---|---|---|---|---|
| gocache V1 | 17,176,026 | 338.5 | 13,891,083 | 268.6 | Fast read, solid write |
| gocache V2 | 16,457,449 | 318.5 | 12,379,336 | 304.4 | Good write speed, average read |
| gocache V3 | 20,858,042 | 310.8 | 14,042,400 | 287.1 | Balanced, efficient |
| gocache V4 | 15,255,268 | 422.4 | 8,882,214 | 406.3 | ❌ Slow (sync.Map) |
| gocache V5 | 20,500,326 | 348.9 | 12,597,715 | 271.7 | Good balance |
| gocache V6 | 21,767,736 | 341.4 | 13,085,462 | 297.3 | 🔥 Strong concurrency |
| gocache V7 | 27,229,544 | 252.4 | 14,574,768 | 268.6 | 🏆 Best write (3s) |
| gocache V8 | 15,796,894 | 383.5 | 8,927,028 | 408.8 | ❌ Slower overall |
| gocache V9 | 24,809,947 | 252.1 | 13,225,228 | 275.7 | 🏆 Very fast write, good read |
| go-cache | 15,594,752 | 375.4 | 14,289,182 | 269.7 | 🚀 Excellent reads, slower writes |
| freecache | 13,303,050 | 402.3 | 8,903,779 | 421.4 | ❌ Decent write, slow read |

πŸ… Benchmark Icons Guide

These icons indicate key performance insights from our benchmarks:

  • πŸ† Top Performance β†’ Best result in a specific category (fastest read/write).
  • ❌ Underperformance β†’ Notably slower compared to other implementations.
  • πŸ”₯ Balanced & Scalable β†’ Strong concurrency, optimized trade-offs.
  • πŸš€ High Speed β†’ Impressive performance, but not always the absolute fastest.

πŸ’‘ Use these indicators to quickly identify the strengths and weaknesses of each cache version!

🚀 Key Highlights

✅ Best Write Performance:

  • πŸ† V7 and V9 consistently deliver the fastest writes (lowest ns/op in Set benchmarks).
  • V9 achieves top speeds while maintaining strong read performance.

✅ Best Read Performance:

  • V1 and go-cache often provide the lowest ns/op in Get benchmarks, making them excellent choices for read-heavy workloads.
  • go-cache remains a strong competitor in retrieval speed.

⚠️ Slower Performance Observed:

  • ❌ V4 (sync.Map) and V8 struggle in both read and write speeds, making them less suitable for high-performance applications.
  • freecache performs well in writes but has significantly slower read speeds.

🔥 Overall, V7 and V9 stand out as the best-balanced options for both write speed and retrieval performance!

βš–οΈ Overall Trade-Offs

Every cache implementation has its own strengths and weaknesses:

✅ Optimized for Reads → Some caches prioritize fast retrieval speeds.
🚀 High Write Throughput → Others are designed to handle massive insertions efficiently.
🔥 Balanced Performance → V7 and V9 strike a great balance between read and write speeds.

💡 Choosing the right cache depends on your workload needs!


🤝 Contributing

Want to enhance this benchmark suite? Follow these simple steps:

1️⃣ Fork this repo and add your own cache tests or custom versions.
2️⃣ Submit a Pull Request (PR) with your improvements or questions.
3️⃣ Join the discussion by opening an issue to suggest new features or optimizations.

Your contributions are always welcome! 🚀✨


🔗 Related Projects

This benchmark compares the following caching solutions:

✅ jeffotoni/gocache – Custom high-performance cache versions (V1–V9).
✅ patrickmn/go-cache – Lightweight in-memory cache with expiration.
✅ coocood/freecache – High-speed cache optimized for low GC overhead.

📌 If you know another cache worth benchmarking, feel free to suggest it!


📜 License

This project is open-source under the MIT License.

💡 Feel free to fork, modify, and experiment with these benchmarks in your own applications or libraries.
🔬 The goal is to help developers choose the best in-memory cache for their needs.

🚀 Happy benchmarking!
