Caching is the single most impactful performance lever in SAP Commerce. A well-configured caching strategy can reduce page load times by 10x, cut database load by 90%, and let a single cluster handle traffic spikes that would otherwise require emergency scaling. A poorly configured one can serve stale data, cause cache stampedes, and create debugging nightmares.
This article covers every caching layer in the SAP Commerce stack — from the type system cache and region cache through to HTTP caching and CDN configuration — with practical configuration examples and troubleshooting guidance.
SAP Commerce has multiple caching layers, each serving a different purpose:
┌────────────────────────────────────────────────────┐
│ CDN (Cloudflare, Akamai, CloudFront) │ TTL: minutes–hours
│ Static assets, anonymous page responses │
├────────────────────────────────────────────────────┤
│ HTTP Reverse Proxy (Varnish / CDN edge) │ TTL: seconds–minutes
│ OCC API responses, page fragments │
├────────────────────────────────────────────────────┤
│ Application Server — Spring Cache │ TTL: configurable
│ Method-level caching for services │
├────────────────────────────────────────────────────┤
│ SAP Commerce Region Cache │ TTL: per-region
│ Entity cache, query cache, search cache │
├────────────────────────────────────────────────────┤
│ Type System Cache │ Permanent (until restart)
│ Type definitions, attribute metadata │
├────────────────────────────────────────────────────┤
│ Database Query Cache (MySQL/HANA) │ Managed by DB engine
│ SQL result caching │
└────────────────────────────────────────────────────┘
The type system cache holds the metadata about all item types, attributes, relations, and enumerations. It’s loaded at startup and stays in memory.
When SAP Commerce starts, it reads the entire type system definition from the database and builds an in-memory representation. Every modelService.get(), every FlexibleSearch query, every ImpEx import uses this cache to understand the data model.
# Type system cache is not configurable per se — it's always on.
# But you can control the startup behavior:
# Force type system rebuild on startup (use after items.xml changes)
typesystem.cache.validate=true
# Log type system cache statistics
log4j2.logger.typesystem.name = de.hybris.platform.persistence.type
log4j2.logger.typesystem.level = DEBUG
The type system cache only causes issues during development when you change items.xml and forget to run ant updatesystem. Symptoms include:
- `UnknownIdentifierException` for types you just added

Fix: Always run `ant updatesystem` after changing items.xml, then restart.
The region cache is the primary application-level cache in SAP Commerce. It caches entities, queries, and other frequently accessed data in memory.
SAP Commerce defines several cache regions:
| Region | Purpose | Default Size | Typical Hit Rate |
|---|---|---|---|
| `entityCacheRegion` | Item model instances | 50,000 | 80-95% |
| `queryCacheRegion` | FlexibleSearch results | 10,000 | 60-85% |
| `typesystemCacheRegion` | Type definitions | 5,000 | ~100% |
| `entityEnumCacheRegion` | Enum value lookups | 5,000 | ~100% |
| `catalogVersionsCacheRegion` | Catalog version data | 1,000 | ~100% |
# local.properties or project.properties
# Entity cache — the most important cache
regioncache.entityCacheRegion.size=100000
regioncache.entityCacheRegion.evictionpolicy=LRU
regioncache.entityCacheRegion.statsEnabled=true
# Query cache — caches FlexibleSearch results
regioncache.queryCacheRegion.size=20000
regioncache.queryCacheRegion.evictionpolicy=LRU
regioncache.queryCacheRegion.statsEnabled=true
# Enable cache statistics (monitor via HAC)
cache.main.regioncache.stats=true
Cache invalidation happens automatically when items are modified through the SAP Commerce API:
// This automatically invalidates the cache for this product
modelService.save(productModel);
// This invalidates the cache for the removed item
modelService.remove(productModel);
// ImpEx imports also trigger cache invalidation
In a multi-node cluster, cache invalidation must propagate across all nodes. SAP Commerce uses a cache invalidation topic:
# Cluster cache sync configuration
cluster.node.stale.timeout=60000
cluster.broadcast.method.jgroups=jgroups-tcp.xml
# CCv2 uses its own cluster discovery — don't override
# But ensure broadcast is working:
cluster.broadcast.methods=jgroups
When node A saves a product, it broadcasts an invalidation message. Nodes B, C, and D receive this message and evict the product from their local caches.
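The broadcast-and-evict pattern can be modeled in plain Java. This is an illustrative sketch of the mechanism only, with our own class names; the platform actually propagates invalidations over its JGroups transport, not code like this:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative model of cluster cache invalidation: each node keeps a
// local cache and subscribes to a shared topic; a save on one node
// broadcasts the item's PK and every subscriber evicts it locally.
public class InvalidationDemo {
    static class Topic {
        private final List<Consumer<Long>> subscribers = new ArrayList<>();
        void subscribe(Consumer<Long> onInvalidate) { subscribers.add(onInvalidate); }
        void broadcast(long pk) { subscribers.forEach(s -> s.accept(pk)); }
    }

    static class Node {
        final Map<Long, String> localCache = new HashMap<>();
        Node(Topic topic) { topic.subscribe(localCache::remove); }
    }

    public static void main(String[] args) {
        Topic topic = new Topic();
        Node a = new Node(topic), b = new Node(topic);
        a.localCache.put(1L, "product");
        b.localCache.put(1L, "product");
        topic.broadcast(1L); // node A saved the product
        System.out.println(a.localCache.isEmpty() && b.localCache.isEmpty()); // true
    }
}
```

The essential property is that eviction is keyed, not wholesale: only the changed item leaves the remote caches, so the rest of each node's working set stays warm.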
Access cache statistics via HAC → Monitoring → Cache:
Cache Region Statistics:
┌───────────────────────┬───────────┬──────────┬───────────┬──────────┐
│ Region │ Size │ Hit Rate │ Hits │ Misses │
├───────────────────────┼───────────┼──────────┼───────────┼──────────┤
│ entityCacheRegion │ 87,432 │ 94.2% │ 2,341,897 │ 142,103 │
│ queryCacheRegion │ 15,678 │ 78.5% │ 892,456 │ 244,332 │
│ typesystemCacheRegion │ 3,245 │ 99.9% │ 567,890 │ 45 │
│ enumCacheRegion │ 1,234 │ 99.8% │ 234,567 │ 123 │
└───────────────────────┴───────────┴──────────┴───────────┴──────────┘
Target hit rates: Entity cache >90%, query cache >70%. If entity cache hit rate drops below 80%, increase the cache size or investigate which items are being loaded repeatedly.
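The hit rate itself is simple arithmetic, hits / (hits + misses). A small helper (our own, not a platform API) for turning raw HAC counters into a percentage:

```java
public class CacheHitRate {
    // Hit rate as a percentage; returns 0 for an empty counter pair.
    public static double percent(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : 100.0 * hits / total;
    }

    public static void main(String[] args) {
        // entityCacheRegion counters from the HAC table above
        System.out.printf("%.1f%%%n", percent(2_341_897, 142_103));
    }
}
```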
FlexibleSearch queries are cached at two levels: the query plan cache and the result cache.
Compiled query plans are cached to avoid re-parsing SQL on every execution:
# Query plan cache (parsed SQL → compiled query)
flexiblesearch.cache.size=10000
flexiblesearch.cache.enabled=true
Query results can be cached based on the query string and parameters:
// This query's results are cached
FlexibleSearchQuery query = new FlexibleSearchQuery(
"SELECT {pk} FROM {Product} WHERE {code} = ?code");
query.addQueryParameter("code", "CAM-001");
query.setCacheable(true); // Enable result caching for this query
SearchResult<ProductModel> result = flexibleSearchService.search(query);
The query cache stores complete result sets. For queries that return large result sets or queries that are rarely repeated with the same parameters, caching wastes memory:
// BAD: Don't cache queries with unique parameters
final FlexibleSearchQuery orderQuery = new FlexibleSearchQuery(
    "SELECT {pk} FROM {Order} WHERE {code} = ?code");
orderQuery.addQueryParameter("code", uniqueOrderCode);
orderQuery.setCacheable(true); // Wastes cache space: each order code is unique

// GOOD: Cache queries with reusable parameters
final FlexibleSearchQuery productQuery = new FlexibleSearchQuery(
    "SELECT {pk} FROM {Product} WHERE {approvalStatus} = ?status AND {catalogVersion} = ?cv");
productQuery.addQueryParameter("status", ApprovalStatus.APPROVED);
productQuery.addQueryParameter("cv", onlineCatalogVersion);
productQuery.setCacheable(true); // Reused constantly with the same few parameter values
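Whether setCacheable(true) pays off comes down to how often the same parameter combination recurs before eviction. A back-of-envelope heuristic (our own rule of thumb, not a platform feature):

```java
public class QueryCacheHeuristic {
    // A cached entry earns its memory only if the same (query, params)
    // combination is executed repeatedly before it is evicted.
    public static boolean worthCaching(long distinctParamCombinations,
                                       long executionsPerHour) {
        if (distinctParamCombinations <= 0) {
            return false;
        }
        double repeats = (double) executionsPerHour / distinctParamCombinations;
        return repeats >= 2.0; // each entry re-used at least once on average
    }
}
```

By this measure the order-by-code lookup (one execution per unique code) never qualifies, while the approved-products query (a handful of parameter combinations, thousands of executions) clearly does.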
If you use Solr for product search, its caches significantly impact search performance.
<!-- solrconfig.xml -->
<config>
<!-- Filter cache: caches filter queries (category, facet, stock status) -->
<filterCache class="solr.CaffeineCache"
size="4096"
initialSize="1024"
autowarmCount="512"/>
<!-- Query result cache: caches ordered result sets -->
<queryResultCache class="solr.CaffeineCache"
size="2048"
initialSize="512"
autowarmCount="256"/>
<!-- Document cache: caches stored fields for documents -->
<documentCache class="solr.CaffeineCache"
size="8192"
initialSize="2048"/>
</config>
After indexing, Solr needs to warm its caches for the new searcher:
<listener event="newSearcher" class="solr.QuerySenderListener">
<arr name="queries">
<!-- Pre-warm with common category queries -->
<lst>
<str name="q">*:*</str>
<str name="fq">catalogVersion:Online</str>
<str name="sort">score desc</str>
<str name="rows">10</str>
</lst>
</arr>
</listener>
# Check Solr cache statistics via admin API
curl "http://localhost:8983/solr/electronics_Product/admin/mbeans?cat=CACHE&stats=true&wt=json" | jq '.["solr-mbeans"]'
Key metrics to watch: `hitratio` (should stay above ~0.85 for the filter cache), `evictions` (a steadily climbing count means the cache is undersized), and `warmupTime` (long warmups delay new searchers after every commit).
For the headless storefront (Spartacus), cache OCC API responses at the HTTP layer:
@GetMapping("/products/{productCode}")
public ResponseEntity<ProductWsDTO> getProduct(
@PathVariable String productCode,
@RequestParam(defaultValue = DEFAULT_FIELD_SET) String fields) {
ProductData data = productFacade.getProductForCode(productCode);
ProductWsDTO dto = dataMapper.map(data, ProductWsDTO.class, fields);
return ResponseEntity.ok()
.cacheControl(CacheControl
.maxAge(5, TimeUnit.MINUTES)
.staleWhileRevalidate(30, TimeUnit.SECONDS))
.eTag(generateETag(dto))
.body(dto);
}
| Resource | Cache-Control | Rationale |
|---|---|---|
| Product detail | `max-age=300, stale-while-revalidate=30` | Changes infrequently |
| Product list/search | `max-age=60` | Moderate change frequency |
| Cart | `no-store` | User-specific, never cache |
| Checkout | `no-store` | User-specific, never cache |
| Static assets (JS, CSS) | `max-age=31536000, immutable` | Content-hashed filenames |
| Product images | `max-age=86400` | Rarely change |
| CMS content | `max-age=600, stale-while-revalidate=60` | Updated by business users |
ETags enable efficient cache revalidation:
@GetMapping("/categories/{categoryCode}/products")
public ResponseEntity<ProductSearchPageWsDTO> searchProducts(
@PathVariable String categoryCode,
HttpServletRequest request) {
// Generate the ETag from a content hash. Note: this requires a
// content-based hashCode(); default identity hashCodes differ per JVM
// and would defeat revalidation across cluster nodes.
ProductSearchPageData results = searchFacade.categorySearch(categoryCode);
String etag = DigestUtils.md5Hex(String.valueOf(results.hashCode()));
// Check If-None-Match header
if (etag.equals(request.getHeader("If-None-Match"))) {
return ResponseEntity.status(HttpStatus.NOT_MODIFIED).build();
}
ProductSearchPageWsDTO dto = dataMapper.map(results, ProductSearchPageWsDTO.class);
return ResponseEntity.ok()
.eTag(etag)
.cacheControl(CacheControl.maxAge(1, TimeUnit.MINUTES))
.body(dto);
}
For production deployments, a CDN layer caches responses close to users geographically.
CDN Configuration (e.g., Cloudflare, Akamai):
┌─────────────────────────────┬────────────┬────────────────────┐
│ URL Pattern │ CDN TTL │ Strategy │
├─────────────────────────────┼────────────┼────────────────────┤
│ /occ/v2/*/products/* │ 5 min │ Cache, honor origin│
│ /occ/v2/*/categories │ 10 min │ Cache, honor origin│
│ /occ/v2/*/cms/pages/* │ 10 min │ Cache, honor origin│
│ /occ/v2/*/users/* │ 0 (bypass) │ Never cache │
│ /occ/v2/*/cart* │ 0 (bypass) │ Never cache │
│ /occ/v2/*/orders* │ 0 (bypass) │ Never cache │
│ /medias/* │ 24 hours │ Cache aggressively │
│ /*.js, /*.css │ 1 year │ Immutable │
│ /*.html (SSR pages) │ 1 min │ Stale-while-reval │
└─────────────────────────────┴────────────┴────────────────────┘
The CDN cache key determines what makes two requests “the same.” Include:
- Query parameters that affect the response: `fields`, `lang`, `curr`. Exclude tracking parameters.
- The `Accept-Language` header for multi-language sites

# Cloudflare page rule example
Match: /occ/v2/*/products/*
Cache Level: Cache Everything
Edge Cache TTL: 300
Browser Cache TTL: 60
Cache Key: URL + query string (filtered) + Accept-Language header
Bypass: If Cookie contains "access_token"
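The same normalization can be expressed in code, e.g. in a reverse proxy layer or an integration test that verifies what the CDN should treat as one cache key. A sketch with our own class and method names:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class CdnCacheKey {
    // Only parameters that change the response body belong in the key;
    // everything else (tracking parameters etc.) is dropped.
    private static final Set<String> ALLOWED = Set.of("fields", "lang", "curr");

    public static String build(String path, Map<String, String> params, String acceptLanguage) {
        String query = params.entrySet().stream()
                .filter(e -> ALLOWED.contains(e.getKey()))
                .sorted(Map.Entry.comparingByKey()) // stable order: ?a=1&b=2 == ?b=2&a=1
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return path + "?" + query + "|" + acceptLanguage;
    }
}
```

Sorting the surviving parameters matters: without it, two URLs that differ only in parameter order would occupy two cache slots and halve the hit rate for that resource.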
When product data changes, purge the CDN cache:
@Component
public class CDNCachePurger implements AfterSaveListener {
@Override
public void afterSave(Collection<AfterSaveEvent> events) {
for (AfterSaveEvent event : events) {
if (isProductRelated(event)) {
String productCode = getProductCode(event);
purgeProductFromCDN(productCode);
}
}
}
private void purgeProductFromCDN(String productCode) {
// Purge specific product URL patterns from CDN
List<String> urlsToPurge = List.of(
"/occ/v2/*/products/" + productCode + "*",
"/medias/*" + productCode + "*"
);
cdnClient.purge(urlsToPurge);
}
}
// WRONG: This caches per-user data with a shared key
@Cacheable("priceCache")
public PriceData getPrice(String productCode) {
// Returns user-specific price (based on contract, user group, etc.)
return priceFacade.getPrice(productCode);
}
// RIGHT: Include user-identifying information in cache key
@Cacheable(value = "priceCache", key = "#productCode + '-' + #userGroup")
public PriceData getPrice(String productCode, String userGroup) {
return priceFacade.getPrice(productCode);
}
# WRONG: No size limit
regioncache.entityCacheRegion.size=0
# RIGHT: Set appropriate limits based on available memory
# Rule of thumb: entity cache size ≈ unique items accessed in a typical hour
regioncache.entityCacheRegion.size=100000
// WRONG: Caching stock levels for an hour
return ResponseEntity.ok()
.cacheControl(CacheControl.maxAge(1, TimeUnit.HOURS))
.body(stockData);
// RIGHT: Short TTL or no cache for volatile data
return ResponseEntity.ok()
.cacheControl(CacheControl.maxAge(30, TimeUnit.SECONDS))
.body(stockData);
When a cached item expires and many requests arrive simultaneously, they all miss the cache and hit the database at once.
// Solution: stale-while-revalidate
return ResponseEntity.ok()
.cacheControl(CacheControl
.maxAge(5, TimeUnit.MINUTES)
.staleWhileRevalidate(60, TimeUnit.SECONDS)) // Serve stale while refreshing
.body(dto);
// Solution: Cache warming
@Scheduled(fixedRate = 240000) // Every 4 minutes (before 5-min TTL expires)
public void warmProductCache() {
List<String> topProductCodes = analyticsService.getTopProductCodes(100);
for (String code : topProductCodes) {
productFacade.getProductForCode(code); // Refreshes cache
}
}
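A complementary in-application mitigation is request coalescing: when several threads miss on the same key at once, only one executes the expensive load and the rest wait for its result. A generic sketch (our own utility, not a platform API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Request coalescing: concurrent cache misses for the same key share a
// single in-flight load instead of stampeding the database.
public class CoalescingLoader<K, V> {
    private final ConcurrentMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public CoalescingLoader(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        CompletableFuture<V> future = inFlight.computeIfAbsent(
                key, k -> CompletableFuture.supplyAsync(() -> loader.apply(k)));
        try {
            return future.join(); // late arrivals block on the same future
        } finally {
            inFlight.remove(key, future); // next miss triggers a fresh load
        }
    }
}
```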
Cache Health Dashboard:
┌──────────────────────────────────────────────────┐
│ Entity Cache │
│ Hit Rate: 94.2% ████████████████████░░ (Target: >90%)
│ Size: 87,432 / 100,000 (87% full) │
│ Evictions/min: 45 │
├──────────────────────────────────────────────────┤
│ Query Cache │
│ Hit Rate: 72.1% ██████████████░░░░░░░ (Target: >70%)
│ Size: 15,678 / 20,000 (78% full) │
│ Evictions/min: 120 │
├──────────────────────────────────────────────────┤
│ CDN Cache │
│ Hit Rate: 88.5% █████████████████░░░░ (Target: >85%)
│ Bandwidth saved: 4.2 TB/day │
│ Origin requests: 1.2M/day (down from 10.8M) │
├──────────────────────────────────────────────────┤
│ Solr Filter Cache │
│ Hit Rate: 91.3% ██████████████████░░░ (Target: >85%)
│ Evictions: 234 (since last commit) │
└──────────────────────────────────────────────────┘
Effective caching in SAP Commerce requires understanding and tuning every layer, from the type system and region caches through FlexibleSearch and Solr to HTTP headers and the CDN.
The goal isn’t maximum caching — it’s the right caching. Cache the data that’s expensive to compute and frequently requested, with TTLs that balance freshness against performance. And always have a way to invalidate when the source data changes.