Lesson 6 of 14

WiredTiger in plain English: pages, cache, compression - 004

MongoDB still slow even with 64 GB of RAM? That is what happens when you don't understand WiredTiger cache eviction!

Core Explanation: WiredTiger divides data into pages. In memory, a page is kept in uncompressed form; on disk it is stored compressed (Snappy/Zstd). Dirty pages (pages with modified data) are flushed to disk when a checkpoint runs. If your working set (the frequently accessed data) is larger than RAM, read IOPS spike and the system grinds to a halt.
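The dirty-page/checkpoint cycle above can be sketched as a toy model. This is an illustration, not WiredTiger's actual implementation: the `Page` and `Cache` classes are made up for this lesson, and "disk" is just an array.

```javascript
// Toy model: pages carry a dirty flag; a checkpoint flushes every
// dirty page to "disk" and clears the flags. Purely illustrative.
class Page {
  constructor(id) {
    this.id = id;
    this.dirty = false; // set when the page is modified in cache
  }
}

class Cache {
  constructor() {
    this.pages = new Map();
    this.flushedToDisk = []; // stands in for disk writes
  }
  write(id) {
    if (!this.pages.has(id)) this.pages.set(id, new Page(id));
    this.pages.get(id).dirty = true; // modified data => dirty page
  }
  checkpoint() {
    for (const p of this.pages.values()) {
      if (p.dirty) {
        this.flushedToDisk.push(p.id); // flush the dirty page
        p.dirty = false;               // page is clean again
      }
    }
  }
}

const cache = new Cache();
cache.write(1);
cache.write(2);
cache.checkpoint();
console.log(cache.flushedToDisk); // both dirty pages were flushed
```

Note that a second checkpoint right after the first flushes nothing, because no page is dirty anymore; that is why checkpoint cost tracks the write rate, not the data size.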

Wrong Practice: Assuming that adding more RAM will compensate for poor indexes.

# A consistently high 'pages read into cache' figure in monitoring
# means your working set doesn't fit in cache, or your indexes are bad.
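In mongosh, that counter lives under `db.serverStatus().wiredTiger.cache`. A minimal sketch of reading it, using a mocked `serverStatus`-shaped document (the numbers are invented) so it runs without a live server; on a real deployment you would pass `db.serverStatus()` instead:

```javascript
// Pull the cache-pressure counters out of a serverStatus-shaped document.
function cachePressure(serverStatus) {
  const c = serverStatus.wiredTiger.cache;
  return {
    readIntoCache: c["pages read into cache"],     // high => cache misses
    writtenFromCache: c["pages written from cache"],
  };
}

// Mock document standing in for db.serverStatus(); values are made up.
const mockStatus = {
  wiredTiger: {
    cache: {
      "pages read into cache": 1200000,
      "pages written from cache": 300000,
    },
  },
};

console.log(cachePressure(mockStatus));
```

Watch the rate of change of these counters, not the absolute values: a steadily climbing `pages read into cache` while traffic is flat is the eviction-pressure signal this lesson is about.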

Best Practice: Make sure your total index size fits in RAM. Keep the working set < RAM.

// Check total index size
db.stats().indexSize
// Compare it with the WiredTiger cache
// (default: the larger of 50% of (RAM - 1 GB) or 256 MB)

Closing Insight: "Before adding more RAM, reduce your indexes. Index size is the real performance bottleneck."