Posts

Snowflake Caching Performance Explained — A Clear, Practical Benchmark Story

Originally published on LinkedIn: https://www.linkedin.com/pulse/snowflake-caching-performance-explained-clear-story-mohapatra-p0ric/?trackingId=XOnrAVSWV7bM%2FOvYlOboOA%3D%3D Migrated on: 2026-04-05

The query result cache is essential for repeated-query performance. By reusing the results of recently run queries, it drastically reduces both time and resource consumption.

🛠️ How It Works: If the exact same query is repeated within 24 hours and the underlying data has not changed, Snowflake returns the cached result. This optimization can save significant compute costs in repetitive reporting environments.

👩‍💻 Real-World Example: A marketing analyst rerunning customer engagement reports throughout the day would see much faster query response times thanks to the result cache, streamlining the reporting process whenever the underlying data has not changed.

Managing Warehouse Cache for Efficiency
The warehouse cache stores data...
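A quick way to see the result cache in action is to run the same query twice and compare timings. A minimal sketch, with a hypothetical table and the real `USE_CACHED_RESULT` session parameter for benchmarking cold runs:

```sql
-- First run scans the table; an identical second run within 24 hours
-- is served from the query result cache (if the data is unchanged).
SELECT customer_id, SUM(clicks) AS total_clicks
FROM engagement_events          -- hypothetical table name
GROUP BY customer_id;

-- Disable the result cache for this session when you want to
-- measure uncached (cold) performance.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
```

Note that the cache only hits on byte-identical query text against the same role and unchanged underlying data, so even a changed alias or added comment forces a fresh scan.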

Snowflake Backup & Data Recovery – Key Concepts

Originally published on LinkedIn: https://www.linkedin.com/pulse/nowflake-backup-data-recovery-key-concepts-arabinda-mohapatra-yvwfc/ Migrated on: 2026-04-05

What is Time Travel? Time Travel is like a time machine for your data. It lets you go back to a specific point in the past and see or recover data as it was at that time. This is extremely useful when you need to undo accidental deletions or changes.

🔑 Key Insights:
Retention Period: You can set how long Snowflake keeps past versions of your data. The default is 1 day, but on Enterprise Edition or above you can extend this to up to 90 days.
Easy Recovery: With Time Travel, you can query, restore, or clone data as it was at any point during the retention period.
Rollback Mistakes: You can fix user errors by simply rolling back to a previous version of your table or database.

🚫 1. External Tables — Why They Are Not Cloned
External tables only store m...
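The query/restore/clone paths above each map to a line or two of Snowflake SQL. The table name is illustrative and `<query_id>` is a placeholder for a real statement ID:

```sql
-- Query the table as it was one hour ago (offset in seconds)
SELECT * FROM orders AT(OFFSET => -3600);

-- Recover an accidentally dropped table within the retention period
UNDROP TABLE orders;

-- Clone the table as it was just before a bad statement ran
CREATE TABLE orders_restored CLONE orders
  BEFORE(STATEMENT => '<query_id>');

-- Extend retention (up to 90 days on Enterprise Edition or above)
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 90;
```

The `AT` / `BEFORE` clauses also accept a `TIMESTAMP =>` argument when you know the exact moment to roll back to.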

7AM DataEngineering Sunrise - Digest Week 14, 2026

7AM DataEngineering NewsDigest - 7AM DataEngineering Sunrise - Digest Week 14, 2026 Compiled: 2026-04-05 06:58:41

1. OCSF explained: The shared data language security teams have been missing
Source: VentureBeat — Original
The security industry has spent the last year talking about models, copilots, and agents, but a quieter shift is happening one layer below all of that: vendors are lining up around a shared way to describe security data. The Open Cybersecurity Schema Framework (OCSF) is emerging as one of the strongest candidates for that job. It gives vendors, enterprises, and practitioners a common way to represent security events, findings, objects, and context. Read full article

2. Nvidia launches enterprise AI agent platform with Adobe, Salesforce, SAP among 17 adopters at GTC 2026
Source: VentureBeat — Original
Jensen Huang walked onto the GTC stage Monday wearing his trademark leather jacket and carrying, as it turned out, the blueprints for a new kind of industry dom...

7AM DataEngineering Sunrise - Thursday Digest Week 9, 2026

7AM DataEngineering NewsDigest - 7AM DataEngineering Sunrise - Thursday Digest Week 9, 2026 Compiled: 2026-02-26 09:49:51

1. The era of human web search is over: Nimble launches Agentic Search Platform for enterprises boasting 99% accuracy
Source: VentureBeat — Original
Web search has already been disrupted by AI — just take a look at how readily Google presents users with AI Overviews (summaries of search results) at the top of their results pages, how Bing integrated OpenAI's GPT models early on, and how Perplexity continues to build on its own AI-driven web search platform and browsers. Nimble announced the launch of its Agentic Search Platform, a system designed to transform the public web into trusted, decision-grade data for AI systems and business workflows. Read full article

2. Anthropic says Claude Code transformed programming. Now Claude Cowork is coming for the rest of the enterprise.
Source: VentureBeat — Original
Anthropic opened its virtual "Briefing:...

How Zerodha Scaled to Millions with Just PostgreSQL

🐘 Deep Dive · Database Engineering How Zerodha Scaled to Millions with Just PostgreSQL A no-nonsense technical breakdown of how India's largest stockbroker stretched one open-source database to its absolute limits — and won. 📅 2025 ✍️ play-with-data.blogspot.com ⏱ ~18 min read 🎯 Data Engineers · Backend Engineers · DBAs

§ 00 · Opening
The Counterintuitive Choice
When you hear that a platform processes tens of millions of trades per day, serves ~15 million registered users, manages hundreds of billions of rows spanning close to 20 terabytes of financial data, and does all of that with a 30-person engineering team — the first database that comes to mind is probably not PostgreSQL. You'd expect Cassandra, maybe TiDB, maybe a hybrid of ClickHouse and Kafka with a sprinkling of DynamoDB. You'd expect horizontal auto-scaling, manag...