What is Delta Live Tables?
🚀 Transform Your Data Processing with Delta Live Tables! 🚀
Streamline your data pipelines effortlessly with Delta Live Tables, a declarative framework for building reliable, maintainable, and testable data pipelines. 🛠️ Define your data transformations, and let Delta Live Tables handle task orchestration, cluster management, monitoring, data quality, and error handling.
Instead of juggling a series of separate Apache Spark tasks, simply define your streaming tables and materialized views, and Delta Live Tables will keep them up to date. With built-in data quality expectations, you can ensure your data meets your standards and decide how to handle any records that violate them.
Delta Live Tables Datasets
1️⃣ Streaming Tables: Process each record exactly once.
2️⃣ Materialized Views: Process records as needed for accurate results, perfect for transformations, aggregations, and pre-computing slow queries.
3️⃣ Views: Process records on query for intermediate transformations and data quality checks.
Streaming Tables:
- Process growing datasets, handling each row only once.
- Ideal for ingestion workloads needing data freshness and low latency.
- Useful for massive scale transformations by incrementally calculating results as new data arrives.
Pro Tip: Streaming tables are designed for append-only sources, but you can override this with the skipChangeCommits flag if needed (see the sketch below).
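To make this concrete, here is a minimal Python sketch of a streaming table in a DLT notebook. It assumes the raw orders table dev.bronze.orders_raw from the walkthrough later in this post, and `spark` is the session that the pipeline runtime provides; the skipChangeCommits option is only needed when the source is not strictly append-only.

```python
import dlt

# Streaming table: each record from the source is processed exactly once.
@dlt.table(comment="Incremental ingest of the raw orders source")
def orders_stream():
    return (
        spark.readStream
        # skipChangeCommits tells the Delta stream to ignore commits that
        # update or delete existing rows instead of failing the stream.
        .option("skipChangeCommits", "true")
        .table("dev.bronze.orders_raw")
    )
```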
Materialized Views:
- Precomputed results for fast query performance.
- Automatically refreshed based on pipeline schedules.
- Perfect for handling changes in input data, compliance requirements, corrections, and aggregations (a minimal sketch follows this list).
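Here is a hedged Python sketch of a materialized view: in a DLT notebook, a @dlt.table defined over a batch (non-streaming) read is refreshed on each pipeline run. The key column o_custkey is an assumption used only for illustration.

```python
import dlt

# Materialized view: a @dlt.table over a batch read, recomputed as inputs change.
@dlt.table(comment="Order counts per customer, refreshed on each pipeline run")
def orders_per_customer():
    return (
        spark.read.table("dev.bronze.orders_raw")  # batch read of the raw orders
        .groupBy("o_custkey")                      # hypothetical key column
        .count()
    )
```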
Views:
- Compute results from source datasets on query with caching optimizations.
- Not published to the catalog, keeping them internal within pipelines.
- Ideal for intermediate transformations and data quality enforcement (see the sketch below).
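The following sketch shows a view with a data quality expectation attached. The upstream dataset name orders_stream and the column o_orderkey are assumptions; the view stays internal to the pipeline and is never published to the catalog.

```python
import dlt

# View: computed on demand within the pipeline, not published to the catalog.
@dlt.view(comment="Validated orders used only by downstream pipeline steps")
@dlt.expect_or_drop("valid_order_key", "o_orderkey IS NOT NULL")  # drop failing rows
def orders_validated():
    return dlt.read("orders_stream")  # upstream dataset defined earlier in the pipeline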
1. Create a folder called "DLT PIPELINE".
2. Create two notebooks: 00_dlt_introduction & 00_setup_for_introduction.
3. Create a schema.
4. Clone some sample data.
5. Add some data incrementally to ORDERS.
6. Check the table dev.bronze.orders_raw.
7. Create a streaming table for orders (steps 7-11 are sketched in the code after this list).
8. Create a materialized view for customer.
9. Create a view to join the orders with customer.
10. Add a new column populated with current_timestamp.
11. Aggregate by c_market and find the count of orders.
12. Create an ETL pipeline (Delta Live Tables) and configure it.
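Below is a minimal sketch of what the 00_dlt_introduction notebook could contain for steps 7-11. Only dev.bronze.orders_raw comes from the walkthrough above; the customer source table and the column names (o_custkey, c_custkey, c_market) are assumptions for illustration, and `spark` is the session provided by the pipeline runtime.

```python
import dlt
from pyspark.sql import functions as F

# Step 7: streaming table that incrementally ingests the raw orders.
@dlt.table(comment="Streaming ingest of dev.bronze.orders_raw")
def orders():
    return spark.readStream.table("dev.bronze.orders_raw")

# Step 8: materialized view over the cloned customer sample data.
@dlt.table(comment="Materialized view of customers")
def customers():
    return spark.read.table("dev.bronze.customers_raw")  # hypothetical source name

# Steps 9-10: view joining orders with customers and adding a load timestamp.
@dlt.view(comment="Orders enriched with customer attributes")
def orders_with_customers():
    o = dlt.read("orders")
    c = dlt.read("customers")
    return (
        o.join(c, o["o_custkey"] == c["c_custkey"])   # hypothetical join keys
        .withColumn("load_ts", F.current_timestamp())  # step 10: new column
    )

# Step 11: aggregate by market segment and count orders.
@dlt.table(comment="Order counts per customer market segment")
def orders_by_market():
    return (
        dlt.read("orders_with_customers")
        .groupBy("c_market")  # hypothetical column name
        .agg(F.count("*").alias("order_count"))
    )
```

Once these definitions exist, step 12 is just a matter of creating a Delta Live Tables pipeline in the workspace UI, pointing it at this notebook, and choosing the target catalog and schema.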
DLT Internals & Incremental Load | Modify Columns | Rename Tables | Data Lineage
- Modify Columns & Rename Tables: simply tweak the script in your notebook and re-run the pipeline to see the changes reflected automatically (a sketch follows below).
- Incremental Load: when new data is added to the raw table, re-running the pipeline incrementally loads it and updates the downstream tables seamlessly.
- Data Lineage: the databricks_internal catalog stores metadata for all pipeline tables and views, keeping a history of each change.
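As a hedged example of such a tweak, the sketch below adds a column and renames the published table by editing the dataset definition; the name orders_enriched_v2 and the new column are purely illustrative, and orders_with_customers refers to the dataset from the earlier sketch.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(
    name="orders_enriched_v2",  # renaming: the table is published under this new name
    comment="Orders with an extra ingest date column",
)
def orders_enriched():
    return (
        dlt.read("orders_with_customers")             # dataset from the sketch above
        .withColumn("ingest_date", F.current_date())  # modifying: newly added column
    )
```

Re-running the pipeline after this edit materializes the renamed table with the additional column; no separate DDL or migration step is needed.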
I will write further posts covering more details.
Reference:
https://learn.microsoft.com/en-us/azure/databricks/delta-live-tables/