11AM PT, 2PM ET
Government agencies today deal with ever-increasing volumes and varieties of data. Unfortunately, traditional data warehouses fail to give government organizations the capabilities they need to maximize the value of their big data. Managing data pipelines and executing even relatively simple database tasks can be time-consuming and costly. Fortunately, open-source technologies can make these complex tasks simple.
Join us September 25 at 11:00am PT/2:00pm ET to hear experts explain the fundamentals of building a modern data analytics architecture with open-source technologies. You’ll learn how to transform a messy data lake into a performant, scalable, and reliable analytics engine capable of delivering on a wide range of use cases, from batch and streaming ingestion to fast interactive queries to machine learning. In this webinar, we’ll cover:
- Key challenges of legacy data analytics architectures in the public sector
- How to prepare your data for analytics at scale with a modern architecture built on Apache Spark and Delta Lake
- A live demo covering:
  - How to optimize upserts (MERGE INTO) and queries with Delta Lake
  - How to ensure data consistency with Delta Lake's ACID transactions
  - How to extend your analytics solution with Delta Lake time travel
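As a taste of what the demo will show, the upsert and time-travel operations above might look like the following in Spark SQL against a Delta table (the table and column names here are illustrative placeholders, not from the actual demo):

```sql
-- Upsert: merge a batch of updates into a Delta table.
-- "customers" and "customer_updates" are hypothetical example tables.
MERGE INTO customers AS target
USING customer_updates AS source
ON target.customer_id = source.customer_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Time travel: query the same table as of an earlier version or timestamp.
SELECT * FROM customers VERSION AS OF 5;
SELECT * FROM customers TIMESTAMP AS OF '2019-09-01';
```

Because Delta Lake records every change as an ACID transaction in its transaction log, the MERGE runs atomically and earlier table versions remain queryable for auditing or rollback.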