Lakehouse Days: January 2025

Want to see e6data in action?

Learn how data teams power their workloads.

Get Demo

About the event

Join us for an exclusive in-person event on “Apache Iceberg: understanding the internals, performance, and future” hosted by e6data in Gurgaon!

This meetup is designed for data engineers, data architects, and senior software engineers who are constantly looking to make their data architecture more price-performant while delivering the best user experience. In this edition, we will dive deep into the internal architecture of open table formats like Apache Iceberg, the implications of AWS's announcement of S3 Tables for Apache Iceberg, and streaming ingestion into Iceberg using a Rust-based solution. We aim to raise awareness of these open table formats and build a deeper understanding of how they work.

Lakehouse Days is designed to enable fellow data nerds to meet, network, and have insightful discussions on the entropic world of data.

Meet the speakers

Soumil Shah, Sr. Software Engineer at Zeta Global
Ankur Ranjan, Senior Software Engineer at e6data

Topic: A deep dive into the AWS S3 Tables since the announcement

In this session, Soumil and Ankur will dissect and discuss AWS’s recent announcement of Amazon S3 Tables – a fully managed Apache Iceberg Table offering by AWS, optimized for analytics workloads. They will discuss the consequences of the announcement and how it will shape the Lakehouse world.

Time: 09:45 - 10:30 AM IST

Sachin Tripathi, Senior Data Engineer at EarnIn

Topic: Apache Iceberg 101

This session covers key features such as Iceberg's ACID transactions, time travel, schema evolution, hidden partitioning, and catalogs. It also offers insights into optimizing analytics, managing metadata, and ensuring interoperability across multi-engine ecosystems.
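To give a flavor of how snapshot-based metadata makes time travel possible, here is a deliberately simplified, hypothetical sketch in Python. It is not the PyIceberg or Spark API; it only models the core idea that every commit appends an immutable snapshot, and readers select a snapshot by timestamp.

```python
# Conceptual toy model of Iceberg-style snapshots (not a real Iceberg API).
# Each commit produces a new immutable snapshot; old snapshots stay readable,
# which is what makes time-travel queries possible.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Snapshot:
    snapshot_id: int
    timestamp_ms: int
    data_files: tuple  # the full set of files visible in this table version

@dataclass
class IcebergTableToy:
    snapshots: list = field(default_factory=list)

    def commit(self, timestamp_ms, new_files):
        # Commits never mutate existing snapshots; they append a new one
        # that includes the previous files plus the newly added ones.
        prev = self.snapshots[-1].data_files if self.snapshots else ()
        snap = Snapshot(len(self.snapshots) + 1, timestamp_ms, prev + tuple(new_files))
        self.snapshots.append(snap)
        return snap

    def current(self):
        return self.snapshots[-1]

    def as_of(self, timestamp_ms):
        # Time travel: the latest snapshot at or before the given timestamp.
        eligible = [s for s in self.snapshots if s.timestamp_ms <= timestamp_ms]
        return eligible[-1]

table = IcebergTableToy()
table.commit(1000, ["a.parquet"])
table.commit(2000, ["b.parquet"])

print(table.current().data_files)   # ('a.parquet', 'b.parquet')
print(table.as_of(1500).data_files) # ('a.parquet',)
```

In the real format, the same append-only principle applies to metadata files and manifests; engines like Spark expose it as `SELECT ... FOR TIMESTAMP AS OF`.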

Time: 10:45 - 11:30 AM IST

Karthic Rao, Principal Engineer at e6data
Shreyas Mishra, Software Development Engineer at e6data

Topic: Streaming ingestion to Apache Iceberg using a Rust-based solution

Apache Iceberg is an open-source, high-performance format for huge analytic tables that brings SQL tables to big data while letting engines like Spark, Trino, Flink, Presto, and e6data safely work with the same tables concurrently. In this talk, Karthic and Shreyas will explain how they have re-imagined streaming ingestion into Apache Iceberg using a Rust-based solution instead of Apache Flink, Spark Structured Streaming, or Kafka Streams. Rust's memory safety and concurrency features make it well suited to building efficient ingestion pipelines that transform and write data directly into Iceberg's table format. This ensures seamless integration, low-latency ingestion, and effective handling of schema evolution, enabling real-time analytics on fresh data.
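The core pattern behind streaming ingestion into a table format like Iceberg can be sketched as micro-batching: buffer incoming events, then turn each flush into one atomic commit so the table does not accumulate a snapshot per event. The sketch below is a hypothetical illustration in Python (the talk's actual solution is in Rust, and a real pipeline would write Parquet files and commit them via the Iceberg catalog).

```python
# Hypothetical micro-batch ingestion sketch (illustration only, not the
# speakers' implementation). Events are buffered; each flush stands in for
# one atomic Iceberg commit.

class StreamingIngestor:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.committed_batches = []  # stands in for the table's snapshots

    def ingest(self, event):
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        # In a real pipeline: write data files, then swap table metadata
        # in a single atomic operation so readers see all-or-nothing.
        self.committed_batches.append(tuple(self.buffer))
        self.buffer = []

ingestor = StreamingIngestor(batch_size=3)
for event in ["e1", "e2", "e3", "e4"]:
    ingestor.ingest(event)

print(ingestor.committed_batches)  # [('e1', 'e2', 'e3')]
print(ingestor.buffer)             # ['e4']
```

Batch size trades latency against snapshot count and small-file overhead, which is exactly the tension a purpose-built ingestion layer has to manage.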

Time: 11:45 - 12:30 PM IST


Build future-proof data products

Try e6data for your heavy workloads!

Get Started for Free
Frequently asked questions (FAQs)
How do I integrate e6data with my existing data infrastructure?

We are universally interoperable and open-source friendly. We can integrate across any object store, table format, data catalog, governance tools, BI tools, and other data applications.

How does billing work?

We use a usage-based pricing model based on vCPU consumption. Your billing is determined by the number of vCPUs used, ensuring you only pay for the compute power you actually consume.

What kind of file formats does e6data support?

We support all major file formats, including Parquet, ORC, JSON, CSV, Avro, and others.

What kind of performance improvements can I expect with e6data?

e6data delivers 5 to 10 times faster query speeds at any concurrency, with over 50% lower total cost of ownership across workloads compared to other compute engines on the market.

What kinds of deployment models are available at e6data?

We support serverless and in-VPC deployment models. 

How does e6data handle data governance rules?

We can integrate with your existing governance tool, and also have an in-house offering for data governance, access control, and security.