
Lakehouse Days: Dec 2024

About the event

Join us for an exclusive in-person event on “Apache Iceberg: understanding the internals, performance, and future,” hosted by e6data.

This meetup is designed for data engineers, data architects, and senior software engineers who want to make their data architecture more price-performant while delivering the best user experience. In this edition, we will deep-dive into the internal architecture of open table formats like Apache Iceberg, the recent announcement of AWS S3 Tables for Apache Iceberg, streaming ingestion into Iceberg using a Rust-based solution, and how Apache Iceberg is used at scale at Netflix. We aim to raise awareness of these open table formats and build a deeper understanding of them.

Lakehouse Days is designed to enable fellow data nerds to meet, network, and have insightful discussions on the entropic world of data.

Meet the speakers

Sachin Tripathi, Senior Data Engineer at Bureau

Topic: Apache Iceberg 101: Understanding the Need for Lakehouses Over Data Lakes or Warehouses

This discussion covers key Iceberg features such as time travel, schema evolution, hidden partitioning, and catalogs. It also offers insights into optimizing analytics, managing metadata, and ensuring interoperability across multi-engine ecosystems, highlighting the advantages lakehouses hold over data lakes and warehouses.

Time: 9:00 - 9:45 AM IST

Soumil Shah, Sr. Software Engineer at Zeta Global

Topic: A take on AWS's recent announcement of S3 Tables

In this session, Soumil will dissect and discuss AWS’s recent announcement of Amazon S3 Tables, a fully managed Apache Iceberg tables offering from AWS, optimized for analytics workloads.

Time: 10:00 - 10:45 AM IST

Vipul Bharat Marlecha, Senior Software Engineer at Netflix
Ankur Ranjan, Senior Software Engineer at e6data

Topic: Open discussion on streaming ingestion & Apache Iceberg

In this session, Vipul and Ankur will engage in an open discussion to showcase how Apache Iceberg technology facilitates streaming ingestion, along with its advantages and disadvantages. They will also explore how Netflix leverages Apache Iceberg at scale, covering aspects like table maintenance, cataloging, streaming sources, and much more.

Time: 11:00 - 11:45 AM IST

Fenil Jain, Software Development Engineer at e6data
Shreyas Mishra, Software Development Engineer at e6data

Topic: Streaming ingestion to Apache Iceberg using a Rust-based solution

Apache Iceberg is an open-source, high-performance format for huge analytic tables. It brings the reliability and simplicity of SQL tables to big data while making it possible for engines like Spark, Trino, Flink, Presto, and e6data to work with the same tables safely. In this talk, we will re-imagine streaming ingestion into Apache Iceberg using a Rust-based solution instead of Apache Flink, Spark Structured Streaming, or Kafka Streams. Rust’s memory safety and concurrency features make it well suited to building efficient ingestion pipelines that transform and write data directly into Iceberg’s table format. This enables seamless integration, low-latency ingestion, and effective handling of schema evolution, supporting real-time analytics on fresh data.

Time: 12:00 - 12:45 PM IST
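
To make the talk's premise concrete, here is a minimal sketch of the kind of batching ingestion loop a Rust-based solution might use. The `IcebergSink` trait and `MockSink` type are hypothetical stand-ins for a real Iceberg writer (for example, one built on the Apache iceberg-rust crate); everything else is plain standard-library Rust.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::{Duration, Instant};

/// One incoming event from a stream (e.g. a Kafka topic).
struct Record {
    key: String,
    value: String,
}

/// Hypothetical abstraction over an Iceberg table writer. A real
/// implementation would buffer rows into Parquet data files and then
/// commit a new table snapshot (e.g. via the Apache iceberg-rust crate).
trait IcebergSink {
    fn append_batch(&mut self, batch: Vec<Record>);
}

/// Stand-in sink that just counts commits, so the sketch runs anywhere.
struct MockSink {
    commits: usize,
}

impl IcebergSink for MockSink {
    fn append_batch(&mut self, batch: Vec<Record>) {
        self.commits += 1;
        if let Some(r) = batch.last() {
            println!(
                "commit #{}: {} records (last: {} = {})",
                self.commits, batch.len(), r.key, r.value
            );
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<Record>();

    // Producer thread standing in for a streaming source.
    thread::spawn(move || {
        for i in 0..25 {
            let rec = Record { key: format!("k{i}"), value: format!("v{i}") };
            tx.send(rec).unwrap();
            thread::sleep(Duration::from_millis(20));
        }
        // Dropping `tx` here closes the channel and ends the consumer loop.
    });

    const MAX_BATCH: usize = 10;
    const MAX_WAIT: Duration = Duration::from_millis(200);

    let mut sink = MockSink { commits: 0 };
    let mut buffer: Vec<Record> = Vec::new();
    let mut last_commit = Instant::now();

    // Consumer loop: batch by size or by time, then commit. Batching matters
    // for Iceberg because every commit creates a snapshot; committing one
    // record at a time would flood the table with tiny files and metadata.
    loop {
        match rx.recv_timeout(MAX_WAIT) {
            Ok(rec) => buffer.push(rec),
            Err(mpsc::RecvTimeoutError::Timeout) => {}
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
        let deadline_hit = !buffer.is_empty() && last_commit.elapsed() >= MAX_WAIT;
        if buffer.len() >= MAX_BATCH || deadline_hit {
            sink.append_batch(std::mem::take(&mut buffer));
            last_commit = Instant::now();
        }
    }
    if !buffer.is_empty() {
        sink.append_batch(buffer); // flush whatever is left on shutdown
    }
}
```

The size-or-time flush policy is the core trade-off in any streaming writer: larger batches mean fewer snapshots and bigger data files, while the time bound keeps end-to-end latency predictable.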


Read more about Apache Iceberg


This is an exclusive and invite-only event. Please RSVP to reserve your spot below.

Venue:

e6data, Bangalore, India (Details in invite)

Date & Time:

21st Dec 2024 from 8:45 AM to 1:30 PM IST

Register Now

Build future-proof data products

Try e6data for your heavy workloads!

Get Started for Free
1. Pick a heavy workload

Choose a common cross-industry "heavy" workload, or work with our solution architect team to identify your own.

2. Define your 360° interop

Define all points of interop with your stack (e.g. catalog, BI tool). e6data is serverless-first and available on AWS/Azure.

3. Pick a success metric

Supported dimensions: speed/latency, TCO, and latency under load. Pick any linear combination of these three dimensions (e.g. 70% weight on speed, 30% on TCO).

4. Pick a kick-off date

Assemble your team (data engineer, architect, DevOps) for kickoff, and go live within 10 business days of the kickoff date.

Frequently asked questions (FAQs)
How do I integrate e6data with my existing data infrastructure?

We are universally interoperable and open-source friendly. We integrate with any object store, table format, data catalog, governance tool, BI tool, and other data applications.

How does billing work?

We use usage-based pricing tied to vCPU consumption. Your bill is determined by the number of vCPUs used, so you only pay for the compute power you actually consume.
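
As a back-of-the-envelope illustration (the rate below is a hypothetical placeholder, not an actual e6data price), a vCPU-based bill reduces to a simple product of vCPUs, hours, and rate:

```rust
// Illustrative vCPU-based billing arithmetic. The rate is a hypothetical
// placeholder, not an actual e6data price.
fn main() {
    let vcpus = 32.0; // vCPUs consumed by the workload
    let hours = 6.5; // hours the workload ran
    let rate_per_vcpu_hour = 0.05; // hypothetical $ per vCPU-hour
    let bill = vcpus * hours * rate_per_vcpu_hour;
    println!("{vcpus} vCPUs x {hours} h x ${rate_per_vcpu_hour}/vCPU-h = ${bill:.2}");
}
```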

What kind of file formats does e6data support?

We support all major file formats, including Parquet, ORC, JSON, CSV, Avro, and others.

What kind of performance improvements can I expect with e6data?

e6data delivers 5 to 10 times faster query speeds at any concurrency, with over 50% lower total cost of ownership across workloads, compared to other compute engines in the market.

What kinds of deployment models are available at e6data?

We support serverless and in-VPC deployment models. 

How does e6data handle data governance rules?

We can integrate with your existing governance tool, and also have an in-house offering for data governance, access control, and security.