
Next-gen lakehouse compute engine


The next-gen analytics engine for “heavy” workloads.






Trusted by NASDAQ-listed enterprises and private unicorns, backed by Accel.
10-100x faster

60-80% lower TCO

Ready-to-go value: 360° interoperability, prod-ready in 10 days


Second-largest category of IT spend, at over $100B

Fastest growing, at 30% YoY

CONTEXT

Data leaders are feeling the pain.

Lack of compelling compute engine alternatives

Limited competition due to deep barriers to entry: specialized know-how, massive capital needs, and long time-to-market.

Existing platforms are indistinguishable on price-performance, reducing the incentive to switch.

Worries around compute ecosystem lock-in

Vendors tie core performance and cost to specific table formats (e.g., Delta or Iceberg) and catalog/governance layers (e.g., Unity Catalog).

Migrating from one engine’s SQL dialect to another’s involves months of effort.

NOT ALL DATA INTELLIGENCE WORKLOADS ARE CREATED EQUAL

Some workloads are “just different”.

Mission critical | compute intensive | non-discretionary

Purpose-built for the 10% of heavy workloads that drive 80% of cost, engineering effort, and stakeholder complaints.

Introducing e6data

Amplify ROI, unlock new capabilities on existing data platforms.



SPEED / LATENCY

TOTAL COST OF OWNERSHIP (TCO)

GUARANTEED LATENCY UNDER LOAD

Introducing e6data

Negate any compute ecosystem lock-in.

Truly format-neutral compute, interoperable with all major open standards, as the stack and the connection sketch below illustrate.

Existing BI tool / notebook / API / other (JDBC / ODBC / SQLAlchemy)

Existing table format (Hive, Delta, Iceberg, Hudi)

Existing file format (Parquet, ORC, Avro)

Existing storage layer (S3, GCS, ADLS, HDFS)

Existing cloud provider (AWS, GCP, Azure, on-prem)
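To make the interoperability claim concrete, here is a minimal sketch of querying an existing lakehouse table from Python over SQLAlchemy. The dialect string, host, catalog, and table names are placeholders for illustration only; the actual driver and connection URL come from your deployment documentation, not from this sketch.

```python
from sqlalchemy import create_engine, text

# Hypothetical connection URL: the dialect name, host, credentials, and
# catalog are placeholders, not the actual e6data driver string.
engine = create_engine("e6data+pyhive://user:token@host:443/analytics_catalog")

# The query is plain ANSI SQL against an existing Iceberg/Delta/Hive table
# that stays where it already lives (S3, GCS, ADLS, HDFS).
with engine.connect() as conn:
    rows = conn.execute(text(
        "SELECT order_date, SUM(amount) AS revenue "
        "FROM sales.orders GROUP BY order_date ORDER BY order_date"
    ))
    for row in rows:
        print(row.order_date, row.revenue)
```

Because the client interface and the SQL stay standard, the same pattern applies from a BI tool over JDBC/ODBC without rewriting queries or moving the underlying Parquet, ORC, or Avro data.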

how we built e6data

What we did not do (too) differently.

Columnar processing

Pipelined execution

Cache-friendly execution and vectorisation (toy illustration below)

Highly parallel, distributed execution

Optimal planning and optimisation

Data caching
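As a toy illustration of what “columnar” and “vectorised” buy in practice, the generic PyArrow snippet below filters and aggregates a small columnar batch using whole-column kernels. This is a conceptual example, not e6data’s internals; the table and column names are made up.

```python
import pyarrow as pa
import pyarrow.compute as pc

# A small columnar batch: each field is a contiguous array, so a filter or
# aggregate touches only the columns it needs instead of whole rows.
batch = pa.table({
    "region": ["apac", "emea", "apac", "amer"],
    "amount": [120.0, 75.5, 310.25, 42.0],
})

# Vectorised kernels operate on whole columns at once (SIMD-friendly),
# rather than interpreting the expression row by row.
apac_only = batch.filter(pc.equal(batch["region"], "apac"))
total = pc.sum(apac_only["amount"])

print(total.as_py())  # 430.25
```

The same query written row by row would touch every field of every record; the columnar layout lets the engine scan only region and amount over contiguous memory.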

how we built e6data

What we did (very) differently.

Novel, fully disaggregated architecture

No centralised coordinator or driver
Lightweight, single-purpose services
Granular, independent scaling of services
No single point of failure (SPOF)

Decentralised task scheduling and execution

No centralised task scheduling or coordination
Distributed executors pull tasks when they are free (see the sketch below)
Mitigates the challenges of variable task times
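The sketch below illustrates the pull-based idea in plain Python threads: each executor takes the next task from a shared queue whenever it finishes the previous one, so a straggling task slows down only the worker running it. This is a conceptual illustration of decentralised, pull-based scheduling, not e6data’s actual services or APIs.

```python
import queue
import random
import threading
import time

# Shared pool of pending tasks; no central scheduler assigns work up front.
tasks: "queue.Queue[int]" = queue.Queue()
for task_id in range(12):
    tasks.put(task_id)

def executor(name: str) -> None:
    while True:
        try:
            task_id = tasks.get_nowait()  # pull the next task when idle
        except queue.Empty:
            return                        # no work left, exit cleanly
        time.sleep(random.uniform(0.01, 0.05))  # simulate variable task time
        print(f"{name} finished task {task_id}")

workers = [threading.Thread(target=executor, args=(f"executor-{i}",)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

Faster workers naturally pick up more tasks, which is why pulling mitigates variable task times without a coordinator in the critical path.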

Our own implementations of the SOTA

But on our unique architecture and distributed-processing foundations.