Discussing e6data’s Architectural Bets on the Zero Prime Podcast
Our founding engineer and Head of Engineering, Sudarshan, recently went on the Zero Prime podcast and unpacked the internals of our compute engine.
It’s a story of breaking away from the driver-executor model, rethinking scheduling for the object-store era, and why atomic, per-component scaling actually matters.
Everyone says “compute and storage are decoupled.” Not really.
Today’s data infra ≠ today’s compute requirements.
We are building e6data by imagining a new playbook. No central coordinator. No one mega-driver. No lock-in to a single table format. Here’s the breakdown:
1. Disaggregation of internals
- Separate the planner, metadata ops, and workers.
- Each scales independently, not as a monolith.
2. Dynamic, mid-query scaling
- Queries can scale up/down during execution.
- No pre-provisioning for worst-case. Just-in-time compute.
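To make "just-in-time compute" concrete, here is a minimal, illustrative sketch of resizing a query's worker pool mid-execution based on its remaining backlog. The `Worker`/`QueryExecution` names and the tasks-per-worker threshold are hypothetical for illustration, not e6data's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Worker:
    id: int

@dataclass
class QueryExecution:
    pending_tasks: int
    workers: list[Worker] = field(default_factory=list)

def rescale(q: QueryExecution, tasks_per_worker: int = 8) -> int:
    """Resize the worker pool to match the current backlog (just-in-time),
    instead of pre-provisioning for the worst case."""
    target = max(1, -(-q.pending_tasks // tasks_per_worker))  # ceiling division
    while len(q.workers) < target:   # scale up mid-query as backlog grows
        q.workers.append(Worker(id=len(q.workers)))
    while len(q.workers) > target:   # scale back down as the backlog drains
        q.workers.pop()
    return len(q.workers)

q = QueryExecution(pending_tasks=40)
assert rescale(q) == 5   # 40 tasks / 8 per worker
q.pending_tasks = 9
assert rescale(q) == 2   # backlog shrank, so the pool shrinks too
```

The point of the sketch: the pool tracks the query's actual demand at each moment, rather than being sized up front for the worst case.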
3. Push-based vectorized execution
- We’re similar to DuckDB/Photon but go deeper on compute orchestration.
- Useful when dealing with 1k+ concurrent user-facing queries.
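For readers unfamiliar with the push model: instead of each operator pulling rows from its child, the source slices data into column batches ("vectors") and pushes them downstream operator-to-operator. A toy sketch with hypothetical operator names (not e6data's actual engine):

```python
class Sink:
    """Terminal operator: aggregates whatever is pushed into it."""
    def __init__(self):
        self.total = 0
    def push(self, batch):
        self.total += sum(batch)

class FilterOp:
    """Keeps values above a threshold, pushes survivors onward as a batch."""
    def __init__(self, threshold, downstream):
        self.threshold = threshold
        self.downstream = downstream
    def push(self, batch):
        kept = [v for v in batch if v > self.threshold]
        if kept:
            self.downstream.push(kept)

class Scan:
    """Source operator: slices input into fixed-size batches and pushes them."""
    def __init__(self, values, batch_size, downstream):
        self.values = values
        self.batch_size = batch_size
        self.downstream = downstream
    def run(self):
        for i in range(0, len(self.values), self.batch_size):
            self.downstream.push(self.values[i:i + self.batch_size])

sink = Sink()
plan = Scan(list(range(10)), batch_size=4, downstream=FilterOp(5, sink))
plan.run()
assert sink.total == 30  # 6 + 7 + 8 + 9 survive the filter
```

Processing a batch at a time amortizes per-row overhead, which is why this model (shared with DuckDB and Photon) holds up under thousands of concurrent queries.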
4. No opinionated stack
- Bring your own catalog, governance layer, and format.
- Plug in; don’t port over.
Listen to the full podcast
We are universally interoperable and open-source friendly, integrating with any object store, table format, data catalog, governance tool, BI tool, or other data application.
We use a usage-based pricing model: your bill is determined by the vCPUs you consume, so you only pay for the compute you actually use.
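The vCPU-based billing works out to simple arithmetic. The rate below is hypothetical for illustration, not e6data's published pricing:

```python
def compute_bill(vcpus: int, hours: float, rate_per_vcpu_hour: float) -> float:
    """Usage-based pricing: cost scales with vCPU-hours actually consumed."""
    return vcpus * hours * rate_per_vcpu_hour

# e.g. a burst of 32 vCPUs for 15 minutes at an assumed $0.10/vCPU-hour
assert abs(compute_bill(32, 0.25, 0.10) - 0.80) < 1e-9
```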
We support all major file formats, including Parquet, ORC, Avro, JSON, and CSV.
e6data promises 5 to 10 times faster querying at any concurrency, with over 50% lower total cost of ownership across workloads compared to other compute engines in the market.
We support serverless and in-VPC deployment models.
We integrate with your existing governance tools, and we also offer an in-house solution for data governance, access control, and security.