James Ding
Apr 09, 2026 16:48
Notion migrated from Spark on EMR to Ray, cutting embedding costs by 80% and improving query latency 10x. Uber and Salesforce shared similar AI infrastructure wins.
Notion has slashed its AI embedding pipeline costs by more than 80% after migrating from Apache Spark to Ray, the distributed computing framework backed by Anyscale. The productivity software company also achieved 10x improvements in query latency while consolidating three separate jobs per region into one.
The migration details emerged at Ray Day Seattle on April 9, 2026, where ML engineers from Notion, Uber, Salesforce, and Apple shared hard-won lessons about scaling AI infrastructure.
What Notion Actually Changed
Mickey Liu, a software engineer on Notion's search platform team, walked through the overhaul. Their original setup used a three-step Spark pipeline running on Amazon EMR: data chunking, third-party API calls for embedding generation, and writes to a vector store.
The pain points were predictable but severe. Double compute costs. Third-party API rate limits throttling throughput. Debugging nightmares when failures spanned multiple tools: driver and executor logs weren't even persisted in YARN.
The new architecture streams Kafka data directly into a Ray cluster that handles CPU chunking, GPU embedding generation, and vector store writes in a single pipeline. No intermediate S3 handoffs. What started as the backend for a Q&A feature in 2023 now powers all of Notion AI and custom agents.
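Notion's talk did not include code, but the single-pipeline shape is easy to sketch. The following is a minimal, dependency-free Python sketch of the chunk-embed-write flow; `chunk`, `embed_batch`, and the dictionary "vector store" are illustrative stand-ins, not Notion's actual API, and in the real system each stage would run as a Ray Data transform on CPU or GPU workers:

```python
from typing import Iterable, Iterator

CHUNK_SIZE = 512  # characters per chunk; a real pipeline would chunk by tokens

def chunk(doc: str) -> Iterator[str]:
    """CPU stage: split a document into fixed-size chunks."""
    for i in range(0, len(doc), CHUNK_SIZE):
        yield doc[i:i + CHUNK_SIZE]

def embed_batch(chunks: list[str]) -> list[list[float]]:
    """GPU stage stand-in: a toy 2-d 'embedding' (length, vowel count)
    replaces the real model call."""
    return [[len(c), sum(c.count(v) for v in "aeiou")] for c in chunks]

def pipeline(docs: Iterable[str], store: dict) -> None:
    """One pipeline end to end: chunk -> embed -> write.
    No intermediate S3 handoff between stages."""
    for doc in docs:
        batch = list(chunk(doc))
        vectors = embed_batch(batch)
        for text, vec in zip(batch, vectors):
            store[text] = vec  # vector-store write stage
```

The point of the consolidation is visible even in the toy version: one process owns all three stages, so a failure surfaces in one place instead of across three jobs and two log systems.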
Uber and Salesforce Report Similar Gains
Uber's Peng Zhang detailed how their Michelangelo ML platform evolved from TensorFlow/Horovod to Ray with PyTorch. The standout move: separating CPU data-loading nodes from GPU training nodes in a heterogeneous cluster design. The result? GPU utilization jumped 20%, and training time dropped roughly 50% in select pipelines.
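The core idea behind the heterogeneous design is decoupling: data loading runs ahead of training so the GPUs never idle waiting on preprocessing. A minimal sketch of that producer-consumer pattern, with threads and a bounded queue standing in for Uber's separate CPU and GPU node pools (all names here are illustrative, not Michelangelo's API):

```python
import queue
import threading

def cpu_loader(batches, q):
    """Producer (stands in for CPU data-loading nodes):
    preprocess batches and keep the prefetch buffer full."""
    for b in batches:
        q.put([x * 2 for x in b])  # toy 'preprocessing'
    q.put(None)  # sentinel: no more data

def gpu_trainer(q, results):
    """Consumer (stands in for GPU training nodes):
    pull ready batches and run the 'training step'."""
    while (batch := q.get()) is not None:
        results.append(sum(batch))  # toy 'training step'

def run(batches, prefetch=4):
    q = queue.Queue(maxsize=prefetch)  # bounded queue = prefetch buffer
    results = []
    loader = threading.Thread(target=cpu_loader, args=(batches, q))
    loader.start()
    gpu_trainer(q, results)
    loader.join()
    return results
```

Sizing the two pools independently is what lifts GPU utilization: if loading is the bottleneck, you add cheap CPU nodes instead of more GPUs.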
Salesforce tackled a different beast: summarizing documents up to 200,000 tokens long (roughly a short novel) with P95 latency below 15 seconds. Their team used Ray to chunk documents and run parallel inference across a distributed actor pool with vLLM, then merge the results. They landed on 1-2 GPU data parallelism as the sweet spot after running scaling experiments directly on Ray.
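That chunk-summarize-merge flow is a classic map-reduce shape. A minimal sketch under stated assumptions: a thread pool stands in for the Ray actor pool, and `summarize_chunk` is a placeholder for the per-actor vLLM inference call (both names are hypothetical, not from Salesforce's talk):

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_TOKENS = 4  # tiny for illustration; real chunks hold thousands of tokens

def summarize_chunk(tokens: list[str]) -> str:
    """Placeholder for a vLLM call on one actor:
    keep the first token as the chunk's 'summary'."""
    return tokens[0]

def summarize_document(tokens: list[str], workers: int = 4) -> str:
    """Map: summarize chunks in parallel across the pool.
    Reduce: merge the partial summaries into one result."""
    chunks = [tokens[i:i + CHUNK_TOKENS]
              for i in range(0, len(tokens), CHUNK_TOKENS)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(summarize_chunk, chunks))  # parallel 'inference'
    return " ".join(partials)  # merge step
```

Because the map stage is embarrassingly parallel, tail latency is governed by the slowest chunk, which is why the team's scaling experiments on degree of data parallelism mattered for hitting the P95 target.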
Why This Matters Beyond These Companies
Robert Nishihara, Ray's co-creator and Anyscale co-founder, opened the event by framing the core problem: AI infrastructure keeps getting harder. Multimodal data processing, reinforcement learning workloads, and multi-node LLM inference are pushing existing tools past their limits.
Every speaker landed on the same conclusion from a different angle: their previous tooling had run out of road.
Apple engineers Charlie Chen and Haocheng Bian highlighted foundation model training challenges: massive unstructured data, billion-plus parameters, and sparse architectures like Mixture of Experts. Traditional engines fall short because data pipelines and training frameworks run in separate environments with no shared context.
What's Next
Ray Day Seattle kicked off Anyscale's 2026 "Ray on the Road" tour: eight cities across three countries. The company is also running invite-only customer roundtables at each stop to preview its product roadmap.
For teams hitting similar walls with Spark or other distributed frameworks, Notion's full technical writeup is available on their engineering blog under "Two Years of Vector Search at Notion." The 80% cost reduction and 10x latency improvement offer a concrete benchmark for anyone evaluating a similar migration.
Image source: Shutterstock