Austin or Remote (with flexibility for timezone overlap)
About Dreambase 💭
We're reimagining product analytics by turning the industry's standard approach upside down. While everyone else forces you through the painful dance of data extraction, transformation, expensive tooling, and armies of data engineers just to understand your users, we start where your data already lives: your Postgres product database. From there, we build up and out, connecting database records to event streams to behavioral patterns to research insights to your actual codebase in one unified system that eliminates the traditional pipeline nightmare and puts real product intelligence directly in your hands.
Your mission
As our Founding AI Infrastructure Architect, you'll build the unbreakable foundation that makes our AI-native analytics revolution possible at enterprise scale. You'll architect data systems that seamlessly handle everything from a startup's first thousand events to enterprises processing billions of records, designing the infrastructure that keeps customer data secure and compliant while delivering lightning-fast insights. This isn't just backend work: you're engineering the critical systems that let us deliver on our promise to eliminate the analytics nightmare, building infrastructure so robust and intelligent that it makes the impossible feel effortless.
What you'll do
Architect for Scale: Design and build database systems that gracefully scale from gigabytes to petabytes while maintaining sub-second query performance
Secure the Foundation: Implement enterprise-grade security, compliance frameworks (SOC 2, GDPR, HIPAA), and data governance that customers can trust with their most sensitive data
Master the Data Stack: Build sophisticated data infrastructure leveraging Postgres optimization, DuckDB for analytics workloads, Apache Iceberg for data lakes, and S3 for scalable storage
Stream at Speed: Design and implement real-time event streaming pipelines that capture millions of user interactions per second with zero data loss
Own the Backend: Architect APIs, authentication systems, and database layers using Supabase, including RLS policies, performance tuning, indexing strategies, and storage optimization
Build AI-Native Infrastructure: Create MCP (Model Context Protocol) servers and Streamable HTTP endpoints that enable seamless AI-to-data communication
Optimize Relentlessly: Monitor, profile, and optimize every layer of the stack for performance, cost-efficiency, and reliability at scale
Pioneer Standards: Establish infrastructure patterns and best practices as we grow from startup to enterprise platform
What we're looking for
Infrastructure Mastery: You've built and scaled backend systems that handle massive data volumes in production environments
Database Wizard: Deep expertise in Postgres optimization, indexing, query planning, and scaling strategies; bonus for DuckDB and SQLite experience
Security-First Mindset: You understand data security, compliance requirements, and how to build systems that meet enterprise standards
Supabase Expert: You know Supabase inside and out: auth, storage, RLS, Edge Functions, real-time subscriptions, and performance optimization
API Architect: You design clean, efficient REST APIs and understand HTTP at a deep level
Data Pipeline Guru: Experience building event streaming systems and analytics pipelines that reliably process high-volume data
Cloud Native: Hands-on experience with Vercel & Cloudflare hosting, Supabase backend, AWS (especially S3), data lake technologies like Apache Iceberg, and modern cloud infrastructure
AI Infrastructure Savvy: You understand what AI applications need from their infrastructure and how to build backends that support LLM-powered features
Full-Stack Capable: Comfortable with Node.js, TypeScript, React, and Next.js to collaborate effectively across the stack
Problem-Solving Machine: You debug complex distributed systems issues and architect elegant solutions to gnarly infrastructure challenges
Ship Fast, Scale Smart: You balance rapid iteration with building foundations that won't need complete rewrites at scale
Bonus points for
Experience with Databricks and/or Snowflake in production environments
Previous work scaling analytics platforms or data-intensive applications for enterprise customers
Contributions to open-source database, infrastructure, or data tools
Experience with real-time analytics systems or OLAP databases
Background as a Data Engineer, Data Scientist, or Analytics Engineer
Previous infrastructure leadership at a high-growth startup
Deep knowledge of data warehouse architecture and query optimization
Tools & tech you're probably already using
The ideal Dream builder is likely already playing with and excited about:
Databases & Storage: Postgres, DuckDB, SQLite, Supabase, S3, Apache Iceberg
Server & Serverless: Node.js, Next.js API routes, REST, HTTP, Vercel/Supabase Edge Functions
Event Streaming: Real-time analytics pipelines, event ingestion systems, streaming data architectures
AI Context (awareness level): Vercel AI SDK, OpenAI/Anthropic/Gemini APIs, RAG pipelines
AI Infrastructure: MCP (Model Context Protocol) servers, Streamable HTTP, LLM-optimized backends
Cloud & DevOps: Vercel, Cloudflare, AWS, GCP, or Azure; Infrastructure as Code; monitoring and observability tools
Security & Compliance: RLS policies, encryption, SOC 2 requirements, GDPR compliance frameworks
Development Tools: Claude Code, Cursor, or similar AI-augmented coding tools
Data Platforms: Databricks, Snowflake, or similar enterprise data platforms (bonus)
Why join Dreambase 💭
Make Real Impact: Build infrastructure that powers how thousands of businesses understand their products
Get in Early: Architect the foundation of a revolutionary platform and grow with us
Solve Hard Problems: Tackle complex infrastructure challenges at the intersection of AI, analytics, and scale
Join an Amazing Team: Learn and grow alongside passionate, brilliant colleagues who love what they do
Work How You Work Best: Remote-first culture focused on results, not hours spent at a desk
Get Rewarded: Competitive salary and equity that reflects your founding role in our success
How to apply
Show us what you can do! Rather than a resume, we'd love to see:
A case study of infrastructure you've built and scaled (architecture decisions, scaling challenges you solved, performance improvements you achieved). Videos go a long way!
Your GitHub profile or code samples that demonstrate your infrastructure and backend expertise
A technical deep-dive on a database optimization, scaling challenge, or infrastructure problem you've solved
A brief note about why Dreambase excites you and your vision for the future of AI-native infrastructure
Email jobs<at>dreambase.ai with the subject line "Let's Build the Foundation - [Your Name]"