Consumer AI-Native Products Today: On the Newly Possible, Personalization, Moat and Getting Funded (Part 1)
By Nisha Dua
There’s a difference between building with AI and building because of it. That’s why we gathered 40 early-stage founders at BBGV for a breakfast last month — co-hosted with our friends at Factorial and Stellation — to discuss what it really means to build Consumer AI-Native products in 2025. It was the biggest turnout (and the lowest drop-out rate) in our event series to date — and a clear signal that Consumer is back, baby.
As investors we were all in agreement that there’s never been a more ripe time to disrupt the current generation of consumer companies:
- The incumbents are sleepy, with major platforms focused on embedding AI into legacy products, not creating new ones. This leaves whitespace for startups to reimagine dominant categories (shopping, health, learning, travel) from first principles.
- Consumers are overwhelmed by choice and underwhelmed by quality. The internet has become a generic experience. What users now crave is curation, intimacy, and n=1 personalization.
- Trust in AI is accelerating. ChatGPT and others have mainstreamed AI. Consumers are not only open to it — they’re starting to expect it.
- Interface innovation is just beginning. Chat was only the first step. We’re seeing huge potential in multimodal, ambient, and gesture-based experiences. Your AI won’t just respond, but sense, suggest, and adapt invisibly.
We hosted two founders building early in AI-native consumer products: Dennis Crowley, co-founder of Foursquare and Dodgeball and now co-founder and CEO of Hopscotch Labs, a startup developing BeeBot — a location-aware, AI-powered audio app designed to enhance urban exploration through AirPods; and Anna Bofa, founder and CEO of Crate, an AI-powered platform designed to help users save and organize content from across the internet. Anna was previously at Google, Dropbox, Pinterest, and Facebook.
So what does AI-native mean? It’s not about putting a wrapper on top of ChatGPT to solve a problem or create a new workflow. It’s about products that couldn’t exist without the latest capabilities of LLMs, personalized agents, multimodal interfaces, and ambient computing — read on to hear what matters in building consumer AI-native today.
🔥 Newly Possible v. Newly Trivial
It’s easy to start with the question “What’s AI-Native versus AI-Enabled?” — but Matt Hartman from Factorial presented a new framework: what’s newly possible versus what’s newly trivial. In a world where anyone can layer an API on top of GPT or Stable Diffusion, the real question is: what’s meaningfully differentiated? This is a helpful way to think about what’s worth building — and what might already be too commoditized to defend.
Newly Possible: Unlocks That Didn’t Exist Before
This is the bleeding edge — the frontier. These are capabilities that weren’t feasible 12 months ago — or even 6. They often emerge from new model architectures, open-source releases, or major leaps in inference speed, latency, or multimodal functionality. They unlock new user behaviors and inspire real “wow” moments. What’s newly possible will feel magical and non-obvious. Examples include:
- Real-time voice interaction with Gemini over WebRTC without Twilio or infrastructure overhead.
- 3D object generation from a single 2D image using tools like Instant Mesh.
- Agentic behavior that auto-curates content based on implicit user interest, like Crate’s ability to pull Google Maps locations from a TikTok.
Newly Trivial: Table Stakes Capabilities Anyone Can Access
Thanks to open-source models, API wrappers, and tools like Replit, LangChain, and Hugging Face Spaces, a wide swath of AI functionality has become trivial to implement — even for solo builders. For example:
- Adding basic Q&A over PDFs: Matt basically coded this one in bed.
- Virtual try-on using open-source models or Pika’s API: every VC has been pitched a bunch of virtual try-on apps, but it’s important to know how easy they are to build. See below for an example Matt whipped up for Nisha at BBGV.
- Voiceover generation or basic summarization of content.
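To make the triviality concrete, here’s a minimal sketch of the retrieval half of “Q&A over a PDF” — plain keyword scoring over already-extracted text. This is an illustrative toy, not Matt’s actual code: the PDF extraction and the final LLM answer step are assumed to happen elsewhere, and a real app would swap the scoring for embeddings.

```python
# Minimal retrieval half of "Q&A over a PDF": split extracted text into
# chunks, score each chunk by word overlap with the question, and return
# the best chunk as context for an LLM call (omitted here).
import re

def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def best_chunk(question: str, chunks: list[str]) -> str:
    """Return the chunk sharing the most words with the question."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(chunks,
               key=lambda c: len(q_words & set(re.findall(r"\w+", c.lower()))))

doc = ("Our refund policy allows returns within 30 days of purchase. "
       "Shipping is free on orders over fifty dollars. "
       "Customer support is available by email around the clock.")
context = best_chunk("How many days do I have to return an item?",
                     chunk_text(doc, size=10))
```

The point isn’t that this is good retrieval — it’s that the whole loop fits in a dozen lines, which is exactly why “chat with your PDF” is a feature, not a company.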
The takeaway? “If I can build it in a weekend, we won’t invest in it,” Matt joked. But more to the point: if it’s newly possible, stay ahead of the pack and be the first mover — here VC funding can help you find the asymmetric upside. If it’s newly trivial, the app starts to look more like a set of features; that’s where distribution, brand, proprietary data, and UX magic become critical accelerants. If you nail those and people are willing to pay, go ahead and build it — but it might be better to bootstrap.
🤖 What Personalization Actually Means in 2025: It’s the Vibe.
The most talked-about promise of AI has to be n=1 personalization. AI has turned personalization from a backend optimization into the core UX.
Anna Bofa, founder of Crate, broke it down like this: “Old-school personalization grouped users into buckets. Now we’re building at the agent layer — every user gets a feed tailored to them, and only them.” Crate’s UX is designed around implicit signal, not explicit onboarding. Users save content, and their agent begins learning — curating, predicting, suggesting. It’s a laid-back, non-interruptive loop that feels more like a friend than a feed. “Saving content isn’t just a feature. It’s the dataset. It’s the user story. It’s what makes the personalization work,” Anna explained.
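The implicit-signal loop Anna describes can be sketched as a toy: every save bumps a per-user interest profile, and the feed is re-ranked against it — no onboarding questionnaire required. The tags and weights below are hypothetical; Crate’s actual pipeline is not public.

```python
# Toy implicit-signal personalization: each save updates per-user tag
# weights; candidate items are ranked by the summed weight of their tags.
# The saves themselves are the dataset -- no explicit onboarding.
from collections import Counter

class InterestProfile:
    def __init__(self) -> None:
        self.weights: Counter = Counter()

    def record_save(self, tags: list[str]) -> None:
        """One lightweight action (a save) is the only signal collected."""
        self.weights.update(tags)

    def rank(self, candidates: dict[str, list[str]]) -> list[str]:
        """Order candidate items by how well their tags match the profile."""
        return sorted(candidates,
                      key=lambda item: sum(self.weights[t] for t in candidates[item]),
                      reverse=True)

profile = InterestProfile()
profile.record_save(["travel", "tokyo"])   # user saves a Tokyo travel post
profile.record_save(["food"])              # user saves a restaurant clip
feed = profile.rank({
    "ramen-guide": ["food", "tokyo"],
    "tax-tips": ["finance"],
    "kyoto-itinerary": ["travel"],
})
```

A real agent layer would learn far richer signal than tag counts, but the shape is the same: the user’s ordinary behavior is the training data, and the ranking is unique to them.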
Dennis Crowley, co-founder of Hopscotch Labs (maker of BeeBot), emphasized the ambient approach: “Our best users don’t even open the app. They just put their AirPods in, and BeeBot gives them a whisper of what’s going on around them. It’s software that fits into the in-between moments of life.”
His point? The best personalization doesn’t just know you — it respects your time and context. And it doesn’t have to know everything; it just needs to know you well enough to tune into you.
This conversation on personalization surfaces a new idea — that personalization is no longer just about prediction, but about empathy. The product won’t just work; it will feel right, understand you, and anticipate you. At BBGV we’ve coined that concept the “vibe match.”
🤳Users Ain’t Gonna Werk, Werk, Werk
At BBGV we’ve also been thinking about how much work the user is required to do to get to the promise of personalization that feels magical — and how that shapes adoption, retention, and long-term behavior. There’s a clear spectrum emerging:
- No Effort: Products like BeeBot by Hopscotch Labs exemplify the “zero effort” model. BeeBot runs passively in your AirPods, delivering hyper-personalized, location-aware audio without requiring any input. This level of ambient, context-aware automation is frictionless — and thus more likely to become a daily habit. In the words of Dennis: “We’re trying to make an app that you don’t have to use.”
- Low Effort: Crate falls into a “lightweight lift” category. Users take one small action — saving something they’re already engaging with — and the product takes over from there, organizing, tagging, and resurfacing content via AI. Crate still requires intentionality, but it’s behavior that maps closely to existing patterns (like saving a link or screenshotting a post).
- High Effort: At the far end are products that demand substantial upfront setup, repeated inputs, or steep learning curves to unlock value. These tools might be technically powerful, but in a consumer context, effort becomes a tax — and one most users won’t keep paying. Unless the reward is immediate and unmistakable, these experiences often see steep drop-off.
The broader takeaway? Friction is the enemy of frequency. AI can enable magical experiences, but only if we reduce the user’s burden to access them. Founders should be asking: “How much work am I asking my user to do — and is it worth it? What’s the value exchange?” The best consumer AI products will feel inevitable because they demand so little. The less they have to work to use it, the more they do — and the more a start-up can make good on the personalization promise.
🧱 Moats: The Old Rules & New Layers in Consumer AI
The fundamentals of defensibility haven’t changed: great consumer companies are still built on network effects, proprietary data, economies of scale, switching costs, brand, and trust. These classic moats continue to apply in AI.
But, new layers of defensibility are emerging, including:
- User-trained models — Systems that improve specifically for each individual based on their behavior, preferences, and feedback.
- Agentic memory + context — Products that remember, personalize, and act with continuity create stickiness that’s hard to replicate.
- Invisible UX and ambient interaction — When an AI product weaves into a user’s life so seamlessly that it becomes second nature (or hard to turn off), that is a moat.
- Feedback loops as signal — Interfaces like “Accept” and “Roll back” don’t just enhance UX — they create training data, sharpening the product’s performance and accelerating differentiation.
And if network effects were the most interesting of the traditional moats — the idea that products get better the more you use them because of more data — that concept is evolving because of these new layers. Look for products that get better the more you use them — not because of more data, but because of better feedback.
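“Feedback loops as signal” can be sketched as little more than an event log: every Accept or Roll back click becomes a labeled example the product can later train on. The schema below is hypothetical — a real product would feed rows like these into fine-tuning or reward modeling — but it shows how ordinary UX quietly builds a preference dataset.

```python
# Sketch of "feedback loops as signal": every Accept / Roll back click is
# logged as a labeled example, turning routine UX into training data.
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    prompt: str       # what the user asked for
    suggestion: str   # what the model produced
    accepted: bool    # Accept = True, Roll back = False

class FeedbackLog:
    def __init__(self) -> None:
        self.events: list[FeedbackEvent] = []

    def record(self, prompt: str, suggestion: str, accepted: bool) -> None:
        self.events.append(FeedbackEvent(prompt, suggestion, accepted))

    def training_examples(self) -> list[tuple[str, str, int]]:
        """Export (prompt, suggestion, label) rows for a fine-tuning job."""
        return [(e.prompt, e.suggestion, int(e.accepted)) for e in self.events]

log = FeedbackLog()
log.record("summarize my notes", "Here is a 3-bullet summary...", accepted=True)
log.record("summarize my notes", "A 12-paragraph essay...", accepted=False)
rows = log.training_examples()
```

The moat isn’t the log itself — it’s that only your product sees these labels, so only your product gets sharper from them.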
One thing that was clear: what’s non-replicable is how a product feels, how it learns, and how it fits into culture. Maybe the “vibe match” actually becomes the moat?
💰 What Investors Are Actually Looking For in Consumer AI
We asked our partners in the room to share what they’re really evaluating at seed and Series A. There’s no question there’s a debate inside VC firms about whether value accrues to the application layer or the foundation layer, but Mike Mignano at Lightspeed confirmed they’re bullish on the app layer: they believe users will want dedicated workflows specific to their needs, and that the foundation models won’t be able to do everything well. As for what VCs look for when they invest:
At the Seed Stage: Founder Driven not Product Driven
AI aside, it’s about exceptional founders. VCs are backing founders they believe in, often pre-product, because AI-native products change shape quickly. Founder insight, UX instinct, and taste matter more than traction. That’s why Mike at Lightspeed backed John at Granola.
At BBGV we’re also looking for a founder who is focused on doing one thing really well, and delivering clear value that surprises and delights the user. Where there is early traction, we want to see organic growth and early WOM signals — marketing spend is a bad sign, and skews the picture.
At Series A and Beyond: This is where the data matters. As Danielle Lay from NEA noted, “VCs are wary of consumer because they’ve been burned by growth curves that spike and crash.” So here’s what they’re really looking for:
- Strong retention curves that evidence stable user cohorts
- Real organic growth that comes from true word of mouth virality and doesn’t rely on influencer driven spikes (per Mike: “Don’t tell me you grew because someone shared you on TikTok”)
- Customer obsession that’s evidenced in the user data
- Proprietary data moats
All the VCs in the room agreed that it’s a great time to be an AI founder: valuations are “stupid high” in 2024/25 versus prior years. But it wasn’t lost on anyone that this is a bubble moment — expect the high to come down. Focus on the fundamentals above and you’ll be good.
Even at the Series A, VCs want to meet you early — all agreed that “if you want to lead a Series A, you need to be meeting these companies at seed. You need to understand the founder’s thesis from day one.”
💰 What BBGV is looking for
If you’re building in Consumer AI — building something newly possible, believing in wow moments, working on n=1 personalization and a vibe match that asks less work of users — come talk to us. We’re excited by companies reshaping consumer behavior to help us be more productive, creative, and prosperous; to help us be well, self-improve, indulge, and belong. Reach out at hello@bbgventures.com