Product-led Community
Back in 2020, my team at Spero Ventures thoughtfully articulated a thesis around product-led community. In it, they argue that community fosters better product experiences, leading to greater “stickiness” compared to products and services lacking a community aspect. If you read the original posts (here, here, and here), you’ll see the magic of product-led communities in companies like Strava, eBay, and Skillshare (a Spero Ventures portfolio company).
Strava by itself is just a GPS-enabled running tracker. eBay by itself is just an auction site. Skillshare by itself is just a platform for learning. Incorporating community elevates all of these companies because users come for the product/service but stay for the people. When these companies succeed, user experience improves—something that isn’t always the case with other “social” platforms making most of their revenue from ads.
This product-led community thesis was created with important technological and societal shifts in mind, many of which are still happening today:
People are becoming more comfortable connecting with others online.
Communities are evolving beyond geographic boundaries.
Gen-Z is gravitating toward more social products.
In addition to business potential, product-led communities provide a healthy way to create meaningful connections: a remedy for our growing loneliness and social isolation crisis. Companionship, after all, is downstream of community.
Armed with advances in AI, a handful of companies have chosen a very different approach to addressing loneliness: AI chatbot companions.
Companion AI
AI chatbot companions are software applications designed to simulate human conversation and emotion. They can respond to users, adapt to their personalities, and even remember details of prior conversations to personalize future interactions.
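The “memory” behavior described above can be sketched in a few lines. This is a hypothetical toy illustration, not any real product’s implementation: a bot that stores details a user shares and reuses them to personalize later replies.

```python
# Toy sketch of a companion chatbot's memory loop (hypothetical, for
# illustration only): store details from earlier conversations, then
# weave them into future replies to feel "personal."

class CompanionBot:
    def __init__(self, name="Bot"):
        self.name = name
        self.memory = {}  # details remembered across conversations

    def remember(self, key, value):
        """Store a detail the user shared (e.g. a hobby)."""
        self.memory[key] = value

    def reply(self, message):
        """Respond, personalizing with remembered details when possible."""
        if "hobby" in self.memory:
            return f"How is {self.memory['hobby']} going lately?"
        return "Tell me about your day!"

bot = CompanionBot()
bot.remember("hobby", "marathon training")
print(bot.reply("hi"))  # → How is marathon training going lately?
```

Real products layer large language models on top of this idea, but the core loop is the same: persist what the user reveals, then condition the next response on it.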
Replika helps users process emotions and build connections by chatting about their day or doing activities together.
Character.ai allows users to converse with custom characters.
Nomi is a virtual companion chatbot that can emulate a friend, significant other, or mentor.
Friend, an AI device startup, raised $2.5M in pre-seed funding last July for its pendant wearable that’s always listening.
Unlike Siri or Amazon Alexa, which are meant to help you complete tasks (like navigating or playing music), Friend is meant to be a companion. Users communicate with Friend via text message and receive instant replies based on conversations the device has heard.
What might seem ridiculous to some is intriguing to others. Many of these chatbot companion companies have become extremely popular, offering a solution for those who aren’t building enough meaningful relationships IRL.
3.5 million people visit Character.ai every day (it has the third-highest unique visits among chatbots, behind Gemini and ChatGPT).
Replika has about 10 million users.
WhatsApp is now beta-testing its own AI companion products within the app.
Many AI companion users are young, which makes sense, since Gen Z is the loneliest generation. Unfortunately, the technology can do more harm than good. AI chatbots have exposed kids to sexual dialogue, violence, and references to self-harm. One mother sued Character.ai after her son’s suicide, which may have been influenced by a chatbot. Character.ai also faced scrutiny when users created a “school shooter” persona and used chatbots to play out graphic scenarios.
Putting aside the tendency to promote violent or explicit content for a moment, there’s also the question of whether these chatbots truly benefit users at all.
Rithm Project
Michelle Culver, founder of The Rithm Project, offers a helpful framework for thinking about how chatbots can be harmful versus helpful. She characterizes four outcomes of chatbot usage:
Quadrant #1 — AI builds our capacity for human relationships.
For instance, someone with anxiety might use a chatbot to practice public speaking or ask questions about stigmatized topics.
Quadrant #2 — We maintain distinct but meaningful relationships with both humans and AI.
In this scenario, people have both bot friends and human friends: not that unlikely, if you ask me.
Quadrant #3 — AI increasingly replaces human relationships.
Imagine someone choosing to spend their time with a virtual chatbot rather than playing video games with in-person friends. Over time, they feel less inclined to hang out with real friends because the chatbot never argues and is always easy to talk to. Culver likens this to “giving junk food to a starving person.”
Quadrant #4 — Over-reliance on AI in human relationships.
“Outsourcing humanity to technology.”
Human Nature
There’s a meta question to be answered here: What does it mean to be human? Of course, the answer is subjective. The Smithsonian even encourages user-submitted responses, which vary widely: some rooted in evolution, others in existentialism.
Some wholesome answers.
“Showing love, kindness, and beauty and sharing it with the world.”
“To care for others and be the reason for someone’s smile.”
“The ability to ask ‘why’ and find an answer with the same mind. All while loving.”
Some funny answers.
“To be human means to be a mammal with skin, hair, clothes, and style.”
“Just a lucky organism whose genes worked for the environment. We all came from the organism that messed up when splitting in two, after all.”
“Being the only species we know who’s aware that we’re spinning along at 1000 miles an hour, and not falling off.”
I like the following definition: being human means having the capacity for connection, compassion, and empathy. Meaningful connection brings out these distinctly human traits and contributes to a longer, more fulfilling life (cue the Harvard Happiness Study).
Digitally Native Relationships
It would be shortsighted to assume that meaningful relationships can only be formed through in-person human connection. Plenty of online relationships are created every day, and most are healthy. For digital natives, the very definition of a “relationship” may already include interactions with technologies such as AI chatbots.
In Culver’s four-quadrant framework, Quadrant #1 is the holy grail, where AI helps build our capacity for human relationships. However, AI can only do so if it can engage with us as empathetically as a human would.
Startups building AI for emotional intelligence could unlock more products that fall into that first quadrant.
Daimon Labs is building LLMs to create AI companions with human-like empathy.
Affectiva is creating the world’s largest emotion database.
Hume.ai is working to ensure AI is developed to serve human goals and emotional well-being.
Chatbots aren’t going anywhere—we already have chatbot girlfriends, personal assistants, and even AI-based therapists. We even have holographic AI avatars.
“Even as our definition of human relationships evolves to account for the meaningful opportunities to connect digitally with other people, the next generation may feel less connected to themselves, each other, and the qualities that make us distinctly human if bot relationships replace or eclipse human ones.”
~ Michelle Culver via Will AI strengthen or erode human-to-human relationships?
Thinking through frameworks (like Culver’s) for using AI chatbots to enhance our capacity for human connection, and investing in pro-social AI, will make the transition to a digital-relationship-first world that much smoother.
Read A Sense of Belonging from the Spero Ventures team:
Part 1: What is a product-led community and why is this happening?
Part 2: A breakdown of existing models; how companies are building product-led communities
Part 3: What do product-led communities look like? A sampling of case studies.