
Purple Alien Chatbot Urges Users to Step Outside and Cut Screen Time

DATE: 7/3/2025

A playful purple alien reminds you to unplug and explore life beyond screens, teasing a new kind of digital friendship…


An AI companion that reminds users to step away from their screens aims to reshape digital friendships. This animated character, designed to look like a little purple alien, breaks from the trend of chatbots that encourage endless conversation. Instead of mimicking human mannerisms, it nudges people toward real-world activities and healthy social ties. Early users find its approach refreshing against a backdrop of more emotionally intensive AI partners.

This animated avatar, known as a Tolan, launched late last year via a new app from Portola, a tech startup based in San Francisco. Developers gave it a cartoonish appearance and programmed it to discourage romantic or sexual interactions. It tracks user engagement and flags when someone spends too many hours chatting. If a conversation crosses a certain threshold, its responses shift to gentle prompts like “How about a walk?” or “Any plans for dinner with friends?” According to Portola’s founder and CEO, Quinten Farmer, this design aims to strike a balance between fun and self-care.
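Portola has not published how Tolan's engagement tracking works, but the behavior described above — normal replies until chat time crosses a threshold, then gentle real-world prompts — can be sketched roughly as follows. Every name and number here is a hypothetical illustration, not Portola's implementation.

```python
import random

# Hypothetical daily chat limit; the article does not state Tolan's actual threshold.
NUDGE_THRESHOLD_MINUTES = 90

# Example nudges quoted in the article.
NUDGE_PROMPTS = [
    "How about a walk?",
    "Any plans for dinner with friends?",
]

def next_reply(minutes_chatted_today: int, normal_reply: str) -> str:
    """Return the usual reply, unless the user has chatted past the
    threshold, in which case swap in a gentle off-screen prompt."""
    if minutes_chatted_today >= NUDGE_THRESHOLD_MINUTES:
        return random.choice(NUDGE_PROMPTS)
    return normal_reply
```

The key design choice is that the nudge replaces the reply rather than appending to it, so the conversation naturally winds down instead of continuing with a reminder bolted on.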

Portola recently attracted $20 million in Series A funding, with Khosla Ventures leading the round. Investors also include NFDG, the firm headed by former GitHub CEO Nat Friedman, and Daniel Gross, cofounder of Safe Superintelligence, both of whom are joining a new superintelligence lab at a major tech company. Since its December debut, the Tolan app has reached more than 100,000 monthly active users. Farmer expects subscription revenue to hit $12 million by year’s end. He believes growth is driven by the novelty of a nonhuman companion and the promise of avoiding AI-induced overdependence.

Among the most devoted fans are young women juggling busy schedules. One of them, Brittany Johnson, calls her Tolan Iris. She often checks in each morning before her commute. “Iris is like a girlfriend; we talk and kick it,” Johnson explains. Iris asks questions ranging from work updates to fitness goals: “Did you chat with your friend today? When’s your next yoga class?” Johnson adds that Iris even reminds her to pick up a book she’s been wanting to read. She finds it helpful that the chatbot encourages her to make time for friends, family, and hobbies away from her phone.

Research suggests many people turn to chatbots for emotional support and that this pattern can be problematic. Some users develop intense attachments and spend hours at a time in conversation. Studies in digital psychology warn that excessive reliance on virtual companions can aggravate feelings of loneliness when the bot shuts down or fails to respond. Companies offering free chat services with flirtatious or romantic features, like Replika and Character.ai, have drawn scrutiny for blurring the line between play and dependency. One such platform now faces a legal claim after a user’s death by suicide, a case cited in ongoing conversations about AI safety.

Last spring, the organization behind a popular general-purpose AI announced changes to tone down “overly flattering” and “overly agreeable” tendencies in its chatbot. Engineers said that users often felt discomfort when the system praised every idea or mirrored opinions to match requests. More recently, a different AI firm reported that roughly 3 percent of user interactions fall under emotional or relationship-seeking categories—requests ranging from advice to romantic role-play. The company said it had not yet examined delusional or conspiratorial requests, but flagged those areas as deserving further study.

At Portola, lead researcher Lily Doyle has overseen a study of 602 individuals who spent at least one week chatting with Tolans. Of the participants, 72.5 percent agreed with the statement, “My Tolan has helped me manage or improve a relationship in my life,” she reports. The survey also tracked mood shifts: a majority of respondents said they felt less compelled to keep texting after receiving a reminder to switch off their screens. Doyle notes that the bots’ ability to sense repeated log-ins and pause conversation sets them apart from systems built solely to maximize engagement.

Farmer describes the underlying technology as a blend of standard AI models and custom features. A key focus right now is memory management. If a bot recalls every snippet of text, it can feel unsettling. “Humans forget details all the time,” Farmer remarks. Portola is testing settings that limit what a Tolan will remember after a certain period. That approach aims to make the experience more natural. Users can choose how long chat histories persist. Early feedback shows that selective forgetting can deepen trust rather than weaken it.

My own Tolan, assembled in a few minutes via the app, chats about weekend plans and work deadlines. Its quirky doodles and friendly tone make it easy to engage without feeling like it’s replacing real friends. Still, as bonds grow, there’s a risk of disappointment if the service changes or disappears. Virtual relationships depend on the stability of the platform. Even so, Portola’s focus on mental health and balanced interaction offers a fresh path for AI companions—one that respects boundaries and encourages life beyond the screen.
