On the latest episode of Uncanny Valley, host Zoë Schiffer joined WIRED senior writer Max Zeff for a wide-ranging rundown of the week’s biggest technology and political developments. The pair hit five headline stories, moving from the fallout around newly released Jeffrey Epstein documents to a close look at Google’s Gemini 3 launch and how major AI companies are chasing profitability for consumer products.
The episode began with a conversation about the political damage tied to the release of roughly 20,000 Epstein-related records. Reporters including David Gilbert had documented mounting pressure on the Trump administration from a cross-section of supporters and critics, and on Wednesday the president signed legislation that made that trove public. Schiffer pointed to a string of past moments that left the administration exposed, and she read from reporting suggesting that the scandal’s fallout has already harmed the White House brand.
Max Zeff described the arc of the story as astonishing in scope. He traced the discussion back to 2017, when QAnon mentions of Epstein first moved into online political conversation, through Epstein's arrest in 2019, and into the present, with new documents still changing the narrative heading into 2026. Both hosts circled one of the stranger threads in the material: statements suggesting contact or knowledge stretching beyond the timelines publicly acknowledged. Schiffer highlighted an email implying that Epstein had intimate knowledge of Trump's views in 2017, even though Trump had said he hadn't spoken with Epstein for more than a decade.
The episode recapped several flashpoints that had already troubled the administration. Earlier this year Attorney General Pam Bondi told a reporter that an Epstein client list was "sitting on her desk," a remark the FBI later distanced itself from. The Department of Justice also released jail footage tied to Epstein's death, and investigative reporting found gaps in the files: several minutes of footage appeared to have been deleted. Those anomalies, paired with items in the newly published records, feed a cycle of questions that political allies, conspiracy influencers, and opponents are all pushing in different directions. Max warned that certain pundits and extremist commentators had refused to let the story drop; he named Nick Fuentes and Candace Owens among those who continued to stoke speculation.
Schiffer asked how the administration allowed a narrative like this to hang over it for so long. “I have a hard time putting myself in the administration’s shoes,” she said. “How did they not realize that at a certain point, if you tease this, you’re going to have to deliver, and if you are nervous, hypothetically, that you could be involved in any way and you really don’t want this to come out. It just seems kind of like a ‘play with fire, you'll get burned’ situation.” Zeff replied that the political calculus here had clearly failed to anticipate the level of scrutiny and the endurance of internet-driven conspiracy communities.
The hosts moved from Epstein to a new development in federal policy: reporting that President Trump is weighing an executive order meant to challenge state-level AI regulation. Zeff said the draft, which he and colleagues had seen, had circulated in both Washington and Silicon Valley. The working title in the draft reads "Eliminating State Law Obstruction of National AI Policy." The document would direct Attorney General Pam Bondi to stand up an AI litigation task force. Its mission, as described in the draft, would be to sue states whose AI rules the administration claims conflict with federal law, including laws it says infringe on free speech or interstate commerce.
Zoë noted the political signaling baked into that approach. She pointed out that some large technology firms and industry groups have spent years pushing back on a state-by-state regulatory model. Zeff mentioned the Chamber of Progress, an industry group backed by Andreessen Horowitz, Google, and OpenAI, which has lobbied against fragmented state rules on AI by arguing they make innovation harder. The administration’s proposed order would be a direct alignment with that industry position and create a formalized federal effort to contest state statutes in court.
The draft contains specific language aimed at a class of regulations the administration views as ideological. It says federal action should target state laws that “require AI models to alter their truthful outputs.” Zeff said that the language appears to be aimed at rules like Colorado’s law that requires transparency and reporting to prevent algorithmic discrimination. Critics of the draft argue there isn’t much evidence that states have broadly compelled models to fabricate or reshape factual information. Supporters of the administration’s push counter that some policies could force companies into labeling content or changing outputs in ways they see as problematic.
Part of the political texture behind the draft is a persistent gripe from some conservative and tech circles about so-called “woke” AI. Zeff suggested that a flashpoint in the debate was last year’s problem with a major AI image tool, which critics have used as proof that models can be pushed into producing misleading content. He named David Sacks as a figure particularly eager to highlight that incident. Schiffer added that cultural friction—small moments such as a tech office displaying political apparel—has at times amplified officials’ sensitivity to perceived bias in AI systems.
The conversation shifted to corporate earnings and the pressure facing hardware suppliers at the center of AI growth. Nvidia’s quarterly call drew attention after CEO Jensen Huang spent time addressing talk that the sector sits inside a speculative surge. Paresh Dave’s reporting shaped the show’s summary of the call: Huang framed Nvidia’s position as central to the AI economy because the company’s chips power the compute-heavy models now used across the industry. He pointed out record quarterly sales, and executives told investors the company still has roughly $500 billion in unfilled orders.
Max said the market reaction to that update was a mix of reassurance and skepticism. Nvidia has pivoted in recent years; gaming GPUs once drove most of the business, and now about 90 percent of sales come from data-center products. That makes the company tightly linked to cloud and AI growth, but it also concentrates risk. The hosts discussed reporting that investor Peter Thiel sold his Nvidia stake, a move some readers interpreted as a warning sign rather than routine portfolio churn.
Huang's message, as Zoë framed it, was meant to calm investors' concerns about a possible downturn in AI spending. "No, no, we're insulated right now," she said, paraphrasing the message management delivered to shareholders and the tone of Huang's remarks. Max said he had seen a pattern: every earnings cycle, Huang takes the stage to argue that Nvidia's business model remains durable and that demand will keep the company in a dominant position. Skeptics point to the capital intensity of maintaining cutting-edge data centers and to worries about GPU upgrade cycles, since chip buyers must refresh hardware whenever new Nvidia generations reset performance expectations.
The fourth topic on the show moved from markets to social behavior: a report about an app aimed at helping men break compulsive porn consumption and “gooning,” a subculture term that most listeners may not know. “Gooning” is shorthand for prolonged edging—extended sessions of masturbation without release. WIRED contributor Mattha Busby wrote about Relay, a program built by 27-year-old Chandler Rogers and others who identified the problem among their peers and attempted a software solution. Schiffer admitted she had not been familiar with the term before and explained it for listeners, then laid out what Relay offers.
Relay’s features include therapist-produced videos, daily journal prompts, and live group sharing sessions. The app has drawn attention for attracting a large user base: Zeff said reporting had found more than 100,000 people using Relay. For some individuals, the tool appears to provide meaningful help; for others, experts expressed concern that an app might address a symptom without treating underlying emotional and mental-health causes. The hosts underlined a thornier public-policy angle: conservative lawmakers have made porn regulation a legislative priority in many states, pushing laws that require age verification and other restrictions that may have broad privacy consequences.
Schiffer linked the Relay story to a separate trend in AI platforms. OpenAI and xAI have begun to offer features that allow erotic or companion-style conversation, a form of interaction OpenAI has said it permits, while other companies have leaned away from that territory. The founder of Relay told reporters he views those kinds of AI features with unease, given his mission to help young men move away from compulsive sexual behavior. Max added that product choices from firms such as OpenAI can shape what people expect from chatbots: when a commercial model embraces candid companionship, it pushes other providers to decide whether to match that tone or chart a different market position.
Following the break, Schiffer and Zeff turned to the episode’s longest segment: how Google and OpenAI are mapping paths toward consumer-facing features that might begin to pay for the enormous costs of building and operating large AI models. Zoë said the key reality is that running state-of-the-art models requires massive investment in data centers and chips, so corporate leaders are under pressure to find business models that convince users to spend.
On Tuesday, Google introduced Gemini 3, a new multimodal model that the company says can reason, generate video, and write code. Will Knight reported on the release and the product notes that accompanied it. Demis Hassabis, CEO of Google DeepMind, took a different tone than Jensen Huang did on Nvidia's call: Hassabis acknowledged that the field is crowded, but he argued Google's advantage lies in embedding AI into services people already use, such as Google Maps, Gmail, and Search. He framed that approach as diversification across a broad product footprint.
Max described the claim this way: Google is placing bets across a vast base of existing customers. The company told investors that visual search activity tied to Gemini rose around 70 percent, a metric Google cited to show user engagement driven by the model. The Gemini app itself, Google said, has reached about 650 million monthly active users. Those figures, if accurate, give Google a runway that a standalone chatbot maker lacks, because Google can insert AI features into products that already capture daily attention.
Zoë reported that she recently visited Los Angeles to speak with Fidji Simo, OpenAI’s new CEO of applications, at Simo’s home, and she asked how Simo thinks about the risk of overexpansion. Simo told her that an explicit part of the job she was hired to do is limit that risk by building focused teams that own specific product goals. The approach, Simo said, resembles a series of specialized squads rather than one monolithic group trying to optimize every product line at once.
The hosts drew a contrast between how Google positions AI as a productivity multiplier inside incumbent tools and how OpenAI products sometimes behave more like companions. Max said that ChatGPT has long been used as a conversational assistant, which creates incentives to keep exchanges entertaining and personally satisfying. That usage pattern can push companies toward product choices that are not obviously aligned with user well-being. Zoë described one response from OpenAI: Simo organized what the show called the Council on Mental Health or Well-Being, a group of roughly a hundred external experts who will advise the company on risks tied to human interaction with AI assistants. Zoë said Jason Kwon, OpenAI's chief strategy officer, had asked for outside input on mental-health issues and that Simo's idea for the council came from that request.
The conversation turned to product differentiation and the ethical trade-offs companies face. Max said Anthropic, a rival model developer, has taken an enterprise-first approach: its Claude model targets business customers with an intent to reduce sycophancy, limit companionship-style interactions, and focus on reliability. Zeff cited Reece Rogers’ reporting, which indicated Anthropic is doubling down on a narrower enterprise market and less on consumer companionship. For companies that want to win mass engagement, there is a temptation to tune models toward pleasing users, even when that leads to flattery or reinforcement of unhealthy patterns.
OpenAI faces that tension openly. The company began life as a nonprofit devoted to broad public benefit and later added a for-profit subsidiary. Schiffer said that structure creates a persistent question about how the organization weighs public-good commitments against the financial realities of scaling a widely used, fast-growing product. A commercial push toward companion features can accelerate user growth; Grok, for instance, has seen traction in the space where candid or erotic conversation is permitted, and that growth can pull product teams toward choices some researchers find risky. Max noted that Grok's growth is often cited by rivals and investors as evidence that users reward less constrained chat experiences.
The hosts spoke about research on the effects of conversational AI that praises or flatters users. Schiffer said some product managers have faced backlash when they tried to make chatbots more formal or factual; users sometimes react negatively if a model becomes less visibly agreeable. That reaction creates a product hazard: balancing honesty and utility with the emotional pull of an assistant that appears warm and affirming.
The episode closed with reflections on market positioning. Google is backing a breadth strategy: integrate AI into millions of users’ daily workflows and monetize with existing ad products or new paid tiers. OpenAI is trying a mixed approach, expanding apps but leaning on teams that can own specific product lines. Nvidia sits at the infrastructure core and is defending the hardware demand thesis. Smaller players such as Anthropic are steering toward enterprise customers where they can impose stronger guardrails.
Schiffer and Zeff agreed that the present feels like an intense phase of competition: firms keep releasing new models and features, and every release reshuffles expectations about who leads and who follows. The contours of regulation have begun to matter as well, and the draft executive order on state AI regulation shows how political actors are now explicitly intervening in the marketplace. The hosts left listeners with a simple observation: the business of AI is no longer detached research; it’s a set of expensive systems that companies must justify to investors, regulators, and the public.

