A new startup is attempting to solve one of the most persistent problems in the AI era: how to provide expert advice that is actually trustworthy, private, and legally sound.
Onix, a newly launched platform led by former WIRED contributor David Bennahum, describes itself as a “Substack for chatbots.” Instead of subscribing to a writer’s newsletter, users subscribe to an “Onix”: an AI version of a celebrated human expert, trained to mimic that person’s specific knowledge, personality, and advice.
Turning Expertise into Capital
The business model behind Onix is a direct response to the limits of the digital “gig economy.” For professionals like doctors, therapists, or wellness influencers, time is their most limited resource. Onix aims to turn an expert’s knowledge into a “capital asset” that generates revenue 24/7 without requiring the expert to be physically present.
This isn’t entirely uncharted territory. For example, parenting expert Becky Kennedy has successfully built a massive business around a specialized chatbot. For Onix, the goal is to scale this model to thousands of experts, starting with a vetted group of 17 specialists focused primarily on health and wellness.
Solving the “AI Problem”
The platform attempts to address the three biggest criticisms of current Large Language Models (LLMs) like ChatGPT:
- Privacy: Onix uses “Personal Intelligence” technology, storing user data encrypted on the user’s device. The company claims that even under a government demand, it can hand over only basic contact information, not the contents of private conversations.
- Intellectual Property: Unlike general AI models that “scrape” the internet without permission, Onix bots are trained specifically on the content provided by the experts themselves, ensuring they are compensated for their IP.
- Accuracy (Hallucinations): By using “guardrails” that restrict the AI to a specific subject matter, the company aims to minimize the tendency of AI to make things up.
However, early testing suggests these guardrails are not foolproof. During user trials, bots occasionally “broke character,” drifting into unrelated topics or hallucinating facts when prompted with “jailbreaking” questions.
The Ethical Gray Zone: Guidance vs. Treatment
One of the most significant tensions within Onix is the line between educational guidance and medical advice.
While Onix includes clear disclaimers stating that its bots do not provide medical treatment, human behavior tells a different story. In a world where many people use free AI tools as makeshift therapists because they cannot afford real healthcare, the distinction blurs.
This leads to several emerging concerns:
- Product Placement: Because these bots are trained by experts who often sell their own products (supplements, devices, or books), the AI naturally tends to recommend those specific items, creating a built-in loop of automated marketing.
- The Human Connection: While an AI can mimic “empathy” and “compassion,” it lacks a physical presence. There is a psychological risk in replacing human-to-human support with a simulation, especially in high-stress wellness or mental health contexts.
- Vetting at Scale: While the initial 17 experts are highly vetted, Onix has yet to explain how it will maintain quality and ethics as it grows to include thousands of experts.
The Big Question: Does it actually work?
As Dr. Robert Wachter of UCSF points out, the ultimate metric for Onix is empirical: does it actually work?
If a digital twin can successfully help a user understand their body, manage stress, or navigate a “pediatric journey” more affordably than a human professional, it could be a revolutionary tool for accessibility. However, if the bots fail to maintain accuracy or provide a hollow version of human empathy, they may remain little more than sophisticated, automated brochures.
Conclusion: Onix represents a bold attempt to monetize human expertise through AI, offering a potential bridge for those seeking affordable guidance. Yet, its success depends on whether it can navigate the thin line between helpful automation and the loss of genuine human connection.