Building community around human agency: a year in review
Exercising maximum agency while making friends and influencing people
You know what you don’t get to do when you coin a term like “agentic tech” to define a set of products and design principles that enhance human agency in technology? Sit around and hope the idea of agency simply catches on.
You actually have to go out and market the vision until it becomes a household term. In other words, you can’t wait for permission. You exercise maximum agency and make the thing happen.
Which is how agentic tech has now grown into a full-blown investment category with hundreds of founders building in the space, millions of dollars deployed, and new ideas driving better ways for humans to play an active role in the technologies that shape our lives.
I started writing about agentic tech in 2021, hosting online firesides with experts building across privacy, digital identity, decentralized platforms, cryptography, confidential compute, and a whole host of categories I thought of as the “primitives” that needed to be in place to secure human agency and arrive at a future where humans live well with technology.
For example, my first fireside guest, way back in 2022, was Paul Frazee, the CTO of Bluesky, sharing the early outlines of what would soon become the defining new entrant in the social media space, completely reimagining what a decentralized newsfeed might look like.
The guests on my firesides didn’t yet know they were building what I was calling agentic tech. But I figured as long as I kept talking about it, people would eventually catch on that the individual silos they were building in were actually part of a vast network of interconnected spaces, all contributing key technological primitives toward an agentic future. What’s more, that there exists an even vaster network of humans to conspire with instead of going it alone against the default paradigms of tech incumbents that constrain human agency, trade privacy for convenience, and bankroll an economy often directly at odds with human flourishing.
And I was right. The term caught on. People started building in it, pitching ideas as part of it, getting funded based on it. A few months into running monthly firesides with the small but active community I’d organized, ex/ante picked it up as their investment thesis and brought me in as ecosystem advisor.
But now agentic tech had a vibrant online community spread (mostly) across NYC and San Francisco — and no one physically around me in Austin who had any idea what I was doing or why it mattered. And I was experiencing Zoom fatigue from running online firesides with people I never got to actually hang out with. So I decided to see what I could do about gravitating people toward these ideas in person.
When I started Tethics & Chill in Austin in 2023, I just wanted to find the smartest people here and make them my friends. I also wanted to bring these conversations shaping the future of agentic tech to everyone, not just insiders. So I started hosting monthly salons in Austin that mixed the formal (tech + ethics) with the fun (the chill part), bringing artists and musicians together with technologists for evenings where we don’t just engage our intellectual muscles, but drop into our bodies and connect on a human level before diving into the heady conversations that follow. I found that this way, the conversations were more vibrant and honest, and the participants more confident about contributing, even if they didn’t consider themselves experts.
The format worked exceptionally well. While I was running Tethics & Chill in Austin, Zoe, the founder of ex/ante, was running in-person Tethics & Chill breakfasts in NYC. At the same time, we started experimenting with an online version of Tethics & Chill, a new iteration of the original firesides with more structure and an even bigger community. We doubled our online membership in 2025 alone, entirely through word of mouth.
I also launched the Agentic Tech Podcast, a rebrand of the Privacy Podcast I’d hosted previously. While my initial work in this space focused on privacy, it was clear to me that privacy is just one of the components that go into securing human agency: a part of the thing, but not the thing in itself. As the mission broadened and awareness grew that agentic tech was a real category one could actually build in, it was time to widen the scope of the podcast conversations.
In 2025, we proved this model of community-building around a new category works: give people a place to talk about it, and they will come. But they’re even hungrier for in-person opportunities to meet and share space. So in 2026, Zoe, the rest of the team, and I will be focusing more on in-person Tethics & Chill events: in Austin, in NYC, and even in San Francisco.
By the numbers
13 events. 586 attendees. 75 new Signal community members.
We averaged 40 people per conversation — intentionally intimate, allowing conversation to emerge and relationships to flourish. We gathered around breakfast tables in NYC, packed into private Austin homes, and joined from across time zones online. These weren’t passive audiences. They were engineers, ethicists, founders, policymakers, and researchers asking the hard questions about agency, governance, and collective intelligence.
The format proved resilient: our online events actually drew more people on average than in-person gatherings. But there’s something irreplaceable about the energy when you’re in the room together, so that’s where we’re focused in 2026.
The conversations that shaped 2025
We explored a full spectrum of technology and human agency:
Governance & Infrastructure — Dean Ball, architect of America’s AI Action Plan, shared how the AI sausage actually gets made at the highest levels. Mozilla Foundation’s Executive Director Nabiha Syed revealed the Foundation’s plans for unlearning “defaults” to envision (and fund!) new possibilities for the open web. And Network Goods’ Connor McCormick walked us through novel governance mechanisms to internalize externalities and distribute resources beyond traditional market competitors.
Safety & Alignment — Arm’s Zach Lasiuk taught us how he’s rolling out a framework for threat modeling potential negative externalities of even the most successful product design. RAND researcher Sunischal Dev joined philanthropist and poker champion Igor Kurganov to help us think through biosecurity and the latest in AI bio-safety benchmarking. Ivan Vendrov (formerly Midjourney, Anthropic, Google) shared how he thinks about integrating AI into our society and biosphere, preventing a bureaucratic AI State-God, and building a wholesome social fabric.
Democracy & Truth — NYU’s Zeve Sanderson and Jigsaw’s Beth Goldberg shared new research on leveraging generative AI to support democratic politics and protect against the impacts of generative AI on elections. Renée DiResta, author of Invisible Rulers, walked us through the influence machine and what truth means in networked systems.
Human Autonomy — Cosmos Institute’s Brendan McCord and Workshop Labs’ Luke Drago packed a room in Austin to explore autonomy in the automation age. Andrew Mayne, OpenAI’s first prompt engineer, gave us an intimate AMA about shaping the conversation between humans and machines. And ProAlign’s Erin Beffa held a rousing workshop on OSINT and protecting yourself and your data against manipulation and theft.
What’s next
As we head into 2026, the stakes are getting higher and the questions more urgent. We’re at a breaking point with AI intimacy as lawsuits pile up against companion apps, and we’re finally reckoning with the mental health crisis of synthetic relationships designed to maximize engagement over user wellbeing. Brain privacy is becoming an even bigger topic than when we first explored it in 2024 with Brown University’s Nita Farahany: as consumer neurotechnology moves from research labs to everyday devices, your emotional state, attention patterns, and cognitive load are becoming just another data stream to monetize. And increasingly, we’re losing the ability to distinguish between AI agents acting on our behalf — agentic tech, aligned with the agency of the user the agent represents — and AI agents acting on behalf of the platforms that deploy them. The latter, while certainly agentic, raises key questions: whose agency is the AI agent representing — and is it self-agentic or human-agentic?
Meanwhile, there are so many new companies I’m eager to highlight, both on the Agentic Tech Podcast and via Tethics & Chill. I’ve even thrown my hat into the agentic tech ring, founding a new venture in the problem space I’ve been obsessed with since I started thinking about technology and human agency during my graduate research years ago: how much more value could we create, and how much more resilient could society become, if technology were optimized for human thriving rather than engagement metrics? More on that in 2026!
Thank you to every speaker who brought their expertise and humility to our community. Thank you to everyone who showed up, asked questions, and took these ideas into your own work. And thank you for trusting me, coming on the podcast, joining us at Tethics & Chill, and growing this ecosystem alongside everyone else in this Rebel Alliance.
Parting thoughts
Do the thing.
Do the thing and they might come.
Don’t wait for permission.
Because if you don’t do the thing, they definitely won’t come — since the thing doesn’t exist.
So be crazy. Be agentic. Give yourself permission. Create a whole tech category if that’s your thing.
But do the thing.
Want to join us?
Reach out to get added to our Signal chat, subscribe to the Agentic Tech Podcast, and add the Tethics & Chill calendar to get notified about 2026 events.