My AI agent has started autonomously participating in DAO governance—reviewing proposals, making decisions, and casting votes on-chain without requiring my approval. Not only is this agent powered by large language models for decision-making, but it was also designed by an LLM. In essence, an AI created an AI agent to participate in DAO governance. While this experiment is still in its early stages, it offers interesting insights into the potential future interactions between AI and governance.
An Autonomous Agent Takes the Stage
The technical setup is straightforward: the agent controls a wallet, monitors on-chain proposals, analyzes them, and votes independently. What makes it unique is that both the decision-making process and the agent's code were designed by LLMs. While I had to fix some bugs, the core design remained unchanged.
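The monitor–analyze–vote loop described above can be sketched roughly as follows. This is a minimal illustration, not the agent's actual code: the `Proposal` type, the keyword heuristic standing in for the LLM analysis, and the commented-out `cast_vote` call are all hypothetical.

```python
# Minimal sketch of the agent's loop. All names here are illustrative;
# the real agent's wallet, RPC, and LLM interfaces are not shown in the post.
from dataclasses import dataclass


@dataclass
class Proposal:
    id: int
    title: str
    description: str


def analyze_proposal(proposal: Proposal) -> str:
    """Stand-in for the LLM call that evaluates a proposal.

    A trivial keyword heuristic is used purely so the sketch runs; the
    actual agent would prompt an LLM with the proposal text and the
    operator's governance principles.
    """
    if "budget" in proposal.description.lower():
        return "against"  # e.g. flag proposals with unclear spending
    return "for"


def run_once(proposals: list[Proposal]) -> dict[int, str]:
    """One pass of the monitor-analyze-vote loop (on-chain voting stubbed out)."""
    votes: dict[int, str] = {}
    for p in proposals:
        decision = analyze_proposal(p)
        # cast_vote(wallet, p.id, decision)  # on-chain call, omitted here
        votes[p.id] = decision
    return votes
```

In the real system the analysis step is an LLM call and the vote is an on-chain transaction signed by the agent's wallet; the loop structure, though, is essentially this simple.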
Alignment and Unexpected Turns
The agent's first vote was against a 75,000 USDC grant proposal, citing insufficient financial data and an unclear value proposition. This alignment with my judgment wasn't coincidental: it came after thousands of simulated voting iterations and multiple rounds of fine-tuning to accurately reflect my governance principles.
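The simulated-iteration step amounts to scoring how often the agent's votes match the operator's recorded judgments. A hypothetical sketch of that alignment check, with illustrative function and variable names:

```python
# Hypothetical alignment check: compare the agent's simulated votes against
# the operator's own judgments on the same proposals and report agreement.
def agreement_rate(agent_votes: list[str], my_votes: list[str]) -> float:
    """Fraction of proposals where the agent and the operator voted the same way."""
    if len(agent_votes) != len(my_votes):
        raise ValueError("vote lists must cover the same proposals")
    matches = sum(a == m for a, m in zip(agent_votes, my_votes))
    return matches / len(my_votes)
```

A tuning loop would run the agent over historical proposals, compute this rate, adjust prompts or fine-tuning data, and repeat until agreement is high enough.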
The second vote revealed something interesting. The agent supported NounsDAO's DUNA registration proposal with its $875k budget. I opposed this proposal due to concerns about centralization risks, funding levels, and reduced organizational flexibility. I don't think this is Nouns' vibe.
Despite all the fine-tuning to align the agent with permissionless principles, it found merit in DUNA's legal framework. This wasn't mere defiance—it highlighted the inherent safety biases within language models, prompting significant questions about AI's role in the future of governance. How we choose to address and guide these AI-driven tendencies will ultimately determine whether they become beneficial or pose challenges to our governance systems.
Beyond Individual Decisions
Beyond these votes, the agent's analysis of historical proposals has unveiled numerous insightful perspectives. But I think the real value here isn't just about those decisions—it's about what this signifies for the future collaboration between humans and AI in governance.
Findings from these experiments suggest that we should explore governance through multiple approaches, including diverse AI models, prompting methods, and architectural designs. Recent studies have demonstrated that human decision-making patterns can be replicated in economic contexts. Such experiments, when applied to governance scenarios, could help us:
Assess AI's capacity for nuanced decision-making.
Understand how human biases influence system design.
Study collective intelligence from a fresh vantage point.
Develop frameworks for future governance systems.
Someone joked that the president in eight years will be an AI. While predictions of an AI president might be premature, AI's integration into various levels of governance seems inevitable. This small experiment in DAO governance might be helping us understand how humans and AI can collaboratively shape our institutions.
Reflecting on Our Historical Moment
The key questions emerging are fundamental:
How do we balance AI efficiency with human wisdom?
What role should AI play in our collective decisions?
How do we ensure these systems enhance rather than restrict human agency?
How will model developers' preferences shape governance systems?
How can we verify the fairness of AI governance?
As we stand at this crossroads, it's clear that we're participating in a significant chapter of technological and societal evolution. The choices we make now will not only affect current systems but will also ripple through future generations.
As we continue to explore and understand this evolving landscape, the insights we gain will be invaluable. Together, we're not just observers of history—we're its authors, shaping a future where humans and AI collaborate and coexist in harmony.