Open-Source AI Isn’t Enough—Onchain AI Is the Next Frontier
Forget the hype—open-source AI won’t save us. The real game-changer? Putting AI on the blockchain.
Why decentralization matters: Onchain AI cuts out middlemen, bypasses corporate gatekeepers, and lets algorithms trade value directly. No more ‘ethics committees’ funded by the same banks that rigged LIBOR.
The finance angle: Wall Street will still front-run your AI trades, but at least the code will be auditable.
The promise and the pitfalls
Open-source AI models like DeepSeek’s R1 and Replit’s latest coding agents show us the power of accessible technology. DeepSeek claims it built its system for just $5.6 million, nearly one-tenth the cost of Meta’s Llama model. Meanwhile, Replit’s Agent, supercharged by Claude 3.5 Sonnet, lets anyone, even non-coders, build software from natural language prompts.
The implications are huge. Smaller companies, startups, and independent developers can now build specialized AI applications, including new AI agents, on top of these existing, robust models at lower cost, at a faster pace, and with far less friction. That could create a new AI economy in which access to models is king.
But where open-source shines—accessibility—it also faces heightened scrutiny. Free access, as seen with DeepSeek’s $5.6 million model, democratizes innovation but opens the door to cyber risks. Malicious actors could tweak these models to craft malware or exploit vulnerabilities faster than patches emerge.
Open-source AI doesn’t lack safeguards by default. It builds on a legacy of transparency that has fortified technology for decades. Historically, engineers leaned on “security through obscurity,” hiding system details behind proprietary walls. That approach faltered: vulnerabilities surfaced, often discovered first by bad actors. Open-source flipped this model, exposing code, like DeepSeek’s R1 or Replit’s Agent, to public scrutiny and fostering resilience through collaboration. Yet neither open nor closed AI models inherently guarantee robust verification.
The ethical stakes are just as critical. Open-source AI, much like its closed counterparts, can mirror biases or produce harmful outputs rooted in training data. This isn’t a flaw unique to openness; it’s a challenge of accountability. Transparency alone doesn’t erase these risks, nor does it fully prevent misuse. The difference lies in how open-source invites collective oversight, a strength that proprietary models often lack, though it still demands mechanisms to ensure integrity.
The need for verifiable AI
For open-source AI to be trusted, it needs verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or skewing the automated decisions that increasingly shape our world. It’s not enough for models to be accessible; they must also be auditable, tamper-proof, and accountable.
By using distributed networks, blockchains can certify that AI models remain unaltered, that their training data stays transparent, and that their outputs can be validated against known baselines. Unlike centralized verification, which hinges on trusting one entity, blockchain’s decentralized, cryptographic approach stops bad actors from tampering behind closed doors. It also flips the script on third-party control: oversight is spread across a network, and contributors gain incentives to participate. Today, by contrast, unpaid contributors fuel trillion-token datasets without consent or reward, then pay to use the results.
A blockchain-powered verification framework brings layers of security and transparency to open-source AI. Storing models onchain or via cryptographic fingerprints ensures modifications are tracked openly, letting developers and users confirm they’re using the intended version.
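To make that concrete, here is a minimal sketch of how such model fingerprinting could work, assuming an onchain registry that maps a model identifier to a published hash. The registry below is an in-memory stand-in for a smart contract, and names like ONCHAIN_REGISTRY, register_model, and verify_model are illustrative, not any particular protocol’s API.

```python
import hashlib
from pathlib import Path

# Stand-in for an onchain registry contract: in practice this mapping would
# live in a smart contract keyed by model identifier and version.
ONCHAIN_REGISTRY: dict[str, str] = {}

def fingerprint_model(weights_dir: str) -> str:
    """Compute a SHA-256 fingerprint over a model's weight files."""
    digest = hashlib.sha256()
    for shard in sorted(Path(weights_dir).rglob("*")):  # stable order across machines
        if shard.is_file():
            digest.update(shard.read_bytes())
    return digest.hexdigest()

def register_model(model_id: str, weights_dir: str) -> None:
    """Publish a model's fingerprint (this is where an onchain transaction would go)."""
    ONCHAIN_REGISTRY[model_id] = fingerprint_model(weights_dir)

def verify_model(model_id: str, weights_dir: str) -> bool:
    """Recompute the fingerprint locally and compare it with the published one."""
    return ONCHAIN_REGISTRY.get(model_id) == fingerprint_model(weights_dir)
```

Before loading weights, a client would recompute the fingerprint and refuse to run any model whose hash no longer matches the one published onchain.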
Capturing training data origins on a blockchain shows which sources a model draws from, cutting the risk of hidden biases or manipulated inputs. And cryptographic techniques can validate outputs without exposing the personal data users share, which today often goes unprotected, balancing privacy with trust.
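As a rough illustration of what a training-data provenance record could look like, the sketch below folds per-record hashes into a single Merkle-style root; only that root, not the raw records, would need to be anchored onchain. The record structure and field names here are assumptions for illustration, not a description of any existing system.

```python
import hashlib
import json

def leaf_hash(record: dict) -> str:
    """Hash one training example; canonical JSON keeps the hash reproducible."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def merkle_root(leaves: list[str]) -> str:
    """Fold leaf hashes pairwise into one root that commits to the whole dataset."""
    level = leaves or [hashlib.sha256(b"").hexdigest()]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [
            hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
            for i in range(0, len(level), 2)
        ]
    return level[0]

dataset = [{"source": "example-corpus", "text": "example one"},
           {"source": "example-corpus", "text": "example two"}]
commitment = merkle_root([leaf_hash(r) for r in dataset])
print(commitment)  # the value anchored onchain, recomputable by anyone holding the data
```

Anyone holding the original records can recompute the root and confirm that the dataset a model claims to be trained on has not been swapped or silently edited.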
Blockchain’s transparent, tamper-resistant nature offers the accountability open-source AI desperately needs. Where AI systems now thrive on user data with little protection, blockchain can reward contributors and safeguard their inputs. By weaving in cryptographic proofs and decentralized governance, we can build an AI ecosystem that’s open, secure, and less beholden to centralized giants.
AI’s future is based on trust… onchain
Open-source AI is an important piece of the puzzle, and the AI industry should work to achieve even more transparency—but being open-source is not the final destination.
The future of AI, and its relevance, will be built on trust, not just accessibility. And trust can’t be open-sourced. It must be built, verified, and reinforced at every level of the AI stack. Our industry needs to focus its attention on the verification layer and on integrating safe AI. For now, bringing AI onchain and leveraging blockchain tech is our safest bet for building a more trustworthy future.
David Pinger is the co-founder and CEO of Warden Protocol, a company focused on bringing safe AI to web3. Before co-founding Warden, he led research and development at Qredo Labs, driving web3 innovations such as stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held roles in product, data analytics, and operations at Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Panthéon-Sorbonne University.