Major corporations control AI, leaving decentralized AI (DeAI) in the dust. To build a more decentralized world, the sector must execute a focused DeAI strategy, with shared standards between projects and without compromise.
In April, a UN report warned that AI’s projected $4.8-trillion market is dominated by just 100 companies, most of them based in the US and China. Centralized AI incumbents have the money and the connections to control this massive new industry, with significant implications for society.
These companies, all employing centralized AI technology, have run into their fair share of headaches. For example, Microsoft’s Copilot garnered attention for creating explicit, inappropriate images, such as children in compromising scenarios. This sparked a public and regulatory backlash.
Although Microsoft introduced stricter moderation, the incident had already demonstrated that centralized AI can harbor problems, in part because of its closed-source code.
In the financial sector, Citadel was embroiled in an AI trading scandal in which algorithms allegedly manipulated stock prices by creating artificial trading volume.
Google’s involvement in Project Maven, a Pentagon pilot program applying AI to military technology, has raised ethical questions.
“We believe that Google should not be in the business of war,” reads a letter penned by Google employees and addressed to Sundar Pichai, the company’s CEO. The employees requested that Google leave Project Maven.
“We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter states.
So much for “Don’t be evil” — the company’s old slogan.
These situations are clear examples of centralized AI’s potential failures, including ethical lapses, opaque decision-making and monopolistic control. DeAI’s open-source ethos, community governance, audit trails and distributed computing can ensure that the future of AI belongs to more than a few massive corporations.