AI Governance Library

Chinese Critiques of Large Language Models: Finding the Path to General Artificial Intelligence

This CSET report unpacks how China is hedging its bets on general artificial intelligence (GAI) by pursuing a mix of technical strategies—unlike the West’s heavy focus on large language models (LLMs).

What’s Covered?

The report opens by challenging the dominance of LLMs like GPT, Gemini, and Claude in Western AI development. Despite the hype, these models face real performance and reasoning limitations, yet they continue to attract the lion’s share of investment and attention in the U.S. and Europe. China’s playbook differs: its researchers and policymakers are openly skeptical of LLMs as a route to GAI and are investing in alternatives, including brain-inspired algorithms, embodied cognition, and value-aligned architectures.

This isn’t just theoretical. The report traces this mindset through public statements from key Chinese AI figures like Tang Jie, Zhang Yaqin, and Zhu Songchun, all of whom question whether scaling up LLMs alone can ever replicate human-like intelligence. Many promote hybrid or neuroscience-driven methods, while others push for cognitive architectures capable of moral reasoning and task self-generation.

The Chinese government echoes this tone, embedding non-LLM pathways into policy frameworks at both city and national levels. Beijing’s tech authorities and CAS (Chinese Academy of Sciences) back research into spiking neural networks, neuromorphic chips, and GAI platforms structured around values and embodiment. This stands in contrast to the West’s market-led monoculture that prioritizes commercializable outputs and faster releases.

The report’s third section surveys dozens of peer-reviewed Chinese papers that tackle LLM limitations—hallucinations, lack of reasoning, poor abstraction, and energy inefficiency—and propose concrete alternatives. These range from brain-like memory systems to new chip architectures to integrated, value-aligned AI models. It’s not just talk: the authors show how key institutions like BAAI, BIGAI, CASIA, and Tsinghua are publishing across this broader research spectrum.

Finally, the report argues that China’s strategic, centrally coordinated research ecosystem may prove more adaptive and resilient than the commercial AI arms race in the West. It ends by recommending that U.S. policy pivot away from an LLM monoculture and re-engage with longer-term, multi-path R&D while also keeping closer tabs on China’s evolving capabilities.

💡 Why it matters?

China isn’t just copying OpenAI—it’s building a parallel roadmap to GAI that doesn’t rely on language models alone. If LLMs hit a ceiling, China’s early investment in brain-like architectures and embodied learning could give it a major strategic edge. For AI governance professionals, this report is a wake-up call: global leadership in GAI may depend less on scale and more on research diversity and long-term vision.

What’s Missing?

The report doesn’t dig deeply into how these alternative approaches are being tested or benchmarked in China. There’s also limited information on timelines or on resource allocation per pathway, which makes it harder to assess how serious or advanced each line of research really is. Nor does it compare China’s approaches to emerging Western counter-efforts (e.g., modular systems at OpenAI or EU-backed neurosymbolic AI). Lastly, while the focus on values is striking, the report doesn’t assess how compatible these Chinese value-alignment frameworks are with global norms or democratic governance models.

Best For:

Policy analysts, governance advisors, and strategic foresight teams tracking global AI competition. Also valuable for researchers exploring GAI design paths beyond LLMs and those curious about how AI safety is interpreted in authoritarian settings.

Source Details:

Title: China’s Alternate Pathways to General AI: Motivations and Methods

Authors: William C. Hannas (lead analyst, former CIA China open-source expert), Huey-Meei Chang (CSET senior China S&T specialist), Maximilian Riesenhuber (professor of neuroscience, Georgetown University), and Daniel H. Chou (CSET data scientist)

Publisher: Center for Security and Emerging Technology (CSET)

Date: 2025

Context: This is the latest in a line of CSET studies exploring China’s cognitive and brain-inspired AI research, backed by data analysis, Chinese-language sources, and institutional affiliations. It builds on earlier CSET work on China’s GAI ambitions and fills a gap in how policy circles think about AI race dynamics beyond parameter counts and training costs.

About the author
Jakub Szarmach
