04/03 2026

"The End of the Pseudo-Intelligent Era, the Dawn of True Intelligence"
Author | Zhen Yao · Editor | Li Guozheng · Produced by | Bangning Studio (gbngzs)
Recently, the buzz around large models in cars has resurfaced.
On March 26, FAW Hongqi and Alibaba Cloud jointly announced that the QianWen intelligent agent would be integrated into Hongqi's smart cockpit, debuting in the new Hongqi HS6 PHEV.
Meanwhile, that evening, the IM LS8 opened for pre-sale. A few days earlier, the automaker had announced itself as the world's first to mass-produce a vehicle equipped with the QianWen large model—its super intelligent agent, IM Ultra Agent, is one of the LS8's core selling points.
Some large models are also on the verge of being integrated into vehicles.
Dongfeng Motor recently quietly announced that its self-developed Taiji large model has completed the generative AI service filing with the Cyberspace Administration of China, obtaining compliance certification. This means the AI system, built exclusively for the automotive industry, is one step closer to in-vehicle mass production.
"The first half of the automotive industry was an energy revolution, and the second half will be an intelligence revolution—more accurately, an AI-driven automotive revolution," said Xiang Jiao, CTO of IM Motors. As the foundation for intelligent driving matures and large models explode in capability, the era of the "super intelligent agent" in cars has arrived.
Huang Rui from the AI Lab at Dongfeng Motor's R&D Headquarters stated, "Just as humans use their eyes, ears, and touch to make comprehensive judgments and decisions, cars also need multi-sensory collaboration and deep brain-like thinking." He believes the core challenge for the industry is enabling cars to truly understand the world, comprehend users, and evolve autonomously.
Behind this surge of large models in cars, many questions arise—is this intelligent frenzy a true revolution in in-car AI, or just another gimmick born of industry involution? Are the highly touted large-model intelligent agents a must-have for cars, or merely flashy but impractical features?
▍01 The End of the Pseudo-Intelligent Era
Over the past five years, in-car voice assistants have been a standard selling point for automakers.
Open the promotional page of almost any new car and you'll see claims like "99% voice wake-up rate," "supports continuous dialogue," and "multi-round interaction"—as if simply saying, "Xiao X, turn on the AC" represented the entirety of intelligence.
But the truth is, most in-car voice assistants are pseudo-intelligent scams.
Traditional in-car voice systems are based on fixed command libraries, essentially keyword matching—you must use specific phrases for it to perform set actions.
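The limitation described above is easy to see in a minimal sketch of such a command library (the phrases and action names here are hypothetical, not any vendor's actual interface):

```python
# Minimal sketch of a traditional command-based voice system:
# exact keyword matching against a fixed table of phrases.
COMMANDS = {
    "turn on the ac": "hvac.power_on",
    "open the window": "window.open",
    "navigate home": "nav.route_home",
}

def handle_utterance(text: str) -> str:
    """Return the mapped action, or fall back on anything outside the table."""
    action = COMMANDS.get(text.strip().lower())
    return action if action else "Sorry, I didn't understand."
```

Any phrasing not in the table fails: `handle_utterance("Turn on the AC")` maps to an action, while `handle_utterance("I'm cold")` falls straight through to the apology, because the system matches strings rather than intent.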

For example, if you say, "I'm cold," it might only turn on the AC. If you say, "I want to find a restaurant with parking," it will list restaurants but won't filter for parking availability. If you say, "Leave for work at 8 AM tomorrow and grab a coffee on the way," it can only process these as two separate commands, unable to create a coherent itinerary.
Even more awkwardly, this command-based weak intelligence can't even cover the most basic scenarios. While driving, if you try to navigate and adjust the AC temperature simultaneously, the voice assistant stutters. If you have an accent, it fails to recognize you. If you make a vague request, it simply replies, "Sorry, I didn't understand."
Automakers aren't unaware of this, but under pressure from the intelligence race, they've had no choice but to double down. The industry competes on wake-up speed, interaction rounds, and recognition accuracy—yet no one dares admit that voice assistants without large model support are essentially upgraded versions of "artificial stupidity," incapable of solving real user pain points.
"Past voice interactions were essentially command-based—users issued commands, and the vehicle executed operations," said Xiang Jiao. "This didn't truly unleash the value of artificial intelligence."
It wasn't until the emergence of large model intelligent agents that this scam was fully exposed.
Xiang believes what consumers truly need now is for cars to understand their intentions and proactively handle tasks. This means interactions will evolve from voice-controlled vehicle operations to natural conversations with the car, enabling it to truly understand you and fulfill various needs across all scenarios—what they call "comprehensive assistance."

Compared to traditional voice assistants, large model intelligent agents shift from passive execution to proactive service—no longer just following commands, but understanding needs and getting things done.
For example, saying, "Plan a business trip tomorrow, avoid morning rush hour, and book a hotel and airport transfer" allows it to automatically break down the request, coordinate with navigation, Fliggy, and Gaode, and complete the entire process from route planning to hotel reservation.
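The decompose-then-dispatch pattern behind such an agent can be sketched as follows. This is a simplified illustration under stated assumptions: the sub-task names and services are hypothetical, and the planning step is hard-coded where a real agent would have the large model emit the plan.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    service: str   # which app or tool handles this step
    action: str
    params: dict

def plan_trip(request: str) -> list[SubTask]:
    """Stand-in for the model's decomposition step: in a real agent an
    LLM would generate this plan from the user's request; here it is
    hard-coded for illustration."""
    return [
        SubTask("navigation", "plan_route", {"avoid": "morning_rush"}),
        SubTask("hotel", "book", {"near": "destination"}),
        SubTask("transfer", "reserve", {"to": "airport"}),
    ]

def execute(plan: list[SubTask]) -> list[str]:
    # Dispatch each sub-task in order; a production system would call
    # the corresponding app APIs and handle failures between steps.
    return [f"{t.service}.{t.action}({t.params})" for t in plan]
```

The key contrast with the keyword-matching era is that one compound utterance yields an ordered plan spanning several applications, rather than a single table lookup.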
Users no longer operate the infotainment system; instead, they have a dedicated travel assistant. Thus, the competition for intelligence in car cockpits has shifted from stacking parameters to delivering experiences.
And this marks the entry of in-car AI into the final round—the end of the pseudo-intelligent era and the dawn of true intelligence.
▍02 The Race for Implementation
By 2026, large models in cars are no longer a gimmick but a survival line.
But many misunderstand the rules of this final round—it's not about whose model has higher parameters or more advanced technology, but who can translate technology into experiences users can perceive and rely on.
The trajectory of this race is clear and ruthless.
In 2023, ChatGPT ignited the generative AI craze overseas, followed closely by Baidu's ERNIE Bot, sparking the first wave of excitement around large models in cars in China.

Nearly 10 automakers, including Changan, Jiyue, LanTu, Hongqi, and Great Wall, swiftly announced collaborations. Back then, large models were the entry ticket for automakers' intelligent transformation—having one mattered more than how strong it was.
In 2025, DeepSeek emerged as a dark horse, triggering a second wave of AI integration in cars. Within months, nearly 20 automakers, including Geely, Chery, BYD, Great Wall, and Leapmotor, rushed to adopt it, seeking to leverage this new force to gain visibility and an edge in the second half of the intelligence race.

In reality, over the past two years, many automakers have been stacking parameters—promoting models with hundreds of billions of parameters and supporting multimodal interactions. However, most of these technologies remain at the demonstration stage, unable to be mass-produced or truly solve user pain points.
Only after buying the car do users realize that the so-called large model intelligent agent is still just a voice assistant in disguise, incapable of proactive service.
As 2026 arrives, the hype fades, and the rules of the final round change.
The industry no longer competes on higher parameters or more modalities but on who can translate large models into user-perceptible, user-reliant experiences.
Alibaba's QianWen, Huawei's HarmonyOS Cockpit 5.0, XPENG's Tianji AIOS 6.0, NIO's Banyan, and Dongfeng's Taiji large model—whether developed by suppliers or automakers themselves—have taken different approaches in the same final arena.
Among them, Alibaba's QianWen doesn't stack paper specs but focuses on forming a closed service loop through cloud-based agents plus a full ecosystem.

Its strength lies in breaking down complex, long user instructions into cross-application tasks like navigation, dining reservations, ticket purchases, and itinerary planning with a single phrase. Leveraging Alibaba's mature ecosystem, it offers a complete service chain, controllable costs, and engineering-friendly implementation.
Unlike others, QianWen doesn't deeply bind hardware with intelligent driving but focuses on running the full chain of intention, decision, and execution. This lightweight, efficient model is one of the safer choices for automakers to quickly mass-produce intelligent agents.
The IM LS8 and Hongqi HS6 PHEV have become its implementation carriers, transitioning from voice commands to comprehensive assistance.
In Xiang Jiao's words: "Whether during daily commutes or family outings with kids, a single phrase or command is enough for it to perceive your intentions, understand the scenario, and complete a series of operations."
▍03 Who Will Have the Last Laugh?
Huawei has taken a different path with full-stack synergy. Its HarmonyOS Smart Cockpit (HarmonySpace 5) is labeled as "seamlessly connecting people, vehicles, and homes," setting the benchmark for smooth experiences.
Built on the MoLA hybrid large model architecture, Huawei pursues a full-stack synergy route of hardware + system + ecosystem, excelling in multimodal perception, four-zone voice recognition, continuous dialogue, and fuzzy intention understanding—enabling seamless transitions between phones, infotainment systems, and smart homes.

Huawei's differentiated advantage lies in the natural barrier of the HarmonyOS ecosystem and its ultimate fluency, essentially bringing mature mobile internet experiences intact into the car. Its high degree of standardization and rapid automaker adaptation are its core strengths in capturing the market.
Now, let's look at Dongfeng Motor's Taiji large model.
"With the Taiji large model, cars are becoming more human-like—not only understanding speech but also judging surroundings and even proactively offering services," said Sun Kuan, an AI R&D engineer at Dongfeng Motor's R&D Headquarters AI Development Center.
He explained that in traditional intelligent driving, perception, prediction, and control functions operate separately, with algorithms stacked together, leading to delays and weak reasoning. The Taiji large model, however, achieves "end-to-end" integration, enabling instant human-like thinking and judgment with millisecond-level response times.

As the final round intensifies, three key trends are reshaping the competitive landscape of in-car large models:
First, voice assistants are being marginalized.
In the future, traditional voice assistants without large model support will gradually be eliminated by the market.
Second, tech companies and automakers are forming deep alliances.
Developing large models requires massive investment and technical expertise, making solo R&D inefficient and costly for automakers. Meanwhile, tech companies lack vehicle architecture support for solo implementations, preventing true cockpit-driving fusion.
This means future competition will no longer be between automakers alone but between ecosystem partnerships between automakers and tech giants like Alibaba, Tencent, and Huawei. Alibaba's QianWen collaborations with IM and Hongqi exemplify this trend.
Third, intelligence competition is returning to user essentials.
Whoever can translate large-model intelligent agent capabilities into indispensable daily driving experiences—truly solving user pain points—will stand out in the final round. This represents the ultimate return to value in this race.
For automakers, does integrating an in-car large model mean they can rest easy?
In reality, the competition in the final round has only just begun.
Integrating Alibaba's QianWen, Huawei's HarmonyOS Cockpit, Tencent's Yuanbao, etc., merely grants automakers entry into the final round. The true test lies in subsequent integration and optimization. Take QianWen as an example—the key to victory is deeply aligning its capabilities with the automaker's brand positioning and user needs to create differentiated intelligent experiences.
After all, the ultimate goal of intelligence has never been about showing off but about practicality.
(Featured image generated by AI)
