04/01 2026
Preface:
The era of optical hardware is being rapidly rewritten by computational photography.
Sony has been trapped for years in a dilemma: on one hand, its sensors underpin the global mobile imaging market; on the other, its camera business is being cornered by smartphone manufacturers using those very sensors.
The shovels Sony sells are being used by others to dig its own gold mine.
Success and Struggle with Hardware
The global CMOS image sensor market was projected to reach $19.17 billion in 2025, with Sony capturing nearly half of that share. The sensor business has become one of Sony Group's most stable cash cows.
However, this absolute hardware advantage has led Sony into an unshakable path dependency.
For a long time, Sony's understanding of imaging remained stuck in the hardware logic of "a bigger sensor always wins."
This logic was flawless in the pure optical era, but computational photography attacks it from a different dimension entirely.
By 2026, computational photography has evolved to the stage of scene reconstruction based on semantic understanding. The deployment of on-device large models allows smartphones to understand shooting scenes like the human brain.
Apple, Huawei, Xiaomi, and Vivo—these smartphone manufacturers fiercely competing in the imaging race are essentially Sony's downstream customers, using Sony-produced sensors to build their own computational photography empires.
The traditional photography chain is linear: optics → sensor → ISP → output.
Computational photography transforms this chain into a complex reconstruction system: multi-frame capture → data fusion → AI inference → semantic reconstruction → output.
This means photos are no longer "captured" but "generated."
Night mode, HDR, and AI portraits, for example, are "optimal solutions" obtained by algorithmically fusing information from multiple frames.
In this process, the importance of the ISP and the AI model has risen rapidly, in some cases mattering more than raw light-sensing capability itself.
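To make the shift from "captured" to "generated" concrete, here is a minimal Python sketch of the statistical trick behind multi-frame fusion. The synthetic scene, frame count, and noise level are illustrative assumptions, not any vendor's actual pipeline, which would add frame alignment, ghost rejection, and tone mapping on top:

```python
import numpy as np

def capture_burst(n_frames: int, height: int, width: int) -> np.ndarray:
    """Simulate a burst of noisy low-light exposures of the same scene.

    The 'scene' here is a synthetic gradient; in a real pipeline these
    frames would come off the sensor in rapid succession.
    """
    scene = np.tile(np.linspace(0.05, 0.9, width), (height, 1))
    noise = np.random.normal(0.0, 0.1, size=(n_frames, height, width))
    return np.clip(scene + noise, 0.0, 1.0)

def fuse_frames(frames: np.ndarray) -> np.ndarray:
    """Merge the burst into one frame by averaging.

    Averaging n independent noisy frames cuts noise by roughly sqrt(n),
    which is the core statistical idea behind smartphone night modes.
    """
    return frames.mean(axis=0)

burst = capture_burst(n_frames=8, height=64, width=256)
single = burst[0]
fused = fuse_frames(burst)

# The fused result is visibly less noisy than any single capture.
print(f"single-frame noise (std): {single.std(axis=0).mean():.4f}")
print(f"fused-frame  noise (std): {fused.std(axis=0).mean():.4f}")
```

With eight frames, the printed noise figure drops by roughly a factor of sqrt(8), which is why a stack of mediocre exposures can beat one "better" exposure from a larger sensor.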
In the era of computational photography, Sony has become an awkward player: it provides the raw materials, but others decide the final flavor.
Breaking the Deadlock with AI Chips: Starting from the Imaging Infrastructure
Unlike smartphone manufacturers integrating NPUs into SoCs or developing external standalone imaging chips, Sony's AI strategy takes a more radical approach.
It deeply embeds AI capabilities at the very front end of the imaging chain, using AI to complete image processing and scene understanding from the moment the sensor captures light signals, achieving true "AI-native imaging."
The core upgrade of Sony's full-frame mirrorless Alpha 7 V is the new BIONZ XR2 image processor.
This processor fully integrates AI intelligent processing functions into the chip unit, consolidating tasks that previously required two BIONZ XR processors and a standalone AI chip into a single chip.
According to Sony's official data, the Alpha 7 V's real-time AF recognition performance has improved by approximately 30%, with 759 phase-detection autofocus points covering 94% of the frame.
This AI-integrated processor can simultaneously recognize and focus on seven types of subjects (humans, animals, birds, insects, airplanes, cars, and trains), intelligently switching recognition targets in auto mode.
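As a rough illustration of what "intelligent switching" could involve, the sketch below picks an AF target from per-frame detections, with a small stickiness bonus for the currently tracked subject so focus does not flicker between candidates. The scores, the stickiness rule, and the pick_af_target helper are hypothetical; Sony has not published how the BIONZ XR2 arbitrates between subjects:

```python
from dataclasses import dataclass

# Subject classes named in the article; scores and switching rule
# below are illustrative assumptions, not Sony's actual algorithm.
SUBJECT_TYPES = ("human", "animal", "bird", "insect", "airplane", "car", "train")

@dataclass
class Detection:
    subject_type: str
    confidence: float  # 0.0 .. 1.0

def pick_af_target(detections: list[Detection],
                   current: str | None,
                   stickiness: float = 0.1) -> str | None:
    """Choose which subject type the AF system should track.

    The 'stickiness' bonus keeps the current target from being dropped
    the moment another subject scores marginally higher.
    """
    best, best_score = None, 0.0
    for det in detections:
        score = det.confidence + (stickiness if det.subject_type == current else 0.0)
        if det.subject_type in SUBJECT_TYPES and score > best_score:
            best, best_score = det.subject_type, score
    return best

frame1 = [Detection("bird", 0.82), Detection("insect", 0.40)]
frame2 = [Detection("bird", 0.55), Detection("airplane", 0.60)]

target = pick_af_target(frame1, current=None)
print(target)                                  # bird
print(pick_af_target(frame2, current=target))  # bird (0.55 + 0.1 > 0.60)
```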
Sony Semiconductor Solutions released the LYTIA 901, the first flagship sensor in the LYTIA series. This 200-megapixel product, built on a large 1/1.12-inch optical format, integrates AI-based image processing circuits inside the sensor for the first time.
Traditional computational photography involves the sensor capturing images first, then transmitting data to the smartphone's ISP and NPU for algorithmic processing—akin to shooting first and editing later.
Sony's breakthrough is that the sensor's built-in AI circuits complete array rearrangement, detail restoration, noise suppression, and other processing while the light signals are still being converted into electrical signals, achieving fully real-time "simultaneous capture, understanding, and processing."
For the Quad-Quad Bayer Coding array used in the LYTIA 901, Sony developed a dedicated AI-learning array rearrangement technology.
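The sketch below shows why a rearrangement step is needed at all, assuming a simplified layout of 4x4 same-color cells: binning the cells for low light is trivial, while full-resolution remosaicing of those same cells back into a standard Bayer grid is the hard problem the article says Sony now attacks with on-sensor AI. The layout and the bin_cells helper are illustrative, not the LYTIA 901's actual circuit design:

```python
import numpy as np

# In a Quad-Quad-style array, each 4x4 group of photosites shares one
# color filter, and the groups themselves tile in a Bayer pattern.
CELL = 4  # side length of one same-color group (assumed)

def bin_cells(raw: np.ndarray) -> np.ndarray:
    """16-to-1 binning: average each 4x4 same-color group.

    This is the easy low-light path (e.g., 200 MP -> 12.5 MP). The hard
    path is 'remosaicing' the groups into a standard Bayer grid at full
    resolution without losing detail, which is where AI comes in.
    """
    h, w = raw.shape
    return raw.reshape(h // CELL, CELL, w // CELL, CELL).mean(axis=(1, 3))

raw = np.random.randint(0, 1024, size=(16, 16)).astype(np.float64)
binned = bin_cells(raw)
print(raw.shape, "->", binned.shape)  # (16, 16) -> (4, 4)
```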
Sony has directly embedded core AI image processing capabilities into the sensor, transforming algorithms from smartphone manufacturers' "proprietary capabilities" into "native functions" of the imaging hardware.
The same AI logic is being applied by Sony to its camera products to counter smartphone competition.
The upcoming APS-C flagship A6900, set to enter testing in April 2026, features a new enhanced AI chip and a 33-megapixel stacked sensor.
With the AI chip, it achieves 30 fps electronic shutter burst shooting and 15 fps mechanical shutter shooting, significantly improving subject recognition and autofocus accuracy, while also enhancing in-body five-axis image stabilization to 8.5 stops.
These upgrades essentially use AI and underlying sensor technology to shore up traditional cameras' weaknesses relative to smartphones while raising the image quality ceiling of professional equipment.
Sony is using AI technology to redefine traditional cameras' positioning, safeguarding professional creation amidst smartphone competition.
Beyond Cameras: Targeting the Visual Entry Point of the AI Era
Sony's true strategy is to stop competing with smartphone manufacturers on imaging experience and instead redefine imaging infrastructure by moving one layer deeper.
From integrating AI into sensors with the LYTIA 901 to its semiconductor business's continuous expansion into automotive and edge AI fields, Sony is targeting the visual entry point of the entire AI era.
In the future, whether for smartphones, smart cars, industrial robots, security surveillance, or XR devices, there will be a massive demand for visual sensors.
CMOS sensors integrated with AI capabilities can perform recognition and processing simultaneously during image capture—exactly what edge AI devices need most.
Sony has already begun relevant deployments. It previously invested in the UK's Raspberry Pi company, jointly launching an edge AI camera priced at just $70, designed for developing edge AI applications without expensive GPUs.
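A back-of-the-envelope sketch shows why this architecture matters for edge devices: shipping raw frames off the sensor costs orders of magnitude more bandwidth (and therefore power) than shipping inference results. All numbers below are assumptions for illustration, not IMX500 or LYTIA specifications:

```python
# Compare what leaves the sensor in the two architectures the article
# contrasts. Resolution, frame rate, and metadata size are assumed.
WIDTH, HEIGHT, BYTES_PER_PX, FPS = 4056, 3040, 2, 30  # 12-bit raw padded to 16

def offboard_bandwidth() -> float:
    """Traditional path: ship every full raw frame to the host ISP/NPU."""
    return WIDTH * HEIGHT * BYTES_PER_PX * FPS

def onsensor_bandwidth(dets_per_frame: int = 10, bytes_per_det: int = 16) -> float:
    """On-sensor AI path: run inference next to the photodiodes and
    transmit only compact results (class id, bounding box, confidence)."""
    return dets_per_frame * bytes_per_det * FPS

print(f"full frames out : {offboard_bandwidth() / 1e6:,.0f} MB/s")
print(f"metadata only   : {onsensor_bandwidth() / 1e3:,.2f} KB/s")
```

Under these assumptions the gap is roughly 740 MB/s versus 4.8 KB/s, which is why recognition-at-capture is so attractive for always-on cameras in cars, factories, and security systems.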
In the automotive sector, the rapid increase in both quantity and quality requirements for onboard cameras will become another important growth driver for Sony's CMOS business.
For Sony, cameras and smartphone sensors are just the tip of the iceberg in its vast strategy.
Its true goal is to leave its technological mark on every "seeing" scenario in AI-era visual perception.
Conclusion
Imaging is about recording emotions, and technological iteration ultimately aims to enable freer expression of what one wants to convey.
In an era where everyone can take photos, good imaging technology ultimately strives to help more people effortlessly capture the images in their minds.
References: Leikeji, "Computational Photography Surges, Sony's Counterattack with AI Chips: Launching a Camera Defense War"; Semiconductor Industry Insights, "Sony Semiconductor's Rise!"; Du Qin DQ, "The Imaging Revolution Sparked by an AI CIS".