01-04-2025 | Sarcina Technology | Semiconductors
Sarcina Technology has launched its AI platform to deliver advanced AI packaging solutions tailored to specific customer requirements. Built on ASE's FOCoS-CL (Fan-Out Chip-on-Substrate, Chip Last) assembly technology, the platform comprises an interposer that supports chiplets using UCIe-A die-to-die interconnects, enabling cost-effective, customisable, cutting-edge solutions.
The company aims to push the boundaries of AI computing system development by providing a platform for efficient, scalable, configurable and cost-effective semiconductor packaging for AI applications. As AI workloads evolve, they demand increasingly sophisticated packaging that can support higher computational loads. Sarcina's interposer packaging technology integrates leading memory solutions with high-efficiency interconnects, and whether customers prioritise cost, performance or power efficiency, the new AI platform can deliver.
According to Dr Larry Zu, CEO of Sarcina Technology: "Six years ago, after prototyping a 2.5D silicon TSV interposer package that integrated one ASIC and two HBMs, we predicted this technology would enable highly complex computing solutions. Today, this vision is becoming a reality, driven by RDL die-to-die interconnects like UCIe."
Zu continues: "With FOCoS assembly technology, we are entering a new era of AI computing. Our AI platform offers greater efficiency and customisation, with the lowest cost in the industry for generative AI chips. This ensures that our customers stay competitive in the rapidly evolving AI landscape."
The company's team has developed an interposer with up to 64 bits of data interface per module, achieving data rates of up to 32 GT/s, the highest bandwidth and data rate the UCIe 2.0 standard specifies for UCIe-A. Multiple modules can be placed in parallel along the silicon die edge to further increase data transfer throughput. Customers can also choose between LPDDR5X/6 packaged memory chips and HBM.
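To put those headline figures in perspective, the raw per-module throughput implied by a 64-bit interface at 32 GT/s can be estimated with a back-of-the-envelope calculation. The sketch below is illustrative only: it ignores protocol, encoding and sideband overheads, and the module counts shown are assumed examples rather than Sarcina specifications.

```python
# Rough bandwidth estimate for a UCIe-A style interface:
# 64 data lanes per module at 32 GT/s (figures from the article).
# Overheads are ignored; module counts are illustrative assumptions.

LANES_PER_MODULE = 64      # bits of data interface per module
DATA_RATE_GT_S = 32        # transfers per second per lane (giga)

# Raw unidirectional bandwidth per module
per_module_gbps = LANES_PER_MODULE * DATA_RATE_GT_S   # gigabits/s
per_module_gBps = per_module_gbps / 8                 # gigabytes/s

print(f"Per module: {per_module_gbps} Gb/s = {per_module_gBps:.0f} GB/s (one direction)")

# Placing several modules in parallel along the die edge scales
# throughput roughly linearly.
for modules in (1, 2, 4):
    print(f"{modules} module(s): ~{modules * per_module_gBps:.0f} GB/s per direction")
```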
The company has extensive expertise in designing high-power, high-performance semiconductor packages. This allows semiconductor startups to focus on developing efficient algorithms for generative AI and edge AI training without building an expensive post-silicon design and manufacturing team: startups develop their silicon and hand it to Sarcina for post-silicon packaging, streamlining the process and reducing costs while maintaining high performance. Sarcina's die-to-die interposer solution lets AI customers use chiplets to build up large total silicon areas, supporting high-performance computing while preserving satisfactory wafer yields, since several smaller dies yield better than one large monolithic die. The larger package design also permits more memory integration, which is vital for generative AI applications that require rapid, parallel data processing.
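The yield advantage of chiplets can be illustrated with a simple defect-density model. The sketch below uses a Poisson yield approximation with an assumed defect density, total silicon area and chiplet count; none of these numbers comes from Sarcina, they merely show why splitting a large design into smaller known-good dies improves yield.

```python
import math

# Simplified Poisson yield model: yield = exp(-area * defect_density).
# All numbers below are illustrative assumptions, not Sarcina figures.

DEFECT_DENSITY = 0.1   # defects per cm^2 (assumed)
TOTAL_AREA_CM2 = 8.0   # total silicon area required by the design (assumed)

def die_yield(area_cm2: float, d0: float = DEFECT_DENSITY) -> float:
    """Poisson approximation of the fraction of defect-free dies."""
    return math.exp(-area_cm2 * d0)

# Monolithic: one large die carries the whole design.
monolithic = die_yield(TOTAL_AREA_CM2)

# Chiplet approach: the same area split across 4 smaller dies; each die
# yields independently, and known-good dies are assembled on the interposer.
chiplet = die_yield(TOTAL_AREA_CM2 / 4)

print(f"Monolithic die yield:    {monolithic:.1%}")
print(f"Per-chiplet yield (1/4): {chiplet:.1%}")
```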
The launch of the company's AI platform is set to transform AI computing capabilities across industries such as autonomous systems, data centres and scientific computing.
Visit Sarcina Technology at OFC, booth 3019, from 30 March to 3 April 2025.