Categories: News, Technology

California startup introduces first-ever trillion-transistor chip for AI training

With the need to rapidly process immense amounts of data for training artificial intelligence systems stronger than ever, the industry is hungry for tools that can deliver. Graphics processing units (GPUs) have been the norm for AI training thanks to their considerable processing power and speed, but a California startup has unveiled a massive trillion-transistor chip that puts them all to shame.

Based in Sunnyvale, California, Cerebras Systems specializes in technologies that accelerate deep learning, an important subset of artificial intelligence. Its most recent work arrives in the form of the Cerebras Wafer Scale Engine (WSE), a 46,225 square millimetre processing behemoth more than 56 times larger than the biggest GPU on the market. According to Cerebras, the WSE offers 3,000 times more high-speed, on-chip memory and 10,000 times more memory bandwidth than GPU-based AI accelerators.
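The "more than 56 times larger" claim can be sanity-checked with simple division. The 815 mm² reference figure below is an assumption on our part (the reported die size of NVIDIA's V100, the largest GPU at the time), not a number from the article:

```python
# Back-of-envelope check of the size comparison.
wse_area_mm2 = 46_225   # Cerebras WSE die area, from the article
gpu_area_mm2 = 815      # assumed reference: reported NVIDIA V100 die size

ratio = wse_area_mm2 / gpu_area_mm2
print(f"WSE is {ratio:.1f}x larger than the reference GPU die")
```

Under that assumption the ratio works out to roughly 56.7, consistent with the article's "more than 56 times" figure.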

Naturally, these features make the WSE an incredible tool: larger chips can process massive amounts of data rapidly, allowing engineers and researchers to study and deploy efficient AI solutions. Of course, one has to wonder why no other company has built such an immense processor before. According to Cerebras Systems, it succeeded where others have not by overcoming the engineering challenges that have traditionally limited chip size and speed.

"Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size – such as cross-reticle connectivity, yield, power delivery and packaging," said Andrew Feldman, founder and CEO of Cerebras Systems.

The WSE hosts a total of 400,000 computing cores optimized for AI, along with 18 gigabytes of local, distributed memory. The cores are connected via a fine-grained mesh network that delivers an aggregate bandwidth of 100 petabits per second. These figures, if nothing else, suggest that AI will become a powerful mainstream technology sooner than we thought, as the processors responsible for training it keep getting better and better.
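Dividing the article's aggregate figures across the 400,000 cores gives a rough sense of the per-core resources. This is naive averaging for illustration only; the actual per-core allocation on the WSE may differ:

```python
# Rough per-core figures implied by the article's aggregate numbers.
cores = 400_000
total_mem_bytes = 18e9        # 18 GB of local, distributed memory
fabric_bw_gbps = 100e6        # 100 Pb/s expressed in Gb/s

mem_per_core_kb = total_mem_bytes / cores / 1e3   # average memory per core
bw_per_core_gbps = fabric_bw_gbps / cores         # average bandwidth per core

print(f"~{mem_per_core_kb:.0f} KB of memory per core")
print(f"~{bw_per_core_gbps:.0f} Gb/s of fabric bandwidth per core")
```

The averages come out to about 45 KB of memory and 250 Gb/s of mesh bandwidth per core, which illustrates why keeping memory local to each core is central to the design.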

Hamza Zakir

Platonist. Humanist. Unusually edgy sometimes.

Tags: AI
