
California startup introduces first-ever trillion-transistor chip for AI training

With the need to rapidly process immense amounts of data for training artificial intelligence models stronger than ever, the industry is searching for hardware that can keep pace. Graphics processing units (GPUs) are the norm for AI training thanks to their considerable processing power and speed, but a California startup has unveiled a massive trillion-transistor chip that puts them all to shame.

Based in San Francisco, Cerebras Systems specializes in technologies that accelerate deep learning, an important subset of artificial intelligence. Its most recent work arrives in the form of the Cerebras Wafer Scale Engine (WSE), a 46,225-square-millimetre processing behemoth that is more than 56 times larger than the biggest GPUs on the market. According to Cerebras, the WSE offers 3,000 times more on-chip memory and memory bandwidth that is an immense 10,000 times greater than GPU-based AI accelerators.

Naturally, these features make the WSE an incredible tool: larger chips can process massive amounts of data rapidly, allowing engineers and researchers to study and deploy efficient AI solutions sooner. Of course, one has to wonder why no other company has built such an immense processor before. According to Cerebras Systems, it succeeded because it overcame the challenges that have traditionally limited chip size and speed.

“Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state of the art by solving decades-old technical challenges that limited chip size – such as cross-reticle connectivity, yield, power delivery and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems.

The WSE hosts a total of 400,000 computing cores optimized for AI, along with 18 gigabytes of local, distributed memory. The cores are connected by a fine-grained mesh network that delivers an aggregate bandwidth of 100 petabits per second. If nothing else, these figures suggest that AI will become a powerful mainstream technology sooner than expected, as the processors that run these workloads keep getting better and better.

Published by Hamza Zakir

Tags: AI
