The push to build ever more powerful data centers for artificial intelligence, stuffed with more and more GPU chips, is driving those facilities to enormous size, according to the chief executive of Ciena, which makes the fiber-optic networking equipment that cloud computing vendors buy to connect their data centers together.
“Some of these large data centers are just mind-blowingly large, they are enormous,” says Gary Smith, CEO of Hannover, Maryland-based Ciena.
“You have data centers that are over two kilometers,” says Smith — more than 1.24 miles. Some of the newer data centers are multi-story, he notes, adding a vertical dimension on top of the horizontal sprawl.
Smith made the remarks as part of an interview with the financial newsletter The Technology Letter last week.
Even as individual cloud data centers grow, the corporate campuses that house them are straining to support ever-larger clusters of GPUs, Smith said.
“These campuses are getting bigger and longer,” he says. The campus, which comprises many buildings, is “blurring the line between what used to be a wide-area network and what’s inside the data center.”
“You’re beginning to see these campuses get to quite decent distances, and that is putting massive strain on the direct-connect technology.”
Smith expects that in the coming years Ciena will start selling fiber-optic equipment similar to what runs in long-haul telecom networks, but tweaked to connect GPUs inside the data center.