Meta, a founding member of the Open Compute Project (OCP), emphasized the importance of open-source hardware for the future of AI data center infrastructure at this week's OCP Global Summit. The summit, themed “Leading the Future of AI,” highlighted Meta’s ongoing efforts to promote open standards and sustainable practices in data center design.
Meta, along with industry peers, is supporting OCP’s Open Data Center Initiative, which seeks to establish common standards for power, cooling, mechanical structure, and telemetry within data centers. According to Meta, “the future of AI requires a new level of collaboration across the data center industry. To keep pace with the growth of demand and maximize the benefit of AI to society, the data center industry must standardize its approach to building physical infrastructure in a way that encourages interoperability while still allowing for differentiation and innovation.”
At the summit, Meta announced advancements in the network fabrics for its AI training clusters, including open hardware designs and new switches that integrate NVIDIA’s Spectrum Ethernet technology. The company also became an initiating member of Ethernet for Scale-Up Networking (ESUN), an OCP workstream aimed at improving scale-up connectivity as AI systems grow.
Meta introduced specifications for the Open Rack Wide (ORW) form factor, an open-source rack standard designed for next-generation AI systems, with the goal of improving power delivery, cooling, and efficiency. AMD also announced Helios, an advanced AI rack based on the ORW standard. According to Meta, “Helios and our ORW form factor represent a fundamental move toward standardized, interoperable, and scalable hardware data center design across the industry.”
In addition to these developments, Meta unveiled several next-generation AI hardware platforms intended to improve performance, reliability, and serviceability for large-scale generative AI workloads.
On sustainability, Meta presented “Design for Sustainability,” a set of principles focused on reducing IT hardware emissions. These principles encourage strategies such as modularity, reuse, retrofitting, dematerialization, and extended hardware lifecycles. Meta also detailed its methodology for tracking emissions from the millions of hardware components in its data centers, using its Llama AI models to optimize its emissions-tracking databases.
Meta stated: “At Meta, we’re focused on reaching our sustainability goals, and we’re inviting the wider industry to join us in adopting the strategies and frameworks outlined here to help them reach theirs.”
The company concluded that continued hardware innovation will be key as AI systems become more complex: “We’re excited about the progress we’ve already made, and look forward to continuing to drive openness and collaborating with industry partners as the complexity of AI systems grows.”