Q1. Meeting the Growing Demand for AI Workloads
As artificial intelligence advances, the role of the data center is transforming. What used to be a generic “compute warehouse” is becoming an AI factory, purpose-built to support the intensive demands of training and deploying large-scale models.
Facilities designed for general IT workloads cannot meet the electrical, thermal, and network requirements of modern AI clusters.
Switch sees three infrastructure advancements as central to this evolution:
- Scalable Density with EVO
AI systems are rapidly increasing their power needs, with GPU and accelerator roadmaps moving from racks consuming tens of kilowatts to racks requiring megawatts. Switch anticipated this change. Our EVO architecture scales from 50 kW racks up to 2,200 kW racks, aligning directly with the growth path of advanced GPU systems. This flexibility allows customers to expand capacity without disruption.
- Unified Data and Network Fabrics
AI is not only about compute; it is about data. Moving, storing, and synchronizing massive training sets requires more than isolated solutions. Switch’s networking business enables a unified data fabric that integrates storage, transport, and interconnect into one high-performance system. This reduces latency, simplifies orchestration, and connects heterogeneous systems and sites without fragmenting AI pipelines.
- End-to-End Digital Orchestration
The next step extends beyond the data hall itself. AI clusters must be orchestrated in concert with utility power, renewable availability, and enterprise workflows. Switch is advancing digital design and digital twin platforms that model, schedule, and optimize workloads across both digital and physical layers. By connecting data centers directly with energy systems, we ensure AI infrastructure operates as an active participant in the grid, balancing resilience, efficiency, and responsibility.
In short, the growth of AI requires infrastructure that is flexible, unified, and integrated with the wider energy ecosystem. With EVO density, converged fabrics, and digital orchestration, Switch is building the foundations of the AI factory era.
Q2. Building Resilient and Secure AI Infrastructure
The United States leads in AI development, but much of the industry depends on international supply chains. At Switch, resilience is rooted in control.
All Switch campuses are in the United States. We design, build, and operate our facilities ourselves, which means every stage of the process—from concept to daily operations—remains under our direct oversight. This reduces risk, strengthens security, and ensures consistency across our ecosystem.
We also prioritize domestic sourcing wherever possible. Switch continuously works to manufacture critical components in the U.S. and to strengthen trusted local partnerships. This improves reliability and allows us to innovate without depending on offshore vulnerabilities.
Resilience is never complete. We are always improving, refining, and future-proofing the infrastructure our customers depend on. By keeping campuses U.S.-based, vertically integrated, and focused on continuous improvement, Switch ensures its AI infrastructure is secure, reliable, and ready for the next era.
Q3. The Role of Edge in a Distributed AI World
AI models are growing larger and more resource-intensive, prompting new conversations about “edge computing.” Too often these discussions imagine small servers scattered across metro areas. That view does not reflect the reality of modern AI.
When you use ChatGPT or another large model, you type a question, there is a pause, and tokens stream back one by one. Whether the first token arrives in 500 milliseconds or 5 seconds, the dominant delay is the model’s compute, not the network. The heavy computation is not happening in a nearby metro data center; it is running in AI factories with liquid-cooled racks drawing hundreds of kilowatts to over a megawatt. The extra milliseconds of fiber travel are invisible compared to the model’s compute time.
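The latency argument above is easy to check with back-of-the-envelope numbers. The sketch below compares round-trip fiber propagation delay with an assumed time-to-first-token; the distance, fiber speed, and 500 ms compute figure are illustrative assumptions, not Switch measurements.

```python
# Rough comparison of fiber propagation delay vs. model compute time.
# All figures are illustrative assumptions, not measurements.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels ~2/3 of c in silica fiber

def round_trip_fiber_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring switching hops."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

network_ms = round_trip_fiber_ms(1000)   # regional campus ~1,000 km away
first_token_compute_ms = 500.0           # assumed time-to-first-token

print(f"fiber round trip: {network_ms:.0f} ms")
share = network_ms / (network_ms + first_token_compute_ms)
print(f"share of user-perceived latency: {share:.1%}")
```

Under these assumptions, a campus 1,000 km away adds about 10 ms round trip, roughly 2% of the user-perceived latency, which is why placing the model in a dense regional AI factory rather than a metro site costs the user almost nothing.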
This is why Switch has always invested in regional exascale campuses. They are designed for the vertically scaled systems AI requires, not just for caching content. We pioneered building whole regions for density, resiliency, and integration, and that model has become the industry standard. We have built Tier IV enterprise edge sites before, and we can evolve that product for the AI era.
Edge still has a role, but not as thousands of small boxes. Upcoming GPUs and accelerators are deployed at rack level, often consuming more power than entire legacy caching sites. The edge for AI will look like regional clusters of dense racks positioned where latency truly matters. These clusters will manage real-time inference while synchronizing with larger AI factories for training and large-scale hosting.
For Switch, this is not new. Our campuses are built for vertically scaled systems, with fabrics that connect seamlessly to regional deployments. The architecture that protects user experience will be a continuum: rack-level inference clusters at the edge where immediacy is required, synchronized with exascale factories where efficiency and resilience are maximized.
Q4. Balancing Energy Demand and Environmental Responsibility
The growth of AI training and inference has put a spotlight on energy use. At Switch, sustainability is not an add-on; it is a foundation. From the beginning, we designed our campuses to balance scale with responsibility, and we continue to raise that standard as AI workloads expand.
Efficiency by Design and Innovation
Switch pioneered many of the industry’s efficiency breakthroughs, and we continue to advance them. Our EVO architecture is liquid-cooled by design, using a closed-loop system that eliminates all water loss and improves rack-level performance. This allows us to scale from today’s high-density systems to tomorrow’s megawatt racks without wasting energy or consuming any water. Efficiency is embedded in the physical design, not added afterward.
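One way to see why liquid cooling becomes mandatory at these densities is to size the coolant loop. The sketch below applies the standard heat-transfer relation Q = ṁ·c_p·ΔT with assumed water properties and an assumed 10 K loop temperature rise; these are illustrative physics defaults, not Switch’s actual EVO design parameters.

```python
# Rough sizing of closed-loop coolant flow for a given rack power.
# Assumes water coolant (c_p ~ 4186 J/kg*K) and a 10 K temperature rise
# across the loop -- illustrative assumptions, not EVO design values.

WATER_CP_J_PER_KG_K = 4186.0
DELTA_T_K = 10.0

def coolant_flow_lpm(rack_power_kw: float) -> float:
    """Liters/minute of water needed to absorb rack_power_kw at a DELTA_T_K rise."""
    kg_per_s = rack_power_kw * 1000.0 / (WATER_CP_J_PER_KG_K * DELTA_T_K)
    return kg_per_s * 60.0  # 1 kg of water is ~1 liter

for kw in (50, 1000, 2200):
    print(f"{kw:>5} kW rack -> {coolant_flow_lpm(kw):,.0f} L/min")
```

Under these assumptions, a 50 kW rack needs on the order of 70 L/min while a 2,200 kW rack needs thousands of liters per minute, flow rates that only an engineered closed-loop liquid system can deliver, and that air cooling cannot approach.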
Strong Policy, Transparency, and Continuous Improvement
Switch operates with direct access to renewable energy resources and has developed unique processes for power development, purchasing, and use. Sustainability is a cycle of auditing, reporting, and improving every year; it is never treated as finished.
Community and Resiliency Alignment
Data centers are part of larger ecosystems. Switch works with utilities, regulators, and local communities to ensure projects strengthen, not strain, regional infrastructure. We invest in recycled water systems, renewable generation, and grid partnerships so that growth creates resilience for the regions where we operate.
For Switch, sustainability and AI growth are inseparable. The intelligence built in our factories must be aligned with responsibility to people, communities, and the planet.
Q5. The Next Decade of AI Infrastructure
High-Power Rack Distribution
The challenge is no longer just cooling a room of servers; it is delivering and managing megawatts of power at the rack level. Efficient distribution, liquid cooling, and closed-loop systems will define the next decade. The way racks are powered, cooled, and synchronized with the grid will matter as much as the chips inside them.
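A quick calculation shows why distribution at megawatt rack scale is the hard problem. The sketch below uses the standard three-phase relation I = P / (√3 · V · PF); the 415 V line-to-line voltage and 0.95 power factor are illustrative assumptions, not a description of any specific facility’s electrical design.

```python
import math

# Line current for a three-phase feed: I = P / (sqrt(3) * V_LL * PF).
# Voltage and power factor below are illustrative assumptions.

def line_current_a(power_kw: float, volts_ll: float = 415.0, pf: float = 0.95) -> float:
    """Three-phase line current in amps for the given load."""
    return power_kw * 1000.0 / (math.sqrt(3) * volts_ll * pf)

for kw in (50, 2200):
    print(f"{kw:>5} kW at 415 V three-phase -> {line_current_a(kw):,.0f} A")
```

At these assumptions a 50 kW rack draws roughly 73 A, while a 2,200 kW rack would draw over 3,000 A at the same voltage, which is why megawatt-class racks push designs toward higher distribution voltages, busways, and new power topologies rather than conventional rack PDUs.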
Emerging Technologies and Trends
Several developments could fundamentally reshape AI infrastructure in the U.S.:
- Heterogeneous Compute: GPUs will remain central, but new accelerators, custom silicon, and eventually quantum co-processors will demand infrastructure that can host a mix of architectures side by side.
- Energy-Aware Orchestration: Workloads will be scheduled based on real-time grid conditions, carbon intensity, and renewable availability. Clusters will flex in harmony with the power system, with safety and audit layers embedded as standard.
- Federated and Distributed AI: Instead of siloed deployments, secure federated fabrics will allow models to learn from distributed data without moving it, reshaping networking, compliance, and governance.
- Digital Twin Integration: Data centers will be modeled, monitored, and optimized as digital twins, enabling predictive management of energy, water, and workload flows.
- AI Safety and Audit Layers: Just as cybersecurity became a native layer of IT, safety, alignment, and rollback will become native to AI infrastructure operations.
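The energy-aware orchestration trend above can be made concrete with a minimal scheduler: defer a flexible training job into the forecast window with the lowest grid carbon intensity. The forecast values and function name below are hypothetical, purely to illustrate the idea.

```python
from typing import List, Tuple

# Minimal sketch of energy-aware scheduling: place a deferrable training job
# in the contiguous window with the lowest total forecast carbon intensity.
# The forecast data below is hypothetical, not real grid data.

def pick_greenest_window(forecast: List[Tuple[int, float]], hours_needed: int) -> int:
    """Return the start hour of the hours_needed-long window minimizing gCO2/kWh."""
    best_start, best_total = forecast[0][0], float("inf")
    for i in range(len(forecast) - hours_needed + 1):
        total = sum(ci for _, ci in forecast[i : i + hours_needed])
        if total < best_total:
            best_start, best_total = forecast[i][0], total
    return best_start

# Hypothetical 6-hour forecast: (hour, grams CO2 per kWh)
forecast = [(0, 420.0), (1, 390.0), (2, 210.0), (3, 180.0), (4, 260.0), (5, 410.0)]
print(pick_greenest_window(forecast, hours_needed=2))  # prints 2: hours 2-3 are greenest
```

A production orchestrator would layer in deadlines, interruption costs, and the safety and audit controls mentioned above, but the core decision is this same windowed minimization over a grid-condition forecast.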
For Switch, the next decade is about more than scale. It is about building infrastructure flexible enough for new architectures, intelligent enough to orchestrate around energy realities, and principled enough to embed responsibility at every layer. AI factories will continue to evolve, and Switch will continue to shape that evolution.
Source: InvestmentsReports.co

