AI-ready data centers: the 5 essential criteria
AI is profoundly changing society, and data centers along with it. Until now, these facilities were optimized for storing data, sending emails, hosting websites, and running traditional applications. Today, AI models are pushing them to their limits, forcing operators to rethink their design. In 2026, the challenge will be to make these buildings “AI-ready,” meaning capable of supporting massive workloads while remaining economically and environmentally viable.
AI's computing, storage, and energy requirements
Artificial intelligence consumes a lot of resources. A ChatGPT query of 400 tokens (approximately 280 words) consumes about 2 Wh of electricity, more than six times the consumption of a classic Google search. This demand for energy and storage stems from the very nature of the calculations AI performs: while conventional software processes information one piece at a time, AI processes millions of pieces of data simultaneously. Data center infrastructure must therefore evolve to keep pace.
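To put these figures in perspective, here is a small order-of-magnitude calculation based on the numbers above; the daily query volume is purely hypothetical, not a measured value.

```python
# Order-of-magnitude sketch based on the figures quoted above.
# The daily query volume is purely illustrative, not a measured value.

WH_PER_AI_QUERY = 2.0          # ~2 Wh for a 400-token ChatGPT-style query
WH_PER_WEB_SEARCH = 0.3        # commonly cited estimate for a classic search
QUERIES_PER_DAY = 100_000_000  # hypothetical daily volume

ai_kwh = WH_PER_AI_QUERY * QUERIES_PER_DAY / 1_000
search_kwh = WH_PER_WEB_SEARCH * QUERIES_PER_DAY / 1_000

print(f"AI queries: {ai_kwh:,.0f} kWh/day")      # 200,000 kWh/day
print(f"Web search: {search_kwh:,.0f} kWh/day")  # 30,000 kWh/day
print(f"Ratio: {ai_kwh / search_kwh:.1f}x")      # ~6.7x
```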
A sharp rise in electricity consumption
Data center electricity consumption could double by 2030, requiring an increase in generation capacity or in energy purchases. Since most of the electricity produced in France is carbon-free, this rise in consumption should not compromise the carbon-neutrality target set for 2050. Why does AI have such an impact on energy consumption? Because training an AI model requires thousands of graphics processors running at full capacity, which in turn demands a stable and sufficient power supply.

France and RTE have already put strategies in place to ensure there is no shortage of electricity to power data centers. The country is anticipating the rise of AI by identifying and securing, in advance, sites suited to data centers and equipped with high grid-connection capacity; several of these are “fast track” sites (an accelerated project mode in which design, authorization, and construction proceed in parallel to significantly shorten delivery times). Thanks to coordinated planning between the government, RTE, and economic stakeholders, projects are supported upstream to guarantee a sufficient, reliable, and continuous electricity supply.
Heat management
In a traditional data center, a rack (a server cabinet) consumes around 10 kW. With AI, this density rises to 50 or even 100 kW per rack. Few existing data centers are designed to dissipate that much heat efficiently. Heat management, and therefore the cooling systems installed in data centers, must be rethought for AI.
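As a rough illustration, here is what that jump in density means for a single row of racks; the row length and per-rack figures below are indicative values, not measurements.

```python
# Illustrative comparison of the heat load in one row of racks,
# using the per-rack figures mentioned above (values are indicative).

RACKS_PER_ROW = 20            # hypothetical row length
TRADITIONAL_KW_PER_RACK = 10  # classic IT rack
AI_KW_PER_RACK = 80           # dense GPU rack (50-100 kW range)

traditional_row_kw = RACKS_PER_ROW * TRADITIONAL_KW_PER_RACK
ai_row_kw = RACKS_PER_ROW * AI_KW_PER_RACK

print(f"Traditional row: {traditional_row_kw} kW")              # 200 kW
print(f"AI row:          {ai_row_kw} kW")                       # 1600 kW
print(f"Heat to remove:  x{ai_row_kw // traditional_row_kw}")   # x8
```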
The five essential criteria for adapting data centers to AI
To meet the new challenges posed by AI, data centers must meet five technical criteria.
1. High-performance cooling systems
Air cooling alone (conventional ventilation with CRAC/CRAH units) reaches its limits when faced with the density of AI servers. To prevent overheating, operators now combine ventilation with more efficient solutions such as liquid cooling. This approach involves mounting cold plates directly against the hottest components (processors, GPUs) to capture heat at the source. Equipment can also be fully immersed in a non-conductive liquid. Because water removes heat far more effectively than air, these methods are increasingly being adopted despite higher installation costs.
These methods are far more efficient than air conditioning alone and reduce the energy consumed by cooling. Today, data centers tend to combine the two, with roughly 70% liquid cooling and 30% air cooling on the same site.
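A simple back-of-the-envelope calculation shows why liquid is so much more effective, using the textbook relation heat = flow × specific heat × temperature rise. The rack power and temperature rise below are illustrative assumptions.

```python
# Minimal sketch: coolant flow needed to remove a given heat load with air vs. water,
# using the basic relation Q = mass_flow * specific_heat * delta_T.
# The 80 kW rack and the 10 degC temperature rise are illustrative assumptions.

HEAT_LOAD_W = 80_000   # one dense AI rack (illustrative)
DELTA_T = 10.0         # coolant temperature rise in degC (assumption)

CP_AIR = 1005.0        # J/(kg.K)
RHO_AIR = 1.2          # kg/m3
CP_WATER = 4186.0      # J/(kg.K)
RHO_WATER = 1000.0     # kg/m3

air_kg_s = HEAT_LOAD_W / (CP_AIR * DELTA_T)
water_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T)

print(f"Air flow:   {air_kg_s / RHO_AIR:.1f} m3/s")            # ~6.6 m3/s of air
print(f"Water flow: {water_kg_s / RHO_WATER * 1000:.1f} L/s")  # ~1.9 L/s of water
```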
2. Reliable, ultra-fast connectivity
In an AI-dedicated infrastructure, servers work together as a single unit, so the speed of communication between them is crucial. Without an ultra-fast network, data transfers become the bottleneck. Connection standards are therefore evolving towards technologies such as InfiniBand or 400 Gbps Ethernet. The aim is to guarantee minimal latency (response time) so that information flows smoothly between storage and compute processors.
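The order of magnitude is easy to estimate: the sketch below computes the ideal time to move a model checkpoint between two nodes at different link speeds. The 100 GB checkpoint size is a hypothetical example, and real transfers add protocol overhead.

```python
# Rough sketch: ideal time to move a model checkpoint between nodes at
# different link speeds. The 100 GB size and the speeds compared here are
# illustrative; real transfers also depend on protocol overhead.

CHECKPOINT_GB = 100  # hypothetical model checkpoint size

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Ideal transfer time: payload in gigabits divided by link speed."""
    return size_gb * 8 / link_gbps

for link in (10, 100, 400):  # Gbps
    print(f"{link:>3} Gbps link: {transfer_seconds(CHECKPOINT_GB, link):6.1f} s")
# 10 Gbps -> 80 s, 100 Gbps -> 8 s, 400 Gbps -> 2 s
```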
3. Large storage capacities
AI requires large storage capacities, but also extremely fast access to this data. Data centers therefore mainly use “object” or “file” storage systems, which scale further and sustain higher parallel throughput than the block storage traditionally in use.
Object storage allows vast amounts of data (in the petabyte range) to be handled and can be scaled easily as needed. Rather than being organized into files in folders or divided into blocks on servers, data is segmented into standalone units called objects, grouped together in a single, flat repository. This architecture allows AI algorithms to access billions of objects directly, without navigating a directory hierarchy. Cloud storage is also sometimes used, particularly for AI and machine learning applications.
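In practice, applications talk to an object store through a simple put/get API. The sketch below assumes an S3-compatible endpoint; the endpoint URL, bucket, and key names are placeholders, not real services.

```python
# Minimal sketch of object storage access, assuming an S3-compatible endpoint.
# The endpoint URL, bucket name, and object key are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.com",  # placeholder endpoint
)

# Write a training sample as a standalone object in a flat namespace:
# no directory tree, just a bucket and a key.
with open("sample-42.jpg", "rb") as f:
    s3.put_object(
        Bucket="training-data",                 # placeholder bucket
        Key="images/batch-0001/sample-42.jpg",  # key, not a real folder path
        Body=f,
    )

# Read it back by key; any node in the cluster can do the same in parallel.
obj = s3.get_object(Bucket="training-data", Key="images/batch-0001/sample-42.jpg")
data = obj["Body"].read()
```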
4. High-performance hardware: GPU, TPU, ASIC
The traditional processor (CPU) is no longer suitable for heavy AI calculations. Data centers must now incorporate specialized chips such as:
• GPUs (Graphics Processing Units): these processors are capable of performing many calculations at the same time. They are the current standard for training artificial intelligence models (see the short sketch after this list).
• TPUs (Tensor Processing Units): created by Google, they are specifically optimized for AI machine learning tasks.
• ASICs (Application-Specific Integrated Circuits): chips custom-designed for a single critical network task (traffic management, packet forwarding, etc.).
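To illustrate the difference in approach, the short sketch below runs the same matrix multiplication on a CPU and then on a GPU, assuming PyTorch and a CUDA-capable card are available; the matrix size is arbitrary.

```python
# Illustrative sketch: the same matrix multiplication on CPU and on GPU.
# Matrix sizes are arbitrary; the point is that the GPU executes the many
# independent multiply-adds in parallel. Requires PyTorch and a CUDA GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.perf_counter()
torch.matmul(a, b)                       # runs on the CPU
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()             # make sure timing starts cleanly
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()             # wait for the GPU to finish
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f} s  GPU: {gpu_time:.3f} s")
else:
    print(f"CPU: {cpu_time:.3f} s (no CUDA GPU available)")
```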
5. Adaptability
While a traditional data center is built to last 20 years, technological advances, particularly in AI, require operators to design infrastructure that is scalable and modular. For example, if a customer decides to deploy a new GPU cluster to train an AI model, the data center must be able to add new racks quickly, without redesigning the entire electrical architecture. This means that the aisles must already be sized for higher power, and that the cabling and cooling systems must be able to accommodate the new equipment.
The modular design of server racks is therefore a key lever for AI-ready data centers. This ability to gradually expand electrical power, cooling, and connectivity is essential to support the rapid evolution of AI applications.
UltraEdge: local data centers optimized for AI
As we noted earlier, AI is resource-intensive. Hyperscale data centers (the largest ones in existence) are therefore often chosen to host these applications. However, a new trend is emerging, particularly in France: edge data centers. These local data centers are smaller in size and spread across the country. Thanks to high-performance equipment, they are now capable of hosting artificial intelligence applications.
At UltraEdge, we have more than 250 local data centers in France to run your AI use cases. By choosing our AI Ready data centers, you benefit from:
• Low latency: the data centers are located as close as possible to your company or your customers, which reduces connection times.
• Optimal security: our data centers are all located in France and therefore comply with French and European standards for data security and confidentiality.
• A redundancy system: our infrastructures have redundancy systems in place to ensure service continuity even in the event of a failure.
