Last week, the second edition of the ai-Pulse conference at Station F in Paris focused on the themes of "big, efficient, and open" AI. The event also served as a showcase for France's ambition to take center stage in Europe's AI ecosystem.
Opening the event, Xavier Niel, founder of Iliad Group, briefly reflected on France and Europe's journey toward building a world-class AI ecosystem. Niel's contributions to that effort have been considerable: besides founding the telecom giant, he created the Station F startup campus, owns cloud provider Scaleway (the conference's official organizer), is one of France's most active startup investors, and co-founded the Kyutai AI research institute.
At the event last year, Niel announced the creation of Kyutai. The institute, which aims to develop open-source AI models, is emblematic of France’s efforts to create a unique, sovereignty-focused AI infrastructure, Niel said.
“Eighteen months ago, people doubted Europe could be a major player in AI,” Niel said on stage. “But today, we have an ecosystem that rivals the best in the world, supported by remarkable talents and thriving startups like Mistral AI.”
Scaleway CEO Damien Lucas then explained how the company is rapidly evolving to provide the infrastructure needed to power the AI revolution.
The cloud provider has significantly scaled up its GPU infrastructure: last year it announced Europe's first AI supercomputer with 1,000 NVIDIA H100 GPUs; it now offers 3,000 GPUs and plans to exceed 5,000 high-performance GPUs in the coming weeks. In a European first, Scaleway announced the immediate availability of AMD MI300X GPUs on its cloud platform, while also confirming readiness to deploy NVIDIA's upcoming Blackwell architecture, Lucas said.
Lucas highlighted Scaleway's commitment to sustainability. Most of its GPU infrastructure is housed in Europe's most energy-efficient data center, powered by low-carbon energy sources, he said. To promote transparency, Scaleway launched a beta version of its environmental footprint calculator, letting customers track and optimize the impact of their cloud usage.
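Scaleway hasn't published the calculator's methodology here, but footprint estimates of this kind typically multiply energy drawn by the grid's carbon intensity. A minimal illustrative sketch in Python; every constant below is an assumption for the sake of the example, not Scaleway's numbers:

```python
# Illustrative cloud-footprint estimate; not Scaleway's actual methodology.
# Every constant below is an assumption made for the sake of the example.
GPU_POWER_KW = 0.7         # assumed average draw of one H100-class GPU, in kW
PUE = 1.15                 # assumed power usage effectiveness of the facility
GRID_KGCO2_PER_KWH = 0.06  # assumed carbon intensity of a low-carbon grid

def estimate_footprint(gpu_count: int, hours: float) -> tuple[float, float]:
    """Return (energy in kWh, emissions in kg CO2e) for a GPU workload."""
    energy_kwh = gpu_count * GPU_POWER_KW * hours * PUE
    return energy_kwh, energy_kwh * GRID_KGCO2_PER_KWH

energy, co2 = estimate_footprint(gpu_count=64, hours=24)
print(f"{energy:,.0f} kWh, ~{co2:,.1f} kg CO2e")  # 1,236 kWh, ~74.2 kg CO2e
```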
Scaleway also announced new generative APIs that let developers switch between OpenAI's GPT models and open-source models running on its sovereign cloud with minimal code changes, helping avoid vendor lock-in while maintaining data control.
"We are working on openness both on the hardware level and the API level," Lucas said. "We're committed to giving AI builders more hardware options than ever thanks to our partners."
AI + Cloud Infrastructure
Ampere Computing CEO Renée James, a semiconductor industry veteran with nearly three decades at Intel, discussed why power efficiency, scalability, and innovation are central to the future of AI and cloud infrastructure.
James made a bold decision to leave Intel and establish Ampere Computing in 2016. The move stemmed from a desire to address the "power wall" problem in semiconductors. Traditional high-performance computing relied on ever-increasing power consumption, a trend unsustainable for the environment and future scalability.
Ampere's mission was to create CPUs that deliver high performance at reduced power consumption—critical for AI workloads that demand both speed and efficiency. The result was a product line capable of doubling performance while halving power usage compared to traditional chips, she said.
This innovation is crucial in the context of AI: the energy demands of generative AI models like ChatGPT far exceed those of conventional applications, making efficiency a decisive factor. James explained that efficient CPUs can help optimize AI inference, which is increasingly deployed at scale across a wide range of devices.
This shift is essential as AI becomes ubiquitous, extending beyond massive cloud clusters to mobile devices, laptops, and the edge.
To power the AI revolution, James said the industry must think in terms of holistic efficiency, extending from CPU architecture to the data center level, to reduce energy consumption across the entire compute infrastructure. The transition to efficient, low-power computing mirrors past shifts in technology, such as the evolution of mobile phones, where developers adapted their software to more constrained power environments.
She also acknowledged the challenges facing the semiconductor industry, particularly in Europe. She noted that while Europe has ample intellectual talent, greater risk-taking and access to capital are essential to drive hardware innovation. Breaking down barriers for European tech companies is necessary to foster a thriving ecosystem capable of competing globally.
James encouraged AI developers to focus on efficient solutions that take advantage of scalable cloud and hardware offerings. With AI models rapidly stabilizing and APIs and platforms becoming broadly accessible, she sees a unique opportunity for innovation at a lower cost of entry. According to James, the AI industry resembles past technological revolutions, such as the shift from command-line interfaces to graphical user interfaces; AI, she argued, will soon become as integral to infrastructure software as those earlier advances.
AI Infrastructure: Physical and Economic Limits
Bryan Catanzaro, NVIDIA's VP of Applied Deep Learning Research, spoke on stage about the current limitations and future directions of AI technology in a conversation with Scaleway CTO Jean-Baptiste Kempf.
Addressing the scaling of AI systems, Catanzaro said economic factors and power consumption ultimately constrain cluster sizes. While clusters of tens of thousands of GPUs are common today, with some pushing toward 100,000 GPUs, larger configurations face significant challenges. The power requirements for clusters approaching one million GPUs would demand multiple gigawatts, making co-location with power sources, particularly nuclear plants, an increasingly attractive option for future AI infrastructure.
"I think that co-locating compute with power is a great idea in the age of artificial intelligence," he said. "And I also think that using low carbon power to train AI is important. So it makes sense to me to put a whole bunch of GPUs next to nuclear power plants here...A million GPUs in a cluster, that's going to be an extraordinary amount of power. And I think we're going to find that rather than put a million GPUs in a single data center, we're going to run training and inference in a more distributed way so that we can have access to electricity better and scale in a more efficient way."
Catanzaro said that NVIDIA's goal is to make AI training more efficient given these economic and electrical constraints. The company approaches this through "accelerated computing," which involves optimizing the entire technology stack, from algorithms and applications to system design and chip architecture.
"Along with the economic constraints, we're also constrained by efficiency," he said. "So when we're talking about the limits of AI, the limits of economics, the limits of electricity, what that implies is that if we can make training more efficient, we're going to make AI better, and that's our primary mission at Nvidia."
Data Architecture + AI
VAST Data CEO Renen Hallak outlined how data management is becoming a crucial frontier in AI development, arguing for the need to make fundamental changes in how information for AI systems is handled and processed. As organizations build increasingly sophisticated AI systems, the ability to efficiently analyze massive amounts of diverse data will be crucial for success, he said.
Hallak said that AI's evolution from analyzing structured data to processing images, video, natural language, and sound has created unprecedented challenges in data management. Traditional systems, designed either for fast access to limited information or slow access to large datasets, are no longer sufficient for modern AI workloads that require both massive scale and high performance.
The shift to GPU-intensive computing, with clusters of tens or hundreds of thousands of GPUs, demands a complete rethinking of data architecture. VAST Data's response has been developing what it calls a "disaggregated, shared-everything" architecture, which is currently being deployed in some of the world's largest AI installations, including xAI's 100,000-GPU cluster.
As models evolve beyond language to incorporate various data types, Hallak stressed that storage systems must become more intelligent, actively analyzing metadata and understanding content rather than simply storing raw information.
Hallak described how VAST Data approaches efficiency through algorithmic innovations rather than hardware alone. Their software-driven approach reduces data volume and provides resilience with less overhead, addressing both environmental and cost concerns as data volumes continue to double every six to eight months.
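That doubling rate compounds quickly, which is why software-level data reduction matters. A quick illustration (the starting volume is arbitrary; only the six-month doubling time comes from Hallak's figure):

```python
# How a six-month doubling time compounds over three years.
# Starting volume is arbitrary; the doubling rate is the only input from the talk.
DOUBLING_MONTHS = 6
start_pb = 1.0  # petabytes

for months in range(0, 37, 6):
    print(f"month {months:>2}: {start_pb * 2 ** (months / DOUBLING_MONTHS):>5.1f} PB")
# month  0:  1.0 PB ... month 36: 64.0 PB (a 64x increase in three years)
```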
Looking toward the future, Hallak predicted that the next major breakthrough in AI will come from enabling interactions between AI agents. He suggested that just as human intelligence benefits from brainstorming and discussion, AI systems will need to communicate with each other to generate new ideas and solve complex problems beyond human capability.
"If we are trying to build a human brain, and we are trying to have a thinking machine, giving it access to the natural world is step number one," Hallack said. "The other key ingredient for thought is interaction between intelligences and the way we come up with new ideas is by talking about them, by brainstorming. I think AI agents are the way to achieve that, and we're going to start having AIs speak to each other and generate new ideas and hopefully solve a lot of the big problems that we're not smart enough to solve. That's the real game changer of this technology."
Regarding Europe's position in the AI landscape, Hallak acknowledged the region's strong research foundation but urged faster adoption and more ambitious development. He characterized AI as potentially the biggest technological revolution in history, surpassing even the internet, and warned that those who don't take it seriously risk being left behind.
"There are very, very talented teams that have done much of the research that has brought us to where we are," he said. "I think now is the time to realize that we're shifting gears and moving from low gear to high gear, and that means that we need to move a lot faster in the pace that we build, in the ambition of what we have. As long as we understand that, then Europe will stay in a leading position. The US is pretty good at shifting to high gear and hitting the gas, but not necessarily always knowing which direction we're traveling in. So I think Europe balances us being a little bit more conservative there."